Comments (5)
We have likely corrected many of the major issues with the original alpaca dataset. What remains are mostly minor issues (though a lot of math issues still remain). So I tend to agree with the sentiment that taking the dataset and feeding it into GPT-4 will not result in a huge performance boost; however, what might be interesting is having ChatGPT provide a response for all the instructions we have not yet curated.
This would likely result in a more verbose alpaca model that is closer to the output of ChatGPT.
Here's an example:
"instruction": "Give three tips for staying healthy."
original alpaca output:
1. Eat a balanced diet and make sure to include plenty of fruits and vegetables.
2. Exercise regularly to keep your body active and strong.
3. Get enough sleep and maintain a consistent sleep schedule.
ChatGPT output:
Sure! Here are three tips for staying healthy:
1. Eat a balanced diet that includes plenty of fruits, vegetables, whole grains, and lean proteins. Avoid processed foods and limit your intake of sugar, salt, and saturated fats.
2. Exercise regularly, aiming for at least 30 minutes of moderate activity most days of the week. This can include activities like walking, jogging, cycling, swimming, or strength training.
3. Get enough sleep each night, aiming for 7-8 hours of uninterrupted sleep.
This will help you feel more rested and energized throughout the day, and can also help boost your immune system and reduce your risk of chronic health problems.
Not sure if it would really do anything in terms of performance though. Ideally, we would diversify the dataset.
from alpacadatacleaned.
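The regeneration idea above could be sketched roughly as follows. This is only an illustration: the "curated" flag, the prompt construction, and the use of the openai client are assumptions, not the repo's actual schema or tooling.

```python
# Hypothetical sketch: collect entries that have not been human-curated and
# regenerate their outputs with gpt-3.5-turbo. The "curated" flag is an
# assumed marker, not a field the real dataset necessarily has.

def build_prompt(entry):
    """Combine instruction and optional input into a single user prompt."""
    prompt = entry["instruction"]
    if entry.get("input"):
        prompt += "\n\n" + entry["input"]
    return prompt

def uncurated(dataset):
    """Entries that still carry the original self-instruct output."""
    return [e for e in dataset if not e.get("curated")]

def regenerate(dataset):
    """Replace outputs of uncurated entries via the chat API (network call)."""
    from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY set
    client = OpenAI()
    for entry in uncurated(dataset):
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": build_prompt(entry)}],
        )
        entry["output"] = reply.choices[0].message.content
    return dataset
```

Running `regenerate` over the full dataset would produce the more verbose, ChatGPT-style outputs described above, while leaving already-curated entries untouched.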
Not as good an idea as it might seem. The instructions themselves are very repetitive. When it comes to writing code, there are like 10 different questions reworded 700 times.
Most instructions are also very simple: add a and b, write a function to check prime numbers, etc. GPT-3.5-turbo answers them very well and is incredibly cheap: ~1000 instructions per dollar.
The best thing that could be done to improve the dataset is to generate better and more diverse instructions.
Otherwise, it's not necessary to collect money at all. Just create an interface, and then each user can run it locally with their own API key on a specific section of the dataset (e.g. instructions 3650 to 10500 for user @viraniaman94).
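The "everyone cleans their own slice" scheme could be coordinated with a small deterministic helper like the one below. The function name and the idea of contiguous near-equal ranges are my own sketch, not anything the project has agreed on.

```python
# Hypothetical sketch: deterministically split n_items dataset entries into
# contiguous, near-equal index ranges, one per contributor, so every user
# knows which slice to process locally with their own API key.

def assign_slices(n_items, contributors):
    """Return {contributor: (start, end)} half-open ranges covering [0, n_items)."""
    base, extra = divmod(n_items, len(contributors))
    ranges, start = {}, 0
    for i, name in enumerate(contributors):
        # The first `extra` contributors take one additional item each.
        end = start + base + (1 if i < extra else 0)
        ranges[name] = (start, end)
        start = end
    return ranges
```

Because the assignment is a pure function of the item count and the contributor list, no central server is needed: everyone runs the same code and gets the same answer.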
It might be possible to use dialogues from the Open Assistant project (https://github.com/LAION-AI/Open-Assistant) as a seed to create more questions. But paying $500 to clickworkers to create more questions for Open Assistant might be a better investment.
Are there any plans on how to fix the dataset for the math issues @gururise? Is anyone actively working on that? I'd rather try my hand at something that isn't going to be useless by the time I'm done because someone already finished it.
@HideLord fixed the first few batches of the math issues. I know there are quite a few math issues remaining. As far as I know, there is nobody working on that right now.
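For anyone wanting to pick up the remaining math issues, one way to surface candidates automatically is to detect instructions that are a single binary arithmetic expression, compute the true result, and flag outputs that disagree. This is only a sketch under that narrow assumption; real entries use many phrasings the regex below will not catch.

```python
# Hypothetical sketch: flag dataset entries whose instruction contains a
# simple "X op Y" arithmetic expression but whose output does not contain
# the correct result. Pattern and heuristics are illustrative only.
import re

ARITH = re.compile(r"(-?\d+)\s*([-+*/])\s*(-?\d+)")

def check_arithmetic(entry):
    """Return the expected value if the output looks wrong, else None."""
    m = ARITH.search(entry["instruction"])
    if not m:
        return None  # not a simple arithmetic instruction
    a, op, b = int(m.group(1)), m.group(2), int(m.group(3))
    expected = {"+": a + b, "-": a - b, "*": a * b,
                "/": a / b if b else None}[op]
    if expected is None:
        return None  # division by zero; skip
    shown = str(int(expected)) if float(expected).is_integer() else str(expected)
    # Flag the entry only when the correct number is absent from the output.
    return expected if shown not in entry["output"] else None
```

Anything this flags would still need a human look (the number might appear in a different form, e.g. spelled out), but it narrows thousands of entries down to a reviewable list.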