meta-math / MetaMath
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Home Page: https://meta-math.github.io
License: Apache License 2.0
Hi!
The GitHub README says the data license is CC BY-NC, but the Hugging Face dataset says it's Apache-2.0.
https://github.com/meta-math/MetaMath
https://huggingface.co/datasets/meta-math/MetaMathQA
Thanks a lot!
I'm wondering why running the training script constantly gives me an OOM error. I'm following the exact .sh file format, and I'm using 4 x A100 80GB GPUs, so I believe there should be no problem. Do you have any idea why?
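For reference, a minimal sketch of the standard memory-saving knobs in an HF Trainer setup like train_math.py; the flag names are regular transformers TrainingArguments, but the values are placeholders rather than the repo's defaults:

from transformers import TrainingArguments

# Hedged sketch, not the repo's actual configuration: shrink the per-device
# batch, raise accumulation to keep the same effective batch size, and enable
# activation checkpointing to trade compute for memory.
args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,   # placeholder: smaller than the script's value
    gradient_accumulation_steps=16,  # placeholder: raised to preserve effective batch
    gradient_checkpointing=True,     # recompute activations in the backward pass
    bf16=True,                       # A100s support bfloat16
)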
I found this bug with the following reproduction:
import random

import numpy as np
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM, BitsAndBytesConfig, GenerationConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    # bnb_4bit_quant_type="fp4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

random.seed(0)
np.random.seed(0)
torch.manual_seed(0)
torch.cuda.manual_seed(0)
torch.backends.cudnn.deterministic = True

device = "cuda:0"
tokenizer = LlamaTokenizer.from_pretrained("MetaMath-7B-V1.0", legacy=False)
model = LlamaForCausalLM.from_pretrained(
    "MetaMath-7B-V1.0",
    quantization_config=bnb_config,
    device_map="auto",
)
model.config.pad_token_id = tokenizer.pad_token_id = 0  # unk
model.config.bos_token_id = 1
model.config.eos_token_id = 2

generation_config = GenerationConfig(
    temperature=0.8,
    max_new_tokens=512,  # here is the problem
    do_sample=True,
    top_p=0.95,
    early_stopping=True,
)

prompt = "Her eyes are beautiful."
tokens = tokenizer([prompt] * 10, return_tensors="pt", padding=True).to(device)
with torch.inference_mode():
    output = model.generate(**tokens, generation_config=generation_config, return_dict_in_generate=True)
decoded = tokenizer.batch_decode(output.sequences, skip_special_tokens=True)
print(decoded)
When I set max_new_tokens I get the tensor error; commenting it out works fine. Could you please check that? My transformers version is 4.33.3.
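For what it's worth, a hedged workaround sketch that reuses model and tokens from the script above: pass the same settings directly to generate() as plain kwargs instead of through GenerationConfig. All of these are standard generate() arguments; early_stopping is dropped because it only affects beam search, so it should be a no-op here anyway.

# Workaround sketch (assumption: the problem sits in GenerationConfig handling,
# so plain kwargs sidestep it; untested on transformers 4.33.3).
with torch.inference_mode():
    output = model.generate(
        **tokens,
        max_new_tokens=512,
        do_sample=True,
        temperature=0.8,
        top_p=0.95,
        return_dict_in_generate=True,
    )
print(tokenizer.batch_decode(output.sequences, skip_special_tokens=True))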
Hi, thank you for your excellent work!
I would like to know whether augmented datasets like MetaMathQA are suitable for pre-training.
Hi,
Could you please add a new baseline, MuggleMATH, to the "Comparing MetaMath with the LLM models" table on the GitHub page?
MuggleMATH mainly investigates the scaling law and generalization of data augmentation for LLM math reasoning, and gets comparable results on GSM8K. The paper is at https://arxiv.org/abs/2310.05506.
Thanks a lot!
When I run
python eval_math.py --model meta-math/MetaMath-7B-V1.0 --data_file data/test/MATH_test.jsonl --tensor_parallel_size 1
from the base directory of this repository, the final output is
start=== 0 , end==== 9223372036854775807
length==== 5000 , acc==== 0.0
I ran inference on a single A100 40GB, using vllm v0.1.y, transformers 4.33.2, and torch 2.0.1.
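To rule out a prompting problem before blaming the grader, here is a small sanity check: generate one completion with vLLM and eyeball whether it ends with the "The answer is: ..." marker the scorer extracts. The alpaca-style template below is what the model card suggests as far as I can tell, so treat it as an assumption; if the template is wrong for the checkpoint, accuracy can collapse to 0.

from vllm import LLM, SamplingParams

# Assumed prompt template (double-check against the model card);
# {instruction} is the problem text.
PROMPT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response: Let's think step by step."
)

llm = LLM(model="meta-math/MetaMath-7B-V1.0")
params = SamplingParams(temperature=0, max_tokens=512)  # greedy decoding
out = llm.generate([PROMPT.format(instruction="What is 2 + 2?")], params)
print(out[0].outputs[0].text)  # should end with something like "The answer is: 4"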
Hello, using the MetaMath dataset and code, I reproduced the experiments on the 7B base model. However, the accuracy I get is 17.14%, which is not consistent with your reported 19.8%. Is there something wrong with the results in the paper, or with my experiments? Can you help me? Thank you.
start=== 0 , end==== 9223372036854775807
length==== 5000 , acc==== 0.1714
The preprint states that you "release the MetaMathQA dataset". However, the Hugging Face dataset is empty, and the data is not in this repository either.
I see there is code here for that, but is that really the standard way to evaluate on MATH? How did you make sure your results were valid if you didn't use a standard, tested eval for MATH?
Hi! I am wondering how you control the LLM's output if you don't explicitly tell it to output the answer in the format "The answer is: " (as expected by the function process_results()). I didn't see such a prompt in the provided code. I ran your code without any modifications, and the LLM does not output the answer with "The answer is", making the result unjudgable. Thank you!
BTW, could you please also provide few-shot examples for eval_math and eval_gsm8k, if they exist? Thanks!
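For concreteness, here is a minimal sketch of the kind of extraction I mean; extract_answer is a hypothetical helper in the spirit of process_results(), not the repo's actual function:

import re

def extract_answer(completion: str):
    # Take whatever follows the last "The answer is" marker; None means the
    # generation had no marker at all (the unjudgable case described above).
    matches = re.findall(r"The answer is:?\s*(.+)", completion)
    if not matches:
        return None
    return matches[-1].strip().rstrip(".")

print(extract_answer("... so 2 + 2 = 4. The answer is: 4"))  # -> "4"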
I tried:
pip install -r requirements.txt
and got the following error:
Cannot install -r requirements.txt (line 1) and tokenizers==0.13.3 because these package versions have conflicting dependencies.
How do I fix this error?
Hi!
Thanks a lot for releasing data and code!
Could you add a license for both so that they can be used by industry labs?
In line 87, the prompts are batched. However, in line 95, the answer labels are not batched accordingly. If you print:
for idx, (prompt, prompt_answer) in enumerate(zip(batch_gsm8k_ins, gsm8k_answers)):
    print(prompt, prompt_answer)
    if isinstance(prompt, list):
        pass
    else:
        prompt = [prompt]
    # XXXXXXX (rest of the loop unchanged)
you will find that the prompts do not correspond to the answers.
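A hedged fix sketch, assuming the answers can be batched with the same batch_data helper the script already uses for the prompts (helper and batch_size names taken from the script; exact signature assumed):

# Batch the answers the same way as the prompts, then iterate pairwise so each
# prompt batch lines up with its answer batch.
batch_gsm8k_answers = batch_data(gsm8k_answers, batch_size=batch_size)
for prompt_batch, answer_batch in zip(batch_gsm8k_ins, batch_gsm8k_answers):
    for prompt, prompt_answer in zip(prompt_batch, answer_batch):
        ...  # prompt and prompt_answer now correspond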
Thanks for providing the prompts and datasets so anyone can reproduce your experiments. However, have you considered doing an ablation study and analysis of the effects of the different augmentation tasks, to find out why they improve performance?
SkyMath mentioned your work but didn't provide any details (https://github.com/SkyworkAI/Skywork). Their scores suggest there may be more efficient methods for producing higher-quality datasets.
I tried run_mistral.sh and got:
gsm8k acc==== 0.7376800606520091
MATH acc==== 0.2726
I also tried
export HF_SAVE_PATH="meta-math/MetaMath-Mistral-7B" && \
python eval_gsm8k.py --model $HF_SAVE_PATH --data_file ./data/test/GSM8K_test.jsonl && \
python eval_math.py --model $HF_SAVE_PATH --data_file ./data/test/MATH_test.jsonl
and got:
gsm8k acc==== 0.7710386656557998
MATH acc==== 0.278
which is also a bit different from the reported 77.7 and 28.2.
I would like to know whether you think this is normal, and what might be the cause. Thanks!
Hello. May I know the script you used for SFT?
Did you use the full 4K context length of LLaMA for training on each sample?
I see there are 395K examples and you used Llama-2 with a 4K context, so an upper bound is 4K * 395K tokens. Is it possible to get a more precise number for the tokens trained on?
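In case it helps, a rough counting sketch of what I mean; the "query"/"response" field names are assumed from the MetaMathQA release, and prompt-template tokens are not counted here:

import json
from transformers import AutoTokenizer

# Token count over raw question + answer text only (no prompt template).
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
with open("MetaMathQA-395K.json") as f:
    data = json.load(f)

total = sum(len(tokenizer(ex["query"] + ex["response"]).input_ids) for ex in data)
print(f"{total:,} tokens across {len(data):,} examples")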
What does --model_name_or_path "path/to/llama-2" mean?
To run train_math.py, which model should I download to that path?
Hello,
I attempted to replicate the experiment by using the MetaMathQA dataset to finetune Mistral-7B, but the results I obtained do not match the ones shared in the repository.
I used the following parameters in run_mistral.sh:
export MODEL_PATH='mistralai/Mistral-7B-v0.1'
export SAVE_PATH='0224_mistral-7b-metamath395'
export MASTER_ADDR="localhost"
export MASTER_PORT="1231"
export GLOO_SOCKET_IFNAME="lo"
export NCCL_SOCKET_IFNAME="lo"
export WANDB_DISABLED=true
export HF_TOKEN="token of your huggingface"
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 -m torch.distributed.launch --master_addr ${MASTER_ADDR} --master_port ${MASTER_PORT} --nproc_per_node=8 --use_env train_math.py \
--model_name_or_path $MODEL_PATH \
--data_path MetaMathQA-395K.json \
--data_length 10000000 \
--bf16 True \
--output_dir $SAVE_PATH \
--num_train_epochs 3 \
--per_device_train_batch_size 2 \
--per_device_eval_batch_size 2 \
--gradient_accumulation_steps 8 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 100000 \
--save_total_limit 0 \
--learning_rate 5e-6 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--fsdp "full_shard auto_wrap" \
--fsdp_transformer_layer_cls_to_wrap 'MistralDecoderLayer' \
--tf32 True
python eval_gsm8k.py --model $SAVE_PATH --data_file ./data/test/GSM8K_test.jsonl
python eval_math.py --model $SAVE_PATH --data_file ./data/test/MATH_test.jsonl
and I get
gsm8k acc==== 0.6618650492797574
math acc==== 0.2274
which is different from the reported 77.7 and 28.2.
Here are the details of my Python environment:
transformers==4.34.0
wandb==0.15.3
torch==2.0.1
sentencepiece==0.1.99
tokenizers==0.14
accelerate==0.21.0
bitsandbytes==0.40.0
I would appreciate any guidance or suggestions you could provide to help resolve this discrepancy. Thank you in advance for your time and assistance.
Best regards,
lyf-00
Thank you for your excellent work.
ModuleNotFoundError: No module named 'utils.math_utils'
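A hypothetical workaround, assuming the failure comes from running the script outside the repository root so the relative utils package cannot be found:

import os
import sys

# Put the repository root on sys.path so "utils.math_utils" resolves
# regardless of the working directory the script is launched from.
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

from utils import math_utils  # noqa: E402  (import after the path fix)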
I just want to test the few-shot in-context learning capability, but I found an issue. I added few-shot Instruction/Response examples before the question, yet the generated result after the llm.generate call remains the same. No matter how many examples I add, the inference results stay identical to the zero-shot result. Could you help me with this issue?
It seems that the "data_file" parameter used in the command provided in the README does not match eval_math.py, which refers to the parameter as "data_path" in this line.
Hi,
For the MetaMath dataset, would it be possible to provide additional information about the subject of each data point, for example, whether it is number theory or algebra?
Could you please publish the dataset generation script? This will ensure reproducibility and make a good contribution to the open-source LLM community.
Dear authors, thank you for the amazing work and for sharing your code and data!
I wanted to ask about your evaluation code: currently, if the model outputs an answer with a decimal point, the code automatically rounds it to the nearest integer.
This way, a wrong answer (e.g. 8.5) can be counted as correct (i.e. as 9) in spite of a calculation error, which indeed occurs often in some model generations.
In this light, I believe a stricter evaluation may be needed.
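To make the point concrete, a hypothetical stricter check (is_equiv_strict is my name, not the repo's): exact numeric equality instead of rounding, so 8.5 no longer matches 9.

def is_equiv_strict(pred: str, gold: str) -> bool:
    # Compare as numbers when both parse; otherwise fall back to exact strings.
    try:
        return float(pred) == float(gold)
    except (TypeError, ValueError):
        return str(pred).strip() == str(gold).strip()

assert not is_equiv_strict("8.5", "9")  # rounding no longer rescues 8.5
assert is_equiv_strict("9.0", "9")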