![](https://private-user-images.githubusercontent.com/786476/241106339-2d967cb0-2a18-429b-8303-1257afe15ffc.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MTg1MDE4MTQsIm5iZiI6MTcxODUwMTUxNCwicGF0aCI6Ii83ODY0NzYvMjQxMTA2MzM5LTJkOTY3Y2IwLTJhMTgtNDI5Yi04MzAzLTEyNTdhZmUxNWZmYy5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjQwNjE2JTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI0MDYxNlQwMTMxNTRaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT0xNWFhNmY2M2QwNTQ0OTVkZTUwODE5MGU1ZTZlNmY0ZWVlN2JmNWNhODgzMzM3ZTdhYjVlMDk0ZjJhOTk2ZTYwJlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCZhY3Rvcl9pZD0wJmtleV9pZD0wJnJlcG9faWQ9MCJ9.oZEPJimm_BHMKWcXR4VN4K_hSferCBznfeK8zcVBOSg)
A finetuner for LLMs on Intel XPU devices, with which you can finetune the openLLaMA-3b model to sound like your favorite book.
```shell
conda env create -f env.yml
conda activate pyt_llm_xpu
```
Warning: Once PyTorch and Intel Extension for PyTorch are already set up, install peft without dependencies, since peft requires PyTorch 2.0 (not yet supported on Intel XPU devices).
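A minimal way to do that with pip, assuming PyTorch and Intel Extension for PyTorch are already installed in the active environment:

```shell
# Install peft without pulling in its PyTorch 2.0 dependency
pip install peft --no-deps
```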
Fetch a book from Project Gutenberg (default: Pride and Prejudice) and generate the dataset.
```shell
python fetch_data.py
```
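The dataset step can be sketched roughly as follows. This is not the actual logic of `fetch_data.py`; the chunking parameters (`chunk_size`, `overlap`) and the `{"text": ...}` record shape are assumptions for illustration:

```python
import json

def build_dataset(text, chunk_size=512, overlap=64):
    """Split raw book text into overlapping chunks for causal-LM finetuning.

    chunk_size, overlap, and the {"text": ...} record shape are illustrative
    assumptions, not the exact format produced by fetch_data.py.
    """
    step = chunk_size - overlap
    records = []
    for start in range(0, len(text), step):
        piece = text[start:start + chunk_size].strip()
        if piece:
            records.append({"text": piece})
    return records

if __name__ == "__main__":
    sample = "It is a truth universally acknowledged... " * 200
    with open("book_data.json", "w") as f:
        json.dump(build_dataset(sample), f, indent=2)
```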
```shell
python finetune.py --input_data ./book_data.json --batch_size=64 --micro_batch_size=16 --num_steps=300
```
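With `--batch_size=64` and `--micro_batch_size=16`, the effective batch size is reached through gradient accumulation: the optimizer steps once every `64 / 16 = 4` micro-batches. A framework-agnostic sketch of that bookkeeping (the variable names are illustrative, not taken from `finetune.py`):

```python
def accumulation_steps(batch_size, micro_batch_size):
    """Number of micro-batches whose gradients are summed before one optimizer step."""
    if batch_size % micro_batch_size != 0:
        raise ValueError("batch_size must be a multiple of micro_batch_size")
    return batch_size // micro_batch_size

def train_loop(num_micro_batches, batch_size=64, micro_batch_size=16):
    """Count optimizer steps taken over a stream of micro-batches."""
    accum = accumulation_steps(batch_size, micro_batch_size)
    optimizer_steps = 0
    for i in range(1, num_micro_batches + 1):
        # loss.backward() would accumulate gradients here
        if i % accum == 0:
            optimizer_steps += 1  # optimizer.step(); optimizer.zero_grad()
    return optimizer_steps
```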
For inference, you can either provide an input prompt or let the model fall back to a default prompt.
```shell
python inference.py --infer
python inference.py --infer --prompt "my prompt"
python inference.py --bench
```
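The flags above suggest a CLI along these lines. This is a hedged sketch of how `inference.py` might parse them, not its actual code; the default prompt text is invented:

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="Run inference or a benchmark.")
    parser.add_argument("--infer", action="store_true",
                        help="generate text from a prompt")
    parser.add_argument("--bench", action="store_true",
                        help="run a latency/throughput benchmark")
    parser.add_argument("--prompt", type=str, default=None,
                        help="input prompt; a default is used when omitted")
    return parser

if __name__ == "__main__":
    args = build_parser().parse_args()
    if args.infer:
        # Assumed default prompt, for illustration only
        prompt = args.prompt or "Write a paragraph in the style of the book."
        print(f"Generating from prompt: {prompt!r}")
    elif args.bench:
        print("Running benchmark...")
```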