adrianbzg / llm-distributed-finetune
Efficiently fine-tune any LLM from HuggingFace using distributed training (multiple GPUs) and DeepSpeed. Uses Ray AIR to orchestrate training across multiple AWS GPU instances.
License: MIT License
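Since the repository relies on DeepSpeed for distributed training, a minimal DeepSpeed JSON configuration illustrates the kind of settings involved. This is a generic sketch using standard DeepSpeed config keys (`train_batch_size`, `fp16`, `zero_optimization`); the values shown are illustrative assumptions, not the repository's actual configuration.

```json
{
  "train_batch_size": 16,
  "gradient_accumulation_steps": 1,
  "fp16": {
    "enabled": true
  },
  "zero_optimization": {
    "stage": 2,
    "offload_optimizer": {
      "device": "cpu"
    }
  }
}
```

ZeRO stage 2 partitions optimizer states and gradients across GPUs, and CPU offloading of optimizer state further reduces per-GPU memory pressure, which is typically what makes fine-tuning larger models feasible on commodity AWS GPU instances.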