This repository contains a Mistral-7B model fine-tuned for text generation on the `mlabonne/guanaco-llama2-1k` dataset.
The model was trained for only one epoch, so output quality is limited. Hosted inference endpoints are not available for this model because inference requires at least a T4 GPU. Training time depends on the dataset: one epoch takes roughly 15 minutes to 1 hour depending on dataset size.
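A minimal inference sketch using `transformers` is shown below. The repo id `your-username/mistral-7b-guanaco` is a placeholder (this card does not state the actual model path), and the prompt template follows the Llama-2 instruction format used by the `mlabonne/guanaco-llama2-1k` dataset:

```python
MODEL_ID = "your-username/mistral-7b-guanaco"  # placeholder: replace with this repo's actual model id


def build_prompt(instruction: str) -> str:
    # Llama-2 style instruction format, matching mlabonne/guanaco-llama2-1k
    return f"<s>[INST] {instruction} [/INST]"


def generate(instruction: str, model_id: str = MODEL_ID, max_new_tokens: int = 128) -> str:
    # Imported lazily so the helpers above can be used without a GPU.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # device_map="auto" places the model on the available GPU (a T4 or better is needed)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    inputs = tokenizer(build_prompt(instruction), return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

Example usage: `print(generate("What is a large language model?"))`.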