This repository serves as a backup of my efforts in building pipelines and models for text classification projects.
Note: Due to a non-disclosure agreement, the data cannot be provided in this repository.
Anyway, it's my very first open-source project, so...
If you find this project useful or interesting, please consider giving it a star! It helps me know that I'm on the right track.
Done:
- Data cleaning (Completed: 0321)
- Modeling (Completed: 0521)
- Pipeline development (Completed: 0611)
- Finalize some code (Completed: 0828)
To-do:
- Finalize some inline docstrings (Target: 0928)
The project code can be reused for other text classification tasks and offers the following features:
- Automatic downloading of models and tokenizers from Hugging Face by simply providing the model name.
- Automated training, testing, logging, and checkpointing for any DataFrames with 'text' and 'label' columns.
- Designed pipelines for incremental training using checkpoints.
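As a sketch of the input format the feature list above describes (the toy rows and the split ratios below are illustrative, not part of this repository):

```python
import pandas as pd

# Toy corpus standing in for the real (NDA-protected) data.
rows = [
    {"text": "great product, works as advertised", "label": 1},
    {"text": "arrived broken and support was slow", "label": 0},
    {"text": "does the job, nothing special", "label": 1},
    {"text": "complete waste of money", "label": 0},
]
df = pd.DataFrame(rows)

# The pipelines expect exactly these two columns.
assert {"text", "label"} <= set(df.columns)

# Illustrative train/validation/test split of the frame.
train_df = df.iloc[:2]
val_df = df.iloc[2:3]
test_df = df.iloc[3:]
```

Any DataFrame shaped like this, regardless of where the text comes from, should be usable with the training and testing pipelines.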
To use this project, follow these steps:
- Prepare cleaned training, validation, and test datasets with 'text' and 'label' columns.
- Explore the pre-trained language models available on Hugging Face and pick one that interests you.
- Copy the model name and paste it into the `model_name` variable in `train_config.py` under the `config` folder.
- Adjust relevant parameters, such as learning rate and batch size, in `train_config.py`.
- Try out different techniques and parameters, such as focal loss and class balancing, which are provided in the `tools` folder.
- Once you have obtained the best model, you can also perform incremental training on additional data; the necessary code is already provided for you.
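For reference, the focal-loss idea mentioned in the steps above can be sketched in a few lines. This is the generic formulation; the actual implementation and parameter defaults in the `tools` folder may differ:

```python
import math

def focal_loss(p_true, gamma=2.0, alpha=0.25):
    """Binary focal loss for a single example.

    p_true: probability the model assigns to the true class (0 < p_true <= 1).
    gamma:  focusing parameter; gamma=0 recovers plain cross-entropy.
    alpha:  class-balancing weight for imbalanced datasets.
    """
    return -alpha * (1.0 - p_true) ** gamma * math.log(p_true)

# The (1 - p)^gamma factor down-weights confidently correct predictions,
# so training focuses on the hard, misclassified examples.
easy = focal_loss(0.95)  # well-classified: small loss
hard = focal_loss(0.55)  # uncertain: much larger loss
```

With `gamma=0` and `alpha=1` this reduces to standard cross-entropy, which makes it easy to A/B the technique against the baseline.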
Please refer to the documentation within the code for more detailed usage instructions and examples.
Contributions to this project are welcome! If you have any suggestions, ideas, or improvements, please feel free to open an issue or submit a pull request.
If you encounter any issues or have any questions, feel free to open an issue on the GitHub repository. Your feedback and suggestions are highly appreciated.
This project is licensed under the MIT License.