If you want to run the code on your own device instead of the official Kaggle notebook, we recommend using Docker to set up the Python environment. To install the Kaggle environment, you can refer here.
Once Docker is installed, use the following commands to pull and run the image:
docker pull gcr.io/kaggle-gpu-images/python:latest
# Run the pre-built image from gcr.io
docker run --runtime nvidia --rm -it gcr.io/kaggle-gpu-images/python /bin/bash
Additional packages should be installed with the following commands.
pip install efficientnet_pytorch
pip install prefetch-generator
pip install torchaudio_augmentations
- Download the birdclef-2023 dataset
First, use the following commands to download the dataset:
cd download_dataset
bash download_birdclef23.sh
mkdir /kaggle/input/birdclef-2023
cp birdclef-2023.zip /kaggle/input/birdclef-2023
unzip birdclef-2023.zip
- Download the pretrained model
Then, download the official unilm/beats pretrained model:
cd ./pretrained_models/unilm
bash download.sh
- Optuna Parameter Tuning
CUDA_VISIBLE_DEVICES=3 python birdclef23-optuna.py --experiment_name beats --model_name beats --eval_step 1
CUDA_VISIBLE_DEVICES=2 python birdclef23-optuna.py --experiment_name ast --model_name ast --eval_step 1
CUDA_VISIBLE_DEVICES=1 python birdclef23-optuna.py --experiment_name efficientnet --model_name efficientnet --eval_step 1
CUDA_VISIBLE_DEVICES=0 python birdclef23-optuna.py --experiment_name musicnn --model_name musicnn --eval_step 1
After the Optuna training finishes, refer to the notebook Infer_kaggle.ipynb
to generate the Kaggle submission.
- Reproduce the code (originally from here) in PyTorch
- Use the unilm pretrained model (completed on 7/4/2023)
- Use the cav-mae pretrained model (pending)
- Use Wav2vec, Audio Spectrogram Transformer, and Musicnn for classification (planned for 8/4/2023)
- Add the Audio Spectrogram Transformer module (completed on 8/4/2023)
- Train and validate the Unilm BEATs model (run birdclef23-unilm-finetune.ipynb) (planned for 8/4/2023)
- Train the Unilm BEATs model (running on 8/4/2023)
- Add Optuna parameter tuning to the training process (completed on 8/4/2023)
- Merge the BEATs model and the AST model into the same training pipeline
- Add audio data caching in the Dataset to speed up data loading (completed on 10/4/2023)
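A minimal sketch of that caching idea, assuming the Dataset loads raw audio per file (the class and loader below are illustrative stand-ins, not the repo's actual code; a real loader would use something like torchaudio.load):

```python
import torch
from torch.utils.data import Dataset

class CachedAudioDataset(Dataset):
    """Keeps decoded waveforms in memory so each file is read only once."""

    def __init__(self, paths):
        self.paths = paths
        self._cache = {}

    def _load(self, path):
        # Stand-in loader: returns a silent 1-second clip at 16 kHz.
        return torch.zeros(16000)

    def __getitem__(self, idx):
        path = self.paths[idx]
        if path not in self._cache:
            self._cache[path] = self._load(path)
        return self._cache[path]

    def __len__(self):
        return len(self.paths)
```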
- Add the EfficientNet model to the model hub (completed on 10/4/2023)
- Add the Wav2Vec model to the experiments (completed on 11/4/2023)
- Add BCE loss to all models (with BCE, the model output must pass through nn.Sigmoid()) (completed on 13/4/2023)
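The nn.Sigmoid() requirement can be sketched as follows (the tensors are made-up examples; note that nn.BCEWithLogitsLoss folds the sigmoid in and is the numerically safer equivalent):

```python
import torch
import torch.nn as nn

# Made-up multi-label logits and targets for three classes.
logits = torch.tensor([[1.2, -0.7, 0.3]])
targets = torch.tensor([[1.0, 0.0, 1.0]])

# nn.BCELoss expects probabilities in [0, 1], hence the nn.Sigmoid().
probs = nn.Sigmoid()(logits)
loss_a = nn.BCELoss()(probs, targets)

# Equivalent, but applied directly to raw logits.
loss_b = nn.BCEWithLogitsLoss()(logits, targets)
```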
- Add a hyperparameter `ast_fix_layer` to specify which layers to freeze (completed on 13/4/2023)
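A hedged sketch of what a flag like `ast_fix_layer` could do, assuming it means "freeze the first N transformer blocks" (the helper and the toy blocks below are hypothetical, not the repo's implementation):

```python
import torch.nn as nn

def freeze_layers(blocks, ast_fix_layer):
    """Disable gradients for the first `ast_fix_layer` blocks."""
    for i, block in enumerate(blocks):
        if i < ast_fix_layer:
            for p in block.parameters():
                p.requires_grad = False

# Toy stand-in for a stack of transformer blocks.
blocks = nn.ModuleList([nn.Linear(4, 4) for _ in range(6)])
freeze_layers(blocks, 3)
```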
- Submit on Kaggle (completed on 18/04/2023)
- Add model quantization to reduce inference time (completed on 19/04/2023)
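One common way to do this in PyTorch is dynamic quantization, which converts Linear weights to int8 for faster CPU inference; the model below is a toy stand-in, not the repo's actual network:

```python
import torch
import torch.nn as nn

# Toy classifier standing in for the real model.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Replace nn.Linear modules with int8 dynamically-quantized versions.
qmodel = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

out = qmodel(torch.randn(1, 128))
```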
Some of the code in this repo is taken from there.