Interactive segmentation with deep learning models: the model assists the operator, reducing the amount of manual annotation work.
- build the Docker image

      cd ritm_interactive_segmentation
      docker build --no-cache=false -t {img_name}:{tag} .
- run the container

      docker run -it -d -p {gpu_server_port}:{container_jupyterlab_port} --gpus all --ipc=host --shm-size=8g -v /home:/home {img_name}:{tag}
- set up the dataset directory

      data
      └── {project_name}
          ├── train
          │   ├── images
          │   └── labels
          └── valid
              ├── images
              └── labels
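The layout above can be created (or verified) programmatically. A minimal sketch using only the standard library; the `make_dataset_dirs` helper and the `"data"`/`"HD"` arguments are illustrative (here `"HD"` mirrors the `DATASET_PATH` in the sample config), not part of the pipeline itself:

```python
from pathlib import Path

def make_dataset_dirs(root: str, project: str) -> list[Path]:
    """Create the train/valid x images/labels layout under {root}/{project}."""
    created = []
    for split in ("train", "valid"):
        for sub in ("images", "labels"):
            d = Path(root) / project / split / sub
            d.mkdir(parents=True, exist_ok=True)  # no-op if it already exists
            created.append(d)
    return created

dirs = make_dataset_dirs("data", "HD")
```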
- config.yaml
  - EXPS_PATH: where pipeline outputs are saved (TensorBoard logs, checkpoints, visualizations, ...)
  - DATASET_PATH: dataset root path
  - CLASS_LIST: list of all class names
  - IGNORE_CLASS: classes excluded from training because they degrade training quality; these classes have ambiguous shapes (e.g. a single label covering many objects)
  - IMAGENET_PRETRAINED_MODELS: path to the pre-trained model
      EXPS_PATH: "./experiments"
      DATASET_PATH: "./data/HD"
      CLASS_LIST: ['Background',
                   'Freespace',
                   ...
                   'Pillar']
      IGNORE_CLASS: ['Background', 'Freespace']
      IMAGENET_PRETRAINED_MODELS: "./pretrained_models/hrnet_w18_small_model_v2.pth"
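To illustrate the IGNORE_CLASS semantics: classes listed there are removed from CLASS_LIST before training. A dependency-free sketch of that filtering, using a plain dict with a shortened class list as a stand-in (the actual pipeline reads these keys from config.yaml, typically via a YAML parser such as PyYAML, and its loading code may differ):

```python
# Hypothetical illustration of how IGNORE_CLASS filters CLASS_LIST.
config = {
    "CLASS_LIST": ["Background", "Freespace", "Pillar"],
    "IGNORE_CLASS": ["Background", "Freespace"],
}

ignored = set(config["IGNORE_CLASS"])
# Classes that actually participate in training.
train_classes = [c for c in config["CLASS_LIST"] if c not in ignored]
print(train_classes)  # ['Pillar']
```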
- download the pre-trained model
- run the training script

      python train.py models/hrnet18s_cocolvis_itermask_3p.py
- inference: see inferenceAPI_tutorial.ipynb
- milla (@DataLab)