This repository contains the PyTorch implementation of CPR-CLIP (manuscript submitted to IEEE SPL).
The high cost of the medical skill training paradigm hinders the development of medical education and has attracted rising attention in the intelligent signal processing community. To address composite error action recognition in CPR training, this letter proposes a multimodal pre-training framework named CPR-CLIP based on prompt engineering. Specifically, we design three prompts that fuse multiple errors naturally at the semantic level, then align linguistic and visual features via a contrastive pre-training loss. Extensive experiments verify the effectiveness of CPR-CLIP. Finally, CPR-CLIP is encapsulated into an electronic assistant, and four doctors were recruited for evaluation. A nearly fourfold efficiency improvement was observed in comparative experiments, demonstrating the practicality of the system. We hope this work brings new insights to both intelligent medical training systems and the signal processing community.
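The prompt-based fusion described above can be illustrated with a minimal Python sketch. The three templates below are hypothetical placeholders (the actual prompt wording used by CPR-CLIP may differ); the point is that multiple error labels are merged into natural-language sentences before text encoding.

```python
def build_prompts(error_names):
    """Fuse multiple error labels into natural-language prompts.

    These three templates are illustrative only; the real CPR-CLIP
    prompts are defined in the released code.
    """
    joined = " and ".join(error_names)
    return [
        f"a video of CPR performed with {joined}",
        f"the trainee makes the following errors: {joined}",
        f"an example of incorrect CPR showing {joined}",
    ]

prompts = build_prompts(["wrong hand position", "insufficient depth"])
```

Each prompt is then tokenized and passed through the text encoder, so a composite error is represented as one sentence rather than a bag of independent class labels.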
# 1.Clone this repository
git clone https://github.com/Shunli-Wang/CPR-CLIP ./CPR-CLIP
cd ./CPR-CLIP
# 2.Create conda env (PyTorch 2.0 will be installed.)
conda create -n CPR-CLIP python=3.10
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
pip install -r requirements.txt
# 3.Download video features and unzip them into ./pkl/
tar -xzvf TSN-Feature.tar.gz && rm TSN-Feature.tar.gz
mv TSN_Single_Feat.pkl TSN_Composite_Feat.pkl ./pkl/
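Once moved into `./pkl/`, the feature files can be loaded with the standard `pickle` module. The sketch below creates a tiny dummy file to demonstrate the pattern; the key names, feature dimension (2048 is typical for TSN), and internal layout of `TSN_Single_Feat.pkl` / `TSN_Composite_Feat.pkl` are assumptions, not the confirmed format.

```python
import pickle

# Hypothetical layout: a dict mapping video IDs to per-segment features.
sample = {"video_0001": [[0.1] * 2048]}  # 2048-D is the usual TSN feature size
with open("demo_feat.pkl", "wb") as f:
    pickle.dump(sample, f)

# Loading follows the same pattern for ./pkl/TSN_Single_Feat.pkl
with open("demo_feat.pkl", "rb") as f:
    feats = pickle.load(f)
```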
The CPR-Coach dataset provides 14 single-class actions and 74 composite error actions captured from four different perspectives, containing 4,544 videos in total. Composite error action recognition performance is measured by mAP and mmit mAP.
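For reference, the two metrics differ only in the averaging axis: mAP averages per-class average precision over label columns, while mmit mAP averages per-video average precision over samples. A minimal pure-Python sketch (evaluation toolkits such as MMAction2 implement the same idea):

```python
def average_precision(scores, labels):
    """AP for one ranking: binary labels, higher score = more confident."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    hits, precisions = 0, []
    for rank, i in enumerate(order, start=1):
        if labels[i]:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(hits, 1)

def mAP(score_mat, label_mat):
    """Class-wise mAP: average AP over label columns."""
    n_cls = len(score_mat[0])
    aps = [average_precision([row[c] for row in score_mat],
                             [lab[c] for lab in label_mat])
           for c in range(n_cls)]
    return sum(aps) / n_cls

def mmit_mAP(score_mat, label_mat):
    """Sample-wise (mmit) mAP: average AP over rows (videos)."""
    aps = [average_precision(s, l) for s, l in zip(score_mat, label_mat)]
    return sum(aps) / len(aps)
```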
This repository provides the extracted TSN video features as TSN-Feature.tar.gz
(~33 MB). You can download it from BaiduNetDisk[star] or Google Drive.
- To train CPR-CLIP with TSN as the video backbone:
CUDA_VISIBLE_DEVICES=0 \
python ./CPR-CLIP.py --exp_name 'CPR-CLIP_w_TSN' --enable_CLIP_loss --eval_CLIP_result
- To train CPR-CLIP+ with TSN as the video backbone:
CUDA_VISIBLE_DEVICES=0 \
python ./CPR-CLIP.py --exp_name 'CPR-CLIP+_w_TSN' --enable_CLIP_loss --enable_BCE_loss
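The `--enable_CLIP_loss` flag refers to the contrastive pre-training objective that aligns video and text embeddings. A generic NumPy sketch of a symmetric CLIP-style InfoNCE loss is shown below; the temperature value and exact formulation in `CPR-CLIP.py` may differ.

```python
import numpy as np

def clip_contrastive_loss(video_feat, text_feat, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of matched (video, text) pairs.

    A generic CLIP-style sketch, not the confirmed CPR-CLIP implementation.
    """
    # L2-normalize both modalities so logits are cosine similarities
    v = video_feat / np.linalg.norm(video_feat, axis=1, keepdims=True)
    t = text_feat / np.linalg.norm(text_feat, axis=1, keepdims=True)
    logits = v @ t.T / temperature          # (N, N); matched pairs on diagonal
    targets = np.arange(len(logits))

    def xent(lg):
        # numerically stable cross-entropy with diagonal targets
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[np.arange(len(lg)), targets].mean()

    # average video-to-text and text-to-video directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

For the CPR-CLIP+ variant, this contrastive term is combined with a standard multi-label BCE classification loss (the `--enable_BCE_loss` flag).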
This project builds on CLIP and CPR-Coach.
If this repository is helpful to you, please give it a small STAR!
If you have any questions about our work, feel free to contact [email protected].
Please stay tuned for subsequent code and data releases.