Zhengzhuo Xu, Shuo Yang, Xingjun Wang, Chun Yuan
This repository is the official PyTorch implementation of our ICASSP 2023 paper, "Rethink Long-Tailed Recognition with Vision Transformers".
We propose Prediction Distribution Calibration (PDC) to quantitatively evaluate how well a proposal overcomes the model's head-class preference in long-tailed recognition (LTR).
from eval import evaluate_all_metric
model, dataloader = None, None
# build your own classification model and image dataloader here
device = 'cpu'
cls_num = None  # list: per-class sample counts of the training label distribution
res = evaluate_all_metric(
    dataloader,
    model,
    device,
    cls_num
)
pdc = res['pdc']
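The exact PDC formula is defined in the paper; `evaluate_all_metric` computes it for you. As a rough, hypothetical illustration of the underlying idea (measuring how a model's aggregate prediction distribution relates to the head-heavy training prior), one could compare the mean predicted class distribution against the training label distribution. The function below is an illustrative sketch only, not the repo's implementation:

```python
import numpy as np

def prediction_head_bias(probs, train_counts):
    """Illustrative metric (NOT the paper's PDC): KL divergence between
    the model's mean predicted class distribution and the training label
    distribution. Zero means predictions mirror the (long-tailed) prior;
    larger values mean the prediction mass has shifted away from it."""
    mean_pred = np.asarray(probs, dtype=float).mean(axis=0)  # average softmax over samples
    train_dist = np.asarray(train_counts, dtype=float)
    train_dist = train_dist / train_dist.sum()               # counts -> distribution
    eps = 1e-12                                              # numerical guard for log
    return float(np.sum(mean_pred * np.log((mean_pred + eps) / (train_dist + eps))))

# toy example: 3 classes with long-tailed train counts [100, 10, 1]
probs = np.array([[0.7, 0.2, 0.1],
                  [0.8, 0.1, 0.1]])
bias = prediction_head_bias(probs, [100, 10, 1])
```

Here the model puts noticeably more mass on tail classes than the training prior does, so `bias` comes out positive; a model whose average prediction exactly matches the prior would score zero.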
We will release the training code with ViTs soon.
If you find our idea or code inspiring, please cite our paper:
@inproceedings{PDC,
  author    = {Xu, Zhengzhuo and Yang, Shuo and Wang, Xingjun and Yuan, Chun},
  title     = {Rethink Long-Tailed Recognition with Vision Transformers},
  booktitle = {{IEEE} International Conference on Acoustics, Speech and Signal Processing, {ICASSP} 2023},
  year      = {2023}
}