
Awesome-Mixup


Introduction

We summarize awesome mixup data augmentation methods for visual representation learning in various scenarios.

The list of mixup augmentation methods is organized in chronological order and is continuously updated. The main branch is kept consistent with Awesome-Mixup in OpenMixup. We first summarize fundamental mixup methods from two aspects, the sample mixup policy and the label mixup policy (a minimal sketch of both follows below), and then summarize mixup techniques used in downstream tasks. Currently, we are working on a survey of mixup methods.
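
For readers new to the area, here is a minimal, hedged sketch of the two aspects in PyTorch, using vanilla MixUp as the simplest instance. The tensor names `x`, `y` and the Beta parameter `alpha` are illustrative, not taken from any particular codebase.

```python
import torch
import torch.nn.functional as F

def mixup(x, y, num_classes, alpha=1.0):
    """Vanilla MixUp: the sample policy interpolates inputs, and the
    label policy interpolates one-hot targets with the same weight.
    x: (B, C, H, W) images; y: (B,) integer class labels."""
    lam = float(torch.distributions.Beta(alpha, alpha).sample())
    perm = torch.randperm(x.size(0))               # random pairing in the batch
    x_mix = lam * x + (1.0 - lam) * x[perm]        # sample mixup policy
    y_onehot = F.one_hot(y, num_classes).float()
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]  # label mixup policy
    return x_mix, y_mix
```

Training then minimizes the cross-entropy between the model's prediction on `x_mix` and the soft target `y_mix`.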

Table of Contents

Fundamental Methods

Sample Mixup Methods

Pre-defined Policies

  • MixUp: Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, David Lopez-Paz.
    • mixup: Beyond Empirical Risk Minimization. [ICLR'2018] [code]
  • BC: Yuji Tokozume, Yoshitaka Ushiku, Tatsuya Harada.
    • Between-class Learning for Image Classification. [CVPR'2018] [code]
  • AdaMixup: Hongyu Guo, Yongyi Mao, Richong Zhang.
    • MixUp as Locally Linear Out-Of-Manifold Regularization. [AAAI'2019]
  • CutMix: Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, Youngjoon Yoo.
    • CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features. [ICCV'2019] [code]
  • ManifoldMix: Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Ioannis Mitliagkas, David Lopez-Paz, Yoshua Bengio.
    • Manifold Mixup: Better Representations by Interpolating Hidden States. [ICML'2019] [code]
  • FMix: Ethan Harris, Antonia Marcu, Matthew Painter, Mahesan Niranjan, Adam Prügel-Bennett, Jonathon Hare.
    • FMix: Enhancing Mixed Sample Data Augmentation. [ArXiv'2020] [code]
  • SmoothMix: Jin-Ha Lee, Muhammad Zaigham Zaheer, Marcella Astrid, Seung-Ik Lee.
    • SmoothMix: a Simple Yet Effective Data Augmentation to Train Robust Classifiers. [CVPRW'2020] [code]
  • PatchUp: Mojtaba Faramarzi, Mohammad Amini, Akilesh Badrinaaraayanan, Vikas Verma, Sarath Chandar.
    • PatchUp: A Regularization Technique for Convolutional Neural Networks. [Arxiv'2020] [code]
  • GridMixup: Kyungjune Baek, Duhyeon Bang, Hyunjung Shim.
    • GridMix: Strong regularization through local context mapping. [Pattern Recognition'2021]
  • ResizeMix: Jie Qin, Jiemin Fang, Qian Zhang, Wenyu Liu, Xingang Wang, Xinggang Wang.
    • ResizeMix: Mixing Data with Preserved Object Information and True Labels. [ArXiv'2020] [code]
  • FocusMix: Jiyeon Kim, Ik-Hee Shin, Jong-Ryul Lee, Yong-Ju Lee.
    • Where to Cut and Paste: Data Regularization with Selective Features. [ICTC'2020]
  • AugMix: Dan Hendrycks, Norman Mu, Ekin D. Cubuk, Barret Zoph, Justin Gilmer, Balaji Lakshminarayanan.
    • AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty. [ICLR'2020] [code]
  • DJMix: Ryuichiro Hataya, Hideki Nakayama.
    • DJMix: Unsupervised Task-agnostic Augmentation for Improving Robustness. [Arxiv'2021]
  • PixMix: Dan Hendrycks, Andy Zou, Mantas Mazeika, Leonard Tang, Bo Li, Dawn Song, Jacob Steinhardt.
    • PixMix: Dreamlike Pictures Comprehensively Improve Safety Measures. [Arxiv'2021] [code]
  • StyleMix: Minui Hong, Jinwoo Choi, Gunhee Kim.
    • StyleMix: Separating Content and Style for Enhanced Data Augmentation. [CVPR'2021] [code]
  • MixStyle: Kaiyang Zhou, Yongxin Yang, Yu Qiao, Tao Xiang.
    • Domain Generalization with MixStyle. [ICLR'2021] [code]
  • MoEx: Boyi Li, Felix Wu, Ser-Nam Lim, Serge Belongie, Kilian Q. Weinberger.
    • On Feature Normalization and Data Augmentation. [CVPR'2021] [code]
  • k-Mixup: Kristjan Greenewald, Anming Gu, Mikhail Yurochkin, Justin Solomon, Edward Chien.
    • k-Mixup Regularization for Deep Learning via Optimal Transport. [ArXiv'2021]
  • NFM: Soon Hoe Lim, N. Benjamin Erichson, Francisco Utrera, Winnie Xu, Michael W. Mahoney.
    • Noisy Feature Mixup. [ICLR'2022] [code]
  • LocalMix: Raphael Baena, Lucas Drumetz, Vincent Gripon.
    • Preventing Manifold Intrusion with Locality: Local Mixup. [EUSIPCO'2022] [code]
  • RandomMix: Xiaoliang Liu, Furao Shen, Jian Zhao, Changhai Nie.
    • RandomMix: A mixed sample data augmentation method with multiple mixed modes. [ArXiv'2022]
  • SuperpixelGridCut: Karim Hammoudi, Adnane Cabani, Bouthaina Slika, Halim Benhabiles, Fadi Dornaika, Mahmoud Melkemi.
    • SuperpixelGridCut, SuperpixelGridMean and SuperpixelGridMix Data Augmentation. [ArXiv'2022] [code]
  • AugRmixAT: Xiaoliang Liu, Furao Shen, Jian Zhao, Changhai Nie.
    • AugRmixAT: A Data Processing and Training Method for Improving Multiple Robustness and Generalization Performance. [ICME'2022]
  • MSDA: Chanwoo Park, Sangdoo Yun, Sanghyuk Chun.
    • A Unified Analysis of Mixed Sample Data Augmentation: A Loss Function Perspective. [NIPS'2022] [code]
  • RegMixup: Francesco Pinto, Harry Yang, Ser-Nam Lim, Philip H.S. Torr, Puneet K. Dokania.
    • RegMixup: Mixup as a Regularizer Can Surprisingly Improve Accuracy and Out-of-Distribution Robustness. [NIPS'2022] [code]
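
The policies above are "pre-defined" in that the mixing mask is drawn from a fixed distribution, independent of image content. As a concrete instance, here is a hedged sketch of the CutMix box policy following the paper's published formulation; the tensor names are illustrative.

```python
import torch

def cutmix(x, y, alpha=1.0):
    """CutMix: paste a random box from a paired image, then correct the
    label weight lam to the area that was actually kept."""
    b, _, h, w = x.shape
    lam = float(torch.distributions.Beta(alpha, alpha).sample())
    cut = (1.0 - lam) ** 0.5                       # box side ratio
    ch, cw = int(h * cut), int(w * cut)
    cy = int(torch.randint(h, (1,)))               # uniform box center
    cx = int(torch.randint(w, (1,)))
    y1, y2 = max(cy - ch // 2, 0), min(cy + ch // 2, h)
    x1, x2 = max(cx - cw // 2, 0), min(cx + cw // 2, w)
    perm = torch.randperm(b)
    x = x.clone()
    x[:, :, y1:y2, x1:x2] = x[perm, :, y1:y2, x1:x2]
    lam = 1.0 - (y2 - y1) * (x2 - x1) / (h * w)    # area-corrected weight
    return x, y, y[perm], lam                       # two labels + their weight
```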

Saliency-guided Policies

  • SaliencyMix: A F M Shahab Uddin, Mst. Sirazam Monira, Wheemyung Shin, TaeChoong Chung, Sung-Ho Bae.
    • SaliencyMix: A Saliency Guided Data Augmentation Strategy for Better Regularization. [ICLR'2021] [code]
  • AttentiveMix: Devesh Walawalkar, Zhiqiang Shen, Zechun Liu, Marios Savvides.
    • Attentive CutMix: An Enhanced Data Augmentation Approach for Deep Learning Based Image Classification. [ICASSP'2020] [code]
  • SnapMix: Shaoli Huang, Xinchao Wang, Dacheng Tao.
    • SnapMix: Semantically Proportional Mixing for Augmenting Fine-grained Data. [AAAI'2021] [code]
  • AttributeMix: Hao Li, Xiaopeng Zhang, Hongkai Xiong, Qi Tian.
    • Attribute Mix: Semantic Data Augmentation for Fine Grained Recognition. [Arxiv'2020]
  • AutoMix: Jianchao Zhu, Liangliang Shi, Junchi Yan, Hongyuan Zha.
    • AutoMix: Mixup Networks for Sample Interpolation via Cooperative Barycenter Learning. [ECCV'2020]
  • Pani VAT: Ke Sun, Bing Yu, Zhouchen Lin, Zhanxing Zhu.
    • Patch-level Neighborhood Interpolation: A General and Effective Graph-based Regularization Strategy. [ArXiv'2019]
  • PuzzleMix: Jang-Hyun Kim, Wonho Choo, Hyun Oh Song.
    • Puzzle Mix: Exploiting Saliency and Local Statistics for Optimal Mixup. [ICML'2020] [code]
  • CoMixup: Jang-Hyun Kim, Wonho Choo, Hosan Jeong, Hyun Oh Song.
    • Co-Mixup: Saliency Guided Joint Mixup with Supermodular Diversity. [ICLR'2021] [code]
  • SuperMix: Ali Dabouei, Sobhan Soleymani, Fariborz Taherkhani, Nasser M. Nasrabadi.
    • SuperMix: Supervising the Mixing Data Augmentation. [CVPR'2021] [code]
  • PatchMix: Paola Cascante-Bonilla, Arshdeep Sekhon, Yanjun Qi, Vicente Ordonez.
    • Evolving Image Compositions for Feature Representation Learning. [BMVC'2021]
  • StackMix: John Chen, Samarth Sinha, Anastasios Kyrillidis.
    • StackMix: A complementary Mix algorithm. [Arxiv'2021]
  • AlignMix: Shashanka Venkataramanan, Ewa Kijak, Laurent Amsaleg, Yannis Avrithis.
    • AlignMix: Improving representation by interpolating aligned features. [CVPR'2022] [code]
  • AutoMix: Zicheng Liu, Siyuan Li, Di Wu, Zihan Liu, Zhiyuan Chen, Lirong Wu, Stan Z. Li.
    • AutoMix: Unveiling the Power of Mixup for Stronger Classifiers. [ECCV'2022] [code]
  • SAMix: Siyuan Li, Zicheng Liu, Di Wu, Zihan Liu, Stan Z. Li.
    • Boosting Discriminative Visual Representation Learning with Scenario-Agnostic Mixup. [Arxiv'2021] [code]
  • ScoreMix: Thomas Stegmüller, Behzad Bozorgtabar, Antoine Spahr, Jean-Philippe Thiran.
    • ScoreNet: Learning Non-Uniform Attention and Augmentation for Transformer-Based Histopathological Image Classification. [Arxiv'2022]
  • RecursiveMix: Lingfeng Yang, Xiang Li, Borui Zhao, Renjie Song, Jian Yang.
    • RecursiveMix: Mixed Learning with History. [NIPS'2022] [code]
  • SciMix: Rémy Sun, Clément Masson, Gilles Hénaff, Nicolas Thome, Matthieu Cord.
    • Swapping Semantic Contents for Mixing Images. [ICPR'2022]
  • TransformMix: Anonymous.
    • TransformMix: Learning Transformation and Mixing Strategies for Sample-mixing Data Augmentation. [OpenReview'2022]
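
What the methods above share is that the mixing mask is steered by image content rather than drawn blindly. Below is a hedged, generic sketch of that pattern; the saliency map is assumed given, and the individual papers compute it with detectors, attention maps, or learned mixing networks instead.

```python
import torch

def salient_paste(x_tgt, x_src, saliency, cut=0.4):
    """Paste the most salient patch of x_src into x_tgt.
    x_*: (C, H, W) images; saliency: (H, W) nonnegative map for x_src."""
    h, w = saliency.shape
    ph, pw = int(h * cut), int(w * cut)
    idx = int(saliency.flatten().argmax())          # saliency peak
    cy, cx = idx // w, idx % w
    y1 = min(max(cy - ph // 2, 0), h - ph)          # clamp box inside image
    x1 = min(max(cx - pw // 2, 0), w - pw)
    out = x_tgt.clone()
    out[:, y1:y1 + ph, x1:x1 + pw] = x_src[:, y1:y1 + ph, x1:x1 + pw]
    lam = 1.0 - (ph * pw) / (h * w)                 # label weight of x_tgt
    return out, lam
```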

Label Mixup Methods

  • MixUp: Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, David Lopez-Paz.
    • mixup: Beyond Empirical Risk Minimization. [ICLR'2018] [code]
  • CutMix: Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, Youngjoon Yoo.
    • CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features. [ICCV'2019] [code]
  • MetaMixup: Zhijun Mai, Guosheng Hu, Dexiong Chen, Fumin Shen, Heng Tao Shen.
    • Metamixup: Learning adaptive interpolation policy of mixup with metalearning. [TNNLS'2021]
  • mWH: Hao Yu, Huanyu Wang, Jianxin Wu.
    • Mixup Without Hesitation. [ICIG'2021]
  • CAMixup: Yeming Wen, Ghassen Jerfel, Rafael Muller, Michael W. Dusenberry, Jasper Snoek, Balaji Lakshminarayanan, Dustin Tran.
    • Combining Ensembles and Data Augmentation can Harm your Calibration. [ICLR'2021] [code]
  • Saliency Grafting: Joonhyung Park, June Yong Yang, Jinwoo Shin, Sung Ju Hwang, Eunho Yang.
    • Saliency Grafting: Innocuous Attribution-Guided Mixup with Calibrated Label Mixing. [AAAI'2022]
  • TransMix: Jie-Neng Chen, Shuyang Sun, Ju He, Philip Torr, Alan Yuille, Song Bai.
    • TransMix: Attend to Mix for Vision Transformers. [CVPR'2022] [code]
  • GenLabel: Jy-yong Sohn, Liang Shang, Hongxu Chen, Jaekyun Moon, Dimitris Papailiopoulos, Kangwook Lee.
    • GenLabel: Mixup Relabeling using Generative Models. [ArXiv'2022]
  • DecoupleMix: Zicheng Liu, Siyuan Li, Ge Wang, Cheng Tan, Lirong Wu, Stan Z. Li.
    • Decoupled Mixup for Data-efficient Learning. [ArXiv'2022]
  • TokenMix: Jihao Liu, Boxiao Liu, Hang Zhou, Hongsheng Li, Yu Liu.
    • TokenMix: Rethinking Image Mixing for Data Augmentation in Vision Transformers. [ECCV'2022] [code]
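
On the label side, the common denominator is a two-target loss whose weight need not equal the interpolation ratio used on the inputs: the methods above recalibrate it with saliency, attention (e.g., TransMix), or confidence. A hedged sketch of the generic loss:

```python
import torch.nn.functional as F

def mixed_cross_entropy(logits, y_a, y_b, lam):
    """Cross-entropy against both source labels, weighted by lam.
    Label mixup methods differ mainly in how lam is (re)estimated."""
    return lam * F.cross_entropy(logits, y_a) + (1.0 - lam) * F.cross_entropy(logits, y_b)
```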

Mixup for Self-supervised Learning

  • MixCo: Sungnyun Kim, Gihun Lee, Sangmin Bae, Se-Young Yun.
    • MixCo: Mix-up Contrastive Learning for Visual Representation. [NIPSW'2020] [code]
  • MoCHi: Yannis Kalantidis, Mert Bulent Sariyildiz, Noe Pion, Philippe Weinzaepfel, Diane Larlus.
    • Hard Negative Mixing for Contrastive Learning. [NIPS'2020] [code]
  • i-Mix: Kibok Lee, Yian Zhu, Kihyuk Sohn, Chun-Liang Li, Jinwoo Shin, Honglak Lee.
    • i-Mix A Domain-Agnostic Strategy for Contrastive Representation Learning. [ICLR'2021] [code]
  • Un-Mix: Zhiqiang Shen, Zechun Liu, Zhuang Liu, Marios Savvides, Trevor Darrell, Eric Xing.
    • Un-Mix: Rethinking Image Mixtures for Unsupervised Visual Representation. [AAAI'2022] [code]
  • BSIM: Xiangxiang Chu, Xiaohang Zhan, Xiaolin Wei.
    • Beyond Single Instance Multi-view Unsupervised Representation Learning. [Arxiv'2020]
  • FT: Rui Zhu, Bingchen Zhao, Jingen Liu, Zhenglong Sun, Chang Wen Chen.
    • Improving Contrastive Learning by Visualizing Feature Transformation. [ICCV'2021] [code]
  • m-Mix: Shaofeng Zhang, Meng Liu, Junchi Yan, Hengrui Zhang, Lingxiao Huang, Pinyan Lu, Xiaokang Yang.
    • m-mix: Generating hard negatives via multiple samples mixing for contrastive learning. [Arxiv'2021]
  • PCEA: Jingwei Liu, Yi Gu, Shentong Mo, Zhun Sun, Shumin Han, Jiafeng Guo, Xueqi Cheng.
    • Piecing and Chipping: An effective solution for the information-erasing view generation in Self-supervised Learning. [OpenReview'2021]
  • CoMix: Aadarsh Sahoo, Rutav Shah, Rameswar Panda, Kate Saenko, Abir Das.
    • Contrast and Mix: Temporal Contrastive Video Domain Adaptation with Background Mixing. [NIPS'2021] [code]
  • SAMix: Siyuan Li, Zicheng Liu, Di Wu, Zihan Liu, Stan Z. Li.
    • Boosting Discriminative Visual Representation Learning with Scenario-Agnostic Mixup. [Arxiv'2021] [code]
  • MixSiam: Xiaoyang Guo, Tianhao Zhao, Yutian Lin, Bo Du.
    • MixSiam: A Mixture-based Approach to Self-supervised Representation Learning. [OpenReview'2021]
  • MixSSL: Yichen Zhang, Yifang Yin, Ying Zhang, Roger Zimmermann.
    • Mix-up Self-Supervised Learning for Contrast-agnostic Applications. [ICME'2021]
  • CLIM: Hao Li, Xiaopeng Zhang, Hongkai Xiong.
    • Center-wise Local Image Mixture For Contrastive Representation Learning. [BMVC'2021]
  • Mixup: Xin Zhang, Minho Jin, Roger Cheng, Ruirui Li, Eunjung Han, Andreas Stolcke.
    • Contrastive-mixup Learning for Improved Speaker Verification. [ICASSP'2022]
  • Metrix: Shashanka Venkataramanan, Bill Psomas, Ewa Kijak, Laurent Amsaleg, Konstantinos Karantzalos, Yannis Avrithis.
    • It Takes Two to Tango: Mixup for Deep Metric Learning. [ICLR'2022] [code]
  • ProGCL: Jun Xia, Lirong Wu, Ge Wang, Jintao Chen, Stan Z. Li.
    • ProGCL: Rethinking Hard Negative Mining in Graph Contrastive Learning. [ICML'2022] [code]
  • M-Mix: Shaofeng Zhang, Meng Liu, Junchi Yan, Hengrui Zhang, Lingxiao Huang, Pinyan Lu, Xiaokang Yang.
    • M-Mix: Generating Hard Negatives via Multi-sample Mixing for Contrastive Learning. [KDD'2022] [code]
  • SDMP: Sucheng Ren, Huiyu Wang, Zhengqi Gao, Shengfeng He, Alan Yuille, Yuyin Zhou, Cihang Xie.
    • A Simple Data Mixing Prior for Improving Self-Supervised Learning. [CVPR'2022] [code]
  • ScaleMix: Xiao Wang, Haoqi Fan, Yuandong Tian, Daisuke Kihara, Xinlei Chen.
    • On the Importance of Asymmetry for Siamese Representation Learning. [CVPR'2022] [code]
  • VLMixer: Teng Wang, Wenhao Jiang, Zhichao Lu, Feng Zheng, Ran Cheng, Chengguo Yin, Ping Luo.
    • VLMixer: Unpaired Vision-Language Pre-training via Cross-Modal CutMix. [ICML'2022]
  • CropMix: Junlin Han, Lars Petersson, Hongdong Li, Ian Reid.
    • CropMix: Sampling a Rich Input Distribution via Multi-Scale Cropping. [ArXiv'2022] [code]
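
A recurring recipe in this section, sketched below in the spirit of i-Mix and Un-Mix (not a faithful reproduction of any single paper), is to mix query images and require the mixed embedding to match both sources' keys. The function and argument names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def mixed_infonce(q_mix, k, perm, lam, t=0.2):
    """q_mix: (B, D) embeddings of mixed queries; k: (B, D) key embeddings
    of the unmixed views; perm: the pairing used when mixing; t: temperature."""
    q_mix = F.normalize(q_mix, dim=1)
    k = F.normalize(k, dim=1)
    logits = q_mix @ k.t() / t                      # (B, B) similarities
    own = torch.arange(q_mix.size(0))               # each query's own key
    return lam * F.cross_entropy(logits, own) + \
           (1.0 - lam) * F.cross_entropy(logits, perm)  # mixing partner's key
```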

Mixup for Semi-supervised Learning

  • MixMatch: David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, Colin Raffel.
    • MixMatch: A Holistic Approach to Semi-Supervised Learning. [NIPS'2019] [code]
  • Pani VAT: Ke Sun, Bing Yu, Zhouchen Lin, Zhanxing Zhu.
    • Patch-level Neighborhood Interpolation: A General and Effective Graph-based Regularization Strategy. [ArXiv'2019]
  • ReMixMatch: David Berthelot, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Kihyuk Sohn, Han Zhang, Colin Raffel.
    • ReMixMatch: Semi-Supervised Learning with Distribution Matching and Augmentation Anchoring. [ICLR'2020] [code]
  • Core-Tuning: Yifan Zhang, Bryan Hooi, Dapeng Hu, Jian Liang, Jiashi Feng.
    • Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regularized Fine-Tuning. [NIPS'2021] [code]
  • DFixMatch: Zicheng Liu, Siyuan Li, Ge Wang, Cheng Tan, Lirong Wu, Stan Z. Li.
    • Decoupled Mixup for Data-efficient Learning. [ArXiv'2022]
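
The recurring semi-supervised recipe, sketched here MixMatch-style (the names and the sharpening temperature are illustrative), is to guess soft labels for unlabeled data, sharpen them, and then mixup the concatenated labeled and unlabeled batch so supervision smooths into the unlabeled distribution.

```python
import torch

def sharpen(p, T=0.5):
    """Lower the entropy of averaged pseudo-label predictions."""
    p = p ** (1.0 / T)
    return p / p.sum(dim=1, keepdim=True)

def mixmatch_mix(x_l, y_l, x_u, p_u, alpha=0.75):
    """x_l, y_l: labeled images and one-hot targets;
    x_u, p_u: unlabeled images and sharpened pseudo-labels."""
    x = torch.cat([x_l, x_u])
    y = torch.cat([y_l, p_u])
    lam = float(torch.distributions.Beta(alpha, alpha).sample())
    lam = max(lam, 1.0 - lam)       # keep the original sample dominant
    perm = torch.randperm(x.size(0))
    return lam * x + (1.0 - lam) * x[perm], lam * y + (1.0 - lam) * y[perm]
```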

Analysis of Mixup

  • Sunil Thulasidasan, Gopinath Chennupati, Jeff Bilmes, Tanmoy Bhattacharya, Sarah Michalak.
    • On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks. [NIPS'2019] [code]
  • Luigi Carratino, Moustapha Cissé, Rodolphe Jenatton, Jean-Philippe Vert.
    • On Mixup Regularization. [ArXiv'2020]
  • Linjun Zhang, Zhun Deng, Kenji Kawaguchi, Amirata Ghorbani, James Zou.
    • How Does Mixup Help With Robustness and Generalization? [ICLR'2021]
  • Muthu Chidambaram, Xiang Wang, Yuzheng Hu, Chenwei Wu, Rong Ge.
    • Towards Understanding the Data Dependency of Mixup-style Training. [ICLR'2022] [code]
  • Linjun Zhang, Zhun Deng, Kenji Kawaguchi, James Zou.
    • When and How Mixup Improves Calibration. [ICML'2022]

Survey

  • Humza Naveed.
    • Survey: Image Mixing and Deleting for Data Augmentation. [ArXiv'2021]
  • Dominik Lewy, Jacek Mańdziuk.
    • An overview of mixing augmentation methods and augmentation strategies. [ArXiv'2021]

Contribution

Feel free to send pull requests to add more links with the following Markdown format. Note that the Abbreviation and the code link are optional attributes.

```markdown
* **Abbreviation**: Author List.
  - Paper Name. [[Conference'Year](link)] [[code](link)]
```

Current contributors include: Siyuan Li (@Lupin1998), Zicheng Liu (@pone7), Zedong Wang (@Jacky1128). We thank all contributors for Awesome-Mixup!

Related Project

  • OpenMixup: CAIRI Supervised, Semi- and Self-Supervised Visual Representation Learning Toolbox and Benchmark.
  • Awesome-Mixup: Awesome list of mixup augmentation methods for supervised, semi-, and self-supervised visual representation learning.
  • awesome-mixup: A collection of awesome papers about mixup.
