This repository implements an anomaly detection system that combines Deep Autoencoding Gaussian Mixture Models (DAGMM) with Contrastive Language-Image Pretraining (CLIP). Integrating the two models aims to improve outlier detection in complex datasets.
DAGMM is a model introduced in the paper "Deep Autoencoding Gaussian Mixture Model for Unsupervised Anomaly Detection" (ICLR 2018), which utilizes deep autoencoders along with a Gaussian mixture model to effectively identify anomalies in an unsupervised manner.
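To make the DAGMM idea concrete, the sketch below (a minimal numpy illustration, not this repository's implementation; function names and the single-component toy setup are assumptions) shows the core of DAGMM's estimation step: fitting Gaussian mixture parameters from soft responsibilities over latent codes, then scoring each sample by its negative log-likelihood "energy", where higher energy indicates a likelier anomaly:

```python
import numpy as np

def gmm_params(z, gamma):
    """Estimate GMM parameters from latents z (N, D) and soft responsibilities gamma (N, K)."""
    n = z.shape[0]
    phi = gamma.sum(axis=0) / n                       # (K,) mixture weights
    mu = gamma.T @ z / gamma.sum(axis=0)[:, None]     # (K, D) component means
    diff = z[:, None, :] - mu[None, :, :]             # (N, K, D) deviations
    # Responsibility-weighted covariance per component: (K, D, D)
    sigma = np.einsum('nk,nkd,nke->kde', gamma, diff, diff) / gamma.sum(axis=0)[:, None, None]
    return phi, mu, sigma

def sample_energy(z, phi, mu, sigma, eps=1e-6):
    """E(z) = -log sum_k phi_k N(z; mu_k, sigma_k); higher energy => more anomalous."""
    k_comp, d = mu.shape
    density = np.zeros(z.shape[0])
    for k in range(k_comp):
        cov = sigma[k] + eps * np.eye(d)              # regularize for invertibility
        inv = np.linalg.inv(cov)
        det = np.linalg.det(cov)
        diff = z - mu[k]
        expo = -0.5 * np.einsum('nd,de,ne->n', diff, inv, diff)
        density += phi[k] * np.exp(expo) / np.sqrt((2 * np.pi) ** d * det)
    return -np.log(density + eps)

# Toy usage: one Gaussian cluster of "normal" latents, one far-away outlier
rng = np.random.default_rng(0)
z = rng.normal(0.0, 1.0, size=(200, 2))
gamma = np.full((200, 1), 1.0)                        # single-component responsibilities
phi, mu, sigma = gmm_params(z, gamma)
e_inlier = sample_energy(np.zeros((1, 2)), phi, mu, sigma)
e_outlier = sample_energy(np.array([[10.0, 10.0]]), phi, mu, sigma)
```

In the full DAGMM, the responsibilities come from an estimation network trained jointly with the autoencoder, rather than being fixed as in this toy.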
CLIP, developed by OpenAI, is a cutting-edge model that understands information in a multimodal context, connecting textual and visual data. This allows for a more nuanced approach to anomaly detection, especially in scenarios where contextual understanding is crucial.
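At its core, CLIP-style zero-shot scoring reduces to cosine similarity between L2-normalized image and text embeddings. The sketch below is a hedged illustration with placeholder vectors standing in for real CLIP outputs; the function name, embedding values, and prompt wording are assumptions, not this repository's API:

```python
import numpy as np

def cosine_scores(image_emb, text_embs):
    # L2-normalize embeddings, as CLIP does before comparing modalities
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    return txt @ img  # one similarity score per text prompt

# Placeholder 4-d embeddings standing in for real CLIP encoder outputs
image_emb = np.array([0.9, 0.1, 0.0, 0.1])
text_embs = np.array([
    [1.0, 0.0, 0.0, 0.0],  # e.g. a prompt like "a photo of a normal part"
    [0.0, 1.0, 0.0, 0.0],  # e.g. a prompt like "a photo of a defective part"
])
scores = cosine_scores(image_emb, text_embs)
anomaly_score = scores[1] - scores[0]  # higher => closer to the "defective" prompt
```

In practice the embeddings would come from CLIP's image and text encoders; comparing an image against descriptions of normal and anomalous conditions is one way such contextual understanding can feed an anomaly score.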
By combining DAGMM's robust statistical approach with CLIP's multimodal understanding, this project aims to go beyond traditional anomaly detection methods. The repository includes implementation details and usage examples.