Data-Centric AI is an approach to AI development that treats the training dataset, rather than the model, as the centerpiece of the solution.
This repository contains a curated list of awesome resources, such as open-source libraries, tutorials, and articles, that will help you understand the concepts and start your journey in Data-Centric AI development.
We value open collaboration and knowledge sharing, so we recommend not limiting yourself to this repository alone; also check out other amazing projects such as Andrew Ng's Resources Hub.
For this repository, we focus on three main pillars of Data-Centric AI: data profiling, synthetic data, and data labeling.
## Data Profiling

- Pandas Profiling - Extends the pandas DataFrame with `df.profile_report()` for quick data analysis.
- Sweetviz - An open-source Python library that generates beautiful, high-density visualizations to kickstart EDA (Exploratory Data Analysis) with just two lines of code.
- Great Expectations - Helps data teams eliminate pipeline debt through data testing, documentation, and profiling.
- D-Tale - An open-source Python auto-visualization library that offers an easy way to view and analyze pandas data structures. It integrates seamlessly with IPython notebooks and Python/IPython terminals.
- Data Profiler - The DataProfiler is a Python library designed to make data analysis, monitoring, and sensitive data detection easy.
- whylogs - The open-source standard for data logging. It uses data profiling techniques to create whylogs profiles, which can be used as logs to enable monitoring and observability for data pipelines and ML models.
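The libraries above share the same core idea: summarize each column's type, missingness, and cardinality so data issues surface early. As a rough sketch of what such a profile contains, here is a minimal version in plain pandas (the `profile` helper and the sample data are our own illustration, not any of these libraries' APIs):

```python
import pandas as pd

def profile(df: pd.DataFrame) -> pd.DataFrame:
    """Return a per-column summary: dtype, missing-value count, and unique values."""
    return pd.DataFrame({
        "dtype": df.dtypes.astype(str),
        "missing": df.isna().sum(),
        "unique": df.nunique(),
    })

# Hypothetical sample data with a missing value in each column.
df = pd.DataFrame({"age": [25, 32, None, 41], "city": ["NY", "SF", "NY", None]})
report = profile(df)
print(report)
```

Dedicated profilers add much more on top (distributions, correlations, sensitive-data flags, HTML reports), but this is the shape of the underlying summary.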
## Synthetic Data

- YData Synthetic - Structured synthetic data generators using Generative Adversarial Networks, specifically for tabular and time-series data.
- Synthpop - A tool for producing synthetic versions of microdata containing confidential information so that they are safe to be released to users for exploratory analysis.
- DataSynthesizer - DataSynthesizer generates synthetic data that simulates a given dataset. It applies Differential Privacy techniques to achieve a strong privacy guarantee.
- SDV - The Synthetic Data Vault (SDV) is an ecosystem of synthetic data generation libraries that lets users model single-table, multi-table, and time-series datasets and then generate new synthetic data with the same format and statistical properties as the original dataset.
- Pomegranate - Pomegranate is a package for building probabilistic models in Python that is implemented in Cython for speed. Most of these models can sample data.
- Gretel Synthetics - Gretel Synthetics package allows developers to quickly get immersed with synthetic data generation through the use of neural networks.
- Time-Series-Generator - Allows developers to create synthetic time-series datasets in a generic way that follows different trends. The goal is to have non-sensitive data available to demo solutions and test the effectiveness of those solutions and/or algorithms.
- Zpy - Zpy solves the problem of lack of business specific datasets for computer vision based applications. Zpy uses Python and Blender (an open source 3D graphics toolset) to create synthetic datasets suitable for unique business cases.
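Most of these generators follow the same pattern: fit a model to real data, then sample new rows from it. A deliberately naive sketch using independent per-column Gaussians (NumPy only; the example data is hypothetical, and real tools like SDV or YData Synthetic also capture correlations between columns, categorical types, and privacy constraints):

```python
import numpy as np

rng = np.random.default_rng(42)

# "Real" numeric data we want to mimic (hypothetical example values).
real = np.column_stack([
    rng.normal(50, 5, size=1000),    # e.g. an age-like feature
    rng.normal(200, 20, size=1000),  # e.g. a weight-like feature
])

# Fit an independent Gaussian per column -- a deliberately naive model.
mu, sigma = real.mean(axis=0), real.std(axis=0)

# Sample a synthetic dataset with the same marginal statistics as the original.
synthetic = rng.normal(mu, sigma, size=(1000, 2))
print(synthetic.shape)
```

Swapping the per-column Gaussian for a GAN, Bayesian network, or copula is what distinguishes the tools above; the fit-then-sample workflow stays the same.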
## Data Labeling

- LabelImg - A graphical image annotation tool. It is written in Python and uses Qt for its graphical interface.
- LabelMe - LabelMe is an image polygonal annotation tool that uses Python and Qt.
- TagAnomaly - An anomaly detection labeling tool, specifically for multiple time series (one time series per category).
- EchoML - Play, visualize, and annotate your audio files.
- LabelStudio - Label Studio is an open-source data labeling tool. It lets you label data types like audio, text, images, videos, and time series with a simple and straightforward UI and export to various model formats.
- Awesome Open Source Data Annotation & Labeling Tools - A list of the open-source tools available (sorted by task type) for anyone who wants to label data. Only actively maintained tools are listed.
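Whatever the tool, the end product of labeling is a machine-readable annotation file mapping each asset to its labels, which then feeds model training. A minimal illustration of such a record (the schema, class name, and file path are hypothetical, not any specific tool's export format):

```python
import json

# One annotation record: an image plus its labeled bounding boxes.
# The schema here is illustrative only; each tool defines its own export format.
annotation = {
    "image": "photos/dog_001.jpg",
    "labels": [
        {"class": "dog", "bbox": [34, 50, 120, 160]},  # x, y, width, height
    ],
}

serialized = json.dumps(annotation, indent=2)
print(serialized)
```

Tools like Label Studio can export to several such formats (JSON, CSV, COCO-style, and others), so the same labeled dataset can feed different training pipelines.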
We are open to collaboration! If you want to start contributing, you only need to create a pull request with the relevant resources. We will review each and every pull request.
If you found these resources useful, please feel free to check out our Data-Centric AI Community or click here to join our Slack community. We hope to say hi on the other side!