
sitraka17 / argilla


This project forked from argilla-io/argilla


✨ Open-source tool for data-centric NLP. Argilla helps domain experts and data teams to build better NLP datasets in less time.

Home Page: https://docs.argilla.io

License: Apache License 2.0

Shell 0.04% JavaScript 9.57% Python 42.56% Vue 16.78% Jupyter Notebook 29.59% Dockerfile 0.01% SCSS 1.44%

argilla's Introduction

Argilla


Open-source framework for data-centric NLP

Data Labeling, curation, and Inference Store

Designed for MLOps & Feedback Loops

🆕 🔥 Play with the Argilla UI in this live demo powered by Hugging Face Spaces (login: argilla, password: 1234)

🆕 🔥 Since 1.2.0, Argilla supports vector search for finding the records most similar to a given one. This feature combines vector (semantic) search with more traditional keyword- and filter-based search. Learn more in this deep-dive guide
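For instance, a record can carry named vectors when it is logged, which is what similarity search builds on. A minimal sketch, assuming the vectors field available since 1.2.0; the vector name, values, and dataset name are illustrative placeholders for a real embedding:

import argilla as rg

# each record can carry one or more named vectors; replace the values with a real embedding
record = rg.TextClassificationRecord(
    text="I love this movie",
    vectors={"my_vector": [0.12, 0.34, 0.56]},
)

rg.log(records=[record], name="movies-with-vectors")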



Key Features

Advanced NLP labeling

Monitoring

Team workspaces

  • Bring different users and roles into the NLP data and model lifecycles
  • Organize data collection, review and monitoring into different workspaces
  • Manage workspace access for different users (a minimal client sketch follows this list)
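A minimal sketch of working with workspaces from the Python client, assuming the set_workspace and get_workspace helpers described in the docs; the workspace name is illustrative:

import argilla as rg

# make subsequent log/load calls target a shared team workspace
rg.set_workspace("team-reviews")
print(rg.get_workspace())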

Quickstart

Argilla is composed of a Python Server with Elasticsearch as the database layer, and a Python Client to create and manage datasets.

To get started you need to install the client and the server with pip:

pip install "argilla[server]"

Then you need to run Elasticsearch (ES).

The simplest way is to use Docker by running:

docker run -d --name es-for-argilla -p 9200:9200 -p 9300:9300 -e "ES_JAVA_OPTS=-Xms512m -Xmx512m" -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2

ℹ️ Check the docs for further options and configurations for Elasticsearch.

Finally you can launch the server:

python -m argilla

ℹ️ The most common error message after this step is related to the Elasticsearch instance not running. Make sure your Elasticsearch instance is running on http://localhost:9200/. If you already have an Elasticsearch instance or cluster, you can point the server to its URL using environment variables
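For example, assuming the ARGILLA_ELASTICSEARCH variable described in the docs (the cluster URL below is illustrative), you can verify the local instance and point the server at an existing cluster like this:

curl http://localhost:9200

export ARGILLA_ELASTICSEARCH="http://elastic:changeme@my-es-host:9200"
python -m argilla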

🎉 You can now access the Argilla UI by pointing your browser at http://localhost:6900/.

The default username and password are argilla and 1234.
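To use the Python client against this server, initialize it once per session. A minimal sketch; the API key shown is the common default for a fresh install and is an assumption that may differ on your deployment:

import argilla as rg

# connect the client to the local server; adjust api_url and api_key to your setup
rg.init(api_url="http://localhost:6900", api_key="argilla.apikey")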

Your workspace will contain no datasets yet, so let's use the datasets library to create our first ones!

First, you need to install datasets:

pip install datasets

Then go to your Python IDE of choice and run:

import pandas as pd
import argilla as rg
from datasets import load_dataset

# load dataset from the hub
dataset = load_dataset("argilla/gutenberg_spacy-ner", split="train")

# read in dataset, assuming it's a dataset for token classification
dataset_rg = rg.read_datasets(dataset, task="TokenClassification")

# log the dataset to the Argilla web app
rg.log(dataset_rg, "gutenberg_spacy-ner")

# load dataset from json
my_dataframe = pd.read_json(
    "https://raw.githubusercontent.com/recognai/datasets/main/sst-sentimentclassification.json")

# convert pandas dataframe to DatasetForTextClassification
dataset_rg = rg.DatasetForTextClassification.from_pandas(my_dataframe)

# log the dataset to the Argilla web app
rg.log(dataset_rg, name="sst-sentimentclassification")

This will create two datasets which you can use to do a quick tour of the core features of Argilla.
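Once logged, you can pull a dataset back into Python for inspection or export. A minimal sketch using the client's load method; converting to pandas assumes the to_pandas helper on the returned dataset, as documented for 1.x:

import argilla as rg

# retrieve the records of a logged dataset and convert them to a pandas DataFrame
dataset_rg = rg.load("gutenberg_spacy-ner")
df = dataset_rg.to_pandas()
print(df.head())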

🚒 If you find issues, get direct support from the team and other community members on the Slack Community

For getting started with your own use cases, go to the docs.

Principles

  • Open: Argilla is free, open-source, and 100% compatible with major NLP libraries (Hugging Face transformers, spaCy, Stanford Stanza, Flair, etc.). In fact, you can use and combine your preferred libraries without implementing any specific interface.

  • End-to-end: Most annotation tools treat data collection as a one-off activity at the beginning of each project. In real-world projects, data collection is a key activity of the iterative process of ML model development. Once a model goes into production, you want to monitor and analyze its predictions, and collect more data to improve your model over time. Argilla is designed to close this gap, enabling you to iterate as much as you need.

  • User and Developer Experience: The key to sustainable NLP solutions is to make it easier for everyone to contribute to projects. Domain experts should feel comfortable interpreting and annotating data. Data scientists should feel free to experiment and iterate. Engineers should feel in control of data pipelines. Argilla optimizes the experience for these core users to make your teams more productive.

  • Beyond hand-labeling: Classical hand labeling workflows are costly and inefficient, but having humans-in-the-loop is essential. Easily combine hand-labeling with active learning, bulk-labeling, zero-shot models, and weak-supervision in novel data annotation workflows.

Contribute

We love contributors and have launched a collaboration with JustDiggit to hand out our very own bunds and help the re-greening of sub-Saharan Africa. To help our community get started with contributions, we have created developer and contributor docs. Additionally, you can always schedule a meeting with our Developer Advocacy team so they can get you up to speed.

FAQ

What is Argilla?

Argilla is an open-source MLOps tool for building and managing data for Natural Language Processing.

What can I use Argilla for?

Argilla is useful if you want to:

  • create a dataset for training a model.

  • evaluate and improve an existing model.

  • monitor an existing model to improve it over time and gather more training data.

What do I need to start using Argilla?

You need to have a running instance of Elasticsearch and install the Argilla Python library. The library is used to read and write data into Argilla.

How can I "upload" data into Argilla?

Currently, the only way to upload data into Argilla is by using the Python library.

This is based on the assumption that there's rarely a perfectly prepared dataset in the format expected by the data annotation tool.

Argilla is designed to enable fast iteration for users that are closer to data and models, namely data scientists and NLP/ML/Data engineers.

If you are familiar with libraries like Weights & Biases or MLflow, you'll find Argilla's log and load methods intuitive.

That said, Argilla gives you different shortcuts and utils to make loading data into Argilla a breeze, such as the ability to read datasets directly from the Hugging Face Hub.

In summary, the recommended process for uploading data into Argilla would be the following (a minimal code sketch follows the list):

  1. Install the Argilla Python library,

  2. Open a Jupyter Notebook,

  3. Make sure you have an Argilla server instance up and running,

  4. Read your source dataset using Pandas, Hugging Face datasets, or any other library,

  5. Do any data preparation, pre-processing, or pre-annotation with a pretrained model, and

  6. Transform your dataset rows/records into Argilla records and log them into a dataset using rg.log. If your dataset is already loaded as a Hugging Face dataset, check the read_datasets method to make this process even simpler.
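Putting those steps together, a minimal sketch could look like this; the CSV path and column name are illustrative assumptions:

import pandas as pd
import argilla as rg

# steps 1-4: read a source dataset with pandas (a hypothetical CSV with a "text" column)
df = pd.read_csv("my_reviews.csv")

# step 5: optional pre-processing or pre-annotation with a pretrained model would go here

# step 6: turn rows into Argilla records and log them into a dataset
records = [rg.TextClassificationRecord(text=row["text"]) for _, row in df.iterrows()]
rg.log(records=records, name="my-reviews")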

How can I train a model?

The training datasets created with Argilla are model agnostic.

You can choose one of many amazing frameworks to train your model, like transformers, spaCy, flair or sklearn.

Check out our deep dives and our tutorials on how Argilla integrates with these frameworks.

If you want to train a Hugging Face transformer or spaCy NER model, we provide a neat shortcut to prepare your dataset for training.
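For example, a minimal sketch of that shortcut, assuming the prepare_for_training method on a loaded dataset as described in the docs; the dataset name is the one created in the Quickstart:

import argilla as rg

# load an annotated dataset and convert it into a format ready for fine-tuning
dataset_rg = rg.load("sst-sentimentclassification")
train_ds = dataset_rg.prepare_for_training()  # returns a Hugging Face Dataset by default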

Can Argilla share the Elasticsearch Instance/cluster?

Yes, you can use the same Elasticsearch instance/cluster for Argilla and other applications. You only need to perform some configuration; check the Advanced installation guide in the docs.

How to solve an exceeded flood-stage watermark in Elasticsearch?

By default, Elasticsearch is quite conservative regarding the disk space it is allowed to use.

If less than 5% of your disk is free, Elasticsearch can enforce a read-only block on every index, and as a consequence, Argilla stops working.

To solve this, you can simply increase the watermark by executing the following command in your terminal:

curl -X PUT "localhost:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'{"persistent": {"cluster.routing.allocation.disk.watermark.flood_stage":"99%"}}'
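After freeing disk space or raising the watermark, you may also need to lift the read-only block Elasticsearch placed on the indices; this is the standard Elasticsearch procedure rather than an Argilla-specific command:

curl -X PUT "localhost:9200/_all/_settings?pretty" -H 'Content-Type: application/json' -d'{"index.blocks.read_only_allow_delete": null}'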

