monarch-initiative / curate-gpt

LLM-driven curation assist tool (pre-alpha)

Home Page: https://monarch-initiative.github.io/curate-gpt/

License: BSD 3-Clause "New" or "Revised" License

ai curation gpt llm monarchinitiative obofoundry ontogpt ontologies ontology-tools biocuration

curate-gpt's Introduction

CurateGPT


CurateGPT is a prototype web application and framework for performing general purpose AI-guided curation and curation-related operations over collections of objects.

See also the app on curategpt.io (note: this is sometimes down, and may only have a subset of the functionality of the local app)

Installation

You will first need to install Poetry.

Then clone this repo

git clone https://github.com/monarch-initiative/curate-gpt.git
cd curate-gpt

and install the dependencies:

poetry install

In order to get the best performance from CurateGPT, we recommend getting an OpenAI API key, and setting it:

export OPENAI_API_KEY=<your key>

(for members of Monarch: ask on Slack if you would like to use the group key)

Loading example data and running the app

You initially start with an empty database. You can load whatever you like into this database! Any JSON, YAML, or CSV is accepted. CurateGPT comes with wrappers for some existing local and remote sources, including ontologies. The Makefile contains some examples of how to load these. You can load any ontology using the ont-<name> target, e.g.:

make ont-cl

This loads CL (via OAK) into a collection called ont_cl

Note that by default this loads into a collection set stored at stagedb, whereas the app works off of db. You can copy the collection set to the db with:

cp -r stagedb/* db/

You can then run the streamlit app with:

make app

Building Indexes

CurateGPT depends on vector database indexes of the databases/ontologies you want to curate.

The flagship application is ontology curation, so to build an index for an OBO ontology like CL:

make ont-cl

This requires an OpenAI key.

(You can build indexes using an open embedding model by modifying the command to leave off the -m option, but this is not recommended, as the OpenAI embeddings currently seem to work best.)

To load the default ontologies:

make all

(this may take some time)

To load different databases:

make load-db-hpoa
make load-db-reactome

You can load an arbitrary json, yaml, or csv file:

curategpt view index -c my_foo foo.json

(you will need to do this in the poetry shell)
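The input file can be any list of objects. As a purely illustrative example (the field names here are hypothetical, not a required schema), a small foo.json suitable for indexing could be generated like this:

```python
import json

# Hypothetical objects to curate; CurateGPT accepts arbitrary JSON structures.
objects = [
    {"id": "X:1", "label": "alpha cell", "definition": "An example cell type."},
    {"id": "X:2", "label": "beta cell", "definition": "Another example cell type."},
]

# Write the objects as a JSON list, ready for `curategpt view index -c my_foo foo.json`
with open("foo.json", "w") as f:
    json.dump(objects, f, indent=2)
```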

To load a GitHub repo of issues:

curategpt -v view index -c gh_uberon -m openai: --view github --init-with "{repo: obophenotype/uberon}"

The following are also supported:

  • Google Drives
  • Google Sheets
  • Markdown files
  • LinkML Schemas
  • HPOA files
  • GOCAMs
  • MAXOA files
  • Many more

Notebooks

Selecting models

Currently this tool works best with the OpenAI gpt-4 model (for instruction tasks) and OpenAI text-embedding-ada-002 for embedding.

Curate-GPT is layered on top of simonw/llm which has a plugin architecture for using alternative models. In theory you can use any of these plugins.

Additionally, you can set up an openai-emulating proxy using litellm.

The litellm proxy may be installed with pip as pip install "litellm[proxy]".

Let's say you want to run mixtral locally using ollama. You start up ollama (you may have to run ollama serve first):

ollama run mixtral

Then start up litellm:

litellm -m ollama/mixtral

Next edit your extra-openai-models.yaml as detailed in the llm docs:

- model_name: ollama/mixtral
  model_id: litellm-mixtral
  api_base: "http://0.0.0.0:8000"

You can now use this:

curategpt ask -m litellm-mixtral -c ont_cl "What neurotransmitter is released by the hippocampus?"

But be warned that many of the prompts in curategpt were engineered against OpenAI models, and they may give suboptimal results or fail entirely on other models. For example, ask seems to work quite well with mixtral, but complete works horribly. We haven't yet investigated whether the issue is the model, our prompts, or the overall approach.

Welcome to the world of AI engineering!

Using the command line

curategpt --help

You will see various commands for working with indexes, searching, extracting, generating, etc.

These functions are generally also available through the UI; documenting them is a current priority.

Chatting with a knowledge base

curategpt ask -c ont_cl "What neurotransmitter is released by the hippocampus?"

may yield something like:

The hippocampus releases gamma-aminobutyric acid (GABA) as a neurotransmitter [1](#ref-1).

...

## 1

id: GammaAminobutyricAcidSecretion_neurotransmission
label: gamma-aminobutyric acid secretion, neurotransmission
definition: The regulated release of gamma-aminobutyric acid by a cell, in which the
  gamma-aminobutyric acid acts as a neurotransmitter.
...

Chatting with pubmed

curategpt view ask -V pubmed "what neurons express VIP?"

Chatting with a GitHub issue tracker

curategpt ask -c gh_obi "what are some new term requests for electrophysiology terms?"

Term Autocompletion (DRAGON-AI)

curategpt complete -c ont_cl  "mesenchymal stem cell of the apical papilla"

yields

id: MesenchymalStemCellOfTheApicalPapilla
definition: A mesenchymal cell that is part of the apical papilla of a tooth and has
  the ability to self-renew and differentiate into various cell types such as odontoblasts,
  fibroblasts, and osteoblasts.
relationships:
- predicate: PartOf
  target: ApicalPapilla
- predicate: subClassOf
  target: MesenchymalCell
- predicate: subClassOf
  target: StemCell
original_id: CL:0007045
label: mesenchymal stem cell of the apical papilla

All-by-all comparisons

You can compare all objects in one collection

curategpt all-by-all --threshold 0.80 -c ont_hp -X ont_mp --ids-only -t csv > ~/tmp/allxall.mp.hp.csv

This takes 1-2s, as it involves comparison over pre-computed vectors. It reports top hits above a threshold.

Results may vary. You may want to try different texts for embeddings (the default is the entire JSON object; for ontologies it is a concatenation of labels, definition, and aliases).

sample:

HP:5200068,Socially innappropriate questioning,MP:0001361,social withdrawal,0.844015132437909
HP:5200069,Spinning,MP:0001411,spinning,0.9077306606290237
HP:5200071,Delayed Echolalia,MP:0013140,excessive vocalization,0.8153252835818089
HP:5200072,Immediate Echolalia,MP:0001410,head bobbing,0.8348177036912526
HP:5200073,Excessive cleaning,MP:0001412,excessive scratching,0.8699103725005582
HP:5200104,Abnormal play,MP:0020437,abnormal social play behavior,0.8984862078522344
HP:5200105,Reduced imaginative play skills,MP:0001402,decreased locomotor activity,0.85571629684631
HP:5200108,Nonfunctional or atypical use of objects in play,MP:0003908,decreased stereotypic behavior,0.8586700411012859
HP:5200129,Abnormal rituals,MP:0010698,abnormal impulsive behavior control,0.8727804272023427
HP:5200134,Jumping,MP:0001401,jumpy,0.9011393233129765
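Because the vectors are pre-computed, the comparison itself reduces to thresholded cosine similarity between every pair. A minimal sketch of the idea (illustrative only; CurateGPT's actual implementation lives in its vector-store layer and may differ):

```python
import numpy as np

def all_by_all(left: dict, right: dict, threshold: float = 0.8):
    """Compare every embedding vector in `left` against every vector in
    `right` and report (left_id, right_id, similarity) for pairs whose
    cosine similarity meets the threshold, best hits first."""
    hits = []
    for lid, lvec in left.items():
        for rid, rvec in right.items():
            sim = np.dot(lvec, rvec) / (np.linalg.norm(lvec) * np.linalg.norm(rvec))
            if sim >= threshold:
                hits.append((lid, rid, float(sim)))
    return sorted(hits, key=lambda h: -h[2])
```

In practice the vector store can do this with a batched matrix product rather than a Python loop, which is why the real command takes only a second or two.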

Note that CurateGPT has a separate component for using an LLM to evaluate candidate matches (see also https://arxiv.org/abs/2310.03666); this is not enabled by default, this would be expensive to run for a whole ontology.

curate-gpt's People

Contributors

caufieldjh, cmungall, hrshdhgd, justaddcoffee, oneilsh, realmarcin


curate-gpt's Issues

Figure out chromadb slowness issues

At some point after loading a certain number of sources simple metadata/peek operations start going incredibly slowly

Even curategpt collections list takes ~1m

This causes the UI to slow down too

I am pretty sure this is something in the chromadb layer (perhaps in the metadata extraction), not something we are doing on top.

Get model context details from `litellm`

The litellm package tracks metadata on model context limits in this file: https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json

This may also be retrieved and cached with their helper function:

from litellm import get_model_cost_map
models = get_model_cost_map("")  # This can take a URL but by default it uses the one above
models["gpt-4"]
{'max_tokens': 4096, 'max_input_tokens': 8192, 'max_output_tokens': 4096, 'input_cost_per_token': 3e-05, 'output_cost_per_token': 6e-05, 'litellm_provider': 'openai', 'mode': 'chat', 'supports_function_calling': True}

The list includes multiple model types: completions, embeddings, etc.

Add a chat interface

The current way of interacting with the app is incredibly clunky - multiple selectors on the left, confusing, easy to select the wrong thing

It should be a chat interface like @oneilsh's PA interface

There could be power-user CLI commands that bypass GPT, e.g.

  • !ont_hp/search liver phenotype
  • !ont_envo/chat[gpt4] what are volcanoes?
  • !maxoa/extract PMID:123456

But anything else should be passed to a model and use ReAct to trigger appropriate action

Should 'GPT' be used in the app name?

I'm concerned about the use of 'GPT' in the name of this app, since (presumably) this app is not affiliated with OpenAI, and their brand guidelines explicitly forbid this usage:

We do not permit the use of OpenAI models or "GPT" in product or app names because it confuses end users.

See https://openai.com/brand

Unless you strongly disagree with OpenAI's position, you might want to consider adding an obvious disclaimer to the top of the readme (and anywhere else that's applicable) that this app is not affiliated, endorsed, or sponsored by OpenAI, or rename the app.

consider chromadb alternatives

chroma is very slow on the EC2 instance. I don't think the issue is with any fancy vector operations - it's just basic lookup operations and extracting metadata for a collection that seems to be slow.

I am not sure we actually need a vector database. It may be better to use a dedicated document store that has some kind of vector plugin

solr/ES has vector extensions. However, I don't think this would be good as a primary store for editable data

sqlite has vector extensions https://simonwillison.net/2023/Oct/23/embeddings/ -- seems to require a plugin https://github.com/asg017/sqlite-vss -- this may make the overall build more complicated.

the native datamodel for curategpt is json documents so using mongodb as a base would be perfect. There is atlas but this seems to force some kind of cloud deployment - https://www.mongodb.com/products/platform/atlas-vector-search - this might be a good way to go but we want the option to keep it simple with local files

cc @julesjacobsen

Evaluate groq

groq has jaw-droppingly fast access to mixtral. Currently you can use the UI and API at no cost. There is throttling, but it seems quite generous.

it's easy to use via the awesome litellm

See https://github.com/monarch-initiative/curate-gpt/blob/main/README.md#selecting-models for general setup

First make sure you are up to date

pipx upgrade litellm

then fire it up:

litellm -m groq/mixtral-8x7b-32768

Add this to extra-openai-models.yaml as detailed in the llm docs:

- model_name: litellm-groq-mixtral
  model_id: litellm-groq-mixtral
  api_base: "http://0.0.0.0:8000"

You can use the CLI: llm -m litellm-groq-mixtral "10 names for a pet pelican"

Chat interface uses incorrect OpenAI API key

Setting the OpenAI API key as stated in the README may not consistently set it in a way the app can access.
If I do the following:

$ export OPENAI_API_KEY=(key_here)
$ make ont-maxo
$ cp -r stagedb/* db/
$ make app

and then use the Chat interface, I get an authentication error:

openai.error.AuthenticationError: Incorrect API key provided: sk-22ENy***************************************VnKr. You can find your API key at https://platform.openai.com/account/api-keys.
2024-02-02 12:55:41.247 Removing orphaned files...
2024-02-02 12:55:41.448 Script run finished successfully; removing expired entries from MessageCache (max_age=2)

That's not my API key...but it is one I have used in the past!
It's not active anymore so it won't work here, and I'm not certain where CurateGPT is finding it, or why it isn't using the one I just set in OPENAI_API_KEY.

load-db-hpoa_by_pub should stream output

currently this loader will only generate output at the end. The reason it does this is that it needs to aggregate by pub. However the strategy is still pretty dumb, and very inconvenient.

if self.group_by_publication:
    for pub in by_pub.values():
        yield pub

instead it should

  1. load all hpoa as one TSV
  2. aggregate by pub
  3. index these one at a time, yielding results
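The three steps above can be sketched roughly as follows (illustrative only; the function name and the 'reference' field are assumptions, not the actual loader API):

```python
from collections import defaultdict
from typing import Dict, Iterator, List

def stream_by_publication(rows: List[dict]) -> Iterator[dict]:
    """Aggregate HPOA-style rows by publication, then yield one
    aggregated object at a time so downstream indexing can stream."""
    # Steps 1+2: load all rows (already parsed from the TSV) and group by pub.
    by_pub: Dict[str, List[dict]] = defaultdict(list)
    for row in rows:
        by_pub[row["reference"]].append(row)
    # Step 3: yield aggregated objects one at a time.
    for pub, annotations in by_pub.items():
        yield {"publication": pub, "annotations": annotations}
```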

@julesjacobsen

Bypass OpenAI server overload and HTTP 500 Error

This issue was already handled in this old PR but it had a commit mistake. A new PR will be made to have a cleaner history.

When loading ontologies into CurateGPT the insertion of the data into chromaDB is very often interrupted because of a server overload on the API side.

openai.error.ServiceUnavailableError: The server is overloaded or not ready yet.

Implementing an exponential_backoff_request helped me to bypass this by retrying with an additional small sleep every time it failed.
It's not a fancy solution, but it can get the job done.

Another often occurring problem would be an HTTP 500 error, which could also be caught.
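A minimal sketch of the backoff idea (the function name, retry counts, and caught exception types are illustrative assumptions, not the PR's actual code):

```python
import random
import time

def exponential_backoff_request(request_fn, max_retries: int = 5,
                                base_delay: float = 1.0):
    """Retry a request with exponential backoff plus a little jitter.
    Re-raises the last exception if all retries are exhausted."""
    for attempt in range(max_retries):
        try:
            return request_fn()
        except Exception:  # e.g. ServiceUnavailableError, HTTP 500
            if attempt == max_retries - 1:
                raise
            # double the wait each attempt, plus jitter to avoid thundering herd
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)
```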

poetry run curategpt ontology index --index-fields label,definition,relationships -p stagedb -c ont_mp -m openai: sqlite:obo:mp
Configuration file exists at /Users/carlo/Library/Preferences/pypoetry, reusing this directory.

Consider moving TOML configuration files to /Users/carlo/Library/Application Support/pypoetry, as support for the legacy directory will be removed in an upcoming release.
WARNING:curate_gpt.store.chromadb_adapter:Cumulative length = 3040651, pausing ...
WARNING:curate_gpt.store.chromadb_adapter:Cumulative length = 3010451, pausing ...
ERROR:curate_gpt.store.chromadb_adapter:Failed to process batch after retries: The server is overloaded or not ready yet.
poetry run curategpt ontology index --index-fields label,definition,relationships -p stagedb -c ont_mondo -m openai: sqlite:obo:mondo
Configuration file exists at /Users/carlo/Library/Preferences/pypoetry, reusing this directory.

Consider moving TOML configuration files to /Users/carlo/Library/Application Support/pypoetry, as support for the legacy directory will be removed in an upcoming release.
ERROR:curate_gpt.store.chromadb_adapter:Failed to process batch after retries: The server had an error while processing your request. Sorry about that! {
  "error": {
    "message": "The server had an error while processing your request. Sorry about that!",
    "type": "server_error",
    "param": null,
    "code": null
  }
}
 500 {'error': {'message': 'The server had an error while processing your request. Sorry about that!', 'type': 'server_error', 'param': None, 'code': None}} {'Date': 'Wed, 17 Jan 2024 11:49:08 GMT', 'Content-Type': 'application/json', 'Content-Length': '176', 'Connection': 'keep-alive', 'access-control-allow-origin': '*', 'openai-organization': 'lawrence-berkeley-national-laboratory-8', 'openai-processing-ms': '867', 'openai-version': '2020-10-01', 'strict-transport-security': 'max-age=15724800; includeSubDomains', 'x-ratelimit-limit-requests': '10000', 'x-ratelimit-limit-tokens': '10000000', 'x-ratelimit-remaining-requests': '9999', 'x-ratelimit-remaining-tokens': '9998545', 'x-ratelimit-reset-requests': '6ms', 'x-ratelimit-reset-tokens': '8ms', 'x-request-id': '1751b1c8c5e4386f901047e4380709fb', 'CF-Cache-Status': 'DYNAMIC', 'Server': 'cloudflare', 'CF-RAY': '846e5f804ecd79c3-LHR', 'alt-svc': 'h3=":443"; ma=86400'}

Add command to load existing embeddings into a collection

See #35

So for example you can do this:

curategpt embeddings index /path/to/local/embeddings.parquet

or this:

curategpt embeddings index https://huggingface.co/datasets/biomedical-translator/monarch_kg_embeddings/resolve/main/deepwalk_embedding.parquet?download=true -f parquet

Include examples for using the annotate command

curategpt includes an annotate command that performs traditional text annotation / concept recognition (CR). Give it some text, and it will give back ontology term IDs. It's not guaranteed to find the spans, but that could be done as post-processing; the priority has been finding the concepts.

annotate has a --method option with values:

  • inline
  • concept_list
  • two_pass

Under the hood it uses https://github.com/monarch-initiative/curate-gpt/blob/main/src/curate_gpt/agents/concept_recognition_agent.py

We really need to (a) have better docstrings and (b) expose this via sphinx... but for now this issue serves as temporary docs.

We'll use these texts as a running example:

  • A minimum diagnostic criterion is the combination of either the skin tumours
    or multiple odontogenic keratocysts plus a positive family history for this disorder,
    bifid ribs, lamellar calcification of the falx cerebri or any one of the skeletal
    abnormalities typical of this syndrome
  • A clinical concept has been produced, with a diagnostic check list including
    a genetic and a dermatological routine work up as well as a radiological survey
    of the jaws and skeleton

And assume we have hpo pre-indexed using the standard curate-gpt loader.

inline

This is a RAG-based approach that finds the N most relevant concepts in the given ontology (pre-indexed in chromadb). It then presents this in the (system) prompt as a CSV of id, label pairs.

This method is designed to return the annotated spans "inlined" into the existing text, via this prompt:

Your role is to annotate the supplied text with selected concepts.
return the original text with each conceptID in square brackets.
After the occurrence of that concept.
You can use synonyms. For example, if the concept list contains
zucchini // DB:12345
Then for the text 'I love courgettes!' you should return
'I love [courgettes DB:12345]!'
Always try and match the longest span.
the concept ID should come only from the list of candidate concepts supplied to you.

Ideally the system prompt will include DB:12345,courgettes, but the chance of this diminishes with the size of the input document and, to a lesser extent, the size of the ontology.

Example of output from:

curategpt annotate -M inline -m gpt-4-1106-preview --prefix HP --category PhenotypicFeature -l 50 -c ont_hp -I original_id -p stagedb -s -i tests/input/example-disease-text.txt

The output annotated text in one run is:

A minimum diagnostic criterion is the combination of either the skin
  tumours or multiple [odontogenic keratocysts HP:0010603] plus a positive family
  history for this disorder, [bifid ribs HP:0030280], [lamellar calcification of the
  falx cerebri HP:0005462] or any one of the skeletal abnormalities typical of this
  syndrome

that was an easy one since the mentions in the text are more or less exact matches with hpo

For the other text:

A clinical concept has been produced, with a diagnostic check list
  including a genetic and a [dermatological routine work up HP:0001005] as well as
  a radiological survey of the [jaws HP:0000347] and [skeleton HP:0033127].

Hmm. We can see the concepts here:

spans:
- text: dermatological routine work up
  start: null
  end: null
  concept_id: HP:0001005
  concept_label: Dermatological manifestations of systemic disorders
  is_suspect: false
- text: jaws
  start: null
  end: null
  concept_id: HP:0000347
  concept_label: Micrognathia
  is_suspect: false
- text: skeleton
  start: null
  end: null
  concept_id: HP:0033127
  concept_label: Abnormality of the musculoskeletal system
  is_suspect: false

So it's getting creative, and this is wrong; the actual phenotype in GG is jaw cysts, not small jaws...

concept list

This is similar to inline but doesn't attempt to relate the match to a span of text.

This is a RAG-based approach that finds the N most relevant concepts in the given ontology (pre-indexed in chromadb). It then presents this in the (system) prompt as a CSV of id, label pairs.

It uses the prompt:

Your role is to list all instances of the supplied candidate concepts in the supplied text.
Return the concept instances as a CSV of ID,label,text pairs, where the ID
is the concept ID, label is the concept label, and text is the mention of the
concept in the text.
The concept ID and label should come only from the list of candidate concepts supplied to you.
Only include a row if the meaning of the text section is that same as the concept.
If there are no instances of a concept in the text, return an empty string.
Do not include additional verbiage.

for the easier text:

curategpt annotate -M concept_list -m gpt-4-1106-preview --prefix HP --category PhenotypicFeature -l 50 -c ont_hp -I original_id -p stagedb -s -i tests/input/example-disease-text.txt

- text: '"odontogenic keratocysts"'
  start: null
  end: null
  concept_id: HP:0010603
  concept_label: Odontogenic keratocysts of the jaw
  is_suspect: false
- text: '"calcification of the falx cerebri"'
  start: null
  end: null
  concept_id: HP:0005462
  concept_label: Calcification of falx cerebri
  is_suspect: false
- text: '"bifid ribs"'
  start: null
  end: null
  concept_id: HP:0030280
  concept_label: Rib gap
  is_suspect: false

for the harder text it has lower recall but nothing is IMO outright wrong:

spans:
- text: radiological survey of the jaws
  start: null
  end: null
  concept_id: HP:0010603
  concept_label: Odontogenic keratocysts of the jaw
  is_suspect: false
- text: radiological survey of the skeleton
  start: null
  end: null
  concept_id: HP:0033127
  concept_label: Abnormality of the musculoskeletal system
  is_suspect: false

two pass

This does a first pass where it asks for all concepts found in the doc to be listed (no RAG; very vanilla ChatGPT usage). These are requested as human-readable terms, not IDs, to limit hallucination. It asks for the concepts to be inlined in square brackets.

Then a second pass is done on each concept, essentially grounding them. The grounding DOES use the concept_list/inline RAG method above - but in theory this should be more accurate and include the relevant concepts in the cutoff since we are just grounding rather than presenting the whole text.
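The two-pass flow can be sketched as follows (the helper names are hypothetical stand-ins for the LLM calls, not the real agent API in concept_recognition_agent.py):

```python
def two_pass_annotate(text, extract_mentions, ground_mention):
    """Two-pass concept recognition sketch.
    Pass 1: an LLM lists human-readable concept mentions (no RAG).
    Pass 2: each mention is grounded to an ID via RAG over the indexed ontology."""
    mentions = extract_mentions(text)      # pass 1: list mentions as plain terms
    spans = []
    for mention in mentions:               # pass 2: ground each mention separately
        concept_id = ground_mention(mention)
        if concept_id:                     # ungroundable mentions are dropped
            spans.append({"text": mention, "concept_id": concept_id})
    return spans
```

Because pass 2 only has to retrieve candidates for a short mention rather than the whole document, the right concept is much more likely to fall inside the retrieval cutoff.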

The grounding prompt is:

Your role is to assign a concept ID that best matches the supplied text, using
the supplied list of candidate concepts.
Return as a string "CONCEPT NAME // CONCEPT ID".
Only return a result if the input text represents the same or equivalent
concept, in the provided context.
If there is no match, return an empty string.

let's see how it does on the easier text:

curategpt annotate -M two_pass -m gpt-4-1106-preview --prefix HP --category PhenotypicFeature -l 5 -c ont_hp -I original_id -p stagedb -s -i tests/input/example-disease-text.txt

(we can use a lower value for -l as for grounding the top 5 is likely to include the right concept [untested])

annotated_text: A minimum diagnostic criterion is the combination of either the [skin
  tumours] or multiple [odontogenic keratocysts] plus a positive family history for
  this disorder, [bifid ribs], [lamellar calcification of the falx cerebri] or any
  one of the [skeletal abnormalities] typical of this syndrome
spans:
- text: skin tumours
  start: null
  end: null
  concept_id: HP:0008069
  concept_label: Neoplasm of the skin
  is_suspect: false
- text: odontogenic keratocysts
  start: null
  end: null
  concept_id: HP:0010603
  concept_label: Odontogenic keratocysts of the jaw
  is_suspect: false
- text: bifid ribs
  start: null
  end: null
  concept_id: HP:0000892
  concept_label: Bifid ribs
  is_suspect: false
- text: lamellar calcification of the falx cerebri
  start: null
  end: null
  concept_id: HP:0005462
  concept_label: Calcification of falx cerebri
  is_suspect: false
- text: skeletal abnormalities
  start: null
  end: null
  concept_id: HP:0011842
  concept_label: Abnormal skeletal morphology
  is_suspect: false

and the harder one:

annotated_text: A clinical concept has been produced, with a diagnostic check list
  including a [genetic] and a [dermatological] routine work up as well as a [radiological]
  survey of the [jaws] and [skeleton].
spans:
- text: jaws
  start: null
  end: null
  concept_id: HP:0012802
  concept_label: Broad jaw
  is_suspect: false
- text: skeleton
  start: null
  end: null
  concept_id: C0037253
  concept_label: null
  is_suspect: false

Support triple extraction use case

In discussion with RNA-KG group (Marco Mesiti, Elena Casiraghi, Emanuele Cavalleri) and @justaddcoffee -
we would like to be able to extract triples (s, p, o) from a provided text, using graph embeddings to guide the process.
The goal is to find additional content for RNA-KG. Using OntoGPT has worked well for this so far but does not take advantage of the existing relations within the KG.

This would involve:

  • Including interface (CLI and/or GUI) to use text document as input
  • Providing way to index KGX and/or derive a schema from it
  • Building wrapper for graph embeddings.
    • Using GRAPE directly through this project would be a heavy lift, so retrieving embeddings from an external source like Huggingface would likely work better, save time, and avoid introduction of many new dependencies
  • Writing documentation for the above

Integrating some process for comparison of the extracted triples would be ideal (e.g., A vs B appears in 20 documents, 15 of them from different sources, etc).

RNA-KG group has also suggested trying an alternative vector DB (https://www.llamaindex.ai/) to see if it works better for RAG with KG data.

use agent-smith-ai to wrap additional endpoints

Currently curategpt allows both static and dynamic wrappers.

  • Static: loaded in advance
  • Dynamic: requires the backend to support some kind of relevancy-backed search

This restricts us to ETL-able objects or things like pubmed.

If we integrate agent-smith
https://github.com/monarch-initiative/agent-smith-ai

then we can wrap a lot more knowledge sources

the basic workflow here would be:

  • user asks a general knowledge question
  • agent smith figures correct APIs, issues query
  • chatagent wraps results in a text blob with citations
  • user clicks curate and text is auto-structured according to schema
