superduperdb / superduperdb

🔮 SuperDuperDB: Bring AI to your database! Build, deploy and manage any AI application directly with your existing data infrastructure, without moving your data. Including streaming inference, scalable model training and vector search.

Home Page: https://superduperdb.com

License: Apache License 2.0

Python 95.03% Shell 0.45% Dockerfile 0.55% JavaScript 1.36% CSS 2.60%
ai chatbot data database distributed-ml inference llm-inference llm-serving llmops ml mlops mongodb pretrained-models python pytorch rag semantic-search torch transformers vector-search

superduperdb's Introduction

Bring AI to your favorite database!

โญ SuperDuperDB is open-source: Leave a star to support the project! โญ


📣 On May 1st we will release v0.2. Find all major updates and fixes here in the Changelog!


What is SuperDuperDB? 🔮

SuperDuperDB is a Python framework for integrating AI models, APIs, and vector search engines directly with your existing databases, including hosting of your own models, streaming inference and scalable model training/fine-tuning.

Build, deploy and manage any AI application without complex pipelines, additional infrastructure, specialized vector databases, or data migration, by integrating AI at your data's source:

  • Generative AI, LLMs, RAG, vector search
  • Standard machine learning use-cases (classification, segmentation, regression, forecasting, recommendation, etc.)
  • Custom AI use-cases involving specialized models
  • Even the most complex applications/workflows in which different models work together

SuperDuperDB is not a database. Think db = superduper(db): SuperDuperDB transforms your databases into an intelligent platform that allows you to leverage the full AI and Python ecosystem. A single development and deployment environment for all your AI applications in one place, fully scalable and easy to manage.

Key Features:

  • Integration of AI with your existing data infrastructure: Integrate any AI models and APIs with your databases in a single scalable deployment, without the need for additional pre-processing steps, ETL or boilerplate code.
  • Inference via change-data-capture: Have your models compute outputs automatically and immediately as new data arrives, keeping your deployment always up-to-date.
  • Scalable Model Training: Train AI models on large, diverse datasets simply by querying your training data. Optimal performance is ensured via built-in computational optimizations.
  • Model Chaining: Easily set up complex workflows by connecting models and APIs to work together in an interdependent and sequential manner.
  • Simple Python Interface: Replace writing thousands of lines of glue code with simple Python commands, while being able to drill down to any layer of implementation detail, like the inner workings of your models or your training details.
  • Python-First: Bring and leverage any function, program, script or algorithm from the Python ecosystem to enhance your workflows and applications.
  • Difficult Data-Types: Work directly with images, video and audio in your database, and any type which can be encoded as bytes in Python.
  • Feature Storing: Turn your database into a centralized repository for storing and managing inputs and outputs of AI models of arbitrary data-types, making them available in a structured format and known environment.
  • Vector Search: No need to duplicate and migrate your data to additional specialized vector databases - turn your existing battle-tested database into a fully-fledged multi-modal vector-search database, including easy generation of vector embeddings and vector indexes of your data with preferred models and APIs.
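The inference-via-change-data-capture idea above can be sketched in miniature. All names here are hypothetical; SuperDuperDB's actual listener API differs, so consult the docs for the real interface:

```python
# Minimal sketch of inference via change-data-capture: a listener that
# computes a model output the moment a new record is inserted.
class ToyListener:
    """Runs a model on every new record as it arrives (illustrative only)."""

    def __init__(self, model):
        self.model = model
        self.outputs = {}

    def on_insert(self, record_id, record):
        # Compute the output immediately, keeping the store up-to-date.
        self.outputs[record_id] = self.model(record)


listener = ToyListener(model=lambda text: text.upper())
listener.on_insert(1, "new data arrives")
print(listener.outputs[1])  # NEW DATA ARRIVES
```

The point is only that outputs are computed eagerly at write time, rather than by a separate batch pipeline.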

Example use-cases and apps (notebooks)

The notebooks below are examples of how to make use of different frameworks, model providers, vector databases, retrieval techniques and so on.

To learn more about how to use SuperDuperDB with your database, please check our Docs and official Tutorials.

Also find use-cases and apps built by the community in the superduper-community-apps repository.

Name Link
Multimodal vector-search with a range of models and datatypes Open In Colab
RAG with self-hosted LLM Open In Colab
Fine-tune an LLM on your database Open In Colab
Featurization and transfer learning Open In Colab

Why opt for SuperDuperDB?

With SuperDuperDB Without
Data Management & Security Data stays in the database, with AI outputs stored alongside inputs and available to downstream applications. Data access and security are controlled externally via database access management. Data duplication and migration to different environments and specialized vector databases, imposing data management overhead.
Infrastructure A single environment to build, ship, and manage your AI applications, facilitating scalability and optimal compute efficiency. Complex fragmented infrastructure, with multiple pipelines, coming with high adoption and maintenance costs and increasing security risks.
Code Minimal learning curve due to a simple and declarative API, requiring simple Python commands. Hundreds of lines of code and settings in different environments and tools.

For more information about SuperDuperDB and why we believe it is much needed, read this blog post.

Supported Datastores (more coming soon):

Transform your existing database into a Python-only AI development and deployment stack with one command:

db = superduper('mongodb|postgres|mysql|sqlite|duckdb|snowflake://<your-db-uri>')

Supported AI Frameworks and Models (more coming soon):

Integrate, train and manage any AI model (whether open-source, commercial, or self-developed) directly with your datastore to automatically compute outputs with a single Python command:

Pre-Integrated AI APIs (more coming soon):

Integrate externally hosted models accessible via API to work together with your other models with a simple Python command:

Infrastructure Diagram

Installation

# Option 1. SuperDuperDB Library

Ideal for building new AI applications.

pip install superduperdb

# Option 2. SuperDuperDB Container

Ideal for learning basic SuperDuperDB functionalities and testing notebooks.

docker pull superduperdb/superduperdb
docker run -p 8888:8888 superduperdb/superduperdb

# Option 3. SuperDuperDB Testenv

Ideal for learning advanced SuperDuperDB functionalities and testing whole AI stacks.

make testenv_image
make testenv_init

Preview

Browse the re-usable snippets to understand how to accomplish difficult AI end-functionality with a few lines of code using SuperDuperDB.

Community & Getting Help

If you have any problems, questions, comments, or ideas:

Contributing

There are many ways to contribute, and they are not limited to writing code. We welcome all contributions such as:

Please see our Contributing Guide for details.

Contributors

Thanks goes to these wonderful people:

License

SuperDuperDB is open-source and intended to be a community effort, and it wouldn't be possible without your support and enthusiasm. It is distributed under the terms of the Apache 2.0 license. Any contribution made to this project will be subject to the same provisions.

Join Us

We are looking for nice people who are invested in the problem we are trying to solve to join us full-time. Find roles that we are trying to fill here!

superduperdb's People

Contributors

aarya626, aminalaee, anitaokoh, archit00sharma, ashishpatel26, blythed, chrono-liu, dalazx, danibene, duarteocarmo, eltociear, fazlulkarimweb, fnikolai, frozenmafia, guerra2fernando, jieguangzhou, joanfm, kartik4949, manmit124, muralidharand, nenb, rec, saahil, sohaib90, thejumpman2323, therohitdas, thgnw, worlpaker, yogaxpto, zhongjiajie


superduperdb's Issues

generalized API model

For example:

Hardcoded providers.

  • OpenAI
  • CohereAI
  • ...
db.create_model(
    provider='openai',
    route='/new_model',
    params=[''],
    output='JSON',
)

Support for pinecone.io as hashset

In superduperdb.vector_search.pinecone.hashes, create a class implementing search using Pinecone. In addition, add a flag to models to toggle on/off adding vectors to Pinecone during ingestion.

Support for deepspeed

Deepspeed is a library for performing training and inference using model parallelism, with low code overhead. Because the code modifications are minimal, this may provide a nice modular way of doing large scale training for SuperDuperDB. This is useful if we want to enable training using Dolly 2.0.

Dolly 2 integration/ benchmarking

Dolly 2.0 is a fully permissive model developed by Databricks, shipping with weights and training data. We should be able to support self-hosting of Dolly 2.0, at least initially for inference.

https://www.databricks.com/resources/webinar/build-your-own-large-language-model-dolly/thank-you?itm_data=dollyblog-intextcta-dollywebinar

https://huggingface.co/databricks/dolly-v2-12b

It's not clear whether there's extra work here, or whether this can be supported given the right hardware. We may need to support saving a model in another way.

Solution for lazy importing/ handling ImportErrors

In order to make this type of thing work:

pip install superduperdb[all]
pip install superduperdb[torch]
pip install superduperdb[sklearn,pillow]

We would probably need some kind of lazy import scheme.
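One possible shape for such a scheme, sketched with a hypothetical `optional_import` helper. The mapping from module name to pip extra in the error message is illustrative, not the project's actual convention:

```python
import importlib

def optional_import(name):
    """Return the module if installed, else a placeholder that raises a
    helpful ImportError only when the module is actually used."""
    try:
        return importlib.import_module(name)
    except ImportError:
        class _Missing:
            def __getattr__(self, attr):
                raise ImportError(
                    f"{name} is required for this feature; try "
                    f"`pip install superduperdb[{name}]` (illustrative extra name)"
                )
        return _Missing()


json = optional_import("json")           # stdlib, resolves normally
torch = optional_import("no_such_pkg")   # placeholder; errors only on use
```

Because the failure is deferred to attribute access, merely importing superduperdb never breaks when an optional dependency is absent.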

DuckDB connector

Depends on SuperDuperDB/roadmap#13. Create the minimal additional logic to enable DuckDB integration.

Refactor url -> uri

URI is a more accurate description of what superduperdb.utils.Downloader provides. Refactor naming,
and provide corresponding support for s3://.

cohereAI

Use CohereAI python API: https://cohere-sdk.readthedocs.io/en/latest/.

superduperdb.models.cohereai.wrapper.CohereAIBase.predict
superduperdb.models.cohereai.wrapper.CohereBase.predict_one

One configurable class for each type of CohereAI task - embeddings/ generation/ chat...
Support is available for asyncio. Use this for batch processing.

Depends on #68.
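The proposed asyncio batch processing might look roughly like this. `embed_one` stands in for a real CohereAI SDK call and is purely illustrative; only the fan-out pattern is the point:

```python
import asyncio

# Illustrative stand-in for an async CohereAI embedding call.
async def embed_one(text):
    await asyncio.sleep(0)  # placeholder for network latency
    return [float(len(text))]

async def embed_batch(texts):
    # Fan out all requests concurrently; gather preserves input order.
    return await asyncio.gather(*(embed_one(t) for t in texts))


vectors = asyncio.run(embed_batch(["hello", "hi"]))
print(vectors)  # [[5.0], [2.0]]
```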

msgpack serializer

Msgpack is faster than the pickle and dill serializers. Investigate, and incorporate it if this makes sense.

This will mean updating Database._create_serialized_file.
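A minimal sketch of a pluggable serializer registry that something like Database._create_serialized_file could consult. The msgpack entry is hypothetical and left commented out, since it would require `pip install msgpack`:

```python
import pickle

# Registry mapping a serializer name to (dumps, loads) callables.
SERIALIZERS = {
    "pickle": (pickle.dumps, pickle.loads),
    # "msgpack": (msgpack.packb, msgpack.unpackb),  # hypothetical, needs msgpack
}

def serialize(obj, method="pickle"):
    dumps, _ = SERIALIZERS[method]
    return dumps(obj)

def deserialize(data, method="pickle"):
    _, loads = SERIALIZERS[method]
    return loads(data)


payload = serialize({"weights": [1, 2, 3]})
print(deserialize(payload))  # {'weights': [1, 2, 3]}
```

Switching serializers then becomes a one-line registry change rather than edits scattered through the persistence code.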

Refactor imputation allowing string for target

If imputations are simple enough, it should be possible to avoid supplying the target function.

E.g.

collection.create_imputation('<identifier>', '<model-name>', '<key-name>', '<target-key-name>')

Support for ray serving

Support for a high-performance Ray server to serve searchers, questions, model passes etc.

sklearn like interface for models

pipeline.fit(X, y, query_params=None)                 # query_params are used to fetch data in keys X, y
pipeline.predict(X, query_params=None)                # query_params are used to fetch data in X
pipeline.predict(X, query_params=None, persist=True)  # persist forces writing the results to the database

# an example
pipeline.predict('img', query_params=('documents', {'brand': 'Adidas Originals'}))

Create generic training `create_learning_task`

Create support for users to supply their own class with a .train() method, compatible with the generic __init__ method. This should allow a greater range of learning tasks, such as GAN learning.
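A sketch of what a user-supplied task might look like under this proposal. All names here are hypothetical; the actual `create_learning_task` contract is still to be designed:

```python
# Hypothetical user-supplied learning task: any class exposing .train()
# plus a compatible __init__ would be accepted by create_learning_task.
class GANLearningTask:
    def __init__(self, identifier, **kwargs):
        self.identifier = identifier
        self.config = kwargs
        self.trained = False

    def train(self):
        # A real implementation would run the generator/discriminator loop here.
        self.trained = True
        return self


task = GANLearningTask("my-gan", epochs=10)
task.train()
print(task.trained)  # True
```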

Higher level concretizations of `create_learning_task`

In order to hold the hands of our users, somewhat, create specific instances of create_learning_task:

  • create_vector_search
  • create_classifier
  • create_clustering
  • create_autoregression

E.g. in the case of create_classifier, specify input/ target fields, and encoding model.


Update documentation

  • More example notebooks
    • Custom trainer
    • More types of tasks
    • Something with time-series data
    • ...
  • More explanation
    • Everything is a bit thin on the ground
    • ...

Add faiss

https://github.com/facebookresearch/faiss

It should be possible to toggle between vanilla table scan look up and efficient lookup with FAISS.
Open questions:

  • How to manage recomputing the index every time vs. saving and updating it
  • How to synchronize arbitrary measure supplied with limited options in FAISS
  • User/ warnings/ caveats
  • Documentation
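The proposed toggle between a vanilla table scan and FAISS could be sketched as follows. The FAISS branch is an assumption (it presumes `pip install faiss-cpu` and the `IndexFlatL2` API); only the table-scan path is exercised here:

```python
import math

def search(index_vectors, query, k=3, use_faiss=False):
    """Toggle between a vanilla table-scan lookup and FAISS."""
    if use_faiss:
        # Hypothetical efficient path; requires `pip install faiss-cpu`.
        import faiss
        import numpy as np
        xb = np.array(index_vectors, dtype="float32")
        index = faiss.IndexFlatL2(xb.shape[1])
        index.add(xb)
        _, ids = index.search(np.array([query], dtype="float32"), k)
        return ids[0].tolist()
    # Vanilla scan: exact L2 distance against every stored vector.
    dists = [(i, math.dist(vec, query)) for i, vec in enumerate(index_vectors)]
    return [i for i, _ in sorted(dists, key=lambda t: t[1])[:k]]


vecs = [[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]]
print(search(vecs, [0.9, 1.1], k=2))  # [1, 0]
```

Keeping both paths behind one function signature makes the "user warnings/caveats" question above a matter of documentation rather than API change.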

Model as API

Add a feature to create a model based on an external API call (e.g. the OpenAI API).
Should allow the user to add credentials etc. flexibly for multiple services.

Mechanism for doing piecemeal installs

There are currently many options to use with SuperDuperDB. Not all users will necessarily want to install all of these. Make it possible to install a base, and then power ups.

Here's an example:

https://github.com/lf1-io/padl-extensions/blob/main/setup.py

pip install superduperdb
pip install superduperdb[lightning]
pip install superduperdb[transformers]
pip install superduperdb[all]
etc...

There are points in the code where we try and infer the type of an object. This should be done in a lazy way with error handling, since otherwise we'll get errors if packages haven't been installed.
