
llm-vm's Introduction

Anarchy Logo

🤖 Anarchy Labs' LLM-VM 🤖

An Open-Source AGI Server for Open-Source LLMs

This is Anarchy's effort to build 🏗️ an open generalized artificial intelligence 🤖 through the LLM-VM: a way to give your LLMs superpowers 🦸 and superspeed 🚄.

You can find detailed instructions to try it live here: anarchy.ai

This project is in BETA. Expect continuous improvement and development.


📚 About 📚

💁 What is the Anarchy LLM-VM?

The Anarchy LLM-VM is a highly optimized and opinionated backend for running LLMs with all the modern features we've come to expect from completion APIs: tool usage, persistent stateful memory, live data augmentation, data and task fine-tuning, output templating, a web playground, API endpoints, student-teacher distillation, data synthesis, load-balancing and orchestration, and large context-window mimicry.

Formally, it is a virtual machine/interpreter for human language, coordinating between data, models (CPU), your prompts (code), and tools (IO).

By doing all these things in one spot in an opinionated way, the LLM-VM can properly optimize batch calls that would be exorbitantly expensive with distributed endpoints. It furthermore strives for both model and architecture agnosticism, properly optimizing the chosen model for the current architecture.

🤌 Why use the Anarchy LLM-VM?

In line with Anarchy's mission, the LLM-VM strives to support open-source models. By utilizing open-source models and running them locally you achieve several benefits:

  • Speed up your AGI development 🚀: With AnarchyAI, one interface is all you need to interact with the latest LLMs available.

  • Lower your costs 💸: Running models locally can reduce the pay-as-you-go costs of development and testing.

  • Flexibility 🧘‍♀️: Anarchy allows you to rapidly switch between popular models so you can pinpoint the exact right tool for your project.

  • Community Vibes 🫂: Join our active community of highly motivated developers and engineers working passionately to democratize AGI.

  • WYSIWYG 👀: Open source means nothing is hidden; we strive for transparency and efficiency so you can focus on building.

🎁 Features and Roadmap

  • Implicit Agents 🔧🕵️: The Anarchy LLM-VM can be set up to use external tools through our agents such as REBEL just by supplying tool descriptions!

  • Inference Optimization 🚄: The Anarchy LLM-VM is optimized from the agent level all the way to assembly on known LLM architectures to get the most bang for your buck. With state-of-the-art batching, sparse inference and quantization, distillation, and multi-level colocation, we aim to provide the fastest framework available.

  • Task Auto-Optimization 🚅: The Anarchy LLM-VM will analyze your use cases for repetitive tasks where it can activate student-teacher distillation to train a super-efficient small model from a larger more general model without losing accuracy. It can furthermore take advantage of data-synthesis techniques to improve results.

  • Library Callable 📚: We provide a library that can be used from any Python codebase directly.

  • HTTP Endpoints 🕸️: We provide an HTTP standalone server to handle completion requests.

  • Live Data Augmentation 📊: (ROADMAP) You will be able to provide a live-updating data set and the Anarchy LLM-VM will fine-tune your models or work with a vector DB to provide up-to-date information with citations.

  • Web Playground 🛝: (ROADMAP) You will be able to run the Anarchy LLM-VM and test its outputs from the browser.

  • Load-Balancing and Orchestration ⚖️: (ROADMAP) If you have multiple LLMs or providers you'd like to utilize, you will be able to hand them to the Anarchy LLM-VM, which will automatically figure out which to work with and when in order to optimize your uptime or your costs.

  • Output Templating 🤵: (ROADMAP) You can ensure that the LLM only outputs data in specific formats and fills in variables from a template with either regular expressions, LMQL, or OpenAI's template language.

  • Persistent Stateful Memory 📝: (ROADMAP) The Anarchy LLM-VM can remember a user's conversation history and react accordingly.

🚀 Quickstart 🚀

🥹 Requirements

Installation Requirements

Python >= 3.10 is supported. Older versions of Python are supported on a best-effort basis.

Run python3 --version to check which version you are on.

To upgrade your Python, either create a new Python environment with conda create -n myenv python=3.10 or go to https://www.python.org/downloads/ to download the latest version.

If you plan on running the setup steps below, a suitable Python version will be installed for you.

System Requirements

Different models have different system requirements. The limiting factor on most systems will likely be RAM, but many functions work with as little as 16 GB of RAM.

That said, always look up the details of the models you're using; they all have different sizes and different memory and compute requirements.

👨‍💻 Installation

The quickest way to get started is to run pip install llm-vm in your Python environment.

Another way to install the LLM-VM is to clone this repository and install it with pip like so:

> git clone https://github.com/anarchy-ai/LLM-VM.git
> cd LLM-VM
> ./setup.sh

The setup.sh script above only works on macOS and Linux.

Alternatively you could do this:

> git clone https://github.com/anarchy-ai/LLM-VM.git
> cd LLM-VM
> python -m venv <name>
> source <name>/bin/activate
> python -m pip install -e ."[dev]"

If you are on Windows, you can follow either of the two methods below.

Before doing any of the following steps, first open PowerShell as administrator and run:

> Set-ExecutionPolicy RemoteSigned

Press Y and Enter when prompted, then run exit.

Now, pick one of the two methods:

  1. Open PowerShell and run:
> git clone https://github.com/anarchy-ai/LLM-VM.git
> cd LLM-VM
> .\windows_setup.ps1

or

  2. Open PowerShell and run:
> winget install Python.Python.3.11
> python --version
> git clone https://github.com/anarchy-ai/LLM-VM.git
> cd LLM-VM
> python -m venv anarchyai
> anarchyai\Scripts\activate
> python -m pip install -e .

Note:

  1. For the above steps to work, you must be on Windows 10 version 1709 (build 16299) or a later build.
  2. Enable Developer Mode in Windows Settings (not compulsory, but it gives an added advantage).

One Last Step, almost there!

If you're using one of the OpenAI models, you will need to set the LLM_VM_OPENAI_API_KEY environment variable with your API key.
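For example, from Python before importing the client (exporting the variable in your shell works just as well):

# set the key for the current process; replace with your own OpenAI API key
import os
os.environ["LLM_VM_OPENAI_API_KEY"] = "sk-..."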

✅ Generating Completions

Our LLM-VM gets you working directly with popular LLMs locally in just 3 lines. Once you've installed it (as above), just load your model and start generating!

# import our client
from llm_vm.client import Client

# Select which LLM you want to use; here we use OpenAI's chat model
client = Client(big_model = 'chat_gpt')

# Put in your prompt and go!
response = client.complete(prompt = 'What is Anarchy?', context = '', openai_key = 'ENTER_YOUR_API_KEY')
print(response)
# Anarchy is a political ideology that advocates for the absence of government...

🏃‍♀ Running LLMs Locally

# import our client
from llm_vm.client import Client

# Select the LLaMA 2 model
client = Client(big_model = 'llama2')

# Put in your prompt and go!
response = client.complete(prompt = 'What is Anarchy?', context = '')
print(response)
# Anarchy is a political philosophy that advocates no government...

😎 Supported Models

Select from the following models

Supported_Models = ['chat_gpt','gpt','neo','llama2','bloom','opt','pythia']

☯ Picking Different Models

The LLM-VM's default model sizes for local models are intended to make experimentation with LLMs accessible to everyone, but if you have the memory required, larger-parameter models will perform far better!

For example, if you want to use a large and a small neo model for your teacher and student, and you have enough RAM:

# import our client
from llm_vm.client import Client

# Select a large neo model as the teacher and a small neo model as the student
client = Client(big_model = 'neo', big_model_config={'model_uri':'EleutherAI/gpt-neox-20b'}, 
                small_model ='neo', small_model_config={'model_uri':'EleutherAI/gpt-neox-125m'})

# Put in your prompt and go!
response = client.complete(prompt = 'What is Anarchy?', context = '')
print(response)
# Anarchy is a political philosophy that advocates no government...

Here are the details of some of the default models:

| Name  | Model URI               | Model params | Checkpoint file size |
|-------|-------------------------|--------------|----------------------|
| Neo   | EleutherAI/gpt-neo-1.3B | 1.3B         | 5.31 GB              |
| Bloom | bigscience/bloom-560m   | 560M         | 1.12 GB              |
| OPT   | facebook/opt-350m       | 350M         | 622 MB               |

For other choices of memory usage and parameter count in each model family, check out the tables in model_uri_tables.

🛠 Tool Usage

There are two agents: FLAT and REBEL.

Run an agent on its own by going into src/llm_vm/agents/<AGENT_FOLDER> and running the file titled agent.py.

Alternatively, to run a simple interface and choose an agent to run from the CLI, run the src/llm_vm/agents/agent_interface.py file and follow the command prompt instructions.

🩷 Contributing 🩷

We welcome contributors! The best way to get started is to join our active Discord community. Otherwise, here are some ways to contribute and get paid:

Jobs

  • We're always looking for serious hackers. Prove that you can build and creatively solve hard problems and reach out!
  • The easiest way to secure a job/internship with us is to submit pull requests that address or resolve open issues.
  • Then, you can apply directly here: https://forms.gle/bUWDKW3cwZ8n6qsU8

Bounty

We offer bounties for closing specific tickets! Look at the ticket labels to see how much the bounty is. To get started, join the Discord and read the guide.

🙏 Acknowledgements 🙏

License

MIT License


llm-vm's Issues

Add "model" as an optional parameter to generate/finetune function in onsite_llm.py

Right now in optimizer.py on line 238

completion = self.call_small(prompt = dynamic_prompt.strip(), model=model, **kwargs)

we call the small model (accessing the generate function) with the model parameter. Right now this parameter does not do anything; we need to be able to pass a .pt file to it and then call load_model (also defined in onsite_llm.py) to load that .pt file as the model to use. The same needs to be done in the finetune function.
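A minimal sketch of the intended behavior, assuming a Hugging Face-style model and tokenizer on the class; the exact onsite_llm.py signatures may differ:

import torch

def load_finetuned(self, checkpoint_path):
    # hypothetical helper: load a fine-tuned .pt checkpoint into the active model
    state_dict = torch.load(checkpoint_path, map_location="cpu")
    self.model.load_state_dict(state_dict)

def generate(self, prompt, model=None, **kwargs):
    # 'model' would be the path to a .pt file produced by an earlier fine-tune
    if model is not None:
        load_finetuned(self, model)
    inputs = self.tokenizer(prompt, return_tensors="pt")
    outputs = self.model.generate(**inputs, **kwargs)
    return self.tokenizer.decode(outputs[0], skip_special_tokens=True)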

Data synthesis

We should use the larger LLM to synthesize data for training the small LLM in the optimizing API.

Setup dotenv

By setting up python-dotenv, the environment variables can be loaded from a .env file in the repository root. I would like to work on this.
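A minimal sketch, assuming python-dotenv is added as a dependency and a .env file sits at the repository root:

# .env (at the repo root) would contain, e.g.:
#   LLM_VM_OPENAI_API_KEY=sk-...

import os
from dotenv import load_dotenv

load_dotenv()  # reads .env and populates os.environ
api_key = os.getenv("LLM_VM_OPENAI_API_KEY")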

Get HuggingFace Kwargs working on the optimizer.complete endpoint

All Hugging Face arguments need to work on the optimizer.complete endpoint as well. For the complete endpoint, this means that every argument of the Hugging Face .generate function must also be accepted by optimizer.complete. Currently this is not the case; for example, I was not able to pass the max_new_tokens parameter.
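For illustration, a minimal forwarding pattern that lets arbitrary Hugging Face generation arguments reach the underlying model; the model name and function here are stand-ins, not the actual optimizer internals:

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def hf_complete(prompt, **generate_kwargs):
    # whatever the caller passes (max_new_tokens, temperature, top_p, ...)
    # is handed straight to model.generate()
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, **generate_kwargs)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

print(hf_complete("What is Anarchy?", max_new_tokens=32))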

generalized constrained token inference interface

@itsmeashutosh43 and @VictorOdede, I would love your feedback and ideas on this.

Prompted by thinking about what we want out of regex/grammar-constrained LLM inference, we've realized that we should just embrace a more generic interface, of which those would be examples.

I'm very much not familiar with Python best practice, but here's my attempt at specifying it from a types-and-stuff perspective, slightly simplified.

"this is crudely our current api"
Class LLM(ABC):
   model : AbstractHF_Model
   tokenizer : Abstract_HF_Tokenizer
   genToken : List[Token] -> Map( Token_ID ,Log_Probability )

   generate_simple : ... bunch of stuff we pass to hf generate -> Async( String) 
"""generate_simple is our current generate """

   generate_with_Constraints : TokenConstraint -> ... same as above -> Async( String )

""" So lets talk about TokenConstraint!
"""
class TokenConstraint(ABC):

     constraint_type : type // the type of constraints we wanna have
     state_type : type //  this should be frozen/immutable, but i'll describe stuff in a way that maybe kinda works either way
     copy_state : 
     def is_valid (constraint : constraint_type ,  prefix : List of Tokens) ->  Bool : 

     def allowed_transitions(constraint: constraint_type, prefix : List of Token_id, tk : Tokenizer) -> Set(Token_id) : 

     def construct_state(constraint: constraint_type, prefix : List of Token_id, tk : Tokenizer) -> Union[None, state_type]
     // returns a state that *may* allow faster enumeration/checking of what tokens are allowed transitions next

     construct_crude_filter_set (constraint: constraint_type,tk : tokenizer,)-> Set [TokenID]
    // in eg a regex, this might be something like 
    // "the set of tokens that include only the character classes reference in the regexp"
   // it might be 

   // if we have anything complicated in constructing the current state, this should be *much* 
   // faster for filtering tokens 
   allowed_transitions_from_state (constraint : constraint_type, tk : tokenizer, st : state_type ) -> Set[token id]

    def copy_state(state : state_type)-> state_type 
   // if the state is immutable/frozen this should be the identity function 

Something like this could maybe be the generic interface for constrained inference? I'm glossing over a lot of details, like the fact that we're actually going to zero out the token indices that aren't in these sets in the log-probability vectors, all the mapping back and forth between token ids and strings, and the fact that we can do the token-constraint computation in parallel with running the gen_next_token inference step.

Also, in principle, things should be such that we can have instances of this interface that logically AND, OR, XOR, or NAND together any two constraint languages, I think?
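To make the "zeroing out" step concrete, here is a small self-contained sketch of masking a log-probability vector down to an allowed token set (plain numpy, not tied to any particular constraint language):

import numpy as np

def mask_logits(logits, allowed_token_ids):
    # disallowed tokens get -inf so they can never be sampled
    masked = np.full_like(logits, -np.inf)
    idx = list(allowed_token_ids)
    masked[idx] = logits[idx]
    return masked

vocab_logits = np.random.randn(50_000)   # pretend next-token logits
allowed = {17, 42, 1337}                 # e.g. from allowed_transitions(...)
next_id = int(np.argmax(mask_logits(vocab_logits, allowed)))
assert next_id in allowed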

Improve documentation

We need documentation for the entire project, for each sub-folder and part, and for each function in the code itself, plus a standard, programmatic way to display documentation.

System requirements unclear

The system requirements list RAM, but I'd expect an LLM client to be using VRAM. Can it use either? How do I configure the client to use VRAM if I have a GPU, for example?
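For reference, the usual Hugging Face pattern for putting a model's weights into GPU memory (VRAM) looks like the sketch below; whether and how the LLM-VM client exposes this is a separate question:

import torch
from transformers import AutoModelForCausalLM

device = "cuda" if torch.cuda.is_available() else "cpu"  # use VRAM when a GPU exists
model = AutoModelForCausalLM.from_pretrained("gpt2").to(device)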

Add support for other LLMs

List of open-source LLMs:

| Name | Release date[a] | Developer | Number of parameters[b] | Corpus size | License[c] | Notes |
|---|---|---|---|---|---|---|
| BERT | 2018 | Google | 340 million[42] | 3.3 billion words[42] | Apache 2.0[43] | An early and influential language model,[2] but encoder-only and thus not built to be prompted or generative[44] |
| XLNet | 2019 | Google | ~340 million[45] | 33 billion words | | An alternative to BERT; designed as encoder-only[46][47] |
| GPT-2 | 2019 | OpenAI | 1.5 billion[48] | 40GB[49] (~10 billion tokens)[50] | MIT[51] | General-purpose model based on transformer architecture |
| GPT-3 | 2020 | OpenAI | 175 billion[25] | 300 billion tokens[50] | Public web API | A fine-tuned variant of GPT-3, termed GPT-3.5, was made available to the public through a web interface called ChatGPT in 2022.[52] |
| GPT-Neo | March 2021 | EleutherAI | 2.7 billion[53] | 825 GiB[54] | MIT[55] | The first of a series of free GPT-3 alternatives released by EleutherAI. GPT-Neo outperformed an equivalent-size GPT-3 model on some benchmarks, but was significantly worse than the largest GPT-3.[55] |
| GPT-J | June 2021 | EleutherAI | 6 billion[56] | 825 GiB[54] | Apache 2.0 | GPT-3-style language model |
| Megatron-Turing NLG | October 2021[57] | Microsoft and Nvidia | 530 billion[58] | 338.6 billion tokens[58] | Restricted web access | Standard architecture but trained on a supercomputing cluster. |
| Ernie 3.0 Titan | December 2021 | Baidu | 260 billion[59] | 4 Tb | Proprietary | Chinese-language LLM. Ernie Bot is based on this model. |
| Claude[60] | December 2021 | Anthropic | 52 billion[61] | 400 billion tokens[61] | Closed beta | Fine-tuned for desirable behavior in conversations.[62] |
| GLaM (Generalist Language Model) | December 2021 | Google | 1.2 trillion[63] | 1.6 trillion tokens[63] | Proprietary | Sparse mixture-of-experts model, making it more expensive to train but cheaper to run inference compared to GPT-3. |
| Gopher | December 2021 | DeepMind | 280 billion[64] | 300 billion tokens[65] | Proprietary | |
| LaMDA (Language Models for Dialog Applications) | January 2022 | Google | 137 billion[66] | 1.56T words,[66] 168 billion tokens[65] | Proprietary | Specialized for response generation in conversations. |
| GPT-NeoX | February 2022 | EleutherAI | 20 billion[67] | 825 GiB[54] | Apache 2.0 | Based on the Megatron architecture |
| Chinchilla | March 2022 | DeepMind | 70 billion[68] | 1.4 trillion tokens[68][65] | Proprietary | Reduced-parameter model trained on more data. Used in the Sparrow bot. |
| PaLM (Pathways Language Model) | April 2022 | Google | 540 billion[69] | 768 billion tokens[68] | Proprietary | Aimed to reach the practical limits of model scale |
| OPT (Open Pretrained Transformer) | May 2022 | Meta | 175 billion[70] | 180 billion tokens[71] | Non-commercial research[d] | GPT-3 architecture with some adaptations from Megatron |
| YaLM 100B | June 2022 | Yandex | 100 billion[72] | 1.7TB[72] | Apache 2.0 | English-Russian model based on Microsoft's Megatron-LM. |
| Minerva | June 2022 | Google | 540 billion[73] | 38.5B tokens from webpages filtered for mathematical content and from papers submitted to the arXiv preprint server[73] | Proprietary | LLM trained for solving "mathematical and scientific questions using step-by-step reasoning".[74] Minerva is based on the PaLM model, further trained on mathematical and scientific data. |
| BLOOM | July 2022 | Large collaboration led by Hugging Face | 175 billion[75] | 350 billion tokens (1.6TB)[76] | Responsible AI | Essentially GPT-3 but trained on a multi-lingual corpus (30% English excluding programming languages) |
| Galactica | November 2022 | Meta | 120 billion | 106 billion tokens[77] | CC-BY-NC-4.0 | Trained on scientific text and modalities. |
| AlexaTM (Teacher Models) | November 2022 | Amazon | 20 billion[78] | 1.3 trillion[79] | Public web API[80] | Bidirectional sequence-to-sequence architecture |
| LLaMA (Large Language Model Meta AI) | February 2023 | Meta | 65 billion[81] | 1.4 trillion[81] | Non-commercial research[e] | Trained on a large 20-language corpus to aim for better performance with fewer parameters.[81] Researchers from Stanford University trained a fine-tuned model based on LLaMA weights, called Alpaca.[82] |
| GPT-4 | March 2023 | OpenAI | Exact number unknown, approximately 1 trillion[f] | Unknown | Public web API | Available for ChatGPT Plus users and used in several products. |
| Cerebras-GPT | March 2023 | Cerebras | 13 billion[84] | | Apache 2.0 | Trained with Chinchilla formula. |
| Falcon | March 2023 | Technology Innovation Institute | 40 billion[85] | 1 trillion tokens (1TB)[85] | Apache 2.0[86] | The model is claimed to use only 75% of GPT-3's training compute, 40% of Chinchilla's, and 80% of PaLM-62B's. |
| BloombergGPT | March 2023 | Bloomberg L.P. | 50 billion | 363 billion token dataset based on Bloomberg's data sources, plus 345 billion tokens from general purpose datasets[87] | Proprietary | LLM trained on financial data from proprietary sources, that "outperforms existing models on financial tasks by significant margins without sacrificing performance on general LLM benchmarks" |
| PanGu-Σ | March 2023 | Huawei | 1.085 trillion | 329 billion tokens[88] | Proprietary | |
| OpenAssistant[89] | March 2023 | LAION | 17 billion | 1.5 trillion tokens | Apache 2.0 | Trained on crowdsourced open data |
| PaLM 2 (Pathways Language Model 2) | May 2023 | Google | 340 billion[90] | 3.6 trillion tokens[90] | Proprietary | Used in Bard chatbot.[91] |

identify more robust output format for data synthesis

The first version uses JSON, which can often be malformed, and there's no good error recovery in that case. We need to identify and switch to a more "error-tolerant", self-aligning format (meaning we can skip a bad pair and still recover useful outputs).
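One illustrative direction (an assumption, not a decision): emit one JSON object per line (JSONL), so a single malformed pair can be skipped without losing the rest:

import json

def parse_pairs(raw_output: str):
    # one {"prompt": ..., "response": ...} object per line; bad lines are skipped
    pairs = []
    for line in raw_output.splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            obj = json.loads(line)
            pairs.append((obj["prompt"], obj["response"]))
        except (json.JSONDecodeError, KeyError, TypeError):
            continue  # recover: drop the bad pair, keep going
    return pairs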

Add LoRA, QLoRA fine-tuning for HF models

The current onsite LLM class uses full-parameter fine-tuning, which is costly. LoRA fine-tuning would require less memory and prevent overfitting by freezing the pretrained weights.
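A minimal sketch of what this could look like with the Hugging Face peft library (peft is not currently a project dependency, and the base model here is just a placeholder):

from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                # low-rank adapter dimension
    lora_alpha=16,
    lora_dropout=0.05,
)

model = get_peft_model(base, lora_config)   # pretrained weights stay frozen
model.print_trainable_parameters()          # only the adapter weights are trained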

June July tasks

Tasks for end of June and through July 2023

LLM-VM tasks

Error handling

  • Clean up in-progress fine-tunings when Ctrl+C aborts the local version of LLM-VM #23

  • Retry data synthesis when given malformed JSON results #23

  • Maybe later, consider a more robust encoding than JSON, where mismatched brackets from unquoted responses cause problems #29

Agents

  • Get REBEL agent onto main branch LLM-VM/main #2
  • Get Backward Chaining agent onto main branch LLM-VM/main #2
  • Get all agents to use the optimizer endpoint, as opposed to the openai api #19
  • Get Flat agent on LLM-VM/main #2
  • Agent test suite #20

Documentation

  • Write documentation for LLM-VM on docusaurus. #22

Data Synthesis #3

  • Add all the parameters
  • choose a more robust encoding for question answer response sets #29
  • K-shot support
  • Prompt variation aka support alternative prompts, possibly with different defaults for each supported model
  • Track number of actual responses vs requested
  • Dedup handling
      • First exact repeat parameters
      • then semantic vector comparisons?
  • Parameterize the diverse ways you might want to define exact matches or vector comparisons

Parameter Management

  • Documenting all the parameters and making sure they’re sane #25

Instrumentation / Statistics

  • Keep track of detailed information about inference, data synthesis, response sizes, latency, quality #26

Inference Determinism (a longer-term question, not an immediate priority)

  • Which LLM flavors surface an explicit RNG seed as an optional input, or as something that can be part of the response metadata, for reproducibility? #27

misc

  • Maybe also something about managing/tracking fine tunings you’ve generated #28

Correct fine-tuning for GPT models in onsite_llm.py

The GPT3 and Chat_GPT model classes do not provide a way to store and load a fine-tuned model into the completion pipeline. Adjust class attributes and methods so that a fine-tuned model with c_ids can be accessed in the OpenAI call.
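A rough sketch of the idea using the legacy openai-python completion API; the class, attribute names, and example fine-tune id are hypothetical:

import openai

class GPT3Wrapper:
    default_model = "davinci"

    def __init__(self):
        self.finetuned_model_id = None  # e.g. "davinci:ft-..." returned by a fine-tune job

    def load_finetune(self, model_id):
        # remember the fine-tuned model id so later completions use it
        self.finetuned_model_id = model_id

    def generate(self, prompt, max_tokens=100):
        model = self.finetuned_model_id or self.default_model
        resp = openai.Completion.create(model=model, prompt=prompt, max_tokens=max_tokens)
        return resp["choices"][0]["text"]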

Data_synthesis fix

Currently data_synthesis.py asks GPT to generate all the datapoints in one call. This leads to many issues, with many incorrect datapoints being made and not enough datapoints being made. I fixed this in the experimentation_finetuning branch, and that approach to data synthesis needs to be brought to main and made robust (my current implementation is for research, not deployment).

One thing that still needs to be done is semantic-similarity checking, the way the current data_synthesis.py does it. My implementation in experimentation_finetuning also does not support lists as inputs, and this needs to be added as well.

Add linting and auto-formatting

Add linting and auto-formatting to keep code clean and improve efficiency

  • Add linting and auto-formatting to requirements.txt (ex. pylint, black)
  • Add pre-commit to run linting and auto-formatting
  • Fix errors raised by linter

issues finding settings.toml on a fresh install

 File "quickstart_finetune.py", line 5, in <module>
    from llm_vm.config import settings
  File "C:\Users\Abhigya Sodani\Anaconda3\lib\site-packages\llm_vm\config.py", line 55, in <module>
    if settings.big_model not in MODELS_AVAILABLE:
  File "C:\Users\Abhigya Sodani\Anaconda3\lib\site-packages\dynaconf\base.py", line 144, in __getattr__
    value = getattr(self._wrapped, name)
  File "C:\Users\Abhigya Sodani\Anaconda3\lib\site-packages\dynaconf\base.py", line 309, in __getattribute__
    return super().__getattribute__(name)
AttributeError: 'Settings' object has no attribute 'BIG_MODEL'

We need to replicate and fix these issues with dynaconf. I have this issue when I clone the repo and run pip run . .

Prompt Template for Code Synthesis

Separate into "prompt template" (a GPT-3 recognized word) and variables. Then ask the gpt-3 to change specific parameters for the template.

Prompt template: What is the [variable] of the [object]?

  - variable-object pairs = (currency, Myanmar), (price, Bitcoin)

Now, for generation we might specify variations for either the prompt templates or the variable/object pairs, based on the following parameters (not a comprehensive list):

   i. Style: Keep the meaning of the question the same but ask it in different styles,
               e.g. introduce spelling mistakes, code mixing, cultural tone differences, etc.

   ii. Semantic diversity: The task itself should change.

Add the above NLP metrics in code and set thresholds; keep generating until the thresholds are satisfied.

Reference:

  1. https://arxiv.org/pdf/2005.04118.pdf (behavioral testing of NLP models)
  2. https://arxiv.org/abs/2111.09963 (behavioral testing of a recommender system)

Essentially, we will be framing the task as an NLP task or a recommendation task with the behavioral metrics satisfied.

e.g. keep generating similar sentences with cosine_similarity 0.5 and diversity 0.4 until "n" sentences are reached or "x" attempts are exhausted.
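A sketch of such a loop, assuming sentence-transformers for embeddings and a generate_variant() callable backed by whichever LLM you use; both are assumptions, not existing project code:

from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")

def collect_variants(seed_prompt, generate_variant, n=10, max_attempts=50,
                     min_similarity=0.5, max_similarity=0.9):
    # keep variants that stay on-topic (similarity >= min) but are not
    # near-duplicates (similarity <= max), until n collected or attempts exhausted
    seed_vec = embedder.encode(seed_prompt, convert_to_tensor=True)
    kept = []
    for _ in range(max_attempts):
        if len(kept) >= n:
            break
        candidate = generate_variant(seed_prompt)
        cand_vec = embedder.encode(candidate, convert_to_tensor=True)
        sim = util.cos_sim(seed_vec, cand_vec).item()
        if min_similarity <= sim <= max_similarity:
            kept.append(candidate)
    return kept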

improve finetune usage

quickstart_finetune.py currently demonstrates a successful fine-tuning of a local model.
While finetune is set to true in each completion call (lines 8, 14, 20), only the third call (line 20) results in fine-tuning and saving of the local model.

It is currently not obvious why we need two prior calls to completion before fine-tuning successfully starts.
If this is because we need prior examples, this should be detailed in the documentation and reflected in the feedback from the call itself in some way.

Self referencing

Hello, I downloaded the project (no installation) as I wanted to edit it for the bounties.

As soon as I tried to compile it, it gave me an error like this:
Code: from llm_vm.utils.keys import *
Error: No module named 'llm_vm'

If I had installed it, there's no doubt it would've worked. However, when doing imports from within the same project, I think the usual way to navigate through folders is using dots, like:
from ...utils.tools import *

Is there another way to go about this?
