
llama-lab's People

Contributors

hongyishi, jerryjliu, joedevon, logan-markewich


llama-lab's Issues

Expose as APIs on local/cloud using langchain-serve?

Integrate with langchain-serve

  • Exposes APIs from function definitions locally as well as on the cloud.
  • Very few code changes required; the development experience stays the same as local.
  • Supports both REST & WebSocket endpoints
  • Serverless/autoscaling endpoints with automatic TLS certs on the cloud
  • Real-time streaming, human-in-the-loop support
  • Local to cloud-ready in just one command - lc-serve deploy local/jcloud <app>

Disclaimer: I'm the primary author of langchain-serve.
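For reference, a minimal sketch of what wiring auto_llama behind langchain-serve could look like. The @serving decorator usage, the import path, and the Agent entry point below are assumptions for illustration, not code from this repo:

# Hedged sketch: assumes langchain-serve's `serving` decorator and a blocking
# single-call agent entry point; the names marked below are illustrative only.
from lcserve import serving  # assumption: decorator provided by langchain-serve


@serving
def run_objective(objective: str) -> str:
    # assumption: auto_llama exposes (or could expose) a one-shot entry point
    from auto_llama.agent import Agent  # hypothetical import path
    agent = Agent()                      # hypothetical default construction
    return agent.run(objective)          # hypothetical blocking helper

The function could then be deployed with lc-serve deploy local <app> or lc-serve deploy jcloud <app>, as listed above.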

Auto/BabyAGI Programmatic access

Any plans in the future for enabling "calling" AutoGPT/BabyAGI through an endpoint or a function, so that results come back in chunks that can be used programmatically, like doing a .query() call? A rough sketch of the idea is included below.
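One way this could be wrapped, assuming nothing about llama_agi's internals; run_agi and its two callable parameters are hypothetical and only illustrate yielding results step by step:

# Hedged sketch: a generator wrapper so results arrive incrementally instead of
# only being printed; execute_task and plan_next_tasks are caller-supplied here.
from typing import Callable, Iterator, List


def run_agi(
    objective: str,
    initial_task: str,
    execute_task: Callable[[str, str], str],
    plan_next_tasks: Callable[[str, str, str], List[str]],
    max_iterations: int = 5,
) -> Iterator[str]:
    tasks = [initial_task]
    for _ in range(max_iterations):
        if not tasks:
            break
        task = tasks.pop(0)
        result = execute_task(objective, task)
        yield result  # consumers get each result as it completes, like a .query() call
        tasks.extend(plan_next_tasks(objective, task, result))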

Error converting string to JSON object when using Spanish options (?)

Problem description

I'm using the following Spanish options for the script:

  • -it: "Crea los pasos para ingresar" (English: "Create the steps to apply")
  • -o: "Subsidio DS49" (English: "DS49 subsidy")

To debug the issue, I added these print lines before initial_task_list = json.loads(initial_task_list_str):

print("initial_task_list_str:", initial_task_list_str)
print("initial_task_list_str tipo:", type(initial_task_list_str))
print("initial_task_list_str longitud:", len(initial_task_list_str))

The output was:

[1. Revisar los requisitos para postular al subsidio del DS49,
2. Obtener los documentos necesarios para postular,
3. Completar la solicitud de subsidio del DS49,
4. Enviar la solicitud de subsidio del DS49,
5. Esperar la respuesta de la solicitud de subsidio del DS49.]
initial_task_list_str tipo: <class 'str'>
initial_task_list_str longitud: 275

It appears there is an issue with the JSON formatting in initial_task_list_str. The simple_execution_agent.execute_task function produces an output with incorrect formatting that causes the following error when attempting to convert it into a JSON object:

json.decoder.JSONDecodeError: Expecting ',' delimiter: line 2 column 3 (char 3)
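A possible workaround while the prompt is tuned for Spanish output: fall back to line-based parsing when json.loads fails. The helper below is illustrative, not part of llama_agi, and assumes the model returns a bracketed, numbered list like the one above:

# Hedged sketch: strict JSON first, then a line-based fallback for numbered lists.
import json
import re
from typing import List


def parse_task_list(raw: str) -> List[str]:
    try:
        return [str(t) for t in json.loads(raw)]
    except json.JSONDecodeError:
        tasks = []
        for line in raw.splitlines():
            line = line.strip(" \t[],")           # drop brackets and separators
            line = re.sub(r"^\d+\.\s*", "", line)  # drop the "1. " numbering
            line = line.rstrip(",.").strip()
            if line:
                tasks.append(line)
        return tasks

Applied to the output shown above, this yields the five task strings even though the string is not valid JSON.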

__init__() takes exactly 1 positional argument (2 given) in llama_agi/examples/auto_runner_example.py

Dear Community,
Can someone please suggest what the error is and how to solve it?
It would be awesome to connect this with LlamaHub context loaders.

Traceback (most recent call last):
File "/Users/mac-pro/llama-lab/llama_agi/examples/auto_runner_example.py", line 45, in <module>
task_manager = LlamaTaskManager(
^^^^^^^^^^^^^^^^^
File "/Users/mac-pro/llama-lab/llama_agi/llama_agi/task_manager/LlamaTaskManager.py", line 40, in __init__
super().__init__(
File "/Users/mac-pro/llama-lab/llama_agi/llama_agi/task_manager/base.py", line 41, in __init__
self.current_tasks = [Document(x) for x in tasks]
^^^^^^^^^^^
File "pydantic/main.py", line 332, in pydantic.main.BaseModel.__init__
def __init__(__pydantic_self__, **data: Any) -> None:
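One likely cause, stated as an assumption rather than a confirmed diagnosis: newer llama_index releases make Document a pydantic model whose constructor rejects positional arguments, so Document(x) in base.py fails while a keyword argument works. A minimal sketch of the difference:

# Hedged sketch: reproduces the failing pattern and the keyword-argument form.
from llama_index import Document

tasks = ["Create a task list", "Prioritize the task list"]

# Fails on pydantic-based Document versions with
# "__init__() takes exactly 1 positional argument (2 given)":
# current_tasks = [Document(t) for t in tasks]

# Passing the text explicitly avoids the positional-argument error:
current_tasks = [Document(text=t) for t in tasks]

Pinning llama-index to the version the examples were written against is the other obvious route.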

Fields missing

I exported my keys but am getting this:

Reasoning: I think that using a web scraper to extract the relevant information from the documentation and storing it in a structured format would be a good approach because it would allow you to easily search and query the documentation using your custom openAI application. Additionally, using an API client library would also be a good approach because it would provide a programmatic interface to the SP-API documentation, which would allow you to easily integrate the documentation into your custom openAI application.

Plan: My current plan of action is to provide you with more information on how to use a web scraper to extract the relevant information from the SP-API documentation and how to use an API client library to provide a programmatic interface to the documentation.

Command:
{
  "action": "search",
  "args": {
    "search_terms": "ways to make custom openAI application aware of SP-API documentation"
  }
}. Got: 4 validation errors for Response
remember
  field required (type=value_error.missing)
thoughts
  field required (type=value_error.missing)
reasoning
  field required (type=value_error.missing)
command
  field required (type=value_error.missing)
(base) maximiliandoelle@Maximilians-MBP auto_llama %
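For context, the parser validates the completion against a pydantic Response model with the four fields named in the errors, while the model's reply above only contains Reasoning, Plan, and Command sections, so validation fails regardless of the exported keys. A hedged reconstruction of the expected schema (the field names come from the error output; the types are assumptions):

# Hedged reconstruction: only the field names are taken from the validation errors.
from typing import Any, Dict

from pydantic import BaseModel


class Response(BaseModel):
    remember: str            # assumption: free-text note to carry forward
    thoughts: str            # assumption: the agent's current thoughts
    reasoning: str           # assumption: justification for the chosen action
    command: Dict[str, Any]  # assumption: {"action": ..., "args": {...}}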

Exceptions

I want you to write a summary of how I can store the SP-API documentation using LlamaIndex
Thinking...
Traceback (most recent call last):
File "/Users/maximiliandoelle/miniconda3/lib/python3.10/site-packages/langchain/output_parsers/pydantic.py", line 25, in parse
json_object = json.loads(json_str)
File "/Users/maximiliandoelle/miniconda3/lib/python3.10/json/init.py", line 346, in loads
return _default_decoder.decode(s)
File "/Users/maximiliandoelle/miniconda3/lib/python3.10/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/Users/maximiliandoelle/miniconda3/lib/python3.10/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/Users/maximiliandoelle/miniconda3/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/Users/maximiliandoelle/miniconda3/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/Users/maximiliandoelle/Projects/llama-lab/auto_llama/auto_llama/main.py", line 52, in
main()
File "/Users/maximiliandoelle/Projects/llama-lab/auto_llama/auto_llama/main.py", line 31, in main
response = agent.get_response()
File "/Users/maximiliandoelle/Projects/llama-lab/auto_llama/auto_llama/agent.py", line 54, in get_response
response_obj = parser.parse(output.content)
File "/Users/maximiliandoelle/miniconda3/lib/python3.10/site-packages/langchain/output_parsers/pydantic.py", line 31, in parse
raise OutputParserException(msg)
langchain.schema.OutputParserException: Failed to parse Response from completion To store the SP-API documentation, you can use the following steps:

  1. Search for the SP-API documentation on the web using the "search" command with the search terms "SP-API documentation".

  2. Download the contents of the web page using the "download" command with the URL of the documentation page and a name for the downloaded document.

  3. Query the downloaded document to extract the relevant information using the "query" command with the name of the downloaded document and a query string that specifies the information you want to extract.

  4. Write the extracted information to a file using the "write" command with a file name and the extracted data.

By following these steps, you can store the SP-API documentation in a file for future reference.
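One way to make this failure less fatal, sketched as a suggestion rather than a description of the current code: wrap the pydantic parser in langchain's OutputFixingParser so a malformed completion gets one LLM-driven repair attempt before the exception propagates. The import path for Response and the raw_completion variable are assumptions standing in for the repo's own definitions:

# Hedged sketch: retry parsing via OutputFixingParser when strict parsing fails.
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import OutputFixingParser, PydanticOutputParser
from langchain.schema import OutputParserException

from auto_llama.data_models import Response  # assumption: wherever Response lives

raw_completion = "..."  # stands in for output.content in agent.py

parser = PydanticOutputParser(pydantic_object=Response)
fixing_parser = OutputFixingParser.from_llm(parser=parser, llm=ChatOpenAI(temperature=0))

try:
    response_obj = parser.parse(raw_completion)
except OutputParserException:
    # Ask the model to reshape its own answer into the expected JSON schema.
    response_obj = fixing_parser.parse(raw_completion)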

Python error when running auto_llama

I followed the steps to install the requirements for auto_llama. When I tried to run it with:
python -m auto_llama

I always get the following Python error:

Traceback (most recent call last):
File "/Users/david/miniforge3/envs/llmstack/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/Users/david/miniforge3/envs/llmstack/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/Users/david/Project/llama-lab/auto_llama/auto_llama/main.py", line 52, in <module>
main()
File "/Users/david/Project/llama-lab/auto_llama/auto_llama/main.py", line 42, in main
action_results = run_command(user_query, action, args, openaichat)
File "/Users/david/Project/llama-lab/auto_llama/auto_llama/actions.py", line 26, in run_command
response = analyze_search_results(
File "/Users/david/Project/llama-lab/auto_llama/auto_llama/actions.py", line 96, in analyze_search_results
doc = Document(json.dumps(results))
File "pydantic/main.py", line 332, in pydantic.main.BaseModel.__init__
TypeError: __init__() takes exactly 1 positional argument (2 given)

I looked this up on Stack Overflow; someone saw the same problem with Pydantic:
Pydantic error

I tried asking different questions and got the same error. Not sure what is wrong.
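This looks like the same root cause as the __init__() issue above: a pydantic-based Document in newer llama_index that no longer accepts positional arguments. Under that assumption, the call in actions.py would change as sketched here:

# Hedged sketch: keyword argument instead of the failing positional call.
import json

from llama_index import Document

results = [{"title": "example result", "snippet": "..."}]  # placeholder data

# doc = Document(json.dumps(results))      # fails on pydantic-based Document versions
doc = Document(text=json.dumps(results))   # keyword form works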
