
gptscript's Introduction

GPTScript

Demo

GPTScript is a framework that allows Large Language Models (LLMs) to operate and interact with various systems. These systems can range from local executables to complex applications with OpenAPI schemas, SDK libraries, or any RAG-based solutions. GPTScript is designed to easily integrate any system, whether local or remote, with your LLM using just a few lines of prompts.

Here are some sample use cases of GPTScript:

  1. Chat with a local CLI - Try it!
  2. Chat with an OpenAPI compliant endpoint - Try it!
  3. Chat with local files and directories - Try it!
  4. Run an automated workflow - Try it!

Getting started

MacOS and Linux (Homebrew):

brew install gptscript-ai/tap/gptscript 
gptscript github.com/gptscript-ai/llm-basics-demo

MacOS and Linux (install.sh):

curl https://get.gptscript.ai/install.sh | sh

Windows:

winget install gptscript-ai.gptscript
gptscript github.com/gptscript-ai/llm-basics-demo

A few notes:

  • You'll need an OpenAI API key
  • On Windows, after installing gptscript you may need to restart your terminal for the changes to take effect
  • The above script is a simple chat-based assistant. You can ask it questions and it will answer to the best of its ability.
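
For reference, a GPTScript file is plain text: a natural-language prompt, optionally preceded by a tools: line, with additional tool definitions separated by ---. A minimal sketch (the tool name and prompt here are invented; the syntax follows the examples quoted elsewhere on this page, where args are passed to shell tool bodies as uppercase environment variables):

```
tools: say-hello

Greet the user by name.

---
name: say-hello
description: Prints a greeting
args: name: the name of the person to greet

#!/bin/bash

echo "Hello, ${NAME}!"
```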

Community

Join us on Discord.

License

Copyright (c) 2024 Acorn Labs, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

gptscript's People

Contributors

cjellick, cloudnautique, darkthread, djcarpe, drpebcak, eltociear, g-linville, ibuildthecloud, iwilltry42, kaihendry, keyallis, lucj, njhale, nw0rn, renovate[bot], rinor, saiyam1814, sangee2004, sebgoa, sheng-liang, sirredbeard, strongmonkey, studystill, techmaharaj, thedadams, tylerslaton, vincent99, will-chan, yatish27


gptscript's Issues

Story-book example sometimes fails to create `pages` directory resulting in the illustration and story not being saved.

gptscript version - v0.0.0-dev-4909c513-dirty

Steps to reproduce the problem:

  1. Clone https://github.com/gptscript-ai/gptscript/
  2. Edit examples/story-book/story-book.gpt so that the story-illustrator tool points to the repo: tools: github.com/gptscript-ai/image-generation
  3. Run gptscript story-book.gpt from the examples/story-book/ dir.
  4. The script fails to create the pages directory, so the illustrations and story are not saved.
14:04:36 download [https://oaidalleapiprodscus.blob.core.windows.net/private/org-UZ21HIS1j93qUiZVWBOGePHu/user-nfhF7kaasrYgX69n1He52KF9/img-YHduqSS5E4aBA7XG1MONy6pa.png?rscd=inline&rsct=image%2Fpng&se=2024-03-07T23%3A03%3A45Z&sig=8H6%2BTztnOeEAZb4EKrlqq9%2B4w2%2BDQKhj4bVQ02QEHMw%3D&ske=2024-03-08T18%3A17%3A24Z&skoid=6aaadede-4fb3-4698-a8f6-684d7786b067&sks=b&skt=2024-03-07T18%3A17%3A24Z&sktid=a48cca56-e6da-484e-a814-9c849652bcb3&skv=2021-08-06&sp=r&sr=b&st=2024-03-07T21%3A03%3A45Z&sv=2021-08-06] to [pages/3.png]
14:04:36 download [https://oaidalleapiprodscus.blob.core.windows.net/private/org-UZ21HIS1j93qUiZVWBOGePHu/user-nfhF7kaasrYgX69n1He52KF9/img-3RybU4zqbUn8AMsSLXIfvRrP.png?rscd=inline&rsct=image%2Fpng&se=2024-03-07T23%3A03%3A46Z&sig=FVjxvA8WBTsU32xDefBy5iNcKetmRjZAaeWaxiOeR1A%3D&ske=2024-03-08T18%3A16%3A49Z&skoid=6aaadede-4fb3-4698-a8f6-684d7786b067&sks=b&skt=2024-03-07T18%3A16%3A49Z&sktid=a48cca56-e6da-484e-a814-9c849652bcb3&skv=2021-08-06&sp=r&sr=b&st=2024-03-07T21%3A03%3A46Z&sv=2021-08-06] to [pages/2.png]
14:04:36 download [https://oaidalleapiprodscus.blob.core.windows.net/private/org-UZ21HIS1j93qUiZVWBOGePHu/user-nfhF7kaasrYgX69n1He52KF9/img-3pjK1Q8Obt2WD2T2QrHv4gb1.png?rscd=inline&rsct=image%2Fpng&se=2024-03-07T23%3A03%3A48Z&sig=FQRrKWjgXHXZGlDINVAVLbjVff1WRg5lJCOKNcdUZFw%3D&ske=2024-03-08T18%3A15%3A09Z&skoid=6aaadede-4fb3-4698-a8f6-684d7786b067&sks=b&skt=2024-03-07T18%3A15%3A09Z&sktid=a48cca56-e6da-484e-a814-9c849652bcb3&skv=2021-08-06&sp=r&sr=b&st=2024-03-07T21%3A03%3A48Z&sv=2021-08-06] to [pages/1.png]
2024/03/07 14:04:42 failed to create [pages/3.png]: open pages/3.png: no such file or directory
ls -ltr
total 32
-rw-r--r--  1 sangeethahariharan  staff  2186 Mar  6 13:38 README.md
-rw-r--r--  1 sangeethahariharan  staff  2405 Mar  7 10:16 story-book.gpt
-rw-r--r--@ 1 sangeethahariharan  staff  4307 Mar  7 10:36 index.html

WinGet package not being updated

This change to the WinGet submission CD pipeline introduced a 'v' to the version numbering in the WinGet package that is submitted.

Even though new versions are being submitted (with the v prefix), WinGet is not updating past 0.1.1; see this comment from a WinGet repo maintainer.

The recommended solution is to pull 0.1.1 from WinGet; no change should be needed in the WinGet submission CD pipeline, and we will just continue with the v versioning syntax.

Builtin support for appending to a file (similar to write file).

Builtin support for appending to a file would be helpful when there is a need to write to the same file within a loop.

I tried the following gptscript to get a file containing the help text for all acorn commands.
Using sys.write in this case resulted in the output file containing help for only one acorn command, which is expected, since sys.write overwrites the file on each call.

tools: listacorncommands, acornhelpandwrite

List all acorn commands. For each acorn command in the list, get its help text and write the help text to a file acornhelp.txt


---
name: acornhelpandwrite
description: get help text for acorn commands and write to file
tools: sys.write, acornhelp
args: command: acorn command
args: file: file name

get help text for acorn commands and write to a file 

---
name: acornhelp
description: get help text for acorn commands
args: command: acorn command

#!/bin/bash

acorn help ${command}

---
name: listacorncommands
description: List all acorn commands

#!/bin/bash

acorn help

It would be helpful to have sys.append in this case.
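
In the meantime, a shell tool body can sidestep the limitation by using the shell's own append redirection; a minimal sketch (the command list and file name are made up for illustration):

```shell
#!/bin/bash
# Collect help text for several commands into one file.
# '>>' appends on each iteration; '>' would overwrite the file every time,
# which is the behavior the issue describes with sys.write.
out=acornhelp.txt
rm -f "$out"
for cmd in build run rm; do            # hypothetical list of subcommands
  echo "help text for $cmd" >> "$out"  # stand-in for: acorn help "$cmd"
done
```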

Not able to execute the example.gpt script in the Readme successfully.

gptscript version - v0.0.0-dev-07bf5bdb-dirty

I see the following two errors when running the example.gpt script in the Readme:
unzip: cannot find or open chinook.zip, chinook.zip.zip or chinook.zip.ZIP. or
fork/exec /bin/sh: not a directory

 gptscript --cache=false exampleorig.gpt
15:44:54 started  [main]
15:44:54 sent     [main]
         content  [1] content | Waiting for model response...
         content  [1] content | tool call sys-download-6a406a5a -> {"url":"https://www.sqlitetutorial.net/wp-content/uploads/2018/03/chinook.zip"}
15:44:57 started  [sys.download(2)] [input={"url":"https://www.sqlitetutorial.net/wp-content/uploads/2018/03/chinook.zip"}]
15:44:57 sent     [sys.download(2)]
15:44:57 download [https://www.sqlitetutorial.net/wp-content/uploads/2018/03/chinook.zip] to [/var/folders/xt/zg5q3qtj07q84xvf0_80pfdh0000gn/T/gpt-download690060198]
15:44:57 ended    [sys.download(2)]
15:44:57 continue [main]
15:44:57 sent     [main]
         content  [1] content | Waiting for model response...
         content  [1] content | tool call sys-exec-2aa383da -> {"command":"unzip chinook.zip","directory":"/var/folders/xt/zg5q3qtj07q84xvf0_80pfdh0000gn/T"}
15:45:00 started  [sys.exec(3)] [input={"command":"unzip chinook.zip","directory":"/var/folders/xt/zg5q3qtj07q84xvf0_80pfdh0000gn/T"}]
15:45:00 sent     [sys.exec(3)]
unzip:  cannot find or open chinook.zip, chinook.zip.zip or chinook.zip.ZIP.
2024/03/08 15:45:00 exit status 9
 % gptscript --cache=false exampleorig.gpt
15:45:04 started  [main]
15:45:04 sent     [main]
         content  [1] content | Waiting for model response...
         content  [1] content | tool call sys-download-6a406a5a -> {"url":"https://www.sqlitetutorial.net/wp-content/uploads/2018/03/chinook.zip"}
15:45:06 started  [sys.download(2)] [input={"url":"https://www.sqlitetutorial.net/wp-content/uploads/2018/03/chinook.zip"}]
15:45:06 sent     [sys.download(2)]
15:45:06 download [https://www.sqlitetutorial.net/wp-content/uploads/2018/03/chinook.zip] to [/var/folders/xt/zg5q3qtj07q84xvf0_80pfdh0000gn/T/gpt-download179667051]
15:45:06 ended    [sys.download(2)]
15:45:06 continue [main]
15:45:06 sent     [main]
         content  [1] content | Waiting for model response...
         content  [1] content | tool call sys-exec-2aa383da -> {"command":"unzip chinook.zip","directory":"/var/folders/xt/zg5q3qtj07q84xvf0_80pfdh0000gn/T/gpt-download179667051"}
15:45:09 started  [sys.exec(3)] [input={"command":"unzip chinook.zip","directory":"/var/folders/xt/zg5q3qtj07q84xvf0_80pfdh0000gn/T/gpt-download179667051"}]
15:45:09 sent     [sys.exec(3)]
2024/03/08 15:45:09 fork/exec /bin/sh: not a directory

premeditated hate crime speech/weaponized tech

#25

it's my intent to make record and or label that this content shall be evidence in hate crime speech, as the targeted subject in language. Therefore, if the deployment of said data imposes harm or death, the organized content and premeditated programming shall be held liable for Civil and possible future legislations.

Cannot connect to Azure OpenAI deployment

I am unable to connect gptscript to Azure's implementation of OpenAI.

I have a gpt 3.5 turbo deployment on Azure OpenAI named, coincidentally enough, gpt-35-turbo.

It is reachable with:

curl https://<endpoint>.openai.azure.com/openai/deployments/gpt-35-turbo/completions?api-version=2023-05-15 -H "Content-Type: application/json"   -H "api-key: <key>"   -d "{
  \"prompt\": \"Once upon a time\",
  \"max_tokens\": 5
}"

and returns, for example:

{"id":"cmpl-8uobBPlV6d0UDqVFPwQdKEkZJaGgl","object":"text_completion","created":1708551561,"model":"gpt-35-turbo","choices":[{"text":" on a small island,","index":0,"finish_reason":"length","logprobs":null}],"usage":{"prompt_tokens":4,"completion_tokens":5,"total_tokens":9}}

With gptscript, I am running:

.\main.exe "https://get.gptscript.ai/echo.gpt" --input 'Hello, World!' --openai-api-key '<key>' --openai-api-type 'AZURE' --openai-base-url 'https://<endpoint>.openai.azure.com/openai/deployments/gpt-35-turbo/completions' --openai-api-version '2023-05-15'

and getting:

16:45:22 started  [main] [input=--input Hello, World! --openai-api-key <key> --openai-api-type AZURE --openai-base-url https://<endpoint>.openai.azure.com/openai/deployments/gpt-35-turbo/completions --openai-api-version 2023-05-15]
16:45:22 sent     [main]
         content  [1] content | Waiting for model response...2024/02/21 16:45:22 error, status code: 404, message: Resource not found

I can reproduce the same on Linux:

export OPENAI_API_KEY="<key>"
export OPENAI_API_TYPE="AZURE"
export OPENAI_BASE_URL="https://<endpoint>.openai.azure.com/openai/deployments/gpt-35-turbo/completions"
export OPENAI_API_VERSION="2023-05-15"
gptscript "https://get.gptscript.ai/echo.gpt" --input 'Hello, World!' --debug

Gets:

16:51:33 started  [main] [input=--input Hello, World! --debug]
16:51:33 sent     [main]
         content  [1] content | Waiting for model response...2024/02/21 16:51:33 error, status code: 404, message: Resource not found

Support a local Ollama client

The Ollama project is an agnostic runtime for LLMs that runs models in containers (https://github.com/ollama/ollama); it can be run locally or within a cloud environment.

I'd love to use gptscript, but I have no incentive to use OpenAI's GPT-3.5 or the more expensive GPT-4 (while it's pretty cheap with light use, over time this could easily add up to hundreds of dollars) when I have my own hardware and GPUs that I could run with codellama, llama 2, Mistral, or any of the other models they support.


Proposal:

  1. Refactor the pkg/openai package to be an agnostic type interface in pkg/client:

type Client struct {
	url   string
	key   string
	c     *openai.Client
	cache *cache.Client
}

  2. Implement an Ollama client that sets the client.url to some env var provided by the user (in the local environment use case, this would likely be localhost)

I'd be happy to try and take a stab at this one since I have experience building local ollama integrations with neovim: https://github.com/jpmcb/nvim-llama

Validation of output and retry

Hi - First, thanks for all the work on gptscript. Amazing stuff!

I'd like to see the ability to have a validation phase, and based on the output of the validation phase, retry using the output of the validation to improve the output.

Use case:

  • As someone working in Devops
  • I want to have gptscript create a configuration file for me
  • and I want to validate that what the LLM produces is a valid config, e.g. sometool --validate created-config.yaml (i.e. have a "validate" tool that runs that CLI command)
  • and if that validation fails then gptscript should retry to create the requested config based on the error output provided by the validation tool, potentially trying several times

Just a thought. Thanks again!
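
The requested behavior can be sketched as a retry loop; generate_config and validate_config below are hypothetical stand-ins for the LLM generation step and the sometool --validate call:

```shell
#!/bin/sh
# Retry generation until the validator passes or an attempt limit is reached.
attempt=0
max_attempts=5

# Stand-in for the LLM producing a config (here it just varies with attempt).
generate_config() { echo "replicas: $attempt" > created-config.yaml; }
# Stand-in for 'sometool --validate created-config.yaml'; pretend only the
# config produced on attempt 2 is valid.
validate_config() { grep -q "replicas: 2" created-config.yaml; }

until generate_config && validate_config; do
  attempt=$((attempt + 1))
  if [ "$attempt" -ge "$max_attempts" ]; then
    echo "validation still failing after $max_attempts attempts" >&2
    exit 1
  fi
done
echo "config validated after $attempt retries"
```

In the real feature, the validator's error output would be fed back to the LLM on each retry rather than discarded.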

Make default model an env var to play nice with Azure

I'm trying to run this on Azure, and in Azure the model name is not actually the model name itself, but a user-defined model deployment name, which will be user specific. Given that the model will be used for all calls, it would help to make the default model name an env var, like the other Azure AI variables. I know I can set the model name for each tool, and tested that with echo, but it is cumbersome to do that for every example and tool I run. Also, I tried the hackernews example, and though I set the model name for each tool, it still tried to make a call with the default model name.

Implement first-class approach to testing GPTScripts

I recently wrote out a sample of what an integration suite could look like, entirely written in GPTScript. While we may not move forward with that specific approach, I'd like to propose a first-class way to do this, not just for our examples.

My proposal here is a gptscript test <file>.gpt command that calls a built-in tool along the lines of sys.test. The process then becomes calling sys.test on a structured .test.gpt file that defines how to test the file and what test cases there are.

Open to ideas around how this UX could be improved but I think that the approach of defining a standardized testing approach would be super beneficial.

Support comments

I don't actually care right now about being able to comment my scripts (though I probably will at some point), but what I do care about is being able to quickly and easily "disable" a bad piece of the script. Like:

Do thing A, then B, then C.

# Do things X, Y, and Z, but I know it is impossible for the script to do them yet. But I don't want to delete this sentence because I know I'll need it eventually.

Add built-in key-value store tools to persist data locally between `gptscript` invocations

Description

Add built-in tools that can store and retrieve arbitrary key-value pairs to/from disk on my local machine.

Motivation

If I want a tool to be able to reference info from a previous run of gptscript, the tool needs to persist that data somewhere. Today, I believe the onus of how and where to store and retrieve that data is placed on the tool author; e.g. I write another custom tool that writes to a specific file in some arbitrary format. Having a first class way to store key-value pairs would (probably) help me avoid rewriting such a tool every time.
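
As a sketch of what such built-in tools could do under the hood, here is a flat key=value file store (the file name and helper names are invented for illustration):

```shell
#!/bin/sh
# Minimal file-backed key-value store: one "key=value" line per entry.
KV_FILE="${KV_FILE:-gptscript-kv.txt}"

kv_set() {
  # Drop any existing entry for the key, then append the new value.
  grep -v "^$1=" "$KV_FILE" 2>/dev/null > "$KV_FILE.tmp" || true
  echo "$1=$2" >> "$KV_FILE.tmp"
  mv "$KV_FILE.tmp" "$KV_FILE"
}

kv_get() { grep "^$1=" "$KV_FILE" | cut -d= -f2-; }

kv_set last_run 2024-03-07
kv_set last_run 2024-03-08   # later runs overwrite the previous value
kv_get last_run              # prints 2024-03-08
```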

Secret handling

I'd like to see robust secret handling incorporated into gptscript. This secret handling would take two forms:

  1. A way to handle secrets for gptscripts. Environment variables are OK for local use but, even then, could be taken advantage of by a rogue script. A script would need to request and be given permission to use a secret.
  2. Secret values shouldn't be sent to OpenAI.

debug message caused program to panic on any gptscript

I tried both the released version and a build from main. Regardless of which example script I ran, it always panics at the same place:

slog.Debug("stream", "content", response.Choices[0].Delta.Content)

How to reproduce

$ gptscript -v
gptscript version v0.1.4-701b9c8e-dirty

$ gptscript examples/echo.gpt --cache false --input 'Hello World'

16:59:33 started  [main] [input=--cache false --input Hello World]
16:59:33 sent     [main]
         content  [1] content | Waiting for model response...         content  [1] content | --cache false --input Hello Worldpanic: runtime error: index out of range [0] with length 0

goroutine 1 [running]:
github.com/gptscript-ai/gptscript/pkg/openai.(*Client).call(_, {_, _}, {{0x105800100, 0x13}, {0x140003901e0, 0x3, 0x4}, 0x0, 0x322bcc77, ...}, ...)
        github.com/gptscript-ai/gptscript/pkg/openai/client.go:425 +0x7cc
github.com/gptscript-ai/gptscript/pkg/openai.(*Client).Call(0x140000ca040, {0x1058a27a0, 0x1400003a460}, {{0x105800100, 0x13}, 0x0, {0x0, 0x0, 0x0}, {0x140000cc600, ...}, ...}, ...)
        github.com/gptscript-ai/gptscript/pkg/openai/client.go:295 +0x4ec
github.com/gptscript-ai/gptscript/pkg/llm.(*Registry).Call(0x104ffc024?, {0x1058a27a0, 0x1400003a460}, {{0x105800100, 0x13}, 0x0, {0x0, 0x0, 0x0}, {0x140000cc600, ...}, ...}, ...)
        github.com/gptscript-ai/gptscript/pkg/llm/registry.go:53 +0x94
github.com/gptscript-ai/gptscript/pkg/engine.(*Engine).complete(0x140000b8f60, {0x1058a27a0, 0x1400003a460}, 0x140000e6630)
        github.com/gptscript-ai/gptscript/pkg/engine/engine.go:218 +0x1f8
github.com/gptscript-ai/gptscript/pkg/engine.(*Engine).Start(_, {{0x105b64088, 0x1}, {0x1058a27a0, 0x1400003a460}, 0x0, 0x140000b8e10, {{{0x0, 0x0}, {0x140000be0cd, ...}, ...}, ...}}, ...)
        github.com/gptscript-ai/gptscript/pkg/engine/engine.go:189 +0x98c
github.com/gptscript-ai/gptscript/pkg/runner.(*Runner).call(_, {{0x105b64088, 0x1}, {0x1058a27a0, 0x1400003a460}, 0x0, 0x140000b8e10, {{{0x0, 0x0}, {0x140000be0cd, ...}, ...}, ...}}, ...)
        github.com/gptscript-ai/gptscript/pkg/runner/runner.go:104 +0x234
github.com/gptscript-ai/gptscript/pkg/runner.(*Runner).Run(0x140000ae2c0, {0x1058a27a0, 0x1400003a460}, {{0x16ae1b4aa, 0x11}, {0x140000ac138, 0x13}, 0x140000b8270, 0x0}, {0x14000398008, ...}, ...)
        github.com/gptscript-ai/gptscript/pkg/runner/runner.go:60 +0x1f0
github.com/gptscript-ai/gptscript/pkg/cli.(*GPTScript).Run(0x14000186100, 0x1400018af08, {0x140000d6000, 0x5, 0x5})
        github.com/gptscript-ai/gptscript/pkg/cli/gptscript.go:238 +0x848
github.com/acorn-io/cmd.Command.bind.func4(0x1400018af08, {0x140000d6000, 0x5, 0x5})
        github.com/acorn-io/[email protected]/builder.go:477 +0x188
github.com/spf13/cobra.(*Command).execute(0x1400018af08, {0x1400001e1f0, 0x5, 0x5})
        github.com/spf13/[email protected]/command.go:983 +0x840
github.com/spf13/cobra.(*Command).ExecuteC(0x1400018af08)
        github.com/spf13/[email protected]/command.go:1115 +0x344
github.com/spf13/cobra.(*Command).Execute(...)
        github.com/spf13/[email protected]/command.go:1039
github.com/spf13/cobra.(*Command).ExecuteContext(...)
        github.com/spf13/[email protected]/command.go:1032
github.com/acorn-io/cmd.Main(0x1400018af08)
        github.com/acorn-io/[email protected]/builder.go:74 +0x54
main.main()
        github.com/gptscript-ai/gptscript/main.go:9 +0x44

Then, if you apply this patch:

diff --git a/pkg/openai/client.go b/pkg/openai/client.go
index 1590f29..1cff199 100644
--- a/pkg/openai/client.go
+++ b/pkg/openai/client.go
@@ -422,7 +422,6 @@ func (c *Client) call(ctx context.Context, request openai.ChatCompletionRequest,
                } else if err != nil {
                        return nil, err
                }
-               slog.Debug("stream", "content", response.Choices[0].Delta.Content)
                if partial != nil {
                        partialMessage = appendMessage(partialMessage, response)
                        partial <- types.CompletionStatus{

Run the fresh binary:

$ go build ./
$ ./gptscript examples/echo.gpt --cache false --input 'Hello World'

17:03:44 started  [main] [input=--cache false --input Hello World]
17:03:44 sent     [main]
         content  [1] content | Waiting for model response...
         content  [1] content | --cache false --input Hello World
17:03:45 ended    [main]

INPUT:

--cache false --input Hello World

OUTPUT:

--cache false --input Hello World

Notes

On a cache hit (the same input is used) it will not panic even without the patch. I suppose it returns the cached response right away without hitting the LLM. Of course, without the patch, you won't be able to cache the response in the first place.

Be more lenient when parsing for "---" to allow for extra "-"

Currently we get the following error message when we have extra "-" in the tools delimiter "---"

2024/02/12 11:35:36 failed resolving <tool name> at <gptscript name>: can not load tools path=. name=<tool name>

It would be good to be more lenient when parsing for "---" to allow for extra "-".

support ? on user created tools

Similar to the ? suffix on sys.* tools, which allows the LLM to retry based on errors, users should be able to do the same with their own tools. Right now, it errors out because it can't exactly match the tool name.

tools: mytool?

looks for a tool named mytool? not mytool.

Panic!

My script:

tools: graph-producer

Produce a simple graph.

---
name: graph-producer
description: Produces a simple line graph and saves it as an image to /Users/cjellick/graphs

#!/Users/cjellick/PycharmProjects/pythonProject/.venv/bin/python /Users/cjellick/PycharmProjects/pythonProject/main.py

It is functioning properly. The graph is getting created. I don't know why it's panicking.

$ gptscript --cache=false ./examples/craig/dau.gpt
10:02:42 started  [main]
10:02:42 sent     [main]
Sent content:

Produce a simple graph.
Waiting for model response...
tool call graph-producer -> {}
10:02:43 started  [graph-producer(2)] [input={}]
10:02:43 sent     [graph-producer(2)]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x2 addr=0x0 pc=0x104efdaec]

goroutine 37 [running]:
github.com/gptscript-ai/gptscript/pkg/engine.(*Engine).runCommand(0x140001001e0, {0x1050dfd00, 0x14000100190}, {{0x140001201f8, 0x18}, {0x140001201ce, 0xe}, {0x1400010212d, 0x4f}, 0x0, ...}, ...)
	/Users/cjellick/projects/gptscript/pkg/engine/cmd.go:68 +0x6bc
github.com/gptscript-ai/gptscript/pkg/engine.(*Engine).Start(_, {{0x1400029e100, 0x1d}, {0x1050dfd00, 0x14000100190}, 0x1400023a1e0, 0x14000118150, {{0x140001201f8, 0x18}, {0x140001201ce, ...}, ...}}, ...)
	/Users/cjellick/projects/gptscript/pkg/engine/engine.go:158 +0x3c4
github.com/gptscript-ai/gptscript/pkg/runner.(*Runner).call(_, {{0x1400029e100, 0x1d}, {0x1050dfd00, 0x14000100190}, 0x1400023a1e0, 0x14000118150, {{0x140001201f8, 0x18}, {0x140001201ce, ...}, ...}}, ...)
	/Users/cjellick/projects/gptscript/pkg/runner/runner.go:124 +0x204
github.com/gptscript-ai/gptscript/pkg/runner.(*Runner).subCalls.func1()
	/Users/cjellick/projects/gptscript/pkg/runner/runner.go:212 +0x36c
golang.org/x/sync/errgroup.(*Group).Go.func1()
	/Users/cjellick/go/pkg/mod/golang.org/x/[email protected]/errgroup/errgroup.go:78 +0x58
created by golang.org/x/sync/errgroup.(*Group).Go in goroutine 1
	/Users/cjellick/go/pkg/mod/golang.org/x/[email protected]/errgroup/errgroup.go:75 +0x98

My dumb python script for context

from datetime import datetime

import matplotlib.pyplot as plt


def main():
    # Sample data
    x = [1, 2, 3, 4, 5]
    y = [2, 3, 5, 7, 11]

    # Plotting the data
    plt.plot(x, y)

    # Adding labels and title
    plt.xlabel('X-axis')
    plt.ylabel('Y-axis')
    plt.title('Simple Graph')

    timestamp = datetime.now().strftime("%Y%m%d%H%M%S")
    filename = f'/Users/cjellick/graphs/simple_graph-{timestamp}.png'
    plt.savefig(filename)  # Save as PNG image

    # Displaying the graph
    # plt.show()
    # plt.close()


if __name__ == '__main__':
    main()

Getting a panic error when attempting to use the runner as a library

I am not sure if this is intentional but I've tried to use gptscript as a library to run my scripts and I always get the following error:

panic: runtime error: slice bounds out of range [:-1]

goroutine 1 [running]:
github.com/gptscript-ai/gptscript/pkg/mvl.Package()
	/Users/hectorfernandez/go/pkg/mod/github.com/gptscript-ai/[email protected]/pkg/mvl/log.go:51 +0x108
github.com/gptscript-ai/gptscript/pkg/builtin.init()
	/Users/hectorfernandez/go/pkg/mod/github.com/gptscript-ai/[email protected]/pkg/builtin/log.go:5 +0x24
FAIL	github.com/chainguard-dev/infra-images/wolfi-bump/pkg/gptscript/gptscript	0.287s

Confusing naming

The project name, GPTScript, suggests that it's a scripting language that doesn't have anything in common with GPT, which is, however, not true as it actually uses a GPT model. Please change the project name to something like LlamaScript in order to adhere to the existing naming tradition like JavaScript, AssemblyScript and RustScript do.

Add "audit mode" to manually approve tool calls before they execute

It would be great if there was a command line argument that would cause gptscript to execute in an interactive mode, where the user is prompted to manually approve or reject each tool call before it is executed. This would make me more confident when testing gptscripts with access to very permissive tools (like sys.exec).
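
A rough sketch of the approval gate in shell (the function name and prompt text are invented; the real feature would live inside gptscript's runner):

```shell
#!/bin/sh
# Show each pending tool call and execute it only on explicit approval.
confirm_and_run() {
  printf 'About to run: %s\nApprove? [y/N] ' "$*" >&2
  read -r answer
  case "$answer" in
    y|Y) "$@" ;;
    *)   echo 'call rejected' >&2; return 1 ;;
  esac
}

# Example: approve an echo call by piping in "y".
printf 'y\n' | confirm_and_run echo hello
```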

Support syntax highlighting for `.gpt` files

Description

Provide syntax highlighting for .gpt files in at least one editor.

Motivation

Humans like colorful text.

Possible Solutions

It looks like a few popular editors like GoLand and VSCode support TextMate Grammar based syntax highlighting, so if it's possible to produce one that approximates .gpt file syntax => QED.

Bonus points for getting gptscript to generate the grammar by analyzing examples and introspecting its own source code.

model not found: gpt-4-turbo-preview

Very new to AI and gptscript. I have signed up for the OpenAI API and set the API key, and I get the error "model not found: gpt-4-turbo-preview" when I try to run it.

Somehow the default model is getting called. What I am not able to figure out is whether a newer model has to be set in GPTScript config somewhere? Or do I need to change the default model in my openai API account?

I see that this default is set in gptscript/pkg/openai/client.go line 49:
DefaultModel string `usage:"Default LLM model to use" default:"gpt-4-turbo-preview"`

I went through the gptscript documentation but didn't find any place to change the model. How do I change this?

Add test CI

We'll want to have an integration and unit test suite to validate quality between changes. Minimally we should run tests to make sure that the examples in our repo work as we expect.

Add colors to terminal output

Right now, everything just prints in the default color. I think it would be nice to have function calls show up in one color, output from the LLM in another, etc., as this would make it easier to visually scan the output of a single gptscript invocation.

Improve error message when `OPENAI_API_KEY` env variable is not set.

Executing any gptscript without having OPENAI_API_KEY env set results in the following error message:

Sent content:
<content>

Waiting for model response...
2024/02/09 15:35:25 error, status code: 401, message: You didn't provide an API key. You need to provide your API key in an Authorization header using Bearer auth (i.e. Authorization: Bearer YOUR_KEY), or as the password field (with blank username) if you're accessing the API from your browser and are prompted for a username and password. You can obtain an API key from https://platform.openai.com/account/api-keys.

This error message can be improved to instruct the user to set the OPENAI_API_KEY env variable.
Also, this error message could be presented to the user even before making an OpenAI call.

Add more info to the system prompt so tools process user-provided prompts in a more orderly way.

When I need the tool to process a set of steps, it is not enough to state the steps in order, as in this example:

Tools:  github.com/gptscript-ai/vision,  github.com/gptscript-ai/image-generation , sys.write, sys.read, sys.download

1. Generate an image of a cow grazing in the open green meadows and return only the url of the image.
2. Download the url of the image generated in step 1 and write to a file cow.png.
3. Describe the image at cow.png in the style of a Wes Anderson character.

When I run the above script, I always end up getting the following error:

10:59:00 sent     [main]
10:59:00 started  [sys.read(2)] [input={"filename": "cow.png"}]
10:59:00 sent     [sys.read(2)]
10:59:00 started  [https://raw.githubusercontent.com/gptscript-ai/image-generation/b9d9ed60c25da7c0e01d504a7219d1c6e460fe80/tool.gpt(3)] [input={"prompt": "a cow grazing in the open green meadows", "quality": "hd"}]
10:59:00 sent     [https://raw.githubusercontent.com/gptscript-ai/image-generation/b9d9ed60c25da7c0e01d504a7219d1c6e460fe80/tool.gpt(3)]
10:59:00 failed to run tool [] cmd [/usr/bin/env python3 /Users/sangeethahariharan/Library/Caches/gptscript/repos/b9d9ed60c25da7c0e01d504a7219d1c6e460fe80/python3.12/cli.py --prompt=a cow grazing in the open green meadows --size= --quality=hd --number=]: context canceled
2024/03/07 10:59:00 open cow.png: no such file or directory

When I additionally include the line Follow all steps in order and move to the next step only after the first step is done completely. in the prompt, it works as expected.

Tools:  github.com/gptscript-ai/vision,  github.com/gptscript-ai/image-generation , sys.write, sys.read, sys.download

Follow all steps in order and move to the next step only after the first step is done completely.

1. Generate an image of a cow grazing in the open green meadows and return only the url of the image.
2. Download the url of the image generated in step 1 and write to a file cow.png.
3. Describe the image at cow.png in the style of a Wes Anderson character.

Should we consider adding something along the lines of Follow all steps in order and move to the next step only after the first step is done completely. to our existing system prompt?

Change script syntax to improve readability using yaml

I think we can improve the readability of scripts by using YAML syntax.

This would also make the parsing process simpler.

Current

tools: myfunction
What's the myfunction of 3

----
name: myfunction
tools: sub, mul
description: A function taking an integer as argument and returns an integer
args: number: An integer
tools: myfunction

Do the following in strict order:
1. If ${number} is 0 skip the remaining steps and return 1
2. Calculate the myfunction of (${number} - 1)
3. Return ${number} multiply the result of the previous step

---
name: sub
description: Subtract two numbers
args: left: a number
args: right: a number

#!/bin/bash

echo $((${LEFT} - ${RIGHT}))

---
name: mul
description: Multiply two numbers
args: left: a number
args: right: a number

#!/bin/bash

echo $((${LEFT} * ${RIGHT}))

New format

tools: 
  - myfunction
body: >
  What's the myfunction of 3

----
name: myfunction
tools: 
  - sub
  - mul
description: A function taking an integer as argument and returns an integer
args: 
  - number: An integer
tools: 
  - myfunction
body: >
  Do the following in strict order:
  1. If ${number} is 0 skip the remaining steps and return 1
  2. Calculate the myfunction of (${number} - 1)
  3. Return ${number} multiply the result of the previous step

---
name: sub
description: Subtract two numbers
args: 
  - left: a number
  - right: a number
body: >
  #!/bin/bash
  
  echo $((${left} - ${right}))

---
name: mul
description: Multiply two numbers
args: 
  - left: a number
  - right: a number
body: >
  #!/bin/bash
  
  echo $((${left} * ${right}))

async/await

Since some of these processes can take a while, it would be nice to have some type of async/await.

The nicest version of this I can imagine is automatic: the function starts in the background, and gptscript is smart enough to know when it needs the output of the async step and awaits it.

The other version is allowing a user to define async functions and await them.
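The automatic version described above behaves like a future/promise: each tool call starts in the background as soon as it is issued, and the runner blocks only at the point where a later step actually consumes the output. A minimal Python sketch of that pattern (illustrative only; gptscript itself is written in Go, and `slow_tool` is a made-up stand-in):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def slow_tool(prompt: str) -> str:
    """Stand-in for a long-running tool call (e.g. image generation)."""
    time.sleep(0.1)
    return f"result for: {prompt}"

executor = ThreadPoolExecutor()

# Start the tool in the background the moment it is referenced.
future = executor.submit(slow_tool, "generate an image")

# ...other independent steps could run here without waiting...

# Block only where the output is actually needed -- the implicit "await".
result = future.result()
print(result)
```

The user-facing version of this would instead expose explicit `async`/`await` keywords in the script syntax, at the cost of more for the script author to learn.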

Not able to use absolute paths for tools.

Steps to reproduce the problem:

  1. Cloned https://github.com/gptscript-ai/vision to my home directory.
  2. Tried to execute the following script:
tools: /Users/sangeethahariharan/vision/tool.gpt

Describe the picture in file /Users/sangeethahariharan/Downloads/sunrise.jpeg for me

Following error is seen:

gptscript testvision.gpt
2024/02/22 14:57:10 failed resolving /users/sangeethahariharan/vision/tool.gpt at ./testvision.gpt: can not load tools path=. name=/users/sangeethahariharan/vision/tool.gpt
sangeethahariharan@Sangeethas-MacBook-Pro examples % ls /users/sangeethahariharan/vision/tool.gpt
/users/sangeethahariharan/vision/tool.gpt

This issue is not seen when a relative path is used.
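The fix presumably needs the loader to use absolute paths verbatim instead of resolving them against the calling script's directory. A hypothetical Python sketch of the intended resolution logic (the real loader is Go; function and parameter names here are made up for illustration):

```python
import os

def resolve_tool_ref(ref: str, script_dir: str) -> str:
    """Resolve a tool reference from a gptscript.

    Absolute paths are used as-is; relative references are resolved
    against the directory of the calling script.
    """
    if os.path.isabs(ref):
        return ref
    return os.path.normpath(os.path.join(script_dir, ref))

# Absolute reference: returned unchanged.
print(resolve_tool_ref("/Users/sangeethahariharan/vision/tool.gpt", "/tmp/examples"))
# Relative reference: resolved against the script's directory.
print(resolve_tool_ref("../vision/tool.gpt", "/home/user/examples"))
```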

Provide a way to define and get help/description for gptscripts.

Currently, I need to look at the script itself (usually the first prompt) to understand what it does and what inputs it accepts.

It would be helpful to provide a way to get an overall description of what the tool does and what input can be provided, with a command like gptscript <script-name> --help
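A `--help` command could surface metadata that the script format already carries in its `description` and `args` headers. For example (hypothetical script, for illustration only):

```
name: summarize
description: Summarizes a file in a given style
args: file: Path to the file to summarize
args: style: Writing style to use for the summary

Summarize ${file} in the style of ${style}.
```

Running a hypothetical `gptscript summarize.gpt --help` could then print the description and the list of args without ever calling the LLM.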

Not able to run some example scripts that use file paths relative to the `gptscript` directory from the `examples` directory.

Steps to reproduce the problem:

  1. Launch the debug UI using the steps from the Quickstart section in the README:
git clone https://github.com/gptscript-ai/gptscript
cd gptscript/examples

# Run the debugging UI
gptscript --server

Using the GPTScript UI, we are not able to run some scripts, such as samples-readme.gpt, syntax-from-code.gpt, and describe-code.gpt, because these scripts use paths relative to the gptscript directory, like pkg/parser/parser.go. The working directory for the UI is the examples directory, but the scripts assume gptscript as the working directory.

examples % gptscript syntax-from-code.gpt                                           
11:36:45 started  [main]
11:36:45 sent     [main]
11:36:45 started  [sys.read(2)] [input={"filename":"pkg/parser/parser.go"}]
11:36:45 sent     [sys.read(2)]
2024/02/29 11:36:45 open pkg/parser/parser.go: no such file or directory

The same script will work fine when invoked from gptscript dir:
gptscript % gptscript --cache=false examples/syntax-from-code.gpt

Should the example scripts be modified to use paths relative to the directory where the scripts reside (instead of relative to gptscript)? Or we could instruct users to launch gptscript --server from the gptscript directory to work around this problem.

Expected Behavior:
We should be able to use relative file paths in gptscripts (relative to the directory containing the script) and expect them to work when executed from any directory.

Add the ability to specify explicit functions

A sample gptscript:

Run command and do such-and-such with the output.

It would be nice if we could save a call to OpenAI for the "Run command" part because that is pre-defined and should always be the same.

Does not properly parse URLs to program files on Windows

gptscript does not properly parse URLs to program files on Windows; it attempts to open the URL as a local file.

Steps to reproduce:

  1. Install gptscript on Windows with winget install gptscript-ai.gptscript
  2. Run gptscript https://get.gptscript.ai/echo.gpt --input 'Hello, World!'

Expected results:

OUTPUT:

Hello, World!

Actual results:

2024/02/21 15:34:01 open .\https:\get.gptscript.ai\echo.gpt: The filename, directory name, or volume label syntax is incorrect.
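The loader apparently needs to distinguish remote references from local paths before falling back to filesystem resolution. A hedged Python sketch of that check (the actual fix would be in gptscript's Go loader; `is_remote_ref` is a made-up name). Note the single-letter scheme pitfall: naively treating anything with a `:` as a URL would misclassify Windows drive-letter paths such as `C:\scripts\echo.gpt`:

```python
from urllib.parse import urlparse

def is_remote_ref(ref: str) -> bool:
    """Return True if the program reference is an http(s) URL, not a file path."""
    scheme = urlparse(ref).scheme
    # urlparse reports "c" as the scheme for r"C:\scripts\echo.gpt",
    # so restrict to known remote schemes rather than "has any scheme".
    return scheme in ("http", "https")

print(is_remote_ref("https://get.gptscript.ai/echo.gpt"))  # True
print(is_remote_ref(r"C:\scripts\echo.gpt"))               # False
```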


Not able to make use of tools from other repo directories.

Steps to reproduce the problem:

  1. Cloned https://github.com/gptscript-ai/vision to my home directory.
  2. Tried to execute the following script, where the tool is given a relative path to the tool.gpt from step 1:
tools: ../../../vision/tool.gpt

Describe the picture in file /Users/sangeethahariharan/Downloads/sunrise.jpeg for me
  3. The script fails with the following errors:
gptscript testvision.gpt                                                 
15:09:43 started  [main]
15:09:43 sent     [main]
         content  [1] content | Waiting for model response...
         content  [1] content | tool call -vision-tool-a7c56c59 -> {"images":"file:///Users/sangeethahariharan/Downloads/sunrise.jpeg","prompt":"Describe the picture."}
15:09:47 started  [vision(2)] [input={"images":"file:///Users/sangeethahariharan/Downloads/sunrise.jpeg","prompt":"Describe the picture."}]
15:09:47 sent     [vision(2)]
node:internal/modules/cjs/loader:1152
  throw err;
  ^

Error: Cannot find module '/Users/sangeethahariharan/gpt/gptscript/examples/index.js'
    at Module._resolveFilename (node:internal/modules/cjs/loader:1149:15)
    at Module._load (node:internal/modules/cjs/loader:990:27)
    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:142:12)
    at node:internal/main/run_main_module:28:49 {
  code: 'MODULE_NOT_FOUND',
  requireStack: []
}

Node.js v21.6.2
15:09:47 failed to run tool [vision] cmd [/bin/bash /var/folders/xt/zg5q3qtj07q84xvf0_80pfdh0000gn/T/gptscript379155314]: exit status 1
2024/02/22 15:09:47 ERROR: node:internal/modules/cjs/loader:1152
  throw err;
  ^

Error: Cannot find module '/Users/sangeethahariharan/gpt/gptscript/examples/index.js'
    at Module._resolveFilename (node:internal/modules/cjs/loader:1149:15)
    at Module._load (node:internal/modules/cjs/loader:990:27)
    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:142:12)
    at node:internal/main/run_main_module:28:49 {
  code: 'MODULE_NOT_FOUND',
  requireStack: []
}

Node.js v21.6.2
: exit status 1

Provide better output when `sys.find` does not return any files.

Steps to reproduce the problem:

  1. git clone https://github.com/gptscript-ai/gptscript
  2. Create a new gptscript that uses the sys.find tool with a pattern that will not match any files:
 % cat examples/testfind.gpt 
tools: sys.find

Find all sample *.exe file in examples/
  3. Execute this script.
  4. Notice that there is no output in this case:
 % gptscript examples/testfind.gpt
17:18:34 started  [main]
17:18:34 sent     [main]
         content  [1] content | Waiting for model response...
         content  [1] content | tool call sys -> {"directory":"examples/","pattern":"*.exe"}
17:18:35 started  [sys.find(2)] [input={"directory":"examples/","pattern":"*.exe"}]
17:18:35 sent     [sys.find(2)]
17:18:35 ended    [sys.find(2)]
17:18:35 continue [main]
17:18:35 sent     [main]
         content  [1] content | Waiting for model response...
17:18:36 continue [main]
2024/02/28 17:18:36 invalid continue call, no completion needed

The tool not providing any output in this case causes problems for gptscripts that use this tool along with other tools.

This behavior can be seen when executing the samples-readme.gpt script from the examples directory, where some non-existent files end up being used:

17:23:20 started  [main]
17:23:20 sent     [main]
17:23:20 started  [sys.find(2)] [input={"directory":"examples/","pattern":"*.gpt"}]
17:23:20 sent     [sys.find(2)]
17:23:20 ended    [sys.find(2)]
17:23:20 continue [main]
17:23:20 sent     [main]
17:23:20 started  [summary(3)] [input={"file": "examples/sample1.gpt"}]
17:23:20 started  [summary(4)] [input={"file": "examples/sample3.gpt"}]
17:23:20 started  [summary(5)] [input={"file": "examples/sample2.gpt"}]
17:23:20 sent     [summary(3)]
17:23:20 sent     [summary(4)]
17:23:20 sent     [summary(5)]
         content  [4] content | Waiting for model response...
17:23:20 started  [summary(6)->sys.read(6)] [input={"filename":"examples/sample1.gpt"}]
17:23:20 sent     [summary(6)->sys.read(6)]
2024/02/28 17:23:20 open examples/sample1.gpt: no such file or directory
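One improvement would be for sys.find to return an explicit sentinel message rather than empty output, so the model knows the call succeeded but matched nothing. A Python sketch of the idea (the real built-in tool is implemented in Go inside gptscript; this is illustrative only):

```python
import fnmatch
import os

def sys_find(directory: str, pattern: str) -> str:
    """Find files matching pattern under directory; never return empty output."""
    matches = [
        os.path.join(root, name)
        for root, _dirs, files in os.walk(directory)
        for name in files
        if fnmatch.fnmatch(name, pattern)
    ]
    if not matches:
        # An explicit message instead of an empty string, so the model
        # does not invent file names to fill the gap.
        return f"no files found matching {pattern!r} in {directory!r}"
    return "\n".join(sorted(matches))
```

With output like `no files found matching '*.exe' in 'examples/'`, the model can report the empty result instead of hallucinating files such as examples/sample1.gpt.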
