smol-dev-js's Issues

Incremental prompt in md file

For incremental development it is cumbersome to use only the command line, as sometimes I want to pass in code examples or documentation.

A good start would be the ability to load an instructions.md file, or even better a folder with several md files, like spec2code but for a smaller feature or change rather than the whole app.

Best of all would be a web UI like Friday:

https://github.com/amirrezasalimi/friday
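For illustration, the kind of flow I'm imagining is just reading an md file and feeding it in as the prompt; the file name and wiring below are placeholders, not existing smol-dev-js options:

```javascript
// Hypothetical sketch: load an instructions.md file and use its contents as the prompt,
// instead of typing everything on the command line. Not an existing smol-dev-js feature.
const fs = require("fs");

function loadInstructionPrompt(file = "instructions.md") {
  // Fall back to an empty prompt if the file does not exist yet
  if (!fs.existsSync(file)) {
    return "";
  }
  return fs.readFileSync(file, "utf-8");
}

console.log(loadInstructionPrompt());
```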

Support for GPT-4 Turbo / selection of custom model

Hi and first of all, thanks for this cool fork!

I’m not sure if I just missed a setting somehow (in that case I apologize), but it would be awesome if the new GPT-4 Turbo models could be supported, or better yet, if we could supply our own model string to select which one to use.
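Something like this in .smol-dev-js/config/config.json is what I have in mind; the `model` key is purely a proposal (today the config only exposes flags like gpt4_32k):

```json
{
	"provider": "openai",
	"model": "gpt-4-turbo-preview",
	"gpt4_32k": false
}
```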

fetch is not defined

I'm using Node v14.19.0. I tried to run this in my project and got the following:

## Unable to handle prompt for ...
{"model":"gpt-4","temperature":0,"max_tokens":7905,"top_p":1,"frequency_penalty":0,"presence_penalty":0,"messages":[{"role":"system","content":"You are an assistant, who can only reply in JSON object, reply with a yes (in a param named 'reply') if you understand"},{"role":"assistant","content":"{\"reply\":\"yes\"}"},{"role":"user","content":"[object Object]\n[object Object]\n[object Object]\n[object Object]\n[object Object]"}]}
## Recieved error ...
ReferenceError: fetch is not defined

I checked out the project and added `const fetch = require('node-fetch');` to the top of `ai-bridge/src/openai/getChatCompletion.js` and it solved it for me.
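For reference, the workaround is literally one line at the top of that file (node-fetch v2.x works with require; v3 is ESM-only). Upgrading to Node 18+, which ships a global fetch, should also avoid it:

```javascript
// Top of ai-bridge/src/openai/getChatCompletion.js — the workaround described above.
// Node 14 has no global fetch (it only became a global in Node 18),
// so pull it in from node-fetch instead.
const fetch = require("node-fetch");
```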

ReferenceError: child_process is not defined

.smol-dev-js/config/config.json

{
	"provider": "openai",
	"gpt4_32k": false,
	"short_description": "NestJs microserivce with REST API",
	"src_include": [
		"**"
	],
	"src_exclude": [
		"**/.*",
		"**/*.bin",
		"**/node_modules/**",
		"**/build/**",
		"**/bin/**"
	]
}

.smol-dev-js/config/aibridge.json

{
	"provider": {
		"openai": "sk-***"
	},
	"providerLatencyAdd": 0,
	"cache": {
		"localJsonlDir": {
			"enable": true,
			"path": "./.smol-dev-js/ai-cache"
		},
		"mongoDB": {
			"enable": false,
			"url": "<CHANGE TO YOUR RESPECTIVE MONGODB URL>"
		},
		"promptCache": true,
		"embeddingCache": true
	},
	"providerRateLimit": 2
}

Environment:

  • MacOS 13.3.1
  • Node 18.14.0
  • pnpm 8.5.0
  • smol-dev-js 1.2.15

Steps to reproduce:

  1. Install the smol-dev-js package globally with pnpm.
  2. Create a new empty directory to serve as the project directory.
  3. Run the smol-dev-js setup command in the project directory.
  4. Set up OpenAI as the main service provider and add a short description. "Enter" through all the other choices.
  5. Run the smol-dev-js run command.
  6. Tell it you'll be using pnpm as the project package manager and that you want NestJS installed as the core dependency.
  7. Type "y" when prompted to confirm the suggested steps.
    No further input/interaction

Unexpected output:

🐣 [ai]: Working on the plan ...
🐣 [ai]: Studying 0 dependencies (in parallel)
🐣 [ai]: Performing any required modules install / file moves / deletion
Confirm the excution of 'npm install @nestjs/core @nestjs/common @nestjs/microservices @nestjs/platform-express reflect-metadata rxjs langchain'
? [you]: Install listed dependencies? › (Y/n)ReferenceError: child_process is not defined
    at getOperationFileMapFromPlan (/Users/malesh/Library/pnpm/global/5/.pnpm/[email protected]/node_modules/smol-dev-js/src/ai/seq/applyOperationFileMapFromPlan.js:70:3)
    at generateFilesFromPrompt (/Users/malesh/Library/pnpm/global/5/.pnpm/[email protected]/node_modules/smol-dev-js/src/ai/seq/generateFilesFromPrompt.js:106:8)
    at async TypeCommand.run [as _runHandler] (/Users/malesh/Library/pnpm/global/5/.pnpm/[email protected]/node_modules/smol-dev-js/src/cli/command/prompt.js:65:4)
    at async Promise.all (index 3)
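This looks like the same class of problem as the "fetch is not defined" issue above: a module is used without being required. A sketch of the fix I would try, assuming applyOperationFileMapFromPlan.js:70 shells out via child_process (not a confirmed patch):

```javascript
// Top of src/ai/seq/applyOperationFileMapFromPlan.js — assumed fix, not verified against the source:
// require the module before it is used around line 70.
const child_process = require("child_process");

// ...later the install step would then presumably run something like:
// child_process.execSync("npm install <packages>", { stdio: "inherit" });
```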

Size troubles

I'm trying to run this on a project with the following prompt:

Upgrade the Rails version to 7.0.3.1 following known migration guides to upgrade a Rails project from Rails 6 to Rails 7. Ensure that `bundle exec rspec` still passes

I've tried to trim down my files in src_include as much as possible to fit within the default limits, but when I try a prompt I see:

reqJson {
  model: 'gpt-4',
  temperature: 0,
  total_tokens: 4050,
  max_tokens: 29905,
  top_p: 1,
  frequency_penalty: 0,
  presence_penalty: 0,
  rawApi: false,
  messages: [
    {
      role: 'system',
      content: "You are an assistant, who can only reply in JSON object, reply with a yes (in a param named 'reply') if you understand"
    },
    { role: 'assistant', content: '{"reply":"yes"}' },
    {
      role: 'user',
      content: '[object Object]\n' +
        '[object Object]\n' +
        '[object Object]\n' +
        '[object Object]\n' +
        '[object Object]'
    }
  ]
}
getCompletion API error {
  message: "This model's maximum context length is 8192 tokens. However, you requested 29973 tokens (68 in the messages, 29905 in the completion). Please reduce the length of the messages or completion.",
  type: 'invalid_request_error',
  param: 'messages',
  code: 'context_length_exceeded'
}

I'm not sure how to debug further here.
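For what it's worth, the arithmetic in the error is 68 prompt tokens + 29,905 requested completion tokens = 29,973, which exceeds gpt-4's 8,192-token context. So this looks less like my src_include being too large and more like max_tokens being set higher than the model can ever accept. A minimal sketch of the clamping I would expect (names are illustrative, not smol-dev-js internals):

```javascript
// Sketch: the completion budget must fit in the model's context window together with the prompt.
// These names are illustrative only, not actual smol-dev-js code.
const CONTEXT_WINDOW = 8192; // gpt-4
const promptTokens = 68;     // from the error message above

const maxTokens = Math.max(0, CONTEXT_WINDOW - promptTokens);
console.log(maxTokens);      // 8124 — requesting more than this triggers context_length_exceeded
```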

Cannot read properties of undefined (reading 'trim')

Steps to test:

  1. Acquire my Macbook.
  2. cd Development/smol-test
  3. yarn start // alias for smol-dev-js prompt
  4. Profit.

It asks me for instructions, but as soon as I press enter, it goes err err err.

🐣 [ai]: What would you like me to do?
✔ [you]:  … An image gallery using Reactjs and AntD 5.0.
🐣 [ai]: TypeError: Cannot read properties of undefined (reading 'trim')
    at getPromptBlock (/node_modules/smol-dev-js/src/prompt/builder/getPromptBlock.js:13:14)
    at getShortDescription (/node_modules/smol-dev-js/src/prompt/part/getShortDescription.js:8:9)
    at getMainDevSystemPrompt (/node_modules/smol-dev-js/src/prompt/part/getMainDevSystemPrompt.js:26:9)
    at planDraft (/node_modules/smol-dev-js/src/ai/seq/planDraft.js:22:9)
    at generateFilesFromPrompt (/node_modules/smol-dev-js/src/ai/seq/generateFilesFromPrompt.js:38:19)
    at TypeCommand.run [as _runHandler] (/node_modules/smol-dev-js/src/cli/command/prompt.js:65:10)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async Promise.all (index 3)
node -v
v18.16.0
"dependencies": {
  "smol-dev-js": "^1.2.15"
}

I have tried with both global and project-specific installation.

Plz halp.
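From the stack trace, getPromptBlock.js:13 calls .trim() on a value coming out of getShortDescription, so my guess is the project's short_description never got set (re-running smol-dev-js setup might avoid it). A defensive guard would look roughly like this (purely hypothetical, not the actual source):

```javascript
// src/prompt/builder/getPromptBlock.js — hypothetical guard, not the real implementation.
function getPromptBlock(label, content) {
  // Avoid calling .trim() on an undefined short description / block content
  if (content == null) {
    return "";
  }
  return label + "\n" + content.trim();
}
```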

code fences around generated files

Any time I ask smol-dev-js to create or modify a file it adds code fences at the top and bottom of the file (```javascript at the beginning and ``` at the end).

I can ask smol-dev-js to remove them but then it adds them back with the next command.
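As a stopgap I'm considering stripping the fences after generation with something like this (my own post-processing, not part of smol-dev-js):

```javascript
// Strip a leading ```lang fence and a trailing ``` fence from generated file content.
// My own post-processing workaround, not something smol-dev-js does today.
function stripCodeFences(text) {
  return text
    .replace(/^\s*```[\w-]*\s*\r?\n/, "") // opening fence, e.g. ```javascript
    .replace(/\r?\n```\s*$/, "");         // closing fence
}
```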

cannot merge files - TypeError: path.resolve is not a function

It created a new folder and file in the root folder instead of the existing src folder.
When I asked it to merge the two files, I got:

✔ [you]: … you created a CustomNode.tsx in components/Flow instead of updating the src/components/Flow, why? merge the contents of components/Flow/CustomNode.tsx with src/components/Flow/CustomNode.tsx
🐣 [ai]: Here is the updated plan draft:

Merge the contents of components/Flow/CustomNode.tsx with src/components/Flow/CustomNode.tsx.

Update src/components/Flow/CustomNode.tsx with the contents from components/Flow/CustomNode.tsx.

Delete components/Flow/CustomNode.tsx.
✔ [you]: Proceed with the plan? … yes
🐣 [ai]: Working on the plan ...
🐣 [ai]: Studying 1 dependencies (in parallel)
🐣 [ai]: Performing any required modules install / file moves / deletion
TypeError: path.resolve is not a function
at getOperationFileMapFromPlan (/usr/local/lib/node_modules/smol-dev-js/src/ai/seq/applyOperationFileMapFromPlan.js:121:24)
at generateFilesFromPrompt (/usr/local/lib/node_modules/smol-dev-js/src/ai/seq/generateFilesFromPrompt.js:106:8)
at async TypeCommand.run [as _runHandler] (/usr/local/lib/node_modules/smol-dev-js/src/cli/command/prompt.js:65:4)
at async Promise.all (index 3)
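In my experience "path.resolve is not a function" usually means the path identifier at that point is not Node's path module, e.g. a missing require or a local variable/parameter named path shadowing it; the snippet below only illustrates that failure mode and is not the actual smol-dev-js source:

```javascript
// Illustration of the failure mode only, not the real applyOperationFileMapFromPlan.js:
const path = { to: "components/Flow/CustomNode.tsx" }; // a plain object shadowing the module
// path.resolve(process.cwd(), path.to);               // TypeError: path.resolve is not a function

// Keeping Node's module un-shadowed (or renaming the local) avoids it:
const nodePath = require("path");
console.log(nodePath.resolve(process.cwd(), "src/components/Flow/CustomNode.tsx"));
```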

GPT-4 does not exist

After setup I am getting the following:

{
    "error": {
        "message": "The model: `gpt-4` does not exist",
        "type": "invalid_request_error",
        "param": null,
        "code": "model_not_found"
    }
}

If I search for gpt-4 in the project I see a couple of references. Anyone getting the same problem?

Request errors on initial use: Missing valid openai response

I am really excited about the concept here, but after the initial setup I ran into some errors on Linux Mint 20.3 (Ubuntu Focal).

Steps taken:

  • install with npm i -g smol-dev-js
  • run smol-dev-js setup
    • Chose OpenAI (on list for Anthropic!)
    • Entered API key
    • Set remaining settings for project I was in
  • run smol-dev-js run

Output from terminal:

$ smol-dev-js run
--------------------
🐣 [ai]: hi its me, the ai dev ! you said you wanted
         here to help you with your project, which is a ....
--------------------
CityCoins are cryptocurrencies that allow you to support your favorite cities while earning Stacks and Bitcoin.
--------------------
🐣 [ai]: What would you like me to do? (PS: this is not a chat system, there is no chat memory prior to this point)
✔ [you]:  … Suggest something please
🐣 [ai]: (node:147273) ExperimentalWarning: The Fetch API is an experimental feature. This feature could change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
## Unable to handle prompt for ...
{"model":"gpt-4","temperature":0,"max_tokens":7905,"top_p":1,"frequency_penalty":0,"presence_penalty":0,"messages":[{"role":"system","content":"You are an assistant, who can only reply in JSON object, reply with a yes (in a param named 'reply') if you understand"},{"role":"assistant","content":"{\"reply\":\"yes\"}"},{"role":"user","content":"[object Object]\n[object Object]\n[object Object]\n[object Object]\n[object Object]"}]}
## Recieved error ...
[invalid_request_error] undefined
## Unable to handle prompt for ...
{"model":"gpt-4","temperature":0,"max_tokens":7873,"top_p":1,"frequency_penalty":0,"presence_penalty":0,"messages":[{"role":"system","content":"You are an assistant, who can only reply in JSON object, reply with a yes (in a param named 'reply') if you understand"},{"role":"assistant","content":"{\"reply\":\"yes\"}"},{"role":"user","content":"[object Object]\n[object Object]\n[object Object]\n[object Object]\n[object Object]"},{"role":"user","content":"Please update your answer, and respond with only a single JSON object, in the requested format. No apology is needed."}]}
## Recieved error ...
[invalid_request_error] undefined
Error: Missing valid openai response, please check warn logs for more details
    at getChatCompletion (/home/whoabuddy/.nvm/versions/node/v18.12.1/lib/node_modules/smol-dev-js/node_modules/ai-bridge/src/openai/getChatCompletion.js:290:8)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async Object.promiseGenerator (/home/whoabuddy/.nvm/versions/node/v18.12.1/lib/node_modules/smol-dev-js/node_modules/ai-bridge/src/AiBridge.js:249:11)
Last Completion null
Error: Missing valid openai response, please check warn logs for more details
    at getChatCompletion (/home/whoabuddy/.nvm/versions/node/v18.12.1/lib/node_modules/smol-dev-js/node_modules/ai-bridge/src/openai/getChatCompletion.js:290:8)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async Object.promiseGenerator (/home/whoabuddy/.nvm/versions/node/v18.12.1/lib/node_modules/smol-dev-js/node_modules/ai-bridge/src/AiBridge.js:249:11)

Also noticed Recieved is misspelled in the error above 🔬

How can I check the warn logs that were mentioned?

Making changes to existing code base?

In the readme you mention making changes to an existing code base. How exactly do I instruct smol-dev-js to do this?
Would you recommend running code2spec first for the existing code base, then modifying the spec and re-running spec2code, or are there other ways the script can detect the existing code base?

Thanks for bringing a really interesting project to life!

configuration for API BASE URL

I would like to be able to specify an openai_base_url in aibridge.json in order to test other OpenAI endpoints.
It would require filling in the baseURL as per the OpenAI Node SDK (not sure if that is the lib you are using).
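As a concrete proposal, this is what I imagine it looking like in aibridge.json; the openai_base_url key does not exist today and the name is just a suggestion:

```json
{
	"provider": {
		"openai": "sk-***"
	},
	"openai_base_url": "https://my-proxy.example.com/v1",
	"providerRateLimit": 2
}
```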

anthropic-version header is required

I'm trying to use this for the first time. After setting up the project with smol-dev-js setup I ran smol-dev-js run and got this output:

--------------------
🐣 [ai]: hi its me, the ai dev ! you said you wanted
         here to help you with your project, which is a ....
--------------------
MY PROMPT
--------------------
🐣 [ai]: What would you like me to do? (PS: this is not a chat system, there is no chat memory prior to this point)
✔ [you]:  … create a new next.js project with tailwind
🐣 [ai]: ## Unable to handle prompt for ...
{"model":"claude-v1-100k","temperature":0,"top_p":1,"prompt":"\n\nHuman: You are an assistant, who can only reply in JSON object, reply with a yes (in a param named 'reply') if you understand\n\nAssistant: {\"reply\":\"yes\"}\n\nHuman: [object Object]\n[object Object]\n[object Object]\n[object Object]\n[object Object]\n\nAssistant:","max_tokens_to_sample":89905,"stop_sequence":["<|endoftext|>","\n\nHuman:","\n\nhuman:"]}
## Recieved response ...
{"error":{"type":"invalid_request_error","message":"anthropic-version header is required"}}
## Recieved error ...
[invalid_request_error] undefined
## Unable to handle prompt for ...
{"model":"claude-v1-100k","temperature":0,"top_p":1,"prompt":"\n\nHuman: You are an assistant, who can only reply in JSON object, reply with a yes (in a param named 'reply') if you understand\n\nAssistant: {\"reply\":\"yes\"}\n\nHuman: [object Object]\n[object Object]\n[object Object]\n[object Object]\n[object Object]\n\nHuman: Please update your answer, and respond with only a single JSON object, in the requested format. No apology is needed.\n\nAssistant:","max_tokens_to_sample":89873,"stop_sequence":["<|endoftext|>","\n\nHuman:","\n\nhuman:"]}
## Recieved response ...
{"error":{"type":"invalid_request_error","message":"anthropic-version header is required"}}

Looks like there might have been an Anthropic API update?
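For reference, Anthropic's API now requires a version header on every request. Assuming ai-bridge builds the request with fetch, the fix is presumably just adding that header; the sketch below uses the legacy /v1/complete endpoint that the claude-v1-100k model family goes through and a published version string:

```javascript
// Sketch only: add the required anthropic-version header to the completion request.
// Assumes the request is made with fetch; header names follow the Anthropic REST docs.
async function callAnthropic(requestBody) {
  const res = await fetch("https://api.anthropic.com/v1/complete", {
    method: "POST",
    headers: {
      "content-type": "application/json",
      "x-api-key": process.env.ANTHROPIC_API_KEY,
      "anthropic-version": "2023-06-01", // the header the error above says is missing
    },
    body: JSON.stringify(requestBody),
  });
  return res.json();
}
```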

Running into ENOENT error

This is the error I'm running into each time after I give the plan a go.

✔ [you]: Proceed with the plan? … yes
🐣 [ai]: Working on the plan ...
🐣 [ai]: Studying 0 dependencies (in parallel)
🐣 [ai]: Performing any required modules install / file moves / deletion
Confirm the excution of 'npm install next react react-dom remark remark-html'
? [you]: Install listed dependencies? › (Y/n)🐣 [ai]: Studying 0 dependencies (awaiting in parallel)
🐣 [ai]: Preparing summaries for smol-er sub-operations ...
🐣 [ai]: (async) Updating src file - src/pages/_app.js
🐣 [ai]: (async) Updating src file - src/components
🐣 [ai]: (async) Updating src file - src/lib
🐣 [ai]: (async) Updating src file - src/posts
🐣 [ai]: (async) Updating src file - src/posts/sample-post.md
🐣 [ai]: (async) Updating src file - package.json
Error: ENOENT: no such file or directory, open '/Users/username/smoll-dev-js-project-1/src/components'

Nothing is written to the folder prior to this error, and I have no src folder.

Thanks for any help

Error: ENOENT: no such file or directory, rename...

I'm using Claude on Windows 11, and it's throwing this...

Error: ENOENT: no such file or directory, rename 'C:\Users\rickr\OneDrive\Desktop\smol\style.css' -> 'C:\Users\rickr\OneDrive\Desktop\smol\css\style.css'

Below is my CLI output when starting from an empty directory (aside from the setup files).
My current workaround is beginning each prompt with "Without moving any files...", but that's no good =/

Thoughts? Thank you and this repo is awesome!

PS C:\Users\rickr\OneDrive\Desktop\smol> smol-dev-js run
--------------------
🐣 [ai]: hi its me, the ai dev ! you said you wanted
         here to help you with your project, which is a ....
--------------------
A simple snake game created with javascript and runs in the browser.
--------------------
🐣 [ai]: What would you like me to do? (PS: this is not a chat system, there is no chat memory prior to this point)
√ [you]:  ... Suggest something
🐣 [ai]:  Here is the updated plan draft:

Generate a `index.html` file to act as the entry point for the game. It will contain the canvas element to render the game and link to the `snake.js` and `style.css` files.

Generate a `snake.js` file which will contain the game logic. It will keep track of the snake segments, handle user input to change direction, check for collisions, update the position of the snake segments and render the snake.

Generate a `style.css` file to contain some basic styling for the canvas and to center it on the page.

Update `README.md` to describe the project, files and how to run the game.

Ask the user to:

- Run and test the code
- Make any changes to the files as needed
- Let me know if any files need to be updated or added
√ [you]: Proceed with the plan? ... yes
🐣 [ai]: Working on the plan ...
🐣 [ai]: Studying 0 dependencies (in parallel)
🐣 [ai]: Performing any required modules install / file moves / deletion
🐣 [ai]: Studying 0 dependencies (awaiting in parallel)
🐣 [ai]: Preparing summaries for smol-er sub-operations ...
🐣 [ai]: (async) Updating src file - index.html
🐣 [ai]: (async) Updating src file - snake.js
🐣 [ai]: (async) Updating src file - style.css
🐣 [ai]: Finished current set of async spec/src file update (1st round)
🐣 [ai]: (async) Updating src file - README.md
🐣 [ai]: Finished current set of async spec/src file update (2nd round)
🐣 [ai]: What would you like me to do? (PS: this is not a chat system, there is no chat memory prior to this point)
√ [you]:  ... Suggest something
🐣 [ai]:  Here is the updated plan draft:

Move `style.css` into a `css` folder.
Generate a `game.js` file with the game logic.
Update `index.html` to include the new `css` folder and `game.js` file.
Ask the user to test and provide feedback.
√ [you]: Proceed with the plan? ... yes
🐣 [ai]: Working on the plan ...
🐣 [ai]: Studying 1 dependencies (in parallel)
🐣 [ai]: Performing any required modules install / file moves / deletion
Error: ENOENT: no such file or directory, rename 'C:\Users\rickr\OneDrive\Desktop\smol\style.css' -> 'C:\Users\rickr\OneDrive\Desktop\smol\css\style.css'
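If it helps triage: the rename fails because the target css folder was never created before the move. A tiny sketch of the guard I'd expect before the rename (not the actual smol-dev-js code):

```javascript
// Sketch: ensure the destination directory exists before moving a file.
// Not the actual smol-dev-js implementation, just the behaviour I'd expect.
const fs = require("fs");
const path = require("path");

function moveFile(src, dest) {
  fs.mkdirSync(path.dirname(dest), { recursive: true }); // e.g. creates the missing css/ folder
  fs.renameSync(src, dest);
}
```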

`spec2code` command results in `getChatCompletion` error

When running the smol-dev-js spec2code command, I encountered a fetch error, causing the command to fail. Here's the error message I received:

❯ smol-dev-js spec2code
🐣 [ai]: Based on the provided feedback, I will regenerate all code files that have corresponding spec defined. Here's the updated plan:

1. Update `wage-cal.js` file:
   - Ensure the `wageCalculator` and `timeToMinutes` functions are implemented according to the spec.
   
2. Update `index.html` file:
   - Ensure the UI elements are created as described in the spec, including input fields for start and end dates, punch time table, and total wage input field.
   - Apply TailwindCSS for styling.

3. Update `main.js` file:
   - Implement the `addPunchTimeRecord` and `calculateTotalWage` functions accordin to the spec.

After updating the code files, I will provide the updated files for you to review.
✔ [you]: Proceed with the plan? … yes
🐣 [ai]: Working on the plan ...
🐣 [ai]: Studying 0 dependencies (in parallel)
🐣 [ai]: Performing any required modules install / file moves / deletion
🐣 [ai]: Studying 0 dependencies (awaiting in parallel)
🐣 [ai]: Preparing summaries for smol-er sub-operations ...
## Unable to handle prompt for ...
{"model":"gpt-4","temperature":0,"max_tokens":6721,"top_p":1,"frequency_penalty":0,"presence_penalty":0,"messages":[{"role":"user","content":"You are an AI developer assitant who is trying to write a program that will generate code for the user based on their intent\n\nThe following are some details of the project ...\n\nProject short description :\n```\nDaily Wage Calculator\n```\n\n\n\n\n\nspecification file list, in the `specs` folder :\n```\nNOTES.md\n```\n\nsource code file list, in the `src` folder :\n```\n\n```\n\nList of actions you the AI dev can do for the user (you are not allowed to do anything else):\n- Move files or folders\n- Delete files or folders\n- Generate/Edit a code/spec file, with the given instructions\n- Update code/spec from spec/code\n\nYou are not able to do the following (ask the user to do it instead):\n- run and test the code\n- compile or commit changes\n\n(You do not need to let the user know about the above list, as they are for your own use)\n\nNote: that AI notes provided are internal only, for your own use. Parts of it maybe outdated\n\n\n\n'AI notes' for the overall project :\n```\nutf-8\n```\nNote: The following spec files content/notes maybe outdated\n\nTop level spec file content (README.md) :\n```\n# A daily wage calculator\n\nA web app that calculates the daily wage of a worker based on the punch in and punch out time. use tailwindcss for styling.\n\n## Files\n\n### wage-cal.js   \n\nA JS file that contains the logic for calculating the daily wage. contains following functions:\n- wageCalculator: takes the punch in and punch out time and returns the daily wage. (input should be array of punch time records and the punch time record should contain 3 periods key \"morning\", \"afternoon\", \"evening\" and each period should contain \"puch in\" and \"punch put\" in \"hh:mm\" format)\n- timeToMinutes: takes the time in hh:mm format and returns the time in minutes.\n\n### index.html  \n\nHTML file that contains the UI. contains following elements:\n- date start: input field for start date\n- date end: input field for end date\n- punch time table: table that contains the columns:\n  - date: display date of the punch time record\n  - morning in: input field for morning punch in time \n  - morning out: input field for morning punch out time\n  - afternoon in: input field for afternoon punch in time\n  - afternoon out: input field for afternoon punch out time\n  - evening in: input field for evening punch in time\n  - evening out: input field for evening punch out time\n  - total: minutes worked in the day\n- total wage: input field that shows the total wage from start date to end date\n\n### main.js   \n\nA JS file that contains the logic for the UI. contains following functions:\n- addPunchTimeRecord: adds a new punch time record to the punch time table\n- calculateTotalWage: calculates the total wage from start date to end date and shows it in the total wage input field\n```\n\nTop level spec file notes (NOTES.md) :\n```\nall punch time that input to this web is timezone `UTC+7`\n```\n\nThe following is the current plan you the AI developer has drafted, after several rounds of user feedback :\n```\nBased on the provided feedback, I will regenerate all code files that have corresponding spec defined. Here's the updated plan:\n\n1. Update `wage-cal.js` file:\n   - Ensure the `wageCalculator` and `timeToMinutes` functions are implemented according to the spec.\n   \n2. 
Update `index.html` file:\n   - Ensure the UI elements are created as described in the spec, including input fields for start and end dates, punch time table, and total wage input field.\n   - Apply TailwindCSS for styling.\n\n3. Update `main.js` file:\n   - Implement the `addPunchTimeRecord` and `calculateTotalWage` functions according to the spec.\n\nAfter updating the code files, I will provide the updated files for you to review.\n```\n\nThe following is the user prompt history for the current plan (in json array) :\n```\n[\"Regenerate all code files which has the corresponding spec defined (ensure all files are updated)\"]\n```\n\nThe files we have decided to generate are: [\"wage-cal.js\",\"index.html\",\"main.js\"]\n\nThe following is some details of local dependencies which you can use ...\n\n\nNow that we have a list of files, we need to understand what dependencies they share\nPlease name and briefly describe what is shared between the files we are generating, including exported variables, data schemas, class names, or id names of every DOM elements that javascript functions will use, message names, and function names\nFor ID names, make sure its clear its an ID using a # prefix (similar to css style selectors)\nExclusively focus on the names of the shared dependencies, and do not add any other explanation"}]}
## Recieved error ...
TypeError: fetch failed
Error: Missing valid openai response, please check warn logs for more details
    at getChatCompletion (/opt/homebrew/lib/node_modules/smol-dev-js/node_modules/ai-bridge/src/openai/getChatCompletion.js:290:8)
    at async Object.promiseGenerator (/opt/homebrew/lib/node_modules/smol-dev-js/node_modules/ai-bridge/src/AiBridge.js:249:11)

I have investigated the issue by attempting to use the same message request with getChatCompletion from ai-bridge, but I still receive the same error as before. However, when I tried using createChatCompletion from the openai library, it worked without any issues. Based on this, I believe that the problem is occurring specifically with ai-bridge.

Rate Limit Exceeded

Hello! Thanks for creating a cool port of a cool lib.

I'm trying to get it running and am hitting rate limits, see below. This seems to happen no matter what prompt I run.

➜  smol-dev-js-test smol-dev-js prompt
--------------------
🐣 [ai]: hi its me, the ai dev ! you said you wanted
         here to help you with your project, which is a ....
--------------------
An ecommerce admin dashboard. It will contain CRUD screens and API endpoints for a Widget model containing a bunch of fields that might describe a widget. The landing page will have some stats and graphs related to widgets. The application will be built in Next.js application in typescript using Next.js app router. It will also use Prettier, ESLint, TailwindCSS and ShadCN for UI. It will use Postgres as a database, and Prisma ORM to communicate with it. Build the charts using the Chart.js library.
--------------------
🐣 [ai]: What would you like me to do? (PS: this is not a chat system, there is no chat memory prior to this point)
✔ [you]:  … Suggest something
🐣 [ai]: Unexpected end of stream, with unprocessed data {
    "error": {
        "message": "Rate limit reached for 10KTPM-200RPM in organization org-... on tokens per min. Limit: 10000 / min. Please try again in 6ms. Contact us through our help center at help.openai.com if you continue to have issues.",
        "type": "tokens",
        "param": null,
        "code": "rate_limit_exceeded"
    }
}
Unexpected event processing error Unexpected end of stream, with unprocessed data, see warning logs for more details
Unexpected event processing error, see warning logs for more details
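My OpenAI org is on the 10K tokens-per-minute tier, so I suspect the fix on my side is throttling harder via the existing aibridge.json knobs, e.g. lowering providerRateLimit and adding some latency between calls (the exact semantics of these keys are my assumption from their names):

```json
{
	"providerRateLimit": 1,
	"providerLatencyAdd": 2000
}
```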
