Comments (2)
This is more of a long-term vision than a step 1, but it'd be cool if Mentat could figure out commands to test/run a project on its own and decide which ones it wants to run. With just the README in context and the prompt "Make a new file commands.txt which lists useful commands I could run to test the project along with a short description of what they do", Mentat generates:
# Commands to test the project
## Create a virtual environment
python3 -m venv .venv
# Description: Creates a virtual environment named .venv
## Activate the virtual environment
source .venv/bin/activate
# Description: Activates the virtual environment
## Install Mentat from PyPI
python -m pip install mentat-ai
# Description: Installs Mentat from the Python Package Index
## Install Mentat from Github
python -m pip install git+https://github.com/biobootloader/mentat.git
# Description: Installs Mentat directly from the Github repository
## Run Mentat
mentat <paths to files or directories>
# Description: Runs Mentat with the specified files or directories
## Run Mentat with excluded files or directories
mentat path/to/directory --exclude exclude_me.py dir1/dir2 **/*.ts
# Description: Runs Mentat with the specified directory, excluding the specified files or directories
and with the README and .github/workflows/ also in context, it generates:
1. Run tests with pytest:
Command: pytest
Description: This command will run all the tests in the tests/ directory using pytest.
2. Check code formatting with black:
Command: black --check --preview .
Description: This command will check if the code formatting follows the black style without making any changes to the files.
3. Check import sorting with isort:
Command: isort --profile black --check .
Description: This command will check if the imports are sorted correctly according to the black profile.
4. Run license checking script:
Command: python tests/license_check.py
Description: This command will run the license checking script to ensure all dependencies have compatible licenses.
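Both generations above boil down to a single model call over a small context. A minimal sketch of that step, where `complete` is a hypothetical LLM-completion callable (not part of Mentat's actual API) and the prompt mirrors the one quoted earlier:

```python
# Sketch: ask a model for useful project commands, given only the README
# (and optionally other files like CI workflows). `complete` is a
# hypothetical LLM wrapper, not Mentat's real interface.

PROMPT = (
    "Make a new file commands.txt which lists useful commands I could run "
    "to test the project along with a short description of what they do."
)

def suggest_commands(readme_text: str, complete) -> str:
    """Return model-suggested commands.txt content for this project."""
    context = f"README:\n{readme_text}\n\n{PROMPT}"
    return complete(context)
```

The interesting design question is which files to feed it by default; the README plus CI workflow files worked well in the examples above because CI already encodes "how to test this project".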
Could imagine a loop: prompt -> initial changes proposed -> tests detected and run -> output put into context and model re-prompted for changes.
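That loop might look something like the sketch below, where `propose_changes`, `detect_test_commands`, and `apply` stand in for whatever Mentat would actually do at each step — this is an assumption about the shape of the loop, not existing Mentat code:

```python
import subprocess

def edit_test_loop(prompt, propose_changes, detect_test_commands, apply,
                   max_iterations=3):
    """Sketch of: prompt -> changes -> run detected tests -> re-prompt."""
    context = [prompt]
    for _ in range(max_iterations):
        changes = propose_changes("\n".join(context))
        apply(changes)
        failures = []
        for cmd in detect_test_commands():
            result = subprocess.run(cmd, capture_output=True, text=True)
            if result.returncode != 0:
                failures.append(
                    f"$ {' '.join(cmd)}\n{result.stdout}{result.stderr}"
                )
        if not failures:
            return True  # all detected test commands passed
        # Feed the failing output back and let the model try again.
        context.append("These commands failed:\n" + "\n".join(failures))
    return False
```

A cap on iterations (and probably on test runtime) matters here, since a model that keeps breaking its own fix would otherwise loop forever.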
from mentat.
This is similar to what I've imagined calling "Wolverine mode", where Mentat runs something and gets to see the output (and, in the case of a crash, automatically fix it).
I think a command could be a nice way to add this functionality, rather than a separate way to start Mentat from the command line. During your Mentat session you'd type something like:
/run python myfile.py arg1
That way you wouldn't have to leave Mentat, and earlier relevant conversation context would be available to the model (like the changes you'd just asked it to make, which might be causing the crash).
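A /run command like that could boil down to a subprocess call whose formatted output gets appended to the conversation. A sketch, assuming a simple string-in/string-out shape (the real command would plug into Mentat's session and message machinery):

```python
import shlex
import subprocess

def run_for_context(command_line: str, timeout: int = 60) -> str:
    """Run a user-supplied command and format its output for the model."""
    result = subprocess.run(
        shlex.split(command_line),
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    return (
        f"$ {command_line}\n"
        f"exit code: {result.returncode}\n"
        f"{result.stdout}{result.stderr}"
    )
```

Including the exit code and stderr matters: a crash is exactly the case where the model needs the traceback, not just stdout.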
Perhaps Mentat could also auto-suggest adding files to context that show up in the stack traces of crashes.
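For Python at least, that suggestion step could be as simple as regexing the traceback for its `File "...", line N` entries — a sketch that only handles Python tracebacks, with other languages needing their own patterns:

```python
import re

# Matches the `File "path", line N` frames of a Python traceback.
_TRACEBACK_FILE = re.compile(r'File "([^"]+)", line \d+')

def files_in_traceback(output: str) -> list[str]:
    """Return unique file paths mentioned in a crash's traceback."""
    seen = []
    for path in _TRACEBACK_FILE.findall(output):
        # Preserve traceback order; skip pseudo-files like <stdin>.
        if path not in seen and not path.startswith("<"):
            seen.append(path)
    return seen
```

The innermost frames are usually the most relevant, so a real implementation might also rank or filter these (e.g. only paths inside the project root).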