Comments (7)
The code works. I had a spelling error. Thanks for getting me over the hump. Now I must explore the various models. Great job with the development of Continue!
from continue.
@erlebach The first thing I notice is the URL http://localhost:8080:65432. Are you purposefully doing this for port forwarding? If not, your problem would be solved by replacing it with http://localhost:65432.
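For anyone hitting the same thing: the first URL has two ports glued together, which no URL parser can interpret. A quick standard-library check (illustrative only) makes the difference visible:

```python
from urllib.parse import urlsplit

def port_of(url):
    """Return the URL's port, or None when the port is malformed."""
    try:
        return urlsplit(url).port
    except ValueError:
        return None

print(port_of("http://localhost:8080:65432"))  # None: "8080:65432" is not a valid port
print(port_of("http://localhost:65432"))       # 65432
```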
from continue.
@sestinj: that solved the problem. Right now I am using the ChatGPT model with my API key. What I really want is to run llama.cpp with Code Llama on a remote machine with a different IP address than my machine. I have tried this before, but unsuccessfully. Is .continue/config.py the only file that requires modification, assuming I still start the continue server manually? Thanks.
from continue.
Yes, that is the only file you should have to edit. If you haven't already seen it, this is our reference page for using Llama.cpp: https://continue.dev/docs/reference/Models/llamacpp
This assumes you're running the llama.cpp server; is that what you're doing?
I can also give better instructions on actually setting up that server and its options, if that would help.
from continue.
Thanks. I read the docs you referred to again. You write that config.py should contain the following:
config = ContinueConfig(
    ...
    models=Models(
        default=LlamaCpp(
            max_context_length=4096,
            server_url="http://localhost:8080"
        )
    )
)
I assume that localhost should be replaced by the IP of the machine that Llama.cpp is running on? Visual Studio Code is running on localhost. Let me try again. If I have issues, I'll write another message.
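In other words, I expect the snippet to become something like the following, with x.y.z.w standing in for the remote machine's address:

```python
config = ContinueConfig(
    ...
    models=Models(
        default=LlamaCpp(
            max_context_length=4096,
            # Address of the machine running the llama.cpp server,
            # not the machine running VS Code:
            server_url="http://x.y.z.w:8080"
        )
    )
)
```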
from continue.
Here is my "other message" :-) with more details. Llama.cpp runs on a remote machine with IP address x.y.z.w. I execute the Llama.cpp server with the following command:
./server -m models/codellama-7b-instruct.Q4_K_M.gguf --host 0.0.0.0
I modify the config.py file and then execute ./dist/run. Here is the relevant section of the config.py file:
"""
This is the Continue configuration file.
See https://continue.dev/docs/customization for documentation of the available options.
"""
from continuedev.src.continuedev.core.models import Models
from continuedev.src.continuedev.core.config import CustomCommand, SlashCommand, ContinueConfig
from continuedev.src.continuedev.plugins.context_providers.github import GitHubIssuesContextProvider
from continuedev.src.continuedev.libs.llm.maybe_proxy_openai import MaybeProxyOpenAI
from continuedev.src.continuedev.libs.llm.llamacpp import LlamaCpp
from continuedev.src.continuedev.plugins.steps.open_config import OpenConfigStep
from continuedev.src.continuedev.plugins.steps.clear_history import ClearHistoryStep
from continuedev.src.continuedev.plugins.steps.feedback import FeedbackStep
from continuedev.src.continuedev.plugins.steps.comment_code import CommentCodeStep
from continuedev.src.continuedev.plugins.steps.share_session import ShareSessionStep
from continuedev.src.continuedev.plugins.steps.main import EditHighlightedCodeStep
from continuedev.src.continuedev.plugins.steps.cmd import GenerateShellCommandStep
from continuedev.src.continuedev.plugins.context_providers.search import SearchContextProvider
from continuedev.src.continuedev.plugins.context_providers.diff import DiffContextProvider
from continuedev.src.continuedev.plugins.context_providers.url import URLContextProvider
from continuedev.src.continuedev.plugins.context_providers.terminal import TerminalContextProvider
config = ContinueConfig(
    allow_anonymous_telemetry=True,
    models=Models(
        default=LLamaCpp(
            max_context_length=1024,
            server_url="http://x.y.z.w:8080"
        )
    ),
    system_message="",
    temperature=0.5,
The URL of the Continue server (in the VSC settings) remains https://localhost:65432.
Finally, here is the stack trace I get when I click on the continue extension in VSC:
Invalid Continue Config File
Falling back to default config settings due to the following error in ~/.continue/config.py.
Traceback (most recent call last):
  File "continuedev/src/continuedev/core/sdk.py", line 75, in create
    sdk.config = config or sdk._load_config_dot_py()
  File "continuedev/src/continuedev/core/sdk.py", line 242, in _load_config_dot_py
    config = ContinueConfig.from_filepath(path)
  File "continuedev/src/continuedev/core/config.py", line 112, in from_filepath
    spec.loader.exec_module(config)
  File "", line 883, in exec_module
  File "", line 241, in _call_with_frames_removed
  File "/Users/erlebach/.continue/config.py", line 30, in <module>
    default=LLamaCpp(
NameError: name 'LLamaCpp' is not defined
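For reference, this NameError is not a missing package: Python names are case-sensitive, and the import at the top of my file brings in LlamaCpp (lowercase second "l"), while line 30 references LLamaCpp. A minimal sketch of the same failure:

```python
class LlamaCpp:  # spelled like the actual export in continuedev...libs.llm.llamacpp
    pass

try:
    model = LLamaCpp()  # misspelled: capital second "L", as in the config above
except NameError as err:
    error_message = str(err)

print(error_message)  # name 'LLamaCpp' is not defined
```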
I guess I have to do a `pip install LlamaCpp`? I did not see that in the instructions.
Thanks!
from continue.
Glad to hear! If you have any more questions don't hesitate to ask in our Discord! https://discord.gg/RKcHQgxy5N
from continue.
Related Issues (20)
- feature request: Add a new option to disable summaries HOT 5
- System Message not being sent to llama.cpp HOT 3
- Issue running codellama/phind-codellama based model with FastChat OpenAI API HOT 6
- Unable to download `run` and `melisearch` HOT 7
- continue displays empty results when using ollama v0.0.20 HOT 4
- Completion instead of Completions HOT 7
- aiohttp.client_exceptions.ClientPayloadError: Response payload is not completed HOT 3
- Allow edits to an earlier prompt and resume session from edited prompt HOT 3
- Crashing of vscode after installed the extension HOT 13
- "No Code Selected" error HOT 3
- /edit does not do anything with Ollama v0.0.21 HOT 14
- Continue always opens on new VSCode windows HOT 2
- cmd+m doesn't work for Jupyter Notebook outputs HOT 1
- Continue doesn't seem to be getting context of highlighted code when using /edit or /comment HOT 9
- Windows Defender False Positive HOT 4
- [CON-212] Error when editing files on different drives (e.g. c: vs e: on windows) HOT 13
- VSCode diff editing broken because generated file name is too long HOT 3
- "Unable to resolve filesystem provider" in devcontainer HOT 6
- Small timeline window HOT 2
- ``Config.py`` using wrong openai for requests HOT 6