LLM plugin providing access to Mistral models using the Mistral API
Install this plugin in the same environment as LLM:
llm install llm-mistral
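You can confirm the plugin was installed by listing the plugins LLM has loaded:
llm plugins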
First, obtain an API key for the Mistral API.
Configure the key using the llm keys set mistral command:
llm keys set mistral
<paste key here>
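If you would rather not store the key, LLM also supports passing a key directly on the command line with the --key option (a placeholder value is shown here):
llm -m mistral-tiny --key <your key> 'Say hello'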
You can now access the three Mistral hosted models: mistral-tiny, mistral-small and mistral-medium.
To run a prompt through mistral-tiny:
llm -m mistral-tiny 'A sassy name for a pet sasquatch'
To start an interactive chat session with mistral-small:
llm chat -m mistral-small
Chatting with mistral-small
Type 'exit' or 'quit' to exit
Type '!multi' to enter multiple lines, then '!end' to finish
> three proud names for a pet walrus
1. "Nanuq," the Inuit word for walrus, which symbolizes strength and resilience.
2. "Sir Tuskalot," a playful and regal name that highlights the walrus' distinctive tusks.
3. "Glacier," a name that reflects the walrus' icy Arctic habitat and majestic presence.
To use a system prompt with mistral-medium to explain some code:
cat example.py | llm -m mistral-medium -s 'explain this code'
All three models accept the following options, using -o name value syntax. An example command follows the list.
-o temperature 0.7: The sampling temperature, between 0 and 1. Higher values increase randomness, lower values are more focused and deterministic.
-o top_p 0.1: 0.1 means only consider tokens in the top 10% probability mass. Use this or temperature, but not both.
-o max_tokens 20: Maximum number of tokens to generate in the completion.
-o safe_mode 1: Turns on safe mode, which adds a system prompt to add guardrails to the model output.
-o random_seed 123: Set an integer random seed to generate deterministic results.
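For example, to run a prompt with a lower temperature and a cap on the response length (the specific values here are arbitrary):
llm -m mistral-small -o temperature 0.3 -o max_tokens 50 'A haiku about walruses'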
The Mistral Embeddings API can be used to generate 1,024-dimensional embeddings for any text.
To embed a single string:
llm embed -m mistral-embed -c 'this is text'
This will return a JSON array of 1,024 floating point numbers.
The LLM documentation has more, including how to embed in bulk and store the results in a SQLite database.
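As a rough sketch of what that can look like, using LLM's llm embed-multi and llm similar commands (the collection name, file pattern and database filename here are made up for the example):
# Embed each Markdown file in the current directory into a collection called 'readmes'
llm embed-multi readmes -m mistral-embed --files . '*.md' -d embeddings.db --store
# Find the stored items most similar to a search phrase
llm similar readmes -c 'pet sasquatch names' -d embeddings.db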
See "LLM now provides tools for working with embeddings" and "Embeddings: What they are and why they matter" for more about embeddings.
To set up this plugin locally, first check out the code. Then create a new virtual environment:
cd llm-mistral
python3 -m venv venv
source venv/bin/activate
Now install the dependencies and test dependencies:
llm install -e '.[test]'
To run the tests:
pytest