client-js's Introduction

This JavaScript client is inspired by cohere-typescript.

Mistral JavaScript Client

You can use the Mistral JavaScript client to interact with the Mistral AI API.

Installing

You can install the library in your project using:

npm install @mistralai/mistralai

Usage

You can watch a free course on using the Mistral JavaScript client here.

Set up

import MistralClient from '@mistralai/mistralai';

const apiKey = process.env.MISTRAL_API_KEY || 'your_api_key';

const client = new MistralClient(apiKey);

List models

const listModelsResponse = await client.listModels();
const listModels = listModelsResponse.data;
listModels.forEach((model) => {
  console.log('Model:', model);
});

Chat with streaming

const chatStreamResponse = await client.chatStream({
  model: 'mistral-tiny',
  messages: [{role: 'user', content: 'What is the best French cheese?'}],
});

console.log('Chat Stream:');
for await (const chunk of chatStreamResponse) {
  if (chunk.choices[0].delta.content !== undefined) {
    const streamText = chunk.choices[0].delta.content;
    process.stdout.write(streamText);
  }
}

Chat without streaming

const chatResponse = await client.chat({
  model: 'mistral-tiny',
  messages: [{role: 'user', content: 'What is the best French cheese?'}],
});

console.log('Chat:', chatResponse.choices[0].message.content);

Embeddings

// A batch of inputs; add more strings to embed several at once.
const input = ['What is the best French cheese?'];

const embeddingsBatchResponse = await client.embeddings({
  model: 'mistral-embed',
  input: input,
});

console.log('Embeddings Batch:', embeddingsBatchResponse.data);

Run examples

You can run the examples in the examples directory by installing them locally:

cd examples
npm install

API key setup

Running the examples requires a Mistral AI API key.

Get your own Mistral API Key: https://docs.mistral.ai/#api-access

Run the examples

MISTRAL_API_KEY='your_api_key' node chat_with_streaming.js

Persisting the API key in environment

Set your Mistral API Key as an environment variable. You only need to do this once.

# set Mistral API Key (using zsh for example)
$ echo 'export MISTRAL_API_KEY=[your_api_key]' >> ~/.zshenv

# reload the environment (or just quit and open a new terminal)
$ source ~/.zshenv

You can then run the examples without appending the API key:

node chat_with_streaming.js

After the environment variable is set up, the client will find MISTRAL_API_KEY on its own:

import MistralClient from '@mistralai/mistralai';

const client = new MistralClient();

client-js's People

Contributors

bam4d, christian24, flore2003, lerela, lgrammel, mattlgroff, mjdaoudi, mrloldev, perborgen, sanjeev-kadam, sanjeevkadam, sublimator, themataleao, theophilegervet


client-js's Issues

Compilation Error: Interface Property Initializer in client.d.ts

Hello

When attempting to compile a project that includes @mistralai/mistralai as a dependency, TypeScript compilation fails with an error related to an interface property having an initializer.

Error Output

$ tsc
node_modules/@mistralai/mistralai/src/client.d.ts:49:26 - error TS1246: An interface property cannot have an initializer.
49         type: ToolType = ToolType.function;
                            ~~~~~~~~~~~~~~~~~

Steps to Reproduce

  1. Include @mistralai/mistralai in a TypeScript project.
  2. Run the TypeScript compiler (tsc).
  3. Observe the compilation error related to client.d.ts.

Environment

  • TypeScript Version: 4.9.5
  • @mistralai/mistralai Version: 0.1.3
  • Node.js Version: v18.13.0
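Until the declaration file is fixed upstream, a common stopgap is to skip type-checking of declaration files in the consuming project's tsconfig.json; a minimal sketch:

{
  "compilerOptions": {
    "skipLibCheck": true
  }
}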

Using node-fetch makes some Node.js libraries incompatible with this library

Hello, I've been encountering problems related to fetch while using this library.

https://github.com/mistralai/client-js/blob/main/src/client.js#L11

This code overrides globalThis.fetch with an implementation from node-fetch, which behaves differently from Node.js's fetch and breaks several other libraries (@google-ai/generativelanguage in particular) that expect Node.js's own fetch (available since v18.0.0). I don't think the typeof window === 'undefined' check is required.

I don't know how many platforms this library should be compatible with, but since fetch is supported on all major browsers, Node.js, Deno, and Bun, I also suggest removing the initializeFetch function, the isNode variable, and the dependency on node-fetch.
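A minimal sketch of the suggested direction (the helper name is hypothetical, and nothing here patches globalThis): resolve fetch lazily, preferring the runtime's native implementation and importing node-fetch only as a fallback for older Node versions:

// Prefer native fetch (browsers, Node >= 18, Deno, Bun); fall back to
// node-fetch only when it is genuinely missing, without touching globalThis.
async function getFetch() {
  if (typeof globalThis.fetch === 'function') {
    return globalThis.fetch;
  }
  return (await import('node-fetch')).default;
}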

TypeScript typings are not picked up

Hello,

even though TypeScript typings are available in the source code, they are not picked up by my IDE. I guess this is due to the wrong file name being specified in package.json. I will try to fix it and provide a PR shortly.

No stopping conditions

It seems it is not possible to set stop sequences with the Mistral API. This is a very common pattern when running inference on LLMs to get fine-grained control over the output (OpenAI supports it, for example).

This isn't just an issue with the JS client; it seems there are no stop sequences in the REST API either.

Is this feature coming soon? If not what's the reasoning behind that?

Thanks!
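Until the API supports stop sequences, they can be approximated client-side by truncating the streamed output; a minimal sketch, assuming the client and chunk shape from the streaming example above (the stop string is illustrative):

const stop = '\n\n'; // illustrative stop sequence
let text = '';
const stream = await client.chatStream({
  model: 'mistral-tiny',
  messages: [{role: 'user', content: 'Name three French cheeses.'}],
});
for await (const chunk of stream) {
  text += chunk.choices[0].delta.content ?? '';
  const cut = text.indexOf(stop);
  if (cut !== -1) {
    text = text.slice(0, cut); // drop everything from the stop sequence on
    break; // and stop consuming the stream
  }
}
console.log(text);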

Feature Request: Usage/Billing API

It is of course possible to count tokens on the client side, but it would be nice to have an overview of the overall spending so far :)

No need for fancy charts or queries, even a single float "upcoming_bill": 12.34 is fine too :)

Missing typescript definition for tool_calls in ChatCompletionResponseChoice

Currently, only the streaming results have definitions for function calling. However, regular chat completion does return tool_calls, as suggested by the available JavaScript example in this repo.

ChatCompletionResponseChoice.message is missing the field tool_calls?: ToolCall[]

One strange thing, however, is that the example suggests the field is toolCalls in camel case, while the result returned by await client.chat(...) seems to include tool_calls instead.

CDN-hosted version of the mistralai JS SDK?

Hi folks!
For quick frontend-only experimentation purposes, it would be handy to be able to write code like this:

<script type="importmap">
  {
    "imports": {
      "@mistralai/mistralai": "https://{$CDN_HOSTED_LIB_URL}"
    }
  }
</script>
<script type="module">
  import MistralClient from '@mistralai/mistralai';
  const client = new MistralClient($API_KEY);
  // const chatResponseGen = await client.chat({ ... etc
</script>

It would require a CDN-hosted version of @mistralai/mistralai.
Is it an option for you to support something like this?

Note: No server / node component here, so the API key lives in the frontend. Arguably OK for quick local experimentation purposes.

Missing Typescript definitions

In this day of AI it's hardly much to share, but:

declare module '@mistralai/mistralai' {
  class MistralClient {
    constructor(apiKey?: string, endpoint?: string)

    private _request(method: string, path: string, request: any): Promise<any>

    private _makeChatCompletionRequest(
      model: string,
      messages: Array<{ role: string; content: string }>,
      temperature?: number,
      maxTokens?: number,
      topP?: number,
      randomSeed?: number,
      stream?: boolean,
      safeMode?: boolean
    ): object

    listModels(): Promise<any>

    chat(options: {
      model: string
      messages: Array<{ role: string; content: string }>
      temperature?: number
      maxTokens?: number
      topP?: number
      randomSeed?: number
      safeMode?: boolean
    }): Promise<any>

    chatStream(options: {
      model: string
      messages: Array<{ role: string; content: string }>
      temperature?: number
      maxTokens?: number
      topP?: number
      randomSeed?: number
      safeMode?: boolean
    }): AsyncGenerator<any, void, unknown>

    embeddings(options: {
      model: string
      input: string | string[]
    }): Promise<any>
  }

  export default MistralClient
}

Streaming example throws `JSON SyntaxError`

node chat_with_streaming.js
Chat Stream:
It's subjective to determine theundefined:1
{"id": "cmpl-794d708af6ed43aeab6a2a81390c5d90", "object": "chat.completion.chunk", "created": 1702672109, "model": "mistral-tiny", "choices": [{"index": 0, "delta": {"role": null, "content": " \"best\" French cheese as people have different preferences based

SyntaxError: Unterminated string in JSON at position 258
    at JSON.parse (<anonymous>)
    at MistralClient.chatStream (file:///Users/bracesproul/code/mistral-client-js/src/client.js:184:24)
    at chatStream.next (<anonymous>)
    at file:///Users/bracesproul/code/mistral-client-js/examples/chat_with_streaming.js:13:18
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)

Node.js v20.10.0

If I modify the client streaming code to this:

if (chunkLine.startsWith('data:')) {
  const chunkData = chunkLine.substring(6).trim();
  if (chunkData !== '[DONE]') {
    let parsedChunkData;
    // try/catch added
    try {
      parsedChunkData = JSON.parse(chunkData);
    } catch (e) {
      console.error('\nError parsing chunk data.\n');
      continue;
    }
    yield parsedChunkData;
  }
}

Streaming works because the continue swallows the parse error instead of letting it be thrown.
"Error parsing chunk data." is logged anywhere between 5 and 8 times per run (I ran the example ~5 times).
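The root cause is that HTTP chunks can end mid-JSON, so a more robust fix is to buffer the trailing partial line between network chunks instead of discarding unparseable data. A minimal sketch, assuming the raw response body is an async iterable of Uint8Array chunks:

// Buffer incomplete SSE lines across network chunks so JSON is only
// parsed once a full line has arrived; no tokens are silently dropped.
async function* parseSSE(body) {
  const decoder = new TextDecoder();
  let pending = '';
  for await (const bytes of body) {
    pending += decoder.decode(bytes, {stream: true});
    const lines = pending.split('\n');
    pending = lines.pop() ?? ''; // keep the possibly incomplete last line
    for (const line of lines) {
      if (!line.startsWith('data:')) continue;
      const data = line.slice(5).trim();
      if (data !== '[DONE]') yield JSON.parse(data);
    }
  }
}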

Top-level await

Hey Mistral Team!

Thank you for the amazing work and effort.

I am building a Raycast extension and wanted to use the Mistral API, but I faced this error: build failed (node_modules/@mistralai/mistralai/src/client.js:4:11 fetch = (await import('node-fetch')).default;): Top-level await is currently not supported with the "cjs" output format

I fixed it by changing in the client.js file:

if (typeof globalThis.fetch === 'undefined') {
  fetch = (await import('node-fetch')).default;
  isNode = true;
} else {
  fetch = globalThis.fetch;
}

into

async function initializeFetch() {
  if (typeof globalThis.fetch === 'undefined') {
    const nodeFetch = await import('node-fetch');
    fetch = nodeFetch.default;
    isNode = true;
  } else {
    fetch = globalThis.fetch;
  }
}

initializeFetch()

Can you take a look at that?

Thanks!

Version 001 Refactor for Typescript & Microbundle

Hi All,

I was playing around with the JS client and ran into a few environment compatibility issues, e.g. a conflict with JS's lovely module declaration rules.

I started writing a fix and, well, I ended up refactoring most of it. It should be completely backward compatible, i.e. the methods are named the same and behave the same.

I’ve managed other NPM libraries before, all for employers, and I believe this is a more solid foundation than what had been originally undertaken. It's in the v001-rewrite-ts-microbundle branch of my fork pbread/client-js.

Note: As I write this, I have the client rewritten but I still need to port over the tests and examples. I'll be doing that now but there might be some minor bugs I need to work through.

Here are the major enhancements:
Native TypeScript support: It's quite tedious to make a native JS library compatible with TS. Going from TS to JS is easier, and you benefit from type safety.

Microbundle: Microbundle is wonderful. There's almost zero configuration, and it takes care of polyfills, minification, cross-environment compatibility, TS declaration generation, etc.

Microbundle solves the original problem I ran into. And it will immediately add functionality, such as producing the UMD files necessary for browser <script> support.

Replaced node-fetch with isomorphic-fetch: isomorphic-fetch is just a wrapper around node-fetch that solves the globalThis.fetch compatibility problem.

Implemented private keyword: TS offers type safety, but it doesn't actually protect the property at runtime. This means the apiKey property can easily be sniffed by any other code in the environment. We can easily add some protection here. I'll open another issue for that.

Added versioned facade: Mistral's API is obviously very new. I expect the routes and parameters will change drastically in the coming months. I've added a versioned facade in front of the root-level methods, which will allow the client to add temporary fixes for breaking changes, version itself, etc.

Basically, the class has a private v1 property that aligns with Mistral's route structure, and the root-level methods call those methods:

client.listModels === client.v1.models.list
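A minimal sketch of that facade (internals are placeholders, not the real request logic):

class MistralClientSketch {
  // Version-scoped namespace mirroring the API's route structure.
  v1 = {
    models: {
      list: async () => {
        // ...perform GET /v1/models and return the parsed response...
      },
    },
  };

  // Root-level alias to the same function object, so the identity check
  // above (client.listModels === client.v1.models.list) holds.
  listModels = this.v1.models.list;
}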

HTTP 400: Tool call id was null but must be a-z, A-Z, 0-9, with a length of 9

I am getting this error when using one of the open-source models, but not with the largest one. The stack trace I am getting is the following:

MistralAPIError: HTTP error! status: 400 Response: 
{"object":"error","message":"Tool call id was null but must be a-z, A-Z, 0-9, with a length of 9.","type":"invalid_request_error","param":null,"code":null}
    at MistralClient._request (file://...@mistralai/mistralai/src/client.js:132:17)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async MistralClient.chatStream (file:///...@mistralai/mistralai/src/client.js:306:22)

It seems that the TypeScript definitions of the client force tool calls to have an ID equal to the literal string "null". This works fine with some models (mistral-large-latest) but raises errors with others (open-mixtral-8x22b).

The definition that is currently in the typescript client is this:

export interface ToolCalls {
    id: 'null';
    type: ToolType = ToolType.function;
    function: FunctionCall;
}

I am using Mistral's client within LangChain.
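Presumably the id should simply be typed as string, and the default for type belongs in implementation code rather than in the interface; a sketch of the corrected definition, reusing the ToolType and FunctionCall names from the existing typings:

export interface ToolCalls {
  id: string; // real ids, e.g. nine alphanumeric characters
  type: ToolType; // interfaces cannot carry initializers
  function: FunctionCall;
}

This would also resolve the TS1246 "interface property cannot have an initializer" compilation error reported above.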

Context Window size

Hello, it would be great if either the API (preferred) or the documentation could contain the following:

  • context window size (input), in tokens
  • max output tokens per call (e.g. latest OpenAI models have a 4096 limit, while before it was context_window - max_tokens)

At least for the three chat models, but even guidance on the '-embed' model would be of great help to developers.

Love your work!

Refactor arrow function class methods for better inheritance support

In MistralClient, you're using arrow functions to define class methods. This limits the ability to easily override these methods in subclasses, for example if I want a class like:

import MistralClient from "@mistralai/mistralai";

class ExtendedMistral extends MistralClient {
  constructor(apiKey: string, endpoint: string) {
    super(apiKey, endpoint);
  }

  override async chat(options: ChatOptions): Promise<ChatCompletionResponse> {
    // some extra logic here
    console.log("override");

    return await super.chat(options);
  }
}

Arrow functions in JavaScript bind this to the context where they're defined, not where they're called. When class methods are defined as arrow-function fields, they live on the instance rather than on the class prototype, so they ignore the class hierarchy and cannot be overridden in subclasses.

If chat is an arrow function, it's tied to the original MistralClient instance. ExtendedMistral cannot override it, because super.chat(options) looks for a method on the MistralClient prototype, which doesn't exist there.

Changing chat to a traditional method (async chat(options) { ... }) puts it on the class's prototype, letting this and super correctly refer to ExtendedMistral when overridden.

So instead of doing this in MistralClient:

chat = async function(options) {
  // Implementation
}

Do this:

async chat(options) {
  // Implementation
}

I could take this on if needed.

Versions out of sync

The CI publish step expects VERSION = '0.0.1':

echo "//registry.npmjs.org/:_authToken=${{ secrets.NPM_TOKEN }}" >> .npmrc
npm version ${{ github.ref_name }}
sed -i 's/VERSION = '\''0.0.1'\''/VERSION = '\''${{ github.ref_name }}'\''/g' src/client.js
npm publish

The client has VERSION = 0.0.3:

let isNode = false;
const VERSION = '0.0.3';
const RETRY_STATUS_CODES = [429, 500, 502, 503, 504];
const ENDPOINT = 'https://api.mistral.ai';

The package.json currently shows version 0.0.1:

{
  "name": "@mistralai/mistralai",
  "version": "0.0.1",
  "description": "",
  "author": "[email protected]",
The published npm version is 0.1.3: https://www.npmjs.com/package/@mistralai/mistralai

cat node_modules/@mistralai/mistralai/package.json 
{
  "name": "@mistralai/mistralai",
  "version": "0.1.3",
  "description": "",

Streaming doesn't work in the browser

The provided value 'stream' is not a valid enum value of type XMLHttpRequestResponseType.

I ended up just using my own fetch-based implementation that uses eventsource-parser, but took some time to leave feedback here.
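For reference, a browser-side sketch of that fetch-based approach (the endpoint path and payload shape are assumed from the client's examples, and the raw SSE lines still need parsing):

const apiKey = 'your_api_key'; // frontend-only, for local experimentation
const res = await fetch('https://api.mistral.ai/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${apiKey}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'mistral-tiny',
    stream: true,
    messages: [{role: 'user', content: 'What is the best French cheese?'}],
  }),
});

// Unlike XHR, fetch exposes the body as a ReadableStream in browsers.
const reader = res.body.getReader();
const decoder = new TextDecoder();
for (;;) {
  const {done, value} = await reader.read();
  if (done) break;
  console.log(decoder.decode(value, {stream: true})); // raw "data: ..." lines
}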

Typescript types pointing to wrong file

Just installed the client. VS Code can't find the TS types, probably because this field in package.json is incorrect: "types": "src/mistralai.d.ts" when it should be "types": "src/client.d.ts".

Error with Cloudflare Workers

Hi!

When I use this JS client with the docs example to retrieve the chat response from the Mistral API, I get the following error:

  "logs": [
    {
      "message": [
        {
          "message": "There is no suitable adapter to dispatch the request since :\n- adapter xhr is not supported by the environment\n- adapter http is not available in the build",
          "name": "AxiosError",
          "stack": "AxiosError: There is no suitable adapter to dispatch the request since :\n- adapter xhr is not supported by the environment\n- adapter http is not available in the build\n    at Object.getAdapter (worker.js:1724:15)\n    at Axios.dispatchRequest (worker.js:1753:38)\n    at async MistralClient._request (worker.js:2460:24)\n    at async MistralClient.chat (worker.js:2537:24)\n    at async mistralChat (worker.js:2669:26)\n    at async onUpdate (worker.js:2643:7)",
          "code": "ERR_NOT_SUPPORT",
          "status": null
        }
      ],
      "level": "error"

Client does not return a response

Hi there,

I'm running the latest version of the SDK (0.1.3), but when I try to initialize and call the client, it does not return anything.

Here is my code:

const mistral = new MistralClient(env.PUBLIC_MISTRAL_API_KEY)
const response = await mistral.chatStream({
	model: 'mistral-large-latest',
	messages: [{ role: 'system', content: 'Say hello world.' }],
	temperature: 0
})

// response is an empty object {}
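One plausible cause: chatStream returns an async generator, which logs as an empty object until it is iterated. Consuming it as in the streaming example above should produce output:

for await (const chunk of response) {
  process.stdout.write(chunk.choices[0].delta.content ?? '');
}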

CORS issues with frontend applications

Currently, client-js is not usable directly within frontend applications, since CORS restrictions apply to the Mistral API endpoint.

Are any open endpoints in sight for testing purposes?

Continuous retry on hitting rate limit

The rate limit response code is 429, which is one of the retry codes in

const RETRY_STATUS_CODES = [429, 500, 502, 503, 504];

So this creates a continuous retry cycle:

    } else if (RETRY_STATUS_CODES.includes(response.status)) {
      console.debug(
        `Retrying request on response status: ${response.status}`,
        `Response: ${await response.text()}`,
        `Attempt: ${attempts + 1}`,
      );
      // eslint-disable-next-line max-len
      await new Promise((resolve) =>
        setTimeout(resolve, Math.pow(2, (attempts + 1)) * 500),
      );
      
I had around 50 requests in a few seconds. The more requests I make, the more the rate limit is triggered.

So either the response code for rate limits should be changed, or ideally the client should do a sliding-window retry with extended delays.
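A mitigation sketch along the lines suggested (the helper name and attempt cap are illustrative, not part of the client): cap retries and honor a standard Retry-After header when the server provides one:

const RETRY_STATUS_CODES = [429, 500, 502, 503, 504];

async function requestWithBackoff(url, options, maxAttempts = 5) {
  for (let attempts = 0; attempts < maxAttempts; attempts++) {
    const response = await fetch(url, options);
    if (!RETRY_STATUS_CODES.includes(response.status)) return response;
    // Prefer the server-provided Retry-After (seconds); otherwise fall back
    // to the client's existing exponential backoff.
    const retryAfter = Number(response.headers.get('retry-after'));
    const delayMs = retryAfter > 0 ? retryAfter * 1000 : Math.pow(2, attempts + 1) * 500;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error(`Giving up after ${maxAttempts} attempts`);
}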

Client is not working on NodeJS

When I put this line in a Node.js request handler, it freezes my backend without ever creating the client:

    const client = new MistralClient(apiKey);

A simple JS script run via node index.js works fine, though. Are there any caveats when using MistralClient in backend request handlers?
