ollama4j's Introduction

Ollama4j

A Java library (wrapper/binding) for Ollama server.

Find more details on the website.

How does it work?

  flowchart LR
    o4j[Ollama4j]
    o[Ollama Server]
    o4j -->|Communicates with| o;
    m[Models]
    subgraph Ollama Deployment
        direction TB
        o -->|Manages| m
    end

Requirements

  • Java 11 or newer
  • An Ollama server (a local installation or a Docker deployment)

Installation

In your Maven project, add this dependency:

<dependency>
    <groupId>io.github.amithkoujalgi</groupId>
    <artifactId>ollama4j</artifactId>
    <version>1.0.57</version>
</dependency>

The latest release version is available on Maven Central.
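
After adding the dependency, a minimal connectivity check could look like the sketch below. The host URL and timeout are placeholders, and ping() is assumed to behave as in the project's documented examples.

import io.github.amithkoujalgi.ollama4j.core.OllamaAPI;

public class OllamaPing {
    public static void main(String[] args) {
        // Host of a running Ollama server; adjust to your setup
        OllamaAPI ollamaAPI = new OllamaAPI("http://localhost:11434/");
        ollamaAPI.setRequestTimeoutSeconds(120);
        ollamaAPI.setVerbose(true);
        // ping() is expected to return true when the server is reachable
        System.out.println("Ollama reachable: " + ollamaAPI.ping());
    }
}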

API Spec

Find the full API specifications on the website.

Development

Build:

make build

Run unit tests:

make ut

Run integration tests:

make it

Releases

New artifact versions are released automatically whenever code is pushed to the main branch, via the GitHub Actions CI workflow.

Areas of improvement

  • Use Java naming conventions for attributes in the request/response models instead of snake_case, possibly with Jackson's @JsonProperty (see the sketch after this list)
  • Fix deprecated HTTP client code
  • Setup logging
  • Use Lombok
  • Update request body creation with Java objects
  • Async APIs for images
  • Add custom headers to requests
  • Add additional params for ask APIs, such as:
    • options: additional model parameters from the Modelfile, such as temperature (see Ollama's supported params)
    • system: system prompt (overrides what is defined in the Modelfile)
    • template: the full prompt or prompt template (overrides what is defined in the Modelfile)
    • context: the context parameter returned from a previous request, which can be used to keep a short conversational memory
    • stream: add support for streaming responses from the model
  • Add test cases
  • Handle exceptions better (maybe throw more appropriate exceptions)
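
As a sketch of the @JsonProperty idea from the first item above: a response model can keep Java-style field names while still binding to Ollama's snake_case JSON keys. The class and field names here are illustrative, not the library's actual models.

import com.fasterxml.jackson.annotation.JsonProperty;

public class ResponseModelSketch {
    // Binds the snake_case JSON key to a camelCase Java field
    @JsonProperty("eval_count")
    private long evalCount;

    @JsonProperty("total_duration")
    private long totalDuration;

    public long getEvalCount() { return evalCount; }
    public long getTotalDuration() { return totalDuration; }
}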

Get Involved

Contributions are most welcome! Whether it's reporting a bug, proposing an enhancement, or helping with code - any sort of contribution is much appreciated.

Credits

The nomenclature and the icon have been adopted from the incredible Ollama project.

ollama4j's People

Contributors

agentschmecker, amithkoujalgi, anjeongkyun, mgmacleod, michaelwechner, omcodedthis, reckart

ollama4j's Issues

Empty Response when ollama is very slow

Issue

I get an empty response from the library, but my tcpflow logs show that the response does come later.

{
  "role" : "assistant",
  "content" : "",
  "images" : null
}

Details

I have an old laptop with only a dual-core i5 at 2.7 GHz and am running llama3 on Ollama.
I created a simple app to test ollama4j with llama3, but I am getting empty responses back, even though I set a huge request timeout.

Here is my code:

String host = "http://localhost:11434/";
String model = "llama3";
OllamaAPI ollamaAPI = new OllamaAPI(host);
ollamaAPI.setRequestTimeoutSeconds(600000);
ollamaAPI.setVerbose(true);
OllamaChatRequestBuilder builder = OllamaChatRequestBuilder.getInstance(model);
Options options =
    new OptionsBuilder()
        .setTemperature(0.2f)
        .setNumCtx(example.getModel().getCompletionLength())
        .setTopK(1)
        .setTopP(0.9F)
        .build();
OllamaChatRequestModel requestModel = builder
    .withMessage(OllamaChatMessageRole.SYSTEM, example.getSystemPrompt())
    .withMessage(OllamaChatMessageRole.USER, "What can you help me with?")
    .withOptions(options)
    .build();
System.out.println("Ollama request: " + requestModel.toString());
OllamaChatResult chatResult = ollamaAPI.chat(requestModel);
System.out.println("Ollama answer: " + chatResult.getHttpStatusCode()
    + " in seconds: " + chatResult.getResponseTime() + ":\n" + chatResult.getResponse());

And my logs show this:

Ollama request: {
  "model" : "llama3",
  "options" : {
    "top_p" : 0.9,
    "top_k" : 1,
    "temperature" : 0.2,
    "num_ctx" : 1024
  },
  "stream" : false,
  "messages" : [ {
    "role" : "system",
    "content" : "You are a helpful customer service representative for a credit card company who helps answer customer questions about their past transactions and spending history. Today's date is January 18th, 2024. You provide precise answers and use functions to look up information...",
    "images" : null
  }, {
    "role" : "user",
    "content" : "What can you help me with?",
    "images" : null
  } ]
}
Ollama answer: 200 in seconds: 108372:

If I look into the message history, I basically see

{
  "role" : "assistant",
  "content" : "",
  "images" : null
}

But if I check my tcpflow logs, I can see a response:

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Date: Tue, 14 May 2024 10:29:15 GMT
Content-Length: 801

{
   "model":"llama3",
   "created_at":"2024-05-14T10:29:15.284946Z",
   "message":{
      "role":"assistant",
      "content":"I'm happy to assist you with any questions or concerns you may have about your credit card account. I can help you:\n\n* Review your transaction history and spending habits\n* Check your available credit limit and current balance\n* Provide information on rewards and benefits associated with your card\n* Help you track your spending by category (e.g., groceries, entertainment, etc.)\n* Offer suggestions for managing your debt or improving your financial situation\n\nWhat specific area would you like me to help you with today?"
   },
   "done_reason":"stop",
   "done":true,
   "total_duration":54914791839,
   "load_duration":15113419,
   "prompt_eval_duration":804784000,
   "eval_count":101,
   "eval_duration":54082687000
}

Trying the streaming API works, but not the synchronous one.
Do you have any idea what the problem is?

`evalCount` in response from ollama cannot be parsed

I sometimes get the error:

UnrecognizedPropertyException: Unrecognized field "eval_count" (class io.github.amithkoujalgi.ollama4j.core.models.OllamaResponseModel), not marked as ignorable (11 known properties: "response", "done", "evalCount", "eval_duration", "model", "created_at", "prompt_eval_duration", "load_duration", "context", "prompt_eval_count", "total_duration"])

Unrecognized field "done_reason"

After the latest update of the Ollama client (version: 6.2.2, May 12, 2024), I encounter the following exceptions:

ERROR io.github.amithkoujalgi.ollama4j.core.models.request.OllamaChatEndpointCaller - Error parsing the Ollama chat response!
com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException: Unrecognized field "done_reason" (class io.github.amithkoujalgi.ollama4j.core.models.chat.OllamaChatResponseModel), not marked as ignorable (12 known properties: "done", "message", "error", "model", "created_at", "prompt_eval_duration", "load_duration", "context", "eval_duration", "eval_count", "total_duration", "prompt_eval_count"])
at [Source: REDACTED (StreamReadFeature.INCLUDE_SOURCE_IN_LOCATION disabled); line: 1, column: 142] (through reference chain: io.github.amithkoujalgi.ollama4j.core.models.chat.OllamaChatResponseModel["done_reason"])
at com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyException.java:61)
at com.fasterxml.jackson.databind.DeserializationContext.handleUnknownProperty(DeserializationContext.java:1153)
at com.fasterxml.jackson.databind.deser.std.StdDeserializer.handleUnknownProperty(StdDeserializer.java:2241)
at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.handleUnknownProperty(BeanDeserializerBase.java:1793)
at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.handleUnknownVanilla(BeanDeserializerBase.java:1771)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:316)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:177)
at com.fasterxml.jackson.databind.deser.DefaultDeserializationContext.readRootValue(DefaultDeserializationContext.java:342)
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4905)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3848)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3816)
at io.github.amithkoujalgi.ollama4j.core.models.request.OllamaChatEndpointCaller.parseResponseAndAddToBuffer(OllamaChatEndpointCaller.java:40)
at io.github.amithkoujalgi.ollama4j.core.models.request.OllamaEndpointCaller.callSync(OllamaEndpointCaller.java:99)
at io.github.amithkoujalgi.ollama4j.core.OllamaAPI.chat(OllamaAPI.java:521)
at io.github.amithkoujalgi.ollama4j.core.OllamaAPI.chat(OllamaAPI.java:498)
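
One common way to make Jackson tolerant of fields the model class does not know about, such as "done_reason" above, is to ignore unknown properties, either per class or globally on the ObjectMapper. This is a sketch of the general technique, not necessarily how ollama4j addressed it.

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

// Per-class: unknown JSON fields are skipped instead of throwing
@JsonIgnoreProperties(ignoreUnknown = true)
class TolerantResponseModel {
    public String model;
    public boolean done;
}

public class JacksonToleranceSketch {
    public static void main(String[] args) throws Exception {
        // Globally: the mapper no longer throws UnrecognizedPropertyException
        ObjectMapper mapper = new ObjectMapper()
            .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
        String json = "{\"model\":\"llama3\",\"done\":true,\"done_reason\":\"stop\"}";
        TolerantResponseModel m = mapper.readValue(json, TolerantResponseModel.class);
        System.out.println(m.model + " done=" + m.done);
    }
}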

Set options / temperature

According to the Ollama documentation, one can set the temperature using options:

https://github.com/jmorganca/ollama/blob/main/docs/api.md#parameters
https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values

{ "model": "mistral", "stream":false, "prompt":"How many moons does earth have?", "options":{"temperature":0} }

IIUC, one cannot set the temperature / options using ollama4j yet.

I would like to suggest introducing another "ask" method where one can set options as well:

Options options = new Options();
options.setTemperature(0.7);
ollamaAPI.ask(String, String, Options)

WDYT?

Thanks

Michael

Logging backend should not be a transitive dependency

Ollama4j uses logback as a logging backend and has a transitive dependency on it.

Libraries should not have transitive dependencies on logging backends.

If you need the backend during testing, best use the test scope on the dependency.

In general, the application which uses the ollama4j library should define a logging backend for ollama4j to use.
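
For illustration, keeping the backend available only for ollama4j's own tests might look like the snippet below in its pom.xml; the logback-classic coordinates are the standard ones, and the version shown is only an example.

<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <!-- example version; pick whatever matches the project -->
    <version>1.2.11</version>
    <scope>test</scope>
</dependency>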

Update Code Due to Changes in generate Method of PromptBuilder

Description

The generate method of PromptBuilder has undergone changes due to recent updates. Consequently, the code provided in the example no longer functions correctly.

Previously, the method was called as follows:

OllamaResult response = ollamaAPI.generate(model, promptBuilder.build());

However, in the latest version, additional options need to be passed to the generate method. Therefore, the code needs to be modified as follows:

OllamaResult response = ollamaAPI.generate(model, promptBuilder.build(), new Options());

Updating the code to reflect this change should prevent any further compatibility issues.
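
Put together, the updated call might look like the sketch below. The PromptBuilder methods and the package paths are assumptions based on the 1.0.x layout; OptionsBuilder appears in the chat example earlier on this page.

// Package paths below are assumed from the ollama4j 1.0.x layout
import io.github.amithkoujalgi.ollama4j.core.OllamaAPI;
import io.github.amithkoujalgi.ollama4j.core.models.OllamaResult;
import io.github.amithkoujalgi.ollama4j.core.utils.Options;
import io.github.amithkoujalgi.ollama4j.core.utils.OptionsBuilder;
import io.github.amithkoujalgi.ollama4j.core.utils.PromptBuilder;

public class GenerateSketch {
    public static void main(String[] args) throws Exception {
        OllamaAPI ollamaAPI = new OllamaAPI("http://localhost:11434/");
        String model = "llama3"; // any locally available model
        PromptBuilder promptBuilder = new PromptBuilder()
            .addLine("Recite a haiku about recursion."); // addLine(...) is an assumption
        Options options = new OptionsBuilder().setTemperature(0.7f).build();
        // The Options argument is now required, per the change described above
        OllamaResult response = ollamaAPI.generate(model, promptBuilder.build(), options);
        System.out.println(response.getResponse());
    }
}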

Basic Auth or Authorization Header Bearer Token

Do you have plans to support

  • Either Basic Auth
  • Or an Authorization Header Bearer Token

?

I understand Ollama does not support this, but when using a reverse proxy in front of Ollama, this would help a lot regarding security :-)

Thanks

Michael
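
For context, this is what the desired header looks like at the HTTP level; the sketch below uses plain java.net.http (not the ollama4j API) against a reverse proxy in front of Ollama, and the token is a placeholder.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BearerSketch {
    public static void main(String[] args) throws Exception {
        String token = "<your-token>"; // placeholder issued by your proxy
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:11434/api/tags")) // Ollama's list-models endpoint
            .header("Authorization", "Bearer " + token)
            .GET()
            .build();
        HttpResponse<String> response =
            client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}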

Certain requests fail with a 400 Bad Request

It appears that certain requests generated by the ask() method trigger a 400 Bad Request on the Ollama side.

Trying the same request manually works though (i.e. copying the JSON generated by ollamaRequestModel.toString() and posting it manually).

Change your logo with Duke instead

Friendly suggestion: it would be better to use the Java Duke logo instead of the trademarked Java logo. I speak from experience :) Otherwise, a very interesting project, keep it up!

Extend generate API Requests by advanced parameters

In addition to the /api/chat endpoint, the system prompt parameter (and other parameters that override model behaviours) can also be provided to requests at /api/generate.

Thus, these should also be made available in the ollama4j API.

See:
Advanced parameters (optional):

format: the format to return a response in. Currently the only accepted value is json

system: system message (overrides what is defined in the Modelfile)

template: the prompt template to use (overrides what is defined in the Modelfile)

context: the context parameter returned from a previous request to /generate, this can be used to keep a short conversational memory

raw: if true no formatting will be applied to the prompt. You may choose to use the raw parameter if you are specifying a full templated prompt in your request to the API

keep_alive: controls how long the model will stay loaded into memory following the request (default: 5m)

Originally posted by @AgentSchmecker in #20 (comment)
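
A /api/generate request using some of these parameters might look like the following; the field names come from the Ollama docs quoted above, and the values are illustrative.

{
  "model": "mistral",
  "prompt": "How many moons does Earth have?",
  "system": "You are a concise assistant.",
  "stream": false,
  "keep_alive": "5m"
}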
