Package org.springframework.ai.ollama
Class OllamaChatClient
java.lang.Object
    org.springframework.ai.ollama.OllamaChatClient
- All Implemented Interfaces:
  - ChatClient, StreamingChatClient, ModelClient<Prompt, ChatResponse>, StreamingModelClient<Prompt, ChatResponse>
ChatClient implementation for Ollama. Ollama allows developers to run large language models and generate embeddings locally. It supports open-source models available on the [Ollama AI Library](https://ollama.ai/library), for example:

- Llama 2 (7B parameters, 3.8GB size)
- Mistral (7B parameters, 4.1GB size)

Please refer to the official Ollama website for the most up-to-date information on available models.

- Since:
  - 0.8.0
- Author:
  - Christian Tzolov
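A minimal usage sketch, assuming the 0.8.x API shape: the no-arg OllamaApi constructor targeting a local server on the default port, the single-argument OllamaChatClient(OllamaApi) constructor, and the getResult().getOutput().getContent() accessor chain are assumptions not spelled out on this page.

```java
import org.springframework.ai.chat.ChatResponse;
import org.springframework.ai.chat.prompt.Prompt;
import org.springframework.ai.ollama.OllamaChatClient;
import org.springframework.ai.ollama.api.OllamaApi;

public class OllamaChatExample {

    public static void main(String[] args) {
        // Assumes an Ollama server running locally on the default port (11434)
        // and a model already pulled, e.g. `ollama pull mistral`.
        OllamaApi ollamaApi = new OllamaApi();
        OllamaChatClient chatClient = new OllamaChatClient(ollamaApi);

        // Blocking, single-shot exchange; see call(Prompt) below.
        ChatResponse response = chatClient.call(new Prompt("Tell me a joke"));
        System.out.println(response.getResult().getOutput().getContent());
    }
}
```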
- Constructor Summary

- Method Summary

| Modifier and Type | Method | Description |
| --- | --- | --- |
| ChatResponse | call(Prompt prompt) | Executes a method call to the AI model. |
| reactor.core.publisher.Flux<ChatResponse> | stream(Prompt prompt) | Executes a method call to the AI model. |
| OllamaChatClient | withDefaultOptions(OllamaOptions options) | Deprecated. |
| OllamaChatClient | withModel(String model) | Deprecated. |

Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface org.springframework.ai.chat.ChatClient:
call

Methods inherited from interface org.springframework.ai.chat.StreamingChatClient:
stream
- Constructor Details

  - OllamaChatClient
- Method Details

  - withModel
    Deprecated. Use OllamaOptions.setModel(java.lang.String) instead.

  - withDefaultOptions
    Deprecated.
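Since both mutators are deprecated, model selection is meant to go through OllamaOptions instead. A hedged sketch of the replacement; the OllamaOptions.create() factory, the withTemperature mutator, and the two-argument Prompt constructor are assumptions based on the 0.8.x fluent style:

```java
import org.springframework.ai.chat.prompt.Prompt;
import org.springframework.ai.ollama.api.OllamaOptions;

// Configure the model on the options object instead of on the client.
OllamaOptions options = OllamaOptions.create()
        .withModel("mistral")      // model name from the Ollama library
        .withTemperature(0.7f);    // optional sampling parameter (assumed mutator)

// The options can then be attached to an individual request.
Prompt prompt = new Prompt("Summarize this paragraph.", options);
```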
  - call
    Description copied from interface: ModelClient
    Executes a method call to the AI model.
    - Specified by:
      - call in interface ChatClient
      - call in interface ModelClient<Prompt, ChatResponse>
    - Parameters:
      - prompt - the request object to be sent to the AI model
    - Returns:
      - the response from the AI model
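For illustration, a blocking call against the chatClient constructed in the earlier sketch; the accessor chain on ChatResponse is an assumption carried over from that sketch:

```java
import org.springframework.ai.chat.ChatResponse;
import org.springframework.ai.chat.prompt.Prompt;

// Single blocking request/response exchange.
ChatResponse response = chatClient.call(new Prompt("What is Ollama?"));
System.out.println(response.getResult().getOutput().getContent());
```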
  - stream
    Description copied from interface: StreamingModelClient
    Executes a method call to the AI model.
    - Specified by:
      - stream in interface StreamingChatClient
      - stream in interface StreamingModelClient<Prompt, ChatResponse>
    - Parameters:
      - prompt - the request object to be sent to the AI model
    - Returns:
      - the streaming response from the AI model
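A hedged streaming sketch: partial responses arrive as ChatResponse chunks on a Reactor Flux, and the per-chunk accessor chain is assumed to mirror the blocking call above:

```java
import org.springframework.ai.chat.ChatResponse;
import org.springframework.ai.chat.prompt.Prompt;

import reactor.core.publisher.Flux;

// Streaming exchange (chatClient from the sketch above): print each
// partial response as it arrives instead of waiting for the full reply.
Flux<ChatResponse> stream = chatClient.stream(new Prompt("Write a haiku about rivers"));
stream.toStream()
      .forEach(chunk -> System.out.print(chunk.getResult().getOutput().getContent()));
```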