This version is still in development and is not considered stable yet. For the latest snapshot version, please use Spring AI 1.0.0-SNAPSHOT!
Upgrade Notes
Upgrading to 1.0.0-SNAPSHOT
Overview
The 1.0.0-SNAPSHOT version includes significant changes to artifact IDs, package names, and module structure. This section provides guidance specific to using the SNAPSHOT version.
Add Snapshot Repositories
To use the 1.0.0-SNAPSHOT version, you need to add the snapshot repositories to your build file. For detailed instructions, refer to the Snapshots - Add Snapshot Repositories section in the Getting Started guide.
Update Dependency Management
Update your Spring AI BOM version to 1.0.0-SNAPSHOT in your build configuration.
For detailed instructions on configuring dependency management, refer to the Dependency Management section in the Getting Started guide.
Artifact ID, Package, and Module Changes
The 1.0.0-SNAPSHOT includes changes to artifact IDs, package names, and module structure.
For details, refer to:
- Common Artifact ID Changes
- Common Package Changes
- Common Module Structure
Automating the upgrade using AI
You can automate the upgrade process to 1.0.0-SNAPSHOT using the Claude Code CLI tool with a provided prompt. The prompt will guide the AI to perform the following tasks:
- Update the Spring AI BOM version to 1.0.0-SNAPSHOT
- Ensure all required repositories exist in your build configuration
- Update Spring AI artifact IDs according to the new naming patterns
To use this automation:
- Download the Claude Code CLI tool
- Copy the prompt from the update-to-snapshot.txt file
- Paste the prompt into the Claude Code CLI
- The AI will analyze your project and make the necessary changes
This approach can save time and reduce the chance of errors when upgrading multiple projects or complex codebases.
Upgrading to 1.0.0-M8
You can automate the upgrade process to 1.0.0-M8 using an OpenRewrite recipe. This recipe helps apply many of the necessary code changes for this version. Find the recipe and usage instructions at Arconia Spring AI Migrations.
Chat Client
- The ChatClient has been enhanced to resolve inconsistencies and unwanted behavior that occurred whenever user and system prompts were not rendered before being used in an advisor. The new behavior ensures that the user and system prompts are always rendered before the chain of advisors is executed. As part of this enhancement, the AdvisedRequest and AdvisedResponse APIs have been deprecated, replaced by ChatClientRequest and ChatClientResponse. Advisors now act on a fully built Prompt object included in a ChatClientRequest instead of the destructured format used in AdvisedRequest, guaranteeing consistency and completeness.
For example, if you had a custom advisor that modified the request prompt in the before method, you would refactor it as follows:
// --- Before (using AdvisedRequest) ---
@Override
public AdvisedRequest before(AdvisedRequest advisedRequest) {
    // Access original user text and parameters directly from AdvisedRequest
    String originalUserText = new PromptTemplate(advisedRequest.userText(), advisedRequest.userParams()).render();

    // ... retrieve documents, create augmented prompt text ...
    List<Document> retrievedDocuments = ...;
    String augmentedPromptText = ...; // create augmented text from originalUserText and retrievedDocuments

    // Copy existing context and add advisor-specific data
    Map<String, Object> context = new HashMap<>(advisedRequest.adviseContext());
    context.put("retrievedDocuments", retrievedDocuments); // Example key

    // Use the AdvisedRequest builder pattern to return the modified request
    return AdvisedRequest.from(advisedRequest)
            .userText(augmentedPromptText) // Set the augmented user text
            .adviseContext(context) // Set the updated context
            .build();
}
// --- After (using ChatClientRequest) ---
@Override
public ChatClientRequest before(ChatClientRequest chatClientRequest, AdvisorChain chain) {
    String originalUserText = chatClientRequest.prompt().getUserMessage().getText(); // Access prompt directly

    // ... retrieve documents ...
    List<Document> retrievedDocuments = ...;
    String augmentedQueryText = ...; // create augmented text

    // Initialize context with existing data and add advisor-specific data
    Map<String, Object> context = new HashMap<>(chatClientRequest.context()); (1)
    context.put("retrievedDocuments", retrievedDocuments); // Example key
    context.put("originalUserQuery", originalUserText); // Example key

    // Use immutable operations
    return chatClientRequest.mutate()
            .prompt(chatClientRequest.prompt()
                    .augmentUserMessage(augmentedQueryText) (2)
            )
            .context(context) (3)
            .build();
}
(1) Initialize the context map with data from the incoming request (chatClientRequest.context()) to preserve context from previous advisors, then add new data.
(2) Use methods like prompt.augmentUserMessage() to modify the prompt content safely.
(3) Pass the updated context map. This map becomes part of the ChatClientRequest and is accessible later via ChatClientResponse.responseContext() in the after method.
In addition to directly replacing the user message text with augmentUserMessage(String), you can provide a function to modify the existing UserMessage more granularly:
Prompt originalPrompt = new Prompt(new UserMessage("Tell me about Large Language Models."));
// Example: Append context or modify properties using a Function
Prompt augmentedPrompt = originalPrompt.augmentUserMessage(userMessage ->
userMessage.mutate()
.text(userMessage.getText() + "\n\nFocus on their applications in software development.")
// .media(...) // Potentially add/modify media
// .metadata(...) // Potentially add/modify metadata
.build()
);
// 'augmentedPrompt' now contains the modified UserMessage
This approach offers more control when you need to conditionally change parts of the UserMessage or work with its media and metadata, rather than just replacing the text content.
- The overloaded tools methods in the ChatClient prompt builder API have been renamed for clarity and to avoid ambiguity in method dispatching based on argument types.
  - ChatClient.PromptRequestSpec#tools(String... toolNames) has been renamed to ChatClient.PromptRequestSpec#toolNames(String... toolNames). Use this method to specify the names of tool functions (registered elsewhere, e.g., via @Bean definitions with @Description) that the model is allowed to call.
  - ChatClient.PromptRequestSpec#tools(ToolCallback... toolCallbacks) has been renamed to ChatClient.PromptRequestSpec#toolCallbacks(ToolCallback... toolCallbacks). Use this method to provide inline ToolCallback instances, which include the function implementation, name, description, and input type definition.
This change addresses potential confusion where the Java compiler might not select the intended overload based on the provided arguments.
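As an illustration of the renamed methods, consider this hedged sketch; the chatClient instance, the "currentWeather" bean-registered tool, and the weatherToolCallback instance are assumed to exist and are not part of the upgrade notes:

```java
// Sketch only: 'chatClient' and 'weatherToolCallback' are assumed to exist.
String answer = chatClient.prompt()
        .user("What is the weather in Amsterdam?")
        // Formerly .tools(String...): refer to tools registered elsewhere by name
        .toolNames("currentWeather")
        // Formerly .tools(ToolCallback...): pass inline tool implementations
        .toolCallbacks(weatherToolCallback)
        .call()
        .content();
```

Because the two variants now have distinct names, the compiler can no longer pick an unintended overload based on the argument types.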
Prompt Templating and Advisors
Several classes and methods related to prompt creation and advisor customization have been deprecated in favor of more flexible approaches using the builder pattern and the TemplateRenderer interface.
See PromptTemplate for details on the new API.
PromptTemplate Deprecations
The PromptTemplate class has deprecated several constructors and methods related to the older templateFormat enum and direct variable injection:
- Constructors: PromptTemplate(String template, Map<String, Object> variables) and PromptTemplate(Resource resource, Map<String, Object> variables) are deprecated.
- Fields: template and templateFormat are deprecated.
- Methods: getTemplateFormat(), getInputVariables(), and validate(Map<String, Object> model) are deprecated.
Migration: Use the PromptTemplate.builder() pattern to create instances. Provide the template string via .template() and optionally configure a custom TemplateRenderer via .renderer(). Pass variables using .variables().
// Before (Deprecated)
PromptTemplate oldTemplate = new PromptTemplate("Hello {name}", Map.of("name", "World"));
String oldRendered = oldTemplate.render(); // Variables passed at construction
// After (Using Builder)
PromptTemplate newTemplate = PromptTemplate.builder()
.template("Hello {name}")
.variables(Map.of("name", "World")) // Variables passed during builder configuration
.build();
Prompt prompt = newTemplate.create(); // Create prompt using baked-in variables
String newRendered = prompt.getContents(); // Or use newTemplate.render()
QuestionAnswerAdvisor Deprecations
The QuestionAnswerAdvisor has deprecated constructors and builder methods that relied on a simple userTextAdvise string:
- Constructors taking a userTextAdvise String argument are deprecated.
- Builder method: userTextAdvise(String userTextAdvise) is deprecated.
Migration: Use the .promptTemplate(PromptTemplate promptTemplate) builder method to provide a fully configured PromptTemplate object for customizing how retrieved context is merged.
// Before (Deprecated)
QuestionAnswerAdvisor oldAdvisor = QuestionAnswerAdvisor.builder(vectorStore)
.userTextAdvise("Context: {question_answer_context} Question: {question}") // Simple string
.build();
// After (Using PromptTemplate)
PromptTemplate customTemplate = PromptTemplate.builder()
.template("Context: {question_answer_context} Question: {question}")
.build();
QuestionAnswerAdvisor newAdvisor = QuestionAnswerAdvisor.builder(vectorStore)
.promptTemplate(customTemplate) // Provide PromptTemplate object
.build();
Chat Memory
- A ChatMemory bean is auto-configured for you whenever you use one of the Spring AI model starters. By default, it uses the MessageWindowChatMemory implementation and stores the conversation history in memory.
- The ChatMemory API has been enhanced to support a more flexible and extensible way of managing conversation history. The storage mechanism has been decoupled from the ChatMemory interface and is now handled by a new ChatMemoryRepository interface. The ChatMemory API can now be used to implement different memory strategies without being tied to a specific storage mechanism. By default, Spring AI provides a MessageWindowChatMemory implementation that maintains a window of messages up to a specified maximum size.
- The get(String conversationId, int lastN) method in ChatMemory has been deprecated in favour of using MessageWindowChatMemory when messages need to be kept in memory up to a certain limit. The get(String conversationId) method is now the preferred way to retrieve messages from the memory, and the specific ChatMemory implementation decides the strategy for filtering, processing, and returning messages.
- JdbcChatMemory has been deprecated in favour of using JdbcChatMemoryRepository together with a ChatMemory implementation such as MessageWindowChatMemory. If you were relying on an auto-configured JdbcChatMemory bean, you can replace it by auto-wiring a ChatMemory bean, which is auto-configured to use the JdbcChatMemoryRepository internally for storing messages whenever the related dependency is on the classpath.
- The spring.ai.chat.memory.jdbc.initialize-schema property has been deprecated in favor of spring.ai.chat.memory.repository.jdbc.initialize-schema.
- Refer to the new Chat Memory documentation for more details on the new API and how to use it.
- The MessageWindowChatMemory.get(String conversationId, int lastN) method is deprecated. The window size is now managed internally based on the configuration provided during instantiation, so only get(String conversationId) should be used.
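If you previously relied on the deprecated JdbcChatMemory bean, a minimal replacement could look like the sketch below. It assumes the auto-configured JdbcChatMemoryRepository bean is available for injection; the window size of 20 is illustrative:

```java
// Storage (JdbcChatMemoryRepository) and memory strategy (MessageWindowChatMemory)
// are now separate concerns.
ChatMemory chatMemory = MessageWindowChatMemory.builder()
        .chatMemoryRepository(jdbcChatMemoryRepository) // injected repository bean
        .maxMessages(20)                                // window size fixed at construction
        .build();

// No lastN argument anymore: the window is applied internally.
List<Message> history = chatMemory.get(conversationId);
```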
Prompt Templating
- The PromptTemplate API has been redesigned to support a more flexible and extensible way of templating prompts, relying on a new TemplateRenderer API. As part of this change, the getInputVariables() and validate() methods have been deprecated and will throw an UnsupportedOperationException if called. Any logic specific to a template engine should be available through the TemplateRenderer API.
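As a sketch of the new API (assuming the StTemplateRenderer implementation and its delimiter-customization builder methods; the template text and delimiters are illustrative):

```java
// Configure a custom renderer instead of relying on the deprecated
// template-format methods on PromptTemplate itself.
PromptTemplate template = PromptTemplate.builder()
        .template("Tell me about <topic>.")
        .renderer(StTemplateRenderer.builder()
                .startDelimiterToken('<')   // use <...> instead of the default {...}
                .endDelimiterToken('>')
                .build())
        .build();

String rendered = template.render(Map.of("topic", "vector databases"));
```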
Class Package Refactoring
Several classes have been moved to different modules and packages for better organization:
- Evaluation classes moved:
  - org.springframework.ai.evaluation.FactCheckingEvaluator moved to the org.springframework.ai.chat.evaluation package within spring-ai-client-chat.
  - org.springframework.ai.evaluation.RelevancyEvaluator moved to the org.springframework.ai.chat.evaluation package within spring-ai-client-chat.
  - org.springframework.ai.evaluation.EvaluationRequest, EvaluationResponse, and Evaluator moved from spring-ai-client-chat to spring-ai-commons under the org.springframework.ai.evaluation package.
- Output converter classes moved:
  - Classes within org.springframework.ai.converter (e.g., BeanOutputConverter, ListOutputConverter, MapOutputConverter, StructuredOutputConverter) moved from spring-ai-client-chat to spring-ai-model.
- Transformer classes moved:
  - org.springframework.ai.chat.transformer.KeywordMetadataEnricher moved to org.springframework.ai.model.transformer.KeywordMetadataEnricher in spring-ai-model.
  - org.springframework.ai.chat.transformer.SummaryMetadataEnricher moved to org.springframework.ai.model.transformer.SummaryMetadataEnricher in spring-ai-model.
- Utility classes moved:
  - org.springframework.ai.util.PromptAssert moved from spring-ai-client-chat to org.springframework.ai.rag.util.PromptAssert in spring-ai-rag.
Please update your imports accordingly.
Observability
- Changes to the spring.ai.client observation:
  - The spring.ai.chat.client.tool.function.names and spring.ai.chat.client.tool.function.callbacks attributes have been deprecated, replaced by a new spring.ai.chat.client.tool.names attribute that includes the names of all the tools passed to a ChatClient, regardless of the underlying mechanism used to define them.
  - The spring.ai.chat.client.advisor.params attribute has been deprecated and will not have a replacement, because exporting it risks exposing sensitive information or breaking the instrumentation: the entries in the advisor context are used to pass arbitrary Java objects between advisors and are not necessarily serializable. The conversation ID that was previously exported here is now available via the dedicated spring.ai.chat.client.conversation.id attribute. If you need to export some of the other parameters in the advisor context to the observability system, you can do so by defining an ObservationFilter and making an explicit decision on which parameters to export. For inspiration, you can refer to the ChatClientPromptContentObservationFilter.
  - The content of a prompt as specified via the ChatClient API was optionally included in the spring.ai.client observation, broken down into a few attributes: spring.ai.chat.client.user.text, spring.ai.chat.client.user.params, spring.ai.chat.client.system.text, and spring.ai.chat.client.system.params. All those attributes are now deprecated, replaced by a single gen_ai.prompt attribute that contains all the messages in the prompt. This solves the problem affecting the deprecated attributes, where part of the prompt was not included in the observation, and aligns with the observations used in the ChatModel API. The new attribute can be enabled via the spring.ai.chat.observations.include-prompt configuration property, whereas the previous spring.ai.chat.observations.include-input configuration property is deprecated.
- Changes to the spring.ai.advisor observation:
  - The spring.ai.advisor.type attribute has been deprecated. In previous releases, the Advisor API was categorized based on the type of advisor (before, after, around). That distinction no longer applies: all Advisors are now of the same type (around).
Upgrading to 1.0.0-M7
Overview of Changes
Spring AI 1.0.0-M7 is the last milestone release before the RC1 and GA releases. It introduces several important changes to artifact IDs, package names, and module structure that will be maintained in the final release.
Artifact ID, Package, and Module Changes
The 1.0.0-M7 release includes the same structural changes as 1.0.0-SNAPSHOT.
For details, refer to:
- Common Artifact ID Changes
- Common Package Changes
- Common Module Structure
MCP Java SDK Upgrade to 0.9.0
Spring AI 1.0.0-M7 now uses MCP Java SDK version 0.9.0, which includes significant changes from previous versions. If you’re using MCP in your applications, you’ll need to update your code to accommodate these changes.
Key changes include:
Interface Renaming
- ClientMcpTransport → McpClientTransport
- ServerMcpTransport → McpServerTransport
- DefaultMcpSession → McpClientSession or McpServerSession
- All *Registration classes → *Specification classes
Server Creation Changes
- Use McpServerTransportProvider instead of ServerMcpTransport
// Before
ServerMcpTransport transport = new WebFluxSseServerTransport(objectMapper, "/mcp/message");
var server = McpServer.sync(transport)
.serverInfo("my-server", "1.0.0")
.build();
// After
McpServerTransportProvider transportProvider = new WebFluxSseServerTransportProvider(objectMapper, "/mcp/message");
var server = McpServer.sync(transportProvider)
.serverInfo("my-server", "1.0.0")
.build();
Handler Signature Changes
All handlers now receive an exchange parameter as their first argument:
// Before
.tool(calculatorTool, args -> new CallToolResult("Result: " + calculate(args)))
// After
.tool(calculatorTool, (exchange, args) -> new CallToolResult("Result: " + calculate(args)))
Client Interaction via Exchange
Methods previously available on the server are now accessed through the exchange object:
// Before
ClientCapabilities capabilities = server.getClientCapabilities();
CreateMessageResult result = server.createMessage(new CreateMessageRequest(...));
// After
ClientCapabilities capabilities = exchange.getClientCapabilities();
CreateMessageResult result = exchange.createMessage(new CreateMessageRequest(...));
Roots Change Handlers
// Before
.rootsChangeConsumers(List.of(
roots -> System.out.println("Roots changed: " + roots)
))
// After
.rootsChangeHandlers(List.of(
(exchange, roots) -> System.out.println("Roots changed: " + roots)
))
For a complete guide to migrating MCP code, refer to the MCP Migration Guide.
Enabling/Disabling Model Auto-Configuration
The previous configuration properties for enabling/disabling model auto-configuration have been removed:
- spring.ai.<provider>.chat.enabled
- spring.ai.<provider>.embedding.enabled
- spring.ai.<provider>.image.enabled
- spring.ai.<provider>.moderation.enabled
By default, if a model provider (e.g., OpenAI, Ollama) is found on the classpath, its corresponding auto-configuration for relevant model types (chat, embedding, etc.) is enabled. If multiple providers for the same model type are present (e.g., both spring-ai-openai-spring-boot-starter and spring-ai-ollama-spring-boot-starter), you can use the following properties to select which provider's auto-configuration should be active, effectively disabling the others for that specific model type.
To disable auto-configuration for a specific model type entirely, even if only one provider is present, set the corresponding property to a value that does not match any provider on the classpath (e.g., none or disabled).
You can refer to the SpringAIModels enumeration for a list of well-known provider values.
- spring.ai.model.audio.speech=<model-provider|none>
- spring.ai.model.audio.transcription=<model-provider|none>
- spring.ai.model.chat=<model-provider|none>
- spring.ai.model.embedding=<model-provider|none>
- spring.ai.model.embedding.multimodal=<model-provider|none>
- spring.ai.model.embedding.text=<model-provider|none>
- spring.ai.model.image=<model-provider|none>
- spring.ai.model.moderation=<model-provider|none>
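For example, if both the OpenAI and Ollama starters are on the classpath, an application.properties along these lines (values illustrative) selects one provider per model type and disables image models entirely:

```properties
spring.ai.model.chat=ollama
spring.ai.model.embedding=openai
spring.ai.model.image=none
```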
Automating the upgrade using AI
You can automate the upgrade process to 1.0.0-M7 using the Claude Code CLI tool with a provided prompt:
- Download the Claude Code CLI tool
- Copy the prompt from the update-to-m7.txt file
- Paste the prompt into the Claude Code CLI
- The AI will analyze your project and make the necessary changes
The automated upgrade prompt currently handles artifact ID changes, package relocations, and module structure changes, but does not yet include automatic changes for upgrading to MCP 0.9.0. If you’re using MCP, you’ll need to manually update your code following the guidance in the MCP Java SDK Upgrade section.
Common Changes Across Versions
Artifact ID Changes
The naming pattern for Spring AI starter artifacts has changed. You’ll need to update your dependencies according to the following patterns:
- Model starters: spring-ai-{model}-spring-boot-starter → spring-ai-starter-model-{model}
- Vector Store starters: spring-ai-{store}-store-spring-boot-starter → spring-ai-starter-vector-store-{store}
- MCP starters: spring-ai-mcp-{type}-spring-boot-starter → spring-ai-starter-mcp-{type}
Examples
Maven:
<!-- BEFORE -->
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-openai-spring-boot-starter</artifactId>
</dependency>
<!-- AFTER -->
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-starter-model-openai</artifactId>
</dependency>
Gradle:
// BEFORE
implementation 'org.springframework.ai:spring-ai-openai-spring-boot-starter'
implementation 'org.springframework.ai:spring-ai-redis-store-spring-boot-starter'
// AFTER
implementation 'org.springframework.ai:spring-ai-starter-model-openai'
implementation 'org.springframework.ai:spring-ai-starter-vector-store-redis'
Changes to Spring AI Autoconfiguration Artifacts
The Spring AI autoconfiguration has changed from a single monolithic artifact to individual autoconfiguration artifacts per model, vector store, and other components. This change was made to minimize the impact of different versions of dependent libraries conflicting, such as Google Protocol Buffers, Google RPC, and others. By separating autoconfiguration into component-specific artifacts, you can avoid pulling in unnecessary dependencies and reduce the risk of version conflicts in your application.
The original monolithic artifact is no longer available:
<!-- NO LONGER AVAILABLE -->
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-spring-boot-autoconfigure</artifactId>
<version>${project.version}</version>
</dependency>
Instead, each component now has its own autoconfiguration artifact following these patterns:
- Model autoconfiguration: spring-ai-autoconfigure-model-{model}
- Vector Store autoconfiguration: spring-ai-autoconfigure-vector-store-{store}
- MCP autoconfiguration: spring-ai-autoconfigure-mcp-{type}
Examples of New Autoconfiguration Artifacts
Models:
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-autoconfigure-model-openai</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-autoconfigure-model-anthropic</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-autoconfigure-model-vertex-ai</artifactId>
</dependency>
Vector Stores:
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-autoconfigure-vector-store-redis</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-autoconfigure-vector-store-pgvector</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-autoconfigure-vector-store-chroma</artifactId>
</dependency>
MCP:
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-autoconfigure-mcp-client</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-autoconfigure-mcp-server</artifactId>
</dependency>
In most cases, you won’t need to explicitly add these autoconfiguration dependencies. They are included transitively when using the corresponding starter dependencies.
Package Name Changes
Your IDE should assist with refactoring to the new package locations.
- KeywordMetadataEnricher and SummaryMetadataEnricher have moved from org.springframework.ai.transformer to org.springframework.ai.chat.transformer.
- Content, MediaContent, and Media have moved from org.springframework.ai.model to org.springframework.ai.content.
Module Structure
The project has undergone significant changes to its module and artifact structure. Previously, spring-ai-core contained all central interfaces, but this has now been split into specialized domain modules to reduce unnecessary dependencies in your applications.

spring-ai-commons
Base module with no dependencies on other Spring AI modules. Contains:
- Core domain models (Document, TextSplitter)
- JSON utilities and resource handling
- Structured logging and observability support
spring-ai-model
Provides AI capability abstractions:
- Interfaces like ChatModel, EmbeddingModel, and ImageModel
- Message types and prompt templates
- Function-calling framework (ToolDefinition, ToolCallback)
- Content filtering and observation support
spring-ai-vector-store
Unified vector database abstraction:
- VectorStore interface for similarity search
- Advanced filtering with SQL-like expressions
- SimpleVectorStore for in-memory usage
- Batching support for embeddings
spring-ai-client-chat
High-level conversational AI APIs:
- ChatClient interface
- Conversation persistence via ChatMemory
- Response conversion with OutputConverter
- Advisor-based interception
- Synchronous and reactive streaming support
spring-ai-advisors-vector-store
Bridges chat with vector stores for RAG:
- QuestionAnswerAdvisor: injects context into prompts
- VectorStoreChatMemoryAdvisor: stores/retrieves conversation history
Dependency Structure
The dependency hierarchy can be summarized as:
- spring-ai-commons (foundation)
- spring-ai-model (depends on commons)
- spring-ai-vector-store and spring-ai-client-chat (both depend on model)
- spring-ai-advisors-vector-store and spring-ai-rag (depend on both client-chat and vector-store)
- spring-ai-model-chat-memory-* modules (depend on client-chat)
ToolContext Changes
The ToolContext class has been enhanced to support both explicit and implicit tool resolution. Tools can now be:
- Explicitly Included: Tools that are explicitly requested in the prompt and included in the call to the model.
- Implicitly Available: Tools that are made available for runtime dynamic resolution, but never included in any call to the model unless explicitly requested.
Starting with 1.0.0-M7, tools are only included in the call to the model if they are explicitly requested in the prompt or explicitly included in the call.
Additionally, the ToolContext class has been marked as final and can no longer be extended. It was never supposed to be subclassed. You can add all the contextual data you need when instantiating a ToolContext, in the form of a Map<String, Object>. For more information, check the [documentation](docs.spring.io/spring-ai/reference/api/tools.html#_tool_context).
Upgrading to 1.0.0-M6
Changes to Usage Interface and DefaultUsage Implementation
The Usage interface and its default implementation DefaultUsage have undergone the following changes:
- Method rename: getGenerationTokens() is now getCompletionTokens().
- Type changes: all token count fields in DefaultUsage changed from Long to Integer: promptTokens, completionTokens (formerly generationTokens), and totalTokens.
Required Actions
- Replace all calls to getGenerationTokens() with getCompletionTokens().
- Update DefaultUsage constructor calls:
// Old (M5)
new DefaultUsage(Long promptTokens, Long generationTokens, Long totalTokens)

// New (M6)
new DefaultUsage(Integer promptTokens, Integer completionTokens, Integer totalTokens)
For more information on handling Usage, refer here.
JSON Ser/Deser changes
While M6 maintains backward compatibility for JSON deserialization of the generationTokens field, this field will be removed in M7. Any persisted JSON documents using the old field name should be updated to use completionTokens.
Example of the new JSON format:
{
"promptTokens": 100,
"completionTokens": 50,
"totalTokens": 150
}
Changes to usage of FunctionCallingOptions for tool calling
Each ChatModel instance, at construction time, accepts an optional ChatOptions or FunctionCallingOptions instance that can be used to configure default tools used for calling the model.
Before 1.0.0-M6:
- Any tool passed via the functions() method of the default FunctionCallingOptions instance was included in each call to the model from that ChatModel instance, possibly overwritten by runtime options.
- Any tool passed via the functionCallbacks() method of the default FunctionCallingOptions instance was only made available for runtime dynamic resolution (see Tool Resolution), but never included in any call to the model unless explicitly requested.
Starting with 1.0.0-M6:
- Any tool passed via the functions() method or the functionCallbacks() method of the default FunctionCallingOptions instance is now handled in the same way: it is included in each call to the model from that ChatModel instance, possibly overwritten by runtime options. This brings consistency to the way tools are included in calls to the model and prevents confusion due to a difference in behavior between functionCallbacks() and all the other options.
If you want to make a tool available for runtime dynamic resolution and include it in a chat request to the model only when explicitly requested, you can use one of the strategies described in Tool Resolution.
1.0.0-M6 introduced new APIs for handling tool calling. Backward compatibility is maintained for the old APIs across all scenarios, except the one described above. The old APIs are still available, but they are deprecated and will be removed in 1.0.0-M7.
Removal of deprecated Amazon Bedrock chat models
Starting 1.0.0-M6, Spring AI transitioned to using Amazon Bedrock’s Converse API for all Chat conversation implementations in Spring AI. All the Amazon Bedrock Chat models are removed except the Embedding models for Cohere and Titan.
Refer to Bedrock Converse documentation for using the chat models.
Changes to use Spring Boot 3.4.2 for dependency management
Spring AI now uses Spring Boot 3.4.2 for dependency management. You can refer here for the dependencies managed by Spring Boot 3.4.2.
Required Actions
- If you are upgrading to Spring Boot 3.4.2, please make sure to refer to this documentation for the changes required to configure the REST Client. Notably, if you don’t have an HTTP client library on the classpath, this will likely result in the use of JdkClientHttpRequestFactory where SimpleClientHttpRequestFactory would have been used previously. To switch to SimpleClientHttpRequestFactory, you need to set spring.http.client.factory=simple.
- If you are using a different version of Spring Boot (say Spring Boot 3.3.x) and need a specific version of a dependency, you can override it in your build configuration.
Vector Store API changes
In version 1.0.0-M6, the delete method in the VectorStore interface has been modified to be a void operation instead of returning an Optional<Boolean>.
If your code previously checked the return value of the delete operation, you’ll need to remove this check.
The operation now throws an exception if the deletion fails, providing more direct error handling.
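A sketch of adapting calling code follows; the exception type caught here is illustrative, so check what your specific VectorStore implementation throws:

```java
// Before (M5): the result had to be inspected.
// Optional<Boolean> result = vectorStore.delete(List.of(documentId));

// After (M6): the call is void and signals failure by throwing.
try {
    vectorStore.delete(List.of(documentId));
}
catch (RuntimeException ex) {
    // handle or report the failed deletion
}
```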
Upgrading to 1.0.0.M5
- Vector store builders have been refactored for consistency.
- The current VectorStore implementation constructors have been deprecated; use the builder pattern instead.
- VectorStore implementation packages have been moved into unique package names, avoiding conflicts across artifacts. For example, org.springframework.ai.vectorstore moved to org.springframework.ai.pgvector.vectorstore.
Upgrading to 1.0.0.RC3
- The type of the portable chat options (frequencyPenalty, presencePenalty, temperature, topP) has been changed from Float to Double.
Upgrading to 1.0.0.M2
- The configuration prefix for the Chroma Vector Store has been changed from spring.ai.vectorstore.chroma.store to spring.ai.vectorstore.chroma in order to align with the naming conventions of other vector stores.
- The default value of the initialize-schema property on vector stores capable of initializing a schema is now set to false. This implies that applications now need to explicitly opt in to schema initialization on supported vector stores if the schema is expected to be created at application startup. Not all vector stores support this property; see the corresponding vector store documentation for more details. The following vector stores currently don’t support the initialize-schema property:
  - Hana
  - Pinecone
  - Weaviate
- In Bedrock Jurassic 2, the chat options countPenalty, frequencyPenalty, and presencePenalty have been renamed to countPenaltyOptions, frequencyPenaltyOptions, and presencePenaltyOptions. Furthermore, the type of the chat option topSequences has been changed from String[] to List&lt;String&gt;.
- In Azure OpenAI, the type of the chat options frequencyPenalty and presencePenalty has been changed from Double to Float, consistently with all the other implementations.
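If you rely on schema creation at startup, opt in explicitly; for example, in application.properties (PGVector is shown only as an illustrative store that supports the property):

```properties
# Schema initialization now defaults to false; opt in per store
spring.ai.vectorstore.pgvector.initialize-schema=true
```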
Upgrading to 1.0.0.M1
On our march to release 1.0.0 M1 we have made several breaking changes. Apologies, it is for the best!
ChatClient changes
A major change was made that took the 'old' ChatClient and moved the functionality into ChatModel. The 'new' ChatClient now takes an instance of ChatModel. This was done to support a fluent API for creating and executing prompts, in a style similar to other client classes in the Spring ecosystem, such as RestClient, WebClient, and JdbcClient. Refer to the JavaDoc (docs.spring.io/spring-ai/docs/api) for more information on the fluent API; proper reference documentation is coming shortly.
We renamed the 'old' ModelClient to Model and renamed the implementing classes; for example, ImageClient was renamed to ImageModel. The Model implementation represents the portability layer that converts between the Spring AI API and the underlying AI Model API.
Adapting to the changes
The ChatClient class is now in the package org.springframework.ai.chat.client
Approach 1
Now, instead of getting an autoconfigured ChatClient instance, you will get a ChatModel instance. The call method signatures remain the same after the renaming. To adapt, refactor your code to change uses of the type ChatClient to ChatModel. Here is an example of the existing code before the change:
@RestController
public class OldSimpleAiController {
private final ChatClient chatClient;
public OldSimpleAiController(ChatClient chatClient) {
this.chatClient = chatClient;
}
@GetMapping("/ai/simple")
Map<String, String> completion(@RequestParam(value = "message", defaultValue = "Tell me a joke") String message) {
return Map.of("generation", this.chatClient.call(message));
}
}
After the changes, this will be:
@RestController
public class SimpleAiController {
private final ChatModel chatModel;
public SimpleAiController(ChatModel chatModel) {
this.chatModel = chatModel;
}
@GetMapping("/ai/simple")
Map<String, String> completion(@RequestParam(value = "message", defaultValue = "Tell me a joke") String message) {
return Map.of("generation", this.chatModel.call(message));
}
}
The renaming also applies to the following classes:
* StreamingChatClient → StreamingChatModel
* EmbeddingClient → EmbeddingModel
* ImageClient → ImageModel
* SpeechClient → SpeechModel
* and similar for other <XYZ>Client classes
Approach 2
In this approach, you will use the new fluent API available on the 'new' ChatClient. Here is an example of the existing code before the change:
@RestController
class OldSimpleAiController {
ChatClient chatClient;
OldSimpleAiController(ChatClient chatClient) {
this.chatClient = chatClient;
}
@GetMapping("/ai/simple")
Map<String, String> completion(@RequestParam(value = "message", defaultValue = "Tell me a joke") String message) {
return Map.of(
"generation",
this.chatClient.call(message)
);
}
}
After the changes, this will be:
@RestController
class SimpleAiController {
private final ChatClient chatClient;
SimpleAiController(ChatClient.Builder builder) {
this.chatClient = builder.build();
}
@GetMapping("/ai/simple")
Map<String, String> completion(@RequestParam(value = "message", defaultValue = "Tell me a joke") String message) {
return Map.of(
"generation",
this.chatClient.prompt().user(message).call().content()
);
}
}
The ChatModel instance is made available to you through autoconfiguration.
Approach 3
There is a tag in the GitHub repository called v1.0.0-SNAPSHOT-before-chatclient-changes (github.com/spring-projects/spring-ai/tree/v1.0.0-SNAPSHOT-before-chatclient-changes) that you can check out and build locally, so that you can avoid updating any of your code until you are ready to migrate.
git checkout tags/v1.0.0-SNAPSHOT-before-chatclient-changes
./mvnw clean install -DskipTests
Artifact name changes
Renamed POM artifact names:
- spring-ai-qdrant → spring-ai-qdrant-store
- spring-ai-cassandra → spring-ai-cassandra-store
- spring-ai-pinecone → spring-ai-pinecone-store
- spring-ai-redis → spring-ai-redis-store
- spring-ai-gemfire → spring-ai-gemfire-store
- spring-ai-azure-vector-store-spring-boot-starter → spring-ai-azure-store-spring-boot-starter
- spring-ai-redis-spring-boot-starter → spring-ai-starter-vector-store-redis
Upgrading to 0.8.1
Former spring-ai-vertex-ai has been renamed to spring-ai-vertex-ai-palm2, and spring-ai-vertex-ai-spring-boot-starter has been renamed to spring-ai-vertex-ai-palm2-spring-boot-starter.
So, you need to change the dependency from
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-vertex-ai</artifactId>
</dependency>
To
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-vertex-ai-palm2</artifactId>
</dependency>
and the related Boot starter for the Palm2 model has changed from
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-vertex-ai-spring-boot-starter</artifactId>
</dependency>
to
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-vertex-ai-palm2-spring-boot-starter</artifactId>
</dependency>
Renamed Classes (01.03.2024):
- VertexAiApi → VertexAiPalm2Api
- VertexAiClientChat → VertexAiPalm2ChatClient
- VertexAiEmbeddingClient → VertexAiPalm2EmbeddingClient
- VertexAiChatOptions → VertexAiPalm2ChatOptions
Upgrading to 0.8.0
January 24, 2024 Update
- Moving the prompt, messages, and metadata packages to subpackages of org.springframework.ai.chat.
- New functionality is text-to-image clients. Classes are OpenAiImageModel and StabilityAiImageModel. See the integration tests for usage; docs are coming soon.
- A new package, model, that contains interfaces and base classes to support creating AI Model Clients for any input/output data type combination. At the moment, the chat and image model packages implement this. We will be updating the embedding package to this new model soon.
- A new "portable options" design pattern. We wanted to provide as much portability in the ModelCall as possible across different chat-based AI Models. There is a common set of generation options, and then those that are specific to a model provider. A sort of "duck typing" approach is used. ModelOptions, in the model package, is a marker interface indicating that implementations of this class will provide the options for a model. See ImageOptions, a subinterface that defines portable options across all text→image ImageModel implementations. Then StabilityAiImageOptions and OpenAiImageOptions provide the options specific to each model provider. All options classes are created via a fluent API builder, and all can be passed into the portable ImageModel API. These option data types are used in the autoconfiguration/configuration properties for the ImageModel implementations.
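The "portable options" pattern described above can be sketched with simplified hypothetical types. These are NOT the real Spring AI interfaces; the names and fields below are illustrative only, showing the marker-interface, portable-subinterface, and provider-specific-builder layering:

```java
// Marker interface: implementations carry the options for a model call.
interface ModelOptions { }

// Portable options shared by all text-to-image models.
interface ImageOptions extends ModelOptions {
    Integer getN();
    String getModel();
}

// Provider-specific options: portable fields plus provider extras.
class StabilityImageOptions implements ImageOptions {
    private final Integer n;
    private final String model;
    private final String stylePreset; // provider-specific extra

    private StabilityImageOptions(Builder b) {
        this.n = b.n;
        this.model = b.model;
        this.stylePreset = b.stylePreset;
    }

    public Integer getN() { return n; }
    public String getModel() { return model; }
    public String getStylePreset() { return stylePreset; }

    static Builder builder() { return new Builder(); }

    static class Builder {
        private Integer n;
        private String model;
        private String stylePreset;

        Builder n(Integer n) { this.n = n; return this; }
        Builder model(String model) { this.model = model; return this; }
        Builder stylePreset(String stylePreset) { this.stylePreset = stylePreset; return this; }
        StabilityImageOptions build() { return new StabilityImageOptions(this); }
    }
}

public class PortableOptionsExample {
    // A portable call site depends only on ImageOptions ("duck typing")...
    static String describe(ImageOptions options) {
        return options.getModel() + " x" + options.getN();
    }

    public static void main(String[] args) {
        // ...while each provider builds its own options via a fluent builder.
        StabilityImageOptions options = StabilityImageOptions.builder()
                .n(2)
                .model("stable-diffusion")
                .stylePreset("photographic")
                .build();
        System.out.println(describe(options));
    }
}
```

The key design point is that code written against ImageOptions stays portable across providers, while provider-specific fields remain reachable on the concrete options type.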
January 13, 2024 Update
The following OpenAi autoconfiguration chat properties have changed:
- from spring.ai.openai.model to spring.ai.openai.chat.options.model
- from spring.ai.openai.temperature to spring.ai.openai.chat.options.temperature
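In application.properties the migration looks like this (the gpt-4 and 0.7 values are illustrative):

```properties
# before:
# spring.ai.openai.model=gpt-4
# spring.ai.openai.temperature=0.7
# after:
spring.ai.openai.chat.options.model=gpt-4
spring.ai.openai.chat.options.temperature=0.7
```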
Find updated documentation about the OpenAi properties: docs.spring.io/spring-ai/reference/api/chat/openai-chat.html
December 27, 2023 Update
Merged SimplePersistentVectorStore and InMemoryVectorStore into SimpleVectorStore:
- Replace InMemoryVectorStore with SimpleVectorStore.
December 20, 2023 Update
Refactored the Ollama client and related classes and package names:
- Replace org.springframework.ai.ollama.client.OllamaClient with org.springframework.ai.ollama.OllamaModelCall.
- The OllamaChatClient method signatures have changed.
- Rename org.springframework.ai.autoconfigure.ollama.OllamaProperties to org.springframework.ai.model.ollama.autoconfigure.OllamaChatProperties and change the property prefix to spring.ai.ollama.chat. Some of the properties have changed as well.
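For example, a property that previously lived under the spring.ai.ollama prefix now moves under spring.ai.ollama.chat (the model key and value shown here are illustrative, not a verified property name):

```properties
# before: spring.ai.ollama.model=llama2
spring.ai.ollama.chat.model=llama2
```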
December 19, 2023 Update
Renaming of AiClient and related classes and package names
- Rename AiClient to ChatClient
- Rename AiResponse to ChatResponse
- Rename AiStreamClient to StreamingChatClient
- Rename the package org.sf.ai.client to org.sf.ai.chat

Renamed the artifact ID:
- transformers-embedding to spring-ai-transformers

Moved Maven modules from the top-level directory and the embedding-clients subdirectory to all be under a single models directory.
December 1, 2023
We are transitioning the project’s Group ID:
- FROM: org.springframework.experimental.ai
- TO: org.springframework.ai
Artifacts will still be hosted in the snapshot repository as shown below.
The main branch will move to the version 0.8.0-SNAPSHOT
.
It will be unstable for a week or two.
Please use the 0.7.1-SNAPSHOT if you don’t want to be on the bleeding edge.
You can access 0.7.1-SNAPSHOT artifacts as before and still access the 0.7.1-SNAPSHOT documentation.
0.7.1-SNAPSHOT Dependencies
- Azure OpenAI

<dependency>
<groupId>org.springframework.experimental.ai</groupId>
<artifactId>spring-ai-azure-openai-spring-boot-starter</artifactId>
<version>0.7.1-SNAPSHOT</version>
</dependency>

- OpenAI

<dependency>
<groupId>org.springframework.experimental.ai</groupId>
<artifactId>spring-ai-openai-spring-boot-starter</artifactId>
<version>0.7.1-SNAPSHOT</version>
</dependency>