Titan Chat

Amazon Titan foundation models (FMs) provide customers with a breadth of high-performing image, multimodal embeddings, and text model choices, via a fully managed API. Amazon Titan models are created by AWS and pretrained on large datasets, making them powerful, general-purpose models built to support a variety of use cases, while also supporting the responsible use of AI. Use them as is or privately customize them with your own data.

The AWS Bedrock Titan Model Page and Amazon Bedrock User Guide contain detailed information on how to use the AWS hosted model.


Refer to the Spring AI documentation on Amazon Bedrock for setting up API access.

Add Repositories and BOM

Spring AI artifacts are published in Spring Milestone and Snapshot repositories. Refer to the Repositories section to add these repositories to your build system.

To help with dependency management, Spring AI provides a BOM (bill of materials) to ensure that a consistent version of Spring AI is used throughout the entire project. Refer to the Dependency Management section to add the Spring AI BOM to your build system.
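For example, here is a minimal sketch of the Maven BOM import, assuming you define a spring-ai.version property; refer to the Dependency Management section for the current version:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.ai</groupId>
            <artifactId>spring-ai-bom</artifactId>
            <version>${spring-ai.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>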


Add the spring-ai-bedrock-ai-spring-boot-starter dependency to your project’s Maven pom.xml file:
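<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-bedrock-ai-spring-boot-starter</artifactId>
</dependency>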


or to your Gradle build.gradle build file.

dependencies {
    implementation 'org.springframework.ai:spring-ai-bedrock-ai-spring-boot-starter'
}
Refer to the Dependency Management section to add the Spring AI BOM to your build file.

Enable Titan Chat

By default the Titan model is disabled. To enable it, set the spring.ai.bedrock.titan.chat.enabled property to true. Exporting an environment variable is one way to set this configuration property:
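export SPRING_AI_BEDROCK_TITAN_CHAT_ENABLED=true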


Chat Properties

The prefix spring.ai.bedrock.aws is the property prefix to configure the connection to AWS Bedrock.

Property | Description | Default

spring.ai.bedrock.aws.region | AWS region to use. | us-east-1
spring.ai.bedrock.aws.access-key | AWS access key. | -
spring.ai.bedrock.aws.secret-key | AWS secret key. | -

The prefix spring.ai.bedrock.titan.chat is the property prefix that configures the chat client implementation for Titan.

Property | Description | Default

spring.ai.bedrock.titan.chat.enabled | Enable the Bedrock Titan chat client. | false
spring.ai.bedrock.titan.chat.model | The model id to use. See the TitanChatBedrockApi#TitanChatModel for the supported models. | amazon.titan-text-express-v1
spring.ai.bedrock.titan.chat.options.temperature | Controls the randomness of the output. Values can range over [0.0,1.0]. | AWS Bedrock default
spring.ai.bedrock.titan.chat.options.topP | The maximum cumulative probability of tokens to consider when sampling. | AWS Bedrock default
spring.ai.bedrock.titan.chat.options.stopSequences | Configure up to four sequences that the model recognizes. After a stop sequence, the model stops generating further tokens. The returned text doesn't contain the stop sequence. | AWS Bedrock default
spring.ai.bedrock.titan.chat.options.maxTokenCount | The maximum number of tokens to generate in the response. Note that the model may stop before reaching this maximum; the parameter only specifies the absolute maximum number of tokens to generate. A limit of 4,000 tokens is recommended for optimal performance. | AWS Bedrock default

Look at the TitanChatBedrockApi#TitanChatModel for other model IDs. Supported values are: amazon.titan-text-lite-v1 and amazon.titan-text-express-v1. Model ID values can also be found in the AWS Bedrock documentation for base model IDs.

All properties prefixed with spring.ai.bedrock.titan.chat.options can be overridden at runtime by adding a request specific Chat Options to the Prompt call.

Chat Options

The BedrockTitanChatOptions.java provides model configurations, such as temperature, topP, etc.

On start-up, the default options can be configured with the BedrockTitanChatClient(api, options) constructor or the spring.ai.bedrock.titan.chat.options.* properties.

At run-time, you can override the default options by adding new, request-specific options to the Prompt call. For example, to override the default temperature for a specific request:

ChatResponse response = chatClient.call(
    new Prompt(
        "Generate the names of 5 famous pirates.",
        BedrockTitanChatOptions.builder()
            .withTemperature(0.4f)
            .build()
    ));

In addition to the model-specific BedrockTitanChatOptions, you can use a portable ChatOptions instance, created with the ChatOptionsBuilder#builder().
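For instance, a minimal sketch using the portable builder (assuming its withTemperature option; chatClient is the auto-configured BedrockTitanChatClient from above):

// Portable options are not tied to a specific model implementation
ChatOptions portableOptions = ChatOptionsBuilder.builder()
    .withTemperature(0.4f)
    .build();

ChatResponse portableResponse = chatClient.call(
    new Prompt("Generate the names of 5 famous pirates.", portableOptions));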

Sample Controller (Auto-configuration)

Create a new Spring Boot project and add the spring-ai-bedrock-ai-spring-boot-starter to your pom (or gradle) dependencies.

Add an application.properties file, under the src/main/resources directory, to enable and configure the Titan Chat client:
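spring.ai.bedrock.aws.region=eu-central-1
spring.ai.bedrock.aws.access-key=${AWS_ACCESS_KEY_ID}
spring.ai.bedrock.aws.secret-key=${AWS_SECRET_ACCESS_KEY}

spring.ai.bedrock.titan.chat.enabled=true
spring.ai.bedrock.titan.chat.options.temperature=0.8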


Replace the region, access-key, and secret-key values with your AWS credentials.

This will create a BedrockTitanChatClient implementation that you can inject into your class. Here is an example of a simple @RestController class that uses the chat client for text generation.

@RestController
public class ChatController {

    private final BedrockTitanChatClient chatClient;

    public ChatController(BedrockTitanChatClient chatClient) {
        this.chatClient = chatClient;
    }

    @GetMapping("/ai/generate")
    public Map generate(@RequestParam(value = "message", defaultValue = "Tell me a joke") String message) {
        return Map.of("generation", chatClient.call(message));
    }

    @GetMapping("/ai/generateStream")
    public Flux<ChatResponse> generateStream(@RequestParam(value = "message", defaultValue = "Tell me a joke") String message) {
        Prompt prompt = new Prompt(new UserMessage(message));
        return chatClient.stream(prompt);
    }
}

Manual Configuration

The BedrockTitanChatClient implements the ChatClient and StreamingChatClient interfaces and uses the Low-level TitanChatBedrockApi Client to connect to the Bedrock Titan service.

Add the spring-ai-bedrock dependency to your project’s Maven pom.xml file:
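<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-bedrock</artifactId>
</dependency>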


or to your Gradle build.gradle build file.

dependencies {
    implementation 'org.springframework.ai:spring-ai-bedrock'
}
Refer to the Dependency Management section to add the Spring AI BOM to your build file.

Next, create a BedrockTitanChatClient and use it for text generation:

TitanChatBedrockApi titanApi = new TitanChatBedrockApi(
    TitanChatModel.TITAN_TEXT_EXPRESS_V1.id(),
    EnvironmentVariableCredentialsProvider.create(), // or another AWS credentials provider
    Region.US_EAST_1.id(),
    new ObjectMapper());

BedrockTitanChatClient chatClient = new BedrockTitanChatClient(titanApi,
    BedrockTitanChatOptions.builder()
        .withTemperature(0.6f)
        .withTopP(0.8f)
        .withMaxTokenCount(100)
        .build());

ChatResponse response = chatClient.call(
    new Prompt("Generate the names of 5 famous pirates."));

// Or with streaming responses
Flux<ChatResponse> streamingResponse = chatClient.stream(
    new Prompt("Generate the names of 5 famous pirates."));

Low-level TitanChatBedrockApi Client

The TitanChatBedrockApi provides a lightweight Java client on top of the AWS Bedrock Titan models.

The following class diagram illustrates the TitanChatBedrockApi interface and building blocks:

[Class diagram: bedrock titan chat low-level api]

The client supports the amazon.titan-text-lite-v1 and amazon.titan-text-express-v1 models for both synchronous (e.g. chatCompletion()) and streaming (e.g. chatCompletionStream()) responses.

Here is a simple snippet showing how to use the API programmatically:

TitanChatBedrockApi titanBedrockApi = new TitanChatBedrockApi(
    TitanChatModel.TITAN_TEXT_EXPRESS_V1.id(),
    Region.US_EAST_1.id());

TitanChatRequest titanChatRequest = TitanChatRequest.builder("Give me the names of 3 famous pirates?")
    .withTemperature(0.5f)
    .withTopP(0.9f)
    .withMaxTokenCount(100)
    .build();

// Synchronous response
TitanChatResponse response = titanBedrockApi.chatCompletion(titanChatRequest);

// Streaming response
Flux<TitanChatResponseChunk> streamingResponse = titanBedrockApi.chatCompletionStream(titanChatRequest);

List<TitanChatResponseChunk> results = streamingResponse.collectList().block();

Refer to the TitanChatBedrockApi JavaDoc for further information.