Record Class AnthropicApi.ChatCompletionRequest

java.lang.Object
java.lang.Record
org.springframework.ai.anthropic.api.AnthropicApi.ChatCompletionRequest
Record Components:
model - The model that will complete your prompt. See the list of models for additional details and options.
messages - Input messages.
system - System prompt. A system prompt is a way of providing context and instructions to Claude, such as specifying a particular goal or role. See our guide to system prompts.
maxTokens - The maximum number of tokens to generate before stopping. Note that our models may stop before reaching this maximum. This parameter only specifies the absolute maximum number of tokens to generate. Different models have different maximum values for this parameter.
metadata - An object describing metadata about the request.
stopSequences - Custom text sequences that will cause the model to stop generating. Our models will normally stop when they have naturally completed their turn, which will result in a response stop_reason of "end_turn". If you want the model to stop generating when it encounters custom strings of text, you can use the stop_sequences parameter. If the model encounters one of the custom sequences, the response stop_reason value will be "stop_sequence" and the response stop_sequence value will contain the matched stop sequence.
stream - Whether to incrementally stream the response using server-sent events.
temperature - Amount of randomness injected into the response. Defaults to 1.0. Ranges from 0.0 to 1.0. Use a temperature closer to 0.0 for analytical / multiple choice tasks, and closer to 1.0 for creative and generative tasks. Note that even with a temperature of 0.0, the results will not be fully deterministic.
topP - Use nucleus sampling. In nucleus sampling, we compute the cumulative distribution over all the options for each subsequent token in decreasing probability order and cut it off once it reaches a particular probability specified by top_p. You should either alter temperature or top_p, but not both. Recommended for advanced use cases only. You usually only need to use temperature.
topK - Only sample from the top K options for each subsequent token. Used to remove "long tail" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature.
tools - Definitions of tools that the model may use. If provided, the model may return tool_use content blocks that represent the model's use of those tools. You can then run those tools using the tool input generated by the model, and optionally return the results to the model using tool_result content blocks.
Enclosing class:
AnthropicApi

public static record AnthropicApi.ChatCompletionRequest(String model, List<AnthropicApi.AnthropicMessage> messages, String system, Integer maxTokens, AnthropicApi.ChatCompletionRequest.Metadata metadata, List<String> stopSequences, Boolean stream, Double temperature, Double topP, Integer topK, List<AnthropicApi.Tool> tools) extends Record
Chat completion request object.
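
As a quick illustration of the canonical constructor shown above, the sketch below builds a minimal non-streaming request. The way the AnthropicMessage, ContentBlock, and Role nested types are constructed, and the model id used, are assumptions about the enclosing AnthropicApi class and not defined on this page; adjust them to the actual API.

import java.util.List;

import org.springframework.ai.anthropic.api.AnthropicApi.AnthropicMessage;
import org.springframework.ai.anthropic.api.AnthropicApi.ChatCompletionRequest;
import org.springframework.ai.anthropic.api.AnthropicApi.ContentBlock;
import org.springframework.ai.anthropic.api.AnthropicApi.Role;

class ChatCompletionRequestExample {

    ChatCompletionRequest buildRequest() {
        // Single user message; wrapping plain text in a ContentBlock is assumed here.
        AnthropicMessage userMessage = new AnthropicMessage(
                List.of(new ContentBlock("Why is the sky blue?")), Role.USER);

        // Canonical constructor, matching the component order of the record declaration above.
        return new ChatCompletionRequest(
                "claude-3-5-sonnet-latest",     // model (assumed placeholder id)
                List.of(userMessage),           // messages
                "You are a concise assistant.", // system prompt
                500,                            // maxTokens
                null,                           // metadata (optional)
                null,                           // stopSequences (optional)
                false,                          // stream: false for a blocking response
                0.8,                            // temperature
                null,                           // topP: leave unset when temperature is used
                null,                           // topK (optional)
                null);                          // tools (optional)
    }
}

Note that temperature and topP are alternatives; the sketch sets only temperature and leaves topP, topK, and the other optional components as null.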