Function Calling

You can register custom Java functions with the OpenAiChatModel and have the OpenAI model intelligently choose to output a JSON object containing arguments to call one or more of the registered functions. This allows you to connect the LLM capabilities with external tools and APIs. The OpenAI models are trained to detect when a function should be called and to respond with JSON that adheres to the function signature.

The OpenAI API does not call the function directly; instead, the model generates JSON that you can use to call the function in your code and return the result back to the model to complete the conversation.

Spring AI provides flexible and user-friendly ways to register and call custom functions. In general, the custom functions need to provide a function name, description, and the function call signature (as JSON schema) to let the model know what arguments the function expects. The description helps the model to understand when to call the function.

As a developer, you need to implement a function that takes the function call arguments sent from the AI model and returns the result to the model. Your function can in turn invoke other 3rd party services to provide the results.

Spring AI makes this as easy as defining a @Bean definition that returns a java.util.Function and supplying the bean name as an option when invoking the ChatModel.

Under the hood, Spring wraps your POJO (the function) with the appropriate adapter code that enables interaction with the AI Model, saving you from writing tedious boilerplate code. The basis of the underlying infrastructure is the FunctionCallback.java interface and the companion Builder utility class to simplify the implementation and registration of Java callback functions.

How it works

Suppose we want the AI model to respond with information that it does not have, for example, the current temperature at a given location.

We can provide the AI model with metadata about our own functions that it can use to retrieve that information as it processes your prompt.

For example, if during the processing of a prompt the AI model determines that it needs additional information about the temperature in a given location, it starts a server-side generated request/response interaction in which a client-side function is invoked: the AI model provides the method invocation details as JSON, and it is the responsibility of the client to execute that function and return the response.

The model-client interaction is illustrated in the Spring AI Function Calling Flow diagram.

Spring AI greatly simplifies the code you need to write to support function invocation. It brokers the function invocation conversation for you. You can simply provide your function definition as a @Bean and then provide the bean name of the function in your prompt options. You can also reference multiple function bean names in your prompt.

Quick Start

Let’s create a chatbot that answers questions by calling our own function. To support the response of the chatbot, we will register our own function that takes a location and returns the current weather in that location.

When the model needs to answer a question such as "What’s the weather like in Boston?" the AI model will invoke the client providing the location value as an argument to be passed to the function. This RPC-like data is passed as JSON.
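
For illustration, the tool call emitted by the model looks roughly like this (the field names follow the OpenAI tool-call wire format; the exact shape depends on the API version):

{
  "id": "call_abc123",
  "type": "function",
  "function": {
    "name": "CurrentWeather",
    "arguments": "{\"location\": \"Boston, MA\", \"unit\": \"C\"}"
  }
}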

Our function calls some SaaS-based weather service API and returns the weather response back to the model to complete the conversation. In this example, we will use a simple implementation named MockWeatherService that hard-codes the temperature for various locations.

The following MockWeatherService.java represents the weather service API:

import java.util.function.Function;

public class MockWeatherService implements Function<Request, Response> {

	public enum Unit { C, F }
	public record Request(String location, Unit unit) {}
	public record Response(double temp, Unit unit) {}

	@Override
	public Response apply(Request request) {
		// Hard-coded response; a real implementation would call a weather API.
		return new Response(30.0, Unit.C);
	}
}

Registering Functions as Beans

With the OpenAiChatModel Auto-Configuration, you have multiple ways to register custom functions as beans in the Spring context.

We start by describing the most POJO-friendly options.

Plain Java Functions

In this approach, you define a @Bean in your application context as you would any other Spring managed object.

Internally, the Spring AI ChatModel creates an instance of a FunctionCallback that adds the logic needed for the function to be invoked via the AI model. The name of the @Bean is passed as a ChatOption.

@Configuration
static class Config {

	@Bean
	@Description("Get the weather in location") // function description
	public Function<MockWeatherService.Request, MockWeatherService.Response> currentWeather() {
		return new MockWeatherService();
	}

}

The @Description annotation is optional and provides a function description that helps the model understand when to call the function. It is an important property to set, as it helps the AI model determine which client-side function to invoke.
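
Once registered, you enable the function by its bean name in the chat options (a minimal sketch; see Specifying functions in Chat Options below for details):

ChatResponse response = chatModel.call(new Prompt("What's the weather like in Boston?",
		OpenAiChatOptions.builder().function("currentWeather").build()));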

Another option for providing the description of the function is to use the @JsonClassDescription annotation on the MockWeatherService.Request:

@Configuration
static class Config {

	@Bean
	public Function<Request, Response> currentWeather() { // bean name as function name
		return new MockWeatherService();
	}

}

@JsonClassDescription("Get the weather in location") // function description
public record Request(String location, Unit unit) {}

It is a best practice to annotate the request object with information such that the generated JSON schema of that function is as descriptive as possible to help the AI model pick the correct function to invoke.
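
Jackson annotations can also describe individual request fields, and those descriptions flow into the generated JSON schema. For example (an illustrative sketch using Jackson's @JsonPropertyDescription):

@JsonClassDescription("Get the weather in location")
public record Request(
		@JsonPropertyDescription("The city and state, e.g. San Francisco, CA") String location,
		@JsonPropertyDescription("The temperature unit, C or F") Unit unit) {}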

The FunctionCallbackWithPlainFunctionBeanIT.java demonstrates this approach.

FunctionCallback Wrapper

Another way to register a function is to create a FunctionCallback like this:

@Configuration
static class Config {

	@Bean
	public FunctionCallback weatherFunctionInfo() {
		return FunctionCallback.builder()
			.function("CurrentWeather", new MockWeatherService()) // (1) function name and instance
			.description("Get the weather in location") // (2) function description
			.inputType(MockWeatherService.Request.class) // (3) function input type
			.build();
	}

}

It wraps the 3rd party MockWeatherService function and registers it as a CurrentWeather function with the OpenAiChatModel. It also provides a description (2) and an input type (3) used to generate the JSON schema for the function call.

By default, the response converter performs a JSON serialization of the Response object.
The FunctionCallback internally resolves the function call signature based on the MockWeatherService.Request class.
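
For the Request record above, the generated schema looks roughly like this (illustrative; the exact output depends on the schema generator):

{
  "type": "object",
  "properties": {
    "location": { "type": "string" },
    "unit": { "type": "string", "enum": ["C", "F"] }
  },
  "required": ["location", "unit"]
}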

Specifying functions in Chat Options

To let the model know about your CurrentWeather function and call it, you need to enable it in your prompt requests:

OpenAiChatModel chatModel = ...

UserMessage userMessage = new UserMessage("What's the weather like in San Francisco, Tokyo, and Paris?");

ChatResponse response = chatModel.call(new Prompt(userMessage,
		OpenAiChatOptions.builder().function("CurrentWeather").build())); // Enable the function

logger.info("Response: {}", response);

The above user question will trigger 3 calls to the CurrentWeather function (one for each city) and the final response will be something like this:

Here is the current weather for the requested cities:
- San Francisco, CA: 30.0°C
- Tokyo, Japan: 10.0°C
- Paris, France: 15.0°C

The OpenAiFunctionCallbackIT.java test demonstrates this approach.

Register/Call Functions with Prompt Options

In addition to the auto-configuration, you can register callback functions dynamically with your Prompt requests:

OpenAiChatModel chatModel = ...

UserMessage userMessage = new UserMessage("What's the weather like in San Francisco, Tokyo, and Paris?");

var promptOptions = OpenAiChatOptions.builder()
	.functionCallbacks(List.of(FunctionCallback.builder()
		.function("CurrentWeather", new MockWeatherService()) // (1) function name and instance
		.description("Get the weather in location") // (2) function description
		.inputType(MockWeatherService.Request.class) // (3) function input type
		.build())) // function code
	.build();

ChatResponse response = chatModel.call(new Prompt(userMessage, promptOptions));

Functions registered in the prompt are enabled by default for the duration of that request.

This approach allows you to dynamically choose which functions to call based on the user input.
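
For example, you might select which callback to register based on the incoming question (an illustrative sketch; weatherCallback and timeCallback are hypothetical, pre-built FunctionCallback instances):

FunctionCallback selected = userInput.contains("weather") ? weatherCallback : timeCallback;

ChatResponse response = chatModel.call(new Prompt(List.of(new UserMessage(userInput)),
		OpenAiChatOptions.builder().functionCallbacks(List.of(selected)).build()));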

The FunctionCallbackInPromptIT.java integration test provides a complete example of how to register a function with the OpenAiChatModel and use it in a prompt request.

Tool Context Support

Spring AI now supports passing additional contextual information to function callbacks through a tool context. This feature allows you to provide extra data that can be used within the function execution, enhancing the flexibility and power of function calling.

The context information is passed as the second argument of a java.util.BiFunction. The ToolContext wraps an immutable Map<String, Object>, allowing you to access its key-value pairs.

How to Use Tool Context

You can set the tool context when building your chat options and use a BiFunction for your callback:

BiFunction<MockWeatherService.Request, ToolContext, MockWeatherService.Response> weatherFunction =
    (request, toolContext) -> {
        String sessionId = (String) toolContext.getContext().get("sessionId");
        String userId = (String) toolContext.getContext().get("userId");

        // Use sessionId and userId in your function logic
        double temperature = 0;
        if (request.location().contains("Paris")) {
            temperature = 15;
        }
        else if (request.location().contains("Tokyo")) {
            temperature = 10;
        }
        else if (request.location().contains("San Francisco")) {
            temperature = 30;
        }

        return new MockWeatherService.Response(temperature, MockWeatherService.Unit.C);
    };

OpenAiChatOptions options = OpenAiChatOptions.builder()
    .model(OpenAiApi.ChatModel.GPT_4_O.getValue())
    .functionCallbacks(List.of(FunctionCallback.builder()
        .function("getCurrentWeather", this.weatherFunction)
        .description("Get the weather in location")
        .inputType(MockWeatherService.Request.class)
        .build()))
    .toolContext(Map.of("sessionId", "123", "userId", "user456"))
    .build();

In this example, the weatherFunction is defined as a BiFunction that takes both the request and the tool context as parameters. This allows you to access the context directly within the function logic.

You can then use these options when making a call to the chat model:

UserMessage userMessage = new UserMessage("What's the weather like in San Francisco, Tokyo, and Paris?");
ChatResponse response = chatModel.call(new Prompt(List.of(this.userMessage), options));

This approach allows you to pass session-specific or user-specific information to your functions, enabling more contextual and personalized responses.

Appendices

Spring AI Function Calling Flow

The following diagram illustrates the flow of the OpenAiChatModel Function Calling:

[Diagram: Spring AI (OpenAiChatModel) function calling flow]

OpenAI API Function Calling Flow

The following diagram illustrates the flow of the OpenAI API Function Calling:

[Diagram: OpenAI API function calling flow]

The OpenAiApiToolFunctionCallIT.java provides a complete example of how to use the OpenAI API function calling. It is based on the OpenAI Function Calling tutorial.