5. Spring Integration

This part of the reference shows how to use the spring-integration-kafka module of Spring Integration.

5.1 Spring Integration for Apache Kafka

5.1.1 Introduction

This documentation pertains to versions 2.0.0 and above; for documentation for earlier releases, see the 1.3.x README.

Spring Integration Kafka is now based on the Spring for Apache Kafka project. It provides the following components:

  • Outbound Channel Adapter
  • Message-Driven Channel Adapter

These are discussed in the following sections.

5.1.2 Outbound Channel Adapter

The Outbound channel adapter is used to publish messages from a Spring Integration channel to Kafka topics. The channel is defined in the application context and then wired into the application that sends messages to Kafka. Sender applications publish to Kafka via Spring Integration messages, which are converted internally to Kafka messages by the outbound channel adapter: the payload of the Spring Integration message is used to populate the payload of the Kafka message and, by default, the kafka_messageKey header of the Spring Integration message is used to populate the key of the Kafka message.

The target topic and partition for publishing the message can be customized through the kafka_topic and kafka_partitionId headers, respectively.
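
For example, a message built as follows (a minimal sketch; the payload, key, topic, and the toKafka channel are illustrative, and the header names are the constants defined on KafkaHeaders) is published to partition 0 of myTopic with the key someKey:

Message<String> message = MessageBuilder.withPayload("someValue")
        .setHeader(KafkaHeaders.MESSAGE_KEY, "someKey")  // kafka_messageKey
        .setHeader(KafkaHeaders.TOPIC, "myTopic")        // kafka_topic
        .setHeader(KafkaHeaders.PARTITION_ID, 0)         // kafka_partitionId
        .build();
toKafka.send(message);  // 'toKafka' is an illustrative channel wired to the adapter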

In addition, the <int-kafka:outbound-channel-adapter> provides the ability to extract the key, target topic, and target partition by applying SpEL expressions to the outbound message. To that end, it supports the mutually exclusive pairs of attributes topic/topic-expression, message-key/message-key-expression, and partition-id/partition-id-expression, which allow the topic, message-key, and partition-id, respectively, either to be specified as static values on the adapter or to be evaluated dynamically at runtime against the request message.

Important

The KafkaHeaders interface (provided by spring-kafka) contains constants used for interacting with headers. The messageKey and topic default headers now require a kafka_ prefix. When migrating from an earlier version that used the old headers, you need to specify message-key-expression="headers['messageKey']" and topic-expression="headers['topic']" on the <int-kafka:outbound-channel-adapter>, or simply change the headers upstream to the new headers from KafkaHeaders using a <header-enricher> or MessageBuilder. Or, of course, configure them on the adapter using topic and message-key if you are using constant values.
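
For example, old headers set by an upstream component could be remapped with a MessageBuilder along these lines (a sketch only; messageKey and topic are the legacy header names mentioned above, and original is whatever message your flow produces):

Message<?> updated = MessageBuilder.fromMessage(original)
        // copy the legacy header values into the new KafkaHeaders constants
        .setHeader(KafkaHeaders.MESSAGE_KEY, original.getHeaders().get("messageKey"))
        .setHeader(KafkaHeaders.TOPIC, original.getHeaders().get("topic"))
        // remove the legacy headers so they are not mapped twice
        .removeHeaders("messageKey", "topic")
        .build();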

NOTE: If the adapter is configured with a topic or message key (either with a constant or expression), those are used and the corresponding header is ignored. If you wish the header to override the configuration, you need to configure it in an expression, such as:

topic-expression="headers['topic'] != null ? headers['topic'] : 'myTopic'".

The adapter requires a KafkaTemplate.

Here is an example of how the Kafka outbound channel adapter is configured with XML:

<int-kafka:outbound-channel-adapter id="kafkaOutboundChannelAdapter"
                                    kafka-template="template"
                                    auto-startup="false"
                                    channel="inputToKafka"
                                    topic="foo"
                                    sync="false"
                                    message-key-expression="'bar'"
                                    send-failure-channel="failures"
                                    send-success-channel="successes"
                                    error-message-strategy="ems"
                                    partition-id-expression="2">
</int-kafka:outbound-channel-adapter>

<bean id="template" class="org.springframework.kafka.core.KafkaTemplate">
    <constructor-arg>
        <bean class="org.springframework.kafka.core.DefaultKafkaProducerFactory">
            <constructor-arg>
                <map>
                    <entry key="bootstrap.servers" value="localhost:9092" />
                    ... <!-- more producer properties -->
                </map>
            </constructor-arg>
        </bean>
    </constructor-arg>
</bean>

As you can see, the adapter requires a KafkaTemplate which, in turn, requires a suitably configured producer factory (here, a DefaultKafkaProducerFactory).

When using Java Configuration:

@Bean
@ServiceActivator(inputChannel = "toKafka")
public MessageHandler handler() throws Exception {
    KafkaProducerMessageHandler<String, String> handler =
            new KafkaProducerMessageHandler<>(kafkaTemplate());
    handler.setTopicExpression(new LiteralExpression("someTopic"));
    handler.setMessageKeyExpression(new LiteralExpression("someKey"));
    handler.setFailureChannel(failures());
    return handler;
}

@Bean
public KafkaTemplate<String, String> kafkaTemplate() {
    return new KafkaTemplate<>(producerFactory());
}

@Bean
public ProducerFactory<String, String> producerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, this.brokerAddress);
    // set more properties
    return new DefaultKafkaProducerFactory<>(props);
}

If a send-failure-channel is provided and a send failure (sync or async) is received, an ErrorMessage is sent to that channel. The payload is a KafkaSendFailureException with properties failedMessage, record (the ProducerRecord) and cause. The DefaultErrorMessageStrategy can be overridden via the error-message-strategy property.
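
A handler on that channel might look like the following sketch (the failures channel name matches the XML example above; the logging is purely illustrative):

@ServiceActivator(inputChannel = "failures")
public void handleFailure(ErrorMessage errorMessage) {
    KafkaSendFailureException exception = (KafkaSendFailureException) errorMessage.getPayload();
    ProducerRecord<?, ?> record = exception.getRecord();  // the record that could not be sent
    Message<?> failed = exception.getFailedMessage();     // the original Spring Integration message
    // 'logger' is assumed to be declared elsewhere
    logger.error("Failed to send " + record + " built from " + failed, exception.getCause());
}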

If a send-success-channel is provided, a message with a payload of type org.apache.kafka.clients.producer.RecordMetadata will be sent after a successful send. When using Java configuration, use setOutputChannel for this purpose.
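
Successful sends can be observed in a similar way (a sketch; the successes channel name matches the XML example above and the logging is illustrative):

@ServiceActivator(inputChannel = "successes")
public void handleSuccess(Message<RecordMetadata> message) {
    RecordMetadata metadata = message.getPayload();
    // 'logger' is assumed to be declared elsewhere
    logger.info("Sent to " + metadata.topic() + "-" + metadata.partition()
            + " at offset " + metadata.offset());
}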

5.1.3 Message Driven Channel Adapter

The KafkaMessageDrivenChannelAdapter (<int-kafka:message-driven-channel-adapter>) uses a spring-kafka KafkaMessageListenerContainer or ConcurrentMessageListenerContainer.

Starting with spring-integration-kafka version 2.1, the mode attribute is available (record or batch; default record). In record mode, each message payload is converted from a single ConsumerRecord; in batch mode, the payload is a list of objects converted from all the ConsumerRecords returned by the consumer poll. As with the batched @KafkaListener, the KafkaHeaders.RECEIVED_MESSAGE_KEY, KafkaHeaders.RECEIVED_PARTITION_ID, KafkaHeaders.RECEIVED_TOPIC and KafkaHeaders.OFFSET headers are also lists, with positions corresponding to the position in the payload.
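
In batch mode, a downstream handler therefore receives a list payload along with list-valued headers; a minimal sketch (the batchChannel channel name and the logger are illustrative assumptions) might look like this:

@ServiceActivator(inputChannel = "batchChannel")
public void handleBatch(@Payload List<String> payloads,
        @Header(KafkaHeaders.RECEIVED_PARTITION_ID) List<Integer> partitions,
        @Header(KafkaHeaders.OFFSET) List<Long> offsets) {
    // position i in each list refers to the same ConsumerRecord
    for (int i = 0; i < payloads.size(); i++) {
        logger.info(payloads.get(i) + " (partition " + partitions.get(i)
                + ", offset " + offsets.get(i) + ")");
    }
}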

An example of the XML configuration variant is shown here:

<int-kafka:message-driven-channel-adapter
        id="kafkaListener"
        listener-container="container1"
        auto-startup="false"
        phase="100"
        send-timeout="5000"
        mode="record"
        retry-template="template"
        recovery-callback="callback"
        error-message-strategy="ems"
        channel="someChannel"
        error-channel="errorChannel" />

<bean id="container1" class="org.springframework.kafka.listener.KafkaMessageListenerContainer">
    <constructor-arg>
        <bean class="org.springframework.kafka.core.DefaultKafkaConsumerFactory">
            <constructor-arg>
                <map>
                <entry key="bootstrap.servers" value="localhost:9092" />
                ...
                </map>
            </constructor-arg>
        </bean>
    </constructor-arg>
    <constructor-arg>
        <bean class="org.springframework.kafka.listener.config.ContainerProperties">
            <constructor-arg name="topics" value="foo" />
        </bean>
    </constructor-arg>
</bean>

When using Java Configuration:

@Bean
public KafkaMessageDrivenChannelAdapter<String, String>
            adapter(KafkaMessageListenerContainer<String, String> container) {
    KafkaMessageDrivenChannelAdapter<String, String> kafkaMessageDrivenChannelAdapter =
            new KafkaMessageDrivenChannelAdapter<>(container, ListenerMode.record);
    kafkaMessageDrivenChannelAdapter.setOutputChannel(received());
    return kafkaMessageDrivenChannelAdapter;
}

@Bean
public KafkaMessageListenerContainer<String, String> container() throws Exception {
    ContainerProperties properties = new ContainerProperties(this.topic);
    // set more properties
    return new KafkaMessageListenerContainer<>(consumerFactory(), properties);
}

@Bean
public ConsumerFactory<String, String> consumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, this.brokerAddress);
    // set more properties
    return new DefaultKafkaConsumerFactory<>(props);
}

Received messages will have certain headers populated. Refer to the KafkaHeaders class for more information.
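
For example, with a record-mode adapter, the topic, partition, and offset for each record can be obtained from the headers in a downstream handler (a sketch; the received channel name matches the Java example above and the logger is illustrative):

@ServiceActivator(inputChannel = "received")
public void listen(String payload,
        @Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
        @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition,
        @Header(KafkaHeaders.OFFSET) long offset) {
    // 'logger' is assumed to be declared elsewhere
    logger.info("Received " + payload + " from " + topic + "-" + partition + "@" + offset);
}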

Important

The Consumer object (in the kafka_consumer header) is not thread-safe; you must only invoke its methods on the thread that calls the listener within the adapter; if you hand off the message to another thread, you must not call its methods.

When a retry-template is provided, delivery failures will be retried according to its retry policy. An error-channel is not allowed in this case. The recovery-callback can be used to handle the error when retries are exhausted. In most cases, this will be an ErrorMessageSendingRecoverer which will send the ErrorMessage to a channel.
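
A sketch of such a configuration in Java follows (the retry policy, the received() output channel, and an errorChannel() bean holding a MessageChannel are illustrative assumptions):

@Bean
public KafkaMessageDrivenChannelAdapter<String, String>
            retryingAdapter(KafkaMessageListenerContainer<String, String> container) {
    KafkaMessageDrivenChannelAdapter<String, String> adapter =
            new KafkaMessageDrivenChannelAdapter<>(container, ListenerMode.record);
    adapter.setOutputChannel(received());
    RetryTemplate retryTemplate = new RetryTemplate();
    retryTemplate.setRetryPolicy(new SimpleRetryPolicy(3)); // up to 3 delivery attempts (illustrative)
    adapter.setRetryTemplate(retryTemplate);
    // when retries are exhausted, publish an ErrorMessage to the (assumed) errorChannel() bean
    adapter.setRecoveryCallback(new ErrorMessageSendingRecoverer(errorChannel()));
    return adapter;
}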

When building the ErrorMessage (for use in the error-channel or recovery-callback), you can customize the error message via the error-message-strategy property. By default, a RawRecordHeaderErrorMessageStrategy is used, providing access to the converted message as well as the raw ConsumerRecord.

5.1.4 Message Conversion

A StringJsonMessageConverter is provided; see Section 4.1.4, “Serialization/Deserialization and Message Conversion” for more information.

When using this converter with a message-driven channel adapter, you can specify the type to which you want the incoming payload to be converted. This is achieved by setting the payload-type attribute (payloadType property) on the adapter.

<int-kafka:message-driven-channel-adapter
        id="kafkaListener"
        listener-container="container1"
        auto-startup="false"
        phase="100"
        send-timeout="5000"
        channel="nullChannel"
        message-converter="messageConverter"
        payload-type="com.example.Foo"
        error-channel="errorChannel" />

<bean id="messageConverter" class="org.springframework.kafka.support.converter.MessagingMessageConverter"/>
@Bean
public KafkaMessageDrivenChannelAdapter<String, String>
            adapter(KafkaMessageListenerContainer<String, String> container) {
    KafkaMessageDrivenChannelAdapter<String, String> kafkaMessageDrivenChannelAdapter =
            new KafkaMessageDrivenChannelAdapter<>(container, ListenerMode.record);
    kafkaMessageDrivenChannelAdapter.setOutputChannel(received());
    kafkaMessageDrivenChannelAdapter.setMessageConverter(converter());
    kafkaMessageDrivenChannelAdapter.setPayloadType(Foo.class);
    return kafkaMessageDrivenChannelAdapter;
}
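
The converter() bean referenced above might be declared as follows (a sketch; a StringJsonMessageConverter is assumed here so that the JSON record value can be mapped to com.example.Foo via the payload type):

@Bean
public StringJsonMessageConverter converter() {
    // converts the record value from JSON to the payload type set on the adapter
    return new StringJsonMessageConverter();
}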

5.1.5 What’s New in Spring Integration for Apache Kafka

See the Spring for Apache Kafka Project Page for a matrix of compatible spring-kafka and kafka-clients versions.

2.1.x

The 2.1.x branch introduced the following changes:

  • Update to spring-kafka 1.1.x, including support for batch payloads
  • Support for sync outbound requests via XML configuration
  • Support for payload-type for inbound channel adapters
  • Enhanced error handling for the inbound channel adapter (2.1.1)
  • Support for send success/failure messages (2.1.2)

2.2.x

The 2.2.x branch introduced the following changes:

  • Update to spring-kafka 1.2.x

2.3.x

The 2.3.x branch introduced the following changes:

  • Update to spring-kafka 1.3.x; including support for transactions and header mapping provided by kafka-clients 0.11.0.0