Appendix A. Change History

A.1 Changes between 2.0 and 2.1

A.1.1 Kafka Client Version

This version requires the 1.0.0 kafka-clients jar or higher.

Note

The 1.1.x client is supported, as of version 2.1.5, but you will need to override dependencies as described in ???. The 1.1.x client will be supported natively in version 2.2.

A.1.2 JSON Improvements

The StringJsonMessageConverter and JsonSerializer now add type information in Headers, allowing the converter and JsonDeserializer to create specific types on reception, based on the message itself rather than a fixed configured type. See Section 4.1.5, “Serialization/Deserialization and Message Conversion” for more information.
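As an illustrative sketch (the Foo type, topic name, and bootstrap address are placeholders, and bean wiring is omitted), a producer configured with the JsonSerializer records the payload type in headers, which a consumer-side JsonDeserializer can then use:

```java
// Producer side: JsonSerializer adds type information headers to each record.
Map<String, Object> producerProps = new HashMap<>();
producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
KafkaTemplate<String, Object> template = new KafkaTemplate<>(
        new DefaultKafkaProducerFactory<>(producerProps,
                new StringSerializer(), new JsonSerializer<>()));
template.send("someTopic", new Foo("bar"));

// Consumer side: JsonDeserializer uses the type headers to decide which class
// to instantiate, rather than relying only on a fixed configured type.
Map<String, Object> consumerProps = new HashMap<>();
consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
ConsumerFactory<String, Foo> cf = new DefaultKafkaConsumerFactory<>(
        consumerProps, new StringDeserializer(), new JsonDeserializer<>(Foo.class));
```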

A.1.3 Container Stopping Error Handlers

Container error handlers are now provided for both record and batch listeners that treat any exceptions thrown by the listener as fatal; they stop the container. See Section 4.1.8, “Handling Exceptions” for more information.

A.1.4 Pausing/Resuming Containers

The listener containers now have pause() and resume() methods (since version 2.1.3). See Section 4.1.4, “Pausing/Resuming Listener Containers” for more information.
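A minimal sketch of using these methods at runtime (the registry bean and listener id are assumptions for illustration):

```java
// Look up a container by its @KafkaListener id via the
// KafkaListenerEndpointRegistry and pause it; the container keeps polling
// while paused, so no consumer rebalance is triggered.
MessageListenerContainer container = registry.getListenerContainer("myListener");
container.pause();
// ... later, when processing should continue ...
container.resume();
```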

A.1.5 Stateful Retry

Starting with version 2.1.3, stateful retry can be configured; see the section called “Stateful Retry” for more information.

A.1.6 Client ID

Starting with version 2.1.1, you can set the client.id prefix on @KafkaListener. Previously, to customize the client id, you needed a separate consumer factory (and container factory) per listener. The prefix is suffixed with -n to provide unique client ids when concurrency is used.
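For illustration (the topic and prefix names are arbitrary):

```java
@KafkaListener(id = "myListener", topics = "someTopic", clientIdPrefix = "myClientId")
public void listen(String in) {
    // with a container concurrency of 3, the consumers get client ids
    // myClientId-0, myClientId-1 and myClientId-2
}
```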

A.1.7 Logging Offset Commits

By default, logging of topic offset commits is performed with the DEBUG logging level. Starting with version 2.1.2, there is a new property in ContainerProperties called commitLogLevel which allows you to specify the log level for these messages. See the section called “KafkaMessageListenerContainer” for more information.
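A sketch of raising the commit log level, assuming the property is set on the ContainerProperties used to build the container:

```java
ContainerProperties containerProps = new ContainerProperties("someTopic");
// raise offset-commit logging from the default DEBUG to INFO
containerProps.setCommitLogLevel(LogIfLevelEnabled.Level.INFO);
```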

A.1.8 Default @KafkaHandler

Starting with version 2.1.3, one of the @KafkaHandler methods on a class-level @KafkaListener can be designated as the default. See the section called “@KafkaListener on a Class” for more information.
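A sketch of a class-level listener with a default handler (the class, topic, and payload types are examples):

```java
@KafkaListener(id = "multi", topics = "someTopic")
public class MultiTypeListener {

    @KafkaHandler
    public void listen(String in) {
        // invoked for String payloads
    }

    @KafkaHandler(isDefault = true)
    public void listenDefault(Object in) {
        // invoked when no other handler matches the payload type
    }

}
```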

A.1.9 ReplyingKafkaTemplate

Starting with version 2.1.3, a subclass of KafkaTemplate is provided to support request/reply semantics. See the section called “ReplyingKafkaTemplate” for more information.
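A sketch of the request/reply flow (the template is assumed to be a ReplyingKafkaTemplate built from a producer factory and a reply listener container, with bean wiring omitted; topic names are arbitrary):

```java
ProducerRecord<String, String> record = new ProducerRecord<>("kRequests", "foo");
// in this version, the caller sets the reply topic header on the request
record.headers().add(new RecordHeader(KafkaHeaders.REPLY_TOPIC, "kReplies".getBytes()));
RequestReplyFuture<String, String, String> future = template.sendAndReceive(record);
ConsumerRecord<String, String> reply = future.get(10, TimeUnit.SECONDS);
```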

A.1.10 ChainedKafkaTransactionManager

Version 2.1.3 introduced the ChainedKafkaTransactionManager. See the section called “ChainedKafkaTransactionManager” for more information.

A.1.11 Migration Guide from 2.0

See the 2.0 to 2.1 Migration guide.

A.2 Changes Between 1.3 and 2.0

A.2.1 Spring Framework and Java Versions

The Spring for Apache Kafka project now requires Spring Framework 5.0 and Java 8.

A.2.2 @KafkaListener Changes

You can now annotate @KafkaListener methods (and classes, and @KafkaHandler methods) with @SendTo. If the method returns a result, it is forwarded to the specified topic. See the section called “Forwarding Listener Results using @SendTo” for more information.
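For example (the topic names and transformation are arbitrary):

```java
@KafkaListener(id = "upper", topics = "someTopic")
@SendTo("otherTopic")
public String upperCase(String in) {
    return in.toUpperCase(); // the result is forwarded to otherTopic
}
```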

A.2.3 Message Listeners

Message listeners can now be aware of the Consumer object. See the section called “Message Listeners” for more information.
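A sketch of the consumer-aware listener contract (the class name and body are illustrative):

```java
public class MyListener implements ConsumerAwareMessageListener<String, String> {

    @Override
    public void onMessage(ConsumerRecord<String, String> record, Consumer<?, ?> consumer) {
        // the Consumer can be used here, e.g. to pause partitions or inspect
        // metrics; it must only be used on the listener thread
    }

}
```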

A.2.4 ConsumerAwareRebalanceListener

Rebalance listeners can now access the Consumer object during rebalance notifications. See the section called “Rebalance Listeners” for more information.

A.3 Changes Between 1.2 and 1.3

A.3.1 Support for Transactions

The 0.11.0.0 client library added support for transactions; the KafkaTransactionManager and other support for transactions has been added. See the section called “Transactions” for more information.

A.3.2 Support for Headers

The 0.11.0.0 client library added support for message headers; these can now be mapped to/from spring-messaging MessageHeaders. See Section 4.1.6, “Message Headers” for more information.

A.3.3 Creating Topics

The 0.11.0.0 client library provides an AdminClient which can be used to create topics. The KafkaAdmin uses this client to automatically add topics defined as @Bean s.
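A sketch of the configuration (topic name, partition count, and replication factor are examples):

```java
@Bean
public KafkaAdmin admin() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    return new KafkaAdmin(configs);
}

@Bean
public NewTopic topic1() {
    // created automatically on context startup if it does not exist:
    // 10 partitions, replication factor 2
    return new NewTopic("thing1", 10, (short) 2);
}
```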

A.3.4 Support for Kafka timestamps

KafkaTemplate now provides an API to send records with timestamps. New KafkaHeaders have been introduced for timestamp support, and new KafkaConditions.timestamp() and KafkaMatchers.hasTimestamp() testing utilities have been added. See the section called “KafkaTemplate”, the section called “@KafkaListener Annotation” and Section 4.3, “Testing Applications” for more details.

A.3.5 @KafkaListener Changes

You can now configure a KafkaListenerErrorHandler to handle exceptions. See Section 4.1.8, “Handling Exceptions” for more information.

By default, the @KafkaListener id property is now used as the group.id property, overriding the property configured in the consumer factory (if present). Further, you can explicitly configure the groupId on the annotation. Previously, you would have needed a separate container factory (and consumer factory) to use different group.id values for listeners. To restore the previous behavior of using the factory-configured group.id, set the idIsGroup property on the annotation to false.
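The three options can be sketched as follows (ids, groups, and topics are examples):

```java
// the id is used as the group.id ("billing") by default
@KafkaListener(id = "billing", topics = "billingTopic")
public void listenBilling(String in) { }

// an explicit groupId takes precedence over the id
@KafkaListener(id = "audit", groupId = "auditGroup", topics = "auditTopic")
public void listenAudit(String in) { }

// idIsGroup = false restores the factory-configured group.id
@KafkaListener(id = "other", topics = "otherTopic", idIsGroup = false)
public void listenOther(String in) { }
```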

A.3.6 @EmbeddedKafka Annotation

For convenience, a class-level @EmbeddedKafka annotation is provided for test classes; it registers a KafkaEmbedded instance as a bean. See Section 4.3, “Testing Applications” for more information.

A.3.7 Kerberos Configuration

Support for configuring Kerberos is now provided. See Section 4.1.9, “Kerberos” for more information.

A.4 Changes between 1.1 and 1.2

This version uses the 0.10.2.x client.

A.5 Changes between 1.0 and 1.1

A.5.1 Kafka Client

This version uses the Apache Kafka 0.10.x.x client.

A.5.2 Batch Listeners

Listeners can be configured to receive the entire batch of messages returned by the consumer.poll() operation, rather than one at a time.
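A sketch of enabling batch delivery (the factory bean name, topic, and the assumed consumerFactory() bean are illustrative):

```java
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> batchFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory()); // assumed defined elsewhere
    factory.setBatchListener(true); // deliver the whole poll() result at once
    return factory;
}

@KafkaListener(id = "batch", topics = "someTopic", containerFactory = "batchFactory")
public void listen(List<String> messages) {
    // all records returned by one poll() arrive together
}
```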

A.5.3 Null Payloads

Null payloads are used to "delete" keys when using log compaction.

A.5.4 Initial Offset

When explicitly assigning partitions, you can now configure the initial offset relative to the current position for the consumer group, rather than absolute or relative to the current end.

A.5.5 Seek

You can now seek the position of each topic/partition. This can be used to set the initial position during initialization when group management is in use and Kafka assigns the partitions. You can also seek when an idle container is detected, or at any arbitrary point in your application’s execution. See the section called “Seeking to a Specific Offset” for more information.
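A sketch of a listener that seeks on assignment, via the ConsumerSeekAware contract (the class name and the seek-to-beginning policy are illustrative):

```java
public class SeekingListener implements ConsumerSeekAware {

    @Override
    public void registerSeekCallback(ConsumerSeekCallback callback) {
        // store the callback (e.g. per-thread) if arbitrary seeks are needed later
    }

    @Override
    public void onPartitionsAssigned(Map<TopicPartition, Long> assignments,
            ConsumerSeekCallback callback) {
        // e.g. start each assigned partition from the beginning
        for (TopicPartition tp : assignments.keySet()) {
            callback.seekToBeginning(tp.topic(), tp.partition());
        }
    }

    @Override
    public void onIdleContainer(Map<TopicPartition, Long> assignments,
            ConsumerSeekCallback callback) {
        // optionally reposition when an idle container is detected
    }

}
```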