Class KafkaItemReader<K,V>

java.lang.Object
  org.springframework.batch.item.ItemStreamSupport
    org.springframework.batch.item.support.AbstractItemStreamItemReader<V>
      org.springframework.batch.item.kafka.KafkaItemReader<K,V>

All Implemented Interfaces:
    ItemReader<V>, ItemStream, ItemStreamReader<V>
An ItemReader implementation for Apache Kafka. Uses a KafkaConsumer to read
data from a given topic. Multiple partitions within the same topic can be
assigned to this reader. Since KafkaConsumer is not thread-safe, this reader
is not thread-safe.
Since:
    4.2
Author:
    Mathieu Ouellet, Mahmoud Ben Hassine
Constructor Summary
KafkaItemReader(Properties consumerProperties, String topicName, Integer... partitions)
    Create a new KafkaItemReader.
KafkaItemReader(Properties consumerProperties, String topicName, List<Integer> partitions)
    Create a new KafkaItemReader.
Method Summary
void close()
    If any resources are needed for the stream to operate they need to be destroyed here.
boolean isSaveState()
    The flag that determines whether to save internal state for restarts.
void open(ExecutionContext executionContext)
    Open the stream for the provided ExecutionContext.
V read()
    Reads a piece of input data and advances to the next one.
void setPartitionOffsets(Map<org.apache.kafka.common.TopicPartition, Long> partitionOffsets)
    Setter for partition offsets.
void setPollTimeout(Duration pollTimeout)
    Set a timeout for the consumer topic polling duration.
void setSaveState(boolean saveState)
    Set the flag that determines whether to save internal data for ExecutionContext.
void update(ExecutionContext executionContext)
    Indicates that the execution context provided during open is about to be saved.

Methods inherited from class org.springframework.batch.item.ItemStreamSupport:
    getExecutionContextKey, getName, setExecutionContextName, setName
Constructor Details

KafkaItemReader

public KafkaItemReader(Properties consumerProperties, String topicName, Integer... partitions)

Create a new KafkaItemReader. consumerProperties must contain the following
keys: 'bootstrap.servers', 'group.id', 'key.deserializer' and
'value.deserializer'.

Parameters:
    consumerProperties - properties of the consumer
    topicName - name of the topic to read data from
    partitions - list of partitions to read data from
KafkaItemReader

public KafkaItemReader(Properties consumerProperties, String topicName, List<Integer> partitions)

Create a new KafkaItemReader. consumerProperties must contain the following
keys: 'bootstrap.servers', 'group.id', 'key.deserializer' and
'value.deserializer'.

Parameters:
    consumerProperties - properties of the consumer
    topicName - name of the topic to read data from
    partitions - list of partitions to read data from
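A minimal sketch of assembling the required consumer properties with plain
java.util.Properties. The broker address, group id, topic name, partition
numbers and deserializer classes below are illustrative placeholder values,
not prescribed by this class; the actual reader construction is shown in a
comment because it needs the spring-batch and kafka-clients dependencies.

```java
import java.util.Properties;

public class ConsumerPropertiesExample {
    public static void main(String[] args) {
        // The four keys the KafkaItemReader constructors require.
        Properties consumerProperties = new Properties();
        consumerProperties.put("bootstrap.servers", "localhost:9092");
        consumerProperties.put("group.id", "my-batch-group");
        consumerProperties.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProperties.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        // With these properties in place, the reader could be created as:
        // KafkaItemReader<String, String> reader =
        //         new KafkaItemReader<>(consumerProperties, "my-topic", 0, 1);

        // Sanity-check that every mandatory key is present.
        for (String key : new String[] {"bootstrap.servers", "group.id",
                "key.deserializer", "value.deserializer"}) {
            if (!consumerProperties.containsKey(key)) {
                throw new IllegalStateException("missing required key: " + key);
            }
        }
        System.out.println("all required consumer properties present");
    }
}
```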
Method Details
setPollTimeout

public void setPollTimeout(Duration pollTimeout)

Set a timeout for the consumer topic polling duration. Defaults to 30
seconds.

Parameters:
    pollTimeout - timeout for the consumer poll operation
setSaveState

public void setSaveState(boolean saveState)

Set the flag that determines whether to save internal data for the
ExecutionContext. Only switch this to false if you don't want to save any
state from this stream, and you don't need it to be restartable. Always set
it to false if the reader is being used in a concurrent environment.

Parameters:
    saveState - flag value (default true)
isSaveState

public boolean isSaveState()

The flag that determines whether to save internal state for restarts.

Returns:
    true if the flag was set
setPartitionOffsets

public void setPartitionOffsets(Map<org.apache.kafka.common.TopicPartition, Long> partitionOffsets)

Setter for partition offsets. This mapping tells the reader the offset to
start reading from in each partition. It is optional and defaults to starting
from offset 0 in each partition. Passing an empty map makes the reader start
from the offset stored in Kafka for the consumer group ID. In case of a
restart, offsets stored in the execution context take precedence.

Parameters:
    partitionOffsets - mapping of starting offset in each partition
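The offsets map described above can be sketched as follows. To keep the
example runnable without the kafka-clients dependency, a local record is used
as a hypothetical stand-in for org.apache.kafka.common.TopicPartition; the
topic name and offset values are also illustrative.

```java
import java.util.HashMap;
import java.util.Map;

public class PartitionOffsetsExample {
    // Stand-in for org.apache.kafka.common.TopicPartition, used here only so
    // the sketch compiles without the kafka-clients dependency.
    record TopicPartition(String topic, int partition) {}

    public static void main(String[] args) {
        // Start partition 0 at offset 100 and partition 1 at offset 200.
        // With the real kafka-clients class, this map would be passed to
        // reader.setPartitionOffsets(partitionOffsets).
        Map<TopicPartition, Long> partitionOffsets = new HashMap<>();
        partitionOffsets.put(new TopicPartition("my-topic", 0), 100L);
        partitionOffsets.put(new TopicPartition("my-topic", 1), 200L);

        // An empty map means: start from the offsets Kafka stored for the
        // consumer group ID instead.
        Map<TopicPartition, Long> useGroupOffsets = new HashMap<>();

        System.out.println("partitions configured: " + partitionOffsets.size());
        System.out.println("empty map size: " + useGroupOffsets.size());
    }
}
```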
open

public void open(ExecutionContext executionContext)

Description copied from interface: ItemStream
Open the stream for the provided ExecutionContext.

Parameters:
    executionContext - current step's ExecutionContext. Will be the
    executionContext from the last run of the step on a restart.
read

public V read()

Description copied from interface: ItemReader
Reads a piece of input data and advances to the next one. Implementations
must return null at the end of the input data set. In a transactional
setting, the caller might get the same item twice from successive calls (or
otherwise), if the first call was in a transaction that rolled back.

Returns:
    T the item to be processed, or null if the data source is exhausted
update

public void update(ExecutionContext executionContext)

Description copied from interface: ItemStream
Indicates that the execution context provided during open is about to be
saved. If any state is remaining, but has not been put in the context, it
should be added here.

Parameters:
    executionContext - to be updated
close

public void close()

Description copied from interface: ItemStream
If any resources are needed for the stream to operate they need to be
destroyed here. Once this method has been called all other methods (except
open) may throw an exception.