
What’s New in Spring Batch 5.1

Dependencies upgrade

In this release, the Spring dependencies are upgraded to the following versions:

  • Spring Framework 6.1.0

  • Spring Integration 6.2.0

  • Spring Data 3.2.0

  • Spring LDAP 3.2.0

  • Spring AMQP 3.1.0

  • Spring Kafka 3.1.0

  • Micrometer 1.12.0

Virtual Threads support

Embracing JDK 21 LTS is one of the main themes for Spring Batch 5.1, especially the support of virtual threads from Project Loom. In this release, virtual threads can be used in all areas of the framework, like running a concurrent step with virtual threads or launching multiple steps in parallel using virtual threads.

Thanks to the well-designed separation of concerns in Spring Batch, threads are not managed directly. Thread management is instead delegated to TaskExecutor implementations from Spring Framework. This programming-to-interface approach lets you switch between TaskExecutor implementations in a transparent and flexible way.

Spring Framework 6.1 introduces a new TaskExecutor implementation based on virtual threads, namely the VirtualThreadTaskExecutor. This TaskExecutor can be used in Spring Batch wherever a TaskExecutor is required.
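For example, the following sketch shows how a chunk-oriented step could be configured to process chunks on virtual threads (the step, reader, and writer names here are hypothetical):

```java
import org.springframework.batch.core.Step;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.step.builder.StepBuilder;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.ItemWriter;
import org.springframework.context.annotation.Bean;
import org.springframework.core.task.VirtualThreadTaskExecutor;
import org.springframework.transaction.PlatformTransactionManager;

public class VirtualThreadStepConfiguration {

    @Bean
    public Step concurrentStep(JobRepository jobRepository, PlatformTransactionManager transactionManager,
                               ItemReader<String> itemReader, ItemWriter<String> itemWriter) {
        return new StepBuilder("concurrentStep", jobRepository)
                .<String, String>chunk(100, transactionManager)
                .reader(itemReader)
                .writer(itemWriter)
                // delegate chunk processing to virtual threads
                .taskExecutor(new VirtualThreadTaskExecutor("batch-"))
                .build();
    }
}
```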

Memory management improvement in the JpaItemWriter

When using the JpaItemWriter, the JPA persistence context can grow quickly if the chunk size is large. This might lead to an OutOfMemoryError if the context is not cleared in a timely manner.

In this release, a new option named clearPersistenceContext has been introduced in the JpaItemWriter to clear the persistence context after writing each chunk of items. This option improves the memory management of chunk-oriented steps dealing with large amounts of data and big chunk sizes.
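As a minimal sketch, assuming a hypothetical Customer entity and that the builder exposes the new option, the writer could be configured as follows:

```java
import jakarta.persistence.EntityManagerFactory;
import org.springframework.batch.item.database.JpaItemWriter;
import org.springframework.batch.item.database.builder.JpaItemWriterBuilder;
import org.springframework.context.annotation.Bean;

public class JpaWriterConfiguration {

    @Bean
    public JpaItemWriter<Customer> jpaItemWriter(EntityManagerFactory entityManagerFactory) {
        return new JpaItemWriterBuilder<Customer>()
                .entityManagerFactory(entityManagerFactory)
                // clear the persistence context after each chunk is written
                .clearPersistenceContext(true)
                .build();
    }
}
```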

New synchronized decorators for item readers and writers

Up to version 5.0, Spring Batch provided two decorators, SynchronizedItemStreamReader and SynchronizedItemStreamWriter, to synchronize thread access to ItemStreamReader#read and ItemStreamWriter#write. Those decorators are useful when using non-thread-safe item streams in multi-threaded steps.

While those decorators work with ItemStream implementations, they cannot be used with item readers and writers that do not implement ItemStream. For example, they cannot be used to synchronize access to ListItemReader#read or KafkaItemWriter#write.

For users' convenience, this release introduces new decorators for non-item-stream readers and writers as well. With this new feature, all item readers and writers in Spring Batch can now be synchronized without having to write custom decorators.
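As an illustrative sketch, assuming the new decorators are named SynchronizedItemReader and SynchronizedItemWriter, a non-thread-safe reader such as ListItemReader could be wrapped as follows:

```java
import java.util.List;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.support.ListItemReader;
import org.springframework.batch.item.support.SynchronizedItemReader;

public class SynchronizedReaderExample {

    public ItemReader<String> synchronizedReader() {
        // ListItemReader is not thread-safe; the decorator synchronizes access to read()
        ListItemReader<String> delegate = new ListItemReader<>(List.of("item1", "item2", "item3"));
        return new SynchronizedItemReader<>(delegate);
    }
}
```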

New Cursor-based MongoItemReader

Up to version 5.0, the MongoItemReader provided by Spring Batch used pagination, which is based on MongoDB’s skip operation. While this works well for small/medium data sets, it starts to perform poorly with large data sets.

This release introduces the MongoCursorItemReader, a new cursor-based item reader for MongoDB. This implementation uses cursors instead of paging to read data from MongoDB, which improves the performance of reads on large collections. For consistency with other cursor-based and page-based readers, the current MongoItemReader has been renamed to MongoPagingItemReader.
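A minimal configuration sketch, assuming a hypothetical Customer type and that the builder exposes the usual name, template, query, and sort options:

```java
import java.util.Map;
import org.springframework.batch.item.data.MongoCursorItemReader;
import org.springframework.batch.item.data.builder.MongoCursorItemReaderBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.data.domain.Sort;
import org.springframework.data.mongodb.core.MongoTemplate;

public class MongoReaderConfiguration {

    @Bean
    public MongoCursorItemReader<Customer> mongoCursorItemReader(MongoTemplate mongoTemplate) {
        return new MongoCursorItemReaderBuilder<Customer>()
                .name("customerReader")
                .template(mongoTemplate)
                .targetType(Customer.class)
                // stream matching documents with a cursor instead of skip-based pages
                .jsonQuery("{ }")
                .sorts(Map.of("_id", Sort.Direction.ASC))
                .build();
    }
}
```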

Bulk inserts support in MongoItemWriter

Up to version 5.0, the MongoItemWriter supported two operations: upsert and delete. While the upsert operation works well for both inserts and updates, it does not perform well for items that are known to be new in the target collection.

Similar to the persist and merge operations in the JpaItemWriter, this release adds a new operation named insert in the MongoItemWriter, which is designed for bulk inserts. This new option performs better than upsert for new items as it does not require an additional lookup to check if items already exist in the target collection.
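As a sketch, assuming a hypothetical Customer type and a customers collection (the exact builder option name is an assumption), the new operation can be selected through the writer's mode:

```java
import org.springframework.batch.item.data.MongoItemWriter;
import org.springframework.batch.item.data.MongoItemWriter.Mode;
import org.springframework.batch.item.data.builder.MongoItemWriterBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.data.mongodb.core.MongoTemplate;

public class MongoWriterConfiguration {

    @Bean
    public MongoItemWriter<Customer> mongoItemWriter(MongoTemplate mongoTemplate) {
        return new MongoItemWriterBuilder<Customer>()
                .template(mongoTemplate)
                .collection("customers")
                // use bulk inserts for items known to be new in the target collection
                .mode(Mode.INSERT)
                .build();
    }
}
```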

New item reader and writer for Redis

A new RedisItemReader is now available in the library of built-in item readers. This reader is based on Spring Data Redis and can be configured with ScanOptions to scan the set of keys to read from Redis.

Similarly, a new RedisItemWriter based on Spring Data Redis is now part of the writers library. This writer can be configured with a RedisTemplate to write items to Redis.
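As a rough sketch of the reader side (the constructor form and the key pattern are assumptions), the reader can be configured with a RedisTemplate and ScanOptions:

```java
import org.springframework.batch.item.redis.RedisItemReader;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.core.ScanOptions;

public class RedisReaderExample {

    public RedisItemReader<String, Customer> redisItemReader(RedisTemplate<String, Customer> redisTemplate) {
        // scan keys matching the (hypothetical) "customer:*" pattern and read their values
        ScanOptions scanOptions = ScanOptions.scanOptions().match("customer:*").build();
        return new RedisItemReader<>(redisTemplate, scanOptions);
    }
}
```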

Automatic configuration of JobRegistryBeanPostProcessor

When configuring a JobOperator in a Spring Batch application, it is necessary to register the jobs in the operator’s JobRegistry. This registration can be done either manually or automatically by adding a JobRegistryBeanPostProcessor bean to the application context.

In this release, the default configuration of Spring Batch (that is, using @EnableBatchProcessing or extending DefaultBatchConfiguration) automatically registers a JobRegistryBeanPostProcessor bean in the application context. This simplifies the configuration process and improves the user experience when using a JobOperator.
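For example, with the default configuration, a job declared as a bean is registered automatically and can be started by name through the JobOperator (the job and parameter names here are hypothetical):

```java
import java.util.Properties;
import org.springframework.batch.core.launch.JobOperator;

public class JobOperatorExample {

    private final JobOperator jobOperator;

    public JobOperatorExample(JobOperator jobOperator) {
        this.jobOperator = jobOperator;
    }

    public void runJob() throws Exception {
        // no JobRegistryBeanPostProcessor bean needs to be declared manually:
        // jobs defined as beans are already registered in the operator's JobRegistry
        Properties parameters = new Properties();
        parameters.setProperty("input.file", "customers.csv");
        jobOperator.start("myJob", parameters);
    }
}
```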

Ability to start a job flow with a decision

When using the XML configuration style, it is possible to start a job flow with a decider thanks to the <decision> element. However, up to version 5.0, it was not possible to achieve the same flow definition with the Java API.

In this release, a new option to start a job flow with a JobExecutionDecider was added to the JobBuilder API. This makes both configuration styles more consistent.
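A minimal sketch, assuming a decider bean that returns the hypothetical statuses "STEP1" and "STEP2":

```java
import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.job.builder.JobBuilder;
import org.springframework.batch.core.job.flow.JobExecutionDecider;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.context.annotation.Bean;

public class DeciderFirstJobConfiguration {

    @Bean
    public Job job(JobRepository jobRepository, JobExecutionDecider decider, Step step1, Step step2) {
        return new JobBuilder("job", jobRepository)
                // the flow starts with the decider rather than a step
                .start(decider)
                .on("STEP1").to(step1)
                .from(decider).on("STEP2").to(step2)
                .end()
                .build();
    }
}
```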

Ability to provide a custom JobKeyGenerator

By default, Spring Batch identifies job instances by calculating an MD5 hash of the identifying job parameters. While it is unlikely that you will need to customize this identification process, Spring Batch still provides a strategy interface that lets you override the default mechanism: the JobKeyGenerator API.

Up to version 5.0, it was not possible to provide a custom key generator without having to create a custom JobRepository and JobExplorer. In this version, it is now possible to provide a custom JobKeyGenerator through the factory beans of JobRepository and JobExplorer.
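As a sketch, a custom generator might key instances off a single (hypothetical) "run.id" parameter; the exact generic parameter and factory-bean setter shown here are assumptions:

```java
import javax.sql.DataSource;
import org.springframework.batch.core.JobKeyGenerator;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.repository.support.JobRepositoryFactoryBean;
import org.springframework.context.annotation.Bean;
import org.springframework.transaction.PlatformTransactionManager;

public class CustomKeyGeneratorConfiguration {

    // identify job instances by a single (hypothetical) "run.id" parameter instead of the MD5 hash
    static class RunIdJobKeyGenerator implements JobKeyGenerator<JobParameters> {
        @Override
        public String generateKey(JobParameters source) {
            return source.getString("run.id");
        }
    }

    @Bean
    public JobRepository jobRepository(DataSource dataSource, PlatformTransactionManager transactionManager) throws Exception {
        JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
        factory.setDataSource(dataSource);
        factory.setTransactionManager(transactionManager);
        // plug the custom key generator into the repository factory bean
        factory.setJobKeyGenerator(new RunIdJobKeyGenerator());
        factory.afterPropertiesSet();
        return factory.getObject();
    }
}
```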

New documentation based on Antora

The reference documentation was updated to use Antora. This update introduces a number of improvements, including but not limited to:

  • Multi-version documentation: it is now possible to navigate from one version to another thanks to the drop-down version list in the left-side menu.

  • Integrated search experience: powered by Algolia, the search experience is now better thanks to the integrated search box at the top left of the page.

  • Improved configuration style toggle: the toggle to switch between the XML and Java configuration styles for code snippets is now located near each sample, rather than at the top of each page.

Improved Getting Started experience

In this release, the getting started experience was improved in many ways:

  • Samples are now packaged by feature and are provided in two configuration styles: XML and Java configuration

  • A new two-minute tutorial was added to the README

  • The Getting Started Guide was updated to the latest and greatest Spring Batch and Spring Boot versions

  • The Issue Reporting Guide was updated with detailed instructions and project templates to help you easily report issues

MongoDB Job Repository (Experimental)

This feature introduces new implementations of JobRepository and JobExplorer backed by MongoDB. This long-awaited feature is now available as experimental and marks the introduction of the first NoSQL meta-data store for Spring Batch.

Please refer to the Spring Batch Experimental repository for more details about this feature.

Composite Item Reader (Experimental)

This feature introduces a composite ItemReader implementation. Similar to the CompositeItemProcessor and CompositeItemWriter, the idea is to delegate reading to a list of item readers in order. This is useful when there is a requirement to read data having the same format from different sources (files, databases, and so on).

Please refer to the Spring Batch Experimental repository for more details about this feature.
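As an illustrative sketch of the delegation-in-order idea only (this is a hypothetical class, not the experimental API itself):

```java
import java.util.Iterator;
import java.util.List;
import org.springframework.batch.item.ItemReader;

// Hypothetical illustration: delegate to a list of readers in order,
// moving to the next delegate when the current one is exhausted.
public class DelegatingCompositeReader<T> implements ItemReader<T> {

    private final Iterator<ItemReader<T>> delegates;
    private ItemReader<T> current;

    public DelegatingCompositeReader(List<ItemReader<T>> delegates) {
        this.delegates = delegates.iterator();
        this.current = this.delegates.hasNext() ? this.delegates.next() : null;
    }

    @Override
    public T read() throws Exception {
        while (current != null) {
            T item = current.read();
            if (item != null) {
                return item;
            }
            // current delegate exhausted, move to the next one
            current = delegates.hasNext() ? delegates.next() : null;
        }
        return null; // all delegates exhausted
    }
}
```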

New Chunk-Oriented Step implementation (Experimental)

This is not a new feature, but rather a new implementation of the chunk-oriented processing model. The goal is to address the reported issues with the current implementation and to provide a new base for the upcoming re-designed concurrency model.

Please refer to the Spring Batch Experimental repository for more details about this new implementation.