© 2018-2022 The original authors.

Copies of this document may be made for your own use and for distribution to others, provided that you do not charge any fee for such copies and further provided that each copy contains this Copyright Notice, whether distributed in print or electronically.

Preface

The Spring Data JDBC project applies core Spring concepts to the development of solutions that use JDBC databases aligned with Domain-driven design principles. We provide a “template” as a high-level abstraction for storing and querying aggregates.

This document is the reference guide for Spring Data JDBC support. It explains the concepts, semantics, and syntax.

This section provides some basic introduction. The rest of the document refers only to Spring Data JDBC features and assumes the user is familiar with SQL and Spring concepts.

1. Learning Spring

Spring Data uses the Spring Framework’s core functionality, including the IoC container, type conversion system, expression language, JMX integration, and portable DAO exception hierarchy.

While you need not know the Spring APIs, understanding the concepts behind them is important. At a minimum, the idea behind Inversion of Control (IoC) should be familiar, and you should be familiar with whatever IoC container you choose to use.

The core functionality of the JDBC Aggregate support can be used directly, with no need to invoke the IoC services of the Spring Container. This is much like JdbcTemplate, which can be used standalone, without any other services of the Spring container. To leverage all the features of Spring Data JDBC, such as the repository support, you need to configure some parts of the library to use Spring.

To learn more about Spring, you can refer to the comprehensive documentation that explains the Spring Framework in detail. There are a lot of articles, blog entries, and books on the subject. See the Spring framework home page for more information.

2. Requirements

The Spring Data JDBC binaries require JDK level 8.0 and above and Spring Framework 6.0.0-M6 and above.

In terms of databases, Spring Data JDBC requires a dialect to abstract common SQL functionality over vendor-specific flavours. Spring Data JDBC includes direct support for the following databases:

  • DB2

  • H2

  • HSQLDB

  • MariaDB

  • Microsoft SQL Server

  • MySQL

  • Oracle

  • Postgres

If you use a different database, your application won’t start up. The dialect section contains further detail on how to proceed in such a case.

3. Additional Help Resources

Learning a new framework is not always straightforward. In this section, we try to provide what we think is an easy-to-follow guide for starting with the Spring Data JDBC module. However, if you encounter issues or you need advice, feel free to use one of the following links:

Community Forum

Spring Data on Stack Overflow is a tag for all Spring Data (not just JDBC) users to share information and help each other. Note that registration is needed only for posting.

Professional Support

Professional, from-the-source support, with guaranteed response time, is available from Pivotal Software, Inc., the company behind Spring Data and Spring.

4. Following Development

For information on the Spring Data JDBC source code repository, nightly builds, and snapshot artifacts, see the Spring Data JDBC homepage. You can help make Spring Data best serve the needs of the Spring community by interacting with developers through the Community on Stack Overflow. If you encounter a bug or want to suggest an improvement, please create a ticket on the Spring Data issue tracker. To stay up to date with the latest news and announcements in the Spring ecosystem, subscribe to the Spring Community Portal. You can also follow the Spring blog or the project team on Twitter (SpringData).

5. Project Metadata

6. New & Noteworthy

This section covers the significant changes for each version.

6.1. What’s New in Spring Data JDBC 2.3

  • Support for streaming results.

  • Support for specifying projection types as the return type or using generics and providing a Class parameter to query methods.

6.2. What’s New in Spring Data JDBC 2.2

6.3. What’s New in Spring Data JDBC 2.1

  • Dialect for Oracle databases.

  • Support for @Value in persistence constructors.

6.4. What’s New in Spring Data JDBC 2.0

  • Optimistic Locking support.

  • Support for PagingAndSortingRepository.

  • Query Derivation.

  • Full Support for H2.

  • All SQL identifiers now get quoted by default.

  • Missing columns no longer cause exceptions.

6.5. What’s New in Spring Data JDBC 1.1

  • @Embedded entities support.

  • Store byte[] as BINARY.

  • Dedicated insert method in the JdbcAggregateTemplate.

  • Read only property support.

6.6. What’s New in Spring Data JDBC 1.0

  • Basic support for CrudRepository.

  • @Query support.

  • MyBatis support.

  • Id generation.

  • Event support.

  • Auditing.

  • CustomConversions.


Reference Documentation

7. JDBC Repositories

This chapter points out the specialties for repository support for JDBC. This builds on the core repository support explained in [repositories]. You should have a sound understanding of the basic concepts explained there.

7.1. Why Spring Data JDBC?

The main persistence API for relational databases in the Java world is certainly JPA, which has its own Spring Data module. Why is there another one?

JPA does a lot of things in order to help the developer. Among other things, it tracks changes to entities. It does lazy loading for you. It lets you map a wide array of object constructs to an equally wide array of database designs.

This is great and makes a lot of things really easy. Just take a look at a basic JPA tutorial. But it often gets really confusing as to why JPA does a certain thing. Also, things that are really simple conceptually get rather difficult with JPA.

Spring Data JDBC aims to be much simpler conceptually, by embracing the following design decisions:

  • If you load an entity, SQL statements get run. Once this is done, you have a completely loaded entity. No lazy loading or caching is done.

  • If you save an entity, it gets saved. If you do not, it does not. There is no dirty tracking and no session.

  • There is a simple model of how to map entities to tables. It probably only works for rather simple cases. If you do not like that, you should code your own strategy. Spring Data JDBC offers only very limited support for customizing the strategy with annotations.

7.2. Domain Driven Design and Relational Databases

All Spring Data modules are inspired by the concepts of “repository”, “aggregate”, and “aggregate root” from Domain Driven Design. These are possibly even more important for Spring Data JDBC, because they are, to some extent, contrary to normal practice when working with relational databases.

An aggregate is a group of entities that is guaranteed to be consistent between atomic changes to it. A classic example is an Order with OrderItems. A property on Order (for example, numberOfItems) remains consistent with the actual number of OrderItems as changes are made.

References across aggregates are not guaranteed to be consistent at all times. They are guaranteed to become consistent eventually.

Each aggregate has exactly one aggregate root, which is one of the entities of the aggregate. The aggregate gets manipulated only through methods on that aggregate root. These are the atomic changes mentioned earlier.

A repository is an abstraction over a persistent store that looks like a collection of all the aggregates of a certain type. For Spring Data in general, this means you want to have one Repository per aggregate root. In addition, for Spring Data JDBC this means that all entities reachable from an aggregate root are considered to be part of that aggregate root. Spring Data JDBC assumes that only the aggregate has a foreign key to a table storing non-root entities of the aggregate and no other entity points toward non-root entities.

In the current implementation, entities referenced from an aggregate root are deleted and recreated by Spring Data JDBC.

You can overwrite the repository methods with implementations that match your style of working and designing your database.

7.3. Getting Started

An easy way to bootstrap setting up a working environment is to create a Spring-based project in STS or from Spring Initializr.

First, you need to set up a running database server. Refer to your vendor documentation on how to configure your database for JDBC access.

To create a Spring project in STS:

  1. Go to File → New → Spring Template Project → Simple Spring Utility Project, and press Yes when prompted. Then enter a project and a package name, such as org.spring.jdbc.example.

  2. Add the following to the pom.xml file’s dependencies element:

    <dependencies>
    
      <!-- other dependency elements omitted -->
    
      <dependency>
        <groupId>org.springframework.data</groupId>
        <artifactId>spring-data-jdbc</artifactId>
        <version>3.0.0-M6</version>
      </dependency>
    
    </dependencies>
  3. Change the version of Spring in the pom.xml to be

    <spring.framework.version>6.0.0-M6</spring.framework.version>
  4. Add the following location of the Spring Milestone repository for Maven to your pom.xml so that it is at the same level as your <dependencies/> element:

    <repositories>
      <repository>
        <id>spring-milestone</id>
        <name>Spring Maven MILESTONE Repository</name>
        <url>https://repo.spring.io/libs-milestone</url>
      </repository>
    </repositories>

The repository is also browseable here.

7.4. Examples Repository

There is a GitHub repository with several examples that you can download and play around with to get a feel for how the library works.

7.5. Annotation-based Configuration

The Spring Data JDBC repositories support can be activated by an annotation through Java configuration, as the following example shows:

Example 1. Spring Data JDBC repositories using Java configuration
@Configuration
@EnableJdbcRepositories                                                                (1)
class ApplicationConfig extends AbstractJdbcConfiguration {                            (2)

    @Bean
    DataSource dataSource() {                                                         (3)

        EmbeddedDatabaseBuilder builder = new EmbeddedDatabaseBuilder();
        return builder.setType(EmbeddedDatabaseType.HSQL).build();
    }

    @Bean
    NamedParameterJdbcOperations namedParameterJdbcOperations(DataSource dataSource) { (4)
        return new NamedParameterJdbcTemplate(dataSource);
    }

    @Bean
    TransactionManager transactionManager(DataSource dataSource) {                     (5)
        return new DataSourceTransactionManager(dataSource);
    }
}
1 @EnableJdbcRepositories creates implementations for interfaces derived from Repository
2 AbstractJdbcConfiguration provides various default beans required by Spring Data JDBC
3 Creates a DataSource connecting to a database. This is required by the following two bean methods.
4 Creates the NamedParameterJdbcOperations used by Spring Data JDBC to access the database.
5 Spring Data JDBC utilizes the transaction management provided by Spring JDBC.

The configuration class in the preceding example sets up an embedded HSQL database by using the EmbeddedDatabaseBuilder API of spring-jdbc. The DataSource is then used to set up NamedParameterJdbcOperations and a TransactionManager. We finally activate Spring Data JDBC repositories by using @EnableJdbcRepositories. If no base package is configured, it uses the package in which the configuration class resides. Extending AbstractJdbcConfiguration ensures various beans get registered. Overriding its methods can be used to customize the setup (see below).

This configuration can be further simplified by using Spring Boot. With Spring Boot a DataSource is sufficient once the starter spring-boot-starter-data-jdbc is included in the dependencies. Everything else is done by Spring Boot.

There are a couple of things one might want to customize in this setup.

7.5.1. Dialects

Spring Data JDBC uses implementations of the interface Dialect to encapsulate behavior that is specific to a database or its JDBC driver. By default, the AbstractJdbcConfiguration tries to determine the database in use and register the correct Dialect. This behavior can be changed by overriding jdbcDialect(NamedParameterJdbcOperations).

If you use a database for which no dialect is available, your application won’t start up. In that case, you’ll have to ask your vendor to provide a Dialect implementation. Alternatively, you can:

  1. Implement your own Dialect.

  2. Implement a JdbcDialectProvider returning the Dialect (see the sketch after this list).

  3. Register the provider by creating a spring.factories resource under META-INF and performing the registration by adding a line:
    org.springframework.data.jdbc.repository.config.DialectResolver$JdbcDialectProvider=<fully qualified name of your JdbcDialectProvider>
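
The following is a minimal sketch of steps 1 and 2. It assumes a hypothetical database product called Acme DB and a hand-written AcmeDialect (your own Dialect implementation); the provider inspects the JDBC Connection and returns the matching Dialect, if any:

import java.sql.Connection;
import java.sql.SQLException;
import java.util.Optional;

import org.springframework.data.jdbc.repository.config.DialectResolver;
import org.springframework.data.relational.core.dialect.Dialect;

public class AcmeDialectProvider implements DialectResolver.JdbcDialectProvider {

    @Override
    public Optional<Dialect> getDialect(Connection connection) {
        try {
            // Match on the product name reported by the JDBC driver.
            if ("Acme DB".equals(connection.getMetaData().getDatabaseProductName())) {
                return Optional.of(new AcmeDialect()); // hypothetical Dialect implementation
            }
        } catch (SQLException e) {
            // fall through and let other providers try
        }
        return Optional.empty();
    }
}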

7.6. Persisting Entities

Saving an aggregate can be performed with the CrudRepository.save(…) method. If the aggregate is new, this results in an insert for the aggregate root, followed by insert statements for all directly or indirectly referenced entities.

If the aggregate root is not new, all referenced entities get deleted, the aggregate root gets updated, and all referenced entities get inserted again. Note that whether an instance is new is part of the instance’s state.

This approach has some obvious downsides. If only a few of the referenced entities have actually changed, the deletion and insertion is wasteful. While this process could, and probably will, be improved, there are certain limitations to what Spring Data JDBC can offer. It does not know the previous state of an aggregate, so any update process always has to take whatever it finds in the database and make sure it converts it to whatever is the state of the entity passed to the save method.


7.6.1. Supported Types in Your Entity

The properties of the following types are currently supported:

  • All primitive types and their boxed types (int, float, Integer, Float, and so on)

  • Enums get mapped to their name.

  • String

  • java.util.Date, java.time.LocalDate, java.time.LocalDateTime, and java.time.LocalTime

  • Arrays and Collections of the types mentioned above can be mapped to columns of array type if your database supports that.

  • Anything your database driver accepts.

  • References to other entities. They are considered a one-to-one relationship, or an embedded type. It is optional for one-to-one relationship entities to have an id attribute. The table of the referenced entity is expected to have an additional column named the same as the table of the referencing entity. You can change this name by implementing NamingStrategy.getReverseColumnName(PersistentPropertyPathExtension path). Embedded entities do not need an id. If one is present it gets ignored.

  • Set<some entity> is considered a one-to-many relationship. The table of the referenced entity is expected to have an additional column named the same as the table of the referencing entity. You can change this name by implementing NamingStrategy.getReverseColumnName(PersistentPropertyPathExtension path).

  • Map<simple type, some entity> is considered a qualified one-to-many relationship. The table of the referenced entity is expected to have two additional columns: One named the same as the table of the referencing entity for the foreign key and one with the same name and an additional _key suffix for the map key. You can change this behavior by implementing NamingStrategy.getReverseColumnName(PersistentPropertyPathExtension path) and NamingStrategy.getKeyColumn(RelationalPersistentProperty property), respectively. Alternatively you may annotate the attribute with @MappedCollection(idColumn="your_column_name", keyColumn="your_key_column_name")

  • List<some entity> is mapped as a Map<Integer, some entity>.

The handling of referenced entities is limited. This is based on the idea of aggregate roots as described above. If you reference another entity, that entity is, by definition, part of your aggregate. So, if you remove the reference, the previously referenced entity gets deleted. This also means references are 1-1 or 1-n, but not n-1 or n-m.

If you have n-1 or n-m references, you are, by definition, dealing with two separate aggregates. References between those may be encoded as simple id values, which map properly with Spring Data JDBC. A better way to encode these is to make them instances of AggregateReference. An AggregateReference is a wrapper around an id value which marks that value as a reference to a different aggregate. Also, the type of that aggregate is encoded in a type parameter.

Example 2. Declaring and setting an AggregateReference
class Person {
	@Id long id;
	AggregateReference<Person, Long> bestFriend;
}

// ...

Person p1, p2; // some initialization omitted

p1.bestFriend = AggregateReference.to(p2.id);
  • Types for which you registered suitable custom conversions (see Custom Conversions below).

7.6.2. NamingStrategy

When you use the standard implementations of CrudRepository that Spring Data JDBC provides, they expect a certain table structure. You can tweak that by providing a NamingStrategy in your application context.
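
The following is a minimal sketch of such a bean. The TBL_ prefix is a made-up convention; NamingStrategy provides defaults for all its methods, so you only override what you need:

@Bean
NamingStrategy namingStrategy() {
    return new NamingStrategy() {

        @Override
        public String getTableName(Class<?> type) {
            // Hypothetical convention: prefix every derived table name with "TBL_".
            return "TBL_" + NamingStrategy.super.getTableName(type);
        }
    };
}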

7.6.3. Custom table names

When the table names derived by the NamingStrategy do not match your database table names, you can customize the name with the @Table annotation. The element value of this annotation provides the custom table name. The following example maps the MyEntity class to the CUSTOM_TABLE_NAME table in the database:

@Table("CUSTOM_TABLE_NAME")
class MyEntity {
    @Id
    Integer id;

    String name;
}

7.6.4. Custom column names

When the column names derived by the NamingStrategy do not match your database column names, you can customize the name with the @Column annotation. The element value of this annotation provides the custom column name. The following example maps the name property of the MyEntity class to the CUSTOM_COLUMN_NAME column in the database:

class MyEntity {
    @Id
    Integer id;

    @Column("CUSTOM_COLUMN_NAME")
    String name;
}

The @MappedCollection annotation can be used on a reference type (one-to-one relationship) or on Sets, Lists, and Maps (one-to-many relationship). The idColumn element of the annotation provides a custom name for the foreign key column referencing the id column in the other table. In the following example, the table corresponding to the MySubEntity class has a NAME column and a CUSTOM_MY_ENTITY_ID_COLUMN_NAME column that holds the id of the owning MyEntity for the relationship:

class MyEntity {
    @Id
    Integer id;

    @MappedCollection(idColumn = "CUSTOM_MY_ENTITY_ID_COLUMN_NAME")
    Set<MySubEntity> subEntities;
}

class MySubEntity {
    String name;
}

When using List and Map, you must have an additional column for the position of an element in the List or the key value of the entity in the Map. This additional column name may be customized with the keyColumn element of the @MappedCollection annotation:

class MyEntity {
    @Id
    Integer id;

    @MappedCollection(idColumn = "CUSTOM_COLUMN_NAME", keyColumn = "CUSTOM_KEY_COLUMN_NAME")
    List<MySubEntity> name;
}

class MySubEntity {
    String name;
}

7.6.5. Embedded entities

Embedded entities are used to have value objects in your Java data model, even if there is only one table in your database. In the following example, MyEntity is mapped with the @Embedded annotation. The consequence of this is that the database is expected to contain a table my_entity with the two columns id and name (from the EmbeddedEntity class).

However, if the name column is actually null within the result set, the entire property embeddedEntity is set to null, according to the onEmpty setting of @Embedded, which nulls objects when all nested properties are null.
In contrast to this behavior, USE_EMPTY tries to create a new instance using either a default constructor or one that accepts nullable parameter values from the result set.

Example 3. Sample Code of embedding objects
class MyEntity {

    @Id
    Integer id;

    @Embedded(onEmpty = USE_NULL) (1)
    EmbeddedEntity embeddedEntity;
}

class EmbeddedEntity {
    String name;
}
1 Nulls embeddedEntity if name is null. Use USE_EMPTY to instantiate embeddedEntity with a potential null value for the name property.

If you need a value object multiple times in an entity, this can be achieved with the optional prefix element of the @Embedded annotation. This element represents a prefix that is prepended to each column name of the embedded object.

Make use of the shortcuts @Embedded.Nullable & @Embedded.Empty for @Embedded(onEmpty = USE_NULL) and @Embedded(onEmpty = USE_EMPTY) to reduce verbosity and simultaneously set JSR-305 @javax.annotation.Nonnull accordingly.

class MyEntity {

    @Id
    Integer id;

    @Embedded.Nullable (1)
    EmbeddedEntity embeddedEntity;
}
1 Shortcut for @Embedded(onEmpty = USE_NULL).

Embedded entities containing a Collection or a Map are always considered non-empty, since they contain at least the empty collection or map. Such an entity is therefore never null, even when using @Embedded(onEmpty = USE_NULL).


7.6.6. ID Generation

Spring Data JDBC uses the ID to identify entities. The ID of an entity must be annotated with Spring Data’s @Id annotation.

When your database has an auto-increment column for the ID column, the generated value gets set in the entity after inserting it into the database.

One important constraint is that, after saving an entity, the entity must not be new any more. Note that whether an entity is new is part of the entity’s state. With auto-increment columns, this happens automatically, because the ID gets set by Spring Data with the value from the ID column. If you are not using auto-increment columns, you can use a BeforeConvert listener, which sets the ID of the entity (covered later in this document).
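
The following is a minimal sketch of such id generation, assuming a hypothetical MyAggregate with a String id and accessor methods (BeforeConvertCallback is described in detail in the callbacks section later in this chapter):

@Bean
BeforeConvertCallback<MyAggregate> idGenerator() {
    return aggregate -> {
        // The is-new check has already happened at this point, so setting the id here is safe.
        if (aggregate.getId() == null) {
            aggregate.setId(UUID.randomUUID().toString());
        }
        return aggregate;
    };
}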

7.6.7. Read Only Properties

Attributes annotated with @ReadOnlyProperty will not be written to the database by Spring Data JDBC, but they will be read when an entity gets loaded.

Spring Data JDBC will not automatically reload an entity after writing it. Therefore, you have to reload it explicitly if you want to see data that was generated in the database for such columns.

If the annotated attribute is an entity or collection of entities, it is represented by one or more separate rows in separate tables. Spring Data JDBC will not perform any insert, delete or update for these rows.
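
A minimal sketch, assuming a hypothetical entity whose created_at column is populated by the database (for example, through a DEFAULT clause or a trigger):

class WithReadOnlyField {

    @Id
    Long id;

    String name;

    @ReadOnlyProperty
    Instant createdAt; // read when loading, never written by Spring Data JDBC
}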

7.6.8. Optimistic Locking

Spring Data JDBC supports optimistic locking by means of a numeric attribute that is annotated with @Version on the aggregate root. Whenever Spring Data JDBC saves an aggregate with such a version attribute, two things happen: The update statement for the aggregate root contains a WHERE clause that checks that the version stored in the database is actually unchanged. If it is not, an OptimisticLockingFailureException gets thrown. Also, the version attribute gets increased both in the entity and in the database, so a concurrent action will notice the change and throw an OptimisticLockingFailureException, if applicable, as described above.

This process also applies to inserting new aggregates, where a null or 0 version indicates a new instance, and the increased version afterwards marks the instance as no longer new. This works rather nicely with cases where the id gets generated during object construction, for example when UUIDs are used.

During deletes, the version check also applies, but the version is not increased.
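
A minimal sketch of a versioned aggregate root (Product is a hypothetical example):

class Product {

    @Id
    Long id;

    @Version
    Long version; // null or 0 marks the instance as new; increased on every save

    String name;
}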

7.7. Query Methods

This section offers some specific information about the implementation and use of Spring Data JDBC.

Most of the data access operations you usually trigger on a repository result in a query being run against the databases. Defining such a query is a matter of declaring a method on the repository interface, as the following example shows:

Example 4. PersonRepository with query methods
interface PersonRepository extends PagingAndSortingRepository<Person, String> {

  List<Person> findByFirstname(String firstname);                                   (1)

  List<Person> findByFirstnameOrderByLastname(String firstname, Pageable pageable); (2)

  Slice<Person> findByLastname(String lastname, Pageable pageable);                 (3)

  Page<Person> findByLastname(String lastname, Pageable pageable);                  (4)

  Person findByFirstnameAndLastname(String firstname, String lastname);             (5)

  Person findFirstByLastname(String lastname);                                      (6)

  @Query("SELECT * FROM person WHERE lastname = :lastname")
  List<Person> findByLastname(String lastname);                                     (7)
  @Query("SELECT * FROM person WHERE lastname = :lastname")
  Stream<Person> streamByLastname(String lastname);                                     (8)
}
1 The method shows a query for all people with the given firstname. The query is derived by parsing the method name for constraints that can be concatenated with And and Or. Thus, the method name results in a query expression of SELECT … FROM person WHERE firstname = :firstname.
2 Use Pageable to pass offset and sorting parameters to the database.
3 Return a Slice<Person>. Selects LIMIT+1 rows to determine whether there’s more data to consume. ResultSetExtractor customization is not supported.
4 Run a paginated query returning Page<Person>. Selects only data within the given page bounds and potentially a count query to determine the total count. ResultSetExtractor customization is not supported.
5 Find a single entity for the given criteria. It completes with IncorrectResultSizeDataAccessException on non-unique results.
6 In contrast to <3>, the first entity is always emitted even if the query yields more results.
7 The findByLastname method shows a query for all people with the given lastname.
8 The streamByLastname method returns a Stream, which makes values available as soon as they are returned from the database.

The following table shows the keywords that are supported for query methods:

Table 1. Supported keywords for query methods

Keyword                          Sample                                      Logical result

After                            findByBirthdateAfter(Date date)             birthdate > date
GreaterThan                      findByAgeGreaterThan(int age)               age > age
GreaterThanEqual                 findByAgeGreaterThanEqual(int age)          age >= age
Before                           findByBirthdateBefore(Date date)            birthdate < date
LessThan                         findByAgeLessThan(int age)                  age < age
LessThanEqual                    findByAgeLessThanEqual(int age)             age <= age
Between                          findByAgeBetween(int from, int to)          age BETWEEN from AND to
NotBetween                       findByAgeNotBetween(int from, int to)       age NOT BETWEEN from AND to
In                               findByAgeIn(Collection<Integer> ages)       age IN (age1, age2, ageN)
NotIn                            findByAgeNotIn(Collection ages)             age NOT IN (age1, age2, ageN)
IsNotNull, NotNull               findByFirstnameNotNull()                    firstname IS NOT NULL
IsNull, Null                     findByFirstnameNull()                       firstname IS NULL
Like, StartingWith, EndingWith   findByFirstnameLike(String name)            firstname LIKE name
NotLike, IsNotLike               findByFirstnameNotLike(String name)         firstname NOT LIKE name
Containing on String             findByFirstnameContaining(String name)      firstname LIKE '%' + name + '%'
NotContaining on String          findByFirstnameNotContaining(String name)   firstname NOT LIKE '%' + name + '%'
(No keyword)                     findByFirstname(String name)                firstname = name
Not                              findByFirstnameNot(String name)             firstname != name
IsTrue, True                     findByActiveIsTrue()                        active IS TRUE
IsFalse, False                   findByActiveIsFalse()                       active IS FALSE

Query derivation is limited to properties that can be used in a WHERE clause without using joins.

7.7.1. Query Lookup Strategies

The JDBC module supports defining a query manually as a String in a @Query annotation or as named query in a property file.

Deriving a query from the name of the method is currently limited to simple properties, that is, properties present directly in the aggregate root. Also, only select queries are supported by this approach.

7.7.2. Using @Query

The following example shows how to use @Query to declare a query method:

Example 5. Declare a query method by using @Query
interface UserRepository extends CrudRepository<User, Long> {

  @Query("select firstName, lastName from User u where u.emailAddress = :email")
  User findByEmailAddress(@Param("email") String email);
}

For converting the query result into entities, the same RowMapper is used by default as for the queries Spring Data JDBC generates itself. The query you provide must match the format the RowMapper expects. Columns for all properties that are used in the constructor of an entity must be provided. Columns for properties that get set via setter, wither, or field access are optional. Properties that don’t have a matching column in the result will not be set. The query is used for populating the aggregate root, embedded entities, and one-to-one relationships, including arrays of primitive types, which get stored and loaded as SQL array types. Separate queries are generated for maps, lists, sets, and arrays of entities.

Spring fully supports Java 8’s parameter name discovery based on the -parameters compiler flag. By using this flag in your build as an alternative to debug information, you can omit the @Param annotation for named parameters.
Spring Data JDBC supports only named parameters.

7.7.3. Named Queries

If no query is given in an annotation, as described in the previous section, Spring Data JDBC tries to locate a named query. There are two ways in which the name of the query can be determined: The default is to take the domain class of the query, that is, the aggregate root of the repository, take its simple name, and append the name of the method, separated by a dot. Alternatively, the @Query annotation has a name attribute, which can be used to specify the name of the query to be looked up.

Named queries are expected to be provided in the property file META-INF/jdbc-named-queries.properties on the classpath.

The location of that file may be changed by setting a value to @EnableJdbcRepositories.namedQueriesLocation.
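
For example, for the Person aggregate and the findByLastname method shown earlier, a matching entry in META-INF/jdbc-named-queries.properties could look like this (a sketch; the table layout is assumed):

Person.findByLastname=SELECT * FROM person WHERE lastname = :lastname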

Streaming Results

When you specify Stream as the return type of a query method, Spring Data JDBC returns elements as soon as they become available. When dealing with large amounts of data, this is suitable for reducing latency and memory requirements.

The stream contains an open connection to the database. To avoid memory leaks, that connection needs to be closed eventually, by closing the stream. The recommended way to do that is a try-with-resources block. It also means that, once the connection to the database is closed, the stream cannot obtain further elements and likely throws an exception.
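
For example, using the streamByLastname method declared in Example 4 (personRepository and LOG are assumed to be available):

try (Stream<Person> people = personRepository.streamByLastname("Smith")) {
    // The connection stays open while the stream is consumed and gets closed with it.
    people.forEach(person -> LOG.info("{}", person));
}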

Custom RowMapper

You can configure which RowMapper to use, either by using the @Query(rowMapperClass = …) annotation or by registering a RowMapperMap bean and registering a RowMapper per method return type. The following example shows how to register DefaultQueryMappingConfiguration:

@Bean
QueryMappingConfiguration rowMappers() {
  return new DefaultQueryMappingConfiguration()
    .register(Person.class, new PersonRowMapper())
    .register(Address.class, new AddressRowMapper());
}
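
The registered mappers are plain Spring JDBC RowMapper implementations. The following is a minimal sketch of the PersonRowMapper referenced above; the column names and the Person constructor are assumptions:

class PersonRowMapper implements RowMapper<Person> {

    @Override
    public Person mapRow(ResultSet rs, int rowNum) throws SQLException {
        // Map one row of the result set to one Person instance.
        return new Person(rs.getLong("id"), rs.getString("firstname"), rs.getString("lastname"));
    }
}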

When determining which RowMapper to use for a method, the following steps are followed, based on the return type of the method:

  1. If the type is a simple type, no RowMapper is used.

    Instead, the query is expected to return a single row with a single column, and a conversion to the return type is applied to that value.

  2. The entity classes in the QueryMappingConfiguration are iterated until one is found that is a superclass or interface of the return type in question. The RowMapper registered for that class is used.

    Iterating happens in the order of registration, so make sure to register more general types after specific ones.

If applicable, wrapper types such as collections or Optional are unwrapped. Thus, a return type of Optional<Person> uses the Person type in the preceding process.

Using a custom RowMapper through QueryMappingConfiguration, @Query(rowMapperClass=…), or a custom ResultSetExtractor disables Entity Callbacks and Lifecycle Events as the result mapping can issue its own events/callbacks if needed.

Modifying Query

You can mark a query as a modifying query by using @Modifying on a query method, as the following example shows:

@Modifying
@Query("UPDATE DUMMYENTITY SET name = :name WHERE id = :id")
boolean updateName(@Param("id") Long id, @Param("name") String name);

You can specify the following return types:

  • void

  • int (updated record count)

  • boolean (whether a record was updated)

Modifying queries are executed directly against the database. No events or callbacks get called. Consequently, fields with auditing annotations do not get updated unless the annotated query itself updates them.


7.8. Running an Example

In Spring Data JDBC, you can use Query by Example with Repositories, as shown in the following example:

Example 6. Query by Example using a Repository
public interface PersonRepository
        extends CrudRepository<Person, String>,
        QueryByExampleExecutor<Person> { … }

public class PersonService {

  @Autowired PersonRepository personRepository;

  public Iterable<Person> findPeople(Person probe) {
    return personRepository.findAll(Example.of(probe));
  }
}
Currently, only SingularAttribute properties can be used for property matching.

The property specifier accepts property names (such as firstname and lastname). You can navigate by chaining properties together with dots (address.city). You can also tune it with matching options and case sensitivity.
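
The following sketch tunes the lastname match to a case-insensitive prefix match (it assumes the PersonRepository from Example 6 and setter access on Person):

Person probe = new Person();
probe.setLastname("Smith");

ExampleMatcher matcher = ExampleMatcher.matching()
    .withMatcher("lastname", match -> match.startsWith().ignoreCase());

Iterable<Person> people = personRepository.findAll(Example.of(probe, matcher));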

The following table shows the various StringMatcher options that you can use and the result of using them on a field named firstname:

Table 2. StringMatcher options

Matching                        Logical result

DEFAULT (case-sensitive)        firstname = ?0
DEFAULT (case-insensitive)      LOWER(firstname) = LOWER(?0)
EXACT (case-sensitive)          firstname = ?0
EXACT (case-insensitive)        LOWER(firstname) = LOWER(?0)
STARTING (case-sensitive)       firstname like ?0 + '%'
STARTING (case-insensitive)     LOWER(firstname) like LOWER(?0) + '%'
ENDING (case-sensitive)         firstname like '%' + ?0
ENDING (case-insensitive)       LOWER(firstname) like '%' + LOWER(?0)
CONTAINING (case-sensitive)     firstname like '%' + ?0 + '%'
CONTAINING (case-insensitive)   LOWER(firstname) like '%' + LOWER(?0) + '%'


7.9. MyBatis Integration

The CRUD operations and query methods can be delegated to MyBatis. This section describes how to configure Spring Data JDBC to integrate with MyBatis and which conventions to follow to hand over the running of the queries as well as the mapping to the library.

7.9.1. Configuration

The easiest way to properly plug MyBatis into Spring Data JDBC is by importing MyBatisJdbcConfiguration into your application configuration:

@Configuration
@EnableJdbcRepositories
@Import(MyBatisJdbcConfiguration.class)
class Application {

  @Bean
  SqlSessionFactoryBean sqlSessionFactoryBean() {
    // Configure MyBatis here
  }
}

As you can see, all you need to declare is a SqlSessionFactoryBean, as MyBatisJdbcConfiguration relies on a SqlSession bean to be available in the ApplicationContext eventually.

7.9.2. Usage conventions

For each operation in CrudRepository, Spring Data JDBC runs multiple statements. If there is a SqlSessionFactory in the application context, Spring Data checks, for each step, whether the SessionFactory offers a statement. If one is found, that statement (including its configured mapping to an entity) is used.

The name of the statement is constructed by concatenating the fully qualified name of the entity type with Mapper. and a String determining the kind of statement. For example, if an instance of org.example.User is to be inserted, Spring Data JDBC looks for a statement named org.example.UserMapper.insert.
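
A sketch of a matching mapper entry, assuming a hypothetical org.example.User entity with a name property and a users table; since the statement receives a MyBatisContext (see below), properties of the saved instance are reachable through instance:

<mapper namespace="org.example.UserMapper">

  <!-- looked up by Spring Data JDBC when an org.example.User gets inserted -->
  <insert id="insert">
    INSERT INTO users (name) VALUES (#{instance.name})
  </insert>

</mapper>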

When the statement is run, an instance of MyBatisContext gets passed as an argument, which makes various arguments available to the statement.

The following list describes the available MyBatis statements. For each statement, “Triggered by” names the CrudRepository methods that might trigger it, and the attributes listed are those available in the MyBatisContext.

insert

  Purpose: Inserts a single entity. This also applies to entities referenced by the aggregate root.
  Triggered by: save, saveAll.
  Attributes:
    getInstance: the instance to be saved.
    getDomainType: the type of the entity to be saved.
    get(<key>): the ID of the referencing entity, where <key> is the name of the back reference column provided by the NamingStrategy.

update

  Purpose: Updates a single entity. This also applies to entities referenced by the aggregate root.
  Triggered by: save, saveAll.
  Attributes:
    getInstance: the instance to be saved.
    getDomainType: the type of the entity to be saved.

delete

  Purpose: Deletes a single entity.
  Triggered by: delete, deleteById.
  Attributes:
    getId: the ID of the instance to be deleted.
    getDomainType: the type of the entity to be deleted.

deleteAll-<propertyPath>

  Purpose: Deletes all entities referenced by any aggregate root of the type used as prefix with the given property path. Note that the type used for prefixing the statement name is the name of the aggregate root, not that of the entity to be deleted.
  Triggered by: deleteAll.
  Attributes:
    getDomainType: the types of the entities to be deleted.

deleteAll

  Purpose: Deletes all aggregate roots of the type used as the prefix.
  Triggered by: deleteAll.
  Attributes:
    getDomainType: the type of the entities to be deleted.

delete-<propertyPath>

  Purpose: Deletes all entities referenced by an aggregate root with the given propertyPath.
  Triggered by: deleteById.
  Attributes:
    getId: the ID of the aggregate root for which referenced entities are to be deleted.
    getDomainType: the type of the entities to be deleted.

findById

  Purpose: Selects an aggregate root by ID.
  Triggered by: findById.
  Attributes:
    getId: the ID of the entity to load.
    getDomainType: the type of the entity to load.

findAll

  Purpose: Selects all aggregate roots.
  Triggered by: findAll.
  Attributes:
    getDomainType: the type of the entity to load.

findAllById

  Purpose: Selects a set of aggregate roots by ID values.
  Triggered by: findAllById.
  Attributes:
    getId: a list of ID values of the entities to load.
    getDomainType: the type of the entity to load.

findAllByProperty-<propertyName>

  Purpose: Selects a set of entities that is referenced by another entity. The type of the referencing entity is used for the prefix. The referenced entity’s type is used as the suffix. This statement is deprecated: use findAllByPath instead.
  Triggered by: all find* methods, if no query is defined for findAllByPath.
  Attributes:
    getId: the ID of the entity referencing the entities to be loaded.
    getDomainType: the type of the entity to load.

findAllByPath-<propertyPath>

  Purpose: Selects a set of entities that is referenced by another entity via a property path.
  Triggered by: all find* methods.
  Attributes:
    getIdentifier: the Identifier holding the ID of the aggregate root plus the keys and list indexes of all path elements.
    getDomainType: the type of the entity to load.

findAllSorted

  Purpose: Selects all aggregate roots, sorted.
  Triggered by: findAll(Sort).
  Attributes:
    getSort: the sorting specification.

findAllPaged

  Purpose: Selects a page of aggregate roots, optionally sorted.
  Triggered by: findAll(Page).
  Attributes:
    getPageable: the paging specification.

count

  Purpose: Counts the number of aggregate roots of the type used as prefix.
  Triggered by: count.
  Attributes:
    getDomainType: the type of the aggregate roots to count.

7.10. Lifecycle Events

Spring Data JDBC triggers events that get published to any matching ApplicationListener beans in the application context. Events and callbacks only get triggered for aggregate roots. If you want to process non-root entities, you need to do that through a listener for the containing aggregate root.

Entity lifecycle events can be costly, and you may notice a change in the performance profile when loading large result sets. You can disable lifecycle events on the Template API.
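
A sketch, assuming you work with a JdbcAggregateTemplate bean directly and that the setEntityLifecycleEventsEnabled switch is available in your version:

// No lifecycle events get published for operations on this template anymore.
jdbcAggregateTemplate.setEntityLifecycleEventsEnabled(false);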

For example, the following listener gets invoked before an aggregate gets saved:

@Bean
ApplicationListener<BeforeSaveEvent<Object>> loggingSaves() {

	return event -> {

		Object entity = event.getEntity();
		LOG.info("{} is getting saved.", entity);
	};
}

If you want to handle events only for a specific domain type, you may derive your listener from AbstractRelationalEventListener and override one or more of the onXXX methods, where XXX stands for an event type. Callback methods only get invoked for events related to the domain type and its subtypes, so no further casting is required.

class PersonLoadListener extends AbstractRelationalEventListener<Person> {

	@Override
	protected void onAfterLoad(AfterLoadEvent<Person> personLoad) {
		LOG.info(personLoad.getEntity());
	}
}

The following table describes the available events. For more details about the exact relation between the process steps, see the description of the available callbacks, which map 1:1 to events.

Table 3. Available events
Event When It Is Published

BeforeDeleteEvent

Before an aggregate root gets deleted.

AfterDeleteEvent

After an aggregate root gets deleted.

BeforeConvertEvent

Before an aggregate root gets converted into a plan for executing SQL statements, but after the decision was made whether the aggregate is new or not, that is, whether an update or an insert is in order. This is the correct event if you want to set an id programmatically.

BeforeSaveEvent

Before an aggregate root gets saved (that is, inserted or updated, but after the decision about whether it gets inserted or updated was made). Do not use this for creating ids for new aggregates. Use BeforeConvertEvent or, even better, BeforeConvertCallback instead.

AfterSaveEvent

After an aggregate root gets saved (that is, inserted or updated).

AfterLoadEvent

After an aggregate root gets created from a database ResultSet and all its properties get set. Note: This event is deprecated. Use AfterConvertEvent instead.

AfterConvertEvent

After an aggregate root gets created from a database ResultSet and all its properties get set.

Lifecycle events depend on an ApplicationEventMulticaster, which, in the case of the SimpleApplicationEventMulticaster, can be configured with a TaskExecutor and therefore gives no guarantees about when an event gets processed.

7.10.1. Store-specific EntityCallbacks

Spring Data JDBC uses the EntityCallback API for its auditing support and reacts on the callbacks listed in the following table.

Table 4. Process Steps and Callbacks of the Different Processes Performed by Spring Data JDBC

Delete

  BeforeDeleteCallback: invoked before the actual deletion. Afterwards, the aggregate root and all the entities of that aggregate get removed from the database.

  AfterDeleteCallback: invoked after an aggregate gets deleted.

Save

  First, it is determined whether an insert or an update of the aggregate is to be performed, depending on whether it is new or not.

  BeforeConvertCallback: this is the correct callback if you want to set an id programmatically. In the previous step, new aggregates got detected as such, and an id generated in this step would be used in the following step.

  Next, the aggregate gets converted to an aggregate change, that is, a sequence of SQL statements to be executed against the database. In this step, the decision is made whether an id is provided by the aggregate or whether the id is still empty and is expected to be generated by the database.

  BeforeSaveCallback: changes made to the aggregate root may get considered, but the decision whether an id value gets sent to the database was already made in the previous step.

  Finally, the SQL statements determined above get executed against the database.

  AfterSaveCallback: invoked after an aggregate root gets saved (that is, inserted or updated).

Load

  The aggregate gets loaded, using one or more SQL queries, and constructed from the result set.

  AfterConvertCallback: invoked after an aggregate root was created from the result set and all its properties were set.

We encourage the use of callbacks over events since they support the use of immutable classes and therefore are more powerful and versatile than events.


7.11. Custom Conversions

Spring Data JDBC allows registration of custom converters to influence how values are mapped in the database. Currently, converters are only applied on property-level.

7.11.1. Writing a Property by Using a Registered Spring Converter

The following example shows an implementation of a Converter that converts from a Boolean object to a String value:

import org.springframework.core.convert.converter.Converter;

@WritingConverter
public class BooleanToStringConverter implements Converter<Boolean, String> {

    @Override
    public String convert(Boolean source) {
        return source != null && source ? "T" : "F";
    }
}

There are a couple of things to notice here: Boolean and String are both simple types, hence Spring Data requires a hint about the direction in which this converter should apply (reading or writing). By annotating this converter with @WritingConverter, you instruct Spring Data to write every Boolean property as a String to the database.

7.11.2. Reading by Using a Spring Converter

The following example shows an implementation of a Converter that converts from a String to a Boolean value:

@ReadingConverter
public class StringToBooleanConverter implements Converter<String, Boolean> {

    @Override
    public Boolean convert(String source) {
        return source != null && source.equalsIgnoreCase("T") ? Boolean.TRUE : Boolean.FALSE;
    }
}

There are a couple of things to notice here: String and Boolean are both simple types, hence Spring Data requires a hint about the direction in which this converter should apply (reading or writing). By annotating this converter with @ReadingConverter, you instruct Spring Data to convert every String value from the database that should be assigned to a Boolean property.

7.11.3. Registering Spring Converters with the JdbcConverter

class MyJdbcConfiguration extends AbstractJdbcConfiguration {

    // …

    @Override
    protected List<?> userConverters() {
        return Arrays.asList(new BooleanToStringConverter(), new StringToBooleanConverter());
    }
}
In previous versions of Spring Data JDBC it was recommended to directly override AbstractJdbcConfiguration.jdbcCustomConversions(). This is no longer necessary, or even recommended, since that method assembles conversions intended for all databases, conversions registered by the Dialect used, and conversions registered by the user. If you are migrating from an older version of Spring Data JDBC and have overridden AbstractJdbcConfiguration.jdbcCustomConversions(), conversions from your Dialect will not get registered.

7.11.4. JdbcValue

Value conversion uses JdbcValue to enrich values propagated to JDBC operations with a java.sql.Types type. Register a custom write converter if you need to specify a JDBC-specific type instead of using type derivation. This converter should convert the value to JdbcValue which has a field for the value and for the actual JDBCType.
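
A minimal sketch of such a converter; storing a Duration as an ISO-8601 string, explicitly typed as VARCHAR, is a made-up example:

import java.sql.JDBCType;
import java.time.Duration;

import org.springframework.core.convert.converter.Converter;
import org.springframework.data.convert.WritingConverter;
import org.springframework.data.jdbc.core.mapping.JdbcValue;

@WritingConverter
public class DurationToJdbcValueConverter implements Converter<Duration, JdbcValue> {

    @Override
    public JdbcValue convert(Duration source) {
        // Send the value explicitly as VARCHAR instead of relying on type derivation.
        return JdbcValue.of(source.toString(), JDBCType.VARCHAR);
    }
}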


7.12. Logging

Spring Data JDBC does little to no logging on its own. Instead, the mechanics of JdbcTemplate to issue SQL statements provide logging. Thus, if you want to inspect what SQL statements are run, activate logging for Spring’s NamedParameterJdbcTemplate or MyBatis.
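
With Spring Boot, for example, raising the log level for the Spring JDBC core package makes the issued SQL statements visible (a sketch; the exact logger to enable depends on your setup):

# application.properties
logging.level.org.springframework.jdbc.core=DEBUG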

7.13. Transactionality

The methods of CrudRepository instances are transactional by default. For reading operations, the transaction configuration readOnly flag is set to true. All others are configured with a plain @Transactional annotation so that default transaction configuration applies. For details, see the Javadoc of SimpleJdbcRepository. If you need to tweak transaction configuration for one of the methods declared in a repository, redeclare the method in your repository interface, as follows:

Example 7. Custom transaction configuration for CRUD
interface UserRepository extends CrudRepository<User, Long> {

  @Override
  @Transactional(timeout = 10)
  List<User> findAll();

  // Further query method declarations
}

The preceding causes the findAll() method to be run with a timeout of 10 seconds and without the readOnly flag.

Another way to alter transactional behavior is by using a facade or service implementation that typically covers more than one repository. Its purpose is to define transactional boundaries for non-CRUD operations. The following example shows how to create such a facade:

Example 8. Using a facade to define transactions for multiple repository calls
@Service
public class UserManagementImpl implements UserManagement {

  private final UserRepository userRepository;
  private final RoleRepository roleRepository;

  UserManagementImpl(UserRepository userRepository,
    RoleRepository roleRepository) {
    this.userRepository = userRepository;
    this.roleRepository = roleRepository;
  }

  @Transactional
  public void addRoleToAllUsers(String roleName) {

    Role role = roleRepository.findByName(roleName);

    for (User user : userRepository.findAll()) {
      user.addRole(role);
      userRepository.save(user);
    }
  }
}

The preceding example causes calls to addRoleToAllUsers(…) to run inside a transaction (participating in an existing one or creating a new one if none is already running). The transaction configuration for the repositories is neglected, as the outer transaction configuration determines the actual one used. Note that you have to explicitly activate <tx:annotation-driven /> or use @EnableTransactionManagement to get annotation-based configuration for facades working. The preceding example also assumes you use component scanning.

7.13.1. Transactional Query Methods

To let your query methods be transactional, use @Transactional at the repository interface you define, as the following example shows:

Example 9. Using @Transactional at query methods
@Transactional(readOnly = true)
interface UserRepository extends CrudRepository<User, Long> {

  List<User> findByLastname(String lastname);

  @Modifying
  @Transactional
  @Query("delete from User u where u.active = false")
  void deleteInactiveUsers();
}

Typically, you want the readOnly flag to be set to true, because most of the query methods only read data. In contrast to that, deleteInactiveUsers() uses the @Modifying annotation and overrides the transaction configuration. Thus, the method runs with the readOnly flag set to false.

It is highly recommended to make query methods transactional. These methods might execute more than one query in order to populate an entity. Without a common transaction, Spring Data JDBC executes the queries in different connections. This may put excessive strain on the connection pool and might even lead to deadlocks when multiple methods request a fresh connection while holding on to one.
It is definitely reasonable to mark read-only queries as such by setting the readOnly flag. This does not, however, act as a check that you do not trigger a manipulating query (although some databases reject INSERT and UPDATE statements inside a read-only transaction). Instead, the readOnly flag is propagated as a hint to the underlying JDBC driver for performance optimizations.


7.14. JDBC Auditing

In order to activate auditing, add @EnableJdbcAuditing to your configuration, as the following example shows:

Example 10. Activating auditing with Java configuration
@Configuration
@EnableJdbcAuditing
class Config {

  @Bean
  AuditorAware<AuditableUser> auditorProvider() {
    return new AuditorAwareImpl();
  }
}

If you expose a bean of type AuditorAware to the ApplicationContext, the auditing infrastructure automatically picks it up and uses it to determine the current user to be set on domain types. If you have multiple implementations registered in the ApplicationContext, you can select the one to be used by explicitly setting the auditorAwareRef attribute of @EnableJdbcAuditing.

7.15. JDBC Locking

Spring Data JDBC supports locking on derived query methods. To enable locking on a given derived query method inside a repository, you annotate it with @Lock. The required value of type LockMode offers two values: PESSIMISTIC_READ, which guarantees that the data you are reading doesn’t get modified, and PESSIMISTIC_WRITE, which obtains a lock to modify the data. Some databases do not make this distinction. In that case, both modes are equivalent to PESSIMISTIC_WRITE.

Example 11. Using @Lock on derived query method
interface UserRepository extends CrudRepository<User, Long> {

  @Lock(LockMode.PESSIMISTIC_READ)
  List<User> findByLastname(String lastname);
}

As you can see above, the method findByLastname(String lastname) will be executed with a pessimistic read lock. If you are using a database with the MySQL Dialect, this results, for example, in the following query:

Example 12. Resulting Sql query for MySQL dialect
Select * from user u where u.lastname = lastname LOCK IN SHARE MODE

As an alternative to LockMode.PESSIMISTIC_READ, you can use LockMode.PESSIMISTIC_WRITE.

Appendix

Appendix A: Glossary

AOP

Aspect-Oriented Programming

CRUD

Create, Read, Update, Delete - Basic persistence operations

Dependency Injection

Pattern to hand a component’s dependency to the component from outside, freeing the component from having to look up the dependency itself. For more information, see https://en.wikipedia.org/wiki/Dependency_Injection.

JPA

Java Persistence API

Spring

Java application framework — https://projects.spring.io/spring-framework
