© 2008-2022 The original authors.
Copies of this document may be made for your own use and for distribution to others, provided that you do not charge any fee for such copies and further provided that each copy contains this Copyright Notice, whether distributed in print or electronically.
1. Preface
Spring Data JPA provides repository support for the Jakarta Persistence API (JPA). It eases development of applications that need to access JPA data sources.
1.1. Project Metadata
- Version control: https://github.com/spring-projects/spring-data-jpa
- Bugtracker: https://github.com/spring-projects/spring-data-jpa/issues
- Release repository: https://repo.spring.io/libs-release
- Milestone repository: https://repo.spring.io/libs-milestone
- Snapshot repository: https://repo.spring.io/libs-snapshot
2. New & Noteworthy
2.1. What’s New in Spring Data JPA 3.0
- Upgrade to Hibernate 6. See this section for what to consider when upgrading.
- Support for null handling definitions via Sort (see the sketch below).
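As a brief illustration, null handling can be declared on individual Sort.Order instances. This is a minimal sketch; the lastname property is only an example:
// Sort by lastname ascending, placing null values last
Sort sort = Sort.by(Sort.Order.asc("lastname").nullsLast());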
2.1.1. Upgrading to Hibernate 6
Spring Data 3.0 upgrades its Hibernate baseline to Hibernate 6. As quite a few things have changed in that version, a couple of things that have worked before might need some tweaks.
- Using JPA named queries with pagination — Pagination requires Spring Data to derive a count query from the originally declared one that loads the actual content of the Page. For queries declared as JPA named queries, we have relied on provider-specific API to obtain the original source query and tweak it accordingly. On Hibernate 6, that query extraction might fail in certain arrangements. We recommend declaring such queries directly on the repository methods by using @Query (see the sketch after this list).
- Using positional parameters with pagination — When using positional parameters with pagination queries, make sure that the parameter indexes still start at 1, even with a potential ORDER BY clause removed from the query. The count query derived from the original one has that clause removed, and Hibernate 6 rejects queries whose parameter indexes do not start at 1. We generally recommend using named parameters anyway.
- Applying JPA entity graphs — Under certain model conditions, the application of entity graphs might fail on Hibernate 6. See this ticket for details. We generally recommend using interface or DTO projections instead of entity graphs.
- Using … like … escape ?#{escapeCharacter()} in queries — If you have customized the global default escape character (via @EnableJpaRepositories(escapeCharacter = '…')), applying it through the corresponding SpEL expression currently fails. See this ticket for details.
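As mentioned above, a paginated query declared directly on the repository method with a named parameter might look like the following sketch (the User entity and its repository are assumed):
interface UserRepository extends Repository<User, Long> {

  @Query("select u from User u where u.lastname = :lastname")
  Page<User> findByLastname(@Param("lastname") String lastname, Pageable pageable);
}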
2.2. What’s New in Spring Data JPA 2.5
There is a new getById method in the JpaRepository which will replace getOne, which is now deprecated.
Since this method returns a reference, this changes the behaviour of an existing getById method which was previously implemented by query derivation.
This in turn might lead to an unexpected LazyLoadingException when accessing attributes of that reference outside a transaction.
To avoid this, rename your existing getById method to getXyzById, with Xyz being an arbitrary string.
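For example, a previously derived getById query method could be renamed as follows (a sketch; the User entity and its repository are assumed):
interface UserRepository extends JpaRepository<User, Long> {

  // formerly: User getById(Long id); which now clashes with the reference-returning JpaRepository method
  User getUserById(Long id); // derived query that loads the entity eagerly
}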
2.3. What’s New in Spring Data JPA 1.11
Spring Data JPA 1.11 added the following features:
- Improved compatibility with Hibernate 5.2.
- Support any-match mode for Query by Example.
- Paged query optimizations.
- Support for the exists projection in repository query derivation.
2.4. What’s New in Spring Data JPA 1.10
Spring Data JPA 1.10 added the following features:
- Support for Projections in repository query methods.
- Support for Query by Example.
- The following annotations have been enabled to build on composed annotations: @EntityGraph, @Lock, @Modifying, @Query, @QueryHints, and @Procedure.
- Support for the Contains keyword on collection expressions.
- AttributeConverter implementations for ZoneId of JSR-310 and ThreeTenBP.
- Upgrade to Querydsl 4, Hibernate 5, OpenJPA 2.4, and EclipseLink 2.6.1.
3. Dependencies
Due to the different inception dates of individual Spring Data modules, most of them carry different major and minor version numbers. The easiest way to find compatible ones is to rely on the Spring Data Release Train BOM that we ship with the compatible versions defined. In a Maven project, you would declare this dependency in the <dependencyManagement />
section of your POM as follows:
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.data</groupId>
<artifactId>spring-data-bom</artifactId>
<version>2022.0.0-M6</version>
<scope>import</scope>
<type>pom</type>
</dependency>
</dependencies>
</dependencyManagement>
The current release train version is 2022.0.0-M6. The train version uses calver with the pattern YYYY.MINOR.MICRO.
The version name follows ${calver} for GA releases and service releases and the following pattern for all other versions: ${calver}-${modifier}, where modifier can be one of the following:
- SNAPSHOT: Current snapshots
- M1, M2, and so on: Milestones
- RC1, RC2, and so on: Release candidates
You can find a working example of using the BOMs in our Spring Data examples repository. With that in place, you can declare the Spring Data modules you would like to use without a version in the <dependencies />
block, as follows:
<dependencies>
<dependency>
<groupId>org.springframework.data</groupId>
<artifactId>spring-data-jpa</artifactId>
</dependency>
</dependencies>
3.1. Dependency Management with Spring Boot
Spring Boot selects a recent version of Spring Data modules for you. If you still want to upgrade to a newer version, set
the spring-data-releasetrain.version
property to the train version and iteration you would like to use.
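With Maven and a Spring Boot managed parent POM, such an override might look like the following sketch (the version shown is merely the milestone used elsewhere in this document):
<properties>
  <spring-data-releasetrain.version>2022.0.0-M6</spring-data-releasetrain.version>
</properties>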
4. Working with Spring Data Repositories
The goal of the Spring Data repository abstraction is to significantly reduce the amount of boilerplate code required to implement data access layers for various persistence stores.
Spring Data repository documentation and your module: This chapter explains the core concepts and interfaces of Spring Data repositories. The information in this chapter is pulled from the Spring Data Commons module. It uses the configuration and code samples for the Jakarta Persistence API (JPA) module. You should adapt the XML namespace declaration and the types to be extended to the equivalents of the particular module that you use. “Namespace reference” covers XML configuration, which is supported across all Spring Data modules that support the repository API. “Repository query keywords” covers the query method keywords supported by the repository abstraction in general. For detailed information on the specific features of your module, see the chapter on that module of this document.
4.1. Core concepts
The central interface in the Spring Data repository abstraction is Repository.
It takes the domain class to manage as well as the ID type of the domain class as type arguments.
This interface acts primarily as a marker interface to capture the types to work with and to help you to discover interfaces that extend this one.
The CrudRepository and ListCrudRepository interfaces provide sophisticated CRUD functionality for the entity class that is being managed.
CrudRepository Interface
public interface CrudRepository<T, ID> extends Repository<T, ID> {
<S extends T> S save(S entity); (1)
Optional<T> findById(ID primaryKey); (2)
Iterable<T> findAll(); (3)
long count(); (4)
void delete(T entity); (5)
boolean existsById(ID primaryKey); (6)
// … more functionality omitted.
}
1 | Saves the given entity. |
2 | Returns the entity identified by the given ID. |
3 | Returns all entities. |
4 | Returns the number of entities. |
5 | Deletes the given entity. |
6 | Indicates whether an entity with the given ID exists. |
ListCrudRepository offers equivalent methods, but they return List where the CrudRepository methods return an Iterable.
We also provide persistence technology-specific abstractions, such as JpaRepository or MongoRepository.
Those interfaces extend CrudRepository and expose the capabilities of the underlying persistence technology in addition to the rather generic persistence technology-agnostic interfaces such as CrudRepository.
In addition to the CrudRepository, there is a PagingAndSortingRepository abstraction that adds methods to ease paginated access to entities:
PagingAndSortingRepository interface
public interface PagingAndSortingRepository<T, ID> {
Iterable<T> findAll(Sort sort);
Page<T> findAll(Pageable pageable);
}
To access the second page of User by a page size of 20, you could do something like the following:
PagingAndSortingRepository<User, Long> repository = // … get access to a bean
Page<User> users = repository.findAll(PageRequest.of(1, 20));
In addition to query methods, query derivation for both count and delete queries is available. The following list shows the interface definition for a derived count query:
interface UserRepository extends CrudRepository<User, Long> {
long countByLastname(String lastname);
}
The following listing shows the interface definition for a derived delete query:
interface UserRepository extends CrudRepository<User, Long> {
long deleteByLastname(String lastname);
List<User> removeByLastname(String lastname);
}
4.2. Query Methods
Repositories with standard CRUD functionality usually also need queries on the underlying datastore. With Spring Data, declaring those queries becomes a four-step process:
1. Declare an interface extending Repository or one of its subinterfaces and type it to the domain class and ID type that it should handle, as shown in the following example:
interface PersonRepository extends Repository<Person, Long> { … }
2. Declare query methods on the interface.
interface PersonRepository extends Repository<Person, Long> { List<Person> findByLastname(String lastname); }
3. Set up Spring to create proxy instances for those interfaces, either with JavaConfig or with XML configuration.
- To use Java configuration, create a class similar to the following:
@EnableJpaRepositories class Config { … }
- To use XML configuration, define a bean similar to the following:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns:jpa="http://www.springframework.org/schema/data/jpa"
  xsi:schemaLocation="http://www.springframework.org/schema/beans
    https://www.springframework.org/schema/beans/spring-beans.xsd
    http://www.springframework.org/schema/data/jpa
    https://www.springframework.org/schema/data/jpa/spring-jpa.xsd">
  <jpa:repositories base-package="com.acme.repositories"/>
</beans>
The JPA namespace is used in this example. If you use the repository abstraction for any other store, you need to change this to the appropriate namespace declaration of your store module. In other words, you should exchange jpa in favor of, for example, mongodb.
Note that the JavaConfig variant does not configure a package explicitly, because the package of the annotated class is used by default. To customize the package to scan, use one of the basePackage… attributes of the data-store-specific repository’s @Enable${store}Repositories annotation.
4. Inject the repository instance and use it, as shown in the following example:
class SomeClient {

  private final PersonRepository repository;

  SomeClient(PersonRepository repository) {
    this.repository = repository;
  }

  void doSomething() {
    List<Person> persons = repository.findByLastname("Matthews");
  }
}
The sections that follow explain each step in detail:
4.3. Defining Repository Interfaces
To define a repository interface, you first need to define a domain class-specific repository interface.
The interface must extend Repository and be typed to the domain class and an ID type.
If you want to expose CRUD methods for that domain type, you may extend CrudRepository, or one of its variants, instead of Repository.
4.3.1. Fine-tuning Repository Definition
There are a few variants of how you can get started with your repository interface.
The typical approach is to extend CrudRepository, which gives you methods for CRUD functionality.
CRUD stands for Create, Read, Update, Delete.
With version 3.0 we also introduced ListCrudRepository, which is very similar to the CrudRepository but, for those methods that return multiple entities, returns a List instead of an Iterable, which you might find easier to use.
If you are using a reactive store, you might choose ReactiveCrudRepository or RxJava3CrudRepository, depending on which reactive framework you are using.
If you are using Kotlin, you might pick CoroutineCrudRepository, which utilizes Kotlin’s coroutines.
Additionally, you can extend PagingAndSortingRepository, ReactiveSortingRepository, RxJava3SortingRepository, or CoroutineSortingRepository if you need methods that allow specifying a Sort abstraction or, in the first case, a Pageable abstraction.
Note that the various sorting repositories no longer extend their respective CRUD repository as they did in Spring Data versions before 3.0.
Therefore, you need to extend both interfaces if you want functionality of both, as shown in the sketch below.
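A minimal sketch of a repository that combines CRUD and paging/sorting functionality (the User entity is assumed):
interface UserRepository extends CrudRepository<User, Long>, PagingAndSortingRepository<User, Long> {
}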
If you do not want to extend Spring Data interfaces, you can also annotate your repository interface with @RepositoryDefinition.
Extending one of the CRUD repository interfaces exposes a complete set of methods to manipulate your entities.
If you prefer to be selective about the methods being exposed, copy the methods you want to expose from the CRUD repository into your domain repository.
When doing so, you may change the return type of methods.
Spring Data will honor the return type if possible.
For example, for methods returning multiple entities you may choose Iterable<T>, List<T>, Collection<T>, or a VAVR list, as the sketch below shows.
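A selectively copied method with a narrowed return type might look like this sketch:
interface UserRepository extends Repository<User, Long> {

  List<User> findAll(); // same signature as in CrudRepository, but returning List instead of Iterable
}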
If many repositories in your application should have the same set of methods, you can define your own base interface to inherit from.
Such an interface must be annotated with @NoRepositoryBean.
This prevents Spring Data from trying to create an instance of it directly and failing because it cannot determine the entity for that repository, since it still contains a generic type variable.
The following example shows how to selectively expose CRUD methods (findById and save, in this case):
@NoRepositoryBean
interface MyBaseRepository<T, ID> extends Repository<T, ID> {
Optional<T> findById(ID id);
<S extends T> S save(S entity);
}
interface UserRepository extends MyBaseRepository<User, Long> {
User findByEmailAddress(EmailAddress emailAddress);
}
In the prior example, you defined a common base interface for all your domain repositories and exposed findById(…) as well as save(…).
These methods are routed into the base repository implementation of the store of your choice provided by Spring Data (for example, if you use JPA, the implementation is SimpleJpaRepository), because they match the method signatures in CrudRepository.
So the UserRepository can now save users, find individual users by ID, and trigger a query to find Users by email address.
The intermediate repository interface is annotated with @NoRepositoryBean.
Make sure you add that annotation to all repository interfaces for which Spring Data should not create instances at runtime.
4.3.2. Using Repositories with Multiple Spring Data Modules
Using a unique Spring Data module in your application makes things simple, because all repository interfaces in the defined scope are bound to the Spring Data module. Sometimes, applications require using more than one Spring Data module. In such cases, a repository definition must distinguish between persistence technologies. When it detects multiple repository factories on the class path, Spring Data enters strict repository configuration mode. Strict configuration uses details on the repository or the domain class to decide about Spring Data module binding for a repository definition:
- If the repository definition extends the module-specific repository, it is a valid candidate for the particular Spring Data module.
- If the domain class is annotated with the module-specific type annotation, it is a valid candidate for the particular Spring Data module. Spring Data modules accept either third-party annotations (such as JPA’s @Entity) or provide their own annotations (such as @Document for Spring Data MongoDB and Spring Data Elasticsearch).
The following example shows a repository that uses module-specific interfaces (JPA in this case):
interface MyRepository extends JpaRepository<User, Long> { }
@NoRepositoryBean
interface MyBaseRepository<T, ID> extends JpaRepository<T, ID> { … }
interface UserRepository extends MyBaseRepository<User, Long> { … }
MyRepository and UserRepository extend JpaRepository in their type hierarchy.
They are valid candidates for the Spring Data JPA module.
The following example shows a repository that uses generic interfaces:
interface AmbiguousRepository extends Repository<User, Long> { … }
@NoRepositoryBean
interface MyBaseRepository<T, ID> extends CrudRepository<T, ID> { … }
interface AmbiguousUserRepository extends MyBaseRepository<User, Long> { … }
AmbiguousRepository and AmbiguousUserRepository extend only Repository and CrudRepository in their type hierarchy.
While this is fine when using a unique Spring Data module, multiple modules cannot distinguish to which particular Spring Data module these repositories should be bound.
The following example shows a repository that uses domain classes with annotations:
interface PersonRepository extends Repository<Person, Long> { … }
@Entity
class Person { … }
interface UserRepository extends Repository<User, Long> { … }
@Document
class User { … }
PersonRepository references Person, which is annotated with the JPA @Entity annotation, so this repository clearly belongs to Spring Data JPA. UserRepository references User, which is annotated with Spring Data MongoDB’s @Document annotation.
The following bad example shows a repository that uses domain classes with mixed annotations:
interface JpaPersonRepository extends Repository<Person, Long> { … }
interface MongoDBPersonRepository extends Repository<Person, Long> { … }
@Entity
@Document
class Person { … }
This example shows a domain class using both JPA and Spring Data MongoDB annotations.
It defines two repositories, JpaPersonRepository and MongoDBPersonRepository.
One is intended for JPA and the other for MongoDB usage.
Spring Data is no longer able to tell the repositories apart, which leads to undefined behavior.
Repository type details and distinguishing domain class annotations are used for strict repository configuration to identify repository candidates for a particular Spring Data module. Using multiple persistence technology-specific annotations on the same domain type is possible and enables reuse of domain types across multiple persistence technologies. However, Spring Data can then no longer determine a unique module with which to bind the repository.
The last way to distinguish repositories is by scoping repository base packages. Base packages define the starting points for scanning for repository interface definitions, which implies having repository definitions located in the appropriate packages. By default, annotation-driven configuration uses the package of the configuration class. The base package in XML-based configuration is mandatory.
The following example shows annotation-driven configuration of base packages:
@EnableJpaRepositories(basePackages = "com.acme.repositories.jpa")
@EnableMongoRepositories(basePackages = "com.acme.repositories.mongo")
class Configuration { … }
4.4. Defining Query Methods
The repository proxy has two ways to derive a store-specific query from the method name:
- By deriving the query from the method name directly.
- By using a manually defined query.
Available options depend on the actual store. However, there must be a strategy that decides what actual query is created. The next section describes the available options.
4.4.1. Query Lookup Strategies
The following strategies are available for the repository infrastructure to resolve the query.
With XML configuration, you can configure the strategy at the namespace through the query-lookup-strategy attribute.
For Java configuration, you can use the queryLookupStrategy attribute of the Enable${store}Repositories annotation.
Some strategies may not be supported for particular datastores.
- CREATE attempts to construct a store-specific query from the query method name. The general approach is to remove a given set of well known prefixes from the method name and parse the rest of the method. You can read more about query construction in “Query Creation”.
- USE_DECLARED_QUERY tries to find a declared query and throws an exception if it cannot find one. The query can be defined by an annotation somewhere or declared by other means. See the documentation of the specific store to find available options for that store. If the repository infrastructure does not find a declared query for the method at bootstrap time, it fails.
- CREATE_IF_NOT_FOUND (the default) combines CREATE and USE_DECLARED_QUERY. It looks up a declared query first, and, if no declared query is found, it creates a custom method name-based query. This is the default lookup strategy and, thus, is used if you do not configure anything explicitly. It allows quick query definition by method names but also custom-tuning of these queries by introducing declared queries as needed.
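As an illustration of the Java configuration mentioned above, explicitly selecting a lookup strategy might look like the following sketch:
@EnableJpaRepositories(queryLookupStrategy = QueryLookupStrategy.Key.CREATE_IF_NOT_FOUND)
class Config { … }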
4.4.2. Query Creation
The query builder mechanism built into the Spring Data repository infrastructure is useful for building constraining queries over entities of the repository.
The following example shows how to create a number of queries:
interface PersonRepository extends Repository<Person, Long> {
List<Person> findByEmailAddressAndLastname(EmailAddress emailAddress, String lastname);
// Enables the distinct flag for the query
List<Person> findDistinctPeopleByLastnameOrFirstname(String lastname, String firstname);
List<Person> findPeopleDistinctByLastnameOrFirstname(String lastname, String firstname);
// Enabling ignoring case for an individual property
List<Person> findByLastnameIgnoreCase(String lastname);
// Enabling ignoring case for all suitable properties
List<Person> findByLastnameAndFirstnameAllIgnoreCase(String lastname, String firstname);
// Enabling static ORDER BY for a query
List<Person> findByLastnameOrderByFirstnameAsc(String lastname);
List<Person> findByLastnameOrderByFirstnameDesc(String lastname);
}
Parsing query method names is divided into subject and predicate.
The first part (find…By, exists…By) defines the subject of the query, the second part forms the predicate.
The introducing clause (subject) can contain further expressions.
Any text between find (or other introducing keywords) and By is considered to be descriptive unless using one of the result-limiting keywords such as a Distinct to set a distinct flag on the query to be created or Top/First to limit query results.
The appendix contains the full list of query method subject keywords and query method predicate keywords including sorting and letter-casing modifiers.
However, the first By acts as a delimiter to indicate the start of the actual criteria predicate.
At a very basic level, you can define conditions on entity properties and concatenate them with And and Or.
The actual result of parsing the method depends on the persistence store for which you create the query. However, there are some general things to notice:
- The expressions are usually property traversals combined with operators that can be concatenated. You can combine property expressions with AND and OR. You also get support for operators such as Between, LessThan, GreaterThan, and Like for the property expressions. The supported operators can vary by datastore, so consult the appropriate part of your reference documentation.
- The method parser supports setting an IgnoreCase flag for individual properties (for example, findByLastnameIgnoreCase(…)) or for all properties of a type that supports ignoring case (usually String instances — for example, findByLastnameAndFirstnameAllIgnoreCase(…)). Whether ignoring cases is supported may vary by store, so consult the relevant sections in the reference documentation for the store-specific query method.
- You can apply static ordering by appending an OrderBy clause to the query method that references a property and by providing a sorting direction (Asc or Desc). To create a query method that supports dynamic sorting, see “Special parameter handling”.
4.4.3. Property Expressions
Property expressions can refer only to a direct property of the managed entity, as shown in the preceding example. At query creation time, you already make sure that the parsed property is a property of the managed domain class. However, you can also define constraints by traversing nested properties. Consider the following method signature:
List<Person> findByAddressZipCode(ZipCode zipCode);
Assume a Person has an Address with a ZipCode.
In that case, the method creates the x.address.zipCode property traversal.
The resolution algorithm starts by interpreting the entire part (AddressZipCode) as the property and checks the domain class for a property with that name (uncapitalized).
If the algorithm succeeds, it uses that property.
If not, the algorithm splits up the source at the camel-case parts from the right side into a head and a tail and tries to find the corresponding property — in our example, AddressZip and Code.
If the algorithm finds a property with that head, it takes the tail and continues building the tree down from there, splitting the tail up in the way just described.
If the first split does not match, the algorithm moves the split point to the left (Address, ZipCode) and continues.
Although this should work for most cases, it is possible for the algorithm to select the wrong property.
Suppose the Person class has an addressZip property as well.
The algorithm would match in the first split round already, choose the wrong property, and fail (as the type of addressZip probably has no code property).
To resolve this ambiguity, you can use _ inside your method name to manually define traversal points.
So our method name would be as follows:
List<Person> findByAddress_ZipCode(ZipCode zipCode);
Because we treat the underscore character as a reserved character, we strongly advise following standard Java naming conventions (that is, not using underscores in property names but using camel case instead).
4.4.4. Special parameter handling
To handle parameters in your query, define method parameters as already seen in the preceding examples.
Besides that, the infrastructure recognizes certain specific types like Pageable and Sort, to apply pagination and sorting to your queries dynamically.
The following example demonstrates these features:
Pageable, Slice, and Sort in query methods
Page<User> findByLastname(String lastname, Pageable pageable);
Slice<User> findByLastname(String lastname, Pageable pageable);
List<User> findByLastname(String lastname, Sort sort);
List<User> findByLastname(String lastname, Pageable pageable);
APIs taking Sort and Pageable expect non-null values to be handed into methods.
If you do not want to apply any sorting or pagination, use Sort.unsorted() and Pageable.unpaged().
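For example, reusing the findByLastname(String, Sort) method declared above without applying any sorting might look like this:
List<User> users = repository.findByLastname("Matthews", Sort.unsorted());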
The first method lets you pass an org.springframework.data.domain.Pageable instance to the query method to dynamically add paging to your statically defined query.
A Page knows about the total number of elements and pages available.
It does so by the infrastructure triggering a count query to calculate the overall number.
As this might be expensive (depending on the store used), you can instead return a Slice.
A Slice knows only about whether a next Slice is available, which might be sufficient when walking through a larger result set.
Sorting options are handled through the Pageable instance, too.
If you need only sorting, add an org.springframework.data.domain.Sort parameter to your method.
As you can see, returning a List is also possible.
In this case, the additional metadata required to build the actual Page instance is not created (which, in turn, means that the additional count query that would have been necessary is not issued).
Rather, it restricts the query to look up only the given range of entities.
To find out how many pages you get for an entire query, you have to trigger an additional count query. By default, this query is derived from the query you actually trigger.
Paging and Sorting
You can define simple sorting expressions by using property names. You can concatenate expressions to collect multiple criteria into one expression.
Sort sort = Sort.by("firstname").ascending()
.and(Sort.by("lastname").descending());
For a more type-safe way to define sort expressions, start with the type for which to define the sort expression and use method references to define the properties on which to sort.
TypedSort<Person> person = Sort.sort(Person.class);
Sort sort = person.by(Person::getFirstname).ascending()
.and(person.by(Person::getLastname).descending());
TypedSort.by(…) makes use of runtime proxies by (typically) using CGlib, which may interfere with native image compilation when using tools such as Graal VM Native.
If your store implementation supports Querydsl, you can also use the generated metamodel types to define sort expressions:
QSort sort = QSort.by(QPerson.firstname.asc())
.and(QSort.by(QPerson.lastname.desc()));
4.4.5. Limiting Query Results
You can limit the results of query methods by using the first or top keywords, which you can use interchangeably.
You can append an optional numeric value to top or first to specify the maximum result size to be returned.
If the number is left out, a result size of 1 is assumed.
The following example shows how to limit the query size:
Top and First
User findFirstByOrderByLastnameAsc();
User findTopByOrderByAgeDesc();
Page<User> queryFirst10ByLastname(String lastname, Pageable pageable);
Slice<User> findTop3ByLastname(String lastname, Pageable pageable);
List<User> findFirst10ByLastname(String lastname, Sort sort);
List<User> findTop10ByLastname(String lastname, Pageable pageable);
The limiting expressions also support the Distinct keyword for datastores that support distinct queries.
Also, for the queries that limit the result set to one instance, wrapping the result into Optional is supported.
If pagination or slicing is applied to a limiting query, pagination (and the calculation of the number of available pages) is applied within the limited result.
Limiting the results in combination with dynamic sorting by using a Sort parameter lets you express query methods for the 'K' smallest as well as for the 'K' biggest elements.
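For example, combined with the findFirst10ByLastname(String, Sort) method declared above, the ten 'smallest' users by age could be retrieved like this sketch (the age property is an assumption):
List<User> youngest = repository.findFirst10ByLastname("Matthews", Sort.by("age").ascending());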
4.4.6. Repository Methods Returning Collections or Iterables
Query methods that return multiple results can use standard Java Iterable, List, and Set.
Beyond that, we support returning Spring Data’s Streamable, a custom extension of Iterable, as well as collection types provided by Vavr.
Refer to the appendix explaining all possible query method return types.
Using Streamable as Query Method Return Type
You can use Streamable as an alternative to Iterable or any collection type.
It provides convenience methods to access a non-parallel Stream (missing from Iterable) and the ability to directly ….filter(…) and ….map(…) over the elements and concatenate the Streamable to others:
interface PersonRepository extends Repository<Person, Long> {
Streamable<Person> findByFirstnameContaining(String firstname);
Streamable<Person> findByLastnameContaining(String lastname);
}
Streamable<Person> result = repository.findByFirstnameContaining("av")
.and(repository.findByLastnameContaining("ea"));
Returning Custom Streamable Wrapper Types
Providing dedicated wrapper types for collections is a commonly used pattern to provide an API for a query result that returns multiple elements. Usually, these types are used by invoking a repository method returning a collection-like type and creating an instance of the wrapper type manually. You can avoid that additional step as Spring Data lets you use these wrapper types as query method return types if they meet the following criteria:
- The type implements Streamable.
- The type exposes either a constructor or a static factory method named of(…) or valueOf(…) that takes Streamable as an argument.
The following listing shows an example:
class Product { (1)
MonetaryAmount getPrice() { … }
}
@RequiredArgsConstructor(staticName = "of")
class Products implements Streamable<Product> { (2)
private final Streamable<Product> streamable;
public MonetaryAmount getTotal() { (3)
return streamable.stream()
.map(Priced::getPrice)
.reduce(Money.of(0), MonetaryAmount::add);
}
@Override
public Iterator<Product> iterator() { (4)
return streamable.iterator();
}
}
interface ProductRepository implements Repository<Product, Long> {
Products findAllByDescriptionContaining(String text); (5)
}
1 | A Product entity that exposes API to access the product’s price. |
2 | A wrapper type for a Streamable<Product> that can be constructed by using Products.of(…) (factory method created with the Lombok annotation).
A standard constructor taking the Streamable<Product> will do as well. |
3 | The wrapper type exposes an additional API, calculating new values on the Streamable<Product> . |
4 | Implement the Streamable interface and delegate to the actual result. |
5 | That wrapper type Products can be used directly as a query method return type.
You do not need to return Streamable<Product> and manually wrap it after the query in the repository client. |
Support for Vavr Collections
Vavr is a library that embraces functional programming concepts in Java. It ships with a custom set of collection types that you can use as query method return types, as the following table shows:
Vavr collection type | Used Vavr implementation type | Valid Java source types |
---|---|---|
io.vavr.collection.Seq | io.vavr.collection.List | java.util.Iterable |
io.vavr.collection.Set | io.vavr.collection.LinkedHashSet | java.util.Iterable |
io.vavr.collection.Map | io.vavr.collection.LinkedHashMap | java.util.Map |
You can use the types in the first column (or subtypes thereof) as query method return types and get the types in the second column used as implementation type, depending on the Java type of the actual query result (third column).
Alternatively, you can declare Traversable (the Vavr Iterable equivalent), and we then derive the implementation class from the actual return value.
That is, a java.util.List is turned into a Vavr List or Seq, a java.util.Set becomes a Vavr LinkedHashSet Set, and so on.
4.4.7. Null Handling of Repository Methods
As of Spring Data 2.0, repository CRUD methods that return an individual aggregate instance use Java 8’s Optional to indicate the potential absence of a value.
Besides that, Spring Data supports returning the following wrapper types on query methods:
- com.google.common.base.Optional
- scala.Option
- io.vavr.control.Option
Alternatively, query methods can choose not to use a wrapper type at all.
The absence of a query result is then indicated by returning null.
Repository methods returning collections, collection alternatives, wrappers, and streams are guaranteed never to return null but rather the corresponding empty representation.
See “Repository query return types” for details.
Nullability Annotations
You can express nullability constraints for repository methods by using Spring Framework’s nullability annotations.
They provide a tooling-friendly approach and opt-in null checks during runtime, as follows:
- @NonNullApi: Used on the package level to declare that the default behavior for parameters and return values is, respectively, neither to accept nor to produce null values.
- @NonNull: Used on a parameter or return value that must not be null (not needed on a parameter and return value where @NonNullApi applies).
- @Nullable: Used on a parameter or return value that can be null.
Spring annotations are meta-annotated with JSR 305 annotations (a dormant but widely used JSR).
JSR 305 meta-annotations let tooling vendors (such as IDEA, Eclipse, and Kotlin) provide null-safety support in a generic way, without having to hard-code support for Spring annotations.
To enable runtime checking of nullability constraints for query methods, you need to activate non-nullability on the package level by using Spring’s @NonNullApi in package-info.java, as shown in the following example:
package-info.java
@org.springframework.lang.NonNullApi
package com.acme;
Once non-null defaulting is in place, repository query method invocations get validated at runtime for nullability constraints.
If a query result violates the defined constraint, an exception is thrown.
This happens when the method would return null but is declared as non-nullable (the default with the annotation defined on the package in which the repository resides).
If you want to opt-in to nullable results again, selectively use @Nullable on individual methods.
Using the result wrapper types mentioned at the start of this section continues to work as expected: an empty result is translated into the value that represents absence.
The following example shows a number of the techniques just described:
package com.acme; (1)
interface UserRepository extends Repository<User, Long> {
User getByEmailAddress(EmailAddress emailAddress); (2)
@Nullable
User findByEmailAddress(@Nullable EmailAddress emailAddress); (3)
Optional<User> findOptionalByEmailAddress(EmailAddress emailAddress); (4)
}
1 | The repository resides in a package (or sub-package) for which we have defined non-null behavior. |
2 | Throws an EmptyResultDataAccessException when the query does not produce a result.
Throws an IllegalArgumentException when the emailAddress handed to the method is null . |
3 | Returns null when the query does not produce a result.
Also accepts null as the value for emailAddress . |
4 | Returns Optional.empty() when the query does not produce a result.
Throws an IllegalArgumentException when the emailAddress handed to the method is null . |
Nullability in Kotlin-based Repositories
Kotlin has the definition of nullability constraints baked into the language.
Kotlin code compiles to bytecode, which does not express nullability constraints through method signatures but rather through compiled-in metadata.
Make sure to include the kotlin-reflect JAR in your project to enable introspection of Kotlin’s nullability constraints.
Spring Data repositories use the language mechanism to define those constraints to apply the same runtime checks, as follows:
interface UserRepository : Repository<User, String> {
fun findByUsername(username: String): User (1)
fun findByFirstname(firstname: String?): User? (2)
}
1 | The method defines both the parameter and the result as non-nullable (the Kotlin default).
The Kotlin compiler rejects method invocations that pass null to the method.
If the query yields an empty result, an EmptyResultDataAccessException is thrown. |
2 | This method accepts null for the firstname parameter and returns null if the query does not produce a result. |
4.4.8. Streaming Query Results
You can process the results of query methods incrementally by using a Java 8 Stream<T> as the return type.
Instead of wrapping the query results in a Stream, data store-specific methods are used to perform the streaming, as shown in the following example:
Stream<T>
@Query("select u from User u")
Stream<User> findAllByCustomQueryAndStream();
Stream<User> readAllByFirstnameNotNull();
@Query("select u from User u")
Stream<User> streamAllPaged(Pageable pageable);
A Stream potentially wraps underlying data store-specific resources and must, therefore, be closed after usage.
You can either manually close the Stream by using the close() method or by using a Java 7 try-with-resources block, as shown in the following example:
Stream<T> result in a try-with-resources block
try (Stream<User> stream = repository.findAllByCustomQueryAndStream()) {
stream.forEach(…);
}
Not all Spring Data modules currently support Stream<T> as a return type.
4.4.9. Asynchronous Query Results
You can run repository queries asynchronously by using Spring’s asynchronous method running capability.
This means the method returns immediately upon invocation while the actual query occurs in a task that has been submitted to a Spring TaskExecutor.
Asynchronous queries differ from reactive queries and should not be mixed.
See the store-specific documentation for more details on reactive support.
The following example shows a number of asynchronous queries:
@Async
Future<User> findByFirstname(String firstname); (1)
@Async
CompletableFuture<User> findOneByFirstname(String firstname); (2)
1 | Use java.util.concurrent.Future as the return type. |
2 | Use a Java 8 java.util.concurrent.CompletableFuture as the return type. |
4.5. Creating Repository Instances
This section covers how to create instances and bean definitions for the defined repository interfaces. One way to do so is by using the Spring namespace that is shipped with each Spring Data module that supports the repository mechanism, although we generally recommend using Java configuration.
4.5.1. XML Configuration
Each Spring Data module includes a repositories element that lets you define a base package that Spring scans for you, as shown in the following example:
<?xml version="1.0" encoding="UTF-8"?>
<beans:beans xmlns:beans="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://www.springframework.org/schema/data/jpa"
xsi:schemaLocation="http://www.springframework.org/schema/beans
https://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/data/jpa
https://www.springframework.org/schema/data/jpa/spring-jpa.xsd">
<repositories base-package="com.acme.repositories" />
</beans:beans>
In the preceding example, Spring is instructed to scan com.acme.repositories and all its sub-packages for interfaces extending Repository or one of its sub-interfaces.
For each interface found, the infrastructure registers the persistence technology-specific FactoryBean to create the appropriate proxies that handle invocations of the query methods.
Each bean is registered under a bean name that is derived from the interface name, so an interface of UserRepository would be registered under userRepository.
Bean names for nested repository interfaces are prefixed with their enclosing type name.
The base-package attribute allows wildcards so that you can define a pattern of scanned packages.
Using Filters
By default, the infrastructure picks up every interface that extends the persistence technology-specific Repository sub-interface located under the configured base package and creates a bean instance for it.
However, you might want more fine-grained control over which interfaces have bean instances created for them.
To do so, use <include-filter /> and <exclude-filter /> elements inside the <repositories /> element.
The semantics are exactly equivalent to the elements in Spring’s context namespace.
For details, see the Spring reference documentation for these elements.
For example, to exclude certain interfaces from instantiation as repository beans, you could use the following configuration:
<repositories base-package="com.acme.repositories">
<context:exclude-filter type="regex" expression=".*SomeRepository" />
</repositories>
The preceding example excludes all interfaces ending in SomeRepository from being instantiated.
4.5.2. Java Configuration
You can also trigger the repository infrastructure by using a store-specific @Enable${store}Repositories annotation on a Java configuration class. For an introduction to Java-based configuration of the Spring container, see JavaConfig in the Spring reference documentation.
A sample configuration to enable Spring Data repositories resembles the following:
@Configuration
@EnableJpaRepositories("com.acme.repositories")
class ApplicationConfiguration {
@Bean
EntityManagerFactory entityManagerFactory() {
// …
}
}
The preceding example uses the JPA-specific annotation, which you would change according to the store module you actually use. The same applies to the definition of the EntityManagerFactory bean. See the sections covering the store-specific configuration.
4.5.3. Standalone Usage
You can also use the repository infrastructure outside of a Spring container — for example, in CDI environments. You still need some Spring libraries in your classpath, but, generally, you can set up repositories programmatically as well. The Spring Data modules that provide repository support ship with a persistence technology-specific RepositoryFactory that you can use, as follows:
RepositoryFactorySupport factory = … // Instantiate factory here
UserRepository repository = factory.getRepository(UserRepository.class);
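For JPA, such a factory is JpaRepositoryFactory, so a standalone setup might look like the following sketch (obtaining the EntityManager is up to you):
EntityManager entityManager = … // obtain a JPA EntityManager, e.g. from a bootstrapped EntityManagerFactory
RepositoryFactorySupport factory = new JpaRepositoryFactory(entityManager);
UserRepository repository = factory.getRepository(UserRepository.class);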
4.6. Custom Implementations for Spring Data Repositories
Spring Data provides various options to create query methods with little coding. But when those options don’t fit your needs you can also provide your own custom implementation for repository methods. This section describes how to do that.
4.6.1. Customizing Individual Repositories
To enrich a repository with custom functionality, you must first define a fragment interface and an implementation for the custom functionality, as follows:
interface CustomizedUserRepository {
void someCustomMethod(User user);
}
class CustomizedUserRepositoryImpl implements CustomizedUserRepository {
public void someCustomMethod(User user) {
// Your custom implementation
}
}
The most important part of the class name that corresponds to the fragment interface is the Impl postfix.
The implementation itself does not depend on Spring Data and can be a regular Spring bean.
Consequently, you can use standard dependency injection behavior to inject references to other beans (such as a JdbcTemplate), take part in aspects, and so on.
Then you can let your repository interface extend the fragment interface, as follows:
interface UserRepository extends CrudRepository<User, Long>, CustomizedUserRepository {
// Declare query methods here
}
Extending the fragment interface with your repository interface combines the CRUD and custom functionality and makes it available to clients.
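Clients of the repository can then call the custom method alongside the CRUD methods, as in this sketch (a no-argument User constructor is assumed):
UserRepository repository = … // get access to a bean
repository.someCustomMethod(new User());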
Spring Data repositories are implemented by using fragments that form a repository composition. Fragments are the base repository, functional aspects (such as QueryDsl), and custom interfaces along with their implementations. Each time you add an interface to your repository interface, you enhance the composition by adding a fragment. The base repository and repository aspect implementations are provided by each Spring Data module.
The following example shows custom interfaces and their implementations:
interface HumanRepository {
void someHumanMethod(User user);
}
class HumanRepositoryImpl implements HumanRepository {
public void someHumanMethod(User user) {
// Your custom implementation
}
}
interface ContactRepository {
void someContactMethod(User user);
User anotherContactMethod(User user);
}
class ContactRepositoryImpl implements ContactRepository {
public void someContactMethod(User user) {
// Your custom implementation
}
public User anotherContactMethod(User user) {
// Your custom implementation
}
}
The following example shows the interface for a custom repository that extends CrudRepository:
interface UserRepository extends CrudRepository<User, Long>, HumanRepository, ContactRepository {
// Declare query methods here
}
Repositories may be composed of multiple custom implementations that are imported in the order of their declaration. Custom implementations have a higher priority than the base implementation and repository aspects. This ordering lets you override base repository and aspect methods and resolves ambiguity if two fragments contribute the same method signature. Repository fragments are not limited to use in a single repository interface. Multiple repositories may use a fragment interface, letting you reuse customizations across different repositories.
The following example shows a repository fragment and its implementation:
save(…)
interface CustomizedSave<T> {
<S extends T> S save(S entity);
}
class CustomizedSaveImpl<T> implements CustomizedSave<T> {
public <S extends T> S save(S entity) {
// Your custom implementation
}
}
The following example shows a repository that uses the preceding repository fragment:
interface UserRepository extends CrudRepository<User, Long>, CustomizedSave<User> {
}
interface PersonRepository extends CrudRepository<Person, Long>, CustomizedSave<Person> {
}
Configuration
The repository infrastructure tries to autodetect custom implementation fragments by scanning for classes below the package in which it found a repository.
These classes need to follow the naming convention of appending the namespace element’s repository-impl-postfix attribute to the fragment interface name.
This postfix defaults to Impl.
The following example shows a repository that uses the default postfix and a repository that sets a custom value for the postfix:
<repositories base-package="com.acme.repository" />
<repositories base-package="com.acme.repository" repository-impl-postfix="MyPostfix" />
The first configuration in the preceding example tries to look up a class called com.acme.repository.CustomizedUserRepositoryImpl to act as a custom repository implementation.
The second example tries to look up com.acme.repository.CustomizedUserRepositoryMyPostfix.
Resolution of Ambiguity
If multiple implementations with matching class names are found in different packages, Spring Data uses the bean names to identify which one to use.
Given the following two custom implementations for the CustomizedUserRepository shown earlier, the first implementation is used.
Its bean name is customizedUserRepositoryImpl, which matches that of the fragment interface (CustomizedUserRepository) plus the postfix Impl.
package com.acme.impl.one;
class CustomizedUserRepositoryImpl implements CustomizedUserRepository {
// Your custom implementation
}
package com.acme.impl.two;
@Component("specialCustomImpl")
class CustomizedUserRepositoryImpl implements CustomizedUserRepository {
// Your custom implementation
}
If you annotate the UserRepository interface with @Component("specialCustom"), the bean name plus Impl then matches the one defined for the repository implementation in com.acme.impl.two, and it is used instead of the first one.
Manual Wiring
If your custom implementation uses annotation-based configuration and autowiring only, the preceding approach works well, because it is treated as any other Spring bean. If your implementation fragment bean needs special wiring, you can declare the bean and name it according to the conventions described in the preceding section. The infrastructure then refers to the manually defined bean definition by name instead of creating one itself. The following example shows how to manually wire a custom implementation:
<repositories base-package="com.acme.repository" />
<beans:bean id="userRepositoryImpl" class="…">
<!-- further configuration -->
</beans:bean>
4.6.2. Customize the Base Repository
The approach described in the preceding section requires customization of each repository interface when you want to customize the base repository behavior so that all repositories are affected. To instead change behavior for all repositories, you can create an implementation that extends the persistence technology-specific repository base class. This class then acts as a custom base class for the repository proxies, as shown in the following example:
class MyRepositoryImpl<T, ID>
extends SimpleJpaRepository<T, ID> {
private final EntityManager entityManager;
MyRepositoryImpl(JpaEntityInformation entityInformation,
EntityManager entityManager) {
super(entityInformation, entityManager);
// Keep the EntityManager around to use it from the newly introduced methods.
this.entityManager = entityManager;
}
@Transactional
public <S extends T> S save(S entity) {
// implementation goes here
}
}
The class needs to have a constructor of the super class which the store-specific repository factory implementation uses.
If the repository base class has multiple constructors, override the one taking an EntityInformation plus a store-specific infrastructure object (such as an EntityManager or a template class).
The final step is to make the Spring Data infrastructure aware of the customized repository base class.
In Java configuration, you can do so by using the repositoryBaseClass
attribute of the @Enable${store}Repositories
annotation, as shown in the following example:
@Configuration
@EnableJpaRepositories(repositoryBaseClass = MyRepositoryImpl.class)
class ApplicationConfiguration { … }
A corresponding attribute is available in the XML namespace, as shown in the following example:
<repositories base-package="com.acme.repository"
base-class="….MyRepositoryImpl" />
4.7. Publishing Events from Aggregate Roots
Entities managed by repositories are aggregate roots.
In a Domain-Driven Design application, these aggregate roots usually publish domain events.
Spring Data provides an annotation called @DomainEvents that you can use on a method of your aggregate root to make that publication as easy as possible, as shown in the following example:
class AnAggregateRoot {
@DomainEvents (1)
Collection<Object> domainEvents() {
// … return events you want to get published here
}
@AfterDomainEventPublication (2)
void callbackMethod() {
// … potentially clean up domain events list
}
}
1 | The method that uses @DomainEvents can return either a single event instance or a collection of events.
It must not take any arguments. |
2 | After all events have been published, we have a method annotated with @AfterDomainEventPublication .
You can use it to potentially clean the list of events to be published (among other uses). |
The methods are called every time one of a Spring Data repository’s save(…), saveAll(…), delete(…), or deleteAll(…) methods is called.
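On the consuming side, the published events can be handled with a regular Spring event listener, for example (the AnEvent type is hypothetical):
@Component
class AnEventListener {

  @EventListener
  void onEvent(AnEvent event) {
    // react to the event published while save(…) was running
  }
}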
4.8. Spring Data Extensions
This section documents a set of Spring Data extensions that enable Spring Data usage in a variety of contexts. Currently, most of the integration is targeted towards Spring MVC.
4.8.1. Querydsl Extension
Querydsl is a framework that enables the construction of statically typed SQL-like queries through its fluent API.
Several Spring Data modules offer integration with Querydsl through QuerydslPredicateExecutor, as the following example shows:
public interface QuerydslPredicateExecutor<T> {
Optional<T> findById(Predicate predicate); (1)
Iterable<T> findAll(Predicate predicate); (2)
long count(Predicate predicate); (3)
boolean exists(Predicate predicate); (4)
// … more functionality omitted.
}
1 | Finds and returns a single entity matching the Predicate . |
2 | Finds and returns all entities matching the Predicate . |
3 | Returns the number of entities matching the Predicate . |
4 | Returns whether an entity that matches the Predicate exists. |
To use the Querydsl support, extend QuerydslPredicateExecutor on your repository interface, as the following example shows:
interface UserRepository extends CrudRepository<User, Long>, QuerydslPredicateExecutor<User> {
}
The preceding example lets you write type-safe queries by using Querydsl Predicate instances, as the following example shows:
Predicate predicate = user.firstname.equalsIgnoreCase("dave")
.and(user.lastname.startsWithIgnoreCase("mathews"));
userRepository.findAll(predicate);
4.8.2. Web support
Spring Data modules that support the repository programming model ship with a variety of web support.
The web-related components require Spring MVC JARs to be on the classpath.
Some of them even provide integration with Spring HATEOAS.
In general, the integration support is enabled by using the @EnableSpringDataWebSupport annotation in your JavaConfig configuration class, as the following example shows:
@Configuration
@EnableWebMvc
@EnableSpringDataWebSupport
class WebConfiguration {}
The @EnableSpringDataWebSupport annotation registers a few components.
We discuss those later in this section.
It also detects Spring HATEOAS on the classpath and registers integration components (if present) for it as well.
Alternatively, if you use XML configuration, register either SpringDataWebConfiguration or HateoasAwareSpringDataWebConfiguration as Spring beans, as the following example shows (for SpringDataWebConfiguration):
<bean class="org.springframework.data.web.config.SpringDataWebConfiguration" />
<!-- If you use Spring HATEOAS, register this one *instead* of the former -->
<bean class="org.springframework.data.web.config.HateoasAwareSpringDataWebConfiguration" />
Basic Web Support
The configuration shown in the previous section registers a few basic components:
-
A DomainClassConverter
to let Spring MVC resolve instances of repository-managed domain classes from request parameters or path variables. -
HandlerMethodArgumentResolver
implementations to let Spring MVC resolvePageable
andSort
instances from request parameters. -
Jackson Modules to de-/serialize types like
Point
andDistance
, or store specific ones, depending on the Spring Data Module used.
Using the DomainClassConverter
Class
The DomainClassConverter
class lets you use domain types in your Spring MVC controller method signatures directly so that you need not manually look up the instances through the repository, as the following example shows:
@Controller
@RequestMapping("/users")
class UserController {
@RequestMapping("/{id}")
String showUserForm(@PathVariable("id") User user, Model model) {
model.addAttribute("user", user);
return "userForm";
}
}
The method receives a User
instance directly, and no further lookup is necessary.
The instance is resolved by letting Spring MVC first convert the path variable into the id
type of the domain class and then accessing the instance by calling findById(…)
on the repository registered for the domain type.
Currently, the repository has to implement CrudRepository to be eligible to be discovered for conversion.
|
HandlerMethodArgumentResolvers for Pageable and Sort
The configuration snippet shown in the previous section also registers a PageableHandlerMethodArgumentResolver
as well as an instance of SortHandlerMethodArgumentResolver
.
The registration enables Pageable
and Sort
as valid controller method arguments, as the following example shows:
@Controller
@RequestMapping("/users")
class UserController {
private final UserRepository repository;
UserController(UserRepository repository) {
this.repository = repository;
}
@RequestMapping
String showUsers(Model model, Pageable pageable) {
model.addAttribute("users", repository.findAll(pageable));
return "users";
}
}
The preceding method signature causes Spring MVC to try to derive a Pageable
instance from the request parameters by using the following default configuration:
page | Page you want to retrieve. 0-indexed and defaults to 0. |
size | Size of the page you want to retrieve. Defaults to 20. |
sort | Properties that should be sorted by in the format property,property(,ASC|DESC)(,IgnoreCase). Default sort direction is case-sensitive ascending. Use multiple sort parameters if you want to switch direction or case sensitivity, for example ?sort=firstname&sort=lastname,asc&sort=city,ignorecase. |
To customize this behavior, register a bean that implements the PageableHandlerMethodArgumentResolverCustomizer
interface or the SortHandlerMethodArgumentResolverCustomizer
interface, respectively.
Its customize()
method gets called, letting you change settings, as the following example shows:
@Bean SortHandlerMethodArgumentResolverCustomizer sortCustomizer() {
return s -> s.setPropertyDelimiter("<-->");
}
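A corresponding customizer is available for the Pageable resolver. The following sketch uses setters that exist on PageableHandlerMethodArgumentResolver; the concrete values are only illustrative:
@Bean PageableHandlerMethodArgumentResolverCustomizer pageableCustomizer() {
  return p -> {
    p.setOneIndexedParameters(true); // expose one-indexed page numbers to clients
    p.setMaxPageSize(100);           // cap the page size accepted from the request
  };
}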
If setting the properties of an existing MethodArgumentResolver
is not sufficient for your purpose, extend either SpringDataWebConfiguration
or the HATEOAS-enabled equivalent, override the pageableResolver()
or sortResolver()
methods, and import your customized configuration file instead of using the @Enable
annotation.
If you need multiple Pageable
or Sort
instances to be resolved from the request (for multiple tables, for example), you can use Spring’s @Qualifier
annotation to distinguish one from another.
The request parameters then have to be prefixed with ${qualifier}_
.
The following example shows the resulting method signature:
String showUsers(Model model,
@Qualifier("thing1") Pageable first,
@Qualifier("thing2") Pageable second) { … }
You have to populate thing1_page
, thing2_page
, and so on.
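For example, a request resolving both Pageable instances might look like this (the parameter values are purely illustrative):
GET /users?thing1_page=0&thing1_size=10&thing2_page=1&thing2_size=20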
The default Pageable
passed into the method is equivalent to a PageRequest.of(0, 20)
, but you can customize it by using the @PageableDefault
annotation on the Pageable
parameter.
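The following sketch changes the fallback to a page size of 30 sorted by lastname; the concrete values are illustrative:
@RequestMapping
String showUsers(Model model,
    @PageableDefault(size = 30, sort = "lastname") Pageable pageable) {
  model.addAttribute("users", repository.findAll(pageable));
  return "users";
}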
Hypermedia Support for Pageables
Spring HATEOAS ships with a representation model class (PagedResources
) that allows enriching the content of a Page
instance with the necessary Page
metadata as well as links to let the clients easily navigate the pages.
The conversion of a Page
to a PagedResources
is done by an implementation of the Spring HATEOAS ResourceAssembler
interface, called the PagedResourcesAssembler
.
The following example shows how to use a PagedResourcesAssembler
as a controller method argument:
@Controller
class PersonController {
@Autowired PersonRepository repository;
@RequestMapping(value = "/persons", method = RequestMethod.GET)
HttpEntity<PagedResources<Person>> persons(Pageable pageable,
PagedResourcesAssembler assembler) {
Page<Person> persons = repository.findAll(pageable);
return new ResponseEntity<>(assembler.toResources(persons), HttpStatus.OK);
}
}
Enabling the configuration, as shown in the preceding example, lets the PagedResourcesAssembler
be used as a controller method argument.
Calling toResources(…)
on it has the following effects:
-
The content of the
Page
becomes the content of thePagedResources
instance. -
The
PagedResources
object gets aPageMetadata
instance attached, and it is populated with information from thePage
and the underlyingPageRequest
. -
The
PagedResources
may getprev
andnext
links attached, depending on the page’s state. The links point to the URI to which the method maps. The pagination parameters added to the method match the setup of thePageableHandlerMethodArgumentResolver
to make sure the links can be resolved later.
Assume we have 30 Person
instances in the database.
You can now trigger a request (GET http://localhost:8080/persons
) and see output similar to the following:
{ "links" : [ { "rel" : "next",
"href" : "http://localhost:8080/persons?page=1&size=20" }
],
"content" : [
… // 20 Person instances rendered here
],
"pageMetadata" : {
"size" : 20,
"totalElements" : 30,
"totalPages" : 2,
"number" : 0
}
}
The assembler produced the correct URI and also picked up the default configuration to resolve the parameters into a Pageable
for an upcoming request.
This means that, if you change that configuration, the links automatically adhere to the change.
By default, the assembler points to the controller method it was invoked in, but you can customize that by passing a custom Link
to be used as the base for building the pagination links to an overload of the PagedResourcesAssembler.toResource(…)
method.
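A minimal sketch of that overload, assuming the Spring HATEOAS generation that ships PagedResources as shown above (the /people link is illustrative):
Page<Person> persons = repository.findAll(pageable);
// Build the pagination links relative to a custom base URI instead of the invoked controller method.
Link base = new Link("/people");
PagedResources<Resource<Person>> model = assembler.toResource(persons, base);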
Spring Data Jackson Modules
The core module, and some of the store specific ones, ship with a set of Jackson Modules for types, like org.springframework.data.geo.Distance
and org.springframework.data.geo.Point
, used by the Spring Data domain.
Those Modules are imported once web support is enabled and com.fasterxml.jackson.databind.ObjectMapper
is available.
During initialization SpringDataJacksonModules
, like the SpringDataJacksonConfiguration
, get picked up by the infrastructure, so that the declared com.fasterxml.jackson.databind.Module
s are made available to the Jackson ObjectMapper
.
Data binding mixins for the following domain types are registered by the common infrastructure.
org.springframework.data.geo.Distance
org.springframework.data.geo.Point
org.springframework.data.geo.Box
org.springframework.data.geo.Circle
org.springframework.data.geo.Polygon
The individual module may provide additional SpringDataJacksonModules. |
Web Databinding Support
You can use Spring Data projections (described in Projections) to bind incoming request payloads by using either JSONPath expressions (requires Jayway JsonPath) or XPath expressions (requires XmlBeam), as the following example shows:
@ProjectedPayload
public interface UserPayload {
@XBRead("//firstname")
@JsonPath("$..firstname")
String getFirstname();
@XBRead("/lastname")
@JsonPath({ "$.lastname", "$.user.lastname" })
String getLastname();
}
You can use the type shown in the preceding example as a Spring MVC handler method argument or by using ParameterizedTypeReference
on one of the methods of the RestTemplate
.
The preceding method declarations would try to find firstname
anywhere in the given document.
The lastname
XML lookup is performed on the top-level of the incoming document.
The JSON variant of that tries a top-level lastname
first but also tries lastname
nested in a user
sub-document if the former does not return a value.
That way, changes in the structure of the source document can be mitigated easily without having clients calling the exposed methods (usually a drawback of class-based payload binding).
Nested projections are supported as described in Projections.
If the method returns a complex, non-interface type, a Jackson ObjectMapper
is used to map the final value.
For Spring MVC, the necessary converters are registered automatically as soon as @EnableSpringDataWebSupport
is active and the required dependencies are available on the classpath.
For usage with RestTemplate
, register a ProjectingJackson2HttpMessageConverter
(JSON) or XmlBeamHttpMessageConverter
manually.
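A sketch of such a manual registration for the JSON case; the ObjectMapper instance and the target URL are illustrative:
ObjectMapper mapper = new ObjectMapper();
RestTemplate template = new RestTemplate();
// Register the projecting converter ahead of the default Jackson converter
// so that @ProjectedPayload interfaces such as UserPayload are handled by it.
template.getMessageConverters().add(0, new ProjectingJackson2HttpMessageConverter(mapper));
UserPayload payload = template.getForObject("https://example.com/users/1", UserPayload.class);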
For more information, see the web projection example in the canonical Spring Data Examples repository.
Querydsl Web Support
For those stores that have QueryDSL integration, you can derive queries from the attributes contained in a Request
query string.
Consider the following query string:
?firstname=Dave&lastname=Matthews
Given the User
object from the previous examples, you can resolve a query string to the following value by using the QuerydslPredicateArgumentResolver
, as follows:
QUser.user.firstname.eq("Dave").and(QUser.user.lastname.eq("Matthews"))
The feature is automatically enabled, along with @EnableSpringDataWebSupport , when Querydsl is found on the classpath.
|
Adding a @QuerydslPredicate
to the method signature provides a ready-to-use Predicate
, which you can run by using the QuerydslPredicateExecutor
.
Type information is typically resolved from the method’s return type.
Since that information does not necessarily match the domain type, it might be a good idea to use the root attribute of QuerydslPredicate .
|
The following example shows how to use @QuerydslPredicate
in a method signature:
@Controller
class UserController {
@Autowired UserRepository repository;
@RequestMapping(value = "/", method = RequestMethod.GET)
String index(Model model, @QuerydslPredicate(root = User.class) Predicate predicate, (1)
Pageable pageable, @RequestParam MultiValueMap<String, String> parameters) {
model.addAttribute("users", repository.findAll(predicate, pageable));
return "index";
}
}
1 | Resolve query string arguments to matching Predicate for User . |
The default binding is as follows:
-
Object
on simple properties aseq
. -
Object
on collection like properties ascontains
. -
Collection
on simple properties asin
.
You can customize those bindings through the bindings
attribute of @QuerydslPredicate
or by making use of Java 8 default methods
and adding the QuerydslBinderCustomizer
method to the repository interface, as follows:
interface UserRepository extends CrudRepository<User, String>,
QuerydslPredicateExecutor<User>, (1)
QuerydslBinderCustomizer<QUser> { (2)
@Override
default void customize(QuerydslBindings bindings, QUser user) {
bindings.bind(user.username).first((path, value) -> path.contains(value)); (3)
bindings.bind(String.class)
.first((StringPath path, String value) -> path.containsIgnoreCase(value)); (4)
bindings.excluding(user.password); (5)
}
}
1 | QuerydslPredicateExecutor provides access to specific finder methods for Predicate . |
2 | QuerydslBinderCustomizer defined on the repository interface is automatically picked up and shortcuts @QuerydslPredicate(bindings=…) . |
3 | Define the binding for the username property to be a simple contains binding. |
4 | Define the default binding for String properties to be a case-insensitive contains match. |
5 | Exclude the password property from Predicate resolution. |
You can register a QuerydslBinderCustomizerDefaults bean holding default Querydsl bindings before applying specific bindings from the repository or @QuerydslPredicate .
|
4.8.3. Repository Populators
If you work with the Spring JDBC module, you are probably familiar with the support for populating a DataSource
with SQL scripts.
A similar abstraction is available on the repositories level, although it does not use SQL as the data definition language because it must be store-independent.
Thus, the populators support XML (through Spring’s OXM abstraction) and JSON (through Jackson) to define data with which to populate the repositories.
Assume you have a file called data.json
with the following content:
[ { "_class" : "com.acme.Person",
"firstname" : "Dave",
"lastname" : "Matthews" },
{ "_class" : "com.acme.Person",
"firstname" : "Carter",
"lastname" : "Beauford" } ]
You can populate your repositories by using the populator elements of the repository namespace provided in Spring Data Commons.
To populate the preceding data to your PersonRepository
, declare a populator similar to the following:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:repository="http://www.springframework.org/schema/data/repository"
xsi:schemaLocation="http://www.springframework.org/schema/beans
https://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/data/repository
https://www.springframework.org/schema/data/repository/spring-repository.xsd">
<repository:jackson2-populator locations="classpath:data.json" />
</beans>
The preceding declaration causes the data.json
file to be read and deserialized by a Jackson ObjectMapper
.
The type to which the JSON object is unmarshalled is determined by inspecting the _class
attribute of the JSON document.
The infrastructure eventually selects the appropriate repository to handle the object that was deserialized.
To instead use XML to define the data the repositories should be populated with, you can use the unmarshaller-populator
element.
You configure it to use one of the XML marshaller options available in Spring OXM. See the Spring reference documentation for details.
The following example shows how to unmarshall a repository populator with JAXB:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:repository="http://www.springframework.org/schema/data/repository"
xmlns:oxm="http://www.springframework.org/schema/oxm"
xsi:schemaLocation="http://www.springframework.org/schema/beans
https://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/data/repository
https://www.springframework.org/schema/data/repository/spring-repository.xsd
http://www.springframework.org/schema/oxm
https://www.springframework.org/schema/oxm/spring-oxm.xsd">
<repository:unmarshaller-populator locations="classpath:data.xml"
unmarshaller-ref="unmarshaller" />
<oxm:jaxb2-marshaller contextPath="com.acme" />
</beans>
5. Reference Documentation
5.1. JPA Repositories
This chapter points out the specialties for repository support for JPA. This builds on the core repository support explained in “Working with Spring Data Repositories”. Make sure you have a sound understanding of the basic concepts explained there.
5.1.1. Introduction
This section describes the basics of configuring Spring Data JPA through either:
-
“Spring Namespace” (XML configuration)
-
“Annotation-based Configuration” (Java configuration)
Spring Namespace
The JPA module of Spring Data contains a custom namespace that allows defining repository beans. It also contains certain features and element attributes that are special to JPA. Generally, the JPA repositories can be set up by using the repositories
element, as shown in the following example:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:jpa="http://www.springframework.org/schema/data/jpa"
xsi:schemaLocation="http://www.springframework.org/schema/beans
https://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/data/jpa
https://www.springframework.org/schema/data/jpa/spring-jpa.xsd">
<jpa:repositories base-package="com.acme.repositories" />
</beans>
Using the repositories
element looks up Spring Data repositories as described in “Creating Repository Instances”. Beyond that, it activates persistence exception translation for all beans annotated with @Repository
, to let exceptions thrown by the JPA persistence providers be converted into Spring’s DataAccessException
hierarchy.
Custom Namespace Attributes
Beyond the default attributes of the repositories
element, the JPA namespace offers additional attributes to let you gain more detailed control over the setup of the repositories:
entity-manager-factory-ref | Explicitly wire the EntityManagerFactory to be used with the repositories being detected by the repositories element. Usually used if multiple EntityManagerFactory beans are used within the application. If not configured, Spring Data automatically looks up the EntityManagerFactory bean with the name entityManagerFactory in the ApplicationContext. |
transaction-manager-ref | Explicitly wire the PlatformTransactionManager to be used with the repositories being detected by the repositories element. Usually only necessary if multiple transaction managers or EntityManagerFactory beans have been configured. Defaults to a single defined PlatformTransactionManager inside the current ApplicationContext. |
Spring Data JPA requires a PlatformTransactionManager bean named transactionManager to be present if no explicit transaction-manager-ref is defined.
|
Annotation-based Configuration
The Spring Data JPA repositories support can be activated not only through an XML namespace but also by using an annotation through JavaConfig, as shown in the following example:
@Configuration
@EnableJpaRepositories
@EnableTransactionManagement
class ApplicationConfig {
@Bean
public DataSource dataSource() {
EmbeddedDatabaseBuilder builder = new EmbeddedDatabaseBuilder();
return builder.setType(EmbeddedDatabaseType.HSQL).build();
}
@Bean
public LocalContainerEntityManagerFactoryBean entityManagerFactory() {
HibernateJpaVendorAdapter vendorAdapter = new HibernateJpaVendorAdapter();
vendorAdapter.setGenerateDdl(true);
LocalContainerEntityManagerFactoryBean factory = new LocalContainerEntityManagerFactoryBean();
factory.setJpaVendorAdapter(vendorAdapter);
factory.setPackagesToScan("com.acme.domain");
factory.setDataSource(dataSource());
return factory;
}
@Bean
public PlatformTransactionManager transactionManager(EntityManagerFactory entityManagerFactory) {
JpaTransactionManager txManager = new JpaTransactionManager();
txManager.setEntityManagerFactory(entityManagerFactory);
return txManager;
}
}
You must create LocalContainerEntityManagerFactoryBean and not EntityManagerFactory directly, since the former also participates in exception translation mechanisms in addition to creating EntityManagerFactory .
|
The preceding configuration class sets up an embedded HSQL database by using the EmbeddedDatabaseBuilder
API of spring-jdbc
. Spring Data then sets up an EntityManagerFactory
and uses Hibernate as the sample persistence provider. The last infrastructure component declared here is the JpaTransactionManager
. Finally, the example activates Spring Data JPA repositories by using the @EnableJpaRepositories
annotation, which essentially carries the same attributes as the XML namespace. If no base package is configured, it uses the one in which the configuration class resides.
Bootstrap Mode
By default, Spring Data JPA repositories are default Spring beans.
They are singleton scoped and eagerly initialized.
During startup, they already interact with the JPA EntityManager
for verification and metadata analysis purposes.
Spring Framework supports the initialization of the JPA EntityManagerFactory
in a background thread because that process usually takes up a significant amount of startup time in a Spring application.
To make use of that background initialization effectively, we need to make sure that JPA repositories are initialized as late as possible.
As of Spring Data JPA 2.1 you can configure a BootstrapMode
(either via the @EnableJpaRepositories
annotation or the XML namespace) that takes the following values (a configuration sketch follows the list):
-
DEFAULT
(default) — Repositories are instantiated eagerly unless explicitly annotated with@Lazy
. The lazification only has effect if no client bean needs an instance of the repository as that will require the initialization of the repository bean. -
LAZY
— Implicitly declares all repository beans lazy and also causes lazy initialization proxies to be created for injection into client beans. That means that repositories do not get instantiated if the client bean simply stores the instance in a field without making use of the repository during initialization. Repository instances are initialized and verified upon first interaction with the repository. -
DEFERRED
— Fundamentally the same mode of operation as LAZY
, but triggering repository initialization in response to a ContextRefreshedEvent
so that repositories are verified before the application has completely started.
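For example, deferred bootstrapping can be selected through JavaConfig as sketched below (the configuration class name is illustrative):
@Configuration
@EnableJpaRepositories(bootstrapMode = BootstrapMode.DEFERRED)
class DeferredBootstrapConfiguration {
  // EntityManagerFactory, transaction manager, and DataSource beans as shown earlier
}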
Recommendations
If you’re not using asynchronous JPA bootstrap, stick with the default bootstrap mode.
In case you bootstrap JPA asynchronously, DEFERRED
is a reasonable default as it will make sure the Spring Data JPA bootstrap only waits for the EntityManagerFactory
setup if that itself takes longer than initializing all other application components.
Still, it makes sure that repositories are properly initialized and validated before the application signals it’s up.
LAZY
is a decent choice for testing scenarios and local development.
Once you are pretty sure that repositories can properly bootstrap, or in cases where you are testing other parts of the application, running verification for all repositories might unnecessarily increase the startup time.
The same applies to local development in which you only access parts of the application that might need to have a single repository initialized.
5.1.2. Persisting Entities
This section describes how to persist (save) entities with Spring Data JPA.
Saving Entities
Saving an entity can be performed with the CrudRepository.save(…)
method. It persists or merges the given entity by using the underlying JPA EntityManager
. If the entity has not yet been persisted, Spring Data JPA saves the entity with a call to the entityManager.persist(…)
method. Otherwise, it calls the entityManager.merge(…)
method.
Entity State-detection Strategies
Spring Data JPA offers the following strategies to detect whether an entity is new or not:
-
Version-Property and Id-Property inspection (default): By default Spring Data JPA inspects first if there is a Version-property of non-primitive type. If there is, the entity is considered new if the value of that property is
null
. Without such a Version-property Spring Data JPA inspects the identifier property of the given entity. If the identifier property isnull
, then the entity is assumed to be new. Otherwise, it is assumed to be not new. -
Implementing
Persistable
: If an entity implementsPersistable
, Spring Data JPA delegates the new detection to theisNew(…)
method of the entity. See the JavaDoc for details. -
Implementing
EntityInformation
: You can customize theEntityInformation
abstraction used in theSimpleJpaRepository
implementation by creating a subclass ofJpaRepositoryFactory
and overriding thegetEntityInformation(…)
method accordingly. You then have to register the custom implementation ofJpaRepositoryFactory
as a Spring bean. Note that this should be rarely necessary. See the JavaDoc for details.
Option 1 is not an option for entities that use manually assigned identifiers and no version attribute, since with those the identifier is always non-null
.
A common pattern in that scenario is to use a common base class with a transient flag that defaults to indicating a new instance, and to use JPA lifecycle callbacks to flip that flag on persistence operations:
@MappedSuperclass
public abstract class AbstractEntity<ID> implements Persistable<ID> {
@Transient
private boolean isNew = true; (1)
@Override
public boolean isNew() {
return isNew; (2)
}
@PrePersist (3)
@PostLoad
void markNotNew() {
this.isNew = false;
}
// More code…
}
1 | Declare a flag to hold the new state. Transient so that it’s not persisted to the database. |
2 | Return the flag in the implementation of Persistable.isNew() so that Spring Data repositories know whether to call EntityManager.persist() or ….merge() . |
3 | Declare a method using JPA entity callbacks so that the flag is switched to indicate an existing entity after a repository call to save(…) or an instance creation by the persistence provider. |
5.1.3. Query Methods
This section describes the various ways to create a query with Spring Data JPA.
Query Lookup Strategies
The JPA module supports defining a query manually as a String or having it derived from the method name.
For derived queries with the predicates IsStartingWith
, StartingWith
, StartsWith
, IsEndingWith
, EndingWith
, EndsWith
,
IsNotContaining
, NotContaining
, NotContains
, IsContaining
, Containing
, Contains
the respective arguments for these queries are sanitized.
This means that if the arguments actually contain characters recognized by LIKE
as wildcards, these are escaped so that they match only as literals.
The escape character used can be configured by setting the escapeCharacter
of the @EnableJpaRepositories
annotation.
Compare with Using SpEL Expressions.
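A minimal sketch of such a configuration, assuming backslash is the desired escape character:
@Configuration
@EnableJpaRepositories(escapeCharacter = '\\')
class JpaConfiguration {
  // remaining infrastructure beans as shown in the configuration examples above
}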
Declared Queries
Although getting a query derived from the method name is quite convenient, one might face the situation in which either the method name parser does not support the keyword one wants to use or the method name would get unnecessarily ugly. So you can either use JPA named queries through a naming convention (see Using JPA Named Queries for more information) or rather annotate your query method with @Query
(see Using @Query
for details).
Query Creation
Generally, the query creation mechanism for JPA works as described in “Query Methods”. The following example shows what a JPA query method translates into:
public interface UserRepository extends Repository<User, Long> {
  List<User> findByEmailAddressAndLastname(String emailAddress, String lastname);
}
We create a query using the JPA criteria API from this, but, essentially, this translates into the following query: select u from User u where u.emailAddress = ?1 and u.lastname = ?2
. Spring Data JPA does a property check and traverses nested properties, as described in “Property Expressions”.
The following table describes the keywords supported for JPA and what a method containing that keyword translates to:
Keyword | Sample | JPQL snippet |
---|---|---|
Distinct | findDistinctByLastnameAndFirstname | select distinct … where x.lastname = ?1 and x.firstname = ?2 |
And | findByLastnameAndFirstname | … where x.lastname = ?1 and x.firstname = ?2 |
Or | findByLastnameOrFirstname | … where x.lastname = ?1 or x.firstname = ?2 |
Is, Equals | findByFirstname, findByFirstnameIs, findByFirstnameEquals | … where x.firstname = ?1 |
Between | findByStartDateBetween | … where x.startDate between ?1 and ?2 |
LessThan | findByAgeLessThan | … where x.age < ?1 |
LessThanEqual | findByAgeLessThanEqual | … where x.age <= ?1 |
GreaterThan | findByAgeGreaterThan | … where x.age > ?1 |
GreaterThanEqual | findByAgeGreaterThanEqual | … where x.age >= ?1 |
After | findByStartDateAfter | … where x.startDate > ?1 |
Before | findByStartDateBefore | … where x.startDate < ?1 |
IsNull, Null | findByAge(Is)Null | … where x.age is null |
IsNotNull, NotNull | findByAge(Is)NotNull | … where x.age not null |
Like | findByFirstnameLike | … where x.firstname like ?1 |
NotLike | findByFirstnameNotLike | … where x.firstname not like ?1 |
StartingWith | findByFirstnameStartingWith | … where x.firstname like ?1 (parameter bound with appended %) |
EndingWith | findByFirstnameEndingWith | … where x.firstname like ?1 (parameter bound with prepended %) |
Containing | findByFirstnameContaining | … where x.firstname like ?1 (parameter bound wrapped in %) |
OrderBy | findByAgeOrderByLastnameDesc | … where x.age = ?1 order by x.lastname desc |
Not | findByLastnameNot | … where x.lastname <> ?1 |
In | findByAgeIn(Collection<Age> ages) | … where x.age in ?1 |
NotIn | findByAgeNotIn(Collection<Age> ages) | … where x.age not in ?1 |
True | findByActiveTrue() | … where x.active = true |
False | findByActiveFalse() | … where x.active = false |
IgnoreCase | findByFirstnameIgnoreCase | … where UPPER(x.firstname) = UPPER(?1) |
In and NotIn also take any subclass of Collection as a parameter as well as arrays or varargs. For other syntactical versions of the same logical operator, check “Repository query keywords”.
|
However, that latter query would narrow the focus to just User.lastname
and find all unique last names for that table.
What is the point of this query anyway? To find the number of people with a given last name? To find the number of distinct people with that binding last name?
To find the number of distinct last names? (That last one is an entirely different query!)
Using Distinct sometimes requires writing the query by hand and using @Query to best capture the information you seek, since you also may need a result projection that captures what is distinct. |
Using JPA Named Queries
The examples use the <named-query /> element and @NamedQuery annotation. The queries for these configuration elements have to be defined in the JPA query language. Of course, you can use <named-native-query /> or @NamedNativeQuery too. These elements let you define the query in native SQL at the cost of losing database platform independence.
|
XML Named Query Definition
To use XML configuration, add the necessary <named-query />
element to the orm.xml
JPA configuration file located in the META-INF
folder of your classpath. Automatic invocation of named queries is enabled by using some defined naming convention. For more details, see below.
<named-query name="User.findByLastname">
<query>select u from User u where u.lastname = ?1</query>
</named-query>
The query has a special name that is used to resolve it at runtime.
Annotation-based Configuration
Annotation-based configuration has the advantage of not needing another configuration file to be edited, lowering maintenance effort. You pay for that benefit by the need to recompile your domain class for every new query declaration.
@Entity
@NamedQuery(name = "User.findByEmailAddress",
query = "select u from User u where u.emailAddress = ?1")
public class User {
}
Declaring Interfaces
To allow these named queries, specify the UserRepository
as follows:
public interface UserRepository extends JpaRepository<User, Long> {
List<User> findByLastname(String lastname);
User findByEmailAddress(String emailAddress);
}
Spring Data tries to resolve a call to these methods to a named query, starting with the simple name of the configured domain class, followed by the method name separated by a dot. So the preceding example would use the named queries defined earlier instead of trying to create a query from the method name.
Using @Query
Using named queries to declare queries for entities is a valid approach and works fine for a small number of queries. As the queries themselves are tied to the Java method that runs them, you can actually bind them directly by using the Spring Data JPA @Query
annotation rather than annotating them to the domain class. This frees the domain class from persistence specific information and co-locates the query to the repository interface.
Queries annotated to the query method take precedence over queries defined using @NamedQuery
or named queries declared in orm.xml
.
The following example shows a query created with the @Query
annotation:
public interface UserRepository extends JpaRepository<User, Long> {
@Query("select u from User u where u.emailAddress = ?1")
User findByEmailAddress(String emailAddress);
}
Applying a QueryRewriter
Sometimes, no matter how many features you try to apply, it seems impossible to get Spring Data JPA to apply everything
you’d like to a query before it is sent to the EntityManager
.
You have the ability to get your hands on the query, right before it’s sent to the EntityManager
and "rewrite" it. That is,
you can make any alterations at the last moment.
public interface MyRepository extends JpaRepository<User, Long> {
@Query(value = "select original_user_alias.* from SD_USER original_user_alias",
nativeQuery = true,
queryRewriter = MyQueryRewriter.class)
List<User> findByNativeQuery(String param);
@Query(value = "select original_user_alias from User original_user_alias",
queryRewriter = MyQueryRewriter.class)
List<User> findByNonNativeQuery(String param);
}
This example shows both a native (pure SQL) query and a JPQL query, both leveraging the same QueryRewriter
.
In this scenario, Spring Data JPA will look for a bean of the corresponding type registered in the application context.
You can write a query rewriter like this:
public class MyQueryRewriter implements QueryRewriter {
@Override
public String rewrite(String query, Sort sort) {
return query.replaceAll("original_user_alias", "rewritten_user_alias");
}
}
You have to ensure your QueryRewriter
is registered in the application context, whether it’s by applying one of Spring Framework’s
@Component
-based annotations, or having it as part of a @Bean
method inside an @Configuration
class.
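A sketch of the @Bean-style registration, reusing the MyQueryRewriter class from the earlier example:
@Configuration
class QueryRewriterConfiguration {
  @Bean
  QueryRewriter myQueryRewriter() {
    return new MyQueryRewriter();
  }
}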
Another option is to have the repository itself implement the interface.
public interface MyRepository extends JpaRepository<User, Long>, QueryRewriter {
@Query(value = "select original_user_alias.* from SD_USER original_user_alias",
nativeQuery = true,
queryRewriter = MyRepository.class)
List<User> findByNativeQuery(String param);
@Query(value = "select original_user_alias from User original_user_alias",
queryRewriter = MyRepository.class)
List<User> findByNonNativeQuery(String param);
@Override
default String rewrite(String query, Sort sort) {
return query.replaceAll("original_user_alias", "rewritten_user_alias");
}
}
Depending on what you’re doing with your QueryRewriter
, it may be advisable to have more than one, each registered with the
application context.
In a CDI-based environment, Spring Data JPA will search the BeanManager for instances of your implementation of
QueryRewriter .
|
Using Advanced LIKE
Expressions
The query running mechanism for manually defined queries created with @Query
allows the definition of advanced LIKE
expressions inside the query definition, as shown in the following example:
public interface UserRepository extends JpaRepository<User, Long> {
@Query("select u from User u where u.firstname like %?1")
List<User> findByFirstnameEndsWith(String firstname);
}
In the preceding example, the LIKE
delimiter character (%
) is recognized, and the query is transformed into a valid JPQL query (removing the %
). Upon running the query, the parameter passed to the method call gets augmented with the previously recognized LIKE
pattern.
Native Queries
The @Query
annotation allows for running native queries by setting the nativeQuery
flag to true, as shown in the following example:
public interface UserRepository extends JpaRepository<User, Long> {
@Query(value = "SELECT * FROM USERS WHERE EMAIL_ADDRESS = ?1", nativeQuery = true)
User findByEmailAddress(String emailAddress);
}
Spring Data JPA does not currently support dynamic sorting for native queries, because it would have to manipulate the actual query declared, which it cannot do reliably for native SQL. You can, however, use native queries for pagination by specifying the count query yourself, as shown in the following example: |
public interface UserRepository extends JpaRepository<User, Long> {
@Query(value = "SELECT * FROM USERS WHERE LASTNAME = ?1",
countQuery = "SELECT count(*) FROM USERS WHERE LASTNAME = ?1",
nativeQuery = true)
Page<User> findByLastname(String lastname, Pageable pageable);
}
A similar approach also works with named native queries, by adding the .count
suffix to a copy of your query. You probably need to register a result set mapping for your count query, though.
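A sketch of that naming convention is shown below; whether an additional result set mapping is required for the count query depends on your setup:
// The ".count" copy of the named native query backs the total-count calculation for the Page.
@Entity
@NamedNativeQuery(name = "User.findByLastname",
    query = "SELECT * FROM USERS WHERE LASTNAME = ?1",
    resultClass = User.class)
@NamedNativeQuery(name = "User.findByLastname.count",
    query = "SELECT count(*) FROM USERS WHERE LASTNAME = ?1")
public class User {
  // …
}
public interface UserRepository extends JpaRepository<User, Long> {
  Page<User> findByLastname(String lastname, Pageable pageable);
}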
Using Sort
Sorting can be done by either providing a PageRequest
or by using Sort
directly. The properties actually used within the Order
instances of Sort
need to match your domain model, which means they need to resolve to either a property or an alias used within the query. The JPQL defines this as a state field path expression.
Using any non-referenceable path expression leads to an Exception .
|
However, using Sort
together with @Query
lets you sneak in non-path-checked Order
instances containing functions within the ORDER BY
clause. This is possible because the Order
is appended to the given query string. By default, Spring Data JPA rejects any Order
instance containing function calls, but you can use JpaSort.unsafe
to add potentially unsafe ordering.
The following example uses Sort
and JpaSort
, including an unsafe option on JpaSort
:
public interface UserRepository extends JpaRepository<User, Long> {
@Query("select u from User u where u.lastname like ?1%")
List<User> findByAndSort(String lastname, Sort sort);
@Query("select u.id, LENGTH(u.firstname) as fn_len from User u where u.lastname like ?1%")
List<Object[]> findByAsArrayAndSort(String lastname, Sort sort);
}
repo.findByAndSort("lannister", Sort.by("firstname")); (1)
repo.findByAndSort("stark", Sort.by("LENGTH(firstname)")); (2)
repo.findByAndSort("targaryen", JpaSort.unsafe("LENGTH(firstname)")); (3)
repo.findByAsArrayAndSort("bolton", Sort.by("fn_len")); (4)
1 | Valid Sort expression pointing to property in domain model. |
2 | Invalid Sort containing function call. Throws Exception. |
3 | Valid Sort containing explicitly unsafe Order . |
4 | Valid Sort expression pointing to aliased function. |
Using Named Parameters
By default, Spring Data JPA uses position-based parameter binding, as described in all the preceding examples. This makes query methods a little error-prone when refactoring regarding the parameter position. To solve this issue, you can use the @Param
annotation to give a method parameter a concrete name and bind the name in the query, as shown in the following example:
public interface UserRepository extends JpaRepository<User, Long> {
@Query("select u from User u where u.firstname = :firstname or u.lastname = :lastname")
User findByLastnameOrFirstname(@Param("lastname") String lastname,
@Param("firstname") String firstname);
}
The method parameters are switched according to their order in the defined query. |
As of version 4, Spring fully supports Java 8’s parameter name discovery based on the -parameters compiler flag. By using this flag in your build as an alternative to debug information, you can omit the @Param annotation for named parameters.
|
Using SpEL Expressions
As of Spring Data JPA release 1.4, we support the usage of restricted SpEL template expressions in manually defined queries that are defined with @Query
. Upon the query being run, these expressions are evaluated against a predefined set of variables. Spring Data JPA supports a variable called entityName
. Its usage is select x from #{#entityName} x
. It inserts the entityName
of the domain type associated with the given repository. The entityName
is resolved as follows: If the domain type has set the name property on the @Entity
annotation, it is used. Otherwise, the simple class-name of the domain type is used.
The following example demonstrates one use case for the #{#entityName}
expression in a query string where you want to define a repository interface with a query method and a manually defined query:
@Entity
public class User {
@Id
@GeneratedValue
Long id;
String lastname;
}
public interface UserRepository extends JpaRepository<User,Long> {
@Query("select u from #{#entityName} u where u.lastname = ?1")
List<User> findByLastname(String lastname);
}
To avoid stating the actual entity name in the query string of a @Query
annotation, you can use the #{#entityName}
variable.
The entityName can be customized by using the @Entity annotation. Customizations in orm.xml are not supported for the SpEL expressions.
|
Of course, you could have just used User
in the query declaration directly, but that would require you to change the query as well. The reference to #entityName
picks up potential future remappings of the User
class to a different entity name (for example, by using @Entity(name = "MyUser")
).
Another use case for the #{#entityName}
expression in a query string is if you want to define a generic repository interface with specialized repository interfaces for a concrete domain type. To not repeat the definition of custom query methods on the concrete interfaces, you can use the entity name expression in the query string of the @Query
annotation in the generic repository interface, as shown in the following example:
@MappedSuperclass
public abstract class AbstractMappedType {
…
String attribute;
}
@Entity
public class ConcreteType extends AbstractMappedType { … }
@NoRepositoryBean
public interface MappedTypeRepository<T extends AbstractMappedType>
extends Repository<T, Long> {
@Query("select t from #{#entityName} t where t.attribute = ?1")
List<T> findAllByAttribute(String attribute);
}
public interface ConcreteRepository
extends MappedTypeRepository<ConcreteType> { … }
In the preceding example, the MappedTypeRepository
interface is the common parent interface for a few domain types extending AbstractMappedType
. It also defines the generic findAllByAttribute(…)
method, which can be used on instances of the specialized repository interfaces. If you now invoke findAllByAttribute(…)
on ConcreteRepository
, the query becomes select t from ConcreteType t where t.attribute = ?1
.
SpEL expressions may also be used to manipulate method arguments. In these SpEL expressions, the entity name is not available, but the arguments are. They can be accessed by name or index, as demonstrated in the following example.
@Query("select u from User u where u.firstname = ?1 and u.firstname=?#{[0]} and u.emailAddress = ?#{principal.emailAddress}")
List<User> findByFirstnameAndCurrentUserWithCustomQuery(String firstname);
For like
-conditions, one often wants to add %
to the beginning or the end of a String-valued parameter.
This can be done by appending or prefixing a bind parameter marker or a SpEL expression with %
.
Again the following example demonstrates this.
@Query("select u from User u where u.lastname like %:#{[0]}% and u.lastname like %:lastname%")
List<User> findByLastnameWithSpelExpression(@Param("lastname") String lastname);
When using like
-conditions with values coming from an insecure source, the values should be sanitized so that they cannot contain any wildcards and thereby allow attackers to select more data than they should be able to.
For this purpose the escape(String)
method is made available in the SpEL context.
It prefixes all instances of _
and %
in the first argument with the single character from the second argument.
In combination with the escape
clause of the like
expression available in JPQL and standard SQL this allows easy cleaning of bind parameters.
@Query("select u from User u where u.firstname like %?#{escape([0])}% escape ?#{escapeCharacter()}")
List<User> findContainingEscaped(String namePart);
Given this method declaration in a repository interface findContainingEscaped("Peter_")
will find Peter_Parker
but not Peter Parker
.
The escape character used can be configured by setting the escapeCharacter
of the @EnableJpaRepositories
annotation.
Note that the method escape(String)
available in the SpEL context will only escape the SQL and JPQL standard wildcards _
and %
.
If the underlying database or the JPA implementation supports additional wildcards these will not get escaped.
Modifying Queries
All the previous sections describe how to declare queries to access a given entity or collection of entities.
You can add custom modifying behavior by using the custom method facilities described in “Custom Implementations for Spring Data Repositories”.
As this approach is feasible for comprehensive custom functionality, you can modify queries that only need parameter binding by annotating the query method with @Modifying
, as shown in the following example:
@Modifying
@Query("update User u set u.firstname = ?1 where u.lastname = ?2")
int setFixedFirstnameFor(String firstname, String lastname);
Doing so triggers the query annotated to the method as an updating query instead of a selecting one. As the EntityManager
might contain outdated entities after the execution of the modifying query, we do not automatically clear it (see the JavaDoc of EntityManager.clear()
for details), since this effectively drops all non-flushed changes still pending in the EntityManager
.
If you wish the EntityManager
to be cleared automatically, you can set the @Modifying
annotation’s clearAutomatically
attribute to true
.
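For example:
@Modifying(clearAutomatically = true)
@Query("update User u set u.firstname = ?1 where u.lastname = ?2")
int setFixedFirstnameFor(String firstname, String lastname);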
The @Modifying
annotation is only relevant in combination with the @Query
annotation.
Derived query methods or custom methods do not require this annotation.
Derived Delete Queries
Spring Data JPA also supports derived delete queries that let you avoid having to declare the JPQL query explicitly, as shown in the following example:
interface UserRepository extends Repository<User, Long> {
void deleteByRoleId(long roleId);
@Modifying
@Query("delete from User u where u.role.id = ?1")
void deleteInBulkByRoleId(long roleId);
}
Although the deleteByRoleId(…)
method looks like it basically produces the same result as the deleteInBulkByRoleId(…)
, there is an important difference between the two method declarations in terms of the way they are run.
As the name suggests, the latter method issues a single JPQL query (the one defined in the annotation) against the database.
This means even currently loaded instances of User
do not see lifecycle callbacks invoked.
To make sure lifecycle callbacks are actually invoked, an invocation of deleteByRoleId(…)
runs a query and then deletes the returned instances one by one, so that the persistence provider can actually invoke @PreRemove
callbacks on those entities.
In fact, a derived delete query is a shortcut for running the query and then calling CrudRepository.delete(Iterable<User> users)
on the result and keeping behavior in sync with the implementations of other delete(…)
methods in CrudRepository
.
Applying Query Hints
To apply JPA query hints to the queries declared in your repository interface, you can use the @QueryHints
annotation. It takes an array of JPA @QueryHint
annotations plus a boolean flag to potentially disable the hints applied to the additional count query triggered when applying pagination, as shown in the following example:
public interface UserRepository extends Repository<User, Long> {
@QueryHints(value = { @QueryHint(name = "name", value = "value")},
forCounting = false)
Page<User> findByLastname(String lastname, Pageable pageable);
}
The preceding declaration would apply the configured @QueryHint
for that actual query but omit applying it to the count query triggered to calculate the total number of pages.
Adding Comments to Queries
Sometimes, you need to debug a query based upon database performance.
The query your database administrator shows you may look VERY different than what you wrote using @Query
, or it may look
nothing like what you presume Spring Data JPA has generated for a custom finder or for a query by example.
To make this process easier, you can insert custom comments into almost any JPA operation, whether it’s a query or another operation,
by applying the @Meta
annotation.
public interface RoleRepository extends JpaRepository<Role, Integer> {
@Meta(comment = "find roles by name")
List<Role> findByName(String name);
@Override
@Meta(comment = "find roles using QBE")
<S extends Role> List<S> findAll(Example<S> example);
@Meta(comment = "count roles for a given name")
long countByName(String name);
@Override
@Meta(comment = "exists based on QBE")
<S extends Role> boolean exists(Example<S> example);
}
This sample repository has a mixture of custom finders as well as overriding the inherited operations from JpaRepository
.
Either way, the @Meta
annotation lets you add a comment
that will be inserted into queries before they are sent to the database.
It’s also important to note that this feature isn’t confined solely to queries. It extends to the count
and exists
operations.
And while not shown, it also extends to certain delete
operations.
While we have attempted to apply this feature everywhere possible, some operations of the underlying EntityManager don’t support comments. For example, entityManager.createQuery() is clearly documented as supporting comments, but entityManager.find() operations do not.
|
Neither JPQL logging nor SQL logging is a standard in JPA, so each provider requires custom configuration, as shown in the sections below.
To activate query comments in Hibernate, you must set hibernate.use_sql_comments
to true
.
If you are using Java-based configuration settings, this can be done like this:
@Bean
public Properties jpaProperties() {
Properties properties = new Properties();
properties.setProperty("hibernate.use_sql_comments", "true");
return properties;
}
If you have a persistence.xml
file, you can apply it there:
<persistence-unit name="my-persistence-unit">
...registered classes...
<properties>
<property name="hibernate.use_sql_comments" value="true" />
</properties>
</persistence-unit>
Finally, if you are using Spring Boot, then you can set it up inside your application.properties
file:
spring.jpa.properties.hibernate.use_sql_comments=true
To activate query comments in EclipseLink, you must set eclipselink.logging.level.sql
to FINE
.
If you are using Java-based configuration settings, this can be done like this:
@Bean
public Properties jpaProperties() {
Properties properties = new Properties();
properties.setProperty("eclipselink.logging.level.sql", "FINE");
return properties;
}
If you have a persistence.xml
file, you can apply it there:
<persistence-unit name="my-persistence-unit">
...registered classes...
<properties>
<property name="eclipselink.logging.level.sql" value="FINE" />
</properties>
</persistence-unit>
Finally, if you are using Spring Boot, then you can set it up inside your application.properties
file:
spring.jpa.properties.eclipselink.logging.level.sql=FINE
Configuring Fetch- and LoadGraphs
The JPA 2.1 specification introduced support for specifying Fetch- and LoadGraphs that we also support with the @EntityGraph
annotation, which lets you reference a @NamedEntityGraph
definition. You can use that annotation on an entity to configure the fetch plan of the resulting query. The type (Fetch
or Load
) of the fetching can be configured by using the type
attribute on the @EntityGraph
annotation. See the JPA 2.1 Spec 3.7.4 for further reference.
The following example shows how to define a named entity graph on an entity:
@Entity
@NamedEntityGraph(name = "GroupInfo.detail",
attributeNodes = @NamedAttributeNode("members"))
public class GroupInfo {
// default fetch mode is lazy.
@ManyToMany
List<GroupMember> members = new ArrayList<GroupMember>();
…
}
The following example shows how to reference a named entity graph on a repository query method:
public interface GroupRepository extends CrudRepository<GroupInfo, String> {
@EntityGraph(value = "GroupInfo.detail", type = EntityGraphType.LOAD)
GroupInfo getByGroupName(String name);
}
It is also possible to define ad hoc entity graphs by using @EntityGraph
. The provided attributePaths
are translated into the according EntityGraph
without needing to explicitly add @NamedEntityGraph
to your domain types, as shown in the following example:
public interface GroupRepository extends CrudRepository<GroupInfo, String> {
@EntityGraph(attributePaths = { "members" })
GroupInfo getByGroupName(String name);
}
Projections
Spring Data query methods usually return one or multiple instances of the aggregate root managed by the repository. However, it might sometimes be desirable to create projections based on certain attributes of those types. Spring Data allows modeling dedicated return types, to more selectively retrieve partial views of the managed aggregates.
Imagine a repository and aggregate root type such as the following example:
class Person {
@Id UUID id;
String firstname, lastname;
Address address;
static class Address {
String zipCode, city, street;
}
}
interface PersonRepository extends Repository<Person, UUID> {
Collection<Person> findByLastname(String lastname);
}
Now imagine that we want to retrieve the person’s name attributes only. What means does Spring Data offer to achieve this? The rest of this chapter answers that question.
Interface-based Projections
The easiest way to limit the result of the queries to only the name attributes is by declaring an interface that exposes accessor methods for the properties to be read, as shown in the following example:
interface NamesOnly {
String getFirstname();
String getLastname();
}
The important bit here is that the properties defined here exactly match properties in the aggregate root. Doing so lets a query method be added as follows:
interface PersonRepository extends Repository<Person, UUID> {
Collection<NamesOnly> findByLastname(String lastname);
}
The query execution engine creates proxy instances of that interface at runtime for each element returned and forwards calls to the exposed methods to the target object.
Declaring a method in your Repository that overrides a base method (e.g. declared in CrudRepository , a store-specific repository interface, or the Simple…Repository ) results in a call to the base method regardless of the declared return type. Make sure to use a compatible return type as base methods cannot be used for projections. Some store modules support @Query annotations to turn an overridden base method into a query method that then can be used to return projections.
|
Projections can be used recursively. If you want to include some of the Address
information as well, create a projection interface for that and return that interface from the declaration of getAddress()
, as shown in the following example:
interface PersonSummary {
String getFirstname();
String getLastname();
AddressSummary getAddress();
interface AddressSummary {
String getCity();
}
}
On method invocation, the address
property of the target instance is obtained and wrapped into a projecting proxy in turn.
A projection interface whose accessor methods all match properties of the target aggregate is considered to be a closed projection. The following example (which we used earlier in this chapter, too) is a closed projection:
interface NamesOnly {
String getFirstname();
String getLastname();
}
If you use a closed projection, Spring Data can optimize the query execution, because we know about all the attributes that are needed to back the projection proxy. For more details on that, see the module-specific part of the reference documentation.
Accessor methods in projection interfaces can also be used to compute new values by using the @Value
annotation, as shown in the following example:
interface NamesOnly {
@Value("#{target.firstname + ' ' + target.lastname}")
String getFullName();
…
}
The aggregate root backing the projection is available in the target
variable.
A projection interface using @Value
is an open projection.
Spring Data cannot apply query execution optimizations in this case, because the SpEL expression could use any attribute of the aggregate root.
The expressions used in @Value
should not be too complex — you want to avoid programming in String
variables.
For very simple expressions, one option might be to resort to default methods (introduced in Java 8), as shown in the following example:
interface NamesOnly {
String getFirstname();
String getLastname();
default String getFullName() {
return getFirstname().concat(" ").concat(getLastname());
}
}
This approach requires you to be able to implement logic purely based on the other accessor methods exposed on the projection interface. A second, more flexible, option is to implement the custom logic in a Spring bean and then invoke that from the SpEL expression, as shown in the following example:
@Component
class MyBean {
String getFullName(Person person) {
…
}
}
interface NamesOnly {
@Value("#{@myBean.getFullName(target)}")
String getFullName();
…
}
Notice how the SpEL expression refers to myBean
and invokes the getFullName(…)
method and forwards the projection target as a method parameter.
Methods backed by SpEL expression evaluation can also use method parameters, which can then be referred to from the expression.
The method parameters are available through an Object
array named args
. The following example shows how to get a method parameter from the args
array:
interface NamesOnly {
@Value("#{args[0] + ' ' + target.firstname + '!'}")
String getSalutation(String prefix);
}
Again, for more complex expressions, you should use a Spring bean and let the expression invoke a method, as described earlier.
Getters in projection interfaces can make use of nullable wrappers for improved null-safety. Currently supported wrapper types are:
-
java.util.Optional
-
com.google.common.base.Optional
-
scala.Option
-
io.vavr.control.Option
interface NamesOnly {
Optional<String> getFirstname();
}
If the underlying projection value is not null
, then values are returned using the present-representation of the wrapper type.
In case the backing value is null
, then the getter method returns the empty representation of the used wrapper type.
Class-based Projections (DTOs)
Another way of defining projections is by using value type DTOs (Data Transfer Objects) that hold properties for the fields that are supposed to be retrieved. These DTO types can be used in exactly the same way projection interfaces are used, except that no proxying happens and no nested projections can be applied.
If the store optimizes the query execution by limiting the fields to be loaded, the fields to be loaded are determined from the parameter names of the constructor that is exposed.
The following example shows a projecting DTO:
class NamesOnly {
private final String firstname, lastname;
NamesOnly(String firstname, String lastname) {
this.firstname = firstname;
this.lastname = lastname;
}
String getFirstname() {
return this.firstname;
}
String getLastname() {
return this.lastname;
}
// equals(…) and hashCode() implementations
}
Avoid boilerplate code for projection DTOs
You can dramatically simplify the code for a DTO by using Project Lombok, which provides an @Value annotation (not to be confused with Spring’s @Value annotation shown in the earlier interface examples).
Fields are private final by default, and the class exposes a constructor that takes all fields and automatically gets equals(…) and hashCode() methods implemented. |
Class-based projections do not work with native queries. As a workaround, you may use named queries with ResultSetMapping or the Hibernate-specific ResultTransformer.
|
Dynamic Projections
So far, we have used the projection type as the return type or element type of a collection. However, you might want to select the type to be used at invocation time (which makes it dynamic). To apply dynamic projections, use a query method such as the one shown in the following example:
interface PersonRepository extends Repository<Person, UUID> {
<T> Collection<T> findByLastname(String lastname, Class<T> type);
}
This way, the method can be used to obtain the aggregates as is or with a projection applied, as shown in the following example:
void someMethod(PersonRepository people) {
Collection<Person> aggregates =
people.findByLastname("Matthews", Person.class);
Collection<NamesOnly> projections =
people.findByLastname("Matthews", NamesOnly.class);
}
Query parameters of type Class are inspected to determine whether they qualify as a dynamic projection parameter.
If the actual return type of the query equals the generic parameter type of the Class parameter, then that Class parameter is not available for use within the query or SpEL expressions.
If you want to use a Class parameter as a query argument, make sure to use a different generic parameter, for example Class<?>.
|
5.1.4. Stored Procedures
The JPA 2.1 specification introduced support for calling stored procedures through its stored procedure query API.
We introduced the @Procedure
annotation for declaring stored procedure metadata on a repository method.
The examples to follow use the plus1inout stored procedure in HSQL DB, defined as follows:
/;
DROP procedure IF EXISTS plus1inout
/;
CREATE procedure plus1inout (IN arg int, OUT res int)
BEGIN ATOMIC
set res = arg + 1;
END
/;
Metadata for stored procedures can be configured by using the NamedStoredProcedureQuery
annotation on an entity type.
@Entity
@NamedStoredProcedureQuery(name = "User.plus1", procedureName = "plus1inout", parameters = {
@StoredProcedureParameter(mode = ParameterMode.IN, name = "arg", type = Integer.class),
@StoredProcedureParameter(mode = ParameterMode.OUT, name = "res", type = Integer.class) })
public class User {}
Note that @NamedStoredProcedureQuery
has two different names for the stored procedure.
name
is the name JPA uses. procedureName
is the name the stored procedure has in the database.
You can reference stored procedures from a repository method in multiple ways.
The stored procedure to be called can either be defined directly by using the value
or procedureName
attribute of the @Procedure
annotation.
This refers directly to the stored procedure in the database and ignores any configuration via @NamedStoredProcedureQuery
.
Alternatively you may specify the @NamedStoredProcedureQuery.name
attribute as the @Procedure.name
attribute.
If neither value
, procedureName
nor name
is configured, the name of the repository method is used as the name
attribute.
The following example shows how to reference an explicitly mapped procedure:
@Procedure("plus1inout")
Integer explicitlyNamedPlus1inout(Integer arg);
The following example is equivalent to the previous one but uses the procedureName alias:
@Procedure(procedureName = "plus1inout")
Integer callPlus1InOut(Integer arg);
The following is again equivalent to the previous two but uses the method name instead of an explicit annotation attribute:
@Procedure
Integer plus1inout(@Param("arg") Integer arg);
The following example shows how to reference a stored procedure by using the @NamedStoredProcedureQuery.name attribute:
@Procedure(name = "User.plus1")
Integer entityAnnotatedCustomNamedProcedurePlus1IO(@Param("arg") Integer arg);
If the stored procedure being called has a single OUT parameter, that parameter may be returned as the return value of the method.
If multiple OUT parameters are specified in a @NamedStoredProcedureQuery annotation, those can be returned as a Map, with the keys being the parameter names given in the @NamedStoredProcedureQuery annotation.
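As a hedged sketch, assuming a hypothetical named query User.plus1inout2 declaring two OUT parameters named res1 and res2, such a method could look as follows:
@Procedure(name = "User.plus1inout2")
Map<String, Object> plus1inout2(@Param("arg") Integer arg);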
5.1.5. Specifications
JPA 2 introduces a criteria API that you can use to build queries programmatically. By writing a criteria
, you define the where clause of a query for a domain class. Taking another step back, these criteria can be regarded as a predicate over the entity that is described by the JPA criteria API constraints.
Spring Data JPA takes the concept of a specification from Eric Evans' book, “Domain Driven Design”, following the same semantics and providing an API to define such specifications with the JPA criteria API. To support specifications, you can extend your repository interface with the JpaSpecificationExecutor
interface, as follows:
public interface CustomerRepository extends CrudRepository<Customer, Long>, JpaSpecificationExecutor<Customer> {
…
}
The additional interface has methods that let you run specifications in a variety of ways. For example, the findAll
method returns all entities that match the specification, as shown in the following example:
List<T> findAll(Specification<T> spec);
The Specification
interface is defined as follows:
public interface Specification<T> {
Predicate toPredicate(Root<T> root, CriteriaQuery<?> query,
CriteriaBuilder builder);
}
Specifications can easily be used to build an extensible set of predicates on top of an entity that then can be combined and used with JpaRepository
without the need to declare a query (method) for every needed combination, as shown in the following example:
public class CustomerSpecs {
public static Specification<Customer> isLongTermCustomer() {
return (root, query, builder) -> {
LocalDate date = LocalDate.now().minusYears(2);
return builder.lessThan(root.get(Customer_.createdAt), date);
};
}
public static Specification<Customer> hasSalesOfMoreThan(MonetaryAmount value) {
return (root, query, builder) -> {
// build query here
};
}
}
The Customer_
type is a metamodel type generated using the JPA Metamodel generator (see the Hibernate implementation’s documentation for an example).
So the expression Customer_.createdAt assumes that Customer has a createdAt attribute of type LocalDate.
Besides that, we have expressed some criteria on a business requirement abstraction level and created executable Specifications
.
So a client might use a Specification
as follows:
List<Customer> customers = customerRepository.findAll(isLongTermCustomer());
Why not create a query for this kind of data access? Using a single Specification
does not gain a lot of benefit over a plain query declaration. The power of specifications really shines when you combine them to create new Specification
objects. You can achieve this through the default methods of Specification
we provide to build expressions similar to the following:
MonetaryAmount amount = new MonetaryAmount(200.0, Currencies.DOLLAR);
List<Customer> customers = customerRepository.findAll(
isLongTermCustomer().or(hasSalesOfMoreThan(amount)));
Specification
offers some “glue-code” default methods to chain and combine Specification
instances. These methods let you extend your data access layer by creating new Specification
implementations and combining them with already existing implementations.
With JPA 2.1, the CriteriaBuilder API introduced CriteriaDelete. This is provided through the JpaSpecificationExecutor's delete(Specification) API, as the following example of using a Specification to delete entries shows:
Specification<User> ageLessThan18 = (root, query, cb) -> cb.lessThan(root.get("age").as(Integer.class), 18);
userRepository.delete(ageLessThan18);
The Specification
builds up a criteria where the age
field (cast as an integer) is less than 18
.
Passed on to the userRepository
, it will use JPA’s CriteriaDelete
feature to generate the right DELETE
operation.
It then returns the number of entities deleted.
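Assuming the repository also extends JpaSpecificationExecutor<User>, a caller can capture that count directly, as in this minimal sketch:
long deletedCount = userRepository.delete(ageLessThan18);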
5.1.6. Query by Example
Introduction
This chapter provides an introduction to Query by Example and explains how to use it.
Query by Example (QBE) is a user-friendly querying technique with a simple interface. It allows dynamic query creation and does not require you to write queries that contain field names. In fact, Query by Example does not require you to write queries by using store-specific query languages at all.
Usage
The Query by Example API consists of four parts:
-
Probe: The actual example of a domain object with populated fields.
-
ExampleMatcher
: TheExampleMatcher
carries details on how to match particular fields. It can be reused across multiple Examples. -
Example
: AnExample
consists of the probe and theExampleMatcher
. It is used to create the query. -
FetchableFluentQuery
: AFetchableFluentQuery
offers a fluent API, that allows further customization of a query derived from anExample
. Using the fluent API lets you specify ordering, projection, and result processing for your query.
Query by Example is well suited for several use cases:
-
Querying your data store with a set of static or dynamic constraints.
-
Frequent refactoring of the domain objects without worrying about breaking existing queries.
-
Working independently from the underlying data store API.
Query by Example also has several limitations:
-
No support for nested or grouped property constraints, such as
firstname = ?0 or (firstname = ?1 and lastname = ?2)
. -
Only supports starts/contains/ends/regex matching for strings and exact matching for other property types.
Before getting started with Query by Example, you need a domain object, such as the one shown in the following example:
public class Person {
@Id
private String id;
private String firstname;
private String lastname;
private Address address;
// … getters and setters omitted
}
The preceding example shows a simple domain object.
You can use it to create an Example
.
By default, fields having null
values are ignored, and strings are matched by using the store-specific defaults.
Inclusion of properties into a Query by Example criteria is based on nullability.
Properties using primitive types (int , double , …) are always included unless the ExampleMatcher ignores the property path.
|
Examples can be built by either using the of
factory method or by using ExampleMatcher
. Example
is immutable.
The following listing shows a simple Example:
Person person = new Person(); (1)
person.setFirstname("Dave"); (2)
Example<Person> example = Example.of(person); (3)
1 | Create a new instance of the domain object. |
2 | Set the properties to query. |
3 | Create the Example . |
You can run the example queries by using repositories.
To do so, let your repository interface extend QueryByExampleExecutor<T>
.
The following listing shows an excerpt from the QueryByExampleExecutor
interface:
public interface QueryByExampleExecutor<T> {
<S extends T> S findOne(Example<S> example);
<S extends T> Iterable<S> findAll(Example<S> example);
// … more functionality omitted.
}
Example Matchers
Examples are not limited to default settings.
You can specify your own defaults for string matching, null handling, and property-specific settings by using the ExampleMatcher
, as shown in the following example:
Person person = new Person(); (1)
person.setFirstname("Dave"); (2)
ExampleMatcher matcher = ExampleMatcher.matching() (3)
.withIgnorePaths("lastname") (4)
.withIncludeNullValues() (5)
.withStringMatcher(StringMatcher.ENDING); (6)
Example<Person> example = Example.of(person, matcher); (7)
1 | Create a new instance of the domain object. |
2 | Set properties. |
3 | Create an ExampleMatcher to expect all values to match.
It is usable at this stage even without further configuration. |
4 | Construct a new ExampleMatcher to ignore the lastname property path. |
5 | Construct a new ExampleMatcher to ignore the lastname property path and to include null values. |
6 | Construct a new ExampleMatcher to ignore the lastname property path, to include null values, and to perform suffix string matching. |
7 | Create a new Example based on the domain object and the configured ExampleMatcher . |
By default, the ExampleMatcher
expects all values set on the probe to match.
If you want to get results matching any of the predicates defined implicitly, use ExampleMatcher.matchingAny()
.
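The following minimal sketch creates an Example whose derived predicates are combined with OR instead of the default AND:
ExampleMatcher matcher = ExampleMatcher.matchingAny();
Example<Person> example = Example.of(person, matcher);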
You can specify behavior for individual properties (such as "firstname" and "lastname" or, for nested properties, "address.city"). You can tune it with matching options and case sensitivity, as shown in the following example:
ExampleMatcher matcher = ExampleMatcher.matching()
.withMatcher("firstname", endsWith())
.withMatcher("lastname", startsWith().ignoreCase());
Another way to configure matcher options is to use lambdas (introduced in Java 8). This approach creates a callback that asks the implementor to modify the matcher. You need not return the matcher, because configuration options are held within the matcher instance. The following example shows a matcher that uses lambdas:
ExampleMatcher matcher = ExampleMatcher.matching()
.withMatcher("firstname", match -> match.endsWith())
.withMatcher("firstname", match -> match.startsWith());
Queries created by Example
use a merged view of the configuration.
Default matching settings can be set at the ExampleMatcher
level, while individual settings can be applied to particular property paths.
Settings that are set on ExampleMatcher
are inherited by property path settings unless they are defined explicitly.
Settings on a property path have higher precedence than default settings.
The following table describes the scope of the various ExampleMatcher
settings:
Setting | Scope |
---|---|
Null-handling | ExampleMatcher |
String matching | ExampleMatcher and property path |
Ignoring properties | Property path |
Case sensitivity | ExampleMatcher and property path |
Value transformation | Property path |
Fluent API
QueryByExampleExecutor
offers one more method, which we did not mention so far: <S extends T, R> R findBy(Example<S> example, Function<FluentQuery.FetchableFluentQuery<S>, R> queryFunction)
.
As with other methods, it executes a query derived from an Example
.
However, with the second argument, you can control aspects of that execution that you cannot dynamically control otherwise.
You do so by invoking the various methods of the FetchableFluentQuery
in the second argument.
sortBy
lets you specify an ordering for your result.
as
lets you specify the type to which you want the result to be transformed.
project
limits the queried attributes.
first
, firstValue
, one
, oneValue
, all
, page
, stream
, count
, and exists
define what kind of result you get and how the query behaves when more than the expected number of results are available.
Optional<Person> match = repository.findBy(example,
q -> q
.sortBy(Sort.by("lastname").descending())
.first()
);
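As a further sketch, assuming the NamesOnly projection interface shown earlier, projection and result-type selection can be combined in the same fluent query:
List<NamesOnly> matches = repository.findBy(example,
  q -> q.project("lastname")     // limit the queried attributes
        .as(NamesOnly.class)     // return results as the projection type
        .all());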
Running an Example
In Spring Data JPA, you can use Query by Example with Repositories, as shown in the following example:
public interface PersonRepository extends JpaRepository<Person, String> { … }
public class PersonService {
@Autowired PersonRepository personRepository;
public List<Person> findPeople(Person probe) {
return personRepository.findAll(Example.of(probe));
}
}
Currently, only SingularAttribute properties can be used for property matching.
|
The property specifier accepts property names (such as firstname
and lastname
). You can navigate by chaining properties together with dots (address.city
). You can also tune it with matching options and case sensitivity.
The following table shows the various StringMatcher
options that you can use and the result of using them on a field named firstname
:
Matching | Logical result |
---|---|
DEFAULT (case-sensitive) | firstname = ?0 |
DEFAULT (case-insensitive) | LOWER(firstname) = LOWER(?0) |
EXACT (case-sensitive) | firstname = ?0 |
EXACT (case-insensitive) | LOWER(firstname) = LOWER(?0) |
STARTING (case-sensitive) | firstname like ?0 + '%' |
STARTING (case-insensitive) | LOWER(firstname) like LOWER(?0) + '%' |
ENDING (case-sensitive) | firstname like '%' + ?0 |
ENDING (case-insensitive) | LOWER(firstname) like '%' + LOWER(?0) |
CONTAINING (case-sensitive) | firstname like '%' + ?0 + '%' |
CONTAINING (case-insensitive) | LOWER(firstname) like '%' + LOWER(?0) + '%' |
5.1.7. Transactionality
By default, CRUD methods on repository instances inherited from SimpleJpaRepository
are transactional.
For read operations, the transaction configuration readOnly
flag is set to true
.
All others are configured with a plain @Transactional
so that default transaction configuration applies.
Repository methods that are backed by transactional repository fragments inherit the transactional attributes from the actual fragment method.
If you need to tweak transaction configuration for one of the methods declared in a repository, redeclare the method in your repository interface, as follows:
public interface UserRepository extends CrudRepository<User, Long> {
@Override
@Transactional(timeout = 10)
public List<User> findAll();
// Further query method declarations
}
Doing so causes the findAll()
method to run with a timeout of 10 seconds and without the readOnly
flag.
Another way to alter transactional behaviour is to use a facade or service implementation that (typically) covers more than one repository. Its purpose is to define transactional boundaries for non-CRUD operations. The following example shows how to use such a facade for more than one repository:
@Service
public class UserManagementImpl implements UserManagement {
private final UserRepository userRepository;
private final RoleRepository roleRepository;
public UserManagementImpl(UserRepository userRepository,
RoleRepository roleRepository) {
this.userRepository = userRepository;
this.roleRepository = roleRepository;
}
@Transactional
public void addRoleToAllUsers(String roleName) {
Role role = roleRepository.findByName(roleName);
for (User user : userRepository.findAll()) {
user.addRole(role);
userRepository.save(user);
}
}
}
This example causes the call to addRoleToAllUsers(…)
to run inside a transaction (participating in an existing one or creating a new one if none is already running). The transaction configuration at the repositories is then ignored, as the outer transaction configuration determines the actual one used. Note that you must activate <tx:annotation-driven />
or use @EnableTransactionManagement
explicitly to get annotation-based configuration of facades to work.
This example assumes you use component scanning.
Note that the call to save
is not strictly necessary from a JPA point of view, but should still be there in order to stay consistent with the repository abstraction offered by Spring Data.
Transactional query methods
To let your query methods be transactional, use @Transactional
at the repository interface you define, as shown in the following example:
@Transactional(readOnly = true)
interface UserRepository extends JpaRepository<User, Long> {
List<User> findByLastname(String lastname);
@Modifying
@Transactional
@Query("delete from User u where u.active = false")
void deleteInactiveUsers();
}
Typically, you want the readOnly
flag to be set to true
, as most of the query methods only read data. In contrast to that, deleteInactiveUsers()
makes use of the @Modifying
annotation and overrides the transaction configuration. Thus, the method runs with the readOnly
flag set to false
.
You can use transactions for read-only queries and mark them as such by setting the readOnly flag. Doing so does not, however, act as a check that you do not trigger a manipulating query (although some databases reject INSERT and UPDATE statements inside a read-only transaction). Instead, the readOnly flag is propagated as a hint to the underlying JDBC driver for performance optimizations. |
5.1.8. Locking
To specify the lock mode to be used, you can use the @Lock
annotation on query methods, as shown in the following example:
interface UserRepository extends Repository<User, Long> {
// Plain query method
@Lock(LockModeType.READ)
List<User> findByLastname(String lastname);
}
This method declaration causes the query being triggered to be equipped with a LockModeType
of READ
. You can also define locking for CRUD methods by redeclaring them in your repository interface and adding the @Lock
annotation, as shown in the following example:
interface UserRepository extends Repository<User, Long> {
// Redeclaration of a CRUD method
@Lock(LockModeType.READ)
List<User> findAll();
}
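The same mechanism applies to other lock modes. The following sketch, using a hypothetical derived query method, acquires a pessimistic write lock instead:
interface UserRepository extends Repository<User, Long> {
  // Derived query method; holds a database-level write lock for the duration of the surrounding transaction
  @Lock(LockModeType.PESSIMISTIC_WRITE)
  Optional<User> findWithLockingById(Long id);
}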
5.1.9. Auditing
Basics
Spring Data provides sophisticated support to transparently keep track of who created or changed an entity and when the change happened. To benefit from that functionality, you have to equip your entity classes with auditing metadata that can be defined either using annotations or by implementing an interface. Additionally, auditing has to be enabled either through annotation configuration or XML configuration to register the required infrastructure components. Please refer to the store-specific section for configuration samples.
Applications that only track creation and modification dates are not required to make their entities implement AuditorAware. |
Annotation-based Auditing Metadata
We provide @CreatedBy
and @LastModifiedBy
to capture the user who created or modified the entity as well as @CreatedDate
and @LastModifiedDate
to capture when the change happened.
class Customer {
@CreatedBy
private User user;
@CreatedDate
private Instant createdDate;
// … further properties omitted
}
As you can see, the annotations can be applied selectively, depending on which information you want to capture.
The annotations capturing when changes were made can be used on properties of JDK8 date and time types, long, Long, and legacy Java Date and Calendar.
Auditing metadata does not necessarily need to live in the root level entity but can be added to an embedded one (depending on the actual store in use), as shown in the snippet below.
class Customer {
private AuditMetadata auditingMetadata;
// … further properties omitted
}
class AuditMetadata {
@CreatedBy
private User user;
@CreatedDate
private Instant createdDate;
}
Interface-based Auditing Metadata
In case you do not want to use annotations to define auditing metadata, you can let your domain class implement the Auditable
interface. It exposes setter methods for all of the auditing properties.
AuditorAware
In case you use either @CreatedBy
or @LastModifiedBy
, the auditing infrastructure somehow needs to become aware of the current principal. To do so, we provide an AuditorAware<T>
SPI interface that you have to implement to tell the infrastructure who the current user or system interacting with the application is. The generic type T
defines what type the properties annotated with @CreatedBy
or @LastModifiedBy
have to be.
The following example shows an implementation of the interface that uses Spring Security’s Authentication
object:
class SpringSecurityAuditorAware implements AuditorAware<User> {
@Override
public Optional<User> getCurrentAuditor() {
return Optional.ofNullable(SecurityContextHolder.getContext())
.map(SecurityContext::getAuthentication)
.filter(Authentication::isAuthenticated)
.map(Authentication::getPrincipal)
.map(User.class::cast);
}
}
The implementation accesses the Authentication
object provided by Spring Security and looks up the custom UserDetails
instance that you have created in your UserDetailsService
implementation. We assume here that you are exposing the domain user through the UserDetails
implementation but that, based on the Authentication
found, you could also look it up from anywhere.
ReactiveAuditorAware
When using reactive infrastructure you might want to make use of contextual information to provide @CreatedBy
or @LastModifiedBy
information.
We provide a ReactiveAuditorAware<T>
SPI interface that you have to implement to tell the infrastructure who the current user or system interacting with the application is. The generic type T
defines what type the properties annotated with @CreatedBy
or @LastModifiedBy
have to be.
The following example shows an implementation of the interface that uses reactive Spring Security’s Authentication
object:
class SpringSecurityAuditorAware implements ReactiveAuditorAware<User> {
@Override
public Mono<User> getCurrentAuditor() {
return ReactiveSecurityContextHolder.getContext()
.map(SecurityContext::getAuthentication)
.filter(Authentication::isAuthenticated)
.map(Authentication::getPrincipal)
.map(User.class::cast);
}
}
The implementation accesses the Authentication
object provided by Spring Security and looks up the custom UserDetails
instance that you have created in your UserDetailsService
implementation. We assume here that you are exposing the domain user through the UserDetails
implementation but that, based on the Authentication
found, you could also look it up from anywhere.
There is also a convenience base class, AbstractAuditable
, which you can extend to avoid the need to manually implement the interface methods. Doing so increases the coupling of your domain classes to Spring Data, which might be something you want to avoid. Usually, the annotation-based way of defining auditing metadata is preferred as it is less invasive and more flexible.
5.1.10. JPA Auditing
General Auditing Configuration
Spring Data JPA ships with an entity listener that can be used to trigger the capturing of auditing information. First, you must register the AuditingEntityListener
to be used for all entities in your persistence contexts inside your orm.xml
file, as shown in the following example:
<persistence-unit-metadata>
<persistence-unit-defaults>
<entity-listeners>
<entity-listener class="….data.jpa.domain.support.AuditingEntityListener" />
</entity-listeners>
</persistence-unit-defaults>
</persistence-unit-metadata>
You can also enable the AuditingEntityListener
on a per-entity basis by using the @EntityListeners
annotation, as follows:
@Entity
@EntityListeners(AuditingEntityListener.class)
public class MyEntity {
}
The auditing feature requires spring-aspects.jar to be on the classpath.
|
With orm.xml
suitably modified and spring-aspects.jar
on the classpath, activating auditing functionality is a matter of adding the Spring Data JPA auditing
namespace element to your configuration, as follows:
<jpa:auditing auditor-aware-ref="yourAuditorAwareBean" />
As of Spring Data JPA 1.5, you can enable auditing by annotating a configuration class with the @EnableJpaAuditing
annotation. You must still modify the orm.xml
file and have spring-aspects.jar
on the classpath. The following example shows how to use the @EnableJpaAuditing
annotation:
@Configuration
@EnableJpaAuditing
class Config {
@Bean
public AuditorAware<AuditableUser> auditorProvider() {
return new AuditorAwareImpl();
}
}
If you expose a bean of type AuditorAware
to the ApplicationContext
, the auditing infrastructure automatically picks it up and uses it to determine the current user to be set on domain types. If you have multiple implementations registered in the ApplicationContext
, you can select the one to be used by explicitly setting the auditorAwareRef
attribute of @EnableJpaAuditing
.
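For example, the following sketch selects a bean named auditorProvider (a hypothetical bean name) among several registered AuditorAware implementations:
@Configuration
@EnableJpaAuditing(auditorAwareRef = "auditorProvider")
class Config { }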
5.2. Miscellaneous Considerations
5.2.1. Using JpaContext
in Custom Implementations
When working with multiple EntityManager
instances and custom repository implementations, you need to wire the correct EntityManager
into the repository implementation class. You can do so by explicitly naming the EntityManager
in the @PersistenceContext
annotation or, if the EntityManager
is @Autowired
, by using @Qualifier
.
As of Spring Data JPA 1.9, Spring Data JPA includes a class called JpaContext
that lets you obtain the EntityManager
by managed domain class, assuming it is managed by only one of the EntityManager
instances in the application. The following example shows how to use JpaContext
in a custom repository:
class UserRepositoryImpl implements UserRepositoryCustom {
private final EntityManager em;
@Autowired
public UserRepositoryImpl(JpaContext context) {
this.em = context.getEntityManagerByManagedType(User.class);
}
…
}
The advantage of this approach is that, if the domain type gets assigned to a different persistence unit, the repository does not have to be touched to alter the reference to the persistence unit.
5.2.2. Merging persistence units
Spring supports having multiple persistence units. Sometimes, however, you might want to modularize your application but still make sure that all these modules run inside a single persistence unit. To enable that behavior, Spring Data JPA offers a PersistenceUnitManager
implementation that automatically merges persistence units based on their name, as shown in the following example:
<bean class="….LocalContainerEntityManagerFactoryBean">
<property name="persistenceUnitManager">
<bean class="….MergingPersistenceUnitManager" />
</property>
</bean>
Classpath Scanning for @Entity Classes and JPA Mapping Files
A plain JPA setup requires all annotation-mapped entity classes to be listed in orm.xml
. The same applies to XML mapping files. Spring Data JPA provides a ClasspathScanningPersistenceUnitPostProcessor
that gets a base package configured and optionally takes a mapping filename pattern. It then scans the given package for classes annotated with @Entity
or @MappedSuperclass
, loads the configuration files that match the filename pattern, and hands them to the JPA configuration. The post-processor must be configured as follows:
<bean class="….LocalContainerEntityManagerFactoryBean">
<property name="persistenceUnitPostProcessors">
<list>
<bean class="org.springframework.data.jpa.support.ClasspathScanningPersistenceUnitPostProcessor">
<constructor-arg value="com.acme.domain" />
<property name="mappingFileNamePattern" value="**/*Mapping.xml" />
</bean>
</list>
</property>
</bean>
As of Spring 3.1, a package to scan can be configured on the LocalContainerEntityManagerFactoryBean directly to enable classpath scanning for entity classes. See the JavaDoc for details.
|
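For illustration, a minimal sketch of that alternative in Java configuration, assuming the com.acme.domain package used above:
LocalContainerEntityManagerFactoryBean factory = new LocalContainerEntityManagerFactoryBean();
factory.setPackagesToScan("com.acme.domain");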
5.2.3. CDI Integration
Instances of the repository interfaces are usually created by a container, for which Spring is the most natural choice when working with Spring Data. Spring offers sophisticated support for creating bean instances, as documented in Creating Repository Instances. As of version 1.1.0, Spring Data JPA ships with a custom CDI extension that allows using the repository abstraction in CDI environments. The extension is part of the JAR. To activate it, include the Spring Data JPA JAR on your classpath.
You can now set up the infrastructure by implementing a CDI Producer for the EntityManagerFactory
and EntityManager
, as shown in the following example:
class EntityManagerFactoryProducer {
@Produces
@ApplicationScoped
public EntityManagerFactory createEntityManagerFactory() {
return Persistence.createEntityManagerFactory("my-persistence-unit");
}
public void close(@Disposes EntityManagerFactory entityManagerFactory) {
entityManagerFactory.close();
}
@Produces
@RequestScoped
public EntityManager createEntityManager(EntityManagerFactory entityManagerFactory) {
return entityManagerFactory.createEntityManager();
}
public void close(@Disposes EntityManager entityManager) {
entityManager.close();
}
}
The necessary setup can vary depending on the Java EE environment. You may need to do nothing more than redeclare an EntityManager
as a CDI bean, as follows:
class CdiConfig {
@Produces
@RequestScoped
@PersistenceContext
public EntityManager entityManager;
}
In the preceding example, the container has to be capable of creating JPA EntityManagers
itself. All the configuration does is re-export the JPA EntityManager
as a CDI bean.
The Spring Data JPA CDI extension picks up all available EntityManager
instances as CDI beans and creates a proxy for a Spring Data repository whenever a bean of a repository type is requested by the container. Thus, obtaining an instance of a Spring Data repository is a matter of declaring an @Injected
property, as shown in the following example:
class RepositoryClient {
@Inject
PersonRepository repository;
public void businessMethod() {
List<Person> people = repository.findAll();
}
}
5.3. Spring Data Envers
5.3.1. What is Spring Data Envers?
Spring Data Envers makes typical Envers queries available in repositories for Spring Data JPA. It differs from other Spring Data modules in that it is always used in combination with another Spring Data Module: Spring Data JPA.
5.3.2. What is Envers?
Envers is a Hibernate module that adds auditing capabilities to JPA entities. This documentation assumes you are familiar with Envers, just as Spring Data Envers relies on Envers being properly configured.
5.3.3. Configuration
As a starting point for using Spring Data Envers, you need a project with Spring Data JPA on the classpath and an additional spring-data-envers
dependency:
<dependencies>
<!-- other dependency elements omitted -->
<dependency>
<groupId>org.springframework.data</groupId>
<artifactId>spring-data-envers</artifactId>
<version>3.0.0-M6</version>
</dependency>
</dependencies>
This also brings hibernate-envers
into the project as a transitive dependency.
To enable Spring Data Envers and Spring Data JPA, annotate a configuration class with @EnableEnversRepositories (which registers the required repositoryFactoryBeanClass for you) and configure the usual JPA infrastructure beans:
@Configuration
@EnableEnversRepositories
@EnableTransactionManagement
public class EnversDemoConfiguration {
@Bean
public DataSource dataSource() {
EmbeddedDatabaseBuilder builder = new EmbeddedDatabaseBuilder();
return builder.setType(EmbeddedDatabaseType.HSQL).build();
}
@Bean
public LocalContainerEntityManagerFactoryBean entityManagerFactory() {
HibernateJpaVendorAdapter vendorAdapter = new HibernateJpaVendorAdapter();
vendorAdapter.setGenerateDdl(true);
LocalContainerEntityManagerFactoryBean factory = new LocalContainerEntityManagerFactoryBean();
factory.setJpaVendorAdapter(vendorAdapter);
factory.setPackagesToScan("example.springdata.jpa.envers");
factory.setDataSource(dataSource());
return factory;
}
@Bean
public PlatformTransactionManager transactionManager(EntityManagerFactory entityManagerFactory) {
JpaTransactionManager txManager = new JpaTransactionManager();
txManager.setEntityManagerFactory(entityManagerFactory);
return txManager;
}
}
To actually use Spring Data Envers, make one or more repositories into a RevisionRepository by adding it as an extended interface:
interface PersonRepository
extends CrudRepository<Person, Long>,
RevisionRepository<Person, Long, Long> (1)
{}
1 | The first type parameter (Person ) denotes the entity type, the second (Long ) denotes the type of the id property, and the last one (Long ) is the type of the revision number.
For Envers in default configuration, the revision number parameter should be Integer or Long . |
The entity for that repository must be an entity with Envers auditing enabled (that is, it must have an @Audited
annotation):
@Entity
@Audited
class Person {
@Id @GeneratedValue
Long id;
String name;
@Version Long version;
}
5.3.4. Usage
You can now use the methods from RevisionRepository
to query the revisions of the entity, as the following test case shows:
@ExtendWith(SpringExtension.class)
@Import(EnversDemoConfiguration.class) (1)
class EnversIntegrationTests {
final PersonRepository repository;
final TransactionTemplate tx;
EnversIntegrationTests(@Autowired PersonRepository repository, @Autowired PlatformTransactionManager tm) {
this.repository = repository;
this.tx = new TransactionTemplate(tm);
}
@Test
void testRepository() {
Person updated = preparePersonHistory();
Revisions<Long, Person> revisions = repository.findRevisions(updated.id);
Iterator<Revision<Long, Person>> revisionIterator = revisions.iterator();
checkNextRevision(revisionIterator, "John", RevisionType.INSERT);
checkNextRevision(revisionIterator, "Jonny", RevisionType.UPDATE);
checkNextRevision(revisionIterator, null, RevisionType.DELETE);
assertThat(revisionIterator.hasNext()).isFalse();
}
/**
* Checks that the next element in the iterator is a Revision entry referencing a Person
* with the given name after whatever change brought that Revision into existence.
* <p>
* As a side effect the Iterator gets advanced by one element.
*
* @param revisionIterator the iterator to be tested.
* @param name the expected name of the Person referenced by the Revision.
* @param revisionType the type of the revision denoting if it represents an insert, update or delete.
*/
private void checkNextRevision(Iterator<Revision<Long, Person>> revisionIterator, String name,
RevisionType revisionType) {
assertThat(revisionIterator.hasNext()).isTrue();
Revision<Long, Person> revision = revisionIterator.next();
assertThat(revision.getEntity().name).isEqualTo(name);
assertThat(revision.getMetadata().getRevisionType()).isEqualTo(revisionType);
}
/**
* Creates a Person with a couple of changes so it has a non-trivial revision history.
* @return the created Person.
*/
private Person preparePersonHistory() {
Person john = new Person();
john.setName("John");
// create
Person saved = tx.execute(__ -> repository.save(john));
assertThat(saved).isNotNull();
saved.setName("Jonny");
// update
Person updated = tx.execute(__ -> repository.save(saved));
assertThat(updated).isNotNull();
// delete
tx.executeWithoutResult(__ -> repository.delete(updated));
return updated;
}
}
1 | This references the application context configuration presented earlier (in the Configuration section). |
5.3.5. Further Resources
You can download the Spring Data Envers example in the Spring Data Examples repository and play around with it to get a feel for how the library works.
You should also check out the Javadoc for RevisionRepository and related classes.
You can ask questions at Stack Overflow by using the spring-data-envers
tag.
Appendix A: Namespace reference
The <repositories />
Element
The <repositories />
element triggers the setup of the Spring Data repository infrastructure. The most important attribute is base-package
, which defines the package to scan for Spring Data repository interfaces. See “XML Configuration”. The following table describes the attributes of the <repositories />
element:
Name | Description |
---|---|
base-package | Defines the package to be scanned for repository interfaces that extend *Repository (the actual interface is determined by the specific Spring Data module in use) in auto-detection mode. All packages below the configured package are scanned, too. Wildcards are allowed. |
repository-impl-postfix | Defines the postfix to autodetect custom repository implementations. Classes whose names end with the configured postfix are considered as candidates. Defaults to Impl. |
query-lookup-strategy | Determines the strategy to be used to create finder queries. See “Query Lookup Strategies” for details. Defaults to create-if-not-found. |
named-queries-location | Defines the location to search for a Properties file containing externally defined queries. |
consider-nested-repositories | Whether nested repository interface definitions should be considered. Defaults to false. |
Appendix B: Populators namespace reference
The <populator /> element
The <populator />
element allows you to populate a data store via the Spring Data repository infrastructure.
Name | Description |
---|---|
locations | Location of the files from which to read the objects that the repository shall be populated with. |
Appendix C: Repository query keywords
Supported query method subject keywords
The following table lists the subject keywords generally supported by the Spring Data repository query derivation mechanism to express the predicate. Consult the store-specific documentation for the exact list of supported keywords, because some keywords listed here might not be supported in a particular store.
Keyword | Description |
---|---|
find…By, read…By, get…By, query…By, search…By, stream…By | General query method returning typically the repository type, a Collection or Streamable subtype, or a result wrapper such as Page, GeoResults, or any other store-specific result wrapper. Can be used as findBy…, findMyDomainTypeBy…, or in combination with additional keywords. |
exists…By | Exists projection, returning typically a boolean result. |
count…By | Count projection returning a numeric result. |
delete…By, remove…By | Delete query method returning either no result (void) or the delete count. |
…First<number>…, …Top<number>… | Limit the query results to the first <number> of results. This keyword can occur in any place of the subject between find (and the other keywords) and by. |
…Distinct… | Use a distinct query to return only unique results. Consult the store-specific documentation whether that feature is supported. This keyword can occur in any place of the subject between find (and the other keywords) and by. |
Supported query method predicate keywords and modifiers
The following table lists the predicate keywords generally supported by the Spring Data repository query derivation mechanism. However, consult the store-specific documentation for the exact list of supported keywords, because some keywords listed here might not be supported in a particular store.
Logical keyword | Keyword expressions |
---|---|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
In addition to filter predicates, the following list of modifiers is supported:
Keyword | Description |
---|---|
IgnoreCase, IgnoringCase | Used with a predicate keyword for case-insensitive comparison. |
AllIgnoreCase, AllIgnoringCase | Ignore case for all suitable properties. Used somewhere in the query method predicate. |
OrderBy… | Specify a static sorting order followed by the property path and direction (e.g. OrderByFirstnameAscLastnameDesc). |
Appendix D: Repository query return types
Supported Query Return Types
The following table lists the return types generally supported by Spring Data repositories. However, consult the store-specific documentation for the exact list of supported return types, because some types listed here might not be supported in a particular store.
Geospatial types (such as GeoResult , GeoResults , and GeoPage ) are available only for data stores that support geospatial queries.
Some store modules may define their own result wrapper types.
|
Return type | Description |
---|---|
|
Denotes no return value. |
Primitives |
Java primitives. |
Wrapper types |
Java wrapper types. |
|
A unique entity. Expects the query method to return one result at most. If no result is found, |
|
An |
|
A |
|
A |
|
A Java 8 or Guava |
|
Either a Scala or Vavr |
|
A Java 8 |
|
A convenience extension of |
Types that implement |
Types that expose a constructor or |
Vavr |
Vavr collection types. See Support for Vavr Collections for details. |
|
A |
|
A Java 8 |
|
A sized chunk of data with an indication of whether there is more data available. Requires a |
|
A |
|
A result entry with additional information, such as the distance to a reference location. |
|
A list of |
|
A |
|
A Project Reactor |
|
A Project Reactor |
|
A RxJava |
|
A RxJava |
|
A RxJava |
Appendix E: Frequently Asked Questions
Common
-
I’d like to get more detailed logging information on what methods are called inside
JpaRepository
for example. How can I get that information? You can make use of
CustomizableTraceInterceptor
provided by Spring, as shown in the following example:
<bean id="customizableTraceInterceptor" class="org.springframework.aop.interceptor.CustomizableTraceInterceptor">
  <property name="enterMessage" value="Entering $[methodName]($[arguments])"/>
  <property name="exitMessage" value="Leaving $[methodName](): $[returnValue]"/>
</bean>
<aop:config>
  <aop:advisor advice-ref="customizableTraceInterceptor"
    pointcut="execution(public * org.springframework.data.jpa.repository.JpaRepository+.*(..))"/>
</aop:config>
Infrastructure
-
Currently I have implemented a repository layer based on
HibernateDaoSupport
. I create a SessionFactory by using Spring’s AnnotationSessionFactoryBean. How do I get Spring Data repositories working in this environment? You have to replace AnnotationSessionFactoryBean with the HibernateJpaSessionFactoryBean, as follows:
Example 134. Looking up a SessionFactory from a HibernateEntityManagerFactory
<bean id="sessionFactory" class="org.springframework.orm.jpa.vendor.HibernateJpaSessionFactoryBean"> <property name="entityManagerFactory" ref="entityManagerFactory"/> </bean>
Appendix F: Glossary
- AOP
-
Aspect oriented programming
- Commons DBCP
-
Commons DataBase Connection Pools - a library from the Apache foundation that offers pooling implementations of the DataSource interface.
- CRUD
-
Create, Read, Update, Delete - Basic persistence operations.
- DAO
-
Data Access Object - Pattern to separate persisting logic from the object to be persisted
- Dependency Injection
-
Pattern to hand a component’s dependencies to the component from outside, freeing the component from having to look them up itself. For more information, see https://en.wikipedia.org/wiki/Dependency_Injection.
- EclipseLink
-
Object relational mapper implementing JPA - https://www.eclipse.org/eclipselink/
- Hibernate
-
Object relational mapper implementing JPA - https://hibernate.org/
- JPA
-
Jakarta Persistence API
- Spring
-
Java application framework - https://projects.spring.io/spring-framework