3.0.M3
Copyright © 2004-2009 Rod Johnson, Juergen Hoeller, Alef Arendsen, Colin Sampaleanu, Rob Harrop, Thomas Risberg, Darren Davison, Dmitriy Kopylenko, Mark Pollack, Thierry Templier, Erwin Vervaet, Portia Tung, Ben Hale, Adrian Colyer, John Lewis, Costin Leau, Mark Fisher, Sam Brannen, Ramnivas Laddad, Arjen Poutsma, Chris Beams, Tareq Abed Rabbo
Developing software applications is hard enough even with good tools and technologies. Implementing applications using platforms which promise everything but turn out to be heavy-weight, hard to control and not very efficient during the development cycle makes it even harder. Spring provides a light-weight solution for building enterprise-ready applications, while still supporting the possibility of using declarative transaction management, remote access to your logic using RMI or web services, and various options for persisting your data to a database. Spring provides a full-featured MVC framework, and transparent ways of integrating AOP into your software.
Spring could potentially be a one-stop-shop for all your enterprise applications; however, Spring is modular, allowing you to use just those parts of it that you need, without having to bring in the rest. You can use the IoC container, with Struts on top, but you could also choose to use just the Hibernate integration code or the JDBC abstraction layer.
Spring has been (and continues to be) designed to be non-intrusive, meaning that your domain logic code generally has no dependencies on the framework itself. In your integration layer, such as the data access layer, there will of course be some dependencies on the data access technology in use and also on the Spring libraries, but these dependencies should be easy to isolate from the rest of your code base.
This document provides a reference guide to Spring's features. Since this document is still to be considered very much work-in-progress, if you have any requests or comments, please post them on the user mailing list or on the support forums at http://forum.springsource.org/.
Fundamentally, what is Spring? We think of it as a platform for your Java code. It provides comprehensive infrastructural support for developing Java applications. Spring deals with the plumbing so you can focus on solving the domain problem.
Spring as a platform allows applications to be built from “plain old Java objects” (POJOs). This is true for the Java SE programming model as well as within a number of other environments, including full and partial Java EE. Spring allows enterprise services to be applied to POJOs in a non-invasive way.
Examples of Spring as a platform:
Make a Java method execute in a database transaction; without the implementer dealing with transaction APIs
Make a local Java method a remote-procedure; without the implementer dealing with remoting APIs
Make a local Java method a management operation; without the implementer dealing with JMX APIs
Make a local Java method a message handler; without the implementer dealing with JMS APIs
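To make the first of these concrete, here is a minimal hedged sketch (the service class, method, and configuration are hypothetical, and assume that declarative transaction management has already been enabled in the container): a plain Java method becomes transactional through metadata rather than through calls to a transaction API.

import org.springframework.transaction.annotation.Transactional;

// Hypothetical POJO service; the method body contains only business logic.
public class AccountService {

    // Spring wraps the method in a transaction when the bean is managed by
    // the container and declarative transaction management is enabled.
    @Transactional
    public void transfer(String fromAccount, String toAccount, double amount) {
        // plain business logic; no transaction API calls required
    }
}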
Java applications (a loose term which runs the gamut from constrained applets to full-fledged n-tier server-side enterprise applications) typically are composed of a number of objects that collaborate with one another to form the application proper. The objects in an application can thus be said to have dependencies between themselves.
The Java language and platform provides a wealth of functionality for architecting and building applications, ranging all the way from the very basic building blocks of primitive types and classes (and the means to define new classes), to rich full-featured application servers and web frameworks. One area that is decidedly conspicuous by its absence is any means of taking the basic building blocks and composing them into a coherent whole; this area has typically been left to the purview of the architects and developers tasked with building an application (or applications). Now to be fair, there are a number of design patterns devoted to the business of composing the various classes and object instances that make up an all-singing, all-dancing application. Design patterns such as Factory, Abstract Factory, Builder, Decorator, and Service Locator (to name but a few) have widespread recognition and acceptance within the software development industry (presumably that is why these patterns have been formalized as patterns in the first place). This is all very well, but these patterns are just that: best practices given a name, typically together with a description of what the pattern does, where the pattern is typically best applied, the problems that the application of the pattern addresses, and so forth.
Notice that the last paragraph used the phrase “... a description of what the pattern does...”; pattern books and wikis are typically listings of such formalized best practice that you can certainly take away, mull over, and then implement yourself in your application.
The IoC component of the Spring Framework addresses the enterprise concern of taking the classes, objects, and services that are to compose an application, by providing a formalized means of composing these various disparate components into a fully working application ready for use. The Spring Framework takes best practices that have been proven over the years in numerous applications and formalized as design patterns, and actually codifies these patterns as first class objects that you as an architect and developer can take away and integrate into your own application(s). This is a Very Good Thing Indeed as attested to by the numerous organizations and institutions that have used the Spring Framework to engineer robust, maintainable applications.
The Spring Framework contains a lot of features, which are well-organized in about twenty modules. These modules can be grouped together based on their primary features into Core Container, Data Access/Integration, Web, AOP (Aspect Oriented Programming), Instrumentation and Test. These groups are shown in the diagram below.
Overview of the Spring Framework
The Core Container consists of the Core, Beans, Context and Expression modules.
The Core and Beans modules provide the most fundamental parts of the framework, including the IoC and Dependency Injection features. The basic concept here is the BeanFactory, which provides a sophisticated implementation of the factory pattern that removes the need for programmatic singletons and allows you to decouple the configuration and specification of dependencies from your actual program logic.
The Context module builds on the solid base provided by the Core and Beans modules: it provides a way to access objects in a framework-style manner in a fashion somewhat reminiscent of a JNDI registry. The Context module inherits its features from the Beans module and adds support for internationalization (I18N) (using, for example, resource bundles), event propagation, resource loading, and the transparent creation of contexts by, for example, a servlet container. The Context module also contains support for some Java EE features like EJB, JMX and basic remoting support.
The Expression Language module provides a powerful expression language for querying and manipulating an object graph at runtime. It can be seen as an extension of the unified expression language (unified EL) as specified in the JSP 2.1 specification. The language supports setting and getting of property values, property assignment, method invocation, accessing the context of arrays, collections and indexers, logical and arithmetic operators, named variables, and retrieval of objects by name from Spring's IoC container. It also supports list projection and selection, as well as common list aggregators.
The Data Access/Integration layer consists of the JDBC, ORM, OXM, JMS and Transaction modules.
The JDBC module provides a JDBC-abstraction layer that removes the need to do tedious JDBC coding and parsing of database-vendor specific error codes.
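As a brief, hedged sketch of what this abstraction looks like in practice (the DAO class, table name and query are hypothetical), the JdbcTemplate class handles resource management and error translation around a plain SQL statement:

import javax.sql.DataSource;

import org.springframework.jdbc.core.JdbcTemplate;

// Hypothetical DAO built on the JDBC abstraction layer.
public class MemberDao {

    private final JdbcTemplate jdbcTemplate;

    public MemberDao(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    public int countMembers() {
        // connection handling, statement creation, exception translation
        // and resource cleanup are all performed by the template
        return jdbcTemplate.queryForInt("select count(*) from members");
    }
}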
The ORM module provides integration layers for popular object-relational mapping APIs, including JPA, JDO, Hibernate, and iBatis. Using the ORM package you can use all those O/R-mappers in combination with all the other features Spring offers, such as the simple declarative transaction management feature mentioned previously.
The OXM module provides an abstraction layer for using a number of Object/XML mapping implementations. Supported technologies include JAXB, Castor, XMLBeans, JiBX and XStream.
The JMS module provides Spring's support for the Java Message Service. It contains features for both producing and consuming messages.
The Transaction module provides a way to do programmatic as well as declarative transaction management, not only for classes implementing special interfaces, but for all your POJOs (plain old Java objects).
The Web layer consists of the Web, Web-Servlet and Web-Portlet modules.
Spring's Web module provides basic web-oriented integration features, such as multipart file-upload functionality, the initialization of the IoC container using servlet listeners and a web-oriented application context. It also contains the web related parts of Spring's remoting support.
The Web-Servlet module provides Spring's Model-View-Controller (MVC) implementation for web-applications. Spring's MVC framework is not just any old implementation; it provides a clean separation between domain model code and web forms, and allows you to use all the other features of the Spring Framework.
The Web-Portlet module provides the MVC implementation to be used in a portlet environment and mirrors what is provided in the Web-Servlet module.
Spring's AOP module provides an AOP Alliance-compliant aspect-oriented programming implementation allowing you to define, for example, method-interceptors and pointcuts to cleanly decouple code implementing functionality that should logically speaking be separated. Using source-level metadata functionality you can also incorporate all kinds of behavioral information into your code, in a manner similar to that of .NET attributes.
There is also a separate Aspects module that provides integration with AspectJ.
The Instrumentation module provides class instrumentation support and classloader implementations to be used in certain application servers.
The Test module contains the Test Framework that supports testing Spring components using JUnit or TestNG. It provides consistent loading of Spring ApplicationContexts and caching of those contexts. It also contains a number of Mock objects that are useful in many testing scenarios to test your code in isolation.
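For illustration, a minimal integration test might look like the following hedged sketch (the MemberService bean and the test-context.xml file are hypothetical):

import static org.junit.Assert.assertNotNull;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

// The context defined in test-context.xml is loaded once and cached
// across test methods and test classes that use the same configuration.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("/test-context.xml")
public class MemberServiceIntegrationTests {

    @Autowired
    private MemberService memberService;

    @Test
    public void memberServiceIsWired() {
        assertNotNull(memberService);
    }
}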
With the building blocks described above you can use Spring in all sorts of scenarios, from applets up to fully-fledged enterprise applications using Spring's transaction management functionality and web framework integration.
Typical full-fledged Spring web application
By using Spring's declarative transaction management features the web application is fully transactional, just as it would be when using container managed transactions as provided by Enterprise JavaBeans. All your custom business logic can be implemented using simple POJOs, managed by Spring's IoC container. Additional services include support for sending email, and validation that is independent of the web layer, enabling you to choose where to execute validation rules. Spring's ORM support is integrated with JPA, Hibernate, JDO and iBatis; for example, when using Hibernate, you can continue to use your existing mapping files and standard Hibernate SessionFactory configuration. Form controllers seamlessly integrate the web layer with the domain model, removing the need for ActionForms or other classes that transform HTTP parameters to values for your domain model.
Spring middle-tier using a third-party web framework
Sometimes the current circumstances do not allow you to completely switch to a different framework. The Spring Framework does not force you to use everything within it; it is not an all-or-nothing solution. Existing front-ends built using WebWork, Struts, Tapestry, or other UI frameworks can be integrated perfectly well with a Spring-based middle-tier, allowing you to use the transaction features that Spring offers. The only thing you need to do is wire up your business logic using an ApplicationContext and integrate your web layer using a WebApplicationContext.
Remoting usage scenario
When you need to access existing code via web services, you can use Spring's Hessian-, Burlap-, Rmi- or JaxRpcProxyFactory classes. Enabling remote access to existing applications is suddenly not that hard anymore.
EJBs - Wrapping existing POJOs
The Spring Framework also provides an access- and abstraction- layer for Enterprise JavaBeans, enabling you to reuse your existing POJOs and wrap them in Stateless Session Beans, for use in scalable, failsafe web applications that might need declarative security.
If you have been using the Spring Framework for some time, you will be aware that Spring has undergone two major revisions: Spring 2.0, released in October 2006, and Spring 2.5, released in November 2007. It is now time for a third overhaul resulting in Spring 3.0.
The entire framework code has been revised to take advantage of Java 5 features like generics, varargs and other language improvements. We have done our best to still keep the code backwards compatible. We now have consistent use of generic Collections and Maps, consistent use of generified FactoryBeans, and also consistent resolution of bridge methods in the Spring AOP API. Generified ApplicationListeners automatically receive specific event types only. All callback interfaces such as TransactionCallback and HibernateCallback declare a generic result value now. Overall, the Spring core codebase is now freshly revised and optimized for Java 5.
Spring's TaskExecutor abstraction has been updated for close integration with Java 5's java.util.concurrent facilities. We provide first-class support for Callables and Futures now, as well as ExecutorService adapters, ThreadFactory integration, etc. This has been aligned with JSR-236 (Concurrency Utilities for Java EE 6) as far as possible. Furthermore, we provide support for asynchronous method invocations through the use of the new @Async annotation (or EJB 3.1's @Asynchronous annotation).
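As a hedged sketch of the new asynchronous execution support (the class and method are hypothetical, and a suitable executor must be configured in the container):

import java.util.concurrent.Future;

import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.AsyncResult;

public class ReportGenerator {

    // The caller returns immediately; the work runs on a TaskExecutor.
    @Async
    public Future<String> generateReport(String name) {
        String report = "report for " + name; // long-running work in reality
        // AsyncResult exposes the computed value to the caller as a Future
        return new AsyncResult<String>(report);
    }
}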
The Spring reference documentation has also substantially been updated to reflect all of the changes and new features for Spring 3.0. While every effort has been made to ensure that there are no errors in this documentation, some errors may nevertheless have crept in. If you do spot any typos or even more serious errors, and you can spare a few cycles during lunch, please do bring the error to the attention of the Spring team by raising an issue.
The framework modules have been revised and are now managed separately with one source-tree per module jar:
org.springframework.aop
org.springframework.beans
org.springframework.context
org.springframework.context.support
org.springframework.expression
org.springframework.instrument
org.springframework.jdbc
org.springframework.jms
org.springframework.orm
org.springframework.oxm
org.springframework.test
org.springframework.transaction
org.springframework.web
org.springframework.web.portlet
org.springframework.web.servlet
We are now using a new Spring build system as known from Spring Web Flow 2.0. This gives us:
Ivy-based "Spring Build" system
consistent deployment procedure
consistent dependency management
consistent generation of OSGi manifests
This is a list of new features for Spring 3.0. We will cover these features in more detail later in this section.
Spring Expression Language
IoC enhancements/Java based bean metadata
Object to XML mapping functionality (OXM) moved from Spring Web Services project
Comprehensive REST support
@MVC additions
Declarative model validation
Early support for Java EE 6
BeanFactory interface returns typed bean instances as far as possible (see the sketch following this list):
T getBean(String name, Class<T> requiredType)
Map<String, T> getBeansOfType(Class<T> type)
Spring's TaskExecutor interface now extends java.util.concurrent.Executor:
extended AsyncTaskExecutor supports standard Callables with Futures
New Java 5 based converter API and SPI:
stateless ConversionService and Converters
superseding standard JDK PropertyEditors
Typed ApplicationListener<E>
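A minimal sketch of the typed lookups mentioned above (the bean name, the MovieFinder type and the services.xml file are hypothetical):

import java.util.Map;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

ApplicationContext context = new ClassPathXmlApplicationContext("services.xml");

// no cast is needed: the required type parameterizes the return value
MovieFinder finder = context.getBean("movieFinder", MovieFinder.class);

// all beans of a given type, keyed by bean name
Map<String, MovieFinder> finders = context.getBeansOfType(MovieFinder.class);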
Spring introduces an expression language which is similar to Unified EL in its syntax but offers significantly more features. The expression language can be used when defining XML and Annotation based bean definitions and also serves as the foundation for expression language support across the Spring portfolio. Details of this new functionality can be found in the chapter Spring Expression Language (SpEL).
The Spring Expression Language was created to provide the Spring community a single, well supported expression language that can be used across all the products in the Spring portfolio. Its language features are driven by the requirements of the projects in the Spring portfolio, including tooling requirements for code completion support within the Eclipse based SpringSource Tool Suite.
The following is an example of how the Expression Language can be used to configure some properties of a database setup:
<bean class="mycompany.RewardsTestDatabase">
    <property name="databaseName" value="#{systemProperties.databaseName}"/>
    <property name="keyGenerator" value="#{strategyBean.databaseKeyGenerator}"/>
</bean>
This functionality is also available if you prefer to configure your components using annotations:
@Repository
public class RewardsTestDatabase {

    @Value("#{systemProperties.databaseName}")
    public void setDatabaseName(String dbName) { … }

    @Value("#{strategyBean.databaseKeyGenerator}")
    public void setKeyGenerator(KeyGenerator kg) { … }
}
Some core features from the JavaConfig project have been added to the Spring Framework now. This means that the following annotations are now directly supported:
@Configuration
@Bean
@Primary
@Lazy
@Import
@Value
Here is an example of a Java class providing basic configuration using the new JavaConfig features:
@Configuration
public class AppConfig {

    private @Value("#{jdbcProperties.url}") String jdbcUrl;
    private @Value("#{jdbcProperties.username}") String username;
    private @Value("#{jdbcProperties.password}") String password;

    @Bean
    public FooService fooService() {
        return new FooServiceImpl(fooRepository());
    }

    @Bean
    public FooRepository fooRepository() {
        return new HibernateFooRepository(sessionFactory());
    }

    @Bean
    public SessionFactory sessionFactory() {
        // wire up a session factory using AnnotationSessionFactoryBean
        AnnotationSessionFactoryBean asFactoryBean = new AnnotationSessionFactoryBean();
        asFactoryBean.setDataSource(dataSource());
        return (SessionFactory) asFactoryBean.getObject();
    }

    @Bean
    public DataSource dataSource() {
        return new DriverManagerDataSource(jdbcUrl, username, password);
    }
}
To get this to work you need to add the following component scanning entry in your minimal application context XML file.
<context:component-scan base-package="com.myco.config"/>
@Bean annotated methods are also supported inside Spring components. They contribute a factory bean definition to the container. See Defining bean metadata within components for more information.
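Assuming the AppConfig class shown above lives in a package covered by the component scan, the beans it declares can then be retrieved like any other container-managed bean; a short hedged sketch (the XML file name is hypothetical):

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

// the XML file contains little more than the <context:component-scan/> entry
ApplicationContext context = new ClassPathXmlApplicationContext("application-context.xml");

// fooService was contributed to the container by the @Bean method on AppConfig
FooService fooService = context.getBean("fooService", FooService.class);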
Object to XML mapping functionality (OXM) from the Spring Web Services project has been moved to the core Spring Framework now. The functionality is found in the org.springframework.oxm package. More information on the use of the OXM module can be found in the Marshalling XML using O/X Mappers chapter.
The most exciting new feature for the Web Tier is the support for building RESTful web services and web applications. There are also some new annotations that can be used in any web application.
Server-side support for building RESTful applications has been provided as an extension of the existing annotation driven MVC web framework. Client-side support is provided by the RestTemplate class in the spirit of other template classes such as JdbcTemplate and JmsTemplate. Both server and client side REST functionality make use of HttpConverters to facilitate the conversion between objects and their representation in HTTP requests and replies.
The MarshallingHttpMessageConverter uses the Object to XML mapping functionality mentioned earlier.
Refer to the section on REST support for more information.
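As a hedged sketch of the client-side support (the URL is hypothetical, and the exact RestTemplate method signatures may still change in this milestone):

import org.springframework.web.client.RestTemplate;

RestTemplate restTemplate = new RestTemplate();

// {id} is expanded from the trailing argument; the response body is
// converted to the requested type by the registered message converters
String hotel = restTemplate.getForObject(
        "http://example.com/hotels/{id}", String.class, "42");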
Additional annotations such as @CookieValue and @RequestHeader have been added. See Mapping cookie values with the @CookieValue annotation and Mapping request header attributes with the @RequestHeader annotation for more information.
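A brief hedged sketch of these annotations in an annotated controller (the controller, mapping, cookie and header names are hypothetical):

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.CookieValue;
import org.springframework.web.bind.annotation.RequestHeader;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
public class ClientInfoController {

    @RequestMapping("/clientInfo")
    public void displayClientInfo(
            @CookieValue("JSESSIONID") String sessionId,
            @RequestHeader("User-Agent") String userAgent) {
        // the cookie value and request header arrive as ordinary arguments
    }
}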
Hibernate Validator, JSR 303
Work in progress... not part of the Spring 3.0 M3 release.
This chapter will give you a quick introduction and serve as a guide for how to get started using the Spring Framework for your Java development. We can of course only cover a tiny subset of the available features in this chapter. You will have to turn to the rest of this reference document for more detailed coverage of all features.
This initial part of the reference documentation covers all of those technologies that are absolutely integral to the Spring Framework.
Foremost amongst these is the Spring Framework's Inversion of Control (IoC) container. A thorough treatment of the Spring Framework's IoC container is closely followed by comprehensive coverage of Spring's Aspect-Oriented Programming (AOP) technologies. The Spring Framework has its own AOP framework, which is conceptually easy to understand, and which successfully addresses the 80% sweet spot of AOP requirements in Java enterprise programming.
Coverage of Spring's integration with AspectJ (currently the richest - in terms of features - and certainly most mature AOP implementation in the Java enterprise space) is also provided.
Finally, the adoption of the test-driven-development (TDD) approach to software development is certainly advocated by the Spring team, and so coverage of Spring's support for integration testing is covered (alongside best practices for unit testing). The Spring team have found that the correct use of IoC certainly does make both unit and integration testing easier (in that the presence of setter methods and appropriate constructors on classes makes them easier to wire together on a test without having to set up service locator registries and suchlike)... the chapter dedicated solely to testing will hopefully convince you of this as well.
This chapter covers the Spring Framework's implementation of the Inversion of Control (IoC) [1] principle.
The org.springframework.beans and org.springframework.context packages provide the basis for the Spring Framework's IoC container. The BeanFactory interface provides an advanced configuration mechanism capable of managing objects of any nature. The ApplicationContext interface builds on top of the BeanFactory (it is a sub-interface) and adds other functionality such as easier integration with Spring's AOP features, message resource handling (for use in internationalization), event propagation, and application-layer specific contexts such as the WebApplicationContext for use in web applications.
In short, the BeanFactory provides the configuration framework and basic functionality, while the ApplicationContext adds more enterprise-centric functionality to it. The ApplicationContext is a complete superset of the BeanFactory, and any description of BeanFactory capabilities and behavior is to be considered to apply to the ApplicationContext as well.
This chapter is divided into two parts, with the first part covering the basic principles that apply to both the BeanFactory and the ApplicationContext, and with the second part covering those features that apply only to the ApplicationContext interface.
In Spring, those objects that form the backbone of your application and that are managed by the Spring IoC container are referred to as beans. A bean is simply an object that is instantiated, assembled and otherwise managed by a Spring IoC container; other than that, there is nothing special about a bean (it is in all other respects one of probably many objects in your application). These beans, and the dependencies between them, are reflected in the configuration metadata used by a container.
The org.springframework.beans.factory.BeanFactory is the actual representation of the Spring IoC container that is responsible for containing and otherwise managing the aforementioned beans.
The BeanFactory interface is the central IoC container interface in Spring. Its responsibilities include instantiating or sourcing application objects, configuring such objects, and assembling the dependencies between these objects.
There are a number of implementations of the BeanFactory interface that come supplied straight out-of-the-box with Spring. The most commonly used BeanFactory implementation is the XmlBeanFactory class. This implementation allows you to express the objects that compose your application, and the doubtless rich interdependencies between such objects, in terms of XML. The XmlBeanFactory takes this XML configuration metadata and uses it to create a fully configured system or application.
The Spring IoC container
As can be seen in the above image, the Spring IoC container consumes some form of configuration metadata; this configuration metadata is nothing more than how you (as an application developer) inform the Spring container as to how to “instantiate, configure, and assemble [the objects in your application]”. This configuration metadata is typically supplied in a simple and intuitive XML format. When using XML-based configuration metadata, you write bean definitions for those beans that you want the Spring IoC container to manage, and then let the container do its stuff.
Note
XML-based metadata is by far the most commonly used form of configuration metadata. It is not however the only form of configuration metadata that is allowed. The Spring IoC container itself is totally decoupled from the format in which this configuration metadata is actually written. The XML-based configuration metadata format really is simple though, and so the majority of this chapter will use the XML format to convey key concepts and features of the Spring IoC container. You can find details of another form of metadata that the Spring container can consume in Section 4.11, “Annotation-based configuration”.
In the vast majority of application scenarios, explicit user code is not required to instantiate one or more instances of a Spring IoC container. For example, in a web application scenario, a simple eight (or so) lines of boilerplate J2EE web descriptor XML in the web.xml file of the application will typically suffice (see Section 4.8.5, “Convenient ApplicationContext instantiation for web applications”).
Spring configuration consists of at least one bean definition that the container must manage, but typically there will be more than one bean definition. When using XML-based configuration metadata, these beans are configured as <bean/> elements inside a top-level <beans/> element.
These bean definitions correspond to the actual objects that make up your application. Typically you will have bean definitions for your service layer objects, your data access objects (DAOs), presentation objects such as Struts Action instances, infrastructure objects such as Hibernate SessionFactories, JMS Queues, and so forth. Typically one does not configure fine-grained domain objects in the container, because it is usually the responsibility of DAOs and business logic to create/load domain objects.
Find below an example of the basic structure of XML-based configuration metadata.
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-2.5.xsd">

    <bean id="..." class="...">
        <!-- collaborators and configuration for this bean go here -->
    </bean>

    <bean id="..." class="...">
        <!-- collaborators and configuration for this bean go here -->
    </bean>

    <!-- more bean definitions go here -->

</beans>
Instantiating a Spring IoC container is straightforward.
ApplicationContext context = new ClassPathXmlApplicationContext(
        new String[] {"services.xml", "daos.xml"});

// an ApplicationContext is also a BeanFactory (via inheritance)
BeanFactory factory = context;
It can often be useful to split up container definitions into multiple XML files. One way to then load an application context which is configured from all these XML fragments is to use the application context constructor which takes multiple Resource locations. With a bean factory, a bean definition reader can be used multiple times to read definitions from each file in turn.
Generally, the Spring team prefers the above approach, since it keeps container configuration files unaware of the fact that they are being combined with others. An alternate approach is to use one or more occurrences of the <import/> element to load bean definitions from another file (or files). Let's look at a sample:
<beans>

    <import resource="services.xml"/>
    <import resource="resources/messageSource.xml"/>
    <import resource="/resources/themeSource.xml"/>

    <bean id="bean1" class="..."/>
    <bean id="bean2" class="..."/>

</beans>
In this example, external bean definitions are being loaded from three files: services.xml, messageSource.xml, and themeSource.xml. All location paths are considered relative to the definition file doing the importing, so services.xml in this case must be in the same directory or classpath location as the file doing the importing, while messageSource.xml and themeSource.xml must be in a resources location below the location of the importing file. As you can see, a leading slash is actually ignored, but given that these are considered relative paths, it is probably better form not to use the slash at all. The contents of the files being imported must be valid XML bean definition files according to the Spring Schema or DTD, including the top-level <beans/> element.
Note
It is possible to reference files in parent directories using a relative "../" path. However, this is not recommended because it creates a dependency on a file that is outside the current application. This is in particular not recommended for "classpath:" URLs (e.g. "classpath:../services.xml") where the runtime resolution process will pick the "nearest" classpath root and then look into its parent directory. This is fragile since classpath configuration changes may lead to a different directory being picked.
Note that you can always use fully qualified resource locations instead of relative paths: e.g. "file:C:/config/services.xml" or "classpath:/config/services.xml". However, be aware that you are coupling your application's configuration to specific absolute locations then. It is generally preferable to keep an indirection for such absolute locations, e.g. through "${...}" placeholders that are resolved against JVM system properties at runtime.
A Spring IoC container manages one or more beans. These beans are created using the configuration metadata that has been supplied to the container (typically in the form of XML <bean/> definitions).
Within the container itself, these bean definitions are represented as BeanDefinition objects, which contain (among other information) the following metadata:
a package-qualified class name: typically this is the actual implementation class of the bean being defined.
bean behavioral configuration elements, which state how the bean should behave in the container (scope, lifecycle callbacks, and so forth).
references to other beans which are needed for the bean to do its work; these references are also called collaborators or dependencies.
other configuration settings to set in the newly created object. An example would be the number of connections to use in a bean that manages a connection pool, or the size limit of the pool.
The concepts listed above directly translate to a set of properties that each bean definition consists of. Some of these properties are listed below; each is explained in detail in later sections of this chapter.
Table 4.1. The bean definition
class
name
scope
constructor arguments
properties
autowiring mode
dependency checking mode
lazy-initialization mode
initialization method
destruction method
Besides bean definitions which contain information on how to create a specific bean, certain BeanFactory implementations also permit the registration of existing objects that have been created outside the factory (by user code). The DefaultListableBeanFactory class supports this through the registerSingleton(..) method. (Typical applications solely work with beans defined through metadata bean definitions though.)
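A minimal sketch of such a registration follows (LegacyConnectionPool is a hypothetical, pre-existing class created outside the container):

import org.springframework.beans.factory.support.DefaultListableBeanFactory;

DefaultListableBeanFactory beanFactory = new DefaultListableBeanFactory();

// register an object that was created by user code, not by the container
LegacyConnectionPool pool = new LegacyConnectionPool();
beanFactory.registerSingleton("connectionPool", pool);

// the factory now hands out that very same instance
LegacyConnectionPool sameInstance =
        (LegacyConnectionPool) beanFactory.getBean("connectionPool");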
Every bean has one or more ids (also called identifiers, or names; these terms refer to the same thing). These ids must be unique within the container the bean is hosted in. A bean will almost always have only one id, but if a bean has more than one id, the extra ones can essentially be considered aliases.
When using XML-based configuration metadata, you use the 'id' or 'name' attributes to specify the bean identifier(s). The 'id' attribute allows you to specify exactly one id, and as it is a real XML element ID attribute, the XML parser is able to do some extra validation when other elements reference the id; as such, it is the preferred way to specify a bean id. However, the XML specification does limit the characters which are legal in XML IDs. This is usually not a constraint, but if you have a need to use one of these special XML characters, or want to introduce other aliases to the bean, you may also or instead specify one or more bean ids, separated by a comma (,), semicolon (;), or whitespace in the 'name' attribute.
Please note that you are not required to supply a name for a bean. If no name is supplied explicitly, the container will generate a unique name for that bean. The motivations for not supplying a name for a bean will be discussed later (one use case is inner beans).
In a bean definition itself, you may supply more than one name for the bean, by using a combination of up to one name specified via the id attribute, and any number of other names via the name attribute. All these names can be considered equivalent aliases to the same bean, and are useful for some situations, such as allowing each component used in an application to refer to a common dependency using a bean name that is specific to that component itself.
Having to specify all aliases when the bean is actually defined is not always adequate however. It is sometimes desirable to introduce an alias for a bean which is defined elsewhere. In XML-based configuration metadata this may be accomplished via the use of the <alias/> element.
<alias name="fromName" alias="toName"/>
In this case, a bean in the same container which is named 'fromName' may, after the use of this alias definition, also be referred to as 'toName'.
As a concrete example, consider the case where component A defines a DataSource bean called componentA-dataSource, in its XML fragment. Component B would however like to refer to the DataSource as componentB-dataSource in its XML fragment. And the main application, MyApp, defines its own XML fragment and assembles the final application context from all three fragments, and would like to refer to the DataSource as myApp-dataSource. This scenario can be easily handled by adding to the MyApp XML fragment the following standalone aliases:
<alias name="componentA-dataSource" alias="componentB-dataSource"/>
<alias name="componentA-dataSource" alias="myApp-dataSource"/>
Now each component and the main application can refer to the dataSource via a name that is unique and guaranteed not to clash with any other definition (effectively there is a namespace), yet they refer to the same bean.
A bean definition essentially is a recipe for creating one or more objects. The container looks at the recipe for a named bean when asked, and uses the configuration metadata encapsulated by that bean definition to create (or acquire) an actual object.
If you are using XML-based configuration metadata, you can specify the type (or class) of object that is to be instantiated using the 'class' attribute of the <bean/> element. This 'class' attribute (which internally eventually boils down to being a Class property on a BeanDefinition instance) is normally mandatory (see the section called “Instantiation using an instance factory method” and Section 4.6, “Bean definition inheritance” for the two exceptions) and is used for one of two purposes. The class property specifies the class of the bean to be constructed in the common case where the container itself directly creates the bean by calling its constructor reflectively (somewhat equivalent to Java code using the 'new' operator). In the less common case where the container invokes a static factory method on a class to create the bean, the class property specifies the actual class containing the static factory method that is to be invoked to create the object (the type of the object returned from the invocation of the static factory method may be the same class or another class entirely; it doesn't matter).
When creating a bean using the constructor approach, all normal classes are usable by and compatible with Spring. That is, the class being created does not need to implement any specific interfaces or be coded in a specific fashion. Just specifying the bean class should be enough. However, depending on what type of IoC you are going to use for that specific bean, you may need a default (empty) constructor.
Additionally, the Spring IoC container isn't limited to just managing true JavaBeans, it is also able to manage virtually any class you want it to manage. Most people using Spring prefer to have actual JavaBeans (having just a default (no-argument) constructor and appropriate setters and getters modeled after the properties) in the container, but it is also possible to have more exotic non-bean-style classes in your container. If, for example, you need to use a legacy connection pool that absolutely does not adhere to the JavaBean specification, Spring can manage it as well.
When using XML-based configuration metadata you can specify your bean class like so:
<bean id="exampleBean" class="examples.ExampleBean"/>
<bean name="anotherExample" class="examples.ExampleBeanTwo"/>
The mechanism for supplying arguments to the constructor (if required), or setting properties of the object instance after it has been constructed, is described shortly.
When defining a bean which is to be created using a static factory method, along with the class attribute which specifies the class containing the static factory method, another attribute named factory-method is needed to specify the name of the factory method itself. Spring expects to be able to call this method (with an optional list of arguments as described later) and get back a live object, which from that point on is treated as if it had been created normally via a constructor. One use for such a bean definition is to call static factories in legacy code.
The following example shows a bean definition which specifies that the bean is to be created by calling a factory method. Note that the definition does not specify the type (class) of the returned object, only the class containing the factory method. In this example, the createInstance() method must be a static method.
<bean id="exampleBean" class="examples.ExampleBean2" factory-method="createInstance"/>
The mechanism for supplying (optional) arguments to the factory method, or setting properties of the object instance after it has been returned from the factory, will be described shortly.
In a fashion similar to instantiation via a static factory method, instantiation using an instance factory method is where a non-static method of an existing bean from the container is invoked to create a new bean. To use this mechanism, the 'class' attribute must be left empty, and the 'factory-bean' attribute must specify the name of a bean in the current (or parent/ancestor) container that contains the instance method that is to be invoked to create the object. The name of the factory method itself must be set using the 'factory-method' attribute.
<!-- the factory bean, which contains a method called createInstance() -->
<bean id="serviceLocator" class="com.foo.DefaultServiceLocator">
    <!-- inject any dependencies required by this locator bean -->
</bean>

<!-- the bean to be created via the factory bean -->
<bean id="exampleBean"
      factory-bean="serviceLocator"
      factory-method="createInstance"/>
Although the mechanisms for setting bean properties are still to be discussed, one implication of this approach is that the factory bean itself can be managed and configured via DI.
Note
When the Spring documentation makes mention of a 'factory bean', this will be a reference to a bean that is configured in the Spring container that will create objects via an instance or static factory method. When the documentation mentions a FactoryBean (note the capitalization) it is referring to a Spring-specific FactoryBean.
A BeanFactory is essentially nothing more than the interface for an advanced factory capable of maintaining a registry of different beans and their dependencies. The BeanFactory enables you to read bean definitions and access them using the bean factory. When using just the BeanFactory you would create one and read in some bean definitions in the XML format as follows:
Resource res = new FileSystemResource("beans.xml");
BeanFactory factory = new XmlBeanFactory(res);
Basically that is all there is to it. Using getBean(String) you can retrieve instances of your beans; the client-side view of the BeanFactory is simple. The BeanFactory interface has just a few other methods, but ideally your application code should never use them... indeed, your application code should have no calls to the getBean(String) method at all, and thus no dependency on Spring APIs at all.
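For completeness, a minimal sketch of that client-side view, continuing the snippet above (MovieLister is a hypothetical bean declared in beans.xml); application code would normally avoid such lookups in favor of dependency injection:

import org.springframework.beans.factory.BeanFactory;
import org.springframework.beans.factory.xml.XmlBeanFactory;
import org.springframework.core.io.FileSystemResource;

BeanFactory factory = new XmlBeanFactory(new FileSystemResource("beans.xml"));

// look up a fully configured bean by name and cast to the expected type
MovieLister lister = (MovieLister) factory.getBean("movieLister");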
Your typical enterprise application is not made up of a single object (or bean in the Spring parlance). Even the simplest of applications will no doubt have at least a handful of objects that work together to present what the end-user sees as a coherent application. This next section explains how you go from defining a number of bean definitions that stand-alone, each to themselves, to a fully realized application where objects work (or collaborate) together to achieve some goal (usually an application that does what the end-user wants).
The basic principle behind Dependency Injection (DI) is that objects define their dependencies (that is to say the other objects they work with) only through constructor arguments, arguments to a factory method, or properties which are set on the object instance after it has been constructed or returned from a factory method. Then, it is the job of the container to actually inject those dependencies when it creates the bean. This is fundamentally the inverse, hence the name Inversion of Control (IoC), of the bean itself being in control of instantiating or locating its dependencies on its own using direct construction of classes, or something like the Service Locator pattern.
It becomes evident upon usage that code gets much cleaner when the DI principle is applied, and reaching a higher grade of decoupling is much easier when objects do not look up their dependencies, but are provided with them (and additionally do not even know where the dependencies are located and of what concrete class they are). DI exists in two major variants, namely Constructor Injection and Setter Injection.
Constructor-based DI is effected by invoking a constructor with a number of arguments, each representing a dependency. Additionally, calling a static factory method with specific arguments to construct the bean can be considered almost equivalent, and the rest of this text will consider arguments to a constructor and arguments to a static factory method similarly. Find below an example of a class that could only be dependency injected using constructor injection. Notice that there is nothing special about this class.
public class SimpleMovieLister {

    // the SimpleMovieLister has a dependency on a MovieFinder
    private MovieFinder movieFinder;

    // a constructor so that the Spring container can 'inject' a MovieFinder
    public SimpleMovieLister(MovieFinder movieFinder) {
        this.movieFinder = movieFinder;
    }

    // business logic that actually 'uses' the injected MovieFinder is omitted...
}
Constructor argument resolution matching occurs using the argument's type. If there is no potential for ambiguity in the constructor arguments of a bean definition, then the order in which the constructor arguments are defined in a bean definition is the order in which those arguments will be supplied to the appropriate constructor when it is being instantiated. Consider the following class:
package x.y;

public class Foo {

    public Foo(Bar bar, Baz baz) {
        // ...
    }
}
There is no potential for ambiguity here (assuming of course that the Bar and Baz classes are not related in an inheritance hierarchy). Thus the following configuration will work just fine, and you do not need to specify the constructor argument indexes and/or types explicitly.
<beans>
    <bean name="foo" class="x.y.Foo">
        <constructor-arg>
            <bean class="x.y.Bar"/>
        </constructor-arg>
        <constructor-arg>
            <bean class="x.y.Baz"/>
        </constructor-arg>
    </bean>
</beans>
When another bean is referenced, the type is known, and matching can occur (as was the case with the preceding example). When a simple type is used, such as <value>true</value>, Spring cannot determine the type of the value, and so cannot match by type without help. Consider the following class:
package examples;

public class ExampleBean {

    // Number of years to calculate the Ultimate Answer
    private int years;

    // The Answer to Life, the Universe, and Everything
    private String ultimateAnswer;

    public ExampleBean(int years, String ultimateAnswer) {
        this.years = years;
        this.ultimateAnswer = ultimateAnswer;
    }
}
The above scenario can use type matching with simple types by explicitly specifying the type of the constructor argument using the 'type' attribute. For example:
<bean id="exampleBean" class="examples.ExampleBean">
    <constructor-arg type="int" value="7500000"/>
    <constructor-arg type="java.lang.String" value="42"/>
</bean>
Constructor arguments can have their index specified explicitly by use of the index attribute. For example:
<bean id="exampleBean" class="examples.ExampleBean">
    <constructor-arg index="0" value="7500000"/>
    <constructor-arg index="1" value="42"/>
</bean>
As well as solving the ambiguity problem of multiple simple values, specifying an index also solves the problem of ambiguity where a constructor may have two arguments of the same type. Note that the index is 0 based.
Setter-based DI is realized by calling setter methods on your beans after invoking a no-argument constructor or no-argument static factory method to instantiate your bean.
Find below an example of a class that can only be dependency injected using pure setter injection. Note that there is nothing special about this class... it is plain old Java.
public class SimpleMovieLister {

    // the SimpleMovieLister has a dependency on the MovieFinder
    private MovieFinder movieFinder;

    // a setter method so that the Spring container can 'inject' a MovieFinder
    public void setMovieFinder(MovieFinder movieFinder) {
        this.movieFinder = movieFinder;
    }

    // business logic that actually 'uses' the injected MovieFinder is omitted...
}
The BeanFactory supports both of these variants for injecting dependencies into beans it manages. (It in fact also supports injecting setter-based dependencies after some dependencies have already been supplied via the constructor approach.) The configuration for the dependencies comes in the form of a BeanDefinition, which is used together with PropertyEditor instances to know how to convert properties from one format to another. However, most users of Spring will not be dealing with these classes directly (that is, programmatically), but rather with an XML definition file which will be converted internally into instances of these classes, and used to load an entire Spring IoC container instance.
Bean dependency resolution generally happens as follows:
The BeanFactory is created and initialized with a configuration which describes all the beans. (Most Spring users use a BeanFactory or ApplicationContext implementation that supports XML format configuration files.)
Each bean has dependencies expressed in the form of properties, constructor arguments, or arguments to the static-factory method when that is used instead of a normal constructor. These dependencies will be provided to the bean, when the bean is actually created.
Each property or constructor argument is either an actual definition of the value to set, or a reference to another bean in the container.
Each property or constructor argument which is a value must be able to be converted from whatever format it was specified in, to the actual type of that property or constructor argument. By default Spring can convert a value supplied in string format to all built-in types, such as int, long, String, boolean, etc.
The Spring container validates the configuration of each bean as the container is created, including the validation that properties which are bean references are actually referring to valid beans. However, the bean properties themselves are not set until the bean is actually created. For those beans that are singleton-scoped and set to be pre-instantiated (such as singleton beans in an ApplicationContext), creation happens at the time that the container is created, but otherwise this is only when the bean is requested. When a bean actually has to be created, this will potentially cause a graph of other beans to be created, as its dependencies and its dependencies' dependencies (and so on) are created and assigned.
You can generally trust Spring to do the right thing. It will detect misconfiguration issues, such as references to non-existent beans and circular dependencies, at container load-time. It will actually set properties and resolve dependencies as late as possible, which is when the bean is actually created. This means that a Spring container which has loaded correctly can later generate an exception when you request a bean if there is a problem creating that bean or one of its dependencies. This could happen if the bean throws an exception as a result of a missing or invalid property, for example. This potentially delayed visibility of some configuration issues is why ApplicationContext implementations by default pre-instantiate singleton beans. At the cost of some upfront time and memory to create these beans before they are actually needed, you find out about configuration issues when the ApplicationContext is created, not later. If you wish, you can still override this default behavior and set any of these singleton beans to lazy-initialize (that is, not be pre-instantiated).
If no circular dependencies are involved (see sidebar for a discussion of circular dependencies), when one or more collaborating beans are being injected into a dependent bean, each collaborating bean is totally configured prior to being passed (via one of the DI flavors) to the dependent bean. This means that if bean A has a dependency on bean B, the Spring IoC container will totally configure bean B prior to invoking the setter method on bean A; you can read 'totally configure' to mean that the bean will be instantiated (if not a pre-instantiated singleton), all of its dependencies will be set, and the relevant lifecycle methods (such as a configured init method or the InitializingBean callback method) will all be invoked.
First, an example of using XML-based configuration metadata for setter-based DI. Find below a small part of a Spring XML configuration file specifying some bean definitions.
<bean id="exampleBean" class="examples.ExampleBean">

    <!-- setter injection using the nested <ref/> element -->
    <property name="beanOne"><ref bean="anotherExampleBean"/></property>

    <!-- setter injection using the neater 'ref' attribute -->
    <property name="beanTwo" ref="yetAnotherBean"/>
    <property name="integerProperty" value="1"/>
</bean>

<bean id="anotherExampleBean" class="examples.AnotherBean"/>
<bean id="yetAnotherBean" class="examples.YetAnotherBean"/>
public class ExampleBean {

    private AnotherBean beanOne;
    private YetAnotherBean beanTwo;
    private int i;

    public void setBeanOne(AnotherBean beanOne) {
        this.beanOne = beanOne;
    }

    public void setBeanTwo(YetAnotherBean beanTwo) {
        this.beanTwo = beanTwo;
    }

    public void setIntegerProperty(int i) {
        this.i = i;
    }
}
As you can see, setters have been declared to match against the properties specified in the XML file. Find below an example of using constructor-based DI.
<bean id="exampleBean" class="examples.ExampleBean">

    <!-- constructor injection using the nested <ref/> element -->
    <constructor-arg>
        <ref bean="anotherExampleBean"/>
    </constructor-arg>

    <!-- constructor injection using the neater 'ref' attribute -->
    <constructor-arg ref="yetAnotherBean"/>

    <constructor-arg type="int" value="1"/>
</bean>

<bean id="anotherExampleBean" class="examples.AnotherBean"/>
<bean id="yetAnotherBean" class="examples.YetAnotherBean"/>
public class ExampleBean {

    private AnotherBean beanOne;
    private YetAnotherBean beanTwo;
    private int i;

    public ExampleBean(
            AnotherBean anotherBean, YetAnotherBean yetAnotherBean, int i) {
        this.beanOne = anotherBean;
        this.beanTwo = yetAnotherBean;
        this.i = i;
    }
}
As you can see, the constructor arguments specified in the bean definition will be passed as arguments to the constructor of the ExampleBean.
Now consider a variant of this where instead of using a constructor, Spring is told to call a static factory method to return an instance of the object:
<bean id="exampleBean" class="examples.ExampleBean"
      factory-method="createInstance">
    <constructor-arg ref="anotherExampleBean"/>
    <constructor-arg ref="yetAnotherBean"/>
    <constructor-arg value="1"/>
</bean>

<bean id="anotherExampleBean" class="examples.AnotherBean"/>
<bean id="yetAnotherBean" class="examples.YetAnotherBean"/>
public class ExampleBean {

    // a private constructor
    private ExampleBean(...) {
        ...
    }

    // a static factory method; the arguments to this method can be
    // considered the dependencies of the bean that is returned,
    // regardless of how those arguments are actually used.
    public static ExampleBean createInstance(
            AnotherBean anotherBean, YetAnotherBean yetAnotherBean, int i) {

        ExampleBean eb = new ExampleBean(...);
        // some other operations...
        return eb;
    }
}
Note that arguments to the static factory method are supplied via <constructor-arg/> elements, exactly the same as if a constructor had actually been used. Also, it is important to realize that the type of the class being returned by the factory method does not have to be of the same type as the class which contains the static factory method, although in this example it is. An instance (non-static) factory method would be used in an essentially identical fashion (aside from the use of the factory-bean attribute instead of the class attribute), so details will not be discussed here.
As mentioned in the previous section, bean properties and constructor arguments can be defined as either references to other managed beans (collaborators), or values defined inline. Spring's XML-based configuration metadata supports a number of sub-element types within its <property/> and <constructor-arg/> elements for just this purpose.
The <value/> element specifies a property or constructor argument as a human-readable string representation. As mentioned previously, JavaBeans PropertyEditors are used to convert these string values from a String to the actual type of the property or argument.
<bean id="myDataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">

    <!-- results in a setDriverClassName(String) call -->
    <property name="driverClassName">
        <value>com.mysql.jdbc.Driver</value>
    </property>
    <property name="url">
        <value>jdbc:mysql://localhost:3306/mydb</value>
    </property>
    <property name="username">
        <value>root</value>
    </property>
    <property name="password">
        <value>masterkaoli</value>
    </property>
</bean>
The <property/> and <constructor-arg/> elements also support the use of the 'value' attribute, which can lead to much more succinct configuration. When using the 'value' attribute, the above bean definition reads like so:
<bean id="myDataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close"> <!-- results in a setDriverClassName(String) call --> <property name="driverClassName" value="com.mysql.jdbc.Driver"/> <property name="url" value="jdbc:mysql://localhost:3306/mydb"/> <property name="username" value="root"/> <property name="password" value="masterkaoli"/> </bean>
The Spring team generally prefer the attribute style over the
use of nested <value/>
elements. If you are
reading this reference manual straight through from top to bottom
(wow!) then we are getting slightly ahead of ourselves here, but you
can also configure a java.util.Properties
instance like so:
<bean id="mappings" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer"> <!-- typed as a java.util.Properties --> <property name="properties"> <value> jdbc.driver.className=com.mysql.jdbc.Driver jdbc.url=jdbc:mysql://localhost:3306/mydb </value> </property> </bean>
Can you see what is happening? The Spring container is
converting the text inside the <value/>
element into a java.util.Properties
instance
using the JavaBeans PropertyEditor
mechanism. This is a nice shortcut, and is one of a few places where
the Spring team do favor the use of the nested
<value/>
element over the
'value'
attribute style.
The idref element is simply an error-proof way to pass the id of another bean in the container (to a <constructor-arg/> or <property/> element).
<bean id="theTargetBean" class="..."/> <bean id="theClientBean" class="..."> <property name="targetName"> <idref bean="theTargetBean" /> </property> </bean>
The above bean definition snippet is exactly equivalent (at runtime) to the following snippet:
<bean id="theTargetBean" class="..." /> <bean id="client" class="..."> <property name="targetName" value="theTargetBean" /> </bean>
The main reason the first form is preferable to the second is
that using the idref
tag allows the container to
validate at deployment time that the
referenced, named bean actually exists. In the second variation, no
validation is performed on the value that is passed to the
'targetName'
property of the
'client'
bean. Any typo will only be discovered
(with most likely fatal results) when the
'client'
bean is actually instantiated. If the
'client'
bean is a prototype bean, this typo (and
the resulting exception) may only be discovered long after the
container is actually deployed.
Additionally, if the bean being referred to is in the same XML
unit, and the bean name is the bean id, the
'local'
attribute may be used, which allows the
XML parser itself to validate the bean id even earlier, at XML
document parse time.
<property name="targetName"> <!-- a bean with an id of 'theTargetBean' must exist; otherwise an XML exception will be thrown --> <idref local="theTargetBean"/> </property>
By way of an example, one common place (at least in pre-Spring
2.0 configuration) where the <idref/> element brings value is
in the configuration of AOP
interceptors in a ProxyFactoryBean
bean definition. If you use <idref/> elements when specifying
the interceptor names, there is no chance of inadvertently
misspelling an interceptor id.
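For instance, a ProxyFactoryBean definition along the lines of the following sketch (the interceptor and target class names are illustrative only) lets the container validate the interceptor name at deployment time:

<bean id="debugInterceptor" class="org.springframework.aop.interceptor.DebugInterceptor"/>

<bean id="myService" class="org.springframework.aop.framework.ProxyFactoryBean">
    <property name="target">
        <!-- hypothetical target class -->
        <bean class="com.example.DefaultMyService"/>
    </property>
    <property name="interceptorNames">
        <list>
            <!-- using <idref/> means a misspelled interceptor name is caught at deployment time -->
            <idref bean="debugInterceptor"/>
        </list>
    </property>
</bean>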
The ref element is the final element allowed inside a <constructor-arg/> or <property/> definition element. It is used to set the value of the specified property to be a reference to another bean managed by the container (a collaborator). As mentioned in a previous section, the referred-to bean is considered to be a dependency of the bean whose property is being set, and will be initialized on demand as needed (if it is a singleton bean it may have already been initialized by the container) before the property is set. All references are ultimately just a reference to another object, but there are three variations on how the id/name of the other object may be specified, which determines how scoping and validation is handled.
Specifying the target bean by using the bean
attribute of the <ref/>
tag is the most
general form, and will allow creating a reference to any bean in the
same container (whether or not in the same XML file), or parent
container. The value of the 'bean'
attribute may be
the same as either the 'id'
attribute of the target
bean, or one of the values in the 'name'
attribute
of the target bean.
<ref bean="someBean"/>
Specifying the target bean by using the local
attribute leverages the ability of the XML parser to validate XML id
references within the same file. The value of the
local
attribute must be the same as the
id
attribute of the target bean. The XML parser
will issue an error if no matching element is found in the same file.
As such, using the local variant is the best choice (in order to know
about errors as early as possible) if the target bean is in the same
XML file.
<ref local="someBean"/>
Specifying the target bean by using the
'parent'
attribute allows a reference to be created
to a bean which is in a parent container of the current container. The
value of the 'parent'
attribute may be the same as
either the 'id'
attribute of the target bean, or
one of the values in the 'name'
attribute of the
target bean, and the target bean must be in a parent container to the
current one. The main use of this bean reference variant is when you
have a hierarchy of containers and you want to wrap an existing bean
in a parent container with some sort of proxy which will have the same
name as the parent bean.
<!-- in the parent context --> <bean id="accountService" class="com.foo.SimpleAccountService"> <!-- insert dependencies as required as here --> </bean>
<!-- in the child (descendant) context --> <!-- notice that the name of this bean is the same as the name of the 'parent' bean, and that we refer to the parent bean via <ref parent="..."/> --> <bean id="accountService" class="org.springframework.aop.framework.ProxyFactoryBean"> <property name="target"> <ref parent="accountService"/> </property> <!-- insert other configuration and dependencies as required as here --> </bean>
A <bean/>
element inside the
<property/>
or
<constructor-arg/>
elements is used to define
a so-called inner bean. An inner bean
definition does not need to have any id or name defined, and it is
best not to even specify any id or name value because the id or name
value simply will be ignored by the container.
<bean id="outer" class="..."> <!-- instead of using a reference to a target bean, simply define the target bean inline --> <property name="target"> <bean class="com.example.Person"> <!-- this is the inner bean --> <property name="name" value="Fiona Apple"/> <property name="age" value="25"/> </bean> </property> </bean>
Note that in the specific case of inner beans, the
'scope'
flag and any 'id'
or
'name'
attribute are effectively ignored. Inner
beans are always anonymous and they are
always scoped as prototypes. Please
also note that it is not possible to inject inner
beans into collaborating beans other than the enclosing bean.
The <list/>, <set/>, <map/>, and <props/> elements allow properties and arguments of the Java Collection types List, Set, Map, and Properties, respectively, to be defined and set.
<bean id="moreComplexObject" class="example.ComplexObject"> <!-- results in a setAdminEmails(java.util.Properties) call --> <property name="adminEmails"> <props> <prop key="administrator">[email protected]</prop> <prop key="support">[email protected]</prop> <prop key="development">[email protected]</prop> </props> </property> <!-- results in a setSomeList(java.util.List) call --> <property name="someList"> <list> <value>a list element followed by a reference</value> <ref bean="myDataSource" /> </list> </property> <!-- results in a setSomeMap(java.util.Map) call --> <property name="someMap"> <map> <entry> <key> <value>an entry</value> </key> <value>just some string</value> </entry> <entry> <key> <value>a ref</value> </key> <ref bean="myDataSource" /> </entry> </map> </property> <!-- results in a setSomeSet(java.util.Set) call --> <property name="someSet"> <set> <value>just some string</value> <ref bean="myDataSource" /> </set> </property> </bean>
Note: The nested element style used in this initial example tends to become quite verbose. Fortunately, there are attribute shortcuts for most elements, which you can read about in Section 4.3.2.6, “Shortcuts and other convenience options for XML-based configuration metadata”.
Note that the value of a map key or value, or a set value, can also again be any of the following elements:
bean | ref | idref | list | set | map | props | value | null
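For instance, a map value can itself be a nested collection. The following sketch (the class and property names are made up purely for illustration) nests a <list/> inside a <map/> entry:

<bean id="nestedCollections" class="com.example.GroupHolder">
    <!-- results in a setGroups(java.util.Map) call; each map value is a java.util.List -->
    <property name="groups">
        <map>
            <entry key="admins">
                <list>
                    <value>alice</value>
                    <value>bob</value>
                </list>
            </entry>
            <entry key="guests">
                <list>
                    <value>carol</value>
                </list>
            </entry>
        </map>
    </property>
</bean>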
As of Spring 2.0, the container also supports the merging of collections. This allows an application developer to define a parent-style <list/>, <map/>, <set/> or <props/> element, and have child-style <list/>, <map/>, <set/> or <props/> elements inherit and override values from the parent collection; that is to say, the child collection's values will be the result obtained from the merging of the elements of the parent and child collections, with the child's collection elements overriding values specified in the parent collection.
Please note that this section on merging makes use of the parent-child bean mechanism. This concept has not yet been introduced, so readers unfamiliar with the concept of parent and child bean definitions may wish to read the relevant section before continuing.
Find below an example of the collection merging feature:
<beans> <bean id="parent" abstract="true" class="example.ComplexObject"> <property name="adminEmails"> <props> <prop key="administrator">[email protected]</prop> <prop key="support">[email protected]</prop> </props> </property> </bean> <bean id="child" parent="parent"> <property name="adminEmails"> <!-- the merge is specified on the *child* collection definition --> <props merge="true"> <prop key="sales">[email protected]</prop> <prop key="support">[email protected]</prop> </props> </property> </bean> <beans>
Notice the use of the merge="true" attribute on the <props/> element of the adminEmails property of the child bean definition. When the child bean is actually resolved and instantiated by the container, the resulting instance will have an adminEmails Properties collection that contains the result of the merging of the child's adminEmails collection with the parent's adminEmails collection.
administrator=[email protected] sales=[email protected] support=[email protected]
Notice how the child Properties
collection's value set will have inherited all the property elements
from the parent <props/>
. Notice also how
the child's value for the support
value overrides
the value in the parent collection.
This merging behavior applies similarly to the <list/>, <map/>, and <set/> collection types. In the specific case of the <list/> element, the semantics associated with the List collection type, that is the notion of an ordered collection of values, is maintained; the parent's values will precede all of the child list's values. In the case of the Map, Set, and Properties collection types, there is no notion of ordering and hence no ordering semantics are in effect for the collection types that underlie the associated Map, Set and Properties implementation types used internally by the container.
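As a small sketch of the ordering semantics just described (the class and property names are illustrative, not part of the earlier examples), merging a parent and child <list/> as follows would result in a list containing parentItemOne, parentItemTwo, childItemOne, in that order:

<bean id="parentWithList" abstract="true" class="com.example.ItemHolder">
    <property name="items">
        <list>
            <value>parentItemOne</value>
            <value>parentItemTwo</value>
        </list>
    </property>
</bean>

<bean id="childWithList" parent="parentWithList">
    <property name="items">
        <!-- merge is declared on the child collection; the parent's values come first -->
        <list merge="true">
            <value>childItemOne</value>
        </list>
    </property>
</bean>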
Finally, some minor notes about the merging support are in order: you cannot merge different collection types (e.g. a Map and a List), and if you do attempt to do so an appropriate Exception will be thrown; in case it is not immediately obvious, the 'merge' attribute must be specified on the lower level, inherited, child definition; specifying the 'merge' attribute on a parent collection definition is redundant and will not result in the desired merging; and (lastly), please note that this merging feature is only available in Spring 2.0 (and later versions).
If you are using Java 5 or Java 6, you will be aware that it
is possible to have strongly typed collections (using generic
types). That is, it is possible to declare a
Collection
type such that it can only
contain String
elements (for example). If you
are using Spring to dependency inject a strongly-typed
Collection
into a bean, you can take
advantage of Spring's type-conversion support such that the elements
of your strongly-typed Collection
instances will be converted to the appropriate type prior to being
added to the Collection
.
public class Foo { private Map<String, Float> accounts; public void setAccounts(Map<String, Float> accounts) { this.accounts = accounts; } }
<beans> <bean id="foo" class="x.y.Foo"> <property name="accounts"> <map> <entry key="one" value="9.99"/> <entry key="two" value="2.75"/> <entry key="six" value="3.99"/> </map> </property> </bean> </beans>
When the 'accounts' property of the 'foo' bean is being prepared for injection, the generics information about the element type of the strongly-typed Map<String, Float> is available via reflection, and so Spring's type conversion infrastructure will recognize the various value elements as being of type Float; the string values '9.99', '2.75', and '3.99' will be converted into actual Float values.
The <null/> element is used to handle null values. Spring treats empty arguments for properties and the like as empty Strings. The following XML-based configuration metadata snippet results in the email property being set to the empty String value (""):
<bean class="ExampleBean"> <property name="email"><value/></property> </bean>
This is equivalent to the following Java code: exampleBean.setEmail(""). The special <null/> element may be used to indicate a null value. For example:
<bean class="ExampleBean"> <property name="email"><null/></property> </bean>
The above configuration is equivalent to the following Java
code: exampleBean.setEmail(null)
.
The configuration metadata shown so far is a tad verbose. That
is why there are several options available for you to limit the amount
of XML you have to write to configure your components. The first is a
shortcut to define values and references to other beans as part of a
<property/>
definition. The second is
slightly different format of specifying properties altogether.
The <property/>, <constructor-arg/>, and <entry/> elements all support a 'value' attribute which may be used instead of embedding a full <value/> element. Therefore, the following:
<property name="myProperty"> <value>hello</value> </property>
<constructor-arg> <value>hello</value> </constructor-arg>
<entry key="myKey"> <value>hello</value> </entry>
are equivalent to:
<property name="myProperty" value="hello"/>
<constructor-arg value="hello"/>
<entry key="myKey" value="hello"/>
The <property/> and <constructor-arg/> elements support a similar shortcut 'ref' attribute which may be used instead of a full nested <ref/> element. Therefore, the following:
<property name="myProperty"> <ref bean="myBean"> </property>
<constructor-arg> <ref bean="myBean"/> </constructor-arg>
... are equivalent to:
<property name="myProperty" ref="myBean"/>
<constructor-arg ref="myBean"/>
Note however that the shortcut form is equivalent to a <ref bean="xxx"> element; there is no shortcut for <ref local="xxx">. To enforce a strict local reference, you must use the long form.
Finally, the entry element allows a shortcut form to specify
the key and/or value of the map, in the form of the
'key'
/ 'key-ref'
and
'value'
/ 'value-ref'
attributes. Therefore, the following:
<entry> <key> <ref bean="myKeyBean" /> </key> <ref bean="myValueBean" /> </entry>
is equivalent to:
<entry key-ref="myKeyBean" value-ref="myValueBean"/>
Again, the shortcut form is equivalent to a <ref bean="xxx"> element; there is no shortcut for <ref local="xxx">.
The second option you have to limit the amount of XML you have
to write to configure your components is to use the special
"p-namespace". Spring 2.0 and later features support for extensible
configuration formats using
namespaces. Those namespaces are all based on an XML Schema
definition. In fact, the beans
configuration
format that you've been reading about is defined in an XML Schema
document.
One special namespace is not defined in an XSD file, and only
exists in the core of Spring itself. The so-called p-namespace
doesn't need a schema definition and is an alternative way of
configuring your properties differently than the way you have seen
so far. Instead of using nested <property/>
elements, using the p-namespace you can use attributes as part of
the bean
element that describe your property
values. The values of the attributes will be taken as the values for
your properties.
The following two XML snippets boil down to the same thing in the end: the first is using the standard XML format whereas the second example is using the p-namespace.
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p="http://www.springframework.org/schema/p" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd"> <bean name="classic" class="com.example.ExampleBean"> <property name="email" value="[email protected]"/> </bean> <bean name="p-namespace" class="com.example.ExampleBean" p:email="[email protected]"/> </beans>
As you can see, we are including an attribute in the p-namespace called email in the bean definition - this is telling Spring that it should include a property declaration. As previously mentioned, the p-namespace doesn't have a schema definition, so the name of the attribute can be set to whatever name your property has.
This next example includes two more bean definitions that both have a reference to another bean:
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p="http://www.springframework.org/schema/p" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd"> <bean name="john-classic" class="com.example.Person"> <property name="name" value="John Doe"/> <property name="spouse" ref="jane"/> </bean> <bean name="john-modern" class="com.example.Person" p:name="John Doe" p:spouse-ref="jane"/> <bean name="jane" class="com.example.Person"> <property name="name" value="Jane Doe"/> </bean> </beans>
As you can see, this example doesn't only include a property
value using the p-namespace, but also uses a special format to
declare property references. Whereas the first bean definition uses
<property name="spouse" ref="jane"/>
to
create a reference from bean john
to bean
jane
, the second bean definition uses
p:spouse-ref="jane"
as an attribute to do the
exact same thing. In this case 'spouse
' is the
property name whereas the '-ref
' part indicates
that this is not a straight value but rather a reference to another
bean.
Note: Please note that the p-namespace is not quite as flexible as the standard XML format - for example, the 'special' format used to declare property references will clash with properties that end in 'Ref'.
Compound or nested property names are perfectly legal when
setting bean properties, as long as all components of the path except
the final property name are not null
. Consider the
following bean definition...
<bean id="foo" class="foo.Bar"> <property name="fred.bob.sammy" value="123" /> </bean>
The foo bean has a fred property, which has a bob property, which has a sammy property, and that final sammy property is being set to the value 123. In order for this to work, the fred property of foo, and the bob property of fred, must be non-null after the bean is constructed, or a NullPointerException will be thrown.
For most situations, the fact that a bean is a dependency of
another is expressed by the fact that one bean is set as a property of
another. This is typically accomplished with the <ref/>
element in XML-based configuration metadata. For the relatively
infrequent situations where dependencies between beans are less direct
(for example, when a static initializer in a class needs to be
triggered, such as database driver registration), the
'depends-on'
attribute may be used to explicitly
force one or more beans to be initialized before the bean using this
element is initialized. Find below an example of using the
'depends-on'
attribute to express a dependency on a
single bean.
<bean id="beanOne" class="ExampleBean" depends-on="manager"/> <bean id="manager" class="ManagerBean" />
If you need to express a dependency on multiple beans, you can
supply a list of bean names as the value of the
'depends-on'
attribute, with commas, whitespace and
semicolons all valid delimiters, like so:
<bean id="beanOne" class="ExampleBean" depends-on="manager,accountDao"> <property name="manager" ref="manager" /> </bean> <bean id="manager" class="ManagerBean" /> <bean id="accountDao" class="x.y.jdbc.JdbcAccountDao" />
The default behavior for
ApplicationContext
implementations is to
eagerly pre-instantiate all singleton
beans at
startup. Pre-instantiation means that an
ApplicationContext
will eagerly create
and configure all of its singleton beans as part
of its initialization process. Generally this is a good
thing, because it means that any errors in the configuration
or in the surrounding environment will be discovered immediately (as
opposed to possibly hours or even days down the line).
However, there are times when this behavior is
not what is wanted. If you do not want a singleton
bean to be pre-instantiated when using an
ApplicationContext
, you can selectively
control this by marking a bean definition as lazy-initialized. A
lazily-initialized bean indicates to the IoC container whether or not a
bean instance should be created at startup or when it is first
requested.
When configuring beans via XML, this lazy loading is controlled by
the 'lazy-init'
attribute on the
<bean/>
element; for example:
<bean id="lazy" class="com.foo.ExpensiveToCreateBean" lazy-init="true"/> <bean name="not.lazy" class="com.foo.AnotherBean"/>
When the above configuration is consumed by an
ApplicationContext
, the bean named
'lazy'
will not be eagerly
pre-instantiated when the
ApplicationContext
is starting up,
whereas the 'not.lazy'
bean will be eagerly
pre-instantiated.
One thing to understand about lazy-initialization is that even
though a bean definition may be marked up as being lazy-initialized, if
the lazy-initialized bean is the dependency of a singleton bean that is
not lazy-initialized, when the
ApplicationContext
is eagerly
pre-instantiating the singleton, it will have to satisfy all of the
singletons dependencies, one of which will be the lazy-initialized bean!
So don't be confused if the IoC container creates one of the beans that
you have explicitly configured as lazy-initialized at startup; all that
means is that the lazy-initialized bean is being injected into a
non-lazy-initialized singleton bean elsewhere.
It is also possible to control lazy-initialization at the
container level by using the 'default-lazy-init'
attribute on the <beans/>
element; for
example:
<beans default-lazy-init="true"> <!-- no beans will be pre-instantiated... --> </beans>
The Spring container is able to autowire
relationships between collaborating beans. This means that it is
possible to automatically let Spring resolve collaborators (other beans)
for your bean by inspecting the contents of the
BeanFactory
. The autowiring functionality
has five modes. Autowiring is specified per bean
and can thus be enabled for some beans, while other beans will not be
autowired. Using autowiring, it is possible to reduce or eliminate the
need to specify properties or constructor arguments, thus saving a
significant amount of typing. [2] When using XML-based configuration metadata, the autowire
mode for a bean definition is specified by using the
autowire
attribute of the
<bean/>
element. The following values are
allowed:
Table 4.2. Autowiring modes

Mode | Explanation |
---|---|
no | No autowiring at all. Bean references must be defined via a ref element. |
byName | Autowiring by property name. This option will inspect the container and look for a bean named exactly the same as the property which needs to be autowired. For example, if you have a bean definition which is set to autowire by name, and it contains a master property (that is, it has a setMaster(..) method), Spring will look for a bean definition named master, and use it to set the property. |
byType | Allows a property to be autowired if there is exactly one bean of the property type in the container. If there is more than one, a fatal exception is thrown, and this indicates that you may not use byType autowiring for that bean. If there are no matching beans, nothing happens; the property is not set. If this is not desirable, setting the dependency-check="objects" attribute value specifies that an error should be thrown in this case. |
constructor | This is analogous to byType, but applies to constructor arguments. If there isn't exactly one bean of the constructor argument type in the container, a fatal error is raised. |
autodetect | Chooses constructor or byType through introspection of the bean class. If a default constructor is found, the byType mode will be applied. |
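By way of a sketch (the class names are hypothetical), autowiring by name could be configured as follows; because the 'worker' bean has a master property and a bean named 'master' exists, that bean is injected without any explicit <property/> element:

<!-- the 'master' property of this bean will be autowired by name -->
<bean id="worker" class="com.example.Worker" autowire="byName"/>

<!-- the bean name matches the property name, so it is picked up automatically -->
<bean id="master" class="com.example.Master"/>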
Note that explicit dependencies in property
and
constructor-arg
settings
always
override autowiring. Please also
note that it is not currently possible to autowire so-called
simple properties such as primitives,
Strings
, and Classes
(and
arrays of such simple properties). (This is by-design and should be
considered a feature.) When using either the
byType or constructor
autowiring mode, it is possible to wire arrays and typed-collections. In
such cases all autowire candidates within the
container that match the expected type will be provided to satisfy the
dependency. Strongly-typed Maps can even be autowired if the expected
key type is String
. An autowired Map's values
will consist of all bean instances that match the expected type, and the
Map's keys will contain the corresponding bean names.
Autowire behavior can be combined with dependency checking, which will be performed after all autowiring has been completed.
It is important to understand the various advantages and disadvantages of autowiring. Some advantages of autowiring include:
Autowiring can significantly reduce the volume of configuration required. However, mechanisms such as the use of a bean template (discussed elsewhere in this chapter) are also valuable in this regard.
Autowiring can cause configuration to keep itself up to date as your objects evolve. For example, if you need to add an additional dependency to a class, that dependency can be satisfied automatically without the need to modify configuration. Thus there may be a strong case for autowiring during development, without ruling out the option of switching to explicit wiring when the code base becomes more stable.
Some disadvantages of autowiring:
Autowiring is more magical than explicit wiring. Although, as noted in the above table, Spring is careful to avoid guessing in case of ambiguity which might have unexpected results, the relationships between your Spring-managed objects are no longer documented explicitly.
Wiring information may not be available to tools that may generate documentation from a Spring container.
Another issue to consider when autowiring by type is that multiple
bean definitions within the container may match the type specified by
the setter method or constructor argument to be autowired. For arrays,
collections, or Maps, this is not necessarily a problem. However for
dependencies that expect a single value, this ambiguity will not be
arbitrarily resolved. Instead, if no unique bean definition is
available, an Exception will be thrown. You do have several options when
confronted with this scenario. First, you may abandon autowiring in
favor of explicit wiring. Second, you may designate that certain bean
definitions are never to be considered as candidates by setting their
'autowire-candidate'
attributes to
'false'
as described in the next section. Third, you
may designate a single bean definition as the
primary candidate by setting the
'primary'
attribute of its
<bean/>
element to 'true'
.
Finally, if you are using at least Java 5, you may be interested in
exploring the more fine-grained control available with annotation-based
configuration as described in the section entitled Section 4.11, “Annotation-based configuration”.
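As an illustrative sketch of the third option (the class names are hypothetical), the bean definition marked as primary wins when byType autowiring finds more than one candidate of the required type:

<!-- preferred when multiple beans of the same type are candidates for autowiring -->
<bean id="mainDataSource" class="com.example.SimpleDataSource" primary="true"/>

<!-- still defined, but not chosen for autowiring unless referenced explicitly -->
<bean id="backupDataSource" class="com.example.SimpleDataSource"/>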
When deciding whether to use autowiring, there is no wrong or right answer in all cases. A degree of consistency across a project is best though; for example, if autowiring is not used in general, it might be confusing to developers to use it just to wire one or two bean definitions.
You can also (on a per-bean basis) totally exclude a bean from
being an autowire candidate. When configuring beans using Spring's XML
format, the 'autowire-candidate'
attribute of the
<bean/>
element can be set to
'false'
; this has the effect of making the
container totally exclude that specific bean definition from being
available to the autowiring infrastructure.
Another option is to limit autowire candidates based on
pattern-matching against bean names. The top-level
<beans/>
element accepts one or more patterns
within its 'default-autowire-candidates'
attribute.
For example, to limit autowire candidate status to any bean whose name
ends with 'Repository', provide a value of
'*Repository'. To provide multiple patterns, define them in a
comma-separated list. Note that an explicit value of
'true'
or 'false'
for a bean
definition's 'autowire-candidate'
attribute always
takes precedence, and for such beans, the pattern matching rules will
not apply.
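A brief sketch of the pattern-based approach follows (the bean names are illustrative); only beans whose names end with 'Repository' are considered as autowire candidates by default, although an explicit 'autowire-candidate' value still overrides the pattern:

<beans default-autowire-candidates="*Repository">

    <!-- matches the pattern, so it may be autowired into other beans -->
    <bean id="accountRepository" class="com.example.JdbcAccountRepository"/>

    <!-- does not match the pattern, but the explicit attribute takes precedence -->
    <bean id="auditService" class="com.example.AuditService" autowire-candidate="true"/>

</beans>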
These techniques can be useful when you have one or more beans that you absolutely never ever want to have injected into other beans via autowiring. It does not mean that an excluded bean cannot itself be configured using autowiring... it can, it is rather that it itself will not be considered as a candidate for autowiring other beans.
The Spring IoC container also has the ability to check for the existence of unresolved dependencies of a bean deployed into the container. These are JavaBeans properties of the bean, which do not have actual values set for them in the bean definition, or alternately provided automatically by the autowiring feature.
This feature is sometimes useful when you want to ensure that all
properties (or all properties of a certain type) are set on a bean. Of
course, in many cases a bean class will have default values for many
properties, or some properties do not apply to all usage scenarios, so
this feature is of limited use. Dependency checking can also be enabled
and disabled per bean, just as with the autowiring functionality. The
default is to not check dependencies. Dependency
checking can be handled in several different modes. When using XML-based
configuration metadata, this is specified via the
'dependency-check'
attribute in a bean definition,
which may have the following values.
Table 4.3. Dependency checking modes
Mode | Explanation |
---|---|
none | No dependency checking. Properties of the bean which have no value specified for them are simply not set. |
simple | Dependency checking is performed for primitive types and collections (everything except collaborators). |
object | Dependency checking is performed for collaborators only. |
all | Dependency checking is done for collaborators, primitive types and collections. |
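For example, to have the container verify that all collaborator properties of a particular bean have been set, a definition along these lines could be used (the class names are hypothetical):

<!-- an error is raised at configuration time if any collaborator property is left unset -->
<bean id="orderService" class="com.example.OrderService" dependency-check="objects">
    <property name="orderDao" ref="orderDao"/>
</bean>

<bean id="orderDao" class="com.example.JdbcOrderDao"/>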
If you are using Java 5 and thus have access to source-level annotations, you may find the section entitled Section 29.3.1, “@Required” to be of interest.
For most application scenarios, the majority of the beans in the container will be singletons. When a singleton bean needs to collaborate with another singleton bean, or a non-singleton bean needs to collaborate with another non-singleton bean, the typical and common approach of handling this dependency by defining one bean to be a property of the other is quite adequate. There is a problem when the bean lifecycles are different. Consider a singleton bean A which needs to use a non-singleton (prototype) bean B, perhaps on each method invocation on A. The container will only create the singleton bean A once, and thus only get the opportunity to set the properties once. There is no opportunity for the container to provide bean A with a new instance of bean B every time one is needed.
One solution to this issue is to forego some inversion of control.
Bean A can be made
aware of the container by implementing the
BeanFactoryAware
interface, and use programmatic means to ask the
container via a getBean("B")
call for (a
typically new) bean B instance every time it needs it. Find below an
admittedly somewhat contrived example of this approach:
// a class that uses a stateful Command-style class to perform some processing package fiona.apple; // lots of Spring-API imports import org.springframework.beans.BeansException; import org.springframework.beans.factory.BeanFactory; import org.springframework.beans.factory.BeanFactoryAware; public class CommandManager implements BeanFactoryAware { private BeanFactory beanFactory; public Object process(Map commandState) { // grab a new instance of the appropriate Command Command command = createCommand(); // set the state on the (hopefully brand new) Command instance command.setState(commandState); return command.execute(); } // the Command returned here could be an implementation that executes asynchronously, or whatever protected Command createCommand() { return (Command) this.beanFactory.getBean("command"); // notice the Spring API dependency } public void setBeanFactory(BeanFactory beanFactory) throws BeansException { this.beanFactory = beanFactory; } }
The above example is generally not a desirable solution since the business code is then aware of and coupled to the Spring Framework. Method Injection, a somewhat advanced feature of the Spring IoC container, allows this use case to be handled in a clean fashion.
Lookup method injection refers to the ability of the container to override methods on container managed beans, to return the result of looking up another named bean in the container. The lookup will typically be of a prototype bean as in the scenario described above. The Spring Framework implements this method injection by dynamically generating a subclass overriding the method, using bytecode generation via the CGLIB library.
So if you look at the code from previous code snippet (the
CommandManager
class), the Spring container is
going to dynamically override the implementation of the
createCommand()
method. Your
CommandManager
class is not going to have any
Spring dependencies, as can be seen in this reworked example
below:
package fiona.apple; // no more Spring imports! public abstract class CommandManager { public Object process(Object commandState) { // grab a new instance of the appropriate Command interface Command command = createCommand(); // set the state on the (hopefully brand new) Command instance command.setState(commandState); return command.execute(); } // okay... but where is the implementation of this method? protected abstract Command createCommand(); }
In the client class containing the method to be injected (the
CommandManager
in this case), the method that
is to be 'injected' must have a signature of the following
form:
<public|protected> [abstract] <return-type> theMethodName(no-arguments);
If the method is abstract
, the
dynamically-generated subclass will implement the method. Otherwise,
the dynamically-generated subclass will override the concrete method
defined in the original class. Let's look at an example:
<!-- a stateful bean deployed as a prototype (non-singleton) --> <bean id="command" class="fiona.apple.AsyncCommand" scope="prototype"> <!-- inject dependencies here as required --> </bean> <!-- commandManager uses a new instance of the prototype 'command' bean for each invocation --> <bean id="commandManager" class="fiona.apple.CommandManager"> <lookup-method name="createCommand" bean="command"/> </bean>
The bean identified as commandManager will
call its own method createCommand()
whenever
it needs a new instance of the command bean. It
is important to note that the person deploying the beans must be
careful to deploy the command
bean as a prototype
(if that is actually what is needed). If it is deployed as a singleton, the same
instance of the command
bean will be returned each
time!
Please be aware that in order for this dynamic subclassing to
work, you will need to have the CGLIB jar(s) on your classpath.
Additionally, the class that the Spring container is going to subclass
cannot be final
, and the method that is being
overridden cannot be final
either. Also, testing a
class that has an abstract
method can be somewhat
odd in that you will have to subclass the class yourself and supply a
stub implementation of the abstract
method.
Finally, objects that have been the target of method injection cannot
be serialized.
Tip: The interested reader may also find the ServiceLocatorFactoryBean (in the org.springframework.beans.factory.config package) to be of use.
A less commonly useful form of method injection than Lookup Method Injection is the ability to replace arbitrary methods in a managed bean with another method implementation. Users may safely skip the rest of this section (which describes this somewhat advanced feature), until this functionality is actually needed.
When using XML-based configuration metadata, the
replaced-method
element may be used to replace an
existing method implementation with another, for a deployed bean.
Consider the following class, with a method computeValue, which we
want to override:
public class MyValueCalculator { public String computeValue(String input) { // some real code... } // some other methods... }
A class implementing the
org.springframework.beans.factory.support.MethodReplacer
interface provides the new method definition.
/** meant to be used to override the existing computeValue(String) implementation in MyValueCalculator */ public class ReplacementComputeValue implements MethodReplacer { public Object reimplement(Object o, Method m, Object[] args) throws Throwable { // get the input value, work with it, and return a computed result String input = (String) args[0]; ... return ...; } }
The bean definition to deploy the original class and specify the method override would look like this:
<bean id="myValueCalculator class="x.y.z.MyValueCalculator"> <!-- arbitrary method replacement --> <replaced-method name="computeValue" replacer="replacementComputeValue"> <arg-type>String</arg-type> </replaced-method> </bean> <bean id="replacementComputeValue" class="a.b.c.ReplacementComputeValue"/>
One or more contained <arg-type/>
elements within the <replaced-method/>
element may be used to indicate the method signature of the method
being overridden. Note that the signature for the arguments is
actually only needed in the case that the method is actually
overloaded and there are multiple variants within the class. For
convenience, the type string for an argument may be a substring of the
fully qualified type name. For example, all the following would match
java.lang.String
.
java.lang.String String Str
Since the number of arguments is often enough to distinguish between each possible choice, this shortcut can save a lot of typing, by allowing you to type just the shortest string that will match an argument type.
When you create a bean definition what you are actually creating is a recipe for creating actual instances of the class defined by that bean definition. The idea that a bean definition is a recipe is important, because it means that, just like a class, you can potentially have many object instances created from a single recipe.
You can control not only the various dependencies and configuration
values that are to be plugged into an object that is created from a
particular bean definition, but also the scope of
the objects created from a particular bean definition. This approach is
very powerful and gives you the flexibility to choose
the scope of the objects you create through configuration instead of
having to 'bake in' the scope of an object at the Java class level. Beans
can be defined to be deployed in one of a number of scopes: out of the
box, the Spring Framework supports exactly five scopes (of which three are
available only if you are using a web-aware
ApplicationContext
).
The scopes supported out of the box are listed below:
Table 4.4. Bean scopes
Scope | Description |
---|---|
singleton | Scopes a single bean definition to a single object instance per Spring IoC container. |
prototype | Scopes a single bean definition to any number of object instances. |
request | Scopes a single bean definition to the lifecycle of a single HTTP request; that is, each and every HTTP request will have its own instance of a bean created off the back of a single bean definition. Only valid in the context of a web-aware Spring ApplicationContext. |
session | Scopes a single bean definition to the lifecycle of an HTTP Session. Only valid in the context of a web-aware Spring ApplicationContext. |
global session | Scopes a single bean definition to the lifecycle of a global HTTP Session. Typically only valid when used in a portlet context. Only valid in the context of a web-aware Spring ApplicationContext. |
When a bean is a singleton, only one shared instance of the bean will be managed, and all requests for beans with an id or ids matching that bean definition will result in that one specific bean instance being returned by the Spring container.
To put it another way, when you define a bean definition and it is scoped as a singleton, then the Spring IoC container will create exactly one instance of the object defined by that bean definition. This single instance will be stored in a cache of such singleton beans, and all subsequent requests and references for that named bean will result in the cached object being returned.
Please be aware that Spring's concept of a singleton bean is quite
different from the Singleton pattern as defined in the seminal Gang of
Four (GoF) patterns book. The GoF Singleton hard codes the scope of an
object such that one and only one instance of a
particular class will ever be created per
ClassLoader
. The scope of the Spring
singleton is best described as per container and per
bean. This means that if you define one bean for a particular
class in a single Spring container, then the Spring container will
create one and only one instance of the class
defined by that bean definition. The singleton scope is the
default scope in Spring. To define a bean as a singleton in
XML, you would write configuration like so:
<bean id="accountService" class="com.foo.DefaultAccountService"/> <!-- the following is equivalent, though redundant (singleton scope is the default); using spring-beans-2.0.dtd --> <bean id="accountService" class="com.foo.DefaultAccountService" scope="singleton"/> <!-- the following is equivalent and preserved for backward compatibility in spring-beans.dtd --> <bean id="accountService" class="com.foo.DefaultAccountService" singleton="true"/>
The non-singleton, prototype scope of bean deployment results in
the creation of a new bean instance every time a
request for that specific bean is made (that is, it is injected into
another bean or it is requested via a programmatic
getBean()
method call on the container). As a rule of
thumb, you should use the prototype scope for all beans that are
stateful, while the singleton scope should be used for stateless
beans.
The following diagram illustrates the Spring prototype scope. Please note that a DAO would not typically be configured as a prototype, since a typical DAO would not hold any conversational state; it was just easier for this author to reuse the core of the singleton diagram.
To define a bean as a prototype in XML, you would write configuration like so:
<!-- using spring-beans-2.0.dtd --> <bean id="accountService" class="com.foo.DefaultAccountService" scope="prototype"/> <!-- the following is equivalent and preserved for backward compatibility in spring-beans.dtd --> <bean id="accountService" class="com.foo.DefaultAccountService" singleton="false"/>
There is one quite important thing to be aware of when deploying a bean in the prototype scope, in that the lifecycle of the bean changes slightly. Spring does not manage the complete lifecycle of a prototype bean: the container instantiates, configures, decorates and otherwise assembles a prototype object, hands it to the client and then has no further knowledge of that prototype instance. This means that while initialization lifecycle callback methods will be called on all objects regardless of scope, in the case of prototypes, any configured destruction lifecycle callbacks will not be called. It is the responsibility of the client code to clean up prototype scoped objects and release any expensive resources that the prototype bean(s) are holding onto. (One possible way to get the Spring container to release resources used by prototype-scoped beans is through the use of a custom bean post-processor which would hold a reference to the beans that need to be cleaned up.)
In some respects, you can think of the Spring containers role when
talking about a prototype-scoped bean as somewhat of a replacement for
the Java 'new'
operator. All lifecycle aspects past
that point have to be handled by the client. (The lifecycle of a bean in
the Spring container is further described in the section entitled Section 4.5.1, “Lifecycle callbacks”.)
When using singleton-scoped beans that have dependencies on beans that are scoped as prototypes, please be aware that dependencies are resolved at instantiation time. This means that if you dependency inject a prototype-scoped bean into a singleton-scoped bean, a brand new prototype bean will be instantiated and then dependency injected into the singleton bean... but that is all. That exact same prototype instance will be the sole instance that is ever supplied to the singleton-scoped bean, which is fine if that is what you want.
However, sometimes what you actually want is for the singleton-scoped bean to be able to acquire a brand new instance of the prototype-scoped bean again and again and again at runtime. In that case it is no use just dependency injecting a prototype-scoped bean into your singleton bean, because as explained above, that only happens once when the Spring container is instantiating the singleton bean and resolving and injecting its dependencies. If you are in the scenario where you need to get a brand new instance of a (prototype) bean again and again and again at runtime, you are referred to the section entitled Section 4.3.7, “Method Injection”
Backwards compatibility note: specifying the lifecycle scope in XML. If you are referencing the older 'spring-beans.dtd' DTD in a bean definition file, the lifecycle scope is expressed with the 'singleton' attribute (singleton="true" or singleton="false"), as shown in the examples above. If you are referencing the 'spring-beans-2.0.dtd' DTD or the Spring XSD schema, use the 'scope' attribute instead, since the 'singleton' attribute is not present in the newer definitions. To be totally clear about this, this means that if you use the "singleton" attribute you must be referencing the older DTD, and if you use the "scope" attribute you must be referencing the newer DTD or XSD.
The other scopes, namely request, session, and global session, are for use only in web-based applications (and can be used irrespective of which particular web application framework you are using, if indeed any). In the interest of keeping related concepts together in one place in the reference documentation, these scopes are described here.
Note: The scopes that are described in the following paragraphs are only available if you are using a web-aware Spring ApplicationContext.
In order to support the scoping of beans at the request, session, and global session levels (web-scoped beans), some minor initial configuration is required before you can set about defining your bean definitions. Please note that this extra setup is not required if you just want to use the 'standard' scopes (namely singleton and prototype).
Now as things stand, there are a couple of ways to effect this initial setup depending on your particular Servlet environment...
If you are accessing scoped beans within Spring Web MVC, i.e.
within a request that is processed by the Spring
DispatcherServlet
, or
DispatcherPortlet
, then no special setup is
necessary: DispatcherServlet
and
DispatcherPortlet
already expose all relevant
state.
When using a Servlet 2.4+ web container, with requests processed
outside of Spring's DispatcherServlet (e.g. when using JSF or Struts),
you need to add the following
javax.servlet.ServletRequestListener
to
the declarations in your web application's
'web.xml'
file.
<web-app> ... <listener> <listener-class>org.springframework.web.context.request.RequestContextListener</listener-class> </listener> ... </web-app>
If you are using an older web container (Servlet 2.3), you will
need to use the provided
javax.servlet.Filter
implementation.
Find below a snippet of XML configuration that has to be included in
the 'web.xml'
file of your web application if you
want to have access to web-scoped beans in requests outside of
Spring's DispatcherServlet on a Servlet 2.3 container. (The filter
mapping depends on the surrounding web application configuration and
so you will have to change it as appropriate.)
<web-app> .. <filter> <filter-name>requestContextFilter</filter-name> <filter-class>org.springframework.web.filter.RequestContextFilter</filter-class> </filter> <filter-mapping> <filter-name>requestContextFilter</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> ... </web-app>
That's it. DispatcherServlet, RequestContextListener and RequestContextFilter all do exactly the same thing, namely bind the HTTP request object to the Thread that is servicing that request. This makes beans that are request- and session-scoped available further down the call chain.
Consider the following bean definition:
<bean id="loginAction" class="com.foo.LoginAction" scope="request"/>
With the above bean definition in place, the Spring container
will create a brand new instance of the
LoginAction
bean using the
'loginAction'
bean definition for each and every
HTTP request. That is, the 'loginAction'
bean will
be effectively scoped at the HTTP request level. You can change or
dirty the internal state of the instance that is created as much as
you want, safe in the knowledge that other requests that are also
using instances created off the back of the same
'loginAction'
bean definition will not be seeing
these changes in state since they are particular to an individual
request. When the request is finished processing, the bean that is
scoped to the request will be discarded.
Consider the following bean definition:
<bean id="userPreferences" class="com.foo.UserPreferences" scope="session"/>
With the above bean definition in place, the Spring container
will create a brand new instance of the
UserPreferences
bean using the
'userPreferences'
bean definition for the lifetime
of a single HTTP Session
. In other
words, the 'userPreferences'
bean will be
effectively scoped at the HTTP Session
level. Just like request-scoped
beans, you can
change the internal state of the instance that is created as much as
you want, safe in the knowledge that other HTTP
Session
instances that are also using
instances created off the back of the same
'userPreferences'
bean definition will not be
seeing these changes in state since they are particular to an
individual HTTP Session
. When the HTTP
Session
is eventually discarded, the
bean that is scoped to that particular HTTP
Session
will also be discarded.
Consider the following bean definition:
<bean id="userPreferences" class="com.foo.UserPreferences" scope="globalSession"/>
The global session scope is similar to the standard HTTP Session scope (described immediately above), and really only makes sense in the context of portlet-based web applications. The portlet specification defines the notion of a global Session that is shared amongst all of the various portlets that make up a single portlet web application. Beans defined at the global session scope are scoped (or bound) to the lifetime of the global portlet Session.
Please note that if you are writing a standard Servlet-based web
application and you define one or more beans as having global
session
scope, the standard HTTP
Session
scope will be used, and no
error will be raised.
Being able to define a bean scoped to a HTTP request or
Session
(or indeed a custom scope of your
own devising) is all very well, but one of the main value-adds of the
Spring IoC container is that it manages not only the instantiation of
your objects (beans), but also the wiring up of collaborators (or
dependencies). If you want to inject a (for example) HTTP request
scoped bean into another bean, you will need to inject an AOP proxy in
place of the scoped bean. That is, you need to inject a proxy object
that exposes the same public interface as the scoped object, but that
is smart enough to be able to retrieve the real, target object from
the relevant scope (for example a HTTP request) and delegate method
calls onto the real object.
Note: You do not need to use the <aop:scoped-proxy/> element in conjunction with beans that are scoped as singletons or prototypes.
Let's look at the configuration that is required to effect this; the configuration is not hugely complex (it takes just one line), but it is important to understand the “why” as well as the “how” behind it.
<?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:aop="http://www.springframework.org/schema/aop" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-3.0.xsd"> <!-- a HTTP Session-scoped bean exposed as a proxy --> <bean id="userPreferences" class="com.foo.UserPreferences" scope="session"> <!-- this next element effects the proxying of the surrounding bean --> <aop:scoped-proxy/> </bean> <!-- a singleton-scoped bean injected with a proxy to the above bean --> <bean id="userService" class="com.foo.SimpleUserService"> <!-- a reference to the proxied 'userPreferences' bean --> <property name="userPreferences" ref="userPreferences"/> </bean> </beans>
To create such a proxy, you need only to insert a child
<aop:scoped-proxy/>
element into a scoped
bean definition (you may also need the CGLIB library on your classpath
so that the container can effect class-based proxying; you will also
need to be using Appendix A, XML Schema-based configuration). So, just why do you
need this <aop:scoped-proxy/>
element in the
definition of beans scoped at the request, session, globalSession and 'insert your custom scope here' level? The reason
is best explained by picking apart the following bean definition
(please note that the following 'userPreferences'
bean definition as it stands is
incomplete):
<bean id="userPreferences" class="com.foo.UserPreferences" scope="session"/> <bean id="userManager" class="com.foo.UserManager"> <property name="userPreferences" ref="userPreferences"/> </bean>
From the above configuration it is evident that the singleton
bean 'userManager'
is being injected with a
reference to the HTTP Session
-scoped
bean 'userPreferences'
. The salient point here is
that the 'userManager'
bean is a singleton... it
will be instantiated exactly once per container,
and its dependencies (in this case only one, the
'userPreferences'
bean) will also only be injected
(once!). This means that the 'userManager'
will
(conceptually) only ever operate on the exact same
'userPreferences'
object, that is the one that it
was originally injected with. This is not what
you want when you inject a HTTP
Session
-scoped bean as a dependency
into a collaborating object (typically). Rather, what we
do want is a single
'userManager'
object, and then, for the lifetime of
a HTTP Session
, we want to see and use
a 'userPreferences'
object that is specific to said
HTTP Session
.
Rather what you need then is to inject some sort of object that
exposes the exact same public interface as the
UserPreferences
class (ideally an object that
is a UserPreferences
instance) and that is smart enough to be able to go off and fetch the
real
UserPreferences
object from whatever underlying
scoping mechanism we have chosen (HTTP request,
Session
, etc.). We can then safely
inject this proxy object into the 'userManager'
bean, which will be blissfully unaware that the
UserPreferences
reference that it is holding
onto is a proxy. In the case of this example, when a
UserManager
instance invokes a method
on the dependency-injected UserPreferences
object, it is really invoking a method on the proxy... the proxy will
then go off and fetch the real UserPreferences
object from (in this case) the HTTP
Session
, and delegate the method
invocation onto the retrieved real
UserPreferences
object.
That is why you need the following, correct and complete, configuration when injecting request-, session-, and globalSession-scoped beans into collaborating objects:
<bean id="userPreferences" class="com.foo.UserPreferences" scope="session"> <aop:scoped-proxy/> </bean> <bean id="userManager" class="com.foo.UserManager"> <property name="userPreferences" ref="userPreferences"/> </bean>
By default, when the Spring container is creating a proxy for
a bean that is marked up with the
<aop:scoped-proxy/>
element, a
CGLIB-based class proxy will be created. This means that
you need to have the CGLIB library on the classpath of your
application.
Note: CGLIB proxies will only intercept public method calls! Do not call non-public methods on such a proxy; they will not be delegated to the scoped target object.
You can choose to have the Spring container create 'standard'
JDK interface-based proxies for such scoped beans by specifying
'false
' for the value of the
'proxy-target-class
' attribute of the
<aop:scoped-proxy/>
element. Using JDK
interface-based proxies does mean that you don't need any additional
libraries on your application's classpath to effect such proxying,
but it does mean that the class of the scoped bean must implement at
least one interface, and all of the
collaborators into which the scoped bean is injected must be
referencing the bean via one of its interfaces.
<!-- DefaultUserPreferences implements the UserPreferences interface --> <bean id="userPreferences" class="com.foo.DefaultUserPreferences" scope="session"> <aop:scoped-proxy proxy-target-class="false"/> </bean> <bean id="userManager" class="com.foo.UserManager"> <property name="userPreferences" ref="userPreferences"/> </bean>
The section entitled Section 8.6, “Proxying mechanisms” may also be of some interest with regard to understanding the nuances of choosing whether class-based or interface-based proxying is right for you.
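For completeness, here is a minimal sketch of the collaborating classes assumed by the interface-based proxy example just above. The class and package names are taken from the configuration snippets, but the method bodies are purely illustrative (in a real application each type would be public and live in its own source file).

package com.foo;

// the interface through which collaborators reference the session-scoped bean
// (required for JDK interface-based proxying)
interface UserPreferences {
    String getTheme();
}

// the session-scoped implementation; the scoped proxy stands in for instances of this class
class DefaultUserPreferences implements UserPreferences {
    private String theme = "default";
    public String getTheme() {
        return this.theme;
    }
}

// the singleton collaborator; it only ever holds the injected proxy, which resolves
// the real session-scoped object from the current HTTP Session on every method call
class UserManager {
    private UserPreferences userPreferences;
    public void setUserPreferences(UserPreferences userPreferences) {
        this.userPreferences = userPreferences;
    }
    public String currentTheme() {
        return this.userPreferences.getTheme();
    }
}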
As of Spring 2.0, the bean scoping mechanism in Spring is
extensible. This means that you are not limited to just the bean scopes
that Spring provides out of the box; you can define your own scopes, or
even redefine the existing scopes (although that last one would probably
be considered bad practice - please note that you
cannot override the built-in
singleton
and prototype
scopes).
Scopes are defined by the
org.springframework.beans.factory.config.Scope
interface. This is the interface that you will need to implement in
order to integrate your own custom scope(s) into the Spring container,
and is described in detail below. You may wish to look at the
Scope
implementations that are supplied
with the Spring Framework itself for an idea of how to go about
implementing your own. The Scope
Javadoc explains the main class to implement when you need
your own scope in more detail too.
The Scope
interface has four methods dealing
with getting objects from the scope, removing them from the scope and
allowing them to be 'destroyed' if needed.
The first method should return the object from the underlying scope. The session scope implementation for example will return the session-scoped bean (and if it does not exist, return a new instance of the bean, after having bound it to the session for future reference).
Object get(String name, ObjectFactory objectFactory)
The second method should remove the object from the underlying scope. The session scope implementation for example, removes the session-scoped bean from the underlying session. The object should be returned (you are allowed to return null if the object with the specified name wasn't found)
Object remove(String name)
The third method is used to register callbacks the scope should execute when it is destroyed or when the specified object in the scope is destroyed. Please refer to the Javadoc or a Spring scope implementation for more information on destruction callbacks.
void registerDestructionCallback(String name, Runnable destructionCallback)
The last method deals with obtaining the conversation identifier for the underlying scope. This identifier is different for each scope. For a session for example, this can be the session identifier.
String getConversationId()
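To make this concrete, here is a minimal sketch of a ThreadLocal-backed scope covering the four methods just described. The ThreadScope class is purely illustrative (it does not ship with Spring), and depending on your exact Spring version the Scope interface may declare additional methods beyond these four; the resolveContextualObject method included below is one such addition and may simply return null.

package com.foo;

import java.util.HashMap;
import java.util.Map;

import org.springframework.beans.factory.ObjectFactory;
import org.springframework.beans.factory.config.Scope;

public class ThreadScope implements Scope {

    // one map of scoped objects per thread
    private final ThreadLocal<Map<String, Object>> threadScope =
            new ThreadLocal<Map<String, Object>>() {
                protected Map<String, Object> initialValue() {
                    return new HashMap<String, Object>();
                }
            };

    public Object get(String name, ObjectFactory objectFactory) {
        Map<String, Object> scope = this.threadScope.get();
        Object object = scope.get(name);
        if (object == null) {
            // create the bean lazily and bind it to the current thread
            object = objectFactory.getObject();
            scope.put(name, object);
        }
        return object;
    }

    public Object remove(String name) {
        return this.threadScope.get().remove(name);
    }

    public void registerDestructionCallback(String name, Runnable callback) {
        // destruction callbacks are not supported by this simple sketch
    }

    public Object resolveContextualObject(String key) {
        // only required on newer versions of the Scope interface; nothing to resolve here
        return null;
    }

    public String getConversationId() {
        // the conversation identifier here is simply the current thread's name
        return Thread.currentThread().getName();
    }
}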
After you have written and tested one or more custom
Scope
implementations, you then need to
make the Spring container aware of your new scope(s). The central
method to register a new Scope
with the
Spring container is declared on the
ConfigurableBeanFactory
interface
(implemented by most of the concrete
BeanFactory
implementations that ship
with Spring); this central method is displayed below:
void registerScope(String scopeName, Scope scope);
The first argument to the
registerScope(..)
method is the unique name
associated with a scope; examples of such names in the Spring
container itself are 'singleton'
and
'prototype'
. The second argument to the
registerScope(..)
method is an actual
instance of the custom Scope
implementation that you wish to register and use.
Let's assume that you have written your own custom
Scope
implementation, and you have
registered it like so:
// note: the ThreadScope class does not ship with the Spring Framework Scope customScope = new ThreadScope(); beanFactory.registerScope("thread", customScope);
You can then create bean definitions that adhere to the scoping
rules of your custom Scope
like
so:
<bean id="..." class="..." scope="thread"/>
If you have your own custom Scope
implementation(s), you are not just limited to only programmatic
registration of the custom scope(s). You can also do the
Scope
registration declaratively, using
the CustomScopeConfigurer
class.
The declarative registration of custom
Scope
implementations using the
CustomScopeConfigurer
class is shown
below:
<?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:aop="http://www.springframework.org/schema/aop" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-3.0.xsd"> <bean class="org.springframework.beans.factory.config.CustomScopeConfigurer"> <property name="scopes"> <map> <entry key="thread"> <bean class="com.foo.ThreadScope"/> </entry> </map> </property> </bean> <bean id="bar" class="x.y.Bar" scope="thread"> <property name="name" value="Rick"/> <aop:scoped-proxy/> </bean> <bean id="foo" class="x.y.Foo"> <property name="bar" ref="bar"/> </bean> </beans>
Note: when placing <aop:scoped-proxy/> in a FactoryBean implementation, it is the FactoryBean itself that is scoped, not the object returned from getObject().
The Spring Framework provides several callback interfaces to
change the behavior of your bean in the container; they include
InitializingBean
and
DisposableBean
. Implementing these
interfaces will result in the container calling
afterPropertiesSet()
for the former and
destroy()
for the latter to allow the bean to
perform certain actions upon initialization and destruction.
Internally, the Spring Framework uses
BeanPostProcessor
implementations to
process any callback interfaces it can find and call the appropriate
methods. If you need custom features or other lifecycle behavior Spring
doesn't offer out-of-the-box, you can implement a
BeanPostProcessor
yourself. More
information about this can be found in the section entitled Section 4.7, “Container extension points”.
All the different lifecycle callback interfaces are described below. In one of the appendices, you can find diagrams that show how Spring manages beans, how those lifecycle features change the nature of your beans, and how they are managed.
Implementing the
org.springframework.beans.factory.InitializingBean
interface allows a bean to perform initialization work after all
necessary properties on the bean have been set by the container. The
InitializingBean
interface specifies
exactly one method:
void afterPropertiesSet() throws Exception;
Generally, the use of the
InitializingBean
interface can be
avoided and is actually discouraged since it unnecessarily couples the
code to Spring. As an alternative, bean definitions provide support
for a generic initialization method to be specified. In the case of
XML-based configuration metadata, this is done using the
'init-method'
attribute. For example, the following
definition:
<bean id="exampleInitBean" class="examples.ExampleBean" init-method="init"/>
public class ExampleBean { public void init() { // do some initialization work } }
...is exactly the same as...
<bean id="exampleInitBean" class="examples.AnotherExampleBean"/>
public class AnotherExampleBean implements InitializingBean { public void afterPropertiesSet() { // do some initialization work } }
... but does not couple the code to Spring.
Implementing the
org.springframework.beans.factory.DisposableBean
interface allows a bean to get a callback when the container
containing it is destroyed. The
DisposableBean
interface specifies a
single method:
void destroy() throws Exception;
Generally, the use of the
DisposableBean
callback interface can
be avoided and is actually discouraged since it unnecessarily couples
the code to Spring. As an alternative, bean definitions provide
support for a generic destroy method to be specified. When using
XML-based configuration metadata this is done via the
'destroy-method'
attribute on the
<bean/>
. For example, the following
definition:
<bean id="exampleInitBean" class="examples.ExampleBean" destroy-method="cleanup"/>
public class ExampleBean { public void cleanup() { // do some destruction work (like releasing pooled connections) } }
...is exactly the same as...
<bean id="exampleInitBean" class="examples.AnotherExampleBean"/>
public class AnotherExampleBean implements DisposableBean { public void destroy() { // do some destruction work (like releasing pooled connections) } }
... but does not couple the code to Spring.
When writing initialization and destroy method callbacks that do
not use the Spring-specific
InitializingBean
and
DisposableBean
callback interfaces, one
typically finds oneself writing methods with names such as
init()
, initialize()
,
dispose()
, etc. The names of such lifecycle
callback methods are (hopefully!) standardized across a project so
that all developers on a team use the same method names and thus
ensure some level of consistency.
The Spring container can be configured to
'look'
for named initialization and destroy
callback method names on every bean. This means
that you, as an application developer, can simply write your
application classes, use a convention of having an initialization
callback called init()
, and then (without having to
configure each and every bean with, in the case of XML-based
configuration, an 'init-method="init"'
attribute)
be safe in the knowledge that the Spring IoC container
will call that method when the bean is being
created (and in accordance with the standard lifecycle callback
contract described previously).
Let's look at an example to make the use of this feature
completely clear. For the sake of the example, let us say that one of
the coding conventions on a project is that all initialization
callback methods are to be named init()
and that
destroy callback methods are to be called
destroy()
. This leads to classes like so...
public class DefaultBlogService implements BlogService { private BlogDao blogDao; public void setBlogDao(BlogDao blogDao) { this.blogDao = blogDao; } // this is (unsurprisingly) the initialization callback method public void init() { if (this.blogDao == null) { throw new IllegalStateException("The [blogDao] property must be set."); } } }
<beans default-init-method="init"> <bean id="blogService" class="com.foo.DefaultBlogService"> <property name="blogDao" ref="blogDao" /> </bean> </beans>
Notice the use of the 'default-init-method'
attribute on the top-level <beans/>
element.
The presence of this attribute means that the Spring IoC container
will recognize a method called 'init'
on beans as
being the initialization method callback, and when a bean is being
created and assembled, if the bean's class has such a method, it will
be invoked at the appropriate time.
Destroy method callbacks are configured similarly (in XML that
is) using the 'default-destroy-method'
attribute on
the top-level <beans/>
element.
The use of this feature can save you the (small) housekeeping chore of specifying an initialization and destroy method callback on each and every bean, and it is great for enforcing a consistent naming convention for initialization and destroy method callbacks, as consistency is something that should always be aimed for.
Consider the case where you have some existing beans where the
underlying classes already have initialization callback methods that
are named at variance with the convention. You can
always override the default by specifying (in XML
that is) the method name using the 'init-method'
and 'destroy-method'
attributes on the
<bean/>
element itself.
Finally, please be aware that the Spring container guarantees that a configured initialization callback is called immediately after a bean has been supplied with all of its dependencies. This means that the initialization callback will be called on the raw bean reference, which means that any AOP interceptors or suchlike that will ultimately be applied to the bean will not yet be in place. A target bean is fully created first, then an AOP proxy (for example) with its interceptor chain is applied. Note that, if the target bean and the proxy are defined separately, your code can even interact with the raw target bean, bypassing the proxy. Hence, it would be very inconsistent to apply the interceptors to the init method, since that would couple the lifecycle of the target bean with its proxy/interceptors and leave strange semantics when talking to the raw target bean directly.
As of Spring 2.5, there are three options for controlling bean
lifecycle behavior: the InitializingBean
and DisposableBean
callback interfaces; custom init()
and
destroy()
methods; and the @PostConstruct
and @PreDestroy
annotations.
When combining different lifecycle mechanisms - for example, in a class hierarchy in which various lifecycle mechanisms are in use - developers should be aware of the order in which these mechanisms are applied. The following is the ordering for initialization methods:
Methods annotated with @PostConstruct
afterPropertiesSet() as defined by the InitializingBean callback interface
A custom configured init() method
Destroy methods are called in the same order:
Methods annotated with @PreDestroy
destroy() as defined by the DisposableBean callback interface
A custom configured destroy() method
Note: if multiple lifecycle mechanisms are configured for a given bean, and each mechanism is configured with a different method name, then each configured method will be executed in the order listed above; however, if the same method name is configured - for example, init() for an initialization method - for more than one of these lifecycle mechanisms, that method is executed only once.
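To make the ordering concrete, here is a small, hypothetical bean that uses all three initialization mechanisms at once (the CachingService name is illustrative; annotation-driven processing, for example via <context:annotation-config/>, must be enabled for @PostConstruct to be detected, and the init() method must be wired up via the init-method or default-init-method attribute):

package com.foo;

import javax.annotation.PostConstruct;

import org.springframework.beans.factory.InitializingBean;

public class CachingService implements InitializingBean {

    @PostConstruct
    public void postConstruct() {
        System.out.println("1. @PostConstruct method");
    }

    public void afterPropertiesSet() {
        System.out.println("2. InitializingBean's afterPropertiesSet()");
    }

    // wired up via init-method="init" (or a default-init-method) in the bean definition
    public void init() {
        System.out.println("3. custom configured init() method");
    }
}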
Note: this next section does not apply to web applications (in case the title of this section did not make that abundantly clear). Spring's web-based ApplicationContext implementations already have code in place to gracefully shut down the Spring IoC container when the relevant web application is being shut down.
If you are using Spring's IoC container in a non-web application environment, for example in a rich client desktop environment, and you want the container to shutdown gracefully and call the relevant destroy callbacks on your singleton beans, you will need to register a shutdown hook with the JVM. This is quite easy to do (see below), and will ensure that your Spring IoC container shuts down gracefully and that all resources held by your singletons are released. Of course it is still up to you to both configure the destroy callbacks for your singletons and implement such destroy callbacks correctly.
So to register a shutdown hook that enables the graceful
shutdown of the relevant Spring IoC container, you simply need to call
the registerShutdownHook()
method that is
declared on the AbstractApplicationContext
class. To wit...
import org.springframework.context.support.AbstractApplicationContext; import org.springframework.context.support.ClassPathXmlApplicationContext; public final class Boot { public static void main(final String[] args) throws Exception { AbstractApplicationContext ctx = new ClassPathXmlApplicationContext(new String []{"beans.xml"}); // add a shutdown hook for the above context... ctx.registerShutdownHook(); // app runs here... // main method exits, hook is called prior to the app shutting down... } }
A class which implements the
org.springframework.beans.factory.BeanFactoryAware
interface is provided with a reference to the
BeanFactory
that created it, when it is
created by that BeanFactory
.
public interface BeanFactoryAware { void setBeanFactory(BeanFactory beanFactory) throws BeansException; }
This allows beans to manipulate the
BeanFactory
that created them
programmatically, through the
BeanFactory
interface, or by casting
the reference to a known subclass of this which exposes additional
functionality. Primarily this would consist of programmatic retrieval
of other beans. While there are cases when this capability is useful,
it should generally be avoided, since it couples the code to Spring
and does not follow the Inversion of Control style, where
collaborators are provided to beans as properties.
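As a hypothetical illustration, the following class uses the injected BeanFactory to look up a collaborator programmatically on each invocation; the CommandManager name and the 'command' bean are illustrative only, and note that doing this couples the class to Spring, which is exactly why the approach is generally discouraged:

package com.foo;

import org.springframework.beans.BeansException;
import org.springframework.beans.factory.BeanFactory;
import org.springframework.beans.factory.BeanFactoryAware;

public class CommandManager implements BeanFactoryAware {

    private BeanFactory beanFactory;

    public void setBeanFactory(BeanFactory beanFactory) throws BeansException {
        // called by the container after the normal bean properties have been set
        this.beanFactory = beanFactory;
    }

    public Object process() {
        // programmatic retrieval of a (for example, prototype-scoped) 'command' bean
        Object command = this.beanFactory.getBean("command");
        // ... execute the command and return its result ...
        return command;
    }
}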
An alternative option that is equivalent in effect to the
BeanFactoryAware
-based approach is to
use the
org.springframework.beans.factory.config.ObjectFactoryCreatingFactoryBean
.
(It should be noted that this approach still does not reduce the
coupling to Spring, but it does not violate the central principle of
IoC as much as the
BeanFactoryAware
-based
approach.)
The ObjectFactoryCreatingFactoryBean
is a
FactoryBean
implementation that returns a reference to an object (factory) that
can in turn be used to effect a bean lookup. The
ObjectFactoryCreatingFactoryBean
class does
itself implement the BeanFactoryAware
interface; what client beans are actually injected with is an instance
of the ObjectFactory
interface. This is
a Spring-specific interface (and hence there is still no total
decoupling from Spring), but clients can then use the
ObjectFactory
's
getObject()
method to effect the bean lookup
(under the hood the ObjectFactory
implementation instance that is returned simply delegates down to a
BeanFactory
to actually lookup a bean
by name). All that you need to do is supply the
ObjectFactoryCreatingFactoryBean
with the name
of the bean that is to be looked up. Let's look at an example:
package x.y; public class NewsFeed { private String news; public void setNews(String news) { this.news = news; } public String getNews() { return this.toString() + ": '" + news + "'"; } }
package x.y; import org.springframework.beans.factory.ObjectFactory; public class NewsFeedManager { private ObjectFactory factory; public void setFactory(ObjectFactory factory) { this.factory = factory; } public void printNews() { // here is where the lookup is performed; note that there is no // need to hard code the name of the bean that is being looked up... NewsFeed news = (NewsFeed) factory.getObject(); System.out.println(news.getNews()); } }
Find below the XML configuration to wire together the above
classes using the
ObjectFactoryCreatingFactoryBean
approach.
<beans> <bean id="newsFeedManager" class="x.y.NewsFeedManager"> <property name="factory"> <bean class="org.springframework.beans.factory.config.ObjectFactoryCreatingFactoryBean"> <property name="targetBeanName"> <idref local="newsFeed" /> </property> </bean> </property> </bean> <bean id="newsFeed" class="x.y.NewsFeed" scope="prototype"> <property name="news" value="... that's fit to print!" /> </bean> </beans>
And here is a small driver program to test the fact that new
(prototype) instances of the newsFeed
bean are
actually being returned for each call to the injected
ObjectFactory
inside the
NewsFeedManager
's
printNews()
method.
import org.springframework.context.ApplicationContext; import org.springframework.context.support.ClassPathXmlApplicationContext; import x.y.NewsFeedManager; public class Main { public static void main(String[] args) throws Exception { ApplicationContext ctx = new ClassPathXmlApplicationContext("beans.xml"); NewsFeedManager manager = (NewsFeedManager) ctx.getBean("newsFeedManager"); manager.printNews(); manager.printNews(); } }
The output from running the above program will look like so (results will of course vary on your machine).
x.y.NewsFeed@1292d26: '... that's fit to print!'
x.y.NewsFeed@12ac982: '... that's fit to print!'
As of Spring 2.5, you can rely upon autowiring of the
BeanFactory
as yet another alternative
to implementing the BeanFactoryAware
interface. The "traditional" constructor
and
byType
autowiring modes (as described in the
section entitled Section 4.3.5, “Autowiring collaborators”) are now
capable of providing a dependency of type
BeanFactory
for either a constructor
argument or setter method parameter respectively. For more flexibility
(including the ability to autowire fields and multiple parameter
methods), consider using the new annotation-based autowiring features.
In that case, the BeanFactory
will be
autowired into a field, constructor argument, or method parameter that
is expecting the BeanFactory
type as
long as the field, constructor, or method in question carries the
@Autowired
annotation. For more
information, see the section entitled Section 4.11.2, “@Autowired”.
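A minimal sketch of this annotation-based alternative, assuming annotation-driven configuration is enabled (the class name and the 'somePrototypeBean' name are illustrative):

package com.foo;

import org.springframework.beans.factory.BeanFactory;
import org.springframework.beans.factory.annotation.Autowired;

public class PrototypeClient {

    // the container supplies its own BeanFactory here; no BeanFactoryAware needed
    @Autowired
    private BeanFactory beanFactory;

    public Object freshInstance() {
        // look up a new instance of a prototype-scoped bean by name
        return this.beanFactory.getBean("somePrototypeBean");
    }
}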
If a bean implements the
org.springframework.beans.factory.BeanNameAware
interface and is deployed in a
BeanFactory
, the
BeanFactory
will call the bean through
this interface to inform the bean of the name it
was deployed under. The callback will be invoked after population of
normal bean properties but before an initialization callback like
InitializingBean
's
afterPropertiesSet or a custom
init-method.
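For example, a bean might record its own name for use in log or diagnostic messages; a small, illustrative sketch:

package com.foo;

import org.springframework.beans.factory.BeanNameAware;

public class NamedComponent implements BeanNameAware {

    private String beanName;

    public void setBeanName(String name) {
        // invoked after property population, before any initialization callback
        this.beanName = name;
    }

    public String describe() {
        return "I was deployed under the bean name '" + this.beanName + "'";
    }
}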
A bean definition potentially contains a large amount of configuration information, including container specific information (for example initialization method, static factory method name, and so forth) and constructor arguments and property values. A child bean definition is a bean definition that inherits configuration data from a parent definition. It is then able to override some values, or add others, as needed. Using parent and child bean definitions can potentially save a lot of typing. Effectively, this is a form of templating.
When working with a BeanFactory
programmatically, child bean definitions are represented by the
ChildBeanDefinition
class. Most users will never
work with them on this level, instead configuring bean definitions
declaratively in something like the XmlBeanFactory
.
When using XML-based configuration metadata a child bean definition is
indicated simply by using the 'parent'
attribute,
specifying the parent bean as the value of this attribute.
<bean id="inheritedTestBean" abstract="true" class="org.springframework.beans.TestBean"> <property name="name" value="parent"/> <property name="age" value="1"/> </bean> <bean id="inheritsWithDifferentClass" class="org.springframework.beans.DerivedTestBean" parent="inheritedTestBean" init-method="initialize"> <property name="name" value="override"/> <!-- the age property value of 1 will be inherited from parent --> </bean>
A child bean definition will use the bean class from the parent definition if none is specified, but can also override it. In the latter case, the child bean class must be compatible with the parent, that is it must accept the parent's property values.
A child bean definition will inherit constructor argument values,
property values and method overrides from the parent, with the option to
add new values. If any init-method, destroy-method and/or
static
factory method settings are specified, they will
override the corresponding parent settings.
The remaining settings will always be taken from the child definition: depends on, autowire mode, dependency check, singleton, scope, lazy init.
Note that in the example above, we have explicitly marked the parent
bean definition as abstract by using the abstract
attribute. In the case that the parent definition does not specify a class, explicitly marking the parent bean definition as abstract is required, as follows:
<bean id="inheritedTestBeanWithoutClass" abstract="true"> <property name="name" value="parent"/> <property name="age" value="1"/> </bean> <bean id="inheritsWithClass" class="org.springframework.beans.DerivedTestBean" parent="inheritedTestBeanWithoutClass" init-method="initialize"> <property name="name" value="override"/> <!-- age will inherit the value of 1 from the parent bean definition--> </bean>
The parent bean cannot get instantiated on its own since it is
incomplete, and it is also explicitly marked as
abstract
. When a definition is defined to be
abstract
like this, it is usable only as a pure
template bean definition that will serve as a parent definition for child
definitions. Trying to use such an abstract
parent bean
on its own (by referring to it as a ref property of another bean, or doing
an explicit getBean()
call with the parent bean
id), will result in an error. Similarly, the container's internal
preInstantiateSingletons()
method will completely
ignore bean definitions which are defined as abstract.
The IoC component of the Spring Framework has been designed for
extension. There is typically no need for an application developer to
subclass any of the various BeanFactory
or
ApplicationContext
implementation classes.
The Spring IoC container can be infinitely extended by plugging in
implementations of special integration interfaces. The next few sections
are devoted to detailing all of these various integration
interfaces.
The first extension point that we will look at is the
BeanPostProcessor
interface. This
interface defines a number of callback methods
that you as an application developer can implement in order to provide
your own (or override the containers default) instantiation logic,
dependency-resolution logic, and so forth. If you want to do some custom
logic after the Spring container has finished instantiating, configuring
and otherwise initializing a bean, you can plug in one or more
BeanPostProcessor
implementations.
You can configure multiple BeanPostProcessors
if you wish. You can control the order in which these
BeanPostProcessors
execute by setting the
'order'
property (you can only set this property if
the BeanPostProcessor
implements the
Ordered
interface; if you write your own
BeanPostProcessor
you should consider
implementing the Ordered
interface too);
consult the Javadoc for the
BeanPostProcessor
and
Ordered
interfaces for more
details.
Note: if you want to change the actual bean definition (that is, the recipe that defines the bean), then you rather need to use a BeanFactoryPostProcessor, as described in the following section. Also, BeanPostProcessors are scoped per-container: a BeanPostProcessor will only post-process the beans in the container in which it is defined, not beans in any other container, even when both containers are part of the same hierarchy.
The
org.springframework.beans.factory.config.BeanPostProcessor
interface consists of exactly two callback methods. When such a class is
registered as a post-processor with the container (see below for how
this registration is effected), for each bean instance that is created
by the container, the post-processor will get a callback from the
container both before any container initialization
methods (such as afterPropertiesSet and any
declared init method) are called, and also afterwards. The
post-processor is free to do what it wishes with the bean instance,
including ignoring the callback completely. A bean post-processor will
typically check for callback interfaces, or do something such as wrap a
bean with a proxy; some of the Spring AOP infrastructure classes are
implemented as bean post-processors and they do this proxy-wrapping
logic.
It is important to know that a
BeanFactory
treats bean post-processors
slightly differently than an
ApplicationContext
. An
ApplicationContext
will
automatically detect any beans which are defined in
the configuration metadata which is supplied to it that implement the
BeanPostProcessor
interface, and register
them as post-processors, to be then called appropriately by the
container on bean creation. Nothing else needs to be done other than
deploying the post-processors in a similar fashion to any other bean. On
the other hand, when using a BeanFactory
implementation, bean post-processors explicitly have to be registered,
with code like this:
ConfigurableBeanFactory factory = new XmlBeanFactory(...); // now register any needed BeanPostProcessor instances MyBeanPostProcessor postProcessor = new MyBeanPostProcessor(); factory.addBeanPostProcessor(postProcessor); // now start using the factory
This explicit registration step is not convenient, and this is one
of the reasons why the various
ApplicationContext
implementations are
preferred above plain BeanFactory
implementations in the vast majority of Spring-backed applications,
especially when using BeanPostProcessors
.
BeanPostProcessors and AOP auto-proxying: classes that implement the BeanPostProcessor interface are special and are treated differently by the container. All BeanPostProcessors and the beans they reference directly are instantiated on startup, as part of the special startup phase of the ApplicationContext. Because AOP auto-proxying is itself implemented as a BeanPostProcessor, neither BeanPostProcessors nor the beans they reference directly are eligible for auto-proxying (and thus will not have aspects 'woven' into them).
For any such bean, you should see an info log message: “Bean 'foo' is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)”.
Find below some examples of how to write, register, and use
BeanPostProcessors
in the context of an
ApplicationContext
.
This first example is hardly compelling, but serves to
illustrate basic usage. All we are going to do is code a custom
BeanPostProcessor
implementation that
simply invokes the toString()
method of each
bean as it is created by the container and prints the resulting string
to the system console. Yes, it is not hugely useful, but serves to get
the basic concepts across before we move into the second example which
is actually useful.
Find below the custom
BeanPostProcessor
implementation class
definition:
package scripting; import org.springframework.beans.factory.config.BeanPostProcessor; import org.springframework.beans.BeansException; public class InstantiationTracingBeanPostProcessor implements BeanPostProcessor { // simply return the instantiated bean as-is public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException { return bean; // we could potentially return any object reference here... } public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException { System.out.println("Bean '" + beanName + "' created : " + bean.toString()); return bean; } }
<?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:lang="http://www.springframework.org/schema/lang" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/lang http://www.springframework.org/schema/lang/spring-lang-3.0.xsd"> <lang:groovy id="messenger" script-source="classpath:org/springframework/scripting/groovy/Messenger.groovy"> <lang:property name="message" value="Fiona Apple Is Just So Dreamy."/> </lang:groovy> <!-- when the above bean ('messenger') is instantiated, this custom BeanPostProcessor implementation will output the fact to the system console --> <bean class="scripting.InstantiationTracingBeanPostProcessor"/> </beans>
Notice how the
InstantiationTracingBeanPostProcessor
is simply
defined; it doesn't even have a name, and because it is a bean it can
be dependency injected just like any other bean. (The above
configuration also just so happens to define a bean that is backed by
a Groovy script. The Spring 2.0 dynamic language support is detailed
in the chapter entitled Chapter 28, Dynamic language support.)
Find below a small driver script to exercise the above code and configuration;
import org.springframework.context.ApplicationContext; import org.springframework.context.support.ClassPathXmlApplicationContext; import org.springframework.scripting.Messenger; public final class Boot { public static void main(final String[] args) throws Exception { ApplicationContext ctx = new ClassPathXmlApplicationContext("scripting/beans.xml"); Messenger messenger = (Messenger) ctx.getBean("messenger"); System.out.println(messenger); } }
The output of executing the above program will be (something like) this:
Bean 'messenger' created : org.springframework.scripting.groovy.GroovyMessenger@272961
org.springframework.scripting.groovy.GroovyMessenger@272961
Using callback interfaces or annotations in conjunction with a
custom BeanPostProcessor
implementation
is a common means of extending the Spring IoC container. This next
example is a bit of a cop-out, in that you are directed to the section
entitled Section 29.3.1, “@Required” which
demonstrates the usage of a custom
BeanPostProcessor
implementation that
ships with the Spring distribution which ensures that JavaBean
properties on beans that are marked with an (arbitrary) annotation are
actually (configured to be) dependency-injected with a value.
The next extension point that we will look at is the
org.springframework.beans.factory.config.BeanFactoryPostProcessor
.
The semantics of this interface are similar to the
BeanPostProcessor
, with one major
difference: BeanFactoryPostProcessors
operate on the
bean configuration metadata; that is, the Spring
IoC container will allow BeanFactoryPostProcessors
to
read the configuration metadata and potentially change it
before the container has actually instantiated any
other beans.
You can configure multiple
BeanFactoryPostProcessors
if you wish. You can
control the order in which these
BeanFactoryPostProcessors
execute by setting the
'order'
property (you can only set this property if
the BeanFactoryPostProcessor
implements
the Ordered
interface; if you write your
own BeanFactoryPostProcessor
you should
consider implementing the Ordered
interface too); consult the Javadoc for the
BeanFactoryPostProcessor
and
Ordered
interfaces for more
details.
Note: if you want to change the actual bean instances (the objects that are created from the configuration metadata), then you rather need to use a BeanPostProcessor, as described in the preceding section. Also, BeanFactoryPostProcessors are scoped per-container: a BeanFactoryPostProcessor will only post-process the bean definitions in the container in which it is defined, not definitions in any other container, even when both containers are part of the same hierarchy.
A bean factory post-processor is executed manually (in the case of
a BeanFactory
) or automatically (in the
case of an ApplicationContext
) to apply
changes of some sort to the configuration metadata that defines a
container. Spring includes a number of pre-existing bean factory
post-processors, such as
PropertyOverrideConfigurer
and
PropertyPlaceholderConfigurer
, both described
below. A custom BeanFactoryPostProcessor
can also be used to register custom property editors, for
example.
In a BeanFactory
, the process of
applying a BeanFactoryPostProcessor
is
manual, and will be similar to this:
XmlBeanFactory factory = new XmlBeanFactory(new FileSystemResource("beans.xml")); // bring in some property values from a Properties file PropertyPlaceholderConfigurer cfg = new PropertyPlaceholderConfigurer(); cfg.setLocation(new FileSystemResource("jdbc.properties")); // now actually do the replacement cfg.postProcessBeanFactory(factory);
This explicit registration step is not convenient, and this is one
of the reasons why the various
ApplicationContext
implementations are
preferred above plain BeanFactory
implementations in the vast majority of Spring-backed applications,
especially when using
BeanFactoryPostProcessors
.
An ApplicationContext
will detect
any beans which are deployed into it which implement the
BeanFactoryPostProcessor
interface, and
automatically use them as bean factory post-processors, at the
appropriate time. Nothing else needs to be done other than deploying
these post-processor in a similar fashion to any other bean.
Note: just as in the case of BeanPostProcessors, you typically do not want to configure BeanFactoryPostProcessors for lazy initialization; if they are never instantiated, they never get the chance to post-process the configuration metadata.
The PropertyPlaceholderConfigurer
is used to externalize property values from a
BeanFactory
definition, into another
separate file in the standard Java Properties
format. This is useful to allow the person deploying an application to
customize environment-specific properties (for example database URLs,
usernames and passwords), without the complexity or risk of modifying
the main XML definition file or files for the container.
Consider the following XML-based configuration metadata
fragment, where a DataSource
with
placeholder values is defined. We will configure some properties from
an external Properties
file, and at runtime, we
will apply a PropertyPlaceholderConfigurer
to
the metadata which will replace some properties of the
DataSource:
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer"> <property name="locations"> <value>classpath:com/foo/jdbc.properties</value> </property> </bean> <bean id="dataSource" destroy-method="close" class="org.apache.commons.dbcp.BasicDataSource"> <property name="driverClassName" value="${jdbc.driverClassName}"/> <property name="url" value="${jdbc.url}"/> <property name="username" value="${jdbc.username}"/> <property name="password" value="${jdbc.password}"/> </bean>
The actual values come from another file in the standard Java
Properties
format:
jdbc.driverClassName=org.hsqldb.jdbcDriver
jdbc.url=jdbc:hsqldb:hsql://production:9002
jdbc.username=sa
jdbc.password=root
With the context
namespace introduced in
Spring 2.5, it is possible to configure property placeholders with a
dedicated configuration element. Multiple locations may be provided as
a comma-separated list for the location
attribute.
<context:property-placeholder location="classpath:com/foo/jdbc.properties"/>
The PropertyPlaceholderConfigurer
doesn't
only look for properties in the Properties
file
you specify, but also checks against the Java
System
properties if it cannot find a property
you are trying to use. This behavior can be customized by setting the
systemPropertiesMode
property of the configurer. It
has three values, one to tell the configurer to always override, one
to let it never override and one to let it
override only if the property cannot be found in the properties file
specified. Please consult the Javadoc for the
PropertyPlaceholderConfigurer
for more
information.
Class name substitution: the PropertyPlaceholderConfigurer can also be used to substitute class names, which is sometimes useful when you have to pick a particular implementation class at runtime. For example:
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer"> <property name="locations"> <value>classpath:com/foo/strategy.properties</value> </property> <property name="properties"> <value>custom.strategy.class=com.foo.DefaultStrategy</value> </property> </bean> <bean id="serviceStrategy" class="${custom.strategy.class}"/>
If the class is unable to be resolved at runtime to a valid class, resolution of the bean will fail once it is about to be created, which is during the preInstantiateSingletons() phase of an ApplicationContext for a non-lazy-init bean.
The PropertyOverrideConfigurer
, another
bean factory post-processor, is similar to the
PropertyPlaceholderConfigurer
, but in
contrast to the latter, the original definitions can have default
values or no values at all for bean properties. If an overriding
Properties
file does not have an entry for a
certain bean property, the default context definition is used.
Note that the bean factory definition is
not aware of being overridden, so it is not
immediately obvious when looking at the XML definition file that the
override configurer is being used. In case that there are multiple
PropertyOverrideConfigurer
instances that
define different values for the same bean property, the last one will
win (due to the overriding mechanism).
Properties file configuration lines are expected to be in the format:
beanName.property=value
An example properties file might look like this:
dataSource.driverClassName=com.mysql.jdbc.Driver dataSource.url=jdbc:mysql:mydb
This example file would be usable against a container definition which contains a bean called dataSource, which has driver and url properties.
Note that compound property names are also supported, as long as every component of the path except the final property being overridden is already non-null (presumably initialized by the constructors). In this example...
foo.fred.bob.sammy=123
... the sammy
property of the
bob
property of the fred
property of the foo
bean is being set to the scalar
value 123
.
Note: Specified override values are always literal values; they are not translated into bean references. This also applies when the original value in the XML bean definition specifies a bean reference
With the context
namespace introduced in
Spring 2.5, it is possible to configure property overriding with a
dedicated configuration element:
<context:property-override location="classpath:override.properties"/>
The
org.springframework.beans.factory.FactoryBean
interface is to be implemented by objects that are themselves
factories.
The FactoryBean
interface is a
point of pluggability into the Spring IoC containers instantiation
logic. If you have some complex initialization code that is better
expressed in Java as opposed to a (potentially) verbose amount of XML,
you can create your own FactoryBean
,
write the complex initialization inside that class, and then plug your
custom FactoryBean
into the
container.
The FactoryBean
interface provides
three methods:
Object getObject()
: has to return an
instance of the object this factory creates. The instance can
possibly be shared (depending on whether this factory returns
singletons or prototypes).
boolean isSingleton()
: has to return
true
if this
FactoryBean
returns singletons,
false
otherwise
Class getObjectType()
: has to return
either the object type returned by the
getObject()
method or
null
if the type isn't known in advance
The FactoryBean
concept and
interface is used in a number of places within the Spring Framework; at
the time of writing there are over 50 implementations of the
FactoryBean
interface that ship with
Spring itself.
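As an illustration, here is a minimal, hypothetical FactoryBean that hides some imperative construction logic behind a single bean definition; the class name, property and the choice of producing a java.text.DateFormat are purely illustrative:

package com.foo;

import java.text.DateFormat;
import java.text.SimpleDateFormat;

import org.springframework.beans.factory.FactoryBean;

public class DateFormatFactoryBean implements FactoryBean {

    private String pattern = "yyyy-MM-dd";

    public void setPattern(String pattern) {
        this.pattern = pattern;
    }

    public Object getObject() {
        // the object handed out to beans that reference this factory's bean id
        return new SimpleDateFormat(this.pattern);
    }

    public Class getObjectType() {
        return DateFormat.class;
    }

    public boolean isSingleton() {
        // hand out a fresh instance per request; SimpleDateFormat is not thread-safe
        return false;
    }
}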
Finally, there is sometimes a need to ask a container for an
actual FactoryBean
instance itself, not
the bean it produces. This may be achieved by prepending the bean id
with '&'
(sans quotes) when calling the
getBean
method of the
BeanFactory
(including
ApplicationContext
). So for a given
FactoryBean
with an id of
myBean
, invoking getBean("myBean")
on the container will return the product of the
FactoryBean
, but invoking
getBean("&myBean")
will return the
FactoryBean
instance itself.
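Continuing the hypothetical DateFormatFactoryBean sketched earlier, and assuming it is registered under the bean id 'dateFormat' in 'beans.xml', the difference between the two lookups looks like this:

import java.text.DateFormat;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

import com.foo.DateFormatFactoryBean;

public class FactoryBeanDemo {

    public static void main(String[] args) {
        // assumes 'beans.xml' defines <bean id="dateFormat" class="com.foo.DateFormatFactoryBean"/>
        ApplicationContext ctx = new ClassPathXmlApplicationContext("beans.xml");

        // returns the product of the factory: a DateFormat instance
        DateFormat dateFormat = (DateFormat) ctx.getBean("dateFormat");

        // returns the DateFormatFactoryBean instance itself
        DateFormatFactoryBean factoryBean = (DateFormatFactoryBean) ctx.getBean("&dateFormat");

        System.out.println(dateFormat.getClass().getName());
        System.out.println(factoryBean.getClass().getName());
    }
}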
While the beans
package provides basic
functionality for managing and manipulating beans, including in a
programmatic way, the context
package adds the ApplicationContext
interface, which enhances BeanFactory
functionality in a more framework-oriented style.
Many users will use ApplicationContext
in a
completely declarative fashion, not even having to create it manually, but
instead relying on support classes such as
ContextLoader
to automatically instantiate an
ApplicationContext
as part of the normal
startup process of a J2EE web-app. (Of course, it is still possible to
create an ApplicationContext
programmatically.)
The basis for the context package is the
ApplicationContext
interface, located in
the org.springframework.context
package. Deriving from
the BeanFactory
interface, it provides all
the functionality of BeanFactory
. To allow
working in a more framework-oriented fashion, using layering and
hierarchical contexts, the context package also provides the following
functionality:
MessageSource
, providing access
to messages in i18n-style.
Access to resources, such as URLs and files.
Event propagation to beans implementing the
ApplicationListener
interface.
Loading of multiple (hierarchical) contexts, allowing each to be focused on one particular layer, for example the web layer of an application.
Short version: use an
ApplicationContext
unless you have a
really good reason for not doing so. For those of you that are looking
for slightly more depth as to the 'but why' of the above recommendation,
keep reading.
As the ApplicationContext
includes
all functionality of the BeanFactory
, it
is generally recommended that it be used in preference to the
BeanFactory
, except for a few limited
situations such as in an Applet
, where memory
consumption might be critical and a few extra kilobytes might make a
difference. However, for most 'typical' enterprise applications and
systems, the ApplicationContext
is what
you will want to use. Versions of Spring 2.0 and above make
heavy use of the BeanPostProcessor
extension point (to effect proxying and suchlike), and if you are
using just a plain BeanFactory
then a
fair amount of support such as transactions and AOP will not take effect
(at least not without some extra steps on your part), which could be
confusing because nothing will actually be wrong with the
configuration.
Find below a feature matrix that lists what features are provided
by the BeanFactory
and
ApplicationContext
interfaces (and
attendant implementations). (The following sections describe
functionality that ApplicationContext
adds to the basic BeanFactory
capabilities in a lot more depth than the said feature matrix.)
Table 4.5. Feature Matrix
Feature | BeanFactory | ApplicationContext |
---|---|---|
Bean instantiation/wiring | Yes | Yes |
Automatic BeanPostProcessor registration | No | Yes |
Automatic BeanFactoryPostProcessor registration | No | Yes |
Convenient MessageSource access (for i18n) | No | Yes |
ApplicationEvent publication | No | Yes |
The ApplicationContext
interface
extends an interface called
MessageSource
, and therefore provides
messaging (i18n or internationalization) functionality. Together with
the HierarchicalMessageSource
, capable of
resolving hierarchical messages, these are the basic interfaces Spring
provides to do message resolution. Let's quickly review the methods
defined there:
String getMessage(String code, Object[] args,
String defaultMessage, Locale loc)
: the basic method used to
retrieve a message from the
MessageSource
. When no message is
found for the specified locale, the default message is used. Any
arguments passed in are used as replacement values, using the
MessageFormat
functionality provided
by the standard library.
String getMessage(String code, Object[] args,
Locale loc)
: essentially the same as the previous
method, but with one difference: no default message can be
specified; if the message cannot be found, a
NoSuchMessageException
is thrown.
String getMessage(MessageSourceResolvable
resolvable, Locale locale)
: all properties used in the
methods above are also wrapped in a class named
MessageSourceResolvable
, which you
can use via this method.
When an ApplicationContext
gets
loaded, it automatically searches for a
MessageSource
bean defined in the
context. The bean has to have the name
'messageSource'
. If such a bean is found, all calls
to the methods described above will be delegated to the message source
that was found. If no message source was found, the
ApplicationContext
attempts to see if it
has a parent containing a bean with the same name. If so, it uses that
bean as the MessageSource
. If it can't
find any source for messages, an empty
DelegatingMessageSource
will be instantiated in
order to be able to accept calls to the methods defined above.
Spring currently provides two
MessageSource
implementations. These are
the ResourceBundleMessageSource
and the
StaticMessageSource
. Both implement
HierarchicalMessageSource
in order to do
nested messaging. The StaticMessageSource
is
hardly ever used but provides programmatic ways to add messages to the
source. The ResourceBundleMessageSource
is more
interesting and is the one we will provide an example for:
<beans> <bean id="messageSource" class="org.springframework.context.support.ResourceBundleMessageSource"> <property name="basenames"> <list> <value>format</value> <value>exceptions</value> <value>windows</value> </list> </property> </bean> </beans>
This assumes you have three resource bundles defined on your
classpath called format
,
exceptions
and windows
. Using the
JDK standard way of resolving messages through ResourceBundles, any
request to resolve a message will be handled. For the purposes of the
example, let's assume the contents of two of the above resource bundle
files are...
# in 'format.properties'
message=Alligators rock!
# in 'exceptions.properties'
argument.required=The '{0}' argument is required.
Some (admittedly trivial) driver code to exercise the
MessageSource
functionality can be found below.
Remember that all ApplicationContext
implementations are also MessageSource
implementations and so can be cast to the
MessageSource
interface.
public static void main(String[] args) { MessageSource resources = new ClassPathXmlApplicationContext("beans.xml"); String message = resources.getMessage("message", null, "Default", null); System.out.println(message); }
The resulting output from the above program will be...
Alligators rock!
So to summarize, the MessageSource
is
defined in a file called 'beans.xml'
(this file
exists at the root of your classpath). The
'messageSource'
bean definition refers to a number of
resource bundles via its basenames
property; the
three files that are passed in the list to the
basenames
property exist as files at the root of your
classpath (and are called format.properties
,
exceptions.properties
, and
windows.properties
respectively).
Let's look at another example, this time passing arguments to the message lookup; these arguments will be converted into Strings and inserted into placeholders in the lookup message. This is perhaps best explained with an example:
<beans> <!-- this MessageSource is being used in a web application --> <bean id="messageSource" class="org.springframework.context.support.ResourceBundleMessageSource"> <property name="basename" value="test-messages"/> </bean> <!-- let's inject the above MessageSource into this POJO --> <bean id="example" class="com.foo.Example"> <property name="messages" ref="messageSource"/> </bean> </beans>
public class Example { private MessageSource messages; public void setMessages(MessageSource messages) { this.messages = messages; } public void execute() { String message = this.messages.getMessage("argument.required", new Object [] {"userDao"}, "Required", null); System.out.println(message); } }
The resulting output from the invocation of the
execute()
method will be...
The 'userDao' argument is required.
With regard to internationalization (i18n), Spring's various
MessageSource
implementations follow the same
locale resolution and fallback rules as the standard JDK
ResourceBundle
. In short, and continuing with the
example 'messageSource'
defined previously, if you
want to resolve messages against the British (en-GB) locale, you would
create files called format_en_GB.properties
,
exceptions_en_GB.properties
, and
windows_en_GB.properties
respectively.
Locale resolution is typically going to be managed by the surrounding environment of the application. For the purpose of this example though, we'll just manually specify the locale that we want to resolve our (British) messages against.
# in 'exceptions_en_GB.properties'
argument.required=Ebagum lad, the '{0}' argument is required, I say, required.
public static void main(final String[] args) { MessageSource resources = new ClassPathXmlApplicationContext("beans.xml"); String message = resources.getMessage("argument.required", new Object [] {"userDao"}, "Required", Locale.UK); System.out.println(message); }
The resulting output from the running of the above program will be...
Ebagum lad, the 'userDao' argument is required, I say, required.
The MessageSourceAware
interface can also
be used to acquire a reference to any
MessageSource
that has been defined. Any bean
that is defined in an ApplicationContext
that
implements the MessageSourceAware
interface will
be injected with the application context's
MessageSource
when it (the bean) is being created
and configured.
Note: As an alternative to
ResourceBundleMessageSource
, Spring also provides
a ReloadableResourceBundleMessageSource
class.
This variant supports the same bundle file format but is more flexible
than the standard JDK based
ResourceBundleMessageSource
implementation. In particular, it allows for reading files
from any Spring resource location (not just from the classpath) and
supports hot reloading of bundle property files (while efficiently
caching them in between). Check out the
ReloadableResourceBundleMessageSource
javadoc for
details.
Event handling in the
ApplicationContext
is provided through
the ApplicationEvent
class and
ApplicationListener
interface. If a bean
which implements the ApplicationListener
interface is deployed into the context, every time an
ApplicationEvent
gets published to the
ApplicationContext
, that bean will be
notified. Essentially, this is the standard
Observer design pattern. Spring provides the
following standard events:
Table 4.6. Built-in Events
Event | Explanation |
---|---|
ContextRefreshedEvent | Published when the
ApplicationContext is initialized
or refreshed, e.g. using the refresh()
method on the
ConfigurableApplicationContext
interface. "Initialized" here means that all beans are loaded,
post-processor beans are detected and activated, singletons are
pre-instantiated, and the
ApplicationContext object is
ready for use. A refresh may be triggered multiple times, as
long as the context hasn't been closed - provided that the
chosen ApplicationContext
actually supports such "hot" refreshes (which e.g.
XmlWebApplicationContext does but
GenericApplicationContext
doesn't). |
ContextStartedEvent | Published when the
ApplicationContext is started,
using the start() method on the
ConfigurableApplicationContext
interface. "Started" here means that all
Lifecycle beans will receive an
explicit start signal. This will typically be used for
restarting after an explicit stop, but may also be used for
starting components that haven't been configured for autostart
(e.g. haven't started on initialization already). |
ContextStoppedEvent | Published when the
ApplicationContext is stopped,
using the stop() method on the
ConfigurableApplicationContext
interface. "Stopped" here means that all
Lifecycle beans will receive an
explicit stop signal. A stopped context may be restarted through
a start() call. |
ContextClosedEvent | Published when the
ApplicationContext is closed,
using the close() method on the
ConfigurableApplicationContext
interface. "Closed" here means that all singleton beans are
destroyed. A closed context has reached its end of life; it
cannot be refreshed or restarted. |
RequestHandledEvent | A web-specific event telling all beans that an HTTP
request has been serviced (this will be published
after the request has been finished). Note
that this event is only applicable for web applications using
Spring's DispatcherServlet . |
Implementing custom events can be done as well. Simply call the
publishEvent()
method on the
ApplicationContext
, specifying a
parameter which is an instance of your custom event class implementing
ApplicationEvent
. Event listeners receive events
synchronously. This means the publishEvent()
method blocks until all listeners have finished processing the event (it
is possible to supply an alternate event publishing strategy via an
ApplicationEventMulticaster
implementation). Furthermore, when a listener receives an event it
operates inside the transaction context of the publisher, if a
transaction context is available.
Let's look at an example. First, the
ApplicationContext
:
<bean id="emailer" class="example.EmailBean"> <property name="blackList"> <list> <value>[email protected]</value> <value>[email protected]</value> <value>[email protected]</value> </list> </property> </bean> <bean id="blackListListener" class="example.BlackListNotifier"> <property name="notificationAddress" value="[email protected]"/> </bean>
Now, let's look at the actual classes:
public class EmailBean implements ApplicationContextAware { private List blackList; private ApplicationContext ctx; public void setBlackList(List blackList) { this.blackList = blackList; } public void setApplicationContext(ApplicationContext ctx) { this.ctx = ctx; } public void sendEmail(String address, String text) { if (blackList.contains(address)) { BlackListEvent event = new BlackListEvent(address, text); ctx.publishEvent(event); return; } // send email... } }
public class BlackListNotifier implements ApplicationListener { private String notificationAddress; public void setNotificationAddress(String notificationAddress) { this.notificationAddress = notificationAddress; } public void onApplicationEvent(ApplicationEvent event) { if (event instanceof BlackListEvent) { // notify appropriate person... } } }
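The BlackListEvent referenced above is a custom event class; a minimal sketch matching the two-argument constructor used in EmailBean might look like this (the choice of the address as the event source is illustrative):

package example;

import org.springframework.context.ApplicationEvent;

public class BlackListEvent extends ApplicationEvent {

    private final String address;
    private final String text;

    public BlackListEvent(String address, String text) {
        // ApplicationEvent requires an event source; the offending address serves here
        super(address);
        this.address = address;
        this.text = text;
    }

    public String getAddress() {
        return this.address;
    }

    public String getText() {
        return this.text;
    }
}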
Of course, this particular example could probably be implemented in better ways (perhaps by using AOP features), but it should be sufficient to illustrate the basic event mechanism.
For optimal usage and understanding of application contexts, users
should generally familiarize themselves with Spring's
Resource
abstraction, as described in the
chapter entitled Chapter 5, Resources.
An application context is a
ResourceLoader
, able to be used to load
Resource
s. A
Resource
is essentially a
java.net.URL
on steroids (in fact, it just wraps and
uses a URL where appropriate), which can be used to obtain low-level
resources from almost any location in a transparent fashion, including
from the classpath, a filesystem location, anywhere describable with a
standard URL, and some other variations. If the resource location string
is a simple path without any special prefixes, where those resources
come from is specific and appropriate to the actual application context
type.
A bean deployed into the application context may implement the
special callback interface,
ResourceLoaderAware
, to be automatically
called back at initialization time with the application context itself
passed in as the ResourceLoader
. A bean
may also expose properties of type
Resource
, to be used to access static
resources, and expect that they will be injected into it like any other
properties. The person deploying the bean may specify those
Resource
properties as simple String
paths, and rely on a special JavaBean
PropertyEditor
that is automatically
registered by the context, to convert those text strings to actual
Resource
objects.
The location path or paths supplied to an
ApplicationContext
constructor are
actually resource strings, and in simple form are treated appropriately
to the specific context implementation (
ClassPathXmlApplicationContext
treats a simple
location path as a classpath location), but may also be used with
special prefixes to force loading of definitions from the classpath or a
URL, regardless of the actual context type.
As opposed to the BeanFactory
,
which will often be created programmatically,
ApplicationContext
instances can be
created declaratively using for example a
ContextLoader
. Of course you can also create
ApplicationContext
instances
programmatically using one of the
ApplicationContext
implementations.
First, let's examine the ContextLoader
mechanism
and its implementations.
The ContextLoader
mechanism comes in two
flavors: the ContextLoaderListener
and the
ContextLoaderServlet
. They both have the same
functionality but differ in that the listener version cannot be reliably
used in Servlet 2.3 containers. Since the Servlet 2.4 specification,
servlet context listeners are required to execute immediately after the
servlet context for the web application has been created and is
available to service the first request (and also when the servlet
context is about to be shut down): as such a servlet context listener is
an ideal place to initialize the Spring
ApplicationContext
. It is up to you as to
which one you use, but all things being equal you should probably prefer
ContextLoaderListener
; for more information on
compatibility, have a look at the Javadoc for the
ContextLoaderServlet
.
You can register an
ApplicationContext
using the
ContextLoaderListener
as follows:
<context-param> <param-name>contextConfigLocation</param-name> <param-value>/WEB-INF/daoContext.xml /WEB-INF/applicationContext.xml</param-value> </context-param> <listener> <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class> </listener> <!-- or use the ContextLoaderServlet instead of the above listener <servlet> <servlet-name>context</servlet-name> <servlet-class>org.springframework.web.context.ContextLoaderServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> -->
The listener inspects the
'contextConfigLocation'
parameter. If the parameter
does not exist, the listener will use
/WEB-INF/applicationContext.xml
as a default. When it
does exist, it will separate the String using
predefined delimiters (comma, semicolon and whitespace) and use the
values as locations where application contexts will be searched for.
Ant-style path patterns are supported as well: e.g.
/WEB-INF/*Context.xml
(for all files whose name ends
with "Context.xml", residing in the "WEB-INF" directory) or
/WEB-INF/**/*Context.xml
(for all such files in any
subdirectory of "WEB-INF").
The ContextLoaderServlet
can be used
instead of the ContextLoaderListener
. The servlet
will use the 'contextConfigLocation'
parameter just
as the listener does.
The majority of the code inside an application is best written in a
DI style, where that code is served out of a Spring IoC container, has its
own dependencies supplied by the container when it is created, and is
completely unaware of the container. However, for the small glue layers of
code that are sometimes needed to tie other code together, there is
sometimes a need for singleton (or quasi-singleton) style access to a
Spring IoC container. For example, third party code may try to construct
new objects directly (Class.forName()
style), without
the ability to force it to get these objects out of a Spring IoC
container. If the object constructed by the third party code is just a
small stub or proxy, which then uses a singleton style access to a Spring
IoC container to get a real object to delegate to, then inversion of
control has still been achieved for the majority of the code (the object
coming out of the container); thus most code is still unaware of the
container or how it is accessed, and remains decoupled from other code,
with all ensuing benefits. EJBs may also use this stub/proxy approach to
delegate to a plain Java implementation object, coming out of a Spring IoC
container. While the Spring IoC container itself ideally does not have to
be a singleton, it may be unrealistic in terms of memory usage or
initialization times (when using beans in the Spring IoC container such as
a Hibernate SessionFactory
) for each bean
to use its own, non-singleton Spring IoC container.
As another example, in complex J2EE applications with multiple
layers (various JAR files, EJBs, and WAR files packaged as an EAR), with
each layer having its own Spring IoC container definition (effectively
forming a hierarchy), the preferred approach when there is only one
web-app (WAR) in the top hierarchy is to simply create one composite
Spring IoC container from the multiple XML definition files from each
layer. All of the various Spring IoC container implementations may be
constructed from multiple definition files in this fashion. However, if
there are multiple sibling web-applications at the root of the hierarchy,
it is problematic to create a Spring IoC container for each
web-application which consists of mostly identical bean definitions from
lower layers, as there may be issues due to increased memory usage, issues
with creating multiple copies of beans which take a long time to
initialize (for example a Hibernate
SessionFactory
), and possible issues due to
side-effects. As an alternative, classes such as ContextSingletonBeanFactoryLocator
or SingletonBeanFactoryLocator
may be used to demand-load multiple hierarchical (that is one container is
the parent of another) Spring IoC container instances in a singleton
fashion, which may then be used as the parents of the web-application
Spring IoC container instances. The result is that bean definitions for
lower layers are loaded only as needed, and loaded only once.
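As a sketch of the singleton-style access pattern, the following glue code demand-loads a keyed container and delegates to a bean inside it; the class name, the factory key "com.mycompany.services" and the bean name "myService" are hypothetical and must match the bean reference definitions in your own locator configuration:
import org.springframework.beans.factory.BeanFactory;
import org.springframework.beans.factory.access.BeanFactoryLocator;
import org.springframework.beans.factory.access.BeanFactoryReference;
import org.springframework.context.access.ContextSingletonBeanFactoryLocator;

public class GlueCode {

    public Object lookupService() {
        // obtain the shared locator and demand-load the keyed container definition
        BeanFactoryLocator locator = ContextSingletonBeanFactoryLocator.getInstance();
        BeanFactoryReference reference = locator.useBeanFactory("com.mycompany.services");
        try {
            BeanFactory factory = reference.getFactory();
            // delegate to a real, container-managed object
            return factory.getBean("myService");
        } finally {
            // release the reference once it is no longer needed
            reference.release();
        }
    }
}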
You can see a detailed example of the usage of these classes by
viewing the Javadoc for the SingletonBeanFactoryLocator
and ContextSingletonBeanFactoryLocator
classes. As mentioned in the chapter on EJBs,
the Spring convenience base classes for EJBs normally use a non-singleton
BeanFactoryLocator
implementation, which is
easily replaced by the use of
SingletonBeanFactoryLocator
and
ContextSingletonBeanFactoryLocator
.
Since Spring 2.5, it is possible to deploy a Spring ApplicationContext as a RAR file, encapsulating the context and all of its required bean classes and library JARs in a J2EE RAR deployment unit. This is the equivalent of bootstrapping a standalone ApplicationContext, just hosted in a J2EE environment and able to access the J2EE server's facilities. RAR deployment is intended as a more 'natural' alternative to the not uncommon scenario of deploying a headless WAR file - i.e. a WAR file without any HTTP entry points, used only for bootstrapping a Spring ApplicationContext in a J2EE environment.
RAR deployment is ideal for application contexts that do not need
any HTTP entry points but rather just consist of message endpoints and
scheduled jobs etc. Beans in such a context may use application server
resources such as the JTA transaction manager and JNDI-bound JDBC
DataSources and JMS ConnectionFactory instances, and may also register
with the platform's JMX server - all through Spring's standard transaction
management and JNDI and JMX support facilities. Application components may
also interact with the application server's JCA WorkManager through
Spring's TaskExecutor
abstraction.
Check out the JavaDoc of the SpringContextResourceAdapter class for the configuration details involved in RAR deployment.
For simple deployment needs, all you need to do is the
following: Package all application classes into a RAR file
(which is just a standard JAR file with a different file extension), add
all required library jars into the root of the RAR archive, add a
"META-INF/ra.xml" deployment descriptor (as shown in
SpringContextResourceAdapter
's JavaDoc) as well as
the corresponding Spring XML bean definition file(s) (typically
"META-INF/applicationContext.xml"), and drop the resulting RAR file into
your application server's deployment directory!
NOTE: Such RAR deployment units are usually self-contained; they do not expose components to the 'outside' world, not even to other modules of the same application. Interaction with a RAR-based ApplicationContext usually happens through JMS destinations that it shares with other modules. A RAR-based ApplicationContext may also - for example - schedule some jobs, reacting to new files in the file system (or the like). If it actually needs to allow for synchronous access from the outside, it could for example export RMI endpoints, which of course may be used by other application modules on the same machine as well.
As mentioned in the section entitled Section 4.7.1.2, “Example: The
RequiredAnnotationBeanPostProcessor”, using a
BeanPostProcessor
in conjunction with
annotations is a common means of extending the Spring IoC container. For
example, Spring 2.0 introduced the possibility of enforcing required
properties with the @Required annotation. As of
Spring 2.5, it is now possible to follow that same general approach to
drive Spring's dependency injection. Essentially, the
@Autowired
annotation provides the same
capabilities as described in Section 4.3.5, “Autowiring collaborators” but
with more fine-grained control and wider applicability. Spring 2.5 also
adds support for JSR-250 annotations such as
@Resource
,
@PostConstruct
, and
@PreDestroy
. Use of these annotations also
requires that certain BeanPostProcessors
be
registered within the Spring container. As always, these can be registered
as individual bean definitions, but they can also be implicitly registered
by including the following tag in an XML-based Spring configuration
(notice the inclusion of the 'context
'
namespace):
<?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd"> <context:annotation-config/> </beans>
(The implicitly registered post-processors include AutowiredAnnotationBeanPostProcessor
,
CommonAnnotationBeanPostProcessor
,
PersistenceAnnotationBeanPostProcessor
,
as well as the aforementioned RequiredAnnotationBeanPostProcessor
.)
The @Required
annotation applies to
bean property setter methods, as in the following example:
public class SimpleMovieLister { private MovieFinder movieFinder; @Required public void setMovieFinder(MovieFinder movieFinder) { this.movieFinder = movieFinder; } // ... }
This annotation simply indicates that the affected bean property
must be populated at configuration time: either through an explicit
property value in a bean definition or through autowiring. The container
will throw an exception if the affected bean property has not been
populated; this allows for eager and explicit failure, avoiding
NullPointerException
s or the like later on. Note
that it is still recommended to put assertions into the bean class
itself (for example into an init method) in order to enforce those
required references and values even when using the class outside of a
container.
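A minimal sketch of such an assertion follows; the init() method here is illustrative and would be declared as the bean's init-method (or annotated with @PostConstruct) in your own configuration:
public class SimpleMovieLister {

    private MovieFinder movieFinder;

    @Required
    public void setMovieFinder(MovieFinder movieFinder) {
        this.movieFinder = movieFinder;
    }

    // enforce the required reference even when the class is used outside of a container
    public void init() {
        if (this.movieFinder == null) {
            throw new IllegalStateException("A MovieFinder is required");
        }
    }
}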
As expected, the @Autowired
annotation may be applied to "traditional" setter methods:
public class SimpleMovieLister { private MovieFinder movieFinder; @Autowired public void setMovieFinder(MovieFinder movieFinder) { this.movieFinder = movieFinder; } // ... }
The annotation may also be applied to methods with arbitrary names and/or multiple arguments:
public class MovieRecommender { private MovieCatalog movieCatalog; private CustomerPreferenceDao customerPreferenceDao; @Autowired public void prepare(MovieCatalog movieCatalog, CustomerPreferenceDao customerPreferenceDao) { this.movieCatalog = movieCatalog; this.customerPreferenceDao = customerPreferenceDao; } // ... }
The @Autowired
annotation may even
be applied on constructors and fields:
public class MovieRecommender { @Autowired private MovieCatalog movieCatalog; private CustomerPreferenceDao customerPreferenceDao; @Autowired public MovieRecommender(CustomerPreferenceDao customerPreferenceDao) { this.customerPreferenceDao = customerPreferenceDao; } // ... }
It is also possible to provide all beans of a
particular type from the
ApplicationContext
by adding the
annotation to a field or method that expects an array of that
type:
public class MovieRecommender { @Autowired private MovieCatalog[] movieCatalogs; // ... }
The same applies for typed collections:
public class MovieRecommender { private Set<MovieCatalog> movieCatalogs; @Autowired public void setMovieCatalogs(Set<MovieCatalog> movieCatalogs) { this.movieCatalogs = movieCatalogs; } // ... }
Even typed Maps may be autowired as long as the expected key type
is String
. The Map values will contain all beans
of the expected type, and the keys will contain the corresponding bean
names:
public class MovieRecommender { private Map<String, MovieCatalog> movieCatalogs; @Autowired public void setMovieCatalogs(Map<String, MovieCatalog> movieCatalogs) { this.movieCatalogs = movieCatalogs; } // ... }
By default, the autowiring will fail whenever zero candidate beans are available; the default behavior is to treat annotated methods, constructors, and fields as indicating required dependencies. This behavior can be changed as demonstrated below.
public class SimpleMovieLister { private MovieFinder movieFinder; @Autowired(required=false) public void setMovieFinder(MovieFinder movieFinder) { this.movieFinder = movieFinder; } // ... }
Note: Only one annotated constructor per-class may be marked as required, but multiple non-required constructors can be annotated. In that case, each will be considered among the candidates and Spring will use the greediest constructor whose dependencies can be satisfied. Prefer the use of the required attribute of @Autowired over the @Required annotation.
@Autowired
may also be used for
well-known "resolvable dependencies": the
BeanFactory
interface, the
ApplicationContext
interface, the
ResourceLoader
interface, the
ApplicationEventPublisher
interface and
the MessageSource
interface. These
interfaces (and their extended interfaces such as
ConfigurableApplicationContext
or
ResourcePatternResolver
) will be
automatically resolved, with no special setup necessary.
public class MovieRecommender { @Autowired private ApplicationContext context; public MovieRecommender() { } // ... }
Since autowiring by type may lead to multiple candidates, it is
often necessary to have more control over the selection process. One way
to accomplish this is with Spring's
@Qualifier
annotation. This allows for
associating qualifier values with specific arguments, narrowing the set
of type matches so that a specific bean is chosen for each argument. In
the simplest case, this can be a plain descriptive value:
public class MovieRecommender { @Autowired @Qualifier("main") private MovieCatalog movieCatalog; // ... }
The @Qualifier
annotation can also
be specified on individual constructor arguments or method
parameters:
public class MovieRecommender { private MovieCatalog movieCatalog; private CustomerPreferenceDao customerPreferenceDao; @Autowired public void prepare(@Qualifier("main") MovieCatalog movieCatalog, CustomerPreferenceDao customerPreferenceDao) { this.movieCatalog = movieCatalog; this.customerPreferenceDao = customerPreferenceDao; } // ... }
The corresponding bean definitions would look as follows. The bean with qualifier value "main" would be wired with the constructor argument that has been qualified with the same value.
<?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd"> <context:annotation-config/> <bean class="example.SimpleMovieCatalog"> <qualifier value="main"/> <!-- inject any dependencies required by this bean --> </bean> <bean class="example.SimpleMovieCatalog"> <qualifier value="action"/> <!-- inject any dependencies required by this bean --> </bean> <bean id="movieRecommender" class="example.MovieRecommender"/> </beans>
For a fallback match, the bean name is considered as a default
qualifier value. This means that the bean may be defined with an id
"main" instead of the nested qualifier element, leading to the same
matching result. However, note that while this can be used to refer to
specific beans by name, @Autowired
is
fundamentally about type-driven injection with optional semantic
qualifiers. This means that qualifier values, even when using the bean
name fallback, always have narrowing semantics within the set of type
matches; they do not semantically express a reference to a unique bean
id. Good qualifier values would be "main" or "EMEA" or "persistent",
expressing characteristics of a specific component - independent from
the bean id (which may be auto-generated in case of an anonymous bean
definition like the one above).
Qualifiers also apply to typed collections (as discussed above):
e.g. to Set<MovieCatalog>
. In such a case, all
matching beans according to the declared qualifiers are going to be
injected as a collection. This implies that qualifiers do not have to be
unique; they rather simply constitute filtering criteria. For example,
there could be multiple MovieCatalog
beans
defined with the same qualifier value "action"; all of which would be
injected into a Set<MovieCatalog>
annotated
with @Qualifier("action")
.
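For example, a field declaration along the following lines would receive every MovieCatalog bean carrying the "action" qualifier (a sketch based on the classes used earlier in this section):
public class MovieRecommender {

    // all MovieCatalog beans qualified with "action" are injected into this set
    @Autowired
    @Qualifier("action")
    private Set<MovieCatalog> actionCatalogs;

    // ...
}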
Tip: If you intend to express annotation-driven injection by name, do not primarily use @Autowired, even though it is technically capable of referring to a bean name through @Qualifier values. Instead, use the JSR-250 @Resource annotation, which is semantically defined to identify a specific target component by its unique name, with the declared type being irrelevant for the matching process. As a specific consequence of this semantic difference, beans which are themselves defined as a collection or map type cannot be injected via @Autowired, because type matching is not properly applicable to them; use @Resource for such beans, referring to the specific collection or map bean by its unique name. Note: In contrast to @Autowired, which applies to fields, constructors, and multi-argument methods (allowing for narrowing through qualifier annotations at the parameter level), @Resource is supported only for fields and bean property setter methods with a single argument.
You may create your own custom qualifier annotations as well.
Simply define an annotation and provide the
@Qualifier
annotation within your
definition:
@Target({ElementType.FIELD, ElementType.PARAMETER}) @Retention(RetentionPolicy.RUNTIME) @Qualifier public @interface Genre { String value(); }
Then you can provide the custom qualifier on autowired fields and parameters:
public class MovieRecommender { @Autowired @Genre("Action") private MovieCatalog actionCatalog; private MovieCatalog comedyCatalog; @Autowired public void setComedyCatalog(@Genre("Comedy") MovieCatalog comedyCatalog) { this.comedyCatalog = comedyCatalog; } // ... }
The next step is to provide the information on the candidate bean
definitions. You can add <qualifier/>
tags as
sub-elements of the <bean/>
tag and then
specify the 'type'
and 'value'
to
match your custom qualifier annotations. The type will be matched
against the fully-qualified class name of the annotation, or as a
convenience when there is no risk of conflicting names, you may use the
'short' class name. Both are demonstrated in the following
example.
<?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd"> <context:annotation-config/> <bean class="example.SimpleMovieCatalog"> <qualifier type="Genre" value="Action"/> <!-- inject any dependencies required by this bean --> </bean> <bean class="example.SimpleMovieCatalog"> <qualifier type="example.Genre" value="Comedy"/> <!-- inject any dependencies required by this bean --> </bean> <bean id="movieRecommender" class="example.MovieRecommender"/> </beans>
In the next section, entitled Section 4.12, “Classpath scanning, managed components and writing configurations using Java”, you will see an annotation-based alternative to providing the qualifier metadata in XML. Specifically, see: Section 4.12.9, “Providing qualifier metadata with annotations”.
In some cases, it may be sufficient to use an annotation without a value. This may be useful when the annotation serves a more generic purpose and could be applied across several different types of dependencies. For example, you may provide an offline catalog that would be searched when no Internet connection is available. First define the simple annotation:
@Target({ElementType.FIELD, ElementType.PARAMETER}) @Retention(RetentionPolicy.RUNTIME) @Qualifier public @interface Offline { }
Then add the annotation to the field or property to be autowired:
public class MovieRecommender { @Autowired @Offline private MovieCatalog offlineCatalog; // ... }
Now the bean definition only needs a qualifier
'type'
:
<bean class="example.SimpleMovieCatalog"> <qualifier type="Offline"/> <!-- inject any dependencies required by this bean --> </bean>
It is also possible to define custom qualifier annotations that
accept named attributes in addition to or instead of the simple
'value'
attribute. If multiple attribute values are
then specified on a field or parameter to be autowired, a bean
definition must match all such attribute values to
be considered an autowire candidate. As an example, consider the
following annotation definition:
@Target({ElementType.FIELD, ElementType.PARAMETER}) @Retention(RetentionPolicy.RUNTIME) @Qualifier public @interface MovieQualifier { String genre(); Format format(); }
In this case Format
is an enum:
public enum Format {
VHS, DVD, BLURAY
}
The fields to be autowired are annotated with the custom qualifier
and include values for both attributes: 'genre'
and
'format'
.
public class MovieRecommender { @Autowired @MovieQualifier(format=Format.VHS, genre="Action") private MovieCatalog actionVhsCatalog; @Autowired @MovieQualifier(format=Format.VHS, genre="Comedy") private MovieCatalog comedyVhsCatalog; @Autowired @MovieQualifier(format=Format.DVD, genre="Action") private MovieCatalog actionDvdCatalog; @Autowired @MovieQualifier(format=Format.BLURAY, genre="Comedy") private MovieCatalog comedyBluRayCatalog; // ... }
Finally, the bean definitions should contain matching qualifier
values. This example also demonstrates that bean
meta attributes may be used instead of the
<qualifier/>
sub-elements. If available, the
<qualifier/>
and its attributes would take
precedence, but the autowiring mechanism will fallback on the values
provided within the <meta/>
tags if no such
qualifier is present (see the last 2 bean definitions below).
<?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd"> <context:annotation-config/> <bean class="example.SimpleMovieCatalog"> <qualifier type="MovieQualifier"> <attribute key="format" value="VHS"/> <attribute key="genre" value="Action"/> </qualifier> <!-- inject any dependencies required by this bean --> </bean> <bean class="example.SimpleMovieCatalog"> <qualifier type="MovieQualifier"> <attribute key="format" value="VHS"/> <attribute key="genre" value="Comedy"/> </qualifier> <!-- inject any dependencies required by this bean --> </bean> <bean class="example.SimpleMovieCatalog"> <meta key="format" value="DVD"/> <meta key="genre" value="Action"/> <!-- inject any dependencies required by this bean --> </bean> <bean class="example.SimpleMovieCatalog"> <meta key="format" value="BLURAY"/> <meta key="genre" value="Comedy"/> <!-- inject any dependencies required by this bean --> </bean> </beans>
The CustomAutowireConfigurer
is a BeanFactoryPostProcessor
that
enables further customization of the autowiring process. Specifically,
it allows you to register your own custom qualifier annotation types
even if they are not themselves annotated with Spring's
@Qualifier
annotation.
<bean id="customAutowireConfigurer" class="org.springframework.beans.factory.annotation.CustomAutowireConfigurer"> <property name="customQualifierTypes"> <set> <value>example.CustomQualifier</value> </set> </property> </bean>
Note that the particular implementation of
AutowireCandidateResolver
that will be
activated for the application context depends upon the Java version. If
running on less than Java 5, the qualifier annotations are not
supported, and therefore autowire candidates are solely determined by
the 'autowire-candidate'
value of each bean
definition as well as any
'default-autowire-candidates'
pattern(s) available on
the <beans/>
element. If running on Java 5 or
greater, the presence of @Qualifier
annotations or any custom annotations registered with the
CustomAutowireConfigurer
will also play a
role.
Regardless of the Java version, the determination of a "primary"
candidate (when multiple beans qualify as autowire candidates) is the
same: if exactly one bean definition among the candidates has a
'primary'
attribute set to 'true'
,
it will be selected.
Spring also supports injection using the JSR-250
@Resource
annotation on fields or bean
property setter methods. This is a common pattern found in Java EE 5 and
Java 6 (e.g. in JSF 1.2 managed beans or JAX-WS 2.0 endpoints), which
Spring supports for Spring-managed objects as well.
@Resource
takes a 'name' attribute,
and by default Spring will interpret that value as the bean name to be
injected. In other words, it follows by-name
semantics as demonstrated in this example:
public class SimpleMovieLister { private MovieFinder movieFinder; @Resource(name="myMovieFinder") public void setMovieFinder(MovieFinder movieFinder) { this.movieFinder = movieFinder; } }
If no name is specified explicitly, then the default name will be derived from the name of the field or setter method: In case of a field, it will simply be equivalent to the field name; in case of a setter method, it will be equivalent to the bean property name. So the following example is going to have the bean with name "movieFinder" injected into its setter method:
public class SimpleMovieLister { private MovieFinder movieFinder; @Resource public void setMovieFinder(MovieFinder movieFinder) { this.movieFinder = movieFinder; } }
Note: The name provided with the annotation will be resolved as a bean name by the ApplicationContext of which the CommonAnnotationBeanPostProcessor is aware.
Similar to @Autowired
,
@Resource
may fall back to standard bean
type matches (i.e. find a primary type match instead of a specific named
bean) as well as resolve well-known "resolvable dependencies": the
BeanFactory
interface, the
ApplicationContext
interface, the
ResourceLoader
interface, the
ApplicationEventPublisher
interface and
the MessageSource
interface. Note that
this only applies to @Resource
usage with
no explicit name specified!
So the following example will have its
customerPreferenceDao
field looking for a bean with
name "customerPreferenceDao" first, then falling back to a primary type
match for the type CustomerPreferenceDao
. The
"context" field will simply be injected based on the known resolvable
dependency type
ApplicationContext
.
public class MovieRecommender { @Resource private CustomerPreferenceDao customerPreferenceDao; @Resource private ApplicationContext context; public MovieRecommender() { } // ... }
The CommonAnnotationBeanPostProcessor
not
only recognizes the @Resource
annotation
but also the JSR-250 lifecycle annotations.
Introduced in Spring 2.5, the support for these annotations offers yet
another alternative to those described in the sections on initialization
callbacks and destruction
callbacks. Provided that the
CommonAnnotationBeanPostProcessor
is registered
within the Spring ApplicationContext
, a
method carrying one of these annotations will be invoked at the same
point in the lifecycle as the corresponding Spring lifecycle interface's
method or explicitly declared callback method. In the example below, the
cache will be pre-populated upon initialization and cleared upon
destruction.
public class CachingMovieLister { @PostConstruct public void populateMovieCache() { // populates the movie cache upon initialization... } @PreDestroy public void clearMovieCache() { // clears the movie cache upon destruction... } }
Note: For details regarding the effects of combining various lifecycle mechanisms, see Section 4.5.1.4, “Combining lifecycle mechanisms”.
Thus far most of the examples within this chapter have used XML for
specifying the configuration metadata that produces each
BeanDefinition
within the Spring container.
The previous section (Section 4.11, “Annotation-based configuration”)
demonstrated the possibility of providing a considerable amount of the
configuration metadata using source-level annotations. Even in those
examples however, the "base" bean definitions were explicitly defined in
the XML file while the annotations were driving the dependency injection
only. The current section introduces an option for implicitly detecting
the candidate components by scanning the classpath
and matching against filters.
Note: Starting with Spring 3.0, many of the features provided by the Spring JavaConfig project have been added to the core Spring Framework. This allows you to define beans using Java rather than the traditional XML files. Take a look at the @Configuration, @Bean, @Import, and @DependsOn annotations for examples of how to use these new features.
Beginning with Spring 2.0, the
@Repository
annotation was introduced as
a marker for any class that fulfills the role or
stereotype of a repository (a.k.a. Data Access
Object or DAO). Among the possibilities for leveraging such a marker is
the automatic translation of exceptions as described in Section 14.6.4, “Exception Translation”.
Spring 2.5 introduces further stereotype annotations:
@Component
,
@Service
and
@Controller
.
@Component
serves as a generic stereotype
for any Spring-managed component; whereas,
@Repository
,
@Service
, and
@Controller
serve as specializations of
@Component
for more specific use cases
(e.g., in the persistence, service, and presentation layers,
respectively). What this means is that you can annotate your component
classes with @Component
, but by
annotating them with @Repository
,
@Service
, or
@Controller
instead, your classes are
more properly suited for processing by tools or associating with
aspects. For example, these stereotype annotations make ideal targets
for pointcuts. Of course, it is also possible that
@Repository
,
@Service
, and
@Controller
may carry additional
semantics in future releases of the Spring Framework. Thus, if you are
making a decision between using
@Component
or
@Service
for your service layer,
@Service
is clearly the better choice.
Similarly, as stated above, @Repository
is already supported as a marker for automatic exception translation in
your persistence layer.
Spring provides the capability of automatically detecting
'stereotyped' classes and registering corresponding
BeanDefinition
s with the
ApplicationContext
. For example, the
following two classes are eligible for such autodetection:
@Service public class SimpleMovieLister { private MovieFinder movieFinder; @Autowired public SimpleMovieLister(MovieFinder movieFinder) { this.movieFinder = movieFinder; } }
@Repository public class JpaMovieFinder implements MovieFinder { // implementation elided for clarity }
To autodetect these classes and register the corresponding beans, include the following element in XML, where 'base-package' is a common parent package for the two classes (alternatively, a comma-separated list may be specified that includes the parent package of each class).
<?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd"> <context:component-scan base-package="org.example"/> </beans>
Note: The scanning of classpath packages requires the presence of corresponding directory entries in the classpath. When building JARs with Ant, make sure to not activate the files-only switch of the jar task!
Furthermore, the
AutowiredAnnotationBeanPostProcessor
and
CommonAnnotationBeanPostProcessor
are
both included implicitly when using the component-scan element. That
means that the two components are autodetected and
wired together - all without any bean configuration metadata provided in
XML.
Note: The registration of those post-processors can be disabled by including the annotation-config attribute with a value of 'false'.
By default, classes annotated with
@Component
,
@Repository
,
@Service
, or
@Controller
(or classes annotated with a
custom annotation that itself is annotated with
@Component
) are the only detected
candidate components. However it is simple to modify and extend this
behavior by applying custom filters. These can be added as either
include-filter or
exclude-filter sub-elements of the
'component-scan
' element. Each filter element
requires the 'type
' and
'expression
' attributes. Five filtering options exist
as described below.
Table 4.7. Filter Types
Filter Type | Example Expression | Description |
---|---|---|
annotation | org.example.SomeAnnotation | An annotation to be present at the type level in target components. |
assignable | org.example.SomeClass | A class (or interface) that the target components are assignable to (extend/implement). |
aspectj | org.example..*Service+ | An AspectJ type expression to be matched by the target components. |
regex | org\.example\.Default.* | A regex expression to be matched by the target components' class names. |
custom | org.example.MyCustomTypeFilter | A custom implementation of the org.springframework.core.type.TypeFilter interface. |
Find below an example of the XML configuration for ignoring all
@Repository
annotations and using "stub"
repositories instead.
<beans ...> <context:component-scan base-package="org.example"> <context:include-filter type="regex" expression=".*Stub.*Repository"/> <context:exclude-filter type="annotation" expression="org.springframework.stereotype.Repository"/> </context:component-scan> </beans>
Note: It is also possible to disable the default filters by providing use-default-filters="false" as an attribute of the <component-scan/> element. This will in effect disable automatic detection of classes annotated with @Component, @Repository, @Service, or @Controller.
The central artifact in Spring's new Java-configuration support is
the @Configuration
-annotated class. These
classes consist principally of
@Bean
-annotated methods that define
instantiation, configuration, and initialization logic for objects that
will be managed by the Spring IoC container.
Annotating a class with the
@Configuration
indicates that the class
may be used by the Spring IoC container as a source of bean definitions.
The simplest possible @Configuration
class would read as follows:
@Configuration public class AppConfig { }
An application may make use of one
@Configuration
-annotated class, or many.
@Configuration
is meta-annotated as a
@Component
, therefore
Configuration-classes are candidates for component-scanning and may also
take advantage of @Autowired
annotations
at the field and method level but not at the constructor level.
Configuration-classes must also have a default constructor. Externalized
values may be wired into Configuration-classes using the
@Value
annotation.
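A minimal sketch of a Configuration-class combining these capabilities is shown below; the expression used with @Value and the RegionService class are illustrative assumptions, not part of the examples in this chapter:
@Configuration
public class AppConfig {

    // a hypothetical externalized value, resolved through the Spring Expression Language
    @Value("#{systemProperties['user.region']}")
    private String region;

    @Bean
    public RegionService regionService() {
        // RegionService is a hypothetical class, used only to illustrate wiring the value
        return new RegionService(region);
    }
}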
@Bean
is a method-level annotation
and a direct analog of the XML <bean/>
element. The
annotation supports some of the attributes offered by
<bean/>
, such as: init-method
,
destroy-method
,
autowiring
and name
.
You can use the @Bean annotation in a Configuration-class or in a Component-class.
To declare a bean, simply annotate a method with the
@Bean
annotation. Such a method will be
used to register a bean definition within a BeanFactory
of the type specified as the methods return value. By default, the
bean name will be the same as the method name (see bean naming for details on how to
customize this behavior). The following is a simple example of a
@Bean
method declaration:
@Configuration public class AppConfig { @Bean public TransferService transferService() { return new TransferServiceImpl(); } }
For comparison sake, the configuration above is exactly equivalent to the following Spring XML:
<beans> <bean name="transferService" class="com.acme.TransferServiceImpl"/> </beans>
Both will result in a bean named transferService
being available in the BeanFactory
or
ApplicationContext
, bound to an object instance of type
TransferServiceImpl
:
transferService -> com.acme.TransferServiceImpl
When @Bean
s have dependencies on
one another, expressing that dependency is as simple as having one
bean method call another:
@Configuration public class AppConfig { @Bean public Foo foo() { return new Foo(bar()); } @Bean public Bar bar() { return new Bar(); } }
In the example above, the foo
bean receives a
reference to bar
via constructor injection.
Beans created in a Configuration-class support the regular lifecycle callbacks. Any class defined with the @Bean annotation can use the @PostConstruct and @PreDestroy annotations from JSR-250; see the section on JSR-250 annotations for further details.
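A minimal sketch, using a hypothetical CachingTransferService class, could look like this:
public class CachingTransferService {

    @PostConstruct
    public void warmUpCache() {
        // populate caches once the container has created and configured the bean
    }

    @PreDestroy
    public void flushCache() {
        // release cached state before the container destroys the bean
    }
}

@Configuration
public class AppConfig {

    @Bean
    public CachingTransferService transferService() {
        return new CachingTransferService();
    }
}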
The regular Spring lifecycle callbacks are fully
supported as well. If a bean implements InitializingBean
,
DisposableBean
, or Lifecycle
, their
respective methods will be called by the container.
The standard set of *Aware
interfaces such as
BeanFactoryAware
,
BeanNameAware
,
MessageSourceAware
,
ApplicationContextAware
,
etc. are also fully supported.
The @Bean
annotation supports
specifying arbitrary initialization and destruction callback methods,
much like Spring XML's init-method
and
destroy-method
attributes to the bean
element:
public class Foo { public void init() { // initialization logic } } public class Bar { public void cleanup() { // destruction logic } } @Configuration public class AppConfig { @Bean(initMethodName = "init") public Foo foo() { return new Foo(); } @Bean(destroyMethodName="cleanup") public Bar bar() { return new Bar(); } }
Of course, in the case of Foo
above, it would be
equally as valid to call the init()
method directly
during construction:
@Configuration public class AppConfig { @Bean public Foo foo() { Foo foo = new Foo(); foo.init(); return foo; } // ... }
Tip: Remember that because you are working directly in Java, you can do anything you like with your objects, and do not always need to rely on the container!
You can specify that your beans defined with the
@Bean
annotation should have a
specific scope. You can use any of the standard scopes specified in
the Bean Scopes
section.
The StandardScopes
class provides string
constants for each of these four scopes. SINGLETON is the default,
and can be overridden by using the
@Scope
annotation:
@Configuration public class MyConfiguration { @Bean @Scope(StandardScopes.PROTOTYPE) public Encryptor encryptor() { // ... } }
Spring offers a convenient way of working with scoped
dependencies through scoped
proxies. The easiest way to create such a proxy when using
the XML configuration is the <aop:scoped-proxy/>
element. Configuring your beans in Java with a @Scope annotation
offers equivalent support with the proxyMode attribute. The default
is no proxy (ScopedProxyMode.NO
) but you can
specify ScopedProxyMode.TARGET_CLASS
or
ScopedProxyMode.INTERFACES
.
If we were to port the scoped proxy example from the XML reference documentation (see link above) to our
@Bean
using Java, it would look like
the following:
// an HTTP Session-scoped bean exposed as a proxy
@Bean
@Scope(value = StandardScopes.SESSION, proxyMode = ScopedProxyMode.TARGET_CLASS)
public UserPreferences userPreferences() {
    return new UserPreferences();
}

@Bean
public UserService userService() {
    UserService service = new SimpleUserService();
    // a reference to the proxied 'userPreferences' bean
    service.setUserPreferences(userPreferences());
    return service;
}
As noted earlier, lookup method injection is an advanced feature that should be comparatively rarely used. It is useful in cases where a singleton-scoped bean has a dependency on a prototype-scoped bean. Using Java for this type of configuration provides a natural means for implementing this pattern.
public abstract class CommandManager { public Object process(Object commandState) { // grab a new instance of the appropriate Command interface Command command = createCommand(); // set the state on the (hopefully brand new) Command instance command.setState(commandState); return command.execute(); } // okay... but where is the implementation of this method? protected abstract Command createCommand(); }
Using Java-configuration support we can easily create a
subclass of CommandManager
where the abstract
createCommand()
is overridden in such a way that it
'looks up' a brand new (prototype) command object:
@Bean
@Scope(StandardScopes.PROTOTYPE)
public AsyncCommand asyncCommand() {
    AsyncCommand command = new AsyncCommand();
    // inject dependencies here as required
    return command;
}

@Bean
public CommandManager commandManager() {
    // return a new anonymous implementation of CommandManager with createCommand()
    // overridden to return a new prototype Command object
    return new CommandManager() {
        protected Command createCommand() {
            return asyncCommand();
        }
    };
}
By default, Configuration-classes use a
@Bean
method's name as the name of the
resulting bean. This functionality can be overridden, however, using
the name
attribute.
@Configuration public class AppConfig { @Bean(name = "bar") public Foo foo() { return new Foo(); } }
Spring components can also contribute bean definition metadata to
the container. This is done with the same @Bean
annotation used to define bean metadata within
@Configuration
annotated classes. Here is a simple
example:
@Component public class FactoryMethodComponent { @Bean @Qualifier("public") public TestBean publicInstance() { return new TestBean("publicInstance"); } public void DoWork() { // Component method implementation omitted } }
This class is a Spring component and has application specific code
contained in its DoWork
method. However, it
also contributes a bean definition that has a factory method referring
to the method publicInstance
. The
@Bean
annotation identifies the factory method and
also other bean definition properties, such as a qualifier value via the
@Qualifier
annotation. Other method level
annotations that can be specified are @Scope
,
@Lazy
, and custom qualifier annotations. Autowired
fields and methods are supported as before with the additional support
for autowiring of @Bean methods, as shown in the example below
@Component
public class FactoryMethodComponent {

    private static int i;

    @Bean
    @Qualifier("public")
    public TestBean publicInstance() {
        return new TestBean("publicInstance");
    }

    // use of a custom qualifier and autowiring of method parameters
    @Bean
    @BeanAge(1)
    protected TestBean protectedInstance(@Qualifier("public") TestBean spouse,
                                         @Value("#{privateInstance.age}") String country) {
        TestBean tb = new TestBean("protectedInstance", 1);
        tb.setSpouse(spouse);
        tb.setCountry(country);
        return tb;
    }

    @Bean
    @Scope(StandardScopes.PROTOTYPE)
    private TestBean privateInstance() {
        return new TestBean("privateInstance", i++);
    }

    @Bean
    @Scope(value = StandardScopes.SESSION, proxyMode = ScopedProxyMode.TARGET_CLASS)
    public TestBean requestScopedInstance() {
        return new TestBean("requestScopedInstance", 3);
    }
}
Note the use of autowiring of the String
method parameter country
to the value of the
age
property on another bean named
privateInstance
. A Spring Expression Language element
is used to define the value of the property via the notation #{
<expression> }
. For @Value
annotations, an expression resolver is preconfigured to look for bean
names when resolving expression text.
The @Bean methods in a Spring component are processed differently than their counterparts inside a Spring @Configuration class. The difference is that @Component classes are not enhanced with CGLIB to intercept the invocation of methods and fields. CGLIB proxying is the means by which invoking methods or fields within a @Configuration class's @Bean methods creates bean metadata references to collaborating objects, rather than invoking the method with standard Java semantics. In contrast, calling a method or field within a @Component class's @Bean method has standard Java semantics.
When a component is autodetected as part of the scanning process,
its bean name will be generated by the
BeanNameGenerator
strategy known to that
scanner. By default, any Spring 'stereotype' annotation
(@Component
,
@Repository
,
@Service
, and
@Controller
) that contains a
name
value will thereby provide that name to the
corresponding bean definition. If such an annotation contains no
name
value or for any other detected component (such
as those discovered due to custom filters), the default bean name
generator will return the uncapitalized non-qualified class name. For
example, if the following two components were detected, the names would
be 'myMovieLister' and 'movieFinderImpl':
@Service("myMovieLister") public class SimpleMovieLister { // ... }
@Repository public class MovieFinderImpl implements MovieFinder { // ... }
Note: If you don't want to rely on the default bean-naming strategy, you may provide a custom bean-naming strategy. First, implement the BeanNameGenerator interface, and be sure to include a default no-arg constructor. Then, provide the fully-qualified class name when configuring the scanner:
<beans ...> <context:component-scan base-package="org.example" name-generator="org.example.MyNameGenerator" /> </beans>
As a general rule, consider specifying the name with the annotation whenever other components may be making explicit references to it. On the other hand, the auto-generated names are adequate whenever the container is responsible for wiring.
As with Spring-managed components in general, the default and by
far most common scope is 'singleton'. However, there are times when
other scopes are needed. Therefore Spring 2.5 introduces a new
@Scope
annotation as well. Simply provide
the name of the scope within the annotation, such as:
@Scope(StandardScopes.PROTOTYPE) @Repository public class MovieFinderImpl implements MovieFinder { // ... }
Note: If you would like to provide a custom strategy for scope resolution rather than relying on the annotation-based approach, implement the ScopeMetadataResolver interface, and be sure to include a default no-arg constructor. Then, provide the fully-qualified class name when configuring the scanner:
<beans ...> <context:component-scan base-package="org.example" scope-resolver="org.example.MyScopeResolver" /> </beans>
When using certain non-singleton scopes, it may be necessary to generate proxies for the scoped objects. The reasoning is described in detail within the section entitled Section 4.4.4.5, “Scoped beans as dependencies”. For this purpose, a scoped-proxy attribute is available on the 'component-scan' element. The three possible values are: 'no', 'interfaces', and 'targetClass'. For example, the following configuration will result in standard JDK dynamic proxies:
<beans ...> <context:component-scan base-package="org.example" scoped-proxy="interfaces" /> </beans>
The @Qualifier
annotation was
introduced in the section above entitled Section 4.11.3, “Fine-tuning annotation-based autowiring with qualifiers”. The examples in that
section demonstrated use of the
@Qualifier
annotation as well as custom
qualifier annotations to provide fine-grained control when resolving
autowire candidates. Since those examples were based on XML bean
definitions, the qualifier metadata was provided on the candidate bean
definitions using the 'qualifier
' or
'meta
' sub-elements of the 'bean
'
element in the XML. When relying upon classpath scanning for
autodetection of components, then the qualifier metadata may be provided
with type-level annotations on the candidate class. The following three
examples demonstrate this technique.
@Component @Qualifier("Action") public class ActionMovieCatalog implements MovieCatalog { // ... }
@Component @Genre("Action") public class ActionMovieCatalog implements MovieCatalog { // ... }
@Component @Offline public class CachingMovieCatalog implements MovieCatalog { // ... }
Note: As with most of the annotation-based alternatives, keep in mind that the annotation metadata is bound to the class definition itself, while the use of XML allows for multiple beans of the same type to provide variations in their qualifier metadata, because that metadata is provided per-instance rather than per-class.
The context
namespace introduced in Spring 2.5
provides a load-time-weaver
element.
<beans ...> <context:load-time-weaver/> </beans>
Adding this element to an XML-based Spring configuration file
activates a Spring LoadTimeWeaver
for the
ApplicationContext
. Any bean within that
ApplicationContext
may implement
LoadTimeWeaverAware
thereby receiving a
reference to the load-time weaver instance. This is particularly useful in
combination with Spring's JPA support where
load-time weaving may be necessary for JPA class transformation. Consult
the LocalContainerEntityManagerFactoryBean
Javadoc
for more detail. For more on AspectJ load-time weaving, see Section 8.8.4, “Load-time weaving with AspectJ in the Spring Framework”.
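As a sketch, a LoadTimeWeaverAware bean simply implements the single callback method; the class name below is hypothetical:
import org.springframework.context.weaving.LoadTimeWeaverAware;
import org.springframework.instrument.classloading.LoadTimeWeaver;

public class WeaverConsumer implements LoadTimeWeaverAware {

    private LoadTimeWeaver loadTimeWeaver;

    public void setLoadTimeWeaver(LoadTimeWeaver loadTimeWeaver) {
        // called by the application context, supplying its LoadTimeWeaver
        this.loadTimeWeaver = loadTimeWeaver;
    }
}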
[1] See the section entitled Background
[2] See the section entitled Section 4.3.1, “Injecting dependencies”
Java's standard java.net.URL
class and
standard handlers for various URL prefixes unfortunately are not adequate for all access to low-level resources. For example,
there is no standardized URL
implementation
that may be used to access a resource that needs to be obtained from
the classpath, or relative to a
ServletContext
. While it is possible
to register new handlers for specialized URL
prefixes (similar to existing handlers for prefixes such as
http:
), this is generally quite complicated, and the
URL
interface still lacks some desirable
functionality, such as a method to check for the existence of the
resource being pointed to.
Spring's Resource
interface is meant
to be a more capable interface for abstracting access to low-level
resources.
public interface Resource extends InputStreamSource { boolean exists(); boolean isOpen(); URL getURL() throws IOException; File getFile() throws IOException; Resource createRelative(String relativePath) throws IOException; String getFilename(); String getDescription(); }
public interface InputStreamSource { InputStream getInputStream() throws IOException; }
Some of the most important methods from the
Resource
interface are:
getInputStream()
: locates and opens the
resource, returning an InputStream
for reading
from the resource. It is expected that each invocation returns a
fresh InputStream
. It is the responsibility of
the caller to close the stream.
exists()
: returns a
boolean
indicating whether this resource actually
exists in physical form.
isOpen()
: returns a
boolean
indicating whether this resource represents
a handle with an open stream. If true
, the
InputStream
cannot be read multiple times, and
must be read once only and then closed to avoid resource leaks. Will
be false
for all usual resource implementations,
with the exception of
InputStreamResource
.
getDescription()
: returns a description
for this resource, to be used for error output when working with the
resource. This is often the fully qualified file name or the actual
URL of the resource.
Other methods allow you to obtain an actual
URL
or File
object
representing the resource (if the underlying implementation is compatible,
and supports that functionality).
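A minimal usage sketch follows, assuming a template file is available at the given classpath location (ClassPathResource is one of the built-in implementations described below):
Resource resource = new ClassPathResource("some/resource/path/myTemplate.txt");
if (resource.exists()) {
    InputStream is = resource.getInputStream();
    try {
        // read the template from the stream ...
    } finally {
        // each invocation returns a fresh stream; the caller must close it
        is.close();
    }
}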
The Resource
abstraction is used
extensively in Spring itself, as an argument type in many method
signatures when a resource is needed. Other methods in some Spring APIs
(such as the constructors to various
ApplicationContext
implementations), take a
String
which in unadorned or simple form is used to
create a Resource
appropriate to that
context implementation, or via special prefixes on the
String
path, allow the caller to specify that a
specific Resource
implementation must be
created and used.
While the Resource interface is used extensively within Spring and by Spring, it is also very useful as a general utility class in your own code, for access to resources, even when your code doesn't know or care about any other parts of Spring.
While this couples your code to Spring, it really only couples it to this
small set of utility classes, which are serving as a more capable
replacement for URL
, and can be considered
equivalent to any other library you would use for this purpose.
It is important to note that the
Resource
abstraction does not replace
functionality: it wraps it where possible. For example, a
UrlResource
wraps a URL, and uses the wrapped
URL
to do its work.
There are a number of Resource
implementations that come supplied straight out of the box in
Spring:
The UrlResource
wraps a
java.net.URL
, and may be used to access any
object that is normally accessible via a URL, such as files, an HTTP
target, an FTP target, etc. All URLs have a standardized
String
representation, such that appropriate
standardized prefixes are used to indicate one URL type from another.
This includes file:
for accessing filesystem paths,
http:
for accessing resources via the HTTP protocol,
ftp:
for accessing resources via FTP, etc.
A UrlResource
is created by Java code
explicitly using the UrlResource
constructor, but
will often be created implicitly when you call an API method which takes
a String
argument which is meant to represent a
path. For the latter case, a JavaBeans
PropertyEditor
will ultimately decide
which type of Resource
to create. If the
path string contains a few well-known (to it, that is) prefixes such as
classpath:
, it will create an appropriate specialized
Resource
for that prefix. However, if it
doesn't recognize the prefix, it will assume that this is just a standard
URL string, and will create a UrlResource
.
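For example, the following sketch creates UrlResources explicitly for the file and HTTP locations used elsewhere in this chapter (note that the UrlResource constructor declares MalformedURLException for invalid URL strings):
Resource fileTemplate = new UrlResource("file:/some/resource/path/myTemplate.txt");
Resource httpTemplate = new UrlResource("http://myhost.com/resource/path/myTemplate.txt");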
This class represents a resource which should be obtained from the classpath. This uses either the thread context class loader, a given class loader, or a given class for loading resources.
This Resource
implementation
supports resolution as java.io.File
if the class
path resource resides in the file system, but not for classpath
resources which reside in a jar and have not been expanded (by the
servlet engine, or whatever the environment is) to the filesystem. To
address this the various Resource
implementations always support resolution as a
java.net.URL
.
A ClassPathResource
is created by Java code
explicitly using the ClassPathResource
constructor, but will often be created implicitly when you call an API
method which takes a String
argument which is
meant to represent a path. For the latter case, a JavaBeans
PropertyEditor
will recognize the special
prefix classpath:
on the string path, and create a
ClassPathResource
in that case.
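For example, given an application context ctx, both of the following sketches resolve to a ClassPathResource:
// explicit construction
Resource direct = new ClassPathResource("some/resource/path/myTemplate.txt");

// implicit construction via the classpath: prefix and the registered PropertyEditor
Resource viaPrefix = ctx.getResource("classpath:some/resource/path/myTemplate.txt");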
This is a Resource
implementation
for java.io.File
handles. It obviously supports
resolution as a File
, and as a
URL
.
This is a Resource
implementation
for ServletContext
resources,
interpreting relative paths within the relevant web application's root
directory.
This always supports stream access and URL access, but allows java.io.File access only when the web application archive is expanded and the resource is physically on the filesystem. Whether it is expanded onto the filesystem, or accessed directly from the JAR or from somewhere else such as a database (which is conceivable), actually depends on the Servlet container.
A Resource
implementation for a
given InputStream
. This should only be
used if no specific Resource
implementation is applicable. In particular, prefer
ByteArrayResource
or any of the file-based
Resource
implementations where
possible.
In contrast to other Resource
implementations, this is a descriptor for an
already opened resource - therefore returning
true
from isOpen()
. Do not
use it if you need to keep the resource descriptor somewhere, or if you
need to read a stream multiple times.
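The following sketch (with made-up content) contrasts the two: a ByteArrayResource can be opened any number of times, whereas an InputStreamResource wraps a stream that has effectively already been opened:

Resource repeatable = new ByteArrayResource("in-memory content".getBytes());
// getInputStream() may be called repeatedly; each call returns a fresh stream

Resource singleUse = new InputStreamResource(new ByteArrayInputStream("streamed content".getBytes()));
// isOpen() returns true; read it once and do not hold on to the descriptor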
The ResourceLoader
interface is meant
to be implemented by objects that can return (i.e. load)
Resource
instances.
public interface ResourceLoader { Resource getResource(String location); }
All application contexts implement the
ResourceLoader
interface, and therefore all
application contexts may be used to obtain
Resource
instances.
When you call getResource()
on a specific
application context, and the location path specified doesn't have a
specific prefix, you will get back a
Resource
type that is appropriate to that
particular application context. For example, assume the following snippet
of code was executed against a
ClassPathXmlApplicationContext
instance:
Resource template = ctx.getResource("some/resource/path/myTemplate.txt);
What would be returned would be a
ClassPathResource
; if the same method was executed
against a FileSystemXmlApplicationContext
instance,
you'd get back a FileSystemResource
. For a
WebApplicationContext
, you'd get back a
ServletContextResource
, and so on.
As such, you can load resources in a fashion appropriate to the particular application context.
On the other hand, you may also force
ClassPathResource
to be used, regardless of the
application context type, by specifying the special
classpath:
prefix:
Resource template = ctx.getResource("classpath:some/resource/path/myTemplate.txt);
Similarly, one can force a UrlResource
to be
used by specifying any of the standard java.net.URL
prefixes:
Resource template = ctx.getResource("file:/some/resource/path/myTemplate.txt);
Resource template = ctx.getResource("http://myhost.com/resource/path/myTemplate.txt);
The following table summarizes the strategy for converting
String
s to
Resource
s:
Table 5.1. Resource strings
Prefix | Example | Explanation |
---|---|---|
classpath: | classpath:com/myapp/config.xml | Loaded from the classpath. |
file: | file:/data/config.xml | Loaded as a URL, from the filesystem. [1] |
http: | http://myserver/logo.png | Loaded as a URL. |
(none) | /data/config.xml | Depends on the underlying ApplicationContext. |
[1] But see also the section entitled Section 5.7.3, “FileSystemResource caveats”. |
The ResourceLoaderAware
interface is
a special marker interface, identifying objects that expect to be provided
with a ResourceLoader
reference.
public interface ResourceLoaderAware { void setResourceLoader(ResourceLoader resourceLoader); }
When a class implements
ResourceLoaderAware
and is deployed into an
application context (as a Spring-managed bean), it is recognized as
ResourceLoaderAware
by the application
context. The application context will then invoke the
setResourceLoader(ResourceLoader)
method, supplying
itself as the argument (remember, all application contexts in Spring
implement the ResourceLoader
interface).
Of course, since an
ApplicationContext
is a
ResourceLoader
, the bean could also
implement the ApplicationContextAware
interface and use the supplied application context directly to load
resources, but in general, it's better to use the specialized
ResourceLoader
interface if that's all
that's needed. The code would just be coupled to the resource loading
interface, which can be considered a utility interface, and not the whole
Spring ApplicationContext
interface.
As of Spring 2.5, you can rely upon autowiring of the
ResourceLoader
as an alternative to
implementing the ResourceLoaderAware
interface.
The "traditional" constructor
and byType
autowiring modes (as described in the section entitled
Section 4.3.5, “Autowiring collaborators”) are now capable of providing a
dependency of type ResourceLoader
for either a
constructor argument or setter method parameter respectively. For more flexibility
(including the ability to autowire fields and multiple parameter methods), consider
using the new annotation-based autowiring features. In that case, the
ResourceLoader
will be autowired into a field,
constructor argument, or method parameter that is expecting the
ResourceLoader
type as long as the field,
constructor, or method in question carries the
@Autowired
annotation. For more information,
see the section entitled Section 4.11.2, “@Autowired”.
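Under the annotation-based approach, the hypothetical service from the previous sketch no longer needs to implement the callback interface at all:

public class TemplateService {

    @Autowired
    private ResourceLoader resourceLoader;

    public Resource loadTemplate(String location) {
        return this.resourceLoader.getResource(location);
    }
}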
If the bean itself is going to determine and supply the resource
path through some sort of dynamic process, it probably makes sense for the
bean to use the ResourceLoader
interface to
load resources. Consider as an example the loading of a template of some
sort, where the specific resource that is needed depends on the role of
the user. If the resources are static, it makes sense to eliminate the use
of the ResourceLoader
interface completely,
and just have the bean expose the Resource
properties it needs, and expect that they will be injected into it.
What makes it trivial to then inject these properties is that all
application contexts register and use a special JavaBeans
PropertyEditor
which can convert
String
paths to
Resource
objects. So if
myBean
has a template property of type
Resource
, it can be configured with a
simple string for that resource, as follows:
<bean id="myBean" class="..."> <property name="template" value="some/resource/path/myTemplate.txt"/> </bean>
Note that the resource path has no prefix, so because the
application context itself is going to be used as the
ResourceLoader
, the resource itself will be
loaded via a ClassPathResource
,
FileSystemResource
, or
ServletContextResource
(as appropriate)
depending on the exact type of the context.
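For the configuration above to work, the hypothetical myBean class simply needs to expose a property of type Resource; the container performs the String-to-Resource conversion before calling the setter:

public class MyBean {

    private Resource template;

    public void setTemplate(Resource template) {
        // receives a ClassPathResource, FileSystemResource, etc.,
        // depending on the context type and any prefix used
        this.template = template;
    }
}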
If there is a need to force a specific
Resource
type to be used, then a prefix may
be used. The following two examples show how to force a
ClassPathResource
and a
UrlResource
(the latter being used to access a
filesystem file).
<property name="template" value="classpath:some/resource/path/myTemplate.txt">
<property name="template" value="file:/some/resource/path/myTemplate.txt"/>
An application context constructor (for a specific application context type) generally takes a string or array of strings as the location path(s) of the resource(s) such as XML files that make up the definition of the context.
When such a location path doesn't have a prefix, the specific
Resource
type built from that path and
used to load the bean definitions, depends on and is appropriate to the
specific application context. For example, if you create a
ClassPathXmlApplicationContext
as follows:
ApplicationContext ctx = new ClassPathXmlApplicationContext("conf/appContext.xml");
The bean definitions will be loaded from the classpath, as a
ClassPathResource
will be
used. But if you create a
FileSystemXmlApplicationContext
as
follows:
ApplicationContext ctx = new FileSystemXmlApplicationContext("conf/appContext.xml");
The bean definition will be loaded from a filesystem location, in this case relative to the current working directory.
Note that the use of the special classpath prefix or a standard
URL prefix on the location path will override the default type of
Resource
created to load the definition.
So this FileSystemXmlApplicationContext
...
ApplicationContext ctx = new FileSystemXmlApplicationContext("classpath:conf/appContext.xml");
... will actually load its bean definitions from the classpath.
However, it is still a FileSystemXmlApplicationContext
. If it is
subsequently used as a ResourceLoader
,
any unprefixed paths will still be treated as filesystem paths.
The ClassPathXmlApplicationContext
exposes a number of constructors to enable convenient instantiation.
The basic idea is that one supplies merely a string array containing
just the filenames of the XML files themselves (without the leading
path information), and one also supplies a
Class
; the
ClassPathXmlApplicationContext
will derive the
path information from the supplied class.
An example will hopefully make this clear. Consider a directory layout that looks like this:
com/
  foo/
    services.xml
    daos.xml
    MessengerService.class
A ClassPathXmlApplicationContext
instance
composed of the beans defined in the 'services.xml'
and 'daos.xml'
could be instantiated like
so...
ApplicationContext ctx = new ClassPathXmlApplicationContext( new String[] {"services.xml", "daos.xml"}, MessengerService.class);
Please do consult the Javadocs for the
ClassPathXmlApplicationContext
class for
details of the various constructors.
The resource paths in application context constructor values may
be a simple path (as shown above) which has a one-to-one mapping to a
target Resource, or alternately may contain the special "classpath*:"
prefix and/or internal Ant-style patterns (matched using
Spring's PathMatcher
utility). Both of the latter
are effectively wildcards.
One use for this mechanism is when doing component-style
application assembly. All components can 'publish' context definition
fragments to a well-known location path, and when the final application
context is created using the same path prefixed via
classpath*:
, all component fragments will be picked
up automatically.
Note that this wildcarding is specific to use of resource paths in
application context constructors (or when using the
PathMatcher
utility class hierarchy directly),
and is resolved at construction time. It has nothing to do with the
Resource
type itself. It's not possible
to use the classpath*:
prefix to construct an actual
Resource
, as a resource points to just
one resource at a time.
When the path location contains an Ant-style pattern, for example:
/WEB-INF/*-context.xml
com/mycompany/**/applicationContext.xml
file:C:/some/path/*-context.xml
classpath:com/mycompany/**/applicationContext.xml
... the resolver follows a more complex but defined procedure to
try to resolve the wildcard. It produces a Resource for the path up to
the last non-wildcard segment and obtains a URL from it. If this URL
is not a "jar:" URL or container-specific variant (e.g.
"zip:
" in WebLogic, "wsjar
" in
WebSphere, etc.), then a java.io.File
is
obtained from it and used to resolve the wildcard by traversing the
filesystem. In the case of a jar URL, the resolver either gets a
java.net.JarURLConnection
from it or manually
parses the jar URL and then traverses the contents of the jar file
to resolve the wildcards.
If the specified path is already a file URL (either
explicitly, or implicitly because the base
ResourceLoader
is a
filesystem one), then wildcarding is guaranteed to work in a
completely portable fashion.
If the specified path is a classpath location, then the
resolver must obtain the last non-wildcard path segment URL via a
Classloader.getResource()
call. Since this
is just a node of the path (not the file at the end) it is actually
undefined (in the ClassLoader
Javadocs)
exactly what sort of a URL is returned in this case. In practice, it
is always a java.io.File
representing the
directory, where the classpath resource resolves to a filesystem
location, or a jar URL of some sort, where the classpath resource
resolves to a jar location. Still, there is a portability concern on
this operation.
If a jar URL is obtained for the last non-wildcard segment,
the resolver must be able to get a
java.net.JarURLConnection
from it, or
manually parse the jar URL, to be able to walk the contents of the
jar, and resolve the wildcard. This will work in most environments,
but will fail in others, and it is strongly recommended that the
wildcard resolution of resources coming from jars be thoroughly
tested in your specific environment before you rely on it.
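One straightforward way to perform such a test is to resolve a pattern directly with Spring's PathMatchingResourcePatternResolver (the pattern shown here is only an example) and inspect what comes back in the environment in question:

ResourcePatternResolver resolver = new PathMatchingResourcePatternResolver();

// resolves the Ant-style pattern against every matching classpath root, jars included
Resource[] resources = resolver.getResources("classpath*:META-INF/*-context.xml");
for (Resource resource : resources) {
    System.out.println(resource.getDescription());
}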
When constructing an XML-based application context, a location
string may use the special classpath*:
prefix:
ApplicationContext ctx = new ClassPathXmlApplicationContext("classpath*:conf/appContext.xml");
This special prefix specifies that all classpath resources that
match the given name must be obtained (internally, this essentially
happens via a ClassLoader.getResources(...)
call), and then merged to form the final application context
definition.
Note: Classpath*: portability
The wildcard classpath relies on the getResources() method of the underlying ClassLoader, so its behavior can vary between environments (in particular across application server classloader implementations), especially for resources packaged inside jar files.
The "classpath*:
" prefix can also be combined
with a PathMatcher
pattern in the rest of the location path, for
example "classpath*:META-INF/*-beans.xml
". In this
case, the resolution strategy is fairly simple: a
ClassLoader.getResources() call is used on the last non-wildcard path
segment to get all the matching resources in the class loader
hierarchy, and then off each resource the same PathMatcher resolution
strategy described above is used for the wildcard subpath.
Please note that "classpath*:
" when
combined with Ant-style patterns will only work reliably with at least
one root directory before the pattern starts, unless the actual target
files reside in the file system. This means that a pattern like
"classpath*:*.xml
" will not retrieve files from the
root of jar files but rather only from the root of expanded
directories. This originates from a limitation in the JDK's
ClassLoader.getResources()
method which only
returns file system locations for a passed-in empty string (indicating
potential roots to search).
Ant-style patterns with "classpath:
"
resources are not guaranteed to find matching resources if the root
package to search is available in multiple class path locations. This
is because a resource such as
com/mycompany/package1/service-context.xml
may be in only one location, but when a path such as
classpath:com/mycompany/**/service-context.xml
is used to try to resolve it, the resolver will work off the (first) URL
returned by getResource("com/mycompany")
. If
this base package node exists in multiple classloader locations, the
actual end resource may not be underneath. Therefore, preferably, use
"classpath*:
" with the same Ant-style pattern in
such a case, which will search all class path locations that contain
the root package.
A FileSystemResource
that is not attached
to a FileSystemApplicationContext
(that is, a
FileSystemApplicationContext
is not the actual
ResourceLoader
) will treat absolute vs.
relative paths as you would expect. Relative paths are relative to the
current working directory, while absolute paths are relative to the root
of the filesystem.
For backwards compatibility (historical) reasons however, this
changes when the FileSystemApplicationContext
is
the ResourceLoader
. The
FileSystemApplicationContext
simply forces all
attached FileSystemResource
instances to treat
all location paths as relative, whether they start with a leading slash
or not. In practice, this means the following are equivalent:
ApplicationContext ctx = new FileSystemXmlApplicationContext("conf/context.xml");
ApplicationContext ctx = new FileSystemXmlApplicationContext("/conf/context.xml");
As are the following: (Even though it would make sense for them to be different, as one case is relative and the other absolute.)
FileSystemXmlApplicationContext ctx = ...;
ctx.getResource("some/resource/path/myTemplate.txt");
FileSystemXmlApplicationContext ctx = ...;
ctx.getResource("/some/resource/path/myTemplate.txt");
In practice, if true absolute filesystem paths are needed, it is
better to forgo the use of absolute paths with
FileSystemResource
/
FileSystemXmlApplicationContext
, and just force
the use of a UrlResource
, by using the
file:
URL prefix.
// actual context type doesn't matter, the Resource will always be UrlResource
ctx.getResource("file:/some/resource/path/myTemplate.txt");
// force this FileSystemXmlApplicationContext to load its definition via a UrlResource
ApplicationContext ctx = new FileSystemXmlApplicationContext("file:/conf/context.xml");
There are pros and cons for considering validation as business logic,
and Spring offers a design for validation (and data binding) that
does not exclude either one of them. Specifically, validation should not be
tied to the web tier, it should be easy to localize, and it should be
possible to plug in any available validator. Considering the above, Spring
has come up with a Validator
interface that
is both basic and eminently usable in every layer of an application.
Data binding is useful for allowing user input to be dynamically
bound to the domain model of an application (or whatever objects you use
to process user input). Spring provides the so-called
DataBinder
to do exactly that. The
Validator
and the
DataBinder
make up the validation
package,
which is primarily used in but not limited to the MVC framework.
The BeanWrapper
is a fundamental concept in the
Spring Framework and is used in a lot of places. However, you probably
will not ever have the need to use the BeanWrapper
directly. Because this
is reference documentation however, we felt that some explanation might be
in order. We're explaining the BeanWrapper
in this chapter since if you were
going to use it at all, you would probably do so when trying to bind
data to objects, which is strongly related to the BeanWrapper
.
Spring uses PropertyEditors all over the place. The concept of a
PropertyEditor
is part of the JavaBeans specification. Just as with the
BeanWrapper
, it's best to explain the use of PropertyEditors in this
chapter as well, since it's closely related to the BeanWrapper
and the
DataBinder
.
Spring features a Validator
interface that you can
use to validate objects. The Validator
interface works using
an Errors
object so that while validating, validators can report
validation failures to the Errors
object.
Let's consider a small data object:
public class Person { private String name; private int age; // the usual getters and setters... }
We're going to provide validation behavior for the Person
class by implementing the following two methods of the
org.springframework.validation.Validator
interface:
supports(Class)
- Can this
Validator
validate instances of the supplied
Class
?
validate(Object, org.springframework.validation.Errors)
-
validates the given object and in case of validation errors, registers
those with the given Errors
object
Implementing a Validator
is fairly straightforward,
especially when you know of the ValidationUtils
helper class
that the Spring Framework also provides.
public class PersonValidator implements Validator { /** * This Validator validates just Person instances */ public boolean supports(Class clazz) { return Person.class.equals(clazz); } public void validate(Object obj, Errors e) { ValidationUtils.rejectIfEmpty(e, "name", "name.empty"); Person p = (Person) obj; if (p.getAge() < 0) { e.rejectValue("age", "negativevalue"); } else if (p.getAge() > 110) { e.rejectValue("age", "too.darn.old"); } } }
As you can see, the static
rejectIfEmpty(..)
method on the ValidationUtils
class is used to reject the
'name'
property if it is null
or the empty string.
Have a look at the Javadoc for the ValidationUtils
class to see
what functionality it provides besides the example shown previously.
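To see the validator in action outside of a web environment, you can drive it by hand; the following sketch uses BeanPropertyBindingResult as the Errors implementation (the property values are arbitrary):

Person person = new Person();
person.setName(null);
person.setAge(-1);

Errors errors = new BeanPropertyBindingResult(person, "person");
new PersonValidator().validate(person, errors);

// errors now holds a field error for 'name' (code "name.empty")
// and one for 'age' (code "negativevalue")
boolean hasErrors = errors.hasErrors();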
While it is certainly possible to implement a single
Validator
class to validate each of the nested objects
in a rich object, it may be better to encapsulate the validation logic for each nested
class of object in its own Validator
implementation. A
simple example of a 'rich' object would be a
Customer
that is composed of two String
properties (a first and second name) and a complex Address
object.
Address
objects may be used independently of
Customer
objects, and so a distinct
AddressValidator
has been implemented. If you want your
CustomerValidator
to reuse the logic contained within the
AddressValidator
class without recourse to copy-n-paste you can
dependency-inject or instantiate an AddressValidator
within your
CustomerValidator
, and use it like so:
public class CustomerValidator implements Validator { private final Validator addressValidator; public CustomerValidator(Validator addressValidator) { if (addressValidator == null) { throw new IllegalArgumentException("The supplied [Validator] is required and must not be null."); } if (!addressValidator.supports(Address.class)) { throw new IllegalArgumentException( "The supplied [Validator] must support the validation of [Address] instances."); } this.addressValidator = addressValidator; } /** * This Validator validates Customer instances, and any subclasses of Customer too */ public boolean supports(Class clazz) { return Customer.class.isAssignableFrom(clazz); } public void validate(Object target, Errors errors) { ValidationUtils.rejectIfEmptyOrWhitespace(errors, "firstName", "field.required"); ValidationUtils.rejectIfEmptyOrWhitespace(errors, "surname", "field.required"); Customer customer = (Customer) target; try { errors.pushNestedPath("address"); ValidationUtils.invokeValidator(this.addressValidator, customer.getAddress(), errors); } finally { errors.popNestedPath(); } } }
Validation errors are reported to the Errors
object passed to the validator. In case of Spring Web MVC you can use the
<spring:bind/>
tag to inspect the error messages, but
of course you can also inspect the errors object yourself. More information about
the methods it offers can be found in the Javadoc.
We've talked about data binding and validation. Outputting messages corresponding to
validation errors is the last thing we need to discuss. In the example we've shown
above, we rejected the name
and the age
field.
If we're going to output the error messages by using a MessageSource
,
we will do so using the error code we've given when rejecting the field ('name' and 'age'
in this case). When you call (either directly, or indirectly, using for example the
ValidationUtils
class) rejectValue
or one of
the other reject
methods from the Errors
interface, the underlying implementation will not only register the code you've
passed in, but also a number of additional error codes. What error codes it registers
is determined by the MessageCodesResolver
that is used.
By default, the DefaultMessageCodesResolver
is used, which for example
not only registers a message with the code you gave, but also messages that include the
field name you passed to the reject method. So in case you reject a field using
rejectValue("age", "too.darn.old")
, apart from the
too.darn.old
code, Spring will also register
too.darn.old.age
and too.darn.old.age.int
(so the first will include the field name and the second will include the type of the
field); this is done as a convenience to aid developers in targeting error
messages and suchlike.
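As an illustrative sketch (the bundle name and the message texts are assumptions, not part of the framework), those generated codes can then be resolved through a MessageSource, reusing the errors object populated in the validation example above; this works because field errors implement MessageSourceResolvable:

ResourceBundleMessageSource messageSource = new ResourceBundleMessageSource();
messageSource.setBasename("messages"); // messages.properties on the classpath (assumed name)

// messages.properties might contain, from most specific to least specific:
//   too.darn.old.age=Please enter a realistic age.
//   too.darn.old=That value is too old.

for (ObjectError error : errors.getAllErrors()) {
    // the most specific registered code that has a message wins
    System.out.println(messageSource.getMessage(error, Locale.getDefault()));
}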
More information on the MessageCodesResolver
and the default
strategy can be found online with the Javadocs for
MessageCodesResolver
and
DefaultMessageCodesResolver
respectively.
The org.springframework.beans
package adheres to
the JavaBeans standard provided by Sun. A JavaBean is simply a class with
a default no-argument constructor, which follows a naming convention
where (by way of an example) a property named bingoMadness
would have a setter
method setBingoMadness(..)
and a getter method getBingoMadness()
.
For more information about JavaBeans and the specification, please refer
to Sun's website ( java.sun.com/products/javabeans).
One quite important class in the beans package is the
BeanWrapper
interface and its corresponding
implementation (BeanWrapperImpl
). As quoted from the
Javadoc, the BeanWrapper
offers functionality to set and get property
values (individually or in bulk), get property descriptors, and to query
properties to determine if they are readable or writable. Also, the
BeanWrapper
offers support for nested properties, enabling the setting of
properties on sub-properties to an unlimited depth. Then, the BeanWrapper
supports the ability to add standard JavaBeans
PropertyChangeListeners
and
VetoableChangeListeners
, without the need for
supporting code in the target class. Last but not least, the BeanWrapper
provides support for the setting of indexed properties. The BeanWrapper
usually isn't used by application code directly, but by the
DataBinder
and the
BeanFactory
.
The way the BeanWrapper
works is partly indicated by its name:
it wraps a bean to perform actions on that bean, like
setting and retrieving properties.
Setting and getting properties is done using the
setPropertyValue(s)
and
getPropertyValue(s)
methods that both come with a
couple of overloaded variants. They're all described in more detail in
the Javadoc Spring comes with. What's important to know is that there
are a couple of conventions for indicating properties of an object. A
couple of examples:
Table 6.1. Examples of properties
Expression | Explanation |
---|---|
name | Indicates the property name
corresponding to the methods getName() or
isName() and
setName(..) |
account.name | Indicates the nested property name
of the property account corresponding e.g.
to the methods getAccount().setName() or
getAccount().getName() |
account[2] | Indicates the third element of the
indexed property account . Indexed
properties can be of type array ,
list or other naturally
ordered collection |
account[COMPANYNAME] | Indicates the value of the map entry indexed by the key
COMPANYNAME of the Map property
account |
Below you'll find some examples of working with the BeanWrapper
to
get and set properties.
(This next section is not vitally important to you if you're not
planning to work with the BeanWrapper
directly. If you're
just using the DataBinder
and the
BeanFactory
and their out-of-the-box implementation, you
should skip ahead to the section about
PropertyEditors
.)
Consider the following two classes:
public class Company { private String name; private Employee managingDirector; public String getName() { return this.name; } public void setName(String name) { this.name = name; } public Employee getManagingDirector() { return this.managingDirector; } public void setManagingDirector(Employee managingDirector) { this.managingDirector = managingDirector; } }
public class Employee { private String name; private float salary; public String getName() { return this.name; } public void setName(String name) { this.name = name; } public float getSalary() { return salary; } public void setSalary(float salary) { this.salary = salary; } }
The following code snippets show some examples of how to retrieve
and manipulate some of the properties of instantiated
Companies
and Employees
:
BeanWrapper company = new BeanWrapperImpl(new Company());
// setting the company name..
company.setPropertyValue("name", "Some Company Inc.");
// ... can also be done like this:
PropertyValue value = new PropertyValue("name", "Some Company Inc.");
company.setPropertyValue(value);

// ok, let's create the director and tie it to the company:
BeanWrapper jim = new BeanWrapperImpl(new Employee());
jim.setPropertyValue("name", "Jim Stravinsky");
company.setPropertyValue("managingDirector", jim.getWrappedInstance());

// retrieving the salary of the managingDirector through the company
Float salary = (Float) company.getPropertyValue("managingDirector.salary");
Spring heavily uses the concept of PropertyEditors
to effect the conversion
between an Object
and a String
. If you think about it,
it sometimes might be handy to be able to represent properties in a different way than the object itself.
For example, a Date
can be represented in a human readable way (as the
String
'2007-09-14
'), while we're still able to convert the
human readable form back to the original date (or even better: convert any date entered in a human readable
form, back to Date
objects). This behavior can be achieved by
registering custom editors, of type java.beans.PropertyEditor
.
Registering custom editors on a BeanWrapper
or alternately in a specific IoC
container as mentioned in the previous chapter, gives it the knowledge of how to convert properties to the
desired type. Read more about PropertyEditors
in the Javadoc of the
java.beans
package provided by Sun.
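For example, the CustomDateEditor (which is not registered by default) can be registered on a BeanWrapper so that a date expressed as a String is converted for a Date-typed property; the Contract class and its property used here are hypothetical:

public class Contract {
    private Date signedOn;
    public Date getSignedOn() { return this.signedOn; }
    public void setSignedOn(Date signedOn) { this.signedOn = signedOn; }
}

BeanWrapper wrapper = new BeanWrapperImpl(new Contract());
wrapper.registerCustomEditor(Date.class,
        new CustomDateEditor(new SimpleDateFormat("yyyy-MM-dd"), false));

// the String is converted to a java.util.Date by the registered editor
wrapper.setPropertyValue("signedOn", "2007-09-14");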
A couple of examples where property editing is used in Spring:
setting properties on beans is done
using PropertyEditors
. When mentioning
java.lang.String
as the value of a property of
some bean you're declaring in an XML file, Spring will (if the setter
of the corresponding property has a Class
-parameter) use the
ClassEditor
to try to resolve the parameter to
a Class
object.
parsing HTTP request parameters in
Spring's MVC framework is done using all kinds of PropertyEditors
that you can manually bind in all subclasses of the
CommandController
.
Spring has a number of built-in PropertyEditors
to make life easy.
Each of those is listed below and they are all located in the
org.springframework.beans.propertyeditors
package. Most, but not all (as indicated below),
are registered by default by BeanWrapperImpl
. Where the property editor is configurable
in some fashion, you can of course still register your own variant to override the default one:
Table 6.2. Built-in PropertyEditors
Class | Explanation |
---|---|
ByteArrayPropertyEditor | Editor for byte arrays. Strings will simply be
converted to their corresponding byte representations.
Registered by default by BeanWrapperImpl . |
ClassEditor | Parses Strings representing classes to actual classes
and the other way around. When a class is not found, an
IllegalArgumentException is thrown. Registered by default by
BeanWrapperImpl . |
CustomBooleanEditor | Customizable property editor for Boolean properties.
Registered by default by BeanWrapperImpl , but, can be
overridden by registering custom instance of it as custom
editor. |
CustomCollectionEditor | Property editor for Collections, converting any source
Collection to a given target Collection type. |
CustomDateEditor | Customizable property editor for java.util.Date, supporting a custom DateFormat. NOT registered by default. Must be user registered as needed with appropriate format. |
CustomNumberEditor | Customizable property editor for any Number subclass
like Integer , Long ,
Float , Double . Registered
by default by BeanWrapperImpl , but can be
overridden by registering custom instance of it as a custom editor. |
FileEditor | Capable of resolving Strings to
java.io.File objects. Registered by default by
BeanWrapperImpl . |
InputStreamEditor | One-way property editor, capable of taking a text
string and producing (via an intermediate ResourceEditor and
Resource ) an
InputStream , so InputStream
properties may be directly set as Strings. Note that the default usage
will not close the InputStream for
you! Registered by default by BeanWrapperImpl . |
LocaleEditor | Capable of resolving Strings to
Locale objects and vice versa (the String
format is [language]_[country]_[variant], which is the same
thing the toString() method of Locale provides). Registered by
default by BeanWrapperImpl . |
PatternEditor | Capable of resolving Strings to JDK 1.5
Pattern objects and vice versa. |
PropertiesEditor | Capable of converting Strings (formatted using the
format as defined in the Javadoc for the java.lang.Properties
class) to Properties objects. Registered by
default by BeanWrapperImpl . |
StringTrimmerEditor | Property editor that trims Strings. Optionally allows
transforming an empty string into a null value. NOT
registered by default; must be user registered as needed. |
URLEditor | Capable of resolving a String representation of a URL
to an actual URL object. Registered by
default by BeanWrapperImpl . |
Spring uses the java.beans.PropertyEditorManager
to set
the search path for property editors that might be needed. The search path also includes
sun.bean.editors
, which includes
PropertyEditor
implementations for types such as
Font
, Color
, and most of the primitive types.
Note also that the standard JavaBeans infrastructure will automatically discover
PropertyEditor
classes (without you having to register them
explicitly) if they are in the same package as the class they handle, and have the same name
as that class, with 'Editor'
appended; for example, one could have the
following class and package structure, which would be sufficient for the
FooEditor
class to be recognized and used as the
PropertyEditor
for Foo
-typed
properties.
com
  chank
    pop
      Foo
      FooEditor // the PropertyEditor for the Foo class
Note that you can also use the standard BeanInfo
JavaBeans
mechanism here as well (described
in not-amazing-detail here).
Find below an example of using the BeanInfo
mechanism for
explicitly registering one or more PropertyEditor
instances
with the properties of an associated class.
com
  chank
    pop
      Foo
      FooBeanInfo // the BeanInfo for the Foo class
Here is the Java source code for the referenced FooBeanInfo
class. This
would associate a CustomNumberEditor
with the age
property of the Foo
class.
public class FooBeanInfo extends SimpleBeanInfo { public PropertyDescriptor[] getPropertyDescriptors() { try { final PropertyEditor numberPE = new CustomNumberEditor(Integer.class, true); PropertyDescriptor ageDescriptor = new PropertyDescriptor("age", Foo.class) { public PropertyEditor createPropertyEditor(Object bean) { return numberPE; }; }; return new PropertyDescriptor[] { ageDescriptor }; } catch (IntrospectionException ex) { throw new Error(ex.toString()); } } }
When setting bean properties as a string value, a Spring IoC container
ultimately uses standard JavaBeans PropertyEditors
to convert these
Strings to the complex type of the property. Spring pre-registers a number
of custom PropertyEditors
(for example, to convert a classname expressed
as a string into a real Class
object). Additionally, Java's standard
JavaBeans PropertyEditor
lookup mechanism allows a
PropertyEditor
for a class simply to be named appropriately and
placed in the same package as the class it provides support for, to be found automatically.
If there is a need to register other custom PropertyEditors
, there
are several mechanisms available. The most manual approach, which is not normally convenient or
recommended, is to simply use the registerCustomEditor()
method of the
ConfigurableBeanFactory
interface, assuming you have a
BeanFactory
reference. Another, slightly more convenient, mechanism is to use
a special bean factory post-processor called CustomEditorConfigurer
.
Although bean factory post-processors can be used with BeanFactory
implementations, the CustomEditorConfigurer
has a nested property setup, so it is
strongly recommended that it is used with the ApplicationContext
, where
it may be deployed in similar fashion to any other bean, and automatically detected and applied.
Note that all bean factories and application contexts automatically use a number of built-in property
editors, through their use of something called a BeanWrapper
to handle
property conversions. The standard property editors that the BeanWrapper
registers are listed in the previous section. Additionally,
ApplicationContexts
also override or add an additional number of editors
to handle resource lookups in a manner appropriate to the specific application context type.
Standard JavaBeans PropertyEditor
instances are used to convert
property values expressed as strings to the actual complex type of the property.
CustomEditorConfigurer
, a bean factory post-processor, may be used to conveniently
add support for additional PropertyEditor
instances to an
ApplicationContext
.
Consider a user class ExoticType
, and another class
DependsOnExoticType
which needs ExoticType
set as a property:
package example; public class ExoticType { private String name; public ExoticType(String name) { this.name = name; } } public class DependsOnExoticType { private ExoticType type; public void setType(ExoticType type) { this.type = type; } }
When things are properly set up, we want to be able to assign the type property as a string, which a
PropertyEditor
will behind the scenes convert into an actual
ExoticType
instance:
<bean id="sample" class="example.DependsOnExoticType"> <property name="type" value="aNameForExoticType"/> </bean>
The PropertyEditor
implementation could look similar to this:
package example;

// converts string representation to ExoticType object
public class ExoticTypeEditor extends PropertyEditorSupport {

    private String format;

    public void setFormat(String format) {
        this.format = format;
    }

    public void setAsText(String text) {
        if (format != null && format.equals("upperCase")) {
            text = text.toUpperCase();
        }
        ExoticType type = new ExoticType(text);
        setValue(type);
    }
}
Finally, we use CustomEditorConfigurer
to register the new
PropertyEditor
with the ApplicationContext
,
which will then be able to use it as needed:
<bean class="org.springframework.beans.factory.config.CustomEditorConfigurer"> <property name="customEditors"> <map> <entry key="example.ExoticType"> <bean class="example.ExoticTypeEditor"> <property name="format" value="upperCase"/> </bean> </entry> </map> </property> </bean>
Another mechanism for registering property editors with the Spring container is to create and use
a PropertyEditorRegistrar
. This interface is particularly useful when you
need to use the same set of property editors in several different situations: write a corresponding
registrar and reuse that in each case. PropertyEditorRegistrars
work in conjunction
with an interface called PropertyEditorRegistry
, an interface
that is implemented by the Spring BeanWrapper
(and
DataBinder
). PropertyEditorRegistrars
are particularly
convenient when used in conjunction with the CustomEditorConfigurer
(introduced here), which exposes a
property called setPropertyEditorRegistrars(..)
:
PropertyEditorRegistrars
added to a CustomEditorConfigurer
in this
fashion can easily be shared with DataBinder
and Spring MVC
Controllers
. Furthermore, it avoids the need for synchronization on custom
editors: a PropertyEditorRegistrar
is expected to create fresh
PropertyEditor
instances for each bean creation attempt.
Using a PropertyEditorRegistrar
is perhaps best illustrated with an
example. First off, you need to create your own PropertyEditorRegistrar
implementation:
package com.foo.editors.spring;

public final class CustomPropertyEditorRegistrar implements PropertyEditorRegistrar {

    public void registerCustomEditors(PropertyEditorRegistry registry) {
        // it is expected that new PropertyEditor instances are created
        registry.registerCustomEditor(ExoticType.class, new ExoticTypeEditor());
        // you could register as many custom property editors as are required here...
    }
}
See also the org.springframework.beans.support.ResourceEditorRegistrar
for an
example PropertyEditorRegistrar
implementation. Notice how in its
implementation of the registerCustomEditors(..)
method it creates new instances
of each property editor.
Next we configure a CustomEditorConfigurer
and inject an
instance of our CustomPropertyEditorRegistrar
into it:
<bean class="org.springframework.beans.factory.config.CustomEditorConfigurer"> <property name="propertyEditorRegistrars"> <list> <ref bean="customPropertyEditorRegistrar"/> </list> </property> </bean> <bean id="customPropertyEditorRegistrar" class="com.foo.editors.spring.CustomPropertyEditorRegistrar"/>
Finally, and in a bit of a departure from the focus of this chapter, for those of you using
Spring's MVC web framework, using PropertyEditorRegistrars
in conjunction with data-binding Controllers
(such as
SimpleFormController
) can be very convenient. Find below an example of using a
PropertyEditorRegistrar
in the implementation of an initBinder(..)
method:
public final class RegisterUserController extends SimpleFormController {

    private final PropertyEditorRegistrar customPropertyEditorRegistrar;

    public RegisterUserController(PropertyEditorRegistrar propertyEditorRegistrar) {
        this.customPropertyEditorRegistrar = propertyEditorRegistrar;
    }

    protected void initBinder(HttpServletRequest request, ServletRequestDataBinder binder) throws Exception {
        this.customPropertyEditorRegistrar.registerCustomEditors(binder);
    }

    // other methods to do with registering a User
}
This style of PropertyEditor
registration can lead to concise code (the
implementation of initBinder(..)
is just one line long!), and allows common
PropertyEditor
registration code to be encapsulated in a class and then
shared amongst as many Controllers
as needed.
The Spring Expression Language (SpEL for short) is a powerful expression language that supports querying and manipulating an object graph at runtime. The language syntax is similar to Unified EL but offers additional features, most notably method invocation and basic string templating functionality.
While there are several other Java expression languages available, OGNL, MVEL, and JBoss EL, to name a few, the Spring Expression Language was created to provide the Spring community with a single well supported expression language that can be used across all the products in the Spring portfolio. Its language features are driven by the requirements of the projects in the Spring portfolio, including tooling requirements for code completion support within the Eclipse-based SpringSource Tool Suite. That said, SpEL is based on a technology-agnostic API allowing other expression language implementations to be integrated should the need arise.
While SpEL serves as the foundation for expression evaluation within the Spring portfolio, it is not directly tied to Spring and can be used independently. In order to be self contained, many of the examples in this chapter use SpEL as if it were an independent expression language. This requires creating a few bootstrapping infrastructure classes such as the parser. Most Spring users will not need to deal with this infrastructure and will instead only author expression strings for evaluation. An example of this typical use is the integration of SpEL into creating XML or annotation-based bean definitions as shown in the section Expression support for defining bean definitions.
This chapter covers the features of the expression language, its API, and its language syntax. In several places an Inventor and Inventor's Society class are used as the target objects for expression evaluation. These class declarations and the data used to populate them are listed at the end of the chapter.
The expression language supports the following functionality:
Literal expressions
Boolean and relational operators
Regular expressions
Class expressions
Accessing properties, arrays, lists, maps
Method invocation
Relational operators
Assignment
Calling constructors
Ternary operator
Variables
User defined functions
Collection projection
Collection selection
Templated expressions
This section introduces the simple use of SpEL interfaces and its expression language. The complete language reference can be found in the section Language Reference
The following code introduces the SpEL API to evaluate the literal string expression 'Hello World'
ExpressionParser parser = new SpelAntlrExpressionParser(); Expression exp = parser.parseExpression("'Hello World'"); String message = (String) exp.getValue();
The value of the message variable is simply 'Hello World'.
The SpEL classes and interfaces you are most likely to use are located in the packages org.springframework.expression and its subpackages spel.antlr and spel.support.
The expression language is based on a grammar and uses ANTLR to
construct the lexer and parser. The interface
ExpressionParser
is responsible for parsing
an expression string. In this example the expression string is a string
literal denoted by the surrounding single quotes. The interface
Expression
is responsible for evaluating
the previously defined expression string. There are two exceptions that
can be thrown, ParseException
and
EvaluationException
when calling
'parser.parseExpression
' and
'exp.getValue
' respectively.
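As a sketch (the expression string here is arbitrary), both exceptions can be caught around the two calls:

ExpressionParser parser = new SpelAntlrExpressionParser();
try {
    Expression exp = parser.parseExpression("'Hello World'.toUpperCase()");
    String value = (String) exp.getValue();
} catch (ParseException ex) {
    // the expression string could not be parsed
    System.err.println("Malformed expression: " + ex.getMessage());
} catch (EvaluationException ex) {
    // the parsed expression could not be evaluated
    System.err.println("Evaluation failed: " + ex.getMessage());
}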
SpEL supports a wide range of features, such as calling methods, accessing properties, and calling constructors.
As an example of method invocation, we call the 'concat' method on the string literal
ExpressionParser parser = new SpelAntlrExpressionParser(); Expression exp = parser.parseExpression("'Hello World'.concat('!')"); String message = (String) exp.getValue();
The value of message is now 'Hello World!'.
As an example of calling a JavaBean property, the String property 'Bytes' can be called as shown below
ExpressionParser parser = new SpelAntlrExpressionParser();
Expression exp = parser.parseExpression("'Hello World'.bytes"); // invokes 'getBytes()'
byte[] bytes = (byte[]) exp.getValue();
SpEL also supports nested properties using standard 'dot' notation, i.e. prop1.prop2.prop3 and the setting of property values
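For instance, against an evaluation context whose root object exposes a nested placeOfBirth property, both reading and writing go through the same expression. The tesla instance and the property names here follow the Inventor class introduced later in this chapter, and the city value is just an example:

StandardEvaluationContext context = new StandardEvaluationContext();
context.setRootObject(tesla);

// reading a nested property
String city = parser.parseExpression("placeOfBirth.city").getValue(context, String.class);

// setting a nested property value
parser.parseExpression("placeOfBirth.city").setValue(context, "Smiljan");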
Public fields may also be accessed
ExpressionParser parser = new SpelAntlrExpressionParser();
Expression exp = parser.parseExpression("'Hello World'.bytes.length"); // invokes 'getBytes().length'
int length = (Integer) exp.getValue();
The String's constructor can be called instead of using a string literal
ExpressionParser parser = new SpelAntlrExpressionParser(); Expression exp = parser.parseExpression("new String('hello world').toUpperCase()"); String message = exp.getValue(String.class);
Note the use of the generic method public <T> T
getValue(Class<T> desiredResultType)
. Using this method
removes the need to cast the value of the expression to the desired result
type. An EvaluationException
will be thrown if the
value cannot be cast to the type T
or converted using
the registered type converter.
The more common usage of SpEL is to provide an expression string that
is evaluated against a specific object instance. In the following example
we retrieve the Name
property from an instance of the
Inventor class.
// Create and set a calendar
GregorianCalendar c = new GregorianCalendar();
c.set(1856, 7, 9);

// The constructor arguments are name, birthday, and nationality.
Inventor tesla = new Inventor("Nikola Tesla", c.getTime(), "Serbian");

ExpressionParser parser = new SpelAntlrExpressionParser();
Expression exp = parser.parseExpression("name");

EvaluationContext context = new StandardEvaluationContext();
context.setRootObject(tesla);

String name = (String) exp.getValue(context);
In the last line, the value of the string variable 'name' will be set to "Nikola Tesla". The class StandardEvaluationContext is where you can specify which object the "Name" property will be evaluated against. You can reuse the same expression over and over again and set a new root object on the evaluation context. Expressions are evaluated using reflection.
Note: In standalone usage of SpEL you will need to create the parser as well as provide an evaluation context. However, more common usage is to provide only the SpEL expression string as part of a configuration file, for example for Spring bean or Spring Web Flow definitions. In this case, the parser, evaluation context, root object and any predefined variables will be set up for you implicitly.
As a final introductory example, the use of a boolean operator is shown using the Inventor object in the previous example
Expression exp = parser.parseExpression("name == 'Nikola Tesla'"); boolean result = exp.getValue(context, Boolean.class); // evaluates to true
The interface EvaluationContext
is
used when evaluating an expression to resolve properties, methods,
fields, and to help perform type conversion. The out-of-the-box
implementation, StandardEvaluationContext
, uses
reflection to manipulate the object, caching
java.lang.reflect's Method
,
Field
, and Constructor
instances for increased performance.
The StandardEvaluationContext
is where you
specify the root object to evaluate against via the method
setRootObject
. You can also specify variables
and functions that will be used in the expression using the methods
setVariable
and
registerFunction
. The use of variables and
functions are described in the language reference sections Variables and Functions. The
StandardEvaluationContext
is also where you can
register custom ConstructorResolver
s,
MethodResolver
s, and
PropertyAccessor
s to extend how SpEL evaluates
expressions. Please refer to the JavaDoc of these classes for more
details.
By default SpEL uses the conversion service available in
Spring core (org.springframework.core.convert.ConversionService
).
This conversion service comes with many converters built in for common conversions
but is also fully extensible so custom conversions between
types can be added. Additionally it has the key capability that it
is generics aware. This means that when working with generic types in
expressions, SpEL will attempt conversions to maintain type correctness for any
objects it encounters.
What does this mean in practice? Suppose assignment, using setValue()
,
is being used to set a List
property. The type of the property is
actually List<Boolean>
. SpEL will recognize that the elements
of the list need to be converted to Boolean
before being placed in it.
A simple example:
class Simple {
    public List<Boolean> booleanList = new ArrayList<Boolean>();
}

Simple simple = new Simple();
simple.booleanList.add(true);

StandardEvaluationContext simpleContext = new StandardEvaluationContext(simple);

// false is passed in here as a string. SpEL and the conversion service will
// correctly recognize that it needs to be a Boolean and convert it
parser.parseExpression("booleanList[0]").setValue(simpleContext, "false");

// b will be false
Boolean b = simple.booleanList.get(0);
SpEL expressions can be used with XML or annotation based
configuration metadata for defining BeanDefinitions. In both cases the
syntax to define the expression is of the form #{ <expression
string> }
.
A property or constructor-arg value can be set using expressions as shown below
<bean id="numberGuess" class="org.spring.samples.NumberGuess"> <property name="randomNumber" value="#{ T(java.lang.Math).random() * 100.0 }"/> <!-- other properties --> </bean>
The variable 'systemProperties' is predefined, so you can use it in your expressions as shown below. Note that you do not have to prefix the predefined variable with the '#' symbol in this context.
<bean id="taxCalculator" class="org.spring.samples.TaxCalculator"> <property name="defaultLocale" value="#{ systemProperties['user.region'] }"/> <!-- other properties --> </bean>
You can also refer to other bean properties by name, for example
<bean id="numberGuess" class="org.spring.samples.NumberGuess"> <property name="randomNumber" value="#{ T(java.lang.Math).random() * 100.0 }"/> <!-- other properties --> </bean> <bean id="shapeGuess" class="org.spring.samples.ShapeGuess"> <property name="initialShapeSeed" value="#{ numberGuess.randomNumber }"/> <!-- other properties --> </bean>
The @Value
annotation can be placed on fields,
methods and method/constructor parameters to specify a default
value.
Here is an example to set the default value of a field variable
public static class FieldValueTestBean { @Value("#{ systemProperties['user.region'] }") private String defaultLocale; public void setDefaultLocale(String defaultLocale) { this.defaultLocale = defaultLocale; } public String getDefaultLocale() { return this.defaultLocale; } }
The equivalent but on a property setter method is shown below
public static class PropertyValueTestBean { private String defaultLocale; @Value("#{ systemProperties['user.region'] }") public void setDefaultLocale(String defaultLocale) { this.defaultLocale = defaultLocale; } public String getDefaultLocale() { return this.defaultLocale; } }
Autowired methods and constructors can also use the
@Value
annotation.
public class SimpleMovieLister {

    private MovieFinder movieFinder;
    private String defaultLocale;

    @Autowired
    public void configure(MovieFinder movieFinder,
            @Value("#{ systemProperties['user.region'] }") String defaultLocale) {
        this.movieFinder = movieFinder;
        this.defaultLocale = defaultLocale;
    }

    // ...
}
public class MovieRecommender {

    private String defaultLocale;
    private CustomerPreferenceDao customerPreferenceDao;

    @Autowired
    public MovieRecommender(CustomerPreferenceDao customerPreferenceDao,
            @Value("#{ systemProperties['user.country'] }") String defaultLocale) {
        this.customerPreferenceDao = customerPreferenceDao;
        this.defaultLocale = defaultLocale;
    }

    // ...
}
The types of literal expressions supported are strings, dates, numeric values (int, real, and hex), boolean and null. Strings are delimited by single quotes. To put a single quote itself in a string use the backslash character. The following listing shows simple usage of literals. Typically they would not be used in isolation like this, but as part of a more complex expression, for example using a literal on one side of a logical comparison operator.
ExpressionParser parser = new SpelAntlrExpressionParser();

String helloWorld = (String) parser.parseExpression("'Hello World'").getValue(); // evals to "Hello World"
double avogadrosNumber = (Double) parser.parseExpression("6.0221415E+23").getValue();
int maxValue = (Integer) parser.parseExpression("0x7FFFFFFF").getValue(); // evals to 2147483647
boolean trueValue = (Boolean) parser.parseExpression("true").getValue();
Object nullValue = parser.parseExpression("null").getValue();
Numbers support the use of the negative sign, exponential notation, and decimal points. By default real numbers are parsed using Double.parseDouble().
Navigating with property references is easy: just use a period to indicate a nested property value. The instances of the Inventor class, pupin and tesla, were populated with data listed in the section Classes used in the examples. To navigate "down" and get Tesla's year of birth and Pupin's city of birth, the following expressions are used
int year = (Integer) parser.parseExpression("Birthdate.Year + 1900").getValue(context); // 1856
String city = (String) parser.parseExpression("placeOfBirth.City").getValue(context);
Case insensitivity is allowed for the first letter of property names. The contents of arrays and lists are obtained using square bracket notation.
ExpressionParser parser = new SpelAntlrExpressionParser();

// Inventions Array
StandardEvaluationContext teslaContext = new StandardEvaluationContext();
teslaContext.setRootObject(tesla);

// evaluates to "Induction motor"
String invention = parser.parseExpression("inventions[3]").getValue(teslaContext, String.class);

// Members List
StandardEvaluationContext societyContext = new StandardEvaluationContext();
societyContext.setRootObject(ieee);

// evaluates to "Nikola Tesla"
String name = parser.parseExpression("Members[0].Name").getValue(societyContext, String.class);

// List and Array navigation
// evaluates to "Wireless communication"
invention = parser.parseExpression("Members[0].Inventions[6]").getValue(societyContext, String.class);
The contents of maps are obtained by specifying the literal key value within the brackets. In this case, because keys for the Officers map are strings, we can specify a string literal.
// Officer's Dictionary
Inventor pupin = parser.parseExpression("Officers['president']").getValue(societyContext, Inventor.class);

// evaluates to "Idvor"
String city = parser.parseExpression("Officers['president'].PlaceOfBirth.City").getValue(societyContext, String.class);

// setting values
parser.parseExpression("Officers['advisors'][0].PlaceOfBirth.Country").setValue(societyContext, "Croatia");
Methods are invoked using typical Java programming syntax. You may also invoke methods on literals. Varargs are also supported.
// string literal, evaluates to "bc"
String c = parser.parseExpression("'abc'.substring(2, 3)").getValue(String.class);

// evaluates to true
boolean isMember = parser.parseExpression("isMember('Mihajlo Pupin')").getValue(societyContext, Boolean.class);
The relational operators (equal, not equal, less than, less than or equal, greater than, and greater than or equal) are supported using standard operator notation.
// evaluates to true
boolean trueValue = parser.parseExpression("2 == 2").getValue(Boolean.class);

// evaluates to false
boolean falseValue = parser.parseExpression("2 < -5.0").getValue(Boolean.class);

// evaluates to true
boolean trueValue = parser.parseExpression("'black' < 'block'").getValue(Boolean.class);
In addition to standard relational operators SpEL supports the 'instanceof' and regular expression based 'matches' operator.
// evaluates to false
boolean falseValue = parser.parseExpression("'xyz' instanceof T(int)").getValue(Boolean.class);

// evaluates to true
boolean trueValue = parser.parseExpression("'5.00' matches '^-?\\d+(\\.\\d{2})?$'").getValue(Boolean.class);

// evaluates to false
boolean falseValue = parser.parseExpression("'5.0067' matches '^-?\\d+(\\.\\d{2})?$'").getValue(Boolean.class);
The logical operators that are supported are and, or, and not. Their use is demonstrated below
// -- AND --

// evaluates to false
boolean falseValue = parser.parseExpression("true and false").getValue(Boolean.class);

// evaluates to true
String expression = "isMember('Nikola Tesla') and isMember('Mihajlo Pupin')";
boolean trueValue = parser.parseExpression(expression).getValue(societyContext, Boolean.class);

// -- OR --

// evaluates to true
boolean trueValue = parser.parseExpression("true or false").getValue(Boolean.class);

// evaluates to true
String expression = "isMember('Nikola Tesla') or isMember('Albert Einstien')";
boolean trueValue = parser.parseExpression(expression).getValue(societyContext, Boolean.class);

// -- NOT --

// evaluates to false
boolean falseValue = parser.parseExpression("!true").getValue(Boolean.class);

// -- AND and NOT --
String expression = "isMember('Nikola Tesla') and !isMember('Mihajlo Pupin')";
boolean falseValue = parser.parseExpression(expression).getValue(societyContext, Boolean.class);
The addition operator can be used on numbers, strings and dates. Subtraction can be used on numbers and dates. Multiplication and division can be used only on numbers. Other mathematical operators supported are modulus (%) and exponential power (^). Standard operator precedence is enforced. These operators are demonstrated below.
// Addition
int two = parser.parseExpression("1 + 1").getValue(Integer.class);  // 2
String testString = parser.parseExpression("'test' + ' ' + 'string'").getValue(String.class);  // 'test string'

// Subtraction
int four = parser.parseExpression("1 - -3").getValue(Integer.class);  // 4
double d = parser.parseExpression("1000.00 - 1e4").getValue(Double.class);  // -9000

// Multiplication
int six = parser.parseExpression("-2 * -3").getValue(Integer.class);  // 6
double twentyFour = parser.parseExpression("2.0 * 3e0 * 4").getValue(Double.class);  // 24.0

// Division
int minusTwo = parser.parseExpression("6 / -3").getValue(Integer.class);  // -2
double one = parser.parseExpression("8.0 / 4e0 / 2").getValue(Double.class);  // 1.0

// Modulus
int three = parser.parseExpression("7 % 4").getValue(Integer.class);  // 3
int one = parser.parseExpression("8 / 5 % 2").getValue(Integer.class);  // 1

// Operator precedence
int minusTwentyOne = parser.parseExpression("1+2-3*8").getValue(Integer.class);  // -21
Setting of a property is done by using the assignment operator.
This would typically be done within a call to setValue but can also be done inside a call to getValue.
Inventor inventor = new Inventor();
StandardEvaluationContext inventorContext = new StandardEvaluationContext();
inventorContext.setRootObject(inventor);

parser.parseExpression("Name").setValue(inventorContext, "Alexandar Seovic");

// alternatively
String aleks = parser.parseExpression("Name = 'Alexandar Seovic'").getValue(inventorContext, String.class);
The special 'T' operator can be used to specify an instance of java.lang.Class (the 'type'). Static methods are invoked using this operator as well. The StandardEvaluationContext uses a TypeLocator to find types, and the StandardTypeLocator (which can be replaced) is built with an understanding of the java.lang package. This means T() references to types within java.lang do not need to be fully qualified, but all other type references must be.
Class dateClass = parser.parseExpression("T(java.util.Date)").getValue(Class.class);

Class stringClass = parser.parseExpression("T(String)").getValue(Class.class);

boolean trueValue =
    parser.parseExpression("T(java.math.RoundingMode).CEILING < T(java.math.RoundingMode).FLOOR")
          .getValue(Boolean.class);
Constructors can be invoked using the new operator. The fully qualified class name should be used for all but primitive types and String (where int, float, etc. can be used).
Inventor einstein =
    parser.parseExpression("new org.spring.samples.spel.inventor.Inventor('Albert Einstein', 'German')")
          .getValue(Inventor.class);

// create new inventor instance within the add method of List
parser.parseExpression("Members.add(new org.spring.samples.spel.inventor.Inventor('Albert Einstein', 'German'))")
      .getValue(societyContext);
Variables can be referenced in the expression using the syntax #variableName. Variables are set using the method setVariable on the StandardEvaluationContext.
Inventor tesla = new Inventor("Nikola Tesla", "Serbian");

StandardEvaluationContext context = new StandardEvaluationContext();
context.setVariable("newName", "Mike Tesla");
context.setRootObject(tesla);

parser.parseExpression("Name = #newName").getValue(context);

System.out.println(tesla.getName());  // "Mike Tesla"
The variable #this is always defined and refers to the current evaluation object (the object against which unqualified references will be resolved).
// create a list of prime integers
List<Integer> primes = new ArrayList<Integer>();
primes.addAll(Arrays.asList(2, 3, 5, 7, 11, 13, 17));

// create parser and set variable 'primes' as the list of integers
ExpressionParser parser = new SpelAntlrExpressionParser();
StandardEvaluationContext context = new StandardEvaluationContext();
context.setVariable("primes", primes);

// all prime numbers > 10 from the list (using selection ?[...])
// evaluates to [11, 13, 17]
List<Integer> primesGreaterThanTen =
    (List<Integer>) parser.parseExpression("#primes.?[#this > 10]").getValue(context);
You can extend SpEL by registering user-defined functions that can be called within the expression string. The function is registered with the StandardEvaluationContext using the method public void registerFunction(String name, Method m).
A reference to a Java Method provides the implementation of the function. For example, a utility method to reverse a string is shown below.
public abstract class StringUtils {

    public static String reverseString(String input) {
        StringBuilder backwards = new StringBuilder();
        for (int i = 0; i < input.length(); i++) {
            backwards.append(input.charAt(input.length() - 1 - i));
        }
        return backwards.toString();
    }
}
This method is then registered with the evaluation context and can be used within an expression string:
ExpressionParser parser = new SpelAntlrExpressionParser();
StandardEvaluationContext context = new StandardEvaluationContext();

context.registerFunction("reverseString",
    StringUtils.class.getDeclaredMethod("reverseString", new Class[] { String.class }));

String helloWorldReversed =
    parser.parseExpression("#reverseString('hello')").getValue(context, String.class);
You can use the ternary operator for performing if-then-else conditional logic inside the expression. A minimal example is:
String falseString = parser.parseExpression("false ? 'trueExp' : 'falseExp'").getValue(String.class);
In this case, the boolean false results in returning the string value 'falseExp'. A less artificial example is shown below.
parser.parseExpression("Name").setValue(societyContext, "IEEE"); societyContext.setVariable("queryName", "Nikola Tesla"); expression = "isMember(#queryName)? #queryName + ' is a member of the ' " + "+ Name + ' Society' : #queryName + ' is not a member of the ' + Name + ' Society'"; String queryResultString = parser.parseExpression(expression).getValue(societyContext, String.class); // queryResultString = "Nikola Tesla is a member of the IEEE Society"
Selection is a powerful expression language feature that allows you to transform a source collection into another collection by selecting from its entries.
Selection uses the syntax ?[selectionExpression]. This will filter the collection and return a new collection containing a subset of the original elements. For example, selection would allow us to easily get a list of Serbian inventors:
List<Inventor> list = (List<Inventor>) parser.parseExpression("Members.?[Nationality == 'Serbian']").getValue(societyContext);
Selection is possible upon both lists and maps. In the former case the selection criteria is evaluated against each individual list element, whilst against a map the selection criteria is evaluated against each map entry (objects of the Java type Map.Entry). Map entries have their key and value accessible as properties for use in the selection.
This expression will return a new map consisting of those elements of the original map where the entry value is less than 27.
Map newMap = (Map) parser.parseExpression("map.?[value < 27]").getValue();
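The listing above omits the surrounding setup. The following is a minimal sketch; the variable name #inventorAges and its contents are illustrative assumptions, not part of the original example.

// hypothetical map of inventor name -> age, registered as a variable on the evaluation context
Map<String, Integer> inventorAges = new HashMap<String, Integer>();
inventorAges.put("Nikola Tesla", 86);
inventorAges.put("Mihajlo Pupin", 76);
inventorAges.put("Marco Novak", 25);

StandardEvaluationContext mapContext = new StandardEvaluationContext();
mapContext.setVariable("inventorAges", inventorAges);

// selection against a map: each Map.Entry exposes 'key' and 'value' as properties
Map agesUnder27 = (Map) parser.parseExpression("#inventorAges.?[value < 27]").getValue(mapContext);
// agesUnder27 contains {Marco Novak=25}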
In addition to returning all the selected elements, it is possible to retrieve just the first or the last value. To obtain the first entry matching the selection the syntax is ^[...], whilst to obtain the last matching entry the syntax is $[...].
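For example, assuming the societyContext and Members list from the earlier examples (a minimal sketch; the selection criterion is illustrative):

// first member whose nationality is Serbian
Inventor firstSerbian = parser.parseExpression(
        "Members.^[Nationality == 'Serbian']").getValue(societyContext, Inventor.class);

// last member whose nationality is Serbian
Inventor lastSerbian = parser.parseExpression(
        "Members.$[Nationality == 'Serbian']").getValue(societyContext, Inventor.class);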
Projection allows a collection to drive the evaluation of a sub-expression and the result is a new collection. The syntax for projection is ![projectionExpression].
Most easily understood by example, suppose we have a list of inventors but want the list of cities where they were born. Effectively we want to evaluate 'placeOfBirth.city' for every entry in the inventor list. Using projection:
// returns [ 'Smiljan', 'Idvor' ]
List placesOfBirth = (List) parser.parseExpression("Members.![placeOfBirth.city]").getValue(societyContext);
A map can also be used to drive projection and in this case the projection expression is evaluated against each entry in the map (represented as a Java Map.Entry). The result of a projection across a map is a list consisting of the evaluation of the projection expression against each map entry.
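For example, projecting across the Officers map from the earlier examples (a sketch; evaluating 'value.Name' against each Map.Entry is an illustrative assumption):

// evaluates the projection expression against each Map.Entry of the Officers map;
// the result is a List with one element per map entry
List officerNames = (List) parser.parseExpression("Officers.![value.Name]").getValue(societyContext);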
Expression templates allow mixing literal text with one or more evaluation blocks. Each evaluation block is delimited with prefix and suffix characters that you can define; a common choice is to use ${} as the delimiters. For example,
String randomPhrase =
    parser.parseExpression("random number is ${T(java.lang.Math).random()}",
                           new TemplatedParserContext()).getValue(String.class);

// evaluates to "random number is 0.7038186818312008"
The string is evaluated by concatenating the literal text 'random number is' with the result of evaluating the expression inside the ${} delimiters, in this case the result of calling that random() method. The second argument to the parseExpression() method is of type ParserContext. The ParserContext interface is used to influence how the expression is parsed in order to support the expression templating functionality. The definition of TemplatedParserContext is shown below.
public class TemplatedParserContext implements ParserContext {

    public String getExpressionPrefix() {
        return "${";
    }

    public String getExpressionSuffix() {
        return "}";
    }

    public boolean isTemplate() {
        return true;
    }
}
Inventor.java
package org.spring.samples.spel.inventor;

import java.util.Date;
import java.util.GregorianCalendar;

public class Inventor {

    private String name;
    private String nationality;
    private String[] inventions;
    private Date birthdate;
    private PlaceOfBirth placeOfBirth;

    public Inventor(String name, String nationality) {
        GregorianCalendar c = new GregorianCalendar();
        this.name = name;
        this.nationality = nationality;
        this.birthdate = c.getTime();
    }

    public Inventor(String name, Date birthdate, String nationality) {
        this.name = name;
        this.nationality = nationality;
        this.birthdate = birthdate;
    }

    public Inventor() {
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getNationality() {
        return nationality;
    }

    public void setNationality(String nationality) {
        this.nationality = nationality;
    }

    public Date getBirthdate() {
        return birthdate;
    }

    public void setBirthdate(Date birthdate) {
        this.birthdate = birthdate;
    }

    public PlaceOfBirth getPlaceOfBirth() {
        return placeOfBirth;
    }

    public void setPlaceOfBirth(PlaceOfBirth placeOfBirth) {
        this.placeOfBirth = placeOfBirth;
    }

    public void setInventions(String[] inventions) {
        this.inventions = inventions;
    }

    public String[] getInventions() {
        return inventions;
    }
}
PlaceOfBirth.java
package org.spring.samples.spel.inventor;

public class PlaceOfBirth {

    private String city;
    private String country;

    public PlaceOfBirth(String city) {
        this.city = city;
    }

    public PlaceOfBirth(String city, String country) {
        this(city);
        this.country = country;
    }

    public String getCity() {
        return city;
    }

    public void setCity(String s) {
        this.city = s;
    }

    public String getCountry() {
        return country;
    }

    public void setCountry(String country) {
        this.country = country;
    }
}
Society.java
package org.spring.samples.spel.inventor;

import java.util.*;

public class Society {

    private String name;

    public static String Advisors = "advisors";
    public static String President = "president";

    private List<Inventor> members = new ArrayList<Inventor>();
    private Map officers = new HashMap();

    public List getMembers() {
        return members;
    }

    public Map getOfficers() {
        return officers;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public boolean isMember(String name) {
        boolean found = false;
        for (Inventor inventor : members) {
            if (inventor.getName().equals(name)) {
                found = true;
                break;
            }
        }
        return found;
    }
}
Aspect-Oriented Programming (AOP) complements Object-Oriented Programming (OOP) by providing another way of thinking about program structure. The key unit of modularity in OOP is the class, whereas in AOP the unit of modularity is the aspect. Aspects enable the modularization of concerns such as transaction management that cut across multiple types and objects. (Such concerns are often termed crosscutting concerns in AOP literature.)
One of the key components of Spring is the AOP framework. While the Spring IoC container does not depend on AOP, meaning you do not need to use AOP if you don't want to, AOP complements Spring IoC to provide a very capable middleware solution.
AOP is used in the Spring Framework to...
... provide declarative enterprise services, especially as a replacement for EJB declarative services. The most important such service is declarative transaction management.
... allow users to implement custom aspects, complementing their use of OOP with AOP.
If you are interested only in generic declarative services
or other pre-packaged declarative middleware services such as pooling, you
do not need to work directly with Spring AOP, and can skip most of this
chapter.
Let us begin by defining some central AOP concepts and terminology. These terms are not Spring-specific... unfortunately, AOP terminology is not particularly intuitive; however, it would be even more confusing if Spring used its own terminology.
Aspect: a modularization of a concern that cuts across multiple classes. Transaction management is a good example of a crosscutting concern in J2EE applications. In Spring AOP, aspects are implemented using regular classes (the schema-based approach) or regular classes annotated with the @Aspect annotation (the @AspectJ style).
Join point: a point during the execution of a program, such as the execution of a method or the handling of an exception. In Spring AOP, a join point always represents a method execution.
Advice: action taken by an aspect at a particular join point. Different types of advice include "around," "before" and "after" advice. (Advice types are discussed below.) Many AOP frameworks, including Spring, model an advice as an interceptor, maintaining a chain of interceptors around the join point.
Pointcut: a predicate that matches join points. Advice is associated with a pointcut expression and runs at any join point matched by the pointcut (for example, the execution of a method with a certain name). The concept of join points as matched by pointcut expressions is central to AOP, and Spring uses the AspectJ pointcut expression language by default.
Introduction: declaring additional methods or fields on behalf of a type. Spring AOP allows you to introduce new interfaces (and a corresponding implementation) to any advised object. For example, you could use an introduction to make a bean implement an IsModified interface, to simplify caching. (An introduction is known as an inter-type declaration in the AspectJ community.)
Target object: object being advised by one or more aspects. Also referred to as the advised object. Since Spring AOP is implemented using runtime proxies, this object will always be a proxied object.
AOP proxy: an object created by the AOP framework in order to implement the aspect contracts (advise method executions and so on). In the Spring Framework, an AOP proxy will be a JDK dynamic proxy or a CGLIB proxy.
Weaving: linking aspects with other application types or objects to create an advised object. This can be done at compile time (using the AspectJ compiler, for example), load time, or at runtime. Spring AOP, like other pure Java AOP frameworks, performs weaving at runtime.
Types of advice:
Before advice: Advice that executes before a join point, but which does not have the ability to prevent execution flow proceeding to the join point (unless it throws an exception).
After returning advice: Advice to be executed after a join point completes normally: for example, if a method returns without throwing an exception.
After throwing advice: Advice to be executed if a method exits by throwing an exception.
After (finally) advice: Advice to be executed regardless of the means by which a join point exits (normal or exceptional return).
Around advice: Advice that surrounds a join point such as a method invocation. This is the most powerful kind of advice. Around advice can perform custom behavior before and after the method invocation. It is also responsible for choosing whether to proceed to the join point or to shortcut the advised method execution by returning its own return value or throwing an exception.
Around advice is the most general kind of advice. Since Spring
AOP, like AspectJ, provides a full range of advice types, we recommend
that you use the least powerful advice type that can implement the
required behavior. For example, if you need only to update a cache with
the return value of a method, you are better off implementing an after
returning advice than an around advice, although an around advice can
accomplish the same thing. Using the most specific advice type provides
a simpler programming model with less potential for errors. For example,
you do not need to invoke the proceed()
method
on the JoinPoint
used for around advice,
and hence cannot fail to invoke it.
In Spring 2.0, all advice parameters are statically typed, so that you work with advice parameters of the appropriate type (the type of the return value from a method execution for example) rather than Object arrays.
The concept of join points, matched by pointcuts, is the key to AOP which distinguishes it from older technologies offering only interception. Pointcuts enable advice to be targeted independently of the Object-Oriented hierarchy. For example, an around advice providing declarative transaction management can be applied to a set of methods spanning multiple objects (such as all business operations in the service layer).
Spring AOP is implemented in pure Java. There is no need for a special compilation process. Spring AOP does not need to control the class loader hierarchy, and is thus suitable for use in a J2EE web container or application server.
Spring AOP currently supports only method execution join points (advising the execution of methods on Spring beans). Field interception is not implemented, although support for field interception could be added without breaking the core Spring AOP APIs. If you need to advise field access and update join points, consider a language such as AspectJ.
Spring AOP's approach to AOP differs from that of most other AOP frameworks. The aim is not to provide the most complete AOP implementation (although Spring AOP is quite capable); it is rather to provide a close integration between AOP implementation and Spring IoC to help solve common problems in enterprise applications.
Thus, for example, the Spring Framework's AOP functionality is normally used in conjunction with the Spring IoC container. Aspects are configured using normal bean definition syntax (although this allows powerful "autoproxying" capabilities): this is a crucial difference from other AOP implementations. There are some things you cannot do easily or efficiently with Spring AOP, such as advise very fine-grained objects (such as domain objects typically): AspectJ is the best choice in such cases. However, our experience is that Spring AOP provides an excellent solution to most problems in J2EE applications that are amenable to AOP.
Spring AOP will never strive to compete with AspectJ to provide a comprehensive AOP solution. We believe that both proxy-based frameworks like Spring AOP and full-blown frameworks such as AspectJ are valuable, and that they are complementary, rather than in competition. Spring 2.0 seamlessly integrates Spring AOP and IoC with AspectJ, to enable all uses of AOP to be catered for within a consistent Spring-based application architecture. This integration does not affect the Spring AOP API or the AOP Alliance API: Spring AOP remains backward-compatible. See the following chapter for a discussion of the Spring AOP APIs.
Note
One of the central tenets of the Spring Framework is that of non-invasiveness; this is the idea that you should not be forced to introduce framework-specific classes and interfaces into your business/domain model. However, in some places the Spring Framework does give you the option to introduce Spring Framework-specific dependencies into your codebase: the rationale in giving you such options is that in certain scenarios it might be just plain easier to read or code some specific piece of functionality in such a way. The Spring Framework (almost) always offers you the choice though: you have the freedom to make an informed decision as to which option best suits your particular use case or scenario. One such choice that is relevant to this chapter is that of which AOP framework (and which AOP style) to choose. You have the choice of AspectJ and/or Spring AOP, and you also have the choice of either the @AspectJ annotation-style approach or the Spring XML configuration-style approach. The fact that this chapter chooses to introduce the @AspectJ-style approach first should not be taken as an indication that the Spring team favors the @AspectJ annotation-style approach over the Spring XML configuration-style. See Section 8.4, “Choosing which AOP declaration style to use” for a fuller discussion of the whys and wherefores of each style.
Spring AOP defaults to using standard J2SE dynamic proxies for AOP proxies. This enables any interface (or set of interfaces) to be proxied.
Spring AOP can also use CGLIB proxies. This is necessary to proxy classes, rather than interfaces. CGLIB is used by default if a business object does not implement an interface. As it is good practice to program to interfaces rather than classes, business classes normally will implement one or more business interfaces. It is possible to force the use of CGLIB, in those (hopefully rare) cases where you need to advise a method that is not declared on an interface, or where you need to pass a proxied object to a method as a concrete type.
It is important to grasp the fact that Spring AOP is proxy-based. See the section entitled Section 8.6.1, “Understanding AOP proxies” for a thorough examination of exactly what this implementation detail actually means.
@AspectJ refers to a style of declaring aspects as regular Java classes annotated with Java 5 annotations. The @AspectJ style was introduced by the AspectJ project as part of the AspectJ 5 release. Spring 2.0 interprets the same annotations as AspectJ 5, using a library supplied by AspectJ for pointcut parsing and matching. The AOP runtime is still pure Spring AOP though, and there is no dependency on the AspectJ compiler or weaver.
Using the AspectJ compiler and weaver enables use of the
full AspectJ language, and is discussed in Section 8.8, “Using AspectJ with Spring applications”.
To use @AspectJ aspects in a Spring configuration you need to enable Spring support for configuring Spring AOP based on @AspectJ aspects, and autoproxying beans based on whether or not they are advised by those aspects. By autoproxying we mean that if Spring determines that a bean is advised by one or more aspects, it will automatically generate a proxy for that bean to intercept method invocations and ensure that advice is executed as needed.
The @AspectJ support is enabled by including the following element inside your Spring configuration:
<aop:aspectj-autoproxy/>
This assumes that you are using schema support as described in Appendix A, XML Schema-based configuration. See Section A.2.7, “The aop schema” for how to import the tags in the aop namespace.
If you are using the DTD, it is still possible to enable @AspectJ support by adding the following definition to your application context:
<bean class="org.springframework.aop.aspectj.annotation.AnnotationAwareAspectJAutoProxyCreator" />
You will also need two AspectJ libraries on the classpath of your application: aspectjweaver.jar and aspectjrt.jar. These libraries are available in the 'lib' directory of an AspectJ installation (version 1.5.1 or later required), or in the 'lib/aspectj' directory of the Spring-with-dependencies distribution.
With the @AspectJ support enabled, any bean defined in your application context with a class that is an @AspectJ aspect (has the @Aspect annotation) will be automatically detected by Spring and used to configure Spring AOP. The following example shows the minimal definition required for a not-very-useful aspect:

A regular bean definition in the application context, pointing to a bean class that has the @Aspect annotation:
<bean id="myAspect" class="org.xyz.NotVeryUsefulAspect"> <!-- configure properties of aspect here as normal --> </bean>
And the NotVeryUsefulAspect class definition, annotated with the org.aspectj.lang.annotation.Aspect annotation:
package org.xyz;

import org.aspectj.lang.annotation.Aspect;

@Aspect
public class NotVeryUsefulAspect {

}
Aspects (classes annotated with @Aspect) may have methods and fields just like any other class. They may also contain pointcut, advice, and introduction (inter-type) declarations.
Advising aspects
In Spring AOP, it is not possible to have aspects themselves be the target of advice from other aspects. The @Aspect annotation on a class marks it as an aspect, and hence excludes it from auto-proxying.
Recall that pointcuts determine join points of interest, and thus enable us to control when advice executes. Spring AOP only supports method execution join points for Spring beans, so you can think of a pointcut as matching the execution of methods on Spring beans. A pointcut declaration has two parts: a signature comprising a name and any parameters, and a pointcut expression that determines exactly which method executions we are interested in. In the @AspectJ annotation-style of AOP, a pointcut signature is provided by a regular method definition, and the pointcut expression is indicated using the @Pointcut annotation (the method serving as the pointcut signature must have a void return type).
An example will help make this distinction between a pointcut signature and a pointcut expression clear. The following example defines a pointcut named 'anyOldTransfer' that will match the execution of any method named 'transfer':
@Pointcut("execution(* transfer(..))")// the pointcut expression private void anyOldTransfer() {}// the pointcut signature
The pointcut expression that forms the value of the @Pointcut annotation is a regular AspectJ 5 pointcut expression. For a full discussion of AspectJ's pointcut language, see the AspectJ Programming Guide (and for Java 5 based extensions, the AspectJ 5 Developers Notebook) or one of the books on AspectJ such as “Eclipse AspectJ” by Colyer et al. or “AspectJ in Action” by Ramnivas Laddad.
Spring AOP supports the following AspectJ pointcut designators (PCD) for use in pointcut expressions:
execution - for matching method execution join points, this is the primary pointcut designator you will use when working with Spring AOP
within - limits matching to join points within certain types (simply the execution of a method declared within a matching type when using Spring AOP)
this - limits matching to join points (the execution of methods when using Spring AOP) where the bean reference (Spring AOP proxy) is an instance of the given type
target - limits matching to join points (the execution of methods when using Spring AOP) where the target object (application object being proxied) is an instance of the given type
args - limits matching to join points (the execution of methods when using Spring AOP) where the arguments are instances of the given types
@target - limits matching to join points (the execution of methods when using Spring AOP) where the class of the executing object has an annotation of the given type

@args - limits matching to join points (the execution of methods when using Spring AOP) where the runtime type of the actual arguments passed have annotations of the given type(s)

@within - limits matching to join points within types that have the given annotation (the execution of methods declared in types with the given annotation when using Spring AOP)
@annotation - limits matching to join points where the subject of the join point (method being executed in Spring AOP) has the given annotation
Because Spring AOP limits matching to only method execution join points, the discussion of the pointcut designators above gives a narrower definition than you will find in the AspectJ programming guide. In addition, AspectJ itself has type-based semantics and at an execution join point both 'this' and 'target' refer to the same object - the object executing the method. Spring AOP is a proxy-based system and differentiates between the proxy object itself (bound to 'this') and the target object behind the proxy (bound to 'target').
Note
Due to the proxy-based nature of Spring's AOP framework, protected methods are by definition not intercepted, neither for JDK proxies (where this isn't applicable) nor for CGLIB proxies (where this is technically possible but not recommendable for AOP purposes). As a consequence, any given pointcut will be matched against public methods only! If your interception needs include protected/private methods or even constructors, consider the use of Spring-driven native AspectJ weaving instead of Spring's proxy-based AOP framework. This constitutes a different mode of AOP usage with different characteristics, so be sure to make yourself familiar with weaving first before making a decision.
Spring AOP also supports an additional PCD named 'bean'. This PCD allows you to limit the matching of join points to a particular named Spring bean, or to a set of named Spring beans (when using wildcards). The 'bean' PCD has the following form:

bean(idOrNameOfBean)
The 'idOrNameOfBean' token can be the name of any Spring bean: limited wildcard support using the '*' character is provided, so if you establish some naming conventions for your Spring beans you can quite easily write a 'bean' PCD expression to pick them out. As is the case with other pointcut designators, the 'bean' PCD can be &&'ed, ||'ed, and ! (negated) too.
Note
Please note that the 'bean' PCD is only supported in Spring AOP, and not in native AspectJ weaving. It is a Spring-specific extension to the standard PCDs that AspectJ defines.
Pointcut expressions can be combined using '&&', '||' and '!'. It is also possible to refer to pointcut expressions by name. The following example shows three pointcut expressions: anyPublicOperation (which matches if a method execution join point represents the execution of any public method); inTrading (which matches if a method execution is in the trading module), and tradingOperation (which matches if a method execution represents any public method in the trading module).
@Pointcut("execution(public * *(..))") private void anyPublicOperation() {} @Pointcut("within(com.xyz.someapp.trading..*)") private void inTrading() {} @Pointcut("anyPublicOperation() && inTrading()") private void tradingOperation() {}
It is a best practice to build more complex pointcut expressions out of smaller named components as shown above. When referring to pointcuts by name, normal Java visibility rules apply (you can see private pointcuts in the same type, protected pointcuts in the hierarchy, public pointcuts anywhere and so on). Visibility does not affect pointcut matching.
When working with enterprise applications, you often want to refer to modules of the application and particular sets of operations from within several aspects. We recommend defining a "SystemArchitecture" aspect that captures common pointcut expressions for this purpose. A typical such aspect would look as follows:
package com.xyz.someapp;

import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;

@Aspect
public class SystemArchitecture {

    /**
     * A join point is in the web layer if the method is defined
     * in a type in the com.xyz.someapp.web package or any sub-package
     * under that.
     */
    @Pointcut("within(com.xyz.someapp.web..*)")
    public void inWebLayer() {}

    /**
     * A join point is in the service layer if the method is defined
     * in a type in the com.xyz.someapp.service package or any sub-package
     * under that.
     */
    @Pointcut("within(com.xyz.someapp.service..*)")
    public void inServiceLayer() {}

    /**
     * A join point is in the data access layer if the method is defined
     * in a type in the com.xyz.someapp.dao package or any sub-package
     * under that.
     */
    @Pointcut("within(com.xyz.someapp.dao..*)")
    public void inDataAccessLayer() {}

    /**
     * A business service is the execution of any method defined on a service
     * interface. This definition assumes that interfaces are placed in the
     * "service" package, and that implementation types are in sub-packages.
     *
     * If you group service interfaces by functional area (for example,
     * in packages com.xyz.someapp.abc.service and com.xyz.def.service) then
     * the pointcut expression "execution(* com.xyz.someapp..service.*.*(..))"
     * could be used instead.
     *
     * Alternatively, you can write the expression using the 'bean'
     * PCD, like so "bean(*Service)". (This assumes that you have
     * named your Spring service beans in a consistent fashion.)
     */
    @Pointcut("execution(* com.xyz.someapp.service.*.*(..))")
    public void businessService() {}

    /**
     * A data access operation is the execution of any method defined on a
     * dao interface. This definition assumes that interfaces are placed in the
     * "dao" package, and that implementation types are in sub-packages.
     */
    @Pointcut("execution(* com.xyz.someapp.dao.*.*(..))")
    public void dataAccessOperation() {}
}
The pointcuts defined in such an aspect can be referred to anywhere that you need a pointcut expression. For example, to make the service layer transactional, you could write:
<aop:config>
    <aop:advisor
        pointcut="com.xyz.someapp.SystemArchitecture.businessService()"
        advice-ref="tx-advice"/>
</aop:config>

<tx:advice id="tx-advice">
    <tx:attributes>
        <tx:method name="*" propagation="REQUIRED"/>
    </tx:attributes>
</tx:advice>
The <aop:config> and <aop:advisor> elements are discussed in Section 8.3, “Schema-based AOP support”. The transaction elements are discussed in Chapter 11, Transaction management.
Spring AOP users are likely to use the execution pointcut designator the most often. The format of an execution expression is:

execution(modifiers-pattern? ret-type-pattern declaring-type-pattern? name-pattern(param-pattern) throws-pattern?)
All parts except the returning type pattern (ret-type-pattern in the snippet above), name pattern, and parameters pattern are optional. The returning type pattern determines what the return type of the method must be in order for a join point to be matched. Most frequently you will use * as the returning type pattern, which matches any return type. A fully-qualified type name will match only when the method returns the given type. The name pattern matches the method name. You can use the * wildcard as all or part of a name pattern. The parameters pattern is slightly more complex: () matches a method that takes no parameters, whereas (..) matches any number of parameters (zero or more). The pattern (*) matches a method taking one parameter of any type, and (*,String) matches a method taking two parameters, the first of any type, the second of type String. Consult the Language Semantics section of the AspectJ Programming Guide for more information.
Some examples of common pointcut expressions are given below.
the execution of any public method:
execution(public * *(..))
the execution of any method with a name beginning with "set":
execution(* set*(..))
the execution of any method defined by the
AccountService
interface:
execution(* com.xyz.service.AccountService.*(..))
the execution of any method defined in the service package:
execution(* com.xyz.service.*.*(..))
the execution of any method defined in the service package or a sub-package:
execution(* com.xyz.service..*.*(..))
any join point (method execution only in Spring AOP) within the service package:
within(com.xyz.service.*)
any join point (method execution only in Spring AOP) within the service package or a sub-package:
within(com.xyz.service..*)
any join point (method execution only in Spring AOP) where
the proxy implements the
AccountService
interface:
this(com.xyz.service.AccountService)
'this' is more commonly used in a binding form :-
see the following section on advice for how to make the proxy
object available in the advice body.
any join point (method execution only in Spring AOP) where
the target object implements the
AccountService
interface:
target(com.xyz.service.AccountService)
'target' is more commonly used in a binding form :-
see the following section on advice for how to make the target
object available in the advice body.
any join point (method execution only in Spring AOP) which
takes a single parameter, and where the argument passed at runtime
is Serializable
:
args(java.io.Serializable)
'args' is more commonly used in a binding form :- see the following section on advice for how to make the method arguments available in the advice body.
Note that the pointcut given in this example is different to
execution(* *(java.io.Serializable))
: the args
version matches if the argument passed at runtime is Serializable,
the execution version matches if the method signature declares a
single parameter of type
Serializable
.
any join point (method execution only in Spring AOP) where
the target object has an
@Transactional
annotation:
@target(org.springframework.transaction.annotation.Transactional)
'@target' can also be used in a binding form :- see
the following section on advice for how to make the annotation
object available in the advice body.
any join point (method execution only in Spring AOP) where
the declared type of the target object has an
@Transactional
annotation:
@within(org.springframework.transaction.annotation.Transactional)
'@within' can also be used in a binding form :- see
the following section on advice for how to make the annotation
object available in the advice body.
any join point (method execution only in Spring AOP) where
the executing method has an
@Transactional
annotation:
@annotation(org.springframework.transaction.annotation.Transactional)
'@annotation' can also be used in a binding form :-
see the following section on advice for how to make the annotation
object available in the advice body.
any join point (method execution only in Spring AOP) which
takes a single parameter, and where the runtime type of the
argument passed has the @Classified
annotation:
@args(com.xyz.security.Classified)
'@args' can also be used in a binding form :- see
the following section on advice for how to make the annotation
object(s) available in the advice body.
any join point (method execution only in Spring AOP) on a
Spring bean named 'tradeService
':
bean(tradeService)
any join point (method execution only in Spring AOP) on
Spring beans having names that match the wildcard expression
'*Service
':
bean(*Service)
Advice is associated with a pointcut expression, and runs before, after, or around method executions matched by the pointcut. The pointcut expression may be either a simple reference to a named pointcut, or a pointcut expression declared in place.
Before advice is declared in an aspect using the @Before annotation:
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

@Aspect
public class BeforeExample {

    @Before("com.xyz.myapp.SystemArchitecture.dataAccessOperation()")
    public void doAccessCheck() {
        // ...
    }
}
If using an in-place pointcut expression we could rewrite the above example as:
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

@Aspect
public class BeforeExample {

    @Before("execution(* com.xyz.myapp.dao.*.*(..))")
    public void doAccessCheck() {
        // ...
    }
}
After returning advice runs when a matched method execution returns normally. It is declared using the @AfterReturning annotation:
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.AfterReturning;

@Aspect
public class AfterReturningExample {

    @AfterReturning("com.xyz.myapp.SystemArchitecture.dataAccessOperation()")
    public void doAccessCheck() {
        // ...
    }
}
Note: it is of course possible to have multiple advice declarations, and other members as well, all inside the same aspect. We're just showing a single advice declaration in these examples to focus on the issue under discussion at the time.
Sometimes you need access in the advice body to the actual value that was returned. You can use the form of @AfterReturning that binds the return value for this:
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.AfterReturning;

@Aspect
public class AfterReturningExample {

    @AfterReturning(
        pointcut="com.xyz.myapp.SystemArchitecture.dataAccessOperation()",
        returning="retVal")
    public void doAccessCheck(Object retVal) {
        // ...
    }
}
The name used in the returning attribute must correspond to the name of a parameter in the advice method. When a method execution returns, the return value will be passed to the advice method as the corresponding argument value. A returning clause also restricts matching to only those method executions that return a value of the specified type (Object in this case, which will match any return value).
Please note that it is not possible to return a totally different reference when using after-returning advice.
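If you do need to substitute a different return value, use around advice instead. The following is a minimal sketch; the pointcut reference reuses the SystemArchitecture aspect from this chapter, and the null-replacement policy is an illustrative assumption.

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class ReplaceReturnValueExample {

    // around advice may return its own value; after-returning advice cannot
    @Around("com.xyz.myapp.SystemArchitecture.dataAccessOperation()")
    public Object neverReturnNull(ProceedingJoinPoint pjp) throws Throwable {
        Object retVal = pjp.proceed();
        // hypothetical policy: replace a null result with an empty list
        return (retVal != null) ? retVal : java.util.Collections.emptyList();
    }
}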
After throwing advice runs when a matched method execution exits by throwing an exception. It is declared using the @AfterThrowing annotation:
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.AfterThrowing;

@Aspect
public class AfterThrowingExample {

    @AfterThrowing("com.xyz.myapp.SystemArchitecture.dataAccessOperation()")
    public void doRecoveryActions() {
        // ...
    }
}
Often you want the advice to run only when exceptions of a given type are thrown, and you also often need access to the thrown exception in the advice body. Use the throwing attribute to both restrict matching (if desired, use Throwable as the exception type otherwise) and bind the thrown exception to an advice parameter.
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.AfterThrowing;

@Aspect
public class AfterThrowingExample {

    @AfterThrowing(
        pointcut="com.xyz.myapp.SystemArchitecture.dataAccessOperation()",
        throwing="ex")
    public void doRecoveryActions(DataAccessException ex) {
        // ...
    }
}
The name used in the throwing attribute must correspond to the name of a parameter in the advice method. When a method execution exits by throwing an exception, the exception will be passed to the advice method as the corresponding argument value. A throwing clause also restricts matching to only those method executions that throw an exception of the specified type (DataAccessException in this case).
After (finally) advice runs however a matched method execution exits. It is declared using the @After annotation. After advice must be prepared to handle both normal and exception return conditions. It is typically used for releasing resources, etc.
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.After;

@Aspect
public class AfterFinallyExample {

    @After("com.xyz.myapp.SystemArchitecture.dataAccessOperation()")
    public void doReleaseLock() {
        // ...
    }
}
The final kind of advice is around advice. Around advice runs "around" a matched method execution. It has the opportunity to do work both before and after the method executes, and to determine when, how, and even if, the method actually gets to execute at all. Around advice is often used if you need to share state before and after a method execution in a thread-safe manner (starting and stopping a timer for example). Always use the least powerful form of advice that meets your requirements (i.e. don't use around advice if simple before advice would do).
Around advice is declared using the @Around annotation. The first parameter of the advice method must be of type ProceedingJoinPoint. Within the body of the advice, calling proceed() on the ProceedingJoinPoint causes the underlying method to execute. The proceed method may also be called passing in an Object[] - the values in the array will be used as the arguments to the method execution when it proceeds.
The behavior of proceed when called with an
Object[]
is a little different than the
behavior of proceed for around advice compiled by the AspectJ
compiler. For around advice written using the traditional AspectJ
language, the number of arguments passed to proceed must match the
number of arguments passed to the around advice (not the number of
arguments taken by the underlying join point), and the value passed to
proceed in a given argument position supplants the original value at
the join point for the entity the value was bound to (Don't worry if
this doesn't make sense right now!). The approach taken by Spring is
simpler and a better match to its proxy-based, execution only
semantics. You only need to be aware of this difference if you are
compiling @AspectJ aspects written for Spring and using proceed with
arguments with the AspectJ compiler and weaver. There is a way to
write such aspects that is 100% compatible across both Spring AOP and
AspectJ, and this is discussed in the following section on advice
parameters.
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.ProceedingJoinPoint;

@Aspect
public class AroundExample {

    @Around("com.xyz.myapp.SystemArchitecture.businessService()")
    public Object doBasicProfiling(ProceedingJoinPoint pjp) throws Throwable {
        // start stopwatch
        Object retVal = pjp.proceed();
        // stop stopwatch
        return retVal;
    }
}
The value returned by the around advice will be the return value seen by the caller of the method. A simple caching aspect for example could return a value from a cache if it has one, and invoke proceed() if it does not. Note that proceed may be invoked once, many times, or not at all within the body of the around advice, all of these are quite legal.
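As a sketch of the caching idea mentioned above (the cache key strategy and the naive in-memory map are illustrative assumptions, not a production-ready cache):

import java.util.Arrays;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class SimpleCachingAspect {

    // naive cache keyed by method signature plus argument values
    private final Map<String, Object> cache = new ConcurrentHashMap<String, Object>();

    @Around("com.xyz.myapp.SystemArchitecture.businessService()")
    public Object cacheResult(ProceedingJoinPoint pjp) throws Throwable {
        String key = pjp.getSignature().toLongString() + Arrays.toString(pjp.getArgs());
        Object cached = cache.get(key);
        if (cached != null) {
            // cache hit: proceed() is never invoked
            return cached;
        }
        Object result = pjp.proceed();
        if (result != null) {
            cache.put(key, result);
        }
        return result;
    }
}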
Spring 2.0 offers fully typed advice - meaning that you declare the parameters you need in the advice signature (as we saw for the returning and throwing examples above) rather than work with Object[] arrays all the time. We'll see how to make argument and other contextual values available to the advice body in a moment. First let's take a look at how to write generic advice that can find out about the method the advice is currently advising.
Any advice method may declare as its first parameter a parameter of type org.aspectj.lang.JoinPoint (please note that around advice is required to declare a first parameter of type ProceedingJoinPoint, which is a subclass of JoinPoint). The JoinPoint interface provides a number of useful methods such as getArgs() (returns the method arguments), getThis() (returns the proxy object), getTarget() (returns the target object), getSignature() (returns a description of the method that is being advised) and toString() (prints a useful description of the method being advised). Please do consult the Javadocs for full details.
We've already seen how to bind the returned value or exception
value (using after returning and after throwing advice). To make
argument values available to the advice body, you can use the
binding form of args
. If a parameter name is used
in place of a type name in an args expression, then the value of the
corresponding argument will be passed as the parameter value when
the advice is invoked. An example should make this clearer. Suppose
you want to advise the execution of dao operations that take an
Account object as the first parameter, and you need access to the
account in the advice body. You could write the following:
@Before("com.xyz.myapp.SystemArchitecture.dataAccessOperation() &&" + "args(account,..)") public void validateAccount(Account account) { // ... }
The args(account,..) part of the pointcut expression serves two purposes: firstly, it restricts matching to only those method executions where the method takes at least one parameter, and the argument passed to that parameter is an instance of Account; secondly, it makes the actual Account object available to the advice via the account parameter.
Another way of writing this is to declare a pointcut that "provides" the Account object value when it matches a join point, and then just refer to the named pointcut from the advice. This would look as follows:
@Pointcut("com.xyz.myapp.SystemArchitecture.dataAccessOperation() &&" + "args(account,..)") private void accountDataAccessOperation(Account account) {} @Before("accountDataAccessOperation(account)") public void validateAccount(Account account) { // ... }
The interested reader is once more referred to the AspectJ programming guide for more details.
The proxy object (this), target object (target), and annotations (@within, @target, @annotation, @args) can all be bound in a similar fashion. The following example shows how you could match the execution of methods annotated with an @Auditable annotation, and extract the audit code.
First the definition of the @Auditable annotation:
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Auditable {
    AuditCode value();
}
And then the advice that matches the execution of @Auditable methods:
@Before("com.xyz.lib.Pointcuts.anyPublicMethod() && " + "@annotation(auditable)") public void audit(Auditable auditable) { AuditCode code = auditable.value(); // ... }
The parameter binding in advice invocations relies on matching names used in pointcut expressions to declared parameter names in (advice and pointcut) method signatures. Parameter names are not available through Java reflection, so Spring AOP uses the following strategies to determine parameter names:
If the parameter names have been specified by the user explicitly, then the specified parameter names are used: both the advice and the pointcut annotations have an optional "argNames" attribute which can be used to specify the argument names of the annotated method - these argument names are available at runtime. For example:
@Before( value="com.xyz.lib.Pointcuts.anyPublicMethod() && target(bean) && @annotation(auditable)", argNames="bean,auditable") public void audit(Object bean, Auditable auditable) { AuditCode code = auditable.value(); // ... use code and bean }
If the first parameter is of the JoinPoint, ProceedingJoinPoint, or JoinPoint.StaticPart type, you may leave out the name of the parameter from the value of the "argNames" attribute. For example, if you modify the preceding advice to receive the join point object, the "argNames" attribute need not include it:
@Before( value="com.xyz.lib.Pointcuts.anyPublicMethod() && target(bean) && @annotation(auditable)", argNames="bean,auditable") public void audit(JoinPoint jp, Object bean, Auditable auditable) { AuditCode code = auditable.value(); // ... use code, bean, and jp }
The special treatment given to the first parameter of the JoinPoint, ProceedingJoinPoint, and JoinPoint.StaticPart types is particularly convenient for advice that do not collect any other join point context. In such situations, you may simply omit the "argNames" attribute. For example, the following advice need not declare the "argNames" attribute:
@Before( "com.xyz.lib.Pointcuts.anyPublicMethod()") public void audit(JoinPoint jp) { // ... use jp }
Using the 'argNames'
attribute is a
little clumsy, so if the 'argNames'
attribute
has not been specified, then Spring AOP will look at the debug
information for the class and try to determine the parameter
names from the local variable table. This information will be
present as long as the classes have been compiled with debug
information ('-g:vars'
at a minimum). The
consequences of compiling with this flag on are: (1) your code
will be slightly easier to understand (reverse engineer), (2)
the class file sizes will be very slightly bigger (typically
inconsequential), (3) the optimization to remove unused local
variables will not be applied by your compiler. In other words,
you should encounter no difficulties building with this flag
on.
If an @AspectJ aspect has been compiled by the AspectJ
compiler (ajc) even without the debug information then there is
no need to add the argNames
attribute as the
compiler will retain the needed information.
If the code has been compiled without the necessary debug information, then Spring AOP will attempt to deduce the pairing of binding variables to parameters (for example, if only one variable is bound in the pointcut expression, and the advice method only takes one parameter, the pairing is obvious!). If the binding of variables is ambiguous given the available information, then an AmbiguousBindingException will be thrown.
If all of the above strategies fail then an IllegalArgumentException will be thrown.
We remarked earlier that we would describe how to write a proceed call with arguments that works consistently across Spring AOP and AspectJ. The solution is simply to ensure that the advice signature binds each of the method parameters in order. For example:
@Around("execution(List<Account> find*(..)) &&" + "com.xyz.myapp.SystemArchitecture.inDataAccessLayer() && " + "args(accountHolderNamePattern)") public Object preProcessQueryPattern(ProceedingJoinPoint pjp, String accountHolderNamePattern) throws Throwable { String newPattern = preProcess(accountHolderNamePattern); return pjp.proceed(new Object[] {newPattern}); }
In many cases you will be doing this binding anyway (as in the example above).
What happens when multiple pieces of advice all want to run at the same join point? Spring AOP follows the same precedence rules as AspectJ to determine the order of advice execution. The highest precedence advice runs first "on the way in" (so given two pieces of before advice, the one with highest precedence runs first). "On the way out" from a join point, the highest precedence advice runs last (so given two pieces of after advice, the one with the highest precedence will run second).
When two pieces of advice defined in different aspects both need to run at the same join point, unless you specify otherwise the order of execution is undefined. You can control the order of execution by specifying precedence. This is done in the normal Spring way by either implementing the org.springframework.core.Ordered interface in the aspect class or annotating it with the Order annotation. Given two aspects, the aspect returning the lower value from Ordered.getOrder() (or the annotation value) has the higher precedence.
When two pieces of advice defined in the same aspect both need to run at the same join point, the ordering is undefined (since there is no way to retrieve the declaration order via reflection for javac-compiled classes). Consider collapsing such advice methods into one advice method per join point in each aspect class, or refactor the pieces of advice into separate aspect classes - which can be ordered at the aspect level.
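For example, a minimal sketch of two aspects ordered with the Order annotation (the aspect names, pointcut reference, and order values are illustrative assumptions):

import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.springframework.core.annotation.Order;

// SecurityAspect.java - lower order value, so higher precedence: runs first "on the way in"
@Aspect
@Order(1)
public class SecurityAspect {

    @Before("com.xyz.myapp.SystemArchitecture.businessService()")
    public void checkSecurity() {
        // ...
    }
}

// LoggingAspect.java - higher order value, so lower precedence: runs after SecurityAspect
@Aspect
@Order(2)
public class LoggingAspect {

    @Before("com.xyz.myapp.SystemArchitecture.businessService()")
    public void logEntry() {
        // ...
    }
}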
Introductions (known as inter-type declarations in AspectJ) enable an aspect to declare that advised objects implement a given interface, and to provide an implementation of that interface on behalf of those objects.
An introduction is made using the @DeclareParents annotation. This annotation is used to declare that matching types have a new parent (hence the name). For example, given an interface UsageTracked, and an implementation of that interface DefaultUsageTracked, the following aspect declares that all implementors of service interfaces also implement the UsageTracked interface. (In order to expose statistics via JMX for example.)
@Aspect
public class UsageTracking {

    @DeclareParents(value="com.xyz.myapp.service.*+", defaultImpl=DefaultUsageTracked.class)
    public static UsageTracked mixin;

    @Before("com.xyz.myapp.SystemArchitecture.businessService() &&" +
            "this(usageTracked)")
    public void recordUsage(UsageTracked usageTracked) {
        usageTracked.incrementUseCount();
    }
}
The interface to be implemented is determined by the type of the annotated field. The value attribute of the @DeclareParents annotation is an AspectJ type pattern :- any bean of a matching type will implement the UsageTracked interface. Note that in the before advice of the above example, service beans can be directly used as implementations of the UsageTracked interface. If accessing a bean programmatically you would write the following:
UsageTracked usageTracked = (UsageTracked) context.getBean("myService");
(This is an advanced topic, so if you are just starting out with AOP you can safely skip it until later.)
By default there will be a single instance of each aspect within
the application context. AspectJ calls this the singleton instantiation
model. It is possible to define aspects with alternate lifecycles:
Spring supports AspectJ's perthis
and
pertarget
instantiation models (percflow,
percflowbelow,
and pertypewithin
are not
currently supported).
A "perthis" aspect is declared by specifying a
perthis
clause in the
@Aspect
annotation. Let's look at an
example, and then we'll explain how it works.
@Aspect("perthis(com.xyz.myapp.SystemArchitecture.businessService())") public class MyAspect { private int someState; @Before(com.xyz.myapp.SystemArchitecture.businessService()) public void recordServiceUsage() { // ... } }
The effect of the 'perthis'
clause is that one
aspect instance will be created for each unique service object executing
a business service (each unique object bound to 'this' at join points
matched by the pointcut expression). The aspect instance is created the
first time that a method is invoked on the service object. The aspect
goes out of scope when the service object goes out of scope. Before the
aspect instance is created, none of the advice within it executes. As
soon as the aspect instance has been created, the advice declared within
it will execute at matched join points, but only when the service object
is the one this aspect is associated with. See the AspectJ programming
guide for more information on per-clauses.
The 'pertarget'
instantiation model works in
exactly the same way as perthis, but creates one aspect instance for
each unique target object at matched join points.
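For comparison, here is a minimal sketch of the equivalent 'pertarget' declaration; the class name is an assumption, and the pointcut is the same one used in the 'perthis' example above.
@Aspect("pertarget(com.xyz.myapp.SystemArchitecture.businessService())")
public class MyTargetScopedAspect {

    private int someState;

    @Before("com.xyz.myapp.SystemArchitecture.businessService()")
    public void recordServiceUsage() {
        // one instance of this aspect is created per unique target object
        // ...
    }
}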
Now that you have seen how all the constituent parts work, let's put them together to do something useful!
The execution of business services can sometimes fail due to
concurrency issues (for example, deadlock loser). If the operation is
retried, it is quite likely to succeed next time round. For business
services where it is appropriate to retry in such conditions (idempotent
operations that don't need to go back to the user for conflict
resolution), we'd like to transparently retry the operation to avoid the
client seeing a
PessimisticLockingFailureException
. This is a
requirement that clearly cuts across multiple services in the service
layer, and hence is ideal for implementing via an aspect.
Because we want to retry the operation, we will need to use around advice so that we can call proceed multiple times. Here's how the basic aspect implementation looks:
@Aspect
public class ConcurrentOperationExecutor implements Ordered {

    private static final int DEFAULT_MAX_RETRIES = 2;

    private int maxRetries = DEFAULT_MAX_RETRIES;
    private int order = 1;

    public void setMaxRetries(int maxRetries) {
        this.maxRetries = maxRetries;
    }

    public int getOrder() {
        return this.order;
    }

    public void setOrder(int order) {
        this.order = order;
    }

    @Around("com.xyz.myapp.SystemArchitecture.businessService()")
    public Object doConcurrentOperation(ProceedingJoinPoint pjp) throws Throwable {
        int numAttempts = 0;
        PessimisticLockingFailureException lockFailureException;
        do {
            numAttempts++;
            try {
                return pjp.proceed();
            }
            catch (PessimisticLockingFailureException ex) {
                lockFailureException = ex;
            }
        } while (numAttempts <= this.maxRetries);
        throw lockFailureException;
    }
}
Note that the aspect implements the
Ordered
interface so we can set the
precedence of the aspect higher than the transaction advice (we want a
fresh transaction each time we retry). The maxRetries
and order
properties will both be configured by
Spring. The main action happens in the
doConcurrentOperation
around advice. Notice that for
the moment we're applying the retry logic to all
businessService()s
. We try to proceed, and if we fail
with a PessimisticLockingFailureException
we
simply try again unless we have exhausted all of our retry
attempts.
The corresponding Spring configuration is:
<aop:aspectj-autoproxy/>

<bean id="concurrentOperationExecutor"
      class="com.xyz.myapp.service.impl.ConcurrentOperationExecutor">
    <property name="maxRetries" value="3"/>
    <property name="order" value="100"/>
</bean>
To refine the aspect so that it only retries idempotent
operations, we might define an Idempotent
annotation:
@Retention(RetentionPolicy.RUNTIME)
public @interface Idempotent {
    // marker annotation
}
and use the annotation to annotate the implementation of service
operations. The change to the aspect to only retry idempotent operations
simply involves refining the pointcut expression so that only
@Idempotent
operations match:
@Around("com.xyz.myapp.SystemArchitecture.businessService() && " + "@annotation(com.xyz.myapp.service.Idempotent)") public Object doConcurrentOperation(ProceedingJoinPoint pjp) throws Throwable { ... }
If you are unable to use Java 5, or simply prefer an XML-based format, then Spring 2.0 also offers support for defining aspects using the new "aop" namespace tags. The exact same pointcut expressions and advice kinds are supported as when using the @AspectJ style, hence in this section we will focus on the new syntax and refer the reader to the discussion in the previous section (Section 8.2, “@AspectJ support”) for an understanding of writing pointcut expressions and the binding of advice parameters.
To use the aop namespace tags described in this section, you need to import the spring-aop schema as described in Appendix A, XML Schema-based configuration. See Section A.2.7, “The aop schema” for how to import the tags in the aop namespace.
Within your Spring configurations, all aspect and advisor elements
must be placed within an <aop:config>
element
(you can have more than one <aop:config>
element
in an application context configuration). An
<aop:config>
element can contain pointcut,
advisor, and aspect elements (note these must be declared in that
order).
Using the schema support, an aspect is simply a regular Java object defined as a bean in your Spring application context. The state and behavior is captured in the fields and methods of the object, and the pointcut and advice information is captured in the XML.
An aspect is declared using the <aop:aspect> element, and
the backing bean is referenced using the ref
attribute:
<aop:config> <aop:aspect id="myAspect" ref="aBean"> ... </aop:aspect> </aop:config> <bean id="aBean" class="..."> ... </bean>
The bean backing the aspect ("aBean
" in this
case) can of course be configured and dependency injected just like any
other Spring bean.
A named pointcut can be declared inside an <aop:config> element, enabling the pointcut definition to be shared across several aspects and advisors.
A pointcut representing the execution of any business service in the service layer could be defined as follows:
<aop:config> <aop:pointcut id="businessService" expression="execution(* com.xyz.myapp.service.*.*(..))"/> </aop:config>
Note that the pointcut expression itself is using the same AspectJ pointcut expression language as described in Section 8.2, “@AspectJ support”. If you are using the schema based declaration style with Java 5, you can refer to named pointcuts defined in types (@Aspects) within the pointcut expression, but this feature is not available on JDK 1.4 and below (it relies on the Java 5 specific AspectJ reflection APIs). On JDK 1.5 therefore, another way of defining the above pointcut would be:
<aop:config> <aop:pointcut id="businessService" expression="com.xyz.myapp.SystemArchitecture.businessService()"/> </aop:config>
This assumes that you have a SystemArchitecture aspect as described in Section 8.2.3.3, “Sharing common pointcut definitions”.
Declaring a pointcut inside an aspect is very similar to declaring a top-level pointcut:
<aop:config> <aop:aspect id="myAspect" ref="aBean"> <aop:pointcut id="businessService" expression="execution(* com.xyz.myapp.service.*.*(..))"/> ... </aop:aspect> </aop:config>
In much the same way as in an @AspectJ aspect, pointcuts declared using the schema based definition style may collect join point context. For example, the following pointcut collects the 'this' object as the join point context and passes it to the advice:
<aop:config> <aop:aspect id="myAspect" ref="aBean"> <aop:pointcut id="businessService" expression="execution(* com.xyz.myapp.service.*.*(..)) && this(service)"/> <aop:before pointcut-ref="businessService" method="monitor"/> ... </aop:aspect> </aop:config>
The advice must be declared to receive the collected join point context by including parameters of the matching names:
public void monitor(Object service) { ... }
When combining pointcut sub-expressions, '&&' is awkward within an XML document, and so the keywords 'and', 'or' and 'not' can be used in place of '&&', '||' and '!' respectively. For example, the previous pointcut may be better written as:
<aop:config> <aop:aspect id="myAspect" ref="aBean"> <aop:pointcut id="businessService" expression="execution(* com.xyz.myapp.service.*.*(..)) and this(service)"/> <aop:before pointcut-ref="businessService" method="monitor"/> ... </aop:aspect> </aop:config>
Note that pointcuts defined in this way are referred to by their XML id and cannot be used as named pointcuts to form composite pointcuts. The named pointcut support in the schema based definition style is thus more limited than that offered by the @AspectJ style.
The same five advice kinds are supported as for the @AspectJ style, and they have exactly the same semantics.
Before advice runs before a matched method execution. It is
declared inside an <aop:aspect>
using the
<aop:before> element.
<aop:aspect id="beforeExample" ref="aBean"> <aop:before pointcut-ref="dataAccessOperation" method="doAccessCheck"/> ... </aop:aspect>
Here dataAccessOperation
is the id of a
pointcut defined at the top (<aop:config>
)
level. To define the pointcut inline instead, replace the
pointcut-ref
attribute with a
pointcut
attribute:
<aop:aspect id="beforeExample" ref="aBean"> <aop:before pointcut="execution(* com.xyz.myapp.dao.*.*(..))" method="doAccessCheck"/> ... </aop:aspect>
As we noted in the discussion of the @AspectJ style, using named pointcuts can significantly improve the readability of your code.
The method attribute identifies a method
(doAccessCheck
) that provides the body of the
advice. This method must be defined for the bean referenced by the
aspect element containing the advice. Before a data access operation
is executed (a method execution join point matched by the pointcut
expression), the "doAccessCheck" method on the aspect bean will be
invoked.
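As a minimal sketch (the class name and method body are illustrative assumptions only), the bean backing the aspect simply needs to define the named method:
public class DataAccessChecker {

    public void doAccessCheck() {
        // invoked before any data access operation matched by the pointcut;
        // throwing an exception here would prevent the matched method from executing
        System.out.println("about to perform a data access operation");
    }
}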
After returning advice runs when a matched method execution
completes normally. It is declared inside an
<aop:aspect>
in the same way as before
advice. For example:
<aop:aspect id="afterReturningExample" ref="aBean"> <aop:after-returning pointcut-ref="dataAccessOperation" method="doAccessCheck"/> ... </aop:aspect>
Just as in the @AspectJ style, it is possible to get hold of the return value within the advice body. Use the returning attribute to specify the name of the parameter to which the return value should be passed:
<aop:aspect id="afterReturningExample" ref="aBean"> <aop:after-returning pointcut-ref="dataAccessOperation" returning="retVal" method="doAccessCheck"/> ... </aop:aspect>
The doAccessCheck method must declare a parameter named
retVal
. The type of this parameter constrains
matching in the same way as described for @AfterReturning. For
example, the method signature may be declared as:
public void doAccessCheck(Object retVal) {...
After throwing advice executes when a matched method execution
exits by throwing an exception. It is declared inside an
<aop:aspect>
using the after-throwing
element:
<aop:aspect id="afterThrowingExample" ref="aBean"> <aop:after-throwing pointcut-ref="dataAccessOperation" method="doRecoveryActions"/> ... </aop:aspect>
Just as in the @AspectJ style, it is possible to get hold of the thrown exception within the advice body. Use the throwing attribute to specify the name of the parameter to which the exception should be passed:
<aop:aspect id="afterThrowingExample" ref="aBean"> <aop:after-throwing pointcut-ref="dataAccessOperation" throwing="dataAccessEx" method="doRecoveryActions"/> ... </aop:aspect>
The doRecoveryActions method must declare a parameter named
dataAccessEx
. The type of this parameter constrains
matching in the same way as described for @AfterThrowing. For example,
the method signature may be declared as:
public void doRecoveryActions(DataAccessException dataAccessEx) {...
After (finally) advice runs no matter how a matched method execution
exits. It is declared using the after
element:
<aop:aspect id="afterFinallyExample" ref="aBean"> <aop:after pointcut-ref="dataAccessOperation" method="doReleaseLock"/> ... </aop:aspect>
The final kind of advice is around advice. Around advice runs "around" a matched method execution. It has the opportunity to do work both before and after the method executes, and to determine when, how, and even if, the method actually gets to execute at all. Around advice is often used if you need to share state before and after a method execution in a thread-safe manner (starting and stopping a timer for example). Always use the least powerful form of advice that meets your requirements; don't use around advice if simple before advice would do.
Around advice is declared using the
aop:around
element. The first parameter of the
advice method must be of type
ProceedingJoinPoint
. Within the body of
the advice, calling proceed()
on the
ProceedingJoinPoint
causes the
underlying method to execute. The proceed
method
may also be called passing in an Object[]
-
the values in the array will be used as the arguments to the method
execution when it proceeds. See Section 8.2.4.5, “Around advice” for notes on calling proceed
with an Object[]
.
<aop:aspect id="aroundExample" ref="aBean"> <aop:around pointcut-ref="businessService" method="doBasicProfiling"/> ... </aop:aspect>
The implementation of the doBasicProfiling
advice would be exactly the same as in the @AspectJ example (minus the
annotation of course):
public Object doBasicProfiling(ProceedingJoinPoint pjp) throws Throwable {
    // start stopwatch
    Object retVal = pjp.proceed();
    // stop stopwatch
    return retVal;
}
The schema based declaration style supports fully typed advice
in the same way as described for the @AspectJ support - by matching
pointcut parameters by name against advice method parameters. See
Section 8.2.4.6, “Advice parameters” for details. If you
wish to explicitly specify argument names for the advice methods (not
relying on the detection strategies previously described) then this is
done using the arg-names
attribute of the advice
element, which is treated in the same manner as the "argNames"
attribute in an advice annotation as described in the section called “Determining argument names”. For example:
<aop:before pointcut="com.xyz.lib.Pointcuts.anyPublicMethod() and @annotation(auditable)" method="audit" arg-names="auditable"/>
The arg-names
attribute accepts a
comma-delimited list of parameter names.
Find below a slightly more involved example of the XSD-based approach that illustrates some around advice used in conjunction with a number of strongly typed parameters.
package x.y.service;

public interface FooService {

    Foo getFoo(String fooName, int age);
}

public class DefaultFooService implements FooService {

    public Foo getFoo(String name, int age) {
        return new Foo(name, age);
    }
}
Next up is the aspect. Notice the fact that the
profile(..)
method accepts a number of
strongly-typed parameters, the first of which happens to be the join
point used to proceed with the method call: the presence of this
parameter is an indication that the
profile(..)
method is to be used as
around
advice:
package x.y;

import org.aspectj.lang.ProceedingJoinPoint;
import org.springframework.util.StopWatch;

public class SimpleProfiler {

    public Object profile(ProceedingJoinPoint call, String name, int age) throws Throwable {
        StopWatch clock = new StopWatch("Profiling for '" + name + "' and '" + age + "'");
        try {
            clock.start(call.toShortString());
            return call.proceed();
        } finally {
            clock.stop();
            System.out.println(clock.prettyPrint());
        }
    }
}
Finally, here is the XML configuration that is required to effect the execution of the above advice for a particular join point:
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:aop="http://www.springframework.org/schema/aop" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.5.xsd"> <!-- this is the object that will be proxied by Spring's AOP infrastructure --> <bean id="fooService" class="x.y.service.DefaultFooService"/> <!-- this is the actual advice itself --> <bean id="profiler" class="x.y.SimpleProfiler"/> <aop:config> <aop:aspect ref="profiler"> <aop:pointcut id="theExecutionOfSomeFooServiceMethod" expression="execution(* x.y.service.FooService.getFoo(String,int)) and args(name, age)"/> <aop:around pointcut-ref="theExecutionOfSomeFooServiceMethod" method="profile"/> </aop:aspect> </aop:config> </beans>
If we had the following driver script, we would get output something like this on standard output:
import org.springframework.beans.factory.BeanFactory;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import x.y.service.FooService;

public final class Boot {

    public static void main(final String[] args) throws Exception {
        BeanFactory ctx = new ClassPathXmlApplicationContext("x/y/plain.xml");
        FooService foo = (FooService) ctx.getBean("fooService");
        foo.getFoo("Pengo", 12);
    }
}
StopWatch 'Profiling for 'Pengo' and '12'': running time (millis) = 0
-----------------------------------------
ms     %     Task name
-----------------------------------------
00000  ?  execution(getFoo)
When multiple advice needs to execute at the same join point
(executing method) the ordering rules are as described in Section 8.2.4.7, “Advice ordering”. The precedence between
aspects is determined by either adding the
Order
annotation to the bean backing
the aspect or by having the bean implement the
Ordered
interface.
Introductions (known as inter-type declarations in AspectJ) enable an aspect to declare that advised objects implement a given interface, and to provide an implementation of that interface on behalf of those objects.
An introduction is made using the
aop:declare-parents
element inside an
aop:aspect. This element is used to declare that
matching types have a new parent (hence the name). For example, given an
interface UsageTracked
, and an
implementation of that interface
DefaultUsageTracked
, the following aspect
declares that all implementors of service interfaces also implement the
UsageTracked
interface. (In order to
expose statistics via JMX for example.)
<aop:aspect id="usageTrackerAspect" ref="usageTracking"> <aop:declare-parents types-matching="com.xzy.myapp.service.*+" implement-interface="com.xyz.myapp.service.tracking.UsageTracked" default-impl="com.xyz.myapp.service.tracking.DefaultUsageTracked"/> <aop:before pointcut="com.xyz.myapp.SystemArchitecture.businessService() and this(usageTracked)" method="recordUsage"/> </aop:aspect>
The class backing the usageTracking
bean would
contain the method:
public void recordUsage(UsageTracked usageTracked) {
    usageTracked.incrementUseCount();
}
The interface to be implemented is determined by the implement-interface attribute. The value of the types-matching attribute is an AspectJ type pattern: any bean of a matching type will implement the
UsageTracked
interface. Note that in the
before advice of the above example, service beans can be directly used
as implementations of the UsageTracked
interface. If accessing a bean programmatically you would write the
following:
UsageTracked usageTracked = (UsageTracked) context.getBean("myService");
The only supported instantiation model for schema-defined aspects is the singleton model. Other instantiation models may be supported in future releases.
The concept of "advisors" is brought forward from the AOP support defined in Spring 1.2 and does not have a direct equivalent in AspectJ. An advisor is like a small self-contained aspect that has a single piece of advice. The advice itself is represented by a bean, and must implement one of the advice interfaces described in Section 9.3.2, “Advice types in Spring”. Advisors can take advantage of AspectJ pointcut expressions though.
Spring 2.0 supports the advisor concept with the
<aop:advisor>
element. You will most commonly
see it used in conjunction with transactional advice, which also has its
own namespace support in Spring 2.0. Here's how it looks:
<aop:config> <aop:pointcut id="businessService" expression="execution(* com.xyz.myapp.service.*.*(..))"/> <aop:advisor pointcut-ref="businessService" advice-ref="tx-advice"/> </aop:config> <tx:advice id="tx-advice"> <tx:attributes> <tx:method name="*" propagation="REQUIRED"/> </tx:attributes> </tx:advice>
As well as the pointcut-ref
attribute used in the
above example, you can also use the pointcut
attribute
to define a pointcut expression inline.
To define the precedence of an advisor so that the advice can
participate in ordering, use the order
attribute to
define the Ordered
value of the advisor.
Let's see how the concurrent locking failure retry example from Section 8.2.7, “Example” looks when rewritten using the schema support.
The execution of business services can sometimes fail due to
concurrency issues (for example, deadlock loser). If the operation is
retried, it is quite likely it will succeed next time round. For
business services where it is appropriate to retry in such conditions
(idempotent operations that don't need to go back to the user for
conflict resolution), we'd like to transparently retry the operation to
avoid the client seeing a
PessimisticLockingFailureException
. This is a
requirement that clearly cuts across multiple services in the service
layer, and hence is ideal for implementing via an aspect.
Because we want to retry the operation, we'll need to use around advice so that we can call proceed multiple times. Here's how the basic aspect implementation looks (it's just a regular Java class using the schema support):
public class ConcurrentOperationExecutor implements Ordered {

    private static final int DEFAULT_MAX_RETRIES = 2;

    private int maxRetries = DEFAULT_MAX_RETRIES;
    private int order = 1;

    public void setMaxRetries(int maxRetries) {
        this.maxRetries = maxRetries;
    }

    public int getOrder() {
        return this.order;
    }

    public void setOrder(int order) {
        this.order = order;
    }

    public Object doConcurrentOperation(ProceedingJoinPoint pjp) throws Throwable {
        int numAttempts = 0;
        PessimisticLockingFailureException lockFailureException;
        do {
            numAttempts++;
            try {
                return pjp.proceed();
            }
            catch (PessimisticLockingFailureException ex) {
                lockFailureException = ex;
            }
        } while (numAttempts <= this.maxRetries);
        throw lockFailureException;
    }
}
Note that the aspect implements the
Ordered
interface so we can set the
precedence of the aspect higher than the transaction advice (we want a
fresh transaction each time we retry). The maxRetries
and order
properties will both be configured by
Spring. The main action happens in the
doConcurrentOperation
around advice method. We try to
proceed, and if we fail with a
PessimisticLockingFailureException
we simply try
again unless we have exhausted all of our retry attempts.
This class is identical to the one used in the @AspectJ example, but with the annotations removed.
The corresponding Spring configuration is:
<aop:config> <aop:aspect id="concurrentOperationRetry" ref="concurrentOperationExecutor"> <aop:pointcut id="idempotentOperation" expression="execution(* com.xyz.myapp.service.*.*(..))"/> <aop:around pointcut-ref="idempotentOperation" method="doConcurrentOperation"/> </aop:aspect> </aop:config> <bean id="concurrentOperationExecutor" class="com.xyz.myapp.service.impl.ConcurrentOperationExecutor"> <property name="maxRetries" value="3"/> <property name="order" value="100"/> </bean>
Notice that for the time being we assume that all business
services are idempotent. If this is not the case we can refine the
aspect so that it only retries genuinely idempotent operations, by
introducing an Idempotent
annotation:
@Retention(RetentionPolicy.RUNTIME)
public @interface Idempotent {
    // marker annotation
}
and using the annotation to annotate the implementation of service
operations. The change to the aspect to retry only idempotent operations
simply involves refining the pointcut expression so that only
@Idempotent
operations match:
<aop:pointcut id="idempotentOperation" expression="execution(* com.xyz.myapp.service.*.*(..)) and @annotation(com.xyz.myapp.service.Idempotent)"/>
Once you have decided that an aspect is the best approach for implementing a given requirement, how do you decide between using Spring AOP or AspectJ, and between the Aspect language (code) style, @AspectJ annotation style, or the Spring XML style? These decisions are influenced by a number of factors including application requirements, development tools, and team familiarity with AOP.
Use the simplest thing that can work. Spring AOP is simpler than using full AspectJ as there is no requirement to introduce the AspectJ compiler / weaver into your development and build processes. If you only need to advise the execution of operations on Spring beans, then Spring AOP is the right choice. If you need to advise objects not managed by the Spring container (such as domain objects typically), then you will need to use AspectJ. You will also need to use AspectJ if you wish to advise join points other than simple method executions (for example, field get or set join points, and so on).
When using AspectJ, you have the choice of the AspectJ language syntax (also known as the "code style") or the @AspectJ annotation style. Clearly, if you are not using Java 5+ then the choice has been made for you... use the code style. If aspects play a large role in your design, and you are able to use the AspectJ Development Tools (AJDT) plugin for Eclipse, then the AspectJ language syntax is the preferred option: it is cleaner and simpler because the language was purposefully designed for writing aspects. If you are not using Eclipse, or have only a few aspects that do not play a major role in your application, then you may want to consider using the @AspectJ style and sticking with a regular Java compilation in your IDE, and adding an aspect weaving phase to your build script.
If you have chosen to use Spring AOP, then you have a choice of @AspectJ or XML style. Clearly if you are not running on Java 5+, then the XML style is the appropriate choice; for Java 5 projects there are various tradeoffs to consider.
The XML style will be most familiar to existing Spring users. It can be used with any JDK level (referring to named pointcuts from within pointcut expressions does still require Java 5+ though) and is backed by genuine POJOs. When using AOP as a tool to configure enterprise services then XML can be a good choice (a good test is whether you consider the pointcut expression to be a part of your configuration you might want to change independently). With the XML style arguably it is clearer from your configuration what aspects are present in the system.
The XML style has two disadvantages. Firstly it does not fully encapsulate the implementation of the requirement it addresses in a single place. The DRY principle says that there should be a single, unambiguous, authoritative representation of any piece of knowledge within a system. When using the XML style, the knowledge of how a requirement is implemented is split across the declaration of the backing bean class, and the XML in the configuration file. When using the @AspectJ style there is a single module - the aspect - in which this information is encapsulated. Secondly, the XML style is slightly more limited in what it can express than the @AspectJ style: only the "singleton" aspect instantiation model is supported, and it is not possible to combine named pointcuts declared in XML. For example, in the @AspectJ style you can write something like:
@Pointcut("execution(* get*())")
public void propertyAccess() {}

@Pointcut("execution(org.xyz.Account+ *(..))")
public void operationReturningAnAccount() {}

@Pointcut("propertyAccess() && operationReturningAnAccount()")
public void accountPropertyAccess() {}
In the XML style I can declare the first two pointcuts:
<aop:pointcut id="propertyAccess" expression="execution(* get*())"/> <aop:pointcut id="operationReturningAnAccount" expression="execution(org.xyz.Account+ *(..))"/>
The downside of the XML approach is that you cannot define the
'accountPropertyAccess
' pointcut by combining these
definitions.
The @AspectJ style supports additional instantiation models, and richer pointcut composition. It has the advantage of keeping the aspect as a modular unit. It also has the advantage that @AspectJ aspects can be understood (and thus consumed) both by Spring AOP and by AspectJ - so if you later decide you need the capabilities of AspectJ to implement additional requirements then it is very easy to migrate to an AspectJ-based approach. On balance the Spring team prefers the @AspectJ style whenever you have aspects that do more than simple "configuration" of enterprise services.
It is perfectly possible to mix @AspectJ style aspects using the
autoproxying support, schema-defined <aop:aspect>
aspects, <aop:advisor>
declared advisors and even
proxies and interceptors defined using the Spring 1.2 style in the same
configuration. All of these are implemented using the same underlying
support mechanism and will co-exist without any difficulty.
Spring AOP uses either JDK dynamic proxies or CGLIB to create the proxy for a given target object. (JDK dynamic proxies are preferred whenever you have a choice).
If the target object to be proxied implements at least one interface then a JDK dynamic proxy will be used. All of the interfaces implemented by the target type will be proxied. If the target object does not implement any interfaces then a CGLIB proxy will be created.
If you want to force the use of CGLIB proxying (for example, to proxy every method defined for the target object, not just those implemented by its interfaces) you can do so. However, there are some issues to consider:
final methods cannot be advised, as they cannot be overridden.
You will need the CGLIB 2 binaries on your classpath, whereas dynamic proxies are available with the JDK. Spring will automatically warn you when it needs CGLIB and the CGLIB library classes are not found on the classpath.
The constructor of your proxied object will be called twice. This is a natural consequence of the CGLIB proxy model whereby a subclass is generated for each proxied object. For each proxied instance, two objects are created: the actual proxied object and an instance of the subclass that implements the advice. This behavior is not exhibited when using JDK proxies. Usually, calling the constructor of the proxied type twice is not an issue, as there are typically only assignments taking place and no real logic is implemented in the constructor.
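A minimal sketch illustrating the double constructor invocation follows; the class is hypothetical, and forcing a class-based proxy via ProxyFactory.setProxyTargetClass(true) is just one way to observe the behavior.
import org.springframework.aop.framework.ProxyFactory;

public class ChattyService {

    public ChattyService() {
        // with a CGLIB proxy this message appears twice: once for the target
        // instance created below, and once for the generated subclass instance
        System.out.println("ChattyService constructor called");
    }

    public void doWork() {
        // some logic...
    }

    public static void main(String[] args) {
        ProxyFactory factory = new ProxyFactory(new ChattyService());
        factory.setProxyTargetClass(true);   // force a CGLIB (subclass) proxy
        ChattyService proxy = (ChattyService) factory.getProxy();
        proxy.doWork();
    }
}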
To force the use of CGLIB proxies set
the value of the proxy-target-class
attribute of the
<aop:config>
element to true:
<aop:config proxy-target-class="true"> <!-- other beans defined here... --> </aop:config>
To force CGLIB proxying when using the @AspectJ autoproxy support,
set the 'proxy-target-class'
attribute of the
<aop:aspectj-autoproxy>
element to
true
:
<aop:aspectj-autoproxy proxy-target-class="true"/>
Spring AOP is proxy-based. It is vitally important that you grasp the semantics of what that last statement actually means before you write your own aspects or use any of the Spring AOP-based aspects supplied with the Spring Framework.
Consider first the scenario where you have a plain-vanilla, un-proxied, nothing-special-about-it, straight object reference, as illustrated by the following code snippet.
public class SimplePojo implements Pojo {

    public void foo() {
        // this next method invocation is a direct call on the 'this' reference
        this.bar();
    }

    public void bar() {
        // some logic...
    }
}
If you invoke a method on an object reference, the method is invoked directly on that object reference, as can be seen below.
public class Main {

    public static void main(String[] args) {
        Pojo pojo = new SimplePojo();
        // this is a direct method call on the 'pojo' reference
        pojo.foo();
    }
}
Things change slightly when the reference that client code has is a proxy. Consider the following diagram and code snippet.
public class Main {

    public static void main(String[] args) {
        ProxyFactory factory = new ProxyFactory(new SimplePojo());
        factory.addInterface(Pojo.class);
        factory.addAdvice(new RetryAdvice());

        Pojo pojo = (Pojo) factory.getProxy();
        // this is a method call on the proxy!
        pojo.foo();
    }
}
The key thing to understand here is that the client code inside
the main(..)
of the Main
class has a reference to the proxy. This means that
method calls on that object reference will be calls on the proxy, and as
such the proxy will be able to delegate to all of the interceptors
(advice) that are relevant to that particular method call. However, once
the call has finally reached the target object, the
SimplePojo
reference in this case, any method
calls that it may make on itself, such as
this.bar()
or
this.foo()
, are going to be invoked against the
this
reference, and
not the proxy. This has important implications. It
means that self-invocation is not going to result
in the advice associated with a method invocation getting a chance to
execute.
Okay, so what is to be done about this? The best approach (the term best is used loosely here) is to refactor your code such that the self-invocation does not happen. For sure, this does entail some work on your part, but it is the best, least-invasive approach. The next approach is absolutely horrendous, and I am almost reticent to point it out precisely because it is so horrendous. You can (choke!) totally tie the logic within your class to Spring AOP by doing this:
public class SimplePojo implements Pojo {

    public void foo() {
        // this works, but... gah!
        ((Pojo) AopContext.currentProxy()).bar();
    }

    public void bar() {
        // some logic...
    }
}
This totally couples your code to Spring AOP, and it makes the class itself aware of the fact that it is being used in an AOP context, which flies in the face of AOP. It also requires some additional configuration when the proxy is being created:
public class Main {

    public static void main(String[] args) {
        ProxyFactory factory = new ProxyFactory(new SimplePojo());
        factory.addInterface(Pojo.class);
        factory.addAdvice(new RetryAdvice());
        factory.setExposeProxy(true);

        Pojo pojo = (Pojo) factory.getProxy();
        // this is a method call on the proxy!
        pojo.foo();
    }
}
Finally, it must be noted that AspectJ does not have this self-invocation issue because it is not a proxy-based AOP framework.
In addition to declaring aspects in your configuration using either
<aop:config>
or
<aop:aspectj-autoproxy>
, it is also possible
programmatically to create proxies that advise target objects. For the
full details of Spring's AOP API, see the next chapter. Here we want to
focus on the ability to automatically create proxies using @AspectJ
aspects.
The class
org.springframework.aop.aspectj.annotation.AspectJProxyFactory
can be used to create a proxy for a target object that is advised by one
or more @AspectJ aspects. Basic usage for this class is very simple, as
illustrated below. See the Javadocs for full information.
// create a factory that can generate a proxy for the given target object
AspectJProxyFactory factory = new AspectJProxyFactory(targetObject);

// add an aspect, the class must be an @AspectJ aspect
// you can call this as many times as you need with different aspects
factory.addAspect(SecurityManager.class);

// you can also add existing aspect instances, the type of the object supplied must be an @AspectJ aspect
factory.addAspect(usageTracker);

// now get the proxy object...
MyInterfaceType proxy = factory.getProxy();
Everything we've covered so far in this chapter is pure Spring AOP. In this section, we're going to look at how you can use the AspectJ compiler/weaver instead of, or in addition to, Spring AOP if your needs go beyond the facilities offered by Spring AOP alone.
Spring ships with a small AspectJ aspect library, which is available
standalone in your distribution as spring-aspects.jar
; you'll need to add this
to your classpath in order to use the aspects in it. Section 8.8.1, “Using AspectJ to dependency inject domain objects with
Spring” and Section 8.8.2, “Other Spring aspects for AspectJ”
discuss the content of this library and how you can use it. Section 8.8.3, “Configuring AspectJ aspects using Spring IoC” discusses how to dependency inject AspectJ
aspects that are woven using the AspectJ compiler. Finally, Section 8.8.4, “Load-time weaving with AspectJ in the Spring Framework” provides an introduction to load-time weaving for
Spring applications using AspectJ.
The Spring container instantiates and configures beans defined in
your application context. It is also possible to ask a bean factory to
configure a pre-existing object given the name of a
bean definition containing the configuration to be applied. The
spring-aspects.jar
contains an
annotation-driven aspect that exploits this capability to allow
dependency injection of any object. The support is
intended to be used for objects created outside of the control
of any container. Domain objects often fall into this
category because they are often created programmatically using the
new
operator, or by an ORM tool as a result of a
database query.
The @Configurable
annotation marks
a class as eligible for Spring-driven configuration. In the simplest
case it can be used just as a marker annotation:
package com.xyz.myapp.domain;

import org.springframework.beans.factory.annotation.Configurable;

@Configurable
public class Account {
    // ...
}
When used as a marker annotation in this way, Spring will configure
new instances of the annotated type (Account
in
this case) using a prototype-scoped bean definition with the same name
as the fully-qualified type name
(com.xyz.myapp.domain.Account
). Since the default
name for a bean is the fully-qualified name of its type, a convenient
way to declare the prototype definition is simply to omit the
id
attribute:
<bean class="com.xyz.myapp.domain.Account" scope="prototype"> <property name="fundsTransferService" ref="fundsTransferService"/> </bean>
If you want to explicitly specify the name of the prototype bean definition to use, you can do so directly in the annotation:
package com.xyz.myapp.domain;

import org.springframework.beans.factory.annotation.Configurable;

@Configurable("account")
public class Account {
    // ...
}
Spring will now look for a bean definition named
"account
" and use that as the definition to configure
new Account
instances.
You can also use autowiring to avoid having to specify a
prototype-scoped bean definition at all. To have Spring apply autowiring
use the 'autowire
' property of the
@Configurable
annotation: specify either
@Configurable(autowire=Autowire.BY_TYPE)
or
@Configurable(autowire=Autowire.BY_NAME)
for
autowiring by type or by name respectively. As an alternative, as of
Spring 2.5 it is preferable to specify explicit, annotation-driven
dependency injection for your @Configurable
beans by using @Autowired
and
@Resource
at the field or method level (see
Section 4.11, “Annotation-based configuration” for further details).
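As a sketch of that annotation-driven alternative, the Account class might declare its dependency directly; the FundsTransferService type is an assumption based on the fundsTransferService reference in the earlier XML example.
package com.xyz.myapp.domain;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Configurable;

@Configurable
public class Account {

    @Autowired
    private FundsTransferService fundsTransferService;   // injected after construction

    // ...
}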
Finally you can enable Spring dependency checking for the object
references in the newly created and configured object by using the
dependencyCheck
attribute (for example:
@Configurable(autowire=Autowire.BY_NAME,dependencyCheck=true)
).
If this attribute is set to true, then Spring will validate after
configuration that all properties (which are not primitives or
collections) have been set.
Using the annotation on its own does nothing of course. It is the
AnnotationBeanConfigurerAspect
in spring-aspects.jar
that acts on the
presence of the annotation. In essence the aspect says "after returning
from the initialization of a new object of a type annotated with
@Configurable
, configure the newly
created object using Spring in accordance with the properties of the
annotation". In this context, initialization refers
to newly instantiated objects (e.g., objects instantiated with the
'new
' operator) as well as to
Serializable
objects that are undergoing
deserialization (e.g., via readResolve()).
Note: One of the key phrases in the above paragraph is 'in essence'. For most cases, the exact semantics of 'after returning from the initialization of a new object' will be fine... in this context, 'after initialization' means that the dependencies will be injected after the object has been constructed - this means that the dependencies will not be available for use in the constructor bodies of the class. If you want the dependencies to be injected before the constructor bodies execute, and thus be available for use in the body of the constructors, then you need to declare this on the annotation: @Configurable(preConstruction=true). You can find out more information about the language semantics of the various pointcut types in AspectJ in this appendix of the AspectJ Programming Guide.
For this to work the annotated types must be woven with the
AspectJ weaver - you can either use a build-time Ant or Maven task to do
this (see for example the AspectJ
Development Environment Guide) or load-time weaving (see Section 8.8.4, “Load-time weaving with AspectJ in the Spring Framework”). The
AnnotationBeanConfigurerAspect
itself needs
configuring by Spring (in order to obtain a reference to the bean
factory that is to be used to configure new objects). The Spring context
namespace defines a convenient tag for doing this: just include
the following in your application context configuration:
<context:spring-configured/>
If you are using the DTD instead of schema, the equivalent definition is:
<bean class="org.springframework.beans.factory.aspectj.AnnotationBeanConfigurerAspect" factory-method="aspectOf"/>
Instances of @Configurable
objects
created before the aspect has been configured will
result in a warning being issued to the log and no configuration of the
object taking place. An example might be a bean in the Spring
configuration that creates domain objects when it is initialized by
Spring. In this case you can use the "depends-on" bean attribute to
manually specify that the bean depends on the configuration
aspect.
<bean id="myService" class="com.xzy.myapp.service.MyService" depends-on="org.springframework.beans.factory.aspectj.AnnotationBeanConfigurerAspect"> <!-- ... --> </bean>
One of the goals of the
@Configurable
support is to enable
independent unit testing of domain objects without the difficulties
associated with hard-coded lookups. If
@Configurable
types have not been woven
by AspectJ then the annotation has no effect during unit testing, and
you can simply set mock or stub property references in the object
under test and proceed as normal. If
@Configurable
types
have been woven by AspectJ then you can still
unit test outside of the container as normal, but you will see a
warning message each time that you construct an
@Configurable
object indicating that it
has not been configured by Spring.
The AnnotationBeanConfigurerAspect
used
to implement the @Configurable
support
is an AspectJ singleton aspect. The scope of a singleton aspect is the
same as the scope of static
members, that is to say
there is one aspect instance per classloader that defines the type.
This means that if you define multiple application contexts within the
same classloader hierarchy you need to consider where to define the
<context:spring-configured/>
bean and where to
place spring-aspects.jar
on
the classpath.
Consider a typical Spring web-app configuration with a shared
parent application context defining common business services and
everything needed to support them, and one child application context
per servlet containing definitions particular to that servlet. All of
these contexts will co-exist within the same classloader hierarchy,
and so the AnnotationBeanConfigurerAspect
can only
hold a reference to one of them. In this case we recommend defining
the <context:spring-configured/>
bean in the
shared (parent) application context: this defines the services that
you are likely to want to inject into domain objects. A consequence is
that you cannot configure domain objects with references to beans
defined in the child (servlet-specific) contexts using the
@Configurable mechanism (probably not something you want to do
anyway!).
When deploying multiple web-apps within the same container,
ensure that each web-application loads the types in spring-aspects.jar
using its own
classloader (for example, by placing spring-aspects.jar
in 'WEB-INF/lib'
). If spring-aspects.jar
is only added to the
container wide classpath (and hence loaded by the shared parent
classloader), all web applications will share the same aspect instance
which is probably not what you want.
In addition to the @Configurable
aspect, spring-aspects.jar
contains an AspectJ aspect that can be used to drive Spring's
transaction management for types and methods annotated with the
@Transactional
annotation. This is
primarily intended for users who want to use the Spring Framework's
transaction support outside of the Spring container.
The aspect that interprets
@Transactional
annotations is the
AnnotationTransactionAspect
. When using this
aspect, you must annotate the implementation class
(and/or methods within that class), not the
interface (if any) that the class implements. AspectJ follows Java's
rule that annotations on interfaces are not
inherited.
A @Transactional
annotation on a
class specifies the default transaction semantics for the execution of
any public operation in the class.
A @Transactional
annotation on a
method within the class overrides the default transaction semantics
given by the class annotation (if present). Methods with
public
, protected
, and default
visibility may all be annotated. Annotating protected
and default visibility methods directly is the only way to get
transaction demarcation for the execution of such methods.
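A minimal sketch of these rules, reusing the FooService example from earlier in this chapter, follows; the additional method and the chosen transaction attributes are assumptions for illustration only.
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Transactional(readOnly = true)   // class-level default for all public operations
public class DefaultFooService implements FooService {

    public Foo getFoo(String fooName, int age) {
        // inherits the class-level (read-only) transaction semantics
        return new Foo(fooName, age);
    }

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    protected void refreshCache() {
        // a protected method can only get transaction demarcation
        // by being annotated directly, as done here
    }
}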
For AspectJ programmers that want to use the Spring configuration
and transaction management support but don't want to (or cannot) use
annotations, spring-aspects.jar
also contains abstract
aspects you can extend to
provide your own pointcut definitions. See the sources for the
AbstractBeanConfigurerAspect
and
AbstractTransactionAspect
aspects for more
information. As an example, the following excerpt shows how you could
write an aspect to configure all instances of objects defined in the
domain model using prototype bean definitions that match the
fully-qualified class names:
public aspect DomainObjectConfiguration extends AbstractBeanConfigurerAspect {

    public DomainObjectConfiguration() {
        setBeanWiringInfoResolver(new ClassNameBeanWiringInfoResolver());
    }

    // the creation of a new bean (any object in the domain model)
    protected pointcut beanCreation(Object beanInstance) :
        initialization(new(..)) &&
        SystemArchitecture.inDomainModel() &&
        this(beanInstance);
}
When using AspectJ aspects with Spring applications, it is natural
to both want and expect to be able to configure such aspects using
Spring. The AspectJ runtime itself is responsible for aspect creation,
and the means of configuring the AspectJ created aspects via Spring
depends on the AspectJ instantiation model (the
'per-xxx
' clause) used by the aspect.
The majority of AspectJ aspects are singleton
aspects. Configuration of these aspects is very easy: simply create a
bean definition referencing the aspect type as normal, and include the
bean attribute 'factory-method="aspectOf"'
. This
ensures that Spring obtains the aspect instance by asking AspectJ for it
rather than trying to create an instance itself. For example:
<bean id="profiler" class="com.xyz.profiler.Profiler" factory-method="aspectOf"> <property name="profilingStrategy" ref="jamonProfilingStrategy"/> </bean>
Non-singleton aspects are harder to configure: however it is
possible to do so by creating prototype bean definitions and using the
@Configurable
support from spring-aspects.jar
to configure the
aspect instances once they have been created by the AspectJ
runtime.
If you have some @AspectJ aspects that you want to weave with
AspectJ (for example, using load-time weaving for domain model types)
and other @AspectJ aspects that you want to use with Spring AOP, and
these aspects are all configured using Spring, then you will need to
tell the Spring AOP @AspectJ autoproxying support which exact subset of
the @AspectJ aspects defined in the configuration should be used for
autoproxying. You can do this by using one or more
<include/>
elements inside the
<aop:aspectj-autoproxy/>
declaration. Each
<include/>
element specifies a name pattern,
and only beans with names matched by at least one of the patterns will
be used for Spring AOP autoproxy configuration:
<aop:aspectj-autoproxy>
    <aop:include name="thisBean"/>
    <aop:include name="thatBean"/>
</aop:aspectj-autoproxy>
Load-time weaving (LTW) refers to the process of weaving AspectJ aspects into an application's class files as they are being loaded into the Java virtual machine (JVM). The focus of this section is on configuring and using LTW in the specific context of the Spring Framework: this section is not an introduction to LTW though. For full details on the specifics of LTW and configuring LTW with just AspectJ (with Spring not being involved at all), see the LTW section of the AspectJ Development Environment Guide.
The value-add that the Spring Framework brings to AspectJ LTW is
in enabling much finer-grained control over the weaving process.
'Vanilla' AspectJ LTW is effected using a Java (5+) agent, which is
switched on by specifying a VM argument when starting up a JVM. It is
thus a JVM-wide setting, which may be fine in some situations, but often
is a little too coarse. Spring-enabled LTW enables you to switch on LTW
on a per-ClassLoader
basis,
which obviously is more fine-grained and which can make more sense in a
'single-JVM-multiple-application' environment (such as is found in a
typical application server environment).
Further, in certain environments, this support enables load-time weaving without making any modifications to the application server's launch script that would otherwise be needed to add -javaagent:path/to/aspectjweaver.jar or (as we describe later in this section) -javaagent:path/to/spring-agent.jar. Developers simply modify one or more files that form the application context to enable load-time weaving, instead of relying on administrators who are typically in charge of the deployment configuration, such as the launch script.
Now that the sales pitch is over, let us first walk through a quick example of AspectJ LTW using Spring, followed by detailed specifics about elements introduced in the following example. For a complete example, please see the Petclinic sample application.
Let us assume that you are an application developer who has been tasked with diagnosing the cause of some performance problems in a system. Rather than break out a profiling tool, what we are going to do is switch on a simple profiling aspect that will enable us to very quickly get some performance metrics, so that we can then apply a finer-grained profiling tool to that specific area immediately afterwards.
Here is the profiling aspect. Nothing too fancy, just a quick-and-dirty time-based profiler, using the @AspectJ-style of aspect declaration.
package foo;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Pointcut;
import org.springframework.util.StopWatch;
import org.springframework.core.annotation.Order;

@Aspect
public class ProfilingAspect {

    @Around("methodsToBeProfiled()")
    public Object profile(ProceedingJoinPoint pjp) throws Throwable {
        StopWatch sw = new StopWatch(getClass().getSimpleName());
        try {
            sw.start(pjp.getSignature().getName());
            return pjp.proceed();
        } finally {
            sw.stop();
            System.out.println(sw.prettyPrint());
        }
    }

    @Pointcut("execution(public * foo..*.*(..))")
    public void methodsToBeProfiled(){}
}
We will also need to create an
'META-INF/aop.xml
' file, to inform the AspectJ
weaver that we want to weave our
ProfilingAspect
into our classes. This file
convention, namely the presence of a file (or files) on the Java
classpath called ' META-INF/aop.xml
' is standard
AspectJ.
<!DOCTYPE aspectj PUBLIC
        "-//AspectJ//DTD//EN" "http://www.eclipse.org/aspectj/dtd/aspectj.dtd">

<aspectj>

    <weaver>
        <!-- only weave classes in our application-specific packages -->
        <include within="foo.*"/>
    </weaver>

    <aspects>
        <!-- weave in just this aspect -->
        <aspect name="foo.ProfilingAspect"/>
    </aspects>

</aspectj>
Now to the Spring-specific portion of the configuration. We need
to configure a LoadTimeWeaver
(all
explained later, just take it on trust for now). This load-time weaver
is the essential component responsible for weaving the aspect
configuration in one or more 'META-INF/aop.xml
'
files into the classes in your application. The good thing is that it
does not require a lot of configuration, as can be seen below (there
are some more options that you can specify, but these are detailed
later).
<?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-2.5.xsd"> <!-- a service object; we will be profiling its methods --> <bean id="entitlementCalculationService" class="foo.StubEntitlementCalculationService"/> <!-- this switches on the load-time weaving --> <context:load-time-weaver/> </beans>
Now that all the required artifacts are in place - the aspect,
the 'META-INF/aop.xml
' file, and the Spring
configuration - let us create a simple driver class with a
main(..)
method to demonstrate the LTW in
action.
package foo;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public final class Main {

    public static void main(String[] args) {
        ApplicationContext ctx = new ClassPathXmlApplicationContext("beans.xml", Main.class);

        EntitlementCalculationService entitlementCalculationService =
            (EntitlementCalculationService) ctx.getBean("entitlementCalculationService");

        // the profiling aspect is 'woven' around this method execution
        entitlementCalculationService.calculateEntitlement();
    }
}
There is one last thing to do. The introduction to this section
did say that one could switch on LTW selectively on a
per-ClassLoader
basis with Spring, and this is
true. However, just for this example, we are going to use a Java agent
(supplied with Spring) to switch on the LTW. This is the command line
we will use to run the above Main
class:
java -javaagent:C:/projects/foo/lib/global/spring-agent.jar foo.Main
The '-javaagent
' is a Java 5+ flag for
specifying and enabling agents
to instrument programs running on the JVM. The Spring
Framework ships with such an agent, the
InstrumentationSavingAgent
, which is packaged
in the spring-agent.jar
that
was supplied as the value of the -javaagent
argument in the above example.
The output from the execution of the Main
program will look something like that below. (I have introduced a
Thread.sleep(..)
statement into the
calculateEntitlement()
implementation so that
the profiler actually captures something other than 0 milliseconds -
the 01234
milliseconds is not
an overhead introduced by the AOP :) )
Calculating entitlement

StopWatch 'ProfilingAspect': running time (millis) = 1234
------ ----- ----------------------------
ms     %     Task name
------ ----- ----------------------------
01234  100%  calculateEntitlement
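For reference, a minimal sketch of what the stub service being profiled might look like; the Thread.sleep(..) mirrors the delay mentioned above, and the exact class body is an assumption.
package foo;

public class StubEntitlementCalculationService implements EntitlementCalculationService {

    public void calculateEntitlement() {
        System.out.println("Calculating entitlement");
        try {
            // simulate some work so that the profiler has something to measure
            Thread.sleep(1234);
        } catch (InterruptedException ex) {
            Thread.currentThread().interrupt();
        }
    }
}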
Since this LTW is effected using full-blown AspectJ, we are not
just limited to advising Spring beans; the following slight variation
on the Main
program will yield the same
result.
package foo;

import org.springframework.context.support.ClassPathXmlApplicationContext;

public final class Main {

    public static void main(String[] args) {
        new ClassPathXmlApplicationContext("beans.xml", Main.class);

        EntitlementCalculationService entitlementCalculationService =
            new StubEntitlementCalculationService();

        // the profiling aspect will be 'woven' around this method execution
        entitlementCalculationService.calculateEntitlement();
    }
}
Notice how in the above program we are simply bootstrapping the
Spring container, and then creating a new instance of the
StubEntitlementCalculationService
totally
outside the context of Spring... the profiling advice still gets woven
in.
The example admittedly is simplistic... however the basics of the LTW support in Spring have all been introduced in the above example, and the rest of this section will explain the 'why' behind each bit of configuration and usage in detail.
The aspects that you use in LTW have to be AspectJ aspects. They can be written either in the AspectJ language itself or in the @AspectJ-style. The latter is of course only an option if you are using Java 5+, but it does mean that your aspects are then both valid AspectJ and Spring AOP aspects. Furthermore, the compiled aspect classes need to be available on the classpath.
The AspectJ LTW infrastructure is configured using one or more
'META-INF/aop.xml
' files, that are on the Java
classpath (either directly, or more typically in jar files).
The structure and contents of this file are detailed in the main AspectJ reference documentation, and the interested reader is referred to that resource. (I appreciate that this section is brief, but the 'aop.xml' file is 100% AspectJ - there is no Spring-specific information or semantics that apply to it, and so there is no extra value that I can contribute - so rather than rehash the quite satisfactory section that the AspectJ developers wrote, I am just directing you there.)
At a minimum you will need the following libraries to use the Spring Framework's support for AspectJ LTW:
spring.jar
(version
2.5 or later)
aspectjrt.jar
(version 1.5 or later)
aspectjweaver.jar
(version 1.5 or later)
If you are using the Spring-provided agent to enable instrumentation, you will also need:
spring-agent.jar
The key component in Spring's LTW support is the
LoadTimeWeaver
interface (in the
org.springframework.instrument.classloading
package), and the numerous implementations of it that ship with the
Spring distribution. A LoadTimeWeaver
is responsible for adding one or more
java.lang.instrument.ClassFileTransformers
to a
ClassLoader
at runtime, which opens the door to
all manner of interesting applications, one of which happens to be the
LTW of aspects.
Tip: If you are unfamiliar with the idea of runtime class file transformation, you are encouraged to read the Javadoc API documentation for the java.lang.instrument package.
Configuring a LoadTimeWeaver
using XML for a particular
ApplicationContext
can be as easy as
adding one line. (Please note that you almost certainly will need to
be using an ApplicationContext
as your
Spring container - typically a
BeanFactory
will not be enough because
the LTW support makes use of
BeanFactoryPostProcessors
.)
To enable the Spring Framework's LTW support, you need to
configure a LoadTimeWeaver
, which
typically is done using the
<context:load-time-weaver/>
element. Find
below a valid <context:load-time-weaver/>
definition that uses default settings.
<?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-2.5.xsd"> <context:load-time-weaver/> </beans>
The above <context:load-time-weaver/>
bean definition will define and register a number of LTW-specific
infrastructure beans for you automatically, such as a
LoadTimeWeaver
and an
AspectJWeavingEnabler
. Notice how the
<context:load-time-weaver/>
is defined in the
'context
' namespace; note also that the referenced
XML Schema file is only available in versions of Spring 2.5 and
later.
What the above configuration does is define and register a
default LoadTimeWeaver
bean for you.
The default LoadTimeWeaver
is the
DefaultContextLoadTimeWeaver
class, which
attempts to decorate an automatically detected
LoadTimeWeaver
: the exact type of
LoadTimeWeaver
that will be
'automatically detected' is dependent upon your runtime environment
(summarised in the following table).
Table 8.1. DefaultContextLoadTimeWeaver LoadTimeWeavers

Runtime Environment | LoadTimeWeaver implementation
---|---
Running in BEA's WebLogic 10 | WebLogicLoadTimeWeaver
Running in Oracle's OC4J | OC4JLoadTimeWeaver
Running in GlassFish | GlassFishLoadTimeWeaver
JVM started with the Spring InstrumentationSavingAgent (java -javaagent:path/to/spring-agent.jar) | InstrumentationLoadTimeWeaver
Fallback, expecting the underlying ClassLoader to follow common conventions (e.g. applicable to TomcatInstrumentableClassLoader) | ReflectiveLoadTimeWeaver
Note that these are just the LoadTimeWeavers that are autodetected when using the DefaultContextLoadTimeWeaver: it is of course possible to state exactly which LoadTimeWeaver implementation you wish to use by specifying the fully-qualified classname as the value of the 'weaver-class' attribute of the <context:load-time-weaver/> element. Find below an example of doing just that:
<?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-2.5.xsd"> <context:load-time-weaver weaver-class="org.springframework.instrument.classloading.ReflectiveLoadTimeWeaver"/> </beans>
The LoadTimeWeaver
that is
defined and registered by the
<context:load-time-weaver/>
element can be
later retrieved from the Spring container using the well-known name
'loadTimeWeaver
'. Remember that the
LoadTimeWeaver
exists just as a
mechanism for Spring's LTW infrastructure to add one or more
ClassFileTransformers
. The actual
ClassFileTransformer
that does the LTW is the
ClassPreProcessorAgentAdapter
(from the
org.aspectj.weaver.loadtime
package) class. See the
class-level Javadoc for the
ClassPreProcessorAgentAdapter
class for further
details, because the specifics of how the weaving is actually effected are beyond the scope of this section.
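As an illustrative sketch (the no-op transformer and the 'beans.xml' file name are assumptions made purely for this example), an application could obtain the 'loadTimeWeaver' bean and register its own ClassFileTransformer like so:

package foo;

import java.lang.instrument.ClassFileTransformer;
import java.security.ProtectionDomain;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import org.springframework.instrument.classloading.LoadTimeWeaver;

public final class WeaverDemo {

    public static void main(String[] args) {
        ApplicationContext ctx = new ClassPathXmlApplicationContext("beans.xml");

        // the <context:load-time-weaver/> element registers the weaver under this well-known name
        LoadTimeWeaver weaver = (LoadTimeWeaver) ctx.getBean("loadTimeWeaver");

        // register a transformer; this one does nothing, a real one would rewrite the class bytes
        weaver.addTransformer(new ClassFileTransformer() {
            public byte[] transform(ClassLoader loader, String className, Class<?> classBeingRedefined,
                    ProtectionDomain protectionDomain, byte[] classfileBuffer) {
                return null; // returning null means 'no transformation'
            }
        });
    }
}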
There is one final attribute of the <context:load-time-weaver/> left to discuss: the 'aspectj-weaving' attribute. This simple attribute controls whether LTW is enabled or not. It accepts one of three possible values, summarised below; if the attribute is not present, the default value is 'autodetect'.
Table 8.2. 'aspectj-weaving' attribute values

Attribute Value | Explanation
---|---
on | AspectJ weaving is on, and aspects will be woven at load-time as appropriate.
off | LTW is off... no aspect will be woven at load-time.
autodetect | If the Spring LTW infrastructure can find at least one 'META-INF/aop.xml' file, then AspectJ weaving is on; otherwise it is off. This is the default value.
This last section contains any additional settings and configuration that you will need when using Spring's LTW support in environments such as application servers and web containers.
You may enable Spring's support for LTW in any Java application
(standalone as well as application server based) through the use of
the Spring-provided instrumentation agent. To do so, start the VM by specifying the -javaagent:path/to/spring-agent.jar option.
Note that this requires modification of the VM launch script
which may prevent you from using this in application server
environments (depending on your operation policies).
For web applications deployed onto Apache Tomcat 5.0 and above,
Spring provides a TomcatInstrumentableClassLoader
to be registered as the web app class loader. The required Tomcat setup
looks as follows, to be included either in Tomcat's central
server.xml
file or in an application-specific
META-INF/context.xml
file within the WAR root.
Spring's spring-tomcat-weaver.jar
needs to be
included in Tomcat's common lib directory in order to make this
setup work.
<Context path="/myWebApp" docBase="/my/webApp/location"> <Loader loaderClass="org.springframework.instrument.classloading.tomcat.TomcatInstrumentableClassLoader" useSystemClassLoaderAsParent="false"/> </Context>
Note: We generally recommend Tomcat 5.5.20 or above
when enabling load-time weaving. Prior versions have known
issues with custom ClassLoader
setup.
Alternatively, consider the use of the Spring-provided generic VM agent, to be specified in Tomcat's launch script (see above). This will make instrumentation available to all deployed web applications, no matter which ClassLoader they happen to run on.
For a more detailed discussion of Tomcat-based weaving setup, check out the section called “Tomcat load-time weaving setup (5.0+)”, which discusses specifics of various Tomcat versions. While the primary focus of that section is on JPA persistence provider setup, the Tomcat setup characteristics apply to general load-time weaving as well.
Recent versions of BEA WebLogic (version 10 and above), Oracle
Containers for Java EE (OC4J 10.1.3.1 and above) and Resin (3.1 and above)
provide a ClassLoader that is capable of local instrumentation.
Spring's native LTW leverages such ClassLoaders to enable AspectJ weaving.
You can enable LTW by simply activating context:load-time-weaver
as described earlier. Specifically, you do not
need to modify the launch script to add
-javaagent:path/to/spring-agent.jar
.
GlassFish provides an instrumentation-capable ClassLoader as well, but only in its EAR environment. For GlassFish web applications, follow the Tomcat setup instructions as outlined above.
More information on AspectJ can be found on the AspectJ website.
The book Eclipse AspectJ by Adrian Colyer et al. (Addison-Wesley, 2005) provides a comprehensive introduction and reference for the AspectJ language.
The book AspectJ in Action by Ramnivas Laddad (Manning, 2003) comes highly recommended; the focus of the book is on AspectJ, but a lot of general AOP themes are explored (in some depth).
The previous chapter described the Spring 2.0 support for AOP using @AspectJ and schema-based aspect definitions. In this chapter we discuss the lower-level Spring AOP APIs and the AOP support used in Spring 1.2 applications. For new applications, we recommend the use of the Spring 2.0 AOP support described in the previous chapter, but when working with existing applications, or when reading books and articles, you may come across Spring 1.2 style examples. Spring 2.0 is fully backwards compatible with Spring 1.2 and everything described in this chapter is fully supported in Spring 2.0.
Let's look at how Spring handles the crucial pointcut concept.
Spring's pointcut model enables pointcut reuse independent of advice types. It's possible to target different advice using the same pointcut.
The org.springframework.aop.Pointcut
interface
is the central interface, used to target advices to particular classes
and methods. The complete interface is shown below:
public interface Pointcut { ClassFilter getClassFilter(); MethodMatcher getMethodMatcher(); }
Splitting the Pointcut
interface into two parts
allows reuse of class and method matching parts, and fine-grained
composition operations (such as performing a "union" with another method
matcher).
The ClassFilter
interface is used to restrict
the pointcut to a given set of target classes. If the
matches()
method always returns true, all target
classes will be matched:
public interface ClassFilter { boolean matches(Class clazz); }
The MethodMatcher
interface is normally more
important. The complete interface is shown below:
public interface MethodMatcher {

    boolean matches(Method m, Class targetClass);

    boolean isRuntime();

    boolean matches(Method m, Class targetClass, Object[] args);
}
The matches(Method, Class)
method is used to
test whether this pointcut will ever match a given method on a target
class. This evaluation can be performed when an AOP proxy is created, to
avoid the need for a test on every method invocation. If the 2-argument
matches method returns true for a given method, and the
isRuntime()
method for the MethodMatcher returns
true, the 3-argument matches method will be invoked on every method
invocation. This enables a pointcut to look at the arguments passed to
the method invocation immediately before the target advice is to
execute.
Most MethodMatchers are static, meaning that their
isRuntime()
method returns false. In this case, the
3-argument matches method will never be invoked.
Tip: If possible, try to make pointcuts static, allowing the AOP framework to cache the results of pointcut evaluation when an AOP proxy is created.
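To make the static/runtime distinction concrete, here is a minimal sketch of a purely static Pointcut implemented directly against the interfaces above; the setter-method matching rule is an arbitrary assumption chosen for illustration:

import java.lang.reflect.Method;

import org.springframework.aop.ClassFilter;
import org.springframework.aop.MethodMatcher;
import org.springframework.aop.Pointcut;

public class SetterPointcut implements Pointcut {

    public ClassFilter getClassFilter() {
        return ClassFilter.TRUE; // match every class
    }

    public MethodMatcher getMethodMatcher() {
        return new MethodMatcher() {
            public boolean matches(Method m, Class targetClass) {
                // static check: can be evaluated once, when the AOP proxy is created
                return m.getName().startsWith("set");
            }
            public boolean isRuntime() {
                return false; // static pointcut: the 3-argument matches() is never called
            }
            public boolean matches(Method m, Class targetClass, Object[] args) {
                throw new UnsupportedOperationException("never called for static pointcuts");
            }
        };
    }
}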
Spring supports operations on pointcuts: notably, union and intersection.
Union means the methods that either pointcut matches.
Intersection means the methods that both pointcuts match.
Union is usually more useful.
Pointcuts can be composed using the static methods in the org.springframework.aop.support.Pointcuts class, or using the ComposablePointcut class in the same package. However, using AspectJ pointcut expressions is usually a simpler approach.
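A brief sketch of such composition, reusing the JdkRegexpMethodPointcut described later in this chapter (the patterns themselves are arbitrary):

import org.springframework.aop.Pointcut;
import org.springframework.aop.support.ComposablePointcut;
import org.springframework.aop.support.JdkRegexpMethodPointcut;
import org.springframework.aop.support.Pointcuts;

public class PointcutComposition {

    public static void main(String[] args) {
        JdkRegexpMethodPointcut setters = new JdkRegexpMethodPointcut();
        setters.setPattern(".*set.*");

        JdkRegexpMethodPointcut getters = new JdkRegexpMethodPointcut();
        getters.setPattern(".*get.*");

        // union: matches the methods that either pointcut matches
        Pointcut settersOrGetters = Pointcuts.union(setters, getters);

        // intersection: matches the methods that both pointcuts match
        Pointcut settersAndGetters = Pointcuts.intersection(setters, getters);

        // the same union can also be built incrementally with ComposablePointcut
        ComposablePointcut composable =
                new ComposablePointcut(setters.getClassFilter(), setters.getMethodMatcher());
        composable.union(getters.getMethodMatcher());
    }
}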
Since 2.0, the most important type of pointcut used by Spring is
org.springframework.aop.aspectj.AspectJExpressionPointcut
.
This is a pointcut that uses an AspectJ supplied library to parse an AspectJ
pointcut expression string.
See the previous chapter for a discussion of supported AspectJ pointcut primitives.
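A minimal sketch of creating such a pointcut programmatically and pairing it with an advice (the expression shown is just an arbitrary example, and the DebugInterceptor stands in for any advice):

import org.springframework.aop.aspectj.AspectJExpressionPointcut;
import org.springframework.aop.interceptor.DebugInterceptor;
import org.springframework.aop.support.DefaultPointcutAdvisor;

public class AspectJPointcutExample {

    public static DefaultPointcutAdvisor createAdvisor() {
        AspectJExpressionPointcut pointcut = new AspectJExpressionPointcut();
        // arbitrary example expression: all methods on classes in the service package
        pointcut.setExpression("execution(* com.mycompany.service.*.*(..))");

        // combine the pointcut with any advice via the generic DefaultPointcutAdvisor
        return new DefaultPointcutAdvisor(pointcut, new DebugInterceptor());
    }
}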
Spring provides several convenient pointcut implementations. Some can be used out of the box; others are intended to be subclassed in application-specific pointcuts.
Static pointcuts are based on method and target class, and cannot take into account the method's arguments. Static pointcuts are sufficient - and best - for most usages. It's possible for Spring to evaluate a static pointcut only once, when a method is first invoked: after that, there is no need to evaluate the pointcut again with each method invocation.
Let's consider some static pointcut implementations included with Spring.
One obvious way to specify static pointcuts is regular
expressions. Several AOP frameworks besides Spring make this
possible.
org.springframework.aop.support.Perl5RegexpMethodPointcut
is a generic regular expression pointcut, using Perl 5 regular
expression syntax. The Perl5RegexpMethodPointcut
class depends on Jakarta ORO for regular expression matching. Spring
also provides the JdkRegexpMethodPointcut
class
that uses the regular expression support in JDK 1.4+.
Using the Perl5RegexpMethodPointcut
class,
you can provide a list of pattern Strings. If any of these is a
match, the pointcut will evaluate to true. (So the result is
effectively the union of these pointcuts.)
The usage is shown below:
<bean id="settersAndAbsquatulatePointcut" class="org.springframework.aop.support.Perl5RegexpMethodPointcut"> <property name="patterns"> <list> <value>.*set.*</value> <value>.*absquatulate</value> </list> </property> </bean>
Spring provides a convenience class,
RegexpMethodPointcutAdvisor
, that allows us to
also reference an Advice (remember that an Advice can be an
interceptor, before advice, throws advice etc.). Behind the scenes,
Spring will use a JdkRegexpMethodPointcut
. Using
RegexpMethodPointcutAdvisor
simplifies wiring,
as the one bean encapsulates both pointcut and advice, as shown
below:
<bean id="settersAndAbsquatulateAdvisor" class="org.springframework.aop.support.RegexpMethodPointcutAdvisor"> <property name="advice"> <ref local="beanNameOfAopAllianceInterceptor"/> </property> <property name="patterns"> <list> <value>.*set.*</value> <value>.*absquatulate</value> </list> </property> </bean>
RegexpMethodPointcutAdvisor can be used with any Advice type.
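For completeness, roughly the same advisor could also be built programmatically; the following sketch assumes the DebugInterceptor used elsewhere in this chapter as the advice:

import org.springframework.aop.interceptor.DebugInterceptor;
import org.springframework.aop.support.RegexpMethodPointcutAdvisor;

public class RegexpAdvisorExample {

    public static RegexpMethodPointcutAdvisor createAdvisor() {
        RegexpMethodPointcutAdvisor advisor = new RegexpMethodPointcutAdvisor();
        advisor.setPatterns(new String[] {".*set.*", ".*absquatulate"});
        advisor.setAdvice(new DebugInterceptor());
        return advisor;
    }
}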
Dynamic pointcuts are costlier to evaluate than static pointcuts. They take into account method arguments, as well as static information. This means that they must be evaluated with every method invocation; the result cannot be cached, as arguments will vary.
The main example is the control flow
pointcut.
Spring control flow pointcuts are conceptually similar to
AspectJ cflow pointcuts, although less
powerful. (There is currently no way to specify that a pointcut
executes below a join point matched by another pointcut.)
A control flow pointcut matches
the current call stack. For example, it might fire if the join point
was invoked by a method in the com.mycompany.web
package, or by the SomeCaller
class. Control flow
pointcuts are specified using the
org.springframework.aop.support.ControlFlowPointcut
class.
Note: Control flow pointcuts are significantly more expensive to evaluate at runtime than even other dynamic pointcuts. In Java 1.4, the cost is about 5 times that of other dynamic pointcuts.
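A minimal sketch of wiring such a pointcut up programmatically follows; SomeCaller and its doProcessing() method are assumptions used purely for illustration, and the DebugInterceptor stands in for any advice:

import org.springframework.aop.interceptor.DebugInterceptor;
import org.springframework.aop.support.ControlFlowPointcut;
import org.springframework.aop.support.DefaultPointcutAdvisor;

public class ControlFlowExample {

    // stand-in for the calling class referred to in the text
    static class SomeCaller {
        void doProcessing() {
        }
    }

    public static DefaultPointcutAdvisor createAdvisor() {
        // matches any join point reached below SomeCaller.doProcessing() on the call stack
        ControlFlowPointcut pointcut = new ControlFlowPointcut(SomeCaller.class, "doProcessing");
        return new DefaultPointcutAdvisor(pointcut, new DebugInterceptor());
    }
}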
Spring provides useful pointcut superclasses to help you to implement your own pointcuts.
Because static pointcuts are most useful, you'll probably subclass StaticMethodMatcherPointcut, as shown below. This requires implementing just one abstract method (although it's possible to override other methods to customize behavior):
class TestStaticPointcut extends StaticMethodMatcherPointcut {

    public boolean matches(Method m, Class targetClass) {
        // return true if custom criteria match; for example, match setter methods only
        return m.getName().startsWith("set");
    }
}
There are also superclasses for dynamic pointcuts.
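For example, a dynamic pointcut could be sketched on top of the DynamicMethodMatcherPointcut convenience superclass as follows; the argument-based matching rule is an arbitrary assumption:

import java.lang.reflect.Method;

import org.springframework.aop.support.DynamicMethodMatcherPointcut;

public class NonNullFirstArgumentPointcut extends DynamicMethodMatcherPointcut {

    // static part: only consider methods that take at least one argument
    public boolean matches(Method m, Class targetClass) {
        return m.getParameterTypes().length > 0;
    }

    // dynamic part: evaluated on every invocation, with access to the actual arguments
    public boolean matches(Method m, Class targetClass, Object[] args) {
        return args[0] != null;
    }
}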
You can use custom pointcuts with any advice type in Spring 1.0 RC2 and above.
Because pointcuts in Spring AOP are Java classes, rather than language features (as in AspectJ) it's possible to declare custom pointcuts, whether static or dynamic. Custom pointcuts in Spring can be arbitrarily complex. However, using the AspectJ pointcut expression language is recommended if possible.
Note: Later versions of Spring may offer support for "semantic pointcuts" as offered by JAC: for example, "all methods that change instance variables in the target object."
Let's now look at how Spring AOP handles advice.
Each advice is a Spring bean. An advice instance can be shared across all advised objects, or unique to each advised object. This corresponds to per-class or per-instance advice.
Per-class advice is used most often. It is appropriate for generic advice such as transaction advisors. These do not depend on the state of the proxied object or add new state; they merely act on the method and arguments.
Per-instance advice is appropriate for introductions, to support mixins. In this case, the advice adds state to the proxied object.
It's possible to use a mix of shared and per-instance advice in the same AOP proxy.
Spring provides several advice types out of the box, and is extensible to support arbitrary advice types. Let us look at the basic concepts and standard advice types.
The most fundamental advice type in Spring is interception around advice.
Spring is compliant with the AOP Alliance interface for around advice using method interception. MethodInterceptors implementing around advice should implement the following interface:
public interface MethodInterceptor extends Interceptor { Object invoke(MethodInvocation invocation) throws Throwable; }
The MethodInvocation
argument to the
invoke()
method exposes the method being invoked;
the target join point; the AOP proxy; and the arguments to the method.
The invoke()
method should return the
invocation's result: the return value of the join point.
A simple MethodInterceptor
implementation
looks as follows:
public class DebugInterceptor implements MethodInterceptor {

    public Object invoke(MethodInvocation invocation) throws Throwable {
        System.out.println("Before: invocation=[" + invocation + "]");
        Object rval = invocation.proceed();
        System.out.println("Invocation returned");
        return rval;
    }
}
Note the call to the MethodInvocation's
proceed()
method. This proceeds down the
interceptor chain towards the join point. Most interceptors will invoke
this method, and return its return value. However, a
MethodInterceptor, like any around advice, can return a different
value or throw an exception rather than invoke the proceed method.
However, you don't want to do this without good reason!
Note: MethodInterceptors offer interoperability with other AOP Alliance-compliant AOP implementations. The other advice types discussed in the remainder of this section implement common AOP concepts, but in a Spring-specific way. While there is an advantage in using the most specific advice type, stick with MethodInterceptor around advice if you are likely to want to run the aspect in another AOP framework. Note that pointcuts are not currently interoperable between frameworks, and the AOP Alliance does not currently define pointcut interfaces.
A simpler advice type is a before
advice. This does not need a
MethodInvocation
object, since it will only be
called before entering the method.
The main advantage of a before advice is that there is no need
to invoke the proceed()
method, and therefore no
possibility of inadvertently failing to proceed down the interceptor
chain.
The MethodBeforeAdvice
interface is shown
below. (Spring's API design would allow for field before advice, although the usual objections apply to field interception and it's unlikely that Spring will ever implement it.)
public interface MethodBeforeAdvice extends BeforeAdvice { void before(Method m, Object[] args, Object target) throws Throwable; }
Note the return type is void
. Before
advice can insert custom behavior before the join point executes, but
cannot change the return value. If a before advice throws an
exception, this will abort further execution of the interceptor chain.
The exception will propagate back up the interceptor chain. If it is
unchecked, or on the signature of the invoked method, it will be
passed directly to the client; otherwise it will be wrapped in an
unchecked exception by the AOP proxy.
An example of a before advice in Spring, which counts all method invocations:
public class CountingBeforeAdvice implements MethodBeforeAdvice {

    private int count;

    public void before(Method m, Object[] args, Object target) throws Throwable {
        ++count;
    }

    public int getCount() {
        return count;
    }
}
Tip: Before advice can be used with any pointcut.
Throws advice is invoked after
the return of the join point if the join point threw an exception.
Spring offers typed throws advice. Note that this means that the
org.springframework.aop.ThrowsAdvice
interface does
not contain any methods: It is a tag interface identifying that the
given object implements one or more typed throws advice methods. These
should be in the form of:
afterThrowing([Method, args, target], subclassOfThrowable)
Only the last argument is required. The method signatures may have either one or four arguments, depending on whether the advice method is interested in the method and arguments. The following classes are examples of throws advice.
The advice below is invoked if a RemoteException
is thrown (including subclasses):
public class RemoteThrowsAdvice implements ThrowsAdvice { public void afterThrowing(RemoteException ex) throws Throwable { // Do something with remote exception } }
The following advice is invoked if a
ServletException
is thrown. Unlike the above
advice, it declares 4 arguments, so that it has access to the invoked
method, method arguments and target object:
public class ServletThrowsAdviceWithArguments implements ThrowsAdvice { public void afterThrowing(Method m, Object[] args, Object target, ServletException ex) { // Do something with all arguments } }
The final example illustrates how these two methods could be
used in a single class, which handles both
RemoteException
and
ServletException
. Any number of throws advice
methods can be combined in a single class.
public static class CombinedThrowsAdvice implements ThrowsAdvice {

    public void afterThrowing(RemoteException ex) throws Throwable {
        // Do something with remote exception
    }

    public void afterThrowing(Method m, Object[] args, Object target, ServletException ex) {
        // Do something with all arguments
    }
}
Note: If a throws-advice method throws an exception itself, it will override the original exception (i.e. change the exception thrown to the user). The overriding exception will typically be a RuntimeException; this is compatible with any method signature. However, if a throws-advice method throws a checked exception, it will have to match the declared exceptions of the target method and is hence to some degree coupled to specific target method signatures. Do not throw an undeclared checked exception that is incompatible with the target method's signature!
Tip: Throws advice can be used with any pointcut.
An after returning advice in Spring must implement the org.springframework.aop.AfterReturningAdvice interface, shown below:
public interface AfterReturningAdvice extends Advice { void afterReturning(Object returnValue, Method m, Object[] args, Object target) throws Throwable; }
An after returning advice has access to the return value (which it cannot modify), the invoked method, the method's arguments and the target.
The following after returning advice counts all successful method invocations that have not thrown exceptions:
public class CountingAfterReturningAdvice implements AfterReturningAdvice {

    private int count;

    public void afterReturning(Object returnValue, Method m, Object[] args, Object target) throws Throwable {
        ++count;
    }

    public int getCount() {
        return count;
    }
}
This advice doesn't change the execution path. If it throws an exception, this will be thrown up the interceptor chain instead of the return value.
Tip: After returning advice can be used with any pointcut.
Spring treats introduction advice as a special kind of interception advice.
Introduction requires an IntroductionAdvisor
,
and an IntroductionInterceptor
, implementing the
following interface:
public interface IntroductionInterceptor extends MethodInterceptor { boolean implementsInterface(Class intf); }
The invoke()
method inherited from the AOP
Alliance MethodInterceptor
interface must implement
the introduction: that is, if the invoked method is on an introduced
interface, the introduction interceptor is responsible for handling
the method call - it cannot invoke proceed()
.
Introduction advice cannot be used with any pointcut, as it
applies only at class, rather than method, level. You can only use
introduction advice with the IntroductionAdvisor
,
which has the following methods:
public interface IntroductionAdvisor extends Advisor, IntroductionInfo {

    ClassFilter getClassFilter();

    void validateInterfaces() throws IllegalArgumentException;
}

public interface IntroductionInfo {

    Class[] getInterfaces();
}
There is no MethodMatcher
, and hence no
Pointcut
, associated with introduction advice. Only
class filtering is logical.
The getInterfaces()
method returns the
interfaces introduced by this advisor.
The validateInterfaces() method is used internally to see whether or not the introduced interfaces can be implemented by the configured IntroductionInterceptor.
Let's look at a simple example from the Spring test suite. Let's suppose we want to introduce the following interface to one or more objects:
public interface Lockable { void lock(); void unlock(); boolean locked(); }
This illustrates a mixin. We
want to be able to cast advised objects to Lockable, whatever their
type, and call lock and unlock methods. If we call the lock() method,
we want all setter methods to throw a
LockedException
. Thus we can add an aspect that
provides the ability to make objects immutable, without them having
any knowledge of it: a good example of AOP.
Firstly, we'll need an
IntroductionInterceptor
that does the heavy
lifting. In this case, we extend the
org.springframework.aop.support.DelegatingIntroductionInterceptor
convenience class. We could implement IntroductionInterceptor
directly, but using
DelegatingIntroductionInterceptor
is best for most
cases.
The DelegatingIntroductionInterceptor
is
designed to delegate an introduction to an actual implementation of
the introduced interface(s), concealing the use of interception to do
so. The delegate can be set to any object using a constructor
argument; the default delegate (when the no-arg constructor is used)
is this. Thus in the example below, the delegate is the
LockMixin
subclass of
DelegatingIntroductionInterceptor
. Given a delegate
(by default itself), a
DelegatingIntroductionInterceptor
instance looks
for all interfaces implemented by the delegate (other than
IntroductionInterceptor), and will support introductions against any
of them. It's possible for subclasses such as
LockMixin
to call the
suppressInterface(Class intf)
method to suppress
interfaces that should not be exposed. However, no matter how many
interfaces an IntroductionInterceptor
is prepared
to support, the IntroductionAdvisor
used will
control which interfaces are actually exposed. An introduced interface
will conceal any implementation of the same interface by the
target.
Thus LockMixin subclasses
DelegatingIntroductionInterceptor
and implements
Lockable itself. The superclass automatically picks up that Lockable
can be supported for introduction, so we don't need to specify that.
We could introduce any number of interfaces in this way.
Note the use of the locked
instance variable.
This effectively adds additional state to that held in the target
object.
public class LockMixin extends DelegatingIntroductionInterceptor implements Lockable {

    private boolean locked;

    public void lock() {
        this.locked = true;
    }

    public void unlock() {
        this.locked = false;
    }

    public boolean locked() {
        return this.locked;
    }

    public Object invoke(MethodInvocation invocation) throws Throwable {
        if (locked() && invocation.getMethod().getName().indexOf("set") == 0) {
            throw new LockedException();
        }
        return super.invoke(invocation);
    }
}
Often it isn't necessary to override the invoke()
method: the
DelegatingIntroductionInterceptor
implementation - which calls the delegate method if the method is
introduced, otherwise proceeds towards the join point - is usually
sufficient. In the present case, we need to add a check: no setter
method can be invoked if in locked mode.
The introduction advisor required is simple. All it needs to do
is hold a distinct LockMixin
instance, and specify
the introduced interfaces - in this case, just
Lockable
. A more complex example might take a
reference to the introduction interceptor (which would be defined as a
prototype): in this case, there's no configuration relevant for a
LockMixin
, so we simply create it using
new
.
public class LockMixinAdvisor extends DefaultIntroductionAdvisor { public LockMixinAdvisor() { super(new LockMixin(), Lockable.class); } }
We can apply this advisor very simply: it requires no
configuration. (However, it is necessary: It's
impossible to use an IntroductionInterceptor
without an IntroductionAdvisor.) As usual with
introductions, the advisor must be per-instance, as it is stateful. We
need a different instance of LockMixinAdvisor
, and
hence LockMixin
, for each advised object. The
advisor comprises part of the advised object's state.
We can apply this advisor programmatically, using the
Advised.addAdvisor()
method, or (the recommended
way) in XML configuration, like any other advisor. All proxy creation
choices discussed below, including "auto proxy creators," correctly
handle introductions and stateful mixins.
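For instance, programmatic application might look like the following sketch; MyTargetBean is a stand-in for whatever object you wish to advise:

import org.springframework.aop.framework.ProxyFactory;

public class LockMixinDemo {

    // stand-in target: any object with setter methods could be used here
    public static class MyTargetBean {
        private String name;
        public void setName(String name) { this.name = name; }
        public String getName() { return this.name; }
    }

    public static void main(String[] args) {
        ProxyFactory factory = new ProxyFactory(new MyTargetBean());
        // a fresh advisor (and hence a fresh LockMixin) per advised object, as the mixin is stateful
        factory.addAdvisor(new LockMixinAdvisor());

        MyTargetBean proxy = (MyTargetBean) factory.getProxy();

        // the proxy can be cast to the introduced interface
        ((Lockable) proxy).lock();
        proxy.setName("won't work");  // throws LockedException, as the mixin is now locked
    }
}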
In Spring, an Advisor is an aspect that contains just a single advice object associated with a pointcut expression.
Apart from the special case of introductions, any advisor can be
used with any advice.
org.springframework.aop.support.DefaultPointcutAdvisor
is the most commonly used advisor class. For example, it can be used with
a MethodInterceptor
, BeforeAdvice
or
ThrowsAdvice
.
It is possible to mix advisor and advice types in Spring in the same AOP proxy. For example, you could use an interception around advice, throws advice and before advice in one proxy configuration: Spring will automatically create the necessary interceptor chain.
If you're using the Spring IoC container (an ApplicationContext or BeanFactory) for your business objects - and you should be! - you will want to use one of Spring's AOP FactoryBeans. (Remember that a factory bean introduces a layer of indirection, enabling it to create objects of a different type.)
Note: The Spring 2.0 AOP support also uses factory beans under the covers.
The basic way to create an AOP proxy in Spring is to use the org.springframework.aop.framework.ProxyFactoryBean. This gives complete control over the pointcuts and advice that will apply, and their ordering. However, there are simpler options that are preferable if you don't need such control.
The ProxyFactoryBean
, like other Spring
FactoryBean
implementations, introduces a level of
indirection. If you define a ProxyFactoryBean
with
name foo
, what objects referencing
foo
see is not the
ProxyFactoryBean
instance itself, but an object
created by the ProxyFactoryBean
's implementation of
the getObject()
method. This method will create an
AOP proxy wrapping a target object.
One of the most important benefits of using a
ProxyFactoryBean
or another IoC-aware class to create
AOP proxies, is that it means that advices and pointcuts can also be
managed by IoC. This is a powerful feature, enabling certain approaches
that are hard to achieve with other AOP frameworks. For example, an
advice may itself reference application objects (besides the target,
which should be available in any AOP framework), benefiting from all the
pluggability provided by Dependency Injection.
In common with most FactoryBean
implementations
provided with Spring, the ProxyFactoryBean
class is
itself a JavaBean. Its properties are used to:
Specify the target you want to proxy.
Specify whether to use CGLIB (see below and also Section 9.5.3, “JDK- and CGLIB-based proxies”).
Some key properties are inherited from
org.springframework.aop.framework.ProxyConfig
(the
superclass for all AOP proxy factories in Spring). These key properties include:
proxyTargetClass
: true
if the
target class is to be proxied, rather than the target class' interfaces.
If this property value is set to true
, then CGLIB proxies
will be created (but see also below the section entitled
Section 9.5.3, “JDK- and CGLIB-based proxies”).
optimize
: controls whether or not aggressive
optimizations are applied to proxies created via CGLIB.
One should not blithely use this setting unless one fully understands
how the relevant AOP proxy handles optimization. This is currently used only
for CGLIB proxies; it has no effect with JDK dynamic proxies.
frozen
: if a proxy configuration is frozen
,
then changes to the configuration are no longer allowed. This is useful both as
a slight optimization and for those cases when you don't want callers to be able
to manipulate the proxy (via the Advised
interface)
after the proxy has been created. The default value of this property is
false
, so changes such as adding additional advice are allowed.
exposeProxy
: determines whether or not the current
proxy should be exposed in a ThreadLocal
so that
it can be accessed by the target. If a target needs to obtain
the proxy and the exposeProxy
property is set to
true
, the target can use the
AopContext.currentProxy()
method.
aopProxyFactory
: the implementation of
AopProxyFactory
to use. Offers a way of
customizing whether to use dynamic proxies, CGLIB or any other proxy
strategy. The default implementation will choose dynamic proxies or
CGLIB appropriately. There should be no need to use this property;
it is intended to allow the addition of new proxy types in Spring 1.1.
Other properties specific to ProxyFactoryBean
include:
proxyInterfaces
: array of String interface
names. If this isn't supplied, a CGLIB proxy for the target class
will be used (but see also below the section entitled
Section 9.5.3, “JDK- and CGLIB-based proxies”).
interceptorNames
: String array of
Advisor
, interceptor or other advice
names to apply. Ordering is significant, on a first come-first served
basis. That is to say that the first interceptor in the list
will be the first to be able to intercept the invocation.
The names are bean names in the current factory, including
bean names from ancestor factories. You can't mention bean
references here since doing so would result in the
ProxyFactoryBean
ignoring the singleton
setting of the advice.
You can append an interceptor name with an asterisk (*). This will result in all advisor beans with names starting with the part before the asterisk being applied. An example of using this feature can be found in Section 9.5.6, “Using 'global' advisors”.
singleton: whether or not the factory should return a single
object, no matter how often the getObject()
method is called. Several FactoryBean
implementations offer such a method. The default value is
true
. If you want to use stateful advice -
for example, for stateful mixins - use prototype advices along
with a singleton value of false
.
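Returning to the exposeProxy property described above, the following sketch shows a target method re-obtaining its own proxy so that advice also applies to the nested call; the AccountService type and its methods are assumptions used for illustration only, and the proxy configuration must have exposeProxy set to true:

import org.springframework.aop.framework.AopContext;

interface AccountService {
    void transfer(String from, String to, double amount);
    void audit(String message);
}

public class AccountServiceImpl implements AccountService {

    public void transfer(String from, String to, double amount) {
        // calling through the proxy (rather than 'this') keeps the nested call advised;
        // AopContext.currentProxy() only works when exposeProxy is true
        AccountService proxy = (AccountService) AopContext.currentProxy();
        proxy.audit("transfer requested");
        // ... actual transfer logic ...
    }

    public void audit(String message) {
        System.out.println(message);
    }
}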
This section serves as the definitive documentation on how the ProxyFactoryBean chooses to create either a JDK-based or a CGLIB-based proxy for a particular target object (that is to be proxied).
If the class of a target object that is to be proxied (hereafter simply
referred to as the target class) doesn't implement any interfaces, then
a CGLIB-based proxy will be created. This is the easiest scenario, because
JDK proxies are interface based, and no interfaces means JDK proxying
isn't even possible. One simply plugs in the target bean, and specifies the
list of interceptors via the interceptorNames
property.
Note that a CGLIB-based proxy will be created even if the
proxyTargetClass
property of the
ProxyFactoryBean
has been set to false
.
(Obviously this makes no sense, and is best removed from the bean
definition because it is at best redundant, and at worst confusing.)
If the target class implements one (or more) interfaces, then the type of
proxy that is created depends on the configuration of the
ProxyFactoryBean
.
If the proxyTargetClass
property of the
ProxyFactoryBean
has been set to true
,
then a CGLIB-based proxy will be created. This makes sense, and is in
keeping with the principle of least surprise. Even if the
proxyInterfaces
property of the
ProxyFactoryBean
has been set to one or more
fully qualified interface names, the fact that the
proxyTargetClass
property is set to
true
will cause
CGLIB-based proxying to be in effect.
If the proxyInterfaces
property of the
ProxyFactoryBean
has been set to one or more
fully qualified interface names, then a JDK-based proxy will be created.
The created proxy will implement all of the interfaces that were specified
in the proxyInterfaces
property; if the target class
happens to implement a whole lot more interfaces than those specified in
the proxyInterfaces
property, that is all well and
good but those additional interfaces will not be implemented by the
returned proxy.
If the proxyInterfaces
property of the
ProxyFactoryBean
has not been
set, but the target class does implement one (or more)
interfaces, then the ProxyFactoryBean
will auto-detect
the fact that the target class does actually implement at least one interface,
and a JDK-based proxy will be created. The interfaces that are actually
proxied will be all of the interfaces that the target
class implements; in effect, this is the same as simply supplying a list
of each and every interface that the target class implements to the
proxyInterfaces
property. However, it is significantly less
work, and less prone to typos.
Let's look at a simple example of ProxyFactoryBean
in action. This example involves:
A target bean that will be proxied. This is the "personTarget" bean definition in the example below.
An Advisor and an Interceptor used to provide advice.
An AOP proxy bean definition specifying the target object (the personTarget bean) and the interfaces to proxy, along with the advices to apply.
<bean id="personTarget" class="com.mycompany.PersonImpl"> <property name="name"><value>Tony</value></property> <property name="age"><value>51</value></property> </bean> <bean id="myAdvisor" class="com.mycompany.MyAdvisor"> <property name="someProperty"><value>Custom string property value</value></property> </bean> <bean id="debugInterceptor" class="org.springframework.aop.interceptor.DebugInterceptor"> </bean> <bean id="person" class="org.springframework.aop.framework.ProxyFactoryBean"> <property name="proxyInterfaces"><value>com.mycompany.Person</value></property> <property name="target"><ref local="personTarget"/></property> <property name="interceptorNames"> <list> <value>myAdvisor</value> <value>debugInterceptor</value> </list> </property> </bean>
Note that the interceptorNames
property takes a
list of String: the bean names of the interceptor or advisors in the
current factory. Advisors, interceptors, before, after returning and
throws advice objects can be used. The ordering of advisors is
significant.
Note: You might be wondering why the list doesn't hold bean references. The reason for this is that if the ProxyFactoryBean's singleton property is set to false, it must be able to return independent proxy instances. If any of the advisors is itself a prototype, an independent instance would need to be returned, so it's necessary to be able to obtain an instance of the prototype from the factory; holding a reference isn't sufficient.
The "person" bean definition above can be used in place of a Person implementation, as follows:
Person person = (Person) factory.getBean("person");
Other beans in the same IoC context can express a strongly typed dependency on it, as with an ordinary Java object:
<bean id="personUser" class="com.mycompany.PersonUser"> <property name="person"><ref local="person" /></property> </bean>
The PersonUser
class in this example would
expose a property of type Person. As far as it's concerned, the AOP
proxy can be used transparently in place of a "real" person
implementation. However, its class would be a dynamic proxy class. It
would be possible to cast it to the Advised
interface
(discussed below).
It's possible to conceal the distinction between target and proxy
using an anonymous inner bean, as follows. Only the
ProxyFactoryBean
definition is different; the advice
is included only for completeness:
<bean id="myAdvisor" class="com.mycompany.MyAdvisor"> <property name="someProperty"><value>Custom string property value</value></property> </bean> <bean id="debugInterceptor" class="org.springframework.aop.interceptor.DebugInterceptor"/> <bean id="person" class="org.springframework.aop.framework.ProxyFactoryBean"> <property name="proxyInterfaces"><value>com.mycompany.Person</value></property> <!-- Use inner bean, not local reference to target --> <property name="target"> <bean class="com.mycompany.PersonImpl"> <property name="name"><value>Tony</value></property> <property name="age"><value>51</value></property> </bean> </property> <property name="interceptorNames"> <list> <value>myAdvisor</value> <value>debugInterceptor</value> </list> </property> </bean>
This has the advantage that there's only one object of type
Person
: useful if we want to prevent users of the
application context from obtaining a reference to the un-advised object, or
need to avoid any ambiguity with Spring IoC
autowiring. There's also arguably an advantage in
that the ProxyFactoryBean definition is self-contained. However, there
are times when being able to obtain the un-advised target from the
factory might actually be an advantage: for
example, in certain test scenarios.
What if you need to proxy a class, rather than one or more interfaces?
Imagine that in our example above, there was no
Person
interface: we needed to advise a class called
Person
that didn't implement any business interface.
In this case, you can configure Spring to use CGLIB proxying, rather
than dynamic proxies. Simply set the proxyTargetClass
property on the ProxyFactoryBean above to true. While it's best to
program to interfaces, rather than classes, the ability to advise
classes that don't implement interfaces can be useful when working with
legacy code. (In general, Spring isn't prescriptive. While it makes it
easy to apply good practices, it avoids forcing a particular
approach.)
If you want to, you can force the use of CGLIB in any case, even if you do have interfaces.
CGLIB proxying works by generating a subclass of the target class at runtime. Spring configures this generated subclass to delegate method calls to the original target: the subclass is used to implement the Decorator pattern, weaving in the advice.
CGLIB proxying should generally be transparent to users. However, there are some issues to consider:
Final
methods can't be advised, as they
can't be overridden.
You'll need the CGLIB 2 binaries on your classpath; dynamic proxies are available with the JDK.
There's little performance difference between CGLIB proxying and dynamic proxies. As of Spring 1.0, dynamic proxies are slightly faster. However, this may change in the future. Performance should not be a decisive consideration in this case.
By appending an asterisk to an interceptor name, all advisors with bean names matching the part before the asterisk will be added to the advisor chain. This can come in handy if you need to add a standard set of 'global' advisors:
<bean id="proxy" class="org.springframework.aop.framework.ProxyFactoryBean"> <property name="target" ref="service"/> <property name="interceptorNames"> <list> <value>global*</value> </list> </property> </bean> <bean id="global_debug" class="org.springframework.aop.interceptor.DebugInterceptor"/> <bean id="global_performance" class="org.springframework.aop.interceptor.PerformanceMonitorInterceptor"/>
Especially when defining transactional proxies, you may end up with many similar proxy definitions. The use of parent and child bean definitions, along with inner bean definitions, can result in much cleaner and more concise proxy definitions.
First a parent, template, bean definition is created for the proxy:
<bean id="txProxyTemplate" abstract="true" class="org.springframework.transaction.interceptor.TransactionProxyFactoryBean"> <property name="transactionManager" ref="transactionManager"/> <property name="transactionAttributes"> <props> <prop key="*">PROPAGATION_REQUIRED</prop> </props> </property> </bean>
This will never be instantiated itself, so may actually be incomplete. Then each proxy which needs to be created is just a child bean definition, which wraps the target of the proxy as an inner bean definition, since the target will never be used on its own anyway.
<bean id="myService" parent="txProxyTemplate"> <property name="target"> <bean class="org.springframework.samples.MyServiceImpl"> </bean> </property> </bean>
It is of course possible to override properties from the parent template, such as in this case, the transaction propagation settings:
<bean id="mySpecialService" parent="txProxyTemplate"> <property name="target"> <bean class="org.springframework.samples.MySpecialServiceImpl"> </bean> </property> <property name="transactionAttributes"> <props> <prop key="get*">PROPAGATION_REQUIRED,readOnly</prop> <prop key="find*">PROPAGATION_REQUIRED,readOnly</prop> <prop key="load*">PROPAGATION_REQUIRED,readOnly</prop> <prop key="store*">PROPAGATION_REQUIRED</prop> </props> </property> </bean>
Note that in the example above, we have explicitly marked the parent bean definition as abstract by using the abstract attribute, as described previously, so that it may not actually ever be instantiated. Application contexts (but not simple bean factories) will by default pre-instantiate all singletons. It is therefore important (at least for singleton beans) that if you have a (parent) bean definition which you intend to use only as a template, and this definition specifies a class, you must make sure to set the abstract attribute to true, otherwise the application context will actually try to pre-instantiate it.
It's easy to create AOP proxies programmatically using Spring. This enables you to use Spring AOP without dependency on Spring IoC.
The following listing shows creation of a proxy for a target object, with one interceptor and one advisor. The interfaces implemented by the target object will automatically be proxied:
ProxyFactory factory = new ProxyFactory(myBusinessInterfaceImpl);
factory.addAdvice(myMethodInterceptor);
factory.addAdvisor(myAdvisor);
MyBusinessInterface tb = (MyBusinessInterface) factory.getProxy();
The first step is to construct an object of type
org.springframework.aop.framework.ProxyFactory
. You can
create this with a target object, as in the above example, or specify the
interfaces to be proxied in an alternate constructor.
You can add interceptors or advisors, and manipulate them for the life of the ProxyFactory. If you add an IntroductionAdvisor, you can cause the proxy to implement additional interfaces.
There are also convenience methods on ProxyFactory (inherited from
AdvisedSupport
) which allow you to add other advice types
such as before and throws advice. AdvisedSupport is the superclass of both
ProxyFactory and ProxyFactoryBean.
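As a brief sketch, the addAdvice() method accepts such advice types directly; this example reuses the CountingBeforeAdvice class shown earlier in this chapter and the same assumed MyBusinessInterface target as the snippet above:

import org.springframework.aop.framework.ProxyFactory;

public class ProgrammaticProxyDemo {

    public static MyBusinessInterface createProxy(MyBusinessInterface target) {
        ProxyFactory factory = new ProxyFactory(target);

        // addAdvice() accepts before advice (and throws advice) as well as interceptors;
        // Spring wraps the advice in an advisor with a pointcut that matches everything
        factory.addAdvice(new CountingBeforeAdvice());

        return (MyBusinessInterface) factory.getProxy();
    }
}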
Tip: Integrating AOP proxy creation with the IoC framework is best practice in most applications. We recommend that you externalize configuration from Java code with AOP, as you do in general.
However you create AOP proxies, you can manipulate them using the
org.springframework.aop.framework.Advised
interface.
Any AOP proxy can be cast to this interface, whichever other interfaces it
implements. This interface includes the following methods:
Advisor[] getAdvisors();

void addAdvice(Advice advice) throws AopConfigException;

void addAdvice(int pos, Advice advice) throws AopConfigException;

void addAdvisor(Advisor advisor) throws AopConfigException;

void addAdvisor(int pos, Advisor advisor) throws AopConfigException;

int indexOf(Advisor advisor);

boolean removeAdvisor(Advisor advisor) throws AopConfigException;

void removeAdvisor(int index) throws AopConfigException;

boolean replaceAdvisor(Advisor a, Advisor b) throws AopConfigException;

boolean isFrozen();
The getAdvisors()
method will return an Advisor
for every advisor, interceptor or other advice type that has been added to
the factory. If you added an Advisor, the returned advisor at this index
will be the object that you added. If you added an interceptor or other
advice type, Spring will have wrapped this in an advisor with a pointcut
that always returns true. Thus if you added a
MethodInterceptor
, the advisor returned for this index
will be a DefaultPointcutAdvisor
returning your
MethodInterceptor
and a pointcut that matches all
classes and methods.
The addAdvisor()
methods can be used to add any
Advisor. Usually the advisor holding pointcut and advice will be the
generic DefaultPointcutAdvisor
, which can be used with
any advice or pointcut (but not for introductions).
By default, it's possible to add or remove advisors or interceptors even once a proxy has been created. The only restriction is that it's impossible to add or remove an introduction advisor, as existing proxies from the factory will not show the interface change. (You can obtain a new proxy from the factory to avoid this problem.)
A simple example of casting an AOP proxy to the
Advised
interface and examining and manipulating its
advice:
Advised advised = (Advised) myObject;
Advisor[] advisors = advised.getAdvisors();
int oldAdvisorCount = advisors.length;
System.out.println(oldAdvisorCount + " advisors");

// Add an advice like an interceptor without a pointcut
// Will match all proxied methods
// Can use for interceptors, before, after returning or throws advice
advised.addAdvice(new DebugInterceptor());

// Add selective advice using a pointcut
advised.addAdvisor(new DefaultPointcutAdvisor(mySpecialPointcut, myAdvice));

assertEquals("Added two advisors", oldAdvisorCount + 2, advised.getAdvisors().length);
Note: It's questionable whether it's advisable (no pun intended) to modify advice on a business object in production, although there are no doubt legitimate usage cases. However, it can be very useful in development: for example, in tests. I have sometimes found it very useful to be able to add test code in the form of an interceptor or other advice, getting inside a method invocation I want to test. (For example, the advice can get inside a transaction created for that method: for example, to run SQL to check that a database was correctly updated, before marking the transaction for roll back.)
Depending on how you created the proxy, you can usually set a
frozen
flag, in which case the
Advised
isFrozen()
method will
return true, and any attempts to modify advice through addition or removal
will result in an AopConfigException
. The ability to
freeze the state of an advised object is useful in some cases, for
example, to prevent calling code removing a security interceptor. It may
also be used in Spring 1.1 to allow aggressive optimization if runtime
advice modification is known not to be required.
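A minimal sketch of the frozen behaviour described above (the GreetingService types are trivial stand-ins used purely for illustration):

import org.springframework.aop.framework.Advised;
import org.springframework.aop.framework.AopConfigException;
import org.springframework.aop.framework.ProxyFactory;
import org.springframework.aop.interceptor.DebugInterceptor;

public class FrozenProxyDemo {

    public interface GreetingService {
        String greet(String name);
    }

    public static class GreetingServiceImpl implements GreetingService {
        public String greet(String name) {
            return "Hello " + name;
        }
    }

    public static void main(String[] args) {
        ProxyFactory factory = new ProxyFactory(new GreetingServiceImpl());
        factory.addAdvice(new DebugInterceptor());
        factory.setFrozen(true);  // no further advice changes allowed

        GreetingService proxy = (GreetingService) factory.getProxy();

        Advised advised = (Advised) proxy;
        System.out.println(advised.isFrozen());  // prints 'true'

        try {
            advised.addAdvice(new DebugInterceptor());
        }
        catch (AopConfigException expected) {
            // modification attempts on a frozen configuration fail
        }
    }
}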
So far we've considered explicit creation of AOP proxies using a
ProxyFactoryBean
or similar factory bean.
Spring also allows us to use "autoproxy" bean definitions, which can automatically proxy selected bean definitions. This is built on Spring "bean post processor" infrastructure, which enables modification of any bean definition as the container loads.
In this model, you set up some special bean definitions in your XML
bean definition file to configure the auto proxy infrastructure. This
allows you just to declare the targets eligible for autoproxying: you
don't need to use ProxyFactoryBean
.
There are two ways to do this:
Using an autoproxy creator that refers to specific beans in the current context.
A special case of autoproxy creation that deserves to be considered separately: autoproxy creation driven by source-level metadata attributes.
The org.springframework.aop.framework.autoproxy
package provides the following standard autoproxy creators.
The BeanNameAutoProxyCreator
class is a
BeanPostProcessor
that automatically creates AOP proxies
for beans with names matching literal values or wildcards.
<bean class="org.springframework.aop.framework.autoproxy.BeanNameAutoProxyCreator"> <property name="beanNames"><value>jdk*,onlyJdk</value></property> <property name="interceptorNames"> <list> <value>myInterceptor</value> </list> </property> </bean>
As with ProxyFactoryBean
, there is an
interceptorNames
property rather than a list of interceptors, to allow
correct behavior for prototype advisors. Named "interceptors" can be
advisors or any advice type.
As with auto proxying in general, the main point of using
BeanNameAutoProxyCreator
is to apply the same
configuration consistently to multiple objects, with minimal
volume of configuration. It is a popular choice for applying
declarative transactions to multiple objects.
Bean definitions whose names match, such as "jdkMyBean" and
"onlyJdk" in the above example, are plain old bean definitions with
the target class. An AOP proxy will be created automatically by the
BeanNameAutoProxyCreator
. The same advice will be
applied to all matching beans. Note that if advisors are used (rather
than the interceptor in the above example), the pointcuts may apply
differently to different beans.
A more general and extremely powerful auto proxy creator is
DefaultAdvisorAutoProxyCreator
. This will
automagically apply eligible advisors in the current context, without
the need to include specific bean names in the autoproxy advisor's
bean definition. It offers the same merit of consistent configuration
and avoidance of duplication as
BeanNameAutoProxyCreator
.
Using this mechanism involves:
Specifying a
DefaultAdvisorAutoProxyCreator
bean
definition.
Specifying any number of Advisors in the same or related contexts. Note that these must be Advisors, not just interceptors or other advices. This is necessary because there must be a pointcut to evaluate, to check the eligibility of each advice to candidate bean definitions.
The DefaultAdvisorAutoProxyCreator
will
automatically evaluate the pointcut contained in each advisor, to see
what (if any) advice it should apply to each business object (such as
"businessObject1" and "businessObject2" in the example).
This means that any number of advisors can be applied automatically to each business object. If no pointcut in any of the advisors matches any method in a business object, the object will not be proxied. As bean definitions are added for new business objects, they will automatically be proxied if necessary.
Autoproxying in general has the advantage of making it impossible for callers or dependencies to obtain an un-advised object. Calling getBean("businessObject1") on this ApplicationContext will return an AOP proxy, not the target business object. (The "inner bean" idiom shown earlier also offers this benefit.)
<bean class="org.springframework.aop.framework.autoproxy.DefaultAdvisorAutoProxyCreator"/> <bean class="org.springframework.transaction.interceptor.TransactionAttributeSourceAdvisor"> <property name="transactionInterceptor" ref="transactionInterceptor"/> </bean> <bean id="customAdvisor" class="com.mycompany.MyAdvisor"/> <bean id="businessObject1" class="com.mycompany.BusinessObject1"> <!-- Properties omitted --> </bean> <bean id="businessObject2" class="com.mycompany.BusinessObject2"/>
The DefaultAdvisorAutoProxyCreator
is very
useful if you want to apply the same advice consistently to many
business objects. Once the infrastructure definitions are in place,
you can simply add new business objects without including specific
proxy configuration. You can also drop in additional aspects very
easily - for example, tracing or performance monitoring aspects - with
minimal change to configuration.
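For example, a simple tracing aspect can be dropped in as one more advisor. The following sketch shows what such an interceptor might look like; the class name and log output are illustrative and not part of Spring. Wrapped in an advisor with a suitable pointcut (for instance a DefaultPointcutAdvisor), it would then be applied automatically to every matching business object.

import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;

// Hypothetical tracing advice; define it as a bean and wrap it in an advisor
// so that the DefaultAdvisorAutoProxyCreator can pick it up.
public class TracingInterceptor implements MethodInterceptor {

    public Object invoke(MethodInvocation invocation) throws Throwable {
        System.out.println("Entering " + invocation.getMethod().getName());
        try {
            return invocation.proceed();
        }
        finally {
            System.out.println("Leaving " + invocation.getMethod().getName());
        }
    }
}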
The DefaultAdvisorAutoProxyCreator offers support for filtering
(using a naming convention so that only certain advisors are
evaluated, allowing use of multiple, differently configured,
AdvisorAutoProxyCreators in the same factory) and ordering. Advisors
can implement the org.springframework.core.Ordered
interface to ensure correct ordering if this is an issue. The
TransactionAttributeSourceAdvisor used in the above example has a
configurable order value; the default setting is unordered.
AbstractAdvisorAutoProxyCreator is the superclass of
DefaultAdvisorAutoProxyCreator. You can create your own autoproxy
creators by subclassing this class, in the unlikely event that advisor
definitions offer insufficient customization of the framework's
behavior.
A particularly important type of autoproxying is driven by
metadata. This produces a similar programming model to .NET
ServicedComponents
. Instead of using XML deployment
descriptors as in EJB, configuration for transaction management and
other enterprise services is held in source-level attributes.
In this case, you use the
DefaultAdvisorAutoProxyCreator
, in combination with
Advisors that understand metadata attributes. The metadata specifics are
held in the pointcut part of the candidate advisors, rather than in the
autoproxy creation class itself.
This is really a special case of the
DefaultAdvisorAutoProxyCreator
, but deserves
consideration on its own. (The metadata-aware code is in the pointcuts
contained in the advisors, not the AOP framework itself.)
The /attributes
directory of the JPetStore
sample application shows the use of attribute-driven autoproxying. In
this case, there's no need to use the
TransactionProxyFactoryBean
. Simply defining
transactional attributes on business objects is sufficient, because of
the use of metadata-aware pointcuts. The bean definitions include the
following code, in /WEB-INF/declarativeServices.xml
.
Note that this is generic, and can be used outside the JPetStore:
<bean class="org.springframework.aop.framework.autoproxy.DefaultAdvisorAutoProxyCreator"/> <bean class="org.springframework.transaction.interceptor.TransactionAttributeSourceAdvisor"> <property name="transactionInterceptor" ref="transactionInterceptor"/> </bean> <bean id="transactionInterceptor" class="org.springframework.transaction.interceptor.TransactionInterceptor"> <property name="transactionManager" ref="transactionManager"/> <property name="transactionAttributeSource"> <bean class="org.springframework.transaction.interceptor.AttributesTransactionAttributeSource"> <property name="attributes" ref="attributes"/> </bean> </property> </bean> <bean id="attributes" class="org.springframework.metadata.commons.CommonsAttributes"/>
The DefaultAdvisorAutoProxyCreator
bean
definition (the name is not significant, hence it can even be omitted)
will pick up all eligible pointcuts in the current application context.
In this case, the "transactionAdvisor" bean definition, of type
TransactionAttributeSourceAdvisor
, will apply to
classes or methods carrying a transaction attribute. The
TransactionAttributeSourceAdvisor depends on a TransactionInterceptor,
via constructor dependency. The example resolves this via autowiring.
The AttributesTransactionAttributeSource
depends on
an implementation of the
org.springframework.metadata.Attributes
interface. In
this fragment, the "attributes" bean satisfies this, using the Jakarta
Commons Attributes API to obtain attribute information. (The application
code must have been compiled using the Commons Attributes compilation
task.)
The /annotation
directory of the JPetStore
sample application contains an analogous example for auto-proxying
driven by JDK 1.5+ annotations. The following configuration enables
automatic detection of Spring's Transactional
annotation, leading to implicit proxies for beans containing that
annotation:
<bean class="org.springframework.aop.framework.autoproxy.DefaultAdvisorAutoProxyCreator"/> <bean class="org.springframework.transaction.interceptor.TransactionAttributeSourceAdvisor"> <property name="transactionInterceptor" ref="transactionInterceptor"/> </bean> <bean id="transactionInterceptor" class="org.springframework.transaction.interceptor.TransactionInterceptor"> <property name="transactionManager" ref="transactionManager"/> <property name="transactionAttributeSource"> <bean class="org.springframework.transaction.annotation.AnnotationTransactionAttributeSource"/> </property> </bean>
The TransactionInterceptor
defined here depends
on a PlatformTransactionManager
definition, which is
not included in this generic file (although it could be) because it will
be specific to the application's transaction requirements (typically
JTA, as in this example, or Hibernate, JDO or JDBC):
<bean id="transactionManager" class="org.springframework.transaction.jta.JtaTransactionManager"/>
Tip: If you require only declarative transaction management, using these generic XML definitions will result in Spring automatically proxying all classes or methods with transaction attributes. You won't need to work directly with AOP, and the programming model is similar to that of .NET ServicedComponents.
This mechanism is extensible. It's possible to do autoproxying based on custom attributes. You need to:
Define your custom attribute.
Specify an Advisor with the necessary advice, including a pointcut that is triggered by the presence of the custom attribute on a class or method. You may be able to use an existing advice, merely implementing a static pointcut that picks up the custom attribute.
It's possible for such advisors to be unique to each advised class
(for example, mixins): they simply need to be defined as prototype,
rather than singleton, bean definitions. For example, the
LockMixin
introduction interceptor from the Spring
test suite, shown above, could be used in conjunction with an
attribute-driven pointcut to target a mixin, as shown here. We use the
generic DefaultPointcutAdvisor
, configured using
JavaBean properties:
<bean id="lockMixin" class="org.springframework.aop.LockMixin" scope="prototype"/> <bean id="lockableAdvisor" class="org.springframework.aop.support.DefaultPointcutAdvisor" scope="prototype"> <property name="pointcut" ref="myAttributeAwarePointcut"/> <property name="advice" ref="lockMixin"/> </bean> <bean id="anyBean" class="anyclass" ...
If the attribute aware pointcut matches any methods in the
anyBean
or other bean definitions, the mixin will be
applied. Note that both lockMixin
and
lockableAdvisor
definitions are prototypes. The
myAttributeAwarePointcut
pointcut can be a singleton
definition, as it doesn't hold state for individual advised
objects.
Spring offers the concept of a TargetSource,
expressed in the org.springframework.aop.TargetSource
interface. This interface is responsible for returning the "target object"
implementing the join point. The TargetSource
implementation is asked for a target instance each time the AOP proxy
handles a method invocation.
Developers using Spring AOP don't normally need to work directly with TargetSources, but this provides a powerful means of supporting pooling, hot swappable and other sophisticated targets. For example, a pooling TargetSource can return a different target instance for each invocation, using a pool to manage instances.
If you do not specify a TargetSource, a default implementation is used that wraps a local object. The same target is returned for each invocation (as you would expect).
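To make the contract concrete, here is a minimal sketch of a custom TargetSource that always returns a fixed target instance, which is roughly what the default behavior described above amounts to. The class and constructor are illustrative only:

import org.springframework.aop.TargetSource;

// Illustrative TargetSource that always hands back the same target object
public class FixedTargetSource implements TargetSource {

    private final Object target;

    public FixedTargetSource(Object target) {
        this.target = target;
    }

    public Class<?> getTargetClass() {
        return this.target.getClass();
    }

    public boolean isStatic() {
        // the target never changes, so the AOP framework may cache it
        return true;
    }

    public Object getTarget() {
        // called by the AOP proxy when handling a method invocation
        return this.target;
    }

    public void releaseTarget(Object target) {
        // nothing to release for a plain object
    }
}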
Let's look at the standard target sources provided with Spring, and how you can use them.
Tip: When using a custom target source, your target will usually need to be a prototype rather than a singleton bean definition. This allows Spring to create a new target instance when required.
The
org.springframework.aop.target.HotSwappableTargetSource
exists to allow the target of an AOP proxy to be switched while allowing
callers to keep their references to it.
Changing the target source's target takes effect immediately. The
HotSwappableTargetSource
is threadsafe.
You can change the target via the swap()
method
on HotSwappableTargetSource as follows:
HotSwappableTargetSource swapper =
(HotSwappableTargetSource) beanFactory.getBean("swapper");
Object oldTarget = swapper.swap(newTarget);
The XML definitions required look as follows:
<bean id="initialTarget" class="mycompany.OldTarget"/> <bean id="swapper" class="org.springframework.aop.target.HotSwappableTargetSource"> <constructor-arg ref="initialTarget"/> </bean> <bean id="swappable" class="org.springframework.aop.framework.ProxyFactoryBean"> <property name="targetSource" ref="swapper"/> </bean>
The above swap()
call changes the target of the
swappable bean. Clients who hold a reference to that bean will be
unaware of the change, but will immediately start hitting the new
target.
Although this example doesn't add any advice - and it's not
necessary to add advice to use a TargetSource
- of
course any TargetSource
can be used in conjunction
with arbitrary advice.
Using a pooling target source provides a similar programming model to stateless session EJBs, in which a pool of identical instances is maintained, with method invocations going to free objects in the pool.
A crucial difference between Spring pooling and SLSB pooling is that Spring pooling can be applied to any POJO. As with Spring in general, this service can be applied in a non-invasive way.
Spring provides out-of-the-box support for Jakarta Commons Pool
1.3, which provides a fairly efficient pooling implementation. You'll
need the commons-pool Jar on your application's classpath to use this
feature. It's also possible to subclass
org.springframework.aop.target.AbstractPoolingTargetSource
to support any other pooling API.
Sample configuration is shown below:
<bean id="businessObjectTarget" class="com.mycompany.MyBusinessObject" scope="prototype"> ... properties omitted </bean> <bean id="poolTargetSource" class="org.springframework.aop.target.CommonsPoolTargetSource"> <property name="targetBeanName" value="businessObjectTarget"/> <property name="maxSize" value="25"/> </bean> <bean id="businessObject" class="org.springframework.aop.framework.ProxyFactoryBean"> <property name="targetSource" ref="poolTargetSource"/> <property name="interceptorNames" value="myInterceptor"/> </bean>
Note that the target object - "businessObjectTarget" in the
example - must be a prototype. This allows the
PoolingTargetSource
implementation to create new
instances of the target to grow the pool as necessary. See the javadoc
for AbstractPoolingTargetSource
and the concrete
subclass you wish to use for information about its properties: "maxSize"
is the most basic, and always guaranteed to be present.
In this case, "myInterceptor" is the name of an interceptor that would need to be defined in the same IoC context. However, it isn't necessary to specify interceptors to use pooling. If you want only pooling, and no other advice, don't set the interceptorNames property at all.
It's possible to configure Spring so as to be able to cast any
pooled object to the
org.springframework.aop.target.PoolingConfig
interface, which exposes information about the configuration and current
size of the pool through an introduction. You'll need to define an
advisor like this:
<bean id="poolConfigAdvisor" class="org.springframework.beans.factory.config.MethodInvokingFactoryBean"> <property name="targetObject" ref="poolTargetSource"/> <property name="targetMethod" value="getPoolingConfigMixin"/> </bean>
This advisor is obtained by calling a convenience method on the
AbstractPoolingTargetSource
class, hence the use of
MethodInvokingFactoryBean. This advisor's name ("poolConfigAdvisor"
here) must be in the list of interceptor names in the ProxyFactoryBean
exposing the pooled object.
The cast will look as follows:
PoolingConfig conf = (PoolingConfig) beanFactory.getBean("businessObject");
System.out.println("Max pool size is " + conf.getMaxSize());
Note: Pooling stateless service objects is not usually necessary. We don't believe it should be the default choice, as most stateless objects are naturally thread safe, and instance pooling is problematic if resources are cached.
Simpler pooling is available using autoproxying. It's possible to set the TargetSources used by any autoproxy creator.
Setting up a "prototype" target source is similar to a pooling TargetSource. In this case, a new instance of the target will be created on every method invocation. Although the cost of creating a new object isn't high in a modern JVM, the cost of wiring up the new object (satisfying its IoC dependencies) may be more expensive. Thus you shouldn't use this approach without very good reason.
To do this, you could modify the
poolTargetSource
definition shown above as follows.
(I've also changed the name, for clarity.)
<bean id="prototypeTargetSource" class="org.springframework.aop.target.PrototypeTargetSource"> <property name="targetBeanName" ref="businessObjectTarget"/> </bean>
There's only one property: the name of the target bean. Inheritance is used in the TargetSource implementations to ensure consistent naming. As with the pooling target source, the target bean must be a prototype bean definition.
ThreadLocal target sources are useful if you need an object to be
created for each incoming request (per thread, that is). A
ThreadLocal is a standard JDK facility for transparently storing a
resource alongside a thread. Setting up a
ThreadLocalTargetSource is pretty much the same as was explained for
the other types of target source:
<bean id="threadlocalTargetSource" class="org.springframework.aop.target.ThreadLocalTargetSource"> <property name="targetBeanName" value="businessObjectTarget"/> </bean>
Note: ThreadLocals come with serious issues (potentially resulting in memory leaks) when used incorrectly in multi-threaded and multi-classloader environments. You should always consider wrapping a ThreadLocal in some other class and never use the ThreadLocal itself directly (except, of course, in the wrapper class).
Spring AOP is designed to be extensible. While the interception implementation strategy is presently used internally, it is possible to support arbitrary advice types in addition to the out-of-the-box interception around advice, before, throws advice and after returning advice.
The org.springframework.aop.framework.adapter
package is an SPI package allowing support for new custom advice types to
be added without changing the core framework. The only constraint on a
custom Advice
type is that it must implement the
org.aopalliance.aop.Advice
tag interface.
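As a rough sketch of how the SPI fits together (check the exact method names against the Javadocs referenced below), a custom advice type is typically paired with an adapter that converts it into a MethodInterceptor for the proxy's interception chain. Everything named Audit* here is hypothetical:

import org.aopalliance.aop.Advice;
import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;
import org.springframework.aop.Advisor;
import org.springframework.aop.framework.adapter.AdvisorAdapter;

// Hypothetical custom advice type: a simple extension of the Advice tag interface
public interface AuditAdvice extends Advice {
    void audit(String methodName);
}

// Adapter that teaches the AOP framework how to apply AuditAdvice
public class AuditAdviceAdapter implements AdvisorAdapter {

    public boolean supportsAdvice(Advice advice) {
        return advice instanceof AuditAdvice;
    }

    public MethodInterceptor getInterceptor(Advisor advisor) {
        final AuditAdvice advice = (AuditAdvice) advisor.getAdvice();
        return new MethodInterceptor() {
            public Object invoke(MethodInvocation invocation) throws Throwable {
                advice.audit(invocation.getMethod().getName());
                return invocation.proceed();
            }
        };
    }
}

An adapter like this must also be registered with the framework's advisor adapter registry; the package Javadocs describe the registration options.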
Please refer to the
org.springframework.aop.framework.adapter
package's
Javadocs for further information.
Please refer to the Spring sample applications for further examples of Spring AOP:
The JPetStore's default configuration illustrates the use of the
TransactionProxyFactoryBean
for declarative transaction
management.
The /attributes
directory of the JPetStore
illustrates the use of attribute-driven declarative transaction management.
The Spring team considers developer testing to be an absolutely integral part of enterprise software development. A thorough treatment of testing in the enterprise is beyond the scope of this chapter; rather, the focus here is on the value-add that the adoption of the IoC principle can bring to unit testing and on the benefits that the Spring Framework provides in integration testing.
One of the main benefits of Dependency Injection is that your code
should really depend far less on the container than in traditional J2EE
development. The POJOs that make up your application should be testable
in JUnit or TestNG tests, with objects simply instantiated using the
new
operator, without Spring or any other
container. You can use mock
objects (in conjunction with many other valuable testing
techniques) to test your code in isolation. If you follow the architecture
recommendations around Spring you will find that the resulting clean
layering and componentization of your codebase will naturally facilitate
easier unit testing. For example, you will be able to
test service layer objects by stubbing or mocking DAO or Repository
interfaces, without any need to access persistent data while running unit
tests.
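As a sketch of this style, the following test exercises a service object entirely in memory by handing it a stubbed repository. The AccountService, AccountRepository, InMemoryAccountRepository, and Account types are invented for illustration:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class AccountServiceTests {

    @Test
    public void appliesBonusToAccount() {
        // a stubbed repository stands in for real data access
        AccountRepository repository = new InMemoryAccountRepository();
        repository.save(new Account("1234", 100.0));

        // plain 'new' - no Spring container is required for a unit test
        AccountService service = new AccountService(repository);
        service.applyBonus("1234", 25.0);

        assertEquals(125.0, repository.find("1234").getBalance(), 0.001);
    }
}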
True unit tests typically will run extremely quickly, as there is no runtime infrastructure to set up, whether application server, database, ORM tool, or whatever. Thus emphasizing true unit tests as part of your development methodology will boost your productivity. The upshot of this is that you often do not need this section of the testing chapter to help you write effective unit tests for your IoC-based applications. For certain unit testing scenarios, however, the Spring Framework provides the following mock objects and testing support classes.
The org.springframework.mock.jndi
package
contains an implementation of the JNDI SPI, which is useful for
setting up a simple JNDI environment for test suites or stand-alone
applications. If, for example, JDBC DataSource
s
get bound to the same JNDI names in test code as within a J2EE
container, both application code and configuration can be reused in
testing scenarios without modification.
The org.springframework.mock.web
package
contains a comprehensive set of Servlet API mock objects, targeted at
usage with Spring's Web MVC framework, which are useful for testing
web contexts and controllers. These mock objects are generally more
convenient to use than dynamic mock objects (e.g., EasyMock) or existing Servlet
API mock objects (e.g., MockObjects).
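As a small illustration of the style (the request URI and parameter are made up), a mock request can be constructed and populated directly, with no Servlet container involved:

import org.springframework.mock.web.MockHttpServletRequest;
import org.springframework.mock.web.MockHttpSession;

// build a GET request entirely in memory
MockHttpServletRequest request = new MockHttpServletRequest("GET", "/catalog/viewItem.do");
request.addParameter("itemId", "EST-1");
request.setSession(new MockHttpSession());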
The org.springframework.test.util
package
contains ReflectionTestUtils
, which is a
collection of reflection-based utility methods for use in unit and
integration testing scenarios in which the developer would benefit
from being able to set a non-public
field or invoke
a non-public
setter method when testing application
code involving, for example:
ORM frameworks such as JPA and Hibernate which condone the
usage of private
or
protected
field access as opposed to
public
setter methods for properties in a
domain entity
Spring's support for annotations such as
@Autowired
and
@Resource
which provides dependency
injection for private
or
protected
fields, setter methods, and
configuration methods
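For instance, a persistent entity with a private field and no public setter can be prepared and inspected as follows; the Title class and its id field are used purely for illustration:

import org.springframework.test.util.ReflectionTestUtils;

// set a private field that has no public setter
Title title = new Title();
ReflectionTestUtils.setField(title, "id", new Long(42));

// read it back without exposing a getter
Long id = (Long) ReflectionTestUtils.getField(title, "id");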
The org.springframework.test.web
package
contains ModelAndViewAssert
, which can be
used in combination with any testing framework (e.g., JUnit 4+,
TestNG, etc.) for unit tests dealing with Spring MVC
ModelAndView
objects.
Unit testing Spring MVC Controllers: To unit test your Spring MVC Controllers, use ModelAndViewAssert together with the Servlet API mock objects from the org.springframework.mock.web package (such as MockHttpServletRequest and MockHttpServletResponse).
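A minimal sketch of such a controller test follows. The HomeController class, its handleRequest method, and the "home" view name are hypothetical:

import static org.springframework.test.web.ModelAndViewAssert.assertViewName;

import org.junit.Test;
import org.springframework.mock.web.MockHttpServletRequest;
import org.springframework.mock.web.MockHttpServletResponse;
import org.springframework.web.servlet.ModelAndView;

public class HomeControllerTests {

    @Test
    public void selectsHomeView() throws Exception {
        HomeController controller = new HomeController();
        // Servlet API mock objects stand in for a real container
        MockHttpServletRequest request = new MockHttpServletRequest("GET", "/home");
        MockHttpServletResponse response = new MockHttpServletResponse();

        ModelAndView mav = controller.handleRequest(request, response);

        // verify the selected view without rendering it
        assertViewName(mav, "home");
    }
}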
It is important to be able to perform some integration testing without requiring deployment to your application server or connecting to other enterprise infrastructure. This will enable you to test things such as:
The correct wiring of your Spring IoC container contexts.
Data access using JDBC or an ORM tool. This would include such things as the correctness of SQL statements, Hibernate queries, JPA entity mappings, etc.
The Spring Framework provides first class support for integration
testing in the org.springframework.test-VERSION.jar
library (where VERSION
is the release version). In this library,
you will find the org.springframework.test
package
which contains valuable classes for integration testing using a Spring
container, while at the same time not being reliant on an application
server or other deployment environment. Such tests will be slower to run
than unit tests but much faster to run than the equivalent Cactus tests
or remote tests relying on deployment to an application server.
Since Spring 2.5, unit and integration testing support is provided in the form of the annotation-driven Spring TestContext Framework. The TestContext Framework is agnostic of the actual testing framework in use, thus allowing instrumentation of tests in various environments including JUnit 3.8, JUnit 4.5, TestNG, etc.
Legacy JUnit 3.8 class hierarchy is deprecated: As of Spring 3.0, the legacy JUnit 3.8 base class hierarchy (e.g., AbstractDependencyInjectionSpringContextTests and its subclasses) is deprecated. New unit and integration tests should use the annotation-driven Spring TestContext Framework instead.
The following bullet points highlight the fundamental goals of Spring's integration testing support:
Spring IoC container caching between test execution.
Dependency Injection of test fixture instances (this is nice).
Transaction management appropriate to integration testing (this is even nicer).
Spring-specific support classes that are really useful when writing integration tests.
In the next few sections each of the above goals is discussed in greater detail, and at the end of each section you will find a direct link to implementation and configuration details pertaining to that particular goal.
The Spring TestContext Framework provides consistent
loading of Spring ApplicationContext
s and
caching of those contexts. Support for the caching of loaded contexts
is important, because if you are working on a large project, startup
time may become an issue - not because of the overhead of Spring
itself, but because the objects instantiated by the Spring container
will themselves take time to instantiate. For example, a project with
50-100 Hibernate mapping files might take 10-20 seconds to load the
mapping files, and incurring that cost before running every single
test in every single test fixture will lead to slower overall test
runs that could reduce productivity.
Test classes provide an array containing the
resource locations of XML configuration metadata - typically on the
classpath - used to configure the application. This will be the same,
or nearly the same, as the list of configuration locations specified
in web.xml
or other deployment
configuration.
By default, once loaded, the configured
ApplicationContext
will be reused for
each test. Thus the setup cost will be incurred only once (per test
fixture), and subsequent test execution will be much faster. In the
unlikely case that a test may 'dirty' the application context,
requiring reloading - for example, by changing a bean definition or
the state of an application object - Spring's testing support provides
a mechanism to cause the test fixture to reload the configurations and
rebuild the application context before executing the next test.
See: context management and caching with the TestContext Framework.
When the TestContext framework loads your
application context, it can optionally configure instances of your
test classes via Dependency Injection. This provides a convenient
mechanism for setting up test fixtures using pre-configured beans from
your application context. A strong benefit here is that you can reuse
application contexts across various testing scenarios (e.g., for
configuring Spring-managed object graphs, transactional proxies,
DataSource
s, etc.), thus avoiding the need to
duplicate complex test fixture set up for individual test
cases.
As an example, consider the scenario where we have a class,
HibernateTitleDao
, that performs data access
logic for say, the Title
domain object. We want
to write integration tests that test all of the following
areas:
The Spring configuration: basically, is everything related
to the configuration of the
HibernateTitleDao
bean correct and
present?
The Hibernate mapping file configuration: is everything mapped correctly and are the correct lazy-loading settings in place?
The logic of the HibernateTitleDao
:
does the configured instance of this class perform as
anticipated?
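A test covering these concerns might look roughly as follows; the loadTitle method and the Title type are assumed for illustration, and the context is loaded from the default location derived from the test class name:

import static org.junit.Assert.assertNotNull;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
// defaults to "HibernateTitleDaoTests-context.xml" in the same package
@ContextConfiguration
public class HibernateTitleDaoTests {

    // the DAO under test is injected from the test's application context
    @Autowired
    private HibernateTitleDao titleDao;

    @Test
    public void loadsExistingTitle() {
        Title title = titleDao.loadTitle(new Long(10));
        assertNotNull(title);
    }
}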
See: dependency injection of test fixtures with the TestContext Framework.
One common issue in tests that access a real database is their effect on the state of the persistence store. Even when you're using a development database, changes to the state may affect future tests. Also, many operations - such as inserting or modifying persistent data - cannot be performed (or verified) outside a transaction.
The TestContext framework addresses this issue. By default,
the framework will create and roll back a transaction for each
test. You simply write code that can assume the existence of a
transaction. If you call transactionally proxied objects in your
tests, they will behave correctly, according to their transactional
semantics. In addition, if test methods delete the contents of
selected tables while running within a transaction, the transaction
will roll back by default, and the database will return to its state
prior to execution of the test. Transactional support is provided to
your test class via a
PlatformTransactionManager
bean defined in the
test's application context.
If you want a transaction to commit - unusual, but occasionally
useful when you want a particular test to populate or modify the
database - the TestContext framework can be
instructed to cause the transaction to commit instead of roll back
via the
@TransactionConfiguration
and
@Rollback
annotations.
See: transaction management with the TestContext Framework.
The Spring TestContext Framework provides
several abstract
support classes that can simplify
writing integration tests. These base test classes provide well
defined hooks into the testing framework as well as convenient
instance variables and methods, allowing access to such things
as:
The ApplicationContext
: useful for
performing explicit bean lookups or testing the state of the
context as a whole.
A SimpleJdbcTemplate
: useful for querying to
confirm state. For example, you might query before and after
testing application code that creates an object and persists it
using an ORM tool, to verify that the data appears in the
database. (Spring will ensure that the query runs in the scope of
the same transaction.) You will need to tell your ORM tool to
'flush' its changes for this to work correctly, for example using
the flush()
method on Hibernate's
Session
interface.
In addition, you may find it desirable to provide your own custom, application-wide superclass for integration tests that provides further useful instance variables and methods specific to your project.
See: support classes for the TestContext Framework.
The org.springframework.test.jdbc
package
contains SimpleJdbcTestUtils
, which is a
Java-5-based collection of JDBC related utility functions intended to
simplify standard database testing scenarios. Note that AbstractTransactionalJUnit38SpringContextTests
,
AbstractTransactionalJUnit4SpringContextTests
,
and AbstractTransactionalTestNGSpringContextTests
provide convenience methods which delegate to
SimpleJdbcTestUtils
internally.
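For example, a test that extends one of these classes can verify the outcome of a persistence operation through the inherited convenience methods (the table name is illustrative):

// inherited from AbstractTransactionalJUnit4SpringContextTests,
// which delegates to SimpleJdbcTestUtils internally
int titleCount = countRowsInTable("t_title");
deleteFromTables("t_title");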
The Spring Framework provides the following set of Spring-specific annotations that you can use in your unit and integration tests in conjunction with the TestContext framework. Refer to the respective JavaDoc for further information, including default attribute values, etc.
@ContextConfiguration
Defines class-level metadata which is used to determine how
to load and configure an
ApplicationContext
. Specifically,
@ContextConfiguration defines the application context resource
locations
to load as well as the
ContextLoader
strategy to use for
loading the context.
@ContextConfiguration(locations={"example/test-context.xml"}, loader=CustomContextLoader.class) public class CustomConfiguredApplicationContextTests { // class body... }
Note: @ContextConfiguration
provides support for inherited resource
locations by default. See the Context management and
caching section and JavaDoc for an example and further
details.
@DirtiesContext
The presence of this annotation on a test method indicates that the underlying Spring container is 'dirtied' during the execution of the test method, and thus must be rebuilt after the test method finishes execution (regardless of whether the test passed or not).
@DirtiesContext
@Test
public void testProcessWhichDirtiesAppCtx() {
    // some logic that results in the Spring container being dirtied
}
@TestExecutionListeners
Defines class-level metadata for configuring which
TestExecutionListener
s should be
registered with a TestContextManager
.
Typically, @TestExecutionListeners
will be used in conjunction with
@ContextConfiguration
.
@ContextConfiguration
@TestExecutionListeners({CustomTestExecutionListener.class, AnotherTestExecutionListener.class})
public class CustomTestExecutionListenerTests {
    // class body...
}
Note: @TestExecutionListeners
provides support for inherited listeners by
default. See the JavaDoc for an example and further
details.
@TransactionConfiguration
Defines class-level metadata for configuring transactional
tests. Specifically, the bean name of the
PlatformTransactionManager
that is
to be used to drive transactions can be explicitly configured if
the bean name of the desired PlatformTransactionManager is not
"transactionManager". In addition, the
defaultRollback
flag can optionally be changed
to false
. Typically,
@TransactionConfiguration
will be
used in conjunction with
@ContextConfiguration
.
@ContextConfiguration
@TransactionConfiguration(transactionManager="txMgr", defaultRollback=false)
public class CustomConfiguredTransactionalTests {
    // class body...
}
@Rollback
Indicates whether or not the transaction for the annotated
test method should be rolled back after the
test method has completed. If true
, the
transaction will be rolled back; otherwise, the transaction will be
committed. Use @Rollback
to override
the default rollback flag configured at the class level.
@Rollback(false)
@Test
public void testProcessWithoutRollback() {
    // ...
}
@BeforeTransaction
Indicates that the annotated public void
method should be executed before a
transaction is started for test methods configured to run within a
transaction via the @Transactional
annotation.
@BeforeTransaction
public void beforeTransaction() {
    // logic to be executed before a transaction is started
}
@AfterTransaction
Indicates that the annotated public void
method should be executed after a transaction
has been ended for test methods configured to run within a
transaction via the @Transactional
annotation.
@AfterTransaction
public void afterTransaction() {
    // logic to be executed after a transaction has ended
}
@NotTransactional
The presence of this annotation indicates that the annotated test method must not execute in a transactional context.
@NotTransactional
@Test
public void testProcessWithoutTransaction() {
    // ...
}
The following annotations are only supported when used in conjunction with JUnit (i.e., with the SpringJUnit4ClassRunner or the JUnit 3.8 and JUnit 4.5 support classes).
@IfProfileValue
Indicates that the annotated test is enabled for a specific
testing environment. If the configured
ProfileValueSource
returns a matching
value
for the provided name
,
the test will be enabled. This annotation can be applied to an
entire class or individual methods.
@IfProfileValue(name="java.vendor", value="Sun Microsystems Inc.") @Test public void testProcessWhichRunsOnlyOnSunJvm() { // some logic that should run only on Java VMs from Sun Microsystems }
Alternatively @IfProfileValue
may be configured with a list of values
(with
OR semantics) to achieve TestNG-like support
for test groups in a JUnit environment.
Consider the following example:
@IfProfileValue(name="test-groups", values={"unit-tests", "integration-tests"}) @Test public void testProcessWhichRunsForUnitOrIntegrationTestGroups() { // some logic that should run only for unit and integration test groups }
@ProfileValueSourceConfiguration
Class-level annotation which is used to specify what type of
ProfileValueSource
to use when retrieving
profile values configured via the
@IfProfileValue
annotation. If
@ProfileValueSourceConfiguration
is
not declared for a test,
SystemProfileValueSource
will be used by
default.
@ProfileValueSourceConfiguration(CustomProfileValueSource.class)
public class CustomProfileValueSourceTests {
    // class body...
}
@ExpectedException
Indicates that the annotated test method is expected to throw an exception during execution. The type of the expected exception is provided in the annotation, and if an instance of the exception is thrown during the test method execution then the test passes. Likewise if an instance of the exception is not thrown during the test method execution then the test fails.
@ExpectedException(SomeBusinessException.class)
public void testProcessRainyDayScenario() {
    // some logic that should result in an Exception being thrown
}
Using Spring's
@ExpectedException
annotation in
conjunction with JUnit 4's
@Test(expected=...)
configuration
would lead to an unresolvable conflict. Developers must therefore
choose one or the other when integrating with JUnit 4, in which
case it is generally preferable to use the explicit JUnit 4
configuration.
@Timed
Indicates that the annotated test method has to finish execution in a specified time period (in milliseconds). If the test takes longer than the specified time period to execute, the test fails.
Note that the time period includes execution of the test
method itself, any repetitions of the test (see
@Repeat
), as well as any
set up or tear down of the
test fixture.
@Timed(millis=1000)
public void testProcessWithOneSecondTimeout() {
    // some logic that should not take longer than 1 second to execute
}
Spring's @Timed
annotation
has different semantics than JUnit 4's
@Test(timeout=...)
support.
Specifically, due to the manner in which JUnit 4 handles test
execution timeouts (i.e., by executing the test method in a
separate Thread
),
@Test(timeout=...)
applies to
each iteration in the case of repetitions
and preemptively fails the test if the test takes too long.
Spring's @Timed
, on the other hand,
times the total test execution time
(including all repetitions) and does not preemptively fail the test
but rather waits for the test to actually complete before failing.
@Repeat
Indicates that the annotated test method must be executed repeatedly. The number of times that the test method is to be executed is specified in the annotation.
Note that the scope of execution to be repeated includes execution of the test method itself as well as any set up or tear down of the test fixture.
@Repeat(10)
@Test
public void testProcessRepeatedly() {
    // ...
}
The following non-test-specific annotations are supported with standard semantics for all configurations of the Spring TestContext Framework.
@Autowired
@Qualifier
@Resource
(javax.annotation)
if JSR-250 is present
@PersistenceContext
(javax.persistence) if JPA is present
@PersistenceUnit
(javax.persistence) if JPA is present
@Required
@Transactional
The Spring TestContext
Framework (located in the
org.springframework.test.context
package) provides
generic, annotation-driven unit and integration testing support that is
agnostic of the testing framework in use, for example JUnit 3.8, JUnit
4.5, TestNG 5.8, etc. The TestContext framework also places a great deal
of importance on convention over configuration with
reasonable defaults that can be overridden via annotation-based
configuration.
In addition to generic testing infrastructure, the TestContext
framework provides explicit support for JUnit 3.8, JUnit 4.5, and TestNG
5.8 in the form of abstract
support classes. For
JUnit 4.5, the framework also provides a custom
Runner
which allows one to write test
classes that are not required to extend a particular class
hierarchy.
The following section provides an overview of the internals of the TestContext framework. If you are only interested in using the framework and not necessarily interested in extending it with your own custom listeners, feel free to go directly to the configuration (context management, dependency injection, transaction management), support classes, and annotation support sections.
The core of the framework consists of the
TestContext
and
TestContextManager
classes and the
TestExecutionListener
interface. A
TestContextManager
is created on a per-test
basis. The TestContextManager
in turn manages a
TestContext
which is responsible for holding
the context of the current test. The
TestContextManager
is also responsible for
updating the state of the TestContext
as the
test progresses and delegating to
TestExecutionListener
s, which
instrument the actual test execution (e.g., providing dependency
injection, managing transactions, etc.). Consult the JavaDoc and the
Spring test suite for further information and examples of various
configurations.
TestContext
: encapsulates the context
in which a test is executed, agnostic of the actual testing
framework in use.
TestContextManager
: the main entry
point into the Spring TestContext Framework,
which is responsible for managing a single
TestContext
and signaling events to all
registered TestExecutionListener
s
at well defined test execution points: test instance preparation,
prior to any before methods of a particular
testing framework, and after any after
methods of a particular testing framework.
TestExecutionListener
:
defines a listener API for reacting to test
execution events published by the
TestContextManager
with which the listener
is registered.
Spring provides three
TestExecutionListener
implementations which are configured by default:
DependencyInjectionTestExecutionListener
,
DirtiesContextTestExecutionListener
, and
TransactionalTestExecutionListener
, which
provide support for dependency injection of the test instance,
handling of the @DirtiesContext
annotation, and transactional test execution support with default
rollback semantics, respectively.
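A custom listener can extend the framework's AbstractTestExecutionListener convenience base class (which provides empty default implementations) and override only the callbacks it needs; the logging below is purely illustrative:

import org.springframework.test.context.TestContext;
import org.springframework.test.context.support.AbstractTestExecutionListener;

// register on a test class via @TestExecutionListeners
public class LoggingTestExecutionListener extends AbstractTestExecutionListener {

    @Override
    public void beforeTestMethod(TestContext testContext) throws Exception {
        System.out.println("About to run: " + testContext.getTestMethod().getName());
    }
}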
The following three sections explain how to configure the
TestContext
framework via annotations and
provide working examples of how to actually write unit and integration
tests with the framework.
Each TestContext
provides context
management and caching support for the test instance for which it is
responsible. Test instances do not automatically receive access to the
configured ApplicationContext
; however, if a
test class implements the
ApplicationContextAware
interface, a
reference to the ApplicationContext
will be
supplied to the test instance (provided the
DependencyInjectionTestExecutionListener
has
been configured, which is the default). Note that
AbstractJUnit38SpringContextTests
,
AbstractJUnit4SpringContextTests
, and
AbstractTestNGSpringContextTests
already
implement ApplicationContextAware
and
therefore provide this functionality out-of-the-box.
@Autowired ApplicationContext: As an alternative to implementing the ApplicationContextAware interface, you can inject the application context into your test class through the @Autowired annotation on a field or setter method, for example:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration
public class MyTest {

    @Autowired
    private ApplicationContext applicationContext;

    // class body...
}
In contrast to the now deprecated JUnit 3.8 legacy class hierarchy,
test classes which use the TestContext framework do not need to override
any protected
instance methods to configure their
application context. Rather, configuration is achieved merely by
declaring the @ContextConfiguration
annotation at the class level. If your test class does not explicitly
declare any application context resource locations
,
the configured ContextLoader
will
determine how and whether or not to load a context from a default set
of locations. For example,
GenericXmlContextLoader
- which is the default
ContextLoader
- will generate a default
location based on the name of the test class. If your class is named
com.example.MyTest
,
GenericXmlContextLoader
will load your
application context from
"classpath:/com/example/MyTest-context.xml"
.
package com.example;

@RunWith(SpringJUnit4ClassRunner.class)
// ApplicationContext will be loaded from "classpath:/com/example/MyTest-context.xml"
@ContextConfiguration
public class MyTest {
    // class body...
}
If the default location does not suit your needs, you are free
to explicitly configure the locations
attribute of
@ContextConfiguration
(see code listing
below) with an array containing the resource locations of XML
configuration metadata (assuming an XML-capable
ContextLoader
has been configured) -
typically on the classpath - used to configure the application. This
will be the same, or nearly the same, as the list of configuration
locations specified in web.xml
or other deployment
configuration. As an alternative you may choose to implement and
configure your own custom
ContextLoader
.
@RunWith(SpringJUnit4ClassRunner.class) // ApplicationContext will be loaded from "/applicationContext.xml" and "/applicationContext-test.xml" // in the root of the classpath @ContextConfiguration(locations={"/applicationContext.xml", "/applicationContext-test.xml"}) public class MyTest { // class body... }
@ContextConfiguration
supports
an alias for the locations
attribute via the
standard value
attribute. Thus, if you do not need
to configure a custom ContextLoader
, you
can omit the declaration of the locations
attribute
name and declare the resource locations using the shorthand format
demonstrated in the following example.
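For example, the two-location declaration shown earlier can be shortened as follows:

@RunWith(SpringJUnit4ClassRunner.class)
// ApplicationContext will be loaded from "/applicationContext.xml" and "/applicationContext-test.xml"
// in the root of the classpath
@ContextConfiguration({"/applicationContext.xml", "/applicationContext-test.xml"})
public class MyTest {
    // class body...
}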
@ContextConfiguration
also
supports a boolean inheritLocations
attribute which
denotes whether or not resource locations from superclasses should be
inherited. The default value is
true
, which means that an annotated class will
inherit the resource locations defined by an
annotated superclass. Specifically, the resource locations for an
annotated class will be appended to the list of resource locations
defined by an annotated superclass. Thus, subclasses have the option
of extending the list of resource locations. In
the following example, the
ApplicationContext
for
ExtendedTest
will be loaded from
"/base-context.xml" and
"/extended-context.xml", in that order. Beans defined in
"/extended-context.xml" may therefore override those defined in
"/base-context.xml".
@RunWith(SpringJUnit4ClassRunner.class) // ApplicationContext will be loaded from "/base-context.xml" in the root of the classpath @ContextConfiguration("/base-context.xml") public class BaseTest { // class body... } // ApplicationContext will be loaded from "/base-context.xml" and "/extended-context.xml" // in the root of the classpath @ContextConfiguration("/extended-context.xml") public class ExtendedTest extends BaseTest { // class body... }
If inheritLocations
is set to
false
, the resource locations for the annotated
class will shadow and effectively replace any
resource locations defined by a superclass.
By default, once loaded, the configured
ApplicationContext
will be reused for
each test. Thus the setup cost will be incurred only once (per test
fixture), and subsequent test execution will be much faster. In the
unlikely case that a test may dirty the
application context, requiring reloading - for example, by changing a
bean definition or the state of an application object - you may
annotate your test method with
@DirtiesContext
(assuming
DirtiesContextTestExecutionListener
has been
configured, which is the default) to cause the test fixture to reload
the configurations and rebuild the application context before
executing the next test.