8. Messaging Endpoints

8.1 Message Endpoints

The first part of this chapter covers some background theory and reveals quite a bit about the underlying API that drives Spring Integration’s various messaging components. This information can be helpful if you want to really understand what’s going on behind the scenes. However, if you want to get up and running with the simplified namespace-based configuration of the various elements, feel free to skip ahead to Section 8.1.4, “Namespace Support” for now.

As mentioned in the overview, Message Endpoints are responsible for connecting the various messaging components to channels. Over the next several chapters, you will see a number of different components that consume Messages. Some of these are also capable of sending reply Messages. Sending Messages is quite straightforward. As shown above in Section 4.1, “Message Channels”, it’s easy to send a Message to a Message Channel. However, receiving is a bit more complicated. The main reason is that there are two types of consumers: Polling Consumers and Event Driven Consumers.

Of the two, Event Driven Consumers are much simpler. Without any need to manage and schedule a separate poller thread, they are essentially just listeners with a callback method. When connecting to one of Spring Integration’s subscribable Message Channels, this simple option works great. However, when connecting to a buffering, pollable Message Channel, some component has to schedule and manage the polling thread(s). Spring Integration provides two different endpoint implementations to accommodate these two types of consumers. Therefore, the consumers themselves can simply implement the callback interface. When polling is required, the endpoint acts as a container for the consumer instance. The benefit is similar to that of using a container for hosting Message Driven Beans, but since these consumers are simply Spring-managed Objects running within an ApplicationContext, it more closely resembles Spring’s own MessageListener containers.

8.1.1 Message Handler

Spring Integration’s MessageHandler interface is implemented by many of the components within the framework. It is generally not part of the public API that a developer would implement directly. Nevertheless, it is used by a Message Consumer for actually handling the consumed Messages, and so being aware of this strategy interface does help in terms of understanding the overall role of a consumer. The interface is defined as follows:

public interface MessageHandler {

    void handleMessage(Message<?> message);

}

Despite its simplicity, this provides the foundation for most of the components that will be covered in the following chapters (Routers, Transformers, Splitters, Aggregators, Service Activators, etc). Those components each perform very different functionality with the Messages they handle, but the requirements for actually receiving a Message are the same, and the choice between polling and event-driven behavior is also the same. Spring Integration provides two endpoint implementations that host these callback-based handlers and allow them to be connected to Message Channels.
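
To make the later examples concrete, here is a trivial, purely illustrative MessageHandler implementation (the class name is hypothetical) that simply prints each payload; it could serve as the exampleHandler referenced below:

public class ExampleHandler implements MessageHandler {

    public void handleMessage(Message<?> message) {
        // invoked once for each Message that the hosting endpoint passes to this handler
        System.out.println("Received payload: " + message.getPayload());
    }

}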

8.1.2 Event Driven Consumer

Because it is the simpler of the two, we will cover the Event Driven Consumer endpoint first. You may recall that the SubscribableChannel interface provides a subscribe() method and that the method accepts a MessageHandler parameter (as shown in the section called “SubscribableChannel”):

subscribableChannel.subscribe(messageHandler);

Since a handler that is subscribed to a channel does not have to actively poll that channel, this is an Event Driven Consumer, and the implementation provided by Spring Integration accepts a SubscribableChannel and a MessageHandler:

SubscribableChannel channel = context.getBean("subscribableChannel", SubscribableChannel.class);

EventDrivenConsumer consumer = new EventDrivenConsumer(channel, exampleHandler);
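
If you create the consumer programmatically as shown above (rather than declaring it as a bean), it also has to be started before it subscribes its handler to the channel. A minimal, illustrative sketch (exampleHandler is assumed to exist):

consumer.start();

channel.send(new GenericMessage<String>("test")); // exampleHandler.handleMessage(..) is invoked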

8.1.3 Polling Consumer

Spring Integration also provides a PollingConsumer, and it can be instantiated in the same way except that the channel must implement PollableChannel:

PollableChannel channel = context.getBean("pollableChannel", PollableChannel.class);

PollingConsumer consumer = new PollingConsumer(channel, exampleHandler);
[Note]Note

For more information regarding Polling Consumers, please also read Section 4.2, “Poller” as well as Section 4.3, “Channel Adapter”.

There are many other configuration options for the Polling Consumer. For example, the trigger is a required property:

PollingConsumer consumer = new PollingConsumer(channel, handler);

consumer.setTrigger(new PeriodicTrigger(30, TimeUnit.SECONDS));

Spring Integration currently provides two implementations of the Trigger interface: PeriodicTrigger and CronTrigger. The PeriodicTrigger is typically defined with a simple interval (in milliseconds), but also supports an initialDelay property and a boolean fixedRate property (the default is false, i.e. fixed delay):

PeriodicTrigger trigger = new PeriodicTrigger(1000);
trigger.setInitialDelay(5000);
trigger.setFixedRate(true);

The CronTrigger simply requires a valid cron expression (see the Javadoc for details):

CronTrigger trigger = new CronTrigger("*/10 * * * * MON-FRI");

In addition to the trigger, several other polling-related configuration properties may be specified:

PollingConsumer consumer = new PollingConsumer(channel, handler);

consumer.setMaxMessagesPerPoll(10);
consumer.setReceiveTimeout(5000);

The maxMessagesPerPoll property specifies the maximum number of messages to receive within a given poll operation. This means that the poller will continue calling receive() without waiting until either null is returned or that max is reached. For example, if a poller has a 10 second interval trigger and a maxMessagesPerPoll setting of 25, and it is polling a channel that has 100 messages in its queue, all 100 messages can be retrieved within 40 seconds. It grabs 25, waits 10 seconds, grabs the next 25, and so on.

The receiveTimeout property specifies the amount of time the poller should wait if no messages are available when it invokes the receive operation. For example, consider two options that seem similar on the surface but are actually quite different: the first has an interval trigger of 5 seconds and a receive timeout of 50 milliseconds while the second has an interval trigger of 50 milliseconds and a receive timeout of 5 seconds. The first one may receive a message up to 4950 milliseconds later than it arrived on the channel (if that message arrived immediately after one of its poll calls returned). On the other hand, the second configuration will never miss a message by more than 50 milliseconds. The difference is that the second option requires a thread to wait, but as a result it is able to respond much more quickly to arriving messages. This technique, known as long polling, can be used to emulate event-driven behavior on a polled source.
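
To make the contrast concrete, the two configurations described above could be expressed programmatically roughly as follows (a sketch only; channel and handler are assumed to exist):

// option 1: poll every 5 seconds, wait at most 50 milliseconds for a message on each poll
PollingConsumer consumer1 = new PollingConsumer(channel, handler);
consumer1.setTrigger(new PeriodicTrigger(5000));
consumer1.setReceiveTimeout(50);

// option 2: poll every 50 milliseconds, wait up to 5 seconds for a message (long polling)
PollingConsumer consumer2 = new PollingConsumer(channel, handler);
consumer2.setTrigger(new PeriodicTrigger(50));
consumer2.setReceiveTimeout(5000);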

A Polling Consumer may also delegate to a Spring TaskExecutor, as illustrated in the following example:

PollingConsumer consumer = new PollingConsumer(channel, handler);

TaskExecutor taskExecutor = context.getBean("exampleExecutor", TaskExecutor.class);
consumer.setTaskExecutor(taskExecutor);

Furthermore, a PollingConsumer has a property called adviceChain. This property allows you to specify a List of AOP Advices for handling additional cross cutting concerns including transactions. These advices are applied around the doPoll() method. For more in-depth information, please see the sections AOP Advice chains and Transaction Support under Section 8.1.4, “Namespace Support”.
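
As a rough, illustrative sketch, an advice chain can also be provided programmatically; the logging interceptor below is hypothetical and simply wraps each poll:

List<Advice> adviceChain = new ArrayList<Advice>();
adviceChain.add(new MethodInterceptor() {

    public Object invoke(MethodInvocation invocation) throws Throwable {
        // executed around each doPoll() call
        System.out.println("about to poll");
        return invocation.proceed();
    }

});
consumer.setAdviceChain(adviceChain);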

The examples above show dependency lookups, but keep in mind that these consumers will most often be configured as Spring bean definitions. In fact, Spring Integration also provides a FactoryBean called ConsumerEndpointFactoryBean that creates the appropriate consumer type based on the type of channel, and there is full XML namespace support to even further hide those details. The namespace-based configuration will be featured as each component type is introduced.

[Note]Note

Many of the MessageHandler implementations are also capable of generating reply Messages. As mentioned above, sending Messages is trivial when compared to receiving Messages. Nevertheless, when and how many reply Messages are sent depends on the handler type. For example, an Aggregator waits for a number of Messages to arrive and is often configured as a downstream consumer for a Splitter, which may generate multiple replies for each Message it handles. When using the namespace configuration, you do not strictly need to know all of the details, but it still might be worth knowing that several of these components share a common base class, the AbstractReplyProducingMessageHandler, and it provides a setOutputChannel(..) method.

8.1.4 Namespace Support

Throughout the reference manual, you will see specific configuration examples for endpoint elements, such as router, transformer, service-activator, and so on. Most of these will support an input-channel attribute and many will support an output-channel attribute. After being parsed, these endpoint elements produce an instance of either the PollingConsumer or the EventDrivenConsumer depending on the type of the input-channel that is referenced: PollableChannel or SubscribableChannel respectively. When the channel is pollable, then the polling behavior is determined based on the endpoint element’s poller sub-element and its attributes.

Configuration

Below you find a poller with all available configuration options:

<int:poller cron=""                                  1
            default="false"                          2
            error-channel=""                         3
            fixed-delay=""                           4
            fixed-rate=""                            5
            id=""                                    6
            max-messages-per-poll=""                 7
            receive-timeout=""                       8
            ref=""                                   9
            task-executor=""                         10
            time-unit="MILLISECONDS"                 11
            trigger="">                              12
            <int:advice-chain />                     13
            <int:transactional />                    14
</int:poller>

1

Provides the ability to configure Pollers using Cron expressions. The underlying implementation uses an org.springframework.scheduling.support.CronTrigger. If this attribute is set, none of the following attributes may be specified: fixed-delay, trigger, fixed-rate, ref.

2

By setting this attribute to true, it is possible to define exactly one (1) global default poller. An exception is raised if more than one default poller is defined in the application context. Any endpoints connected to a PollableChannel (PollingConsumer) or any SourcePollingChannelAdapter that does not have any explicitly configured poller will then use the global default Poller. Optional. Defaults to false.

3

Identifies the channel to which error messages will be sent if a failure occurs in this poller’s invocation. To completely suppress Exceptions, provide a reference to the nullChannel. Optional.

4

The fixed delay trigger uses a PeriodicTrigger under the covers. If the time-unit attribute is not used, the specified value is represented in milliseconds. If this attribute is set, none of the following attributes may be specified: fixed-rate, trigger, cron, ref.

5

The fixed rate trigger uses a PeriodicTrigger under the covers. If the time-unit attribute is not used, the specified value is represented in milliseconds. If this attribute is set, none of the following attributes may be specified: fixed-delay, trigger, cron, ref.

6

The Id referring to the Poller’s underlying bean-definition, which is of type org.springframework.integration.scheduling.PollerMetadata. The id attribute is required for a top-level poller element unless it is the default poller (default="true").

7

Please see Section 4.3.1, “Configuring An Inbound Channel Adapter” for more information. Optional. If not specified, the default value depends on the context. If a PollingConsumer is used, this attribute defaults to -1. However, if a SourcePollingChannelAdapter is used, then the max-messages-per-poll attribute defaults to 1.

8

The value is set on the underlying PollerMetadata. Optional. If not specified, it defaults to 1000 (milliseconds).

9

Bean reference to another top-level poller. The ref attribute must not be present on the top-level poller element. However, if this attribute is set, none of the following attributes may be specified: fixed-rate, trigger, cron, fixed-delay.

10

Provides the ability to reference a custom task executor. Please see the section below titled TaskExecutor Support for further information. Optional.

11

This attribute specifies the java.util.concurrent.TimeUnit enum value on the underlying org.springframework.scheduling.support.PeriodicTrigger. Therefore, this attribute can ONLY be used in combination with the fixed-delay or fixed-rate attributes. If combined with either cron or a trigger reference attribute, it will cause a failure. The minimal supported granularity for a PeriodicTrigger is MILLISECONDS. Therefore, the only available options are MILLISECONDS and SECONDS. If this value is not provided, then any fixed-delay or fixed-rate value will be interpreted as MILLISECONDS by default. Basically this enum provides a convenience for SECONDS-based interval trigger values. For hourly, daily, and monthly settings, consider using a cron trigger instead.

12

Reference to any Spring-configured bean which implements the org.springframework.scheduling.Trigger interface. Optional. However, if this attribute is set, none of the following attributes may be specified: fixed-delay, fixed-rate, cron, ref.

13

Allows you to specify additional AOP Advices to handle other cross-cutting concerns. Please see the section below titled AOP Advice chains for further information. Optional.

14

Pollers can be made transactional. Please see the section below titled Transaction Support for further information. Optional.

Examples

For example, a simple interval-based poller with a 1-second interval would be configured like this:

<int:transformer input-channel="pollable"
    ref="transformer"
    output-channel="output">
    <int:poller fixed-rate="1000"/>
</int:transformer>

As an alternative to fixed-rate you can also use the fixed-delay attribute.

For a poller based on a Cron expression, use the cron attribute instead:

<int:transformer input-channel="pollable"
    ref="transformer"
    output-channel="output">
    <int:poller cron="*/10 * * * * MON-FRI"/>
</int:transformer>

If the input channel is a PollableChannel, then the poller configuration is required. Specifically, as mentioned above, the trigger is a required property of the PollingConsumer class. Therefore, if you omit the poller sub-element for a Polling Consumer endpoint’s configuration, an Exception may be thrown. The exception will also be thrown if you attempt to configure a poller on the element that is connected to a non-pollable channel.

It is also possible to create top-level pollers, in which case the endpoint’s inner poller element only requires a ref:

<int:poller id="weekdayPoller" cron="*/10 * * * * MON-FRI"/>

<int:transformer input-channel="pollable"
    ref="transformer"
    output-channel="output">
    <int:poller ref="weekdayPoller"/>
</int:transformer>
[Note]Note

The ref attribute is only allowed on inner-poller definitions. Defining this attribute on a top-level poller will result in a configuration exception being thrown during initialization of the Application Context.

Global Default Pollers

In fact, to simplify the configuration even further, you can define a global default poller. A single top-level poller within an ApplicationContext may have the default attribute with a value of true. In that case, any endpoint with a PollableChannel for its input-channel that is defined within the same ApplicationContext and has no explicitly configured poller sub-element will use that default.

<int:poller id="defaultPoller" default="true" max-messages-per-poll="5" fixed-rate="3000"/>

<!-- No <poller/> sub-element is necessary since there is a default -->
<int:transformer input-channel="pollable"
                 ref="transformer"
                 output-channel="output"/>

Transaction Support

Spring Integration also provides transaction support for the pollers so that each receive-and-forward operation can be performed as an atomic unit-of-work. To configure transactions for a poller, simply add the <transactional/> sub-element. The attributes for this element should be familiar to anyone who has experience with Spring’s Transaction management:

<int:poller fixed-delay="1000">
    <int:transactional transaction-manager="txManager"
                       propagation="REQUIRED"
                       isolation="REPEATABLE_READ"
                       timeout="10000"
                       read-only="false"/>
</int:poller>

For more information please refer to the section called “CompletableFuture”.

AOP Advice chains

Since Spring transaction support relies on the proxy mechanism, with a TransactionInterceptor (AOP Advice) handling the transactional behavior of the message flow initiated by the poller, it is sometimes necessary to provide extra Advices to handle other cross-cutting behavior associated with the poller. For that purpose, the poller defines an advice-chain element that allows you to add more advices (classes that implement the MethodInterceptor interface):

<int:service-activator id="advicedSa" input-channel="goodInputWithAdvice" ref="testBean"
		method="good" output-channel="output">
	<int:poller max-messages-per-poll="1" fixed-rate="10000">
		 <int:advice-chain>
			<ref bean="adviceA" />
			<beans:bean class="org.bar.SampleAdvice" />
			<ref bean="txAdvice" />
		</int:advice-chain>
	</int:poller>
</int:service-activator>

For more information on how to implement the MethodInterceptor interface, please refer to the AOP sections of the Spring reference manual (sections 8 and 9). An advice chain can also be applied to a poller that does not have any transaction configuration, essentially allowing you to enhance the behavior of the message flow initiated by the poller.

[Important]Important

When using an advice chain, the <transactional/> child element cannot be specified; instead, declare a <tx:advice/> bean and add it to the <advice-chain/>. See the section called “CompletableFuture” for complete configuration.

TaskExecutor Support

The polling threads may be executed by any instance of Spring’s TaskExecutor abstraction. This enables concurrency for an endpoint or group of endpoints. As of Spring 3.0, there is a task namespace in the core Spring Framework, and its <executor/> element supports the creation of a simple thread pool executor. That element accepts attributes for common concurrency settings such as pool-size and queue-capacity. Configuring a thread-pooling executor can make a substantial difference in how the endpoint performs under load. These settings are available per-endpoint since the performance of an endpoint is one of the major factors to consider (the other major factor being the expected volume on the channel to which the endpoint subscribes). To enable concurrency for a polling endpoint that is configured with the XML namespace support, provide the task-executor reference on its <poller/> element and then provide one or more of the properties shown below:

<int:poller task-executor="pool" fixed-rate="1000"/>

<task:executor id="pool"
               pool-size="5-25"
               queue-capacity="20"
               keep-alive="120"/>

If no task-executor is provided, the consumer’s handler will be invoked in the caller’s thread. Note that the caller is usually the default TaskScheduler (see the section called “CompletableFuture”). Also, keep in mind that the task-executor attribute can provide a reference to any implementation of Spring’s TaskExecutor interface by specifying the bean name. The executor element above is simply provided for convenience.

As mentioned in the background section for Polling Consumers above, you can also configure a Polling Consumer in such a way as to emulate event-driven behavior. With a long receive-timeout and a short interval-trigger, you can ensure a very timely reaction to arriving messages even on a polled message source. Note that this only applies to sources that have a blocking wait call with a timeout. For example, the File poller does not block; each receive() call returns immediately and either contains new files or does not. Therefore, even if a poller specifies a long receive-timeout, that value would never be used in such a scenario. On the other hand, when using Spring Integration’s own queue-based channels, the timeout value does have a chance to participate. The following example demonstrates how a Polling Consumer can receive Messages nearly instantaneously.

<int:service-activator input-channel="someQueueChannel"
    output-channel="output">
    <int:poller receive-timeout="30000" fixed-rate="10"/>

</int:service-activator>

Using this approach does not carry much overhead since, internally, it is nothing more than a timed-wait thread, which does not require nearly as much CPU resource usage as, for example, a thrashing, infinite while loop.

8.1.5 Change Polling Rate at Runtime

When configuring Pollers with a fixed-delay or fixed-rate attribute, the default implementation will use a PeriodicTrigger instance. The PeriodicTrigger is part of the Core Spring Framework, and it accepts the interval only as a constructor argument. Therefore, it cannot be changed at runtime.

However, you can define your own implementation of the org.springframework.scheduling.Trigger interface. You could even use the PeriodicTrigger as a starting point. Then, you can add a setter for the interval (period), or you could even embed your own throttling logic within the trigger itself if desired. The period property will be used with each call to nextExecutionTime to schedule the next poll. To use this custom trigger within pollers, declare the bean definition of the custom Trigger in your application context and inject the dependency into your Poller configuration using the trigger attribute, which references the custom Trigger bean instance. You can now obtain a reference to the Trigger bean and the polling interval can be changed between polls.
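
A minimal sketch of such a trigger might look like the following (the class name is hypothetical; the sample mentioned below provides a more complete implementation):

public class DynamicPeriodicTrigger implements Trigger {

    private volatile long period;

    public DynamicPeriodicTrigger(long period) {
        this.period = period;
    }

    public void setPeriod(long period) {
        // takes effect when the next execution time is computed, i.e. after the current poll
        this.period = period;
    }

    public Date nextExecutionTime(TriggerContext triggerContext) {
        Date lastCompletion = triggerContext.lastCompletionTime();
        Date baseline = (lastCompletion != null) ? lastCompletion : new Date();
        return new Date(baseline.getTime() + this.period);
    }

}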

For an example, please see the Spring Integration Samples project. It contains a sample called dynamic-poller, which uses a custom Trigger and demonstrates the ability to change the polling interval at runtime.

https://github.com/SpringSource/spring-integration-samples/tree/master/intermediate

The sample provides a custom Trigger which implements the org.springframework.scheduling.Trigger interface. The sample’s Trigger is based on Spring’s PeriodicTrigger implementation. However, the fields of the custom trigger are not final and the properties have explicit getters and setters, allowing you to change the polling period dynamically at runtime.

[Note]Note

It is important to note, though, that because the Trigger method is nextExecutionTime(), any changes to a dynamic trigger will not take effect until the next poll, based on the existing configuration. It is not possible to force a trigger to fire before its currently configured next execution time.

8.1.6 Payload Type Conversion

Throughout the reference manual, you will also see specific configuration and implementation examples of various endpoints which can accept a Message or any arbitrary Object as an input parameter. In the case of an Object, such a parameter will be mapped to a Message payload or part of the payload or header (when using the Spring Expression Language). However there are times when the type of input parameter of the endpoint method does not match the type of the payload or its part. In this scenario we need to perform type conversion. Spring Integration provides a convenient way for registering type converters (using the Spring 3.x ConversionService) within its own instance of a conversion service bean named integrationConversionService. That bean is automatically created as soon as the first converter is defined using the Spring Integration infrastructure. To register a Converter all you need is to implement org.springframework.core.convert.converter.Converter, org.springframework.core.convert.converter.GenericConverter or org.springframework.core.convert.converter.ConverterFactory.

The Converter implementation is the simplest and converts from a single type to another. For more sophistication, such as converting to a class hierarchy, you would implement a GenericConverter and possibly a ConditionalConverter. These give you complete access to the from and to type descriptors enabling complex conversions. For example, if you have an abstract class Foo that is the target of your conversion (parameter type, channel data type etc) and you have two concrete implementations Bar and Baz and you wish to convert to one or the other based on the input type, the GenericConverter would be a good fit. Refer to the JavaDocs for these interfaces for more information.
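
As a rough illustration of the Foo/Bar/Baz scenario above (all three classes, and the converter itself, are hypothetical), a GenericConverter could choose the concrete subclass based on the source type:

public class FooConverter implements GenericConverter {

    public Set<ConvertiblePair> getConvertibleTypes() {
        Set<ConvertiblePair> pairs = new HashSet<ConvertiblePair>();
        pairs.add(new ConvertiblePair(String.class, Foo.class));
        pairs.add(new ConvertiblePair(Integer.class, Foo.class));
        return pairs;
    }

    public Object convert(Object source, TypeDescriptor sourceType, TypeDescriptor targetType) {
        // pick the concrete Foo subclass based on the type of the input
        if (source instanceof String) {
            return new Bar((String) source);
        }
        return new Baz((Integer) source);
    }

}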

When you have implemented your converter, you can register it with convenient namespace support:

<int:converter ref="sampleConverter"/>

<bean id="sampleConverter" class="foo.bar.TestConverter"/>

or as an inner bean:

<int:converter>
    <bean class="o.s.i.config.xml.ConverterParserTests$TestConverter3"/>
</int:converter>

Starting with Spring Integration 4.0, the above configuration is available using annotations:

@Component
@IntegrationConverter
public class TestConverter implements Converter<Boolean, Number> {

	public Number convert(Boolean source) {
		return source ? 1 : 0;
	}

}

or as a @Configuration part:

@Configuration
@EnableIntegration
public class ContextConfiguration {

	@Bean
	@IntegrationConverter
	public SerializingConverter serializingConverter() {
		return new SerializingConverter();
	}

}
[Important]Important

When configuring an Application Context, the Spring Framework allows you to add a conversionService bean (see Configuring a ConversionService chapter). This service is used, when needed, to perform appropriate conversions during bean creation and configuration.

In contrast, the integrationConversionService is used for runtime conversions. These uses are quite different; converters that are intended for use when wiring bean constructor-args and properties may produce unintended results if used at runtime for Spring Integration expression evaluation against Messages within Datatype Channels, Payload Type transformers etc.

However, if you do want to use the Spring conversionService as the Spring Integration integrationConversionService, you can configure an alias in the Application Context:

<alias name="conversionService" alias="integrationConversionService"/>

In this case the conversionService's Converters will be available for Spring Integration runtime conversion.

8.1.7 Asynchronous polling

If you want the polling to be asynchronous, a Poller can optionally specify a task-executor attribute pointing to an existing instance of any TaskExecutor bean (Spring 3.0 provides a convenient namespace configuration via the task namespace). However, there are certain things you must understand when configuring a Poller with a TaskExecutor. 

The problem is that there are two configurations in place: the Poller and the TaskExecutor. They both have to be in tune with each other, otherwise you might end up creating an artificial memory leak.

Let’s look at the following configuration provided by one of the users on the Spring Integration Forum:

<int:channel id="publishChannel">
    <int:queue />
</int:channel>

<int:service-activator input-channel="publishChannel" ref="myService">
	<int:poller receive-timeout="5000" task-executor="taskExecutor" fixed-rate="50" />
</int:service-activator>

<task:executor id="taskExecutor" pool-size="20" />

The above configuration demonstrates one of those out of tune configurations.

By default, the task executor has an unbounded task queue. The poller keeps scheduling new tasks even though all the threads are blocked waiting for either a new message to arrive, or the timeout to expire. Given that there are 20 threads executing tasks with a 5 second timeout, they will be executed at a rate of 4 per second (5000/20 = 250ms). But, new tasks are being scheduled at a rate of 20 per second, so the internal queue in the task executor will grow at a rate of 16 per second (while the process is idle), so we essentially have a memory leak.

One of the ways to handle this is to set the queue-capacity attribute of the Task Executor; even 0 is a reasonable value. You can also manage it by specifying what to do with messages that cannot be queued by setting the rejection-policy attribute of the Task Executor (e.g., DISCARD). In other words, there are certain details you must understand with regard to configuring the TaskExecutor. Please refer to the Task Execution and Scheduling section of the Spring reference manual for more detail on the subject.
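
For example, a Java-configured equivalent of the executor above, with a bounded queue and an explicit rejection policy, might look roughly like this (the property values are illustrative only):

@Bean
public TaskExecutor taskExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(20);
    executor.setQueueCapacity(0); // do not let pending poll tasks pile up
    // silently drop tasks that cannot be executed rather than queueing them indefinitely
    executor.setRejectedExecutionHandler(new ThreadPoolExecutor.DiscardPolicy());
    return executor;
}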

8.1.8 Endpoint Inner Beans

Many endpoints are composite beans; this includes all consumers and all polled inbound channel adapters. Consumers (polled or event-driven) delegate to a MessageHandler; polled adapters obtain messages by delegating to a MessageSource. Often, it is useful to obtain a reference to the delegate bean, perhaps to change configuration at runtime, or for testing. These beans can be obtained from the ApplicationContext with well-known names. MessageHandler s are registered with the application context with a bean id someConsumer.handler (where someConsumer is the endpoint’s id attribute). MessageSource s are registered with a bean id somePolledAdapter.source, again where somePolledAdapter is the id of the adapter.
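
For example, assuming a consumer endpoint with id someConsumer and a polled adapter with id somePolledAdapter (both names are illustrative), the delegates could be obtained like this:

MessageHandler handler = context.getBean("someConsumer.handler", MessageHandler.class);

MessageSource<?> source = context.getBean("somePolledAdapter.source", MessageSource.class);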

The above only applies to the framework component itself. If you use an inner bean definition such as this:

<int:service-activator id="exampleServiceActivator" input-channel="inChannel"
            output-channel = "outChannel" method="foo">
    <beans:bean class="org.foo.ExampleServiceActivator"/>
</int:service-activator>

the bean is treated like any inner bean declared that way and is not registered with the application context. If you wish to access this bean in some other manner, declare it at the top level with an id and use the ref attribute instead. See the Spring Documentation for more information.

8.2 Endpoint Roles

Starting with version 4.2, endpoints can be assigned to roles. Roles allow endpoints to be started and stopped as a group; this is particularly useful when using leadership election where a set of endpoints can be started or stopped when leadership is granted or revoked respectively.

You can assign endpoints to roles using XML, Java configuration, or programmatically:

<int:inbound-channel-adapter id="ica" channel="someChannel" expression="'foo'" role="cluster">
    <int:poller fixed-rate="60000" />
</int:inbound-channel-adapter>
@Bean
@ServiceActivator(inputChannel = "sendAsyncChannel")
@Role("cluster")
public MessageHandler sendAsyncHandler() {
    return // some MessageHandler
}
@Payload("#args[0].toLowerCase()")
@Role("cluster")
public String handle(String payload) {
    return payload.toUpperCase();
}
@Autowired
private SmartLifecycleRoleController roleController;

...

    this.roleController.addSmartLifeCycleToRole("cluster", someEndpoint);
...

Each of these adds the endpoint to the role cluster.

Invoking roleController.startLifecyclesInRole("cluster") (and the corresponding stop... method) will start/stop the endpoints.

[Note]Note

Any object implementing SmartLifecycle can be programmatically added, not just endpoints.

The SmartLifecycleRoleController implements ApplicationListener<AbstractLeaderEvent> and it will automatically start/stop its configured SmartLifecycle objects when leadership is granted/revoked (when some bean publishes OnGrantedEvent or OnRevokedEvent respectively).

[Important]Important

When using leadership election to start/stop components, it is important to set the auto-startup XML attribute (autoStartup bean property) to false so the application context does not start the components during context initialization.

Starting with version 4.3.8, the SmartLifecycleRoleController provides several status methods:

public Collection<String> getRoles() 1

public boolean allEndpointsRunning(String role) 2

public boolean noEndpointsRunning(String role) 3

public Map<String, Boolean> getEndpointsRunningStatus(String role) 4

1

Returns a list of the roles being managed.

2

Returns true if all endpoints in the role are running.

3

Returns true if none of the endpoints in the role are running.

4

Returns a map of component name : running status - the component name is usually the bean name.

8.3 Leadership Event Handling

Groups of endpoints can be started/stopped based on leadership being granted or revoked respectively. This is useful in clustered scenarios where shared resources must only be consumed by a single instance. An example of this is a file inbound channel adapter that is polling a shared directory. (See the section called “CompletableFuture”).

To participate in a leader election and be notified when elected leader or when leadership is revoked, an application creates a component in the application context called a "leader initiator". Normally a leader initiator is a SmartLifecycle so it starts up (optionally) automatically when the context starts, and then publishes notifications when leadership changes. By convention the user provides a Candidate that receives the callbacks and also can revoke the leadership through a Context object provided by the framework. User code can also listen for AbstractLeaderEvents, and respond accordingly, for instance using a SmartLifecycleRoleController.

There is a basic implementation of a leader initiator based on the LockRegistry abstraction. To use it you just need to create an instance as a bean, for example:

@Bean
public LockRegistryLeaderInitiator leaderInitiator(LockRegistry locks) {
    return new LockRegistryLeaderInitiator(locks);
}

If the lock registry is implemented correctly, there will only ever be at most one leader. If the lock registry also provides locks which throw exceptions (ideally InterruptedException) when they expire or are broken, then the duration of the leaderless periods can be as short as is allowed by the inherent latency in the lock implementation. By default there is a busyWaitMillis property that adds some additional latency to prevent CPU starvation in the (more usual) case that the locks are imperfect and you only know they expired by trying to obtain one again.

See the section called “CompletableFuture” for more information about leadership election and events using Zookeeper.

8.4 Messaging Gateways

The primary purpose of a Gateway is to hide the messaging API provided by Spring Integration. It allows your application’s business logic to be completely unaware of the Spring Integration API: using a generic Gateway, your code interacts with a simple interface only.

8.4.1 Enter the GatewayProxyFactoryBean

As mentioned above, it would be great to have no dependency on the Spring Integration API at all - including the gateway class. For that reason, Spring Integration provides the GatewayProxyFactoryBean that generates a proxy for any interface and internally invokes the gateway methods shown below. Using dependency injection you can then expose the interface to your business methods.

Here is an example of an interface that can be used to interact with Spring Integration:

package org.cafeteria;

public interface Cafe {

    void placeOrder(Order order);

}

8.4.2 Gateway XML Namespace Support

Namespace support is also provided which allows you to configure such an interface as a service as demonstrated by the following example.

<int:gateway id="cafeService"
         service-interface="org.cafeteria.Cafe"
         default-request-channel="requestChannel"
         default-reply-timeout="10000"
         default-reply-channel="replyChannel"/>

With this configuration defined, the "cafeService" can now be injected into other beans, and the code that invokes the methods on that proxied instance of the Cafe interface has no awareness of the Spring Integration API. The general approach is similar to that of Spring Remoting (RMI, HttpInvoker, etc.). See the "Samples" Appendix for an example that uses this "gateway" element (in the Cafe demo).
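
For instance, a business component (the class below is hypothetical) could simply have the proxied Cafe injected and invoke it like any other collaborator, with no Spring Integration types in sight:

public class Waiter {

    private final Cafe cafe;

    public Waiter(Cafe cafe) {
        this.cafe = cafe;
    }

    public void takeOrder(Order order) {
        // the generated proxy converts this call into a Message sent to the request channel
        this.cafe.placeOrder(order);
    }

}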

The defaults in the configuration above are applied to all methods on the gateway interface; if a reply timeout is not specified, the calling thread will wait indefinitely for a reply. See the section called “CompletableFuture”.

The defaults can be overridden for individual methods; see Section 8.4.4, “Gateway Configuration with Annotations and/or XML”.

8.4.3 Setting the Default Reply Channel

Typically you don’t have to specify the default-reply-channel, since a Gateway will auto-create a temporary, anonymous reply channel, where it will listen for the reply. However, there are some cases which may prompt you to define a default-reply-channel (or reply-channel with adapter gateways such as HTTP, JMS, etc.).

For some background, we’ll quickly discuss some of the inner-workings of the Gateway. A Gateway will create a temporary point-to-point reply channel which is anonymous and is added to the Message Headers with the name replyChannel. When providing an explicit default-reply-channel (reply-channel with remote adapter gateways), you have the option to point to a publish-subscribe channel, which is so named because you can add more than one subscriber to it. Internally Spring Integration will create a Bridge between the temporary replyChannel and the explicitly defined default-reply-channel.

So let’s say you want your reply to go not only to the gateway, but also to some other consumer. In this case you would want two things: a) a named channel you can subscribe to and b) that channel is a publish-subscribe-channel. The default strategy used by the gateway will not satisfy those needs, because the reply channel added to the header is anonymous and point-to-point. This means that no other subscriber can get a handle to it and even if it could, the channel has point-to-point behavior such that only one subscriber would get the Message. So by defining a default-reply-channel you can point to a channel of your choosing, which in this case would be a publish-subscribe-channel. The Gateway would create a bridge from it to the temporary, anonymous reply channel that is stored in the header.

Another case where you might want to provide a reply channel explicitly is for monitoring or auditing via an interceptor (e.g., wiretap). You need a named channel in order to configure a Channel Interceptor.

8.4.4 Gateway Configuration with Annotations and/or XML

public interface Cafe {

    @Gateway(requestChannel="orders")
    void placeOrder(Order order);

}

You may alternatively provide such content in method sub-elements if you prefer XML configuration (see the next paragraph).

It is also possible to pass values to be interpreted as Message headers on the Message that is created and sent to the request channel by using the @Header annotation:

public interface FileWriter {

    @Gateway(requestChannel="filesOut")
    void write(byte[] content, @Header(FileHeaders.FILENAME) String filename);

}

If you prefer the XML approach of configuring Gateway methods, you can provide method sub-elements to the gateway configuration.

<int:gateway id="myGateway" service-interface="org.foo.bar.TestGateway"
      default-request-channel="inputC">
  <int:default-header name="calledMethod" expression="#gatewayMethod.name"/>
  <int:method name="echo" request-channel="inputA" reply-timeout="2" request-timeout="200"/>
  <int:method name="echoUpperCase" request-channel="inputB"/>
  <int:method name="echoViaDefault"/>
</int:gateway>

You can also provide individual headers per method invocation via XML. This could be very useful if the headers you want to set are static in nature and you don’t want to embed them in the gateway’s method signature via @Header annotations. For example, in the Loan Broker example we want to influence how aggregation of the Loan quotes will be done based on what type of request was initiated (single quote or all quotes). Determining the type of the request by evaluating what gateway method was invoked, although possible, would violate the separation of concerns paradigm (the method is a java artifact),  but expressing your intention (meta information) via Message headers is natural in a Messaging architecture.

<int:gateway id="loanBrokerGateway"
         service-interface="org.springframework.integration.loanbroker.LoanBrokerGateway">
  <int:method name="getLoanQuote" request-channel="loanBrokerPreProcessingChannel">
    <int:header name="RESPONSE_TYPE" value="BEST"/>
  </int:method>
  <int:method name="getAllLoanQuotes" request-channel="loanBrokerPreProcessingChannel">
    <int:header name="RESPONSE_TYPE" value="ALL"/>
  </int:method>
</int:gateway>

In the above case you can clearly see how a different value will be set for the RESPONSE_TYPE header based on the gateway’s method.

Expressions and "Global" Headers

The <header/> element supports expression as an alternative to value. The SpEL expression is evaluated to determine the value of the header. There is no #root object but the following variables are available:

#args - an Object[] containing the method arguments

#gatewayMethod - the java.lang.reflect.Method object representing the method in the service-interface that was invoked. A header containing this variable can be used later in the flow, for example, for routing. For example, if you wish to route on the simple method name, you might add a header with the expression #gatewayMethod.name.

[Note]Note

The java.lang.reflect.Method is not serializable; a header with the expression #gatewayMethod will be lost if you later serialize the message. So, you may wish to use #gatewayMethod.name or #gatewayMethod.toString() in those cases; the toString() method provides a String representation of the method, including parameter and return types.

[Note]Note

Prior to 3.0, the #method variable was available, representing the method name only. This is still available, but deprecated; use #gatewayMethod.name instead.

Since 3.0, <default-header/> s can be defined to add headers to all messages produced by the gateway, regardless of the method invoked. Specific headers defined for a method take precedence over default headers. Specific headers defined for a method here will override any @Header annotations in the service interface. However, default headers will NOT override any @Header annotations in the service interface.

The gateway now also supports a default-payload-expression which will be applied for all methods (unless overridden).

8.4.5 Mapping Method Arguments to a Message

Using the configuration techniques in the previous section allows control of how method arguments are mapped to message elements (payload and header(s)). When no explicit configuration is used, certain conventions are used to perform the mapping. In some cases, these conventions cannot determine which argument is the payload and which should be mapped to headers.

public String send1(Object foo, Map bar);

public String send2(Map foo, Map bar);

In the first case, the convention will map the first argument to the payload (as long as it is not a Map) and the contents of the second become headers.

In the second case (or the first when the argument for parameter foo is a Map), the framework cannot determine which argument should be the payload; mapping will fail. This can generally be resolved using a payload-expression, a @Payload annotation and/or a @Headers annotation.

Alternatively, and whenever the conventions break down, you can take the entire responsibility for mapping the method calls to messages. To do this, implement a MethodArgsMessageMapper and provide it to the <gateway/> using the mapper attribute. The mapper maps a MethodArgsHolder, which is a simple class wrapping the java.lang.reflect.Method instance and an Object[] containing the arguments. When providing a custom mapper, the default-payload-expression attribute and <default-header/> elements are not allowed on the gateway; similarly, the payload-expression attribute and <header/> elements are not allowed on any <method/> elements.

Mapping Method Arguments

Here are examples showing how method arguments can be mapped to the message (and some examples of invalid configuration):

public interface MyGateway {

    void payloadAndHeaderMapWithoutAnnotations(String s, Map<String, Object> map);

    void payloadAndHeaderMapWithAnnotations(@Payload String s, @Headers Map<String, Object> map);

    void headerValuesAndPayloadWithAnnotations(@Header("k1") String x, @Payload String s, @Header("k2") String y);

    void mapOnly(Map<String, Object> map); // the payload is the map and no custom headers are added

    void twoMapsAndOneAnnotatedWithPayload(@Payload Map<String, Object> payload, Map<String, Object> headers);

    @Payload("#args[0] + #args[1] + '!'")
    void payloadAnnotationAtMethodLevel(String a, String b);

    @Payload("@someBean.exclaim(#args[0])")
    void payloadAnnotationAtMethodLevelUsingBeanResolver(String s);

    void payloadAnnotationWithExpression(@Payload("toUpperCase()") String s);

    void payloadAnnotationWithExpressionUsingBeanResolver(@Payload("@someBean.sum(#this)") String s); //  1

    // invalid
    void twoMapsWithoutAnnotations(Map<String, Object> m1, Map<String, Object> m2);

    // invalid
    void twoPayloads(@Payload String s1, @Payload String s2);

    // invalid
    void payloadAndHeaderAnnotationsOnSameParameter(@Payload @Header("x") String s);

    // invalid
    void payloadAndHeadersAnnotationsOnSameParameter(@Payload @Headers Map<String, Object> map);

}

1

Note that in this example, the SpEL variable #this refers to the argument - in this case, the value of 's'.

The XML equivalent looks a little different, since there is no #this context for the method argument, but expressions can refer to method arguments using the #args variable:

<int:gateway id="myGateway" service-interface="org.foo.bar.MyGateway">
  <int:method name="send1" payload-expression="#args[0] + 'bar'"/>
  <int:method name="send2" payload-expression="@someBean.sum(#args[0])"/>
  <int:method name="send3" payload-expression="#method"/>
  <int:method name="send4">
    <int:header name="foo" expression="#args[2].toUpperCase()"/>
  </int:method>
</int:gateway>

8.4.6 @MessagingGateway Annotation

Starting with version 4.0, gateway service interfaces can be marked with a @MessagingGateway annotation instead of requiring the definition of a <gateway /> xml element for configuration. The following compares the two approaches for configuring the same gateway:

<int:gateway id="myGateway" service-interface="org.foo.bar.TestGateway"
      default-request-channel="inputC">
  <int:default-header name="calledMethod" expression="#gatewayMethod.name"/>
  <int:method name="echo" request-channel="inputA" reply-timeout="2" request-timeout="200"/>
  <int:method name="echoUpperCase" request-channel="inputB">
  		<int:header name="foo" value="bar"/>
  </int:method>
  <int:method name="echoViaDefault"/>
</int:gateway>
@MessagingGateway(name = "myGateway", defaultRequestChannel = "inputC",
		  defaultHeaders = @GatewayHeader(name = "calledMethod",
		                           expression="#gatewayMethod.name"))
public interface TestGateway {

   @Gateway(requestChannel = "inputA", replyTimeout = 2, requestTimeout = 200)
   String echo(String payload);

   @Gateway(requestChannel = "inputB", headers = @GatewayHeader(name = "foo", value="bar"))
   String echoUpperCase(String payload);

   String echoViaDefault(String payload);

}
[Important]Important

As with the XML version, Spring Integration creates the proxy implementation with its messaging infrastructure when it discovers these annotations during a component scan. To perform this scan and register the BeanDefinition in the application context, add the @IntegrationComponentScan annotation to a @Configuration class. The standard @ComponentScan infrastructure doesn’t deal with interfaces; therefore the custom @IntegrationComponentScan logic has been introduced to detect the @MessagingGateway annotation on interfaces and register GatewayProxyFactoryBean s for them. See also the section called “CompletableFuture”.

Along with the @MessagingGateway annotation, you can mark a service interface with the @Profile annotation to avoid creating the bean if such a profile is not active.

[Note]Note

If you have no XML configuration, the @EnableIntegration annotation is required on at least one @Configuration class. See Section 3.5, “Configuration and @EnableIntegration” for more information.

8.4.7 Invoking No-Argument Methods

When invoking methods on a Gateway interface that do not have any arguments, the default behavior is to receive a Message from a PollableChannel.

At times however, you may want to trigger no-argument methods so that you can in fact interact with other components downstream that do not require user-provided parameters, e.g. triggering no-argument SQL calls or Stored Procedures.

In order to achieve send-and-receive semantics, you must provide a payload. In order to generate a payload, method parameters on the interface are not necessary. You can either use the @Payload annotation or the payload-expression attribute in XML on the method sub-element. Below please find a few examples of what the payloads could be:

  • a literal string
  • #gatewayMethod.name
  • new java.util.Date()
  • @someBean.someMethod()'s return value

Here is an example using the @Payload annotation:

public interface Cafe {

    @Payload("new java.util.Date()")
    List<Order> retrieveOpenOrders();

}

If a method has no argument and no return value, but does contain a payload expression, it will be treated as a send-only operation.
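
For example, the following hypothetical method has no arguments and no return value, but does define a payload expression, so invoking it simply sends a Message and returns immediately:

public interface Cafe {

    @Payload("new java.util.Date()")
    void open(); // send-only: a Message with a new Date payload is sent, and no reply is awaited

}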

8.4.8 Error Handling

Of course, the Gateway invocation might result in errors. By default any error that has occurred downstream will be re-thrown as a MessagingException (RuntimeException) upon the Gateway’s method invocation. However there are times when you may want to simply log the error rather than propagating it, or you may want to treat an Exception as a valid reply, by mapping it to a Message that will conform to some "error message" contract that the caller understands. To accomplish this, the Gateway provides support for a Message Channel dedicated to the errors via the error-channel attribute. In the example below, you can see that a transformer is used to create a reply Message from the Exception.

<int:gateway id="sampleGateway"
    default-request-channel="gatewayChannel"
    service-interface="foo.bar.SimpleGateway"
    error-channel="exceptionTransformationChannel"/>

<int:transformer input-channel="exceptionTransformationChannel"
        ref="exceptionTransformer" method="createErrorResponse"/>

The exceptionTransformer could be a simple POJO that knows how to create the expected error response objects. That would then be the payload that is sent back to the caller. Obviously, you could do many more elaborate things in such an "error flow" if necessary. It might involve routers (including Spring Integration’s ErrorMessageExceptionTypeRouter), filters, and so on. Most of the time, a simple transformer should be sufficient, however.

Alternatively, you might want to only log the Exception (or send it somewhere asynchronously). If you provide a one-way flow, then nothing would be sent back to the caller. In the case that you want to completely suppress Exceptions, you can provide a reference to the global "nullChannel" (essentially a /dev/null approach). Finally, as mentioned above, if no "error-channel" is defined at all, then the Exceptions will propagate as usual.

[Important]Important

Exposing the messaging system via simple POJI Gateways obviously provides benefits, but "hiding" the reality of the underlying messaging system does come at a price, so there are certain things you should consider. We want our Java method to return as quickly as possible and not hang for an indefinite amount of time while the caller is waiting on it to return (void, return value, or a thrown Exception). When regular methods are used as proxies in front of the Messaging system, we have to take into account the potentially asynchronous nature of the underlying messaging. This means that there is a chance that a Message initiated by a Gateway could be dropped by a Filter, thus never reaching a component that is responsible for producing a reply. Some Service Activator method might result in an Exception, thus providing no reply (as we don’t generate Null messages). So, as you can see, there are multiple scenarios where a reply message might not be coming. That is perfectly natural in messaging systems. However, think about the implication on the gateway method. The Gateway method’s input arguments were incorporated into a Message and sent downstream. The reply Message would be converted to a return value of the Gateway’s method. So you might want to ensure that for each Gateway call there will always be a reply Message. Otherwise, your Gateway method might never return and will hang indefinitely. One of the ways of handling this situation is via an Asynchronous Gateway (explained later in this section). Another way of handling it is to explicitly set the reply-timeout attribute. That way, the gateway will not hang any longer than the time specified by the reply-timeout and will return null if that timeout elapses. Finally, you might want to consider setting downstream flags such as requires-reply on a service-activator or throw-exceptions-on-rejection on a filter. These options will be discussed in more detail in the final section of this chapter.

[Note]Note

If the downstream flow returns an ErrorMessage, its payload (a Throwable) is treated as a regular downstream error: if there is an error-channel configured, it will be sent there, to the error flow; otherwise the payload is thrown to the caller of the gateway. Similarly, if the error flow on the error-channel returns an ErrorMessage, its payload is thrown to the caller. The same applies to any message with a Throwable payload. This can be useful in asynchronous situations when you need to propagate an Exception directly to the caller. To do so, you can either return an Exception (as the reply from some service) or throw it. Generally, even with an asynchronous flow, the framework takes care of propagating an exception thrown by the downstream flow back to the gateway. The TCP Client-Server Multiplex sample demonstrates both techniques to return the exception to the caller. It emulates a socket IO error to the waiting thread by using an aggregator with group-timeout (see the section called “Aggregator and Group Timeout”) and a MessagingTimeoutException reply on the discard flow.

8.4.9 Asynchronous Gateway

Introduction

As a pattern, the Messaging Gateway is a very nice way to hide messaging-specific code while still exposing the full capabilities of the messaging system. As you’ve seen, the GatewayProxyFactoryBean provides a convenient way to expose a Proxy over a service-interface, thus giving you POJO-based access to a messaging system (based on objects in your own domain, or primitives/Strings, etc). But when a gateway is exposed via simple POJO methods which return values, it does imply that for each Request message (generated when the method is invoked) there must be a Reply message (generated when the method has returned). Since Messaging systems are naturally asynchronous, you may not always be able to guarantee the contract where "for each request there will always be a reply". With Spring Integration 2.0 we introduced support for an Asynchronous Gateway, which is a convenient way to initiate flows where you may not know if a reply is expected or how long it will take for replies to arrive.

A natural way to handle these types of scenarios in Java would be relying upon java.util.concurrent.Future instances, and that is exactly what Spring Integration uses to support an Asynchronous Gateway.

From the XML configuration perspective, there is nothing different; you still define an Asynchronous Gateway the same way as a regular Gateway.

<int:gateway id="mathService" 
     service-interface="org.springframework.integration.sample.gateway.futures.MathServiceGateway"
     default-request-channel="requestChannel"/>

However the Gateway Interface (service-interface) is a little different:

public interface MathServiceGateway {

  Future<Integer> multiplyByTwo(int i);

}

As you can see from the example above, the return type for the gateway method is a Future. When GatewayProxyFactoryBean sees that the return type of the gateway method is a Future, it immediately switches to the async mode by utilizing an AsyncTaskExecutor. That is all. The call to such a method always returns immediately with a Future instance. Then, you can interact with the Future at your own pace to get the result, cancel, etc. And, as with any other use of Future instances, calling get() may reveal a timeout, an execution exception, and so on.

MathServiceGateway mathService = ac.getBean("mathService", MathServiceGateway.class);
Future<Integer> result = mathService.multiplyByTwo(number);
// do something else here since the reply might take a moment
int finalResult =  result.get(1000, TimeUnit.SECONDS);

For a more detailed example, please refer to the async-gateway sample distributed within the Spring Integration samples.

ListenableFuture

Starting with version 4.1, async gateway methods can also return ListenableFuture (introduced in Spring Framework 4.0). These return types allow you to provide a callback which is invoked when the result is available (or an exception occurs). When the gateway detects this return type, and the task executor (see below) is an AsyncListenableTaskExecutor, the executor’s submitListenable() method is invoked.

ListenableFuture<String> result = this.asyncGateway.async("foo");
result.addCallback(new ListenableFutureCallback<String>() {

    @Override
    public void onSuccess(String result) {
        ...
    }

    @Override
    public void onFailure(Throwable t) {
        ...
    }
});

AsyncTaskExecutor

By default, the GatewayProxyFactoryBean uses org.springframework.core.task.SimpleAsyncTaskExecutor when submitting internal AsyncInvocationTask instances for any gateway method whose return type is Future. However the async-executor attribute in the <gateway/> element’s configuration allows you to provide a reference to any implementation of java.util.concurrent.Executor available within the Spring application context.

The (default) SimpleAsyncTaskExecutor supports both Future and ListenableFuture return types, returning FutureTask or ListenableFutureTask respectively. Also see the section called “CompletableFuture” below. Even though there is a default executor, it is often useful to provide an external one so that you can identify its threads in logs (when using XML, the thread name is based on the executor’s bean name):

@Bean
public AsyncTaskExecutor exec() {
    SimpleAsyncTaskExecutor simpleAsyncTaskExecutor = new SimpleAsyncTaskExecutor();
    simpleAsyncTaskExecutor.setThreadNamePrefix("exec-");
    return simpleAsyncTaskExecutor;
}

@MessagingGateway(asyncExecutor = "exec")
public interface ExecGateway {

    @Gateway(requestChannel = "gatewayChannel")
    Future<?> doAsync(String foo);

}

If you wish to return a different Future implementation, you can provide a custom executor, or disable the executor altogether and return the Future in the reply message payload from the downstream flow. To disable the executor, simply set it to null in the GatewayProxyFactoryBean (setAsyncTaskExecutor(null)). When configuring the gateway with XML, use async-executor=""; when configuring using the @MessagingGateway annotation, use:

@MessagingGateway(asyncExecutor = AnnotationConstants.NULL)
public interface NoExecGateway {

    @Gateway(requestChannel = "gatewayChannel")
    Future<?> doAsync(String foo);

}
[Important]Important

If the return type is a specific concrete Future implementation or some other subinterface that is not supported by the configured executor, the flow will run on the caller’s thread and the flow must return the required type in the reply message payload.

CompletableFuture

Starting with version 4.2, gateway methods can now return CompletableFuture<?>. There are several modes of operation when returning this type:

When an async executor is provided and the return type is exactly CompletableFuture (not a subclass), the framework will run the task on the executor and immediately return a CompletableFuture to the caller. CompletableFuture.supplyAsync(Supplier<U> supplier, Executor executor) is used to create the future.

When the async executor is explicitly set to null and the return type is CompletableFuture or the return type is a subclass of CompletableFuture, the flow is invoked on the caller’s thread. In this scenario, it is expected that the downstream flow will return a CompletableFuture of the appropriate type.

Usage Scenarios

In the following scenario, the caller thread returns immediately with a CompletableFuture<Invoice>, which is completed when the downstream flow replies to the gateway (with an Invoice object).

CompletableFuture<Invoice> order(Order order);
<int:gateway service-interface="foo.Service" default-request-channel="orders" />

In the following scenario, the caller thread returns with a CompletableFuture<Invoice> when the downstream flow provides it as the payload of the reply to the gateway. Some other process must complete the future when the invoice is ready.

CompletableFuture<Invoice> order(Order order);
<int:gateway service-interface="foo.Service" default-request-channel="orders"
    async-executor="" />

In the following scenario, the caller thread returns with a CompletableFuture<Invoice> when the downstream flow provides it as the payload of the reply to the gateway. Some other process must complete the future when the invoice is ready. Because the return type is a subclass of CompletableFuture, the async executor is not used; if DEBUG logging is enabled, a log entry is emitted indicating that the async executor cannot be used for this scenario.

MyCompletableFuture<Invoice> order(Order order);
<int:gateway service-interface="foo.Service" default-request-channel="orders" />


CompletableFuture instances can be used to perform additional manipulation on the reply, such as the following:

CompletableFuture<String> process(String data);

...

CompletableFuture<String> result = process("foo")
    .thenApply(t -> t.toUpperCase());

...

String out = result.get(10, TimeUnit.SECONDS);

Reactor Promise

Starting with version 4.1, the GatewayProxyFactoryBean allows the use of a Reactor with gateway interface methods, utilizing a Promise<?> return type. The internal AsyncInvocationTask is wrapped in a reactor.function.Supplier, using a default RingBufferDispatcher for the Promise consumption. Only methods with the Promise<?> return type are run on the reactor’s dispatcher.

A Promise can be used to retrieve the result later (similar to a Future<?>) or you can consume from it with the dispatcher invoking your Consumer when the result is returned to the gateway.

[Important]Important

The Promise isn’t flushed immediately by the framework. Hence the underlying message flow won’t be started before the gateway method returns (as it is with the Future<?> Executor task). The flow will be started when the Promise is flushed or via Promise.await(). Alternatively, the Promise (being a Composable) might be part of a Reactor Stream<?>, in which case the flush() relates to the entire Stream. For example:

@MessagingGateway
public static interface TestGateway {

    @Gateway(requestChannel = "promiseChannel")
    Promise<Integer> multiply(Integer value);

}

    ...

@ServiceActivator(inputChannel = "promiseChannel")
public Integer multiply(Integer value) {
    return value * 2;
}

    ...

Streams.defer(Arrays.asList("1", "2", "3", "4", "5"))
        .get()
        .map(Integer::parseInt)
        .mapMany(integer -> testGateway.multiply(integer))
        .collect()
        .consume(integers -> ...)
        .flush();

Another example is a simple callback scenario:

Promise<Invoice> promise = service.process(myOrder);

promise.consume(new Consumer<Invoice>() {
	@Override
	public void accept(Invoice invoice) {
		handleInvoice(invoice);
	}
})
.flush();

The calling thread continues, with handleInvoice() being called when the flow completes.

8.4.10 Gateway behavior when no response arrives

As explained earlier, the Gateway provides a convenient way of interacting with a Messaging system via POJO method invocations. However, a typical method invocation, which is generally expected to always return (even if only with an Exception), might not always map one-to-one to message exchanges (e.g., a reply message might not arrive, which is equivalent to a method not returning). It is important to go over several scenarios, especially in the Sync Gateway case, to understand the default behavior of the Gateway and how to deal with these scenarios to make the Sync Gateway behavior more predictable, regardless of the outcome of the message flow that was initiated from such a Gateway.

There are certain attributes that can be configured to make Sync Gateway behavior more predictable, but some of them might not always work as you might have expected. One of them is reply-timeout (at the method level, or default-reply-timeout at the gateway level). So, let’s look at the reply-timeout attribute and see how it can or can’t influence the behavior of the Sync Gateway in various scenarios. We will look at a single-threaded scenario (all components downstream are connected via Direct Channels) and multi-threaded scenarios (e.g., somewhere downstream you may have a Pollable or Executor Channel which breaks the single-thread boundary).

Long running process downstream

Sync Gateway - single-threaded. If a component downstream is still running (e.g., an infinite loop or a very slow service), setting a reply-timeout has no effect and the Gateway method call will not return until that downstream service exits (via return or exception).

Sync Gateway - multi-threaded. If a component downstream is still running (e.g., an infinite loop or a very slow service) in a multi-threaded message flow, setting the reply-timeout will have an effect by allowing the gateway method invocation to return once the timeout has been reached, since the GatewayProxyFactoryBean simply polls on the reply channel, waiting for a message until the timeout expires. However, this could result in a null return from the Gateway method if the timeout is reached before the actual reply is produced. It is also important to understand that the reply message (if produced) will be sent to the reply channel after the Gateway method invocation might have returned, so you must be aware of that and design your flow with this in mind.

Downstream component returns 'null'

Sync Gateway - single-threaded. If a component downstream returns null, the Gateway method call will hang indefinitely unless: a) a reply-timeout has been configured, or b) the requires-reply attribute has been set on the downstream component (e.g., service-activator) that might return null. In the latter case, an Exception is thrown and propagated to the Gateway.

Sync Gateway - multi-threaded. Behavior is the same as above.

Downstream component return signature is void while Gateway method signature is non-void

Sync Gateway - single-threaded. If a component downstream returns void while the Gateway method signature is non-void, the Gateway method call will hang indefinitely unless a reply-timeout has been configured.

Sync Gateway - multi-threaded. Behavior is the same as above.

Downstream component results in Runtime Exception (regardless of the method signature)

Sync Gateway - single-threaded. If a component downstream throws a Runtime Exception, the exception is propagated via an Error Message back to the gateway and re-thrown.

Sync Gateway - multi-threaded. Behavior is the same as above.

[Important]Important

It is also important to understand that, by default, reply-timeout is unbounded*, which means that if it is not explicitly set there are several scenarios (described above) where your Gateway method invocation might hang indefinitely. So, make sure you analyze your flow, and if there is even a remote possibility of one of these scenarios occurring, set the reply-timeout attribute to a safe value. Even better, set the requires-reply attribute of the downstream component to true to ensure a timely response, produced by the throwing of an Exception as soon as that downstream component returns null internally. But also realize that there are some scenarios (see the very first one) where reply-timeout will not help. That means it is also important to analyze your message flow and decide when to use a Sync Gateway vs. an Async Gateway. As you’ve seen, the latter case is simply a matter of defining Gateway methods that return Future instances. Then you are guaranteed to receive that return value and will have more granular control over the results of the invocation.

Also, when dealing with a Router you should remember that setting the resolution-required attribute to true will result in an Exception thrown by the router if it can not resolve a particular channel. Likewise, when dealing with a Filter, you can set the throw-exception-on-rejection attribute. In both of these cases, the resulting flow will behave like one containing a service-activator with the requires-reply attribute. In other words, it will help to ensure a timely response from the Gateway method invocation.

[Note]Note

* reply-timeout is unbounded for <gateway/> elements (created by the GatewayProxyFactoryBean). Inbound gateways for external integration (ws, http, etc.) share many characteristics and attributes with these gateways. However, for those inbound gateways, the default reply-timeout is 1000 milliseconds (1 second). If a downstream async handoff is made to another thread, you may need to increase this attribute to allow enough time for the flow to complete before the gateway times out.

[Important]Important

It is important to understand that the timer starts when the thread returns to the gateway, i.e. when the flow completes or a message is handed off to another thread. At that time, the calling thread starts waiting for the reply. If the flow was completely synchronous, the reply will be immediately available; for asynchronous flows, the thread will wait for up to this time.

8.5 Service Activator

8.5.1 Introduction

The Service Activator is the endpoint type for connecting any Spring-managed Object to an input channel so that it may play the role of a service. If the service produces output, it may also be connected to an output channel. Alternatively, an output-producing service may be located at the end of a processing pipeline or message flow, in which case the inbound Message’s "replyChannel" header can be used. This is the default behavior if no output channel is defined and, as with most of the configuration options you’ll see here, the same behavior actually applies for most of the other components we have seen.

8.5.2 Configuring Service Activator

To create a Service Activator, use the service-activator element with the input-channel and ref attributes:

<int:service-activator input-channel="exampleChannel" ref="exampleHandler"/>

The configuration above assumes that "exampleHandler" either contains a single method annotated with the @ServiceActivator annotation or that it defines only one public method. To delegate to an explicitly defined method of any object, simply add the "method" attribute.

<int:service-activator input-channel="exampleChannel" ref="somePojo" method="someMethod"/>

In either case, when the service method returns a non-null value, the endpoint will attempt to send the reply message to an appropriate reply channel. To determine the reply channel, it will first check if an "output-channel" was provided in the endpoint configuration:

<int:service-activator input-channel="exampleChannel" output-channel="replyChannel"
                       ref="somePojo" method="someMethod"/>

If the method returns a result and no "output-channel" is defined, the framework will then check the Message’s replyChannel header value. If that value is available, it will then check its type. If it is a MessageChannel, the reply message will be sent to that channel. If it is a String, the endpoint will attempt to resolve the channel name to a channel instance. If the channel cannot be resolved, a DestinationResolutionException will be thrown. If it can be resolved, the Message will be sent there. This is the technique used for Request Reply messaging in Spring Integration, and it is also an example of the Return Address pattern.
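For illustration, when sending a request message programmatically (rather than through a gateway proxy), the Return Address can be supplied by the caller itself. A minimal sketch, with hypothetical channel names:

Message<String> request = MessageBuilder.withPayload("example")
        .setReplyChannelName("replyChannel")  // or setReplyChannel(someMessageChannel)
        .build();
exampleChannel.send(request);

// the service-activator's reply is routed to "replyChannel"
// because no output-channel was configured on the endpoint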

If your method returns a result, and you want to discard it and end the flow, you should configure the output-channel to send to a NullChannel. For convenience, the framework registers one with the name nullChannel. See Section 4.1.6, “Special Channels” for more information.

The Service Activator is one of those components that is not required to produce a reply message. If your method returns null or has a void return type, the Service Activator exits after the method invocation, without any signals. This behavior can be controlled by the AbstractReplyProducingMessageHandler.requiresReply option, also exposed as requires-reply when configuring with the XML namespace. If the flag is set to true and the method returns null, a ReplyRequiredException is thrown.
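For example, a sketch using the annotation equivalent (the channel names, types, and the lookupInvoice() helper are hypothetical); if the method returns null, a ReplyRequiredException is thrown instead of silently producing no reply:

@ServiceActivator(inputChannel = "exampleChannel", outputChannel = "replyChannel",
        requiresReply = "true")
public Invoice handle(Order order) {
    Invoice invoice = lookupInvoice(order);   // may legitimately return null
    return invoice;                           // null here triggers a ReplyRequiredException
}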

The argument in the service method could be either a Message or an arbitrary type. If the latter, it is assumed to be the Message payload, which will be extracted from the message and injected into the service method. This is generally the recommended approach, as it follows and promotes a POJO model when working with Spring Integration. Arguments may also have @Header or @Headers annotations, as described in the section on annotation support.

[Note]Note

The service method is not required to have any arguments at all, which means you can implement event-style Service Activators, where all you care about is an invocation of the service method, not worrying about the contents of the message. Think of it as a NULL JMS message. An example use-case for such an implementation could be a simple counter/monitor of messages deposited on the input channel.
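A minimal sketch of such an event-style handler (the channel name is hypothetical); the method takes no arguments and simply counts arrivals:

public class MessageCounter {

    private final AtomicLong count = new AtomicLong();

    @ServiceActivator(inputChannel = "countedChannel")
    public void messageReceived() {     // the message content is never examined
        this.count.incrementAndGet();
    }

    public long getCount() {
        return this.count.get();
    }

}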

Starting with version 4.1, the framework correctly converts Message properties (payload and headers) to Java 8 Optional POJO method parameters:

public class MyBean {
    public String computeValue(Optional<String> payload,
               @Header(value="foo", required=false) String foo1,
               @Header(value="foo") Optional<String> foo2) {
        if (payload.isPresent()) {
            String value = payload.get();
            ...
        }
        else {
           ...
       }
    }

}

Using a ref attribute is generally recommended if the custom Service Activator handler implementation can be reused in other <service-activator> definitions. However if the custom Service Activator handler implementation is only used within a single definition of the <service-activator>, you can provide an inner bean definition:

<int:service-activator id="exampleServiceActivator" input-channel="inChannel"
            output-channel = "outChannel" method="foo">
    <beans:bean class="org.foo.ExampleServiceActivator"/>
</int:service-activator>
[Note]Note

Using both the "ref" attribute and an inner handler definition in the same <service-activator> configuration is not allowed, as it creates an ambiguous condition and will result in an Exception being thrown.

[Important]Important

If the "ref" attribute references a bean that extends AbstractMessageProducingHandler (such as handlers provided by the framework itself), the configuration is optimized by injecting the output channel into the handler directly. In this case, each "ref" must be to a separate bean instance (or a prototype-scoped bean), or use the inner <bean/> configuration type. If you inadvertently reference the same message handler from multiple beans, you will get a configuration exception.

Service Activators and the Spring Expression Language (SpEL)

Since Spring Integration 2.0, Service Activators can also benefit from SpEL (http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/expressions.html).

You may now invoke any bean method without pointing to the bean via a ref attribute or including it as an inner bean definition. For example:

<int:service-activator input-channel="in" output-channel="out"
	expression="@accountService.processAccount(payload, headers.accountId)"/>

<bean id="accountService" class="foo.bar.Account"/>

In the above configuration, instead of injecting accountService using a ref attribute or as an inner bean, we are simply using SpEL’s @beanId notation and invoking a method which takes a type compatible with the Message payload. We are also passing a header value. As you can see, any valid SpEL expression can be evaluated against any content in the Message. For simple scenarios your Service Activators do not even have to reference a bean if all logic can be encapsulated in such an expression.

<int:service-activator input-channel="in" output-channel="out" expression="payload * 2"/>

In the above configuration our service logic is simply to multiply the payload value by 2, and SpEL lets us handle it relatively easily.

8.5.3 Asynchronous Service Activator

The service activator is invoked by the calling thread; this would be some upstream thread if the input channel is a SubscribableChannel, or a poller thread for a PollableChannel. If the service returns a ListenableFuture<?> the default action is to send that as the payload of the message sent to the output (or reply) channel. Starting with version 4.3, you can now set the async attribute to true (setAsync(true) when using Java configuration). If the service returns a ListenableFuture<?> when this is true, the calling thread is released immediately, and the reply message is sent on the thread (from within your service) that completes the future. This is particularly advantageous for long-running services using a PollableChannel because the poller thread is freed up to perform other services within the framework.

If the service completes the future with an Exception, normal error processing will occur: an ErrorMessage is sent to the channel in the errorChannel message header, if present, or otherwise to the default errorChannel (if available).
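As an illustration, here is a sketch of a service whose method returns a ListenableFuture so that, with async="true", the calling thread is released while the work runs on a separate executor; the class, type names, and executor choice are assumptions:

public class SlowInvoiceService {

    private final AsyncListenableTaskExecutor executor = new SimpleAsyncTaskExecutor();

    public ListenableFuture<Invoice> generate(Order order) {
        // the calling (poller) thread returns immediately; the reply message is sent
        // from the thread that completes this future
        return this.executor.submitListenable(() -> createInvoice(order));
    }

    private Invoice createInvoice(Order order) {
        ...
    }

}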

8.6 Delayer

8.6.1 Introduction

A Delayer is a simple endpoint that allows a Message flow to be delayed by a certain interval. When a Message is delayed, the original sender will not block. Instead, the delayed Messages will be scheduled with an instance of org.springframework.scheduling.TaskScheduler to be sent to the output channel after the delay has passed. This approach is scalable even for rather long delays, since it does not result in a large number of blocked sender Threads. On the contrary, in the typical case a thread pool will be used for the actual execution of releasing the Messages. Below you will find several examples of configuring a Delayer.

8.6.2 Configuring Delayer

The <delayer> element is used to delay the Message flow between two Message Channels. As with the other endpoints, you can provide the input-channel and output-channel attributes, but the delayer also has default-delay and expression attributes (and expression sub-element) that are used to determine the number of milliseconds that each Message should be delayed. The following delays all messages by 3 seconds:

<int:delayer id="delayer" input-channel="input"
             default-delay="3000" output-channel="output"/>

If you need per-Message determination of the delay, then you can also provide the SpEL expression using the expression attribute:

<int:delayer id="delayer" input-channel="input" output-channel="output"
             default-delay="3000" expression="headers['delay']"/>

In the example above, the 3 second delay would only apply when the expression evaluates to null for a given inbound Message. If you only want to apply a delay to Messages that have a valid result of the expression evaluation, then you can use a default-delay of 0 (the default). For any Message that has a delay of 0 (or less), the Message will be sent immediately, on the calling Thread.
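For example, the sending side can set that header on a per-message basis; a minimal sketch, where the input variable refers to the delayer’s input channel:

Message<String> message = MessageBuilder.withPayload("test")
        .setHeader("delay", 5000)   // delay this particular message by 5 seconds
        .build();
input.send(message);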

[Tip]Tip

The delay handler supports expression evaluation results that represent an interval in milliseconds (any Object whose toString() method produces a value that can be parsed into a Long) as well as java.util.Date instances representing an absolute time. In the first case, the milliseconds will be counted from the current time (e.g. a value of 5000 would delay the Message for at least 5 seconds from the time it is received by the Delayer). With a Date instance, the Message will not be released until the time represented by that Date object. In either case, a value that equates to a non-positive delay, or a Date in the past, will not result in any delay. Instead, it will be sent directly to the output channel on the original sender’s Thread. If the expression evaluation result is not a Date, and can not be parsed as a Long, the default delay (if any) will be applied.

[Important]Important

The expression evaluation may throw an evaluation Exception for various reasons, including an invalid expression, or other conditions. By default, such exceptions are ignored (logged at DEBUG level) and the delayer falls back to the default delay (if any). You can modify this behavior by setting the ignore-expression-failures attribute. By default this attribute is set to true and the Delayer behavior is as described above. However, if you wish to not ignore expression evaluation exceptions, and throw them to the delayer’s caller, set the ignore-expression-failures attribute to false.

[Tip]Tip

Notice in the example above that the delay expression is specified as headers['delay']. This is the SpEL Indexer syntax to access a Map element (MessageHeaders implements Map), it invokes: headers.get("delay"). For simple map element names (that do not contain .) you can also use the SpEL dot accessor syntax, where the above header expression can be specified as headers.delay. But, different results are achieved if the header is missing. In the first case, the expression will evaluate to null; the second will result in something like:

 org.springframework.expression.spel.SpelEvaluationException: EL1008E:(pos 8):
		   Field or property 'delay' cannot be found on object of type 'org.springframework.messaging.MessageHeaders'

So, if there is a possibility of the header being omitted, and you want to fall back to the default delay, it is generally more efficient (and recommended) to use the Indexer syntax instead of dot property accessor syntax, because detecting the null is faster than catching an exception.

The delayer delegates to an instance of Spring’s TaskScheduler abstraction. The default scheduler used by the delayer is the ThreadPoolTaskScheduler instance provided by Spring Integration on startup (see the section describing Task Scheduler configuration). If you want to delegate to a different scheduler, you can provide a reference through the delayer element’s scheduler attribute:

<int:delayer id="delayer" input-channel="input" output-channel="output"
    expression="headers.delay"
    scheduler="exampleTaskScheduler"/>

<task:scheduler id="exampleTaskScheduler" pool-size="3"/>
[Tip]Tip

If you configure an external ThreadPoolTaskScheduler, you can set the waitForTasksToCompleteOnShutdown = true property on that scheduler. It allows successful completion of delay tasks which are already in the execution state (releasing the Message) when the application is shut down. Before Spring Integration 2.2, this property was available on the <delayer> element, because the DelayHandler could create its own scheduler in the background. Since 2.2, the delayer requires an external scheduler instance and waitForTasksToCompleteOnShutdown was removed; you should use the scheduler’s own configuration.

[Tip]Tip

Also keep in mind that ThreadPoolTaskScheduler has an errorHandler property which can be injected with an implementation of org.springframework.util.ErrorHandler. This handler allows you to process an Exception from the thread of the scheduled task that sends the delayed message. By default it uses an org.springframework.scheduling.support.TaskUtils$LoggingErrorHandler and you will see a stack trace in the logs. You might want to consider using an org.springframework.integration.channel.MessagePublishingErrorHandler, which sends an ErrorMessage to an error channel, either the one from the failed Message’s header or the default error-channel.
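A sketch of such an external scheduler using Java configuration (the pool size and bean name are arbitrary):

@Bean
public ThreadPoolTaskScheduler exampleTaskScheduler() {
    ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
    scheduler.setPoolSize(3);
    // let release tasks that are already running finish when the context is shut down
    scheduler.setWaitForTasksToCompleteOnShutdown(true);
    // optionally: scheduler.setErrorHandler(...) to route release-time exceptions to an error channel
    return scheduler;
}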

8.6.3 Delayer and Message Store

The DelayHandler persists delayed Messages into the Message Group in the provided MessageStore. (The groupId is based on the required id attribute of the <delayer> element.) A delayed message is removed from the MessageStore by the scheduled task just before the DelayHandler sends the Message to the output-channel. If the provided MessageStore is persistent (e.g. JdbcMessageStore), it provides the ability to not lose Messages on application shutdown. After application startup, the DelayHandler reads Messages from its Message Group in the MessageStore and reschedules them with a delay based on the original arrival time of the Message (if the delay is numeric). For messages where the delay header was a Date, that Date is used when rescheduling. If a delayed Message remained in the MessageStore longer than its delay, it will be sent immediately after startup.

The <delayer> can be enriched with the mutually exclusive sub-elements <transactional> or <advice-chain>. The List of these AOP Advices is applied to the proxied internal DelayHandler.ReleaseMessageHandler, which has the responsibility to release the Message, after the delay, on a Thread of the scheduled task. It might be used, for example, when the downstream message flow throws an Exception and the ReleaseMessageHandler's transaction is rolled back. In this case the delayed Message remains in the persistent MessageStore. You can use any custom org.aopalliance.aop.Advice implementation within the <advice-chain>. A sample configuration of the <delayer> may look like this:

<int:delayer id="delayer" input-channel="input" output-channel="output"
    expression="headers.delay"
    message-store="jdbcMessageStore">
    <int:advice-chain>
        <beans:ref bean="customAdviceBean"/>
        <tx:advice>
            <tx:attributes>
                <tx:method name="*" read-only="true"/>
            </tx:attributes>
        </tx:advice>
    </int:advice-chain>
</int:delayer>

The DelayHandler can be exported as a JMX MBean with managed operations getDelayedMessageCount and reschedulePersistedMessages, which allows the rescheduling of delayed persisted Messages at runtime, for example, if the TaskScheduler has previously been stopped. These operations can be invoked via a Control Bus command:

Message<String> delayerReschedulingMessage =
    MessageBuilder.withPayload("@'delayer.handler'.reschedulePersistedMessages()").build();
controlBusChannel.send(delayerReschedulingMessage);
[Note]Note

For more information regarding the Message Store, JMX and the Control Bus, please read the sections of this guide that cover those features.

8.7 Scripting support

With Spring Integration 2.1 we added support for the JSR223 "Scripting for Java" specification, introduced in Java version 6. This allows you to use scripts written in any supported language, including Ruby/JRuby, Javascript and Groovy, to provide the logic for various integration components, similar to the way the Spring Expression Language (SpEL) is used in Spring Integration. For more information about JSR223 please refer to the documentation.

[Important]Important

Note that this feature requires Java 6 or higher. Sun developed a JSR223 reference implementation which works with Java 5 but it is not officially supported and we have not tested it with Spring Integration.

In order to use a JVM scripting language, a JSR223 implementation for that language must be included in your class path. Java 6 natively supports Javascript. The Groovy and JRuby projects provide JSR223 support in their standard distributions. Other language implementations may be available or under development. Please refer to the appropriate project website for more information.

[Important]Important

Various JSR223 language implementations have been developed by third parties. A particular implementation’s compatibility with Spring Integration depends on how well it conforms to the specification and/or the implementer’s interpretation of the specification.

[Tip]Tip

If you plan to use Groovy as your scripting language, we recommend you use Spring Integration’s Groovy Support as it offers additional features specific to Groovy. However, you will find this section relevant as well.

8.7.1 Script configuration

Depending on the complexity of your integration requirements, scripts may be provided inline as CDATA in XML configuration or as a reference to a Spring resource containing the script. To enable scripting support, Spring Integration defines a ScriptExecutingMessageProcessor which binds the Message Payload to a variable named payload and the Message Headers to a headers variable, both accessible within the script execution context. All that is left for you to do is write a script that uses these variables. Below are a couple of sample configurations:

Filter

<int:filter input-channel="referencedScriptInput">
   <int-script:script lang="ruby" location="some/path/to/ruby/script/RubyFilterTests.rb"/>
</int:filter>

<int:filter input-channel="inlineScriptInput">
     <int-script:script lang="groovy">
     <![CDATA[
     return payload == 'good'
   ]]>
  </int-script:script>
</int:filter>

Here, you see that the script can be included inline or can reference a resource location via the location attribute. Additionally, the lang attribute corresponds to the language name (or its JSR223 alias).

Other Spring Integration endpoint elements which support scripting include router, service-activator, transformer, and splitter. The scripting configuration in each case would be identical to the above (besides the endpoint element).

Another useful feature of Scripting support is the ability to update (reload) scripts without having to restart the Application Context. To accomplish this, specify the refresh-check-delay attribute on the script element:

<int-script:script location="..." refresh-check-delay="5000"/>

In the above example, the script location will be checked for updates every 5 seconds. If the script is updated, any invocation that occurs later than 5 seconds since the update will result in execution of the new script.

<int-script:script location="..." refresh-check-delay="0"/>

In the above example the context will be updated with any script modifications as soon as such a modification occurs, providing a simple mechanism for real-time configuration.

<int-script:script location="..." refresh-check-delay="-1"/>

Any negative value means the script will not be reloaded after initialization of the application context. This is the default behavior.

[Important]Important

Inline scripts can not be reloaded.

Script variable bindings

Variable bindings are required to enable the script to reference variables externally provided to the script’s execution context. As we have seen, payload and headers are used as binding variables by default. You can bind additional variables to a script via <variable> sub-elements:

<script:script lang="js" location="foo/bar/MyScript.js">
    <script:variable name="foo" value="foo"/>
    <script:variable name="bar" value="bar"/>
    <script:variable name="date" ref="date"/>
</script:script>

As shown in the above example, you can bind a script variable either to a scalar value or a Spring bean reference. Note that payload and headers will still be included as binding variables.

With Spring Integration 3.0, in addition to the variable sub-element, the variables attribute has been introduced. This attribute and variable sub-elements aren’t mutually exclusive and you can combine them within one script component. However variables must be unique, regardless of where they are defined. Also, since Spring Integration 3.0, variable bindings are allowed for inline scripts too:

<service-activator input-channel="input">
    <script:script lang="ruby" variables="foo=FOO, date-ref=dateBean">
        <script:variable name="bar" ref="barBean"/>
        <script:variable name="baz" value="bar"/>
        <![CDATA[
            payload.foo = foo
            payload.date = date
            payload.bar = bar
            payload.baz = baz
            payload
        ]]>
    </script:script>
</service-activator>

The example above shows a combination of an inline script, a variable sub-element and a variables attribute. The variables attribute is a comma-separated value, where each segment contains an = separated pair of the variable and its value. The variable name can be suffixed with -ref, as in the date-ref variable above. That means that the binding variable will have the name date, but the value will be a reference to the dateBean bean from the application context. This may be useful when using Property Placeholder Configuration or command line arguments.

If you need more control over how variables are generated, you can implement your own Java class using the ScriptVariableGenerator strategy:

public interface ScriptVariableGenerator {

    Map<String, Object> generateScriptVariables(Message<?> message);

}

This interface requires you to implement the method generateScriptVariables(Message). The Message argument allows you to access any data available in the Message payload and headers and the return value is the Map of bound variables. This method will be called every time the script is executed for a Message. All you need to do is provide an implementation of ScriptVariableGenerator and reference it with the script-variable-generator attribute:

<int-script:script location="foo/bar/MyScript.groovy"
        script-variable-generator="variableGenerator"/>

<bean id="variableGenerator" class="foo.bar.MyScriptVariableGenerator"/>
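where foo.bar.MyScriptVariableGenerator could be as simple as the following sketch (the extra receivedAt variable is purely illustrative):

public class MyScriptVariableGenerator implements ScriptVariableGenerator {

    @Override
    public Map<String, Object> generateScriptVariables(Message<?> message) {
        // a custom generator replaces the default one, so supply payload/headers
        // yourself if the script relies on them
        Map<String, Object> variables = new HashMap<>();
        variables.put("payload", message.getPayload());
        variables.put("headers", message.getHeaders());
        variables.put("receivedAt", System.currentTimeMillis()); // custom binding
        return variables;
    }

}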

If a script-variable-generator is not provided, script components use a DefaultScriptVariableGenerator, which merges any provided <variable> sub-elements with the payload and headers variables from the Message in its generateScriptVariables(Message) method.

[Important]Important

You cannot provide both the script-variable-generator attribute and <variable> sub-element(s) as they are mutually exclusive.

8.8 Groovy support

In Spring Integration 2.0 we added Groovy support allowing you to use the Groovy scripting language to provide the logic for various integration components similar to the way the Spring Expression Language (SpEL) is supported for routing, transformation and other integration concerns. For more information about Groovy please refer to the Groovy documentation which you can find on the project website.

8.8.1 Groovy configuration

With Spring Integration 2.1, Groovy Support’s configuration namespace is an extension of Spring Integration’s Scripting Support and shares the core configuration and behavior described in detail in the Scripting Support section. Even though Groovy scripts are well supported by generic Scripting Support, Groovy Support provides the Groovy configuration namespace which is backed by the Spring Framework’s org.springframework.scripting.groovy.GroovyScriptFactory and related components, offering extended capabilities for using Groovy. Below are a couple of sample configurations:

Filter

<int:filter input-channel="referencedScriptInput">
   <int-groovy:script location="some/path/to/groovy/file/GroovyFilterTests.groovy"/>
</int:filter>

<int:filter input-channel="inlineScriptInput">
     <int-groovy:script><![CDATA[
     return payload == 'good'
   ]]></int-groovy:script>
</int:filter>

As the above examples show, the configuration looks identical to the general Scripting Support configuration. The only difference is the use of the Groovy namespace as indicated in the examples by the int-groovy namespace prefix. Also note that the lang attribute on the <script> tag is not valid in this namespace.

Groovy object customization

If you need to customize the Groovy object itself, beyond setting variables, you can reference a bean that implements GroovyObjectCustomizer via the customizer attribute. For example, this might be useful if you want to implement a domain-specific language (DSL) by modifying the MetaClass and registering functions to be available within the script:

<int:service-activator input-channel="groovyChannel">
    <int-groovy:script location="foo/SomeScript.groovy" customizer="groovyCustomizer"/>
</int:service-activator>

<beans:bean id="groovyCustomizer" class="org.foo.MyGroovyObjectCustomizer"/>
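where org.foo.MyGroovyObjectCustomizer might be as simple as this sketch (it just binds an extra property rather than performing full MetaClass manipulation):

public class MyGroovyObjectCustomizer implements GroovyObjectCustomizer {

    @Override
    public void customize(GroovyObject goo) {
        // expose a simple helper value to every script execution
        goo.setProperty("maxOrderValue", 1000);
    }

}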

Setting a custom GroovyObjectCustomizer is not mutually exclusive with <variable> sub-elements or the script-variable-generator attribute. It can also be provided when defining an inline script.

With Spring Integration 3.0, in addition to the variable sub-element, the variables attribute has been introduced. Also, Groovy scripts have the ability to resolve a variable to a bean in the BeanFactory, if a binding variable was not provided with that name:

<int-groovy:script>
    <![CDATA[
        entityManager.persist(payload)
        payload
    ]]>
</int-groovy:script>

where the variable entityManager is an appropriate bean in the application context.

For more information regarding <variable>, variables, and script-variable-generator, see "Script variable bindings" in Section 8.7.1, "Script configuration".

Groovy Script Compiler Customization

The @CompileStatic hint is the most popular Groovy compiler customization option, which can be used on the class or method level. See more information in the Groovy Reference Manual and, specifically, @CompileStatic. To utilize this feature for short scripts (in integration scenarios), we are forced to change a simple script like this (a <filter> script):

headers.type == 'good'

to more Java-like code:

@groovy.transform.CompileStatic
String filter(Map headers) {
	headers.type == 'good'
}

filter(headers)

With that, the filter() method will be transformed and compiled to static Java code, bypassing the Groovy dynamic phases of invocation, like getProperty() factories and CallSite proxies.

Starting with version 4.3, Spring Integration Groovy components can be configured with the compile-static boolean option, specifying that an ASTTransformationCustomizer for @CompileStatic should be added to the internal CompilerConfiguration. With that in place, we can omit the method declaration with @CompileStatic in our script code and still get compiled plain Java code. In this case our script can remain short, but it still needs to be a little more verbose than the interpreted script:

binding.variables.headers.type == 'good'

Where we can access the headers and payload (or any other) variables only through the groovy.lang.Script binding property since, with @CompileStatic, we don’t have the dynamic GroovyObject.getProperty() capability.

In addition, the compiler-configuration bean reference has been introduced. With this attribute, you can provide any other required Groovy compiler customizations, e.g. ImportCustomizer. For more information about this feature, please, refer to the Groovy Documentation: Advanced compiler configuration.

[Note]Note

Using compilerConfiguration does not automatically add an ASTTransformationCustomizer for @CompileStatic, and it overrides the compileStatic option. If @CompileStatic is still a requirement, a new ASTTransformationCustomizer(CompileStatic.class) should be added manually to the CompilationCustomizers of that custom compilerConfiguration.

[Note]Note

The Groovy compiler customization does not have any effect on the refresh-check-delay option, and reloadable scripts can be statically compiled, too.

8.8.2 Control Bus

As described in the Enterprise Integration Patterns (EIP) book, the idea behind the Control Bus is that the same messaging system can be used for monitoring and managing the components within the framework as is used for "application-level" messaging. In Spring Integration we build upon the adapters described above so that it’s possible to send Messages as a means of invoking exposed operations. One option for those operations is Groovy scripts.

<int-groovy:control-bus input-channel="operationChannel"/>

The Control Bus has an input channel that can be accessed for invoking operations on the beans in the application context.

The Groovy Control Bus executes messages on the input channel as Groovy scripts. It takes a message, compiles the body to a Script, customizes it with a GroovyObjectCustomizer, and then executes it. The Control Bus' MessageProcessor exposes all beans in the application context that are annotated with @ManagedResource, implement Spring’s Lifecycle interface or extend Spring’s CustomizableThreadCreator base class (e.g. several of the TaskExecutor and TaskScheduler implementations).
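For example, assuming the context contains a Lifecycle endpoint registered under the hypothetical bean name inboundAdapter, a command script can be sent as a plain message payload:

Message<String> stopCommand = MessageBuilder.withPayload("inboundAdapter.stop()").build();
operationChannel.send(stopCommand);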

[Important]Important

Be careful about using managed beans with custom scopes (e.g. request) in the Control Bus' command scripts, especially inside an async message flow. If the Control Bus' MessageProcessor can’t expose a bean from the application context, you may end up with a BeansException during the command script’s execution. For example, if a custom scope’s context is not established, the attempt to get a bean within that scope will trigger a BeanCreationException.

If you need to further customize the Groovy objects, you can also provide a reference to a bean that implements GroovyObjectCustomizer via the customizer attribute.

<int-groovy:control-bus input-channel="input"
        output-channel="output"
        customizer="groovyCustomizer"/>

<beans:bean id="groovyCustomizer" class="org.foo.MyGroovyObjectCustomizer"/>

8.9 Adding Behavior to Endpoints

8.9.1 Introduction

Prior to Spring Integration 2.2, you could add behavior to an entire Integration flow by adding an AOP Advice to a poller’s <advice-chain/> element. However, suppose you want to retry just a REST Web Service call, and not any downstream endpoints.

For example, consider the following flow:

inbound-adapter→poller→http-gateway1→http-gateway2→jdbc-outbound-adapter

If you configure some retry logic into an advice chain on the poller and the call to http-gateway2 fails because of a network glitch, the retry causes both http-gateway1 and http-gateway2 to be called a second time. Similarly, after a transient failure in the jdbc-outbound-adapter, both http-gateways would be called a second time before again calling the jdbc-outbound-adapter.

Spring Integration 2.2 adds the ability to add behavior to individual endpoints. This is achieved by the addition of the <request-handler-advice-chain/> element to many endpoints. For example:

<int-http:outbound-gateway id="withAdvice"
    url-expression="'http://localhost/test1'"
    request-channel="requests"
    reply-channel="nextChannel">
    <int:request-handler-advice-chain>
        <ref bean="myRetryAdvice" />
    </int:request-handler-advice-chain>
</int-http:outbound-gateway>

In this case, myRetryAdvice will only be applied locally to this gateway and will not apply to further actions taken downstream after the reply is sent to the nextChannel. The scope of the advice is limited to the endpoint itself.

[Important]Important

At this time, you cannot advise an entire <chain/> of endpoints. The schema does not allow a <request-handler-advice-chain/> as a child element of the chain itself.

However, a <request-handler-advice-chain/> can be added to individual reply-producing endpoints within a <chain/> element. An exception is that, in a chain that produces no reply, because the last element in the chain is an outbound-channel-adapter, that last element cannot be advised. If you need to advise such an element, it must be moved outside of the chain (with the output-channel of the chain being the input-channel of the adapter). The adapter can then be advised as normal. For chains that produce a reply, every child element can be advised.

8.9.2 Provided Advice Classes

In addition to providing the general mechanism to apply AOP Advice classes in this way, three standard Advices are provided:

  • RequestHandlerRetryAdvice
  • RequestHandlerCircuitBreakerAdvice
  • ExpressionEvaluatingRequestHandlerAdvice

These are each described in detail in the following sections.

Retry Advice

The retry advice (o.s.i.handler.advice.RequestHandlerRetryAdvice) leverages the rich retry mechanisms provided by the Spring Retry project. The core component of spring-retry is the RetryTemplate, which allows configuration of sophisticated retry scenarios, including RetryPolicy and BackoffPolicy strategies, with a number of implementations, as well as a RecoveryCallback strategy to determine the action to take when retries are exhausted.

Stateless Retry

Stateless retry is the case where the retry activity is handled entirely within the advice, where the thread pauses (if so configured) and retries the action.

Stateful Retry

Stateful retry is the case where the retry state is managed within the advice, but where an exception is thrown and the caller resubmits the request. An example for stateful retry is when we want the message originator (e.g. JMS) to be responsible for resubmitting, rather than performing it on the current thread. Stateful retry needs some mechanism to detect a retried submission.

Further Information

For more information on spring-retry, refer to the project’s javadocs, as well as the reference documentation for Spring Batch, where spring-retry originated.

[Warning]Warning

The default back off behavior is no back off - retries are attempted immediately. Using a back off policy that causes threads to pause between attempts may cause performance issues, including excessive memory use and thread starvation. In high volume environments, back off policies should be used with caution.

Configuring the Retry Advice

The following examples use a simple <service-activator/> that always throws an exception:

public class FailingService {

    public void service(String message) {
        throw new RuntimeException("foo");
    }
}

Simple Stateless Retry

This example uses the default RetryTemplate, which has a SimpleRetryPolicy that tries 3 times. There is no BackOffPolicy, so the 3 attempts are made back-to-back with no delay between attempts. There is no RecoveryCallback, so the result is to throw the exception to the caller after the final retry fails. In a Spring Integration environment, this final exception might be handled using an error-channel on the inbound endpoint.

<int:service-activator input-channel="input" ref="failer" method="service">
    <int:request-handler-advice-chain>
        <bean class="o.s.i.handler.advice.RequestHandlerRetryAdvice"/>
    </int:request-handler-advice-chain>
</int:service-activator>

DEBUG [task-scheduler-2]preSend on channel 'input', message: [Payload=...]
DEBUG [task-scheduler-2]Retry: count=0
DEBUG [task-scheduler-2]Checking for rethrow: count=1
DEBUG [task-scheduler-2]Retry: count=1
DEBUG [task-scheduler-2]Checking for rethrow: count=2
DEBUG [task-scheduler-2]Retry: count=2
DEBUG [task-scheduler-2]Checking for rethrow: count=3
DEBUG [task-scheduler-2]Retry failed last attempt: count=3

Simple Stateless Retry with Recovery

This example adds a RecoveryCallback to the above example; it uses an ErrorMessageSendingRecoverer to send an ErrorMessage to a channel.

<int:service-activator input-channel="input" ref="failer" method="service">
    <int:request-handler-advice-chain>
        <bean class="o.s.i.handler.advice.RequestHandlerRetryAdvice">
            <property name="recoveryCallback">
                <bean class="o.s.i.handler.advice.ErrorMessageSendingRecoverer">
                    <constructor-arg ref="myErrorChannel" />
                </bean>
            </property>
        </bean>
    </int:request-handler-advice-chain>
</int:service-activator>

DEBUG [task-scheduler-2]preSend on channel 'input', message: [Payload=...]
DEBUG [task-scheduler-2]Retry: count=0
DEBUG [task-scheduler-2]Checking for rethrow: count=1
DEBUG [task-scheduler-2]Retry: count=1
DEBUG [task-scheduler-2]Checking for rethrow: count=2
DEBUG [task-scheduler-2]Retry: count=2
DEBUG [task-scheduler-2]Checking for rethrow: count=3
DEBUG [task-scheduler-2]Retry failed last attempt: count=3
DEBUG [task-scheduler-2]Sending ErrorMessage :failedMessage:[Payload=...]

Stateless Retry with Customized Policies, and Recovery

For more sophistication, we can provide the advice with a customized RetryTemplate. This example continues to use the SimpleRetryPolicy but increases the attempts to 4. It also adds an ExponentialBackoffPolicy where the first retry waits 1 second, the second waits 5 seconds and the third waits 25 seconds (for 4 attempts in all).

<int:service-activator input-channel="input" ref="failer" method="service">
    <int:request-handler-advice-chain>
        <bean class="o.s.i.handler.advice.RequestHandlerRetryAdvice">
            <property name="recoveryCallback">
                <bean class="o.s.i.handler.advice.ErrorMessageSendingRecoverer">
                    <constructor-arg ref="myErrorChannel" />
                </bean>
            </property>
            <property name="retryTemplate" ref="retryTemplate" />
        </bean>
    </int:request-handler-advice-chain>
</int:service-activator>

<bean id="retryTemplate" class="org.springframework.retry.support.RetryTemplate">
    <property name="retryPolicy">
        <bean class="org.springframework.retry.policy.SimpleRetryPolicy">
            <property name="maxAttempts" value="4" />
        </bean>
    </property>
    <property name="backOffPolicy">
        <bean class="org.springframework.retry.backoff.ExponentialBackOffPolicy">
            <property name="initialInterval" value="1000" />
            <property name="multiplier" value="5.0" />
            <property name="maxInterval" value="60000" />
        </bean>
    </property>
</bean>

27.058 DEBUG [task-scheduler-1]preSend on channel 'input', message: [Payload=...]
27.071 DEBUG [task-scheduler-1]Retry: count=0
27.080 DEBUG [task-scheduler-1]Sleeping for 1000
28.081 DEBUG [task-scheduler-1]Checking for rethrow: count=1
28.081 DEBUG [task-scheduler-1]Retry: count=1
28.081 DEBUG [task-scheduler-1]Sleeping for 5000
33.082 DEBUG [task-scheduler-1]Checking for rethrow: count=2
33.082 DEBUG [task-scheduler-1]Retry: count=2
33.083 DEBUG [task-scheduler-1]Sleeping for 25000
58.083 DEBUG [task-scheduler-1]Checking for rethrow: count=3
58.083 DEBUG [task-scheduler-1]Retry: count=3
58.084 DEBUG [task-scheduler-1]Checking for rethrow: count=4
58.084 DEBUG [task-scheduler-1]Retry failed last attempt: count=4
58.086 DEBUG [task-scheduler-1]Sending ErrorMessage :failedMessage:[Payload=...]

Namespace Support for Stateless Retry

Starting with version 4.0, the above configuration can be greatly simplified with the namespace support for the retry advice:

<int:service-activator input-channel="input" ref="failer" method="service">
    <int:request-handler-advice-chain>
        <bean ref="retrier" />
    </int:request-handler-advice-chain>
</int:service-activator>

<int:handler-retry-advice id="retrier" max-attempts="4" recovery-channel="myErrorChannel">
    <int:exponential-back-off initial="1000" multiplier="5.0" maximum="60000" />
</int:handler-retry-advice>

In this example, the advice is defined as a top level bean so it can be used in multiple request-handler-advice-chain instances. You can also define the advice directly within the chain:

<int:service-activator input-channel="input" ref="failer" method="service">
    <int:request-handler-advice-chain>
        <int:retry-advice id="retrier" max-attempts="4" recovery-channel="myErrorChannel">
            <int:exponential-back-off initial="1000" multiplier="5.0" maximum="60000" />
        </int:retry-advice>
    </int:request-handler-advice-chain>
</int:service-activator>

A <handler-retry-advice/> with no child element uses no back off; it can have a fixed-back-off or exponential-back-off child element. If there is no recovery-channel, the exception is thrown when retries are exhausted. The namespace can only be used with stateless retry.

For more complex environments (custom policies etc), use normal <bean/> definitions.

Simple Stateful Retry with Recovery

To make retry stateful, we need to provide the Advice with a RetryStateGenerator implementation. This class is used to identify a message as being a resubmission so that the RetryTemplate can determine the current state of retry for this message. The framework provides a SpelExpressionRetryStateGenerator, which determines the message identifier using a SpEL expression. This is shown below; the example again uses the default policies (3 attempts with no back off) but, as with stateless retry, these policies can be customized.

<int:service-activator input-channel="input" ref="failer" method="service">
    <int:request-handler-advice-chain>
        <bean class="o.s.i.handler.advice.RequestHandlerRetryAdvice">
            <property name="retryStateGenerator">
                <bean class="o.s.i.handler.advice.SpelExpressionRetryStateGenerator">
                    <constructor-arg value="headers['jms_messageId']" />
                </bean>
            </property>
            <property name="recoveryCallback">
                <bean class="o.s.i.handler.advice.ErrorMessageSendingRecoverer">
                    <constructor-arg ref="myErrorChannel" />
                </bean>
            </property>
        </bean>
    </int:request-handler-advice-chain>
</int:service-activator>

24.351 DEBUG [Container#0-1]preSend on channel 'input', message: [Payload=...]
24.368 DEBUG [Container#0-1]Retry: count=0
24.387 DEBUG [Container#0-1]Checking for rethrow: count=1
24.387 DEBUG [Container#0-1]Rethrow in retry for policy: count=1
24.387 WARN  [Container#0-1]failure occurred in gateway sendAndReceive
org.springframework.integration.MessagingException: Failed to invoke handler
...
Caused by: java.lang.RuntimeException: foo
...
24.391 DEBUG [Container#0-1]Initiating transaction rollback on application exception
...
25.412 DEBUG [Container#0-1]preSend on channel 'input', message: [Payload=...]
25.412 DEBUG [Container#0-1]Retry: count=1
25.413 DEBUG [Container#0-1]Checking for rethrow: count=2
25.413 DEBUG [Container#0-1]Rethrow in retry for policy: count=2
25.413 WARN  [Container#0-1]failure occurred in gateway sendAndReceive
org.springframework.integration.MessagingException: Failed to invoke handler
...
Caused by: java.lang.RuntimeException: foo
...
25.414 DEBUG [Container#0-1]Initiating transaction rollback on application exception
...
26.418 DEBUG [Container#0-1]preSend on channel 'input', message: [Payload=...]
26.418 DEBUG [Container#0-1]Retry: count=2
26.419 DEBUG [Container#0-1]Checking for rethrow: count=3
26.419 DEBUG [Container#0-1]Rethrow in retry for policy: count=3
26.419 WARN  [Container#0-1]failure occurred in gateway sendAndReceive
org.springframework.integration.MessagingException: Failed to invoke handler
...
Caused by: java.lang.RuntimeException: foo
...
26.420 DEBUG [Container#0-1]Initiating transaction rollback on application exception
...
27.425 DEBUG [Container#0-1]preSend on channel 'input', message: [Payload=...]
27.426 DEBUG [Container#0-1]Retry failed last attempt: count=3
27.426 DEBUG [Container#0-1]Sending ErrorMessage :failedMessage:[Payload=...]

Comparing with the stateless examples, you can see that with stateful retry, the exception is thrown to the caller on each failure.

Exception Classification for Retry

Spring Retry has a great deal of flexibility for determining which exceptions can invoke retry. The default configuration will retry for all exceptions and the exception classifier just looks at the top level exception. If you configure it to, say, only retry on BarException and your application throws a FooException where the cause is a BarException, retry will not occur.

Since Spring Retry 1.0.3, the BinaryExceptionClassifier has a property traverseCauses (default false). When true it will traverse exception causes until it finds a match or there is no cause.

To use this classifier for retry, use a SimpleRetryPolicy created with the constructor that takes the max attempts, the Map of Exceptions, and the traverseCauses boolean, and inject this policy into the RetryTemplate.
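For illustration, the following sketch (the BarException type and bean names are assumptions, not from this chapter) builds such a policy so that retry occurs only when a BarException appears anywhere in the cause chain:

@Bean
public RequestHandlerRetryAdvice retryAdvice() {
    // retry only when a BarException is found, traversing the cause chain
    Map<Class<? extends Throwable>, Boolean> retryableExceptions = new HashMap<>();
    retryableExceptions.put(BarException.class, true);
    SimpleRetryPolicy policy = new SimpleRetryPolicy(3, retryableExceptions, true);

    RetryTemplate retryTemplate = new RetryTemplate();
    retryTemplate.setRetryPolicy(policy);

    RequestHandlerRetryAdvice advice = new RequestHandlerRetryAdvice();
    advice.setRetryTemplate(retryTemplate);
    return advice;
}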

===== Circuit Breaker Advice

The general idea of the Circuit Breaker Pattern is that, if a service is not currently available, then don’t waste time (and resources) trying to use it. The o.s.i.handler.advice.RequestHandlerCircuitBreakerAdvice implements this pattern. When the circuit breaker is in the closed state, the endpoint will attempt to invoke the service. The circuit breaker goes to the open state if a certain number of consecutive attempts fail; when it is in the open state, new requests will "fail fast" and no attempt will be made to invoke the service until some time has expired.

When that time has expired, the circuit breaker is set to the half-open state. When in this state, if even a single attempt fails, the breaker will immediately go to the open state; if the attempt succeeds, the breaker will go to the closed state, in which case, it won’t go to the open state again until the configured number of consecutive failures again occur. Any successful attempt resets the state to zero failures for the purpose of determining when the breaker might go to the open state again.

Typically, this Advice might be used for external services, where it might take some time to fail (such as a timeout attempting to make a network connection).

The RequestHandlerCircuitBreakerAdvice has two properties: threshold and halfOpenAfter. The threshold property represents the number of consecutive failures that need to occur before the breaker goes open. It defaults to 5. The halfOpenAfter property represents the time after the last failure that the breaker will wait before attempting another request. Default is 1000 milliseconds.

Example:

<int:service-activator input-channel="input" ref="failer" method="service">
    <int:request-handler-advice-chain>
        <bean class="o.s.i.handler.advice.RequestHandlerCircuitBreakerAdvice">
            <property name="threshold" value="2" />
            <property name="halfOpenAfter" value="12000" />
        </bean>
    </int:request-handler-advice-chain>
</int:service-activator>

05.617 DEBUG [task-scheduler-1]preSend on channel 'input', message: [Payload=...]
05.638 ERROR [task-scheduler-1]org.springframework.messaging.MessageHandlingException: java.lang.RuntimeException: foo
...
10.598 DEBUG [task-scheduler-2]preSend on channel 'input', message: [Payload=...]
10.600 ERROR [task-scheduler-2]org.springframework.messaging.MessageHandlingException: java.lang.RuntimeException: foo
...
15.598 DEBUG [task-scheduler-3]preSend on channel 'input', message: [Payload=...]
15.599 ERROR [task-scheduler-3]org.springframework.messaging.MessagingException: Circuit Breaker is Open for ServiceActivator
...
20.598 DEBUG [task-scheduler-2]preSend on channel 'input', message: [Payload=...]
20.598 ERROR [task-scheduler-2]org.springframework.messaging.MessagingException: Circuit Breaker is Open for ServiceActivator
...
25.598 DEBUG [task-scheduler-5]preSend on channel 'input', message: [Payload=...]
25.601 ERROR [task-scheduler-5]org.springframework.messaging.MessageHandlingException: java.lang.RuntimeException: foo
...
30.598 DEBUG [task-scheduler-1]preSend on channel 'input', message: [Payload=foo...]
30.599 ERROR [task-scheduler-1]org.springframework.messaging.MessagingException: Circuit Breaker is Open for ServiceActivator

In the above example, the threshold is set to 2 and halfOpenAfter is set to 12 seconds; a new request arrives every 5 seconds. You can see that the first two attempts invoked the service; the third and fourth failed with an exception indicating the circuit breaker is open. The fifth request was attempted because the request was 15 seconds after the last failure; the sixth attempt fails immediately because the breaker immediately went to open.

===== Expression Evaluating Advice

The final supplied advice class is the o.s.i.handler.advice.ExpressionEvaluatingRequestHandlerAdvice. This advice is more general than the other two. It provides a mechanism to evaluate an expression against the original inbound message sent to the endpoint. Separate expressions can be evaluated after success or after failure. Optionally, a message containing the evaluation result, together with the input message, can be sent to a message channel.

A typical use case for this advice might be with an <ftp:outbound-channel-adapter/>, perhaps to move the file to one directory if the transfer was successful, or to another directory if it fails.

The Advice has properties to set an expression when successful, an expression for failures, and corresponding channels for each. For the successful case, the message sent to the successChannel is an AdviceMessage, with the payload being the result of the expression evaluation, and an additional property inputMessage which contains the original message sent to the handler. A message sent to the failureChannel (when the handler throws an exception) is an ErrorMessage with a payload of MessageHandlingExpressionEvaluatingAdviceException. Like all MessagingExceptions, this payload has failedMessage and cause properties, as well as an additional property evaluationResult, containing the result of the expression evaluation.

When an exception is thrown in the scope of the advice, by default, that exception is thrown to the caller after any failureExpression is evaluated. If you wish to suppress throwing the exception, set the trapException property to true.

Example - Configuring the Advice with Java DSL. 

@SpringBootApplication
public class EerhaApplication {

    public static void main(String[] args) {
        ConfigurableApplicationContext context = SpringApplication.run(EerhaApplication.class, args);
        MessageChannel in = context.getBean("advised.input", MessageChannel.class);
        in.send(new GenericMessage<>("good"));
        in.send(new GenericMessage<>("bad"));
        context.close();
    }

    @Bean
    public IntegrationFlow advised() {
        return f -> f.handle((GenericHandler<String>) (payload, headers) -> {
            if (payload.equals("good")) {
                return null;
            }
            else {
                throw new RuntimeException("some failure");
            }
        }, c -> c.advice(expressionAdvice()));
    }

    @Bean
    public Advice expressionAdvice() {
        ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
        advice.setSuccessChannelName("success.input");
        advice.setOnSuccessExpressionString("payload + ' was successful'");
        advice.setFailureChannelName("failure.input");
        advice.setOnFailureExpressionString(
                "payload + ' was bad, with reason: ' + #exception.cause.message");
        advice.setTrapException(true);
        return advice;
    }

    @Bean
    public IntegrationFlow success() {
        return f -> f.handle(System.out::println);
    }

    @Bean
    public IntegrationFlow failure() {
        return f -> f.handle(System.out::println);
    }

}

==== Custom Advice Classes

In addition to the provided Advice classes above, you can implement your own Advice classes. While you can provide any implementation of org.aopalliance.aop.Advice (usually org.aopalliance.intercept.MethodInterceptor), it is generally recommended that you subclass o.s.i.handler.advice.AbstractRequestHandlerAdvice. This has the benefit of avoiding writing low-level Aspect Oriented Programming code as well as providing a starting point that is specifically tailored for use in this environment.

Subclasses need to implement the doInvoke() method:

/**
 * Subclasses implement this method to apply behavior to the {@link MessageHandler}.
 * callback.execute() invokes the handler method and returns its result (or null).
 * @param callback Subclasses invoke the execute() method on this interface to invoke the handler method.
 * @param target The target handler.
 * @param message The message that will be sent to the handler.
 * @return the result after invoking the {@link MessageHandler}.
 * @throws Exception
 */
protected abstract Object doInvoke(ExecutionCallback callback, Object target, Message<?> message) throws Exception;

The callback parameter is simply a convenience to avoid subclasses dealing with AOP directly; invoking the callback.execute() method invokes the message handler.

The target parameter is provided for those subclasses that need to maintain state for a specific handler, perhaps by maintaining that state in a Map, keyed by the target. This allows the same advice to be applied to multiple handlers. The RequestHandlerCircuitBreakerAdvice uses this to keep circuit breaker state for each handler.

The message parameter is the message that will be sent to the handler. While the advice cannot modify the message before invoking the handler, it can modify the payload (if it has mutable properties). Typically, an advice would use the message for logging and/or to send a copy of the message somewhere before or after invoking the handler.

The return value would normally be the value returned by callback.execute(); but the advice does have the ability to modify the return value. Note that only AbstractReplyProducingMessageHandlers return a value.

public class MyAdvice extends AbstractRequestHandlerAdvice {

    @Override
    protected Object doInvoke(ExecutionCallback callback, Object target, Message<?> message) throws Exception {
        // add code before the invocation
        Object result = callback.execute();
        // add code after the invocation
        return result;
    }
}
[Note]Note

In addition to the execute() method, the ExecutionCallback provides an additional method cloneAndExecute(). This method must be used in cases where the invocation might be called multiple times within a single execution of doInvoke(), such as in the RequestHandlerRetryAdvice. This is required because the Spring AOP org.springframework.aop.framework.ReflectiveMethodInvocation object maintains state of which advice in a chain was last invoked; this state must be reset for each call.

For more information, see the ReflectiveMethodInvocation JavaDocs.
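As a minimal sketch (not part of the framework), an advice that makes one extra attempt after a failure must use cloneAndExecute(), because the handler may be invoked more than once within a single doInvoke():

public class RetryOnceAdvice extends AbstractRequestHandlerAdvice {

    @Override
    protected Object doInvoke(ExecutionCallback callback, Object target, Message<?> message) throws Exception {
        try {
            // cloneAndExecute() rather than execute() because the invocation may be repeated
            return callback.cloneAndExecute();
        }
        catch (Exception e) {
            // a single additional attempt before propagating the failure
            return callback.cloneAndExecute();
        }
    }

}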

==== Other Advice Chain Elements

While the abstract class mentioned above is provided as a convenience, you can add any Advice to the chain, including a transaction advice.

==== Handle Message Advice

As discussed in the introduction to this section, advice objects in a request handler advice chain are applied to just the current endpoint, not the downstream flow (if any). For MessageHandlers that produce a reply (AbstractReplyProducingMessageHandler), the advice is applied to an internal method handleRequestMessage() (called from MessageHandler.handleMessage()). For other message handlers, the advice is applied to MessageHandler.handleMessage().

There are some circumstances where, even if a message handler is an AbstractReplyProducingMessageHandler, the advice must be applied to the handleMessage method - for example, the Idempotent Receiver might return null, which would cause an exception if the handler’s replyRequired property is true.

Starting with version 4.3.1, a new HandleMessageAdvice and the AbstractHandleMessageAdvice base implementation have been introduced. Advices that implement HandleMessageAdvice will always be applied to the handleMessage() method, regardless of the handler type.

It is important to understand that HandleMessageAdvice implementations (such as the Idempotent Receiver), when applied to a handler that returns a response, are dissociated from the adviceChain and properly applied to the MessageHandler.handleMessage() method. Bear in mind, however, that this means the advice chain order is not honored. Consider a configuration such as:

<some-reply-producing-endpoint ... >
    <int:request-handler-advice-chain>
        <tx:advice ... />
        <bean ref="myHandleMessageAdvice" />
    </int:request-handler-advice-chain>
</some-reply-producing-endpoint>

The <tx:advice> is applied to AbstractReplyProducingMessageHandler.handleRequestMessage(), but myHandleMessageAdvice is applied to MessageHandler.handleMessage() and is therefore invoked before the <tx:advice>. To retain the order, follow the standard Spring AOP configuration approach and use the endpoint id together with the .handler suffix to obtain the target MessageHandler bean. Note, however, that in that case the entire downstream flow would be within the transaction scope.

In the case of a MessageHandler that does not return a response, the advice chain order is retained.

==== Advising Filters

There is an additional consideration when advising Filters. By default, any discard actions (when the filter returns false) are performed within the scope of the advice chain. This could include all the flow downstream of the discard channel. So, for example, if an element downstream of the discard-channel throws an exception and there is a retry advice, the process will be retried. This is also the case if throwExceptionOnRejection is set to true (the exception is thrown within the scope of the advice).

Setting discard-within-advice to "false" modifies this behavior and the discard (or exception) occurs after the advice chain is called.

==== Advising Endpoints Using Annotations

When configuring certain endpoints using annotations (@Filter, @ServiceActivator, @Splitter, and @Transformer), you can supply a bean name for the advice chain in the adviceChain attribute. In addition, the @Filter annotation has the discardWithinAdvice attribute, which can be used to configure the discard behavior as discussed in the section called “Advising Filters”. An example with the discard being performed after the advice is shown below.

@MessageEndpoint
public class MyAdvisedFilter {

    @Filter(inputChannel="input", outputChannel="output",
            adviceChain="adviceChain", discardWithinAdvice="false")
    public boolean filter(String s) {
        return s.contains("good");
    }
}

==== Ordering Advices within an Advice Chain

Advice classes are "around" advices and are applied in a nested fashion. The first advice is the outermost, the last advice the innermost (closest to the handler being advised). It is important to put the advice classes in the correct order to achieve the functionality you desire.

For example, let’s say you want to add a retry advice and a transaction advice. You may want to place the retry advice first, followed by the transaction advice. Then, each retry will be performed in a new transaction. On the other hand, if you want all the attempts, and any recovery operations (in the retry RecoveryCallback), to be scoped within the transaction, you would put the transaction advice first.

==== Advised Handler Properties

Sometimes, it is useful to access handler properties from within the advice. For example, most handlers implement NamedComponent and you can access the component name.

The target object can be accessed via the target argument when subclassing AbstractRequestHandlerAdvice or invocation.getThis() when implementing org.aopalliance.intercept.MethodInterceptor.

When the entire handler is advised (such as when the handler does not produce replies, or the advice implements HandleMessageAdvice), you can simply cast the target object to the desired implemented interface, such as NamedComponent.

String componentName = ((NamedComponent) target).getComponentName();

or

String componentName = ((NamedComponent) invocation.getThis()).getComponentName();

when implementing MethodInterceptor directly.

When only the handleRequestMessage() method is advised (in a reply-producing handler), you need to access the full handler, which is an AbstractReplyProducingMessageHandler:

AbstractReplyProducingMessageHandler handler =
    ((AbstractReplyProducingMessageHandler.RequestHandler) target).getAdvisedHandler();

String componentName = handler.getComponentName();

==== Idempotent Receiver Enterprise Integration Pattern

Starting with version 4.1, Spring Integration provides an implementation of the Idempotent Receiver Enterprise Integration Pattern. It is a functional pattern, and the whole idempotency logic should be implemented in the application; however, to simplify the decision-making, the IdempotentReceiverInterceptor component is provided. This is an AOP Advice, which is applied to the MessageHandler.handleMessage() method and can filter a request message or mark it as a duplicate, according to its configuration.

Previously, users could have implemented this pattern by using a custom MessageSelector in a <filter/> (Section 6.2, “Filter”), for example. However, since this pattern is really behavior of an endpoint rather than being an endpoint itself, the Idempotent Receiver implementation doesn’t provide an endpoint component; rather, it is applied to endpoints declared in the application.

The logic of the IdempotentReceiverInterceptor is based on the provided MessageSelector and, if the message isn’t accepted by that selector, it will be enriched with the duplicateMessage header set to true. The target MessageHandler (or downstream flow) can consult this header to implement the correct idempotency logic. If the IdempotentReceiverInterceptor is configured with a discardChannel and/or throwExceptionOnRejection = true, the duplicate Message won’t be sent to the target MessageHandler.handleMessage(), but discarded. If you simply want to discard (do nothing with) the duplicate Message, the discardChannel should be configured with a NullChannel, such as the default nullChannel bean.

To maintain state between messages and provide the ability to compare messages for idempotency, the MetadataStoreSelector is provided. It accepts a MessageProcessor implementation (which creates a lookup key based on the Message) and an optional ConcurrentMetadataStore (see the section called “Metadata Store”). See the MetadataStoreSelector JavaDocs for more information. The value stored for the key can also be customized using an additional MessageProcessor; by default, the MetadataStoreSelector uses the timestamp message header.
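As a minimal sketch (the header name and bean names are assumptions, not from this chapter), the following configures a MetadataStoreSelector-based interceptor that keys messages by an orderId header and silently discards duplicates to the nullChannel:

@Bean
public IdempotentReceiverInterceptor orderIdempotentInterceptor(MessageChannel nullChannel) {
    // key each message by an assumed "orderId" header; the default in-memory SimpleMetadataStore is used
    IdempotentReceiverInterceptor interceptor = new IdempotentReceiverInterceptor(
            new MetadataStoreSelector(message -> (String) message.getHeaders().get("orderId")));
    // duplicates are discarded instead of being forwarded with the duplicateMessage header
    interceptor.setDiscardChannel(nullChannel);
    return interceptor;
}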

For convenience, the MetadataStoreSelector options are configurable directly on the <idempotent-receiver> component:

<idempotent-receiver
        id=""  1
        endpoint=""  2
        selector=""  3
        discard-channel=""  4
        metadata-store=""  5
        key-strategy=""  6
        key-expression=""  7
        value-strategy=""  8
        value-expression=""  9
        throw-exception-on-rejection="" />  10

1

The id of the IdempotentReceiverInterceptor bean. Optional.

2

Consumer Endpoint name(s) or pattern(s) to which this interceptor will be applied. Separate names (patterns) with commas (,) e.g. endpoint="aaa, bbb*, *ccc, *ddd*, eee*fff". Endpoint bean names matching these patterns are then used to retrieve the target endpoint’s MessageHandler bean (using its .handler suffix), and the IdempotentReceiverInterceptor will be applied to those beans. Required.

3

A MessageSelector bean reference. Mutually exclusive with metadata-store and key-strategy (key-expression). When selector is not provided, one of key-strategy or key-expression is required.

4

Identifies the channel to which to send a message when the IdempotentReceiverInterceptor doesn’t accept it. When omitted, duplicate messages are forwarded to the handler with a duplicateMessage header. Optional.

5

A ConcurrentMetadataStore reference. Used by the underlying MetadataStoreSelector. Mutually exclusive with selector. Optional. The default MetadataStoreSelector uses an internal SimpleMetadataStore which does not maintain state across application executions.

6

A MessageProcessor reference. Used by the underlying MetadataStoreSelector. Evaluates an idempotentKey from the request Message. Mutually exclusive with selector and key-expression. When a selector is not provided, one of key-strategy or key-expression is required.

7

A SpEL expression to populate an ExpressionEvaluatingMessageProcessor. Used by the underlying MetadataStoreSelector. Evaluates an idempotentKey using the request Message as the evaluation context root object. Mutually exclusive with selector and key-strategy. When a selector is not provided, one of key-strategy or key-expression is required.

8

A MessageProcessor reference. Used by the underlying MetadataStoreSelector. Evaluates a value for the idempotentKey from the request Message. Mutually exclusive with selector and value-expression. By default, the MetadataStoreSelector uses the timestamp message header as the Metadata value.

9

A SpEL expression to populate an ExpressionEvaluatingMessageProcessor. Used by the underlying MetadataStoreSelector. Evaluates a value for the idempotentKey using the request Message as the evaluation context root object. Mutually exclusive with selector and value-strategy. By default, the MetadataStoreSelector uses the timestamp message header as the Metadata value.

10

Whether to throw an exception if the IdempotentReceiverInterceptor rejects the message. Defaults to false. It is applied regardless of whether or not a discard-channel is provided.

For Java configuration, the method-level @IdempotentReceiver annotation is provided. It is used to mark a method that has a Messaging annotation (@ServiceActivator, @Router etc.) to specify which IdempotentReceiverInterceptors will be applied to this endpoint:

@Bean
public IdempotentReceiverInterceptor idempotentReceiverInterceptor() {
   return new IdempotentReceiverInterceptor(new MetadataStoreSelector(m ->
                                                    m.getHeaders().get(INVOICE_NBR_HEADER)));
}

@Bean
@ServiceActivator(inputChannel = "input", outputChannel = "output")
@IdempotentReceiver("idempotentReceiverInterceptor")
public MessageHandler myService() {
    ....
}
[Note]Note

The IdempotentReceiverInterceptor is designed only for the MessageHandler.handleMessage(Message<?>) method and, starting with version 4.3.1, it implements HandleMessageAdvice, with AbstractHandleMessageAdvice as a base class, for better dissociation. See the section called “Handle Message Advice” for more information.

=== Logging Channel Adapter

The <logging-channel-adapter/> is often used in conjunction with a Wire Tap, as discussed in the section called “Wire Tap”. However, it can also be used as the ultimate consumer of any flow. For example, consider a flow that ends with a <service-activator/> that returns a result, but you wish to discard that result. To do that, you could send the result to NullChannel. Alternatively, you can route it to an INFO level <logging-channel-adapter/>; that way, you can see the discarded message when logging at INFO level, but not see it when logging at, say, WARN level. With a NullChannel, you would only see the discarded message when logging at DEBUG level.

<int:logging-channel-adapter
    channel="" 1
    level="INFO" 2
    expression="" 3
    log-full-message="false" 4
    logger-name="" /> 5

1

The channel connecting the logging adapter to an upstream component.

2

The logging level at which messages sent to this adapter will be logged. Default: INFO.

3

A SpEL expression representing exactly what part(s) of the message will be logged. Default: payload - just the payload will be logged. This attribute cannot be specified if log-full-message is specified.

4

When true, the entire message will be logged (including headers). Default: false - just the payload will be logged. This attribute cannot be specified if expression is specified.

5

Specifies the name of the logger (known as category in log4j) used for log messages created by this adapter. This enables setting the log name (in the logging subsystem) for individual adapters. By default, all adapters will log under the name org.springframework.integration.handler.LoggingHandler.

==== Configuring with Java Configuration

The following Spring Boot application provides an example of configuring the LoggingHandler using Java configuration:

@SpringBootApplication
public class LoggingJavaApplication {

    public static void main(String[] args) {
        ConfigurableApplicationContext context =
             new SpringApplicationBuilder(LoggingJavaApplication.class)
                    .web(false)
                    .run(args);
         MyGateway gateway = context.getBean(MyGateway.class);
         gateway.sendToLogger("foo");
    }

    @Bean
    public MessageChannel logChannel() {
        return new DirectChannel();
    }

    @Bean
    @ServiceActivator(inputChannel = "logChannel")
    public LoggingHandler logging() {
        LoggingHandler adapter = new LoggingHandler(LoggingHandler.Level.DEBUG);
        adapter.setLoggerName("TEST_LOGGER");
        adapter.setLogExpressionString("headers.id + ': ' + payload");
        return adapter;
    }

    @MessagingGateway(defaultRequestChannel = "logChannel")
    public interface MyGateway {

        void sendToLogger(String data);

    }

}

== System Management

=== Metrics and Management

==== Configuring Metrics Capture

[Note]Note

Prior to version 4.2, metrics were only available when JMX was enabled. See the section called “JMX Support”.

To enable MessageSource, MessageChannel and MessageHandler metrics, add an <int:management/> bean to the application context, or annotate one of your @Configuration classes with @EnableIntegrationManagement. MessageSources only maintain counts; MessageChannels and MessageHandlers maintain duration statistics in addition to counts. See the sections called “MessageChannel Metric Features” and “MessageHandler Metric Features” below.

This causes the automatic registration of the IntegrationManagementConfigurer bean in the application context. Only one such bean can exist in the context and it must have the bean name integrationManagementConfigurer if registered manually via a <bean/> definition.

In addition to metrics, you can control debug logging in the main message flow. It has been found that, in very high volume applications, even calls to isDebugEnabled() can be quite expensive with some logging subsystems. You can disable all such logging to avoid this overhead; exception logging (debug or otherwise) is not affected by this setting.

A number of options are available:

<int:management
    default-logging-enabled="false" 1
    default-counts-enabled="false" 2
    default-stats-enabled="false" 3
    counts-enabled-patterns="foo, !baz, ba*" 4
    stats-enabled-patterns="fiz, buz" 5
    metrics-factory="myMetricsFactory" /> 6
@Configuration
@EnableIntegration
@EnableIntegrationManagement(
    defaultLoggingEnabled = "false", 1
    defaultCountsEnabled = "false", 2
    defaultStatsEnabled = "false", 3
    countsEnabled = { "foo", "${count.patterns}" }, 4
    statsEnabled = { "qux", "!*" }, 5
    metricsFactory = "myMetricsFactory") 6
public static class ContextConfiguration {
...
}

1 1

Set to false to disable all logging in the main message flow, regardless of the log system category settings. Set to true to enable debug logging (if also enabled by the logging subsystem).

2 2

Enable or disable count metrics for components not matching one of the patterns in <4>.

3 3

Enable or disable statistical metrics for components not matching one of the patterns in <5>.

4 4

A comma-delimited list of patterns for beans for which counts should be enabled; negate the pattern with !. First match wins (positive or negative). In the unlikely event that you have a bean name starting with !, escape the ! in the pattern: \!foo positively matches a bean named !foo.

5 5

A comma-delimited list of patterns for beans for which statistical metrics should be enabled; negate the pattern with !. First match wins (positive or negative). In the unlikely event that you have a bean name starting with !, escape the ! in the pattern: \!foo positively matches a bean named !foo. Stats implies counts.

6 6

A reference to a MetricsFactory. See the section called “Metrics Factory”.

At runtime, counts and statistics can be obtained by calling the IntegrationManagementConfigurer methods getChannelMetrics(), getHandlerMetrics() and getSourceMetrics(), which return MessageChannelMetrics, MessageHandlerMetrics and MessageSourceMetrics respectively.
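A minimal sketch, assuming the default configurer bean name and a channel named input:

IntegrationManagementConfigurer configurer = context.getBean(
        "integrationManagementConfigurer", IntegrationManagementConfigurer.class);
MessageChannelMetrics channelMetrics = configurer.getChannelMetrics("input");
int sendCount = channelMetrics.getSendCount();   // number of sends observed on the 'input' channel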

See the javadocs for complete information about these classes.

When JMX is enabled (see the section called “JMX Support”), these metrics are also exposed by the IntegrationMBeanExporter.

==== MessageChannel Metric Features

Message channels report metrics according to their concrete type. If you are looking at a DirectChannel, you will see statistics for the send operation. If it is a QueueChannel, you will also see statistics for the receive operation, as well as the count of messages that are currently buffered by this QueueChannel. In both cases there are some metrics that are simple counters (message count and error count), and some that are estimates of averages of interesting quantities. The algorithms used to calculate these estimates are described briefly in the section below.

Metric Type | Example | Algorithm
Count | Send Count | Simple incrementer. Increases by one when an event occurs.
Error Count | Send Error Count | Simple incrementer. Increases by one when a send results in an error.
Duration | Send Duration (method execution time in milliseconds) | Exponential Moving Average with decay factor (10 by default). Average of the method execution time over roughly the last 10 (default) measurements.
Rate | Send Rate (number of operations per second) | Inverse of Exponential Moving Average of the interval between events with decay in time (lapsing over 60 seconds by default) and per measurement (last 10 events by default).
Error Rate | Send Error Rate (number of errors per second) | Inverse of Exponential Moving Average of the interval between error events with decay in time (lapsing over 60 seconds by default) and per measurement (last 10 events by default).
Ratio | Send Success Ratio (ratio of successful to total sends) | Estimate the success ratio as the Exponential Moving Average of the series composed of values 1 for success and 0 for failure (decaying as per the rate measurement over time and events by default). Error ratio is 1 - success ratio.

==== MessageHandler Metric Features

The following table shows the statistics maintained for message handlers. Some metrics are simple counters (message count and error count), and one is an estimate of the average handle duration. The algorithms used to calculate these estimates are described briefly in the table below:

Metric Type | Example | Algorithm
Count | Handle Count | Simple incrementer. Increases by one when an event occurs.
Error Count | Handler Error Count | Simple incrementer. Increases by one when an invocation results in an error.
Active Count | Handler Active Count | Indicates the number of threads currently invoking the handler (or any downstream synchronous flow).
Duration | Handle Duration (method execution time in milliseconds) | Exponential Moving Average with decay factor (10 by default). Average of the method execution time over roughly the last 10 (default) measurements.

==== Time-Based Average Estimates

A feature of the time-based average estimates is that they decay with time if no new measurements arrive. To help interpret the behaviour over time, the time (in seconds) since the last measurement is also exposed as a metric.

There are two basic exponential models: decay per measurement (appropriate for duration and anything where the number of measurements is part of the metric), and decay per time unit (more suitable for rate measurements where the time in between measurements is part of the metric). Both models depend on the fact that

S(n) = sum(i=0,i=n) w(i) x(i) has a special form when w(i) = r^i, with r=constant:

S(n) = x(n) + r S(n-1) (so you only have to store S(n-1), not the whole series x(i), to generate a new metric estimate from the last measurement). The algorithms used in the duration metrics use r=exp(-1/M) with M=10. The net effect is that the estimate S(n) is more heavily weighted to recent measurements and is composed roughly of the last M measurements; M is the "window" or lapse rate of the estimate. In the case of the vanilla moving average, i is a counter over the number of measurements. In the case of the rate, we interpret i as the elapsed time, or a combination of elapsed time and a counter (so the metric estimate contains contributions roughly from the last M measurements and the last T seconds).
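The following is a minimal illustrative sketch of the decay-per-measurement model described above (it is not one of the framework's ExponentialMovingAverage classes):

public class DecayPerMeasurementAverage {

    private final double decay;   // r = exp(-1/M)

    private double weightedSum;   // S(n) = x(n) + r * S(n-1)

    private double weightTotal;   // sum of the weights r^i, used to normalize the estimate

    public DecayPerMeasurementAverage(int window) {
        this.decay = Math.exp(-1.0 / window);
    }

    public void append(double measurement) {
        this.weightedSum = measurement + this.decay * this.weightedSum;
        this.weightTotal = 1.0 + this.decay * this.weightTotal;
    }

    public double getMean() {
        return this.weightTotal == 0.0 ? 0.0 : this.weightedSum / this.weightTotal;
    }

}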

==== Metrics Factory

A new strategy interface MetricsFactory has been introduced, allowing you to provide custom channel metrics for your MessageChannels and MessageHandlers. By default, a DefaultMetricsFactory provides default implementations of MessageChannelMetrics and MessageHandlerMetrics, which are described in the next bullet. To override the default MetricsFactory, configure it as described above by providing a reference to your MetricsFactory bean instance. You can either customize the default implementations as described in the next bullet, or provide completely different implementations by extending AbstractMessageChannelMetrics and/or AbstractMessageHandlerMetrics.

In addition to the default metrics factory described above, the framework provides the AggregatingMetricsFactory. This factory creates AggregatingMessageChannelMetrics and AggregatingMessageHandlerMetrics. In very high volume scenarios, the cost of capturing statistics can be prohibitive (2 calls to the system time and storing the data for each message). The aggregating metrics aggregate the response time over a sample of messages. This can save significant CPU time.

[Caution]Caution

The statistics will be skewed if messages arrive in bursts. These metrics are intended for use with high, constant-volume, message rates.

<bean id="aggregatingMetricsFactory"
            class="org.springframework.integration.support.management.AggregatingMetricsFactory">
    <constructor-arg value="1000" /> <!-- sample size -->
</bean>

The above configuration aggregates the duration over 1000 messages. Counts (send, error) are maintained per-message but the statistics are per 1000 messages.
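An equivalent Java configuration might look like the following sketch:

@Bean
public MetricsFactory metricsFactory() {
    // aggregate duration statistics over samples of 1000 messages
    return new AggregatingMetricsFactory(1000);
}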

  • Customizing the Default Channel/Handler Statistics

See the section called “Time-Based Average Estimates” and the Javadocs for the ExponentialMovingAverage* classes for more information about these values.

By default, the DefaultMessageChannelMetrics and DefaultMessageHandlerMetrics use a window of 10 measurements, a rate period of 1 second (rate per second) and a decay lapse period of 1 minute.

If you wish to override these defaults, you can provide a custom MetricsFactory that returns appropriately configured metrics and provide a reference to it to the MBean exporter as described above.

Example:

public static class CustomMetrics implements MetricsFactory {

    @Override
    public AbstractMessageChannelMetrics createChannelMetrics(String name) {
        return new DefaultMessageChannelMetrics(name,
                new ExponentialMovingAverage(20, 1000000.),
                new ExponentialMovingAverageRate(2000, 120000, 30, true),
                new ExponentialMovingAverageRatio(130000, 40, true),
                new ExponentialMovingAverageRate(3000, 140000, 50, true));
    }

    @Override
    public AbstractMessageHandlerMetrics createHandlerMetrics(String name) {
        return new DefaultMessageHandlerMetrics(name, new ExponentialMovingAverage(20, 1000000.));
    }

}
  • Advanced Customization

The customizations described above are wholesale and will apply to all appropriate beans exported by the MBean exporter. This is the extent of customization available using XML configuration.

Individual beans can be provided with different implementations using Java @Configuration or programmatically at runtime, after the application context has been refreshed, by invoking the configureMetrics methods on AbstractMessageChannel and AbstractMessageHandler.
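For example, a minimal runtime sketch (the channel name and window values are assumptions) that replaces the metrics implementation for a single channel after the context has been refreshed:

AbstractMessageChannel channel = context.getBean("someChannel", AbstractMessageChannel.class);
channel.configureMetrics(new DefaultMessageChannelMetrics("someChannel",
        new ExponentialMovingAverage(20, 1000000.),
        new ExponentialMovingAverageRate(2000, 120000, 30, true),
        new ExponentialMovingAverageRatio(130000, 40, true),
        new ExponentialMovingAverageRate(3000, 140000, 50, true)));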

  • Performance Improvement

Previously, the time-based metrics (see the section called “Time-Based Average Estimates”) were calculated in real time. The statistics are now calculated when retrieved instead. This resulted in a significant performance improvement, at the expense of a small amount of additional memory for each statistic. As discussed in the bullet above, the statistics can be disabled altogether, while retaining the MBean so that Lifecycle operations can still be invoked.

=== JMX Support

Spring Integration provides Channel Adapters for receiving and publishing JMX Notifications. There is also an Inbound Channel Adapter for polling JMX MBean attribute values, and an Outbound Channel Adapter for invoking JMX MBean operations.

==== Notification Listening Channel Adapter

The Notification-listening Channel Adapter requires a JMX ObjectName for the MBean that publishes notifications to which this listener should be registered. A very simple configuration might look like this:

<int-jmx:notification-listening-channel-adapter id="adapter"
    channel="channel"
    object-name="example.domain:name=publisher"/>
[Tip]Tip

The notification-listening-channel-adapter registers with an MBeanServer at startup, and the default bean name is mbeanServer, which happens to be the same bean name generated when using Spring’s <context:mbean-server/> element. If you need to use a different name, be sure to include the mbean-server attribute.

The adapter can also accept a reference to a NotificationFilter and a handback Object to provide some context that is passed back with each Notification. Both of those attributes are optional. Extending the above example to include those attributes as well as an explicit MBeanServer bean name would produce the following:

<int-jmx:notification-listening-channel-adapter id="adapter"
    channel="channel"
    mbean-server="someServer"
    object-name="example.domain:name=somePublisher"
    notification-filter="notificationFilter"
    handback="myHandback"/>

The Notification-listening Channel Adapter is event-driven and registered with the MBeanServer directly. It does not require any poller configuration.

[Note]Note

For this component only, the object-name attribute can contain an ObjectName pattern (e.g. "org.foo:type=Bar,name=*") and the adapter will receive notifications from all MBeans with ObjectNames that match the pattern. In addition, the object-name attribute can contain a SpEL reference to a <util:list/> of ObjectName patterns:

<jmx:notification-listening-channel-adapter id="manyNotificationsAdapter"
    channel="manyNotificationsChannel"
    object-name="#{patterns}"/>

<util:list id="patterns">
    <value>org.foo:type=Foo,name=*</value>
    <value>org.foo:type=Bar,name=*</value>
</util:list>

The names of the located MBean(s) will be logged when DEBUG level logging is enabled.

==== Notification Publishing Channel Adapter

The Notification-publishing Channel Adapter is relatively simple. It only requires a JMX ObjectName in its configuration as shown below.

<context:mbean-export/>

<int-jmx:notification-publishing-channel-adapter id="adapter"
    channel="channel"
    object-name="example.domain:name=publisher"/>

It does also require that an MBeanExporter be present in the context. That is why the <context:mbean-export/> element is shown above as well.

When Messages are sent to the channel for this adapter, the Notification is created from the Message content. If the payload is a String it will be passed as the message text for the Notification. Any other payload type will be passed as the userData of the Notification.

JMX Notifications also have a type, and it should be a dot-delimited String. There are two ways to provide the type. Precedence will always be given to a Message header value associated with the JmxHeaders.NOTIFICATION_TYPE key. Otherwise, you can rely on the fallback default-notification-type attribute provided in the configuration.

<context:mbean-export/>

<int-jmx:notification-publishing-channel-adapter id="adapter"
    channel="channel"
    object-name="example.domain:name=publisher"
    default-notification-type="some.default.type"/>
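
For example, the following sketch (the channel bean name is assumed) overrides that default by setting the header explicitly:

MessageChannel channel = context.getBean("channel", MessageChannel.class);
channel.send(MessageBuilder.withPayload("Something notable happened")
        .setHeader(JmxHeaders.NOTIFICATION_TYPE, "some.explicit.type")   // takes precedence over the default
        .build());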

==== Attribute Polling Channel Adapter

The Attribute Polling Channel Adapter is useful when you have a requirement to periodically check on some value that is available through an MBean as a managed attribute. The poller can be configured in the same way as any other polling adapter in Spring Integration (or it’s possible to rely on the default poller). The object-name and attribute-name are required. An MBeanServer reference is also required, but it will automatically check for a bean named mbeanServer by default, just like the Notification-listening Channel Adapter described above.

<int-jmx:attribute-polling-channel-adapter id="adapter"
    channel="channel"
    object-name="example.domain:name=someService"
    attribute-name="InvocationCount">
        <int:poller max-messages-per-poll="1" fixed-rate="5000"/>
</int-jmx:attribute-polling-channel-adapter>

==== Tree Polling Channel Adapter

The Tree Polling Channel Adapter queries the JMX MBean tree and sends a message with a payload that is the graph of objects that matches the query. By default the MBeans are mapped to primitives and simple Objects like Map, List and arrays - permitting simple transformation, for example, to JSON. An MBeanServer reference is also required, but it will automatically check for a bean named mbeanServer by default, just like the Notification-listening Channel Adapter described above. A basic configuration would be:

<int-jmx:tree-polling-channel-adapter id="adapter"
    channel="channel"
    query-name="example.domain:type=*">
        <int:poller max-messages-per-poll="1" fixed-rate="5000"/>
</int-jmx:tree-polling-channel-adapter>

This will include all attributes on the MBeans selected. You can filter the attributes by providing an MBeanObjectConverter that has an appropriate filter configured. The converter can be provided as a reference to a bean definition using the converter attribute, or as an inner <bean/> definition. A DefaultMBeanObjectConverter is provided which can take an MBeanAttributeFilter in its constructor argument.

Two standard filters are provided: the NamedFieldsMBeanAttributeFilter allows you to specify a list of attributes to include, and the NotNamedFieldsMBeanAttributeFilter allows you to specify a list of attributes to exclude. You can also implement your own filter.

==== Operation Invoking Channel Adapter

The operation-invoking-channel-adapter enables Message-driven invocation of any managed operation exposed by an MBean. Each invocation requires the operation name to be invoked and the ObjectName of the target MBean. Both of these must be explicitly provided via adapter configuration:

<int-jmx:operation-invoking-channel-adapter id="adapter"
    object-name="example.domain:name=TestBean"
    operation-name="ping"/>

Then the adapter only needs to be able to discover the mbeanServer bean. If a different bean name is required, then provide the mbean-server attribute with a reference.

The payload of the Message will be mapped to the parameters of the operation, if any. A Map-typed payload with String keys is treated as name/value pairs, whereas a List or array would be passed as a simple argument list (with no explicit parameter names). If the operation requires a single parameter value, then the payload can represent that single value, and if the operation requires no parameters, then the payload would be ignored.
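As a minimal sketch (the channel name and parameter name are assumptions), a Map payload supplies named arguments to the configured operation:

MessageChannel opChannel = context.getBean("opChannel", MessageChannel.class);
Map<String, Object> operationArgs = new HashMap<>();
operationArgs.put("rate", 42);   // mapped to the operation's 'rate' parameter by name
opChannel.send(new GenericMessage<>(operationArgs));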

If you want to expose a channel for a single common operation to be invoked by Messages that need not contain headers, then that option works well.

==== Operation Invoking Outbound Gateway

Similar to the operation-invoking-channel-adapter, Spring Integration also provides an operation-invoking-outbound-gateway, which can be used when dealing with non-void operations where a return value is required. The return value is sent as the message payload to the reply-channel specified by this gateway.

<int-jmx:operation-invoking-outbound-gateway request-channel="requestChannel"
   reply-channel="replyChannel"
   object-name="o.s.i.jmx.config:type=TestBean,name=testBeanGateway"
   operation-name="testWithReturn"/>

If the reply-channel attribute is not provided, the reply message will be sent to the channel that is identified by the IntegrationMessageHeaderAccessor.REPLY_CHANNEL header. That header is typically auto-created by the entry point into a message flow, such as any Gateway component. However, if the message flow was started by manually creating a Spring Integration Message and sending it directly to a Channel, then you must specify the message header explicitly or use the provided reply-channel attribute.
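A minimal sketch of that manual case (the request channel bean name is assumed), setting the reply channel header explicitly:

QueueChannel replies = new QueueChannel();
MessageChannel requests = context.getBean("requestChannel", MessageChannel.class);
requests.send(MessageBuilder.withPayload("ignored for a no-arg operation")
        .setReplyChannel(replies)
        .build());
Message<?> reply = replies.receive(10000);   // the operation's return value is the payload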

==== MBean Exporter

Spring Integration components themselves may be exposed as MBeans when the IntegrationMBeanExporter is configured. To create an instance of the IntegrationMBeanExporter, define a bean and provide a reference to an MBeanServer and a domain name (if desired). The domain can be left out, in which case the default domain is org.springframework.integration.

<int-jmx:mbean-export id="integrationMBeanExporter"
            default-domain="my.company.domain" server="mbeanServer"/>

<bean id="mbeanServer" class="org.springframework.jmx.support.MBeanServerFactoryBean">
    <property name="locateExistingServerIfPossible" value="true"/>
</bean>
[Important]Important

The MBean exporter is orthogonal to the one provided in Spring core - it registers message channels and message handlers, but not itself. You can expose the exporter itself, and certain other components in Spring Integration, using the standard <context:mbean-export/> tag. The exporter has some metrics attached to it, for instance a count of active handlers and a count of queued messages.

It also has a useful operation, as discussed in the section called “Orderly Shutdown Managed Operation”.

Starting with Spring Integration 4.0 the @EnableIntegrationMBeanExport annotation has been introduced for convenient configuration of a default (integrationMbeanExporter) bean of type IntegrationMBeanExporter with several useful options at the @Configuration class level. For example:

@Configuration
@EnableIntegration
@EnableIntegrationMBeanExport(server = "mbeanServer", managedComponents = "input")
public class ContextConfiguration {

	@Bean
	public MBeanServerFactoryBean mbeanServer() {
		return new MBeanServerFactoryBean();
	}
}

If there is a need to provide more options, or have several IntegrationMBeanExporter beans e.g. for different MBean Servers, or to avoid conflicts with the standard Spring MBeanExporter (e.g. via @EnableMBeanExport), you can simply configure an IntegrationMBeanExporter as a generic bean.
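A minimal sketch of such a generic bean definition (the domain name is an assumption):

@Bean
public IntegrationMBeanExporter integrationMBeanExporter(MBeanServer mbeanServer) {
    IntegrationMBeanExporter exporter = new IntegrationMBeanExporter();
    exporter.setServer(mbeanServer);
    exporter.setDefaultDomain("my.company.domain");
    return exporter;
}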

===== MBean ObjectNames

All the MessageChannel, MessageHandler and MessageSource instances in the application are wrapped by the MBean exporter to provide management and monitoring features. The generated JMX object names for each component type are listed in the table below:

Component Type | ObjectName
MessageChannel | o.s.i:type=MessageChannel,name=<channelName>
MessageSource | o.s.i:type=MessageSource,name=<channelName>,bean=<source>
MessageHandler | o.s.i:type=MessageHandler,name=<channelName>,bean=<source>

The bean attribute in the object names for sources and handlers takes one of the values in the table below:

Bean Value | Description
endpoint | The bean name of the enclosing endpoint (e.g. <service-activator>), if there is one
anonymous | An indication that the enclosing endpoint didn’t have a user-specified bean name, so the JMX name is the input channel name
internal | For well-known Spring Integration default components
handler/source | None of the above: fallback to the toString() of the object being monitored (handler or source)

Custom elements can be appended to the object name by providing a reference to a Properties object in the object-name-static-properties attribute.

Also, since Spring Integration 3.0, you can use a custom ObjectNamingStrategy via the object-naming-strategy attribute. This permits greater control over the naming of the MBeans, for example to group all Integration MBeans under an Integration type. A simple custom naming strategy implementation might be:

public class Namer implements ObjectNamingStrategy {

	private final ObjectNamingStrategy realNamer = new KeyNamingStrategy();
	@Override
	public ObjectName getObjectName(Object managedBean, String beanKey) throws MalformedObjectNameException {
		String actualBeanKey = beanKey.replace("type=", "type=Integration,componentType=");
		return realNamer.getObjectName(managedBean, actualBeanKey);
	}

}

The beanKey argument is a String containing the standard object name, beginning with the default-domain and including any additional static properties. This example simply moves the standard type part to componentType and sets the type to Integration, enabling selection of all Integration MBeans in one query: "my.domain:type=Integration,*". This also groups the beans under one tree entry under the domain in tools like VisualVM.

[Note]Note

The default naming strategy is a MetadataNamingStrategy. The exporter propagates the default-domain to that object to allow it to generate a fallback object name if parsing of the bean key fails. If your custom naming strategy is a MetadataNamingStrategy (or subclass), the exporter will not propagate the default-domain; you will need to configure it on your strategy bean.

===== JMX Improvements

Version 4.2 introduced some important improvements, representing a fairly major overhaul to the JMX support in the framework. These resulted in a significant performance improvement of the JMX statistics collection and much more control thereof, but have some implications for user code in a few specific (uncommon) situations. These changes are detailed below, with a caution where necessary.

  • Metrics Capture

Previously, MessageSource, MessageChannel and MessageHandler metrics were captured by wrapping the object in a JDK dynamic proxy to intercept appropriate method calls and capture the statistics. The proxy was added when an integration MBean exporter was declared in the context.

Now, the statistics are captured by the beans themselves; see the section called “Configuring Metrics Capture” for more information.

[Warning]Warning

This change means that you no longer automatically get an MBean or statistics for custom MessageHandler implementations, unless those custom handlers extend AbstractMessageHandler. The simplest way to resolve this is to extend AbstractMessageHandler. If that’s not possible, or desired, another work-around is to implement the MessageHandlerMetrics interface. For convenience, a DefaultMessageHandlerMetrics is provided to capture and report statistics. Invoke the beforeHandle and afterHandle at the appropriate times. Your MessageHandlerMetrics methods can then delegate to this object to obtain each statistic. Similarly, MessageSource implementations must extend AbstractMessageSource or implement MessageSourceMetrics. Message sources only capture a count so there is no provided convenience class; simply maintain the count in an AtomicLong field.

The removal of the proxy has two additional benefits: 1) stack traces in exceptions are reduced (when JMX is enabled) because the proxy is not on the stack; 2) cases where 2 MBeans were exported for the same bean now only export a single MBean with consolidated attributes/operations (see the MBean consolidation bullet below).

  • Resolution

System.nanoTime() is now used to capture times instead of System.currentTimeMillis(). This may provide more accuracy on some JVMs, especially when durations of less than 1 millisecond are expected.

  • Setting Initial Statistics Collection State

Previously, when JMX was enabled, all sources, channels and handlers captured statistics. It is now possible to control whether the statistics are enabled on an individual component. Further, it is possible to capture simple counts on MessageChannels and MessageHandlers instead of the complete time-based statistics. This can have significant performance implications because you can selectively configure where you need detailed statistics, as well as enable/disable at runtime.

See the section called “Configuring Metrics Capture”.

  • @IntegrationManagedResource

Similar to the @ManagedResource annotation, the @IntegrationManagedResource marks a class as eligible to be exported as an MBean; however, it will only be exported if there is an IntegrationMBeanExporter in the application context.

Certain Spring Integration classes (in the org.springframework.integration package) that were previously annotated with @ManagedResource are now annotated with both @ManagedResource and @IntegrationManagedResource. This is for backwards compatibility (see the next bullet). Such MBeans will be exported by any context MBeanExporter or an IntegrationMBeanExporter (but not both - if both exporters are present, the bean is exported by the integration exporter if the bean matches a managed-components pattern).

  • Consolidated MBeans

Certain classes within the framework (mapping routers for example) have additional attributes/operations over and above those provided by metrics and Lifecycle. We will use a Router as an example here.

Previously, beans of these types were exported as two distinct MBeans:

1) the metrics MBean (with an objectName such as: intDomain:type=MessageHandler,name=myRouter,bean=endpoint). This MBean had metrics attributes and metrics/Lifecycle operations.

2) a second MBean (with an objectName such as: ctxDomain:name=org.springframework.integration.config.RouterFactoryBean#0 ,type=MethodInvokingRouter) was exported with the channel mappings attribute and operations.

Now, the attributes and operations are consolidated into a single MBean. The objectName will depend on the exporter. If exported by the integration MBean exporter, the objectName will be, for example: intDomain:type=MessageHandler,name=myRouter,bean=endpoint. If exported by another exporter, the objectName will be, for example: ctxDomain:name=org.springframework.integration.config.RouterFactoryBean#0 ,type=MethodInvokingRouter. There is no difference between these MBeans (aside from the objectName), except that the statistics will not be enabled (the attributes will be 0) by exporters other than the integration exporter; statistics can be enabled at runtime using the JMX operations. When exported by the integration MBean exporter, the initial state can be managed as described above.

[Warning]Warning

If you are currently using the second MBean to change, for example, channel mappings, and you are using the integration MBean exporter, note that the objectName has changed because of the MBean consolidation. There is no change if you are not using the integration MBean exporter.

  • MBean Exporter Bean Name Patterns

Previously, the managed-components patterns were inclusive only. If a bean name matched one of the patterns it would be included. Now, a pattern can be negated by prefixing it with !. For example, "!foo*, foox" will match all beans that don't start with foo, except foox. Patterns are evaluated left to right; the first match (positive or negative) wins and no further patterns are applied.

[Warning]Warning

The addition of this syntax to the pattern causes one possible (although perhaps unlikely) problem. If you have a bean named "!foo" and you include the pattern "!foo" in your MBean exporter's managed-components patterns, it will no longer match; the pattern will now match all beans not named foo. In this case, you can escape the ! in the pattern with \. The pattern "\!foo" means match a bean named "!foo".
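The following sketch shows one possible exporter configuration using such patterns; the domain and bean names are hypothetical. foox is matched by the first (positive) pattern, other beans starting with foo are excluded by the negative pattern, and everything else is matched by the trailing wildcard:

<int-jmx:mbean-export id="integrationMBeanExporter"
    default-domain="my.app.domain"
    managed-components="foox, !foo*, *"/>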

  • IntegrationMBeanExporter changes

The IntegrationMBeanExporter no longer implements SmartLifecycle; this means that start() and stop() operations are no longer available to register/unregister MBeans. The MBeans are now registered during context initialization and unregistered when the context is destroyed.

===== Orderly Shutdown Managed Operation

The MBean exporter provides a JMX operation to shut down the application in an orderly manner, intended for use before terminating the JVM.

public void stopActiveComponents(long howLong)

Its use and operation are described in the section called “CompletableFuture”.

=== Message History

The key benefit of a messaging architecture is loose coupling: participating components do not maintain any awareness of one another. This fact alone makes your application extremely flexible, allowing you to change components without affecting the rest of the flow, to change messaging routes or message consuming styles (polling vs. event driven), and so on. However, this unassuming style of architecture can prove difficult when things go wrong. When debugging, you would probably like to get as much information about the message as you can (its origin, the channels it has traversed, etc.).

Message History is one of those patterns that helps by giving you the option to maintain some level of awareness of a message's path, either for debugging purposes or for maintaining an audit trail. Spring Integration provides a simple way to configure your message flows to maintain the Message History by adding a header to the Message and updating that header every time a message passes through a tracked component.

==== Message History Configuration

To enable Message History, simply define the message-history element in your configuration.

<int:message-history/>

Now every named component (a component that has an id defined) will be tracked. The framework will set the history header in your Message. Its value is very simple - a List<Properties>.

<int:gateway id="sampleGateway" 
    service-interface="org.springframework.integration.history.sample.SampleGateway"
    default-request-channel="bridgeInChannel"/>

<int:chain id="sampleChain" input-channel="chainChannel" output-channel="filterChannel">
  <int:header-enricher>
    <int:header name="baz" value="baz"/>
  </int:header-enricher>
</int:chain>

The above configuration will produce a very simple Message History structure:

[{name=sampleGateway, type=gateway, timestamp=1283281668091},
 {name=sampleChain, type=chain, timestamp=1283281668094}]

To access the Message History, simply read the MessageHistory header. For example:

Iterator<Properties> historyIterator =
    message.getHeaders().get(MessageHistory.HEADER_NAME, MessageHistory.class).iterator();
assertTrue(historyIterator.hasNext());
Properties gatewayHistory = historyIterator.next();
assertEquals("sampleGateway", gatewayHistory.get("name"));
assertTrue(historyIterator.hasNext());
Properties chainHistory = historyIterator.next();
assertEquals("sampleChain", chainHistory.get("name"));

You might not want to track all of the components. To limit the history to certain components based on their names, simply provide the tracked-components attribute and specify a comma-delimited list of component names and/or patterns that match the components you want to track.

<int:message-history tracked-components="*Gateway, sample*, foo"/>

In the above example, Message History will only be maintained for all of the components that end with Gateway, start with sample, or match the name foo exactly.

Starting with version 4.0, you can also use the @EnableMessageHistory annotation in a @Configuration class. In addition, the MessageHistoryConfigurer bean is now exposed as a JMX MBean by the IntegrationMBeanExporter (see the section called “CompletableFuture”), allowing the patterns to be changed at runtime. Note, however, that the bean must be stopped (turning off message history) in order to change the patterns. This feature might be useful to temporarily turn on history to analyze a system. The MBean’s object name is "<domain>:name=messageHistoryConfigurer,type=MessageHistoryConfigurer".
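For example, the equivalent of the tracked-components configuration above might look like the following sketch in Java configuration:

@Configuration
@EnableIntegration
@EnableMessageHistory({"*Gateway", "sample*", "foo"})
public class MessageHistoryConfiguration {

}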

[Important]Important

If multiple such beans are declared (by @EnableMessageHistory and/or <message-history/>), they all must have identical component name patterns (when trimmed and sorted). Do not use a generic <bean/> definition for the MessageHistoryConfigurer.

[Note]Note

Remember that by definition the Message History header is immutable (you can’t re-write history, although some try). Therefore, when writing Message History values, the components are either creating brand new Messages (when the component is an origin), or they are copying the history from a request Message, modifying it and setting the new list on a reply Message. In either case, the values can be appended even if the Message itself is crossing thread boundaries. That means that the history values can greatly simplify debugging in an asynchronous message flow.

=== Message Store

Enterprise Integration Patterns (EIP) identifies several patterns that have the capability to buffer messages. For example, an Aggregator buffers messages until they can be released and a QueueChannel buffers messages until consumers explicitly receive those messages from that channel. Because of the failures that can occur at any point within your message flow, EIP components that buffer messages also introduce a point where messages could be lost.

To mitigate the risk of losing Messages, EIP defines the Message Store pattern which allows EIP components to store Messages typically in some type of persistent store (e.g. RDBMS).

Spring Integration provides support for the Message Store pattern by a) defining a org.springframework.integration.store.MessageStore strategy interface, b) providing several implementations of this interface, and c) exposing a message-store attribute on all components that have the capability to buffer messages so that you can inject any instance that implements the MessageStore interface.

Details on how to configure a specific Message Store implementation and/or how to inject a MessageStore implementation into a specific buffering component are described throughout the manual (see the specific component, such as QueueChannel, Aggregator, Delayer etc.), but here are a couple of samples to give you an idea:

QueueChannel

<int:channel id="myQueueChannel">
    <int:queue message-store="refToMessageStore"/>
</int:channel>

Aggregator

<int:aggregator message-store="refToMessageStore"/>

By default Messages are stored in-memory using org.springframework.integration.store.SimpleMessageStore, an implementation of MessageStore. That might be fine for development or simple low-volume environments where the potential loss of non-persistent messages is not a concern. However, the typical production application will need a more robust option, not only to mitigate the risk of message loss but also to avoid potential out-of-memory errors. Therefore, we also provide MessageStore implementations for a variety of data stores; see the relevant chapters for the supported implementations.

[Important]Important

However, be aware of some limitations when using persistent implementations of the MessageStore.

The Message data (payload and headers) is serialized and deserialized using different serialization strategies depending on the implementation of the MessageStore. For example, when using JdbcMessageStore, only Serializable data is persisted by default. In this case non-Serializable headers are removed before serialization occurs. Also be aware of the protocol specific headers that are injected by transport adapters (e.g., FTP, HTTP, JMS etc.). For example, <http:inbound-channel-adapter/> maps HTTP-headers into Message Headers and one of them is an ArrayList of non-Serializable org.springframework.http.MediaType instances. However you are able to inject your own implementation of the Serializer and/or Deserializer strategy interfaces into some MessageStore implementations (such as JdbcMessageStore) to change the behaviour of serialization and deserialization.

Special attention must be paid to the headers that represent certain types of data. For example, if one of the headers contains an instance of some Spring Bean, upon deserialization you may end up with a different instance of that bean, which directly affects some of the implicit headers created by the framework (e.g., REPLY_CHANNEL or ERROR_CHANNEL). Currently they are not serializable, but even if they were, the deserialized channel would not represent the expected instance.

Beginning with Spring Integration version 3.0, this issue can be resolved with a header enricher, configured to replace these headers with a name after registering the channel with the HeaderChannelRegistry.

Also, when configuring a message flow like this: gateway → queue-channel (backed by a persistent Message Store) → service-activator, the gateway creates a temporary reply channel, which will be lost by the time the service-activator's poller reads from the queue. Again, you can use the header enricher to replace the headers with a String representation.

For more information, refer to the Section 7.2.2, “Header Enricher”.
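For example, a header enricher placed in front of the persistent queue channel can convert the channel headers to registered names; the channel ids below are hypothetical:

<int:header-enricher input-channel="toPersistentQueue" output-channel="persistentQueue">
    <int:header-channels-to-string/>
</int:header-enricher>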

Spring Integration 4.0 introduced two new interfaces: ChannelMessageStore, for operations specific to QueueChannel s, and PriorityCapableChannelMessageStore, which marks a MessageStore implementation for use with PriorityChannel s and provides priority ordering for persisted Messages. The actual behavior depends on the implementation. The framework provides implementations that can be used as a persistent MessageStore for QueueChannel and PriorityChannel.

[Warning]Caution with SimpleMessageStore

Starting with version 4.1, the SimpleMessageStore no longer copies the message group when calling getMessageGroup(). For large message groups, this was a significant performance problem. Version 4.0.1 introduced a boolean copyOnGet property that allows this to be controlled. When used internally by the aggregator, this was set to false to improve performance. It is now false by default.

Users accessing the group store outside of components such as aggregators will now get a direct reference to the group being used by the aggregator, instead of a copy. Manipulation of the group outside of the aggregator may cause unpredictable results.

For this reason, users should not perform such manipulation, or set the copyOnGet property to true.
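If you do need to work with groups outside of the owning component, the previous copy semantics can be restored; a minimal sketch:

@Bean
public SimpleMessageStore messageStore() {
    SimpleMessageStore messageStore = new SimpleMessageStore();
    messageStore.setCopyOnGet(true); // getMessageGroup() returns a copy, as before 4.1
    return messageStore;
}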

==== MessageGroupFactory

Starting with version 4.3, some MessageGroupStore implementations can be injected with a custom MessageGroupFactory strategy to create/customize the MessageGroup instances used by the MessageGroupStore. This defaults to a SimpleMessageGroupFactory which produces SimpleMessageGroup s based on the GroupType.HASH_SET (LinkedHashSet) internal collection. Other possible options are SYNCHRONISED_SET and BLOCKING_QUEUE, where the last one can be used to reinstate the previous SimpleMessageGroup behavior. Also the PERSISTENT option is available. See the next section for more information.
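For example, to reinstate the previous blocking-queue-backed group behavior, a factory could be injected into a SimpleMessageStore as in this sketch (assuming the store exposes a setMessageGroupFactory setter):

@Bean
public MessageGroupStore blockingQueueMessageStore() {
    SimpleMessageStore messageStore = new SimpleMessageStore();
    messageStore.setMessageGroupFactory(
            new SimpleMessageGroupFactory(SimpleMessageGroupFactory.GroupType.BLOCKING_QUEUE));
    return messageStore;
}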

==== Persistence MessageGroupStore and Lazy-Load

Starting with version 4.3, all persistence MessageGroupStore s retrieve MessageGroup s and their messages from the store in a lazy-load manner. In most cases this is useful for the correlation MessageHandler s (Section 6.4, “Aggregator” and Section 6.5, “Resequencer”), where it would be an overhead to load the entire MessageGroup from the store on each correlation operation.

To switch off the lazy-load behavior, the AbstractMessageGroupStore.setLazyLoadMessageGroups(false) option can be used in the configuration.
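For example, with the MongoDB-backed store used in the test below (the bean definition and setter usage are a sketch):

@Bean
public MessageGroupStore mongoStore(MongoDbFactory mongoDbFactory) {
    MongoDbMessageStore messageStore = new MongoDbMessageStore(mongoDbFactory);
    messageStore.setLazyLoadMessageGroups(false); // load entire groups eagerly
    return messageStore;
}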

Our performance tests for lazy-load, using the MongoDB MessageStore (the section called “CompletableFuture”) and an <aggregator> (Section 6.4, “Aggregator”) with a custom release-strategy such as:

<int:aggregator input-channel="inputChannel"
                output-channel="outputChannel"
                message-store="mongoStore"
                release-strategy-expression="size() == 1000"/>

demonstrated these results for 1000 simple messages:

StopWatch 'Lazy-Load Performance': running time (millis) = 38918
-----------------------------------------
ms     %     Task name
-----------------------------------------
02652  007%  Lazy-Load
36266  093%  Eager

=== Metadata Store

Many external systems, services, or resources aren’t transactional (Twitter, RSS, the file system, etc.) and provide no way to mark data as read. Also, some integration solutions need to implement the Enterprise Integration Pattern Idempotent Receiver. To achieve these goals, and to store some previous state of the Endpoint before the next interaction with the external system or before dealing with the next Message, Spring Integration provides the Metadata Store component, an implementation of the org.springframework.integration.metadata.MetadataStore interface with a general key-value contract.

The Metadata Store is designed to store various types of generic meta-data (e.g., published date of the last feed entry that has been processed) to help components such as the Feed adapter deal with duplicates. If a component is not directly provided with a reference to a MetadataStore, the algorithm for locating a metadata store is as follows: First, look for a bean with id metadataStore in the ApplicationContext. If one is found then it will be used, otherwise it will create a new instance of SimpleMetadataStore which is an in-memory implementation that will only persist metadata within the lifecycle of the currently running Application Context. This means that upon restart you may end up with duplicate entries.

If you need to persist metadata between Application Context restarts, these persistent MetadataStores are provided by the framework:

The PropertiesPersistingMetadataStore is backed by a properties file and a PropertiesPersister.

By default, it only persists the state when the application context is closed normally. It implements Flushable so you can persist the state at will, by invoking flush().

<bean id="metadataStore"
    class="org.springframework.integration.store.PropertiesPersistingMetadataStore"/>

Alternatively, you can provide your own implementation of the MetadataStore interface (e.g. JdbcMetadataStore) and configure it as a bean in the Application Context.

Starting with version 4.0, SimpleMetadataStore, PropertiesPersistingMetadataStore and RedisMetadataStore implement ConcurrentMetadataStore. These provide for atomic updates and can be used across multiple component or application instances.
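As a sketch of the kind of atomic check-and-store logic these methods enable (the key naming is hypothetical):

public boolean firstTimeSeen(ConcurrentMetadataStore metadataStore, String businessKey) {
    // putIfAbsent returns null only for the caller that stored the key first
    String previousValue = metadataStore.putIfAbsent(businessKey,
            Long.toString(System.currentTimeMillis()));
    return previousValue == null;
}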

==== Idempotent Receiver and Metadata Store

The Metadata Store is useful for implementing the EIP Idempotent Receiver pattern, when there is a need to filter out an incoming Message that has already been processed and either discard it or perform some other logic on the discarded message. The following configuration is an example of how to do this:

<int:filter input-channel="serviceChannel"
			output-channel="idempotentServiceChannel"
			discard-channel="discardChannel"
			expression="@metadataStore.get(headers.businessKey) == null"/>

<int:publish-subscribe-channel id="idempotentServiceChannel"/>

<int:outbound-channel-adapter channel="idempotentServiceChannel"
                              expression="@metadataStore.put(headers.businessKey, '')"/>

<int:service-activator input-channel="idempotentServiceChannel" ref="service"/>

The value of the idempotent entry may be some expiration date, after which that entry should be removed from the Metadata Store by some scheduled reaper.

Also see the section called “CompletableFuture”.

==== MetadataStoreListener

Some metadata stores (currently only ZooKeeper) support registering a listener to receive events when items change.

public interface MetadataStoreListener {

	void onAdd(String key, String value);

	void onRemove(String key, String oldValue);

	void onUpdate(String key, String newValue);
}

See the javadocs for more information. The MetadataStoreListenerAdapter can be subclassed if you are only interested in a subset of events.
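For example, a listener that is only interested in removals might look like this sketch:

public class RemovalLoggingListener extends MetadataStoreListenerAdapter {

    @Override
    public void onRemove(String key, String oldValue) {
        // other callbacks keep the adapter's no-op behavior
        System.out.println("Removed " + key + " (was " + oldValue + ")");
    }

}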

=== Control Bus

As described in (EIP), the idea behind the Control Bus is that the same messaging system can be used for monitoring and managing the components within the framework as is used for "application-level" messaging. In Spring Integration we build upon the adapters described above so that it’s possible to send Messages as a means of invoking exposed operations.

<int:control-bus input-channel="operationChannel"/>

The Control Bus has an input channel that can be accessed for invoking operations on the beans in the application context. It also has all the common properties of a service activating endpoint, e.g. you can specify an output channel if the result of the operation has a return value that you want to send on to a downstream channel.

The Control Bus executes messages on the input channel as Spring Expression Language expressions. It takes a message, compiles the body to an expression, adds some context, and then executes it. The default context supports any method that has been annotated with @ManagedAttribute or @ManagedOperation. It also supports the methods on Spring’s Lifecycle interface, and it supports methods that are used to configure several of Spring’s TaskExecutor and TaskScheduler implementations. The simplest way to ensure that your own methods are available to the Control Bus is to use the @ManagedAttribute and/or @ManagedOperation annotations. Since those are also used for exposing methods to a JMX MBean registry, it’s a convenient by-product (often the same types of operations you want to expose to the Control Bus would be reasonable for exposing via JMX). Resolution of any particular instance within the application context is achieved in the typical SpEL syntax. Simply provide the bean name with the SpEL prefix for beans (@). For example, to execute a method on a Spring Bean a client could send a message to the operation channel as follows:

Message operation = MessageBuilder.withPayload("@myServiceBean.shutdown()").build();
operationChannel.send(operation);

The root of the context for the expression is the Message itself, so you also have access to the payload and headers as variables within your expression. This is consistent with all the other expression support in Spring Integration endpoints.
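For example, since Lifecycle methods are supported, an endpoint could be stopped and later restarted by referencing its bean name (myAdapter is a hypothetical bean id):

operationChannel.send(MessageBuilder.withPayload("@myAdapter.stop()").build());
// ... later ...
operationChannel.send(MessageBuilder.withPayload("@myAdapter.start()").build());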

=== Orderly Shutdown

As described in the section called “CompletableFuture”, the MBean exporter provides a JMX operation stopActiveComponents, which is used to stop the application in an orderly manner. The operation has a single long parameter. The parameter indicates how long (in milliseconds) the operation will wait to allow in-flight messages to complete. The operation works as follows:

The first step calls beforeShutdown() on all beans that implement OrderlyShutdownCapable. This allows such components to prepare for shutdown. Examples of components that implement this interface, and what they do with this call, include:

  • JMS and AMQP message-driven adapters stop their listener containers;
  • TCP server connection factories stop accepting new connections (while keeping existing connections open);
  • TCP inbound endpoints drop (log) any new messages received;
  • HTTP inbound endpoints return 503 - Service Unavailable for any new requests.

The second step stops any active channels, such as JMS- or AMQP-backed channels.

The third step stops all MessageSource s.

The fourth step stops all inbound MessageProducer s (that are not OrderlyShutdownCapable).

The fifth step waits for any remaining time left, as defined by the value of the long parameter passed in to the operation. This is intended to allow any in-flight messages to complete their journeys. It is therefore important to select an appropriate timeout when invoking this operation.

The sixth step calls afterShutdown() on all OrderlyShutdownCapable components. This allows such components to perform final shutdown tasks (closing all open sockets, for example).

As discussed in the section called “CompletableFuture” this operation can be invoked using JMX. If you wish to programmatically invoke the method, you will need to inject, or otherwise get a reference to, the IntegrationMBeanExporter. If no id attribute is provided on the <int-jmx:mbean-export/> definition, the bean will have a generated name. This name contains a random component to avoid ObjectName collisions if multiple Spring Integration contexts exist in the same JVM (MBeanServer).

For this reason, if you wish to invoke the method programmatically, it is recommended that you provide the exporter with an id attribute so it can easily be accessed in the application context.
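A minimal sketch of such a programmatic invocation, assuming the exporter was declared with an id of mbeanExporter:

@Autowired
private IntegrationMBeanExporter mbeanExporter;

public void shutDownIntegration() {
    // allow up to 20 seconds for in-flight messages to complete
    this.mbeanExporter.stopActiveComponents(20000);
}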

Finally, the operation can be invoked using the <control-bus>; see the monitoring Spring Integration sample application for details.

[Important]Important

The above algorithm was improved in version 4.1. Previously, all task executors and schedulers were stopped. This could cause mid-flow messages in QueueChannel s to remain. Now, the shutdown leaves pollers running in order to allow these messages to be drained and processed.

=== Integration Graph

Starting with version 4.3, Spring Integration provides access to an application’s runtime object model which can, optionally, include component metrics. It is exposed as a graph, which may be used to visualize the current state of the integration application. The o.s.i.support.management.graph package contains all the required classes to collect, build and render the runtime state of Spring Integration components as a single tree-like Graph object. The IntegrationGraphServer should be declared as a bean to build, retrieve and refresh the Graph object. The resulting Graph object can be serialized to any format, although JSON is flexible and convenient to parse and represent on the client side. A simple Spring Integration application with only the default components would expose a graph as follows:

{
  "contentDescriptor": {
    "providerVersion": "4.3.0.RELEASE",
    "providerFormatVersion": 1.0,
    "provider": "spring-integration",
    "name": "myApplication"
  },
  "nodes": [
    {
      "nodeId": 1,
      "name": "nullChannel",
      "stats": null,
      "componentType": "channel"
    },
    {
      "nodeId": 2,
      "name": "errorChannel",
      "stats": null,
      "componentType": "publish-subscribe-channel"
    },
    {
      "nodeId": 3,
      "name": "_org.springframework.integration.errorLogger",
      "stats": {
        "duration": {
          "count": 0,
          "min": 0.0,
          "max": 0.0,
          "mean": 0.0,
          "standardDeviation": 0.0,
          "countLong": 0
        },
        "errorCount": 0,
        "standardDeviationDuration": 0.0,
        "countsEnabled": true,
        "statsEnabled": true,
        "loggingEnabled": false,
        "handleCount": 0,
        "meanDuration": 0.0,
        "maxDuration": 0.0,
        "minDuration": 0.0,
        "activeCount": 0
      },
      "componentType": "logging-channel-adapter",
      "output": null,
      "input": "errorChannel"
    }
  ],
  "links": [
    {
      "from": 2,
      "to": 3,
      "type": "input"
    }
  ]
}

As you can see, the graph consists of three top-level elements.

The contentDescriptor graph element is pretty straightforward and contains general information about the application providing the data. The name can be customized on the IntegrationGraphServer bean or via the spring.application.name application context environment property. The other properties are provided by the framework and allow you to distinguish this model from those produced by other sources.
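For example, the name could be set on the server bean; this sketch assumes the setApplicationName setter:

@Bean
public IntegrationGraphServer integrationGraphServer() {
    IntegrationGraphServer graphServer = new IntegrationGraphServer();
    graphServer.setApplicationName("myApplication"); // used as contentDescriptor.name
    return graphServer;
}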

The links graph element represents connections between nodes from the nodes graph element and, therefore, between integration components in the source Spring Integration application - for example, from a MessageChannel to an EventDrivenConsumer with some MessageHandler, or from an AbstractReplyProducingMessageHandler to a MessageChannel. For convenience, and to allow you to determine a link's purpose, the model supplies a type attribute. The possible types are:

  • input - identifies the direction from a MessageChannel to the endpoint, via an inputChannel or requestChannel property;
  • output - the direction from a MessageHandler, MessageProducer or SourcePollingChannelAdapter to the MessageChannel, via an outputChannel or replyChannel property;
  • error - from a MessageHandler on a PollingConsumer, or from a MessageProducer or SourcePollingChannelAdapter, to the MessageChannel, via an errorChannel property;
  • discard - from a DiscardingMessageHandler (e.g. MessageFilter) to the MessageChannel, via a discardChannel property;
  • route - from an AbstractMappingMessageRouter (e.g. HeaderValueRouter) to the MessageChannel. Similar to output but determined at run-time. May be a configured channel mapping, or a dynamically resolved channel. Routers will typically only retain up to 100 dynamic routes for this purpose, but this can be modified using the dynamicChannelLimit property.

The information from this element can be used by a visualizing tool to render connections between nodes from the nodes graph element, where the from and to numbers represent the value from the nodeId property of the linked nodes. For example the link type can be used to determine the proper port on the target node:

              +---(discard)
              |
         +----o----+
         |         |
         |         |
         |         |
(input)--o         o---(output)
         |         |
         |         |
         |         |
         +----o----+
              |
              +---(error)

The nodes graph element is perhaps the most interesting, because its elements contain not only the runtime components with their componentType s and name s, but can also, optionally, contain metrics exposed by the component. Node elements contain various properties which are generally self-explanatory. For example, expression-based components include the expression property containing the primary expression string for the component. To enable the metrics, add the @EnableIntegrationManagement annotation to a @Configuration class, or add an <int:management/> element to your XML configuration. You can control exactly which components in the framework collect statistics. See the section called “CompletableFuture” for complete information. See the stats attribute of the _org.springframework.integration.errorLogger component in the JSON example above. The nullChannel and errorChannel don’t provide statistics information in this case, because the configuration for this example was:

@Configuration
@EnableIntegration
@EnableIntegrationManagement(statsEnabled = "_org.springframework.integration.errorLogger.handler",
      countsEnabled = "!*",
      defaultLoggingEnabled = "false")
public class ManagementConfiguration {

    @Bean
    public IntegrationGraphServer integrationGraphServer() {
        return new IntegrationGraphServer();
    }

}

The nodeId represents a unique incremental identifier to distinguish one component from another. It is also used in the links element to represent a relationship (connection) of this component to others, if any. The input and output attributes are for the inputChannel and outputChannel properties of the AbstractEndpoint, MessageHandler, SourcePollingChannelAdapter or MessageProducerSupport. See the next section for more information.

==== Graph Runtime Model

Spring Integration components have various levels of complexity. For example, any polled MessageSource also has a SourcePollingChannelAdapter and a MessageChannel to which to send messages from the source data periodically. Other components might be middleware request-reply components, e.g. JmsOutboundGateway, with a consuming AbstractEndpoint to subscribe to (or poll) the requestChannel (input) for messages, and a replyChannel (output) to produce a reply message to send downstream. Meanwhile, any MessageProducerSupport implementation (e.g. ApplicationEventListeningMessageProducer) simply wraps some source protocol listening logic and sends messages to the outputChannel.

Within the graph, Spring Integration components are represented using the IntegrationNode class hierarchy, which you can find in the o.s.i.support.management.graph package. For example the ErrorCapableDiscardingMessageHandlerNode could be used for the AggregatingMessageHandler (because it has a discardChannel option) and can produce errors when consuming from a PollableChannel using a PollingConsumer. Another sample is CompositeMessageHandlerNode - for a MessageHandlerChain when subscribed to a SubscribableChannel, using an EventDrivenConsumer.

[Note]Note

The @MessagingGateway (see Section 8.4, “Messaging Gateways”) provides nodes for each of its methods, where the name attribute is based on the gateway’s bean name and the short method signature. For example, the gateway:

@MessagingGateway(defaultRequestChannel = "four")
public interface Gate {

	void foo(String foo);

	void foo(Integer foo);

	void bar(String bar);

}

produces nodes like:


{
  "nodeId" : 10,
  "name" : "gate.bar(class java.lang.String)",
  "stats" : null,
  "componentType" : "gateway",
  "output" : "four",
  "errors" : null
},
{
  "nodeId" : 11,
  "name" : "gate.foo(class java.lang.String)",
  "stats" : null,
  "componentType" : "gateway",
  "output" : "four",
  "errors" : null
},
{
  "nodeId" : 12,
  "name" : "gate.foo(class java.lang.Integer)",
  "stats" : null,
  "componentType" : "gateway",
  "output" : "four",
  "errors" : null
}

This IntegrationNode hierarchy can be used for parsing the graph model on the client side, as well as for understanding the general Spring Integration runtime behavior. See also Section 3.8, “Programming Tips and Tricks” for more information.

=== Integration Graph Controller

If your application is web-based (or built on top of Spring Boot using an embedded web container) and the Spring Integration HTTP module (see the section called “CompletableFuture”) is present on the classpath, you can use an IntegrationGraphController to expose the IntegrationGraphServer functionality as a REST service. For this purpose, the @EnableIntegrationGraphController @Configuration class annotation and the <int-http:graph-controller/> XML element are available in the HTTP module. Together with the @EnableWebMvc annotation (or <mvc:annotation-driven/> for XML definitions), this configuration registers an IntegrationGraphController @RestController, whose @RequestMapping.path can be configured on the @EnableIntegrationGraphController annotation or <int-http:graph-controller/> element. The default path is /integration.

The IntegrationGraphController @RestController provides these services:

  • @GetMapping(name = "getGraph") - to retrieve the state of the Spring Integration components since the last IntegrationGraphServer refresh. The o.s.i.support.management.graph.Graph is returned as a @ResponseBody of the REST service;
  • @GetMapping(path = "/refresh", name = "refreshGraph") - to refresh the current Graph for the actual runtime state and return it as a REST response. It is not necessary to refresh the graph for metrics; they are provided in real-time when the graph is retrieved. Refresh can be called if the application context has been modified since the graph was last retrieved; in that case, the graph is completely rebuilt.

Any Security and Cross Origin restrictions for the IntegrationGraphController can be achieved with the standard configuration options and components provided by Spring Security and Spring MVC projects. A simple example of that follows:

<mvc:annotation-driven />

<mvc:cors>
	<mvc:mapping path="/myIntegration/**"
				 allowed-origins="http://localhost:9090"
				 allowed-methods="GET" />
</mvc:cors>

<security:http>
    <security:intercept-url pattern="/myIntegration/**" access="ROLE_ADMIN" />
</security:http>


<int-http:graph-controller path="/myIntegration" />

The Java & Annotation Configuration variant follows; note that, for convenience, the annotation provides an allowedOrigins attribute; this just provides GET access to the path. For more sophistication, you can configure the CORS mappings using standard Spring MVC mechanisms.

@Configuration
@EnableWebMvc
@EnableWebSecurity
@EnableIntegration
@EnableIntegrationGraphController(path = "/testIntegration", allowedOrigins="http://localhost:9090")
public class IntegrationConfiguration extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
	    http
            .authorizeRequests()
               .antMatchers("/testIntegration/**").hasRole("ADMIN")
            // ...
            .formLogin();
    }

    //...

}

= Integration Endpoints

This section covers the various Channel Adapters and Messaging Gateways provided by Spring Integration to support Message-based communication with external systems.

== Endpoint Quick Reference Table

As discussed in the sections above, Spring Integration provides a number of endpoints used to interface with external systems, file systems etc. The following is a summary of the various endpoints with quick links to the appropriate chapter.

To recap, Inbound Channel Adapters are used for one-way integration bringing data into the messaging application. Outbound Channel Adapters are used for one-way integration to send data out of the messaging application. Inbound Gateways are used for a bidirectional integration flow where some other system invokes the messaging application and receives a reply. Outbound Gateways are used for a bidirectional integration flow where the messaging application invokes some external service or entity, expecting a result.

In the table below, Y indicates that an endpoint of that type is provided (see the corresponding module's chapter for details); N indicates that it is not provided.

Module         Inbound Adapter  Outbound Adapter  Inbound Gateway  Outbound Gateway
AMQP           Y                Y                 Y                Y
Events         Y                Y                 N                N
Feed           Y                N                 N                N
File           Y                Y                 N                Y
FTP(S)         Y                Y                 N                Y
Gemfire        Y                Y                 N                N
HTTP           Y                Y                 Y                Y
JDBC           Y                Y                 N                Y
JMS            Y                Y                 Y                Y
JMX            Y                Y                 N                Y
JPA            Y                Y                 N                Y
Mail           Y                Y                 N                N
MongoDB        Y                Y                 N                N
MQTT           Y                Y                 N                N
Redis          Y                Y                 Y                Y
Resource       Y                N                 N                N
RMI            N                N                 Y                Y
SFTP           Y                Y                 N                Y
Stream         Y                Y                 N                N
Syslog         Y                N                 N                N
TCP            Y                Y                 Y                Y
Twitter        Y                Y                 N                Y
UDP            Y                Y                 N                N
Web Services   N                N                 Y                Y
Web Sockets    Y                Y                 N                N
XMPP           Y                Y                 N                N

In addition, as discussed in Part IV, “Core Messaging”, endpoints are provided for interfacing with Plain Old Java Objects (POJOs). As discussed in Section 4.3, “Channel Adapter”, the <int:inbound-channel-adapter> allows polling a Java method for data; the <int:outbound-channel-adapter> allows sending data to a void method; and, as discussed in Section 8.4, “Messaging Gateways”, the <int:gateway> allows any Java program to invoke a messaging flow. Each of these works without requiring any source-level dependencies on Spring Integration. The equivalent of an outbound gateway in this context would be to use a Service Activator (the section called “CompletableFuture”) to invoke a method that returns an Object of some kind.

== AMQP Support

=== Introduction

Spring Integration provides Channel Adapters for receiving and sending messages using the Advanced Message Queuing Protocol (AMQP). Several adapters and gateways are available; they are described in the sections that follow.

Spring Integration also provides a point-to-point Message Channel as well as a publish/subscribe Message Channel backed by AMQP Exchanges and Queues.

In order to provide AMQP support, Spring Integration relies on Spring AMQP, which "applies core Spring concepts to the development of AMQP-based messaging solutions". Spring AMQP provides similar semantics to Spring JMS.

Whereas the provided AMQP Channel Adapters are intended for unidirectional Messaging (send or receive) only, Spring Integration also provides inbound and outbound AMQP Gateways for request/reply operations.

[Tip]Tip

Please familiarize yourself with the reference documentation of the Spring AMQP project as well. It provides much more in-depth information regarding Spring’s integration with AMQP in general and RabbitMQ in particular.

=== Inbound Channel Adapter

A configuration sample for an AMQP Inbound Channel Adapter is shown below.

<int-amqp:inbound-channel-adapter
                                  id="inboundAmqp" 1
                                  channel="inboundChannel" 2
                                  queue-names="si.test.queue" 3
                                  acknowledge-mode="AUTO" 4
                                  advice-chain="" 5
                                  channel-transacted="" 6
                                  concurrent-consumers="" 7
                                  connection-factory="" 8
                                  error-channel="" 9
                                  expose-listener-channel="" 10
                                  header-mapper="" 11
                                  mapped-request-headers="" 12
                                  listener-container="" 13
                                  message-converter="" 14
                                  message-properties-converter="" 15
                                  phase="" (16)
                                  prefetch-count="" (17)
                                  receive-timeout="" (18)
                                  recovery-interval="" (19)
                                  missing-queues-fatal="" (20)
                                  shutdown-timeout="" (21)
                                  task-executor="" (22)
                                  transaction-attribute="" (23)
                                  transaction-manager="" (24)
                                  tx-size="" /> (25)

1

Unique ID for this adapter. Optional.

2

Message Channel to which converted Messages should be sent. Required.

3

Names of the AMQP Queues from which Messages should be consumed (comma-separated list). Required.

4

Acknowledge Mode for the MessageListenerContainer. When set to MANUAL, the delivery tag and channel are provided in the message headers amqp_deliveryTag and amqp_channel respectively; the user application is responsible for acknowledgement. NONE means no acknowledgements (autoAck); AUTO means the adapter’s container will acknowledge when the downstream flow completes. Optional (Defaults to AUTO); see the section called “CompletableFuture”.

5

Extra AOP Advice(s) to handle cross cutting behavior associated with this Inbound Channel Adapter. Optional.

6

Flag to indicate that channels created by this component will be transactional. If true, tells the framework to use a transactional channel and to end all operations (send or receive) with a commit or rollback depending on the outcome, with an exception signalling a rollback. Optional (Defaults to false).

7

Specify the number of concurrent consumers to create. Default is 1. Raising the number of concurrent consumers is recommended in order to scale the consumption of messages coming in from a queue. However, note that any ordering guarantees are lost once multiple consumers are registered. In general, stick with 1 consumer for low-volume queues. Optional.

8

Bean reference to the RabbitMQ ConnectionFactory. Optional (Defaults to connectionFactory).

9

Message Channel to which error Messages should be sent. Optional.

10

Shall the listener channel (com.rabbitmq.client.Channel) be exposed to a registered ChannelAwareMessageListener. Optional (Defaults to true).

11

A reference to an AmqpHeaderMapper to use when receiving AMQP Messages. Optional. By default only standard AMQP properties (e.g. contentType) will be copied to Spring Integration MessageHeaders. Any user-defined headers within the AMQP MessageProperties will NOT be copied to the Message by the default DefaultAmqpHeaderMapper. Not allowed if request-header-names is provided.

12

Comma-separated list of names of AMQP Headers to be mapped from the AMQP request into the MessageHeaders. This can only be provided if the header-mapper reference is not provided. The values in this list can also be simple patterns to be matched against the header names (e.g. "*" or "foo*, bar" or "*foo").

13

Reference to the SimpleMessageListenerContainer to use for receiving AMQP Messages. If this attribute is provided, then no other attribute related to the listener container configuration should be provided. In other words, by setting this reference, you must take full responsibility of the listener container configuration. The only exception is the MessageListener itself. Since that is actually the core responsibility of this Channel Adapter implementation, the referenced listener container must NOT already have its own MessageListener configured. Optional.

14

The MessageConverter to use when receiving AMQP Messages. Optional.

15

The MessagePropertiesConverter to use when receiving AMQP Messages. Optional.

(16)

Specify the phase in which the underlying SimpleMessageListenerContainer should be started and stopped. The startup order proceeds from lowest to highest, and the shutdown order is the reverse of that. By default this value is Integer.MAX_VALUE meaning that this container starts as late as possible and stops as soon as possible. Optional.

(17)

Tells the AMQP broker how many messages to send to each consumer in a single request. Often this can be set quite high to improve throughput. It should be greater than or equal to the transaction size (see attribute "tx-size"). Optional (Defaults to 1).

(18)

Receive timeout in milliseconds. Optional (Defaults to 1000).

(19)

Specifies the interval between recovery attempts of the underlying SimpleMessageListenerContainer (in milliseconds). Optional (Defaults to 5000).

(20)

If true, and none of the queues are available on the broker, the container will throw a fatal exception during startup, and will stop if the queues are deleted while the container is running (after making 3 attempts to passively declare the queues). If false, the container will not throw an exception but will go into recovery mode, attempting to restart according to the recovery-interval. Optional (Defaults to true).

(21)

The time to wait for workers in milliseconds after the underlying SimpleMessageListenerContainer is stopped, and before the AMQP connection is forced closed. If any workers are active when the shutdown signal comes they will be allowed to finish processing as long as they can finish within this timeout. Otherwise the connection is closed and messages remain unacked (if the channel is transactional). Optional (Defaults to 5000).

(22)

By default, the underlying SimpleMessageListenerContainer uses a SimpleAsyncTaskExecutor implementation, that fires up a new Thread for each task, executing it asynchronously. By default, the number of concurrent threads is unlimited. NOTE: This implementation does not reuse threads. Consider a thread-pooling TaskExecutor implementation as an alternative. Optional (Defaults to SimpleAsyncTaskExecutor).

(23)

By default the underlying SimpleMessageListenerContainer creates a new instance of the DefaultTransactionAttribute (which takes the EJB approach of rolling back on runtime, but not checked, exceptions). Optional (Defaults to DefaultTransactionAttribute).

(24)

Sets a Bean reference to an external PlatformTransactionManager on the underlying SimpleMessageListenerContainer. The transaction manager works in conjunction with the "channel-transacted" attribute. If there is already a transaction in progress when the framework is sending or receiving a message, and the channelTransacted flag is true, then the commit or rollback of the messaging transaction will be deferred until the end of the current transaction. If the channelTransacted flag is false, then no transaction semantics apply to the messaging operation (it is auto-acked). For further information see Transactions with Spring AMQP. Optional.

(25)

Tells the SimpleMessageListenerContainer how many messages to process in a single transaction (if the channel is transactional). For best results it should be less than or equal to the set "prefetch-count". Optional (Defaults to 1).

[Note]container

Note that when configuring an external container, you cannot use the Spring AMQP namespace to define the container. This is because the namespace requires at least one <listener/> element. In this environment, the listener is internal to the adapter. For this reason, you must define the container using a normal Spring <bean/> definition, such as:

<bean id="container"
 class="org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer">
    <property name="connectionFactory" ref="connectionFactory" />
    <property name="queueNames" value="foo.queue" />
    <property name="defaultRequeueRejected" value="false"/>
</bean>
[Important]Important

Even though the Spring Integration JMS and AMQP support is very similar, important differences exist. The JMS Inbound Channel Adapter uses a JmsDestinationPollingSource under the covers and expects a configured Poller. The AMQP Inbound Channel Adapter, on the other hand, uses a SimpleMessageListenerContainer and is message driven. In that regard it is more similar to the JMS Message Driven Channel Adapter.

==== Configuring with Java Configuration

The following Spring Boot application provides an example of configuring the inbound adapter using Java configuration:

@SpringBootApplication
public class AmqpJavaApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(AmqpJavaApplication.class)
            .web(false)
            .run(args);
    }

    @Bean
    public MessageChannel amqpInputChannel() {
        return new DirectChannel();
    }

    @Bean
    public AmqpInboundChannelAdapter inbound(SimpleMessageListenerContainer listenerContainer,
            @Qualifier("amqpInputChannel") MessageChannel channel) {
        AmqpInboundChannelAdapter adapter = new AmqpInboundChannelAdapter(listenerContainer);
        adapter.setOutputChannel(channel);
        return adapter;
    }

    @Bean
    public SimpleMessageListenerContainer container(ConnectionFactory connectionFactory) {
        SimpleMessageListenerContainer container =
                                   new SimpleMessageListenerContainer(connectionFactory);
        container.setQueueNames("foo");
        container.setConcurrentConsumers(2);
        // ...
        return container;
    }

    @Bean
    @ServiceActivator(inputChannel = "amqpInputChannel")
    public MessageHandler handler() {
        return new MessageHandler() {

            @Override
            public void handleMessage(Message<?> message) throws MessagingException {
                System.out.println(message.getPayload());
            }

        };
    }

}

==== Configuring with the Java DSL

The following Spring Boot application provides an example of configuring the inbound adapter using the Java DSL:

@SpringBootApplication
public class AmqpJavaApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(AmqpJavaApplication.class)
            .web(false)
            .run(args);
    }

    @Bean
    public IntegrationFlow amqpInbound(ConnectionFactory connectionFactory) {
        return IntegrationFlows.from(Amqp.inboundAdapter(connectionFactory, "foo"))
                .handle(m -> System.out.println(m.getPayload()))
                .get();
    }

}

=== Inbound Gateway

The inbound gateway supports all the attributes on the inbound channel adapter (except channel is replaced by request-channel), plus some additional attributes:

<int-amqp:inbound-gateway
                          id="inboundGateway" 1
                          request-channel="myRequestChannel" 2
                          header-mapper="" 3
                          mapped-request-headers="" 4
                          mapped-reply-headers="" 5
                          reply-channel="myReplyChannel" 6
                          reply-timeout="1000"  7
                          amqp-template="" 8
                          default-reply-to="" /> 9

1

Unique ID for this adapter. Optional.

2

Message Channel to which converted Messages should be sent. Required.

3

A reference to an AmqpHeaderMapper to use when receiving AMQP Messages. Optional. By default only standard AMQP properties (e.g. contentType) will be copied to and from Spring Integration MessageHeaders. Any user-defined headers within the AMQP MessageProperties will NOT be copied to or from an AMQP Message by the default DefaultAmqpHeaderMapper. Not allowed if request-header-names or reply-header-names is provided.

4

Comma-separated list of names of AMQP Headers to be mapped from the AMQP request into the MessageHeaders. This can only be provided if the header-mapper reference is not provided. The values in this list can also be simple patterns to be matched against the header names (e.g. "*" or "foo*, bar" or "*foo").

5

Comma-separated list of names of MessageHeaders to be mapped into the AMQP Message Properties of the AMQP reply message. All standard Headers (e.g., contentType) will be mapped to AMQP Message Properties while user-defined headers will be mapped to the headers property. This can only be provided if the header-mapper reference is not provided. The values in this list can also be simple patterns to be matched against the header names (e.g. "*" or "foo*, bar" or "*foo").

6

Message Channel where reply Messages will be expected. Optional.

7

Used to set the receiveTimeout on the underlying org.springframework.integration.core.MessagingTemplate for receiving messages from the reply channel. If not specified this property will default to "1000" (1 second). Only applies if the container thread hands off to another thread before the reply is sent.

8

A reference to a custom AmqpTemplate bean, giving you more control over the reply messages to send; you can also provide an alternative implementation to the RabbitTemplate.

9

The replyTo org.springframework.amqp.core.Address to be used when the requestMessage doesn’t have replyTo property. If this option isn’t specified, no amqp-template is provided, and no replyTo property exists in the request message, an IllegalStateException is thrown because the reply can’t be routed. If this option isn’t specified, and an external amqp-template is provided, no exception will be thrown. You must either specify this option, or configure a default exchange and routingKey on that template, if you anticipate cases when no replyTo property exists in the request message.

See the note in the section called “CompletableFuture” about configuring the listener-container attribute.

==== Configuring with Java Configuration

The following Spring Boot application provides an example of configuring the inbound gateway using Java configuration:

@SpringBootApplication
public class AmqpJavaApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(AmqpJavaApplication.class)
            .web(false)
            .run(args);
    }

    @Bean
    public MessageChannel amqpInputChannel() {
        return new DirectChannel();
    }

    @Bean
    public AmqpInboundGateway inbound(SimpleMessageListenerContainer listenerContainer,
            @Qualifier("amqpInputChannel") MessageChannel channel) {
        AmqpInboundGateway gateway = new AmqpInboundGateway(listenerContainer);
        gateway.setRequestChannel(channel);
        gateway.setDefaultReplyTo("bar");
        return gateway;
    }

    @Bean
    public SimpleMessageListenerContainer container(ConnectionFactory connectionFactory) {
        SimpleMessageListenerContainer container =
                        new SimpleMessageListenerContainer(connectionFactory);
        container.setQueueNames("foo");
        container.setConcurrentConsumers(2);
        // ...
        return container;
    }

    @Bean
    @ServiceActivator(inputChannel = "amqpInputChannel")
    public MessageHandler handler() {
        return new AbstractReplyProducingMessageHandler() {

            @Override
            protected Object handleRequestMessage(Message<?> requestMessage) {
                return "reply to " + requestMessage.getPayload();
            }

        };
    }

}

==== Configuring with the Java DSL

The following Spring Boot application provides an example of configuring the inbound gateway using the Java DSL:

@SpringBootApplication
public class AmqpJavaApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(AmqpJavaApplication.class)
            .web(false)
            .run(args);
    }

    @Bean // return the upper cased payload
    public IntegrationFlow amqpInboundGateway(ConnectionFactory connectionFactory) {
        return IntegrationFlows.from(Amqp.inboundGateway(connectionFactory, "foo"))
                .transform(String.class, String::toUpperCase)
                .get();
    }

}

=== Inbound Endpoint Acknowledge Mode

By default, the inbound endpoints use the AUTO acknowledge mode, which means the container automatically acks the message when the downstream integration flow completes (or a message is handed off to another thread using a QueueChannel or ExecutorChannel). Setting the mode to NONE configures the consumer such that acks are not used at all (the broker automatically acks the message as soon as it is sent). Setting the mode to MANUAL allows user code to ack the message at some other point during processing. To support this, with this mode, the endpoints provide the Channel and deliveryTag in the amqp_channel and amqp_deliveryTag headers respectively.

You can perform any valid rabbit command on the Channel but, generally, only basicAck and basicNack (or basicReject) would be used. To avoid interfering with the operation of the container, you should not retain a reference to the channel; use it only in the context of the current message.

[Note]Note

Since the Channel is a reference to a "live" object, it cannot be serialized and will be lost if a message is persisted.

This is an example of how you might use MANUAL acknowledgement:

@ServiceActivator(inputChannel = "foo", outputChannel = "bar")
public Object handle(@Payload String payload, @Header(AmqpHeaders.CHANNEL) Channel channel,
        @Header(AmqpHeaders.DELIVERY_TAG) Long deliveryTag) throws Exception {

    // Do some processing

    if (allOK) {
        channel.basicAck(deliveryTag, false);

        // perhaps do some more processing

    }
    else {
        channel.basicNack(deliveryTag, false, true);
    }
    return someResultForDownStreamProcessing;
}
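
The sample above assumes the container has been switched to MANUAL acknowledgements. A minimal sketch of that container configuration follows; the queue name is only illustrative:

@Bean
public SimpleMessageListenerContainer manualAckContainer(ConnectionFactory connectionFactory) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
    container.setQueueNames("foo"); // illustrative queue name
    // With MANUAL mode, downstream code must ack/nack using the amqp_channel
    // and amqp_deliveryTag headers, as shown in the handler above.
    container.setAcknowledgeMode(AcknowledgeMode.MANUAL);
    return container;
}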

=== Outbound Channel Adapter

A configuration sample for an AMQP Outbound Channel Adapter is shown below.

<int-amqp:outbound-channel-adapter id="outboundAmqp" 1
                               channel="outboundChannel" 2
                               amqp-template="myAmqpTemplate" 3
                               exchange-name="" 4
                               exchange-name-expression="" 5
                               order="1" 6
                               routing-key="" 7
                               routing-key-expression="" 8
                               default-delivery-mode="" 9
                               confirm-correlation-expression="" 10
                               confirm-ack-channel="" 11
                               confirm-nack-channel="" 12
                               return-channel="" 13
                               error-message-strategy="" 14
                               header-mapper="" 15
                               mapped-request-headers="" (16)
                               lazy-connect="true" /> (17)

1

Unique ID for this adapter. Optional.

2

Message Channel to which Messages should be sent in order to have them converted and published to an AMQP Exchange. Required.

3

Bean reference to the configured AMQP Template. Optional (defaults to "amqpTemplate").

4

The name of the AMQP Exchange to which Messages should be sent. If not provided, Messages will be sent to the default, no-name Exchange. Mutually exclusive with exchange-name-expression. Optional.

5

A SpEL expression that is evaluated to determine the name of the AMQP Exchange to which Messages should be sent, with the message as the root object. If not provided, Messages will be sent to the default, no-name Exchange. Mutually exclusive with exchange-name. Optional.

6

The order for this consumer when multiple consumers are registered thereby enabling load-balancing and/or failover. Optional (Defaults to Ordered.LOWEST_PRECEDENCE [=Integer.MAX_VALUE]).

7

The fixed routing-key to use when sending Messages. By default, this will be an empty String. Mutually exclusive with routing-key-expression. Optional.

8

A SpEL expression that is evaluated to determine the routing-key to use when sending Messages, with the message as the root object (e.g. payload.key). By default, this will be an empty String. Mutually exclusive with routing-key. Optional.

9

The default delivery mode for messages; PERSISTENT or NON_PERSISTENT. Overridden if the header-mapper sets the delivery mode. The DefaultHeaderMapper sets the value if the Spring Integration message header amqp_deliveryMode is present. If this attribute is not supplied and the header mapper doesn’t set it, the default depends on the underlying spring-amqp MessagePropertiesConverter used by the RabbitTemplate. If that is not customized at all, the default is PERSISTENT. Optional.

10

An expression defining correlation data. When provided, this configures the underlying amqp template to receive publisher confirms. Requires a dedicated RabbitTemplate and a CachingConnectionFactory with the publisherConfirms property set to true. When a publisher confirm is received, and correlation data is supplied, it is written to either the confirm-ack-channel, or the confirm-nack-channel, depending on the confirmation type. The payload of the confirm is the correlation data as defined by this expression and the message will have a header amqp_publishConfirm set to true (ack) or false (nack). Examples: "headers['myCorrelationData']", "payload". Starting with version 4.1 the amqp_publishConfirmNackCause message header has been added. It contains the cause of a nack for publisher confirms. Starting with version 4.2, if the expression resolves to a Message<?> instance (such as "#this"), the message emitted on the ack/nack channel is based on that message, with the additional header(s) added. Previously, a new message was created with the correlation data as its payload, regardless of type. Optional.

11

The channel to which positive (ack) publisher confirms are sent; payload is the correlation data defined by the confirm-correlation-expression. If the expression is #root or #this, the message is built from the original message, with the amqp_publishConfirm header set to true. Optional, default=nullChannel.

12

The channel to which negative (nack) publisher confirms are sent; payload is the correlation data defined by the confirm-correlation-expression (if there is no ErrorMessageStrategy configured). If the expression is #root or #this, the message is built from the original message, with the amqp_publishConfirm header set to false. When there is an ErrorMessageStrategy, the message will be an ErrorMessage with a NackedAmqpMessageException payload. Optional, default=nullChannel.

13

The channel to which returned messages are sent. When provided, the underlying amqp template is configured to return undeliverable messages to the adapter. When there is no ErrorMessageStrategy configured, the message will be constructed from the data received from amqp, with the following additional headers: amqp_returnReplyCode, amqp_returnReplyText, amqp_returnExchange, amqp_returnRoutingKey. When there is an ErrorMessageStrategy, the message will be an ErrorMessage with a ReturnedAmqpMessageException payload. Optional.

14

A reference to an ErrorMessageStrategy implementation used to build ErrorMessage instances when sending returned or negatively acknowledged messages.

15

A reference to an AmqpHeaderMapper to use when sending AMQP Messages. By default, only standard AMQP properties (e.g. contentType) will be mapped from the Spring Integration MessageHeaders; any user-defined headers will NOT be copied to the AMQP Message by the default DefaultAmqpHeaderMapper. Not allowed if mapped-request-headers is provided. Optional.

(16)

Comma-separated list of names of AMQP Headers to be mapped from the MessageHeaders to the AMQP Message. Not allowed if the header-mapper reference is provided. The values in this list can also be simple patterns to be matched against the header names (e.g. "*" or "foo*, bar" or "*foo").

(17)

When set to false, the endpoint will attempt to connect to the broker during application context initialization. This allows "fail fast" detection of bad configuration, but will also cause initialization to fail if the broker is down. When true (default), the connection is established (if it doesn’t already exist because some other component established it) when the first message is sent.

[Important]return-channel

Using a return-channel requires a RabbitTemplate with the mandatory property set to true, and a CachingConnectionFactory with the publisherReturns property set to true. When using multiple outbound endpoints with returns, a separate RabbitTemplate is needed for each endpoint.
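
As a minimal sketch (the bean names and broker host are illustrative, not part of the reference configuration), the supporting connection factory and template for confirms and returns might be declared as follows:

@Bean
public CachingConnectionFactory connectionFactory() {
    CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost"); // illustrative host
    connectionFactory.setPublisherConfirms(true); // needed for confirm-correlation-expression
    connectionFactory.setPublisherReturns(true);  // needed for return-channel
    return connectionFactory;
}

@Bean
public RabbitTemplate dedicatedTemplate(ConnectionFactory connectionFactory) {
    RabbitTemplate template = new RabbitTemplate(connectionFactory);
    template.setMandatory(true); // undeliverable messages are returned to the adapter
    return template;
}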

==== Configuring with Java Configuration

The following Spring Boot application provides an example of configuring the outbound adapter using Java configuration:

@SpringBootApplication
@IntegrationComponentScan
public class AmqpJavaApplication {

    public static void main(String[] args) {
         ConfigurableApplicationContext context =
              new SpringApplicationBuilder(AmqpJavaApplication.class)
                       .web(false)
                       .run(args);
         MyGateway gateway = context.getBean(MyGateway.class);
         gateway.sendToRabbit("foo");
    }

    @Bean
    @ServiceActivator(inputChannel = "amqpOutboundChannel")
    public AmqpOutboundEndpoint amqpOutbound(AmqpTemplate amqpTemplate) {
        AmqpOutboundEndpoint outbound = new AmqpOutboundEndpoint(amqpTemplate);
        outbound.setRoutingKey("foo"); // default exchange - route to queue 'foo'
        return outbound;
    }

    @Bean
    public MessageChannel amqpOutboundChannel() {
        return new DirectChannel();
    }

    @MessagingGateway(defaultRequestChannel = "amqpOutboundChannel")
    public interface MyGateway {

        void sendToRabbit(String data);

    }

}

==== Configuring with the Java DSL

The following Spring Boot application provides an example of configuring the outbound adapter using the Java DSL:

@SpringBootApplication
@IntegrationComponentScan
public class AmqpJavaApplication {

    public static void main(String[] args) {
         ConfigurableApplicationContext context =
                  new SpringApplicationBuilder(AmqpJavaApplication.class)
                          .web(false)
                          .run(args);
         MyGateway gateway = context.getBean(MyGateway.class);
         gateway.sendToRabbit("foo");
    }

    @Bean
    public IntegrationFlow amqpOutbound(AmqpTemplate amqpTemplate) {
        return IntegrationFlows.from(amqpOutboundChannel())
                .handle(Amqp.outboundAdapter(amqpTemplate)
                            .routingKey("foo")) // default exchange - route to queue 'foo'
                .get();
    }

    @Bean
    public MessageChannel amqpOutboundChannel() {
        return new DirectChannel();
    }

    @MessagingGateway(defaultRequestChannel = "amqpOutboundChannel")
    public interface MyGateway {

        void sendToRabbit(String data);

    }
}

=== Outbound Gateway

Configuration for an AMQP Outbound Gateway is shown below.

<int-amqp:outbound-gateway id="outboundGateway" 1
                           request-channel="myRequestChannel" 2
                           amqp-template="" 3
                           exchange-name="" 4
                           exchange-name-expression="" 5
                           order="1" 6
                           reply-channel="" 7
                           reply-timeout="" 8
                           requires-reply="" 9
                           routing-key="" 10
                           routing-key-expression="" 11
                           default-delivery-mode="" 12
                           confirm-correlation-expression="" 13
                           confirm-ack-channel="" 14
                           confirm-nack-channel="" 15
                           return-channel="" (16)
                           error-message-strategy="" (17)
                           lazy-connect="true" /> (18)

1

Unique ID for this adapter. Optional.

2

Message Channel to which Messages should be sent in order to have them converted and published to an AMQP Exchange. Required.

3

Bean reference to the configured AMQP Template. Optional (defaults to "amqpTemplate").

4

The name of the AMQP Exchange to which Messages should be sent. If not provided, Messages will be sent to the default, no-name Exchange. Mutually exclusive with exchange-name-expression. Optional.

5

A SpEL expression that is evaluated to determine the name of the AMQP Exchange to which Messages should be sent, with the message as the root object. If not provided, Messages will be sent to the default, no-name Exchange. Mutually exclusive with exchange-name. Optional.

6

The order for this consumer when multiple consumers are registered thereby enabling load-balancing and/or failover. Optional (Defaults to Ordered.LOWEST_PRECEDENCE [=Integer.MAX_VALUE]).

7

Message Channel to which replies should be sent after being received from an AMQP Queue and converted. Optional.

8

The time the gateway will wait when sending the reply message to the reply-channel. This only applies if the reply-channel can block - such as a QueueChannel with a capacity limit that is currently full. Default: infinity.

9

When true, the gateway will throw an exception if no reply message is received within the AmqpTemplate's replyTimeout property. Default: true.

10

The routing-key to use when sending Messages. By default, this will be an empty String. Mutually exclusive with routing-key-expression. Optional.

11

A SpEL expression that is evaluated to determine the routing-key to use when sending Messages, with the message as the root object (e.g. payload.key). By default, this will be an empty String. Mutually exclusive with routing-key. Optional.

12

The default delivery mode for messages; PERSISTENT or NON_PERSISTENT. Overridden if the header-mapper sets the delivery mode. The DefaultHeaderMapper sets the value if the Spring Integration message header amqp_deliveryMode is present. If this attribute is not supplied and the header mapper doesn’t set it, the default depends on the underlying spring-amqp MessagePropertiesConverter used by the RabbitTemplate. If that is not customized at all, the default is PERSISTENT. Optional.

13

Since version 4.2. An expression defining correlation data. When provided, this configures the underlying amqp template to receive publisher confirms. Requires a dedicated RabbitTemplate and a CachingConnectionFactory with the publisherConfirms property set to true. When a publisher confirm is received, and correlation data is supplied, it is written to either the confirm-ack-channel, or the confirm-nack-channel, depending on the confirmation type. The payload of the confirm is the correlation data as defined by this expression and the message will have a header amqp_publishConfirm set to true (ack) or false (nack). For nacks, an additional header amqp_publishConfirmNackCause is provided. Examples: "headers[myCorrelationData]", "payload". If the expression resolves to a Message<?> instance (such as "#this"), the message emitted on the ack/nack channel is based on that message, with the additional header(s) added. Previously, a new message was created with the correlation data as its payload, regardless of type. Optional.

14

The channel to which positive (ack) publisher confirms are sent; payload is the correlation data defined by the confirm-correlation-expression. If the expression is #root or #this, the message is built from the original message, with the amqp_publishConfirm header set to true. Optional, default=nullChannel.

15

The channel to which negative (nack) publisher confirms are sent; payload is the correlation data defined by the confirm-correlation-expression (if there is no ErrorMessageStrategy configured). If the expression is #root or #this, the message is built from the original message, with the amqp_publishConfirm header set to false. When there is an ErrorMessageStrategy, the message will be an ErrorMessage with a NackedAmqpMessageException payload. Optional, default=nullChannel.

(16)

The channel to which returned messages are sent. When provided, the underlying amqp template is configured to return undeliverable messages to the adapter. When there is no ErrorMessageStrategy configured, the message will be constructed from the data received from amqp, with the following additional headers: amqp_returnReplyCode, amqp_returnReplyText, amqp_returnExchange, amqp_returnRoutingKey. When there is an ErrorMessageStrategy, the message will be an ErrorMessage with a ReturnedAmqpMessageException payload. Optional.

(17)

A reference to an ErrorMessageStrategy implementation used to build ErrorMessage instances when sending returned or negatively acknowledged messages.

(18)

When set to false, the endpoint will attempt to connect to the broker during application context initialization. This allows "fail fast" detection of bad configuration, by logging an error message if the broker is down. When true (default), the connection is established (if it doesn’t already exist because some other component established it) when the first message is sent.

[Important]return-channel

Using a return-channel requires a RabbitTemplate with the mandatory property set to true, and a CachingConnectionFactory with the publisherReturns property set to true. When using multiple outbound endpoints with returns, a separate RabbitTemplate is needed for each endpoint.

[Important]Important

The underlying AmqpTemplate has a default replyTimeout of 5 seconds. If you require a longer timeout, it must be configured on the template.
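
For example, a longer timeout might be configured on the template along the following lines (the 30-second value is only illustrative):

@Bean
public RabbitTemplate amqpTemplate(ConnectionFactory connectionFactory) {
    RabbitTemplate template = new RabbitTemplate(connectionFactory);
    template.setReplyTimeout(30000); // milliseconds; the default is 5000
    return template;
}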

==== Configuring with Java Configuration

The following Spring Boot application provides an example of configuring the outbound gateway using Java configuration:

@SpringBootApplication
@IntegrationComponentScan
public class AmqpJavaApplication {

    public static void main(String[] args) {
         ConfigurableApplicationContext context =
                new SpringApplicationBuilder(AmqpJavaApplication.class)
                       .web(false)
                       .run(args);
         MyGateway gateway = context.getBean(MyGateway.class);
         String reply = gateway.sendToRabbit("foo");
         System.out.println(reply);
    }

    @Bean
    @ServiceActivator(inputChannel = "amqpOutboundChannel")
    public AmqpOutboundEndpoint amqpOutbound(AmqpTemplate amqpTemplate) {
        AmqpOutboundEndpoint outbound = new AmqpOutboundEndpoint(amqpTemplate);
        outbound.setExpectReply(true);
        outbound.setRoutingKey("foo"); // default exchange - route to queue 'foo'
        return outbound;
    }

    @Bean
    public MessageChannel amqpOutboundChannel() {
        return new DirectChannel();
    }

    @MessagingGateway(defaultRequestChannel = "amqpOutboundChannel")
    public interface MyGateway {

        String sendToRabbit(String data);

    }

}

Notice that the only difference between the outbound adapter and outbound gateway configuration is the setting of the expectReply property.

==== Configuring with the Java DSL

The following Spring Boot application provides an example of configuring the outbound gateway using the Java DSL:

@SpringBootApplication
@IntegrationComponentScan
public class AmqpJavaApplication {

    public static void main(String[] args) {
         ConfigurableApplicationContext context =
                 new SpringApplicationBuilder(AmqpJavaApplication.class)
                      .web(false)
                      .run(args);
         RabbitTemplate template = context.getBean(RabbitTemplate.class);
         MyGateway gateway = context.getBean(MyGateway.class);
         String reply = gateway.sendToRabbit("foo");
         System.out.println(reply);
    }

    @Bean
    public IntegrationFlow amqpOutbound(AmqpTemplate amqpTemplate) {
        return IntegrationFlows.from(amqpOutboundChannel())
                .handle(Amqp.outboundGateway(amqpTemplate)
                        .routingKey("foo")) // default exchange - route to queue 'foo'
                .get();
    }

    @Bean
    public MessageChannel amqpOutboundChannel() {
        return new DirectChannel();
    }

    @MessagingGateway(defaultRequestChannel = "amqpOutboundChannel")
    public interface MyGateway {

        String sendToRabbit(String data);

    }
}

=== Async Outbound Gateway

The gateway discussed in the previous section is synchronous, in that the sending thread is suspended until a reply is received (or a timeout occurs). Spring Integration version 4.3 added this asynchronous gateway, which uses the AsyncRabbitTemplate from Spring AMQP. When a message is sent, the thread returns immediately after the send completes, and the reply is sent on the template’s listener container thread when it is received. This can be useful when the gateway is invoked on a poller thread; the thread is released and is available for other tasks in the framework.

Configuration for an AMQP Async Outbound Gateway is shown below.

<int-amqp:async-outbound-gateway id="asyncOutboundGateway" 1
                           request-channel="myRequestChannel" 2
                           async-template="" 3
                           exchange-name="" 4
                           exchange-name-expression="" 5
                           order="1" 6
                           reply-channel="" 7
                           reply-timeout="" 8
                           requires-reply="" 9
                           routing-key="" 10
                           routing-key-expression="" 11
                           default-delivery-mode="" 12
                           confirm-correlation-expression="" 13
                           confirm-ack-channel="" 14
                           confirm-nack-channel="" 15
                           return-channel="" (16)
                           lazy-connect="true" /> (17)

1

Unique ID for this adapter. Optional.

2

Message Channel to which Messages should be sent in order to have them converted and published to an AMQP Exchange. Required.

3

Bean reference to the configured AsyncRabbitTemplate. Optional (defaults to "asyncRabbitTemplate").

4

The name of the AMQP Exchange to which Messages should be sent. If not provided, Messages will be sent to the default, no-name Exchange. Mutually exclusive with exchange-name-expression. Optional.

5

A SpEL expression that is evaluated to determine the name of the AMQP Exchange to which Messages should be sent, with the message as the root object. If not provided, Messages will be sent to the default, no-name Exchange. Mutually exclusive with exchange-name. Optional.

6

The order for this consumer when multiple consumers are registered thereby enabling load-balancing and/or failover. Optional (Defaults to Ordered.LOWEST_PRECEDENCE [=Integer.MAX_VALUE]).

7

Message Channel to which replies should be sent after being received from an AMQP Queue and converted. Optional.

8

The time the gateway will wait when sending the reply message to the reply-channel. This only applies if the reply-channel can block - such as a QueueChannel with a capacity limit that is currently full. Default: infinity.

9

When true, the gateway will send an error message to the inbound message’s errorChannel header, if present, or otherwise to the default errorChannel (if available), when no reply message is received within the AsyncRabbitTemplate's receiveTimeout property. Default: true.

10

The routing-key to use when sending Messages. By default, this will be an empty String. Mutually exclusive with routing-key-expression. Optional.

11

A SpEL expression that is evaluated to determine the routing-key to use when sending Messages, with the message as the root object (e.g. payload.key). By default, this will be an empty String. Mutually exclusive with routing-key. Optional.

12

The default delivery mode for messages; PERSISTENT or NON_PERSISTENT. Overridden if the header-mapper sets the delivery mode. The DefaultHeaderMapper sets the value if the Spring Integration message header amqp_deliveryMode is present. If this attribute is not supplied and the header mapper doesn’t set it, the default depends on the underlying spring-amqp MessagePropertiesConverter used by the RabbitTemplate. If that is not customized at all, the default is PERSISTENT. Optional.

13

An expression defining correlation data. When provided, this configures the underlying amqp template to receive publisher confirms. Requires a dedicated RabbitTemplate and a CachingConnectionFactory with the publisherConfirms property set to true. When a publisher confirm is received, and correlation data is supplied, it is written to either the confirm-ack-channel, or the confirm-nack-channel, depending on the confirmation type. The payload of the confirm is the correlation data as defined by this expression and the message will have a header amqp_publishConfirm set to true (ack) or false (nack). For nacks, an additional header amqp_publishConfirmNackCause is provided. Examples: "headers[myCorrelationData]", "payload". If the expression resolves to a Message<?> instance (such as "#this"), the message emitted on the ack/nack channel is based on that message, with the additional header(s) added. Optional.

14

The channel to which positive (ack) publisher confirms are sent; payload is the correlation data defined by the confirm-correlation-expression. Requires the underlying AsyncRabbitTemplate to have its enableConfirms property set to true. Optional, default=nullChannel.

15

Since version 4.2. The channel to which negative (nack) publisher confirms are sent; payload is the correlation data defined by the confirm-correlation-expression. Requires the underlying AsyncRabbitTemplate to have its enableConfirms property set to true. Optional, default=nullChannel.

(16)

The channel to which returned messages are sent. When provided, the underlying amqp template is configured to return undeliverable messages to the gateway. The message will be constructed from the data received from amqp, with the following additional headers: amqp_returnReplyCode, amqp_returnReplyText, amqp_returnExchange, amqp_returnRoutingKey. Requires the underlying AsyncRabbitTemplate to have its mandatory property set to true. Optional.

(17)

When set to false, the endpoint will attempt to connect to the broker during application context initialization. This allows "fail fast" detection of bad configuration, by logging an error message if the broker is down. When true (default), the connection is established (if it doesn’t already exist because some other component established it) when the first message is sent.

Also see the section called “Asynchronous Service Activator” for more information.

[Important]RabbitTemplate

When using confirms and returns, it is recommended that the RabbitTemplate wired into the AsyncRabbitTemplate be dedicated. Otherwise, unexpected side-effects may be encountered.

==== Configuring with Java Configuration

The following configuration provides an example of configuring the outbound gateway using Java configuration:

@Configuration
public class AmqpAsyncConfig {

    @Bean
    @ServiceActivator(inputChannel = "amqpOutboundChannel")
    public AsyncAmqpOutboundGateway amqpOutbound(AsyncRabbitTemplate asyncTemplate) {
        AsyncAmqpOutboundGateway outbound = new AsyncAmqpOutboundGateway(asyncTemplate);
        outbound.setRoutingKey("foo"); // default exchange - route to queue 'foo'
        return outbound;
    }

    @Bean
    public AsyncRabbitTemplate asyncTemplate(RabbitTemplate rabbitTemplate,
                     SimpleMessageListenerContainer replyContainer) {
        return new AsyncRabbitTemplate(rabbitTemplate, replyContainer);
    }

    @Bean
    public SimpleMessageListenerContainer replyContainer(ConnectionFactory connectionFactory) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
        container.setQueueNames("asyncRQ1");
        return container;
    }

    @Bean
    public MessageChannel amqpOutboundChannel() {
        return new DirectChannel();
    }

}

==== Configuring with the Java DSL

The following Spring Boot application provides an example of configuring the async outbound gateway using the Java DSL:

// To be supplied when the DSL Amqp factory class adds support for the async gateway.

=== Outbound Message Conversion

Spring AMQP 1.4 introduced the ContentTypeDelegatingMessageConverter where the actual converter is selected based on the incoming content type message property. This could be used by inbound endpoints.

Spring Integration version 4.3 now allows the ContentTypeDelegatingMessageConverter to be used on outbound endpoints as well, with the contentType header specifying which converter will be used.

The following configures a ContentTypeDelegatingMessageConverter with the default converter being the SimpleMessageConverter (which handles java serialization and plain text), together with a JSON converter:

<amqp:outbound-channel-adapter id="withContentTypeConverter" channel="ctRequestChannel"
                               exchange-name="someExchange"
                               routing-key="someKey"
                               amqp-template="amqpTemplateContentTypeConverter" />

<int:channel id="ctRequestChannel"/>

<rabbit:template id="amqpTemplateContentTypeConverter"
        connection-factory="connectionFactory" message-converter="ctConverter" />

<bean id="ctConverter"
        class="o.s.amqp.support.converter.ContentTypeDelegatingMessageConverter">
    <property name="delegates">
        <map>
            <entry key="application/json">
                <bean class="o.s.amqp.support.converter.Jackson2JsonMessageConverter" />
            </entry>
        </map>
    </property>
</bean>

Sending a message to ctRequestChannel with the contentType header set to application/json will cause the JSON converter to be selected.

This applies to both the outbound channel adapter and gateway.
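
A roughly equivalent Java configuration is sketched below; the bean names mirror the XML above, and the addDelegate() call is the assumed programmatic counterpart of the delegates map:

@Bean
public ContentTypeDelegatingMessageConverter ctConverter() {
    // the default delegate is the SimpleMessageConverter (java serialization and plain text)
    ContentTypeDelegatingMessageConverter converter = new ContentTypeDelegatingMessageConverter();
    converter.addDelegate("application/json", new Jackson2JsonMessageConverter());
    return converter;
}

@Bean
public RabbitTemplate amqpTemplateContentTypeConverter(ConnectionFactory connectionFactory) {
    RabbitTemplate template = new RabbitTemplate(connectionFactory);
    template.setMessageConverter(ctConverter());
    return template;
}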

=== Outbound User Id

Spring AMQP version 1.6 introduced a mechanism to allow the specification of a default user id for outbound messages. It has always been possible to set the AmqpHeaders.USER_ID header, which now takes precedence over the default. This might be useful to message recipients; for inbound messages, if the message publisher sets the property, it is made available in the AmqpHeaders.RECEIVED_USER_ID header. Note that RabbitMQ validates that the user id is the actual user id for the connection or one for which impersonation is allowed.

To configure a default user id for outbound messages, configure it on a RabbitTemplate and configure the outbound adapter or gateway to use that template. Similarly, to set the user id property on replies, inject an appropriately configured template into the inbound gateway. See the Spring AMQP documentation for more information.
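
For example, a downstream consumer could read the validated publisher id from that header; this sketch assumes an input channel named amqpInputChannel:

@ServiceActivator(inputChannel = "amqpInputChannel")
public void audit(@Payload String payload,
        @Header(name = AmqpHeaders.RECEIVED_USER_ID, required = false) String userId) {
    // the publisher's validated user id (if set) is mapped to this header on inbound messages
    System.out.println("Received '" + payload + "' published by user: " + userId);
}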

=== Delayed Message Exchange

Spring AMQP supports the RabbitMQ Delayed Message Exchange Plugin. For inbound messages, the x-delay header is mapped to the AmqpHeaders.RECEIVED_DELAY header. Setting the AmqpHeaders.DELAY header will cause the corresponding x-delay header to be set in outbound messages. You can also specify the delay and delayExpression properties on outbound endpoints (delay-expression when using XML configuration). These properties take precedence over the AmqpHeaders.DELAY header.
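
For example, a per-message delay could be requested by setting the header before the message reaches the outbound endpoint; this sketch assumes the message is then sent to the channel feeding an outbound adapter or gateway:

Message<String> delayed = MessageBuilder.withPayload("foo")
        .setHeader(AmqpHeaders.DELAY, 5000) // results in an x-delay of 5 seconds on the outbound message
        .build();
// send 'delayed' to the channel feeding the outbound adapter/gateway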

=== AMQP Backed Message Channels

There are two Message Channel implementations available. One is point-to-point, and the other is publish/subscribe. Both of these channels provide a wide range of configuration attributes for the underlying AmqpTemplate and SimpleMessageListenerContainer as you have seen on the Channel Adapters and Gateways. However, the examples we’ll show here are going to have minimal configuration. Explore the XML schema to view the available attributes.

A point-to-point channel would look like this:

<int-amqp:channel id="p2pChannel"/>

Under the covers a Queue named "si.p2pChannel" would be declared, and this channel will send to that Queue (technically by sending to the no-name Direct Exchange with a routing key that matches this Queue’s name). This channel will also register a consumer on that Queue. If you want the channel to be "pollable" instead of message-driven, then simply provide the "message-driven" flag with a value of false:

<int-amqp:channel id="p2pPollableChannel"  message-driven="false"/>

A publish/subscribe channel would look like this:

<int-amqp:publish-subscribe-channel id="pubSubChannel"/>

Under the covers a Fanout Exchange named "si.fanout.pubSubChannel" would be declared, and this channel will send to that Fanout Exchange. This channel will also declare a server-named exclusive, auto-delete, non-durable Queue and bind that to the Fanout Exchange while registering a consumer on that Queue to receive Messages. There is no "pollable" option for a publish-subscribe-channel; it must be message-driven.

Starting with version 4.1, AMQP-backed Message Channels support, in addition to channel-transacted, a template-channel-transacted attribute to separate the transactional configuration of the AbstractMessageListenerContainer from that of the RabbitTemplate. Note that channel-transacted was previously true by default; it now defaults to false, matching the standard default of the AbstractMessageListenerContainer.

Prior to version 4.3, AMQP-backed channels only supported messages with Serializable payloads and headers. The entire message was converted (serialized) and sent to RabbitMQ. Now, you can set the extract-payload attribute (or setExtractPayload() when using Java configuration) to true. When this flag is true, the message payload is converted and the headers are mapped, in a similar manner to when using channel adapters. This allows AMQP-backed channels to be used with non-serializable payloads (perhaps with another message converter such as the Jackson2JsonMessageConverter). The default mapped headers are discussed in the section called “AMQP Message Headers”. You can modify the mapping by providing custom mappers using the outbound-header-mapper and inbound-header-mapper attributes. You can now also specify a default-delivery-mode, used to set the delivery mode when there is no amqp_deliveryMode header. By default, Spring AMQP MessageProperties uses PERSISTENT delivery mode.

[Important]Important

Just as with other persistence-backed channels, AMQP-backed channels are intended to provide message persistence to avoid message loss. They are not intended to distribute work to other peer applications; for that purpose, use channel adapters instead.

==== Configuring with Java Configuration

The following provides an example of configuring the channels using Java configuration:

@Bean
public AmqpChannelFactoryBean pollable(ConnectionFactory connectionFactory) {
    AmqpChannelFactoryBean factoryBean = new AmqpChannelFactoryBean();
    factoryBean.setConnectionFactory(connectionFactory);
    factoryBean.setQueueName("foo");
    factoryBean.setPubSub(false);
    return factoryBean;
}

@Bean
public AmqpChannelFactoryBean messageDriven(ConnectionFactory connectionFactory) {
    AmqpChannelFactoryBean factoryBean = new AmqpChannelFactoryBean(true);
    factoryBean.setConnectionFactory(connectionFactory);
    factoryBean.setQueueName("bar");
    factoryBean.setPubSub(false);
    return factoryBean;
}

@Bean
public AmqpChannelFactoryBean pubSub(ConnectionFactory connectionFactory) {
    AmqpChannelFactoryBean factoryBean = new AmqpChannelFactoryBean(true);
    factoryBean.setConnectionFactory(connectionFactory);
    factoryBean.setQueueName("baz");
    factoryBean.setPubSub(true);
    return factoryBean;
}

==== Configuring with the Java DSL

The following provides an example of configuring the channels using the Java DSL:

@Bean
public IntegrationFlow pollableInFlow(ConnectionFactory connectionFactory) {
    return IntegrationFlows.from(...)
            ...
            .channel(Amqp.pollableChannel(connectionFactory)
                    .queueName("foo"))
            ...
            .get();
}

@Bean
public IntegrationFlow messageDrivenInFlow(ConnectionFactory connectionFactory) {
    return IntegrationFlows.from(...)
            ...
            .channel(Amqp.channel(connectionFactory)
                    .queueName("bar"))
            ...
            .get();
}

@Bean
public IntegrationFlow pubSubInFlow(ConnectionFactory connectionFactory) {
    return IntegrationFlows.from(...)
            ...
            .channel(Amqp.publishSubscribeChannel(connectionFactory)
                    .queueName("baz"))
            ...
            .get();
}

=== AMQP Message Headers

The Spring Integration AMQP Adapters will map all AMQP properties and headers automatically. (This is a change in 4.3 - previously, only standard headers were mapped). These properties will be copied by default to and from Spring Integration MessageHeaders using the DefaultAmqpHeaderMapper.

Of course, you can pass in your own implementation of AMQP specific header mappers, as the adapters have respective properties to support that.

Any user-defined headers within the AMQP MessageProperties WILL be copied to or from an AMQP Message, unless explicitly negated by the requestHeaderNames and/or replyHeaderNames properties of the DefaultAmqpHeaderMapper. For an outbound mapper, no x-* headers are mapped by default; see the caution below for the reason why.

To override the default, and revert to the pre-4.3 behavior, use STANDARD_REQUEST_HEADERS and STANDARD_REPLY_HEADERS in the properties.

[Tip]Tip

When mapping user-defined headers, the values can also contain simple wildcard patterns (e.g. "foo*" or "*foo") to be matched. * matches all headers.

Starting with version 4.1, the AbstractHeaderMapper (a DefaultAmqpHeaderMapper superclass) allows the NON_STANDARD_HEADERS token to be configured for the requestHeaderNames and/or replyHeaderNames properties (in addition to the existing STANDARD_REQUEST_HEADERS and STANDARD_REPLY_HEADERS) to map all user-defined headers.

Class org.springframework.amqp.support.AmqpHeaders identifies the default headers that will be used by the DefaultAmqpHeaderMapper:

  • amqp_appId
  • amqp_clusterId
  • amqp_contentEncoding
  • amqp_contentLength
  • content-type
  • amqp_correlationId
  • amqp_delay
  • amqp_deliveryMode
  • amqp_deliveryTag
  • amqp_expiration
  • amqp_messageCount
  • amqp_messageId
  • amqp_receivedDelay
  • amqp_receivedDeliveryMode
  • amqp_receivedExchange
  • amqp_receivedRoutingKey
  • amqp_redelivered
  • amqp_replyTo
  • amqp_timestamp
  • amqp_type
  • amqp_userId
  • amqp_publishConfirm
  • amqp_publishConfirmNackCause
  • amqp_returnReplyCode
  • amqp_returnReplyText
  • amqp_returnExchange
  • amqp_returnRoutingKey
[Caution]Caution

As mentioned above, using a header mapping pattern * is a common way to copy all headers. However, this can have some unexpected side-effects because certain RabbitMQ proprietary properties/headers will be copied as well. For example, when you use Federation, the received message may have a property named x-received-from which contains the node that sent the message. If you use the wildcard character * for the request and reply header mapping on the Inbound Gateway, this header will be copied as well, which may cause some issues with federation; this reply message may be federated back to the sending broker, which will think that a message is looping and is thus silently dropped. If you wish to use the convenience of wildcard header mapping, you may need to filter out some headers in the downstream flow. For example, to avoid copying the x-received-from header back to the reply you can use <int:header-filter ... header-names="x-received-from"> before sending the reply to the AMQP Inbound Gateway. Alternatively, you could explicitly list those properties that you actually want mapped instead of using wildcards. For these reasons, for inbound messages, the mapper by default does not map any x-* headers; it also does not map the deliveryMode to amqp_deliveryMode header, to avoid propagation of that header from an inbound message to an outbound message. Instead, this header is mapped to amqp_receivedDeliveryMode, which is not mapped on output.

Starting with version 4.3, patterns in the header mappings can be negated by preceding the pattern with !. Negated patterns get priority, so a list such as STANDARD_REQUEST_HEADERS,foo,ba*,!bar,!baz,qux,!foo will NOT map foo (nor bar nor baz); the standard headers plus bad and qux will be mapped (bad matches ba* and is not negated).

[Important]Important

If you have a user defined header that begins with ! that you do wish to map, you need to escape it with \ thus: STANDARD_REQUEST_HEADERS,\!myBangHeader and it WILL be mapped.
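
Such patterns might be applied to a custom mapper along the following lines. This is only a sketch: the inboundMapper() factory method is assumed to be available in your version (older versions expose a public constructor instead), and the pattern list mirrors the example above:

@Bean
public DefaultAmqpHeaderMapper customHeaderMapper() {
    DefaultAmqpHeaderMapper mapper = DefaultAmqpHeaderMapper.inboundMapper();
    // negated patterns take priority: foo, bar and baz are not mapped; bad and qux are
    mapper.setRequestHeaderNames("STANDARD_REQUEST_HEADERS", "foo", "ba*", "!bar", "!baz", "qux", "!foo");
    return mapper;
}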

=== AMQP Samples

To experiment with the AMQP adapters, check out the samples available in the Spring Integration Samples Git repository.

Currently, there is one sample available that demonstrates the basic functionality of the Spring Integration AMQP Adapter using an Outbound Channel Adapter and an Inbound Channel Adapter. The sample uses RabbitMQ (http://www.rabbitmq.com/) as the AMQP broker.

[Note]Note

In order to run the example you will need a running instance of RabbitMQ. A local installation with just the basic defaults will be sufficient. For detailed RabbitMQ installation procedures please visit: http://www.rabbitmq.com/install.html

Once the sample application is started, you enter some text at the command prompt and a message containing that text is dispatched to the AMQP queue. That message is then retrieved by Spring Integration and printed to the console.

The image below illustrates the basic set of Spring Integration components used in this sample.

(Figure: Spring Integration AMQP sample graph)

== Spring ApplicationEvent Support

Spring Integration provides support for inbound and outbound ApplicationEvents as defined by the underlying Spring Framework. For more information about Spring’s support for events and listeners, refer to the Spring Reference Manual.

=== Receiving Spring Application Events

To receive events and send them to a channel, simply define an instance of Spring Integration’s ApplicationEventListeningMessageProducer. This class is an implementation of Spring’s ApplicationListener interface. By default it will pass all received events as Spring Integration Messages. To limit based on the type of event, configure the list of event types that you want to receive with the eventTypes property. If a received event has a Message instance as its source, then that will be passed as-is. Otherwise, if a SpEL-based "payloadExpression" has been provided, that will be evaluated against the ApplicationEvent instance. If the event’s source is not a Message instance and no "payloadExpression" has been provided, then the ApplicationEvent itself will be passed as the payload.

Starting with version 4.2, the ApplicationEventListeningMessageProducer implements GenericApplicationListener and can be configured to accept not only ApplicationEvent types but also any type, to support the payload events introduced in Spring Framework 4.2. When the accepted event is an instance of PayloadApplicationEvent, its payload is used for the message to send.

For convenience, namespace support is provided to configure an ApplicationEventListeningMessageProducer via the inbound-channel-adapter element.

<int-event:inbound-channel-adapter channel="eventChannel"
                                   error-channel="eventErrorChannel"
                                   event-types="example.FooEvent, example.BarEvent, java.util.Date"/>

<int:publish-subscribe-channel id="eventChannel"/>

In the above example, all Application Context events that match one of the types specified by the event-types (optional) attribute will be delivered as Spring Integration Messages to the Message Channel named eventChannel. If a downstream component throws an exception, a MessagingException containing the failed message and exception will be sent to the channel named eventErrorChannel. If no "error-channel" is specified and the downstream channels are synchronous, the Exception will be propagated to the caller.
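
A Java configuration equivalent might be sketched as follows; FooEvent and BarEvent stand in for the hypothetical event classes from the XML example, and the channel bean is assumed to exist:

@Bean
public ApplicationEventListeningMessageProducer eventsAdapter(MessageChannel eventChannel) {
    ApplicationEventListeningMessageProducer producer = new ApplicationEventListeningMessageProducer();
    producer.setEventTypes(FooEvent.class, BarEvent.class, java.util.Date.class); // hypothetical event types
    producer.setOutputChannel(eventChannel);
    return producer;
}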

=== Sending Spring Application Events

To send Spring ApplicationEvents, create an instance of the ApplicationEventPublishingMessageHandler and register it within an endpoint. This implementation of the MessageHandler interface also implements Spring’s ApplicationEventPublisherAware interface and thus acts as a bridge between Spring Integration Messages and ApplicationEvents.

For convenience, namespace support is provided to configure an ApplicationEventPublishingMessageHandler via the outbound-channel-adapter element.

<int:channel id="eventChannel"/>

<int-event:outbound-channel-adapter channel="eventChannel"/>

If you are using a PollableChannel (e.g., Queue), you can also provide a poller as a sub-element of the outbound-channel-adapter element. You can also optionally provide a task-executor reference for that poller. The following example demonstrates both.

<int:channel id="eventChannel">
  <int:queue/>
</int:channel>

<int-event:outbound-channel-adapter channel="eventChannel">
  <int:poller max-messages-per-poll="1" task-executor="executor" fixed-rate="100"/>
</int-event:outbound-channel-adapter>

<task:executor id="executor" pool-size="5"/>

In the above example, all messages sent to the eventChannel channel will be published as ApplicationEvents to any relevant ApplicationListener instances that are registered within the same Spring ApplicationContext. If the payload of the Message is an ApplicationEvent, it will be passed as-is. Otherwise the Message itself will be wrapped in a MessagingEvent instance.

Starting with version 4.2, the ApplicationEventPublishingMessageHandler (<int-event:outbound-channel-adapter>) can be configured with the publish-payload boolean attribute to publish the payload as-is to the application context, instead of wrapping it in a MessagingEvent instance.
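
As a sketch in Java configuration (the channel name mirrors the XML above):

@Bean
@ServiceActivator(inputChannel = "eventChannel")
public ApplicationEventPublishingMessageHandler eventHandler() {
    ApplicationEventPublishingMessageHandler handler = new ApplicationEventPublishingMessageHandler();
    handler.setPublishPayload(true); // publish the payload as-is rather than wrapping it in a MessagingEvent
    return handler;
}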

== Feed Adapter

Spring Integration provides support for Web Syndication via Feed Adapters.

=== Introduction

Web syndication is a form of publishing material such as news stories, press releases, blog posts, and other items typically available on a website but also made available in a feed format such as RSS or ATOM.

Spring Integration provides support for Web Syndication via its feed adapter and offers convenient namespace-based configuration for it. To configure the feed namespace, include the following elements within the header of your XML configuration file:

xmlns:int-feed="http://www.springframework.org/schema/integration/feed"
xsi:schemaLocation="http://www.springframework.org/schema/integration/feed
	http://www.springframework.org/schema/integration/feed/spring-integration-feed.xsd"

=== Feed Inbound Channel Adapter

The only adapter that is really needed to provide support for retrieving feeds is an inbound channel adapter. This allows you to subscribe to a particular URL. Below is an example configuration:

<int-feed:inbound-channel-adapter id="feedAdapter"
		channel="feedChannel"
		url="http://feeds.bbci.co.uk/news/rss.xml">
	<int:poller fixed-rate="10000" max-messages-per-poll="100" />
</int-feed:inbound-channel-adapter>

In the above configuration, we are subscribing to a URL identified by the url attribute.

As news items are retrieved they will be converted to Messages and sent to a channel identified by the channel attribute. The payload of each message will be a com.sun.syndication.feed.synd.SyndEntry instance. That encapsulates various data about a news item (content, dates, authors, etc.).

You can also see that the Inbound Feed Channel Adapter is a Polling Consumer. That means you have to provide a poller configuration. However, one important thing you must understand with regard to a Feed is that its inner workings are slightly different from most other polling consumers. When an Inbound Feed adapter is started, it does the first poll and receives a com.sun.syndication.feed.synd.SyndFeed instance. That object contains multiple SyndEntry objects. Each entry is stored in the local entry queue and is released based on the value in the max-messages-per-poll attribute, such that each Message will contain a single entry. If, during retrieval of the entries from the entry queue, the queue has become empty, the adapter will attempt to update the Feed, thereby populating the queue with more entries (SyndEntry instances) if available. Otherwise the next attempt to poll for a feed will be determined by the trigger of the poller (e.g., every 10 seconds in the above configuration).

Duplicate Entries

Polling for a Feed might result in entries that have already been processed ("I already read that news item, why are you showing it to me again?"). Spring Integration provides a convenient mechanism to eliminate the need to worry about duplicate entries. Each feed entry will have a published date field. Every time a new Message is generated and sent, Spring Integration will store the value of the latest published date in an instance of the MetadataStore strategy (the section called “Metadata Store”).

[Note]Note

The key used to persist the latest published date is the value of the (required) id attribute of the Feed Inbound Channel Adapter component plus the feedUrl from the adapter’s configuration.

== File Support

=== Introduction

Spring Integration’s File support extends the Spring Integration Core with a dedicated vocabulary to deal with reading, writing, and transforming files. It provides a namespace that enables elements defining Channel Adapters dedicated to files and support for Transformers that can read file contents into strings or byte arrays.

This section will explain the workings of FileReadingMessageSource and FileWritingMessageHandler and how to configure them as beans. Also, the support for dealing with files through file-specific implementations of Transformer will be discussed. Finally, the file-specific namespace will be explained.

=== Reading Files

A FileReadingMessageSource can be used to consume files from the filesystem. This is an implementation of MessageSource that creates messages from a file system directory.

<bean id="pollableFileSource"
    class="org.springframework.integration.file.FileReadingMessageSource"
    p:directory="${input.directory}"/>

To prevent creating messages for certain files, you may supply a FileListFilter. By default, the following two filters are used:

  • IgnoreHiddenFileListFilter
  • AcceptOnceFileListFilter

The IgnoreHiddenFileListFilter ensures that hidden files are not being processed. Please keep in mind that the exact definition of hidden is system-dependent. For example, on UNIX-based systems, a file beginning with a period character is considered to be hidden. Microsoft Windows, on the other hand, has a dedicated file attribute to indicate hidden files.

[Important]Important

The IgnoreHiddenFileListFilter was introduced with version 4.2. In prior versions hidden files were included. With the default configuration, the IgnoreHiddenFileListFilter will be triggered first, then the AcceptOnceFileListFilter.

The AcceptOnceFileListFilter ensures files are picked up only once from the directory.

[Note]Note

The AcceptOnceFileListFilter stores its state in memory. If you wish the state to survive a system restart, consider using the FileSystemPersistentAcceptOnceFileListFilter instead. This filter stores the accepted file names in a MetadataStore implementation (the section called “Metadata Store”). This filter matches on the filename and modified time.

Since version 4.0, this filter requires a ConcurrentMetadataStore. When used with a shared data store (such as Redis with the RedisMetadataStore) this allows filter keys to be shared across multiple application instances, or when a network file share is being used by multiple servers.

Since version 4.1.5, this filter has a new property flushOnUpdate which will cause it to flush the metadata store on every update (if the store implements Flushable).

<bean id="pollableFileSource"
    class="org.springframework.integration.file.FileReadingMessageSource"
    p:directory="${input.directory}"
    p:filter-ref="customFilterBean"/>

A common problem with reading files is that a file may be detected before it is ready. The default AcceptOnceFileListFilter does not prevent this. In most cases, this can be prevented if the file-writing process renames each file as soon as it is ready for reading. A filename-pattern or filename-regex filter that accepts only files that are ready (e.g. based on a known suffix), composed with the default AcceptOnceFileListFilter allows for this. The CompositeFileListFilter enables the composition.

<bean id="pollableFileSource"
    class="org.springframework.integration.file.FileReadingMessageSource"
    p:directory="${input.directory}"
    p:filter-ref="compositeFilter"/>

<bean id="compositeFilter"
    class="org.springframework.integration.file.filters.CompositeFileListFilter">
    <constructor-arg>
        <list>
            <bean class="o.s.i.file.filters.AcceptOnceFileListFilter"/>
            <bean class="o.s.i.file.filters.RegexPatternFileListFilter">
                <constructor-arg value="^test.*$"/>
            </bean>
        </list>
    </constructor-arg>
</bean>

If it is not possible to create the file with a temporary name and rename to the final name, another alternative is provided. The LastModifiedFileListFilter was added in version 4.2. This filter can be configured with an age property and only files older than this will be passed by the filter. The age defaults to 60 seconds, but you should choose an age that is large enough to avoid picking up a file early, due to, say, network glitches.

<bean id="filter" class="org.springframework.integration.file.filters.LastModifiedFileListFilter">
    <property name="age" value="120" />
</bean>

Starting with version 4.3.7, a ChainFileListFilter (an extension of CompositeFileListFilter) has been introduced to allow scenarios where subsequent filters should only see the result of the previous filter. (With the CompositeFileListFilter, all filters see all the files, but only files that pass all filters are passed by the CompositeFileListFilter.) An example of where the new behavior is required is a combination of LastModifiedFileListFilter and AcceptOnceFileListFilter, when we do not wish to accept the file until some amount of time has elapsed. With the CompositeFileListFilter, since the AcceptOnceFileListFilter sees all the files on the first pass, it won’t pass them later when the other filter does. The CompositeFileListFilter approach is useful when a pattern filter is combined with a custom filter that looks for a secondary file indicating that the transfer is complete. The pattern filter might only pass the primary file (e.g. foo.txt), but the "done" filter needs to see whether, say, foo.done is present.

Say we have files a.txt, a.done, and b.txt.

The pattern filter only passes a.txt and b.txt; the "done" filter will see all three files but only passes a.txt. The final result of the composite filter is that only a.txt is released.

[Note]Note

With the ChainFileListFilter, if any filter in the chain returns an empty list, the remaining filters are not invoked.
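
As a minimal Java sketch (the input directory and the 120-second age are assumptions), a ChainFileListFilter combining the two filters discussed above could be configured like this:

@Bean
public FileReadingMessageSource chainFilteredFileSource() {
    ChainFileListFilter<File> chain = new ChainFileListFilter<>();
    LastModifiedFileListFilter lastModified = new LastModifiedFileListFilter();
    lastModified.setAge(120); // seconds; only files older than this reach the next filter
    chain.addFilter(lastModified);
    chain.addFilter(new AcceptOnceFileListFilter<>());
    FileReadingMessageSource source = new FileReadingMessageSource();
    source.setDirectory(new File("/tmp/input")); // assumed input directory
    source.setFilter(chain);
    return source;
}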

Directory scanning and polling

The FileReadingMessageSource doesn’t produce messages for files from the directory immediately. It uses an internal queue for the eligible files returned by the scanner. The scanEachPoll option is used to ensure that the internal queue is refreshed with the latest input directory content on each poll. By default (scanEachPoll = false), the FileReadingMessageSource empties its queue before scanning the directory again. This default behavior is particularly useful to reduce scans of large numbers of files in a directory. However, in cases where custom ordering is required, it is important to consider the effects of setting this flag to true; the order in which files are processed may not be as expected. By default, files in the queue are processed in their natural (path) order. New files added by a scan, even when the queue already has files, are inserted in the appropriate position to maintain that natural order. To customize the order, the FileReadingMessageSource can accept a Comparator<File> as a constructor argument. It is used by the internal PriorityBlockingQueue to reorder its content according to the business requirements. Therefore, to process files in a specific order, you should provide a comparator to the FileReadingMessageSource rather than ordering the list produced by a custom DirectoryScanner, as shown in the sketch below.
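
The following is a minimal sketch (the directory is an assumption) of supplying the comparator through the constructor so that the oldest files are emitted first:

@Bean
public FileReadingMessageSource orderedFileSource() {
    // the comparator orders the internal queue; here, oldest files first
    FileReadingMessageSource source = new FileReadingMessageSource(
            Comparator.comparingLong(File::lastModified));
    source.setDirectory(new File("/tmp/input")); // assumed input directory
    return source;
}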

==== Namespace Support

The configuration for file reading can be simplified using the file specific namespace. To do this use the following template.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns:int="http://www.springframework.org/schema/integration"
  xmlns:int-file="http://www.springframework.org/schema/integration/file"
  xsi:schemaLocation="http://www.springframework.org/schema/beans
    http://www.springframework.org/schema/beans/spring-beans.xsd
    http://www.springframework.org/schema/integration
    http://www.springframework.org/schema/integration/spring-integration.xsd
    http://www.springframework.org/schema/integration/file
    http://www.springframework.org/schema/integration/file/spring-integration-file.xsd">
</beans>

Within this namespace you can reduce the FileReadingMessageSource and wrap it in an inbound Channel Adapter like this:

<int-file:inbound-channel-adapter id="filesIn1"
    directory="file:${input.directory}" prevent-duplicates="true" ignore-hidden="true"/>

<int-file:inbound-channel-adapter id="filesIn2"
    directory="file:${input.directory}"
    filter="customFilterBean" />

<int-file:inbound-channel-adapter id="filesIn3"
    directory="file:${input.directory}"
    filename-pattern="test*" />

<int-file:inbound-channel-adapter id="filesIn4"
    directory="file:${input.directory}"
    filename-regex="test[0-9]+\.txt" />

The first channel adapter example relies on the default FileListFilter implementations:

  • IgnoreHiddenFileListFilter (Do not process hidden files)
  • AcceptOnceFileListFilter (Prevents duplication)

Therefore, you can also omit the prevent-duplicates and ignore-hidden attributes, as they are true by default.

[Important]Important

The ignore-hidden attribute was introduced with Spring Integration 4.2. In prior versions hidden files were included.

The second channel adapter example uses a custom filter, the third uses the filename-pattern attribute to add an AntPathMatcher based filter, and the fourth uses the filename-regex attribute to add a regular expression Pattern based filter to the FileReadingMessageSource. The filename-pattern and filename-regex attributes are each mutually exclusive with the regular filter reference attribute. However, you can use the filter attribute to reference an instance of CompositeFileListFilter that combines any number of filters, including one or more pattern based filters, to fit your particular needs.

When multiple processes read from the same directory, it can be desirable to lock files to prevent them from being picked up concurrently. To do this, you can use a FileLocker. A java.nio based implementation is available out of the box, but you can also implement your own locking scheme. The nio locker can be injected as follows:

<int-file:inbound-channel-adapter id="filesIn"
    directory="file:${input.directory}" prevent-duplicates="true">
    <int-file:nio-locker/>
</int-file:inbound-channel-adapter>

A custom locker can be configured like this:

<int-file:inbound-channel-adapter id="filesIn"
    directory="file:${input.directory}" prevent-duplicates="true">
    <int-file:locker ref="customLocker"/>
</int-file:inbound-channel-adapter>
[Note]Note

When a file inbound adapter is configured with a locker, it takes responsibility for acquiring a lock before a file is allowed to be received. It does not assume responsibility for unlocking the file. If you have processed the file but keep the lock, you have a memory leak. If this is a problem in your case, you should call FileLocker.unlock(File file) yourself at the appropriate time.
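
For example, a downstream component could release the lock once it is done with the file. The following is a minimal sketch (the injected locker bean, channel name and processing logic are assumptions):

@Autowired
private FileLocker locker; // e.g. the NioFileLocker configured on the adapter

@ServiceActivator(inputChannel = "filesIn")
public void handle(File file) {
    try {
        // ... process the file ...
    }
    finally {
        locker.unlock(file); // release the lock acquired by the adapter
    }
}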

When filtering and locking files is not enough, it might be necessary to control the way files are listed entirely. To implement this type of requirement, you can use an implementation of DirectoryScanner. This scanner lets you determine exactly which files are listed on each poll. This is also the interface Spring Integration uses internally to wire FileListFilter instances and the FileLocker to the FileReadingMessageSource. A custom DirectoryScanner can be injected into the <int-file:inbound-channel-adapter/> on the scanner attribute.

<int-file:inbound-channel-adapter id="filesIn" directory="file:${input.directory}"
     scanner="customDirectoryScanner"/>

This gives you full freedom to choose the ordering, listing and locking strategies.

It is also important to understand that filters (including patterns, regex, prevent-duplicates, etc.) and lockers are actually used by the scanner. Any of these attributes set on the adapter are subsequently injected into the internal scanner. For the case of an external scanner, all filter and locker attributes are prohibited on the FileReadingMessageSource; they must be specified (if required) on that custom DirectoryScanner. In other words, if you inject a scanner into the FileReadingMessageSource, you should supply the filter and locker on that scanner, not on the FileReadingMessageSource.

[Note]Note

The DefaultDirectoryScanner uses an IgnoreHiddenFileListFilter and an AcceptOnceFileListFilter by default. To prevent their use, configure your own filter (e.g. an AcceptAllFileListFilter) or even set it to null.
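
A minimal sketch of such a scanner bean, where an AcceptAllFileListFilter replaces the defaults mentioned above:

@Bean
public DirectoryScanner customDirectoryScanner() {
    DefaultDirectoryScanner scanner = new DefaultDirectoryScanner();
    scanner.setFilter(new AcceptAllFileListFilter<File>()); // disable the default hidden/accept-once filters
    return scanner;
}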

==== WatchServiceDirectoryScanner

This scanner was added in version 4.2. It replaces the existing RecursiveLeafOnlyDirectoryScanner which is inefficient for large directory trees. The FileReadingMessageSource.WatchServiceDirectoryScanner requires Java 7 or above.

This scanner relies on file system events when new files are added to the directory. During initialization, the directory is registered to generate events; the initial file list is also built. While walking the directory tree, any subdirectories encountered are also registered to generate events. On the first poll, the initial file list from walking the directory is returned. On subsequent polls, files from new creation events are returned. If a new subdirectory is added, its creation event is used to walk the new subtree to find existing files, as well as registering any new subdirectories found.

[Note]Note

If the WatchKey's internal events queue is not drained by the program as quickly as directory modification events occur, the queue size can be exceeded and a StandardWatchEventKinds.OVERFLOW event is emitted to indicate that some file system events may have been lost. In this case, the root directory is re-scanned completely. To avoid duplicates, consider using an appropriate FileListFilter, such as the AcceptOnceFileListFilter, and/or removing files when processing is completed.

Since version 4.3, the top level WatchServiceDirectoryScanner has been deprecated in favor of the FileReadingMessageSource's internal WatchService logic. This can now be enabled via the use-watch-service option, which is mutually exclusive with the scanner option. An internal FileReadingMessageSource.WatchServiceDirectoryScanner instance is populated for the provided directory.

In addition, the WatchService polling logic can now track StandardWatchEventKinds.ENTRY_MODIFY and StandardWatchEventKinds.ENTRY_DELETE events, too.

If you need to track not only new files but also modifications, the ENTRY_MODIFY logic should be implemented appropriately in the FileListFilter. Otherwise, the files from those events are treated the same way.

The ENTRY_DELETE events have an effect for ResettableFileListFilter implementations; the deleted files are provided to the remove() operation. This means that (when this event is enabled) filters such as the AcceptOnceFileListFilter have the file removed, so, if a file with the same name appears again, it passes the filter and is sent as a message.

For this purpose, the watch-events attribute (FileReadingMessageSource.setWatchEvents(WatchEventType... watchEvents)) has been introduced (WatchEventType is a public inner enum in FileReadingMessageSource). With this option, we can implement scenarios where one downstream flow handles new files and another handles modified files. We can achieve that with separate <int-file:inbound-channel-adapter> definitions for the same directory:

<int-file:inbound-channel-adapter id="newFiles"
     directory="${input.directory}"
     use-watch-service="true"/>

<int-file:inbound-channel-adapter id="modifiedFiles"
     directory="${input.directory}"
     use-watch-service="true"
     filter="acceptAllFilter"
     watch-events="MODIFY"/> <!-- CREATE by default -->
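
The equivalent Java configuration might look like the following sketch (the channel name, poller interval and directory are assumptions):

@Bean
@InboundChannelAdapter(value = "modifiedFiles", poller = @Poller(fixedDelay = "1000"))
public MessageSource<File> modifiedFileSource() {
    FileReadingMessageSource source = new FileReadingMessageSource();
    source.setDirectory(new File("/tmp/input"));        // assumed input directory
    source.setUseWatchService(true);                    // use the WatchService instead of scanning
    source.setWatchEvents(FileReadingMessageSource.WatchEventType.MODIFY);
    source.setFilter(new AcceptAllFileListFilter<File>());
    return source;
}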

==== Limiting Memory Consumption

A HeadDirectoryScanner can be used to limit the number of files retained in memory. This can be useful when scanning large directories. With XML configuration, this is enabled using the queue-size property on the inbound channel adapter.

Prior to version 4.2, this setting was incompatible with the use of any other filters. Any other filters (including prevent-duplicates="true") overwrote the filter used to limit the size.

[Note]Note

The use of a HeadDirectoryScanner is incompatible with an AcceptOnceFileListFilter. Since all filters are consulted during the poll decision, the AcceptOnceFileListFilter does not know that other filters might be temporarily filtering files. Even if files that were previously filtered by the HeadDirectoryScanner.HeadFilter are now available, the AcceptOnceFileListFilter will filter them.

Generally, instead of using an AcceptOnceFileListFilter in this case, one would simply remove the processed files so that the previously filtered files will be available on a future poll.
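
In Java configuration, a sketch of the equivalent setup (the limit of 10 files and the directory are assumptions):

@Bean
public FileReadingMessageSource limitedFileSource() {
    FileReadingMessageSource source = new FileReadingMessageSource();
    source.setDirectory(new File("/tmp/input"));
    source.setScanner(new HeadDirectoryScanner(10)); // keep at most 10 files in memory per scan
    return source;
}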

==== Configuring with Java Configuration

The following Spring Boot application provides an example of configuring the inbound adapter using Java configuration:

@SpringBootApplication
public class FileReadingJavaApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(FileReadingJavaApplication.class)
            .web(false)
            .run(args);
    }

    @Bean
    public MessageChannel fileInputChannel() {
        return new DirectChannel();
    }

    @Bean
    @InboundChannelAdapter(value = "fileInputChannel", poller = @Poller(fixedDelay = "1000"))
    public MessageSource<File> fileReadingMessageSource() {
         FileReadingMessageSource source = new FileReadingMessageSource();
         source.setDirectory(new File(INBOUND_PATH));
         source.setFilter(new SimplePatternFileListFilter("*.txt"));
         return source;
    }

    @Bean
    @Transformer(inputChannel = "fileInputChannel", outputChannel = "processFileChannel")
    public FileToStringTransformer fileToStringTransformer() {
        return new FileToStringTransformer();
    }

}

==== Configuring with the Java DSL

The following Spring Boot application provides an example of configuring the inbound adapter using the Java DSL:

@SpringBootApplication
public class FileReadingJavaApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(FileReadingJavaApplication.class)
            .web(false)
            .run(args);
    }

    @Bean
    public IntegrationFlow fileReadingFlow() {
         return IntegrationFlows
                  .from(s -> s.file(new File(INBOUND_PATH))
                              .patternFilter("*.txt"),
                          e -> e.poller(Pollers.fixedDelay(1000)))
                  .transform(Transformers.fileToString())
                  .channel("processFileChannel")
                  .get();
        }

}

==== 'Tail’ing Files

Another popular use case is to get lines from the end (or tail) of a file, capturing new lines as they are added. Two implementations are provided. The first, OSDelegatingFileTailingMessageProducer, uses the native tail command (on operating systems that have one); this is likely the most efficient implementation on those platforms. For operating systems that do not have a tail command, the second implementation, ApacheCommonsFileTailingMessageProducer, uses the Apache commons-io Tailer class.

In both cases, file system events, such as a file being unavailable, are published as ApplicationEvent instances using the normal Spring event publishing mechanism. Examples of such events are:

[message=tail: cannot open `/tmp/foo' for reading: No such file or directory, file=/tmp/foo]

[message=tail: `/tmp/foo' has become accessible, file=/tmp/foo]

[message=tail: `/tmp/foo' has become inaccessible: No such file or directory, file=/tmp/foo]

[message=tail: `/tmp/foo' has appeared; following end of new file, file=/tmp/foo]

This sequence of events might occur, for example, when a file is rotated.

[Note]Note

Not all platforms supporting a tail command provide these status messages.

Example configurations:

<int-file:tail-inbound-channel-adapter id="native"
	channel="input"
	task-executor="exec"
	file="/tmp/foo"/>

This creates a native adapter with default -F -n 0 options (follow the file name from the current end).

<int-file:tail-inbound-channel-adapter id="native"
	channel="input"
	native-options="-F -n +0"
	task-executor="exec"
	file-delay="10000"
	file="/tmp/foo"/>

This creates a native adapter with -F -n +0 options (follow the file name, emitting all existing lines). If the tail command fails (on some platforms, a missing file causes the tail to fail, even with -F specified), the command will be retried every 10 seconds.

<int-file:tail-inbound-channel-adapter id="native"
	channel="input"
	enable-status-reader="false"
	task-executor="exec"
	file="/tmp/foo"/>

By default, the native adapter captures lines from standard output and sends them as messages; lines from standard error are used to raise events. Starting with version 4.3.6, you can discard the standard error events by setting enable-status-reader to false.

<int-file:tail-inbound-channel-adapter id="apache"
	channel="input"
	task-executor="exec"
	file="/tmp/bar"
	delay="2000"
	end="false"
	reopen="true"
	file-delay="10000"/>

This creates an Apache commons-io Tailer adapter that examines the file for new lines every 2 seconds, and checks for existence of a missing file every 10 seconds. The file will be tailed from the beginning (end="false") instead of the end (which is the default). The file will be reopened for each chunk (the default is to keep the file open).

[Important]Important

Specifying the delay, end or reopen attributes forces the use of the Apache commons-io adapter, and the native-options attribute is not allowed.

=== Writing files

To write messages to the file system you can use a FileWritingMessageHandler. This class can deal with the following payload types:

  • File
  • String
  • byte array
  • InputStream (since version 4.2)

You can configure the encoding and the charset that will be used in case of a String payload.

To make things easier, you can configure the FileWritingMessageHandler as part of an Outbound Channel Adapter or Outbound Gateway using the provided XML namespace support.

Starting with version 4.3, you can specify the buffer size to use when writing files.

==== Generating File Names

In its simplest form, the FileWritingMessageHandler only requires a destination directory for writing the files. The name of the file to be written is determined by the handler’s FileNameGenerator. The default implementation looks for a Message header whose key matches the constant defined as FileHeaders.FILENAME.

Alternatively, you can specify an expression to be evaluated against the Message in order to generate a file name, e.g. headers['myCustomHeader'] + '.foo'. The expression must evaluate to a String. For convenience, the DefaultFileNameGenerator also provides the setHeaderName method, allowing you to explicitly specify the Message header whose value shall be used as the filename.

Once set up, the DefaultFileNameGenerator employs the following resolution steps to determine the filename for a given Message payload:

  1. Evaluate the expression against the Message and, if the result is a non-empty String, use it as the filename.
  2. Otherwise, if the payload is a java.io.File, use the file’s filename.
  3. Otherwise, use the Message ID appended with .msg as the filename.
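
A minimal Java sketch of a generator using such an expression (the header name is an assumption):

@Bean
public DefaultFileNameGenerator fileNameGenerator() {
    DefaultFileNameGenerator generator = new DefaultFileNameGenerator();
    // alternatively: generator.setHeaderName("myCustomHeader");
    generator.setExpression("headers['myCustomHeader'] + '.foo'");
    return generator;
}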

When using the XML namespace support, both the File Outbound Channel Adapter and the File Outbound Gateway support the following two mutually exclusive configuration attributes:

  • filename-generator (a reference to a FileNameGenerator implementation)
  • filename-generator-expression (an expression evaluating to a String)

While writing files, a temporary file suffix will be used (default: .writing). It is appended to the filename while the file is being written. To customize the suffix, you can set the temporary-file-suffix attribute on both the File Outbound Channel Adapter and the File Outbound Gateway.

[Note]Note

When using the APPEND file mode, the temporary-file-suffix attribute is ignored, since the data is appended to the file directly.

Starting with version 4.2.5, the generated file name (the result of the filename-generator/filename-generator-expression evaluation) can represent a sub-path together with the target file name. It is used as the second constructor argument for File(File parent, String child), as before, but previously the directories for that sub-path were not created (mkdirs()); only the file name was assumed. This approach is useful when you need to restore the file system tree to match the source directory, for example when unzipping an archive and saving all the files in the target directory while preserving the original structure.

==== Specifying the Output Directory

Both the File Outbound Channel Adapter and the File Outbound Gateway provide two configuration attributes for specifying the output directory:

  • directory
  • directory-expression
[Note]Note

The directory-expression attribute is available since Spring Integration 2.2.

Using the directory attribute

When using the directory attribute, the output directory is set to a fixed value that is established when the FileWritingMessageHandler is initialized. If you don’t specify this attribute, you must use the directory-expression attribute.

Using the directory-expression attribute

If you want to have full SpEL support you would choose the directory-expression attribute. This attribute accepts a SpEL expression that is evaluated for each message being processed. Thus, you have full access to a Message’s payload and its headers to dynamically specify the output file directory.

The SpEL expression must resolve to either a String or to java.io.File. Furthermore the resulting String or File must point to a directory. If you don’t specify the directory-expression attribute, then you must set the directory attribute.

Using the auto-create-directory attribute

If the destination directory does not yet exist, by default the destination directory and any non-existing parent directories are created automatically. You can set the auto-create-directory attribute to false to prevent that. This attribute applies to both the directory and the directory-expression attributes.

[Note]Note

When using the directory attribute and auto-create-directory is false, the following change was made starting with Spring Integration 2.2:

Instead of checking for the existence of the destination directory at initialization time of the adapter, this check is now performed for each message being processed.

Furthermore, if auto-create-directory is true and the directory was deleted between the processing of messages, the directory will be re-created for each message being processed.

==== Dealing with Existing Destination Files

When writing files and the destination file already exists, the default behavior is to overwrite that target file. This behavior, though, can be changed by setting the mode attribute on the respective File Outbound components. The following options exist:

  • REPLACE (Default)
  • APPEND
  • APPEND_NO_FLUSH
  • FAIL
  • IGNORE
[Note]Note

The mode attribute and the options APPEND, FAIL and IGNORE, are available since Spring Integration 2.2.

REPLACE

If the target file already exists, it will be overwritten. If the mode attribute is not specified, then this is the default behavior when writing files.

APPEND

This mode allows you to append Message content to the existing file instead of creating a new file each time. Note that this mode is mutually exclusive with the temporary-file-suffix attribute because, when appending content to the existing file, the adapter no longer uses a temporary file. The file is closed after each message.

APPEND_NO_FLUSH

This has the same semantics as APPEND, but the data is not flushed and the file is not closed after each message. This can provide a significant performance improvement, at the risk of data loss in the case of a failure. See the section called "Flushing Files When using APPEND_NO_FLUSH" for more information.

FAIL

If the target file exists, a MessageHandlingException is thrown.

IGNORE

If the target file exists, the message payload is silently ignored.

[Note]Note

When using a temporary file suffix (default: .writing), the IGNORE mode applies if either the final file name or the temporary file name exists.

==== Flushing Files When using APPEND_NO_FLUSH

The APPEND_NO_FLUSH mode was added in version 4.3. This can improve performance because the file is not closed after each message. However, this can cause data loss in the event of a failure.

Several flushing strategies, to mitigate this data loss, are provided:

  • flushInterval - if a file is not written to for this period of time, it is automatically flushed. This is approximate and may be up to 1.33x this time.
  • Send a message to the message handler’s trigger method containing a regular expression. Files with absolute path names matching the pattern will be flushed.
  • Provide the handler with a custom MessageFlushPredicate implementation to modify the action taken when a message is sent to the trigger method.
  • Invoke one of the handler’s flushIfNeeded methods passing in a custom FileWritingMessageHandler.FlushPredicate or FileWritingMessageHandler.MessageFlushPredicate implementation.

The predicates are called for each open file. See the java docs for these interfaces for more information.

When using flushInterval, the interval starts at the last write; the file is flushed only if it is idle for the interval. Starting with version 4.3.7, an additional property, flushWhenIdle, can be set to false, meaning that the interval starts with the first write to a previously flushed (or new) file.
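
As a hedged sketch, an APPEND_NO_FLUSH handler with a flush interval could be configured like this (the channel name, output directory and 30-second interval are assumptions):

@Bean
@ServiceActivator(inputChannel = "appendChannel")
public FileWritingMessageHandler appendingFileHandler() {
    FileWritingMessageHandler handler = new FileWritingMessageHandler(new File("/tmp/out"));
    handler.setFileExistsMode(FileExistsMode.APPEND_NO_FLUSH);
    handler.setFlushInterval(30000); // flush files that have been idle for 30 seconds
    return handler;
}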

==== File Timestamps

By default, the destination file's lastModified timestamp is the time the file was created (except that an in-place rename retains the current timestamp). Starting with version 4.3, you can configure preserve-timestamp (or setPreserveTimestamp(true) when using Java configuration). For File payloads, this transfers the timestamp from the inbound file to the outbound file (regardless of whether a copy was required). For other payloads, if the FileHeaders.SET_MODIFIED header (file_setModified) is present, it is used to set the destination file's lastModified timestamp, as long as the header value is a Number.

==== File Outbound Channel Adapter

<int-file:outbound-channel-adapter id="filesOut" directory="${input.directory.property}"/>

The namespace-based configuration also supports a delete-source-files attribute. If set to true, it triggers deletion of the original source files after writing to the destination. The default value for that flag is false.

<int-file:outbound-channel-adapter id="filesOut"
    directory="${output.directory}"
    delete-source-files="true"/>
[Note]Note

The delete-source-files attribute will only have an effect if the inbound Message has a File payload or if the FileHeaders.ORIGINAL_FILE header value contains either the source File instance or a String representing the original file path.

Starting with version 4.2, the FileWritingMessageHandler supports an append-new-line option. If set to true, a new line is appended to the file after a message is written. The default attribute value is false.

<int-file:outbound-channel-adapter id="newlineAdapter"
	append-new-line="true"
    directory="${output.directory}"/>

==== Outbound Gateway

In cases where you want to continue processing messages based on the written file, you can use the outbound-gateway instead. It plays a role very similar to that of the outbound-channel-adapter. However, after writing the file, it also sends it to the reply channel as the payload of a Message.

<int-file:outbound-gateway id="mover" request-channel="moveInput"
    reply-channel="output"
    directory="${output.directory}"
    mode="REPLACE" delete-source-files="true"/>

As mentioned earlier, you can also specify the mode attribute, which defines the behavior for situations where the destination file already exists. See the section called "Dealing with Existing Destination Files" for further details. Generally, when using the File Outbound Gateway, the result file is returned as the Message payload on the reply channel.

This also applies when specifying the IGNORE mode. In that case the pre-existing destination file is returned. If the payload of the request message was a file, you still have access to that original file through the Message Header FileHeaders.ORIGINAL_FILE.

[Note]Note

The outbound-gateway works well in cases where you want to first move a file and then send it through a processing pipeline. In such cases, you may connect the file namespace’s inbound-channel-adapter element to the outbound-gateway and then connect that gateway’s reply-channel to the beginning of the pipeline.

If you have more elaborate requirements or need to support additional payload types as input to be converted to file content you could extend the FileWritingMessageHandler, but a much better option is to rely on a Transformer.

==== Configuring with Java Configuration

The following Spring Boot application provides an example of configuring the inbound adapter using Java configuration:

@SpringBootApplication
@IntegrationComponentScan
public class FileWritingJavaApplication {

    public static void main(String[] args) {
        ConfigurableApplicationContext context =
                      new SpringApplicationBuilder(FileWritingJavaApplication.class)
                              .web(false)
                              .run(args);
             MyGateway gateway = context.getBean(MyGateway.class);
             gateway.writeToFile("foo.txt", new File(tmpDir.getRoot(), "fileWritingFlow"), "foo");
    }

    @Bean
    @ServiceActivator(inputChannel = "writeToFileChannel")
    public MessageHandler fileWritingMessageHandler() {
         Expression directoryExpression = new SpelExpressionParser().parseExpression("headers.directory");
         FileWritingMessageHandler handler = new FileWritingMessageHandler(directoryExpression);
         handler.setFileExistsMode(FileExistsMode.APPEND);
         return handler;
    }

    @MessagingGateway(defaultRequestChannel = "writeToFileChannel")
    public interface MyGateway {

        void writeToFile(@Header(FileHeaders.FILENAME) String fileName,
                       @Header("directory") File directory, String data);

    }
}

==== Configuring with the Java DSL

The following Spring Boot application provides an example of configuring the inbound adapter using the Java DSL:

@SpringBootApplication
public class FileWritingJavaApplication {

    public static void main(String[] args) {
        ConfigurableApplicationContext context =
                 new SpringApplicationBuilder(FileWritingJavaApplication.class)
                         .web(false)
                         .run(args);
        MessageChannel fileWritingInput = context.getBean("fileWritingInput", MessageChannel.class);
        fileWritingInput.send(new GenericMessage<>("foo"));
    }

    @Bean
    public IntegrationFlow fileWritingFlow() {
        return IntegrationFlows.from("fileWritingInput")
                .enrichHeaders(h -> h.header(FileHeaders.FILENAME, "foo.txt")
                        .header("directory", new File(tmpDir.getRoot(), "fileWritingFlow")))
                .handleWithAdapter(a -> a.fileGateway(m -> m.getHeaders().get("directory")))
                .channel(MessageChannels.queue("fileWritingResultChannel"))
                .get();
    }

}

=== File Transformers

To transform data read from the file system to objects, and the other way around, you need to do some work. In contrast to FileReadingMessageSource and, to a lesser extent, FileWritingMessageHandler, it is very likely that you will need your own mechanism to get the job done. For this, you can implement the Transformer interface or extend the AbstractFilePayloadTransformer for inbound messages. Some obvious implementations have been provided.

FileToByteArrayTransformer transforms Files into byte[] using Spring’s FileCopyUtils. It is often better to use a sequence of transformers than to put all transformations in a single class. In that case the File to byte[] conversion might be a logical first step.

FileToStringTransformer will convert Files to Strings as the name suggests. If nothing else, this can be useful for debugging (consider using with a Wire Tap).
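
If the provided transformers are not sufficient, a custom transformer can extend AbstractFilePayloadTransformer. The following is a minimal sketch (returning the file length is purely illustrative):

public class FileSizeTransformer extends AbstractFilePayloadTransformer<Long> {

    @Override
    protected Long transformFile(File file) throws Exception {
        // replace with your own conversion from File to the desired payload type
        return file.length();
    }

}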

To configure File specific transformers you can use the appropriate elements from the file namespace.

<int-file:file-to-bytes-transformer  input-channel="input" output-channel="output"
    delete-files="true"/>

<int-file:file-to-string-transformer input-channel="input" output-channel="output"
    delete-files="true" charset="UTF-8"/>

The delete-files option signals to the transformer that it should delete the inbound File after the transformation is complete. This is in no way a replacement for using the AcceptOnceFileListFilter when the FileReadingMessageSource is being used in a multi-threaded environment (e.g. Spring Integration in general).

=== File Splitter

The FileSplitter was added in version 4.1.2 and namespace support was added in version 4.2. The FileSplitter splits text files into individual lines, based on BufferedReader.readLine(). By default, the splitter uses an Iterator to emit lines one-at-a-time as they are read from the file. Setting the iterator property to false causes it to read all the lines into memory before emitting them as messages. One use case for this might be if you want to detect I/O errors on the file before sending any messages containing lines. However, it is only practical for relatively short files.

Inbound payloads can be File, String (a File path), InputStream, or Reader. Other payload types will be emitted unchanged.

<int-file:splitter id="splitter" 1
    iterator="" 2
    markers="" 3
    markers-json="" 4
    apply-sequence="" 5
    requires-reply="" 6
    charset="" 7
    input-channel="" 8
    output-channel="" 9
    send-timeout="" 10
    auto-startup="" 11
    order="" 12
    phase="" /> 13

1

The bean name of the splitter.

2

Set to true to use an iterator (default); false to load the file into memory before sending lines.

3

Set to true to emit start/end of file marker messages before and after the file data. Markers are messages with FileSplitter.FileMarker payloads (with START and END values in the mark property). Markers might be used when sequentially processing files in a downstream flow where some lines are filtered. They enable the downstream processing to know when a file has been completely processed. In addition, a header file_marker containing START or END is added to these messages. The END marker includes a line count. If the file is empty, only START and END markers are emitted, with 0 as the lineCount. Default: false. When true, apply-sequence is false by default. Also see markers-json.

4

When markers is true, set this to true and the FileMarker objects will be converted to a JSON String. Requires a supported JSON processor library on the classpath (Jackson, Boon).

5

Set to false to disable the inclusion of sequenceSize and sequenceNumber headers in messages. Default: true, unless markers is true. When true and markers is true, the markers are included in the sequencing. When true and iterator is true, the sequenceSize header is set to 0 because the size is unknown.

6

Set to true to cause a RequiresReplyException to be thrown if there are no lines in the file. Default: false.

7

Set the charset name to be used when reading the text data into String payloads. Default: platform charset.

8

Set the input channel used to send messages to the splitter.

9

Set the output channel to which messages will be sent.

10

Set the send timeout - only applies if the output-channel can block - such as a full QueueChannel.

11

Set to false to disable automatically starting the splitter when the context is refreshed. Default: true.

12

Set the order of this endpoint if the input-channel is a <publish-subscribe-channel/>.

13

Set the startup phase for the splitter (used when auto-startup is true).

Java Configuration

@Splitter(inputChannel="toSplitter")
@Bean
public MessageHandler fileSplitter() {
    FileSplitter splitter = new FileSplitter(true, true);
    splitter.setApplySequence(true);
    splitter.setOutputChannel(outputChannel);
    return splitter;
}

The FileSplitter will also split any text-based InputStream into lines. When used in conjunction with an FTP or SFTP streaming inbound channel adapter, or an FTP or SFTP outbound gateway using the stream option to retrieve a file, starting with version 4.3, the splitter automatically closes the session supporting the stream when the file is completely consumed. See the FTP and SFTP streaming inbound channel adapter sections, as well as the FTP and SFTP outbound gateway sections, for more information about these facilities.

When using Java configuration, an additional constructor is available:

public FileSplitter(boolean iterator, boolean markers, boolean markersJson)

When markersJson is true, the markers will be represented as a JSON string, as long as a suitable JSON processor library, such as Jackson or Boon, is on the classpath.

== FTP/FTPS Adapters

Spring Integration provides support for file transfer operations via FTP and FTPS.

=== Introduction

The File Transfer Protocol (FTP) is a simple network protocol which allows you to transfer files between two computers on the Internet.

There are two actors when it comes to FTP communication: client and server. To transfer files with FTP/FTPS, you use a client which initiates a connection to a remote computer that is running an FTP server. After the connection is established, the client can choose to send and/or receive copies of files.

Spring Integration supports sending and receiving files over FTP/FTPS by providing three client side endpoints: Inbound Channel Adapter, Outbound Channel Adapter, and Outbound Gateway. It also provides convenient namespace-based configuration options for defining these client components.

To use the FTP namespace, add the following to the header of your XML file:

xmlns:int-ftp="http://www.springframework.org/schema/integration/ftp"
xsi:schemaLocation="http://www.springframework.org/schema/integration/ftp
    http://www.springframework.org/schema/integration/ftp/spring-integration-ftp.xsd"

=== FTP Session Factory

[Important]Important

Starting with version 3.0, sessions are no longer cached by default. See the section called "FTP Session Caching".

Before configuring FTP adapters, you must configure an FTP Session Factory. You can configure the FTP Session Factory with a regular bean definition where the implementation class is org.springframework.integration.ftp.session.DefaultFtpSessionFactory. Below is a basic configuration:

<bean id="ftpClientFactory"
    class="org.springframework.integration.ftp.session.DefaultFtpSessionFactory">
    <property name="host" value="localhost"/>
    <property name="port" value="22"/>
    <property name="username" value="kermit"/>
    <property name="password" value="frog"/>
    <property name="clientMode" value="0"/>
    <property name="fileType" value="2"/>
    <property name="bufferSize" value="100000"/>
</bean>

For FTPS connections, all you need to do is use org.springframework.integration.ftp.session.DefaultFtpsSessionFactory instead. Below is a complete configuration sample:

<bean id="ftpClientFactory"
    class="org.springframework.integration.ftp.client.DefaultFtpsClientFactory">
    <property name="host" value="localhost"/>
    <property name="port" value="22"/>
    <property name="username" value="oleg"/>
    <property name="password" value="password"/>
    <property name="clientMode" value="1"/>
    <property name="fileType" value="2"/>
    <property name="useClientMode" value="true"/>
    <property name="cipherSuites" value="a,b.c"/>
    <property name="keyManager" ref="keyManager"/>
    <property name="protocol" value="SSL"/>
    <property name="trustManager" ref="trustManager"/>
    <property name="prot" value="P"/>
    <property name="needClientAuth" value="true"/>
    <property name="authValue" value="oleg"/>
    <property name="sessionCreation" value="true"/>
    <property name="protocols" value="SSL, TLS"/>
    <property name="implicit" value="true"/>
</bean>

Every time an adapter requests a session object from its SessionFactory the session is returned from a session pool maintained by a caching wrapper around the factory. A Session in the session pool might go stale (if it has been disconnected by the server due to inactivity) so the SessionFactory will perform validation to make sure that it never returns a stale session to the adapter. If a stale session was encountered, it will be removed from the pool, and a new one will be created.

[Note]Note

If you experience connectivity problems and would like to trace Session creation, as well as see which Sessions are pooled, you can enable tracing by setting the logger to TRACE level (e.g., log4j.category.org.springframework.integration.file=TRACE).

Now all you need to do is inject these session factories into your adapters. Obviously the protocol (FTP or FTPS) that an adapter will use depends on the type of session factory that has been injected into the adapter.

[Note]Note

A more practical way to provide values for FTP/FTPS Session Factories is by using Spring’s property placeholder support (See: http://docs.spring.io/spring/docs/current/spring-framework-reference/html/beans.html#beans-factory-placeholderconfigurer).

Advanced Configuration

DefaultFtpSessionFactory provides an abstraction over the underlying client API, which, since Spring Integration 2.0, is Apache Commons Net. This spares you the low-level configuration details of the org.apache.commons.net.ftp.FTPClient. Several common properties are exposed on the session factory (since version 4.0, these include connectTimeout, defaultTimeout and dataTimeout). However, there are times when access to lower-level FTPClient configuration is necessary to achieve more advanced configuration (e.g., setting the port range for active mode). For that purpose, AbstractFtpSessionFactory (the base class for all FTP Session Factories) exposes hooks in the form of the two post-processing methods below.

/**
 * Will handle additional initialization after client.connect() method was invoked,
 * but before any action on the client has been taken
 */
protected void postProcessClientAfterConnect(T t) throws IOException {
    // NOOP
}
/**
 * Will handle additional initialization before client.connect() method was invoked.
 */
protected void postProcessClientBeforeConnect(T client) throws IOException {
    // NOOP
}

As you can see, there is no default implementation for these two methods. However, by extending DefaultFtpSessionFactory you can override these methods to provide more advanced configuration of the FTPClient. For example:

public class AdvancedFtpSessionFactory extends DefaultFtpSessionFactory {

    protected void postProcessClientBeforeConnect(FTPClient ftpClient) throws IOException {
       ftpClient.setActivePortRange(4000, 5000);
    }
}

=== Delegating Session Factory

Version 4.2 introduced the DelegatingSessionFactory, which allows the selection of the actual session factory at runtime. Prior to invoking the FTP endpoint, call setThreadKey() on the factory to associate a key with the current thread. That key is then used to look up the actual session factory to be used. The key can be cleared by calling clearThreadKey() after use.

Convenience methods have been added so this can easily be done from a message flow:

<bean id="dsf" class="org.springframework.integration.file.remote.session.DelegatingSessionFactory">
    <constructor-arg>
        <bean class="o.s.i.file.remote.session.DefaultSessionFactoryLocator">
            <!-- delegate factories here -->
        </bean>
    </constructor-arg>
</bean>

<int:service-activator input-channel="in" output-channel="c1"
        expression="@dsf.setThreadKey(#root, headers['factoryToUse'])" />

<int-ftp:outbound-gateway request-channel="c1" reply-channel="c2" ... />

<int:service-activator input-channel="c2" output-channel="out"
        expression="@dsf.clearThreadKey(#root)" />
[Important]Important

When using session caching (see the section called "FTP Session Caching"), each of the delegates should be cached; you cannot cache the DelegatingSessionFactory itself.
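
A Java configuration sketch of the same arrangement (the keys and delegate factories are assumptions):

@Bean
public DelegatingSessionFactory<FTPFile> dsf(SessionFactory<FTPFile> factoryOne,
        SessionFactory<FTPFile> factoryTwo) {
    Map<Object, SessionFactory<FTPFile>> factories = new HashMap<>();
    factories.put("one", factoryOne);
    factories.put("two", factoryTwo);
    // the second argument is the default factory used when no thread key matches
    return new DelegatingSessionFactory<>(new DefaultSessionFactoryLocator<>(factories, factoryOne));
}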

=== FTP Inbound Channel Adapter

The FTP Inbound Channel Adapter is a special listener that connects to the FTP server and listens for remote directory events (e.g., a new file being created), at which point it initiates a file transfer.

<int-ftp:inbound-channel-adapter id="ftpInbound"
    channel="ftpChannel"
    session-factory="ftpSessionFactory"
    auto-create-local-directory="true"
    delete-remote-files="true"
    filename-pattern="*.txt"
    remote-directory="some/remote/path"
    remote-file-separator="/"
    preserve-timestamp="true"
    local-filename-generator-expression="#this.toUpperCase() + '.a'"
    local-filter="myFilter"
    temporary-file-suffix=".writing"
    local-directory=".">
    <int:poller fixed-rate="1000"/>
</int-ftp:inbound-channel-adapter>

As you can see from the configuration above you can configure an FTP Inbound Channel Adapter via the inbound-channel-adapter element while also providing values for various attributes such as local-directory, filename-pattern (which is based on simple pattern matching, not regular expressions), and of course the reference to a session-factory.

By default the transferred file will carry the same name as the original file. If you want to override this behavior you can set the local-filename-generator-expression attribute which allows you to provide a SpEL Expression to generate the name of the local file. Unlike outbound gateways and adapters where the root object of the SpEL Evaluation Context is a Message, this inbound adapter does not yet have the Message at the time of evaluation since that’s what it ultimately generates with the transferred file as its payload. So, the root object of the SpEL Evaluation Context is the original name of the remote file (String).

Starting with Spring Integration 3.0, you can specify the preserve-timestamp attribute (default false); when true, the local file’s modified timestamp will be set to the value retrieved from the server; otherwise it will be set to the current time.

Starting with version 4.2, you can specify remote-directory-expression instead of remote-directory, allowing you to dynamically determine the directory on each poll, e.g. remote-directory-expression="@myBean.determineRemoteDir()".

Starting with version 4.3, the remote-directory/remote-directory-expression attributes can be omitted (defaulting to null). In this case, according to the FTP protocol, the client's working directory is used as the default remote directory.

Sometimes file filtering based on the simple pattern specified via the filename-pattern attribute might not be sufficient. If this is the case, you can use the filename-regex attribute to specify a Regular Expression (e.g. filename-regex=".*\.test$"). And of course, if you need complete control, you can use the filter attribute and provide a reference to any custom implementation of org.springframework.integration.file.filters.FileListFilter, a strategy interface for filtering a list of files. This filter determines which remote files are retrieved. You can also combine a pattern based filter with other filters, such as an AcceptOnceFileListFilter to avoid synchronizing files that have previously been fetched, by using a CompositeFileListFilter.

The AcceptOnceFileListFilter stores its state in memory. If you wish the state to survive a system restart, consider using the FtpPersistentAcceptOnceFileListFilter instead. This filter stores the accepted file names in an instance of the MetadataStore strategy (see the Metadata Store section). This filter matches on the filename and the remote modified time.
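
For example, a composite remote filter combining a pattern filter with a persistent accept-once filter might look like this sketch (the pattern, key prefix and in-memory store are assumptions):

@Bean
public FileListFilter<FTPFile> remoteFileFilter() {
    CompositeFileListFilter<FTPFile> composite = new CompositeFileListFilter<>();
    composite.addFilter(new FtpSimplePatternFileListFilter("*.txt"));
    composite.addFilter(new FtpPersistentAcceptOnceFileListFilter(new SimpleMetadataStore(), "ftpFiles-"));
    return composite;
}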

Since version 4.0, this filter requires a ConcurrentMetadataStore. When used with a shared data store (such as Redis with the RedisMetadataStore) this allows filter keys to be shared across multiple application or server instances.

The above discussion refers to filtering the files before retrieving them. Once the files have been retrieved, an additional filter is applied to the files on the file system. By default, this is an AcceptOnceFileListFilter which, as discussed, retains state in memory and does not consider the file's modified time. Unless your application removes files after processing, the adapter will re-process the files on disk by default after an application restart.

Also, if you configure the filter to use a FtpPersistentAcceptOnceFileListFilter, and the remote file timestamp changes (causing it to be re-fetched), the default local filter will not allow this new file to be processed.

Use the local-filter attribute to configure the behavior of the local file system filter. Starting with version 4.3.8, a FileSystemPersistentAcceptOnceFileListFilter is configured by default. This filter stores the accepted file names and modified timestamp in an instance of the MetadataStore strategy (see the Metadata Store section), and will detect changes to the local file's modified time. The default MetadataStore is a SimpleMetadataStore, which stores state in memory.

Since version 4.1.5, these filters have a new property flushOnUpdate which will cause them to flush the metadata store on every update (if the store implements Flushable).
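
A sketch of a persistent local filter (the base directory of the store and the key prefix are assumptions):

@Bean
public PropertiesPersistingMetadataStore metadataStore() {
    PropertiesPersistingMetadataStore store = new PropertiesPersistingMetadataStore();
    store.setBaseDirectory("/tmp/metadata"); // assumed location for the backing properties file
    return store;
}

@Bean
public FileSystemPersistentAcceptOnceFileListFilter localFileFilter() {
    FileSystemPersistentAcceptOnceFileListFilter filter =
            new FileSystemPersistentAcceptOnceFileListFilter(metadataStore(), "local-");
    filter.setFlushOnUpdate(true); // flush the (Flushable) store on every update
    return filter;
}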

[Important]Important

Further, if you use a distributed MetadataStore (such as a Redis, Gemfire, or Zookeeper Metadata Store) you can have multiple instances of the same adapter/application and be sure that one and only one will process a file.

The actual local filter is a CompositeFileListFilter containing the supplied filter and a pattern filter that prevents processing files that are in the process of being downloaded (based on the temporary-file-suffix); files are downloaded with this suffix (default: .writing) and the file is renamed to its final name when the transfer is complete, making it visible to the filter.

The remote-file-separator attribute allows you to configure a file separator character to use if the default / is not applicable for your particular environment.

Please refer to the schema for more details on these attributes.

It is also important to understand that the FTP Inbound Channel Adapter is a Polling Consumer and therefore you must configure a poller (either via a global default or a local sub-element). Once a file has been transferred, a Message with a java.io.File as its payload will be generated and sent to the channel identified by the channel attribute.

More on File Filtering and Large Files

Sometimes the file that just appeared in the monitored (remote) directory is not complete. Typically such a file is written with a temporary extension (e.g., foo.txt.writing) and then renamed after the writing process has finished. In most cases, you are only interested in files that are complete and want to filter out the rest. To handle these scenarios, you can use the filtering support provided by the filename-pattern, filename-regex and filter attributes. Here is an example that uses a custom filter implementation.

<int-ftp:inbound-channel-adapter
    channel="ftpChannel"
    session-factory="ftpSessionFactory"
    filter="customFilter"
    local-directory="file:/my_transfers"
    remote-directory="some/remote/path">
    <int:poller fixed-rate="1000"/>
</int-ftp:inbound-channel-adapter>

<bean id="customFilter" class="org.example.CustomFilter"/>

Poller configuration notes for the inbound FTP adapter

The job of the inbound FTP adapter consists of two tasks: 1) communicating with a remote server in order to transfer files from a remote directory to a local directory; 2) for each transferred file, generating a Message with that file as the payload and sending it to the channel identified by the channel attribute. That is why they are called channel adapters rather than just adapters. The main job of such an adapter is to generate a Message to be sent to a Message Channel. Essentially, the second task takes precedence, in such a way that, if your local directory already has one or more files, it will first generate Messages from those, and only when all local files have been processed will it initiate the remote communication to retrieve more files.

Also, when configuring a trigger on the poller, you should pay close attention to the max-messages-per-poll attribute. Its default value is 1 for all SourcePollingChannelAdapter instances (including FTP). This means that, as soon as one file is processed, the adapter waits for the next execution time as determined by your trigger configuration. If you happen to have one or more files sitting in the local-directory, it processes those files before it initiates communication with the remote FTP server. And, if max-messages-per-poll is set to 1 (the default), it processes only one file at a time, with intervals as defined by your trigger, essentially working as one poll = one file.

For typical file-transfer use cases, you most likely want the opposite behavior: to process all the files you can for each poll and only then wait for the next poll. If that is the case, set max-messages-per-poll to -1. Then, on each poll, the adapter will attempt to generate as many Messages as it possibly can. In other words, it will process everything in the local directory, and then it will connect to the remote directory to transfer everything that is available there to be processed locally. Only then is the poll operation considered complete, and the poller will wait for the next execution time.

You can alternatively set the max-messages-per-poll value to a positive value indicating the upward limit of Messages to be created from files with each poll. For example, a value of 10 means that on each poll it will attempt to process no more than 10 files.
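
In Java configuration, the same can be expressed on the @Poller annotation; the following is a sketch, assuming a synchronizer bean defined as in the Java configuration example shown later in this section:

@Bean
@InboundChannelAdapter(channel = "ftpChannel",
        poller = @Poller(fixedDelay = "5000", maxMessagesPerPoll = "-1"))
public MessageSource<File> drainingFtpMessageSource(FtpInboundFileSynchronizer synchronizer) {
    // maxMessagesPerPoll = -1: emit a message for every available file on each poll
    FtpInboundFileSynchronizingMessageSource source =
            new FtpInboundFileSynchronizingMessageSource(synchronizer);
    source.setLocalDirectory(new File("ftp-inbound"));
    return source;
}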

==== Recovering from Failures

It is important to understand the architecture of the adapter. There is a file synchronizer which fetches the files, and a FileReadingMessageSource to emit a message for each synchronized file. As discussed above, there are two filters involved. The filter attribute (and patterns) refers to the remote (FTP) file list - to avoid fetching files that have already been fetched. The local-filter is used by the FileReadingMessageSource to determine which files are to be sent as messages.

The synchronizer lists the remote files and consults its filter; the files are then transferred. If an IO error occurs during file transfer, any files that have already been added to the filter are removed so they are eligible to be re-fetched on the next poll. This only applies if the filter implements ReversibleFileListFilter (such as the AcceptOnceFileListFilter).

If, after synchronizing the files, an error occurs on the downstream flow processing a file, there is no automatic rollback of the filter so the failed file will not be reprocessed by default.

If you wish to reprocess such files after a failure, you can use configuration similar to the following to facilitate the removal of the failed file from the filter. This will work for any ResettableFileListFilter.

<int-ftp:inbound-channel-adapter id="ftpAdapter"
        session-factory="ftpSessionFactory"
        channel="requestChannel"
        remote-directory-expression="'/sftpSource'"
        local-directory="file:myLocalDir"
        auto-create-local-directory="true"
        filename-pattern="*.txt"
        local-filter="acceptOnceFilter">
    <int:poller fixed-rate="1000">
        <int:transactional synchronization-factory="syncFactory" />
    </int:poller>
</int-ftp:inbound-channel-adapter>

<bean id="acceptOnceFilter"
    class="org.springframework.integration.file.filters.AcceptOnceFileListFilter" />

<int:transaction-synchronization-factory id="syncFactory">
    <int:after-rollback expression="@acceptOnceFilter.remove(payload)" />
</int:transaction-synchronization-factory>

<bean id="transactionManager"
    class="org.springframework.integration.transaction.PseudoTransactionManager" />

==== Configuring with Java Configuration

The following Spring Boot application provides an example of configuring the inbound adapter using Java configuration:

@SpringBootApplication
public class FtpJavaApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(FtpJavaApplication.class)
            .web(false)
            .run(args);
    }

    @Bean
    public SessionFactory<FTPFile> ftpSessionFactory() {
        DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
        sf.setHost("localhost");
        sf.setPort(port);
        sf.setUsername("foo");
        sf.setPassword("foo");
        return new CachingSessionFactory<FTPFile>(sf);
    }

    @Bean
    public FtpInboundFileSynchronizer ftpInboundFileSynchronizer() {
        FtpInboundFileSynchronizer fileSynchronizer = new FtpInboundFileSynchronizer(ftpSessionFactory());
        fileSynchronizer.setDeleteRemoteFiles(false);
        fileSynchronizer.setRemoteDirectory("foo");
        fileSynchronizer.setFilter(new FtpSimplePatternFileListFilter("*.xml"));
        return fileSynchronizer;
    }

    @Bean
    @InboundChannelAdapter(channel = "ftpChannel", poller = @Poller(fixedDelay = "5000"))
    public MessageSource<File> ftpMessageSource() {
        FtpInboundFileSynchronizingMessageSource source =
                new FtpInboundFileSynchronizingMessageSource(ftpInboundFileSynchronizer());
        source.setLocalDirectory(new File("ftp-inbound"));
        source.setAutoCreateLocalDirectory(true);
        source.setLocalFilter(new AcceptOnceFileListFilter<File>());
        return source;
    }

    @Bean
    @ServiceActivator(inputChannel = "ftpChannel")
    public MessageHandler handler() {
        return new MessageHandler() {

            @Override
            public void handleMessage(Message<?> message) throws MessagingException {
                System.out.println(message.getPayload());
            }

        };
    }

}

==== Configuring with the Java DSL

The following Spring Boot application provides an example of configuring the inbound adapter using the Java DSL:

@SpringBootApplication
public class FtpJavaApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(FtpJavaApplication.class)
            .web(false)
            .run(args);
    }

    @Autowired
    private SessionFactory<FTPFile> ftpSessionFactory; // session factory defined elsewhere (e.g. as in the earlier Java configuration example)

    @Bean
    public IntegrationFlow ftpInboundFlow() {
        return IntegrationFlows
            .from(s -> s.ftp(this.ftpSessionFactory)
                    .preserveTimestamp(true)
                    .remoteDirectory("foo")
                    .regexFilter(".*\\.txt$")
                    .localFilename(f -> f.toUpperCase() + ".a")
                    .localDirectory(new File("d:\\ftp_files")),
                e -> e.id("ftpInboundAdapter")
                    .autoStartup(true)
                    .poller(Pollers.fixedDelay(5000)))
            .handle(m -> System.out.println(m.getPayload()))
            .get();
    }
}

=== FTP Streaming Inbound Channel Adapter

The streaming inbound channel adapter was introduced in version 4.3. This adapter produces messages with payloads of type InputStream, allowing files to be fetched without writing to the local file system. Since the session remains open, the consuming application is responsible for closing the session when the file has been consumed. The session is provided in the IntegrationMessageHeaderAccessor.CLOSEABLE_RESOURCE header. Standard framework components, such as the FileSplitter and StreamTransformer, will automatically close the session. See the sections on the File Splitter and the Stream Transformer for more information about these components.

<int-ftp:inbound-streaming-channel-adapter id="ftpInbound"
            channel="ftpChannel"
            session-factory="sessionFactory"
            filename-pattern="*.txt"
            filename-regex=".*\.txt"
            filter="filter"
            remote-file-separator="/"
            comparator="comparator"
            remote-directory-expression="'foo/bar'">
        <int:poller fixed-rate="1000" />
</int-ftp:inbound-streaming-channel-adapter>

Only one of filename-pattern, filename-regex or filter is allowed.

[Important]Important

Unlike the non-streaming inbound channel adapter, this adapter does not prevent duplicates by default. If you do not delete the remote file (e.g. using an outbound gateway with an rm command) and you wish to prevent the file being processed again, you can configure an FtpPersistentAcceptOnceFileListFilter in the filter attribute. If you don’t actually want to persist the state, an in-memory SimpleMetadataStore can be used with the filter. If you wish to use a filename pattern (or regex) as well, use a CompositeFileListFilter.
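
For example, the following is a minimal sketch (the bean name and the "streaming" metadata-store prefix are assumptions) of combining a pattern filter with a persistent accept-once filter by using a CompositeFileListFilter:

@Bean
public FileListFilter<FTPFile> ftpStreamingFilter() {
    CompositeFileListFilter<FTPFile> filter = new CompositeFileListFilter<FTPFile>();
    // only consider *.txt files
    filter.addFilter(new FtpSimplePatternFileListFilter("*.txt"));
    // remember which files have already been fetched (persistent if a persistent metadata store is used)
    filter.addFilter(new FtpPersistentAcceptOnceFileListFilter(new SimpleMetadataStore(), "streaming"));
    return filter;
}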

==== Configuring with Java Configuration

The following Spring Boot application provides an example of configuring the inbound adapter using Java configuration:

@SpringBootApplication
public class FtpJavaApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(FtpJavaApplication.class)
            .web(false)
            .run(args);
    }

    @Bean
    @InboundChannelAdapter(channel = "stream")
    public MessageSource<InputStream> ftpMessageSource() {
        FtpStreamingMessageSource messageSource = new FtpStreamingMessageSource(template(), null);
        messageSource.setRemoteDirectory("ftpSource/");
        messageSource.setFilter(new FtpPersistentAcceptOnceFileListFilter(new SimpleMetadataStore(),
                           "streaming"));
        return messageSource;
    }

    @Bean
    @Transformer(inputChannel = "stream", outputChannel = "data")
    public org.springframework.integration.transformer.Transformer transformer() {
        return new StreamTransformer();
    }

    @Bean
    public FtpRemoteFileTemplate template() {
        return new FtpRemoteFileTemplate(ftpSessionFactory());
    }

    @Bean
    public SessionFactory<FTPFile> ftpSessionFactory() {
        DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
        sf.setHost("localhost");
        sf.setPort(port);
        sf.setUsername("foo");
        sf.setPassword("foo");
        return new CachingSessionFactory<FTPFile>(sf);
    }

}

=== FTP Outbound Channel Adapter

The FTP Outbound Channel Adapter relies upon a MessageHandler implementation that will connect to the FTP server and initiate an FTP transfer for every file it receives in the payload of incoming Messages. It also supports several representations of the File so you are not limited only to java.io.File typed payloads. The FTP Outbound Channel Adapter supports the following payloads: 1) java.io.File - the actual file object; 2) byte[] - a byte array that represents the file contents; and 3) java.lang.String - text that represents the file contents.

<int-ftp:outbound-channel-adapter id="ftpOutbound"
    channel="ftpChannel"
    session-factory="ftpSessionFactory"
    charset="UTF-8"
    remote-file-separator="/"
    auto-create-directory="true"
    remote-directory-expression="headers['remote_dir']"
    temporary-remote-directory-expression="headers['temp_remote_dir']"
    filename-generator="fileNameGenerator"
    use-temporary-filename="true"
    mode="REPLACE"/>

As you can see from the configuration above, you can configure an FTP Outbound Channel Adapter via the outbound-channel-adapter element while also providing values for various attributes, such as filename-generator (an implementation of the org.springframework.integration.file.FileNameGenerator strategy interface), a reference to a session-factory, and other attributes. You can also see some examples of *expression attributes which allow you to use SpEL to configure settings such as remote-directory-expression, temporary-remote-directory-expression and remote-filename-generator-expression (a SpEL alternative to the filename-generator shown above). As with any component that allows the usage of SpEL, access to the Payload and the Message Headers is available via the payload and headers variables. Please refer to the schema for more details on the available attributes.

[Note]Note

By default Spring Integration will use o.s.i.file.DefaultFileNameGenerator if none is specified. DefaultFileNameGenerator will determine the file name based on the value of the file_name header (if it exists) in the MessageHeaders, or if the payload of the Message is already a java.io.File, then it will use the original name of that file.
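
For example, when the payload is not a File, the desired remote file name can be supplied explicitly via that header; a minimal sketch (the ftpChannel reference is an assumption):

Message<byte[]> message = MessageBuilder.withPayload("hello".getBytes())
        .setHeader(FileHeaders.FILENAME, "hello.txt") // FileHeaders.FILENAME is the "file_name" header
        .build();
ftpChannel.send(message); // DefaultFileNameGenerator names the remote file hello.txt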

[Important]Important

Defining certain values (e.g., remote-directory) might be platform- or FTP server-dependent. For example, as reported on this forum http://forum.springsource.org/showthread.php?p=333478&posted=1#post333478, on some platforms you must add a slash to the end of the directory definition (e.g., remote-directory="/foo/bar/" instead of remote-directory="/foo/bar").

Starting with version 4.1, you can specify the mode when transferring the file. By default, an existing file will be overwritten; the modes are defined on enum FileExistsMode, having values REPLACE (default), APPEND, IGNORE, and FAIL. With IGNORE and FAIL, the file is not transferred; FAIL causes an exception to be thrown whereas IGNORE silently ignores the transfer (although a DEBUG log entry is produced).

Avoiding Partially Written Files

One of the common problems, when dealing with file transfers, is the possibility of processing a partial file - a file might appear in the file system before its transfer is actually complete.

To deal with this issue, Spring Integration FTP adapters use a very common algorithm where files are transferred under a temporary name and then renamed once they are fully transferred.

By default, every file that is in the process of being transferred will appear in the file system with an additional suffix which, by default, is .writing; this can be changed using the temporary-file-suffix attribute.

However, there may be situations where you don’t want to use this technique (for example, if the server does not permit renaming files). For situations like this, you can disable this feature by setting use-temporary-file-name to false (default is true). When this attribute is false, the file is written with its final name and the consuming application will need some other mechanism to detect that the file is completely uploaded before accessing it.
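
When configuring the adapter with Java, the equivalent settings are available on the handler. The following is a minimal sketch (the channel, directory and session factory wiring are assumptions):

@Bean
@ServiceActivator(inputChannel = "toFtpChannel")
public MessageHandler ftpOutbound(SessionFactory<FTPFile> ftpSessionFactory) {
    FtpMessageHandler handler = new FtpMessageHandler(ftpSessionFactory);
    handler.setRemoteDirectoryExpressionString("'/upload'");
    // write the file directly with its final name (no temporary name/rename step)
    handler.setUseTemporaryFileName(false);
    // alternatively, keep the temporary-name technique but change the suffix:
    // handler.setTemporaryFileSuffix(".uploading");
    return handler;
}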

==== Configuring with Java Configuration

The following Spring Boot application provides an example of configuring the Outbound Adapter using Java configuration:

@SpringBootApplication
@IntegrationComponentScan
public class FtpJavaApplication {

    public static void main(String[] args) {
        ConfigurableApplicationContext context =
                    new SpringApplicationBuilder(FtpJavaApplication.class)
                        .web(false)
                        .run(args);
        MyGateway gateway = context.getBean(MyGateway.class);
        gateway.sendToFtp(new File("/foo/bar.txt"));
    }

    @Bean
    public SessionFactory<FTPFile> ftpSessionFactory() {
        DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
        sf.setHost("localhost");
        sf.setPort(port);
        sf.setUsername("foo");
        sf.setPassword("foo");
        return new CachingSessionFactory<FTPFile>(sf);
    }

    @Bean
    @ServiceActivator(inputChannel = "ftpChannel")
    public MessageHandler handler() {
        FtpMessageHandler handler = new FtpMessageHandler(ftpSessionFactory());
        handler.setRemoteDirectoryExpressionString("headers['remote-target-dir']");
        handler.setFileNameGenerator(new FileNameGenerator() {

            @Override
            public String generateFileName(Message<?> message) {
                 return "handlerContent.test";
            }

        });
        return handler;
    }

    @MessagingGateway
    public interface MyGateway {

         @Gateway(requestChannel = "toFtpChannel")
         void sendToFtp(File file);

    }
}

==== Configuring with the Java DSL

The following Spring Boot application provides an example of configuring the Outbound Adapter using the Java DSL:

@SpringBootApplication
@IntegrationComponentScan
public class FtpJavaApplication {

    public static void main(String[] args) {
        ConfigurableApplicationContext context =
            new SpringApplicationBuilder(FtpJavaApplication.class)
                .web(false)
                .run(args);
        MyGateway gateway = context.getBean(MyGateway.class);
        gateway.sendToFtp(new File("/foo/bar.txt"));
    }

    @Bean
    public SessionFactory<FTPFile> ftpSessionFactory() {
        DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
        sf.setHost("localhost");
        sf.setPort(port);
        sf.setUsername("foo");
        sf.setPassword("foo");
        return new CachingSessionFactory<FTPFile>(sf);
    }

    @Bean
    public IntegrationFlow ftpOutboundFlow() {
        return IntegrationFlows.from("toFtpChannel")
                .handle(Ftp.outboundAdapter(ftpSessionFactory(), FileExistsMode.FAIL)
                        .useTemporaryFileName(false)
                        .fileNameExpression("headers['" + FileHeaders.FILENAME + "']")
                        .remoteDirectory("remoteDirectory")
                ).get();
    }

    @MessagingGateway
    public interface MyGateway {

         @Gateway(requestChannel = "toFtpChannel")
         void sendToFtp(File file);

    }

}

=== FTP Outbound Gateway

The FTP Outbound Gateway provides a limited set of commands to interact with a remote FTP/FTPS server. Commands supported are:

  • ls (list files)
  • get (retrieve file)
  • mget (retrieve file(s))
  • rm (remove file(s))
  • mv (move/rename file)
  • put (send file)
  • mput (send multiple files)

ls

ls lists remote file(s) and supports the following options:

  • -1 - just retrieve a list of file names, default is to retrieve a list of FileInfo objects.
  • -a - include all files (including those starting with .)
  • -f - do not sort the list
  • -dirs - include directories (excluded by default)
  • -links - include symbolic links (excluded by default)
  • -R - list the remote directory recursively

In addition, filename filtering is provided, in the same manner as the inbound-channel-adapter.

The message payload resulting from an ls operation is a list of file names, or a list of FileInfo objects. These objects provide information such as modified time, permissions etc.

The remote directory that the ls command acted on is provided in the file_remoteDirectory header.

When using the recursive option (-R), the fileName includes any subdirectory elements, representing a relative path to the file (relative to the remote directory). If the -dirs option is included, each recursive directory is also returned as an element in the list. In this case, it is recommended that -1 not be used, because you would not be able to distinguish files from directories, which is possible when using the FileInfo objects.

Starting with version 4.3, the FtpSession supports null for the list() and listNames() methods; therefore, the expression attribute can be omitted. For Java configuration, there are two constructors without an expression argument for convenience. A null path for the LS, PUT and MPUT commands is treated as the client working directory, according to the FTP protocol. All other commands must be supplied with an expression to evaluate the remote path against the request message. The working directory can be set via the FTPClient.changeWorkingDirectory() method when you extend the DefaultFtpSessionFactory and implement the postProcessClientAfterConnect() callback.
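
For example, the following sketch uses the convenience constructor (also shown later in this chapter) to list the client working directory; the channel names are assumptions:

@Bean
@ServiceActivator(inputChannel = "lsChannel")
public MessageHandler lsGateway(SessionFactory<FTPFile> ftpSessionFactory) {
    // no expression argument: LS is evaluated against the client working directory
    FtpOutboundGateway gateway = new FtpOutboundGateway(ftpSessionFactory, "ls");
    gateway.setOptions("-1");
    gateway.setOutputChannelName("lsReplyChannel");
    return gateway;
}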

get

get retrieves a remote file and supports the following option:

  • -P - preserve the timestamp of the remote file
  • -stream - retrieve the remote file as a stream.

The remote directory is provided in the file_remoteDirectory header, and the filename is provided in the file_remoteFile header.

The message payload resulting from a get operation is a File object representing the retrieved file, or an InputStream when the -stream option is provided. This option allows retrieving the file as a stream. For text files, a common use case is to combine this operation with a File Splitter or Stream Transformer. When consuming remote files as streams, the user is responsible for closing the Session after the stream is consumed. For convenience, the Session is provided in the IntegrationMessageHeaderAccessor.CLOSEABLE_RESOURCE header, and a convenience method is provided on the IntegrationMessageHeaderAccessor:

Closeable closeable = new IntegrationMessageHeaderAccessor(message).getCloseableResource();
if (closeable != null) {
    closeable.close();
}

Note: In previous releases the session was in the file_remoteSession header, but this is deprecated - use IntegrationMessageHeaderAccessor.CLOSEABLE_RESOURCE instead.

Framework components such as the File Splitter and Stream Transformer will automatically close the session after the data is transferred.

The following shows an example of consuming a file as a stream:

<int-ftp:outbound-gateway session-factory="ftpSessionFactory"
                            request-channel="inboundGetStream"
                            command="get"
                            command-options="-stream"
                            expression="payload"
                            remote-directory="ftpTarget"
                            reply-channel="stream" />

<int-file:splitter input-channel="stream" output-channel="lines" />

Note: if you consume the input stream in a custom component, you must close the Session. You can either do that in your custom code, or route a copy of the message to a service-activator and use SpEL:

<int:service-activator input-channel="closeSession"
    expression="headers[T(org.springframework.integration.IntegrationMessageHeaderAccessor).CLOSEABLE_RESOURCE].close()" />

mget

mget retrieves multiple remote files based on a pattern and supports the following option:

  • -P - preserve the timestamps of the remote files
  • -x - Throw an exception if no files match the pattern (otherwise an empty list is returned)

The message payload resulting from an mget operation is a List<File> object - a List of File objects, each representing a retrieved file.

The remote directory is provided in the file_remoteDirectory header, and the pattern for the file names is provided in the file_remoteFile header.

[Note]Notes for when using recursion (-R)

The pattern is ignored, and * is assumed. By default, the entire remote tree is retrieved. However, files in the tree can be filtered by providing a FileListFilter; directories in the tree can also be filtered this way. A FileListFilter can be provided by reference or by the filename-pattern or filename-regex attributes. For example, filename-regex="(subDir|.*1.txt)" will retrieve all files ending with 1.txt in the remote directory and the subdirectory subDir. If a subdirectory is filtered, no additional traversal of that subdirectory is performed.

The -dirs option is not allowed (the recursive mget uses the recursive ls to obtain the directory tree and the directories themselves cannot be included in the list).

Typically, you would use the #remoteDirectory variable in the local-directory-expression so that the remote directory structure is retained locally.

See also the section called “Outbound Gateway Partial Success (mget and mput)”.

put

put sends a file to the remote server; the payload of the message can be a java.io.File, a byte[] or a String. A remote-filename-generator (or expression) is used to name the remote file. Other available attributes include remote-directory, temporary-remote-directory (and their *-expression equivalents), use-temporary-file-name, and auto-create-directory. Refer to the schema documentation for more information.

The message payload resulting from a put operation is a String representing the full path of the file on the server after transfer.

mput

mput sends multiple files to the server and supports the following option:

  • -R - Recursive - send all files (possibly filtered) in the directory and subdirectories

The message payload must be a java.io.File representing a local directory.

The same attributes as the put command are supported. In addition, files in the local directory can be filtered with one of mput-pattern, mput-regex or mput-filter. The filter works with recursion, as long as the subdirectories themselves pass the filter. Subdirectories that do not pass the filter are not recursed.

The message payload resulting from an mput operation is a List<String> object - a List of remote file paths resulting from the transfer.

See also the section called “Outbound Gateway Partial Success (mget and mput)”.

rm

The rm command has no options.

The message payload resulting from an rm operation is Boolean.TRUE if the remove was successful, Boolean.FALSE otherwise. The remote directory is provided in the file_remoteDirectory header, and the filename is provided in the file_remoteFile header.

mv

The mv command has no options.

The expression attribute defines the "from" path and the rename-expression attribute defines the "to" path. By default, the rename-expression is headers['file_renameTo']. This expression must not evaluate to null, or an empty String. If necessary, any remote directories needed will be created. The payload of the result message is Boolean.TRUE. The original remote directory is provided in the file_remoteDirectory header, and the filename is provided in the file_remoteFile header. The new path is in the file_renameTo header.

Additional Information

The get and mget commands support the local-filename-generator-expression attribute. It defines a SpEL expression to generate the name of local file(s) during the transfer. The root object of the evaluation context is the request Message but, in addition, the remoteFileName variable is also available, which is particularly useful for mget, for example: local-filename-generator-expression="#remoteFileName.toUpperCase() + headers.foo".

The get and mget commands support the local-directory-expression attribute. It defines a SpEL expression to generate the name of local directory(ies) during the transfer. The root object of the evaluation context is the request Message but, in addition, the remoteDirectory variable is also available, which is particularly useful for mget, for example: local-directory-expression="'/tmp/local/' + #remoteDirectory.toUpperCase() + headers.foo". This attribute is mutually exclusive with local-directory attribute.

For all commands, the path that the command acts on is provided by the expression property of the gateway. For the mget command, the expression might evaluate to '*', meaning retrieve all files, or 'somedirectory/*', and so on.

Here is an example of a gateway configured for an ls command:

<int-ftp:outbound-gateway id="gateway1"
    session-factory="ftpSessionFactory"
    request-channel="inbound1"
    command="ls"
    command-options="-1"
    expression="payload"
    reply-channel="toSplitter"/>

The payload of the message sent to the toSplitter channel is a list of String objects containing the filename of each file. If the command-options was omitted, it would be a list of FileInfo objects. Options are provided space-delimited, e.g. command-options="-1 -dirs -links".

Starting with version 4.2, the GET, MGET, PUT and MPUT commands support a FileExistsMode property (mode when using the namespace support). This affects the behavior when the local file exists (GET and MGET) or the remote file exists (PUT and MPUT). Supported modes are REPLACE, APPEND, FAIL and IGNORE. For backwards compatibility, the default mode for PUT and MPUT operations is REPLACE and for GET and MGET operations, the default is FAIL.

==== Configuring with Java Configuration

The following Spring Boot application provides an example of configuring the Outbound Gateway using Java configuration:

@SpringBootApplication
public class FtpJavaApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(FtpJavaApplication.class)
            .web(false)
            .run(args);
    }

    @Bean
    public SessionFactory<FTPFile> ftpSessionFactory() {
        DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
        sf.setHost("localhost");
        sf.setPort(port);
        sf.setUsername("foo");
        sf.setPassword("foo");
        return new CachingSessionFactory<FTPFile>(sf);
    }

    @Bean
    @ServiceActivator(inputChannel = "ftpChannel")
    public MessageHandler handler() {
        FtpOutboundGateway ftpOutboundGateway = new FtpOutboundGateway(ftpSessionFactory(), "ls");
        ftpOutboundGateway.setOutputChannelName("lsReplyChannel");
        return ftpOutboundGateway;
    }

}

==== Configuring with the Java DSL

The following Spring Boot application provides an example of configuring the Outbound Gateway using the Java DSL:

@SpringBootApplication
public class FtpJavaApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(FtpJavaApplication.class)
            .web(false)
            .run(args);
    }

    @Bean
    public SessionFactory<FTPFile> ftpSessionFactory() {
        DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
        sf.setHost("localhost");
        sf.setPort(port);
        sf.setUsername("foo");
        sf.setPassword("foo");
        return new CachingSessionFactory<FTPFile>(sf);
    }

    @Bean
    public FtpOutboundGatewaySpec ftpOutboundGateway() {
        return Ftp.outboundGateway(ftpSessionFactory(),
            AbstractRemoteFileOutboundGateway.Command.MGET, "payload")
            .options(AbstractRemoteFileOutboundGateway.Option.RECURSIVE)
            .regexFileNameFilter("(subFtpSource|.*1.txt)")
            .localDirectoryExpression("'localDirectory/' + #remoteDirectory")
            .localFilenameExpression("#remoteFileName.replaceFirst('ftpSource', 'localTarget')");
    }

    @Bean
    public IntegrationFlow ftpMGetFlow(AbstractRemoteFileOutboundGateway<FTPFile> ftpOutboundGateway) {
        return f -> f
            .handle(ftpOutboundGateway)
            .channel(c -> c.queue("remoteFileOutputChannel"));
    }

}

==== Outbound Gateway Partial Success (mget and mput)

When performing operations on multiple files (mget and mput) it is possible that an exception occurs some time after one or more files have been transferred. In this case (starting with version 4.2), a PartialSuccessException is thrown. As well as the usual MessagingException properties (failedMessage and cause), this exception has two additional properties:

  • partialResults - the successful transfer results.
  • derivedInput - the list of files generated from the request message (e.g. local files to transfer for an mput).

This will enable you to determine which files were successfully transferred, and which were not.

In the case of a recursive mput, the PartialSuccessException may have nested PartialSuccessException instances.

Consider:

root/
|- file1.txt
|- subdir/
   | - file2.txt
   | - file3.txt
|- zoo.txt

If the exception occurs on file3.txt, the PartialSuccessException thrown by the gateway will have derivedInput of file1.txt, subdir, zoo.txt and partialResults of file1.txt. Its cause will be another PartialSuccessException with derivedInput of file2.txt, file3.txt and partialResults of file2.txt.
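
The following is a minimal sketch of inspecting such an exception on an error channel; the getter names are assumed from the property names listed above:

@ServiceActivator(inputChannel = "errorChannel")
public void handleError(ErrorMessage error) {
    Throwable payload = error.getPayload();
    // depending on the flow, the PartialSuccessException may be the payload itself or its cause
    Throwable candidate = payload instanceof PartialSuccessException ? payload : payload.getCause();
    if (candidate instanceof PartialSuccessException) {
        PartialSuccessException partial = (PartialSuccessException) candidate;
        System.out.println("Transferred so far: " + partial.getPartialResults());
        System.out.println("Derived input:      " + partial.getDerivedInput());
    }
}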

=== FTP Session Caching

[Important]Important

Starting with Spring Integration version 3.0, sessions are no longer cached by default; the cache-sessions attribute is no longer supported on endpoints. You must use a CachingSessionFactory (see below) if you wish to cache sessions.

In versions prior to 3.0, the sessions were cached automatically by default. A cache-sessions attribute was available for disabling the auto caching, but that solution did not provide a way to configure other session caching attributes. For example, you could not limit the number of sessions created. To support that requirement and other configuration options, a CachingSessionFactory was provided. It provides sessionCacheSize and sessionWaitTimeout properties. As its name suggests, the sessionCacheSize property controls how many active sessions the factory will maintain in its cache (the default is unbounded). If the sessionCacheSize threshold has been reached, any attempt to acquire another session will block until either one of the cached sessions becomes available or until the wait time for a Session expires (the default wait time is Integer.MAX_VALUE). The sessionWaitTimeout property enables configuration of that value.

If you want your Sessions to be cached, simply configure your default Session Factory as described above and then wrap it in an instance of CachingSessionFactory where you may provide those additional properties.

<bean id="ftpSessionFactory" class="o.s.i.ftp.session.DefaultFtpSessionFactory">
    <property name="host" value="localhost"/>
</bean>

<bean id="cachingSessionFactory" class="o.s.i.file.remote.session.CachingSessionFactory">
    <constructor-arg ref="ftpSessionFactory"/>
    <constructor-arg value="10"/>
    <property name="sessionWaitTimeout" value="1000"/>
</bean>

In the above example you see a CachingSessionFactory created with the sessionCacheSize set to 10 and the sessionWaitTimeout set to 1 second (its value is in milliseconds).
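
The equivalent Java configuration might look like the following sketch, mirroring the XML above:

@Bean
public SessionFactory<FTPFile> ftpSessionFactory() {
    DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
    sf.setHost("localhost");
    return sf;
}

@Bean
public CachingSessionFactory<FTPFile> cachingSessionFactory() {
    // cache at most 10 sessions; wait up to 1 second (1000 ms) for a session to become available
    CachingSessionFactory<FTPFile> csf = new CachingSessionFactory<FTPFile>(ftpSessionFactory(), 10);
    csf.setSessionWaitTimeout(1000);
    return csf;
}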

Starting with Spring Integration version 3.0, the CachingSessionFactory provides a resetCache() method. When invoked, all idle sessions are immediately closed and in-use sessions are closed when they are returned to the cache. New requests for sessions will establish new sessions as necessary.

=== RemoteFileTemplate

Starting with Spring Integration version 3.0 a new abstraction is provided over the FtpSession object. The template provides methods to send, retrieve (as an InputStream), remove, and rename files. In addition an execute method is provided allowing the caller to execute multiple operations on the session. In all cases, the template takes care of reliably closing the session. For more information, refer to the JavaDocs for RemoteFileTemplate. There is a subclass for FTP: FtpRemoteFileTemplate.

Additional methods were added in version 4.1 including getClientInstance() which provides access to the underlying FTPClient enabling access to low-level APIs.

Not all FTP servers properly implement the STAT <path> command; it can return a positive result for a non-existent path. The NLST command reliably returns the name when the path is a file and it exists. However, this cannot be used to check that an empty directory exists, since NLST always returns an empty list when the path is a directory. Since the template does not know whether the path represents a directory, it has to perform additional checks when the path does not appear to exist when using NLST. This adds overhead, requiring several requests to the server. Starting with version 4.1.9, the FtpRemoteFileTemplate provides the FtpRemoteFileTemplate.ExistsMode property with the following options:

  • STAT - Perform the STAT FTP command (FTPClient.getStatus(path)) to check the path existence; this is the default and requires that your FTP server properly supports the STAT command (with a path).
  • NLST - Perform the NLST FTP command - FTPClient.listName(path); use this if you are testing for a path that is a full path to a file; it won’t work for empty directories.
  • NLST_AND_DIRS - Perform the NLST command first and if it returns no files, fall back to a technique which temporarily switches the working directory using FTPClient.changeWorkingDirectory(path). See FtpSession.exists() for more information.

Since we know that the FileExistsMode.FAIL case is always only looking for a file (and not a directory), we safely use NLST mode for the FtpMessageHandler and FtpOutboundGateway components.

For any other cases, the FtpRemoteFileTemplate can be extended to implement custom logic in an overridden exists() method.
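
The following is a minimal sketch of configuring and using the template, assuming the standard setter for the ExistsMode property described above; bean names and paths are assumptions:

@Bean
public FtpRemoteFileTemplate ftpTemplate(SessionFactory<FTPFile> ftpSessionFactory) {
    FtpRemoteFileTemplate template = new FtpRemoteFileTemplate(ftpSessionFactory);
    // use NLST for existence checks; suitable when the path always refers to a file
    template.setExistsMode(FtpRemoteFileTemplate.ExistsMode.NLST);
    return template;
}

Given such a bean, several operations can be performed on a single, reliably closed session:

boolean present = ftpTemplate.exists("ftpSource/report.txt");

ftpTemplate.execute(session -> {
    session.mkdir("archive/");
    session.rename("ftpSource/report.txt", "archive/report.txt");
    return null;
});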

=== MessageSessionCallback

Starting with Spring Integration version 4.2, a MessageSessionCallback<F, T> implementation can be used with the <int-ftp:outbound-gateway/> (FtpOutboundGateway) to perform any operation(s) on the Session<FTPFile> with the requestMessage context. It can be used for any non-standard or low-level FTP operation (or several of them); for example, it can be supplied as a functional interface (lambda) implementation, injected directly from an integration flow definition:

@Bean
@ServiceActivator(inputChannel = "ftpChannel")
public MessageHandler ftpOutboundGateway(SessionFactory<FTPFile> sessionFactory) {
    return new FtpOutboundGateway(sessionFactory,
         (session, requestMessage) -> session.list(requestMessage.getPayload()));
}

Another example might be to pre- or post- process the file data being sent/retrieved.

When using XML configuration, the <int-ftp:outbound-gateway/> provides a session-callback attribute to allow you to specify the MessageSessionCallback bean name.

[Note]Note

The session-callback is mutually exclusive with the command and expression attributes. When configuring with Java, different constructors are available in the FtpOutboundGateway class.

== GemFire Support

Spring Integration provides support for VMWare vFabric GemFire.

=== Introduction

VMWare vFabric GemFire (GemFire) is a distributed data management platform providing a key-value data grid along with advanced distributed system features such as event processing, continuous querying, and remote function execution. This guide assumes some familiarity with GemFire and its API.

Spring Integration provides support for GemFire by providing inbound adapters for entry and continuous query events, an outbound adapter to write entries to the cache, and MessageStore and MessageGroupStore implementations. Spring Integration leverages the Spring GemFire project (http://www.springsource.org/spring-gemfire), providing a thin wrapper over its components.

To configure the int-gfe namespace, include the following elements within the headers of your XML configuration file:

xmlns:int-gfe="http://www.springframework.org/schema/integration/gemfire"
xsi:schemaLocation="http://www.springframework.org/schema/integration/gemfire
	http://www.springframework.org/schema/integration/gemfire/spring-integration-gemfire.xsd"

=== Inbound Channel Adapter

The inbound-channel-adapter produces messages on a channel triggered by a GemFire EntryEvent. GemFire generates events whenever an entry is CREATED, UPDATED, DESTROYED, or INVALIDATED in the associated region. The inbound channel adapter allows you to filter on a subset of these events. For example, you may want to only produce messages in response to an entry being CREATED. In addition, the inbound channel adapter can evaluate a SpEL expression if, for example, you want your message payload to contain an event property such as the new entry value.

<gfe:cache/>
<gfe:replicated-region id="region"/>
<int-gfe:inbound-channel-adapter id="inputChannel" region="region"
    cache-events="CREATED" expression="newValue"/>

In the above configuration, we are creating a GemFire Cache and Region using Spring GemFire’s gfe namespace. The inbound-channel-adapter requires a reference to the GemFire region for which the adapter will listen for events. Optional attributes include cache-events, which can contain a comma-separated list of event types for which a message will be produced on the input channel. By default, CREATED and UPDATED are enabled. Note that this adapter conforms to Spring Integration conventions: if no channel attribute is provided, the channel will be created from the id attribute. This adapter also supports an error-channel. The GemFire EntryEvent is the #root object of the expression evaluation. Example:

expression="new foo.MyEvent(key, oldValue, newValue)"

If the expression attribute is not provided, the message payload will be the GemFire EntryEvent itself.

=== Continuous Query Inbound Channel Adapter

The cq-inbound-channel-adapter produces messages on a channel, triggered by a GemFire continuous query (CqEvent). Spring GemFire introduced continuous query support in release 1.1, including a ContinuousQueryListenerContainer which provides a nice abstraction over the GemFire native API. This adapter requires a reference to a ContinuousQueryListenerContainer; it creates a listener for a given query and executes the query. The continuous query acts as an event source that will fire whenever its result set changes state.

[Note]Note

GemFire queries are written in OQL and are scoped to the entire cache (not just one region). Additionally, continuous queries require a remote (i.e., running in a separate process or remote host) cache server. Please consult the GemFire documentation for more information on implementing continuous queries.

<gfe:client-cache id="client-cache" pool-name="client-pool"/>

<gfe:pool id="client-pool" subscription-enabled="true" >
    <!--configure server or locator here required to address the cache server -->
</gfe:pool>

<gfe:client-region id="test" cache-ref="client-cache" pool-name="client-pool"/>

<gfe:cq-listener-container id="queryListenerContainer" cache="client-cache"
    pool-name="client-pool"/>

<int-gfe:cq-inbound-channel-adapter id="inputChannel"
    cq-listener-container="queryListenerContainer"
    query="select * from /test"/>

In the above configuration, we are creating a GemFire client cache (recall that a remote cache server is required for this implementation, and its address is configured as a sub-element of the pool), a client region and a ContinuousQueryListenerContainer using Spring GemFire. The continuous query inbound channel adapter requires a cq-listener-container attribute which contains a reference to the ContinuousQueryListenerContainer. Optionally, it accepts an expression attribute which uses SpEL to transform the CqEvent or extract an individual property as needed. The cq-inbound-channel-adapter provides a query-events attribute, containing a comma-separated list of event types for which a message will be produced on the input channel. Available event types are CREATED, UPDATED, DESTROYED, REGION_DESTROYED and REGION_INVALIDATED; CREATED and UPDATED are enabled by default. Additional optional attributes include query-name (an optional query name), expression (which works as described in the above section), and durable (a boolean value indicating whether the query is durable; false by default). Note that this adapter conforms to Spring Integration conventions: if no channel attribute is provided, the channel will be created from the id attribute. This adapter also supports an error-channel.

=== Outbound Channel Adapter

The outbound-channel-adapter writes cache entries mapped from the message payload. In its simplest form, it expects a payload of type java.util.Map and puts the map entries into its configured region.

<int-gfe:outbound-channel-adapter id="cacheChannel" region="region"/>

Given the above configuration, an exception will be thrown if the payload is not a Map. Additionally, the outbound channel adapter can be configured to create a map of cache entries using SpEL expressions.

<int-gfe:outbound-channel-adapter id="cacheChannel" region="region">
    <int-gfe:cache-entries>
        <entry key="payload.toUpperCase()" value="payload.toLowerCase()"/>
        <entry key="'foo'" value="'bar'"/>
    </int-gfe:cache-entries>
</int-gfe:outbound-channel-adapter>

In the above configuration, the inner element cache-entries is semantically equivalent to Spring’s <map> element. The adapter interprets the key and value attributes as SpEL expressions with the message as the evaluation context. Note that this may contain arbitrary cache entries (not only those derived from the message) and that literal values must be enclosed in single quotes. In the above example, if the message sent to cacheChannel has a String payload with a value "Hello", two entries [HELLO:hello, foo:bar] will be written (created or updated) in the cache region. This adapter also supports the order attribute which may be useful if it is bound to a PublishSubscribeChannel.

=== Gemfire Message Store

As described in EIP, a Message Store allows you to persist Messages. This can be very useful when dealing with components that have a capability to buffer messages (QueueChannel, Aggregator, Resequencer, etc.) if reliability is a concern. In Spring Integration, the MessageStore strategy also provides the foundation for the Claim Check pattern (http://www.eaipatterns.com/StoreInLibrary.html), which is described in EIP as well.

Spring Integration’s Gemfire module provides the GemfireMessageStore which is an implementation of both the MessageStore strategy (mainly used by the QueueChannel and ClaimCheck patterns) and the MessageGroupStore strategy (mainly used by the Aggregator and Resequencer patterns).

<bean id="gemfireMessageStore" class="o.s.i.gemfire.store.GemfireMessageStore">
    <constructor-arg ref="myRegion"/>
</bean>

<gfe:cache/>

<gfe:replicated-region id="myRegion"/>


<int:channel id="somePersistentQueueChannel">
    <int:queue message-store="gemfireMessageStore"/>
</int:channel>

<int:aggregator input-channel="inputChannel" output-channel="outputChannel"
    message-store="gemfireMessageStore"/>

In the above example, the cache and region are configured using the spring-gemfire namespace (not to be confused with the spring-integration-gemfire namespace). Often it is desirable for the message store to be maintained in one or more remote cache servers in a client-server configuration (See the GemFire product documentation for more details). In this case, you configure a client cache, client region, and client pool and inject the region into the MessageStore. Here is an example:

<bean id="gemfireMessageStore"
    class="org.springframework.integration.gemfire.store.GemfireMessageStore">
    <constructor-arg ref="myRegion"/>
</bean>

<gfe:client-cache/>

<gfe:client-region id="myRegion" shortcut="PROXY" pool-name="messageStorePool"/>

<gfe:pool id="messageStorePool">
    <gfe:server host="localhost" port="40404" />
</gfe:pool>

Note the pool element is configured with the address of a cache server (a locator may be substituted here). The region is configured as a PROXY so that no data will be stored locally. The region’s id corresponds to a region with the same name configured in the cache server.

Starting with version 4.3.12, the GemfireMessageStore supports the key prefix option to allow distinguishing between instances of the store on the same Gemfire region.

=== Gemfire Lock Registry

Starting with version 4.0, the GemfireLockRegistry is available. Certain components (for example aggregator and resequencer) use a lock obtained from a LockRegistry instance to ensure that only one thread is manipulating a group at a time. The DefaultLockRegistry performs this function within a single component; you can now configure an external lock registry on these components. When used with a shared MessageGroupStore, the GemfireLockRegistry can be used to provide this functionality across multiple application instances, such that only one instance can manipulate the group at a time.

[Note]Note

One of the GemfireLockRegistry constructors requires a Region as an argument; it is used to obtain a Lock via the getDistributedLock() method. This operation requires GLOBAL scope for the Region. Another constructor requires Cache and the Region will be created with GLOBAL scope and with the name LockRegistry.
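
The following is a minimal configuration and usage sketch (the cache wiring and lock key are assumptions):

@Bean
public GemfireLockRegistry lockRegistry(Cache cache) {
    // uses the Cache-based constructor described in the note above;
    // the registry's Region is created with GLOBAL scope
    return new GemfireLockRegistry(cache);
}

Lock lock = lockRegistry.obtain("someGroupId");
lock.lock();
try {
    // only one thread, across all application instances, can work on "someGroupId" here
}
finally {
    lock.unlock();
}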

=== Gemfire Metadata Store

As of Spring Integration 4.0, a new Gemfire-based MetadataStore implementation is available. The GemfireMetadataStore can be used to maintain metadata state across application restarts. This MetadataStore implementation can be used with adapters such as the Twitter Inbound Channel Adapter and the Feed Inbound Channel Adapter.

In order to instruct these adapters to use the new GemfireMetadataStore, simply declare a Spring bean with the bean name metadataStore. The Twitter Inbound Channel Adapter and the Feed Inbound Channel Adapter will both automatically pick up and use the declared GemfireMetadataStore.
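
A minimal sketch of such a declaration, assuming a constructor that accepts the GemFire Cache (the cache wiring is also an assumption):

@Bean
public GemfireMetadataStore metadataStore(Cache cache) {
    // the bean name "metadataStore" allows the adapters mentioned above to pick it up automatically
    return new GemfireMetadataStore(cache);
}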

[Note]Note

The GemfireMetadataStore also implements ConcurrentMetadataStore, allowing it to be reliably shared across multiple application instances where only one instance will be allowed to store or modify a key’s value. These methods give various levels of concurrency guarantees based on the scope and data policy of the region. They are implemented in the peer cache and client/server cache but are disallowed in peer Regions having NORMAL or EMPTY data policies.

== HTTP Support

=== Introduction

The HTTP support allows for the execution of HTTP requests and the processing of inbound HTTP requests. Because interaction over HTTP is always synchronous, even if all that is returned is a 200 status code, the HTTP support consists of two gateway implementations: HttpInboundEndpoint and HttpRequestExecutingMessageHandler.

=== Http Inbound Components

To receive messages over HTTP, you need to use an HTTP Inbound Channel Adapter or Gateway. To support the HTTP Inbound Adapters, they need to be deployed within a servlet container such as Apache Tomcat or Jetty. The easiest way to do this is to use Spring’s HttpRequestHandlerServlet, by providing the following servlet definition in the web.xml file:

<servlet>
    <servlet-name>inboundGateway</servlet-name>
    <servlet-class>o.s.web.context.support.HttpRequestHandlerServlet</servlet-class>
</servlet>

Notice that the servlet name matches the bean name. For more information on using the HttpRequestHandlerServlet, see chapter Remoting and web services using Spring, which is part of the Spring Framework Reference documentation.

If you are running within a Spring MVC application, then the aforementioned explicit servlet definition is not necessary. In that case, the bean name for your gateway can be matched against the URL path just like a Spring MVC Controller bean. For more information, please see the chapter Web MVC framework, which is part of the Spring Framework Reference documentation.

[Tip]Tip

For a sample application and the corresponding configuration, please see the Spring Integration Samples repository. It contains the Http Sample application demonstrating Spring Integration’s HTTP support.

Below is an example bean definition for a simple HTTP inbound endpoint.

<bean id="httpInbound"
  class="org.springframework.integration.http.inbound.HttpRequestHandlingMessagingGateway">
  <property name="requestChannel" ref="httpRequestChannel" />
  <property name="replyChannel" ref="httpReplyChannel" />
</bean>

The HttpRequestHandlingMessagingGateway accepts a list of HttpMessageConverter instances or else relies on a default list. The converters allow customization of the mapping from HttpServletRequest to Message. The default converters encapsulate simple strategies which, for example, will create a String message for a POST request where the content type starts with "text"; see the Javadoc for full details. An additional flag (mergeWithDefaultConverters) can be set along with the list of custom HttpMessageConverter to add the default converters after the custom converters. By default this flag is set to false, meaning that the custom converters replace the default list.

The message conversion process uses the (optional) requestPayloadType property and the incoming Content-Type header. Starting with version 4.3, if a request has no content type header, application/octet-stream is assumed, as recommended by RFC 2616. Previously, the body of such messages was ignored.

Starting with Spring Integration 2.0, MultiPart File support is implemented. If the request has been wrapped as a MultipartHttpServletRequest, when using the default converters, that request will be converted to a Message payload that is a MultiValueMap containing values that may be byte arrays, Strings, or instances of Spring’s MultipartFile depending on the content type of the individual parts.

[Note]Note

The HTTP inbound Endpoint will locate a MultipartResolver in the context if one exists with the bean name "multipartResolver" (the same name expected by Spring’s DispatcherServlet). If it does in fact locate that bean, then the support for MultipartFiles will be enabled on the inbound request mapper. Otherwise, it will fail when trying to map a multipart-file request to a Spring Integration Message. For more on Spring’s support for MultipartResolver, refer to the Spring Reference Manual.

If you wish to proxy a multipart/form-data to another server, it may be better to keep it in raw form. To handle this situation, do not add the multipartResolver bean to the context; configure the endpoint to expect a byte[] request; customize the message converters to include a ByteArrayHttpMessageConverter, and disable the default multipart converter. You may need some other converter(s) for the replies:

<int-http:inbound-gateway
                  channel="receiveChannel"
                  path="/inboundAdapter.htm"
                  request-payload-type="byte[]"
                  message-converters="converters"
                  merge-with-default-converters="false"
                  supported-methods="POST" />

<util:list id="converters">
    <beans:bean class="org.springframework.http.converter.ByteArrayHttpMessageConverter" />
    <beans:bean class="org.springframework.http.converter.StringHttpMessageConverter" />
    <beans:bean class="org.springframework.http.converter.json.MappingJackson2HttpMessageConverter" />
</util:list>

In sending a response to the client there are a number of ways to customize the behavior of the gateway. By default the gateway will simply acknowledge that the request was received by sending a 200 status code back. It is possible to customize this response by providing a viewName to be resolved by the Spring MVC ViewResolver. If the gateway should expect a reply to the Message, setting the expectReply flag (constructor argument) will cause the gateway to wait for a reply Message before creating an HTTP response. Below is an example of a gateway configured to serve as a Spring MVC Controller with a view name. Because of the constructor arg value of TRUE, it waits for a reply. This also shows how to customize the HTTP methods accepted by the gateway, which are POST and GET by default.

<bean id="httpInbound"
  class="org.springframework.integration.http.inbound.HttpRequestHandlingController">
  <constructor-arg value="true" /> <!-- indicates that a reply is expected -->
  <property name="requestChannel" ref="httpRequestChannel" />
  <property name="replyChannel" ref="httpReplyChannel" />
  <property name="viewName" value="jsonView" />
  <property name="supportedMethodNames" >
    <list>
      <value>GET</value>
      <value>DELETE</value>
    </list>
  </property>
</bean>

The reply message will be available in the Model map. The key that is used for that map entry by default is reply, but this can be overridden by setting the replyKey property on the endpoint’s configuration.

=== Http Outbound Components

To configure the HttpRequestExecutingMessageHandler write a bean definition like this:

<bean id="httpOutbound"
  class="org.springframework.integration.http.outbound.HttpRequestExecutingMessageHandler">
  <constructor-arg value="http://localhost:8080/example" />
  <property name="outputChannel" ref="responseChannel" />
</bean>

This bean definition will execute HTTP requests by delegating to a RestTemplate. That template in turn delegates to a list of HttpMessageConverters to generate the HTTP request body from the Message payload. You can configure those converters as well as the ClientHttpRequestFactory instance to use:

<bean id="httpOutbound"
  class="org.springframework.integration.http.outbound.HttpRequestExecutingMessageHandler">
  <constructor-arg value="http://localhost:8080/example" />
  <property name="outputChannel" ref="responseChannel" />
  <property name="messageConverters" ref="messageConverterList" />
  <property name="requestFactory" ref="customRequestFactory" />
</bean>

By default the HTTP request will be generated using an instance of SimpleClientHttpRequestFactory which uses the JDK HttpURLConnection. Use of the Apache Commons HTTP Client is also supported through the provided CommonsClientHttpRequestFactory which can be injected as shown above.

[Note]Note

In the case of the Outbound Gateway, the reply message produced by the gateway will contain all Message Headers present in the request message.

Cookies

Basic cookie support is provided by the transfer-cookies attribute on the outbound gateway. When set to true (default is false), a Set-Cookie header received from the server in a response will be converted to Cookie in the reply message. This header will then be used on subsequent sends. This enables simple stateful interactions, such as:

...->logonGateway->...->doWorkGateway->...->logoffGateway->...

If transfer-cookies is false, any Set-Cookie header received will remain as Set-Cookie in the reply message, and will be dropped on subsequent sends.
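
When configuring the gateway with Java, the corresponding property can be set on the handler; a minimal sketch (the URL and channel names are assumptions):

@Bean
@ServiceActivator(inputChannel = "logonRequests")
public HttpRequestExecutingMessageHandler logonGateway() {
    HttpRequestExecutingMessageHandler handler =
            new HttpRequestExecutingMessageHandler("http://localhost:8080/logon");
    // equivalent to transfer-cookies="true": convert Set-Cookie from the response to a Cookie header
    handler.setTransferCookies(true);
    handler.setOutputChannelName("logonReplies");
    return handler;
}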

[Note]Note: Empty Response Bodies

HTTP is a request/response protocol. However the response may not have a body, just headers. In this case, the HttpRequestExecutingMessageHandler produces a reply Message with the payload being an org.springframework.http.ResponseEntity, regardless of any provided expected-response-type. According to the HTTP RFC Status Code Definitions, there are many statuses which identify that a response MUST NOT contain a message-body (e.g. 204 No Content). There are also cases where calls to the same URL might, or might not, return a response body; for example, the first request to an HTTP resource returns content, but the second does not (e.g. 304 Not Modified). In all cases, however, the http_statusCode message header is populated. This can be used in some routing logic after the Http Outbound Gateway. You could also use a <payload-type-router/> to route messages with a ResponseEntity to a different flow than that used for responses with a body.

[Note]Note: expected-response-type

Further to the note above regarding empty response bodies, if a response does contain a body, you must provide an appropriate expected-response-type attribute or, again, you will simply receive a ResponseEntity with no body. The expected-response-type must be compatible with the (configured or default) HttpMessageConverters and the Content-Type header in the response. Of course, this can be an abstract class, or even an interface (such as java.io.Serializable when using Java serialization and Content-Type: application/x-java-serialized-object).
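
For example, a handler configured with Java to expect a String body might look like the following sketch (the URL and channel names are assumptions):

@Bean
@ServiceActivator(inputChannel = "httpOutRequests")
public HttpRequestExecutingMessageHandler httpOutbound() {
    HttpRequestExecutingMessageHandler handler =
            new HttpRequestExecutingMessageHandler("http://localhost:8080/example");
    handler.setHttpMethod(HttpMethod.GET);
    // without this, a response body would be discarded and only a bodiless ResponseEntity returned
    handler.setExpectedResponseType(String.class);
    handler.setOutputChannelName("responseChannel");
    return handler;
}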

=== HTTP Namespace Support

==== Introduction

Spring Integration provides an http namespace and the corresponding schema definition. To include it in your configuration, simply provide the following namespace declaration in your application context configuration file:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns:int="http://www.springframework.org/schema/integration"
  xmlns:int-http="http://www.springframework.org/schema/integration/http"
  xsi:schemaLocation="
    http://www.springframework.org/schema/beans
    http://www.springframework.org/schema/beans/spring-beans.xsd
    http://www.springframework.org/schema/integration
    http://www.springframework.org/schema/integration/spring-integration.xsd
    http://www.springframework.org/schema/integration/http
    http://www.springframework.org/schema/integration/http/spring-integration-http.xsd">
    ...
</beans>

==== Inbound

The XML Namespace provides two components for handling HTTP Inbound requests. In order to process requests without returning a dedicated response, use the inbound-channel-adapter:

<int-http:inbound-channel-adapter id="httpChannelAdapter" channel="requests"
    supported-methods="PUT, DELETE"/>

To process requests that do expect a response, use an inbound-gateway:

<int-http:inbound-gateway id="inboundGateway"
    request-channel="requests"
    reply-channel="responses"/>

==== Request Mapping Support

[Note]Note

Spring Integration 3.0 improved the REST support by introducing the IntegrationRequestMappingHandlerMapping. The implementation relies on the enhanced REST support provided by Spring Framework 3.1 or higher.

The parsing of the HTTP Inbound Gateway or the HTTP Inbound Channel Adapter registers an integrationRequestMappingHandlerMapping bean of type IntegrationRequestMappingHandlerMapping, if one is not already registered. This particular implementation of the HandlerMapping delegates its logic to the RequestMappingInfoHandlerMapping. The implementation provides functionality similar to that provided by the org.springframework.web.bind.annotation.RequestMapping annotation in Spring MVC.

[Note]Note

For more information, please see Mapping Requests With @RequestMapping.

For this purpose, Spring Integration 3.0 introduces the <request-mapping> sub-element. This optional sub-element can be added to the <http:inbound-channel-adapter> and the <http:inbound-gateway>. It works in conjunction with the path and supported-methods attributes:

<inbound-gateway id="inboundController"
    request-channel="requests"
    reply-channel="responses"
    path="/foo/{fooId}"
    supported-methods="GET"
    view-name="foo"
    error-code="oops">
   <request-mapping headers="User-Agent"
     params="myParam=myValue"
     consumes="application/json"
     produces="!text/plain"/>
</inbound-gateway>

Based on this configuration, the namespace parser creates an instance of the IntegrationRequestMappingHandlerMapping (if none exists yet) and an HttpRequestHandlingController bean, and associates with it an instance of RequestMapping, which in turn is converted to the Spring MVC RequestMappingInfo.

The <request-mapping> sub-element provides the following attributes:

  • headers
  • params
  • consumes
  • produces

With the path and supported-methods attributes of the <http:inbound-channel-adapter> or the <http:inbound-gateway>, <request-mapping> attributes translate directly into the respective options provided by the org.springframework.web.bind.annotation.RequestMapping annotation in Spring MVC.

The <request-mapping> sub-element allows you to configure several Spring Integration HTTP Inbound Endpoints to the same path (or even the same supported-methods) and to provide different downstream message flows based on incoming HTTP requests.

Alternatively, you can also declare just one HTTP Inbound Endpoint and apply routing and filtering logic within the Spring Integration flow to achieve the same result. This allows you to get the Message into the flow as early as possible, e.g.:

<int-http:inbound-gateway request-channel="httpMethodRouter"
    supported-methods="GET,DELETE"
    path="/process/{entId}"
    payload-expression="#pathVariables.entId"/>

<int:router input-channel="httpMethodRouter" expression="headers.http_requestMethod">
    <int:mapping value="GET" channel="in1"/>
    <int:mapping value="DELETE" channel="in2"/>
</int:router>

<int:service-activator input-channel="in1" ref="service" method="getEntity"/>

<int:service-activator input-channel="in2" ref="service" method="delete"/>

For more information regarding Handler Mappings, please see: Handler Mappings.

==== Cross-Origin Resource Sharing (CORS) Support

Starting with version 4.2 the <http:inbound-channel-adapter> and <http:inbound-gateway> can be configured with a <cross-origin> sub-element. It represents the same options as Spring MVC’s @CrossOrigin for @Controller methods and allows the configuration of Cross-origin resource sharing (CORS) for Spring Integration HTTP endpoints:

  • origin - List of allowed origins. * means that all origins are allowed. These values are placed in the Access-Control-Allow-Origin header of both the pre-flight and actual responses. Default value is *.
  • allowed-headers - Indicates which request headers can be used during the actual request. * means that all headers asked by the client are allowed. This property controls the value of the pre-flight response’s Access-Control-Allow-Headers header. Default value is *.
  • exposed-headers - List of response headers that the user-agent will allow the client to access. This property controls the value of the actual response’s Access-Control-Expose-Headers header.
  • method - The HTTP request methods to allow: GET, POST, HEAD, OPTIONS, PUT, PATCH, DELETE, TRACE. Methods specified here override those in supported-methods.
  • allow-credentials - Set to true if the browser should include any cookies associated with the domain of the request, or false if it should not. An empty string "" means undefined. If true, the pre-flight response will include the header Access-Control-Allow-Credentials=true. Default value is true.
  • max-age - Controls the cache duration for pre-flight responses. Setting this to a reasonable value can reduce the number of pre-flight request/response interactions required by the browser. This property controls the value of the Access-Control-Max-Age header in the pre-flight response. A value of -1 means undefined. Default value is 1800 seconds, or 30 minutes.

The CORS Java Configuration is represented by the org.springframework.integration.http.inbound.CrossOrigin class, instances of which can be injected into HttpRequestHandlingEndpointSupport beans.
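
For example, a minimal Java-configuration sketch might look like the following. The channel name is illustrative, and the CrossOrigin setter names used here are assumed to mirror the XML attributes described above:

@Bean
public HttpRequestHandlingMessagingGateway corsGateway() {
    CrossOrigin crossOrigin = new CrossOrigin();
    crossOrigin.setOrigin("http://www.example.com"); // assumed setter mirroring the 'origin' attribute
    crossOrigin.setMaxAge(3600);                     // assumed setter mirroring the 'max-age' attribute
    HttpRequestHandlingMessagingGateway gateway = new HttpRequestHandlingMessagingGateway(true);
    gateway.setRequestChannelName("requests");
    gateway.setCrossOrigin(crossOrigin);
    return gateway;
}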

==== Response StatusCode

Starting with version 4.1 the <http:inbound-channel-adapter> can be configured with a status-code-expression to override the default 200 OK status. The expression must return an object which can be converted to an org.springframework.http.HttpStatus enum value. The evaluationContext has a BeanResolver but no variables, so the usage of this attribute is somewhat limited. An example might be to resolve, at runtime, some scoped Bean that returns a status code value but, most likely, it will be set to a fixed value such as status-code-expression="204" (No Content), or status-code-expression="T(org.springframework.http.HttpStatus).NO_CONTENT". By default, status-code-expression is null meaning that the normal 200 OK response status will be returned.

<http:inbound-channel-adapter id="inboundController"
       channel="requests" view-name="foo" error-code="oops"
       status-code-expression="T(org.springframework.http.HttpStatus).ACCEPTED">
   <request-mapping headers="BAR"/>
</http:inbound-channel-adapter>

The <http:inbound-gateway> resolves the status code from the http_statusCode header of the reply Message. Starting with version 4.2, the default response status code when no reply is received within the reply-timeout is 500 Internal Server Error. There are two ways to modify this behavior:

  • add a reply-timeout-status-code-expression - this has the same semantics as the status-code-expression on the inbound adapter.
  • Add an error-channel and return an appropriate message with an HTTP status code header, such as:
<int:chain input-channel="errors">
    <int:header-enricher>
        <int:header name="http_statusCode" value="504" />
    </int:header-enricher>
    <int:transformer expression="payload.failedMessage" />
</int:chain>

The payload of the ErrorMessage is a MessageTimeoutException; it must be transformed to something that can be converted by the gateway, such as a String; a good candidate is the exception’s message property, which is the value used when using the expression technique.

If the error flow times out after a main flow timeout, 500 Internal Server Error is returned, or the reply-timeout-status-code-expression is evaluated, if present.

[Note]Note

Previously, the default status code for a timeout was 200 OK; to restore that behavior, set reply-timeout-status-code-expression="200".

==== URI Template Variables and Expressions

By using the path attribute in conjunction with the payload-expression attribute, as well as the header sub-element, you have a high degree of flexibility for mapping inbound request data.

In the following example configuration, an Inbound Channel Adapter is configured to accept requests using the following URI: /first-name/{firstName}/last-name/{lastName}

Using the payload-expression attribute, the URI template variable {firstName} is mapped to be the Message payload, while the {lastName} URI template variable will map to the lname Message header.

<int-http:inbound-channel-adapter id="inboundAdapterWithExpressions"
    path="/first-name/{firstName}/last-name/{lastName}"
    channel="requests"
    payload-expression="#pathVariables.firstName">
    <int-http:header name="lname" expression="#pathVariables.lastName"/>
</int-http:inbound-channel-adapter>

For more information about URI template variables, please see the Spring Reference Manual: uri template patterns.

Since Spring Integration 3.0, in addition to the existing #pathVariables and #requestParams variables being available in payload and header expressions, other useful variables have been added.

The entire list of available expression variables:

  • #requestParams - the MultiValueMap from the ServletRequest parameterMap.
  • #pathVariables - the Map of URI Template placeholders and their values.
  • #matrixVariables - the Map of MultiValueMap according to the Spring MVC Specification. Note, #matrixVariables require Spring MVC 3.2 or higher.
  • #requestAttributes - the org.springframework.web.context.request.RequestAttributes associated with the current Request.
  • #requestHeaders - the org.springframework.http.HttpHeaders object from the current Request.
  • #cookies - the Map<String, Cookie> of javax.servlet.http.Cookie instances from the current Request.

Note, all these values (and others) can be accessed within expressions in the downstream message flow via the ThreadLocal org.springframework.web.context.request.RequestAttributes variable, if that message flow is single-threaded and lives within the request thread:

<int:transformer
	expression="T(org.springframework.web.context.request.RequestContextHolder).
	              requestAttributes.request.queryString"/>

==== Outbound

To configure the outbound gateway you can use the namespace support as well. The following code snippet shows the different configuration options for an outbound HTTP gateway. Most importantly, notice that the http-method and expected-response-type are provided. Those are two of the most commonly configured values. The default http-method is POST, and the default response type is null. With a null response type, the payload of the reply Message will contain the ResponseEntity as long as its HTTP status is a success (non-successful status codes will throw Exceptions). If you are expecting a different type, such as a String, then provide that fully-qualified class name as shown below. See also the note about empty response bodies in the section called “CompletableFuture”.

[Important]Important

Beginning with Spring Integration 2.1 the request-timeout attribute of the HTTP Outbound Gateway was renamed to reply-timeout to better reflect the intent.

<int-http:outbound-gateway id="example"
    request-channel="requests"
    url="http://localhost/test"
    http-method="POST"
    extract-request-payload="false"
    expected-response-type="java.lang.String"
    charset="UTF-8"
    request-factory="requestFactory"
    reply-timeout="1234"
    reply-channel="replies"/>
[Important]Important

Since Spring Integration 2.2, Java serialization over HTTP is no longer enabled by default. Previously, when setting the expected-response-type attribute to a Serializable object, the Accept header was not properly set up. Since Spring Integration 2.2, the SerializingHttpMessageConverter has now been updated to set the Accept header to application/x-java-serialized-object.

However, because this could cause incompatibility with existing applications, it was decided to no longer automatically add this converter to the HTTP endpoints. If you wish to use Java serialization, you will need to add the SerializingHttpMessageConverter to the appropriate endpoints, using the message-converters attribute when using XML configuration, or using the setMessageConverters() method. Alternatively, you may wish to consider using JSON instead, which is enabled simply by having Jackson on the classpath.
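
As a hedged sketch, the converter can also be registered programmatically on an HTTP outbound handler (the URL here is illustrative):

HttpRequestExecutingMessageHandler handler =
        new HttpRequestExecutingMessageHandler("http://localhost/test");
List<HttpMessageConverter<?>> converters = new ArrayList<>();
converters.add(new SerializingHttpMessageConverter()); // adds support for application/x-java-serialized-object
handler.setMessageConverters(converters);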

Beginning with Spring Integration 2.2 you can also determine the HTTP Method dynamically using SpEL and the http-method-expression attribute. Note that this attribute is obviously mutually exclusive with http-method. You can also use the expected-response-type-expression attribute instead of expected-response-type and provide any valid SpEL expression that determines the type of the response.

<int-http:outbound-gateway id="example"
    request-channel="requests"
    url="http://localhost/test"
    http-method-expression="headers.httpMethod"
    extract-request-payload="false"
    expected-response-type-expression="payload"
    charset="UTF-8"
    request-factory="requestFactory"
    reply-timeout="1234"
    reply-channel="replies"/>

If your outbound adapter is to be used in a unidirectional way, then you can use an outbound-channel-adapter instead. This means that a successful response simply completes without sending any Message to a reply channel. In the case of any non-successful response status code, it will throw an exception. The configuration looks very similar to the gateway:

<int-http:outbound-channel-adapter id="example"
    url="http://localhost/example"
    http-method="GET"
    channel="requests"
    charset="UTF-8"
    extract-payload="false"
    expected-response-type="java.lang.String"
    request-factory="someRequestFactory"
    order="3"
    auto-startup="false"/>
[Note]Note

To specify the URL, you can use either the url attribute or the url-expression attribute. The url is a simple string (with placeholders for URI variables, as described below); the url-expression is a SpEL expression, with the Message as the root object, enabling dynamic URLs. The url resulting from the expression evaluation can still have placeholders for URI variables.

In previous releases, some users used the placeholders to replace the entire URL with a URI variable. Changes in Spring 3.1 can cause some issues with escaped characters, such as ?. For this reason, it is recommended that if you wish to generate the URL entirely at runtime, you use the url-expression attribute.

==== Mapping URI Variables

If your URL contains URI variables, you can map them using the uri-variable sub-element. This sub-element is available for the Http Outbound Gateway and the Http Outbound Channel Adapter.

<int-http:outbound-gateway id="trafficGateway"
    url="http://local.yahooapis.com/trafficData?appid=YdnDemo&amp;zip={zipCode}"
    request-channel="trafficChannel"
    http-method="GET"
    expected-response-type="java.lang.String">
    <int-http:uri-variable name="zipCode" expression="payload.getZip()"/>
</int-http:outbound-gateway>

The uri-variable sub-element defines two attributes: name and expression. The name attribute identifies the name of the URI variable, while the expression attribute is used to set the actual value. Using the expression attribute, you can leverage the full power of the Spring Expression Language (SpEL) which gives you full dynamic access to the message payload and the message headers. For example, in the above configuration the getZip() method will be invoked on the payload object of the Message and the result of that method will be used as the value for the URI variable named zipCode.

Since Spring Integration 3.0, HTTP Outbound Endpoints support the uri-variables-expression attribute to specify an Expression which should be evaluated, resulting in a Map for all URI variable placeholders within the URL template. It provides a mechanism whereby different variable expressions can be used, based on the outbound message. This attribute is mutually exclusive with the <uri-variable/> sub-element:

<int-http:outbound-gateway
     url="http://foo.host/{foo}/bars/{bar}"
     request-channel="trafficChannel"
     http-method="GET"
     uri-variables-expression="@uriVariablesBean.populate(payload)"
     expected-response-type="java.lang.String"/>

where uriVariablesBean might be:

public class UriVariablesBean {
	private static final ExpressionParser EXPRESSION_PARSER = new SpelExpressionParser();

	public Map<String, ?> populate(Object payload) {
		Map<String, Object> variables = new HashMap<String, Object>();
		if (payload instanceof String) {
			variables.put("foo", "foo");
		}
		else {
			variables.put("foo", EXPRESSION_PARSER.parseExpression("headers.bar"));
		}
		return variables;
	}

}
[Note]Note

The uri-variables-expression must evaluate to a Map. The values of the Map must be instances of String or Expression. This Map is provided to an ExpressionEvalMap for further resolution of URI variable placeholders using those expressions in the context of the outbound Message.

[Important]Important

The uriVariablesExpression property provides a very powerful mechanism for evaluating URI variables. It is anticipated that simple expressions like the example above will be used. However, you could also configure something like "@uriVariablesBean.populate(#root)" with an expression in the returned map being variables.put("foo", EXPRESSION_PARSER.parseExpression(message.getHeaders().get("bar", String.class)));, where the expression is dynamically provided in the message header bar. Since the header may come from an untrusted source, the HTTP outbound endpoints use a SimpleEvaluationContext when evaluating these expressions, allowing only a subset of SpEL features to be used. If you trust your message sources and wish to use SpEL features beyond that subset, set the trustedSpel property of the outbound endpoint to true.

Scenarios where you need to supply a dynamic set of URI variables on a per-message basis can be handled with a custom url-expression and some utilities for building and encoding URL parameters:

url-expression="T(org.springframework.web.util.UriComponentsBuilder)
                           .fromHttpUrl('http://HOST:PORT/PATH')
                           .queryParams(payload)
                           .build()
                           .toUri()"

where queryParams() expects a MultiValueMap<String, String> as an argument, so a real set of URL query parameters can be built in advance, before performing the request.
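
For example, a minimal sketch of building such a payload upstream of the gateway (the parameter names are illustrative):

MultiValueMap<String, String> params = new LinkedMultiValueMap<>();
params.add("foo", "bar");
params.add("foo", "baz");        // repeated names become repeated query parameters
params.add("date", "2016-01-01");

// The map becomes the Message payload; the url-expression shown above passes it
// to UriComponentsBuilder.queryParams(..) when the request is performed.
Message<MultiValueMap<String, String>> message = MessageBuilder.withPayload(params).build();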

The whole queryString can also be presented as a URI variable:

<int-http:outbound-gateway id="proxyGateway" request-channel="testChannel"
              url="http://testServer/test?{queryString}">
    <int-http:uri-variable name="queryString" expression="'a=A&amp;b=B'"/>
</int-http:outbound-gateway>

In this case the URL encoding must be provided manually. For example, the org.apache.http.client.utils.URLEncodedUtils#format() method can be used for this purpose. The aforementioned, manually built MultiValueMap<String, String> can be converted to the List<NameValuePair> format() method argument using this Java Streams snippet:

List<NameValuePair> nameValuePairs =
    params.entrySet()
            .stream()
            .flatMap(e -> e
                    .getValue()
                    .stream()
                    .map(v -> new BasicNameValuePair(e.getKey(), v)))
            .collect(Collectors.toList());

==== Controlling URI Encoding

By default, the URL string is encoded (see UriComponentsBuilder) to the URI object before sending the request. In some scenarios with a non-standard URI (e.g. the RabbitMQ Rest API) it is undesirable to perform the encoding. The <http:outbound-gateway/> and <http:outbound-channel-adapter/> provide an encode-uri attribute. To disable encoding the URL, this attribute should be set to false (by default it is true). If you wish to partially encode some of the URL, this can be achieved using an expression within a <uri-variable/>:

<http:outbound-gateway url="http://somehost/%2f/fooApps?bar={param}" encode-uri="false">
          <http:uri-variable name="param"
            expression="T(org.apache.commons.httpclient.util.URIUtil)
                                             .encodeWithinQuery('Hello World!')"/>
</http:outbound-gateway>

=== Timeout Handling

In the context of HTTP components, there are two timing areas that have to be considered.

Timeouts when interacting with Spring Integration Channels

Timeouts when interacting with a remote HTTP server

First, the components interact with Message Channels, for which timeouts can be specified. For example, an HTTP Inbound Gateway will forward messages received from connected HTTP Clients to a Message Channel (Request Timeout) and consequently the HTTP Inbound Gateway will receive a reply Message from the Reply Channel (Reply Timeout) that will be used to generate the HTTP Response. Please see the figure below for an illustration.

Figure 8.1. How timeout settings apply to an HTTP Inbound Gateway

For outbound endpoints, the second thing to consider is timing while interacting with the remote server.

Figure 8.2. How timeout settings apply to an HTTP Outbound Gateway

You may want to configure the HTTP related timeout behavior, when making active HTTP requests using the HTTP Outbound Gateway or the HTTP Outbound Channel Adapter. In those instances, these two components use Spring’s RestTemplate support to execute HTTP requests.

In order to configure timeouts for the HTTP Outbound Gateway and the HTTP Outbound Channel Adapter, you can either reference a RestTemplate bean directly, using the rest-template attribute, or you can provide a reference to a ClientHttpRequestFactory bean using the request-factory attribute. Spring provides the following implementations of the ClientHttpRequestFactory interface:

SimpleClientHttpRequestFactory - Uses standard J2SE facilities for making HTTP Requests

HttpComponentsClientHttpRequestFactory - Uses Apache HttpComponents HttpClient (Since Spring 3.1)

CommonsClientHttpRequestFactory - Uses Jakarta Commons HttpClient (deprecated as of Spring 3.1)

If you don’t explicitly configure the request-factory or rest-template attribute respectively, then a default RestTemplate which uses a SimpleClientHttpRequestFactory will be instantiated.

[Note]Note

With some JVM implementations, the handling of timeouts using the URLConnection class may not be consistent.

For example, from the Java™ Platform, Standard Edition 6 API Specification on setConnectTimeout: "Some non-standard implementation of this method may ignore the specified timeout. To see the connect timeout set, please call getConnectTimeout()."

Please test your timeouts if you have specific needs. Consider using the HttpComponentsClientHttpRequestFactory which, in turn, uses Apache HttpComponents HttpClient instead.

[Important]Important

When using the Apache HttpComponents HttpClient with a Pooling Connection Manager, be aware that, by default, the connection manager will create no more than 2 concurrent connections per given route and no more than 20 connections in total. For many real-world applications these limits may prove too constraining. Refer to the Apache documentation (link above) for information about configuring this important component.
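
For example, a minimal sketch of a request factory backed by Apache HttpComponents with a customized pooling connection manager (the limits shown are illustrative):

PoolingHttpClientConnectionManager connectionManager = new PoolingHttpClientConnectionManager();
connectionManager.setMaxTotal(100);          // total concurrent connections across all routes
connectionManager.setDefaultMaxPerRoute(20); // concurrent connections per route

CloseableHttpClient httpClient = HttpClients.custom()
        .setConnectionManager(connectionManager)
        .build();

HttpComponentsClientHttpRequestFactory requestFactory =
        new HttpComponentsClientHttpRequestFactory(httpClient);
requestFactory.setConnectTimeout(5000);
requestFactory.setReadTimeout(5000);

Such a bean can then be referenced from the request-factory attribute, as in the example below.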

Here is an example of how to configure an HTTP Outbound Gateway using a SimpleClientHttpRequestFactory, configured with connect and read timeouts of 5 seconds each:

<int-http:outbound-gateway url="http://www.google.com/ig/api?weather={city}"
                           http-method="GET"
                           expected-response-type="java.lang.String"
                           request-factory="requestFactory"
                           request-channel="requestChannel"
                           reply-channel="replyChannel">
    <int-http:uri-variable name="city" expression="payload"/>
</int-http:outbound-gateway>

<bean id="requestFactory"
      class="org.springframework.http.client.SimpleClientHttpRequestFactory">
    <property name="connectTimeout" value="5000"/>
    <property name="readTimeout"    value="5000"/>
</bean>

HTTP Outbound Gateway

For the HTTP Outbound Gateway, the XML Schema defines only the reply-timeout. The reply-timeout maps to the sendTimeout property of the org.springframework.integration.http.outbound.HttpRequestExecutingMessageHandler class. More precisely, the property is set on the extended AbstractReplyProducingMessageHandler class, which ultimately sets the property on the MessagingTemplate.

The value of the sendTimeout property defaults to "-1" and will be applied to the connected MessageChannel. This means that, depending on the implementation, the Message Channel’s send method may block indefinitely. Furthermore, the sendTimeout property is only used when the actual MessageChannel implementation has a blocking send (such as a bounded QueueChannel that is full).

HTTP Inbound Gateway

For the HTTP Inbound Gateway, the XML Schema defines the request-timeout attribute, which will be used to set the requestTimeout property on the HttpRequestHandlingMessagingGateway class (on the extended MessagingGatewaySupport class). Secondly, the reply-timeout attribute exists and maps to the replyTimeout property on the same class.

The default for both timeout properties is "1000ms". Ultimately, the request-timeout property will be used to set the sendTimeout on the used MessagingTemplate instance. The replyTimeout property on the other hand, will be used to set the receiveTimeout property on the used MessagingTemplate instance.

[Tip]Tip

In order to simulate connection timeouts, connect to a non-routable IP address, for example 10.255.255.10.

=== HTTP Proxy configuration

If you are behind a proxy and need to configure proxy settings for HTTP outbound adapters and/or gateways, you can apply one of two approaches. In most cases, you can rely on the standard Java System Properties that control the proxy settings. Otherwise, you can explicitly configure a Spring bean for the HTTP client request factory instance.

Standard Java Proxy configuration

There are 3 System Properties you can set to configure the proxy settings that will be used by the HTTP protocol handler:

  • http.proxyHost - the host name of the proxy server.
  • http.proxyPort - the port number, the default value being 80.
  • http.nonProxyHosts - a list of hosts that should be reached directly, bypassing the proxy. This is a list of patterns separated by |. The patterns may start or end with a * for wildcards. Any host matching one of these patterns will be reached through a direct connection instead of through a proxy.

And for HTTPS:

  • https.proxyHost - the host name of the proxy server.
  • https.proxyPort - the port number, the default value being 443.

For more information please refer to this document: http://download.oracle.com/javase/6/docs/technotes/guides/net/proxies.html
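
For example, these properties can be supplied on the command line (e.g. -Dhttp.proxyHost=...) or set programmatically before any outbound request is made; the host names below are illustrative:

System.setProperty("http.proxyHost", "proxy.example.com");
System.setProperty("http.proxyPort", "8080");
System.setProperty("http.nonProxyHosts", "localhost|*.internal.example.com");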

Spring’s SimpleClientHttpRequestFactory

If for any reason, you need more explicit control over the proxy configuration, you can use Spring’s SimpleClientHttpRequestFactory and configure its proxy property as such:

<bean id="requestFactory"
    class="org.springframework.http.client.SimpleClientHttpRequestFactory">
    <property name="proxy">
        <bean id="proxy" class="java.net.Proxy">
            <constructor-arg>
                <util:constant static-field="java.net.Proxy.Type.HTTP"/>
            </constructor-arg>
            <constructor-arg>
                <bean class="java.net.InetSocketAddress">
                    <constructor-arg value="123.0.0.1"/>
                    <constructor-arg value="8080"/>
                </bean>
            </constructor-arg>
        </bean>
    </property>
</bean>
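
The equivalent Java configuration is a short sketch (using the same illustrative proxy address as above):

@Bean
public SimpleClientHttpRequestFactory requestFactory() {
    SimpleClientHttpRequestFactory requestFactory = new SimpleClientHttpRequestFactory();
    requestFactory.setProxy(new Proxy(Proxy.Type.HTTP, new InetSocketAddress("123.0.0.1", 8080)));
    return requestFactory;
}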

=== HTTP Header Mappings

Spring Integration provides support for Http Header mapping for both HTTP Request and HTTP Responses.

By default, all standard HTTP Headers, as defined at http://en.wikipedia.org/wiki/List_of_HTTP_header_fields, will be mapped from the message to HTTP request/response headers without further configuration. However, if you do need further customization you may provide additional configuration via the convenient namespace support. You can provide a comma-separated list of header names, and you can also include simple patterns with the * character acting as a wildcard. If you do provide such values, it will override the default behavior. Basically, it assumes you are in complete control at that point. However, if you do want to include all of the standard HTTP headers, you can use the shortcut patterns: HTTP_REQUEST_HEADERS and HTTP_RESPONSE_HEADERS. Here are some examples:

<int-http:outbound-gateway id="httpGateway"
    url="http://localhost/test2"
    mapped-request-headers="foo, bar"
    mapped-response-headers="X-*, HTTP_RESPONSE_HEADERS"
    channel="someChannel"/>

<int-http:outbound-channel-adapter id="httpAdapter"
    url="http://localhost/test2"
    mapped-request-headers="foo, bar, HTTP_REQUEST_HEADERS"
    channel="someChannel"/>

The adapters and gateways will use the DefaultHttpHeaderMapper which now provides two static factory methods for "inbound" and "outbound" adapters so that the proper direction can be applied (mapping HTTP requests/responses IN/OUT as appropriate).

If further customization is required you can also configure a DefaultHttpHeaderMapper independently and inject it into the adapter via the header-mapper attribute.

<int-http:outbound-gateway id="httpGateway"
    url="http://localhost/test2"
    header-mapper="headerMapper"
    channel="someChannel"/>

<bean id="headerMapper" class="o.s.i.http.support.DefaultHttpHeaderMapper">
    <property name="inboundHeaderNames" value="foo*, *bar, baz"/>
    <property name="outboundHeaderNames" value="a*b, d"/>
</bean>
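
A Java-configuration sketch of the same header mapper, using the static factory method mentioned above, might look like this:

@Bean
public DefaultHttpHeaderMapper headerMapper() {
    DefaultHttpHeaderMapper headerMapper = DefaultHttpHeaderMapper.outboundMapper();
    headerMapper.setInboundHeaderNames(new String[] {"foo*", "*bar", "baz"});
    headerMapper.setOutboundHeaderNames(new String[] {"a*b", "d"});
    return headerMapper;
}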

Of course, you can even implement the HeaderMapper strategy interface directly and provide a reference to that if you need to do something other than what the DefaultHttpHeaderMapper supports.

=== Integration Graph Controller

Starting with version 4.3, the HTTP module provides an @EnableIntegrationGraphController @Configuration class annotation and <int-http:graph-controller/> XML element to expose the IntegrationGraphServer as a REST service. See the section called “CompletableFuture” for more information.

=== HTTP Samples

==== Multipart HTTP request - RestTemplate (client) and Http Inbound Gateway (server)

This example demonstrates how simple it is to send a Multipart HTTP request via Spring’s RestTemplate and receive it with a Spring Integration HTTP Inbound Adapter. All we are doing is creating a MultiValueMap and populating it with multi-part data. The RestTemplate will take care of the rest (no pun intended) by converting it to a MultipartHttpServletRequest. This particular client will send a multipart HTTP Request which contains the name of the company as well as an image file with the company logo.

RestTemplate template = new RestTemplate();
String uri = "http://localhost:8080/multipart-http/inboundAdapter.htm";
Resource s2logo = 
   new ClassPathResource("org/springframework/samples/multipart/spring09_logo.png");
MultiValueMap map = new LinkedMultiValueMap();
map.add("company""SpringSource");
map.add("company-logo", s2logo);
HttpHeaders headers = new HttpHeaders();
headers.setContentType(new MediaType("multipart", "form-data"));
HttpEntity request = new HttpEntity(map, headers);
ResponseEntity<?> httpResponse = template.exchange(uri, HttpMethod.POST, request, null);

That is all for the client.

On the server side we have the following configuration:

<int-http:inbound-channel-adapter id="httpInboundAdapter"
    channel="receiveChannel"
    path="/inboundAdapter.htm"
    supported-methods="GET, POST"/>

<int:channel id="receiveChannel"/>

<int:service-activator input-channel="receiveChannel">
    <bean class="org.springframework.integration.samples.multipart.MultipartReceiver"/>
</int:service-activator>

<bean id="multipartResolver"
    class="org.springframework.web.multipart.commons.CommonsMultipartResolver"/>

The httpInboundAdapter will receive the request and convert it to a Message with a payload that is a LinkedMultiValueMap. We then parse that in the multipartReceiver service-activator:

public void receive(LinkedMultiValueMap<String, Object> multipartRequest){
    System.out.println("### Successfully received multipart request ###");
    for (String elementName : multipartRequest.keySet()) {
        if (elementName.equals("company")){
            System.out.println("\t" + elementName + " - " +
                ((String[]) multipartRequest.getFirst("company"))[0]);
        }
        else if (elementName.equals("company-logo")){
            System.out.println("\t" + elementName + " - as UploadedMultipartFile: " +
                ((UploadedMultipartFile) multipartRequest
                    .getFirst("company-logo")).getOriginalFilename());
        }
    }
}

You should see the following output:

### Successfully received multipart request ###
   company - SpringSource
   company-logo - as UploadedMultipartFile: spring09_logo.png

== JDBC Support

Spring Integration provides Channel Adapters for receiving and sending messages via database queries. Through those adapters Spring Integration supports not only plain JDBC SQL Queries, but also Stored Procedure and Stored Function calls.

The following JDBC components are available by default:

  • Inbound Channel Adapter
  • Outbound Channel Adapter
  • Outbound Gateway
  • Stored Procedure components

Furthermore, the Spring Integration JDBC Module also provides a JDBC Message Store.

=== Inbound Channel Adapter

The main function of an inbound Channel Adapter is to execute a SQL SELECT query and turn the result set into a message. The message payload is the whole result set, expressed as a List, and the types of the items in the list depend on the row-mapping strategy that is used. The default strategy is a generic mapper that just returns a Map for each row in the query result. Optionally, this can be changed by adding a reference to a RowMapper instance (see the Spring JDBC documentation for more detailed information about row mapping).

[Note]Note

If you want to convert rows in the SELECT query result to individual messages you can use a downstream splitter.

The inbound adapter also requires a reference to either a JdbcTemplate instance or a DataSource.

As well as the SELECT statement to generate the messages, the adapter can also have an UPDATE statement that is used to mark the records as processed so that they don’t show up in the next poll. The update can be parameterized by the list of ids from the original select. This is done through a naming convention by default (a column in the input result set called "id" is translated into a list in the parameter map for the update called "id"). The following example defines an inbound Channel Adapter with an update query and a DataSource reference.

<int-jdbc:inbound-channel-adapter query="select * from item where status=2"
    channel="target" data-source="dataSource"
    update="update item set status=10 where id in (:id)" />
[Note]Note

The parameters in the update query are specified with a colon (:) prefix to the name of a parameter (which in this case is an expression to be applied to each of the rows in the polled result set). This is a standard feature of the named parameter JDBC support in Spring JDBC combined with a convention (projection onto the polled result list) adopted in Spring Integration. The underlying Spring JDBC features limit the available expressions (e.g. most special characters other than period are disallowed), but since the target is usually a list of or an individual object addressable by simple bean paths this isn’t unduly restrictive.

To change the parameter generation strategy you can inject a SqlParameterSourceFactory into the adapter to override the default behavior (the adapter has a sql-parameter-source-factory attribute). Spring Integration provides an ExpressionEvaluatingSqlParameterSourceFactory which will create a SpEL-based parameter source, with the results of the query as the #root object. (If update-per-row is true, the root object is the row). If the same parameter name appears multiple times in the update query, it is evaluated only once, and its result is cached.

You can also use a parameter source for the select query. In this case, since there is no "result" object to evaluate against, a single parameter source is used each time (rather than using a parameter source factory). Starting with version 4.0, you can use Spring to create a SpEL based parameter source as follows:

<int-jdbc:inbound-channel-adapter query="select * from item where status=:status"
	channel="target" data-source="dataSource"
	select-sql-parameter-source="parameterSource" />

<bean id="parameterSource" factory-bean="parameterSourceFactory"
			factory-method="createParameterSourceNoCache">
	<constructor-arg value="" />
</bean>

<bean id="parameterSourceFactory"
		class="o.s.integration.jdbc.ExpressionEvaluatingSqlParameterSourceFactory">
	<property name="parameterExpressions">
		<map>
			<entry key="status" value="@statusBean.which()" />
		</map>
	</property>
</bean>

<bean id="statusBean" class="foo.StatusDetermination" />

The value in each parameter expression can be any valid SpEL expression. The #root object for the expression evaluation is the constructor argument defined on the parameterSource bean. It is static for all evaluations (in this case, an empty String).

[Important]Important

Use the createParameterSourceNoCache factory method; otherwise the parameter source will cache the result of the evaluation. Also note that, because caching is disabled, if the same parameter name appears in the select query multiple times, it will be re-evaluated for each occurrence.

==== Polling and Transactions

The inbound adapter accepts a regular Spring Integration poller as a sub element, so for instance the frequency of the polling can be controlled. A very important feature of the poller for JDBC usage is the option to wrap the poll operation in a transaction, for example:

<int-jdbc:inbound-channel-adapter query="..."
        channel="target" data-source="dataSource" update="...">
    <int:poller fixed-rate="1000">
        <int:transactional/>
    </int:poller>
</int-jdbc:inbound-channel-adapter>
[Note]Note

If a poller is not explicitly specified, a default value will be used (and as per normal with Spring Integration can be defined as a top level bean).

In this example the database is polled every 1000 milliseconds, and the update and select queries are both executed in the same transaction. The transaction manager configuration is not shown, but as long as it is aware of the data source then the poll is transactional. A common use case is for the downstream channels to be direct channels (the default), so that the endpoints are invoked in the same thread, and hence the same transaction. Then if any of them fail, the transaction rolls back and the input data is reverted to its original state.

==== Max-rows-per-poll versus Max-messages-per-poll

The JDBC Inbound Channel Adapter defines an attribute max-rows-per-poll. When you specify the adapter’s Poller, you can also define a property called max-messages-per-poll. While these two attributes look similar, their meaning is quite different.

max-messages-per-poll specifies the number of times the query is executed per polling interval, whereas max-rows-per-poll specifies the number of rows returned for each execution.

Under normal circumstances, you would likely not want to set the Poller’s max-messages-per-poll property when using the JDBC Inbound Channel Adapter. Its default value is 1, which means that the JDBC Inbound Channel Adapter's receive() method is executed exactly once for each poll interval.

Setting the max-messages-per-poll attribute to a larger value means that the query is executed that many times back to back. For more information regarding the max-messages-per-poll attribute, please see Section 4.3.1, “Configuring An Inbound Channel Adapter”.

In contrast, the max-rows-per-poll attribute, if greater than 0, specifies the maximum number of rows that will be used from the query result set, per execution of the receive() method. If the attribute is set to 0, then all rows will be included in the resulting message. If not explicitly set, the attribute defaults to 0.
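
As a hedged sketch, the same adapter can be defined in Java (the poller wiring is omitted here); the setMaxRowsPerPoll() setter is assumed to correspond to the max-rows-per-poll attribute:

JdbcPollingChannelAdapter adapter = new JdbcPollingChannelAdapter(dataSource,
        "select * from item where status=2");
adapter.setUpdateSql("update item set status=10 where id in (:id)");
adapter.setMaxRowsPerPoll(100); // at most 100 rows are used per receive() execution; 0 means no limit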

=== Outbound Channel Adapter

The outbound Channel Adapter is the inverse of the inbound: its role is to handle a message and use it to execute a SQL query. The message payload and headers are available by default as input parameters to the query, for instance:

<int-jdbc:outbound-channel-adapter
    query="insert into foos (id, status, name) values (:headers[id], 0, :payload[foo])"
    data-source="dataSource"
    channel="input"/>

In the example above, messages arriving on the channel labelled input have a payload of a map with key foo, so the [] operator dereferences that value from the map. The headers are also accessed as a map.

[Note]Note

The parameters in the query above are bean property expressions on the incoming message (not Spring EL expressions). This behavior is part of the SqlParameterSource which is the default source created by the outbound adapter. Other behavior is possible in the adapter, and requires the user to inject a different SqlParameterSourceFactory.

The outbound adapter requires a reference to either a DataSource or a JdbcTemplate. It can also have a SqlParameterSourceFactory injected to control the binding of each incoming message to a query.

If the input channel is a direct channel, then the outbound adapter runs its query in the same thread, and therefore the same transaction (if there is one) as the sender of the message.

Passing Parameters using SpEL Expressions

A common requirement for most JDBC Channel Adapters is to pass parameters as part of SQL queries or Stored Procedures/Functions. As mentioned above, these parameters are by default bean property expressions, not SpEL expressions. However, if you need to pass SpEL expressions as parameters, you must inject a SqlParameterSourceFactory explicitly.

The following example uses an ExpressionEvaluatingSqlParameterSourceFactory to achieve that requirement.

<jdbc:outbound-channel-adapter data-source="dataSource" channel="input"
    query="insert into MESSAGES (MESSAGE_ID,PAYLOAD,CREATED_DATE)     \
    values (:id, :payload, :createdDate)"
    sql-parameter-source-factory="spelSource"/>

<bean id="spelSource"
      class="o.s.integration.jdbc.ExpressionEvaluatingSqlParameterSourceFactory">
    <property name="parameterExpressions">
        <map>
            <entry key="id"          value="headers['id'].toString()"/>
            <entry key="createdDate" value="new java.util.Date()"/>
            <entry key="payload"     value="payload"/>
        </map>
    </property>
</bean>

For further information, please also see the section called “CompletableFuture”

PreparedStatement Callback

There are some cases where the flexibility and loose coupling of a SqlParameterSourceFactory is not enough for the target PreparedStatement, or where we need to do some low-level JDBC work. The Spring JDBC module provides APIs to configure the execution environment (e.g. ConnectionCallback or PreparedStatementCreator) and to manipulate parameter values (e.g. SqlParameterSource), as well as APIs for low-level operations, for example StatementCallback.

Starting with Spring Integration 4.2, the MessagePreparedStatementSetter is available to allow the specification of parameters on the PreparedStatement manually, in the requestMessage context. This class plays exactly the same role as PreparedStatementSetter in the standard Spring JDBC API. Actually it is invoked directly from an inline PreparedStatementSetter implementation, when the JdbcMessageHandler invokes execute on the JdbcTemplate.

This functional interface option is mutually exclusive with sqlParameterSourceFactory and can be used as a more powerful alternative to populate parameters of the PreparedStatement from the requestMessage. For example, it is useful when we need to store File data in a database BLOB column in a streaming manner:

@Bean
@ServiceActivator(inputChannel = "storeFileChannel")
public MessageHandler jdbcMessageHandler(DataSource dataSource) {
    JdbcMessageHandler jdbcMessageHandler = new JdbcMessageHandler(dataSource,
            "INSERT INTO imagedb (image_name, content, description) VALUES (?, ?, ?)");
    jdbcMessageHandler.setPreparedStatementSetter((ps, m) -> {
        ps.setString(1, m.getHeaders().get(FileHeaders.FILENAME, String.class));
        try (FileInputStream inputStream = new FileInputStream((File) m.getPayload())) {
            ps.setBlob(2, inputStream);
        }
        catch (Exception e) {
            throw new MessageHandlingException(m, e);
        }
        ps.setClob(3, new StringReader(m.getHeaders().get("description", String.class)));
    });
    return jdbcMessageHandler;
}

From the XML configuration perspective, the prepared-statement-setter attribute is available on the <int-jdbc:outbound-channel-adapter> component, to specify a MessagePreparedStatementSetter bean reference.

=== Outbound Gateway

The outbound Gateway is like a combination of the outbound and inbound adapters: its role is to handle a message and use it to execute a SQL query and then respond with the result sending it to a reply channel. The message payload and headers are available by default as input parameters to the query, for instance:

<int-jdbc:outbound-gateway
    update="insert into foos (id, status, name) values (:headers[id], 0, :payload[foo])"
    request-channel="input" reply-channel="output" data-source="dataSource" />

The result of the above would be to insert a record into the "foos" table and return a message to the output channel indicating the number of rows affected (the payload is a map: {UPDATED=1}).

If the update query is an insert with auto-generated keys, the reply message can be populated with the generated keys by adding keys-generated="true" to the above example (this is not the default because it is not supported by some database platforms). For example:

<int-jdbc:outbound-gateway
    update="insert into foos (status, name) values (0, :payload[foo])"
    request-channel="input" reply-channel="output" data-source="dataSource"
    keys-generated="true"/>

Instead of the update count or the generated keys, you can also provide a select query to execute and generate a reply message from the result (like the inbound adapter), e.g:

<int-jdbc:outbound-gateway
    update="insert into foos (id, status, name) values (:headers[id], 0, :payload[foo])"
    query="select * from foos where id=:headers[$id]"
    request-channel="input" reply-channel="output" data-source="dataSource"/>

Since Spring Integration 2.2 the update SQL query is no longer mandatory. You can now solely provide a select query, using either the query attribute or the query sub-element. This is extremely useful if you need to actively retrieve data using e.g. a generic Gateway or a Payload Enricher. The reply message is then generated from the result, like the inbound adapter, and passed to the reply channel.

<int-jdbc:outbound-gateway
    query="select * from foos where id=:headers[id]"
    request-channel="input"
    reply-channel="output"
    data-source="dataSource"/>

As with the channel adapters, there is also the option to provide SqlParameterSourceFactory instances for request and reply. The default is the same as for the outbound adapter, so the request message is available as the root of an expression. If keys-generated="true" then the root of the expression is the generated keys (a map if there is only one or a list of maps if multi-valued).

The outbound gateway requires a reference to either a DataSource or a JdbcTemplate. It can also have a SqlParameterSourceFactory injected to control the binding of the incoming message to the query.

Starting with version 4.2, the request-prepared-statement-setter attribute is available on the <int-jdbc:outbound-gateway> as an alternative to request-sql-parameter-source-factory. It allows you to specify a MessagePreparedStatementSetter bean reference, which implements more sophisticated PreparedStatement preparation before its execution.

See the section called “CompletableFuture” for more information about MessagePreparedStatementSetter.

=== JDBC Message Store

Spring Integration provides 2 JDBC specific Message Store implementations. The first one, is the JdbcMessageStore which is suitable to be used in conjunction with Aggregators and the Claim-Check pattern. While it can be used for backing Message Channels as well, you may want to consider using the JdbcChannelMessageStore implementation instead, as it provides a more targeted and scalable implementation.

==== The Generic JDBC Message Store

The JDBC module provides an implementation of the Spring Integration MessageStore (important in the Claim Check pattern) and MessageGroupStore (important in stateful patterns like Aggregator) backed by a database. Both interfaces are implemented by the JdbcMessageStore, and there is also support for configuring store instances in XML. For example:

<int-jdbc:message-store id="messageStore" data-source="dataSource"/>

A JdbcTemplate can be specified instead of a DataSource.

Other optional attributes are shown in the next example:

<int-jdbc:message-store id="messageStore" data-source="dataSource"
    lob-handler="lobHandler" table-prefix="MY_INT_"/>

Here we have specified a LobHandler for dealing with messages as large objects (e.g. often necessary if using Oracle) and a prefix for the table names in the queries generated by the store. The table name prefix defaults to INT_.

[Note]Note

If you plan on using MySQL, please use MySQL version 5.6.4 or higher, if possible. Prior versions do not support fractional seconds for temporal data types. Because of that, messages may not arrive in the precise FIFO order when polling from such a MySQL Message Store.

Therefore, starting with Spring Integration 3.0, we provide an additional set of DDL scripts for MySQL version 5.6.4 or higher:

  • schema-drop-mysql-5_6_4.sql
  • schema-mysql-5_6_4.sql

For more information, please see: Fractional Seconds in Time Values.

Also important, please ensure that you use an up-to-date version of the JDBC driver for MySQL (Connector/J), e.g. version 5.1.24 or higher.

==== Backing Message Channels

If you intend to back Message Channels using JDBC, it is recommended to use the provided JdbcChannelMessageStore implementation instead. It can only be used in conjunction with Message Channels.

Supported Databases

The JdbcChannelMessageStore uses database-specific SQL queries to retrieve messages from the database. Therefore, you must set the channelMessageStoreQueryProvider property on the JdbcChannelMessageStore. This channelMessageStoreQueryProvider provides the SQL queries, and Spring Integration provides support for the following relational databases:

  • PostgreSQL
  • HSQLDB
  • MySQL
  • Oracle
  • Derby
  • H2

If your database is not listed, you can easily extend the AbstractChannelMessageStoreQueryProvider class and provide your own custom queries.

Since version 4.0, the MESSAGE_SEQUENCE column has been added to the table to ensure first-in-first-out (FIFO) queueing even when messages are stored in the same millisecond.

[Important]Important

Generally it is not recommended to use a relational database for the purpose of queuing. Instead, if possible, consider using either JMS or AMQP backed channels. For further reference please see the following resources:

Concurrent Polling

When polling a Message Channel, you have the option to configure the associated Poller with a TaskExecutor reference.

[Important]Important

Keep in mind, though, that if you use a JDBC backed Message Channel and you are planning on polling the channel, and consequently the message store, transactionally with multiple threads, you should ensure that you use a relational database that supports Multiversion Concurrency Control (MVCC). Otherwise, locking may be an issue and performance may not be as expected when using multiple threads. For example, Apache Derby is problematic in that regard.

To achieve better JDBC queue throughput, and avoid issues when different threads may poll the same Message from the queue, it is important to set the usingIdCache property of JdbcChannelMessageStore to true when using databases that do not support MVCC:

<bean id="queryProvider"
    class="o.s.i.jdbc.store.channel.PostgresChannelMessageStoreQueryProvider"/>

<int:transaction-synchronization-factory id="syncFactory">
    <int:after-commit expression="@store.removeFromIdCache(headers.id.toString())" />
    <int:after-rollback expression="@store.removeFromIdCache(headers.id.toString())"/>
</int:transaction-synchronization-factory>

<task:executor id="pool" pool-size="10"
    queue-capacity="10" rejection-policy="CALLER_RUNS" />

<bean id="store" class="o.s.i.jdbc.store.JdbcChannelMessageStore">
    <property name="dataSource" ref="dataSource"/>
    <property name="channelMessageStoreQueryProvider" ref="queryProvider"/>
    <property name="region" value="TX_TIMEOUT"/>
    <property name="usingIdCache" value="true"/>
</bean>

<int:channel id="inputChannel">
    <int:queue message-store="store"/>
</int:channel>

<int:bridge input-channel="inputChannel" output-channel="outputChannel">
    <int:poller fixed-delay="500" receive-timeout="500"
        max-messages-per-poll="1" task-executor="pool">
        <int:transactional propagation="REQUIRED" synchronization-factory="syncFactory"
        isolation="READ_COMMITTED" transaction-manager="transactionManager" />
    </int:poller>
</int:bridge>

<int:channel id="outputChannel" />

Priority Channel

Starting with version 4.0, the JdbcChannelMessageStore implements PriorityCapableChannelMessageStore and provides the priorityEnabled option, allowing it to be used as a message-store reference for priority-queue elements. For this purpose, the INT_CHANNEL_MESSAGE table has a MESSAGE_PRIORITY column to store the value of the PRIORITY Message header. In addition, a new MESSAGE_SEQUENCE column is also provided to achieve a robust first-in-first-out (FIFO) polling mechanism, even when multiple messages are stored with the same priority in the same millisecond. Messages are polled (selected) from the database with order by MESSAGE_PRIORITY DESC NULLS LAST, CREATED_DATE, MESSAGE_SEQUENCE.

[Note]Note

It’s not recommended to use the same JdbcChannelMessageStore bean for priority and non-priority queue channels, because the priorityEnabled option applies to the entire store and proper FIFO queue semantics will not be retained for the queue channel. However, the same INT_CHANNEL_MESSAGE table, and even region, can be used for both JdbcChannelMessageStore types. To configure that scenario, simply extend one message store bean from the other:

<bean id="channelStore" class="o.s.i.jdbc.store.JdbcChannelMessageStore">
    <property name="dataSource" ref="dataSource"/>
    <property name="channelMessageStoreQueryProvider" ref="queryProvider"/>
</bean>

<int:channel id="queueChannel">
    <int:queue message-store="channelStore"/>
</int:channel>

<bean id="priorityStore" parent="channelStore">
    <property name="priorityEnabled" value="true"/>
</bean>

<int:channel id="priorityChannel">
    <int:priority-queue message-store="priorityStore"/>
</int:channel>

==== Initializing the Database

Spring Integration ships with some sample scripts that can be used to initialize a database. In the spring-integration-jdbc JAR file you will find scripts in the org.springframework.integration.jdbc and in the org.springframework.integration.jdbc.store.channel packages: there is a create and a drop script example for a range of common database platforms. A common way to use these scripts is to reference them in a Spring JDBC data source initializer. Note that the scripts are provided as samples or specifications of the required table and column names. You may find that you need to enhance them for production use (e.g. with index declarations).
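
For example, a hedged Java-configuration sketch using Spring JDBC's DataSourceInitializer; the script location follows the package naming above, but the exact file name (shown here for H2) is an assumption you should verify for your database platform:

@Bean
public DataSourceInitializer integrationSchemaInitializer(DataSource dataSource) {
    ResourceDatabasePopulator populator = new ResourceDatabasePopulator(
            new ClassPathResource("org/springframework/integration/jdbc/schema-h2.sql")); // assumed script name
    DataSourceInitializer initializer = new DataSourceInitializer();
    initializer.setDataSource(dataSource);
    initializer.setDatabasePopulator(populator);
    return initializer;
}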

==== Partitioning a Message Store

It is common to use a JdbcMessageStore as a global store for a group of applications, or nodes in the same application. To provide some protection against name clashes, and to give control over the database meta-data configuration, the message store allows the tables to be partitioned in two ways. One is to use separate table names, by changing the prefix as described above, and the other is to specify a "region" name for partitioning data within a single table. An important use case for this is when the MessageStore is managing persistent queues backing a Spring Integration Message Channel. The message data for a persistent channel is keyed in the store on the channel name, so if the channel names are not globally unique then there is the danger of channels picking up data that was not intended for them. To avoid this, the message store region can be used to keep data separate for different physical channels that happen to have the same logical name.
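
Both partitioning options are simple properties on the store; for example, a minimal sketch (the prefix and region values are illustrative):

@Bean
public JdbcMessageStore messageStore(DataSource dataSource) {
    JdbcMessageStore messageStore = new JdbcMessageStore(dataSource);
    messageStore.setTablePrefix("MY_INT_"); // separate tables for this group of applications
    messageStore.setRegion("trafficApp");   // partition rows within the shared tables
    return messageStore;
}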

=== Stored Procedures

In certain situations plain JDBC support is not sufficient. Maybe you deal with legacy relational database schemas or you have complex data processing needs, but ultimately you have to use Stored Procedures or Stored Functions. Since Spring Integration 2.1, we provide three components in order to execute Stored Procedures or Stored Functions:

  • Stored Procedures Inbound Channel Adapter
  • Stored Procedures Outbound Channel Adapter
  • Stored Procedures Outbound Gateway

==== Supported Databases

In order to enable calls to Stored Procedures and Stored Functions, the Stored Procedure components use the org.springframework.jdbc.core.simple.SimpleJdbcCall class. Consequently, the following databases are fully supported for executing Stored Procedures:

  • Apache Derby
  • DB2
  • MySQL
  • Microsoft SQL Server
  • Oracle
  • PostgreSQL
  • Sybase

If you want to execute Stored Functions instead, the following databases are fully supported:

  • MySQL
  • Microsoft SQL Server
  • Oracle
  • PostgreSQL
[Note]Note

Even though your particular database may not be fully supported, chances are, that you can use the Stored Procedure Spring Integration components quite successfully anyway, provided your RDBMS supports Stored Procedures or Functions.

As a matter of fact, some of the provided integration tests use the H2 database. Nevertheless, it is very important to thoroughly test those usage scenarios.

==== Configuration

The Stored Procedure components provide full XML Namespace support, and configuring the components is similar to the configuration of the general-purpose JDBC components discussed earlier.

==== Common Configuration Attributes

Certain configuration parameters are shared among all Stored Procedure components and are described below:

auto-startup

Lifecycle attribute signaling if this component should be started during Application Context startup. Defaults to true. Optional.

data-source

Reference to a javax.sql.DataSource, which is used to access the database. Required.

id

Identifies the underlying Spring bean definition, which is an instance of either EventDrivenConsumer or PollingConsumer, depending on whether the Outbound Channel Adapter’s channel attribute references a SubscribableChannel or a PollableChannel. Optional.

ignore-column-meta-data

For fully supported databases, the underlying SimpleJdbcCall class can automatically retrieve the parameter information for the Stored Procedure or Function to be invoked from the JDBC Meta-data.

However, if the database you use does not support metadata lookups, or if you would like to provide customized parameter definitions, this flag can be set to true. It defaults to false. Optional.

is-function

If true, a SQL Function is called. In that case the stored-procedure-name or stored-procedure-name-expression attributes define the name of the called function. Defaults to false. Optional.

stored-procedure-name

The attribute specifies the name of the stored procedure. If the is-function attribute is set to true, this attribute specifies the function name instead. Either this property or stored-procedure-name-expression must be specified.

stored-procedure-name-expression

This attribute specifies the name of the stored procedure using a SpEL expression. Using SpEL you have access to the full message (if available), including its headers and payload. You can use this attribute to invoke different Stored Procedures at runtime. For example, you can provide Stored Procedure names that you would like to execute as a Message Header. The expression must resolve to a String.

If the is-function attribute is set to true, this attribute specifies a Stored Function. Either this property or stored-procedure-name must be specified.
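For example, assuming the procedure name is carried in a (hypothetical) message header named stored_procedure_name, the attribute could be configured roughly as follows:

<int-jdbc:stored-proc-outbound-channel-adapter data-source="dataSource"
    channel="storedProcRequestChannel"
    stored-procedure-name-expression="headers['stored_procedure_name']"/>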

jdbc-call-operations-cache-size

Defines the maximum number of cached SimpleJdbcCallOperations instances. Basically, for each Stored Procedure Name a new SimpleJdbcCallOperations instance is created which in turn is cached.

[Note]Note

The stored-procedure-name-expression attribute and the jdbc-call-operations-cache-size were added with Spring Integration 2.2.

The default cache size is 10. A value of 0 disables caching. Negative values are not permitted.

If you enable JMX, statistical information about the jdbc-call-operations-cache is exposed as an MBean. Please refer to the JMX Support chapter for more information.

sql-parameter-source-factory (Not available for the Stored Procedure Inbound Channel Adapter.)

Reference to a SqlParameterSourceFactory. By default bean properties of the passed in Message payload will be used as a source for the Stored Procedure’s input parameters using a BeanPropertySqlParameterSourceFactory.

This may be sufficient for basic use cases. For more sophisticated options, consider passing in one or more ProcedureParameter. Please also refer to the section called “Defining Parameter Sources”. Optional.

use-payload-as-parameter-source (Not available for the Stored Procedure Inbound Channel Adapter.)

If set to true, the payload of the Message will be used as a source for providing parameters. If false, however, the entire Message will be available as a source for parameters.

If no Procedure Parameters are passed in, this property will default to true. This means that using a default BeanPropertySqlParameterSourceFactory the bean properties of the payload will be used as a source for parameter values for the to-be-executed Stored Procedure or Stored Function.

However, if Procedure Parameters are passed in, then this property will by default evaluate to false. ProcedureParameters allow SpEL Expressions to be provided and therefore it is highly beneficial to have access to the entire Message. The property is set on the underlying StoredProcExecutor. Optional.

==== Common Configuration Sub-Elements

The Stored Procedure components share a common set of sub-elements to define and pass parameters to Stored Procedures or Functions. The following elements are available:

  • parameter
  • returning-resultset
  • sql-parameter-definition
  • poller

parameter

Provides a mechanism to pass parameters to Stored Procedures or Functions. Parameters can either be static or provided using a SpEL Expression. Optional.

<int-jdbc:parameter name=""     1
                    type=""     2
                    value=""/>  3

<int-jdbc:parameter name=""
                    expression=""/> 4

1

The name of the parameter to be passed into the Stored Procedure or Stored Function. Required.

2

This attribute specifies the type of the value. If nothing is provided this attribute will default to java.lang.String. This attribute is only used when the value attribute is used. Optional.

3

The value of the parameter. Either this attribute or the expression attribute must be provided. Optional.

4

Instead of the value attribute, you can also specify a SpEL expression for passing the value of the parameter. If you specify the expression the value attribute is not allowed. Optional.

returning-resultset

Stored Procedures may return multiple result sets. By setting one or more returning-resultset elements, you can specify RowMappers in order to convert each returned ResultSet to meaningful objects. Optional.

<int-jdbc:returning-resultset name="" row-mapper="" />

sql-parameter-definition

If you are using a database that is fully supported, you typically don’t have to specify the Stored Procedure parameter definitions. Instead, those parameters can be automatically derived from the JDBC Meta-data. However, if you are using databases that are not fully supported, you must set those parameters explicitly using the sql-parameter-definition sub-element.

You can also choose to turn off any processing of parameter meta data information obtained via JDBC using the ignore-column-meta-data attribute.

<int-jdbc:sql-parameter-definition
                                   name=""                           1
                                   direction="IN"                    2
                                   type="STRING"                     3
                                   scale="5"                         4
                                   type-name="FOO_STRUCT"            5
                                   return-type="fooSqlReturnType"/>  6

1

Specifies the name of the SQL parameter. Required.

2

Specifies the direction of the SQL parameter definition. Defaults to IN. Valid values are: IN, OUT and INOUT. If your procedure is returning ResultSets, please use the returning-resultset element. Optional.

3

The SQL type used for this SQL parameter definition. Will translate into the integer value as defined by java.sql.Types. Alternatively you can provide the integer value as well. If this attribute is not explicitly set, then it will default to VARCHAR. Optional.

4

The scale of the SQL parameter. Only used for numeric and decimal parameters. Optional.

5

The typeName for types that are user-named like: STRUCT, DISTINCT, JAVA_OBJECT, named array types. This attribute is mutually exclusive with the scale attribute. Optional.

6

The reference to a custom value handler for complex types. An implementation of SqlReturnType. This attribute is mutually exclusive with the scale attribute and is applicable for OUT(INOUT)-parameters only. Optional.

poller

Allows you to configure a Message Poller if this endpoint is a PollingConsumer. Optional.

==== Defining Parameter Sources

Parameter Sources govern the techniques of retrieving and mapping the Spring Integration Message properties to the relevant Stored Procedure input parameters. The Stored Procedure components follow certain rules.

By default bean properties of the passed in Message payload will be used as a source for the Stored Procedure’s input parameters. In that case a BeanPropertySqlParameterSourceFactory will be used. This may be sufficient for basic use cases. The following example illustrates that default behavior.

[Important]Important

Please be aware that for the "automatic" lookup of bean properties using the BeanPropertySqlParameterSourceFactory to work, your bean properties must be defined in lower case. This is due to the fact that in org.springframework.jdbc.core.metadata.CallMetaDataContext (method matchInParameterValuesWithCallParameters()), the retrieved Stored Procedure parameter declarations are converted to lower case. As a result, if you have camel-case bean properties such as "lastName", the lookup will fail. In that case, please provide an explicit ProcedureParameter.

Let’s assume we have a payload that consists of a simple bean with the following three properties: id, name and description. Furthermore, we have a simplistic Stored Procedure called INSERT_COFFEE that accepts three input parameters: id, name and description. We also use a fully supported database. In that case the following configuration for a Stored Procedure Outbound Adapter will be sufficient:

<int-jdbc:stored-proc-outbound-channel-adapter data-source="dataSource"
    channel="insertCoffeeProcedureRequestChannel"
    stored-procedure-name="INSERT_COFFEE"/>

For more sophisticated options consider passing in one or more ProcedureParameter.

If you do provide ProcedureParameter explicitly, then as default an ExpressionEvaluatingSqlParameterSourceFactory will be used for parameter processing in order to enable the full power of SpEL expressions.
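As a sketch, the INSERT_COFFEE adapter shown above could be configured with explicit parameters, for instance to take the description from a (hypothetical) message header instead of a payload property:

<int-jdbc:stored-proc-outbound-channel-adapter data-source="dataSource"
    channel="insertCoffeeProcedureRequestChannel"
    stored-procedure-name="INSERT_COFFEE">
    <int-jdbc:parameter name="id"          expression="payload.id"/>
    <int-jdbc:parameter name="name"        expression="payload.name"/>
    <int-jdbc:parameter name="description" expression="headers['coffee_description']"/>
</int-jdbc:stored-proc-outbound-channel-adapter>

Because ProcedureParameters are provided, use-payload-as-parameter-source defaults to false, so the expressions have access to the entire Message, including its headers.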

Furthermore, if you need even more control over how parameters are retrieved, consider passing in a custom implementation of a SqlParameterSourceFactory using the sql-parameter-source-factory attribute.

==== Stored Procedure Inbound Channel Adapter

<int-jdbc:stored-proc-inbound-channel-adapter
                                   channel=""                                    1
                                   stored-procedure-name=""
                                   data-source=""
                                   auto-startup="true"
                                   id=""
                                   ignore-column-meta-data="false"
                                   is-function="false"
                                   skip-undeclared-results=""                    2
                                   return-value-required="false">                3
    <int:poller/>
    <int-jdbc:sql-parameter-definition name="" direction="IN"
                                               type="STRING"
                                               scale=""/>
    <int-jdbc:parameter name="" type="" value=""/>
    <int-jdbc:parameter name="" expression=""/>
    <int-jdbc:returning-resultset name="" row-mapper="" />
</int-jdbc:stored-proc-inbound-channel-adapter>

1

Channel to which polled messages will be sent. If the stored procedure or function does not return any data, the payload of the Message will be Null. Required.

2

If this attribute is set to true, all results from a stored procedure call that do not have a corresponding SqlOutParameter declaration are bypassed. For example, Stored Procedures may return an update count value, even though your Stored Procedure declared only a single result parameter. The exact behavior depends on the database. The value is set on the underlying JdbcTemplate. Few developers will probably ever want to process update counts, thus the value defaults to true. Optional.

3

Indicates whether this procedure’s return value should be included. Since Spring Integration 3.0. Optional.

==== Stored Procedure Outbound Channel Adapter

<int-jdbc:stored-proc-outbound-channel-adapter channel=""                        1
                                               stored-procedure-name=""
                                               data-source=""
                                               auto-startup="true"
                                               id=""
                                               ignore-column-meta-data="false"
                                               order=""                          2
                                               sql-parameter-source-factory=""
                                               use-payload-as-parameter-source="">
    <int:poller fixed-rate=""/>
    <int-jdbc:sql-parameter-definition name=""/>
    <int-jdbc:parameter name=""/>

</int-jdbc:stored-proc-outbound-channel-adapter>

1

The receiving Message Channel of this endpoint. Required.

2

Specifies the order for invocation when this endpoint is connected as a subscriber to a channel. This is particularly relevant when that channel is using a failover dispatching strategy. It has no effect when this endpoint itself is a Polling Consumer for a channel with a queue. Optional.

==== Stored Procedure Outbound Gateway

<int-jdbc:stored-proc-outbound-gateway request-channel=""                        1
                                       stored-procedure-name=""
                                       data-source=""
                                   auto-startup="true"
                                   id=""
                                   ignore-column-meta-data="false"
                                   is-function="false"
                                   order=""
                                   reply-channel=""                              2
                                   reply-timeout=""                              3
                                   return-value-required="false"                 4
                                   skip-undeclared-results=""                    5
                                   sql-parameter-source-factory=""
                                   use-payload-as-parameter-source="">
<int-jdbc:sql-parameter-definition name="" direction="IN"
                                   type=""
                                   scale="10"/>
<int-jdbc:sql-parameter-definition name=""/>
<int-jdbc:parameter name="" type="" value=""/>
<int-jdbc:parameter name="" expression=""/>
<int-jdbc:returning-resultset name="" row-mapper="" />
</int-jdbc:stored-proc-outbound-gateway>

1

The receiving Message Channel of this endpoint. Required.

2

Message Channel to which replies should be sent, after receiving the database response. Optional.

3

Allows you to specify how long this gateway will wait for the reply message to be sent successfully before throwing an exception. Keep in mind that when sending to a DirectChannel, the invocation will occur in the sender’s thread so the failing of the send operation may be caused by other components further downstream. By default the Gateway will wait indefinitely. The value is specified in milliseconds. Optional.

4

Indicates whether this procedure’s return value should be included. Optional.

5

If the skip-undeclared-results attribute is set to true, then all results from a stored procedure call that don’t have a corresponding SqlOutParameter declaration will be bypassed. E.g. Stored Procedures may return an update count value, even though your Stored Procedure only declared a single result parameter. The exact behavior depends on the used database. The value is set on the underlying JdbcTemplate. Few developers will probably ever want to process update counts, thus the value defaults to true. Optional.

==== Examples

In the following two examples we call Apache Derby Stored Procedures. The first example calls a Stored Procedure that returns a ResultSet; using a RowMapper, the data is converted into a domain object, which then becomes the Spring Integration message payload.

In the second sample we call a Stored Procedure that uses Output Parameters instead, in order to return data.

[Note]Note

Please have a look at the Spring Integration Samples project, located at https://github.com/spring-projects/spring-integration-samples.

The project contains the Apache Derby example referenced here, as well as instruction on how to run it. The Spring Integration Samples project also provides an example using Oracle Stored Procedures.

In the first example, we call a Stored Procedure named FIND_ALL_COFFEE_BEVERAGES that does not define any input parameters but which returns a ResultSet.

In Apache Derby, Stored Procedures are implemented in Java. Here is the method signature followed by the corresponding SQL:

public static void findAllCoffeeBeverages(ResultSet[] coffeeBeverages)
            throws SQLException {
    ...
}
CREATE PROCEDURE FIND_ALL_COFFEE_BEVERAGES() \
PARAMETER STYLE JAVA LANGUAGE JAVA MODIFIES SQL DATA DYNAMIC RESULT SETS 1 \
EXTERNAL NAME 'org.springframework.integration.jdbc.storedproc.derby.DerbyStoredProcedures.findAllCoffeeBeverages';

In Spring Integration, you can now call this Stored Procedure using, for example, a stored-proc-outbound-gateway:

<int-jdbc:stored-proc-outbound-gateway id="outbound-gateway-storedproc-find-all"
                                       data-source="dataSource"
                                       request-channel="findAllProcedureRequestChannel"
                                       expect-single-result="true"
                                       stored-procedure-name="FIND_ALL_COFFEE_BEVERAGES">
<int-jdbc:returning-resultset name="coffeeBeverages"
    row-mapper="org.springframework.integration.support.CoffeBeverageMapper"/>
</int-jdbc:stored-proc-outbound-gateway>

In the second example, we call a Stored Procedure named FIND_COFFEE that has one input parameter. Instead of returning a ResultSet, an output parameter is used:

public static void findCoffee(int coffeeId, String[] coffeeDescription)
            throws SQLException {
    ...
}
CREATE PROCEDURE FIND_COFFEE(IN ID INTEGER, OUT COFFEE_DESCRIPTION VARCHAR(200)) \
PARAMETER STYLE JAVA LANGUAGE JAVA EXTERNAL NAME \
'org.springframework.integration.jdbc.storedproc.derby.DerbyStoredProcedures.findCoffee';

In Spring Integration, you can now call this Stored Procedure using, for example, a stored-proc-outbound-gateway:

<int-jdbc:stored-proc-outbound-gateway id="outbound-gateway-storedproc-find-coffee"
                                       data-source="dataSource"
                                       request-channel="findCoffeeProcedureRequestChannel"
                                       skip-undeclared-results="true"
                                       stored-procedure-name="FIND_COFFEE"
                                       expect-single-result="true">
    <int-jdbc:parameter name="ID" expression="payload" />
</int-jdbc:stored-proc-outbound-gateway>

=== JDBC Lock Registry

Starting with version 4.3, the JdbcLockRegistry is available. Certain components (for example aggregator and resequencer) use a lock obtained from a LockRegistry instance to ensure that only one thread is manipulating a group at a time. The DefaultLockRegistry performs this function within a single component; you can now configure an external lock registry on these components. When used with a shared MessageGroupStore, the JdbcLockRegistry can be used to provide this functionality across multiple application instances, such that only one instance can manipulate the group at a time.

When a lock is released by a local thread, another local thread will generally be able to acquire the lock immediately. If a lock is released by a thread using a different registry instance, it can take up to 100ms to acquire the lock.

The JdbcLockRegistry is based on the LockRepository abstraction, for which a DefaultLockRepository implementation is provided. The database schema scripts are located in the org.springframework.integration.jdbc package, organized by RDBMS vendor. For example, the H2 DDL for the lock table looks like this:

CREATE TABLE INT_LOCK  (
    LOCK_KEY CHAR(36),
    REGION VARCHAR(100),
    CLIENT_ID CHAR(36),
    CREATED_DATE TIMESTAMP NOT NULL,
    constraint LOCK_PK primary key (LOCK_KEY, REGION)
);

The INT_ prefix can be changed according to the target database design requirements. In that case, the prefix property must be set accordingly on the DefaultLockRepository bean definition.

Sometimes an application ends up in a state where it cannot release a distributed lock, that is, it cannot remove the corresponding record from the database. For this purpose, such dead locks can be expired by another application on the next locking invocation. The timeToLive (TTL) option on the DefaultLockRepository is provided for this purpose. You may also want to specify the CLIENT_ID for the locks stored by a given DefaultLockRepository instance. In that case you can pass the id to be associated with the DefaultLockRepository as a constructor parameter.
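A minimal configuration sketch might look like the following (the prefix and TTL values are illustrative):

<bean id="lockRepository"
      class="org.springframework.integration.jdbc.lock.DefaultLockRepository">
    <constructor-arg ref="dataSource"/>
    <!-- use APP1_LOCK instead of the default INT_LOCK table -->
    <property name="prefix" value="APP1_"/>
    <!-- expire dead locks after 30 seconds -->
    <property name="timeToLive" value="30000"/>
</bean>

<bean id="lockRegistry"
      class="org.springframework.integration.jdbc.lock.JdbcLockRegistry">
    <constructor-arg ref="lockRepository"/>
</bean>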

== JPA Support

Spring Integration’s JPA (Java Persistence API) module provides components for performing various database operations using JPA. The following components are provided:

  • Inbound Channel Adapter
  • Outbound Channel Adapter
  • Updating Outbound Gateway
  • Retrieving Outbound Gateway

These components can be used to perform select, create, update and delete operations on the targeted databases by sending and receiving messages to and from them.

The JPA Inbound Channel Adapter lets you poll and retrieve (select) data from the database using JPA, whereas the JPA Outbound Channel Adapter lets you create, update and delete entities.

Outbound Gateways for JPA can be used to persist entities to the database, yet allowing you to continue with the flow and execute further components downstream. Similarly, you can use an Outbound Gateway to retrieve entities from the database.

For example, you may use the Outbound Gateway, which receives a Message with a user Id as payload on its request channel, to query the database and retrieve the User entity and pass it downstream for further processing.

Recognizing these semantic differences, Spring Integration provides two separate JPA Outbound Gateways:

  • Retrieving Outbound Gateway
  • Updating Outbound Gateway

Functionality

All JPA components perform their respective JPA operations by using either one of the following:

  • Entity classes
  • Java Persistence Query Language (JPQL) for update, select and delete (inserts are not supported by JPQL)
  • Native Query
  • Named Query

In the following sections we will describe each of these components in more detail.

=== Supported Persistence Providers

The Spring Integration JPA support has been tested using the following persistence providers:

  • Hibernate
  • OpenJPA
  • EclipseLink

When using a persistence provider, please ensure that the provider is compatible with JPA 2.0.

=== Java Implementation

Each of the provided components uses the o.s.i.jpa.core.JpaExecutor class, which in turn uses an implementation of the o.s.i.jpa.core.JpaOperations interface. JpaOperations operates like a typical Data Access Object (DAO) and provides methods such as find, persist, executeUpdate and so on. For most use cases the provided default implementation o.s.i.jpa.core.DefaultJpaOperations should be sufficient. Nevertheless, you have the option to specify your own implementation in case you require custom behavior.

For initializing a JpaExecutor you have to use one of the three available constructors that accept one of:

  • EntityManagerFactory
  • EntityManager or
  • JpaOperations
[Note]Note

The XML Namespace Support described further below is also very flexible and provides configuration attributes for each JPA component to pass in an EntityManagerFactory, EntityManager or JpaOperations reference.

Java Configuration Example

The following example of a JPA Retrieving Outbound Gateway is configured purely through Java. In typical usage scenarios you will most likely prefer the XML Namespace Support described further below. However, the example illustrates how the classes are wired up. Understanding the inner workings can also be very helpful for debugging or customizing the individual JPA components.

First, we instantiate a JpaExecutor using an EntityManager as constructor argument. The JpaExecutor is then in turn used as constructor argument for the o.s.i.jpa.outbound.JpaOutboundGateway, and the JpaOutboundGateway is passed as constructor argument into the EventDrivenConsumer.

<bean id="jpaExecutor" class="o.s.i.jpa.core.JpaExecutor">
    <constructor-arg name="entityManager" ref="entityManager"/>
    <property name="entityClass"        value="o.s.i.jpa.test.entity.StudentDomain"/>
    <property name="jpaQuery"           value="select s from Student s where s.id = :id"/>
    <property name="expectSingleResult" value="true"/>
    <property name="jpaParameters" >
        <util:list>
            <bean class="org.springframework.integration.jpa.support.JpaParameter">
                <property name="name"       value="id"/>
                <property name="expression" value="payload"/>
            </bean>
        </util:list>
    </property>
</bean>

<bean id="jpaOutboundGateway" class="o.s.i.jpa.outbound.JpaOutboundGateway">
    <constructor-arg ref="jpaExecutor"/>
    <property        name="gatewayType"   value="RETRIEVING"/>
    <property        name="outputChannel" ref="studentReplyChannel"/>
</bean>

<bean id="getStudentEndpoint"
      class="org.springframework.integration.endpoint.EventDrivenConsumer">
    <constructor-arg name="inputChannel" ref="getStudentChannel"/>
    <constructor-arg name="handler"      ref="jpaOutboundGateway"/>
</bean>
[Note]Note

For more examples of constructing JPA components purely through Java, see the JUnit test-cases for the JPA Adapters.

=== Namespace Support

When using XML namespace support, the underlying parser classes instantiate the relevant Java classes for you. Thus, you typically don’t have to deal with the inner workings of the JPA adapter. This section documents the XML Namespace Support provided by Spring Integration and shows you how to use it to configure the JPA components.

==== Common XML Namespace Configuration Attributes

Certain configuration parameters are shared amongst all JPA components and are described below:

auto-startup

Lifecycle attribute signaling if this component should be started during Application Context startup. Defaults to true. Optional.

id

Identifies the underlying Spring bean definition, which is an instance of either EventDrivenConsumer or PollingConsumer. Optional.

entity-manager-factory

The reference to the JPA Entity Manager Factory that will be used by the adapter to create the EntityManager. Either this attribute or the entity-manager attribute or the jpa-operations attribute must be provided.

entity-manager

The reference to the JPA Entity Manager that will be used by the component. Either this attribute or the entity-manager-factory attribute or the jpa-operations attribute must be provided.

[Note]Note

Usually your Spring Application Context only defines a JPA Entity Manager Factory and the EntityManager is injected using the @PersistenceContext annotation. This, however, is not applicable for the Spring Integration JPA components. Usually, injecting the JPA Entity Manager Factory will be best, but in case you want to inject an EntityManager explicitly, you have to define a SharedEntityManagerBean. For more information, please see the relevant http://static.springsource.org/spring/docs/current/javadoc-api/org/springframework/orm/jpa/support/SharedEntityManagerBean.html[JavaDoc].

<bean id="entityManager"
      class="org.springframework.orm.jpa.support.SharedEntityManagerBean">
    <property name="entityManagerFactory" ref="entityManagerFactoryBean" />
</bean>

jpa-operations

Reference to a bean implementing the JpaOperations interface. In rare cases it might be advisable to provide your own implementation of the JpaOperations interface, instead of relying on the default implementation org.springframework.integration.jpa.core.DefaultJpaOperations. As JpaOperations wraps the necessary DataSource, the JPA Entity Manager or JPA Entity Manager Factory must not be provided if the jpa-operations attribute is used.

entity-class

The fully qualified name of the entity class. The exact semantics of this attribute vary, depending on whether we are performing a persist/update operation or whether we are retrieving objects from the database.

When retrieving data, you can specify the entity-class attribute to indicate that you would like to retrieve objects of this type from the database. In that case you must not define any of the query attributes (jpa-query, native-query or named-query).

When persisting data, the entity-class attribute will indicate the type of object to persist. If not specified (for persist operations) the entity class will be automatically retrieved from the Message’s payload.

jpa-query

Defines the JPA query (Java Persistence Query Language) to be used.

native-query

Defines the native SQL query to be used.

named-query

This attribute refers to a named query. A named query can be defined either in native SQL or in JPA QL, but the underlying JPA persistence provider handles that distinction internally.

==== Providing JPA Query Parameters

For providing parameters, the parameter XML sub-element can be used. It provides a mechanism to provide parameters for the queries that are either based on the Java Persistence Query Language (JPQL) or native SQL queries. Parameters can also be provided for Named Queries.

Expression based Parameters

<int-jpa:parameter expression="payload.name" name="firstName"/>

Value based Parameters

<int-jpa:parameter name="name" type="java.lang.String" value="myName"/>

Positional Parameters

<int-jpa:parameter expression="payload.name"/>
<int-jpa:parameter type="java.lang.Integer" value="21"/>

==== Transaction Handling

All JPA operations like Insert, Update and Delete require a transaction to be active whenever they are performed. For Inbound Channel Adapters there is nothing special to be done; it is similar to the way we configure transaction managers with pollers used with other inbound channel adapters. The xml snippet below shows a sample where a transaction manager is configured on the poller used with an Inbound Channel Adapter.

<int-jpa:inbound-channel-adapter
    channel="inboundChannelAdapterOne"
    entity-manager="em"
    auto-startup="true"
    jpa-query="select s from Student s"
    expect-single-result="true"
    delete-after-poll="true">
    <int:poller fixed-rate="2000" >
        <int:transactional propagation="REQUIRED"
            transaction-manager="transactionManager"/>
    </int:poller>
</int-jpa:inbound-channel-adapter>

However, it may be necessary to specifically start a transaction when using an Outbound Channel Adapter/Gateway. If a DirectChannel is the input channel for the outbound adapter/gateway, and if a transaction is active in the current thread of execution, the JPA operation will be performed in the same transaction context. We can also configure the JPA operation to be executed in a new transaction, as shown below.

<int-jpa:outbound-gateway
    request-channel="namedQueryRequestChannel"
    reply-channel="namedQueryResponseChannel"
    named-query="updateStudentByRollNumber"
    entity-manager="em"
    gateway-type="UPDATING">
    <int-jpa:parameter name="lastName" expression="payload"/>
    <int-jpa:parameter name="rollNumber" expression="headers['rollNumber']"/>
    <int-jpa:transactional propagation="REQUIRES_NEW"
        transaction-manager="transactionManager"/>
</int-jpa:outbound-gateway>

As we can see above, the transactional sub element of the outbound gateway/adapter will be used to specify the transaction attributes. It is optional to define this child element if you have DirectChannel as an input channel to the adapter and you want the adapter to execute the operations in the same transaction context as the caller. If, however, you are using an ExecutorChannel, it is required to have the transactional sub element as the invoking client’s transaction context is not propagated.

[Note]Note

Unlike the transactional sub element of the poller which is defined in the spring integration’s namespace, the transactional sub element for the outbound gateway/adapter is defined in the jpa namespace.

=== Inbound Channel Adapter

An Inbound Channel Adapter is used to execute a select query over the database using JPA QL and return the result. The message payload will be either a single entity or a List of entities. Below is an xml snippet that shows a sample usage of the inbound-channel-adapter.

<int-jpa:inbound-channel-adapter channel="inboundChannelAdapterOne"  1
                    entity-manager="em"  2
                    auto-startup="true"  3
                    query="select s from Student s"  4
                    expect-single-result="true"  5
                    max-results=""  6
                    max-results-expression=""  7
                    delete-after-poll="true"  8
                    flush-after-delete="true">  9
    <int:poller fixed-rate="2000" >
      <int:transactional propagation="REQUIRED" transaction-manager="transactionManager"/>
    </int:poller>
</int-jpa:inbound-channel-adapter>

1

The channel to which the inbound-channel-adapter will send the messages, with the payload received after executing the JPA QL provided in the query attribute.

2

The EntityManager instance that will be used to perform the required JPA operations.

3

Attribute signaling whether the component should be automatically started on startup of the Application Context. The value defaults to true.

4

The JPA QL that needs to be executed and whose result needs to be sent out as the payload of the message.

5

The attribute that tells if the executed JPQL query gives a single entity in the result or a List of entities. If the value is set to true, the single entity retrieved is sent as the payload of the message. If, however, multiple results are returned after setting this to true, a MessagingException is thrown. The value defaults to false.

6

This non-zero, non-negative integer value tells the adapter not to select more than the given number of rows on execution of the select operation. By default, if this attribute is not set, all the possible records are selected by the given query. This attribute is mutually exclusive with max-results-expression. Optional.

7

An expression, mutually exclusive with max-results, that can be used to provide an expression that will be evaluated to find the maximum number of results in a result set. Optional.

8

Set this value to true if you want to delete the rows received after execution of the query. Please ensure that the component is operating as part of a transaction. Otherwise, you may encounter an Exception such as: java.lang.IllegalArgumentException: Removing a detached instance …​

9

Set this value to true if you want to flush the persistence context immediately after deleting received entities and if you don’t want to rely on the EntityManager's flushMode. The default value is set to false.

==== Configuration Parameter Reference

<int-jpa:inbound-channel-adapter
  auto-startup="true"  1
  channel=""  2
  delete-after-poll="false"   3
  delete-per-row="false"   4
  entity-class=""   5
  entity-manager=""  6
  entity-manager-factory=""  7
  expect-single-result="false"  8
  id=""
  jpa-operations=""  9
  jpa-query=""  10
  named-query=""  11
  native-query=""  12
  parameter-source=""  13
  send-timeout="">  14
  <int:poller ref="myPoller"/>
 </int-jpa:inbound-channel-adapter>

1

This Lifecycle attribute signals whether this component should be started during startup of the Application Context. This attribute defaults to true. Optional.

2

The channel to which the adapter will send a message with the payload that was received after performing the desired JPA operation.

3

A boolean flag that indicates whether the selected records are to be deleted after they have been polled by the adapter. By default the value is false, that is, the records will not be deleted. Please ensure that the component is operating as part of a transaction. Otherwise, you may encounter an Exception such as: java.lang.IllegalArgumentException: Removing a detached instance …​. Optional.

4

A boolean flag that indicates whether the records can be deleted in bulk or are deleted one record at a time. By default the value is false, that is, the records are bulk deleted. Optional.

5

The fully qualified name of the entity class to be queried from the database. The adapter will automatically build a JPA Query to be executed based on the entity class name provided. Optional.

6

An instance of javax.persistence.EntityManager that will be used to perform the JPA operations. Optional.

7

An instance of javax.persistence.EntityManagerFactory that will be used to obtain an instance of javax.persistence.EntityManager that will perform the JPA operations. Optional.

8

A boolean flag indicating whether the select operation is expected to return a single result or a List of results. If this flag is set to true, the single entity selected is sent as the payload of the message. If multiple entities are returned, an exception is thrown. If false, the List of entities is sent as the payload of the message. By default the value is false. Optional.

9

An implementation of org.springframework.integration.jpa.core.JpaOperations that will be used to perform the JPA operations. It is recommended not to provide an implementation of your own but to use the default org.springframework.integration.jpa.core.DefaultJpaOperations implementation. One of the entity-manager, entity-manager-factory or jpa-operations attributes must be used. Optional.

10

The JPA QL that needs to be executed by this adapter. Optional.

11

The named query that needs to be executed by this adapter. Optional.

12

The native query that will be executed by this adapter. One of the jpa-query, named-query, entity-class or native-query attributes must be used. Optional.

13

An implementation of o.s.i.jpa.support.parametersource.ParameterSource which will be used to resolve the values of the parameters provided in the query. Ignored if the entity-class attribute is provided. Optional.

14

Maximum amount of time in milliseconds to wait when sending a message to the channel. Optional.

=== Outbound Channel Adapter

The JPA Outbound Channel Adapter allows you to accept messages over a request channel. The payload can either be used as the entity to be persisted, or used along with the headers in parameter expressions for a defined JPQL query to be executed. In the following sub sections we will look at the possible ways of performing these operations.

==== Using an Entity Class

The XML snippet below shows how we can use the Outbound Channel Adapter to persist an entity to the database.

<int-jpa:outbound-channel-adapter channel="entityTypeChannel"  1
    entity-class="org.springframework.integration.jpa.test.entity.Student"  2
    persist-mode="PERSIST"  3
    entity-manager="em"/ > 4

1

The channel over which a valid JPA entity will be sent to the JPA Outbound Channel Adapter.

2

The fully qualified name of the entity class that would be accepted by the adapter to be persisted in the database. You can actually leave off this attribute in most cases as the adapter can determine the entity class automatically from the Spring Integration Message payload.

3

The operation that needs to be done by the adapter, valid values are PERSIST, MERGE and DELETE. The default value is MERGE.

4

The JPA entity manager to be used.

As we can see above, these four attributes of the outbound-channel-adapter are all that is needed to configure it to accept entities over the input channel and to PERSIST, MERGE or DELETE them in the underlying data source.

[Note]Note

As of Spring Integration 3.0, payloads to persist or merge can also be of type http://docs.oracle.com/javase/7/docs/api/java/lang/Iterable.html[java.lang.Iterable]. In that case, each object returned by the Iterable is treated as an entity and persisted or merged using the underlying EntityManager. NULL values returned by the iterator are ignored.

==== Using JPA Query Language (JPA QL)

We have seen in the above sub section how to perform a PERSIST action using an entity. We will now see how to use the outbound channel adapter with JPA QL (Java Persistence API Query Language).

<int-jpa:outbound-channel-adapter channel="jpaQlChannel"  1
  jpa-query="update Student s set s.firstName = :firstName where s.rollNumber = :rollNumber"  2
  entity-manager="em">  3
    <int-jpa:parameter name="firstName"  expression="payload['firstName']"/>  4
    <int-jpa:parameter name="rollNumber" expression="payload['rollNumber']"/>
</int-jpa:outbound-channel-adapter>

1

The input channel over which the message is being sent to the outbound channel adapter

2

The JPA QL that needs to be executed. This query may contain parameters that will be evaluated using the parameter child tag.

3

The entity manager used by the adapter to perform the JPA operations

4

This sub element, one for each named parameter, is used to provide the value of the parameters specified in the JPA QL of the jpa-query attribute.

The parameter sub element accepts an attribute name which corresponds to the named parameter specified in the provided JPA QL (point 2 in the above sample). The value of the parameter can either be static or can be derived using an expression. The static value and the expression to derive the value are specified using the value and the expression attributes, respectively. These attributes are mutually exclusive.

If the value attribute is specified we can provide an optional type attribute. The value of this attribute is the fully qualified name of the class whose value is represented by the value attribute. By default the type is assumed to be a java.lang.String.

<int-jpa:outbound-channel-adapter ...
>
    <int-jpa:parameter name="level" value="2" type="java.lang.Integer"/>
    <int-jpa:parameter name="name" expression="payload['name']"/>
</int-jpa:outbound-channel-adapter>

As seen in the above snippet, it is perfectly valid to use multiple parameter sub elements within an outbound channel adapter tag, deriving some parameters using expressions and others from static values. However, care should be taken not to specify the same parameter name multiple times, and to provide one parameter sub element for each named parameter specified in the JPA query. In the example above we specify two parameters, level and name, where level is a static value of type java.lang.Integer, whereas name is derived from the payload of the message.

[Note]Note

Though specifying select is valid for JPA QL, it makes no sense as outbound channel adapters will not be returning any result. If you want to select some values, consider using the outbound gateway instead.

==== Using Native Queries

In this section we will see how to use native queries to perform the operations using JPA outbound channel adapter. Using native queries is similar to using JPA QL, except that the query specified here is a native database query. By choosing native queries we lose the database vendor independence which we get using JPA QL.

One of the things we can achieve using native queries is to perform database inserts, which is not possible using JPA QL (to perform inserts we send JPA entities to the channel adapter, as we have seen earlier). Below is a small xml fragment that demonstrates the use of a native query to insert values into a table. Please note that we have only mentioned the important attributes below. All other attributes like channel and entity-manager, as well as the parameter sub element, have the same semantics as when we use JPA QL.

[Important]Important

Please be aware that named parameters may not be supported by your JPA provider in conjunction with native SQL queries. While they work fine using Hibernate, OpenJPA and EclipseLink do NOT support them: https://issues.apache.org/jira/browse/OPENJPA-111 Section 3.8.12 of the JPA 2.0 spec states: "Only positional parameter binding and positional access to result items may be portably used for native queries."

<int-jpa:outbound-channel-adapter channel="nativeQlChannel"
  native-query="insert into STUDENT_TABLE(FIRST_NAME,LAST_UPDATED) values (:lastName,:lastUpdated)"  1
  entity-manager="em">
    <int-jpa:parameter name="lastName" expression="payload['updatedLastName']"/>
    <int-jpa:parameter name="lastUpdated" expression="new java.util.Date()"/>
</int-jpa:outbound-channel-adapter>

1

The native query that will be executed by this outbound channel adapter

==== Using Named Queries

Having seen how to use entities, JPA QL and native queries in the previous sub sections, we will now see how to use named queries. Using a named query is very similar to using JPA QL or a native query, except that we specify a named query instead of a query. Before we look at the xml fragment for declaring the outbound-channel-adapter, we will see how JPA named queries are defined.

In our case, if we have an entity called Student, we can define two named queries, selectStudent and updateStudent, on the class itself. Below is a way to define named queries using annotations:

@Entity
@Table(name="Student")
@NamedQueries({
    @NamedQuery(name="selectStudent",
        query="select s from Student s where s.lastName = 'Last One'"),
    @NamedQuery(name="updateStudent",
        query="update Student s set s.lastName = :lastName,
               lastUpdated = :lastUpdated where s.id in (select max(a.id) from Student a)")
})
public class Student {

...

You can alternatively use orm.xml to define named queries, as seen below:

<entity-mappings ...>
    ...
    <named-query name="selectStudent">
        <query>select s from Student s where s.lastName = 'Last One'</query>
    </named-query>
</entity-mappings>

Now that we have seen how named queries can be defined using annotations or orm.xml, here is a small xml fragment for defining an outbound-channel-adapter using a named query:

<int-jpa:outbound-channel-adapter channel="namedQueryChannel"
            named-query="updateStudent"	 1
            entity-manager="em">
        <int-jpa:parameter name="lastName" expression="payload['updatedLastName']"/>
        <int-jpa:parameter name="lastUpdated" expression="new java.util.Date()"/>
</int-jpa:outbound-channel-adapter>

1

The named query that we want the adapter to execute when it receives a message over the channel

==== Configuration Parameter Reference

<int-jpa:outbound-channel-adapter
  auto-startup="true"  1
  channel=""  2
  entity-class=""  3
  entity-manager=""  4
  entity-manager-factory=""  5
  id=""
  jpa-operations=""  6
  jpa-query=""  7
  named-query=""  8
  native-query=""  9
  order=""  10
  parameter-source-factory=""   11
  persist-mode="MERGE"   12
  flush="true"   13
  flush-size="10"   14
  clear-on-flush="true"   15
  use-payload-as-parameter-source="true"   (16)
	<int:poller/>
	<int-jpa:transactional/>    (17)
	<int-jpa:parameter/>    (18)
</int-jpa:outbound-channel-adapter>

1

Lifecycle attribute signaling if this component should be started during Application Context startup. Defaults to true. Optional.

2

The channel from which the outbound adapter will receive messages for performing the desired operation.

3

The fully qualified name of the entity class for the JPA Operation. The attributes entity-class, query and named-query are mutually exclusive. Optional.

4

An instance of javax.persistence.EntityManager that will be used to perform the JPA operations. Optional.

5

An instance of javax.persistence.EntityManagerFactory that will be used to obtain an instance of javax.persistence.EntityManager that will perform the JPA operations. Optional.

6

An implementation of org.springframework.integration.jpa.core.JpaOperations that would be used to perform the JPA operations. It is recommended not to provide an implementation of your own but use the default org.springframework.integration.jpa.core.DefaultJpaOperations implementation. Either of the entity-manager, entity-manager-factory or jpa-operations attributes is to be used. Optional.

7

The JPA QL that needs to be executed by this adapter. Optional.

8

The named query that needs to be executed by this adapter. Optional.

9

The native query that will be executed by this adapter. One of the jpa-query, named-query or native-query attributes must be used. Optional.

10

The order for this consumer when multiple consumers are registered, thereby managing load-balancing and/or failover. Optional (defaults to Ordered.LOWEST_PRECEDENCE).

11

An instance of o.s.i.jpa.support.parametersource.ParameterSourceFactory that will be used to get an instance of o.s.i.jpa.support.parametersource.ParameterSource which will be used to resolve the values of the parameters provided in the query. Ignored if operations are performed using a JPA entity. If a parameter sub element is used, the factory must be of type ExpressionEvaluatingParameterSourceFactory located in package o.s.i.jpa.support.parametersource. Optional.

12

Accepts one of the following: PERSIST, MERGE or DELETE. Indicates the operation that the adapter needs to perform. Relevant only if an entity is being used for JPA operations. Ignored if JPA QL, named query or native query is provided. Defaults to MERGE. Optional. As of Spring Integration 3.0, payloads to persist or merge can also be of type http://docs.oracle.com/javase/7/docs/api/java/lang/Iterable.html[java.lang.Iterable]. In that case, each object returned by the Iterable is treated as an entity and persisted or merged using the underlying EntityManager. NULL values returned by the iterator are ignored.

13

Set this value to true if you want to flush the persistence context immediately after persist, merge or delete operations and don’t want to rely on the EntityManager's flushMode. The default value is set to false. Applies only if the flush-size attribute isn’t specified. If this attribute is set to true, then flush-size will be implicitly set to 1, if it wasn’t configured to any other value.

14

Set this attribute to a value greater than 0 if you want to flush the persistence context immediately after persist, merge or delete operations and don’t want to rely on the EntityManager's flushMode. The default value is set to 0, which means 'no flush'. This attribute is geared towards messages with Iterable payloads. For instance, if flush-size is set to 3, then entityManager.flush() is called after every third entity. Furthermore, entityManager.flush() will be called once more after the entire loop. There is no reason to configure the flush attribute, if the flush-size attribute is specified with a value greater than 0.

15

Set this value to true if you want to clear the persistence context immediately after each flush operation. The attribute’s value is applied only if the flush attribute is set to true or if the flush-size attribute is set to a value greater than 0.

(16)

If set to true, the payload of the Message will be used as a source for providing parameters. If false, however, the entire Message will be available as a source for parameters. Optional.

(17)

Defines the transaction management attributes and the reference to the transaction manager to be used by the JPA adapter. Optional.

(18)

One or more parameter attributes, one for each parameter used in the query. The value or expression provided will be evaluated to compute the value of the parameter. Optional.

=== Outbound Gateways

The JPA Inbound Channel Adapter lets you poll a database in order to retrieve one or more JPA entities; the retrieved data is then used to start a Spring Integration flow with the retrieved data as the message payload.

Additionally, you may use JPA Outbound Channel Adapters at the end of your flow in order to persist data, essentially terminating the flow at the end of the persistence operation.

However, how can you execute JPA persistence operations in the middle of a flow? For example, you may have business data that you are processing in your Spring Integration message flow that you would like to persist, yet you still need to execute other components further downstream. Or, instead of polling the database using a poller, you may need to execute JPQL queries and actively retrieve data, which is then processed in subsequent components within your flow.

This is where JPA Outbound Gateways come into play. They give you the ability to persist data as well as retrieving data. To facilitate these uses, Spring Integration provides two types of JPA Outbound Gateways:

  • Updating Outbound Gateway
  • Retrieving Outbound Gateway

Whenever the Outbound Gateway is used to perform an action that saves, updates or solely deletes some records in the database, you need to use an Updating Outbound Gateway. If, for example, an entity is used for persisting, then the merged/persisted entity is returned as a result. In other cases the number of records affected (updated or deleted) is returned instead.

When retrieving (selecting) data from the database, we use a Retrieving Outbound Gateway. With a Retrieving Outbound Gateway, we can use either JPQL, Named Queries (native or JPQL-based) or Native Queries (SQL) for selecting the data and retrieving the results.

An Updating Outbound Gateway is functionally very similar to an Outbound Channel Adapter, except that an Updating Outbound Gateway is used to send a result to the Gateway’s reply channel after performing the given JPA operation.

A Retrieving Outbound Gateway is quite similar to an Inbound Channel Adapter.

[Note]Note

We recommend that you first refer to the JPA Outbound Channel Adapter and JPA Inbound Channel Adapter sections above, as most of the common concepts are explained there.

This similarity was the main factor for using the central JpaExecutor class to unify common functionality as much as possible.

Common to all JPA Outbound Gateways, and similar to the outbound-channel-adapter, we can use

  • Entity classes
  • JPA Query Language (JPQL)
  • Native query
  • Named query

for performing various JPA operations. For configuration examples please see the section called “JPA Outbound Gateway Samples”.

==== Common Configuration Parameters

JPA Outbound Gateways always have access to the Spring Integration Message as input. As such the following parameters are available:

parameter-source-factory

An instance of o.s.i.jpa.support.parametersource.ParameterSourceFactory that will be used to get an instance of o.s.i.jpa.support.parametersource.ParameterSource. The ParameterSource is used to resolve the values of the parameters provided in the query. The parameter-source-factory attribute is ignored if operations are performed using a JPA entity. If a parameter sub-element is used, the factory must be of type ExpressionEvaluatingParameterSourceFactory, located in package o.s.i.jpa.support.parametersource. Optional.

use-payload-as-parameter-source

If set to true, the payload of the Message will be used as a source for providing parameters. If set to false, the entire Message will be available as a source for parameters. If no JPA Parameters are passed in, this property will default to true. This means that using a default BeanPropertyParameterSourceFactory, the bean properties of the payload will be used as a source for parameter values for the to-be-executed JPA query. However, if JPA Parameters are passed in, then this property will by default evaluate to false. The reason is that JPA Parameters allow for SpEL Expressions to be provided and therefore it is highly beneficial to have access to the entire Message, including the Headers.
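As an illustrative sketch (the channel names and query are assumptions), the following Updating Outbound Gateway makes the entire Message available to its parameter expressions so that a header can be referenced:

<int-jpa:updating-outbound-gateway request-channel="updateStudentRequestChannel"
    reply-channel="updateStudentReplyChannel"
    entity-manager="em"
    jpa-query="update Student s set s.lastName = :lastName where s.rollNumber = :rollNumber"
    use-payload-as-parameter-source="false">
    <int-jpa:parameter name="lastName"   expression="payload"/>
    <int-jpa:parameter name="rollNumber" expression="headers['rollNumber']"/>
</int-jpa:updating-outbound-gateway>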

==== Updating Outbound Gateway

<int-jpa:updating-outbound-gateway request-channel=""  1
    auto-startup="true"
    entity-class=""
    entity-manager=""
    entity-manager-factory=""
    id=""
    jpa-operations=""
    jpa-query=""
    named-query=""
    native-query=""
    order=""
    parameter-source-factory=""
    persist-mode="MERGE"
    reply-channel=""  2
    reply-timeout=""  3
    use-payload-as-parameter-source="true">

    <int:poller/>
    <int-jpa:transactional/>

    <int-jpa:parameter name="" type="" value=""/>
    <int-jpa:parameter name="" expression=""/>
</int-jpa:updating-outbound-gateway>

1

The channel from which the outbound gateway will receive messages for performing the desired operation. This attribute is similar to the channel attribute of the outbound-channel-adapter. Optional.

2

The channel to which the gateway will send the response after performing the required JPA operation. If this attribute is not defined, the request message must have a replyChannel header. Optional.

3

Specifies the time the gateway will wait to send the result to the reply channel. Only applies when the reply channel itself might block the send (for example a bounded QueueChannel that is currently full). By default the Gateway will wait indefinitely. The value is specified in milliseconds. Optional.

==== Retrieving Outbound Gateway

<int-jpa:retrieving-outbound-gateway request-channel=""
    auto-startup="true"
    delete-after-poll="false"
    delete-in-batch="false"
    entity-class=""
    id-expression=""  1
    entity-manager=""
    entity-manager-factory=""
    expect-single-result="false"  2
    id=""
    jpa-operations=""
    jpa-query=""
    max-results=""  3
    max-results-expression=""  4
    first-result=""  5
    first-result-expression=""  6
    named-query=""
    native-query=""
    order=""
    parameter-source-factory=""
    reply-channel=""
    reply-timeout=""
    use-payload-as-parameter-source="true">
    <int:poller></int:poller>
    <int-jpa:transactional/>

    <int-jpa:parameter name="" type="" value=""/>
    <int-jpa:parameter name="" expression=""/>
</int-jpa:retrieving-outbound-gateway>

1

(Since Spring Integration 4.0) The SpEL expression used to determine the primaryKey value for the EntityManager.find(Class entityClass, Object primaryKey) method, evaluated against the requestMessage as the root object of the evaluation context. The entityClass argument is determined from the entity-class attribute, if present, otherwise from the payload class. All other attributes are disallowed when id-expression is used. Optional.

2

A boolean flag indicating whether the select operation is expected to return a single result or a List of results. If this flag is set to true, the single entity selected is sent as the payload of the message. If multiple entities are returned, an exception is thrown. If false, the List of entities is sent as the payload of the message. By default the value is false. Optional.

3

This non-zero, non-negative integer value tells the adapter not to select more than the given number of rows when executing the select operation. By default, if this attribute is not set, all records matching the query are selected. This attribute is mutually exclusive with max-results-expression. Optional.

4

An expression that is evaluated to determine the maximum number of results to retrieve from the result set. Mutually exclusive with max-results. Optional.

5

This non-zero, non-negative integer value tells the adapter the position of the first record to retrieve from the result set. This attribute is mutually exclusive with first-result-expression. This attribute was introduced in version 3.0. Optional.

6

This expression is evaluated against the message to determine the position of the first record to retrieve from the result set. This attribute is mutually exclusive with first-result. This attribute was introduced in version 3.0. Optional.

[Important]Important

When choosing to delete entities upon retrieval and you have retrieved a collection of entities, please be aware that by default entities are deleted on a per-entity basis. This may cause performance issues.

Alternatively, you can set the delete-in-batch attribute to true, which will perform a batch delete. However, please be aware of the limitation that cascading deletes are not supported in that case.

JSR 317: Java™ Persistence 2.0 states in Chapter 4.10, "Bulk Update and Delete Operations", that:

"A delete operation only applies to entities of the specified class and its subclasses. It does not cascade to related entities."

For more information please see JSR 317: Java™ Persistence 2.0.

==== JPA Outbound Gateway Samples

This section contains various examples of the Updating Outbound Gateway and the Retrieving Outbound Gateway.

Update using an Entity Class

In this example an Updating Outbound Gateway persists an entity, using solely the entity class org.springframework.integration.jpa.test.entity.Student as the JPA-defining parameter.

<int-jpa:updating-outbound-gateway request-channel="entityRequestChannel"  1
    reply-channel="entityResponseChannel"  2
    entity-class="org.springframework.integration.jpa.test.entity.Student"
    entity-manager="em"/>

1

This is the request channel for the outbound gateway; it is similar to the channel attribute of the outbound-channel-adapter.

2

This is where a gateway differs from an outbound adapter: this is the channel over which the reply of the performed JPA operation is received. If, however, you are not interested in the reply and just want to perform the operation, then using a JPA outbound-channel-adapter is the appropriate choice. In the above case, where we are using the entity class, the reply will be the entity object that was created/merged as a result of the JPA operation.
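As a minimal sketch (illustrative only; entityRequestChannel refers to the channel configured above and student is a Student instance), a caller could trigger the operation like this:

entityRequestChannel.send(MessageBuilder.withPayload(student).build());
// the created/merged Student entity arrives as the payload of the reply Message
// on "entityResponseChannel"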

Update using JPQL

In this example, we will see how we can update an entity using the Java Persistence Query Language (JPQL). For this we use an Updating Outbound Gateway.

<int-jpa:updating-outbound-gateway request-channel="jpaqlRequestChannel"
  reply-channel="jpaqlResponseChannel"
  jpa-query="update Student s set s.lastName = :lastName where s.rollNumber = :rollNumber"  1
  entity-manager="em">
    <int-jpa:parameter name="lastName" expression="payload"/>
    <int-jpa:parameter name="rollNumber" expression="headers['rollNumber']"/>
</int-jpa:updating-outbound-gateway>

1

The JPQL query that will be executed by the gateway. Since an Updating Outbound Gateway is used, only update and delete JPQL queries would be sensible choices.

When a message with a String payload and a rollNumber header containing a long value is sent, the last name of the student with the provided roll number is updated to the value of the message payload. When using an UPDATING gateway, the return value is always an integer value which denotes the number of records affected by execution of the JPQL.
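As a hedged illustration (the channel variable and the values are assumptions), such a request could be built and sent as follows:

Message<String> request = MessageBuilder.withPayload("Doe")        // new last name
        .setHeader("rollNumber", 42L)                               // long roll number
        .build();
jpaqlRequestChannel.send(request);
// the reply on "jpaqlResponseChannel" carries the number of updated records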

Retrieving an Entity using JPQL

The following example uses a Retrieving Outbound Gateway together with JPQL to retrieve (select) one or more entities from the database.

<int-jpa:retrieving-outbound-gateway request-channel="retrievingGatewayReqChannel"
    reply-channel="retrievingGatewayReplyChannel"
    jpa-query="select s from Student s where s.firstName = :firstName and s.lastName = :lastName"
    entity-manager="em">
    <int-jpa:parameter name="firstName" expression="payload"/>
    <int-jpa:parameter name="lastName" expression="headers['lastName']"/>
</int-jpa:retrieving-outbound-gateway>

Retrieving an Entity using id-expression

The following example uses a Retrieving Outbound Gateway together with id-expression to retrieve (find) one and only one entity from the database. The primaryKey is the result of the id-expression evaluation. The entityClass is the class of the Message payload.

<int-jpa:retrieving-outbound-gateway
	request-channel="retrievingGatewayReqChannel"
    reply-channel="retrievingGatewayReplyChannel"
    id-expression="payload.id"
    entity-manager="em"/>

Update using a Named Query

Using a Named Query is basically the same as using a JPQL query directly. The difference is that the named-query attribute is used instead, as seen in the XML snippet below. A sketch of how the named query itself might be declared follows the snippet.

<int-jpa:updating-outbound-gateway request-channel="namedQueryRequestChannel"
    reply-channel="namedQueryResponseChannel"
    named-query="updateStudentByRollNumber"
    entity-manager="em">
    <int-jpa:parameter name="lastName" expression="payload"/>
    <int-jpa:parameter name="rollNumber" expression="headers['rollNumber']"/>
</int-jpa:updating-outbound-gateway>
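
The named query itself must be declared separately, typically on the entity. The following is a hedged sketch only (the query text is an assumption, mirroring the earlier JPQL example):

@Entity
@NamedQuery(name = "updateStudentByRollNumber",
        query = "update Student s set s.lastName = :lastName where s.rollNumber = :rollNumber")
public class Student {
    // id, firstName, lastName and rollNumber properties omitted
}
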
[Note]Note

You can find a complete sample application using Spring Integration’s JPA adapter in the jpa sample.

== JMS Support

Spring Integration provides Channel Adapters for receiving and sending JMS messages. There are actually two JMS-based inbound Channel Adapters. The first uses Spring’s JmsTemplate to receive based on a polling period. The second is "message-driven" and relies upon a Spring MessageListener container. There is also an outbound Channel Adapter which uses the JmsTemplate to convert and send a JMS Message on demand.

As you can see from the above, by using the JmsTemplate and the MessageListener container, Spring Integration relies on Spring’s JMS support. This is important to understand, since most of the attributes exposed on these adapters configure the underlying JmsTemplate and/or MessageListener container. For more details about the JmsTemplate and the MessageListener container, please refer to the Spring JMS documentation.

Whereas the JMS Channel Adapters are intended for unidirectional Messaging (send-only or receive-only), Spring Integration also provides inbound and outbound JMS Gateways for request/reply operations. The inbound gateway relies on one of Spring’s MessageListener container implementations for Message-driven reception that is also capable of sending a return value to the reply-to Destination as provided by the received Message. The outbound Gateway sends a JMS Message to a request-destination (or request-destination-name or request-destination-expression) and then receives a reply Message. The reply-destination reference (or reply-destination-name or reply-destination-expression) can be configured explicitly or else the outbound gateway will use a JMS TemporaryQueue.

Prior to Spring Integration 2.2, if necessary, a TemporaryQueue was created (and removed) for each request/reply. Beginning with Spring Integration 2.2, the outbound gateway can be configured to use a MessageListener container to receive replies instead of directly using a new (or cached) Consumer to receive the reply for each request. When so configured, and no explicit reply destination is provided, a single TemporaryQueue is used for each gateway instead of one for each request.

=== Inbound Channel Adapter

The inbound Channel Adapter requires a reference to either a single JmsTemplate instance or both ConnectionFactory and Destination (a destinationName can be provided in place of the destination reference). The following example defines an inbound Channel Adapter with a Destination reference.

<int-jms:inbound-channel-adapter id="jmsIn" destination="inQueue" channel="exampleChannel">
    <int:poller fixed-rate="30000"/>
</int-jms:inbound-channel-adapter>
[Tip]Tip

Notice from the configuration that the inbound-channel-adapter is a Polling Consumer. That means that it invokes receive() when triggered. This should only be used in situations where polling is done relatively infrequently and timeliness is not important. For all other situations (a vast majority of JMS-based use-cases), the message-driven-channel-adapter described below is a better option.

[Note]Note

All of the JMS adapters that require a reference to the ConnectionFactory will automatically look for a bean named "connectionFactory" by default. That is why you don’t see a "connection-factory" attribute in many of the examples. However, if your JMS ConnectionFactory has a different bean name, then you will need to provide that attribute.

If extract-payload is set to true (which is the default), the received JMS Message will be passed through the MessageConverter. When relying on the default SimpleMessageConverter, this means that the resulting Spring Integration Message will have the JMS Message’s body as its payload. A JMS TextMessage will produce a String-based payload, a JMS BytesMessage will produce a byte array payload, and a JMS ObjectMessage’s Serializable instance will become the Spring Integration Message’s payload. If instead you prefer to have the raw JMS Message as the Spring Integration Message’s payload, then set extract-payload to false.

<int-jms:inbound-channel-adapter id="jmsIn"
    destination="inQueue"
    channel="exampleChannel"
    extract-payload="false"/>
    <int:poller fixed-rate="30000"/>
</int-jms:inbound-channel-adapter>

==== Transactions

Starting with version 4.0, the inbound channel adapter supports the session-transacted attribute. In earlier versions, you had to inject a JmsTemplate with sessionTransacted set to true. (The adapter did allow the acknowledge attribute to be set to transacted but this was incorrect and did not work).

Note, however, that setting session-transacted to true has little value because the transaction is committed immediately after the receive() and before the message is sent to the channel.

If you want the entire flow to be transactional (for example if there is a downstream outbound channel adapter), you must use a transactional poller, with a JmsTransactionManager. Or, consider using a jms-message-driven-channel-adapter with acknowledge set to transacted (the default).
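As a sketch (bean and parameter names are illustrative), the JmsTransactionManager referenced by such a transactional poller could be defined as follows:

@Bean
public JmsTransactionManager jmsTransactionManager(ConnectionFactory connectionFactory) {
    // o.s.jms.connection.JmsTransactionManager; lets the poller's receive() and the
    // downstream send run in a single JMS transaction
    return new JmsTransactionManager(connectionFactory);
}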

=== Message-Driven Channel Adapter

The "message-driven-channel-adapter" requires a reference to either an instance of a Spring MessageListener container (any subclass of AbstractMessageListenerContainer) or both ConnectionFactory and Destination (a destinationName can be provided in place of the destination reference). The following example defines a message-driven Channel Adapter with a Destination reference.

<int-jms:message-driven-channel-adapter id="jmsIn" destination="inQueue" channel="exampleChannel"/>
[Note]Note

The Message-Driven adapter also accepts several properties that pertain to the MessageListener container. These values are only considered if you do not provide a container reference. In that case, an instance of DefaultMessageListenerContainer will be created and configured based on these properties. For example, you can specify the "transaction-manager" reference, the "concurrent-consumers" value, and several other property references and values. Refer to the JavaDoc and Spring Integration’s JMS Schema (spring-integration-jms.xsd) for more details.

If you have a custom listener container implementation (usually a subclass of DefaultMessageListenerContainer), you can either provide a reference to an instance of it using the container attribute, or simply provide its fully qualified class name using the container-class attribute. In that case, the attributes on the adapter are transferred to an instance of your custom container.
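For example (a minimal sketch; the class name is illustrative), a custom container supplied via the container-class attribute might look like this:

// supply "com.example.MyMessageListenerContainer" as the container-class value;
// the adapter's attributes are then applied to an instance of this class
public class MyMessageListenerContainer extends DefaultMessageListenerContainer {
    // custom behavior (for example, overridden lifecycle callbacks) goes here
}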

[Important]Important

Starting with version 4.2, the default acknowledge mode is transacted, unless an external container is provided, in which case the container should be configured as needed. It is recommended to use transacted with the DefaultMessageListenerContainer to avoid message loss.

The extract-payload property has the same effect as described above, and once again its default value is true. The poller sub-element is not applicable for a message-driven Channel Adapter, as it will be actively invoked. For most usage scenarios, the message-driven approach is better since the Messages will be passed along to the MessageChannel as soon as they are received from the underlying JMS consumer.

Finally, the <message-driven-channel-adapter> also accepts the error-channel attribute. This provides the same basic functionality as described in Section 8.4.1, “Enter the GatewayProxyFactoryBean”.

<int-jms:message-driven-channel-adapter id="jmsIn" destination="inQueue"
    channel="exampleChannel"
    error-channel="exampleErrorChannel"/>

When comparing this to the generic gateway configuration, or the JMS inbound-gateway that will be discussed below, the key difference here is that we are in a one-way flow since this is a channel-adapter, not a gateway. Therefore, the flow downstream from the error-channel should also be one-way. For example, it could simply send to a logging handler, or it could be connected to a different JMS <outbound-channel-adapter> element.

When consuming from topics, set the pub-sub-domain attribute to true; set subscription-durable to true for a durable subscription, subscription-shared for a shared subscription (requires a JMS 2.0 broker and has been available since version 4.2). Use subscription-name to name the subscription.

==== Inbound Conversion Errors

[Note]Note

Starting with version 4.2, the error-channel is used for conversion errors, too. Previously, if a JMS <message-driven-channel-adapter/> or <inbound-gateway/> could not deliver a message due to a conversion error, an exception would be thrown back to the container. If the container was configured to use transactions, the message would be rolled back and redelivered repeatedly. The conversion process occurs before and during message construction, so such errors were not sent to the error-channel. Now such conversion exceptions result in an ErrorMessage being sent to the error-channel, with the exception as the payload. If you wish the transaction to be rolled back, and you have an error-channel defined, the integration flow on the error-channel must re-throw the exception (or another exception). If the error flow does not throw an exception, the transaction will be committed and the message removed. If no error-channel is defined, the exception is thrown back to the container, as before.
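
For example (a hedged sketch; the class is illustrative), a POJO wired as a service-activator on the error-channel could re-throw the conversion exception to force a rollback:

public class ConversionErrorRethrower {

    public void rethrow(Throwable payload) {
        // re-throwing rolls the JMS transaction back so the message is redelivered;
        // returning normally would commit the transaction and remove the message
        if (payload instanceof RuntimeException) {
            throw (RuntimeException) payload;
        }
        throw new MessagingException("Inbound conversion failed", payload);
    }
}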

=== Outbound Channel Adapter

The JmsSendingMessageHandler implements the MessageHandler interface and is capable of converting Spring Integration Messages to JMS messages and then sending to a JMS destination. It requires either a jmsTemplate reference or both connectionFactory and destination references (again, the destinationName may be provided in place of the destination). As with the inbound Channel Adapter, the easiest way to configure this adapter is with the namespace support. The following configuration will produce an adapter that receives Spring Integration Messages from the "exampleChannel" and then converts those into JMS Messages and sends them to the JMS Destination reference whose bean name is "outQueue".

<int-jms:outbound-channel-adapter id="jmsOut" destination="outQueue" channel="exampleChannel"/>

As with the inbound Channel Adapters, there is an extract-payload property. However, the meaning is reversed for the outbound adapter. Rather than applying to the JMS Message, the boolean property applies to the Spring Integration Message payload. In other words, the decision is whether to pass the Spring Integration Message itself as the JMS Message body or whether to pass the Spring Integration Message’s payload as the JMS Message body. The default value is once again true. Therefore, if you pass a Spring Integration Message whose payload is a String, a JMS TextMessage will be created. If on the other hand you want to send the actual Spring Integration Message to another system via JMS, then simply set this to false.
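
For example (illustrative only; assuming the configuration above with the default extract-payload="true"), a String payload sent to the channel ends up as the body of a JMS TextMessage:

exampleChannel.send(MessageBuilder.withPayload("Hello JMS").build());
// the adapter converts the String payload into a javax.jms.TextMessage
// and sends it to the "outQueue" Destination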

[Note]Note

Regardless of the boolean value for payload extraction, the Spring Integration MessageHeaders will map to JMS properties as long as you are relying on the default converter or provide a reference to another instance of HeaderMappingMessageConverter (the same holds true for inbound adapters except that in those cases, it’s the JMS properties mapping to Spring Integration MessageHeaders).

==== Transactions

Starting with version 4.0, the outbound channel adapter supports the session-transacted attribute. In earlier versions, you had to inject a JmsTemplate with sessionTransacted set to true. The attribute now sets the property on the built-in default JmsTemplate. If a transaction exists (perhaps from an upstream message-driven-channel-adapter) the send will be performed within the same transaction. Otherwise a new transaction will be started.

=== Inbound Gateway

Spring Integration’s message-driven JMS inbound-gateway delegates to a MessageListener container, supports dynamically adjusting concurrent consumers, and can also handle replies. The inbound gateway requires references to a ConnectionFactory, and a request Destination (or requestDestinationName). The following example defines a JMS "inbound-gateway" that receives from the JMS queue referenced by the bean id "inQueue" and sends to the Spring Integration channel named "exampleChannel".

<int-jms:inbound-gateway id="jmsInGateway"
    request-destination="inQueue"
    request-channel="exampleChannel"/>

Since the gateways provide request/reply behavior instead of unidirectional send or receive, they also have two distinct properties for the "payload extraction" (as discussed above for the Channel Adapters' extract-payload setting). For an inbound-gateway, the extract-request-payload property determines whether the received JMS Message body will be extracted. If false, the JMS Message itself will become the Spring Integration Message payload. The default is true.

Similarly, for an inbound-gateway the extract-reply-payload property applies to the Spring Integration Message that is going to be converted into a reply JMS Message. If you want to pass the whole Spring Integration Message (as the body of a JMS ObjectMessage) then set this to false. By default, it is also true such that the Spring Integration Message payload will be converted into a JMS Message (e.g. String payload becomes a JMS TextMessage).

As with anything else, gateway invocation might result in an error. By default, the producer will not be notified of errors that may have occurred on the consumer side and will time out waiting for the reply. However, there might be times when you want to communicate an error condition back to the caller, in other words treat the Exception as a valid reply by mapping it to a Message. To accomplish this, the JMS Inbound Gateway provides support for a Message Channel to which errors can be sent for processing, potentially resulting in a reply Message payload that conforms to some contract defining what a caller may expect as an "error" reply. Such a channel can be configured via the error-channel attribute.

<int-jms:inbound-gateway request-destination="requestQueue"
          request-channel="jmsinputchannel"
          error-channel="errorTransformationChannel"/>

<int:transformer input-channel="errorTransformationChannel"
        ref="exceptionTransformer" method="createErrorResponse"/>

You might notice that this example looks very similar to that included within Section 8.4.1, “Enter the GatewayProxyFactoryBean”. The same idea applies here: The exceptionTransformer could be a simple POJO that creates error response objects, you could reference the "nullChannel" to suppress the errors, or you could leave error-channel out to let the Exception propagate.
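
As a sketch (the class and the reply type are assumptions; only the createErrorResponse method name comes from the configuration above), the exceptionTransformer POJO might look like this:

public class SampleExceptionTransformer {

    // invoked with the exception carried by the ErrorMessage; the returned value
    // becomes the payload of the reply sent back to the JMS caller
    public String createErrorResponse(Exception cause) {
        return "Request failed: " + cause.getMessage();
    }
}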

When consuming from topics, set the pub-sub-domain attribute to true; set subscription-durable to true for a durable subscription, subscription-shared for a shared subscription (requires a JMS 2.0 broker and has been available since version 4.2). Use subscription-name to name the subscription.

[Important]Important

Starting with version 4.2, the default acknowledge mode is transacted, unless an external container is provided, in which case the container should be configured as needed. It is recommended to use transacted with the DefaultMessageListenerContainer to avoid message loss.

=== Outbound Gateway

The outbound Gateway creates JMS Messages from Spring Integration Messages and then sends them to a request-destination. It will then handle the JMS reply Message either by using a selector to receive from the reply-destination that you configure, or, if no reply-destination is provided, by creating JMS TemporaryQueue instances.