The first part of this chapter covers some background theory and reveals quite a bit about the underlying API that drives Spring Integration’s various messaging components. This information can be helpful if you want to really understand what’s going on behind the scenes. However, if you want to get up and running with the simplified namespace-based configuration of the various elements, feel free to skip ahead to Section 8.1.4, “Endpoint Namespace Support” for now.
As mentioned in the overview, Message Endpoints are responsible for connecting the various messaging components to channels. Over the next several chapters, you will see a number of different components that consume Messages. Some of these are also capable of sending reply Messages. Sending Messages is quite straightforward. As shown above in Section 4.1, “Message Channels”, it’s easy to send a Message to a Message Channel. However, receiving is a bit more complicated. The main reason is that there are two types of consumers: Polling Consumers and Event Driven Consumers.
Of the two, Event Driven Consumers are much simpler. Without any need to manage and schedule a separate poller thread, they are essentially just listeners with a callback method. When connecting to one of Spring Integration’s subscribable Message Channels, this simple option works great. However, when connecting to a buffering, pollable Message Channel, some component has to schedule and manage the polling thread(s). Spring Integration provides two different endpoint implementations to accommodate these two types of consumers. Therefore, the consumers themselves can simply implement the callback interface. When polling is required, the endpoint acts as a container for the consumer instance. The benefit is similar to that of using a container for hosting Message Driven Beans, but since these consumers are simply Spring-managed Objects running within an ApplicationContext, it more closely resembles Spring’s own MessageListener containers.
Spring Integration’s MessageHandler interface is implemented by many of the components within the framework. In other words, it is not part of the public API, and a developer would not typically implement MessageHandler directly. Nevertheless, it is used by a Message Consumer for actually handling the consumed Messages, so being aware of this strategy interface does help in understanding the overall role of a consumer. The interface is defined as follows:
public interface MessageHandler {

    void handleMessage(Message<?> message);

}
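To see the callback contract in action without any framework on the classpath, here is a plain-Java sketch that uses simplified stand-ins for Message, MessageHandler, and a subscribable channel. These types are illustrative only, not the real Spring Integration classes:

```java
import java.util.ArrayList;
import java.util.List;

public class HandlerSketch {

    // Simplified stand-in for Spring Integration's Message: payload only.
    record Message<T>(T payload) {}

    // The strategy interface: one callback per consumed message.
    interface MessageHandler {
        void handleMessage(Message<?> message);
    }

    // A toy subscribable "channel" that dispatches to its handlers on send.
    static class SimpleSubscribableChannel {
        private final List<MessageHandler> handlers = new ArrayList<>();

        void subscribe(MessageHandler handler) {
            handlers.add(handler);
        }

        void send(Message<?> message) {
            // Event-driven: the sender's thread invokes each handler directly,
            // so no polling thread needs to be scheduled or managed.
            for (MessageHandler h : handlers) {
                h.handleMessage(message);
            }
        }
    }

    public static List<Object> demo() {
        List<Object> received = new ArrayList<>();
        SimpleSubscribableChannel channel = new SimpleSubscribableChannel();
        channel.subscribe(m -> received.add(m.payload()));
        channel.send(new Message<>("hello"));
        return received;
    }
}
```

The single-method shape is what lets every component type (router, transformer, splitter, and so on) plug into the same consumption mechanism.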
Despite its simplicity, this provides the foundation for most of the components that will be covered in the following chapters (Routers, Transformers, Splitters, Aggregators, Service Activators, etc). Those components each perform very different functionality with the Messages they handle, but the requirements for actually receiving a Message are the same, and the choice between polling and event-driven behavior is also the same. Spring Integration provides two endpoint implementations that host these callback-based handlers and allow them to be connected to Message Channels.
Because it is the simpler of the two, we will cover the Event Driven Consumer endpoint first.
You may recall that the SubscribableChannel interface provides a subscribe() method and that the method accepts a MessageHandler parameter (as shown in the section called “SubscribableChannel”):
subscribableChannel.subscribe(messageHandler);
Since a handler that is subscribed to a channel does not have to actively poll that channel, this is an Event Driven Consumer, and the implementation provided by Spring Integration accepts a SubscribableChannel and a MessageHandler:

SubscribableChannel channel = context.getBean("subscribableChannel", SubscribableChannel.class);

EventDrivenConsumer consumer = new EventDrivenConsumer(channel, exampleHandler);
Spring Integration also provides a PollingConsumer, and it can be instantiated in the same way except that the channel must implement PollableChannel:

PollableChannel channel = context.getBean("pollableChannel", PollableChannel.class);

PollingConsumer consumer = new PollingConsumer(channel, exampleHandler);
Note: For more information regarding Polling Consumers, please also read Section 4.2, “Poller” as well as Section 4.3, “Channel Adapter”.
There are many other configuration options for the Polling Consumer. For example, the trigger is a required property:
PollingConsumer consumer = new PollingConsumer(channel, handler);
consumer.setTrigger(new IntervalTrigger(30, TimeUnit.SECONDS));
Spring Integration currently provides two implementations of the Trigger interface: IntervalTrigger and CronTrigger. The IntervalTrigger is typically defined with a simple interval (in milliseconds), but also supports an initialDelay property and a boolean fixedRate property (the default is false, i.e. fixed delay):
IntervalTrigger trigger = new IntervalTrigger(1000);
trigger.setInitialDelay(5000);
trigger.setFixedRate(true);
The CronTrigger simply requires a valid cron expression (see the Javadoc for details):
CronTrigger trigger = new CronTrigger("*/10 * * * * MON-FRI");
In addition to the trigger, several other polling-related configuration properties may be specified:
PollingConsumer consumer = new PollingConsumer(channel, handler);
consumer.setMaxMessagesPerPoll(10);
consumer.setReceiveTimeout(5000);
The maxMessagesPerPoll property specifies the maximum number of messages to receive within a given poll operation. This means that the poller will continue calling receive() without waiting, until either null is returned or the maximum is reached.
For example, if a poller has a 10 second interval trigger and a maxMessagesPerPoll setting of 25, and it is polling a channel that has 100 messages in its queue, all 100 messages can be retrieved within 40 seconds.
It grabs 25, waits 10 seconds, grabs the next 25, and so on.
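The drain-then-wait behavior can be sketched with a plain queue standing in for the channel. The helper names pollOnce and cyclesToDrain are hypothetical, not framework API:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class MaxMessagesPerPollDemo {

    // One poll cycle: keep calling receive() (here, Queue.poll) without
    // waiting, until the queue is empty or maxMessagesPerPoll is reached.
    static int pollOnce(Queue<String> channel, int maxMessagesPerPoll) {
        int handled = 0;
        while (handled < maxMessagesPerPoll) {
            String message = channel.poll();   // stands in for receive()
            if (message == null) {
                break;                         // nothing left; end this poll
            }
            handled++;
        }
        return handled;
    }

    // With 100 queued messages and maxMessagesPerPoll = 25, four poll cycles
    // (one per trigger firing) drain the queue.
    public static int cyclesToDrain(int queued, int maxPerPoll) {
        Queue<String> channel = new ArrayDeque<>();
        for (int i = 0; i < queued; i++) {
            channel.add("m" + i);
        }
        int cycles = 0;
        while (!channel.isEmpty()) {
            pollOnce(channel, maxPerPoll);
            cycles++;   // in the real poller, the trigger interval elapses here
        }
        return cycles;
    }
}
```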
The receiveTimeout property specifies the amount of time the poller should wait if no messages are available when it invokes the receive operation. For example, consider two options that seem similar on the surface but are actually quite different: the first has an interval trigger of 5 seconds and a receive timeout of 50 milliseconds while the second has an interval trigger of 50 milliseconds and a receive timeout of 5 seconds. The first one may receive a message up to 4950 milliseconds later than it arrived on the channel (if that message arrived immediately after one of its poll calls returned). On the other hand, the second configuration will never miss a message by more than 50 milliseconds. The difference is that the second option requires a thread to wait, but as a result it is able to respond much more quickly to arriving messages. This technique, known as long polling, can be used to emulate event-driven behavior on a polled source.
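The long-polling idea can be demonstrated with a java.util.concurrent.BlockingQueue standing in for a queue-backed channel: a long receive timeout lets the consumer react to a message almost as soon as it arrives, instead of waiting for the next short poll:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class LongPollingDemo {

    // A single poll with a receive timeout: blocks up to timeoutMillis
    // waiting for a message, mirroring a long receiveTimeout setting.
    static String receive(BlockingQueue<String> channel, long timeoutMillis)
            throws InterruptedException {
        return channel.poll(timeoutMillis, TimeUnit.MILLISECONDS);
    }

    public static String demo() {
        BlockingQueue<String> channel = new LinkedBlockingQueue<>();
        Thread producer = new Thread(() -> {
            try {
                Thread.sleep(100);          // message arrives mid-wait
                channel.put("payload");
            } catch (InterruptedException ignored) {
            }
        });
        producer.start();
        try {
            // Long timeout, short reaction: the call returns ~100ms in,
            // as soon as the message shows up.
            String result = receive(channel, 5000);
            producer.join();
            return result;
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

The thread simply parks inside the timed wait, which is why this costs far less CPU than polling in a tight loop with a short interval.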
A Polling Consumer may also delegate to a Spring TaskExecutor, as illustrated in the following example:

PollingConsumer consumer = new PollingConsumer(channel, handler);

TaskExecutor taskExecutor = context.getBean("exampleExecutor", TaskExecutor.class);

consumer.setTaskExecutor(taskExecutor);
Furthermore, a PollingConsumer has a property called adviceChain. This property allows you to specify a List of AOP Advices for handling additional cross-cutting concerns, including transactions. These advices are applied around the doPoll() method. For more in-depth information, please see the sections AOP Advice chains and Transaction Support under Section 8.1.4, “Endpoint Namespace Support”.
The examples above show dependency lookups, but keep in mind that these consumers will most often be configured as Spring bean definitions.
In fact, Spring Integration also provides a FactoryBean called ConsumerEndpointFactoryBean that creates the appropriate consumer type based on the type of channel, and there is full XML namespace support to hide those details even further.
The namespace-based configuration will be featured as each component type is introduced.
Throughout the reference manual, you will see specific configuration examples for endpoint elements, such as router, transformer, service-activator, and so on.
Most of these will support an input-channel attribute and many will support an output-channel attribute.
After being parsed, these endpoint elements produce an instance of either the PollingConsumer or the EventDrivenConsumer, depending on the type of the input-channel that is referenced: PollableChannel or SubscribableChannel, respectively.
When the channel is pollable, then the polling behavior is determined based on the endpoint element’s poller sub-element and its attributes.
The configuration below shows a poller with all available configuration options:
<int:poller cron=""
            default="false"
            error-channel=""
            fixed-delay=""
            fixed-rate=""
            id=""
            max-messages-per-poll=""
            receive-timeout=""
            ref=""
            task-executor=""
            time-unit="MILLISECONDS"
            trigger="">
    <int:advice-chain/>
    <int:transactional/>
</int:poller>
cron: Provides the ability to configure Pollers using Cron expressions. The underlying implementation uses an org.springframework.scheduling.support.CronTrigger. Optional.

default: By setting this attribute to true, it is possible to define exactly one (1) global default poller. An exception is raised if more than one default poller is defined in the application context. Any endpoints connected to a PollableChannel (PollingConsumer) or any SourcePollingChannelAdapter that does not have an explicitly configured poller will then use the global default Poller. Optional. Defaults to false.

error-channel: Identifies the channel to which error messages will be sent if a failure occurs in this poller’s invocation. To completely suppress Exceptions, provide a reference to the nullChannel. Optional.

fixed-delay: The fixed delay trigger uses a PeriodicTrigger under the covers. If the time-unit attribute is not used, the specified value is in milliseconds. Optional.

fixed-rate: The fixed rate trigger uses a PeriodicTrigger under the covers, with the fixedRate flag set to true. If the time-unit attribute is not used, the specified value is in milliseconds. Optional.

id: The Id referring to the Poller’s underlying bean-definition, which is of type PollerMetadata. The id attribute is required for a top-level poller element, unless it is the default poller (default="true").

max-messages-per-poll: Please see Section 4.3.1, “Configuring An Inbound Channel Adapter” for more information. Optional. If not specified, the default value used depends on the context: for a PollingConsumer it defaults to -1 (meaning no limit within a poll), while for a SourcePollingChannelAdapter it defaults to 1.

receive-timeout: Value is set on the underlying PollerMetadata. Optional.

ref: Bean reference to another top-level poller. The ref attribute must not be present on a top-level poller; it is only allowed on inner poller definitions. Optional.

task-executor: Provides the ability to reference a custom task executor. Please see the section below titled TaskExecutor Support for further information. Optional.

time-unit: This attribute specifies the java.util.concurrent.TimeUnit enum value applied to the fixed-delay and fixed-rate values. Defaults to MILLISECONDS. Optional.

trigger: Reference to any spring configured bean which implements the org.springframework.scheduling.Trigger interface. Optional.

advice-chain (sub-element): Allows you to specify extra AOP Advices to handle additional cross-cutting concerns. Please see the section below titled AOP Advice chains for further information. Optional.

transactional (sub-element): Pollers can be made transactional. Please see the section below titled Transaction Support for further information. Optional.
Examples
For example, a simple interval-based poller with a 1-second interval would be configured like this:
<int:transformer input-channel="pollable" ref="transformer" output-channel="output">
    <int:poller fixed-rate="1000"/>
</int:transformer>
As an alternative to fixed-rate you can also use the fixed-delay attribute.
For a poller based on a Cron expression, use the cron attribute instead:
<int:transformer input-channel="pollable" ref="transformer" output-channel="output">
    <int:poller cron="*/10 * * * * MON-FRI"/>
</int:transformer>
If the input channel is a PollableChannel, then the poller configuration is required.
Specifically, as mentioned above, the trigger is a required property of the PollingConsumer class. Therefore, if you omit the poller sub-element for a Polling Consumer endpoint’s configuration, an Exception will be thrown. An Exception is also thrown if you attempt to configure a poller on an element that is connected to a non-pollable channel.
It is also possible to create top-level pollers in which case only a ref is required:
<int:poller id="weekdayPoller" cron="*/10 * * * * MON-FRI"/>

<int:transformer input-channel="pollable" ref="transformer" output-channel="output">
    <int:poller ref="weekdayPoller"/>
</int:transformer>
Note: The ref attribute is only allowed on inner poller definitions. Defining this attribute on a top-level poller will result in a configuration exception being thrown during initialization of the Application Context.
Global Default Pollers
In fact, to simplify the configuration even further, you can define a global default poller.
A single top-level poller within an ApplicationContext may have the default attribute set to true.
In that case, any endpoint with a PollableChannel for its input-channel that is defined within the same ApplicationContext and has no explicitly configured poller sub-element will use that default.
<int:poller id="defaultPoller" default="true" max-messages-per-poll="5" fixed-rate="3000"/>

<!-- No <poller/> sub-element is necessary since there is a default -->
<int:transformer input-channel="pollable" ref="transformer" output-channel="output"/>
Transaction Support
Spring Integration also provides transaction support for the pollers so that each receive-and-forward operation can be performed as an atomic unit-of-work. To configure transactions for a poller, simply add the <transactional/> sub-element. The attributes for this element should be familiar to anyone who has experience with Spring’s Transaction management:
<int:poller fixed-delay="1000">
    <int:transactional transaction-manager="txManager"
                       propagation="REQUIRED"
                       isolation="REPEATABLE_READ"
                       timeout="10000"
                       read-only="false"/>
</int:poller>
For more information on these attributes, please refer to the transaction management chapter of the Spring Framework reference manual.
AOP Advice chains
Since Spring transaction support depends on the proxy mechanism, with a TransactionInterceptor (AOP Advice) handling the transactional behavior of the message flow initiated by the poller, there is sometimes a need to provide extra Advices to handle other cross-cutting behavior associated with the poller. For that purpose, the poller defines an advice-chain element, allowing you to add more advices, i.e. classes that implement the MethodInterceptor interface:
<int:service-activator id="advicedSa" input-channel="goodInputWithAdvice" ref="testBean"
        method="good" output-channel="output">
    <int:poller max-messages-per-poll="1" fixed-rate="10000">
        <int:advice-chain>
            <ref bean="adviceA"/>
            <beans:bean class="org.bar.SampleAdvice"/>
            <ref bean="txAdvice"/>
        </int:advice-chain>
    </int:poller>
</int:service-activator>
For more information on how to implement MethodInterceptor, please refer to the AOP sections of the Spring reference manual. An advice chain can also be applied to a poller that does not have any transaction configuration, allowing you to enhance the behavior of the message flow initiated by the poller.
Important: When using an advice chain, the <transactional/> child element cannot also be specified. Instead, declare a <tx:advice/> bean and add it to the <advice-chain/>.
TaskExecutor Support
The polling threads may be executed by any instance of Spring’s TaskExecutor abstraction.
This enables concurrency for an endpoint or group of endpoints.
As of Spring 3.0, there is a task namespace in the core Spring Framework, and its <executor/> element supports the creation of a simple thread pool executor.
That element accepts attributes for common concurrency settings such as pool-size and queue-capacity.
Configuring a thread-pooling executor can make a substantial difference in how the endpoint performs under load.
These settings are available per-endpoint since the performance of an endpoint is one of the major factors to consider (the other major factor being the expected volume on the channel to which the endpoint subscribes).
To enable concurrency for a polling endpoint that is configured with the XML namespace support, provide the task-executor reference on its <poller/> element and then provide one or more of the properties shown below:
<int:poller task-executor="pool" fixed-rate="1000"/>

<task:executor id="pool"
               pool-size="5-25"
               queue-capacity="20"
               keep-alive="120"/>
If no task-executor is provided, the consumer’s handler will be invoked in the caller’s thread. Note that the caller is usually the default TaskScheduler. Also, keep in mind that the task-executor attribute can provide a reference to any implementation of Spring’s TaskExecutor interface by specifying the bean name. The executor element above is simply provided for convenience.
As mentioned in the background section for Polling Consumers above, you can also configure a Polling Consumer in such a way as to emulate event-driven behavior. With a long receive-timeout and a short interval-trigger, you can ensure a very timely reaction to arriving messages even on a polled message source. Note that this only applies to sources that have a blocking wait call with a timeout. For example, the File poller does not block; each receive() call returns immediately and either contains new files or not. Therefore, even if a poller specifies a long receive-timeout, that value would never be used in such a scenario. On the other hand, when using Spring Integration’s own queue-based channels, the timeout value does have a chance to participate. The following example demonstrates how a Polling Consumer will receive Messages nearly instantaneously.
<int:service-activator input-channel="someQueueChannel" output-channel="output">
    <int:poller receive-timeout="30000" fixed-rate="10"/>
</int:service-activator>
Using this approach does not carry much overhead, since internally it is nothing more than a timed-wait thread, which does not require nearly as much CPU resource usage as, for example, a thrashing, infinite while loop.
When configuring Pollers with a fixed-delay or fixed-rate attribute, the default implementation will use a PeriodicTrigger instance. The PeriodicTrigger is part of the Core Spring Framework, and it accepts the interval as a constructor argument only. Therefore it cannot be changed at runtime.
However, you can define your own implementation of the org.springframework.scheduling.Trigger interface. You could even use the PeriodicTrigger as a starting point. Then, you can add a setter for the interval (period), or you could even embed your own throttling logic within the trigger itself if desired. The period property will be used with each call to nextExecutionTime to schedule the next poll. To use this custom trigger within pollers, declare the bean definition of the custom Trigger in your application context and inject the dependency into your Poller configuration using the trigger attribute, which references the custom Trigger bean instance.
You can now obtain a reference to the Trigger bean and the polling interval can be changed between polls.
For an example, please see the Spring Integration Samples project. It contains a sample called dynamic-poller, which uses a custom Trigger and demonstrates the ability to change the polling interval at runtime.
https://github.com/SpringSource/spring-integration-samples/tree/master/intermediate
The sample provides a custom Trigger which implements the org.springframework.scheduling.Trigger interface. The sample’s Trigger is based on Spring’s PeriodicTrigger implementation. However, the fields of the custom trigger are not final and the properties have explicit getters and setters, allowing the polling period to be changed dynamically at runtime.
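A minimal sketch of such a mutable trigger is shown below. The nextExecutionTime method mirrors the shape of the org.springframework.scheduling.Trigger contract, but this standalone class is illustrative only and is not the sample’s actual code:

```java
import java.util.Date;   // the classic Trigger API expressed times as Date

public class DynamicPeriodicTrigger {

    // A mutable polling period, consulted on every nextExecutionTime call,
    // so changes take effect at the next scheduled poll.
    private volatile long periodMillis;

    public DynamicPeriodicTrigger(long periodMillis) {
        this.periodMillis = periodMillis;
    }

    public void setPeriod(long periodMillis) {
        this.periodMillis = periodMillis;
    }

    public long getPeriod() {
        return periodMillis;
    }

    // Schedule the next poll relative to the last completion (fixed delay),
    // or relative to "now" for the very first execution.
    public Date nextExecutionTime(Date lastCompletionTime) {
        long base = (lastCompletionTime != null)
                ? lastCompletionTime.getTime()
                : System.currentTimeMillis();
        return new Date(base + periodMillis);
    }
}
```

Because the scheduler asks the trigger for each next execution time, calling setPeriod between polls changes the cadence without restarting anything.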
Note: Because the Trigger method is nextExecutionTime(), any changes to a dynamic trigger will not take effect until the next poll, based on the existing configuration. It is not possible to force a trigger to fire before its currently configured next execution time.
Throughout the reference manual, you will also see specific configuration and implementation examples of various endpoints which can accept a Message or any arbitrary Object as an input parameter.
In the case of an Object, such a parameter will be mapped to a Message payload or part of the payload or header (when using the Spring Expression Language).
However, there are times when the type of the input parameter of the endpoint method does not match the type of the payload or its part. In this scenario we need to perform type conversion.
Spring Integration provides a convenient way for registering type converters (using the Spring ConversionService) within its own conversion service bean, named integrationConversionService. That bean is automatically created as soon as the first converter is defined using the Spring Integration infrastructure. To register a Converter, all you need to do is implement org.springframework.core.convert.converter.Converter, org.springframework.core.convert.converter.GenericConverter, or org.springframework.core.convert.converter.ConverterFactory.
The Converter implementation is the simplest and converts from a single type to another. For more sophistication, such as converting to a class hierarchy, you would implement a GenericConverter and possibly a ConditionalConverter. These give you complete access to the from and to type descriptors, enabling complex conversions. For example, if you have an abstract class Foo that is the target of your conversion (parameter type, channel data type, etc.) and you have two concrete implementations Bar and Baz, and you wish to convert to one or the other based on the input type, the GenericConverter would be a good fit.
Refer to the JavaDocs for these interfaces for more information.
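As a concrete illustration, here is a simple one-type-to-one-type converter in plain Java. The single-method Converter interface is declared locally as a stand-in for org.springframework.core.convert.converter.Converter so the snippet is self-contained:

```java
public class ConverterSketch {

    // Local stand-in for org.springframework.core.convert.converter.Converter;
    // the real interface has exactly this single method.
    interface Converter<S, T> {
        T convert(S source);
    }

    // A simple converter of the kind you would register via
    // <int:converter/> or @IntegrationConverter.
    static class BooleanToNumberConverter implements Converter<Boolean, Number> {
        public Number convert(Boolean source) {
            return source ? 1 : 0;
        }
    }

    public static int demo(boolean flag) {
        return new BooleanToNumberConverter().convert(flag).intValue();
    }
}
```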
When you have implemented your converter, you can register it with convenient namespace support:
<int:converter ref="sampleConverter"/>

<bean id="sampleConverter" class="foo.bar.TestConverter"/>
or as an inner bean:
<int:converter>
    <bean class="o.s.i.config.xml.ConverterParserTests$TestConverter3"/>
</int:converter>
Starting with Spring Integration 4.0, the above configuration is available using annotations:
@Component
@IntegrationConverter
public class TestConverter implements Converter<Boolean, Number> {

    public Number convert(Boolean source) {
        return source ? 1 : 0;
    }

}
or as a @Configuration part:
@Configuration
@EnableIntegration
public class ContextConfiguration {

    @Bean
    @IntegrationConverter
    public SerializingConverter serializingConverter() {
        return new SerializingConverter();
    }

}
Important: When configuring an Application Context, the Spring Framework allows you to add a conversionService bean (see the Configuring a ConversionService chapter). This service is used, when needed, to perform appropriate conversions during bean creation and configuration. In contrast, the integrationConversionService is used for runtime conversions. These uses are quite different; converters that are intended for use when wiring bean constructor-args and properties may produce unintended results if used at runtime for Spring Integration expression evaluation against Messages within Datatype Channels, Payload Type transformers, etc. However, if you do want to use the Spring conversionService as the Spring Integration integrationConversionService, you can configure an alias in the Application Context:

<alias name="conversionService" alias="integrationConversionService"/>

In this case the conversionService's Converters will be available for Spring Integration runtime conversion.
Starting with version 5.0, by default, the method invocation mechanism is based on the org.springframework.messaging.handler.invocation.InvocableHandlerMethod infrastructure. Its HandlerMethodArgumentResolver implementations (e.g. PayloadArgumentResolver and MessageMethodArgumentResolver) can use the MessageConverter abstraction to convert an incoming payload to the target method argument type. The conversion can be based on the contentType message header. For this purpose, Spring Integration provides the ConfigurableCompositeMessageConverter, which delegates to a list of registered converters, invoked until one of them returns a non-null result.
By default this converter provides (in strict order):

1. MappingJackson2MessageConverter, if the Jackson processor is present on the classpath
2. ByteArrayMessageConverter
3. ObjectStringMessageConverter
4. GenericMessageConverter

Please consult their JavaDocs for more information about their purpose and the appropriate contentType value for conversion.
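The first-non-null delegation rule itself is simple. The following self-contained sketch models it, with java.util.function.Function standing in for the MessageConverter abstraction (the delegate implementations here are hypothetical, chosen only to show ordering):

```java
import java.util.List;
import java.util.function.Function;

public class CompositeConverterSketch {

    // The composite's dispatch rule: try each delegate in registration order
    // and return the first non-null result.
    public static Object convert(List<Function<Object, Object>> delegates,
                                 Object payload) {
        for (Function<Object, Object> delegate : delegates) {
            Object result = delegate.apply(payload);
            if (result != null) {
                return result;      // first successful converter wins
            }
        }
        return null;                // no delegate could convert the payload
    }

    public static Object demo() {
        // First delegate only handles byte[]; second only handles String.
        Function<Object, Object> bytesOnly =
                p -> (p instanceof byte[] b) ? new String(b) : null;
        Function<Object, Object> stringOnly =
                p -> (p instanceof String s) ? s.toUpperCase() : null;
        // The byte[] delegate declines, so the String delegate converts.
        return convert(List.of(bytesOnly, stringOnly), "payload");
    }
}
```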
The ConfigurableCompositeMessageConverter is used because it can be supplied with any other MessageConverters, including or excluding the above-mentioned default converters, and registered as an appropriate bean in the application context, overriding the default one:

@Bean(name = IntegrationContextUtils.ARGUMENT_RESOLVER_MESSAGE_CONVERTER_BEAN_NAME)
public ConfigurableCompositeMessageConverter compositeMessageConverter() {
    List<MessageConverter> converters =
            Arrays.asList(new MarshallingMessageConverter(jaxb2Marshaller()),
                          new JavaSerializationMessageConverter());
    return new ConfigurableCompositeMessageConverter(converters);
}
Those two new converters will be registered in the composite before the defaults. You can also choose not to use a ConfigurableCompositeMessageConverter, and instead provide your own MessageConverter by registering a bean with the name integrationArgumentResolverMessageConverter (the IntegrationContextUtils.ARGUMENT_RESOLVER_MESSAGE_CONVERTER_BEAN_NAME constant).
If you want the polling to be asynchronous, a Poller can optionally specify a task-executor attribute pointing to an existing instance of any TaskExecutor bean (Spring 3.0 provides a convenient namespace configuration via the task namespace). However, there are certain things you must understand when configuring a Poller with a TaskExecutor. The problem is that there are two configurations in place, the Poller and the TaskExecutor, and they both have to be in tune with each other; otherwise you might end up creating an artificial memory leak.
Let’s look at the following configuration provided by one of the users on the Spring Integration Forum:
<int:channel id="publishChannel">
    <int:queue/>
</int:channel>

<int:service-activator input-channel="publishChannel" ref="myService">
    <int:poller receive-timeout="5000" task-executor="taskExecutor" fixed-rate="50"/>
</int:service-activator>

<task:executor id="taskExecutor" pool-size="20"/>
The above configuration demonstrates one of those out of tune configurations.
By default, the task executor has an unbounded task queue. The poller keeps scheduling new tasks even though all the threads are blocked waiting for either a new message to arrive, or the timeout to expire. Given that there are 20 threads executing tasks with a 5 second timeout, they will be executed at a rate of 4 per second (5000/20 = 250ms). But, new tasks are being scheduled at a rate of 20 per second, so the internal queue in the task executor will grow at a rate of 16 per second (while the process is idle), so we essentially have a memory leak.
One of the ways to handle this is to set the queue-capacity attribute of the Task Executor; even 0 is a reasonable value. You can also manage it by specifying what to do with messages that cannot be queued, by setting the rejection-policy attribute of the Task Executor (e.g., DISCARD).
In other words, there are certain details you must understand with regard to configuring the TaskExecutor.
Please refer to Task Execution and Scheduling of the Spring reference manual for more detail on the subject.
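The remedy can also be shown programmatically. The following sketch builds a ThreadPoolExecutor with a bounded queue and a discarding rejection policy, the plain-Java equivalent of setting queue-capacity and rejection-policy="DISCARD" on <task:executor/>:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedExecutorDemo {

    public static int runAndCountDiscards() {
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                1, 1,                                // single worker thread
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(2),         // queue-capacity="2"
                new ThreadPoolExecutor.DiscardPolicy()); // silently drop extras

        CountDownLatch release = new CountDownLatch(1);
        try {
            // Submit 10 tasks that all block until released: the first goes
            // straight to the worker, two sit in the bounded queue, and the
            // remaining seven are discarded instead of growing an unbounded
            // queue (the memory leak described above).
            for (int i = 0; i < 10; i++) {
                executor.execute(() -> {
                    try {
                        release.await();
                    } catch (InterruptedException ignored) {
                    }
                });
            }
            int queued = executor.getQueue().size();
            release.countDown();
            executor.shutdown();
            executor.awaitTermination(5, TimeUnit.SECONDS);
            return queued;
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

With DISCARD, excess poll tasks are simply dropped; the next trigger firing will schedule a fresh one, so no work backlog accumulates.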
Many endpoints are composite beans; this includes all consumers and all polled inbound channel adapters.
Consumers (polled or event-driven) delegate to a MessageHandler; polled adapters obtain messages by delegating to a MessageSource. Often, it is useful to obtain a reference to the delegate bean, perhaps to change configuration at runtime, or for testing. These beans can be obtained from the ApplicationContext with well-known names. MessageHandlers are registered with the application context with a bean id of someConsumer.handler (where someConsumer is the endpoint’s id attribute). MessageSources are registered with a bean id of somePolledAdapter.source, again where somePolledAdapter is the id of the adapter.
The above only applies to the framework component itself. If you use an inner bean definition such as this:
<int:service-activator id="exampleServiceActivator" input-channel="inChannel"
        output-channel="outChannel" method="foo">
    <beans:bean class="org.foo.ExampleServiceActivator"/>
</int:service-activator>
the bean is treated like any inner bean declared that way and is not registered with the application context. If you wish to access this bean in some other manner, declare it at the top level with an id and use the ref attribute instead.
See the Spring Documentation for more information.
Starting with version 4.2, endpoints can be assigned to roles. Roles allow endpoints to be started and stopped as a group; this is particularly useful when using leadership election where a set of endpoints can be started or stopped when leadership is granted or revoked respectively.
You can assign endpoints to roles using XML, Java configuration, or programmatically:
<int:inbound-channel-adapter id="ica" channel="someChannel" expression="'foo'"
        role="cluster" auto-startup="false">
    <int:poller fixed-rate="60000"/>
</int:inbound-channel-adapter>
@Bean
@ServiceActivator(inputChannel = "sendAsyncChannel", autoStartup="false")
@Role("cluster")
public MessageHandler sendAsyncHandler() {
    return // some MessageHandler
}
@Payload("#args[0].toLowerCase()")
@Role("cluster")
public String handle(String payload) {
    return payload.toUpperCase();
}
@Autowired
private SmartLifecycleRoleController roleController;

...

this.roleController.addSmartLifeCycleToRole("cluster", someEndpoint);

...
IntegrationFlow flow -> flow
.handle(..., e -> e.role("cluster"));
Each of these adds the endpoint to the role cluster. Invoking roleController.startLifecyclesInRole("cluster") (and the corresponding stop... method) will start/stop the endpoints.
Note: Any object implementing SmartLifecycle can be added to a role in this way, not just endpoints.
The SmartLifecycleRoleController implements ApplicationListener<AbstractLeaderEvent> and will automatically start/stop its configured SmartLifecycle objects when leadership is granted/revoked (when some bean publishes OnGrantedEvent or OnRevokedEvent, respectively).
Important: When using leadership election to start/stop components, it is important to set the auto-startup XML attribute (autoStartup bean property) to false so that the application context does not start the components during context initialization.
Starting with version 4.3.8, the SmartLifecycleRoleController provides several status methods:

public Collection<String> getRoles()

public boolean allEndpointsRunning(String role)

public boolean noEndpointsRunning(String role)

public Map<String, Boolean> getEndpointsRunningStatus(String role)
Groups of endpoints can be started/stopped based on leadership being granted or revoked, respectively. This is useful in clustered scenarios where shared resources must only be consumed by a single instance. An example of this is a file inbound channel adapter that is polling a shared directory.
To participate in a leader election and be notified when elected leader, when leadership is revoked, or on failure to acquire the resources to become leader, an application creates a component in the application context called a "leader initiator".
Normally a leader initiator is a SmartLifecycle, so it starts up (optionally) automatically when the context starts, and then publishes notifications when leadership changes. Users can also receive failure notifications by setting publishFailedEvents to true (starting with version 5.0), for cases when they want to take a specific action if a failure occurs. By convention, the user provides a Candidate that receives the callbacks and can also revoke the leadership through a Context object provided by the framework.
User code can also listen for org.springframework.integration.leader.event.AbstractLeaderEvents (the super class of OnGrantedEvent and OnRevokedEvent) and respond accordingly, for instance using a SmartLifecycleRoleController. The events contain a reference to the Context object:
public interface Context {

    boolean isLeader();

    void yield();

    String getRole();

}
Starting with version 5.0.6, the context provides a reference to the candidate’s role.
There is a basic implementation of a leader initiator based on the LockRegistry abstraction. To use it, you just need to create an instance as a bean, for example:

@Bean
public LockRegistryLeaderInitiator leaderInitiator(LockRegistry locks) {
    return new LockRegistryLeaderInitiator(locks);
}
If the lock registry is implemented correctly, there will only ever be at most one leader.
If the lock registry also provides locks that throw exceptions (ideally InterruptedException) when they expire or are broken, the duration of the leaderless periods can be as short as is allowed by the inherent latency in the lock implementation. By default, there is a busyWaitMillis property that adds some additional latency, to prevent CPU starvation in the (more usual) case that the locks are imperfect and you only know they expired by trying to obtain one again.
See the section called “Zookeeper Leadership Event Handling” for more information about leadership election and events using Zookeeper.
The primary purpose of a Gateway is to hide the messaging API provided by Spring Integration. It allows your application’s business logic to be completely unaware of the Spring Integration API; by using a generic Gateway, your code interacts only with a simple interface.
As mentioned above, it would be great to have no dependency on the Spring Integration API at all, including the gateway class. For that reason, Spring Integration provides the GatewayProxyFactoryBean, which generates a proxy for any interface and internally invokes the gateway methods shown below. Using dependency injection, you can then expose the interface to your business methods. Here is an example of an interface that can be used to interact with Spring Integration:
package org.cafeteria;

public interface Cafe {

    void placeOrder(Order order);

}
Namespace support is also provided which allows you to configure such an interface as a service as demonstrated by the following example.
<int:gateway id="cafeService"
     service-interface="org.cafeteria.Cafe"
     default-request-channel="requestChannel"
     default-reply-timeout="10000"
     default-reply-channel="replyChannel"/>
With this configuration defined, the "cafeService" can now be injected into other beans, and the code that invokes the methods on that proxied instance of the Cafe interface has no awareness of the Spring Integration API. The general approach is similar to that of Spring Remoting (RMI, HttpInvoker, etc.). See the "Samples" Appendix for an example that uses this "gateway" element (in the Cafe demo).
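The proxying technique itself is standard JDK dynamic proxying. The following self-contained sketch (with a simplified Cafe interface and a queue standing in for a message channel; this is not the actual GatewayProxyFactoryBean code) shows how an interface call can be turned into a "message", leaving the caller unaware of any messaging API:

```java
import java.lang.reflect.Proxy;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class GatewaySketch {

    interface Cafe {                      // business interface; no messaging API in sight
        void placeOrder(String order);
    }

    // Sends an "order" through a proxy and returns what landed on the channel.
    static Object roundTrip(String order) {
        Queue<Object> requestChannel = new ConcurrentLinkedQueue<>(); // stands in for a MessageChannel

        // Generate a proxy that converts each method call into a "message" on the channel
        Cafe cafe = (Cafe) Proxy.newProxyInstance(
                Cafe.class.getClassLoader(),
                new Class<?>[] { Cafe.class },
                (proxy, method, methodArgs) -> {
                    requestChannel.add(methodArgs[0]);  // payload = first argument
                    return null;                        // void method, so no reply expected
                });

        cafe.placeOrder(order);
        return requestChannel.poll();
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("flat white")); // prints flat white
    }
}
```

The business code only ever sees the Cafe interface; everything messaging-related lives behind the proxy, which is the same separation the gateway namespace support gives you declaratively.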
The defaults in the configuration above are applied to all methods on the gateway interface; if a reply timeout is not specified, the calling thread will wait indefinitely for a reply. See the section called “Gateway behavior when no response arrives”.
The defaults can be overridden for individual methods; see Section 8.4.4, “Gateway Configuration with Annotations and/or XML”.
Typically, you don’t have to specify the default-reply-channel, since a Gateway auto-creates a temporary, anonymous reply channel, where it listens for the reply. However, there are some cases that may prompt you to define a default-reply-channel (or reply-channel with adapter gateways, such as HTTP, JMS, and others).
For some background, we’ll quickly discuss some of the inner workings of the Gateway. A Gateway creates a temporary, anonymous point-to-point reply channel, which is added to the Message Headers under the name replyChannel. When providing an explicit default-reply-channel (reply-channel with remote adapter gateways), you have the option to point to a publish-subscribe channel, so named because you can add more than one subscriber to it. Internally, Spring Integration creates a Bridge between the temporary replyChannel and the explicitly defined default-reply-channel.
So let’s say you want your reply to go not only to the gateway, but also to some other consumer.
In this case you would want two things: a) a named channel you can subscribe to and b) that channel is a publish-subscribe-channel. The default strategy used by the gateway will not satisfy those needs, because the reply channel added to the header is anonymous and point-to-point.
This means that no other subscriber can get a handle to it and even if it could, the channel has point-to-point behavior such that only one subscriber would get the Message.
So, by defining a default-reply-channel, you can point to a channel of your choosing (in this case, a publish-subscribe-channel). The Gateway creates a bridge from it to the temporary, anonymous reply channel stored in the header.
Another case where you might want to provide a reply channel explicitly is for monitoring or auditing via an interceptor (e.g., wiretap). You need a named channel in order to configure a Channel Interceptor.
public interface Cafe {

    @Gateway(requestChannel="orders")
    void placeOrder(Order order);

}
You may alternatively provide such content in method sub-elements, if you prefer XML configuration (see the next paragraph).
It is also possible to pass values to be interpreted as Message headers on the Message that is created and sent to the request channel, by using the @Header annotation:
public interface FileWriter {

    @Gateway(requestChannel="filesOut")
    void write(byte[] content, @Header(FileHeaders.FILENAME) String filename);

}
If you prefer the XML approach of configuring Gateway methods, you can provide method sub-elements to the gateway configuration.
<int:gateway id="myGateway" service-interface="org.foo.bar.TestGateway"
         default-request-channel="inputC">
    <int:default-header name="calledMethod" expression="#gatewayMethod.name"/>
    <int:method name="echo" request-channel="inputA" reply-timeout="2" request-timeout="200"/>
    <int:method name="echoUpperCase" request-channel="inputB"/>
    <int:method name="echoViaDefault"/>
</int:gateway>
You can also provide individual headers per method invocation via XML. This can be very useful if the headers you want to set are static in nature and you don’t want to embed them in the gateway’s method signature via @Header annotations.
For example, in the Loan Broker example we want to influence how aggregation of the Loan quotes will be done based on what type of request was initiated (single quote or all quotes).
Determining the type of the request by evaluating which gateway method was invoked, although possible, would violate the separation of concerns paradigm (the method is a Java artifact), whereas expressing your intention (meta information) via Message headers is natural in a Messaging architecture.
<int:gateway id="loanBrokerGateway"
         service-interface="org.springframework.integration.loanbroker.LoanBrokerGateway">
    <int:method name="getLoanQuote" request-channel="loanBrokerPreProcessingChannel">
        <int:header name="RESPONSE_TYPE" value="BEST"/>
    </int:method>
    <int:method name="getAllLoanQuotes" request-channel="loanBrokerPreProcessingChannel">
        <int:header name="RESPONSE_TYPE" value="ALL"/>
    </int:method>
</int:gateway>
In the above case you can clearly see how a different value will be set for the RESPONSE_TYPE header based on the gateway’s method.
Expressions and "Global" Headers
The <header/> element supports expression as an alternative to value. The SpEL expression is evaluated to determine the value of the header. There is no #root object, but the following variables are available:

#args - an Object[] containing the method arguments

#gatewayMethod - the java.lang.reflect.Method object representing the method in the service-interface that was invoked. A header containing this variable can be used later in the flow, for example, for routing. For example, if you wish to route on the simple method name, you might add a header with the expression #gatewayMethod.name.
Since 3.0, <default-header/> elements can be defined to add headers to all messages produced by the gateway, regardless of the method invoked. Specific headers defined for a method take precedence over default headers. Specific headers defined for a method here will also override any @Header annotations in the service interface. However, default headers will NOT override any @Header annotations in the service interface. The gateway now also supports a default-payload-expression, which is applied to all methods (unless overridden).
Using the configuration techniques in the previous section allows control of how method arguments are mapped to message elements (payload and header(s)). When no explicit configuration is used, certain conventions are used to perform the mapping. In some cases, these conventions cannot determine which argument is the payload and which should be mapped to headers.
public String send1(Object foo, Map bar);

public String send2(Map foo, Map bar);
In the first case, the convention maps the first argument to the payload (as long as it is not a Map), and the contents of the second become headers. In the second case (or the first, when the argument for parameter foo is a Map), the framework cannot determine which argument should be the payload, and mapping fails. This can generally be resolved with a payload-expression, a @Payload annotation, and/or a @Headers annotation.
Alternatively, and whenever the conventions break down, you can take the entire responsibility for mapping the method calls to messages. To do this, implement a MethodArgsMessageMapper and provide it to the <gateway/> using the mapper attribute. The mapper maps a MethodArgsHolder, which is a simple class wrapping the java.lang.reflect.Method instance and an Object[] containing the arguments. When providing a custom mapper, the default-payload-expression attribute and <default-header/> elements are not allowed on the gateway; similarly, the payload-expression attribute and <header/> elements are not allowed on any <method/> elements.
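Conceptually, such a mapper receives the invoked Method plus the argument array and produces the message. The following plain-Java analogue uses simplified stand-in types (MethodArgs, SimpleMessage); the real MethodArgsMessageMapper and MethodArgsHolder live in Spring Integration and differ in signature:

```java
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

public class MapperSketch {

    // Simplified stand-ins for MethodArgsHolder and Message
    record MethodArgs(Method method, Object[] args) { }
    record SimpleMessage(Object payload, Map<String, Object> headers) { }

    // Map the first argument to the payload and record the method name as a header
    static SimpleMessage toMessage(MethodArgs holder) {
        Map<String, Object> headers = new HashMap<>();
        headers.put("calledMethod", holder.method().getName());
        return new SimpleMessage(holder.args()[0], headers);
    }

    interface SampleGateway {
        void send(String payload);
    }

    // Builds a message from a sample invocation and renders "payload / headerValue"
    static String demo() throws Exception {
        Method m = SampleGateway.class.getMethod("send", String.class);
        SimpleMessage message = toMessage(new MethodArgs(m, new Object[] { "hello" }));
        return message.payload() + " / " + message.headers().get("calledMethod");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // prints hello / send
    }
}
```

A custom mapper like this takes over all of the conventions described above: whatever it returns is the message, so the expression- and annotation-based options are no longer applicable.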
Mapping Method Arguments
Here are examples showing how method arguments can be mapped to the message (and some examples of invalid configuration):
public interface MyGateway {

    void payloadAndHeaderMapWithoutAnnotations(String s, Map<String, Object> map);

    void payloadAndHeaderMapWithAnnotations(@Payload String s, @Headers Map<String, Object> map);

    void headerValuesAndPayloadWithAnnotations(@Header("k1") String x, @Payload String s, @Header("k2") String y);

    void mapOnly(Map<String, Object> map); // the payload is the map and no custom headers are added

    void twoMapsAndOneAnnotatedWithPayload(@Payload Map<String, Object> payload, Map<String, Object> headers);

    @Payload("#args[0] + #args[1] + '!'")
    void payloadAnnotationAtMethodLevel(String a, String b);

    @Payload("@someBean.exclaim(#args[0])")
    void payloadAnnotationAtMethodLevelUsingBeanResolver(String s);

    void payloadAnnotationWithExpression(@Payload("toUpperCase()") String s);

    void payloadAnnotationWithExpressionUsingBeanResolver(@Payload("@someBean.sum(#this)") String s);

    // invalid
    void twoMapsWithoutAnnotations(Map<String, Object> m1, Map<String, Object> m2);

    // invalid
    void twoPayloads(@Payload String s1, @Payload String s2);

    // invalid
    void payloadAndHeaderAnnotationsOnSameParameter(@Payload @Header("x") String s);

    // invalid
    void payloadAndHeadersAnnotationsOnSameParameter(@Payload @Headers Map<String, Object> map);

}
Note that, in this example, the SpEL variable #this refers to the argument (in this case, the value of s).
The XML equivalent looks a little different, since there is no #this context for the method argument, but expressions can refer to method arguments using the #args variable:
<int:gateway id="myGateway" service-interface="org.foo.bar.MyGateway">
    <int:method name="send1" payload-expression="#args[0] + 'bar'"/>
    <int:method name="send2" payload-expression="@someBean.sum(#args[0])"/>
    <int:method name="send3" payload-expression="#method"/>
    <int:method name="send4">
        <int:header name="foo" expression="#args[2].toUpperCase()"/>
    </int:method>
</int:gateway>
Starting with version 4.0, gateway service interfaces can be marked with a @MessagingGateway annotation instead of requiring the definition of a <gateway/> XML element for configuration. The following compares the two approaches for configuring the same gateway:
<int:gateway id="myGateway" service-interface="org.foo.bar.TestGateway"
         default-request-channel="inputC">
    <int:default-header name="calledMethod" expression="#gatewayMethod.name"/>
    <int:method name="echo" request-channel="inputA" reply-timeout="2" request-timeout="200"/>
    <int:method name="echoUpperCase" request-channel="inputB">
        <int:header name="foo" value="bar"/>
    </int:method>
    <int:method name="echoViaDefault"/>
</int:gateway>
@MessagingGateway(name = "myGateway", defaultRequestChannel = "inputC",
        defaultHeaders = @GatewayHeader(name = "calledMethod",
                                        expression = "#gatewayMethod.name"))
public interface TestGateway {

    @Gateway(requestChannel = "inputA", replyTimeout = 2, requestTimeout = 200)
    String echo(String payload);

    @Gateway(requestChannel = "inputB", headers = @GatewayHeader(name = "foo", value = "bar"))
    String echoUpperCase(String payload);

    String echoViaDefault(String payload);

}
Important
As with the XML version, Spring Integration creates the proxy implementation, with its messaging infrastructure, when discovering these annotations during a component scan. To perform this scan and register the BeanDefinition in the application context, add the @IntegrationComponentScan annotation to a @Configuration class.
Along with the @MessagingGateway annotation, you can mark a service interface with the @Profile annotation to avoid bean creation if such a profile is not active.
Note
If you have no XML configuration, the @EnableIntegration annotation is required on at least one @Configuration class.
When invoking methods on a Gateway interface that do not have any arguments, the default behavior is to receive a Message from a PollableChannel. At times, however, you may want to trigger no-argument methods so that you can interact with other components downstream that do not require user-provided parameters, such as triggering no-argument SQL calls or stored procedures. To achieve send-and-receive semantics, you must provide a payload. To generate a payload, method parameters on the interface are not necessary. You can use either the @Payload annotation or the payload-expression attribute on the method sub-element in XML.
Below are a few examples of what the payloads could be. Here is an example using the @Payload annotation:
public interface Cafe {
@Payload("new java.util.Date()")
List<Order> retrieveOpenOrders();
}
If a method has no argument and no return value, but does contain a payload expression, it will be treated as a send-only operation.
Of course, the Gateway invocation might result in errors. By default, any error that occurs downstream will be re-thrown as is upon the Gateway’s method invocation. For example, consider the following simple flow:
gateway -> service-activator
If the service invoked by the service activator throws a FooException, the framework wraps it in a MessagingException, attaching the message passed to the service activator in the failedMessage property. Any logging performed by the framework therefore has the full context of the failure. By default, when the exception is caught by the gateway, the FooException is unwrapped and thrown to the caller.
You can configure a throws clause on the gateway method declaration to match a particular exception type in the cause chain. For example, if you want to catch the whole MessagingException, with all the messaging information about the reason for the downstream error, you would have a gateway method like this:
public interface MyGateway {

    void performProcess() throws MessagingException;

}
Since we encourage POJO programming, you may not want to expose the caller to messaging infrastructure.
If your gateway method does not have a throws clause, the gateway traverses the cause tree, looking for a RuntimeException that is not a MessagingException. If none is found, the framework simply throws the MessagingException. If the FooException in the discussion above has a cause of BarException and your method throws BarException, the gateway further unwraps that and throws it to the caller.
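The unwrapping described above amounts to walking the cause chain for the first RuntimeException that is not the framework's wrapper type. The following self-contained sketch illustrates that traversal with a stand-in WrapperException (this is a conceptual illustration, not the framework's actual logic, which differs in detail):

```java
public class UnwrapSketch {

    // Stand-in for MessagingException
    static class WrapperException extends RuntimeException {
        WrapperException(Throwable cause) { super(cause); }
    }

    // Walk the cause chain, returning the first RuntimeException that is not a wrapper;
    // fall back to the original throwable if none is found.
    static Throwable unwrap(Throwable root) {
        for (Throwable t = root; t != null; t = t.getCause()) {
            if (t instanceof RuntimeException && !(t instanceof WrapperException)) {
                return t;
            }
        }
        return root;
    }

    public static void main(String[] args) {
        RuntimeException foo = new IllegalStateException("foo failure");
        Throwable wrapped = new WrapperException(foo);    // framework wraps the real cause
        System.out.println(unwrap(wrapped).getMessage()); // prints foo failure
    }
}
```

With a throws clause naming a type found earlier in the chain (such as the wrapper itself), that type would be thrown instead of the unwrapped cause.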
When a gateway is declared with no service-interface, an internal framework interface, RequestReplyExchanger, is used:
public interface RequestReplyExchanger {

    Message<?> exchange(Message<?> request) throws MessagingException;

}
Before version 5.0, this exchange method did not have a throws clause, and therefore the exception was unwrapped. If you are using this interface and wish to restore the previous unwrap behavior, use a custom service-interface instead, or simply access the cause of the MessagingException yourself.
However, there are times when you may want to simply log the error rather than propagating it, or you may want to treat an Exception as a valid reply by mapping it to a Message that conforms to some "error message" contract that the caller understands.
To accomplish this, the Gateway provides support for a Message Channel dedicated to the errors via the error-channel attribute.
In the example below, a transformer is used to create a reply Message from the Exception:
<int:gateway id="sampleGateway"
    default-request-channel="gatewayChannel"
    service-interface="foo.bar.SimpleGateway"
    error-channel="exceptionTransformationChannel"/>

<int:transformer input-channel="exceptionTransformationChannel"
    ref="exceptionTransformer" method="createErrorResponse"/>
The exceptionTransformer could be a simple POJO that knows how to create the expected error response objects.
That would then be the payload that is sent back to the caller.
Obviously, you could do many more elaborate things in such an "error flow" if necessary.
It might involve routers (including Spring Integration’s ErrorMessageExceptionTypeRouter), filters, and so on.
Most of the time, a simple transformer should be sufficient, however.
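A minimal sketch of such a transformer POJO follows; the ErrorResponse type and its fields are hypothetical, and only the method name createErrorResponse comes from the configuration shown earlier:

```java
public class ExceptionTransformerSketch {

    // Hypothetical error contract understood by the caller
    record ErrorResponse(String code, String detail) { }

    // POJO method referenced by a <transformer/>; receives the Exception as its payload
    public ErrorResponse createErrorResponse(Exception e) {
        return new ErrorResponse("ORDER_FAILED", e.getMessage());
    }

    public static void main(String[] args) {
        ErrorResponse r = new ExceptionTransformerSketch()
                .createErrorResponse(new IllegalArgumentException("no such menu item"));
        System.out.println(r.code() + ": " + r.detail()); // prints ORDER_FAILED: no such menu item
    }
}
```

The object returned from the transformer method becomes the payload of the reply that travels back to the gateway caller, so the caller sees a normal return value rather than an exception.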
Alternatively, you might want to only log the Exception (or send it somewhere asynchronously). If you provide a one-way flow, then nothing would be sent back to the caller. In the case that you want to completely suppress Exceptions, you can provide a reference to the global "nullChannel" (essentially a /dev/null approach). Finally, as mentioned above, if no "error-channel" is defined at all, then the Exceptions will propagate as usual.
When using the @MessagingGateway annotation (see Section 8.4.6, “@MessagingGateway Annotation”), use the errorChannel attribute.
Starting with version 5.0, when using a gateway method with a void return type (one-way flow), the error-channel reference (if provided) is populated in the standard errorChannel header of each sent message. This allows a downstream asynchronous flow, based on a standard ExecutorChannel configuration (or a QueueChannel), to override the default global errorChannel exception-sending behavior. Previously, you had to specify the errorChannel header manually, via the @GatewayHeader annotation or a <header> sub-element; the error-channel property was ignored for void methods with an asynchronous flow, and error messages were sent to the default errorChannel instead.
Important
Exposing the messaging system via simple POJI Gateways obviously provides benefits, but "hiding" the reality of the underlying messaging system does come at a price, so there are certain things you should consider. We want our Java method to return as quickly as possible and not hang for an indefinite amount of time while the caller waits on it to return (whether void, a return value, or a thrown Exception). When regular methods are used as proxies in front of the messaging system, we have to take into account the potentially asynchronous nature of the underlying messaging. This means that a Message initiated by a Gateway could be dropped by a Filter and never reach a component responsible for producing a reply. Some Service Activator method might result in an Exception, thus providing no reply (as we don’t generate null messages). In other words, there are multiple scenarios where a reply message might never arrive. That is perfectly natural in messaging systems. However, think about the implication for the gateway method: the Gateway’s method input arguments were incorporated into a Message and sent downstream, and the reply Message would be converted to the return value of the Gateway’s method. You might therefore want to ensure that, for each Gateway call, there will always be a reply Message. Otherwise, your Gateway method might never return and will hang indefinitely. One way of handling this situation is via an Asynchronous Gateway (explained later in this section). Another way is to explicitly set the reply-timeout attribute. That way, the gateway will not hang any longer than the time specified by the reply-timeout and will return null if that timeout elapses. Finally, you might want to consider setting downstream flags, such as requires-reply on a service-activator or throw-exceptions-on-rejection on a filter. These options are discussed in more detail in the final section of this chapter.
Note
If the downstream flow returns an ErrorMessage, its payload (a Throwable) is treated as a regular downstream error. If an error-channel is configured, it is sent to the error flow; otherwise, the payload is thrown to the caller of the gateway.
There are two timeout properties: requestTimeout and replyTimeout. The request timeout applies only if the channel can block (for example, a bounded QueueChannel that is full). The reply timeout is how long the gateway waits for a reply before returning null; it defaults to infinity. The timeouts can be set as defaults for all methods on the gateway (defaultRequestTimeout and defaultReplyTimeout) or on the @MessagingGateway interface annotation. Individual methods can override these defaults (in <method/> child elements or on the @Gateway annotation).
Starting with version 5.0, the timeouts can be defined as expressions:
@Gateway(payloadExpression = "#args[0]", requestChannel = "someChannel",
         requestTimeoutExpression = "#args[1]", replyTimeoutExpression = "#args[2]")
String lateReply(String payload, long requestTimeout, long replyTimeout);
The evaluation context has a BeanResolver (use @someBean to reference other beans), and the #args array variable is available.
When configuring with XML, the timeout attributes can be a simple long value or a SpEL expression.
<method name="someMethod" request-channel="someRequestChannel"
            payload-expression="#args[0]"
            request-timeout="1000"
            reply-timeout="#args[1]">
</method>
As a pattern, the Messaging Gateway is a very nice way to hide messaging-specific code while still exposing the full capabilities of the messaging system.
As you’ve seen, the GatewayProxyFactoryBean provides a convenient way to expose a proxy over a service-interface, thus giving you POJO-based access to a messaging system (based on objects in your own domain, primitives/Strings, and so on).
But when a gateway is exposed via simple POJO methods that return values, it implies that, for each Request message (generated when the method is invoked), there must be a Reply message (generated when the method returns).
Since messaging systems are naturally asynchronous, you may not always be able to guarantee the contract that "for each request there will always be a reply". Spring Integration 2.0 introduced support for an Asynchronous Gateway, which is a convenient way to initiate flows when you may not know whether a reply is expected or how long it will take for replies to arrive.
A natural way to handle these types of scenarios in Java is to rely upon java.util.concurrent.Future instances, and that is exactly what Spring Integration uses to support an Asynchronous Gateway.
From the XML configuration perspective, there is nothing different; you still define an Asynchronous Gateway the same way as a regular Gateway:
<int:gateway id="mathService"
     service-interface="org.springframework.integration.sample.gateway.futures.MathServiceGateway"
     default-request-channel="requestChannel"/>
However the Gateway Interface (service-interface) is a little different:
public interface MathServiceGateway {

    Future<Integer> multiplyByTwo(int i);

}
As you can see from the example above, the return type of the gateway method is a Future. When the GatewayProxyFactoryBean sees that the return type of a gateway method is a Future, it immediately switches to async mode by utilizing an AsyncTaskExecutor. That is all. The call to such a method always returns immediately with a Future instance. Then you can interact with the Future at your own pace to get the result, cancel, and so on. And, as with any other use of Future instances, calling get() may reveal a timeout, an execution exception, and so on.
MathServiceGateway mathService = ac.getBean("mathService", MathServiceGateway.class);
Future<Integer> result = mathService.multiplyByTwo(number);
// do something else here since the reply might take a moment
int finalResult = result.get(1000, TimeUnit.SECONDS);
For a more detailed example, please refer to the async-gateway sample distributed within the Spring Integration samples.
Starting with version 4.1, async gateway methods can also return ListenableFuture (introduced in Spring Framework 4.0). This return type allows you to provide a callback that is invoked when the result is available (or an exception occurs). When the gateway detects this return type and the task executor (see below) is an AsyncListenableTaskExecutor, the executor’s submitListenable() method is invoked.
ListenableFuture<String> result = this.asyncGateway.async("foo");
result.addCallback(new ListenableFutureCallback<String>() {

    @Override
    public void onSuccess(String result) {
        ...
    }

    @Override
    public void onFailure(Throwable t) {
        ...
    }

});
By default, the GatewayProxyFactoryBean uses org.springframework.core.task.SimpleAsyncTaskExecutor when submitting internal AsyncInvocationTask instances for any gateway method whose return type is a Future. However, the async-executor attribute in the <gateway/> element’s configuration allows you to provide a reference to any implementation of java.util.concurrent.Executor available within the Spring application context.
The (default) SimpleAsyncTaskExecutor supports both Future and ListenableFuture return types, returning FutureTask or ListenableFutureTask, respectively. Also see the section called “CompletableFuture” below.
Even though there is a default executor, it is often useful to provide an external one so that you can identify its threads in logs (when using XML, the thread name is based on the executor’s bean name):
@Bean
public AsyncTaskExecutor exec() {
    SimpleAsyncTaskExecutor simpleAsyncTaskExecutor = new SimpleAsyncTaskExecutor();
    simpleAsyncTaskExecutor.setThreadNamePrefix("exec-");
    return simpleAsyncTaskExecutor;
}

@MessagingGateway(asyncExecutor = "exec")
public interface ExecGateway {

    @Gateway(requestChannel = "gatewayChannel")
    Future<?> doAsync(String foo);

}
If you wish to return a different Future implementation, you can provide a custom executor, or disable the executor altogether and return the Future in the reply message payload from the downstream flow. To disable the executor, simply set it to null in the GatewayProxyFactoryBean (setAsyncTaskExecutor(null)). When configuring the gateway with XML, use async-executor=""; when configuring using the @MessagingGateway annotation, use:
@MessagingGateway(asyncExecutor = AnnotationConstants.NULL)
public interface NoExecGateway {

    @Gateway(requestChannel = "gatewayChannel")
    Future<?> doAsync(String foo);

}
Important
If the return type is a specific concrete Future implementation or some other sub-interface with additional methods (such as ListenableFuture), the flow must be invoked on the caller’s thread, and the downstream flow must return a Future of the appropriate type in the reply message payload.
Starting with version 4.2, gateway methods can now return CompletableFuture<?>. There are several modes of operation when returning this type:
When an async executor is provided and the return type is exactly CompletableFuture (not a subclass), the framework runs the task on the executor and immediately returns a CompletableFuture to the caller. CompletableFuture.supplyAsync(Supplier<U> supplier, Executor executor) is used to create the future.
When the async executor is explicitly set to null and the return type is CompletableFuture, or the return type is a subclass of CompletableFuture, the flow is invoked on the caller’s thread. In this scenario, the downstream flow is expected to return a CompletableFuture of the appropriate type.
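The first mode can be pictured with the plain JDK API alone. The following self-contained example mirrors the mechanism (the "invoice-42" flow result and executor setup are illustrative, not framework code):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SupplyAsyncSketch {

    // Runs a "flow" on the executor and waits for its reply, as the framework does internally
    static String callFlow() throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor(); // the gateway's async executor
        try {
            // The caller gets the CompletableFuture immediately; the "flow" runs on the executor
            CompletableFuture<String> reply =
                    CompletableFuture.supplyAsync(() -> "invoice-42", executor);
            return reply.get();
        }
        finally {
            executor.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(callFlow()); // prints invoice-42
    }
}
```

The key property is that supplyAsync() returns before the supplier runs, so the caller thread is never blocked by the flow itself, only by an explicit get() or a chained completion stage.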
Usage Scenarios
In the following scenario, the caller thread returns immediately with a CompletableFuture<Invoice>, which is completed when the downstream flow replies to the gateway (with an Invoice object).
CompletableFuture<Invoice> order(Order order);
<int:gateway service-interface="foo.Service" default-request-channel="orders" />
In the following scenario, the caller thread returns with a CompletableFuture<Invoice> when the downstream flow provides it as the payload of the reply to the gateway. Some other process must complete the future when the invoice is ready.
CompletableFuture<Invoice> order(Order order);
<int:gateway service-interface="foo.Service" default-request-channel="orders" async-executor="" />
In the following scenario, the return type is a CompletableFuture subclass, so the async executor is not used, even though one is available.
MyCompletableFuture<Invoice> order(Order order);
<int:gateway service-interface="foo.Service" default-request-channel="orders" />
In this scenario, the caller thread returns with the MyCompletableFuture<Invoice> when the downstream flow provides it as the payload of the reply to the gateway. Some other process must complete the future when the invoice is ready. If DEBUG logging is enabled, a log entry is emitted, indicating that the async executor cannot be used for this scenario.
CompletableFutures can be used to perform additional manipulation on the reply, as the following example shows:
CompletableFuture<String> process(String data);

...

CompletableFuture result = process("foo")
    .thenApply(t -> t.toUpperCase());

...

String out = result.get(10, TimeUnit.SECONDS);
===== Reactor Mono
Starting with version 5.0, the GatewayProxyFactoryBean allows the use of Project Reactor with gateway interface methods, utilizing a Mono<T> return type. The internal AsyncInvocationTask is wrapped in a Mono.fromCallable(). A Mono can be used to retrieve the result later (similar to a Future<?>), or you can consume from it with the dispatcher invoking your Consumer when the result is returned to the gateway.
@MessagingGateway
public static interface TestGateway {

    @Gateway(requestChannel = "promiseChannel")
    Mono<Integer> multiply(Integer value);

}

...

@ServiceActivator(inputChannel = "promiseChannel")
public Integer multiply(Integer value) {
    return value * 2;
}

...

Flux.just("1", "2", "3", "4", "5")
        .map(Integer::parseInt)
        .flatMap(this.testGateway::multiply)
        .collectList()
        .subscribe(integers -> ...);
Another example is a simple callback scenario:
Mono<Invoice> mono = service.process(myOrder);

mono.subscribe(invoice -> handleInvoice(invoice));
The calling thread continues, with handleInvoice() being called when the flow completes.
==== Gateway behavior when no response arrives
As explained earlier, the Gateway provides a convenient way of interacting with a messaging system via POJO method invocations. However, a typical method invocation, which is generally expected to always return (even if with an Exception), might not always map one-to-one to message exchanges (for example, a reply message might not arrive, which is equivalent to a method not returning). It is important to go over several scenarios, especially in the Sync Gateway case, and understand the default behavior of the Gateway and how to deal with these scenarios to make the Sync Gateway behavior more predictable, regardless of the outcome of the message flow that was initiated from such a Gateway.
There are certain attributes that can be configured to make Sync Gateway behavior more predictable, but some of them might not always work as you might expect. One of them is reply-timeout (at the method level, or default-reply-timeout at the gateway level). So, let’s look at the reply-timeout attribute and see how it can or can’t influence the behavior of the Sync Gateway in various scenarios. We will look at a single-threaded scenario (all components downstream are connected via Direct Channels) and multi-threaded scenarios (for example, somewhere downstream you may have a Pollable or Executor Channel that breaks the single-thread boundary).
Long running process downstream
Sync Gateway - single-threaded.
If a component downstream is still running (e.g., an infinite loop or a very slow service), setting a reply-timeout has no effect, and the Gateway method call will not return until that downstream service exits (via return or exception).
Sync Gateway - multi-threaded.
If a component downstream is still running (e.g., an infinite loop or a very slow service), then in a multi-threaded message flow setting the reply-timeout does have an effect: the gateway method invocation returns once the timeout has been reached, since the GatewayProxyFactoryBean simply polls on the reply channel, waiting for a message until the timeout expires. However, this can result in a null return from the Gateway method if the timeout is reached before the actual reply is produced. It is also important to understand that the reply message (if produced) will be sent to the reply channel after the Gateway method invocation may already have returned, so you must be aware of that and design your flow with this in mind.
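The polling behavior described above can be sketched with plain JDK classes. This is a simplified analogy, not the framework's implementation: a BlockingQueue stands in for the reply channel, and poll(timeout) stands in for the gateway's bounded wait, returning null if the reply arrives too late.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class GatewayTimeoutSketch {

    // Stand-in for the reply channel the gateway polls.
    private final BlockingQueue<String> replyChannel = new LinkedBlockingQueue<>();

    // Simulates the gateway polling the reply channel: returns null
    // if no reply arrives within the timeout, like a Sync Gateway.
    public String receiveReply(long timeoutMillis) throws InterruptedException {
        return replyChannel.poll(timeoutMillis, TimeUnit.MILLISECONDS);
    }

    // Simulates a downstream flow producing the reply on another thread.
    public void produceReplyLater(String reply, long delayMillis) {
        new Thread(() -> {
            try {
                Thread.sleep(delayMillis);
                replyChannel.offer(reply); // the reply may arrive after the caller gave up
            }
            catch (InterruptedException ignored) {
            }
        }).start();
    }

    public static void main(String[] args) throws Exception {
        GatewayTimeoutSketch gw = new GatewayTimeoutSketch();
        gw.produceReplyLater("late reply", 500);
        // Times out before the reply is produced, so the "gateway" returns null.
        System.out.println(gw.receiveReply(50));
    }
}
```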
Downstream component returns 'null'
Sync Gateway - single-threaded. If a component downstream returns null, the Gateway method call will hang indefinitely unless: a) a reply-timeout has been configured, or b) the requires-reply attribute has been set on the downstream component (e.g., service-activator) that might return null. In the latter case, an Exception is thrown and propagated to the Gateway.
Sync Gateway - multi-threaded. Behavior is the same as above.
Downstream component return signature is void while Gateway method signature is non-void
Sync Gateway - single-threaded. If a component downstream returns void and no reply-timeout has been configured, the Gateway method call will hang indefinitely.
Sync Gateway - multi-threaded. Behavior is the same as above.
Downstream component results in Runtime Exception (regardless of the method signature)
Sync Gateway - single-threaded. If a component downstream throws a Runtime Exception, the exception will be propagated via an Error Message back to the Gateway and re-thrown.
Sync Gateway - multi-threaded. Behavior is the same as above.
**Important:** It is also important to understand that, by default, reply-timeout is unbounded*, which means that if it is not explicitly set there are several scenarios (described above) where your Gateway method invocation might hang indefinitely. So, make sure you analyze your flow, and if there is even a remote possibility of one of these scenarios occurring, set the reply-timeout attribute to a safe value. Even better, set the requires-reply attribute of the downstream component to true to ensure a timely response, produced by the throwing of an Exception as soon as that downstream component returns null internally. But also realize that there are some scenarios (see the very first one) where reply-timeout will not help. That means it is also important to analyze your message flow and decide when to use a Sync Gateway vs an Async Gateway. As you have seen, the latter case is simply a matter of defining Gateway methods that return Future instances. Then you are guaranteed to receive that return value and will have more granular control over the results of the invocation. Also, when dealing with a Router, remember that setting the resolution-required attribute to true will result in an Exception thrown by the router if it cannot resolve a particular channel. Likewise, when dealing with a Filter, you can set the throw-exception-on-rejection attribute. In both of these cases, the resulting flow will behave like one containing a service-activator with the requires-reply attribute; in other words, it helps to ensure a timely response from the Gateway method invocation.
**Note:** * reply-timeout is unbounded for `<gateway/>` elements (created by the GatewayProxyFactoryBean). Inbound gateways for external integration (ws, http, etc.) share many characteristics and attributes with these gateways. However, for those inbound gateways, the default reply-timeout is 1000 milliseconds (1 second). If a downstream async hand-off is made to another thread, you may need to increase this attribute to allow enough time for the flow to complete before the gateway times out.
**Important:** It is important to understand that the timer starts when the thread returns to the gateway, i.e. when the flow completes or a message is handed off to another thread. At that time, the calling thread starts waiting for the reply. If the flow was completely synchronous, the reply is immediately available; for asynchronous flows, the thread will wait for up to this time.
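As an illustration, a reply timeout can be set for all methods of a gateway via the default-reply-timeout attribute (the interface name and channel below are hypothetical, for the sketch only):

```xml
<int:gateway id="orderGateway"
             service-interface="example.OrderGateway"
             default-request-channel="orders"
             default-reply-timeout="5000"/>
```

With this configuration, each gateway method waits at most 5 seconds for a reply before returning null (or the per-method reply-timeout, if one is set).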
Also see the Java DSL chapter for options to define gateways via IntegrationFlows.
The Service Activator is the endpoint type for connecting any Spring-managed Object to an input channel so that it may play the role of a service. If the service produces output, it may also be connected to an output channel. Alternatively, an output producing service may be located at the end of a processing pipeline or message flow in which case, the inbound Message’s "replyChannel" header can be used. This is the default behavior if no output channel is defined and, as with most of the configuration options you’ll see here, the same behavior actually applies for most of the other components we have seen.
==== Configuring Service Activator
To create a Service Activator, use the service-activator element with the input-channel and ref attributes:
<int:service-activator input-channel="exampleChannel" ref="exampleHandler"/>
The configuration above selects all methods of the exampleHandler bean that meet one of the messaging requirements: annotated with @ServiceActivator; public; not returning void if requiresReply == true.

The target method for invocation is selected at runtime for each request message by its payload type, falling back to a Message<?>-typed method if one is present on the target class.
Starting with version 5.0, one service method can be marked with @org.springframework.integration.annotation.Default as a fallback for all non-matching cases. This can be useful when using Section 8.1.7, “Content Type Conversion” with the target method being invoked after conversion.
To delegate to an explicitly defined method of any object, simply add the "method" attribute.
<int:service-activator input-channel="exampleChannel" ref="somePojo" method="someMethod"/>
In either case, when the service method returns a non-null value, the endpoint will attempt to send the reply message to an appropriate reply channel. To determine the reply channel, it will first check if an "output-channel" was provided in the endpoint configuration:
<int:service-activator input-channel="exampleChannel" output-channel="replyChannel" ref="somePojo" method="someMethod"/>
If the method returns a result and no "output-channel" is defined, the framework will then check the request Message's replyChannel header value. If that value is available, it will then check its type. If it is a MessageChannel, the reply message will be sent to that channel. If it is a String, the endpoint will attempt to resolve the channel name to a channel instance. If the channel cannot be resolved, a DestinationResolutionException will be thrown. If it can be resolved, the Message will be sent there. If the request Message does not have a replyChannel header and the reply object is a Message, its replyChannel header is consulted for a target destination. This is the technique used for Request Reply messaging in Spring Integration, and it is also an example of the Return Address pattern.
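The Return Address pattern can be sketched without the framework: the request carries a reference to its own reply channel, and the service consults that reference to deliver its answer. This is a stdlib-only analogy (queues stand in for channels), not Spring Integration's implementation.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ReturnAddressSketch {

    // A "message" carrying its own reply channel, per the Return Address pattern.
    static final class Message {
        final String payload;
        final BlockingQueue<String> replyChannel;

        Message(String payload, BlockingQueue<String> replyChannel) {
            this.payload = payload;
            this.replyChannel = replyChannel;
        }
    }

    static final BlockingQueue<Message> requestChannel = new LinkedBlockingQueue<>();

    // The service consults the message's reply channel for the target destination.
    static void handleOneRequest() throws InterruptedException {
        Message request = requestChannel.take();
        request.replyChannel.offer(request.payload.toUpperCase());
    }

    public static String sendAndReceive(String payload) throws InterruptedException {
        BlockingQueue<String> replyChannel = new LinkedBlockingQueue<>();
        requestChannel.offer(new Message(payload, replyChannel));
        handleOneRequest(); // in a real flow this runs on a consumer thread
        return replyChannel.take();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sendAndReceive("hello"));
    }
}
```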
If your method returns a result and you want to discard it and end the flow, you should configure the output-channel to send to a NullChannel. For convenience, the framework registers one with the name nullChannel. See Section 4.1.6, “Special Channels” for more information.
The Service Activator is one of those components that is not required to produce a reply message. If your method returns null or has a void return type, the Service Activator exits after the method invocation, without any signals. This behavior can be controlled by the AbstractReplyProducingMessageHandler.requiresReply option, also exposed as requires-reply when configuring with the XML namespace. If the flag is set to true and the method returns null, a ReplyRequiredException is thrown.
The argument in the service method could be either a Message or an arbitrary type. If the latter, then it is assumed to be the Message payload, which is extracted from the message and injected into the service method. This is generally the recommended approach, as it follows and promotes a POJO model when working with Spring Integration. Arguments may also have @Header or @Headers annotations, as described in the section on annotation support.
**Note:** The service method is not required to have any arguments at all, which means you can implement event-style Service Activators, where all you care about is an invocation of the service method, not worrying about the contents of the message. Think of it as a NULL JMS message. An example use-case for such an implementation could be a simple counter/monitor of messages deposited on the input channel.
Starting with version 4.1, the framework correctly converts Message properties (payload and headers) to Java 8 Optional POJO method parameters:
```java
public class MyBean {

    public String computeValue(Optional<String> payload,
            @Header(value="foo", required=false) String foo1,
            @Header(value="foo") Optional<String> foo2) {
        if (payload.isPresent()) {
            String value = payload.get();
            ...
        }
        else {
            ...
        }
    }

}
```
Using a ref attribute is generally recommended if the custom Service Activator handler implementation can be reused in other <service-activator> definitions. However, if the custom Service Activator handler implementation is only used within a single definition of the <service-activator>, you can provide an inner bean definition:
```xml
<int:service-activator id="exampleServiceActivator" input-channel="inChannel"
        output-channel="outChannel" method="foo">
    <beans:bean class="org.foo.ExampleServiceActivator"/>
</int:service-activator>
```
**Note:** Using both the "ref" attribute and an inner handler definition in the same <service-activator> configuration is not allowed, as it creates an ambiguous condition and will result in an Exception being thrown.
**Important:** If the "ref" attribute references a bean that extends AbstractMessageProducingHandler (such as handlers provided by the framework itself), the configuration is optimized by injecting the output channel into the handler directly. In this case, each "ref" must be to a separate bean instance (or a prototype-scoped bean), or use the inner <bean/> configuration type.
==== Service Activators and the Spring Expression Language (SpEL)
Since Spring Integration 2.0, Service Activators can also benefit from SpEL (http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/expressions.html).
For example, you may now invoke any bean method without pointing to the bean via a ref attribute or including it as an inner bean definition:
```xml
<int:service-activator input-channel="in" output-channel="out"
    expression="@accountService.processAccount(payload, headers.accountId)"/>

<bean id="accountService" class="foo.bar.Account"/>
```
In the above configuration, instead of injecting accountService using a ref or as an inner bean, we are simply using SpEL's @beanId notation and invoking a method which takes a type compatible with the Message payload.
We are also passing a header value.
As you can see, any valid SpEL expression can be evaluated against any content in the Message.
For simple scenarios your Service Activators do not even have to reference a bean if all logic can be encapsulated by such an expression.
<int:service-activator input-channel="in" output-channel="out" expression="payload * 2"/>
In the above configuration our service logic is to simply multiply the payload value by 2, and SpEL lets us handle it relatively easily.
See the Java DSL chapter for more information about configuring a Service Activator.
==== Asynchronous Service Activator
The service activator is invoked by the calling thread; this would be some upstream thread if the input channel is a SubscribableChannel, or a poller thread for a PollableChannel. If the service returns a ListenableFuture<?>, the default action is to send that as the payload of the message sent to the output (or reply) channel.
Starting with version 4.3, you can now set the async attribute to true (setAsync(true) when using Java configuration). If the service returns a ListenableFuture<?> when this is true, the calling thread is released immediately, and the reply message is sent on the thread (from within your service) that completes the future. This is particularly advantageous for long-running services using a PollableChannel, because the poller thread is freed up to perform other services within the framework.
If the service completes the future with an Exception, normal error processing will occur: an ErrorMessage is sent to the errorChannel message header, if present, or otherwise to the default errorChannel (if available).
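The release-the-caller-and-reply-on-the-completing-thread behavior can be illustrated with the JDK's CompletableFuture (a stand-in for ListenableFuture in this sketch; it is not the framework's own code):

```java
import java.util.concurrent.CompletableFuture;

public class AsyncReplySketch {

    // The "service" returns a future immediately; the reply is produced later
    // on the thread that completes the future, not on the calling thread.
    static CompletableFuture<String> slowService(String payload) {
        return CompletableFuture.supplyAsync(() -> {
            try {
                Thread.sleep(100); // simulate a long-running service
            }
            catch (InterruptedException ignored) {
            }
            return payload + "-done";
        });
    }

    public static void main(String[] args) {
        CompletableFuture<String> future = slowService("job");
        // The calling thread is released immediately; the reply handler runs later.
        future.thenAccept(reply -> System.out.println("reply: " + reply));
        System.out.println("caller continues");
        future.join(); // only for the demo, so the JVM waits for the reply
    }
}
```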
==== Delayer

A Delayer is a simple endpoint that allows a Message flow to be delayed by a certain interval.
When a Message is delayed, the original sender will not block.
Instead, the delayed Messages will be scheduled with an instance of org.springframework.scheduling.TaskScheduler
to be sent to the output channel after the delay has passed.
This approach is scalable even for rather long delays, since it does not result in a large number of blocked sender Threads.
On the contrary, in the typical case a thread pool will be used for the actual execution of releasing the Messages.
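The non-blocking release mechanism described above can be sketched with the JDK's ScheduledExecutorService (an analogy for the TaskScheduler; the names here are illustrative, not framework code):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DelayerSketch {

    static final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);

    // Non-blocking delay: the sender returns immediately; a scheduler thread
    // releases the message to the output channel after the delay has passed.
    static void delay(String message, long delayMillis, BlockingQueue<String> outputChannel) {
        scheduler.schedule(() -> outputChannel.offer(message), delayMillis, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws Exception {
        BlockingQueue<String> output = new LinkedBlockingQueue<>();
        long start = System.currentTimeMillis();
        delay("hello", 200, output);
        System.out.println("sender returned after " + (System.currentTimeMillis() - start) + " ms");
        System.out.println(output.poll(2, TimeUnit.SECONDS)); // released after ~200 ms
        scheduler.shutdown();
    }
}
```

Because the sender only schedules a task, even very long delays do not tie up sender threads; only the releases consume pool threads, briefly.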
Below you will find several examples of configuring a Delayer.
The <delayer> element is used to delay the Message flow between two Message Channels. As with the other endpoints, you can provide the input-channel and output-channel attributes, but the delayer also has default-delay and expression attributes (and an expression sub-element) that are used to determine the number of milliseconds that each Message should be delayed.
The following delays all messages by 3 seconds:
<int:delayer id="delayer" input-channel="input" default-delay="3000" output-channel="output"/>
If you need per-Message determination of the delay, then you can also provide the SpEL expression using the expression attribute:
<int:delayer id="delayer" input-channel="input" output-channel="output" default-delay="3000" expression="headers['delay']"/>
In the example above, the 3 second delay would only apply when the expression evaluates to null for a given inbound Message. If you only want to apply a delay to Messages that have a valid result of the expression evaluation, then you can use a default-delay of 0 (the default). For any Message that has a delay of 0 (or less), the Message will be sent immediately, on the calling Thread.
The java configuration equivalent of the second example is:
```java
@ServiceActivator(inputChannel = "input")
@Bean
public DelayHandler delayer() {
    DelayHandler handler = new DelayHandler("delayer.messageGroupId");
    handler.setDefaultDelay(3_000L);
    handler.setDelayExpressionString("headers['delay']");
    handler.setOutputChannelName("output");
    return handler;
}
```
and with the Java DSL:
```java
@Bean
public IntegrationFlow flow() {
    return IntegrationFlows.from("input")
            .delay("delayer.messageGroupId", d -> d
                    .defaultDelay(3_000L)
                    .delayExpression("headers['delay']"))
            .channel("output")
            .get();
}
```
**Note:** The XML parser uses a message group id of <beanName>.messageGroupId.
**Tip:** The delay handler supports expression evaluation results that represent an interval in milliseconds (any Object whose toString() method produces a value that can be parsed into a long), as well as java.util.Date instances representing an absolute time. In the first case, the interval starts at the message's arrival time; in the second case, the Date itself determines the release time. If the computed release time is already in the past, the Message is sent immediately.
**Important:** The expression evaluation may throw an evaluation Exception for various reasons, including an invalid expression, or other conditions. By default, such exceptions are ignored (logged at DEBUG level) and the delayer falls back to the default delay (if any). You can modify this behavior by setting the ignore-expression-failures attribute to false.
**Tip:** Notice in the example above that the delay expression is specified as headers['delay']. This is the SpEL Indexer syntax for accessing a Map element; it simply returns null if the header is not present. In contrast, the dot property accessor syntax (headers.delay) throws an exception when the header is missing:

org.springframework.expression.spel.SpelEvaluationException: EL1008E:(pos 8): Field or property 'delay' cannot be found on object of type 'org.springframework.messaging.MessageHeaders'

So, if there is a possibility of the header being omitted, and you want to fall back to the default delay, it is generally more efficient (and recommended) to use the Indexer syntax instead of the dot property accessor syntax, because detecting the null is faster than catching an exception.
The delayer delegates to an instance of Spring's TaskScheduler abstraction. The default scheduler used by the delayer is the ThreadPoolTaskScheduler instance provided by Spring Integration on startup (see the section on configuring the task scheduler).
If you want to delegate to a different scheduler, you can provide a reference through the delayer element’s scheduler attribute:
```xml
<int:delayer id="delayer" input-channel="input" output-channel="output"
    expression="headers.delay"
    scheduler="exampleTaskScheduler"/>

<task:scheduler id="exampleTaskScheduler" pool-size="3"/>
```
**Tip:** If you configure an external ThreadPoolTaskScheduler, you can set waitForTasksToCompleteOnShutdown = true on that scheduler. This allows 'delay' tasks that are already in the execution state (releasing the Message) to complete successfully when the application is shut down.
**Tip:** Also keep in mind …
==== Delayer and a Message Store
The DelayHandler persists delayed Messages into the Message Group in the provided MessageStore. (The groupId is based on the required id attribute of the <delayer> element.) A delayed message is removed from the MessageStore by the scheduled task just before the DelayHandler sends the Message to the output-channel.
If the provided MessageStore is persistent (e.g. JdbcMessageStore), it provides the ability to not lose Messages on application shutdown. After application startup, the DelayHandler reads Messages from its Message Group in the MessageStore and reschedules them with a delay based on the original arrival time of the Message (if the delay is numeric). For messages where the delay header was a Date, that is used when rescheduling. If a delayed Message remained in the MessageStore for longer than its delay, it will be sent immediately after startup.
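The rescheduling arithmetic for numeric delays can be sketched as follows (a simplification of the behavior described above, not the framework's code):

```java
public class RescheduleSketch {

    // On restart, the remaining delay is recomputed from the original arrival time;
    // a result of 0 means "the delay already elapsed: send immediately after startup".
    static long remainingDelay(long arrivalTimeMillis, long delayMillis, long nowMillis) {
        long remaining = arrivalTimeMillis + delayMillis - nowMillis;
        return Math.max(0, remaining);
    }

    public static void main(String[] args) {
        // Arrived at t=1000 with a 5000 ms delay; restart at t=3000 -> 3000 ms left.
        System.out.println(remainingDelay(1_000, 5_000, 3_000)); // 3000
        // Restart at t=9000 -> the delay already elapsed -> release immediately.
        System.out.println(remainingDelay(1_000, 5_000, 9_000)); // 0
    }
}
```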
The <delayer> can be enriched with the mutually exclusive sub-elements <transactional> or <advice-chain>. The List of these AOP Advices is applied to the proxied internal DelayHandler.ReleaseMessageHandler, which has the responsibility to release the Message, after the delay, on a Thread of the scheduled task. It might be used, for example, when the downstream message flow throws an Exception and the ReleaseMessageHandler's transaction is rolled back. In this case the delayed Message will remain in the persistent MessageStore. You can use any custom org.aopalliance.aop.Advice implementation within the <advice-chain>. A sample configuration of the <delayer> may look like this:
```xml
<int:delayer id="delayer" input-channel="input" output-channel="output"
        expression="headers.delay"
        message-store="jdbcMessageStore">
    <int:advice-chain>
        <beans:ref bean="customAdviceBean"/>
        <tx:advice>
            <tx:attributes>
                <tx:method name="*" read-only="true"/>
            </tx:attributes>
        </tx:advice>
    </int:advice-chain>
</int:delayer>
```
The DelayHandler can be exported as a JMX MBean with managed operations (getDelayedMessageCount and reschedulePersistedMessages), which allows the rescheduling of delayed persisted messages at runtime, for example, if the TaskScheduler has previously been stopped. These operations can be invoked through a Control Bus command, as the following example shows:
```java
Message<String> delayerReschedulingMessage =
    MessageBuilder.withPayload("@'delayer.handler'.reschedulePersistedMessages()").build();
controlBusChannel.send(delayerReschedulingMessage);
```
**Note:** For more information regarding the message store, JMX, and the control bus, see the corresponding sections.
Starting with version 5.0.8, there are two new properties on the delayer:

- maxAttempts (default 5)
- retryDelay (default 1 second)
When a message is released, if the downstream flow fails, the release will be attempted again after the retryDelay. If maxAttempts is reached, the message is discarded (unless the release is transactional, in which case the message will remain in the store, but will no longer be scheduled for release until the application is restarted or the reschedulePersistedMessages() method is invoked, as discussed above). In addition, you can configure a delayedMessageErrorChannel; when a release fails, an ErrorMessage is sent to that channel with the exception as the payload and the originalMessage property set. The ErrorMessage contains an IntegrationMessageHeaderAccessor.DELIVERY_ATTEMPT header containing the current count. If the error flow consumes the error message and exits normally, no further action is taken; if the release is transactional, the transaction will commit and the message is deleted from the store. If the error flow throws an exception, the release will be retried up to maxAttempts as discussed above.
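The retry loop described above can be sketched in plain Java (a simplified model of maxAttempts/retryDelay, not framework code):

```java
public class ReleaseRetrySketch {

    // Attempts a release up to maxAttempts, waiting retryDelay between failures;
    // returns true if some attempt succeeded, false if the message is given up on
    // (discarded, or left in a transactional store as described in the text).
    static boolean release(Runnable downstreamFlow, int maxAttempts, long retryDelayMillis)
            throws InterruptedException {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                downstreamFlow.run();
                return true;
            }
            catch (RuntimeException e) {
                if (attempt < maxAttempts) {
                    Thread.sleep(retryDelayMillis);
                }
            }
        }
        return false;
    }

    public static void main(String[] args) throws InterruptedException {
        int[] calls = {0};
        // Downstream flow fails twice, then succeeds on the third attempt.
        boolean ok = release(() -> {
            if (++calls[0] < 3) {
                throw new RuntimeException("downstream failed");
            }
        }, 5, 10);
        System.out.println(ok + " after " + calls[0] + " attempts");
    }
}
```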
==== Scripting Support

With Spring Integration 2.1 we added support for the JSR223 "Scripting for the Java Platform" specification, introduced in Java version 6. This allows you to use scripts written in any supported language, including Ruby/JRuby, JavaScript and Groovy, to provide the logic for various integration components, similar to the way the Spring Expression Language (SpEL) is used in Spring Integration. For more information about JSR223, please refer to the documentation.
**Important:** Note that this feature requires Java 6 or higher. Sun developed a JSR223 reference implementation which works with Java 5, but it is not officially supported and we have not tested it with Spring Integration.
In order to use a JVM scripting language, a JSR223 implementation for that language must be included in your classpath. Java 6 natively supports JavaScript. The Groovy and JRuby projects provide JSR223 support in their standard distributions. Other language implementations may be available or under development. Please refer to the appropriate project website for more information.
![]() | Important |
---|---|
Various JSR223 language implementations have been developed by third parties. A particular implementation’s compatibility with Spring Integration depends on how well it conforms to the specification and/or the implementer’s interpretation of the specification. |
**Tip:** If you plan to use Groovy as your scripting language, we recommend you use Spring Integration's Groovy Support, as it offers additional features specific to Groovy. However, you will find this section relevant as well.
Depending on the complexity of your integration requirements scripts may be provided inline as CDATA in XML configuration or as a reference to a Spring resource containing the script.
To enable scripting support, Spring Integration defines a ScriptExecutingMessageProcessor, which binds the Message payload to a variable named payload and the Message headers to a headers variable, both accessible within the script execution context. All that is left for you to do is write a script that uses these variables. Below are a couple of sample configurations:
Filter
```xml
<int:filter input-channel="referencedScriptInput">
    <int-script:script lang="ruby" location="some/path/to/ruby/script/RubyFilterTests.rb"/>
</int:filter>

<int:filter input-channel="inlineScriptInput">
    <int-script:script lang="groovy">
    <![CDATA[
    return payload == 'good'
    ]]>
    </int-script:script>
</int:filter>
```
Here, you see that the script can be included inline or can reference a resource location via the location attribute. Additionally, the lang attribute corresponds to the language name (or its JSR223 alias).
Other Spring Integration endpoint elements which support scripting include router, service-activator, transformer, and splitter. The scripting configuration in each case would be identical to the above (besides the endpoint element).
Another useful feature of Scripting support is the ability to update (reload) scripts without having to restart the Application Context. To accomplish this, specify the refresh-check-delay attribute on the script element:
<int-script:script location="..." refresh-check-delay="5000"/>
In the above example, the script location will be checked for updates every 5 seconds. If the script is updated, any invocation that occurs later than 5 seconds since the update will result in execution of the new script.
<int-script:script location="..." refresh-check-delay="0"/>
In the above example the context will be updated with any script modifications as soon as such a modification occurs, providing a simple mechanism for real-time configuration. Any negative value means the script will not be reloaded after initialization of the application context. This is the default behavior.
**Important:** Inline scripts can not be reloaded.
<int-script:script location="..." refresh-check-delay="-1"/>
==== Script variable bindings
Variable bindings are required to enable the script to reference variables externally provided to the script's execution context. As we have seen, payload and headers are used as binding variables by default. You can bind additional variables to a script via <variable> sub-elements:
```xml
<script:script lang="js" location="foo/bar/MyScript.js">
    <script:variable name="foo" value="foo"/>
    <script:variable name="bar" value="bar"/>
    <script:variable name="date" ref="date"/>
</script:script>
```
As shown in the above example, you can bind a script variable either to a scalar value or to a Spring bean reference. Note that payload and headers will still be included as binding variables.
With Spring Integration 3.0, in addition to the variable sub-element, the variables attribute has been introduced. This attribute and the variable sub-elements are not mutually exclusive, and you can combine them within one script component. However, variables must be unique, regardless of where they are defined.
Also, since Spring Integration 3.0, variable bindings are allowed for inline scripts too:
```xml
<service-activator input-channel="input">
    <script:script lang="ruby" variables="foo=FOO, date-ref=dateBean">
        <script:variable name="bar" ref="barBean"/>
        <script:variable name="baz" value="bar"/>
        <![CDATA[
            payload.foo = foo
            payload.date = date
            payload.bar = bar
            payload.baz = baz
            payload
        ]]>
    </script:script>
</service-activator>
```
The example above shows a combination of an inline script, a variable sub-element and a variables attribute. The variables attribute is a comma-separated list, where each segment contains an '='-separated pair of a variable and its value. The variable name can be suffixed with -ref, as in the date-ref variable above. That means that the binding variable will have the name date, but the value will be a reference to the dateBean bean from the application context. This may be useful when using Property Placeholder Configuration or command line arguments.
If you need more control over how variables are generated, you can implement your own Java class using the ScriptVariableGenerator strategy:
```java
public interface ScriptVariableGenerator {

    Map<String, Object> generateScriptVariables(Message<?> message);

}
```
This interface requires you to implement the method generateScriptVariables(Message). The Message argument allows you to access any data available in the Message payload and headers, and the return value is the Map of bound variables. This method will be called every time the script is executed for a Message. All you need to do is provide an implementation of ScriptVariableGenerator and reference it with the script-variable-generator attribute:
```xml
<int-script:script location="foo/bar/MyScript.groovy"
    script-variable-generator="variableGenerator"/>

<bean id="variableGenerator" class="foo.bar.MyScriptVariableGenerator"/>
```
If a script-variable-generator is not provided, script components use the DefaultScriptVariableGenerator, which merges any provided <variable> elements with the payload and headers variables from the Message in its generateScriptVariables(Message) method.
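The merging behavior just described can be sketched with plain collections (a stdlib-only analogy of what the default generator does, not the framework class itself):

```java
import java.util.HashMap;
import java.util.Map;

public class VariableGeneratorSketch {

    // Merges user-provided variables with the message's payload and headers,
    // mirroring the default variable generator's behavior (a sketch).
    static Map<String, Object> generateScriptVariables(
            Object payload, Map<String, Object> headers, Map<String, Object> provided) {
        Map<String, Object> variables = new HashMap<>(provided);
        variables.put("payload", payload);
        variables.put("headers", headers);
        return variables;
    }

    public static void main(String[] args) {
        Map<String, Object> vars = generateScriptVariables(
                "hello", Map.of("id", 42), Map.of("foo", "FOO"));
        System.out.println(vars.get("payload")); // hello
        System.out.println(vars.get("foo"));     // FOO
    }
}
```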
**Important:** You cannot provide both the script-variable-generator attribute and <variable> sub-element(s), as they are mutually exclusive.
==== Groovy Support

In Spring Integration 2.0 we added Groovy support, allowing you to use the Groovy scripting language to provide the logic for various integration components, similar to the way the Spring Expression Language (SpEL) is supported for routing, transformation and other integration concerns. For more information about Groovy, please refer to the Groovy documentation, which you can find on the project website.
With Spring Integration 2.1, Groovy Support’s configuration namespace is an extension of Spring Integration’s Scripting Support and shares the core configuration and behavior described in detail in the Scripting Support section.
Even though Groovy scripts are well supported by the generic Scripting Support, Groovy Support provides the Groovy configuration namespace, which is backed by the Spring Framework's org.springframework.scripting.groovy.GroovyScriptFactory and related components, offering extended capabilities for using Groovy.
Below are a couple of sample configurations:
Filter
```xml
<int:filter input-channel="referencedScriptInput">
    <int-groovy:script location="some/path/to/groovy/file/GroovyFilterTests.groovy"/>
</int:filter>

<int:filter input-channel="inlineScriptInput">
    <int-groovy:script><![CDATA[
    return payload == 'good'
    ]]></int-groovy:script>
</int:filter>
```
As the above examples show, the configuration looks identical to the general Scripting Support configuration.
The only difference is the use of the Groovy namespace as indicated in the examples by the int-groovy namespace prefix.
Also note that the lang attribute on the <script> tag is not valid in this namespace.
==== Groovy object customization
If you need to customize the Groovy object itself, beyond setting variables, you can reference a bean that implements GroovyObjectCustomizer via the customizer attribute. For example, this might be useful if you want to implement a domain-specific language (DSL) by modifying the MetaClass and registering functions to be available within the script:
```xml
<int:service-activator input-channel="groovyChannel">
    <int-groovy:script location="foo/SomeScript.groovy" customizer="groovyCustomizer"/>
</int:service-activator>

<beans:bean id="groovyCustomizer" class="org.foo.MyGroovyObjectCustomizer"/>
```
Setting a custom GroovyObjectCustomizer is not mutually exclusive with <variable> sub-elements or the script-variable-generator attribute. It can also be provided when defining an inline script.
With Spring Integration 3.0, in addition to the variable sub-element, the variables attribute has been introduced. Also, Groovy scripts have the ability to resolve a variable to a bean in the BeanFactory, if a binding variable was not provided with that name:
```xml
<int-groovy:script>
    <![CDATA[
        entityManager.persist(payload)
        payload
    ]]>
</int-groovy:script>
```
where the variable entityManager is an appropriate bean in the application context. For more information regarding <variable>, variables, and script-variable-generator, see the paragraph Script variable bindings of the Scripting Support section.
==== Groovy Script Compiler Customization
The @CompileStatic hint is the most popular Groovy compiler customization option, which can be used at the class or method level. For more information, see the Groovy Reference Manual and, specifically, @CompileStatic.
To utilize this feature for short scripts (in integration scenarios), we are forced to change a simple script like this (a <filter> script):
headers.type == 'good'
to more Java-like code:
```groovy
@groovy.transform.CompileStatic
String filter(Map headers) {
    headers.type == 'good'
}

filter(headers)
```
With that, the filter() method is transformed and compiled to static Java code, bypassing the Groovy dynamic phases of invocation, such as getProperty() factories and CallSite proxies.
Starting with version 4.3, Spring Integration Groovy components can be configured with the compile-static boolean option, specifying that an ASTTransformationCustomizer for @CompileStatic should be added to the internal CompilerConfiguration.
With that in place, we can omit the @CompileStatic method declaration in our script code and still get compiled, plain Java code. In this case the script can remain short, but it still needs to be a little more verbose than the interpreted script:
binding.variables.headers.type == 'good'
Here we can access the headers and payload (or any other) variables only through the groovy.lang.Script binding property since, with @CompileStatic, we don’t have the dynamic GroovyObject.getProperty() capability.
In addition, the compiler-configuration bean reference has been introduced. With this attribute, you can provide any other required Groovy compiler customizations, e.g. an ImportCustomizer.
For more information about this feature, please, refer to the Groovy Documentation:
Advanced compiler configuration.
Note: The Groovy compiler customization does not have any effect on the …
As described in Enterprise Integration Patterns (EIP), the idea behind the Control Bus is that the same messaging system can be used for monitoring and managing the components within the framework as is used for "application-level" messaging. In Spring Integration we build upon the adapters described above so that it’s possible to send Messages as a means of invoking exposed operations. One option for those operations is Groovy scripts.
<int-groovy:control-bus input-channel="operationChannel"/>
The Control Bus has an input channel that can be accessed for invoking operations on the beans in the application context.
The Groovy Control Bus executes messages on the input channel as Groovy scripts.
It takes a message, compiles the body to a Script, customizes it with a GroovyObjectCustomizer
, and then executes it.
The Control Bus' MessageProcessor exposes all beans in the application context that are annotated with @ManagedResource, implement Spring’s Lifecycle interface, or extend Spring’s CustomizableThreadCreator base class (e.g. several of the TaskExecutor and TaskScheduler implementations).
Important: Be careful about using managed beans with custom scopes (e.g. request) in the Control Bus' command scripts, especially inside an async message flow. If the Control Bus' …
If you need to further customize the Groovy objects, you can also provide a reference to a bean that implements GroovyObjectCustomizer
via the customizer
attribute.
<int-groovy:control-bus input-channel="input"
        output-channel="output"
        customizer="groovyCustomizer"/>

<beans:bean id="groovyCustomizer" class="org.foo.MyGroovyObjectCustomizer"/>
=== Adding Behavior to Endpoints
Prior to Spring Integration 2.2, you could add behavior to an entire Integration flow by adding an AOP Advice to a poller’s <advice-chain/>
element.
However, suppose you want to retry just a REST Web Service call, and not any downstream endpoints.
For example, consider the following flow:
inbound-adapter→poller→http-gateway1→http-gateway2→jdbc-outbound-adapter
If you configure some retry logic in an advice chain on the poller and the call to http-gateway2 fails because of a network glitch, the retry causes both http-gateway1 and http-gateway2 to be called a second time. Similarly, after a transient failure in the jdbc-outbound-adapter, both http-gateways would be called a second time before again calling the jdbc-outbound-adapter.
Spring Integration 2.2 adds the ability to add behavior to individual endpoints.
This is achieved by the addition of the <request-handler-advice-chain/>
element to many endpoints.
For example:
<int-http:outbound-gateway id="withAdvice"
        url-expression="'http://localhost/test1'"
        request-channel="requests"
        reply-channel="nextChannel">
    <int:request-handler-advice-chain>
        <ref bean="myRetryAdvice" />
    </int:request-handler-advice-chain>
</int-http:outbound-gateway>
In this case, myRetryAdvice will only be applied locally to this gateway and will not apply to further actions taken downstream after the reply is sent to the nextChannel. The scope of the advice is limited to the endpoint itself.
Important: At this time, you cannot advise an entire … However, a …
In addition to providing the general mechanism to apply AOP Advice classes in this way, three standard Advices are provided:
RequestHandlerRetryAdvice
RequestHandlerCircuitBreakerAdvice
ExpressionEvaluatingRequestHandlerAdvice
These are each described in detail in the following sections.
===== Retry Advice
The retry advice (o.s.i.handler.advice.RequestHandlerRetryAdvice
) leverages the rich retry mechanisms provided by the Spring Retry project.
The core component of spring-retry
is the RetryTemplate
, which allows configuration of sophisticated retry scenarios, including RetryPolicy
and BackoffPolicy
strategies, with a number of implementations, as well as a RecoveryCallback
strategy to determine the action to take when retries are exhausted.
Stateless Retry
Stateless retry is the case where the retry activity is handled entirely within the advice, where the thread pauses (if so configured) and retries the action.
Stateful Retry
Stateful retry is the case where the retry state is managed within the advice, but where an exception is thrown and the caller resubmits the request. An example for stateful retry is when we want the message originator (e.g. JMS) to be responsible for resubmitting, rather than performing it on the current thread. Stateful retry needs some mechanism to detect a retried submission.
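The control-flow difference between the two modes can be sketched in plain Java. This is illustrative only; in Spring Retry the RetryTemplate and retry state handle this for you, and all class, method, and exception names below are assumptions for the sketch, not framework API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Illustrative sketch contrasting stateless and stateful retry control flow.
class RetrySketch {

    // Stateless: the loop (and any pause) happens entirely inside the advice,
    // on the calling thread; the caller sees at most one final exception.
    // Assumes maxAttempts >= 1.
    static Object statelessRetry(int maxAttempts, Supplier<Object> action) {
        RuntimeException last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return action.get();
            }
            catch (RuntimeException e) {
                last = e; // optionally back off here before the next attempt
            }
        }
        throw last; // retries exhausted
    }

    // Stateful: the advice only records how many times it has seen this
    // message (keyed by some message identifier) and rethrows every failure,
    // so the caller (e.g. a JMS listener container) performs the resubmission.
    private final Map<String, Integer> retryState = new HashMap<>();

    Object statefulRetry(String messageId, int maxAttempts, Supplier<Object> action) {
        int attempts = retryState.getOrDefault(messageId, 0);
        try {
            Object result = action.get();
            retryState.remove(messageId); // success: forget the state
            return result;
        }
        catch (RuntimeException e) {
            retryState.put(messageId, attempts + 1);
            if (attempts + 1 >= maxAttempts) {
                retryState.remove(messageId);
                // retries exhausted: this is where a RecoveryCallback would run
                throw new IllegalStateException("retries exhausted", e);
            }
            throw e; // rethrow so the originator resubmits the message
        }
    }
}
```

This is why stateful retry needs a message identifier: without one, the advice could not tell a fresh message from a resubmission.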
Further Information
For more information on spring-retry
, refer to the project’s javadocs, as well as the reference documentation for Spring Batch, where spring-retry
originated.
Warning: The default back off behavior is no back off - retries are attempted immediately. Using a back off policy that causes threads to pause between attempts may cause performance issues, including excessive memory use and thread starvation. In high volume environments, back off policies should be used with caution.
====== Configuring the Retry Advice
The following examples use a simple <service-activator/>
that always throws an exception:
public class FailingService {

    public void service(String message) {
        throw new RuntimeException("foo");
    }
}
Simple Stateless Retry
This example uses the default RetryTemplate
which has a SimpleRetryPolicy
which tries 3 times.
There is no BackOffPolicy
so the 3 attempts are made back-to-back-to-back with no delay between attempts.
There is no RecoveryCallback
so, the result is to throw the exception to the caller after the final failed retry occurs.
In a Spring Integration environment, this final exception might be handled using an error-channel
on the inbound endpoint.
<int:service-activator input-channel="input" ref="failer" method="service">
    <int:request-handler-advice-chain>
        <bean class="o.s.i.handler.advice.RequestHandlerRetryAdvice"/>
    </int:request-handler-advice-chain>
</int:service-activator>

DEBUG [task-scheduler-2]preSend on channel 'input', message: [Payload=...]
DEBUG [task-scheduler-2]Retry: count=0
DEBUG [task-scheduler-2]Checking for rethrow: count=1
DEBUG [task-scheduler-2]Retry: count=1
DEBUG [task-scheduler-2]Checking for rethrow: count=2
DEBUG [task-scheduler-2]Retry: count=2
DEBUG [task-scheduler-2]Checking for rethrow: count=3
DEBUG [task-scheduler-2]Retry failed last attempt: count=3
Simple Stateless Retry with Recovery
This example adds a RecoveryCallback to the above example; it uses an ErrorMessageSendingRecoverer to send an ErrorMessage to a channel.
<int:service-activator input-channel="input" ref="failer" method="service">
    <int:request-handler-advice-chain>
        <bean class="o.s.i.handler.advice.RequestHandlerRetryAdvice">
            <property name="recoveryCallback">
                <bean class="o.s.i.handler.advice.ErrorMessageSendingRecoverer">
                    <constructor-arg ref="myErrorChannel" />
                </bean>
            </property>
        </bean>
    </int:request-handler-advice-chain>
</int:service-activator>

DEBUG [task-scheduler-2]preSend on channel 'input', message: [Payload=...]
DEBUG [task-scheduler-2]Retry: count=0
DEBUG [task-scheduler-2]Checking for rethrow: count=1
DEBUG [task-scheduler-2]Retry: count=1
DEBUG [task-scheduler-2]Checking for rethrow: count=2
DEBUG [task-scheduler-2]Retry: count=2
DEBUG [task-scheduler-2]Checking for rethrow: count=3
DEBUG [task-scheduler-2]Retry failed last attempt: count=3
DEBUG [task-scheduler-2]Sending ErrorMessage :failedMessage:[Payload=...]
Stateless Retry with Customized Policies, and Recovery
For more sophistication, we can provide the advice with a customized RetryTemplate
.
This example continues to use the SimpleRetryPolicy
but it increases the attempts to 4.
It also adds an ExponentialBackOffPolicy where the first retry waits 1 second, the second waits 5 seconds and the third waits 25 (for 4 attempts in all).
<int:service-activator input-channel="input" ref="failer" method="service">
    <int:request-handler-advice-chain>
        <bean class="o.s.i.handler.advice.RequestHandlerRetryAdvice">
            <property name="recoveryCallback">
                <bean class="o.s.i.handler.advice.ErrorMessageSendingRecoverer">
                    <constructor-arg ref="myErrorChannel" />
                </bean>
            </property>
            <property name="retryTemplate" ref="retryTemplate" />
        </bean>
    </int:request-handler-advice-chain>
</int:service-activator>

<bean id="retryTemplate" class="org.springframework.retry.support.RetryTemplate">
    <property name="retryPolicy">
        <bean class="org.springframework.retry.policy.SimpleRetryPolicy">
            <property name="maxAttempts" value="4" />
        </bean>
    </property>
    <property name="backOffPolicy">
        <bean class="org.springframework.retry.backoff.ExponentialBackOffPolicy">
            <property name="initialInterval" value="1000" />
            <property name="multiplier" value="5.0" />
            <property name="maxInterval" value="60000" />
        </bean>
    </property>
</bean>

27.058 DEBUG [task-scheduler-1]preSend on channel 'input', message: [Payload=...]
27.071 DEBUG [task-scheduler-1]Retry: count=0
27.080 DEBUG [task-scheduler-1]Sleeping for 1000
28.081 DEBUG [task-scheduler-1]Checking for rethrow: count=1
28.081 DEBUG [task-scheduler-1]Retry: count=1
28.081 DEBUG [task-scheduler-1]Sleeping for 5000
33.082 DEBUG [task-scheduler-1]Checking for rethrow: count=2
33.082 DEBUG [task-scheduler-1]Retry: count=2
33.083 DEBUG [task-scheduler-1]Sleeping for 25000
58.083 DEBUG [task-scheduler-1]Checking for rethrow: count=3
58.083 DEBUG [task-scheduler-1]Retry: count=3
58.084 DEBUG [task-scheduler-1]Checking for rethrow: count=4
58.084 DEBUG [task-scheduler-1]Retry failed last attempt: count=4
58.086 DEBUG [task-scheduler-1]Sending ErrorMessage :failedMessage:[Payload=...]
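The exponential back off arithmetic behind those sleep intervals (interval starts at initialInterval and is multiplied by multiplier on each retry, capped at maxInterval) can be sketched in plain Java; the class and method names here are illustrative, not Spring Retry API:

```java
// Sketch of ExponentialBackOffPolicy-style interval arithmetic:
// interval(n) = initialInterval * multiplier^n, capped at maxInterval.
class BackOffSketch {

    static long[] intervals(long initialInterval, double multiplier, long maxInterval, int retries) {
        long[] result = new long[retries];
        double interval = initialInterval;
        for (int i = 0; i < retries; i++) {
            result[i] = (long) Math.min(interval, maxInterval); // cap at maxInterval
            interval *= multiplier;
        }
        return result;
    }
}
```

With initial=1000, multiplier=5.0 and max=60000, the first three waits are 1000, 5000 and 25000 ms, matching the "Sleeping for" lines in the log above; a fourth wait would be capped at 60000 ms.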
Namespace Support for Stateless Retry
Starting with version 4.0, the above configuration can be greatly simplified with the namespace support for the retry advice:
<int:service-activator input-channel="input" ref="failer" method="service">
    <int:request-handler-advice-chain>
        <ref bean="retrier" />
    </int:request-handler-advice-chain>
</int:service-activator>

<int:handler-retry-advice id="retrier" max-attempts="4" recovery-channel="myErrorChannel">
    <int:exponential-back-off initial="1000" multiplier="5.0" maximum="60000" />
</int:handler-retry-advice>
In this example, the advice is defined as a top level bean so it can be used in multiple request-handler-advice-chain
s.
You can also define the advice directly within the chain:
<int:service-activator input-channel="input" ref="failer" method="service">
    <int:request-handler-advice-chain>
        <int:retry-advice id="retrier" max-attempts="4" recovery-channel="myErrorChannel">
            <int:exponential-back-off initial="1000" multiplier="5.0" maximum="60000" />
        </int:retry-advice>
    </int:request-handler-advice-chain>
</int:service-activator>
A <handler-retry-advice/>
with no child element uses no back off; it can have a fixed-back-off
or exponential-back-off
child element.
If there is no recovery-channel
, the exception is thrown when retries are exhausted.
The namespace can only be used with stateless retry.
For more complex environments (custom policies etc), use normal <bean/>
definitions.
Simple Stateful Retry with Recovery
To make retry stateful, we need to provide the Advice with a RetryStateGenerator implementation.
This class is used to identify a message as being a resubmission so that the RetryTemplate
can determine the current state of retry for this message.
The framework provides a SpelExpressionRetryStateGenerator
which determines the message identifier using a SpEL expression.
This is shown below; this example again uses the default policies (3 attempts with no back off); of course, as with stateless retry, these policies can be customized.
<int:service-activator input-channel="input" ref="failer" method="service">
    <int:request-handler-advice-chain>
        <bean class="o.s.i.handler.advice.RequestHandlerRetryAdvice">
            <property name="retryStateGenerator">
                <bean class="o.s.i.handler.advice.SpelExpressionRetryStateGenerator">
                    <constructor-arg value="headers['jms_messageId']" />
                </bean>
            </property>
            <property name="recoveryCallback">
                <bean class="o.s.i.handler.advice.ErrorMessageSendingRecoverer">
                    <constructor-arg ref="myErrorChannel" />
                </bean>
            </property>
        </bean>
    </int:request-handler-advice-chain>
</int:service-activator>

24.351 DEBUG [Container#0-1]preSend on channel 'input', message: [Payload=...]
24.368 DEBUG [Container#0-1]Retry: count=0
24.387 DEBUG [Container#0-1]Checking for rethrow: count=1
24.387 DEBUG [Container#0-1]Rethrow in retry for policy: count=1
24.387 WARN [Container#0-1]failure occurred in gateway sendAndReceive
org.springframework.integration.MessagingException: Failed to invoke handler
...
Caused by: java.lang.RuntimeException: foo
...
24.391 DEBUG [Container#0-1]Initiating transaction rollback on application exception
...
25.412 DEBUG [Container#0-1]preSend on channel 'input', message: [Payload=...]
25.412 DEBUG [Container#0-1]Retry: count=1
25.413 DEBUG [Container#0-1]Checking for rethrow: count=2
25.413 DEBUG [Container#0-1]Rethrow in retry for policy: count=2
25.413 WARN [Container#0-1]failure occurred in gateway sendAndReceive
org.springframework.integration.MessagingException: Failed to invoke handler
...
Caused by: java.lang.RuntimeException: foo
...
25.414 DEBUG [Container#0-1]Initiating transaction rollback on application exception
...
26.418 DEBUG [Container#0-1]preSend on channel 'input', message: [Payload=...]
26.418 DEBUG [Container#0-1]Retry: count=2
26.419 DEBUG [Container#0-1]Checking for rethrow: count=3
26.419 DEBUG [Container#0-1]Rethrow in retry for policy: count=3
26.419 WARN [Container#0-1]failure occurred in gateway sendAndReceive
org.springframework.integration.MessagingException: Failed to invoke handler
...
Caused by: java.lang.RuntimeException: foo
...
26.420 DEBUG [Container#0-1]Initiating transaction rollback on application exception
...
27.425 DEBUG [Container#0-1]preSend on channel 'input', message: [Payload=...]
27.426 DEBUG [Container#0-1]Retry failed last attempt: count=3
27.426 DEBUG [Container#0-1]Sending ErrorMessage :failedMessage:[Payload=...]
Comparing with the stateless examples, you can see that with stateful retry, the exception is thrown to the caller on each failure.
Exception Classification for Retry
Spring Retry has a great deal of flexibility for determining which exceptions can invoke retry.
The default configuration will retry for all exceptions and the exception classifier just looks at the top level exception.
If you configure it to, say, only retry on BarException
and your application throws a FooException
where the cause is a BarException
, retry will not occur.
Since Spring Retry 1.0.3, the BinaryExceptionClassifier
has a property traverseCauses
(default false
).
When true
it will traverse exception causes until it finds a match or there is no cause.
To use this classifier for retry, use a SimpleRetryPolicy
created with the constructor that takes the max attempts, the Map
of Exception
s and the boolean (traverseCauses), and inject this policy into the RetryTemplate
.
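The effect of traverseCauses can be sketched as a walk down the exception cause chain. This is illustrative plain Java, not the actual BinaryExceptionClassifier implementation:

```java
// Sketch: classify a throwable as retryable, optionally walking its causes.
class CauseTraversalSketch {

    // Returns true if the throwable itself, or (when traverseCauses is true)
    // any of its causes, is an instance of the retryable type.
    static boolean retryable(Throwable t, Class<? extends Throwable> retryableType,
            boolean traverseCauses) {
        Throwable current = t;
        while (current != null) {
            if (retryableType.isInstance(current)) {
                return true;
            }
            if (!traverseCauses) {
                return false; // only the top-level exception is classified
            }
            Throwable cause = current.getCause();
            current = (cause == current) ? null : cause; // guard against self-cause loops
        }
        return false;
    }
}
```

With this logic, a FooException whose cause is a BarException is only classified as retryable-on-BarException when traverseCauses is true, which is exactly the scenario described above.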
===== Circuit Breaker Advice
The general idea of the Circuit Breaker Pattern is that, if a service is not currently available, then don’t waste time (and resources) trying to use it.
The o.s.i.handler.advice.RequestHandlerCircuitBreakerAdvice
implements this pattern.
When the circuit breaker is in the closed state, the endpoint will attempt to invoke the service.
The circuit breaker goes to the open state if a certain number of consecutive attempts fail; when it is in the open state, new requests will "fail fast" and no attempt will be made to invoke the service until some time has expired.
When that time has expired, the circuit breaker is set to the half-open state. When in this state, if even a single attempt fails, the breaker will immediately go to the open state; if the attempt succeeds, the breaker will go to the closed state, in which case, it won’t go to the open state again until the configured number of consecutive failures again occur. Any successful attempt resets the state to zero failures for the purpose of determining when the breaker might go to the open state again.
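As a rough plain-Java sketch of those transitions (the names and structure are illustrative, not the RequestHandlerCircuitBreakerAdvice internals; time is passed in explicitly to keep the sketch testable):

```java
import java.util.function.Supplier;

// Minimal sketch of the closed/open/half-open transitions described above.
class CircuitBreakerSketch {

    private final int threshold;          // consecutive failures before opening
    private final long halfOpenAfterMs;   // wait after the last failure before a trial call

    private int consecutiveFailures;
    private long lastFailureTime;

    CircuitBreakerSketch(int threshold, long halfOpenAfterMs) {
        this.threshold = threshold;
        this.halfOpenAfterMs = halfOpenAfterMs;
    }

    Object invoke(Supplier<Object> service, long now) {
        // Open state: fail fast until halfOpenAfterMs has elapsed since the last failure.
        if (this.consecutiveFailures >= this.threshold
                && now - this.lastFailureTime < this.halfOpenAfterMs) {
            throw new IllegalStateException("Circuit Breaker is Open");
        }
        // Closed or half-open: attempt the call.
        try {
            Object result = service.get();
            this.consecutiveFailures = 0;  // any success fully closes the breaker
            return result;
        }
        catch (RuntimeException e) {
            this.consecutiveFailures++;    // in half-open state, one failure reopens
            this.lastFailureTime = now;
            throw e;
        }
    }
}
```

Note how the half-open state falls out of the two fields: once the wait has elapsed, one trial call is let through; its failure refreshes lastFailureTime (reopening the breaker), while its success resets the failure count.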
Typically, this Advice might be used for external services, where it might take some time to fail (such as a timeout attempting to make a network connection).
The RequestHandlerCircuitBreakerAdvice
has two properties: threshold
and halfOpenAfter
.
The threshold property represents the number of consecutive failures that need to occur before the breaker goes open.
It defaults to 5.
The halfOpenAfter property represents the time after the last failure that the breaker will wait before attempting another request.
Default is 1000 milliseconds.
Example:
<int:service-activator input-channel="input" ref="failer" method="service">
    <int:request-handler-advice-chain>
        <bean class="o.s.i.handler.advice.RequestHandlerCircuitBreakerAdvice">
            <property name="threshold" value="2" />
            <property name="halfOpenAfter" value="12000" />
        </bean>
    </int:request-handler-advice-chain>
</int:service-activator>

05.617 DEBUG [task-scheduler-1]preSend on channel 'input', message: [Payload=...]
05.638 ERROR [task-scheduler-1]org.springframework.messaging.MessageHandlingException: java.lang.RuntimeException: foo
...
10.598 DEBUG [task-scheduler-2]preSend on channel 'input', message: [Payload=...]
10.600 ERROR [task-scheduler-2]org.springframework.messaging.MessageHandlingException: java.lang.RuntimeException: foo
...
15.598 DEBUG [task-scheduler-3]preSend on channel 'input', message: [Payload=...]
15.599 ERROR [task-scheduler-3]org.springframework.messaging.MessagingException: Circuit Breaker is Open for ServiceActivator
...
20.598 DEBUG [task-scheduler-2]preSend on channel 'input', message: [Payload=...]
20.598 ERROR [task-scheduler-2]org.springframework.messaging.MessagingException: Circuit Breaker is Open for ServiceActivator
...
25.598 DEBUG [task-scheduler-5]preSend on channel 'input', message: [Payload=...]
25.601 ERROR [task-scheduler-5]org.springframework.messaging.MessageHandlingException: java.lang.RuntimeException: foo
...
30.598 DEBUG [task-scheduler-1]preSend on channel 'input', message: [Payload=foo...]
30.599 ERROR [task-scheduler-1]org.springframework.messaging.MessagingException: Circuit Breaker is Open for ServiceActivator
In the above example, the threshold is set to 2 and halfOpenAfter is set to 12 seconds; a new request arrives every 5 seconds. You can see that the first two attempts invoked the service; the third and fourth failed with an exception indicating the circuit breaker is open. The fifth request was attempted because it arrived 15 seconds after the last failure; the sixth attempt failed immediately because the single half-open failure sent the breaker straight back to the open state.
===== Expression Evaluating Advice
The final supplied advice class is the o.s.i.handler.advice.ExpressionEvaluatingRequestHandlerAdvice
.
This advice is more general than the other two advices.
It provides a mechanism to evaluate an expression on the original inbound message sent to the endpoint.
Separate expressions are available to be evaluated, either after success, or failure.
Optionally, a message containing the evaluation result, together with the input message, can be sent to a message channel.
A typical use case for this advice might be with an <ftp:outbound-channel-adapter/>
, perhaps to move the file to one directory if the transfer was successful, or to another directory if it fails.
The Advice has properties to set an expression when successful, an expression for failures, and corresponding channels for each.
For the successful case, the message sent to the successChannel is an AdviceMessage
, with the payload being the result of the expression evaluation, and an additional property inputMessage
which contains the original message sent to the handler.
A message sent to the failureChannel (when the handler throws an exception) is an ErrorMessage
with a payload of MessageHandlingExpressionEvaluatingAdviceException
.
Like all MessagingException
s, this payload has failedMessage
and cause
properties, as well as an additional property evaluationResult
, containing the result of the expression evaluation.
When an exception is thrown in the scope of the advice, by default, that exception is thrown to the caller after any failureExpression is evaluated.
If you wish to suppress throwing the exception, set the trapException
property to true
.
Example - Configuring the Advice with Java DSL.
@SpringBootApplication
public class EerhaApplication {

    public static void main(String[] args) {
        ConfigurableApplicationContext context = SpringApplication.run(EerhaApplication.class, args);
        MessageChannel in = context.getBean("advised.input", MessageChannel.class);
        in.send(new GenericMessage<>("good"));
        in.send(new GenericMessage<>("bad"));
        context.close();
    }

    @Bean
    public IntegrationFlow advised() {
        return f -> f.handle((GenericHandler<String>) (payload, headers) -> {
            if (payload.equals("good")) {
                return null;
            }
            else {
                throw new RuntimeException("some failure");
            }
        }, c -> c.advice(expressionAdvice()));
    }

    @Bean
    public Advice expressionAdvice() {
        ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
        advice.setSuccessChannelName("success.input");
        advice.setOnSuccessExpressionString("payload + ' was successful'");
        advice.setFailureChannelName("failure.input");
        advice.setOnFailureExpressionString(
                "payload + ' was bad, with reason: ' + #exception.cause.message");
        advice.setTrapException(true);
        return advice;
    }

    @Bean
    public IntegrationFlow success() {
        return f -> f.handle(System.out::println);
    }

    @Bean
    public IntegrationFlow failure() {
        return f -> f.handle(System.out::println);
    }
}
In addition to the provided Advice classes above, you can implement your own Advice classes.
While you can provide any implementation of org.aopalliance.aop.Advice
(usually org.aopalliance.intercept.MethodInterceptor
), it is generally recommended that you subclass o.s.i.handler.advice.AbstractRequestHandlerAdvice
.
This has the benefit of avoiding writing low-level Aspect Oriented Programming code as well as providing a starting point that is specifically tailored for use in this environment.
Subclasses need to implement the doInvoke() method:
/**
 * Subclasses implement this method to apply behavior to the {@link MessageHandler}.
 * callback.execute() invokes the handler method and returns its result, or null.
 * @param callback Subclasses invoke the execute() method on this interface to
 * invoke the handler method.
 * @param target The target handler.
 * @param message The message that will be sent to the handler.
 * @return the result after invoking the {@link MessageHandler}.
 * @throws Exception
 */
protected abstract Object doInvoke(ExecutionCallback callback, Object target, Message<?> message)
        throws Exception;
The callback parameter is simply a convenience to avoid subclasses dealing with AOP directly; invoking the callback.execute()
method invokes the message handler.
The target parameter is provided for those subclasses that need to maintain state for a specific handler, perhaps by maintaining that state in a Map
, keyed by the target.
This allows the same advice to be applied to multiple handlers.
The RequestHandlerCircuitBreakerAdvice
uses this to keep circuit breaker state for each handler.
The message parameter is the message that will be sent to the handler. While the advice cannot modify the message before invoking the handler, it can modify the payload (if it has mutable properties). Typically, an advice would use the message for logging and/or to send a copy of the message somewhere before or after invoking the handler.
The return value would normally be the value returned by callback.execute()
; but the advice does have the ability to modify the return value.
Note that only AbstractReplyProducingMessageHandler
s return a value.
public class MyAdvice extends AbstractRequestHandlerAdvice {

    @Override
    protected Object doInvoke(ExecutionCallback callback, Object target, Message<?> message)
            throws Exception {
        // add code before the invocation
        Object result = callback.execute();
        // add code after the invocation
        return result;
    }
}
Note: In addition to the … For more information, see the ReflectiveMethodInvocation JavaDocs.
==== Other Advice Chain Elements
While the abstract class mentioned above is provided as a convenience, you can add any Advice
to the chain, including a transaction advice.
As discussed in the introduction to this section, advice objects in a request handler advice chain are applied to just the current endpoint, not the downstream flow (if any).
For MessageHandler
s that produce a reply (AbstractReplyProducingMessageHandler
), the advice is applied to an internal method
handleRequestMessage()
(called from MessageHandler.handleMessage()
).
For other message handlers, the advice is applied to MessageHandler.handleMessage()
.
There are some circumstances where, even if a message handler is an AbstractReplyProducingMessageHandler
, the advice must be applied to the handleMessage
method - for example, the Idempotent Receiver might return null
and this would cause an exception if the handler’s replyRequired
property is true.
Starting with version 4.3.1, a new HandleMessageAdvice
and the AbstractHandleMessageAdvice
base implementation have been introduced.
Advice
s that implement HandleMessageAdvice
will always be applied to the handleMessage()
method, regardless of the handler type.
It is important to understand that HandleMessageAdvice
implementations (such as Idempotent Receiver), when applied to a handler that returns a response, are dissociated from the adviceChain
and properly applied to the MessageHandler.handleMessage()
method.
Bear in mind, however, that this means the advice chain order is not honored; with configuration such as:
<some-reply-producing-endpoint ... >
    <int:request-handler-advice-chain>
        <tx:advice ... />
        <ref bean="myHandleMessageAdvice" />
    </int:request-handler-advice-chain>
</some-reply-producing-endpoint>
The <tx:advice> is applied to AbstractReplyProducingMessageHandler.handleRequestMessage(), but myHandleMessageAdvice is applied to MessageHandler.handleMessage() and is, therefore, invoked before the <tx:advice>.
To retain the order, you should use the standard Spring AOP configuration approach and use the endpoint id together with the .handler suffix to obtain the target MessageHandler bean.
Note, however, that in that case, the entire downstream flow would be within the transaction scope.
In the case of a MessageHandler
that does not return a response, the advice chain order is retained.
Starting with version 5.0, a new TransactionHandleMessageAdvice has been introduced to make the whole downstream flow transactional, thanks to the HandleMessageAdvice implementation.
When a regular TransactionInterceptor is used in the <request-handler-advice-chain> (for example via <tx:advice> configuration), a started transaction is applied only to the internal AbstractReplyProducingMessageHandler.handleRequestMessage() and is not propagated to the downstream flow.
To simplify XML configuration, alongside the <request-handler-advice-chain>, a <transactional> sub-element has been added to all <outbound-gateway> and <service-activator> (and family) components:
<int-rmi:outbound-gateway remote-channel="foo" host="localhost"
        request-channel="good" reply-channel="reply" port="#{@port}">
    <int-rmi:transactional/>
</int-rmi:outbound-gateway>

<bean id="transactionManager" class="org.mockito.Mockito" factory-method="mock">
    <constructor-arg value="org.springframework.transaction.PlatformTransactionManager"/>
</bean>
Those familiar with the JPA integration components will recognize this configuration style, but now we can start a transaction from any point in the flow, not only from a <poller> or a message-driven channel adapter such as JMS.
Java and annotation configuration can be simplified via the newly introduced TransactionInterceptorBuilder, and the resulting bean name can be used in the messaging annotations' adviceChain attribute:
@Bean
public ConcurrentMetadataStore store() {
    return new SimpleMetadataStore(hazelcastInstance()
                    .getMap("idempotentReceiverMetadataStore"));
}

@Bean
public IdempotentReceiverInterceptor idempotentReceiverInterceptor() {
    return new IdempotentReceiverInterceptor(
                    new MetadataStoreSelector(
                            message -> message.getPayload().toString(),
                            message -> message.getPayload().toString().toUpperCase(),
                            store()));
}

@Bean
public TransactionInterceptor transactionInterceptor() {
    return new TransactionInterceptorBuilder(true)
                    .transactionManager(this.transactionManager)
                    .isolation(Isolation.READ_COMMITTED)
                    .propagation(Propagation.REQUIRES_NEW)
                    .build();
}

@Bean
@org.springframework.integration.annotation.Transformer(inputChannel = "input",
        outputChannel = "output",
        adviceChain = { "idempotentReceiverInterceptor", "transactionInterceptor" })
public Transformer transformer() {
    return message -> message;
}
Note the true argument to the TransactionInterceptorBuilder constructor, which means it produces a TransactionHandleMessageAdvice, not a regular TransactionInterceptor.
The Java DSL supports such an Advice via the .transactional() option on the endpoint configuration:
@Bean
public IntegrationFlow updatingGatewayFlow() {
    return f -> f
            .handle(Jpa.updatingGateway(this.entityManagerFactory),
                    e -> e.transactional(true))
            .channel(c -> c.queue("persistResults"));
}
There is an additional consideration when advising Filter
s.
By default, any discard actions (when the filter returns false) are performed within the scope of the advice chain.
This could include all the flow downstream of the discard channel.
So, for example, if an element downstream of the discard-channel throws an exception and there is a retry advice, the process will be retried.
This is also the case if throwExceptionOnRejection is set to true (the exception is thrown within the scope of the advice).
Setting discard-within-advice to "false" modifies this behavior and the discard (or exception) occurs after the advice chain is called.
==== Advising Endpoints Using Annotations
When configuring certain endpoints using annotations (@Filter
, @ServiceActivator
, @Splitter
, and @Transformer
), you can supply a bean name for the advice chain in the adviceChain
attribute.
In addition, the @Filter annotation also has the discardWithinAdvice attribute, which can be used to configure the discard behavior as discussed above.
An example with the discard being performed after the advice is shown below.
@MessageEndpoint
public class MyAdvisedFilter {

    @Filter(inputChannel="input", outputChannel="output",
            adviceChain="adviceChain", discardWithinAdvice="false")
    public boolean filter(String s) {
        return s.contains("good");
    }
}
==== Ordering Advices within an Advice Chain
Advice classes are "around" advices and are applied in a nested fashion. The first advice is the outermost, the last advice the innermost (closest to the handler being advised). It is important to put the advice classes in the correct order to achieve the functionality you desire.
For example, let’s say you want to add a retry advice and a transaction advice.
You may want to place the retry advice first, followed by the transaction advice.
Then, each retry will be performed in a new transaction.
On the other hand, if you want all the attempts, and any recovery operations (in the retry RecoveryCallback
), to be scoped within the transaction, you would put the transaction advice first.
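The nesting rule can be sketched in plain Java (no Spring here; the labelled wrappers are hypothetical stand-ins for the retry and transaction advices):

```java
import java.util.List;
import java.util.function.UnaryOperator;

// Plain-Java sketch: "around" advices nest so that the FIRST advice in the
// chain becomes the OUTERMOST wrapper. With [retry, tx], the tx wrapper sits
// inside retry, i.e. each retry attempt would run in a fresh transaction.
public class AdviceOrderDemo {

    // Wrap a target operation with a chain of around-advices.
    public static UnaryOperator<String> advised(List<UnaryOperator<UnaryOperator<String>>> chain,
                                                UnaryOperator<String> target) {
        UnaryOperator<String> result = target;
        for (int i = chain.size() - 1; i >= 0; i--) { // wrap last-to-first: chain[0] ends up outermost
            result = chain.get(i).apply(result);
        }
        return result;
    }

    // An advice that records its position in the call chain.
    public static UnaryOperator<UnaryOperator<String>> labelled(String name) {
        return target -> input -> name + "(" + target.apply(input) + ")";
    }

    public static String demo() {
        UnaryOperator<String> handler =
                advised(List.of(labelled("retry"), labelled("tx")), input -> "handle[" + input + "]");
        return handler.apply("m");
    }

    public static void main(String[] args) {
        if (!"retry(tx(handle[m]))".equals(demo())) {
            throw new AssertionError(demo());
        }
    }
}
```

Reversing the list order would produce tx(retry(handle[m])), i.e. all attempts inside one transaction.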
==== Advised Handler Properties
Sometimes, it is useful to access handler properties from within the advice.
For example, most handlers implement NamedComponent
and you can access the component name.
The target object can be accessed via the target
argument when subclassing AbstractRequestHandlerAdvice
or
invocation.getThis()
when implementing org.aopalliance.intercept.MethodInterceptor
.
When the entire handler is advised (such as when the handler does not produce replies, or the advice implements HandleMessageAdvice
), you can simply cast the target object to the desired implemented interface, such as NamedComponent
.
String componentName = ((NamedComponent) target).getComponentName();
or
String componentName = ((NamedComponent) invocation.getThis()).getComponentName();
when implementing MethodInterceptor
directly.
When only the handleRequestMessage()
method is advised (in a reply-producing handler), you need to access the
full handler, which is an AbstractReplyProducingMessageHandler
…
AbstractReplyProducingMessageHandler handler =
        ((AbstractReplyProducingMessageHandler.RequestHandler) target).getAdvisedHandler();

String componentName = handler.getComponentName();
==== Idempotent Receiver Enterprise Integration Pattern
Starting with version 4.1, Spring Integration provides an implementation of the Idempotent Receiver Enterprise Integration Pattern.
It is a functional pattern, and the whole idempotency logic should be implemented in the application; however, to simplify the decision-making, the IdempotentReceiverInterceptor component is provided.
This is an AOP Advice
, which is applied to the MessageHandler.handleMessage()
method and can filter
a request message or mark it as a duplicate
, according to its configuration.
Previously, users could implement this pattern by using a custom MessageSelector in a <filter/>
(Section 6.2, “Filter”), for example.
However, since this pattern is really behavior of an endpoint rather than being an endpoint itself, the Idempotent Receiver implementation doesn’t provide an endpoint component; rather, it is applied to endpoints declared in the application.
The logic of the IdempotentReceiverInterceptor
is based on the provided MessageSelector
and, if the message isn’t accepted by that selector, it will be enriched with the duplicateMessage
header set to true
.
The target MessageHandler
(or downstream flow) can consult this header to implement the correct idempotency logic.
If the IdempotentReceiverInterceptor
is configured with a discardChannel
and/or throwExceptionOnRejection = true
, the duplicate Message won’t be sent to the target MessageHandler.handleMessage()
, but discarded.
If you simply want to discard (do nothing with) the duplicate Message, the discardChannel
should be configured with a NullChannel
, such as the default nullChannel
bean.
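The core decision can be sketched in plain Java (the invoiceNbr header and the in-memory map standing in for a ConcurrentMetadataStore are assumptions for illustration):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Plain-Java sketch of the idempotent-receiver decision: a metadata store keyed
// by a value extracted from the message. The "invoiceNbr" header name is a
// hypothetical example; putIfAbsent returning non-null means "seen before".
public class IdempotentSketch {

    private static final Map<String, Object> METADATA_STORE = new ConcurrentHashMap<>();

    // true -> first occurrence (accept); false -> duplicate (mark/discard)
    public static boolean accept(Map<String, Object> headers) {
        String key = String.valueOf(headers.get("invoiceNbr"));
        Object value = headers.getOrDefault("timestamp", ""); // mirrors the default timestamp value strategy
        return METADATA_STORE.putIfAbsent(key, value) == null;
    }

    public static void main(String[] args) {
        if (!accept(Map.of("invoiceNbr", "42", "timestamp", 1L))) {
            throw new AssertionError("first message must be accepted");
        }
        if (accept(Map.of("invoiceNbr", "42", "timestamp", 2L))) {
            throw new AssertionError("second message must be treated as a duplicate");
        }
    }
}
```

A rejected message would then either gain the duplicateMessage header, go to the discard channel, or trigger an exception, depending on the interceptor's configuration.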
To maintain state between messages and provide the ability to compare messages for the idempotency, the MetadataStoreSelector
is provided.
It accepts a MessageProcessor
implementation (which creates a lookup key based on the Message
) and an optional ConcurrentMetadataStore
(see the section called “Metadata Store”).
See the MetadataStoreSelector
JavaDocs for more information.
The value stored in the ConcurrentMetadataStore can also be customized using an additional MessageProcessor.
By default, the MetadataStoreSelector uses the timestamp message header.
For convenience, the MetadataStoreSelector
options are configurable directly on the <idempotent-receiver>
component:
<idempotent-receiver id=""
        endpoint=""
        selector=""
        discard-channel=""
        metadata-store=""
        key-strategy=""
        key-expression=""
        value-strategy=""
        value-expression=""
        throw-exception-on-rejection="" />
id: the id of the IdempotentReceiverInterceptor bean.
endpoint: Consumer Endpoint name(s) or pattern(s) to which this interceptor will be applied. Separate names (patterns) with commas.
selector: a MessageSelector bean reference.
discard-channel: identifies the channel to which to send the message when it is rejected by the selector.
metadata-store: a ConcurrentMetadataStore bean reference.
key-strategy: a MessageProcessor bean reference used to build the idempotency key from the message.
key-expression: a SpEL expression to populate the idempotency key.
value-strategy: a MessageProcessor bean reference used to build the value stored for the key.
value-expression: a SpEL expression to populate the stored value.
throw-exception-on-rejection: throw an exception if the message is rejected as a duplicate.
For Java configuration, the method level IdempotentReceiver
annotation is provided.
It is used to mark a method
that has a Messaging annotation (@ServiceActivator
, @Router
etc.) to specify which IdempotentReceiverInterceptor
s will be applied to this endpoint:
@Bean
public IdempotentReceiverInterceptor idempotentReceiverInterceptor() {
    return new IdempotentReceiverInterceptor(
            new MetadataStoreSelector(m -> m.getHeaders().get(INVOICE_NBR_HEADER)));
}

@Bean
@ServiceActivator(inputChannel = "input", outputChannel = "output")
@IdempotentReceiver("idempotentReceiverInterceptor")
public MessageHandler myService() {
    ....
}
And with the Java DSL, the interceptor is added to the endpoint’s advice chain:
@Bean
public IntegrationFlow flow() {
    ...
        .handle("someBean", "someMethod",
                e -> e.advice(idempotentReceiverInterceptor()))
    ...
}
The <logging-channel-adapter/>
is often used in conjunction with a Wire Tap, as discussed in the section called “Wire Tap”.
However, it can also be used as the ultimate consumer of any flow.
For example, consider a flow that ends with a <service-activator/>
that returns a result, but you wish to discard that result.
To do that, you could send the result to NullChannel
.
Alternatively, you can route it to an INFO
level <logging-channel-adapter/>
; that way, you can see the discarded message when logging at INFO
level, but not see it when logging at, say, WARN
level.
With a NullChannel
, you would only see the discarded message when logging at DEBUG
level.
<int:logging-channel-adapter channel=""
    level="INFO"
    expression=""
    log-full-message="false"
    logger-name="" />
channel: the channel connecting the logging adapter to an upstream component.
level: the logging level at which messages sent to this adapter will be logged. Default: INFO.
expression: a SpEL expression representing exactly what part(s) of the message will be logged. Default: payload.
log-full-message: when true, the entire message (including headers) is logged. Default: false.
logger-name: specifies the name of the logger (known as category) to which log messages will be sent.
==== Configuring with Java Configuration
The following Spring Boot application provides an example of configuring the LoggingHandler
using Java configuration:
@SpringBootApplication
public class LoggingJavaApplication {

    public static void main(String[] args) {
        ConfigurableApplicationContext context =
                new SpringApplicationBuilder(LoggingJavaApplication.class)
                        .web(false)
                        .run(args);
        MyGateway gateway = context.getBean(MyGateway.class);
        gateway.sendToLogger("foo");
    }

    @Bean
    @ServiceActivator(inputChannel = "logChannel")
    public LoggingHandler logging() {
        LoggingHandler adapter = new LoggingHandler(LoggingHandler.Level.DEBUG);
        adapter.setLoggerName("TEST_LOGGER");
        adapter.setLogExpressionString("headers.id + ': ' + payload");
        return adapter;
    }

    @MessagingGateway(defaultRequestChannel = "logChannel")
    public interface MyGateway {

        void sendToLogger(String data);

    }
}
==== Configuring with the Java DSL
The following Spring Boot application provides an example of configuring the logging channel adapter using the Java DSL:
@SpringBootApplication
public class LoggingJavaApplication {

    public static void main(String[] args) {
        ConfigurableApplicationContext context =
                new SpringApplicationBuilder(LoggingJavaApplication.class)
                        .web(false)
                        .run(args);
        MyGateway gateway = context.getBean(MyGateway.class);
        gateway.sendToLogger("foo");
    }

    @Bean
    public IntegrationFlow loggingFlow() {
        return IntegrationFlows.from(MyGateway.class)
                .log(LoggingHandler.Level.DEBUG, "TEST_LOGGER",
                        m -> m.getHeaders().getId() + ": " + m.getPayload());
    }

    @MessagingGateway
    public interface MyGateway {

        void sendToLogger(String data);

    }
}
The Spring Integration JavaConfig and DSL provide a set of convenient builders and a fluent API to configure Spring Integration message flows from Spring @Configuration classes.
@Configuration
@EnableIntegration
public class MyConfiguration {

    @Bean
    public AtomicInteger integerSource() {
        return new AtomicInteger();
    }

    @Bean
    public IntegrationFlow myFlow() {
        return IntegrationFlows.from(integerSource::getAndIncrement,
                        c -> c.poller(Pollers.fixedRate(100)))
                .channel("inputChannel")
                .filter((Integer p) -> p > 0)
                .transform(Object::toString)
                .channel(MessageChannels.queue())
                .get();
    }
}
As a result, after ApplicationContext start-up, Spring Integration endpoints and message channels are created, just as they are after XML parsing.
Such a configuration can be used to replace XML configuration or to work alongside it.
The Java DSL for Spring Integration is essentially a facade for Spring Integration.
The DSL provides a simple way to embed Spring Integration Message Flows into your application using the fluent Builder
pattern together with existing Java and Annotation configurations from Spring Framework and Spring Integration as well.
Java 8 Lambdas are another useful tool for simplifying configuration.
The Cafe sample is a good example of using the DSL.
The entry point to the DSL is the IntegrationFlows factory, which creates an IntegrationFlowBuilder.
The builder produces the IntegrationFlow component, which should be registered as a Spring bean (@Bean).
The builder pattern is used to express arbitrarily complex structures as a hierarchy of methods that may accept Lambdas as arguments.
The IntegrationFlowBuilder
just collects integration components (MessageChannel
s, AbstractEndpoint
s etc.) in the IntegrationFlow
bean for further parsing and registration of concrete beans in the application context by the IntegrationFlowBeanPostProcessor
.
The Java DSL uses Spring Integration classes directly and bypasses any XML generation and parsing. However, the DSL offers more than syntactic sugar on top of XML. One of its most compelling features is the ability to define inline Lambdas to implement endpoint logic, eliminating the need for external classes to implement custom logic. In some sense, Spring Integration’s support for the Spring Expression Language (SpEL) and inline scripting address this, but Java Lambdas are easier and much more powerful.
The org.springframework.integration.dsl
package contains the IntegrationFlowBuilder
API mentioned above and a bunch of IntegrationComponentSpec
implementations which are builders too and provide the fluent API to configure concrete endpoints.
The IntegrationFlowBuilder
infrastructure provides common EIP for message based applications, such as channels, endpoints, pollers and channel interceptors.
Endpoints are expressed as verbs in the DSL to improve readability. The following list includes the common DSL method names and the associated EIP endpoint:
Transformer
Filter
ServiceActivator
Splitter
Aggregator
Router
Bridge
Conceptually, integration processes are constructed by composing these endpoints into one or more message flows.
Note that EIP does not formally define the term message flow, but it is useful to think of it as a unit of work that uses well known messaging patterns.
The DSL provides an IntegrationFlow
component to define a composition of channels and endpoints between them, but now IntegrationFlow
plays only the configuration role to populate real beans in the application context and isn’t used at runtime:
@Bean
public IntegrationFlow integerFlow() {
    return IntegrationFlows.from("input")
            .<String, Integer>transform(Integer::parseInt)
            .get();
}
Here we use the IntegrationFlows
factory to define an IntegrationFlow
bean using EIP-methods from IntegrationFlowBuilder
.
The transform
method accepts a Lambda as an endpoint argument to operate on the message payload.
The real argument of this method is GenericTransformer<S, T>
, hence any out-of-the-box transformers (ObjectToJsonTransformer
, FileToStringTransformer
etc.) can be used here.
Under the covers, IntegrationFlowBuilder
recognizes the MessageHandler
and endpoint for that: MessageTransformingHandler
and ConsumerEndpointFactoryBean
, respectively.
Let’s look at another example:
@Bean
public IntegrationFlow myFlow() {
    return IntegrationFlows.from("input")
            .filter("World"::equals)
            .transform("Hello "::concat)
            .handle(System.out::println)
            .get();
}
The above example composes a sequence of Filter -> Transformer -> Service Activator
.
The flow is one-way; that is, it does not provide a reply message but simply prints the payload to STDOUT.
The endpoints are automatically wired together using direct channels.
When using a Lambda that expects the full Message as its argument, note the following caveat:

.<Message<?>, Foo>transform(m -> newFooFromMessage(m))

This will fail at runtime with a ClassCastException, because the Lambda doesn’t retain the argument type and the framework will attempt to cast the payload to a Message<?>.
Instead, use:
.transform(Message.class, m -> newFooFromMessage(m))
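The underlying type-erasure failure can be sketched in plain Java; the List cast here is a hypothetical stand-in for the framework's Message<?> cast:

```java
import java.util.List;
import java.util.function.Function;

// Plain-Java sketch of the type-erasure pitfall: the lambda's declared generic
// parameter is erased, so at runtime it receives whatever object the caller
// hands over; the implicit cast in the body then fails.
public class ErasureDemo {

    public static boolean failsWithClassCast() {
        Function<Object, Object> transformer = m -> ((List<?>) m).size(); // "expects" a List
        try {
            transformer.apply("just a String payload"); // but gets a String
            return false;
        } catch (ClassCastException expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        if (!failsWithClassCast()) {
            throw new AssertionError("expected a ClassCastException");
        }
    }
}
```

Passing the expected type explicitly, as in the snippet above, is what lets the framework hand over the right object instead of the bare payload.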
In addition to the IntegrationFlowBuilder
with EIP-methods the Java DSL provides a fluent API to configure MessageChannel
s.
For this purpose the MessageChannels
builder factory is provided:
@Bean
public MessageChannel priorityChannel() {
    return MessageChannels.priority(this.mongoDbChannelMessageStore, "priorityGroup")
            .interceptor(wireTap())
            .get();
}
The same MessageChannels
builder factory can be used in the channel()
EIP-method from IntegrationFlowBuilder
to wire endpoints similar to an input-channel
/output-channel
pair in the XML configuration.
By default endpoints are wired via DirectChannel
s where the bean name is based on the pattern: [IntegrationFlow.beanName].channel#[channelNameIndex]
.
This rule is applied for unnamed channels produced by inline MessageChannels
builder factory usage, too.
However all MessageChannels
methods have a channelId
-aware variant to create the bean names for MessageChannel
s.
The MessageChannel
references can be used as well as beanName
, as bean-method invocations.
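As a tiny illustration of that default naming rule (assuming a flow bean named channelFlow, as in the sample below):

```java
// Illustration of the default bean-name pattern for implicit channels:
// [IntegrationFlow.beanName].channel#[channelNameIndex]
public class ChannelNameDemo {

    public static String implicitChannelName(String flowBeanName, int index) {
        return flowBeanName + ".channel#" + index;
    }

    public static void main(String[] args) {
        // the first unnamed channel of a flow bean named "channelFlow"
        if (!"channelFlow.channel#0".equals(implicitChannelName("channelFlow", 0))) {
            throw new AssertionError();
        }
    }
}
```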
Here is a sample with possible variants of channel()
EIP-method usage:
@Bean
public MessageChannel queueChannel() {
    return MessageChannels.queue().get();
}

@Bean
public MessageChannel publishSubscribe() {
    return MessageChannels.publishSubscribe().get();
}

@Bean
public IntegrationFlow channelFlow() {
    return IntegrationFlows.from("input")
            .fixedSubscriberChannel()
            .channel("queueChannel")
            .channel(publishSubscribe())
            .channel(MessageChannels.executor("executorChannel", this.taskExecutor))
            .channel("output")
            .get();
}
from("input")
means: find and use the MessageChannel
with the "input" id, or create one;
fixedSubscriberChannel()
produces an instance of FixedSubscriberChannel
and registers it with name channelFlow.channel#0
;
channel("queueChannel")
works the same way but, of course, uses an existing "queueChannel" bean;
channel(publishSubscribe())
- the bean-method reference;
channel(MessageChannels.executor("executorChannel", this.taskExecutor))
the IntegrationFlowBuilder
unwraps IntegrationComponentSpec
to the ExecutorChannel
and registers it as "executorChannel";
channel("output")
- registers the DirectChannel
bean with "output" name as long as there are no beans with this name.
Note: the IntegrationFlow
definition shown above is valid and all of its channels are applied to endpoints with BridgeHandler
s.
![]() | Important |
---|---|
Be careful when using the same inline channel definition via the MessageChannels factory from different IntegrationFlow definitions, as in the sample below: |
@Bean
public IntegrationFlow startFlow() {
    return IntegrationFlows.from("input")
            .transform(...)
            .channel(MessageChannels.queue("queueChannel"))
            .get();
}

@Bean
public IntegrationFlow endFlow() {
    return IntegrationFlows.from(MessageChannels.queue("queueChannel"))
            .handle(...)
            .get();
}
You end up with:
Caused by: java.lang.IllegalStateException: Could not register object [queueChannel] under bean name 'queueChannel': there is already object [queueChannel] bound at o.s.b.f.s.DefaultSingletonBeanRegistry.registerSingleton(DefaultSingletonBeanRegistry.java:129)
To make this work, you just need to declare a @Bean for that channel and use its bean-method from the different IntegrationFlows.
A similar fluent API is provided to configure PollerMetadata
for AbstractPollingEndpoint
implementations.
The Pollers
builder factory can be used to configure common bean definitions or those created from IntegrationFlowBuilder
EIP-methods:
@Bean(name = PollerMetadata.DEFAULT_POLLER)
public PollerSpec poller() {
    return Pollers.fixedRate(500)
            .errorChannel("myErrors");
}
See Pollers
and PollerSpec
Java Docs for more information.
![]() | Important |
---|---|
If you use the DSL to construct a PollerSpec as a @Bean, do not call the get() method in the bean definition; the PollerSpec is a FactoryBean that generates the PollerMetadata object and initializes its properties. |
=== DSL and Endpoint Configuration
All IntegrationFlowBuilder
EIP-methods have a variant to apply the Lambda parameter to provide options for AbstractEndpoint
s: SmartLifecycle
, PollerMetadata
, request-handler-advice-chain
etc.
Each of them has generic arguments, so it allows you to simply configure an endpoint and even its MessageHandler
in the context:
@Bean
public IntegrationFlow flow2() {
    return IntegrationFlows.from(this.inputChannel)
            .transform(new PayloadSerializingTransformer(),
                    c -> c.autoStartup(false).id("payloadSerializingTransformer"))
            .transform((Integer p) -> p * 2, c -> c.advice(this.expressionAdvice()))
            .get();
}
In addition the EndpointSpec
provides an id()
method to allow you to register an endpoint bean with a given bean name, rather than a generated one.
The DSL API provides a convenient, fluent Transformers
factory to be used as inline target object definition within .transform()
EIP-method:
@Bean
public IntegrationFlow transformFlow() {
    return IntegrationFlows.from("input")
            .transform(Transformers.fromJson(MyPojo.class))
            .transform(Transformers.serializer())
            .get();
}
It avoids inconvenient coding using setters and makes the flow definition more straightforward.
Note that Transformers can be used to declare target Transformers as @Beans and, again, use them from an IntegrationFlow definition as bean-methods.
Nevertheless, the DSL parser takes care of bean declarations for inline objects, if they are not already defined as beans.
See Transformers
Java Docs for more information and supported factory methods.
Also see Lambdas And Message<?>
Arguments.
Typically message flows start from some Inbound Channel Adapter (e.g. <int-jdbc:inbound-channel-adapter>
).
The adapter is configured with a <poller>, and it asks a MessageSource<?> to produce messages periodically.
The Java DSL allows an IntegrationFlow to start from a MessageSource<?>, too.
For this purpose, the IntegrationFlows builder factory provides the overloaded IntegrationFlows.from(MessageSource<?> messageSource) method.
The MessageSource<?> may be configured as a bean and provided as an argument to that method.
The second parameter of IntegrationFlows.from() is a Consumer<SourcePollingChannelAdapterSpec> Lambda that lets you provide options (e.g. PollerMetadata or SmartLifecycle) for the SourcePollingChannelAdapter:
@Bean
public MessageSource<Object> jdbcMessageSource() {
    return new JdbcPollingChannelAdapter(this.dataSource, "SELECT * FROM foo");
}

@Bean
public IntegrationFlow pollingFlow() {
    return IntegrationFlows.from(jdbcMessageSource(),
                    c -> c.poller(Pollers.fixedRate(100).maxMessagesPerPoll(1)))
            .transform(Transformers.toJson())
            .channel("furtherProcessChannel")
            .get();
}
There is also an IntegrationFlows.from() variant based on java.util.function.Supplier, for when there is no requirement to build Message objects directly.
The result of Supplier.get() is automatically wrapped in a Message by the framework (if it is not a Message already).
The next sections discuss selected endpoints which require further explanation.
Spring Integration natively provides specialized router types including:
HeaderValueRouter
PayloadTypeRouter
ExceptionTypeRouter
RecipientListRouter
XPathRouter
As with many other DSL IntegrationFlowBuilder
EIP-methods the route()
method can apply any out-of-the-box AbstractMessageRouter
implementation, or for convenience a String
as a SpEL expression, or a ref
/method
pair.
In addition route()
can be configured with a Lambda - the inline method invocation case, and with a Lambda for a Consumer<RouterSpec<MethodInvokingRouter>>
.
The fluent API also provides AbstractMappingMessageRouter
options like channelMapping(String key, String channelName)
pairs:
@Bean
public IntegrationFlow routeFlow() {
    return IntegrationFlows.from("routerInput")
            .<Integer, Boolean>route(p -> p % 2 == 0,
                    m -> m.suffix("Channel")
                            .channelMapping("true", "even")
                            .channelMapping("false", "odd"))
            .get();
}
A simple expression-based router:
@Bean
public IntegrationFlow routeFlow() {
    return IntegrationFlows.from("routerInput")
            .route("headers['destChannel']")
            .get();
}
The routeToRecipients()
method takes a Consumer<RecipientListRouterSpec>
:
@Bean
public IntegrationFlow recipientListFlow() {
    return IntegrationFlows.from("recipientListInput")
            .<String, String>transform(p -> p.replaceFirst("Payload", ""))
            .routeToRecipients(r -> r
                    .recipient("foo-channel", "'foo' == payload")
                    .recipient("bar-channel", m -> m.getHeaders().containsKey("recipient")
                            && (boolean) m.getHeaders().get("recipient"))
                    .recipientFlow("'foo' == payload or 'bar' == payload or 'baz' == payload",
                            f -> f.<String, String>transform(String::toUpperCase)
                                    .channel(c -> c.queue("recipientListSubFlow1Result")))
                    .recipientFlow((String p) -> p.startsWith("baz"),
                            f -> f.transform("Hello "::concat)
                                    .channel(c -> c.queue("recipientListSubFlow2Result")))
                    .recipientFlow(new FunctionExpression<Message<?>>(m -> "bax".equals(m.getPayload())),
                            f -> f.channel(c -> c.queue("recipientListSubFlow3Result")))
                    .defaultOutputToParentFlow())
            .get();
}
The .defaultOutputToParentFlow() option of .routeToRecipients() makes the router’s defaultOutput a gateway back to the main flow, so unmatched messages continue processing there.
Also see Lambdas And Message<?>
Arguments.
A splitter is created using the split() EIP-method.
By default, if the payload is an Iterable, an Iterator, an Array, a Stream, or a reactive Publisher, it outputs each item as an individual message.
The split() method takes a Lambda, a SpEL expression, or any AbstractMessageSplitter implementation; it can also be used without parameters to provide the DefaultMessageSplitter.
For example:
@Bean
public IntegrationFlow splitFlow() {
    return IntegrationFlows.from("splitInput")
            .split(s -> s.applySequence(false).get().getT2().setDelimiters(","))
            .channel(MessageChannels.executor(this.taskExecutor()))
            .get();
}
This creates a splitter that splits a message containing a comma delimited String.
Note: the getT2() method comes from the Tuple returned by EndpointSpec.get(); for the example above, that tuple represents the pair of ConsumerEndpointFactoryBean and DefaultMessageSplitter.
Also see Lambdas And Message<?>
Arguments.
=== Aggregators and Resequencers
An Aggregator
is conceptually the converse of a Splitter
.
It aggregates a sequence of individual messages into a single message and is necessarily more complex.
By default, an aggregator will return a message containing a collection of payloads from incoming messages.
The same rules are applied for the Resequencer
:
@Bean
public IntegrationFlow splitAggregateFlow() {
    return IntegrationFlows.from("splitAggregateInput")
            .split()
            .channel(MessageChannels.executor(this.taskExecutor()))
            .resequence()
            .aggregate()
            .get();
}
The above is a canonical example of the splitter/aggregator pattern.
The split() method splits the list into individual messages and sends them to the ExecutorChannel.
The resequence() method reorders messages by the sequence details found in the message headers.
The aggregate() method collects those messages into the result list.
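The split/resequence/aggregate round trip can be sketched in plain Java (the Msg record below is a hypothetical stand-in for a Message carrying sequence headers):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.Random;

// Plain-Java sketch of the split -> resequence -> aggregate round trip.
public class SplitAggregateSketch {

    public record Msg(int sequenceNumber, int sequenceSize, String payload) {}

    public static List<Msg> split(List<String> payloads) {
        List<Msg> out = new ArrayList<>();
        for (int i = 0; i < payloads.size(); i++) {
            out.add(new Msg(i + 1, payloads.size(), payloads.get(i))); // sequence "headers"
        }
        return out;
    }

    public static List<Msg> resequence(List<Msg> in) {
        List<Msg> out = new ArrayList<>(in);
        out.sort(Comparator.comparingInt(Msg::sequenceNumber)); // restore original order
        return out;
    }

    public static List<String> aggregate(List<Msg> in) {
        return in.stream().map(Msg::payload).toList();
    }

    public static List<String> roundTrip(List<String> payloads) {
        List<Msg> messages = split(payloads);
        Collections.shuffle(messages, new Random(42)); // simulate out-of-order delivery on an executor channel
        return aggregate(resequence(messages));
    }

    public static void main(String[] args) {
        List<String> input = List.of("a", "b", "c", "d");
        if (!input.equals(roundTrip(input))) {
            throw new AssertionError(roundTrip(input));
        }
    }
}
```

Whatever order the executor delivers the pieces in, the resequencer restores the original order before aggregation.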
However, you may change the default behavior by specifying a release strategy and correlation strategy, among other things. Consider the following:
.aggregate(a -> a
        .correlationStrategy(m -> m.getHeaders().get("myCorrelationKey"))
        .releaseStrategy(g -> g.size() > 10)
        .messageStore(messageStore()))
Similar Lambda configurations are provided for the resequence() EIP-method.
=== ServiceActivators (.handle())
The .handle()
EIP-method’s goal is to invoke any MessageHandler
implementation or any method on some POJO.
Another option is to define the activity with a Lambda expression.
For this purpose, the GenericHandler<P> functional interface has been introduced.
Its handle method requires two arguments: P payload and Map<String, Object> headers.
With that, we can define a flow like this:
@Bean
public IntegrationFlow myFlow() {
    return IntegrationFlows.from("flow3Input")
            .<Integer>handle((p, h) -> p * 2)
            .get();
}
However, one main goal of Spring Integration is achieving loose coupling via runtime type conversion from the message payload to the target arguments of the message handler.
Since Java doesn’t support generic type resolution for Lambda classes, a workaround was introduced: an additional payloadType argument on most EIP-methods, together with the LambdaMessageProcessor, which delegates the hard conversion work to Spring’s ConversionService using the provided type and the requested message for the target method arguments.
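The payloadType bridging idea can be sketched in plain Java (the converter registry below is a hypothetical stand-in for Spring's ConversionService):

```java
import java.util.Map;
import java.util.function.BiFunction;
import java.util.function.Function;

// Plain-Java sketch of the payloadType idea: the lambda's generic type is
// erased, so the caller names the payload type explicitly and a converter
// registry bridges the incoming payload to that type before invocation.
public class ConversionSketch {

    private static final Map<Class<?>, Function<Object, ?>> CONVERTERS = Map.of(
            Integer.class, o -> Integer.valueOf(o.toString()),
            String.class, Object::toString);

    public static <P, R> R handle(Object payload, Class<P> payloadType,
                                  BiFunction<P, Map<String, Object>, R> handler) {
        P converted = payloadType.cast(CONVERTERS.get(payloadType).apply(payload));
        return handler.apply(converted, Map.of()); // empty headers for the sketch
    }

    public static void main(String[] args) {
        // a String payload arrives; the handler declares Integer and conversion bridges it
        int result = handle("21", Integer.class, (p, h) -> p * 2);
        if (result != 42) {
            throw new AssertionError(result);
        }
    }
}
```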
The IntegrationFlow
might look like this:
@Bean
public IntegrationFlow integerFlow() {
    return IntegrationFlows.from("input")
            .<byte[], String>transform(p -> new String(p, StandardCharsets.UTF_8))
            .handle(Integer.class, (p, h) -> p * 2)
            .get();
}
Of course, we could register a custom BytesToIntegerConverter with the ConversionService and get rid of that additional .transform().
Also see Lambdas And Message<?>
Arguments.
For convenience, to log the message journey through a Spring Integration flow (as with the <logging-channel-adapter>), a log() operator is provided.
Internally, it is represented by a WireTap ChannelInterceptor with a LoggingHandler as its subscriber.
It logs the message incoming into the next endpoint or the current channel:
.filter(...)
.log(LoggingHandler.Level.ERROR, "test.category", m -> m.getHeaders().getId())
.route(...)
In this example, the id header will be logged at ERROR level to "test.category", but only for messages that pass the filter and before routing.
=== MessageChannelSpec.wireTap()
A .wireTap()
fluent API exists for MessageChannelSpec
builders.
Such a configuration benefits greatly from Java DSL usage:
@Bean
public QueueChannelSpec myChannel() {
    return MessageChannels.queue()
            .wireTap("loggingFlow.input");
}

@Bean
public IntegrationFlow loggingFlow() {
    return f -> f.log();
}
The log() or wireTap() operators are applied to the current MessageChannel (if it is an instance of ChannelInterceptorAware); otherwise, an intermediate DirectChannel is injected into the flow for the currently configured endpoint.
In the example below, the WireTap interceptor is added to myChannel directly, because DirectChannel implements ChannelInterceptorAware:
@Bean
MessageChannel myChannel() {
    return new DirectChannel();
}

...
    .channel(myChannel())
    .log()
When the current MessageChannel doesn’t implement ChannelInterceptorAware, an implicit DirectChannel and BridgeHandler are injected into the IntegrationFlow, and the WireTap is added to this new DirectChannel.
And when there is no channel declaration at all, as in this sample:
.handle(...)
.log()
an implicit DirectChannel
is injected in the current position of the IntegrationFlow
and it is used as an output channel for the currently configured ServiceActivatingHandler
(the .handle()
above).
![]() | Important |
---|---|
If the log() or wireTap() operator is the last one in the flow, the flow becomes one-way and ends there. To keep such a flow reply-producing, add a bridge() after the log(), as in the following example: |
@Bean
public IntegrationFlow sseFlow() {
    return IntegrationFlows
            .from(WebFlux.inboundGateway("/sse")
                    .requestMapping(m -> m.produces(MediaType.TEXT_EVENT_STREAM_VALUE)))
            .handle((p, h) -> Flux.just("foo", "bar", "baz"))
            .log(LoggingHandler.Level.WARN)
            .bridge()
            .get();
}
=== Working With Message Flows
As we have seen, IntegrationFlowBuilder
provides a top level API to produce Integration components wired to message flows.
This is convenient if your integration may be accomplished with a single flow (which is often the case).
Alternately IntegrationFlow
s can be joined via MessageChannel
s.
By default, the MessageFlow behaves as a Chain in Spring Integration parlance.
That is, the endpoints are automatically wired implicitly via DirectChannel
s.
The message flow is not actually constructed as a chain, affording much more flexibility.
For example, you may send a message to any component within the flow, if you know its inputChannel
name, i.e., explicitly define it.
You may also reference externally defined channels within a flow to allow the use of channel adapters to enable remote transport protocols, file I/O, and the like, instead of direct channels.
As such, the DSL does not support the Spring Integration chain element since it doesn’t add much value.
Since the Spring Integration Java DSL produces the same bean definition model as any other configuration options and is based on the existing Spring Framework @Configuration
infrastructure, it can be used together with Integration XML definitions and wired with Spring Integration Messaging Annotations configuration.
Another alternative for defining an IntegrationFlow is based on the fact that it can also be declared as a Lambda:
@Bean
public IntegrationFlow lambdaFlow() {
    return f -> f.filter("World"::equals)
            .transform("Hello "::concat)
            .handle(System.out::println);
}
The result of this definition is the same set of integration components, wired with implicit direct channels.
The only limitation here is that this flow starts with the named direct channel lambdaFlow.input, and a Lambda flow cannot start from a MessageSource or MessageProducer.
Starting with version 5.0.6, the generated bean names for the components in an IntegrationFlow
include the flow bean followed by a dot as a prefix.
For example, the ConsumerEndpointFactoryBean for the .transform("Hello "::concat) in the sample above ends up with a bean name like lambdaFlow.org.springframework.integration.config.ConsumerEndpointFactoryBean#0.
The Transformer implementation bean for that endpoint will have a bean name such as lambdaFlow.org.springframework.integration.transformer.MethodInvokingTransformer#0.
These generated bean names are prepended with the flow id prefix for purposes such as parsing logs or grouping components together in some analysis tool, as well as to avoid a race condition when we concurrently register integration flows at runtime.
The FunctionExpression (an implementation of SpEL Expression) has been introduced to take advantage of Java Lambdas and their generic method context.
The Function<T, R> option is provided for DSL components alongside the expression option, wherever core Spring Integration has an implicit Strategy variant.
The usage may look like:
.enrich(e -> e.requestChannel("enrichChannel")
        .requestPayload(Message::getPayload)
        .propertyFunction("date", m -> new Date()))
The FunctionExpression
also supports runtime type conversion as it is done in the standard SpelExpression
.
Some if...else and publish-subscribe components support specifying their logic or mapping using subflows.
The simplest example is .publishSubscribeChannel():
@Bean
public IntegrationFlow subscribersFlow() {
    return flow -> flow
            .publishSubscribeChannel(Executors.newCachedThreadPool(), s -> s
                    .subscribe(f -> f
                            .<Integer>handle((p, h) -> p / 2)
                            .channel(c -> c.queue("subscriber1Results")))
                    .subscribe(f -> f
                            .<Integer>handle((p, h) -> p * 2)
                            .channel(c -> c.queue("subscriber2Results"))))
            .<Integer>handle((p, h) -> p * 3)
            .channel(c -> c.queue("subscriber3Results"));
}
Of course, the same result can be achieved with separate IntegrationFlow @Bean definitions, but we hope you’ll find the subflow style of logic composition useful.
The .routeToRecipients() method provides similar publish-subscribe subflow composition.
Another example is .discardFlow() on the .filter(), instead of .discardChannel().
The .route()
deserves special attention.
As a sample:
@Bean
public IntegrationFlow routeFlow() {
    return f -> f
            .<Integer, Boolean>route(p -> p % 2 == 0,
                    m -> m.channelMapping("true", "evenChannel")
                            .subFlowMapping("false", sf -> sf.<Integer>handle((p, h) -> p * 3)))
            .transform(Object::toString)
            .channel(c -> c.queue("oddChannel"));
}
The .channelMapping() continues to work as in regular Router mapping, but .subFlowMapping() ties the subflow to the main flow. In other words, any router subflow returns to the main flow after .route().
Subflows can be nested to any depth, but we do not recommend it: even in the router case, complex subflows within a flow quickly begin to look like a plate of spaghetti and are difficult for a human to parse.
All of the examples so far illustrate how the DSL supports a messaging architecture using the Spring Integration programming model, but we haven’t done any real integration yet. This requires access to remote resources via http, jms, amqp, tcp, jdbc, ftp, smtp, and the like, or access to the local file system. Spring Integration supports all of these and more. Ideally, the DSL should offer first class support for all of them but it is a daunting task to implement all of these and keep up as new adapters are added to Spring Integration. So the expectation is that the DSL will continually be catching up with Spring Integration.
In any case, we provide a high-level API to define protocol-specific components seamlessly. This is achieved with the Factory and Builder patterns and, of course, with lambdas. The factory classes can be considered "namespace factories", because they play the same role as XML namespaces for components from the concrete protocol-specific Spring Integration modules.
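To illustrate the pattern itself (with entirely hypothetical names, not Spring Integration's actual classes), a "namespace factory" is just a static factory class that groups a protocol's builders under one name and returns a spec that is refined fluently or with lambdas:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Toy sketch of the Factory + Builder + Lambda style: a static factory class
// plays the "namespace" role and returns a builder refined via method chaining.
public class MiniDsl {

    // "Namespace factory": groups the protocol's builders under one class name.
    public static final class Files {

        public static OutboundSpec outboundAdapter(String directory) {
            return new OutboundSpec(directory);
        }
    }

    // Builder ("spec") configured via method chaining.
    public static final class OutboundSpec {

        private final String directory;

        private String fileName = "out.txt";

        OutboundSpec(String directory) {
            this.directory = directory;
        }

        public OutboundSpec fileName(String fileName) {
            this.fileName = fileName;
            return this;
        }

        public String describe() {
            return "write to " + this.directory + "/" + this.fileName;
        }
    }

    // A "flow" here is just an ordered list of step descriptions,
    // populated through a lambda, echoing the IntegrationFlow style.
    public static List<String> flow(Consumer<List<String>> definition) {
        List<String> steps = new ArrayList<>();
        definition.accept(steps);
        return steps;
    }

    public static void main(String[] args) {
        List<String> flow = flow(f -> {
            f.add("transform: toUpperCase");
            f.add(Files.outboundAdapter("/tmp").fileName("report.txt").describe());
        });
        System.out.println(flow); // [transform: toUpperCase, write to /tmp/report.txt]
    }
}
```

The real namespace factories (Amqp, Jms, Mail, and so on) follow this shape, but return endpoint specs that the framework turns into real components.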
Currently, the Spring Integration Java DSL supports Amqp, Feed, Jms, Files, (S)Ftp, Http, JPA, MongoDb, TCP/UDP, Mail, WebFlux and Scripts namespace factories:
@Bean
public IntegrationFlow amqpFlow() {
    return IntegrationFlows.from(Amqp.inboundGateway(this.rabbitConnectionFactory, queue()))
            .transform("hello "::concat)
            .transform(String.class, String::toUpperCase)
            .get();
}

@Bean
public IntegrationFlow jmsOutboundGatewayFlow() {
    return IntegrationFlows.from("jmsOutboundGatewayChannel")
            .handle(Jms.outboundGateway(this.jmsConnectionFactory)
                    .replyContainer(c -> c.concurrentConsumers(3)
                            .sessionTransacted(true))
                    .requestDestination("jmsPipelineTest"))
            .get();
}

@Bean
public IntegrationFlow sendMailFlow() {
    return IntegrationFlows.from("sendMailChannel")
            .handle(Mail.outboundAdapter("localhost")
                    .port(smtpPort)
                    .credentials("user", "pw")
                    .protocol("smtp")
                    .javaMailProperties(p -> p.put("mail.debug", "true")),
                e -> e.id("sendMailEndpoint"))
            .get();
}
We show here the usage of namespace factories as inline adapter declarations; however, they can be used from @Bean definitions to make the IntegrationFlow method chain more readable.
We are soliciting community feedback on these namespace factories before we spend effort on others; we’d also appreciate some prioritization for which adapters/gateways we should support next.
See more Java DSL samples in the protocol-specific chapters throughout this reference manual.
All other protocol channel adapters may be configured as generic beans and wired into the IntegrationFlow:
@Bean
public QueueChannelSpec wrongMessagesChannel() {
    return MessageChannels
            .queue()
            .wireTap("wrongMessagesWireTapChannel");
}

@Bean
public IntegrationFlow xpathFlow(MessageChannel wrongMessagesChannel) {
    return IntegrationFlows.from("inputChannel")
            .filter(new StringValueTestXPathMessageSelector("namespace-uri(/*)", "my:namespace"),
                    e -> e.discardChannel(wrongMessagesChannel))
            .log(LoggingHandler.Level.ERROR, "test.category", m -> m.getHeaders().getId())
            .route(xpathRouter(wrongMessagesChannel))
            .get();
}

@Bean
public AbstractMappingMessageRouter xpathRouter(MessageChannel wrongMessagesChannel) {
    XPathRouter router = new XPathRouter("local-name(/*)");
    router.setEvaluateAsString(true);
    router.setResolutionRequired(false);
    router.setDefaultOutputChannel(wrongMessagesChannel);
    router.setChannelMapping("Tags", "splittingChannel");
    router.setChannelMapping("Tag", "receivedChannel");
    return router;
}
The IntegrationFlow, as an interface, can be implemented directly and specified as a component for scanning:
@Component
public class MyFlow implements IntegrationFlow {

    @Override
    public void configure(IntegrationFlowDefinition<?> f) {
        f.<String, String>transform(String::toUpperCase);
    }
}
And yes, it is picked up by the IntegrationFlowBeanPostProcessor and correctly parsed and registered in the application context.
For convenience and a loosely coupled architecture, the IntegrationFlowAdapter base class implementation is provided. It requires a buildFlow() method implementation to produce an IntegrationFlowDefinition using one of the from() support methods:
@Component
public class MyFlowAdapter extends IntegrationFlowAdapter {

    private final AtomicBoolean invoked = new AtomicBoolean();

    public Date nextExecutionTime(TriggerContext triggerContext) {
        return this.invoked.getAndSet(true) ? null : new Date();
    }

    @Override
    protected IntegrationFlowDefinition<?> buildFlow() {
        return from(this, "messageSource",
                    e -> e.poller(p -> p.trigger(this::nextExecutionTime)))
                .split(this)
                .transform(this)
                .aggregate(a -> a.processor(this, null), null)
                .enrichHeaders(Collections.singletonMap("foo", "FOO"))
                .filter(this)
                .handle(this)
                .channel(c -> c.queue("myFlowAdapterOutput"));
    }

    public String messageSource() {
        return "B,A,R";
    }

    @Splitter
    public String[] split(String payload) {
        return StringUtils.commaDelimitedListToStringArray(payload);
    }

    @Transformer
    public String transform(String payload) {
        return payload.toLowerCase();
    }

    @Aggregator
    public String aggregate(List<String> payloads) {
        return payloads.stream().collect(Collectors.joining());
    }

    @Filter
    public boolean filter(@Header Optional<String> foo) {
        return foo.isPresent();
    }

    @ServiceActivator
    public String handle(String payload, @Header String foo) {
        return payload + ":" + foo;
    }
}
=== Dynamic and runtime Integration Flows
IntegrationFlows, and therefore all their dependent components, can be registered at runtime. Previously, this was done via the BeanFactory.registerSingleton() hook; now it is done via the programmatic BeanDefinition registration with the instanceSupplier hook, introduced in Spring Framework 5.0:
BeanDefinition beanDefinition =
BeanDefinitionBuilder.genericBeanDefinition((Class<Object>) bean.getClass(), () -> bean)
.getRawBeanDefinition();
((BeanDefinitionRegistry) this.beanFactory).registerBeanDefinition(beanName, beanDefinition);
All the necessary bean initialization and lifecycle management is done automatically, as it is with standard context configuration bean definitions.
To simplify the development experience, Spring Integration introduced the IntegrationFlowContext to register and manage IntegrationFlow instances at runtime:
@Autowired
private AbstractServerConnectionFactory server1;

@Autowired
private IntegrationFlowContext flowContext;

...

@Test
public void testTcpGateways() {
    TestingUtilities.waitListening(this.server1, null);

    IntegrationFlow flow = f -> f
            .handle(Tcp.outboundGateway(Tcp.netClient("localhost", this.server1.getPort())
                            .serializer(TcpCodecs.crlf())
                            .deserializer(TcpCodecs.lengthHeader1())
                            .id("client1"))
                    .remoteTimeout(m -> 5000))
            .transform(Transformers.objectToString());

    IntegrationFlowRegistration theFlow = this.flowContext.registration(flow).register();
    assertThat(theFlow.getMessagingTemplate().convertSendAndReceive("foo", String.class),
            equalTo("FOO"));
}
This is useful when we have multiple configuration options and have to create several instances of similar flows. We can then iterate over our options and create and register IntegrationFlows within a loop. Another variant is when our source of data is not Spring-based and we must create it on the fly. Such a sample is a Reactive Streams event source:
Flux<Message<?>> messageFlux =
        Flux.just("1,2,3,4")
                .map(v -> v.split(","))
                .flatMapIterable(Arrays::asList)
                .map(Integer::parseInt)
                .map(GenericMessage<Integer>::new);

QueueChannel resultChannel = new QueueChannel();

IntegrationFlow integrationFlow =
        IntegrationFlows.from(messageFlux)
                .<Integer, Integer>transform(p -> p * 2)
                .channel(resultChannel)
                .get();

this.integrationFlowContext.registration(integrationFlow)
        .register();
The IntegrationFlowRegistrationBuilder (the result of IntegrationFlowContext.registration()) can be used to specify a bean name for the IntegrationFlow to register, to control its autoStartup, and to register additional, non-Integration beans. Usually those additional beans are connection factories (AMQP, JMS, (S)FTP, TCP/UDP, etc.), serializers/deserializers, or any other required support components.
Such a dynamically registered IntegrationFlow and all its dependent beans can be removed afterwards with the IntegrationFlowRegistration.destroy() callback. See the IntegrationFlowContext JavaDocs for more information.
![]() | Note |
---|---|
Starting with version 5.0.6, all generated bean names in an IntegrationFlow definition are prepended with the flow id as a prefix. |
Also, starting with version 5.0.6, the registration builder API has a new method, useFlowIdAsPrefix().
This is useful if you wish to declare multiple instances of the same flow and avoid bean name collisions if components in the flows have the same id.
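The collision-avoidance idea can be sketched in plain Java (a toy registry, not Spring's actual bean factory code): the same component id is registered once per flow, and the flow id prefix keeps the resulting bean names distinct.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy illustration of why a flow-id prefix prevents bean name collisions
// when two flows contain a component with the same id.
public class PrefixedRegistry {

    private final Map<String, Object> beans = new LinkedHashMap<>();

    public void register(String flowId, String componentId, Object bean) {
        String beanName = flowId + "." + componentId; // flow id as prefix
        if (this.beans.putIfAbsent(beanName, bean) != null) {
            throw new IllegalStateException("Bean name collision: " + beanName);
        }
    }

    public boolean contains(String beanName) {
        return this.beans.containsKey(beanName);
    }

    public static void main(String[] args) {
        PrefixedRegistry registry = new PrefixedRegistry();
        registry.register("tcp1", "client", new Object());
        registry.register("tcp2", "client", new Object()); // no collision
        System.out.println(registry.contains("tcp1.client")); // true
    }
}
```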
For example:
private void registerFlows() {
    IntegrationFlowRegistration flow1 =
            this.flowContext.registration(buildFlow(1234))
                    .id("tcp1")
                    .useFlowIdAsPrefix()
                    .register();

    IntegrationFlowRegistration flow2 =
            this.flowContext.registration(buildFlow(1235))
                    .id("tcp2")
                    .useFlowIdAsPrefix()
                    .register();
}

private IntegrationFlow buildFlow(int port) {
    return f -> f
            .handle(Tcp.outboundGateway(Tcp.netClient("localhost", port)
                            .serializer(TcpCodecs.crlf())
                            .deserializer(TcpCodecs.lengthHeader1())
                            .id("client"))
                    .remoteTimeout(m -> 5000))
            .transform(Transformers.objectToString());
}
In this case, the message handler for the first flow can be referenced with the bean name tcp1.client.handler.
![]() | Note |
---|---|
An id is required when useFlowIdAsPrefix() is used. |
=== IntegrationFlow as Gateway
The IntegrationFlow can start from a service interface, providing a GatewayProxyFactoryBean component:
public interface ControlBusGateway {

    void send(String command);
}

...

@Bean
public IntegrationFlow controlBusFlow() {
    return IntegrationFlows.from(ControlBusGateway.class)
            .controlBus()
            .get();
}
All the proxy methods of the interface are supplied with the channel that sends messages to the next integration component in the IntegrationFlow.
The service interface can be marked with @MessagingGateway, and its methods with @Gateway annotations. Nevertheless, the requestChannel is ignored and overridden with the internal channel for the next component in the IntegrationFlow; otherwise, such a configuration via IntegrationFlow would not make sense.
By default, a GatewayProxyFactoryBean gets a conventional bean name, such as [FLOW_BEAN_NAME.gateway]. That id can be changed via the @MessagingGateway.name() attribute or the overloaded from(Class<?> serviceInterface, String beanName) factory method.
With Java 8 on board, we can even create such an integration gateway with the java.util.function interfaces:
@Bean
public IntegrationFlow errorRecovererFlow() {
    return IntegrationFlows.from(Function.class, "errorRecovererFunction")
            .handle((GenericHandler<?>) (p, h) -> {
                throw new RuntimeException("intentional");
            }, e -> e.advice(retryAdvice()))
            .get();
}
That can later be used as:
@Autowired
@Qualifier("errorRecovererFunction")
private Function<String, String> errorRecovererFlowGateway;
==== Configuring Metrics Capture
![]() | Note |
---|---|
Prior to version 4.2 metrics were only available when JMX was enabled. See the section called “CompletableFuture”. |
To enable MessageSource, MessageChannel and MessageHandler metrics, add an <int:management/> element to the application context, or annotate one of your @Configuration classes with @EnableIntegrationManagement.
MessageSources only maintain counts; MessageChannels and MessageHandlers maintain duration statistics in addition to counts. See the section called “CompletableFuture” and the section called “CompletableFuture” below.
This causes the automatic registration of the IntegrationManagementConfigurer bean in the application context. Only one such bean can exist in the context, and it must have the bean name integrationManagementConfigurer if registered manually via a <bean/> definition. This bean applies its configuration to beans after all beans in the context have been instantiated.
In addition to metrics, you can control debug logging in the main message flow. It has been found that, in very high volume applications, even calls to isDebugEnabled() can be quite expensive with some logging subsystems. You can disable all such logging to avoid this overhead; exception logging (debug or otherwise) is not affected by this setting. A number of options are available:
<int:management
    default-logging-enabled="true"
    default-counts-enabled="false"
    default-stats-enabled="false"
    counts-enabled-patterns="foo, !baz, ba*"
    stats-enabled-patterns="fiz, buz"
    metrics-factory="myMetricsFactory" />
@Configuration
@EnableIntegration
@EnableIntegrationManagement(
        defaultLoggingEnabled = "true",
        defaultCountsEnabled = "false",
        defaultStatsEnabled = "false",
        countsEnabled = { "foo", "${count.patterns}" },
        statsEnabled = { "qux", "!*" },
        metricsFactory = "myMetricsFactory")
public static class ContextConfiguration {
...
}
Set to false to disable all logging in the main message flow, regardless of the logging system category settings; set to true to enable debug logging (if also enabled by the logging subsystem). Only applied if you have not explicitly configured the setting in a bean definition. Default true. | |
Enable or disable count metrics for components not matching one of the patterns in <4>. Only applied if you have not explicitly configured the setting in a bean definition. Default false. | |
Enable or disable statistical metrics for components not matching one of the patterns in <5>. Only applied if you have not explicitly configured the setting in a bean definition. Default false. | |
A comma-delimited list of patterns for beans for which counts should be enabled; negate the pattern with !. | |
A comma-delimited list of patterns for beans for which statistical metrics should be enabled; negate the pattern with !. | |
A reference to a MetricsFactory bean. |
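The pattern semantics for lists like counts-enabled-patterns="foo, !baz, ba*" can be illustrated with a small sketch; the first-match-wins evaluation and trailing-wildcard matching here are an illustration of the behavior described above, not the framework's actual matching code:

```java
// Sketch of pattern evaluation: patterns are checked left to right, a leading
// '!' negates, a trailing '*' matches by prefix, and the first match wins.
public class PatternMatch {

    // Returns TRUE/FALSE for the first matching pattern, or null if none match
    // (in which case the default-*-enabled setting would apply).
    public static Boolean smartMatch(String beanName, String... patterns) {
        for (String pattern : patterns) {
            boolean negate = pattern.startsWith("!");
            String p = negate ? pattern.substring(1) : pattern;
            boolean matched = p.endsWith("*")
                    ? beanName.startsWith(p.substring(0, p.length() - 1))
                    : beanName.equals(p);
            if (matched) {
                return !negate;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        String[] patterns = { "foo", "!baz", "ba*" };
        System.out.println(smartMatch("foo", patterns)); // true
        System.out.println(smartMatch("baz", patterns)); // false ('!baz' wins over 'ba*')
        System.out.println(smartMatch("bar", patterns)); // true (matches 'ba*')
        System.out.println(smartMatch("qux", patterns)); // null (fall back to default)
    }
}
```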
At runtime, counts and statistics can be obtained by calling the IntegrationManagementConfigurer methods getChannelMetrics, getHandlerMetrics and getSourceMetrics, which return MessageChannelMetrics, MessageHandlerMetrics and MessageSourceMetrics, respectively. See the javadocs for complete information about these classes. When JMX is enabled (see the section called “CompletableFuture”), these metrics are also exposed by the IntegrationMBeanExporter.
defaultLoggingEnabled, defaultCountsEnabled, and defaultStatsEnabled are only applied if you have not explicitly configured the corresponding setting in a bean definition.
Starting with version 5.0.2, the framework automatically detects a single MetricsFactory bean in the application context and uses it instead of the default metrics factory. Starting with version 5.0.3, the presence of a Micrometer MeterRegistry in the application context triggers support for Micrometer metrics in addition to the built-in metrics (the built-in metrics will be removed in a future release).
![]() | Important |
---|---|
Micrometer was first supported in version 5.0.2, but changes were made to the Micrometer Meters in version 5.0.3 to make them more suitable for use in dimensional systems. |
Simply add a MeterRegistry bean of choice to the application context. If the IntegrationManagementConfigurer detects exactly one MeterRegistry bean, it configures a MicrometerMetricsCaptor bean with the name integrationMicrometerMetricsCaptor.
For each MessageHandler and MessageChannel, timers are registered. For each MessageSource, a counter is registered. This only applies to objects that extend AbstractMessageHandler, AbstractMessageChannel and AbstractMessageSource, respectively (which is the case for most framework components).
With Micrometer metrics, the statsEnabled flag has no effect, since statistics capture is delegated to Micrometer. The countsEnabled flag controls whether the Micrometer Meters are updated when processing each message.
The Timer Meters for send operations on message channels have the following name/tags:

- name: spring.integration.send
- tag: type:channel
- tag: name:<componentName>
- tag: result:(success|failure)
- tag: exception:(none|exception simple class name)
- description: Send processing time

(A failure result with a none exception means the channel send() operation returned false.)
The Counter Meters for receive operations on pollable message channels have the following names/tags:

- name: spring.integration.receive
- tag: type:channel
- tag: name:<componentName>
- tag: result:(success|failure)
- tag: exception:(none|exception simple class name)
- description: Messages received
The Timer Meters for operations on message handlers have the following name/tags:

- name: spring.integration.send
- tag: type:handler
- tag: name:<componentName>
- tag: result:(success|failure)
- tag: exception:(none|exception simple class name)
- description: Send processing time
The Counter meters for message sources have the following names/tags:

- name: spring.integration.receive
- tag: type:source
- tag: name:<componentName>
- tag: result:success
- tag: exception:none
- description: Messages received
In addition, there are three Gauge Meters:

- spring.integration.channels - the number of MessageChannels in the application.
- spring.integration.handlers - the number of MessageHandlers in the application.
- spring.integration.sources - the number of MessageSources in the application.
==== MessageChannel Metric Features
These legacy metrics will be removed in a future release; see the section called “CompletableFuture”.
Message channels report metrics according to their concrete type.
If you are looking at a DirectChannel, you will see statistics for the send operation. If it is a QueueChannel, you will also see statistics for the receive operation, as well as the count of messages currently buffered by this QueueChannel.
In both cases there are some metrics that are simple counters (message count and error count), and some that are estimates of averages of interesting quantities.
The algorithms used to calculate these estimates are described briefly in the section below.
Metric Type | Example | Algorithm |
---|---|---|
Count | Send Count | Simple incrementer. Increases by one when an event occurs. |
Error Count | Send Error Count | Simple incrementer. Increases by one when a send results in an error. |
Duration | Send Duration (method execution time in milliseconds) | Exponential Moving Average with decay factor (10 by default). Average of the method execution time over roughly the last 10 (default) measurements. |
Rate | Send Rate (number of operations per second) | Inverse of Exponential Moving Average of the interval between events with decay in time (lapsing over 60 seconds by default) and per measurement (last 10 events by default). |
Error Rate | Send Error Rate (number of errors per second) | Inverse of Exponential Moving Average of the interval between error events with decay in time (lapsing over 60 seconds by default) and per measurement (last 10 events by default). |
Ratio | Send Success Ratio (ratio of successful to total sends) | Estimate the success ratio as the Exponential Moving Average of the series composed of values 1 for success and 0 for failure (decaying as per the rate measurement over time and events by default). Error ratio is 1 - success ratio. |
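The success-ratio idea in the last row can be sketched in a few lines (an illustration of the technique described in the table, not the framework's actual ExponentialMovingAverageRatio code): feed 1 for each success and 0 for each failure into an exponential moving average, and the decayed mean approximates the recent success ratio.

```java
// Sketch: success ratio as an exponential moving average of a 1/0 series,
// decaying per measurement so recent events are weighted most heavily.
public class SuccessRatio {

    private final double r = Math.exp(-1.0 / 10); // "window" of roughly 10 events

    private double weightedSum;  // S(n) = x(n) + r * S(n-1)

    private double weightTotal;  // sum of weights, for normalization

    public void record(boolean success) {
        this.weightedSum = (success ? 1 : 0) + this.r * this.weightedSum;
        this.weightTotal = 1 + this.r * this.weightTotal;
    }

    public double ratio() {
        return this.weightTotal == 0 ? 1.0 : this.weightedSum / this.weightTotal;
    }

    public static void main(String[] args) {
        SuccessRatio ratio = new SuccessRatio();
        for (int i = 0; i < 9; i++) {
            ratio.record(true);
        }
        ratio.record(false); // the most recent event carries the largest weight
        System.out.println(ratio.ratio());
    }
}
```

Note that a single recent failure pulls the estimate well below 9/10, because the newest measurement has the highest weight.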
==== MessageHandler Metric Features
These legacy metrics will be removed in a future release; see the section called “CompletableFuture”.
The following table shows the statistics maintained for message handlers. Some metrics are simple counters (message count and error count), and one is an estimate of averages of send duration. The algorithms used to calculate these estimates are described briefly in the table below:
Metric Type | Example | Algorithm |
---|---|---|
Count | Handle Count | Simple incrementer. Increases by one when an event occurs. |
Error Count | Handler Error Count | Simple incrementer. Increases by one when an invocation results in an error. |
Active Count | Handler Active Count | Indicates the number of currently active threads currently invoking the handler (or any downstream synchronous flow). |
Duration | Handle Duration (method execution time in milliseconds) | Exponential Moving Average with decay factor (10 by default). Average of the method execution time over roughly the last 10 (default) measurements. |
==== Time-Based Average Estimates
A feature of the time-based average estimates is that they decay with time if no new measurements arrive. To help interpret the behaviour over time, the time (in seconds) since the last measurement is also exposed as a metric.
There are two basic exponential models: decay per measurement (appropriate for duration and anything where the number of measurements is part of the metric), and decay per time unit (more suitable for rate measurements where the time in between measurements is part of the metric). Both models depend on the fact that
S(n) = sum(i=0,i=n) w(i) x(i)
has a special form when w(i) = r^i, with r=constant:
S(n) = x(n) + r S(n-1)
(so you only have to store S(n-1), not the whole series x(i), to generate a new metric estimate from the last measurement).
The algorithms used in the duration metrics use r=exp(-1/M) with M=10. The net effect is that the estimate S(n) is more heavily weighted to recent measurements and is composed roughly of the last M measurements. So M is the "window" or lapse rate of the estimate. In the case of the vanilla moving average, i is a counter over the number of measurements. In the case of the rate, we interpret i as the elapsed time, or a combination of elapsed time and a counter (so the metric estimate contains contributions roughly from the last M measurements and the last T seconds).
A strategy interface, MetricsFactory, has been introduced, allowing you to provide custom channel metrics for your MessageChannels and MessageHandlers. By default, a DefaultMetricsFactory provides default implementations of MessageChannelMetrics and MessageHandlerMetrics, which are described above. To override the default MetricsFactory, configure it as described above, by providing a reference to your MetricsFactory bean instance. You can either customize the default implementations as described in the next bullet, or provide completely different implementations by extending AbstractMessageChannelMetrics and/or AbstractMessageHandlerMetrics. Also see the section called “CompletableFuture”.
In addition to the default metrics factory described above, the framework provides the AggregatingMetricsFactory. This factory creates AggregatingMessageChannelMetrics and AggregatingMessageHandlerMetrics. In very high volume scenarios, the cost of capturing statistics can be prohibitive (two calls to the system time and storing the data for each message). The aggregating metrics aggregate the response time over a sample of messages, which can save significant CPU time.
![]() | Caution |
---|---|
The statistics will be skewed if messages arrive in bursts. These metrics are intended for use with high, constant-volume, message rates. |
<bean id="aggregatingMetricsFactory"
        class="org.springframework.integration.support.management.AggregatingMetricsFactory">
    <constructor-arg value="1000" /> <!-- sample size -->
</bean>
The above configuration aggregates the duration over 1000 messages. Counts (send, error) are maintained per-message but the statistics are per 1000 messages.
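The trade-off can be sketched as follows (a simplified illustration, not the actual AggregatingMessageHandlerMetrics code): instead of reading the clock around every message, the clock is read once per sample of N messages and the elapsed time is divided by N.

```java
// Sketch of sample-based timing: one clock read per N messages instead of two
// per message. As the Caution above notes, the average is skewed if messages
// arrive in bursts, because idle time between messages is included.
public class AggregatingTimer {

    private final int sampleSize;

    private long count;

    private long windowStart = System.nanoTime();

    private double lastAvgNanos = -1; // -1 until the first full sample

    public AggregatingTimer(int sampleSize) {
        this.sampleSize = sampleSize;
    }

    public void messageProcessed() {
        if (++this.count % this.sampleSize == 0) { // clock read only once per N messages
            long now = System.nanoTime();
            this.lastAvgNanos = (now - this.windowStart) / (double) this.sampleSize;
            this.windowStart = now;
        }
    }

    public double lastAverageNanos() {
        return this.lastAvgNanos;
    }

    public static void main(String[] args) {
        AggregatingTimer timer = new AggregatingTimer(1000);
        for (int i = 0; i < 1000; i++) {
            timer.messageProcessed();
        }
        System.out.println(timer.lastAverageNanos() >= 0); // true after one full sample
    }
}
```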
See the section called “CompletableFuture” and the Javadocs for the ExponentialMovingAverage* classes for more information about these values. By default, the DefaultMessageChannelMetrics and DefaultMessageHandlerMetrics use a window of 10 measurements, a rate period of 1 second (rate per second) and a decay lapse period of 1 minute.
If you wish to override these defaults, you can provide a custom MetricsFactory that returns appropriately configured metrics, and provide a reference to it to the MBean exporter as described above. Example:
public static class CustomMetrics implements MetricsFactory {

    @Override
    public AbstractMessageChannelMetrics createChannelMetrics(String name) {
        return new DefaultMessageChannelMetrics(name,
                new ExponentialMovingAverage(20, 1000000.),
                new ExponentialMovingAverageRate(2000, 120000, 30, true),
                new ExponentialMovingAverageRatio(130000, 40, true),
                new ExponentialMovingAverageRate(3000, 140000, 50, true));
    }

    @Override
    public AbstractMessageHandlerMetrics createHandlerMetrics(String name) {
        return new DefaultMessageHandlerMetrics(name,
                new ExponentialMovingAverage(20, 1000000.));
    }
}
The customizations described above are wholesale and will apply to all appropriate beans exported by the MBean exporter. This is the extent of customization available using XML configuration.
Individual beans can be provided with different implementations using Java @Configuration, or programmatically at runtime (after the application context has been refreshed) by invoking the configureMetrics methods on AbstractMessageChannel and AbstractMessageHandler.
Previously, the time-based metrics (see the section called “CompletableFuture”) were calculated in real time.
The statistics are now calculated when retrieved instead.
This resulted in a significant performance improvement, at the expense of a small amount of additional memory for each statistic.
As discussed in the bullet above, the statistics can be disabled altogether, while retaining the MBean and allowing the invocation of Lifecycle methods.
Spring Integration provides Channel Adapters for receiving and publishing JMX Notifications. There is also an Inbound Channel Adapter for polling JMX MBean attribute values, and an Outbound Channel Adapter for invoking JMX MBean operations.
==== Notification Listening Channel Adapter
The Notification-listening Channel Adapter requires a JMX ObjectName for the MBean that publishes notifications to which this listener should be registered. A very simple configuration might look like this:
<int-jmx:notification-listening-channel-adapter id="adapter" channel="channel" object-name="example.domain:name=publisher"/>
![]() | Tip |
---|---|
The notification-listening-channel-adapter registers with an MBeanServer at startup, and the default bean name is mbeanServer. This works quite well in conjunction with Spring's <context:mbean-server/> element. |
The adapter can also accept a reference to a NotificationFilter and a handback Object to provide some context that is passed back with each Notification. Both of those attributes are optional. Extending the above example to include those attributes, as well as an explicit MBeanServer bean name, would produce the following:
<int-jmx:notification-listening-channel-adapter id="adapter"
    channel="channel"
    mbean-server="someServer"
    object-name="example.domain:name=somePublisher"
    notification-filter="notificationFilter"
    handback="myHandback"/>
The Notification-listening Channel Adapter is event-driven and registered with the MBeanServer directly. It does not require any poller configuration.
![]() | Note |
---|---|
For this component only, the object-name attribute can contain an ObjectName pattern (e.g. "org.foo:type=Bar,name=*") and the adapter will receive notifications from all MBeans with ObjectNames that match the pattern. In addition, the object-name attribute can contain a SpEL reference to a <util:list/> of ObjectName patterns: <jmx:notification-listening-channel-adapter id="manyNotificationsAdapter" channel="manyNotificationsChannel" object-name="#{patterns}"/> <util:list id="patterns"> <value>org.foo:type=Foo,name=*</value> <value>org.foo:type=Bar,name=*</value> </util:list> The names of the located MBean(s) will be logged when DEBUG level logging is enabled. |
==== Notification Publishing Channel Adapter
The Notification-publishing Channel Adapter is relatively simple. It only requires a JMX ObjectName in its configuration as shown below.
<context:mbean-export/>

<int-jmx:notification-publishing-channel-adapter id="adapter"
    channel="channel"
    object-name="example.domain:name=publisher"/>
It also requires that an MBeanExporter be present in the context. That is why the <context:mbean-export/> element is shown above as well.
When Messages are sent to the channel for this adapter, the Notification is created from the Message content. If the payload is a String it will be passed as the message text for the Notification. Any other payload type will be passed as the userData of the Notification.
JMX Notifications also have a type, which should be a dot-delimited String. There are two ways to provide the type. Precedence is always given to a Message header value associated with the JmxHeaders.NOTIFICATION_TYPE key. Alternatively, you can rely on the fallback default-notification-type attribute provided in the configuration.
<context:mbean-export/>

<int-jmx:notification-publishing-channel-adapter id="adapter"
    channel="channel"
    object-name="example.domain:name=publisher"
    default-notification-type="some.default.type"/>
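The payload and type mapping described above can be sketched with the standard javax.management API (NotificationMapper is a hypothetical helper written for illustration, not a framework class): a String payload becomes the message text, any other payload becomes userData, and the header type takes precedence over the configured default.

```java
import javax.management.Notification;

// Sketch of the mapping rules: header type wins over the default type;
// String payloads map to the message text, other payloads to userData.
public class NotificationMapper {

    public static Notification toNotification(Object payload, String headerType,
            String defaultType, Object source, long sequence) {

        String type = headerType != null ? headerType : defaultType; // header wins
        if (payload instanceof String) {
            return new Notification(type, source, sequence, (String) payload);
        }
        Notification notification = new Notification(type, source, sequence);
        notification.setUserData(payload);
        return notification;
    }

    public static void main(String[] args) {
        Notification n1 = toNotification("hello", null, "some.default.type", "src", 1);
        System.out.println(n1.getType() + " / " + n1.getMessage());
        // some.default.type / hello

        Notification n2 = toNotification(42, "my.header.type", "some.default.type", "src", 2);
        System.out.println(n2.getType() + " / " + n2.getUserData());
        // my.header.type / 42
    }
}
```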
==== Attribute Polling Channel Adapter
The Attribute Polling Channel Adapter is useful when you need to periodically check some value that is available through an MBean as a managed attribute. The poller can be configured in the same way as any other polling adapter in Spring Integration (or you can rely on the default poller). The object-name and attribute-name are required. An MBeanServer reference is also required, but the adapter will automatically check for a bean named mbeanServer by default, just like the Notification-listening Channel Adapter described above.
<int-jmx:attribute-polling-channel-adapter id="adapter"
    channel="channel"
    object-name="example.domain:name=someService"
    attribute-name="InvocationCount">
        <int:poller max-messages-per-poll="1" fixed-rate="5000"/>
</int-jmx:attribute-polling-channel-adapter>
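Each poll essentially amounts to an MBeanServer.getAttribute() call. The following self-contained sketch (the Counter MBean and its object name are hypothetical, written only for illustration) shows the equivalent JMX operation using the JDK API:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Sketch of what one poll does: read a single managed attribute from the
// MBeanServer. The MBean and object name here are made up for the example.
public class AttributePollSketch {

    public interface CounterMBean {

        int getInvocationCount();
    }

    public static class Counter implements CounterMBean {

        public int getInvocationCount() {
            return 42;
        }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("example.domain:name=someService");
        server.registerMBean(new Counter(), name);

        // One "poll": equivalent to what the adapter does at each trigger.
        Object value = server.getAttribute(name, "InvocationCount");
        System.out.println(value); // 42
    }
}
```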
==== Tree Polling Channel Adapter
The Tree Polling Channel Adapter queries the JMX MBean tree and sends a message with a payload that is the graph of objects matching the query. By default, the MBeans are mapped to primitives and simple objects like Map, List and arrays, permitting simple transformation to, for example, JSON. An MBeanServer reference is also required, but the adapter will automatically check for a bean named mbeanServer by default, just like the Notification-listening Channel Adapter described above. A basic configuration would be:
<int-jmx:tree-polling-channel-adapter id="adapter"
    channel="channel"
    query-name="example.domain:type=*">
        <int:poller max-messages-per-poll="1" fixed-rate="5000"/>
</int-jmx:tree-polling-channel-adapter>
This will include all attributes on the MBeans selected. You can filter the attributes by providing an MBeanObjectConverter that has an appropriate filter configured. The converter can be provided as a reference to a bean definition using the converter attribute, or as an inner <bean/> definition. A DefaultMBeanObjectConverter is provided, which can take an MBeanAttributeFilter in its constructor argument. Two standard filters are provided: the NamedFieldsMBeanAttributeFilter allows you to specify a list of attributes to include, and the NotNamedFieldsMBeanAttributeFilter allows you to specify a list of attributes to exclude. You can also implement your own filter.
==== Operation Invoking Channel Adapter
The operation-invoking-channel-adapter enables Message-driven invocation of any managed operation exposed by an MBean. Each invocation requires the operation name to be invoked and the ObjectName of the target MBean. Both of these must be explicitly provided via adapter configuration:
<int-jmx:operation-invoking-channel-adapter id="adapter"
    object-name="example.domain:name=TestBean"
    operation-name="ping"/>
Then the adapter only needs to be able to discover the mbeanServer bean. If a different bean name is required, then provide the mbean-server attribute with a reference.
The payload of the Message will be mapped to the parameters of the operation, if any. A Map-typed payload with String keys is treated as name/value pairs, whereas a List or array would be passed as a simple argument list (with no explicit parameter names). If the operation requires a single parameter value, then the payload can represent that single value, and if the operation requires no parameters, then the payload would be ignored.
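Under the covers, the invocation ultimately comes down to an MBeanServer.invoke() call with the operation name and an argument list derived from the payload. The following self-contained sketch (the Echo MBean and object name are hypothetical) shows the single-value-payload case, where the payload maps to a one-element argument list:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Sketch of the payload-to-parameters mapping for a single-parameter
// operation, using the JDK's javax.management API directly.
public class OperationInvokeSketch {

    public interface EchoMBean {

        String echo(String input);
    }

    public static class Echo implements EchoMBean {

        public String echo(String input) {
            return "echo:" + input;
        }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("example.domain:name=TestBean");
        server.registerMBean(new Echo(), name);

        // A single-value payload maps to a one-element argument list.
        Object result = server.invoke(name, "echo",
                new Object[] { "ping" }, new String[] { "java.lang.String" });
        System.out.println(result); // echo:ping
    }
}
```

A Map payload would instead be matched against the operation's parameter names, and a List or array would be passed positionally, as described above.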
If you want to expose a channel for a single common operation to be invoked by Messages that need not contain headers, then that option works well.
==== Operation Invoking Outbound Gateway
Similar to the operation-invoking-channel-adapter, Spring Integration also provides an operation-invoking-outbound-gateway, which can be used when dealing with non-void operations whose return value is required. The return value will be sent as the message payload to the reply-channel specified by this Gateway.
<int-jmx:operation-invoking-outbound-gateway request-channel="requestChannel"
    reply-channel="replyChannel"
    object-name="o.s.i.jmx.config:type=TestBean,name=testBeanGateway"
    operation-name="testWithReturn"/>
If the reply-channel attribute is not provided, the reply message will be sent to the channel that is identified by the IntegrationMessageHeaderAccessor.REPLY_CHANNEL header.
That header is typically auto-created by the entry point into a message flow, such as any Gateway component.
However, if the message flow was started by manually creating a Spring Integration Message and sending it directly to a Channel, then you must specify the message header explicitly or use the provided reply-channel attribute.
Spring Integration components themselves may be exposed as MBeans when the IntegrationMBeanExporter is configured.
To create an instance of the IntegrationMBeanExporter, define a bean and provide a reference to an MBeanServer and a domain name (if desired).
The domain can be left out, in which case the default domain is org.springframework.integration.
<int-jmx:mbean-export id="integrationMBeanExporter"
    default-domain="my.company.domain" server="mbeanServer"/>

<bean id="mbeanServer" class="org.springframework.jmx.support.MBeanServerFactoryBean">
    <property name="locateExistingServerIfPossible" value="true"/>
</bean>
Important: The MBean exporter is orthogonal to the one provided in Spring core; it registers message channels and message handlers, but not itself. You can expose the exporter itself, and certain other components in Spring Integration, using the standard Spring MBean export mechanism. It also has a useful operation, as discussed in the section called "Orderly Shutdown Managed Operation".
Starting with Spring Integration 4.0, the @EnableIntegrationMBeanExport annotation has been introduced for convenient configuration of a default integrationMbeanExporter bean of type IntegrationMBeanExporter, with several useful options at the @Configuration class level.
For example:
@Configuration
@EnableIntegration
@EnableIntegrationMBeanExport(server = "mbeanServer", managedComponents = "input")
public class ContextConfiguration {

    @Bean
    public MBeanServerFactoryBean mbeanServer() {
        return new MBeanServerFactoryBean();
    }
}
If there is a need to provide more options, or to have several IntegrationMBeanExporter beans (e.g. for different MBean Servers), or to avoid conflicts with the standard Spring MBeanExporter (e.g. via @EnableMBeanExport), you can simply configure an IntegrationMBeanExporter as a generic bean.
All the MessageChannel, MessageHandler and MessageSource instances in the application are wrapped by the MBean exporter to provide management and monitoring features.
The generated JMX object names for each component type are listed in the table below:
Component Type | ObjectName |
---|---|
MessageChannel | o.s.i:type=MessageChannel,name=<channelName> |
MessageSource | o.s.i:type=MessageSource,name=<channelName>,bean=<source> |
MessageHandler | o.s.i:type=MessageHandler,name=<channelName>,bean=<source> |
The bean attribute in the object names for sources and handlers takes one of the values in the table below:
Bean Value | Description |
---|---|
endpoint | The bean name of the enclosing endpoint (e.g. <service-activator>) if there is one |
anonymous | An indication that the enclosing endpoint didn’t have a user-specified bean name, so the JMX name is the input channel name |
internal | For well-known Spring Integration default components |
handler/source | None of the above: fall back to the toString() of the object |
Custom elements can be appended to the object name by providing a reference to a Properties object in the object-name-static-properties attribute.
Also, since Spring Integration 3.0, you can use a custom ObjectNamingStrategy via the object-naming-strategy attribute.
This permits greater control over the naming of the MBeans, for example to group all Integration MBeans under an 'Integration' type.
A simple custom naming strategy implementation might be:
public class Namer implements ObjectNamingStrategy {

    private final ObjectNamingStrategy realNamer = new KeyNamingStrategy();

    @Override
    public ObjectName getObjectName(Object managedBean, String beanKey) throws MalformedObjectNameException {
        String actualBeanKey = beanKey.replace("type=", "type=Integration,componentType=");
        return realNamer.getObjectName(managedBean, actualBeanKey);
    }
}
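The effect of that key transformation can be verified with plain JMX classes; the domain and key properties below are illustrative:

```java
import javax.management.ObjectName;

public class NamerDemo {

    // Same string transformation as in the custom naming strategy above
    public static String rename(String beanKey) {
        return beanKey.replace("type=", "type=Integration,componentType=");
    }

    public static void main(String[] args) throws Exception {
        String beanKey = "my.domain:type=MessageHandler,name=myRouter,bean=endpoint";
        ObjectName renamed = new ObjectName(rename(beanKey));
        System.out.println(renamed.getKeyProperty("type"));          // prints "Integration"
        System.out.println(renamed.getKeyProperty("componentType")); // prints "MessageHandler"
    }
}
```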
The beanKey argument is a String containing the standard object name, beginning with the default-domain and including any additional static properties.
This example simply moves the standard type part to componentType and sets the type to Integration, enabling selection of all Integration MBeans in one query: "my.domain:type=Integration,*".
This also groups the beans under one tree entry under the domain in tools like VisualVM.
Note: The default naming strategy is a MetadataNamingStrategy. The exporter propagates the default-domain to that strategy to allow it to generate a fallback object name if parsing of the bean key fails.
Starting with version 5.0.9, any bean names (represented by the name key in the object name) can be quoted if they contain any characters that are not allowed in a Java identifier (or the period character '.').
This requires setting spring.integration.jmx.quote.names=true in a META-INF/spring.integration.properties file on the class path.
In 5.1, it will not be configurable.
Version 4.2 introduced some important improvements, representing a fairly major overhaul of the JMX support in the framework. These resulted in a significant performance improvement in JMX statistics collection and much more control over it, but have some implications for user code in a few specific (uncommon) situations. These changes are detailed below, with a caution where necessary.
Previously, MessageSource, MessageChannel and MessageHandler metrics were captured by wrapping the object in a JDK dynamic proxy to intercept appropriate method calls and capture the statistics.
The proxy was added when an integration MBean exporter was declared in the context.
Now, the statistics are captured by the beans themselves; see the section on Metrics and Management for more information.
Warning: This change means that you no longer automatically get an MBean or statistics for custom MessageHandler implementations unless those implementations extend the framework's standard abstract base classes.
The removal of the proxy has two additional benefits: first, stack traces in exceptions are reduced (when JMX is enabled) because the proxy is not on the stack; second, cases where two MBeans were exported for the same bean now result in a single MBean with consolidated attributes/operations (see the MBean consolidation bullet below).
System.nanoTime() is now used to capture times instead of System.currentTimeMillis().
This may provide more accuracy on some JVMs, especially when durations of less than 1 millisecond are expected.
Previously, when JMX was enabled, all sources, channels and handlers captured statistics.
It is now possible to control whether statistics are enabled on an individual component.
Further, it is possible to capture simple counts on MessageChannels and MessageHandlers instead of the complete time-based statistics.
This can have significant performance benefits because you can selectively configure where you need detailed statistics, as well as enable or disable collection at runtime.
See the section on Metrics and Management.
Similar to the @ManagedResource annotation, the @IntegrationManagedResource annotation marks a class as eligible to be exported as an MBean; however, it will only be exported if there is an IntegrationMBeanExporter in the application context.
Certain Spring Integration classes (in the org.springframework.integration package) that were previously annotated with @ManagedResource are now annotated with both @ManagedResource and @IntegrationManagedResource.
This is for backwards compatibility (see the next bullet).
Such MBeans will be exported by any context MBeanServer or an IntegrationMBeanExporter (but not both; if both exporters are present, the bean is exported by the integration exporter if the bean matches a managed-components pattern).
Certain classes within the framework (mapping routers, for example) have additional attributes/operations over and above those provided by metrics and Lifecycle.
We will use a Router as an example here.
Previously, beans of these types were exported as two distinct MBeans:
1) the metrics MBean (with an objectName such as intDomain:type=MessageHandler,name=myRouter,bean=endpoint); this MBean had metrics attributes and metrics/Lifecycle operations;
2) a second MBean (with an objectName such as ctxDomain:name=org.springframework.integration.config.RouterFactoryBean#0,type=MethodInvokingRouter), exported with the channel-mappings attribute and operations.
Now, the attributes and operations are consolidated into a single MBean.
The objectName depends on the exporter.
If exported by the integration MBean exporter, the objectName will be, for example, intDomain:type=MessageHandler,name=myRouter,bean=endpoint.
If exported by another exporter, it will be, for example, ctxDomain:name=org.springframework.integration.config.RouterFactoryBean#0,type=MethodInvokingRouter.
There is no difference between these MBeans (aside from the objectName), except that the statistics will not be enabled (the attributes will be 0) by exporters other than the integration exporter; statistics can be enabled at runtime using the JMX operations.
When exported by the integration MBean exporter, the initial state can be managed as described above.
Warning: If you are currently using the second MBean to change, for example, channel mappings, and you are using the integration MBean exporter, note that the objectName has changed because of the MBean consolidation. There is no change if you are not using the integration MBean exporter.
Previously, the managed-components patterns were inclusive only; if a bean name matched one of the patterns, it would be included.
Now, a pattern can be negated by prefixing it with !.
For example, "!foo*, foox" will match all beans that don't start with foo, except foox.
Patterns are evaluated left to right; the first match (positive or negative) wins, and no further patterns are applied.
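The left-to-right, first-match-wins evaluation can be sketched in plain Java (this sketch supports only simple trailing-'*' wildcards; the framework's actual pattern matching is richer):

```java
public class PatternDemo {

    // Returns true (include), false (exclude) or null (no pattern matched).
    // Patterns are evaluated left to right; the first match wins.
    public static Boolean smartMatch(String name, String... patterns) {
        for (String pattern : patterns) {
            boolean negated = pattern.startsWith("!");
            String p = negated ? pattern.substring(1) : pattern;
            boolean matched = p.endsWith("*")
                    ? name.startsWith(p.substring(0, p.length() - 1))
                    : name.equals(p);
            if (matched) {
                return !negated; // first match wins, positive or negative
            }
        }
        return null; // no pattern applied to this name
    }

    public static void main(String[] args) {
        // With the negation listed first, "foo1" is excluded before any
        // later pattern is consulted; ordering of the patterns matters.
        System.out.println(smartMatch("foo1", "!foo*", "bar*")); // false
        System.out.println(smartMatch("bar1", "!foo*", "bar*")); // true
        System.out.println(smartMatch("baz", "!foo*", "bar*"));  // null
    }
}
```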
Warning: The addition of this syntax to the pattern causes one possible (although perhaps unlikely) problem: if you have a bean whose name begins with !, a pattern matching that literal name will now be interpreted as a negation instead.
The IntegrationMBeanExporter no longer implements SmartLifecycle; this means that the start() and stop() operations are no longer available to register/unregister MBeans.
The MBeans are now registered during context initialization and unregistered when the context is destroyed.
===== Orderly Shutdown Managed Operation
The MBean exporter provides a JMX operation to shut down the application in an orderly manner, intended for use before terminating the JVM.
public void stopActiveComponents(long howLong)
Its use and operation are described below.
The key benefit of a messaging architecture is loose coupling: participating components do not maintain any awareness of one another. This fact alone makes an application extremely flexible, allowing you to change components without affecting the rest of the flow, to change messaging routes or message consumption styles (polling vs. event-driven), and so on. However, this unassuming style of architecture can prove difficult when things go wrong. When debugging, you would probably like to get as much information about the message as you can: its origin, the channels it has traversed, and so on.
Message History is one of those patterns that helps by giving you the option to maintain some level of awareness of a message's path, either for debugging purposes or to maintain an audit trail. Spring Integration provides a simple way to configure your message flows to maintain the Message History: a header is added to the Message and updated every time the message passes through a tracked component.
==== Message History Configuration
To enable Message History, simply define the message-history element in your configuration:
<int:message-history/>
Now every named component (a component that has an id defined) will be tracked.
The framework will set the history header on your Message.
Its value is very simple: a List<Properties>.
<int:gateway id="sampleGateway"
    service-interface="org.springframework.integration.history.sample.SampleGateway"
    default-request-channel="bridgeInChannel"/>

<int:chain id="sampleChain" input-channel="chainChannel" output-channel="filterChannel">
    <int:header-enricher>
        <int:header name="baz" value="baz"/>
    </int:header-enricher>
</int:chain>
The above configuration will produce a very simple Message History structure:
[{name=sampleGateway, type=gateway, timestamp=1283281668091}, {name=sampleChain, type=chain, timestamp=1283281668094}]
To get access to Message History all you need is access the MessageHistory header. For example:
Iterator<Properties> historyIterator =
        message.getHeaders().get(MessageHistory.HEADER_NAME, MessageHistory.class).iterator();
assertTrue(historyIterator.hasNext());
Properties gatewayHistory = historyIterator.next();
assertEquals("sampleGateway", gatewayHistory.get("name"));
assertTrue(historyIterator.hasNext());
Properties chainHistory = historyIterator.next();
assertEquals("sampleChain", chainHistory.get("name"));
You might not want to track all of the components.
To limit the history to certain components based on their names, provide the tracked-components attribute and specify a comma-delimited list of component names and/or patterns that match the components you want to track:
<int:message-history tracked-components="*Gateway, sample*, foo"/>
In the above example, Message History will only be maintained for all of the components that end with Gateway, start with sample, or match the name foo exactly.
Starting with version 4.0, you can also use the @EnableMessageHistory annotation in a @Configuration class.
In addition, the MessageHistoryConfigurer bean is now exposed as a JMX MBean by the IntegrationMBeanExporter (see the MBean exporter section above), allowing the patterns to be changed at runtime.
Note, however, that the bean must be stopped (turning off message history) in order to change the patterns.
This feature might be useful to temporarily turn on history while analyzing a system.
The MBean's object name is "<domain>:name=messageHistoryConfigurer,type=MessageHistoryConfigurer".
Important: If multiple beans (declared by @EnableMessageHistory and/or <message-history/>) exist in the application context, they must all declare identical component name patterns; otherwise the configuration is ambiguous.
Note: Remember that by definition the Message History header is immutable (you can't re-write history, although some try). Therefore, when writing Message History values, the components either create brand new Messages (when the component is an origin), or they copy the history from a request Message, modify it, and set the new list on a reply Message. In either case, the values can be appended even if the Message itself is crossing thread boundaries. That means that the history values can greatly simplify debugging in an asynchronous message flow.
Enterprise Integration Patterns (EIP) identifies several patterns that have the capability to buffer messages. For example, an Aggregator buffers messages until they can be released and a QueueChannel buffers messages until consumers explicitly receive those messages from that channel. Because of the failures that can occur at any point within your message flow, EIP components that buffer messages also introduce a point where messages could be lost.
To mitigate the risk of losing Messages, EIP defines the Message Store pattern, which allows EIP components to store Messages, typically in some type of persistent store (e.g. an RDBMS).
Spring Integration provides support for the Message Store pattern by (a) defining the org.springframework.integration.store.MessageStore strategy interface, (b) providing several implementations of this interface, and (c) exposing a message-store attribute on all components that are capable of buffering messages, so that you can inject any instance that implements the MessageStore interface.
Details on how to configure a specific Message Store implementation and/or how to inject a MessageStore implementation into a specific buffering component are described throughout the manual (see the specific component, such as QueueChannel, Aggregator, Delayer, etc.), but here are a couple of samples to give you an idea:
QueueChannel
<int:channel id="myQueueChannel">
    <int:queue message-store="refToMessageStore"/>
</int:channel>
Aggregator
<int:aggregator … message-store="refToMessageStore"/>
By default, Messages are stored in-memory using org.springframework.integration.store.SimpleMessageStore, an implementation of MessageStore.
That might be fine for development or simple low-volume environments where the potential loss of non-persistent messages is not a concern.
However, a typical production application will need a more robust option, not only to mitigate the risk of message loss but also to avoid potential out-of-memory errors.
Therefore, MessageStore implementations are also provided for a variety of data stores.
Below is a complete list of supported implementations:
Important: Be aware of some limitations when using persistent implementations of the MessageStore. The Message data (payload and headers) is serialized and deserialized using different serialization strategies, depending on the implementation of the MessageStore. Special attention must be paid to the headers that represent certain types of data. For example, if one of the headers contains an instance of some Spring Bean, upon deserialization you may end up with a different instance of that bean, which directly affects some of the implicit headers created by the framework (e.g., REPLY_CHANNEL or ERROR_CHANNEL). Currently they are not serializable, but even if they were, the deserialized channel would not represent the expected instance. Beginning with Spring Integration version 3.0, this issue can be resolved with a header enricher configured to replace these headers with a name after registering the channel with the HeaderChannelRegistry. Also, when configuring a message flow like this: gateway → queue-channel (backed by a persistent Message Store) → service-activator, the gateway creates a Temporary Reply Channel, which will be lost by the time the service-activator's poller reads from the queue. Again, you can use the header enricher to replace the headers with a String representation. For more information, refer to Section 7.2.2, "Header Enricher".
Spring Integration 4.0 introduced two new interfaces: ChannelMessageStore, to implement operations specific to QueueChannels, and PriorityCapableChannelMessageStore, to mark a MessageStore implementation to be used for PriorityChannels and to provide priority order for persisted Messages.
The real behaviour depends on the implementation.
The Framework provides these implementations, which can be used as a persistent MessageStore for QueueChannel and PriorityChannel:
Starting with version 4.3, some MessageGroupStore implementations can be injected with a custom MessageGroupFactory strategy to create/customize the MessageGroup instances used by the MessageGroupStore.
This defaults to a SimpleMessageGroupFactory, which produces SimpleMessageGroups based on the GroupType.HASH_SET (LinkedHashSet) internal collection.
Other possible options are SYNCHRONISED_SET and BLOCKING_QUEUE, where the last one can be used to reinstate the previous SimpleMessageGroup behavior.
Also, the PERSISTENT option is available; see the next section for more information.
Starting with version 5.0.1, the LIST option is also available for use cases where the order and uniqueness of messages in the group doesn't matter.
==== Persistence MessageGroupStore and Lazy-Load
Starting with version 4.3, all persistence MessageGroupStores retrieve MessageGroups and their messages from the store in a lazy-load manner.
In most cases this is useful for the correlation MessageHandlers (Section 6.4, "Aggregator" and Section 6.5, "Resequencer"), where loading the entire MessageGroup from the store on each correlation operation would be an overhead.
To switch off the lazy-load behavior, the AbstractMessageGroupStore.setLazyLoadMessageGroups(false) option can be used from the configuration.
Our performance tests for lazy-load on the MongoDB MessageStore and an <aggregator> (Section 6.4, "Aggregator") with a custom release-strategy such as:
<int:aggregator input-channel="inputChannel"
    output-channel="outputChannel"
    message-store="mongoStore"
    release-strategy-expression="size() == 1000"/>
demonstrate these results for 1000 simple messages:
StopWatch 'Lazy-Load Performance': running time (millis) = 38918
-----------------------------------------
ms     %     Task name
-----------------------------------------
02652  007%  Lazy-Load
36266  093%  Eager
Many external systems, services or resources aren't transactional (Twitter, RSS, the file system, etc.) and provide no way to mark data as read.
Also, in some integration solutions there is a need to implement the Enterprise Integration Pattern Idempotent Receiver.
To achieve this goal, and to store some state of the Endpoint before the next interaction with the external system or the next Message, Spring Integration provides the Metadata Store component, an implementation of the org.springframework.integration.metadata.MetadataStore interface with a general key-value contract.
The Metadata Store is designed to store various types of generic meta-data (e.g., published date of the last feed entry that has been processed) to help components such as the Feed adapter deal with duplicates.
If a component is not directly provided with a reference to a MetadataStore, the algorithm for locating a metadata store is as follows: first, look for a bean with id metadataStore in the ApplicationContext.
If one is found, it will be used; otherwise a new instance of SimpleMetadataStore is created, an in-memory implementation that persists metadata only within the lifecycle of the currently running Application Context.
This means that upon restart you may end up with duplicate entries.
If you need to persist metadata between Application Context restarts, these persistent MetadataStores are provided by the framework:
The PropertiesPersistingMetadataStore is backed by a properties file and a PropertiesPersister.
By default, it only persists the state when the application context is closed normally.
It implements Flushable, so you can persist the state at will by invoking flush().
<bean id="metadataStore" class="org.springframework.integration.metadata.PropertiesPersistingMetadataStore"/>
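The key-value contract can be illustrated with a self-contained, hypothetical properties-file-backed store. This is a sketch of the concept only, not the framework's implementation (which also uses a PropertiesPersister and other safeguards):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.Flushable;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Properties;

// Hypothetical class illustrating a file-backed key-value metadata store
public class FileMetadataStore implements Flushable {

    private final Properties metadata = new Properties();
    private final File file;

    public FileMetadataStore(File file) throws IOException {
        this.file = file;
        if (file.exists()) { // reload previously persisted state
            try (InputStream in = new FileInputStream(file)) {
                metadata.load(in);
            }
        }
    }

    public void put(String key, String value) {
        metadata.setProperty(key, value);
    }

    public String get(String key) {
        return metadata.getProperty(key);
    }

    @Override
    public void flush() throws IOException { // persist the state at will
        try (OutputStream out = new FileOutputStream(file)) {
            metadata.store(out, "metadata");
        }
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("metadata", ".properties");
        FileMetadataStore store = new FileMetadataStore(f);
        store.put("feed.lastEntry", "2024-01-01");
        store.flush();
        // A fresh instance reads the persisted state back from the file
        System.out.println(new FileMetadataStore(f).get("feed.lastEntry"));
    }
}
```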
Alternatively, you can provide your own implementation of the MetadataStore interface (e.g. JdbcMetadataStore) and configure it as a bean in the Application Context.
Starting with version 4.0, SimpleMetadataStore, PropertiesPersistingMetadataStore and RedisMetadataStore implement ConcurrentMetadataStore.
These provide atomic updates and can be used across multiple component or application instances.
==== Idempotent Receiver and Metadata Store
The Metadata Store is useful for implementing the EIP Idempotent Receiver pattern, when there is a need to filter out an incoming Message if it has already been processed, either discarding it or performing some other logic on discard. The following configuration shows an example of how to do this:
<int:filter input-channel="serviceChannel" output-channel="idempotentServiceChannel"
    discard-channel="discardChannel"
    expression="@metadataStore.get(headers.businessKey) == null"/>

<int:publish-subscribe-channel id="idempotentServiceChannel"/>

<int:outbound-channel-adapter channel="idempotentServiceChannel"
    expression="@metadataStore.put(headers.businessKey, '')"/>

<int:service-activator input-channel="idempotentServiceChannel" ref="service"/>
The value of the idempotent entry may be an expiration date, after which that entry should be removed from the Metadata Store by some scheduled reaper.
Also see the section on the Idempotent Receiver Enterprise Integration Pattern.
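The idea can be sketched with an in-memory map standing in for the Metadata Store; the key names and TTL handling here are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class IdempotentDemo {

    // In-memory stand-in for the metadata store: businessKey -> expiry millis
    static final Map<String, Long> store = new ConcurrentHashMap<>();

    // Equivalent of the filter expression above: accept only unseen keys,
    // recording an expiration date as the entry's value
    public static boolean accept(String businessKey, long now, long ttlMillis) {
        return store.putIfAbsent(businessKey, now + ttlMillis) == null;
    }

    // Scheduled reaper: drop entries whose expiration date has passed
    public static void reap(long now) {
        store.entrySet().removeIf(e -> e.getValue() <= now);
    }

    public static void main(String[] args) {
        System.out.println(accept("order-1", 0, 1000));    // true  (first time)
        System.out.println(accept("order-1", 10, 1000));   // false (duplicate)
        reap(2000);                                        // entry expired and removed
        System.out.println(accept("order-1", 2000, 1000)); // true  (accepted again)
    }
}
```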
Some metadata stores (currently only zookeeper) support registering a listener to receive events when items change.
public interface MetadataStoreListener {

    void onAdd(String key, String value);

    void onRemove(String key, String oldValue);

    void onUpdate(String key, String newValue);
}
See the javadocs for more information.
The MetadataStoreListenerAdapter
can be subclassed if you are only interested in a subset of events.
As described in (EIP), the idea behind the Control Bus is that the same messaging system can be used for monitoring and managing the components within the framework as is used for "application-level" messaging. In Spring Integration we build upon the adapters described above so that it’s possible to send Messages as a means of invoking exposed operations.
<int:control-bus input-channel="operationChannel"/>
The Control Bus has an input channel that can be accessed for invoking operations on the beans in the application context. It also has all the common properties of a service activating endpoint, e.g. you can specify an output channel if the result of the operation has a return value that you want to send on to a downstream channel.
The Control Bus executes messages on the input channel as Spring Expression Language expressions.
It takes a message, compiles the body to an expression, adds some context, and then executes it.
The default context supports any method that has been annotated with @ManagedAttribute or @ManagedOperation.
It also supports the methods on Spring's Lifecycle interface, as well as methods that are used to configure several of Spring's TaskExecutor and TaskScheduler implementations.
The simplest way to ensure that your own methods are available to the Control Bus is to use the @ManagedAttribute and/or @ManagedOperation annotations.
Since those are also used for exposing methods to a JMX MBean registry, this is a convenient by-product: often the same operations you want to expose to the Control Bus are reasonable to expose via JMX.
Resolution of any particular instance within the application context is achieved with the typical SpEL syntax: simply provide the bean name with the SpEL prefix for beans (@).
For example, to execute a method on a Spring Bean a client could send a message to the operation channel as follows:
Message operation = MessageBuilder.withPayload("@myServiceBean.shutdown()").build();
operationChannel.send(operation);
The root of the context for the expression is the Message itself, so you also have access to the payload and headers as variables within your expression.
This is consistent with all the other expression support in Spring Integration endpoints.
With Java and Annotations the Control Bus can be configured as follows:
@Bean
@ServiceActivator(inputChannel = "operationChannel")
public ExpressionControlBusFactoryBean controlBus() {
    return new ExpressionControlBusFactoryBean();
}
Or, when using Java DSL flow definitions:
@Bean
public IntegrationFlow controlBusFlow() {
    return IntegrationFlows.from("controlBus")
            .controlBus()
            .get();
}
Or, if you prefer Lambda style with automatic DirectChannel
creation:
@Bean
public IntegrationFlow controlBus() {
    return IntegrationFlowDefinition::controlBus;
}
In this case, the channel is named controlBus.input.
As described in the section called "Orderly Shutdown Managed Operation", the MBean exporter provides a JMX operation stopActiveComponents, which is used to stop the application in an orderly manner. The operation has a single long parameter, which indicates how long (in milliseconds) the operation will wait to allow in-flight messages to complete. It is therefore important to select an appropriate timeout. The operation works as follows:
The first step calls beforeShutdown() on all beans that implement OrderlyShutdownCapable, allowing such components to prepare for shutdown.
Examples of components that implement this interface, and what they do with this call, include: JMS and AMQP message-driven adapters stop their listener containers; TCP server connection factories stop accepting new connections (while keeping existing connections open); TCP inbound endpoints drop (log) any new messages received; HTTP inbound endpoints return 503 Service Unavailable for any new requests.
The second step stops any active channels, such as JMS- or AMQP-backed channels.
The third step stops all MessageSources.
The fourth step stops all inbound MessageProducers (that are not OrderlyShutdownCapable).
The fifth step waits for any remaining time left, as defined by the value of the long parameter passed in to the operation. This is intended to allow any in-flight messages to complete their journeys. It is therefore important to select an appropriate timeout when invoking this operation.
The sixth step calls afterShutdown() on all OrderlyShutdownCapable components, allowing them to perform final shutdown tasks (closing all open sockets, for example).
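The six steps above can be summarized in a plain-Java sketch. The ShutdownCapable interface here is a local, hypothetical stand-in for the two callbacks described above, not the framework's actual OrderlyShutdownCapable signature:

```java
import java.util.ArrayList;
import java.util.List;

public class OrderlyShutdownDemo {

    // Hypothetical stand-in for the two shutdown callbacks described above
    interface ShutdownCapable {
        void beforeShutdown();
        void afterShutdown();
    }

    public static List<String> stopActiveComponents(List<ShutdownCapable> components,
            long howLong) throws InterruptedException {
        List<String> steps = new ArrayList<>();
        components.forEach(ShutdownCapable::beforeShutdown);  // step 1: prepare for shutdown
        steps.add("beforeShutdown");
        steps.add("stop channels, sources, producers");       // steps 2-4 (elided here)
        Thread.sleep(howLong);                                // step 5: let in-flight messages drain
        steps.add("wait " + howLong + "ms");
        components.forEach(ShutdownCapable::afterShutdown);   // step 6: final shutdown tasks
        steps.add("afterShutdown");
        return steps;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(stopActiveComponents(List.of(), 10));
        // prints [beforeShutdown, stop channels, sources, producers, wait 10ms, afterShutdown]
    }
}
```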
As discussed above, this operation can be invoked using JMX.
If you wish to invoke the method programmatically, you will need to inject, or otherwise obtain a reference to, the IntegrationMBeanExporter.
If no id attribute is provided on the <int-jmx:mbean-export/> definition, the bean will have a generated name.
This name contains a random component to avoid ObjectName collisions if multiple Spring Integration contexts exist in the same JVM (MBeanServer).
For this reason, if you wish to invoke the method programmatically, it is recommended that you provide the exporter with an id attribute so it can easily be accessed in the application context.
Finally, the operation can be invoked using the <control-bus>
; see the monitoring Spring Integration sample application for details.
Important: The above algorithm was improved in version 4.1. Previously, all task executors and schedulers were stopped; this could cause mid-flow messages in QueueChannels to remain unprocessed. The improved algorithm leaves pollers running so that in-flight messages can be drained during the wait phase.
Starting with version 4.3, Spring Integration provides access to an application's runtime object model which can, optionally, include component metrics.
It is exposed as a graph, which may be used to visualize the current state of the integration application.
The o.s.i.support.management.graph package contains all the required classes to collect, build and render the runtime state of Spring Integration components as a single tree-like Graph object.
The IntegrationGraphServer should be declared as a bean to build, retrieve and refresh the Graph object.
The resulting Graph object can be serialized to any format, although JSON is flexible and convenient to parse and represent on the client side.
A simple Spring Integration application with only the default components would expose a graph as follows:
{
  "contentDescriptor": {
    "providerVersion": "4.3.0.RELEASE",
    "providerFormatVersion": 1.0,
    "provider": "spring-integration",
    "name": "myApplication"
  },
  "nodes": [
    {
      "nodeId": 1,
      "name": "nullChannel",
      "stats": null,
      "componentType": "channel"
    },
    {
      "nodeId": 2,
      "name": "errorChannel",
      "stats": null,
      "componentType": "publish-subscribe-channel"
    },
    {
      "nodeId": 3,
      "name": "_org.springframework.integration.errorLogger",
      "stats": {
        "duration": {
          "count": 0,
          "min": 0.0,
          "max": 0.0,
          "mean": 0.0,
          "standardDeviation": 0.0,
          "countLong": 0
        },
        "errorCount": 0,
        "standardDeviationDuration": 0.0,
        "countsEnabled": true,
        "statsEnabled": true,
        "loggingEnabled": false,
        "handleCount": 0,
        "meanDuration": 0.0,
        "maxDuration": 0.0,
        "minDuration": 0.0,
        "activeCount": 0
      },
      "componentType": "logging-channel-adapter",
      "output": null,
      "input": "errorChannel"
    }
  ],
  "links": [
    {
      "from": 2,
      "to": 3,
      "type": "input"
    }
  ]
}
As you can see, the graph consists of three top-level elements.
The contentDescriptor graph element is pretty straightforward and contains general information about the application providing the data. The name can be customized on the IntegrationGraphServer bean or via the spring.application.name application context environment property. The other properties are provided by the framework and allow you to distinguish a similar model from other sources.
The links graph element represents connections between nodes from the nodes graph element and, therefore, between integration components in the source Spring Integration application. For example, from a MessageChannel to an EventDrivenConsumer with some MessageHandler, or from an AbstractReplyProducingMessageHandler to a MessageChannel. For convenience, and to allow a link's purpose to be determined, the model supplies each link with a type attribute. The possible types are:
- input: from a MessageChannel to the endpoint, via an inputChannel or requestChannel property;
- output: from a MessageHandler, MessageProducer or SourcePollingChannelAdapter to the MessageChannel, via an outputChannel or replyChannel property;
- error: from a MessageHandler on a PollingConsumer, or from a MessageProducer or SourcePollingChannelAdapter, to the MessageChannel, via an errorChannel property;
- discard: from a DiscardingMessageHandler (e.g. MessageFilter) to the MessageChannel, via a discardChannel property;
- route: from an AbstractMappingMessageRouter (e.g. HeaderValueRouter) to the MessageChannel. Similar to output, but determined at run-time: it may be a configured channel mapping, or a dynamically resolved channel. Routers will typically only retain up to 100 dynamic routes for this purpose, but this limit can be modified using the dynamicChannelLimit property.
The information from this element can be used by a visualization tool to render connections between nodes from the nodes graph element, where the from and to numbers represent the value of the nodeId property of the linked nodes. For example, the link type can be used to determine the proper port on the target node:
              +---(discard)
              |
         +----o----+
         |         |
         |         |
         |         |
(input)--o         o---(output)
         |         |
         |         |
         |         |
         +----o----+
              |
              +---(error)
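To make the traversal concrete, here is a small, self-contained Java sketch (not part of Spring Integration; the Link record and method names are made up for illustration) that groups the from/to pairs of the links element by source nodeId, the first step a rendering tool would take before choosing ports:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical client-side helper: indexes the "links" element by source node,
// so a renderer can look up a node's outgoing connections and use the link
// type (input/output/error/discard/route) to pick the proper port.
public class GraphLinks {

    record Link(int from, int to, String type) {}

    static Map<Integer, List<Link>> outgoing(List<Link> links) {
        Map<Integer, List<Link>> bySource = new HashMap<>();
        for (Link link : links) {
            bySource.computeIfAbsent(link.from(), k -> new ArrayList<>()).add(link);
        }
        return bySource;
    }

    public static void main(String[] args) {
        // The single link from the earlier JSON sample: errorChannel (2) -> errorLogger (3)
        List<Link> links = List.of(new Link(2, 3, "input"));
        System.out.println(outgoing(links).get(2).get(0).type()); // prints "input"
    }
}
```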
The nodes graph element is perhaps the most interesting, because its elements contain not only the runtime components with their componentType s and name s, but can also optionally contain metrics exposed by the component. Node elements contain various properties which are generally self-explanatory. For example, expression-based components include the expression property containing the primary expression string for the component.
To enable the metrics, add an @EnableIntegrationManagement annotation to some @Configuration class or add an <int:management/> element to your XML configuration. You can control exactly which components in the framework collect statistics; see the section called “CompletableFuture” for complete information. See the stats attribute of the _org.springframework.integration.errorLogger component in the JSON example above.
The nullChannel and errorChannel don’t provide statistics information in this case, because the configuration for this example was:
@Configuration
@EnableIntegration
@EnableIntegrationManagement(
        statsEnabled = "_org.springframework.integration.errorLogger.handler",
        countsEnabled = "!*",
        defaultLoggingEnabled = "false")
public class ManagementConfiguration {

    @Bean
    public IntegrationGraphServer integrationGraphServer() {
        return new IntegrationGraphServer();
    }

}
The nodeId represents a unique incremental identifier to distinguish one component from another. It is also used in the links element to represent a relationship (connection) of this component to others, if any. The input and output attributes are for the inputChannel and outputChannel properties of the AbstractEndpoint, MessageHandler, SourcePollingChannelAdapter or MessageProducerSupport. See the next section for more information.
==== Graph Runtime Model
Spring Integration components have various levels of complexity. For example, any polled MessageSource also has a SourcePollingChannelAdapter and a MessageChannel to which to send messages from the source data periodically. Other components might be middleware request-reply components, e.g. JmsOutboundGateway, with a consuming AbstractEndpoint to subscribe to (or poll) the requestChannel (input) for messages, and a replyChannel (output) to produce a reply message to send downstream. Meanwhile, any MessageProducerSupport implementation (e.g. ApplicationEventListeningMessageProducer) simply wraps some source protocol listening logic and sends messages to the outputChannel.
Within the graph, Spring Integration components are represented using the IntegrationNode class hierarchy, which you can find in the o.s.i.support.management.graph package. For example, the ErrorCapableDiscardingMessageHandlerNode could be used for an AggregatingMessageHandler (because it has a discardChannel option) that can produce errors when consuming from a PollableChannel using a PollingConsumer. Another sample is CompositeMessageHandlerNode, used for a MessageHandlerChain when subscribed to a SubscribableChannel using an EventDrivenConsumer.
Note: a messaging gateway is represented with a separate node for each of its methods. For example, the gateway:
@MessagingGateway(defaultRequestChannel = "four")
public interface Gate {

    void foo(String foo);

    void foo(Integer foo);

    void bar(String bar);

}
produces nodes like:
{ "nodeId" : 10, "name" : "gate.bar(class java.lang.String)", "stats" : null, "componentType" : "gateway", "output" : "four", "errors" : null }, { "nodeId" : 11, "name" : "gate.foo(class java.lang.String)", "stats" : null, "componentType" : "gateway", "output" : "four", "errors" : null }, { "nodeId" : 12, "name" : "gate.foo(class java.lang.Integer)", "stats" : null, "componentType" : "gateway", "output" : "four", "errors" : null }
This IntegrationNode hierarchy can be used for parsing the graph model on the client side, as well as for understanding the general Spring Integration runtime behavior. See also Section 3.8, “Programming Tips and Tricks” for more information.
=== Integration Graph Controller
If your application is web-based (or built on top of Spring Boot using an embedded web container) and the Spring Integration HTTP or WebFlux module is present on the classpath, you can use an IntegrationGraphController to expose the IntegrationGraphServer functionality as a REST service. For this purpose, the @EnableIntegrationGraphController @Configuration class annotation and the <int-http:graph-controller/> XML element are available in the HTTP module.
Together with the @EnableWebMvc annotation (or <mvc:annotation-driven/> for XML definitions), this configuration registers an IntegrationGraphController @RestController, whose @RequestMapping.path can be configured on the @EnableIntegrationGraphController annotation or the <int-http:graph-controller/> element. The default path is /integration.
The IntegrationGraphController @RestController provides these services:

- @GetMapping(name = "getGraph") - retrieves the state of the Spring Integration components since the last IntegrationGraphServer refresh. The o.s.i.support.management.graph.Graph is returned as the @ResponseBody of the REST service;
- @GetMapping(path = "/refresh", name = "refreshGraph") - refreshes the current Graph for the actual runtime state and returns it as a REST response. It is not necessary to refresh the graph for metrics; they are provided in real time when the graph is retrieved. Refresh can be called if the application context has been modified since the graph was last retrieved; in that case, the graph is completely rebuilt.
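Assuming the default /integration path and a hypothetical local host and port, the two services are plain HTTP GET endpoints. The following sketch only builds the requests; actually sending them requires a running application:

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Builds GET requests for the two IntegrationGraphController services.
// The base URL below is a placeholder for wherever the application is deployed.
public class GraphRequests {

    static HttpRequest get(String base, String path) {
        return HttpRequest.newBuilder(URI.create(base + path)).GET().build();
    }

    public static void main(String[] args) {
        String base = "http://localhost:8080";
        // getGraph: the state since the last refresh
        System.out.println(get(base, "/integration").uri());
        // refreshGraph: rebuild the graph from the current runtime state
        System.out.println(get(base, "/integration/refresh").uri());
    }
}
```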
Any security and cross-origin restrictions for the IntegrationGraphController can be achieved with the standard configuration options and components provided by the Spring Security and Spring MVC projects. A simple example follows:
<mvc:annotation-driven />

<mvc:cors>
    <mvc:mapping path="/myIntegration/**"
                 allowed-origins="http://localhost:9090"
                 allowed-methods="GET" />
</mvc:cors>

<security:http>
    <security:intercept-url pattern="/myIntegration/**" access="ROLE_ADMIN" />
</security:http>

<int-http:graph-controller path="/myIntegration" />
The Java & Annotation Configuration variant follows. Note that, for convenience, the annotation provides an allowedOrigins attribute; this just provides GET access to the path. For more sophistication, you can configure the CORS mappings using standard Spring MVC mechanisms.
@Configuration
@EnableWebMvc // or @EnableWebFlux
@EnableWebSecurity // or @EnableWebFluxSecurity
@EnableIntegration
@EnableIntegrationGraphController(path = "/testIntegration", allowedOrigins = "http://localhost:9090")
public class IntegrationConfiguration extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .authorizeRequests()
                .antMatchers("/testIntegration/**").hasRole("ADMIN")
                // ...
                .formLogin();
    }

    //...

}
This section covers the various Channel Adapters and Messaging Gateways provided by Spring Integration to support Message-based communication with external systems.
== Endpoint Quick Reference Table
As discussed in the sections above, Spring Integration provides a number of endpoints used to interface with external systems, file systems etc. The following is a summary of the various endpoints with quick links to the appropriate chapter.
To recap, Inbound Channel Adapters are used for one-way integration bringing data into the messaging application. Outbound Channel Adapters are used for one-way integration to send data out of the messaging application. Inbound Gateways are used for a bidirectional integration flow where some other system invokes the messaging application and receives a reply. Outbound Gateways are used for a bidirectional integration flow where the messaging application invokes some external service or entity, expecting a result.
In addition, as discussed in Part IV, “Core Messaging”, endpoints are provided for interfacing with Plain Old Java Objects (POJOs).
As discussed in Section 4.3, “Channel Adapter”, the <int:inbound-channel-adapter> allows polling a Java method for data; the <int:outbound-channel-adapter> allows sending data to a void method; and, as discussed in Section 8.4, “Messaging Gateways”, the <int:gateway> allows any Java program to invoke a messaging flow. Each of these works without requiring any source-level dependencies on Spring Integration. The equivalent of an outbound gateway in this context would be to use a Service Activator to invoke a method that returns an Object of some kind.
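To illustrate, the three elements can be combined as in the following hedged configuration sketch; the bean names, method names, channel names, and the service interface are hypothetical, chosen only for this example:

```xml
<!-- poll a POJO method every 5 seconds for data entering the flow -->
<int:inbound-channel-adapter ref="mySourceBean" method="nextValue" channel="in">
    <int:poller fixed-rate="5000"/>
</int:inbound-channel-adapter>

<!-- hand data leaving the flow to a void POJO method -->
<int:outbound-channel-adapter ref="mySinkBean" method="handle" channel="out"/>

<!-- let any Java code start a flow through a plain interface -->
<int:gateway id="myFlowGateway" service-interface="com.example.MyFlow"
             default-request-channel="in"/>
```

None of the referenced POJOs need to import any Spring Integration types; only the configuration knows about the framework.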
Spring Integration provides Channel Adapters for receiving and sending messages using the Advanced Message Queuing Protocol (AMQP). The following adapters are available:
Spring Integration also provides a point-to-point Message Channel as well as a publish/subscribe Message Channel backed by AMQP Exchanges and Queues.
In order to provide AMQP support, Spring Integration relies on Spring AMQP, which "applies core Spring concepts to the development of AMQP-based messaging solutions". Spring AMQP provides similar semantics to Spring JMS.
Whereas the provided AMQP Channel Adapters are intended for unidirectional Messaging (send or receive) only, Spring Integration also provides inbound and outbound AMQP Gateways for request/reply operations.
Tip: please familiarize yourself with the reference documentation of the Spring AMQP project as well. It provides much more in-depth information regarding Spring’s integration with AMQP in general and RabbitMQ in particular.
A configuration sample for an AMQP Inbound Channel Adapter is shown below.
<int-amqp:inbound-channel-adapter id="inboundAmqp"channel="inboundChannel"
queue-names="si.test.queue"
acknowledge-mode="AUTO"
advice-chain=""
channel-transacted=""
concurrent-consumers=""
connection-factory=""
error-channel=""
expose-listener-channel=""
header-mapper=""
mapped-request-headers=""
listener-container=""
message-converter=""
message-properties-converter=""
phase="" (16) prefetch-count="" (17) receive-timeout="" (18) recovery-interval="" (19) missing-queues-fatal="" (20) shutdown-timeout="" (21) task-executor="" (22) transaction-attribute="" (23) transaction-manager="" (24) tx-size="" (25) consumers-per-queue /> (26)
1. Unique ID for this adapter. Optional.
2. Message Channel to which converted Messages should be sent. Required.
3. Names of the AMQP Queues from which Messages should be consumed (comma-separated list). Required.
4. Acknowledge Mode for the MessageListenerContainer. Optional (Defaults to AUTO).
5. Extra AOP Advice(s) to handle cross-cutting behavior associated with this Inbound Channel Adapter. Optional.
6. Flag to indicate that channels created by this component will be transactional. If true, tells the framework to use a transactional channel and to end all operations (send or receive) with a commit or rollback depending on the outcome, with an exception signalling a rollback. Optional (Defaults to false).
7. Specify the number of concurrent consumers to create. Default is 1. Raising the number of concurrent consumers is recommended in order to scale the consumption of messages coming in from a queue. However, note that any ordering guarantees are lost once multiple consumers are registered. In general, use 1 consumer for low-volume queues. Not allowed when consumers-per-queue is set. Optional.
8. Bean reference to the RabbitMQ ConnectionFactory. Optional (Defaults to connectionFactory).
9. Message Channel to which error Messages should be sent. Optional.
10. Shall the listener channel (com.rabbitmq.client.Channel) be exposed to a registered ChannelAwareMessageListener. Optional (Defaults to true).
11. A reference to an AmqpHeaderMapper to use when receiving AMQP Messages. Optional.
12. Comma-separated list of names of AMQP Headers to be mapped from the AMQP request into the MessageHeaders. This can only be provided if the header-mapper reference is not provided. The values in this list can also be simple patterns to be matched against the header names (e.g. "*" or "foo*, bar" or "*foo").
13. Reference to the listener container to use for receiving AMQP Messages. If this attribute is provided, no other attribute related to the listener container configuration should be provided. Optional.
14. The MessageConverter to use when receiving AMQP Messages. Optional.
15. The MessagePropertiesConverter to use when receiving AMQP Messages. Optional.
16. Specify the phase in which the underlying listener container should be started and stopped. Optional.
17. Tells the AMQP broker how many messages to send to each consumer in a single request. Often this can be set quite high to improve throughput. It should be greater than or equal to the transaction size (see attribute "tx-size"). Optional (Defaults to 1).
18. Receive timeout in milliseconds. Optional (Defaults to 1000).
19. Specifies the interval (in milliseconds) between recovery attempts of the underlying listener container. Optional.
20. If true, and none of the queues are available on the broker, the container will throw a fatal exception during startup and will stop if the queues are deleted when the container is running (after making 3 attempts to passively declare the queues). If false, the container will not throw an exception and goes into recovery mode, attempting to restart according to the recovery-interval. Optional (Defaults to true).
21. The time to wait for workers (in milliseconds) after the underlying listener container is stopped and before its shutdown is forced. Optional.
22. By default, the underlying listener container uses a SimpleAsyncTaskExecutor; this attribute allows a reference to a custom TaskExecutor to be provided instead. Optional.
23. By default, the underlying listener container creates a default TransactionAttribute; this attribute allows a custom TransactionAttribute to be provided. Optional.
24. Sets a bean reference to an external PlatformTransactionManager on the underlying listener container. Optional.
25. Tells the listener container how many messages to process in a single transaction (when the channel is transactional). For best results, it should be less than or equal to the prefetch count. Not allowed when consumers-per-queue is set. Optional.
26. Indicates that the underlying listener container should be a DirectMessageListenerContainer instead of the default SimpleMessageListenerContainer. Optional.
Note (container): when configuring an external container with XML, you cannot use the Spring AMQP namespace to define the container, because the namespace requires at least one <listener/> element. In that case, define the container as a normal Spring <bean> definition, such as:

<bean id="container"
        class="org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer">
    <property name="connectionFactory" ref="connectionFactory" />
    <property name="queueNames" value="foo.queue" />
    <property name="defaultRequeueRejected" value="false"/>
</bean>
Important: even though the Spring Integration JMS and AMQP support is very similar, important differences exist. The JMS Inbound Channel Adapter polls for messages and requires a configured poller, whereas the AMQP Inbound Channel Adapter uses a listener container and is message driven; in that regard, it is more similar to the JMS Message Driven Channel Adapter.
==== Configuring with Java Configuration
The following Spring Boot application provides an example of configuring the inbound adapter using Java configuration:
@SpringBootApplication
public class AmqpJavaApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(AmqpJavaApplication.class)
            .web(false)
            .run(args);
    }

    @Bean
    public MessageChannel amqpInputChannel() {
        return new DirectChannel();
    }

    @Bean
    public AmqpInboundChannelAdapter inbound(SimpleMessageListenerContainer listenerContainer,
            @Qualifier("amqpInputChannel") MessageChannel channel) {
        AmqpInboundChannelAdapter adapter = new AmqpInboundChannelAdapter(listenerContainer);
        adapter.setOutputChannel(channel);
        return adapter;
    }

    @Bean
    public SimpleMessageListenerContainer container(ConnectionFactory connectionFactory) {
        SimpleMessageListenerContainer container =
                new SimpleMessageListenerContainer(connectionFactory);
        container.setQueueNames("foo");
        container.setConcurrentConsumers(2);
        // ...
        return container;
    }

    @Bean
    @ServiceActivator(inputChannel = "amqpInputChannel")
    public MessageHandler handler() {
        return new MessageHandler() {

            @Override
            public void handleMessage(Message<?> message) throws MessagingException {
                System.out.println(message.getPayload());
            }

        };
    }

}
==== Configuring with the Java DSL
The following Spring Boot application provides an example of configuring the inbound adapter using the Java DSL:
@SpringBootApplication
public class AmqpJavaApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(AmqpJavaApplication.class)
            .web(false)
            .run(args);
    }

    @Bean
    public IntegrationFlow amqpInbound(ConnectionFactory connectionFactory) {
        return IntegrationFlows.from(Amqp.inboundAdapter(connectionFactory, "foo"))
                .handle(m -> System.out.println(m.getPayload()))
                .get();
    }

}
=== Polled Inbound Channel Adapter
Starting with version 5.0.1, a polled channel adapter is provided, allowing individual messages to be fetched on demand, for example with a MessageSourcePollingTemplate or a poller. See Section 4.2.3, “Deferred Acknowledgment Pollable Message Source” for more information. It does not currently have XML configuration.
@Bean
public AmqpMessageSource source(ConnectionFactory connectionFactory) {
    return new AmqpMessageSource(connectionFactory, "someQueue");
}
Refer to the javadocs for configuration properties.
With the Java DSL:
@Bean
public IntegrationFlow flow() {
    return IntegrationFlows.from(Amqp.inboundPolledAdapter(connectionFactory(), DSL_QUEUE),
                    e -> e.poller(Pollers.fixedDelay(1_000)).autoStartup(false))
            .handle(p -> { ... })
            .get();
}
The inbound gateway supports all the attributes on the inbound channel adapter (except that channel is replaced by request-channel), plus some additional attributes:
<int-amqp:inbound-gateway
        id="inboundGateway" (1)
        request-channel="myRequestChannel" (2)
        header-mapper="" (3)
        mapped-request-headers="" (4)
        mapped-reply-headers="" (5)
        reply-channel="myReplyChannel" (6)
        reply-timeout="1000" (7)
        amqp-template="" (8)
        default-reply-to="" /> (9)
1. Unique ID for this adapter. Optional.
2. Message Channel to which converted Messages should be sent. Required.
3. A reference to an AmqpHeaderMapper to use when receiving AMQP Messages. Optional.
4. Comma-separated list of names of AMQP Headers to be mapped from the AMQP request into the MessageHeaders. This can only be provided if the header-mapper reference is not provided. The values in this list can also be simple patterns to be matched against the header names.
5. Comma-separated list of names of MessageHeaders to be mapped into the AMQP message properties of the AMQP reply message. This can only be provided if the header-mapper reference is not provided. The values in this list can also be simple patterns to be matched against the header names.
6. Message Channel where reply Messages will be expected. Optional.
7. Used to set the receiveTimeout on the underlying MessagingTemplate used to receive messages from the reply channel. Optional (Defaults to 1000).
8. The customized AmqpTemplate bean reference (to have more control over the reply messages to send). Optional.
9. The replyTo address to use when the request message does not carry a replyTo property. If this option is not specified and the request has no replyTo, the reply cannot be routed. Optional.
See the note earlier in this chapter about configuring the listener-container attribute.
==== Configuring with Java Configuration
The following Spring Boot application provides an example of configuring the inbound gateway using Java configuration:
@SpringBootApplication
public class AmqpJavaApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(AmqpJavaApplication.class)
            .web(false)
            .run(args);
    }

    @Bean
    public MessageChannel amqpInputChannel() {
        return new DirectChannel();
    }

    @Bean
    public AmqpInboundGateway inbound(SimpleMessageListenerContainer listenerContainer,
            @Qualifier("amqpInputChannel") MessageChannel channel) {
        AmqpInboundGateway gateway = new AmqpInboundGateway(listenerContainer);
        gateway.setRequestChannel(channel);
        gateway.setDefaultReplyTo("bar");
        return gateway;
    }

    @Bean
    public SimpleMessageListenerContainer container(ConnectionFactory connectionFactory) {
        SimpleMessageListenerContainer container =
                new SimpleMessageListenerContainer(connectionFactory);
        container.setQueueNames("foo");
        container.setConcurrentConsumers(2);
        // ...
        return container;
    }

    @Bean
    @ServiceActivator(inputChannel = "amqpInputChannel")
    public MessageHandler handler() {
        return new AbstractReplyProducingMessageHandler() {

            @Override
            protected Object handleRequestMessage(Message<?> requestMessage) {
                return "reply to " + requestMessage.getPayload();
            }

        };
    }

}
==== Configuring with the Java DSL
The following Spring Boot application provides an example of configuring the inbound gateway using the Java DSL:
@SpringBootApplication
public class AmqpJavaApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(AmqpJavaApplication.class)
            .web(false)
            .run(args);
    }

    @Bean // return the upper cased payload
    public IntegrationFlow amqpInboundGateway(ConnectionFactory connectionFactory) {
        return IntegrationFlows.from(Amqp.inboundGateway(connectionFactory, "foo"))
                .transform(String.class, String::toUpperCase)
                .get();
    }

}
=== Inbound Endpoint Acknowledge Mode
By default the inbound endpoints use acknowledge mode AUTO, which means the container automatically acks the message when the downstream integration flow completes (or a message is handed off to another thread using a QueueChannel or ExecutorChannel). Setting the mode to NONE configures the consumer such that acks are not used at all (the broker automatically acks the message as soon as it is sent). Setting the mode to MANUAL allows user code to ack the message at some other point during processing. To support this, with this mode, the endpoints provide the Channel and deliveryTag in the amqp_channel and amqp_deliveryTag headers respectively.
You can perform any valid Rabbit command on the Channel but, generally, only basicAck and basicNack (or basicReject) would be used. In order not to interfere with the operation of the container, you should not retain a reference to the channel; use it only in the context of the current message.
Note: since the Channel is a reference to a "live" object, it cannot be serialized and will be lost if a message is serialized.
This is an example of how you might use MANUAL acknowledgement:
@ServiceActivator(inputChannel = "foo", outputChannel = "bar")
public Object handle(@Payload String payload,
        @Header(AmqpHeaders.CHANNEL) Channel channel,
        @Header(AmqpHeaders.DELIVERY_TAG) Long deliveryTag) throws Exception {

    // Do some processing

    if (allOK) {
        channel.basicAck(deliveryTag, false);

        // perhaps do some more processing
    }
    else {
        channel.basicNack(deliveryTag, false, true);
    }

    return someResultForDownStreamProcessing;
}
A configuration sample for an AMQP Outbound Channel Adapter is shown below.
<int-amqp:outbound-channel-adapter
        id="outboundAmqp" (1)
        channel="outboundChannel" (2)
        amqp-template="myAmqpTemplate" (3)
        exchange-name="" (4)
        exchange-name-expression="" (5)
        order="1" (6)
        routing-key="" (7)
        routing-key-expression="" (8)
        default-delivery-mode="" (9)
        confirm-correlation-expression="" (10)
        confirm-ack-channel="" (11)
        confirm-nack-channel="" (12)
        return-channel="" (13)
        error-message-strategy="" (14)
        header-mapper="" (15)
        mapped-request-headers="" (16)
        lazy-connect="true" /> (17)
1. Unique ID for this adapter. Optional.
2. Message Channel to which Messages should be sent in order to have them converted and published to an AMQP Exchange. Required.
3. Bean reference to the configured AMQP Template. Optional (Defaults to "amqpTemplate").
4. The name of the AMQP Exchange to which Messages should be sent. If not provided, Messages will be sent to the default, no-name Exchange. Mutually exclusive with exchange-name-expression. Optional.
5. A SpEL expression that is evaluated to determine the name of the AMQP Exchange to which Messages should be sent, with the message as the root object. If not provided, Messages will be sent to the default, no-name Exchange. Mutually exclusive with exchange-name. Optional.
6. The order for this consumer when multiple consumers are registered, thereby enabling load-balancing and/or failover. Optional (Defaults to Ordered.LOWEST_PRECEDENCE [=Integer.MAX_VALUE]).
7. The fixed routing-key to use when sending Messages. By default, this will be an empty String. Mutually exclusive with routing-key-expression. Optional.
8. A SpEL expression that is evaluated to determine the routing-key to use when sending Messages, with the message as the root object (e.g. payload.key). By default, this will be an empty String. Mutually exclusive with routing-key. Optional.
9. The default delivery mode for messages; PERSISTENT or NON_PERSISTENT. Overridden if the header-mapper sets the delivery mode. Optional.
10. An expression defining correlation data. When provided, this configures the underlying amqp template to receive publisher confirms. Requires a dedicated RabbitTemplate and a CachingConnectionFactory with the publisherConfirms property set to true. Optional.
11. The channel to which positive (ack) publisher confirms are sent; payload is the correlation data defined by the confirm-correlation-expression. Optional (Defaults to nullChannel).
12. The channel to which negative (nack) publisher confirms are sent; payload is the correlation data defined by the confirm-correlation-expression (if there is no ErrorMessageStrategy configured). Optional (Defaults to nullChannel).
13. The channel to which returned messages are sent. When provided, the underlying amqp template is configured to return undeliverable messages to the adapter. When there is no ErrorMessageStrategy configured, the message is constructed from the data received from AMQP. Optional.
14. A reference to an ErrorMessageStrategy implementation used to build ErrorMessage instances when sending returned (or negatively confirmed) messages. Optional.
15. A reference to an AmqpHeaderMapper to use when sending AMQP Messages. Optional.
16. Comma-separated list of names of MessageHeaders to be mapped into the AMQP message properties of the AMQP message. This can only be provided if the header-mapper reference is not provided. The values in this list can also be simple patterns to be matched against the header names.
17. When set to false, the endpoint attempts to connect to the broker during application context initialization. This allows "fail fast" detection of bad configuration, but also causes initialization to fail if the broker is down. When true (the default), the connection is established (if it does not already exist) when the first message is sent. Optional.
Note (return-channel): using a return-channel requires a RabbitTemplate with the mandatory property set to true, and a CachingConnectionFactory with the publisherReturns property set to true. When using multiple outbound endpoints with returns, a separate RabbitTemplate is needed for each endpoint.
==== Configuring with Java Configuration
The following Spring Boot application provides an example of configuring the outbound adapter using Java configuration:
@SpringBootApplication
@IntegrationComponentScan
public class AmqpJavaApplication {

    public static void main(String[] args) {
        ConfigurableApplicationContext context =
                new SpringApplicationBuilder(AmqpJavaApplication.class)
                    .web(false)
                    .run(args);
        MyGateway gateway = context.getBean(MyGateway.class);
        gateway.sendToRabbit("foo");
    }

    @Bean
    @ServiceActivator(inputChannel = "amqpOutboundChannel")
    public AmqpOutboundEndpoint amqpOutbound(AmqpTemplate amqpTemplate) {
        AmqpOutboundEndpoint outbound = new AmqpOutboundEndpoint(amqpTemplate);
        outbound.setRoutingKey("foo"); // default exchange - route to queue 'foo'
        return outbound;
    }

    @Bean
    public MessageChannel amqpOutboundChannel() {
        return new DirectChannel();
    }

    @MessagingGateway(defaultRequestChannel = "amqpOutboundChannel")
    public interface MyGateway {

        void sendToRabbit(String data);

    }

}
==== Configuring with the Java DSL
The following Spring Boot application provides an example of configuring the outbound adapter using the Java DSL:
@SpringBootApplication
@IntegrationComponentScan
public class AmqpJavaApplication {

    public static void main(String[] args) {
        ConfigurableApplicationContext context =
                new SpringApplicationBuilder(AmqpJavaApplication.class)
                    .web(false)
                    .run(args);
        MyGateway gateway = context.getBean(MyGateway.class);
        gateway.sendToRabbit("foo");
    }

    @Bean
    public IntegrationFlow amqpOutbound(AmqpTemplate amqpTemplate) {
        return IntegrationFlows.from(amqpOutboundChannel())
                .handle(Amqp.outboundAdapter(amqpTemplate)
                        .routingKey("foo")) // default exchange - route to queue 'foo'
                .get();
    }

    @Bean
    public MessageChannel amqpOutboundChannel() {
        return new DirectChannel();
    }

    @MessagingGateway(defaultRequestChannel = "amqpOutboundChannel")
    public interface MyGateway {

        void sendToRabbit(String data);

    }

}
Configuration for an AMQP Outbound Gateway is shown below.
<int-amqp:outbound-gateway
        id="inboundGateway" (1)
        request-channel="myRequestChannel" (2)
        amqp-template="" (3)
        exchange-name="" (4)
        exchange-name-expression="" (5)
        order="1" (6)
        reply-channel="" (7)
        reply-timeout="" (8)
        requires-reply="" (9)
        routing-key="" (10)
        routing-key-expression="" (11)
        default-delivery-mode="" (12)
        confirm-correlation-expression="" (13)
        confirm-ack-channel="" (14)
        confirm-nack-channel="" (15)
        return-channel="" (16)
        error-message-strategy="" (17)
        lazy-connect="true" /> (18)
1. Unique ID for this adapter. Optional.
2. Message Channel to which Messages should be sent in order to have them converted and published to an AMQP Exchange. Required.
3. Bean reference to the configured AMQP Template. Optional (Defaults to "amqpTemplate").
4. The name of the AMQP Exchange to which Messages should be sent. If not provided, Messages will be sent to the default, no-name Exchange. Mutually exclusive with exchange-name-expression. Optional.
5. A SpEL expression that is evaluated to determine the name of the AMQP Exchange to which Messages should be sent, with the message as the root object. If not provided, Messages will be sent to the default, no-name Exchange. Mutually exclusive with exchange-name. Optional.
6. The order for this consumer when multiple consumers are registered, thereby enabling load-balancing and/or failover. Optional (Defaults to Ordered.LOWEST_PRECEDENCE [=Integer.MAX_VALUE]).
7. Message Channel to which replies should be sent after being received from an AMQP Queue and converted. Optional.
8. The time the gateway will wait when sending the reply message to the reply-channel. This only applies if the reply-channel can block, such as a QueueChannel with a capacity limit that is currently full. Optional.
9. When true, the gateway throws an exception if no reply message is received within the template's reply timeout. Optional (Defaults to true).
10. The routing-key to use when sending Messages. By default, this will be an empty String. Mutually exclusive with routing-key-expression. Optional.
11. A SpEL expression that is evaluated to determine the routing-key to use when sending Messages, with the message as the root object (e.g. payload.key). By default, this will be an empty String. Mutually exclusive with routing-key. Optional.
12. The default delivery mode for messages; PERSISTENT or NON_PERSISTENT. Overridden if the header-mapper sets the delivery mode. Optional.
13. Since version 4.2. An expression defining correlation data. When provided, this configures the underlying amqp template to receive publisher confirms. Requires a dedicated RabbitTemplate and a CachingConnectionFactory with the publisherConfirms property set to true. Optional.
14. The channel to which positive (ack) publisher confirms are sent; payload is the correlation data defined by the confirm-correlation-expression. Optional (Defaults to nullChannel).
15. The channel to which negative (nack) publisher confirms are sent; payload is the correlation data defined by the confirm-correlation-expression (if there is no ErrorMessageStrategy configured). Optional (Defaults to nullChannel).
16. The channel to which returned messages are sent. When provided, the underlying amqp template is configured to return undeliverable messages to the adapter. When there is no ErrorMessageStrategy configured, the message is constructed from the data received from AMQP. Optional.
17. A reference to an ErrorMessageStrategy implementation used to build ErrorMessage instances when sending returned (or negatively confirmed) messages. Optional.
18. When set to false, the endpoint attempts to connect to the broker during application context initialization. This allows "fail fast" detection of bad configuration, but also causes initialization to fail if the broker is down. When true (the default), the connection is established (if it does not already exist) when the first message is sent. Optional.
Note (return-channel): using a return-channel requires a RabbitTemplate with the mandatory property set to true, and a CachingConnectionFactory with the publisherReturns property set to true. When using multiple outbound endpoints with returns, a separate RabbitTemplate is needed for each endpoint.
Important: the underlying AmqpTemplate has a default replyTimeout of 5 seconds. If you require a longer timeout, it must be configured on the template.
==== Configuring with Java Configuration
The following Spring Boot application provides an example of configuring the outbound gateway using Java configuration:
@SpringBootApplication
@IntegrationComponentScan
public class AmqpJavaApplication {

    public static void main(String[] args) {
        ConfigurableApplicationContext context =
                new SpringApplicationBuilder(AmqpJavaApplication.class)
                    .web(false)
                    .run(args);
        MyGateway gateway = context.getBean(MyGateway.class);
        String reply = gateway.sendToRabbit("foo");
        System.out.println(reply);
    }

    @Bean
    @ServiceActivator(inputChannel = "amqpOutboundChannel")
    public AmqpOutboundEndpoint amqpOutbound(AmqpTemplate amqpTemplate) {
        AmqpOutboundEndpoint outbound = new AmqpOutboundEndpoint(amqpTemplate);
        outbound.setExpectReply(true);
        outbound.setRoutingKey("foo"); // default exchange - route to queue 'foo'
        return outbound;
    }

    @Bean
    public MessageChannel amqpOutboundChannel() {
        return new DirectChannel();
    }

    @MessagingGateway(defaultRequestChannel = "amqpOutboundChannel")
    public interface MyGateway {

        String sendToRabbit(String data);

    }

}
Notice that the only difference between the outbound adapter and outbound gateway configuration is the setting of the expectReply property.
==== Configuring with the Java DSL
The following Spring Boot application provides an example of configuring the outbound adapter using the Java DSL:
@SpringBootApplication
@IntegrationComponentScan
public class AmqpJavaApplication {

    public static void main(String[] args) {
        ConfigurableApplicationContext context =
                new SpringApplicationBuilder(AmqpJavaApplication.class)
                    .web(false)
                    .run(args);
        RabbitTemplate template = context.getBean(RabbitTemplate.class);
        MyGateway gateway = context.getBean(MyGateway.class);
        String reply = gateway.sendToRabbit("foo");
        System.out.println(reply);
    }

    @Bean
    public IntegrationFlow amqpOutbound(AmqpTemplate amqpTemplate) {
        return IntegrationFlows.from(amqpOutboundChannel())
                .handle(Amqp.outboundGateway(amqpTemplate)
                        .routingKey("foo")) // default exchange - route to queue 'foo'
                .get();
    }

    @Bean
    public MessageChannel amqpOutboundChannel() {
        return new DirectChannel();
    }

    @MessagingGateway(defaultRequestChannel = "amqpOutboundChannel")
    public interface MyGateway {

        String sendToRabbit(String data);

    }

}
The gateway discussed in the previous section is synchronous, in that the sending thread is suspended until a reply is received (or a timeout occurs). Spring Integration version 4.3 added this asynchronous gateway, which uses the AsyncRabbitTemplate from Spring AMQP. When a message is sent, the thread returns immediately after the send completes; the reply is sent on the template's listener container thread when it is received. This can be useful when the gateway is invoked on a poller thread; the thread is released and becomes available for other tasks in the framework.
Configuration for an AMQP Async Outbound Gateway is shown below.
<int-amqp:outbound-async-gateway id="asyncOutboundGateway"    (1)
        request-channel="myRequestChannel"                    (2)
        async-template=""                                     (3)
        exchange-name=""                                      (4)
        exchange-name-expression=""                           (5)
        order="1"                                             (6)
        reply-channel=""                                      (7)
        reply-timeout=""                                      (8)
        requires-reply=""                                     (9)
        routing-key=""                                        (10)
        routing-key-expression=""                             (11)
        default-delivery-mode=""                              (12)
        confirm-correlation-expression=""                     (13)
        confirm-ack-channel=""                                (14)
        confirm-nack-channel=""                               (15)
        return-channel=""                                     (16)
        lazy-connect="true" />                                (17)
(1) Unique ID for this adapter. Optional.

(2) Message Channel to which Messages should be sent in order to have them converted and published to an AMQP Exchange. Required.

(3) Bean reference to the configured AsyncRabbitTemplate. Optional (it defaults to asyncRabbitTemplate).

(4) The name of the AMQP Exchange to which Messages should be sent. If not provided, Messages will be sent to the default, no-name Exchange. Mutually exclusive with exchange-name-expression. Optional.

(5) A SpEL expression that is evaluated to determine the name of the AMQP Exchange to which Messages should be sent, with the message as the root object. If not provided, Messages will be sent to the default, no-name Exchange. Mutually exclusive with exchange-name. Optional.

(6) The order for this consumer when multiple consumers are registered, thereby enabling load-balancing and/or failover. Optional (defaults to Ordered.LOWEST_PRECEDENCE [=Integer.MAX_VALUE]).

(7) Message Channel to which replies should be sent after being received from an AMQP Queue and converted. Optional.

(8) The time the gateway will wait when sending the reply message to the reply-channel. This only applies if the reply-channel can block, such as a QueueChannel with a capacity limit that is currently full. Optional.

(9) When true, the gateway throws an exception if no reply message is received within the reply-timeout. Optional.

(10) The routing-key to use when sending Messages. By default, this will be an empty String. Mutually exclusive with routing-key-expression. Optional.

(11) A SpEL expression that is evaluated to determine the routing-key to use when sending Messages, with the message as the root object (e.g. payload.key). By default, this will be an empty String. Mutually exclusive with routing-key. Optional.

(12) The default delivery mode for messages: PERSISTENT or NON_PERSISTENT. Overridden if the header mapper sets the delivery mode. Optional.

(13) An expression defining correlation data. When provided, this configures the underlying AMQP template to receive publisher confirms. Requires a dedicated RabbitTemplate and a CachingConnectionFactory with its publisherConfirms property set to true. Optional.

(14) The channel to which positive (ack) publisher confirms are sent; the payload is the correlation data defined by the confirm-correlation-expression. Requires confirm-correlation-expression. Optional.

(15) Since version 4.2. The channel to which negative (nack) publisher confirms are sent; the payload is the correlation data defined by the confirm-correlation-expression. Requires confirm-correlation-expression. Optional.

(16) The channel to which returned messages are sent. When provided, the underlying AMQP template is configured to return undeliverable messages to the gateway. The message will be constructed from the data received from AMQP, with the following additional headers: amqp_returnReplyCode, amqp_returnReplyText, amqp_returnExchange, amqp_returnRoutingKey. Requires the underlying RabbitTemplate to have its mandatory property set to true. Optional.

(17) When set to false, the endpoint attempts to connect to the broker during application context initialization; when true (the default), the connection is established lazily, the first time it is needed.
Also see the section called “CompletableFuture” for more information.
Note: When using confirms and returns, it is recommended that the RabbitTemplate wired into the AsyncRabbitTemplate be dedicated. Otherwise, unexpected side effects may be encountered.
==== Configuring with Java Configuration
The following configuration provides an example of configuring the outbound gateway using Java configuration:
@Configuration
public class AmqpAsyncConfig {

    @Bean
    @ServiceActivator(inputChannel = "amqpOutboundChannel")
    public AsyncAmqpOutboundGateway amqpOutbound(AmqpTemplate asyncTemplate) {
        AsyncAmqpOutboundGateway outbound = new AsyncAmqpOutboundGateway(asyncTemplate);
        outbound.setRoutingKey("foo"); // default exchange - route to queue 'foo'
        return outbound;
    }

    @Bean
    public AsyncRabbitTemplate asyncTemplate(RabbitTemplate rabbitTemplate,
            SimpleMessageListenerContainer replyContainer) {
        return new AsyncRabbitTemplate(rabbitTemplate, replyContainer);
    }

    @Bean
    public SimpleMessageListenerContainer replyContainer(ConnectionFactory connectionFactory) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
        container.setQueueNames("asyncRQ1");
        return container;
    }

    @Bean
    public MessageChannel amqpOutboundChannel() {
        return new DirectChannel();
    }

}
==== Configuring with the Java DSL
The following Spring Boot application provides an example of configuring the outbound adapter using the Java DSL:
@SpringBootApplication
public class AmqpAsyncApplication {

    public static void main(String[] args) {
        ConfigurableApplicationContext context =
                new SpringApplicationBuilder(AmqpAsyncApplication.class)
                        .web(false)
                        .run(args);
        MyGateway gateway = context.getBean(MyGateway.class);
        String reply = gateway.sendToRabbit("foo");
        System.out.println(reply);
    }

    @Bean
    public IntegrationFlow asyncAmqpOutbound(AsyncRabbitTemplate asyncRabbitTemplate) {
        return f -> f
                .handle(Amqp.asyncOutboundGateway(asyncRabbitTemplate)
                        .routingKey("foo")); // default exchange - route to queue 'foo'
    }

    @MessagingGateway(defaultRequestChannel = "asyncAmqpOutbound.input")
    public interface MyGateway {

        String sendToRabbit(String data);

    }

}
=== Outbound Message Conversion
Spring AMQP 1.4 introduced the ContentTypeDelegatingMessageConverter, where the actual converter is selected based on the incoming contentType message property. This could be used by inbound endpoints. Spring Integration version 4.3 now allows the ContentTypeDelegatingMessageConverter to be used on outbound endpoints as well, with the contentType header specifying which converter will be used.

The following configures a ContentTypeDelegatingMessageConverter, with the default converter being the SimpleMessageConverter (which handles Java serialization and plain text), together with a JSON converter:
<amqp:outbound-channel-adapter id="withContentTypeConverter"
                               channel="ctRequestChannel"
                               exchange-name="someExchange"
                               routing-key="someKey"
                               amqp-template="amqpTemplateContentTypeConverter" />

<int:channel id="ctRequestChannel"/>

<rabbit:template id="amqpTemplateContentTypeConverter"
                 connection-factory="connectionFactory"
                 message-converter="ctConverter" />

<bean id="ctConverter"
      class="o.s.amqp.support.converter.ContentTypeDelegatingMessageConverter">
    <property name="delegates">
        <map>
            <entry key="application/json">
                <bean class="o.s.amqp.support.converter.Jackson2JsonMessageConverter" />
            </entry>
        </map>
    </property>
</bean>
Sending a message to ctRequestChannel with the contentType header set to application/json will cause the JSON converter to be selected. This applies to both the outbound channel adapter and gateway.
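The selection logic can be pictured with a plain-Java sketch. The class below is purely illustrative (it is not the actual ContentTypeDelegatingMessageConverter internals): a delegate converter is looked up by content type, falling back to a default converter when no delegate is registered.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class ContentTypeDelegatingSketch {

    private final Map<String, Function<Object, String>> delegates = new HashMap<>();
    private final Function<Object, String> defaultConverter;

    public ContentTypeDelegatingSketch(Function<Object, String> defaultConverter) {
        this.defaultConverter = defaultConverter;
    }

    public void addDelegate(String contentType, Function<Object, String> converter) {
        this.delegates.put(contentType, converter);
    }

    // Pick the delegate registered for the content type, or fall back to the default.
    public String convert(Object payload, String contentType) {
        return this.delegates.getOrDefault(contentType, this.defaultConverter).apply(payload);
    }

    public static void main(String[] args) {
        ContentTypeDelegatingSketch converter =
                new ContentTypeDelegatingSketch(p -> "text:" + p); // stands in for SimpleMessageConverter
        converter.addDelegate("application/json", p -> "{\"v\":\"" + p + "\"}"); // stands in for the JSON delegate
        System.out.println(converter.convert("foo", "application/json"));
        System.out.println(converter.convert("foo", "text/plain"));
    }

}
```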
Starting with version 5.0, headers that are added to the MessageProperties of the outbound message are never overwritten by mapped headers (by default). Previously, this was only the case if the message converter was a ContentTypeDelegatingMessageConverter (in that case, the header was mapped first, so that the proper converter could be selected). For other converters, such as the SimpleMessageConverter, mapped headers overwrote any headers added by the converter. This caused problems when an outbound message had a left-over contentType header (perhaps from an inbound channel adapter) and the correct outbound contentType was incorrectly overwritten. The work-around was to use a header filter to remove the header before sending the message to the outbound endpoint.

There are, however, cases where the previous behavior is desired. For example, with a String payload containing JSON, the SimpleMessageConverter is not aware of the content and sets the contentType message property to text/plain, but your application would like to override that to application/json by setting the contentType header of the message sent to the outbound endpoint. The ObjectToJsonTransformer does exactly that (by default). There is now a headersMappedLast property on the outbound channel adapter and gateway (as well as on AMQP-backed channels). Setting this to true will restore the behavior of overwriting the property added by the converter.
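The precedence rules can be sketched in plain Java (illustrative only; the real logic lives in the endpoints and header mapper): the order in which the two property sources are merged decides which one wins on a conflict.

```java
import java.util.HashMap;
import java.util.Map;

public class HeaderMappingOrderSketch {

    // Simulate building outbound MessageProperties: the message converter
    // contributes some properties, then the header mapper contributes the
    // mapped message headers. Whoever is applied last wins on a conflict.
    public static Map<String, Object> buildProperties(Map<String, Object> fromConverter,
                                                      Map<String, Object> mappedHeaders,
                                                      boolean headersMappedLast) {
        Map<String, Object> props = new HashMap<>();
        if (headersMappedLast) {
            props.putAll(fromConverter);
            props.putAll(mappedHeaders);   // mapped headers win (pre-5.0 behavior)
        }
        else {
            props.putAll(mappedHeaders);
            props.putAll(fromConverter);   // converter-set properties win (5.0 default)
        }
        return props;
    }

}
```

Note that in both orderings a mapped header that does not conflict with a converter-set property still ends up in the result.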
Spring AMQP version 1.6 introduced a mechanism to allow the specification of a default user id for outbound messages. It has always been possible to set the AmqpHeaders.USER_ID header, which now takes precedence over the default. This might be useful to message recipients; for inbound messages, if the message publisher sets the property, it is made available in the AmqpHeaders.RECEIVED_USER_ID header. Note that RabbitMQ validates that the user id is the actual user id for the connection, or one for which impersonation is allowed.

To configure a default user id for outbound messages, configure it on a RabbitTemplate and configure the outbound adapter or gateway to use that template. Similarly, to set the user id property on replies, inject an appropriately configured template into the inbound gateway. See the Spring AMQP documentation for more information.
Spring AMQP supports the RabbitMQ Delayed Message Exchange Plugin. For inbound messages, the x-delay header is mapped to the AmqpHeaders.RECEIVED_DELAY header. Setting the AmqpHeaders.DELAY header will cause the corresponding x-delay header to be set in outbound messages. You can also specify the delay and delayExpression properties on outbound endpoints (delay-expression when using XML configuration); these take precedence over the AmqpHeaders.DELAY header.
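The precedence just described can be reduced to a one-liner; the helper below is hypothetical, not framework code.

```java
public class DelayPrecedenceSketch {

    // The endpoint-level delay/delayExpression (if configured) wins over the
    // AmqpHeaders.DELAY message header; null means "not set".
    public static Integer effectiveDelay(Integer endpointDelay, Integer headerDelay) {
        return (endpointDelay != null) ? endpointDelay : headerDelay;
    }

}
```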
=== AMQP Backed Message Channels
There are two Message Channel implementations available. One is point-to-point, and the other is publish/subscribe. Both of these channels provide a wide range of configuration attributes for the underlying AmqpTemplate and SimpleMessageListenerContainer, as you have seen with the Channel Adapters and Gateways. However, the examples shown here have minimal configuration. Explore the XML schema to view the available attributes.
A point-to-point channel would look like this:
<int-amqp:channel id="p2pChannel"/>
Under the covers, a Queue named "si.p2pChannel" would be declared, and this channel will send to that Queue (technically by sending to the no-name Direct Exchange with a routing key that matches this Queue's name). This channel will also register a consumer on that Queue. If you want the channel to be "pollable" instead of message-driven, simply provide the "message-driven" flag with a value of false:

<int-amqp:channel id="p2pPollableChannel" message-driven="false"/>
A publish/subscribe channel would look like this:
<int-amqp:publish-subscribe-channel id="pubSubChannel"/>
Under the covers a Fanout Exchange named "si.fanout.pubSubChannel" would be declared, and this channel will send to that Fanout Exchange. This channel will also declare a server-named exclusive, auto-delete, non-durable Queue and bind that to the Fanout Exchange while registering a consumer on that Queue to receive Messages. There is no "pollable" option for a publish-subscribe-channel; it must be message-driven.
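The naming conventions described above ("si." plus the channel id for point-to-point queues, "si.fanout." plus the channel id for fanout exchanges) can be captured in a small sketch; these helper methods are illustrative, not framework API.

```java
public class AmqpChannelNamesSketch {

    // Point-to-point channels declare a queue named "si." + channelId.
    public static String pointToPointQueue(String channelId) {
        return "si." + channelId;
    }

    // Publish/subscribe channels declare a fanout exchange named "si.fanout." + channelId.
    public static String fanoutExchange(String channelId) {
        return "si.fanout." + channelId;
    }

}
```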
Starting with version 4.1, AMQP-backed Message Channels, alongside channel-transacted, support template-channel-transacted to separate the transactional configuration for the AbstractMessageListenerContainer from that for the RabbitTemplate. Note that, previously, channel-transacted was true by default; it is now false, matching the standard default value for the AbstractMessageListenerContainer.
Prior to version 4.3, AMQP-backed channels only supported messages with Serializable payloads and headers. The entire message was converted (serialized) and sent to RabbitMQ. Now, you can set the extract-payload attribute (or setExtractPayload() when using Java configuration) to true. When this flag is true, the message payload is converted and the headers mapped, in a similar manner to when using channel adapters. This allows AMQP-backed channels to be used with non-serializable payloads (perhaps with another message converter, such as the Jackson2JsonMessageConverter). The default mapped headers are discussed in the section called “CompletableFuture”. You can modify the mapping by providing custom mappers using the outbound-header-mapper and inbound-header-mapper attributes.

You can now also specify a default-delivery-mode, used to set the delivery mode when there is no amqp_deliveryMode header. By default, Spring AMQP MessageProperties uses PERSISTENT delivery mode.
Important: Just as with other persistence-backed channels, AMQP-backed channels are intended to provide message persistence to avoid message loss. They are not intended to distribute work to other peer applications; for that purpose, use channel adapters instead.
Important: Starting with version 5.0, the pollable channel now blocks the poller thread for the specified receiveTimeout (the default is 1 second).
==== Configuring with Java Configuration
The following provides an example of configuring the channels using Java configuration:
@Bean
public AmqpChannelFactoryBean pollable(ConnectionFactory connectionFactory) {
    AmqpChannelFactoryBean factoryBean = new AmqpChannelFactoryBean();
    factoryBean.setConnectionFactory(connectionFactory);
    factoryBean.setQueueName("foo");
    factoryBean.setPubSub(false);
    return factoryBean;
}

@Bean
public AmqpChannelFactoryBean messageDriven(ConnectionFactory connectionFactory) {
    AmqpChannelFactoryBean factoryBean = new AmqpChannelFactoryBean(true);
    factoryBean.setConnectionFactory(connectionFactory);
    factoryBean.setQueueName("bar");
    factoryBean.setPubSub(false);
    return factoryBean;
}

@Bean
public AmqpChannelFactoryBean pubSub(ConnectionFactory connectionFactory) {
    AmqpChannelFactoryBean factoryBean = new AmqpChannelFactoryBean(true);
    factoryBean.setConnectionFactory(connectionFactory);
    factoryBean.setQueueName("baz");
    factoryBean.setPubSub(true);
    return factoryBean;
}
==== Configuring with the Java DSL
The following provides an example of configuring the channels using the Java DSL:
@Bean
public IntegrationFlow pollableInFlow(ConnectionFactory connectionFactory) {
    return IntegrationFlows.from(...)
            ...
            .channel(Amqp.pollableChannel(connectionFactory)
                    .queueName("foo"))
            ...
            .get();
}

@Bean
public IntegrationFlow messageDrivenInFlow(ConnectionFactory connectionFactory) {
    return IntegrationFlows.from(...)
            ...
            .channel(Amqp.channel(connectionFactory)
                    .queueName("bar"))
            ...
            .get();
}

@Bean
public IntegrationFlow pubSubInFlow(ConnectionFactory connectionFactory) {
    return IntegrationFlows.from(...)
            ...
            .channel(Amqp.publishSubscribeChannel(connectionFactory)
                    .queueName("baz"))
            ...
            .get();
}
The Spring Integration AMQP Adapters map all AMQP properties and headers automatically. (This is a change in 4.3; previously, only standard headers were mapped.) These properties are copied by default to and from Spring Integration MessageHeaders using the DefaultAmqpHeaderMapper. Of course, you can pass in your own implementation of AMQP-specific header mappers, as the adapters have properties to support that.

Any user-defined headers within the AMQP MessageProperties WILL be copied to or from an AMQP Message, unless explicitly negated by the requestHeaderNames and/or replyHeaderNames properties of the DefaultAmqpHeaderMapper. For an outbound mapper, no x-* headers are mapped by default; see the caution below for the reason why. To override the default and revert to the pre-4.3 behavior, use STANDARD_REQUEST_HEADERS and STANDARD_REPLY_HEADERS in the properties.
Tip: When mapping user-defined headers, the values can also contain simple wildcard patterns (e.g. "foo*" or "*foo") to be matched.
Starting with version 4.1, the AbstractHeaderMapper (a DefaultAmqpHeaderMapper superclass) allows the NON_STANDARD_HEADERS token to be configured for the requestHeaderNames and/or replyHeaderNames properties (in addition to the existing STANDARD_REQUEST_HEADERS and STANDARD_REPLY_HEADERS) to map all user-defined headers. The class org.springframework.amqp.support.AmqpHeaders identifies the default headers that will be used by the DefaultAmqpHeaderMapper.
Caution: As mentioned above, using a broad header mapping pattern (such as "*") can have unexpected side effects, which is why no x-* headers are mapped by default for an outbound mapper.
Starting with version 4.3, patterns in the header mappings can be negated by preceding the pattern with !. Negated patterns get priority, so a list such as STANDARD_REQUEST_HEADERS,foo,ba*,!bar,!baz,qux,!foo will NOT map foo (nor bar nor baz); the standard headers plus bad and qux will be mapped.
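The matching rules can be sketched in plain Java. This is a simplified, hypothetical matcher, not the actual AbstractHeaderMapper implementation; it supports only exact names and simple leading/trailing wildcards.

```java
import java.util.List;

public class HeaderPatternSketch {

    // Simplified version of the documented rules: a header is mapped when it
    // matches at least one positive pattern and no negated ("!"-prefixed)
    // pattern. Negated patterns take priority regardless of list order.
    public static boolean shouldMap(String header, List<String> patterns) {
        boolean matched = false;
        for (String pattern : patterns) {
            if (pattern.startsWith("!")) {
                if (simpleMatch(pattern.substring(1), header)) {
                    return false; // negated patterns win
                }
            }
            else if (simpleMatch(pattern, header)) {
                matched = true;
            }
        }
        return matched;
    }

    // Minimal wildcard support: "*", "foo*", "*foo", or an exact match.
    private static boolean simpleMatch(String pattern, String value) {
        if (pattern.equals("*")) {
            return true;
        }
        if (pattern.endsWith("*")) {
            return value.startsWith(pattern.substring(0, pattern.length() - 1));
        }
        if (pattern.startsWith("*")) {
            return value.endsWith(pattern.substring(1));
        }
        return pattern.equals(value);
    }

    public static void main(String[] args) {
        List<String> patterns = List.of("foo", "ba*", "!bar", "!baz", "qux", "!foo");
        System.out.println(shouldMap("bad", patterns)); // matches ba*, not negated
        System.out.println(shouldMap("foo", patterns)); // negated by !foo
    }

}
```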
Important: If you have a user-defined header that begins with x- and you wish it to be mapped, you must reference it explicitly in the mapping patterns, because x-* headers are not mapped by default.
=== AMQP Samples
To experiment with the AMQP adapters, check out the samples available in the Spring Integration Samples Git repository at:
Currently there is one sample available that demonstrates the basic functionality of the Spring Integration AMQP Adapter using an Outbound Channel Adapter and an Inbound Channel Adapter. The sample uses RabbitMQ (http://www.rabbitmq.com/) as the AMQP broker.
Note: In order to run the example you will need a running instance of RabbitMQ. A local installation with just the basic defaults will be sufficient. For detailed RabbitMQ installation procedures, please visit http://www.rabbitmq.com/install.html
Once the sample application is started, you enter some text on the command prompt, and a message containing that text is dispatched to the AMQP queue. In return, that message is retrieved via Spring Integration and printed to the console. The image below illustrates the basic set of Spring Integration components used in this sample.
== Spring ApplicationEvent Support
Spring Integration provides support for inbound and outbound ApplicationEvents, as defined by the underlying Spring Framework. For more information about Spring's support for events and listeners, refer to the Spring Reference Manual.
=== Receiving Spring Application Events
To receive events and send them to a channel, simply define an instance of Spring Integration's ApplicationEventListeningMessageProducer. This class is an implementation of Spring's ApplicationListener interface. By default, it will pass all received events as Spring Integration Messages. To limit based on the type of event, configure the list of event types that you want to receive with the eventTypes property. If a received event has a Message instance as its source, then that Message will be passed as-is. Otherwise, if a SpEL-based payloadExpression has been provided, that will be evaluated against the ApplicationEvent instance. If the event's source is not a Message instance and no payloadExpression has been provided, then the ApplicationEvent itself will be passed as the payload.
Starting with version 4.2, the ApplicationEventListeningMessageProducer implements GenericApplicationListener and can be configured to accept not only ApplicationEvent types but any type, for treating payload events, which are supported since Spring Framework 4.2. When the accepted event is an instance of PayloadApplicationEvent, its payload is used for the message to send.

For convenience, namespace support is provided to configure an ApplicationEventListeningMessageProducer via the inbound-channel-adapter element.
<int-event:inbound-channel-adapter channel="eventChannel"
                                   error-channel="eventErrorChannel"
                                   event-types="example.FooEvent, example.BarEvent, java.util.Date"/>

<int:publish-subscribe-channel id="eventChannel"/>
In the above example, all Application Context events that match one of the types specified by the event-types (optional) attribute will be delivered as Spring Integration Messages to the Message Channel named eventChannel. If a downstream component throws an exception, a MessagingException containing the failed message and exception will be sent to the channel named eventErrorChannel. If no "error-channel" is specified and the downstream channels are synchronous, the Exception will be propagated to the caller.
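The effect of the event-types attribute boils down to an instance-of check, sketched here with a hypothetical helper: an event is accepted when it is an instance of one of the configured types, or when no types are configured at all.

```java
import java.util.List;

public class EventTypeFilterSketch {

    // Mirrors the event-types attribute: accept an event only when it is an
    // instance of one of the configured types; accept everything when none
    // are configured.
    public static boolean accepts(Object event, List<Class<?>> eventTypes) {
        if (eventTypes == null || eventTypes.isEmpty()) {
            return true;
        }
        return eventTypes.stream().anyMatch(type -> type.isInstance(event));
    }

}
```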
=== Sending Spring Application Events
To send Spring ApplicationEvents, create an instance of the ApplicationEventPublishingMessageHandler and register it within an endpoint. This implementation of the MessageHandler interface also implements Spring's ApplicationEventPublisherAware interface and thus acts as a bridge between Spring Integration Messages and ApplicationEvents.

For convenience, namespace support is provided to configure an ApplicationEventPublishingMessageHandler via the outbound-channel-adapter element.
<int:channel id="eventChannel"/>

<int-event:outbound-channel-adapter channel="eventChannel"/>
If you are using a PollableChannel (e.g., a Queue), you can also provide a poller as a sub-element of the outbound-channel-adapter element. You can also optionally provide a task-executor reference for that poller. The following example demonstrates both.
<int:channel id="eventChannel">
    <int:queue/>
</int:channel>

<int-event:outbound-channel-adapter channel="eventChannel">
    <int:poller max-messages-per-poll="1" task-executor="executor" fixed-rate="100"/>
</int-event:outbound-channel-adapter>

<task:executor id="executor" pool-size="5"/>
In the above example, all messages sent to the eventChannel channel will be published as ApplicationEvents to any relevant ApplicationListener instances that are registered within the same Spring ApplicationContext.
If the payload of the Message is an ApplicationEvent, it will be passed as-is. Otherwise, the Message itself will be wrapped in a MessagingEvent instance. Starting with version 4.2, the ApplicationEventPublishingMessageHandler (<int-event:outbound-channel-adapter>) can be configured with the publish-payload boolean attribute, to publish the payload to the application context as-is instead of wrapping it in a MessagingEvent instance.
Spring Integration provides support for Syndication via Feed Adapters. The implementation is based on the ROME Framework.
Web syndication is a form of publishing material such as news stories, press releases, blog posts, and other items typically available on a website but also made available in a feed format such as RSS or ATOM.
Spring Integration provides support for Web Syndication via its feed adapter and provides convenient namespace-based configuration for it. To configure the feed namespace, include the following declarations in the root element of your XML configuration file:

xmlns:int-feed="http://www.springframework.org/schema/integration/feed"
xsi:schemaLocation="http://www.springframework.org/schema/integration/feed
    http://www.springframework.org/schema/integration/feed/spring-integration-feed.xsd"
=== Feed Inbound Channel Adapter
The only adapter that is really needed to provide support for retrieving feeds is an inbound channel adapter. This allows you to subscribe to a particular URL. Below is an example configuration:
<int-feed:inbound-channel-adapter id="feedAdapter"
        channel="feedChannel"
        url="http://feeds.bbci.co.uk/news/rss.xml">
    <int:poller fixed-rate="10000" max-messages-per-poll="100" />
</int-feed:inbound-channel-adapter>
In the above configuration, we are subscribing to the URL identified by the url attribute. As news items are retrieved, they will be converted to Messages and sent to the channel identified by the channel attribute. The payload of each message will be a com.sun.syndication.feed.synd.SyndEntry instance, which encapsulates various data about a news item (content, dates, authors, etc.).

You can also see that the Feed Inbound Channel Adapter is a Polling Consumer. That means you have to provide a poller configuration. However, one important thing you must understand with regard to feeds is that their inner workings are slightly different than most other polling consumers.
When an Inbound Feed adapter is started, it does the first poll and receives a com.sun.syndication.feed.synd.SyndFeed instance, which contains multiple SyndEntry objects. Each entry is stored in a local entry queue and released based on the value of the max-messages-per-poll attribute, such that each Message contains a single entry. If, during retrieval of the entries from the entry queue, the queue becomes empty, the adapter attempts to update the feed, thereby populating the queue with more entries (SyndEntry instances), if available. Otherwise, the next attempt to poll the feed is determined by the trigger of the poller (e.g., every 10 seconds in the above configuration).
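The inner workings described above can be modeled with a plain-Java sketch. The class below is hypothetical (the real adapter works with SyndFeed/SyndEntry objects rather than strings): a refresh fills a local queue, and each receive releases a single entry, refreshing first when the queue is empty.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;
import java.util.function.Supplier;

public class FeedPollSketch {

    private final Deque<String> entryQueue = new ArrayDeque<>();

    private final Supplier<List<String>> feedFetcher; // stands in for fetching/parsing the feed

    public FeedPollSketch(Supplier<List<String>> feedFetcher) {
        this.feedFetcher = feedFetcher;
    }

    // One receive: if the local queue is empty, refresh the feed first,
    // then release a single entry (each Message carries one entry).
    public String receive() {
        if (this.entryQueue.isEmpty()) {
            this.entryQueue.addAll(this.feedFetcher.get());
        }
        return this.entryQueue.poll(); // null when the feed had nothing new
    }

}
```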
Duplicate Entries

Polling for a feed might result in entries that have already been processed ("I already read that news item, why are you showing it to me again?"). Spring Integration provides a convenient mechanism to eliminate the need to worry about duplicate entries. Each feed entry has a published-date field. Every time a new Message is generated and sent, Spring Integration stores the value of the latest published date in an instance of the MetadataStore strategy (the section called “CompletableFuture”).
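The duplicate-elimination rule can be sketched as follows; this is a hypothetical class, and the real implementation records the date in the configured MetadataStore rather than in a field.

```java
public class PublishedDateFilterSketch {

    private long lastPublishedDate = Long.MIN_VALUE;

    // Emit an entry only when its published date is newer than the latest
    // one recorded; record the new date whenever we emit.
    public boolean shouldEmit(long publishedDate) {
        if (publishedDate > this.lastPublishedDate) {
            this.lastPublishedDate = publishedDate; // persisted via the MetadataStore in the real adapter
            return true;
        }
        return false;
    }

}
```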
Note: The key used to persist the latest published date is the value of the (required) id attribute of the Feed Inbound Channel Adapter component.
Other Options

Starting with version 5.0, the deprecated com.rometools.fetcher.FeedFetcher option has been removed, and an overloaded FeedEntryMessageSource constructor taking an org.springframework.core.io.Resource is provided. This is useful when the feed source isn't an HTTP endpoint but some other resource, local or remote (on FTP, for example). In the FeedEntryMessageSource logic, such a resource (or a provided URL) is parsed by the SyndFeedInput into the SyndFeed object for the processing mentioned above. A customized SyndFeedInput instance (for example, with the allowDoctypes option) can also be injected into the FeedEntryMessageSource.
=== Java DSL and Annotation configuration
The following Spring Boot application provides an example of configuring the Inbound Adapter using the Java DSL:
@SpringBootApplication
public class FeedJavaApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(FeedJavaApplication.class)
                .web(false)
                .run(args);
    }

    @Value("org/springframework/integration/feed/sample.rss")
    private Resource feedResource;

    @Bean
    public MetadataStore metadataStore() {
        PropertiesPersistingMetadataStore metadataStore = new PropertiesPersistingMetadataStore();
        metadataStore.setBaseDirectory(System.getProperty("java.io.tmpdir")); // any writable directory will do
        return metadataStore;
    }

    @Bean
    public IntegrationFlow feedFlow() {
        return IntegrationFlows
                .from(Feed.inboundAdapter(this.feedResource, "feedTest")
                                .metadataStore(metadataStore()),
                        e -> e.poller(p -> p.fixedDelay(100)))
                .channel(c -> c.queue("entries"))
                .get();
    }

}
Spring Integration’s File support extends the Spring Integration Core with a dedicated vocabulary to deal with reading, writing, and transforming files. It provides a namespace that enables elements defining Channel Adapters dedicated to files, and support for Transformers that can read file contents into strings or byte arrays. This section explains the workings of FileReadingMessageSource and FileWritingMessageHandler and how to configure them as beans. The support for dealing with files through file-specific implementations of Transformer will also be discussed. Finally, the file-specific namespace will be explained.
A FileReadingMessageSource can be used to consume files from the filesystem. This is an implementation of MessageSource that creates messages from a file system directory.
<bean id="pollableFileSource"
      class="org.springframework.integration.file.FileReadingMessageSource"
      p:directory="${input.directory}"/>
To prevent creating messages for certain files, you may supply a FileListFilter. By default, the following two filters are used:

IgnoreHiddenFileListFilter
AcceptOnceFileListFilter

The IgnoreHiddenFileListFilter ensures that hidden files are not processed. Please keep in mind that the exact definition of hidden is system-dependent. For example, on UNIX-based systems, a file beginning with a period character is considered to be hidden. Microsoft Windows, on the other hand, has a dedicated file attribute to indicate hidden files.
Important: The IgnoreHiddenFileListFilter was introduced with version 4.2; in prior versions, hidden files were not filtered.
The AcceptOnceFileListFilter ensures files are picked up only once from the directory.
Note: The AcceptOnceFileListFilter stores its state in memory. If you wish the state to survive a system restart, consider using the FileSystemPersistentAcceptOnceFileListFilter instead. Since version 4.0, that filter requires a ConcurrentMetadataStore, in which it stores the accepted-file state. Since version 4.1.5, it has a new property, flushOnUpdate, which, when set to true, flushes the metadata store on every update (if the store implements Flushable).
<bean id="pollableFileSource"
      class="org.springframework.integration.file.FileReadingMessageSource"
      p:directory="${input.directory}"
      p:filter-ref="customFilterBean"/>
A common problem with reading files is that a file may be detected before it is ready. The default AcceptOnceFileListFilter does not prevent this. In most cases, this can be prevented if the file-writing process renames each file as soon as it is ready for reading. A filename-pattern or filename-regex filter that accepts only files that are ready (e.g. based on a known suffix), composed with the default AcceptOnceFileListFilter, allows for this. The CompositeFileListFilter enables the composition.
<bean id="pollableFileSource"
      class="org.springframework.integration.file.FileReadingMessageSource"
      p:directory="${input.directory}"
      p:filter-ref="compositeFilter"/>

<bean id="compositeFilter"
      class="org.springframework.integration.file.filters.CompositeFileListFilter">
    <constructor-arg>
        <list>
            <bean class="o.s.i.file.filters.AcceptOnceFileListFilter"/>
            <bean class="o.s.i.file.filters.RegexPatternFileListFilter">
                <constructor-arg value="^test.*$"/>
            </bean>
        </list>
    </constructor-arg>
</bean>
If it is not possible to create the file with a temporary name and rename it to the final name, another alternative is provided. The LastModifiedFileListFilter was added in version 4.2. This filter can be configured with an age property, and only files older than this will be passed by the filter. The age defaults to 60 seconds, but you should choose an age that is large enough to avoid picking up a file early (due to, say, network glitches).
<bean id="filter"
      class="org.springframework.integration.file.filters.LastModifiedFileListFilter">
    <property name="age" value="120" />
</bean>
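The age check itself reduces to a timestamp comparison, sketched below as a pure function (illustrative only; the real LastModifiedFileListFilter operates on java.io.File objects).

```java
public class AgeFilterSketch {

    // Pass only files whose last-modified timestamp is at least ageSeconds in
    // the past, mirroring the LastModifiedFileListFilter contract.
    public static boolean accept(long lastModifiedMillis, long ageSeconds, long nowMillis) {
        return lastModifiedMillis <= nowMillis - ageSeconds * 1000;
    }

}
```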
Starting with version 4.3.7, a ChainFileListFilter (an extension of CompositeFileListFilter) has been introduced to allow scenarios where subsequent filters should only see the result of the previous filter. (With the CompositeFileListFilter, all filters see all the files, but only files that pass all filters are passed by the CompositeFileListFilter.) An example of where the new behavior is required is a combination of LastModifiedFileListFilter and AcceptOnceFileListFilter, when we do not wish to accept the file until some amount of time has elapsed. With the CompositeFileListFilter, since the AcceptOnceFileListFilter sees all the files on the first pass, it won't pass the file later when the other filter does.

The CompositeFileListFilter approach is useful when a pattern filter is combined with a custom filter that looks for a secondary file indicating that the transfer is complete. The pattern filter might only pass the primary file (e.g. foo.txt), but the "done" filter needs to see whether, say, foo.done is present. Say we have files a.txt, a.done, and b.txt. The pattern filter passes only a.txt and b.txt, while the "done" filter sees all three files and passes only a.txt. The final result of the composite filter is that only a.txt is released.
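The difference between the two composition strategies can be sketched in plain Java. The helpers below are hypothetical and operate on file names; the real filters work with File objects.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

public class FilterCompositionSketch {

    // Composite semantics: every filter sees the full original listing, and a
    // file is released only when every filter passed it.
    public static List<String> composite(List<String> files,
                                         List<UnaryOperator<List<String>>> filters) {
        List<String> result = new ArrayList<>(files);
        for (UnaryOperator<List<String>> filter : filters) {
            result.retainAll(filter.apply(files)); // each filter sees the full listing
        }
        return result;
    }

    // Chain semantics: each filter only sees what the previous filter passed.
    public static List<String> chain(List<String> files,
                                     List<UnaryOperator<List<String>>> filters) {
        List<String> result = files;
        for (UnaryOperator<List<String>> filter : filters) {
            result = filter.apply(result);
        }
        return result;
    }

}
```

Running the a.txt/a.done/b.txt example through composite() releases only a.txt, while chain() releases nothing, because the "done" filter never gets to see a.done.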
Note: With the …
Starting with version 5.0, an ExpressionFileListFilter has been introduced to allow the execution of a SpEL expression against the file, as the root object of the evaluation context. For this purpose, all the XML components for file handling (local and remote), alongside the existing filter attribute, have been supplied with the filter-expression option:

<int-file:inbound-channel-adapter
        directory="${inputdir}"
        filter-expression="name matches '.text'"
        auto-startup="false"/>
Starting with version 5.0.5, a DiscardAwareFileListFilter is provided for implementations that are interested in the event of rejected files. For this purpose, such a filter implementation should be supplied with a callback via addDiscardCallback(Consumer<File>). In the Framework, this functionality is used by the FileReadingMessageSource.WatchServiceDirectoryScanner in combination with the LastModifiedFileListFilter. Unlike the regular DirectoryScanner, the WatchService provides files for processing according to events on the target file system. At the moment of polling the internal queue with those files, the LastModifiedFileListFilter may discard them because they are too young relative to its configured age. Therefore, we would lose the file from any future consideration. The discard callback hook allows us to retain the file in the internal queue, so that it is available to be checked against the age in subsequent polls. The CompositeFileListFilter also implements DiscardAwareFileListFilter and populates the provided discard callback to all of its DiscardAwareFileListFilter delegates.
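The callback mechanics can be sketched without Spring. This hypothetical DiscardAwareFilter (operating on plain names rather than File objects) mirrors the addDiscardCallback idea: rejected entries are reported so the caller can re-queue them for a later poll.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Illustrative sketch (not the Spring class): a filter that reports rejected
// entries to an optional discard callback.
public class DiscardAwareFilter {

    private final Predicate<String> accept;
    private Consumer<String> discardCallback;

    public DiscardAwareFilter(Predicate<String> accept) {
        this.accept = accept;
    }

    // Mirrors addDiscardCallback(Consumer<File>) in spirit.
    public void addDiscardCallback(Consumer<String> callback) {
        this.discardCallback = callback;
    }

    public List<String> filter(List<String> files) {
        List<String> passed = new ArrayList<>();
        for (String f : files) {
            if (accept.test(f)) {
                passed.add(f);
            }
            else if (discardCallback != null) {
                discardCallback.accept(f); // rejected: let the caller retain it
            }
        }
        return passed;
    }

    public static void main(String[] args) {
        DiscardAwareFilter filter = new DiscardAwareFilter(f -> f.endsWith(".txt"));
        List<String> requeue = new ArrayList<>();
        filter.addDiscardCallback(requeue::add);
        System.out.println(filter.filter(List.of("a.txt", "b.tmp"))); // [a.txt]
        System.out.println(requeue);                                  // [b.tmp]
    }
}
```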
Message Headers
Starting with version 5.0, the FileReadingMessageSource, in addition to the payload as a polled File, populates these headers on the outbound Message:
FileHeaders.FILENAME - the File.getName() of the file to send. Can be used for subsequent rename or copy logic.
FileHeaders.ORIGINAL_FILE - the File object itself. Typically, this header is populated automatically by Framework components (such as splitters or transformers) when we lose the original File object. However, for consistency and convenience in any other custom use case, this header can be useful for getting access to the original file.
FileHeaders.RELATIVE_PATH - a new header introduced to represent the part of the file path relative to the root directory of the scan. This header can be useful when the requirement is to restore a source directory hierarchy elsewhere. For this purpose, the DefaultFileNameGenerator (described later in this chapter) can be configured to use this header.
Directory scanning and polling
The FileReadingMessageSource does not produce messages for files from the directory immediately. It uses an internal queue for the eligible files returned by the scanner. The scanEachPoll option is used to ensure that the internal queue is refreshed with the latest input directory content on each poll. By default (scanEachPoll = false), the FileReadingMessageSource empties its queue before scanning the directory again. This default behavior is particularly useful to reduce scans of large numbers of files in a directory. However, in cases where custom ordering is required, it is important to consider the effects of setting this flag to true; the order in which files are processed may not be as expected.
By default, files in the queue are processed in their natural (path) order. New files added by a scan, even when the queue already has files, are inserted in the appropriate position to maintain that natural order.
To customize the order, the FileReadingMessageSource can accept a Comparator<File> as a constructor argument. It is used by the internal PriorityBlockingQueue to reorder its content according to the business requirements. Therefore, to process files in a specific order, you should provide a comparator to the FileReadingMessageSource rather than ordering the list produced by a custom DirectoryScanner.
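A minimal JDK-only sketch of this mechanism: a PriorityBlockingQueue built with a Comparator<File> always hands back the "smallest" file first, so supplying, say, a reversed comparator changes the processing order. The processingOrder helper is illustrative, not framework code.

```java
import java.io.File;
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

// Demonstrates how a Comparator<File> changes the order in which queued files
// are handed out: the head of a PriorityBlockingQueue is always the "smallest"
// element according to the comparator (natural path order by default).
public class FileOrderingDemo {

    public static File[] drain(PriorityBlockingQueue<File> queue) {
        File[] out = new File[queue.size()];
        for (int i = 0; i < out.length; i++) {
            out[i] = queue.poll();
        }
        return out;
    }

    // Queues the given names with the given comparator and returns the poll order.
    public static String processingOrder(Comparator<File> c, String... names) {
        PriorityBlockingQueue<File> queue = new PriorityBlockingQueue<>(16, c);
        for (String n : names) {
            queue.add(new File(n));
        }
        StringBuilder sb = new StringBuilder();
        for (File f : drain(queue)) {
            sb.append(f.getName()).append(' ');
        }
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        // Natural path order vs. reversed order.
        System.out.println(processingOrder(Comparator.naturalOrder(), "b.txt", "a.txt"));
        System.out.println(processingOrder(Comparator.reverseOrder(), "a.txt", "c.txt", "b.txt"));
    }
}
```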
Starting with version 5.0, a new RecursiveDirectoryScanner is provided to perform file tree visiting. The implementation is based on the Files.walk(Path start, int maxDepth, FileVisitOption... options) functionality. The root directory (the DirectoryScanner.listFiles(File) argument) is excluded from the result. All other sub-directory inclusions and exclusions are based on the target FileListFilter implementation. For example, the SimplePatternFileListFilter filters out directories by default. See AbstractDirectoryAwareFileListFilter and its implementations for more information.
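A rough JDK-only approximation of such a recursive scan, assuming Files.walk with the root excluded and only regular files retained. The listFiles and demo methods here are hypothetical helpers, not the RecursiveDirectoryScanner code.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Sketch of a recursive scan: visit the tree up to maxDepth, exclude the root
// directory itself, and keep only regular files (directory filtering is left
// to the file list filter in the real scanner).
public class RecursiveScanSketch {

    public static List<Path> listFiles(Path root, int maxDepth) {
        try (Stream<Path> walk = Files.walk(root, maxDepth)) {
            return walk.filter(p -> !p.equals(root))   // the root itself is excluded
                       .filter(Files::isRegularFile)
                       .sorted()
                       .collect(Collectors.toList());
        }
        catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Creates a small temporary tree and scans it.
    public static List<String> demo() {
        try {
            Path root = Files.createTempDirectory("scan");
            Files.createFile(root.resolve("a.txt"));
            Path sub = Files.createDirectories(root.resolve("sub"));
            Files.createFile(sub.resolve("b.txt"));
            return listFiles(root, Integer.MAX_VALUE).stream()
                    .map(p -> p.getFileName().toString())
                    .collect(Collectors.toList());
        }
        catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // [a.txt, b.txt]
    }
}
```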
The configuration for file reading can be simplified using the file specific namespace. To do this use the following template.
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:int="http://www.springframework.org/schema/integration"
       xmlns:int-file="http://www.springframework.org/schema/integration/file"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/integration
           http://www.springframework.org/schema/integration/spring-integration.xsd
           http://www.springframework.org/schema/integration/file
           http://www.springframework.org/schema/integration/file/spring-integration-file.xsd">
</beans>
Within this namespace you can configure the FileReadingMessageSource and wrap it in an inbound Channel Adapter like this:
<int-file:inbound-channel-adapter id="filesIn1"
    directory="file:${input.directory}" prevent-duplicates="true" ignore-hidden="true"/>

<int-file:inbound-channel-adapter id="filesIn2"
    directory="file:${input.directory}"
    filter="customFilterBean" />

<int-file:inbound-channel-adapter id="filesIn3"
    directory="file:${input.directory}"
    filename-pattern="test*" />

<int-file:inbound-channel-adapter id="filesIn4"
    directory="file:${input.directory}"
    filename-regex="test[0-9]+\.txt" />
The first channel adapter example relies on the default FileListFilter implementations:
IgnoreHiddenFileListFilter (do not process hidden files)
AcceptOnceFileListFilter (prevent duplicates)
Therefore, you can also leave off the prevent-duplicates and ignore-hidden attributes, as they are true by default.
The second channel adapter example uses a custom filter, the third uses the filename-pattern attribute to add an AntPathMatcher-based filter, and the fourth uses the filename-regex attribute to add a regular-expression Pattern-based filter to the FileReadingMessageSource. The filename-pattern and filename-regex attributes are each mutually exclusive with the regular filter reference attribute. However, you can use the filter attribute to reference an instance of CompositeFileListFilter that combines any number of filters, including one or more pattern-based filters, to fit your particular needs.
When multiple processes are reading from the same directory it can be desirable to lock files to prevent them from being picked up concurrently.
To do this you can use a FileLocker
.
There is a java.nio based implementation available out of the box, but it is also possible to implement your own locking scheme. The nio locker can be injected as follows:
<int-file:inbound-channel-adapter id="filesIn"
    directory="file:${input.directory}" prevent-duplicates="true">
    <int-file:nio-locker/>
</int-file:inbound-channel-adapter>
You can configure a custom locker like this:
<int-file:inbound-channel-adapter id="filesIn"
    directory="file:${input.directory}" prevent-duplicates="true">
    <int-file:locker ref="customLocker"/>
</int-file:inbound-channel-adapter>
Note: When a file inbound adapter is configured with a locker, it takes the responsibility of acquiring a lock before the file is allowed to be received. It does not assume the responsibility of unlocking the file. If you have processed the file and keep the locks hanging around, you have a memory leak. If this is a problem in your case, you should release the lock yourself once you have finished processing the file.
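The underlying java.nio mechanism can be sketched as follows. This is not the Spring locker implementation, just a minimal illustration of FileChannel.tryLock() plus the explicit release that the note above insists on.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Minimal java.nio locking sketch: try to acquire an exclusive lock on the
// file and release it when processing is finished.
public class NioLockSketch {

    // Returns true if the file could be locked and was processed.
    public static boolean processWithLock(Path file) {
        try (FileChannel channel = FileChannel.open(file, StandardOpenOption.WRITE)) {
            FileLock lock = channel.tryLock(); // null if another process holds the lock
            if (lock == null) {
                return false;
            }
            try {
                // ... process the file here ...
                return true;
            }
            finally {
                lock.release(); // do not leak locks
            }
        }
        catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Creates a temporary file and locks it.
    public static boolean demo() {
        try {
            return processWithLock(Files.createTempFile("locked", ".txt"));
        }
        catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // true
    }
}
```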
When filtering and locking files is not enough, it might be necessary to control the way files are listed entirely. To implement this type of requirement, you can use an implementation of DirectoryScanner. This scanner lets you determine exactly which files are listed on each poll. This is also the interface that Spring Integration uses internally to wire FileListFilters and the FileLocker into the FileReadingMessageSource. A custom DirectoryScanner can be injected into the <int-file:inbound-channel-adapter/> via the scanner attribute:
<int-file:inbound-channel-adapter id="filesIn" directory="file:${input.directory}"
    scanner="customDirectoryScanner"/>
This gives you full freedom to choose the ordering, listing and locking strategies.
It is also important to understand that filters (including patterns, regex, prevent-duplicates, etc.) and lockers are actually used by the scanner. Any of these attributes set on the adapter are subsequently injected into the internal scanner. For the case of an external scanner, all filter and locker attributes are prohibited on the FileReadingMessageSource; they must be specified (if required) on that custom DirectoryScanner. In other words, if you inject a scanner into the FileReadingMessageSource, you should supply the filter and locker on that scanner, not on the FileReadingMessageSource.
==== WatchServiceDirectoryScanner
The FileReadingMessageSource.WatchServiceDirectoryScanner
relies on file system events when new files are added to the directory.
During initialization, the directory is registered to generate events; the initial file list is also built.
While walking the directory tree, any subdirectories encountered are also registered to generate events.
On the first poll, the initial file list from walking the directory is returned.
On subsequent polls, files from new creation events are returned.
If a new subdirectory is added, its creation event is used to walk the new subtree to find existing files, as well
as registering any new subdirectories found.
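The initialization walk described above can be approximated with plain JDK WatchService calls. The registerTree and demo methods are illustrative helpers, not the framework code.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchService;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

// Sketch of the initialization step: walk the directory tree, register every
// directory for creation events, and collect the initial file list (which is
// returned on the first poll).
public class WatchRegistrationSketch {

    public static List<Path> registerTree(Path root, WatchService watcher) {
        List<Path> initialFiles = new ArrayList<>();
        try (Stream<Path> walk = Files.walk(root)) {
            for (Path p : (Iterable<Path>) walk::iterator) {
                if (Files.isDirectory(p)) {
                    p.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);
                }
                else {
                    initialFiles.add(p);
                }
            }
        }
        catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return initialFiles;
    }

    // Registers a temporary tree containing one pre-existing file.
    public static int demo() {
        try (WatchService watcher = FileSystems.getDefault().newWatchService()) {
            Path root = Files.createTempDirectory("watched");
            Files.createFile(root.resolve("existing.txt"));
            return registerTree(root, watcher).size();
        }
        catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 1 pre-existing file found during registration
    }
}
```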
The WatchServiceDirectoryScanner can be enabled via the FileReadingMessageSource.use-watch-service option, which is mutually exclusive with the scanner option. An internal FileReadingMessageSource.WatchServiceDirectoryScanner instance is populated for the provided directory.
In addition, the WatchService polling logic can now track StandardWatchEventKinds.ENTRY_MODIFY and StandardWatchEventKinds.ENTRY_DELETE, too. The ENTRY_MODIFY events logic should be implemented properly in the FileListFilter to track not only new files but also modifications, if that is a requirement. Otherwise, the files from those events are treated the same way.
The ENTRY_DELETE events have an effect on ResettableFileListFilter implementations; the deleted files are provided to their remove() operation. This means that (when this event is enabled) filters such as the AcceptOnceFileListFilter have the file removed, so that, if a file with the same name appears again, it passes the filter and is sent as a message.
For this purpose, the watch-events option (FileReadingMessageSource.setWatchEvents(WatchEventType... watchEvents)) has been introduced (WatchEventType is a public inner enum in FileReadingMessageSource). With such an option, we can implement scenarios where we would like to use one downstream flow for new files and another for modified files. We can achieve that with different <int-file:inbound-channel-adapter> definitions for the same directory:
<int-file:inbound-channel-adapter id="newFiles"
    directory="${input.directory}"
    use-watch-service="true"/>

<int-file:inbound-channel-adapter id="modifiedFiles"
    directory="${input.directory}"
    use-watch-service="true"
    filter="acceptAllFilter"
    watch-events="MODIFY"/> <!-- CREATE by default -->
==== Limiting Memory Consumption
A HeadDirectoryScanner
can be used to limit the number of files retained in memory.
This can be useful when scanning large directories.
With XML configuration, this is enabled using the queue-size
property on the inbound channel adapter.
Prior to version 4.2, this setting was incompatible with the use of any other filters.
Any other filters (including prevent-duplicates="true"
) overwrote the filter used to limit the size.
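The effect of such a size cap can be sketched as follows (a toy head function, not the HeadDirectoryScanner code): after sorting into natural order, only the first queue-size entries are retained in memory.

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of the queue-size idea: keep only the first N files (in natural order)
// to bound memory usage when a directory holds many files.
public class HeadLimitSketch {

    public static List<String> head(List<String> files, int queueSize) {
        return files.stream()
                .sorted()
                .limit(queueSize)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(head(List.of("c.txt", "a.txt", "b.txt", "d.txt"), 2)); // [a.txt, b.txt]
    }
}
```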
==== Configuring with Java Configuration
The following Spring Boot application provides an example of configuring the inbound adapter using Java configuration:
@SpringBootApplication
public class FileReadingJavaApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(FileReadingJavaApplication.class)
            .web(false)
            .run(args);
    }

    @Bean
    public MessageChannel fileInputChannel() {
        return new DirectChannel();
    }

    @Bean
    @InboundChannelAdapter(value = "fileInputChannel", poller = @Poller(fixedDelay = "1000"))
    public MessageSource<File> fileReadingMessageSource() {
        FileReadingMessageSource source = new FileReadingMessageSource();
        source.setDirectory(new File(INBOUND_PATH));
        source.setFilter(new SimplePatternFileListFilter("*.txt"));
        return source;
    }

    @Bean
    @Transformer(inputChannel = "fileInputChannel", outputChannel = "processFileChannel")
    public FileToStringTransformer fileToStringTransformer() {
        return new FileToStringTransformer();
    }

}
==== Configuring with the Java DSL
The following Spring Boot application provides an example of configuring the inbound adapter using the Java DSL:
@SpringBootApplication
public class FileReadingJavaApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(FileReadingJavaApplication.class)
            .web(false)
            .run(args);
    }

    @Bean
    public IntegrationFlow fileReadingFlow() {
        return IntegrationFlows
                .from(s -> s.file(new File(INBOUND_PATH))
                        .patternFilter("*.txt"),
                    e -> e.poller(Pollers.fixedDelay(1000)))
                .transform(Files.toStringTransformer())
                .channel("processFileChannel")
                .get();
    }

}
Another popular use case is to get lines from the end (or tail) of a file, capturing new lines when they are added. Two implementations are provided. The first, OSDelegatingFileTailingMessageProducer, uses the native tail command (on operating systems that have one). This is likely the most efficient implementation on those platforms. For operating systems that do not have a tail command, the second implementation, ApacheCommonsFileTailingMessageProducer, uses the Apache commons-io Tailer class.
In both cases, file system events, such as files being unavailable, are published as ApplicationEvents using the normal Spring event publishing mechanism. Examples of such events are:
Examples of such events are:
[message=tail: cannot open `/tmp/foo' for reading:
No such file or directory, file=/tmp/foo]
[message=tail: `/tmp/foo' has become accessible, file=/tmp/foo]
[message=tail: `/tmp/foo' has become inaccessible:
No such file or directory, file=/tmp/foo]
[message=tail: `/tmp/foo' has appeared;
following end of new file, file=/tmp/foo]
This sequence of events might occur, for example, when a file is rotated.
Starting with version 5.0, a FileTailingIdleEvent is emitted when there is no data in the file during the idleEventInterval.
[message=Idle timeout, file=/tmp/foo] [idle time=5438]
Messages emitted from these endpoints have the following headers:
FileHeaders.ORIGINAL_FILE - the File object
FileHeaders.FILENAME - the file name (File.getName())
Example configurations:
<int-file:tail-inbound-channel-adapter id="native" channel="input" task-executor="exec" file="/tmp/foo"/>
This creates a native adapter with default -F -n 0 options (follow the file name from the current end).
<int-file:tail-inbound-channel-adapter id="native" channel="input" native-options="-F -n +0" task-executor="exec" file-delay=10000 file="/tmp/foo"/>
This creates a native adapter with -F -n +0 options (follow the file name, emitting all existing lines).
If the tail command fails (on some platforms, a missing file causes the tail to fail, even with -F specified), the command is retried every 10 seconds.
<int-file:tail-inbound-channel-adapter id="native" channel="input" enable-status-reader="false" task-executor="exec" file="/tmp/foo"/>
By default, the native adapter captures lines from standard output and sends them as messages; it captures lines from standard error to raise events. Starting with version 4.3.6, you can discard the standard error events by setting enable-status-reader to false.
<int-file:tail-inbound-channel-adapter id="native" channel="input" idle-event-interval="5000" task-executor="exec" file="/tmp/foo"/>
Here, idle-event-interval is set to 5000; if no lines are written for 5 seconds, a FileTailingIdleEvent is triggered every 5 seconds. This can be useful if we need to stop the adapter.
<int-file:tail-inbound-channel-adapter id="apache" channel="input" task-executor="exec" file="/tmp/bar" delay="2000" end="false" reopen="true" file-delay="10000"/>
This creates an Apache commons-io Tailer adapter that examines the file for new lines every 2 seconds and checks for existence of a missing file every 10 seconds. The file is tailed from the beginning (end="false") instead of the end (which is the default). The file is reopened for each chunk (the default is to keep the file open).
==== Dealing With Incomplete Data
A common problem in file transfer scenarios is how to determine that the transfer is complete, so you don’t start reading an incomplete file.
A common technique to solve this problem is to write the file with a temporary name and then atomically rename it to the final name. This, together with a filter that masks the temporary file from being picked up by the consumer, provides a robust solution.
This technique is used by Spring Integration components that write files (locally or remotely); by default, they append .writing
to the file name and remove it when the transfer is complete.
Another common technique is to write a second "marker" file to indicate that the file transfer is complete. In this scenario you should not, for example, consider foo.txt to be available for use until foo.txt.complete is also present. Spring Integration version 5.0 introduces new filters to support this mechanism. Implementations are provided for the file system (FileSystemMarkerFilePresentFileListFilter), FTP and SFTP. They are configurable such that the marker file can have any name, although it will usually be related to the file being transferred. See the javadocs for more information.
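The file-system variant's behavior can be sketched like this: a hypothetical filter that passes a data file only when a sibling marker with an assumed ".complete" suffix exists, and never passes the marker itself. The actual filter class is configurable in ways this sketch is not.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Illustrative marker-file filter: a data file is released only when its
// corresponding marker file is present; marker files themselves are suppressed.
public class MarkerFileFilterSketch {

    public static final String MARKER_SUFFIX = ".complete";

    public static List<Path> filter(List<Path> candidates) {
        List<Path> passed = new ArrayList<>();
        for (Path p : candidates) {
            String name = p.getFileName().toString();
            if (name.endsWith(MARKER_SUFFIX)) {
                continue; // never emit the marker itself
            }
            Path marker = p.resolveSibling(name + MARKER_SUFFIX);
            if (Files.exists(marker)) {
                passed.add(p);
            }
        }
        return passed;
    }

    // foo.txt has a marker; bar.txt is still transferring.
    public static List<String> demo() {
        try {
            Path dir = Files.createTempDirectory("markers");
            Path foo = Files.createFile(dir.resolve("foo.txt"));
            Path marker = Files.createFile(dir.resolve("foo.txt" + MARKER_SUFFIX));
            Path bar = Files.createFile(dir.resolve("bar.txt"));
            List<String> names = new ArrayList<>();
            for (Path p : filter(List.of(foo, bar, marker))) {
                names.add(p.getFileName().toString());
            }
            return names;
        }
        catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // [foo.txt]
    }
}
```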
To write messages to the file system you can use a FileWritingMessageHandler. This class can deal with the following payload types:
You can configure the encoding and the charset that will be used in case of a String payload.
To make things easier, you can configure the FileWritingMessageHandler
as part of an Outbound Channel Adapter or
Outbound Gateway using the provided XML namespace support.
Starting with version 4.3, you can specify the buffer size to use when writing files.
In its simplest form, the FileWritingMessageHandler
only requires a destination directory for writing the files.
The name of the file to be written is determined by the handler’s FileNameGenerator.
The default implementation looks for a Message header whose key matches the constant defined as FileHeaders.FILENAME.
Alternatively, you can specify an expression to be evaluated against the Message in order to generate a file name, e.g. headers[myCustomHeader] + '.foo'. The expression must evaluate to a String. For convenience, the DefaultFileNameGenerator also provides the setHeaderName method, allowing you to explicitly specify the Message header whose value shall be used as the filename.
Once set up, the DefaultFileNameGenerator employs the following resolution steps to determine the filename for a given Message:
1. Evaluate the expression against the Message and, if the result is a non-empty String, use it as the filename.
2. Otherwise, if the payload is a java.io.File, use the file's filename.
3. Otherwise, use the Message ID appended with .msg as the filename.
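These steps can be condensed into a small illustrative function. The resolve method is hypothetical and takes the already-evaluated expression result; the real generator evaluates the expression itself.

```java
import java.io.File;

// Simplified sketch (not the actual DefaultFileNameGenerator code) of the
// three resolution steps: expression result, then File payload name, then
// a fallback based on the message id.
public class FileNameResolutionSketch {

    public static String resolve(Object expressionResult, Object payload, String messageId) {
        if (expressionResult instanceof String && !((String) expressionResult).isEmpty()) {
            return (String) expressionResult;      // 1. non-empty String from the expression
        }
        if (payload instanceof File) {
            return ((File) payload).getName();     // 2. the File payload's own name
        }
        return messageId + ".msg";                 // 3. fall back to <message id>.msg
    }

    public static void main(String[] args) {
        System.out.println(resolve("report.csv", null, "42"));           // report.csv
        System.out.println(resolve(null, new File("/tmp/a.txt"), "42")); // a.txt
        System.out.println(resolve(null, "some text", "42"));            // 42.msg
    }
}
```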
When using the XML namespace support, both the File Outbound Channel Adapter and the File Outbound Gateway support the following two mutually exclusive configuration attributes:
filename-generator (a reference to a FileNameGenerator implementation)
filename-generator-expression (an expression evaluating to a String)
While writing files, a temporary file suffix is used (default: .writing). It is appended to the filename while the file is being written. To customize the suffix, you can set the temporary-file-suffix attribute on both the File Outbound Channel Adapter and the File Outbound Gateway.
Note: When using the APPEND file mode, the temporary-file-suffix attribute is ignored, since the data is appended to the file directly.
Starting with version 4.2.5, the generated file name (as a result of filename-generator/filename-generator-expression evaluation) can represent a sub-path together with the target file name. It is used as the second constructor argument for File(File parent, String child) as before, but in the past we did not create (mkdirs()) the directories for the sub-path, assuming only a file name. This approach is useful for cases when we need to restore the file system tree to match the source directory; for example, when unzipping an archive, we may want to save all the files in the target directory with the same layout.
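A minimal sketch of what creating the sub-path involves (prepareTarget and demo are hypothetical helpers, not framework code): the parent directories of File(parent, child) are created before the file is written.

```java
import java.io.File;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;

// Sketch of writing a generated name that contains a sub-path: the parent
// directories of File(parent, child) must be created (mkdirs()) before writing.
public class SubPathWriteSketch {

    public static File prepareTarget(File outputDirectory, String generatedName) {
        File target = new File(outputDirectory, generatedName);
        File parent = target.getParentFile();
        if (parent != null) {
            parent.mkdirs(); // create the sub-path, not just the root output directory
        }
        return target;
    }

    // Writes a file three levels below a temporary output directory.
    public static boolean demo() {
        try {
            File out = Files.createTempDirectory("out").toFile();
            File target = prepareTarget(out, "archive/sub/file.txt");
            Files.write(target.toPath(), "data".getBytes());
            return target.isFile();
        }
        catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // true
    }
}
```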
==== Specifying the Output Directory
Both, the File Outbound Channel Adapter and the File Outbound Gateway provide two configuration attributes for specifying the output directory:
Note: The directory-expression attribute has been available since Spring Integration 2.2.
Using the directory attribute
When using the directory attribute, the output directory is set to a fixed value at initialization time of the FileWritingMessageHandler. If you don't specify this attribute, you must use the directory-expression attribute.
Using the directory-expression attribute
If you want to have full SpEL support you would choose the directory-expression attribute. This attribute accepts a SpEL expression that is evaluated for each message being processed. Thus, you have full access to a Message’s payload and its headers to dynamically specify the output file directory.
The SpEL expression must resolve to either a String or a java.io.File. Furthermore, the resulting String or File must point to a directory.
If you don’t specify the directory-expression attribute, then you must set the directory attribute.
Using the auto-create-directory attribute
If the destination directory does not yet exist, by default the respective destination directory and any non-existing parent directories are created automatically. You can set the auto-create-directory attribute to false in order to prevent that. This attribute applies to both the directory and the directory-expression attributes.
Note: Instead of checking for the existence of the destination directory at initialization time of the adapter, this check is performed for each message being processed.
==== Dealing with Existing Destination Files
When writing files and the destination file already exists, the default behavior is to overwrite that target file. This behavior, though, can be changed by setting the mode attribute on the respective File Outbound components. The following options exist:
Note: The mode attribute and the options APPEND, FAIL and IGNORE have been available since Spring Integration 2.2.
REPLACE
If the target file already exists, it will be overwritten. If the mode attribute is not specified, then this is the default behavior when writing files.
REPLACE_IF_MODIFIED
If the target file already exists, it is overwritten only if its last modified timestamp differs from that of the source file. For File payloads, the payload's lastModified time is compared to the existing file's. For other payloads, the FileHeaders.SET_MODIFIED (file_setModified) header is compared to the existing file. If the header is missing, or has a value that is not a Number, the file is always replaced.
APPEND
This mode allows you to append Message content to the existing file instead of creating a new file each time. Note that this attribute is mutually exclusive with temporary-file-suffix attribute since when appending content to the existing file, the adapter no longer uses a temporary file. The file is closed after each message.
APPEND_NO_FLUSH
This mode has the same semantics as APPEND, but the data is not flushed and the file is not closed after each message. This can provide a significant performance improvement, at the risk of data loss in the case of a failure. See "Flushing Files When using APPEND_NO_FLUSH" below for more information.
FAIL
If the target file exists, a MessageHandlingException is thrown.
IGNORE
If the target file exists, the message payload is silently ignored.
==== Flushing Files When using APPEND_NO_FLUSH
The APPEND_NO_FLUSH mode was added in version 4.3. This can improve performance because the file is not closed after each message. However, this can cause data loss in the event of a failure.
Several flushing strategies are provided to mitigate this data loss:
- flushInterval - if a file is not written to for this period of time, it is automatically flushed. This is approximate and may be up to 1.33x this time (with an average of 1.167x).
- Send a message to the message handler's trigger method containing a regular expression. Files with absolute path names matching the pattern are flushed.
- Provide the handler with a custom MessageFlushPredicate implementation to modify the action taken when a message is sent to the trigger method.
- Invoke one of the handler's flushIfNeeded methods, passing in a custom FileWritingMessageHandler.FlushPredicate or FileWritingMessageHandler.MessageFlushPredicate implementation.
The predicates are called for each open file. See the javadocs for these interfaces for more information. Note that, since version 5.0, the predicate methods are provided with an additional parameter: the time the current file was first written to, if new or previously closed. When using flushInterval, the interval starts at the last write; the file is flushed only if it is idle for the interval. Starting with version 4.3.7, an additional property, flushWhenIdle, can be set to false, meaning that the interval starts with the first write to a previously flushed (or new) file.
By default, the destination file's lastModified timestamp is the time the file was created (except that an in-place rename retains the current timestamp). Starting with version 4.3, you can configure preserve-timestamp (or setPreserveTimestamp(true) when using Java configuration). For File payloads, this transfers the timestamp from the inbound file to the outbound file (regardless of whether a copy was required). For other payloads, if the FileHeaders.SET_MODIFIED header (file_setModified) is present, it is used to set the destination file's lastModified timestamp, as long as the header is a Number.
Starting with version 5.0, when writing files to a file system that supports Posix permissions, you can specify those permissions on the outbound channel adapter or gateway. The property is an integer and is usually supplied in the familiar octal format; e.g. 0640 means that the owner has read/write permissions, the group has read-only permission, and others have no access.
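The octal-to-permission mapping can be illustrated with the JDK's PosixFilePermission enum. The fromMode converter below is illustrative, not the framework's code; the resulting set could then be applied with Files.setPosixFilePermissions.

```java
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.EnumSet;
import java.util.Set;

// Sketch of how an integer such as 0640 maps to Posix permissions: each octal
// digit holds the read/write/execute bits for owner, group, and others.
public class PosixModeSketch {

    private static final PosixFilePermission[] ORDERED = {
            PosixFilePermission.OWNER_READ, PosixFilePermission.OWNER_WRITE, PosixFilePermission.OWNER_EXECUTE,
            PosixFilePermission.GROUP_READ, PosixFilePermission.GROUP_WRITE, PosixFilePermission.GROUP_EXECUTE,
            PosixFilePermission.OTHERS_READ, PosixFilePermission.OTHERS_WRITE, PosixFilePermission.OTHERS_EXECUTE };

    public static Set<PosixFilePermission> fromMode(int mode) {
        Set<PosixFilePermission> perms = EnumSet.noneOf(PosixFilePermission.class);
        for (int i = 0; i < ORDERED.length; i++) {
            if ((mode & (1 << (8 - i))) != 0) { // bit 8 is OWNER_READ, bit 0 is OTHERS_EXECUTE
                perms.add(ORDERED[i]);
            }
        }
        return perms;
    }

    public static void main(String[] args) {
        // 0640: owner read/write, group read, others nothing -> "rw-r-----"
        System.out.println(PosixFilePermissions.toString(fromMode(0640)));
    }
}
```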
==== File Outbound Channel Adapter
<int-file:outbound-channel-adapter id="filesOut" directory="${input.directory.property}"/>
The namespace-based configuration also supports a delete-source-files attribute. If set to true, it triggers the deletion of the original source files after writing to a destination. The default value for that flag is false.
<int-file:outbound-channel-adapter id="filesOut" directory="${output.directory}" delete-source-files="true"/>
Starting with version 4.2, the FileWritingMessageHandler supports an append-new-line option. If set to true, a new line is appended to the file after a message is written. The default attribute value is false.
<int-file:outbound-channel-adapter id="newlineAdapter" append-new-line="true" directory="${output.directory}"/>
In cases where you want to continue processing messages based on the written file, you can use the outbound-gateway instead. It plays a very similar role to the outbound-channel-adapter. However, after writing the file, it also sends it to the reply channel as the payload of a Message.
<int-file:outbound-gateway id="mover" request-channel="moveInput"
    reply-channel="output"
    directory="${output.directory}"
    mode="REPLACE" delete-source-files="true"/>
As mentioned earlier, you can also specify the mode attribute, which defines the behavior for situations where the destination file already exists. See "Dealing with Existing Destination Files" above for further details. Generally, when using the File Outbound Gateway, the result file is returned as the Message payload on the reply channel.
This also applies when specifying the IGNORE mode. In that case the pre-existing destination file is returned. If the payload of the request message was a file, you still have access to that original file through the Message Header FileHeaders.ORIGINAL_FILE.
Note: The outbound-gateway works well in cases where you want to first move a file and then send it through a processing pipeline.
If you have more elaborate requirements, or need to support additional payload types as input to be converted to file content, you could extend the FileWritingMessageHandler, but a much better option is to rely on a Transformer.
==== Configuring with Java Configuration
The following Spring Boot application provides an example of configuring the inbound adapter using Java configuration:
@SpringBootApplication
@IntegrationComponentScan
public class FileWritingJavaApplication {

    public static void main(String[] args) {
        ConfigurableApplicationContext context =
                new SpringApplicationBuilder(FileWritingJavaApplication.class)
                    .web(false)
                    .run(args);
        MyGateway gateway = context.getBean(MyGateway.class);
        gateway.writeToFile("foo.txt", new File(tmpDir.getRoot(), "fileWritingFlow"), "foo");
    }

    @Bean
    @ServiceActivator(inputChannel = "writeToFileChannel")
    public MessageHandler fileWritingMessageHandler() {
        Expression directoryExpression = new SpelExpressionParser().parseExpression("headers.directory");
        FileWritingMessageHandler handler = new FileWritingMessageHandler(directoryExpression);
        handler.setFileExistsMode(FileExistsMode.APPEND);
        return handler;
    }

    @MessagingGateway(defaultRequestChannel = "writeToFileChannel")
    public interface MyGateway {

        void writeToFile(@Header(FileHeaders.FILENAME) String fileName,
                @Header("directory") File directory, String data);

    }

}
==== Configuring with the Java DSL
The following Spring Boot application provides an example of configuring the inbound adapter using the Java DSL:
@SpringBootApplication
public class FileWritingJavaApplication {

    public static void main(String[] args) {
        ConfigurableApplicationContext context =
                new SpringApplicationBuilder(FileWritingJavaApplication.class)
                    .web(false)
                    .run(args);
        MessageChannel fileWritingInput = context.getBean("fileWritingInput", MessageChannel.class);
        fileWritingInput.send(new GenericMessage<>("foo"));
    }

    @Bean
    public IntegrationFlow fileWritingFlow() {
        return IntegrationFlows.from("fileWritingInput")
                .enrichHeaders(h -> h.header(FileHeaders.FILENAME, "foo.txt")
                        .header("directory", new File(tmpDir.getRoot(), "fileWritingFlow")))
                .handleWithAdapter(a -> a.fileGateway(m -> m.getHeaders().get("directory")))
                .channel(MessageChannels.queue("fileWritingResultChannel"))
                .get();
    }

}
To transform data read from the file system to objects, and the other way around, you need to do some work. Unlike with FileReadingMessageSource and, to a lesser extent, FileWritingMessageHandler, it is very likely that you will need your own mechanism to get the job done. For this, you can implement the Transformer interface, or extend the AbstractFilePayloadTransformer for inbound messages. Some obvious implementations have been provided. FileToByteArrayTransformer transforms Files into byte[] using Spring's FileCopyUtils.
It is often better to use a sequence of transformers than to put all transformations in a single class.
In that case the File
to byte[]
conversion might be a logical first step.
FileToStringTransformer
will convert Files to Strings as the name suggests.
If nothing else, this can be useful for debugging (consider using with a Wire Tap).
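As a sketch of the custom route, an inbound transformer can extend AbstractFilePayloadTransformer and implement its transformFile() method; the class name and the choice of a Properties payload below are illustrative, not part of the framework:

```java
import java.io.File;
import java.io.InputStream;
import java.nio.file.Files;
import java.util.Properties;

import org.springframework.integration.file.transformer.AbstractFilePayloadTransformer;

// Hypothetical example: convert each inbound File into a java.util.Properties payload.
public class FileToPropertiesTransformer extends AbstractFilePayloadTransformer<Properties> {

    @Override
    protected Properties transformFile(File file) throws Exception {
        Properties properties = new Properties();
        try (InputStream in = Files.newInputStream(file.toPath())) {
            properties.load(in); // parse the file's key=value content
        }
        return properties;
    }

}
```

Like the provided transformers, such a class can then be referenced from a generic transformer endpoint definition.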
To configure file-specific transformers, you can use the appropriate elements from the file namespace:
<int-file:file-to-bytes-transformer input-channel="input"
    output-channel="output"
    delete-files="true"/>

<int-file:file-to-string-transformer input-channel="input"
    output-channel="output"
    delete-files="true"
    charset="UTF-8"/>
The delete-files option signals to the transformer that it should delete the inbound File after the transformation is complete. This is in no way a replacement for using the AcceptOnceFileListFilter when the FileReadingMessageSource is being used in a multi-threaded environment (e.g., Spring Integration in general).
=== File Splitter

The FileSplitter was added in version 4.1.2, and its namespace support was added in version 4.2. The FileSplitter splits text files into individual lines, based on BufferedReader.readLine(). By default, the splitter uses an Iterator to emit lines one at a time as they are read from the file. Setting the iterator property to false causes it to read all the lines into memory before emitting them as messages. One use case for this is if you want to detect I/O errors on the file before sending any messages containing lines. However, it is only practical for relatively short files.
Inbound payloads can be File, String (a File path), InputStream, or Reader. Other payload types are emitted unchanged.
<int-file:splitter id="splitter"
    iterator=""
    markers=""
    markers-json=""
    apply-sequence=""
    requires-reply=""
    charset=""
    first-line-as-header=""
    input-channel=""
    output-channel=""
    send-timeout=""
    auto-startup=""
    order=""
    phase="" />
id: The bean name of the splitter.
iterator: Set to true (the default) to emit lines one at a time as they are read; set to false to read all lines into memory before emitting them as messages.
markers: Set to true to emit start-of-file and end-of-file marker messages before and after the file data; default false. When markers are enabled, apply-sequence is false by default.
markers-json: When markers is true, set this to true to have the marker payloads represented as JSON strings.
apply-sequence: Set to false to disable the inclusion of sequenceSize and sequenceNumber headers in messages; default true, unless markers is true.
requires-reply: Set to true to raise an exception if no lines are emitted from the file; default false.
charset: Set the charset name to be used when reading the text data into String payloads; defaults to the platform charset.
first-line-as-header: The header name for the first line to be carried as a header in the messages emitted for the remaining lines. Since version 5.0.
input-channel: Set the input channel used to send messages to the splitter.
output-channel: Set the output channel to which messages will be sent.
send-timeout: Set the send timeout; only applies if the output channel can block, such as a full bounded QueueChannel.
auto-startup: Set to false to disable automatically starting the splitter when the context is refreshed; default true.
order: Set the order of this endpoint if the input-channel is a SubscribableChannel.
phase: Set the startup phase for the splitter (used when auto-startup is true).
The FileSplitter will also split any text-based InputStream into lines. Starting with version 4.3, when used in conjunction with an FTP or SFTP streaming inbound channel adapter, or an FTP or SFTP outbound gateway using the stream option to retrieve a file, the splitter automatically closes the session supporting the stream when the file is completely consumed. See the sections on the FTP and SFTP streaming inbound channel adapters and on the FTP and SFTP outbound gateways for more information about these facilities.
When using Java configuration, an additional constructor is available:
public FileSplitter(boolean iterator, boolean markers, boolean markersJson)
When markersJson is true, the markers will be represented as a JSON string, as long as a suitable JSON processor library, such as Jackson or Boon, is on the classpath.

Starting with version 5.0, the firstLineAsHeader option can be used to specify that the first line of content is a header (such as column names in a CSV file). The argument passed to this property is the header name under which the first line will be carried as a header in the messages emitted for the remaining lines. This line is not included in the sequence header (if applySequence is true) nor in the lineCount of the FileMarker.END. If the file contains only the header line, the file is treated as empty, and therefore only FileMarker s are emitted during splitting (if markers are enabled; otherwise no messages are emitted). By default (if no header name is set), the first line is considered to be data and becomes the payload of the first emitted message.

If you need more complex logic for extracting headers from the file content (not the first line, not the whole content of the line, more than one header, and so on), consider using a Header Enricher upstream of the FileSplitter. Note that lines that have been moved to headers might be filtered downstream from the normal content processing.
==== Configuring with Java Configuration
@Splitter(inputChannel="toSplitter")
@Bean
public MessageHandler fileSplitter() {
    FileSplitter splitter = new FileSplitter(true, true);
    splitter.setApplySequence(true);
    splitter.setOutputChannel(outputChannel);
    return splitter;
}
==== Configuring with the Java DSL
The following Spring Boot application provides an example of configuring the splitter using the Java DSL:
@SpringBootApplication
public class FileSplitterApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(FileSplitterApplication.class)
                .web(false)
                .run(args);
    }

    @Bean
    public IntegrationFlow fileSplitterFlow() {
        return IntegrationFlows
                .from(Files.inboundAdapter(tmpDir.getRoot())
                        .filter(new ChainFileListFilter<File>()
                                .addFilter(new AcceptOnceFileListFilter<>())
                                .addFilter(new ExpressionFileListFilter<>(
                                        new FunctionExpression<File>(f -> "foo.tmp".equals(f.getName()))))))
                .split(Files.splitter()
                        .markers()
                        .charset(StandardCharsets.US_ASCII)
                        .firstLineAsHeader("fileHeader")
                        .applySequence(true))
                .channel(c -> c.queue("fileSplittingResultChannel"))
                .get();
    }

}
== FTP/FTPS Adapters

Spring Integration provides support for file transfer operations via FTP and FTPS.
The File Transfer Protocol (FTP) is a simple network protocol which allows you to transfer files between two computers on the Internet.
There are two actors when it comes to FTP communication: client and server. To transfer files with FTP/FTPS, you use a client which initiates a connection to a remote computer that is running an FTP server. After the connection is established, the client can choose to send and/or receive copies of files.
Spring Integration supports sending and receiving files over FTP/FTPS by providing three client side endpoints: Inbound Channel Adapter, Outbound Channel Adapter, and Outbound Gateway. It also provides convenient namespace-based configuration options for defining these client components.
To use the FTP namespace, add the following to the header of your XML file:
xmlns:int-ftp="http://www.springframework.org/schema/integration/ftp"
xsi:schemaLocation="http://www.springframework.org/schema/integration/ftp
    http://www.springframework.org/schema/integration/ftp/spring-integration-ftp.xsd"
=== FTP Session Factory

==== Default Factories
Important: Starting with version 3.0, sessions are no longer cached by default. See the section on FTP session caching.
Before configuring FTP adapters you must configure an FTP Session Factory.
You can configure the FTP session factory with a regular bean definition, where the implementation class is org.springframework.integration.ftp.session.DefaultFtpSessionFactory. Below is a basic configuration:
<bean id="ftpClientFactory"
    class="org.springframework.integration.ftp.session.DefaultFtpSessionFactory">
    <property name="host" value="localhost"/>
    <property name="port" value="21"/>
    <property name="username" value="kermit"/>
    <property name="password" value="frog"/>
    <property name="clientMode" value="0"/>
    <property name="fileType" value="2"/>
    <property name="bufferSize" value="100000"/>
</bean>
For FTPS connections, all you need to do is use org.springframework.integration.ftp.session.DefaultFtpsSessionFactory instead. Below is a complete configuration sample:
<bean id="ftpClientFactory"
    class="org.springframework.integration.ftp.session.DefaultFtpsSessionFactory">
    <property name="host" value="localhost"/>
    <property name="port" value="21"/>
    <property name="username" value="oleg"/>
    <property name="password" value="password"/>
    <property name="clientMode" value="1"/>
    <property name="fileType" value="2"/>
    <property name="useClientMode" value="true"/>
    <property name="cipherSuites" value="a,b.c"/>
    <property name="keyManager" ref="keyManager"/>
    <property name="protocol" value="SSL"/>
    <property name="trustManager" ref="trustManager"/>
    <property name="prot" value="P"/>
    <property name="needClientAuth" value="true"/>
    <property name="authValue" value="oleg"/>
    <property name="sessionCreation" value="true"/>
    <property name="protocols" value="SSL, TLS"/>
    <property name="implicit" value="true"/>
</bean>
Every time an adapter requests a session object from its SessionFactory, the session is returned from a session pool maintained by a caching wrapper around the factory. A session in the session pool might go stale (if it has been disconnected by the server due to inactivity), so the SessionFactory performs validation to make sure that it never returns a stale session to the adapter. If a stale session is encountered, it is removed from the pool and a new one is created.
Note: If you experience connectivity problems and would like to trace session creation, as well as see which sessions are polled, you may enable it by setting the logger to TRACE level (e.g., log4j.category.org.springframework.integration.file=TRACE).
Now all you need to do is inject these session factories into your adapters. Obviously the protocol (FTP or FTPS) that an adapter will use depends on the type of session factory that has been injected into the adapter.
Note: A more practical way to provide values for FTP/FTPS session factories is to use Spring's property placeholder support (see: http://docs.spring.io/spring/docs/current/spring-framework-reference/html/beans.html#beans-factory-placeholderconfigurer).
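For example (the property keys below are illustrative), a session factory could be externalized like this sketch:

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.ftp.session.DefaultFtpSessionFactory;

@Configuration
public class FtpConfig {

    // Hypothetical property keys; values are resolved from a property
    // source via Spring's placeholder support.
    @Bean
    public DefaultFtpSessionFactory ftpSessionFactory(
            @Value("${ftp.host}") String host,
            @Value("${ftp.port}") int port,
            @Value("${ftp.username}") String username,
            @Value("${ftp.password}") String password) {

        DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
        sf.setHost(host);
        sf.setPort(port);
        sf.setUsername(username);
        sf.setPassword(password);
        return sf;
    }

}
```

This keeps credentials out of the configuration class and lets each environment supply its own values.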
Advanced Configuration
DefaultFtpSessionFactory provides an abstraction over the underlying client API which, since Spring Integration 2.0, is Apache Commons Net. This spares you from the low-level configuration details of the org.apache.commons.net.ftp.FTPClient. Several common properties are exposed on the session factory (since version 4.0, these include connectTimeout, defaultTimeout, and dataTimeout). However, there are times when access to lower-level FTPClient configuration is necessary to achieve more advanced configuration (e.g., setting the port range for active mode). For that purpose, AbstractFtpSessionFactory (the base class for all FTP session factories) exposes hooks, in the form of the two post-processing methods below.
/**
 * Will handle additional initialization after the client.connect() method
 * was invoked, but before any action on the client has been taken.
 */
protected void postProcessClientAfterConnect(T t) throws IOException {
    // NOOP
}

/**
 * Will handle additional initialization before the client.connect() method
 * is invoked.
 */
protected void postProcessClientBeforeConnect(T client) throws IOException {
    // NOOP
}
As you can see, there is no default implementation for these two methods. However, by extending DefaultFtpSessionFactory, you can override these methods to provide more advanced configuration of the FTPClient. For example:
public class AdvancedFtpSessionFactory extends DefaultFtpSessionFactory {

    protected void postProcessClientBeforeConnect(FTPClient ftpClient) throws IOException {
        ftpClient.setActivePortRange(4000, 5000);
    }

}
==== FTPS and Shared SSLSession
When using FTP over SSL/TLS, some servers require the same SSLSession to be used on the control and data connections; this is to prevent "stealing" of data connections. Currently, the Apache FTPSClient does not support this feature; see NET-408. The following solution, courtesy of Stack Overflow, uses reflection on the sun.security.ssl.SSLSessionContextImpl, so it may not work on other JVMs. The Stack Overflow answer was submitted in 2015, and the solution has been tested by the Spring Integration team on JDK 1.8.0_112.
@Bean
public DefaultFtpsSessionFactory sf() {
    DefaultFtpsSessionFactory sf = new DefaultFtpsSessionFactory() {

        @Override
        protected FTPSClient createClientInstance() {
            return new SharedSSLFTPSClient();
        }

    };
    sf.setHost("...");
    sf.setPort(21);
    sf.setUsername("...");
    sf.setPassword("...");
    sf.setNeedClientAuth(true);
    return sf;
}

private static final class SharedSSLFTPSClient extends FTPSClient {

    @Override
    protected void _prepareDataSocket_(final Socket socket) throws IOException {
        if (socket instanceof SSLSocket) {
            // Control socket is SSL
            final SSLSession session = ((SSLSocket) _socket_).getSession();
            final SSLSessionContext context = session.getSessionContext();
            context.setSessionCacheSize(0); // you might want to limit the cache
            try {
                final Field sessionHostPortCache = context.getClass()
                        .getDeclaredField("sessionHostPortCache");
                sessionHostPortCache.setAccessible(true);
                final Object cache = sessionHostPortCache.get(context);
                final Method method = cache.getClass().getDeclaredMethod("put", Object.class,
                        Object.class);
                method.setAccessible(true);
                String key = String.format("%s:%s", socket.getInetAddress().getHostName(),
                        String.valueOf(socket.getPort())).toLowerCase(Locale.ROOT);
                method.invoke(cache, key, session);
                key = String.format("%s:%s", socket.getInetAddress().getHostAddress(),
                        String.valueOf(socket.getPort())).toLowerCase(Locale.ROOT);
                method.invoke(cache, key, session);
            }
            catch (NoSuchFieldException e) {
                // Not running in expected JRE
                logger.warn("No field sessionHostPortCache in SSLSessionContext", e);
            }
            catch (Exception e) {
                // Not running in expected JRE
                logger.warn(e.getMessage());
            }
        }
    }

}
=== Delegating Session Factory
Version 4.2 introduced the DelegatingSessionFactory, which allows the selection of the actual session factory at runtime. Prior to invoking the FTP endpoint, call setThreadKey() on the factory to associate a key with the current thread. That key is then used to look up the actual session factory to be used. The key can be cleared by calling clearThreadKey() after use. Convenience methods have been added so this can easily be done from a message flow:
<bean id="dsf" class="org.springframework.integration.file.remote.session.DelegatingSessionFactory">
    <constructor-arg>
        <bean class="o.s.i.file.remote.session.DefaultSessionFactoryLocator">
            <!-- delegate factories here -->
        </bean>
    </constructor-arg>
</bean>

<int:service-activator input-channel="in" output-channel="c1"
        expression="@dsf.setThreadKey(#root, headers['factoryToUse'])" />

<int-ftp:outbound-gateway request-channel="c1" reply-channel="c2" ... />

<int:service-activator input-channel="c2" output-channel="out"
        expression="@dsf.clearThreadKey(#root)" />
Important: When using session caching (see the section on session caching), each of the delegates should be cached; you cannot cache the DelegatingSessionFactory itself.
Starting with version 5.0.7, the DelegatingSessionFactory can be used in conjunction with a RotatingServerAdvice to poll multiple servers; see the section on polling multiple servers and directories.
=== FTP Inbound Channel Adapter
The FTP Inbound Channel Adapter is a special listener that connects to the FTP server and listens for remote directory events (e.g., a new file being created), at which point it initiates a file transfer:
<int-ftp:inbound-channel-adapter id="ftpInbound"
    channel="ftpChannel"
    session-factory="ftpSessionFactory"
    auto-create-local-directory="true"
    delete-remote-files="true"
    filename-pattern="*.txt"
    remote-directory="some/remote/path"
    remote-file-separator="/"
    preserve-timestamp="true"
    local-filename-generator-expression="#this.toUpperCase() + '.a'"
    scanner="myDirScanner"
    local-filter="myFilter"
    temporary-file-suffix=".writing"
    max-fetch-size="-1"
    local-directory=".">
    <int:poller fixed-rate="1000"/>
</int-ftp:inbound-channel-adapter>
As you can see from the configuration above, you can configure an FTP Inbound Channel Adapter via the inbound-channel-adapter element, while also providing values for various attributes such as local-directory, filename-pattern (which is based on simple pattern matching, not regular expressions), and, of course, the reference to a session-factory.
By default, the transferred file will carry the same name as the original file. If you want to override this behavior, you can set the local-filename-generator-expression attribute, which allows you to provide a SpEL expression to generate the name of the local file. Unlike outbound gateways and adapters, where the root object of the SpEL evaluation context is a Message, this inbound adapter does not yet have the Message at the time of evaluation, since that is what it ultimately generates with the transferred file as its payload. So the root object of the SpEL evaluation context is the original name of the remote file (a String). The inbound channel adapter first retrieves the file to a local directory and then emits each file according to the poller configuration.
Starting with version 5.0, you can limit the number of files fetched from the FTP server when new file retrievals are needed. This can be beneficial when the target files are very large and/or when running in a clustered system with a persistent file list filter, discussed below. Use max-fetch-size for this purpose; a negative value (the default) means no limit, and all matching files will be retrieved. See the section on controlling remote file fetching for more information.
Since version 5.0, you can also provide a custom DirectoryScanner implementation to the inbound-channel-adapter via the scanner attribute.
Starting with Spring Integration 3.0, you can specify the preserve-timestamp attribute (default false); when true, the local file's modified timestamp is set to the value retrieved from the server; otherwise it is set to the current time.
Starting with version 4.2, you can specify remote-directory-expression instead of remote-directory, allowing you to dynamically determine the directory on each poll, e.g., remote-directory-expression="@myBean.determineRemoteDir()". Starting with version 4.3, the remote-directory/remote-directory-expression attributes can be omitted; they then default to null. In this case, according to the FTP protocol, the client working directory is used as the default remote directory.
Sometimes file filtering based on the simple pattern specified via the filename-pattern attribute might not be sufficient. If this is the case, you can use the filename-regex attribute to specify a regular expression (e.g., filename-regex=".*\.test$"). And, if you need complete control, you can use the filter attribute and provide a reference to any custom implementation of org.springframework.integration.file.filters.FileListFilter, a strategy interface for filtering a list of files. This filter determines which remote files are retrieved. You can also combine a pattern-based filter with other filters, such as an AcceptOnceFileListFilter to avoid synchronizing files that have previously been fetched, by using a CompositeFileListFilter.
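A minimal Java sketch of such a composite remote filter (the *.txt pattern and the "ftp-" metadata-store key prefix are arbitrary choices for illustration):

```java
import org.apache.commons.net.ftp.FTPFile;
import org.springframework.context.annotation.Bean;
import org.springframework.integration.file.filters.CompositeFileListFilter;
import org.springframework.integration.ftp.filters.FtpPersistentAcceptOnceFileListFilter;
import org.springframework.integration.ftp.filters.FtpSimplePatternFileListFilter;
import org.springframework.integration.metadata.SimpleMetadataStore;

public class FilterConfig {

    @Bean
    public CompositeFileListFilter<FTPFile> remoteFileFilter() {
        CompositeFileListFilter<FTPFile> filter = new CompositeFileListFilter<>();
        // Only *.txt files pass the pattern filter...
        filter.addFilter(new FtpSimplePatternFileListFilter("*.txt"));
        // ...and only files not already fetched pass the accept-once filter
        // (state kept in a metadata store; in-memory here).
        filter.addFilter(new FtpPersistentAcceptOnceFileListFilter(new SimpleMetadataStore(), "ftp-"));
        return filter;
    }

}
```

The resulting bean can be referenced from the adapter's filter attribute.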
The AcceptOnceFileListFilter stores its state in memory. If you wish the state to survive a system restart, consider using the FtpPersistentAcceptOnceFileListFilter instead. This filter stores the accepted file names in an instance of the MetadataStore strategy (see the section on the metadata store), and it matches on the file name and remote modified time. Since version 4.0, this filter requires a ConcurrentMetadataStore. When used with a shared data store (such as Redis with the RedisMetadataStore), this allows filter keys to be shared across multiple application or server instances.
Starting with version 5.0, the FtpPersistentAcceptOnceFileListFilter with an in-memory SimpleMetadataStore is applied by default for the FtpInboundFileSynchronizer. This filter is also applied together with the regex or pattern option in the XML configuration, as well as via the FtpInboundChannelAdapterSpec in the Java DSL. Any other use cases can be achieved via a CompositeFileListFilter (or ChainFileListFilter).
The above discussion refers to filtering the files before retrieving them. Once the files have been retrieved, an additional filter is applied to the files on the file system. By default, this is an AcceptOnceFileListFilter which, as discussed, retains state in memory and does not consider the file's modified time. Unless your application removes files after processing, the adapter will re-process the files on disk by default after an application restart. Also, if you configure the filter to use a FtpPersistentAcceptOnceFileListFilter, and the remote file timestamp changes (causing it to be re-fetched), the default local filter will not allow this new file to be processed. Use the local-filter attribute to configure the behavior of the local file system filter.
Starting with version 4.3.8, a FileSystemPersistentAcceptOnceFileListFilter is configured by default. This filter stores the accepted file names and modified timestamps in an instance of the MetadataStore strategy (see the section on the metadata store) and detects changes to the local file modified time. The default MetadataStore is a SimpleMetadataStore, which stores state in memory. Since version 4.1.5, these filters have a new property, flushOnUpdate, which causes them to flush the metadata store on every update (if the store implements Flushable).
Important: Further, if you use a distributed MetadataStore (such as one backed by Redis), the filter state can be shared across multiple application instances.
The actual local filter is a CompositeFileListFilter containing the supplied filter and a pattern filter that prevents processing files that are in the process of being downloaded (based on the temporary-file-suffix); files are downloaded with this suffix (default: .writing) and renamed to their final name when the transfer is complete, making them visible to the filter.
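For example, a local filter that detects timestamp changes could be configured like this sketch (the "local-" key prefix is arbitrary; swap the in-memory store for a persistent ConcurrentMetadataStore implementation if state must survive restarts):

```java
import java.io.File;

import org.springframework.context.annotation.Bean;
import org.springframework.integration.file.filters.FileListFilter;
import org.springframework.integration.file.filters.FileSystemPersistentAcceptOnceFileListFilter;
import org.springframework.integration.metadata.SimpleMetadataStore;

public class LocalFilterConfig {

    @Bean
    public FileListFilter<File> localFilter() {
        // Tracks file name plus modified timestamp, so a re-fetched
        // (changed) local file is processed again.
        return new FileSystemPersistentAcceptOnceFileListFilter(new SimpleMetadataStore(), "local-");
    }

}
```

The bean can then be referenced from the adapter's local-filter attribute.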
The remote-file-separator attribute allows you to configure a file separator character to use if the default / is not applicable for your particular environment. Please refer to the schema for more details on these attributes.
It is also important to understand that the FTP Inbound Channel Adapter is a Polling Consumer; therefore, you must configure a poller (either via a global default or a local sub-element). Once a file has been transferred, a Message with a java.io.File as its payload is generated and sent to the channel identified by the channel attribute.
More on File Filtering and Large Files
Sometimes a file that has just appeared in the monitored (remote) directory is not complete. Typically such a file is written with a temporary extension (e.g., foo.txt.writing) and then renamed after the writing process finishes. In most cases you are only interested in complete files and want to filter out those that are still being written. To handle these scenarios, you can use the filtering support provided by the filename-pattern, filename-regex, and filter attributes. Here is an example that uses a custom filter implementation:
<int-ftp:inbound-channel-adapter
    channel="ftpChannel"
    session-factory="ftpSessionFactory"
    filter="customFilter"
    local-directory="file:/my_transfers"
    remote-directory="some/remote/path">
    <int:poller fixed-rate="1000"/>
</int-ftp:inbound-channel-adapter>

<bean id="customFilter" class="org.example.CustomFilter"/>
Poller configuration notes for the inbound FTP adapter
The job of the inbound FTP adapter consists of two tasks: 1) communicate with a remote server in order to transfer files from a remote directory to a local directory; 2) for each transferred file, generate a Message with that file as a payload and send it to the channel identified by the channel attribute. That is why they are called channel adapters rather than just adapters: the main job of such an adapter is to generate a Message to be sent to a Message Channel. Essentially, the second task takes precedence, in such a way that, IF your local directory already has one or more files, it will first generate Messages from those, and ONLY when all local files have been processed will it initiate the remote communication to retrieve more files.
Also, when configuring a trigger on the poller, you should pay close attention to the max-messages-per-poll attribute. Its default value is 1 for all SourcePollingChannelAdapter instances (including FTP). This means that, as soon as one file is processed, the adapter waits for the next execution time as determined by your trigger configuration. If you happen to have one or more files sitting in the local-directory, it processes those files before initiating communication with the remote FTP server. And, if max-messages-per-poll is set to 1 (the default), it processes only one file at a time, with intervals as defined by your trigger, essentially working as one-poll === one-file.
For typical file-transfer use cases, you most likely want the opposite behavior: to process all the files you can on each poll and only then wait for the next poll. If that is the case, set max-messages-per-poll to -1. Then, on each poll, the adapter attempts to generate as many Messages as it possibly can. In other words, it processes everything in the local directory, then connects to the remote directory to transfer everything available there to be processed locally; only then is the poll operation considered complete, and the poller waits for the next execution time. You can alternatively set max-messages-per-poll to a positive value indicating the upper limit of Messages to be created from files on each poll. For example, a value of 10 means that on each poll the adapter attempts to process no more than 10 files.
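With Java configuration, the drain-everything behavior could be sketched as a default poller like this (the 5000 ms fixed rate is an arbitrary illustration):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.integration.dsl.Pollers;
import org.springframework.integration.scheduling.PollerMetadata;

public class PollerConfig {

    @Bean(name = PollerMetadata.DEFAULT_POLLER)
    public PollerMetadata defaultPoller() {
        // maxMessagesPerPoll(-1) drains the local directory and then the
        // remote listing on each poll, instead of one file per trigger firing.
        return Pollers.fixedRate(5000)
                .maxMessagesPerPoll(-1)
                .get();
    }

}
```

Any polling adapter without an explicit poller then picks up this default.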
==== Recovering from Failures
It is important to understand the architecture of the adapter. There is a file synchronizer, which fetches the files, and a FileReadingMessageSource, which emits a message for each synchronized file. As discussed above, two filters are involved. The filter attribute (and patterns) refers to the remote (FTP) file list, to avoid fetching files that have already been fetched. The local-filter is used by the FileReadingMessageSource to determine which files are to be sent as messages.
The synchronizer lists the remote files and consults its filter; the files are then transferred. If an IO error occurs during file transfer, any files that have already been added to the filter are removed, so they are eligible to be re-fetched on the next poll. This only applies if the filter implements ReversibleFileListFilter (such as the AcceptOnceFileListFilter).
If, after synchronizing the files, an error occurs on the downstream flow processing a file, there is no automatic rollback of the filter, so the failed file will not be reprocessed by default. If you wish to reprocess such files after a failure, you can use configuration similar to the following to facilitate the removal of the failed file from the filter. This works for any ResettableFileListFilter.
<int-ftp:inbound-channel-adapter id="ftpAdapter"
        session-factory="ftpSessionFactory"
        channel="requestChannel"
        remote-directory-expression="'/sftpSource'"
        local-directory="file:myLocalDir"
        auto-create-local-directory="true"
        filename-pattern="*.txt">
    <int:poller fixed-rate="1000">
        <int:transactional synchronization-factory="syncFactory" />
    </int:poller>
</int-ftp:inbound-channel-adapter>

<bean id="acceptOnceFilter"
    class="org.springframework.integration.file.filters.AcceptOnceFileListFilter" />

<int:transaction-synchronization-factory id="syncFactory">
    <int:after-rollback expression="payload.delete()" />
</int:transaction-synchronization-factory>

<bean id="transactionManager"
    class="org.springframework.integration.transaction.PseudoTransactionManager" />
Starting with version 5.0, the inbound channel adapter can build sub-directories locally, according to the generated local file name; that can be a remote sub-path as well. To be able to read a local directory recursively for modifications, in support of this hierarchy, the internal FileReadingMessageSource can now be supplied with a RecursiveDirectoryScanner based on the Files.walk() algorithm. See AbstractInboundFileSynchronizingMessageSource.setScanner() for more information. Also, the AbstractInboundFileSynchronizingMessageSource can now be switched to the WatchService-based DirectoryScanner via the setUseWatchService() option. It is also configured for all the WatchEventType s, so it reacts to any modifications in the local directory. The reprocessing sample above is based on the built-in functionality of the FileReadingMessageSource.WatchServiceDirectoryScanner, which performs ResettableFileListFilter.remove() when a file is deleted (StandardWatchEventKinds.ENTRY_DELETE) from the local directory. See the section on the WatchServiceDirectoryScanner for more information.
==== Configuring with Java Configuration
The following Spring Boot application provides an example of configuring the inbound adapter using Java configuration:
@SpringBootApplication
public class FtpJavaApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(FtpJavaApplication.class)
                .web(false)
                .run(args);
    }

    @Bean
    public SessionFactory<FTPFile> ftpSessionFactory() {
        DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
        sf.setHost("localhost");
        sf.setPort(port);
        sf.setUsername("foo");
        sf.setPassword("foo");
        return new CachingSessionFactory<FTPFile>(sf);
    }

    @Bean
    public FtpInboundFileSynchronizer ftpInboundFileSynchronizer() {
        FtpInboundFileSynchronizer fileSynchronizer = new FtpInboundFileSynchronizer(ftpSessionFactory());
        fileSynchronizer.setDeleteRemoteFiles(false);
        fileSynchronizer.setRemoteDirectory("foo");
        fileSynchronizer.setFilter(new FtpSimplePatternFileListFilter("*.xml"));
        return fileSynchronizer;
    }

    @Bean
    @InboundChannelAdapter(channel = "ftpChannel", poller = @Poller(fixedDelay = "5000"))
    public MessageSource<File> ftpMessageSource() {
        FtpInboundFileSynchronizingMessageSource source =
                new FtpInboundFileSynchronizingMessageSource(ftpInboundFileSynchronizer());
        source.setLocalDirectory(new File("ftp-inbound"));
        source.setAutoCreateLocalDirectory(true);
        source.setLocalFilter(new AcceptOnceFileListFilter<File>());
        source.setMaxFetchSize(1);
        return source;
    }

    @Bean
    @ServiceActivator(inputChannel = "ftpChannel")
    public MessageHandler handler() {
        return new MessageHandler() {

            @Override
            public void handleMessage(Message<?> message) throws MessagingException {
                System.out.println(message.getPayload());
            }

        };
    }

}
==== Configuring with the Java DSL
The following Spring Boot application provides an example of configuring the inbound adapter using the Java DSL:
@SpringBootApplication
public class FtpJavaApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(FtpJavaApplication.class)
                .web(false)
                .run(args);
    }

    @Bean
    public IntegrationFlow ftpInboundFlow() {
        return IntegrationFlows
                .from(s -> s.ftp(this.ftpSessionFactory)
                                .preserveTimestamp(true)
                                .remoteDirectory("foo")
                                .regexFilter(".*\\.txt$")
                                .localFilename(f -> f.toUpperCase() + ".a")
                                .localDirectory(new File("d:\\ftp_files")),
                        e -> e.id("ftpInboundAdapter")
                                .autoStartup(true)
                                .poller(Pollers.fixedDelay(5000)))
                .handle(m -> System.out.println(m.getPayload()))
                .get();
    }

}
==== Dealing With Incomplete Data
See the section on dealing with incomplete data in the file support chapter. The FtpSystemMarkerFilePresentFileListFilter is provided to filter remote files that don't have a corresponding marker file on the remote system. See the javadocs for configuration information.
=== FTP Streaming Inbound Channel Adapter
The streaming inbound channel adapter was introduced in version 4.3. This adapter produces messages with payloads of type InputStream, allowing files to be fetched without writing to the local file system. Since the session remains open, the consuming application is responsible for closing the session when the file has been consumed. The session is provided in the closeableResource header (IntegrationMessageHeaderAccessor.CLOSEABLE_RESOURCE). Standard framework components, such as the FileSplitter and StreamTransformer, will automatically close the session. See the sections on the FileSplitter and the Stream Transformer for more information about these components.
<int-ftp:inbound-streaming-channel-adapter id="ftpInbound"
        channel="ftpChannel"
        session-factory="sessionFactory"
        filename-pattern="*.txt"
        filename-regex=".*\.txt"
        filter="filter"
        filter-expression="@myFilterBean.check(#root)"
        remote-file-separator="/"
        comparator="comparator"
        max-fetch-size="1"
        remote-directory-expression="'foo/bar'">
    <int:poller fixed-rate="1000" />
</int-ftp:inbound-streaming-channel-adapter>
Only one of filename-pattern, filename-regex, filter, or filter-expression is allowed.
IMPORTANT: Starting with version 5.0, by default, the …
Use the max-fetch-size attribute to limit the number of files fetched on each poll when a fetch is necessary. Set it to 1 and use a persistent filter when running in a clustered environment. See the section called “Remote Persistent File List Filters” for more information.
The adapter puts the remote directory and file name in the FileHeaders.REMOTE_DIRECTORY and FileHeaders.REMOTE_FILE headers, respectively.

Starting with version 5.0, additional remote file information, represented in JSON by default, is provided in the FileHeaders.REMOTE_FILE_INFO header. If you set the fileInfoJson property on the FtpStreamingMessageSource to false, the header contains an FtpFileInfo object. The FTPFile object provided by the underlying Apache Commons Net library can be accessed by using the FtpFileInfo.getFileInfo() method. The fileInfoJson property is not available when using XML configuration, but you can set it by injecting the FtpStreamingMessageSource into one of your configuration classes.
==== Configuring with Java Configuration
The following Spring Boot application provides an example of configuring the inbound adapter using Java configuration:
@SpringBootApplication
public class FtpJavaApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(FtpJavaApplication.class)
            .web(false)
            .run(args);
    }

    @Bean
    @InboundChannelAdapter(channel = "stream")
    public MessageSource<InputStream> ftpMessageSource() {
        FtpStreamingMessageSource messageSource = new FtpStreamingMessageSource(template());
        messageSource.setRemoteDirectory("ftpSource/");
        messageSource.setFilter(new AcceptAllFileListFilter<>());
        messageSource.setMaxFetchSize(1);
        return messageSource;
    }

    @Bean
    @Transformer(inputChannel = "stream", outputChannel = "data")
    public org.springframework.integration.transformer.Transformer transformer() {
        return new StreamTransformer("UTF-8");
    }

    @Bean
    public FtpRemoteFileTemplate template() {
        return new FtpRemoteFileTemplate(ftpSessionFactory());
    }

    @ServiceActivator(inputChannel = "data", adviceChain = "after")
    @Bean
    public MessageHandler handle() {
        return System.out::println;
    }

    @Bean
    public ExpressionEvaluatingRequestHandlerAdvice after() {
        ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
        advice.setOnSuccessExpression(
            "@template.remove(headers['file_remoteDirectory'] + headers['file_remoteFile'])");
        advice.setPropagateEvaluationFailures(true);
        return advice;
    }

}
Notice that, in this example, the message handler downstream of the transformer has an advice that removes the remote file after processing.
=== Inbound Channel Adapters: Polling Multiple Servers and Directories
Starting with version 5.0.7, the RotatingServerAdvice is available; when configured as a poller advice, the inbound adapters can poll multiple servers and directories. Configure the advice and add it to the poller’s advice chain as normal. A DelegatingSessionFactory is used to select the server; see the section called “Delegating Session Factory” for more information. The advice configuration consists of a list of RotatingServerAdvice.KeyDirectory objects.
Example.
@Bean
public RotatingServerAdvice advice() {
    List<KeyDirectory> keyDirectories = new ArrayList<>();
    keyDirectories.add(new KeyDirectory("one", "foo"));
    keyDirectories.add(new KeyDirectory("one", "bar"));
    keyDirectories.add(new KeyDirectory("two", "baz"));
    keyDirectories.add(new KeyDirectory("two", "qux"));
    keyDirectories.add(new KeyDirectory("three", "fiz"));
    keyDirectories.add(new KeyDirectory("three", "buz"));
    return new RotatingServerAdvice(delegatingSf(), keyDirectories);
}
This advice polls directory foo on server one until no new files exist, then moves to directory bar, then to directory baz on server two, and so on.

This default behavior can be modified with the fair constructor argument:
fair.
@Bean
public RotatingServerAdvice advice() {
    ...
    return new RotatingServerAdvice(delegatingSf(), keyDirectories, true);
}
In this case, the advice will move to the next server/directory regardless of whether the previous poll returned a file.
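The rotation order described above can be sketched in plain Java. The following is an illustration of the rotation behavior only, not the actual RotatingServerAdvice implementation; the class and method names here are hypothetical:

```java
import java.util.List;

// Hypothetical sketch of the advice's rotation order: each entry pairs a
// session-factory key with a remote directory. Not Spring code.
public class RotationSketch {

    public record KeyDir(String key, String directory) {}

    private final List<KeyDir> keyDirectories;
    private final boolean fair;
    private int index;

    public RotationSketch(List<KeyDir> keyDirectories, boolean fair) {
        this.keyDirectories = keyDirectories;
        this.fair = fair;
    }

    // The server/directory the next poll uses.
    public KeyDir current() {
        return keyDirectories.get(index);
    }

    // Called after each poll. In the default (non-fair) mode, the advice stays
    // on the current server/directory while polls keep producing files; in
    // fair mode, it always advances to the next entry.
    public void afterReceive(boolean messageReceived) {
        if (fair || !messageReceived) {
            index = (index + 1) % keyDirectories.size();
        }
    }

}
```

In the default mode, a stream of successful polls keeps the adapter on the same entry; only an empty poll moves it forward, which matches "until no new files exist" above.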
Alternatively, you can provide your own RotatingServerAdvice.RotationPolicy
to reconfigure the message source as needed:
policy.
public interface RotationPolicy {

    void beforeReceive(MessageSource<?> source);

    void afterReceive(boolean messageReceived, MessageSource<?> source);

}
and provide it to the advice:
custom.
@Bean
public RotatingServerAdvice advice() {
    return new RotatingServerAdvice(myRotationPolicy());
}
The local-filename-generator-expression attribute (localFilenameGeneratorExpression on the synchronizer) can now contain the #remoteDirectory variable. This allows files retrieved from different directories to be downloaded to similar directories locally:
@Bean
public IntegrationFlow flow() {
    return IntegrationFlows.from(Ftp.inboundAdapter(sf())
                .filter(new FtpPersistentAcceptOnceFileListFilter(new SimpleMetadataStore(), "rotate"))
                .localDirectory(new File(tmpDir))
                .localFilenameExpression("#remoteDirectory + T(java.io.File).separator + #root")
                .remoteDirectory("."),
            e -> e.poller(Pollers.fixedDelay(1).advice(advice())))
        .channel(MessageChannels.queue("files"))
        .get();
}
IMPORTANT: Do not configure a …
=== Inbound Channel Adapters: Controlling Remote File Fetching
There are two properties that you should consider when configuring inbound channel adapters. max-messages-per-poll, as with all pollers, can be used to limit the number of messages emitted on each poll (if more than the configured value are ready). max-fetch-size (since version 5.0) can limit the number of files retrieved from the remote server at a time.
The following scenarios assume the starting state is an empty local directory.
* With max-messages-per-poll=2 and max-fetch-size=1, the adapter fetches one file, emits it, fetches the next file, and emits it; it then sleeps until the next poll.
* With max-messages-per-poll=2 and max-fetch-size=2, the adapter fetches both files and then emits each one.
* With max-messages-per-poll=2 and max-fetch-size=4, the adapter fetches up to four files (if available) and emits the first two (if there are at least two); the next two files are emitted on the next poll.
* With max-messages-per-poll=2 and max-fetch-size not specified, the adapter fetches all remote files and emits the first two (if there are at least two); the subsequent files are emitted on subsequent polls (two at a time); when all are consumed, the remote fetch is attempted again, to pick up any new files.
IMPORTANT: When deploying multiple instances of an application, a small …
Another use for max-fetch-size is to stop fetching remote files while continuing to process files that have already been fetched. Setting the maxFetchSize property on the MessageSource (programmatically, via JMX, or via a control bus) effectively stops the adapter from fetching more files but lets the poller continue to emit messages for files that have previously been fetched. If the poller is active when the property is changed, the change takes effect on the next poll.
=== FTP Outbound Channel Adapter
The FTP Outbound Channel Adapter relies upon a MessageHandler implementation that connects to the FTP server and initiates an FTP transfer for every file it receives in the payload of incoming messages. It also supports several representations of a file, so you are not limited to java.io.File payloads. The FTP Outbound Channel Adapter supports the following payloads: 1) java.io.File: the actual file object; 2) byte[]: a byte array that represents the file contents; and 3) java.lang.String: text that represents the file contents.
<int-ftp:outbound-channel-adapter id="ftpOutbound"
    channel="ftpChannel"
    session-factory="ftpSessionFactory"
    charset="UTF-8"
    remote-file-separator="/"
    auto-create-directory="true"
    remote-directory-expression="headers['remote_dir']"
    temporary-remote-directory-expression="headers['temp_remote_dir']"
    filename-generator="fileNameGenerator"
    use-temporary-filename="true"
    mode="REPLACE"/>
As the preceding configuration shows, you can configure an FTP Outbound Channel Adapter through the outbound-channel-adapter element, providing values for various attributes, such as filename-generator (an implementation of the org.springframework.integration.file.FileNameGenerator strategy interface), a reference to a session-factory, and other attributes. You can also see some examples of *expression attributes that let you use SpEL to configure settings such as remote-directory-expression, temporary-remote-directory-expression, and remote-filename-generator-expression (a SpEL alternative to the filename-generator shown above). As with any component that allows the use of SpEL, access to the payload and the message headers is available through the payload and headers variables.
Please refer to the schema for more details on the available attributes.
NOTE: By default, Spring Integration will use …
IMPORTANT: Defining certain values (such as remote-directory) might be platform- or FTP-server-dependent. For example, as reported at http://forum.springsource.org/showthread.php?p=333478&posted=1#post333478, on some platforms you must add a slash to the end of the directory definition (for example, remote-directory="/foo/bar/" instead of remote-directory="/foo/bar").
Starting with version 4.1, you can specify the mode when transferring the file. By default, an existing file is overwritten. The modes are defined by the FileExistsMode enum, which has the values REPLACE (default), APPEND, IGNORE, and FAIL. With IGNORE and FAIL, the file is not transferred; FAIL causes an exception to be thrown, whereas IGNORE silently ignores the transfer (although a DEBUG log entry is produced).
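The four modes can be illustrated with a plain-Java sketch applied to a local file. The adapter applies the same rules to the remote file; this is not the framework's code, and the class and method names are hypothetical:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Illustration of the REPLACE / APPEND / IGNORE / FAIL behaviors described above.
public class FileExistsModes {

    public enum Mode { REPLACE, APPEND, IGNORE, FAIL }

    public static void transfer(Path target, byte[] content, Mode mode) {
        try {
            if (Files.exists(target)) {
                switch (mode) {
                    case FAIL -> throw new IllegalStateException("File already exists: " + target);
                    case IGNORE -> { return; } // silently skip (a DEBUG log entry in the real adapter)
                    case APPEND -> { Files.write(target, content, StandardOpenOption.APPEND); return; }
                    case REPLACE -> { } // fall through and overwrite below
                }
            }
            Files.write(target, content); // creates the file, or truncates and rewrites it
        }
        catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

}
```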
==== Avoiding Partially Written Files
A common problem when dealing with file transfers is the possibility of processing a partial file: a file can appear in the file system before its transfer is actually complete. To deal with this issue, Spring Integration FTP adapters use a common algorithm: files are transferred under a temporary name and then renamed once they are fully transferred. By default, every file that is in the process of being transferred appears in the file system with an additional suffix, which defaults to .writing; this can be changed with the temporary-file-suffix attribute.
However, there may be situations where you do not want to use this technique (for example, if the server does not permit renaming files). For situations like this, you can disable the feature by setting use-temporary-file-name to false (the default is true). When this attribute is false, the file is written with its final name, and the consuming application needs some other mechanism to detect that the file is completely uploaded before accessing it.
==== Configuring with Java Configuration
The following Spring Boot application provides an example of configuring the Outbound Adapter using Java configuration:
@SpringBootApplication
@IntegrationComponentScan
public class FtpJavaApplication {

    public static void main(String[] args) {
        ConfigurableApplicationContext context =
            new SpringApplicationBuilder(FtpJavaApplication.class)
                .web(false)
                .run(args);
        MyGateway gateway = context.getBean(MyGateway.class);
        gateway.sendToFtp(new File("/foo/bar.txt"));
    }

    @Bean
    public SessionFactory<FTPFile> ftpSessionFactory() {
        DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
        sf.setHost("localhost");
        sf.setPort(port);
        sf.setUsername("foo");
        sf.setPassword("foo");
        return new CachingSessionFactory<FTPFile>(sf);
    }

    @Bean
    @ServiceActivator(inputChannel = "ftpChannel")
    public MessageHandler handler() {
        FtpMessageHandler handler = new FtpMessageHandler(ftpSessionFactory());
        handler.setRemoteDirectoryExpressionString("headers['remote-target-dir']");
        handler.setFileNameGenerator(new FileNameGenerator() {

            @Override
            public String generateFileName(Message<?> message) {
                return "handlerContent.test";
            }

        });
        return handler;
    }

    @MessagingGateway
    public interface MyGateway {

        @Gateway(requestChannel = "toFtpChannel")
        void sendToFtp(File file);

    }

}
==== Configuring with the Java DSL
The following Spring Boot application provides an example of configuring the Outbound Adapter using the Java DSL:
@SpringBootApplication
@IntegrationComponentScan
public class FtpJavaApplication {

    public static void main(String[] args) {
        ConfigurableApplicationContext context =
            new SpringApplicationBuilder(FtpJavaApplication.class)
                .web(false)
                .run(args);
        MyGateway gateway = context.getBean(MyGateway.class);
        gateway.sendToFtp(new File("/foo/bar.txt"));
    }

    @Bean
    public SessionFactory<FTPFile> ftpSessionFactory() {
        DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
        sf.setHost("localhost");
        sf.setPort(port);
        sf.setUsername("foo");
        sf.setPassword("foo");
        return new CachingSessionFactory<FTPFile>(sf);
    }

    @Bean
    public IntegrationFlow ftpOutboundFlow() {
        return IntegrationFlows.from("toFtpChannel")
            .handle(Ftp.outboundAdapter(ftpSessionFactory(), FileExistsMode.FAIL)
                .useTemporaryFileName(false)
                .fileNameExpression("headers['" + FileHeaders.FILENAME + "']")
                .remoteDirectory(this.ftpServer.getTargetFtpDirectory().getName())
            ).get();
    }

    @MessagingGateway
    public interface MyGateway {

        @Gateway(requestChannel = "toFtpChannel")
        void sendToFtp(File file);

    }

}
=== FTP Outbound Gateway

The FTP Outbound Gateway provides a limited set of commands to interact with a remote FTP/FTPS server. The supported commands are: ls, nlst, get, mget, put, mput, rm, and mv.
ls

ls lists remote files and supports the following options:

* -1: Retrieve a list of file names. The default is to retrieve a list of FileInfo objects.
* -a: Include all files (including those starting with '.').
* -f: Do not sort the list.
* -dirs: Include directories (they are excluded by default).
* -links: Include symbolic links (they are excluded by default).
* -R: List the remote directory recursively.
In addition, filename filtering is provided, in the same manner as for the inbound-channel-adapter. The message payload resulting from an ls operation is a list of file names or a list of FileInfo objects. These objects provide information such as modified time, permissions, and others.
The remote directory that the ls command acted on is provided in the file_remoteDirectory header. When using the recursive option (-R), the fileName includes any subdirectory elements, representing a relative path to the file (relative to the remote directory). If the -dirs option is included, each recursive directory is also returned as an element in the list. In this case, we recommend not using the -1 option, because you would not be able to distinguish files from directories, which you can do with the FileInfo objects.
Starting with version 4.3, the FtpSession supports null for the list() and listNames() methods, so the expression attribute can be omitted. For Java configuration, two convenience constructors that have no expression argument are provided. A null path for the LS, NLST, PUT, and MPUT commands is treated as the client working directory, according to the FTP protocol. All other commands must be supplied with the expression to evaluate the remote path against the request message. You can set the working directory with the FTPClient.changeWorkingDirectory() method when you extend the DefaultFtpSessionFactory and implement the postProcessClientAfterConnect() callback.
nlst

(Since version 5.0.)

nlst lists remote file names and supports only one option:

* -f: Do not sort the list.

The message payload resulting from an nlst operation is a list of file names.
The remote directory that the nlst command acted on is provided in the file_remoteDirectory header. Unlike the -1 option for the ls command (see above), which uses the LIST command, the nlst command sends an NLST command to the target FTP server. This command is useful when the server does not support LIST (due to security restrictions, for example). The result of an nlst operation is just the names, so the framework cannot determine whether an entity is a directory (to perform filtering or recursive listing, for example).
get

get retrieves a remote file and supports the following options:

* -P: Preserve the timestamp of the remote file.
* -stream: Retrieve the remote file as a stream.
* -D: Delete the remote file after successful transfer. The remote file is not deleted if the FileExistsMode is IGNORE and the local file already exists.

The remote directory is provided in the file_remoteDirectory header, and the filename is provided in the file_remoteFile header. The message payload resulting from a get operation is a File object representing the retrieved file, or an InputStream when the -stream option is provided. The -stream option allows retrieving the file as a stream.
For text files, a common use case is to combine this operation with a File Splitter or a Stream Transformer. When consuming remote files as streams, you are responsible for closing the Session after the stream is consumed. For convenience, the Session is provided in the closeableResource header, and a convenience method is provided on the IntegrationMessageHeaderAccessor:
Closeable closeable = new IntegrationMessageHeaderAccessor(message).getCloseableResource();
if (closeable != null) {
    closeable.close();
}
Framework components such as the File Splitter and Stream Transformer will automatically close the session after the data is transferred.
The following shows an example of consuming a file as a stream:
<int-ftp:outbound-gateway session-factory="ftpSessionFactory"
    request-channel="inboundGetStream"
    command="get"
    command-options="-stream"
    expression="payload"
    remote-directory="ftpTarget"
    reply-channel="stream" />

<int-file:splitter input-channel="stream" output-channel="lines" />
Note: if you consume the input stream in a custom component, you must close the Session. You can either do that in your custom code or route a copy of the message to a service-activator and use SpEL:
<int:service-activator input-channel="closeSession"
    expression="headers['closeableResource'].close()" />
mget

mget retrieves multiple remote files based on a pattern and supports the following options:

* -P: Preserve the timestamps of the remote files.
* -R: Retrieve the entire directory tree recursively.
* -x: Throw an exception if no files match the pattern (otherwise, an empty list is returned).
* -D: Delete each remote file after successful transfer. The remote file is not deleted if the FileExistsMode is IGNORE and the local file already exists.

The message payload resulting from an mget operation is a List<File> object (a List of File objects, each representing a retrieved file).
IMPORTANT: Starting with version 5.0, if the …
The expression used to determine the remote path should produce a result that ends with * (for example, foo/* fetches the complete tree under foo). Starting with version 5.0, a recursive mget, combined with the new FileExistsMode.REPLACE_IF_MODIFIED mode, can be used to periodically synchronize an entire remote directory tree locally. This mode sets the local file's last modified timestamp to the remote file's timestamp, regardless of the -P (preserve timestamp) option.
NOTE (when using recursion, -R): The pattern is ignored, and … If a subdirectory is filtered, no additional traversal of that subdirectory is performed. The … Typically, you would use the …
Starting with version 5.0, the FtpSimplePatternFileListFilter and FtpRegexPatternFileListFilter can be configured to always pass directories by setting the alwaysAcceptDirectories property to true. This allows recursion for a simple pattern, as the following examples show:
<bean id="starDotTxtFilter"
        class="org.springframework.integration.ftp.filters.FtpSimplePatternFileListFilter">
    <constructor-arg value="*.txt" />
    <property name="alwaysAcceptDirectories" value="true" />
</bean>

<bean id="dotStarDotTxtFilter"
        class="org.springframework.integration.ftp.filters.FtpRegexPatternFileListFilter">
    <constructor-arg value="^.*\.txt$" />
    <property name="alwaysAcceptDirectories" value="true" />
</bean>
Provide one of these filters by using the filter property on the gateway. See also the section called “Outbound Gateway Partial Success (mget and mput)”.
put

put sends a file to the remote server. The payload of the message can be a java.io.File, a byte[], or a String. A remote-filename-generator (or expression) is used to name the remote file. Other available attributes include remote-directory, temporary-remote-directory (and their *-expression equivalents), use-temporary-file-name, and auto-create-directory. See the schema documentation for more information. The message payload resulting from a put operation is a String representing the full path of the file on the server after transfer.
mput

mput sends multiple files to the server and supports only one option:

* -R: Recursive. Send all files (possibly filtered) in the directory and its subdirectories.

The message payload must be a java.io.File representing a local directory. The same attributes as for the put command are supported. In addition, files in the local directory can be filtered with one of mput-pattern, mput-regex, mput-filter, or mput-filter-expression. The filter works with recursion, as long as the subdirectories themselves pass the filter. Subdirectories that do not pass the filter are not recursed. The message payload resulting from an mput operation is a List<String> object (a List of remote file paths resulting from the transfer). See also the section called “Outbound Gateway Partial Success (mget and mput)”.
rm

The rm command has no options. The message payload resulting from an rm operation is Boolean.TRUE if the remove was successful or Boolean.FALSE otherwise. The remote directory is provided in the file_remoteDirectory header, and the filename is provided in the file_remoteFile header.
mv

The mv command has no options. The expression attribute defines the "from" path, and the rename-expression attribute defines the "to" path. By default, the rename-expression is headers['file_renameTo']. This expression must not evaluate to null or an empty String. If necessary, any remote directories needed are created. The payload of the result message is Boolean.TRUE. The original remote directory is provided in the file_remoteDirectory header, and the filename is provided in the file_remoteFile header. The new path is in the file_renameTo header.
==== Additional Information
The get and mget commands support the local-filename-generator-expression attribute. It defines a SpEL expression to generate the names of local files during the transfer. The root object of the evaluation context is the request message, but, in addition, the remoteFileName variable is also available, which is particularly useful for mget, for example: local-filename-generator-expression="#remoteFileName.toUpperCase() + headers.foo".

The get and mget commands support the local-directory-expression attribute. It defines a SpEL expression to generate the names of local directories during the transfer. The root object of the evaluation context is the request message, but, in addition, the remoteDirectory variable is also available, which is particularly useful for mget, for example: local-directory-expression="'/tmp/local/' + #remoteDirectory.toUpperCase() + headers.foo". This attribute is mutually exclusive with the local-directory attribute.

For all commands, the path that the command acts on is provided by the expression property of the gateway. For the mget command, the expression might evaluate to *, meaning to retrieve all files, somedirectory/*, and so on.
The following example shows a gateway configured for an ls command:
<int-ftp:outbound-gateway id="gateway1"
    session-factory="ftpSessionFactory"
    request-channel="inbound1"
    command="ls"
    command-options="-1"
    expression="payload"
    reply-channel="toSplitter"/>
The payload of the message sent to the toSplitter channel is a list of String objects, each containing the filename of a file. If command-options were omitted, it would be a list of FileInfo objects. Options are provided space-delimited, for example command-options="-1 -dirs -links".
Starting with version 4.2, the GET, MGET, PUT, and MPUT commands support a FileExistsMode property (mode when using the namespace support). This affects the behavior when the local file exists (GET and MGET) or when the remote file exists (PUT and MPUT). The supported modes are REPLACE, APPEND, FAIL, and IGNORE. For backwards compatibility, the default mode for PUT and MPUT operations is REPLACE, and the default for GET and MGET operations is FAIL.
Starting with version 5.0, the setWorkingDirExpression() (working-dir-expression) option is provided on the FtpOutboundGateway (<int-ftp:outbound-gateway>), enabling the client working directory to be changed at runtime. The expression is evaluated against the request message. The previous working directory is restored after each gateway operation.
==== Configuring with Java Configuration
The following Spring Boot application provides an example of configuring the Outbound Gateway using Java configuration:
@SpringBootApplication
public class FtpJavaApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(FtpJavaApplication.class)
            .web(false)
            .run(args);
    }

    @Bean
    public SessionFactory<FTPFile> ftpSessionFactory() {
        DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
        sf.setHost("localhost");
        sf.setPort(port);
        sf.setUsername("foo");
        sf.setPassword("foo");
        return new CachingSessionFactory<FTPFile>(sf);
    }

    @Bean
    @ServiceActivator(inputChannel = "ftpChannel")
    public MessageHandler handler() {
        FtpOutboundGateway ftpOutboundGateway =
            new FtpOutboundGateway(ftpSessionFactory(), "ls", "'my_remote_dir/'");
        ftpOutboundGateway.setOutputChannelName("lsReplyChannel");
        return ftpOutboundGateway;
    }

}
==== Configuring with the Java DSL
The following Spring Boot application provides an example of configuring the Outbound Gateway using the Java DSL:
@SpringBootApplication
public class FtpJavaApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(FtpJavaApplication.class)
            .web(false)
            .run(args);
    }

    @Bean
    public SessionFactory<FTPFile> ftpSessionFactory() {
        DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
        sf.setHost("localhost");
        sf.setPort(port);
        sf.setUsername("foo");
        sf.setPassword("foo");
        return new CachingSessionFactory<FTPFile>(sf);
    }

    @Bean
    public FtpOutboundGatewaySpec ftpOutboundGateway() {
        return Ftp.outboundGateway(ftpSessionFactory(),
                AbstractRemoteFileOutboundGateway.Command.MGET, "payload")
            .options(AbstractRemoteFileOutboundGateway.Option.RECURSIVE)
            .regexFileNameFilter("(subFtpSource|.*1.txt)")
            .localDirectoryExpression("'localDirectory/' + #remoteDirectory")
            .localFilenameExpression("#remoteFileName.replaceFirst('ftpSource', 'localTarget')");
    }

    @Bean
    public IntegrationFlow ftpMGetFlow(AbstractRemoteFileOutboundGateway<FTPFile> ftpOutboundGateway) {
        return f -> f
            .handle(ftpOutboundGateway)
            .channel(c -> c.queue("remoteFileOutputChannel"));
    }

}
==== Outbound Gateway Partial Success (mget and mput)
When performing operations on multiple files (mget and mput), an exception can occur some time after one or more files have been transferred. In this case (starting with version 4.2), a PartialSuccessException is thrown. As well as the usual MessagingException properties (failedMessage and cause), this exception has two additional properties:

* partialResults: the successful transfer results.
* derivedInput: the list of files generated from the request message (such as local files to transfer for an mput).

These properties let you determine which files were successfully transferred and which were not. In the case of a recursive mput, the PartialSuccessException may have nested PartialSuccessException instances.
Consider the following directory structure:

root/
|- file1.txt
|- subdir/
|  - file2.txt
|  - file3.txt
|- zoo.txt
If the exception occurs on file3.txt, the PartialSuccessException thrown by the gateway has a derivedInput of file1.txt, subdir, and zoo.txt and partialResults of file1.txt. Its cause is another PartialSuccessException with a derivedInput of file2.txt and file3.txt and partialResults of file2.txt.
=== FTP Session Caching

IMPORTANT: Starting with Spring Integration version 3.0, sessions are no longer cached by default; the …
In versions prior to 3.0, the sessions were automatically cached by default. A cache-sessions attribute was available for disabling the automatic caching, but that solution did not provide a way to configure other session-caching attributes. For example, you could not limit the number of sessions created. To support that requirement and other configuration options, a CachingSessionFactory was provided. It provides sessionCacheSize and sessionWaitTimeout properties.
As its name suggests, the sessionCacheSize property controls how many active sessions the factory maintains in its cache (the default is unbounded). If the sessionCacheSize threshold has been reached, any attempt to acquire another session blocks until either one of the cached sessions becomes available or the wait time for a session expires (the default wait time is Integer.MAX_VALUE). The sessionWaitTimeout property configures that value. If you want your sessions to be cached, configure your default session factory as described above and then wrap it in an instance of CachingSessionFactory, where you can provide those additional properties:
<bean id="ftpSessionFactory" class="o.s.i.ftp.session.DefaultFtpSessionFactory">
    <property name="host" value="localhost"/>
</bean>

<bean id="cachingSessionFactory" class="o.s.i.file.remote.session.CachingSessionFactory">
    <constructor-arg ref="ftpSessionFactory"/>
    <constructor-arg value="10"/>
    <property name="sessionWaitTimeout" value="1000"/>
</bean>
The preceding example shows a CachingSessionFactory created with a sessionCacheSize of 10 and a sessionWaitTimeout of one second (its value is in milliseconds).
Starting with Spring Integration version 3.0, the CachingSessionFactory provides a resetCache() method. When it is invoked, all idle sessions are immediately closed, and in-use sessions are closed when they are returned to the cache. New requests for sessions establish new sessions as necessary.
=== RemoteFileTemplate

Starting with Spring Integration version 3.0, a new abstraction is provided over the FtpSession object. The template provides methods to send, retrieve (as an InputStream), remove, and rename files. In addition, an execute method is provided, allowing the caller to execute multiple operations on the session. In all cases, the template takes care of reliably closing the session. For more information, see the Javadoc for RemoteFileTemplate. There is a subclass for FTP: FtpRemoteFileTemplate.

Version 4.1 added additional methods, including getClientInstance(), which provides access to the underlying FTPClient, enabling access to low-level APIs.
Not all FTP servers properly implement the STAT <path> command, in that it can return a positive result for a non-existent path. The NLST command reliably returns the name when the path is a file and it exists. However, this does not support checking that an empty directory exists, since NLST always returns an empty list in that case when the path is a directory. Since the template does not know whether the path represents a directory, it has to perform additional checks when the path does not appear to exist when using NLST. This adds overhead, requiring several requests to the server.
Starting with version 4.1.9, the FtpRemoteFileTemplate provides the FtpRemoteFileTemplate.ExistsMode property, which has the following options:

* STAT: Perform the STAT FTP command (FTPClient.getStatus(path)) to check the path existence. This is the default and requires that your FTP server properly support the STAT command (with a path).
* NLST: Perform the NLST FTP command (FTPClient.listNames(path)). Use this if you are testing for a path that is a full path to a file. It does not work for empty directories.
* NLST_AND_DIRS: Perform the NLST command first and, if it returns no files, fall back to a technique that temporarily switches the working directory by using FTPClient.changeWorkingDirectory(path). See FtpSession.exists() for more information.
Since we know that the FileExistsMode.FAIL case is always only looking for a file (and not a directory), we safely use NLST mode for the FtpMessageHandler and FtpOutboundGateway components. For any other cases, the FtpRemoteFileTemplate can be extended to implement custom logic in the overridden exists() method.
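As a sketch (assuming a sessionFactory bean is already available; the remote path is a placeholder), the mode might be selected like this:

```java
FtpRemoteFileTemplate template = new FtpRemoteFileTemplate(sessionFactory);
// NLST is appropriate when the checked paths are always full paths to files.
template.setExistsMode(FtpRemoteFileTemplate.ExistsMode.NLST);
boolean present = template.exists("/remote/dir/file.txt");
```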
Starting with version 5.0, the new RemoteFileOperations.invoke(OperationsCallback<F, T> action) method is available. This method allows several RemoteFileOperations calls to be called in the scope of the same, thread-bounded, Session. This is useful when you need to perform several high-level operations of the RemoteFileTemplate as one unit of work. For example, AbstractRemoteFileOutboundGateway uses it with the mput command implementation, where we perform a put operation for each file in the provided directory and recursively for its sub-directories. See the JavaDocs for more information.
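As a sketch of one unit of work over a single thread-bound session (the file names are hypothetical), several operations might be grouped like this:

```java
template.invoke(operations -> {
    // Both calls below reuse the same underlying Session.
    operations.rename("source/a.txt", "archive/a.txt");
    operations.remove("source/b.txt");
    return null;
});
```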
Starting with Spring Integration version 4.2, a MessageSessionCallback<F, T> implementation can be used with the <int-ftp:outbound-gateway/> (FtpOutboundGateway) to perform any operation(s) on the Session<FTPFile> with the requestMessage context. It can be used for any non-standard or low-level FTP operation (or several); for example, allowing access from an integration flow definition, and functional interface (Lambda) implementation injection:

@Bean
@ServiceActivator(inputChannel = "ftpChannel")
public MessageHandler ftpOutboundGateway(SessionFactory<FTPFile> sessionFactory) {
    return new FtpOutboundGateway(sessionFactory,
            (session, requestMessage) -> session.list(requestMessage.getPayload()));
}
Another example might be to pre- or post-process the file data being sent/retrieved. When using XML configuration, the <int-ftp:outbound-gateway/> provides a session-callback attribute to allow you to specify the MessageSessionCallback bean name.
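For example, the XML equivalent of the Java configuration above might look like the following sketch (the bean names are hypothetical):

```xml
<bean id="mySessionCallback" class="com.example.MyMessageSessionCallback"/>

<int-ftp:outbound-gateway id="ftpGateway"
    session-factory="ftpSessionFactory"
    request-channel="ftpChannel"
    reply-channel="replies"
    session-callback="mySessionCallback"/>
```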
Spring Integration provides support for VMWare vFabric GemFire.
VMWare vFabric GemFire (GemFire) is a distributed data management platform providing a key-value data grid along with advanced distributed system features such as event processing, continuous querying, and remote function execution. This guide assumes some familiarity with GemFire and its API.
Spring Integration provides support for GemFire by providing inbound adapters for entry and continuous query events, an outbound adapter to write entries to the cache, and MessageStore and MessageGroupStore implementations.
Spring Integration leverages the Spring GemFire project (http://www.springsource.org/spring-gemfire), providing a thin wrapper over its components.
To configure the int-gfe namespace, include the following elements within the headers of your XML configuration file:
xmlns:int-gfe="http://www.springframework.org/schema/integration/gemfire"
xsi:schemaLocation="http://www.springframework.org/schema/integration/gemfire
    http://www.springframework.org/schema/integration/gemfire/spring-integration-gemfire.xsd"
The inbound-channel-adapter produces messages on a channel triggered by a GemFire EntryEvent.
GemFire generates events whenever an entry is CREATED, UPDATED, DESTROYED, or INVALIDATED in the associated region.
The inbound channel adapter allows you to filter on a subset of these events.
For example, you may want to only produce messages in response to an entry being CREATED.
In addition, the inbound channel adapter can evaluate a SpEL expression if, for example, you want your message payload to contain an event property such as the new entry value.
<gfe:cache/>
<gfe:replicated-region id="region"/>

<int-gfe:inbound-channel-adapter id="inputChannel" region="region"
    cache-events="CREATED" expression="newValue"/>
In the above configuration, we are creating a GemFire Cache and Region using Spring GemFire's gfe namespace. The inbound-channel-adapter requires a reference to the GemFire region for which the adapter will be listening for events. Optional attributes include cache-events, which can contain a comma-separated list of event types for which a message will be produced on the input channel. By default, CREATED and UPDATED are enabled. Note that this adapter conforms to Spring Integration conventions. If no channel attribute is provided, the channel will be created from the id attribute. This adapter also supports an error-channel. The GemFire EntryEvent is the #root object of the expression evaluation. For example:

expression="new foo.MyEvent(key, oldValue, newValue)"

If the expression attribute is not provided, the message payload will be the GemFire EntryEvent itself.
=== Continuous Query Inbound Channel Adapter

The cq-inbound-channel-adapter produces messages on a channel triggered by a GemFire continuous query, or CqEvent event. Spring GemFire introduced continuous query support in release 1.1, including a ContinuousQueryListenerContainer, which provides a nice abstraction over the GemFire native API. This adapter requires a reference to a ContinuousQueryListenerContainer, creates a listener for a given query, and executes the query. The continuous query acts as an event source that will fire whenever its result set changes state.
Note: GemFire queries are written in OQL and are scoped to the entire cache (not just one region). Additionally, continuous queries require a remote (i.e., running in a separate process or remote host) cache server. Please consult the GemFire documentation for more information on implementing continuous queries.
<gfe:client-cache id="client-cache" pool-name="client-pool"/>

<gfe:pool id="client-pool" subscription-enabled="true">
    <!-- configure server or locator here required to address the cache server -->
</gfe:pool>

<gfe:client-region id="test" cache-ref="client-cache" pool-name="client-pool"/>

<gfe:cq-listener-container id="queryListenerContainer" cache="client-cache"
    pool-name="client-pool"/>

<int-gfe:cq-inbound-channel-adapter id="inputChannel"
    cq-listener-container="queryListenerContainer"
    query="select * from /test"/>
In the above configuration, we are creating a GemFire client cache (recall a remote cache server is required for this implementation, and its address is configured as a sub-element of the pool), a client region, and a ContinuousQueryListenerContainer using Spring GemFire. The continuous query inbound channel adapter requires a cq-listener-container attribute, which contains a reference to the ContinuousQueryListenerContainer. Optionally, it accepts an expression attribute which uses SpEL to transform the CqEvent or extract an individual property as needed. The cq-inbound-channel-adapter provides a query-events attribute, containing a comma-separated list of event types for which a message will be produced on the input channel. Available event types are CREATED, UPDATED, DESTROYED, REGION_DESTROYED, and REGION_INVALIDATED. CREATED and UPDATED are enabled by default. Additional optional attributes include query-name, which provides an optional query name; expression, which works as described in the above section; and durable, a boolean value indicating whether the query is durable (false by default). Note that this adapter conforms to Spring Integration conventions. If no channel attribute is provided, the channel will be created from the id attribute. This adapter also supports an error-channel.
The outbound-channel-adapter writes cache entries mapped from the message payload. In its simplest form, it expects a payload of type java.util.Map and puts the map entries into its configured region.
<int-gfe:outbound-channel-adapter id="cacheChannel" region="region"/>
Given the above configuration, an exception will be thrown if the payload is not a Map. Additionally, the outbound channel adapter can be configured to create a map of cache entries using SpEL.
<int-gfe:outbound-channel-adapter id="cacheChannel" region="region">
    <int-gfe:cache-entries>
        <entry key="payload.toUpperCase()" value="payload.toLowerCase()"/>
        <entry key="'foo'" value="'bar'"/>
    </int-gfe:cache-entries>
</int-gfe:outbound-channel-adapter>
In the above configuration, the inner element cache-entries is semantically equivalent to Spring's map element. The adapter interprets the key and value attributes as SpEL expressions with the message as the evaluation context. Note that this may contain arbitrary cache entries (not only those derived from the message) and that literal values must be enclosed in single quotes. In the above example, if the message sent to cacheChannel has a String payload with a value "Hello", two entries [HELLO:hello, foo:bar] will be written (created or updated) in the cache region. This adapter also supports the order attribute, which may be useful if it is bound to a PublishSubscribeChannel.
As described in EIP, a Message Store allows you to persist Messages. This can be very useful when dealing with components that have a capability to buffer messages (QueueChannel, Aggregator, Resequencer, etc.) if reliability is a concern. In Spring Integration, the MessageStore strategy also provides the foundation for the Claim Check pattern (http://www.eaipatterns.com/StoreInLibrary.html), which is described in EIP as well.
Spring Integration's GemFire module provides the GemfireMessageStore, which is an implementation of both the MessageStore strategy (mainly used by the QueueChannel and ClaimCheck patterns) and the MessageGroupStore strategy (mainly used by the Aggregator and Resequencer patterns).
<bean id="gemfireMessageStore" class="o.s.i.gemfire.store.GemfireMessageStore">
    <constructor-arg ref="myRegion"/>
</bean>

<gfe:cache/>
<gfe:replicated-region id="myRegion"/>

<int:channel id="somePersistentQueueChannel">
    <int:queue message-store="gemfireMessageStore"/>
</int:channel>

<int:aggregator input-channel="inputChannel" output-channel="outputChannel"
    message-store="gemfireMessageStore"/>
In the above example, the cache and region are configured using the spring-gemfire namespace (not to be confused with the spring-integration-gemfire namespace). Often it is desirable for the message store to be maintained in one or more remote cache servers in a client-server configuration (See the GemFire product documentation for more details). In this case, you configure a client cache, client region, and client pool and inject the region into the MessageStore. Here is an example:
<bean id="gemfireMessageStore"
    class="org.springframework.integration.gemfire.store.GemfireMessageStore">
    <constructor-arg ref="myRegion"/>
</bean>

<gfe:client-cache/>

<gfe:client-region id="myRegion" shortcut="PROXY" pool-name="messageStorePool"/>

<gfe:pool id="messageStorePool">
    <gfe:server host="localhost" port="40404"/>
</gfe:pool>
Note the pool element is configured with the address of a cache server (a locator may be substituted here). The region is configured as a PROXY so that no data will be stored locally. The region’s id corresponds to a region with the same name configured in the cache server.
Starting with version 4.3.12, the GemfireMessageStore supports the key prefix option, to allow distinguishing between instances of the store on the same GemFire region.
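For example, two stores sharing one region might be distinguished by prefix, as in the following sketch (this assumes the prefix is supplied as a second constructor argument; the prefix value and region reference are arbitrary placeholders - verify the constructor against your version's JavaDocs):

```xml
<bean id="flowAMessageStore"
    class="org.springframework.integration.gemfire.store.GemfireMessageStore">
    <constructor-arg ref="myRegion"/>
    <constructor-arg value="flowA_"/>
</bean>
```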
Starting with version 4.0, the GemfireLockRegistry is available. Certain components (for example aggregator and resequencer) use a lock obtained from a LockRegistry instance to ensure that only one thread is manipulating a group at a time. The DefaultLockRegistry performs this function within a single component; you can now configure an external lock registry on these components. When used with a shared MessageGroupStore, the GemfireLockRegistry can be used to provide this functionality across multiple application instances, such that only one instance can manipulate the group at a time.
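A bean definition sketch (the region reference and channel names are hypothetical, and the package name should be verified against your version):

```xml
<bean id="gemfireLockRegistry"
    class="org.springframework.integration.gemfire.util.GemfireLockRegistry">
    <constructor-arg ref="lockRegion"/>
</bean>

<int:aggregator input-channel="in" output-channel="out"
    message-store="gemfireMessageStore"
    lock-registry="gemfireLockRegistry"/>
```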
As of version 4.0, a new GemFire-based MetadataStore implementation is available. The GemfireMetadataStore can be used to maintain metadata state across application restarts. In order to instruct adapters to use the new GemfireMetadataStore, simply declare a Spring bean using the bean name metadataStore.
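A declaration sketch (the cache bean reference is assumed to be defined elsewhere in the context):

```xml
<bean id="metadataStore"
    class="org.springframework.integration.gemfire.metadata.GemfireMetadataStore">
    <constructor-arg ref="gemfireCache"/>
</bean>
```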
The Twitter Inbound Channel Adapter and the Feed Inbound Channel Adapter will both automatically pick up and use the declared GemfireMetadataStore.
Note: Since version 5.0, the GemfireMetadataStore can be configured with listeners to receive callbacks when metadata entries change, for example:

GemfireMetadataStore metadataStore = new GemfireMetadataStore(cache);
metadataStore.addListener(new MetadataStoreListenerAdapter() {

    @Override
    public void onAdd(String key, String value) {
        ...
    }

});
The HTTP support allows for the execution of HTTP requests and the processing of inbound HTTP requests. The HTTP support consists of the following gateway implementations: HttpInboundEndpoint and HttpRequestExecutingMessageHandler.
To receive messages over HTTP, you need to use an HTTP Inbound Channel Adapter or Gateway. To support the HTTP Inbound Adapters, they need to be deployed within a servlet container such as Apache Tomcat or Jetty. The easiest way to do this is to use Spring’s HttpRequestHandlerServlet, by providing the following servlet definition in the web.xml file:
<servlet>
    <servlet-name>inboundGateway</servlet-name>
    <servlet-class>o.s.web.context.support.HttpRequestHandlerServlet</servlet-class>
</servlet>
Notice that the servlet name matches the bean name.
For more information on using the HttpRequestHandlerServlet, see the chapter Remoting and web services using Spring, which is part of the Spring Framework Reference documentation.
If you are running within a Spring MVC application, then the aforementioned explicit servlet definition is not necessary. In that case, the bean name for your gateway can be matched against the URL path just like a Spring MVC Controller bean. For more information, please see the chapter Web MVC framework, which is part of the Spring Framework Reference documentation.
![]() | Tip |
---|---|
For a sample application and the corresponding configuration, please see the Spring Integration Samples repository. It contains the Http Sample application demonstrating Spring Integration’s HTTP support. |
Below is an example bean definition for a simple HTTP inbound endpoint.
<bean id="httpInbound"
    class="org.springframework.integration.http.inbound.HttpRequestHandlingMessagingGateway">
    <property name="requestChannel" ref="httpRequestChannel" />
    <property name="replyChannel" ref="httpReplyChannel" />
</bean>
The HttpRequestHandlingMessagingGateway accepts a list of HttpMessageConverter instances or else relies on a default list. The converters allow customization of the mapping from HttpServletRequest to Message. The default converters encapsulate simple strategies, which, for example, will create a String message for a POST request where the content type starts with "text"; see the JavaDoc for full details. An additional flag (mergeWithDefaultConverters) can be set along with the list of custom HttpMessageConverter instances to add the default converters after the custom converters. By default, this flag is set to false, meaning that the custom converters replace the default list. The message conversion process uses the (optional) requestPayloadType property and the incoming Content-Type header. Starting with version 4.3, if a request has no content type header, application/octet-stream is assumed, as recommended by RFC 2616. Previously, the body of such messages was ignored.
Starting with Spring Integration 2.0, MultiPart File support is implemented. If the request has been wrapped as a MultipartHttpServletRequest, when using the default converters, that request will be converted to a Message payload that is a MultiValueMap containing values that may be byte arrays, Strings, or instances of Spring's MultipartFile, depending on the content type of the individual parts.
Note: The HTTP inbound endpoint will locate a MultipartResolver in the context if one exists with the bean name multipartResolver (the same name expected by Spring's DispatcherServlet).
If you wish to proxy a multipart/form-data request to another server, it may be better to keep it in raw form. To handle this situation, do not add the multipartResolver bean to the context; configure the endpoint to expect a byte[] request; customize the message converters to include a ByteArrayHttpMessageConverter; and disable the default multipart converter. You may need some other converter(s) for the replies:
<int-http:inbound-gateway
    channel="receiveChannel"
    path="/inboundAdapter.htm"
    request-payload-type="byte[]"
    message-converters="converters"
    merge-with-default-converters="false"
    supported-methods="POST" />

<util:list id="converters">
    <beans:bean class="org.springframework.http.converter.ByteArrayHttpMessageConverter" />
    <beans:bean class="org.springframework.http.converter.StringHttpMessageConverter" />
    <beans:bean class="org.springframework.http.converter.json.MappingJackson2HttpMessageConverter" />
</util:list>
There are a number of ways to customize the behavior of the gateway when sending a response to the client. By default, the gateway will simply acknowledge that the request was received by sending back a 200 status code. It is possible to customize this response by providing a viewName to be resolved by the Spring MVC ViewResolver. In the case that the gateway should expect a reply to the Message, setting the expectReply flag (constructor argument) will cause the gateway to wait for a reply Message before creating an HTTP response. Below is an example of a gateway configured to serve as a Spring MVC Controller with a view name. Because of the constructor arg value of TRUE, it waits for a reply. This also shows how to customize the HTTP methods accepted by the gateway, which are POST and GET by default.
<bean id="httpInbound"
    class="org.springframework.integration.http.inbound.HttpRequestHandlingController">
    <constructor-arg value="true" /> <!-- indicates that a reply is expected -->
    <property name="requestChannel" ref="httpRequestChannel" />
    <property name="replyChannel" ref="httpReplyChannel" />
    <property name="viewName" value="jsonView" />
    <property name="supportedMethodNames">
        <list>
            <value>GET</value>
            <value>DELETE</value>
        </list>
    </property>
</bean>
The reply message will be available in the Model map. The key that is used for that map entry by default is reply, but this can be overridden by setting the replyKey property on the endpoint’s configuration.
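For example, to expose the reply under the key result instead, the controller definition above might be extended like this (a sketch):

```xml
<bean id="httpInbound"
    class="org.springframework.integration.http.inbound.HttpRequestHandlingController">
    <constructor-arg value="true"/>
    <property name="requestChannel" ref="httpRequestChannel"/>
    <property name="replyChannel" ref="httpReplyChannel"/>
    <property name="viewName" value="jsonView"/>
    <property name="replyKey" value="result"/>
</bean>
```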
=== Http Outbound Components

==== HttpRequestExecutingMessageHandler
To configure the HttpRequestExecutingMessageHandler, write a bean definition like this:
<bean id="httpOutbound"
    class="org.springframework.integration.http.outbound.HttpRequestExecutingMessageHandler">
    <constructor-arg value="http://localhost:8080/example" />
    <property name="outputChannel" ref="responseChannel" />
</bean>
This bean definition will execute HTTP requests by delegating to a RestTemplate. That template in turn delegates to a list of HttpMessageConverter instances to generate the HTTP request body from the Message payload. You can configure those converters as well as the ClientHttpRequestFactory instance to use:
<bean id="httpOutbound"
    class="org.springframework.integration.http.outbound.HttpRequestExecutingMessageHandler">
    <constructor-arg value="http://localhost:8080/example" />
    <property name="outputChannel" ref="responseChannel" />
    <property name="messageConverters" ref="messageConverterList" />
    <property name="requestFactory" ref="customRequestFactory" />
</bean>
By default, the HTTP request will be generated using an instance of SimpleClientHttpRequestFactory, which uses the JDK HttpURLConnection. Use of the Apache Commons HTTP Client is also supported through the provided CommonsClientHttpRequestFactory, which can be injected as shown above.
Note: In the case of the Outbound Gateway, the reply message produced by the gateway will contain all Message Headers present in the request message.
Cookies

Basic cookie support is provided by the transfer-cookies attribute on the outbound gateway. When set to true (the default is false), a Set-Cookie header received from the server in a response will be converted to a Cookie header in the reply message. This header will then be used on subsequent sends. This enables simple stateful interactions, such as:

...->logonGateway->...->doWorkGateway->...->logoffGateway->...

If transfer-cookies is false, any Set-Cookie header received will remain as Set-Cookie in the reply message, and will be dropped on subsequent sends.
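A sketch of the logon gateway in such a chain (the URL and channel names are hypothetical):

```xml
<int-http:outbound-gateway id="logonGateway"
    request-channel="logonRequests"
    reply-channel="work"
    url="http://localhost/login"
    http-method="POST"
    transfer-cookies="true"/>
```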
Note (Empty Response Bodies): HTTP is a request/response protocol. However, the response may not have a body, just headers.
Note (expected-response-type): Further to the note above regarding empty response bodies, if a response does contain a body, you must provide an appropriate expected-response-type so that the body can be converted.
==== Introduction
Spring Integration provides an http namespace and the corresponding schema definition. To include it in your configuration, simply provide the following namespace declaration in your application context configuration file:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:int="http://www.springframework.org/schema/integration"
    xmlns:int-http="http://www.springframework.org/schema/integration/http"
    xsi:schemaLocation="
        http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/integration
        http://www.springframework.org/schema/integration/spring-integration.xsd
        http://www.springframework.org/schema/integration/http
        http://www.springframework.org/schema/integration/http/spring-integration-http.xsd">
    ...
</beans>
==== Inbound
The XML Namespace provides two components for handling HTTP Inbound requests. In order to process requests without returning a dedicated response, use the inbound-channel-adapter:
<int-http:inbound-channel-adapter id="httpChannelAdapter" channel="requests" supported-methods="PUT, DELETE"/>
To process requests that do expect a response, use an inbound-gateway:
<int-http:inbound-gateway id="inboundGateway" request-channel="requests" reply-channel="responses"/>
Note: Spring Integration 3.0 improved the REST support by introducing the IntegrationRequestMappingHandlerMapping. The implementation relies on the enhanced REST support provided by Spring Framework 3.1 or higher.
The parsing of the HTTP Inbound Gateway or the HTTP Inbound Channel Adapter registers an integrationRequestMappingHandlerMapping bean of type IntegrationRequestMappingHandlerMapping, if none is already registered. This particular implementation of the HandlerMapping delegates its logic to the RequestMappingInfoHandlerMapping. The implementation provides functionality similar to that of the org.springframework.web.bind.annotation.RequestMapping annotation in Spring MVC.
Note: For more information, please see Mapping Requests With @RequestMapping in the Spring Framework reference documentation.
For this purpose, Spring Integration 3.0 introduces the <request-mapping> sub-element. This optional sub-element can be added to the <http:inbound-channel-adapter> and the <http:inbound-gateway>. It works in conjunction with the path and supported-methods attributes:
<inbound-gateway id="inboundController"
    request-channel="requests"
    reply-channel="responses"
    path="/foo/{fooId}"
    supported-methods="GET"
    view-name="foo"
    error-code="oops">
    <request-mapping headers="User-Agent"
        params="myParam=myValue"
        consumes="application/json"
        produces="!text/plain"/>
</inbound-gateway>
Based on this configuration, the namespace parser creates an instance of the IntegrationRequestMappingHandlerMapping (if none exists yet), an HttpRequestHandlingController bean, and an associated RequestMapping instance, which, in turn, is converted to the Spring MVC RequestMappingInfo.
The <request-mapping> sub-element provides the headers, params, consumes, and produces attributes. Together with the path and supported-methods attributes of the <http:inbound-channel-adapter> or the <http:inbound-gateway>, the <request-mapping> attributes translate directly into the respective options provided by the org.springframework.web.bind.annotation.RequestMapping annotation in Spring MVC. The <request-mapping> sub-element allows you to configure several Spring Integration HTTP Inbound Endpoints on the same path (or even the same supported-methods) and to provide different downstream message flows based on incoming HTTP requests. Alternatively, you can also declare just one HTTP Inbound Endpoint and apply routing and filtering logic within the Spring Integration flow to achieve the same result. This allows you to get the Message into the flow as early as possible, e.g.:
<int-http:inbound-gateway request-channel="httpMethodRouter"
    supported-methods="GET,DELETE"
    path="/process/{entId}"
    payload-expression="#pathVariables.entId"/>

<int:router input-channel="httpMethodRouter" expression="headers.http_requestMethod">
    <int:mapping value="GET" channel="in1"/>
    <int:mapping value="DELETE" channel="in2"/>
</int:router>

<int:service-activator input-channel="in1" ref="service" method="getEntity"/>
<int:service-activator input-channel="in2" ref="service" method="delete"/>
For more information regarding Handler Mappings, please see: Handler Mappings.
==== Cross-Origin Resource Sharing (CORS) Support
Starting with version 4.2, the <http:inbound-channel-adapter> and <http:inbound-gateway> can be configured with a <cross-origin> sub-element. It represents the same options as Spring MVC's @CrossOrigin for @Controller methods and allows the configuration of Cross-Origin Resource Sharing (CORS) for Spring Integration HTTP endpoints:
origin - List of allowed origins. * means that all origins are allowed. These values are placed in the Access-Control-Allow-Origin header of both the pre-flight and actual responses. Default value is *.

allowed-headers - Indicates which request headers can be used during the actual request. * means that all headers asked for by the client are allowed. This property controls the value of the pre-flight response's Access-Control-Allow-Headers header. Default value is *.

exposed-headers - List of response headers that the user-agent will allow the client to access. This property controls the value of the actual response's Access-Control-Expose-Headers header.

method - The HTTP request methods to allow: GET, POST, HEAD, OPTIONS, PUT, PATCH, DELETE, TRACE. Methods specified here override those in supported-methods.

allow-credentials - Set to true if the browser should include any cookies associated with the domain of the request, or false if it should not. An empty string ("") means undefined. If true, the pre-flight response will include the header Access-Control-Allow-Credentials=true. Default value is true.

max-age - Controls the cache duration for pre-flight responses. Setting this to a reasonable value can reduce the number of pre-flight request/response interactions required by the browser. This property controls the value of the Access-Control-Max-Age header in the pre-flight response. A value of -1 means undefined. Default value is 1800 seconds, or 30 minutes.
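Putting these attributes together, a <cross-origin> configuration might look like the following sketch (the origin, path, and header values are hypothetical):

```xml
<int-http:inbound-channel-adapter id="corsAdapter" channel="requests"
        path="/orders" supported-methods="GET">
    <int-http:cross-origin origin="http://example.com"
        allowed-headers="X-Requested-With"
        allow-credentials="false"
        max-age="600"/>
</int-http:inbound-channel-adapter>
```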
The CORS Java Configuration is represented by the org.springframework.integration.http.inbound.CrossOrigin class, instances of which can be injected into the HttpRequestHandlingEndpointSupport beans.
Starting with version 4.1, the <http:inbound-channel-adapter> can be configured with a status-code-expression to override the default 200 OK status. The expression must return an object which can be converted to an org.springframework.http.HttpStatus enum value. The evaluationContext has a BeanResolver but no variables, so the usage of this attribute is somewhat limited. An example might be to resolve, at runtime, some scoped Bean that returns a status code value but, most likely, it will be set to a fixed value such as status-code-expression="204" (No Content), or status-code-expression="T(org.springframework.http.HttpStatus).NO_CONTENT". By default, status-code-expression is null, meaning that the normal 200 OK response status will be returned.
<http:inbound-channel-adapter id="inboundController"
    channel="requests"
    view-name="foo"
    error-code="oops"
    status-code-expression="T(org.springframework.http.HttpStatus).ACCEPTED">
    <request-mapping headers="BAR"/>
</http:inbound-channel-adapter>
The <http:inbound-gateway> resolves the status code from the http_statusCode header of the reply Message. Starting with version 4.2, the default response status code when no reply is received within the reply-timeout is 500 Internal Server Error. There are two ways to modify this behavior:

Set the reply-timeout-status-code-expression - this has the same semantics as the status-code-expression on the inbound adapter.

Add an error-channel and return an appropriate message with an HTTP status code header, such as:
<int:chain input-channel="errors">
    <int:header-enricher>
        <int:header name="http_statusCode" value="504" />
    </int:header-enricher>
    <int:transformer expression="payload.failedMessage" />
</int:chain>
The payload of the ErrorMessage is a MessageTimeoutException; it must be transformed to something that can be converted by the gateway, such as a String; a good candidate is the exception's message property, which is the value used when using the expression technique. If the error flow times out after a main flow timeout, 500 Internal Server Error is returned, or the reply-timeout-status-code-expression is evaluated, if present.
Note: Previously, the default status code for a timeout was 200 OK.
==== URI Template Variables and Expressions
By using the path attribute in conjunction with the payload-expression attribute, as well as the header sub-element, you have a high degree of flexibility for mapping inbound request data.
In the following example configuration, an Inbound Channel Adapter is configured to accept requests using the following URI: /first-name/{firstName}/last-name/{lastName}
Using the payload-expression attribute, the URI template variable {firstName} is mapped to be the Message payload, while the {lastName} URI template variable will map to the lname Message header.
<int-http:inbound-channel-adapter id="inboundAdapterWithExpressions"
    path="/first-name/{firstName}/last-name/{lastName}"
    channel="requests"
    payload-expression="#pathVariables.firstName">
    <int-http:header name="lname" expression="#pathVariables.lastName"/>
</int-http:inbound-channel-adapter>
For more information about URI template variables, please see the Spring Reference Manual: uri template patterns.
Since Spring Integration 3.0, in addition to the existing #pathVariables and #requestParams variables being available in payload and header expressions, other useful variables have been added. The entire list of available expression variables:

#requestParams - the MultiValueMap from the ServletRequest parameterMap.

#pathVariables - the Map from URI Template placeholders and their values.

#matrixVariables - the Map of MultiValueMap according to the Spring MVC Specification. Note, #matrixVariables require Spring MVC 3.2 or higher.

#requestAttributes - the org.springframework.web.context.request.RequestAttributes associated with the current Request.

#requestHeaders - the org.springframework.http.HttpHeaders object from the current Request.

#cookies - the Map<String, Cookie> of javax.servlet.http.Cookies from the current Request.

Note, all these values (and others) can be accessed within expressions in the downstream message flow via the ThreadLocal org.springframework.web.context.request.RequestAttributes variable, if that message flow is single-threaded and lives within the request thread:
<int:transformer
    expression="T(org.springframework.web.context.request.RequestContextHolder).
        requestAttributes.request.queryString"/>
==== Outbound
To configure the outbound gateway you can use the namespace support as well.
The following code snippet shows the different configuration options for an outbound Http gateway.
Most importantly, notice that the http-method and expected-response-type are provided.
Those are two of the most commonly configured values.
The default http-method is POST, and the default response type is null.
With a null response type, the payload of the reply Message will contain the ResponseEntity, as long as its HTTP status is a success (non-successful status codes will throw Exceptions).
If you are expecting a different type, such as a String
, then provide that fully-qualified class name as shown below.
See also the note about empty response bodies in the section called “CompletableFuture”.
IMPORTANT
Beginning with Spring Integration 2.1 the request-timeout attribute of the HTTP Outbound Gateway was renamed to reply-timeout to better reflect the intent.
<int-http:outbound-gateway id="example"
    request-channel="requests"
    url="http://localhost/test"
    http-method="POST"
    extract-request-payload="false"
    expected-response-type="java.lang.String"
    charset="UTF-8"
    request-factory="requestFactory"
    reply-timeout="1234"
    reply-channel="replies"/>
IMPORTANT
Since Spring Integration 2.2, Java serialization over HTTP is no longer enabled by default. Previously, when setting the expected-response-type attribute to a Serializable object, the SerializingHttpMessageConverter was automatically configured. However, because this could cause incompatibility with existing applications, it was decided to no longer automatically add this converter to the HTTP endpoints. If you wish to use Java serialization, you will need to add the SerializingHttpMessageConverter to the appropriate endpoints yourself.
Beginning with Spring Integration 2.2 you can also determine the HTTP Method dynamically using SpEL and the http-method-expression attribute. Note that this attribute is mutually exclusive with http-method. You can also use the expected-response-type-expression attribute instead of expected-response-type and provide any valid SpEL expression that determines the type of the response.
<int-http:outbound-gateway id="example"
    request-channel="requests"
    url="http://localhost/test"
    http-method-expression="headers.httpMethod"
    extract-request-payload="false"
    expected-response-type-expression="payload"
    charset="UTF-8"
    request-factory="requestFactory"
    reply-timeout="1234"
    reply-channel="replies"/>
If your outbound adapter is to be used in a unidirectional way, you can use an outbound-channel-adapter instead. This means that a successful response simply completes without sending any Message to a reply channel. In the case of any non-successful response status code, it will throw an exception. The configuration looks very similar to the gateway:
<int-http:outbound-channel-adapter id="example"
    url="http://localhost/example"
    http-method="GET"
    channel="requests"
    charset="UTF-8"
    extract-payload="false"
    expected-response-type="java.lang.String"
    request-factory="someRequestFactory"
    order="3"
    auto-startup="false"/>
NOTE
To specify the URL, you can use either the url attribute or the url-expression attribute. The url is a simple string (with placeholders for URI variables, as described below); the url-expression is a SpEL expression, with the Message as the root object, enabling dynamic urls. The url resulting from the expression evaluation can still have placeholders for URI variables. In previous releases, some users used the placeholders to replace the entire URL with a URI variable. Changes in Spring 3.1 can cause some issues with escaped characters, such as ?. For this reason, it is recommended that if you wish to generate the URL entirely at runtime, you use the url-expression attribute.
If your URL contains URI variables, you can map them using the uri-variable
sub-element.
This sub-element is available for the Http Outbound Gateway and the Http Outbound Channel Adapter.
<int-http:outbound-gateway id="trafficGateway"
        url="http://local.yahooapis.com/trafficData?appid=YdnDemo&zip={zipCode}"
        request-channel="trafficChannel"
        http-method="GET"
        expected-response-type="java.lang.String">
    <int-http:uri-variable name="zipCode" expression="payload.getZip()"/>
</int-http:outbound-gateway>
The uri-variable
sub-element defines two attributes: name
and expression
.
The name
attribute identifies the name of the URI variable, while the expression
attribute is used to set the actual value.
Using the expression
attribute, you can leverage the full power of the Spring Expression Language (SpEL) which gives you full dynamic access to the message payload and the message headers.
For example, in the above configuration the getZip()
method will be invoked on the payload object of the Message and the result of that method will be used as the value for the URI variable named zipCode.
Since Spring Integration 3.0, HTTP Outbound Endpoints support the uri-variables-expression
attribute to specify an Expression
which should be evaluated, resulting in a Map
for all URI variable placeholders within the URL template.
It provides a mechanism whereby different variable expressions can be used, based on the outbound message.
This attribute is mutually exclusive with the <uri-variable/>
sub-element:
<int-http:outbound-gateway url="http://foo.host/{foo}/bars/{bar}"
    request-channel="trafficChannel"
    http-method="GET"
    uri-variables-expression="@uriVariablesBean.populate(payload)"
    expected-response-type="java.lang.String"/>
where uriVariablesBean
might be:
public class UriVariablesBean {

    private static final ExpressionParser EXPRESSION_PARSER = new SpelExpressionParser();

    public Map<String, ?> populate(Object payload) {
        Map<String, Object> variables = new HashMap<String, Object>();
        if (payload instanceof String) {
            variables.put("foo", "foo");
        }
        else {
            variables.put("foo", EXPRESSION_PARSER.parseExpression("headers.bar"));
        }
        return variables;
    }

}
NOTE
The |
IMPORTANT
The uriVariablesExpression
property provides a very powerful mechanism for evaluating URI variables.
It is anticipated that simple expressions like the example above will be used.
However, you could also configure something like this "@uriVariablesBean.populate(#root)"
with an expression in the returned map being variables.put("foo", EXPRESSION_PARSER.parseExpression(message.getHeaders().get("bar", String.class)));
, where the expression is dynamically provided in the message header bar
.
Since the header may come from an untrusted source, the HTTP outbound endpoints use a SimpleEvaluationContext when evaluating these expressions, allowing only a subset of SpEL features to be used. If you trust your message sources and wish to use the otherwise restricted SpEL constructs, set the trustedSpel property of the outbound endpoint to true.
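With Java configuration, this flag might be set directly on the handler; a sketch (the input channel name and URL are illustrative, and this assumes the setTrustedSpel mutator corresponding to the trustedSpel property described above):

```java
@Bean
@ServiceActivator(inputChannel = "requests")
public HttpRequestExecutingMessageHandler trustedOutbound() {
    HttpRequestExecutingMessageHandler handler =
            new HttpRequestExecutingMessageHandler("http://foo.host/{foo}/bars/{bar}");
    // Only enable this when the message sources feeding the expressions are trusted.
    handler.setTrustedSpel(true);
    return handler;
}
```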
Scenarios where a dynamic set of URI variables must be supplied on a per-message basis can be handled with a custom url-expression and some utilities for building and encoding URL parameters:
url-expression="T(org.springframework.web.util.UriComponentsBuilder)
    .fromHttpUrl('http://HOST:PORT/PATH')
    .queryParams(payload)
    .build()
    .toUri()"
where queryParams() expects a MultiValueMap<String, String> as an argument, so a real set of URL query parameters can be built in advance, before performing the request. The whole queryString can also be presented as a URI variable:
<int-http:outbound-gateway id="proxyGateway" request-channel="testChannel"
        url="http://testServer/test?{queryString}">
    <int-http:uri-variable name="queryString" expression="'a=A&b=B'"/>
</int-http:outbound-gateway>
In this case the URL encoding must be provided manually. For example, org.apache.http.client.utils.URLEncodedUtils#format() can be used for this purpose. The manually built MultiValueMap<String, String> mentioned above can be converted to the List<NameValuePair> argument of the format() method using this Java Streams snippet:
List<NameValuePair> nameValuePairs =
params.entrySet()
.stream()
.flatMap(e -> e
.getValue()
.stream()
.map(v -> new BasicNameValuePair(e.getKey(), v)))
.collect(Collectors.toList());
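If you prefer not to depend on Apache HttpComponents for the encoding step, the JDK's own URLEncoder can produce the encoded query string directly; a minimal stdlib-only sketch (the class name is illustrative):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.*;
import java.util.stream.Collectors;

public class QueryStringBuilder {

    // Encode a multi-value parameter map into an
    // application/x-www-form-urlencoded query string.
    public static String format(Map<String, List<String>> params) {
        return params.entrySet().stream()
                .flatMap(e -> e.getValue().stream()
                        .map(v -> encode(e.getKey()) + "=" + encode(v)))
                .collect(Collectors.joining("&"));
    }

    private static String encode(String s) {
        try {
            return URLEncoder.encode(s, "UTF-8");
        }
        catch (UnsupportedEncodingException e) {
            throw new IllegalStateException(e); // UTF-8 is always available
        }
    }

    public static void main(String[] args) {
        Map<String, List<String>> params = new LinkedHashMap<>();
        params.put("a", Arrays.asList("A", "Hello World!"));
        params.put("b", Collections.singletonList("B"));
        System.out.println(format(params)); // a=A&a=Hello+World%21&b=B
    }
}
```

Note that URLEncoder performs form encoding (space becomes +), which is appropriate for query strings but not for path segments.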
==== Controlling URI Encoding
By default, the URL string is encoded (see UriComponentsBuilder) to the URI object before sending the request.
In some scenarios with a non-standard URI (e.g.
the RabbitMQ Rest API) it is undesirable to perform the encoding.
The <http:outbound-gateway/>
and <http:outbound-channel-adapter/>
provide an encode-uri
attribute.
To disable encoding the URL, this attribute should be set to false
(by default it is true
).
If you wish to partially encode some of the URL, this can be achieved using an expression
within a <uri-variable/>
:
<http:outbound-gateway url="http://somehost/%2f/fooApps?bar={param}" encode-uri="false">
    <http:uri-variable name="param"
        expression="T(org.apache.commons.httpclient.util.URIUtil).encodeWithinQuery('Hello World!')"/>
</http:outbound-gateway>
=== Configuring HTTP Endpoints with Java
Inbound Gateway Using Java Configuration.
@Bean
public HttpRequestHandlingMessagingGateway inbound() {
    HttpRequestHandlingMessagingGateway gateway =
            new HttpRequestHandlingMessagingGateway(true);
    gateway.setRequestMapping(mapping());
    gateway.setRequestPayloadType(String.class);
    gateway.setRequestChannelName("httpRequest");
    return gateway;
}

@Bean
public RequestMapping mapping() {
    RequestMapping requestMapping = new RequestMapping();
    requestMapping.setPathPatterns("/foo");
    requestMapping.setMethods(HttpMethod.POST);
    return requestMapping;
}
Inbound Gateway Using the Java DSL.
@Bean
public IntegrationFlow inbound() {
    return IntegrationFlows.from(Http.inboundGateway("/foo")
                .requestMapping(m -> m.methods(HttpMethod.POST))
                .requestPayloadType(String.class))
            .channel("httpRequest")
            .get();
}
Outbound Gateway Using Java Configuration.
@ServiceActivator(inputChannel = "httpOutRequest")
@Bean
public HttpRequestExecutingMessageHandler outbound() {
    HttpRequestExecutingMessageHandler handler =
            new HttpRequestExecutingMessageHandler("http://localhost:8080/foo");
    handler.setHttpMethod(HttpMethod.POST);
    handler.setExpectedResponseType(String.class);
    return handler;
}
Outbound Gateway Using the Java DSL.
@Bean
public IntegrationFlow outbound() {
    return IntegrationFlows.from("httpOutRequest")
            .handle(Http.outboundGateway("http://localhost:8080/foo")
                .httpMethod(HttpMethod.POST)
                .expectedResponseType(String.class))
            .get();
}
In the context of HTTP components, there are two timing areas that have to be considered.
Timeouts when interacting with Spring Integration Channels
Timeouts when interacting with a remote HTTP server
First, the components interact with Message Channels, for which timeouts can be specified. For example, an HTTP Inbound Gateway will forward messages received from connected HTTP Clients to a Message Channel (Request Timeout) and consequently the HTTP Inbound Gateway will receive a reply Message from the Reply Channel (Reply Timeout) that will be used to generate the HTTP Response. Please see the figure below for an illustration.
For outbound endpoints, the second thing to consider is timing while interacting with the remote server.
You may want to configure the HTTP related timeout behavior, when making active HTTP requests using the HTTP Outbound Gateway or the HTTP Outbound Channel Adapter. In those instances, these two components use Spring’s RestTemplate support to execute HTTP requests.
In order to configure timeouts for the HTTP Outbound Gateway and the HTTP Outbound Channel Adapter, you can either reference a RestTemplate
bean directly, using the rest-template attribute, or you can provide a reference to a ClientHttpRequestFactory bean using the request-factory attribute.
Spring provides the following implementations of the ClientHttpRequestFactory
interface:
SimpleClientHttpRequestFactory - Uses standard J2SE facilities for making HTTP Requests
HttpComponentsClientHttpRequestFactory - Uses Apache HttpComponents HttpClient (Since Spring 3.1)
If you don’t explicitly configure the request-factory or rest-template attribute respectively, then a default RestTemplate which uses a SimpleClientHttpRequestFactory
will be instantiated.
NOTE
With some JVM implementations, the handling of timeouts using the URLConnection class may not be consistent. For example, from the Java Platform, Standard Edition 6 API Specification on setConnectTimeout: "Some non-standard implementation of this method may ignore the specified timeout. To see the connect timeout set, please call getConnectTimeout()." Please test your timeouts if you have specific needs. Consider using the Apache HttpComponents HttpClient instead.
IMPORTANT
When using the Apache HttpComponents HttpClient with a Pooling Connection Manager, be aware that, by default, the connection manager will create no more than 2 concurrent connections per given route and no more than 20 connections in total. For many real-world applications these limits may prove too constraining. Refer to the Apache documentation (link above) for information about configuring this important component.
Here is an example of how to configure an HTTP Outbound Gateway using a SimpleClientHttpRequestFactory, configured with connect and read timeouts of 5 seconds each:
<int-http:outbound-gateway url="http://www.google.com/ig/api?weather={city}"
        http-method="GET"
        expected-response-type="java.lang.String"
        request-factory="requestFactory"
        request-channel="requestChannel"
        reply-channel="replyChannel">
    <int-http:uri-variable name="city" expression="payload"/>
</int-http:outbound-gateway>

<bean id="requestFactory"
      class="org.springframework.http.client.SimpleClientHttpRequestFactory">
    <property name="connectTimeout" value="5000"/>
    <property name="readTimeout" value="5000"/>
</bean>
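The same request factory can also be declared with Java configuration; a sketch (the bean name is illustrative):

```java
@Bean
public SimpleClientHttpRequestFactory requestFactory() {
    SimpleClientHttpRequestFactory requestFactory = new SimpleClientHttpRequestFactory();
    // Both timeouts are specified in milliseconds.
    requestFactory.setConnectTimeout(5000);
    requestFactory.setReadTimeout(5000);
    return requestFactory;
}
```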
HTTP Outbound Gateway
For the HTTP Outbound Gateway, the XML Schema defines only the reply-timeout.
The reply-timeout maps to the sendTimeout property of the org.springframework.integration.http.outbound.HttpRequestExecutingMessageHandler class.
More precisely, the property is set on the extended AbstractReplyProducingMessageHandler
class, which ultimately sets the property on the MessagingTemplate
.
The value of the sendTimeout property defaults to "-1" and will be applied to the connected MessageChannel
.
This means that, depending on the implementation, the Message Channel's send method may block indefinitely. Furthermore, the sendTimeout property is only used when the actual MessageChannel implementation has a blocking send (such as a full bounded QueueChannel).
HTTP Inbound Gateway
For the HTTP Inbound Gateway, the XML Schema defines the request-timeout attribute, which will be used to set the requestTimeout property on the HttpRequestHandlingMessagingGateway
class (on the extended MessagingGatewaySupport class).
Secondly, the reply-timeout attribute exists and it maps to the replyTimeout property on the same class.
The default for both timeout properties is "1000ms".
Ultimately, the request-timeout property will be used to set the sendTimeout on the used MessagingTemplate
instance.
The replyTimeout property on the other hand, will be used to set the receiveTimeout property on the used MessagingTemplate
instance.
TIP
In order to simulate connection timeouts, connect to a non-routable IP address, for example 10.255.255.10.
If you are behind a proxy and need to configure proxy settings for HTTP outbound adapters and/or gateways, you can apply one of two approaches. In most cases, you can rely on the standard Java System Properties that control the proxy settings. Otherwise, you can explicitly configure a Spring bean for the HTTP client request factory instance.
Standard Java Proxy configuration
There are 3 System Properties you can set to configure the proxy settings that will be used by the HTTP protocol handler:

- http.proxyHost - the host name of the proxy server
- http.proxyPort - the port number (the default is 80)
- http.nonProxyHosts - a list of hosts that should be reached directly, bypassing the proxy

And for HTTPS:

- https.proxyHost - the host name of the proxy server
- https.proxyPort - the port number (the default is 443)
For more information please refer to this document: http://download.oracle.com/javase/6/docs/technotes/guides/net/proxies.html
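The same properties can be set programmatically at startup, which is equivalent to passing -D options on the command line; a minimal stdlib-only sketch (the proxy host and ports are illustrative):

```java
public class ProxyConfig {

    public static void main(String[] args) {
        // Equivalent to -Dhttp.proxyHost=proxy.example.com etc. on the command line.
        System.setProperty("http.proxyHost", "proxy.example.com");
        System.setProperty("http.proxyPort", "8080");
        System.setProperty("http.nonProxyHosts", "localhost|127.0.0.1");
        // And for HTTPS:
        System.setProperty("https.proxyHost", "proxy.example.com");
        System.setProperty("https.proxyPort", "8443");

        System.out.println(System.getProperty("http.proxyHost"));
    }
}
```

Note that these properties must be set before the first HTTP connection is made by the JVM's protocol handler.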
Spring’s SimpleClientHttpRequestFactory
If for any reason, you need more explicit control over the proxy configuration, you can use Spring’s SimpleClientHttpRequestFactory
and configure its proxy property as such:
<bean id="requestFactory"
      class="org.springframework.http.client.SimpleClientHttpRequestFactory">
    <property name="proxy">
        <bean id="proxy" class="java.net.Proxy">
            <constructor-arg>
                <util:constant static-field="java.net.Proxy.Type.HTTP"/>
            </constructor-arg>
            <constructor-arg>
                <bean class="java.net.InetSocketAddress">
                    <constructor-arg value="123.0.0.1"/>
                    <constructor-arg value="8080"/>
                </bean>
            </constructor-arg>
        </bean>
    </property>
</bean>
Spring Integration provides support for HTTP Header mapping for both HTTP Requests and HTTP Responses. By default, all standard HTTP Headers, as defined here http://en.wikipedia.org/wiki/List_of_HTTP_header_fields, will be mapped from the message to HTTP request/response headers without further configuration. However, if you do need further customization, you may provide additional configuration via the convenient namespace support.
You can provide a comma-separated list of header names, and you can also include simple patterns with the * character acting as a wildcard.
If you do provide such values, it will override the default behavior.
Basically, it assumes you are in complete control at that point.
However, if you do want to include all of the standard HTTP headers, you can use the shortcut patterns: HTTP_REQUEST_HEADERS
and HTTP_RESPONSE_HEADERS
.
Here are some examples:
<int-http:outbound-gateway id="httpGateway"
    url="http://localhost/test2"
    mapped-request-headers="foo, bar"
    mapped-response-headers="X-*, HTTP_RESPONSE_HEADERS"
    channel="someChannel"/>

<int-http:outbound-channel-adapter id="httpAdapter"
    url="http://localhost/test2"
    mapped-request-headers="foo, bar, HTTP_REQUEST_HEADERS"
    channel="someChannel"/>
The adapters and gateways will use the DefaultHttpHeaderMapper
which now provides two static factory methods for "inbound" and "outbound" adapters so that the proper direction can be applied (mapping HTTP requests/responses IN/OUT as appropriate).
If further customization is required you can also configure a DefaultHttpHeaderMapper
independently and inject it into the adapter via the header-mapper
attribute.
Before version 5.0, the default prefix used by the DefaultHttpHeaderMapper for user-defined, non-standard HTTP headers was X-. In version 5.0 this was changed to an empty string because, according to RFC 6648, the use of such prefixes is now discouraged. This option can still be customized by means of the DefaultHttpHeaderMapper.setUserDefinedHeaderPrefix() property.
<int-http:outbound-gateway id="httpGateway"
    url="http://localhost/test2"
    header-mapper="headerMapper"
    channel="someChannel"/>

<bean id="headerMapper" class="o.s.i.http.support.DefaultHttpHeaderMapper">
    <property name="inboundHeaderNames" value="foo*, *bar, baz"/>
    <property name="outboundHeaderNames" value="a*b, d"/>
</bean>
Of course, you can even implement the HeaderMapper strategy interface directly and provide a reference to that if you need to do something other than what the DefaultHttpHeaderMapper
supports.
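If you need to restore the pre-5.0 prefix behavior for user-defined headers, a Java configuration sketch (assuming the "outbound" static factory method mentioned above):

```java
@Bean
public DefaultHttpHeaderMapper headerMapper() {
    DefaultHttpHeaderMapper mapper = DefaultHttpHeaderMapper.outboundMapper();
    // Restore the pre-5.0 default prefix for user-defined, non-standard headers.
    mapper.setUserDefinedHeaderPrefix("X-");
    return mapper;
}
```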
=== Integration Graph Controller
Starting with version 4.3, the HTTP module provides an @EnableIntegrationGraphController
@Configuration
class annotation and <int-http:graph-controller/>
XML element to expose the IntegrationGraphServer
as a REST service.
See the section called “CompletableFuture” for more information.
==== Multipart HTTP request - RestTemplate (client) and Http Inbound Gateway (server)
This example demonstrates how simple it is to send a Multipart HTTP request via Spring’s RestTemplate and receive it with a Spring Integration HTTP Inbound Adapter.
All we are doing is creating a MultiValueMap
and populating it with multi-part data.
The RestTemplate
will take care of the rest (no pun intended) by converting it to a MultipartHttpServletRequest
. This particular client will send a multipart HTTP Request which contains the name of the company as well as an image file with the company logo.
RestTemplate template = new RestTemplate();
String uri = "http://localhost:8080/multipart-http/inboundAdapter.htm";
Resource s2logo =
        new ClassPathResource("org/springframework/samples/multipart/spring09_logo.png");
MultiValueMap map = new LinkedMultiValueMap();
map.add("company", "SpringSource");
map.add("company-logo", s2logo);
HttpHeaders headers = new HttpHeaders();
headers.setContentType(new MediaType("multipart", "form-data"));
HttpEntity request = new HttpEntity(map, headers);
ResponseEntity<?> httpResponse = template.exchange(uri, HttpMethod.POST, request, null);
That is all for the client.
On the server side we have the following configuration:
<int-http:inbound-channel-adapter id="httpInboundAdapter"
    channel="receiveChannel"
    path="/inboundAdapter.htm"
    supported-methods="GET, POST"/>

<int:channel id="receiveChannel"/>

<int:service-activator input-channel="receiveChannel">
    <bean class="org.springframework.integration.samples.multipart.MultipartReceiver"/>
</int:service-activator>

<bean id="multipartResolver"
      class="org.springframework.web.multipart.commons.CommonsMultipartResolver"/>
The httpInboundAdapter will receive the request and convert it to a Message with a payload that is a LinkedMultiValueMap. We then parse that in the multipartReceiver service-activator:
public void receive(LinkedMultiValueMap<String, Object> multipartRequest) {
    System.out.println("### Successfully received multipart request ###");
    for (String elementName : multipartRequest.keySet()) {
        if (elementName.equals("company")) {
            System.out.println("\t" + elementName + " - "
                    + ((String[]) multipartRequest.getFirst("company"))[0]);
        }
        else if (elementName.equals("company-logo")) {
            System.out.println("\t" + elementName + " - as UploadedMultipartFile: "
                    + ((UploadedMultipartFile) multipartRequest
                            .getFirst("company-logo")).getOriginalFilename());
        }
    }
}
You should see the following output:
### Successfully received multipart request ###
	company - SpringSource
	company-logo - as UploadedMultipartFile: spring09_logo.png
Spring Integration provides Channel Adapters for receiving and sending messages via database queries. Through those adapters Spring Integration supports not only plain JDBC SQL Queries, but also Stored Procedure and Stored Function calls.
The following JDBC components are available by default: the Inbound Channel Adapter, the Outbound Channel Adapter, the Outbound Gateway, and the Stored Procedure components (inbound/outbound adapters and an outbound gateway).
Furthermore, the Spring Integration JDBC Module also provides a JDBC Message Store.
The main function of an inbound Channel Adapter is to execute a SQL SELECT
query and turn the result set into a message.
The message payload is the whole result set, expressed as a List
, and the types of the items in the list depend on the row-mapping strategy that is used.
The default strategy is a generic mapper that just returns a Map
for each row in the query result.
Optionally, this can be changed by adding a reference to a RowMapper
instance (see the Spring JDBC documentation for more detailed information about row mapping).
NOTE
If you want to convert rows in the SELECT query result to individual messages you can use a downstream splitter.
The inbound adapter also requires a reference to either a JdbcTemplate
instance or a DataSource
.
As well as the SELECT
statement to generate the messages, the adapter above also has an UPDATE
statement that is being used to mark the records as processed so that they don’t show up in the next poll.
The update can be parameterized by the list of ids from the original select.
This is done through a naming convention by default (a column in the input result set called "id" is translated into a list in the parameter map for the update called "id").
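The projection from the polled rows onto the update's named parameter can be pictured with plain Java; this is a conceptual sketch only, not the actual Spring Integration implementation:

```java
import java.util.*;
import java.util.stream.Collectors;

public class IdProjection {

    // Project one column of each polled row (a Map per row) onto a list
    // that is bound to the named parameter (:id) of the update statement.
    public static List<Object> project(List<Map<String, Object>> rows, String column) {
        return rows.stream()
                .map(r -> r.get(column))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Map<String, Object> row1 = new HashMap<>();
        row1.put("id", 1);
        row1.put("status", 2);
        Map<String, Object> row2 = new HashMap<>();
        row2.put("id", 7);
        row2.put("status", 2);
        // Bound to "update item set status=10 where id in (:id)"
        System.out.println(project(Arrays.asList(row1, row2), "id")); // [1, 7]
    }
}
```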
The following example defines an inbound Channel Adapter with an update query and a DataSource
reference.
<int-jdbc:inbound-channel-adapter query="select * from item where status=2"
    channel="target"
    data-source="dataSource"
    update="update item set status=10 where id in (:id)"/>
NOTE
The parameters in the update query are specified with a colon (:) prefix to the name of a parameter (which in this case is an expression to be applied to each of the rows in the polled result set). This is a standard feature of the named parameter JDBC support in Spring JDBC combined with a convention (projection onto the polled result list) adopted in Spring Integration. The underlying Spring JDBC features limit the available expressions (e.g. most special characters other than period are disallowed), but since the target is usually a list of or an individual object addressable by simple bean paths this isn’t unduly restrictive.
To change the parameter generation strategy you can inject a SqlParameterSourceFactory
into the adapter to override the default behavior (the adapter has a sql-parameter-source-factory
attribute).
Spring Integration provides a ExpressionEvaluatingSqlParameterSourceFactory
which will create a SpEL-based parameter source, with the results of the query as the #root
object.
(If update-per-row
is true, the root object is the row).
If the same parameter name appears multiple times in the update query, it is evaluated only one time, and its result is cached.
You can also use a parameter source for the select query. In this case, since there is no "result" object to evaluate against, a single parameter source is used each time (rather than using a parameter source factory). Starting with version 4.0, you can use Spring to create a SpEL based parameter source as follows:
<int-jdbc:inbound-channel-adapter query="select * from item where status=:status"
    channel="target"
    data-source="dataSource"
    select-sql-parameter-source="parameterSource"/>

<bean id="parameterSource" factory-bean="parameterSourceFactory"
      factory-method="createParameterSourceNoCache">
    <constructor-arg value=""/>
</bean>

<bean id="parameterSourceFactory"
      class="o.s.integration.jdbc.ExpressionEvaluatingSqlParameterSourceFactory">
    <property name="parameterExpressions">
        <map>
            <entry key="status" value="@statusBean.which()"/>
        </map>
    </property>
</bean>

<bean id="statusBean" class="foo.StatusDetermination"/>
The value
in each parameter expression can be any valid SpEL expression.
The #root
object for the expression evaluation is the constructor argument defined on the parameterSource
bean.
It is static for all evaluations (in this case, an empty String).
Starting with version 5.0, the ExpressionEvaluatingSqlParameterSourceFactory
can be supplied with the sqlParameterTypes
to specify the target SQL type for the particular parameter.
The example below provides the SQL type for the parameters used in the query.
<int-jdbc:inbound-channel-adapter query="select * from item where status=:status"
    channel="target"
    data-source="dataSource"
    select-sql-parameter-source="parameterSource"/>

<bean id="parameterSource" factory-bean="parameterSourceFactory"
      factory-method="createParameterSourceNoCache">
    <constructor-arg value=""/>
</bean>

<bean id="parameterSourceFactory"
      class="o.s.integration.jdbc.ExpressionEvaluatingSqlParameterSourceFactory">
    <property name="sqlParameterTypes">
        <map>
            <entry key="status" value="#{ T(java.sql.Types).BINARY}"/>
        </map>
    </property>
</bean>
IMPORTANT
Use the |
==== Polling and Transactions
The inbound adapter accepts a regular Spring Integration poller as a sub element, so for instance the frequency of the polling can be controlled. A very important feature of the poller for JDBC usage is the option to wrap the poll operation in a transaction, for example:
<int-jdbc:inbound-channel-adapter query="..."
        channel="target" data-source="dataSource" update="...">
    <int:poller fixed-rate="1000">
        <int:transactional/>
    </int:poller>
</int-jdbc:inbound-channel-adapter>
NOTE
If a poller is not explicitly specified, a default value will be used (and as per normal with Spring Integration can be defined as a top level bean).
In this example the database is polled every 1000 milliseconds, and the update and select queries are both executed in the same transaction. The transaction manager configuration is not shown, but as long as it is aware of the data source then the poll is transactional. A common use case is for the downstream channels to be direct channels (the default), so that the endpoints are invoked in the same thread, and hence the same transaction. Then if any of them fail, the transaction rolls back and the input data is reverted to its original state.
==== Max-rows-per-poll versus Max-messages-per-poll
The JDBC Inbound Channel Adapter defines an attribute max-rows-per-poll
.
When you specify the adapter’s Poller, you can also define a property called max-messages-per-poll
.
While these two attributes look similar, their meaning is quite different.
max-messages-per-poll
specifies the number of times the query is executed per polling interval, whereas max-rows-per-poll
specifies the number of rows returned for each execution.
Under normal circumstances, you would likely not want to set the Poller’s max-messages-per-poll
property when using the JDBC Inbound Channel Adapter.
Its default value is 1, which means that the JDBC Inbound Channel Adapter's receive()
method is executed exactly once for each poll interval.
Setting the max-messages-per-poll
attribute to a larger value means that the query is executed that many times back to back.
For more information regarding the max-messages-per-poll
attribute, please see Section 4.3.1, “Configuring An Inbound Channel Adapter”.
In contrast, the max-rows-per-poll
attribute, if greater than 0, specifies the maximum number of rows that will be used from the query result set, per execution of the receive()
method.
If the attribute is set to 0, then all rows will be included in the resulting message.
If not explicitly set, the attribute defaults to 0.
The outbound channel adapter is the inverse of the inbound: its role is to handle a message and use it to execute a SQL query. By default, the message payload and headers are available as input parameters to the query, as the following example shows:
<int-jdbc:outbound-channel-adapter
    query="insert into foos (id, status, name) values (:headers[id], 0, :payload[foo])"
    data-source="dataSource"
    channel="input"/>
In the example above, messages arriving on the channel labelled input have a payload of a map with key foo, so the []
operator dereferences that value from the map.
The headers are also accessed as a map.
NOTE
The parameters in the query above are bean property expressions on the incoming message (not Spring EL expressions).
This behavior is part of the |
The outbound adapter requires a reference to either a DataSource
or a JdbcTemplate
.
It can also have a SqlParameterSourceFactory
injected to control the binding of each incoming message to a query.
If the input channel is a direct channel, then the outbound adapter runs its query in the same thread, and therefore the same transaction (if there is one) as the sender of the message.
Passing Parameters using SpEL Expressions
A common requirement for most JDBC Channel Adapters is to pass parameters as part of Sql queries or Stored Procedures/Functions.
As mentioned above, these parameters are by default bean property expressions, not SpEL expressions.
However, if you need to pass SpEL expressions as parameters, you must inject a SqlParameterSourceFactory explicitly.
The following example uses an ExpressionEvaluatingSqlParameterSourceFactory to achieve that requirement.
<jdbc:outbound-channel-adapter data-source="dataSource" channel="input"
    query="insert into MESSAGES (MESSAGE_ID, PAYLOAD, CREATED_DATE)
           values (:id, :payload, :createdDate)"
    sql-parameter-source-factory="spelSource"/>

<bean id="spelSource"
      class="o.s.integration.jdbc.ExpressionEvaluatingSqlParameterSourceFactory">
    <property name="parameterExpressions">
        <map>
            <entry key="id" value="headers['id'].toString()"/>
            <entry key="createdDate" value="new java.util.Date()"/>
            <entry key="payload" value="payload"/>
        </map>
    </property>
</bean>
For further information, please also see the section called “CompletableFuture”
PreparedStatement Callback
There are some cases when the flexibility and loose coupling of the SqlParameterSourceFactory isn’t enough for the target PreparedStatement, or we need to do some low-level JDBC work. The Spring JDBC module provides APIs to configure the execution environment (e.g. ConnectionCallback or PreparedStatementCreator), to manipulate parameter values (e.g. SqlParameterSource), and even APIs for low-level operations, for example StatementCallback.
Starting with Spring Integration 4.2, the MessagePreparedStatementSetter is available to allow the specification of parameters on the PreparedStatement manually, in the requestMessage context.
This class plays exactly the same role as PreparedStatementSetter in the standard Spring JDBC API.
Actually, it is invoked directly from an inline PreparedStatementSetter implementation when the JdbcMessageHandler invokes execute on the JdbcTemplate.
This functional interface option is mutually exclusive with sqlParameterSourceFactory and can be used as a more powerful alternative to populate the parameters of the PreparedStatement from the requestMessage.
For example, it is useful when we need to store File data to a database BLOB column in a streaming manner:
@Bean
@ServiceActivator(inputChannel = "storeFileChannel")
public MessageHandler jdbcMessageHandler(DataSource dataSource) {
    JdbcMessageHandler jdbcMessageHandler =
            new JdbcMessageHandler(dataSource,
                    "INSERT INTO imagedb (image_name, content, description) VALUES (?, ?, ?)");
    jdbcMessageHandler.setPreparedStatementSetter((ps, m) -> {
        ps.setString(1, (String) m.getHeaders().get(FileHeaders.FILENAME));
        try (FileInputStream inputStream = new FileInputStream((File) m.getPayload())) {
            ps.setBlob(2, inputStream);
        }
        catch (Exception e) {
            throw new MessageHandlingException(m, e);
        }
        ps.setClob(3, new StringReader(m.getHeaders().get("description", String.class)));
    });
    return jdbcMessageHandler;
}
From the XML configuration perspective, the prepared-statement-setter attribute is available on the <int-jdbc:outbound-channel-adapter> component to specify a MessagePreparedStatementSetter bean reference.
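A sketch of such a configuration, assuming the MessagePreparedStatementSetter above is registered as a bean named `fileMessagePreparedStatementSetter` (the bean name and channel are illustrative):

```xml
<int-jdbc:outbound-channel-adapter channel="storeFileChannel"
    data-source="dataSource"
    query="INSERT INTO imagedb (image_name, content, description) VALUES (?, ?, ?)"
    prepared-statement-setter="fileMessagePreparedStatementSetter"/>
```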
The outbound Gateway is like a combination of the outbound and inbound adapters: its role is to handle a message, use it to execute a SQL query, and then respond with the result, sending it to a reply channel. The message payload and headers are available by default as input parameters to the query, for instance:
<int-jdbc:outbound-gateway
    update="insert into foos (id, status, name) values (:headers[id], 0, :payload[foo])"
    request-channel="input" reply-channel="output"
    data-source="dataSource" />
The result of the above would be to insert a record into the "foos" table and return a message to the output channel indicating the number of rows affected (the payload is a map: {UPDATED=1}).
If the update query is an insert with auto-generated keys, the reply message can be populated with the generated keys by adding keys-generated="true" to the above example (this is not the default because it is not supported by some database platforms).
For example:
<int-jdbc:outbound-gateway
    update="insert into mythings (status, name) values (0, :payload[thing])"
    request-channel="input" reply-channel="output"
    data-source="dataSource"
    keys-generated="true"/>
Instead of the update count or the generated keys, you can also provide a select query to execute and generate a reply message from the result (like the inbound adapter), e.g:
<int-jdbc:outbound-gateway
    update="insert into foos (id, status, name) values (:headers[id], 0, :payload[foo])"
    query="select * from foos where id=:headers[id]"
    request-channel="input" reply-channel="output"
    data-source="dataSource"/>
Since Spring Integration 2.2 the update SQL query is no longer mandatory. You can now solely provide a select query, using either the query attribute or the query sub-element. This is extremely useful if you need to actively retrieve data using e.g. a generic Gateway or a Payload Enricher. The reply message is then generated from the result, like the inbound adapter, and passed to the reply channel.
<int-jdbc:outbound-gateway
    query="select * from foos where id=:headers[id]"
    request-channel="input" reply-channel="output"
    data-source="dataSource"/>
By default, the component for the SELECT query returns only the first row from the cursor.
This can be adjusted with the max-rows-per-poll option.
Consider specifying max-rows-per-poll="0" if you need to return all the rows from the SELECT.
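For example, a gateway that returns every matching row might look like the following sketch (the table and channel names are illustrative):

```xml
<int-jdbc:outbound-gateway
    query="select * from foos where status=:payload[status]"
    max-rows-per-poll="0"
    request-channel="input" reply-channel="output"
    data-source="dataSource"/>
```

With max-rows-per-poll="0", the reply payload is a list of maps, one per selected row, rather than just the first row.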
As with the channel adapters, you can also provide SqlParameterSourceFactory instances for the request and the reply.
The default is the same as for the outbound adapter, so the request message is available as the root of an expression.
If keys-generated="true", then the root of the expression is the generated keys (a map if there is only one, or a list of maps if multi-valued).
The outbound gateway requires a reference to either a DataSource or a JdbcTemplate.
It can also have a SqlParameterSourceFactory injected to control the binding of the incoming message to the query.
Starting with version 4.2, the request-prepared-statement-setter attribute is available on the <int-jdbc:outbound-gateway> as an alternative to request-sql-parameter-source-factory.
It allows you to specify a MessagePreparedStatementSetter bean reference, which implements more sophisticated PreparedStatement preparation before its execution.
See the section called “CompletableFuture” for more information about MessagePreparedStatementSetter.
Spring Integration provides two JDBC-specific Message Store implementations.
The first is the JdbcMessageStore, which is suitable for use in conjunction with Aggregators and the Claim Check pattern.
While it can be used for backing Message Channels as well, you may want to consider using the JdbcChannelMessageStore implementation instead, as it provides a more targeted and scalable implementation.
Important: Starting with versions 5.0.11, 5.1.2, the indexes for the …
Note: When using the …
==== Initializing the Database
Before starting to use JDBC Message Store components, it is important to provision the target database with the appropriate objects.
Spring Integration ships with some sample scripts that can be used to initialize a database.
In the spring-integration-jdbc JAR file, you can find the scripts in the org.springframework.integration.jdbc package: there is a create and a drop script example for a range of common database platforms.
A common way to use these scripts is to reference them in a Spring JDBC data source initializer.
Note that the scripts are provided as samples or specifications of the required table and column names.
You may find that you need to enhance them for production use (e.g. with index declarations).
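As a sketch of the data source initializer approach, the following configuration runs the bundled create script at startup. The script name assumes an H2 database; substitute the script matching your platform:

```xml
<jdbc:initialize-database data-source="dataSource">
    <!-- ships inside the spring-integration-jdbc JAR -->
    <jdbc:script
        location="classpath:org/springframework/integration/jdbc/schema-h2.sql"/>
</jdbc:initialize-database>
```

In production you would typically run such scripts once as part of a managed schema migration rather than on every application start.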
==== The Generic JDBC Message Store
The JDBC module provides an implementation of the Spring Integration MessageStore (important in the Claim Check pattern) and MessageGroupStore (important in stateful patterns like the Aggregator), backed by a database.
Both interfaces are implemented by the JdbcMessageStore, and there is also support for configuring store instances in XML.
For example:
<int-jdbc:message-store id="messageStore" data-source="dataSource"/>
A JdbcTemplate can be specified instead of a DataSource.
Other optional attributes are shown in the next example:
<int-jdbc:message-store id="messageStore"
    data-source="dataSource"
    lob-handler="lobHandler"
    table-prefix="MY_INT_"/>
Here we have specified a LobHandler for dealing with messages as large objects (often necessary with Oracle, for example) and a prefix for the table names in the queries generated by the store.
The table name prefix defaults to INT_.
If you intend to back Message Channels using JDBC, it is recommended to use the provided JdbcChannelMessageStore implementation instead.
It can only be used in conjunction with Message Channels.
Supported Databases
The JdbcChannelMessageStore uses database-specific SQL queries to retrieve messages from the database.
Therefore, users must set the ChannelMessageStoreQueryProvider property on the JdbcChannelMessageStore.
This channelMessageStoreQueryProvider provides the SQL queries, and Spring Integration ships with support for several relational databases out of the box.
If your database is not supported, you can easily extend the AbstractChannelMessageStoreQueryProvider class and provide your own custom queries.
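The following Java configuration sketch shows how the store and its query provider are wired together, using the PostgreSQL provider as an example:

```java
@Bean
public JdbcChannelMessageStore channelStore(DataSource dataSource) {
    JdbcChannelMessageStore store = new JdbcChannelMessageStore(dataSource);
    // The query provider must match the target database; a custom
    // AbstractChannelMessageStoreQueryProvider subclass plugs in the same way
    store.setChannelMessageStoreQueryProvider(
            new PostgresChannelMessageStoreQueryProvider());
    return store;
}
```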
Since version 4.0, the MESSAGE_SEQUENCE
column has been added to the table to ensure first-in-first-out (FIFO) queueing even when messages are stored in the same millisecond.
Since version 5.0, by subclassing the ChannelMessageStorePreparedStatementSetter class, you can provide a custom implementation for message insertion in the JdbcChannelMessageStore.
You can use it to support different columns, a different table structure, or a different serialization strategy.
For example, instead of the default serialization to byte[], we can store the message structure as a JSON string.
The following example uses the default implementation of setValues to store the common columns and overrides the behavior just to store the message payload as varchar.
public class JsonPreparedStatementSetter extends ChannelMessageStorePreparedStatementSetter {

    @Override
    public void setValues(PreparedStatement preparedStatement, Message<?> requestMessage,
            Object groupId, String region, boolean priorityEnabled) throws SQLException {
        // Populate common columns
        super.setValues(preparedStatement, requestMessage, groupId, region, priorityEnabled);
        // Store message payload as varchar
        preparedStatement.setString(6, requestMessage.getPayload().toString());
    }

}
Important: Generally, it is not recommended to use a relational database for the purpose of queuing. Instead, if possible, consider using JMS- or AMQP-backed channels. For further reference, please see the following resources: …
Concurrent Polling
When polling a Message Channel, you have the option to configure the associated Poller with a TaskExecutor reference.
Important: Keep in mind, though, that if you use a JDBC-backed Message Channel and you are planning on polling the channel, and consequently the message store, transactionally with multiple threads, you should ensure that you use a relational database that supports Multiversion Concurrency Control (MVCC). Otherwise, locking may be an issue and the expected performance with multiple threads may not materialize. For example, Apache Derby is problematic in that regard. To achieve better JDBC queue throughput, and to avoid issues when different threads may poll the same …
<bean id="queryProvider"
    class="o.s.i.jdbc.store.channel.PostgresChannelMessageStoreQueryProvider"/>

<int:transaction-synchronization-factory id="syncFactory">
    <int:after-commit expression="@store.removeFromIdCache(headers.id.toString())" />
    <int:after-rollback expression="@store.removeFromIdCache(headers.id.toString())"/>
</int:transaction-synchronization-factory>

<task:executor id="pool" pool-size="10"
    queue-capacity="10" rejection-policy="CALLER_RUNS" />

<bean id="store" class="o.s.i.jdbc.store.JdbcChannelMessageStore">
    <property name="dataSource" ref="dataSource"/>
    <property name="channelMessageStoreQueryProvider" ref="queryProvider"/>
    <property name="region" value="TX_TIMEOUT"/>
    <property name="usingIdCache" value="true"/>
</bean>

<int:channel id="inputChannel">
    <int:queue message-store="store"/>
</int:channel>

<int:bridge input-channel="inputChannel" output-channel="outputChannel">
    <int:poller fixed-delay="500" receive-timeout="500"
            max-messages-per-poll="1" task-executor="pool">
        <int:transactional propagation="REQUIRED" synchronization-factory="syncFactory"
                isolation="READ_COMMITTED" transaction-manager="transactionManager" />
    </int:poller>
</int:bridge>

<int:channel id="outputChannel" />
Priority Channel
Starting with version 4.0, the JdbcChannelMessageStore implements PriorityCapableChannelMessageStore and provides the priorityEnabled option, allowing it to be used as a message-store reference for priority-queues.
For this purpose, the INT_CHANNEL_MESSAGE table has a MESSAGE_PRIORITY column to store the value of the PRIORITY message header.
In addition, a new MESSAGE_SEQUENCE column is also provided to achieve a robust first-in-first-out (FIFO) polling mechanism, even when multiple messages are stored with the same priority in the same millisecond.
Messages are polled (selected) from the database with order by MESSAGE_PRIORITY DESC NULLS LAST, CREATED_DATE, MESSAGE_SEQUENCE.
Note: It’s not recommended to use the same …
<bean id="channelStore" class="o.s.i.jdbc.store.JdbcChannelMessageStore">
    <property name="dataSource" ref="dataSource"/>
    <property name="channelMessageStoreQueryProvider" ref="queryProvider"/>
</bean>

<int:channel id="queueChannel">
    <int:queue message-store="channelStore"/>
</int:channel>

<bean id="priorityStore" parent="channelStore">
    <property name="priorityEnabled" value="true"/>
</bean>

<int:channel id="priorityChannel">
    <int:priority-queue message-store="priorityStore"/>
</int:channel>
==== Partitioning a Message Store
It is common to use a JdbcMessageStore as a global store for a group of applications, or for nodes in the same application.
To provide some protection against name clashes, and to give control over the database metadata configuration, the message store allows the tables to be partitioned in two ways.
One is to use separate table names, by changing the prefix as described above; the other is to specify a "region" name for partitioning data within a single table.
An important use case for this is when the MessageStore is managing persistent queues backing a Spring Integration Message Channel.
The message data for a persistent channel is keyed in the store on the channel name, so if the channel names are not globally unique then there is the danger of channels picking up data that was not intended for them.
To avoid this, the message store region can be used to keep data separate for different physical channels that happen to have the same logical name.
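A sketch of a region-partitioned store in Java configuration (the region name "orders" is illustrative):

```java
@Bean
public JdbcMessageStore orderMessageStore(DataSource dataSource) {
    JdbcMessageStore store = new JdbcMessageStore(dataSource);
    // Rows from different regions share the same tables but never mix,
    // so two channels with the same logical name stay isolated
    store.setRegion("orders");
    return store;
}
```

A second store bean with a different region value can then safely coexist against the same database tables.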
In certain situations plain JDBC support is not sufficient. Maybe you deal with legacy relational database schemas or you have complex data processing needs, but ultimately you have to use Stored Procedures or Stored Functions. Since Spring Integration 2.1, we provide three components in order to execute Stored Procedures or Stored Functions:
In order to enable calls to Stored Procedures and