Spring Cloud Stream Core
This section goes into more detail about how you can work with Spring Cloud Stream. It covers topics such as creating and running stream applications.
1. Introducing Spring Cloud Stream
Spring Cloud Stream is a framework for building message-driven microservice applications. Spring Cloud Stream builds upon Spring Boot to create standalone, production-grade Spring applications, and uses Spring Integration to provide connectivity to message brokers. It provides opinionated configuration of middleware from several vendors, introducing the concepts of persistent publish-subscribe semantics, consumer groups, and partitions.
You can add the @EnableBinding
annotation to your application to get immediate connectivity to a message broker, and you can add @StreamListener
to a method to cause it to receive events for stream processing.
The following is a simple sink application which receives external messages.
@SpringBootApplication
@EnableBinding(Sink.class)
public class VoteRecordingSinkApplication {
public static void main(String[] args) {
SpringApplication.run(VoteRecordingSinkApplication.class, args);
}
@StreamListener(Sink.INPUT)
public void processVote(Vote vote) {
votingService.recordVote(vote);
}
}
The @EnableBinding
annotation takes one or more interfaces as parameters (in this case, the parameter is a single Sink
interface).
An interface declares input and/or output channels.
Spring Cloud Stream provides the interfaces Source
, Sink
, and Processor
; you can also define your own interfaces.
The following is the definition of the Sink
interface:
public interface Sink {
String INPUT = "input";
@Input(Sink.INPUT)
SubscribableChannel input();
}
The @Input
annotation identifies an input channel, through which received messages enter the application; the @Output
annotation identifies an output channel, through which published messages leave the application.
The @Input
and @Output
annotations can take a channel name as a parameter; if a name is not provided, the name of the annotated method will be used.
Spring Cloud Stream will create an implementation of the interface for you. You can use this in the application by autowiring it, as in the following example of a test case.
@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = VoteRecordingSinkApplication.class)
@WebAppConfiguration
@DirtiesContext
public class StreamApplicationTests {
@Autowired
private Sink sink;
@Test
public void contextLoads() {
assertNotNull(this.sink.input());
}
}
2. Main Concepts
Spring Cloud Stream provides a number of abstractions and primitives that simplify the writing of message-driven microservice applications. This section gives an overview of the following:
- Spring Cloud Stream’s application model
- The Binder abstraction
- Persistent publish-subscribe support
- Consumer group support
- Partitioning support
- A pluggable Binder API
2.1. Application Model
A Spring Cloud Stream application consists of a middleware-neutral core. The application communicates with the outside world through input and output channels injected into it by Spring Cloud Stream. Channels are connected to external brokers through middleware-specific Binder implementations.
2.1.1. Fat JAR
Spring Cloud Stream applications can be run in standalone mode from your IDE for testing. To run a Spring Cloud Stream application in production, you can create an executable (or "fat") JAR by using the standard Spring Boot tooling provided for Maven or Gradle.
2.2. The Binder Abstraction
Spring Cloud Stream provides Binder implementations for Kafka and RabbitMQ. Spring Cloud Stream also includes a TestSupportBinder, which leaves a channel unmodified so that tests can interact with channels directly and reliably assert on what is received. You can use the extensible API to write your own Binder.
Spring Cloud Stream uses Spring Boot for configuration, and the Binder abstraction makes it possible for a Spring Cloud Stream application to be flexible in how it connects to middleware.
For example, deployers can dynamically choose, at runtime, the destinations (e.g., the Kafka topics or RabbitMQ exchanges) to which channels connect.
Such configuration can be provided through external configuration properties and in any form supported by Spring Boot (including application arguments, environment variables, and application.yml
or application.properties
files).
In the sink example from the Introducing Spring Cloud Stream section, setting the application property spring.cloud.stream.bindings.input.destination
to raw-sensor-data
will cause it to read from the raw-sensor-data
Kafka topic, or from a queue bound to the raw-sensor-data
RabbitMQ exchange.
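For reference, the same setting can be expressed in application.yml as follows (a minimal sketch showing only the destination):
spring:
  cloud:
    stream:
      bindings:
        input:
          destination: raw-sensor-data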
Spring Cloud Stream automatically detects and uses a binder found on the classpath. You can easily use different types of middleware with the same code: just include a different binder at build time. For more complex use cases, you can also package multiple binders with your application and have it choose the binder, and even whether to use different binders for different channels, at runtime.
2.3. Persistent Publish-Subscribe Support
Communication between applications follows a publish-subscribe model, where data is broadcast through shared topics. This can be seen in the following figure, which shows a typical deployment for a set of interacting Spring Cloud Stream applications.
Data reported by sensors to an HTTP endpoint is sent to a common destination named raw-sensor-data
.
From the destination, it is independently processed by a microservice application that computes time-windowed averages and by another microservice application that ingests the raw data into HDFS.
In order to process the data, both applications declare the topic as their input at runtime.
The publish-subscribe communication model reduces the complexity of both the producer and the consumer, and allows new applications to be added to the topology without disruption of the existing flow. For example, downstream from the average-calculating application, you can add an application that calculates the highest temperature values for display and monitoring. You can then add another application that interprets the same flow of averages for fault detection. Doing all communication through shared topics rather than point-to-point queues reduces coupling between microservices.
While the concept of publish-subscribe messaging is not new, Spring Cloud Stream takes the extra step of making it an opinionated choice for its application model. By using native middleware support, Spring Cloud Stream also simplifies use of the publish-subscribe model across different platforms.
2.4. Consumer Groups
While the publish-subscribe model makes it easy to connect applications through shared topics, the ability to scale up by creating multiple instances of a given application is equally important. When doing this, different instances of an application are placed in a competing consumer relationship, where only one of the instances is expected to handle a given message.
Spring Cloud Stream models this behavior through the concept of a consumer group.
(Spring Cloud Stream consumer groups are similar to and inspired by Kafka consumer groups.)
Each consumer binding can use the spring.cloud.stream.bindings.<channelName>.group
property to specify a group name.
For the consumers shown in the following figure, this property would be set as spring.cloud.stream.bindings.<channelName>.group=hdfsWrite
or spring.cloud.stream.bindings.<channelName>.group=average
.
All groups which subscribe to a given destination receive a copy of published data, but only one member of each group receives a given message from that destination. By default, when a group is not specified, Spring Cloud Stream assigns the application to an anonymous and independent single-member consumer group that is in a publish-subscribe relationship with all other consumer groups.
2.4.1. Durability
Consistent with the opinionated application model of Spring Cloud Stream, consumer group subscriptions are durable. That is, a binder implementation ensures that group subscriptions are persistent, and once at least one subscription for a group has been created, the group will receive messages, even if they are sent while all applications in the group are stopped.
Anonymous subscriptions are non-durable by nature. For some binder implementations (e.g., RabbitMQ), it is possible to have non-durable group subscriptions.
In general, it is preferable to always specify a consumer group when binding an application to a given destination. When scaling up a Spring Cloud Stream application, you must specify a consumer group for each of its input bindings. This prevents the application’s instances from receiving duplicate messages (unless that behavior is desired, which is unusual).
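For example, each scaled instance of the averaging application from the figure above could be started with the following properties (a minimal sketch; the channel name input reuses the destination and group names mentioned earlier):
spring.cloud.stream.bindings.input.destination=raw-sensor-data
spring.cloud.stream.bindings.input.group=average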
2.5. Partitioning Support
Spring Cloud Stream provides support for partitioning data between multiple instances of a given application. In a partitioned scenario, the physical communication medium (e.g., the broker topic) is viewed as being structured into multiple partitions. One or more producer application instances send data to multiple consumer application instances and ensure that data identified by common characteristics are processed by the same consumer instance.
Spring Cloud Stream provides a common abstraction for implementing partitioned processing use cases in a uniform fashion. Partitioning can thus be used whether the broker itself is naturally partitioned (e.g., Kafka) or not (e.g., RabbitMQ).
Partitioning is a critical concept in stateful processing, where it is essential, for either performance or consistency reasons, to ensure that all related data is processed together. For example, in the time-windowed average calculation example, it is important that all measurements from any given sensor are processed by the same application instance.
To set up a partitioned processing scenario, you must configure both the data-producing and the data-consuming ends.
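The following is a minimal sketch of such a configuration, using the producer and consumer binding properties described later in this document (the channel names output and input, the key expression, and the instance counts are illustrative):
# Producer side: partition outbound data by payload.id across two partitions
spring.cloud.stream.bindings.output.producer.partitionKeyExpression=payload.id
spring.cloud.stream.bindings.output.producer.partitionCount=2
# Consumer side (instance 0 of 2): declare that the input is partitioned
spring.cloud.stream.bindings.input.consumer.partitioned=true
spring.cloud.stream.instanceCount=2
spring.cloud.stream.instanceIndex=0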
3. Programming Model
This section describes Spring Cloud Stream’s programming model. Spring Cloud Stream provides a number of predefined annotations for declaring bound input and output channels as well as how to listen to channels.
3.1. Declaring and Binding Channels
3.1.1. Triggering Binding Via @EnableBinding
You can turn a Spring application into a Spring Cloud Stream application by applying the @EnableBinding
annotation to one of the application’s configuration classes.
The @EnableBinding
annotation itself is meta-annotated with @Configuration
and triggers the configuration of Spring Cloud Stream infrastructure:
...
@Import(...)
@Configuration
@EnableIntegration
public @interface EnableBinding {
...
Class<?>[] value() default {};
}
The @EnableBinding
annotation can take as parameters one or more interface classes that contain methods which represent bindable components (typically message channels).
3.1.2. @Input
and @Output
A Spring Cloud Stream application can have an arbitrary number of input and output channels defined in an interface as @Input
and @Output
methods:
public interface Barista {
@Input
SubscribableChannel orders();
@Output
MessageChannel hotDrinks();
@Output
MessageChannel coldDrinks();
}
Using this interface as a parameter to @EnableBinding
will trigger the creation of three bound channels named orders
, hotDrinks
, and coldDrinks
, respectively.
@EnableBinding(Barista.class)
public class CafeConfiguration {
...
}
In Spring Cloud Stream, the bindable components are message channels: the Spring Messaging MessageChannel (for outbound channels) and its extension, SubscribableChannel (for inbound channels).
Customizing Channel Names
Using the @Input
and @Output
annotations, you can specify a customized channel name for the channel, as shown in the following example:
public interface Barista {
...
@Input("inboundOrders")
SubscribableChannel orders();
}
In this example, the created bound channel will be named inboundOrders
.
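Binding configuration properties then use this channel name. For example, a destination for this channel would be configured as follows (a sketch; the destination value is illustrative):
spring.cloud.stream.bindings.inboundOrders.destination=orders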
Source
, Sink
, and Processor
For easy addressing of the most common use cases, which involve either an input channel, an output channel, or both, Spring Cloud Stream provides three predefined interfaces out of the box.
Source
can be used for an application which has a single outbound channel.
public interface Source {
String OUTPUT = "output";
@Output(Source.OUTPUT)
MessageChannel output();
}
Sink
can be used for an application which has a single inbound channel.
public interface Sink {
String INPUT = "input";
@Input(Sink.INPUT)
SubscribableChannel input();
}
Processor
can be used for an application which has both an inbound channel and an outbound channel.
public interface Processor extends Source, Sink {
}
Spring Cloud Stream provides no special handling for any of these interfaces; they are only provided out of the box.
3.1.3. Accessing Bound Channels
Injecting the Bound Interfaces
For each bound interface, Spring Cloud Stream will generate a bean that implements the interface.
Invoking a @Input
-annotated or @Output
-annotated method of one of these beans will return the relevant bound channel.
The bean in the following example sends a message on the output channel when its hello
method is invoked.
It invokes output()
on the injected Source
bean to retrieve the target channel.
@Component
public class SendingBean {
private Source source;
@Autowired
public SendingBean(Source source) {
this.source = source;
}
public void sayHello(String name) {
source.output().send(MessageBuilder.withPayload(name).build());
}
}
Injecting Channels Directly
Bound channels can be also injected directly:
@Component
public class SendingBean {
private MessageChannel output;
@Autowired
public SendingBean(MessageChannel output) {
this.output = output;
}
public void sayHello(String name) {
output.send(MessageBuilder.withPayload(name).build());
}
}
If the name of the channel is customized on the declaring annotation, that name should be used instead of the method name. Given the following declaration:
public interface CustomSource {
...
@Output("customOutput")
MessageChannel output();
}
The channel will be injected as shown in the following example:
@Component
public class SendingBean {
private MessageChannel output;
@Autowired
public SendingBean(@Qualifier("customOutput") MessageChannel output) {
this.output = output;
}
public void sayHello(String name) {
this.output.send(MessageBuilder.withPayload(name).build());
}
}
3.1.4. Producing and Consuming Messages
You can write a Spring Cloud Stream application using either Spring Integration annotations or Spring Cloud Stream’s @StreamListener
annotation.
The @StreamListener
annotation is modeled after other Spring Messaging annotations (such as @MessageMapping
, @JmsListener
, @RabbitListener
, etc.) but adds content type management and type coercion features.
Native Spring Integration Support
Because Spring Cloud Stream is based on Spring Integration, Stream completely inherits Integration’s foundation and infrastructure, as well as its components.
For example, you can attach the output channel of a Source
to a MessageSource
:
@EnableBinding(Source.class)
public class TimerSource {
@Value("${format}")
private String format;
@Bean
@InboundChannelAdapter(value = Source.OUTPUT, poller = @Poller(fixedDelay = "${fixedDelay}", maxMessagesPerPoll = "1"))
public MessageSource<String> timerMessageSource() {
return () -> new GenericMessage<>(new SimpleDateFormat(format).format(new Date()));
}
}
Or you can use a processor’s channels in a transformer:
@EnableBinding(Processor.class)
public class TransformProcessor {
@Transformer(inputChannel = Processor.INPUT, outputChannel = Processor.OUTPUT)
public Object transform(String message) {
return message.toUpperCase();
}
}
It is important to understand that when you consume from the same binding using @StreamListener, a pub-sub model is used: each method annotated with @StreamListener receives its own copy of the message, and each one has its own consumer group. By contrast, if the same bindable channel is consumed through Spring Integration annotations (such as @Aggregator, @Transformer, or @ServiceActivator), the subscribers compete for messages and no individual consumer group is created for each subscription.
Spring Integration Error Channel Support
Spring Cloud Stream supports publishing error messages received by the Spring Integration global
error channel. Error messages sent to the errorChannel
can be published to a specific destination
at the broker by configuring a binding for the outbound target named error
. For example, to
publish error messages to a broker destination named "myErrors", provide the following property:
spring.cloud.stream.bindings.error.destination=myErrors
.
Message Channel Binders and Error Channels
Starting with version 1.3, some MessageChannel
- based binders publish errors to a discrete error channel for each destination.
In addition, these error channels are bridged to the global Spring Integration errorChannel
mentioned above.
You can therefore consume errors for specific destinations and/or for all destinations, using a standard Spring Integration flow (IntegrationFlow
, @ServiceActivator
, etc).
On the consumer side, the listener thread catches any exceptions and forwards an ErrorMessage
to the destination’s error channel.
The payload of the message is a MessagingException
with the normal failedMessage
and cause
properties.
Usually, the raw data received from the broker is included in a header.
For binders that support (and are configured with) a dead letter destination, a MessagePublishingErrorHandler is subscribed to the channel, and the raw data is forwarded to the dead letter destination.
On the producer side, for binders that support some kind of async result after publishing messages (e.g. RabbitMQ, Kafka), you can enable an error channel by setting the …producer.errorChannelEnabled property to true.
The payload of the ErrorMessage
depends on the binder implementation but will be a MessagingException
with the normal failedMessage
property, as well as additional properties about the failure.
Refer to the binder documentation for complete details.
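As a minimal sketch of consuming such errors (the configuration class name and the logging are illustrative, not part of the framework), a standard Spring Integration @ServiceActivator can be subscribed to the global errorChannel in an application that already uses @EnableBinding:
// Imports omitted
@Configuration
public class ErrorHandlingConfiguration {
    private static final Logger logger = LoggerFactory.getLogger(ErrorHandlingConfiguration.class);
    // Subscribes to the global Spring Integration error channel ("errorChannel")
    @ServiceActivator(inputChannel = "errorChannel")
    public void handleError(ErrorMessage errorMessage) {
        Throwable payload = errorMessage.getPayload();
        // As described above, the payload is usually a MessagingException carrying the failed message
        if (payload instanceof MessagingException) {
            logger.error("Failed message: " + ((MessagingException) payload).getFailedMessage(), payload);
        }
        else {
            logger.error("Error received", payload);
        }
    }
}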
Using @StreamListener for Automatic Content Type Handling
Complementary to its Spring Integration support, Spring Cloud Stream provides its own @StreamListener
annotation, modeled after other Spring Messaging annotations (e.g. @MessageMapping
, @JmsListener
, @RabbitListener
, etc.).
The @StreamListener
annotation provides a simpler model for handling inbound messages, especially when dealing with use cases that involve content type management and type coercion.
Spring Cloud Stream provides an extensible MessageConverter
mechanism for handling data conversion by bound channels and for, in this case, dispatching to methods annotated with @StreamListener
.
The following is an example of an application which processes external Vote
events:
@EnableBinding(Sink.class)
public class VoteHandler {
@Autowired
VotingService votingService;
@StreamListener(Sink.INPUT)
public void handle(Vote vote) {
votingService.record(vote);
}
}
The distinction between @StreamListener
and a Spring Integration @ServiceActivator
is seen when considering an inbound Message
that has a String
payload and a contentType
header of application/json
.
In the case of @StreamListener
, the MessageConverter
mechanism will use the contentType
header to parse the String
payload into a Vote
object.
As with other Spring Messaging methods, method arguments can be annotated with @Payload
, @Headers
and @Header
.
For methods which return data, you must use the @SendTo annotation to specify the output binding destination for the data returned by the method.
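As a minimal sketch combining these features (the type header is only an example header, not a framework convention), a @StreamListener method can use annotated arguments together with @SendTo for its return value:
// Imports omitted
@EnableBinding(Processor.class)
public class EchoTransformer {
    // The converted payload and an optional header are injected; the return value is sent to the output binding
    @StreamListener(Processor.INPUT)
    @SendTo(Processor.OUTPUT)
    public String transform(@Payload String body, @Header(name = "type", required = false) String type) {
        return (type == null ? body : type + ":" + body).toUpperCase();
    }
}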
Using @StreamListener for dispatching messages to multiple methods
Since version 1.2, Spring Cloud Stream supports dispatching messages to multiple @StreamListener
methods registered on an input channel, based on a condition.
In order to be eligible for conditional dispatching, a method must satisfy the following conditions:
- it must not return a value
- it must be an individual message handling method (reactive API methods are not supported)
The condition is specified via a SpEL expression in the condition
attribute of the annotation and is evaluated for each message.
All the handlers that match the condition are invoked in the same thread, and no assumption should be made about the order in which the invocations take place.
An example of using @StreamListener
with dispatching conditions can be seen below.
In this example, all the messages bearing a header type
with the value foo
will be dispatched to the receiveFoo
method, and all the messages bearing a header type
with the value bar
will be dispatched to the receiveBar
method.
@EnableBinding(Sink.class)
@EnableAutoConfiguration
public static class TestPojoWithAnnotatedArguments {
@StreamListener(target = Sink.INPUT, condition = "headers['type']=='foo'")
public void receiveFoo(@Payload FooPojo fooPojo) {
// handle the message
}
@StreamListener(target = Sink.INPUT, condition = "headers['type']=='bar'")
public void receiveBar(@Payload BarPojo barPojo) {
// handle the message
}
}
Dispatching via @StreamListener conditions is only supported for handlers of individual messages, not for reactive programming support (described below).
3.1.5. Reactive Programming Support
Spring Cloud Stream also supports the use of reactive APIs where incoming and outgoing data is handled as continuous data flows.
Support for reactive APIs is available via the spring-cloud-stream-reactive module, which needs to be added explicitly to your project.
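For example, with Maven (assuming the version is managed by the Spring Cloud Stream dependency management, as with the binder dependency shown later in this document):
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-stream-reactive</artifactId>
</dependency>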
The programming model with reactive APIs is declarative, where instead of specifying how each individual message should be handled, you can use operators that describe functional transformations from inbound to outbound data flows.
Spring Cloud Stream supports the following reactive APIs:
- Reactor
- RxJava 1.x
In the future, it is intended to support a more generic model based on Reactive Streams.
The reactive programming model also uses the @StreamListener annotation for setting up reactive handlers. The differences are that:
- the @StreamListener annotation must not specify an input or output, as they are provided as arguments and return values of the method;
- the arguments of the method must be annotated with @Input and @Output, indicating which input or output the incoming and outgoing data flows connect to, respectively;
- the return value of the method, if any, is annotated with @Output, indicating the output where data will be sent.
Reactive programming support requires Java 1.8.
As of Spring Cloud Stream 1.1.1 and later (starting with release train Brooklyn.SR2), reactive programming support requires the use of Reactor 3.0.4.RELEASE and higher.
Earlier Reactor versions (including 3.0.1.RELEASE, 3.0.2.RELEASE and 3.0.3.RELEASE) are not supported.
The use of the term 'reactive' currently refers to the reactive APIs being used, not to the execution model being reactive (that is, the bound endpoints still use a 'push' rather than a 'pull' model).
Reactor-based handlers
A Reactor-based handler can have the following argument types:
- For arguments annotated with @Input, it supports the Reactor type Flux. The parameterization of the inbound Flux follows the same rules as in the case of individual message handling: it can be the entire Message, a POJO which can be the Message payload, or a POJO which is the result of a transformation based on the Message content-type header. Multiple inputs can be provided;
- For arguments annotated with @Output, it supports the type FluxSender, which connects a Flux produced by the method with an output. Generally speaking, specifying outputs as arguments is only recommended when the method can have multiple outputs.
A Reactor-based handler supports a return type of Flux, in which case it must be annotated with @Output. We recommend using the return value of the method when a single output Flux is available.
Here is an example of a simple Reactor-based Processor.
@EnableBinding(Processor.class)
@EnableAutoConfiguration
public static class UppercaseTransformer {
@StreamListener
@Output(Processor.OUTPUT)
public Flux<String> receive(@Input(Processor.INPUT) Flux<String> input) {
return input.map(s -> s.toUpperCase());
}
}
The same processor using output arguments looks like this:
@EnableBinding(Processor.class)
@EnableAutoConfiguration
public static class UppercaseTransformer {
@StreamListener
public void receive(@Input(Processor.INPUT) Flux<String> input,
@Output(Processor.OUTPUT) FluxSender output) {
output.send(input.map(s -> s.toUpperCase()));
}
}
RxJava 1.x support
RxJava 1.x handlers follow the same rules as Reactor-based ones, but use Observable and ObservableSender arguments and return types.
So the first example above will become:
@EnableBinding(Processor.class)
@EnableAutoConfiguration
public static class UppercaseTransformer {
@StreamListener
@Output(Processor.OUTPUT)
public Observable<String> receive(@Input(Processor.INPUT) Observable<String> input) {
return input.map(s -> s.toUpperCase());
}
}
The second example above will become:
@EnableBinding(Processor.class)
@EnableAutoConfiguration
public static class UppercaseTransformer {
@StreamListener
public void receive(@Input(Processor.INPUT) Observable<String> input,
@Output(Processor.OUTPUT) ObservableSender output) {
output.send(input.map(s -> s.toUpperCase()));
}
}
Reactive Sources
Spring Cloud Stream reactive support also provides the ability to create reactive sources through the @StreamEmitter annotation. Using the @StreamEmitter annotation, a regular source may be converted to a reactive one. @StreamEmitter is a method-level annotation that marks a method as an emitter to outputs declared via @EnableBinding. It is not allowed to use the @Input annotation along with @StreamEmitter, as the methods marked with this annotation are not listening to any input; rather, they generate output. Following the same programming model used with @StreamListener, @StreamEmitter also allows flexible ways of using the @Output annotation, depending on whether the method has any arguments, a return type, and so on.
Here are some examples of using StreamEmitter in various styles.
The following example emits the "Hello World" message every millisecond and publishes it to a Reactor Flux. In this case, the resulting messages in the Flux are sent to the output channel of the Source.
@EnableBinding(Source.class)
@EnableAutoConfiguration
public static class HelloWorldEmitter {
@StreamEmitter
@Output(Source.OUTPUT)
public Flux<String> emit() {
return Flux.intervalMillis(1)
.map(l -> "Hello World");
}
}
The following is another flavor of the same sample as above. Instead of returning a Flux, this method uses a FluxSender to programmatically send a Flux from a source.
@EnableBinding(Source.class)
@EnableAutoConfiguration
public static class HelloWorldEmitter {
@StreamEmitter
@Output(Source.OUTPUT)
public void emit(FluxSender output) {
output.send(Flux.intervalMillis(1)
.map(l -> "Hello World"));
}
}
The following is exactly the same as the above snippet in functionality and style. However, instead of using an explicit @Output annotation at the method level, it is used at the method parameter level.
@EnableBinding(Source.class)
@EnableAutoConfiguration
public static class HelloWorldEmitter {
@StreamEmitter
public void emit(@Output(Source.OUTPUT) FluxSender output) {
output.send(Flux.intervalMillis(1)
.map(l -> "Hello World"));
}
}
Here is yet another flavor of writing reactive sources, using the Reactive Streams Publisher API and the support for it in the Spring Integration Java DSL. The Publisher still uses Reactor Flux under the hood, but from an application perspective that is transparent to the user, who only needs Reactive Streams and the Java DSL for Spring Integration.
@EnableBinding(Source.class)
@EnableAutoConfiguration
public static class HelloWorldEmitter {
@StreamEmitter
@Output(Source.OUTPUT)
@Bean
public Publisher<Message<String>> emit() {
return IntegrationFlows.from(() ->
new GenericMessage<>("Hello World"),
e -> e.poller(p -> p.fixedDelay(1)))
.toReactivePublisher();
}
}
3.1.6. Aggregation
Spring Cloud Stream provides support for aggregating multiple applications together, connecting their input and output channels directly and avoiding the additional cost of exchanging messages via a broker. As of version 1.0 of Spring Cloud Stream, aggregation is supported only for the following types of applications:
- sources: applications with a single output channel named output, typically having a single binding of the type org.springframework.cloud.stream.messaging.Source
- sinks: applications with a single input channel named input, typically having a single binding of the type org.springframework.cloud.stream.messaging.Sink
- processors: applications with a single input channel named input and a single output channel named output, typically having a single binding of the type org.springframework.cloud.stream.messaging.Processor
They can be aggregated together by creating a sequence of interconnected applications, in which the output channel of an element in the sequence is connected to the input channel of the next element, if it exists. A sequence can start with either a source or a processor, it can contain an arbitrary number of processors and must end with either a processor or a sink.
Depending on the nature of the starting and ending element, the sequence may have one or more bindable channels, as follows:
- if the sequence starts with a source and ends with a sink, all communication between the applications is direct and no channels will be bound
- if the sequence starts with a processor, then its input channel will become the input channel of the aggregate and will be bound accordingly
- if the sequence ends with a processor, then its output channel will become the output channel of the aggregate and will be bound accordingly
Aggregation is performed using the AggregateApplicationBuilder
utility class, as in the following example.
Let’s consider a project in which we have a source, a processor, and a sink, which may be defined in the project or may be contained in one of the project’s dependencies.
Each component (source, sink or processor) in an aggregate application must be provided in a separate package if the configuration classes use @SpringBootApplication. This is required to avoid cross-talk between the applications, due to the classpath scanning that @SpringBootApplication performs on configuration classes within the same package.
package com.app.mysink;
@SpringBootApplication
@EnableBinding(Sink.class)
public class SinkApplication {
private static Logger logger = LoggerFactory.getLogger(SinkApplication.class);
@ServiceActivator(inputChannel=Sink.INPUT)
public void loggerSink(Object payload) {
logger.info("Received: " + payload);
}
}
package com.app.myprocessor;
// Imports omitted
@SpringBootApplication
@EnableBinding(Processor.class)
public class ProcessorApplication {
@Transformer(inputChannel = Processor.INPUT, outputChannel = Processor.OUTPUT)
public String loggerSink(String payload) {
return payload.toUpperCase();
}
}
package com.app.mysource;
// Imports omitted
@SpringBootApplication
@EnableBinding(Source.class)
public class SourceApplication {
@InboundChannelAdapter(value = Source.OUTPUT)
public String timerMessageSource() {
return new SimpleDateFormat().format(new Date());
}
}
Each configuration can be used for running a separate component, but in this case they can be aggregated together as follows:
package com.app;
// Imports omitted
@SpringBootApplication
public class SampleAggregateApplication {
public static void main(String[] args) {
new AggregateApplicationBuilder()
.from(SourceApplication.class).args("--fixedDelay=5000")
.via(ProcessorApplication.class)
.to(SinkApplication.class).args("--debug=true").run(args);
}
}
The starting component of the sequence is provided as argument to the from()
method.
The ending component of the sequence is provided as argument to the to()
method.
Intermediate processors are provided as argument to the via()
method.
Multiple processors of the same type can be chained together (e.g. for pipelining transformations with different configurations).
For each component, the builder can provide runtime arguments for Spring Boot configuration.
Configuring aggregate application
Spring Cloud Stream supports passing properties to the individual applications inside the aggregate application by using the namespace as a prefix.
The namespace can be set for applications as follows:
@SpringBootApplication
public class SampleAggregateApplication {
public static void main(String[] args) {
new AggregateApplicationBuilder()
.from(SourceApplication.class).namespace("source").args("--fixedDelay=5000")
.via(ProcessorApplication.class).namespace("processor1")
.to(SinkApplication.class).namespace("sink").args("--debug=true").run(args);
}
}
Once the namespace is set for the individual applications, the application properties with the namespace as a prefix can be passed to the aggregate application using any supported property source (command line arguments, environment properties, and so on).
For instance, to override the default fixedDelay
and debug
properties of 'source' and 'sink' applications:
java -jar target/MyAggregateApplication-0.0.1-SNAPSHOT.jar --source.fixedDelay=10000 --sink.debug=false
Configuring binding service properties for a non-self-contained aggregate application
A non-self-contained aggregate application is bound to an external broker via either or both of its inbound and outbound components (typically, message channels), while the applications inside the aggregate application are bound directly to each other. For example, a source application’s output and a processor application’s input are directly bound, while the processor’s output channel is bound to an external destination at the broker. When passing the binding service properties for a non-self-contained aggregate application, you must pass them to the aggregate application itself instead of setting them as 'args' on the individual child applications. For instance:
@SpringBootApplication
public class SampleAggregateApplication {
public static void main(String[] args) {
new AggregateApplicationBuilder()
.from(SourceApplication.class).namespace("source").args("--fixedDelay=5000")
.via(ProcessorApplication.class).namespace("processor1").args("--debug=true").run(args);
}
}
Binding properties such as --spring.cloud.stream.bindings.output.destination=processor-output need to be specified as one of the external configuration properties (command line arguments, environment properties, and so on).
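For instance, reusing the jar name from the earlier example, the property can be passed on the command line (a sketch):
java -jar target/MyAggregateApplication-0.0.1-SNAPSHOT.jar --spring.cloud.stream.bindings.output.destination=processor-output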
4. Binders
Spring Cloud Stream provides a Binder abstraction for use in connecting to physical destinations at the external middleware. This section provides information about the main concepts behind the Binder SPI, its main components, and implementation-specific details.
4.1. Producers and Consumers
A producer is any component that sends messages to a channel.
The channel can be bound to an external message broker via a Binder implementation for that broker.
When invoking the bindProducer()
method, the first parameter is the name of the destination within the broker, the second parameter is the local channel instance to which the producer will send messages, and the third parameter contains properties (such as a partition key expression) to be used within the adapter that is created for that channel.
A consumer is any component that receives messages from a channel.
As with a producer, the consumer’s channel can be bound to an external message broker.
When invoking the bindConsumer()
method, the first parameter is the destination name, and a second parameter provides the name of a logical group of consumers.
Each group that is represented by consumer bindings for a given destination receives a copy of each message that a producer sends to that destination (i.e., publish-subscribe semantics).
If there are multiple consumer instances bound using the same group name, then messages will be load-balanced across those consumer instances so that each message sent by a producer is consumed by only a single consumer instance within each group (i.e., queueing semantics).
4.2. Binder SPI
The Binder SPI consists of a number of interfaces, out-of-the box utility classes and discovery strategies that provide a pluggable mechanism for connecting to external middleware.
The key point of the SPI is the Binder
interface which is a strategy for connecting inputs and outputs to external middleware.
public interface Binder<T, C extends ConsumerProperties, P extends ProducerProperties> {
Binding<T> bindConsumer(String name, String group, T inboundBindTarget, C consumerProperties);
Binding<T> bindProducer(String name, T outboundBindTarget, P producerProperties);
}
The interface is parameterized, offering a number of extension points:
- input and output bind targets: as of version 1.0, only MessageChannel is supported, but this is intended to be used as an extension point in the future;
- extended consumer and producer properties: allowing specific Binder implementations to add supplemental properties which can be supported in a type-safe manner.
A typical binder implementation consists of the following:
- a class that implements the Binder interface;
- a Spring @Configuration class that creates a bean of the type above along with the middleware connection infrastructure;
- a META-INF/spring.binders file found on the classpath containing one or more binder definitions, e.g.:
kafka:\
org.springframework.cloud.stream.binder.kafka.config.KafkaBinderConfiguration
4.3. Binder Detection
Spring Cloud Stream relies on implementations of the Binder SPI to perform the task of connecting channels to message brokers. Each Binder implementation typically connects to one type of messaging system.
4.3.1. Classpath Detection
By default, Spring Cloud Stream relies on Spring Boot’s auto-configuration to configure the binding process. If a single Binder implementation is found on the classpath, Spring Cloud Stream will use it automatically. For example, a Spring Cloud Stream project that aims to bind only to RabbitMQ can simply add the following dependency:
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-rabbit</artifactId>
</dependency>
For the specific maven coordinates of other binder dependencies, please refer to the documentation of that binder implementation.
4.4. Multiple Binders on the Classpath
When multiple binders are present on the classpath, the application must indicate which binder is to be used for each channel binding.
Each binder configuration contains a META-INF/spring.binders
, which is a simple properties file:
rabbit:\
org.springframework.cloud.stream.binder.rabbit.config.RabbitServiceAutoConfiguration
Similar files exist for the other provided binder implementations (e.g., Kafka), and custom binder implementations are expected to provide them, as well.
The key represents an identifying name for the binder implementation, whereas the value is a comma-separated list of configuration classes that each contain one and only one bean definition of type org.springframework.cloud.stream.binder.Binder
.
Binder selection can either be performed globally, using the spring.cloud.stream.defaultBinder
property (e.g., spring.cloud.stream.defaultBinder=rabbit
) or individually, by configuring the binder on each channel binding.
For instance, a processor application (that has channels with the names input
and output
for read/write respectively) which reads from Kafka and writes to RabbitMQ can specify the following configuration:
spring.cloud.stream.bindings.input.binder=kafka spring.cloud.stream.bindings.output.binder=rabbit
4.5. Connecting to Multiple Systems
By default, binders share the application’s Spring Boot auto-configuration, so that one instance of each binder found on the classpath will be created. If your application should connect to more than one broker of the same type, you can specify multiple binder configurations, each with different environment settings.
Turning on explicit binder configuration will disable the default binder configuration process altogether.
If you do this, all binders in use must be included in the configuration.
Frameworks that intend to use Spring Cloud Stream transparently may create binder configurations that can be referenced by name, but will not affect the default binder configuration.
In order to do so, a binder configuration may have its defaultCandidate flag set to false (for example, spring.cloud.stream.binders.<configurationName>.defaultCandidate=false). This denotes a configuration that exists independently of the default binder configuration process.
For example, this is the typical configuration for a processor application which connects to two RabbitMQ broker instances:
spring:
cloud:
stream:
bindings:
input:
destination: foo
binder: rabbit1
output:
destination: bar
binder: rabbit2
binders:
rabbit1:
type: rabbit
environment:
spring:
rabbitmq:
host: <host1>
rabbit2:
type: rabbit
environment:
spring:
rabbitmq:
host: <host2>
4.6. Binder configuration properties
The following properties are available when creating custom binder configurations.
They must be prefixed with spring.cloud.stream.binders.<configurationName>
.
- type: The binder type. It typically references one of the binders found on the classpath, in particular a key in a META-INF/spring.binders file. By default, it has the same value as the configuration name.
- inheritEnvironment: Whether the configuration will inherit the environment of the application itself. Default: true.
- environment: Root for a set of properties that can be used to customize the environment of the binder. When this is configured, the context in which the binder is being created is not a child of the application context. This allows for complete separation between the binder components and the application components. Default: empty.
- defaultCandidate: Whether the binder configuration is a candidate for being considered a default binder, or can be used only when explicitly referenced. This allows adding binder configurations without interfering with the default processing. Default: true.
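As a sketch, a binder configuration that should only be used when explicitly referenced (the configuration name transport and the host placeholder are illustrative) could be declared as follows:
spring:
  cloud:
    stream:
      binders:
        transport:
          type: rabbit
          defaultCandidate: false
          environment:
            spring:
              rabbitmq:
                host: <host3>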
5. Configuration Options
Spring Cloud Stream supports general configuration options as well as configuration for bindings and binders. Some binders allow additional binding properties to support middleware-specific features.
Configuration options can be provided to Spring Cloud Stream applications via any mechanism supported by Spring Boot. This includes application arguments, environment variables, and YAML or .properties files.
5.1. Spring Cloud Stream Properties
- spring.cloud.stream.instanceCount: The number of deployed instances of an application. Must be set for partitioning and if using Kafka. Default: 1.
- spring.cloud.stream.instanceIndex: The instance index of the application: a number from 0 to instanceCount - 1. Used for partitioning and with Kafka. Automatically set in Cloud Foundry to match the application’s instance index.
- spring.cloud.stream.dynamicDestinations: A list of destinations that can be bound dynamically (for example, in a dynamic routing scenario). If set, only listed destinations can be bound. Default: empty (allowing any destination to be bound).
- spring.cloud.stream.defaultBinder: The default binder to use, if multiple binders are configured. See Multiple Binders on the Classpath. Default: empty.
- spring.cloud.stream.overrideCloudConnectors: This property is only applicable when the cloud profile is active and Spring Cloud Connectors are provided with the application. If the property is false (the default), the binder will detect a suitable bound service (e.g. a RabbitMQ service bound in Cloud Foundry for the RabbitMQ binder) and will use it for creating connections (usually via Spring Cloud Connectors). When set to true, this property instructs binders to completely ignore the bound services and rely on Spring Boot properties (e.g. relying on the spring.rabbitmq.* properties provided in the environment for the RabbitMQ binder). The typical usage of this property is to be nested in a customized environment when connecting to multiple systems. Default: false.
5.2. Binding Properties
Binding properties are supplied using the format spring.cloud.stream.bindings.<channelName>.<property>=<value>
.
The <channelName>
represents the name of the channel being configured (e.g., output
for a Source
).
To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format spring.cloud.stream.default.<property>=<value>
.
In what follows, we indicate where we have omitted the spring.cloud.stream.bindings.<channelName>.
prefix and focus just on the property name, with the understanding that the prefix will be included at runtime.
5.2.1. Properties for Use of Spring Cloud Stream
The following binding properties are available for both input and output bindings and must be prefixed with spring.cloud.stream.bindings.<channelName>.
, e.g. spring.cloud.stream.bindings.input.destination=ticktock
.
Default values can be set by using the prefix spring.cloud.stream.default
, e.g. spring.cloud.stream.default.contentType=application/json
.
- destination: The target destination of a channel on the bound middleware (e.g., the RabbitMQ exchange or Kafka topic). If the channel is bound as a consumer, it could be bound to multiple destinations and the destination names can be specified as comma separated String values. If not set, the channel name is used instead. The default value of this property cannot be overridden.
- group: The consumer group of the channel. Applies only to inbound bindings. See Consumer Groups. Default: null (indicating an anonymous consumer).
- contentType: The content type of the channel. Default: null (so that no type coercion is performed).
- binder: The binder used by this binding. See Multiple Binders on the Classpath for details. Default: null (the default binder will be used, if one exists).
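A sketch combining these properties for an input channel (the destination, group, and content type values are illustrative) might look as follows:
spring.cloud.stream.bindings.input.destination=ticktock
spring.cloud.stream.bindings.input.group=myGroup
spring.cloud.stream.bindings.input.contentType=application/json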
5.2.2. Consumer properties
The following binding properties are available for input bindings only and must be prefixed with spring.cloud.stream.bindings.<channelName>.consumer.
, e.g. spring.cloud.stream.bindings.input.consumer.concurrency=3
.
Default values can be set by using the prefix spring.cloud.stream.default.consumer
, e.g. spring.cloud.stream.default.consumer.headerMode=raw
.
- concurrency: The concurrency of the inbound consumer. Default: 1.
- partitioned: Whether the consumer receives data from a partitioned producer. Default: false.
- headerMode: When set to raw, disables header parsing on input. Effective only for messaging middleware that does not support message headers natively and requires header embedding. Useful when inbound data is coming from outside Spring Cloud Stream applications. Default: embeddedHeaders.
- maxAttempts: If processing fails, the number of attempts to process the message (including the first). Set to 1 to disable retry. Default: 3.
- backOffInitialInterval: The backoff initial interval on retry. Default: 1000.
- backOffMaxInterval: The maximum backoff interval. Default: 10000.
- backOffMultiplier: The backoff multiplier. Default: 2.0.
- instanceIndex: When set to a value greater than or equal to zero, allows customizing the instance index of this consumer (if different from spring.cloud.stream.instanceIndex). When set to a negative value, it defaults to spring.cloud.stream.instanceIndex. Default: -1.
- instanceCount: When set to a value greater than or equal to zero, allows customizing the instance count of this consumer (if different from spring.cloud.stream.instanceCount). When set to a negative value, it defaults to spring.cloud.stream.instanceCount. Default: -1.
5.2.3. Producer Properties
The following binding properties are available for output bindings only and must be prefixed with spring.cloud.stream.bindings.<channelName>.producer.
, e.g. spring.cloud.stream.bindings.output.producer.partitionKeyExpression=payload.id
.
Default values can be set by using the prefix spring.cloud.stream.default.producer
, e.g. spring.cloud.stream.default.producer.partitionKeyExpression=payload.id
.
- partitionKeyExpression: A SpEL expression that determines how to partition outbound data. If set, or if partitionKeyExtractorClass is set, outbound data on this channel will be partitioned, and partitionCount must be set to a value greater than 1 to be effective. The two options are mutually exclusive. See Partitioning Support. Default: null.
- partitionKeyExtractorClass: A PartitionKeyExtractorStrategy implementation. If set, or if partitionKeyExpression is set, outbound data on this channel will be partitioned, and partitionCount must be set to a value greater than 1 to be effective. The two options are mutually exclusive. See Partitioning Support. Default: null.
- partitionSelectorClass: A PartitionSelectorStrategy implementation. Mutually exclusive with partitionSelectorExpression. If neither is set, the partition will be selected as the hashCode(key) % partitionCount, where key is computed via either partitionKeyExpression or partitionKeyExtractorClass. Default: null.
- partitionSelectorExpression: A SpEL expression for customizing partition selection. Mutually exclusive with partitionSelectorClass. If neither is set, the partition will be selected as the hashCode(key) % partitionCount, where key is computed via either partitionKeyExpression or partitionKeyExtractorClass. Default: null.
- partitionCount: The number of target partitions for the data, if partitioning is enabled. Must be set to a value greater than 1 if the producer is partitioned. On Kafka, interpreted as a hint; the larger of this value and the partition count of the target topic is used instead. Default: 1.
- requiredGroups: A comma-separated list of groups to which the producer must ensure message delivery even if they start after it has been created (e.g., by pre-creating durable queues in RabbitMQ).
- headerMode: When set to raw, disables header embedding on output. Effective only for messaging middleware that does not support message headers natively and requires header embedding. Useful when producing data for non-Spring Cloud Stream applications. Default: embeddedHeaders.
- useNativeEncoding: When set to true, the outbound message is serialized directly by the client library, which must be configured correspondingly (e.g., setting an appropriate Kafka producer value serializer). When this configuration is used, the outbound message marshalling is not based on the contentType of the binding. When native encoding is used, it is the responsibility of the consumer to use an appropriate decoder (e.g., a Kafka consumer value deserializer) to deserialize the inbound message. Also, when native encoding/decoding is used, the headerMode property is ignored and headers will not be embedded into the message. Default: false.
- errorChannelEnabled: When set to true, if the binder supports async send results, send failures will be sent to an error channel for the destination. See Message Channel Binders and Error Channels for more information. Default: false.
5.3. Using dynamically bound destinations
Besides the channels defined via @EnableBinding
, Spring Cloud Stream allows applications to send messages to dynamically bound destinations.
This is useful, for example, when the target destination needs to be determined at runtime.
Applications can do so by using the BinderAwareChannelResolver
bean, registered automatically by the @EnableBinding
annotation.
The property 'spring.cloud.stream.dynamicDestinations' can be used to restrict the dynamic destination names to a set known beforehand (whitelisting). If the property is not set, any destination can be bound dynamically.
The BinderAwareChannelResolver
can be used directly as in the following example, in which a REST controller uses a path variable to decide the target channel.
@EnableBinding
@Controller
public class SourceWithDynamicDestination {
@Autowired
private BinderAwareChannelResolver resolver;
@RequestMapping(path = "/{target}", method = POST, consumes = "*/*")
@ResponseStatus(HttpStatus.ACCEPTED)
public void handleRequest(@RequestBody String body, @PathVariable("target") String target,
@RequestHeader(HttpHeaders.CONTENT_TYPE) Object contentType) {
sendMessage(body, target, contentType);
}
private void sendMessage(String body, String target, Object contentType) {
resolver.resolveDestination(target).send(MessageBuilder.createMessage(body,
new MessageHeaders(Collections.singletonMap(MessageHeaders.CONTENT_TYPE, contentType))));
}
}
After starting the application on the default port (8080), the following data can be sent:
curl -H "Content-Type: application/json" -X POST -d "customer-1" http://localhost:8080/customers
curl -H "Content-Type: application/json" -X POST -d "order-1" http://localhost:8080/orders
The destinations 'customers' and 'orders' are then created in the broker (as an exchange in the case of RabbitMQ, or as a topic in the case of Kafka), and the data is published to the appropriate destination.
The BinderAwareChannelResolver is a general-purpose Spring Integration DestinationResolver and can be injected into other components, as in the following example of a router using a SpEL expression based on the target field of an incoming JSON message.
@EnableBinding
@Controller
public class SourceWithDynamicDestination {
@Autowired
private BinderAwareChannelResolver resolver;
@RequestMapping(path = "/", method = POST, consumes = "application/json")
@ResponseStatus(HttpStatus.ACCEPTED)
public void handleRequest(@RequestBody String body, @RequestHeader(HttpHeaders.CONTENT_TYPE) Object contentType) {
sendMessage(body, contentType);
}
private void sendMessage(Object body, Object contentType) {
routerChannel().send(MessageBuilder.createMessage(body,
new MessageHeaders(Collections.singletonMap(MessageHeaders.CONTENT_TYPE, contentType))));
}
@Bean(name = "routerChannel")
public MessageChannel routerChannel() {
return new DirectChannel();
}
@Bean
@ServiceActivator(inputChannel = "routerChannel")
public ExpressionEvaluatingRouter router() {
ExpressionEvaluatingRouter router =
new ExpressionEvaluatingRouter(new SpelExpressionParser().parseExpression("payload.target"));
router.setDefaultOutputChannelName("default-output");
router.setChannelResolver(resolver);
return router;
}
}
6. Content Type and Transformation
To allow you to propagate information about the content type of produced messages, Spring Cloud Stream attaches, by default, a contentType
header to outbound messages.
For middleware that does not directly support headers, Spring Cloud Stream provides its own mechanism of automatically wrapping outbound messages in an envelope of its own.
For middleware that does support headers, Spring Cloud Stream applications may receive messages with a given content type from non-Spring Cloud Stream applications.
Spring Cloud Stream can handle messages based on this information in two ways:
- Through its contentType settings on inbound and outbound channels
- Through its argument mapping performed for methods annotated with @StreamListener
Spring Cloud Stream allows you to declaratively configure type conversion for inputs and outputs using the spring.cloud.stream.bindings.<channelName>.content-type
property of a binding.
Note that general type conversion may also be accomplished easily by using a transformer inside your application.
Currently, Spring Cloud Stream natively supports the following type conversions commonly used in streams:
- JSON to/from POJO
- JSON to/from org.springframework.tuple.Tuple
- Object to/from byte[]: either the raw bytes serialized for remote transport, bytes emitted by an application, or bytes converted using Java serialization (requires the object to be Serializable)
- String to/from byte[]
- Object to plain text (invokes the object’s toString() method)
Here, JSON represents either a byte array or a String payload containing JSON. Currently, objects may be converted from a JSON byte array or String; converting to JSON always produces a String.
If no content-type
property is set on an outbound channel, Spring Cloud Stream will serialize the payload using a serializer based on the Kryo serialization framework.
Deserializing messages at the destination requires the payload class to be present on the receiver’s classpath.
6.1. MIME types
content-type
values are parsed as media types, e.g., application/json
or text/plain;charset=UTF-8
.
MIME types are especially useful for indicating how to convert to String or byte[] content.
Spring Cloud Stream also uses MIME type format to represent Java types, using the general type application/x-java-object
with a type
parameter.
For example, application/x-java-object;type=java.util.Map
or application/x-java-object;type=com.bar.Foo
can be set as the content-type
property of an input binding.
In addition, Spring Cloud Stream provides custom MIME types, notably, application/x-spring-tuple
to specify a Tuple.
6.2. MIME types and Java types
The type conversions Spring Cloud Stream provides out of the box are summarized in the following table: 'Source Payload' means the payload before conversion and 'Target Payload' means the 'payload' after conversion. The type conversion can occur either on the 'producer' side (output) or at the 'consumer' side (input).
Source Payload | Target Payload | content-type header (source message) | content-type header (after conversion) | Comments
---|---|---|---|---
POJO | JSON String | ignored | application/json |
Tuple | JSON String | ignored | application/json | JSON is tailored for Tuple
POJO | String (toString()) | ignored | text/plain, java.lang.String |
POJO | byte[] (java.io serialized) | ignored | application/x-java-serialized-object |
JSON byte[] or String | POJO | application/json (or none) | application/x-java-object |
byte[] or String | Serializable | application/x-java-serialized-object | application/x-java-object |
JSON byte[] or String | Tuple | application/json (or none) | application/x-spring-tuple |
byte[] | String | any | text/plain, java.lang.String | will apply any Charset specified in the content-type header
String | byte[] | any | application/octet-stream | will apply any Charset specified in the content-type header
Conversion applies to payloads that require type conversion. For example, if an application produces an XML string with outputType=application/json, the payload is not converted from XML to JSON. This is because the payload sent to the outbound channel is already a String, so no conversion is applied at runtime. It is also important to note that, when using the default serialization mechanism, the payload class must be shared between the sending and receiving applications and be compatible with the binary content. This can create issues when application code changes independently in the two applications, as the binary format and the code may become incompatible.
While conversion is supported for both inbound and outbound channels, it is especially recommended for the conversion of outbound messages.
For the conversion of inbound messages, especially when the target is a POJO, the @StreamListener support performs the conversion automatically.
6.3. Customizing message conversion
Besides the conversions that it supports out of the box, Spring Cloud Stream also supports registering your own message conversion implementations.
This allows you to send and receive data in a variety of custom formats, including binary, and associate them with specific contentTypes
.
Spring Cloud Stream registers all the beans of type org.springframework.messaging.converter.MessageConverter
as custom message converters along with the out of the box message converters.
If your message converter needs to work with a specific content-type
and target class (for both input and output), then the message converter needs to extend org.springframework.messaging.converter.AbstractMessageConverter
.
For conversion when using @StreamListener
, a message converter that implements org.springframework.messaging.converter.MessageConverter
would suffice.
Here is an example of creating a message converter bean (with the content-type application/bar
) inside a Spring Cloud Stream application:
@EnableBinding(Sink.class)
@SpringBootApplication
public static class SinkApplication {

    ...

    @Bean
    public MessageConverter customMessageConverter() {
        return new MyCustomMessageConverter();
    }

    public class MyCustomMessageConverter extends AbstractMessageConverter {

        public MyCustomMessageConverter() {
            super(new MimeType("application", "bar"));
        }

        @Override
        protected boolean supports(Class<?> clazz) {
            return (Bar.class == clazz);
        }

        @Override
        protected Object convertFromInternal(Message<?> message, Class<?> targetClass, Object conversionHint) {
            Object payload = message.getPayload();
            return (payload instanceof Bar ? payload : new Bar((byte[]) payload));
        }
    }
}
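With a converter like this registered, the custom content type could be declared on a binding so that payloads are converted with it, for example for a binding named input (the binding name is illustrative):
spring.cloud.stream.bindings.input.content-type=application/bar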
Spring Cloud Stream also provides support for Avro-based converters and schema evolution. See the specific section for details.
6.4. @StreamListener
and Message Conversion
The @StreamListener
annotation provides a convenient way for converting incoming messages without the need to specify the content type of an input channel.
During the dispatching process to methods annotated with @StreamListener
, a conversion will be applied automatically if the argument requires it.
For example, suppose that a message with the String content {"greeting":"Hello, world"} and a content-type header of application/json is received on the input channel.
Consider the following application that receives it:
public class GreetingMessage {

    String greeting;

    public String getGreeting() {
        return greeting;
    }

    public void setGreeting(String greeting) {
        this.greeting = greeting;
    }
}
@EnableBinding(Sink.class)
@EnableAutoConfiguration
public static class GreetingSink {

    @StreamListener(Sink.INPUT)
    public void receive(GreetingMessage greeting) {
        // handle GreetingMessage
    }
}
The argument of the method will be populated automatically with the POJO containing the unmarshalled form of the JSON String.
7. Schema Evolution Support
Spring Cloud Stream provides support for schema evolution, so that data can evolve over time and still work with older or newer producers and consumers. Most serialization models, especially the ones that aim for portability across different platforms and languages, rely on a schema that describes how the data is serialized in the binary payload. In order to serialize the data and then to interpret it, both the sending and receiving sides must have access to a schema that describes the binary format. In certain cases, the schema can be inferred from the payload type on serialization or from the target type on deserialization. However, many applications benefit from having access to an explicit schema that describes the binary data format. A schema registry lets you store schema information in a textual format (typically JSON) and makes that information accessible to various applications that need it to receive and send data in binary format. A schema is referenceable as a tuple consisting of:
- A subject that is the logical name of the schema
- The schema version
- The schema format, which describes the binary format of the data
The following sections go through the details of the various components involved in the schema evolution process.
7.1. Schema Registry Client
The client-side abstraction for interacting with schema registry servers is the SchemaRegistryClient
interface, which has the following structure:
public interface SchemaRegistryClient {

    SchemaRegistrationResponse register(String subject, String format, String schema);

    String fetch(SchemaReference schemaReference);

    String fetch(Integer id);
}
Spring Cloud Stream provides out-of-the-box implementations for interacting with its own schema server and for interacting with the Confluent Schema Registry.
A client for the Spring Cloud Stream schema registry can be configured by using the @EnableSchemaRegistryClient annotation, as follows:
@EnableBinding(Sink.class)
@SpringBootApplication
@EnableSchemaRegistryClient
public static class AvroSinkApplication {
...
}
The default converter is optimized to cache not only the schemas from the remote server but also the parse() and toString() methods, which are quite expensive.
Because of this, it uses a DefaultSchemaRegistryClient that does not cache responses.
If you intend to change the default behavior, you can use the client directly in your code and override it to the desired outcome.
To do so, you have to add the property spring.cloud.stream.schemaRegistryClient.cached=true to your application properties.
7.1.1. Schema Registry Client Properties
The Schema Registry Client supports the following properties:
- spring.cloud.stream.schemaRegistryClient.endpoint
The location of the schema server. When setting this, use a full URL, including protocol (http or https), port, and context path.
- spring.cloud.stream.schemaRegistryClient.cached
Whether the client should cache schema server responses. Normally set to false, as the caching happens in the message converter. Clients using the schema registry client should set this to true.
Default: true
7.2. Avro Schema Registry Client Message Converters
For applications that have a SchemaRegistryClient bean registered with the application context, Spring Cloud Stream auto configures an Apache Avro message converter for schema management. This eases schema evolution, as applications that receive messages can get easy access to a writer schema that can be reconciled with their own reader schema.
For outbound messages, if the content type of the channel is set to application/*+avro
, the MessageConverter
is activated, as shown in the following example:
spring.cloud.stream.bindings.output.contentType=application/*+avro
During outbound conversion, the message converter tries to infer the schema of each outbound message (based on its type) and register it to a subject (based on the payload type) by using the SchemaRegistryClient
.
If an identical schema is already found, then a reference to it is retrieved.
If not, the schema is registered, and a new version number is provided.
The message is sent with a contentType
header by using the following scheme: application/[prefix].[subject].v[version]+avro
, where prefix
is configurable and subject
is deduced from the payload type.
For example, a message of the type User
might be sent as a binary payload with a content type of application/vnd.user.v2+avro
, where user
is the subject and 2
is the version number.
When receiving messages, the converter infers the schema reference from the header of the incoming message and tries to retrieve it. The schema is used as the writer schema in the deserialization process.
7.2.1. Avro Schema Registry Message Converter Properties
If you have enabled the Avro-based schema registry client by setting spring.cloud.stream.bindings.output.contentType=application/*+avro
, you can customize the behavior of the registration by setting the following properties.
- spring.cloud.stream.schema.avro.dynamicSchemaGenerationEnabled
Enable if you want the converter to use reflection to infer a Schema from a POJO.
Default: false
- spring.cloud.stream.schema.avro.readerSchema
Avro compares schema versions by looking at a writer schema (origin payload) and a reader schema (your application payload). See the Avro documentation for more information. If set, this overrides any lookups at the schema server and uses the local schema as the reader schema.
Default: null
- spring.cloud.stream.schema.avro.schemaLocations
Registers any .avsc files listed in this property with the Schema Server.
Default: empty
- spring.cloud.stream.schema.avro.prefix
The prefix to be used on the Content-Type header.
Default: vnd
7.3. Apache Avro Message Converters
Spring Cloud Stream provides support for schema-based message converters through its spring-cloud-stream-schema
module.
Currently, the only serialization format supported out of the box for schema-based message converters is Apache Avro, with more formats to be added in future versions.
The spring-cloud-stream-schema
module contains two types of message converters that can be used for Apache Avro serialization:
- Converters that use the class information of the serialized or deserialized objects, or a schema with a location known at startup.
- Converters that use a schema registry. They locate the schemas at runtime and dynamically register new schemas as domain objects evolve.
7.4. Converters with Schema Support
The AvroSchemaMessageConverter
supports serializing and deserializing messages either by using a predefined schema or by using the schema information available in the class (either reflectively or contained in the SpecificRecord
).
If you provide a custom converter, the default AvroSchemaMessageConverter bean is not created.
To use a custom converter, add it to the application context, optionally specifying one or more MimeTypes
with which to associate it.
The default MimeType
is application/avro
.
If the target type of the conversion is a GenericRecord
, a schema must be set.
The following example shows how to configure a converter in a sink application by registering the Apache Avro MessageConverter
without a predefined schema.
In this example, note that the mime type value is avro/bytes
, not the default application/avro
.
@EnableBinding(Sink.class)
@SpringBootApplication
public static class SinkApplication {

    ...

    @Bean
    public MessageConverter userMessageConverter() {
        return new AvroSchemaMessageConverter(MimeType.valueOf("avro/bytes"));
    }
}
Conversely, the following application registers a converter with a predefined schema (found on the classpath):
@EnableBinding(Sink.class)
@SpringBootApplication
public static class SinkApplication {

    ...

    @Bean
    public MessageConverter userMessageConverter() {
        AvroSchemaMessageConverter converter = new AvroSchemaMessageConverter(MimeType.valueOf("avro/bytes"));
        converter.setSchemaLocation(new ClassPathResource("schemas/User.avro"));
        return converter;
    }
}
7.5. Schema Registry Server
Spring Cloud Stream provides a schema registry server implementation.
To use it, you can add the spring-cloud-stream-schema-server
artifact to your project and use the @EnableSchemaRegistryServer
annotation, which adds the schema registry server REST controller to your application.
This annotation is intended to be used with Spring Boot web applications, and the listening port of the server is controlled by the server.port
property.
The spring.cloud.stream.schema.server.path
property can be used to control the root path of the schema server (especially when it is embedded in other applications).
The spring.cloud.stream.schema.server.allowSchemaDeletion
boolean property enables the deletion of a schema. By default, this is disabled.
The schema registry server uses a relational database to store the schemas. By default, it uses an embedded database. You can customize the schema storage by using the Spring Boot SQL database and JDBC configuration options.
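For example, assuming a PostgreSQL database is available, the schema storage could be moved off the embedded database with the standard Spring Boot datasource properties (the URL and credentials below are placeholders):
spring.datasource.url=jdbc:postgresql://dbhost:5432/schemas
spring.datasource.username=schema_user
spring.datasource.password=schema_password
spring.datasource.driver-class-name=org.postgresql.Driver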
The following example shows a Spring Boot application that enables the schema registry:
@SpringBootApplication
@EnableSchemaRegistryServer
public class SchemaRegistryServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(SchemaRegistryServerApplication.class, args);
    }
}
7.5.1. Schema Registry Server API
The Schema Registry Server API consists of the following operations:
- POST / — see “Registering a New Schema”
- GET /{subject}/{format}/{version} — see “Retrieving an Existing Schema by Subject, Format, and Version”
- GET /{subject}/{format} — see “Retrieving an Existing Schema by Subject and Format”
- GET /schemas/{id} — see “Retrieving an Existing Schema by ID”
- DELETE /{subject}/{format}/{version} — see “Deleting a Schema by Subject, Format, and Version”
- DELETE /schemas/{id} — see “Deleting a Schema by ID”
- DELETE /{subject} — see “Deleting a Schema by Subject”
Registering a New Schema
To register a new schema, send a POST request to the / endpoint.
The / endpoint accepts a JSON payload with the following fields:
- subject: The schema subject
- format: The schema format
- definition: The schema definition
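As an illustration, a registration request body for a hypothetical user subject with an Avro schema might look like the following (the subject name and schema definition are placeholders):
{
  "subject": "user",
  "format": "avro",
  "definition": "{\"type\":\"record\",\"name\":\"User\",\"fields\":[{\"name\":\"name\",\"type\":\"string\"}]}"
}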
Its response is a schema object in JSON, with the following fields:
- id: The schema ID
- subject: The schema subject
- format: The schema format
- version: The schema version
- definition: The schema definition
Retrieving an Existing Schema by Subject, Format, and Version
To retrieve an existing schema by subject, format, and version, send a GET
request to the /{subject}/{format}/{version}
endpoint.
Its response is a schema object in JSON, with the following fields:
- id: The schema ID
- subject: The schema subject
- format: The schema format
- version: The schema version
- definition: The schema definition
Retrieving an Existing Schema by Subject and Format
To retrieve an existing schema by subject and format, send a GET request to the /{subject}/{format} endpoint.
Its response is a list of schemas, each of which is a schema object in JSON with the following fields:
- id: The schema ID
- subject: The schema subject
- format: The schema format
- version: The schema version
- definition: The schema definition
Retrieving an Existing Schema by ID
To retrieve a schema by its ID, send a GET
request to the /schemas/{id}
endpoint.
Its response is a schema object in JSON, with the following fields:
- id: The schema ID
- subject: The schema subject
- format: The schema format
- version: The schema version
- definition: The schema definition
Deleting a Schema by Subject, Format, and Version
To delete a schema identified by its subject, format, and version, send a DELETE
request to the /{subject}/{format}/{version}
endpoint.
Deleting a Schema by ID
To delete a schema by its ID, send a DELETE
request to the /schemas/{id}
endpoint.
Deleting a Schema by Subject
To delete existing schemas by their subject, send a DELETE request to the /{subject} endpoint.
This note applies to users of Spring Cloud Stream 1.1.0.RELEASE only.
Spring Cloud Stream 1.1.0.RELEASE used the table name schema for storing Schema objects. Schema is a keyword in a number of database implementations.
To avoid any conflicts in the future, starting with 1.1.1.RELEASE, we have opted for the name SCHEMA_REPOSITORY for the storage table.
Any Spring Cloud Stream 1.1.0.RELEASE users should migrate their existing schemas to the new table before upgrading.
7.5.2. Using Confluent’s Schema Registry
The default configuration creates a DefaultSchemaRegistryClient
bean.
If you want to use the Confluent schema registry, you need to create a bean of type ConfluentSchemaRegistryClient
, which supersedes the one configured by default by the framework. The following example shows how to create such a bean:
@Bean
public SchemaRegistryClient schemaRegistryClient(@Value("${spring.cloud.stream.schemaRegistryClient.endpoint}") String endpoint) {
    ConfluentSchemaRegistryClient client = new ConfluentSchemaRegistryClient();
    client.setEndpoint(endpoint);
    return client;
}
The ConfluentSchemaRegistryClient is tested against Confluent platform version 4.0.0.
7.6. Schema Registration and Resolution
To better understand how Spring Cloud Stream registers and resolves new schemas and its use of Avro schema comparison features, we provide two separate subsections:
7.6.1. Schema Registration Process (Serialization)
The first part of the registration process is extracting a schema from the payload that is being sent over a channel.
Avro types such as SpecificRecord
or GenericRecord
already contain a schema, which can be retrieved immediately from the instance.
In the case of POJOs, a schema is inferred if the spring.cloud.stream.schema.avro.dynamicSchemaGenerationEnabled property is set to true.
Once a schema is obtained, the converter loads its metadata (version) from the remote server. First, it queries a local cache. If no result is found, it submits the data to the server, which replies with versioning information. The converter always caches the results to avoid the overhead of querying the Schema Server for every new message that needs to be serialized.
With the schema version information, the converter sets the contentType
header of the message to carry the version information — for example: application/vnd.user.v1+avro
.
7.6.2. Schema Resolution Process (Deserialization)
When reading messages that contain version information (that is, a contentType
header with a scheme like the one described under “Schema Registration Process (Serialization)”), the converter queries the Schema server to fetch the writer schema of the message.
Once it has found the correct schema of the incoming message, it retrieves the reader schema and, by using Avro’s schema resolution support, reads it into the reader definition (setting defaults and any missing properties).
You should understand the difference between a writer schema (the application that wrote the message) and a reader schema (the receiving application).
We suggest taking a moment to read the Avro terminology and understand the process.
Spring Cloud Stream always fetches the writer schema to determine how to read a message.
If you want to get Avro’s schema evolution support working, you need to make sure that a readerSchema was properly set for your application.
8. Inter-Application Communication
8.1. Connecting Multiple Application Instances
While Spring Cloud Stream makes it easy for individual Spring Boot applications to connect to messaging systems, the typical scenario for Spring Cloud Stream is the creation of multi-application pipelines, where microservice applications send data to each other. You can achieve this scenario by correlating the input and output destinations of adjacent applications.
Supposing that a design calls for the Time Source application to send data to the Log Sink application, you can use a common destination named ticktock
for bindings within both applications.
Time Source (that has the channel name output
) will set the following property:
spring.cloud.stream.bindings.output.destination=ticktock
Log Sink (that has the channel name input
) will set the following property:
spring.cloud.stream.bindings.input.destination=ticktock
8.2. Instance Index and Instance Count
When scaling up Spring Cloud Stream applications, each instance can receive information about how many other instances of the same application exist and what its own instance index is.
Spring Cloud Stream does this through the spring.cloud.stream.instanceCount
and spring.cloud.stream.instanceIndex
properties.
For example, if there are three instances of a HDFS sink application, all three instances will have spring.cloud.stream.instanceCount
set to 3
, and the individual applications will have spring.cloud.stream.instanceIndex
set to 0
, 1
, and 2
, respectively.
When Spring Cloud Stream applications are deployed via Spring Cloud Data Flow, these properties are configured automatically; when Spring Cloud Stream applications are launched independently, these properties must be set correctly.
By default, spring.cloud.stream.instanceCount
is 1
, and spring.cloud.stream.instanceIndex
is 0
.
In a scaled-up scenario, correct configuration of these two properties is important for addressing partitioning behavior (see below) in general, and the two properties are always required by certain binders (e.g., the Kafka binder) in order to ensure that data are split correctly across multiple consumer instances.
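For example, the second of three independently launched instances of a hypothetical HDFS sink could be started as follows (the jar name is illustrative):
java -jar hdfs-sink.jar \
  --spring.cloud.stream.instanceCount=3 \
  --spring.cloud.stream.instanceIndex=1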
8.3. Partitioning
8.3.1. Configuring Output Bindings for Partitioning
An output binding is configured to send partitioned data by setting one and only one of its partitionKeyExpression
or partitionKeyExtractorClass
properties, as well as its partitionCount
property.
For example, the following is a valid and typical configuration:
spring.cloud.stream.bindings.output.producer.partitionKeyExpression=payload.id
spring.cloud.stream.bindings.output.producer.partitionCount=5
Based on the above example configuration, data will be sent to the target partition using the following logic.
A partition key’s value is calculated for each message sent to a partitioned output channel based on the partitionKeyExpression
.
The partitionKeyExpression
is a SpEL expression which is evaluated against the outbound message for extracting the partitioning key.
If a SpEL expression is not sufficient for your needs, you can instead calculate the partition key value by setting the property partitionKeyExtractorClass
to a class which implements the org.springframework.cloud.stream.binder.PartitionKeyExtractorStrategy
interface.
While the SpEL expression should usually suffice, more complex cases may use the custom implementation strategy.
In that case, the property 'partitionKeyExtractorClass' can be set as follows:
spring.cloud.stream.bindings.output.producer.partitionKeyExtractorClass=com.example.MyKeyExtractor
spring.cloud.stream.bindings.output.producer.partitionCount=5
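The following is a minimal sketch of what such a strategy might look like, assuming the partition key is carried in a message header named partitionKey (both the class name and the header name are illustrative):
import org.springframework.cloud.stream.binder.PartitionKeyExtractorStrategy;
import org.springframework.messaging.Message;

public class MyKeyExtractor implements PartitionKeyExtractorStrategy {

    @Override
    public Object extractKey(Message<?> message) {
        // Use a header value as the partition key, falling back to the payload itself
        Object key = message.getHeaders().get("partitionKey");
        return (key != null ? key : message.getPayload());
    }
}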
Once the message key is calculated, the partition selection process will determine the target partition as a value between 0
and partitionCount - 1
.
The default calculation, applicable in most scenarios, is based on the formula key.hashCode() % partitionCount
.
This can be customized on the binding, either by setting a SpEL expression to be evaluated against the 'key' (via the partitionSelectorExpression
property) or by setting a org.springframework.cloud.stream.binder.PartitionSelectorStrategy
implementation (via the partitionSelectorClass
property).
The binding-level properties partitionSelectorExpression and partitionSelectorClass can be specified in a similar way to the partitionKeyExpression and partitionKeyExtractorClass properties shown in the examples above. Additional properties can be configured for more advanced scenarios, as described in the following section.
Spring-managed custom PartitionKeyExtractorClass
implementations
In the example above, a custom strategy such as MyKeyExtractor
is instantiated by Spring Cloud Stream directly.
In some cases, it is necessary for such a custom strategy implementation to be created as a Spring bean, so that it can be managed by Spring and benefit from dependency injection, property binding, and so on.
This can be done by configuring it as a @Bean in the application context and using the fully qualified class name as the bean’s name, as in the following example.
@Bean(name="com.example.MyKeyExtractor")
public MyKeyExtractor extractor() {
    return new MyKeyExtractor();
}
As a Spring bean, the custom strategy benefits from the full lifecycle of a Spring bean. For example, if the implementation needs access to the application context directly, it can implement ApplicationContextAware.
Configuring Input Bindings for Partitioning
An input binding (with the channel name input
) is configured to receive partitioned data by setting its partitioned
property, as well as the instanceIndex
and instanceCount
properties on the application itself, as in the following example:
spring.cloud.stream.bindings.input.consumer.partitioned=true
spring.cloud.stream.instanceIndex=3
spring.cloud.stream.instanceCount=5
The instanceCount
value represents the total number of application instances between which the data need to be partitioned, and the instanceIndex
must be a unique value across the multiple instances, between 0
and instanceCount - 1
.
The instance index helps each application instance to identify the unique partition (or, in the case of Kafka, the partition set) from which it receives data.
It is important to set both values correctly in order to ensure that all of the data is consumed and that the application instances receive mutually exclusive datasets.
While a scenario in which multiple instances are used for partitioned data processing may be complex to set up in a standalone case, Spring Cloud Data Flow can simplify the process significantly by populating both the input and output values correctly and by relying on the runtime infrastructure to provide information about the instance index and instance count.
9. Testing
Spring Cloud Stream provides support for testing your microservice applications without connecting to a messaging system.
You can do that by using the TestSupportBinder
provided by the spring-cloud-stream-test-support
library, which can be added as a test dependency to the application:
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-test-support</artifactId>
<scope>test</scope>
</dependency>
The TestSupportBinder uses the Spring Boot autoconfiguration mechanism to supersede the other binders found on the classpath. Hence, when adding a binder as a dependency, you must make sure that the test scope is being used.
The TestSupportBinder
allows users to interact with the bound channels and inspect any messages sent and received by the application.
For outbound message channels, the TestSupportBinder
registers a single subscriber and retains the messages emitted by the application in a MessageCollector
.
They can be retrieved during tests and have assertions made against them.
The user can also send messages to inbound message channels, so that the consumer application can consume the messages. The following example shows how to test both input and output channels on a processor.
@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class ExampleTest {

    @Autowired
    private Processor processor;

    @Autowired
    private MessageCollector messageCollector;

    @Test
    @SuppressWarnings("unchecked")
    public void testWiring() {
        Message<String> message = new GenericMessage<>("hello");
        processor.input().send(message);
        Message<String> received = (Message<String>) messageCollector.forChannel(processor.output()).poll();
        assertThat(received.getPayload(), equalTo("hello world"));
    }

    @SpringBootApplication
    @EnableBinding(Processor.class)
    public static class MyProcessor {

        @Autowired
        private Processor channels;

        @Transformer(inputChannel = Processor.INPUT, outputChannel = Processor.OUTPUT)
        public String transform(String in) {
            return in + " world";
        }
    }
}
In the example above, we are creating an application that has an input and an output channel, bound through the Processor
interface.
The bound interface is injected into the test so we can have access to both channels.
We are sending a message on the input channel and we are using the MessageCollector
provided by Spring Cloud Stream’s test support to capture the message that has been sent to the output channel as a result.
Once we have received the message, we can validate that the component functions correctly.
9.1. Disabling the test binder autoconfiguration
The intent behind the test binder superseding all the other binders on the classpath is to make it easy to test your applications without making changes to your production dependencies.
In some cases (e.g. integration tests) it is useful to use the actual production binders instead, and that requires disabling the test binder autoconfiguration.
In order to do so, you can exclude the org.springframework.cloud.stream.test.binder.TestSupportBinderAutoConfiguration
class using one of the Spring Boot autoconfiguration exclusion mechanisms, as in the following example.
@SpringBootApplication(exclude = TestSupportBinderAutoConfiguration.class)
@EnableBinding(Processor.class)
public static class MyProcessor {

    @Transformer(inputChannel = Processor.INPUT, outputChannel = Processor.OUTPUT)
    public String transform(String in) {
        return in + " world";
    }
}
When autoconfiguration is disabled, the test binder is still available on the classpath, and its defaultCandidate
property is set to false
, so that it does not interfere with the regular user configuration. It can be referenced under the name test
e.g.:
spring.cloud.stream.defaultBinder=test
10. Health Indicator
Spring Cloud Stream provides a health indicator for binders.
It is registered under the name binders
and can be enabled or disabled by setting the management.health.binders.enabled
property.
By default, management.health.binders.enabled
is set to false
.
Setting management.health.binders.enabled
to true
enables the health indicator, allowing you to access the /health
endpoint to retrieve the binder health indicators.
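For example, the indicator can be turned on in application.properties as follows:
management.health.binders.enabled=true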
Health indicators are binder-specific and certain binder implementations may not necessarily provide a health indicator.
11. Metrics Emitter
Spring Cloud Stream provides a module called spring-cloud-stream-metrics
that can be used to emit any available metric from Spring Boot metrics endpoint to a named channel.
This module allows operators to collect metrics from stream applications without relying on polling their endpoints.
The module is activated when you set the destination name for metrics binding, e.g. spring.cloud.stream.bindings.applicationMetrics.destination=<DESTINATION_NAME>
.
applicationMetrics
can be configured in a similar fashion to any other producer binding.
The default contentType
setting of applicationMetrics
is application/json
.
The following properties can be used for customizing the emission of metrics:
- spring.cloud.stream.metrics.key
The name of the metric being emitted. Should be a unique value per application.
Default: ${spring.application.name:${vcap.application.name:${spring.config.name:application}}}
- spring.cloud.stream.metrics.prefix
Prefix string to be prepended to the metrics key.
Default: ``
- spring.cloud.stream.metrics.properties
Just like the includes option, it allows white-listing application properties that will be added to the metrics payload.
Default: null
A detailed overview of the metrics export process can be found in the Spring Boot reference documentation.
Spring Cloud Stream provides a metric exporter named application
that can be configured via regular Spring Boot metrics configuration properties.
The exporter can be configured either by using the global Spring Boot configuration settings for exporters, or by using exporter-specific properties.
To use the global configuration settings, the properties should be prefixed by spring.metric.export
(e.g. spring.metric.export.includes=integration**
).
These configuration options will apply to all exporters (unless they have been configured differently).
Alternatively, if it is intended to use configuration settings that are different from the other exporters (e.g. for restricting the number of metrics published), the Spring Cloud Stream provided metrics exporter can be configured using the prefix spring.metrics.export.triggers.application
(e.g. spring.metrics.export.triggers.application.includes=integration**
).
Due to Spring Boot’s relaxed binding, the value of an included property can be slightly different from the original value. As a rule of thumb, the metric exporter attempts to normalize all the properties into a consistent format using dot notation. The goal of normalization is to make downstream consumers of those metrics capable of receiving property names consistently, regardless of how they are set on the monitored application.
Below is a sample of the data published to the channel in JSON format by the following command:
java -jar time-source.jar \
--spring.cloud.stream.bindings.applicationMetrics.destination=someMetrics \
--spring.cloud.stream.metrics.properties=spring.application** \
--spring.metrics.export.includes=integration.channel.input**,integration.channel.output**
The resulting JSON is:
{
"name":"time-source",
"metrics":[
{
"name":"integration.channel.output.errorRate.mean",
"value":0.0,
"timestamp":"2017-04-11T16:56:35.790Z"
},
{
"name":"integration.channel.output.errorRate.max",
"value":0.0,
"timestamp":"2017-04-11T16:56:35.790Z"
},
{
"name":"integration.channel.output.errorRate.min",
"value":0.0,
"timestamp":"2017-04-11T16:56:35.790Z"
},
{
"name":"integration.channel.output.errorRate.stdev",
"value":0.0,
"timestamp":"2017-04-11T16:56:35.790Z"
},
{
"name":"integration.channel.output.errorRate.count",
"value":0.0,
"timestamp":"2017-04-11T16:56:35.790Z"
},
{
"name":"integration.channel.output.sendCount",
"value":6.0,
"timestamp":"2017-04-11T16:56:35.790Z"
},
{
"name":"integration.channel.output.sendRate.mean",
"value":0.994885872292989,
"timestamp":"2017-04-11T16:56:35.790Z"
},
{
"name":"integration.channel.output.sendRate.max",
"value":1.006247080013156,
"timestamp":"2017-04-11T16:56:35.790Z"
},
{
"name":"integration.channel.output.sendRate.min",
"value":1.0012035220116378,
"timestamp":"2017-04-11T16:56:35.790Z"
},
{
"name":"integration.channel.output.sendRate.stdev",
"value":6.505181111084848E-4,
"timestamp":"2017-04-11T16:56:35.790Z"
},
{
"name":"integration.channel.output.sendRate.count",
"value":6.0,
"timestamp":"2017-04-11T16:56:35.790Z"
}
],
"createdTime":"2017-04-11T20:56:35.790Z",
"properties":{
"spring.application.name":"time-source",
"spring.application.index":"0"
}
}
12. Samples
For Spring Cloud Stream samples, please refer to the spring-cloud-stream-samples repository on GitHub.
13. Getting Started
To get started with creating Spring Cloud Stream applications, visit the Spring Initializr and create a new Maven project named "GreetingSource".
Select Spring Boot {supported-spring-boot-version} in the dropdown.
In the Search for dependencies text box type Stream Rabbit
or Stream Kafka
depending on what binder you want to use.
Next, create a new class, GreetingSource
, in the same package as the GreetingSourceApplication
class.
Give it the following code:
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.integration.annotation.InboundChannelAdapter;

@EnableBinding(Source.class)
public class GreetingSource {

    @InboundChannelAdapter(Source.OUTPUT)
    public String greet() {
        return "hello world " + System.currentTimeMillis();
    }
}
The @EnableBinding
annotation is what triggers the creation of Spring Integration infrastructure components.
Specifically, it will create a Kafka connection factory, a Kafka outbound channel adapter, and the message channel defined inside the Source interface:
public interface Source {

    String OUTPUT = "output";

    @Output(Source.OUTPUT)
    MessageChannel output();
}
The auto-configuration also creates a default poller, so that the greet()
method will be invoked once per second.
The standard Spring Integration @InboundChannelAdapter
annotation sends a message to the source’s output channel, using the return value as the payload of the message.
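If you prefer to make the polling rate explicit rather than rely on the default poller, the adapter can carry its own poller configuration. The following is a minimal sketch assuming the standard Spring Integration @Poller annotation; the values shown simply restate the once-per-second default:
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.integration.annotation.InboundChannelAdapter;
import org.springframework.integration.annotation.Poller;

@EnableBinding(Source.class)
public class GreetingSource {

    // Emits one message per second; this makes the default polling behavior explicit
    @InboundChannelAdapter(value = Source.OUTPUT, poller = @Poller(fixedDelay = "1000", maxMessagesPerPoll = "1"))
    public String greet() {
        return "hello world " + System.currentTimeMillis();
    }
}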
To test-drive this setup, run a Kafka message broker. An easy way to do this is to use a Docker image:
# On OS X
$ docker run -p 2181:2181 -p 9092:9092 --env ADVERTISED_HOST=`docker-machine ip \`docker-machine active\`` --env ADVERTISED_PORT=9092 spotify/kafka
# On Linux
$ docker run -p 2181:2181 -p 9092:9092 --env ADVERTISED_HOST=localhost --env ADVERTISED_PORT=9092 spotify/kafka
Build the application:
./mvnw clean package
The consumer application is coded in a similar manner.
Go back to Initializr and create another project, named LoggingSink.
Then create a new class, LoggingSink
, in the same package as the class LoggingSinkApplication
and with the following code:
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;

@EnableBinding(Sink.class)
public class LoggingSink {

    @StreamListener(Sink.INPUT)
    public void log(String message) {
        System.out.println(message);
    }
}
Build the application:
./mvnw clean package
To connect the GreetingSource application to the LoggingSink application, each application must share the same destination name. When you start both applications, as shown below, the consumer application prints "hello world" and a timestamp to the console:
cd GreetingSource
java -jar target/GreetingSource-0.0.1-SNAPSHOT.jar --spring.cloud.stream.bindings.output.destination=mydest
cd LoggingSink
java -jar target/LoggingSink-0.0.1-SNAPSHOT.jar --server.port=8090 --spring.cloud.stream.bindings.input.destination=mydest
(The different server port prevents collisions of the HTTP port used to service the Spring Boot Actuator endpoints in the two applications.)
The output of the LoggingSink application will look something like the following:
[ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8090 (http)
[ main] com.example.LoggingSinkApplication : Started LoggingSinkApplication in 6.828 seconds (JVM running for 7.371)
hello world 1458595076731
hello world 1458595077732
hello world 1458595078733
hello world 1458595079734
hello world 1458595080735
13.1. Deploying Stream applications on CloudFoundry
On Cloud Foundry, services are usually exposed through a special environment variable called VCAP_SERVICES.
When configuring your binder connections, you can use the values from an environment variable as explained on the dataflow cloudfoundry server docs.
Binder Implementations
14. Apache Kafka Binder
14.1. Usage
To use the Apache Kafka binder, add it to your Spring Cloud Stream application by using the following Maven coordinates:
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka</artifactId>
</dependency>
Alternatively, you can also use the Spring Cloud Stream Kafka Starter.
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-stream-kafka</artifactId>
</dependency>
14.2. Apache Kafka Binder Overview
A simplified diagram of how the Apache Kafka binder operates can be seen below.
The Apache Kafka Binder implementation maps each destination to an Apache Kafka topic. The consumer group maps directly to the same Apache Kafka concept. Partitioning maps directly to Apache Kafka partitions as well.
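For example, assuming an input binding, the following properties would bind the consumer to a Kafka topic named orders with a consumer group named order-service (both names are illustrative):
spring.cloud.stream.bindings.input.destination=orders
spring.cloud.stream.bindings.input.group=order-service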
14.3. Configuration Options
This section contains the configuration options used by the Apache Kafka binder.
For common configuration options and properties pertaining to binder, refer to the core documentation.
14.3.1. Kafka Binder Properties
- spring.cloud.stream.kafka.binder.brokers
-
A list of brokers to which the Kafka binder will connect.
Default:
localhost
. - spring.cloud.stream.kafka.binder.defaultBrokerPort
-
brokers
allows hosts specified with or without port information (e.g.,host1,host2:port2
). This sets the default port when no port is configured in the broker list.Default:
9092
. - spring.cloud.stream.kafka.binder.zkNodes
-
A list of ZooKeeper nodes to which the Kafka binder can connect.
Default:
localhost
. - spring.cloud.stream.kafka.binder.defaultZkPort
-
zkNodes
allows hosts specified with or without port information (e.g.,host1,host2:port2
). This sets the default port when no port is configured in the node list.Default:
2181
. - spring.cloud.stream.kafka.binder.configuration
-
Key/Value map of client properties (both producers and consumer) passed to all clients created by the binder. Due to the fact that these properties will be used by both producers and consumers, usage should be restricted to common properties, especially security settings.
Default: Empty map.
- spring.cloud.stream.kafka.binder.headers
-
The list of custom headers that will be transported by the binder.
Default: empty.
- spring.cloud.stream.kafka.binder.healthTimeout
-
The time to wait to get partition information, in seconds. Health reports as down if this timer expires.
Default: 10.
- spring.cloud.stream.kafka.binder.offsetUpdateTimeWindow
-
The frequency, in milliseconds, with which offsets are saved. Ignored if
0
.Default:
10000
. - spring.cloud.stream.kafka.binder.offsetUpdateCount
-
The frequency, in number of updates, with which consumed offsets are persisted. Ignored if
0
. Mutually exclusive withoffsetUpdateTimeWindow
.Default:
0
. - spring.cloud.stream.kafka.binder.requiredAcks
-
The number of required acks on the broker.
Default:
1
. - spring.cloud.stream.kafka.binder.minPartitionCount
-
Effective only if
autoCreateTopics
orautoAddPartitions
is set. The global minimum number of partitions that the binder will configure on topics on which it produces/consumes data. It can be superseded by thepartitionCount
setting of the producer or by the value ofinstanceCount
*concurrency
settings of the producer (if either is larger).Default:
1
. - spring.cloud.stream.kafka.binder.replicationFactor
-
The replication factor of auto-created topics if
autoCreateTopics
is active.Default:
1
. - spring.cloud.stream.kafka.binder.autoCreateTopics
-
If set to
true
, the binder will create new topics automatically. If set tofalse
, the binder will rely on the topics being already configured. In the latter case, if the topics do not exist, the binder will fail to start. Of note, this setting is independent of theauto.topic.create.enable
setting of the broker and it does not influence it: if the server is set to auto-create topics, they may be created as part of the metadata retrieval request, with default broker settings.Default:
true
. - spring.cloud.stream.kafka.binder.autoAddPartitions
-
If set to
true
, the binder will add new partitions if required. If set to
, the binder will rely on the partition size of the topic being already configured. If the partition count of the target topic is smaller than the expected value, the binder will fail to start.Default:
false
. - spring.cloud.stream.kafka.binder.socketBufferSize
-
Size (in bytes) of the socket buffer to be used by the Kafka consumers.
Default:
2097152
.
14.3.2. Kafka Consumer Properties
The following properties are available for Kafka consumers only and
must be prefixed with spring.cloud.stream.kafka.bindings.<channelName>.consumer.
.
- autoRebalanceEnabled
-
When
true
, topic partitions will be automatically rebalanced between the members of a consumer group. Whenfalse
, each consumer will be assigned a fixed set of partitions based onspring.cloud.stream.instanceCount
andspring.cloud.stream.instanceIndex
. This requires bothspring.cloud.stream.instanceCount
andspring.cloud.stream.instanceIndex
properties to be set appropriately on each launched instance. The propertyspring.cloud.stream.instanceCount
must typically be greater than 1 in this case.Default:
true
. - autoCommitOffset
-
Whether to autocommit offsets when a message has been processed. If set to
false
, a header with the keykafka_acknowledgment
of the typeorg.springframework.kafka.support.Acknowledgment
header will be present in the inbound message. Applications may use this header for acknowledging messages. See the examples section for details. When this property is set tofalse
, Kafka binder will set the ack mode toorg.springframework.kafka.listener.AbstractMessageListenerContainer.AckMode.MANUAL
.Default:
true
. - autoCommitOnError
-
Effective only if
autoCommitOffset
is set totrue
. If set tofalse
it suppresses auto-commits for messages that result in errors, and will commit only for successful messages, allows a stream to automatically replay from the last successfully processed message, in case of persistent failures. If set totrue
, it will always auto-commit (if auto-commit is enabled). If not set (default), it effectively has the same value asenableDlq
, auto-committing erroneous messages if they are sent to a DLQ, and not committing them otherwise.Default: not set.
- recoveryInterval
-
The interval between connection recovery attempts, in milliseconds.
Default:
5000
. - startOffset
-
The starting offset for new groups. Allowed values:
earliest
,latest
. If the consumer group is set explicitly for the consumer 'binding' (viaspring.cloud.stream.bindings.<channelName>.group
), then 'startOffset' is set toearliest
; otherwise it is set tolatest
for theanonymous
consumer group.Default: null (equivalent to
earliest
). - enableDlq
-
When set to true, it enables DLQ behavior for the consumer. By default, messages that result in errors will be forwarded to a topic named
error.<destination>.<group>
. The DLQ topic name can be configured via the property
. This provides an alternative option to the more common Kafka replay scenario for the case when the number of errors is relatively small and replaying the entire original topic may be too cumbersome.Default:
false
. - configuration
-
Map with a key/value pair containing generic Kafka consumer properties.
Default: Empty map.
- dlqName
-
The name of the DLQ topic to receive the error messages.
Default: null (If not specified, messages that result in errors will be forwarded to a topic named
error.<destination>.<group>
).
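For example, dead-lettering for the input channel could be enabled and pointed at a custom topic with properties such as the following (the topic name is illustrative):
spring.cloud.stream.kafka.bindings.input.consumer.enableDlq=true
spring.cloud.stream.kafka.bindings.input.consumer.dlqName=input-dlq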
14.3.3. Kafka Producer Properties
The following properties are available for Kafka producers only and
must be prefixed with spring.cloud.stream.kafka.bindings.<channelName>.producer.
.
- bufferSize
-
Upper limit, in bytes, of how much data the Kafka producer will attempt to batch before sending.
Default:
16384
. - sync
-
Whether the producer is synchronous.
Default:
false
. - batchTimeout
-
How long the producer will wait before sending in order to allow more messages to accumulate in the same batch. (Normally the producer does not wait at all, and simply sends all the messages that accumulated while the previous send was in progress.) A non-zero value may increase throughput at the expense of latency.
Default:
0
. - messageKeyExpression
-
A SpEL expression evaluated against the outgoing message used to populate the key of the produced Kafka message. For example
headers.key
orpayload.myKey
.Default:
none
. - configuration
-
Map with a key/value pair containing generic Kafka producer properties.
Default: Empty map.
The Kafka binder uses the partitionCount setting of the producer as a hint to create a topic with the given partition count (in conjunction with minPartitionCount, the larger of the two values being used).
14.3.4. Usage examples
In this section, we illustrate the use of the above properties for specific scenarios.
Example: Setting autoCommitOffset to false and relying on manual acking.
This example illustrates how one may manually acknowledge offsets in a consumer application.
This example requires that spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOffset
is set to false.
Use the corresponding input channel name for your example.
@SpringBootApplication
@EnableBinding(Sink.class)
public class ManuallyAcknowledgingConsumer {

    public static void main(String[] args) {
        SpringApplication.run(ManuallyAcknowledgingConsumer.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void process(Message<?> message) {
        Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
        if (acknowledgment != null) {
            System.out.println("Acknowledgment provided");
            acknowledgment.acknowledge();
        }
    }
}
Example: security configuration
Apache Kafka 0.9 supports secure connections between client and brokers.
To take advantage of this feature, follow the guidelines in the Apache Kafka Documentation as well as the Kafka 0.9 security guidelines from the Confluent documentation.
Use the spring.cloud.stream.kafka.binder.configuration
option to set security properties for all clients created by the binder.
For example, for setting security.protocol
to SASL_SSL
, set:
spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_SSL
All the other security properties can be set in a similar manner.
When using Kerberos, follow the instructions in the reference documentation for creating and referencing the JAAS configuration.
Spring Cloud Stream supports passing JAAS configuration information to the application using a JAAS configuration file and using Spring Boot properties.
Using JAAS configuration files
The JAAS, and (optionally) krb5 file locations can be set for Spring Cloud Stream applications by using system properties. Here is an example of launching a Spring Cloud Stream application with SASL and Kerberos using a JAAS configuration file:
java -Djava.security.auth.login.config=/path.to/kafka_client_jaas.conf -jar log.jar \
--spring.cloud.stream.kafka.binder.brokers=secure.server:9092 \
--spring.cloud.stream.kafka.binder.zkNodes=secure.zookeeper:2181 \
--spring.cloud.stream.bindings.input.destination=stream.ticktock \
--spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_PLAINTEXT
Using Spring Boot properties
As an alternative to having a JAAS configuration file, Spring Cloud Stream provides a mechanism for setting up the JAAS configuration for Spring Cloud Stream applications using Spring Boot properties.
The following properties can be used for configuring the login context of the Kafka client.
- spring.cloud.stream.kafka.binder.jaas.loginModule
-
The login module name. Not necessary to be set in normal cases.
Default:
com.sun.security.auth.module.Krb5LoginModule
. - spring.cloud.stream.kafka.binder.jaas.controlFlag
-
The control flag of the login module.
Default:
required
. - spring.cloud.stream.kafka.binder.jaas.options
-
Map with a key/value pair containing the login module options.
Default: Empty map.
Here is an example of launching a Spring Cloud Stream application with SASL and Kerberos using Spring Boot configuration properties:
java -jar log.jar --spring.cloud.stream.kafka.binder.brokers=secure.server:9092 \
--spring.cloud.stream.kafka.binder.zkNodes=secure.zookeeper:2181 \
--spring.cloud.stream.bindings.input.destination=stream.ticktock \
--spring.cloud.stream.kafka.binder.autoCreateTopics=false \
--spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_PLAINTEXT \
--spring.cloud.stream.kafka.binder.jaas.options.useKeyTab=true \
--spring.cloud.stream.kafka.binder.jaas.options.storeKey=true \
--spring.cloud.stream.kafka.binder.jaas.options.keyTab=/etc/security/keytabs/kafka_client.keytab \
--spring.cloud.stream.kafka.binder.jaas.options.principal=kafka-client-1@EXAMPLE.COM
This represents the equivalent of the following JAAS file:
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/security/keytabs/kafka_client.keytab"
principal="kafka-client-1@EXAMPLE.COM";
};
If the topics required already exist on the broker, or will be created by an administrator, autocreation can be turned off and only client JAAS properties need to be sent. As an alternative to setting spring.cloud.stream.kafka.binder.autoCreateTopics
you can simply remove the broker dependency from the application. See Excluding Kafka broker jar from the classpath of the binder based application for details.
Do not mix JAAS configuration files and Spring Boot properties in the same application.
If the -Djava.security.auth.login.config system property is already present, Spring Cloud Stream ignores the Spring Boot properties.
Exercise caution when using the autoCreateTopics and autoAddPartitions options with Kerberos. Usually, applications may use principals that do not have administrative rights in Kafka and Zookeeper, so relying on Spring Cloud Stream to create or modify topics may fail. In secure environments, we strongly recommend creating topics and managing ACLs administratively by using Kafka tooling.
Using the binder with Apache Kafka 0.10
The default Kafka support in Spring Cloud Stream Kafka binder is for Kafka version 0.10.1.1. The binder also supports connecting to other 0.10 based versions and 0.9 clients.
In order to do this, when you create the project that contains your application, include spring-cloud-starter-stream-kafka
as you normally would do for the default binder.
Then add these dependencies at the top of the <dependencies>
section in the pom.xml file to override the dependencies.
Here is an example for downgrading your application to 0.10.0.1. Since it is still on the 0.10 line, the default spring-kafka
and spring-integration-kafka
versions can be retained.
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<version>0.10.0.1</version>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>0.10.0.1</version>
</dependency>
Here is another example of using 0.9.0.1 version.
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
<version>1.0.5.RELEASE</version>
</dependency>
<dependency>
<groupId>org.springframework.integration</groupId>
<artifactId>spring-integration-kafka</artifactId>
<version>2.0.1.RELEASE</version>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<version>0.9.0.1</version>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>0.9.0.1</version>
</dependency>
The versions above are provided only for the sake of the example. For best results, we recommend using the most recent 0.10-compatible versions of the projects.
Excluding Kafka broker jar from the classpath of the binder based application
The Apache Kafka Binder uses the administrative utilities which are part of the Apache Kafka server library to create and reconfigure topics. If the inclusion of the Apache Kafka server library and its dependencies is not necessary at runtime because the application will rely on the topics being configured administratively, the Kafka binder allows for Apache Kafka server dependency to be excluded from the application.
If you use non-default versions of the Kafka dependencies, as advised above, all you have to do is omit the Kafka broker dependency.
If you use the default Kafka version, ensure that you exclude the Kafka broker jar from the spring-cloud-starter-stream-kafka dependency, as follows:
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-stream-kafka</artifactId>
<exclusions>
<exclusion>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
</exclusion>
</exclusions>
</dependency>
If you exclude the Apache Kafka server dependency and the topic is not present on the server, then the Apache Kafka broker will create the topic if auto topic creation is enabled on the server. Please keep in mind that if you are relying on this, then the Kafka server will use the default number of partitions and replication factors. On the other hand, if auto topic creation is disabled on the server, then care must be taken before running the application to create the topic with the desired number of partitions.
If you want to have full control over how partitions are allocated, then leave the default settings as they are, i.e. do not exclude the kafka broker jar and ensure that spring.cloud.stream.kafka.binder.autoCreateTopics
is set to true
, which is the default.
14.4. Kafka Streams Binding Capabilities of Spring Cloud Stream
Spring Cloud Stream Kafka support also includes a binder specifically designed for Kafka Streams binding. With this binder, applications can be written that leverage the Kafka Streams API. For more information on Kafka Streams, see the Kafka Streams API Developer Manual.
Kafka Streams support in Spring Cloud Stream is based on the foundations provided by the Spring Kafka project. For details on that support, see Kafka Streams Support in Spring Kafka.
Here are the Maven coordinates for the Spring Cloud Stream KStream binder artifact:
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kstream</artifactId>
</dependency>
In addition to leveraging the Spring Cloud Stream programming model (which is based on Spring Boot), another main benefit of the KStream binder is that it avoids the boilerplate configuration that you would otherwise need to write when using the Kafka Streams API directly. The high-level Streams DSL provided by the Kafka Streams API can be used through Spring Cloud Stream in the current support.
14.4.1. Usage example of high level streams DSL
This application listens to a Kafka topic and writes the word count for each unique word that it sees within a 5-second time window.
@SpringBootApplication
@EnableBinding(KStreamProcessor.class)
public class WordCountProcessorApplication {
@StreamListener("input")
@SendTo("output")
public KStream<?, String> process(KStream<?, String> input) {
return input
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
.map((key, word) -> new KeyValue<>(word, word))
.groupByKey(Serdes.String(), Serdes.String())
.count(TimeWindows.of(5000), "store-name")
.toStream()
.map((w, c) -> new KeyValue<>(null, "Count for " + w.key() + ": " + c));
}
public static void main(String[] args) {
SpringApplication.run(WordCountProcessorApplication.class, args);
    }
}
If you build it as a Spring Boot runnable fat JAR, you can run the above example in the following way:
java -jar uber.jar --spring.cloud.stream.bindings.input.destination=words --spring.cloud.stream.bindings.output.destination=counts
This means that the application listens to the incoming Kafka topic words and writes to the output topic counts.
Spring Cloud Stream ensures that the messages from both the incoming and outgoing topics are bound as KStream objects. As one may observe, the developer can focus exclusively on the business aspects of the code, i.e. writing the logic required in the processor, rather than setting up the stream-specific configuration required by the Kafka Streams infrastructure. All of that boilerplate is handled by Spring Cloud Stream behind the scenes.
14.4.2. Support for interactive queries
If access to the KafkaStreams
is needed for interactive queries, the internal KafkaStreams
instance can be accessed via KStreamBuilderFactoryBean.getKafkaStreams()
.
You can autowire the KStreamBuilderFactoryBean
instance provided by the KStream binder. You can then obtain the KafkaStreams
instance from it and retrieve the underlying store, execute queries on it, and so on, as in the sketch below.
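The following is a minimal sketch of such a lookup. The store name word-counts and the key/value types are assumptions for illustration; substitute the name and types of a store that your own topology actually materializes.
@Component
public class InteractiveQueryService {

    @Autowired
    private KStreamBuilderFactoryBean kStreamBuilderFactoryBean;

    public Long countFor(String word) {
        // obtain the underlying KafkaStreams instance from the factory bean
        KafkaStreams kafkaStreams = kStreamBuilderFactoryBean.getKafkaStreams();
        // "word-counts" is a hypothetical queryable state store; use the name of a
        // store materialized by your own topology
        ReadOnlyKeyValueStore<String, Long> store =
                kafkaStreams.store("word-counts", QueryableStoreTypes.keyValueStore());
        return store.get(word);
    }
}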
14.4.3. Kafka Streams properties
- configuration
-
Map with key/value pairs containing properties pertaining to the Kafka Streams API. This property must be prefixed with
spring.cloud.stream.kstream.binder.
. The following are some examples of using this property.
spring.cloud.stream.kstream.binder.configuration.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
spring.cloud.stream.kstream.binder.configuration.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
spring.cloud.stream.kstream.binder.configuration.commit.interval.ms=1000
For more information about all the properties that may go into streams configuration, see StreamsConfig JavaDocs.
There can also be binding specific properties.
For instance, you can use a different Serde for your input or output destination.
spring.cloud.stream.kstream.bindings.output.producer.keySerde=org.apache.kafka.common.serialization.Serdes$IntegerSerde
spring.cloud.stream.kstream.bindings.output.producer.valueSerde=org.apache.kafka.common.serialization.Serdes$LongSerde
- timewindow.length
-
Many streaming applications written using Kafka Streams involve windowing operations. If you specify this property, there is an
org.apache.kafka.streams.kstream.TimeWindows
bean automatically provided that can be autowired in applications (see the sketch after this list). This property must be prefixed with spring.cloud.stream.kstream.
. A bean of type org.apache.kafka.streams.kstream.TimeWindows
is created only if this property is provided. The following is an example of using this property. Values are provided in milliseconds.
spring.cloud.stream.kstream.timeWindow.length=5000
- timewindow.advanceBy
-
This property goes hand in hand with
timewindow.length
and has no effect on its own. If you provide this property, the generated org.apache.kafka.streams.kstream.TimeWindows
bean will automatically contain this information. This property must be prefixed with spring.cloud.stream.kstream.
. The following is an example of using this property. Values are provided in milliseconds.
spring.cloud.stream.kstream.timeWindow.advanceBy=1000
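As a sketch, the word-count processor shown earlier could autowire the automatically provided bean instead of hard-coding the window, assuming spring.cloud.stream.kstream.timeWindow.length (and optionally timeWindow.advanceBy) is set as above.
@Autowired
private TimeWindows timeWindows;

@StreamListener("input")
@SendTo("output")
public KStream<?, String> process(KStream<?, String> input) {
    return input
            .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
            .map((key, word) -> new KeyValue<>(word, word))
            .groupByKey(Serdes.String(), Serdes.String())
            // the window length and advance come from the configured TimeWindows bean
            .count(timeWindows, "store-name")
            .toStream()
            .map((w, c) -> new KeyValue<>(null, "Count for " + w.key() + ": " + c));
}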
14.5. Error Channels
Starting with version 1.3, the binder unconditionally sends exceptions to an error channel for each consumer destination, and can be configured to send async producer send failures to an error channel too. See Message Channel Binders and Error Channels for more information.
The payload of the ErrorMessage
for a send failure is a KafkaSendFailureException
with properties:
-
failedMessage
- the spring-messagingMessage<?>
that failed to be sent. -
record
- the rawProducerRecord
that was created from thefailedMessage
There is no automatic handling of these exceptions (such as sending to a Dead-Letter queue); you can consume these exceptions with your own Spring Integration flow.
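As a sketch of such a flow, the following hypothetical service activator simply logs send failures. The error channel name myTopic.errors is an assumption here; see Message Channel Binders and Error Channels for how the channel for your destination is actually named.
@ServiceActivator(inputChannel = "myTopic.errors") // channel name is an assumption
public void handleSendFailure(ErrorMessage errorMessage) {
    Throwable cause = errorMessage.getPayload();
    if (cause instanceof KafkaSendFailureException) {
        KafkaSendFailureException failure = (KafkaSendFailureException) cause;
        // the spring-messaging Message<?> that could not be sent
        Message<?> failed = failure.getFailedMessage();
        // the raw ProducerRecord created from the failed message
        ProducerRecord<?, ?> record = failure.getRecord();
        System.out.println("Send failed for " + failed + "; record: " + record);
    }
}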
14.6. Kafka Metrics
The Kafka binder module exposes the following metric:
spring.cloud.stream.binder.kafka.someGroup.someTopic.lag
- this metric indicates how many messages have not yet been consumed from the given binder's topic by the given consumer group.
For example, if the value of the metric spring.cloud.stream.binder.kafka.myGroup.myTopic.lag
is 1000
, then the consumer group myGroup
has 1000
messages waiting to be consumed from the topic myTopic
.
This metric is particularly useful for providing auto-scaling feedback to the PaaS platform of your choice.
14.7. Dead-Letter Topic Processing
Because it can’t be anticipated how users would want to dispose of dead-lettered messages, the framework does not provide any standard mechanism to handle them.
If the reason for the dead-lettering is transient, you may wish to route the messages back to the original topic.
However, if the problem is a permanent issue, that could cause an infinite loop.
The following spring-boot
application is an example of how to route those messages back to the original topic; it moves them to a third "parking lot" topic after three attempts.
The application is simply another spring-cloud-stream application that reads from the dead-letter topic.
It terminates when no messages are received for 5 seconds.
The examples assume the original destination is so8400out
and the consumer group is so8400
.
There are several considerations.
-
Consider only running the rerouting when the main application is not running. Otherwise, the retries for transient errors will be used up very quickly.
-
Alternatively, use a two-stage approach - use this application to route to a third topic, and another to route from there back to the main topic.
-
Since this technique uses a message header to keep track of retries, it won’t work with
headerMode=raw
. In that case, consider adding some data to the payload (that can be ignored by the main application). -
x-retries
has to be added to the headers
property spring.cloud.stream.kafka.binder.headers=x-retries
on both this application and the main application, so that the header is transported between the applications. -
Since Kafka is publish/subscribe, replayed messages will be sent to each consumer group, even those that successfully processed a message the first time around.
spring.cloud.stream.bindings.input.group=so8400replay
spring.cloud.stream.bindings.input.destination=error.so8400out.so8400
spring.cloud.stream.bindings.output.destination=so8400out
spring.cloud.stream.bindings.output.producer.partitioned=true
spring.cloud.stream.bindings.parkingLot.destination=so8400in.parkingLot
spring.cloud.stream.bindings.parkingLot.producer.partitioned=true
spring.cloud.stream.kafka.binder.configuration.auto.offset.reset=earliest
spring.cloud.stream.kafka.binder.headers=x-retries
@SpringBootApplication
@EnableBinding(TwoOutputProcessor.class)
public class ReRouteDlqKApplication implements CommandLineRunner {
private static final String X_RETRIES_HEADER = "x-retries";
public static void main(String[] args) {
SpringApplication.run(ReRouteDlqKApplication.class, args).close();
}
private final AtomicInteger processed = new AtomicInteger();
@Autowired
private MessageChannel parkingLot;
@StreamListener(Processor.INPUT)
@SendTo(Processor.OUTPUT)
public Message<?> reRoute(Message<?> failed) {
processed.incrementAndGet();
Integer retries = failed.getHeaders().get(X_RETRIES_HEADER, Integer.class);
if (retries == null) {
System.out.println("First retry for " + failed);
return MessageBuilder.fromMessage(failed)
.setHeader(X_RETRIES_HEADER, new Integer(1))
.setHeader(BinderHeaders.PARTITION_OVERRIDE,
failed.getHeaders().get(KafkaHeaders.RECEIVED_PARTITION_ID))
.build();
}
else if (retries.intValue() < 3) {
System.out.println("Another retry for " + failed);
return MessageBuilder.fromMessage(failed)
.setHeader(X_RETRIES_HEADER, new Integer(retries.intValue() + 1))
.setHeader(BinderHeaders.PARTITION_OVERRIDE,
failed.getHeaders().get(KafkaHeaders.RECEIVED_PARTITION_ID))
.build();
}
else {
System.out.println("Retries exhausted for " + failed);
parkingLot.send(MessageBuilder.fromMessage(failed)
.setHeader(BinderHeaders.PARTITION_OVERRIDE,
failed.getHeaders().get(KafkaHeaders.RECEIVED_PARTITION_ID))
.build());
}
return null;
}
@Override
public void run(String... args) throws Exception {
while (true) {
int count = this.processed.get();
Thread.sleep(5000);
if (count == this.processed.get()) {
System.out.println("Idle, terminating");
return;
}
}
}
public interface TwoOutputProcessor extends Processor {
@Output("parkingLot")
MessageChannel parkingLot();
}
}
15. RabbitMQ Binder
15.1. Usage
To use the RabbitMQ binder, you just need to add it to your Spring Cloud Stream application, using the following Maven coordinates:
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-rabbit</artifactId>
</dependency>
Alternatively, you can also use the Spring Cloud Stream RabbitMQ Starter.
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-stream-rabbit</artifactId>
</dependency>
15.2. RabbitMQ Binder Overview
A simplified diagram of how the RabbitMQ binder operates can be seen below.
The RabbitMQ Binder implementation maps each destination to a TopicExchange
.
For each consumer group, a Queue
will be bound to that TopicExchange
.
Each consumer instance has a corresponding RabbitMQ Consumer
instance for its group’s Queue
.
For partitioned producers and consumers, the queues are suffixed with the partition index and use the partition index as the routing key.
Using the autoBindDlq
option, you can optionally configure the binder to create and configure dead-letter queues (DLQs) (and a dead-letter exchange DLX
).
The dead letter queue has the name of the destination, appended with .dlq
.
If retry is enabled (maxAttempts > 1
), failed messages will be delivered to the DLQ.
If retry is disabled (maxAttempts = 1
), you should set requeueRejected
to false
(the default) so that a failed message will be routed to the DLQ instead of being requeued.
In addition, republishToDlq
causes the binder to publish a failed message to the DLQ (instead of rejecting it); this enables additional information to be added to the message in headers, such as the stack trace in the x-exception-stacktrace
header.
This option does not need retry enabled; you can republish a failed message after just one attempt.
Starting with version 1.2, you can configure the delivery mode of republished messages; see property republishDeliveryMode
.
Setting requeueRejected to true will cause the message to be requeued and redelivered continually, which is likely not what you want unless the failure is transient.
In general, it is better to enable retry within the binder by setting maxAttempts to greater than one, or to set republishToDlq to true.
See RabbitMQ Binder Properties for more information about these properties.
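For example, a hypothetical consumer binding named input could combine binder retry with DLQ republishing as follows (the destination and group names are placeholder values):
spring.cloud.stream.bindings.input.destination=orders
spring.cloud.stream.bindings.input.group=orderGroup
spring.cloud.stream.bindings.input.consumer.maxAttempts=3
spring.cloud.stream.rabbit.bindings.input.consumer.autoBindDlq=true
spring.cloud.stream.rabbit.bindings.input.consumer.republishToDlq=true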
The framework does not provide any standard mechanism to consume dead-letter messages (or to re-route them back to the primary queue). Some options are described in Dead-Letter Queue Processing.
When multiple RabbitMQ binders are used in a Spring Cloud Stream application, it is important to disable 'RabbitAutoConfiguration' to avoid the same configuration from RabbitAutoConfiguration being applied to the binders.
Starting with version 1.3, the RabbitMessageChannelBinder
creates an internal ConnectionFactory
copy for the non-transactional producers, to avoid deadlocks on consumers when shared, cached connections are blocked because of a memory alarm on the broker.
15.3. Configuration Options
This section contains settings specific to the RabbitMQ Binder and bound channels.
For general binding configuration options and properties, please refer to the Spring Cloud Stream core documentation.
15.3.1. RabbitMQ Binder Properties
By default, the RabbitMQ binder uses Spring Boot’s ConnectionFactory
, and it therefore supports all Spring Boot configuration options for RabbitMQ.
(For reference, consult the Spring Boot documentation.)
RabbitMQ configuration options use the spring.rabbitmq
prefix.
In addition to Spring Boot options, the RabbitMQ binder supports the following properties:
- spring.cloud.stream.rabbit.binder.adminAddresses
-
A comma-separated list of RabbitMQ management plugin URLs. Only used when
nodes
contains more than one entry. Each entry in this list must have a corresponding entry inspring.rabbitmq.addresses
. Only needed if you are using a RabbitMQ cluster and wish to consume from the node that hosts the queue. See Queue Affinity and the LocalizedQueueConnectionFactory for more information.Default: empty.
- spring.cloud.stream.rabbit.binder.nodes
-
A comma-separated list of RabbitMQ node names. When this list contains more than one entry, it is used to locate the server address where a queue is located. Each entry in this list must have a corresponding entry in
spring.rabbitmq.addresses
. Only needed if you are using a RabbitMQ cluster and wish to consume from the node that hosts the queue. See Queue Affinity and the LocalizedQueueConnectionFactory for more information. (An example appears after this list.) Default: empty.
- spring.cloud.stream.rabbit.binder.compressionLevel
-
Compression level for compressed bindings. See
java.util.zip.Deflater
.Default:
1
(BEST_SPEED).
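For example, a hypothetical two-node cluster might be configured as follows; the host names, ports, and node names are placeholder values, not taken from this guide.
spring.rabbitmq.addresses=rabbit1:5672,rabbit2:5672
spring.cloud.stream.rabbit.binder.nodes=rabbit@rabbit1,rabbit@rabbit2
spring.cloud.stream.rabbit.binder.adminAddresses=http://rabbit1:15672,http://rabbit2:15672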
15.3.2. RabbitMQ Consumer Properties
The following properties are available for Rabbit consumers only and
must be prefixed with spring.cloud.stream.rabbit.bindings.<channelName>.consumer.
.
- acknowledgeMode
-
The acknowledge mode.
Default:
AUTO
. - autoBindDlq
-
Whether to automatically declare the DLQ and bind it to the binder DLX.
Default:
false
. - bindingRoutingKey
-
The routing key with which to bind the queue to the exchange (if
bindQueue
istrue
). For partitioned destinations, -<instanceIndex>
will be appended. Default:
#
. - bindQueue
-
Whether to bind the queue to the destination exchange; set to
false
if you have set up your own infrastructure and have previously created/bound the queue.Default:
true
. - deadLetterQueueName
-
name of the DLQ
Default:
prefix+destination.dlq
- deadLetterExchange
-
a DLX to assign to the queue; if autoBindDlq is true
Default: 'prefix+DLX'
- deadLetterRoutingKey
-
a dead letter routing key to assign to the queue; if autoBindDlq is true
Default:
destination
- declareExchange
-
Whether to declare the exchange for the destination.
Default:
true
. - delayedExchange
-
Whether to declare the exchange as a
Delayed Message Exchange
- requires the delayed message exchange plugin on the broker. Thex-delayed-type
argument is set to theexchangeType
.Default:
false
. - dlqDeadLetterExchange
-
if a DLQ is declared, a DLX to assign to that queue
Default:
none
- dlqDeadLetterRoutingKey
-
if a DLQ is declared, a dead letter routing key to assign to that queue; default none
Default:
none
- dlqExpires
-
how long before an unused dead letter queue is deleted (ms)
Default:
no expiration
- dlqLazy
-
Declare the dead letter queue with the
x-queue-mode=lazy
argument. See Lazy Queues. Consider using a policy instead of this setting because using a policy allows changing the setting without deleting the queue.Default:
false
. - dlqMaxLength
-
maximum number of messages in the dead letter queue
Default:
no limit
- dlqMaxLengthBytes
-
maximum number of total bytes in the dead letter queue from all messages
Default:
no limit
- dlqMaxPriority
-
maximum priority of messages in the dead letter queue (0-255)
Default:
none
- dlqTtl
-
default time to live to apply to the dead letter queue when declared (ms)
Default:
no limit
- durableSubscription
-
Whether subscription should be durable. Only effective if
group
is also set.Default:
true
. - exchangeAutoDelete
-
If
declareExchange
is true, whether the exchange should be auto-delete (removed after the last queue is removed).Default:
true
. - exchangeDurable
-
If
declareExchange
is true, whether the exchange should be durable (survives broker restart).Default:
true
. - exchangeType
-
The exchange type;
direct
,fanout
ortopic
for non-partitioned destinations;direct
ortopic
for partitioned destinations.Default:
topic
. - exclusive
-
Create an exclusive consumer; concurrency should be 1 when this is
true
; often used when strict ordering is required but enabling a hot standby instance to take over after a failure. SeerecoveryInterval
, which controls how often a standby instance will attempt to consume.Default:
false
. - expires
-
how long before an unused queue is deleted (ms)
Default:
no expiration
- failedDeclarationRetryInterval
-
The interval (ms) between attempts to consume from a queue if it is missing.
Default: 5000
- headerPatterns
-
Patterns for headers to be mapped from inbound messages.
Default:
['*']
(all headers). - lazy
-
Declare the queue with the
x-queue-mode=lazy
argument. See Lazy Queues. Consider using a policy instead of this setting because using a policy allows changing the setting without deleting the queue.Default:
false
. - maxConcurrency
-
the maximum number of consumers
Default:
1
. - maxLength
-
maximum number of messages in the queue
Default:
no limit
- maxLengthBytes
-
maximum number of total bytes in the queue from all messages
Default:
no limit
- maxPriority
-
maximum priority of messages in the queue (0-255)
Default:
none
- missingQueuesFatal
-
If the queue cannot be found, treat the condition as fatal and stop the listener container. Defaults to
false
so that the container keeps trying to consume from the queue, for example when using a cluster and the node hosting a non HA queue is down.
Default:
false
- prefetch
-
Prefetch count.
Default:
1
. - prefix
-
A prefix to be added to the name of the
destination
and queues.Default: "".
- queueDeclarationRetries
-
The number of times to retry consuming from a queue if it is missing. Only relevant if
missingQueuesFatal
istrue
; otherwise the container keeps retrying indefinitely.
Default:
3
- queueNameGroupOnly
-
When true, consume from a queue with a name equal to the
group
; otherwise the queue name isdestination.group
. This is useful, for example, when using Spring Cloud Stream to consume from an existing RabbitMQ queue.Default: false.
- recoveryInterval
-
The interval between connection recovery attempts, in milliseconds.
Default:
5000
. - requeueRejected
-
Whether delivery failures should be requeued when retry is disabled or republishToDlq is false.
Default:
false
. - republishDeliveryMode
-
When
republishToDlq
istrue
, specify the delivery mode of the republished message.Default:
DeliveryMode.PERSISTENT
- republishToDlq
-
By default, messages which fail after retries are exhausted are rejected. If a dead-letter queue (DLQ) is configured, RabbitMQ will route the failed message (unchanged) to the DLQ. If set to
true
, the binder will republish failed messages to the DLQ with additional headers, including the exception message and stack trace from the cause of the final failure.Default: false
- transacted
-
Whether to use transacted channels.
Default:
false
. - ttl
-
default time to live to apply to the queue when declared (ms)
Default:
no limit
- txSize
-
The number of deliveries between acks.
Default:
1
.
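For example, the following sketch sets a few of the properties above for a hypothetical channel named input:
spring.cloud.stream.rabbit.bindings.input.consumer.autoBindDlq=true
spring.cloud.stream.rabbit.bindings.input.consumer.prefetch=10
spring.cloud.stream.rabbit.bindings.input.consumer.ttl=60000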
15.3.3. Rabbit Producer Properties
The following properties are available for Rabbit producers only and
must be prefixed with spring.cloud.stream.rabbit.bindings.<channelName>.producer.
.
- autoBindDlq
-
Whether to automatically declare the DLQ and bind it to the binder DLX.
Default:
false
. - batchingEnabled
-
Whether to enable message batching by producers.
Default:
false
. - batchSize
-
The number of messages to buffer when batching is enabled.
Default:
100
. - batchBufferLimit
-
Default:
10000
. - batchTimeout
-
Default:
5000
. - bindingRoutingKey
-
The routing key with which to bind the queue to the exchange (if
bindQueue
istrue
). Only applies to non-partitioned destinations. Only applies ifrequiredGroups
are provided and then only to those groups.Default:
#
. - bindQueue
-
Whether to bind the queue to the destination exchange; set to
false
if you have set up your own infrastructure and have previously created/bound the queue. Only applies ifrequiredGroups
are provided and then only to those groups.Default:
true
. - compress
-
Whether data should be compressed when sent.
Default:
false
. - deadLetterQueueName
-
name of the DLQ Only applies if
requiredGroups
are provided and then only to those groups.Default:
prefix+destination.dlq
- deadLetterExchange
-
a DLX to assign to the queue; if autoBindDlq is true Only applies if
requiredGroups
are provided and then only to those groups.Default: 'prefix+DLX'
- deadLetterRoutingKey
-
a dead letter routing key to assign to the queue; if autoBindDlq is true Only applies if
requiredGroups
are provided and then only to those groups.Default:
destination
- declareExchange
-
Whether to declare the exchange for the destination.
Default:
true
. - delay
-
A SpEL expression to evaluate the delay to apply to the message (
x-delay
header) - has no effect if the exchange is not a delayed message exchange.Default: No
x-delay
header is set. - delayedExchange
-
Whether to declare the exchange as a
Delayed Message Exchange
- requires the delayed message exchange plugin on the broker. Thex-delayed-type
argument is set to theexchangeType
.Default:
false
. - deliveryMode
-
Delivery mode.
Default:
PERSISTENT
. - dlqDeadLetterExchange
-
if a DLQ is declared, a DLX to assign to that queue Only applies if
requiredGroups
are provided and then only to those groups.Default:
none
- dlqDeadLetterRoutingKey
-
if a DLQ is declared, a dead letter routing key to assign to that queue; default none Only applies if
requiredGroups
are provided and then only to those groups.Default:
none
- dlqExpires
-
how long before an unused dead letter queue is deleted (ms) Only applies if
requiredGroups
are provided and then only to those groups.Default:
no expiration
- dlqLazy
-
Declare the dead letter queue with the
x-queue-mode=lazy
argument. See Lazy Queues. Consider using a policy instead of this setting because using a policy allows changing the setting without deleting the queue. Only applies ifrequiredGroups
are provided and then only to those groups. - dlqMaxLength
-
maximum number of messages in the dead letter queue Only applies if
requiredGroups
are provided and then only to those groups.Default:
no limit
- dlqMaxLengthBytes
-
maximum number of total bytes in the dead letter queue from all messages Only applies if
requiredGroups
are provided and then only to those groups.Default:
no limit
- dlqMaxPriority
-
maximum priority of messages in the dead letter queue (0-255) Only applies if
requiredGroups
are provided and then only to those groups.Default:
none
- dlqTtl
-
default time to live to apply to the dead letter queue when declared (ms) Only applies if
requiredGroups
are provided and then only to those groups.Default:
no limit
- exchangeAutoDelete
-
If
declareExchange
is true, whether the exchange should be auto-delete (removed after the last queue is removed).Default:
true
. - exchangeDurable
-
If
declareExchange
is true, whether the exchange should be durable (survives broker restart).Default:
true
. - exchangeType
-
The exchange type;
direct
,fanout
ortopic
for non-partitioned destinations;direct
ortopic
for partitioned destinations.Default:
topic
. - expires
-
how long before an unused queue is deleted (ms) Only applies if
requiredGroups
are provided and then only to those groups.Default:
no expiration
- headerPatterns
-
Patterns for headers to be mapped to outbound messages.
Default:
['*']
(all headers). - lazy
-
Declare the queue with the
x-queue-mode=lazy
argument. See Lazy Queues. Consider using a policy instead of this setting because using a policy allows changing the setting without deleting the queue. Only applies ifrequiredGroups
are provided and then only to those groups.Default:
false
. - maxLength
-
maximum number of messages in the queue Only applies if
requiredGroups
are provided and then only to those groups.Default:
no limit
- maxLengthBytes
-
maximum number of total bytes in the queue from all messages Only applies if
requiredGroups
are provided and then only to those groups.Default:
no limit
- maxPriority
-
maximum priority of messages in the queue (0-255) Only applies if
requiredGroups
are provided and then only to those groups.
Default:
none
- prefix
-
A prefix to be added to the name of the
destination
exchange.Default: "".
- queueNameGroupOnly
-
When true, consume from a queue with a name equal to the
group
; otherwise the queue name isdestination.group
. This is useful, for example, when using Spring Cloud Stream to consume from an existing RabbitMQ queue. Only applies ifrequiredGroups
are provided and then only to those groups.Default: false.
- routingKeyExpression
-
A SpEL expression to determine the routing key to use when publishing messages. For a fixed routing key, use a literal expression, e.g.
routingKeyExpression='my.routingKey'
in a properties file, orroutingKeyExpression: '''my.routingKey'''
in a YAML file.Default:
destination
ordestination-<partition>
for partitioned destinations. - transacted
-
Whether to use transacted channels.
Default:
false
. - ttl
-
default time to live to apply to the queue when declared (ms) Only applies if
requiredGroups
are provided and then only to those groups.Default:
no limit
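For example, the following sketch sets a few of the properties above for a hypothetical channel named output:
spring.cloud.stream.rabbit.bindings.output.producer.compress=true
spring.cloud.stream.rabbit.bindings.output.producer.deliveryMode=NON_PERSISTENT
spring.cloud.stream.rabbit.bindings.output.producer.routingKeyExpression='my.routingKey'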
In the case of RabbitMQ, content type headers can be set by external applications. Spring Cloud Stream supports them as part of an extended internal protocol used for any type of transport (including transports, such as Kafka, that do not normally support headers).
15.4. Retry With the RabbitMQ Binder
15.4.1. Overview
When retry is enabled within the binder, the listener container thread is suspended for any back off periods that are configured. This might be important when strict ordering is required with a single consumer but for other use cases it prevents other messages from being processed on that thread. An alternative to using binder retry is to set up dead lettering with time to live on the dead-letter queue (DLQ), as well as dead-letter configuration on the DLQ itself. See RabbitMQ Binder Properties for more information about the properties discussed here. Example configuration to enable this feature:
-
Set
autoBindDlq
totrue
- the binder will create a DLQ; you can optionally specify a name indeadLetterQueueName
-
Set
dlqTtl
to the back off time you want to wait between redeliveries -
Set the
dlqDeadLetterExchange
to the default exchange - expired messages from the DLQ will be routed to the original queue since the defaultdeadLetterRoutingKey
is the queue name (destination.group
)
To force a message to be dead-lettered, either throw an AmqpRejectAndDontRequeueException
, or set requeueRejected
to true
and throw any exception.
The loop will continue without end, which is fine for transient problems, but you may want to give up after some number of attempts.
Fortunately, RabbitMQ provides the x-death
header which allows you to determine how many cycles have occurred.
To acknowledge a message after giving up, throw an ImmediateAcknowledgeAmqpException
.
15.4.2. Putting it All Together
---
spring.cloud.stream.bindings.input.destination=myDestination
spring.cloud.stream.bindings.input.group=consumerGroup
#disable binder retries
spring.cloud.stream.bindings.input.consumer.max-attempts=1
#dlx/dlq setup
spring.cloud.stream.rabbit.bindings.input.consumer.auto-bind-dlq=true
spring.cloud.stream.rabbit.bindings.input.consumer.dlq-ttl=5000
spring.cloud.stream.rabbit.bindings.input.consumer.dlq-dead-letter-exchange=
---
This configuration creates an exchange myDestination
with queue myDestination.consumerGroup
bound to a topic exchange with a wildcard routing key #
.
It creates a DLQ bound to a direct exchange DLX
with routing key myDestination.consumerGroup
.
When messages are rejected, they are routed to the DLQ.
After 5 seconds, the message expires and is routed to the original queue using the queue name as the routing key.
@SpringBootApplication
@EnableBinding(Sink.class)
public class XDeathApplication {
public static void main(String[] args) {
SpringApplication.run(XDeathApplication.class, args);
}
@StreamListener(Sink.INPUT)
public void listen(String in, @Header(name = "x-death", required = false) Map<?,?> death) {
if (death != null && death.get("count").equals(3L)) {
// giving up - don't send to DLX
throw new ImmediateAcknowledgeAmqpException("Failed after 4 attempts");
}
throw new AmqpRejectAndDontRequeueException("failed");
}
}
Notice that the count property in the x-death
header is a Long
.
15.5. Error Channels
Starting with version 1.3, the binder unconditionally sends exceptions to an error channel for each consumer destination, and can be configured to send async producer send failures to an error channel too. See Message Channel Binders and Error Channels for more information.
With RabbitMQ, there are two types of send failures:
-
returned messages
-
negatively acknowledged Publisher Confirms
The latter is rare; quoting the RabbitMQ documentation, "[A nack] will only be delivered if an internal error occurs in the Erlang process responsible for a queue."
As well as enabling producer error channels as described in Message Channel Binders and Error Channels, the RabbitMQ binder will only send messages to the channels if the connection factory is appropriately configured:
-
ccf.setPublisherConfirms(true);
-
ccf.setPublisherReturns(true);
When using Spring Boot configuration for the connection factory, set the following properties:
-
spring.rabbitmq.publisher-confirms
-
spring.rabbitmq.publisher-returns
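For example (the last line assumes a producer binding named output and uses the core errorChannelEnabled producer property described in the referenced section; both are assumptions for illustration):
spring.rabbitmq.publisher-confirms=true
spring.rabbitmq.publisher-returns=true
spring.cloud.stream.bindings.output.producer.errorChannelEnabled=true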
The payload of the ErrorMessage
for a returned message is a ReturnedAmqpMessageException
with properties:
-
failedMessage
- the spring-messagingMessage<?>
that failed to be sent. -
amqpMessage
- the raw spring-amqpMessage
-
replyCode
- an integer value indicating the reason for the failure (e.g. 312 - No route) -
replyText
- a text value indicating the reason for the failure e.g.NO_ROUTE
. -
exchange
- the exchange to which the message was published. -
routingKey
- the routing key used when the message was published.
For negatively acknowledged confirms, the payload is a NackedAmqpMessageException
with properties:
-
failedMessage
- the spring-messagingMessage<?>
that failed to be sent. -
nackReason
- a reason (if available; you may need to examine the broker logs for more information).
There is no automatic handling of these exceptions (such as sending to a Dead-Letter queue); you can consume these exceptions with your own Spring Integration flow.
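As a sketch of such a flow, the following hypothetical service activator distinguishes the two failure types and logs them. The channel name myDestination.errors is an assumption; see Message Channel Binders and Error Channels for the actual naming.
@ServiceActivator(inputChannel = "myDestination.errors") // channel name is an assumption
public void handleSendFailure(ErrorMessage errorMessage) {
    Throwable cause = errorMessage.getPayload();
    if (cause instanceof ReturnedAmqpMessageException) {
        ReturnedAmqpMessageException returned = (ReturnedAmqpMessageException) cause;
        // for example, 312 NO_ROUTE when no queue is bound with a matching routing key
        System.out.println("Returned: " + returned.getReplyCode() + " " + returned.getReplyText());
    }
    else if (cause instanceof NackedAmqpMessageException) {
        NackedAmqpMessageException nacked = (NackedAmqpMessageException) cause;
        System.out.println("Nacked: " + nacked.getNackReason());
    }
}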
15.6. Dead-Letter Queue Processing
Because it can’t be anticipated how users would want to dispose of dead-lettered messages, the framework does not provide any standard mechanism to handle them.
If the reason for the dead-lettering is transient, you may wish to route the messages back to the original queue.
However, if the problem is a permanent issue, that could cause an infinite loop.
The following spring-boot
application is an example of how to route those messages back to the original queue; it moves them to a third "parking lot" queue after three attempts.
The second example uses the RabbitMQ Delayed Message Exchange to introduce a delay to the re-queued message.
In this example, the delay increases for each attempt.
These examples use a @RabbitListener
to receive messages from the DLQ; you could also use RabbitTemplate.receive()
in a batch process.
The examples assume the original destination is so8400in
and the consumer group is so8400
.
15.6.1. Non-Partitioned Destinations
The first two examples are when the destination is not partitioned.
@SpringBootApplication
public class ReRouteDlqApplication {
private static final String ORIGINAL_QUEUE = "so8400in.so8400";
private static final String DLQ = ORIGINAL_QUEUE + ".dlq";
private static final String PARKING_LOT = ORIGINAL_QUEUE + ".parkingLot";
private static final String X_RETRIES_HEADER = "x-retries";
public static void main(String[] args) throws Exception {
ConfigurableApplicationContext context = SpringApplication.run(ReRouteDlqApplication.class, args);
System.out.println("Hit enter to terminate");
System.in.read();
context.close();
}
@Autowired
private RabbitTemplate rabbitTemplate;
@RabbitListener(queues = DLQ)
public void rePublish(Message failedMessage) {
Integer retriesHeader = (Integer) failedMessage.getMessageProperties().getHeaders().get(X_RETRIES_HEADER);
if (retriesHeader == null) {
retriesHeader = Integer.valueOf(0);
}
if (retriesHeader < 3) {
failedMessage.getMessageProperties().getHeaders().put(X_RETRIES_HEADER, retriesHeader + 1);
this.rabbitTemplate.send(ORIGINAL_QUEUE, failedMessage);
}
else {
this.rabbitTemplate.send(PARKING_LOT, failedMessage);
}
}
@Bean
public Queue parkingLot() {
return new Queue(PARKING_LOT);
}
}
@SpringBootApplication
public class ReRouteDlqApplication {
private static final String ORIGINAL_QUEUE = "so8400in.so8400";
private static final String DLQ = ORIGINAL_QUEUE + ".dlq";
private static final String PARKING_LOT = ORIGINAL_QUEUE + ".parkingLot";
private static final String X_RETRIES_HEADER = "x-retries";
private static final String DELAY_EXCHANGE = "dlqReRouter";
public static void main(String[] args) throws Exception {
ConfigurableApplicationContext context = SpringApplication.run(ReRouteDlqApplication.class, args);
System.out.println("Hit enter to terminate");
System.in.read();
context.close();
}
@Autowired
private RabbitTemplate rabbitTemplate;
@RabbitListener(queues = DLQ)
public void rePublish(Message failedMessage) {
Map<String, Object> headers = failedMessage.getMessageProperties().getHeaders();
Integer retriesHeader = (Integer) headers.get(X_RETRIES_HEADER);
if (retriesHeader == null) {
retriesHeader = Integer.valueOf(0);
}
if (retriesHeader < 3) {
headers.put(X_RETRIES_HEADER, retriesHeader + 1);
headers.put("x-delay", 5000 * retriesHeader);
this.rabbitTemplate.send(DELAY_EXCHANGE, ORIGINAL_QUEUE, failedMessage);
}
else {
this.rabbitTemplate.send(PARKING_LOT, failedMessage);
}
}
@Bean
public DirectExchange delayExchange() {
DirectExchange exchange = new DirectExchange(DELAY_EXCHANGE);
exchange.setDelayed(true);
return exchange;
}
@Bean
public Binding bindOriginalToDelay() {
return BindingBuilder.bind(new Queue(ORIGINAL_QUEUE)).to(delayExchange()).with(ORIGINAL_QUEUE);
}
@Bean
public Queue parkingLot() {
return new Queue(PARKING_LOT);
}
}
15.6.2. Partitioned Destinations
With partitioned destinations, there is one DLQ for all partitions and we determine the original queue from the headers.
republishToDlq=false
When republishToDlq
is false
, RabbitMQ publishes the message to the DLX/DLQ with an x-death
header containing information about the original destination.
@SpringBootApplication
public class ReRouteDlqApplication {
private static final String ORIGINAL_QUEUE = "so8400in.so8400";
private static final String DLQ = ORIGINAL_QUEUE + ".dlq";
private static final String PARKING_LOT = ORIGINAL_QUEUE + ".parkingLot";
private static final String X_DEATH_HEADER = "x-death";
private static final String X_RETRIES_HEADER = "x-retries";
public static void main(String[] args) throws Exception {
ConfigurableApplicationContext context = SpringApplication.run(ReRouteDlqApplication.class, args);
System.out.println("Hit enter to terminate");
System.in.read();
context.close();
}
@Autowired
private RabbitTemplate rabbitTemplate;
@SuppressWarnings("unchecked")
@RabbitListener(queues = DLQ)
public void rePublish(Message failedMessage) {
Map<String, Object> headers = failedMessage.getMessageProperties().getHeaders();
Integer retriesHeader = (Integer) headers.get(X_RETRIES_HEADER);
if (retriesHeader == null) {
retriesHeader = Integer.valueOf(0);
}
if (retriesHeader < 3) {
headers.put(X_RETRIES_HEADER, retriesHeader + 1);
List<Map<String, ?>> xDeath = (List<Map<String, ?>>) headers.get(X_DEATH_HEADER);
String exchange = (String) xDeath.get(0).get("exchange");
List<String> routingKeys = (List<String>) xDeath.get(0).get("routing-keys");
this.rabbitTemplate.send(exchange, routingKeys.get(0), failedMessage);
}
else {
this.rabbitTemplate.send(PARKING_LOT, failedMessage);
}
}
@Bean
public Queue parkingLot() {
return new Queue(PARKING_LOT);
}
}
republishToDlq=true
When republishToDlq
is true
, the republishing recoverer adds the original exchange and routing key to headers.
@SpringBootApplication
public class ReRouteDlqApplication {
private static final String ORIGINAL_QUEUE = "so8400in.so8400";
private static final String DLQ = ORIGINAL_QUEUE + ".dlq";
private static final String PARKING_LOT = ORIGINAL_QUEUE + ".parkingLot";
private static final String X_RETRIES_HEADER = "x-retries";
private static final String X_ORIGINAL_EXCHANGE_HEADER = RepublishMessageRecoverer.X_ORIGINAL_EXCHANGE;
private static final String X_ORIGINAL_ROUTING_KEY_HEADER = RepublishMessageRecoverer.X_ORIGINAL_ROUTING_KEY;
public static void main(String[] args) throws Exception {
ConfigurableApplicationContext context = SpringApplication.run(ReRouteDlqApplication.class, args);
System.out.println("Hit enter to terminate");
System.in.read();
context.close();
}
@Autowired
private RabbitTemplate rabbitTemplate;
@RabbitListener(queues = DLQ)
public void rePublish(Message failedMessage) {
Map<String, Object> headers = failedMessage.getMessageProperties().getHeaders();
Integer retriesHeader = (Integer) headers.get(X_RETRIES_HEADER);
if (retriesHeader == null) {
retriesHeader = Integer.valueOf(0);
}
if (retriesHeader < 3) {
headers.put(X_RETRIES_HEADER, retriesHeader + 1);
String exchange = (String) headers.get(X_ORIGINAL_EXCHANGE_HEADER);
String originalRoutingKey = (String) headers.get(X_ORIGINAL_ROUTING_KEY_HEADER);
this.rabbitTemplate.send(exchange, originalRoutingKey, failedMessage);
}
else {
this.rabbitTemplate.send(PARKING_LOT, failedMessage);
}
}
@Bean
public Queue parkingLot() {
return new Queue(PARKING_LOT);
}
}
Appendices
Appendix A: Building
A.1. Basic Compile and Test
To build the source you will need to install JDK 1.7.
The build uses the Maven wrapper so you don’t have to install a specific version of Maven. To enable the tests for Redis, Rabbit, and Kafka bindings you should have those servers running before building. See below for more information on running the servers.
The main build command is
$ ./mvnw clean install
You can also add '-DskipTests' if you like, to avoid running the tests.
You can also install Maven (>=3.3.3) yourself and run the mvn command
in place of ./mvnw in the examples below. If you do that you also
might need to add -P spring if your local Maven settings do not
contain repository declarations for spring pre-release artifacts.
Be aware that you might need to increase the amount of memory
available to Maven by setting a MAVEN_OPTS environment variable with
a value like -Xmx512m -XX:MaxPermSize=128m . We try to cover this in
the .mvn configuration, so if you find you have to do it to make a
build succeed, please raise a ticket to get the settings added to
source control.
The projects that require middleware generally include a
docker-compose.yml
, so consider using
Docker Compose to run the middleware servers
in Docker containers. See the README in the
scripts demo
repository for specific instructions about the common cases of mongo,
rabbit and redis.
A.2. Documentation
There is a "full" profile that will generate documentation.
A.3. Working with the code
If you don’t have an IDE preference, we recommend that you use Spring Tool Suite or Eclipse when working with the code. We use the m2eclipse Eclipse plugin for Maven support. Other IDEs and tools should also work without issue.
A.3.1. Importing into eclipse with m2eclipse
We recommend the m2eclipse Eclipse plugin when working with Eclipse. If you don’t already have m2eclipse installed, it is available from the Eclipse Marketplace.
Unfortunately m2e does not yet support Maven 3.3, so once the projects
are imported into Eclipse you will also need to tell m2eclipse to use
the .settings.xml
file for the projects. If you do not do this you
may see many different errors related to the POMs in the
projects. Open your Eclipse preferences, expand the Maven
preferences, and select User Settings. In the User Settings field
click Browse and navigate to the Spring Cloud project you imported
selecting the .settings.xml
file in that project. Click Apply and
then OK to save the preference changes.
Alternatively you can copy the repository settings from .settings.xml into your own ~/.m2/settings.xml .
A.3.2. Importing into eclipse without m2eclipse
If you prefer not to use m2eclipse you can generate eclipse project metadata using the following command:
$ ./mvnw eclipse:eclipse
The generated eclipse projects can be imported by selecting import existing projects
from the file
menu.
Contributing
Spring Cloud is released under the non-restrictive Apache 2.0 license, and follows a very standard Github development process, using Github tracker for issues and merging pull requests into master. If you want to contribute even something trivial please do not hesitate, but follow the guidelines below.
A.4. Sign the Contributor License Agreement
Before we accept a non-trivial patch or pull request we will need you to sign the contributor’s agreement. Signing the contributor’s agreement does not grant anyone commit rights to the main repository, but it does mean that we can accept your contributions, and you will get an author credit if we do. Active contributors might be asked to join the core team, and given the ability to merge pull requests.
A.5. Code Conventions and Housekeeping
None of these is essential for a pull request, but they will all help. They can also be added after the original pull request but before a merge.
-
Use the Spring Framework code format conventions. If you use Eclipse you can import formatter settings using the
eclipse-code-formatter.xml
file from the Spring Cloud Build project. If using IntelliJ, you can use the Eclipse Code Formatter Plugin to import the same file. -
Make sure all new
.java
files have a simple Javadoc class comment with at least an @author
tag identifying you, and preferably at least a paragraph on what the class is for. -
Add the ASF license header comment to all new
.java
files (copy from existing files in the project) -
Add yourself as an
@author
to the .java files that you modify substantially (more than cosmetic changes). -
Add some Javadocs and, if you change the namespace, some XSD doc elements.
-
A few unit tests would help a lot as well — someone has to do it.
-
If no-one else is using your branch, please rebase it against the current master (or other target branch in the main project).
-
When writing a commit message please follow these conventions, if you are fixing an existing issue please add
Fixes gh-XXXX
at the end of the commit message (where XXXX is the issue number).