To use the RabbitMQ binder, add it to your Spring Cloud Stream application, using the following Maven coordinates:
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-stream-binder-rabbit</artifactId>
</dependency>
Alternatively, you can use the Spring Cloud Stream RabbitMQ Starter:
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-stream-rabbit</artifactId>
</dependency>
A simplified diagram of how the RabbitMQ binder operates can be seen below.

The RabbitMQ Binder implementation maps each destination to a TopicExchange (by default). For each consumer group, a Queue is bound to that TopicExchange. Each consumer instance has a corresponding RabbitMQ Consumer instance for its group's Queue.
For partitioned producers and consumers, the queues are suffixed with the partition index and use the partition index as the routing key.
For anonymous consumers (those with no group property), an auto-delete queue with a randomized unique name is used.
Using the optional autoBindDlq option, you can configure the binder to create and configure dead-letter queues (DLQs), along with a dead-letter exchange (DLX) and the routing infrastructure. By default, the dead letter queue has the name of the destination, appended with .dlq.
If retry is enabled (maxAttempts > 1), failed messages are delivered to the DLQ after retries are exhausted. If retry is disabled (maxAttempts = 1), you should set requeueRejected to false (the default) so that failed messages are routed to the DLQ instead of being requeued.
In addition, republishToDlq causes the binder to publish a failed message to the DLQ (instead of rejecting it); this enables additional information to be added to the message headers, such as the stack trace in the x-exception-stacktrace header. This option does not require retry to be enabled; you can republish a failed message after just one attempt. Starting with version 1.2, you can configure the delivery mode of republished messages; see the republishDeliveryMode property.
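For example, assuming a binding named input, a minimal sketch of enabling a DLQ with republishing might look like the following (the destination and group names are placeholders):

```
spring.cloud.stream.bindings.input.destination=myDestination
spring.cloud.stream.bindings.input.group=myGroup
spring.cloud.stream.rabbit.bindings.input.consumer.auto-bind-dlq=true
spring.cloud.stream.rabbit.bindings.input.consumer.republish-to-dlq=true
```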
Important: Setting requeueRejected to true (with republishToDlq=false) causes the message to be requeued and redelivered continually, which is likely not what you want unless the cause of the failure is transient. In general, it is better to enable retry within the binder by setting maxAttempts to greater than one, or to set republishToDlq to true.
See Section 17.3.1, “RabbitMQ Binder Properties” for more information about these properties.
The framework does not provide any standard mechanism to consume dead-letter messages (or to re-route them back to the primary queue). Some options are described in Section 17.6, “Dead-Letter Queue Processing”.
Note: When multiple RabbitMQ binders are used in a Spring Cloud Stream application, it is important to disable RabbitAutoConfiguration, to avoid the same configuration from RabbitAutoConfiguration being applied to the binders.
Starting with version 2.0, the RabbitMessageChannelBinder sets the RabbitTemplate.usePublisherConnection property to true so that non-transactional producers avoid deadlocks on consumers, which can occur if cached connections are blocked because of a memory alarm on the broker.
This section contains settings specific to the RabbitMQ Binder and bound channels.
For general binding configuration options and properties, please refer to the Spring Cloud Stream core documentation.
By default, the RabbitMQ binder uses Spring Boot's ConnectionFactory, and it therefore supports all Spring Boot configuration options for RabbitMQ. (For reference, consult the Spring Boot documentation.) RabbitMQ configuration options use the spring.rabbitmq prefix.
In addition to Spring Boot options, the RabbitMQ binder supports the following properties:
adminAddresses

A comma-separated list of RabbitMQ management plugin URLs. Only used when nodes contains more than one entry. Each entry in this list must have a corresponding entry in spring.rabbitmq.addresses. Only needed if you use a RabbitMQ cluster and wish to consume from the node that hosts the queue. See Queue Affinity and the LocalizedQueueConnectionFactory for more information.

Default: empty.
nodes

A comma-separated list of RabbitMQ node names. When there is more than one entry, used to locate the server address where a queue is located. Each entry in this list must have a corresponding entry in spring.rabbitmq.addresses. Only needed if you use a RabbitMQ cluster and wish to consume from the node that hosts the queue. See Queue Affinity and the LocalizedQueueConnectionFactory for more information.

Default: empty.
compressionLevel

The compression level for compressed bindings. See java.util.zip.Deflater.

Default: 1 (BEST_SPEED).
connectionNamePrefix

A connection name prefix used to name the connection(s) created by this binder. The name is this prefix followed by #n, where n increments each time a new connection is opened.

Default: none (Spring AMQP default).
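As a sketch, these binder-level properties are set with the spring.cloud.stream.rabbit.binder prefix (the values below are illustrative only):

```
spring.cloud.stream.rabbit.binder.connection-name-prefix=myBinder
spring.cloud.stream.rabbit.binder.compression-level=5
```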
The following properties are available for Rabbit consumers only and must be prefixed with spring.cloud.stream.rabbit.bindings.<channelName>.consumer..
acknowledgeMode

The acknowledge mode.

Default: AUTO.

autoBindDlq

Whether to automatically declare the DLQ and bind it to the binder DLX.

Default: false.
bindingRoutingKey

The routing key with which to bind the queue to the exchange (if bindQueue is true). For partitioned destinations, -<instanceIndex> is appended.

Default: #.

bindQueue

Whether to bind the queue to the destination exchange; set to false if you have set up your own infrastructure and have previously created and bound the queue.

Default: true.
deadLetterQueueName

The name of the DLQ.

Default: prefix+destination.dlq.

deadLetterExchange

A DLX to assign to the queue; relevant only if autoBindDlq is true.

Default: prefix+DLX.

deadLetterRoutingKey

A dead letter routing key to assign to the queue; relevant only if autoBindDlq is true.

Default: destination.
declareExchange

Whether to declare the exchange for the destination.

Default: true.

delayedExchange

Whether to declare the exchange as a Delayed Message Exchange; requires the delayed message exchange plugin on the broker. The x-delayed-type argument is set to the exchangeType.

Default: false.
dlqDeadLetterExchange

If a DLQ is declared, a DLX to assign to that queue.

Default: none.

dlqDeadLetterRoutingKey

If a DLQ is declared, a dead letter routing key to assign to that queue.

Default: none.

dlqExpires

How long (in milliseconds) before an unused dead letter queue is deleted.

Default: no expiration.

dlqLazy

Declare the dead letter queue with the x-queue-mode=lazy argument. See Lazy Queues. Consider using a policy instead of this setting, because using a policy allows changing the setting without deleting the queue.

Default: false.
dlqMaxLength

The maximum number of messages in the dead letter queue.

Default: no limit.

dlqMaxLengthBytes

The maximum number of total bytes in the dead letter queue from all messages.

Default: no limit.

dlqMaxPriority

The maximum priority of messages in the dead letter queue (0-255).

Default: none.

dlqTtl

The default time to live to apply to the dead letter queue when declared (in milliseconds).

Default: no limit.
durableSubscription

Whether the subscription should be durable. Only effective if group is also set.

Default: true.

exchangeAutoDelete

If declareExchange is true, whether the exchange should be auto-delete (removed after the last queue is removed).

Default: true.

exchangeDurable

If declareExchange is true, whether the exchange should be durable (survives broker restart).

Default: true.
exchangeType

The exchange type: direct, fanout, or topic for non-partitioned destinations; direct or topic for partitioned destinations.

Default: topic.

exclusive

Whether to create an exclusive consumer; concurrency should be 1 when this is true. Often used when strict ordering is required but a hot standby instance should take over after a failure. See recoveryInterval, which controls how often a standby instance attempts to consume.

Default: false.
expires

How long (in milliseconds) before an unused queue is deleted.

Default: no expiration.

failedDeclarationRetryInterval

The interval (in milliseconds) between attempts to consume from a queue if it is missing.

Default: 5000.

headerPatterns

Patterns for headers to be mapped from inbound messages.

Default: ['*'] (all headers).
lazy

Declare the queue with the x-queue-mode=lazy argument. See Lazy Queues. Consider using a policy instead of this setting, because using a policy allows changing the setting without deleting the queue.

Default: false.

maxConcurrency

The maximum number of consumers.

Default: 1.
maxLength

The maximum number of messages in the queue.

Default: no limit.

maxLengthBytes

The maximum number of total bytes in the queue from all messages.

Default: no limit.

maxPriority

The maximum priority of messages in the queue (0-255).

Default: none.

missingQueuesFatal

If the queue cannot be found, whether to treat the condition as fatal and stop the listener container. Defaults to false so that the container keeps trying to consume from the queue, for example when using a cluster and the node hosting a non-HA queue is down.

Default: false.

prefetch

The prefetch count.

Default: 1.
prefix

A prefix to be added to the name of the destination and queues.

Default: "" (empty string).

queueDeclarationRetries

The number of times to retry consuming from a queue if it is missing. Only relevant if missingQueuesFatal is true; otherwise, the container keeps retrying indefinitely.

Default: 3.
queueNameGroupOnly

When true, consume from a queue with a name equal to the group; otherwise, the queue name is destination.group. This is useful, for example, when using Spring Cloud Stream to consume from an existing RabbitMQ queue.

Default: false.

recoveryInterval

The interval between connection recovery attempts, in milliseconds.

Default: 5000.
requeueRejected

Whether delivery failures should be requeued when retry is disabled or republishToDlq is false.

Default: false.

republishDeliveryMode

When republishToDlq is true, specifies the delivery mode of the republished message.

Default: DeliveryMode.PERSISTENT.
republishToDlq

By default, messages that fail after retries are exhausted are rejected. If a dead-letter queue (DLQ) is configured, RabbitMQ routes the failed message (unchanged) to the DLQ. If set to true, the binder republishes failed messages to the DLQ with additional headers, including the exception message and stack trace from the cause of the final failure.

Default: false.

transacted

Whether to use transacted channels.

Default: false.

ttl

The default time to live to apply to the queue when declared (in milliseconds).

Default: no limit.

txSize

The number of deliveries between acks.

Default: 1.
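As a sketch, several of these consumer properties might be combined for a binding named input as follows (all names and values are illustrative only):

```
spring.cloud.stream.rabbit.bindings.input.consumer.acknowledge-mode=AUTO
spring.cloud.stream.rabbit.bindings.input.consumer.prefetch=10
spring.cloud.stream.rabbit.bindings.input.consumer.auto-bind-dlq=true
spring.cloud.stream.rabbit.bindings.input.consumer.ttl=60000
```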
The following properties are available for Rabbit producers only and must be prefixed with spring.cloud.stream.rabbit.bindings.<channelName>.producer..
autoBindDlq

Whether to automatically declare the DLQ and bind it to the binder DLX.

Default: false.

batchingEnabled

Whether to enable message batching by producers. Messages are batched into one message according to the following properties; refer to Batching for more information.

Default: false.

batchSize

The number of messages to buffer when batching is enabled.

Default: 100.

batchBufferLimit

The maximum buffer size when batching is enabled.

Default: 10000.

batchTimeout

The batch timeout when batching is enabled.

Default: 5000.
bindingRoutingKey

The routing key with which to bind the queue to the exchange (if bindQueue is true). Only applies to non-partitioned destinations. Only applies if requiredGroups are provided, and then only to those groups.

Default: #.

bindQueue

Whether to bind the queue to the destination exchange; set to false if you have set up your own infrastructure and have previously created and bound the queue. Only applies if requiredGroups are provided, and then only to those groups.

Default: true.
compress

Whether data should be compressed when sent.

Default: false.

deadLetterQueueName

The name of the DLQ. Only applies if requiredGroups are provided, and then only to those groups.

Default: prefix+destination.dlq.

deadLetterExchange

A DLX to assign to the queue; relevant only if autoBindDlq is true. Only applies if requiredGroups are provided, and then only to those groups.

Default: prefix+DLX.

deadLetterRoutingKey

A dead letter routing key to assign to the queue; relevant only if autoBindDlq is true. Only applies if requiredGroups are provided, and then only to those groups.

Default: destination.
declareExchange

Whether to declare the exchange for the destination.

Default: true.

delayExpression

A SpEL expression to evaluate the delay to apply to the message (the x-delay header); has no effect if the exchange is not a delayed message exchange.

Default: no x-delay header is set.

delayedExchange

Whether to declare the exchange as a Delayed Message Exchange; requires the delayed message exchange plugin on the broker. The x-delayed-type argument is set to the exchangeType.

Default: false.

deliveryMode

The delivery mode.

Default: PERSISTENT.
dlqDeadLetterExchange

If a DLQ is declared, a DLX to assign to that queue. Only applies if requiredGroups are provided, and then only to those groups.

Default: none.

dlqDeadLetterRoutingKey

If a DLQ is declared, a dead letter routing key to assign to that queue. Only applies if requiredGroups are provided, and then only to those groups.

Default: none.

dlqExpires

How long (in milliseconds) before an unused dead letter queue is deleted. Only applies if requiredGroups are provided, and then only to those groups.

Default: no expiration.

dlqLazy

Declare the dead letter queue with the x-queue-mode=lazy argument. See Lazy Queues. Consider using a policy instead of this setting, because using a policy allows changing the setting without deleting the queue. Only applies if requiredGroups are provided, and then only to those groups.

Default: false.

dlqMaxLength

The maximum number of messages in the dead letter queue. Only applies if requiredGroups are provided, and then only to those groups.

Default: no limit.

dlqMaxLengthBytes

The maximum number of total bytes in the dead letter queue from all messages. Only applies if requiredGroups are provided, and then only to those groups.

Default: no limit.

dlqMaxPriority

The maximum priority of messages in the dead letter queue (0-255). Only applies if requiredGroups are provided, and then only to those groups.

Default: none.

dlqTtl

The default time to live to apply to the dead letter queue when declared (in milliseconds). Only applies if requiredGroups are provided, and then only to those groups.

Default: no limit.
exchangeAutoDelete

If declareExchange is true, whether the exchange should be auto-delete (removed after the last queue is removed).

Default: true.

exchangeDurable

If declareExchange is true, whether the exchange should be durable (survives broker restart).

Default: true.

exchangeType

The exchange type: direct, fanout, or topic for non-partitioned destinations; direct or topic for partitioned destinations.

Default: topic.
expires

How long (in milliseconds) before an unused queue is deleted. Only applies if requiredGroups are provided, and then only to those groups.

Default: no expiration.

headerPatterns

Patterns for headers to be mapped to outbound messages.

Default: ['*'] (all headers).
lazy

Declare the queue with the x-queue-mode=lazy argument. See Lazy Queues. Consider using a policy instead of this setting, because using a policy allows changing the setting without deleting the queue. Only applies if requiredGroups are provided, and then only to those groups.

Default: false.

maxLength

The maximum number of messages in the queue. Only applies if requiredGroups are provided, and then only to those groups.

Default: no limit.

maxLengthBytes

The maximum number of total bytes in the queue from all messages. Only applies if requiredGroups are provided, and then only to those groups.

Default: no limit.

maxPriority

The maximum priority of messages in the queue (0-255). Only applies if requiredGroups are provided, and then only to those groups.

Default: none.
prefix

A prefix to be added to the name of the destination exchange.

Default: "" (empty string).

queueNameGroupOnly

When true, consume from a queue with a name equal to the group; otherwise, the queue name is destination.group. This is useful, for example, when using Spring Cloud Stream to consume from an existing RabbitMQ queue. Only applies if requiredGroups are provided, and then only to those groups.

Default: false.
routingKeyExpression

A SpEL expression to determine the routing key to use when publishing messages. For a fixed routing key, use a literal expression, such as routingKeyExpression='my.routingKey' in a properties file or routingKeyExpression: '''my.routingKey''' in a YAML file.

Default: destination, or destination-<partition> for partitioned destinations.

transacted

Whether to use transacted channels.

Default: false.

ttl

The default time to live to apply to the queue when declared (in milliseconds). Only applies if requiredGroups are provided, and then only to those groups.

Default: no limit.
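As a sketch, several of these producer properties might be combined for a binding named output as follows (all names and values are illustrative only):

```
spring.cloud.stream.rabbit.bindings.output.producer.delivery-mode=NON_PERSISTENT
spring.cloud.stream.rabbit.bindings.output.producer.compress=true
spring.cloud.stream.rabbit.bindings.output.producer.routing-key-expression='my.routingKey'
```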
Note: In the case of RabbitMQ, content type headers can be set by external applications. Spring Cloud Stream supports them as part of an extended internal protocol used for any type of transport, including transports (such as Kafka prior to 0.11) that do not natively support headers.
When retry is enabled within the binder, the listener container thread is suspended for any back-off periods that are configured. This might be important when strict ordering is required with a single consumer, but for other use cases it prevents other messages from being processed on that thread. An alternative to using binder retry is to set up dead lettering with a time to live on the dead-letter queue (DLQ), together with dead-letter configuration on the DLQ itself. See Section 17.3.1, “RabbitMQ Binder Properties” for more information about the properties discussed here. Example configuration to enable this feature:
- Set autoBindDlq to true. The binder creates a DLQ; you can optionally specify a name in deadLetterQueueName.
- Set dlqTtl to the back-off time you want to wait between redeliveries.
- Set dlqDeadLetterExchange to the default exchange. Expired messages from the DLQ are routed to the original queue, since the default deadLetterRoutingKey is the queue name (destination.group). Setting to the default exchange is achieved by setting the property with no value, as shown in the example below.

To force a message to be dead-lettered, either throw an AmqpRejectAndDontRequeueException, or set requeueRejected to true and throw any exception.
The loop continues without end, which is fine for transient problems, but you may want to give up after some number of attempts. Fortunately, RabbitMQ provides the x-death header, which allows you to determine how many cycles have occurred. To acknowledge a message after giving up, throw an ImmediateAcknowledgeAmqpException.
spring.cloud.stream.bindings.input.destination=myDestination
spring.cloud.stream.bindings.input.group=consumerGroup
#disable binder retries
spring.cloud.stream.bindings.input.consumer.max-attempts=1
#dlx/dlq setup
spring.cloud.stream.rabbit.bindings.input.consumer.auto-bind-dlq=true
spring.cloud.stream.rabbit.bindings.input.consumer.dlq-ttl=5000
spring.cloud.stream.rabbit.bindings.input.consumer.dlq-dead-letter-exchange=
This configuration creates a topic exchange myDestination, with the queue myDestination.consumerGroup bound to it with a wildcard routing key #. It creates a DLQ bound to a direct exchange DLX with routing key myDestination.consumerGroup. When messages are rejected, they are routed to the DLQ. After 5 seconds, the message expires and is routed to the original queue using the queue name as the routing key.
Spring Boot application.
@SpringBootApplication
@EnableBinding(Sink.class)
public class XDeathApplication {

    public static void main(String[] args) {
        SpringApplication.run(XDeathApplication.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void listen(String in, @Header(name = "x-death", required = false) Map<?,?> death) {
        if (death != null && death.get("count").equals(3L)) {
            // giving up - don't send to DLX
            throw new ImmediateAcknowledgeAmqpException("Failed after 4 attempts");
        }
        throw new AmqpRejectAndDontRequeueException("failed");
    }

}
Notice that the count property in the x-death header is a Long.
Starting with version 1.3, the binder unconditionally sends exceptions to an error channel for each consumer destination, and can be configured to send async producer send failures to an error channel too. See the section called “Message Channel Binders and Error Channels” for more information.
With RabbitMQ, there are two types of send failures: returned messages and negatively acknowledged publisher confirms. The latter is rare; quoting the RabbitMQ documentation, "[A nack] will only be delivered if an internal error occurs in the Erlang process responsible for a queue."
As well as enabling producer error channels, as described in the section called “Message Channel Binders and Error Channels”, the RabbitMQ binder only sends messages to the channels if the connection factory is appropriately configured:

ccf.setPublisherConfirms(true);
ccf.setPublisherReturns(true);

When using Spring Boot configuration for the connection factory, set these properties:

spring.rabbitmq.publisher-confirms
spring.rabbitmq.publisher-returns
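For example, in application.properties:

```
spring.rabbitmq.publisher-confirms=true
spring.rabbitmq.publisher-returns=true
```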
The payload of the ErrorMessage for a returned message is a ReturnedAmqpMessageException with the following properties:

- failedMessage - the spring-messaging Message<?> that failed to be sent.
- amqpMessage - the raw spring-amqp Message.
- replyCode - an integer value indicating the reason for the failure (for example, 312 - No route).
- replyText - a text value indicating the reason for the failure (for example, NO_ROUTE).
- exchange - the exchange to which the message was published.
- routingKey - the routing key used when the message was published.

For negatively acknowledged confirms, the payload is a NackedAmqpMessageException with the following properties:

- failedMessage - the spring-messaging Message<?> that failed to be sent.
- nackReason - a reason (if available; you may need to examine the broker logs for more information).

There is no automatic handling of these exceptions (such as sending to a dead-letter queue); you can consume these exceptions with your own Spring Integration flow.
Because it cannot be anticipated how users would want to dispose of dead-lettered messages, the framework does not provide any standard mechanism to handle them. If the reason for the dead-lettering is transient, you may wish to route the messages back to the original queue. However, if the problem is permanent, that could cause an infinite loop. The following spring-boot application is an example of how to route those messages back to the original queue but move them to a third "parking lot" queue after three attempts. The second example uses the RabbitMQ Delayed Message Exchange to introduce a delay to the requeued message; in this example, the delay increases with each attempt. These examples use a @RabbitListener to receive messages from the DLQ; you could also use RabbitTemplate.receive() in a batch process.
The examples assume the original destination is so8400in and the consumer group is so8400. The first two examples are for when the destination is not partitioned.
@SpringBootApplication
public class ReRouteDlqApplication {

    private static final String ORIGINAL_QUEUE = "so8400in.so8400";

    private static final String DLQ = ORIGINAL_QUEUE + ".dlq";

    private static final String PARKING_LOT = ORIGINAL_QUEUE + ".parkingLot";

    private static final String X_RETRIES_HEADER = "x-retries";

    public static void main(String[] args) throws Exception {
        ConfigurableApplicationContext context = SpringApplication.run(ReRouteDlqApplication.class, args);
        System.out.println("Hit enter to terminate");
        System.in.read();
        context.close();
    }

    @Autowired
    private RabbitTemplate rabbitTemplate;

    @RabbitListener(queues = DLQ)
    public void rePublish(Message failedMessage) {
        Integer retriesHeader = (Integer) failedMessage.getMessageProperties().getHeaders().get(X_RETRIES_HEADER);
        if (retriesHeader == null) {
            retriesHeader = Integer.valueOf(0);
        }
        if (retriesHeader < 3) {
            failedMessage.getMessageProperties().getHeaders().put(X_RETRIES_HEADER, retriesHeader + 1);
            this.rabbitTemplate.send(ORIGINAL_QUEUE, failedMessage);
        }
        else {
            this.rabbitTemplate.send(PARKING_LOT, failedMessage);
        }
    }

    @Bean
    public Queue parkingLot() {
        return new Queue(PARKING_LOT);
    }

}
@SpringBootApplication
public class ReRouteDlqApplication {

    private static final String ORIGINAL_QUEUE = "so8400in.so8400";

    private static final String DLQ = ORIGINAL_QUEUE + ".dlq";

    private static final String PARKING_LOT = ORIGINAL_QUEUE + ".parkingLot";

    private static final String X_RETRIES_HEADER = "x-retries";

    private static final String DELAY_EXCHANGE = "dlqReRouter";

    public static void main(String[] args) throws Exception {
        ConfigurableApplicationContext context = SpringApplication.run(ReRouteDlqApplication.class, args);
        System.out.println("Hit enter to terminate");
        System.in.read();
        context.close();
    }

    @Autowired
    private RabbitTemplate rabbitTemplate;

    @RabbitListener(queues = DLQ)
    public void rePublish(Message failedMessage) {
        Map<String, Object> headers = failedMessage.getMessageProperties().getHeaders();
        Integer retriesHeader = (Integer) headers.get(X_RETRIES_HEADER);
        if (retriesHeader == null) {
            retriesHeader = Integer.valueOf(0);
        }
        if (retriesHeader < 3) {
            headers.put(X_RETRIES_HEADER, retriesHeader + 1);
            headers.put("x-delay", 5000 * retriesHeader);
            this.rabbitTemplate.send(DELAY_EXCHANGE, ORIGINAL_QUEUE, failedMessage);
        }
        else {
            this.rabbitTemplate.send(PARKING_LOT, failedMessage);
        }
    }

    @Bean
    public DirectExchange delayExchange() {
        DirectExchange exchange = new DirectExchange(DELAY_EXCHANGE);
        exchange.setDelayed(true);
        return exchange;
    }

    @Bean
    public Binding bindOriginalToDelay() {
        return BindingBuilder.bind(new Queue(ORIGINAL_QUEUE)).to(delayExchange()).with(ORIGINAL_QUEUE);
    }

    @Bean
    public Queue parkingLot() {
        return new Queue(PARKING_LOT);
    }

}
With partitioned destinations, there is one DLQ for all partitions, and we determine the original queue from the headers. When republishToDlq is false, RabbitMQ publishes the message to the DLX/DLQ with an x-death header containing information about the original destination.
@SpringBootApplication
public class ReRouteDlqApplication {

    private static final String ORIGINAL_QUEUE = "so8400in.so8400";

    private static final String DLQ = ORIGINAL_QUEUE + ".dlq";

    private static final String PARKING_LOT = ORIGINAL_QUEUE + ".parkingLot";

    private static final String X_DEATH_HEADER = "x-death";

    private static final String X_RETRIES_HEADER = "x-retries";

    public static void main(String[] args) throws Exception {
        ConfigurableApplicationContext context = SpringApplication.run(ReRouteDlqApplication.class, args);
        System.out.println("Hit enter to terminate");
        System.in.read();
        context.close();
    }

    @Autowired
    private RabbitTemplate rabbitTemplate;

    @SuppressWarnings("unchecked")
    @RabbitListener(queues = DLQ)
    public void rePublish(Message failedMessage) {
        Map<String, Object> headers = failedMessage.getMessageProperties().getHeaders();
        Integer retriesHeader = (Integer) headers.get(X_RETRIES_HEADER);
        if (retriesHeader == null) {
            retriesHeader = Integer.valueOf(0);
        }
        if (retriesHeader < 3) {
            headers.put(X_RETRIES_HEADER, retriesHeader + 1);
            List<Map<String, ?>> xDeath = (List<Map<String, ?>>) headers.get(X_DEATH_HEADER);
            String exchange = (String) xDeath.get(0).get("exchange");
            List<String> routingKeys = (List<String>) xDeath.get(0).get("routing-keys");
            this.rabbitTemplate.send(exchange, routingKeys.get(0), failedMessage);
        }
        else {
            this.rabbitTemplate.send(PARKING_LOT, failedMessage);
        }
    }

    @Bean
    public Queue parkingLot() {
        return new Queue(PARKING_LOT);
    }

}
When republishToDlq is true, the republishing recoverer adds the original exchange and routing key to headers.
@SpringBootApplication
public class ReRouteDlqApplication {

    private static final String ORIGINAL_QUEUE = "so8400in.so8400";

    private static final String DLQ = ORIGINAL_QUEUE + ".dlq";

    private static final String PARKING_LOT = ORIGINAL_QUEUE + ".parkingLot";

    private static final String X_RETRIES_HEADER = "x-retries";

    private static final String X_ORIGINAL_EXCHANGE_HEADER = RepublishMessageRecoverer.X_ORIGINAL_EXCHANGE;

    private static final String X_ORIGINAL_ROUTING_KEY_HEADER = RepublishMessageRecoverer.X_ORIGINAL_ROUTING_KEY;

    public static void main(String[] args) throws Exception {
        ConfigurableApplicationContext context = SpringApplication.run(ReRouteDlqApplication.class, args);
        System.out.println("Hit enter to terminate");
        System.in.read();
        context.close();
    }

    @Autowired
    private RabbitTemplate rabbitTemplate;

    @RabbitListener(queues = DLQ)
    public void rePublish(Message failedMessage) {
        Map<String, Object> headers = failedMessage.getMessageProperties().getHeaders();
        Integer retriesHeader = (Integer) headers.get(X_RETRIES_HEADER);
        if (retriesHeader == null) {
            retriesHeader = Integer.valueOf(0);
        }
        if (retriesHeader < 3) {
            headers.put(X_RETRIES_HEADER, retriesHeader + 1);
            String exchange = (String) headers.get(X_ORIGINAL_EXCHANGE_HEADER);
            String originalRoutingKey = (String) headers.get(X_ORIGINAL_ROUTING_KEY_HEADER);
            this.rabbitTemplate.send(exchange, originalRoutingKey, failedMessage);
        }
        else {
            this.rabbitTemplate.send(PARKING_LOT, failedMessage);
        }
    }

    @Bean
    public Queue parkingLot() {
        return new Queue(PARKING_LOT);
    }

}
RabbitMQ does not support partitioning natively. Sometimes, it is advantageous to send data to specific partitions, for example when you want to strictly order message processing, so that all messages for a particular customer go to the same partition. The RabbitMessageChannelBinder provides partitioning by binding a queue for each partition to the destination exchange. The following examples show how to configure the producer and consumer side:
Producer.
@SpringBootApplication
@EnableBinding(Source.class)
public class RabbitPartitionProducerApplication {

    private static final Random RANDOM = new Random(System.currentTimeMillis());

    private static final String[] data = new String[] {
            "foo1", "bar1", "qux1",
            "foo2", "bar2", "qux2",
            "foo3", "bar3", "qux3",
            "foo4", "bar4", "qux4",
    };

    public static void main(String[] args) {
        new SpringApplicationBuilder(RabbitPartitionProducerApplication.class)
            .web(false)
            .run(args);
    }

    @InboundChannelAdapter(channel = Source.OUTPUT, poller = @Poller(fixedRate = "5000"))
    public Message<?> generate() {
        String value = data[RANDOM.nextInt(data.length)];
        System.out.println("Sending: " + value);
        return MessageBuilder.withPayload(value)
                .setHeader("partitionKey", value)
                .build();
    }

}
application.yml.
spring:
  cloud:
    stream:
      bindings:
        output:
          destination: partitioned.destination
          producer:
            partitioned: true
            partition-key-expression: headers['partitionKey']
            partition-count: 2
            required-groups:
            - myGroup
Note: The preceding configuration uses the default partitioning (key.hashCode() % partitionCount). This may or may not provide a suitably balanced algorithm, depending on the key values; you can override it by using the partitionSelectorExpression or partitionSelectorClass properties. The required-groups property is needed only if you require the consumer queues to be provisioned when the producer is deployed; otherwise, any messages sent to a partition are lost until the corresponding consumer is deployed.
This configuration provisions a topic exchange, with a queue for each partition bound to that exchange (the partition index is appended to the binding's routing key).
Consumer.
@SpringBootApplication
@EnableBinding(Sink.class)
public class RabbitPartitionConsumerApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(RabbitPartitionConsumerApplication.class)
            .web(false)
            .run(args);
    }

    @StreamListener(Sink.INPUT)
    public void listen(@Payload String in, @Header(AmqpHeaders.CONSUMER_QUEUE) String queue) {
        System.out.println(in + " received from queue " + queue);
    }

}
application.yml.
spring:
  cloud:
    stream:
      bindings:
        input:
          destination: partitioned.destination
          group: myGroup
          consumer:
            partitioned: true
            instance-index: 0
Important: There must be at least one consumer per partition; each consumer instance's instance-index indicates which partition it consumes.