Spring Cloud Stream App Starters Reference Guide


Sabby Anandan, Artem Bilan, Marius Bogoevici, Eric Bottard, Mark Fisher, Ilayaperumal Gopinathan, Gunnar Hillert, Mark Pollack, Patrick Peralta, Glenn Renfro, Gary Russell, Thomas Risberg, David Turanski, Janne Valkealahti, Soby Chacko, Christian Tzolov


Copies of this document may be made for your own use and for distribution to others, provided that you do not charge any fee for such copies and further provided that each copy contains this Copyright Notice, whether distributed in print or electronically.

Table of Contents

I. Reference Guide
1. Introduction
1.1. Pre-built applications
1.2. Classification
1.2.1. Maven and Docker access
1.2.2. Building the Artifacts
1.3. Patching Pre-built Applications
II. Starters
2. Sources
2.1. Http Source
2.1.1. Payload:
2.1.2. Options
2.2. JDBC Source
2.2.1. Payload
2.2.2. Options
2.3. Time Source
2.3.1. Options
3. Processors
4. Sinks
4.1. Log Sink
4.1.1. Options
III. Appendices
A. Contributing
A.1. Sign the Contributor License Agreement
A.2. Code Conventions and Housekeeping

Part I. Reference Guide

This section provides a detailed overview of the out-of-the-box Spring Cloud Stream Applications. It assumes familiarity with general Spring Cloud Stream concepts, which can be found in the Spring Cloud Stream reference documentation.

1. Introduction

These Spring Cloud Stream Applications provide you with out-of-the-box Spring Cloud Stream utility applications that you can run independently or with Spring Cloud Data Flow.

They include:

  • connectors (sources, processors, and sinks) for a variety of middleware technologies including message brokers, storage (relational, non-relational, filesystem);
  • adapters for various network protocols;
  • generic processors that can be customized via Spring Expression Language (SpEL) or scripting.

You can find a detailed listing of all the applications and their options in the corresponding section of this guide.

1.1 Pre-built applications

Out-of-the-box applications are Spring Boot applications that add a Binder implementation on top of the basic logic of the app (for example, a function), resulting in a fully functional uber-jar. These uber-jars include the minimal code required to run standalone. For each function application, the project provides a prebuilt version for the Apache Kafka and RabbitMQ Binders.
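As a sketch, such an uber-jar can be run standalone with java -jar; the jar name and connection property below are illustrative, assuming the RabbitMQ binder variant of the log sink:

```shell
# Illustrative: run the prebuilt log sink (RabbitMQ binder) standalone.
# Substitute the jar version you downloaded and your own broker host.
java -jar log-sink-rabbit-2.0.2.RELEASE.jar --spring.rabbitmq.host=localhost
```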


Prebuilt applications are generated by the stream apps generator Maven plugin.

1.2 Classification

Based on their target application type, they can be either:

  • a source that connects to an external resource to poll or receive data, which is published to the default "output" channel;
  • a processor that receives data from the default "input" channel, processes it, and sends the result to the default "output" channel;
  • a sink that receives data from the default "input" channel and sends it to an external resource.

The prebuilt applications follow a naming convention: <functionality>-<type>-<binder>. For example, rabbit-sink-kafka is a RabbitMQ sink that uses the Kafka binder (that is, it runs with Kafka as the middleware).

1.2.1 Maven and Docker access

The core functionality of the apps is available as functions. They are provided as Maven artifacts in the Spring repositories. You can add them as dependencies to your application, as follows:
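For example, a dependency entry might look like the following; the coordinates here are illustrative, so check the Spring repositories for the actual groupId, artifactId, and version of the function you need:

```xml
<!-- Illustrative coordinates for a function artifact; verify the actual
     groupId, artifactId, and version in the Spring repositories. -->
<dependency>
  <groupId>org.springframework.cloud.fn</groupId>
  <artifactId>log-consumer</artifactId>
  <version>1.0.0.RELEASE</version>
</dependency>
```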


Prebuilt applications are available as Maven artifacts. You can download the executable jar artifacts from the Spring Maven repositories. The root directory of the Maven repository that hosts release versions is repo.spring.io/release/org/springframework/cloud/stream/app/. From there you can navigate to the latest released version of a specific app, for example log-sink-rabbit-2.0.2.RELEASE.jar. Use the Milestone and Snapshot repository locations for Milestone and Snapshot executable jar artifacts.

The Docker versions of the applications are available on Docker Hub, at hub.docker.com/r/springcloudstream/. Naming and versioning follow the same general conventions as Maven, e.g.

docker pull springcloudstream/cassandra-sink-kafka

will pull the latest Docker image of the Cassandra sink with the Kafka binder.
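The pulled image can then be run like any other Spring Boot based Docker image; the environment variable below is illustrative and assumes a Kafka broker reachable at kafka:9092:

```shell
# Illustrative: run the Cassandra sink image, pointing the Kafka binder
# at your own broker. Other application properties can be passed the
# same way, as environment variables.
docker run -e SPRING_CLOUD_STREAM_KAFKA_BINDER_BROKERS=kafka:9092 \
  springcloudstream/cassandra-sink-kafka
```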

1.2.2 Building the Artifacts

You can build the project and generate the artifacts (including the prebuilt applications) on your own. This is useful if you want to deploy the artifacts locally or add additional features.

First, you need to generate the prebuilt applications. This is done by running the application generation Maven plugin, which is invoked as part of the Maven build. If you are at the root of the repository, stream-applications (github.com/spring-cloud-stream-app-starters/stream-applications), a full Maven build generates all of the binder-based apps. If you are only interested in a particular app, cd into the corresponding module and invoke the build from there.

mvn clean package

This generates the apps. By default, the generated projects are placed under a directory called apps. There you can find the binder-based applications, which you can then build and run.
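For example, assuming the build generated a time-source-kafka module (the module name here is illustrative):

```shell
# Build and run one of the generated binder-based apps.
cd apps/time-source-kafka
mvn clean package
java -jar target/time-source-kafka-*.jar
```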

1.3 Patching Pre-built Applications

If you’re looking to patch the pre-built applications to accommodate the addition of new dependencies, you can use the following example as a reference. Let’s review the steps for adding the MySQL driver to the jdbc-sink application.

  • Clone the GitHub repository github.com/spring-cloud-stream-app-starters/stream-applications
  • Open it in an IDE and make the necessary changes in the right generator project. The repository is organized into source-apps-generator, sink-apps-generator, and processor-apps-generator. Find the module that you want to patch and make the changes. For example, add the following to the generator plugin’s configuration in the pom.xml.
  • Generate the binder based apps as specified above and build the apps.
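As a sketch, the dependency entry mentioned in the second step might look like the following; the version shown is illustrative, so use a current release of the driver:

```xml
<!-- Illustrative: MySQL JDBC driver added for the generated jdbc-sink
     apps via the generator plugin's configuration; adjust the version. -->
<dependency>
  <groupId>mysql</groupId>
  <artifactId>mysql-connector-java</artifactId>
  <version>8.0.28</version>
</dependency>
```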

Part II. Starters

2. Sources

2.1 Http Source

A source application that listens for HTTP requests and emits the body as a message payload. If the Content-Type matches text/* or application/json, the payload will be a String, otherwise the payload will be a byte array.

2.1.1 Payload:

If content type matches text/* or application/json

  • String

If content type does not match text/* or application/json

  • byte array

2.1.2 Options

The http source supports the following configuration properties:

http.cors.allow-credentials: Whether the browser should include any cookies associated with the domain of the request being annotated. (Boolean, default: <none>)
http.cors.allowed-headers: List of request headers that can be used during the actual request. (String[], default: <none>)
http.cors.allowed-origins: List of allowed origins, e.g. "http://domain1.com". (String[], default: <none>)
http.mapped-request-headers: Headers that will be mapped. (String[], default: <none>)
http.path-pattern: HTTP endpoint path mapping. (String, default: /)
server.port: Server HTTP port. (Integer, default: 8080)
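As an illustration, a minimal configuration might look like the following application.properties fragment (the property names and values here are illustrative and may vary by version):

```properties
# Illustrative http source configuration
server.port=9000
http.path-pattern=/data
http.cors.allowed-origins=http://domain1.com
```

A request can then be posted to the source with, for example, curl -X POST -H "Content-Type: application/json" -d '{"name":"test"}' localhost:9000/data.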

2.2 JDBC Source

This source polls data from an RDBMS. It is fully based on DataSourceAutoConfiguration, so refer to the Spring Boot JDBC support documentation for more information.

2.2.1 Payload

  • Map<String, Object> when jdbc.split == true (default) and List<Map<String, Object>> otherwise

2.2.2 Options

The jdbc source has the following options:

jdbc.max-rows-per-poll: Maximum number of rows to process per query. (Integer, default: 0)
jdbc.query: The query to use to select data. (String, default: <none>)
jdbc.split: Whether to split the SQL result as individual messages. (Boolean, default: true)
jdbc.update: An SQL update statement to execute for marking polled messages as 'seen'. (String, default: <none>)
trigger.cron: Cron expression value for the Cron Trigger. (String, default: <none>)
trigger.fixed-delay: Fixed delay for default poller. (Long, default: 1000)
trigger.initial-delay: Initial delay for periodic triggers. (Integer, default: 0)
trigger.max-messages: Maximum messages per poll for the default poller. (Long, default: 1)
spring.datasource.data: Data (DML) script resource references. (List<String>, default: <none>)
spring.datasource.driver-class-name: Fully qualified name of the JDBC driver. Auto-detected based on the URL by default. (String, default: <none>)
spring.datasource.initialization-mode: Initialize the datasource with available DDL and DML scripts. (DataSourceInitializationMode, default: embedded, possible values: ALWAYS,EMBEDDED,NEVER)
spring.datasource.password: Login password of the database. (String, default: <none>)
spring.datasource.schema: Schema (DDL) script resource references. (List<String>, default: <none>)
spring.datasource.url: JDBC URL of the database. (String, default: <none>)
spring.datasource.username: Login username of the database. (String, default: <none>)
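As an illustration, a configuration that polls a table every five seconds and marks the polled rows as seen might look like the following (the property names and values here are illustrative):

```properties
# Illustrative jdbc source configuration
spring.datasource.url=jdbc:mariadb://localhost:3306/test
spring.datasource.username=user
spring.datasource.password=secret
jdbc.query=select id, name from people where tag is NULL
jdbc.update=update people set tag='1' where id in (:id)
trigger.fixed-delay=5000
```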

2.3 Time Source

The time source emits a String containing the current time at a regular, configurable interval.

2.3.1 Options

The time source has the following options:

trigger.cron: Cron expression value for the Cron Trigger. (String, default: <none>)
trigger.fixed-delay: Fixed delay for default poller. (Long, default: 1000)
trigger.initial-delay: Initial delay for periodic triggers. (Integer, default: 0)
trigger.max-messages: Maximum messages per poll for the default poller. (Long, default: 1)
trigger.date-format: Format for the date value. (String, default: MM/dd/yy HH:mm:ss)
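The default date format is a java.text.SimpleDateFormat pattern. As a rough illustration of the layout it produces, the equivalent POSIX date invocation is:

```shell
# Print the current time in the same layout as the time source's
# default pattern, MM/dd/yy HH:mm:ss (illustration only).
date +"%m/%d/%y %H:%M:%S"
```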

3. Processors


4. Sinks


4.1 Log Sink

The log sink uses the application logger to output the data for inspection.

Note that the log sink uses a type-less handler, which affects how the actual logging is performed. If the content type is textual, the raw payload bytes are converted to a String; otherwise, the raw bytes are logged. See the user guide for more information.

4.1.1 Options

The log sink has the following options:

log.expression: A SpEL expression (against the incoming message) to evaluate as the logged message. (String, default: payload)
log.level: The level at which to log messages. (Level, default: <none>, possible values: FATAL,ERROR,WARN,INFO,DEBUG,TRACE)
log.name: The name of the logger to use. (String, default: <none>)
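As an illustration, the following configuration logs only a header value at WARN level (the property values, including the header name, are illustrative):

```properties
# Illustrative log sink configuration
log.expression=headers['correlationId']
log.level=WARN
log.name=my-stream-logger
```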

Part III. Appendices

Appendix A. Contributing

Spring Cloud is released under the non-restrictive Apache 2.0 license and follows a very standard GitHub development process, using the GitHub tracker for issues and merging pull requests into master. If you want to contribute, even something trivial, please do not hesitate, but follow the guidelines below.

A.1 Sign the Contributor License Agreement

Before we accept a non-trivial patch or pull request we will need you to sign the contributor’s agreement. Signing the contributor’s agreement does not grant anyone commit rights to the main repository, but it does mean that we can accept your contributions, and you will get an author credit if we do. Active contributors might be asked to join the core team, and given the ability to merge pull requests.

A.2 Code Conventions and Housekeeping

None of these is essential for a pull request, but they will all help. They can also be added after the original pull request but before a merge.

  • Use the Spring Framework code format conventions. If you use Eclipse you can import formatter settings using the eclipse-code-formatter.xml file from the Spring Cloud Build project. If using IntelliJ, you can use the Eclipse Code Formatter Plugin to import the same file.
  • Make sure all new .java files have a simple Javadoc class comment with at least an @author tag identifying you, and preferably at least a paragraph on what the class is for.
  • Add the ASF license header comment to all new .java files (copy from existing files in the project)
  • Add yourself as an @author to the .java files that you modify substantially (more than cosmetic changes).
  • Add some Javadocs and, if you change the namespace, some XSD doc elements.
  • A few unit tests would help a lot as well — someone has to do it.
  • If no-one else is using your branch, please rebase it against the current master (or other target branch in the main project).
  • When writing a commit message, please follow these conventions. If you are fixing an existing issue, please add Fixes gh-XXXX at the end of the commit message (where XXXX is the issue number).