Register a Stream App with the App Registry using the Spring Cloud Data Flow Shell `app register` command. You must provide a unique name, an application type, and a URI that can be resolved to the app artifact. For the type, specify "source", "processor", or "sink". Here are a few examples:
```
dataflow:>app register --name mysource --type source --uri maven://com.example:mysource:0.0.1-SNAPSHOT
dataflow:>app register --name myprocessor --type processor --uri file:///Users/example/myprocessor-1.2.3.jar
dataflow:>app register --name mysink --type sink --uri http://example.com/mysink-2.0.1.jar
```
When providing a URI with the `maven` scheme, the format should conform to the following:

```
maven://<groupId>:<artifactId>[:<extension>[:<classifier>]]:<version>
```
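The `extension` and `classifier` parts are optional. As an illustration, here is a coordinate that spells out all of the parts; it reuses the `jar` extension and `metadata` classifier that appear later in this section (the version shown is just an example):

```
maven://org.springframework.cloud.stream.app:http-source-rabbit:jar:metadata:1.2.1.BUILD-SNAPSHOT
```

When the extension is omitted, it defaults to `jar`.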
For example, if you would like to register the snapshot versions of the `http` and `log` applications built with the RabbitMQ binder, you could do the following:

```
dataflow:>app register --name http --type source --uri maven://org.springframework.cloud.stream.app:http-source-rabbit:1.2.1.BUILD-SNAPSHOT
dataflow:>app register --name log --type sink --uri maven://org.springframework.cloud.stream.app:log-sink-rabbit:1.2.1.BUILD-SNAPSHOT
```
If you would like to register multiple apps at one time, you can store them in a properties file where the keys are formatted as `<type>.<name>` and the values are the URIs.
For example, to register the snapshot versions of the `http` and `log` applications built with the RabbitMQ binder, you could have the following in a properties file (e.g. `stream-apps.properties`):

```
source.http=maven://org.springframework.cloud.stream.app:http-source-rabbit:1.2.1.BUILD-SNAPSHOT
sink.log=maven://org.springframework.cloud.stream.app:log-sink-rabbit:1.2.1.BUILD-SNAPSHOT
```
Then, to import the apps in bulk, use the `app import` command and provide the location of the properties file via `--uri`:

```
dataflow:>app import --uri file:///<YOUR_FILE_LOCATION>/stream-apps.properties
```
For convenience, static files with application URIs (for both Maven and Docker) are available for all the out-of-the-box stream and task/batch app starters. You can point to one of these files and import all of the application URIs in bulk. Otherwise, as explained above, you can register the apps individually, or use your own custom property file containing only the required application URIs. It is recommended, however, to maintain a "focused" list of desired application URIs in a custom property file.
List of available Stream Application Starters:

| Artifact Type | Stable Release | SNAPSHOT Release |
|---|---|---|
| RabbitMQ + Maven | bit.ly/Bacon-RELEASE-stream-applications-rabbit-maven | bit.ly/Bacon-BUILD-SNAPSHOT-stream-applications-rabbit-maven |
| RabbitMQ + Docker | bit.ly/Bacon-RELEASE-stream-applications-rabbit-docker | N/A |
| Kafka 0.9 + Maven | bit.ly/Bacon-RELEASE-stream-applications-kafka-09-maven | bit.ly/Bacon-BUILD-SNAPSHOT-stream-applications-kafka-09-maven |
| Kafka 0.9 + Docker | bit.ly/Bacon-RELEASE-stream-applications-kafka-09-docker | N/A |
| Kafka 0.10 + Maven | bit.ly/Bacon-RELEASE-stream-applications-kafka-10-maven | bit.ly/Bacon-BUILD-SNAPSHOT-stream-applications-kafka-10-maven |
| Kafka 0.10 + Docker | bit.ly/Bacon-RELEASE-stream-applications-kafka-10-docker | N/A |
List of available Task Application Starters:

| Artifact Type | Stable Release | SNAPSHOT Release |
|---|---|---|
| Maven | | |
| Docker | | N/A |
You can find more information about the available task starters in the Task App Starters Project Page and related reference documentation. For more information about the available stream starters, look at the Stream App Starters Project Page and related reference documentation.
As an example, if you would like to register all out-of-the-box stream applications built with the RabbitMQ binder in bulk, you can do so with the following command:

```
dataflow:>app import --uri http://bit.ly/Bacon-RELEASE-stream-applications-rabbit-maven
```
You can also pass the `--local` option (which is `true` by default) to indicate whether the properties file location should be resolved within the shell process itself. If the location should be resolved from the Data Flow Server process, specify `--local false`.
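For example, the following sketch (reusing the placeholder path from above) imports a properties file that lives on the Data Flow Server's filesystem rather than the shell's:

```
dataflow:>app import --uri file:///<YOUR_FILE_LOCATION>/stream-apps.properties --local false
```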
**Warning:** When using either `app register` or `app import`, if an app is already registered with the provided name and type, it will not be overridden by default. If you would like to override the pre-existing app coordinates, then include the `--force` option.

Note however that once downloaded, applications may be cached locally on the Data Flow server, based on the resource location. If the resource location doesn't change (even though the actual resource bytes may be different), then it won't be re-downloaded. When using `maven://` resources on the other hand, using a constant location may still circumvent caching (if using `-SNAPSHOT` versions).

Moreover, if a stream is already deployed and using some version of a registered app, then (forcibly) re-registering a different app will have no effect until the stream is deployed anew.
**Note:** In some cases the Resource is resolved on the server side, whereas in others the URI will be passed to a runtime container instance where it is resolved. Consult the specific documentation of each Data Flow Server for more detail.
Stream and Task applications are Spring Boot applications which are aware of many common application properties (see Section 24.3, “Common application properties”), e.g. `server.port`, but also families of properties such as those with the prefix `spring.jmx` and `logging`. When creating your own application, it is desirable to whitelist properties so that the shell and the UI can display them first as primary properties when presenting options via TAB completion or in drop-down boxes.
To whitelist application properties, create a file named `spring-configuration-metadata-whitelist.properties` in the `META-INF` resource directory. There are two property keys that can be used inside this file. The first key is `configuration-properties.classes`, whose value is a comma-separated list of fully qualified `@ConfigurationProperties` class names. The second key is `configuration-properties.names`, whose value is a comma-separated list of property names. This can contain the full name of a property, such as `server.port`, or a partial name to whitelist a category of property names, e.g. `spring.jmx`.
The Spring Cloud Stream application starters are a good place to look for examples of usage. Here is a simple example of the file sink’s `spring-configuration-metadata-whitelist.properties` file:

```
configuration-properties.classes=org.springframework.cloud.stream.app.file.sink.FileSinkProperties
```
If we also wanted to add `server.port` to the whitelist, it would look like this:

```
configuration-properties.classes=org.springframework.cloud.stream.app.file.sink.FileSinkProperties
configuration-properties.names=server.port
```
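To make the connection concrete, here is a minimal, hypothetical sketch of the kind of `@ConfigurationProperties` class such an entry points to. The `com.example.sink` package, `mysink` prefix, and `directory` property are invented for illustration, not taken from the actual file sink:

```java
package com.example.sink;

import org.springframework.boot.context.properties.ConfigurationProperties;

// Hypothetical properties class for illustration; a real starter class such as
// FileSinkProperties follows the same pattern. The prefix ("mysink") becomes
// the leading segment of every property name declared here.
@ConfigurationProperties("mysink")
public class MySinkProperties {

    /**
     * The directory to write received messages to.
     */
    private String directory = "/tmp/output";

    public String getDirectory() {
        return this.directory;
    }

    public void setDirectory(String directory) {
        this.directory = directory;
    }
}
```

Whitelisting `com.example.sink.MySinkProperties` under `configuration-properties.classes` would then cause `mysink.directory` to show up as a primary option in TAB completion and drop-down boxes.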
**Important:** Make sure to add `spring-boot-configuration-processor` as an optional dependency in order to generate the configuration metadata file for the properties:

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-configuration-processor</artifactId>
    <optional>true</optional>
</dependency>
```
You can go a step further in the process of describing the main properties that your stream or task app supports by creating a so-called metadata companion artifact. This simple jar file contains only the Spring Boot JSON file for configuration properties metadata, as well as the whitelisting file described in the previous section.
Here are the contents of such an artifact, for the canonical `log` sink:

```
$ jar tvf log-sink-rabbit-1.2.1.BUILD-SNAPSHOT-metadata.jar
373848 META-INF/spring-configuration-metadata.json
   174 META-INF/spring-configuration-metadata-whitelist.properties
```
Note that the `spring-configuration-metadata.json` file is quite large. This is because it contains the concatenation of all the properties that are available at runtime to the `log` sink (some of them come from `spring-boot-actuator.jar`, some from `spring-boot-autoconfigure.jar`, some more from `spring-cloud-starter-stream-sink-log.jar`, etc.). Data Flow always relies on all those properties, even when a companion artifact is not available, but here they have all been merged into a single file.
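For reference, each entry in `spring-configuration-metadata.json` follows the standard Spring Boot configuration metadata format. A single property entry looks roughly like the following (the property shown is hypothetical, echoing the sketch above):

```json
{
  "properties": [
    {
      "name": "mysink.directory",
      "type": "java.lang.String",
      "description": "The directory to write received messages to.",
      "sourceType": "com.example.sink.MySinkProperties",
      "defaultValue": "/tmp/output"
    }
  ]
}
```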
To help with that (as a matter of fact, you don’t want to try to craft this giant JSON file by hand), you can use the following plugin in your build:

```xml
<plugin>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-app-starter-metadata-maven-plugin</artifactId>
    <executions>
        <execution>
            <id>aggregate-metadata</id>
            <phase>compile</phase>
            <goals>
                <goal>aggregate-metadata</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```
**Note:** This plugin comes in addition to the `spring-boot-configuration-processor` that creates the individual JSON files. Be sure to configure both.
The benefits of a companion artifact are manifold:

1. being way lighter (usually a few kilobytes, as opposed to megabytes for the actual app), they are quicker to download, allowing quicker feedback when using e.g. `app info` or the Dashboard UI
2. as a consequence of the above, they can be used in resource-constrained environments (such as PaaS) when metadata is the only piece of information needed
3. finally, for environments that don’t deal with Spring Boot uberjars directly (for example, Docker-based runtimes), this is the only way to provide metadata about the properties supported by the app

Remember though, that this is entirely optional when dealing with uberjars. The uberjar itself also includes the metadata in it already.
Once you have a companion artifact at hand, you need to make the system aware of it so that it can be used.
When registering a single app via `app register`, you can use the optional `--metadata-uri` option in the shell, like so:

```
dataflow:>app register --name log --type sink --uri maven://org.springframework.cloud.stream.app:log-sink-kafka-10:1.2.1.BUILD-SNAPSHOT --metadata-uri=maven://org.springframework.cloud.stream.app:log-sink-kafka-10:jar:metadata:1.2.1.BUILD-SNAPSHOT
```
When registering several apps using the `app import` command, the properties file should contain a `<type>.<name>.metadata` line in addition to each `<type>.<name>` line. This is optional (i.e. if some apps have it but some others don’t, that’s fine).
Here is an example for a Dockerized app, where the metadata artifact is being hosted in a Maven repository (but retrieving it via `http://` or `file://` would be equally possible):

```
...
source.http=docker:springcloudstream/http-source-rabbit:latest
source.http.metadata=maven://org.springframework.cloud.stream.app:http-source-rabbit:jar:metadata:1.2.1.BUILD-SNAPSHOT
...
```