The Spring Cloud Data Flow Server exposes a full RESTful API for managing the lifecycle of stream definitions, but the easiest way to use it is via the Spring Cloud Data Flow shell. Start the shell as described in the Getting Started section.
New streams are created by posting stream definitions. The definitions are built from a simple DSL. For example, let’s walk through what happens if we execute the following shell command:
dataflow:> stream create --definition "time | log" --name ticktock
This defines a stream named ticktock, based on the DSL expression time | log. The DSL uses the "pipe" symbol, |, to connect a source to a sink.
Then, to deploy the stream, execute the following shell command (or, alternatively, add the --deploy flag when creating the stream so that this step is not needed):
dataflow:> stream deploy --name ticktock
The Data Flow Server resolves time and log to Maven coordinates and uses those to launch the time and log applications of the stream.
2016-06-01 09:41:21.728  INFO 79016 --- [nio-9393-exec-6] o.s.c.d.spi.local.LocalAppDeployer       : deploying app ticktock.log instance 0
   Logs will be in /var/folders/wn/8jxm_tbd1vj28c8vj37n900m0000gn/T/spring-cloud-dataflow-912434582726479179/ticktock-1464788481708/ticktock.log
2016-06-01 09:41:21.914  INFO 79016 --- [nio-9393-exec-6] o.s.c.d.spi.local.LocalAppDeployer       : deploying app ticktock.time instance 0
   Logs will be in /var/folders/wn/8jxm_tbd1vj28c8vj37n900m0000gn/T/spring-cloud-dataflow-912434582726479179/ticktock-1464788481910/ticktock.time
In this example, the time source simply sends the current time as a message each second, and the log sink outputs it using the logging framework.
You can tail the stdout log (which has an "_<instance>" suffix). The log files are located within the directory displayed in the Data Flow Server's log output, as shown above.
$ tail -f /var/folders/wn/8jxm_tbd1vj28c8vj37n900m0000gn/T/spring-cloud-dataflow-912434582726479179/ticktock-1464788481708/ticktock.log/stdout_0.log
2016-06-01 09:45:11.250  INFO 79194 --- [  kafka-binder-] log.sink                                 : 06/01/16 09:45:11
2016-06-01 09:45:12.250  INFO 79194 --- [  kafka-binder-] log.sink                                 : 06/01/16 09:45:12
2016-06-01 09:45:13.251  INFO 79194 --- [  kafka-binder-] log.sink                                 : 06/01/16 09:45:13
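At this point the stream is running. As a quick usage sketch (output omitted here), you can check its status with the stream list command, and undeploy or destroy it once you are done experimenting:

dataflow:> stream list
dataflow:> stream undeploy --name ticktock
dataflow:> stream destroy --name ticktock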
Application properties are the properties associated with each application in the stream. When the application is deployed, these properties are applied to it via command line arguments or environment variables, depending on the underlying deployment implementation.
The following stream
dataflow:> stream create --definition "time | log" --name ticktock
can have application properties defined at the time of stream creation.
The shell command app info displays the white-listed application properties for the application. For more information on property white listing, refer to Section 21.1, “Whitelisting application properties”.
Below are the white-listed properties for the app time:
dataflow:> app info source:time
╔══════════════════════════════╤══════════════════════════════╤══════════════════════════════╤══════════════════════════════╗
║         Option Name          │         Description          │           Default            │             Type             ║
╠══════════════════════════════╪══════════════════════════════╪══════════════════════════════╪══════════════════════════════╣
║trigger.time-unit             │The TimeUnit to apply to delay│<none>                        │java.util.concurrent.TimeUnit ║
║                              │values.                       │                              │                              ║
║trigger.fixed-delay           │Fixed delay for periodic      │1                             │java.lang.Integer             ║
║                              │triggers.                     │                              │                              ║
║trigger.cron                  │Cron expression value for the │<none>                        │java.lang.String              ║
║                              │Cron Trigger.                 │                              │                              ║
║trigger.initial-delay         │Initial delay for periodic    │0                             │java.lang.Integer             ║
║                              │triggers.                     │                              │                              ║
║trigger.max-messages          │Maximum messages per poll, -1 │1                             │java.lang.Long                ║
║                              │means infinity.               │                              │                              ║
║trigger.date-format           │Format for the date value.    │<none>                        │java.lang.String              ║
╚══════════════════════════════╧══════════════════════════════╧══════════════════════════════╧══════════════════════════════╝
Below are the white-listed properties for the app log:
dataflow:> app info sink:log
╔══════════════════════════════╤══════════════════════════════╤══════════════════════════════╤══════════════════════════════╗
║         Option Name          │         Description          │           Default            │             Type             ║
╠══════════════════════════════╪══════════════════════════════╪══════════════════════════════╪══════════════════════════════╣
║log.name                      │The name of the logger to use.│<none>                        │java.lang.String              ║
║log.level                     │The level at which to log     │<none>                        │org.springframework.integratio║
║                              │messages.                     │                              │n.handler.LoggingHandler$Level║
║log.expression                │A SpEL expression (against the│payload                       │java.lang.String              ║
║                              │incoming message) to evaluate │                              │                              ║
║                              │as the logged message.        │                              │                              ║
╚══════════════════════════════╧══════════════════════════════╧══════════════════════════════╧══════════════════════════════╝
The application properties for the time and log apps can be specified at the time of stream creation as follows:
dataflow:> stream create --definition "time --fixed-delay=5 | log --level=WARN" --name ticktock
Note that the properties fixed-delay and level defined above for the apps time and log are the 'short-form' property names provided by the shell completion. These 'short-form' property names are applicable only for the white-listed properties; in all other cases, only fully qualified property names should be used, as in the sketch below.
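As an illustration, using the fully qualified names from the tables above (trigger.fixed-delay for the time app and log.level for the log app), the same stream could also be created as follows. Treat this as a sketch, assuming fully qualified names are passed through to the apps unchanged:

dataflow:> stream create --definition "time --trigger.fixed-delay=5 | log --log.level=WARN" --name ticktock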
Application properties can also be specified when deploying a stream. When specified during deployment, these properties can be given either as 'short-form' property names (applicable for white-listed properties) or as fully qualified property names, and they should carry the prefix "app.<appName/label>".
For example, the stream
dataflow:> stream create --definition "time | log" --name ticktock
can be deployed with application properties using the 'short-form' property names:
dataflow:>stream deploy ticktock --properties "app.time.fixed-delay=5,app.log.level=ERROR"
When using app labels, as in
stream create ticktock --definition "a: time | b: log"
the application properties can be defined as:
stream deploy ticktock --properties "app.a.fixed-delay=4,app.b.level=ERROR"
A common pattern in stream processing is to partition the data as it is streamed. This entails deploying multiple instances of a message consuming app and using content-based routing so that messages with a given key (as determined at runtime) are always routed to the same app instance. You can pass the partition properties during stream deployment to declaratively configure a partitioning strategy to route each message to a specific consumer instance.
The following deployment properties configure partitioning; an example of deploying a partitioned stream follows the summary below:
app.[appName/label].producer.partitionKeyExtractorClass
    The class name of a PartitionKeyExtractorStrategy (default null)
app.[appName/label].producer.partitionKeyExpression
    A SpEL expression, evaluated against the message, to determine the partition key; only applies if partitionKeyExtractorClass is null. If both are null, the app is not partitioned (default null)
app.[appName/label].producer.partitionSelectorClass
    The class name of a PartitionSelectorStrategy (default null)
app.[appName/label].producer.partitionSelectorExpression
    A SpEL expression, evaluated against the partition key, to determine the partition index to which the message will be routed. The final partition index will be the return value (an integer) modulo [nextModule].count. If both the class and expression are null, the underlying binder’s default PartitionSelectorStrategy will be applied to the key (default null)
In summary, an app is partitioned if its count is > 1 and the previous app has a partitionKeyExtractorClass or partitionKeyExpression (class takes precedence). When a partition key is extracted, the partitioned app instance is determined by invoking the partitionSelectorClass, if present, or the partitionSelectorExpression % partitionCount, where partitionCount is the application count in the case of RabbitMQ, and the underlying partition count of the topic in the case of Kafka. If neither a partitionSelectorClass nor a partitionSelectorExpression is present, the result is key.hashCode() % partitionCount.
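As a concrete sketch of partitioned deployment (the words stream below is illustrative and assumes the http, splitter, and log starter apps are registered), the following deploys two log instances and routes each word to an instance determined by the payload itself:

dataflow:> stream create --name words --definition "http | splitter --expression=payload.split(' ') | log"
dataflow:> stream deploy words --properties "app.splitter.producer.partitionKeyExpression=payload,app.log.count=2"

Here app.log.count=2 runs two consumer instances, and app.splitter.producer.partitionKeyExpression=payload uses the message payload (a single word) as the partition key, so identical words always land on the same log instance.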
Application properties that are defined during deployment override the same properties defined during the stream creation.
For example, the following stream has application properties defined during stream creation:
dataflow:> stream create --definition "time --fixed-delay=5 | log --level=WARN" --name ticktock
To override these application properties, one can specify the new property values during deployment:
dataflow:>stream deploy ticktock --properties "app.time.fixed-delay=4,app.log.level=ERROR"
When deploying the stream, properties that control the deployment of the apps into the target platform are known as deployment properties. For instance, one can specify how many instances are to be deployed for a specific application defined in the stream by using the deployment property called count.
If you would like to have multiple instances of an application in the stream, you can include a property with the deploy command:
dataflow:> stream deploy --name ticktock --properties "app.time.count=3"
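To verify the resulting instances, the runtime apps shell command can be used; this is a usage sketch, and the exact output depends on your server version and deployer:

dataflow:> runtime apps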
Note that count is a reserved property name used by the underlying deployer. Hence, if an application also has a custom property named count, that property is not supported in 'short-form' during stream deployment, as it could conflict with the instance count deployer property. Instead, such a custom count application property can be specified in its fully qualified form (example: app.foo.bar.count) during stream deployment, or it can be specified in either 'short-form' or fully qualified form during stream creation, where it will be treated as an app property.
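For example, if a hypothetical app in the stream, labeled aggregator, exposed its own property threshold.count (both names are invented purely for illustration), it could be set during deployment only in its fully qualified form:

dataflow:> stream deploy mystream --properties "app.aggregator.threshold.count=10"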
Important:
When using the Spring Cloud Data Flow Shell, there are two ways to provide deployment properties: either inline or via a file reference. These two ways are mutually exclusive and are documented below:
- Use the --properties shell option and list properties as a comma-separated list of key=value pairs, like so:

stream deploy foo --properties "app.transform.count=2,app.transform.producer.partitionKeyExpression=payload"
- Use the --propertiesFile option and point it to a local Java .properties file (i.e. one that lives in the filesystem of the machine running the shell). Being read as a .properties file, normal rules apply (ISO 8859-1 encoding, =, <space> or : as delimiters, and so on), although we recommend using = as the key-value pair delimiter, for consistency:

stream deploy foo --propertiesFile myprops.properties
where myprops.properties contains:
app.transform.count=2
app.transform.producer.partitionKeyExpression=payload
Both of the above properties will be passed as deployment properties for the stream foo above.