As an example of a simple processing step, we can transform the payload of the HTTP-posted data to upper case using the following stream definition
http | transform --expression=payload.toUpperCase() | log
To create this stream, enter the following command in the shell:
dataflow:> stream create --definition "http | transform --expression=payload.toUpperCase() | log" --name mystream --deploy
Posting some data (using a shell command)
dataflow:> http post --target http://localhost:1234 --data "hello"
will result in an uppercased 'HELLO' in the log:
2016-06-01 09:54:37.749 INFO 80083 --- [ kafka-binder-] log.sink : HELLO
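The transform processor accepts any SpEL expression evaluated against the message payload, so other one-line transformations follow the same pattern. For example (an illustrative variant with a hypothetical stream name, not part of the walk-through above):

dataflow:> stream create --definition "http | transform --expression=payload.toLowerCase() | log" --name mylowerstream --deploy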
To demonstrate the data partitioning functionality, let’s deploy the following stream with Kafka as the binder.
dataflow:>stream create --name words --definition "http --server.port=9900 | splitter --expression=payload.split(' ') | log"
Created new stream 'words'
dataflow:>stream deploy words --properties "app.splitter.producer.partitionKeyExpression=payload,deployer.log.count=2"
Deployed stream 'words'
dataflow:>http post --target http://localhost:9900 --data "How much wood would a woodchuck chuck if a woodchuck could chuck wood"
> POST (text/plain;Charset=UTF-8) http://localhost:9900 How much wood would a woodchuck chuck if a woodchuck could chuck wood
> 202 ACCEPTED
You’ll see the following in the server logs.
2016-06-05 18:33:24.982 INFO 58039 --- [nio-9393-exec-9] o.s.c.d.spi.local.LocalAppDeployer : deploying app words.log instance 0
   Logs will be in /var/folders/c3/ctx7_rns6x30tq7rb76wzqwr0000gp/T/spring-cloud-dataflow-694182453710731989/words-1465176804970/words.log
2016-06-05 18:33:24.988 INFO 58039 --- [nio-9393-exec-9] o.s.c.d.spi.local.LocalAppDeployer : deploying app words.log instance 1
   Logs will be in /var/folders/c3/ctx7_rns6x30tq7rb76wzqwr0000gp/T/spring-cloud-dataflow-694182453710731989/words-1465176804970/words.log
Review the words.log instance 0 logs:
2016-06-05 18:35:47.047 INFO 58638 --- [ kafka-binder-] log.sink : How
2016-06-05 18:35:47.066 INFO 58638 --- [ kafka-binder-] log.sink : chuck
2016-06-05 18:35:47.066 INFO 58638 --- [ kafka-binder-] log.sink : chuck
Review the words.log instance 1 logs:
2016-06-05 18:35:47.047 INFO 58639 --- [ kafka-binder-] log.sink : much
2016-06-05 18:35:47.066 INFO 58639 --- [ kafka-binder-] log.sink : wood
2016-06-05 18:35:47.066 INFO 58639 --- [ kafka-binder-] log.sink : would
2016-06-05 18:35:47.066 INFO 58639 --- [ kafka-binder-] log.sink : a
2016-06-05 18:35:47.066 INFO 58639 --- [ kafka-binder-] log.sink : woodchuck
2016-06-05 18:35:47.067 INFO 58639 --- [ kafka-binder-] log.sink : if
2016-06-05 18:35:47.067 INFO 58639 --- [ kafka-binder-] log.sink : a
2016-06-05 18:35:47.067 INFO 58639 --- [ kafka-binder-] log.sink : woodchuck
2016-06-05 18:35:47.067 INFO 58639 --- [ kafka-binder-] log.sink : could
2016-06-05 18:35:47.067 INFO 58639 --- [ kafka-binder-] log.sink : wood
This shows that, because the partition key expression evaluates to the split payload (that is, to the word itself), all occurrences of the same word are routed to the same application instance.
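This routing follows from how key-based partitioning works in general: the binder evaluates partitionKeyExpression for each outbound message and maps the resulting key to one of the consumer instances. The sketch below illustrates the idea with a plain hash-modulo mapping; it is a simplification for illustration, not the binder's actual code.

import java.util.Arrays;
import java.util.List;

public class PartitionDemo {

    // Illustrative key-to-instance mapping: hash the partition key and
    // take it modulo the number of consumer instances.
    static int instanceFor(Object partitionKey, int instanceCount) {
        return Math.floorMod(partitionKey.hashCode(), instanceCount);
    }

    public static void main(String[] args) {
        List<String> words = Arrays.asList(
                "How much wood would a woodchuck chuck if a woodchuck could chuck wood".split(" "));
        int instanceCount = 2; // mirrors deployer.log.count=2 above
        for (String word : words) {
            // Equal words produce equal hash codes, so they always map to the same instance.
            System.out.printf("%-9s -> instance %d%n", word, instanceFor(word, instanceCount));
        }
    }
}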
Let’s try something a bit more complicated and swap out the time source for something else. Another supported source type is http, which accepts data for ingestion over HTTP POSTs. Note that the http source accepts data on a port different from the Data Flow Server’s; by default, that port is randomly assigned.
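If you would rather not rely on a random port, you can fix it by adding --server.port to the http source in the stream definition, just as the words stream above did with --server.port=9900; for example, http --server.port=9000 | log (the port here is an arbitrary illustration).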
To create a stream using an http source, but still using the same log sink, we would change the original command above to
dataflow:> stream create --definition "http | log" --name myhttpstream --deploy
which will produce the following output from the server:
2016-06-01 09:47:58.920 INFO 79016 --- [io-9393-exec-10] o.s.c.d.spi.local.LocalAppDeployer : deploying app myhttpstream.log instance 0
   Logs will be in /var/folders/wn/8jxm_tbd1vj28c8vj37n900m0000gn/T/spring-cloud-dataflow-912434582726479179/myhttpstream-1464788878747/myhttpstream.log
2016-06-01 09:48:06.396 INFO 79016 --- [io-9393-exec-10] o.s.c.d.spi.local.LocalAppDeployer : deploying app myhttpstream.http instance 0
   Logs will be in /var/folders/wn/8jxm_tbd1vj28c8vj37n900m0000gn/T/spring-cloud-dataflow-912434582726479179/myhttpstream-1464788886383/myhttpstream.http
Note that we don’t see any other output this time until we actually post some data (using a shell command). In order to see the randomly assigned port on which the http source is listening, execute:
dataflow:> runtime apps
You should see that the corresponding http source has a url property containing the host and port information on which it is listening. You are now ready to post to that url, e.g.:
dataflow:> http post --target http://localhost:1234 --data "hello"
dataflow:> http post --target http://localhost:1234 --data "goodbye"
and the stream will then funnel the data from the http source to the output log implemented by the log sink:
2016-06-01 09:50:22.121 INFO 79654 --- [ kafka-binder-] log.sink : hello
2016-06-01 09:50:26.810 INFO 79654 --- [ kafka-binder-] log.sink : goodbye
Of course, we could also change the sink implementation. You could pipe the output to a file (file), to Hadoop (hdfs), or to any of the other sink apps that are available. You can also define your own apps.
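To give a feel for what defining your own app involves, here is a minimal sketch of a custom sink using Spring Cloud Stream's annotation-based programming model (the class name, handler, and printed output are illustrative; a real sink would do something more useful than printing):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;

// Minimal custom sink: @EnableBinding(Sink.class) binds an "input" channel
// to the configured binder (Kafka in this section's examples).
@SpringBootApplication
@EnableBinding(Sink.class)
public class MySinkApplication {

    public static void main(String[] args) {
        SpringApplication.run(MySinkApplication.class, args);
    }

    // Invoked for each message arriving on the sink's input channel.
    @StreamListener(Sink.INPUT)
    public void handle(String payload) {
        System.out.println("Received: " + payload);
    }
}

Once such an app is built and registered with the app register shell command, it can be used in stream definitions like any of the out-of-the-box sinks.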