Download and run the Spring Cloud Data Flow shell.
$ wget http://repo.spring.io/release/org/springframework/cloud/spring-cloud-dataflow-shell/1.2.3.RELEASE/spring-cloud-dataflow-shell-1.2.3.RELEASE.jar
$ java -jar spring-cloud-dataflow-shell-1.2.3.RELEASE.jar
That should give you the following startup message from the shell:
[Spring Cloud Data Flow ASCII banner]
1.2.3.RELEASE

Welcome to the Spring Cloud Data Flow shell. For assistance hit TAB or type "help".
server-unknown:>
Configure the Data Flow server URI with the following command (use the URL determined in the previous step), with the default user and password settings:
server-unknown:>dataflow config server --username user --password password --uri http://130.211.203.246/
Successfully targeted http://130.211.203.246/
dataflow:>
Register the Docker images of the Rabbit binder versions of the time and log apps using the shell.
dataflow:>app register --type source --name time --uri docker://springcloudstream/time-source-rabbit:1.2.0.RELEASE --metadata-uri maven://org.springframework.cloud.stream.app:time-source-rabbit:jar:metadata:1.2.0.RELEASE
dataflow:>app register --type sink --name log --uri docker://springcloudstream/log-sink-rabbit:1.2.0.RELEASE --metadata-uri maven://org.springframework.cloud.stream.app:log-sink-rabbit:jar:metadata:1.2.0.RELEASE
Alternatively, if you would like to register all out-of-the-box stream applications built with the Rabbit binder in bulk, you can do so with the following command. For more details, review how to register applications.
dataflow:>app import --uri http://bit.ly/stream-applications-rabbit-docker
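As a quick sanity check, you can list the registered applications from the shell; the app list command should show the time source and log sink (or the full set of starters if you ran the bulk import):

dataflow:>app list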
Deploy a simple stream in the shell
dataflow:>stream create --name ticktock --definition "time | log" --deploy
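To check the deployment status from the Data Flow side, the shell's stream list command shows the stream and its current status; it should eventually report the ticktock stream as deployed:

dataflow:>stream list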
You can use the command kubectl get pods to check on the state of the pods corresponding to this stream. You can run this from the shell as an OS command by adding a "!" before the command.
dataflow:>! kubectl get pods -l role=spring-app
command is:kubectl get pods -l role=spring-app
NAME                   READY     STATUS    RESTARTS   AGE
ticktock-log-0-qnk72   1/1       Running   0          2m
ticktock-time-r65cn    1/1       Running   0          2m
Look at the logs for the pod deployed for the log sink.
dataflow:>! kubectl logs ticktock-log-0-qnk72
command is:kubectl logs ticktock-log-0-qnk72
...
2017-07-20 04:34:37.369  INFO 1 --- [time.ticktock-1] log-sink : 07/20/17 04:34:37
2017-07-20 04:34:38.371  INFO 1 --- [time.ticktock-1] log-sink : 07/20/17 04:34:38
2017-07-20 04:34:39.373  INFO 1 --- [time.ticktock-1] log-sink : 07/20/17 04:34:39
2017-07-20 04:34:40.380  INFO 1 --- [time.ticktock-1] log-sink : 07/20/17 04:34:40
2017-07-20 04:34:41.381  INFO 1 --- [time.ticktock-1] log-sink : 07/20/17 04:34:41
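If you want to keep watching the output as new timestamps arrive, you can follow the log with kubectl's -f flag from a regular terminal (substitute your own pod name); this blocks until you interrupt it, so it is best run outside the Data Flow shell:

kubectl logs -f ticktock-log-0-qnk72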
Destroy the stream
dataflow:>stream destroy --name ticktock
A useful option when troubleshooting issues, such as a container that has a fatal error starting up, is to add --previous to view the log of the last terminated container. You can also get more detailed information about the pods by using kubectl describe, like:
kubectl describe pods/ticktock-log-qnk72
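For example, to view the log of the previously terminated container for the log sink pod listed earlier (substitute your own pod name), you could run:

kubectl logs ticktock-log-0-qnk72 --previous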
Note: If you need to specify any of the app-specific configuration properties, you might need to use the "long form" of them, including the app-specific prefix.
If you need to connect from outside the Kubernetes cluster to an app that you deploy, like the http-source, you need to use either an external load balancer for the incoming connections or a NodePort configuration that exposes a proxy port on each Kubernetes node. If your cluster doesn't support external load balancers, like Minikube, you must use the NodePort approach. You can use deployment properties for configuring the access. Use deployer.http.kubernetes.createLoadBalancer=true to specify that you want a LoadBalancer with an external IP address created for your app's service. For the NodePort configuration, use deployer.http.kubernetes.createNodePort=<port>, where <port> should be a number between 30000 and 32767.
To register the http-source, you can use the following command:
dataflow:>app register --type source --name http --uri docker://springcloudstream/http-source-rabbit:1.2.0.RELEASE --metadata-uri maven://org.springframework.cloud.stream.app:http-source-rabbit:jar:metadata:1.2.0.RELEASE
Create the http | log
stream without deploying it using the following command:
dataflow:>stream create --name test --definition "http | log"
If your cluster supports an External LoadBalancer for the http-source
, then you can use the following command to deploy the stream:
dataflow:>stream deploy test --properties "deployer.http.kubernetes.createLoadBalancer=true"
Wait for the pods to be started and show 1/1 in the READY column by using this command:
dataflow:>! kubectl get pods -l role=spring-app
command is:kubectl get pods -l role=spring-app
NAME               READY     STATUS    RESTARTS   AGE
test-http-2bqx7    1/1       Running   0          3m
test-log-0-tg1m4   1/1       Running   0          3m
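If your kubectl version supports the kubectl wait command, you can also block until the pods report Ready instead of re-running the listing; this is just a convenience and assumes a reasonably recent kubectl:

kubectl wait --for=condition=Ready pod -l role=spring-app --timeout=120s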
Now, look up the external IP address for the http
app (it can sometimes take a minute or two for the external IP to get assigned):
dataflow:>! kubectl get service test-http
command is:kubectl get service test-http
NAME        CLUSTER-IP       EXTERNAL-IP      PORT(S)    AGE
test-http   10.103.251.157   130.211.200.96   8080/TCP   58s
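If you only want the external IP (for scripting, for example), a jsonpath query against the service works as well; this assumes the load balancer reports an IP rather than a hostname:

kubectl get service test-http -o jsonpath='{.status.loadBalancer.ingress[0].ip}'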
If you are using Minikube, or any cluster that doesn’t support an External LoadBalancer, then you should deploy the stream with a NodePort in the range of 30000-32767. Use the following command to deploy it:
dataflow:>stream deploy test --properties "deployer.http.kubernetes.createNodePort=32123"
Wait for the pods to be started and show 1/1 in the READY column by using this command:
dataflow:>! kubectl get pods -l role=spring-app
command is:kubectl get pods -l role=spring-app
NAME               READY     STATUS    RESTARTS   AGE
test-http-9obkq    1/1       Running   0          3m
test-log-0-ysiz3   1/1       Running   0          3m
Now look up the URL to use with the following command:
dataflow:>! minikube service --url test-http
command is:minikube service --url test-http
http://192.168.99.100:32123
Post some data to the test-http
app either using the EXTERNAL-IP address from above with port 8080 or the URL provided by the minikube command:
dataflow:>http post --target http://130.211.200.96:8080 --data "Hello"
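If you prefer not to use the Data Flow shell for this step, an equivalent POST can be sent with curl; use the EXTERNAL-IP and port (or the Minikube URL) that you looked up above:

curl -X POST -H "Content-Type: text/plain" -d "Hello" http://130.211.200.96:8080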
Finally, look at the logs for the test-log
pod:
dataflow:>! kubectl get pods -l role=spring-app
command is:kubectl get pods -l role=spring-app
NAME               READY     STATUS    RESTARTS   AGE
test-http-9obkq    1/1       Running   0          2m
test-log-0-ysiz3   1/1       Running   0          2m

dataflow:>! kubectl logs test-log-0-ysiz3
command is:kubectl logs test-log-0-ysiz3
...
2016-04-27 16:54:29.789  INFO 1 --- [           main] o.s.c.s.b.k.KafkaMessageChannelBinder$3  : started inbound.test.http.test
2016-04-27 16:54:29.799  INFO 1 --- [           main] o.s.c.support.DefaultLifecycleProcessor  : Starting beans in phase 0
2016-04-27 16:54:29.799  INFO 1 --- [           main] o.s.c.support.DefaultLifecycleProcessor  : Starting beans in phase 2147482647
2016-04-27 16:54:29.895  INFO 1 --- [           main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8080 (http)
2016-04-27 16:54:29.896  INFO 1 --- [ kafka-binder-] log.sink                                 : Hello
Destroy the stream
dataflow:>stream destroy --name test
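After destroying the stream, you can confirm that the corresponding pods have been removed; it may take a few moments for them to terminate:

dataflow:>! kubectl get pods -l role=spring-app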