redis-server
wget http://repo.spring.io/milestone/org/springframework/cloud/spring-cloud-dataflow-admin/1.0.0.M1/spring-cloud-dataflow-admin-1.0.0.M1.jar
wget http://repo.spring.io/milestone/org/springframework/cloud/spring-cloud-dataflow-shell/1.0.0.M1/spring-cloud-dataflow-shell-1.0.0.M1.jar
$ java -jar spring-cloud-dataflow-admin-1.0.0.M1.jar
$ java -jar spring-cloud-dataflow-shell-1.0.0.M1.jar
Thus far, only the following commands are supported in the shell when running singlenode (a short example session follows the list):
stream list
stream create
stream deploy
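For instance, a minimal session exercising just those three commands might look like this (a sketch; the ticktock definition mirrors the one used in the YARN example later and assumes the time and log modules are available):

dataflow:>stream create --name ticktock --definition "time | log"
dataflow:>stream deploy --name ticktock
dataflow:>stream list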
ltc create redis redis -r
ltc create admin springcloud/dataflow-admin -p 9393 -m 512
server-unknown:>admin config server http://admin.192.168.11.11.xip.io
Successfully targeted http://admin.192.168.11.11.xip.io
dataflow:>
All stream commands are supported in the shell when running on Lattice (a full lifecycle example follows the list):
stream list
stream create
stream deploy
stream undeploy
stream all undeploy
stream destroy
stream all destroy
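For example, a full create-to-destroy lifecycle on Lattice might look like the following sketch (same assumptions as the singlenode example above):

dataflow:>stream create --name ticktock --definition "time | log" --deploy
dataflow:>stream undeploy --name ticktock
dataflow:>stream destroy --name ticktock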
Spring Cloud Data Flow can be used to deploy modules in a Cloud Foundry environment. When doing so, the Admin application can either run on Cloud Foundry itself, or on another installation (e.g. a simple laptop).
The required configuration is the same in either case, and amounts to providing credentials to the Cloud Foundry instance so that the admin can spawn applications itself. Any Spring Boot compatible configuration mechanism can be used (passing program arguments, editing configuration files before building the application, using Spring Cloud Config, using environment variables, etc.), although some may prove more adequate than others when running on Cloud Foundry.
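As an illustration, the same property can be supplied through either mechanism; the domain value here is just an example:

# as a program argument when launching the admin locally
java -jar spring-cloud-dataflow-admin-1.0.0.M1.jar --cloudfoundry.domain=cfapps.io

# as an environment variable when running on Cloud Foundry
cf set-env s-c-dataflow-admin CLOUDFOUNDRY_DOMAIN cfapps.io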
cf marketplace
to discover which plans are available to you. For example, when using Pivotal Web Services:

cf create-service rediscloud 30mb redis
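You can verify that the service instance was created with:

cf services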
wget http://repo.spring.io/milestone/org/springframework/cloud/spring-cloud-dataflow-admin/1.0.0.M1/spring-cloud-dataflow-admin-1.0.0.M1.jar
wget http://repo.spring.io/milestone/org/springframework/cloud/spring-cloud-dataflow-shell/1.0.0.M1/spring-cloud-dataflow-shell-1.0.0.M1.jar
3a. Push the admin application to Cloud Foundry, configure it (see below), and start it.
Note: You must use a unique name for your app that's not already used by someone else, or your deployment will fail.
cf push s-c-dataflow-admin --no-start -p spring-cloud-dataflow-admin-1.0.0.M1.jar
cf bind-service s-c-dataflow-admin redis
Now we can configure the app. This configuration is for Pivotal Web Services. You need to fill in {org}, {space}, {email} and {password} before running these commands.
cf set-env s-c-dataflow-admin CLOUDFOUNDRY_API_ENDPOINT https://api.run.pivotal.io
cf set-env s-c-dataflow-admin CLOUDFOUNDRY_ORGANIZATION {org}
cf set-env s-c-dataflow-admin CLOUDFOUNDRY_SPACE {space}
cf set-env s-c-dataflow-admin CLOUDFOUNDRY_DOMAIN cfapps.io
cf set-env s-c-dataflow-admin CLOUDFOUNDRY_SERVICES redis
cf set-env s-c-dataflow-admin SECURITY_OAUTH2_CLIENT_USERNAME {email}
cf set-env s-c-dataflow-admin SECURITY_OAUTH2_CLIENT_PASSWORD {password}
cf set-env s-c-dataflow-admin SECURITY_OAUTH2_CLIENT_ACCESS_TOKEN_URI https://login.run.pivotal.io/oauth/token
cf set-env s-c-dataflow-admin SECURITY_OAUTH2_CLIENT_USER_AUTHORIZATION_URI https://login.run.pivotal.io/oauth/authorize
We are now ready to start the app.
cf start s-c-dataflow-admin
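If the app fails to start, the recent logs usually point at the missing piece of configuration; you can inspect them with:

cf logs s-c-dataflow-admin --recent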
Alternatively,
3b. Run the admin application locally, targeting your Cloud Foundry installation (see below for configuration):
java -jar spring-cloud-dataflow-admin-1.0.0.M1.jar [--option1=value1] [--option2=value2] [etc.]
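For example, using the configuration properties listed at the end of this section (fill in the {org}, {space}, {email} and {password} placeholders as before):

java -jar spring-cloud-dataflow-admin-1.0.0.M1.jar \
  --cloudfoundry.apiEndpoint=https://api.run.pivotal.io \
  --cloudfoundry.organization={org} \
  --cloudfoundry.space={space} \
  --cloudfoundry.domain=cfapps.io \
  --cloudfoundry.services=redis \
  --security.oauth2.client.username={email} \
  --security.oauth2.client.password={password} \
  --security.oauth2.client.access-token-uri=https://login.run.pivotal.io/oauth/token \
  --security.oauth2.client.user-authorization-uri=https://login.run.pivotal.io/oauth/authorize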
$ java -jar spring-cloud-dataflow-shell-1.0.0.M1.jar
server-unknown:>admin config server http://s-c-dataflow-admin.cfapps.io
Successfully targeted http://s-c-dataflow-admin.cfapps.io
dataflow:>
At step 3, whether running on Cloud Foundry or targeting Cloud Foundry, the following pieces of configuration must be provided, for example using cf set-env (note the use of underscores) when running on Cloud Foundry:

cf set-env s-c-dataflow-admin CLOUDFOUNDRY_DOMAIN mydomain.cfapps.io
# Default values cited after the equal sign.
# Example values, typical for Pivotal Web Services, cited as a comment

# url of the CF API (used when using cf login -a for example), e.g. https://api.run.pivotal.io
# (for setting env var use CLOUDFOUNDRY_API_ENDPOINT)
cloudfoundry.apiEndpoint=

# name of the organization that owns the space below, e.g. youruser-org
# (for setting env var use CLOUDFOUNDRY_ORGANIZATION)
cloudfoundry.organization=

# name of the space into which modules will be deployed
# (for setting env var use CLOUDFOUNDRY_SPACE)
cloudfoundry.space=<same as admin when running on CF or 'development'>

# the root domain to use when mapping routes, e.g. cfapps.io
# (for setting env var use CLOUDFOUNDRY_DOMAIN)
cloudfoundry.domain=

# Comma separated set of service instance names to bind to the module.
# Amongst other things, this should include a service that will be used
# for Spring Cloud Stream binding
# (for setting env var use CLOUDFOUNDRY_SERVICES)
cloudfoundry.services=redis

# url used for obtaining an OAuth2 token, e.g. https://uaa.run.pivotal.io/oauth/token
# (for setting env var use SECURITY_OAUTH2_CLIENT_ACCESS_TOKEN_URI)
security.oauth2.client.access-token-uri=

# url used to grant user authorizations, e.g. https://login.run.pivotal.io/oauth/authorize
# (for setting env var use SECURITY_OAUTH2_CLIENT_USER_AUTHORIZATION_URI)
security.oauth2.client.user-authorization-uri=

# username and password of the user to use to create apps (modules)
# (for setting env var use SECURITY_OAUTH2_CLIENT_USERNAME and SECURITY_OAUTH2_CLIENT_PASSWORD)
security.oauth2.client.username=
security.oauth2.client.password=
Currently the YARN configuration is set to use localhost, meaning this can only be run against a local cluster. Also, all commands shown here need to be run from the project root.
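To target a non-local cluster you would override the Hadoop connection settings; the following is only a sketch, with property names assumed from Spring for Apache Hadoop and illustrative hostnames:

# illustrative overrides for a remote cluster (not required for the local setup described here)
spring.hadoop.fsUri=hdfs://my-namenode:8020
spring.hadoop.resourceManagerHost=my-resourcemanager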
wget http://repo.spring.io/milestone/org/springframework/cloud/spring-cloud-dataflow-yarn-appmaster/1.0.0.M1/spring-cloud-dataflow-yarn-appmaster-1.0.0.M1.jar
wget http://repo.spring.io/milestone/org/springframework/cloud/spring-cloud-dataflow-yarn-container/1.0.0.M1/spring-cloud-dataflow-yarn-container-1.0.0.M1.jar
wget http://repo.spring.io/milestone/org/springframework/cloud/spring-cloud-dataflow-yarn-client/1.0.0.M1/spring-cloud-dataflow-yarn-client-1.0.0.M1.jar
wget http://repo.spring.io/milestone/org/springframework/cloud/spring-cloud-dataflow-admin/1.0.0.M1/spring-cloud-dataflow-admin-1.0.0.M1.jar
wget http://repo.spring.io/milestone/org/springframework/cloud/spring-cloud-dataflow-shell/1.0.0.M1/spring-cloud-dataflow-shell-1.0.0.M1.jar
redis-server
If a previous deployment already exists on hdfs, remove it:
$ hdfs dfs -rm -R /app/app
Start spring-cloud-dataflow-admin with the yarn profile:
$ java -Dspring.profiles.active=yarn -jar spring-cloud-dataflow-admin-1.0.0.M1.jar
Start spring-cloud-dataflow-shell:
$ java -jar spring-cloud-dataflow-shell-1.0.0.M1.jar

dataflow:>stream create --name "ticktock" --definition "time --fixedDelay=5|log" --deploy
dataflow:>stream list
  Stream Name  Stream Definition        Status
  -----------  -----------------------  --------
  ticktock     time --fixedDelay=5|log  deployed
dataflow:>stream destroy --name "ticktock"
Destroyed stream 'ticktock'
The YARN application is pushed and started automatically during the stream deployment process. The application instance is not closed automatically; this can be done from the CLI:
$ java -jar spring-cloud-dataflow-yarn-client-1.0.0.M1.jar shell
Spring YARN Cli (v2.3.0.M2)
Hit TAB to complete. Type 'help' and hit RETURN for help, and 'exit' to quit.
$ submitted
  APPLICATION ID                  USER          NAME                                QUEUE    TYPE      STARTTIME       FINISHTIME  STATE    FINALSTATUS  ORIGINAL TRACKING URL
  ------------------------------  ------------  ----------------------------------  -------  --------  --------------  ----------  -------  -----------  --------------------------
  application_1439803106751_0088  jvalkealahti  spring-cloud-dataflow-yarn-app_app  default  DATAFLOW  01/09/15 09:02  N/A         RUNNING  UNDEFINED    http://192.168.122.1:48913
$ shutdown -a application_1439803106751_0088
shutdown requested
The properties dataflow.yarn.app.appmaster.path and dataflow.yarn.app.container.path can be used with both spring-cloud-dataflow-admin and spring-cloud-dataflow-yarn-client to define the directory for the appmaster and container jars. Both default to ., which assumes that all the needed jars are in the same working directory.
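For example, to point both the admin and the client at a shared directory containing the appmaster and container jars (path illustrative):

$ java -Dspring.profiles.active=yarn -jar spring-cloud-dataflow-admin-1.0.0.M1.jar \
    --dataflow.yarn.app.appmaster.path=/opt/dataflow/jars \
    --dataflow.yarn.app.container.path=/opt/dataflow/jars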