07:10:44,617 INFO Test worker xml.XmlBeanDefinitionReader - Loading XML bean definitions from class path resource [org/springframework/data/hadoop/batch/in-do-out.xml]
07:10:44,646 INFO Test worker xml.XmlBeanDefinitionReader - Loading XML bean definitions from class path resource [org/springframework/data/hadoop/batch-common.xml]
07:10:44,659 INFO Test worker xml.XmlBeanDefinitionReader - Loading XML bean definitions from class path resource [org/springframework/data/hadoop/hadoop-ctx.xml]
07:10:44,679 INFO Test worker support.DefaultListableBeanFactory - Overriding bean definition for bean 'mainJob': replacing [Generic bean: class [org.springframework.batch.core.configuration.xml.SimpleFlowFactoryBean]; scope=; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=null] with [Generic bean: class [org.springframework.batch.core.configuration.xml.JobParserJobFactoryBean]; scope=; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=null]
07:10:44,681 INFO Test worker support.GenericXmlApplicationContext - Refreshing org.springframework.context.support.GenericXmlApplicationContext@1805fea3: startup date [Thu Sep 05 07:10:44 PDT 2013]; root of context hierarchy
07:10:44,686 INFO Test worker config.PropertyPlaceholderConfigurer - Loading properties file from class path resource [test.properties]
07:10:44,692 INFO Test worker support.DefaultListableBeanFactory - Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@3047411c: defining beans [jobRepository,transactionManager,jobLauncher,taskExecutor,ppc,hadoopFs,hadoopResourceLoader,hadoopConfiguration,cfg-init,fs-init,rl-init,org.springframework.data.hadoop.scripting.HdfsScriptRunner#0,org.springframework.batch.core.scope.internalStepScope,org.springframework.beans.factory.config.CustomEditorConfigurer,org.springframework.batch.core.configuration.xml.CoreNamespacePostProcessor,import,do,mainJob,file-reader,hadoop-tasklet,mr-job,org.springframework.data.hadoop.scripting.HdfsScriptRunner#1,script-tasklet]; root of factory hierarchy
FS is org.apache.hadoop.hdfs.DistributedFileSystem
07:10:44,772 INFO Test worker support.SimpleJobLauncher - Job: [FlowJob: [name=mainJob]] launched with the following parameters: [{}]
07:10:44,784 INFO Test worker job.SimpleStepHandler - Executing step: [import]
07:10:44,844 INFO Test worker job.SimpleStepHandler - Executing step: [do]
07:10:44,849 INFO Test worker mapreduce.JobTasklet - Starting job [mr-job]
07:10:44,875 INFO Test worker input.FileInputFormat - Total input paths to process : 1
07:10:44,982 INFO Test worker mapred.JobClient - Running job: job_201308191125_0570
07:10:45,985 INFO Test worker mapred.JobClient - map 0% reduce 0%
07:10:49,991 INFO Test worker mapred.JobClient - map 100% reduce 0%
07:10:58,006 INFO Test worker mapred.JobClient - map 100% reduce 33%
07:10:59,008 INFO Test worker mapred.JobClient - map 100% reduce 100%
07:11:00,012 INFO Test worker mapred.JobClient - Job complete: job_201308191125_0570
07:11:00,014 INFO Test worker mapred.JobClient - Counters: 29
07:11:00,015 INFO Test worker mapred.JobClient -   Job Counters
07:11:00,015 INFO Test worker mapred.JobClient -     Launched reduce tasks=1
07:11:00,015 INFO Test worker mapred.JobClient -     SLOTS_MILLIS_MAPS=4814
07:11:00,015 INFO Test worker mapred.JobClient -     Total time spent by all reduces waiting after reserving slots (ms)=0
07:11:00,015 INFO Test worker mapred.JobClient -     Total time spent by all maps waiting after reserving slots (ms)=0
07:11:00,015 INFO Test worker mapred.JobClient -     Launched map tasks=1
07:11:00,015 INFO Test worker mapred.JobClient -     Data-local map tasks=1
07:11:00,015 INFO Test worker mapred.JobClient -     SLOTS_MILLIS_REDUCES=8964
07:11:00,015 INFO Test worker mapred.JobClient -   File Output Format Counters
07:11:00,015 INFO Test worker mapred.JobClient -     Bytes Written=38
07:11:00,015 INFO Test worker mapred.JobClient -   FileSystemCounters
07:11:00,015 INFO Test worker mapred.JobClient -     FILE_BYTES_READ=64
07:11:00,016 INFO Test worker mapred.JobClient -     HDFS_BYTES_READ=150
07:11:00,016 INFO Test worker mapred.JobClient -     FILE_BYTES_WRITTEN=105559
07:11:00,016 INFO Test worker mapred.JobClient -     HDFS_BYTES_WRITTEN=38
07:11:00,016 INFO Test worker mapred.JobClient -   File Input Format Counters
07:11:00,016 INFO Test worker mapred.JobClient -     Bytes Read=28
07:11:00,016 INFO Test worker mapred.JobClient -   Map-Reduce Framework
07:11:00,016 INFO Test worker mapred.JobClient -     Map output materialized bytes=64
07:11:00,016 INFO Test worker mapred.JobClient -     Map input records=3
07:11:00,016 INFO Test worker mapred.JobClient -     Reduce shuffle bytes=64
07:11:00,016 INFO Test worker mapred.JobClient -     Spilled Records=10
07:11:00,016 INFO Test worker mapred.JobClient -     Map output bytes=48
07:11:00,016 INFO Test worker mapred.JobClient -     CPU time spent (ms)=1450
07:11:00,016 INFO Test worker mapred.JobClient -     Total committed heap usage (bytes)=239075328
07:11:00,017 INFO Test worker mapred.JobClient -     Combine input records=0
07:11:00,017 INFO Test worker mapred.JobClient -     SPLIT_RAW_BYTES=122
07:11:00,017 INFO Test worker mapred.JobClient -     Reduce input records=5
07:11:00,017 INFO Test worker mapred.JobClient -     Reduce input groups=5
07:11:00,017 INFO Test worker mapred.JobClient -     Combine output records=0
07:11:00,017 INFO Test worker mapred.JobClient -     Physical memory (bytes) snapshot=313237504
07:11:00,017 INFO Test worker mapred.JobClient -     Reduce output records=5
07:11:00,017 INFO Test worker mapred.JobClient -     Virtual memory (bytes) snapshot=1682501632
07:11:00,017 INFO Test worker mapred.JobClient -     Map output records=5
07:11:00,019 INFO Test worker mapreduce.JobTasklet - Completed job [mr-job]
07:11:00,055 INFO Test worker support.SimpleJobLauncher - Job: [FlowJob: [name=mainJob]] completed with the following parameters: [{}] and the following status: [COMPLETED]
FS is org.apache.hadoop.hdfs.DistributedFileSystem
07:11:00,058 INFO Test worker support.GenericXmlApplicationContext - Closing org.springframework.context.support.GenericXmlApplicationContext@1805fea3: startup date [Thu Sep 05 07:10:44 PDT 2013]; root of context hierarchy
07:11:00,059 INFO Test worker support.DefaultListableBeanFactory - Destroying singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@3047411c: defining beans [jobRepository,transactionManager,jobLauncher,taskExecutor,ppc,hadoopFs,hadoopResourceLoader,hadoopConfiguration,cfg-init,fs-init,rl-init,org.springframework.data.hadoop.scripting.HdfsScriptRunner#0,org.springframework.batch.core.scope.internalStepScope,org.springframework.beans.factory.config.CustomEditorConfigurer,org.springframework.batch.core.configuration.xml.CoreNamespacePostProcessor,import,do,mainJob,file-reader,hadoop-tasklet,mr-job,org.springframework.data.hadoop.scripting.HdfsScriptRunner#1,script-tasklet]; root of factory hierarchy
07:11:00,063 INFO Test worker xml.XmlBeanDefinitionReader - Loading XML bean definitions from class path resource [org/springframework/data/hadoop/batch/in-do-out-ns.xml]
07:11:00,092 INFO Test worker xml.XmlBeanDefinitionReader - Loading XML bean definitions from class path resource [org/springframework/data/hadoop/batch-common.xml]
07:11:00,115 INFO Test worker xml.XmlBeanDefinitionReader - Loading XML bean definitions from class path resource [org/springframework/data/hadoop/hadoop-ctx.xml]
07:11:00,135 INFO Test worker support.DefaultListableBeanFactory - Overriding bean definition for bean 'mainJob': replacing [Generic bean: class [org.springframework.batch.core.configuration.xml.SimpleFlowFactoryBean]; scope=; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=null] with [Generic bean: class [org.springframework.batch.core.configuration.xml.JobParserJobFactoryBean]; scope=; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=null]
07:11:00,136 INFO Test worker support.GenericXmlApplicationContext - Refreshing org.springframework.context.support.GenericXmlApplicationContext@10139edf: startup date [Thu Sep 05 07:11:00 PDT 2013]; root of context hierarchy
07:11:00,140 INFO Test worker config.PropertyPlaceholderConfigurer - Loading properties file from class path resource [test.properties]
07:11:00,147 INFO Test worker support.DefaultListableBeanFactory - Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@1b80fd99: defining beans [jobRepository,transactionManager,jobLauncher,taskExecutor,ppc,hadoopFs,hadoopResourceLoader,hadoopConfiguration,cfg-init,fs-init,rl-init,org.springframework.data.hadoop.scripting.HdfsScriptRunner#0,org.springframework.batch.core.scope.internalStepScope,org.springframework.beans.factory.config.CustomEditorConfigurer,org.springframework.batch.core.configuration.xml.CoreNamespacePostProcessor,import,do,mainJob,file-reader,hadoop-tasklet,mr-job,org.springframework.data.hadoop.scripting.HdfsScriptRunner#1,script-tasklet]; root of factory hierarchy
07:11:00,218 INFO Test worker support.SimpleJobLauncher - Job: [FlowJob: [name=mainJob]] launched with the following parameters: [{}]
07:11:00,229 INFO Test worker job.SimpleStepHandler - Executing step: [import]
07:11:00,285 INFO Test worker job.SimpleStepHandler - Executing step: [do]
07:11:00,292 INFO Test worker mapreduce.JobTasklet - Starting job [mr-job]
07:11:00,318 INFO Test worker input.FileInputFormat - Total input paths to process : 1
07:11:00,404 INFO Test worker mapred.JobClient - Running job: job_201308191125_0571
07:11:01,405 INFO Test worker mapred.JobClient - map 0% reduce 0%
07:11:05,412 INFO Test worker mapred.JobClient - map 100% reduce 0%
Called script
07:11:05,684 INFO myScheduler-2 mapreduce.JobRunner - Starting job [custom-jar-job]
07:11:05,728 INFO myScheduler-2 mapred.JobClient - Cleaning up the staging area hdfs://w1-kodiak-hd023:8020/app/hadoop/tmp/mapred/staging/bamboo/.staging/job_201308191125_0572
07:11:05,728 ERROR myScheduler-2 security.UserGroupInformation - PriviledgedActionException as:bamboo cause:org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory /ide-test/runner/output-4 already exists
07:11:05,728 WARN myScheduler-2 mapreduce.JobRunner - Cannot start job [custom-jar-job]
org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory /ide-test/runner/output-4 already exists
at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:137)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:973)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:550)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
at org.springframework.data.hadoop.mapreduce.JobExecutor$2.run(JobExecutor.java:201)
at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:48)
at org.springframework.data.hadoop.mapreduce.JobExecutor.startJobs(JobExecutor.java:172)
at org.springframework.data.hadoop.mapreduce.JobExecutor.startJobs(JobExecutor.java:164)
at org.springframework.data.hadoop.mapreduce.JobRunner.call(JobRunner.java:52)
at org.springframework.data.hadoop.mapreduce.JobRunner.call(JobRunner.java:33)
at org.springframework.data.hadoop.mapreduce.JobRunner.invoke(JobRunner.java:88)
at org.springframework.data.hadoop.mapreduce.JobRunner.call(JobRunner.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.springframework.util.MethodInvoker.invoke(MethodInvoker.java:273)
at org.springframework.scheduling.support.MethodInvokingRunnable.run(MethodInvokingRunnable.java:65)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:51)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:679)
07:11:05,730 ERROR myScheduler-2 support.MethodInvokingRunnable - Invocation of method 'call' on target class [class org.springframework.data.hadoop.mapreduce.JobRunner] failed
java.lang.IllegalStateException: org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory /ide-test/runner/output-4 already exists
at org.springframework.data.hadoop.mapreduce.JobExecutor$2.run(JobExecutor.java:213)
at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:48)
at org.springframework.data.hadoop.mapreduce.JobExecutor.startJobs(JobExecutor.java:172)
at org.springframework.data.hadoop.mapreduce.JobExecutor.startJobs(JobExecutor.java:164)
at org.springframework.data.hadoop.mapreduce.JobRunner.call(JobRunner.java:52)
at org.springframework.data.hadoop.mapreduce.JobRunner.call(JobRunner.java:33)
at org.springframework.data.hadoop.mapreduce.JobRunner.invoke(JobRunner.java:88)
at org.springframework.data.hadoop.mapreduce.JobRunner.call(JobRunner.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.springframework.util.MethodInvoker.invoke(MethodInvoker.java:273)
at org.springframework.scheduling.support.MethodInvokingRunnable.run(MethodInvokingRunnable.java:65)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:51)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:679)
Caused by: org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory /ide-test/runner/output-4 already exists
at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:137)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:973)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:550)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
at org.springframework.data.hadoop.mapreduce.JobExecutor$2.run(JobExecutor.java:201)
... 22 more
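Note: the scheduled job [custom-jar-job] fails above because its output directory /ide-test/runner/output-4 is left over from an earlier run, and FileOutputFormat refuses to write into an existing output path. A common remedy (the test itself may intend this failure) is to clear the output directory before the job is submitted, either through one of the HdfsScriptRunner / script-tasklet beans already defined in this context or programmatically. The following is a minimal illustrative sketch using the plain Hadoop client API, not part of the test configuration; the class name, method name, and the hard-coded path (taken from the log) are assumptions.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class OutputDirCleaner {

        // Deletes the job's output directory if it already exists, so that
        // FileOutputFormat.checkOutputSpecs() does not reject the submission.
        public static void clean(Configuration conf, String outputDir) throws IOException {
            Path output = new Path(outputDir);            // e.g. "/ide-test/runner/output-4" (path seen in the log)
            FileSystem fs = output.getFileSystem(conf);   // resolves to HDFS for an hdfs:// default FS
            if (fs.exists(output)) {
                fs.delete(output, true);                  // recursive delete of the stale output from the previous run
            }
        }
    }

Invoked before JobRunner submits [custom-jar-job], a step like this would avoid the FileAlreadyExistsException shown above; the same cleanup could equally be expressed as an HDFS script executed by the existing scripting beans.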
07:11:13,428 INFO Test worker mapred.JobClient - map 100% reduce 33%
07:11:14,431 INFO Test worker mapred.JobClient - map 100% reduce 100%
07:11:15,435 INFO Test worker mapred.JobClient - Job complete: job_201308191125_0571
07:11:15,437 INFO Test worker mapred.JobClient - Counters: 29
07:11:15,437 INFO Test worker mapred.JobClient -   Job Counters
07:11:15,437 INFO Test worker mapred.JobClient -     Launched reduce tasks=1
07:11:15,438 INFO Test worker mapred.JobClient -     SLOTS_MILLIS_MAPS=4449
07:11:15,438 INFO Test worker mapred.JobClient -     Total time spent by all reduces waiting after reserving slots (ms)=0
07:11:15,438 INFO Test worker mapred.JobClient -     Total time spent by all maps waiting after reserving slots (ms)=0
07:11:15,438 INFO Test worker mapred.JobClient -     Launched map tasks=1
07:11:15,438 INFO Test worker mapred.JobClient -     Data-local map tasks=1
07:11:15,438 INFO Test worker mapred.JobClient -     SLOTS_MILLIS_REDUCES=9276
07:11:15,438 INFO Test worker mapred.JobClient -   File Output Format Counters
07:11:15,438 INFO Test worker mapred.JobClient -     Bytes Written=38
07:11:15,438 INFO Test worker mapred.JobClient -   FileSystemCounters
07:11:15,438 INFO Test worker mapred.JobClient -     FILE_BYTES_READ=64
07:11:15,438 INFO Test worker mapred.JobClient -     HDFS_BYTES_READ=150
07:11:15,438 INFO Test worker mapred.JobClient -     FILE_BYTES_WRITTEN=105559
07:11:15,439 INFO Test worker mapred.JobClient -     HDFS_BYTES_WRITTEN=38
07:11:15,439 INFO Test worker mapred.JobClient -   File Input Format Counters
07:11:15,439 INFO Test worker mapred.JobClient -     Bytes Read=28
07:11:15,439 INFO Test worker mapred.JobClient -   Map-Reduce Framework
07:11:15,439 INFO Test worker mapred.JobClient -     Map output materialized bytes=64
07:11:15,439 INFO Test worker mapred.JobClient -     Map input records=3
07:11:15,439 INFO Test worker mapred.JobClient -     Reduce shuffle bytes=64
07:11:15,441 INFO Test worker mapred.JobClient -     Spilled Records=10
07:11:15,441 INFO Test worker mapred.JobClient -     Map output bytes=48
07:11:15,441 INFO Test worker mapred.JobClient -     CPU time spent (ms)=1460
07:11:15,441 INFO Test worker mapred.JobClient -     Total committed heap usage (bytes)=239075328
07:11:15,441 INFO Test worker mapred.JobClient -     Combine input records=0
07:11:15,441 INFO Test worker mapred.JobClient -     SPLIT_RAW_BYTES=122
07:11:15,441 INFO Test worker mapred.JobClient -     Reduce input records=5
07:11:15,441 INFO Test worker mapred.JobClient -     Reduce input groups=5
07:11:15,442 INFO Test worker mapred.JobClient -     Combine output records=0
07:11:15,442 INFO Test worker mapred.JobClient -     Physical memory (bytes) snapshot=324255744
07:11:15,442 INFO Test worker mapred.JobClient -     Reduce output records=5
07:11:15,442 INFO Test worker mapred.JobClient -     Virtual memory (bytes) snapshot=1704382464
07:11:15,442 INFO Test worker mapred.JobClient -     Map output records=5
07:11:15,444 INFO Test worker mapreduce.JobTasklet - Completed job [mr-job]
07:11:15,483 INFO Test worker support.SimpleJobLauncher - Job: [FlowJob: [name=mainJob]] completed with the following parameters: [{}] and the following status: [COMPLETED]
07:11:15,483 INFO Test worker support.GenericXmlApplicationContext - Closing org.springframework.context.support.GenericXmlApplicationContext@10139edf: startup date [Thu Sep 05 07:11:00 PDT 2013]; root of context hierarchy
07:11:15,483 INFO Test worker support.DefaultListableBeanFactory - Destroying singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@1b80fd99: defining beans [jobRepository,transactionManager,jobLauncher,taskExecutor,ppc,hadoopFs,hadoopResourceLoader,hadoopConfiguration,cfg-init,fs-init,rl-init,org.springframework.data.hadoop.scripting.HdfsScriptRunner#0,org.springframework.batch.core.scope.internalStepScope,org.springframework.beans.factory.config.CustomEditorConfigurer,org.springframework.batch.core.configuration.xml.CoreNamespacePostProcessor,import,do,mainJob,file-reader,hadoop-tasklet,mr-job,org.springframework.data.hadoop.scripting.HdfsScriptRunner#1,script-tasklet]; root of factory hierarchy