public abstract class JobExecutor
extends java.lang.Object
implements org.springframework.beans.factory.InitializingBean, org.springframework.beans.factory.DisposableBean, org.springframework.beans.factory.BeanFactoryAware
Modifier and Type | Class and Description
---|---
protected static interface | JobExecutor.JobListener
Modifier and Type | Field and Description
---|---
protected org.apache.commons.logging.Log | log
Constructor and Description
---
JobExecutor()
Modifier and Type | Method and Description
---|---
void | afterPropertiesSet()
void | destroy()
protected java.util.Collection&lt;org.apache.hadoop.mapreduce.Job&gt; | findJobs()
boolean | isKillJobsAtShutdown() Indicates whether the configured jobs should be 'killed' when the application shuts down or not.
boolean | isVerbose() Indicates whether the job execution is verbose (the default) or not.
boolean | isWaitForCompletion() Indicates whether the 'runner' should wait for the job to complete (default).
void | setBeanFactory(org.springframework.beans.factory.BeanFactory beanFactory)
void | setExecutor(java.util.concurrent.Executor executor) Sets the TaskExecutor used for executing the Hadoop job.
void | setJob(org.apache.hadoop.mapreduce.Job job) Sets the job to execute.
void | setJobNames(java.lang.String... jobName) Sets the jobs to execute by (bean) name.
void | setJobs(java.util.Collection&lt;org.apache.hadoop.mapreduce.Job&gt; jobs) Sets the jobs to execute.
void | setKillJobAtShutdown(boolean killJobsAtShutdown) Indicates whether the configured jobs should be 'killed' when the application shuts down (default) or not.
void | setVerbose(boolean verbose) Indicates whether the job execution is verbose (the default) or not.
void | setWaitForCompletion(boolean waitForJob) Indicates whether the 'runner' should wait for the job to complete (default) after submission or not.
protected java.util.Collection&lt;org.apache.hadoop.mapreduce.Job&gt; | startJobs()
protected java.util.Collection&lt;org.apache.hadoop.mapreduce.Job&gt; | startJobs(JobExecutor.JobListener listener)
protected java.util.Collection&lt;org.apache.hadoop.mapreduce.Job&gt; | stopJobs() Stops the running jobs.
protected java.util.Collection&lt;org.apache.hadoop.mapreduce.Job&gt; | stopJobs(JobExecutor.JobListener listener) Stops the running jobs.
public void afterPropertiesSet() throws java.lang.Exception

Specified by:
afterPropertiesSet in interface org.springframework.beans.factory.InitializingBean
Throws:
java.lang.Exception
public void destroy() throws java.lang.Exception

Specified by:
destroy in interface org.springframework.beans.factory.DisposableBean
Throws:
java.lang.Exception
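JobExecutor participates in the standard Spring bean lifecycle through these two callbacks. The sketch below uses stand-in interfaces (`Initializing` and `Disposable` are illustrative, not Spring's actual classes) to show the order in which a container invokes them:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class LifecycleSketch {

    // Stand-in lifecycle interfaces mirroring (not taken from) Spring's
    // InitializingBean and DisposableBean contracts.
    interface Initializing { void afterPropertiesSet() throws Exception; }
    interface Disposable   { void destroy() throws Exception; }

    static Deque<String> run() throws Exception {
        Deque<String> calls = new ArrayDeque<>();

        // A bean implementing both callbacks, as JobExecutor does.
        class Bean implements Initializing, Disposable {
            public void afterPropertiesSet() { calls.add("init"); }
            public void destroy()            { calls.add("destroy"); }
        }

        Bean bean = new Bean();
        bean.afterPropertiesSet();  // the container invokes this once properties are set
        bean.destroy();             // the container invokes this at shutdown
        return calls;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run()); // prints [init, destroy]
    }
}
```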
protected java.util.Collection&lt;org.apache.hadoop.mapreduce.Job&gt; stopJobs()

Stops the running jobs.

protected java.util.Collection&lt;org.apache.hadoop.mapreduce.Job&gt; stopJobs(JobExecutor.JobListener listener)

Stops the running jobs.

Parameters:
listener - job listener

protected java.util.Collection&lt;org.apache.hadoop.mapreduce.Job&gt; startJobs()

protected java.util.Collection&lt;org.apache.hadoop.mapreduce.Job&gt; startJobs(JobExecutor.JobListener listener)

protected java.util.Collection&lt;org.apache.hadoop.mapreduce.Job&gt; findJobs()
public void setJob(org.apache.hadoop.mapreduce.Job job)

Sets the job to execute.

Parameters:
job - The job to execute.

public void setJobs(java.util.Collection&lt;org.apache.hadoop.mapreduce.Job&gt; jobs)

Sets the jobs to execute.

Parameters:
jobs - The jobs to execute.

public void setJobNames(java.lang.String... jobName)

Sets the jobs to execute by (bean) name.

Parameters:
jobName - The names of the jobs to execute.

public boolean isWaitForCompletion()
public void setWaitForCompletion(boolean waitForJob)

Indicates whether the 'runner' should wait for the job to complete (default) after submission or not.

Parameters:
waitForJob - whether to wait for the job to complete or not.

public boolean isVerbose()

Indicates whether the job execution is verbose (the default) or not.

public void setVerbose(boolean verbose)

Parameters:
verbose - whether the job execution is verbose or not.

public void setBeanFactory(org.springframework.beans.factory.BeanFactory beanFactory) throws org.springframework.beans.BeansException

Specified by:
setBeanFactory in interface org.springframework.beans.factory.BeanFactoryAware
Throws:
org.springframework.beans.BeansException
public void setExecutor(java.util.concurrent.Executor executor)

Sets the TaskExecutor used for executing the Hadoop job. By default, SyncTaskExecutor is used, meaning the calling thread is used. While this replicates the Hadoop behavior, it prevents running jobs from being killed if the application shuts down. For fine-tuned control, a dedicated Executor is recommended.

Parameters:
executor - the task executor to use to execute the Hadoop job.

public boolean isKillJobsAtShutdown()
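The trade-off described above can be seen with plain java.util.concurrent: a dedicated executor runs the job on its own thread, leaving the shutdown thread free to cancel ("kill") it. This is a JDK-only sketch (no Spring or Hadoop classes); `ExecutorChoice` and the sleeping task are illustrative stand-ins for a real Hadoop job:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ExecutorChoice {

    public static boolean cancelOnDedicatedExecutor() throws Exception {
        // Dedicated executor: the job runs on its own thread, so the
        // shutdown thread stays free to cancel it. With a calling-thread
        // executor (the SyncTaskExecutor behavior), submit() would block
        // until the job finished and cancellation could never happen.
        ExecutorService dedicated = Executors.newSingleThreadExecutor();
        Future<?> job = dedicated.submit(() -> {
            try {
                Thread.sleep(60_000);          // stand-in for a long Hadoop job
            } catch (InterruptedException e) { // cancellation lands here
                Thread.currentThread().interrupt();
            }
        });

        boolean killed = job.cancel(true);     // interrupt the running "job"
        dedicated.shutdown();
        return killed;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("killed=" + cancelOnDedicatedExecutor()); // prints killed=true
    }
}
```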
public void setKillJobAtShutdown(boolean killJobsAtShutdown)

Indicates whether the configured jobs should be 'killed' when the application shuts down (default) or not. Note that if setWaitForCompletion(boolean) is true, this flag is considered to be true, as otherwise the application cannot shut down (since it has to keep waiting for the job).

Parameters:
killJobsAtShutdown - whether or not to kill the configured jobs when the application shuts down
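A minimal sketch of the flag interplay described above; `ShutdownFlags` is a hypothetical helper, not part of this API, and simply restates the rule that a runner which waits for its job must also be able to kill it at shutdown:

```java
public class ShutdownFlags {
    // Defaults as described in the docs above: wait for completion
    // and kill jobs at shutdown.
    private boolean waitForCompletion = true;
    private boolean killJobsAtShutdown = true;

    public void setWaitForCompletion(boolean waitForJob) { this.waitForCompletion = waitForJob; }
    public void setKillJobAtShutdown(boolean kill)       { this.killJobsAtShutdown = kill; }

    // When the runner waits for the job, shutdown must be able to kill
    // it (the waiting thread would otherwise block shutdown forever),
    // so the kill flag is treated as true in that case.
    public boolean effectiveKillAtShutdown() {
        return waitForCompletion || killJobsAtShutdown;
    }
}
```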