Task

Task (aka command) is the smallest individual unit of execution that is launched to compute an RDD partition.

Figure 1. Tasks correspond to partitions in RDD

A task is described by the Task contract with a single runTask method to run it and optional placement preferences to place the computation on the right executors.

There are two concrete implementations of Task contract:

  • ShuffleMapTask that executes a task and divides the task’s output into multiple buckets (based on the task’s partitioner).

  • ResultTask that executes a task and sends the task’s output back to the driver application.

The very last stage in a Spark job consists of multiple ResultTasks, while earlier stages consist of ShuffleMapTasks only.

Caution
FIXME You could have a Spark job with ShuffleMapTask being the last.

In other (more technical) words, a task is a computation on the records in an RDD partition in a stage of an RDD in a Spark job.
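
For example, the following two-stage job runs ShuffleMapTasks for the shuffle map stage that reduceByKey introduces, and ResultTasks for the final stage (the input file name is only an example):

```scala
// sc is an existing SparkContext; "README.md" is just a sample input file.
val counts = sc.textFile("README.md")
  .flatMap(_.split("\\s+"))
  .map(word => (word, 1))
  .reduceByKey(_ + _) // stage boundary: ShuffleMapTasks end here

// The last stage's ResultTasks compute the final partitions and send
// their results back to the driver.
counts.collect()
```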

Note
T is the type defined when a Task is created.
Table 1. Task Internal Registries and Counters

| Name | Description |
| --- | --- |
| context | Used when ??? |
| epoch | Set for a Task when TaskSetManager is created and later used when TaskRunner runs and when DAGScheduler handles a ShuffleMapTask successful completion. |
| _executorDeserializeTime | Used when ??? |
| _executorDeserializeCpuTime | Used when ??? |
| _killed | Used when ??? |
| metrics | TaskMetrics created lazily when Task is created from serializedTaskMetrics. Used when ??? |
| taskMemoryManager | TaskMemoryManager that manages the memory allocated by the task. Used when ??? |
| taskThread | Used when ??? |

A task can only belong to one stage and operate on a single partition. All tasks in a stage must be completed before the stages that follow can start.

Tasks are spawned for every stage, one task per partition.

Caution
FIXME What are stageAttemptId and taskAttemptId?

Task Contract

def runTask(context: TaskContext): T
def preferredLocations: Seq[TaskLocation] = Nil
Note
Task is a private[spark] contract.
Table 2. Task Contract

| Method | Description |
| --- | --- |
| runTask | Used when a task runs. |
| preferredLocations | Collection of TaskLocations. Used exclusively when TaskSetManager registers a task as pending execution and dequeues speculative tasks (dequeueSpeculativeTask). Empty by default, i.e. no task location preferences are defined, so the task can be launched on any executor. Overridden by the concrete tasks, i.e. ShuffleMapTask and ResultTask. |
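
For illustration, a hypothetical implementation of the contract could look as follows. GreetingTask, its constructor arguments and the preferred host are made up for this example; since Task is a private[spark] contract, a real implementation would have to live inside Spark itself.

```scala
import org.apache.spark.TaskContext
import org.apache.spark.scheduler.{Task, TaskLocation}

// Hypothetical Task for illustration only; the real implementations are
// ShuffleMapTask and ResultTask. Constructor arguments are simplified.
class GreetingTask(stageId: Int, stageAttemptId: Int, partitionId: Int)
  extends Task[String](stageId, stageAttemptId, partitionId) {

  // The computation for this task's partition.
  override def runTask(context: TaskContext): String =
    s"Hello from partition ${context.partitionId()}"

  // Prefer an executor on a specific (made-up) host.
  override def preferredLocations: Seq[TaskLocation] =
    Seq(TaskLocation("host1"))
}
```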

Creating Task Instance

Task takes the following when created: stage ID, stage attempt ID, the ID of the partition to compute, local properties, serialized TaskMetrics (as Array[Byte]), and optionally the job ID, application ID and application attempt ID.

Task initializes the internal registries and counters.
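
A sketch of the constructor, following the Spark 2.x sources (default argument values and most members are elided; this is not the verbatim source):

```scala
import java.util.Properties

// Sketch of Task and its constructor (Spark 2.x); defaults and most
// members are elided for brevity.
private[spark] abstract class Task[T](
    val stageId: Int,
    val stageAttemptId: Int,
    val partitionId: Int,
    @transient var localProperties: Properties,
    serializedTaskMetrics: Array[Byte],
    val jobId: Option[Int] = None,
    val appId: Option[String] = None,
    val appAttemptId: Option[String] = None) extends Serializable {

  // The Task contract (see Table 2).
  def runTask(context: TaskContext): T
  def preferredLocations: Seq[TaskLocation] = Nil
}
```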

Running Task Thread — run Method

run(
  taskAttemptId: Long,
  attemptNumber: Int,
  metricsSystem: MetricsSystem): T

run creates a TaskContextImpl that in turn becomes the task’s TaskContext.

Note
run is a final method and so must not be overridden.

run checks the _killed flag and, if enabled, kills the task (with the interruptThread flag disabled).

run creates a Hadoop CallerContext and sets it.

Note
This is the moment when the custom Task's runTask is executed.

In the end, run notifies TaskContextImpl that the task has completed (regardless of the final outcome — a success or a failure).

In case of any exceptions, run notifies TaskContextImpl that the task has failed. run requests MemoryStore to release unroll memory for this task (for both ON_HEAP and OFF_HEAP memory modes).

Note
run uses SparkEnv to access the current BlockManager that it uses to access MemoryStore.
Note
run is used exclusively when TaskRunner starts. The Task instance has just been deserialized from taskBytes that were sent over the wire to an executor. localProperties and TaskMemoryManager are already assigned.
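
Putting the steps above together, run has roughly the following shape (a much-simplified sketch after the Spark 2.x sources; metrics handling, the Hadoop CallerContext and most error handling are omitted):

```scala
// Much-simplified sketch of Task.run (Spark 2.x); many details omitted.
final def run(
    taskAttemptId: Long,
    attemptNumber: Int,
    metricsSystem: MetricsSystem): T = {
  // The new TaskContextImpl becomes the task's (thread-local) TaskContext.
  context = new TaskContextImpl(stageId, partitionId, taskAttemptId,
    attemptNumber, taskMemoryManager, localProperties, metricsSystem, metrics)
  TaskContext.setTaskContext(context)
  taskThread = Thread.currentThread()

  // Honor a kill request that arrived before the task started.
  if (_killed) {
    kill(interruptThread = false)
  }

  try {
    runTask(context) // the custom Task's computation
  } catch {
    case t: Throwable =>
      context.markTaskFailed(t) // notify TaskContextImpl of the failure
      throw t
  } finally {
    context.markTaskCompleted() // always notify completion, success or failure
    // Release the unroll memory this task used in the MemoryStore.
    val memoryStore = SparkEnv.get.blockManager.memoryStore
    memoryStore.releaseUnrollMemoryForThisTask(MemoryMode.ON_HEAP)
    memoryStore.releaseUnrollMemoryForThisTask(MemoryMode.OFF_HEAP)
    TaskContext.unset()
  }
}
```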

Task States

A task can be in one of the following states (as described by TaskState enumeration):

  • LAUNCHING

  • RUNNING when the task is being started.

  • FINISHED when the task successfully finishes and returns the serialized result.

  • FAILED when the task fails, e.g. when FetchFailedException, CommitDeniedException or any Throwable occurs

  • KILLED when an executor kills a task.

  • LOST

States are the values of org.apache.spark.TaskState.

Note
Task status updates are sent from executors to the driver through ExecutorBackend.

A task is finished when it is in one of the FINISHED, FAILED, KILLED or LOST states.

LOST and FAILED states are considered failures.
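
For reference, a sketch of the enumeration and the two predicates above, mirroring org.apache.spark.TaskState in the Spark 2.x sources:

```scala
// Sketch mirroring org.apache.spark.TaskState (Spark 2.x); simplified.
object TaskState extends Enumeration {

  type TaskState = Value

  val LAUNCHING, RUNNING, FINISHED, FAILED, KILLED, LOST = Value

  // LOST and FAILED are the failure states.
  def isFailed(state: TaskState): Boolean =
    (LOST == state) || (FAILED == state)

  // A task is finished once it reaches any of these terminal states.
  def isFinished(state: TaskState): Boolean =
    Set(FINISHED, FAILED, KILLED, LOST).contains(state)
}
```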

Tip
Task states correspond to org.apache.mesos.Protos.TaskState.

Collect Latest Values of (Internal and External) Accumulators — collectAccumulatorUpdates Method

collectAccumulatorUpdates(taskFailed: Boolean = false): Seq[AccumulableInfo]

collectAccumulatorUpdates collects the latest values of internal and external accumulators from a task (and returns the values as a collection of AccumulableInfo).

Internally, collectAccumulatorUpdates takes TaskMetrics.

Note
collectAccumulatorUpdates uses TaskContextImpl to access the task’s TaskMetrics.

collectAccumulatorUpdates collects the latest values of:

  • internal accumulators (only those with non-zero values)

  • external accumulators (all of them, or only those that count values on failed tasks when taskFailed is enabled)

collectAccumulatorUpdates returns an empty collection when TaskContextImpl is not initialized.

Note
collectAccumulatorUpdates is used when TaskRunner runs a task (and sends a task’s final results back to the driver).
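
A sketch of the logic (simplified; the concrete accumulator types vary across Spark 2.x versions, and the sketch below follows the AccumulatorV2-based revisions):

```scala
// Sketch of Task.collectAccumulatorUpdates (Spark 2.x, simplified).
def collectAccumulatorUpdates(taskFailed: Boolean = false): Seq[AccumulatorV2[_, _]] = {
  if (context != null) {
    // Internal accumulators are reported only with non-zero values; external
    // accumulators are dropped on failure unless they count failed values.
    context.taskMetrics.nonZeroInternalAccums() ++
      context.taskMetrics.externalAccums.filter(a => !taskFailed || a.countFailedValues)
  } else {
    Seq.empty // TaskContextImpl is not initialized
  }
}
```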

Killing Task — kill Method

kill(interruptThread: Boolean)

kill marks the task to be killed, i.e. it sets the internal _killed flag to true.

kill calls TaskContextImpl.markInterrupted when context is set.

If interruptThread is enabled and the internal taskThread is available, kill interrupts it.
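
Put together, kill is roughly the following (a sketch after the Spark 2.x sources):

```scala
// Sketch of Task.kill (Spark 2.x, simplified).
def kill(interruptThread: Boolean): Unit = {
  _killed = true // mark the task as killed
  if (context != null) {
    context.markInterrupted() // tell the TaskContext it was interrupted
  }
  if (interruptThread && taskThread != null) {
    taskThread.interrupt() // interrupt the thread running the task
  }
}
```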

Caution
FIXME When could context and interruptThread not be set?
