FetchFailedException
FetchFailedException may be thrown when a task runs (and ShuffleBlockFetcherIterator did not manage to fetch shuffle blocks).

FetchFailedException contains the following:
- the unique identifier of a BlockManager (as BlockManagerId)
- shuffleId
- mapId
- reduceId
- a short exception message
- cause (the root Throwable object)
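The fields above can be sketched as a simplified, self-contained model. Note that this is not the real Spark class: the actual FetchFailedException lives in org.apache.spark.shuffle and its exact constructor signature differs across Spark versions, so the code below is only an illustration of what information the exception carries.

```scala
// Simplified stand-in for Spark's FetchFailedException -- illustrative only.
case class BlockManagerId(executorId: String, host: String, port: Int)

class FetchFailedException(
    val bmAddress: BlockManagerId, // BlockManager that hosted the shuffle blocks
    val shuffleId: Int,            // shuffle the blocks belong to
    val mapId: Int,                // map output that produced the blocks
    val reduceId: Int,             // reduce partition that requested the blocks
    message: String,               // short exception message
    cause: Throwable = null)       // root Throwable, if any
  extends Exception(message, cause)

val e = new FetchFailedException(
  BlockManagerId("exec-1", "host-1", 7337),
  shuffleId = 0, mapId = 1, reduceId = 2,
  message = "Failed to fetch shuffle block")
println(s"bm=${e.bmAddress.executorId} shuffle=${e.shuffleId} map=${e.mapId} reduce=${e.reduceId}")
```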
When FetchFailedException is reported, TaskRunner catches it and notifies ExecutorBackend (with TaskState.FAILED task state).
The root cause of the FetchFailedException is usually that the executor (with the BlockManager hosting the shuffle blocks) is lost (i.e. no longer available) because:

- an OutOfMemoryError was thrown (aka OOMed) or some other unhandled exception occurred
- the cluster manager that manages the workers with the executors of your Spark application, e.g. Hadoop YARN, enforced the container memory limits and eventually decided to kill the executor due to excessive memory usage
You should then review the logs of the Spark application using the web UI, Spark History Server, or cluster-specific tools, e.g. yarn logs -applicationId for Hadoop YARN.

A solution is usually to tune the memory of your Spark application.
Caution: FIXME Image with the call to ExecutorBackend.
toTaskFailedReason Method
Caution: FIXME