ExecutorSource

ExecutorSource is a Source of metrics for an Executor. It uses the executor's threadPool and Hadoop filesystem statistics to compute the gauges.

Note
Every executor has its own separate ExecutorSource that is registered when CoarseGrainedExecutorBackend receives a RegisteredExecutor.

The name of an ExecutorSource is executor.

Figure 1. ExecutorSource in JConsole (using Spark Standalone)
Table 1. ExecutorSource Gauges

| Gauge | Description |
|-------|-------------|
| threadpool.activeTasks | Approximate number of threads that are actively executing tasks. Uses ThreadPoolExecutor.getActiveCount(). |
| threadpool.completeTasks | Approximate total number of tasks that have completed execution. Uses ThreadPoolExecutor.getCompletedTaskCount(). |
| threadpool.currentPool_size | Current number of threads in the pool. Uses ThreadPoolExecutor.getPoolSize(). |
| threadpool.maxPool_size | Maximum allowed number of threads in the pool. Uses ThreadPoolExecutor.getMaximumPoolSize(). |
| filesystem.hdfs.read_bytes | Uses Hadoop's FileSystem.getAllStatistics() and getBytesRead(). |
| filesystem.hdfs.write_bytes | Uses Hadoop's FileSystem.getAllStatistics() and getBytesWritten(). |
| filesystem.hdfs.read_ops | Uses Hadoop's FileSystem.getAllStatistics() and getReadOps(). |
| filesystem.hdfs.largeRead_ops | Uses Hadoop's FileSystem.getAllStatistics() and getLargeReadOps(). |
| filesystem.hdfs.write_ops | Uses Hadoop's FileSystem.getAllStatistics() and getWriteOps(). |
| filesystem.file.read_bytes | The same as the hdfs counterpart but for the file scheme. |
| filesystem.file.write_bytes | The same as the hdfs counterpart but for the file scheme. |
| filesystem.file.read_ops | The same as the hdfs counterpart but for the file scheme. |
| filesystem.file.largeRead_ops | The same as the hdfs counterpart but for the file scheme. |
| filesystem.file.write_ops | The same as the hdfs counterpart but for the file scheme. |
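The four threadpool gauges simply delegate to the corresponding methods of java.util.concurrent.ThreadPoolExecutor. The following standalone sketch (not Spark code; the pool size and task bodies are made up for illustration) shows what each gauge would report at different points in a pool's lifetime:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ThreadPoolGauges {
    public static void main(String[] args) throws Exception {
        // A pool that allows up to 4 threads; threads are created on demand.
        ThreadPoolExecutor pool = (ThreadPoolExecutor) Executors.newFixedThreadPool(4);

        CountDownLatch started = new CountDownLatch(2);
        CountDownLatch release = new CountDownLatch(1);

        // Submit 2 tasks that block until released, so the "active" count is stable.
        for (int i = 0; i < 2; i++) {
            pool.submit(() -> {
                started.countDown();
                try { release.await(); } catch (InterruptedException ignored) { }
            });
        }
        started.await();  // both tasks are now running

        // What the threadpool.* gauges would read at this moment:
        System.out.println("activeTasks = " + pool.getActiveCount());       // 2 running tasks
        System.out.println("currentPool_size = " + pool.getPoolSize());     // 2 threads created so far
        System.out.println("maxPool_size = " + pool.getMaximumPoolSize());  // 4, the pool's limit

        release.countDown();  // let the tasks finish
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("completeTasks = " + pool.getCompletedTaskCount());  // 2
    }
}
```

Note that getActiveCount() and getCompletedTaskCount() are documented as approximate, because the pool's state can change while they are computed; the latches above only make this toy example deterministic.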
