Spark Source Code Series - DAGScheduler Execution

Conclusion

DAGScheduler -> runJob

def runJob[T, U](...): Unit = {
  val waiter = submitJob(rdd, func, partitions, callSite, resultHandler, properties)
  ...
  // Block until the job finishes
  ThreadUtils.awaitReady(waiter.completionFuture, Duration.Inf)
  ...
}
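So runJob hands the actual submission to submitJob and then parks the calling thread on waiter.completionFuture until the job succeeds or fails. A minimal sketch of this wait-on-a-future pattern, using an illustrative MiniJobWaiter class (not Spark's actual JobWaiter, which tracks per-partition results and forwards them to a result handler):

  import java.util.concurrent.atomic.AtomicInteger
  import scala.concurrent.{Await, Future, Promise}
  import scala.concurrent.duration.Duration

  // Illustrative stand-in for JobWaiter: completes its promise once
  // every task has reported success.
  class MiniJobWaiter(totalTasks: Int) {
    private val finishedTasks = new AtomicInteger(0)
    private val promise = Promise[Unit]()

    def completionFuture: Future[Unit] = promise.future

    def taskSucceeded(): Unit =
      if (finishedTasks.incrementAndGet() == totalTasks) promise.success(())

    def jobFailed(e: Exception): Unit = promise.tryFailure(e)
  }

  object WaiterDemo extends App {
    val waiter = new MiniJobWaiter(totalTasks = 2)
    // Two "tasks" finish on another thread, as the scheduler backend would.
    new Thread(() => { waiter.taskSucceeded(); waiter.taskSucceeded() }).start()
    // Same idea as ThreadUtils.awaitReady: block the caller until done.
    Await.ready(waiter.completionFuture, Duration.Inf)
    println("job finished")
  }

Await.ready here plays the role of ThreadUtils.awaitReady: the thread that called runJob stays blocked while the event loop and task scheduler do the work on other threads.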

DAGScheduler -> submitJob

  1. Build a JobWaiter from this (the DAGScheduler), the newly generated jobId, and the number of partitions.
  2. Wrap the JobWaiter in a JobSubmitted event and post it to the EventLoop's blocking queue. The EventLoop runs an internal loop on its own thread that keeps taking events from the blocking queue and calls onReceive whenever one arrives (see the sketch after the code below).
  def submitJob[T, U](...): JobWaiter[U] = {
    ...
    val jobId = nextJobId.getAndIncrement()
    ...

    val waiter = new JobWaiter[U](this, jobId, partitions.size, resultHandler)
    // eventProcessLoop is backed by a blocking queue (LinkedBlockingDeque)
    eventProcessLoop.post(JobSubmitted(
      jobId, rdd, func2, partitions.toArray, callSite, waiter,
      Utils.cloneProperties(properties))) // end post
    waiter
  }
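The EventLoop mechanism from step 2 can be modeled as a daemon thread that blocks on a LinkedBlockingDeque and hands each event to onReceive. Below is a simplified MiniEventLoop sketch of that idea, not the real org.apache.spark.util.EventLoop, which adds error handling and lifecycle checks:

  import java.util.concurrent.LinkedBlockingDeque

  // Simplified model of Spark's EventLoop: one daemon thread loops
  // forever, taking events off a blocking deque and dispatching them.
  abstract class MiniEventLoop[E](name: String) {
    private val eventQueue = new LinkedBlockingDeque[E]()
    @volatile private var stopped = false

    private val eventThread = new Thread(name) {
      setDaemon(true)
      override def run(): Unit =
        try {
          while (!stopped) {
            onReceive(eventQueue.take()) // take() blocks until an event arrives
          }
        } catch {
          case _: InterruptedException => // stop() interrupts a blocked take()
        }
    }

    def start(): Unit = eventThread.start()
    def stop(): Unit = { stopped = true; eventThread.interrupt() }

    // post is what submitJob calls: it only enqueues and returns immediately.
    def post(event: E): Unit = eventQueue.put(event)

    protected def onReceive(event: E): Unit
  }

For example, a loop that just prints each event shows why submitJob returns right away:

  val loop = new MiniEventLoop[String]("demo-event-loop") {
    protected def onReceive(event: String): Unit = println(s"received: $event")
  }
  loop.start()
  loop.post("JobSubmitted") // returns at once; the loop thread does the work
  Thread.sleep(100)         // give the daemon thread time to print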