What are shuffle read and shuffle write in Apache Spark

2024-01-12

In the following screenshot of the Spark admin UI running on port 8080:

For this code, the "Shuffle Read" and "Shuffle Write" columns are always empty:

import org.apache.spark.SparkContext

object first {
  println("Welcome to the Scala worksheet")

  val conf = new org.apache.spark.SparkConf()
    .setMaster("local")
    .setAppName("distances")
    .setSparkHome("C:\\spark-1.1.0-bin-hadoop2.4\\spark-1.1.0-bin-hadoop2.4")
    .set("spark.executor.memory", "2g")
  val sc = new SparkContext(conf)

  def euclDistance(userA: User, userB: User) = {
    // Pairwise squared differences, summed, then square-rooted.
    val subElements = (userA.features zip userB.features) map {
      m => (m._1 - m._2) * (m._1 - m._2)
    }
    val summed = subElements.sum
    val sqRoot = Math.sqrt(summed)

    println("value is " + sqRoot)
    ((userA.name, userB.name), sqRoot)
  }

  case class User(name: String, features: Vector[Double])

  def createUser(data: String) = {
    val splitLine = data.split(",")
    val id = splitLine(0)

    // Everything after the id column is a feature value; the Nil case
    // guards against empty input lines.
    val distanceVector = (splitLine.toList match {
      case _ :: t => t
      case Nil    => Nil
    }).map(m => m.toDouble).toVector

    User(id, distanceVector)
  }

  val dataFile = sc.textFile("c:\\data\\example.txt")
  val users = dataFile.map(m => createUser(m))
  val cart = users.cartesian(users)
  val distances = cart.map(m => euclDistance(m._1, m._2))
  //> distances  : org.apache.spark.rdd.RDD[((String, String), Double)] = MappedR
  //| DD[4] at map at first.scala:46
  val d = distances.collect

  d.foreach(println) //> ((a,a),0.0)
  //| ((a,b),0.0)
  //| ((a,c),1.0)
  //| ((a,),0.0)
  //| ((b,a),0.0)
  //| ((b,b),0.0)
  //| ((b,c),1.0)
  //| ((b,),0.0)
  //| ((c,a),1.0)
  //| ((c,b),1.0)
  //| ((c,c),0.0)
  //| ((c,),0.0)
  //| ((,a),0.0)
  //| ((,b),0.0)
  //| ((,c),0.0)
  //| ((,),0.0)

}

Why are the "Shuffle Read" and "Shuffle Write" fields empty? Can the code above be tweaked to populate these fields, so that I can see how shuffle works?


A shuffle redistributes data across Spark stages. "Shuffle Write" is the sum of all serialized data written on all executors before transmitting (usually at the end of a stage), and "Shuffle Read" is the sum of all serialized data read on all executors at the start of a stage.
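As a minimal sketch of where the two metrics come from (the word-count job here is illustrative, not part of the question's code; only the local file path is reused), any wide operation such as reduceByKey splits a job into two stages at the exchange:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._ // pair-RDD implicits on Spark 1.1

object ShuffleDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setMaster("local").setAppName("shuffle-demo"))

    // Stage 1: each task serializes its (word, 1) pairs, partitioned
    // by key, to local disk -- this shows up as "Shuffle Write".
    val pairs = sc.textFile("c:\\data\\example.txt")
      .flatMap(_.split(","))
      .map(w => (w, 1))

    // Stage 2: reduceByKey fetches those serialized blocks from the
    // map-side tasks -- this shows up as "Shuffle Read".
    val counts = pairs.reduceByKey(_ + _)

    counts.collect().foreach(println)
    sc.stop()
  }
}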

Your program has only a single stage, triggered by the collect operation. No shuffling is required, because all you have is a chain of consecutive map operations, which are pipelined within that one stage.
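So yes, the code can be tweaked: appending any wide, key-based operation to distances (which is already a pair RDD) will populate both columns. One possible sketch (the re-keying step and the name byFirstUser are illustrative):

import org.apache.spark.SparkContext._ // pair-RDD implicits on Spark 1.1

// groupByKey is a wide dependency: Spark inserts a stage boundary here,
// so the UI reports Shuffle Write for the map stage and Shuffle Read
// for the stage that does the grouping.
val byFirstUser = distances
  .map { case ((a, b), dist) => (a, (b, dist)) } // re-key by first user
  .groupByKey()

byFirstUser.collect().foreach(println)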

Try taking a look at these slides: http://de.slideshare.net/colorant/spark-shuffle-introduction

Reading chapter 5 of the original paper may also help: http://people.csail.mit.edu/matei/papers/2012/nsdi_spark.pdf
