First - since LabeledPoint expects its features to be a Vector of Doubles, I'm assuming you also want to split each element of each features array on the colon (:) and treat its right-hand side as a Double, e.g.:
"d90_pv_1sec:1.4471580313422192" --> 1.4471580313422192
If so - here's the transformation:
import scala.collection.mutable
import org.apache.spark.mllib.linalg.{Vector, Vectors}
import org.apache.spark.mllib.regression.LabeledPoint
// sample data - DataFrame with label, features and other columns
val df = Seq(
  (1, Array("d90_pv_1sec:1.4471580313422192", "d3_pv_1sec:0.9030899869919435"), 4.0),
  (2, Array("d7_pv_1sec:0.9030899869919435", "d30_pv_1sec:1.414973347970818"), 5.0)
).toDF("label", "features", "ignored")
// extract relevant fields from Row and convert WrappedArray[String] into Vector:
val result = df.rdd.map(r => {
  val label = r.getAs[Int]("label")
  val featuresArray = r.getAs[mutable.WrappedArray[String]]("features")
  val features: Vector = Vectors.dense(
    featuresArray.map(_.split(":")(1).toDouble).toArray
  )
  LabeledPoint(label, features)
})
result.foreach(println)
// (1.0,[1.4471580313422192,0.9030899869919435])
// (2.0,[0.9030899869919435,1.414973347970818])
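As a side note, the parsing step above assumes every item is well-formed ("name:value"). A more defensive variant (just a sketch; parseValueOpt is a hypothetical helper, not part of the original answer) could skip malformed items instead of throwing:

// Sketch: return None for items without exactly one ":" or with a
// non-numeric right-hand side, instead of throwing at runtime.
def parseValueOpt(item: String): Option[Double] =
  item.split(":") match {
    case Array(_, d) => scala.util.Try(d.toDouble).toOption
    case _           => None
  }

// drop-in replacement for the Vectors.dense(...) call above:
val features: Vector = Vectors.dense(
  featuresArray.flatMap(parseValueOpt).toArray
)

Keep in mind that silently dropping items means different rows may produce dense vectors of different lengths, so failing fast (as the original code does) may be the safer default here.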
EDIT: per the clarification - now assuming each item in the input array also contains the expected index of its value in the resulting sparse vector:
"d90_pv_1sec:1.4471580313422192" --> index = 90; value = 1.4471580313422192
The modified code would be:
val vectorSize: Int = 100 // just a guess - should be the maximum index + 1
val result = df.rdd.map(r => {
  val label = r.getAs[Int]("label")
  val arr = r.getAs[mutable.WrappedArray[String]]("features").toArray
  // parse each item into an (index, value) tuple to use in the sparse vector
  val elements = arr.map(_.split(":")).map {
    case Array(s, d) => (s.replaceAll("d|_pv_1sec", "").toInt, d.toDouble)
  }
  LabeledPoint(label, Vectors.sparse(vectorSize, elements))
})
result.foreach(println)
// (1.0,(100,[3,90],[0.9030899869919435,1.4471580313422192]))
// (2.0,(100,[7,30],[0.9030899869919435,1.414973347970818]))
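If guessing vectorSize is undesirable, one option - at the cost of an extra pass over the data - is to compute it as the maximum parsed index plus one. A sketch (parseIndex is a hypothetical helper mirroring the replaceAll call above, and it reuses the imports from the first snippet):

// Sketch: derive vectorSize from the data instead of hard-coding 100.
def parseIndex(item: String): Int =
  item.split(":")(0).replaceAll("d|_pv_1sec", "").toInt

val vectorSize: Int = df.rdd
  .flatMap(_.getAs[mutable.WrappedArray[String]]("features"))
  .map(parseIndex)
  .max() + 1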
NOTE: using s.replaceAll("d|_pv_1sec", "") might be somewhat slow, as it compiles a regular expression separately for each item. If that becomes a bottleneck, it can be replaced by the faster (yet uglier) s.replace("d", "").replace("_pv_1sec", ""), which doesn't use regular expressions.
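A third option (my own suggestion, not part of the original answer) keeps the regex but precompiles it once, outside the map, so it isn't recompiled per item. A sketch, with NamePattern as a hypothetical name:

// Sketch: compile the pattern once and reuse it inside the closure.
// scala.util.matching.Regex is serializable, so Spark can ship it to
// the executors along with the task.
val NamePattern = "d|_pv_1sec".r

val elements = arr.map(_.split(":")).map {
  case Array(s, d) => (NamePattern.replaceAllIn(s, "").toInt, d.toDouble)
}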