In PySpark you'll need either a udf or a map over an RDD. Let's use the first option. First, some imports:
from pyspark.ml.linalg import VectorUDT
from pyspark.sql.functions import udf
and a function:
as_ml = udf(lambda v: v.asML() if v is not None else None, VectorUDT())
With some example data:
from pyspark.mllib.linalg import Vectors as MLLibVectors
df = sc.parallelize([
(MLLibVectors.sparse(4, [0, 2], [1, -1]), ),
(MLLibVectors.dense([1, 2, 3, 4]), )
]).toDF(["features"])
result = df.withColumn("features", as_ml("features"))
the result is
+--------------------+
| features|
+--------------------+
|(4,[0,2],[1.0,-1.0])|
| [1.0,2.0,3.0,4.0]|
+--------------------+