You can collect the V1, V2 and V3 columns into a struct, pass it to a udf function together with the top column, and extract the value as follows.

Scala
import org.apache.spark.sql.Row
import org.apache.spark.sql.functions._

// A struct column arrives in the udf as a Row; look up the field named by `top`
def findValueUdf = udf((strct: Row, top: String) => strct.getAs[String](top))

df.withColumn("top_value", findValueUdf(struct("V1", "V2", "V3"), col("top")))
This should give you
+------+------+---+---+---+---+---------+
|Src_ip|dst_ip|V1 |V2 |V3 |top|top_value|
+------+------+---+---+---+---+---------+
|A |B |xx |yy |zz |V1 |xx |
+------+------+---+---+---+---+---------+
PySpark

The equivalent code in PySpark is
from pyspark.sql import functions as f
from pyspark.sql import types as t
def findValueUdf(strct, top):
    return strct[top]
FVUdf = f.udf(findValueUdf, t.StringType())
df.withColumn("top_value", FVUdf(f.struct("V1", "V2", "V3"), f.col("top")))
Additionally, you can define the column names to pass to the struct function as a list, so that you don't have to hard-code them.
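For illustration, the lookup the UDF performs is plain field-name access (the struct column arrives in the UDF as a Row, which supports `strct[top]` just like a dict). A minimal Spark-free sketch of that logic, with the column names kept in a list rather than hard-coded; the names and sample values here are only illustrative:

```python
# Column names defined once in a list, instead of hard-coding them
# at each call site (in Spark you would pass them as f.struct(*cols)).
cols = ["V1", "V2", "V3"]

# Stand-in for the Row the UDF receives; a Row supports the same
# field-name lookup that a dict does.
row = {"V1": "xx", "V2": "yy", "V3": "zz"}

def find_value(strct, top):
    # Same body as the PySpark UDF: pick the field named by `top`.
    return strct[top]

print(find_value(row, "V1"))  # xx
```

In the DataFrame code, the only change is calling `f.struct(*cols)` instead of `f.struct("V1", "V2", "V3")`.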
I hope the answer is helpful.