Using a list comprehension with Spark's `array`
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.getOrCreate()

a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
df = spark.createDataFrame([('a b c d e f g h i j ',)], ['col1'])
df = df.withColumn("NewColumn", F.array([F.lit(x) for x in a]))
df.show(truncate=False)
df.printSchema()
# +--------------------+-------------------------------+
# |col1 |NewColumn |
# +--------------------+-------------------------------+
# |a b c d e f g h i j |[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]|
# +--------------------+-------------------------------+
# root
# |-- col1: string (nullable = true)
# |-- NewColumn: array (nullable = false)
# | |-- element: integer (containsNull = false)
@pault commented (Python 2.7) that you can hide the loop by using `map`:
df.withColumn("NewColumn", F.array(map(F.lit, a)))
And @abegehr's Python 3 version:
df.withColumn("NewColumn", F.array(*map(F.lit, a)))
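The `*` matters because Python 3's `map` returns a lazy iterator rather than a list, so it has to be unpacked into separate arguments. A plain-Python sketch of the difference (no Spark required; `takes_varargs` is just a stand-in for a varargs function like `F.array`):

```python
a = [1, 2, 3]

# In Python 3, map() yields a lazy iterator, not a list.
m = map(str, a)
print(type(m).__name__)  # 'map'

# A stand-in for a *cols-style function such as F.array:
def takes_varargs(*args):
    return list(args)

# Unpacking with * passes each mapped element as a separate argument.
print(takes_varargs(*map(str, a)))  # ['1', '2', '3']
```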
Using a Spark `udf`
import pyspark.sql.types as T

# Defining the UDF
def arrayUdf():
    return a

callArrayUdf = F.udf(arrayUdf, T.ArrayType(T.IntegerType()))

# Calling the UDF
df = df.withColumn("NewColumn", callArrayUdf())
The output is the same.