How do I convert a simple DataFrame to a Spark Scala Dataset using a case class?

2024-03-07

I am trying to convert the simple DataFrame from the Spark SQL programming guide into a Dataset: https://spark.apache.org/docs/latest/sql-programming-guide.html

case class Person(name: String, age: Int)
import spark.implicits._

val path = "examples/src/main/resources/people.json"

val peopleDS = spark.read.json(path).as[Person]
peopleDS.show()

But I get the following exception:

Exception in thread "main" org.apache.spark.sql.AnalysisException: Cannot up cast `age` from bigint to int as it may truncate
The type path of the target object is:
- field (class: "scala.Int", name: "age")
- root class: ....

Can anyone help me?

EDIT: I noticed that using Long instead of Int works! Why is that?

Also:

val primitiveDS = Seq(1,2,3).toDS()
val augmentedDS = primitiveDS.map(i => ("var_" + i.toString, (i + 1).toLong))
augmentedDS.show()

augmentedDS.as[Person].show()

Prints:

+-----+---+
|   _1| _2|
+-----+---+
|var_1|  2|
|var_2|  3|
|var_3|  4|
+-----+---+

Exception in thread "main" org.apache.spark.sql.AnalysisException: cannot resolve '`name`' given input columns: [_1, _2];

Can anyone help me understand what is going on here?


If you change Int to Long (or BigInt), it works fine:

case class Person(name: String, age: Long)
import spark.implicits._

val path = "examples/src/main/resources/people.json"

val peopleDS = spark.read.json(path).as[Person]
peopleDS.show()

Output:

+----+-------+
| age|   name|
+----+-------+
|null|Michael|
|  30|   Andy|
|  19| Justin|
+----+-------+

EDIT: spark.read.json parses numbers as Long by default, since that is the safer choice. You can change the column type afterwards with a cast or a UDF.
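
For example, a minimal sketch of the cast approach, using a hypothetical PersonInt case class that keeps the Int field from the original question (assumes the same spark session and path as above):

import org.apache.spark.sql.functions.col

// Hypothetical variant of the case class that keeps Int, for illustration.
case class PersonInt(name: String, age: Int)

// Cast the inferred bigint column down to int before converting.
// Values outside the Int range would be truncated here, which is exactly
// what the "Cannot up cast" analysis check guards against.
val castDS = spark.read.json(path)
  .withColumn("age", col("age").cast("int"))
  .as[PersonInt]
castDS.show()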

EDIT2:

To answer the second question, you need to name the columns correctly before converting to Person:

val primitiveDS = Seq(1, 2, 3).toDS()
val augmentedDS = primitiveDS
  .map(i => ("var_" + i.toString, (i + 1).toLong))
  .withColumnRenamed("_1", "name")
  .withColumnRenamed("_2", "age")
augmentedDS.as[Person].show()

Outputs:

+-----+---+
| name|age|
+-----+---+
|var_1|  2|
|var_2|  3|
|var_3|  4|
+-----+---+
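
Alternatively, you can skip the renaming step entirely by mapping straight to the case class, since the encoder then names the columns after the fields. A sketch, assuming the same Person(name: String, age: Long) as above:

// Mapping directly to Person yields columns "name" and "age" with no renaming.
val directDS = Seq(1, 2, 3).toDS()
  .map(i => Person("var_" + i.toString, (i + 1).toLong))
directDS.show()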