I want to read a CSV file with the latest Apache Spark version (i.e. 2.2.1) on Windows 7 via cmd, but I can't because of a problem with metastore_db. I tried the following steps:
1. spark-shell --packages com.databricks:spark-csv_2.11:1.5.0 // since my Scala version is 2.11
2. val df = spark.read.format("csv").option("header", "true").option("mode", "DROPMALFORMED").load("file:///D:/ResourceData.csv") // in the latest versions we use the SparkSession variable `spark` instead of sqlContext
But it throws the following error:
Caused by: org.apache.derby.iapi.error.StandardException: Failed to start database 'metastore_db' with class loader org.apache.spark.sql.hive.client.IsolatedClientLoader
Caused by: org.apache.derby.iapi.error.StandardException: Another instance of Derby may have already booted the database
I can read the CSV in version 1.6, but I want to do it in the latest version. Can anyone help me with this? I've been stuck for days.
Open the Spark shell
spark-shell
Create an SQLContext from the Spark context and assign it to the sqlContext variable
val sqlContext = new org.apache.spark.sql.SQLContext(sc) // the Spark context is available as 'sc'
Read the CSV file according to your requirements
val bhaskar = sqlContext.read.format("csv")
.option("header", "true")
.option("inferSchema", "true")
.load("/home/burdwan/Desktop/bhaskar.csv") // with a wildcard, e.g. ...Desktop/*.csv, a single load can read multiple csv files
Collect the DataFrame and print each row
bhaskar.collect.foreach(println)
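Note that the SQLContext detour above is the Spark 1.x style. In Spark 2.x, which the asker wants, spark-shell already provides a SparkSession named `spark`, and the csv data source is built in, so neither the spark-csv package nor a hand-built SQLContext is needed. A minimal sketch, reusing the asker's Windows path (adjust it to your own file):

```scala
// Spark 2.x: `spark` (a SparkSession) is predefined in spark-shell;
// the csv data source ships with Spark, so no --packages flag is needed.
val df = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .option("mode", "DROPMALFORMED") // skip malformed rows, as in the question
  .csv("file:///D:/ResourceData.csv")

df.show() // preview a few rows instead of collecting everything
```

`df.show()` brings only a small sample to the driver; `collect` pulls the whole dataset into driver memory, which can fail on large files.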
Output
_a1 _a2 Cn clr clarity depth aprx price x y z
1 0.23 Ideal E SI2 61.5 55 326 3.95 3.98 2.43
2 0.21 Premium E SI1 59.8 61 326 3.89 3.84 2.31
3 0.23 Good E VS1 56.9 65 327 4.05 4.07 2.31
4 0.29 Premium I VS2 62.4 58 334 4.2 4.23 2.63
5 0.31 Good J SI2 63.3 58 335 4.34 4.35 2.75
6 0.24 Good J VVS2 63 57 336 3.94 3.96 2.48
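As for the original Derby error: "Another instance of Derby may have already booted the database" usually means a second spark-shell is still running, or a crashed session left stale lock files in the embedded metastore. With every Spark session closed, removing those locks normally lets the next spark-shell start cleanly. A hedged sketch in plain Scala, assuming the default `metastore_db` directory sits in the directory where spark-shell was launched:

```scala
import java.nio.file.{Files, Paths}

// Derby keeps its lock files inside the database directory; stale copies
// survive a crashed JVM and block the next spark-shell from booting the
// metastore. Only run this after closing all Spark sessions.
val locks = Seq(
  Paths.get("metastore_db", "db.lck"),
  Paths.get("metastore_db", "dbex.lck")
)
locks.foreach { p =>
  if (Files.deleteIfExists(p)) println(s"removed stale lock: $p")
}
```

If the locks reappear immediately, another JVM still holds the database; find and stop that process first.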