pyspark.sql.utils.AnalysisException: Failed to find data source: kafka

2023-11-23

I am trying to read a stream from Kafka using pyspark. I am using Spark version 3.0.0-preview2 and spark-streaming-kafka-0-10_2.12. Before that, I simply started zookeeper and kafka and created a new topic:

/usr/local/kafka/bin/zookeeper-server-start.sh /usr/local/kafka/config/zookeeper.properties 
/usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties
/usr/local/kafka/bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic data_wm
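
To confirm the broker and topic are actually up before involving Spark, the standard Kafka command-line tools can be used, for example (assuming the same broker on localhost:9092):

/usr/local/kafka/bin/kafka-topics.sh --list --bootstrap-server localhost:9092
/usr/local/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic data_wm

The second command opens an interactive prompt; each line typed is sent as a message to data_wm.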

Here is my code:

import pandas as pd
import os
import findspark

# point findspark at the local Spark installation before importing pyspark
findspark.init("/usr/local/spark")
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("TestApp").getOrCreate()

# subscribe to the data_wm topic as a streaming source
df = spark \
  .readStream \
  .format("kafka") \
  .option("kafka.bootstrap.servers", "localhost:9092") \
  .option("subscribe", "data_wm") \
  .load()
value = df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

This is how I run the script:

sudo --preserve-env=pyspark /usr/local/spark/bin/pyspark --packages org.apache.spark:spark-streaming-kafka-0-10_2.12:3.0.0-preview

As a result of this command, I get this:

: resolving dependencies :: org.apache.spark#spark-submit-parent-0d7b2a8d-a860-4766-a4c7-141a902d8365;1.0
        confs: [default]
        found org.apache.spark#spark-streaming-kafka-0-10_2.12;3.0.0-preview in central
        found org.apache.spark#spark-token-provider-kafka-0-10_2.12;3.0.0-preview in central
        found org.apache.kafka#kafka-clients;2.3.1 in central
        found com.github.luben#zstd-jni;1.4.3-1 in central
        found org.lz4#lz4-java;1.6.0 in central
        found org.xerial.snappy#snappy-java;1.1.7.3 in central
        found org.slf4j#slf4j-api;1.7.16 in central
        found org.spark-project.spark#unused;1.0.0 in central
:: resolution report :: resolve 380ms :: artifacts dl 7ms
        :: modules in use:
        com.github.luben#zstd-jni;1.4.3-1 from central in [default]
        org.apache.kafka#kafka-clients;2.3.1 from central in [default]
        org.apache.spark#spark-streaming-kafka-0-10_2.12;3.0.0-preview from central in [default]
        org.apache.spark#spark-token-provider-kafka-0-10_2.12;3.0.0-preview from central in [default]
        org.lz4#lz4-java;1.6.0 from central in [default]
        org.slf4j#slf4j-api;1.7.16 from central in [default]
        org.spark-project.spark#unused;1.0.0 from central in [default]
        org.xerial.snappy#snappy-java;1.1.7.3 from central in [default]

But I always get this error:

>>> df = spark \
...   .readStream \
...   .format("kafka") \
...   .option("kafka.bootstrap.servers", "localhost:9092") \
...   .option("subscribe", "data_wm") \
...   .load()
Traceback (most recent call last):
  File "<stdin>", line 5, in <module>
  File "/usr/local/spark/python/pyspark/sql/streaming.py", line 406, in load
    return self._df(self._jreader.load())
  File "/usr/local/spark/python/lib/py4j-0.10.8.1-src.zip/py4j/java_gateway.py", line 1286, in __call__
  File "/usr/local/spark/python/pyspark/sql/utils.py", line 102, in deco
    raise converted
pyspark.sql.utils.AnalysisException: Failed to find data source: kafka. Please deploy the application as per the deployment section of "Structured Streaming + Kafka Integration Guide".;

I do not know the cause of this error. Please help.


I have successfully resolved this error on Spark 3.0.1 (using PySpark).

I would keep it simple and provide the required package through the --packages argument. Note that Structured Streaming needs the spark-sql-kafka-0-10 package, not the spark-streaming-kafka-0-10 package used in the question, which targets the older DStream API:

spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.0.1 MyPythonScript.py

Note the order of the arguments: --packages must come before the script name, otherwise an error is raised.
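
As an alternative (a minimal sketch, not part of the original answer), the same connector can be requested from inside the script through the spark.jars.packages config; this only takes effect if it is set before the first SparkSession is created:

from pyspark.sql import SparkSession

# sketch: have Spark fetch the Kafka connector itself; this must run before
# any SparkSession/SparkContext exists, otherwise the setting is ignored
spark_session = SparkSession \
    .builder \
    .appName("Python Spark create RDD") \
    .config("spark.jars.packages",
            "org.apache.spark:spark-sql-kafka-0-10_2.12:3.0.1") \
    .getOrCreate()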

Where MyPythonScript.py has:

from pyspark.sql import SparkSession

KAFKA_TOPIC = "data_wm"
KAFKA_SERVER = "localhost:9092"

# creating an instance of SparkSession
spark_session = SparkSession \
    .builder \
    .appName("Python Spark create RDD") \
    .getOrCreate()

# Subscribe to 1 topic
df = spark_session \
    .readStream \
    .format("kafka") \
    .option("kafka.bootstrap.servers", KAFKA_SERVER) \
    .option("subscribe", KAFKA_TOPIC) \
    .load()
print(df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)"))
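
The print above only shows the unresolved streaming DataFrame (its schema), not any records. To actually watch messages arrive, a minimal sketch of a console sink, reusing the df defined above, would be:

# start a streaming query that dumps each micro-batch to stdout
query = df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)") \
    .writeStream \
    .format("console") \
    .option("truncate", "false") \
    .start()
query.awaitTermination()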