Error:
Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
In a Jupyter notebook I create the application with:
spark = SparkSession.builder.master('spark://hadoop101:7077').appName('apply123').getOrCreate()
Its Executors use the memory and core counts set in the spark-env.sh configuration file, as follows:
export SPARK_WORKER_CORES=2
export SPARK_EXECUTOR_MEMORY=900m
Master web UI: http://hadoop101:8080/