Spark write to S3: V4 SignatureDoesNotMatch error

2024-06-26

I'm getting an S3 SignatureDoesNotMatch error when trying to write a DataFrame to S3 with Spark.

Symptoms / things already tried:

  • The code fails sometimes and works sometimes;
  • The code can read from S3 without any issue, and manages to write to S3 from time to time, which rules out bad configuration such as S3A / enableV4 / wrong keys / region endpoint, etc.;
  • The S3A endpoint was set according to the S3 endpoint documentation http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region;
  • Made sure the AWS secret key does not contain any non-alphanumeric characters, per the suggestion here https://github.com/aws/aws-cli/issues/602;
  • Used NTP to keep the server clock in sync (a minimal skew check is sketched right after this list);
  • The following was tested on an EC2 m3.xlarge with spark-2.0.2-bin-hadoop2.7 running in local mode;
  • The problem goes away when the files are written to the local filesystem;
  • The current workaround is to mount the bucket with s3fs and write there; however, this is not ideal, because s3fs dies quite often under the load Spark puts on it;
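
Since clock skew is one of the usual suspects behind signature errors (hence the NTP check above), here is a minimal, purely illustrative way to compare the local clock against S3; this is not part of the original job, and the endpoint is simply the regional one used above:

import urllib.error
import urllib.request
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def clock_skew_seconds(endpoint="https://s3.ap-southeast-2.amazonaws.com"):
    """Return local clock minus S3 clock, in seconds, based on the Date header."""
    req = urllib.request.Request(endpoint, method="HEAD")
    try:
        headers = urllib.request.urlopen(req, timeout=10).headers
    except urllib.error.HTTPError as e:
        # S3 rejects the anonymous request (403) but still returns a Date header.
        headers = e.headers
    remote = parsedate_to_datetime(headers["Date"])
    return (datetime.now(timezone.utc) - remote).total_seconds()

print("clock skew vs S3 (seconds):", clock_skew_seconds())

If this reports a large skew, fixing NTP is the first thing to do before digging into the signing configuration.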

The code boils down to:

spark-submit \
    --verbose \
    --conf spark.hadoop.fs.s3n.impl=org.apache.hadoop.fs.s3native.NativeS3FileSystem \
    --conf spark.hadoop.fs.s3.impl=org.apache.hadoop.fs.s3.S3FileSystem \
    --conf spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem \
    --packages org.apache.hadoop:hadoop-aws:2.7.3 \
    --driver-java-options '-Dcom.amazonaws.services.s3.enableV4' \
    foobar.py


# foobar.py
from pyspark import SparkContext
from pyspark.sql import SparkSession

sc = SparkContext.getOrCreate()
# S3A credentials and regional endpoint (values redacted)
sc._jsc.hadoopConfiguration().set("fs.s3a.access.key", 'xxx')
sc._jsc.hadoopConfiguration().set("fs.s3a.secret.key", 'xxx')
sc._jsc.hadoopConfiguration().set("fs.s3a.endpoint", 's3.dualstack.ap-southeast-2.amazonaws.com')

hc = SparkSession.builder.enableHiveSupport().getOrCreate()
dataframe = hc.read.parquet(in_file_path)

dataframe.write.csv(
    path=out_file_path,
    mode='overwrite',
    compression='gzip',
    sep=',',
    quote='"',
    escape='\\',
    escapeQuotes='true',
)

Spark spits out the following error: https://gist.github.com/PaulLiang1/4d01740b1a11ac1a49affcec71cb4b1f


With log4j set to verbose, the following appears to be happening:

  • Each individual partition is written to a staging location on S3, /_temporary/foorbar.part-xxx;
  • A PUT call then moves each partition to its final location;
  • After a few successful PUT calls, all subsequent PUT calls fail with 403;
  • Since the requests are made by the aws-java-sdk, I'm not sure what can be done at the application level; -- the following log is from another run that hit exactly the same error:

 >> PUT XXX/part-r-00025-ae3d5235-932f-4b7d-ae55-b159d1c1343d.gz.parquet HTTP/1.1
 >> Host: XXX.s3-ap-southeast-2.amazonaws.com
 >> x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
 >> X-Amz-Date: 20161104T005749Z
 >> x-amz-metadata-directive: REPLACE
 >> Connection: close
 >> User-Agent: aws-sdk-java/1.10.11 Linux/3.13.0-100-generic OpenJDK_64-Bit_Server_VM/25.91-b14/1.8.0_91 com.amazonaws.services.s3.transfer.TransferManager/1.10.11
 >> x-amz-server-side-encryption-aws-kms-key-id: 5f88a222-715c-4a46-a64c-9323d2d9418c
 >> x-amz-server-side-encryption: aws:kms
 >> x-amz-copy-source: /XXX/_temporary/0/task_201611040057_0001_m_000025/part-r-00025-ae3d5235-932f-4b7d-ae55-b159d1c1343d.gz.parquet
 >> Accept-Ranges: bytes
 >> Authorization: AWS4-HMAC-SHA256 Credential=AKIAJZCSOJPB5VX2B6NA/20161104/ap-southeast-2/s3/aws4_request, SignedHeaders=accept-ranges;connection;content-length;content-type;etag;host;last-modified;user-agent;x-amz-content-sha256;x-amz-copy-source;x-amz-date;x-amz-metadata-directive;x-amz-server-side-encryption;x-amz-server-side-encryption-aws-kms-key-id, Signature=48e5fe2f9e771dc07a9c98c7fd98972a99b53bfad3b653151f2fcba67cff2f8d
 >> ETag: 31436915380783143f00299ca6c09253
 >> Content-Type: application/octet-stream
 >> Content-Length: 0
DEBUG wire:  << "HTTP/1.1 403 Forbidden[\r][\n]"
DEBUG wire:  << "x-amz-request-id: 849F990DDC1F3684[\r][\n]"
DEBUG wire:  << "x-amz-id-2: 6y16TuQeV7CDrXs5s7eHwhrpa1Ymf5zX3IrSuogAqz9N+UN2XdYGL2FCmveqKM2jpGiaek5rUkM=[\r][\n]"
DEBUG wire:  << "Content-Type: application/xml[\r][\n]"
DEBUG wire:  << "Transfer-Encoding: chunked[\r][\n]"
DEBUG wire:  << "Date: Fri, 04 Nov 2016 00:57:48 GMT[\r][\n]"
DEBUG wire:  << "Server: AmazonS3[\r][\n]"
DEBUG wire:  << "Connection: close[\r][\n]"
DEBUG wire:  << "[\r][\n]"
DEBUG DefaultClientConnection: Receiving response: HTTP/1.1 403 Forbidden
 << HTTP/1.1 403 Forbidden
 << x-amz-request-id: 849F990DDC1F3684
 << x-amz-id-2: 6y16TuQeV7CDrXs5s7eHwhrpa1Ymf5zX3IrSuogAqz9N+UN2XdYGL2FCmveqKM2jpGiaek5rUkM=
 << Content-Type: application/xml
 << Transfer-Encoding: chunked
 << Date: Fri, 04 Nov 2016 00:57:48 GMT
 << Server: AmazonS3
 << Connection: close
DEBUG requestId: x-amzn-RequestId: not available

I ran into exactly the same problem and found a solution with the help of this article https://medium.com/@subhojit20_27731/apache-spark-and-amazon-s3-gotchas-and-best-practices-a767242f3d98 (other resources, such as https://de.slideshare.net/SparkSummit/spark-and-object-stores-what-you-need-to-know-spark-summit-east-talk-by-steve-loughran, point in the same direction). After setting these configuration options, writing to S3 succeeded:

spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version 2
spark.speculation false
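
For reference, the same two options can also be set programmatically before the SparkSession is created instead of on the spark-submit command line. A minimal sketch, reusing the eu-central-1 endpoint as an example value (the enableV4 Java options stay on the command line as shown below):

from pyspark import SparkConf
from pyspark.sql import SparkSession

conf = (
    SparkConf()
    # v2 of the commit algorithm promotes task output directly to the final
    # location, skipping the extra job-level rename/merge pass of v1.
    .set("spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version", "2")
    # Speculative duplicate tasks can race against each other on the same
    # S3 destination paths, so turn speculation off for S3 output.
    .set("spark.speculation", "false")
    .set("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
    .set("spark.hadoop.fs.s3a.endpoint", "s3.eu-central-1.amazonaws.com")
)

spark = SparkSession.builder.config(conf=conf).getOrCreate()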

I'm using Spark 2.1.1 with Hadoop 2.7. My final spark-submit command looks like this:

spark-submit \
    --packages com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.3 \
    --conf spark.hadoop.fs.s3a.endpoint=s3.eu-central-1.amazonaws.com \
    --conf spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem \
    --conf spark.executor.extraJavaOptions=-Dcom.amazonaws.services.s3.enableV4=true \
    --conf spark.driver.extraJavaOptions=-Dcom.amazonaws.services.s3.enableV4=true \
    --conf spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version=2 \
    --conf spark.speculation=false \
    ...

In addition, I defined these environment variables:

AWS_ACCESS_KEY_ID=****
AWS_SECRET_ACCESS_KEY=****
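
Rather than hard-coding the keys in the job script as in foobar.py above, the Hadoop configuration can also pick them up from those environment variables; a small, illustrative sketch of that approach (not the exact code from the answer):

import os
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
hadoop_conf = spark.sparkContext._jsc.hadoopConfiguration()

# Read the credentials from the AWS_* environment variables defined above
# instead of embedding them in the script.
hadoop_conf.set("fs.s3a.access.key", os.environ["AWS_ACCESS_KEY_ID"])
hadoop_conf.set("fs.s3a.secret.key", os.environ["AWS_SECRET_ACCESS_KEY"])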