I am using Python and PySpark, and I want to upload a CSV file to Azure Blob Storage. I already have a dataframe generated by my code, df. What I am trying to do is the following:
import pandas
from azure.storage.blob import BlockBlobService

# Dataframe generated by code
df
# Create the BlockBlobService that is used to call the Blob service for the storage account
block_blob_service = BlockBlobService(account_name='name', account_key='key')
container_name = 'results-csv'
d = {'one': pandas.Series([1., 2., 3.], index=['a', 'b', 'c']),
     'two': pandas.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd'])}
df = pandas.DataFrame(d)
writer = pandas.ExcelWriter(df, engine='xlsxwriter')
a = df.to_excel(writer, sheet_name='Sheet1', index=False, engine='xlsxwriter')
block_blob_service.create_blob_from_stream(container_name, 'test', a)
I get the error:
ValueError: stream should not be None.
So I want to upload the contents of the dataframe as a blob to the storage location given above. Is there a way to do this without first generating a CSV file on my local machine?
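One approach that may work, sketched below under some assumptions: `DataFrame.to_csv()` returns the CSV as a string when no path is given, so the dataframe can be serialized entirely in memory, and the legacy `azure-storage` SDK (the one the question's `BlockBlobService` comes from) exposes `create_blob_from_text`, which accepts a string directly instead of a stream. The `account_name`, `account_key`, and container values are placeholders, and `upload_csv_blob` is a hypothetical helper name.

```python
import pandas

# Sample dataframe standing in for the df produced by your code
d = {'one': pandas.Series([1., 2., 3.], index=['a', 'b', 'c']),
     'two': pandas.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd'])}
df = pandas.DataFrame(d)

# With no path argument, to_csv returns the CSV content as a string,
# so nothing is written to the local filesystem
csv_text = df.to_csv(index=False)

def upload_csv_blob(csv_text, container_name, blob_name):
    """Upload an in-memory CSV string as a blob (legacy azure-storage SDK)."""
    from azure.storage.blob import BlockBlobService  # pip install "azure-storage<=2.1"
    block_blob_service = BlockBlobService(account_name='name', account_key='key')
    # create_blob_from_text takes a str, so no stream object is needed
    block_blob_service.create_blob_from_text(container_name, blob_name, csv_text)

# upload_csv_blob(csv_text, 'results-csv', 'test.csv')  # placeholder credentials above
```

Note that the original snippet passed the return value of `to_excel` to `create_blob_from_stream`; `to_excel` writes to the supplied writer and returns `None`, which is why the SDK raises `ValueError: stream should not be None.`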