Is it possible to make Scrapy write CSV files with no more than 5000 rows each? And how can I give it a custom naming scheme? Should I modify CsvItemExporter?
Try this pipeline:
# -*- coding: utf-8 -*-
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html
import datetime

from scrapy.exporters import CsvItemExporter


class MyPipeline(object):
    def __init__(self, stats):
        self.stats = stats
        # the result/ directory must already exist; each file gets a timestamp in its name
        self.base_filename = "result/amazon_{}.csv"
        self.next_split = self.split_limit = 50000  # assuming you want to split 50000 items/csv

        self.create_exporter()

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler.stats)

    def create_exporter(self):
        # open a new file named after the current timestamp and attach a fresh exporter to it
        now = datetime.datetime.now()
        datetime_stamp = now.strftime("%Y%m%d%H%M")
        self.file = open(self.base_filename.format(datetime_stamp), 'w+b')
        self.exporter = CsvItemExporter(self.file)
        self.exporter.start_exporting()

    def process_item(self, item, spider):
        # once the scraped-item count crosses the next threshold, roll over to a new file
        if self.stats.get_stats().get('item_scraped_count', 0) >= self.next_split:
            self.next_split += self.split_limit
            self.exporter.finish_exporting()
            self.file.close()
            self.create_exporter()
        self.exporter.export_item(item)
        return item

    def close_spider(self, spider):
        # finalize the last (possibly partial) file when the spider finishes
        self.exporter.finish_exporting()
        self.file.close()
Don't forget to add the pipeline to your settings:
ITEM_PIPELINES = {
'myproject.pipelines.MyPipeline': 300,
}
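As an alternative, newer Scrapy releases (2.3 and later) can split feed output into batches without a custom pipeline, via the FEEDS setting and its batch_item_count option; the %(batch_time)s and %(batch_id)d placeholders give each file a distinct name. A minimal sketch, assuming Scrapy 2.3+ and a 5000-row limit as in the question:

# settings.py -- sketch of the built-in batching approach (Scrapy 2.3+)
FEEDS = {
    "result/amazon_%(batch_time)s_%(batch_id)05d.csv": {
        "format": "csv",
        "batch_item_count": 5000,  # start a new CSV file every 5000 items
    },
}

With this in place, Scrapy closes the current file and opens the next one automatically once the batch limit is reached, so the custom pipeline above is only needed if you want finer control over the naming scheme.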