I am working on a project with Scrapy. I ran the following command to list the deploy targets for the project:
$ scrapy deploy -l
and got this output:
scrapysite           http://localhost:6800/
$ cat scrapy.cfg
[settings]
default = scrapBib.settings
[deploy:scrapysite]
url = http://localhost:6800/
project = scrapBib
$ scrapy deploy scrapysite -p scrapBib
Building egg of scrapBib-1346242513
'build/lib.linux-x86_64-2.7' does not exist -- can't clean it
'build/bdist.linux-x86_64' does not exist -- can't clean it
'build/scripts-2.7' does not exist -- can't clean it
zip_safe flag not set; analyzing archive contents...
Deploying scrapBib-1346242513 to http://localhost:6800/addversion.json
2012-08-29 17:45:14+0530 [HTTPChannel,22,127.0.0.1] 127.0.0.1 - - [29/Aug/2012:12:15:13
+0000] "POST /addversion.json HTTP/1.1" 200 79 "-" "Python-urllib/2.7"
Server response (200):
{"status": "ok", "project": "scrapBib", "version": "1346242513", "spiders": 0}
As you can see, the spider count is 0 even though I have written 3 spiders in the project/spiders/ folder. Because of this I cannot start a crawl with a curl request. Please help.
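For reference, the curl request mentioned above would be a POST to scrapyd's schedule.json endpoint. A minimal sketch, where "myspider" is a placeholder spider name (not one from the question) and the `|| true` only keeps the snippet harmless if scrapyd happens not to be running:

```shell
# Schedule a crawl through scrapyd's web API.
# "myspider" is a placeholder; replace it with one of your spider names.
response=$(curl -s http://localhost:6800/schedule.json \
    -d project=scrapBib -d spider=myspider || true)
echo "$response"
```

While the deployed egg contains 0 spiders, this call cannot succeed for any spider name; it should start working only after the deployment problem below is fixed.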
I ran into this problem too. Do two things:
1) Delete project.egg-info, build, and setup.py from your local system.
2) Delete all deployed versions of the project from your server.
Then try deploying again and it will be fixed.
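The two steps above can be sketched as a small shell script. The artifact names and project come from the question; the scratch `demo_scrapBib` directory, and the assumption that scrapyd's delproject.json endpoint is reachable at localhost:6800, are mine:

```shell
# Work in a scratch copy so this is safe to run anywhere; in a real
# project you would run the rm line in the directory with scrapy.cfg.
mkdir -p demo_scrapBib/build demo_scrapBib/project.egg-info
touch demo_scrapBib/setup.py
cd demo_scrapBib

# 1) Remove the stale local build artifacts left by earlier deploys.
rm -rf project.egg-info build setup.py

# 2) Drop every deployed version of the project on the server
#    (delproject.json is part of scrapyd's web API; "|| true" keeps
#    the script going if the server is not running right now).
curl http://localhost:6800/delproject.json -d project=scrapBib || true

# 3) Finally, redeploy:
#    scrapy deploy scrapysite -p scrapBib
```

After redeploying, the server response to addversion.json should report a non-zero spider count.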