This is the code I wrote to scrape the justdial website.
import scrapy
from scrapy.http.request import Request


class JustdialSpider(scrapy.Spider):
    name = 'justdial'
    # handle_httpstatus_list = [400]
    # headers = {'User-Agent': "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1"}
    # handle_httpstatus_list = [403, 404]
    allowed_domains = ['justdial.com']
    start_urls = ['https://www.justdial.com/Delhi-NCR/Chemists/page-1']

    # def start_requests(self):
    #     headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:48.0) Gecko/20100101 Firefox/48.0'}
    #     for url in self.start_urls:
    #         self.log("I just visited :---------------------------------- " + url)
    #         yield Request(url, headers=headers)

    def parse(self, response):
        self.log("I just visited the site:---------------------------------------------- " + response.url)
        urls = response.xpath('//a/@href').extract()
        self.log("Urls-------: " + str(urls))
This is the error shown in the terminal:
2017-08-18 18:32:25 [scrapy.core.engine] INFO: Spider opened
2017-08-18 18:32:25 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-08-18 18:32:25 [scrapy.extensions.httpcache] DEBUG: Using filesystem cache storage in D:\scrapy\justdial\.scrapy\httpcache
2017-08-18 18:32:25 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-08-18 18:32:25 [scrapy.core.engine] DEBUG: Crawled (403) <GET https://www.justdial.com/robots.txt> (referer: None) ['cached']
2017-08-18 18:32:25 [scrapy.core.engine] DEBUG: Crawled (403) <GET https://www.justdial.com/Delhi-NCR/Chemists/page-1> (referer: None) ['cached']
2017-08-18 18:32:25 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <403 https://www.justdial.com/Delhi-NCR/Chemists/page-1>: HTTP status code is not handled or not allowed
I have seen similar questions on Stack Overflow and tried everything that was suggested there; you can see what I tried in the commented-out parts of the code above.
Note: this site (https://www.justdial.com/Delhi-NCR/Chemists/page-1) is not even blocked on my system. It opens fine when I open it in Chrome/Firefox. The same thing happens with the (https://www.practo.com/bangalore#doctor-search) site as well.
It starts working when you use the user_agent spider attribute. Setting the header on the request is probably not enough, because it gets overridden by the default user-agent string. So set the spider attribute

user_agent = "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1"

(the same way you set start_urls) and try again.
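As a minimal sketch of that suggestion (assuming Scrapy's default UserAgentMiddleware is enabled, which reads the spider's user_agent attribute when the spider opens), the spider from the question could look like this:

import scrapy


class JustdialSpider(scrapy.Spider):
    name = 'justdial'
    allowed_domains = ['justdial.com']
    start_urls = ['https://www.justdial.com/Delhi-NCR/Chemists/page-1']

    # Class-level attribute: UserAgentMiddleware picks this up when the spider
    # opens and applies it to outgoing requests instead of Scrapy's default
    # "Scrapy/x.y" identifier.
    user_agent = "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1"

    def parse(self, response):
        self.log("I just visited the site: " + response.url)
        urls = response.xpath('//a/@href').extract()
        self.log("Urls: " + str(urls))

Setting USER_AGENT in the project's settings.py should have the same effect project-wide; either way the browser-like string replaces the default user agent that the site appears to be rejecting with 403.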