Hi :) I'm using Python with the Scrapy web-crawling framework to crawl a site, and I'm using the DeathByCaptcha service to solve the captchas I run into on its pages. My download delay is set to 30 seconds and I'm only crawling a handful of pages for some basic information, so I'm not hammering the site's bandwidth or anything like that; I try to keep the crawl looking like what would happen in a regular browser.
So first, let's talk about the issues.
ISSUE 1 (in the code)
How do I get Scrapy to basically stop creating new requests, or at least stop stomping all over the captcha, while the captcha is being solved? I've tried a bunch of different things with no luck, and I'm still pretty new to Scrapy, so I'm not comfortable editing the downloader middleware or the Scrapy engine code. If that's the only way then so be it, but I'm hoping there's a really simple, effective solution that lets the captcha do its job without new requests interrupting it at all.
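To give a concrete picture of the behaviour I'm after, something along these lines is roughly what I'm imagining (completely untested, so treat it as a sketch: crawler.engine.pause()/unpause() are the calls the telnet console uses to pause a crawl, and I don't know whether calling them from inside a spider callback is actually safe; solve_captcha() is just a made-up helper standing in for the deathbycaptcha block you'll see further down):
Python:
def parse(self, response):
    # Untested idea: pause the engine the moment the captcha page shows up, do the
    # blocking DeathByCaptcha solve while nothing else is being downloaded, then
    # unpause right before yielding the form submit so that request can go out.
    global captchaIsRunning
    if "InternalCaptcha" in response.request.url and not captchaIsRunning:
        captchaIsRunning = True
        self.crawler.engine.pause()               # stop Scrapy from starting new downloads
        token = self.solve_captcha(response)      # hypothetical helper wrapping client.decode()
        dest = response.xpath("/html/body/form/@action").extract_first()
        self.crawler.engine.unpause()             # resume so the submit below actually gets downloaded
        yield scrapy.FormRequest(
            url="https://exampleaddress.com" + dest,
            formdata={'g-recaptcha-response': token},
            callback=self.afterCaptchaSubmit,
            dont_filter=True)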
ISSUE 2 (in the code)
How do I fix this timer function? I think it's somewhat tied to the first issue. If the captcha times out without being solved, the captchaIsRunning boolean never gets reset, and the captcha solver is never allowed to start trying again. The timer was one of my attempts at dealing with the first issue, but... I'm getting an error. I'm not sure whether it has anything to do with the fact that Timer gets pulled in from both threading and timeit in the import statements, but I didn't think that would make a big difference. Can anyone point me in the right direction for fixing the Timer statement?
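For reference, here is how I understood threading.Timer was supposed to work, as a tiny standalone example along the lines of the threading docs (nothing Scrapy-specific here; note that a later "from timeit import Timer" would shadow the threading one, which, given that the traceback further down points into timeit.py, is my best guess at what's going wrong):
Python:
from threading import Timer
# from timeit import Timer   # a second import like this replaces threading.Timer with timeit.Timer
import time

def reset_flag():
    print("flag reset after timeout")

t = Timer(10.0, reset_flag)   # threading.Timer(interval_in_seconds, callback)
t.start()
time.sleep(11)                # keep the script alive long enough for the callback to fire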
Like I said, the DeathByCaptcha API runs just fine when it gets the chance, but the Scrapy requests really do get in the way, and I haven't found a solution to this anywhere. Again, I'm not a Scrapy expert yet, so some of this is way outside my comfort zone and I need a push, just not too hard a push or I'll end up breaking everything xD. Thanks for the help, it's very much appreciated! And sorry for the super long question.
Anyway, the page lets you look up a handful of results, and after roughly 40-60 pages it redirects to a captcha page containing reCAPTCHA v2. The DeathByCaptcha service has an API for solving reCAPTCHA v2, but unfortunately their solve times can sometimes run past a couple of minutes, which is really disappointing, but it happens. So naturally I bumped my DOWNLOAD_TIMEOUT setting to 240 seconds so there's plenty of time to solve the captcha and carry on crawling afterwards without it redirecting again. My Scrapy settings look like this:
CONCURRENT_REQUESTS = 1
DEPTH_LIMIT = 1
DOWNLOAD_DELAY = 30
CONCURRENT_REQUESTS_PER_DOMAIN = 1
CONCURRENT_REQUESTS_PER_IP = 1
DOWNLOAD_TIMEOUT = 240
AUTOTHROTTLE_ENABLED = True
# The initial download delay
AUTOTHROTTLE_START_DELAY = 10
# The maximum download delay to be set in case of high latencies
AUTOTHROTTLE_MAX_DELAY = 60
Then obviously there's the rest, but I think those are the ones that matter most for my problem. I have one extension enabled, and there are a few extra entries in the middleware because I'm also using Docker and scrapy-splash in this file.
SPIDER_MIDDLEWARES = {
'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}
DOWNLOADER_MIDDLEWARES = {
'scrapy_splash.SplashCookiesMiddleware': 723,
'scrapy_splash.SplashMiddleware': 725,
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}
MYEXT_ENABLED = False
MYEXT_ITEMCOUNT = 100
EXTENSIONS = {
'scrapy.extensions.telnet.TelnetConsole': None,
'scrapy.extensions.spideroclog.SpiderOpenCloseLogging':500,
}
So I don't think that stuff has much effect on the captcha or the downloader middleware... but here is some of the code from my scraper:
Python:
import sys
import os
sys.path.append(r'F:\Documents\ScrapyDirectory\scrapername\scrapername\spiders')
import deathbycaptcha
import json
import scrapy
import requests
from datetime import datetime
import math
import urllib
import time
from scrapy_splash import SplashRequest
from threading import Timer
from timeit import Timer
class scrapername(scrapy.Spider):
    name = "scrapername"
    start_urls = []

    global scrapeUrlList
    global charCompStorage
    global captchaIsRunning

    r = requests.get('http://example.com/examplejsonfeed.php')
    myObject = json.loads(r.text)
    #print("Loading names...")
    for o in myObject['objects']:
        #a huge function for creating basically a lot of objects and appending links created from these objects to the scrapeUrlList function
    print(len(scrapeUrlList))
    for url in scrapeUrlList:
        start_urls.append(url[1])
        #add all those urls that just got created to the start_urls list
    link_collection = []

    def resetCaptchaInformation(self):
        global captchaIsRunning
        if captchaIsRunning:
            captchaIsRunning = False

    def afterCaptchaSubmit(self, response):
        global captchaIsRunning
        print("Captcha submitted: " + response.request.url)
        captchaIsRunning = False

    def parse(self, response):
        global captchaIsRunning
        self.logger.info("got response %s for %r" % (response.status, response.url))
        if "InternalCaptcha" in response.request.url:
            #checks for captcha in the url and if it's there it starts running the captcha solver API
            if not captchaIsRunning:
                #I have this statement here as a deterrent to prevent the captcha solver from starting again and again and
                #again with every new request (which it does) *ISSUE 1*
                if "captchasubmit" in response.request.url:
                    print("Found captcha submit in url")
                else:
                    print("Internal Captcha is activated")
                    captchaIsRunning = True
                    t = Timer(240.0, self.resetCaptchaInformation)
                    #so I have been having major issues here not sure why?
                    #*ISSUE 2*
                    t.start()
                    username = "username"
                    password = "password"
                    print("Set username and password")
                    Captcha_dict = {
                        'googlekey': '6LcMUhgUAAAAAPn2MfvqN9KYxj7KVut-oCG2oCoK',
                        'pageurl': response.request.url}
                    print("Created captcha dict")
                    json_Captcha = json.dumps(Captcha_dict)
                    print("json.dumps on captcha dict:")
                    print(json_Captcha)
                    client = deathbycaptcha.SocketClient(username, password)
                    print("Set up client with deathbycaptcha socket client")
                    try:
                        print("Trying to solve captcha")
                        balance = client.get_balance()
                        print("Remaining Balance: " + str(balance))
                        # Put your CAPTCHA type and Json payload here:
                        captcha = client.decode(type=4, token_params=json_Captcha)
                        if captcha:
                            # The CAPTCHA was solved; captcha["captcha"] item holds its
                            # numeric ID, and captcha["text"] item its text token.
                            print("CAPTCHA %s solved: %s" % (captcha["captcha"], captcha["text"]))
                            data = {
                                'g-recaptcha-response': captcha["text"],
                            }
                            try:
                                dest = response.xpath("/html/body/form/@action").extract_first()
                                print("Form URL: " + dest)
                                submitURL = "https://exampleaddress.com" + dest
                                yield scrapy.FormRequest(url=submitURL, formdata=data, callback=self.afterCaptchaSubmit, dont_filter=True)
                                print("Yielded form request")
                                if '':  # check if the CAPTCHA was incorrectly solved
                                    client.report(captcha["captcha"])
                            except TypeError:
                                sys.exit()
                    except deathbycaptcha.AccessDeniedException:
                        # Access to DBC API denied, check your credentials and/or balance
                        print("error: Access to DBC API denied, check your credentials and/or balance")
            else:
                pass
        else:
            print("no Captcha")
            #this will run if no captcha is on the page that the redirect landed on
            #and basically parses all the information on the page
Really sorry for all that code, and thank you for being patient enough to read through it. If you're wondering why something is in there, just ask and I'll explain. So, the captcha does get solved; that's not the problem. When the scraper is running and a lot of requests are going out and it hits a 302 redirect, it gets a 200 response, crawls the page, detects the captcha and starts solving it. Then Scrapy sends another request, which gets a 302 redirect to the captcha page, a 200 response, detects the captcha and tries to solve it all over again. It was kicking off the API several times and wasting my tokens, which is what the if not captchaIsRunning: statement is there to prevent. So here is the Scrapy log I get now when it hits the captcha; keep in mind that everything before this point was fine and all of my parse logging was running.
The messy log:
2018-07-19 14:10:35 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.exampleaddress.com/InternalCaptcha?returnUrl=%2fresults%3fname%3dThomas%2520Garrett%26citystatezip%3dLas%2520Vegas%2c%2520Nv> from <GET https://www.exampleaddress.com/results?name=Thomas%20Garrett&citystatezip=Las%20Vegas,%20Nv>
2018-07-19 14:10:49 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.exampleaddress.com/InternalCaptcha?returnUrl=%2fresults%3fname%3dThomas%2520Garrett%26citystatezip%3dLas%2520Vegas%2c%2520Nv> (referer: None)
2018-07-19 14:10:49 [scrapername] INFO: got response 200 for 'https://www.exampleaddress.com/InternalCaptcha?returnUrl=%2fresults%3fname%3dThomas%2520Garrett%26citystatezip%3dLas%2520Vegas%2c%2520Nv'
Internal Captcha is activated
2018-07-19 14:10:49 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.exampleaddress.com/InternalCaptcha?returnUrl=%2fresults%3fname%3dThomas%2520Garrett%26citystatezip%3dLas%2520Vegas%2c%2520Nv> (referer: None)
Traceback (most recent call last):
File "F:\Program Files (x86)\Anaconda3\lib\site-packages\scrapy\utils\defer.py", line 102, in iter_errback
yield next(it)
File "F:\Program Files (x86)\Anaconda3\lib\site-packages\scrapy_splash\middleware.py", line 156, in process_spider_output
for el in result:
File "F:\Program Files (x86)\Anaconda3\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 29, in process_spider_output
for x in result:
File "F:\Program Files (x86)\Anaconda3\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 339, in <genexpr>
return (_set_referer(r) for r in result or ())
File "F:\Program Files (x86)\Anaconda3\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 37, in <genexpr>
return (r for r in result or () if _filter(r))
File "F:\Program Files (x86)\Anaconda3\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr>
return (r for r in result or () if _filter(r))
File "F:\Documents\ScrapyDirectory\scraperName\scraperName\spiders\scraperName- Copy.py", line 232, in parse
t = Timer(240.0, self.resetCaptchaInformation)
File "F:\Program Files (x86)\Anaconda3\lib\timeit.py", line 130, in __init__
raise ValueError("stmt is neither a string nor callable")
ValueError: stmt is neither a string nor callable
2018-07-19 14:10:53 [scrapy.extensions.logstats] INFO: Crawled 63 pages (at 2 pages/min), scraped 13 items (at 0 items/min)
2018-07-19 14:11:02 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.exampleaddress.com/InternalCaptcha?returnUrl=%2fresults%3fname%3dSamuel%2520Van%2520Cleave%26citystatezip%3dLas%2520Vegas%2c%2520Nv> from <GET https://www.exampleaddress.com/results?name=Samuel%20Van%20Cleave&citystatezip=Las%20Vegas,%20Nv>
2018-07-19 14:11:13 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.exampleaddress.com/InternalCaptcha?returnUrl=%2fresults%3fname%3dSamuel%2520Van%2520Cleave%26citystatezip%3dLas%2520Vegas%2c%2520Nv> (referer: None)
2018-07-19 14:11:13 [scrapername] INFO: got response 200 for 'https://www.exampleaddress.com/InternalCaptcha?returnUrl=%2fresults%3fname%3dSamuel%2520Van%2520Cleave%26citystatezip%3dLas%2520Vegas%2c%2520Nv'
#and then an endless supply of 302 redirects, and 200 response for their crawl
#nothing happens, because the Timer failed, the captcha never solved?
#I'm not sure what is going wrong with it, hence the issues I am having
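One last thought: if a downloader middleware really is the only clean way to keep those extra requests away from the captcha while it's being solved, is something like the sketch below the right general shape? Completely untested; the captcha_in_progress attribute is just a stand-in for my captchaIsRunning global, and the module path in the settings comment is made up:
Python:
class CaptchaHoldbackMiddleware(object):
    # Idea: while a solve is in progress, don't let more InternalCaptcha pages
    # reach parse(). Returning a Request from process_response hands it back to
    # the scheduler to be retried later instead of passing the captcha page on.
    def process_response(self, request, response, spider):
        if "InternalCaptcha" in response.url and getattr(spider, "captcha_in_progress", False):
            retry = request.copy()
            retry.dont_filter = True   # let the duplicate filter accept it again
            return retry
        return response

# which would presumably be registered in settings.py next to the splash entries:
# 'scrapername.middlewares.CaptchaHoldbackMiddleware': 543,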