TypeError in a Scrapy spider

2024-01-06

Note:

The pages I am crawling have not used JavaScript so far. I have also tried scrapy_splash, but I ran into the same error! I relied on this https://www.udemy.com/course/web-scraping-in-python-using-scrapy-and-splash/learn/lecture/16249446#questions/11644432 course to set up the spider.

Question:

The Scrapy spider gives this error:

raise TypeError('to_bytes must receive a str or bytes '
TypeError: to_bytes must receive a str or bytes object, got Selector

What I want:

A string as output that includes the "number of records".

What have I tried?

This https://stackoverflow.com/q/37604916/3604513 and this https://stackoverflow.com/q/38291784/3604513, as well as similar questions. They did not solve the problem I am facing.

My Code:

import scrapy
from scrapy import FormRequest


class abcSpider(scrapy.Spider):
    name = 'abc'
    allowed_domains = ['citizen.mahapolice.gov.in']

    def start_requests(self):
        yield scrapy.Request(
            url='http://citizen.mahapolice.gov.in/Citizen/MH/PublishedFIRs.aspx',
            headers={
                'Referer': 'https://citizen.mahapolice.gov.in/Citizen/MH/PublishedFIRs.aspx'
            },
            callback=self.parse
        )

    def parse(self, response):

        yield FormRequest.from_response(
            response,
            formid='form1',
            formdata={
                '__EVENTTARGET': response.xpath("//input[@name='__EVENTTARGET']/@value"),
                '__EVENTARGUMENT': response.xpath("//*[@id='__EVENTARGUMENT']/@value"),
                '__LASTFOCUS': response.xpath("//*[@id='__LASTFOCUS']/@value"),
                '__VIEWSTATE':response.xpath("//*[@id='__VIEWSTATE']/@value"),
                '__VIEWSTATEGENERATOR': "6F2EA376",
                '__PREVIOUSPAGE': response.xpath("//*[@id='__PREVIOUSPAGE']/@value"),
                '__EVENTVALIDATION': response.xpath("//*[@id='__EVENTVALIDATION']/@value"),
                'ctl00$hdnSessionIdleTime': response.xpath("//*[@id='hdnSessionIdleTime']/@value"),
                'ctl00$hdnUserUniqueId': response.xpath("//*[@id='hdnUserUniqueId']/@value"),
                'ctl00$ContentPlaceHolder1$meeDateOfRegistrationFrom_ClientState': response.xpath(
                    "//*[@id='ContentPlaceHolder1_meeDateOfRegistrationFrom_ClientState']/@value"),
                'ctl00$ContentPlaceHolder1$txtDateOfRegistrationFrom': "01/07/2020",
                'ctl00$ContentPlaceHolder1$meeDateOfRegistrationTo_ClientState':
                    response.xpath(
                        "//*[@id='ContentPlaceHolder1_meeDateOfRegistrationTo_ClientState']/@value"),
                'ctl00$ContentPlaceHolder1_txtDateOfRegistrationTo': "03/07/2020",
                'ctl00$ContentPlaceHolder1$ddlDistrict': "19409",
                'ctl00$ContentPlaceHolder1$ddlPoliceStation': "",
                'ctl00$ContentPlaceHolder1$txtFirno': "",
                'ctl00$ContentPlaceHolder1$btnSearch': "Search",
                'ctl00$ContentPlaceHolder1$ucRecordView$ddlPageSize': "0",
                'ctl00$ContentPlaceHolder1$ucGridRecordView$txtPageNumber': ""
            },
            callback=(self.after_login),

        )

    def after_login(self, response):

        police_stations = response.xpath(
            '//*[@id="ContentPlaceHolder1_lbltotalrecord"]/text()').get()
        print(police_stations)

Terminal output:

2020-07-15 15:11:37 [scrapy.utils.log] INFO: Scrapy 2.2.0 started (bot: xyz)
2020-07-15 15:11:37 [scrapy.utils.log] INFO: Versions: lxml 4.5.0.0, libxml2 2.9.10, cssselect 1.1.0, parsel 1.6.0, w3lib 1.22.0, Twisted 20.3.0, Python 3.8.2 (default, Apr 27 2020, 15:53:34) - [GCC 9.3.0], pyOpenSSL 19.1.0 (OpenSSL 1.1.1f  31 Mar 2020), cryptography 2.8, Platform Linux-5.4.0-40-generic-x86_64-with-glibc2.29
2020-07-15 15:11:37 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.epollreactor.EPollReactor
2020-07-15 15:11:37 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'xyz',
 'NEWSPIDER_MODULE': 'xyz.spiders',
 'SPIDER_MODULES': ['xyz.spiders']}
2020-07-15 15:11:38 [scrapy.extensions.telnet] INFO: Telnet Password: db3dd9550774d0ab
2020-07-15 15:11:38 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.logstats.LogStats']
2020-07-15 15:11:39 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2020-07-15 15:11:39 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2020-07-15 15:11:39 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2020-07-15 15:11:39 [scrapy.core.engine] INFO: Spider opened
2020-07-15 15:11:39 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-07-15 15:11:39 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2020-07-15 15:11:40 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://citizen.mahapolice.gov.in/Citizen/MH/index.aspx> from <GET http://citizen.mahapolice.gov.in/Citizen/MH/PublishedFIRs.aspx>
2020-07-15 15:11:40 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://citizen.mahapolice.gov.in/Citizen/MH/index.aspx> (referer: https://citizen.mahapolice.gov.in/Citizen/MH/PublishedFIRs.aspx)
2020-07-15 15:11:40 [scrapy.core.scraper] ERROR: Spider error processing <GET https://citizen.mahapolice.gov.in/Citizen/MH/index.aspx> (referer: https://citizen.mahapolice.gov.in/Citizen/MH/PublishedFIRs.aspx)
Traceback (most recent call last):
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/utils/defer.py", line 120, in iter_errback
    yield next(it)
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/utils/python.py", line 346, in __next__
    return next(self.data)
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/utils/python.py", line 346, in __next__
    return next(self.data)
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/core/spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/spidermiddlewares/offsite.py", line 29, in process_spider_output
    for x in result:
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/core/spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/spidermiddlewares/referer.py", line 340, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/core/spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/spidermiddlewares/urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/core/spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/spidermiddlewares/depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/core/spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "/home/sangharshmanuski/Documents/delet/xyz/xyz/spiders/abc.py", line 20, in parse
    yield FormRequest.from_response(
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/http/request/form.py", line 58, in from_response
    return cls(url=url, method=method, formdata=formdata, **kwargs)
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/http/request/form.py", line 31, in __init__
    querystr = _urlencode(items, self.encoding)
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/http/request/form.py", line 71, in _urlencode
    values = [(to_bytes(k, enc), to_bytes(v, enc))
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/http/request/form.py", line 71, in <listcomp>
    values = [(to_bytes(k, enc), to_bytes(v, enc))
  File "/home/sangharshmanuski/.local/lib/python3.8/site-packages/scrapy/utils/python.py", line 104, in to_bytes
    raise TypeError('to_bytes must receive a str or bytes '
TypeError: to_bytes must receive a str or bytes object, got Selector
2020-07-15 15:11:40 [scrapy.core.engine] INFO: Closing spider (finished)
2020-07-15 15:11:40 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 648,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 8150,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 1,
 'downloader/response_status_count/302': 1,
 'elapsed_time_seconds': 1.116569,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2020, 7, 15, 9, 41, 40, 607840),
 'log_count/DEBUG': 2,
 'log_count/ERROR': 1,
 'log_count/INFO': 10,
 'memusage/max': 52281344,
 'memusage/startup': 52281344,
 'response_received_count': 1,
 'scheduler/dequeued': 2,
 'scheduler/dequeued/memory': 2,
 'scheduler/enqueued': 2,
 'scheduler/enqueued/memory': 2,
 'spider_exceptions/TypeError': 1,
 'start_time': datetime.datetime(2020, 7, 15, 9, 41, 39, 491271)}
2020-07-15 15:11:40 [scrapy.core.engine] INFO: Spider closed (finished)

You have the problem I already mentioned in a comment on your previous question https://stackoverflow.com/questions/62888016/scrapy-spider-dosent-give-any-output.

You have to use .get() when extracting the values with response.xpath(...).get() in formdata={...} — without it, xpath() returns a Selector object, which to_bytes() cannot encode.
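
A minimal sketch of the difference (using __VIEWSTATE as the example field; the other fields behave the same way):

# Without .get(): xpath() returns a SelectorList of Selector objects.
value = response.xpath("//*[@id='__VIEWSTATE']/@value")
# Passing this into formdata makes FormRequest call to_bytes() on a Selector,
# which raises: TypeError: to_bytes must receive a str or bytes object, got Selector

# With .get(): the first matched value is returned as a plain str (or None if nothing matched).
value = response.xpath("//*[@id='__VIEWSTATE']/@value").get()
# formdata={'__VIEWSTATE': value, ...} now receives a str and can be encoded.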


BTW:

Your field names still contain a mistake:

 'ContentPlaceHolder1_txtDateOfRegistrationTo': "03/07/2020",

It has to be:

'ctl00$ContentPlaceHolder1$txtDateOfRegistrationTo': "03/07/2020",

And you have to use https:// instead of http:// in the start URL:

url = 'https://citizen.mahapolice.gov.in/Citizen/MH/PublishedFIRs.aspx',

If you use http://, the request is redirected to the home page

https://citizen.mahapolice.gov.in/Citizen/MH/index.aspx

and then you send the form to index.aspx instead of PublishedFIRs.aspx.
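
If you want to catch that situation early, a small guard in parse (a hypothetical addition, not part of the original spider) can check which page actually answered before building the FormRequest:

    def parse(self, response):
        # After a 302 redirect the form would be built against index.aspx,
        # so bail out if the search page did not answer.
        if 'PublishedFIRs.aspx' not in response.url:
            self.logger.error('Unexpected page after redirect: %s', response.url)
            return
        # ... build the FormRequest as shown below ...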


Minimal working code that you can put into a single file and run with python script.py, without creating a project.

It no longer raises the previous error and it posts to the correct URL, but there is still a problem with the values of __VIEWSTATE and __EVENTVALIDATION. If I copy all the values from a web browser then it works, but with the values extracted by Scrapy the page returns error 500. The page probably uses JavaScript to generate these values (see the debugging sketch after the code below).

#!/usr/bin/env python3

import scrapy
from scrapy import FormRequest


class abcSpider(scrapy.Spider):
    name = 'abc'
    allowed_domains = ['citizen.mahapolice.gov.in']

    def start_requests(self):
        yield scrapy.Request(
            url='https://citizen.mahapolice.gov.in/Citizen/MH/PublishedFIRs.aspx',
            headers={
                'USER_AGENT': 'Mozilla/5.0',
                'Referer': 'https://citizen.mahapolice.gov.in/Citizen/MH/PublishedFIRs.aspx'
            },
            callback=self.parse
        )

    def parse(self, response):

        yield FormRequest.from_response(
            response,
            formid='form1',
            formdata={
                '__EVENTTARGET': response.xpath("//input[@name='__EVENTTARGET']/@value").get(),
                '__EVENTARGUMENT': response.xpath("//*[@id='__EVENTARGUMENT']/@value").get(),
                '__LASTFOCUS': response.xpath("//*[@id='__LASTFOCUS']/@value").get(),
                '__VIEWSTATE':response.xpath("//*[@id='__VIEWSTATE']/@value").get(),
                '__VIEWSTATEGENERATOR': "6F2EA376",
                '__PREVIOUSPAGE': response.xpath("//*[@id='__PREVIOUSPAGE']/@value").get(),
                '__EVENTVALIDATION': response.xpath("//*[@id='__EVENTVALIDATION']/@value").get(),
                'ctl00$hdnSessionIdleTime': response.xpath("//*[@id='hdnSessionIdleTime']/@value").get(),
                'ctl00$hdnUserUniqueId': response.xpath("//*[@id='hdnUserUniqueId']/@value").get(),
                'ctl00$ContentPlaceHolder1$meeDateOfRegistrationFrom_ClientState': 
                    response.xpath("//*[@id='ContentPlaceHolder1_meeDateOfRegistrationFrom_ClientState']/@value").get(),
                'ctl00$ContentPlaceHolder1$txtDateOfRegistrationFrom': "01/07/2020",
                'ctl00$ContentPlaceHolder1$meeDateOfRegistrationTo_ClientState':
                     response.xpath("//*[@id='ContentPlaceHolder1_meeDateOfRegistrationTo_ClientState']/@value").get(),
                #'ContentPlaceHolder1_txtDateOfRegistrationTo': "03/07/2020",
                'ctl00$ContentPlaceHolder1$txtDateOfRegistrationTo': "03/07/2020",
                'ctl00$ContentPlaceHolder1$ddlDistrict': "19409",
                'ctl00$ContentPlaceHolder1$ddlPoliceStation': "",
                'ctl00$ContentPlaceHolder1$txtFirno': "",
                'ctl00$ContentPlaceHolder1$btnSearch': "Search",
                'ctl00$ContentPlaceHolder1$ucRecordView$ddlPageSize': "0",
                'ctl00$ContentPlaceHolder1$ucGridRecordView$txtPageNumber': ""
            },
            callback=(self.after_login),
        )

    def after_login(self, response):

        police_stations = response.xpath(
            '//*[@id="ContentPlaceHolder1_lbltotalrecord"]/text()').get()
        print(police_stations)

# --- run without project and save in `output.csv` ---


from scrapy.crawler import CrawlerProcess

c = CrawlerProcess({
    'USER_AGENT': 'Mozilla/5.0',
})
c.crawl(abcSpider)
c.start() 
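
To narrow down the __VIEWSTATE / __EVENTVALIDATION problem mentioned above, a debugging helper like the one below (hypothetical, not part of the original answer) can print what Scrapy actually extracted, so it can be compared with the values the browser posts (DevTools -> Network -> Form Data):

def dump_hidden_fields(response):
    # Print the hidden ASP.NET form fields as Scrapy sees them; truncated because
    # __VIEWSTATE and __EVENTVALIDATION are very long strings.
    for field in ('__VIEWSTATE', '__VIEWSTATEGENERATOR', '__EVENTVALIDATION', '__PREVIOUSPAGE'):
        value = response.xpath(f"//input[@id='{field}']/@value").get() or ''
        print(f'{field}: {value[:60]}... (length {len(value)})')

# Call it at the top of parse() for a quick comparison, e.g.:
#     def parse(self, response):
#         dump_hidden_fields(response)
#         ...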

EDIT: Code with hard-coded values that gives me results, but I don't know for how long these values stay valid and whether they work for different dates.

#!/usr/bin/env python3

import scrapy
from scrapy import FormRequest


class abcSpider(scrapy.Spider):
    name = 'abc'
    allowed_domains = ['citizen.mahapolice.gov.in']

    def start_requests(self):
        yield scrapy.Request(
            url='https://citizen.mahapolice.gov.in/Citizen/MH/PublishedFIRs.aspx',
            headers={
                'Referer': 'https://citizen.mahapolice.gov.in/Citizen/MH/PublishedFIRs.aspx'
            },
            callback=self.parse
        )

    def parse(self, response):

        yield FormRequest.from_response(
            response,
            formid='form1',
            formdata={
                '__EVENTTARGET': '',
                '__EVENTARGUMENT': '',
                '__LASTFOCUS': '',
                '__VIEWSTATE': '/wEPDwUKLTIwNzQyOTkwOA9kFgJmD2QWAgIDD2QWIAIRDw8WAh4EVGV4dAUyPGgxPk1haGFyYXNodHJhIFBvbGljZSAtIFNlcnZpY2VzIGZvciBDaXRpemVuPC9oMT5kZAITDw8WAh8ABT88aDI+Q3JpbWUgYW5kIENyaW1pbmFsIFRyYWNraW5nIE5ldHdvcmsgYW5kIFN5c3RlbXMgKENDVE5TKTxoMj5kZAIVDw8WAh8ABSLigJxFbXBvd2VyaW5nIFBvbGljZSBUaHJvdWdoIElU4oCdZGQCFw8PFgIeCEltYWdlVXJsBRV+L0ltYWdlcy90YWJfSG9tZS5wbmcWBB4Lb25tb3VzZW92ZXIFI3RoaXMuc3JjPScuLi9JbWFnZXMvdGFiX0hvbWVSTy5wbmcnHgpvbm1vdXNlb3V0BSF0aGlzLnNyYz0nLi4vSW1hZ2VzL3RhYl9Ib21lLnBuZydkAhkPDxYCHwEFGH4vSW1hZ2VzL3RhYl9BYm91dFVzLnBuZxYEHwIFJnRoaXMuc3JjPScuLi9JbWFnZXMvdGFiX0Fib3V0VXNSTy5wbmcnHwMFJHRoaXMuc3JjPScuLi9JbWFnZXMvdGFiX0Fib3V0VXMucG5nJ2QCGw8PFgIfAQUffi9JbWFnZXMvdGFiX0NpdGl6ZW5DaGFydGVyLnBuZxYEHwIFLXRoaXMuc3JjPScuLi9JbWFnZXMvdGFiX0NpdGl6ZW5DaGFydGVyUk8ucG5nJx8DBSt0aGlzLnNyYz0nLi4vSW1hZ2VzL3RhYl9DaXRpemVuQ2hhcnRlci5wbmcnZAIdDw8WAh8BBRx+L0ltYWdlcy90YWJfQ2l0aXplbkluZm8ucG5nFgQfAgUqdGhpcy5zcmM9Jy4uL0ltYWdlcy90YWJfQ2l0aXplbkluZm9STy5wbmcnHwMFKHRoaXMuc3JjPScuLi9JbWFnZXMvdGFiX0NpdGl6ZW5JbmZvLnBuZydkAh8PDxYCHwEFKH4vSW1hZ2VzL3RhYl9PbmxpbmVTZXJ2aWNlc19FbmdfYmx1ZS5wbmcWBB8CBTJ0aGlzLnNyYz0nLi4vSW1hZ2VzL3RhYl9PbmxpbmVTZXJ2aWNlc19FbmdfUk8ucG5nJx8DBTR0aGlzLnNyYz0nLi4vSW1hZ2VzL3RhYl9PbmxpbmVTZXJ2aWNlc19FbmdfYmx1ZS5wbmcnZAIhDw8WAh8BBR9+L0ltYWdlcy90YWJfT25saW5lU2VydmljZXMucG5nFgQfAgUtdGhpcy5zcmM9Jy4uL0ltYWdlcy90YWJfT25saW5lU2VydmljZXNSTy5wbmcnHwMFK3RoaXMuc3JjPScuLi9JbWFnZXMvdGFiX09ubGluZVNlcnZpY2VzLnBuZydkAiMPZBYCAgEPZBYIAgEPZBYIAgEPZBYMAgMPDxYEHgdUb29sVGlwBRpFbnRlciBEYXRlIG9mIFJlZ2lzdHJhdGlvbh4JTWF4TGVuZ3RoZmRkAgkPFggeDERpc3BsYXlNb25leQspggFBamF4Q29udHJvbFRvb2xraXQuTWFza2VkRWRpdFNob3dTeW1ib2wsIEFqYXhDb250cm9sVG9vbGtpdCwgVmVyc2lvbj00LjEuNDA0MTIuMCwgQ3VsdHVyZT1uZXV0cmFsLCBQdWJsaWNLZXlUb2tlbj0yOGYwMWIwZTg0YjZkNTNlAB4OQWNjZXB0TmVnYXRpdmULKwQAHg5JbnB1dERpcmVjdGlvbgsphgFBamF4Q29udHJvbFRvb2xraXQuTWFza2VkRWRpdElucHV0RGlyZWN0aW9uLCBBamF4Q29udHJvbFRvb2xraXQsIFZlcnNpb249NC4xLjQwNDEyLjAsIEN1bHR1cmU9bmV1dHJhbCwgUHVibGljS2V5VG9rZW49MjhmMDFiMGU4NGI2ZDUzZQAeCkFjY2VwdEFtUG1oZAITDw8WBB8EBRpFbnRlciBEYXRlIG9mIFJlZ2lzdHJhdGlvbh8FZmRkAhkPFggfBgsrBAAfBwsrBAAfCAsrBQAfCWhkAiEPEA8WBh4ORGF0YVZhbHVlRmllbGQFC0RJU1RSSUNUX0NEHg1EYXRhVGV4dEZpZWxkBQhESVNUUklDVB4LXyFEYXRhQm91bmRnZBAVMQZTZWxlY3QKQUhNRUROQUdBUgVBS09MQQ1BTVJBVkFUSSBDSVRZDkFNUkFWQVRJIFJVUkFMD0FVUkFOR0FCQUQgQ0lUWRBBVVJBTkdBQkFEIFJVUkFMBEJFRUQIQkhBTkRBUkESQlJJSEFOIE1VTUJBSSBDSVRZCEJVTERIQU5BCkNIQU5EUkFQVVIFREhVTEUKR0FEQ0hJUk9MSQZHT05ESUEHSElOR09MSQdKQUxHQU9OBUpBTE5BCEtPTEhBUFVSBUxBVFVSC05BR1BVUiBDSVRZDE5BR1BVUiBSVVJBTAZOQU5ERUQJTkFORFVSQkFSC05BU0hJSyBDSVRZDE5BU0hJSyBSVVJBTAtOQVZJIE1VTUJBSQlPU01BTkFCQUQHUEFMR0hBUghQQVJCSEFOSRBQSU1QUkktQ0hJTkNIV0FECVBVTkUgQ0lUWQpQVU5FIFJVUkFMBlJBSUdBRBJSQUlMV0FZIEFVUkFOR0FCQUQOUkFJTFdBWSBNVU1CQUkOUkFJTFdBWSBOQUdQVVIMUkFJTFdBWSBQVU5FCVJBVE5BR0lSSQZTQU5HTEkGU0FUQVJBClNJTkRIVURVUkcMU09MQVBVUiBDSVRZDVNPTEFQVVIgUlVSQUwKVEhBTkUgQ0lUWQtUSEFORSBSVVJBTAZXQVJESEEGV0FTSElNCFlBVkFUTUFMFTEGU2VsZWN0BTE5MzcyBTE5MzczBTE5ODQyBTE5Mzc0BTE5NDA5BTE5Mzc1BTE5Mzc3BTE5Mzc2BTE5Mzc4BTE5Mzc5BTE5MzgxBTE5MzgyBTE5NDAzBTE5ODQ1BTE5ODQ2BTE5Mzg0BTE5MzgwBTE5Mzg2BTE5NDA1BTE5Mzg3BTE5Mzg4BTE5Mzg5BTE5ODQ0BTE5NDA4BTE5MzkwBTE5ODQxBTE5MzkxBTE5MzcxBTE5MzkyBTE5ODQ3BTE5MzkzBTE5Mzk0BTE5Mzg1BTE5ODQ4BTE5NDA0BTE5NDAyBTE5MzgzBTE5Mzk1BTE5Mzk2BTE5Mzk3BTE5NDA2BTE5NDEwBTE5Mzk4BTE5Mzk5BTE5NDA3BTE5NDAwBTE5ODQzBTE5NDAxFCsDMWdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2dnZ2dnZ2dnZ2dnZ2dnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cWAWZkAicPEGQQFQEGU2VsZWN0FQEGU2VsZWN0FCsDAWdkZAIDDw8WAh8ABQZTZWFyY2hkZAIFDw8WAh8ABQVDbGVhcmRkAgcPDxYCHwAFBUNsb3NlZGQCAw9kFgJmD2QWAgIDDxBkDxYBZhYBBQtWaWV3IFJlY29yZBYBZmQCCQ88KwARAgEQFgAWABYADBQrAABkAgs
PDxYCHgdWaXNpYmxlZ2QWAgIBD2QWAgIFDw8WAh8ABQJHb2RkAiUPDxYCHwAFB1NpdGVNYXBkZAInDw8WAh8ABRRQb2xpY2UgVW5pdHMgV2Vic2l0ZWRkAikPDxYCHwAFC0Rpc2NsYWltZXJzZGQCKw8PFgIfAAUDRkFRZGQCLQ8PFgIfAAUKQ29udGFjdCBVc2RkAi8PDxYCHwAFBzgzNDkxNDhkZBgCBR5fX0NvbnRyb2xzUmVxdWlyZVBvc3RCYWNrS2V5X18WCAUNY3RsMDAkbG1nSG9tZQUMY3RsMDAkbG1nQWJ0BQ1jdGwwMCRsbWdDaHJ0BQ1jdGwwMCRsbWdJbmZvBQ5jdGwwMCRsbWdEd25sZAULY3RsMDAkbG1nT1MFM2N0bDAwJENvbnRlbnRQbGFjZUhvbGRlcjEkaW1nRGF0ZU9mUmVnaXN0cmF0aW9uRnJvbQUxY3RsMDAkQ29udGVudFBsYWNlSG9sZGVyMSRpbWdEYXRlT2ZSZWdpc3RyYXRpb25UbwUlY3RsMDAkQ29udGVudFBsYWNlSG9sZGVyMSRnZHZEZWFkQm9keQ9nZJLUBB4bd3CH8EeW9a0lIRLz9afH',
                '__VIEWSTATEGENERATOR': '6F2EA376',
                '__PREVIOUSPAGE': '6Fkypj_FbKCMscMOIEbFwiAIl-t4XMDVxhwkenT13SdXVANmcLkeKVNreNUcxzCFPd2Pxt-oh_2N7OVcM2YpQJ9h0re0OFqkn5XLvLpF1J-DFQ0h0',
                '__EVENTVALIDATION': '/wEdAFbuJNLDGfYOJbFWhYC0CtoGCssMeMRH46lUxWxNoH/QjR5JLHBufgCBaXKcLsIHFZg2MfFCqAQ55R5q232FZgK2qoCdmcL8o03Ga7p3SNpVviXoWLdz7AIdB4qHlFb/Ei9/1ch/aUhwAcGED/suJluf7ISsvoU9AiyuaEemMV5BBJnd8M9l/EB8CbzCs/Qj58HeW1DBXpopxThMkmM3IaEA4f83zm8GjIMpdMbZJo0bg/ou0osxK9vw1/I5QAXjT4WAelg7J4xZgxz60IVmQFQVBwFQHg/XFH9pTR8T2Gs+V8qukw1XTUYPesJgPqkOxZQh262jaQ7BxUOV7QoxeNck2w47G8rm/lqu6eH38UvMjATEI1G+tctApp1T0wcXwuNCLn3Z0VPV65eVNYp7hMU8lDrezCJH7PKOMYlCjf6maxW322Wg8dLjJ0oAXaSslqZHs1bB/7i2oDFBz4DJ85TGKEfqFutX9Sc8iba6A2UA3Jbp98jppZoyKABVAKm4ScwkZSsqCnmWlHZE1g1cl5KTdz2wIx74ktDJhIyxSHIwnuUrnMnZVi7M1yfB08jysAZLiaKqyALYmaPTP4iB7/cEzRldPEjwCvWpP992wRUTSVioExXj+mq+aV3ovp1s3PYdGAfIul3shD4atfGh7x1DmI0SjJjBG5MN09bwTja6X4d1tYyTUWpH5kv7kquz0k9MSPwDuX8kZiAr7Go4LvLA1v8x//T/i3cmHZhqsqcHSaOvUIY7oYzJpYB6269Eg/Eet2MzUbATNVMVJ6z0ps0G3+QTnao16M2kNs5Amrrfs7FS6VV+1VO0VlVoBMI3MVL2a5ZuFE0VYjpXs1Ie80zilwTI+Q87lRt0RiHvm7no9Ryh+i/NQ0SvqV6XUmpTvyESyCOHyB4V0JKFy3ngQefeiU1Bhw0YKqnM8XuA/3OuCrtvoVW4iPEPfzfW0U992cke6MjKSj+bFRXox1RixVsclKaaKq2cbpV4jLUztc0v0FIrdBwoILAS39YNKuaLebG44MFbUIrfl2XY7SNDPfSk0ikZQF6oN5BHioH7XmNMzwk0vSNeQ5gKYDKx4xnue5CFyfuTFLCx43hksDtRhvJXkn2iJAzo5kx+7Oa9LqM3/7ZUva29woTjHIRcXIx5V+VUFaPbjpSXixVRCTuVCcHTNPAsoz+6EiXvsfi8lrX0f3D7YCO4ridVhQClK705rktyQAmmeO0iV/Vh5DSf8FhvD58uSORbTqGUZryylC9SPojWj+h3++zOroq6bTLe/itZW7f6vF0eyAgMysofFozRdBhZo5tdiQR7X5+feZXm8Mh9dkmrkjndCY6MJW+Z6GMDEkD2DRN460MRst3Ymkivnm8me7KLtZghplypPrBnBqKdsArB4XzeK7XbSYhMVY6qipQKdH6cU6XeeZcmTS57SquMwbHZEhKbKL6YxYuvZmZZF8nlCQZL4zlr//3g5nyOTFulzGhY80/Z1HbCJJ6LxQbS0yD9Thl7sm6WVjYxB23A6c0dbgG4R+nkAQKMqcH6ZVn78Nu9BSKKrOVmNjQwbSsS5vUv6MFDROG0CrK/eNGU0C14yGuWM5HkGE/DyCzIKRYuDsUr1CVxXS+jyCWdB8LwDiFJV7yxNt0d/PtikuBIdrCGbTGK/JJV58CVginDn/qsq9scauaAbl2FvBQQCQMuNszcsKvvFie32VnIgxjp9PYR0Y1JxT7s4XE1eEASLLIarsRVQGJxRqon8iLGYHzEO3PB1DG6typAyQ+VpxaMiZBUOTlCcsXDdY08Kwd7PKgvFhd/UCrh6PvV7qCAPcsiiHYjV/MyKFDCcqDP506hiuHs8/lYYzvu5lRwgpFGVVnV',
                'ctl00$hdnSessionIdleTime': '',
                'ctl00$hdnUserUniqueId': '',
                'ctl00$ContentPlaceHolder1$txtDateOfRegistrationFrom': '01/07/2020',
                'ctl00$ContentPlaceHolder1$meeDateOfRegistrationFrom_ClientState': '',
                'ctl00$ContentPlaceHolder1$txtDateOfRegistrationTo': '03/07/2020',
                'ctl00$ContentPlaceHolder1$meeDateOfRegistrationTo_ClientState': '',
                'ctl00$ContentPlaceHolder1$ddlDistrict': '19372',
                'ctl00$ContentPlaceHolder1$ddlPoliceStation': 'Select',
                'ctl00$ContentPlaceHolder1$txtFirno': '',
                'ctl00$ContentPlaceHolder1$btnSearch': 'Search',
                'ctl00$ContentPlaceHolder1$ucRecordView$ddlPageSize': '0',
                'ctl00$ContentPlaceHolder1$ucGridRecordView$txtPageNumber': '',
            },
            callback=(self.after_login),
        )

    def after_login(self, response):

        police_stations = response.xpath(
            '//*[@id="ContentPlaceHolder1_lbltotalrecord"]/text()').get()
        print(police_stations)

# --- run without project and save in `output.csv` ---


from scrapy.crawler import CrawlerProcess

c = CrawlerProcess({
    'USER_AGENT': 'Mozilla/5.0',
})
c.crawl(abcSpider)
c.start() 
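
The run block mentions output.csv but the settings do not actually configure a feed export. If CSV output is wanted, one option (assuming after_login() is changed to yield an item instead of printing) is the FEEDS setting available in Scrapy 2.1+:

from scrapy.crawler import CrawlerProcess

c = CrawlerProcess({
    'USER_AGENT': 'Mozilla/5.0',
    # Write every yielded item to output.csv; after_login() would need to
    # `yield {'total_records': police_stations}` for anything to be exported.
    'FEEDS': {'output.csv': {'format': 'csv'}},
})
c.crawl(abcSpider)
c.start()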