Scrapy crawlers

2023-11-20

Basic commands

Global commands (can be used anywhere, even outside a project)

  • scrapy fetch (download a given URL directly and print the response)
  • scrapy runspider (run a single spider file, which does not have to belong to a project)
  • scrapy settings (inspect the settings values)
  • scrapy shell (start the interactive shell)
D:\>scrapy shell
2018-12-12 19:25:33 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot: scrapybot)
2018-12-12 19:25:33 [scrapy.utils.log] INFO: Versions: lxml 4.2.5.0, libxml2 2.9.5, cssselect 1.0.3, parsel 1.5.1, w3lib 1.19.0, Twisted 18.9.0, Python 3.7.1 (v3.7.1:260ec2c36a, Oct 20 2018, 14:57:15) [MSC v.1915 64 bit (AMD64)], pyOpenSSL 18.0.0 (OpenSSL 1.1.0j  20 Nov 2018), cryptography 2.4.2, Platform Windows-10-10.0.17134-SP0
2018-12-12 19:25:33 [scrapy.crawler] INFO: Overridden settings: {'DUPEFILTER_CLASS': 'scrapy.dupefilters.BaseDupeFilter', 'LOGSTATS_INTERVAL': 0}
2018-12-12 19:25:33 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole']
2018-12-12 19:25:33 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-12-12 19:25:33 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-12-12 19:25:33 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2018-12-12 19:25:33 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
[s] Available Scrapy objects:
[s]   scrapy     scrapy module (contains scrapy.Request, scrapy.Selector, etc)
[s]   crawler    <scrapy.crawler.Crawler object at 0x000002A069AFB9B0>
[s]   item       {}
[s]   settings   <scrapy.settings.Settings object at 0x000002A069AFB940>
[s] Useful shortcuts:
[s]   fetch(url[, redirect=True]) Fetch URL and update local objects (by default, redirects are followed)
[s]   fetch(req)                  Fetch a scrapy.Request and update local objects
[s]   shelp()           Shell help (print this help)
[s]   view(response)    View response in a browser
In [1]: print("hehe")
hehe
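
Besides plain Python, the shell exposes the shortcuts listed above. A minimal sketch of typical usage (assuming http://example.com is reachable from your network; output omitted):

In [2]: fetch("http://example.com")                    # download the URL and bind it to `response`
In [3]: response.css("title::text").extract_first()    # e.g. 'Example Domain'
In [4]: response.xpath("//a/@href").extract()          # all link hrefs on the page
In [5]: view(response)                                 # open the downloaded page in a browser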
  • scrapy version (print the Scrapy version)
D:\>scrapy version
Scrapy 1.5.1

D:\>
  • scrapy startproject <project name> (create a new project)
D:\>scrapy startproject spiders
New Scrapy project 'spiders', using template directory 'e:\\development\\python\\lib\\site-packages\\scrapy\\templates\\project', created in:
    D:\spiders

You can start your first spider with:
    cd spiders
    scrapy genspider example example.com
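
On disk the new project looks roughly like this (default template for this Scrapy version; the inner package is also called spiders here because of the project name chosen above):

D:\spiders
├── scrapy.cfg            # deploy configuration
└── spiders/              # the project's Python package
    ├── __init__.py
    ├── items.py          # item definitions
    ├── middlewares.py    # spider / downloader middlewares
    ├── pipelines.py      # item pipelines
    ├── settings.py       # project settings
    └── spiders/          # spider modules go here (genspider writes into this folder)
        └── __init__.py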
  • scrapy bench (run a quick local benchmark to gauge crawling performance on this machine)
D:\>scrapy bench
2018-12-12 19:24:10 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot: scrapybot)
2018-12-12 19:24:10 [scrapy.utils.log] INFO: Versions: lxml 4.2.5.0, libxml2 2.9.5, cssselect 1.0.3, parsel 1.5.1, w3lib 1.19.0, Twisted 18.9.0, Python 3.7.1 (v3.7.1:260ec2c36a, Oct 20 2018, 14:57:15) [MSC v.1915 64 bit (AMD64)], pyOpenSSL 18.0.0 (OpenSSL 1.1.0j  20 Nov 2018), cryptography 2.4.2, Platform Windows-10-10.0.17134-SP0
2018-12-12 19:24:10 [scrapy.crawler] INFO: Overridden settings: {'CLOSESPIDER_TIMEOUT': 10, 'LOGSTATS_INTERVAL': 1, 'LOG_LEVEL': 'INFO'}
2018-12-12 19:24:11 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.closespider.CloseSpider',
 'scrapy.extensions.logstats.LogStats']
2018-12-12 19:24:11 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-12-12 19:24:11 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-12-12 19:24:11 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2018-12-12 19:24:11 [scrapy.core.engine] INFO: Spider opened
2018-12-12 19:24:11 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-12-12 19:24:12 [scrapy.extensions.logstats] INFO: Crawled 69 pages (at 4140 pages/min), scraped 0 items (at 0 items/min)
2018-12-12 19:24:13 [scrapy.extensions.logstats] INFO: Crawled 150 pages (at 4860 pages/min), scraped 0 items (at 0 items/min)
2018-12-12 19:24:14 [scrapy.extensions.logstats] INFO: Crawled 214 pages (at 3840 pages/min), scraped 0 items (at 0 items/min)
2018-12-12 19:24:15 [scrapy.extensions.logstats] INFO: Crawled 278 pages (at 3840 pages/min), scraped 0 items (at 0 items/min)
2018-12-12 19:24:16 [scrapy.extensions.logstats] INFO: Crawled 334 pages (at 3360 pages/min), scraped 0 items (at 0 items/min)
2018-12-12 19:24:17 [scrapy.extensions.logstats] INFO: Crawled 382 pages (at 2880 pages/min), scraped 0 items (at 0 items/min)
2018-12-12 19:24:18 [scrapy.extensions.logstats] INFO: Crawled 430 pages (at 2880 pages/min), scraped 0 items (at 0 items/min)
2018-12-12 19:24:19 [scrapy.extensions.logstats] INFO: Crawled 478 pages (at 2880 pages/min), scraped 0 items (at 0 items/min)
2018-12-12 19:24:20 [scrapy.extensions.logstats] INFO: Crawled 526 pages (at 2880 pages/min), scraped 0 items (at 0 items/min)
2018-12-12 19:24:21 [scrapy.extensions.logstats] INFO: Crawled 574 pages (at 2880 pages/min), scraped 0 items (at 0 items/min)
2018-12-12 19:24:21 [scrapy.core.engine] INFO: Closing spider (closespider_timeout)
2018-12-12 19:24:22 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 263306,
 'downloader/request_count': 590,
 'downloader/request_method_count/GET': 590,
 'downloader/response_bytes': 1815754,
 'downloader/response_count': 590,
 'downloader/response_status_count/200': 590,
 'finish_reason': 'closespider_timeout',
 'finish_time': datetime.datetime(2018, 12, 12, 11, 24, 22, 225496),
 'log_count/INFO': 17,
 'request_depth_max': 20,
 'response_received_count': 590,
 'scheduler/dequeued': 590,
 'scheduler/dequeued/memory': 590,
 'scheduler/enqueued': 11801,
 'scheduler/enqueued/memory': 11801,
 'start_time': datetime.datetime(2018, 12, 12, 11, 24, 11, 368009)}
2018-12-12 19:24:22 [scrapy.core.engine] INFO: Spider closed (closespider_timeout)

D:\>

Project commands (only available from inside a project directory)

  • scrapy list (list the spiders defined in the current project)
D:\>cd he
D:\he>scrapy list
tianshan
  • scrapy genspider -l (list the available spider templates)
D:\he>scrapy genspider -l
Available templates:
  basic
  crawl
  csvfeed
  xmlfeed
  • scrapy genspider -t <template> <spider name> <domain> (create a spider from a template; note that you must be inside a project)
D:\>cd spiders

D:\spiders>scrapy genspider -t basic bd baidu.com
Created spider 'bd' using template 'basic' in module:
  spiders.spiders.bd
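
The generated bd.py follows the basic template and looks roughly like this:

# -*- coding: utf-8 -*-
import scrapy


class BdSpider(scrapy.Spider):
    name = 'bd'
    allowed_domains = ['baidu.com']
    start_urls = ['http://baidu.com/']

    def parse(self, response):
        pass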
  • scrapy crawl <spider name> (run that spider; in the capture below the DNS lookup for baidu.com happens to fail, so the run ends after a few retries)
D:\spiders>scrapy crawl bd
2018-12-12 19:21:12 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot: spiders)
2018-12-12 19:21:12 [scrapy.utils.log] INFO: Versions: lxml 4.2.5.0, libxml2 2.9.5, cssselect 1.0.3, parsel 1.5.1, w3lib 1.19.0, Twisted 18.9.0, Python 3.7.1 (v3.7.1:260ec2c36a, Oct 20 2018, 14:57:15) [MSC v.1915 64 bit (AMD64)], pyOpenSSL 18.0.0 (OpenSSL 1.1.0j  20 Nov 2018), cryptography 2.4.2, Platform Windows-10-10.0.17134-SP0
2018-12-12 19:21:12 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'spiders', 'NEWSPIDER_MODULE': 'spiders.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['spiders.spiders']}
2018-12-12 19:21:12 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2018-12-12 19:21:13 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-12-12 19:21:13 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-12-12 19:21:13 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2018-12-12 19:21:13 [scrapy.core.engine] INFO: Spider opened
2018-12-12 19:21:13 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-12-12 19:21:13 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2018-12-12 19:21:13 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET http://baidu.com/robots.txt> (failed 1 times): DNS lookup failed: no results for hostname lookup: baidu.com.
2018-12-12 19:21:13 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET http://baidu.com/robots.txt> (failed 2 times): DNS lookup failed: no results for hostname lookup: baidu.com.
2018-12-12 19:21:13 [scrapy.downloadermiddlewares.retry] DEBUG: Gave up retrying <GET http://baidu.com/robots.txt> (failed 3 times): DNS lookup failed: no results for hostname lookup: baidu.com.
2018-12-12 19:21:13 [scrapy.downloadermiddlewares.robotstxt] ERROR: Error downloading <GET http://baidu.com/robots.txt>: DNS lookup failed: no results for hostname lookup: baidu.com.
Traceback (most recent call last):
  File "e:\development\python\lib\site-packages\twisted\internet\defer.py", line 1416, in _inlineCallbacks
    result = result.throwExceptionIntoGenerator(g)
  File "e:\development\python\lib\site-packages\twisted\python\failure.py", line 491, in throwExceptionIntoGenerator
    return g.throw(self.type, self.value, self.tb)
  File "e:\development\python\lib\site-packages\scrapy\core\downloader\middleware.py", line 43, in process_request
    defer.returnValue((yield download_func(request=request,spider=spider)))
  File "e:\development\python\lib\site-packages\twisted\internet\defer.py", line 654, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "e:\development\python\lib\site-packages\twisted\internet\endpoints.py", line 975, in startConnectionAttempts
    "no results for hostname lookup: {}".format(self._hostStr)
twisted.internet.error.DNSLookupError: DNS lookup failed: no results for hostname lookup: baidu.com.
2018-12-12 19:21:13 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET http://baidu.com/> (failed 1 times): DNS lookup failed: no results for hostname lookup: baidu.com.
2018-12-12 19:21:13 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET http://baidu.com/> (failed 2 times): DNS lookup failed: no results for hostname lookup: baidu.com.
2018-12-12 19:21:13 [scrapy.downloadermiddlewares.retry] DEBUG: Gave up retrying <GET http://baidu.com/> (failed 3 times): DNS lookup failed: no results for hostname lookup: baidu.com.
2018-12-12 19:21:13 [scrapy.core.scraper] ERROR: Error downloading <GET http://baidu.com/>
Traceback (most recent call last):
  File "e:\development\python\lib\site-packages\twisted\internet\defer.py", line 1416, in _inlineCallbacks
    result = result.throwExceptionIntoGenerator(g)
  File "e:\development\python\lib\site-packages\twisted\python\failure.py", line 491, in throwExceptionIntoGenerator
    return g.throw(self.type, self.value, self.tb)
  File "e:\development\python\lib\site-packages\scrapy\core\downloader\middleware.py", line 43, in process_request
    defer.returnValue((yield download_func(request=request,spider=spider)))
  File "e:\development\python\lib\site-packages\twisted\internet\defer.py", line 654, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "e:\development\python\lib\site-packages\twisted\internet\endpoints.py", line 975, in startConnectionAttempts
    "no results for hostname lookup: {}".format(self._hostStr)
twisted.internet.error.DNSLookupError: DNS lookup failed: no results for hostname lookup: baidu.com.
2018-12-12 19:21:13 [scrapy.core.engine] INFO: Closing spider (finished)
2018-12-12 19:21:13 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 6,
 'downloader/exception_type_count/twisted.internet.error.DNSLookupError': 6,
 'downloader/request_bytes': 1278,
 'downloader/request_count': 6,
 'downloader/request_method_count/GET': 6,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2018, 12, 12, 11, 21, 13, 651125),
 'log_count/DEBUG': 7,
 'log_count/ERROR': 2,
 'log_count/INFO': 7,
 'retry/count': 4,
 'retry/max_reached': 2,
 'retry/reason_count/twisted.internet.error.DNSLookupError': 4,
 'scheduler/dequeued': 3,
 'scheduler/dequeued/memory': 3,
 'scheduler/enqueued': 3,
 'scheduler/enqueued/memory': 3,
 'start_time': datetime.datetime(2018, 12, 12, 11, 21, 13, 187234)}
2018-12-12 19:21:13 [scrapy.core.engine] INFO: Spider closed (finished)

D:\spiders>
  • scrapy edit <spider name> (open a spider's code directly in the configured editor)

    The main files in a Scrapy project

    items.py

    Defines the data fields you want to scrape.

    spiders/<spider name>.py (e.g. bd.py above)

    Parses the pages and extracts the data, returning items to the pipelines and new URLs (requests) to the scheduler.

    pipelines.py

    Post-processing of the scraped items, typically storage.

    settings.py

    The project settings file.

    Steps for writing a focused crawler

    Writing the item

    First define, in the item class, the data fields you want to scrape.
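
A minimal sketch, assuming we only want a title and a link (the field names are made up for illustration):

# items.py
import scrapy


class SpidersItem(scrapy.Item):
    # one Field per piece of data you want to scrape
    title = scrapy.Field()
    link = scrapy.Field()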

    Writing the spider

  • Import the item class, then instantiate it
  • Extract the data from the page and store it in the item
  • Return (yield) the item so it is handed on to the item pipeline (see the sketch after this list)
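
A hedged sketch of a spider that follows these three steps, reusing the bd spider and the illustrative SpidersItem fields from above (the selectors are placeholders, not real Baidu page structure):

# spiders/spiders/bd.py
import scrapy

from spiders.items import SpidersItem            # 1. import the item class


class BdSpider(scrapy.Spider):
    name = 'bd'
    allowed_domains = ['baidu.com']
    start_urls = ['http://baidu.com/']

    def parse(self, response):
        for a in response.css('a'):
            item = SpidersItem()                              # ...then instantiate it
            item['title'] = a.css('::text').extract_first()   # 2. extract data into the item
            item['link'] = a.css('::attr(href)').extract_first()
            yield item                                        # 3. hand the item to the pipelines
        # yielding a Request instead sends a new URL back to the scheduler:
        # yield scrapy.Request(next_url, callback=self.parse)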

    Writing the settings

  • Uncomment the ITEM_PIPELINES block in settings.py and set it to the real pipeline class name, as shown below
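
For the illustrative project used here that would look roughly like this:

# settings.py
ITEM_PIPELINES = {
    'spiders.pipelines.SpidersPipeline': 300,   # lower number = runs earlier
}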

    Writing the pipeline

    Store the scraped data here.
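
A minimal sketch of a pipeline that writes each item as a line of JSON (the file name is arbitrary):

# pipelines.py
import json


class SpidersPipeline(object):
    def open_spider(self, spider):
        self.file = open('items.jl', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        self.file.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item   # always return the item so later pipelines can see it

    def close_spider(self, spider):
        self.file.close()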

Reposted from: https://www.cnblogs.com/c-aha/p/10110438.html
