I'm processing a list of several thousand domain names from a DNSBL through dig, creating a CSV of URLs and IPs. This is a very time-consuming process that can take several hours. My server's DNSBL updates every fifteen minutes. Is there a way I can increase the throughput of my Python script to keep up with the server's updates?
Edit: The script, as requested.
import re
import subprocess as sp

text = open("domainslist", 'r')
text = text.read()
text = re.split("\n+", text)

file = open('final.csv', 'w')

for url in text:
    try:
        # Resolve each name with dig, one at a time
        ip = sp.Popen(["dig", "+short", url], stdout=sp.PIPE)
        ip = re.split("\n+", ip.stdout.read())
        file.write(url + "," + ip[0] + "\n")
    except:
        pass
Well, it's probably the name resolution that's taking you so long. If you factor that out (i.e., if somehow dig returned immediately), Python should easily cope with thousands of entries.
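To check where the time actually goes, you can time a single dig call. A quick sketch, using example.com as a stand-in domain:

import subprocess as sp
import time

start = time.time()
# One synchronous lookup, waiting for dig to finish
sp.Popen(["dig", "+short", "example.com"], stdout=sp.PIPE).communicate()
print("one lookup took %.2f seconds" % (time.time() - start))

Multiply that by the number of domains in your list and you'll likely see where the hours come from.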
That said, you should try a threaded approach. That would (in theory) resolve several addresses at the same time, rather than sequentially. You could just as well keep using dig for that, and modifying the example code below to do so should be trivial (a sketch of that variant follows the example), but, to make things interesting (and hopefully more Pythonic), let's use an existing module for it: dnspython (http://www.dnspython.org).
So, install it:
sudo pip install -f http://www.dnspython.org/kits/1.8.0/ dnspython
And then try something like the following:
import threading
from dns import resolver

class Resolver(threading.Thread):
    """Resolves a single address in its own thread and stores the result."""
    def __init__(self, address, result_dict):
        threading.Thread.__init__(self)
        self.address = address
        self.result_dict = result_dict

    def run(self):
        try:
            result = resolver.query(self.address)[0].to_text()
            self.result_dict[self.address] = result
        except resolver.NXDOMAIN:
            # Unresolvable names are simply skipped
            pass

def main():
    infile = open("domainslist", "r")
    intext = infile.readlines()
    threads = []
    results = {}
    # Fire off one resolver thread per non-empty line
    for address in [address.strip() for address in intext if address.strip()]:
        resolver_thread = Resolver(address, results)
        threads.append(resolver_thread)
        resolver_thread.start()

    # Wait for every lookup to finish before writing the CSV
    for thread in threads:
        thread.join()

    outfile = open('final.csv', 'w')
    outfile.write("\n".join("%s,%s" % (address, ip) for address, ip in results.iteritems()))
    outfile.close()

if __name__ == '__main__':
    main()
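And if you would rather keep shelling out to dig, as mentioned above, only run() really needs to change. A minimal sketch of that variant (the DigResolver name is just illustrative, and it assumes dig is on your PATH):

import subprocess as sp
import threading

class DigResolver(threading.Thread):
    """Like Resolver above, but uses a dig subprocess instead of dnspython."""
    def __init__(self, address, result_dict):
        threading.Thread.__init__(self)
        self.address = address
        self.result_dict = result_dict

    def run(self):
        proc = sp.Popen(["dig", "+short", self.address], stdout=sp.PIPE)
        output = proc.communicate()[0]
        # dig +short prints one answer per line; keep the first, if any
        lines = output.split("\n")
        if lines and lines[0]:
            self.result_dict[self.address] = lines[0]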
If it turns out that too many threads get started at once, you could try doing it in batches, or using a queue (see http://www.ibm.com/developerworks/aix/library/au-threadingpython/ for an example).
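For instance, a fixed pool of workers pulling addresses from a Queue keeps the thread count bounded no matter how long the list is. A rough sketch along those lines (resolve_worker and the pool size of 20 are just illustrative choices):

import threading
import Queue
from dns import resolver

def resolve_worker(queue, results):
    # Each worker keeps pulling addresses until the queue is drained
    while True:
        try:
            address = queue.get_nowait()
        except Queue.Empty:
            return
        try:
            results[address] = resolver.query(address)[0].to_text()
        except resolver.NXDOMAIN:
            pass
        finally:
            queue.task_done()

def resolve_all(addresses, num_workers=20):
    queue = Queue.Queue()
    for address in addresses:
        queue.put(address)
    results = {}
    # Only num_workers lookups are ever in flight at once
    for _ in range(num_workers):
        threading.Thread(target=resolve_worker, args=(queue, results)).start()
    queue.join()
    return results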