How to submit a query to a .aspx page in Python

2024-04-26

I need to scrape query results from a .aspx web page.

http://legistar.council.nyc.gov/Legislation.aspx

The URL is static, so how do I submit a query to that page and get the results? Assume we need to select "all years" and "all types" from the respective drop-down menus.

Someone out there must know how to do this.


As an overview, you will need to perform four main tasks:

  • submit requests to the web site,
  • retrieve the responses from the site,
  • parse these responses,
  • iterate over the tasks above, with some logic and the parameters associated with navigation (to the "next" pages in the results list)
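The loop in the last bullet can be sketched as follows. This is only a skeleton: the three callables are hypothetical stand-ins for the request and parsing code developed in the rest of this answer, and the dry run at the bottom uses canned pages instead of live HTTP:

```python
# A sketch of the iteration logic from the bullet list above.  The three
# callables are hypothetical stand-ins: in a real scraper they would wrap
# the urllib/urllib2 request code and the HTML parsing discussed below.

def scrape_all_pages(fetch_page, extract_rows, extract_next_params, first_params):
    """Submit / retrieve / parse, then iterate until there is no 'next' page."""
    rows = []
    params = first_params
    while params is not None:
        page = fetch_page(params)            # submit request, retrieve response
        rows.extend(extract_rows(page))      # parse this response
        params = extract_next_params(page)   # None once the last page is reached
    return rows

# Dry run with canned "pages" instead of live HTTP, to show the control flow:
fake_site = {'page1': (['result A', 'result B'], 'page2'),
             'page2': (['result C'], None)}
all_rows = scrape_all_pages(lambda p: p,
                            lambda page: fake_site[page][0],
                            lambda page: fake_site[page][1],
                            'page1')
print(all_rows)  # ['result A', 'result B', 'result C']
```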

The HTTP request and response handling is done with methods and classes from Python's standard library modules urllib (http://docs.python.org/library/urllib.html) and urllib2 (http://docs.python.org/library/urllib2.html). The parsing of the HTML pages can be done with the standard library's HTMLParser (http://docs.python.org/library/htmlparser.html) or with other modules such as Beautiful Soup (http://www.crummy.com/software/BeautifulSoup/).
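As an illustration of the standard-library route, a minimal HTMLParser subclass that collects every link in a page might look as follows (shown with the Python 3 module name html.parser; on Python 2 the module is named HTMLParser):

```python
from html.parser import HTMLParser  # Python 2: "from HTMLParser import HTMLParser"

class LinkCollector(HTMLParser):
    """Collect the href of every <a> tag seen in the document."""
    def __init__(self):
        HTMLParser.__init__(self)
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for name, value in attrs:
                if name == 'href':
                    self.links.append(value)

parser = LinkCollector()
parser.feed('<html><body><a href="Legislation.aspx?page=2">next</a></body></html>')
print(parser.links)  # ['Legislation.aspx?page=2']
```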

The following snippet demonstrates requesting and receiving a search on the site indicated in the question. The site is ASP-driven, and as a result we need to be sure to send several form fields, some of them with "horrible" values, as these are used by the ASP logic to maintain state and to authenticate, to some extent, that the request is indeed being submitted. The requests have to be sent with the HTTP POST method, as that is what this ASP application expects. The main difficulty is identifying the form fields and associated values which ASP expects (getting pages with Python is the easy part).

This code is functional, or more precisely, was functional, until I removed most of the VSTATE value, and possibly introduced a typo or two by adding the comments.

import urllib
import urllib2

uri = 'http://legistar.council.nyc.gov/Legislation.aspx'

# The HTTP headers are useful to simulate a particular browser (some sites
# deny access to non-browsers: bots, etc.).
# We also need to pass the content type.
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.13) Gecko/2009073022 Firefox/3.0.13',
    'Accept': 'text/html,application/xhtml+xml,application/xml; q=0.9,*/*; q=0.8',
    'Content-Type': 'application/x-www-form-urlencoded'
}

# We group the form fields and their values in a list (any
# iterable, actually) of name-value tuples.  This helps
# with clarity and also makes it easier to encode them later.

formFields = (
   # the viewstate is actually 800+ characters in length! I truncated it
   # for this sample code.  It can be lifted from the first page
   # obtained from the site.  It may be ok to hardcode this value, or
   # it may have to be refreshed each time / each day, by essentially
   # running an extra page request and parse, for this specific value.
   (r'__VSTATE', r'7TzretNIlrZiKb7EOB3AQE ... ...2qd6g5xD8CGXm5EftXtNPt+H8B'),

   # following are more of these ASP form fields
   (r'__VIEWSTATE', r''),
   (r'__EVENTVALIDATION', r'/wEWDwL+raDpAgKnpt8nAs3q+pQOAs3q/pQOAs3qgpUOAs3qhpUOAoPE36ANAve684YCAoOs79EIAoOs89EIAoOs99EIAoOs39EIAoOs49EIAoOs09EIAoSs99EI6IQ74SEV9n4XbtWm1rEbB6Ic3/M='),
   (r'ctl00_RadScriptManager1_HiddenField', ''), 
   (r'ctl00_tabTop_ClientState', ''), 
   (r'ctl00_ContentPlaceHolder1_menuMain_ClientState', ''),
   (r'ctl00_ContentPlaceHolder1_gridMain_ClientState', ''),

   #but then we come to fields of interest: the search
   #criteria the collections to search from etc.
                                                       # Check boxes  
   (r'ctl00$ContentPlaceHolder1$chkOptions$0', 'on'),  # file number
   (r'ctl00$ContentPlaceHolder1$chkOptions$1', 'on'),  # Legislative text
   (r'ctl00$ContentPlaceHolder1$chkOptions$2', 'on'),  # attachment
                                                       # etc. (not all listed)
   (r'ctl00$ContentPlaceHolder1$txtSearch', 'york'),   # Search text
   (r'ctl00$ContentPlaceHolder1$lstYears', 'All Years'),  # Years to include
   (r'ctl00$ContentPlaceHolder1$lstTypeBasic', 'All Types'),  #types to include
   (r'ctl00$ContentPlaceHolder1$btnSearch', 'Search Legislation')  # Search button itself
)

# these have to be encoded    
encodedFields = urllib.urlencode(formFields)

req = urllib2.Request(uri, encodedFields, headers)
f = urllib2.urlopen(req)    # that's the actual call to the HTTP site.

# *** here would normally be the in-memory parsing of f 
#     contents, but instead I store this to file
#     this is useful during design, allowing to have a
#     sample of what is to be parsed in a text editor, for analysis.

try:
  fout = open('tmp.htm', 'w')
except IOError:
  print('Could not open output file\n')
else:
  fout.writelines(f.readlines())
  fout.close()
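As the comments on __VSTATE above note, these "horrible" hidden values can be lifted from the first page obtained from the site. A minimal sketch of that extraction, assuming the standard ASP.NET hidden-input markup (the sample values below are made up and truncated; the Python 3 module name html.parser is used, on Python 2 it is HTMLParser):

```python
from html.parser import HTMLParser  # Python 2: "from HTMLParser import HTMLParser"

class HiddenFieldCollector(HTMLParser):
    """Gather the name/value pairs of <input type="hidden"> elements."""
    def __init__(self):
        HTMLParser.__init__(self)
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        d = dict(attrs)
        if tag == 'input' and d.get('type') == 'hidden':
            self.fields[d.get('name')] = d.get('value', '')

# Sample markup in the shape ASP.NET emits; the values here are invented.
sample = ('<form><input type="hidden" name="__VIEWSTATE" value="dDwtMTA3..." />'
          '<input type="hidden" name="__EVENTVALIDATION" value="/wEWDw..." /></form>')
collector = HiddenFieldCollector()
collector.feed(sample)
print(collector.fields['__VIEWSTATE'])  # dDwtMTA3...
```

The dictionary produced this way can be merged into formFields before encoding, refreshing the state values on each run instead of hardcoding them.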

So much for getting the initial page. As stated above, one then needs to parse the page, i.e. find the parts of interest, gather them as appropriate, and store them to file/database/wherever. This job can be done in very many ways: using HTML parsers, or XSLT-type technologies (indeed after parsing the HTML to XML), or even, for crude jobs, simple regular expressions. Also, one of the items one typically extracts is the "next info", i.e. a link of sorts, that can be used in a new request to the server to get subsequent pages.
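For the "crude jobs" end of that spectrum, pulling the next-page link out of the raw HTML with a regular expression might look like this (the anchor markup and the hlNext id are made-up examples, and as noted above a real HTML parser is more robust):

```python
import re

html_text = '... <a href="Legislation.aspx?page=3" id="hlNext">Next</a> ...'

# Crude: grab the href of the anchor whose text is "Next".  Fragile by
# design; a proper HTML parser is the better tool, as discussed above.
match = re.search(r'href="([^"]+)"[^>]*>Next<', html_text)
next_url = match.group(1) if match else None
print(next_url)  # Legislation.aspx?page=3
```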

This should give you a rough idea of what "long hand" HTML scraping entails. There are many other approaches to this, such as dedicated utilities, scripts in Mozilla (Firefox) GreaseMonkey plug-ins, XSLT...
