I'm trying to scrape some pages from LinkedIn with BeautifulSoup, but I keep getting the error "HTTP Error 999: Request denied". Is there any way to avoid this error? As you can see from my code, I have tried both Mechanize and urllib2, and both give me the same error.
from __future__ import unicode_literals
from bs4 import BeautifulSoup  # bs4, not the old BeautifulSoup 3 package
import codecs
import urllib
import urllib2
import urlparse
import mechanize
fout5 = codecs.open('data.csv','r', encoding='utf-8', errors='replace')
for y in range(2, 10, 1):
    url = "https://www.linkedin.com/job/analytics-%2b-data-jobs-united-kingdom/?sort=relevance&page_num=1"
    params = {'page_num': y}
    # rebuild the URL with the current page number in the query string
    url_parts = list(urlparse.urlparse(url))
    query = dict(urlparse.parse_qsl(url_parts[4]))
    query.update(params)
    url_parts[4] = urllib.urlencode(query)
    y = urlparse.urlunparse(url_parts)
    #url = urllib2.urlopen(y)   # urllib2 attempt -- same HTTP 999 error
    op = mechanize.Browser()    # use mechanize's browser
    op.set_handle_robots(False) # tell mechanize to ignore robots.txt
    j = op.open(y)
    #print op.title()
    soup1 = BeautifulSoup(j.read())  # parse the response body, not the URL string
    print soup1
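As a side note, the URL-rebuilding logic in the loop can be checked on its own, independently of the HTTP 999 problem. A minimal sketch (written with Python 3's `urllib.parse`, where `urlparse` and `urllib.urlencode` now live):

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

def set_page_num(url, page):
    """Return `url` with its page_num query parameter set to `page`."""
    parts = list(urlparse(url))
    query = dict(parse_qsl(parts[4]))  # parts[4] is the query string
    query['page_num'] = str(page)
    parts[4] = urlencode(query)
    return urlunparse(parts)

base = ("https://www.linkedin.com/job/analytics-%2b-data-jobs"
        "-united-kingdom/?sort=relevance&page_num=1")
for page in range(2, 10):
    print(set_page_num(base, page))
```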
You should use the LinkedIn REST API (https://developer.linkedin.com/docs/rest-api), either directly or through python-linkedin (https://pypi.python.org/pypi/python-linkedin). It gives you direct access to the data, instead of trying to scrape a JavaScript-heavy website.
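For illustration, a minimal sketch of the python-linkedin OAuth flow, following the library's README; the key, secret, and return URL are placeholders you get by registering an application with LinkedIn, and the commented calls require a live redirect code:

```python
from linkedin import linkedin  # pip install python-linkedin

# Placeholder credentials -- register an app on LinkedIn's developer site
API_KEY = 'your-api-key'
API_SECRET = 'your-api-secret'
RETURN_URL = 'http://localhost:8000'

# Step 1: build the OAuth authorization URL and open it in a browser
authentication = linkedin.LinkedInAuthentication(
    API_KEY, API_SECRET, RETURN_URL,
    linkedin.PERMISSIONS.enums.values())
print(authentication.authorization_url)

# Step 2: after LinkedIn redirects back with ?code=..., exchange the
# code for an access token, then query the API through the client
# authentication.authorization_code = 'code-from-redirect'
# authentication.get_access_token()
application = linkedin.LinkedInApplication(authentication)
# application.get_profile()  # returns the authenticated user's profile
```

This returns structured JSON instead of HTML, so no BeautifulSoup parsing is needed.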