I've managed to write code that scrapes the data from the first page, and now I need to add a loop to this code so it scrapes the next "n" pages as well. The code is below.
I'd be grateful if someone could guide/help me write the code to scrape the data from the remaining pages.
Thanks!
from bs4 import BeautifulSoup
import requests
import csv

url = requests.get('https://wsc.nmbe.ch/search?sFamily=Salticidae&fMt=begin&sGenus=&gMt=begin&sSpecies=&sMt=begin&multiPurpose=slsid&sMulti=&mMt=contain&searchSpec=s').text
soup = BeautifulSoup(url, 'lxml')
elements = soup.find_all('div', style="border-bottom: 1px solid #C0C0C0; padding: 10px 0;")
#print(elements)

csv_file = open('wsc_scrape.csv', 'w')
csv_writer = csv.writer(csv_file)
csv_writer.writerow(['sp_name', 'species_author', 'status', 'family'])

for element in elements:
    sp_name = element.i.text.strip()
    print(sp_name)
    status = element.find('span', class_=['success label', 'error label']).text.strip()
    print(status)
    author_family = element.i.next_sibling.strip().split('|')
    species_author = author_family[0].strip()
    family = author_family[1].strip()
    print(species_author)
    print(family)
    print()
    csv_writer.writerow([sp_name, species_author, status, family])

csv_file.close()
You have to pass the page= parameter in the URL and iterate over all the pages:
from bs4 import BeautifulSoup
import requests
import csv

# newline='' prevents csv.writer from emitting blank rows on Windows
csv_file = open('wsc_scrape.csv', 'w', encoding='utf-8', newline='')
csv_writer = csv.writer(csv_file)
csv_writer.writerow(['sp_name', 'species_author', 'status', 'family'])

for i in range(151):  # the search spans 151 result pages at the time of writing
    # request page i+1 via the page= query parameter
    url = requests.get('https://wsc.nmbe.ch/search?page={}&sFamily=Salticidae&fMt=begin&sGenus=&gMt=begin&sSpecies=&sMt=begin&multiPurpose=slsid&sMulti=&mMt=contain&searchSpec=s'.format(i + 1)).text
    soup = BeautifulSoup(url, 'lxml')
    elements = soup.find_all('div', style="border-bottom: 1px solid #C0C0C0; padding: 10px 0;")
    for element in elements:
        sp_name = element.i.text.strip()
        print(sp_name)
        status = element.find('span', class_=['success label', 'error label']).text.strip()
        print(status)
        author_family = element.i.next_sibling.strip().split('|')
        species_author = author_family[0].strip()
        family = author_family[1].strip()
        print(species_author)
        print(family)
        print()
        csv_writer.writerow([sp_name, species_author, status, family])

csv_file.close()
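
If you'd rather not hardcode the page count, you can also keep fetching pages until one comes back with no result rows. Below is a minimal sketch of that approach. It assumes the site simply returns an empty result list past the last page; the BASE_URL constant and the one-second delay between requests are my own additions, while the selectors are the same ones used above:

from bs4 import BeautifulSoup
import requests
import csv
import time

# Same search query as above, with page= left as a placeholder (name is my own)
BASE_URL = ('https://wsc.nmbe.ch/search?page={}&sFamily=Salticidae&fMt=begin'
            '&sGenus=&gMt=begin&sSpecies=&sMt=begin&multiPurpose=slsid'
            '&sMulti=&mMt=contain&searchSpec=s')

with open('wsc_scrape.csv', 'w', encoding='utf-8', newline='') as csv_file:
    csv_writer = csv.writer(csv_file)
    csv_writer.writerow(['sp_name', 'species_author', 'status', 'family'])
    page = 1
    while True:
        html = requests.get(BASE_URL.format(page)).text
        soup = BeautifulSoup(html, 'lxml')
        elements = soup.find_all('div', style="border-bottom: 1px solid #C0C0C0; padding: 10px 0;")
        if not elements:
            # Assumption: a page past the last one returns no result rows
            break
        for element in elements:
            sp_name = element.i.text.strip()
            status = element.find('span', class_=['success label', 'error label']).text.strip()
            author_family = element.i.next_sibling.strip().split('|')
            species_author = author_family[0].strip()
            family = author_family[1].strip()
            csv_writer.writerow([sp_name, species_author, status, family])
        page += 1
        time.sleep(1)  # small delay to be polite to the server

This way the scraper keeps working even if the number of result pages changes.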