Feedparser to DataFrame does not output all columns

2024-04-15

I'm parsing a URL with feedparser and trying to get all the columns, but the output is missing several of them and I'm not sure where the problem is. When I run the code below, I get no data for a few of the columns, even though the data is there — you can see it in the browser.

My code:

import feedparser
import pandas as pd 

xmldoc = feedparser.parse('http://www.ebay.com/rps/feed/v1.1/epnexcluded/EBAY-US')
df_cols = [
    "title", "url", "endsAt", "image225", "currency",  # a comma was missing after "currency"
    "price", "orginalPrice", "discountPercentage", "quantity", "shippingCost", "dealUrl"
]
rows = []

for entry in xmldoc.entries:
    s_title = entry.get("title","")
    s_url = entry.get("url", "")
    s_endsAt = entry.get("endsAt", "")
    s_image225 = entry.get("image225", "")
    s_currency = entry.get("currency", "")
    s_price = entry.get("price","")
    s_orginalPrice = entry.get("orginalPrice","")
    s_discountPercentage = entry.get("discountPercentage","")
    s_quantity = entry.get("quantity","")
    s_shippingCost = entry.get("shippingCost", "")
    s_dealUrl = entry.get("dealUrl", "")#.replace('YOURUSERIDHERE','2427312')
       
        
    rows.append({"title":s_title, "url": s_url, "endsAt": s_endsAt, 
                 "image225": s_image225,"currency": s_currency,"price":s_price,
                 "orginalPrice": s_orginalPrice,"discountPercentage": s_discountPercentage,"quantity": s_quantity,
                 "shippingCost": s_shippingCost,"dealUrl": s_dealUrl})

out_df = pd.DataFrame(rows, columns=df_cols)

out_df
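One likely cause of the missing columns: the original `df_cols` list had no comma between `"currency"` and `"price"`, and Python concatenates adjacent string literals, so the two names silently merge into a single column `"currencyprice"` that matches nothing in the rows. A quick demonstration:

```python
# Adjacent string literals concatenate: with the comma missing,
# "currency" "price" becomes the single name "currencyprice".
cols = ["image225", "currency" "price", "quantity"]
print(cols)        # ['image225', 'currencyprice', 'quantity']
print(len(cols))   # 3, not 4
```

Since `pd.DataFrame(rows, columns=df_cols)` keeps only columns whose names appear in the rows, the merged name produces an all-empty column and `currency`/`price` disappear from the output.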

I also tried this, but it gives me no data at all, only a few columns (just the title, I think):

import urllib.request

import lxml.etree as ET
import pandas as pd

response = urllib.request.urlopen('http://www.ebay.com/rps/feed/v1.1/epnexcluded/EBAY-US')
xml = response.read()

root = ET.fromstring(xml)
# one row per <item>: take each child's text, falling back to the first
# grandchild's text when the child itself has no text of its own
df = pd.DataFrame([
    {child.tag: child.text if child.text and child.text.strip() != ""
     else child.find("*").text
     for child in item.findall("*")}
    for item in root.findall('.//item')
])

df
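The item-to-row comprehension can be checked without hitting the network by parsing a small inline document. This sketch uses the stdlib `xml.etree.ElementTree` instead of `lxml`, and the element names are invented for illustration:

```python
import xml.etree.ElementTree as ET
import pandas as pd

# A tiny inline RSS-like document standing in for the real feed.
xml = b"""<rss><channel>
  <item><title>Deal A</title><price>9.99</price></item>
  <item><title>Deal B</title><price>4.50</price></item>
</channel></rss>"""

root = ET.fromstring(xml)
# one dict per <item>, keyed by child tag
rows = [{child.tag: child.text for child in item} for item in root.findall(".//item")]
df = pd.DataFrame(rows)
print(df)
```

If the real feed declares XML namespaces, `findall('.//item')` will not match the namespaced tags, which is a common reason this approach returns almost nothing.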

I can iterate over the URL offsets in an array as below and output the results to a DataFrame. When I try this it partially works, but some items are missing elements, which causes errors like `AttributeError: object has no attribute 'price'` (likewise for shippingcost etc.). How do we handle an element being null?

My code:

import feedparser
import pandas as pd

getdeals = ['http://www.ebay.com/rps/feed/v1.1/epnexcluded/EBAY-US?limit=200',
            'http://www.ebay.com/rps/feed/v1.1/epnexcluded/EBAY-US?limit=200&offset=200',
            'http://www.ebay.com/rps/feed/v1.1/epnexcluded/EBAY-US?limit=200&offset=400']

posts = []
for url in getdeals:
    feed = feedparser.parse(url)
    for deals in feed.entries:
        posts.append((deals.title, deals.endsat, deals.image225, deals.price,
                      deals.originalprice, deals.discountpercentage,
                      deals.shippingcost, deals.dealurl))

df = pd.DataFrame(posts, columns=['title', 'endsat', 'image225', 'price',
                                  'originalprice', 'discountpercentage',
                                  'shippingcost', 'dealurl'])
df.tail()
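Feedparser entries can be read dict-style with `.get()`, which substitutes a default instead of raising `AttributeError` when an element is absent. A minimal sketch of that pattern, using plain dicts as stand-ins for real `feed.entries` objects (which support the same access):

```python
import pandas as pd

# Hypothetical stand-ins for feedparser entries; real entries returned by
# feedparser.parse(url).entries support the same dict-style .get() access.
entries = [
    {"title": "Deal A", "price": "9.99"},
    {"title": "Deal B"},  # this item has no "price" element
]

fields = ["title", "price", "shippingcost"]
# .get(field, "") returns "" instead of raising when the element is missing
rows = [{f: entry.get(f, "") for f in fields} for entry in entries]
df = pd.DataFrame(rows, columns=fields)
print(df)
```

Building rows from a fixed field list this way guarantees every row has every column, regardless of which elements each item actually carries.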

Also, similarly, how do I loop over multiple JSON responses?

import requests
import pandas as pd

url = ["https://merchants.apis.com/v4/publisher/159663/offers?country=US&limit=2000",
       "https://merchants.apis.com/v4/publisher/159663/offers?country=US&offset=2001&limit=2000"]

# headers and querystring are defined elsewhere in my code
response = requests.request("GET", url, headers=headers, params=querystring)
response = response.json()

name = []
logo = []
verticals = []
date_added = []
description = []
for offer in response['offers']:
    name.append(offer['merchant_details']['name'])
    logo.append(offer['merchant_details']['metadata']['logo'])
    date_added.append(offer['date_added'])
    description.append(offer['description'])
    try:
        verticals.append(offer['merchant_details']['verticals'][0])
    except IndexError:
        verticals.append('NA')

data1 = pd.DataFrame({'name': name, 'logo': logo, 'verticals': verticals,
                      'date_added': date_added, 'description': description})
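To loop over several JSON endpoints, fetch each URL in turn and accumulate rows before building one DataFrame at the end. A minimal sketch of the accumulation pattern, using hypothetical in-memory payloads in place of the live `requests.get(url, ...).json()` calls:

```python
import pandas as pd

# Hypothetical payloads standing in for response.json() from each URL;
# in real code, each would come from one requests call inside the loop.
responses = [
    {"offers": [{"merchant_details": {"name": "A", "metadata": {"logo": "a.png"},
                                      "verticals": ["Retail"]},
                 "date_added": "2020-01-01", "description": "first"}]},
    {"offers": [{"merchant_details": {"name": "B", "metadata": {"logo": "b.png"},
                                      "verticals": []},
                 "date_added": "2020-01-02", "description": "second"}]},
]

rows = []
for payload in responses:            # one iteration per URL's JSON payload
    for offer in payload["offers"]:
        md = offer["merchant_details"]
        rows.append({
            "name": md["name"],
            "logo": md["metadata"]["logo"],
            # guard against an empty verticals list instead of indexing blindly
            "verticals": md["verticals"][0] if md["verticals"] else "NA",
            "date_added": offer["date_added"],
            "description": offer["description"],
        })

data1 = pd.DataFrame(rows)
print(data1)
```

Collecting a list of row dicts (rather than one list per column) keeps the fields of each offer together, so a failed lookup cannot leave the column lists with mismatched lengths.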

Another approach:

import pandas as pd
from simplified_scrapy import SimplifiedDoc, utils, req

getdeals = ['http://www.ebay.com/rps/feed/v1.1/epnexcluded/EBAY-US?limit=200',
            'http://www.ebay.com/rps/feed/v1.1/epnexcluded/EBAY-US?limit=200&offset=200',
            'http://www.ebay.com/rps/feed/v1.1/epnexcluded/EBAY-US?limit=200&offset=400']
    
posts = []
header = ['title', 'endsAt', 'image225', 'price', 'originalPrice', 'discountPercentage', 'shippingCost', 'dealUrl']
for url in getdeals:
    try: # It's a good habit to have try and exception in your code.
        feed = SimplifiedDoc(req.get(url))
        for deals in feed.selects('item'):
            row = []
            for h in header: row.append(deals.select(h+">text()")) # Returns None when the element does not exist
            posts.append(row)
    except Exception as e:
        print (e)
        
df=pd.DataFrame(posts,columns=header)
df.tail()
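As the comment above notes, `select(...)` yields `None` for a missing element; pandas stores such values as missing data, which `fillna()` can normalize afterwards. A small sketch of that cleanup step:

```python
import pandas as pd

# Rows where a missing element came back as None, as in the loop above.
rows = [["Deal A", "9.99"], ["Deal B", None]]
df = pd.DataFrame(rows, columns=["title", "price"])
df = df.fillna("")   # replace missing values with empty strings
print(df)
```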
