I have a .csv file containing movie data, and I'm trying to reformat it as a JSON file to use in MongoDB. So I loaded the csv file into a pandas DataFrame and then wrote it back out with the to_json method.
A row of the DataFrame looks like this:
In [43]: result.iloc[0]
Out[43]:
title Avatar
release_date 2009
cast [{"cast_id": 242, "character": "Jake Sully", "...
crew [{"credit_id": "52fe48009251416c750aca23", "de...
Name: 0, dtype: object
But when pandas writes it back out, it turns into this:
{ "title":"Avatar",
"release_date":"2009",
"cast":"[{\"cast_id\": 242, \"character\": \"Jake Sully\", \"credit_id\": \"5602a8a7c3a3685532001c9a\", \"gender\": 2,...]",
"crew":"[{\"credit_id\": \"52fe48009251416c750aca23\", \"department\": \"Editing\", \"gender\": 0, \"id\": 1721,...]"
}
As you can see, "cast" and "crew" are lists, but they come out with lots of extra backslashes. Those backslashes end up in the MongoDB collection and make it impossible to query data out of those two fields.
How can I fix this, other than replacing \" with " ?
P.S.1: This is how I save the DataFrame as JSON:
result.to_json('result.json', orient='records', lines=True)
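The backslashes show up because the cast and crew columns hold JSON *strings*, so to_json escapes every quote inside them. One possible fix (a minimal sketch on a hypothetical one-row frame mimicking the data, not the real dataset) is to parse those columns with json.loads before serializing, so they become actual lists of dicts and come out as nested JSON arrays:

```python
import json
import pandas as pd

# Hypothetical one-row frame mimicking the data: "cast" is a JSON string.
result = pd.DataFrame({
    "title": ["Avatar"],
    "release_date": ["2009"],
    "cast": ['[{"cast_id": 242, "character": "Jake Sully"}]'],
})

# Parse the stringified column into real Python objects so that
# to_json serializes it as a nested array instead of an escaped string.
result["cast"] = result["cast"].apply(json.loads)

result.to_json("result.json", orient="records", lines=True)
```

With this, each line of result.json contains "cast" as a plain JSON array with no backslash-escaped quotes, which MongoDB can ingest directly.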
Update 1:
Apparently pandas is doing its job just fine; the problem comes from the original csv file.
The rows look like this:
movie_id,title,cast,crew
19995,Avatar,"[{""cast_id"": 242, ""character"": ""Jake Sully"", ""credit_id"": ""5602a8a7c3a3685532001c9a"", ""gender"": 2, ""id"": 65731, ""name"": ""Sam Worthington"", ""order"": 0}, {""cast_id"": 3, ""character"": ""Neytiri"", ""credit_id"": ""52fe48009251416c750ac9cb"", ""gender"": 1, ""id"": 8691, ""name"": ""Zoe Saldana"", ""order"": 1}, {""cast_id"": 25, ""character"": ""Dr. Grace Augustine"", ""credit_id"": ""52fe48009251416c750aca39"", ""gender"": 1, ""id"": 10205, ""name"": ""Sigourney Weaver"", ""order"": 2}, {""cast_id"": 4, ""character"": ""Col. Quaritch"", ""credit_id"": ""52fe48009251416c750ac9cf"", ""gender"": 2, ""id"": 32747, ""name"": ""Stephen Lang"", ""order"": 3},...]"
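For what it's worth, those doubled quotes are not corruption: "" is the standard CSV (RFC 4180) escape for a quote character inside a quoted field, and pandas' read_csv undoes it automatically. A small self-contained sketch with made-up data along the same lines:

```python
import io
import json
import pandas as pd

# Minimal csv mimicking the dataset: the doubled quotes ("") are the
# standard CSV escape for a quote inside a quoted field.
csv_text = (
    'movie_id,title,cast\n'
    '19995,Avatar,"[{""cast_id"": 242, ""character"": ""Jake Sully""}]"\n'
)

df = pd.read_csv(io.StringIO(csv_text))

# read_csv has already unescaped the doubling, leaving valid JSON text:
cast = json.loads(df.loc[0, "cast"])
print(cast[0]["character"])  # -> Jake Sully
```

So no replacement on the raw file is needed at all; the value is already valid JSON once the csv layer has been parsed.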
I tried replacing "" with " (I'd really like to avoid this kind of hack):
sed -i 's/\"\"/\"/g'
Of course, when the file is read back as csv again, it causes problems on some rows:
ParserError: Error tokenizing data. C error: Expected 1501 fields in line 4, saw 1513
So we can conclude that this kind of blind replacement is not safe. Any ideas?
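The ParserError above is exactly what the blind replacement produces: once the doubling is collapsed, a quote that used to be part of the value closes the field early, and any comma inside the (formerly quoted) value leaks out as an extra field separator. A toy illustration with the stdlib csv module (made-up data, not the real file):

```python
import csv
import io

# A quoted field whose value contains both escaped quotes and a comma.
original = 'id,cast\n1,"[{""character"": ""Jake, Sully""}]"\n'
# The sed-style blind replacement: collapse every "" into ".
hacked = original.replace('""', '"')

rows_ok = list(csv.reader(io.StringIO(original)))
rows_bad = list(csv.reader(io.StringIO(hacked)))

print(len(rows_ok[1]))   # 2 fields, as intended
print(len(rows_bad[1]))  # more than 2: the comma in the value became a separator
```

That is the same failure mode as "Expected 1501 fields in line 4, saw 1513": every stray comma inside a broken quoted field adds a phantom column.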
P.S.2: I'm using the 5000-movie dataset from Kaggle: https://www.kaggle.com/carolzhangdc/imdb-5000-movie-dataset