Every once in a while a piece of JSON comes along that poses a challenge, and it can take hours to extract the information you need from it. I have the following JSON response generated by a Speech To Text API engine. It shows the transcript, the start and end timestamp of each word uttered, and the speaker labels (speaker 0 and speaker 2) for the conversation.
{
"results": [
{
"alternatives": [
{
"timestamps": [
[
"the",
6.18,
6.63
],
[
"weather",
6.63,
6.95
],
[
"is",
6.95,
7.53
],
[
"sunny",
7.73,
8.11
],
[
"it's",
8.21,
8.5
],
[
"time",
8.5,
8.66
],
[
"to",
8.66,
8.81
],
[
"sip",
8.81,
8.99
],
[
"in",
8.99,
9.02
],
[
"some",
9.02,
9.25
],
[
"cold",
9.25,
9.32
],
[
"beer",
9.32,
9.68
]
],
"confidence": 0.812,
"transcript": "the weather is sunny it's time to sip in some cold beer "
}
],
"final": "True"
},
{
"alternatives": [
{
"timestamps": [
[
"sure",
10.52,
10.88
],
[
"that",
10.92,
11.19
],
[
"sounds",
11.68,
11.82
],
[
"like",
11.82,
12.11
],
[
"a",
12.32,
12.96
],
[
"plan",
12.99,
13.8
]
],
"confidence": 0.829,
"transcript": "sure that sounds like a plan"
}
],
"final": "True"
}
],
"result_index":0,
"speaker_labels": [
{
"from": 6.18,
"to": 6.63,
"speaker": 0,
"confidence": 0.475,
"final": "False"
},
{
"from": 6.63,
"to": 6.95,
"speaker": 0,
"confidence": 0.475,
"final": "False"
},
{
"from": 6.95,
"to": 7.53,
"speaker": 0,
"confidence": 0.475,
"final": "False"
},
{
"from": 7.73,
"to": 8.11,
"speaker": 0,
"confidence": 0.499,
"final": "False"
},
{
"from": 8.21,
"to": 8.5,
"speaker": 0,
"confidence": 0.472,
"final": "False"
},
{
"from": 8.5,
"to": 8.66,
"speaker": 0,
"confidence": 0.472,
"final": "False"
},
{
"from": 8.66,
"to": 8.81,
"speaker": 0,
"confidence": 0.472,
"final": "False"
},
{
"from": 8.81,
"to": 8.99,
"speaker": 0,
"confidence": 0.472,
"final": "False"
},
{
"from": 8.99,
"to": 9.02,
"speaker": 0,
"confidence": 0.472,
"final": "False"
},
{
"from": 9.02,
"to": 9.25,
"speaker": 0,
"confidence": 0.472,
"final": "False"
},
{
"from": 9.25,
"to": 9.32,
"speaker": 0,
"confidence": 0.472,
"final": "False"
},
{
"from": 9.32,
"to": 9.68,
"speaker": 0,
"confidence": 0.472,
"final": "False"
},
{
"from": 10.52,
"to": 10.88,
"speaker": 2,
"confidence": 0.441,
"final": "False"
},
{
"from": 10.92,
"to": 11.19,
"speaker": 2,
"confidence": 0.364,
"final": "False"
},
{
"from": 11.68,
"to": 11.82,
"speaker": 2,
"confidence": 0.372,
"final": "False"
},
{
"from": 11.82,
"to": 12.11,
"speaker": 2,
"confidence": 0.372,
"final": "False"
},
{
"from": 12.32,
"to": 12.96,
"speaker": 2,
"confidence": 0.383,
"final": "False"
},
{
"from": 12.99,
"to": 13.8,
"speaker": 2,
"confidence": 0.428,
"final": "False"
}
]
}
Please excuse any indentation issues; the JSON is valid, and I have been trying to map each transcript to its corresponding speaker label.
I would like something like the output below. The JSON above is around 20,000 lines, and I need to extract the speaker label based on the timestamps of the words uttered and pair it with the corresponding transcript.
[
{
"transcript": "the weather is sunny it's time to sip in some cold beer ",
"speaker" : 0
},
{
"transcript": "sure that sounds like a plan",
"speaker" : 2
}
]
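For context, one way to get from the response above to this shape (a sketch, assuming the structure shown, over an abbreviated copy of the payload) is to index the speaker labels by their "from" time and tag each transcript with the speaker of its first word:

```python
import json

# Abbreviated copy of the API response above; the full payload has the
# same shape, just more entries.
data = {
    "results": [
        {"alternatives": [{
            "timestamps": [["the", 6.18, 6.63], ["weather", 6.63, 6.95]],
            "transcript": "the weather is sunny it's time to sip in some cold beer ",
        }]},
        {"alternatives": [{
            "timestamps": [["sure", 10.52, 10.88], ["plan", 12.99, 13.8]],
            "transcript": "sure that sounds like a plan",
        }]},
    ],
    "speaker_labels": [
        {"from": 6.18, "to": 6.63, "speaker": 0},
        {"from": 6.63, "to": 6.95, "speaker": 0},
        {"from": 10.52, "to": 10.88, "speaker": 2},
        {"from": 12.99, "to": 13.8, "speaker": 2},
    ],
}

# Index labels by their start time, then look up the speaker of the
# first word of each transcript.
speaker_at = {lbl["from"]: lbl["speaker"] for lbl in data["speaker_labels"]}
merged = [
    {"transcript": alt["transcript"],
     "speaker": speaker_at[alt["timestamps"][0][1]]}
    for result in data["results"]
    for alt in result["alternatives"][:1]
]

print(json.dumps(merged, indent=2))
```

This relies on the assumption that every word's start time in `timestamps` has a matching `from` entry in `speaker_labels`, which holds for the sample above.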
What I have tried so far:
The JSON data is stored in a file named example.json. I have been able to get each word with its corresponding timestamps and speaker label into a list of tuples (see the output below):
import json

# with open('C:\\Users\\%USERPROFILE%\\Desktop\\example.json', 'r') as f:
#     data = json.load(f)

l1 = []
l2 = []
l3 = []

for i in data['results']:
    for j in i['alternatives'][0]['timestamps']:
        l1.append(j)
for m in data['speaker_labels']:
    l2.append(m)
for q in l1:
    for n in l2:
        if q[1] == n['from']:
            l3.append((q[0], n['speaker'], q[1], q[2]))
print(l3)
This gives the output:
[('the', 0, 6.18, 6.63),
('weather', 0, 6.63, 6.95),
('is', 0, 6.95, 7.53),
('sunny', 0, 7.73, 8.11),
("it's", 0, 8.21, 8.5),
('time', 0, 8.5, 8.66),
('to', 0, 8.66, 8.81),
('sip', 0, 8.81, 8.99),
('in', 0, 8.99, 9.02),
('some', 0, 9.02, 9.25),
('cold', 0, 9.25, 9.32),
('beer', 0, 9.32, 9.68),
('sure', 2, 10.52, 10.88),
('that', 2, 10.92, 11.19),
('sounds', 2, 11.68, 11.82),
('like', 2, 11.82, 12.11),
('a', 2, 12.32, 12.96),
('plan', 2, 12.99, 13.8)]
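Starting from a word-level list like this one, consecutive words with the same speaker can be regrouped into utterances with `itertools.groupby`. A sketch over an abbreviated version of the list:

```python
from itertools import groupby

# Abbreviated (word, speaker, start, end) tuples from the step above.
l3 = [('the', 0, 6.18, 6.63), ('weather', 0, 6.63, 6.95),
      ('beer', 0, 9.32, 9.68),
      ('sure', 2, 10.52, 10.88), ('plan', 2, 12.99, 13.8)]

# groupby collapses consecutive runs with the same key, so each run of
# words by one speaker becomes a single utterance.
utterances = []
for speaker, words in groupby(l3, key=lambda t: t[1]):
    utterances.append({"transcript": " ".join(w[0] for w in words),
                       "speaker": speaker})

print(utterances)
```

Note that `groupby` only merges *adjacent* runs, which is exactly what a conversation needs: if speaker 0 talks again later, that becomes a new utterance rather than being merged into the first one.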
But now I am not sure how to join the words back together based on timestamp comparisons, and how to "store" each group of words so that it forms a transcript again along with its speaker label.
I also managed to get the transcripts into a list, but now how do I extract the speaker label for each transcript from the list above? Unfortunately, the speaker labels speaker 0 and speaker 2 apply to each individual word, and I would like them to apply to each transcript instead.
l4 = []
for i in data['results']:
    l4.append(i['alternatives'][0]['transcript'])
This gives the output:
["the weather is sunny it's time to sip in some cold beer ",'sure that sounds like a plan']
I have tried my best to explain the problem, but I am open to any feedback and will make changes if necessary. Also, I am pretty sure there is a better approach than building multiple lists, so any help is much appreciated.
For a larger dataset, see this pastebin: https://pastebin.com/KrnPXuFx. I hope it helps with performance benchmarking; I can provide an even larger dataset if needed or useful.
Since I am working with large JSON payloads, performance is an important factor; accurate speaker isolation in overlapping transcriptions is another requirement.
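On noisy diarization output, trusting only one word's label can be fragile; a majority vote over all the words in a transcript is one more robust option, and a dict keyed on the label's "from" time keeps lookups O(1) instead of the O(n·m) nested loop above. A sketch (the helper name `label_transcripts` and the abbreviated `sample` payload are mine, not part of the API):

```python
from collections import Counter

def label_transcripts(data):
    """Assign each transcript the speaker who uttered most of its words."""
    # O(1) lookups instead of scanning speaker_labels per word.
    speaker_at = {lbl["from"]: lbl["speaker"] for lbl in data["speaker_labels"]}
    out = []
    for result in data["results"]:
        alt = result["alternatives"][0]
        # Count one vote per word for the speaker labelled at its start time.
        votes = Counter(
            speaker_at[start]
            for _word, start, _end in alt["timestamps"]
            if start in speaker_at
        )
        out.append({"transcript": alt["transcript"],
                    "speaker": votes.most_common(1)[0][0]})
    return out

# Abbreviated payload with one mislabelled word ("weather" tagged speaker 1);
# the majority vote still resolves the utterance to speaker 0.
sample = {
    "results": [{"alternatives": [{
        "timestamps": [["the", 6.18, 6.63], ["weather", 6.63, 6.95],
                       ["is", 6.95, 7.53]],
        "transcript": "the weather is sunny it's time to sip in some cold beer ",
    }]}],
    "speaker_labels": [
        {"from": 6.18, "to": 6.63, "speaker": 0},
        {"from": 6.63, "to": 6.95, "speaker": 1},
        {"from": 6.95, "to": 7.53, "speaker": 0},
    ],
}
print(label_transcripts(sample))
```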