I have a huge file of 3,000,000 lines, each line containing 20-40 words. I have to extract 1- to 5-grams from the corpus. My input files are tokenized plain text, e.g.:
This is a foo bar sentence .
There is a comma , in this sentence .
Such is an example text .
Currently, I am doing it as below, but this doesn't seem to be an efficient way to extract the 1-5 grams:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import io
from collections import Counter

with io.open('train-1.tok.en', 'r', encoding='utf8') as srcfin, \
     io.open('train-1.tok.jp', 'r', encoding='utf8') as trgfin:
    # Extract words from the files, wrapping each sentence in <s> ... </s>.
    src_words = ['<s>'] + srcfin.read().replace('\n', ' </s> <s> ').split()
    del src_words[-1]  # Removes the trailing '<s>'
    trg_words = ['<s>'] + trgfin.read().replace('\n', ' </s> <s> ').split()
    del trg_words[-1]  # Removes the trailing '<s>'

    # Unigram counts.
    src_unigrams = Counter(src_words)
    trg_unigrams = Counter(trg_words)
    # Sum of unigram counts.
    src_sum_unigrams = sum(src_unigrams.values())
    trg_sum_unigrams = sum(trg_unigrams.values())

    # Bigram counts.
    src_bigrams = Counter(zip(src_words, src_words[1:]))
    trg_bigrams = Counter(zip(trg_words, trg_words[1:]))
    # Sum of bigram counts.
    src_sum_bigrams = sum(src_bigrams.values())
    trg_sum_bigrams = sum(trg_bigrams.values())

    # Trigram counts.
    src_trigrams = Counter(zip(src_words, src_words[1:], src_words[2:]))
    trg_trigrams = Counter(zip(trg_words, trg_words[1:], trg_words[2:]))
    # Sum of trigram counts.
    src_sum_trigrams = sum(src_trigrams.values())
    trg_sum_trigrams = sum(trg_trigrams.values())
Is there any other way to do this more efficiently?
How can N different n-gram orders be extracted optimally at the same time?
From Fast/Optimize N-gram implementations in python (https://stackoverflow.com/questions/21883108/fast-optimize-n-gram-implementations-in-python), it is essentially this:
zip(*[words[i:] for i in range(n)])
which, when hard-coded for bigrams, n=2, becomes:

zip(src_words, src_words[1:])
and this for trigrams, n=3:

zip(src_words, src_words[1:], src_words[2:])
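To show how the generic idiom above could cover all orders at once, here is a minimal sketch (the helper names `ngrams` and `count_ngrams` are my own, not from any library) that streams the corpus line by line and keeps one Counter per order n = 1..5, so the whole file never has to sit in memory at once:

```python
from collections import Counter

def ngrams(words, n):
    # All contiguous n-grams of `words`, via the zip(*[...]) idiom above.
    return zip(*[words[i:] for i in range(n)])

def count_ngrams(lines, max_n=5):
    # One Counter per order 1..max_n, updated line by line.
    counters = {n: Counter() for n in range(1, max_n + 1)}
    for line in lines:
        words = ['<s>'] + line.split() + ['</s>']
        for n in range(1, max_n + 1):
            counters[n].update(ngrams(words, n))
    return counters

counts = count_ngrams(['This is a foo bar sentence .'])
```

With a real corpus you would pass the open file object itself (e.g. `count_ngrams(io.open('train-1.tok.en', encoding='utf8'))`), since iterating a file yields one line at a time; the per-order totals are then just `sum(counts[n].values())`.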