When I extracted the Wikipedia corpus, I initially used wikiextractor, but it kept throwing errors, so I stopped using it. Since many people have asked how I did the extraction, I am publishing the code here.
The code is not mine; I found it on a website long ago and have since forgotten the address, so I cannot link to the original page. If the author sees this, please message me the original URL.
The author's email is: panyangnlp@gmail.com
Usage: run the following on the command line:
python data_pre_process.py zhwiki-latest-pages-articles.xml.bz2(the Wikipedia dump) wiki.zh.text(the output file)
Source code:
# -*- coding: utf-8 -*-
# Author: Pan Yang (panyangnlp@gmail.com)
# Copyright 2017
from __future__ import print_function

import io
import logging
import os.path
import sys

import six
from gensim.corpora import WikiCorpus

# Convert the Wikipedia XML dump into plain-text format, one article per line
# python data_pre_process.py zhwiki-latest-pages-articles.xml.bz2 wiki.zh.text

if __name__ == '__main__':
    program = os.path.basename(sys.argv[0])
    logger = logging.getLogger(program)

    logging.basicConfig(format='%(asctime)s: %(levelname)s: %(message)s')
    logging.root.setLevel(level=logging.INFO)
    logger.info("running %s" % ' '.join(sys.argv))

    # check and process input arguments
    if len(sys.argv) != 3:
        print("Usage: python data_pre_process.py zhwiki-latest-pages-articles.xml.bz2 wiki.zh.text")
        sys.exit(1)
    inp, outp = sys.argv[1:3]
    space = " "
    i = 0

    # io.open accepts an encoding argument on both Python 2 and 3
    output = io.open(outp, 'w', encoding='utf-8')
    wiki = WikiCorpus(inp, dictionary={})
    for text in wiki.get_texts():
        # get_texts() yields one article as a list of tokens
        if six.PY3:
            output.write(' '.join(text) + '\n')
        else:
            output.write(space.join(text) + "\n")
        i = i + 1
        if i % 10000 == 0:
            logger.info("Saved " + str(i) + " articles")

    output.close()
    logger.info("Finished saving " + str(i) + " articles")
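The core of the loop above is simple: each article comes back from WikiCorpus.get_texts() as a list of tokens, which is joined with spaces and written as one line of the output file. A minimal sketch of that write logic, using small made-up token lists in place of the real dump (the sample articles and the wiki.sample.text filename are illustrative, not from the original post):

```python
# -*- coding: utf-8 -*-
# Sketch of the one-article-per-line output format produced by the script.
# The token lists below stand in for what WikiCorpus.get_texts() would yield.
import io

articles = [
    [u"数学", u"是", u"研究", u"数量", u"结构"],  # tokens of a first article
    [u"物理", u"学", u"是", u"自然", u"科学"],    # tokens of a second article
]

with io.open("wiki.sample.text", "w", encoding="utf-8") as output:
    for tokens in articles:
        # one article per line, tokens separated by single spaces
        output.write(u" ".join(tokens) + u"\n")

with io.open("wiki.sample.text", encoding="utf-8") as f:
    lines = f.read().splitlines()
print(len(lines))  # one line per article
```

This line-per-article, space-separated format is exactly what tools like word2vec expect as training input, which is why the script joins tokens with spaces rather than keeping any markup.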