I had exactly the same problem with a 160 GB dump file. Loading just 3% of the original file with -jsonArray took me two days; with the changes below it took 15 minutes.
First, get rid of the initial [ and trailing ] characters:
sed -i 's/^\[//; s/\]$//' filename.json
Then import without the -jsonArray option:
mongoimport --db "dbname" --collection "collectionname" --file filename.json
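As a quick sanity check before touching the real dump, the sed step can be tried on a tiny stand-in file (sample.json is a hypothetical name; this assumes, as in the original dump, that the brackets sit at the very start and end of their lines):

```shell
# A miniature stand-in for the dump: opening bracket at the start of the
# first line, closing bracket at the end of the last line.
printf '[{"a": 1}\n{"a": 2}]\n' > sample.json

# Strip a leading '[' and a trailing ']' from each line, in place.
# (GNU sed syntax; BSD/macOS sed needs -i '' instead of -i.)
sed -i 's/^\[//; s/\]$//' sample.json

cat sample.json
```

After this, sample.json contains one bare JSON document per line, which is the format mongoimport expects without -jsonArray.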
If the file is huge, sed will take a very long time and you may run into storage problems. You can use this C program instead (not written by me; all glory to @guillermobox):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>     /* ftruncate */

#define BUFFERSIZE 2048

int main(int argc, char *argv[])
{
	FILE *f;
	size_t length, filesize, position;
	char buffer[BUFFERSIZE];

	if (argc < 2) {
		fprintf(stderr, "Please provide file to mongofix!\n");
		exit(EXIT_FAILURE);
	}

	f = fopen(argv[1], "r+");
	if (!f) {
		perror(argv[1]);
		exit(EXIT_FAILURE);
	}

	/* get the full filesize */
	fseek(f, 0, SEEK_END);
	filesize = ftell(f);

	/* ignore the first character (the opening '[') */
	fseek(f, 1, SEEK_SET);

	while (1) {
		/* read a chunk of up to BUFFERSIZE bytes */
		length = fread(buffer, 1, BUFFERSIZE, f);
		position = ftell(f);
		/* write the same chunk back, one byte closer to the start */
		fseek(f, position - length - 1, SEEK_SET);
		fwrite(buffer, 1, length, f);
		/* return to the reading position */
		fseek(f, position, SEEK_SET);
		/* a short read means we have reached the end of the file */
		if (length != BUFFERSIZE)
			break;
	}

	/* drop the last two bytes: the leftover shifted byte plus the
	   closing ']' (assumes the file ends with ']', no trailing newline) */
	fflush(f);
	ftruncate(fileno(f), filesize - 2);
	fclose(f);
	return 0;
}
P.S.: I don't have the privilege to suggest migrating this question, but I think this could be helpful.