What is the correct format for a Postgres jsonb array in Python?

2024-02-29

I have a schema that looks like this:

 Column                  |            Type             |
-------------------------+-----------------------------+
 message_id              | integer                     |
 user_id                 | integer                     |
 body                    | text                        |
 created_at              | timestamp without time zone |
 source                  | jsonb                       |
 symbols                 | jsonb[]                     |

I'm trying to insert data with psycopg2 via psycopg2.Cursor.copy_from(), but I'm having a lot of trouble figuring out how the jsonb[] value should be formatted. When I list the JSON objects out directly, I get an error like this:

psycopg2.errors.InvalidTextRepresentation: malformed array literal: "[{'id': 13016, 'symbol':
.... 
DETAIL:  "[" must introduce explicitly-specified array dimensions.

I've tried a number of different ways of escaping the double quotes and curly braces. If I run json.dumps() on the data, I get the following error instead.

psycopg2.errors.InvalidTextRepresentation: invalid input syntax for type json
DETAIL:  Token "'" is invalid.

That error is produced by this code snippet:

messageData = []
symbols = messageObject["symbols"]
newSymbols = []
for symbol in symbols:
    toAppend = refineJSON(json.dumps(symbol))
    toAppend = re.sub("{", r"\{", toAppend)
    toAppend = re.sub("}", r"\}", toAppend)
    toAppend = re.sub('"', r'\"', toAppend)
    newSymbols.append(toAppend)
messageData.append(set(newSymbols))

I'm also open to defining the column as a different type (e.g. text) and casting later, but I haven't been able to get that to work either.

messageData is the input to a helper function that calls psycopg2.Cursor.copy_from():

def copy_string_iterator_messages(connection, messages, size: int = 8192) -> None:
    with connection.cursor() as cursor:
        messages_string_iterator = StringIteratorIO((
            '|'.join(map(clean_csv_value, (
                messageData[0], messageData[1], messageData[2], messageData[3],
                messageData[4], messageData[5], messageData[6], messageData[7],
                messageData[8], messageData[9], messageData[10], messageData[11],
            ))) + '\n'
            for messageData in messages
        ))
        # pp.pprint(messages_string_iterator.read())
        cursor.copy_from(messages_string_iterator, 'test', sep='|', size=size)
        connection.commit()

Edit: based on Mike's input, I updated the code to use execute_batch(), where messages is a list holding the messageData for each message.

def insert_execute_batch_iterator_messages(connection, messages, page_size: int = 1000) -> None:
    with connection.cursor() as cursor:
        iter_messages = ({**message, } for message in messages)

        print("inside")

        psycopg2.extras.execute_batch(cursor, """
            INSERT INTO test VALUES(
                %(message_id)s,
                %(user_id)s,
                %(body)s,
                %(created_at)s,
                %(source)s::jsonb,
                %(symbols)s::jsonb[]
            );
        """, iter_messages, page_size=page_size)
        connection.commit()
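
For the %(symbols)s::jsonb[] parameter to adapt cleanly, each message dict should carry symbols as a Python list of JSON strings rather than a list of dicts. A minimal sketch of building one parameter dict, assuming the messageObject keys shown below exist; the helper name is illustrative:

def to_params(messageObject):
    return {
        "message_id": messageObject["message_id"],
        "user_id": messageObject["user_id"],
        "body": messageObject["body"],
        "created_at": messageObject["created_at"],
        "source": json.dumps(messageObject["source"]),
        # a list of JSON strings: psycopg2 adapts it as a text array,
        # and the ::jsonb[] cast then converts it element by element
        "symbols": [json.dumps(s) for s in messageObject["symbols"]],
    }

messages = [to_params(m) for m in messageObjects]  # messageObjects is assumed here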

Your question made me curious. The following works for me. I'm doubtful that the escaping to or from CSV can be worked out.

My table:

=# \d jbarray
                             Table "public.jbarray"
 Column  |  Type   | Collation | Nullable |               Default
---------+---------+-----------+----------+-------------------------------------
 id      | integer |           | not null | nextval('jbarray_id_seq'::regclass)
 symbols | jsonb[] |           |          |
Indexes:
    "jbarray_pkey" PRIMARY KEY, btree (id)

Fully self-contained Python code:

import json
import psycopg2

con = psycopg2.connect('dbname=<my database>')

some_objects = [{'id': x, 'array': [x, x+1, x+2, {'inside': x+3}]} for x in range(5)]

insert_array = [json.dumps(x) for x in some_objects]
print(insert_array)

c = con.cursor()

c.execute("insert into jbarray (symbols) values (%s::jsonb[])", (insert_array,))

con.commit()

Result:

=# select * from jbarray;
-[ RECORD 1 ]-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
id      | 1
symbols | {"{\"id\": 0, \"array\": [0, 1, 2, {\"inside\": 3}]}","{\"id\": 1, \"array\": [1, 2, 3, {\"inside\": 4}]}","{\"id\": 2, \"array\": [2, 3, 4, {\"inside\": 5}]}","{\"id\": 3, \"array\": [3, 4, 5, {\"inside\": 6}]}","{\"id\": 4, \"array\": [4, 5, 6, {\"inside\": 7}]}"}

=# select id, unnest(symbols) from jbarray;
-[ RECORD 1 ]----------------------------------------
id     | 1
unnest | {"id": 0, "array": [0, 1, 2, {"inside": 3}]}
-[ RECORD 2 ]----------------------------------------
id     | 1
unnest | {"id": 1, "array": [1, 2, 3, {"inside": 4}]}
-[ RECORD 3 ]----------------------------------------
id     | 1
unnest | {"id": 2, "array": [2, 3, 4, {"inside": 5}]}
-[ RECORD 4 ]----------------------------------------
id     | 1
unnest | {"id": 3, "array": [3, 4, 5, {"inside": 6}]}
-[ RECORD 5 ]----------------------------------------
id     | 1
unnest | {"id": 4, "array": [4, 5, 6, {"inside": 7}]}

If the insert performance is too slow for you, you can use a prepared statement with execute_batch(), as documented here: https://www.psycopg.org/docs/extras.html. I have used that combination, and it is very fast.
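
A minimal sketch of that combination against the jbarray table above; the statement name, page size, and generated data are illustrative:

import json
import psycopg2
import psycopg2.extras

con = psycopg2.connect('dbname=<my database>')
cur = con.cursor()

# Prepare once; execute_batch then sends many EXECUTEs per network round trip.
# Declaring the parameter as text[] and casting inside keeps the types explicit.
cur.execute("PREPARE ins (text[]) AS INSERT INTO jbarray (symbols) VALUES ($1::jsonb[])")

# One jsonb[] value (a Python list of JSON strings) per row.
argslist = [([json.dumps({'id': x, 'array': [x, x + 1]})],) for x in range(1000)]

psycopg2.extras.execute_batch(cur, "EXECUTE ins (%s)", argslist, page_size=100)

cur.execute("DEALLOCATE ins")
con.commit()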
