My problem essentially boils down to this: when I call to_sql with if_exists = 'append' and name set to a table that already exists on SQL Server, Python crashes.
Here is my code:
    from sqlalchemy import event  # engine, df, and dataTypes are defined earlier

    @event.listens_for(engine, 'before_cursor_execute')
    def receive_before_cursor_execute(conn, cursor, statement, params, context, executemany):
        if executemany:
            cursor.fast_executemany = True

    df.to_sql(name = 'existingSQLTable', con = engine, if_exists = 'append',
              index = False, chunksize = 10000, dtype = dataTypes)
I haven't included it here, but dataTypes is a dictionary mapping every column name to its data type.
Here is the error I get:
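Purely for illustration (the actual dictionary isn't shown in the question, so these column names and types are made up), such a dtype mapping typically pairs each DataFrame column with a SQLAlchemy type object:

```python
# Hypothetical sketch of a dataTypes dict; the real column names and
# types are not shown in the question, so these are placeholders.
from sqlalchemy.types import Float, Date, String

dataTypes = {
    'Price': Float(),       # float column
    'TradeDate': Date(),    # date column
    'State': String(2),     # short string column, e.g. 'FL'
}
```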
Traceback (most recent call last):
File "C:\Apps\Anaconda3\lib\site-packages\sqlalchemy\engine\base.py", line 1116, in _execute_context
context)
File "C:\Apps\Anaconda3\lib\site-packages\sqlalchemy\engine\default.py", line 447, in do_executemany
cursor.executemany(statement, parameters)
pyodbc.IntegrityError: ('23000', "[23000] [Microsoft][SQL Server Native Client 11.0][SQL Server]Violation of PRIMARY KEY constraint 'PK__existingSQLTable__'. Cannot insert duplicate key in object 'dbo.existingSQLTable'. The duplicate key value is (20008.7, 2008-08-07, Fl). (2627) (SQLExecute); [23000] [Microsoft][SQL Server Native Client 11.0][SQL Server]The statement has been terminated. (3621)")
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<pyshell#24>", line 1, in <module>
Table.to_sql(name = 'existingSQLTable', con = engine, if_exists = 'append', index = False, chunksize = 10000, dtype = dataTypes)
File "C:\Apps\Anaconda3\lib\site-packages\pandas\core\generic.py", line 1165, in to_sql
chunksize=chunksize, dtype=dtype)
File "C:\Apps\Anaconda3\lib\site-packages\pandas\io\sql.py", line 571, in to_sql
chunksize=chunksize, dtype=dtype)
File "C:\Apps\Anaconda3\lib\site-packages\pandas\io\sql.py", line 1250, in to_sql
table.insert(chunksize)
File "C:\Apps\Anaconda3\lib\site-packages\pandas\io\sql.py", line 770, in insert
self._execute_insert(conn, keys, chunk_iter)
File "C:\Apps\Anaconda3\lib\site-packages\pandas\io\sql.py", line 745, in _execute_insert
conn.execute(self.insert_statement(), data)
File "C:\Apps\Anaconda3\lib\site-packages\sqlalchemy\engine\base.py", line 914, in execute
return meth(self, multiparams, params)
File "C:\Apps\Anaconda3\lib\site-packages\sqlalchemy\sql\elements.py", line 323, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "C:\Apps\Anaconda3\lib\site-packages\sqlalchemy\engine\base.py", line 1010, in _execute_clauseelement
compiled_sql, distilled_params
File "C:\Apps\Anaconda3\lib\site-packages\sqlalchemy\engine\base.py", line 1146, in _execute_context
context)
File "C:\Apps\Anaconda3\lib\site-packages\sqlalchemy\engine\base.py", line 1341, in _handle_dbapi_exception
exc_info
File "C:\Apps\Anaconda3\lib\site-packages\sqlalchemy\util\compat.py", line 202, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "C:\Apps\Anaconda3\lib\site-packages\sqlalchemy\util\compat.py", line 185, in reraise
raise value.with_traceback(tb)
File "C:\Apps\Anaconda3\lib\site-packages\sqlalchemy\engine\base.py", line 1116, in _execute_context
context)
File "C:\Apps\Anaconda3\lib\site-packages\sqlalchemy\engine\default.py", line 447, in do_executemany
cursor.executemany(statement, parameters)
Based on this traceback, it looks to me like something is wrong with the fast_executemany flag, but I've read a lot of documentation on it and haven't found anything wrong with how I'm using it.
Things that may be worth noting:
- A table that doesn't exist yet with if_exists = 'replace' works as expected
- A table that doesn't exist yet with if_exists = 'append' works as expected
- A table that already exists with if_exists = 'replace' works as expected
- My DataFrame has roughly 3 million rows and 25 columns (mostly floats and some short strings)
- I can successfully write an absolute maximum of 900,000 rows without Python crashing
- I'm using SQL Server, pandas 0.23.3, pyodbc 4.0.23 (I got the same error with 4.0.22), Jupyter Notebook (I also tried IDLE with the same result), Windows 10, Python 3.5.1, and Anaconda 3.
The obvious solution to me was to break the DataFrame into chunks of 900,000 rows. But while the first chunk uploads successfully, I can't append even a single row to it without Python crashing.
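For reference, the chunking I tried can be sketched with plain slice arithmetic (the 900,000-row ceiling and ~3-million-row total come from the observations above; iter_chunks is a helper name I'm using for illustration, not part of pandas):

```python
# Sketch of splitting the upload into pieces that stay under the
# ~900,000-row ceiling observed above. iter_chunks is an illustrative
# helper, not a pandas function.
def iter_chunks(n_rows, chunk_rows):
    """Yield (start, stop) slice bounds covering n_rows rows."""
    for start in range(0, n_rows, chunk_rows):
        yield start, min(start + chunk_rows, n_rows)

bounds = list(iter_chunks(3000000, 900000))
# Each piece would then be uploaded with something like:
#   df.iloc[start:stop].to_sql('existingSQLTable', engine,
#                              if_exists='append', index=False)
```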
Is this error the result of the code meant to speed up the process (which it does wonderfully)? Am I misunderstanding the to_sql function? Or is something else going on? Any suggestions would be great! Also, if anyone has run into a similar problem, it would be great to know!