You can use `reduce()`:
reduce(lambda r, d: r.update(d) or r, lst, {})
Demo:
>>> lst = [
... {'1': 'A'},
... {'2': 'B'},
... {'3': 'C'}
... ]
>>> reduce(lambda r, d: r.update(d) or r, lst, {})
{'1': 'A', '3': 'C', '2': 'B'}
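Note that on Python 3, `reduce` is no longer a builtin and must be imported from `functools`; a minimal sketch of the same approach:

```python
from functools import reduce  # required on Python 3

lst = [{'1': 'A'}, {'2': 'B'}, {'3': 'C'}]

# r.update(d) returns None, so `or r` makes the lambda return the
# accumulator dict itself on every step.
merged = reduce(lambda r, d: r.update(d) or r, lst, {})
print(merged)  # → {'1': 'A', '2': 'B', '3': 'C'}
```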
Or you can chain the `items` calls (Python 2):
from itertools import chain, imap
from operator import methodcaller
dict(chain.from_iterable(imap(methodcaller('iteritems'), lst)))
Python 3 version:
from itertools import chain
from operator import methodcaller
dict(chain.from_iterable(map(methodcaller('items'), lst)))
Demo:
>>> from itertools import chain, imap
>>> from operator import methodcaller
>>>
>>> dict(chain.from_iterable(imap(methodcaller('iteritems'), lst)))
{'1': 'A', '3': 'C', '2': 'B'}
Or use a dict comprehension:
{k: v for d in lst for k, v in d.iteritems()}
Demo:
>>> {k: v for d in lst for k, v in d.iteritems()}
{'1': 'A', '3': 'C', '2': 'B'}
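Since Python 3 dicts no longer have an `iteritems()` method, the same comprehension on Python 3 uses `items()` instead:

```python
lst = [{'1': 'A'}, {'2': 'B'}, {'3': 'C'}]

# Outer loop walks the dicts, inner loop walks each dict's pairs;
# later dicts overwrite earlier ones on duplicate keys.
merged = {k: v for d in lst for k, v in d.items()}
print(merged)  # → {'1': 'A', '2': 'B', '3': 'C'}
```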
Of the three, the dict comprehension is the fastest for the simple 3-dict input:
>>> import timeit
>>> def d_reduce(lst):
... reduce(lambda r, d: r.update(d) or r, lst, {})
...
>>> def d_chain(lst):
... dict(chain.from_iterable(imap(methodcaller('iteritems'), lst)))
...
>>> def d_comp(lst):
... {k: v for d in lst for k, v in d.iteritems()}
...
>>> timeit.timeit('f(lst)', 'from __main__ import lst, d_reduce as f')
2.4552760124206543
>>> timeit.timeit('f(lst)', 'from __main__ import lst, d_chain as f')
3.9764280319213867
>>> timeit.timeit('f(lst)', 'from __main__ import lst, d_comp as f')
1.8335261344909668
When you increase the number of items in the input list to 1000, the chain
method catches up:
>>> import string, random
>>> lst = [{random.choice(string.printable): random.randrange(100)} for _ in range(1000)]
>>> timeit.timeit('f(lst)', 'from __main__ import lst, d_reduce as f', number=10000)
5.420135974884033
>>> timeit.timeit('f(lst)', 'from __main__ import lst, d_chain as f', number=10000)
3.464245080947876
>>> timeit.timeit('f(lst)', 'from __main__ import lst, d_comp as f', number=10000)
3.877490997314453
From there on out, increasing the input list further doesn't appear to matter; the chain()
method is faster by a small margin, but it never gains a significant advantage.