I ran the example code below in Python 3.4.2, with numpy version 1.9.1 and matplotlib version 1.4.2, on a Macbook Pro with 4 physical CPUs (i.e., as opposed to the "virtual" CPUs that the Mac hardware architecture also makes available for some use cases):
import numpy as np
import matplotlib.mlab as mlab
import time
import multiprocessing

# This value should be set much larger than nprocs, defined later below
size = 500

Y = np.arange(size)
X = np.arange(size)
x, y = np.meshgrid(X, Y)
u = x * np.sin(5) + y * np.cos(5)
v = x * np.cos(5) + y * np.sin(5)
test = x + y

tic = time.clock()
test_d = mlab.griddata(
    x.flatten(), y.flatten(), test.flatten(), x+u, y+v, interp='linear')
toc = time.clock()
print('Single Processor Time={0}'.format(toc-tic))

# Put interpolation points into a single array so that we can slice it easily
xi = x + u
yi = y + v

# My example test machine has 4 physical CPUs
nprocs = 4
jump = int(size/nprocs)

# Enclose the griddata function in a wrapper which will communicate its
# output result back to the calling process via a Queue
def wrapper(x, y, z, xi, yi, q):
    test_w = mlab.griddata(x, y, z, xi, yi, interp='linear')
    q.put(test_w)

# Measure the elapsed time for multiprocessing separately
ticm = time.clock()

queue, process = [], []
for n in range(nprocs):
    queue.append(multiprocessing.Queue())
    # Handle the possibility that size is not evenly divisible by nprocs
    if n == (nprocs-1):
        finalidx = size
    else:
        finalidx = (n + 1) * jump
    # Define the arguments, dividing the interpolation variables into
    # nprocs roughly evenly sized slices
    argtuple = (x.flatten(), y.flatten(), test.flatten(),
                xi[:,(n*jump):finalidx], yi[:,(n*jump):finalidx], queue[-1])
    # Create the processes, and launch them
    process.append(multiprocessing.Process(target=wrapper, args=argtuple))
    process[-1].start()

# Initialize an array to hold the return value, and make sure that it is
# null-valued but of the appropriate size
test_m = np.asarray([[] for s in range(size)])

# Read the individual results back from the queues and concatenate them
# into the return array
for q, p in zip(queue, process):
    test_m = np.concatenate((test_m, q.get()), axis=1)
    p.join()

tocm = time.clock()
print('Multiprocessing Time={0}'.format(tocm-ticm))

# Check that the result of both methods is actually the same; should raise
# an AssertionError exception if assertion is not True
assert np.all(test_d == test_m)
I got the following results:
/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/matplotlib/tri/triangulation.py:110: FutureWarning: comparison to `None` will result in an elementwise object comparison in the future.
  self._neighbors)
Single Processor Time=8.495998
Multiprocessing Time=2.249938
I'm not really sure what is causing the "future warning" from triangulation.py (evidently my version of matplotlib did not like something about the input values that were originally provided for the question), but regardless, the multiprocessing does appear to achieve the desired speedup of 8.50/2.25 = 3.8 (EDIT: see the comments), which is roughly the 4X we would expect for a machine with 4 CPUs. And the assertion statement at the end also executes successfully, proving that the two methods get the same answer, so in spite of the slightly weird warning message, I believe that the code above is a valid solution.
EDIT: A commenter has pointed out that both my solution, as well as the code snippet posted by the original author, are likely using the wrong method, time.clock(), for measuring execution time; he suggests using time.time() instead. I think I've come around to his point of view. (Digging into the Python documentation a bit further, I'm still not convinced that even this solution is 100% correct, since newer versions of Python appear to have deprecated time.clock() in favor of time.perf_counter() and time.process_time(). But regardless, I do agree that whether or not time.time() is absolutely the most correct way of taking this measurement, it's still probably more correct than what I had been using before, time.clock().)
Assuming the commenter's point is correct, it means that the roughly 4X speedup I thought I had measured is in fact wrong.
However, that does not necessarily mean that the underlying code itself wasn't correctly parallelized; rather, it just means that the parallelization didn't actually help in this case; splitting up the data and running on multiple processors didn't improve anything. Why would this be? Other users have pointed out that, at least in numpy/scipy, some functions run on multiple cores and some do not, and it can be a seriously challenging research project for an end user to try to figure out which ones are which.
Based on the results of this experiment, if my solution correctly achieves parallelization within Python but no further speedup is observed, then I suggest that the simplest likely explanation is that matplotlib is probably also parallelizing some of its functions "under the hood", so to speak, in compiled C libraries, just as numpy/scipy already do. Assuming that's the case, then the correct answer to this question is that nothing further can be done: further parallelizing in Python will do no good if the underlying C libraries are already silently running on multiple cores.
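If that hypothesis is right, one way to probe it (a sketch of an experiment, not something tested above) is to pin the common threading back ends to a single thread before the scientific libraries are imported, then rerun both benchmarks: the single-process timing should get worse, and the multiprocessing version should start to pull ahead. The environment variable names below cover the usual OpenMP/OpenBLAS/MKL back ends; which one applies depends on how your numpy was built:

```python
import os

# These must be set BEFORE importing numpy/scipy/matplotlib; once the
# compiled BLAS/OpenMP runtime has loaded, changing them has no effect
os.environ['OMP_NUM_THREADS'] = '1'        # generic OpenMP runtimes
os.environ['OPENBLAS_NUM_THREADS'] = '1'   # OpenBLAS back end
os.environ['MKL_NUM_THREADS'] = '1'        # Intel MKL back end

import numpy as np  # from here on, numpy's linear algebra is single-threaded

# Rerunning the griddata benchmark under this configuration would reveal
# whether hidden library-level threading was eating the expected speedup
```

Equivalently, the variables can be set in the shell before launching the interpreter, which avoids any ordering pitfalls inside the script.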