I use an MPI (mpi4py) script (on a single node) that works with a very large object. To let every process access the object, I distribute it via comm.bcast(). This copies the object to all processes and consumes a lot of memory, especially during the copying step. Therefore, I would like to share something like a pointer instead of the object itself. I found some features of memoryview useful for working with the object within a single process. The object's actual memory address is also accessible via the memoryview object's string representation, and it can be distributed like this:
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank:
    content_pointer = comm.bcast(root=0)
    print(rank, content_pointer)
else:
    content = ''.join(['a' for i in range(100000000)]).encode()
    mv = memoryview(content)
    print(mv)
    comm.bcast(str(mv).split()[-1][:-1], root=0)
This prints:
<memory at 0x7f362a405048>
1 0x7f362a405048
2 0x7f362a405048
...
That is why I believe there must be a way to reconstruct the object in another process. However, I cannot find any clue in the documentation on how to do this.

In short, my question is: is it possible to share an object between processes on the same node with mpi4py?
Here is a simple example of using shared memory with MPI, slightly modified from https://groups.google.com/d/msg/mpi4py/Fme1n9niNwQ/lk3VJ54WAQAJ

You can run it with: mpirun -n 2 python3 shared_memory_test.py

(assuming you saved it as shared_memory_test.py)
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD

# create a shared array of size 1000 elements of type double
size = 1000
itemsize = MPI.DOUBLE.Get_size()
if comm.Get_rank() == 0:
    nbytes = size * itemsize
else:
    nbytes = 0

# on rank 0, create the shared block
# on rank 1 get a handle to it (known as a window in MPI speak)
win = MPI.Win.Allocate_shared(nbytes, itemsize, comm=comm)

# create a numpy array whose data points to the shared mem
buf, itemsize = win.Shared_query(0)
assert itemsize == MPI.DOUBLE.Get_size()
ary = np.ndarray(buffer=buf, dtype='d', shape=(size,))

# in process rank 1:
# write the numbers 0.0,1.0,..,4.0 to the first 5 elements of the array
if comm.rank == 1:
    ary[:5] = np.arange(5)

# wait in process rank 0 until process 1 has written to the array
comm.Barrier()

# check that the array is actually shared and process 0 can see
# the changes made in the array by process 1
if comm.rank == 0:
    print(ary[:10])
This should output (printed from process rank 0):
[0. 1. 2. 3. 4. 0. 0. 0. 0. 0.]
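The same pattern can be adapted to the large byte string from the question: allocate the shared window with an itemsize of 1 and view it as a numpy byte array, so rank 0 fills the buffer once and every rank reads it without any broadcast copies. This is only a sketch under the same Win.Allocate_shared / Shared_query approach as above; the buffer size here is kept small for illustration, and names like data and size are my own.

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    size = 1000  # stand-in for the 100000000-byte string in the question
    # only rank 0 allocates the shared block; the other ranks attach to it
    nbytes = size if rank == 0 else 0
    win = MPI.Win.Allocate_shared(nbytes, 1, comm=comm)

    # every rank queries rank 0's segment and wraps it in a numpy byte array
    buf, itemsize = win.Shared_query(0)
    data = np.ndarray(buffer=buf, dtype='B', shape=(size,))

    if rank == 0:
        # fill the shared buffer once; nothing is broadcast
        data[:] = ord('a')
    comm.Barrier()

    # all ranks now read the same memory, e.g. decode a slice of it
    print(rank, bytes(data[:5]).decode())

Run with mpirun -n 2 python3 shared_bytes_test.py; each rank should print its rank number followed by "aaaaa".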