How to send/receive in MPI using all processors

2024-01-15

This program is written in the C language with MPI. I am new to MPI and want to use all processors, including process 0, to do some calculations. To learn this concept, I wrote the following small program. But the program hangs at the bottom after receiving the input from process 0 and never sends the results back to process 0.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {    
    MPI_Init(&argc, &argv);
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    int number;
    int result;
    if (world_rank == 0) 
    {
        number = -2;
        int i;
        for(i = 0; i < 4; i++)
        {
            MPI_Send(&number, 1, MPI_INT, i, 0, MPI_COMM_WORLD);
        }
        for(i = 0; i < 4; i++)
        {           /*Error: can't get the results sent by the other processes below*/
            MPI_Recv(&number, 1, MPI_INT, i, 99, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("Process 0 received number %d from i:%d\n", number, i);
        }
    } 
    /*I want to do this without using an else statement here, so that I can use process 0 to do some calculations as well*/

    MPI_Recv(&number, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE); 
    printf("*Process %d received number %d from process 0\n",world_rank, number);
    result = world_rank + 1;
    MPI_Send(&result, 1, MPI_INT, 0, 99, MPI_COMM_WORLD);  /* problem happens here when trying to send result back to process 0*/

    MPI_Finalize();
}

Running it gives this result:

:$ mpicc test.c -o test
:$ mpirun -np 4 test

*Process 1 received number -2 from process 0
*Process 2 received number -2 from process 0
*Process 3 received number -2 from process 0
/* hangs here and will not continue */

Please show me an example if you can, or edit the code above if possible.


I don't really see what the problem would be with using two if statements around the working domain (a sketch of that variant is at the end of this answer). But anyway, here is an example of what could be done.

I modified your code to use collective communications, since they make much more sense than the series of sends and receives you used. Since the initial communication carries a uniform value, I use MPI_Bcast(), which does the same thing in a single call. By contrast, since the result values are all different, a call to MPI_Gather() is perfectly suited. I also introduce a call to sleep() just to simulate that the processes work for a while before sending back their results.

The code now looks like this:

#include <mpi.h>
#include <stdlib.h>   // for malloc and free
#include <stdio.h>    // for printf
#include <unistd.h>   // for sleep

int main( int argc, char *argv[] ) {

    MPI_Init( &argc, &argv );
    int world_rank;
    MPI_Comm_rank( MPI_COMM_WORLD, &world_rank );
    int world_size;
    MPI_Comm_size( MPI_COMM_WORLD, &world_size );

    // sending the same number to all processes via broadcast from process 0
    int number = world_rank == 0 ? -2 : 0;
    MPI_Bcast( &number, 1, MPI_INT, 0, MPI_COMM_WORLD );
    printf( "Process %d received %d from process 0\n", world_rank, number );

    // Do something useful here
    sleep( 1 );
    int my_result = world_rank + 1;

    // Now collecting individual results on process 0
    int *results = world_rank == 0 ? malloc( world_size * sizeof( int ) ) : NULL;
    MPI_Gather( &my_result, 1, MPI_INT, results, 1, MPI_INT, 0, MPI_COMM_WORLD );

    // Process 0 prints what it collected
    if ( world_rank == 0 ) {
        for ( int i = 0; i < world_size; i++ ) {
            printf( "Process 0 received result %d from process %d\n", results[i], i );
        }
        free( results );
    }

    MPI_Finalize();

    return 0;
}

I compiled it as follows:

$ mpicc -std=c99 simple_mpi.c -o simple_mpi

It runs and gives:

$ mpiexec -n 4 ./simple_mpi
Process 0 received -2 from process 0
Process 1 received -2 from process 0
Process 3 received -2 from process 0
Process 2 received -2 from process 0
Process 0 received result 1 from process 0
Process 0 received result 2 from process 1
Process 0 received result 3 from process 2
Process 0 received result 4 from process 3
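
As for the two-if variant mentioned at the top, here is a minimal sketch of what it could look like (my own illustration, not code from the original post): rank 0 keeps its own input and result instead of sending to itself, so every rank, including 0, takes part in the computation and nobody waits on a message that was never sent.

#include <mpi.h>
#include <stdio.h>    // for printf
#include <stdlib.h>   // for malloc and free

int main( int argc, char *argv[] ) {

    MPI_Init( &argc, &argv );
    int world_rank;
    MPI_Comm_rank( MPI_COMM_WORLD, &world_rank );
    int world_size;
    MPI_Comm_size( MPI_COMM_WORLD, &world_size );

    // Distribution: rank 0 sends the input to everyone else and keeps a copy
    int number;
    if ( world_rank == 0 ) {
        number = -2;
        for ( int i = 1; i < world_size; i++ )
            MPI_Send( &number, 1, MPI_INT, i, 0, MPI_COMM_WORLD );
    } else {
        MPI_Recv( &number, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE );
    }
    printf( "Process %d has number %d\n", world_rank, number );

    // Every rank, including 0, does its share of the work
    int my_result = world_rank + 1;

    // Collection: rank 0 stores its own result and receives the others
    if ( world_rank == 0 ) {
        int *results = malloc( world_size * sizeof( int ) );
        results[0] = my_result;
        for ( int i = 1; i < world_size; i++ )
            MPI_Recv( &results[i], 1, MPI_INT, i, 99, MPI_COMM_WORLD, MPI_STATUS_IGNORE );
        for ( int i = 0; i < world_size; i++ )
            printf( "Process 0 received result %d from process %d\n", results[i], i );
        free( results );
    } else {
        MPI_Send( &my_result, 1, MPI_INT, 0, 99, MPI_COMM_WORLD );
    }

    MPI_Finalize();

    return 0;
}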