- From: "Sébastien Boisvert" <sebhtml AT gmail.com>
- To: charm AT cs.uiuc.edu
- Subject: [charm] MPI functions used in CHARM++ (with arch/mpi)
- Date: Wed, 17 Oct 2012 02:44:43 -0000
- List-archive: <http://lists.cs.uiuc.edu/pipermail/charm/>
- List-id: CHARM parallel programming system <charm.cs.uiuc.edu>
Hello,
According to the benchmark
http://www.hpcadvisorycouncil.com/pdf/NAMD_analysis.pdf
CHARM++ utilizes these MPI functions:
- MPI_Get_count
- MPI_Iprobe
- MPI_Isend
- MPI_Recv
- MPI_Test
- MPI_Wtime
Most of the time is spent in MPI_Iprobe.
From the CHARM++ source code file
git-clones/charm/src/arch/mpi/machine.c
In MPISendOneMsg(), I understand that messages are sent with MPI_Isend and
that MPI_Test is used to check whether the send buffers can be safely
reused.
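For reference, here is a minimal standalone sketch of the send-side pattern
as I read it (my own simplified version, with a hypothetical send_one_msg();
it is not the actual machine.c code):

#include <mpi.h>

static char sendbuf[1024];

/* Hypothetical simplification of the pattern: post the send with
 * MPI_Isend, then poll the request with MPI_Test before the buffer
 * is reused. */
void send_one_msg(int dest, int tag, int nbytes, MPI_Comm comm)
{
    MPI_Request req;
    int done = 0;

    /* Non-blocking send: returns immediately, sendbuf not yet reusable. */
    MPI_Isend(sendbuf, nbytes, MPI_BYTE, dest, tag, comm, &req);

    /* In the real code the request would be kept on a list and tested
     * later; here we simply poll until the buffer can be reused. */
    while (!done)
        MPI_Test(&req, &done, MPI_STATUS_IGNORE);
}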
In PumpMsgs(), it seems to me that there are at least two code paths.
First, the code attempts non-blocking reception with pre-posted MPI_Irecv
requests, checked with MPI_Testany + MPI_Get_count.
If nothing has completed there, the code probes for incoming messages and
reads one, if any, with MPI_Iprobe + MPI_Get_count + MPI_Recv.
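To make sure I am reading this right, here is a minimal standalone sketch of
what I believe the two paths look like (my own simplified version with
hypothetical post_recvs()/pump_posted()/pump_probe() helpers, not the actual
machine.c code):

#include <mpi.h>
#include <stdlib.h>

#define NREQ  8
#define BUFSZ 65536

static MPI_Request recv_req[NREQ];
static char        recv_buf[NREQ][BUFSZ];

/* Pre-post a set of receive buffers. */
void post_recvs(MPI_Comm comm)
{
    for (int i = 0; i < NREQ; i++)
        MPI_Irecv(recv_buf[i], BUFSZ, MPI_BYTE, MPI_ANY_SOURCE,
                  MPI_ANY_TAG, comm, &recv_req[i]);
}

/* Path 1: non-blocking reception into pre-posted buffers. */
int pump_posted(MPI_Comm comm)
{
    int idx, flag, nbytes;
    MPI_Status sts;

    MPI_Testany(NREQ, recv_req, &idx, &flag, &sts);
    if (!flag || idx == MPI_UNDEFINED)
        return 0;                       /* nothing completed */
    MPI_Get_count(&sts, MPI_BYTE, &nbytes);
    /* ... hand recv_buf[idx] (nbytes bytes) to the runtime ... */
    MPI_Irecv(recv_buf[idx], BUFSZ, MPI_BYTE, MPI_ANY_SOURCE,
              MPI_ANY_TAG, comm, &recv_req[idx]);   /* repost */
    return 1;
}

/* Path 2: probe first, then allocate a buffer and receive the message. */
int pump_probe(MPI_Comm comm)
{
    int flag, nbytes;
    MPI_Status sts;
    char *msg;

    MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, comm, &flag, &sts);
    if (!flag)
        return 0;                       /* no message pending */
    MPI_Get_count(&sts, MPI_BYTE, &nbytes);
    msg = malloc(nbytes);
    MPI_Recv(msg, nbytes, MPI_BYTE, sts.MPI_SOURCE, sts.MPI_TAG,
             comm, MPI_STATUS_IGNORE);
    /* ... hand msg to the runtime (freed here only to keep the sketch
     * self-contained) ... */
    free(msg);
    return 1;
}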
But in the NAMD benchmarks above, they report only the code path with
MPI_Iprobe, MPI_Get_count and MPI_Recv, and nothing for the code path with
MPI_Irecv, MPI_Testany and MPI_Get_count.
It seems to me that non-blocking reception should perform better, since the MPI
library can place incoming data directly into the user-provided buffer, whereas
with MPI_Iprobe + MPI_Recv that is not necessarily the case (the message may
first land in an internal buffer and be copied out later).
So what's going on? Is it because NAMD was compiled with special options
for the benchmarks?
Thank you.