- From: Gengbin Zheng <gzheng AT illinois.edu>
- To: d.fallaize AT ucl.ac.uk
- Cc: charm AT cs.uiuc.edu
- Subject: Re: [charm] [ppl] nodegroups don't work as I expect under mpi-smp - bug?
- Date: Mon, 1 Aug 2011 08:40:14 -0500
- List-archive: <http://lists.cs.uiuc.edu/pipermail/charm>
- List-id: CHARM parallel programming system <charm.cs.uiuc.edu>
In your case, for the MPI build, you need to run it like this:
mpirun -np 1 ./pgm ++ppn 2
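(With the SMP layer, each MPI rank becomes one Charm++ logical node and
++ppn sets the number of worker threads per rank, so a nodegroup gets one
branch per rank rather than one per core. As a sketch for a multi-node job,
assuming 8-core nodes with one core left for each rank's communication
thread, you would run something like

    mpirun -np <number of physical nodes> ./pgm ++ppn 7

and adjust ++ppn to the number of worker cores you want per node.)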
Gengbin
On Thu, Jul 21, 2011 at 8:26 PM, David Fallaize <d.fallaize AT ucl.ac.uk> wrote:
> Hi,
>
> I've written my parallel program in Charm++. It is intended to run on a
> typical cluster where each node is a multi-core machine and jobs go through
> a submission queue, which basically means I build
> mpi-linux-x86_64-smp-mpicxx using MPICH2. I'm using the latest version of
> Charm++ from Git.
>
> From the description of nodegroups, I expect one of these objects to be
> instantiated on each node I run on. However, when I run my program I see
> one nodegroup object created per processor, not per node. I checked with
> one of the test programs that comes with Charm++ (pingpong): by adding a
> line to the PingN constructor (PingN is supposed to be a nodegroup object)
> I can observe two of these being created on a single SMP node, i.e.
> "Creating a PingN." appears twice in the output appended below. (I tried
> OpenMPI too and got the same result.)
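> (Roughly, the change was just a print in the nodegroup's constructor; a
> minimal sketch, assuming the standard pingpong layout where PingN is
> declared as a nodegroup in the .ci file:
>
>     // pingpong.ci (sketch):  nodegroup PingN { entry PingN(); ... };
>     PingN::PingN() {
>         CkPrintf("Creating a PingN.\n");  // expect one line per logical node
>         // ... rest of the original constructor ...
>     }
>
> so each "Creating a PingN." line in the output corresponds to one
> nodegroup branch being constructed.)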
>
> Using the net-smp version (which is what I've been doing on my
> development machine) gives a single nodegroup per node as expected, so
> why do I not get the same under MPI? Is this a bug? This is a serious
> problem for me as part of my program rather relies on the one-per-node
> behaviour of nodegroups!
>
> Thanks,
>
> David Fallaize
> PhD student, UCL, London
>
> Running on 2 processors: ./pgm
> charmrun> /usr/bin/setarch x86_64 -R mpirun -np 2 ./pgm
> Charm++> Running on MPI version: 2.1 multi-thread support:
> MPI_THREAD_FUNNELED (max supported: MPI_THREAD_SINGLE)
> Converse/Charm++ Commit ID: v6.3.0-308-gbb91f75
> Charm++> Running on 1 unique compute nodes (8-way SMP).
> Creating a PingN.
> Charm++> cpu topology info is gathered in 0.000 seconds.
> Pingpong with payload: 100 iterations: 1000
> Creating a PingN.
> Roundtrip time for 1D Arrays is 26.417017 us
> Roundtrip time for 1D threaded Arrays is 30.620098 us
> Roundtrip time for 2D Arrays is 26.331186 us
> Roundtrip time for 3D Arrays is 26.363850 us
> Roundtrip time for Fancy Arrays is 26.504993 us
> Roundtrip time for Chares (reuse msgs) is 25.969982 us
> Roundtrip time for Chares (new/del msgs) is 26.633024 us
> Roundtrip time for threaded Chares (reuse) is 30.007124 us
> Roundtrip time for Groups is 26.214123 us
> Roundtrip time for NodeGroups is 26.480913 us
> End of program
>