charm - Re: [charm] Is AMPI support MPI_Waitall?

charm AT lists.siebelschool.illinois.edu

Subject: Charm++ parallel programming system

  • From: Phil Miller <mille121 AT illinois.edu>
  • To: 张凯 <zhangk1985 AT gmail.com>
  • Cc: charm AT cs.uiuc.edu
  • Subject: Re: [charm] Is AMPI support MPI_Waitall?
  • Date: Thu, 28 Jan 2010 14:46:39 -0600
  • List-archive: <http://lists.cs.uiuc.edu/pipermail/charm>
  • List-id: CHARM parallel programming system <charm.cs.uiuc.edu>

The bug has been fixed in the development version of Charm++. If you use
a pre-built development binary, the fix will be in tonight's autobuild
for your platform. If you're building from development source yourself,
the patch is attached. If you're using the released Charm 6.1.x, we can
port the fix over for you if you're not comfortable doing so yourself.

Phil

2010/1/28 张凯
<zhangk1985 AT gmail.com>:
>
> I think you've found the problem I ran into.
>
> Thanks for your reply.
>
> Best regards.
>
> Zhang Kai
>
> 2010/1/29 Phil Miller
> <mille121 AT illinois.edu>
>>
>> On Thu, Jan 28, 2010 at 07:50, 张凯
>> <zhangk1985 AT gmail.com>
>> wrote:
>> > hi:
>> >
>> > I'm a beginner with AMPI and am trying to run an MPI program with
>> > it, but I ran into a small problem.
>> >
>> > Here
>> > (http://www.mcs.anl.gov/research/projects/mpi/usingmpi/examples/advmsg/nbodypipe_c.htm)
>> > you can find an example of an MPI program. I have successfully
>> > built and run it using both MPICH and Intel MPI.
>> >
>> > However, when I run it with AMPI, the program blocks in
>> > MPI_Waitall and never returns.
>> >
>> > I ran it with just the ++local +p2 +vp2 options. Did I miss some
>> > options, or misconfigure AMPI?
>>
>> I'm seeing the same effect as you describe on a net-linux-x86_64 build
>> of AMPI from the latest charm sources. We'll look into this and get
>> back to you.
>>
>> For reference, the attached code (with added prints) produces the
>> following:
>>
>> $ ./charmrun nbp +vp 4 20 +p4
>> Charm++: scheduler running in netpoll mode.
>> Charm++> cpu topology info is being gathered.
>> Charm++> Running on 1 unique compute nodes (8-way SMP).
>> Iteration 9
>> Iteration 9:0 a
>> Iteration 9
>> Iteration 9:0 a
>> Iteration 9
>> Iteration 9:0 a
>> Iteration 9
>> Iteration 9:0 a
>> Iteration 9:0 b
>> Iteration 9:0 b
>> Iteration 9:0 b
>> Iteration 9:0 b
>
>


