charm AT lists.siebelschool.illinois.edu
Subject: Charm++ parallel programming system
List archive
- From: Sam White <white67 AT illinois.edu>
- To: Maksym Planeta <mplaneta AT os.inf.tu-dresden.de>
- Cc: Jeff Hammond <jeff.science AT gmail.com>, "Miller, Philip B" <mille121 AT illinois.edu>, charm <charm AT lists.cs.illinois.edu>
- Subject: Re: [charm] AMPI automatic code transformation
- Date: Sat, 22 Oct 2016 09:17:58 -0500
On BG/Q, AMPI's thread migration does not work, but you can still run with virtualization. AMPI works fine on Crays, except that Charm++ and AMPI did not build with the Cray compiler as of a couple of months ago; I'm not sure whether that has since changed.
It is fair to compare AMPI with other MPI implementations. In general, AMPI programs are just MPI programs without mutable global/static variables, and they remain plain MPI programs unless you add calls to AMPI's extensions (such as AMPI_Migrate). You can find more information in the AMPI manual here: http://charm.cs.illinois.edu/manuals/html/ampi/manual.html
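For illustration, here is a minimal sketch of what removing a mutable global typically looks like when porting to AMPI; the names are hypothetical and not taken from any of the benchmarks discussed here. The idea is that per-rank state moves out of globals so that many virtual ranks can safely share one OS process.

    /* Sketch: privatizing a mutable global so many AMPI virtual ranks
     * can share one process. Names here are hypothetical. */
    #include <mpi.h>
    #include <stdio.h>

    /* Before: static int iter_count = 0;  -- shared by every rank in a process */

    typedef struct {
        int iter_count;   /* formerly a global, now explicit per-rank state */
    } rank_state;

    static void do_step(rank_state *st) {
        st->iter_count++;           /* mutation stays within this rank's state */
    }

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        rank_state st = { 0 };
        for (int i = 0; i < 10; i++)
            do_step(&st);
        printf("rank %d ran %d steps\n", rank, st.iter_count);
        MPI_Finalize();
        return 0;
    }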
I don't believe any of the PRKs in the GitHub repo suffer from dynamic load imbalance, but linking with '-memory isomalloc -module CommonLBs' and adding a call to AMPI_Migrate() would be the only additions required to support it. Some of the ASC proxy apps do have load imbalance, though. I've been meaning to update and collect the various AMPI mini-apps in a better way, so I can do that and send you a link.
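As a rough sketch of where that AMPI_Migrate() call would go, consider an iteration boundary in the main loop. The no-argument form below is an assumption matching older AMPI versions; newer AMPI takes an MPI_Info hint instead, so check the manual linked above. Compile with ampicc, link with the flags mentioned, and run with more virtual ranks than cores (e.g. charmrun's +vp option) so the load balancer has something to move.

    /* Sketch: periodic load balancing in an AMPI port (details hypothetical).
     * Compile with ampicc; link with '-memory isomalloc -module CommonLBs'
     * so migrated ranks take their heap with them. */
    #include <mpi.h>

    static void compute_step(void) {
        /* placeholder for one timestep of real application work */
    }

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        for (int step = 0; step < 1000; step++) {
            compute_step();
            if (step % 100 == 0) {
                /* AMPI extension: let the runtime migrate ranks for balance.
                 * Assumes the no-argument form; newer AMPI versions take an
                 * MPI_Info hint instead -- see the AMPI manual. */
                AMPI_Migrate();
            }
        }
        MPI_Finalize();
        return 0;
    }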
-Sam
On Fri, Oct 21, 2016 at 7:11 PM, Maksym Planeta <mplaneta AT os.inf.tu-dresden.de> wrote:
Dear Jeff,
Do I understand correctly that it is fair to compare the MPI-1 implementations with the AMPI implementations (the code looks to be the same)?
Are these applications too small to benefit from AMPI load balancing?
Can AMPI automatically serialize the individual ranks of the PRKs, using isomalloc, to enable transparent load balancing?
On 10/22/2016 12:55 AM, Jeff Hammond wrote:
Currently, I just wanted to compare AMPI vs. MPI, and for that I planned to port several benchmarks (NPB or mini-apps). I would be happy to have any tool that makes porting smoother.
I believe we actually have ported versions of the NPB that we could
readily share with you. We've also already ported and tested parts
of the Mantevo suite and most of the Lawrence Livermore ASC proxy
applications.
The Parallel Research Kernels (PRK) project already supports AMPI, in addition to Charm++ and approximately a dozen other programming models. See https://github.com/ParRes/Kernels/ for details. The AMPI builds are part of our CI system (https://travis-ci.org/ParRes/Kernels), so I know they are working.
We didn't publish AMPI results in http://dx.doi.org/10.1007/978-3-319-41321-1_17, but it is a good overview of the PRK project in general. I can provide more details offline if you want.
Mostly, I wanted to distinguish between commodity clusters and more
proprietary supercomputers, like IBM Blue Gene and Cray. The
specialized systems have more quirks that make AMPI a bit harder to
use. SLURM on a common Linux cluster is perfectly straightforward.
Charm++ runs wonderfully on both Blue Gene and Cray machines. I thought
we tested AMPI on Cray as part of our study, but perhaps my memory is
suffering from bitflips. I guess Blue Gene may have issues related to
virtual memory and compiler support.
Best,
Jeff
--
Jeff Hammond
jeff.science AT gmail.com
http://jeffhammond.github.io/
--
Regards,
Maksym Planeta