charm - Re: [charm] [ppl] AMPI Application + MPI-based load balancer

  • From: Rafael Keller Tesser <rktesser AT gmail.com>
  • To: François Tessier <francois.tessier AT inria.fr>
  • Cc: "charm AT cs.uiuc.edu" <charm AT cs.uiuc.edu>
  • Subject: Re: [charm] [ppl] AMPI Application + MPI-based load balancer
  • Date: Wed, 11 Mar 2015 17:39:09 -0300
  • List-archive: <http://lists.cs.uiuc.edu/pipermail/charm/>
  • List-id: CHARM parallel programming system <charm.cs.uiuc.edu>

Hello François,

If you are using GCC, you can see which files are included by executing "gcc -M yourcode.c".
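
For example, with a trivial file that does nothing but include the header (check_mpi.c is a made-up name), the -M output lists the full path of the mpi.h that the compiler actually finds:

    /* check_mpi.c -- a made-up, minimal file used only to see which mpi.h
     * the compiler picks up.  Run it through the same compiler and include
     * flags as your real build, e.g. "gcc -M check_mpi.c" (or "cc -M" /
     * "mpicc -M"), and look for the mpi.h path in the dependency list. */
    #include <mpi.h>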

However, the problem may not be with the header ("mpi.h") but with having two libraries that implement the same functions. If I'm not mistaken, charmc statically links Charm++'s libraries into the generated executable. On the other hand, when you compile your load balancer, you dynamically link it against MPI's library ("libmpi.so" or similar).

This is not a problem for Charm++ applications, since they are not linked against the AMPI libraries. With AMPI, however, the MPI calls in your load balancer may be executing AMPI's implementation, which is statically linked into your binary, instead of the one in "libmpi.so". So you would need to find some way to hide AMPI's implementation of the MPI functions from your load balancer.
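
Purely as an illustration of what I mean (the file and function names below are hypothetical, and this is only a sketch of the failure mode, not a fix):

    /* my_lb_helper.c -- a hypothetical source file from the load balancer,
     * compiled against the host MPI's mpi.h (e.g. cray-mpich). */
    #include <mpi.h>

    void my_lb_split_example(int color, int key)
    {
        MPI_Comm sub;
        /* At link time, this call can bind to the MPI_Comm_split that AMPI
         * statically linked into the executable rather than the one in
         * libmpi.so, which is why you see errors like
         * "Cannot call MPI routines before AMPI is initialized." */
        MPI_Comm_split(MPI_COMM_WORLD, color, key, &sub);
        MPI_Comm_free(&sub);
    }

In other words, the source file may well include the right mpi.h, and the clash only shows up when charmc links everything together.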

Regards,
Tesser

2015-03-06 13:09 GMT-03:00 François Tessier <francois.tessier AT inria.fr>:
Hello,

I tried to add an MPI_Init() call in my LB code. The error is not the same anymore:

------------- Processor 0 Exiting: Called CmiAbort ------------
Reason: TCharm has not been initialized!

Does that help?

Regards,

François

Dr. François TESSIER
University of Bordeaux
Inria - TADaaM Team
Tel : 0033524574152
francois.tessier AT inria.fr
http://runtime.bordeaux.inria.fr/ftessier/
PGP 0x8096B5FA
On 03/03/2015 17:13, François Tessier wrote:
Hello Phil,

I tried a few things, but without any success. Could you explain a bit more? Here are a few more details:

- The libraries I need (libtopomap, METIS, and ParMETIS) are compiled with the default MPI implementation on Blue Waters (cray-mpich).

- My load balancer files include mpi.h (but I haven't managed to determine which one gets picked up, the one from cray-mpich or the one from AMPI).

- My build command (Charm++ itself is not built, just AMPI, but maybe it's included): ./build AMPI mpi-crayxe -g -fopenmp -ltopomap -lmetis -lparmetis -I/u/sciteam/ejeannot/install/libtopomap-0.9-sources/libtopomap-0.9/ -L/u/sciteam/ejeannot/install/libtopomap-0.9-sources/libtopomap-0.9/

- The application is built with charmc with these options: -language ampi -module CommonLBs -module TreeMatchLB -thread context. mpi.h is included in the application's header files.

- My load balancer works well on other applications (kNeighbor, stencil3D, ...) with a Charm++ build.

Thank you for your help

François
Dr. François TESSIER
University of Bordeaux
Inria - TADaaM Team
Tel : 0033524574152
francois.tessier AT inria.fr
http://runtime.bordeaux.inria.fr/ftessier/
PGP 0x8096B5FA
On 25/02/2015 19:12, Phil Miller wrote:
When compiling your LB code and associated routines, double-check that the mpi.h that they're picking up is in fact the host's and not the one provided by AMPI. This is a problem we just saw for another user as well.

On Wed, Feb 25, 2015 at 11:58 AM, François Tessier <francois.tessier AT inria.fr> wrote:
I'm going to try to answer everyone :-)

@Ehsan: No, I compiled the libraries (libtopomap, ParMETIS) with Cray's MPICH2. Maybe I can try with AMPI.

@Gengbin: I'm using libtopomap, which uses MPI. In my code, I call libtopomap functions and some MPI routines such as MPI_Comm_split.

@Celso and Abhinav (and all): My load balancer works well on applications like kNeighbor, but with a Charm++ build (not an AMPI one). Maybe there is something there?

Thank you for your help!

François

Dr. François TESSIER
University of Bordeaux
Inria - TADaaM Team
Tel : 0033524574152
francois.tessier AT inria.fr
http://runtime.bordeaux.inria.fr/ftessier/
PGP 0x8096B5FA
On 25/02/2015 18:31, Abhinav Bhatele wrote:
We have demonstrated the use of ParMetisLB (which uses MPI calls) from within a Charm++ application (LeanMD) through the interoperation framework.

But Francois' case is slightly different because he would like to compile the whole thing as one big AMPI program.


On Wed, Feb 25, 2015 at 9:10 AM, Celso L. Mendes <cmendes AT illinois.edu> wrote:
Francois,

Since you say "I'm able to run the application without load
balancer or with the native ones.", it seems to me that the
problem arises because you're using MPI calls *inside* your
load balancer, is that correct? Are those calls really
essential to your balancer?

I don't think such a balancer (based on MPI) has been built in the past, so this might be new territory in Charm++, but someone at PPL could probably say better than I can.

-Celso



On 2/25/2015 9:49 AM, François Tessier wrote:
Hello,

I've just started working on this again and the problem is still there... Here is a summary:

I'm working on a topology-aware load balancer. Part of my algorithm is written with MPI.

I would like to try this load balancer on the AMPI version of Ondes3D, a simulator of seismic wave propagation, on Blue Waters. To run this application, I build AMPI as follows on a fresh Charm++ checkout: ./build AMPI mpi-crayxe (while linking with some libraries). I'm able to run the application without a load balancer or with the native ones. However, when I carry out this experiment with my load balancer, it fails with this error:

Reason: Cannot call MPI routines before AMPI is initialized.

Is there something special I need to do to use MPI functions in a load balancer with AMPI?

Thanks for your help

François

Dr. François TESSIER
University of Bordeaux
Inria - TADaaM Team
Tel : 0033524574152
francois.tessier AT inria.fr
http://runtime.bordeaux.inria.fr/ftessier/
PGP 0x8096B5FA

On 09/12/2014 19:25, François Tessier wrote:
Celso,

Yes, the #include is in the file containing the main() function. But I noticed a new behavior today. On a new allocation on Blue Waters, I was able to run the application and my load balancer successfully. I've just run the application on another node, and I have the same problem I described yesterday. It seems to depend on the Blue Waters nodes... That's weird.

François

François TESSIER
PhD Student at University of Bordeaux
Inria - Runtime Team
Tel : 0033524574152
francois.tessier AT inria.fr
http://runtime.bordeaux.inria.fr/ftessier/
PGP 0x8096B5FA

On 09/12/2014 00:01, Celso L. Mendes wrote:
Francois,

Do you have the #include for "mpi.h" in the same file where MPI_Init() is invoked? Is this really the same file where main() is defined?

-Celso


On 12/8/2014 4:30 PM, François Tessier wrote:
There is an MPI_Init() at the beginning of the main function in the application's code. Is that enough to use MPI in my load balancer? There is no MPI_Finalize() during the execution; the only one is at the end of the main function.
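
Roughly, the structure looks like this (a minimal sketch with a made-up file name, not the real code):

    /* main.c (hypothetical name) -- the application's entry point */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);    /* at the very beginning of main() */

        /* ... the application's computation; the load balancer is invoked
         * by the runtime during execution, not called from here ... */

        MPI_Finalize();            /* the only MPI_Finalize(), at the end */
        return 0;
    }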

++

François



--
Abhinav Bhatele, people.llnl.gov/bhatele
Center for Applied Scientific Computing, Lawrence Livermore National Laboratory

