charm AT lists.siebelschool.illinois.edu
Subject: Charm++ parallel programming system
List archive
- From: Shad Kirmani <sxk5292 AT cse.psu.edu>
- To: "Kale, Laxmikant V" <kale AT illinois.edu>
- Cc: "charm AT cs.uiuc.edu" <charm AT cs.uiuc.edu>, "Venkataraman, Ramprasad" <ramv AT illinois.edu>
- Subject: Re: [charm] [ppl] MPI_Allgather
- Date: Fri, 2 Mar 2012 04:22:53 -0500
- Authentication-results: mr.google.com; spf=pass (google.com: domain of shad.kirmani AT gmail.com designates 10.68.136.97 as permitted sender) smtp.mail=shad.kirmani AT gmail.com; dkim=pass header.i=shad.kirmani AT gmail.com
- List-archive: <http://lists.cs.uiuc.edu/pipermail/charm>
- List-id: CHARM parallel programming system <charm.cs.uiuc.edu>
Hello Dr. Kale,
Thanks a lot for your generosity. :)
I have an array of structures distributed over PEs. Each structure has three doubles and an integer. The total size of this array ranges from 100,000 to 1-2 million elements.
- The size of each contribution should be roughly the array size divided by the number of cores.
- The contribution sizes would not differ much across cores; the variability between them would be around 10-15%.
- We are planning to run this code on 10 - 1000 cores.
Thanks again,
Shad
On Thu, Mar 1, 2012 at 12:47 PM, Kale, Laxmikant V <kale AT illinois.edu> wrote:
We often tend to do these things on demand. :-) Actually, right now we are engaged in re-doing collectives with sections (sort of like communicators), so this is a timely request. I will see if we can get a quick implementation for you.
Since the best algorithms differ depending on the message sizes and distributions, can you say:
- What is the size of the contribution from each core?
- Is it the same or very different across processors? (min, max, average, ...)
- How many nodes and cores are you targeting for now?
Sanjay
--
Laxmikant (Sanjay) Kale    http://charm.cs.uiuc.edu
Professor, Computer Science    kale AT illinois.edu
201 N. Goodwin Avenue    Ph: (217) 244-0094
Urbana, IL 61801-2302    FAX: (217) 265-6582
On 2/29/12 12:03 AM, "Shad Kirmani" <sxk5292 AT cse.psu.edu> wrote:
I want to do an MPI_Allgather on a group. Having an allgather would have helped a lot.
Thanks,
Shad
On Tue, Feb 28, 2012 at 4:29 PM, Ramprasad Venkataraman <ramv AT illinois.edu> wrote:
There is not yet a direct way to achieve an allgather in charm.
An immediate mechanism to achieve something like this would be to
perform a reduction using a CkReduction::set or a CkReduction::concat
reducer. The result of the reduction (gather) can then be broadcast to
all the contributing entities. However, data element ordering within
the result is not guaranteed and has to be achieved manually.
What charm entity do you want to do this on: group, chare array, section?
Ram
On Tue, Feb 28, 2012 at 15:12, Shad Kirmani <sxk5292 AT cse.psu.edu> wrote:
> Hello,
>
> I want to do an MPI_Allgather in charm code. Can anybody please help me with
> this?
>
> Thanks,
> Shad
>
> charm mailing list
> charm AT cs.uiuc.edu
> http://lists.cs.uiuc.edu/mailman/listinfo/charm
>
> _______________________________________________
> ppl mailing list
> ppl AT cs.uiuc.edu
> http://lists.cs.uiuc.edu/mailman/listinfo/ppl
>
--
Ramprasad Venkataraman
Parallel Programming Lab
Univ. of Illinois