- From: Jozsef Bakosi <jbakosi AT gmail.com>
- To: "charm AT cs.uiuc.edu" <charm AT cs.uiuc.edu>
- Subject: [charm] Many individual chares vs chare array
- Date: Thu, 9 Jul 2015 12:10:09 -0600
- List-archive: <http://lists.cs.uiuc.edu/pipermail/charm/>
- List-id: CHARM parallel programming system <charm.cs.uiuc.edu>
Hi folks,
I suspect I know the answer to this question but I'd like some clarification on it.
What is the main difference between creating a (potentially large) number of individual chares that each call back to a single host proxy, versus creating the workers as a chare array and using a reduction? I assume the latter performs some message aggregation under the hood (i.e., along a spanning tree), collecting contributions (in the form of entry method arguments) from the individual array elements and delivering only a single aggregated message to the host. Is this correct? If so, I would expect the chare array approach to give better performance...
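Just so we are talking about the same thing, here is a minimal sketch of the chare-array-plus-reduction variant I have in mind (module, class, and entry method names are placeholders, not from any real code of mine):

  // reduce_example.ci
  mainmodule reduce_example {
    mainchare Main {
      entry Main(CkArgMsg*);
      entry [reductiontarget] void done(int total);
    };
    array [1D] Workers {
      entry Workers();
      entry void work();
    };
  }

  // reduce_example.C
  #include "reduce_example.decl.h"

  /*readonly*/ CProxy_Main mainProxy;

  class Main : public CBase_Main {
  public:
    Main(CkArgMsg* m) {
      delete m;
      mainProxy = thisProxy;
      // Create the worker array; all elements feed one reduction.
      CProxy_Workers workers = CProxy_Workers::ckNew(1000);
      workers.work();
    }
    // Invoked once with the aggregated result, rather than 1000 times.
    void done(int total) {
      CkPrintf("sum = %d\n", total);
      CkExit();
    }
  };

  class Workers : public CBase_Workers {
  public:
    Workers() {}
    Workers(CkMigrateMessage*) {}
    void work() {
      int val = thisIndex;  // each element's local contribution
      // The runtime combines contributions (here with sum_int) and
      // delivers a single message to Main::done().
      CkCallback cb(CkReductionTarget(Main, done), mainProxy);
      contribute(sizeof(int), &val, CkReduction::sum_int, cb);
    }
  };

  #include "reduce_example.def.h"

In the individual-chare version, each of the 1000 chares would instead invoke an entry method on the host proxy directly, so the host would receive 1000 separate messages.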
Thanks,
Jozsef