charm AT lists.siebelschool.illinois.edu
Subject: Charm++ parallel programming system
List archive
- From: "Van Der Wijngaart, Rob F" <rob.f.van.der.wijngaart AT intel.com>
- To: Phil Miller <mille121 AT illinois.edu>
- Cc: Sam White <white67 AT illinois.edu>, "charm AT cs.uiuc.edu" <charm AT cs.uiuc.edu>
- Subject: RE: [charm] Adaptive MPI
- Date: Sat, 26 Nov 2016 01:05:40 +0000
- Accept-language: en-US
Thanks, Phil. In general I think more help with the PUP routines would be useful. Since they are callback routines, they are harder to troubleshoot, and also because:

- you cannot ask for a rank ID
- the pup_er variable is opaque (to the application writer)
- most of the documentation is for C++ and Fortran programmers, and my code is in C

Here is one thing that stumped me earlier. If I exited the PUP routine before the test of whether the mode is unpacking, it would complete without problems (without the correct result, of course, but I am first trying to find the source of the segmentation violation). If I placed the exit after the test, the code would bomb. That is why I thought the error occurred in the test itself; I now know that was not the case. Since the PUP routine is called multiple times with different modes (sizing, packing, unpacking), it reached the premature exit whenever the test returned true; whenever it returned false, it skipped over the premature exit and fell through to a set of free calls that appear to be causing the problem (apparently some corrupted pointers).

Rob
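For reference, a minimal C sketch of the PUP pattern under discussion, assuming the Charm++ pup_c.h bindings (pup_er, pup_int, pup_doubles, pup_isUnpacking, pup_isDeleting); the struct and routine names are hypothetical:

    #include <stdlib.h>
    #include "pup_c.h"  /* Charm++ C bindings: pup_er, pup_int, pup_isUnpacking, ... */

    /* Hypothetical per-rank state with a heap buffer that must migrate. */
    typedef struct {
      int n;
      double *data;
    } chunk_t;

    /* PUP callback: the runtime invokes it in sizing, packing, and
       unpacking modes, so every branch must be safe in all three. */
    void chunk_pup(pup_er p, void *d) {
      chunk_t *c = (chunk_t *)d;
      pup_int(p, &c->n);
      if (pup_isUnpacking(p)) {
        /* On the destination: allocate before pup'ing the contents. */
        c->data = (double *)malloc(c->n * sizeof(double));
      }
      pup_doubles(p, c->data, c->n);
      if (pup_isDeleting(p)) {
        /* Free only when the source copy is being destroyed. An
           unguarded free also runs during the sizing pass and leaves
           dangling pointers for the later packing pass. */
        free(c->data);
        c->data = NULL;
      }
    }

Guarding the free calls on pup_isDeleting, rather than on "not unpacking", is what keeps the sizing pass from freeing memory the packing pass still needs.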
From: unmobile AT gmail.com [mailto:unmobile AT gmail.com]
On Behalf Of Phil Miller
Sam: It seems like it should be straightforward to add an assertion in our API entry/exit tracking sentries to catch this kind of issue. Essentially, it would need to check that the calling thread is actually an AMPI process thread that's supposed to be running. We should also document that PUP routines for AMPI code can't call MPI routines.
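By way of illustration, a conceptual sketch of the kind of sentry Phil describes; this is not the actual AMPI internals, and the flag, macro, and mechanism are all hypothetical:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical per-thread flag: set when the scheduler resumes an
       AMPI rank's user-level thread, cleared while runtime callbacks
       (such as PUP routines) run. */
    static _Thread_local int in_ampi_rank_thread = 0;

    /* Hypothetical entry sentry for every MPI_* wrapper: fail with a
       clear message instead of corrupting scheduler state when an MPI
       call is made from outside a running rank, e.g. from a PUP routine. */
    #define AMPI_API_ENTER(fn_name)                                       \
      do {                                                                \
        if (!in_ampi_rank_thread) {                                       \
          fprintf(stderr, "AMPI: %s called from outside an AMPI rank "    \
                          "thread (e.g. from a PUP routine)\n", fn_name); \
          abort();                                                        \
        }                                                                 \
      } while (0)

With a check like this, the segmentation violation Rob hit would instead surface as an immediate, descriptive error at the offending MPI call.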
On Thu, Nov 24, 2016 at 5:36 PM, Van Der Wijngaart, Rob F <rob.f.van.der.wijngaart AT intel.com> wrote:
- Re: [charm] Adaptive MPI, (continued)
- Re: [charm] Adaptive MPI, Sam White, 11/23/2016
- RE: [charm] Adaptive MPI, Van Der Wijngaart, Rob F, 11/23/2016
- RE: [charm] Adaptive MPI, Van Der Wijngaart, Rob F, 11/23/2016
- Message not available
- Re: [charm] Adaptive MPI, Sam White, 11/23/2016
- RE: [charm] Adaptive MPI, Van Der Wijngaart, Rob F, 11/23/2016
- Message not available
- Re: [charm] Adaptive MPI, Sam White, 11/23/2016
- RE: [charm] Adaptive MPI, Van Der Wijngaart, Rob F, 11/23/2016
- RE: [charm] Adaptive MPI, Van Der Wijngaart, Rob F, 11/23/2016
- RE: [charm] Adaptive MPI, Van Der Wijngaart, Rob F, 11/24/2016
- Re: [charm] Adaptive MPI, Phil Miller, 11/25/2016
- RE: [charm] Adaptive MPI, Van Der Wijngaart, Rob F, 11/25/2016
- RE: [charm] Adaptive MPI, Van Der Wijngaart, Rob F, 11/28/2016
- RE: [charm] Adaptive MPI, Van Der Wijngaart, Rob F, 11/28/2016
- Message not available
- Re: [charm] Adaptive MPI, Sam White, 11/28/2016
- RE: [charm] Adaptive MPI, Van Der Wijngaart, Rob F, 11/28/2016
- RE: [charm] Adaptive MPI, Van Der Wijngaart, Rob F, 11/29/2016
- RE: [charm] Adaptive MPI, Van Der Wijngaart, Rob F, 11/29/2016
- RE: [charm] Adaptive MPI, Van Der Wijngaart, Rob F, 11/28/2016
- Re: [charm] Adaptive MPI, Sam White, 11/23/2016
- Re: [charm] Adaptive MPI, Sam White, 11/23/2016
- Re: [charm] Adaptive MPI, Sam White, 11/23/2016