- From: vishwas vasudeva <vishvasu98 AT gmail.com>
- To: charm AT cs.illinois.edu, ppl AT cs.uiuc.edu
- Subject: [charm] converting mpi to ampi program
- Date: Thu, 21 Jun 2018 17:17:41 +0530
Respected sir/madam,
While converting an MPI code to AMPI, I followed the instructions given in the Charm 2007 workshop: I took an MPI matrix multiplication program and, without making any changes to the code, compiled it using /bin/charmc with the '-language ampi' flag. This in turn produced two executables, charmrun and pgm (the name given with the -o flag).
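For reference, the build step was essentially the following (run from my charm build directory; the exact charmc path may differ on other installations):
./bin/charmc -language ampi -o pgm mpi_mm.c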
My intention is to run it on the BigSim emulator.
On running the command below (on my local machine, which has a single processor),
./charmrun ++local ./pgm +vp8
I got this response:
------------------------------------------------------------------------------------
wizard@turingmachine:~/temp1/charm/net-linux-x86_64-bigemulator/tests/ampi/mm$ ./charmrun ++local ./pgm +vp8
Charmrun> started all node programs in 0.006 seconds.
Converse/Charm++ Commit ID: v6.6.1-0-g74a2cc5
Charm++> scheduler running in netpoll mode.
Missing parameters for BlueGene machine size!
<tip> use command line options: +x, +y, or +z.
BG> BigSim emulator shutdown gracefully!
BG> Emulation took 0.003119 seconds!
------------------------------------------------------------------------------------------
However, on specifying the +x, +y, and +z parameters, along with +bglog to get the trace files, with the following command,
./charmrun ++local ./pgm +x8 +y1 +z1 +vp8 +bglog
I got this response:
------------------------------------------------------------------------------------------------
wizard@turingmachine:~/temp1/charm/net-linux-x86_64-bigemulator/tests/ampi/mm$ ./charmrun ++local ./pgm +x8 +y1 +z1 +vp8 +bglog
Charmrun> started all node programs in 0.007 seconds.
Converse/Charm++ Commit ID: v6.6.1-0-g74a2cc5
Charm++> scheduler running in netpoll mode.
BG info> Simulating 8x1x1 nodes with 1 comm + 1 work threads each.
BG info> Network type: bluegene.
alpha: 1.000000e-07 packetsize: 1024 CYCLE_TIME_FACTOR:1.000000e-03.
CYCLES_PER_HOP: 5 CYCLES_PER_CORNER: 75.
BG info> cpufactor is 1.000000.
BG info> floating point factor is 0.000000.
BG info> Using WallTimer for timing method.
BG info> Generating timing log.
CharmLB> Load balancer ignores processor background load.
CharmLB> Load balancer assumes all CPUs are same.
Trace: traceroot: ./pgm
Charmrun: error on request socket--
Socket closed before recv.
---------------------------------------------------------------------------------------------
I am running this on my laptop, so I don't think the problem is with any network switches.
Am I missing something? I have also attached the MPI program; please help me sort out the issue.
Thanking you,
Vishwas V.K.
/******************************************************************************
* FILE: mpi_mm.c
* DESCRIPTION:
* MPI Matrix Multiply - C Version
* In this code, the master task distributes a matrix multiply
* operation to numtasks-1 worker tasks.
* NOTE: C and Fortran versions of this code differ because of the way
* arrays are stored/passed. C arrays are row-major order but Fortran
* arrays are column-major order.
* AUTHOR: Blaise Barney. Adapted from Ros Leibensperger, Cornell Theory
* Center. Converted to MPI: George L. Gusciora, MHPCC (1/95)
* LAST REVISED: 04/13/05
******************************************************************************/
#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>
#define NRA 62 /* number of rows in matrix A */
#define NCA 15 /* number of columns in matrix A */
#define NCB 7 /* number of columns in matrix B */
#define MASTER 0 /* taskid of first task */
#define FROM_MASTER 1 /* setting a message type */
#define FROM_WORKER 2 /* setting a message type */
int main (int argc, char *argv[])
{
int numtasks, /* number of tasks in partition */
taskid, /* a task identifier */
numworkers, /* number of worker tasks */
source, /* task id of message source */
dest, /* task id of message destination */
mtype, /* message type */
rows, /* rows of matrix A sent to each worker */
averow, extra, offset, /* used to determine rows sent to each worker */
i, j, k; /* misc */
double a[NRA][NCA], /* matrix A to be multiplied */
b[NCA][NCB], /* matrix B to be multiplied */
c[NRA][NCB]; /* result matrix C */
MPI_Status status;
MPI_Init(&argc,&argv);
MPI_Comm_rank(MPI_COMM_WORLD,&taskid);
MPI_Comm_size(MPI_COMM_WORLD,&numtasks);
if (numtasks < 2 ) {
printf("Need at least two MPI tasks. Quitting...\n");
MPI_Abort(MPI_COMM_WORLD, 1);
exit(1);
}
numworkers = numtasks-1;
/**************************** master task ************************************/
if (taskid == MASTER)
{
printf("mpi_mm has started with %d tasks.\n",numtasks);
printf("Initializing arrays...\n");
for (i=0; i<NRA; i++)
for (j=0; j<NCA; j++)
a[i][j]= i+j;
for (i=0; i<NCA; i++)
for (j=0; j<NCB; j++)
b[i][j]= i*j;
/* Send matrix data to the worker tasks */
averow = NRA/numworkers;
extra = NRA%numworkers;
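/* The first 'extra' workers get averow+1 rows, the rest get averow.
   E.g. with NRA=62 and 7 workers: averow=8, extra=6, so workers 1-6
   receive 9 rows each and worker 7 receives 8 (6*9 + 8 = 62). */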
offset = 0;
mtype = FROM_MASTER;
for (dest=1; dest<=numworkers; dest++)
{
rows = (dest <= extra) ? averow+1 : averow;
printf("Sending %d rows to task %d offset=%d\n",rows,dest,offset);
MPI_Send(&offset, 1, MPI_INT, dest, mtype, MPI_COMM_WORLD);
MPI_Send(&rows, 1, MPI_INT, dest, mtype, MPI_COMM_WORLD);
MPI_Send(&a[offset][0], rows*NCA, MPI_DOUBLE, dest, mtype,
MPI_COMM_WORLD);
MPI_Send(&b, NCA*NCB, MPI_DOUBLE, dest, mtype, MPI_COMM_WORLD);
offset = offset + rows;
}
/* Receive results from worker tasks */
mtype = FROM_WORKER;
for (i=1; i<=numworkers; i++)
{
source = i;
MPI_Recv(&offset, 1, MPI_INT, source, mtype, MPI_COMM_WORLD, &status);
MPI_Recv(&rows, 1, MPI_INT, source, mtype, MPI_COMM_WORLD, &status);
MPI_Recv(&c[offset][0], rows*NCB, MPI_DOUBLE, source, mtype,
MPI_COMM_WORLD, &status);
printf("Received results from task %d\n",source);
}
/* Print results */
printf("******************************************************\n");
printf("Result Matrix:\n");
for (i=0; i<NRA; i++)
{
printf("\n");
for (j=0; j<NCB; j++)
printf("%6.2f ", c[i][j]);
}
printf("\n******************************************************\n");
printf ("Done.\n");
}
/**************************** worker task ************************************/
if (taskid > MASTER)
{
mtype = FROM_MASTER;
MPI_Recv(&offset, 1, MPI_INT, MASTER, mtype, MPI_COMM_WORLD, &status);
MPI_Recv(&rows, 1, MPI_INT, MASTER, mtype, MPI_COMM_WORLD, &status);
MPI_Recv(&a, rows*NCA, MPI_DOUBLE, MASTER, mtype, MPI_COMM_WORLD, &status);
MPI_Recv(&b, NCA*NCB, MPI_DOUBLE, MASTER, mtype, MPI_COMM_WORLD, &status);
for (k=0; k<NCB; k++)
for (i=0; i<rows; i++)
{
c[i][k] = 0.0;
for (j=0; j<NCA; j++)
c[i][k] = c[i][k] + a[i][j] * b[j][k];
}
mtype = FROM_WORKER;
MPI_Send(&offset, 1, MPI_INT, MASTER, mtype, MPI_COMM_WORLD);
MPI_Send(&rows, 1, MPI_INT, MASTER, mtype, MPI_COMM_WORLD);
MPI_Send(&c, rows*NCB, MPI_DOUBLE, MASTER, mtype, MPI_COMM_WORLD);
}
MPI_Finalize();
return 0;
}