ppl-accel AT lists.siebelschool.illinois.edu
Subject: Ppl-accel mailing list
List archive
[ppl-accel] Fwd: [namd-ppl] Comet-GPU has GPUDirect RDMA hardware and software
Chronological Thread
- From: Ronak Buch <rabuch2 AT illinois.edu>
- To: "ppl-accel AT cs.uiuc.edu" <ppl-accel AT cs.uiuc.edu>
- Subject: [ppl-accel] Fwd: [namd-ppl] Comet-GPU has GPUDirect RDMA hardware and software
- Date: Fri, 21 Jul 2017 16:05:13 -0500
---------- Forwarded message ----------
From: Jim Phillips <jim AT ks.uiuc.edu>
Date: Fri, Jul 21, 2017 at 4:01 PM
Subject: [namd-ppl] Comet-GPU has GPUDirect RDMA hardware and software
To: "namd-ppl AT cs.illinois.edu" <namd-ppl AT cs.illinois.edu>, David Hardy <dhardy AT ks.uiuc.edu>, John Stone <johns AT ks.uiuc.edu>
Useful for developing a GPUDirect RDMA interface in Charm++. -Jim
From https://portal.xsede.org/sdsc-comet
MVAPICH2-GDR on Comet GPU Nodes
The GPU nodes on Comet have MVAPICH2-GDR available. MVAPICH2-GDR is based
on the standard MVAPICH2 software stack. It incorporates designs that take
advantage of the new GPUDirect RDMA technology for inter-node data
movement on NVIDIA GPU clusters with Mellanox InfiniBand interconnect.
The "mvapich2-gdr" modules are also available on the login nodes for
compiling purposes. An example compile and run script is provided in
"/share/apps/examples/MVAPICH2GDR".
- [ppl-accel] Fwd: [namd-ppl] Comet-GPU has GPUDirect RDMA hardware and software, Ronak Buch, 07/21/2017