# Introduction

`td` codes require machines equipped with GPUs. The standard scenario assumes that the number of parallel MPI processes equals the number of GPUs. The user must provide the correct prescription that uniquely assigns GPU devices to MPI processes. This step depends on the target machine's architecture. To set up the correct profile for the target machine, modify `machine.h`. To print on screen the applied mapping of MPI processes to GPUs, use:

```c
/**
 * Activate this flag in order to print to stdout
 * the applied mapping mpi-process <==> device-id.
 * */
#define PRINT_GPU_DISTRIBUTION
```

# Machine with uniformly distributed GPU cards of the same type

This is the most common case. In such a case, it is sufficient to use the default settings by commenting out:

```c
/**
 * Activate this flag if the target machine has a non-standard distribution of GPUs.
 * In such a case, you need to provide the body of the function `assign_deviceid_to_mpi_process`.
 * If this flag is commented out, it is assumed that the code is running on a machine
 * with uniformly distributed GPU cards across the nodes,
 * and the number of cards per node is controlled by GPUS_PER_NODE or the `gpuspernode` input file tag.
 * */
// #define CUSTOM_GPU_DISTRIBUTION
```

and setting the number of GPUs that each node is equipped with:

```c
/**
 * Default number of GPUs per node.
 * You can overwrite this value by using the `gpuspernode` tag in the input file.
 * The flag is ignored if CUSTOM_GPU_DISTRIBUTION is selected.
 * */
#define GPUS_PER_NODE 1
```
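
The default above can be overridden in the input file. Based on the parameter description, such an input line would look like this (the value shown is the default):

```bash
gpuspernode 1        # number of GPUs per node (resource set), default=1
```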

In the given example, you need to execute the code with the number of MPI processes equal to the number of GPUs. For example, if each node is equipped with one GPU and you plan to run the code on 512 nodes, you should call it as (schematic notation):

```bash
mpirun -n 512 --ntasks-per-node=1 ./td-wslda-3d input.txt
```

# Machine with non-uniform distribution of GPUs

In such a case, you need to define the GPU distribution. For example, consider a machine that has 7 nodes, where the cards are distributed as follows (the content of file `nodes.txt`):

```bash
node2061.grid4cern.if.pw.edu.pl slots=8
node2062.grid4cern.if.pw.edu.pl slots=8
...
node2067.grid4cern.if.pw.edu.pl slots=8
```

The GPU distribution is defined as follows:

```c
/**
 * Activate this flag if the target machine has a non-standard distribution of GPUs.
 * In such a case, you need to provide the body of the function `assign_deviceid_to_mpi_process`.
 * If this flag is commented out, it is assumed that the code is running on a machine
 * with uniformly distributed GPU cards across the nodes,
 * and the number of cards per node is controlled by GPUS_PER_NODE or the `gpuspernode` input file tag.
 * */
#define CUSTOM_GPU_DISTRIBUTION

/**
 * This function is used to assign a unique device ID to the MPI process.
 * @param comm MPI communicator
 * @return device-id assigned to the process whose rank is extracted by MPI_Comm_rank(...)
 * DO NOT REMOVE THE `#if ...` STATEMENT BELOW !!!
 * */
```