
Estimation of the number of needed GPUs

The W-SLDA Toolkit provides a script, tools/td-memory.py, that estimates the number of GPUs needed to run your code efficiently. Edit the # SETTINGS section and run the script. Example:

# SETTINGS
NX = 128
NY = 128
NZ = 16
codedim = 2   # dimensionality of the code
nwf = 70141   # provide the number if you know it; otherwise the script will use a simple estimate
mem_per_gpu = 16.0          # in GB
min_mem_utilization = 2.0   # in GB

Note that the number of wave functions to be evolved is typically printed by the st-wslda code when it writes them to files. Alternatively, you can leave nwf=None, and the script will then use a simple estimate of this number.
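
For orientation, below is a rough sketch of the kind of memory estimate involved, assuming each evolved wave function is stored as NX*NY*NZ complex doubles (16 bytes per lattice point). This is only an order-of-magnitude illustration under that assumption, not the exact formula used by td-memory.py, which may account for additional buffers (for example, time-integration work arrays) and can therefore report a larger minimum:

import math

# Illustrative estimate only (not the exact formula used by td-memory.py).
NX, NY, NZ = 128, 128, 16
nwf = 70141                  # number of evolved wave functions
mem_per_gpu = 16.0           # GPU memory in GB

bytes_per_wf = NX * NY * NZ * 16          # one complex double (16 bytes) per lattice point
total_gb = nwf * bytes_per_wf / 1024**3   # storage for all wave functions
min_gpus = math.ceil(total_gb / mem_per_gpu)

print(f"wave-function storage ~ {total_gb:.0f} GB, at least {min_gpus} GPUs to hold it")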
Running the script:

[gabrielw@wutdell tools]$ python td-memory.py 
MINIMAL NUMBER OF GPUs=24

and a plot will also be displayed (figure: td-memory).

To obtain good performance, it is recommended that the memory utilization of each GPU card be about 50% or more of its capacity. In the given example, this means running the code on fewer than about 50 GPUs.
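
As a hypothetical illustration of this rule of thumb (using the example figures above, not output of td-memory.py), one can check the per-GPU memory for a few candidate GPU counts:

# Illustrative only; td-memory.py and its plot are the authoritative source.
mem_per_gpu = 16.0                  # GB per card
total_memory_gb = 24 * mem_per_gpu  # assume the job roughly fills the 24-GPU minimum

for ngpus in (24, 48, 64):
    per_gpu = total_memory_gb / ngpus
    print(f"{ngpus:3d} GPUs -> {per_gpu:4.1f} GB per GPU ({per_gpu / mem_per_gpu:.0%} of capacity)")

Under these assumptions, 48 GPUs would still keep each card at roughly 50% utilization, while 64 GPUs would drop below the recommended threshold, which is why the example suggests staying below about 50 GPUs.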
