Reading GPU Information with py3nvml

Technical Background

As model workloads grow and hardware technology advances, using GPUs for all kinds of computation has gradually become the mainstream way of implementing algorithms. To see how the GPU is occupied while a program runs, for example the memory usage at each step, we need fairly fine-grained tools for reading GPU information. Here we focus on py3nvml as a way to monitor the execution of Python code.

Reading Basic Information

The tool most people reach for is the nvidia-smi command, which reports GPU utilization, memory usage, the driver version, and so on:

$ nvidia-smi
Wed Jan 12 15:52:04 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.42.01    Driver Version: 470.42.01    CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Quadro RTX 4000     On   | 00000000:03:00.0  On |                  N/A |
| 30%   39C    P8    20W / 125W |    538MiB /  7979MiB |     16%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  Quadro RTX 4000     On   | 00000000:A6:00.0 Off |                  N/A |
| 30%   32C    P8     7W / 125W |      6MiB /  7982MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1643      G   /usr/lib/xorg/Xorg                412MiB |
|    0   N/A  N/A      2940      G   /usr/bin/gnome-shell               76MiB |
|    0   N/A  N/A     47102      G   ...AAAAAAAAA= --shared-files       35MiB |
|    0   N/A  N/A    172424      G   ...AAAAAAAAA= --shared-files       11MiB |
|    1   N/A  N/A      1643      G   /usr/lib/xorg/Xorg                  4MiB |
+-----------------------------------------------------------------------------+
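
As a side note, when a script only needs a few of these fields, nvidia-smi itself has a machine-readable query mode that is easy to parse from Python. A minimal sketch, assuming the --query-gpu/--format options (the field list below is only an example):

import subprocess

# Query only the fields we care about, in CSV form without header or units.
fields = "index,name,memory.used,memory.total,utilization.gpu"
cmd = ["nvidia-smi", "--query-gpu={}".format(fields), "--format=csv,noheader,nounits"]
output = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

for line in output.strip().splitlines():
    index, name, mem_used, mem_total, util = [field.strip() for field in line.split(",")]
    print("GPU {} ({}): {}/{} MiB, util {}%".format(index, name, mem_used, mem_total, util))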

However, with only the output of nvidia-smi and no profiler, it is hard to analyze in any detail how usage changes while a program runs. As an aside, here is a neat little tool whose usage is very similar to nvidia-smi: gpustat. It can be installed and managed directly with pip:

$ python3 -m pip install gpustat
Collecting gpustat
  Downloading gpustat-0.6.0.tar.gz (78 kB)
     |████████████████████████████████| 78 kB 686 kB/s
Requirement already satisfied: six>=1.7 in /home/dechin/.local/lib/python3.8/site-packages (from gpustat) (1.16.0)
Collecting nvidia-ml-py3>=7.352.0
  Downloading nvidia-ml-py3-7.352.0.tar.gz (19 kB)
Requirement already satisfied: psutil in /home/dechin/.local/lib/python3.8/site-packages (from gpustat) (5.8.0)
Collecting blessings>=1.6
  Downloading blessings-1.7-py3-none-any.whl (18 kB)
Building wheels for collected packages: gpustat, nvidia-ml-py3
  Building wheel for gpustat (setup.py) ... done
  Created wheel for gpustat: filename=gpustat-0.6.0-py3-none-any.whl size=12617 sha256=4158e741b609c7a1bc6db07d76224db51cd7656a6f2e146e0b81185ce4e960ba
  Stored in directory: /home/dechin/.cache/pip/wheels/0d/d9/80/b6cbcdc9946c7b50ce35441cc9e7d8c5a9d066469ba99bae44
  Building wheel for nvidia-ml-py3 (setup.py) ... done
  Created wheel for nvidia-ml-py3: filename=nvidia_ml_py3-7.352.0-py3-none-any.whl size=19191 sha256=70cd8ffc92286944ad9f5dc4053709af76fc0e79928dc61b98a9819a719f1e31
  Stored in directory: /home/dechin/.cache/pip/wheels/b9/b1/68/cb4feab29709d4155310d29a421389665dcab9eb3b679b527b
Successfully built gpustat nvidia-ml-py3
Installing collected packages: nvidia-ml-py3, blessings, gpustat
Successfully installed blessings-1.7 gpustat-0.6.0 nvidia-ml-py3-7.352.0

Its usage is very similar to nvidia-smi:

$ watch --color -n1 gpustat -cpu 

The output looks like this:

Every 1.0s: gpustat -cpu                   ubuntu2004: Wed Jan 12 15:58:59 2022

ubuntu2004           Wed Jan 12 15:58:59 2022  470.42.01
[0] Quadro RTX 4000  | 39'C,   3 % |   537 /  7979 MB | root:Xorg/1643(412M) de
chin:gnome-shell/2940(75M) dechin:slack/47102(35M) dechin:chrome/172424(11M)
[1] Quadro RTX 4000  | 32'C,   0 % |     6 /  7982 MB | root:Xorg/1643(4M)

The report returned by gpustat includes the usual information such as the GPU model, utilization, memory usage, and the current GPU temperature.
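
If you would rather read gpustat's report from Python than from the terminal, one simple option is its JSON output. A minimal sketch, assuming the gpustat --json flag and the key names used by recent versions of its JSON report:

import json
import subprocess

# Ask gpustat for a machine-readable report and parse it.
raw = subprocess.run(["gpustat", "--json"], capture_output=True, text=True, check=True).stdout
report = json.loads(raw)

for gpu in report["gpus"]:
    print("GPU {} ({}): {}/{} MB".format(gpu["index"], gpu["name"],
                                          gpu["memory.used"], gpu["memory.total"]))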

Installing and Using py3nvml

Now let us look at how to install and use py3nvml itself. It is a library for viewing and monitoring GPU information in real time from Python, and it can be installed and managed with pip:

$ python3 -m pip install py3nvml
Collecting py3nvml
  Downloading py3nvml-0.2.7-py3-none-any.whl (55 kB)
     |████████████████████████████████| 55 kB 650 kB/s
Requirement already satisfied: xmltodict in /home/dechin/anaconda3/lib/python3.8/site-packages (from py3nvml) (0.12.0)
Installing collected packages: py3nvml
Successfully installed py3nvml-0.2.7

Binding GPU Cards with py3nvml

To squeeze out maximum performance, some frameworks grab every GPU card in the resource pool by default when they initialize. Here is an example demonstrated with Jax:

In [1]: import py3nvml

In [2]: from jax import numpy as jnp

In [3]: x = jnp.ones(1000000000)

In [4]: !nvidia-smi
Wed Jan 12 16:08:32 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.42.01    Driver Version: 470.42.01    CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Quadro RTX 4000     On   | 00000000:03:00.0  On |                  N/A |
| 30%   41C    P0    38W / 125W |   7245MiB /  7979MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  Quadro RTX 4000     On   | 00000000:A6:00.0 Off |                  N/A |
| 30%   35C    P0    35W / 125W |    101MiB /  7982MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1643      G   /usr/lib/xorg/Xorg                412MiB |
|    0   N/A  N/A      2940      G   /usr/bin/gnome-shell               75MiB |
|    0   N/A  N/A     47102      G   ...AAAAAAAAA= --shared-files       35MiB |
|    0   N/A  N/A    172424      G   ...AAAAAAAAA= --shared-files       11MiB |
|    0   N/A  N/A    812125      C   /usr/local/bin/python            6705MiB |
|    1   N/A  N/A      1643      G   /usr/lib/xorg/Xorg                  4MiB |
|    1   N/A  N/A    812125      C   /usr/local/bin/python              93MiB |
+-----------------------------------------------------------------------------+

In this example we only allocated a block of device memory to hold a single vector, yet after initialization Jax automatically occupied both local GPU cards. Following the method recommended by Jax, we can set an environment variable so that Jax sees only one of the cards and does not spread onto the other:

In [1]: import os

In [2]: os.environ["CUDA_VISIBLE_DEVICES"] = "1"

In [3]: from jax import numpy as jnp

In [4]: x = jnp.ones(1000000000)

In [5]: !nvidia-smi
Wed Jan 12 16:10:36 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.42.01    Driver Version: 470.42.01    CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Quadro RTX 4000     On   | 00000000:03:00.0  On |                  N/A |
| 30%   40C    P8    19W / 125W |    537MiB /  7979MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  Quadro RTX 4000     On   | 00000000:A6:00.0 Off |                  N/A |
| 30%   35C    P0    35W / 125W |   7195MiB /  7982MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1643      G   /usr/lib/xorg/Xorg                412MiB |
|    0   N/A  N/A      2940      G   /usr/bin/gnome-shell               75MiB |
|    0   N/A  N/A     47102      G   ...AAAAAAAAA= --shared-files       35MiB |
|    0   N/A  N/A    172424      G   ...AAAAAAAAA= --shared-files       11MiB |
|    1   N/A  N/A      1643      G   /usr/lib/xorg/Xorg                  4MiB |
|    1   N/A  N/A    813030      C   /usr/local/bin/python            7187MiB |
+-----------------------------------------------------------------------------+

We can see that only one GPU card is now used, which is what we wanted. However, doing this through environment variables is still not very pythonic, so py3nvml provides the same capability and lets you specify a particular set of GPU cards for the task:

In [1]: import py3nvml

In [2]: from jax import numpy as jnp

In [3]: py3nvml.grab_gpus(num_gpus=1,gpu_select=[1])
Out[3]: 1

In [4]: x = jnp.ones(1000000000)

In [5]: !nvidia-smi
Wed Jan 12 16:12:37 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.42.01    Driver Version: 470.42.01    CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Quadro RTX 4000     On   | 00000000:03:00.0  On |                  N/A |
| 30%   40C    P8    20W / 125W |    537MiB /  7979MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  Quadro RTX 4000     On   | 00000000:A6:00.0 Off |                  N/A |
| 30%   36C    P0    35W / 125W |   7195MiB /  7982MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1643      G   /usr/lib/xorg/Xorg                412MiB |
|    0   N/A  N/A      2940      G   /usr/bin/gnome-shell               75MiB |
|    0   N/A  N/A     47102      G   ...AAAAAAAAA= --shared-files       35MiB |
|    0   N/A  N/A    172424      G   ...AAAAAAAAA= --shared-files       11MiB |
|    1   N/A  N/A      1643      G   /usr/lib/xorg/Xorg                  4MiB |
|    1   N/A  N/A    814673      C   /usr/local/bin/python            7187MiB |
+-----------------------------------------------------------------------------+

Again, only one GPU card was used, with the same effect as the previous approach.
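
In fact grab_gpus achieves this by setting CUDA_VISIBLE_DEVICES for the current process before the framework initializes, so its effect can be verified directly. A minimal sketch (the card index simply mirrors the example above):

import os
import py3nvml

# Must run before the GPU framework (jax here) initializes, otherwise the
# framework may already have claimed every visible card.
num_grabbed = py3nvml.grab_gpus(num_gpus=1, gpu_select=[1])
print(num_grabbed, os.environ.get("CUDA_VISIBLE_DEVICES"))

from jax import numpy as jnp  # imported after grabbing, so only the selected card is visible
x = jnp.ones(1000000)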

Checking for Free GPUs

py3nvml's criterion for an available GPU is that no process is currently running on it; such a card is considered free:

In [1]: import py3nvml

In [2]: free_gpus = py3nvml.get_free_gpus()

In [3]: free_gpus
Out[3]: [True, True]

It should be noted that system applications are not counted here; presumably daemon or graphics processes are filtered out when deciding whether a card is free.
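
A common pattern is to combine this check with the binding described above: query which cards are currently free and restrict the process to the first one found. A minimal sketch, assuming at least one card is free:

import os
import py3nvml

# One boolean per card: True means py3nvml sees no process occupying it.
free_gpus = py3nvml.get_free_gpus()

try:
    first_free = free_gpus.index(True)
except ValueError:
    raise RuntimeError("no free GPU available")

# Expose only that card to the rest of the program.
os.environ["CUDA_VISIBLE_DEVICES"] = str(first_free)
print("Using GPU", first_free)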

Getting Information from the Command Line

Very much like nvidia-smi, py3nvml can also be used from the command line through the py3smi command. It is worth mentioning that real-time monitoring with nvidia-smi usually requires pairing it with watch -n, whereas py3smi does not: running py3smi -l achieves a similar effect on its own.

$ py3smi -l 5
Wed Jan 12 16:17:37 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI                        Driver Version: 470.42.01                 |
+---------------------------------+---------------------+---------------------+
| GPU Fan  Temp Perf Pwr:Usage/Cap|        Memory-Usage | GPU-Util Compute M. |
+=================================+=====================+=====================+
|   0 30%   39C    8   19W / 125W |   537MiB /  7979MiB |       0%    Default |
|   1 30%   33C    8    7W / 125W |     6MiB /  7982MiB |       0%    Default |
+---------------------------------+---------------------+---------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
| GPU        Owner      PID      Uptime  Process Name                   Usage |
+=============================================================================+
+-----------------------------------------------------------------------------+

A noticeable difference is that the process list here is not as long as the one printed by nvidia-smi; presumably system processes are filtered out automatically.

Querying the Driver Version and GPU Model Separately

py3nvml exposes the driver version and device model as separate queries:

In [1]: from py3nvml.py3nvml import *

In [2]: nvmlInit()
Out[2]: <CDLL 'libnvidia-ml.so.1', handle 560ad4d07a60 at 0x7fd13aa52340>

In [3]: print("Driver Version: {}".format(nvmlSystemGetDriverVersion()))
Driver Version: 470.42.01

In [4]: deviceCount = nvmlDeviceGetCount()
   ...: for i in range(deviceCount):
   ...:     handle = nvmlDeviceGetHandleByIndex(i)
   ...:     print("Device {}: {}".format(i, nvmlDeviceGetName(handle)))
   ...:
Device 0: Quadro RTX 4000
Device 1: Quadro RTX 4000

In [5]: nvmlShutdown()

This saves us from filtering the output ourselves one field at a time, which is quite convenient in terms of both flexibility and extensibility.
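
The same handle can be reused to poll other per-device statistics as well. For instance, the sketch below also reads the instantaneous utilization through nvmlDeviceGetUtilizationRates, a standard NVML query exposed by py3nvml (util.gpu and util.memory are percentages):

from py3nvml.py3nvml import (nvmlInit, nvmlShutdown, nvmlDeviceGetCount,
                             nvmlDeviceGetHandleByIndex, nvmlDeviceGetName,
                             nvmlDeviceGetUtilizationRates)

nvmlInit()
for i in range(nvmlDeviceGetCount()):
    handle = nvmlDeviceGetHandleByIndex(i)
    util = nvmlDeviceGetUtilizationRates(handle)
    # util.gpu / util.memory are the utilization percentages reported by the driver
    print("Device {}: {} gpu={}% mem={}%".format(i, nvmlDeviceGetName(handle),
                                                 util.gpu, util.memory))
nvmlShutdown()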

Querying Memory Information Separately

Similarly, the memory usage information is exposed on its own, so users do not have to filter it out themselves, which allows fairly fine-grained inspection:

In [1]: from py3nvml.py3nvml import *

In [2]: nvmlInit()
Out[2]: <CDLL 'libnvidia-ml.so.1', handle 55ae42aadd90 at 0x7f39c700e040>

In [3]: handle = nvmlDeviceGetHandleByIndex(0)

In [4]: info = nvmlDeviceGetMemoryInfo(handle)

In [5]: print("Total memory: {}MiB".format(info.total >> 20))
Total memory: 7979MiB

In [6]: print("Free memory: {}MiB".format(info.free >> 20))
Free memory: 7441MiB

In [7]: print("Used memory: {}MiB".format(info.used >> 20))
Used memory: 537MiB

If you insert these calls into a program, you can see how the memory footprint changes at every step.
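
To track that change step by step, the query can be wrapped in a small helper and called at the points of interest. A minimal sketch built only on the calls shown above (the function name log_gpu_memory is just illustrative):

from py3nvml.py3nvml import (nvmlInit, nvmlShutdown,
                             nvmlDeviceGetHandleByIndex, nvmlDeviceGetMemoryInfo)

def log_gpu_memory(tag, device_index=0):
    """Print the used/total memory of one card, labelled with a tag."""
    nvmlInit()
    try:
        info = nvmlDeviceGetMemoryInfo(nvmlDeviceGetHandleByIndex(device_index))
        print("[{}] used {}MiB / total {}MiB".format(tag, info.used >> 20, info.total >> 20))
    finally:
        nvmlShutdown()

log_gpu_memory("before allocation")
# ... one step of the computation goes here ...
log_gpu_memory("after allocation")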

Summary

In deep learning and other kinds of GPU computing, monitoring GPU information is a very common need. With only system-level monitoring tools it is hard to track the step-by-step changes in memory usage and utilization in any detail; a full profiler, on the other hand, is often too fine-grained, and its setup, output, and filtering are not particularly convenient. In such cases a tool like py3nvml is worth considering: it supports a fine-grained analysis of how a GPU task executes, which helps improve GPU utilization and overall program performance.

Copyright Notice

This article was first published at: //www.cnblogs.com/dechinphy/p/py3nvml.html

Author ID: DechinPhy

More original articles: //www.cnblogs.com/dechinphy/

Tips and donations: //www.cnblogs.com/dechinphy/gallery/image/379634.html

Tencent Cloud column (synced): //cloud.tencent.com/developer/column/91958
