The CUDA runtime API cudaGetDeviceProperties is a handy companion in CUDA programming: it retrieves the properties of a CUDA device at runtime so that CUDA kernel launch parameters can be tuned for the hardware.
The NVIDIA device query application, deviceQuery, is built on top of cudaGetDeviceProperties, and it gives first-time users who are unfamiliar with a device a quick overview of its capabilities.
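As a minimal sketch of how cudaGetDeviceProperties is used, the following standalone program, compiled with nvcc, prints a few of the properties that deviceQuery also reports. The API calls and struct fields used here are part of the standard CUDA runtime API; the selection of fields and the formatting are just for illustration.

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int num_devices{0};
    // Count the CUDA capable devices visible to the runtime.
    cudaError_t const status{cudaGetDeviceCount(&num_devices)};
    if (status != cudaSuccess)
    {
        std::printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(status));
        return 1;
    }
    std::printf("Detected %d CUDA Capable device(s)\n", num_devices);
    for (int i{0}; i < num_devices; ++i)
    {
        cudaDeviceProp prop{};
        // Query the properties of device i.
        cudaGetDeviceProperties(&prop, i);
        std::printf("Device %d: \"%s\"\n", i, prop.name);
        std::printf("  CUDA Capability:         %d.%d\n", prop.major, prop.minor);
        std::printf("  Multiprocessors:         %d\n", prop.multiProcessorCount);
        std::printf("  Total global memory:     %zu bytes\n", prop.totalGlobalMem);
        std::printf("  Shared memory per block: %zu bytes\n", prop.sharedMemPerBlock);
        std::printf("  Registers per block:     %d\n", prop.regsPerBlock);
        std::printf("  Warp size:               %d\n", prop.warpSize);
        std::printf("  Max threads per block:   %d\n", prop.maxThreadsPerBlock);
    }
    return 0;
}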
Docker Device Query
The NVIDIA device query application is wrapped in a Docker image, built from a Dockerfile that supports both the AMD64 and ARM64 platforms.
Dockerfile
The NVIDIA device query application is part of the NVIDIA CUDA Samples repository on GitHub, so the Docker image can simply build it from source.
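A minimal Dockerfile sketch is shown below. The base image tag, the CUDA Samples release tag, the path to the deviceQuery sample inside the repository, and the extra apt packages are assumptions and may need to be adjusted for other CUDA versions.

FROM nvidia/cuda:11.6.0-devel-ubuntu20.04

# Build tools for compiling the CUDA sample (assumed package names).
RUN apt-get update && \
    apt-get install -y --no-install-recommends git build-essential && \
    rm -rf /var/lib/apt/lists/*

# Build deviceQuery from the NVIDIA CUDA Samples.
# The v11.6 tag and the Samples/1_Utilities/deviceQuery path are assumptions.
RUN git clone --depth 1 --branch v11.6 https://github.com/NVIDIA/cuda-samples.git /tmp/cuda-samples && \
    cd /tmp/cuda-samples/Samples/1_Utilities/deviceQuery && \
    make && \
    cp deviceQuery /usr/local/bin/ && \
    rm -rf /tmp/cuda-samples

CMD ["deviceQuery"]

On an AMD64 host, the image can be built with, for example, docker build -t device-query:0.0.1 . To produce both AMD64 and ARM64 variants, a multi-platform build with Docker Buildx (--platform linux/amd64,linux/arm64) is one option, provided the chosen base image tag is published for both architectures. Running the container with the --gpus flag requires the NVIDIA Container Toolkit on the host.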
$ docker run --gpus all device-query:0.0.1
deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "NVIDIA GeForce RTX 3090"
  CUDA Driver Version / Runtime Version          11.6 / 11.6
  CUDA Capability Major/Minor version number:    8.6
  Total amount of global memory:                 24246 MBytes (25423577088 bytes)
  (082) Multiprocessors, (128) CUDA Cores/MP:    10496 CUDA Cores
  GPU Max Clock rate:                            1695 MHz (1.70 GHz)
  Memory Clock rate:                             9751 Mhz
  Memory Bus Width:                              384-bit
  L2 Cache Size:                                 6291456 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total shared memory per multiprocessor:        102400 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  1536
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z):  (1024, 1024, 64)
  Max dimension size of a grid size (x,y,z):     (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Managed Memory:                Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 11.6, CUDA Runtime Version = 11.6, NumDevs = 1
Result = PASS