NVIDIA Tensor Core TN Layout MMA Instruction
Introduction
NVIDIA Tensor Cores are specialized hardware units designed to accelerate matrix operations, particularly matrix multiplications, which are fundamental to many deep learning and high-performance computing applications. Tensor Cores utilize Matrix Multiply-Accumulate (MMA) instructions to perform these operations efficiently. However, most of the NVIDIA Tensor Core MMA instructions are optimized for a specific data layout known as the TN layout, in which the input matrix $A$ is stored in row-major order and the input matrix $B$ is stored in column-major order.
In this article, I would like to discuss why TN layout is the most optimized layout for GEMM problems across different hardware architectures, how NVIDIA Tensor Core MMA instructions are designed specifically for TN layout, and how non-TN layout GEMM solutions can be implemented using these TN layout MMA instructions.
GEMM and MMA Layouts
GEMM (General Matrix-Matrix Multiplication) computes the product of an input matrix $A$ of shape $(M, K)$ and an input matrix $B$ of shape $(K, N)$, producing an output matrix $C$ of shape $(M, N)$; the matrices may be stored in different data layouts. In the context of this article, we ignore the accumulation of matrix $C$ in GEMM problems.
MMA (Matrix Multiply-Accumulate) instructions are low-level instructions that perform matrix multiplications on small sub-matrices (tiles) of the input matrices $A$ and $B$, producing a sub-matrix (tile) of the output matrix $C$. MMA instructions are often used to accelerate GEMM operations on NVIDIA GPUs when using Tensor Cores.
There are four different layouts for GEMM problems as well as for MMA instructions:
- TN: Matrix $A$ is stored in row-major and matrix $B$ is stored in column-major.
- NT: Matrix $A$ is stored in column-major and matrix $B$ is stored in row-major.
- NN: Both matrix $A$ and matrix $B$ are stored in column-major.
- TT: Both matrix $A$ and matrix $B$ are stored in row-major.
The resulting matrix $C$ is always stored in column-major format. If the user wants to store the resulting matrix $C$ in row-major format, there are ways to translate such GEMM problems into one of the four layouts described above. Please refer to my previous article “cuBLAS GEMM API Usages for Column-Major and Row-Major Matrices” for more details.
MMA Instruction Performances of Different Layouts
Most of the NVIDIA Tensor Core MMA instructions, especially those on newer GPU architectures, such as SM80, SM90, SM100, and SM120, are specific to TN layout. There are no direct MMA instructions for other layouts such as NT, NN, and TT for those GPU architectures.
The only platform, as far as I know, that provides MMA instructions for all four layouts is the SM70 (Volta) architecture. Below are the benchmark results of different SM70 FP16 MMA instructions using CuTe MMA Benchmark on an NVIDIA RTX 5080 GPU (SM120).
| MMA Instruction | Accumulation Datatype | Throughput (TOPS) |
|---|---|---|
| SM70_8x8x4_F16F16F16F16_TN | FP16 | 95.499 |
| SM70_8x8x4_F16F16F16F16_NT | FP16 | 13.971 |
| SM70_8x8x4_F16F16F16F16_NN | FP16 | 30.857 |
| SM70_8x8x4_F16F16F16F16_TT | FP16 | 19.596 |
| SM70_8x8x4_F32F16F16F32_TN | FP32 | 107.138 |
| SM70_8x8x4_F32F16F16F32_NT | FP32 | 15.362 |
| SM70_8x8x4_F32F16F16F32_NN | FP32 | 28.888 |
| SM70_8x8x4_F32F16F16F32_TT | FP32 | 28.578 |
We can see that among all four layouts of MMA instructions in the same “family”, the TN layout MMA instruction has the highest throughput, about 3x to 7x faster than the other layouts on the SM120 architecture. It should be noted that, according to the NVIDIA Parallel Thread Execution ISA, mma.sync.m8n8k4, the MMA instruction behind the CuTe MMA Atom wrappers, is optimized for the SM70 target architecture and may have substantially reduced performance on other target architectures. Because I no longer have access to SM70 GPUs, I could not verify the true performance of SM70 MMA instructions on the SM70 architecture.
Why Is TN Layout Chosen as the Most Optimized for GEMM Problems?
The next question is: why is the TN layout chosen as the most optimized, and most of the time the only supported, MMA instruction layout on NVIDIA GPU architectures?
In fact, NVIDIA GPU architectures are not the only ones that optimize for TN layout GEMM problems. Even before deep learning became a thing, GEMM problems were widely used in high-performance computing (HPC) applications, and the TN layout has often been the preferred layout for matrix multiplications in HPC as well. This is because the TN layout allows for better memory access patterns and cache utilization, leading to improved performance. Even with a naive single-threaded CPU implementation without SIMD instructions, the TN layout can still outperform the other layouts due to better cache locality.
Benchmark results from an Intel Core i9-9900K CPU show that even with a naive single-threaded implementation, the TN layout still outperforms the other layouts by a large margin, especially when compiler optimizations are enabled with the -O1, -O2, or -O3 flags.
```shell
$ g++ benchmark_matmul_layouts.cpp -o benchmark_matmul_layouts -O0
```
Therefore, the TN layout is generally the most important layout to optimize for GEMM problems across different hardware architectures. Whenever a new hardware architecture or a new GEMM algorithm is invented, the TN layout should be optimized first so that the new solution can compete with existing ones.
NVIDIA Tensor Core TN Layout MMA Instructions
In the following figure, I am showing an example of NVIDIA Tensor Core TN layout MMA instruction SM80_16x8x16_F16F16F16F16_TN, which performs a matrix multiplication of an input matrix $A$ of shape $(16, 16)$ in row-major layout and an input matrix $B$ of shape $(16, 8)$ in column-major layout, resulting in an output matrix $C$ of shape $(16, 8)$ in column-major layout.
Typically, the data is staged in shared memory before being loaded into registers for the MMA instruction. We can see that multiple low-bitwidth values along the $K$ dimension, stored row-major for matrix $A$ and column-major for matrix $B$, can be loaded into one 32-bit register by each thread in the warp (via the ldmatrix instruction). Shared memory bank conflicts are usually eliminated by swizzling the data in shared memory before loading it into registers.
Implementing Non-TN Layout GEMM Using NVIDIA Tensor Core TN Layout MMA Instructions
The next question is, given there are no non-TN layout MMA instructions on most NVIDIA GPU architectures, how could we implement NT, NN, and TT layout GEMM solutions? Will it still be efficient?
Suppose we want to implement an NN layout GEMM solution using NVIDIA Tensor Core TN layout MMA instructions. This means the input matrix $A$ has to be transposed conceptually from column-major to row-major layout. We could think of two possible ways to achieve this:
- Physically transpose the input matrix $A$ from column-major to row-major layout in shared memory before loading it into registers for the MMA instruction. This is typically done when loading data from global memory to shared memory.
- Logically transpose the input matrix $A$ by swapping the thread mapping and the value mapping when loading data from shared memory to registers for the MMA instruction. This is typically done when loading data from shared memory to registers.
The first approach is relatively straightforward and MMA-instruction agnostic. However, it may introduce additional overhead due to shared memory bank conflicts during the transpose. With proper techniques to avoid shared memory bank conflicts, such as swizzling, the overhead can be reduced to almost zero for most GEMM problems.
The second approach is MMA-instruction dependent. Taking the TN layout MMA instruction SM80_16x8x16_F16F16F16F16_TN as an example, thread $T0$ has to load two FP16 values from the logical coordinates $(0, 0)$ and $(0, 1)$ of matrix $A$ in row-major layout. If matrix $A$ is stored in column-major layout, the two FP16 values are actually located at the physical coordinates $(0, 0)$ and $(1, 0)$. There are utility instructions that help load data from shared memory to registers with logical transposition, such as the ldmatrix instruction and its transposed variant.
Conclusions
In this article, we discussed why TN layout is the most optimized layout for GEMM problems across different hardware architectures. We also explored how NVIDIA Tensor Core MMA instructions are designed specifically for TN layout and how non-TN layout GEMM solutions can be implemented efficiently using these TN layout MMA instructions. Because of this, NVIDIA does not have to support all four layouts for MMA instructions on GPU.
References
NVIDIA Tensor Core TN Layout MMA Instruction
https://leimao.github.io/blog/NVIDIA-Tensor-Core-MMA-Instruction-TN-Layout/