CUDA Constant Memory

Introduction

CUDA constant memory is a special read-only memory space on the device that is served by a dedicated on-chip cache.

There are some caveats when using constant memory. In this post, we will discuss the usage and caveats of constant memory.

Constant Memory

There is a total of 64 KB of constant memory on a device. The constant memory space is cached. As a result, a read from constant memory costs one memory read from device memory only on a cache miss; otherwise, it just costs one read from the constant cache. Accesses to different addresses by threads within a warp are serialized, so the cost scales linearly with the number of unique addresses read by all threads within a warp. As such, the constant cache works best when threads in the same warp access only a few distinct locations. If all threads of a warp access the same location, then constant memory access can be as fast as a register access.
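
To make the discussion concrete, the following is a minimal sketch of how constant memory is declared, initialized, and read. The symbol name kCoeffs, the array size, and the kernel are made up for illustration; __constant__ and cudaMemcpyToSymbol are the actual CUDA mechanisms.

#include <cuda_runtime.h>

// Constant memory must be declared at file scope with __constant__.
// 1024 floats = 4 KB, well within the 64 KB constant memory budget.
__constant__ float kCoeffs[1024];

__global__ void scale(float* data, unsigned int n)
{
    unsigned int const i{blockIdx.x * blockDim.x + threadIdx.x};
    if (i < n)
    {
        // Every thread in the warp reads the same address, so the read is
        // broadcast from the constant cache at near-register speed.
        data[i] *= kCoeffs[blockIdx.x % 1024U];
    }
}

int main()
{
    float h_coeffs[1024];
    for (unsigned int i{0U}; i < 1024U; ++i)
    {
        h_coeffs[i] = 1.0f;
    }
    // Constant memory cannot be written from device code; it is initialized
    // from the host via cudaMemcpyToSymbol (or cudaMemcpyToSymbolAsync).
    cudaMemcpyToSymbol(kCoeffs, h_coeffs, sizeof(h_coeffs));
    // ... allocate and initialize data, then launch, e.g.:
    // scale<<<(n + 255) / 256, 256>>>(d_data, n);
    return 0;
}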

Constant Memory Usage and Performance

In the following example, we perform additions over an array. One input array is stored in global memory, while the array of constant values added to it is stored either in global memory or in constant memory. We compare the performance of accessing constant memory and global memory under different access patterns.

add_constant.cu
#include <functional>
#include <iostream>
#include <string>
#include <vector>

#include <cuda_runtime.h>

#define CHECK_CUDA_ERROR(val) check((val), #val, __FILE__, __LINE__)
void check(cudaError_t err, const char* const func, const char* const file,
           const int line)
{
    if (err != cudaSuccess)
    {
        std::cerr << "CUDA Runtime Error at: " << file << ":" << line
                  << std::endl;
        std::cerr << cudaGetErrorString(err) << " " << func << std::endl;
        std::exit(EXIT_FAILURE);
    }
}

#define CHECK_LAST_CUDA_ERROR() checkLast(__FILE__, __LINE__)
void checkLast(const char* const file, const int line)
{
    cudaError_t const err{cudaGetLastError()};
    if (err != cudaSuccess)
    {
        std::cerr << "CUDA Runtime Error at: " << file << ":" << line
                  << std::endl;
        std::cerr << cudaGetErrorString(err) << std::endl;
        std::exit(EXIT_FAILURE);
    }
}

// Time bound_function on the given stream using CUDA events, after warmup.
template <class T>
float measure_performance(std::function<T(cudaStream_t)> bound_function,
                          cudaStream_t stream, unsigned int num_repeats = 100,
                          unsigned int num_warmups = 100)
{
    cudaEvent_t start, stop;
    float time;

    CHECK_CUDA_ERROR(cudaEventCreate(&start));
    CHECK_CUDA_ERROR(cudaEventCreate(&stop));

    for (unsigned int i{0}; i < num_warmups; ++i)
    {
        bound_function(stream);
    }

    CHECK_CUDA_ERROR(cudaStreamSynchronize(stream));

    CHECK_CUDA_ERROR(cudaEventRecord(start, stream));
    for (unsigned int i{0}; i < num_repeats; ++i)
    {
        bound_function(stream);
    }
    CHECK_CUDA_ERROR(cudaEventRecord(stop, stream));
    CHECK_CUDA_ERROR(cudaEventSynchronize(stop));
    CHECK_LAST_CUDA_ERROR();
    CHECK_CUDA_ERROR(cudaEventElapsedTime(&time, start, stop));
    CHECK_CUDA_ERROR(cudaEventDestroy(start));
    CHECK_CUDA_ERROR(cudaEventDestroy(stop));

    float const latency{time / num_repeats};

    return latency;
}

// Use all the constant memory.
constexpr unsigned int N{64U * 1024U / sizeof(int)};
__constant__ int const_values[N];

// Magic number for generating the pseudo-random access pattern.
constexpr unsigned int magic_number{1357U};

enum struct AccessPattern
{
    OneAccessPerBlock,
    OneAccessPerWarp,
    OneAccessPerThread,
    PseudoRandom
};

void add_constant_cpu(int* sums, int const* inputs, int const* values,
                      unsigned int num_sums, unsigned int num_values,
                      unsigned int block_size, AccessPattern access_pattern)
{
    for (unsigned int i{0U}; i < num_sums; ++i)
    {
        unsigned int const block_id{i / block_size};
        unsigned int const thread_id{i % block_size};
        unsigned int const warp_id{thread_id / 32U};
        unsigned int index{0U};

        switch (access_pattern)
        {
            case AccessPattern::OneAccessPerBlock:
                index = block_id % num_values;
                break;
            case AccessPattern::OneAccessPerWarp:
                index = warp_id % num_values;
                break;
            case AccessPattern::OneAccessPerThread:
                index = thread_id % num_values;
                break;
            case AccessPattern::PseudoRandom:
                index = (thread_id * magic_number) % num_values;
                break;
        }

        sums[i] = inputs[i] + values[index];
    }
}

__global__ void add_constant_global_memory(
    int* sums, int const* inputs, int const* values, unsigned int num_sums,
    unsigned int num_values,
    AccessPattern access_pattern = AccessPattern::OneAccessPerBlock)
{
    unsigned int const i{blockIdx.x * blockDim.x + threadIdx.x};
    unsigned int const block_id{blockIdx.x};
    unsigned int const thread_id{threadIdx.x};
    unsigned int const warp_id{threadIdx.x / warpSize};
    unsigned int index{0U};

    switch (access_pattern)
    {
        case AccessPattern::OneAccessPerBlock:
            index = block_id % num_values;
            break;
        case AccessPattern::OneAccessPerWarp:
            index = warp_id % num_values;
            break;
        case AccessPattern::OneAccessPerThread:
            index = thread_id % num_values;
            break;
        case AccessPattern::PseudoRandom:
            index = (thread_id * magic_number) % num_values;
            break;
    }

    if (i < num_sums)
    {
        sums[i] = inputs[i] + values[index];
    }
}

void launch_add_constant_global_memory(int* sums, int const* inputs,
                                       int const* values,
                                       unsigned int num_sums,
                                       unsigned int num_values,
                                       unsigned int block_size,
                                       AccessPattern access_pattern,
                                       cudaStream_t stream)
{
    add_constant_global_memory<<<(num_sums + block_size - 1) / block_size,
                                 block_size, 0, stream>>>(
        sums, inputs, values, num_sums, num_values, access_pattern);
    CHECK_LAST_CUDA_ERROR();
}

__global__ void add_constant_constant_memory(int* sums, int const* inputs,
                                             unsigned int num_sums,
                                             AccessPattern access_pattern)
{
    unsigned int const i{blockIdx.x * blockDim.x + threadIdx.x};
    unsigned int const block_id{blockIdx.x};
    unsigned int const thread_id{threadIdx.x};
    unsigned int const warp_id{threadIdx.x / warpSize};
    unsigned int index{0U};

    switch (access_pattern)
    {
        case AccessPattern::OneAccessPerBlock:
            index = block_id % N;
            break;
        case AccessPattern::OneAccessPerWarp:
            index = warp_id % N;
            break;
        case AccessPattern::OneAccessPerThread:
            index = thread_id % N;
            break;
        case AccessPattern::PseudoRandom:
            index = (thread_id * magic_number) % N;
            break;
    }

    if (i < num_sums)
    {
        sums[i] = inputs[i] + const_values[index];
    }
}

void launch_add_constant_constant_memory(int* sums, int const* inputs,
                                         unsigned int num_sums,
                                         unsigned int block_size,
                                         AccessPattern access_pattern,
                                         cudaStream_t stream)
{
    add_constant_constant_memory<<<(num_sums + block_size - 1) / block_size,
                                   block_size, 0, stream>>>(
        sums, inputs, num_sums, access_pattern);
    CHECK_LAST_CUDA_ERROR();
}

void parse_args(int argc, char** argv, AccessPattern& access_pattern,
                unsigned int& block_size, unsigned int& num_sums)
{
    if (argc < 4)
    {
        std::cerr << "Usage: " << argv[0]
                  << " <access pattern> <block size> <number of sums>"
                  << std::endl;
        std::exit(EXIT_FAILURE);
    }

    std::string const access_pattern_str{argv[1]};
    if (access_pattern_str == "one_access_per_block")
    {
        access_pattern = AccessPattern::OneAccessPerBlock;
    }
    else if (access_pattern_str == "one_access_per_warp")
    {
        access_pattern = AccessPattern::OneAccessPerWarp;
    }
    else if (access_pattern_str == "one_access_per_thread")
    {
        access_pattern = AccessPattern::OneAccessPerThread;
    }
    else if (access_pattern_str == "pseudo_random")
    {
        access_pattern = AccessPattern::PseudoRandom;
    }
    else
    {
        std::cerr << "Invalid access pattern: " << access_pattern_str
                  << std::endl;
        std::exit(EXIT_FAILURE);
    }

    block_size = std::stoi(argv[2]);
    num_sums = std::stoi(argv[3]);
}

int main(int argc, char** argv)
{
    constexpr unsigned int num_warmups{100U};
    constexpr unsigned int num_repeats{100U};

    AccessPattern access_pattern{AccessPattern::OneAccessPerBlock};
    unsigned int block_size{1024U};
    unsigned int num_sums{12800000U};
    // Set access pattern, block size and number of sums from command line.
    parse_args(argc, argv, access_pattern, block_size, num_sums);

    cudaStream_t stream;
    CHECK_CUDA_ERROR(cudaStreamCreate(&stream));

    int h_values[N];
    // Initialize values on host memory.
    for (unsigned int i{0U}; i < N; ++i)
    {
        h_values[i] = i;
    }
    // Initialize values on global memory.
    int* d_values;
    CHECK_CUDA_ERROR(cudaMallocAsync(&d_values, N * sizeof(int), stream));
    CHECK_CUDA_ERROR(cudaMemcpyAsync(d_values, h_values, N * sizeof(int),
                                     cudaMemcpyHostToDevice, stream));
    // Initialize values on constant memory.
    CHECK_CUDA_ERROR(cudaMemcpyToSymbolAsync(const_values, h_values,
                                             N * sizeof(int), 0,
                                             cudaMemcpyHostToDevice, stream));

    std::vector<int> inputs(num_sums, 0);
    int* h_inputs{inputs.data()};
    int* d_inputs_for_constant;
    int* d_inputs_for_global;
    CHECK_CUDA_ERROR(cudaMallocAsync(&d_inputs_for_constant,
                                     num_sums * sizeof(int), stream));
    CHECK_CUDA_ERROR(
        cudaMallocAsync(&d_inputs_for_global, num_sums * sizeof(int), stream));
    CHECK_CUDA_ERROR(cudaMemcpyAsync(d_inputs_for_constant, h_inputs,
                                     num_sums * sizeof(int),
                                     cudaMemcpyHostToDevice, stream));
    CHECK_CUDA_ERROR(cudaMemcpyAsync(d_inputs_for_global, h_inputs,
                                     num_sums * sizeof(int),
                                     cudaMemcpyHostToDevice, stream));

    std::vector<int> reference_sums(num_sums, 0);
    std::vector<int> sums_from_constant(num_sums, 1);
    std::vector<int> sums_from_global(num_sums, 2);

    int* h_reference_sums{reference_sums.data()};
    int* h_sums_from_constant{sums_from_constant.data()};
    int* h_sums_from_global{sums_from_global.data()};

    int* d_sums_from_constant;
    int* d_sums_from_global;
    CHECK_CUDA_ERROR(cudaMallocAsync(&d_sums_from_constant,
                                     num_sums * sizeof(int), stream));
    CHECK_CUDA_ERROR(
        cudaMallocAsync(&d_sums_from_global, num_sums * sizeof(int), stream));

    // Synchronize.
    CHECK_CUDA_ERROR(cudaStreamSynchronize(stream));

    // Compute reference sums on CPU.
    add_constant_cpu(h_reference_sums, h_inputs, h_values, num_sums, N,
                     block_size, access_pattern);
    // Compute sums on GPU using global memory.
    launch_add_constant_global_memory(d_sums_from_global, d_inputs_for_global,
                                      d_values, num_sums, N, block_size,
                                      access_pattern, stream);
    // Compute sums on GPU using constant memory.
    launch_add_constant_constant_memory(d_sums_from_constant,
                                        d_inputs_for_constant, num_sums,
                                        block_size, access_pattern, stream);

    // Copy results from device to host.
    CHECK_CUDA_ERROR(cudaMemcpyAsync(h_sums_from_constant,
                                     d_sums_from_constant,
                                     num_sums * sizeof(int),
                                     cudaMemcpyDeviceToHost, stream));
    CHECK_CUDA_ERROR(cudaMemcpyAsync(h_sums_from_global, d_sums_from_global,
                                     num_sums * sizeof(int),
                                     cudaMemcpyDeviceToHost, stream));

    // Synchronize.
    CHECK_CUDA_ERROR(cudaStreamSynchronize(stream));

    // Verify results.
    for (unsigned int i{0U}; i < num_sums; ++i)
    {
        if (h_reference_sums[i] != h_sums_from_constant[i])
        {
            std::cerr << "Error at index " << i << " for constant memory."
                      << std::endl;
            std::exit(EXIT_FAILURE);
        }
        if (h_reference_sums[i] != h_sums_from_global[i])
        {
            std::cerr << "Error at index " << i << " for global memory."
                      << std::endl;
            std::exit(EXIT_FAILURE);
        }
    }

    // Measure performance.
    std::function<void(cudaStream_t)> bound_function_constant_memory{
        std::bind(launch_add_constant_constant_memory, d_sums_from_constant,
                  d_inputs_for_constant, num_sums, block_size, access_pattern,
                  std::placeholders::_1)};
    std::function<void(cudaStream_t)> bound_function_global_memory{
        std::bind(launch_add_constant_global_memory, d_sums_from_global,
                  d_inputs_for_global, d_values, num_sums, N, block_size,
                  access_pattern, std::placeholders::_1)};
    float const latency_constant_memory{measure_performance(
        bound_function_constant_memory, stream, num_repeats, num_warmups)};
    float const latency_global_memory{measure_performance(
        bound_function_global_memory, stream, num_repeats, num_warmups)};
    std::cout << "Latency for Add using constant memory: "
              << latency_constant_memory << " ms" << std::endl;
    std::cout << "Latency for Add using global memory: "
              << latency_global_memory << " ms" << std::endl;

    CHECK_CUDA_ERROR(cudaStreamDestroy(stream));
    CHECK_CUDA_ERROR(cudaFree(d_values));
    CHECK_CUDA_ERROR(cudaFree(d_inputs_for_constant));
    CHECK_CUDA_ERROR(cudaFree(d_inputs_for_global));
    CHECK_CUDA_ERROR(cudaFree(d_sums_from_constant));
    CHECK_CUDA_ERROR(cudaFree(d_sums_from_global));

    return 0;
}

The program was compiled and executed on an NVIDIA RTX 3090 GPU.

$ nvcc add_constant.cu -o add_constant

We first perform 12800000 adds using 1024 threads per block.

$ ./add_constant one_access_per_block 1024 12800000
Latency for Add using constant memory: 0.151798 ms
Latency for Add using global memory: 0.171404 ms
$ ./add_constant one_access_per_warp 1024 12800000
Latency for Add using constant memory: 0.164012 ms
Latency for Add using global memory: 0.189501 ms
$ ./add_constant one_access_per_thread 1024 12800000
Latency for Add using constant memory: 0.281967 ms
Latency for Add using global memory: 0.164649 ms
$ ./add_constant pseudo_random 1024 12800000
Latency for Add using constant memory: 1.2925 ms
Latency for Add using global memory: 0.159621 ms

We then perform 128000 adds using 1024 threads per block.

$ ./add_constant one_access_per_block 1024 128000
Latency for Add using constant memory: 0.00289792 ms
Latency for Add using global memory: 0.00323584 ms
$ ./add_constant one_access_per_warp 1024 128000
Latency for Add using constant memory: 0.00315392 ms
Latency for Add using global memory: 0.00359392 ms
$ ./add_constant one_access_per_thread 1024 128000
Latency for Add using constant memory: 0.00596992 ms
Latency for Add using global memory: 0.00383264 ms
$ ./add_constant pseudo_random 1024 128000
Latency for Add using constant memory: 0.0215347 ms
Latency for Add using global memory: 0.00482304 ms

In both cases, accessing constant memory is roughly 10% faster than accessing global memory when there is one access per block or one access per warp. With one access per thread, accessing constant memory is substantially slower than accessing global memory (~70% slower for the larger problem size). With pseudo-random access, it is several times slower (~8x for the larger problem size), because reads to distinct addresses within a warp are serialized by the constant cache.

Conclusions

To use constant memory effectively, it's important to know the access pattern in advance. If the access pattern is one access per block or one access per warp, as is typical for broadcast, then constant memory is a good choice. If the access pattern is one access per thread or pseudo-random, then constant memory is a very poor choice.
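
As a concrete illustration of the broadcast pattern, consider a 1D convolution whose filter taps reside in constant memory. At each step of the inner loop, every thread of a warp reads the same tap at the same time, which is exactly the uniform access that the constant cache serves at near-register speed. This is a sketch with made-up names (kFilter, convolve_1d) and an assumed filter size, not code from the benchmark above.

constexpr unsigned int kFilterSize{9U};
__constant__ float kFilter[kFilterSize];

__global__ void convolve_1d(float* output, float const* input, unsigned int n)
{
    unsigned int const i{blockIdx.x * blockDim.x + threadIdx.x};
    if (i >= n)
    {
        return;
    }
    float sum{0.0f};
    for (unsigned int j{0U}; j < kFilterSize; ++j)
    {
        // kFilter[j] is the same address for every thread in the warp at
        // each iteration, so the read is a constant cache broadcast.
        unsigned int const k{i + j};
        sum += kFilter[j] * (k < n ? input[k] : 0.0f);
    }
    output[i] = sum;
}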
