4.4 Example: Vector Addition

Let’s examine a relatively straightforward example that enables us to visualize the CUDA programming model and how to think when we have many cores. In linear algebra, vector addition is a commonly used operation, where each vector can be represented by an array of values.

Figure 4-7 shows how vector addition on a GPU device works for the code you will see below (we show a small number of elements, but the real code will use much larger arrays). It also shows how to think about completing the work in parallel: each thread performs a single addition operation and computes a single new value for the result array. The green lines represent the threads of computation running simultaneously in parallel.

Using the CUDA programming model, we can request as many threads as we need to complete this type of task on arrays much larger than the one depicted here. The system takes care of doing as much as possible in parallel with the available cores on your card, using the number of blocks you have set up in the grid, and then reassigns the cores to complete the rest; we don’t need to manage any of that ourselves.

Figure 4-7: Each thread working on one array element

In this and the next two sections we will examine three versions of this code.

First let’s describe in words what the first version of the CUDA code needs to do. Here are the steps, which match comments placed in the code found below.

  1. Allocate 2 arrays, x and y, on the host and initialize them with data.

  2. Allocate 2 arrays d_x, and d_y, on the GPU device.

  3. Copy each of the arrays from the host to the device.

  4. Perform the addition on the device using a kernel function, keeping the result in d_y.

  5. Ensure that the device is finished with the computation.

  6. Copy the resulting array d_y from the device to the host, back into array y.

  7. Check that the computation ran correctly.

  8. Free memory on the device and the host.

New indispensable additions to the code

In a previous section (near the end of 4.2), we mentioned that it is useful to check for errors that could be introduced into the code. This is true both for CUDA library calls used in the host code and for functions you write for the device. CUDA provides ways for you to examine errors that have occurred so that you can debug them. Observe the new macro provided at the top of the following example program file, then look further on in main to see how it is used to check for problems in code running on the device and report information about them.
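
The pattern, taken from main in the code below, is to follow one or more CUDA calls with the macro and a short descriptive message:

cudaMalloc(&d_x, bytes);
cudaMalloc(&d_y, bytes);
cudaCheckErrors("allocate device memory");

If any preceding CUDA call failed, the macro prints your message along with CUDA’s own error string, plus the file and line number, and then aborts the program.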

It is always useful to be able to experiment with your application by changing the sizes of various attributes to observe what happens. In C we can do this with command line arguments. For this code, we have introduced a function called getArguments() that lets the user change certain parameters; we will see it in later examples as well. In this case, the block size (the number of threads per block) can be changed, and the number of blocks needed in the grid is computed inside the code (look for that below).

When developing our code, we also need mechanisms to ensure that the computation is correct. In this case, since we know we placed 1.0 in each element of array x (copied to d_x) and 2.0 in each element of array y (copied to d_y), we can check that every element of the result contains the value 3.0. This is done in the function called verifyCorrect() in the code below.

The complete example

Here is the complete code so you can study it. Instructions for building and running it on your own machine appear after the code.

Filename: 2-vectorAdd/vectorAdd.cu

Vector Addition CUDA Program (version 1)
  1//
  2// Demonstration using a single 1D grid and 1D block size
  3//
  4/*
  5 * Example of vector addition :
  6 * Array of floats x is added to array of floats y and the 
  7 * result is placed back in y
  8 *
  9 */
 10#include <math.h>   // ceil function
 11#include <stdio.h>  // printf
 12#include <iostream> // alternative cout print for illustration
 13
 14#include <cuda.h>
 15
 16void initialize(float *x, float *y, int N);
 17void verifyCorrect(float *y, int N);
 18void getArguments(int argc, char **argv, int *blockSize);
 19
 20///////
 21// error checking macro taken from Oakridge Nat'l lab training code:
 22// https://github.com/olcf/cuda-training-series
 23////////
 24#define cudaCheckErrors(msg) \
 25    do { \
 26        cudaError_t __err = cudaGetLastError(); \
 27        if (__err != cudaSuccess) { \
 28            fprintf(stderr, "Fatal error: %s (%s at %s:%d)\n", \
 29                msg, cudaGetErrorString(__err), \
 30                __FILE__, __LINE__); \
 31            fprintf(stderr, "*** FAILED - ABORTING\n"); \
 32            exit(1); \
 33        } \
 34    } while (0)
 35
 36
 37// Kernel function based on 1D grid of 1D blocks of threads
 38// In this version, thread number is:
 39//  its block number in the grid (blockIdx.x) times 
 40// the threads per block plus which thread it is in that block.
 41//
 42// This thread id is then the index into the 1D array of floats.
 43// This represents the simplest type of mapping:
 44// Each thread takes care of one element of the result
 45__global__ void vecAdd(float *x, float *y, int n)
 46{
 47    // Get our global thread ID designed to be an array index
 48    int id = (blockIdx.x * blockDim.x) + threadIdx.x;
 49 
 50    // Make sure we do not go out of bounds;
 51    // Threads allocated could be larger than array length
 52    if (id < n)
 53        y[id] = x[id] + y[id];
 54}
 55
 56////////////////////                   main
 57int main(int argc, char **argv)
 58{
 59  printf("Vector addition by managing memory ourselves.\n");
 60  // Set up size of arrays for vectors 
 61  int N = 32*1048576;
 62  // TODO: try changing the size of the arrays by doubling or
 63  //       halving (32 becomes 64 or 16). Note how the grid size changes.
 64  printf("size (N) of 1D arrays are: %d\n\n", N);
 65
 66  // host vectors, which are arrays of length N
 67  float *x, *y;
 68
 69  // Size, in bytes, of each vector
 70  size_t bytes = N*sizeof(float);
 71
 72  // 1.1 Allocate memory for each vector on host
 73  x = (float*)malloc(bytes);
 74  y = (float*)malloc(bytes);
 75
 76  // 1.2 initialize x and y arrays on the host
 77  initialize(x, y, N);  // set values in each vector
 78
 79   // device array storage
 80  float *d_x;
 81  float *d_y;
 82
 83  printf("allocate vectors and copy to device\n");
 84
 85  // 2. Allocate memory for each vector on GPU device
 86  cudaMalloc(&d_x, bytes);
 87  cudaMalloc(&d_y, bytes);
 88  cudaCheckErrors("allocate device memory");
 89
 90  // 3. Copy host vectors to device
 91  cudaMemcpy( d_x, x, bytes, cudaMemcpyHostToDevice);
 92  cudaMemcpy( d_y, y, bytes, cudaMemcpyHostToDevice);
 93  cudaCheckErrors("mem copy to device");
 94
 95  // Default number of threads in each thread block
 96  int blockSize = 256;
 97  getArguments(argc, argv, &blockSize); //update blocksize from cmd line
 98 
 99  // Number of thread blocks in grid needs to be based on array size
100  // and block size
101  int gridSize = (int)ceil((float)N/blockSize);
102 
103  printf("add vectors on device using grid with ");
104  printf("%d blocks of %d threads each.\n", gridSize, blockSize);
105  // 4. Execute the kernel
106  vecAdd<<<gridSize, blockSize>>>(d_x, d_y, N);
107  cudaCheckErrors("vecAdd kernel call");
108
109  // 5. Ensure that device is finished
110  cudaDeviceSynchronize();
111  cudaCheckErrors("Failure to synchronize device");
112
113  // 6. Copy array back to host (use original y for this)
114  // Note that essentially the device gets synchronized
115  // before this is performed.
116  cudaMemcpy( y, d_y, bytes, cudaMemcpyDeviceToHost);
117  cudaCheckErrors("mem copy device to host");
118
119  // 7. Check that the computation ran correctly
120  verifyCorrect(y, N); 
121
122  printf("execution complete\n");
123
124  // 8.1 Free device memory
125  cudaFree(d_x);
126  cudaFree(d_y);
127  cudaCheckErrors("free cuda memory");
128
129  // 8.2 Release host memory
130  free(x);
131  free(y);
132
133  return 0;
134}
135///////////////////////// end main
136
137///////////////////////////////// helper functions
138
139// To initialize or reset the arrays for each trial
140void initialize(float *x, float *y, int N) {
141  // initialize x and y arrays on the host
142  for (int i = 0; i < N; i++) {
143    x[i] = 1.0f;
144    y[i] = 2.0f;
145  }
146}
147
148// check whether the kernel functions worked as expected
149void verifyCorrect(float *y, int N) {
150  // Check for errors (all values should be 3.0f)
151  float maxError = 0.0f;
152  for (int i = 0; i < N; i++)
153    maxError = fmaxf(maxError, fabs(y[i]-3.0f));
154  std::cout << "Max error: " << maxError << std::endl;
155}
156
157// simple argument gather for this simple example program
158void getArguments(int argc, char **argv, int *blockSize) {
159
160  if (argc == 2) {
161    *blockSize = atoi(argv[1]);
162  }
163
164}

This code is designed to use a default block size of 256 threads and lets you change it with a single command line argument; find how this is done in the code above. Block sizes of 128, 256, or 512 are typical for many NVIDIA architectures, in part because most of today’s NVIDIA cards have 128 cores per streaming multiprocessor (SM); we will examine how block size may change performance later. We leave it as an exercise for you to try different block sizes, as shown below.
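
For example, once you have built the executable as described later in this section, these commands run the program with block sizes of 128 and 512 instead of the default 256:

./vectorAdd 128
./vectorAdd 512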

Use array size and block size to set grid size

Note line 101 above, repeated here:

  int gridSize = (int)ceil((float)N/blockSize);

Since we often want to vary the size of our arrays, this line illustrates the best practice for making that possible. Given a particular block size and number of array elements, it computes how many thread blocks the 1D grid needs so that there is one thread for each data element.
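
To make the arithmetic concrete, here is the computation for the default array size, along with a hypothetical smaller size that does not divide evenly by the block size:

N = 33554432, blockSize = 256:  gridSize = ceil(33554432/256) = 131072
N = 1000, blockSize = 256:  gridSize = ceil(1000/256) = 4

In the second case, 4 * 256 = 1024 threads are launched, so the last 24 threads have an id of N or larger; the bounds check in the kernel keeps them from writing past the end of the arrays.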

Obtaining an array index for a thread

In the previous section, we provided this function to compute a unique value for each thread, beginning with 0, to use as an index into an array. It looks like this:

Function to obtain array index using information about 1D grid of 1D blocks
// Given a 1 dimensional grid of blocks of threads, 
// determine my thread number.
// This is run on the GPU on each thread.
__device__ int find1DThreadNumber() {
  // illustrative variable names
  int threadsPerBlock_horizontal = blockDim.x;
  int gridBlockNumber = blockIdx.x;

  int threadNumber;
  threadNumber = (gridBlockNumber * threadsPerBlock_horizontal) + threadIdx.x;
  return threadNumber;
}
In the vectorAdd program above, we use this same concept in the vecAdd kernel function, repeated here:

Vector Addition kernel function
__global__ void vecAdd(float *x, float *y, int n)
{
    // Get our global thread ID designed to be an array index
    int id = (blockIdx.x * blockDim.x) + threadIdx.x;
 
    // Make sure we do not go out of bounds;
    // Threads allocated could be larger than array length
    if (id < n)
        y[id] = x[id] + y[id];
}

Convince yourself that both of these compute array indexes, and that in the case of the vecAdd kernel function each thread completes one addition operation and populates one element of the result array.
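
To make this concrete, consider one hypothetical thread from the launch above, where blockDim.x is 256:

blockIdx.x = 2, threadIdx.x = 5:  id = (2 * 256) + 5 = 517

That thread alone computes y[517] = x[517] + y[517].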

Note

An important check is being done in the above vecAdd function. We should always check that a thread id to be used as an index into an array does not point outside of the bounds of the array before we attempt the computation.


Build and run on your machine

You can use the make command on your own machine or compile the code like this:

nvcc -arch=native  -o vectorAdd vectorAdd.cu

Remember that you will need to use a different -arch flag if native does not work for you. (See note at end of section 4.1.)
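
For example, if your GPU were an Ampere-class card with compute capability 8.6 (an assumption about your hardware; substitute the value for your card), you could compile like this:

nvcc -arch=sm_86 -o vectorAdd vectorAdd.cu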

You can then run the code like this:

./vectorAdd
Vector addition by managing memory ourselves.
size (N) of each 1D array is: 33554432

allocate vectors and copy to device
add vectors on device using grid with 131072 blocks of 256 threads each.
Max error: 0
execution complete

Exercises

4.4-1. To be certain that the code is still correct, try using different block sizes of 128 and 512 by changing the command line argument.

4.4-2. If you are ambitious, try extending the command line argument handling to accept a length for the arrays (one possible sketch appears below). Or, more simply, change the size of the array, N, in the code, as shown early in the main function. This is useful to verify that the code is still correct; later we will see that it is also useful for trying different problem sizes to observe how the code performs.
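
One possible sketch for exercise 4.4-2, assuming the array length is passed as an optional second argument (this extension is not in the original code, so the usage and signature are illustrative):

// hypothetical usage: ./vectorAdd [blockSize] [N]
void getArguments(int argc, char **argv, int *blockSize, int *N) {
  if (argc > 1) {
    *blockSize = atoi(argv[1]);  // first argument: threads per block
  }
  if (argc > 2) {
    *N = atoi(argv[2]);          // optional second argument: array length
  }
}

You would also need to move the call to getArguments() in main to before the host arrays are allocated, since the allocation sizes depend on N.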
