
Seems like there are a lot of questions on here about moving 2D arrays of double (or int, float, etc.) from host to device. This is NOT my question.

I have already moved all of the data onto the GPU, and the __global__ kernel calls several __device__ functions.

In these __device__ functions, I have tried the following:

To allocate:

__device__ double** matrixCreate(int rows, int cols, double initialValue)
{
    // Allocate an array of row pointers, then each row
    double** temp = (double**)malloc(rows * sizeof(double*));
    for (int j = 0; j < rows; j++) { temp[j] = (double*)malloc(cols * sizeof(double)); }

    // Set initial values
    for (int i = 0; i < rows; i++)
    {
        for (int j = 0; j < cols; j++)
        {
            temp[i][j] = initialValue;
        }
    }

    return temp;
}

To deallocate:

__device__ void matrixDestroy(double** temp, int rows)
{
    // Free each row, then the array of row pointers
    for (int j = 0; j < rows; j++) { free(temp[j]); }
    free(temp);
}

For single-dimension arrays the __device__ mallocs work great, but I can't seem to keep things stable in the multidimensional case. By the way, the variables are sometimes used like this:

double** z=matrixCreate(2,2,0);
double* x=z[0];

However, care is always taken to ensure no calls to free are made on active data. The code is actually an adaptation of CPU-only code, so I know nothing funny is going on with the pointers or memory. Basically I'm just redefining the allocators and putting __device__ on the serial portions. I just want to run the whole serial bit 10000 times, and the GPU seems like a good way to do it.

++++++++++++++ UPDATE ++++++++++++++

Problem solved by Vyas. Per the CUDA documentation, the device heap size is initially set to 8 MB; if your mallocs exceed this, NSIGHT will not launch and the kernel crashes. Use the following in host code:

// Grow the device-side malloc heap (default 8 MB) before launching the kernel
float increaseHeap = 10;
cudaDeviceSetLimit(cudaLimitMallocHeapSize, size[0] * increaseHeap);

Worked for me!

  • The malloc and free are OK, assuming you included <stdlib.h>, and you check that each malloc doesn't return NULL. Perhaps there is memory corruption happening elsewhere in your code that didn't trigger any visible effect in the CPU version. Can you describe your symptoms better and/or post a testcase that reproduces a failure?
    – M.M
    Commented Apr 24, 2014 at 4:57
  • @MattMcNabb let's hope very strongly this is C. malloc() in C++ is harmful. Commented Apr 24, 2014 at 4:58
  • Did you try running your code through cuda-memcheck? Also did you try adding checks on the returned pointers from malloc? The GPU side malloc() is a suballocator from a limited heap. Depending on the number of allocations, it is possible the heap is being exhausted. You can change the size of the backing heap using cudaDeviceSetLimit(cudaLimitMallocHeapSize, size_t size). For more info see: link
    – Vyas
    Commented Apr 24, 2014 at 5:04
  • If it runs fine for the single-dimensional case, have you considered linearising the matrix and addressing it as i + j*M? That way whenever you deal with memory you can just treat it as a single block of memory. (A sketch of this approach follows the comments below.)
    – Cramer
    Commented Apr 24, 2014 at 5:09
  • @Vyas: Would you like to add a short answer to get this off the unanswered list?
    – talonmies
    Commented Apr 24, 2014 at 5:50
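
As a side note on the linearisation suggestion above, a minimal sketch of a flattened device-side matrix might look like the following. The names matrixCreateFlat/matrixDestroyFlat and the row-major i*cols + j indexing are illustrative choices, not code from the question:

__device__ double* matrixCreateFlat(int rows, int cols, double initialValue)
{
    // One allocation instead of rows+1; element (i,j) lives at temp[i*cols + j]
    double* temp = (double*)malloc(rows * cols * sizeof(double));
    if (temp == NULL) { return NULL; }   // device heap may be exhausted

    for (int k = 0; k < rows * cols; k++) { temp[k] = initialValue; }
    return temp;
}

__device__ void matrixDestroyFlat(double* temp)
{
    free(temp);   // a single free matches the single malloc
}

With this layout, z[i][j] becomes z[i*cols + j], and a pointer to row i is simply z + i*cols, so the double* x = z[0]; idiom from the question still has an equivalent.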

1 Answer


The GPU side malloc() is a suballocator from a limited heap. Depending on the number of allocations, it is possible the heap is being exhausted. You can change the size of the backing heap using cudaDeviceSetLimit(cudaLimitMallocHeapSize, size_t size). For more info see: the CUDA programming guide.
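
For completeness, a minimal host-side sketch of raising the heap limit before the kernel launch; the 64 MB figure and the error checking are illustrative assumptions, not part of the answer:

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    // Raise the device-side malloc heap from the default 8 MB. This must be
    // done before launching any kernel that uses device-side malloc().
    size_t heapBytes = 64 * 1024 * 1024;   // illustrative size; pick one that fits your allocations
    cudaError_t err = cudaDeviceSetLimit(cudaLimitMallocHeapSize, heapBytes);
    if (err != cudaSuccess)
    {
        printf("cudaDeviceSetLimit failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    size_t actual = 0;
    cudaDeviceGetLimit(&actual, cudaLimitMallocHeapSize);   // confirm what was granted
    printf("Device malloc heap is now %zu bytes\n", actual);

    // ... launch the kernel that uses matrixCreate()/matrixDestroy() here ...
    return 0;
}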
