I don’t witness any such behavior if you allow the application to exit.
I have an NVS 310 (which has 512MB of memory instead of the 1GB on the NVS 315), running CUDA 7 on CentOS 6.2:
$ /usr/local/cuda/samples/bin/x86_64/linux/release/deviceQuery
/usr/local/cuda/samples/bin/x86_64/linux/release/deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "NVS 310"
CUDA Driver Version / Runtime Version 7.0 / 7.0
CUDA Capability Major/Minor version number: 2.1
Total amount of global memory: 512 MBytes (536543232 bytes)
( 1) Multiprocessors, ( 48) CUDA Cores/MP: 48 CUDA Cores
GPU Max Clock rate: 1046 MHz (1.05 GHz)
Memory Clock rate: 875 Mhz
Memory Bus Width: 64-bit
L2 Cache Size: 65536 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536, 65535), 3D=(2048, 2048, 2048)
Maximum Layered 1D Texture Size, (num) layers 1D=(16384), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(16384, 16384), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 32768
Warp size: 32
Maximum number of threads per multiprocessor: 1536
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (65535, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 1 copy engine(s)
Run time limit on kernels: No
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 4 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 7.0, CUDA Runtime Version = 7.0, NumDevs = 1, Device0 = NVS 310
Result = PASS
$
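As an aside, the same key numbers can be pulled into your own code with cudaGetDeviceProperties; here is a minimal sketch (the file name is chosen only for illustration):
// props.cu - illustrative sketch only; compile with: nvcc -o props props.cu
#include <stdio.h>
#include <cuda_runtime.h>

int main(){
    cudaDeviceProp prop;
    cudaError_t err = cudaGetDeviceProperties(&prop, 0);   // query device 0
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceProperties failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("Device 0: %s\n", prop.name);
    printf("Compute capability: %d.%d\n", prop.major, prop.minor);
    printf("Total global memory: %zu bytes\n", prop.totalGlobalMem);
    return 0;
}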
I created the following program according to your description:
#include <stdio.h>
#include <stdlib.h>

#define DSIZE_MAX 100000000

#define cudaCheckErrors(msg) \
    do { \
        cudaError_t __err = cudaGetLastError(); \
        if (__err != cudaSuccess) { \
            fprintf(stderr, "Fatal error: %s (%s at %s:%d)\n", \
                msg, cudaGetErrorString(__err), \
                __FILE__, __LINE__); \
            fprintf(stderr, "*** FAILED - ABORTING\n"); \
            exit(1); \
        } \
    } while (0)

int main(int argc, char *argv[]){
    if (argc < 2) {printf("must specify allocation size on command line\n"); return 1;}
    const int dsize = atoi(argv[1]);
    if ((dsize < 1)||(dsize > DSIZE_MAX)) {printf("invalid size %d\n", dsize); return 1;}
    int *data;
    cudaMalloc(&data, dsize*sizeof(int));
    cudaCheckErrors("cudaMalloc fail");
    cudaFree(data);
    cudaCheckErrors("cudaFree fail");
    return 0;
}
I compiled with:
nvcc -o t906 t906.cu
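If you want to observe the allocate/free behavior from inside the program as well, a simple variation of the above (just a sketch; the t906_info.cu file name is made up, and error checking is omitted for brevity) could report free device memory around the cudaMalloc/cudaFree calls using cudaMemGetInfo:
// t906_info.cu - hypothetical variant of the test case, sketch only
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>
#define DSIZE_MAX 100000000

static void report(const char *when){
    size_t free_b = 0, total_b = 0;
    cudaMemGetInfo(&free_b, &total_b);    // current free/total device memory in bytes
    printf("%s: free = %zu of %zu bytes\n", when, free_b, total_b);
}

int main(int argc, char *argv[]){
    if (argc < 2) {printf("must specify allocation size on command line\n"); return 1;}
    const int dsize = atoi(argv[1]);
    if ((dsize < 1)||(dsize > DSIZE_MAX)) {printf("invalid size %d\n", dsize); return 1;}
    int *data;
    report("before cudaMalloc");
    cudaMalloc(&data, dsize*sizeof(int));
    report("after cudaMalloc");           // free memory should drop by roughly dsize*sizeof(int)
    cudaFree(data);
    report("after cudaFree");             // free memory should return to roughly the starting value
    return 0;
}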
I can run it “many” times without any issues:
$ ./t906 20000000
$ ./t906 20000000
$ ./t906 20000000
$ ./t906 20000000
$ ./t906 20000000
$ ./t906 20000000
$ ./t906 20000000
$ ./t906 20000000
$ ./t906 20000000
$ ./t906 20000000
$ ./t906 20000000
$ ./t906 20000000
$ ./t906 20000000
$ ./t906 20000000
$ ./t906 20000000
$ ./t906 20000000
$ ./t906 20000000
$ ./t906 20000000
$ ./t906 20000000
$ ./t906 20000000
$ ./t906 20000000
and at the completion of the above, nvidia-smi still reports essentially no device memory in use (3MiB of 511MiB):
$ nvidia-smi
Wed Nov  4 10:31:35 2015
+------------------------------------------------------+
| NVIDIA-SMI 346.46     Driver Version: 346.46         |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  NVS 310             On   | 0000:04:00.0     N/A |                  N/A |
| 30%   38C    P8    N/A /  N/A |      3MiB /   511MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0            C   Not Supported                                           |
+-----------------------------------------------------------------------------+
$
Note that the thread you linked for the vector add app acknowledges that the problem does not occur if you allow the application to exit (or use cudaDeviceReset()). The problem being discussed there is likely some sort of memory fragmentation issue during application runtime. When an application exits properly, all of its resources are freed, and there should be no residual fragmentation issues. And I don’t witness any with the test case I built according to your description.
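For completeness, if you ever do need to guarantee that everything a process has allocated is released before it exits (or between phases of a long-running test), calling cudaDeviceReset() destroys the context and releases that process's device resources. A minimal sketch of where such a call would go (illustrative only, not taken from the linked thread):
// reset_example.cu - illustrative sketch only
#include <cuda_runtime.h>

int main(){
    int *data;
    cudaMalloc(&data, 1048576 * sizeof(int));   // allocate ~4MB of device memory
    // ... kernels using data would run here ...
    cudaFree(data);                             // explicit free
    cudaDeviceReset();                          // tear down the context; all device resources
                                                // owned by this process are released
    return 0;
}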