Hello,
Previous question: Is there a way to know how much GPU memory OptiX will use?
When the number of meshes is small it runs, but with many meshes it does not, so I suspected a GPU memory problem.
However, it did not run on a GTX 960 4GB graphics card,
yet it did run on a GTX 1050 2GB graphics card.
I call optixLaunch(…, width, height, 1) inside a for loop, where width = mesh count.
"height" is chosen so that (width * height) uses about 30% of the available GPU memory (and stays at or below 2^30); the for-loop index increment is adjusted accordingly.
Under these settings, the GTX 960 4GB (width (= mesh count): 138056, height: 7777) did not run.
When "height" was set to 1, it did work on the GTX 960 4GB.
I suspect an index-access issue when writing the results into the buffer, but I don't understand why the same code works on the GTX 1050 but not on the GTX 960.
Are there restrictions on the launch size of optixLaunch that differ per graphics card?
==============================================
This is part of the source code.
o main.cpp
// dimensionH is set so that (width * height) is about 30% (at most 2^30) of available GPU memory.
for (unsigned int i = 0; i < meshesCnt; i += dimensionH)
{
    const unsigned int width  = meshesCnt;
    const unsigned int height = (meshesCnt - i < dimensionH) ? (meshesCnt - i) : dimensionH; // last block
    cudaMemset(params.Visibility, 0, blockSize * sizeof(char)); // blockSize = width * dimensionH
    cudaMemcpy(… startIndex …);
    optixLaunch(…, width, height, 1);
    CUDA_SYNC_CHECK();
}
o ray_gen()
{
    …
    …
    const unsigned int resultIndex = (launch_index.y * width) + launch_index.x;
    params.Visibility[resultIndex] = hit_or_miss; // 0 or 1 depending on the ray result
}
==============================================
Error output when running on the GTX 960 4GB
(with options.validationMode = OPTIX_DEVICE_CONTEXT_VALIDATION_MODE_ALL set, Debug x64 build):
[ 2][ ERROR]: Error recording event to prevent concurrent launches on the same OptixPipeline (CUDA error string: unknown error, CUDA error code: 999)
Error recording resource event on user stream (CUDA error string: unknown error, CUDA error code: 999)
Caught exception: OPTIX_ERROR_CUDA_ERROR: Optix call 'optixLaunch( scene.pipeline(), 0, reinterpret_cast(d_paramsOpX), sizeof(opx::LaunchParamsOpX), scene.sbt(), width, height, 1 )' failed:
[ 2][ PIPELINE]: Error releasing namedConstant's internal resources (CUDA error string: unknown error, CUDA error code: 999)
Error synching on OptixPipeline event (CUDA error string: unknown error, CUDA error code: 999)
Error destroying OptixPipeline event (CUDA error string: unknown error, CUDA error code: 999)
==============================================
Thank you