How can I use createExecutionContextWithoutDeviceMemory()/IExecutionContext::setDeviceMemory()

Hi all,

I am asking how to use the two APIs "createExecutionContextWithoutDeviceMemory()" and "IExecutionContext::setDeviceMemory()" on Xavier.

I wrote the following code to use "createExecutionContextWithoutDeviceMemory()", but it doesn't work.

  • Try #1
	IGpuAllocator* allocator;
	void* memory;
	...
	cudaSetDevice(0);
	runtime->setGpuAllocator(allocator);
	uint64_t memory_size = engine->getDeviceMemorySize();
	uint64_t alignment = 512;
	memory = allocator->allocate(memory_size, alignment, 0);
	context->setDeviceMemory(memory);
	...

However, I got a segmentation fault on the line 'memory = allocator->allocate(memory_size, alignment, 0);'.

==========================================

(I got the value of alignment from cudaGetDeviceProperties(); it equals the texture memory alignment.)
(As a side note, the d_context above is supposed to run several inferences on the DLA.)

==========================================


The TensorRT documentation says (here):
"The memory must be aligned with cuda memory alignment property (using cudaGetDeviceProperties()), and its size must be at least that returned by getDeviceMemorySize(). Setting memory to nullptr is acceptable if getDeviceMemorySize() returns 0."

However, I couldn't fully understand what this means, or how to set device memory on the context.

Could you show me some example code for allocating device memory for the context?

Any other help would also be very appreciated.

yjkim.

P.S. I am working with the sample code TensorRT_sample.zip from here.

Hi,

Based on the discussion below, could you try calling cudaSetDevice() first?

https://github.com/NVIDIA/TensorRT/issues/219#issuecomment-559249117
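My reading of that issue is that the device should be selected before the runtime is created and before the allocator is registered, not just before the allocation itself. Roughly like this (a sketch, not tested on Xavier; `logger` and `allocator` are assumed to be defined elsewhere, as in your snippet):

```cpp
// Sketch based on the linked issue: make cudaSetDevice() the first
// CUDA call, before creating the runtime or registering the allocator.
cudaSetDevice(0);                         // select the device first

nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
runtime->setGpuAllocator(allocator);      // allocator after the device is set
// ... deserialize the engine and create the execution context as before ...
```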

Thanks.


Hi, @AastaLLL .

Thanks for the reply.

I solved the problem by using cudaMalloc():

	cudaMalloc(&memory, engine->getDeviceMemorySize());
	context->setDeviceMemory(memory);
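For anyone landing here later, here is a fuller sketch of the flow that worked for me (untested as written; it assumes a TensorRT 7-era API and a serialized engine at the placeholder path "plan.engine"). cudaMalloc() returns memory aligned to at least 256 bytes, which satisfies the alignment requirement quoted from the documentation above:

```cpp
#include <cuda_runtime_api.h>
#include <NvInfer.h>

#include <fstream>
#include <iostream>
#include <vector>

// Minimal logger required by createInferRuntime().
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity <= Severity::kWARNING) std::cerr << msg << std::endl;
    }
};

int main() {
    cudaSetDevice(0);  // select the device before any other CUDA/TensorRT call

    // Load a serialized engine ("plan.engine" is a placeholder path).
    std::ifstream file("plan.engine", std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());

    Logger logger;
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(blob.data(), blob.size(), nullptr);

    // Create a context that owns no scratch memory, then attach our own.
    nvinfer1::IExecutionContext* context =
        engine->createExecutionContextWithoutDeviceMemory();

    void* memory = nullptr;
    cudaMalloc(&memory, engine->getDeviceMemorySize());
    context->setDeviceMemory(memory);

    // ... enqueue inferences here; `memory` must stay valid until they finish ...

    cudaFree(memory);
    context->destroy();
    engine->destroy();
    runtime->destroy();
    return 0;
}
```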

Thanks.

yjkim