Run TLT models with DeepStream on GPU, not DLA

Hi everyone,
I want to know how I can run TLT models on the GPU only. When I run any of the models such as peoplenet/trafficcam/detectnet_v2, …, some layers automatically run on the DLA and some on the GPU, but I want to specify which accelerator runs the model: GPU, DLA0, or DLA1.

I used the enable-dla and use-dla-core options in the [property] group, but it makes no difference; in every case it switches to DLA+GPU.
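For reference, this is roughly the [property] group I am using — a minimal sketch with placeholder paths and key, not my full config:

[property]
gpu-id=0
# TLT/etlt model inputs (placeholder values)
tlt-encoded-model=/path/to/model.etlt
tlt-model-key=<your-key>
# DLA selection I tried:
enable-dla=1
use-dla-core=0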

Hi,

Please use DeepStream for TLT model format support.

Please note that the DLA is a hardware inference engine.
It has pre-defined operation support and limited capacity.
For operations that cannot fit into the DLA, TensorRT will fall back to the GPU for inference instead.
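As a sketch, accelerator selection happens in the nvinfer config's [property] group via the enable-dla and use-dla-core keys (this assumes a device with two DLA cores, e.g. Xavier):

# GPU only: leave DLA disabled
enable-dla=0
gpu-id=0

# DLA core 0 (layers the DLA cannot run still fall back to GPU)
enable-dla=1
use-dla-core=0

# DLA core 1
enable-dla=1
use-dla-core=1

In other words, enable-dla=0 should give you GPU-only inference. With enable-dla=1 you can pick which DLA core to use, but any layer the DLA does not support will always fall back to the GPU; there is no way to force those layers onto the DLA.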

Here is the DLA support matrix for your reference:
https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#dla_layers

Thanks.