allowGPUFallback() in TensorRT 8.0.1

• Hardware Platform (Jetson / GPU): RTX 3090
• DeepStream Version 6.0
• TensorRT Version 8.0.1
• NVIDIA GPU Driver Version (valid for GPU only) 495.29.05
• Issue Type( questions, new requirements, bugs) question

According to the docs, the allowGPUFallback function was removed in TensorRT 8.0.1. Does that mean that, starting with TensorRT 8.0.1, I no longer have to care about layers that are not supported on the DLA, and TensorRT will decide on its own whether a layer should fall back to the GPU? Or is there a new function for this?

See the TensorRT header file comments in TensorRT 7.2:
TensorRT/NvInfer.h at release/7.2 · NVIDIA/TensorRT

Use this flag instead:
TensorRT/NvInfer.h at release/7.2 · NVIDIA/TensorRT



Thank you!
If I set:
and then:
Does this second call to setFlag() overwrite the first one? If so, how do I build the engine in FP16 mode with DLA?

No, the second call will not overwrite the first one; each call to setFlag() sets an additional flag.

You can verify this by writing a simple sample.


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.