Compiling TensorRT OSS gives redefinition of argument 'std'?

The exact error,

nvcc fatal : redefinition of argument 'std'

I know this is because CMake is passing the -std flag while nvcc is also enabling the same flag itself, which wasn't an issue with older CMake versions. Compiling with the new CMake gives this error. Any ideas on how to fix it, such as removing the flag from one of the two?
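For reference, this is the general shape of workaround that addresses this kind of nvcc flag collision. This is a hedged sketch, not the exact change in TensorRT OSS: the variable names below are standard CMake, but which file in the project actually injects the duplicate flag may differ in your checkout:

```cmake
# Newer CMake appends the C++ standard flag to the nvcc command line
# itself (driven by CMAKE_CUDA_STANDARD), so a second hand-written
# -std flag in the CUDA flags triggers
#   nvcc fatal : redefinition of argument 'std'

# Let CMake own the standard flag...
set(CMAKE_CUDA_STANDARD 11)
set(CMAKE_CUDA_STANDARD_REQUIRED ON)

# ...and strip any manually added -std / --std from the CUDA flags
# so it is not passed twice:
string(REGEX REPLACE "--?std=c\\+\\+[0-9]+" ""
       CMAKE_CUDA_FLAGS "${CMAKE_CUDA_FLAGS}")
```

Running the build with `make VERBOSE=1` shows the full nvcc invocation, which makes it easy to confirm whether the `-std` flag still appears twice.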


Please add the info from the issue template, especially the commands to reproduce this error:

Provide details on the platforms you are using:
Linux distro and version
GPU type
Nvidia driver version
CUDA version
CUDNN version
Python version [if using python]
Tensorflow version
TensorRT version
If Jetson, OS, hw versions

Describe the problem


Include any logs, source, models (.uff, .pb, etc.) that would be helpful to diagnose the problem.

If relevant, please include the full traceback.


Please provide a minimal test case that reproduces your error.

I was able to fix the issue by removing std in one of the cpp files inside the TensorRT OSS code, and I was able to build TensorRT OSS on the latest JetPack. But every time we put our custom trained model (trained using TLT) into deepstream-app on the latest JetPack, the entire pipeline crashes, whereas we were able to plug our custom model into deepstream-app on the older JetPack. Any reason why this would happen?

Was the model trained with TLT using a different version of TensorRT than the one it’s crashing in?

For example, I believe JetPack 4.2 has TensorRT 5, and JetPack 4.3 has TensorRT 6.

If so, that may be the reason, as TensorRT engines are not compatible across versions. You'll likely have to re-train with TLT against the new JetPack version as well in order to run the resulting TensorRT engine (.etlt file) there.
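When the engine needs to be regenerated for the TensorRT version on the target, the usual route is to run the TLT converter on the Jetson itself. A hedged sketch follows; the key, input dimensions, and output node names are placeholders for your model's actual values (the node names shown are typical DetectNet_v2 defaults), and the exact flags may vary by TLT release, so check `tlt-converter -h`:

```shell
# Sketch: rebuild the TensorRT engine from the exported .etlt model
# on the Jetson itself, so the engine is produced by the TensorRT
# version that ships with the installed JetPack.
# All values below are placeholders for your model:
#   -k  TLT encryption key used when the model was exported
#   -d  input dimensions (C,H,W)
#   -o  output node names
#   -t  precision to build the engine at
#   -e  path of the engine file to point deepstream-app at
./tlt-converter model.etlt \
    -k <your_tlt_key> \
    -d 3,544,960 \
    -o output_bbox/BiasAdd,output_cov/Sigmoid \
    -t fp32 \
    -e model.engine
```

Because the engine is serialized for the exact TensorRT version, GPU, and precision it was built with, this conversion has to be repeated on each JetPack upgrade rather than copying the old engine file over.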

I don’t get it. You don’t train on the Nano, right? For some reason FP16 was causing the issue even after we had applied the FP16 fix on the Nano, so I went ahead and ran an FP32 build of the same model instead. I’m guessing we need to build the FP16 fix for TensorRT 6.