CUDNN_STATUS_BAD_PARAM when running inference with dynamic shapes on Jetson NX

I posted this on GitHub but it is not resolved yet: CUDNN_STATUS_BAD_PARAM when infer with dynamic shape · Issue #1281 · NVIDIA/TensorRT · GitHub. Please check.

Hi,

Please note that a TensorRT engine cannot be used across different platforms or software versions.
The file shared above is for a desktop GPU.
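A TensorRT engine is serialized for a specific GPU architecture and TensorRT/cuDNN version, so the usual workflow is to rebuild it from the ONNX model on each target device rather than copying the engine file. A minimal sketch with trtexec (`model.onnx` and `model.engine` are placeholder file names, not from this thread):

```shell
# Rebuild the engine on the target device itself
# (run once on the Jetson NX, once on each desktop GPU).
# model.onnx / model.engine are placeholder names for your own files.
trtexec --onnx=model.onnx \
        --saveEngine=model.engine \
        --fp16   # optional; drop this flag if the model needs FP32
```

The resulting `model.engine` is only valid on the GPU and TensorRT version it was built with.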

Do you meet the same issue on Xavier NX?
If yes, could you share an engine file converted with JetPack 4.5.1 on the NX with us?

Thanks.

Oh sorry, yes, I tested on a desktop GPU and it doesn't work. For Jetson I converted it again and got the same result.

Let me convert it again and share, one sec.

Hey, sorry, I just tested the Jetson engine, and it works with different batch sizes. However, when I converted it on a GTX 1080 or RTX 2080, it only works for a few batch sizes. We need to deploy onto different devices, including Jetson, the 1080, and the 2080. If that is a 1080/2080-related TensorRT issue, where should I ask this question?
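One thing worth ruling out when some batch sizes work and others fail: an engine built with dynamic shapes only accepts shapes inside the optimization profile it was built with, and a batch size outside the min/max range fails at runtime. A hedged sketch with trtexec, assuming an input tensor named `input` with shape Nx3x224x224 (the tensor name and dimensions are illustrative, not from this thread; substitute your model's actual input):

```shell
# Build with an explicit dynamic-batch optimization profile covering
# every batch size you plan to use at runtime (here 1..32, opt at 8).
# "input" and the 3x224x224 dims are assumptions for this sketch.
trtexec --onnx=model.onnx \
        --saveEngine=model_dynamic.engine \
        --minShapes=input:1x3x224x224 \
        --optShapes=input:8x3x224x224 \
        --maxShapes=input:32x3x224x224
```

If the desktop engines were built with a narrower profile than the Jetson one, that alone would explain why only some batch sizes work there.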

Hi,

Please share your issue on the TensorRT board below.
They can help with dGPU issues:

Thanks.

This topic was automatically closed 2 days after the last reply. New replies are no longer allowed.