Migrating a DeepStream 5.0 compatible model to 6.1/6.0.1/6.0

Please provide complete information as applicable to your setup.

  • Hardware Platform (Jetson / GPU): GPU
  • DeepStream Version: DeepStream 6.0, 6.0.1, 6.1
  • JetPack Version (valid for Jetson only):
  • TensorRT Version: as in the corresponding DeepStream container (8.2.5 in 6.1, 8.0.1 in 6.0)
  • CUDA Version: 11.4 in 6.1, 11.3 in 6.0.1 and 6.0
  • NVIDIA GPU Driver Version (valid for GPU only): 515.65.01

I have a model that could be converted with trtexec from ONNX to a TensorRT engine in DeepStream 5.0, and deepstream-app could parse the engine file correctly.
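For context, the conversion was done roughly along these lines (the file names are placeholders, not from the original post); trtexec ships inside the DeepStream container under /usr/src/tensorrt/bin:

```shell
# Build a TensorRT engine from an ONNX model with trtexec.
# model.onnx / model.engine are placeholder file names.
/usr/src/tensorrt/bin/trtexec \
    --onnx=model.onnx \
    --saveEngine=model.engine \
    --verbose   # emit per-tactic autotuning logs
```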

However, when I tried to convert the same model on DeepStream 6.1, the conversion failed regardless of whether the model was exported with dynamic or static shapes, and regardless of the conversion flags used. The last tactic messages from the verbose log are shown below. I am not sure why this causes the core dump. Are there any breaking changes in 6.1?

[08/19/2022-02:08:56] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[08/19/2022-02:08:56] [V] [TRT] =============== Computing costs for
[08/19/2022-02:08:56] [V] [TRT] *************** Autotuning format combination: → Int32() ***************
[08/19/2022-02:08:56] [V] [TRT] =============== Computing costs for
[08/19/2022-02:08:56] [V] [TRT] *************** Autotuning format combination: → Int32() ***************
[08/19/2022-02:08:56] [V] [TRT] =============== Computing costs for
[08/19/2022-02:08:56] [V] [TRT] *************** Autotuning format combination: → Int32() ***************
[08/19/2022-02:08:56] [V] [TRT] =============== Computing costs for
[08/19/2022-02:08:56] [V] [TRT] *************** Autotuning format combination: Float(1108992,369664,608,1) → Float(11829248,369664,608,1) ***************
[08/19/2022-02:08:56] [V] [TRT] --------------- Timing Runner: Conv_30 (CudaDepthwiseConvolution)
[08/19/2022-02:08:56] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping
[08/19/2022-02:08:56] [V] [TRT] --------------- Timing Runner: Conv_30 (FusedConvActConvolution)
[08/19/2022-02:08:56] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping
Illegal instruction (core dumped)
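The two conversion variants that were tried can be sketched as follows (the input tensor name and the shape values are illustrative placeholders; only the 608x608 spatial size is suggested by the log above):

```shell
# Static-shape conversion: the batch and input shape are fixed in the ONNX export.
trtexec --onnx=model_static.onnx --saveEngine=model_static.engine

# Dynamic-shape conversion: an optimization profile must be supplied
# via min/opt/max shapes for each dynamic input.
trtexec --onnx=model_dynamic.onnx --saveEngine=model_dynamic.engine \
    --minShapes=input:1x3x608x608 \
    --optShapes=input:4x3x608x608 \
    --maxShapes=input:8x3x608x608
```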

Interestingly, when I converted the model with DeepStream 6.0.1 or 6.0, the conversion succeeded. I then copied the engine built in 6.0.1 over to 6.1, and it magically ran there with a static batch without any problem.
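Pointing deepstream-app at a prebuilt engine is done in the Gst-nvinfer config file; a minimal sketch (file names are placeholders) looks like this:

```ini
[property]
# Use the engine built with trtexec directly; if the file is missing
# or incompatible, nvinfer falls back to rebuilding from the model file.
model-engine-file=model_b1_gpu0_fp16.engine
onnx-file=model.onnx
batch-size=1
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=2
```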

I also found that the explicitBatch flag was removed in TensorRT 8.2.5. Could that be the reason?

Can “trtexec” convert your model with DeepStream 6.0/6.0.1/6.1?

Thanks for the reply, Fiona.

I am using the trtexec binary inside the DeepStream containers for the model conversion. trtexec in the DeepStream 6.0 and 6.0.1 containers converts the model successfully. And yes, as far as I remember, trtexec measures latency with dummy input tensors after a successful conversion, and that also worked there. In 6.1, the conversion itself does not even succeed.

Does the log you posted come from “trtexec” with DS 6.1?

Yes, it is.

This is the DeepStream forum. Please raise a topic in the TensorRT forum.

Thanks anyway, although I think the parsers in DeepStream and TensorRT are related: a change in TensorRT that affects engine files should be reflected in the DeepStream config parser as well. In any case, I have re-implemented the whole pipeline under DeepStream 6.0. I will report this problem on the TensorRT forum later.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.