Data loss when building an engine from ONNX

Hi,
I want to run a CRNN model on DeepStream, but I have a problem when converting my model to a TensorRT engine.
Environment: Ubuntu 18.04, CUDA 10.2, DeepStream 5.0, TensorRT
I exported the model from PyTorch to ONNX and then tested the weights with ONNX Runtime --> result: OK
output: 3 dims
But when I run deepstream-app to build the engine file and run the project, my output is reduced to 2 dims.

  • onnx test result
    image.shape: torch.Size([1, 3, 32, 128])
    preds.shape: torch.Size([14, 1, 131])
  • deepstream-app
    0 INPUT kFLOAT input0 3x32x128
    1 OUTPUT kFLOAT output0 1x131
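One plausible reading of this mismatch (an assumption, not confirmed in this thread): in implicit-batch mode, TensorRT strips the leading dimension as the batch axis, so a CRNN output laid out time-major as (T, N, C) = (14, 1, 131) would be reported per-sample as 1x131. A minimal NumPy sketch of that reinterpretation:

```python
import numpy as np

# ONNX Runtime sees the full CRNN output: (T, N, C)
# = (sequence length, batch, num classes)
preds = np.zeros((14, 1, 131), dtype=np.float32)

# If implicit-batch TensorRT treats the leading axis as the batch,
# each "sample" it reports has only the remaining shape 1x131.
per_sample = preds[0]
print(per_sample.shape)  # (1, 131), matching the deepstream-app log
```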

Please help me check it.
Many thanks.


Usually, we use the first dimension for the batch size.
Do you mean the input is one image with size = 3x32x128?
And the corresponding output is [1, 14, 1, 131]?

Or an image with size = 1x3x32x128? (4-dimension input)


Hi, thanks for your response.

The input image has size = (3x32x128) --> 3 dims; the batch size is 1.
Both cases have the same input.

The correct output for one image should be (14, 1, 131), but when I run it with deepstream-app, the output is (1, 131).
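A common workaround for time-major (T, N, C) sequence outputs (an assumption here, not necessarily the fix adopted in this thread) is to permute the output to batch-first (N, T, C) inside the model's forward pass before exporting to ONNX, so that the real batch axis comes first. Sketched with NumPy:

```python
import numpy as np

# Time-major CRNN output: (T, N, C) = (14, 1, 131)
preds = np.zeros((14, 1, 131), dtype=np.float32)

# Move the batch axis to the front before export:
# (N, T, C) = (1, 14, 131)
batch_first = np.transpose(preds, (1, 0, 2))
print(batch_first.shape)  # (1, 14, 131)
```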


We are not sure we understand your problem correctly.
If the batch size is 1, should the output be [1, 14, 1, 131]?

We are trying to confirm this since some models perform cross-batch operations, which DeepStream does not support right now.


"If the batch size is 1, should the output be [1, 14, 1, 131]?" --> right
input: (1x3x32x128)
output: (1, 14, 1, 131)

Sorry, my question was not clear.



It’s okay.

May I know which input format you use? Is it NCHW or NHWC?
It's possible that your model uses NHWC, but TensorRT treats it as NCHW.

If your model already uses NCHW, could you share the ONNX model with us for checking?
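To illustrate why the layout question matters: the same flat buffer means different things when read as NCHW versus NHWC. A NumPy sketch (illustrative only, using the 3x32x128 input from this thread):

```python
import numpy as np

# One image's worth of data, as a flat buffer
buf = np.arange(1 * 3 * 32 * 128, dtype=np.float32)

# Channels-first (NCHW) vs channels-last (NHWC): same bytes,
# but each element maps to a different (channel, row, col).
as_nchw = buf.reshape(1, 3, 32, 128)
as_nhwc = buf.reshape(1, 32, 128, 3)

# The "next horizontal pixel, channel 0" lives at different offsets:
print(as_nchw[0, 0, 0, 1])  # 1.0
print(as_nhwc[0, 0, 1, 0])  # 3.0
```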

I would like to send you my ONNX file.

Please help me check it.


Thanks for sharing.
It seems that you are hitting the same issue as Miss ouput when convert Onnx to TensorRT.

Our internal team is checking on this.
We will post more information here once we get any feedback.



We found that Miss ouput when convert Onnx to TensorRT is an issue in the custom model rather than a bug.
Could you also check the latest comment there?