Reshaping error when setting batch-size greater than 1 with an ONNX model

Please provide complete information as applicable to your setup.

• Hardware Platform: Jetson
• DeepStream Version: 6.0
• JetPack Version (valid for Jetson only): 4.6
• TensorRT Version: 8.0.1 (per the dpkg output below)
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs): questions
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)

There is an error when I set batch-size greater than 1 for an ONNX model. The logs are as follows.

2023-02-08 16:48:10,648  ** INFO: <create_rtmpsink_bin:904>: cap_str_buf is video/x-raw(memory:NVMM), format=I420, width=1920, height=1080 
2023-02-08 16:48:10,667  ** INFO: <create_encode_file_bin:354>: cap_str_buf is video/x-raw(memory:NVMM), format=I420, width=1920, height=1080 
2023-02-08 16:48:10,908  Opening in BLOCKING MODE  
2023-02-08 16:48:10,908  Opening in BLOCKING MODE  
2023-02-08 16:48:10,908  Table created Successfully 
2023-02-08 16:48:13,406  0:00:02.855652927 21294   0x7f24002390 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 7]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1904> [UID = 7]: deserialized trt engine from :/home/lcfc/david/code/qf-ecu-jpack4.6/deepstream-6.0/samples/models/Secondary_VehicleTypes/typenet_bs8.onnx_b8_gpu0_fp16.engine 
2023-02-08 16:48:13,406  0:00:02.855847369 21294   0x7f24002390 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 7]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2008> [UID = 7]: Use deserialized engine model: /home/lcfc/david/code/qf-ecu-jpack4.6/deepstream-6.0/samples/models/Secondary_VehicleTypes/typenet_bs8.onnx_b8_gpu0_fp16.engine 
2023-02-08 16:48:13,411  0:00:02.861368350 21294   0x7f24002390 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary_gie_1> [UID 7]: Load new model:/home/lcfc/david/code/qf-ecu-jpack4.6/ds-app/ds-cfg/sgie4_vehicletypes_onnx_cfg.txt sucessfully 
2023-02-08 16:48:13,433  0:00:02.882998457 21294   0x7f24002390 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1904> [UID = 3]: deserialized trt engine from :/home/lcfc/david/code/qf-ecu-jpack4.6/deepstream-6.0/samples/models/Secondary_PlateRecognition/lprnet.onnx_b2_gpu0_fp16.engine 
2023-02-08 16:48:13,434  0:00:02.883180674 21294   0x7f24002390 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2008> [UID = 3]: Use deserialized engine model: /home/lcfc/david/code/qf-ecu-jpack4.6/deepstream-6.0/samples/models/Secondary_PlateRecognition/lprnet.onnx_b2_gpu0_fp16.engine 
2023-02-08 16:48:13,435  0:00:02.885716097 21294   0x7f24002390 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary_gie_0> [UID 3]: Load new model:/home/lcfc/david/code/qf-ecu-jpack4.6/ds-app/ds-cfg/sgie1_lpr_onnx_cfg.txt sucessfully 
2023-02-08 16:48:13,457  INFO: [FullDims Engine Info]: layers num: 2 
2023-02-08 16:48:13,457  0   INPUT  kFLOAT images          3x224x224       min: 1x3x224x224     opt: 8x3x224x224     Max: 8x3x224x224      
2023-02-08 16:48:13,457  1   OUTPUT kFLOAT output          178             min: 0               opt: 0               Max: 0                
2023-02-08 16:48:13,457  WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors. 
2023-02-08 16:48:13,458  INFO: [FullDims Engine Info]: layers num: 2 
2023-02-08 16:48:13,458  0   INPUT  kFLOAT images          3x24x94         min: 1x3x24x94       opt: 2x3x24x94       Max: 2x3x24x94        
2023-02-08 16:48:13,458  1   OUTPUT kFLOAT output          76x18           min: 0               opt: 0               Max: 0                
2023-02-08 16:48:13,458  gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-6.0/lib/libObjectTracker.so 
2023-02-08 16:48:13,459  Track NvMOT_Query success 
2023-02-08 16:48:13,459  gstnvtracker: Batch processing is ON 
2023-02-08 16:48:13,459  gstnvtracker: Past frame output is OFF 
2023-02-08 16:48:13,460  0:00:02.906652986 21294   0x7f24002390 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1163> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5 
2023-02-08 16:48:13,552  0:00:03.002008225 21294   0x7f24002390 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1904> [UID = 1]: deserialized trt engine from :/home/lcfc/david/code/qf-ecu-jpack4.6/ds-app/ds-engine/vehicle.engine 
2023-02-08 16:48:13,552  0:00:03.002202859 21294   0x7f24002390 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2008> [UID = 1]: Use deserialized engine model: /home/lcfc/david/code/qf-ecu-jpack4.6/ds-app/ds-engine/vehicle.engine 
2023-02-08 16:48:13,555  0:00:03.005591188 21294   0x7f24002390 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/home/lcfc/david/code/qf-ecu-jpack4.6/ds-app/ds-cfg/pgie_yolo_cfg.txt sucessfully 
2023-02-08 16:48:13,560  WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors. 
2023-02-08 16:48:13,561  INFO: [Implicit Engine Info]: layers num: 2 
2023-02-08 16:48:13,561  0   INPUT  kFLOAT data            3x640x640        
2023-02-08 16:48:13,561  1   OUTPUT kFLOAT prob            7001x1x1         
2023-02-08 16:48:13,561  Runtime commands: 
2023-02-08 16:48:13,561         h: Print this help 
2023-02-08 16:48:13,561         q: Quit 
2023-02-08 16:48:13,562         p: Pause 
2023-02-08 16:48:13,562         r: Resume 
2023-02-08 16:48:13,562  ** INFO: <bus_callback:194>: Pipeline ready 
2023-02-08 16:48:14,702  NvMMLiteOpen : Block : BlockType = 261  
2023-02-08 16:48:14,703  NVMEDIA: Reading vendor.tegra.display-size : status: 6  
2023-02-08 16:48:14,705  NvMMLiteBlockCreate : Block : BlockType = 261  
2023-02-08 16:48:14,821  NvMMLiteOpen : Block : BlockType = 4  
2023-02-08 16:48:14,821  NvMMLiteOpen : Block : BlockType = 4  
2023-02-08 16:48:14,821  ===== NVMEDIA: NVENC ===== 
2023-02-08 16:48:14,821  ===== NVMEDIA: NVENC ===== 
2023-02-08 16:48:14,822  NvMMLiteBlockCreate : Block : BlockType = 4  
2023-02-08 16:48:14,823  NvMMLiteBlockCreate : Block : BlockType = 4  
2023-02-08 16:48:15,771  Opening in BLOCKING MODE  
2023-02-08 16:48:15,771  2023-02-08 16:48:15: 
2023-02-08 16:48:15,771  **PERF:  FPS 0 (Avg)    
2023-02-08 16:48:15,771  **PERF:  0.00 (0.00)    
2023-02-08 16:48:16,183  track_thresh:0.500000  high_thresh:0.600000    match_thresh:0.800000 
2023-02-08 16:48:16,183  frame_rate:30  track_buffer:20 
2023-02-08 16:48:16,188  ERROR: [TRT]: 7: [shapeMachine.cpp::execute::565] Error Code 7: Internal Error (IShuffleLayer Flatten_47: reshaping failed for tensor: onnx::Flatten_189 
2023-02-08 16:48:16,188  reshape would change volume 
2023-02-08 16:48:16,189  Instruction: RESHAPE{6 512 1 1} {8 512} 
2023-02-08 16:48:16,189  ) 
2023-02-08 16:48:16,189  ERROR: [TRT]: 2: [executionContext.cpp::enqueueInternal::360] Error Code 2: Internal Error (Could not resolve slots: ) 
2023-02-08 16:48:16,189  ERROR: Failed to enqueue trt inference batch 
2023-02-08 16:48:16,189  ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR 
2023-02-08 16:48:16,190  0:00:05.637736227 21294   0x55684896d0 WARN                 nvinfer gstnvinfer.cpp:1324:gst_nvinfer_input_queue_loop:<secondary_gie_1> error: Failed to queue input batch for inferencing 
2023-02-08 16:48:16,190  ERROR from secondary_gie_1: Failed to queue input batch for inferencing 
2023-02-08 16:48:16,190  Debug info: gstnvinfer.cpp(1324): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline/GstBin:secondary_gie_bin/GstNvInfer:secondary_gie_1 
2023-02-08 16:48:16,210  Quitting 
2023-02-08 16:48:16,245 send_nats: name_str: dataset_stream_source_111  topic_str: cloud.ai_algorithm.deepstream.object_detection.111.all 
2023-02-08 16:48:16,318  (deepstream-app:21294): GLib-CRITICAL **: 16:48:16.317: g_thread_join: assertion 'thread' failed 
2023-02-08 16:48:17,217  App run failed 

My configuration file is as follows.

[property]
gpu-id=0
net-scale-factor=0.003921568627451
#offsets=127.5;127.5;127.5
model-color-format=1
onnx-file=/opt/nvidia/deepstream/deepstream-6.0/samples/models/Secondary_VehicleTypes/typenet_bs8.onnx
model-engine-file=/opt/nvidia/deepstream/deepstream-6.0/samples/models/Secondary_VehicleTypes/typenet_bs8.onnx_b8_gpu0_fp16.engine
batch-size=8
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
#num-detected-classes=300
infer-dims=3;224;224
output-blob-names=output
network-type=1
parse-classifier-func-name=NvDsInferParseCustomVehicleTypes
custom-lib-path=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infer_custom_parser_vehicle_types.so
classifier-async-mode=1
# GPU:1  VIC:2(Jetson only)
#scaling-compute-hw=2
#enable-dla=1
#use-dla-core=1
secondary-reinfer-interval=10
maintain-aspect-ratio=0
#force-implicit-batch-dim=1
process-mode=2
classifier-threshold=0.6
input-object-min-width=64
input-object-min-height=64
symmetric-padding=1


From the logs, typenet_bs8.onnx's inference engine was not created successfully.

  1. Could you share the whole media pipeline? Which DeepStream sample are you testing?
  2. Is Secondary_VehicleTypes the second GIE?
  3. Using third-party tools, does the model work?

  1. Could you share the whole media pipeline? Which DeepStream sample are you testing?
    I used the deepstream-app sample, and the whole pipeline is as follows:
    src → streammux → primary detector → tracker → secondary vehicle-types classification

  2. Is Secondary_VehicleTypes the second GIE?
    Yes.

  3. Using third-party tools, does the model work?
    I have not done this.

Could anyone give some advice?
Thank you.

  1. Please test your model and make sure it works. What are the model's input and output? (A sanity-check sketch follows below.)
  2. Can you rename Secondary_VehicleTypes/typenet_bs8.onnx_b8_gpu0_fp16.engine and run again? From the logs, deserializing from that engine failed; if it still fails, please share the whole logs.
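
For the model test in item 1, a quick way to sanity-check the ONNX outside DeepStream is onnxruntime. This is a minimal sketch, assuming onnxruntime is installed; the input name and shapes come from the engine info in the logs above:

import numpy as np
import onnxruntime as ort

# Load the ONNX model directly (same file as referenced in the config above)
sess = ort.InferenceSession("typenet_bs8.onnx", providers=["CPUExecutionProvider"])
# Input 'images' is 8x3x224x224 per the FullDims engine info in the logs
x = np.random.rand(8, 3, 224, 224).astype(np.float32)
outputs = sess.run(None, {"images": x})
print(outputs[0].shape)  # expected: (8, 178), matching the 'output' layer dims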

This engine file was generated automatically by DeepStream, and this error occurred.
But there is no error when I use an engine file generated by the trtexec command. The trtexec command is as follows.

/usr/src/tensorrt/bin/trtexec --onnx=typenet_bs8.onnx --saveEngine=test.engine --explicitBatch --fp16 --workspace=1024 --buildOnly --threads=12

Why?

I guessed the engine file produced by DeepStream was faulty, so I created a new engine file using a different configuration file; sadly, a new error occurred. The DeepStream log is as follows.

2023-02-09 14:23:07,879 deamon: /opt/nvidia/deepstream/ds-app/ds-engine/vehicle.engine already exist------ 
2023-02-09 14:23:07,879 roll-image: image_save_path:/usr/local/video2/nginx/html/ecupic/Flow_Picture 
2023-02-09 14:23:07,879 roll-image: save_big_images_time:14400 save_small_images_time:14400 
2023-02-09 14:23:07,879 roll-image: db_save_path:/home/lcfc/work/FAS/EF_NFCS/flow_sign/FASDB.db 
2023-02-09 14:23:07,880 roll-image: open sqlite3 /home/lcfc/work/FAS/EF_NFCS/flow_sign/FASDB.db success 
2023-02-09 14:23:07,884 send_nats: nats_addr is 'nats://127.0.0.1:4222' 
2023-02-09 14:23:08,150  ** INFO: <create_rtmpsink_bin:904>: cap_str_buf is video/x-raw(memory:NVMM), format=I420, width=1920, height=1080 
2023-02-09 14:23:08,168  ** INFO: <create_encode_file_bin:354>: cap_str_buf is video/x-raw(memory:NVMM), format=I420, width=1920, height=1080 
2023-02-09 14:23:08,183  ** INFO: <create_rtmpsink_bin:904>: cap_str_buf is video/x-raw(memory:NVMM), format=I420, width=1920, height=1080 
2023-02-09 14:23:08,187  ** INFO: <create_rtmpsink_bin:904>: cap_str_buf is video/x-raw(memory:NVMM), format=I420, width=1920, height=1080 
2023-02-09 14:23:08,191  ** INFO: <create_rtmpsink_bin:904>: cap_str_buf is video/x-raw(memory:NVMM), format=I420, width=1920, height=1080 
2023-02-09 14:23:08,644  Opening in BLOCKING MODE  
2023-02-09 14:23:08,644  Opening in BLOCKING MODE  
2023-02-09 14:23:08,644  Opening in BLOCKING MODE  
2023-02-09 14:23:08,644  Opening in BLOCKING MODE  
2023-02-09 14:23:08,645  Opening in BLOCKING MODE  
2023-02-09 14:23:08,645  Table created Successfully 
2023-02-09 14:23:09,578  ERROR: Deserialize engine failed because file path: /home/lcfc/david/code/qf-ecu-jpack4.6/deepstream-6.0/samples/models/Secondary_VehicleTypes/typenet_bs8.onnx_b8_gpu0_fp16.engine open error 
2023-02-09 14:23:09,578  0:00:01.527758480  6262   0x55ad95cd90 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 7]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1893> [UID = 7]: deserialize engine from file :/home/lcfc/david/code/qf-ecu-jpack4.6/deepstream-6.0/samples/models/Secondary_VehicleTypes/typenet_bs8.onnx_b8_gpu0_fp16.engine failed 
2023-02-09 14:23:09,578  0:00:01.527887509  6262   0x55ad95cd90 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 7]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2000> [UID = 7]: deserialize backend context from engine from file :/home/lcfc/david/code/qf-ecu-jpack4.6/deepstream-6.0/samples/models/Secondary_VehicleTypes/typenet_bs8.onnx_b8_gpu0_fp16.engine failed, try rebuild 
2023-02-09 14:23:09,579  0:00:01.527907830  6262   0x55ad95cd90 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 7]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1918> [UID = 7]: Trying to create engine from model files 
2023-02-09 14:23:09,675  ERROR: [TRT]: ModelImporter.cpp:726: ERROR: ModelImporter.cpp:527 In function importModel: 
2023-02-09 14:23:09,676  [4] Assertion failed: !_importer_ctx.network()->hasImplicitBatchDimension() && "This version of the ONNX parser only supports TensorRT INetworkDefinitions with an explicit batch dimension. Please ensure the network was created using the EXPLICIT_BATCH NetworkDefinitionCreationFlag." 
2023-02-09 14:23:09,677  ERROR: Failed to parse onnx file 
2023-02-09 14:23:09,677  ERROR: failed to build network since parsing model errors. 
2023-02-09 14:23:09,938  Segmentation fault (core dumped) 
2023-02-09 14:23:09,939 deamon: ***ds app exit*** return :139 
2023-02-09 14:23:12,870 deamon: ***ds app start*** 
2023-02-09 14:23:12,871 deamon: /opt/nvidia/deepstream/ds-app/ds-engine/vehicle.engine already exist------ 
2023-02-09 14:23:13,151  ** INFO: <create_rtmpsink_bin:904>: cap_str_buf is video/x-raw(memory:NVMM), format=I420, width=1920, height=1080 
2023-02-09 14:23:13,169  ** INFO: <create_encode_file_bin:354>: cap_str_buf is video/x-raw(memory:NVMM), format=I420, width=1920, height=1080 
2023-02-09 14:23:13,184  ** INFO: <create_rtmpsink_bin:904>: cap_str_buf is video/x-raw(memory:NVMM), format=I420, width=1920, height=1080 
2023-02-09 14:23:13,189  ** INFO: <create_rtmpsink_bin:904>: cap_str_buf is video/x-raw(memory:NVMM), format=I420, width=1920, height=1080 
2023-02-09 14:23:13,193  ** INFO: <create_rtmpsink_bin:904>: cap_str_buf is video/x-raw(memory:NVMM), format=I420, width=1920, height=1080 
2023-02-09 14:23:13,562  Opening in BLOCKING MODE  
2023-02-09 14:23:13,562  Opening in BLOCKING MODE  
2023-02-09 14:23:13,563  Opening in BLOCKING MODE  
2023-02-09 14:23:13,563  Opening in BLOCKING MODE  
2023-02-09 14:23:13,563  Opening in BLOCKING MODE  
2023-02-09 14:23:13,563  Table created Successfully 
2023-02-09 14:23:14,516  ERROR: Deserialize engine failed because file path: /home/lcfc/david/code/qf-ecu-jpack4.6/deepstream-6.0/samples/models/Secondary_VehicleTypes/typenet_bs8.onnx_b8_gpu0_fp16.engine open error 
2023-02-09 14:23:14,517  0:00:01.466568861  6304   0x557272ad90 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 7]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1893> [UID = 7]: deserialize engine from file :/home/lcfc/david/code/qf-ecu-jpack4.6/deepstream-6.0/samples/models/Secondary_VehicleTypes/typenet_bs8.onnx_b8_gpu0_fp16.engine failed 
2023-02-09 14:23:14,517  0:00:01.466702019  6304   0x557272ad90 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 7]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2000> [UID = 7]: deserialize backend context from engine from file :/home/lcfc/david/code/qf-ecu-jpack4.6/deepstream-6.0/samples/models/Secondary_VehicleTypes/typenet_bs8.onnx_b8_gpu0_fp16.engine failed, try rebuild 
2023-02-09 14:23:14,517  0:00:01.466725188  6304   0x557272ad90 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 7]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1918> [UID = 7]: Trying to create engine from model files 
2023-02-09 14:23:14,617  ERROR: [TRT]: ModelImporter.cpp:726: ERROR: ModelImporter.cpp:527 In function importModel: 
2023-02-09 14:23:14,617  [4] Assertion failed: !_importer_ctx.network()->hasImplicitBatchDimension() && "This version of the ONNX parser only supports TensorRT INetworkDefinitions with an explicit batch dimension. Please ensure the network was created using the EXPLICIT_BATCH NetworkDefinitionCreationFlag." 
2023-02-09 14:23:14,617  ERROR: Failed to parse onnx file 
2023-02-09 14:23:14,617  ERROR: failed to build network since parsing model errors. 
2023-02-09 14:23:14,867  Segmentation fault (core dumped) 

The new configuration file is as follows.

[property]
gpu-id=0
net-scale-factor=0.003921568627451
#offsets=127.5;127.5;127.5
model-color-format=1
onnx-file=/opt/nvidia/deepstream/deepstream-6.0/samples/models/Secondary_VehicleTypes/typenet_bs8.onnx
model-engine-file=/opt/nvidia/deepstream/deepstream-6.0/samples/models/Secondary_VehicleTypes/typenet_bs8.onnx_b8_gpu0_fp16.engine
batch-size=8
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
#num-detected-classes=300
infer-dims=3;224;224
output-blob-names=output
network-type=1
parse-classifier-func-name=NvDsInferParseCustomVehicleTypes
custom-lib-path=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infer_custom_parser_vehicle_types.so
classifier-async-mode=1
# GPU:1  VIC:2(Jetson only)
#scaling-compute-hw=2
#enable-dla=1
#use-dla-core=1
secondary-reinfer-interval=10
maintain-aspect-ratio=0
force-implicit-batch-dim=1
process-mode=2
classifier-threshold=0.6
input-object-min-width=64
input-object-min-height=64
symmetric-padding=1

[class-attrs-all]

  1. Please check whether all libs meet DeepStream's development requirements; please refer to this link: Quickstart Guide — DeepStream 6.2 Release documentation
    Here are the commands:
    CUDA version: nvcc -V
    dpkg -l | grep TensorRT
    dpkg -l | grep gstreamer
    dpkg -l | grep cudnn
    If all lib versions meet the requirements, to……

  2. Using #force-implicit-batch-dim=1 (i.e., with the option commented out) and renaming Secondary_VehicleTypes/typenet_bs8.onnx_b8_gpu0_fp16.engine, please run again and share the whole logs.

I use DeepStream 6.0 and the JetPack version is 4.6; I never changed any lib version. The command logs are as follows.

lcfc@lcfc-desktop:gst-nvinfer$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Sun_Feb_28_22:34:44_PST_2021
Cuda compilation tools, release 10.2, V10.2.300
Build cuda_10.2_r440.TC440_70.29663091_0
lcfc@lcfc-desktop:gst-nvinfer$ dpkg -l |grep TensorRT
ii  graphsurgeon-tf                               8.0.1-1+cuda10.2                                 arm64        GraphSurgeon for TensorRT package
ii  libnvinfer-bin                                8.0.1-1+cuda10.2                                 arm64        TensorRT binaries
ii  libnvinfer-dev                                8.0.1-1+cuda10.2                                 arm64        TensorRT development libraries and headers
ii  libnvinfer-doc                                8.0.1-1+cuda10.2                                 all          TensorRT documentation
ii  libnvinfer-plugin-dev                         8.0.1-1+cuda10.2                                 arm64        TensorRT plugin libraries
ii  libnvinfer-plugin8                            8.0.1-1+cuda10.2                                 arm64        TensorRT plugin libraries
ii  libnvinfer-samples                            8.0.1-1+cuda10.2                                 all          TensorRT samples
ii  libnvinfer8                                   8.0.1-1+cuda10.2                                 arm64        TensorRT runtime libraries
ii  libnvonnxparsers-dev                          8.0.1-1+cuda10.2                                 arm64        TensorRT ONNX libraries
ii  libnvonnxparsers8                             8.0.1-1+cuda10.2                                 arm64        TensorRT ONNX libraries
ii  libnvparsers-dev                              8.0.1-1+cuda10.2                                 arm64        TensorRT parsers libraries
ii  libnvparsers8                                 8.0.1-1+cuda10.2                                 arm64        TensorRT parsers libraries
ii  nvidia-container-csv-tensorrt                 8.0.1.6-1+cuda10.2                               arm64        Jetpack TensorRT CSV file
ii  python3-libnvinfer                            8.0.1-1+cuda10.2                                 arm64        Python 3 bindings for TensorRT
ii  python3-libnvinfer-dev                        8.0.1-1+cuda10.2                                 arm64        Python 3 development package for TensorRT
ii  tensorrt                                      8.0.1.6-1+cuda10.2                               arm64        Meta package of TensorRT
ii  uff-converter-tf                              8.0.1-1+cuda10.2                                 arm64        UFF converter for TensorRT package
lcfc@lcfc-desktop:gst-nvinfer$ dpkg -l | grep gstreamer
ii  gir1.2-gstreamer-1.0:arm64                    1.14.5-0ubuntu1~18.04.2                          arm64        GObject introspection data for the GStreamer library
ii  gstreamer1.0-alsa:arm64                       1.14.5-0ubuntu1~18.04.3                          arm64        GStreamer plugin for ALSA
ii  gstreamer1.0-clutter-3.0:arm64                3.0.26-1                                         arm64        Clutter PLugin for GStreamer 1.0
ii  gstreamer1.0-gl:arm64                         1.14.5-0ubuntu1~18.04.3                          arm64        GStreamer plugins for GL
ii  gstreamer1.0-gtk3:arm64                       1.14.5-0ubuntu1~18.04.2                          arm64        GStreamer plugin for GTK+3
ii  gstreamer1.0-libav:arm64                      1.14.5-0ubuntu1~18.04.1                          arm64        libav plugin for GStreamer
ii  gstreamer1.0-packagekit                       1.1.9-1ubuntu2.18.04.6                           arm64        GStreamer plugin to install codecs using PackageKit
ii  gstreamer1.0-plugins-bad:arm64                1.14.5-0ubuntu1~18.04.1                          arm64        GStreamer plugins from the "bad" set
ii  gstreamer1.0-plugins-base:arm64               1.14.5-0ubuntu1~18.04.3                          arm64        GStreamer plugins from the "base" set
ii  gstreamer1.0-plugins-base-apps                1.14.5-0ubuntu1~18.04.3                          arm64        GStreamer helper programs from the "base" set
ii  gstreamer1.0-plugins-good:arm64               1.14.5-0ubuntu1~18.04.2                          arm64        GStreamer plugins from the "good" set
ii  gstreamer1.0-plugins-ugly:arm64               1.14.5-0ubuntu1~18.04.1                          arm64        GStreamer plugins from the "ugly" set
ii  gstreamer1.0-pulseaudio:arm64                 1.14.5-0ubuntu1~18.04.2                          arm64        GStreamer plugin for PulseAudio
ii  gstreamer1.0-tools                            1.14.5-0ubuntu1~18.04.2                          arm64        Tools for use with GStreamer
ii  gstreamer1.0-x:arm64                          1.14.5-0ubuntu1~18.04.3                          arm64        GStreamer plugins for X11 and Pango
ii  libgstreamer-gl1.0-0:arm64                    1.14.5-0ubuntu1~18.04.3                          arm64        GStreamer GL libraries
ii  libgstreamer-plugins-bad1.0-0:arm64           1.14.5-0ubuntu1~18.04.1                          arm64        GStreamer libraries from the "bad" set
ii  libgstreamer-plugins-base1.0-0:arm64          1.14.5-0ubuntu1~18.04.3                          arm64        GStreamer libraries from the "base" set
ii  libgstreamer-plugins-base1.0-dev:arm64        1.14.5-0ubuntu1~18.04.3                          arm64        GStreamer development files for libraries from the "base" set
ii  libgstreamer-plugins-good1.0-0:arm64          1.14.5-0ubuntu1~18.04.2                          arm64        GStreamer development files for libraries from the "good" set
ii  libgstreamer1.0-0:arm64                       1.14.5-0ubuntu1~18.04.2                          arm64        Core GStreamer libraries and elements
ii  libgstreamer1.0-dev:arm64                     1.14.5-0ubuntu1~18.04.2                          arm64        GStreamer core development files
ii  libreoffice-avmedia-backend-gstreamer         1:6.0.7-0ubuntu0.18.04.10                        arm64        GStreamer backend for LibreOffice
ii  nvidia-l4t-gstreamer                          32.6.1-20210916210945                            arm64        NVIDIA GST Application files
lcfc@lcfc-desktop:gst-nvinfer$ dpkg -l|grep cudnn
ii  libcudnn8                                     8.2.1.32-1+cuda10.2                              arm64        cuDNN runtime libraries
ii  libcudnn8-dev                                 8.2.1.32-1+cuda10.2                              arm64        cuDNN development libraries and headers
ii  libcudnn8-samples                             8.2.1.32-1+cuda10.2                              arm64        cuDNN documents and samples
ii  nvidia-container-csv-cudnn                    8.2.1.32-1+cuda10.2                              arm64        Jetpack CUDNN CSV file
lcfc@lcfc-desktop:gst-nvinfer$ 

I have a question: how do I rename Secondary_VehicleTypes/typenet_bs8.onnx_b8_gpu0_fp16.engine? Please give me a hint.
Thank you.

I mean change it to a different name, for example: mv typenet_bs8.onnx_b8_gpu0_fp16.engine typenet_bs8.onnx_b8_gpu0_fp16.engine1
Does the model support dynamic batch?
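
On the dynamic-batch question: a quick way to check is to print the input shape of the ONNX graph. A minimal sketch using the onnx Python package (assuming it is available; the path is illustrative):

import onnx

model = onnx.load("typenet_bs8.onnx")
for inp in model.graph.input:
    # dim_param is set for symbolic (dynamic) dims, dim_value for fixed ones
    dims = [d.dim_param if d.dim_param else d.dim_value
            for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)
# A numeric first entry (e.g. [8, 3, 224, 224]) means the batch size is
# fixed in the graph; a name such as 'batch' means dynamic batch.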

OK, but the error still exists. The log is as follows.

2023-02-09 15:05:39,441  ** INFO: <create_rtmpsink_bin:904>: cap_str_buf is video/x-raw(memory:NVMM), format=I420, width=1920, height=1080 
2023-02-09 15:05:39,460  ** INFO: <create_encode_file_bin:354>: cap_str_buf is video/x-raw(memory:NVMM), format=I420, width=1920, height=1080 
2023-02-09 15:05:39,475  ** INFO: <create_rtmpsink_bin:904>: cap_str_buf is video/x-raw(memory:NVMM), format=I420, width=1920, height=1080 
2023-02-09 15:05:39,479  ** INFO: <create_rtmpsink_bin:904>: cap_str_buf is video/x-raw(memory:NVMM), format=I420, width=1920, height=1080 
2023-02-09 15:05:39,483  ** INFO: <create_rtmpsink_bin:904>: cap_str_buf is video/x-raw(memory:NVMM), format=I420, width=1920, height=1080 
2023-02-09 15:05:39,843  Opening in BLOCKING MODE  
2023-02-09 15:05:39,844  Opening in BLOCKING MODE  
2023-02-09 15:05:39,844  Opening in BLOCKING MODE  
2023-02-09 15:05:39,844  Opening in BLOCKING MODE  
2023-02-09 15:05:39,844  Opening in BLOCKING MODE  
2023-02-09 15:05:39,844  Table created Successfully 
2023-02-09 15:05:40,768  ERROR: Deserialize engine failed because file path: /home/lcfc/david/code/qf-ecu-jpack4.6/deepstream-6.0/samples/models/Secondary_VehicleTypes/typenet_bs8.onnx_b8_gpu0_fp16.engine1 open error 
2023-02-09 15:05:40,768  0:00:01.426500294  7470   0x55647b2990 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 7]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1893> [UID = 7]: deserialize engine from file :/home/lcfc/david/code/qf-ecu-jpack4.6/deepstream-6.0/samples/models/Secondary_VehicleTypes/typenet_bs8.onnx_b8_gpu0_fp16.engine1 failed 
2023-02-09 15:05:40,768  0:00:01.426617099  7470   0x55647b2990 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 7]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2000> [UID = 7]: deserialize backend context from engine from file :/home/lcfc/david/code/qf-ecu-jpack4.6/deepstream-6.0/samples/models/Secondary_VehicleTypes/typenet_bs8.onnx_b8_gpu0_fp16.engine1 failed, try rebuild 
2023-02-09 15:05:40,768  0:00:01.426638860  7470   0x55647b2990 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 7]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1918> [UID = 7]: Trying to create engine from model files 
2023-02-09 15:06:02,816  0:00:23.474579678  7470   0x55647b2990 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 7]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1951> [UID = 7]: serialize cuda engine to file: /home/lcfc/david/code/qf-ecu-jpack4.6/deepstream-6.0/samples/models/Secondary_VehicleTypes/typenet_bs8.onnx_b8_gpu0_fp16.engine successfully 
2023-02-09 15:06:02,846  0:00:23.504292811  7470   0x55647b2990 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary_gie_1> [UID 7]: Load new model:/home/lcfc/david/code/qf-ecu-jpack4.6/ds-app/ds-cfg/sgie4_vehicletypes_onnx_cfg.txt sucessfully 
2023-02-09 15:06:02,878  0:00:23.536245022  7470   0x55647b2990 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1904> [UID = 3]: deserialized trt engine from :/home/lcfc/david/code/qf-ecu-jpack4.6/deepstream-6.0/samples/models/Secondary_PlateRecognition/lprnet.onnx_b2_gpu0_fp16.engine 
2023-02-09 15:06:02,878  0:00:23.536437127  7470   0x55647b2990 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2008> [UID = 3]: Use deserialized engine model: /home/lcfc/david/code/qf-ecu-jpack4.6/deepstream-6.0/samples/models/Secondary_PlateRecognition/lprnet.onnx_b2_gpu0_fp16.engine 
2023-02-09 15:06:02,880  0:00:23.539555573  7470   0x55647b2990 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary_gie_0> [UID 3]: Load new model:/home/lcfc/david/code/qf-ecu-jpack4.6/ds-app/ds-cfg/sgie1_lpr_onnx_cfg.txt sucessfully 
2023-02-09 15:06:02,901  WARNING: [TRT]: DLA requests all profiles have same min, max, and opt value. All dla layers are falling back to GPU 
2023-02-09 15:06:02,901  WARNING: [TRT]: Detected invalid timing cache, setup a local cache instead 
2023-02-09 15:06:02,902  WARNING: [TRT]: Min value of this profile is not valid 
2023-02-09 15:06:02,902  INFO: [FullDims Engine Info]: layers num: 2 
2023-02-09 15:06:02,902  0   INPUT  kFLOAT images          3x224x224       min: 1x3x224x224     opt: 8x3x224x224     Max: 8x3x224x224      
2023-02-09 15:06:02,902  1   OUTPUT kFLOAT output          178             min: 0               opt: 0               Max: 0                
2023-02-09 15:06:02,902  WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors. 
2023-02-09 15:06:02,903  INFO: [FullDims Engine Info]: layers num: 2 
2023-02-09 15:06:02,903  0   INPUT  kFLOAT images          3x24x94         min: 1x3x24x94       opt: 2x3x24x94       Max: 2x3x24x94        
2023-02-09 15:06:02,903  1   OUTPUT kFLOAT output          76x18           min: 0               opt: 0               Max: 0                
2023-02-09 15:06:02,903  gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-6.0/lib/libObjectTracker.so 
2023-02-09 15:06:02,903  Track NvMOT_Query success 
2023-02-09 15:06:02,904  gstnvtracker: Batch processing is ON 
2023-02-09 15:06:02,904  gstnvtracker: Past frame output is OFF 
2023-02-09 15:06:02,904  ----------------------------- 
2023-02-09 15:06:02,904  frame_rate:30 
2023-02-09 15:06:02,904  track_buffer:20 
2023-02-09 15:06:02,904  track_thresh:0.500000 
2023-02-09 15:06:02,905  high_thresh:0.600000 
2023-02-09 15:06:02,905  match_thresh:0.800000 
2023-02-09 15:06:02,905  ----------------------------- 
2023-02-09 15:06:02,905  0:00:23.560609430  7470   0x55647b2990 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1163> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5 
2023-02-09 15:06:03,033  0:00:23.692060776  7470   0x55647b2990 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1904> [UID = 1]: deserialized trt engine from :/home/lcfc/david/code/qf-ecu-jpack4.6/ds-app/ds-engine/vehicle.engine 
2023-02-09 15:06:03,034  0:00:23.692231760  7470   0x55647b2990 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2008> [UID = 1]: Use deserialized engine model: /home/lcfc/david/code/qf-ecu-jpack4.6/ds-app/ds-engine/vehicle.engine 
2023-02-09 15:06:03,055  0:00:23.713946415  7470   0x55647b2990 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/home/lcfc/david/code/qf-ecu-jpack4.6/ds-app/ds-cfg/pgie_yolo_cfg.txt sucessfully 
2023-02-09 15:06:03,062  WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors. 
2023-02-09 15:06:03,062  INFO: [Implicit Engine Info]: layers num: 2 
2023-02-09 15:06:03,063  0   INPUT  kFLOAT data            3x640x640        
2023-02-09 15:06:03,063  1   OUTPUT kFLOAT prob            7001x1x1         
2023-02-09 15:06:03,063  Runtime commands: 
2023-02-09 15:06:03,063         h: Print this help 
2023-02-09 15:06:03,063         q: Quit 
2023-02-09 15:06:03,063         p: Pause 
2023-02-09 15:06:03,063         r: Resume 
2023-02-09 15:06:03,065  2023-02-09 15:06:03: 
2023-02-09 15:06:03,065  **PERF:  FPS 0 (Avg)   FPS 1 (Avg)     FPS 2 (Avg)     FPS 3 (Avg)      
2023-02-09 15:06:03,065  **PERF:  0.00 (0.00)   0.00 (0.00)     0.00 (0.00)     0.00 (0.00)      
2023-02-09 15:06:03,066  ** INFO: <bus_callback:194>: Pipeline ready 
2023-02-09 15:06:04,169  NvMMLiteOpen : Block : BlockType = 261  
2023-02-09 15:06:04,171  NvMMLiteOpen : Block : BlockType = 261  
2023-02-09 15:06:04,172  NVMEDIA: Reading vendor.tegra.display-size : status: 6  
2023-02-09 15:06:04,173  NVMEDIA: Reading vendor.tegra.display-size : status: 6  
2023-02-09 15:06:04,174  NvMMLiteOpen : Block : BlockType = 261  
2023-02-09 15:06:04,175  NvMMLiteOpen : Block : BlockType = 261  
2023-02-09 15:06:04,175  NvMMLiteBlockCreate : Block : BlockType = 261  
2023-02-09 15:06:04,176  NVMEDIA: Reading vendor.tegra.display-size : status: 6  
2023-02-09 15:06:04,176  NVMEDIA: Reading vendor.tegra.display-size : status: 6  
2023-02-09 15:06:04,182  NvMMLiteBlockCreate : Block : BlockType = 261  
2023-02-09 15:06:04,182  NvMMLiteBlockCreate : Block : BlockType = 261  
2023-02-09 15:06:04,183  NvMMLiteBlockCreate : Block : BlockType = 261  
2023-02-09 15:06:04,292  NvMMLiteOpen : Block : BlockType = 4  
2023-02-09 15:06:04,293  ===== NVMEDIA: NVENC ===== 
2023-02-09 15:06:04,296  NvMMLiteOpen : Block : BlockType = 4  
2023-02-09 15:06:04,297  ===== NVMEDIA: NVENC ===== 
2023-02-09 15:06:04,298  NvMMLiteOpen : Block : BlockType = 4  
2023-02-09 15:06:04,298  ===== NVMEDIA: NVENC ===== 
2023-02-09 15:06:04,302  NvMMLiteOpen : Block : BlockType = 4  
2023-02-09 15:06:04,303  ===== NVMEDIA: NVENC ===== 
2023-02-09 15:06:04,304  NvMMLiteOpen : Block : BlockType = 4  
2023-02-09 15:06:04,304  ===== NVMEDIA: NVENC ===== 
2023-02-09 15:06:04,306  NvMMLiteBlockCreate : Block : BlockType = 4  
2023-02-09 15:06:04,306  NvMMLiteBlockCreate : Block : BlockType = 4  
2023-02-09 15:06:04,308  NvMMLiteBlockCreate : Block : BlockType = 4  
2023-02-09 15:06:04,309  NvMMLiteBlockCreate : Block : BlockType = 4  
2023-02-09 15:06:04,310  NvMMLiteBlockCreate : Block : BlockType = 4  
2023-02-09 15:06:04,917  Opening in BLOCKING MODE  
2023-02-09 15:06:04,918  Opening in BLOCKING MODE  
2023-02-09 15:06:04,918  Opening in BLOCKING MODE  
2023-02-09 15:06:04,918  Opening in BLOCKING MODE  
2023-02-09 15:06:04,919  track_thresh:0.500000  high_thresh:0.600000    match_thresh:0.800000 
2023-02-09 15:06:04,919  frame_rate:30  track_buffer:20 
2023-02-09 15:06:04,919  unique_id:7    batch->frames.size ():4 
2023-02-09 15:06:04,926  ERROR: [TRT]: [shapeMachine.cpp::execute::565] Error Code 7: Internal Error (IShuffleLayer Flatten_47: reshaping failed for tensor: onnx::Flatten_189 
2023-02-09 15:06:04,926  reshape would change volume 
2023-02-09 15:06:04,926  Instruction: RESHAPE{4 512 1 1} {8 512} 
2023-02-09 15:06:04,926  ) 
2023-02-09 15:06:04,926  ERROR: [TRT]: [executionContext.cpp::enqueueInternal::360] Error Code 2: Internal Error (Could not resolve slots: ) 
2023-02-09 15:06:04,927  ERROR: Failed to enqueue trt inference batch 
2023-02-09 15:06:04,927  ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR 
2023-02-09 15:06:04,927  0:00:25.585418593  7470   0x7ed0006b70 WARN                 nvinfer gstnvinfer.cpp:1324:gst_nvinfer_input_queue_loop:<secondary_gie_1> error: Failed to queue input batch for inferencing 
2023-02-09 15:06:04,927  ERROR from secondary_gie_1: Failed to queue input batch for inferencing 
2023-02-09 15:06:04,928  Debug info: gstnvinfer.cpp(1324): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline/GstBin:secondary_gie_bin/GstNvInfer:secondary_gie_1 
2023-02-09 15:06:04,940  track_thresh:0.500000  high_thresh:0.600000    match_thresh:0.800000 
2023-02-09 15:06:04,940  frame_rate:30  track_buffer:20 
2023-02-09 15:06:04,954  Quitting 
2023-02-09 15:06:04,969  unique_id:7    batch->frames.size ():8 
2023-02-09 15:06:04,973  unique_id:7    batch->frames.size ():3 
2023-02-09 15:06:04,980 send_nats: name_str: dataset_stream_source_111  topic_str: cloud.ai_algorithm.deepstream.object_detection.111.all 
2023-02-09 15:06:04,983  ERROR: [TRT]: [shapeMachine.cpp::execute::565] Error Code 7: Internal Error (IShuffleLayer Flatten_47: reshaping failed for tensor: onnx::Flatten_189 
2023-02-09 15:06:04,985  reshape would change volume 
2023-02-09 15:06:04,985  Instruction: RESHAPE{3 512 1 1} {8 512} 
2023-02-09 15:06:04,985  ) 
2023-02-09 15:06:04,985  ERROR: [TRT]: [executionContext.cpp::enqueueInternal::360] Error Code 2: Internal Error (Could not resolve slots: ) 
2023-02-09 15:06:04,986  ERROR: Failed to enqueue trt inference batch 
2023-02-09 15:06:04,986  ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR 
2023-02-09 15:06:04,986  0:00:25.641850897  7470   0x7ed0006b70 WARN                 nvinfer gstnvinfer.cpp:1324:gst_nvinfer_input_queue_loop:<secondary_gie_1> error: Failed to queue input batch for inferencing 
2023-02-09 15:06:05,047  track_thresh:0.500000  high_thresh:0.600000    match_thresh:0.800000 
2023-02-09 15:06:05,048  frame_rate:30  track_buffer:20 
2023-02-09 15:06:05,057  ERROR from secondary_gie_1: Failed to queue input batch for inferencing 
2023-02-09 15:06:05,058  Debug info: gstnvinfer.cpp(1324): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline/GstBin:secondary_gie_bin/GstNvInfer:secondary_gie_1 
2023-02-09 15:06:05,076  unique_id:7    batch->frames.size ():8 
2023-02-09 15:06:05,108  (deepstream-app:7470): GLib-CRITICAL **: 15:06:05.107: g_thread_join: assertion 'thread' failed 
2023-02-09 15:06:05,965  App run failed 

Does the model support dynamic batch? Could you share the model?

This model does not support dynamic batch; it only supports batch-size=8.
I have to get my manager's opinion on whether the model can be shared.

I have a question: can DeepStream convert a fixed batch-size ONNX file to an engine file and run it successfully?
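
For context: the RESHAPE{6 512 1 1} {8 512} and RESHAPE{4 512 1 1} {8 512} instructions in the logs suggest the Flatten node's target shape has batch 8 baked into the graph, so any enqueued batch smaller than 8 fails; that would also explain why trtexec inference succeeds, since it always feeds exactly the built batch size. Rewriting the input's batch dimension to a symbolic one sometimes helps, but only if no node hard-codes the batch. A sketch with the onnx package (file names are illustrative):

import onnx

model = onnx.load("typenet_bs8.onnx")
for tensor in list(model.graph.input) + list(model.graph.output):
    # Replace the fixed batch dim with a symbolic dimension named 'batch'
    tensor.type.tensor_type.shape.dim[0].dim_param = "batch"
onnx.save(model, "typenet_dynamic.onnx")
# Caveat: this does NOT fix nodes whose batch size is baked into constants
# (the {8 512} reshape target above looks like such a case); then the model
# must be re-exported with dynamic axes instead.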

I have reached two conclusions.
Num 1: I can generate an engine file from the fixed batch-size ONNX model using the trtexec command and run it successfully in DeepStream; the trtexec command is as follows.

/usr/src/tensorrt/bin/trtexec --onnx=typenet_bs8.onnx --saveEngine=test.engine --explicitBatch --fp16 --workspace=1024 --buildOnly --threads=12

DeepStream can also generate an engine file from this same fixed batch-size ONNX model, but there is an error when running. The log is the same as the 15:06 run posted above; the key lines are as follows.

2023-02-09 15:06:04,926  ERROR: [TRT]: [shapeMachine.cpp::execute::565] Error Code 7: Internal Error (IShuffleLayer Flatten_47: reshaping failed for tensor: onnx::Flatten_189 
2023-02-09 15:06:04,926  reshape would change volume 
2023-02-09 15:06:04,926  Instruction: RESHAPE{4 512 1 1} {8 512} 
2023-02-09 15:06:04,926  ) 
2023-02-09 15:06:05,965  App run failed 

Num 2: DeepStream can generate an engine file from a dynamic batch-size ONNX model and run it successfully.
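
For reference, a dynamic-batch ONNX is normally produced at export time. A minimal sketch of a PyTorch export with dynamic axes (the stand-in network below is purely illustrative; the real typenet is not shown in this thread):

import torch
import torch.nn as nn

# Tiny stand-in classifier so the snippet runs; the real model differs.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(8, 178),
).eval()

dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model, dummy, "typenet_dynamic.onnx",
    input_names=["images"], output_names=["output"],
    # Mark dim 0 symbolic so TensorRT builds a dynamic explicit-batch engine
    dynamic_axes={"images": {0: "batch"}, "output": {0: "batch"}},
    opset_version=11,
)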

Both DeepStream and trtexec call TensorRT to convert the model, and the DeepStream SDK's nvinfer plugin is open source; to narrow down this issue, you can add logs and compare the DeepStream log with trtexec's.
Here is the method to enable TRT logging in DeepStream: DeepStream SDK FAQ - #33 by mchi

OK, I have done it and the logs are as follows.
trtexec.log (616.1 KB)
deepstream.log (629.6 KB)

Thanks for sharing. Please use trtexec to do inference, then provide the terminal log. Here is the command:

./trtexec --loadEngine=xxx.engine --fp16

OK, the log is as follows.
trtexec_load.txt (10.9 KB)

trtexec_load.txt is the log for the engine file generated by trtexec. The command is as follows.

/usr/src/tensorrt/bin/trtexec --onnx=typenet_bs8.onnx --saveEngine=test.engine --explicitBatch --fp16 --workspace=1024 --buildOnly --threads=12

When I load the engine file generated automatically by DeepStream, there is an error.

The log is as follows.
trtexec_load_deepstream.txt (5.1 KB)