Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson)
• DeepStream Version - 6.0
• JetPack Version (valid for Jetson only) - 4.6
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs) - questions
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file content, the command line used, and other details for reproducing.)
There is an error when I set batch-size greater than 1 for an ONNX model (the Secondary_VehicleTypes classifier, batch-size=8). The logs are as follows:
2023-02-08 16:48:10,648 ** INFO: <create_rtmpsink_bin:904>: cap_str_buf is video/x-raw(memory:NVMM), format=I420, width=1920, height=1080
2023-02-08 16:48:10,667 ** INFO: <create_encode_file_bin:354>: cap_str_buf is video/x-raw(memory:NVMM), format=I420, width=1920, height=1080
2023-02-08 16:48:10,908 Opening in BLOCKING MODE
2023-02-08 16:48:10,908 Opening in BLOCKING MODE
2023-02-08 16:48:10,908 Table created Successfully
2023-02-08 16:48:13,406 0:00:02.855652927 21294 0x7f24002390 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 7]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1904> [UID = 7]: deserialized trt engine from :/home/lcfc/david/code/qf-ecu-jpack4.6/deepstream-6.0/samples/models/Secondary_VehicleTypes/typenet_bs8.onnx_b8_gpu0_fp16.engine
2023-02-08 16:48:13,406 0:00:02.855847369 21294 0x7f24002390 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 7]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2008> [UID = 7]: Use deserialized engine model: /home/lcfc/david/code/qf-ecu-jpack4.6/deepstream-6.0/samples/models/Secondary_VehicleTypes/typenet_bs8.onnx_b8_gpu0_fp16.engine
2023-02-08 16:48:13,411 0:00:02.861368350 21294 0x7f24002390 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary_gie_1> [UID 7]: Load new model:/home/lcfc/david/code/qf-ecu-jpack4.6/ds-app/ds-cfg/sgie4_vehicletypes_onnx_cfg.txt sucessfully
2023-02-08 16:48:13,433 0:00:02.882998457 21294 0x7f24002390 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1904> [UID = 3]: deserialized trt engine from :/home/lcfc/david/code/qf-ecu-jpack4.6/deepstream-6.0/samples/models/Secondary_PlateRecognition/lprnet.onnx_b2_gpu0_fp16.engine
2023-02-08 16:48:13,434 0:00:02.883180674 21294 0x7f24002390 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2008> [UID = 3]: Use deserialized engine model: /home/lcfc/david/code/qf-ecu-jpack4.6/deepstream-6.0/samples/models/Secondary_PlateRecognition/lprnet.onnx_b2_gpu0_fp16.engine
2023-02-08 16:48:13,435 0:00:02.885716097 21294 0x7f24002390 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary_gie_0> [UID 3]: Load new model:/home/lcfc/david/code/qf-ecu-jpack4.6/ds-app/ds-cfg/sgie1_lpr_onnx_cfg.txt sucessfully
2023-02-08 16:48:13,457 INFO: [FullDims Engine Info]: layers num: 2
2023-02-08 16:48:13,457 0 INPUT kFLOAT images 3x224x224 min: 1x3x224x224 opt: 8x3x224x224 Max: 8x3x224x224
2023-02-08 16:48:13,457 1 OUTPUT kFLOAT output 178 min: 0 opt: 0 Max: 0
2023-02-08 16:48:13,457 WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
2023-02-08 16:48:13,458 INFO: [FullDims Engine Info]: layers num: 2
2023-02-08 16:48:13,458 0 INPUT kFLOAT images 3x24x94 min: 1x3x24x94 opt: 2x3x24x94 Max: 2x3x24x94
2023-02-08 16:48:13,458 1 OUTPUT kFLOAT output 76x18 min: 0 opt: 0 Max: 0
2023-02-08 16:48:13,458 gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-6.0/lib/libObjectTracker.so
2023-02-08 16:48:13,459 Track NvMOT_Query success
2023-02-08 16:48:13,459 gstnvtracker: Batch processing is ON
2023-02-08 16:48:13,459 gstnvtracker: Past frame output is OFF
2023-02-08 16:48:13,460 0:00:02.906652986 21294 0x7f24002390 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1163> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
2023-02-08 16:48:13,552 0:00:03.002008225 21294 0x7f24002390 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1904> [UID = 1]: deserialized trt engine from :/home/lcfc/david/code/qf-ecu-jpack4.6/ds-app/ds-engine/vehicle.engine
2023-02-08 16:48:13,552 0:00:03.002202859 21294 0x7f24002390 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2008> [UID = 1]: Use deserialized engine model: /home/lcfc/david/code/qf-ecu-jpack4.6/ds-app/ds-engine/vehicle.engine
2023-02-08 16:48:13,555 0:00:03.005591188 21294 0x7f24002390 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/home/lcfc/david/code/qf-ecu-jpack4.6/ds-app/ds-cfg/pgie_yolo_cfg.txt sucessfully
2023-02-08 16:48:13,560 WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
2023-02-08 16:48:13,561 INFO: [Implicit Engine Info]: layers num: 2
2023-02-08 16:48:13,561 0 INPUT kFLOAT data 3x640x640
2023-02-08 16:48:13,561 1 OUTPUT kFLOAT prob 7001x1x1
2023-02-08 16:48:13,561 Runtime commands:
2023-02-08 16:48:13,561 h: Print this help
2023-02-08 16:48:13,561 q: Quit
2023-02-08 16:48:13,562 p: Pause
2023-02-08 16:48:13,562 r: Resume
2023-02-08 16:48:13,562 ** INFO: <bus_callback:194>: Pipeline ready
2023-02-08 16:48:14,702 NvMMLiteOpen : Block : BlockType = 261
2023-02-08 16:48:14,703 NVMEDIA: Reading vendor.tegra.display-size : status: 6
2023-02-08 16:48:14,705 NvMMLiteBlockCreate : Block : BlockType = 261
2023-02-08 16:48:14,821 NvMMLiteOpen : Block : BlockType = 4
2023-02-08 16:48:14,821 NvMMLiteOpen : Block : BlockType = 4
2023-02-08 16:48:14,821 ===== NVMEDIA: NVENC =====
2023-02-08 16:48:14,821 ===== NVMEDIA: NVENC =====
2023-02-08 16:48:14,822 NvMMLiteBlockCreate : Block : BlockType = 4
2023-02-08 16:48:14,823 NvMMLiteBlockCreate : Block : BlockType = 4
2023-02-08 16:48:15,771 Opening in BLOCKING MODE
2023-02-08 16:48:15,771 2023-02-08 16:48:15:
2023-02-08 16:48:15,771 **PERF: FPS 0 (Avg)
2023-02-08 16:48:15,771 **PERF: 0.00 (0.00)
2023-02-08 16:48:16,183 track_thresh:0.500000 high_thresh:0.600000 match_thresh:0.800000
2023-02-08 16:48:16,183 frame_rate:30 track_buffer:20
2023-02-08 16:48:16,188 ERROR: [TRT]: 7: [shapeMachine.cpp::execute::565] Error Code 7: Internal Error (IShuffleLayer Flatten_47: reshaping failed for tensor: onnx::Flatten_189
2023-02-08 16:48:16,188 reshape would change volume
2023-02-08 16:48:16,189 Instruction: RESHAPE{6 512 1 1} {8 512}
2023-02-08 16:48:16,189 )
2023-02-08 16:48:16,189 ERROR: [TRT]: 2: [executionContext.cpp::enqueueInternal::360] Error Code 2: Internal Error (Could not resolve slots: )
2023-02-08 16:48:16,189 ERROR: Failed to enqueue trt inference batch
2023-02-08 16:48:16,189 ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
2023-02-08 16:48:16,190 0:00:05.637736227 21294 0x55684896d0 WARN nvinfer gstnvinfer.cpp:1324:gst_nvinfer_input_queue_loop:<secondary_gie_1> error: Failed to queue input batch for inferencing
2023-02-08 16:48:16,190 ERROR from secondary_gie_1: Failed to queue input batch for inferencing
2023-02-08 16:48:16,190 Debug info: gstnvinfer.cpp(1324): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline/GstBin:secondary_gie_bin/GstNvInfer:secondary_gie_1
2023-02-08 16:48:16,210 Quitting
2023-02-08 16:48:16,245 send_nats: name_str: dataset_stream_source_111 topic_str: cloud.ai_algorithm.deepstream.object_detection.111.all
2023-02-08 16:48:16,318 (deepstream-app:21294): GLib-CRITICAL **: 16:48:16.317: g_thread_join: assertion 'thread' failed
2023-02-08 16:48:17,217 App run failed
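From the logs, the failure looks like a hard-coded batch dimension inside the ONNX graph rather than a DeepStream problem: the instruction RESHAPE{6 512 1 1} {8 512} tries to reshape 6x512x1x1 = 3072 elements into 8x512 = 4096, so when secondary_gie_1 receives a batch of 6 objects the Flatten_47 reshape fails with "reshape would change volume". My guess is the model was exported with a fixed batch of 8, so the Flatten was traced into a constant-shape Reshape. Below is a minimal re-export sketch with a dynamic batch axis; torchvision's resnet18 stands in for the real classifier (which is not shown in this post), and the file names are illustrative.

import torch
import torchvision

# Stand-in for the real vehicle-type classifier; 178 matches the
# engine's reported output dimension (OUTPUT kFLOAT output 178).
model = torchvision.models.resnet18(num_classes=178)
model.eval()

# Dummy input matching infer-dims=3;224;224. Batch size 1 is fine
# here because the batch axis is declared dynamic below.
dummy = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy,
    "typenet_dynamic.onnx",
    input_names=["images"],
    output_names=["output"],
    dynamic_axes={"images": {0: "batch"}, "output": {0: "batch"}},
    opset_version=11,
)

After re-exporting, the cached typenet_bs8.onnx_b8_gpu0_fp16.engine would need to be deleted (or model-engine-file updated) so nvinfer rebuilds the engine from the new ONNX file.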
My configuration file for the failing SGIE (sgie4_vehicletypes_onnx_cfg.txt) is as follows:
[property]
gpu-id=0
net-scale-factor=0.003921568627451
#offsets=127.5;127.5;127.5
model-color-format=1
onnx-file=/opt/nvidia/deepstream/deepstream-6.0/samples/models/Secondary_VehicleTypes/typenet_bs8.onnx
model-engine-file=/opt/nvidia/deepstream/deepstream-6.0/samples/models/Secondary_VehicleTypes/typenet_bs8.onnx_b8_gpu0_fp16.engine
batch-size=8
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
#num-detected-classes=300
infer-dims=3;224;224
output-blob-names=output
network-type=1
parse-classifier-func-name=NvDsInferParseCustomVehicleTypes
custom-lib-path=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infer_custom_parser_vehicle_types.so
classifier-async-mode=1
# GPU:1 VIC:2(Jetson only)
#scaling-compute-hw=2
#enable-dla=1
#use-dla-core=1
secondary-reinfer-interval=10
maintain-aspect-ratio=0
#force-implicit-batch-dim=1
process-mode=2
classifier-threshold=0.6
input-object-min-width=64
input-object-min-height=64
symmetric-padding=1
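Note that the engine log above (min: 1x3x224x224, opt/max: 8x3x224x224) shows the network input itself is already dynamic, so the fixed shape is probably buried in a Reshape initializer inside the graph. A quick inspection sketch using the onnx Python package (the model path is taken from the config above) can confirm this:

import onnx
from onnx import numpy_helper

m = onnx.load("/opt/nvidia/deepstream/deepstream-6.0/samples/models/Secondary_VehicleTypes/typenet_bs8.onnx")

# Print each graph input's dims: a dim_param string means dynamic,
# a dim_value integer means fixed.
for inp in m.graph.input:
    dims = [d.dim_param or d.dim_value for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)

# List Reshape nodes whose target shape is a constant initializer;
# a hard-coded leading 8 here would explain RESHAPE{6 512 1 1} {8 512}.
inits = {t.name: numpy_helper.to_array(t) for t in m.graph.initializer}
for node in m.graph.node:
    if node.op_type == "Reshape" and len(node.input) > 1 and node.input[1] in inits:
        print(node.name, "->", inits[node.input[1]])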
• Requirement details (This is for new requirements. Include the module name, i.e., which plugin or which sample application, and the function description.)