• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 6.0.1
• TensorRT Version: 8.2.1-1+cuda10.2
Hi, I am trying to use a custom yolov7-tiny model as the primary detector in the deepstream-app. After training, I used this command (in the yolov7 directory) to export the trained weights to ONNX:
sudo python3 export.py --weights ./best.pt --grid --end2end --simplify --topk-all 100 --iou-thres 0.65 --conf-thres 0.35 --img-size 640 640 --max-wh 640 --dynamic-batch
Here is the generated ONNX file: link
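For context on what I expect this export to produce: with --end2end and --max-wh, NMS should be baked into the graph, so the model emits a single flat [N, 7] "output" tensor. As far as I understand (this column layout is my assumption, not something I have confirmed against the export script), each row is [batch_index, x0, y0, x1, y1, class_id, score], with box coordinates in 640x640 input pixels. A minimal sketch of the host-side parsing I had in mind:

```python
# Hypothetical post-processing sketch for the --end2end --max-wh export.
# ASSUMPTION: each row of the flat [N, 7] "output" tensor is
#   [batch_index, x0, y0, x1, y1, class_id, score]
# with coordinates in input-image (640x640) pixels.

def split_detections(rows, batch_size):
    """Group flat end2end output rows into per-image detection lists."""
    per_image = [[] for _ in range(batch_size)]
    for row in rows:
        batch_idx = int(row[0])
        if 0 <= batch_idx < batch_size:
            per_image[batch_idx].append({
                "box": tuple(row[1:5]),   # x0, y0, x1, y1
                "class_id": int(row[5]),
                "score": row[6],
            })
    return per_image

# Dummy rows standing in for real model output.
rows = [
    [0, 10.0, 20.0, 110.0, 220.0, 2, 0.91],
    [1, 30.0, 40.0, 130.0, 240.0, 0, 0.73],
]
batches = split_detections(rows, batch_size=8)
print(len(batches[0]), len(batches[1]))  # 1 1
```

This is only how I pictured consuming the output; the actual DeepStream-side parsing would happen in a custom bbox parser for nvinfer.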
Now, when I use this ONNX file in the deepstream-app, it generates the engine file, but while running the pipeline it shows errors like this:
Unknown or legacy key specified 'is-classifier' for group [property]
Unknown or legacy key specified 'disable-output-host-copy' for group [property]
Unknown or legacy key specified 'crop-objects-to-roi-boundary' for group [property]
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is ON
[NvMultiObjectTracker] Initialized
0:00:06.955669204 18637 0x556595f040 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.0/samples/models/yolov7_primary_detector/yolov7-tiny-new.onnx_b8_gpu0_fp16.engine
INFO: [FullDims Engine Info]: layers num: 2
0 INPUT kFLOAT images 3x640x640 min: 1x3x640x640 opt: 8x3x640x640 Max: 8x3x640x640
1 OUTPUT kFLOAT output 7 min: 0 opt: 0 Max: 0
0:00:06.957044334 18637 0x556595f040 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.0/samples/models/yolov7_primary_detector/yolov7-tiny-new.onnx_b8_gpu0_fp16.engine
0:00:07.017783631 18637 0x556595f040 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream/samples/models/yolov7_primary_detector/config_infer_primary_yoloV7.txt sucessfully
Runtime commands:
h: Print this help
q: Quit
p: Pause
r: Resume
NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.
**PERF: FPS 0 (Avg)
**PERF: 0.00 (0.00)
** INFO: <bus_callback:194>: Pipeline ready
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
** INFO: <bus_callback:180>: Pipeline running
ERROR: [TRT]: 1: [runner.cpp::execute::416] Error Code 1: Cuda Runtime (invalid argument)
ERROR: Failed to enqueue trt inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:09.273044487 18637 0x55654f1320 WARN nvinfer gstnvinfer.cpp:1324:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
ERROR from primary_gie: Failed to queue input batch for inferencing
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1324): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie
Quitting
ERROR: [TRT]: 1: [runner.cpp::execute::416] Error Code 1: Cuda Runtime (invalid argument)
ERROR: Failed to enqueue trt inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:09.330013194 18637 0x55654f1320 WARN nvinfer gstnvinfer.cpp:1324:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
ERROR: [TRT]: 1: [runner.cpp::execute::416] Error Code 1: Cuda Runtime (invalid argument)
ERROR: Failed to enqueue trt inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:09.337557864 18637 0x55654f1320 WARN nvinfer gstnvinfer.cpp:1324:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
ERROR: [TRT]: 1: [runner.cpp::execute::416] Error Code 1: Cuda Runtime (invalid argument)
ERROR: Failed to enqueue trt inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:09.345006335 18637 0x55654f1320 WARN nvinfer gstnvinfer.cpp:1324:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
ERROR from primary_gie: Failed to queue input batch for inferencing
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1324): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie
ERROR from primary_gie: Failed to queue input batch for inferencing
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1324): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie
ERROR from primary_gie: Failed to queue input batch for inferencing
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1324): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie
[NvMultiObjectTracker] De-initialized
ERROR: [TRT]: 1: [runner.cpp::execute::416] Error Code 1: Cuda Runtime (invalid argument)
ERROR: Failed to enqueue trt inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:14.583387358 18637 0x55654f1320 WARN nvinfer gstnvinfer.cpp:1324:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
App run failed
Any idea how to fix this?