I am using a Jetson AGX Orin with DeepStream 7.0.
I first converted my model to ONNX with the command
python export.py --weights best.pt --img 640 --dynamic --simplify --device cpu --include onnx
and then to a TensorRT engine with
/usr/src/tensorrt/bin/trtexec --verbose --onnx=best.onnx --saveEngine=best.engine
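For reference, a quick way to double-check the exported graph's input/output names and shapes is a small script with the onnx Python package (a sketch, assuming onnx is installed and best.onnx is the file produced above):

import onnx

# Load the exported model and print the names and (possibly dynamic) dims
# of its graph inputs and outputs.
model = onnx.load("best.onnx")
for inp in model.graph.input:
    dims = [d.dim_param or d.dim_value for d in inp.type.tensor_type.shape.dim]
    print("input:", inp.name, dims)
for out in model.graph.output:
    dims = [d.dim_param or d.dim_value for d in out.type.tensor_type.shape.dim]
    print("output:", out.name, dims)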
This is my ONNX model.
I'm using the deepstream-rtsp-in-rtsp-out example (apps/deepstream-rtsp-in-rtsp-out in the NVIDIA-AI-IOT/deepstream_python_apps repo on GitHub)
and I replaced the config file with the following:
gpu-id=0
model-engine-file=best.engine
labelfile-path=labels.txt
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=3
gie-unique-id=1
uff-input-blob-name=input1
output-blob-names=output0
infer-dims=3;640;640
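To compare the names and dims in this config against what the serialized engine actually exposes, something like the following can list the engine's I/O tensors (a sketch, assuming the TensorRT Python bindings that ship with JetPack/DeepStream 7.0, i.e. TensorRT 8.5+ with the tensor-name API):

import tensorrt as trt

# Deserialize best.engine and print each I/O tensor's name, mode
# (INPUT/OUTPUT) and shape, so they can be checked against the
# blob names and infer-dims used in the nvinfer config.
logger = trt.Logger(trt.Logger.WARNING)
with open("best.engine", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

for i in range(engine.num_io_tensors):
    name = engine.get_tensor_name(i)
    print(name, engine.get_tensor_mode(name), engine.get_tensor_shape(name))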
When I run the example I get the following error:
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT images 3x640x640
1 OUTPUT kFLOAT output0 25200x8
0:00:05.265346887 12037 0xaaaad5226290 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2198> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.0/sources/deepstream_python_apps/apps/deepstream-rtsp-in-rtsp-out/best.engine
0:00:05.275756026 12037 0xaaaad5226290 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Resetting source -1, attempts: 1
Warning: gst-stream-error-quark: No data from source since last 5 sec. Trying reconnection (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvdsbins/gstdsnvurisrcbin.cpp(1412): watch_source_status (): /GstPipeline:pipeline0/GstBin:source-bin-00/GstDsNvUriSrcBin:uri-decode-bin
Decodebin child added: depay
Decodebin child added: parser
Decodebin child added: tee_rtsp_pre_decode
Decodebin child added: dec_que
Decodebin child added: tee_rtsp_post_decode
Decodebin child added: decodebin
Decodebin child added: queue
Decodebin child added: nvvidconv
Decodebin child added: src_cap_filter_nvvidconv
In cb_newpad
gstname= video/x-raw
features= <Gst.CapsFeatures object at 0xffff90efc220 (GstCapsFeatures at 0xfffed8078500)>
Decodebin child added: h264parse0
Decodebin child added: capsfilter0
sys:1: Warning: g_object_set_is_valid_property: object class 'nvv4l2decoder' has no property named 'low-latency-mode'
sys:1: Warning: g_object_set_is_valid_property: object class 'nvv4l2decoder' has no property named 'extract-sei-type5-data'
sys:1: Warning: g_object_set_is_valid_property: object class 'nvv4l2decoder' has no property named 'sei-uuid'
Decodebin child added: nvv4l2decoder0
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
0:00:07.124151203 12037 0xaaaad529dcc0 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::parseBoundingBox() <nvdsinfer_context_impl_output_parsing.cpp:60> [UID = 1]: Could not find output coverage layer for parsing objects
0:00:07.124238769 12037 0xaaaad529dcc0 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:736> [UID = 1]: Failed to parse bboxes
Segmentation fault (core dumped)