DeepStream implementation

Platform configuration as below:

• Hardware Platform (Jetson / GPU)
Jetson NX
• DeepStream Version
6.0.1
• JetPack Version (valid for Jetson only)
4.6.2
• TensorRT Version
8.2.1.8

sudo /usr/src/tensorrt/bin/trtexec --refit --sparsity=enable --fp16 --best --saveEngine=yolov5s_custom_fp16.engine --onnx=best.onnx --device=0 --plugins
I already have an engine file. How can I put it into the DeepStream pipeline?
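For reference, gst-nvinfer can pick up a pre-built engine through the model-engine-file key in its config file, so no rebuild is needed at pipeline start. A minimal sketch of the relevant [property] entries, assuming the file names used in this thread (the label file and class count are placeholders):

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-engine-file=yolov5s_custom_fp16.engine
labelfile-path=labels.txt
batch-size=1
network-mode=2
num-detected-classes=1

Here network-mode=2 selects FP16 to match the engine; if the engine cannot be deserialized (for example, after a TensorRT version change), nvinfer falls back to rebuilding from the model file if one is configured.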

(py3) shisun@nx:~/DeepStream-Yolo$ deepstream-app -c deepstream_app_config.txt

Using winsys: x11
0:00:05.416749790 27837 0x2f8a8670 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/home/shisun/DeepStream-Yolo/yolov5s_fp16.engine
INFO: [Implicit Engine Info]: layers num: 5
0 INPUT kHALF images 3x640x640
1 OUTPUT kHALF 490 3x80x80x6
2 OUTPUT kHALF 558 3x40x40x6
3 OUTPUT kHALF 626 3x20x20x6
4 OUTPUT kFLOAT output 25200x6

0:00:05.444020002 27837 0x2f8a8670 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /home/shisun/DeepStream-Yolo/yolov5s_fp16.engine
0:00:05.462007896 27837 0x2f8a8670 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/home/shisun/DeepStream-Yolo/config_infer_primary_yoloV5.txt sucessfully

Runtime commands:
h: Print this help
q: Quit

p: Pause
r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.

**PERF: FPS 0 (Avg)
**PERF: 0.00 (0.00)
** INFO: <bus_callback:194>: Pipeline ready

Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 260
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 260
** INFO: <bus_callback:180>: Pipeline running

0:00:05.946443823 27837 0x2f19fe80 ERROR nvinfer gstnvinfer.cpp:1150:get_converted_buffer:<primary_gie> cudaMemset2DAsync failed with error cudaErrorIllegalAddress while converting buffer
0:00:05.947321591 27837 0x2f19fe80 WARN nvinfer gstnvinfer.cpp:1472:gst_nvinfer_process_full_frame:<primary_gie> error: Buffer conversion failed
ERROR from primary_gie: Buffer conversion failed
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1472): gst_nvinfer_process_full_frame (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie
Quitting
ERROR: Failed to add cudaStream callback for returning input buffers, cuda err_no:77, err_str:cudaErrorIllegalAddress
ERROR: Preprocessor transform input data failed., nvinfer error:NVDSINFER_CUDA_ERROR
0:00:05.981711511 27837 0x2f280450 WARN nvinfer gstnvinfer.cpp:1324:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
Segmentation fault (core dumped)

• JetPack Version (valid for Jetson only)
4.6.2
• TensorRT Version
8.2.1.8

JetPack 4.6.2 includes TensorRT 8.0.1; please use the compatible version.

Thank you for the reply.
Do you mean I should uninstall TensorRT 8.2.1.8 and install 8.0.1?
Please give me a hint. Thanks.

Sorry for the above.
JetPack 4.6 includes TensorRT 8.0.1.
JetPack 4.6.1 includes TensorRT 8.2.1.

Please check which JetPack version you are using, and use the compatible TensorRT version accordingly.
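One quick way to check, assuming a standard JetPack install (package names can vary slightly between releases):

cat /etc/nv_tegra_release        # L4T release the board was flashed with
apt-cache show nvidia-jetpack    # installed JetPack meta-package version
dpkg -l | grep nvinfer           # installed TensorRT (libnvinfer) packages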

Thanks again, I am trying.

I reflashed the Jetson NX to JetPack 4.6, but it still does not work. The error is different now, though:
deepstream-app -c deepstream_app_config.txt

Using winsys: x11
0:00:05.465532190 21034 0xf7b1f50 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.0/sources/DeepStream-Yolo/./best_exp.engine
INFO: [Implicit Engine Info]: layers num: 5
0 INPUT kHALF images 3x640x640
1 OUTPUT kHALF 490 3x80x80x6
2 OUTPUT kHALF 558 3x40x40x6
3 OUTPUT kHALF 626 3x20x20x6
4 OUTPUT kFLOAT output 25200x6

0:00:05.465893821 21034 0xf7b1f50 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.0/sources/DeepStream-Yolo/./best_exp.engine
0:00:05.475766009 21034 0xf7b1f50 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.0/sources/DeepStream-Yolo/config_infer_primary_yoloV5.txt sucessfully

Runtime commands:
h: Print this help
q: Quit

p: Pause
r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.

**PERF: FPS 0 (Avg)
**PERF: 0.00 (0.00)
** INFO: <bus_callback:194>: Pipeline ready

Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 260
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 260
** INFO: <bus_callback:180>: Pipeline running

WARNING: Num classes mismatch. Configured: 1, detected by network: 0
deepstream-app: nvdsparsebbox_Yolo.cpp:137: bool NvDsInferParseCustomYolo(const std::vector<NvDsInferLayerInfo>&, const NvDsInferNetworkInfo&, const NvDsInferParseDetectionParams&, std::vector<NvDsInferParseObjectInfo>&, const uint&, const uint&): Assertion `layer.inferDims.numDims == 3' failed.
Aborted (core dumped)

Please customize the YOLOv5 postprocessing (bbox parser) function yourself.

The YOLOv4 postprocessing customization can serve as a reference: https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/blob/master/post_processor/nvdsinfer_custombboxparser_tao.cpp
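The assertion most likely fires because the stock parser expects the three-dimensional per-scale heads, while this engine's final "output" layer is two-dimensional (25200x6). Below is a minimal sketch of a custom parser for that flattened layout, assuming each of the 25200 rows is (cx, cy, w, h, objectness, class-0 score) with a single class; the function name NvDsInferParseCustomYoloV5Flat is hypothetical:

#include <algorithm>
#include <vector>
#include "nvdsinfer_custom_impl.h"

// Hypothetical parser for a flattened YOLOv5 output of shape [25200, 6]:
// each row is (cx, cy, w, h, objectness, class-0 score); single class assumed.
extern "C" bool NvDsInferParseCustomYoloV5Flat(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
    // Pick the 2-D output layer instead of asserting numDims == 3.
    const NvDsInferLayerInfo *layer = nullptr;
    for (auto const &l : outputLayersInfo)
        if (l.inferDims.numDims == 2) { layer = &l; break; }
    if (!layer) return false;

    const unsigned int numBoxes = layer->inferDims.d[0]; // 25200
    const unsigned int rowSize = layer->inferDims.d[1];  // 6
    const float *data = static_cast<const float *>(layer->buffer);
    const float threshold = detectionParams.perClassPreclusterThreshold[0];

    for (unsigned int i = 0; i < numBoxes; ++i) {
        const float *row = data + i * rowSize;
        const float conf = row[4] * row[5]; // objectness * class score
        if (conf < threshold) continue;

        NvDsInferObjectDetectionInfo obj;
        obj.classId = 0;
        obj.detectionConfidence = conf;
        // Convert center-based (cx, cy, w, h) to left/top/width/height,
        // clamped to the network input resolution.
        obj.left = std::max(row[0] - row[2] / 2.0f, 0.0f);
        obj.top = std::max(row[1] - row[3] / 2.0f, 0.0f);
        obj.width = std::min(row[2], networkInfo.width - obj.left);
        obj.height = std::min(row[3], networkInfo.height - obj.top);
        objectList.push_back(obj);
    }
    return true;
}

CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomYoloV5Flat);

Build it into a shared library and point config_infer_primary_yoloV5.txt at it with parse-bbox-func-name=NvDsInferParseCustomYoloV5Flat and custom-lib-path set to the resulting .so.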

Thank you.
