My setup is as below:
• Hardware Platform (Jetson / GPU)
Jetson NX
• DeepStream Version
6.0.1
• JetPack Version (valid for Jetson only)
4.6.2
• TensorRT Version
8.2.1.8
sudo /usr/src/tensorrt/bin/trtexec --refit --sparsity=enable --fp16 --best --saveEngine=yolov5s_custom_fp16.engine --onnx=best.onnx --device=0 --plugins
I already have an engine file. How can I use it in the DeepStream pipeline?
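For reference, a pre-built engine is normally referenced from the nvinfer config file via `model-engine-file`. A minimal sketch of the relevant section is below; the paths, class count, and the custom-parser entries are assumptions (the DeepStream-Yolo repo ships its own bounding-box parser library), so adjust them to the actual files:

```ini
# Sketch of config_infer_primary_yoloV5.txt — paths/values are assumptions
[property]
gpu-id=0
# Point nvinfer at the pre-built engine. If deserialization fails
# (wrong TensorRT version, device, etc.), nvinfer rebuilds the engine
# from the model files instead.
model-engine-file=/home/shisun/DeepStream-Yolo/yolov5s_custom_fp16.engine
labelfile-path=labels.txt
batch-size=1
# 0=FP32, 1=INT8, 2=FP16 — must match how the engine was built
network-mode=2
# Inferred from the 25200x6 output tensor in the log (1 class); verify
num-detected-classes=1
# Custom YOLO output parser from the DeepStream-Yolo repo (assumed paths)
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=/home/shisun/DeepStream-Yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
```

The `deepstream_app_config.txt` then points its `[primary-gie]` section at this file via `config-file`, which matches the "Load new model" line in the log below.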
(py3) shisun@nx:~/DeepStream-Yolo$ deepstream-app -c deepstream_app_config.txt
Using winsys: x11
0:00:05.416749790 27837 0x2f8a8670 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/home/shisun/DeepStream-Yolo/yolov5s_fp16.engine
INFO: [Implicit Engine Info]: layers num: 5
0 INPUT kHALF images 3x640x640
1 OUTPUT kHALF 490 3x80x80x6
2 OUTPUT kHALF 558 3x40x40x6
3 OUTPUT kHALF 626 3x20x20x6
4 OUTPUT kFLOAT output 25200x6
0:00:05.444020002 27837 0x2f8a8670 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /home/shisun/DeepStream-Yolo/yolov5s_fp16.engine
0:00:05.462007896 27837 0x2f8a8670 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/home/shisun/DeepStream-Yolo/config_infer_primary_yoloV5.txt sucessfully
Runtime commands:
h: Print this help
q: Quit
p: Pause
r: Resume
NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.
**PERF: FPS 0 (Avg)
**PERF: 0.00 (0.00)
** INFO: <bus_callback:194>: Pipeline ready
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 260
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 260
** INFO: <bus_callback:180>: Pipeline running
0:00:05.946443823 27837 0x2f19fe80 ERROR nvinfer gstnvinfer.cpp:1150:get_converted_buffer:<primary_gie> cudaMemset2DAsync failed with error cudaErrorIllegalAddress while converting buffer
0:00:05.947321591 27837 0x2f19fe80 WARN nvinfer gstnvinfer.cpp:1472:gst_nvinfer_process_full_frame:<primary_gie> error: Buffer conversion failed
ERROR from primary_gie: Buffer conversion failed
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1472): gst_nvinfer_process_full_frame (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie
Quitting
ERROR: Failed to add cudaStream callback for returning input buffers, cuda err_no:77, err_str:cudaErrorIllegalAddress
ERROR: Preprocessor transform input data failed., nvinfer error:NVDSINFER_CUDA_ERROR
0:00:05.981711511 27837 0x2f280450 WARN nvinfer gstnvinfer.cpp:1324:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
Segmentation fault (core dumped)