YOLOv5 on Jetson Nano segmentation fault

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson Nano
• DeepStream Version: 6.0.1
• JetPack Version (valid for Jetson only)
• TensorRT Version: 8.2
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs): bug
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, for which plugin or which sample application, and the function description.)
I'm trying to run real-time detection with a USB camera using YOLOv5, and I get an error like this:
mirai@mirai-desktop:~/DeepStream-Yolo$ deepstream-app -c csi_test.txt
0:00:11.396758499 12372 0x3c46f20 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/home/mirai/DeepStream-Yolo/model_b1_gpu0_fp32.engine
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT images 3x640x640
1 OUTPUT kFLOAT output0 25200x11

0:00:11.398429202 12372 0x3c46f20 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /home/mirai/DeepStream-Yolo/model_b1_gpu0_fp32.engine
0:00:12.010606344 12372 0x3c46f20 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/home/mirai/DeepStream-Yolo/config_infer_primary_yoloV5.txt sucessfully

Runtime commands:
h: Print this help
q: Quit

p: Pause
r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.

**PERF: FPS 0 (Avg)
**PERF: 0.00 (0.00)
** INFO: <bus_callback:194>: Pipeline ready

** INFO: <bus_callback:180>: Pipeline running

I also tried running it under gdb and got this backtrace:
[Switching to Thread 0x7f54f26de0 (LWP 12413)]
0x0000007f74254cb8 in decodeTensorYolo(float const*, float const*, float const*, unsigned int const&, unsigned int const&, unsigned int const&, std::vector<float, std::allocator<float> > const&) ()
from /home/mirai/DeepStream-Yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
(gdb) bt
#0 0x0000007f74254cb8 in decodeTensorYolo(float const*, float const*, float const*, unsigned int const&, unsigned int const&, unsigned int const&, std::vector<float, std::allocator<float> > const&) ()
at /home/mirai/DeepStream-Yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
#1 0x0000007f7425514c in NvDsInferParseCustomYolo(std::vector<NvDsInferLayerInfo, std::allocator<NvDsInferLayerInfo> > const&, NvDsInferNetworkInfo const&, NvDsInferParseDetectionParams const&, std::vector<NvDsInferObjectDetectionInfo, std::allocator<NvDsInferObjectDetectionInfo> >&) ()
at /home/mirai/DeepStream-Yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
#2 0x0000007f74255404 in NvDsInferParseYolo ()
at /home/mirai/DeepStream-Yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
#3 0x0000007f7dbb6618 in ()
at /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infer.so
#4 0x0000007f7db9cca4 in ()
at /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infer.so
#5 0x0000007f7db9d150 in ()
at /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infer.so
#6 0x0000007f7dba1624 in nvdsinfer::NvDsInferContextImpl::dequeueOutputBatch(NvDsInferContextBatchOutput&) ()
at /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infer.so
#7 0x0000007f7dc689ec in ()
at /usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_infer.so
#8 0x0000007fb7d33a64 in ()
at /usr/lib/aarch64-linux-gnu/libglib-2.0.so.0
#9 0x0000007fffffdf08 in ()

Can you help me? I haven't been able to solve this problem for a week.

Did you add this API yourself? Can you add some log prints to see where exactly the crash happened?
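
For example, here is a minimal sketch of such a print, placed at the top of the parser before any output buffer is read. The struct fields come from DeepStream's nvdsinfer.h; the helper name logOutputLayers is just illustrative, and you will need to fit it into the actual function in your copy of the lib:

#include <cstdio>
#include <vector>
#include "nvdsinfer_custom_impl.h"

// Illustrative helper: call it first thing inside NvDsInferParseYolo /
// NvDsInferParseCustomYolo so you can see what the parser actually receives.
static void logOutputLayers(const std::vector<NvDsInferLayerInfo>& outputLayersInfo,
                            const NvDsInferNetworkInfo& networkInfo)
{
    // Network input resolution as seen by the parser.
    fprintf(stderr, "parser: network %ux%u, %zu output layer(s)\n",
            networkInfo.width, networkInfo.height, outputLayersInfo.size());

    // Name, element count and dimensions of every output tensor.
    for (const NvDsInferLayerInfo& layer : outputLayersInfo) {
        fprintf(stderr, "parser: layer '%s' numElements=%u dims=",
                layer.layerName, layer.inferDims.numElements);
        for (unsigned int i = 0; i < layer.inferDims.numDims; ++i)
            fprintf(stderr, "%s%u", i ? "x" : "", layer.inferDims.d[i]);
        fprintf(stderr, "\n");
    }
}

If these prints show up but the app still segfaults right after, the crash is in the decoding code that runs next.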

This is my error without gdb:
mirai@mirai-desktop:~/DeepStream-Yolo$ deepstream-app -c csi_test.txt
0:00:05.706352178 12590 0xbc1cf20 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/home/mirai/DeepStream-Yolo/model3-32.engine
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT images 3x640x640
1 OUTPUT kFLOAT output0 25200x11

0:00:05.707657619 12590 0xbc1cf20 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /home/mirai/DeepStream-Yolo/model3-32.engine
0:00:05.743497508 12590 0xbc1cf20 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/home/mirai/DeepStream-Yolo/config_infer_primary_yoloV5.txt sucessfully

Runtime commands:
h: Print this help
q: Quit

p: Pause
r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.

**PERF: FPS 0 (Avg)
**PERF: 0.00 (0.00)
** INFO: <bus_callback:194>: Pipeline ready

** INFO: <bus_callback:180>: Pipeline running

Segmentation fault (core dumped)

This is my YOLOv5 infer config (config_infer_primary_yoloV5.txt):
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=model3-32.onnx
model-engine-file=model3-32.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
network-mode=0
num-detected-classes=6
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
#workspace-size=2000
parse-bbox-func-name=NvDsInferParseYolo
#parse-bbox-func-name=NvDsInferParseYoloCuda
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300

And this is my deepstream-app config (csi_test.txt):
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=1
camera-width=640
camera-height=480
camera-fps-n=30
camera-fps-d=1
camera-v4l2-dev-node=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay
type=5
sync=0
display-id=0
offset-x=0
offset-y=0
width=0
height=0
overlay-id=1
source-id=0

[osd]
enable=1
gpu-id=0
border-width=5
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
live-source=1
batch-size=1
batched-push-timeout=40000
width=640
height=480
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
#model-engine-file=model_b1_gpu0_fp32.engine
config-file=config_infer_primary_yoloV5.txt
gie-unique-id=1
gpu-id=0
nvbuf-memory-type=0

[tests]
file-loop=0

Please help me with this problem.

Since you are using your own model and your own NvDsInferParseYolo API, please add some logging to debug this yourself first. You can add log prints in your NvDsInferParseYolo function to see where exactly the crash happens.

How can I add logging in my NvDsInferParseYolo?

I'm sorry, I'm a newbie at this.

You can debug with the following steps. If you are using a project from someone else, it's better to ask questions in that project's repository.

  1. Find the NvDsInferParseYolo API in your project first.
  2. Add some log prints in this API (see the sketch after this list).
  3. Rebuild the lib.
  4. Run the demo app again.
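
As a rough, self-contained sketch of step 2 (the constants 25200 and 11 come from your engine's output0 layer; the variable names are illustrative, not the repo's actual code), printing the running index inside the decode loop tells you which detection is being read when the segfault hits:

#include <cstdio>
#include <vector>

int main()
{
    // Shapes taken from the engine log: output0 is 25200x11
    // (4 bbox values + 1 objectness score + 6 class scores per detection).
    const unsigned int kNumDetections = 25200;
    const unsigned int kValuesPerDet = 11;

    // Stand-in for the flat output tensor the real parser receives.
    std::vector<float> output(kNumDetections * kValuesPerDet, 0.0f);

    for (unsigned int b = 0; b < kNumDetections; ++b) {
        // Log every few thousand detections; the last index printed before
        // a crash narrows the faulting read down to one detection slice.
        if (b % 5000 == 0)
            fprintf(stderr, "decode: detection %u / %u (offset %u)\n",
                    b, kNumDetections, b * kValuesPerDet);

        const float* det = output.data() + b * kValuesPerDet;
        (void)det; // the real decode code reads bbox, objectness and class
                   // scores from this slice here
    }
    fprintf(stderr, "decode: loop finished\n");
    return 0;
}

In your lib the equivalent fprintf lines would go inside decodeTensorYolo(), which your backtrace shows as the crash site. After adding them, rebuild the library the same way you built it originally and run the app again.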

Can you suggest a custom-lib-path? I think the problem comes from that. I'm building a YOLOv5 object-detection project on a Jetson Nano.

The path is already written in your configuration file.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.