Segmentation fault while running YOLOv8

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson Orin NX
• DeepStream Version 7.0
• JetPack Version (valid for Jetson only) R32 Revision:3.0
• TensorRT Version 8.6.2.3
• NVIDIA GPU Driver Version (valid for GPU only) CUDA 12.2
• Issue Type( questions, new requirements, bugs) Bug: whenever I install the libraries from GitHub - marcoslucianops/DeepStream-Yolo-Seg (NVIDIA DeepStream SDK implementation for YOLO segmentation models), it returns the error “segmentation fault” after executing the command deepstream-app -c deepstream_app_config.txt

• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing) : Set up the environment as above and install the GitHub libraries above; it will generate the same error.

• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Please refer to this compatibility table; the JetPack version should be 6.0 GA.
If it still doesn't work: without any code or cfg modification, will the app still crash? Could you share a whole log? Can you use gdb to get a crash stack?
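For reference, one way to capture such a crash stack with gdb (a sketch; the config path is an example and should match your setup):

```shell
# Run deepstream-app under gdb with its arguments
gdb --args deepstream-app -c deepstream_app_config.txt
# Inside gdb:
#   (gdb) run    # reproduce the crash (SIGSEGV)
#   (gdb) bt     # print the backtrace of the crashing thread
```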

Jetpack : 6.0

Gdb for a crash stack :

Starting program: /usr/bin/deepstream-app -c deepstream_app_config.txt
[Thread debugging using libthread_db enabled]
Using host libthread_db library “/lib/aarch64-linux-gnu/libthread_db.so.1”.
[New Thread 0xffffc86a4840 (LWP 30491)]
[New Thread 0xffff85919840 (LWP 30492)]
[New Thread 0xffff85109840 (LWP 30493)]
[New Thread 0xffff848f9840 (LWP 30494)]
[New Thread 0xffff77ff9840 (LWP 30495)]
[New Thread 0xffff777e9840 (LWP 30496)]
[New Thread 0xffff76fd9840 (LWP 30497)]
[New Thread 0xffff767c9840 (LWP 30498)]
[New Thread 0xffff75fb9840 (LWP 30499)]
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:08.059148272 30489 0xaaaaf6e7b420 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2095> [UID = 1]: deserialized trt engine from :/home/paymentinapp/DeepStream-Yolo-Seg/yolov8s-seg.onnx_b1_gpu0_fp32.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT images 3x640x640
1 OUTPUT kHALF output1 32x160x160
2 OUTPUT kHALF output0 116x8400

0:00:08.459176817 30489 0xaaaaf6e7b420 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2198> [UID = 1]: Use deserialized engine model: /home/paymentinapp/DeepStream-Yolo-Seg/yolov8s-seg.onnx_b1_gpu0_fp32.engine
[New Thread 0xffff75099840 (LWP 30503)]
[New Thread 0xffff74889840 (LWP 30504)]
[New Thread 0xffff5fff9840 (LWP 30505)]
0:00:08.485270724 30489 0xaaaaf6e7b420 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/home/paymentinapp/DeepStream-Yolo-Seg/config_infer_primary_yoloV8_seg.txt sucessfully
[New Thread 0xffff5f7e9840 (LWP 30506)]
[New Thread 0xffff5efd9840 (LWP 30507)]
[New Thread 0xffff5e7c9840 (LWP 30508)]
[New Thread 0xffff5dfb9840 (LWP 30509)]
[New Thread 0xffff5d7a9840 (LWP 30510)]

Runtime commands:
h: Print this help
q: Quit

p: Pause
r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.

**PERF: FPS 0 (Avg)
**PERF: 0.00 (0.00)
** INFO: <bus_callback:291>: Pipeline ready

[New Thread 0xffff5cf99840 (LWP 30511)]
[New Thread 0xffff3bff9840 (LWP 30512)]
[New Thread 0xffff3b7e9840 (LWP 30513)]
[New Thread 0xffff28fe9840 (LWP 30514)]
[New Thread 0xffff267d9840 (LWP 30515)]
[New Thread 0xffff23fc9840 (LWP 30516)]
[New Thread 0xffff217b9840 (LWP 30517)]
[New Thread 0xffff1efa9840 (LWP 30518)]
[Detaching after vfork from child process 30519]
[Detaching after vfork from child process 30522]
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
[New Thread 0xffff1bd50840 (LWP 30525)]
[New Thread 0xffff1b540840 (LWP 30526)]
[New Thread 0xffff1ad30840 (LWP 30527)]
NvMMLiteBlockCreate : Block : BlockType = 261
[New Thread 0xffff1a520840 (LWP 30528)]
[New Thread 0xffff19889840 (LWP 30529)]
** INFO: <bus_callback:277>: Pipeline running

[New Thread 0xffff19079840 (LWP 30530)]
[New Thread 0xffff18869840 (LWP 30531)]
[New Thread 0xffff03c71840 (LWP 30532)]

Thread 11 “deepstream-app” received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0xffff75099840 (LWP 30503)]
0x0000ffffc02d5d18 in decodeTensorYoloSeg(float const*, float const*, float const*, float const*, unsigned int const&, unsigned int const&, unsigned int const&, unsigned int const&, unsigned int const&, std::vector<float, std::allocator<float> > const&) () from /home/paymentinapp/DeepStream-Yolo-Seg/nvdsinfer_custom_impl_Yolo_seg/libnvdsinfer_custom_impl_Yolo_seg.so
(gdb) bt
#0 0x0000ffffc02d5d18 in decodeTensorYoloSeg(float const*, float const*, float const*, float const*, unsigned int const&, unsigned int const&, unsigned int const&, unsigned int const&, unsigned int const&, std::vector<float, std::allocator<float> > const&) ()
at /home/paymentinapp/DeepStream-Yolo-Seg/nvdsinfer_custom_impl_Yolo_seg/libnvdsinfer_custom_impl_Yolo_seg.so
#1 0x0000ffffc02d5ff4 in NvDsInferParseCustomYoloSeg(std::vector<NvDsInferLayerInfo, std::allocator<NvDsInferLayerInfo> > const&, NvDsInferNetworkInfo const&, NvDsInferParseDetectionParams const&, std::vector<NvDsInferInstanceMaskInfo, std::allocator<NvDsInferInstanceMaskInfo> >&) ()
at /home/paymentinapp/DeepStream-Yolo-Seg/nvdsinfer_custom_impl_Yolo_seg/libnvdsinfer_custom_impl_Yolo_seg.so
#2 0x0000ffffc02d608c in NvDsInferParseYoloSeg ()
at /home/paymentinapp/DeepStream-Yolo-Seg/nvdsinfer_custom_impl_Yolo_seg/libnvdsinfer_custom_impl_Yolo_seg.so
#3 0x0000ffffc0d7cb08 in ()
at /opt/nvidia/deepstream/deepstream-7.0/lib/libnvds_infer.so
#4 0x0000ffffc0d5f714 [PAC] in ()
at /opt/nvidia/deepstream/deepstream-7.0/lib/libnvds_infer.so
#5 0x0000ffffc0d6055c [PAC] in ()
at /opt/nvidia/deepstream/deepstream-7.0/lib/libnvds_infer.so
#6 0x0000ffffc0d60ed8 [PAC] in nvdsinfer::NvDsInferContextImpl::dequeueOutputBatch(NvDsInferContextBatchOutput&) ()
at /opt/nvidia/deepstream/deepstream-7.0/lib/libnvds_infer.so
#7 0x0000ffffc0e97db8 [PAC] in () at /usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_infer.so
#8 0x0000fffff7c88064 in g_thread_proxy (data=0xaaaaf4d94610) at ../glib/gthread.c:831
#9 0x0000fffff6b7d5c8 in start_thread (arg=0x0) at ./nptl/pthread_create.c:442
#10 0x0000fffff6be5edc in thread_start () at ../sysdeps/unix/sysv/linux/aarch64/clone.S:79

Gdb: Run

The program being debugged has been started already.
Start it from the beginning? (y or n) y
Starting program: /usr/bin/deepstream-app -c deepstream_app_config.txt
[Thread debugging using libthread_db enabled]
Using host libthread_db library “/lib/aarch64-linux-gnu/libthread_db.so.1”.
[New Thread 0xffffc86a4840 (LWP 30571)]
[New Thread 0xffff85919840 (LWP 30572)]
[New Thread 0xffff85109840 (LWP 30573)]
[New Thread 0xffff848f9840 (LWP 30574)]
[New Thread 0xffff77ff9840 (LWP 30575)]
[New Thread 0xffff777e9840 (LWP 30576)]
[New Thread 0xffff76fd9840 (LWP 30577)]
[New Thread 0xffff767c9840 (LWP 30578)]
[New Thread 0xffff75fb9840 (LWP 30579)]
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:07.447252266 30570 0xaaaaf6e7b420 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2095> [UID = 1]: deserialized trt engine from :/home/paymentinapp/DeepStream-Yolo-Seg/yolov8s-seg.onnx_b1_gpu0_fp32.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT images 3x640x640
1 OUTPUT kHALF output1 32x160x160
2 OUTPUT kHALF output0 116x8400

0:00:07.842781271 30570 0xaaaaf6e7b420 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2198> [UID = 1]: Use deserialized engine model: /home/paymentinapp/DeepStream-Yolo-Seg/yolov8s-seg.onnx_b1_gpu0_fp32.engine
[New Thread 0xffff75099840 (LWP 30580)]
[New Thread 0xffff74889840 (LWP 30581)]
[New Thread 0xffff57ff9840 (LWP 30582)]
0:00:07.863582355 30570 0xaaaaf6e7b420 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/home/paymentinapp/DeepStream-Yolo-Seg/config_infer_primary_yoloV8_seg.txt sucessfully
[New Thread 0xffff577e9840 (LWP 30583)]
[New Thread 0xffff56fd9840 (LWP 30584)]
[New Thread 0xffff567c9840 (LWP 30585)]
[New Thread 0xffff55fb9840 (LWP 30586)]
[New Thread 0xffff557a9840 (LWP 30587)]

Runtime commands:
h: Print this help
q: Quit

p: Pause
r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.

**PERF: FPS 0 (Avg)
**PERF: 0.00 (0.00)
** INFO: <bus_callback:291>: Pipeline ready

[New Thread 0xffff54f99840 (LWP 30588)]
[New Thread 0xffff3fff9840 (LWP 30589)]
[New Thread 0xffff3f7e9840 (LWP 30590)]
[New Thread 0xffff28fe9840 (LWP 30591)]
[New Thread 0xffff287d9840 (LWP 30592)]
[New Thread 0xffff25fc9840 (LWP 30593)]
[New Thread 0xffff217b9840 (LWP 30594)]
[New Thread 0xffff20fa9840 (LWP 30595)]
[Detaching after vfork from child process 30596]
[Detaching after vfork from child process 30599]
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
[New Thread 0xffff1bd50840 (LWP 30602)]
[New Thread 0xffff1b540840 (LWP 30603)]
[New Thread 0xffff1ad30840 (LWP 30604)]
NvMMLiteBlockCreate : Block : BlockType = 261
[New Thread 0xffff1a520840 (LWP 30605)]
[New Thread 0xffff19889840 (LWP 30606)]
** INFO: <bus_callback:277>: Pipeline running

[New Thread 0xffff19079840 (LWP 30607)]
[New Thread 0xffff18869840 (LWP 30608)]
[New Thread 0xffff03c71840 (LWP 30609)]

Thread 11 “deepstream-app” received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0xffff75099840 (LWP 30580)]
0x0000ffffc02d5d18 in decodeTensorYoloSeg(float const*, float const*, float const*, float const*, unsigned int const&, unsigned int const&, unsigned int const&, unsigned int const&, unsigned int const&, std::vector<float, std::allocator<float> > const&) () from /home/paymentinapp/DeepStream-Yolo-Seg/nvdsinfer_custom_impl_Yolo_seg/libnvdsinfer_custom_impl_Yolo_seg.so

From the log above, the model has only two output layers, but at this code line the parser will try to read four layers. Please make sure the model is correct.

Actually the fourth layer is for getting the mask. Then what's wrong here?

const NvDsInferLayerInfo& boxes = outputLayersInfo[0];
const NvDsInferLayerInfo& scores = outputLayersInfo[1];
const NvDsInferLayerInfo& classes = outputLayersInfo[2];
const NvDsInferLayerInfo& masks = outputLayersInfo[3];

const uint outputSize = boxes.inferDims.d[0];
const uint maskWidth = masks.inferDims.d[2];
const uint maskHeight = masks.inferDims.d[1];

The detailed code is here:

#include <iostream>
#include <cstring>

#include "nvdsinfer_custom_impl.h"

#include "utils.h"

#define NMS_THRESH 0.45

extern "C" bool
NvDsInferParseYoloSeg(std::vector<NvDsInferLayerInfo> const& outputLayersInfo, NvDsInferNetworkInfo const& networkInfo,
NvDsInferParseDetectionParams const& detectionParams, std::vector<NvDsInferInstanceMaskInfo>& objectList);

static void
addSegProposal(const float* masks, const uint& maskWidth, const uint& maskHeight, const uint& b,
NvDsInferInstanceMaskInfo& obj)
{
obj.mask = new float[maskHeight * maskWidth];
obj.mask_width = maskWidth;
obj.mask_height = maskHeight;
obj.mask_size = sizeof(float) * maskHeight * maskWidth;

const float* data = reinterpret_cast<const float*>(masks + b * maskHeight * maskWidth);
memcpy(obj.mask, data, sizeof(float) * maskHeight * maskWidth);
}

static void
addBBoxProposal(const float& bx1, const float& by1, const float& bx2, const float& by2, const uint& netW, const uint& netH,
const int& maxIndex, const float& maxProb, NvDsInferInstanceMaskInfo& obj)
{
float x1 = clamp(bx1, 0, netW);
float y1 = clamp(by1, 0, netH);
float x2 = clamp(bx2, 0, netW);
float y2 = clamp(by2, 0, netH);

obj.left = x1;
obj.width = clamp(x2 - x1, 0, netW);
obj.top = y1;
obj.height = clamp(y2 - y1, 0, netH);

if (obj.width < 1 || obj.height < 1) {
return;
}

obj.detectionConfidence = maxProb;
obj.classId = maxIndex;
}

static std::vector<NvDsInferInstanceMaskInfo>
decodeTensorYoloSeg(const float* boxes, const float* scores, const float* classes, const float* masks,
const uint& outputSize, const uint& maskWidth, const uint& maskHeight, const uint& netW, const uint& netH,
const std::vector<float>& preclusterThreshold)
{
std::vector<NvDsInferInstanceMaskInfo> objects;

for (uint b = 0; b < outputSize; ++b) {
float maxProb = scores[b];
int maxIndex = (int) classes[b];

if (maxProb < preclusterThreshold[maxIndex]) {
  continue;
}

float bx1 = boxes[b * 4 + 0];
float by1 = boxes[b * 4 + 1];
float bx2 = boxes[b * 4 + 2];
float by2 = boxes[b * 4 + 3];

NvDsInferInstanceMaskInfo obj;

addBBoxProposal(bx1, by1, bx2, by2, netW, netH, maxIndex, maxProb, obj);
addSegProposal(masks, maskWidth, maskHeight, b, obj);

objects.push_back(obj);

}

return objects;
}

static bool
NvDsInferParseCustomYoloSeg(std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
NvDsInferNetworkInfo const& networkInfo, NvDsInferParseDetectionParams const& detectionParams,
std::vector<NvDsInferInstanceMaskInfo>& objectList)
{
if (outputLayersInfo.empty()) {
std::cerr << "ERROR: Could not find output layer in bbox parsing" << std::endl;
return false;
}

const NvDsInferLayerInfo& boxes = outputLayersInfo[0];
const NvDsInferLayerInfo& scores = outputLayersInfo[1];
const NvDsInferLayerInfo& classes = outputLayersInfo[2];
const NvDsInferLayerInfo& masks = outputLayersInfo[3];

const uint outputSize = boxes.inferDims.d[0];
const uint maskWidth = masks.inferDims.d[2];
const uint maskHeight = masks.inferDims.d[1];

std::vector<NvDsInferInstanceMaskInfo> objects = decodeTensorYoloSeg((const float*) (boxes.buffer),
(const float*) (scores.buffer), (const float*) (classes.buffer), (const float*) (masks.buffer), outputSize, maskWidth,
maskHeight, networkInfo.width, networkInfo.height, detectionParams.perClassPreclusterThreshold);

objectList = objects;

return true;
}

extern "C" bool
NvDsInferParseYoloSeg(std::vector<NvDsInferLayerInfo> const& outputLayersInfo, NvDsInferNetworkInfo const& networkInfo,
NvDsInferParseDetectionParams const& detectionParams, std::vector<NvDsInferInstanceMaskInfo>& objectList)
{
return NvDsInferParseCustomYoloSeg(outputLayersInfo, networkInfo, detectionParams, objectList);
}

CHECK_CUSTOM_INSTANCE_MASK_PARSE_FUNC_PROTOTYPE(NvDsInferParseYoloSeg);

From the log, the model only has two output layers, but the code needs four layers, so the model and code are inconsistent. If the model is correct, you need to correct the code.

I removed the last layer (const NvDsInferLayerInfo& masks = outputLayersInfo[3];) and am still getting the same error: segmentation fault.

I generated a yolov8-seg model according to this link. The model has four output layers. Please check why your model has two layers; it seems it is a detection model, not a segmentation model.

It's working. Thank you.

