Segmentation Fault When Running Heartrate App

• Hardware Platform: Jetson Nano
• DeepStream Version: 6.0.1
• DeepStream Install Method: .tar Package
• Power Mode: MAXN
• JetPack Version: 4.6.4 [L4T 32.7.4]
• TensorRT Version: 8.2.1.8
• NVIDIA GPU Driver Version: N/A

• How to reproduce the issue?

I am trying to run the sample heart rate app and then train the heart rate model I got from this GitHub repository. I have followed these steps to build and run the sample app with the pretrained model. After the command:

./deepstream-heartrate-app 1 file:///usr/data/test_video.mp4 ./heartrate

I got this output:

deepstream_hearthrate_output_log.txt (3.7 KB)

Thanks in advance for all the help you offer.

NOTE:

  • For installing DeepStream and deepstream_tao_apps, I have followed this topic and referred to this topic.

  • I have tried to run it twice, but it gave me the same output both times.

  • I want it to infer on images coming from a USB camera.

  • After trying the command:

sudo ./deepstream-heartrate-app 3 file:///usr/data/test_video.mp4 ./heartrate

I got this output:

Request sink_0 pad from streammux
Now playing: file:///usr/data/test_video.mp4
Using winsys: x11
libnvcv_tensorops.so: cannot open shared object file: No such file or directory
Running…

And it got stuck at Running…

  • After trying this command three times:
./deepstream-heartrate-app 3 file:///usr/data/test_video.mp4 ./heartrate

I got this output after the third time:

new_deepstream_hearthrate_output_log.txt (3.2 KB)

  • After trying this command twice:
./deepstream-heartrate-app 3 file:/opt/nvidia/samples/streams/yoga.mp4 ./heartrate

I got this output after the second time:

new_deepstream_hearthrate_with_different_file_directory_output_log.txt (3.2 KB)

  • After trying this command twice:
./deepstream-heartrate-app 1 file:/opt/nvidia/samples/streams/yoga.mp4 ./heartrate

I got this output after the second time:

new_deepstream_hearthrate_filesink_output_log.txt (3.3 KB)

  • After modifying the engine and .etlt paths in the heartrate config file to absolute paths, nothing changed and I still got that segmentation fault. Here is the config file:

sample_heartrate_model_config.txt (326 Bytes)

I have also modified the FaceNet config files to use absolute paths for the models. Here are the config files for FaceNet:

sample_heartrate_model_config.txt (326 Bytes)

After the command:

./deepstream-heartrate-app 1 file:/opt/nvidia/samples/streams/yoga.mp4 ./heartrate

I got this output:

latest_deepstream_heartrate_output_log.txt (3.2 KB)

I have built engine files for both fpenet and heartratenet. But this time, when I run the command:

./deepstream-heartrate-app 3 file:///usr/data/test_video.mp4 ./heartrate

It gets stuck at Running…

This is the output I get:

Now playing: file:///usr/data/test_video.mp4
Using winsys: x11
libnvcv_tensorops.so: cannot open shared object file: No such file or directory
Running…

Could you offer your guidance please?

Kindest regards.

Sorry for the long delay because of the vacation.

You probably missed this command:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/nvidia/deepstream/deepstream/lib/cvcore_libs

Can you capture the crash stack using gdb? Due to the lack of a Jetson Nano, I can't use DS-6.0 to reproduce this issue.

By the way, does deepstream-test1 work fine? Make sure all dependencies are installed.

file:/opt/nvidia/samples/streams/yoga.mp4 is not a correct URI and will cause a crash.

Use file:// followed by an absolute path (e.g. file:///opt/nvidia/samples/streams/yoga.mp4), the same way an http:// URI is written.

Thank you for your guidance.

When I run the command:

gdb ./deepstream-heartrate-app 3 file:///usr/data/test_video.mp4 ./heartrate

I get:

Type “apropos word” to search for commands related to “word”…
Reading symbols from ./deepstream-heartrate-app…
(No debugging symbols found in ./deepstream-heartrate-app)
Attaching to program: /home/mericgeren/deepstream_tao_apps/apps/tao_others/deepstream-heartrate-app/deepstream-heartrate-app, process 3
ptrace: No such process.
/home/mericgeren/deepstream_tao_apps/apps/tao_others/deepstream-heartrate-app/3: No such file or directory.

Using this command:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/nvidia/deepstream/deepstream/lib/cvcore_libs

was helpful. It no longer gets stuck on “Running…”.

This time I get “Segmentation fault” instead.

Here is the output I get after running “./deepstream-heartrate-app 1 file://opt/nvidia/samples/streams/yoga.mp4 ./heartrate”:

current_deepstream_heartrate_output_log.txt (2.1 KB)

Could you offer your advice please?

Kindest regards.

The correct command should be:

gdb --args ./deepstream-heartrate-app 3 file:///usr/data/test_video.mp4 ./heartrate

Then type run at the (gdb) prompt, and bt after the crash to print the backtrace.

Also try file:///opt/nvidia/samples/streams/yoga.mp4; I think file:// without the third slash is an illegal URI.

Thank you for your advice.

For the command:

gdb --args ./deepstream-heartrate-app 3 file:///usr/data/test_video.mp4 ./heartrate

I get the output:

gdb_output_log.txt (3.1 KB)

Could you offer your guidance please?

Kindest regards.

0x0000007f88ca3bbc in HeartRateAlgorithm::~HeartRateAlgorithm (this=0x555569c0b0, __in_chrg=<optimized out>) at heartrateinfer.cpp:696
696	  if(m_process_surf->memType == NVBUF_MEM_SURFACE_ARRAY) {
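Looking at that frame: heartrateinfer.cpp:696 dereferences m_process_surf in the destructor. If that surface was never allocated (for example when the pipeline fails to start), the pointer may still be NULL. Purely as a sketch of the idea, not a verified fix, a guard of this shape in HeartRateAlgorithm::~HeartRateAlgorithm would avoid the dereference:

/* Hypothetical guard around heartrateinfer.cpp:696: only enter the
 * surface-array cleanup path if m_process_surf was actually allocated;
 * it can still be NULL when the pipeline never reached PLAYING. */
if (m_process_surf && m_process_surf->memType == NVBUF_MEM_SURFACE_ARRAY) {
  /* ... existing NVBUF_MEM_SURFACE_ARRAY cleanup ... */
}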

If you change the output type from 3 to 1 to save the output to a file, will it crash again?

Set GST_DEBUG=3 like below, then share the log.

GST_DEBUG=3 ./deepstream-heartrate-app 1 file:///usr/data/test_video.mp4 ./heartrate

When I change the output type from 3 to 1, it crashes again in the same way.

When I run with GST_DEBUG=3, I get:

Request sink_0 pad from streammux
####+++OUT file ./heartrate.264
Now playing: file:///usr/data/test_video.mp4
Opening in BLOCKING MODE 
0:00:00.628615627 19786   0x557f4f7210 WARN                    v4l2 gstv4l2object.c:2388:gst_v4l2_object_add_interlace_mode:0x557f4e3430 Failed to determine interlace mode
0:00:00.628746828 19786   0x557f4f7210 WARN                    v4l2 gstv4l2object.c:2388:gst_v4l2_object_add_interlace_mode:0x557f4e3430 Failed to determine interlace mode
0:00:00.628794537 19786   0x557f4f7210 WARN                    v4l2 gstv4l2object.c:2388:gst_v4l2_object_add_interlace_mode:0x557f4e3430 Failed to determine interlace mode
0:00:00.628840059 19786   0x557f4f7210 WARN                    v4l2 gstv4l2object.c:2388:gst_v4l2_object_add_interlace_mode:0x557f4e3430 Failed to determine interlace mode
0:00:00.628953186 19786   0x557f4f7210 WARN                    v4l2 gstv4l2object.c:4476:gst_v4l2_object_probe_caps:<nvvideo-h264enc:src> Failed to probe pixel aspect ratio with VIDIOC_CROPCAP: Unknown error -1
Library Opened Successfully
Setting custom lib properties # 1
Adding Prop: config-file : ../../../../configs/heartrate_tao/sample_heartrate_model_config.txt
Inside Custom Lib : Setting Prop Key=config-file Value=../../../../configs/heartrate_tao/sample_heartrate_model_config.txt
0:00:00.728759540 19786   0x557f4f7210 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1161> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:06.932822721 19786   0x557f4f7210 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/home/mericgeren/deepstream_tao_apps/models/faciallandmark/facenet.etlt_b1_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x416x736       
1   OUTPUT kFLOAT output_bbox/BiasAdd 4x26x46         
2   OUTPUT kFLOAT output_cov/Sigmoid 1x26x46         

0:00:06.935043857 19786   0x557f4f7210 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /home/mericgeren/deepstream_tao_apps/models/faciallandmark/facenet.etlt_b1_gpu0_int8.engine
0:00:07.011123816 19786   0x557f4f7210 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-infer-engine1> [UID 1]: Load new model:../../../configs/facial_tao/config_infer_primary_facenet.txt sucessfully
0:00:07.012578011 19786   0x557f4f7210 WARN                 filesrc gstfilesrc.c:533:gst_file_src_start:<source> error: No such file "/usr/data/test_video.mp4"
0:00:07.012727753 19786   0x557f4f7210 WARN                 basesrc gstbasesrc.c:3452:gst_base_src_start:<source> error: Failed to start
Decodebin child added: source
Decodebin child added: decodebin0
0:00:07.013691782 19786   0x557f4f7210 WARN                 filesrc gstfilesrc.c:533:gst_file_src_start:<source> error: No such file "/usr/data/test_video.mp4"
0:00:07.013753971 19786   0x557f4f7210 WARN                 basesrc gstbasesrc.c:3452:gst_base_src_start:<source> error: Failed to start
0:00:07.013863504 19786   0x557f4f7210 WARN                 filesrc gstfilesrc.c:533:gst_file_src_start:<source> error: No such file "/usr/data/test_video.mp4"
0:00:07.013912047 19786   0x557f4f7210 WARN                 basesrc gstbasesrc.c:3452:gst_base_src_start:<source> error: Failed to start
0:00:07.013956944 19786   0x557f4f7210 WARN                 basesrc gstbasesrc.c:3807:gst_base_src_activate_push:<source> Failed to start in push mode
0:00:07.013979340 19786   0x557f4f7210 WARN                GST_PADS gstpad.c:1149:gst_pad_set_active:<source:src> Failed to activate pad
Segmentation fault (core dumped)

Thank you for all the guidance you have offered,

I managed to display the video stream and could even see it detecting a face with the following command:

GST_DEBUG=3 ./deepstream-heartrate-app 3 file:///opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_ride_bike.mov ./heartrate

But it detected the heart rate as 0. Here is the output log of the command (it’s pretty long, but from what I observe it is mostly the same errors and warnings repeating, and the first and last lines are probably the most relevant for us):

the_longest_output_log.txt (239.6 KB)

May I ask your advice on how I can use live frames from a USB camera instead of a video file, please?

Kindest regards.

Those errors seem related to the model; make sure the model is converted on the Jetson.

Modify the pipeline, for example:

v4l2src --> nvvideoconvert --> capsfilter(must be "video/x-raw(memory:NVMM)")  --> nvstreammux ......

Here is a FAQ for v4l2src.

Thank you for the guidance you have offered.

I came across the error even though I have converted both the fpenet and heartrate models on the Jetson.

For modifying the pipeline, do I need to add something like ! nvinfer to the gst-launch-1.0 command in order to get the same inference that ./deepstream-heartrate-app 3 file:... ./heartrate performs, or is there a way to use ./deepstream-heartrate-app 3 ... ./heartrate and gst-launch-1.0 together? If the two can be used together, may I ask for your guidance on how to combine the gst-launch-1.0 command with ./deepstream-heartrate-app 3 ... ./heartrate, please?

Kindest regards.

If the video can be displayed, let’s ignore the error for now, as I do not have an old Jetson Nano on hand to reproduce it.

If you only wish to use a USB camera as input, simply switch the source to the v4l2 camera in the create_source_bin function.

If you prefer to utilise gst-launch-1.0, kindly refer to the following examples.

Thank you for your kind advice,

After the command:

v4l2-ctl -d /dev/video2 --list-formats-ext

I got this output:

ioctl: VIDIOC_ENUM_FMT
	Index       : 0
	Type        : Video Capture
	Pixel Format: 'YUYV'
	Name        : YUYV 4:2:2
		Size: Discrete 424x240
			Interval: Discrete 0.017s (60.000 fps)
			Interval: Discrete 0.033s (30.000 fps)
			Interval: Discrete 0.067s (15.000 fps)
			Interval: Discrete 0.167s (6.000 fps)
		Size: Discrete 640x480
			Interval: Discrete 0.033s (30.000 fps)
			Interval: Discrete 0.067s (15.000 fps)
			Interval: Discrete 0.167s (6.000 fps)
		Size: Discrete 1280x720
			Interval: Discrete 0.067s (15.000 fps)
			Interval: Discrete 0.100s (10.000 fps)
			Interval: Discrete 0.167s (6.000 fps)
		Size: Discrete 1920x1080
			Interval: Discrete 0.125s (8.000 fps)

From that output it seems the dimensions are 640x480, the FPS is 30, and the format is YUYV (YUY2).

I have tried this command to see if the gst pipeline works:

sudo gst-launch-1.0 v4l2src device=/dev/video2 ! 'video/x-raw, format=YUY2, width=640, height=480, framerate=30/1'  ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! mux.sink_0  nvstreammux name=mux width=1280 height=720 batch-size=1  ! nvoverlaysink

Even though I had a low FPS and saw warnings telling me a lot of buffers had been dropped, I could see the output of the camera on the screen.

I have found the function you mentioned in both deepstream_source_bin.c and deepstream_heartrate_app.cpp. Which create_source_bin function should I be adjusting?

May I ask if I need to add something for videoconvert there, or only adjust the v4l2 camera?

Also, could you offer your guidance on how I could modify the camera source for my case, please?

Kindest regards.

For your use case, modifying deepstream_heartrate_app.cpp is enough.

Just rewrite your gst-launch pipeline in C code; there is sample code for reference in create_source_bin, and a rough sketch is given below.

The scaling may also cost some performance.
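A minimal sketch of that rewrite, assuming the /dev/video2 YUY2 640x480@30 mode listed above and standard GStreamer C API calls. The bin and ghost-pad plumbing mirrors what create_source_bin already does; the function name create_v4l2_source_bin and the per-element names are illustrative only and would need to be fitted into the app's existing error handling:

#include <gst/gst.h>

/* Hypothetical replacement source bin: v4l2src -> capsfilter (YUY2) ->
 * nvvideoconvert -> capsfilter (NVMM/NV12). It exposes a "src" ghost pad
 * so it can be linked to nvstreammux sink_0 like the original source bin. */
static GstElement *
create_v4l2_source_bin (const gchar *device)
{
  GstElement *bin = gst_bin_new ("source-bin-00");
  GstElement *src = gst_element_factory_make ("v4l2src", "usb-cam-src");
  GstElement *caps_cam = gst_element_factory_make ("capsfilter", "cam-caps");
  GstElement *conv = gst_element_factory_make ("nvvideoconvert", "cam-conv");
  GstElement *caps_nvmm = gst_element_factory_make ("capsfilter", "nvmm-caps");
  GstCaps *caps = NULL;
  GstPad *pad = NULL;

  if (!bin || !src || !caps_cam || !conv || !caps_nvmm)
    return NULL;

  /* /dev/video2, YUY2 640x480 @ 30 fps, taken from the v4l2-ctl output above */
  g_object_set (G_OBJECT (src), "device", device, NULL);
  caps = gst_caps_from_string (
      "video/x-raw, format=YUY2, width=640, height=480, framerate=30/1");
  g_object_set (G_OBJECT (caps_cam), "caps", caps, NULL);
  gst_caps_unref (caps);

  /* nvstreammux expects NVMM buffers on its sink pads */
  caps = gst_caps_from_string ("video/x-raw(memory:NVMM), format=NV12");
  g_object_set (G_OBJECT (caps_nvmm), "caps", caps, NULL);
  gst_caps_unref (caps);

  gst_bin_add_many (GST_BIN (bin), src, caps_cam, conv, caps_nvmm, NULL);
  if (!gst_element_link_many (src, caps_cam, conv, caps_nvmm, NULL))
    return NULL;

  /* Ghost the last element's src pad so the bin can be linked to mux.sink_0 */
  pad = gst_element_get_static_pad (caps_nvmm, "src");
  gst_element_add_pad (bin, gst_ghost_pad_new ("src", pad));
  gst_object_unref (pad);

  return bin;
}

Calling something like create_v4l2_source_bin ("/dev/video2") where the app currently builds its file-based source, and linking the returned bin's src pad to the streammux sink_0 pad, should reproduce the working gst-launch pipeline above inside the app.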

Thank you for your advice.

I have tried to utilize gst-launch-1.0 by running the following command:

gst-launch-1.0 v4l2src device=/dev/video2 ! 'video/x-raw, format=YUY2, width=640, height=480, framerate=30/1'  ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! mux.sink_0 nvstreammux name=mux width=1280 height=720 batch-size=1  ! nvinfer config-file-path=/home/mericgeren/deepstream_tao_apps/configs/facial_tao/config_infer_primary_facenet.txt ! nvdsvideotemplate customlib-name=/home/mericgeren/deepstream_tao_apps/apps/tao_others/deepstream-heartrate-app/heartrateinfer_impl/libnvds_heartrateinfer.so customlib-props=config-file:/home/mericgeren/deepstream_tao_apps/configs/heartrate_tao/sample_heartrate_model_config.txt ! nvegltransform ! nveglglessink

It showed the view coming from the camera at a low frame rate of around 2-3 FPS, it didn’t draw a bounding box around my face, and it output the heart rate as 0 on the terminal, just like the previous time I tried with “sample_ride_bike.mov”. But this time, instead of being able to see the heart rate info both on the terminal and on the display, I could only see the info on the terminal.

Can you offer your guidance on how to make it show the bounding box with the heart rate info, how to increase the FPS, and how to make it output the heart rate correctly, please?

Kindest regards.

I have tried both width=1280 height=720 and width=640 height=480, but on both tries there wasn’t a significant change, if there was any at all.

Thanks,

Kindest regards.

As described in the README, you need to provide face input that meets the requirements below.

The pretrained model is trained with limited face images. There are some requirements for the video to be recognized:
* The expected range for a person of interest is 0.5 meter.
* above 100 Lux 
* allow up to 10 secs to adapt for lighting changes and face movement
* keep head stable and straight with regard to the camera
* whole face is well illuminated
* do not wear a mask 
* Heart Rate only supports 15-30 FPS.
* Head Angle Relative to Camera Plane
   Yaw: -75 to 75 degrees
   Pitch: -60 to 45 degrees
   Roll: -45 to 45 degrees

1. Use fakesink to replace nvegltransform ! nveglglessink.

2. Look at the GPU loading; the command line for that is below.

Since the Jetson Nano is quite old, I think it cannot support a high FPS.

sudo pip3 install jetson-stats
sudo jtop

Thanks for your guidance.

I have replaced nvegltransform ! nveglglessink with fakesink as suggested, then ran the command:

gst-launch-1.0 v4l2src device=/dev/video2 ! 'video/x-raw, format=YUY2, width=640, height=480, framerate=30/1'  ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! mux.sink_0 nvstreammux name=mux width=1280 height=720 batch-size=1  ! nvinfer config-file-path=/home/mericgeren/deepstream_tao_apps/configs/facial_tao/config_infer_primary_facenet.txt ! nvdsvideotemplate customlib-name=/home/mericgeren/deepstream_tao_apps/apps/tao_others/deepstream-heartrate-app/heartrateinfer_impl/libnvds_heartrateinfer.so customlib-props=config-file:/home/mericgeren/deepstream_tao_apps/configs/heartrate_tao/sample_heartrate_model_config.txt ! fakesink

Unfortunately, the heart rate still stays at 0.

While running the command, I opened jtop in another terminal and observed that the GPU works at almost full capacity.

Could you offer your advice please?

Kindest regards.