Aborted (Core Dumped) on deepstream-heartrate-app

I have closed the log as you suggested and commented out the line https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/blob/master/apps/tao_others/deepstream-heartrate-app/heartrateinfer_impl/heartrateinfer.cpp#L889, and I discovered that after the 10th second it rarely (roughly one in every 100 to 200 frames) outputs something other than 0. However, the values are inconsistent and sometimes abnormal, such as 205, 181, and 43.

I also checked heartrateinfer.cpp and saw these lines:

#include "cv/heartrate/HeartRate.h"
#include "cv/core/Memory.h"

May I ask whether that might be related to this warning:

0:00:00.478184398 16985   0x5591335800 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-infer-engine1> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1161> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5

Kindest regards.
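
For context, the warning above concerns the detector’s bounding-box clustering, which DeepStream’s nvinfer element can be told to configure explicitly. The following is a hypothetical excerpt in the standard nvinfer config format, not this app’s actual config file; the values simply mirror the defaults quoted in the warning.

[property]
# cluster-mode selects the clustering algorithm:
# 0 = groupRectangles (OpenCV, deprecated), 1 = DBSCAN, 2 = NMS
cluster-mode=2

[class-attrs-all]
# These mirror the "topK = 20" and "NMS Threshold = 0.5" quoted in the warning.
nms-iou-threshold=0.5
topk=20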

No. This is not related to the test results. What affects the results most is whether your video meets the documented limitations: you need to ensure the fps, the lux, the angles of the face, etc.
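
One of those restrictions, the source fps, is easy to double-check before feeding a video to the app. Below is a minimal sketch assuming OpenCV is available; the property names are standard OpenCV, and the build line is an assumption about the local setup.

// fps_check.cpp - print the fps and frame size reported by a video container.
// Build (assumption): g++ fps_check.cpp -o fps_check $(pkg-config --cflags --libs opencv4)
#include <opencv2/videoio.hpp>
#include <iostream>

int main(int argc, char** argv) {
    if (argc < 2) {
        std::cerr << "usage: fps_check <video-file>\n";
        return 1;
    }
    cv::VideoCapture cap(argv[1]);
    if (!cap.isOpened()) {
        std::cerr << "could not open " << argv[1] << '\n';
        return 1;
    }
    // CAP_PROP_FPS is the frame rate declared by the container/codec.
    std::cout << "reported fps: " << cap.get(cv::CAP_PROP_FPS) << '\n';
    std::cout << "frame size:   " << cap.get(cv::CAP_PROP_FRAME_WIDTH)
              << 'x' << cap.get(cv::CAP_PROP_FRAME_HEIGHT) << '\n';
    return 0;
}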

I get an average inference FPS between 40 and 50 (the videos are 30 FPS), the angle between my face and the camera is less than 15 degrees on every axis, and I have tested under different lighting conditions: natural light, artificial light, and both combined. I have also tested at 1080p, 720p, and 480p resolutions. Would it help if I messaged the videos I tested with?

Kindest regards.

Yes. You can message the videos to me and we’ll check them.

Thank you,

May I ask whether having an average FPS higher than 30 (I saw 45.455561 in my latest try) could be an issue here?

It seems the issue is not about FPS, since I still have it with a v4l2 source at an FPS between 29.7 and 29.9.

Kindest regards.

If you don’t mind, may I ask whether an unstable facenet could cause such an issue? From the h264 file, do you think there might be a problem with facenet, or does facenet give the output it should? Would messaging the heartrate output videos (those with the h264 extension) help?

Kindest regards.

No. You can check the generated test.mp4 file; the bbox of the face is correct. The main reason is that the training data for this model is limited, and it is only a simple model used for demonstration. As you can see from your different videos, there are different output results.
So if you want to improve the accuracy of this heartrate model, you need to retrain it with more datasets.

Thank you for your kind explanation,

The reason I thought an unstable facenet might have caused this is that the bounding box around my face was shaking and twitching significantly. Regarding my question about sending output videos, my intention was that you and your team could compare the results and values I see with the ones you see. As for the output values, yes, they do differ between videos; the results are similar, though, in the sense that they all contain a lot of 0s, and for every 100 to 150 frames with an inferred heart rate of 0 there is a single inconsistent, sometimes abnormal, value other than 0. May I ask whether you also see such an output, with many 0s and some inconsistent values? Also, would we agree that the results we see in this case are not the output seen when deepstream-heartrate-app was tested? Finally, could you explain how the videos used in our case differ from the ones used to train the model and to verify that it works correctly? (In other words, what should I do differently, and what should I change?) To see the outputs I get, please see the following output logs; a small tally sketch follows the list.

  • This is the output log of the command: GST_DEBUG=0 ./deepstream-heartrate-app 3 file:/opt/nvidia/deepstream/deepstream-6.0/samples/streams/heartrate_test_720p.mp4 ./heartrate

output_log_1.txt (24.6 KB)

  • This is the output log of the command: GST_DEBUG=0 ./deepstream-heartrate-app 3 file:/opt/nvidia/deepstream/deepstream-6.0/samples/streams/heartrate_test_natural_light_fhd.mp4 ./heartrate

output_log_2.txt (27.6 KB)

  • This is the output log of the command: GST_DEBUG=0 ./deepstream-heartrate-app 3 file:/opt/nvidia/deepstream/deepstream-6.0/samples/streams/heartrate_test_extra_light_source.mp4 ./heartrate

output_log_3.txt (65.3 KB)
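
As a rough way to quantify the zero/nonzero ratio in logs like the ones above, here is a minimal C++ tally sketch. It assumes the heart-rate readings have already been extracted one number per line (the app's actual log format differs, so some pre-filtering would be needed):

// tally.cpp - count zero vs. nonzero heart-rate readings, fed one per line on stdin.
#include <iostream>

int main() {
    double bpm = 0.0;
    long zeros = 0, nonzeros = 0;
    while (std::cin >> bpm) {
        if (bpm == 0.0) ++zeros; else ++nonzeros;
    }
    std::cout << "zero readings: " << zeros
              << ", nonzero readings: " << nonzeros << '\n';
    return 0;
}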

Kindest regards.

If you don’t mind, may I ask whether you and your team have found the reason behind the heart rate of 0, and whether you could offer an update on the analysis or some feedback?

Kindest regards.

Thank you for your kind advice,

Unfortunately, I don’t have the time or means to collect data for a dataset, and I want to test the heartrate app and see consistent results quickly. Besides, as I am not from the academic community, I have no means of accessing any of the datasets that require permission. Could you offer a dataset for which the only thing I would need to do is place it in the right directory?

Kindest regards.

Because it involves a person’s face, there may be copyright issues. We will discuss how to provide a demo video properly.

Thank you for your kind explanation,

I am looking forward to hearing from you.

Kindest regards.

Sorry, we cannot share our dataset. But you can use your own recorded videos to train the model; after doing this, it will be more suitable for your scenario.
You can refer to the following materials. Thanks.
1. https://github.com/NVIDIA/tao_tutorials/tree/95aca39c79cb9068593a6a9c3dcc7a509f4ad786/notebooks/tao_launcher_starter_kit/heartratenet
2. https://docs.nvidia.com/tao/tao-toolkit-archive/tao-30-2108/text/tao_cv_inf_pipeline/index.html

Thank you for informing me and for the materials.

If you don’t mind, could you explain how the videos used in our case (please kindly refer to my earlier messages) differ from the ones in your dataset?

Kindest regards.

In fact, some of your videos occasionally detecting values is correct behavior. This demo takes 10 s to warm up and then detects values every 3 s.
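
That cadence lines up with the frame counts reported earlier in the thread. A small worked example, assuming the ~45 fps average inference rate mentioned above (the fps figure comes from this thread, not from the app itself):

// cadence.cpp - relate the demo's reporting schedule to frame counts.
#include <cstdio>

int main() {
    const double inferenceFps  = 45.0; // average measured earlier in this thread (assumption)
    const double warmupSeconds = 10.0; // warm-up period stated above
    const double reportPeriod  = 3.0;  // one value every 3 s, stated above

    // Roughly 450 leading frames of 0, then about one nonzero value per
    // 135 frames, consistent with the observed one value per 100-150 frames.
    std::printf("warm-up frames: ~%.0f\n", inferenceFps * warmupSeconds);
    std::printf("frames per reported value: ~%.0f\n", inferenceFps * reportPeriod);
    return 0;
}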

Thank you for the information on the detection frequency of the heartrate model,

The problem in the videos for which the heartrate model can detect values is that the values are quite abnormal, like 220, 180, and 40. Could you and your team tell me the cause of these abnormal values, please?

Kindest regards.

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks

As I mentioned before, the model’s training dataset is limited, so the results are not very accurate; it is just used to briefly show the effect of the model. If you want precise inference, you need to train it with more data, and you can use your own recorded videos as the dataset to continue training.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.