FaceDetect pre-trained model implementation using DeepStream (DS)

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson AGX Orin
• DeepStream Version 6.2
• JetPack Version (valid for Jetson only) 5.1
• TensorRT Version 8.2.2

I’m trying to implement a face detection model in DeepStream, and I found the FaceDetect pre-trained model (FaceDetect | NVIDIA NGC). Is there documentation or a guideline on the steps I have to follow to implement the model in DS? And is there a Python version of the model?
I have tried to follow the DS TAO apps (deepstream_tao_apps/apps/tao_others/README.md at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub); per the README page it should include the FaceDetect model, but I can’t find it in the installed files.


This sample, deepstream-faciallandmark-app, will use the FaceDetect model. The model configuration file is config_infer_primary_facenet.yml.

Thank you. I’m trying to build the model, but the error below is generated when running the makefile in the folder:
make: Nothing to be done for ‘all’.

Is the build successful? Is there a deepstream-faciallandmark-app in deepstream_tao_apps/apps/tao_others/deepstream-faciallandmark-app?
Please refer to the steps in README.md.
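(For reference, the build flow from the repository README is roughly the sketch below; the install path and the CUDA_VER value are assumptions for DeepStream 6.2 on JetPack 5.1, so check them against your setup and the README for your release.)

```shell
# Sketch of the TAO apps build flow (paths and CUDA_VER are assumptions;
# follow the repository README for your exact release).
cd /opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_tao_apps
export CUDA_VER=11.4   # CUDA version shipped with JetPack 5.1; verify with "nvcc --version"
cd apps/tao_others
make                   # builds the sample apps, including deepstream-faciallandmark-app
```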

The make step in the README page gave the same error message. Yes, I have followed all the steps to download it, and deepstream-faciallandmark-app exists:
make[1]: Entering directory ‘/opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_tao_apps/apps/tao_others/deepstream-bodypose2d-app’
make[1]: Nothing to be done for ‘all’.
make[1]: Leaving directory ‘/opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_tao_apps/apps/tao_others/deepstream-bodypose2d-app’
make[1]: Entering directory ‘/opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_tao_apps/apps/tao_others/deepstream-faciallandmark-app’
make[1]: Nothing to be done for ‘all’.
make[1]: Leaving directory ‘/opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_tao_apps/apps/tao_others/deepstream-faciallandmark-app’
make[1]: Entering directory ‘/opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_tao_apps/apps/tao_others/deepstream-emotion-app’
make[1]: Nothing to be done for ‘all’.
make[1]: Leaving directory ‘/opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_tao_apps/apps/tao_others/deepstream-emotion-app’
make[1]: Entering directory ‘/opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_tao_apps/apps/tao_others/deepstream-emotion-app/emotion_impl’
make[1]: Nothing to be done for ‘all’.
make[1]: Leaving directory ‘/opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_tao_apps/apps/tao_others/deepstream-emotion-app/emotion_impl’
make[1]: Entering directory ‘/opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_tao_apps/apps/tao_others/deepstream-gaze-app’
make[1]: Nothing to be done for ‘all’.
make[1]: Leaving directory ‘/opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_tao_apps/apps/tao_others/deepstream-gaze-app’
make[1]: Entering directory ‘/opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_tao_apps/apps/tao_others/deepstream-gaze-app/gazeinfer_impl’
make[1]: Nothing to be done for ‘all’.
make[1]: Leaving directory ‘/opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_tao_apps/apps/tao_others/deepstream-gaze-app/gazeinfer_impl’
make[1]: Entering directory ‘/opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_tao_apps/apps/tao_others/deepstream-gesture-app’
make[1]: Nothing to be done for ‘all’.
make[1]: Leaving directory ‘/opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_tao_apps/apps/tao_others/deepstream-gesture-app’
make[1]: Entering directory ‘/opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_tao_apps/apps/tao_others/deepstream-heartrate-app/heartrateinfer_impl’
make[1]: Nothing to be done for ‘all’.
make[1]: Leaving directory ‘/opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_tao_apps/apps/tao_others/deepstream-heartrate-app/heartrateinfer_impl’
make[1]: Entering directory ‘/opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_tao_apps/apps/tao_others/deepstream-heartrate-app’
make[1]: Nothing to be done for ‘all’.
make[1]: Leaving directory ‘/opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_tao_apps/apps/tao_others/deepstream-heartrate-app’
make[1]: Entering directory ‘/opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_tao_apps/apps/tao_others/deepstream-mdx-perception-app’
make[1]: Nothing to be done for ‘all’.
make[1]: Leaving directory ‘/opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_tao_apps/apps/tao_others/deepstream-mdx-perception-app’

The app build has already succeeded. If you build again, make prints “make[1]: Nothing to be done for ‘all’.”; it is not an error. Please test the executable file according to the README.
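As an aside, this behavior is easy to reproduce with any trivial Makefile; the /tmp path and the "out" target below are made up purely for the demo:

```shell
# Demo: make only rebuilds out-of-date targets; running it a second time
# with nothing changed prints "Nothing to be done for 'all'." (not an error).
mkdir -p /tmp/make-demo && cd /tmp/make-demo
printf 'all: out\nout:\n\ttouch out\n' > Makefile
make   # first run: executes "touch out"
make   # second run: "make: Nothing to be done for 'all'."
```

To force a full rebuild of an already-built app, "make clean && make" in the app directory will recompile from scratch.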

Thank you. I have tried to run the model using the command
deepstream_tao_apps/post_processor/deepstream_tao_apps/configs/facial_tao/sample_faciallandmarks_config.txt /home/Jetson/Desktop/input/image1.jpg ./landmarks
This gives the error below. I have checked that the path to image1.jpg is correct:
ERROR from element uri-decode-bin: Invalid URI “/home/Jetson/Desktop/input/image1.jpg”.
Error details: gsturidecodebin.c(1383): gen_source_element (): /GstPipeline:pipeline/GstBin:source-bin-00/GstURIDecodeBin:uri-decode-bin
Returned, stopping playback
Average fps 0.000233
Totally 0 faces are inferred
Deleting pipeline

Please use file:///home/Jetson/Desktop/input/image1.jpg; the app’s source is uridecodebin, which expects a URI rather than a bare path.
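As a quick sanity check, a local path becomes a URI by prefixing the file:// scheme (the path below is the one from the error above):

```shell
# uridecodebin needs a URI, not a bare filesystem path.
IMG=/home/Jetson/Desktop/input/image1.jpg
URI="file://$IMG"
echo "$URI"   # file:///home/Jetson/Desktop/input/image1.jpg
```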

I’m getting this error:
failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_tao_apps/models/faciallandmark/facenet.etlt_b1_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x416x736
1 OUTPUT kFLOAT output_bbox/BiasAdd 4x26x46
2 OUTPUT kFLOAT output_cov/Sigmoid 1x26x46

0:04:45.140839273 6985 0xaaaaf2888ea0 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:../../../configs/facial_tao/config_infer_primary_facenet.txt sucessfully
Decodebin child added: source
Decodebin child added: decodebin0
Running…
Decodebin child added: nvjpegdec0
ReadFrameInfo: 634: HW doesn’t support progressive decode
In cb_newpad
###Decodebin pick nvidia decoder plugin.
nvstreammux: Successfully handled EOS for source_id=0
ERROR from element typefind: Internal data stream error.
Error details: gsttypefindelement.c(1228): gst_type_find_element_loop (): /GstPipeline:pipeline/GstBin:source-bin-00/GstURIDecodeBin:uri-decode-bin/GstDecodeBin:decodebin0/GstTypeFindElement:typefind:
streaming stopped, reason error (-5)
Returned, stopping playback
Average fps 0.000233
Totally 0 faces are inferred
Deleting pipeline

From the error, nvjpegdec failed to decode the image. Could you share image1.jpg? Thanks. You can also try other pictures.
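Given the “HW doesn’t support progressive decode” line in the log above, one likely cause is that the JPEG is progressive-encoded, which the Jetson hardware JPEG decoder cannot handle. A hedged sketch for checking and re-encoding the image; it assumes the "file" utility and ffmpeg are installed, and the filenames are examples:

```shell
# "file" mentions "progressive" in its output for progressive-encoded JPEGs.
file image1.jpg
# Re-encode to a baseline JPEG (ffmpeg writes baseline JPEG by default);
# -q:v 2 keeps the quality high.
ffmpeg -i image1.jpg -q:v 2 image1_baseline.jpg
```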


This is the image. I have tried other images, but I get the same output as above. For another image it reports that 20 faces were counted, as below, but without showing the output:

0:05:27.299895392 7107 0xaaaad35680a0 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:../../../configs/facial_tao/config_infer_primary_facenet.txt sucessfully
Decodebin child added: source
Decodebin child added: decodebin0
Running…
Decodebin child added: nvjpegdec0
In cb_newpad
###Decodebin pick nvidia decoder plugin.
nvstreammux: Successfully handled EOS for source_id=0
Keep the original bbox
Frame Number = 0 Face Count = 20
End of stream
Returned, stopping playback
Average fps 0.000233
Totally 20 faces are inferred
Deleting pipeline

  1. From your command, there will be a landmarks.jpg.
  2. Here is my test:
    command: ./deepstream-faciallandmark-app 1 ../../../configs/nvinfer/facial_tao/sample_faciallandmarks_config.txt file:///tmp/faciallandmarks_test.jpg ./faciallandmark
    log: log.txt (5.0 KB)

I’m using 2 (after reading the .txt file), not 1, and I got a similar output. But does it give an output image showing where the landmarks are located?
And is it only used for images, or is there a way to input a video or use a USB camera?

  1. 1 means fakesink; the app will output nothing.
  2. You can use this configuration, faciallandmark_app_config.yml, to test a video source. The command is ./deepstream-faciallandmark-app faciallandmark_app_config.yml; please refer to the doc build-and-run.
  1. I understand now, so I have to use 3 to display the output. It worked, but is there a way to make the output stay visible in the GUI? It only appears for a second.
    And is there a way to run the model from a USB cam or use a video as an input?
  2. The error below is generated from the .yml file:
    /opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_tao_apps/apps/tao_others/deepstream-faciallandmark-app
    $ ./deepstream-faciallandmark-app /opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_tao_apps/apps/tao_others/deepstream-faciallandmark-app/faciallandmark_app_config.yml
    terminate called after throwing an instance of ‘YAML::ParserException’
    what(): yaml-cpp: error at line 8, column 3: end of map not found
    Aborted (core dumped)

Please refer to this topic.

So there is no command I can use directly to allow the camera as an input. Please confirm whether I have understood correctly. In this case I have to:

  1. Use videoconvert to convert the YUYV to a format that nvvideoconvert supports, using "gst-launch-1.0 uridecodebin uri=v4l2:///dev/video0 ! videoconvert ! nvvideoconvert ! autovideosink" or "gst-launch-1.0 -v uridecodebin uri=v4l2:///dev/video0 ! videoconvert ! video/x-raw,format=(string)YVYU ! nvvideoconvert ! autovideosink"
  2. Add a videoconvert plugin between uridecodebin and nvvideoconvert

You need to check whether videoconvert is needed. You can use "gst-inspect-1.0 nvvideoconvert" to get the formats supported by nvvideoconvert, then check whether the device’s output format is supported by nvvideoconvert directly. If it is not, insert a videoconvert between uridecodebin and nvvideoconvert.
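One way to script that check is sketched below; the YUY2 format name and /dev/video0 are example assumptions, so substitute whatever your camera actually reports:

```shell
# Dump nvvideoconvert's capabilities to a file for inspection.
gst-inspect-1.0 nvvideoconvert > caps.txt
# List what the USB camera outputs (v4l2-ctl is in the v4l-utils package).
v4l2-ctl -d /dev/video0 --list-formats-ext
# If the camera's format (YUY2 used here as an example) is not among
# nvvideoconvert's sink caps, a videoconvert is needed in between.
if grep -q "YUY2" caps.txt; then
    echo "nvvideoconvert accepts YUY2 directly"
else
    echo "insert a videoconvert before nvvideoconvert"
fi
```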

I have attached the output log
Log (27.9 KB)
from gst-inspect-1.0 nvvideoconvert. I have installed DS recently; are there any docs I can refer to in order to understand it?

Please refer to nvvideoconvert.