FaceDetect pre-trained model implementation using DeepStream

I'm using file:///home/Jetson/Desktop/input/image1.jpg as the input.

I'm getting this error:
failed to serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_tao_apps/models/faciallandmark/facenet.etlt_b1_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x416x736
1 OUTPUT kFLOAT output_bbox/BiasAdd 4x26x46
2 OUTPUT kFLOAT output_cov/Sigmoid 1x26x46

0:04:45.140839273 6985 0xaaaaf2888ea0 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:../../../configs/facial_tao/config_infer_primary_facenet.txt successfully
Decodebin child added: source
Decodebin child added: decodebin0
Running…
Decodebin child added: nvjpegdec0
ReadFrameInfo: 634: HW doesn't support progressive decode
In cb_newpad
###Decodebin pick nvidia decoder plugin.
nvstreammux: Successfully handled EOS for source_id=0
ERROR from element typefind: Internal data stream error.
Error details: gsttypefindelement.c(1228): gst_type_find_element_loop (): /GstPipeline:pipeline/GstBin:source-bin-00/GstURIDecodeBin:uri-decode-bin/GstDecodeBin:decodebin0/GstTypeFindElement:typefind:
streaming stopped, reason error (-5)
Returned, stopping playback
Average fps 0.000233
Totally 0 faces are inferred
Deleting pipeline

From the error, it is because the nvjpegdec decode failed. Could you share image1.jpg? Thanks. You can also try other pictures.
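The "HW doesn't support progressive decode" line in your log suggests image1.jpg may be a progressive JPEG, which the Jetson hardware decoder cannot handle. As a sketch, assuming ImageMagick is installed, you could check for this and re-encode the image as a baseline JPEG:

  # check whether the JPEG is progressive ("Interlace: JPEG" means progressive)
  identify -verbose /home/Jetson/Desktop/input/image1.jpg | grep Interlace
  # re-encode as a baseline (non-progressive) JPEG
  convert /home/Jetson/Desktop/input/image1.jpg -interlace none /home/Jetson/Desktop/input/image1_baseline.jpg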


This is the image. I have tried other images but got the same output as above. For another image it shows a face count of 20, as below, but without showing the output:

0:05:27.299895392 7107 0xaaaad35680a0 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:../../../configs/facial_tao/config_infer_primary_facenet.txt successfully
Decodebin child added: source
Decodebin child added: decodebin0
Running…
Decodebin child added: nvjpegdec0
In cb_newpad
###Decodebin pick nvidia decoder plugin.
nvstreammux: Successfully handled EOS for source_id=0
Keep the original bbox
Frame Number = 0 Face Count = 20
End of stream
Returned, stopping playback
Average fps 0.000233
Totally 20 faces are inferred
Deleting pipeline

  1. From your command, there will be a landmarks.jpg.
  2. Here is my test:
    command: ./deepstream-faciallandmark-app 1 ../../../configs/nvinfer/facial_tao/sample_faciallandmarks_config.txt
    file:///tmp/faciallandmarks_test.jpg ./faciallandmark
    log: log.txt (5.0 KB)

I'm using 2 after reading the .txt file, not 1. I got similar output, but does it produce an output image showing where the landmarks are located?
And is it only used for images, or is there a way to input a video or use a USB camera?

  1. 1 means fakesink; the app will output nothing.
  2. You can use this configuration, faciallandmark_app_config.yml, to test a video source. The command is ./deepstream-faciallandmark-app faciallandmark_app_config.yml. Please refer to the doc build-and-run.
  1. I understand now, so I have to use 3 to display the output. It worked, but is there a way to make the output stay visible in the GUI? It only appears for a second.
    And is there a way to run the model from a USB cam or use a video as an input?
  2. The error below is generated from the .yml file:
    /opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_tao_apps/apps/tao_others/deepstream-faciallandmark-app
    $ ./deepstream-faciallandmark-app /opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_tao_apps/apps/tao_others/deepstream-faciallandmark-app/faciallandmark_app_config.yml
    terminate called after throwing an instance of 'YAML::ParserException'
    what(): yaml-cpp: error at line 8, column 3: end of map not found
    Aborted (core dumped)

please refer to this topic.
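For the YAML::ParserException at line 8, one common cause (an assumption here, since the file itself wasn't shared) is a tab character or inconsistent indentation in the config. A quick check that makes tabs visible as ^I:

  # print the first 12 lines with tabs shown as ^I and line ends as $
  cat -A faciallandmark_app_config.yml | sed -n '1,12p'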

So there is no command I can use directly to allow the camera as an input; please confirm if I have understood correctly. In this case I have to:

  1. Use videoconvert to convert the YUYV to a format that nvvideoconvert supports, using "gst-launch-1.0 uridecodebin uri=v4l2:///dev/video0 ! videoconvert ! nvvideoconvert ! autovideosink" or "gst-launch-1.0 -v uridecodebin uri=v4l2:///dev/video0 ! videoconvert ! video/x-raw,format=(string)YVYU ! nvvideoconvert ! autovideosink"
  2. Add a videoconvert plugin between uridecodebin and nvvideoconvert

You need to check whether videoconvert is needed. You can use "gst-inspect-1.0 nvvideoconvert" to get the formats supported by nvvideoconvert, then check whether the device's output format is supported by nvvideoconvert directly. If it isn't supported, you need to insert a videoconvert between uridecodebin and nvvideoconvert.
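As a concrete sketch of that check, assuming v4l2-utils is installed and the camera is /dev/video0:

  # list the USB camera's pixel formats and resolutions
  v4l2-ctl --device=/dev/video0 --list-formats-ext
  # list the raw formats nvvideoconvert accepts on its sink pad
  gst-inspect-1.0 nvvideoconvert | grep -A 20 "SINK template"

If the camera's format (for example YUYV, which GStreamer calls YUY2) does not appear in the sink caps, insert videoconvert as described above.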

I have attached the output log from gst-inspect-1.0 nvvideoconvert:
Log (27.9 KB)
I have installed DS recently; are there any docs I can refer to to understand it?

please refer to nvvideoconvert.

Thank you, but the link didn't help much. I'm still facing an error with the .yml file, and I still can't understand how to use a USB camera for the test. Where should nvvideoconvert be plugged in?

please share the whole log.

nvvideoconvert is already in the deepstream_det_app code; you don't need to add a new one.
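If you want to sanity-check the camera path outside the app, here is a minimal pipeline sketch with the same ordering (videoconvert before nvvideoconvert); the YUY2 caps are an assumption for a typical YUYV USB camera:

  gst-launch-1.0 -v v4l2src device=/dev/video0 ! video/x-raw,format=YUY2 ! videoconvert ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! fakesink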

Thank you, the use of the camera is now clear. For the .yml file, this is the full log:
/opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_tao_apps/apps/tao_others/deepstream-faciallandmark-app$ ./deepstream-faciallandmark-app faciallandmark_app_config.yml
./deepstream-faciallandmark-app: error while loading shared libraries: libnvcv_faciallandmarks.so: cannot open shared object file: No such file or directory

I couldn't figure out where this libnvcv_faciallandmarks.so file is used. I found the config below in the facenet config file:
tlt-encoded-model: ../../models/faciallandmark/facenet.etlt
labelfile-path: labels_facenet.txt
int8-calib-file: ../../models/faciallandmark/facenet_cal.txt
model-engine-file: ../../models/faciallandmark/facenet.etlt_b1_gpu0_int8.engine

But when I checked what I have installed, the facenet_cal.txt and facenet.etlt_b1_gpu0_int8.engine files do not exist in the ../../models/faciallandmark/ folder; only these files exist: 1. facenet.etlt 2. faciallandmarks.etlt 3. int8_calibration.txt

After making successfully, please execute "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/nvidia/deepstream/deepstream/lib/cvcore_libs"; please refer to the doc.
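To make that concrete, here is the export plus an optional check that the library now resolves:

  # add the cvcore libraries to the search path for this shell
  export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/nvidia/deepstream/deepstream/lib/cvcore_libs
  # libnvcv_faciallandmarks.so should now resolve instead of showing "not found"
  ldd ./deepstream-faciallandmark-app | grep faciallandmarks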

At the first run, the app will generate a TensorRT engine file; after the first run, the app will load the engine directly if the engine path is set.
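This also relates to the earlier "failed to serialize cuda engine to file" message: the engine is built on the first run and then cached, so the app needs write access to the models directory. A hedged check (running once with sudo, or changing the directory's ownership, are the usual remedies):

  # verify the directory the engine is written to is writable by the current user
  ls -ld /opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_tao_apps/models/faciallandmark/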

To implement the FaceDetect model in NVIDIA DeepStream using the given setup (Jetson AGX Orin, DeepStream 6.2, JetPack 5.1, and TensorRT 8.2.2), follow these steps:

  1. Obtain the FaceDetect Model:
  • Download the FaceDetect model from the NVIDIA NGC website or through the DeepStream TAO Toolkit (a download sketch follows this list). Ensure you have the necessary license to use the model.
  2. Install DeepStream TAO Toolkit:
  • Make sure you have the DeepStream TAO Toolkit installed on your system. Refer to the official NVIDIA documentation for installation instructions.
  3. Follow DeepStream TAO Apps README:
  • Navigate to the tao_others directory in the DeepStream TAO Apps repository and follow the instructions in the README.md file to run applications based on the FaceDetect model.
  4. Verify Model Files:
  • After following the instructions, ensure that the model files are correctly installed and present in the designated directories. Double-check the installation location to confirm the model was downloaded successfully.
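For step 1, a sketch of fetching the FaceDetect (FaceNet) model with the NGC CLI; the version tag below is an example, so check the model page on NGC for the current one. Alternatively, the deepstream_tao_apps repository ships a download script that fetches the required models:

  # download the FaceDetect (FaceNet) model from NGC; version tag is an assumption
  ngc registry model download-version "nvidia/tao/facenet:pruned_quantized_v2.0.1"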

Regarding a Python version of the model: TAO provides pre-trained models for use with DeepStream, but it does not specifically offer a Python version. DeepStream primarily uses C/C++ for performance reasons, but Python can still be used for higher-level control and integration.

Thanks for sharing. Is this still a DeepStream issue to support? Thanks.
