How to use DeepStream to detect and capture faces


How can I detect faces using DeepStream with an RTSP stream from a Raspberry Pi on a Jetson Nano?

By default, the DeepStream SDK demonstrates ResNet/SSD/YOLO models. You would need to convert the face-detection model to a TensorRT-runnable format and modify the config file to replace the default models.
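As a rough illustration of the config change, here is a minimal sketch of an `nvinfer` configuration group pointing at a custom detector. All file names and values below are placeholders, not the actual files for any particular face model:

```ini
# Sketch of a DeepStream nvinfer config for a custom face detector.
# Paths (onnx-file, model-engine-file, labelfile-path) are placeholders.
[property]
gpu-id=0
# Custom model replacing the default ResNet/SSD/YOLO samples:
onnx-file=face_detector.onnx
model-engine-file=face_detector.onnx_b1_gpu0_fp16.engine
labelfile-path=labels_face.txt
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
# A face detector typically has a single class
num-detected-classes=1
gie-unique-id=1
```

DeepStream will build the TensorRT engine from the ONNX file on first run if the engine file does not yet exist.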

How can we do this with dlib? We have more experience with it and currently have a dlib implementation running, but it suffers from serious lag.

Can you help with a sample detection model I can use, and show how to implement it with TensorRT?

Lastly, we want to connect this to our Python script, which records and uploads images and videos to the cloud.



First, please check if you have enabled GPU support in the dlib library.
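One quick way to verify this from Python is to inspect the `DLIB_USE_CUDA` flag, which dlib exposes on builds compiled with CUDA support:

```python
# Quick check that your dlib build was compiled with CUDA support.
# If dlib is not installed in this environment, report that instead.
try:
    import dlib
    cuda_enabled = bool(getattr(dlib, "DLIB_USE_CUDA", False))
    print("dlib CUDA support:", cuda_enabled)
except ImportError:
    cuda_enabled = None
    print("dlib is not installed")
```

If this prints `False`, dlib is running entirely on the CPU, which by itself can explain serious lag; you would need to rebuild dlib against CUDA and cuDNN.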

If you want to optimize the camera part with DeepStream, you can check this sample to get the data buffer and feed it into dlib:
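The general pattern is a GStreamer pad probe that maps each batched frame into a NumPy array via the DeepStream Python bindings (`pyds`) and hands it to dlib. The sketch below assumes `pyds`, GStreamer, and a CUDA-enabled dlib are installed on the Jetson; details such as the surface color format can vary by platform, so treat it as an outline rather than a drop-in implementation:

```python
# Sketch of a pad-probe callback that extracts frames from a DeepStream
# pipeline and runs a dlib face detector on them. Assumes pyds/GStreamer
# are available; imports are kept inside the function so the module can
# be loaded without them.
def face_probe(pad, info, u_data):
    import numpy as np
    import dlib
    import pyds
    from gi.repository import Gst

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    detector = dlib.get_frontal_face_detector()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # Map the NVMM surface into a NumPy array (RGBA on Jetson).
        surface = pyds.get_nvds_buf_surface(hash(gst_buffer),
                                            frame_meta.batch_id)
        # dlib expects a contiguous 8-bit RGB image.
        rgb = np.ascontiguousarray(np.array(surface, copy=True)[:, :, :3])
        faces = detector(rgb)
        print("frame", frame_meta.frame_num, "faces:", len(faces))
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```

You would attach this with `pad.add_probe(Gst.PadProbeType.BUFFER, face_probe, 0)` on a suitable element pad downstream of the decoder.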

A better solution is to use DeepStream for the whole pipeline.
Suppose you are using the dlib model from this repository:

Then you will need to convert it into a TensorRT-supported format (Caffe, ONNX, or UFF), or create the weights directly with the TensorRT API.
You can find a sample for each format in the folder below:

$ /usr/src/tensorrt/samples/
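For the ONNX route, building a serialized engine with the TensorRT Python API looks roughly like the sketch below. It assumes TensorRT 8+ (`build_serialized_network`; older releases used `build_engine` instead), and the file paths are placeholders:

```python
# Sketch: build a TensorRT engine from an ONNX model (TensorRT 8+ API).
# Paths are placeholders; imports are inside the function so the module
# loads even where TensorRT is not installed.
def build_engine_from_onnx(onnx_path, engine_path, fp16=True):
    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(str(parser.get_error(0)))

    config = builder.create_builder_config()
    if fp16 and builder.platform_has_fast_fp16:
        config.set_flag(trt.BuilderFlag.FP16)

    serialized = builder.build_serialized_network(network, config)
    with open(engine_path, "wb") as f:
        f.write(serialized)
```

The resulting `.engine` file can then be referenced from the DeepStream `nvinfer` config (`model-engine-file=...`), or you can point `nvinfer` at the ONNX file and let DeepStream build the engine itself on first run.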