Hi,
I am using the following tutorial to implement face recognition on Jetson nano
post link
The tutorial uses the Dlib and face_recognition libraries for face detection and recognition. It reaches 6-7 FPS on the Jetson Nano. How can I integrate this tutorial with DeepStream, since I want to run it on several camera streams?
thanks
Hi,
dlib has its own detection and recognition model.
If you can convert the model into TensorRT, you can just replace the path and parser in a standard pipeline:
If the model isn't fully supported, you can try to insert the dlib implementation into the below sample.
#!/usr/bin/env python3
################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2020-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
import sys
(file truncated)
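As a rough illustration of the "replace the path and parser" suggestion above, a Gst-nvinfer config sketch is below. The engine path, parser function name, and library path are hypothetical placeholders for a converted dlib detector, not files that ship with DeepStream:

```
[property]
gpu-id=0
# Hypothetical TensorRT engine converted from the dlib face detector
model-engine-file=/path/to/dlib_face_detector.engine
batch-size=1
network-mode=2
num-detected-classes=1
# Hypothetical custom output parser you would implement for the dlib head
parse-bbox-func-name=NvDsInferParseCustomDlibFace
custom-lib-path=/path/to/libdlib_face_parser.so
```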
Please note that the buffer transferred between components is a GPU buffer.
Please check first whether dlib can accept a GPU buffer.
Below is the TensorRT support matrix for your reference:
These support matrices provide a look into the supported platforms, features, and hardware capabilities of the NVIDIA TensorRT 8.4.3 APIs, parsers, and layers.
Thanks.
Thanks @AastaLLL, I'll check all the methods you mentioned and get back. Thanks once again.
Is there any existing pipeline for face recognition with DeepStream?
thanks
Is there a way to just get a video feed with DeepStream from multiple sources? I don't want to run any inference.
Hi,
We don't have a face recognition pipeline, since there is no face-related model in DeepStream.
Feeding multiple sources into DeepStream is easy.
For example, with /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/source30_1080p_dec_infer-resnet_tiled_display_int8.txt .
1. Turn off inference
...
[primary-gie]
enable=0
gpu-id=0
...
2. Run
$ deepstream-app -c source30_1080p_dec_infer-resnet_tiled_display_int8.txt
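If you prefer toggling the group programmatically, the deepstream-app config is INI-style, so Python's configparser can flip the flag. A sketch on a minimal stand-in string (the real file has many more groups):

```python
import configparser

# Minimal stand-in for the [primary-gie] group of the deepstream-app config.
cfg_text = """\
[primary-gie]
enable=1
gpu-id=0
"""

cfg = configparser.ConfigParser()
cfg.read_string(cfg_text)

# Turn off inference, as in step 1 above.
cfg["primary-gie"]["enable"] = "0"

print(cfg["primary-gie"]["enable"])  # 0
```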
Thanks.
Awesome, thanks. How can I do the same in Python, @AastaLLL?
Where can I access the frames of this stream, @AastaLLL?
Hi,
You can find some Python-based samples in our GitHub below:
To access the image data, please check the following comment for information:
Hi,
You can save the raw image in a similar way.
The frame buffer can be accessed with the following function:
n_frame=pyds.get_nvds_buf_surface(hash(gst_buffer),frame_meta.batch_id)
frame_image=np.array(n_frame,copy=True,order='C')
frame_image=cv2.cvtColor(frame_image,cv2.COLOR_RGBA2BGRA)
For example, the deepstream_imagedata-multistream.py (16.1 KB) sample dumps each frame into the same folder.
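A note on that snippet: copy=True matters because the mapped surface is only valid while the buffer is held in the probe, and the RGBA-to-BGRA conversion is just a channel swap. A minimal numpy-only illustration of both (the arange array stands in for the mapped frame; pyds and cv2 are not needed here):

```python
import numpy as np

# Stand-in for the frame returned by pyds.get_nvds_buf_surface():
# an RGBA image of height 2, width 3.
n_frame = np.arange(2 * 3 * 4, dtype=np.uint8).reshape(2, 3, 4)

# Deep-copy into a contiguous array, as in the forum snippet; the real
# surface is only valid while the buffer is mapped, so the copy is required.
frame_image = np.array(n_frame, copy=True, order="C")

# cv2.cvtColor(..., cv2.COLOR_RGBA2BGRA) is equivalent to swapping the
# R and B channels, which numpy can do with fancy indexing.
bgra = frame_image[..., [2, 1, 0, 3]]

print(bgra[0, 0].tolist())  # [2, 1, 0, 3]: R and B swapped vs. n_frame[0, 0]
```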
Thanks.
Thanks @AastaLLL I’ll look into it.
How can I disable pgie and the tracker in the following example?
thanks
Hi,
You can comment out the related lines:
print("Creating EGLSink \n")
sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
if not sink:
sys.stderr.write(" Unable to create egl sink \n")
if is_live:
print("Atleast one of the sources is live")
streammux.set_property('live-source', 1)
streammux.set_property('width', 1920)
streammux.set_property('height', 1080)
streammux.set_property('batch-size', number_sources)
streammux.set_property('batched-push-timeout', 4000000)
pgie.set_property('config-file-path', "dstest_imagedata_config.txt")
pgie_batch_size = pgie.get_property("batch-size")
if (pgie_batch_size != number_sources):
print("WARNING: Overriding infer-config batch-size", pgie_batch_size, " with number of sources ",
number_sources, " \n")
pgie.set_property("batch-size", number_sources)
tiler_rows = int(math.sqrt(number_sources))
tiler_columns = int(math.ceil((1.0 * number_sources) / tiler_rows))
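The tiler math at the end of that snippet picks a near-square grid. A quick standalone check of what it produces for a few source counts:

```python
import math

def tiler_dims(number_sources):
    """Replicates the rows/columns computation from the snippet above."""
    tiler_rows = int(math.sqrt(number_sources))
    tiler_columns = int(math.ceil((1.0 * number_sources) / tiler_rows))
    return tiler_rows, tiler_columns

print(tiler_dims(4))   # (2, 2)
print(tiler_dims(5))   # (2, 3)
print(tiler_dims(30))  # (5, 6)
```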
Thanks.