How to access the image array in the GPU buffer without copying to a CPU buffer?

Hi,
I want to access the image array from the GStreamer buffer and then feed it to my custom model, and I have seen this example.
But one big problem with that example is that it copies the frame from the GPU buffer to a CPU buffer. I don't want to copy to a CPU buffer, because I need to convert the frame to a tensor on CUDA and then feed it to the model.
I want to get the image array from the GPU buffer without copying it to a CPU buffer.

It is not recommended to do this in the probe function in your scenario. You can integrate your model with the nvinfer plugin as an sgie.

I'm doing face recognition in DeepStream. I use a YOLOv8 model for face detection and the face_recognition library for face recognition. How do I make this face_recognition part an sgie?
Is there any example available to do this?

It's similar to our deepstream_lpr_app demo:
pgie (vehicle detection) -> sgie (license plate detection) -> sgie (license plate recognition).
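
For reference, here is a minimal Python sketch of that structure, chaining two nvinfer instances. The element names and the config file names (face_detector_pgie_config.txt, face_sgie_config.txt) are placeholders for illustration, not files shipped with the SDK:

```python
import sys
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# One nvinfer instance per model; each gets its own config file.
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
sgie = Gst.ElementFactory.make("nvinfer", "secondary-inference")
if not pgie or not sgie:
    sys.stderr.write("Unable to create nvinfer elements\n")
    sys.exit(1)

pgie.set_property("config-file-path", "face_detector_pgie_config.txt")  # placeholder
sgie.set_property("config-file-path", "face_sgie_config.txt")           # placeholder

# In the sgie config file, set process-mode=2 and operate-on-gie-id to the
# pgie's gie-unique-id so it runs on detected objects (crops), not full frames.
# Pipeline order: ... ! nvstreammux ! pgie ! sgie ! ... (linking omitted here).
```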

Do you have a Python example?

As your face_recognition is a library, it can't be an sgie. You can use it as the postprocessing part.

How do I use it as the postprocessing part?
Please help me with examples or instructions for that.
Is it possible to use it in the deepstream_imagedata-multistream.py code?

You can refer to deepstream_ssd_parser.py to learn how to analyze the data from nvinfer with Python.
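
For context, below is a condensed sketch of the probe pattern used in deepstream_ssd_parser.py, assuming the inference plugin is configured to attach its raw output tensors as metadata (tensor output meta enabled). The probe name and loop details are simplified for illustration:

```python
import pyds
from gi.repository import Gst

def pgie_src_pad_buffer_probe(pad, info, u_data):
    """Walk the batch metadata attached by the inference plugin and read the
    raw output tensors so a custom postprocess can consume them."""
    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                for i in range(tensor_meta.num_output_layers):
                    layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
                    # layer.layerName and layer.buffer expose the raw output;
                    # run your own parsing/postprocessing here.
            try:
                l_user = l_user.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```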

I referred to deepstream_ssd_parser.py.
First I ran the script to understand it better,
but I am facing this error:
pgie.set_property("config-file-path", "/home/shalu/Documents/projects/deepstream_python_apps/apps/deepstream-ssd-parser/dstest_ssd_nopostprocess.txt")
AttributeError: 'NoneType' object has no attribute 'set_property'

Please provide complete information as applicable to your setup. Thanks
Hardware Platform (Jetson / GPU)
DeepStream Version
JetPack Version (valid for Jetson only)
TensorRT Version
NVIDIA GPU Driver Version (valid for GPU only)
Issue Type (questions, new requirements, bugs)
How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing)
Requirement details (This is for a new requirement. Include the module name, i.e. which plugin or which sample application, and the function description)

If you are using a Jetson board, the nvinferserver element may fail to be created. You need to run /opt/nvidia/deepstream/deepstream/samples/triton_backend_setup.sh to install Triton.
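
In that failure case, Gst.ElementFactory.make() returns None, which is what produces the AttributeError above. A small defensive sketch (the element name and config file here are just examples taken from the demo):

```python
import sys
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# If the plugin cannot be loaded (e.g. Triton backends missing for
# nvinferserver), ElementFactory.make() returns None, and any later
# set_property() call raises "'NoneType' object has no attribute 'set_property'".
pgie = Gst.ElementFactory.make("nvinferserver", "primary-inference")
if pgie is None:
    sys.stderr.write("Unable to create nvinferserver; is Triton installed?\n")
    sys.exit(1)

pgie.set_property("config-file-path", "dstest_ssd_nopostprocess.txt")
```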

Hardware Platform (Jetson / GPU): GPU
DeepStream Version: 535.104.12

TensorRT Version: TensorRT 8.6.1.6
NVIDIA GPU Driver Version (valid for GPU only): GeForce RTX 3050
Issue Type (questions, new requirements, bugs): bugs
How to reproduce the issue? deepstream_imagedata-multistream.py code
Requirement details: my project is face detection and face recognition.
Face detection is created as the pgie and works fine; I tried adding the face recognition code directly into it, but I am facing issues.
I can't make face recognition an sgie because I am using the face_recognition library to test it.
How can I use the face_recognition library in it?

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

There are two ways to fulfill your needs.

  1. Refer to our deepstream_ssd_parser.py demo. If you want to run this demo, you will need to install Triton in your environment, or just use our Docker image with Triton. With this approach, the memory copy you mentioned cannot be avoided (see the sketch after this list).
  2. Use the nvdsvideotemplate plugin to handle the buffers, but you need to implement that in C/C++. You can refer to our deepstream-emotion-app demo.
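
To illustrate option 1, here is a rough sketch of calling the face_recognition library from a pad probe, assuming the pipeline is configured like deepstream_imagedata-multistream.py (RGBA frames and, on dGPU, unified CUDA memory). The probe name is arbitrary, and pyds.get_nvds_buf_surface() is exactly the GPU-to-CPU copy discussed in this thread:

```python
import numpy as np
import pyds
import face_recognition
from gi.repository import Gst

def face_recognition_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # Map the frame as a numpy RGBA array on the CPU -- this is the copy
        # that cannot be avoided with this approach.
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        # Drop the alpha channel and make a contiguous uint8 RGB copy for dlib.
        rgb = np.ascontiguousarray(n_frame[:, :, :3])

        # Collect detector boxes as (top, right, bottom, left) tuples,
        # the format face_recognition expects.
        locations = []
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            r = obj_meta.rect_params
            locations.append((int(r.top), int(r.left + r.width),
                              int(r.top + r.height), int(r.left)))
            try:
                l_obj = l_obj.next
            except StopIteration:
                break

        if locations:
            encodings = face_recognition.face_encodings(
                rgb, known_face_locations=locations)
            # Compare `encodings` against your known-face database here.

        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```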

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.