DeepStream 6.2 Python frame extraction for a custom deep learning model (TensorRT)

Description

Hello, I’m developing a deep-learning RTSP Python server by modifying deepstream_python_apps/deepstream-imagedata-multistream-cupy.
My approach is to extract frames from the pipeline through a probe in deepstream-imagedata-multistream-cupy.py and apply my custom deep-learning module, which converts each frame into a low-light-enhanced frame.
My custom module was originally developed in PyTorch, then converted to TensorRT (.trt).
At the end of my DeepStream pipeline I can see the live video stream over RTSP. But THE PROBLEM is that the live stream does not appear to have been converted into low-light-enhanced frames by my deep-learning model.
My frame-conversion code looks roughly like the sketch below.
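(Condensed sketch, following the deepstream-imagedata-multistream-cupy sample; model.predict is my custom TensorRT wrapper passed in as u_data, not a pyds API, and the probe boilerplate is abbreviated.)

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import cupy as cp
import pyds

def lowlight_probe(pad, info, u_data):
    model = u_data  # custom TensorRT low-light module (hypothetical wrapper)
    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # Map this frame's GPU buffer into a CuPy array (zero-copy view).
        data_type, shape, strides, dataptr, size = pyds.get_nvds_buf_surface_gpu(
            hash(gst_buffer), frame_meta.batch_id)
        unownedmem = cp.cuda.UnownedMemory(dataptr, size, owner=None)
        memptr = cp.cuda.MemoryPointer(unownedmem, 0)
        n_frame_gpu = cp.ndarray(shape=shape, dtype=data_type,
                                 strides=strides, memptr=memptr, order="C")
        n_frame_numpy = cp.asnumpy(n_frame_gpu)
        # Apply the low-light model and keep the result as the frame.
        n_frame_gpu = model.predict(n_frame_numpy)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```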

Environment

TensorRT Version: 8.5.2.2
GPU Type: dGPU (A6000)
Nvidia Driver Version: 520.61.03
CUDA Version: 11.8.0.062
CUDNN Version: 8.6.0.163
Operating System + Version: Ubuntu 20.04 Server
Python Version (if applicable): 3.8.10

Hi,

This looks like a DeepStream-related issue. We will move this post to the DeepStream forum.

Thanks!

When n_frame_gpu is replaced by model.predict(n_frame_numpy), the buffer pointer is lost. Your code is wrong: you cannot use n_frame_gpu in this way. n_frame_gpu is the pointer to the frame buffer; you cannot change the pointer itself, but you can change the values in the array.
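Concretely: keep n_frame_gpu bound to the mapped buffer and copy the model output into it element-wise. A minimal sketch of the fix, under the same assumptions as the snippet above (model.predict is the custom module, and its output is assumed to match the frame's shape):

```python
# Run inference on a host copy of the frame, then write the result back
# through the existing buffer pointer instead of rebinding the name.
enhanced = model.predict(cp.asnumpy(n_frame_gpu))  # hypothetical model call
n_frame_gpu[:] = cp.asarray(enhanced, dtype=n_frame_gpu.dtype)
cp.cuda.runtime.deviceSynchronize()  # ensure the write completes before downstream reads
```

Because the slice assignment writes through the same device memory the NvBufSurface points to, the encoder and RTSP sink downstream will see the enhanced pixels.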

Thank you so much… I solved it!

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.