Hello, I'm developing a deep learning RTSP Python server by modifying deepstream_python_apps/deepstream-imagedata-multistream-cupy.
My approach is to extract frames from the pipeline through a probe in deepstream-imagedata-multistream-cupy.py and apply my custom deep learning module, which converts each frame into a low-light-enhanced frame.
My custom deep learning module was originally developed in PyTorch, but I converted it to TensorRT (.trt).
At the end of my DeepStream pipeline, I can see the live video stream over RTSP. THE PROBLEM is that the live stream does not appear to have been converted into low-light-enhanced frames by my deep learning model.
My frame-conversion algorithm is roughly as follows.
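Since the actual probe code isn't shown, here is a minimal sketch of the write-back pattern the CuPy sample relies on; `enhance` is a hypothetical stand-in for the TensorRT engine call, and NumPy stands in for CuPy so the snippet runs anywhere. The usual cause of "output looks unconverted" symptoms is that the result of the model is bound to a new array instead of being copied back into the buffer the pipeline owns:

```python
import numpy as np

# Hypothetical stand-in for the TensorRT low-light model: just brightens the frame.
def enhance(frame):
    return np.clip(frame.astype(np.float32) * 1.5, 0, 255).astype(np.uint8)

def convert_in_place(mapped_frame):
    """Write the enhanced result back into the SAME buffer the pipeline owns.

    In the deepstream-imagedata-multistream-cupy sample, the array built from
    pyds.get_nvds_buf_surface_gpu() wraps the pipeline's GPU memory. Rebinding
    the name (mapped_frame = enhance(mapped_frame)) creates a new array and
    leaves the original buffer untouched, so the RTSP output shows the raw
    frames. Slice assignment copies the result into the mapped memory instead.
    """
    mapped_frame[:] = enhance(mapped_frame)

buffer = np.full((4, 4, 3), 60, dtype=np.uint8)  # stands in for the mapped surface
convert_in_place(buffer)
print(buffer[0, 0, 0])  # 90: the pipeline's buffer now holds the enhanced pixels
```

With real CuPy arrays the same `[:]` assignment applies, and the CUDA stream used for inference should be synchronized before the probe returns so the downstream elements see the finished write.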
TensorRT Version: 220.127.116.11
GPU Type: dGPU (A6000)
Nvidia Driver Version: 520.61.03
CUDA Version: 11.8.0.062
CUDNN Version: 18.104.22.168
Operating System + Version: Ubuntu 20.04 Server
Python Version (if applicable): 3.8.10
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
Steps To Reproduce
- Exact steps/commands to build your repro
- Exact steps/commands to run your repro
- Full traceback of errors encountered