How to stream the output of Autoencoder models?

I have a denoising autoencoder model that takes an image as input and outputs the de-noised image. I would like to use it in a DeepStream Python app and view the model's output on screen. I see there's an example here: deepstream_python_apps/deepstream_segmentation.py at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub, where the output frames are stored in a file. That method would be extremely slow; is it possible to stream and view the video with DeepStream instead? I see there are ways to add bounding boxes, text, etc. to the frames, but I couldn't find a way to modify the pixel values of the frame itself.
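As far as I can tell, the usual approach is a GStreamer pad probe in which pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id) exposes the frame as a NumPy array (this requires the pipeline to carry RGBA caps at that point, as in the deepstream_imagedata samples). Writing into that array in place changes what downstream elements render. A minimal NumPy sketch of just the in-place write, where frame is a hypothetical stand-in for the view the real call returns:

```python
import numpy as np

# Hypothetical stand-in for the HxWx4 RGBA NumPy view that
# pyds.get_nvds_buf_surface() would return inside a pad probe.
frame = np.zeros((720, 1280, 4), dtype=np.uint8)

def write_denoised(frame_view, denoised_rgb):
    """Overwrite the frame's RGB channels in place.

    The in-place slice assignment is the key point: it writes
    through the view into the underlying buffer, so downstream
    elements (e.g. nvdsosd and the sink) render the modified
    pixels. Rebinding the name to a new array would not work.
    """
    frame_view[:, :, :3] = denoised_rgb

# Pretend this is the autoencoder's de-noised output for one frame.
denoised = np.full((720, 1280, 3), 128, dtype=np.uint8)
write_denoised(frame, denoised)
```

In a real probe you would run the frame through the model first and copy its output back the same way; on Jetson the mapped surface lives in unified memory, so the write is visible to the rest of the pipeline without an explicit copy.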


There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and any other details needed to reproduce it.)
• Requirement details (This is for new requirements. Include the module name — which plugin or which sample application — and a description of the function.)

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.