Hi,
As part of frame processing, I want to scale down the frame size and convert the frame into binary data before posting it to the Kafka message broker.
My question is: is it better, from a performance and resource-utilization perspective, to perform these operations in our application once we receive each frame, or to write a custom GStreamer element?
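To illustrate what the per-frame work looks like in application code, here is a minimal sketch. It assumes the frame is a tightly packed 8-bit grayscale buffer and uses nearest-neighbor sampling; a real pipeline would do this on the GPU (e.g. via nvvideoconvert) rather than in pure Python, so treat this only as a statement of the operation, not a recommended implementation.

```python
def downscale(frame: bytes, width: int, height: int, factor: int) -> bytes:
    """Nearest-neighbor downscale of a tightly packed 8-bit frame.

    The returned bytes object is already binary data, ready to hand to a
    Kafka producer's send() as the message value.
    """
    out = bytearray()
    for y in range(0, height, factor):
        row = y * width
        out.extend(frame[row + x] for x in range(0, width, factor))
    return bytes(out)

# Example: a 4x4 frame with pixel values 0..15 reduced to 2x2,
# keeping pixels (0,0), (0,2), (2,0), (2,2).
frame = bytes(range(16))
small = downscale(frame, 4, 4, 2)  # -> b'\x00\x02\x08\n'
```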
Hi,
Do you use the DeepStream SDK? In the DeepStream SDK, a possible solution is to downscale the frames through nvvideoconvert, encode them to JPEG, and send them to the Kafka message broker.
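As a rough sketch of that approach, a pipeline along these lines could be tried first with gst-launch-1.0 (element names and caps here are assumptions based on a typical Jetson/DeepStream install; nvarguscamerasrc is a placeholder for whatever your actual source is, and exact resolutions/formats depend on your setup):

```shell
# Capture, downscale on the GPU via nvvideoconvert, JPEG-encode each frame,
# and write them out as files just to verify the pipeline works.
gst-launch-1.0 nvarguscamerasrc num-buffers=100 ! \
  'video/x-raw(memory:NVMM),width=1920,height=1080' ! \
  nvvideoconvert ! 'video/x-raw(memory:NVMM),width=640,height=360' ! \
  nvjpegenc ! multifilesink location=frame_%05d.jpg
```

In your application you would replace multifilesink with appsink, pull the encoded buffers, and pass the bytes to your Kafka producer.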
We tried setting up DeepStream on our Nano, but ended up facing a lot of issues. That's when we decided to go back to plain GStreamer.
Do you have clear steps for installing DeepStream with Python bindings on the latest software?
Thanks for the example.
Does SDK Manager need to be installed on the host computer connected to the Jetson Nano, or directly on the Nano itself? We downloaded the file below, but we are unable to install SDK Manager on the Jetson Nano:
sdkmanager_1.4.0-7363_amd64.deb.download
We were able to make it work. Currently, we can run a few examples. However, we are still facing issues while running DeepStream using the config_* files.
When we run the DeepStream samples, we observe RAM utilization touching 100%. Around half of this memory is consumed by Compiz. I believe this application won't run in production, as we are not going to show any video. Can you please confirm?
Also, is it possible to extend the RAM on the Nano?
We were able to run DeepStream with any of the C examples, but we are facing an issue while running the Python examples. We are getting the error "Missing Gst Python". Do you have any clear documentation on how to make the Python samples work with DeepStream?
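In case it helps while waiting for an answer, a commonly suggested workaround for this error is to install the GObject/GStreamer Python packages and, if that is not enough, build gst-python against the installed GStreamer version. The exact version tag below is an assumption; it must match the output of gst-launch-1.0 --version on your board:

```shell
# GObject introspection and GStreamer Python support
sudo apt install python3-gi python3-dev python3-gst-1.0

# If "Missing Gst Python" persists, build gst-python from source,
# checking out the tag that matches your installed GStreamer version.
git clone https://github.com/GStreamer/gst-python.git
cd gst-python
git checkout 1.14.5        # assumption: replace with your GStreamer version
./autogen.sh PYTHON=python3
make && sudo make install
```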