Please provide complete information as applicable to your setup.
• Hardware Platform: Jetson Nano Developer Kit
• DeepStream Version: 5.0
• JetPack Version: 4.4
• TensorRT Version: 7.1.0
This may be a simple question, but it has confused me a lot.
- I would like to change the FPS (frames per second) of the input video stream before it enters the object-detection module, for example from 30 FPS to 20 FPS.
- I would like to change the size of the displayed video. Currently I use "nvoverlaysink" to display the video locally on the screen, but it always renders full screen, so I cannot switch to my terminal while the code is running. I don't know how to adjust the size of the video display.
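My guess is that nvoverlaysink's overlay-x/overlay-y/overlay-w/overlay-h properties could confine the output to a rectangle instead of the full screen (I have not verified this; `gst-inspect-1.0 nvoverlaysink` should tell). If so, a small helper like this one (hypothetical, just to compute the rectangle) could center the video in a smaller region while keeping its aspect ratio:

```python
def centered_overlay(video_w, video_h, target_w, target_h):
    """Return (x, y, w, h) for a rectangle that fits the video into the
    target region, preserving aspect ratio and centering it."""
    scale = min(target_w / video_w, target_h / video_h)
    w, h = int(video_w * scale), int(video_h * scale)
    x, y = (target_w - w) // 2, (target_h - h) // 2
    return x, y, w, h

# e.g. a 1920x1080 stream shown in a 960x540 region of the screen:
print(centered_overlay(1920, 1080, 960, 540))  # (0, 0, 960, 540)
# The result would then go to set_property("overlay-x", x), etc.,
# on the nvoverlaysink element, if those properties exist as I assume.
```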
Current attempt (does not work):
I tried to create the GStreamer elements "videoscale", "videorate", and "capsfilter", and to link them to "nvstreammux". See the code below:
videoscale = Gst.ElementFactory.make("videoscale", "scale")
if not videoscale:
    sys.stderr.write(" Unable to create videoscale \n")
print("Creating video (re)-scale for video source %d." % i)

videorate = Gst.ElementFactory.make("videorate", "videorate")
if not videorate:
    sys.stderr.write(" Unable to create videorate \n")
print("Creating videorate for video source %d." % i)

capsfilter = Gst.ElementFactory.make("capsfilter", "capsfilter")
if not capsfilter:
    sys.stderr.write(" Unable to create capsfilter \n")
print("Creating capsfilter for video source %d." % i)

# framerate in the caps is what tells videorate to drop frames to 20 FPS
caps = Gst.Caps.from_string("video/x-raw, width=640, height=480, framerate=20/1")
capsfilter.set_property("caps", caps)
Linking of the elements:
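What I intended for the linking step, sketched as plain Python (`caps_string` is a hypothetical helper, just to show the caps I am aiming for; I also assume an nvvideoconvert would be needed before nvstreammux, since the muxer expects NVMM buffers):

```python
# Intended chain (each "->" would be a Gst element link; nvstreammux is
# linked through a requested sink pad such as "sink_0"):
#   decoder -> videoscale -> videorate -> capsfilter -> nvvideoconvert -> nvstreammux

def caps_string(width: int, height: int, fps: int) -> str:
    # videoscale resizes to width x height; videorate only drops/duplicates
    # frames when downstream caps pin a framerate, so it must appear here.
    return f"video/x-raw, width={width}, height={height}, framerate={fps}/1"

print(caps_string(640, 480, 20))
# video/x-raw, width=640, height=480, framerate=20/1
```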
Can you help with this problem? Thank you so much.