Resizing Video Stream using Jetson Inference Utils

So my webcam is currently 1920x1080, and my display is also 1920x1080. I would like to display every pixel of the webcam feed, but I don’t want it to take up the entire display, since I also want to be able to see the terminal output. Is there a way to downsize the video stream to 1280x720, so that the model uses every single pixel of the FHD stream (and therefore runs at a higher resolution), but only the output stream used to visualise the bounding boxes is downsized?

If you use GStreamer, you can use the window-based nveglglessink plugin, which lets you adjust the window size.
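For reference, a pipeline along these lines should display the camera in an EGL window (the device path, caps, and the nveglglessink window-width/window-height properties are assumptions to verify against the NVIDIA Accelerated GStreamer guide for your L4T release):

```shell
# Hypothetical pipeline sketch: capture a 1920x1080 V4L2 webcam and show it
# in a 1280x720 EGL window. /dev/video0 and the caps are assumptions.
gst-launch-1.0 v4l2src device=/dev/video0 ! \
    'video/x-raw, width=1920, height=1080' ! \
    nvvidconv ! 'video/x-raw(memory:NVMM)' ! \
    nveglglessink window-width=1280 window-height=720
```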

I had a quick look at the nveglglessink plugin but couldn’t get it to work. Do I need to pass some arguments when I call the detectnet program?

It’s recommended just to run your webcam at 1280x720 (--input-width=1280 --input-height=720), because all images get downsized to 300x300 for the object detection model anyway. DNN models are typically trained at lower resolutions because they can extract the information they need without as many pixels. The resolution a model is trained at is fixed and independent of the camera resolution: in the case of the SSD-Mobilenet detection models, the input will always be downsampled to 300x300. So it doesn’t really matter whether your camera is running at 1920x1080 or 1280x720.
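In practice that just means passing those flags to detectnet; a typical invocation (assuming a V4L2 webcam at /dev/video0, which is an assumption for your setup) would look like:

```shell
# Ask detectnet to capture the webcam at 1280x720 instead of 1920x1080.
# /dev/video0 is an assumed device path; adjust for your camera.
detectnet --input-width=1280 --input-height=720 /dev/video0
```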

However, to answer your question directly: to resize an image before display, you can use the cudaResize() function from jetson-utils:
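For example, here is a minimal sketch of that approach. cudaAllocMapped() and cudaResize() are real jetson-utils functions; the helper names, the 1280x720 target, and the loop structure are illustrative assumptions based on the standard detectnet.py example:

```python
# Sketch: run detection on the full-resolution frame, then downsize the
# overlaid image before rendering it. scaled_size() is an illustrative
# helper, not part of jetson-utils.

def scaled_size(in_width, in_height, out_width):
    """Pick an output height that preserves the input aspect ratio."""
    return out_width, round(out_width * in_height / in_width)

def render_downsized(img, output, width=1280, height=720):
    """Resize a cudaImage to width x height, then render it."""
    import jetson_utils  # only available on a Jetson with jetson-inference installed
    small = jetson_utils.cudaAllocMapped(width=width, height=height,
                                         format=img.format)
    jetson_utils.cudaResize(img, small)
    output.Render(small)
```

In the detectnet capture loop you would call net.Detect(img) on the full-resolution frame first, so the boxes are drawn at full precision, and then pass the overlaid image to render_downsized() instead of rendering it directly.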