Thank you so much for responding. I reviewed the API docs you posted, the DeepStream SDK documentation, and the info on how to access image data in Python.
The detectNet API info is clear: I can call detectNet.Detect() with the image, width, and height, and pass "none" as the last parameter to avoid drawing the overlay on the image. I can then use the detections to crop the image and use glDisplay to render it. So far so good.
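To make sure I have that part right, here is roughly what I have in mind. This is only a sketch: the model name is a placeholder, and the cudaAllocMapped/cudaCrop calls are my reading of the image manipulation page, not verified code.

import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)  # placeholder model/threshold
camera = jetson.utils.videoSource("csi://0")
display = jetson.utils.glDisplay()

img = camera.Capture()

# pass "none" as the overlay argument so the image itself is left untouched
detections = net.Detect(img, img.width, img.height, "none")

for det in detections:
    # crop each detection into its own image using the CUDA image ops you linked
    roi = (int(det.Left), int(det.Top), int(det.Right), int(det.Bottom))
    crop = jetson.utils.cudaAllocMapped(width=roi[2] - roi[0], height=roi[3] - roi[1], format=img.format)
    jetson.utils.cudaCrop(img, crop, roi)
    display.RenderOnce(crop, crop.width, crop.height)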
The jetson-utils API documentation, however, is missing a lot of the info I am looking for (or maybe I am not reading it correctly). For instance, how do I initialize gstCamera or videoSource to use a CSI MIPI camera with a resolution of 4032x3040, a frame rate of 30 FPS, and a flip-method of 1? Similarly, suppose I crop the image using the image data manipulations you linked and scale the crop to 720x1280: how do I then show it in a 1280x720 glDisplay window after rotating the image clockwise again?
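Concretely, the second half of what I want looks something like the sketch below. The cudaCrop/cudaResize calls are from the image manipulation page; the ROI values are placeholders, and the clockwise rotation at the end is the step I cannot find any function for.

import jetson.utils

camera = jetson.utils.videoSource("csi://0")
img = camera.Capture()

# crop a region of interest; (left, top, right, bottom) values are placeholders
roi = (100, 200, 820, 1480)
cropped = jetson.utils.cudaAllocMapped(width=roi[2] - roi[0], height=roi[3] - roi[1], format=img.format)
jetson.utils.cudaCrop(img, cropped, roi)

# scale the crop to a 720x1280 portrait image
resized = jetson.utils.cudaAllocMapped(width=720, height=1280, format=img.format)
jetson.utils.cudaResize(cropped, resized)

# ...and here is where I would want to rotate clockwise and render into a 1280x720 glDisplay window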
I tried passing these arguments while initializing gstCamera and videoSource as a second string argument, as part of the input URI string, and in other ways, and I simply cannot get it to work. For example:
camera = jetson.utils.videoSource("csi://0 --input-width=4032 --input-height=3040 --flip-method=1")
camera = jetson.utils.videoSource("csi://0", "--input-width=4032 --input-height=3040 --flip-method=1")
I ran into the same problem with glDisplay.
None of these attempts seems to affect the GStreamer pipeline that gets built in the background for the input stream, or the properties of the output window. If I am just being completely obtuse, please let me know. I feel that jetson-inference and jetson-utils have almost everything I need, if I could only figure out how to control these parameters. DeepStream might be a bit too deep for me, and I don't need to run inference on every frame anyway; I am hoping to use a counter and run it selectively, so jetson-utils and jetson-inference look like the way to go if I can figure out these details.
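For what it's worth, the counter part is the piece I am not worried about; something like this loop is all I mean (again a sketch, with a placeholder model name and interval):

import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.videoSource("csi://0")
display = jetson.utils.glDisplay()

DETECT_EVERY_N = 5  # placeholder: run the network on every 5th frame
frame = 0
detections = []

while display.IsOpen():
    img = camera.Capture()
    if frame % DETECT_EVERY_N == 0:
        # only run inference on every Nth frame; reuse the last detections otherwise
        detections = net.Detect(img, img.width, img.height, "none")
    frame += 1
    display.RenderOnce(img, img.width, img.height)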