Hello,
I want to run depth estimation inference using the videoSource and videoOutput APIs from jetson.utils. I am streaming a video file with videoSource, but the resulting output video is 15 FPS and twice as long as the input, because my model's throughput is only about 15 FPS. My expectation was to write an output video at 15 FPS but with the same duration as the input video; in other words, to process and write only a subset of the input frames. How can I achieve this with the videoSource and videoOutput APIs?
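For reference, here is a rough sketch of the frame-dropping approach I have in mind, in case there is no built-in option for it. The 30 FPS input rate, the 15 FPS model rate, and the placement of the skip logic are assumptions on my part; only the videoSource/videoOutput calls come from jetson.utils:

```python
from itertools import islice


def frame_skipper(input_fps: float, model_fps: float):
    """Yield True for frames to process and False for frames to drop,
    so that roughly model_fps out of every input_fps frames are kept.

    Uses a fractional accumulator so the kept frames are spread evenly
    across the input rather than clustered at the start.
    """
    keep_ratio = model_fps / input_fps  # fraction of frames to keep
    acc = 0.0
    while True:
        acc += keep_ratio
        if acc >= 1.0:
            acc -= 1.0
            yield True   # process (and render) this frame
        else:
            yield False  # drop this frame entirely


if __name__ == "__main__":
    import sys
    # jetson_utils is only available on a Jetson device; the rates below
    # are assumed, not queried from the stream.
    from jetson_utils import videoSource, videoOutput

    source = videoSource(sys.argv[1])   # e.g. input.mp4
    output = videoOutput(sys.argv[2])   # e.g. output.mp4

    skipper = frame_skipper(input_fps=30.0, model_fps=15.0)
    while source.IsStreaming():
        img = source.Capture()
        if img is None:  # capture timeout
            continue
        if next(skipper):
            # run the depth model on img here, then write the result;
            # dropped frames are never rendered, so the output keeps
            # the input's duration at the lower frame rate
            output.Render(img)
```

With a 30 FPS input and a 15 FPS model, this keeps every other frame, so the output should cover the same wall-clock span as the input. I am not sure whether this is the intended way, or whether videoSource/videoOutput expose an option for it directly.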
Thanks,
Tigran