GeForce GTX 1650; GazeNet; nvcr.io/nvidia/tao/tao-cv-inference-pipeline v0.3-ga-client with pre-trained model.
While adapting the demo_gaze app to read a video file (is_video_path_file=true) and process it as fast as possible (rate control essentially removed), I ran into a few problems with TAOCVAPI.
- Using the OpenCV reader (i.e. use_decoded_image_api=true), the non-blocking cvAPI.getGaze() skips about 20% of the frames, which is expected. Calling the blocking cvAPI.getGaze(true) processes all frames, but freezes the pipeline from time to time (not regularly, but quite often). It looks like an internal TAOCVAPI race condition on the client side, since no error is reported and the Triton server logs START/DONE and is ready for the next input. Is this a known issue, and is there a workaround?
- Using the TAOCVAPI camera instead (i.e. use_decoded_image_api=false) fixes the freezing, but playback loops back to the start once it reaches the end of the video, and there is no way to detect the end of the file or to get per-frame timestamps. Is there any way to retrieve these through the API?
- Combining both approaches (i.e. running the TAOCV camera and an OpenCV reader at the same time, using the latter to get frame timestamps and detect the end of the video) works, but only in real time. Does the TAOCV camera rely on the current PC time to match the processing speed to the video fps? Changing the video fps (e.g. cameraInit.fps = 1000) makes the OpenCV frame timestamps and the TAOCV camera frame timestamps drift out of sync: while OpenCV reads frame by frame, the TAOCV camera skips frames to keep up with the current PC time.
Thank you!