I am working on an end-to-end ML benchmark for Jetson devices. My idea is to store a video file in raw Bayer format on disk for this demo. I then intend to use the ISP to process this file into RGB frames, which I can then process with an ML model. This way I will be able to measure how much time the ISP takes to process each frame and how long the model takes for inference. My questions are:
- Is there any way to pass data to the Jetson ISP from a file for the use case described above?
- Are there any GStreamer pipelines that I can use to run inference directly on the output of a CSI camera using a TRT engine file?
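For context, the per-frame timing harness I have in mind looks roughly like the sketch below. The `isp_process` and `run_inference` functions are placeholders of my own, standing in for whatever the real ISP path and TensorRT engine invocation end up being:

```python
import time

def isp_process(bayer_frame):
    # Placeholder: in the real benchmark this would hand the raw Bayer
    # frame to the Jetson ISP and return the debayered RGB frame.
    return bayer_frame

def run_inference(rgb_frame):
    # Placeholder: in the real benchmark this would run the TRT engine.
    return None

def benchmark(frames):
    """Time the ISP stage and the inference stage separately per frame."""
    isp_times, infer_times = [], []
    for frame in frames:
        t0 = time.perf_counter()
        rgb = isp_process(frame)
        t1 = time.perf_counter()
        run_inference(rgb)
        t2 = time.perf_counter()
        isp_times.append(t1 - t0)   # seconds spent in the ISP stage
        infer_times.append(t2 - t1) # seconds spent in inference
    return isp_times, infer_times
```

This is just to show the granularity of measurement I am after: one ISP timing and one inference timing per frame, rather than a single end-to-end number.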