Jetson Nano inference

We want to run inference on the video while it is being recorded. Because inference takes longer than real time, I am not able to add another branch to the DeepStream/GStreamer pipeline.

I am considering two options:

  1. Record video in 2-minute segments, and have the backend invoke inference once each 2-minute clip is available for processing.
  2. Extract frames as part of the GStreamer pipeline, and process them in parallel.
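For context, here is a minimal sketch of how both options could coexist in one pipeline via a `tee`: one branch records 2-minute segments with `splitmuxsink` (option 1), the other hands frames to the application through an `appsink` (option 2). The element names (`nvarguscamerasrc`, `nvv4l2h264enc`, `splitmuxsink`, `appsink`) are standard GStreamer/Jetson elements, but the camera source, caps, and file path are assumptions; this builds a `gst-launch`-style description string rather than a full running app.

```python
# Sketch: one pipeline, two branches behind a tee.
# Branch 1 records rolling 2-minute files; branch 2 feeds frames to
# the app for inference. The leaky queue on the inference branch drops
# frames under load so slow inference never stalls the recording.

SEGMENT_NS = 2 * 60 * 10**9  # splitmuxsink's max-size-time is in nanoseconds


def build_pipeline(segment_ns: int = SEGMENT_NS) -> str:
    return (
        # Assumed camera source for a Jetson Nano CSI camera.
        "nvarguscamerasrc ! nvvidconv ! tee name=t "
        # Branch 1: hardware-encode and roll over to a new file every 2 min.
        "t. ! queue ! nvv4l2h264enc ! h264parse ! "
        f"splitmuxsink location=seg%05d.mp4 max-size-time={segment_ns} "
        # Branch 2: drop stale frames; appsink hands buffers to the app.
        "t. ! queue leaky=downstream max-size-buffers=1 ! videoconvert ! "
        "appsink name=frames emit-signals=true drop=true max-buffers=1"
    )
```

The key design point is that the leaky, single-buffer queue decouples the two branches: recording stays real-time even when the model is slower than the frame rate, at the cost of the inference branch seeing only the most recent frame.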

Is there a better option?