I have tried the Jetson Nano and Jetson TX2, and both are very slow at decoding MJPEG video frames to NumPy (OpenCV) or CUDA (GStreamer). Once the model's inference time is added to the decoding time, we end up at about 20-30 fps on the Nano and TX2 respectively. (Our cameras support 120 fps at the resolution we use, 720p.)
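For reference, the OpenCV-side capture looks roughly like the sketch below, pulling frames through a GStreamer pipeline into NumPy. The device path, caps, and the nvjpegdec hardware JPEG element are illustrative and may need adjusting for a specific camera and JetPack version:

```python
import cv2

# Hardware-accelerated MJPEG capture on Jetson via GStreamer.
# /dev/video0, the 720p/120fps caps, and nvjpegdec are assumptions;
# they may differ across cameras and JetPack versions.
pipeline = (
    "v4l2src device=/dev/video0 ! "
    "image/jpeg,width=1280,height=720,framerate=120/1 ! "
    "nvjpegdec ! video/x-raw ! "
    "videoconvert ! video/x-raw,format=BGR ! "
    "appsink drop=true max-buffers=1"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
while cap.isOpened():
    ok, frame = cap.read()  # frame arrives as a NumPy BGR array
    if not ok:
        break
    # ... model inference on `frame` goes here ...
cap.release()
```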
We want to simulate an autonomous car, but the results we obtained are not fast enough to be usable. Is there any way to make this process faster in Python or another language? And how does the situation look on more powerful boards such as the Xavier?
Thank you for your support, and have a good day.