Reading a video stream from a CSI camera occupies more GPU than a USB camera

Good morning, I have a question. On Xavier NX, I find that sending and reading a video stream from a CSI camera occupies more GPU than doing the same with a USB camera. Video 1 shows the CSI camera + model inference, with a latency of about 500 ms. Video 2 uses the USB camera + model inference, with a latency of about 150 ms. Please help me find the reason.

We are not sure what "occupy more GPU" means. To check GPU usage, please execute sudo tegrastats and compare the two use cases.
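A minimal sketch of how the two runs could be logged for comparison, assuming each pipeline is started in another terminal first (the log file names and the 1000 ms sampling interval are just illustrative choices):

```shell
# Sample tegrastats once per second and save the output while the
# CSI pipeline is running; stop with Ctrl+C, then repeat for USB.
sudo tegrastats --interval 1000 | tee csi.log

# In the other terminal, run the pipeline under test, e.g.:
# gst-launch-1.0 nvarguscamerasrc ! fakesink

sudo tegrastats --interval 1000 | tee uvc.log
```

The GR3D_FREQ field in each line of the log reports the GPU load, so comparing that column between the two files shows whether the CSI case really uses more GPU.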

csi.log (23.5 KB)
uvc.log (12.9 KB)
These are the logs for both cases; please take a look. Thank you.

We don’t see GPU usage while running:

$ gst-launch-1.0 nvarguscamerasrc ! fakesink

So the deviation in your use case appears to come from model inference. One possible reason is that the two cameras deliver different resolutions, so inference consumes different amounts of GPU.
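To rule out the resolution difference, both cameras could be forced to the same capture mode before inference. A sketch, assuming 1280x720 at 30 fps and that the USB camera is /dev/video1 (both are assumptions; check with v4l2-ctl --list-formats-ext which modes your cameras actually support):

```shell
# CSI camera at a fixed resolution/framerate
gst-launch-1.0 nvarguscamerasrc ! \
  'video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1' ! \
  fakesink

# USB (UVC) camera at the same resolution/framerate
gst-launch-1.0 v4l2src device=/dev/video1 ! \
  'video/x-raw,width=1280,height=720,framerate=30/1' ! \
  fakesink
```

With matched input sizes, any remaining GPU-usage gap in tegrastats would point away from the capture resolution and toward the rest of the pipeline.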

csi+d(1).log (5.2 KB)
usb+d(1).log (6.2 KB)
Good morning, here are the usage logs you asked for; please take a look.