I’ve got a Xavier NX on JetPack 4.6 (can’t upgrade just yet) running the DeepStream 6.0 examples, specifically deepstream-test3-app. If I run it with the default resnet10 model, it processes about 30 fps and the display is fine…doesn’t appear to be choppy or anything. If I instead use my own trained model based on yolov5, processing drops to about 7-9 fps, which is fine, but the display slows way down to about 1 frame every 5 seconds. I also get this warning from GStreamer:
0:04:24.311698688 25204 0x559a84c8f0 WARN basesink gstbasesink.c:2902:gst_base_sink_is_too_late:<nvvideo-renderer> warning: A lot of buffers are being dropped.
0:04:24.311806432 25204 0x559a84c8f0 WARN basesink gstbasesink.c:2902:gst_base_sink_is_too_late:<nvvideo-renderer> warning: There may be a timestamping problem, or this computer is too slow.
where nvvideo-renderer is the sink element (an nveglglessink) in the test app’s pipeline.
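From what I understand of GstBaseSink, the renderer syncs each buffer to the pipeline clock and drops buffers that arrive too late, which would explain these warnings once inference can’t keep up. Would disabling clock sync on the sink be a reasonable workaround? A minimal sketch of what I mean (disable_sink_sync is just a hypothetical helper I made up; “sync” and “qos” are standard GstBaseSink properties, so they should apply to nveglglessink too):

```c
#include <gst/gst.h>

/* Hypothetical helper: call on the "nvvideo-renderer" sink after the
 * pipeline is built. */
static void
disable_sink_sync (GstElement *sink)
{
  g_object_set (G_OBJECT (sink),
                "sync", FALSE,  /* render buffers as they arrive instead of
                                 * waiting on the pipeline clock */
                "qos", FALSE,   /* don't let QoS events drop "late" frames */
                NULL);
}
```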
I am getting the “System throttled due to over-current” warning mentioned here, which might be the cause, but even if I reduce my GPU frequency to 408 MHz, which drops power consumption to 8 W (the over-current alarms still trigger, just less frequently), the display is still really slow. So I don’t think that’s the cause, though I can’t rule it out. Just wondering if there’s something else to look at.
I assume the video (720p) I’m playing runs at 25 fps (so faster than inference)…do I need to configure something so that frames are dropped appropriately, or is there something else to look at as to why the display would be so slow?
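For example, would inserting a videorate element in drop-only mode ahead of the inference branch be the right way to thin the stream down to what the model can sustain? A sketch of what I have in mind (make_frame_dropper is a hypothetical helper; “drop-only” and “max-rate” are standard videorate properties, and the 8 fps cap is just a guess based on my 7-9 fps numbers):

```c
#include <gst/gst.h>

/* Hypothetical helper: build a videorate element that only discards
 * frames (never duplicates them) so a 25 fps source gets thinned to
 * roughly what inference can keep up with. */
static GstElement *
make_frame_dropper (void)
{
  GstElement *rate = gst_element_factory_make ("videorate", "frame-dropper");
  g_object_set (G_OBJECT (rate),
                "drop-only", TRUE, /* drop frames, never duplicate */
                "max-rate", 8,     /* cap output at ~8 fps */
                NULL);
  return rate;
}
```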
Recent observation: resnet10 only consumes about 50% of the GPU, whereas my yolov5 model consumes nearly all of it. I’m more inclined to think this is the cause, but I’m interested in others’ thoughts. Is there a way to throttle how many frames per second nvinfer processes?
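The closest thing I’ve found is nvinfer’s interval property (also settable as interval=N in the [property] group of the nvinfer config file), which skips inference on N consecutive batches. Something like this on the pgie element, if I understand it correctly (throttle_inference is a hypothetical helper, and the value 2 is arbitrary):

```c
#include <gst/gst.h>

/* Hypothetical helper: interval=2 makes nvinfer skip inference on 2
 * consecutive batches, i.e. the model only runs on every third one;
 * skipped frames pass through without fresh detections, so a tracker
 * is typically used to bridge the gaps. */
static void
throttle_inference (GstElement *pgie)
{
  g_object_set (G_OBJECT (pgie), "interval", 2, NULL);
}
```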