Looking for camera-systems advice. We’ve been prototyping some systems on the TX2 using Python and GStreamer. This has been fine, but we’re now moving towards C++. A lot of the motivation is performance and low-latency access to the cameras.
To get the lowest-latency access to the MIPI cameras, are people having better experience and performance with libargus or GStreamer? If GStreamer: using nvcamerasrc or nvarguscamerasrc?
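For reference, a minimal test pipeline on the nvarguscamerasrc side might look like the sketch below. The sensor-id, caps, and framerate are assumptions for a typical CSI module, not something from a specific setup; adjust them for your sensor mode.

```shell
# Hedged sketch: capture from CSI sensor 0 through the Argus daemon and
# render with minimal buffering (sync=false avoids sink-side clock waits).
# Width/height/framerate are assumptions; pick a real mode for your sensor.
gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! \
  'video/x-raw(memory:NVMM), width=1920, height=1080, framerate=60/1' ! \
  nvoverlaysink sync=false
```

On R28.x the equivalent would use nvcamerasrc instead; the caps negotiation is similar but the element properties differ.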
It seems like libargus gives lower-level access to the ISP and camera, but I haven’t seen that clearly documented, so I don’t know for sure.
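In case it helps frame the comparison, the libargus capture setup looks roughly like this. This is a sketch against the R28-era Argus headers; exact class and method names shift slightly between L4T releases, error handling is omitted, and the EGLStream consumer that actually pulls frames is elided.

```cpp
// Hedged sketch of libargus capture setup (R28-era L4T). Treat as a
// shape, not a drop-in: names vary across releases, and the EGLStream
// consumer side plus all error checking are omitted.
#include <Argus/Argus.h>
#include <vector>

int main()
{
    using namespace Argus;

    // One provider per process; it talks to the camera daemon.
    UniqueObj<CameraProvider> provider(CameraProvider::create());
    ICameraProvider *iProvider = interface_cast<ICameraProvider>(provider);

    std::vector<CameraDevice*> devices;
    iProvider->getCameraDevices(&devices);      // enumerate CSI sensors

    // A capture session owns the ISP pipeline for one device.
    UniqueObj<CaptureSession> session(
        iProvider->createCaptureSession(devices[0]));
    ICaptureSession *iSession = interface_cast<ICaptureSession>(session);

    // Output stream: frames are delivered to a consumer over an EGLStream.
    UniqueObj<OutputStreamSettings> settings(
        iSession->createOutputStreamSettings());
    UniqueObj<OutputStream> stream(
        iSession->createOutputStream(settings.get()));

    // A request bundles per-capture controls (exposure, gain, ...).
    UniqueObj<Request> request(iSession->createRequest());
    IRequest *iRequest = interface_cast<IRequest>(request);
    iRequest->enableOutputStream(stream.get());

    iSession->repeat(request.get());            // free-running capture
    // ... consume frames from the EGLStream, then stopRepeat()/shutdown.
    return 0;
}
```

The appeal for latency work is that you own the buffer handoff from the EGLStream yourself, rather than inheriting whatever queue depth the GStreamer source element hard-codes.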
I was looking at the RidgeRun latency performance tests, and those were done with GStreamer and nvcamerasrc. I also see that one of the RidgeRun devs had to ask for a special build of nvcamerasrc with the minimum buffer depth reduced from 10 to 2; this makes me suspect that we don’t want to continue down the nvcamerasrc route, but maybe I’m wrong?
We’re looking to learn from others’ experiences before re-stumbling into already-solved issues.
One thing to keep in mind is that nvcamera-daemon, which nvcamerasrc uses, is deprecated. The R31+ releases of L4T, such as those for Xavier, no longer include nvcamerasrc or nvcamera-daemon.
I’m interested in knowing what you learn about reducing latency.
Ok, thanks for that information. When we started working on this, 28.2 was the release version and 31.1 was in pre-release. How time flies.
We should probably switch over to 31.1 after I finish this libargus port. I’ll then test latency characteristics in C++ between libargus and GStreamer (nvcamerasrc).