Our use case involves a camera connected to an FPGA, which is interfaced with a Jetson TX2 over PCIe; we need to achieve 4K video streaming at 60 fps on the Jetson.
Using GStreamer:
1. We have a custom V4L2-PCIe driver for interacting with the hardware. For performance, we want to eliminate memory copies so that we can use the NvBuffers accessed through the tegra_multimedia_api.
Our pipeline is: v4l2src -> NVIDIA encoder plugin -> record the frames.
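For reference, a sketch of the kind of pipeline we run today. The plugin and caps names here (nvv4l2h264enc, UYVY/NV12, the bitrate) are assumptions that vary by L4T release (older TX2 BSPs use omxh264enc); the script only prints the command so it can be reviewed before being run on the Jetson:

```shell
# Baseline recording pipeline sketch -- note the nvvidconv stage,
# which copies system memory into NVMM and is the overhead we want to avoid.
PIPELINE="v4l2src device=/dev/video0 \
 ! video/x-raw,format=UYVY,width=3840,height=2160,framerate=60/1 \
 ! nvvidconv \
 ! video/x-raw(memory:NVMM),format=NV12 \
 ! nvv4l2h264enc bitrate=40000000 \
 ! h264parse ! qtmux ! filesink location=capture.mp4"

# Print the full command for review; on the Jetson it would be launched as:
echo "gst-launch-1.0 -e $PIPELINE"
```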
Is it feasible to modify the v4l2src plugin so that it interacts with our V4L2-PCIe driver and pushes the frames out of v4l2src in NVMM memory, directly towards the NVIDIA encoder element?
Using nvvidconv between v4l2src and the NVIDIA encoder element would be an overhead for our use case.
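To make the goal concrete, this is the shape of pipeline we would like to reach if v4l2src (or a patched copy of it) could emit NVMM buffers directly. The `(memory:NVMM)` caps on the v4l2src output are exactly the part that does not work with the stock plugin, so this is an illustration of the target, not a working command (`io-mode=dmabuf` is the existing v4l2src property we would hope to build on):

```shell
# Target zero-copy pipeline (illustrative only -- stock v4l2src cannot
# negotiate video/x-raw(memory:NVMM) on its source pad today).
TARGET="v4l2src device=/dev/video0 io-mode=dmabuf \
 ! video/x-raw(memory:NVMM),format=NV12,width=3840,height=2160,framerate=60/1 \
 ! nvv4l2h264enc bitrate=40000000 \
 ! h264parse ! qtmux ! filesink location=capture.mp4"

# Printed for review; note there is no nvvidconv stage in this version:
echo "gst-launch-1.0 -e $TARGET"
```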
- Is there a way to use nvarguscamerasrc to interact with our PCIe device, so that we get the frames from the FPGA at the nvarguscamerasrc plugin and push them out in NVMM memory? From what I've seen, LibArgus only supports platform cameras connected via the MIPI CSI interface, whereas any other camera (connected through USB, PCIe, Ethernet, etc.) has to use the V4L2 framework.