Avoiding the video converter in a video pipeline with a PCIe camera as the source


Our use case involves a camera connected to an FPGA that is interfaced with a Jetson TX2 over PCIe; we need to achieve 4K video streaming at 60 fps on the Jetson.

Using GStreamer:

1. We have a custom V4L2 PCIe driver for interacting with the hardware. For performance we want to eliminate copies, so that we can use the NvBuffers accessed through the tegra_multimedia_api.

Our pipeline is: v4l2src -> NVIDIA encoder plugin -> record the frames.

Is it feasible to modify the v4l2src plugin so that it interacts with our V4L2 PCIe driver and pushes the frames out of v4l2src in NVMM memory, directly to the NVIDIA encoder element?

Using nvvidconv between v4l2src and the NVIDIA encoder element would add overhead for our use case.

  1. Is there a way to use nvarguscamerasrc to interact with our PCIe device, so that the frames from the FPGA arrive at the nvarguscamerasrc plugin and are pushed out in NVMM memory? I’ve seen that libargus only supports platform cameras connected via the MIPI CSI interface, whereas any other camera — connected through USB, Ethernet, etc. — should make use of the V4L2 framework.

We would suggest using tegra_multimedia_api. Please take a look at the 12_camera_v4l2_cuda sample; it demonstrates frame capture through V4L2. Once you can run the default sample, please check the post:
It is a patch that hooks 12_camera_v4l2_cuda up to NvVideoEncoder.

The document is in