Nvidia gstreamer elements

Hi,

  1. I would like to know the difference between nvjpegdec and nvv4l2decoder. Do they both use HW-accelerated JPEG decoding?
  2. Another question of mine: do all nvX elements use NVMM buffers? What exactly are those?
  3. Can v4l2src configure a USB camera? It looks like when I change the caps after this element, the sensor actually outputs in MJPEG format instead of raw. Which API does v4l2src use to do that? What is its basic difference from nvv4l2camerasrc?

please? :)

Hi,

Yes, both utilize NVJPG hardware engine.
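As a sketch of the two paths (assuming a Jetson board with a sample file named test.jpg; the file name and sink are placeholders):

```shell
# Decode a JPEG with nvjpegdec (JPEG-specific plugin):
gst-launch-1.0 filesrc location=test.jpg ! nvjpegdec ! fakesink

# Decode the same file with nvv4l2decoder in MJPEG mode (V4L2-based plugin):
gst-launch-1.0 filesrc location=test.jpg ! jpegparse ! nvv4l2decoder mjpeg=1 ! fakesink
```

Both pipelines should end up on the NVJPG engine; the difference is in the software plumbing in front of it.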

Yes, it is a hardware DMA buffer (called NvBuffer). On JetPack 4.5 most plugins are open source, so you may take a look at the implementation. Source code is in
https://developer.nvidia.com/embedded/l4t/r32_release_v5.1/r32_release_v5.1/sources/t186/public_sources.tbz2

Yes, please refer to steps in
Jetson Nano FAQ
Q: I have a USB camera. How can I launch it on Jetson Nano?
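A minimal sketch of such a pipeline, assuming the camera enumerates as /dev/video0 and supports 1280x720 YUY2 (device path, resolution, and format are assumptions; check yours with v4l2-ctl --list-formats-ext):

```shell
# Capture raw frames from the USB camera, copy them into NVMM memory
# with nvvidconv, and render:
gst-launch-1.0 v4l2src device=/dev/video0 ! \
  'video/x-raw, width=1280, height=720, format=YUY2' ! \
  nvvidconv ! 'video/x-raw(memory:NVMM), format=NV12' ! \
  nv3dsink
```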

  1. Thank you for the response. So there is no real difference between those decoding elements?

  2. Does nvvidconv convert regular buffers to NvBuffer?

  3. Is the difference between v4l2src and nvv4l2camerasrc that the latter already outputs NvBuffers, which need no conversion before HW-accelerated elements?

Hi,

The software implementations are not identical, but both utilize the same hardware engine.

Yes, it is supported, for example:

... ! video/x-raw ! nvvidconv ! video/x-raw(memory:NVMM) ! ...

Yes.
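To illustrate the difference (a sketch, assuming a sensor at /dev/video0 that outputs UYVY; nvv4l2camerasrc supports UYVY capture, so both the device path and format here are assumptions about your camera):

```shell
# v4l2src outputs regular system-memory buffers; nvvidconv copies them
# into NVMM memory before any HW-accelerated element:
gst-launch-1.0 v4l2src device=/dev/video0 ! \
  'video/x-raw, format=UYVY' ! \
  nvvidconv ! 'video/x-raw(memory:NVMM)' ! fakesink

# nvv4l2camerasrc outputs NVMM buffers (NvBuffer) directly,
# so no conversion step is needed:
gst-launch-1.0 nvv4l2camerasrc device=/dev/video0 ! \
  'video/x-raw(memory:NVMM), format=UYVY' ! fakesink
```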

thank you