What does the Nvidia accelerated JPEG plugin for gstreamer do?

Hi everyone,

Can somebody explain how the NVIDIA-provided JPEG decoding/encoding plugins work on the Jetson TK1? I can see that NVIDIA provides a libjpeg.so and a GStreamer plugin library/source as well.

There is no mention in the TRM of any dedicated hardware block in the Tegra K1 processor for JPEG decode/encode. So how is it hardware accelerated? Does it use CUDA, NEON, GLSL, or some other technique?

I think the JPEG situation is similar to video and 3D: the TRM doesn’t say much, but you still have a software API you can use to perform the needed operations. The datasheet does list the capabilities of the JPEG decoder and encoder.

Hi kulve, could you point out which datasheet you are referring to? The multimedia user guide only explains how to use the GStreamer plugin; I haven’t seen any performance characteristics mentioned.

A lot of docs are available through here:

[url]https://developer.nvidia.com/embedded-computing[/url]

Click “Hardware Design and Development” and from there you can download the “Tegra K1 Data Sheet”. You do need to register for that, though.

I have a source that can provide JPEG data at high framerates. I’m trying to decode the data using gstreamer. I noticed that there are 3 gstreamer plugins that can handle jpeg.

  1. jpegdec
  2. nvjpegdec
  3. nv_omx_jpegdec

As far as I have tested, jpegdec seems to be faster than nvjpegdec, which is why I’m trying to understand how NVIDIA’s JPEG decoder works. This is the pipeline I’m using:

gst-launch-0.10 v4l2src device=/dev/video0 queue-size=4 always-copy=false ! \
"image/jpeg, width=(int)1280, height=(int)720, interlaced=(boolean)false,   \
pixel-aspect-ratio=(fraction)1/1" ! queue ! $DECODE_PLUGIN ! fakesink -v

I replace DECODE_PLUGIN with each of the three plugins. nv_omx_jpegdec doesn’t work at all for me; I get the following errors:

omx_setup error while setting FilterTimestamp
NvMMLiteBlockCreate : Block : BlockType = 257 
Caught SIGSEGV accessing address 0xae7f6000
#0  0xb6d99b60 in ?? ()
#1  0xb6d99b5a in ?? ()
Spinning.  Please run 'gdb gst-launch 22009' to continue debugging, Ctrl-C to quit, or Ctrl-\ to dump core.

Has anybody else tried the nvidia provided gstreamer plugins? Any luck?

Thanks for pointing that out, kulve. I have that document but hadn’t noticed this information before.

I haven’t used any of the JPEG decoders, but when doing performance measurements, do remember to max out the CPU, GPU and EMC clocks:

[url]http://elinux.org/Jetson/Performance[/url]
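For reference, the steps on that page boil down to something like the following. This is only a sketch for the Jetson TK1: the debugfs paths are assumptions that can vary between L4T releases, so check the page for your exact release.

```shell
#!/bin/sh
# Sketch of the clock-maxing steps from the eLinux Jetson/Performance page.
# Run as root; the debugfs paths below are assumptions for the Jetson TK1
# and may differ between L4T releases.

# Disable CPU hotplugging and pin all cores to the performance governor
echo 0 > /sys/devices/system/cpu/cpuquiet/tegra_cpuquiet/enable
for cpu in 0 1 2 3; do
    echo performance > /sys/devices/system/cpu/cpu$cpu/cpufreq/scaling_governor
done

# Force the GPU (gbus) and memory (EMC) clocks to their maximum rates
cat /sys/kernel/debug/clock/gbus/max > /sys/kernel/debug/clock/override.gbus/rate
echo 1 > /sys/kernel/debug/clock/override.gbus/state
cat /sys/kernel/debug/clock/emc/max > /sys/kernel/debug/clock/override.emc/rate
echo 1 > /sys/kernel/debug/clock/override.emc/state
```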

You may also want to decode from a file instead of /dev/video0, just to see if there’s a difference in how the plugins behave.

That’s a nice suggestion, kulve. I have already maximized both the GPU and CPU core clocks, but still did not notice any big improvement in performance.

I also tried your suggestion to decode a file, but the sources I can find online are all either low resolution or low framerate, so I’m unable to see any difference between the software decoder and the hardware-accelerated decoder. The camera I’m using can generate UltraHD JPEG data at 30 fps, and I’m able to receive the JPEG data at 30 fps using GStreamer; only while decoding do the framerates drop drastically. So using a video node is not the issue here.
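One workaround, if the samples available online are too small, is to synthesize a large JPEG locally and feed both decoders the same file. A sketch (the 3840x2160 size and /tmp path are just examples, using the same gst 0.10 caps syntax as the pipelines above):

```shell
# Generate a single UltraHD-sized JPEG from a test pattern
# (gst 0.10 caps syntax, to match the pipelines above)
gst-launch-0.10 videotestsrc num-buffers=1 ! \
    "video/x-raw-yuv, width=(int)3840, height=(int)2160" ! \
    jpegenc ! filesink location=/tmp/uhd_test.jpg

# Decode it with either plugin; swap jpegdec for nvjpegdec to compare
gst-launch-0.10 filesrc location=/tmp/uhd_test.jpg ! jpegdec ! fakesink
```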

I have a similar problem with JPEG encoding.
This command

for i in $(seq 1 1000); do
gst-launch-0.10 filesrc location=./img_2592x1944_pitch2592 blocksize=5038848 ! \
"video/x-raw-gray, bpp=8, width=(int)2592, height=(int)1944, framerate=(fraction)1/1, format=(fourcc)I420" ! \
jpegenc ! \
fakesink -e
done

takes 1m24s. This command

for i in $(seq 1 1000); do
gst-launch-0.10 filesrc location=./img_2592x1944_pitch2592 blocksize=5038848 ! \
"video/x-raw-gray, bpp=8, width=(int)2592, height=(int)1944, framerate=(fraction)1/1, format=(fourcc)I420" ! \
nvjpegenc ! \
fakesink -e
done

takes 2m45s.

Clearly some of that time goes to loading the file, but even so, the standard jpegenc encodes faster than the NVIDIA nvjpegenc.
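Note that launching gst-launch a thousand times also measures process start-up on every iteration. A small helper (pure shell, a hypothetical addition, not from the thread) makes the comparison more explicit by reporting elapsed seconds for each plugin:

```shell
# bench N CMD... : run CMD N times and print elapsed wall-clock seconds.
bench() {
    n=$1; shift
    start=$(date +%s)
    for i in $(seq 1 "$n"); do
        "$@" > /dev/null 2>&1
    done
    end=$(date +%s)
    echo $((end - start))
}

# Example usage with the pipelines from the post above:
# bench 1000 gst-launch-0.10 filesrc location=./img_2592x1944_pitch2592 ... jpegenc ... fakesink -e
# bench 1000 gst-launch-0.10 filesrc location=./img_2592x1944_pitch2592 ... nvjpegenc ... fakesink -e
```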

Why does this happen? I have already read Jetson/Performance - eLinux.org and applied its steps.

NVIDIA support advised running this command:

gst-launch-1.0 videotestsrc num-buffers=1000 ! \
'video/x-raw, width=(int)2592, height=(int)1944, framerate=(fraction)1/1, format=(string)GRAY8' ! \
queue ! nvvidconv ! 'video/x-raw(memory:NVMM)' ! \
queue ! nvjpegenc ! fakesink -v -e