What’s the maximum achievable fps for MJPEG-to-H.264/5 transcoding on a Jetson Nano, assuming the input is a UVC camera producing either 1080p or 2160p 8bpc? Do you also have the same figures for 10bpc?
Hi,
we don’t have results for this use case at hand. Please give it a try after running sudo jetson_clocks.
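For example (a sketch; the --show option is assumed to be available on your L4T release):
$ sudo jetson_clocks          # lock CPU/GPU/EMC clocks to maximum
$ sudo jetson_clocks --show   # verify the current clock settings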
There is a patch that enhances MJPEG decoding:
FYR.
The issue is I only have a TX2 at the moment, which I believe uses different hardware for the codecs?
Would you be able to run the test for me before I get the hardware?
Hi,
On Jetson Nano/r32.3.1 with an E-Con See3CAM CU135, run
$ gst-launch-1.0 v4l2src io-mode=2 ! image/jpeg,width=3840,height=2160,framerate=30/1 ! nvjpegdec ! 'video/x-raw(memory:NVMM)' ! fpsdisplaysink text-overlay=0 video-sink=fakesink -v
The fps is around:
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 55, dropped: 0, current: 11.30, average: 11.45
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstTextOverlay:fps-display-text-overlay: text = rendered: 61, dropped: 0, current: 11.21, average: 11.43
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 61, dropped: 0, current: 11.21, average: 11.43
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstTextOverlay:fps-display-text-overlay: text = rendered: 67, dropped: 0, current: 11.20, average: 11.41
FYR.
Thank you, @DaneLLL. However, I’m a little confused. It looks like:
- The fps is around 5× lower than the HEVC decode spec. Any idea why, and what could be done about it?
- You’re not even transcoding to H.264/5 in your test, so the real performance might be even worse?
Would the encoding/decoding be running on the GPU or VPU?
Hi,
Please check the hardware capabilities.
The hardware H264/H265 decoder can achieve 4Kp60, but the JPEG decoder cannot. The hardware H264/H265 encoder can achieve 4Kp30. The bottleneck is the hardware JPEG decoder, so the result should be the same whether or not H264/H265 encoding is in the pipeline.
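For reference, a full transcode could be verified with a pipeline along these lines (a sketch only; nvvidconv, nvv4l2h264enc, and the muxing elements are assumptions based on typical r32.x pipelines, not something measured here):
$ gst-launch-1.0 v4l2src io-mode=2 ! image/jpeg,width=3840,height=2160,framerate=30/1 ! nvjpegdec ! 'video/x-raw(memory:NVMM)' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=out.mp4 -e
Since the JPEG decoder is the bottleneck, the measured fps should match the decode-only pipeline above.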
Good to hear that having encoding in the pipeline shouldn’t affect the performance.
Looking at the capabilities, 600 MP/sec should yield 600 / 8.3 ≈ 72 fps for a UHD stream (3840 × 2160 ≈ 8.3 MP), i.e. even higher than the H.264/5 decode spec. Are you sure this is where the bottleneck is?
Hi,
The calculation may not match real use cases. If we were able to achieve 4Kp60 MJPEG decoding, it would be listed in the documentation.
I see. Can you explain how you achieved 600 MP/sec in your testing?
Will transcoding 1080p achieve at least 30fps in reality?
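For reference, once I have the hardware I’d run something like this for the 1080p case (a sketch; it assumes the camera exposes a 1080p MJPEG mode and follows the same caps style as the pipeline above):
$ gst-launch-1.0 v4l2src io-mode=2 ! image/jpeg,width=1920,height=1080,framerate=30/1 ! nvjpegdec ! 'video/x-raw(memory:NVMM)' ! fpsdisplaysink text-overlay=0 video-sink=fakesink -v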
Hi,
The result looks fine when running the tegra_multimedia_api sample:
12_camera_v4l2_cuda$ ./camera_v4l2_cuda -d /dev/video0 -s 3840x2160 -f MJPEG -r 30 -v
INFO: camera_initialize(): (line:303) Camera ouput format: (3840 x 2160) stride: 0, imagesize: 16588800, frate: 30 / 1
[INFO] (NvEglRenderer.cpp:110) <renderer0> Setting Screen width 3840 height 2160
INFO: init_components(): (line:342) Initialize v4l2 components successfully
INFO: prepare_buffers_mjpeg(): (line:465) Succeed in preparing mjpeg buffers
INFO: start_stream(): (line:544) Camera video streaming on ...
^CQuit due to exit command from user!
----------- Element = renderer0 -----------
Total Profiling time = 3.98325
Average FPS = 29.8751
Total units processed = 120
Num. of late units = 86
-------------------------------------
INFO: stop_stream(): (line:711) Camera video streaming off ...
App run was successful
Once you get the device, please connect a 4K TV and give it a try.
The patch in #3 should improve the performance of running in a gstreamer pipeline. FYR.
OK. So the decoding performance figures assume you’re outputting directly to a display, and GStreamer may perform much worse. That wasn’t clear to me.
Are you planning to apply the patch upstream?
Thanks for your help.
Hi,
gstjpeg is open source, and this is shared as a patch for users who require the function.
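As a rough sketch of applying it (the archive, patch, and library names below are assumptions; check the L4T public sources for your exact release):
$ tar xjf gstjpeg_src.tbz2 && cd gstjpeg_src   # gstjpeg source from the L4T public sources (name assumed)
$ patch -p1 < mjpeg-enhancement.patch          # the patch from #3 (hypothetical filename)
$ ./configure && make
$ sudo cp .libs/libgstnvjpeg.so /usr/lib/aarch64-linux-gnu/gstreamer-1.0/   # plugin path assumed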