Image and video compression on the Jetson AGX Orin

Hello,

I would like to efficiently perform two tasks on the AGX Orin:

  1. compress images (JPEG, for example) to some compressed format (H.264/H.265 or something else)
  2. compress video from a camera input to storage

I tried to follow the Multimedia Developer Guide examples but did not find the appropriate API. My focus is compression.
Should I use the GStreamer API? If so, can you please provide an example for these tasks?
Maybe DeepStream?

Thank you,
Rinat

Hi,
Please install the SDK components through SDKManager, and you will see the jetson_multimedia_api samples in

/usr/src/jetson_multimedia_api

The documentation is at
Jetson Linux API Reference: Main Page | NVIDIA Docs
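Once the components are installed, a typical way to build and try one of the samples looks like the sketch below. The paths assume the default SDKManager install location, and the `video_encode` usage shown is from memory; check the sample's own usage text before relying on the exact argument order:

```shell
# Build the hardware-encode sample from the Jetson Multimedia API
# (default install path from SDKManager; run this on the Jetson itself).
cd /usr/src/jetson_multimedia_api/samples/01_video_encode
sudo make

# Encode a raw YUV420 file to H.264 with the hardware encoder.
# Assumed usage form: video_encode <in-file> <width> <height> <encoder-type> <out-file>
./video_encode input.yuv 1920 1080 H264 output.h264
```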

Thank you.

I am considering performing the encoding task on the GPU using FFmpeg or GStreamer
(using the low-level Jetson Multimedia API seems more complicated).

I understand that this is possible because of the NVENC support on the AGX Orin.

  1. Can you please point me to instructions for doing this with both frameworks?

  2. What is the preferred method for camera-stream encoding on the GPU: FFmpeg, GStreamer, or the low-level Jetson Multimedia API?

Thank you,
Rinat

Hi,
There is a dedicated hardware encoder (NVENC) in Jetson Orin, so the encoding task is not done on the GPU; please note this. For encoding frame data from a camera: if you use a Bayer camera sensor and do debayering through the ISP engine, we use the Argus stack, and you can refer to the sample:

/usr/src/jetson_multimedia_api/samples/10_argus_camera_recording

And if your camera source is a V4L2 source, please refer to the sample:

/usr/src/jetson_multimedia_api/samples/12_v4l2_camera_cuda

and apply the patch for encoding:
How to use v4l2 to capture videos in Jetson Orin r35 Jetpack 5.0 and encode them using a hardware encoding chip - #8 by DaneLLL

Thank you,

Can I activate this hardware encoder using FFmpeg and GStreamer?

Hi,
We have developed some plugins for GStreamer. Please refer to the GStreamer user guide:
Accelerated GStreamer — Jetson Linux Developer Guide documentation
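For the two original tasks, the hardware-accelerated elements from that guide can be combined into pipelines like the following sketches. The element names (nvarguscamerasrc, nvv4l2h264enc, nvjpegenc) are from the guide; the resolution, framerate, bitrate, and file names are placeholders to adjust for your setup:

```shell
# Task 2: encode a camera stream to storage with the hardware encoder.
# nvarguscamerasrc captures through the ISP (Bayer sensors); for a USB/V4L2
# camera you would use v4l2src plus nvvidconv instead.
gst-launch-1.0 -e nvarguscamerasrc \
  ! 'video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1' \
  ! nvv4l2h264enc bitrate=8000000 \
  ! h264parse ! qtmux ! filesink location=camera.mp4

# Task 1: compress a raw image to JPEG with the hardware JPEG encoder.
gst-launch-1.0 filesrc location=frame.yuv \
  ! videoparse format=i420 width=1920 height=1080 \
  ! nvjpegenc ! filesink location=frame.jpg
```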

Thank you.

I tried to use this guide for encoding via nvv4l2h264enc, for example taking input.mp4 and encoding it to H.264.
I wish to avoid GPU activity in order to keep the current application behavior (using only the dedicated encoder hardware).
According to my understanding, the nvvidconv element utilizes the GPU, so to avoid its usage I performed an offline file conversion to input.yuv (YUV420P format) using FFmpeg:

ffmpeg -i input.mp4 -c:v rawvideo -pix_fmt yuv420p input.yuv

I defined the following pipeline for encoding:

gst-launch-1.0 filesrc location=input.yuv ! videoparse format=i420 width=768 height=576 ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)I420' ! nvv4l2h264enc bitrate=2000000 ! 'video/x-h264, stream-format=(string)byte-stream' ! h264parse ! qtmux ! filesink location=output.mp4

This works, but it uses nvvidconv; without it I get a pipeline error.
I tried the same thing with the NV12 format.
How can I avoid the usage of nvvidconv (and any element that uses the GPU)?

Thank you,
Rinat

nvvidconv doesn't use the GPU by default on Jetson, but the VIC hardware, so this wouldn't be a GPU performance issue. You may check using tegrastats; the GPU is reported as GR3D (some devices may also have GR3D2).
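A minimal way to watch this while the pipeline runs (tegrastats ships with Jetson Linux; the interval is in milliseconds):

```shell
# Print utilization once per second. Watch the GR3D_FREQ field for GPU load
# and the NVENC field for the hardware encoder clock while the pipeline runs.
sudo tegrastats --interval 1000
```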

NVDEC, NVENC, or the GPU would use NVMM memory (contiguous memory for DMA), so you have to use nvvidconv for copying from or to system memory.

With GStreamer you would open an MP4 file, decode it, and display it with:

gst-launch-1.0 filesrc location=input.mp4 ! decodebin ! queue ! nvvidconv ! autovideosink

So, if you want to re-encode to H.264 in an MP4 container, you would try:

gst-launch-1.0 -e filesrc location=input.mp4 ! decodebin ! queue ! nvvidconv ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=output.mp4

Note that nvvidconv may not be mandatory in the latter case: if the input video is encoded in a format supported by nvv4l2decoder (such as H.264, H.265, VP8, VP9, AV1, …), the decoder outputs into NVMM memory, which is the expected input for nvv4l2h264enc, and nvvidconv would have nothing to do.
However, if decodebin selects another decoder that outputs into system memory, then nvvidconv is required for copying the decoded video from system memory into NVMM memory for the encoder.
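One way to make that explicit, assuming the input really is H.264 in an MP4 container: demux and hand the stream to nvv4l2decoder directly, so both decoder and encoder stay in NVMM memory and no nvvidconv (and no system-memory copy) is involved. This is a sketch; for other codecs or containers the qtdemux/h264parse pair would change accordingly:

```shell
# Decode with the hardware decoder and re-encode with the hardware encoder,
# staying in NVMM memory end to end (no nvvidconv needed).
gst-launch-1.0 -e filesrc location=input.mp4 \
  ! qtdemux ! h264parse ! nvv4l2decoder \
  ! nvv4l2h264enc bitrate=2000000 \
  ! h264parse ! qtmux ! filesink location=output.mp4
```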


Thanks, I will use the suggested pipeline.

I ran it and noticed a short rise in GPU load at pipeline start and pipeline end. Is it because of memory-copy activity?
