Video Image Compositor

The VIC on the Xavier seems to fit very well for the image correction that I’m trying to do.

But I see no trace of support in the JetPack for configuring/programming the VIC.

I would love to have the ability to use this.

From the datasheet of the Xavier NX:
1.7.4 Video Image Compositor (VIC)
VIC implements various 2D image and video operations in a power-efficient manner. It handles various system UI scaling,
blending, and rotation operations, video post-processing functions needed during video playback, and advanced de-noising
functions used for camera capture.
• Color Decompression
• High-quality Deinterlacing
• Inverse Telecine
• Temporal Noise Reduction
o New Bilateral Filter as spatial filter
o Improved TNR3 algorithm
• Scaling
• Color Conversion
• Memory Format Conversion
• Blend/Composite
• 2D Bit BLIT operation
• Rotation
• Geometry transform processing
o Programmable nine-points controlled warp patch for distortion correction
o Real-time on-the-fly position generation from sparse warp map surface
o Pincushion/barrel/moustache distortion correction
o Distortion correction of 180- and 360-degree wide FOV lens
o Scene perspective orientation adjustment with IPT
o Full warp map capability
o Non-fixed Patch size with 4x4 regions
o External Mask bit map surface
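To picture what the "sparse warp map surface" bullet means in practice: the engine is given a coarse grid of source coordinates and expands it to per-pixel fetch positions on the fly. A minimal NumPy sketch of that idea, using plain bilinear expansion (the grid size and interpolation scheme are my illustration, not the VIC's actual hardware scheme):

```python
import numpy as np

def densify_warp_map(sparse, out_h, out_w):
    """Bilinearly expand a coarse (gh, gw, 2) grid of source
    coordinates into a dense (out_h, out_w, 2) remap table."""
    gh, gw, _ = sparse.shape
    ys = np.linspace(0, gh - 1, out_h)   # output rows in grid coords
    xs = np.linspace(0, gw - 1, out_w)   # output cols in grid coords
    y0 = np.floor(ys).astype(int)
    y1 = np.minimum(y0 + 1, gh - 1)
    x0 = np.floor(xs).astype(int)
    x1 = np.minimum(x0 + 1, gw - 1)
    wy = (ys - y0)[:, None, None]        # vertical blend weights
    wx = (xs - x0)[None, :, None]        # horizontal blend weights
    top = sparse[y0][:, x0] * (1 - wx) + sparse[y0][:, x1] * wx
    bot = sparse[y1][:, x0] * (1 - wx) + sparse[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

# identity warp: a 3x3 control grid spanning an 8x8 image
gy, gx = np.meshgrid(np.linspace(0, 7, 3), np.linspace(0, 7, 3),
                     indexing="ij")
sparse = np.stack([gx, gy], axis=-1)
dense = densify_warp_map(sparse, 8, 8)   # per-pixel (x, y) fetch coords
```

Since the control points here describe an identity mapping, the densified table simply reproduces each pixel's own coordinates; a real correction would perturb the control points instead.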

More specifically, I’m interested in the "Geometry transform processing" part of the VIC and how to use it.


After some research I found that the VIC is managed by the NvMedia library as part of the DriveOS platform.

Wouldn’t it be suitable to include the HW-accelerated parts of NvMedia in LibArgus, or to include/support libnvmedia in JetPack? It feels wasteful to have a very capable SoC with accelerated support for much of the camera-related processing, when the alternative is to waste CPU and GPU performance doing it.

On Jetson platforms, we support gstreamer and jetson_multimedia_api. You can leverage VIC engine through nvvidconv in gstreamer, or NvBuffer APIs in jetson_multimedia_api. Please look at documents:

Can nvvidconv leverage the warping functionality built into the VIC?

If not, is the source code for nvvidconv available, so I can add the functionality?

For dewarping function, we have a sample in DeepStream SDK:


We are now working on DeepStream SDK 5.0 GA. Please wait for the release and try the sample.

But isn’t that warping done in SW? And isn’t it just for 360-degree fisheye cameras? I have two cameras with approx. 110 deg HFOV.

The Xavier NX has HW support in the VIC to do warping. So how can I access these features?
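For a non-fisheye lens like a 110 deg HFOV camera, distortion is usually described by a radial (Brown–Conrady) model, and the warp map any dewarp engine needs is just the distorted source coordinate for each output pixel. A NumPy sketch with made-up coefficients (k1, k2 and the intrinsics are illustrative placeholders, not calibrated values):

```python
import numpy as np

def radial_distort(x, y, k1, k2):
    """Map ideal (undistorted) normalized coords to the distorted
    coords where the sensor actually sampled them (Brown-Conrady)."""
    r2 = x * x + y * y
    scale = 1 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

def build_remap(w, h, fx, fy, cx, cy, k1, k2):
    """For each output pixel, compute the source pixel to fetch."""
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) / fx                    # normalized camera coords
    y = (v - cy) / fy
    xd, yd = radial_distort(x, y, k1, k2)
    return xd * fx + cx, yd * fy + cy    # back to pixel coords

# toy camera with mild barrel distortion (k1 < 0)
map_x, map_y = build_remap(640, 480, fx=320.0, fy=320.0,
                           cx=320.0, cy=240.0, k1=-0.1, k2=0.01)
```

The center pixel maps to itself, and with k1 < 0 the corners fetch from points pulled toward the image center, which is the barrel case; whether this table is consumed by the VIC, a GPU kernel, or a CPU loop is then purely a question of which engine executes the remap.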


The plugin is nvdewarper. It is implemented via CUDA and uses the GPU.

Currently we don’t have a software implementation for this. We have nvvidconvert, which can do resizing, cropping, and format conversion by leveraging the VIC.

Does nvdewarper support dewarping of non-360 cameras?

Is the source for nvvidconv available, so that I can add support for HW dewarping on my own?

Since you have this support in the NvMedia stack for DriveOS, is there a plan to integrate these features? How do I file feature requests for the JetPack SW?

On r32.4.3, we leverage the hardware PVA engine and have new APIs for dewarping:
Please take a look and give it a try.

Will the VPI library manage the performance of handling 2x 4K video streams?

We have profiling results for 1920x1080. Please look at
We have not tried 4K. You may give it a try.
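As a rough sanity check before trying it: a 3840x2160 frame has four times the pixels of 1920x1080, so to first order a per-frame time measured at 1080p scales by 4x at 4K. A back-of-the-envelope sketch (the 1080p timing below is a placeholder, not a figure from the profiling results):

```python
# first-order scaling of a 1080p profiling figure to 2x 4K @ 60 fps
px_1080p = 1920 * 1080
px_4k = 3840 * 2160
ratio = px_4k / px_1080p            # pixel count grows 4x

fps = 60
budget_ms = 1000 / fps              # ~16.67 ms per frame interval

t_1080p_ms = 2.0                    # PLACEHOLDER measured time, not real data
t_4k_est_ms = t_1080p_ms * ratio    # naive linear estimate
fits = 2 * t_4k_est_ms <= budget_ms # do both 4K streams fit one engine?
```

The estimate ignores fixed per-frame overheads and memory-bandwidth saturation, so it only tells you whether a real 4K measurement is worth attempting, not whether it will pass.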

And how would code based on this library be able to feed a stream into GStreamer, for audio encoding and RTMP streaming?


The APIs are not compatible with gstreamer currently. In gstreamer, we have the nvdewarper plugin, which supports 360 degree now.

So wouldn’t it make sense to integrate the VIC functionality from NvMedia into LibArgus and have that support in GStreamer as well?

After evaluation and discussion, we use the VPI APIs for this functionality in L4T releases. We will evaluate supporting VPI functions in gstreamer. On r32.4.3, please use dewarp in gstreamer or the VPI APIs.

The VPI examples use OpenCV for image input etc. Is there a performance benefit to using LibArgus rather than OpenCV for camera/image interaction?



Would it be possible to use LibArgus for acquiring frames from the cameras, VPI for conversion, and appsrc for feeding the result into GStreamer?

I need to keep the 2x 4K @ 60 fps performance through it all.
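For perspective on what "2x 4K @ 60 fps" means in memory traffic, assuming NV12 (12 bits per pixel), which is the common surface format in these pipelines: every stage that reads and writes each frame once moves roughly the following (simple arithmetic, not a measured figure):

```python
# memory traffic for 2x 4K @ 60 fps in NV12 (12 bits per pixel)
w, h, fps, streams = 3840, 2160, 60, 2
bytes_per_frame = w * h * 3 // 2     # Y plane + interleaved UV at half res
# one read plus one write of every frame, both streams:
read_write_gb_s = streams * fps * bytes_per_frame * 2 / 1e9
```

Each frame is about 12.4 MB, so a single read-modify-write stage already costs ~3 GB/s of memory bandwidth, and every extra buffer copy in the chain adds the same again. That is why keeping frames in hardware buffers end to end (e.g. dmabuf/NvBuffer handles between LibArgus, the conversion step, and appsrc) matters at this rate.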



Is the VPI library compatible with LibArgus?

How would I interface with the camera?

How would the image data be represented, to ensure maximum performance?