Video Scaling implementation in TX1 R24.2

Hi, we work on media-broadcast-related products and are just ramping up on TX1 (L4T R24.2, Ubuntu 16.04 64-bit).

We need to bring up the scaler module for 2160p-to-1080p and 1080p-to-720p conversion at 30 fps. The Multimedia API reference documents give a brief overview of the “V4L2 Video Converter APIs”, but I am unable to find a reference implementation for the scaler. There is no sample application (C/C++ file) that demonstrates the V4L2 Video Converter APIs.

Could anyone point me to the correct sample application, or to the GStreamer video converter plug-in source code, so I can see a reference implementation?


Hi Subash, please install the Multimedia API package via JetPack; you will then see the samples at /home/ubuntu/tegra_multimedia_api

Hi DaneLL, I installed the Multimedia APIs and used the 07_video_convert sample for profiling.
It works!! The code is self-explanatory.

Thank you for your quick response!!


But when I run 11_camera_object_identification, there is an error message about OpenCV: it cannot open the OpenCV library. However, I had already built the OpenCV library in 11_camera_object_identification/opencv_consumer_lib/. Please help!

Hello, haijun:
Please take a look at 11_camera_object_identification/README for details.
Generally, you need to install Caffe and related packages and build that library on your device.
Please let me know if there’s any problem.


I worked with the 07_video_convert sample and used “conv0” for my scaler profiling.
My profiling results show:
1920x1080 (YUYV) to 1280x720 (NV12M) takes about 20.8 ms per frame for scaling
3840x2160 (YUYV) to 1920x1080 (NV12M) takes about 56.1 ms per frame for scaling

For UHD format conversion:
3840x2160 (YUYV) to 3840x2160 (NV12M) takes about 65.6 ms per frame for format conversion.

These results show that I will not be able to use the video converter for real-time UHD scaling.
The timing does not include file read/write; I took care of that by loading 10 frames from file into a buffer, feeding frames from that buffer, and discarding the output after processing.

Is my observation correct, or is the accelerator capable of more than this?
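As a sanity check on those numbers, the measured per-frame times can be converted into effective throughput and compared against the 30 fps target (this is just arithmetic on the timings quoted above):

```python
# Convert measured per-frame conversion times (ms) into effective frame
# rates and compare against the 30 fps real-time target.
timings_ms = {
    "1080p YUYV -> 720p NV12M (scale)": 20.8,
    "2160p YUYV -> 1080p NV12M (scale)": 56.1,
    "2160p YUYV -> 2160p NV12M (format only)": 65.6,
}

for name, ms in timings_ms.items():
    fps = 1000.0 / ms
    verdict = "meets 30 fps" if fps >= 30.0 else "too slow for 30 fps"
    print(f"{name}: {fps:.1f} fps ({verdict})")
```

So only the 1080p-to-720p case (about 48 fps) clears 30 fps; both UHD cases land well under 18 fps per frame as measured.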


Hello, Subash:
Sample 07_video_convert is just a demonstration of video format conversion. It reads raw data from a file into a user-space buffer and writes the converted data back to a file. Internally, it needs to copy the user-space buffer into a DMA buffer, which can cost a lot of time since the raw frames are large.

You can combine this sample with the video decoder sample and feed the video converter directly with the buffer handle from the decoder. The conversion will then be much faster. (This pipeline is also the normal use case for most applications.)
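A rough back-of-the-envelope sketch shows why the user-space copies can dominate at UHD sizes. The frame sizes follow from the pixel formats; the 1 GB/s effective bandwidth for uncached buffer copies is an assumed figure for illustration only, not a measured TX1 number:

```python
# Estimate how much data must be copied per 4K frame when the converter
# is fed from user-space buffers instead of DMA-buffer handles.
width, height = 3840, 2160
yuyv_in = width * height * 2          # packed 4:2:2 input, 2 bytes/pixel
nv12_out = width * height * 3 // 2    # 4:2:0 output, 1.5 bytes/pixel

# Assumed effective CPU copy bandwidth for uncached buffers (illustrative).
bandwidth = 1e9  # 1 GB/s

copy_ms = (yuyv_in + nv12_out) / bandwidth * 1000.0
print(f"Bytes moved per frame: {yuyv_in + nv12_out}")
print(f"Estimated copy overhead: ~{copy_ms:.0f} ms per frame")
```

Under that assumption the copies alone account for tens of milliseconds per 4K frame, which is the same order of magnitude as the times measured with 07_video_convert; a zero-copy DMA-buffer pipeline avoids this entirely.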


Hi Chenjian,

Thank you for your quick reply. I will try this soon.

Meanwhile, how do I enable de-interlacing (interlaced to progressive)? The Tegra_X1_TRM_DP07225001 document claims that TX1 supports a de-interlacer (DEI), but v4l2_nv_extensions.h has no enum to enable the de-interlacer.

Please point me to a document explaining how to use the DEI.

Hello, SubashBose:
Why do you need an extra interlaced-to-progressive transform? Can you describe your use case?
Generally, this transform is done internally during decoding if the input video stream is interlaced.


Hello ChenJian,

Our final goal is to build an encoder and transcoder system.
Reverse-transcode use case: TS stream -> Decode -> Scale/Deinterlace -> Encode -> TS stream.
Input: 1080i50 (H.264)
Output: 720p25 (H.264/H.265)

Usually, for interlaced input, decoders generate field-interleaved or field-separated output in a single output buffer. This has to be converted to a progressive frame for progressive encoding: the top and bottom fields should be deinterlaced into one frame of output.
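As a toy illustration of that field-to-frame step (pure Python on synthetic row data, not TX1 code), a simple "weave" re-interleaves a field-separated pair of fields back into one progressive frame:

```python
# Weave deinterlace: merge a top field and a bottom field (field-separated
# layout) back into one progressive frame. Each field holds every other
# line of the full frame.

def weave(top_field, bottom_field):
    """Interleave top/bottom fields into a progressive frame (list of rows)."""
    assert len(top_field) == len(bottom_field)
    frame = []
    for t_row, b_row in zip(top_field, bottom_field):
        frame.append(t_row)   # even lines come from the top field
        frame.append(b_row)   # odd lines come from the bottom field
    return frame

# A 4-line toy "frame" split into two 2-line fields.
top = [[0, 0], [2, 2]]      # lines 0 and 2
bottom = [[1, 1], [3, 3]]   # lines 1 and 3
print(weave(top, bottom))   # -> [[0, 0], [1, 1], [2, 2], [3, 3]]
```

Note that a plain weave is only artifact-free when both fields were sampled at the same instant; for true interlaced content such as 1080i50, where the fields are 20 ms apart, a motion-adaptive de-interlacer (what the TX1 DEI hardware would presumably provide) is needed to avoid combing on motion.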

I am not sure about the NVIDIA TX1 decoder. Is there any option to set in the decoder so that it generates progressive output for interlaced input content?


Hello, Subash:
After decoding, the output is progressive data and no further transform is needed. You can run the sample to confirm this.

Let me know if there’s any problem.