Multiple H.265 video encoding

Hi,

I would like to know how to encode 6 x 1080p@60fps video streams in H.265 on the Jetson TX1, please.

Regards,

Dev

Hi Dev,
Please refer to the documentation for running multiple encoding instances:
https://developer.nvidia.com/embedded/dlc/l4t-documentation-24-2-1

The H.265 encoding spec for the TX1 is a single 4Kp30 stream, so 6 x 1080p60 encodes are not guaranteed.
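
For a quick single-instance check on this release, an H.265 encode pipeline along these lines can be tried. This is only a sketch, assuming the omxh265enc element from the L4T R24 GStreamer plugins; replace videotestsrc with your actual source and adjust the caps.

# single 1080p60 H.265 hardware encode to an elementary stream file
gst-launch-1.0 -e videotestsrc num-buffers=600 ! \
  'video/x-raw, width=1920, height=1080, framerate=60/1, format=I420' ! \
  omxh265enc ! filesink location=test_1080p60.h265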

Hi DaneLLL,

Thanks for the documents.

If 6 x 1080p60 H.265 encodes are not guaranteed, what is the best way to store 6 x 1080p60 streams using the TX1?

Regards,

Dev

Hi Dev,
Here is a post about 4 x 1080p25 transcoding:
https://devtalk.nvidia.com/default/topic/979908/jetson-tx1/gstreamer-transcoding-performance-issue/post/5033461/#5033461

We don’t have a test case of 6 x 1080p60 H.265 encoding, but it would exceed the HW limit. You would need to move part of the load to a SW encoder.
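
As an illustration of splitting the load, a few of the streams could go to the HW encoder while the remaining ones use a software encoder. The sketch below assumes the omxh265enc element from the L4T GStreamer plugins and the x265enc element from gstreamer1.0-plugins-bad; a software H.265 encode at 1080p60 is very heavy on the TX1 CPU cores, so the achievable preset and frame rate need to be measured on your side.

# hardware-encoded stream
gst-launch-1.0 -e videotestsrc num-buffers=600 ! \
  'video/x-raw, width=1920, height=1080, framerate=60/1, format=I420' ! \
  omxh265enc ! filesink location=hw_stream.h265 &

# software-encoded stream (expect much lower throughput)
gst-launch-1.0 -e videotestsrc num-buffers=600 ! \
  'video/x-raw, width=1920, height=1080, framerate=60/1, format=I420' ! \
  x265enc speed-preset=ultrafast tune=zerolatency ! filesink location=sw_stream.h265 &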

Hi Dane,

Thanks for the link.

The idea here is to connect 6 CMOS sensors over CSI (2 lanes each) and then record the video at 1080p@60.

We have seen cases of 6 cameras streaming with the Jetson TX1.

We also checked that the total MPix/s for the Jetson TX1 isn’t exceeded with 6 x 1080p@60.

We are wondering what is the best way to store those 6 videos at this framerate and resolution. We would be grateful if you could advise.

Regards,

Dev

Hi Dane,

I have a few further questions and would be grateful if you could help us:

  1. If H.264 (or H.265) encoding of 6 x 1080p60 cannot be done on the TX1, can it be done on the TX2? If we look at the TX2 specs and translate them as follows:

4K60 → 3 x 4K30 → 6 x 1080p60, can we assume the numbers add up and that it is possible? (See the pixel-rate sketch after this list.)

  2. We are building our custom camera modules in house with 2 CSI lanes and would like to interface them with our custom carrier board for the Jetson TX2.

We then came across the following in the document you sent in the first reply to this thread:

"NVIDIA provides additional sensor support for BSP software releases. Developers must work with NVIDIA certified camera partners for any Bayer sensor and tuning support. The work involved includes:
•Sensor driver development
•Custom tools for sensor characterization
•Image quality tuning
These tools and operating mechanisms are NOT part of the public Jetson Embedded Platform (JEP) Board Support Package release. "

Can you please help us understand what it means? Would we need special permission to write a custom driver for our custom CSI camera board?

  3. Is there a quicker way to communicate? As we are in Copenhagen, there is a considerable time difference, and for now this forum is the only channel we know of for getting answers. We have also been using the TX1 developer kits and are planning to get TX2 kits as well. Is there a better and faster communication channel for technical questions and support? Please advise.
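
On question 1, our back-of-the-envelope pixel-rate check (our own rough numbers, not an official figure) is:

1920 x 1080 x 60 ≈ 124.4 MPix/s per 1080p60 stream
6 x 124.4 MPix/s ≈ 746.5 MPix/s total (the same as 3 x 4K30)
3840 x 2160 x 60 ≈ 497.7 MPix/s for a single 4K60 stream

So 6 x 1080p60 matches 3 x 4K30 in raw pixel rate but is about 1.5x the pixel rate of a single 4K60 stream; we are assuming the encoder’s listed multi-stream configurations hold rather than a pure pixel-rate equivalence.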

Regards,

Dev

Hi Dev,
For 1), we have checked with the sample in
https://devtalk.nvidia.com/default/topic/1008984/jetson-tx2/video-encode-speed/post/5149942/#5149942

The result looks good on TX2:

ubuntu@tegra-ubuntu:~/tegra_multimedia_api/samples/01_video_encode$ ./video_encode ~/1080p.yuv 1920 1080 H265 1080p.265 -hpt 1 & ./video_encode ~/1080p1.yuv 1920 1080 H265 1080p1.265 -hpt 1 & ./video_encode ~/1080p2.yuv 1920 1080 H265 1080p2.265 -hpt 1 & ./video_encode ~/1080p3.yuv 1920 1080 H265 1080p3.265 -hpt 1 & ./video_encode ~/1080p4.yuv 1920 1080 H265 1080p4.265 -hpt 1 & ./video_encode ~/1080p5.yuv 1920 1080 H265 1080p5.265 -hpt 1 &
[1] 23693
[2] 23694
[3] 23695
[4] 23696
[5] 23697
[6] 23698
ubuntu@tegra-ubuntu:~/tegra_multimedia_api/samples/01_video_encode$ Failed to query video capabilities: Inappropriate ioctl for device
Failed to query video capabilities: Inappropriate ioctl for device
Failed to query video capabilities: Inappropriate ioctl for device
Failed to query video capabilities: Inappropriate ioctl for device
Failed to query video capabilities: Inappropriate ioctl for device
Failed to query video capabilities: Inappropriate ioctl for device
NvMMLiteOpen : Block : BlockType = 8
NvMMLiteOpen : Block : BlockType = 8
===== MSENC =====
===== MSENC =====
NvMMLiteBlockCreate : Block : BlockType = 8
NvMMLiteBlockCreate : Block : BlockType = 8
NvMMLiteOpen : Block : BlockType = 8
892744264
892744264
842091865
842091865
===== MSENC =====
NvMMLiteBlockCreate : Block : BlockType = 8
892744264
842091865
NvMMLiteOpen : Block : BlockType = 8
===== MSENC =====
NvMMLiteBlockCreate : Block : BlockType = 8
892744264
842091865
NvMMLiteOpen : Block : BlockType = 8
===== MSENC =====
NvMMLiteBlockCreate : Block : BlockType = 8
892744264
842091865
===== NVENC blits (mode: 1) into block linear surfaces =====
===== NVENC blits (mode: 1) into block linear surfaces =====
===== NVENC blits (mode: 1) into block linear surfaces =====
===== NVENC blits (mode: 1) into block linear surfaces =====
NvMMLiteOpen : Block : BlockType = 8
===== MSENC =====
NvMMLiteBlockCreate : Block : BlockType = 8
892744264
842091865
===== NVENC blits (mode: 1) into block linear surfaces =====
===== NVENC blits (mode: 1) into block linear surfaces =====
Could not read complete frame from input file
File read complete.
Could not read complete frame from input file
File read complete.
Could not read complete frame from input file
File read complete.
----------- Element = enc0 -----------
Total Profiling time = 42.3089
Average FPS = 71.1435
Total units processed = 3011
Average latency(usec) = 70228
Minimum latency(usec) = 11339
Maximum latency(usec) = 74196
-------------------------------------
Could not read complete frame from input file
File read complete.
----------- Element = enc0 -----------
Total Profiling time = 42.3039
Average FPS = 71.1518
Total units processed = 3011
Average latency(usec) = 70246
Minimum latency(usec) = 16437
Maximum latency(usec) = 74040
-------------------------------------
App run was successful
----------- Element = enc0 -----------
Total Profiling time = 42.3339
Average FPS = 71.1014
Total units processed = 3011
Average latency(usec) = 70349
Minimum latency(usec) = 12446
Maximum latency(usec) = 87147
-------------------------------------
App run was successful
App run was successful
----------- Element = enc0 -----------
Total Profiling time = 42.3429
Average FPS = 71.0864
Total units processed = 3011
Average latency(usec) = 70405
Minimum latency(usec) = 29774
Maximum latency(usec) = 101023
-------------------------------------
Could not read complete frame from input file
File read complete.
App run was successful
Could not read complete frame from input file
File read complete.
----------- Element = enc0 -----------
Total Profiling time = 42.2892
Average FPS = 71.1765
Total units processed = 3011
Average latency(usec) = 70214
Minimum latency(usec) = 23333
Maximum latency(usec) = 77737
-------------------------------------
App run was successful
----------- Element = enc0 -----------
Total Profiling time = 42.2709
Average FPS = 71.2074
Total units processed = 3011
Average latency(usec) = 70187
Minimum latency(usec) = 23425
Maximum latency(usec) = 79542
-------------------------------------
App run was successful

[1]   Done                    ./video_encode ~/1080p.yuv 1920 1080 H265 1080p.265 -hpt 1
[2]   Done                    ./video_encode ~/1080p1.yuv 1920 1080 H265 1080p1.265 -hpt 1
[3]   Done                    ./video_encode ~/1080p2.yuv 1920 1080 H265 1080p2.265 -hpt 1
[4]   Done                    ./video_encode ~/1080p3.yuv 1920 1080 H265 1080p3.265 -hpt 1
[5]-  Done                    ./video_encode ~/1080p4.yuv 1920 1080 H265 1080p4.265 -hpt 1
[6]+  Done                    ./video_encode ~/1080p5.yuv 1920 1080 H265 1080p5.265 -hpt 1
ubuntu@tegra-ubuntu:~/tegra_multimedia_api/samples/01_video_encode$

For input YUVs with complex scenes, the fps probably drops, but it should stay above 60.
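
For capture from CSI sensors rather than YUV files, one encode instance per camera could look roughly like the pipeline below. This is only a sketch, assuming the nvcamerasrc and omxh265enc elements from the BSP and a sensor driver that exposes a 1080p60 mode; sensor-id selects the camera and the bitrate is an example value, so it would need to be verified on your setup.

# one camera -> 1080p60 H.265 elementary stream (run one instance per sensor-id)
gst-launch-1.0 -e nvcamerasrc sensor-id=0 ! \
  'video/x-raw(memory:NVMM), width=1920, height=1080, framerate=60/1, format=I420' ! \
  omxh265enc bitrate=8000000 ! filesink location=cam0_1080p60.h265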

Hi Dane,

Thanks for the test code. It really helps.

Any comments on the other two questions (2 and 3)?

Regards,

Dev