Using NvMedia for image processing

Please provide the following info (tick the boxes after creating this topic):
Software Version
DRIVE OS 6.0.6
DRIVE OS 6.0.5
DRIVE OS 6.0.4 (rev. 1)
[√] DRIVE OS 6.0.4 SDK

Target Operating System
[√] Linux

Hardware Platform
DRIVE AGX Orin Developer Kit (940-63710-0010-D00)
DRIVE AGX Orin Developer Kit (940-63710-0010-C00)
[√] DRIVE AGX Orin Developer Kit (not sure of its part number)

SDK Manager Version
[√] other

Host Machine Version
native Ubuntu Linux 20.04 Host installed with SDK Manager
native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers
native Ubuntu Linux 18.04 Host installed with DRIVE OS Docker Containers
[√] other

I am using the NvSciBuf-related APIs in the DRIVE OS SDK for image processing. For example, I first use the NvMedia2D APIs to convert a YUV422 image to YUV420, then use the NvMediaLDC APIs for distortion correction, and finally use the NvMedia JPEG encode APIs to compress the image into JPEG format.

My question is whether I need to call the NvSciSync-related APIs to wait for each operation to complete. I found that if I don't use NvSciSync, the compressed image comes from the previous frame. However, using NvSciSync significantly increases the processing time (reaching 60-70 ms when multiple cameras run in parallel), which is unacceptable.

Could you please help me with this question?

Dear @zhixin.zhou,
So the application pipeline is: capture image -> ISP -> NvMedia2D -> NvMediaLDC -> NvMedia JPEG encode.
May I know where you are placing the NvSciSync calls in the pipeline, and which APIs you are calling?
Could you share some perf stats, such as the pipeline time for a single camera and for multiple cameras, with and without the NvSciSync calls? Generally, I would expect ~20 ms for capture + ISP processing for 4 cameras, and synchronization calls are needed because one engine has to wait for the output of another.

I obtain the image directly from ICP because the ISP processing has already been done on the camera side. So the pipeline is: capture image from ICP -> NvMedia2D (YUYV -> YUV420) -> NvMediaLDC -> NvMedia2D (crop or rotate) -> NvMedia JPEG encode.
I have shown in the attachment how I call the NvSciSync APIs in my code.

code sample.txt (6.9 KB)
Perf stats: about 10 ms for a single pipeline. If 7 cameras run in parallel, each camera takes 30-50 ms to process.
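[Editor's note: the attachment is not reproduced here. The symptom described above (correct output only with a sync call after each stage, and processing time growing with camera count) is what a per-stage CPU wait typically looks like. A pseudocode-style sketch of that pattern, with API names recalled from the DRIVE OS 6.0 NvMedia2D/NvMediaLDC headers (verify against your SDK version), is:]

```c
/* Pseudocode-style sketch: CPU wait after every stage. The calling
 * thread blocks on each engine before submitting work to the next,
 * so the whole pipeline is serialized on the CPU. Handles, parameter
 * objects, and the NvSciSyncCpuWaitContext are assumed to be set up
 * and registered elsewhere. */
NvSciSyncFence fence = NvSciSyncFenceInitializer;

/* Stage 1: NvMedia2D, YUYV -> YUV420. */
NvMedia2DCompose(handle2d, params2d, &result2d);
NvMedia2DGetEOFNvSciSyncFence(handle2d, &result2d, &fence);
NvSciSyncFenceWait(&fence, cpuWaitCtx, -1);   /* CPU blocks here */
NvSciSyncFenceClear(&fence);

/* Stage 2: LDC distortion correction. */
NvMediaLdcProcess(handleLdc, ldcParams, &resultLdc);
NvMediaLdcGetEOFNvSciSyncFence(handleLdc, &resultLdc, &fence);
NvSciSyncFenceWait(&fence, cpuWaitCtx, -1);   /* CPU blocks again */
NvSciSyncFenceClear(&fence);

/* ...same wait-per-stage pattern for the crop/rotate pass and the
 * JPEG encode... */
```

With 7 cameras sharing the engines, each of these CPU waits also adds scheduling latency, which would explain the jump from ~10 ms to 30-50 ms per camera.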

@SivaRamaKrishnaNV Can you help answer the question?

Dear @zhixin.zhou,
My apologies for missing this topic.
Do you still need support on this issue?

@SivaRamaKrishnaNV Yes. The time cost of the image processing pipeline (ICP -> NvMedia2D (YUYV -> YUV420) -> NvMediaLDC -> NvMedia2D (crop or rotate) -> NvMedia JPEG encode) is too long. How can I shorten it?

Hello, I still need some assistance. When using the NPPI library, I can usually use the same CUDA context for multiple image-processing operations and call cudaStreamSynchronize only once at the end. However, when using NvMedia, it seems necessary to call NvSciSync between each step, which makes the overall image-processing time longer. Is there a way to avoid this and reduce the time spent on synchronization?
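[Editor's note: NvSciSync fences can be chained engine-to-engine, which gives exactly the behavior described for CUDA streams: the hardware engines wait on each other, and the CPU waits only once at the end. A pseudocode-style sketch, with NvSciSync-interop function names recalled from the DRIVE OS 6.0 headers (each NvSciSyncObj must first be registered with both the signaling and the waiting engine; verify names and signatures against your SDK version), is:]

```c
/* Pseudocode-style sketch: chain EOF fences as pre-fences instead of
 * doing a CPU wait between stages. Only the final stage is waited on
 * by the CPU, analogous to a single cudaStreamSynchronize. */
NvSciSyncFence fence2d   = NvSciSyncFenceInitializer;
NvSciSyncFence fenceLdc  = NvSciSyncFenceInitializer;
NvSciSyncFence fenceLast = NvSciSyncFenceInitializer;

/* Stage 1: NvMedia2D conversion; collect its EOF fence, do not wait. */
NvMedia2DCompose(handle2d, params2d, &result2d);
NvMedia2DGetEOFNvSciSyncFence(handle2d, &result2d, &fence2d);

/* Stage 2: LDC waits on the 2D engine in hardware via a pre-fence. */
NvMediaLdcInsertPreNvSciSyncFence(handleLdc, ldcParams, &fence2d);
NvMediaLdcProcess(handleLdc, ldcParams, &resultLdc);
NvMediaLdcGetEOFNvSciSyncFence(handleLdc, &resultLdc, &fenceLdc);

/* ...chain the second 2D pass and the JPEG encoder the same way;
 * fenceLast is the encoder's EOF fence (elided here)... */

/* Single CPU wait at the very end of the pipeline. */
NvSciSyncFenceWait(&fenceLast, cpuWaitCtx, -1);

NvSciSyncFenceClear(&fence2d);
NvSciSyncFenceClear(&fenceLdc);
NvSciSyncFenceClear(&fenceLast);
```

This also lets the per-camera pipelines overlap: while one camera's frame is in the LDC stage, another camera's frame can occupy the 2D engine, so multi-camera throughput should no longer degrade linearly with the number of CPU waits.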


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.