Using VPI with VIC backend in Deepstream pipeline on AGX Xavier

I’m currently performing perspective transform using OpenCV on the CPU in my Deepstream pipeline.
For performance reasons, I would like to run this workload on the VIC but I have some issues on how to access the buffer and perform the memory wrapping in my GStreamer plugin.
This is the relevant function:

static GstFlowReturn
perspective_transform_frame (GstPerTransform * pertransform, gint idx, NvBufSurface * surface)
{
  VPIPerspectiveTransform h =
  {
     { -0.9279668665637102, -1.2386680438837496, 1896.3056318819054 },
     { -3.1888425618778356e-15, -2.068658617300784, 1667.6913395776087 },
     { -2.1742384124008588e-18, -0.0013187187869792788, 1.0 }
  };

  VPIPayload vic_warp;
  CHECK_VPI_STATUS(vpiCreatePerspectiveWarp(VPI_BACKEND_VIC, &vic_warp));

  VPIImageData img_data;
  VPIImageFormat type = VPI_IMAGE_FORMAT_NV12;
  VPIImage img = NULL;

  memset(&img_data, 0, sizeof(img_data));
  img_data.type = type;
  img_data.numPlanes = 1;
  img_data.planes[0].width = surface->surfaceList[idx].planeParams.width[0];
  img_data.planes[0].height = surface->surfaceList[idx].planeParams.height[0];
  img_data.planes[0].pitchBytes = surface->surfaceList[idx].planeParams.pitch[0];
  img_data.planes[0].data = surface->surfaceList[idx].mappedAddr.addr[0];

  CHECK_VPI_STATUS(vpiImageCreateCudaMemWrapper(&img_data, 0, &img));
  CHECK_VPI_STATUS(vpiSubmitPerspectiveWarp(pertransform->stream_vic, vic_warp, img, h, img, VPI_INTERP_LINEAR, VPI_BOUNDARY_COND_ZERO, ALREADY_INVERSE));

  g_print("done\n");

  return GST_FLOW_OK;
}

I get the error

terminate called after throwing an instance of 'std::runtime_error'
what(): VPI_ERROR_INVALID_ARGUMENT

in the vpiImageCreateCudaMemWrapper function.

Could you elaborate on how to wrap the unified memory on the AGX Xavier so I can perform my workload on the VIC?

My pipeline: filesrc location=train_4.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvideoconvert ! pertransform ! nvinfer config-file-path=config_infer_primary.txt ! appsink name=appsink emit-signals=True

Environment

TensorRT Version: 7.1.3
GPU Type: AGX Xavier
CUDA Version: 10.2
cuDNN Version: 8.0
Operating System + Version: Ubuntu 18.04
Baremetal or Container (if container which image + tag): baremetal

Hi,

Could you check the invalid argument error comes from vpiImageCreateCudaMemWrapper or vpiSubmitPerspectiveWarp?

Thanks.

Hey,
It comes from the vpiImageCreateCudaMemWrapper function.

Hi,

VPI 0.4 only supports pitch-linear memory buffers.
Please set bl-output=0 in nvvidconv to get pitch-linear data.

Thanks.

Thanks for the tip. Unfortunately, if I include nvvidconv bl-output=0 in my pipeline, I get a segmentation fault before even reaching my custom plugin. This is the pipeline:

filesrc location=train_4.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 enable-padding=0 ! nvvidconv bl-output=0 ! pertransform_vpi ! nvinfer config-file-path=config_infer_primary_.txt ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! mp4mux ! filesink location=result.mp4

Are there any additional steps to implement before using nvvidconv in my pipeline? I’m not sure about the differences between nvvideoconvert and nvvidconv and it seems only nvvidconv has the bl-output property.

edit: Are there any deepstream samples that integrate VPI/VIC workloads in their pipeline?

Hi,

Could you try the below pipeline?

filesrc location=train_4.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 enable-padding=0 ! nvvidconv bl-output=0 ! pertransform_vpi ! nvoverlaysink

nvvidconv cannot work with the Deepstream SDK. You need to use nvvideoconvert.

If you want to use Deepstream + VPI, you may need to wait for VPI 1.0, which supports block-linear buffers.
We will check this issue with our internal team and share more information with you next week.

Thanks.

Hi,

Thanks for your patience.

After checking with our internal team, nvvideoconvert can also output a pitch-linear format.
This will allow you to use Deepstream and VPI together.

Please check the below pipeline for the usage:

gst-launch-1.0 uridecodebin uri=file.mp4 name=u ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12,colorimetry=bt709' ! nvegltransform ! nveglglessink

Thanks.

Hi, I am trying to do this exact same thing with a Jetson TX2 using Deepstream 5.0 and VPI 0.4.4, but the layout type is still reporting as [NVBUF_LAYOUT_BLOCK_LINEAR]. Am I missing something?

gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12,colorimetry=bt709' ! dsexample ! nvegltransform ! nveglglessink