How can NVIDIA VisionWorks capabilities be used inside DeepStream?

Is there a way to integrate the SDK, or a tutorial on how to achieve this? I would love to have these capabilities inside DeepStream. Detailed instructions would help. Thanks.

Hi beefshepherd,

Would you please provide more details of this requirement, such as use cases or more specific constraints?

Thanks

Hi @kayccc,

I was planning on having direction/motion estimation for the detected objects inside DeepStream. Since the VisionWorks SDK is optimized for CV algorithms on the GPU, I thought I could find a way to patch the two together. I would love to hear whether anything along these lines is possible, and how to achieve it.

Use cases: I want to estimate which direction a vehicle is moving, or whether a pedestrian has entered or left a region. Relying on a simple metric like the bounding box for this has been a bit inaccurate; for example, when the bounding box changes size under occlusion, it throws off the direction estimate. Any ideas / help would be appreciated.
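
To illustrate what I mean, here is a minimal sketch (plain C++, all names hypothetical, not from any NVIDIA sample) of the kind of bounding-box centroid heuristic I have in mind. Smoothing the centroid over a few frames softens the occlusion problem a little, but a box that resizes asymmetrically still shifts the centroid even when the object has not moved:

```cpp
#include <cmath>
#include <deque>
#include <string>

// Hypothetical sketch: estimate coarse motion direction from a short
// history of bounding-box centroids. The window averages out per-frame
// jitter, but a box that resizes under occlusion still moves the centroid.
struct BBox { float x, y, w, h; };  // top-left corner plus size

class DirectionEstimator {
public:
    explicit DirectionEstimator(size_t window = 5) : window_(window) {}

    void push(const BBox& b) {
        cx_.push_back(b.x + b.w / 2.0f);  // centroid x
        cy_.push_back(b.y + b.h / 2.0f);  // centroid y
        if (cx_.size() > window_) { cx_.pop_front(); cy_.pop_front(); }
    }

    // Returns "left", "right", "up", "down", or "still".
    std::string direction(float min_disp = 1.0f) const {
        if (cx_.size() < 2) return "still";
        float dx = cx_.back() - cx_.front();  // displacement over the window
        float dy = cy_.back() - cy_.front();
        if (std::fabs(dx) < min_disp && std::fabs(dy) < min_disp) return "still";
        if (std::fabs(dx) >= std::fabs(dy)) return dx > 0 ? "right" : "left";
        return dy > 0 ? "down" : "up";
    }

private:
    size_t window_;
    std::deque<float> cx_, cy_;  // centroid history
};
```

In a real pipeline the per-frame boxes would come from the tracker metadata; the sketch is only meant to show why box geometry alone is fragile.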

Thanks

Hi,

We don’t have a sample for VisionWorks + DeepStream.
However, you can try to implement it with the DeepStream plugin interface.

It looks like you only need the ROI region as the VisionWorks input; is that correct?
If yes, you can start it with /opt/nvidia/deepstream/deepstream-4.0/sources/gst-plugins/gst-dsexample.

Here are two useful examples for your reference:

Deepstream -> OpenCV:
https://devtalk.nvidia.com/default/topic/1047620/deepstream-sdk/how-to-create-opencv-gpumat-from-nvstream-/post/5397368/#5397368

// Wrap the DeepStream EGL frame buffer (RGBA, GPU memory) as a GpuMat without copying
cv::cuda::GpuMat d_mat(dsexample->processing_height, dsexample->processing_width, CV_8UC4, eglFrame.frame.pPitch[0]);

OpenCV -> VisionWorks:
/usr/share/visionworks/sources/samples/opencv_npp_interop

// Describe the cv::Mat memory layout for VisionWorks
vx_imagepatch_addressing_t src1_addr;
src1_addr.dim_x = cv_src1.cols;                            // image width in pixels
src1_addr.dim_y = cv_src1.rows;                            // image height in pixels
src1_addr.stride_x = sizeof(vx_uint8);                     // bytes per pixel (U8)
src1_addr.stride_y = static_cast<vx_int32>(cv_src1.step);  // bytes per row, including padding

// Wrap the existing cv::Mat data (no copy); the buffer lives in CPU memory
void *src1_ptrs[] = {
    cv_src1.data
};
vx_image src1 = vxCreateImageFromHandle(context, VX_DF_IMAGE_U8, &src1_addr, src1_ptrs,
                                        VX_MEMORY_TYPE_HOST);
NVXIO_CHECK_REFERENCE(src1);

The first example demonstrates how to get the ROI buffer from DeepStream, while the second shows how to create a vx_image from a raw pointer.
You can try to merge these two examples to build the plugin you need.

Please note that the first example uses a GPU buffer while the second is a CPU buffer example.
You will need to handle the different buffer types or switch to a compatible API.

Thanks.

@AastaLLL are there any future plans for releasing an example combining VisionWorks and DeepStream? So far, to perform any motion estimation, we are constrained to optical flow, which works only on NVIDIA RTX graphics cards and the Jetson TX2 (I presume), and doesn’t work on either GTX graphics cards or the Jetson Nano. I had posted about this earlier.

So the only approach is to have VisionWorks incorporated into DeepStream, which does work on a Nano device, and to try the examples out inside DeepStream. It’s a bottleneck that the examples provided by the SDK are hardware-dependent, i.e. the optical flow one. Was there a specific reason why it was released to work only on RTX cards? I would love a detailed DeepStream sample that solves the motion estimation problem without depending on specific hardware.

Hi,

Sorry, we don’t have a plan for VisionWorks + DeepStream.
VisionWorks is a legacy SDK and hasn’t been updated in years.

You may want to try this SDK on the Nano: https://developer.nvidia.com/opticalflow-sdk
It looks like it is integrated into the OpenCV contrib repository.

The repository can be built from source on a Jetson device:
https://github.com/AastaNV/JEP/blob/master/script/install_opencv4.1.1_Jetson.sh

We also have a sample that demonstrates DeepStream -> OpenCV:

https://devtalk.nvidia.com/default/topic/1047620/deepstream-sdk/how-to-create-opencv-gpumat-from-nvstream-/post/5397368/#5397368

You can try to use optical flow via the OpenCV interface with DeepStream.

Thanks.