How DeepStream reads depth data (Z16) and sends it via RTSP

Question:
How can I use DeepStream to read the RealSense D415 depth map (format: Z16) and send it via RTSP?

I use the Intel RealSense source code in DeepStream to get the depth image; the code is as follows:

#include <librealsense2/rs.hpp>
#include <opencv2/opencv.hpp>

int main (int argc, char *argv[])
{
    // Depth stream parameters for the D415
    const int width = 1280;
    const int height = 720;
    const int framerate = 30;

    // Set up realsense
    rs2::pipeline pipe;
    rs2::config cfg;

    cfg.enable_stream(RS2_STREAM_DEPTH, width, height, RS2_FORMAT_Z16, framerate);

    rs2::pipeline_profile profile = pipe.start(cfg);
    rs2::device dev = profile.get_device();

    // Create the colorizer once, outside the capture loop
    rs2::colorizer color_map;

    while (true) {
        // Block until the next set of frames arrives from the camera
        rs2::frameset frames = pipe.wait_for_frames();
        rs2::depth_frame depth_frame = frames.get_depth_frame();

        // Colorize the 16-bit depth frame into an 8-bit RGB image
        auto colorized_frame = color_map.colorize(depth_frame);
        cv::Mat colorized_mat(cv::Size(width, height), CV_8UC3,
                              (void *)colorized_frame.get_data(),
                              cv::Mat::AUTO_STEP);
    }

    return 0;
}

Is there another solution to get the depth image in DeepStream? Thanks.

Hi,
The existing implementation of the source group is listed in
https://docs.nvidia.com/metropolis/deepstream/dev-guide/index.html#page/DeepStream%2520Development%2520Guide%2Fdeepstream_app_config.3.2.html%23wwpID0E0QB0HA

Your source format (Z16) is not supported, so you need to customize deepstream-app. The source code is in

deepstream_sdk_v4.0.1_jetson\sources\apps\apps-common\src\deepstream_source_bin.c

A possible solution is to add code that runs

appsrc ! video/x-raw,format=I420 ! nvvideoconvert ! video/x-raw(memory:NVMM)

The source format is Z16, so you need a Z16 → I420 conversion before feeding the data to appsrc.
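
For reference, a minimal CPU-side sketch of such a conversion; the helper name z16_to_i420 and the 10 m clipping range are illustrative assumptions, not anything from librealsense or DeepStream:

#include <algorithm>
#include <cstdint>
#include <cstring>

// Hypothetical helper: scale each 16-bit depth sample to 8-bit luma and fill
// the chroma planes with neutral gray, producing a grayscale I420 frame.
void z16_to_i420(const uint16_t *depth, uint8_t *i420, int width, int height)
{
    const uint32_t max_depth_mm = 10000;   // assumed clipping range for visualization

    uint8_t *y_plane = i420;                        // width * height bytes
    uint8_t *u_plane = y_plane + width * height;    // (width/2) * (height/2) bytes
    uint8_t *v_plane = u_plane + (width / 2) * (height / 2);

    for (int i = 0; i < width * height; ++i) {
        uint32_t d = std::min<uint32_t>(depth[i], max_depth_mm);
        y_plane[i] = static_cast<uint8_t>(d * 255 / max_depth_mm);
    }

    // Neutral chroma (128) yields a grayscale image after YUV -> RGB conversion.
    std::memset(u_plane, 128, (width / 2) * (height / 2));
    std::memset(v_plane, 128, (width / 2) * (height / 2));
}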

As far as I know, the Z16 depth image can only be converted to I420 (YUV420) and then hardware-encoded with omxh264enc, or converted to NV12 and hardware-encoded with nvv4l2h264enc. Is there any other way?
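
For reference, a rough sketch of how the I420 → nvvideoconvert → nvv4l2h264enc route could be built with gst_parse_launch(); the element chain and the rtph264pay/udpsink tail are illustrative assumptions, and a real RTSP deployment would instead hang the encoder off an RTSP server (e.g. gst-rtsp-server):

#include <gst/gst.h>
#include <gst/app/gstappsrc.h>

// Build the pipeline; gst_init() must already have been called.
GstElement *build_depth_pipeline(void)
{
    GError *err = NULL;
    GstElement *pipeline = gst_parse_launch(
        "appsrc name=depth_src is-live=true do-timestamp=true format=time "
        "! video/x-raw,format=I420,width=1280,height=720,framerate=30/1 "
        "! nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 "
        "! nvv4l2h264enc ! h264parse ! rtph264pay ! udpsink host=127.0.0.1 port=5000",
        &err);
    if (pipeline == NULL) {
        g_printerr("Failed to build pipeline: %s\n", err ? err->message : "unknown");
        return NULL;
    }

    // Set the caps on appsrc explicitly so downstream negotiation succeeds.
    GstElement *appsrc = gst_bin_get_by_name(GST_BIN(pipeline), "depth_src");
    GstCaps *caps = gst_caps_from_string(
        "video/x-raw,format=I420,width=1280,height=720,framerate=30/1");
    gst_app_src_set_caps(GST_APP_SRC(appsrc), caps);
    gst_caps_unref(caps);
    gst_object_unref(appsrc);

    return pipeline;
}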

@DaneLLL, thank you very much for your reply. How can I get the depth image (Z16) in DeepStream? Thanks.

" The source format is Z16 and you should need conversion of Z16 → I420 in appsrc. " How to use GPU acceleration.Thanks.

Hi,
The code is open source and we encourage users to do customization. For using appsrc, you may refer to
https://devtalk.nvidia.com/default/topic/1026106/jetson-tx1/usage-of-nvbuffer-apis/post/5219225/#5219225

appsrc has to use CPU buffers, so the Z16 → I420 conversion may not be possible on the GPU (through CUDA). You can implement it on the CPU first and check whether the performance is acceptable.
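
As a sketch of how the CPU-side conversion could feed appsrc (push_depth_frame is a hypothetical helper, and z16_to_i420 refers to the CPU conversion sketched earlier):

#include <cstdint>
#include <gst/gst.h>
#include <gst/app/gstappsrc.h>

// Forward declaration of the hypothetical CPU conversion helper sketched above.
void z16_to_i420(const uint16_t *depth, uint8_t *i420, int width, int height);

// Push one converted frame into appsrc; the buffer lives in CPU memory.
void push_depth_frame(GstAppSrc *appsrc, const uint16_t *depth,
                      int width, int height)
{
    const gsize i420_size = (gsize)width * height * 3 / 2;
    GstBuffer *buffer = gst_buffer_new_allocate(NULL, i420_size, NULL);

    GstMapInfo map;
    gst_buffer_map(buffer, &map, GST_MAP_WRITE);
    z16_to_i420(depth, map.data, width, height);   // CPU-side Z16 -> I420
    gst_buffer_unmap(buffer, &map);

    // appsrc takes ownership of the buffer and pushes it downstream.
    gst_app_src_push_buffer(appsrc, buffer);
}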

@DaneLLL, thanks. "appsrc has to use CPU buffers, so the Z16 → I420 conversion may not be possible on the GPU (through CUDA). You can implement it on the CPU first and check whether the performance is acceptable." Can I convert Z16 → RGB and then use omxh264enc for hardware encoding?

I tried DeepStream streaming RGB 1920x1080 and depth data 1280x720 from the Intel RealSense driver to a remote server. The CPU usage is above 80%. Below is my pipeline. Could you help check where I can optimize?

Intel RealSense -> CPU reads RGB 1920x1080 data from the RealSense API -> DeepStream -> H.264 -> RTSP
Intel RealSense -> CPU reads 12-bit depth 1280x720 data from the RealSense API -> OpenCV converts to BGR -> DeepStream -> H.264 -> RTSP

Hi,
You will see certain CPU loading when using the BGR format. The explanation is in the post below:
https://devtalk.nvidia.com/default/topic/1064944/jetson-nano/-gstreamer-nvvidconv-bgr-as-input/post/5397443/#5397443

For optimal performance on Jetson platforms, we would suggest using camera modules from our partners:
https://developer.nvidia.com/embedded/community/ecosystem

However, RealSense cameras offer special functions. If you have to use these cameras, please be aware that some unavoidable memcpy() calls will add CPU load.

I referred to librealsense/backend-v4l2.cpp (master branch, IntelRealSense/librealsense on GitHub) and added support for the Z16 format in gstv4l2object.c. Is that OK?

Hi, @DaneLLL.

When using the Intel RealSense library to read a 1920x1080 RGB image and a 1280x720 depth map at the same time on a Nano device, and displaying them on the Nano, the average CPU usage is over 80%, memory consumption is about 300 MB, and the frame rate is 7 fps.

This performance cannot meet the requirements.

Hi,
As we have explained in comments #6 and #9, Z16 is not supported by the hardware VIC engine, so the conversion has to be done on the CPU. You will see certain CPU loading and may not achieve acceptable performance. One thing you can try is running 'sudo jetson_clocks' to keep the CPU clocks at maximum, although it may not bring a significant improvement compared to Jetson TX2 and Xavier.

Hi,

 "Z16 is not supported by hardware VIC engine ". z16 convert rgb8,Can VIC engine support rgb?  My opinion is "realsense hardware--->Kernel support for z16(UVC,v4l2 )---> v4l2 buffer------>zero memcopy  gpu----> z16 convert rgb8(gpu)----> gpu encode---->rtsp" . Is this solution feasible?

Hi,
Please check the Technical Reference Manual of the TX1:
https://developer.nvidia.com/embedded/downloads#?search=trm
VIC is the hardware engine for conversion/scaling/cropping.

In the DeepStream SDK this solution may not work, since the SDK is a GStreamer-based implementation and Z16 is not a supported format.
It could be possible with tegra_multimedia_api; the v4l2cuda sample can be used as a starting point. The sample demonstrates how to allocate CUDA buffers and capture in V4L2_PIX_FMT_UYVY format. You may try to customize it to capture in the Z16 format.
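
If you go the tegra_multimedia_api route, a GPU-side Z16 → RGBA conversion could look roughly like the kernel below; the kernel name, the grayscale mapping, and the 10 m clipping range are assumptions for illustration, not code from the v4l2cuda sample:

#include <cstdint>
#include <cuda_runtime.h>

// Hypothetical kernel: map each 16-bit depth sample to an 8-bit gray value
// and write it out as RGBA.
__global__ void z16_to_rgba(const uint16_t *depth, uchar4 *rgba,
                            int width, int height, unsigned int max_depth_mm)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height)
        return;

    int idx = y * width + x;
    unsigned int d = min((unsigned int)depth[idx], max_depth_mm);
    unsigned char v = (unsigned char)(d * 255u / max_depth_mm);
    rgba[idx] = make_uchar4(v, v, v, 255);
}

// Host-side launch, assuming both buffers already live in CUDA-accessible
// memory (for example, allocated the way the v4l2cuda sample allocates its
// capture buffers).
void launch_z16_to_rgba(const uint16_t *d_depth, uchar4 *d_rgba,
                        int width, int height)
{
    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x,
              (height + block.y - 1) / block.y);
    z16_to_rgba<<<grid, block>>>(d_depth, d_rgba, width, height, 10000u);
    cudaDeviceSynchronize();
}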

Hi @DaneLLL, thanks for the reply. I will follow your suggestions.

@hw97525_liao Do you have any solution for it? Thanks.