question:
How can I use DeepStream to read a RealSense D415 depth map (format: Z16) and send it via RTSP?
I use the Intel RealSense SDK inside my DeepStream application to get the depth image; the code is as follows:
#include <librealsense2/rs.hpp>
#include <opencv2/opencv.hpp>

int main (int argc, char *argv[])
{
  // Stream parameters for the D415 depth stream
  const int width = 1280, height = 720, framerate = 30;

  // Set up RealSense
  rs2::pipeline pipe;
  rs2::config cfg;
  cfg.enable_stream(RS2_STREAM_DEPTH, width, height, RS2_FORMAT_Z16, framerate);
  rs2::pipeline_profile profile = pipe.start(cfg);

  rs2::colorizer color_map; // create once, not per frame
  while (true) {
    // Block until a new frameset arrives, then take its depth frame
    rs2::frameset frames = pipe.wait_for_frames();
    rs2::depth_frame depth_frame = frames.get_depth_frame();
    rs2::frame colorized_frame = color_map.colorize(depth_frame);
    cv::Mat colorized_mat(cv::Size(width, height), CV_8UC3,
                          (void *)colorized_frame.get_data(), cv::Mat::AUTO_STEP);
  }
  return 0;
}
Is there another way to get the depth image into DeepStream? Thanks.
As far as I know, the Z16 depth image can only be converted to I420 (YUV420) and then hardware-encoded with omxh264enc, or converted to NV12 and then hardware-encoded with nvv4l2h264enc. Is there any other way?
appsrc has to use CPU buffers, so the Z16->I420 conversion may not be possible on the GPU (through CUDA). You can implement it on the CPU first and check whether the performance is acceptable.
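A minimal CPU-side sketch of such a Z16->I420 conversion (the function name and the simple linear scaling are my assumptions, not part of any SDK; the depth is packed into the Y plane and the chroma planes are left neutral, producing a grayscale frame):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical helper: pack a Z16 depth buffer into an I420 frame.
// The Y plane carries depth scaled down to 8 bits; U/V are set to 128
// (no chroma), so the encoded video is effectively grayscale.
std::vector<uint8_t> z16_to_i420(const uint16_t *depth, int width, int height,
                                 uint16_t max_depth /* e.g. 65535 or a clip range */)
{
    std::vector<uint8_t> i420(width * height * 3 / 2);
    uint8_t *y = i420.data();
    // Scale each 16-bit depth sample down to 8 bits.
    for (int i = 0; i < width * height; ++i)
        y[i] = static_cast<uint8_t>((uint32_t)depth[i] * 255 / max_depth);
    // Fill the subsampled U and V planes with 128 (neutral chroma).
    std::memset(y + width * height, 128, width * height / 2);
    return i420;
}
```

A buffer like this could be pushed into appsrc with caps video/x-raw,format=I420; as the thread notes, nvv4l2h264enc would still need an NV12 conversion step in between. Whether the per-pixel loop is fast enough on a Nano has to be measured.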
@DaneLLL, thanks. Regarding the point that appsrc has to use CPU buffers and the Z16->I420 conversion may not be possible on the GPU: can I convert Z16 -> RGB instead and then use omxh264enc for hardware encoding?
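For the Z16 -> RGB variant being asked about, a CPU-side sketch could look like the following (again a hypothetical helper with a simple grayscale mapping, unlike the RealSense rs2::colorizer which applies a color map):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical helper: expand a Z16 depth buffer to packed RGB24 by
// replicating the scaled 8-bit depth into all three channels
// (a grayscale visualization of the depth).
std::vector<uint8_t> z16_to_rgb(const uint16_t *depth, int width, int height,
                                uint16_t max_depth)
{
    std::vector<uint8_t> rgb(width * height * 3);
    for (int i = 0; i < width * height; ++i) {
        uint8_t v = static_cast<uint8_t>((uint32_t)depth[i] * 255 / max_depth);
        rgb[3 * i + 0] = v;  // R
        rgb[3 * i + 1] = v;  // G
        rgb[3 * i + 2] = v;  // B
    }
    return rgb;
}
```

Note that the hardware encoders still take YUV input, so an RGB buffer would typically go through a conversion element (e.g. nvvidconv to NV12) between appsrc and the encoder; the RGB route does not by itself remove the CPU-side conversion cost.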
I tried streaming RGB 1920x1080 and depth 1280x720 data from the Intel RealSense driver through DeepStream to a remote server. CPU usage is above 80%. Below is my pipeline; could you help check where I can optimize?
Intel RealSense -> CPU reads RGB 1920x1080 data from the RealSense API -> DeepStream -> H.264 -> RTSP
Intel RealSense -> CPU reads 12-bit depth 1280x720 data from the RealSense API -> OpenCV converts to BGR -> DeepStream -> H.264 -> RTSP
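The depth branch described above could be sketched as a gst-launch-style line for an RTSP server mount (element choices and caps here are my assumptions reconstructed from the description, not a verified configuration):

```
appsrc ! video/x-raw,format=BGR,width=1280,height=720,framerate=30/1 ! \
  videoconvert ! video/x-raw,format=I420 ! \
  nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! \
  nvv4l2h264enc ! h264parse ! rtph264pay name=pay0 pt=96
```

In a layout like this, everything up to and including videoconvert runs on the CPU, which is consistent with the high CPU load being reported; only the NVMM/NV12 stage onward is hardware-accelerated.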
However, RealSense cameras are driven through their own special API. If you have to use these cameras, please be aware that some unavoidable memcpy() calls will add CPU load.
When using the Intel RealSense library to read a 1920x1080 RGB image and a 1280x720 depth map at the same time on a Nano device and display them on the Nano, average CPU usage is over 80%, memory consumption is about 300 MB, and the frame rate is 7 fps.
Hi,
As we have explained in comments #6 and #9, Z16 is not supported by the hardware VIC engine, so the conversion has to be done on the CPU. You will see certain CPU loading and may not achieve acceptable performance. One thing you can try is running ‘sudo jetson_clocks’ to keep the CPU clocks at maximum, although it may not bring significant improvement compared to Jetson TX2 and Xavier.
"Z16 is not supported by hardware VIC engine." If Z16 is converted to RGB8, can the VIC engine support RGB? My idea is: RealSense hardware -> kernel support for Z16 (UVC, V4L2) -> V4L2 buffer -> zero-memcopy to GPU -> Z16-to-RGB8 conversion on the GPU -> GPU encode -> RTSP. Is this solution feasible?
In the DeepStream SDK this solution may not work, since the SDK is a GStreamer-based implementation and Z16 is not a supported format there.
It can be possible using tegra_multimedia_api. The v4l2cuda sample can be used as a starting point: it demonstrates how to allocate CUDA buffers and capture in V4L2_PIX_FMT_UYVY format. You can try to customize it to capture in Z16 format.
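For that customization, the key change is requesting the Z16 fourcc when setting the capture format. A self-contained sketch of the relevant value (in the real sample you would include linux/videodev2.h and use its V4L2_PIX_FMT_Z16 constant directly rather than redefining the packing as done here):

```cpp
#include <cstdint>

// Fourcc packing as defined by the V4L2 headers: four ASCII characters
// packed little-endian into a 32-bit code.
constexpr uint32_t v4l2_fourcc_code(char a, char b, char c, char d)
{
    return (uint32_t)a | ((uint32_t)b << 8) | ((uint32_t)c << 16) | ((uint32_t)d << 24);
}

// V4L2_PIX_FMT_Z16 is the fourcc 'Z16 ' -- 16-bit depth samples.
// In the v4l2cuda sample this value would be written into
// fmt.fmt.pix.pixelformat of struct v4l2_format before the
// VIDIOC_S_FMT ioctl, in place of V4L2_PIX_FMT_UYVY.
constexpr uint32_t PIX_FMT_Z16 = v4l2_fourcc_code('Z', '1', '6', ' ');
```

Whether the sensor's UVC driver actually exposes Z16 on the capture node still has to be confirmed with v4l2-ctl --list-formats on the target device.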