ADAS System

Applicable setup:

• Hardware Platform (Jetson / GPU): Jetson Nano
• DeepStream Version: 5
• JetPack Version (valid for Jetson only): 4.4
• TensorRT Version: 7.0.1
• NVIDIA GPU Driver Version (valid for GPU only): 455.38
• Issue Type (questions, new requirements, bugs): Question

I’m starting the development of an ADAS (Advanced Driver Assistance System) based on the Jetson Nano, using the DeepStream SDK. I’ve already tried some DeepStream samples such as “deepstream-app”, “deepstream-segmentation”, and the deepstream-test apps.

I’m using two different models: one for object detection (TrafficCamNet) and a custom segmentation model (UNet) for lane segmentation, which I already tested with deepstream-segmentation. To proceed with my development I need to get these two models working in the same pipeline. After each frame is processed I need to access the models’ output, and I need to access each frame with OpenCV to develop other features.

So I have the following questions:

1 - How can I get these two models working in the same pipeline?
2 - How can I access the models’ output?
3 - How can I access the pipeline frames with OpenCV in order to create a custom user interface?

Thanks in advance!

For question #1, you can refer to the DS sample apps/sample_apps/deepstream-test2.
For question #2, you can refer to apps/sample_apps/deepstream-test3 for how to extract the stream metadata.
For question #3, you can refer to apps/sample_apps/deepstream-opencv-test.
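As a rough illustration of question #1 (a sketch only, not the exact deepstream-test2 pipeline: element names assume the standard DeepStream 5.0 plugins, and both config file paths below are hypothetical placeholders you would replace with your own), two nvinfer instances can simply be cascaded in one pipeline, each driven by its own config file:

```shell
# Hedged sketch: cascade a primary detector and a second model in one
# DeepStream pipeline. Both config-file-path values are placeholders;
# for full-frame segmentation the second nvinfer's config would need
# process-mode=1 so it operates on the whole frame, not on detected objects.
gst-launch-1.0 \
  filesrc location=sample_qHD.h264 ! h264parse ! nvv4l2decoder ! \
  m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=trafficcamnet_config.txt ! \
  nvinfer config-file-path=unet_seg_config.txt ! \
  nvvideoconvert ! nvdsosd ! nveglglessink
```

In the C/C++ samples the same idea appears as two nvinfer elements linked in sequence, with the second one’s unique-id and operate-on settings chosen in its config file.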

Thanks for answering my question.

I’m studying the DS sample apps/sample_apps/deepstream-test2 to learn how to run multiple models in the same pipeline. After that I will take a look at apps/sample_apps/deepstream-test3.

I’ve already taken a look at the apps/sample_apps/deepstream-opencv-test sample, but I didn’t find anything referring to OpenCV in it. How does this example use OpenCV? I could not figure it out.

Right now I’m trying to extract image data from the pipeline into an OpenCV Mat, using the code below, which I found on this forum. It’s not working because “void *frame_data” is always NULL. Can somebody help me?

GstBuffer *buf = (GstBuffer *) info->data;
GstMapInfo in_map_info;
NvBufSurface *surface = NULL;
NvDsBatchMeta *batch_meta = NULL;
NvDsMetaList *l_frame     = NULL;
NvDsFrameMeta *frame_meta = NULL;

memset (&in_map_info, 0, sizeof (in_map_info));

if (gst_buffer_map (buf, &in_map_info, GST_MAP_READWRITE)) {
    /* For NVMM memory the mapped buffer data is the NvBufSurface itself. */
    surface = (NvBufSurface *) in_map_info.data;

    NvBufSurfaceMap (surface, -1, -1, NVBUF_MAP_READ_WRITE);
    NvBufSurfaceSyncForCpu (surface, -1, -1);

    batch_meta = gst_buffer_get_nvds_batch_meta (buf);

    for (l_frame = batch_meta->frame_meta_list; l_frame != NULL; l_frame = l_frame->next) {
        frame_meta = (NvDsFrameMeta *) (l_frame->data);

        gint frame_width  = (gint) surface->surfaceList[frame_meta->batch_id].width;
        gint frame_height = (gint) surface->surfaceList[frame_meta->batch_id].height;
        void *frame_data  = surface->surfaceList[frame_meta->batch_id].mappedAddr.addr[0];
        size_t frame_step = surface->surfaceList[frame_meta->batch_id].pitch;

        cout << "frame_height: " << frame_height << ", frame_width: " << frame_width << endl;
        cout << "frame_step: " << frame_step << endl;
        cout << "frame_data: " << frame_data << endl << endl;

        if (frame_data != NULL) {
            /* The Mat type must match the surface color format
               (CV_8UC3 assumes a packed 3-channel format such as BGR). */
            cv::Mat frame = cv::Mat (frame_height, frame_width, CV_8UC3, frame_data, frame_step);
            cv::Mat frame_aux;
            cv::cvtColor (frame, frame_aux, cv::COLOR_BGR2BGR555);
            /* Note: JPEG cannot store BGR555, so save the original frame. */
            cv::imwrite ("./image.jpg", frame);
        }
    }

    NvBufSurfaceUnMap (surface, -1, -1);
}
gst_buffer_unmap (buf, &in_map_info);

While debugging I noticed that all four positions of surface->surfaceList[frame_meta->batch_id].mappedAddr.addr are NULL. Another detail: I’m using the “sample_qHD.h264” video as input.

Thanks in advance!

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.

Where in the pipeline do you want to extract the OpenCV mat?