How to convert NvBufSurface to cv::Mat?

Please provide complete information as applicable to your setup.

**• Hardware Platform (Jetson / GPU)** GPU
• DeepStream Version 7.0
• JetPack Version (valid for Jetson only)
• TensorRT Version TensorRT in the DeepStream 7.0 docker
• NVIDIA GPU Driver Version (valid for GPU only)
**• Issue Type (questions, new requirements, bugs)** questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hello, I am trying to apply OpenCV in DeepStream.

I found code on GitHub and revised it.

My code is here:

```cpp
#include <gst/gst.h>
#include <stdio.h>
#include <cstring>   // memset
#include "nvbufsurface.h"
#include <iostream>
#include <cassert>
#include <opencv2/dnn_superres.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
```

using namespace cv;
using namespace dnn;
using namespace dnn_superres;

gint frames_processed = 0;
gint64 start_time;

```cpp
Mat upscaleImage(Mat img, const std::string &modelName, const std::string &modelPath, int scale) {
  DnnSuperResImpl sr;
  sr.readModel(modelPath);
  sr.setModel(modelName, scale);
  sr.setPreferableBackend(DNN_BACKEND_CUDA);
  sr.setPreferableTarget(DNN_TARGET_CUDA);

  // Output image
  Mat outputImage;
  sr.upsample(img, outputImage);
  return outputImage;
}
```
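A possible call site for this helper (the input Mat, model file path, and scale below are assumptions for illustration):

```cpp
// Hypothetical usage: frame_bgr, the ESPCN model file, and the 4x scale
// are placeholders; dnn_superres expects the algorithm name in lowercase.
cv::Mat upscaled = upscaleImage(frame_bgr, "espcn", "ESPCN_x4.pb", 4);
cv::imshow("super resolution", upscaled);
```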

```cpp
void buffer_to_image_tensor(GstBuffer *buf, GstCaps *caps) {
  const GstStructure *caps_structure = gst_caps_get_structure(caps, 0);
  auto height = g_value_get_int(gst_structure_get_value(caps_structure, "height"));
  auto width = g_value_get_int(gst_structure_get_value(caps_structure, "width"));
  GstMapInfo map_info;
  memset(&map_info, 0, sizeof(map_info));
  gboolean is_mapped = gst_buffer_map(buf, &map_info, GST_MAP_READ);
  NvBufSurface *info_data = (NvBufSurface *)map_info.data;
  assert(info_data->numFilled == 1);
  assert(info_data->surfaceList[0].colorFormat == 19);
  std::cout << "width is: " << info_data->surfaceList[0].width << "\n";
  std::cout << "height is: " << info_data->surfaceList[0].height << "\n";
  std::cout << "pitch is: " << info_data->surfaceList[0].pitch << "\n";
  std::cout << "colorFormat is: " << info_data->surfaceList[0].colorFormat << "\n";
  std::cout << "layout is: " << info_data->surfaceList[0].layout << "\n";
  std::cout << "bufferDesc is: " << info_data->surfaceList[0].bufferDesc << "\n";
  std::cout << "dataSize is: " << info_data->surfaceList[0].dataSize << "\n";
  std::cout << "dataPtr is: " << info_data->surfaceList[0].dataPtr << "\n";

  Mat rgba_mat;

  Mat in_mat = Mat(info_data->surfaceList[0].height, info_data->surfaceList[0].width,
                   CV_8UC1, info_data->surfaceList[0].mappedAddr.addr,
                   info_data->surfaceList[0].pitch);

  cvtColor(in_mat, rgba_mat, CV_???);

  imwrite("result.jpg", rgba_mat);

  if (is_mapped) {
    //auto max_val = at::zeros( {height, width, 4}, torch::CUDA(torch::kUInt8));
    gst_buffer_unmap(buf, &map_info);
  }
```

I don't really need the buffer_unmap part; I just want to get the frame from DeepStream, convert the frame to cv::Mat, apply super resolution with OpenCV DNN, and imshow it.

And I have a question: what is colorFormat? It prints 19.

Thanks in advance.
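As a hedged note on that question: colorFormat is an NvBufSurfaceColorFormat enum value defined in nvbufsurface.h, and the raw number depends on the header version, so it is safer to compare against the enum symbolically than against 19. A minimal sketch (the RGBA assumption here only matches the CV_8UC4/COLOR_RGBA2BGR handling used later in this thread; verify against your header):

```cpp
#include "nvbufsurface.h"

// Sketch: check the surface format by enum name instead of a magic number.
// NVBUF_COLOR_FORMAT_RGBA is an assumption inferred from the 4-channel
// handling below, not a confirmed mapping for the value 19.
if (info_data->surfaceList[0].colorFormat == NVBUF_COLOR_FORMAT_RGBA) {
  // 4 bytes per pixel: wrap as CV_8UC4, then cvtColor with COLOR_RGBA2BGR
}
```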

std::cout << "width is: " << info_data->surfaceList[0].width << "\n";
std::cout << "height is: " << info_data->surfaceList[0].height << "\n";
std::cout << "pitch is: " << info_data->surfaceList[0].pitch << "\n";
std::cout << "colorFormat is: " << info_data->surfaceList[0].colorFormat << "\n";
std::cout << "layout is: " << info_data->surfaceList[0].layout << "\n";
std::cout << "bufferDesc is: " << info_data->surfaceList[0].bufferDesc << "\n";
std::cout << "dataSize is: " << info_data->surfaceList[0].dataSize << "\n";
std::cout << "dataPtr is: " << info_data->surfaceList[0].dataPtr << "\n";

Mat out_mat = cv::Mat (cv::Size(info_data->surfaceList[0].width, info_data->surfaceList[0].height), CV_8UC3);

std::cout << "test1" << std::endl;        
Mat in_mat = Mat(info_data->surfaceList[0].height, info_data->surfaceList[0].width, CV_8UC4, info_data->surfaceList[0].dataPtr, info_data->surfaceList[0].pitch);
std::cout << "test2 " << std::endl;
cvtColor(in_mat, out_mat, COLOR_RGBA2BGR);
std::cout << "test3 " << std::endl;
imwrite("result.jpg", out_mat);
std::cout << "test4 " << std::endl;
if (is_mapped) {
    //auto max_val = at::zeros( {height, width, 4}, torch::CUDA(torch::kUInt8));
    gst_buffer_unmap(buf, &map_info);
}

}

I changed the source a little based on this webpage:

Implementing a Custom GStreamer Plugin with OpenCV Integration Example — DeepStream documentation 6.4 documentation (nvidia.com)

The error message changed:

```
test1
test2
Segmentation fault
```

I think the cvtColor function causes the error.

How can I solve it?
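A likely cause, as a hedged note: with GST_MAP_READ on a discrete GPU, surfaceList[0].dataPtr can point to device memory that the CPU cannot dereference, so the crash happens on the first pixel access inside cvtColor rather than in the Mat constructor. A minimal sketch of CPU-side access, assuming the buffer was allocated as mappable memory (e.g. NVBUF_MEM_CUDA_UNIFIED) and is a 4-channel pitched surface:

```cpp
// Sketch: map the surface for CPU access before wrapping it in a cv::Mat.
if (NvBufSurfaceMap(info_data, 0, 0, NVBUF_MAP_READ) == 0) {
  NvBufSurfaceSyncForCpu(info_data, 0, 0);  // required for Jetson surface-array memory
  cv::Mat in_mat(info_data->surfaceList[0].height,
                 info_data->surfaceList[0].width, CV_8UC4,
                 info_data->surfaceList[0].mappedAddr.addr[0],
                 info_data->surfaceList[0].pitch);
  cv::Mat out_mat;
  cv::cvtColor(in_mat, out_mat, cv::COLOR_RGBA2BGR);
  NvBufSurfaceUnMap(info_data, 0, 0);
}
```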

Can you share the reason why you need to convert to cv::Mat? This requires moving memory from the GPU to the CPU, which will cause many performance issues.

Here is a sample below, but it is not recommended to convert the NvBufSurface to cv::Mat unless you have a necessary reason.

In most cases, CUDA can meet the requirements; see the zero-copy sketch that follows.
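For the CUDA path, a minimal zero-copy sketch (an illustration, not part of the sample below; it assumes an RGBA pitched surface and an OpenCV build with the CUDA modules enabled):

```cpp
#include <opencv2/core/cuda.hpp>

// Sketch: wrap the device pointer directly in a cv::cuda::GpuMat so OpenCV
// CUDA ops can run on the frame without any GPU-to-CPU copy.
cv::cuda::GpuMat gpu_frame(surface->surfaceList[0].height,
                           surface->surfaceList[0].width, CV_8UC4,
                           surface->surfaceList[0].dataPtr,
                           surface->surfaceList[0].pitch);
```

And here is the full sample that converts to cv::Mat: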

```cpp
/*
 * SPDX-FileCopyrightText: Copyright (c) 2018-2022 NVIDIA CORPORATION &
 * AFFILIATES. All rights reserved. SPDX-License-Identifier: MIT
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 * DEALINGS IN THE SOFTWARE.
 */

#include "gstnvdsmeta.h"
#include "nvbufsurftransform.h"
#include "nvds_yml_parser.h"
#include <cuda_runtime_api.h>
#include <glib.h>
#include <gst/gst.h>
#include <gstreamer-1.0/gst/gstbuffer.h>
#include <stdio.h>

#include "nvbufsurface.h"
#include "nvbufsurftransform.h"
#include <cuda.h>
#include <cuda_runtime.h>
#include <iostream>
/* Open CV headers */
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"

#define CHECK_CUDA_STATUS(cuda_status, error_str)                              \
  do {                                                                         \
    if ((cuda_status) != cudaSuccess) {                                        \
      g_print("Error: %s in %s at line %d (%s)\n", error_str, __FILE__,        \
              __LINE__, cudaGetErrorName(cuda_status));                        \
    }                                                                          \
  } while (0)
using namespace cv;
using namespace std;
#define MAX_DISPLAY_LEN 64

#define PGIE_CLASS_ID_VEHICLE 0
#define PGIE_CLASS_ID_PERSON 2

/* The muxer output resolution must be set if the input streams will be of
 * different resolution. The muxer will scale all the input frames to this
 * resolution. */
#define MUXER_OUTPUT_WIDTH 1280
#define MUXER_OUTPUT_HEIGHT 780

/* Muxer batch formation timeout, for e.g. 40 millisec. Should ideally be set
 * based on the fastest source's framerate. */
#define MUXER_BATCH_TIMEOUT_USEC 40000

/* Check for parsing error. */
#define RETURN_ON_PARSER_ERROR(parse_expr)                                     \
  if (NVDS_YAML_PARSER_SUCCESS != parse_expr) {                                \
    g_printerr("Error in parsing configuration file.\n");                      \
    return -1;                                                                 \
  }

gint frame_number = 0;
gchar pgie_classes_str[4][32] = {"Vehicle", "TwoWheeler", "Person", "Roadsign"};

static GstPadProbeReturn infer_sink_pad_buffer_probe(GstPad *pad,
                                                     GstPadProbeInfo *info,
                                                     gpointer u_data) {
  GstBuffer *buf = (GstBuffer *)info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta(buf);
  NvDsMetaList *l_frame = NULL;
  char file_name[128];

  // Get original raw data
  GstMapInfo in_map_info;
  if (!gst_buffer_map(buf, &in_map_info, GST_MAP_READ)) {
    g_print("Error: Failed to map gst buffer\n");
    return GST_PAD_PROBE_OK;
  }
  NvBufSurface *surface = (NvBufSurface *)in_map_info.data;
  // TODO for cuda device memory we need to use cudamemcpy
  NvBufSurfaceMap(surface, -1, -1, NVBUF_MAP_READ);
#ifdef PLATFORM_TEGRA
  /* Cache the mapped data for CPU access */
  if (surface->memType == NVBUF_MEM_SURFACE_ARRAY) {
    NvBufSurfaceSyncForCpu(surface, 0, 0);
  }
#endif

  for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
       l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *)(l_frame->data);

    guint height = surface->surfaceList[frame_meta->batch_id].height;
    guint width = surface->surfaceList[frame_meta->batch_id].width;
    // Create Mat from NvMM memory, refer opencv API for how to create a Mat

    // only rotate the first 10 frames
    NvBufSurface *inter_buf = nullptr;
    NvBufSurfaceCreateParams create_params;
    create_params.gpuId = surface->gpuId;
    create_params.width = width;
    create_params.height = height;
    create_params.size = 0;
    create_params.colorFormat = NVBUF_COLOR_FORMAT_BGRA;
    create_params.layout = NVBUF_LAYOUT_PITCH;
#ifdef __aarch64__
    create_params.memType = NVBUF_MEM_DEFAULT;
#else
    create_params.memType = NVBUF_MEM_CUDA_UNIFIED;
#endif
    // Create another scratch RGBA NvBufSurface
    if (NvBufSurfaceCreate(&inter_buf, 1, &create_params) != 0) {
      GST_ERROR("Error: Could not allocate internal buffer ");
      return GST_PAD_PROBE_OK;
    }

    NvBufSurfTransformConfigParams transform_config_params;
    NvBufSurfTransformParams transform_params;
    NvBufSurfTransformRect src_rect;
    NvBufSurfTransformRect dst_rect;
    cudaStream_t cuda_stream;
    CHECK_CUDA_STATUS(cudaStreamCreate(&cuda_stream),
                      "Could not create cuda stream");
    transform_config_params.compute_mode = NvBufSurfTransformCompute_Default;
    transform_config_params.gpu_id = surface->gpuId;
    transform_config_params.cuda_stream = cuda_stream;
    /* Set the transform session parameters for the conversions executed in this
     * thread. */
    NvBufSurfTransform_Error err =
        NvBufSurfTransformSetSessionParams(&transform_config_params);
    if (err != NvBufSurfTransformError_Success) {
      cout << "NvBufSurfTransformSetSessionParams failed with error " << err
           << endl;
      return GST_PAD_PROBE_OK;
    }
    /* Set the transform ROIs for source and destination, only do the color
     * format conversion*/
    src_rect = {0, 0, width, height};
    dst_rect = {0, 0, width, height};

    /* Set the transform parameters */
    transform_params.src_rect = &src_rect;
    transform_params.dst_rect = &dst_rect;
    transform_params.transform_flag = NVBUFSURF_TRANSFORM_FILTER;
    transform_params.transform_flip = NvBufSurfTransform_None;
    transform_params.transform_filter = NvBufSurfTransformInter_Algo3;

    /* Transformation format conversion */
    err = NvBufSurfTransform(surface, inter_buf, &transform_params);
    if (err != NvBufSurfTransformError_Success) {
      cout << "NvBufSurfTransform failed with error %d while converting buffer"
           << err << endl;
      return GST_PAD_PROBE_OK;
    }

    // map for cpu
    if (NvBufSurfaceMap(inter_buf, 0, -1, NVBUF_MAP_READ_WRITE) != 0) {
      cout << "map error" << endl;
      break;
    }

#ifdef PLATFORM_TEGRA
    if (surface->memType == NVBUF_MEM_SURFACE_ARRAY) {
      NvBufSurfaceSyncForCpu(inter_buf, 0, 0);
    }
#endif
    // make mat from inter-surface buffer
    Mat rawmat(height, width, CV_8UC4,
               inter_buf->surfaceList[0].mappedAddr.addr[0],
               inter_buf->surfaceList[0].planeParams.pitch[0]);
    char *data = (char *)malloc(width * height * 4);
    // make temp rotate mat from malloc buffer
    Mat rotate_mat(width, height, CV_8UC4, data, height);
    // Apply your algo which works with OpenCV Mat; here we only rotate the
    // Mat for demo
    rotate(rawmat, rotate_mat, ROTATE_180);
    if (frame_number % 300 == 0) {
      snprintf(file_name, sizeof(file_name), "frame-%d.png", frame_number);
      cv::imwrite(file_name, rotate_mat);
    }
    free(data);  // free only after rotate_mat, which wraps this buffer, is no longer used
#ifdef PLATFORM_TEGRA
    if (inter_buf->memType == NVBUF_MEM_SURFACE_ARRAY) {
      NvBufSurfaceSyncForDevice(inter_buf, 0, 0);
    }
#endif
    // unmap
    NvBufSurfaceUnMap(inter_buf, 0, -1);
    NvBufSurfaceDestroy(inter_buf);
  }

#ifdef PLATFORM_TEGRA
  if (surface->memType == NVBUF_MEM_SURFACE_ARRAY) {
    NvBufSurfaceSyncForDevice(surface, 0, 0);
  }
#endif
  NvBufSurfaceUnMap(surface, -1, -1);
  gst_buffer_unmap(buf, &in_map_info);
  return GST_PAD_PROBE_OK;
}

/* osd_sink_pad_buffer_probe  will extract metadata received on OSD sink pad
 * and update params for drawing rectangle, object information etc. */

static GstPadProbeReturn
osd_sink_pad_buffer_probe(GstPad *pad, GstPadProbeInfo *info, gpointer u_data) {
  GstBuffer *buf = (GstBuffer *)info->data;
  guint num_rects = 0;
  NvDsObjectMeta *obj_meta = NULL;
  guint vehicle_count = 0;
  guint person_count = 0;
  NvDsMetaList *l_frame = NULL;
  NvDsMetaList *l_obj = NULL;
  NvDsDisplayMeta *display_meta = NULL;

  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta(buf);

  for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
       l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *)(l_frame->data);
    int offset = 0;
    for (l_obj = frame_meta->obj_meta_list; l_obj != NULL;
         l_obj = l_obj->next) {
      obj_meta = (NvDsObjectMeta *)(l_obj->data);
      if (obj_meta->class_id == PGIE_CLASS_ID_VEHICLE) {
        vehicle_count++;
        num_rects++;
      }
      if (obj_meta->class_id == PGIE_CLASS_ID_PERSON) {
        person_count++;
        num_rects++;
      }
    }
    display_meta = nvds_acquire_display_meta_from_pool(batch_meta);
    NvOSD_TextParams *txt_params = &display_meta->text_params[0];
    display_meta->num_labels = 1;
    txt_params->display_text = (char *)g_malloc0(MAX_DISPLAY_LEN);
    offset = snprintf(txt_params->display_text, MAX_DISPLAY_LEN, "Person = %d ",
                      person_count);
    offset = snprintf(txt_params->display_text + offset, MAX_DISPLAY_LEN,
                      "Vehicle = %d ", vehicle_count);

    /* Now set the offsets where the string should appear */
    txt_params->x_offset = 10;
    txt_params->y_offset = 12;

    /* Font , font-color and font-size */
    txt_params->font_params.font_name = "Serif";
    txt_params->font_params.font_size = 10;
    txt_params->font_params.font_color.red = 1.0;
    txt_params->font_params.font_color.green = 1.0;
    txt_params->font_params.font_color.blue = 1.0;
    txt_params->font_params.font_color.alpha = 1.0;

    /* Text background color */
    txt_params->set_bg_clr = 1;
    txt_params->text_bg_clr.red = 0.0;
    txt_params->text_bg_clr.green = 0.0;
    txt_params->text_bg_clr.blue = 0.0;
    txt_params->text_bg_clr.alpha = 1.0;

    nvds_add_display_meta_to_frame(frame_meta, display_meta);
  }

  g_print("Frame Number = %d Number of objects = %d "
          "Vehicle Count = %d Person Count = %d\n",
          frame_number, num_rects, vehicle_count, person_count);
  frame_number++;
  return GST_PAD_PROBE_OK;
}

static gboolean bus_call(GstBus *bus, GstMessage *msg, gpointer data) {
  GMainLoop *loop = (GMainLoop *)data;
  switch (GST_MESSAGE_TYPE(msg)) {
  case GST_MESSAGE_EOS:
    g_print("End of stream\n");
    g_main_loop_quit(loop);
    break;
  case GST_MESSAGE_ERROR: {
    gchar *debug;
    GError *error;
    gst_message_parse_error(msg, &error, &debug);
    g_printerr("ERROR from element %s: %s\n", GST_OBJECT_NAME(msg->src),
               error->message);
    if (debug)
      g_printerr("Error details: %s\n", debug);
    g_free(debug);
    g_error_free(error);
    g_main_loop_quit(loop);
    break;
  }
  default:
    break;
  }
  return TRUE;
}

int main(int argc, char *argv[]) {
  GMainLoop *loop = NULL;
  GstElement *pipeline = NULL, *source = NULL, *h264parser = NULL,
             *decoder = NULL, *streammux = NULL, *sink = NULL, *pgie = NULL,
             *nvvidconv = NULL, *nvosd = NULL;

  GstBus *bus = NULL;
  guint bus_watch_id;
  GstPad *osd_sink_pad = NULL, *infer_sink_pad = NULL;

  gboolean yaml_config = FALSE;
  NvDsGieType pgie_type = NVDS_GIE_PLUGIN_INFER;

  int current_device = -1;
  cudaGetDevice(&current_device);
  struct cudaDeviceProp prop;
  cudaGetDeviceProperties(&prop, current_device);
  /* Check input arguments */
  if (argc != 2) {
    g_printerr("Usage: %s <yml file>\n", argv[0]);
    g_printerr("OR: %s <H264 filename>\n", argv[0]);
    return -1;
  }

  /* Standard GStreamer initialization */
  gst_init(&argc, &argv);
  loop = g_main_loop_new(NULL, FALSE);

  /* Parse inference plugin type */
  yaml_config =
      (g_str_has_suffix(argv[1], ".yml") || g_str_has_suffix(argv[1], ".yaml"));

  if (yaml_config) {
    RETURN_ON_PARSER_ERROR(
        nvds_parse_gie_type(&pgie_type, argv[1], "primary-gie"));
  }

  /* Create gstreamer elements */
  /* Create Pipeline element that will form a connection of other elements */
  pipeline = gst_pipeline_new("dstest1-pipeline");

  /* Source element for reading from the file */
  source = gst_element_factory_make("filesrc", "file-source");

  /* Since the data format in the input file is elementary h264 stream,
   * we need a h264parser */
  h264parser = gst_element_factory_make("h264parse", "h264-parser");

  /* Use nvdec_h264 for hardware accelerated decode on GPU */
  decoder = gst_element_factory_make("nvv4l2decoder", "nvv4l2-decoder");

  /* Create nvstreammux instance to form batches from one or more sources. */
  streammux = gst_element_factory_make("nvstreammux", "stream-muxer");

  if (!pipeline || !streammux) {
    g_printerr("One element could not be created. Exiting.\n");
    return -1;
  }

  /* Use nvinfer or nvinferserver to run inferencing on decoder's output,
   * behaviour of inferencing is set through config file */
  if (pgie_type == NVDS_GIE_PLUGIN_INFER_SERVER) {
    pgie =
        gst_element_factory_make("nvinferserver", "primary-nvinference-engine");
  } else {
    pgie = gst_element_factory_make("nvinfer", "primary-nvinference-engine");
  }

  /* Use convertor to convert from NV12 to RGBA as required by nvosd */
  nvvidconv = gst_element_factory_make("nvvideoconvert", "nvvideo-converter");

  /* Create OSD to draw on the converted RGBA buffer */
  nvosd = gst_element_factory_make("nvdsosd", "nv-onscreendisplay");

  /* Finally render the osd output */
  if (prop.integrated) {
    sink = gst_element_factory_make("nv3dsink", "nv3d-sink");
  } else {
    // sink = gst_element_factory_make ("nveglglessink", "nvvideo-renderer");
    sink = gst_element_factory_make("fakesink", "nvvideo-renderer");
  }

  if (!source || !h264parser || !decoder || !pgie || !nvvidconv || !nvosd ||
      !sink) {
    g_printerr("One element could not be created. Exiting.\n");
    return -1;
  }

  /* we set the input filename to the source element */
  g_object_set(G_OBJECT(source), "location", argv[1], NULL);

  if (g_str_has_suffix(argv[1], ".h264")) {
    g_object_set(G_OBJECT(source), "location", argv[1], NULL);

    g_object_set(G_OBJECT(streammux), "batch-size", 1, NULL);

    g_object_set(G_OBJECT(streammux), "width", MUXER_OUTPUT_WIDTH, "height",
                 MUXER_OUTPUT_HEIGHT, "batched-push-timeout",
                 MUXER_BATCH_TIMEOUT_USEC, NULL);

    /* Set all the necessary properties of the nvinfer element,
     * the necessary ones are : */
    g_object_set(G_OBJECT(pgie), "config-file-path", "dstest1_pgie_config.txt",
                 NULL);
  }

  if (yaml_config) {
    RETURN_ON_PARSER_ERROR(nvds_parse_file_source(source, argv[1], "source"));
    RETURN_ON_PARSER_ERROR(
        nvds_parse_streammux(streammux, argv[1], "streammux"));

    /* Set all the necessary properties of the inference element */
    RETURN_ON_PARSER_ERROR(nvds_parse_gie(pgie, argv[1], "primary-gie"));
  }

  g_object_set(G_OBJECT(streammux), "nvbuf-memory-type", NVBUF_MEM_CUDA_UNIFIED,
               NULL);
  g_object_set(G_OBJECT(nvvidconv), "nvbuf-memory-type", NVBUF_MEM_CUDA_UNIFIED,
               NULL);
  /* we add a message handler */
  bus = gst_pipeline_get_bus(GST_PIPELINE(pipeline));
  bus_watch_id = gst_bus_add_watch(bus, bus_call, loop);
  gst_object_unref(bus);

  /* Set up the pipeline */
  /* we add all elements into the pipeline */
  gst_bin_add_many(GST_BIN(pipeline), source, h264parser, decoder, streammux,
                   pgie, nvvidconv, nvosd, sink, NULL);
  g_print("Added elements to bin\n");

  GstPad *sinkpad, *srcpad;
  gchar pad_name_sink[16] = "sink_0";
  gchar pad_name_src[16] = "src";

  sinkpad = gst_element_get_request_pad(streammux, pad_name_sink);
  if (!sinkpad) {
    g_printerr("Streammux request sink pad failed. Exiting.\n");
    return -1;
  }

  srcpad = gst_element_get_static_pad(decoder, pad_name_src);
  if (!srcpad) {
    g_printerr("Decoder request src pad failed. Exiting.\n");
    return -1;
  }

  if (gst_pad_link(srcpad, sinkpad) != GST_PAD_LINK_OK) {
    g_printerr("Failed to link decoder to stream muxer. Exiting.\n");
    return -1;
  }

  gst_object_unref(sinkpad);
  gst_object_unref(srcpad);

  /* we link the elements together */
  /* file-source -> h264-parser -> nvh264-decoder ->
   * pgie -> nvvidconv -> nvosd -> video-renderer */

  if (!gst_element_link_many(source, h264parser, decoder, NULL)) {
    g_printerr("Elements could not be linked: 1. Exiting.\n");
    return -1;
  }

  if (!gst_element_link_many(streammux, pgie, nvvidconv, nvosd, sink, NULL)) {
    g_printerr("Elements could not be linked: 2. Exiting.\n");
    return -1;
  }

  /* Lets add probe to get informed of the meta data generated, we add probe to
   * the sink pad of the osd element, since by that time, the buffer would have
   * had got all the metadata. */
  osd_sink_pad = gst_element_get_static_pad(nvosd, "sink");
  if (!osd_sink_pad)
    g_print("Unable to get sink pad\n");
  else
    gst_pad_add_probe(osd_sink_pad, GST_PAD_PROBE_TYPE_BUFFER,
                      osd_sink_pad_buffer_probe, NULL, NULL);
  gst_object_unref(osd_sink_pad);

  infer_sink_pad = gst_element_get_static_pad(nvvidconv, "sink");
  if (!infer_sink_pad)
    g_print("Unable to get sink pad\n");
  else
    gst_pad_add_probe(infer_sink_pad, GST_PAD_PROBE_TYPE_BUFFER,
                      infer_sink_pad_buffer_probe, NULL, NULL);
  gst_object_unref(infer_sink_pad);

  /* Set the pipeline to "playing" state */
  g_print("Using file: %s\n", argv[1]);
  gst_element_set_state(pipeline, GST_STATE_PLAYING);

  /* Wait till pipeline encounters an error or EOS */
  g_print("Running...\n");
  g_main_loop_run(loop);

  /* Out of the main loop, clean up nicely */
  g_print("Returned, stopping playback\n");
  gst_element_set_state(pipeline, GST_STATE_NULL);
  g_print("Deleting pipeline\n");
  gst_object_unref(GST_OBJECT(pipeline));
  g_source_remove(bus_watch_id);
  g_main_loop_unref(loop);
  return 0;
}
```

When I run your code, the error message is:

```
ERROR from element stream-muxer: Output width not set
```

I ran it like this:

```
./deepstream-test2-app /opt/nvidia/deepstream/deepstream-7.0/samples/streams/sample_1080p_h264.mp4
```

I changed deepstream-test2-app.c to your code and saved it as deepstream-test2-app.cpp.

And I changed

```cpp
#define MUXER_OUTPUT_WIDTH 1920
#define MUXER_OUTPUT_HEIGHT 1080
```

to match the width and height of sample_1080p_h264.mp4, but the error message is the same.
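A hedged note on this error: in the sample above, the streammux width/height are only set inside the `.h264`-suffix branch (or parsed from a YAML file), so running it with an .mp4 path leaves them unset. A minimal sketch that configures the muxer unconditionally:

```cpp
// Sketch: set nvstreammux properties regardless of the input file suffix.
g_object_set(G_OBJECT(streammux), "batch-size", 1,
             "width", MUXER_OUTPUT_WIDTH,
             "height", MUXER_OUTPUT_HEIGHT,
             "batched-push-timeout", MUXER_BATCH_TIMEOUT_USEC, NULL);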

1. Can you explain why you want to make this conversion? It's probably unnecessary.

2. This code only supports .h264, not .mp4.
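A sketch of what point 2 implies (the qtdemux element and the callback below are illustrative additions, not part of the sample): an .mp4 is a container, so a demuxer must sit between filesrc and h264parse, linked through a pad-added callback because qtdemux exposes its pads dynamically:

```cpp
// Sketch: link qtdemux's dynamic video pad to h264parse once it appears.
static void on_demux_pad_added(GstElement *demux, GstPad *pad, gpointer data) {
  GstElement *parser = (GstElement *)data;
  GstPad *sinkpad = gst_element_get_static_pad(parser, "sink");
  if (!gst_pad_is_linked(sinkpad))
    gst_pad_link(pad, sinkpad);
  gst_object_unref(sinkpad);
}

// In main(): filesrc -> qtdemux -(pad-added)-> h264parse -> nvv4l2decoder
//   GstElement *demux = gst_element_factory_make("qtdemux", "qtdemux");
//   gst_bin_add(GST_BIN(pipeline), demux);
//   gst_element_link(source, demux);
//   g_signal_connect(demux, "pad-added", G_CALLBACK(on_demux_pad_added), h264parser);
```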

I will use the OpenCV DNN functions for super resolution.

I get a frame from DeepStream, convert it to cv::Mat, apply super resolution, and I want to see the result. It's easy to use OpenCV's super resolution functions.

That is the reason.

And when I changed the mp4 file to an h264 file, I got this error message:

```
terminate called after throwing an instance of 'cv::Exception'
what(): OpenCV(4.9.0) /root/opencv-sources/opencv-4.9.0/modules/core/src/matrix.cpp:434: error: (-215:Assertion failed) _step >= minstep in function 'Mat'
```

I changed the code:

```cpp
    // make mat from inter-surface buffer
    Mat rawmat(height, width, CV_8UC4,
               inter_buf->surfaceList[0].mappedAddr.addr[0],
               inter_buf->surfaceList[0].planeParams.pitch[0]);
    char *data = (char *)malloc(width * height * 4);
    // make temp rotate mat from malloc buffer
    //Mat rotate_mat(width, height, CV_8UC4, data, height);
    // Apply your algo which works with OpenCV Mat; here we only rotate the Mat
    // for demo
    //rotate(rawmat, rotate_mat, ROTATE_180);
    free(data);
    if (frame_number % 300 == 0) {
      snprintf(file_name, sizeof(file_name), "frame-%d.png", frame_number);
      cv::imwrite(file_name, rawmat);
    }
```

It runs with no error message now. Thank you so much!
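For reference, the assertion most likely comes from the commented-out rotate_mat line: cv::Mat's step argument is in bytes, and a width-by-height CV_8UC4 Mat needs a step of at least height * 4, not height. A minimal sketch of the fix, so the rotation demo could be kept instead of commented out:

```cpp
// Sketch: step is in bytes; `height` columns of 4-byte pixels need height * 4.
// (Omitting the step argument also works and uses the default packed layout.)
cv::Mat rotate_mat(width, height, CV_8UC4, data, height * 4);
cv::rotate(rawmat, rotate_mat, cv::ROTATE_180);
```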

If you know how to do super resolution inside DeepStream, please share code for my coworker and me.

Apart from that method, I ran OpenCV DNN super resolution with your code.

It is so slow. Now I know why you told me it is unnecessary.

Can you tell us which model you are using for "super resolution"?

There is no such example yet; you can refer to gst-nvdsvideotemplate.

Or do you just want to use the super-resolution function of OpenCV? That has nothing to do with DeepStream.

This will copy memory from the GPU to the CPU, so it will be slow.

A customized gst-nvdsvideotemplate plugin may be used to customize the inferencing based on TensorRT. Gst-nvdsvideotemplate — DeepStream 7.0 Release documentation
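A minimal sketch of dropping such a custom library into the pipeline (the library file name is hypothetical; the customlib-name property comes from the Gst-nvdsvideotemplate documentation):

```cpp
// Sketch: nvdsvideotemplate loads a user library that can run a TensorRT
// super-resolution engine on NvBufSurface batches, GPU to GPU.
GstElement *sr = gst_element_factory_make("nvdsvideotemplate", "sr-template");
g_object_set(G_OBJECT(sr), "customlib-name", "libcustom_sr_impl.so", NULL);
// then: gst_bin_add(GST_BIN(pipeline), sr); and link it into the pipeline
```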

Can you also share the purpose of using super-resolution?

I am sorry for the late answer.

My company's board told us that if we apply super resolution to frames from old CCTV cameras, it could save the money needed to replace them with high-resolution CCTV.

I think upscaling is not useful here, because YOLOv8 resizes its input to 640x640 anyway.

But I am an employee, so I just write the super resolution code as we were told.

Thank you for sharing. It may not be useful for detection tasks. It will be helpful for display.

In general, if you have a super-resolution model as a GIE, nvdsvideotemplate is still worth trying.

There is no update from you for a period, so we assume this is not an issue anymore and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.