Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 6.2
I currently have 4 RTSP streams with resolutions [1920x1080, 1920x1080, 1280x960, 640x320]. They are all connected to the same nvstreammux, and I have set the nvstreammux width and height to 640x640. (I know the DeepStream SDK FAQ says nvstreammux should be set to the resolution of the RTSP stream, but my stream resolutions are not all the same.)
In the probe callback function on the fakesink, I use the following code:
// Assumes NvBufSurfaceMap() and NvBufSurfaceSyncForCpu() have already been
// called on nvbuf_surface for this frame.
cv::Mat img_dest;
// Wrap the mapped RGBA surface; pipeline_w/pipeline_h are the 640x640
// nvstreammux dimensions, and the surface pitch is the row stride.
cv::Mat img_rgba = cv::Mat(pipeline_h, pipeline_w, CV_8UC4,
                           nvbuf_surface->surfaceList[frame_meta->batch_id].mappedAddr.addr[0],
                           nvbuf_surface->surfaceList[frame_meta->batch_id].pitch);
cv::cvtColor(img_rgba, img_dest, cv::COLOR_RGBA2RGB);
// Upscale back to the original source resolution.
cv::resize(img_dest, img_dest,
           cv::Size(frame_meta->source_frame_width, frame_meta->source_frame_height),
           0, 0, cv::INTER_CUBIC);
How should I restore the image to its original resolution? With the code above, the resulting image quality is very low.
The pipeline after my nvinfer is built as follows:
GstElement *nvvidconv = gst_element_factory_make("nvvideoconvert", "convert");
// 4 = NVBUF_MEM_SURFACE_ARRAY (Jetson)
g_object_set(G_OBJECT(nvvidconv), "nvbuf-memory-type", 4, NULL);
GstElement *nvvidconv_cap = gst_element_factory_make("capsfilter", "nvvidconv_cap");
GstCaps *caps = gst_caps_from_string("video/x-raw(memory:NVMM), format=RGBA");
g_object_set(G_OBJECT(nvvidconv_cap), "caps", caps, NULL);
gst_caps_unref(caps);  // the capsfilter keeps its own reference
GstElement *sink = gst_element_factory_make("fakesink", NULL);
g_object_set(G_OBJECT(sink), "sync", FALSE, NULL);
GstElement *queue1 = gst_element_factory_make("queue", "queue1");
GstElement *queue2 = gst_element_factory_make("queue", "queue2");
GstElement *queue3 = gst_element_factory_make("queue", "queue3");
gst_bin_add_many(GST_BIN(bin), pgie, queue1, nvvidconv, queue2, nvvidconv_cap, queue3, sink, NULL);
gst_element_link_many(pgie, queue1, nvvidconv, queue2, nvvidconv_cap, queue3, sink, NULL);