Uridecodebin with filesink

I am trying to run the following pipeline along with deepstream-app.

gst-launch-1.0 uridecodebin uri=rtsp-uri num-buffers=1 ! nvvidconv ! nvjpegenc ! filesink location=capture1.jpeg

How can I create an equivalent pipeline in DS4.0 for the pipeline above?
I wish to get this functionality in the default apps but haven’t been successful.
Is there a way to integrate this pipeline with the pipelines of any of the default apps, or at least run it simultaneously with them?

I have tried the following code, but the image is not being written -

GstElement *jpegPipe = NULL;
GstElement *rtsp = NULL, *jpegsink = NULL, *encoder = NULL, *vidconv = NULL;

jpegPipe = gst_pipeline_new ("pipeline");
rtsp = gst_element_factory_make ("uridecodebin", "uri-decode-bin");
vidconv = gst_element_factory_make ("nvvideoconvert", "nvvidconverter");
g_object_set (G_OBJECT (rtsp), "uri", "some-rtsp-url", NULL);

gst_bin_add (GST_BIN (jpegPipe), rtsp);
jpegsink = gst_element_factory_make ("filesink", "filesink");
g_object_set (G_OBJECT (jpegsink), "location", "someimg.jpeg", NULL);
encoder = gst_element_factory_make ("jpegenc", "jpegenc");
if (!rtsp || !encoder || !vidconv || !jpegsink) {
  NVGSTDS_ERR_MSG_V ("Failed to create 'jpeg elements'");
  goto done;
}
gst_bin_add_many (GST_BIN (jpegPipe), vidconv, encoder, jpegsink, NULL);
if (!gst_element_link_many (vidconv, encoder, jpegsink, NULL)) {
  g_print ("can't link the jpeg elements\n");
}

Hi,
For jpeg decoding, we have a sample:

deepstream_sdk_v4.0.1_jetson/sources/apps/sample_apps/deepstream-image-decode-test

Please check if it helps your use case.

We don’t see any DL model being run in your pipeline. You may not need the DeepStream SDK and can refer to
https://developer.nvidia.com/embedded/dlc/l4t-accelerated-gstreamer-guide-32-2

Hi DaneLLL ,

I want to save the frames from a pipeline, not just decode JPEGs. I wish to run a pipeline on an RTSP stream and save each frame that comes through it.
I wish to include the functionality of PIPELINE-1 - “uridecodebin uri=rtsp-uri num-buffers=1 ! nvvidconv ! nvjpegenc ! filesink location=capture1.jpeg” - that is, saving frames to JPEG files, in the following pipeline:

PIPELINE-2 “pgie -> nvtracker -> sgie1 -> sgie2 -> sgie3 -> tiler -> nvvidconv -> nvosd -> tee -> queue1 -> queue2 -> msgconv -> msgbroker -> sink”

If I can’t merge these two pipelines, then can I at least run PIPELINE-1 from the deepstream-app along with the default deepstream-app pipeline?

Hi,
For saving into JPEG files, you can leverage dsexample. Please refer to items 2 and 5 in the FAQ.

Thanks. It worked!

It worked for a single source. I can save the frames from a single source at the correct resolution. But when I try to write frames from the second source, the image has the correct width, but the height is much smaller than the surface parameters indicate.
I am accessing the second frame as surface->surfaceList[1]. I get the correct height and width parameters, but the written image has the wrong height.

Please help me.
0000000_Camera_1.jpg

Hi,
You probably have not configured batch-size correctly. Please check
https://devtalk.nvidia.com/default/topic/1061205/deepstream-sdk/rtsp-camera-access-frame-issue/post/5379662/#5379662
https://devtalk.nvidia.com/default/topic/1061205/deepstream-sdk/rtsp-camera-access-frame-issue/post/5380083/#5380083

The user there has verified it working. Please check that the number of sources you set is identical to batch-size.
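For a two-source setup like the one in this thread, the relevant fragments would look like this (a sketch; keeping the inference batch in step with the muxer is the usual recommendation):

```ini
[streammux]
# batch-size should equal the number of enabled sources
batch-size=2

[primary-gie]
# keep the inference batch in step with the muxer
batch-size=2
```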

I am able to access the surface, but the image written has an incorrect resolution.
I followed that same thread. It worked for a single source, but I still couldn’t get the correct output for multiple sources. Here is my code:

void write_frames(GstBuffer *buf){   
	GstMapInfo in_map_info;
    NvBufSurface *surface = NULL;
    memset (&in_map_info, 0, sizeof (in_map_info));
    if (!gst_buffer_map (buf, &in_map_info, GST_MAP_READ)) {
      g_print ("Error: Failed to map gst buffer\n");
      gst_buffer_unmap (buf, &in_map_info);
      return;
    }
    cudaError_t cuda_err;
    NvBufSurfTransformRect src_rect, dst_rect;
    NvBufSurfTransformRect src_rect1, dst_rect1;
    surface = (NvBufSurface *) in_map_info.data;  
    g_print ("\nBatch_Size: %d\n", surface->batchSize);
    int batch_size= surface->batchSize;
    cudaStream_t cuda_stream;	

    NvBufSurfaceCreateParams nvbufsurface_create_params, nvbufsurface_create_params1;    
	NvBufSurfTransformParams nvbufsurface_params;
    NvBufSurfTransformParams nvbufsurface_params1;

	NvBufSurface *dst_surface = NULL;
	NvBufSurface *dst_surface1 = NULL;

	src_rect.top  = 0;
	src_rect.left  = 0;
	src_rect.width = (guint) surface->surfaceList[0].width;
	src_rect.height = (guint) surface->surfaceList[0].height;

	dst_rect.top   = 0;
	dst_rect.left  = 0;
	dst_rect.width = (guint) surface->surfaceList[0].width;
	dst_rect.height= (guint) surface->surfaceList[0].height;

	nvbufsurface_params.src_rect = &src_rect;
	nvbufsurface_params.dst_rect = &dst_rect;
	nvbufsurface_params.transform_flag =  NVBUFSURF_TRANSFORM_CROP_SRC | NVBUFSURF_TRANSFORM_CROP_DST;
	nvbufsurface_params.transform_filter = NvBufSurfTransformInter_Default;
  
	nvbufsurface_create_params.gpuId  = surface->gpuId;
	nvbufsurface_create_params.width  = (gint) surface->surfaceList[0].width;
	nvbufsurface_create_params.height = (gint) surface->surfaceList[0].height;
	nvbufsurface_create_params.size = 0;
	nvbufsurface_create_params.colorFormat = NVBUF_COLOR_FORMAT_RGBA;
	nvbufsurface_create_params.layout = NVBUF_LAYOUT_PITCH;
	nvbufsurface_create_params.memType = NVBUF_MEM_DEFAULT;

	src_rect1.top  = 0;
	src_rect1.left  = 0;
	src_rect1.width = (guint) surface->surfaceList[1].width;
	src_rect1.height = (guint) surface->surfaceList[1].height;

	dst_rect1.top   = 0;
	dst_rect1.left  = 0;
	dst_rect1.width = (guint) surface->surfaceList[1].width;
	dst_rect1.height= (guint) surface->surfaceList[1].height;

	nvbufsurface_params1.src_rect = &src_rect1;
	nvbufsurface_params1.dst_rect = &dst_rect1;
	nvbufsurface_params1.transform_flag =  NVBUFSURF_TRANSFORM_CROP_SRC | NVBUFSURF_TRANSFORM_CROP_DST;
	nvbufsurface_params1.transform_filter = NvBufSurfTransformInter_Default;
  
	nvbufsurface_create_params1.gpuId  = surface->gpuId;
	nvbufsurface_create_params1.width  = (gint) surface->surfaceList[1].width;
	nvbufsurface_create_params1.height = (gint) surface->surfaceList[1].height;
	nvbufsurface_create_params1.size = 0;
	nvbufsurface_create_params1.colorFormat = NVBUF_COLOR_FORMAT_RGBA;
	nvbufsurface_create_params1.layout = NVBUF_LAYOUT_PITCH;
	nvbufsurface_create_params1.memType = NVBUF_MEM_DEFAULT;

	cuda_err = cudaSetDevice (surface->gpuId);
	cuda_err=cudaStreamCreate (&cuda_stream);

	int create_result = NvBufSurfaceCreate(&dst_surface,batch_size,&nvbufsurface_create_params);	
	create_result = NvBufSurfaceCreate(&dst_surface1,batch_size,&nvbufsurface_create_params1);	
	
	NvBufSurfTransformConfigParams transform_config_params;
	NvBufSurfTransform_Error err;
	transform_config_params.compute_mode = NvBufSurfTransformCompute_Default;
	transform_config_params.gpu_id = surface->gpuId;
	transform_config_params.cuda_stream = cuda_stream;
	err = NvBufSurfTransformSetSessionParams (&transform_config_params);

	NvBufSurfaceMemSet (dst_surface, 0, 0, 0);
	err = NvBufSurfTransform (surface, dst_surface, &nvbufsurface_params);
	if (err != NvBufSurfTransformError_Success) {
  	  g_print ("NvBufSurfTransform failed with error %d while converting buffer\n", err);
	}

	NvBufSurfaceMemSet (dst_surface1, 1, 0, 0);
	err = NvBufSurfTransform (surface, dst_surface1, &nvbufsurface_params1);
	if (err != NvBufSurfTransformError_Success) {
  	  g_print ("NvBufSurfTransform1 failed with error %d while converting buffer\n", err);
	}

	char filename[94];

	NvBufSurfaceMap (dst_surface, 0, 0, NVBUF_MAP_READ);
	NvBufSurfaceSyncForCpu (dst_surface, 0, 0);

	NvBufSurfaceMap (dst_surface1, 1, 0, NVBUF_MAP_READ);
	NvBufSurfaceSyncForCpu (dst_surface1, 1, 0);

	cv::Mat bgr_frame = cv::Mat (cv::Size(nvbufsurface_create_params.width, nvbufsurface_create_params.height), CV_8UC3);
	cv::Mat in_mat = cv::Mat (nvbufsurface_create_params.height, nvbufsurface_create_params.width,
		    CV_8UC4, dst_surface->surfaceList[0].mappedAddr.addr[0],
		    dst_surface->surfaceList[0].pitch);
	cv::cvtColor (in_mat, bgr_frame, CV_RGBA2BGR);

	snprintf(filename, 94, "ds_media/%07d_Camera_0.jpg", frame_count[0]++);    
	cv::imwrite(filename,bgr_frame);

	cv::Mat bgr_frame1 = cv::Mat (cv::Size(nvbufsurface_create_params1.width, nvbufsurface_create_params1.height), CV_8UC3);
	cv::Mat in_mat1 = cv::Mat (nvbufsurface_create_params1.height, nvbufsurface_create_params1.width,
	    CV_8UC4, dst_surface1->surfaceList[1].mappedAddr.addr[0],
	    dst_surface1->surfaceList[1].pitch);
	cv::cvtColor (in_mat1, bgr_frame1, CV_RGBA2BGR);

	snprintf(filename, 94, "ds_media/%07d_Camera_1.jpg", frame_count[1]++);    
	cv::imwrite(filename,bgr_frame1);

	NvBufSurfaceUnMap (dst_surface, 0, 0);
	NvBufSurfaceDestroy (dst_surface);

	NvBufSurfaceUnMap (dst_surface1, 1, 0);
	NvBufSurfaceDestroy (dst_surface1);

    cudaStreamDestroy (cuda_stream);
    gst_buffer_unmap (buf, &in_map_info);
    return;
}

The code prints the batch-size correctly - 2. I also tried saving only the second surface, but to no avail.

Hi,
You should see the buffer at dst_surface->surfaceList[1], so you should not need to create dst_surface1.
Do you have multiple sources with different resolutions? Even if your sources have different resolutions, you should see all surfaces at the same resolution after nvstreammux.

Hi DaneLLL,
I also tried using a single destination surface, but I am still facing the same issue. Can you check my config file and see if there’s any error?

# Copyright (c) 2018 NVIDIA Corporation.  All rights reserved.
#
# NVIDIA Corporation and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto.  Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA Corporation is strictly prohibited.

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=1
columns=3
width=1920
height=1280
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=2
uri=file:///opt/nvidia/deepstream/deepstream-4.0/samples/streams/sample_720p.h264
num-sources=1
gpu-id=0
nvbuf-memory-type=0

[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=2
uri=file:///opt/nvidia/deepstream/deepstream-4.0/samples/streams/sample_720p.h264
num-sources=1
gpu-id=0
nvbuf-memory-type=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0


[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvoverlaysink 6=MsgConvBroker
type=6
msg-conv-config=msgconv_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=1
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_kafka_proto.so
msg-broker-conn-str=192.168.1.1;9092;test
topic=test


[sink2]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
## only SW mpeg4 is supported right now.
codec=3
sync=1
bitrate=2000000
output-file=/home/nvidia/out.mp4
source-id=1

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=2
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
#Required to display the PGIE labels, should be added even when using config-file
#property
batch-size=2
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=4
#Required by the app for SGIE, when used along with config-file property
gie-unique-id=1
nvbuf-memory-type=0
model-engine-file=/opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector/resnet10.caffemodel_b2_fp16.engine
labelfile-path=/opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector/labels.txt
config-file=/opt/nvidia/deepstream/deepstream-4.0/samples/configs/deepstream-app/config_infer_primary.txt
#infer-raw-output-dir=/opt/nvidia/deepstream/deepstream-4.0/samples/primary_detector_raw_output/


[tracker]
enable=0
tracker-width=600
tracker-height=300
ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
#ll-config-file required for DCF/IOU only
#ll-config-file=tracker_config.yml
#ll-config-file=iou_config.txt
gpu-id=0
#enable-batch-process applicable to DCF only
enable-batch-process=0

[secondary-gie0]
enable=1
gpu-id=0
gie-unique-id=4
operate-on-gie-id=1
operate-on-class-ids=0;
batch-size=16
config-file=/opt/nvidia/deepstream/deepstream-4.0/samples/configs/deepstream-app/config_infer_secondary_vehicletypes.txt
labelfile-path=/opt/nvidia/deepstream/deepstream-4.0/samples/models/Secondary_VehicleTypes/labels.txt
model-engine-file=/opt/nvidia/deepstream/deepstream-4.0/samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_int8.engine

[secondary-gie1]
enable=1
gpu-id=0
gie-unique-id=5
operate-on-gie-id=1
operate-on-class-ids=0;
batch-size=16
config-file=/opt/nvidia/deepstream/deepstream-4.0/samples/configs/deepstream-app/config_infer_secondary_carcolor.txt
labelfile-path=/opt/nvidia/deepstream/deepstream-4.0/samples/models/Secondary_CarColor/labels.txt
model-engine-file=/opt/nvidia/deepstream/deepstream-4.0/samples/models/Secondary_CarColor/resnet18.caffemodel_b16_fp16.engine

[secondary-gie2]
enable=1
gpu-id=0
gie-unique-id=6
operate-on-gie-id=1
operate-on-class-ids=0;
batch-size=16
config-file=/opt/nvidia/deepstream/deepstream-4.0/samples/configs/deepstream-app/config_infer_secondary_carmake.txt
labelfile-path=/opt/nvidia/deepstream/deepstream-4.0/samples/models/Secondary_CarMake/labels.txt
model-engine-file=/opt/nvidia/deepstream/deepstream-4.0/samples/models/Secondary_CarMake/resnet18.caffemodel_b16_fp16.engine

Hi,
The config file looks OK. Do you get the GstBuffer before [tiled-display]? After [tiled-display], all surfaces are composited into one.

I call write_frames() in bbox_generated_probe_after_analytics (AppCtx * appCtx, GstBuffer * buf, NvDsBatchMeta * batch_meta, guint index) of the deepstream_app_main.c file.

Hi,
Please try the following patch:

#if 1
  static int dump = 0;
  int idx = 1;
  if (dump < 150) {
    GstMapInfo in_map_info;
    NvBufSurface *surface = NULL, ip_surf;

    memset (&in_map_info, 0, sizeof (in_map_info));
    if (!gst_buffer_map (buf, &in_map_info, GST_MAP_READ)) {
      g_print ("Error: Failed to map gst buffer\n");
      gst_buffer_unmap (buf, &in_map_info);
      return GST_PAD_PROBE_OK;
    }
    cudaError_t cuda_err;

    NvBufSurfTransformRect src_rect, dst_rect;
    surface = (NvBufSurface *) in_map_info.data;

    ip_surf = *surface;

    ip_surf.numFilled = ip_surf.batchSize = 1;
    ip_surf.surfaceList = &(surface->surfaceList[idx]);
  
    int batch_size= surface->batchSize;
    printf("\nBatch Size : %d, resolution : %dx%d \n",batch_size,
        surface->surfaceList[idx].width, surface->surfaceList[idx].height);

    src_rect.top   = 0;
    src_rect.left  = 0;
    src_rect.width = (guint) surface->surfaceList[idx].width;
    src_rect.height= (guint) surface->surfaceList[idx].height;

    dst_rect.top   = 0;
    dst_rect.left  = 0;
    dst_rect.width = (guint) surface->surfaceList[idx].width;
    dst_rect.height= (guint) surface->surfaceList[idx].height;

    NvBufSurfTransformParams nvbufsurface_params;
    nvbufsurface_params.src_rect = &src_rect;
    nvbufsurface_params.dst_rect = &dst_rect;
    nvbufsurface_params.transform_flag =  NVBUFSURF_TRANSFORM_CROP_SRC | NVBUFSURF_TRANSFORM_CROP_DST;
    nvbufsurface_params.transform_filter = NvBufSurfTransformInter_Default;
  
    NvBufSurface *dst_surface = NULL;
    NvBufSurfaceCreateParams nvbufsurface_create_params;

    /* An intermediate buffer for NV12/RGBA to BGR conversion  will be
     * required. Can be skipped if custom algorithm can work directly on NV12/RGBA. */
    nvbufsurface_create_params.gpuId  = surface->gpuId;
    nvbufsurface_create_params.width  = (gint) surface->surfaceList[idx].width;
    nvbufsurface_create_params.height = (gint) surface->surfaceList[idx].height;
    nvbufsurface_create_params.size = 0;
    nvbufsurface_create_params.colorFormat = NVBUF_COLOR_FORMAT_RGBA;
    nvbufsurface_create_params.layout = NVBUF_LAYOUT_PITCH;
    nvbufsurface_create_params.memType = NVBUF_MEM_DEFAULT;

    cuda_err = cudaSetDevice (surface->gpuId);

    cudaStream_t cuda_stream;

    cuda_err=cudaStreamCreate (&cuda_stream);

    int create_result = NvBufSurfaceCreate(&dst_surface,1,&nvbufsurface_create_params);	

    NvBufSurfTransformConfigParams transform_config_params;
    NvBufSurfTransform_Error err;

    transform_config_params.compute_mode = NvBufSurfTransformCompute_Default;
    transform_config_params.gpu_id = surface->gpuId;
    transform_config_params.cuda_stream = cuda_stream;
    err = NvBufSurfTransformSetSessionParams (&transform_config_params);

    NvBufSurfaceMemSet (dst_surface, 0, 0, 0);
    err = NvBufSurfTransform (&ip_surf, dst_surface, &nvbufsurface_params);
    if (err != NvBufSurfTransformError_Success) {
  	  g_print ("NvBufSurfTransform failed with error %d while converting buffer\n", err);
    }
    NvBufSurfaceMap (dst_surface, 0, 0, NVBUF_MAP_READ);
    NvBufSurfaceSyncForCpu (dst_surface, 0, 0);

    cv::Mat bgr_frame = cv::Mat (cv::Size(nvbufsurface_create_params.width, nvbufsurface_create_params.height), CV_8UC3);

    cv::Mat in_mat =
        cv::Mat (nvbufsurface_create_params.height, nvbufsurface_create_params.width,
        CV_8UC4, dst_surface->surfaceList[0].mappedAddr.addr[0],
        dst_surface->surfaceList[0].pitch);

    cv::cvtColor (in_mat, bgr_frame, CV_RGBA2BGR);

    char filename[64];
    snprintf(filename, 64, "/tmp/image%03d.jpg", dump);
    cv::imwrite(filename,bgr_frame);
    dump ++;

    NvBufSurfaceUnMap (dst_surface, 0, 0);
    NvBufSurfaceDestroy (dst_surface);
    cudaStreamDestroy (cuda_stream);
    gst_buffer_unmap (buf, &in_map_info);
  }
#endif

Hi DaneLLL. This code is working correctly. It saves the images at the correct resolutions. But is it possible that the buffers get interchanged at runtime? What I observed is that some frames are being saved at interchanged locations. For example, a source0 frame is being saved in the folder for source1 and vice versa. What could be wrong, and how can I fix it?

I printed the surface->numFilled attribute. It continuously alternates between 1 and 2 even though both sources are running.

Hi,
Please add a probe callback to the sink pad of nvmultistreamtiler and check again. This should not happen before nvmultistreamtiler composites the surfaces.

Hi,
With more investigation, we think this is possible with RTSP sources. Please rely on source_id in NvDsFrameMeta:

/** source_id of the frame in the batch e.g. camera_id.
 * It need not be in sequential order */
guint source_id;

Also set live-source=1 in [streammux].
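Inside the probe, the per-frame metadata carries that field, so the output path can be keyed on source_id instead of the batch slot. A sketch, where `build_capture_path` is an illustrative helper matching the filename format used earlier in the thread:

```c
#include <stdio.h>

/* Builds e.g. "ds_media/0000012_Camera_1.jpg"; the format matches the
 * filenames used earlier in the thread. Returns the number of
 * characters written (excluding the terminator). */
static int build_capture_path (char *out, size_t len,
    unsigned int source_id, unsigned int frame_num)
{
  return snprintf (out, len, "ds_media/%07u_Camera_%u.jpg",
      frame_num, source_id);
}

/* In the probe, iterate the batch meta and take source_id per frame
 * (illustrative, DeepStream 4.x metadata API):
 *
 *   NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
 *   for (NvDsMetaList *l = batch_meta->frame_meta_list; l; l = l->next) {
 *     NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l->data;
 *     guint cam = frame_meta->source_id;   // stable camera id
 *     guint idx = frame_meta->batch_id;    // slot in surfaceList
 *     build_capture_path (filename, sizeof filename,
 *         cam, frame_count[cam]++);
 *     // extract surface->surfaceList[idx] and write it to `filename`
 *   }
 */
```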

Thanks. I followed the same and it worked. There is one more issue I am facing. Can we check when a stream ends or stops? When any one of the streams stops or ends, even the source_id is not reliable.

Hi saurabh.sadhwani,

Please open a new forum issue and we’ll pick it up there.

Thanks