Pass crop downstream

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 5.0
• TensorRT Version: 7.1.3.4
• NVIDIA GPU Driver Version (valid for GPU only): 440.33.01
• Issue Type (questions, new requirements, bugs): questions
• How to reproduce the issue? See below.

I want to write a custom plugin in which the size of the buffer changes, so I'm implementing transform rather than transform_ip. This also means dsexample, which transforms in place, is of limited use as a starting point. My problem is generating valid output buffers.
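
For context, a minimal sketch of how I wire up the copy path in class_init (boilerplate names assumed from the GST_GENERATESEQ macro, not copied verbatim from my source):

static void
gst_generateseq_class_init (GstGenerateSeqClass * klass)
{
	GstBaseTransformClass *btrans_class = GST_BASE_TRANSFORM_CLASS (klass);

	/* transform() receives a separate output buffer; transform_ip(),
	 * which dsexample uses, would modify the input buffer in place */
	btrans_class->transform = GST_DEBUG_FUNCPTR (gst_generateseq_transform);
}

Since the output size differs from the input size, I assume transform_size() and the caps vfuncs also need matching implementations. My transform currently looks like this: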

static GstFlowReturn
gst_generateseq_transform (GstBaseTransform * btrans, GstBuffer * inbuf, GstBuffer * outbuf)
{
	GstGenerateSeq *generateseq = GST_GENERATESEQ (btrans);
	GstMapInfo in_map_info, out_map_info;
	GstFlowReturn flow_ret = GST_FLOW_ERROR;
	//gdouble scale_ratio = 1.0;
	//GenerateSeqOutput *output;

	NvBufSurface *surface = NULL;
	NvBufSurface *outsurface = NULL;
	NvDsBatchMeta *batch_meta = NULL;
	NvDsFrameMeta *frame_meta = NULL;
	NvDsMeta *meta = NULL;

	guint i = 0;
	int width, height, r;


	NvBufSurface *output_surface;
	NvBufSurfaceCreateParams params;
	int batch_size = 1;

	generateseq->frame_num++;

	g_print("mapping\n");

	/* map the incoming buffer to access its batched NvBufSurface */
	memset (&in_map_info, 0, sizeof (in_map_info));
	if (!gst_buffer_map (inbuf, &in_map_info, GST_MAP_READ)) {
		g_print ("Error: Failed to map gst buffer\n");
		goto error;
	}

	nvds_set_input_system_timestamp (inbuf, GST_ELEMENT_NAME (generateseq));
	surface = (NvBufSurface *) in_map_info.data;

	std::cout << "num surfaces   " << surface->batchSize << std::endl;

	/* describe the new surface: a small 100x100 RGBA test buffer */
	params.gpuId = surface->gpuId;
	params.width = 100;
	params.height = 100;
	params.size = 0; /* 0 lets NvBufSurfaceCreate calculate the size */
	params.colorFormat = NVBUF_COLOR_FORMAT_RGBA;
	params.layout = NVBUF_LAYOUT_PITCH;
	params.memType = NVBUF_MEM_CUDA_UNIFIED;

	r = NvBufSurfaceCreate(&output_surface, batch_size, &params);
	std::cout << r << std::endl;
	// print some info about the new surface for debugging
	std::cout << "-- new surface " << std::endl;
	std::cout << "batch info " << std::endl;
	std::cout << "mem type   " << output_surface->memType << std::endl;
	std::cout << "params size   " << params.size << std::endl;
	std::cout << "data size   " << output_surface->surfaceList[0].dataSize << std::endl;
	std::cout << "dataPtr   " << output_surface->surfaceList[0].dataPtr << std::endl;
	std::cout << "mappedAddr   " << output_surface->surfaceList[0].mappedAddr.addr[0] << std::endl;

	/* wrap the new NvBufSurface in a fresh GstBuffer; no GDestroyNotify
	 * is passed, so the buffer does not own the surface */
	outbuf = gst_buffer_new_wrapped_full (GST_MEMORY_FLAG_ZERO_PREFIXED,
			output_surface,
			sizeof(NvBufSurface), 0,
			sizeof(NvBufSurface), NULL, NULL);


	memset (&out_map_info, 0, sizeof (out_map_info));
	if (!gst_buffer_map (outbuf, &out_map_info, GST_MAP_READWRITE)) {
		g_print ("Error: Failed to out map gst buffer\n");
		goto error;
	}
	outsurface = (NvBufSurface *) out_map_info.data;
	/* zero plane 0 of the first surface */
	NvBufSurfaceMemSet(outsurface, 0, 0, 0);

	g_print("surface batchsize %u\n", outsurface->batchSize);
	flow_ret = GST_FLOW_OK;

	error:

	nvds_set_output_system_timestamp (outbuf, GST_ELEMENT_NAME (generateseq));

	gst_buffer_unmap (inbuf, &in_map_info);
	gst_buffer_unmap (outbuf, &out_map_info);

	g_print("exit\n");

	return flow_ret;
}

When I map the created buffer with gst_buffer_map, I can read back the correct number of surfaces allocated by NvBufSurfaceCreate. I then return the buffer that wraps this surface.
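
One thing I'm unsure about is ownership: I pass a NULL GDestroyNotify to gst_buffer_new_wrapped_full, so nothing ever destroys the surface. A minimal sketch of tying the surface's lifetime to the buffer (assuming NvBufSurfaceDestroy is the appropriate cleanup call, with free_output_surface as a hypothetical helper) would be:

static void
free_output_surface (gpointer data)
{
	/* destroy the wrapped surface when its GstBuffer is finalized */
	NvBufSurfaceDestroy ((NvBufSurface *) data);
}

	outbuf = gst_buffer_new_wrapped_full (GST_MEMORY_FLAG_ZERO_PREFIXED,
			output_surface,
			sizeof(NvBufSurface), 0,
			sizeof(NvBufSurface),
			output_surface, free_output_surface);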

For testing, I duplicated the generateseq element in my pipeline:

.... ! generateseq ! generateseq ! ...

When the second element receives the buffer from the first, it prints the following and then crashes:

num surfaces   2432761856
Cuda failure: status=101
-1
-- new surface 
batch info 
Caught SIGSEGV

I was expecting one surface. Were the allocated buffers somehow freed? I also assume I don't have to sync the surface from GPU to CPU, since I'm not doing any CPU operations on it. What am I doing wrong?
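
One suspicion: according to the GstBaseTransform documentation, the outbuf handed to transform() has already been allocated by the base class through its prepare_output_buffer() vfunc, so reassigning the local outbuf pointer inside transform() would not change the buffer that actually travels downstream. If that is the problem, an unverified sketch of overriding prepare_output_buffer() instead (reusing the hypothetical free_output_surface helper from above) might look like this:

static GstFlowReturn
gst_generateseq_prepare_output_buffer (GstBaseTransform * btrans,
		GstBuffer * inbuf, GstBuffer ** outbuf)
{
	NvBufSurface *output_surface = NULL;
	NvBufSurfaceCreateParams params = { };

	params.gpuId = 0; /* or the gpuId of the input surface */
	params.width = 100;
	params.height = 100;
	params.size = 0;
	params.colorFormat = NVBUF_COLOR_FORMAT_RGBA;
	params.layout = NVBUF_LAYOUT_PITCH;
	params.memType = NVBUF_MEM_CUDA_UNIFIED;

	if (NvBufSurfaceCreate (&output_surface, 1, &params) != 0)
		return GST_FLOW_ERROR;

	/* hand the wrapping buffer back through the out parameter so the
	 * base class passes it on to transform() and then downstream */
	*outbuf = gst_buffer_new_wrapped_full (GST_MEMORY_FLAG_ZERO_PREFIXED,
			output_surface,
			sizeof(NvBufSurface), 0,
			sizeof(NvBufSurface),
			output_surface, free_output_surface);

	return GST_FLOW_OK;
}

with btrans_class->prepare_output_buffer = GST_DEBUG_FUNCPTR (gst_generateseq_prepare_output_buffer); registered in class_init. Is that the intended way to send a differently sized NvBufSurface downstream, or is there a recommended approach?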

For reference, here is an example of a transform plugin: Qustion of memory leak gst-plugin based on dsexample - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums