How can I get JPEG data from the plugin nvjpegenc?

I created a DeepStream pipeline as follows:
gst-launch-1.0 uridecodebin uri=file:///opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.mp4 ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12,width=640,height=360' ! m.sink_0 nvstreammux name=m batch-size=1 width=640 height=480 ! nvvideoconvert ! nvdsosd ! nvvideoconvert ! video/x-raw, format=I420 ! nvjpegenc ! appsink

It works correctly. The appsink's sample callback source code is as follows. I can get the NvDsBatchMeta data correctly, but my question is: how can I get the JPEG data?

static GstFlowReturn
new_sample (GstElement * sink, gpointer * data)
{
GstSample *sample;
GstBuffer *buf = NULL;
guint num_rects = 0;
NvDsObjectMeta *obj_meta = NULL;
guint vehicle_count = 0;
guint person_count = 0;
NvDsMetaList *l_frame = NULL;
NvDsMetaList *l_obj = NULL;
unsigned long int pts = 0;
GstMapInfo map;

sample = gst_app_sink_pull_sample (GST_APP_SINK (sink));
if (gst_app_sink_is_eos (GST_APP_SINK (sink))) 
{
	g_print ("EOS received in Appsink********\n");
}
g_print ("\n\r just for a test\n\r ");
//gst_sample_unref (sample);
//return GST_FLOW_OK;
if (sample) 
{
	/* Obtain GstBuffer from sample and then extract metadata from it. */
	buf = gst_sample_get_buffer (sample);
	gst_buffer_map (buf, &map, GST_MAP_READ);

	g_print ("map.size = %" G_GSIZE_FORMAT "\n", map.size);
	gst_buffer_unmap(buf, &map);
	
	NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);

	for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;l_frame = l_frame->next)
	{
		NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) (l_frame->data);
		pts = frame_meta->buf_pts;
		
		for (l_obj = frame_meta->obj_meta_list; l_obj != NULL; l_obj = l_obj->next)
		{
			obj_meta = (NvDsObjectMeta *) (l_obj->data);
			if (obj_meta->class_id == PGIE_CLASS_ID_VEHICLE) 
			{
				vehicle_count++;
				num_rects++;
			}
			if (obj_meta->class_id == PGIE_CLASS_ID_PERSON) 
			{
				person_count++;
				num_rects++;
			}
		}
	}

	g_print ("	111 Frame Number = %d Number of objects = %d "
			"Vehicle Count = %d Person Count = %d PTS = %" GST_TIME_FORMAT "\n",
			frame_number, num_rects, vehicle_count, person_count,
			GST_TIME_ARGS (pts));
	frame_number++;
	gst_sample_unref (sample);
	return GST_FLOW_OK;
}
return GST_FLOW_ERROR;

}

Hi,
Please refer to
https://developer.gnome.org/gstreamer/stable/gstreamer-GstMemory.html#GstMapInfo
map.data is the pointer to the compressed JPEG image.

Following that doc, I get the JPEG data, but the size is not correct. Maybe it is affected by the NvDsBatchMeta; I want to know the mapping between the JPEG data and the NvDsBatchMeta. Maybe I should refer to the function "nvds_obj_enc_process", but I cannot find the source code of that function.

Hi,
nvds_obj_enc_process is for encoding the detected objects (crops of the frame). What you get in the appsink is the whole frame compressed to JPEG.

Yes, I want to get the JPEG data length so that I can copy it to another buffer and send it over Ethernet. How can I get the JPEG data length? I could copy it byte by byte until I hit the hex 0xFF 0xD9 marker, but that is inefficient.

Hi,
map.size should give the JPEG data length, from the 0xFFD8 (SOI) marker to the 0xFFD9 (EOI) marker.
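
For example, a minimal sketch (the helper name and the network-send step are placeholders, not from any DeepStream sample) of copying the encoded frame out of the mapped buffer; you can call it from new_sample() right after gst_app_sink_pull_sample():

#include <string.h>
#include <gst/gst.h>

/* Hypothetical helper: copy the JPEG bitstream out of a pulled sample.
 * Returns a newly allocated buffer (release it with g_free()) and writes
 * its length to *jpeg_size, or returns NULL on failure. */
static guint8 *
copy_jpeg_from_sample (GstSample * sample, gsize * jpeg_size)
{
  GstBuffer *buf = gst_sample_get_buffer (sample);
  GstMapInfo map;
  guint8 *jpeg_copy = NULL;

  if (buf && gst_buffer_map (buf, &map, GST_MAP_READ)) {
    /* map.data points to the compressed JPEG (0xFFD8 ... 0xFFD9),
     * map.size is its length in bytes. */
    jpeg_copy = (guint8 *) g_malloc (map.size);
    memcpy (jpeg_copy, map.data, map.size);
    *jpeg_size = map.size;
    gst_buffer_unmap (buf, &map);
  }
  return jpeg_copy;
}

The returned copy can then be handed to your own socket code and freed once it has been sent.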

Thank you for your reply, I now get the correct JPEG data.

Hello, I want to send the detection-result picture, containing the target boxes or bounding boxes, over the network. What is the general method?

What I'm doing now is taking the raw frame data from the surfaceList, drawing the bounding boxes with OpenCV, and encoding with NVJPEG. However, there are two problems:
1) A segmentation fault occurs during OpenCV imwrite/imread.
2) The original frame from the surfaceList is in NV12 format, and the image generated by 05_jpeg_encode from JETSON_MULTIMEDIA_API has a color-space problem.
How do I pass the NV12 data in the surface to NVJPEG for proper encoding?

This problem has been bothering me for more than two weeks. Please give me some advice. Thank you very much!

• My main code is as follows:

cudaMemcpy((void*)src_data,
           (void*)surface->surfaceList[frame_meta->batch_id].dataPtr,
           surface->surfaceList[frame_meta->batch_id].dataSize,
           cudaMemcpyDeviceToHost);

gint frame_width  = (gint)surface->surfaceList[frame_meta->batch_id].width;
gint frame_height = (gint)surface->surfaceList[frame_meta->batch_id].height;
gint frame_step   = surface->surfaceList[frame_meta->batch_id].pitch;
gint color_format = surface->surfaceList[frame_meta->batch_id].colorFormat;

JpegEncoder::instance()->encode(frame_meta->source_id, src_data, frame_data_size, frame_width, frame_height);

* Here is the main code in the encode function:
ctx->jpegenc = NvJPEGEncoder::createJPEGEncoder("jpenenc");
TEST_ERROR(!ctx->jpegenc, "Could not create Jpeg Encoder", cleanup);

ctx->convert = NvVideoConverter::createVideoConverter("conv");
TEST_ERROR(!ctx->convert, "Could not create Video Converter", cleanup);

/* Set conv output plane format */    // V4L2_PIX_FMT_YUV420M
ret = ctx->convert->setOutputPlaneFormat(V4L2_PIX_FMT_YUV420M, ctx->in_width,
                                   ctx->in_height,
                                   V4L2_NV_BUFFER_LAYOUT_PITCH);
TEST_ERROR(ret < 0, "Could not set output plane format for conv", cleanup);

/* Set conv capture plane format, YUV420 or NV12 */
ret = ctx->convert->setCapturePlaneFormat(ctx->in_pixfmt, ctx->in_width,
                                    ctx->in_height,
                                    V4L2_NV_BUFFER_LAYOUT_BLOCKLINEAR);
TEST_ERROR(ret < 0, "Could not set capture plane format for conv", cleanup);

/* REQBUF, EXPORT and MAP conv output plane buffers */
ret = ctx->convert->output_plane.setupPlane(V4L2_MEMORY_MMAP, 1, true, false);
TEST_ERROR(ret < 0, "Error while setting up output plane for conv", cleanup);

/* REQBUF and EXPORT conv capture plane buffers
 * No need to MAP since buffer will be shared to next component and not read in application */
ret = ctx->convert->capture_plane.setupPlane(V4L2_MEMORY_MMAP, 1, !ctx->use_fd, false);
TEST_ERROR(ret < 0, "Error while setting up capture plane for conv", cleanup);

/* conv output plane STREAMON */
ret = ctx->convert->output_plane.setStreamStatus(true);
TEST_ERROR(ret < 0, "Error in output plane streamon for conv", cleanup);

/* conv capture plane STREAMON */
ret = ctx->convert->capture_plane.setStreamStatus(true);
TEST_ERROR(ret < 0, "Error in capture plane streamon for conv", cleanup);

/* Register callback for dequeue thread on conv capture plane, this callback
 * will encode YUV420 or NV12 image to JPEG and write to file system. */
ctx->convert->capture_plane.setDQThreadCallback(conv_capture_dqbuf_thread_callback);

// Start threads to dequeue buffers on conv capture plane
ctx->convert->capture_plane.startDQThread(ctx);

/* Enqueue all empty conv capture plane buffers, actually in this case, 1 buffer will be enqueued. */
for (uint32_t i = 0; i < ctx->convert->capture_plane.getNumBuffers(); i++)  // getNumBuffers() = 1
{
    struct v4l2_buffer v4l2_buf;
    struct v4l2_plane planes[MAX_PLANES];

    memset(&v4l2_buf, 0, sizeof(v4l2_buf));
    memset(planes, 0, MAX_PLANES * sizeof(struct v4l2_plane));

    v4l2_buf.index = i;
    v4l2_buf.m.planes = planes;

    ret = ctx->convert->capture_plane.qBuffer(v4l2_buf, NULL);
    if (ret < 0){
        std::cerr << "Error while queueing buffer at conv capture plane" << std::endl;
        abort(ctx);
        goto cleanup;
    }
}

/* Read YUV420 image to conv output plane buffer and enqueue so conv can start processing. */
{
    struct v4l2_buffer v4l2_buf;
    struct v4l2_plane planes[MAX_PLANES];
    NvBuffer *buffer = ctx->convert->output_plane.getNthBuffer(0);

    memset(&v4l2_buf, 0, sizeof(v4l2_buf));
    memset(planes, 0, MAX_PLANES * sizeof(struct v4l2_plane));

    v4l2_buf.index = 0;
    v4l2_buf.m.planes = planes;

    for (uint32_t i = 0; i < buffer->n_planes; i++)
    {
        NvBuffer::NvBufferPlane &plane = buffer->planes[i];
        plane.bytesused = 0;
        uint32_t bytes_written = plane.fmt.bytesperpixel * plane.fmt.width;
        unsigned char* dstdata = plane.data;
        unsigned char* srcdata = src_yuv_data;

        /* Copy row by row: source rows are packed (bytes_written apart),
         * destination rows are plane.fmt.stride apart. */
        for (uint32_t j = 0; j < plane.fmt.height; j++)
        {
            memcpy(dstdata, srcdata, bytes_written);
            srcdata += bytes_written;
            dstdata += plane.fmt.stride;
        }
        plane.bytesused = plane.fmt.stride * plane.fmt.height;
    }

    ret = ctx->convert->output_plane.qBuffer(v4l2_buf, NULL);
    if (ret < 0){
        std::cerr << "Error while queueing buffer at conv output plane" << std::endl;
        abort(ctx);
        goto cleanup;
    }
}

/* Wait till all capture plane buffers on conv are dequeued */
ctx->convert->capture_plane.waitForDQThread(2000);

Hello, there is also a segmentation fault when I save an image file using OpenCV's imwrite. I had to use imwrite to compress the original image and send it over the network (I originally intended to use NVJPEG encoding, but the image format never converted properly). imwrite works fine at first, but after a while (the code in this file hasn't changed; I only added an NVJPEG module to other files and modified the makefile for the whole project) imwrite starts to produce segmentation faults.

Have you found the cause of this problem? How was it solved in the end? Thank you very much!

There may be a mismatch between the JPEG library used by OpenCV and the one used by the nvjpeg plugins. I have often seen that, but I never really investigated how to fix it, and I am not sure it is easy or even possible.
I think that isolating nvjpeg into a separate process could fix it; you could use the GStreamer shmsrc/shmsink elements for communication between the two, but this may have some performance and CPU-load drawbacks.
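
For example, a rough, untested sketch of the two sides (the socket path is arbitrary):

Sender, inside the DeepStream pipeline, replacing the appsink:
... ! nvjpegenc ! shmsink socket-path=/tmp/ds_jpeg wait-for-connection=false sync=false

Receiver, a separate process that never loads the NVIDIA JPEG code (here just dumping frames, but it could be your OpenCV or network code instead):
gst-launch-1.0 shmsrc socket-path=/tmp/ds_jpeg ! image/jpeg ! jpegparse ! multifilesink location=frame_%05d.jpg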

Does OpenCV imwrite use the system's JPEG library by default? I have removed the JPEG lib dependency from my CMakeLists and kept only the NVJPEG lib. It still compiles and runs smoothly. Could this be the problem?

Hi,
We would suggest linking to libjpeg.so in OpenCV. Please refer to a similar topic:
Attempting to stream camera video and encode with nvjpeg, but lib mismatch? (C++) - #4 by DaneLLL

But how can I solve this problem?

Hi,
NvJpegEncoder in 05_jpeg_encode supports NvBuffer; it does not support the NvBufSurface defined in the DeepStream SDK. For encoding/saving the frames to JPEG, you would need to use cv::imwrite(). Please refer to
RTSP camera access frame issue - #19 by DaneLLL
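
If the goal is to send the compressed image over the network rather than save a file, a minimal sketch along those lines could use cv::imencode() (the in-memory counterpart of cv::imwrite()). It assumes the NV12 frame has already been copied to host memory, as in the cudaMemcpy above, and that the pitch equals the width; otherwise copy the planes row by row first:

#include <vector>
#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>

/* NV12 frame in host memory (full-height Y plane followed by an
 * interleaved UV plane of half height, i.e. height * 3 / 2 rows in
 * total) -> JPEG bytes ready to be sent over a socket. */
std::vector<uchar> encode_nv12_to_jpeg(const unsigned char *src_data,
                                       int width, int height, int quality = 85)
{
    // Wrap the NV12 data in a single-channel Mat (no copy is made).
    cv::Mat nv12(height * 3 / 2, width, CV_8UC1,
                 const_cast<unsigned char *>(src_data));

    // Convert NV12 to BGR so the JPEG encoder can consume it.
    cv::Mat bgr;
    cv::cvtColor(nv12, bgr, cv::COLOR_YUV2BGR_NV12);

    // Encode to JPEG in memory instead of writing a file.
    std::vector<uchar> jpeg;
    cv::imencode(".jpg", bgr, jpeg,
                 std::vector<int>{cv::IMWRITE_JPEG_QUALITY, quality});
    return jpeg;
}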

Hello, did you implement the process of drawing the boxes and then JPEG-encoding in the new_sample function?
The official appsrc-test example does not show where to get the original image or how to encode it as JPEG.
Could you please tell me how to do it? Thank you very much!

Hello, first of all, thank you for your reply.
I want to send the picture with the detection result to the backend, but if I use OpenCV to save the picture first and then read it back and send it, it is very slow! The purpose of encoding the original image as JPEG is to reduce the amount of data sent over the network, not to save the image.
The whole process is: 1) get the original image from the NvBufSurface; 2) draw the bounding boxes, target boxes, OSD information, etc.; 3) JPEG-encode; 4) send over the network. The faster the whole process, the better.
Looking forward to your detailed advice. Thank you very much.

Hi,
If you would like to encode every frame and keep 30 fps, we suggest using H.264/H.265 encoding. Please refer to
What is the best way to quickly draw information on the original frame and encode it as JPG? - #3 by DaneLLL
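
For reference, a rough sketch of such a pipeline on Jetson (bitrate, host, and port are only examples, not taken from the linked topic):

... ! nvdsosd ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! nvv4l2h264enc bitrate=4000000 ! h264parse ! rtph264pay ! udpsink host=192.168.1.100 port=5000 sync=false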
