Hello,
Recently I have been exploring a few of the Tegra Multimedia API (MMAPI) samples. My current project relies heavily on https://github.com/dusty-nv/jetson-inference/blob/master/detectnet-camera/detectnet-camera.cpp, and I want to save cropped images based on the bounding-box information from the detector.
For this, I wanted to leverage the NvJPEGEncoder as shown in sample 05_jpeg_encode.
I noticed that I can do something like this:
/* snippet from ~/tegra_multimedia_api/samples/05_jpeg_encode/jpeg_encode_main.cpp */
if (!ctx.use_fd)
{
    unsigned long out_buf_size = ctx.in_width * ctx.in_height * 3 / 2;
    unsigned char *out_buf = new unsigned char[out_buf_size];

    NvBuffer buffer(V4L2_PIX_FMT_YUV420M, ctx.in_width,
                    ctx.in_height, 0);
    buffer.allocateMemory();

    ret = read_video_frame(ctx.in_file, buffer);
    TEST_ERROR(ret < 0, "Could not read a complete frame from file",
               cleanup);

    ret = ctx.jpegenc->encodeFromBuffer(buffer, JCS_YCbCr, &out_buf,
                                        out_buf_size);
    TEST_ERROR(ret < 0, "Error while encoding from buffer", cleanup);

    ctx.out_file->write((char *) out_buf, out_buf_size);
    delete[] out_buf;

    goto cleanup;
}
However, instead of

    ret = read_video_frame(ctx.in_file, buffer);

I will have to set up the NvBuffer from imgCPU as defined in: https://github.com/dusty-nv/jetson-inference/blob/e12e6e64365fed83e255800382e593bf7e1b1b1a/detectnet-camera/detectnet-camera.cpp#L178
My question is: how should I go about filling the NvBuffer from this data pointer? I couldn't find a way to do this in the MMAPI reference docs. The imgCPU pointer holds I420 data coming from the GStreamer pipeline of an RTSP camera.
If anyone has experience or hints, please let me know.
Thanks!