I have a buffer that contains a greyscale image (the output of visionworks nvxSemiGlobalMatchingNode). Now I want to create an NvBuffer and put the data into it so that I can send it through NvVideoEncoder.
I think I need to create a YUV420 NvBuffer and copy the greyscale into the luminance plane. Can I use something from nvbuf_utils to do that?
Thanks!
It sort of worked. Everything is ‘greenscale’ instead of greyscale, but it gets the point across.
void *output_cpu, *output_cuda;
cudaAllocMapped(&output_cpu, &output_cuda, buffer_size);
cuda::GpuMat cv_out(ctx->m_height, ctx->m_width, CV_8UC1, output_cuda);
[...]
NvBufferCreateParams input_params;
memset(&input_params, 0, sizeof(NvBufferCreateParams));
input_params.payloadType = NvBufferPayload_SurfArray;
input_params.width = ctx->m_width;
input_params.height = ctx->m_height;
input_params.layout = NvBufferLayout_BlockLinear;
input_params.colorFormat = NvBufferColorFormat_YUV420;
input_params.nvbuf_tag = NvBufferTag_NONE;
int fd_out = -1;
if (NvBufferCreateEx(&fd_out, &input_params) == 0) {
    // Copy the greyscale image into the Y (luminance) plane
    Raw2NvBuffer(static_cast<unsigned char*>(output_cuda),
                 0, ctx->m_width, ctx->m_height, fd_out);
}
Is there a flag I can set to make the unset chrominance be black instead of green?
The chroma planes must be filled with 128 for black. (Y, U, V) = (0, 0, 0) comes out as (R, G, B) = (0, 135, 0).
I am doing something very similar, but having trouble using Raw2NvBuffer to fill the UV plane. I can only fill 1/2 of it for some reason.
Let me know if you get anywhere with it.
That’s helpful, thank you.
Based on the description here, an I420 frame is 12 bits per pixel, composed of “Y×8×n U×2×n V×2×n”.
If our image size (n) is 1280x720 pixels that would mean a buffer size of:
12 bpp * (1280 * 720) pixels = 11059200 bits = 1382400 bytes
We would fill the first 1280*720=921600 bytes with our greyscale image and then we would have
1382400 - 921600 = 460800 bytes of chrominance
So we would need to create a 460800 byte buffer full of 128 and write it to the space after our greyscale image.
Or, when we create the buffer for our greyscale image, make it large enough for the full YUV frame, cudaMemset the whole thing to 0x80, and copy it all in.
const size_t buffer_size = ctx->m_width*ctx->m_height;
// Add extra space on end to account for chrominance
void *output_cpu, *output_cuda;
cudaAllocMapped(&output_cpu, &output_cuda, (buffer_size*12)/8);
cudaMemset(output_cuda, 0x80, (buffer_size*12)/8);
I ended up creating an extra buffer full of 0x80 to draw from:
void *output_cpu, *output_cuda;
cudaAllocMapped(&output_cpu, &output_cuda, buffer_size);
cuda::GpuMat cv_out(ctx->m_height, ctx->m_width, CV_8UC1, output_cuda);
void *chroma_cpu, *chroma_cuda;
cudaAllocMapped(&chroma_cpu, &chroma_cuda, buffer_size);
cudaMemset(chroma_cuda, 0x80, buffer_size);
and then filling the YUV NvBuffer like this:
NvBufferCreateParams input_params;
memset(&input_params, 0, sizeof(NvBufferCreateParams));
input_params.payloadType = NvBufferPayload_SurfArray;
input_params.width = ctx->m_width;
input_params.height = ctx->m_height;
input_params.layout = NvBufferLayout_BlockLinear;
input_params.colorFormat = NvBufferColorFormat_YUV420;
input_params.nvbuf_tag = NvBufferTag_NONE;
int fd_out = -1;
if (NvBufferCreateEx(&fd_out, &input_params) == 0) {
    // Fill luminance plane with greyscale image
    Raw2NvBuffer(static_cast<unsigned char*>(output_cuda),
                 0, ctx->m_width, ctx->m_height, fd_out);
    // Fill chrominance planes with 0x80
    Raw2NvBuffer(static_cast<unsigned char*>(chroma_cuda),
                 1, ctx->m_width, ctx->m_height/3, fd_out);
    Raw2NvBuffer(static_cast<unsigned char*>(chroma_cuda),
                 2, ctx->m_width, ctx->m_height/3, fd_out);
}
I’m not sure why the chrominance planes are 1/3 the height. The math doesn’t seem to work out, but experimentally that seems to work.