We have a custom board with an IMX335 color sensor and a Jetson Nano.
Could you please let us know if there is any option available in the ISP to convert RG10 (the IMX335 sensor format) to monochrome/grayscale to enable a night mode?
hello krishnaprasad.k,
the pixel output format is YUV after it is processed by the ISP.
you may perform post-processing to keep only the Y component as grayscale.
please also see the MMAPI samples, for example 07_video_convert, for color format conversion.
Hi @JerryChang
Thanks for the info.
Could you please clarify whether there is any Jetson Multimedia API sample that stores the YUV data from the ISP frame by frame into a buffer, so that we can process it to grayscale and stream over RTSP?
hello krishnaprasad.k,
you may refer to Jetson AGX Orin FAQ.
the valid approach is to launch the server with videoconvert added to the pipeline to process it as grayscale.
Can you confirm the following sample pipeline is enough to stream grayscale (after processing) over RTSP on Jetson Nano?
./test-launch "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=2616, height=1964, framerate=30/1 ! nvvidconv ! video/x-raw(memory:NVMM),width=(int)2616,height=(int)1944, format=(string)GRAY8 …>
hello krishnaprasad.k,
that should convert it to grayscale.
however, you should also convert it to I420 if you want to render it to a display.
you may try it locally to verify the camera streaming pipeline.
Hi @JerryChang
however, you should also convert it to I420 if you want to render it to a display.
Can you confirm we have to convert back to I420 from grayscale to stream over RTSP?
Note: we also require H264 encoding for the RTSP streaming.
you may try it locally to verify the camera streaming pipeline.
Could you please provide some reference pipelines for the conversion from NV12 to grayscale, so that we can try it out locally?
hello krishnaprasad.k,
this is what I've tested locally; I confirmed it renders grayscale preview frames to the display monitor.
for example,
$ gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),framerate=30/1,format=NV12' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=GRAY8' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=I420' ! nvvidconv ! xvimagesink
Hi @JerryChang
Thanks for the reply
$ gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),framerate=30/1,format=NV12' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=GRAY8' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=I420' ! nvvidconv ! xvimagesink
Can you confirm that the same pipeline can be used with additional plugins for H264 encoding and RTSP streaming on Jetson Nano? Also, is this the best method for this requirement?
Can you confirm whether nvarguscamerasrc supports the GRAY8 format? (Our camera sensor only supports the RAW10 and RAW12 formats.)
hello krishnaprasad.k,
I'm using a RAW10 sensor for testing.
Hi @JerryChang
Thanks for the info
We have tested the following GStreamer pipeline to stream grayscale over RTSP after multiple conversions (NV12 to grayscale, back to NV12, and H264 encoding):
./test-launch "nvarguscamerasrc ! video/x-raw(memory:NVMM),width=(int)2616,height=(int)1964,framerate=30/1,format=NV12 ! nvvidconv ! video/x-raw(memory:NVMM),width=(int)2616,height=(int)1964,format=GRAY8 ! nvvidconv ! video/x-raw(memory:NVMM),width=(int)2616,height=(int)1964,format=I420 ! nvvidconv ! nvv4l2h264enc control-rate=1 bitrate=8000000 ! rtph264pay name=pay0 pt=96 config-interval=1"
We are getting the stream locally, but the output is not the proper grayscale we expected.
Can you confirm whether the above method is the proper way to stream grayscale over RTSP from NV12 for a Bayer sensor (IMX335) on Jetson Nano?
Or do we need to do the post-processing from NV12 to grayscale using the Jetson Multimedia API samples?
hello krishnaprasad.k,
may I have more details about the failure?
Hi @JerryChang
We dumped a frame by converting to GRAY8 using the following pipeline; it works fine when opened with the vooya player:
gst-launch-1.0 nvarguscamerasrc num-buffers=1 ! 'video/x-raw(memory:NVMM),width=2616,height=1964' ! nvvidconv ! video/x-raw,format=GRAY8 ! filesink location=test.raw
But when we convert back to NV12 (we need NV12 format for encoding) and dump the frame using the following pipeline, we are not able to get the frame properly:
gst-launch-1.0 nvarguscamerasrc num-buffers=1 ! 'video/x-raw(memory:NVMM),width=2616,height=1964' ! nvvidconv ! video/x-raw,format=GRAY8 ! nvvidconv ! 'video/x-raw(memory:NVMM),width=(int)2616,height=(int)1944,format=I420' ! filesink location=test.nv12
The kernel logs are attached below:
log.txt (9.0 KB)
Could you please look into these details?
Hi,
We don't support GRAY8 as ISP output or as input to the encoder. A possible solution is to customize nvarguscamerasrc to run like:
- Get frame data in NV12 block linear
- Convert to NV12 pitch linear
- Clean UV plane
The source code of gstreamer plugins is in the package:
https://developer.nvidia.com/downloads/remack-sdksjetpack-463r32releasev73sourcest210publicsourcestbz2
For cleaning the UV plane, please refer to gst_nvvconv_do_clearchroma() in the nvvidconv plugin.
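For reference, a minimal sketch of steps 1 and 2 above (untested; it assumes the JetPack 4.x nvbuf_utils API available on Jetson Nano, and to_pitch_linear() is a hypothetical helper name, not part of nvarguscamerasrc):

// Hypothetical sketch, assuming JetPack 4.x nvbuf_utils on Jetson Nano.
#include "nvbuf_utils.h"
#include <cstring>

// Allocate a pitch-linear NV12 destination and copy the camera's
// block-linear NV12 frame into it via NvBufferTransform().
static int to_pitch_linear(int src_fd, int width, int height, int *dst_fd)
{
    if (NvBufferCreate(dst_fd, width, height,
                       NvBufferLayout_Pitch, NvBufferColorFormat_NV12) != 0)
        return -1;

    NvBufferTransformParams tp;
    memset(&tp, 0, sizeof(tp));
    tp.transform_flag = NVBUFFER_TRANSFORM_FILTER;
    tp.transform_filter = NvBufferTransform_Filter_Smart;
    return NvBufferTransform(src_fd, *dst_fd, &tp);
}

Step 3 (cleaning the UV plane) is sketched near the end of this thread.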
Hi @DaneLLL,
Thanks for the info
We have been using the Argus camera recording sample (samples/10_camera_recording/main.cpp) for the conversion from NV12 to monochrome.
Could you please explain how to dump the default NV12 buffer data to a file without encoding, as a first step towards the conversion from NV12 to monochrome?
Hi,
Please allocate an NvBuffer in NV12 pitch linear and then call NvBufferTransform() to convert the frame data to the buffer. Then call dump_dmabuf() to dump the Y plane.
@DaneLLL Thanks for the info
Could you please let me know how we can allocate an NvBuffer in NV12 pitch linear format? Does any API need to be called, or does the NvBuffer come in NV12 pitch linear by default in the camera recording sample of the Jetson Multimedia APIs?
Could you please also confirm whether the dump_dmabuf() and NvBuffer2Raw() APIs are both used to dump the frame, or whether there are any major differences between them?
Hi,
In 10_camera_recording, please modify this to allocate the buffer in pitch linear:
nativeBuffers[i] = DmaBuffer::create(STREAM_SIZE, NvBufferColorFormat_NV12,
                                     DO_CPU_PROCESS ? NvBufferLayout_Pitch : NvBufferLayout_BlockLinear);
Then call dump_dmabuf() to save to a file, or call NvBuffer2Raw() to save to a memory buffer.
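For illustration, a hedged sketch of the NvBuffer2Raw() route (assuming the JetPack 4.x nvbuf_utils signatures; copy_y_plane() is a made-up helper name). The difference in short: dump_dmabuf() maps the buffer and writes a plane row by row to a file, while NvBuffer2Raw() de-pitches a plane into a tightly packed CPU buffer:

// Hypothetical sketch, assuming JetPack 4.x nvbuf_utils.
#include "nvbuf_utils.h"
#include <cstdlib>

// Copy only plane 0 (the full-resolution luma, i.e. the grayscale
// image of an NV12 buffer) into packed CPU memory.
static unsigned char *copy_y_plane(int dmabuf_fd)
{
    NvBufferParams parm;
    if (NvBufferGetParams(dmabuf_fd, &parm) != 0)
        return NULL;

    unsigned char *y = (unsigned char *)malloc(parm.width[0] * parm.height[0]);
    if (y && NvBuffer2Raw(dmabuf_fd, 0, parm.width[0], parm.height[0], y) != 0)
    {
        free(y);
        y = NULL;
    }
    return y;
}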
@DaneLLL thanks for the info
In 10_camera_recording, please modify this to allocate the buffer in pitch linear:
As of now, we have set **DO_CPU_PROCESS** to true and called the following dump_dmabuf() function inside that path. We were able to dump the grayscale to a file properly.
dump_dmabuf(dmabuf_fd,0,output_File); //Only passing Y plane of NV12
static int
dump_dmabuf(int dmabuf_fd,
            unsigned int plane,
            std::ofstream * stream)
{
    if (dmabuf_fd <= 0)
        return -1;

    int ret = -1;
    NvBufferParams parm;
    ret = NvBufferGetParams(dmabuf_fd, &parm);
    if (ret != 0)
        return -1;

    printf("\nparm.pixel_format=%d,pitch=%d,width=%d,height=%d\n",
           parm.pixel_format, parm.pitch[plane], parm.width[plane], parm.height[plane]);

    void *psrc_data;
    ret = NvBufferMemMap(dmabuf_fd, plane, NvBufferMem_Read_Write, &psrc_data);
    if (ret == 0)
    {
        unsigned int i = 0;
        NvBufferMemSyncForCpu(dmabuf_fd, plane, &psrc_data);
        for (i = 0; i < parm.height[plane]; ++i)
        {
            if (parm.pixel_format == NvBufferColorFormat_NV12 &&
                plane == 1)
            {
                // UV plane: two bytes per chroma sample pair per row
                stream->write((char *)psrc_data + i * parm.pitch[plane],
                              parm.width[plane] * 2);
                if (!stream->good())
                    return -1;
            }
            else
            {
                // Y plane: one byte per pixel per row
                stream->write((char *)psrc_data + i * parm.pitch[plane],
                              parm.width[plane]);
                if (!stream->good())
                    return -1;
            }
        }
        NvBufferMemUnMap(dmabuf_fd, plane, &psrc_data);
    }
    return 0;
}
Could you please confirm whether we need to call NvBufferTransform(), and clarify what NvBufferTransform() actually does for our use case of converting from NV12 to GRAY8?
Hi,
Do you mean you would like to encode only the grayscale data (Y plane) into the H264 stream? The input format to the encoder is NV12 or YUV420, so a possible solution is to clean the U/V plane(s) before feeding the buffer to the encoder.
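To make that concrete, here is a hedged sketch of cleaning the chroma planes of a pitch-linear buffer before queueing it to the encoder (assuming JetPack 4.x nvbuf_utils; clean_chroma() is a made-up name, loosely modeled on the gst_nvvconv_do_clearchroma() mentioned earlier):

// Hypothetical sketch, assuming JetPack 4.x nvbuf_utils.
#include "nvbuf_utils.h"
#include <cstring>

// Overwrite every chroma plane with the neutral value 0x80 so the
// encoded frame comes out grayscale. Handles NV12 (one interleaved
// UV plane) and YUV420 (separate U and V planes).
static int clean_chroma(int dmabuf_fd)
{
    NvBufferParams parm;
    if (NvBufferGetParams(dmabuf_fd, &parm) != 0)
        return -1;

    for (unsigned int plane = 1; plane < parm.num_planes; ++plane)
    {
        void *data = NULL;
        if (NvBufferMemMap(dmabuf_fd, plane, NvBufferMem_Read_Write, &data) != 0)
            return -1;
        NvBufferMemSyncForCpu(dmabuf_fd, plane, &data);

        // NV12 interleaves U and V: two bytes per chroma sample pair.
        unsigned int row_bytes = (parm.pixel_format == NvBufferColorFormat_NV12)
                                     ? parm.width[plane] * 2
                                     : parm.width[plane];
        for (unsigned int i = 0; i < parm.height[plane]; ++i)
            memset((char *)data + i * parm.pitch[plane], 0x80, row_bytes);

        NvBufferMemSyncForDevice(dmabuf_fd, plane, &data);
        NvBufferMemUnMap(dmabuf_fd, plane, &data);
    }
    return 0;
}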