Hello,
I’m trying to convert a video stream from a camera into gray-scale. My basic pipeline, which doesn’t do any conversion but works with satisfying latency, looks like this:
gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM),width=1280, height=720, framerate=60/1, format=NV12' ! nvvidconv ! xvimagesink
Starting from this pipeline, I tried to modify it so that I receive the stream in gray-scale. I really wanted to avoid the videoconvert element, since it runs on the CPU, but the only pipeline I got working as I want is the one below, and it shows a visible lag between frames:
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=1280, height=720, framerate=60/1, format=NV12' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=I420' ! nvvidconv ! 'video/x-raw, format=GRAY8' ! videoconvert ! xvimagesink
What I’m hoping for is something that uses only the nvvidconv element:
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=1280, height=720, framerate=60/1, format=NV12' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=GRAY8' ! ....some sink
but I’m not sure whether that can work standalone with xvimagesink, or later with appsink, because I primarily plan to embed this pipeline in code so I can extract the gray-scale frames into a cv::Mat object.
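For context, this is roughly how I plan to embed it on the application side. It is only a rough sketch that assumes OpenCV is built with GStreamer support (cv::CAP_GSTREAMER) and that appsink can negotiate GRAY8 directly from nvvidconv, which is exactly the part I’m unsure about; the appsink properties are just my guess at sensible defaults:

#include <opencv2/opencv.hpp>
#include <string>

int main()
{
    // Same caps as in the pipelines above, but with appsink instead of xvimagesink
    // so the frames end up in the application.
    std::string pipeline =
        "nvarguscamerasrc sensor-id=0 ! "
        "video/x-raw(memory:NVMM),width=1280,height=720,framerate=60/1,format=NV12 ! "
        "nvvidconv ! video/x-raw,format=GRAY8 ! "
        "appsink drop=true max-buffers=1";

    cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
    if (!cap.isOpened())
        return -1;

    cv::Mat gray;  // should arrive as a single-channel 8-bit frame if GRAY8 negotiates
    while (cap.read(gray))
    {
        // ... process the gray-scale frame here ...
        cv::imshow("gray", gray);
        if (cv::waitKey(1) == 27)  // Esc quits
            break;
    }
    return 0;
}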
I was also experimenting with nveglglessink, but it doesn’t support GRAY8…
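For reference, the formats the sink accepts can be checked with:
gst-inspect-1.0 nveglglessink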
UPDATE:
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=1280, height=720, framerate=60/1, format=NV12' ! nvvidconv compute-hw=GPU nvbuf-memory-type=nvbuf-mem-cuda-device ! 'video/x-raw, format=GRAY8' ! videoconvert ! xvimagesink
This pipeline runs faster with the GPU-driven conversion, but I still cannot eliminate videoconvert.
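What I’d ultimately like to pass to cv::VideoCapture is something along these lines, keeping the GPU conversion but handing GRAY8 straight to appsink (untested, and I don’t know whether the caps will negotiate without videoconvert):
nvarguscamerasrc ! video/x-raw(memory:NVMM),width=1280,height=720,framerate=60/1,format=NV12 ! nvvidconv compute-hw=GPU nvbuf-memory-type=nvbuf-mem-cuda-device ! video/x-raw,format=GRAY8 ! appsink drop=true max-buffers=1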