I have a GMSL camera that outputs frames in YUYV format.
I can read it through v4l2src successfully, but the image is too large (1080p), so I need to scale it down to half the width and height.
I want to use some form of hardware acceleration (other than the GPU, which is already fully occupied with model inference) on the Xavier NX.
The gstreamer pipeline I’m using now is
v4l2src ! nvvidconv ! appsink
But I can’t find any GStreamer element that enables hardware-accelerated scaling.
Is there one?
Hi,
The hardware VIC engine handles scaling/conversion/cropping, and it is exposed through the nvvidconv plugin. Since you already have that plugin in your pipeline, hardware acceleration is being utilized.
I encounter an error when I try to downscale with nvvidconv.
Working pipeline:
gst-launch-1.0 v4l2src ! 'video/x-raw, width=1920, height=1080' ! nvvidconv ! 'video/x-raw, width=1920, height=1080' ! videoconvert ! xvimagesink -e
Setting pipeline to PAUSED …
Pipeline is live and does not need PREROLL …
Setting pipeline to PLAYING …
New clock: GstSystemClock
^Chandling interrupt.
Interrupt: Stopping pipeline …
EOS on shutdown enabled – Forcing EOS on the pipeline
Waiting for EOS…
Got EOS from element “pipeline0”.
EOS received - stopping pipeline…
Execution ended after 0:00:01.770043008
Setting pipeline to PAUSED …
Setting pipeline to READY …
Setting pipeline to NULL …
Freeing pipeline …
Not working:
gst-launch-1.0 v4l2src ! 'video/x-raw, width=1920, height=1080' ! nvvidconv ! 'video/x-raw, width=960, height=540' ! xvimagesink -e
Setting pipeline to PAUSED …
Pipeline is live and does not need PREROLL …
Setting pipeline to PLAYING …
New clock: GstSystemClock
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data stream error.
Additional debug info:
gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason error (-5)
EOS on shutdown enabled – waiting for EOS after Error
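One possible cause (an assumption on my part, not confirmed in this thread) is that nvvidconv wants NVMM buffers on at least one of its pads when it actually has to scale. A sketch of a pipeline that scales into NVMM memory on the VIC and then copies back to system memory for xvimagesink:

```shell
# Hypothetical fix: scale into NVMM buffers on the VIC, then use a
# second nvvidconv to copy the scaled frames back to system memory
# before videoconvert/xvimagesink. Untested sketch; caps may need
# adjusting for your camera's output format.
gst-launch-1.0 v4l2src ! 'video/x-raw, width=1920, height=1080' ! \
  nvvidconv ! 'video/x-raw(memory:NVMM), width=960, height=540' ! \
  nvvidconv ! 'video/x-raw' ! videoconvert ! xvimagesink -e
```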
You may customize nvv4l2camerasrc so that captured frames land in video/x-raw(memory:NVMM) buffers directly. Please take a look at Macrosilicon USB - #5 by DaneLLL.
The pipeline you suggested seems to work, but how can I confirm it uses hardware acceleration?
I see very high CPU usage (two gst processes consuming 50% CPU each) and cannot see any hardware acceleration active in jtop.
None of the HW engines (NVENC, NVDEC, NVJPG) is active while running this pipeline.
I checked several times, and now it seems to consume much less CPU.
But I still want to know whether hardware acceleration is properly used.
Is there any way to confirm?
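One way to check on Xavier-family devices (the exact debugfs paths are an assumption here and vary by L4T release) is to read the VIC clock entries while the pipeline is running; an enabled clock with a non-zero rate suggests the VIC engine is in use:

```shell
# Hypothetical check: VIC clock entries under the bpmp debugfs
# (Xavier-family; path and file names vary by L4T release).
# Run while the pipeline is active and compare against idle.
sudo cat /sys/kernel/debug/bpmp/debug/clk/vic/state
sudo cat /sys/kernel/debug/bpmp/debug/clk/vic/rate
```

Note that jtop/tegrastats report NVENC/NVDEC/NVJPG but, on some releases, not the VIC, so those tools showing the engines idle does not by itself imply the VIC is unused.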
Is nvv4l2camerasrc not available on 4.4?
I need the frames to be delivered as RGB at the very end.
I don’t think I can avoid a CPU buffer.
Even with nvv4l2camerasrc modified for YUYV, it’s not possible to handle RGB format, is it?
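nvvidconv can also do format conversion on the VIC. To my knowledge it outputs 4-byte RGBA rather than 3-byte RGB (that supported-format detail is an assumption, worth verifying with gst-inspect-1.0 nvvidconv), so one sketch, assuming your appsink consumer can accept RGBA or strip the alpha channel on the CPU, is:

```shell
# Sketch: convert + downscale to RGBA on the VIC in NVMM memory,
# then copy system-memory RGBA buffers out to appsink. RGBA (not
# RGB) output from nvvidconv is an assumption; check with
# gst-inspect-1.0 nvvidconv. Untested on real hardware.
gst-launch-1.0 v4l2src ! 'video/x-raw, width=1920, height=1080' ! \
  nvvidconv ! 'video/x-raw(memory:NVMM), format=RGBA, width=960, height=540' ! \
  nvvidconv ! 'video/x-raw, format=RGBA' ! appsink
```

This still ends in a CPU buffer at the appsink, but the colour conversion and scaling themselves would stay on the VIC rather than the CPU or GPU.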