Any hardware acceleration possible in a GStreamer pipeline for a USB camera?

Hi,

I have a USB camera and an application that processes the image data and records it into a video.
The camera is See3CAM_CU20 from e-con systems.
I can’t specify the exact acquisition pipeline yet, since I haven’t gotten one working, but basically it should grab frames from the camera (frame format is UYVY), convert them to RGB, resize, drop half of the frames, and use appsink to get the frame data into the application.
The pipeline would be something like v4l2src ! (UYVY) ! videoconvert ! (RGB) ! appsink, but I’m not sure.
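Something I plan to try from a shell first (an untested sketch; the device node, resolution, and framerates are guesses for my setup, and fakesink stands in for appsink):

# untested sketch of the acquisition side; /dev/video0, the sizes and the
# rates are guesses for my camera, and fakesink stands in for appsink
gst-launch-1.0 -v v4l2src device=/dev/video0 \
  ! 'video/x-raw,format=UYVY,width=1920,height=1080,framerate=30/1' \
  ! videorate drop-only=true ! 'video/x-raw,framerate=15/1' \
  ! videoconvert ! videoscale \
  ! 'video/x-raw,format=RGB,width=960,height=540' \
  ! fakesink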

After the frame data (in RGB) has been post-processed, the application spits the image back out, converts it to a format that omxh264enc can accept (on the CPU for now) so it can use the HW video encoder, and then saves it to a file.
The pipeline is appsrc ! (RGB) ! videoconvert ! omxh264enc ! h264parse ! qtmux ! filesink.
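A shell-testable sketch of the encode side (untested; videotestsrc stands in for appsrc, and the caps are placeholders):

# untested encode-side sketch; videotestsrc stands in for appsrc, and
# videoconvert bridges RGB to the I420 that omxh264enc accepts.
# -e forwards EOS on Ctrl-C so qtmux finalizes the file cleanly
gst-launch-1.0 -e videotestsrc num-buffers=300 \
  ! 'video/x-raw,format=RGB,width=960,height=540,framerate=15/1' \
  ! videoconvert ! 'video/x-raw,format=I420' \
  ! omxh264enc ! h264parse ! qtmux ! filesink location=out.mp4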

The overall pipeline should be as optimized as possible: it should not use much CPU and should minimize acquisition latency.
I can’t use CUDA, as the GPU is already fully utilized by other parts of the application.

It seems a CSI camera can use the ISP to accelerate some parts of acquisition, e.g. pixel format conversion.
But I’m not sure about USB cameras.

Is it possible to use any other HW accelerators in my case?

hello odtt,

may I know your use case and also your expected results.
there’s already a HW engine involved for format conversion; you may also refer to the Xavier TRM, chapter 7.3, for the VIC.
thanks

Hello Jerry

I use this to get video data from the camera, process it to add some content to the video, and then dump it.
What I expect is to use HW acceleration throughout the whole GStreamer pipeline.
Is the HW engine involved in “videoconvert”, not nvvidconv?
nvvidconv can’t be used to convert between RGB and UYVY or I420.

hello odtt,

to clarify,
nvvidconv is hardware accelerated; videoconvert is software based.
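for example, a UYVY to NV12 conversion that runs on the VIC might look like this rough sketch (the device node and resolution are placeholders; note that nvvidconv needs NVMM memory on at least one pad):

# rough sketch: UYVY from the camera converted to NV12 on the VIC.
# /dev/video0 and the caps are placeholders; nvvidconv expects
# NVMM buffers on at least one side
gst-launch-1.0 v4l2src device=/dev/video0 \
  ! 'video/x-raw,format=UYVY,width=1920,height=1080' \
  ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' \
  ! fakesink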

FYI,
you may check the nvvideoconvert plugin, which is implemented in the DeepStream SDK to unify dGPU and Jetson platforms.
please refer to the Gst-nvvideoconvert documentation for more details.
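a rough usage sketch, assuming DeepStream is installed and that UYVY in / RGBA out are supported on your release (the device node and caps are placeholders):

# rough sketch using the DeepStream nvvideoconvert plugin;
# requires DeepStream installed; device node and caps are placeholders
gst-launch-1.0 v4l2src device=/dev/video0 \
  ! 'video/x-raw,format=UYVY,width=1920,height=1080' \
  ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=RGBA' \
  ! fakesink

thanks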

Hello Jerry

So that’s what I’m asking: whether there’s any possibility of using a HW engine in the Xavier to do the format conversion in my scenario.
RGB isn’t supported in nvvidconv, which is insane.

Hi,
It is a hardware limitation, and a software converter is needed for RGB/BGR conversion. Please refer to a similar discussion:
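Alternatively, if your application can be adapted to 4-byte RGBA instead of 24-bit RGB, the conversion can stay on the VIC. A rough, untested sketch (the device node and caps are placeholders; the second nvvidconv copies the frames from NVMM back to system memory so an appsink can read them, with fakesink standing in here):

# rough sketch, assuming the app can accept RGBA instead of 24-bit RGB.
# the first nvvidconv converts UYVY -> RGBA into NVMM on the VIC;
# the second copies NVMM back to system memory for appsink (fakesink here)
gst-launch-1.0 v4l2src device=/dev/video0 \
  ! 'video/x-raw,format=UYVY,width=1920,height=1080' \
  ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' \
  ! nvvidconv ! 'video/x-raw,format=RGBA' \
  ! fakesink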

Hi odtt,
From your previous replies, I gather that you want to get YUV frames from the See3CAM_CU20, convert them to RGB, process them using an application, and then encode the result and dump it to a video file.
Can you confirm whether the purpose of the application you are referring to is only to encode and save, or does it have other complex processing as well? And can it be modified to accept YUV instead of RGB?
Also, one suggestion is to try using the Tegra Multimedia API for color conversion. You can find related sample applications at this link: Jetson Linux API Reference: Main Page | NVIDIA Docs

It does have other processing as well. I know you might recommend dumping the frames right away through the GStreamer pipeline, but that’s not an option.
I have to use RGB; I can’t choose YUV.
I’m not sure about the Multimedia API… doesn’t it also have limitations on supported formats? You mean video_convert, right?

Hi odtt,
Thanks for the clarification on your implementation. Yes, you can make use of video_convert in the Tegra Multimedia API. According to the help message from the sample application, the following formats are supported:
Supported formats:
YUV420M
YVU420M
NV12M
YUV444M
YUV422M
YUYV
YVYU
UYVY
VYUY
ABGR32
XRGB32
GREY

I think you can make use of one of the RGB formats listed here for your application’s use case. Please look into it and let us know if you need any clarification.
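As a rough illustration, a file-to-file run of the 07_video_convert sample might look like the following (an untested sketch; the file names and sizes are placeholders, and the exact argument order should be checked against the sample’s help output):

# untested sketch; run the sample without arguments to see the exact usage.
# converts a raw UYVY frame dump to 4-byte ABGR32 via the hardware path
cd tegra_multimedia_api/samples/07_video_convert
./video_convert input.uyvy 1920 1080 UYVY output.rgba 1920 1080 ABGR32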