USB Camera MJPEG to H.264

Hi team,

What I’m trying to do is read from a USB camera and send the video stream to Amazon Kinesis Video Streams. I used the "v4l2-ctl" utility to list the formats and capabilities my camera supports. From the output I can see that the USB camera supports compressed "MJPG" and raw "YUYV" (4:2:2) at different frame rates and resolutions. I want to read the "MJPG" format, since the camera offers a higher FPS at 1920x1080 resolution.
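For reference, the command I used to list the camera's formats looks like this (the device node is an assumption; adjust it to match your setup):

```shell
# List every pixel format, resolution, and frame rate the camera exposes.
# /dev/video0 is assumed here; use the node your camera enumerates as.
v4l2-ctl --device=/dev/video0 --list-formats-ext
```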

Looking at the Amazon Kinesis Video Streams Producer SDK GStreamer plugin documentation, I used the following example GStreamer pipeline to send the video stream from the USB camera:

gst-launch-1.0 v4l2src do-timestamp=TRUE device=/dev/video0 ! videoconvert ! video/x-raw,format=I420,width=640,height=480,framerate=30/1 ! x264enc bframes=0 key-int-max=45 bitrate=500 ! video/x-h264,stream-format=avc,alignment=au,profile=baseline ! kvssink stream-name="YourStreamName" storage-size=512 access-key="YourAccessKey" secret-key="YourSecretKey" aws-region="YourAWSRegion"

Looking at the above pipeline, I can see that it takes the stream from the camera, converts the I420 format to H.264, and then sends the stream to Amazon Kinesis Video Streams. This command works fine without any issues. The problem is that I want to read at a higher FPS, which is only available in the "MJPG" format.

I have seen one topic on the NVIDIA forum about converting MJPEG to H.265. The topic can be found here:

Following the accepted answer, I was able to run two commands that read the camera's MJPG stream: one to view it live and one to write it to a file. The two commands I tested are:

gst-launch-1.0 -v v4l2src device=/dev/video0 ! image/jpeg, width=640, height=480, framerate=30/1, format=MJPG ! jpegdec ! xvimagesink

gst-launch-1.0 -v v4l2src device=/dev/video0 ! image/jpeg, width=640, height=480, framerate=30/1, format=MJPG ! jpegdec ! nvvidconv ! 'video/x-raw(memory:NVMM), format=NV12' ! omxh265enc ! matroskamux ! filesink location=test_MJPG_H265enc.mkv

Next I wanted to merge this into the AWS GStreamer pipeline, so that instead of reading the "YUYV" format and converting to H.264, I read the "MJPG" format, convert it to H.264, and send it to AWS Kinesis Video Streams.

But I have been getting some errors. I believe I might have the pipeline wrong. Here is my command, with the error thrown after it:

gst-launch-1.0 -v v4l2src do-timestamp=TRUE device=/dev/video0 ! videoconvert ! image/jpeg,format=MJPG, width=640, height=480, framerate=30/1 ! x264enc bframes=0 key-int-max=45 bitrate=500 ! video/x-h264,stream-format=avc,alignment=au,profile=baseline ! kvssink stream-name="YourStreamName" storage-size=512 access-key="YourAccessKey" secret-key="YourSecretKey" aws-region="YourAWSRegion"

WARNING: erroneous pipeline: could not link videoconvert0 to x264enc0, neither element can handle caps image/jpeg, format=(string)MJPG, width=(int)640, height=(int)480, framerate=(fraction)30/1

Below are my board details:

Jetson Nano 2GB Developer Kit, SD Card Image version 4.5, running Ubuntu 18.04

Any help would be appreciated,

Thank you for your time

Use gst-inspect-1.0 to learn more about each plugin. The “Pad Templates” section will help you understand the requirements for linking elements.

You are trying to connect videoconvert to x264enc. If you check videoconvert (with $ gst-inspect-1.0 videoconvert) you would see that it accepts only video/x-raw (SINK template) and it returns video/x-raw (SRC template). You are trying to pass image/jpeg and therefore it’s not working.

Check jpegdec. It accepts only image/jpeg (SINK template) and returns video/x-raw (SRC template).

Try to figure out the rest.
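For example (assuming GStreamer tools are installed), you can dump each element and check its pad templates directly:

```shell
# Show which caps each element accepts (SINK template) and produces (SRC template).
gst-inspect-1.0 videoconvert   # SINK: video/x-raw, SRC: video/x-raw
gst-inspect-1.0 jpegdec        # SINK: image/jpeg,  SRC: video/x-raw
gst-inspect-1.0 x264enc        # SINK: video/x-raw, SRC: video/x-h264
```

Two adjacent elements can only link when the SRC caps of the first intersect with the SINK caps of the second.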


Thanks a lot for your response,

Yes, I figured it out yesterday after looking at more examples and spending more time on it. I could not reply sooner because my post was awaiting approval by an NVIDIA admin.

Here is what I have, in case someone else needs it. I'm not sure if it's the most efficient pipeline, but it's working great for me:

gst-launch-1.0 -v v4l2src device=/dev/video1 ! image/jpeg,format=MJPG,width=1920,height=1080,framerate=30/1 ! jpegdec ! videoconvert ! x264enc bframes=0 key-int-max=45 bitrate=1200 ! video/x-h264,stream-format=avc,alignment=au,profile=baseline ! kvssink stream-name="YourStreamName" storage-size=512 access-key="YourAccessKey" secret-key="YourSecretKey" aws-region="YourAWSRegion"

Maybe this would be more optimal (using HW decoder and encoder would save CPU load):

gst-launch-1.0 -v v4l2src device=/dev/video1 ! image/jpeg,format=MJPG,width=1920,height=1080,framerate=30/1 ! nvv4l2decoder mjpeg=1 ! nvv4l2h264enc idrinterval=45 insert-vui=1 bitrate=1200000 profile=0 ! h264parse ! video/x-h264,stream-format=avc,alignment=au ! kvssink stream-name="YourStreamName" storage-size=512 access-key="YourAccessKey" secret-key="YourSecretKey" aws-region="YourAWSRegion"

Wow thank you for your response.

This command definitely seems to work better. The video quality is higher in the AWS Kinesis Video media playback player, and the CPU load seems much lower: with Chrome and some other applications open, it does not slow down the way it did with my earlier command. The only issue I'm having is that it only works for one of my cameras. I used the "v4l2-ctl" utility to dump the specs for both cameras, and the supported FPS and resolutions are pretty much the same. The error I'm getting looks camera-related. I have put the specs of the camera that does not work at the bottom of this message; the error is below as well.

Please let me know if you see anything. Once again I appreciate your time

Error:

ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data stream error.
Additional debug info:
gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason not-negotiated (-4)
Execution ended after 0:00:00.988746919
Setting pipeline to PAUSED …
Setting pipeline to READY …
Setting pipeline to NULL …
Freeing pipeline …
INFO - Freeing Kinesis Video Stream camera_101

Camera output specs:

Driver Info (not using libv4l2):
Driver name : uvcvideo
Card type : USB 2.0 Camera
Bus info : usb-70090000.xusb-3.1
Driver version: 4.9.201
Capabilities : 0x84200001
Video Capture
Streaming
Extended Pix Format
Device Capabilities
Device Caps : 0x04200001
Video Capture
Streaming
Extended Pix Format
Priority: 2
Video input : 0 (Camera 1: ok)
Format Video Capture:
Width/Height : 1920/1080
Pixel Format : 'MJPG'
Field : None
Bytes per Line : 0
Size Image : 4147789
Colorspace : Default
Transfer Function : Default (maps to Rec. 709)
YCbCr/HSV Encoding: Default (maps to ITU-R 601)
Quantization : Default (maps to Full Range)
Flags :
Crop Capability Video Capture:
Bounds : Left 0, Top 0, Width 1920, Height 1080
Default : Left 0, Top 0, Width 1920, Height 1080
Pixel Aspect: 1/1
Selection: crop_default, Left 0, Top 0, Width 1920, Height 1080
Selection: crop_bounds, Left 0, Top 0, Width 1920, Height 1080
Streaming Parameters Video Capture:
Capabilities : timeperframe
Frames per second: 25.000 (25/1)
Read buffers : 0
brightness 0x00980900 (int) : min=-64 max=64 step=1 default=0 value=0
contrast 0x00980901 (int) : min=0 max=64 step=1 default=32 value=32
saturation 0x00980902 (int) : min=0 max=128 step=1 default=64 value=64
hue 0x00980903 (int) : min=-40 max=40 step=1 default=0 value=0
white_balance_temperature_auto 0x0098090c (bool) : default=1 value=1
gamma 0x00980910 (int) : min=72 max=500 step=1 default=100 value=100
gain 0x00980913 (int) : min=0 max=100 step=1 default=0 value=0
power_line_frequency 0x00980918 (menu) : min=0 max=2 default=1 value=1
white_balance_temperature 0x0098091a (int) : min=2800 max=6500 step=1 default=4600 value=4600 flags=inactive
sharpness 0x0098091b (int) : min=0 max=6 step=1 default=3 value=3
backlight_compensation 0x0098091c (int) : min=0 max=2 step=1 default=1 value=1
exposure_auto 0x009a0901 (menu) : min=0 max=3 default=3 value=3
exposure_absolute 0x009a0902 (int) : min=1 max=5000 step=1 default=157 value=157 flags=inactive
exposure_auto_priority 0x009a0903 (bool) : default=0 value=1

ioctl: VIDIOC_ENUM_FMT
Index : 0
Type : Video Capture
Pixel Format: 'MJPG' (compressed)
Name : Motion-JPEG
Size: Discrete 1920x1080
Interval: Discrete 0.033s (30.000 fps)
Interval: Discrete 0.040s (25.000 fps)
Interval: Discrete 0.050s (20.000 fps)
Interval: Discrete 0.067s (15.000 fps)
Interval: Discrete 0.100s (10.000 fps)
Interval: Discrete 0.200s (5.000 fps)
Size: Discrete 1280x720
Interval: Discrete 0.033s (30.000 fps)
Interval: Discrete 0.050s (20.000 fps)
Interval: Discrete 0.067s (15.000 fps)
Interval: Discrete 0.100s (10.000 fps)
Interval: Discrete 0.200s (5.000 fps)
Size: Discrete 800x600
Interval: Discrete 0.033s (30.000 fps)
Size: Discrete 640x480
Interval: Discrete 0.033s (30.000 fps)
Interval: Discrete 0.050s (20.000 fps)
Interval: Discrete 0.067s (15.000 fps)
Interval: Discrete 0.100s (10.000 fps)
Interval: Discrete 0.200s (5.000 fps)
Size: Discrete 320x240
Interval: Discrete 0.033s (30.000 fps)

Index       : 1
Type        : Video Capture
Pixel Format: 'YUYV'
Name        : YUYV 4:2:2
	Size: Discrete 1920x1080
		Interval: Discrete 0.200s (5.000 fps)
	Size: Discrete 1280x720
		Interval: Discrete 0.100s (10.000 fps)
	Size: Discrete 800x600
		Interval: Discrete 0.050s (20.000 fps)
	Size: Discrete 640x480
		Interval: Discrete 0.033s (30.000 fps)
	Size: Discrete 320x240
		Interval: Discrete 0.033s (30.000 fps)

It is unclear to me which output comes from which camera… You may post (using block code formatting) the output for both of your cameras (assuming /dev/video1 and /dev/video2 nodes here):

v4l2-ctl -d1 --list-formats-ext
v4l2-ctl -d2 --list-formats-ext

for better advice.

Things that may help:

  • Nano has a single USB3 host controller for all 4 connectors.
  • I remember issues with the UVC driver whereby, for an MJPG camera, it was trying to reserve the full USB bandwidth, preventing a second camera from working. I'd expect that to be fixed in recent kernel versions, but I haven't tried it.
  • So you may first experiment to find the max resolution you can run with one or two cameras:
gst-launch-1.0  v4l2src device=/dev/video1 ! image/jpeg,format=MJPG,width=320,height=240,framerate=30/1 ! nvv4l2decoder mjpeg=1 ! nvvidconv ! autovideosink
gst-launch-1.0  v4l2src device=/dev/video2 ! video/x-raw,format=YUY2,width=320,height=240,framerate=30/1 ! nvvidconv ! autovideosink
gst-launch-1.0  v4l2src device=/dev/video1 ! image/jpeg,format=MJPG,width=320,height=240,framerate=30/1 ! nvv4l2decoder mjpeg=1 ! nvvidconv ! autovideosink   v4l2src device=/dev/video2 ! video/x-raw,format=YUY2,width=320,height=240,framerate=30/1 ! nvvidconv ! autovideosink

Yes, your suggestion is spot on: my device has only one USB3 port, which was only allowing one camera to work. When I swapped them, the other camera worked the same way.

Since then I have purchased a much more powerful USB3 camera, the Allied Vision Alvium 1800 U-240c. Now I can read at the same resolution but at a much higher FPS. To use the max FPS I have to read the stream in the RGGB Bayer format and convert it to H.264 before it goes to Kinesis Video. So I have one extra element in the pipeline: the bayer2rgb converter, placed before the H.264 encoder.

Compared with the "MJPG" pipeline I posted above, the CPU load runs much higher when I use the bayer2rgb converter. I was wondering if you can suggest a more efficient way to read the Bayer format from the camera while keeping the much higher FPS. Once again, thanks a lot for your time; you have helped me tremendously.

Below is the final pipeline I have used:

gst-launch-1.0 -e --gst-plugin-path=/usr/local/lib/ aravissrc camera-name="Allied Vision-1AB22C027323-03FVN" ! video/x-bayer,format=rggb,width=1920,height=1080,framerate=60/1 ! bayer2rgb ! videoconvert ! x264enc bframes=0 key-int-max=45 bitrate=3000 ! video/x-h264,stream-format=avc,alignment=au,profile=baseline ! kvssink stream-name="streamname" storage-size=512 access-key="accesskey" secret-key="secret" aws-region="region"

Debayering with CPU would be slow and CPU-load expensive, especially on Nano.
Note that you may save some CPU load using HW encoder.
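As an untested sketch of that idea: debayering stays on the CPU (bayer2rgb), but the x264enc software encoder is swapped for the Jetson HW encoder nvv4l2h264enc. The nvvidconv step and NV12 caps are assumptions about what the HW encoder needs as input:

```shell
# Sketch only, not verified on hardware: same aravissrc source pipeline,
# but encoding is moved to the HW encoder (nvv4l2h264enc) to reduce CPU load.
# Debayering (bayer2rgb) still runs on the CPU.
gst-launch-1.0 -e --gst-plugin-path=/usr/local/lib/ aravissrc camera-name="Allied Vision-1AB22C027323-03FVN" ! \
  video/x-bayer,format=rggb,width=1920,height=1080,framerate=60/1 ! bayer2rgb ! videoconvert ! \
  nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! \
  nvv4l2h264enc idrinterval=45 insert-vui=1 bitrate=3000000 ! h264parse ! \
  video/x-h264,stream-format=avc,alignment=au ! \
  kvssink stream-name="streamname" storage-size=512 access-key="accesskey" secret-key="secret" aws-region="region"
```

Note that nvv4l2h264enc takes its bitrate in bits per second, so 3000000 here roughly matches the 3000 kbps given to x264enc above.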
