Jetson goes curling (or, simultaneously viewing multiple IP-cams)

I have a project from a local curling club to view 4 simultaneous live feeds from in-house IP cams (either 720p or 1080p), scale them, and display them on a single HDMI output (with arbitrary sizes/positions on the display). I tried doing this with a Raspberry Pi, using multiple "wget" processes writing into FIFOs, then multiple nohup'd instances of omxplayer (with non-overlapping fixed aspect ratios). However, its GPU doesn't seem to give me the desired 30 fps, and scaling/zooming is not possible for that many streams.
Is there a recommended (GPU HW decode) approach for doing this on Jetson?
I am guessing that the horsepower required is pretty small, so I had something else on my wish list: adding DVR functionality (rewind, pause, FF, record). However, I am not sure that I would be able to use off-the-shelf SW for this (e.g. XBMC), since such players are dedicated to single feeds (unless I can re-route the output of the GPU consolidation as a single stream into the DVR SW). This is a nice-to-have.
I also had one concern: on the R-Pi the file system is on the SD card, which is not ideal for constant rewrites. What is the situation on Jetson: can the streams be buffered through RAM only, or will it also need to use the eMMC as a FIFO (and is the eMMC robust enough for that)?
Any constructive thoughts would be appreciated!

I think you can just use GStreamer to show the 4 streams in different windows. It seems that in your use case they could be completely independent applications. The command for each stream would be something like this:

gst-launch-0.10 -v playbin2 uri=<file://... or http://...>

For a more sophisticated approach, you could maybe use GStreamer's videomixer element to combine the four streams and then show the result. There is something about that here: http://stackoverflow.com/questions/1709574/combine-multiple-videos-into-one

For the DVR functionality you need more. For that you would need to save the streams to disk (they would take far too much space to be held in RAM) and then provide playback functionality on top of that. The latter of the two links above stores the combined stream to disk as well, so it would be possible to use e.g. XBMC separately to watch it with features like rewind/pause.
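One way to get the recording half of that without re-encoding is GStreamer's tee element: one branch decodes and displays the feed, the other writes the still-compressed MJPEG bytes straight to disk. The sketch below only assembles the pipeline description as a string; the camera URL and output path are placeholders, not real endpoints:

```shell
#!/bin/sh
# Sketch: build a gst-launch-0.10 description that splits one MJPEG feed
# into a live view and a disk recording via tee. URL/path are placeholders.
record_pipeline() {
    url="$1"; out="$2"
    # Branch 1 decodes and displays; branch 2 writes the compressed
    # MJPEG bytes to disk, so no re-encoding is needed.
    echo "souphttpsrc location=$url do-timestamp=true is-live=true \
 ! tee name=t \
 t. ! queue ! jpegparse ! jpegdec ! autovideosink \
 t. ! queue ! filesink location=$out"
}

# To actually run it (needs GStreamer 0.10 and a reachable camera):
#   gst-launch-0.10 -e $(record_pipeline http://cam/mjpg /tmp/cam1.mjpeg)
record_pipeline "http://cam.example/mjpg" "/tmp/cam1.mjpeg"
```

Recording the compressed stream keeps the CPU cost low; the saved file can then be opened by a separate player for the rewind/pause features.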

Thank you, I will give that a try.
btw, you mentioned two links, but I only see one in your reply.

Also, do you know whether gstreamer or videomixer (or VirtualDub, or VLC) automatically makes use of the GPU?

Sorry about the link confusion. I meant the only link I had included…

GStreamer is the officially supported video API on Tegra platforms and it should automatically use the video HW. The Jetson's video HW only does video encoding and decoding, so any additional processing tasks should be avoided where possible. There is nvvidconv, which can do color space conversions efficiently, although I don't know exactly which HW blocks it uses (even code running on a modern ARM CPU can be fast when properly optimized). And e.g. scaling can be done on the GPU if the application is using OpenGL for rendering.

VLC does not use GStreamer and is thus not accelerated. I'm not familiar with VirtualDub.

I think the most straightforward way to start is to create an application with 4 windows/views and then use the GStreamer API to decode and render each IP cam stream separately into those windows.
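To sanity-check that idea from the command line first, you could simply background four independent gst-launch processes, one per feed. The sketch below uses placeholder URLs and prints the commands (a dry run) rather than running them, so you can pipe its output to sh once the URLs are real:

```shell
#!/bin/sh
# Sketch (dry run): print one independent playbin2 launch per camera.
# The URLs are placeholders; pipe the output to sh to actually launch.
launch_cmd() {
    echo "gst-launch-0.10 playbin2 uri=$1 &"
}

for url in "http://cam1.example/stream" "http://cam2.example/stream" \
           "http://cam3.example/stream" "http://cam4.example/stream"
do
    launch_cmd "$url"
done
```

Each instance runs in its own process, so a stall on one camera cannot block the other three.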

I seem to be having a lot of difficulty getting gstreamer installed.

E: Package 'gstreamer0.10-ffmpeg' has no installation candidate

It doesn't matter whether it's 0.10 or 1.0. Any thoughts?

gstreamer0.10-ffmpeg might be part of universe, so make sure you have the universe and multiverse repositories enabled:

sudo apt-add-repository universe
sudo apt-add-repository multiverse
sudo apt-get update

But do note that nothing in gstreamer0.10-ffmpeg is HW accelerated on Jetson, so using it might not be a good idea. That said, some things, e.g. container demuxers, don't need much CPU power anyway.

I was trying to follow the GStreamer install instructions in this document:

L4T_Jetson_TK1_Multimedia_User_Guide_V1.2.pdf

Where, on page 2, it says:

To install Gstreamer

Install Gstreamer on the Jetson TK1 platform with the following command:
$ sudo apt-get install gstreamer-tools gstreamer0.10-alsa gstreamer0.10-plugins-base gstreamer0.10-plugins-good gstreamer0.10-plugins-bad gstreamer0.10-plugins-ugly gstreamer0.10-ffmpeg

However, I see that /usr/bin/gst-launch-0.10 is already available.

Is there a question here…?

Having /usr/bin/gst-launch-0.10 already present doesn't mean you have the other packages. Also, it's completely safe to try installing packages with "apt-get install" even if some of them are already installed. If they are installed but not the latest version, they are upgraded. If they are already at the latest version, they are simply skipped with a message saying so.

I guess my question was: why would the documentation tell us to perform that install command when gstreamer0.10-ffmpeg is not even available?

However, I have no real issues now, as I am able to launch gstreamer and include it in C code. The above was something that had me scratching my head for a few days. Had I not seen that command in the docs, I would not have wasted so much time trying to find out how to install ffmpeg.

I am up and running now. Thank you for your guidance; it is much appreciated!

If you bump into new issues, don’t hesitate to ask!

Also, if you get things nicely running, do update the thread with some performance numbers. Many might be interested to know e.g. the CPU load when HW decoding 4x 1080p streams. For proper performance that needs optimal rendering too. Wrong color space conversion ruins the performance easily.
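For those CPU-load numbers, a quick way to sample them while the pipelines run is plain ps (assuming the pipelines were started with gst-launch-0.10; the bracketed grep pattern just keeps grep itself out of the results):

```shell
#!/bin/sh
# Sample per-process CPU usage for any running gst-launch pipelines.
ps aux | head -n 1                                # column header (%CPU etc.)
ps aux | grep '[g]st-launch' || echo "no gst-launch process running"
```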

I am still having trouble getting videomixer to work with streaming video from a remote webcam.
This simple videomixer example works:

gst-launch-0.10 -e -v \
videomixer name=vmix \
! autovideosink \
videotestsrc pattern="snow" \
! video/x-raw-yuv,width=100,height=75 \
! vmix. \
videotestsrc pattern=13 \
! video/x-raw-yuv,width=300,height=125 \
! vmix.

However, when I add in "real" video:

gst-launch-0.10 -e -v \
videomixer name=vmix \
! autovideosink \
videotestsrc pattern="snow" \
! video/x-raw-yuv,width=100,height=75 \
! vmix. \
videotestsrc pattern=13 \
! video/x-raw-yuv,width=300,height=125 \
! vmix. \
souphttpsrc location="http://plazacam.studentaffairs.duke.edu/axis-cgi/mjpg/video.cgi?resolution=1280x720" \
! multipartdemux \
! jpegdec \
! videoscale \
! video/x-raw-yuv,framerate=30/1,width=960,height=540 \
! vmix.

I get internal data flow errors. Is anyone familiar with videomixer?

Setting pipeline to PAUSED …
/GstPipeline:pipeline0/GstVideoTestSrc:videotestsrc1.GstPad:src: caps = video/x-raw-yuv, format=(fourcc)YUY2, width=(int)300, height=(int)125, framerate=(fraction)30/1, color-matrix=(string)sdtv, chroma-site=(string)mpeg2
/GstPipeline:pipeline0/GstCapsFilter:capsfilter2.GstPad:src: caps = video/x-raw-yuv, format=(fourcc)YUY2, width=(int)300, height=(int)125, framerate=(fraction)30/1, color-matrix=(string)sdtv, chroma-site=(string)mpeg2
/GstPipeline:pipeline0/GstCapsFilter:capsfilter2.GstPad:sink: caps = video/x-raw-yuv, format=(fourcc)YUY2, width=(int)300, height=(int)125, framerate=(fraction)30/1, color-matrix=(string)sdtv, chroma-site=(string)mpeg2
/GstPipeline:pipeline0/GstVideoMixer:vmix.GstVideoMixerPad:sink_2: caps = video/x-raw-yuv, format=(fourcc)YUY2, width=(int)300, height=(int)125, framerate=(fraction)30/1, color-matrix=(string)sdtv, chroma-site=(string)mpeg2
/GstPipeline:pipeline0/GstVideoTestSrc:videotestsrc0.GstPad:src: caps = video/x-raw-yuv, format=(fourcc)YUY2, width=(int)100, height=(int)75, framerate=(fraction)30/1, color-matrix=(string)sdtv, chroma-site=(string)mpeg2, pixel-aspect-ratio=(fraction)1/1
/GstPipeline:pipeline0/GstCapsFilter:capsfilter1.GstPad:src: caps = video/x-raw-yuv, format=(fourcc)YUY2, width=(int)100, height=(int)75, framerate=(fraction)30/1, color-matrix=(string)sdtv, chroma-site=(string)mpeg2, pixel-aspect-ratio=(fraction)1/1
/GstPipeline:pipeline0/GstCapsFilter:capsfilter1.GstPad:sink: caps = video/x-raw-yuv, format=(fourcc)YUY2, width=(int)100, height=(int)75, framerate=(fraction)30/1, color-matrix=(string)sdtv, chroma-site=(string)mpeg2, pixel-aspect-ratio=(fraction)1/1
/GstPipeline:pipeline0/GstVideoMixer:vmix.GstVideoMixerPad:sink_1: caps = video/x-raw-yuv, format=(fourcc)YUY2, width=(int)100, height=(int)75, framerate=(fraction)30/1, color-matrix=(string)sdtv, chroma-site=(string)mpeg2, pixel-aspect-ratio=(fraction)1/1
Pipeline is PREROLLING …
/GstPipeline:pipeline0/GstJpegDec:jpegdec0.GstPad:sink: caps = image/jpeg
/GstPipeline:pipeline0/GstJpegDec:jpegdec0.GstPad:src: caps = video/x-raw-yuv, format=(fourcc)I420, width=(int)1280, height=(int)720, framerate=(fraction)0/1
ERROR: from element /GstPipeline:pipeline0/GstSoupHTTPSrc:souphttpsrc0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2625): gst_base_src_loop (): /GstPipeline:pipeline0/GstSoupHTTPSrc:souphttpsrc0:
streaming task paused, reason not-negotiated (-4)
ERROR: pipeline doesn’t want to preroll.
Setting pipeline to NULL …
/GstPipeline:pipeline0/GstVideoMixer:vmix.GstPad:src: caps = video/x-raw-yuv, format=(fourcc)YUY2, width=(int)300, height=(int)125, framerate=(fraction)30/1, color-matrix=(string)sdtv, chroma-site=(string)mpeg2, pixel-aspect-ratio=(fraction)1/1
/GstPipeline:pipeline0/GstVideoMixer:vmix.GstVideoMixerPad:sink_2: caps = NULL
/GstPipeline:pipeline0/GstVideoMixer:vmix.GstVideoMixerPad:sink_1: caps = NULL
/GstPipeline:pipeline0/GstVideoMixer:vmix.GstPad:src: caps = NULL
/GstPipeline:pipeline0/GstCapsFilter:capsfilter2.GstPad:src: caps = NULL
/GstPipeline:pipeline0/GstCapsFilter:capsfilter2.GstPad:sink: caps = NULL
/GstPipeline:pipeline0/GstCapsFilter:capsfilter1.GstPad:src: caps = NULL
/GstPipeline:pipeline0/GstCapsFilter:capsfilter1.GstPad:sink: caps = NULL
/GstPipeline:pipeline0/GstVideoTestSrc:videotestsrc1.GstPad:src: caps = NULL
/GstPipeline:pipeline0/GstVideoTestSrc:videotestsrc0.GstPad:src: caps = NULL
/GstPipeline:pipeline0/GstJpegDec:jpegdec0.GstPad:src: caps = NULL
/GstPipeline:pipeline0/GstJpegDec:jpegdec0.GstPad:sink: caps = NULL
/GstPipeline:pipeline0/GstMultipartDemux:multipartdemux0.GstPad:src_0: caps = NULL
Freeing pipeline …

I am able to design gstreamer pipelines for displaying multiple streams; however, the frame rate is very low (as most of the work gets done by the CPU rather than the GPU).
I would like to switch to using nv_omx_videomixer as a first step towards moving the pipeline onto the GPU, but it balks at the pipeline flow. My best guess is that the streams have different formats, so I started using nvvidconv (for both conversion and scaling), but nv_omx_videomixer still complains that it cannot link the pipeline.
Does anyone have any advice on what formats are "expected" by nv_omx_videomixer if I am streaming MJPEG from an Axis IP camera (below), plus local webcams?

souphttpsrc location="http://plazacam.studentaffairs.duke.edu/axis-cgi/mjpg/video.cgi?resolution=1280x720"

You can look at the "expected" formats for any GStreamer element by executing:

$ gst-inspect-0.10 <element>

e.g.

$ gst-inspect-0.10 nv_omx_videomixer

(for GStreamer 1.0 it's 'gst-inspect-1.0')

$ gst-inspect

will list all of the elements. You can add the version-number suffix to see what's applicable to that version.

That will give you the 'Sink' parameters, which are the inputs to the element, and the 'Source' parameters, which are the outputs from the element. If you examine nvvidconv, you'll see that the Sink accepts x-raw-yuv, x-raw-gray, x-nv-yuv, and nvrm-yuv. So in order to feed MJPEG video into nvvidconv, you have to decode it to one of those formats first.

To convert the MJPEG from the IP camera, I usually use 'jpegparse ! jpegdec' in pipelines I've done in the past. The 'jpegparse' basically normalizes the MJPEG stream for the JPEG decoder, that is, puts it into a form the decoder is comfortable with. The parse may just be a pass-through if the camera output is already in the MJPEG stream format that the decoder expects.

Unfortunately there’s currently a bug in the hardware assisted decoder, nvjpegdec, which does not allow decoding on a stream. I sent a note to NVIDIA, they said that it’s fixed in the development stream and will be in the next release.
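Putting that together, a minimal single-feed test using the software JPEG decoder (since nvjpegdec can't yet handle a stream) might look like the sketch below; it only assembles the description string, and the camera URL is a placeholder:

```shell
#!/bin/sh
# Sketch: build a description that decodes and displays one MJPEG feed
# with the software jpegdec. The camera URL is a placeholder.
mjpeg_view() {
    echo "souphttpsrc location=$1 do-timestamp=true is-live=true \
 ! jpegparse ! jpegdec ! xvimagesink sync=false"
}

# To actually run it:
#   gst-launch-0.10 -e -v $(mjpeg_view http://camera/axis-cgi/mjpg/video.cgi)
mjpeg_view "http://cam.example/mjpg"
```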

After much trial and error, I managed to create a pipeline (below) that will mix 4 streaming videos together, but it is quite slow. So, I need to start the process of swapping pipeline elements with their GPU-accelerated versions.

The first obvious thing to try was replacing "videomixer" with "nv_omx_videomixer", but that yields the following error: "WARNING: erroneous pipeline: could not link queue1 to vmix"

What are the differences between videomixer and nv_omx_videomixer that would cause it to not link any more?
What should be my next step in swapping-in GPU accelerated elements?

#!/bin/sh
gst-launch-0.10 -e -v \
souphttpsrc location="http://194.168.163.96/axis-cgi/mjpg/video.cgi?resolution=320x240" timeout=5 do-timestamp=true is-live=true \
! jpegparse \
! jpegdec \
! videorate \
! videoscale \
! video/x-raw-yuv,framerate=30/1,width=960,height=540 \
! videobox left=-960 top=-540 border-alpha=0 \
! nvvidconv \
! queue \
! vmix. \
souphttpsrc location="http://64.122.208.241:8000/axis-cgi/mjpg/video.cgi?resolution=320x240" timeout=5 do-timestamp=true is-live=true \
! jpegparse \
! jpegdec \
! videorate \
! videoscale \
! video/x-raw-yuv,framerate=30/1,width=960,height=540 \
! videobox top=-540 border-alpha=0 \
! nvvidconv \
! queue \
! vmix. \
souphttpsrc location="http://plazacam.studentaffairs.duke.edu/axis-cgi/mjpg/video.cgi?resolution=1280x720" timeout=5 do-timestamp=true is-live=true \
! jpegparse \
! jpegdec \
! videorate \
! videoscale \
! video/x-raw-yuv,framerate=30/1,width=960,height=540 \
! videobox left=-960 border-alpha=0 \
! nvvidconv \
! queue \
! vmix. \
souphttpsrc location="http://192.168.1.123:8081/?action=stream" timeout=5 do-timestamp=true is-live=true \
! jpegparse \
! jpegdec \
! videorate \
! videoscale \
! video/x-raw-yuv,framerate=30/1,width=960,height=540 \
! nvvidconv \
! queue \
! vmix. \
videomixer name=vmix \
! videoscale \
! video/x-raw-yuv,framerate=30/1,width=1920,height=1080 \
! nv_omx_hdmi_videosink

First, here’s a video example after the changes I made. I substituted a local MJPEG webcam for your local stream. The webcam is 1280x720 in the example:

http://youtu.be/peAB31-Bq3A

The video streams appear to be playing back at the feed speed; the webcam is pretty close, though it has a few hitches in its giddy-up now and then.

A couple of comments:

  1. You can pick up performance by placing a 'queue' element between the jpegparse and the jpegdec. Queue is equivalent to spawning a thread, and seems to be most useful before decoding and encoding elements.
  2. The ‘queue’ will work better before the nvvidconv as there’s not much for it to do afterwards.
  3. I took the liberty of creating some macros so that it’s easier to play with elements. My curiosity was satisfied adding and subtracting queue elements for example.
  4. I didn't find much difference in performance between nv_omx_hdmi_videosink, nv_gl_eglsink and xvimagesink, which suggests that the bottleneck is the JPEG decoder. I just used xvimagesink because I couldn't figure out how to terminate the nv_omx_hdmi_videosink session.
  5. I placed this as a gist on Github: https://gist.github.com/jetsonhacks/29b33257980d0c342cbc
    5a) You should be able to substitute your 4th feed in place of the webcam easily enough

Here’s teh codez:

#!/bin/sh
JPEG_DEC="jpegparse ! queue ! jpegdec"
VID_SPEC="videorate ! videoscale ! video/x-raw-yuv,framerate=30/1,width=960,height=540"
NVVIDCONV="queue ! nvvidconv"
VELEM="v4l2src device=/dev/video0" # video0 is a Logitech C920 webcam with built-in H.264 compression & MJPEG
VCAPS="image/jpeg, width=1280, height=720, framerate=30/1"

# Video Source
VSOURCE="$VELEM ! $VCAPS"

gst-launch-0.10 -e -v \
souphttpsrc location="http://194.168.163.96/axis-cgi/mjpg/video.cgi?resolution=320x240" timeout=5 do-timestamp=true is-live=true \
! $JPEG_DEC \
! $VID_SPEC \
! videobox left=-960 top=-540 border-alpha=0 \
! $NVVIDCONV \
! vmix. \
souphttpsrc location="http://64.122.208.241:8000/axis-cgi/mjpg/video.cgi?resolution=320x240" timeout=5 do-timestamp=true is-live=true \
! $JPEG_DEC \
! $VID_SPEC \
! videobox top=-540 border-alpha=0 \
! $NVVIDCONV \
! vmix. \
souphttpsrc location="http://plazacam.studentaffairs.duke.edu/axis-cgi/mjpg/video.cgi?resolution=1280x720" timeout=5 do-timestamp=true is-live=true \
! $JPEG_DEC \
! $VID_SPEC \
! videobox left=-960 border-alpha=0 \
! $NVVIDCONV \
! vmix. \
$VSOURCE \
! $JPEG_DEC \
! $VID_SPEC \
! $NVVIDCONV \
! vmix. \
videomixer name=vmix \
! videoscale \
! video/x-raw-yuv,framerate=30/1,width=1920,height=1080 \
! xvimagesink sync=false

I don’t know what the problem is with nv_omx_videomixer. It looks broken to me after multiple attempts trying to get it to link into the pipeline. Hopefully NVIDIA will revisit the Gstreamer support in the next release, as it needs a stern talking to.

Hope this helps.

I forgot to add something from my last post. The nvvidconv is superfluous in the previous post. My understanding is that nvvidconv is basically a replacement for the ffmpegcolorspace element. The jpegdec element provides a video/x-raw-yuv I420 format source to the rest of the pipeline, which is a format already compatible with the video mixer and sink elements. In the code I posted, you can change the NVVIDCONV macro to just "queue" (remove the "! nvvidconv") and it still works as expected. I took a quick spin, and nv_omx_hdmi_videosink, nv_gl_eglsink and xvimagesink all worked after taking out the nvvidconv.

Try this:
gst-launch-0.10 -e -v souphttpsrc location="http://194.168.163.96/axis-cgi/mjpg/video.cgi?resolution=320x240" timeout=5 do-timestamp=true is-live=true ! jpegparse ! jpegdec ! nv_omx_hdmi_videosink overlay-x=500 overlay-y=500 overlay-w=500 overlay-h=500

You will need to create similar pipelines for the other 3 streams and run them concurrently, after tweaking the overlay-* parameters as you want.

Let me know if this helps.
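To place the four overlays as a 2x2 grid on a 1920x1080 display, the overlay-* values can be computed instead of hand-tweaked. This sketch (placeholder URLs) prints the four backgrounded commands as a dry run; pipe its output to sh to launch them:

```shell
#!/bin/sh
# Sketch (dry run): print one overlay pipeline per quadrant of a
# 1920x1080 display. URLs are placeholders.
W=1920; H=1080
QW=$((W / 2)); QH=$((H / 2))

quadrant_cmd() {
    # $1 = camera URL, $2 = column (0 or 1), $3 = row (0 or 1)
    x=$(( $2 * QW )); y=$(( $3 * QH ))
    echo "gst-launch-0.10 -e souphttpsrc location=$1 do-timestamp=true is-live=true \
 ! jpegparse ! jpegdec \
 ! nv_omx_hdmi_videosink overlay-x=$x overlay-y=$y overlay-w=$QW overlay-h=$QH &"
}

quadrant_cmd "http://cam1.example/mjpg" 0 0
quadrant_cmd "http://cam2.example/mjpg" 1 0
quadrant_cmd "http://cam3.example/mjpg" 0 1
quadrant_cmd "http://cam4.example/mjpg" 1 1
```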

It was suggested to me to try and run 4 instances of gstreamer to achieve the same result (4 panels displaying 4 separate video streams).

Running either one of these pipelines by itself works OK (although the first one consumes 54% of the CPU):

gst-launch-0.10 -e -v \
souphttpsrc location="http://plazacam.studentaffairs.duke.edu/axis-cgi/mjpg/video.cgi?resolution=1280x720" timeout=5 do-timestamp=true is-live=true \
! jpegparse \
! jpegdec \
! videorate \
! video/x-raw-yuv,framerate=30/1 \
! nv_omx_hdmi_videosink overlay-x=960 overlay-y=540 overlay-w=960 overlay-h=540 sync=false

gst-launch-0.10 -e -v \
udpsrc port=1234 \
! application/x-rtp,payload=96,encoding-name=H264 \
! rtph264depay \
! nv_omx_h264dec \
! nv_omx_hdmi_videosink overlay-x=960 overlay-y=540 overlay-w=960 overlay-h=540 sync=false

However, when I try to launch them concurrently, I get errors due to lack of resources (below).

Is it possible to launch two simultaneous streams that both end at nv_omx_hdmi_videosink?

/GstPipeline:pipeline0/GstOmxH264Dec:omxh264dec0.GstPad:src: caps = video/x-nv-yuv, width=(int)640, height=(int)480, format=(fourcc)NV12, stereoflags=(int)0, framerate=(fraction)0/1, pixel-aspect-ratio=(fraction)1/1
/GstPipeline:pipeline0/GstOmxHdmiVideoSink:omxhdmivideosink0.GstPad:sink: caps = video/x-nv-yuv, width=(int)640, height=(int)480, format=(fourcc)NV12, stereoflags=(int)0, framerate=(fraction)0/1, pixel-aspect-ratio=(fraction)1/1
NvxBaseWorkerFunction[2480] comp OMX.Nvidia.render.hdmi.overlay.yuv420 Error -2147479552
ERROR: from element /GstPipeline:pipeline0/GstOmxHdmiVideoSink:omxhdmivideosink0: GStreamer encountered a general resource error.
Additional debug info:
/dvs/git/dirty/git-master_linux/external/gstreamer/gst-openmax/omx/gstomx_util.c(1182): omx_report_error (): /GstPipeline:pipeline0/GstOmxHdmiVideoSink:omxhdmivideosink0:
There were insufficient resources to perform the requested operation
Execution ended after 545078833 ns.
