GStreamer 1.10

GStreamer 1.10 was released on 1 Nov 2016. Are there any known reasons it won’t work with the TX-1/JetPack 2.3 if we rebuild it from source? Are there TX-1-specific elements (nvcamerasrc, omxh265enc, etc.) providing hardware acceleration or ISP support that are likely to break?

Hi Sperok,

We will give it a try today. Which specific error are you seeing? I suppose you built it using steps similar to these: https://developer.ridgerun.com/wiki/index.php?title=Compile_gstreamer_on_tegra_X1
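
Roughly, that build boils down to something like this (a sketch only; the prefix and job count are assumptions, the wiki has the full steps, and you need to repeat it for gst-plugins-base/-good/-bad/-ugly):

# Fetch, build, and install GStreamer core from source (illustrative)
wget https://gstreamer.freedesktop.org/src/gstreamer/gstreamer-1.10.0.tar.xz
tar xf gstreamer-1.10.0.tar.xz
cd gstreamer-1.10.0
./configure --prefix=/usr
make -j4
sudo make install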

Is there any specific pipeline you would like us to run?

-David

Hi Sperok,

I tried gstreamer-1.10.0 and was able to capture using nvcamerasrc and to encode an H.264 video. I ran the following pipeline:

gst-launch-1.0 -e nvcamerasrc sensor-id=0 fpsRange="30.0 30.0" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)I420' ! omxh264enc bitrate=8000000 ! 'video/x-h264, stream-format=(string)byte-stream' ! filesink location=test.h264

What error did you get?

-Eugenia

David - Thanks for the link. We just picked up some additional TX-1 systems yesterday to experiment with this, and were hoping someone else had already pioneered the 1.10 wilderness. We are new to TX-1 development and would not be surprised if there are multiple issues to resolve with 1.10. Our system currently uses two IMX274 sensors connected via a Leopard Imaging TX-1 carrier card. The design goal is to concurrently capture, encode, and record dual 2160p60 streams to an SD card while live streaming 1080p30 to YouTube Live (RTMP) over a WiFi connection to a 1 Gbps ISP link, with some neural-net work happening in the background.

Our current test bed does not achieve this yet; it uses a GStreamer pipeline with gst-rtsp-server to validate TX-1 throughput in the lab, while we struggle to get GStreamer’s RTMP element talking to YouTube’s and to get a tee set up correctly to feed our NN (a rough sketch of the tee follows the pipeline below).

./test-launch "( \
	nvcamerasrc fpsRange=\"30 120\" sensor-id=0 \
	! video/x-raw(memory:NVMM), width=(int)3840, height=(int)2160, format=(string)I420, framerate=(fraction)60/1 \
	! nvvidconv \
	! timeoverlay \
	! omxh264enc control-rate=2 bitrate=20000000 \
	! video/x-h264, stream-format=(string)byte-stream \
	! rtph264pay name=pay0 pt=96 \
	)"

@EugeniaGuzman - Thanks for the confirmation. Knowing that it should work eliminates some fear of the unknown. Much appreciated.

Your thinking is correct. Using the latest GStreamer version is generally a good idea because you get all the improvements made by the community; we have seen great improvements in muxers and demuxers, for instance.

Nice. We don’t have the IMX274 cameras, but we do have the IMX219 cameras working at 3280x2464@16fps:

http://developer.ridgerun.com/wiki/index.php?title=Sony_IMX219_Linux_driver_for_Tegra_X1#Overview_Video

It gives us 16 fps because we use only two CSI lanes on the J20 board from Auvidea. Anyway, we could use that to simulate your system. Dual recording should be possible:

https://developer.ridgerun.com/wiki/index.php?title=Gstreamer_pipelines_for_Tegra_X1#Dual_H264_encoding
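
Something along these lines (a sketch in the spirit of that wiki page, not verified on your exact sensors):

gst-launch-1.0 -e \
	nvcamerasrc sensor-id=0 fpsRange="60 60" \
	! 'video/x-raw(memory:NVMM), width=(int)3840, height=(int)2160, format=(string)I420, framerate=(fraction)60/1' \
	! omxh264enc bitrate=20000000 ! 'video/x-h264, stream-format=(string)byte-stream' ! filesink location=cam0.h264 \
	nvcamerasrc sensor-id=1 fpsRange="60 60" \
	! 'video/x-raw(memory:NVMM), width=(int)3840, height=(int)2160, format=(string)I420, framerate=(fraction)60/1' \
	! omxh264enc bitrate=20000000 ! 'video/x-h264, stream-format=(string)byte-stream' ! filesink location=cam1.h264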

Just be careful to use a Class 10 SD card and to tune the kernel’s writeback daemon so it frequently flushes what is in RAM to the filesystem.
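
On Linux that flushing is controlled by the vm.dirty_* sysctls; something like the following makes the writeback daemon run more often (the values are illustrative, tune them for your workload):

# Flush dirty pages sooner and more frequently (illustrative values)
sudo sysctl -w vm.dirty_background_ratio=5
sudo sysctl -w vm.dirty_ratio=10
sudo sysctl -w vm.dirty_writeback_centisecs=100
sudo sysctl -w vm.dirty_expire_centisecs=500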

We have also developed products that stream over RTMP with GStreamer to services like YouTube and Ustream. Be careful: some of them are picky and always require an audio channel alongside the video in order to receive the stream properly. Let us know if you need help with this.
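
For illustration, muxing a silent AAC track alongside the video is one way to keep those services happy (a sketch only; the YouTube ingest URL and stream key are placeholders, and voaacenc is just one AAC encoder option):

gst-launch-1.0 -e nvcamerasrc sensor-id=0 \
	! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' \
	! omxh264enc bitrate=4000000 ! h264parse \
	! flvmux name=mux streamable=true \
	! rtmpsink location='rtmp://a.rtmp.youtube.com/live2/STREAM_KEY' \
	audiotestsrc is-live=true wave=silence ! voaacenc bitrate=128000 ! mux.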

That is a good idea for checking ARM load and memory bandwidth; in general we haven’t seen problems with the Tegra. This sounds like an interesting project. If you want to share details that can’t go in the forum, please send me an email; we would be happy to help you.
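
On the TX-1, a quick way to watch both is the tegrastats utility that ships with L4T (assuming the stock copy in the ubuntu user’s home directory):

sudo ~/tegrastats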

-David