RTSP streaming from Raspberry Pi v2 camera distorted

Hello,

I am running the RTSP server example mentioned here

I have a Xavier NX connected to a Raspberry Pi v2 camera (IMX219) and run the following command (identical to what is recommended in the FAQ example):

./test-launch "videotestsrc is-live=1 ! nvvidconv ! nvv4l2h264enc ! h264parse ! rtph264pay name=pay0 pt=96"

On my windows host machine I launch the RTSP stream in VLC media player at the following address:

rtsp://192.168.59.47:8554/test

Once this happens a stream that looks like the following is opened:

[screenshot of the received stream]

Terminal output is the following:

Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 4
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
H264: Profile = 66, Level = 0
NVMEDIA_ENC: bBlitMode is set to TRUE

Am I getting some camera parameter wrong in the ./test-launch command?

thanks so much,
Will

This is the expected output for videotestsrc.
For the RPi v2 camera, you would instead use:

./test-launch "nvarguscamerasrc ! video/x-raw(memory:NVMM),format=NV12,width=1920,height=1080,framerate=30/1 ! nvvidconv ! nvv4l2h264enc insert-sps-pps=1 idrinterval=15 insert-vui=1 ! h264parse ! rtph264pay name=pay0 pt=96"

Thanks that worked perfectly.

Could you kindly point me in a direction where I could learn about these parameters/how to deal with them?

Also, how would I incorporate RTSP in a Python application? (I recently had to move to headless from the dev kit, so no more opening cv2 windows :/ )

Thanks
Will

I cannot teach GStreamer in one post, but the following may help you:

  • With GStreamer, everything is a pipeline. A pipeline is built from a source element to a sink element, and may pass through other elements for processing. For example, a simple pipeline using a test video source and an X window for display would be:
gst-launch-1.0 videotestsrc ! xvimagesink

gst-launch-1.0 is a binary that can be used to easily prototype a GStreamer pipeline.

You can have details using gst-inspect-1.0:

# Get list of available plugins with provided elements and types:
gst-inspect-1.0

# Get all elements and typefinders provided by a plugin, its library...
gst-inspect-1.0 <any_plugin_listed_from_above>

# Get details about an element such as supported input/output formats, properties, ...
gst-inspect-1.0 <any_element_listed_from_above>
  • In a pipeline, there are caps between elements that define the format of the data (called buffers) exchanged between them. There must be at least one format available as SRC (output) of the previous element and as SINK (input) of the next element, otherwise the pipeline will fail to link the elements at init time. So the canonical form of a pipeline would be:
src_element ! caps ! element ! caps ! ... ! element ! caps ! sink_element

The caps are the type information such as video/x-raw… If you don’t specify caps between two elements, GStreamer will try to negotiate caps between them, looking for caps available on the SRC and SINK pads of each element, if any. Using gst-launch-1.0 with the -v flag, you’ll be able to see the caps that were used.
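For example, here is a hypothetical pipeline with explicit caps between two software-only elements (so it does not need the Jetson camera or encoder); with -v, gst-launch-1.0 prints the caps negotiated on every pad:

```shell
# Force 640x480 NV12 at 30 fps between the test source and the converter;
# -v prints the negotiated caps for each pad as the pipeline starts.
gst-launch-1.0 -v videotestsrc num-buffers=30 \
  ! video/x-raw,format=NV12,width=640,height=480,framerate=30/1 \
  ! videoconvert ! fakesink
```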

  • The test-launch argument is not a full pipeline: test-launch adds the sink itself (usually a udpsink element).
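To illustrate this (and since it also relates to your Python question): what test-launch does can be sketched in Python with the GstRtspServer bindings. This is a minimal sketch, assuming PyGObject and the gst-rtsp-server package are installed on the Jetson; the pipeline string is the one from my previous post:

```python
#!/usr/bin/env python3
import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstRtspServer', '1.0')
from gi.repository import Gst, GstRtspServer, GLib

Gst.init(None)

server = GstRtspServer.RTSPServer()
server.set_service("8554")  # same port as test-launch

# The factory builds the media pipeline for each client.
# Like test-launch, the server appends the actual network sink itself;
# the launch string only has to end in a payloader named pay0.
factory = GstRtspServer.RTSPMediaFactory()
factory.set_launch(
    "( nvarguscamerasrc ! video/x-raw(memory:NVMM),format=NV12,"
    "width=1920,height=1080,framerate=30/1 ! nvv4l2h264enc "
    "insert-sps-pps=1 idrinterval=15 insert-vui=1 ! h264parse "
    "! rtph264pay name=pay0 pt=96 )"
)
factory.set_shared(True)  # all clients share one camera pipeline

server.get_mount_points().add_factory("/test", factory)
server.attach(None)

print("Stream ready at rtsp://<jetson-ip>:8554/test")
GLib.MainLoop().run()
```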

  • So for details about my previous post:

  1. I get the camera feed with nvarguscamerasrc, which controls the camera and debayers (and auto-tunes) with the ISP. That element provides raw video in NV12 format and outputs into NVMM memory, contiguous memory convenient for DMA access from the GPU, encoders/decoders and more. Then I specified caps to choose a video mode (here 1080p30).
  2. The following element, nvvidconv, is useful for copying between NVMM memory and system memory. Not sure if this is still true, but in previous L4T releases, at least one of its input or output had to be in NVMM memory (i.e. it could not be used as video/x-raw ! nvvidconv ! video/x-raw).
    Beyond copying between memory spaces, it can also convert video formats, rotate/flip or crop using the VIC hardware. It is in fact not required here, as the next element nvv4l2h264enc expects NV12 format in NVMM memory, which nvarguscamerasrc already provides; with nothing to do, nvvidconv should add very little overhead. Feel free to remove it and try.
  3. Then nvv4l2h264enc drives the HW encoder that will produce h264 video.
  4. nvv4l2h264enc may output a parsed H264 stream in byte-stream format, so h264parse may not be mandatory for this case.
  5. Finally rtph264pay will manage RTP protocol for H264 format and packetize buffers to be sent to sink such as udpsink.
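And to answer your Python question above: a headless application can consume the RTSP stream without opening any window. One common option is OpenCV's VideoCapture; this is a minimal sketch, assuming opencv-python is built with GStreamer or FFmpeg support, and it uses the server address from your first post (replace it with your own):

```python
import cv2

# Hypothetical URL taken from the first post; adjust to your server.
url = "rtsp://192.168.59.47:8554/test"

cap = cv2.VideoCapture(url)
if not cap.isOpened():
    raise RuntimeError("Could not open RTSP stream")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Process frames headlessly instead of cv2.imshow(), e.g.:
    print("got frame of shape", frame.shape)

cap.release()
```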

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.