Hi! I have the same problem as described here, but instead of rtsp-simple-server I have a no-name camera from China. It supports RTSP and encodes video in H.265. The camera works fine with this command (it does not work with H.265 decoding, though):
gst-launch-1.0 rtspsrc location=rtsp://admin:1234@192.168.1.41:554 latency=200 ! queue ! rtph264depay ! h264parse ! omxh264dec ! 'video/x-raw(memory:NVMM)' ! nvoverlaysink
output:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Progress: (open) Opening Stream
Progress: (connect) Connecting to rtsp://admin:1234@192.168.1.41:554
Progress: (open) Retrieving server options
Progress: (open) Retrieving media info
Progress: (request) SETUP stream 0
Progress: (open) Opened Stream
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Progress: (request) Sending PLAY request
Progress: (request) Sending PLAY request
Progress: (request) Sent PLAY request
WARNING: from element /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0: Could not read from resource.
Additional debug info:
gstrtspsrc.c(5427): gst_rtspsrc_reconnect (): /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0:
Could not receive any UDP packets for 5,0000 seconds, maybe your firewall is blocking it. Retrying using a tcp connection.
(gst-launch-1.0:11426): GStreamer-CRITICAL **: 18:08:34.454: gst_caps_is_empty: assertion 'GST_IS_CAPS (caps)' failed
(gst-launch-1.0:11426): GStreamer-CRITICAL **: 18:08:34.454: gst_caps_truncate: assertion 'GST_IS_CAPS (caps)' failed
(gst-launch-1.0:11426): GStreamer-CRITICAL **: 18:08:34.454: gst_caps_fixate: assertion 'GST_IS_CAPS (caps)' failed
(gst-launch-1.0:11426): GStreamer-CRITICAL **: 18:08:34.454: gst_caps_get_structure: assertion 'GST_IS_CAPS (caps)' failed
(gst-launch-1.0:11426): GStreamer-CRITICAL **: 18:08:34.454: gst_structure_get_string: assertion 'structure != NULL' failed
(gst-launch-1.0:11426): GStreamer-CRITICAL **: 18:08:34.454: gst_mini_object_unref: assertion 'mini_object != NULL' failed
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
Allocating new output: 1920x1088 (x 11), ThumbnailMode = 0
OPENMAX: HandleNewStreamFormat: 3605: Send OMX_EventPortSettingsChanged: nFrameWidth = 1920, nFrameHeight = 1080
^Chandling interrupt.
Interrupt: Stopping pipeline ...
Execution ended after 0:00:07.997687179
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
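From what I have read, rtspsrc can be forced to use TCP transport via its protocols property, which should skip the failing UDP attempt entirely. A sketch of the adjusted pipeline with the H.265 chain (untested on my side; assumes a Jetson with the OMX plugins installed and the camera reachable at this URL):

```shell
# Force RTSP over TCP and decode H.265 instead of H.264.
# protocols=tcp on rtspsrc disables the UDP attempt entirely.
CAM_URL="rtsp://admin:1234@192.168.1.41:554"
gst-launch-1.0 rtspsrc location="$CAM_URL" protocols=tcp latency=200 ! \
    queue ! rtph265depay ! h265parse ! omxh265dec ! \
    'video/x-raw(memory:NVMM)' ! nvoverlaysink
```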
ffplay over TCP works fine, but over UDP it does not, so I conclude that my camera cannot stream over UDP. The jetson-inference docs say "RTSP network streams are subscribed to from a remote host over UDP/IP," so I can't use my camera, which for some reason doesn't work over UDP. Please tell me: do I have any options to use jetson-inference and connect it to the camera via TCP? If I need to rewrite part of jetson-inference, please indicate where to start.