Distorted output displayed when running imagenet using IP camera

I am attempting to run imagenet-camera using an IP camera, with the following pipeline:

rtspsrc location=rtsp://admin:123456@xxx.xxx.x.xxx:554/media/video1 latency=200 ! queue ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! queue ! appsink name=mysink

The output being displayed from the camera is here:
https://imgur.com/a/TbOJDeQ

There is color distortion, and the image is heavily zoomed in.

This does not seem to be a camera issue, because I can get normal output displayed using a separate script.
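
(The separate script isn't included here. For reference, a minimal stand-alone check along these lines, using OpenCV's GStreamer backend rather than jetson-inference, displays the stream with correct colors and framing. This is a sketch, not my exact script, and it assumes OpenCV was built with GStreamer support so that cv::CAP_GSTREAMER is available.)

// Minimal RTSP sanity check, independent of jetson-inference. It decodes
// the same way (omxh264dec) and hands packed BGR frames to OpenCV.
#include <opencv2/opencv.hpp>

int main()
{
    const std::string pipeline =
        "rtspsrc location=rtsp://admin:123456@xxx.xxx.x.xxx:554/media/video1 latency=200 ! "
        "queue ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! "
        "video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink";

    cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
    if (!cap.isOpened())
        return 1;

    cv::Mat frame;
    while (cap.read(frame))              // frames arrive as 8-bit BGR
    {
        cv::imshow("rtsp-check", frame); // should look normal, unlike imagenet-camera
        if (cv::waitKey(1) == 27)        // ESC quits
            break;
    }
    return 0;
}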

Any idea what may be causing this?

Thanks in advance!

Hi,
Do you also observe the same issue with nvoverlaysink?

gst-launch-1.0 rtspsrc location=rtsp://admin:123456@xxx.xxx.x.xxx:554/media/video1 latency=200 ! queue ! rtph264depay ! h264parse ! omxh264dec ! nvoverlaysink

Hi,
I get the following error when trying to use nvoverlaysink:

[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstCamera attempting to initialize with GST_SOURCE_NVARGUS, camera 0
[gstreamer] gstCamera pipeline string:
rtspsrc location=rtsp://admin:123456@xxx.xxx.x.xxx:554/media/video1 latency=200 ! queue ! rtph264depay ! h264parse ! omxh264dec ! nvoverlaysink
[gstreamer] gstCamera failed to retrieve AppSink element from pipeline
[gstreamer] failed to init gstCamera (GST_SOURCE_NVARGUS, camera 0)
[gstreamer] gstCamera attempting to initialize with GST_SOURCE_NVCAMERA, camera 0
[gstreamer] gstCamera pipeline string:
rtspsrc location=rtsp://admin:123456@xxx.xxx.x.xxx:554/media/video1 latency=200 ! queue ! rtph264depay ! h264parse ! omxh264dec ! nvoverlaysink
[gstreamer] gstCamera failed to retrieve AppSink element from pipeline
[gstreamer] failed to init gstCamera (GST_SOURCE_NVCAMERA, camera 0)
[gstreamer] gstCamera attempting to initialize with GST_SOURCE_V4L2, camera 0
[gstreamer] gstCamera pipeline string:
rtspsrc location=rtsp://admin:123456@xxx.xxx.x.xxx:554/media/video1 latency=200 ! queue ! rtph264depay ! h264parse ! omxh264dec ! nvoverlaysink
[gstreamer] gstCamera failed to retrieve AppSink element from pipeline
[gstreamer] failed to init gstCamera (GST_SOURCE_V4L2, camera 0)

imagenet-camera:  failed to initialize camera device

Hi,
Please run it via ‘gst-launch-1.0’

@DaneLLL Your first suggestion worked with gst-launch-1.0, although my own pipeline throws the following errors when run with it.

Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Progress: (open) Opening Stream
Progress: (connect) Connecting to rtsp://admin:123456@192.168.1.185:554/media/video1
Progress: (open) Retrieving server options
Progress: (open) Retrieving media info
Progress: (request) SETUP stream 0
Progress: (open) Opened Stream
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Progress: (request) Sending PLAY request
Progress: (request) Sending PLAY request
Progress: (request) Sent PLAY request

(gst-launch-1.0:8681): GStreamer-CRITICAL **: 00:26:04.915: gst_caps_is_empty: assertion 'GST_IS_CAPS (caps)' failed

(gst-launch-1.0:8681): GStreamer-CRITICAL **: 00:26:04.915: gst_caps_truncate: assertion 'GST_IS_CAPS (caps)' failed

(gst-launch-1.0:8681): GStreamer-CRITICAL **: 00:26:04.915: gst_caps_fixate: assertion 'GST_IS_CAPS (caps)' failed

(gst-launch-1.0:8681): GStreamer-CRITICAL **: 00:26:04.915: gst_caps_get_structure: assertion 'GST_IS_CAPS (caps)' failed

(gst-launch-1.0:8681): GStreamer-CRITICAL **: 00:26:04.915: gst_structure_get_string: assertion 'structure != NULL' failed

(gst-launch-1.0:8681): GStreamer-CRITICAL **: 00:26:04.915: gst_mini_object_unref: assertion 'mini_object != NULL' failed
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
Allocating new output: 1920x1088 (x 12), ThumbnailMode = 0
OPENMAX: HandleNewStreamFormat: 3605: Send OMX_EventPortSettingsChanged: nFrameWidth = 1920, nFrameHeight = 1080 
reference in DPB was never decoded

I should also note that inside gstCamera.cpp, I use the ostringstream ss to feed in my RTSP connection string.
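
For reference, the edit is roughly the following (a sketch against the stock jetson-inference gstCamera::buildLaunchStr(); the ostringstream ss and the mLaunchStr member are assumed from that file, everything else is the pipeline from above):

// Inside gstCamera::buildLaunchStr() in gstCamera.cpp: replace the default
// camera source string with the RTSP pipeline. The trailing
// "appsink name=mysink" must stay, because gstCamera looks up the element
// named "mysink" after gst_parse_launch() -- this is also why the plain
// nvoverlaysink pipeline failed inside imagenet-camera earlier in the thread.
std::ostringstream ss;

ss << "rtspsrc location=rtsp://admin:123456@xxx.xxx.x.xxx:554/media/video1 latency=200 ! ";
ss << "queue ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! ";
ss << "video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! ";
ss << "queue ! appsink name=mysink";

mLaunchStr = ss.str();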

Hi,
There are two kinds of conversion, cudaNV12ToRGBA32() and cudaRGB8ToRGBA32(). You may check that you call the correct one, and that width and height are set correctly.

For running deep learning models on Jetson Nano, please also try DeepStream SDK 4.0.
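
For illustration, the selection looks roughly like this (a sketch only: the two function names are as quoted above, but their exact signatures and the selection flag are assumptions, so please verify against cudaYUV.h and cudaRGB.h in your jetson-inference tree):

// Pick the CUDA converter that matches the caps the pipeline actually
// negotiates. The appsink caps in this thread are "video/x-raw,format=BGR",
// i.e. packed 3 bytes/pixel, so the RGB8 path applies. Feeding that buffer
// to the NV12 converter (1.5 bytes/pixel, planar luma + interleaved chroma)
// misreads the strides, which shows up as wrong colors and an apparent zoom.
// Also note that width/height must be the stream's display size (1920x1080
// per the OMX log above), not the decoder's padded allocation (1920x1088).
if( formatIsPackedRGB )   // hypothetical flag for the BGR appsink caps
{
    CUDA(cudaRGB8ToRGBA32((uchar3*)input, (float4*)output, width, height));
}
else                      // NV12, e.g. from the onboard CSI camera path
{
    CUDA(cudaNV12ToRGBA32((uint8_t*)input, (float4*)output, width, height));
}

One more thing to check: the converter name suggests RGB byte order while the appsink caps request BGR, so if the colors are still red/blue swapped after the correct converter is used, try format=RGB in the caps (or swap the channels afterwards).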