Jetson Nano - RTSP: failed to create X11 and OpenGL window

Hi,

I am having an issue connecting to an RTSP camera. When I run my Python script, I get two errors:
[OpenGL] failed to create X11 Window.
[OpenGL] failed to create OpenGL window

My Python script is:

import jetson.inference
import jetson.utils
import getpass
import paho.mqtt.publish as publish
from threading import Timer
import time

folderName = 'mlab'
currentUser = getpass.getuser()

net = jetson.inference.detectNet(argv=['--model=/home/'+currentUser+'/jetson-inference/python/training/detection/ssd/models/'+folderName+'/ssd-mobilenet.onnx', '--labels=/home/'+currentUser+'/jetson-inference/python/training/detection/ssd/models/'+folderName+'/labels.txt', '--input-blob=input_0', '--output-cvg=scores', '--output-bbox=boxes', '--threshold=0.20'])

camera = jetson.utils.videoSource("rtsp://root:flex.123@192.168.100.90:1234", argv=['--input-codec=mjpeg', '--width=1920', '--height=800'])
display = jetson.utils.glDisplay()

while display.IsOpen():
    img, width, height = camera.CaptureRGBA()
    detections = net.Detect(img, width, height)
    display.RenderOnce(img, width, height)
    display.SetTitle("Object Detection | Network {:.0f} FPS".format(net.GetNetworkFPS()))

Hi,
Please run a gst-launch-1.0 command to make sure the URI is valid:

$ gst-launch-1.0 uridecodebin uri=rtsp://root:flex.123@192.168.100.90:1234 ! nvoverlaysink

Hi,

Thanks for the prompt response!

I tried what you suggested and I got:

$ gst-launch-1.0 uridecodebin uri=rtsp://root:flex.123@192.168.100.90:1234 ! nvoverlaysink
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Progress: (open) Opening Stream
Progress: (connect) Connecting to rtsp://root:flex.123@192.168.100.90:1234
ERROR: from element /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstRTSPSrc:source: Could not open resource for reading and writing.
Additional debug info:
gstrtspsrc.c(7469): gst_rtspsrc_retrieve_sdp (): /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstRTSPSrc:source:
Failed to connect. (Generic error)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...

Hi,
Please try putting the URI in single quotes:

uri='rtsp://root:flex.123@192.168.100.90:1234'
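
For example, the same test pipeline with the quoted URI:

$ gst-launch-1.0 uridecodebin uri='rtsp://root:flex.123@192.168.100.90:1234' ! nvoverlaysink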

Hi @DaneLLL,

I get the same error

-rodimir_v

Hi @DaneLLL,

After further research this worked for me:
gst-launch-1.0 rtspsrc location=rtsp://192.168.100.90/axis-media/media.amp ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! gtksink

Now, how do I improve the speed? Are there elements in the above command that I don't need? And how do I implement this in my Python script?

Please advise

Thank you so much!
-rodimir_v

I tried using:
camera = jetson.utils.videoSource("rtsp://192.168.100.90/axis-media/media.amp", argv=['--input-codec=h264', '--width=1920', '--height=800'])
display = jetson.utils.glDisplay()

and I get:

[gstreamer] gstDecoder -- creating decoder for 192.168.100.90
Opening in BLOCKING MODE
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
[gstreamer] gstDecoder -- discovered video resolution: 1920x1080 (framerate 0.000000 Hz)
[gstreamer] gstDecoder -- discovered video caps: video/x-h264, stream-format=(string)byte-stream, alignment=(string)au, level=(string)4.2, profile=(string)high, width=(int)1920, height=(int)1080, framerate=(fraction)0/1, interlace-mode=(string)progressive, chroma-format=(string)4:2:0, bit-depth-luma=(uint)8, bit-depth-chroma=(uint)8, parsed=(boolean)true
[gstreamer] gstDecoder -- pipeline string:
[gstreamer] rtspsrc location=rtsp://192.168.100.90:554/axis-media/media.amp latency=2000 ! queue ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! video/x-raw, width=(int)1920, height=(int)800, format=(string)NV12 ! appsink name=mysink
[video] created gstDecoder from rtsp://192.168.100.90:554/axis-media/media.amp

gstDecoder video options:

-- URI: rtsp://192.168.100.90:554/axis-media/media.amp
- protocol: rtsp
- location: 192.168.100.90
- port: 554
-- deviceType: ip
-- ioType: input
-- codec: h264
-- width: 1920
-- height: 800
-- frameRate: 0.000000
-- bitRate: 0
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: none
-- loop: 0
-- rtspLatency 2000

[OpenGL] glDisplay -- X screen 0 resolution: 1920x1080
[OpenGL] glDisplay -- X window resolution: 1920x1080
[OpenGL] failed to create X11 Window.
[OpenGL] failed to create OpenGL window
Traceback (most recent call last):
File "axis.py", line 26, in <module>
display = jetson.utils.glDisplay()
Exception: jetson.utils -- failed to create glDisplay device

Hi @Rodimir_V, can you try creating the glDisplay object before you create the videoSource object?
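
For example, simply swapping the order of the two lines from your snippet above:

display = jetson.utils.glDisplay()
camera = jetson.utils.videoSource("rtsp://192.168.100.90/axis-media/media.amp", argv=['--input-codec=h264', '--width=1920', '--height=800'])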

Hi @dusty_nv, I moved the display up and ran the script, and I get the error below.
Should I still use CaptureRGBA, or should it be something else now? Thanks!

detectNet -- loading detection network model from:
-- prototxt NULL
-- model /home/ameiiot/jetson-inference/python/training/detection/ssd/models/mlab/ssd-mobilenet.onnx
-- input_blob 'input_0'
-- output_cvg 'scores'
-- output_bbox 'boxes'
-- mean_pixel 0.000000
-- mean_binary NULL
-- class_labels /home/ameiiot/jetson-inference/python/training/detection/ssd/models/mlab/labels.txt
-- threshold 0.200000
-- batch_size 1

[TRT] TensorRT version 7.1.3
[TRT] loading NVIDIA plugins...
[TRT] Registered plugin creator - ::GridAnchor_TRT version 1
[TRT] Registered plugin creator - ::NMS_TRT version 1
[TRT] Registered plugin creator - ::Reorg_TRT version 1
[TRT] Registered plugin creator - ::Region_TRT version 1
[TRT] Registered plugin creator - ::Clip_TRT version 1
[TRT] Registered plugin creator - ::LReLU_TRT version 1
[TRT] Registered plugin creator - ::PriorBox_TRT version 1
[TRT] Registered plugin creator - ::Normalize_TRT version 1
[TRT] Registered plugin creator - ::RPROI_TRT version 1
[TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT] Could not register plugin creator - ::FlattenConcat_TRT version 1
[TRT] Registered plugin creator - ::CropAndResize version 1
[TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT] Registered plugin creator - ::Proposal version 1
[TRT] Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT] Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT] Registered plugin creator - ::Split version 1
[TRT] Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT] detected model format - ONNX (extension '.onnx')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file /home/ameiiot/jetson-inference/python/training/detection/ssd/models/mlab/ssd-mobilenet.onnx.1.1.7103.GPU.FP16.engine
[TRT] loading network plan from engine cache... /home/ameiiot/jetson-inference/python/training/detection/ssd/models/mlab/ssd-mobilenet.onnx.1.1.7103.GPU.FP16.engine
[TRT] device GPU, loaded /home/ameiiot/jetson-inference/python/training/detection/ssd/models/mlab/ssd-mobilenet.onnx
[TRT] Deserialize required 3214655 microseconds.
[TRT]
[TRT] CUDA engine context initialized on device GPU:
[TRT] -- layers 104
[TRT] -- maxBatchSize 1
[TRT] -- workspace 0
[TRT] -- deviceMemory 23417344
[TRT] -- bindings 3
[TRT] binding 0
-- index 0
-- name 'input_0'
-- type FP32
-- in/out INPUT
-- # dims 4
-- dim #0 1 (SPATIAL)
-- dim #1 3 (SPATIAL)
-- dim #2 300 (SPATIAL)
-- dim #3 300 (SPATIAL)
[TRT] binding 1
-- index 1
-- name 'scores'
-- type FP32
-- in/out OUTPUT
-- # dims 3
-- dim #0 1 (SPATIAL)
-- dim #1 3000 (SPATIAL)
-- dim #2 10 (SPATIAL)
[TRT] binding 2
-- index 2
-- name 'boxes'
-- type FP32
-- in/out OUTPUT
-- # dims 3
-- dim #0 1 (SPATIAL)
-- dim #1 3000 (SPATIAL)
-- dim #2 4 (SPATIAL)
[TRT]
[TRT] binding to input 0 input_0 binding index: 0
[TRT] binding to input 0 input_0 dims (b=1 c=3 h=300 w=300) size=1080000
[TRT] binding to output 0 scores binding index: 1
[TRT] binding to output 0 scores dims (b=1 c=3000 h=10 w=1) size=120000
[TRT] binding to output 1 boxes binding index: 2
[TRT] binding to output 1 boxes dims (b=1 c=3000 h=4 w=1) size=48000
[TRT]
[TRT] device GPU, /home/ameiiot/jetson-inference/python/training/detection/ssd/models/mlab/ssd-mobilenet.onnx initialized.
[TRT] detectNet -- number object classes: 10
[TRT] detectNet -- maximum bounding boxes: 3000
[TRT] detectNet -- loaded 10 class info entries
[TRT] detectNet -- number of object classes: 10
[OpenGL] glDisplay -- X screen 0 resolution: 1920x1080
[OpenGL] glDisplay -- X window resolution: 1920x1080
[OpenGL] glDisplay -- display device initialized (1920x1080)
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstDecoder -- creating decoder for 192.168.100.90
Opening in BLOCKING MODE
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
[gstreamer] gstDecoder -- discovered video resolution: 1920x1080 (framerate 0.000000 Hz)
[gstreamer] gstDecoder -- discovered video caps: video/x-h264, stream-format=(string)byte-stream, alignment=(string)au, level=(string)4.2, profile=(string)high, width=(int)1920, height=(int)1080, framerate=(fraction)0/1, interlace-mode=(string)progressive, chroma-format=(string)4:2:0, bit-depth-luma=(uint)8, bit-depth-chroma=(uint)8, parsed=(boolean)true
[gstreamer] gstDecoder -- pipeline string:
[gstreamer] rtspsrc location=rtsp://192.168.100.90:554/axis-media/media.amp latency=2000 ! queue ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! video/x-raw, width=(int)1920, height=(int)800, format=(string)NV12 ! appsink name=mysink
[video] created gstDecoder from rtsp://192.168.100.90:554/axis-media/media.amp

gstDecoder video options:

-- URI: rtsp://192.168.100.90:554/axis-media/media.amp
- protocol: rtsp
- location: 192.168.100.90
- port: 554
-- deviceType: ip
-- ioType: input
-- codec: h264
-- width: 1920
-- height: 800
-- frameRate: 0.000000
-- bitRate: 0
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: none
-- loop: 0
-- rtspLatency 2000

Traceback (most recent call last):
File "axis.py", line 31, in <module>
img, width, height = camera.CaptureRGBA()
AttributeError: 'jetson.utils.videoSource' object has no attribute 'CaptureRGBA'

Hi @Rodimir_V, you should use Capture() instead of CaptureRGBA()

You can also create the display like this: display = jetson.utils.videoOutput('display://0')
This will internally create a glDisplay object, although using glDisplay directly should still be fine too.

Hi @dusty_nv, I tried using display = jetson.utils.videoOutput('display://0')

and I get this error:

AttributeError: 'jetson.utils.videoOutput' object has no attribute 'IsOpen'

Thanks for the prompt response

OK, change IsOpen() to IsStreaming()

That seems to work, but now I get a different error:

Traceback (most recent call last):
File "axis.py", line 32, in <module>
img, width, height = camera.Capture()
TypeError: 'jetson.utils.cudaImage' object is not iterable

OK, change it to img = camera.Capture()

Then you can access width and height via img.width and img.height

You can update your script for the videoSource/videoOutput interface like this one:
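
A rough sketch of the full updated script, pieced together from the fixes above (the model paths and stream URL are the ones from your earlier posts):

import getpass

import jetson.inference
import jetson.utils

folderName = 'mlab'
modelDir = '/home/' + getpass.getuser() + '/jetson-inference/python/training/detection/ssd/models/' + folderName

# load the custom SSD-Mobilenet model with the same arguments as before
net = jetson.inference.detectNet(argv=[
    '--model=' + modelDir + '/ssd-mobilenet.onnx',
    '--labels=' + modelDir + '/labels.txt',
    '--input-blob=input_0',
    '--output-cvg=scores',
    '--output-bbox=boxes',
    '--threshold=0.20'])

# create the display before the camera, per the earlier fix
display = jetson.utils.videoOutput('display://0')
camera = jetson.utils.videoSource('rtsp://192.168.100.90/axis-media/media.amp', argv=['--input-codec=h264', '--width=1920', '--height=800'])

while display.IsStreaming():
    img = camera.Capture()                               # returns a single cudaImage
    detections = net.Detect(img, img.width, img.height)  # dimensions come from the image itself
    display.Render(img)
    display.SetStatus('Object Detection | Network {:.0f} FPS'.format(net.GetNetworkFPS()))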


Hi @dusty_nv, that works! Thanks again for your help!

I changed the script to this:
display = jetson.utils.videoOutput('display://0')
camera = jetson.utils.videoSource("rtsp://root:flex.123@192.168.100.90:554/axis-media/media.amp", argv=['--input-codec=h264', '--width=1920', '--height=800'])

while display.IsStreaming():
    img = camera.Capture()
    detections = net.Detect(img, img.width, img.height)
    display.Render(img)
    display.SetStatus("Object Detection | Network {:.0f} FPS".format(net.GetNetworkFPS()))
