OpenGL error

I am trying to run a basic object detection script on a video file using the jetson-inference library. However, I run into this error, which I have no clue how to fix.

import jetson_utils_python
import jetson_inference_python
import cv2

net = jetson_inference_python.detectNet('ssd-mobilenet-v2', threshold=0.5)
cam = jetson_utils_python.videoSource("/home/nvidia/Desktop/Project_Files/CarsDrivingUnderBridge.mp4")
display = jetson_utils_python.glDisplay()

while display.IsOpen():
    img, width, height = cam.Capture()
    detections = net.Detect(img, width, height)
    display.RenderOnce(img, width, height)

This is the error I got:

detectNet -- loading detection network model from:
          -- model        networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
          -- input_blob   'Input'
          -- output_blob  'NMS'
          -- output_count 'NMS_1'
          -- class_labels networks/SSD-Mobilenet-v2/ssd_coco_labels.txt
          -- threshold    0.500000
          -- batch_size   1

[TRT]    TensorRT version 7.1.0
[TRT]    loading NVIDIA plugins...
[TRT]    Plugin creator registration succeeded - ::GridAnchor_TRT
[TRT]    Plugin creator registration succeeded - ::NMS_TRT
[TRT]    Plugin creator registration succeeded - ::Reorg_TRT
[TRT]    Plugin creator registration succeeded - ::Region_TRT
[TRT]    Plugin creator registration succeeded - ::Clip_TRT
[TRT]    Plugin creator registration succeeded - ::LReLU_TRT
[TRT]    Plugin creator registration succeeded - ::PriorBox_TRT
[TRT]    Plugin creator registration succeeded - ::Normalize_TRT
[TRT]    Plugin creator registration succeeded - ::RPROI_TRT
[TRT]    Plugin creator registration succeeded - ::BatchedNMS_TRT
[TRT]    Could not register plugin creator:  ::FlattenConcat_TRT
[TRT]    Plugin creator registration succeeded - ::CropAndResize
[TRT]    Plugin creator registration succeeded - ::DetectionLayer_TRT
[TRT]    Plugin creator registration succeeded - ::Proposal
[TRT]    Plugin creator registration succeeded - ::ProposalLayer_TRT
[TRT]    Plugin creator registration succeeded - ::PyramidROIAlign_TRT
[TRT]    Plugin creator registration succeeded - ::ResizeNearest_TRT
[TRT]    Plugin creator registration succeeded - ::Split
[TRT]    Plugin creator registration succeeded - ::SpecialSlice_TRT
[TRT]    Plugin creator registration succeeded - ::InstanceNormalization_TRT
[TRT]    detected model format - UFF  (extension '.uff')
[TRT]    desired precision specified for GPU: FASTEST
[TRT]    requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT]    native precisions detected for GPU:  FP32, FP16
[TRT]    selecting fastest native precision for GPU:  FP16
[TRT]    attempting to open engine cache file /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff.1.1.7100.GPU.FP16.engine
[TRT]    loading network plan from engine cache... /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff.1.1.7100.GPU.FP16.engine
[TRT]    device GPU, loaded /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
[TRT]    Deserialize required 2533827 microseconds.
[TRT]    
[TRT]    CUDA engine context initialized on device GPU:
[TRT]       -- layers       118
[TRT]       -- maxBatchSize 1
[TRT]       -- workspace    0
[TRT]       -- deviceMemory 22350848
[TRT]       -- bindings     3
[TRT]       binding 0
                -- index   0
                -- name    'Input'
                -- type    FP32
                -- in/out  INPUT
                -- # dims  3
                -- dim #0  3 (SPATIAL)
                -- dim #1  300 (SPATIAL)
                -- dim #2  300 (SPATIAL)
[TRT]       binding 1
                -- index   1
                -- name    'NMS'
                -- type    FP32
                -- in/out  OUTPUT
                -- # dims  3
                -- dim #0  1 (SPATIAL)
                -- dim #1  100 (SPATIAL)
                -- dim #2  7 (SPATIAL)
[TRT]       binding 2
                -- index   2
                -- name    'NMS_1'
                -- type    FP32
                -- in/out  OUTPUT
                -- # dims  3
                -- dim #0  1 (SPATIAL)
                -- dim #1  1 (SPATIAL)
                -- dim #2  1 (SPATIAL)
[TRT]    
[TRT]    binding to input 0 Input  binding index:  0
[TRT]    binding to input 0 Input  dims (b=1 c=3 h=300 w=300) size=1080000
[TRT]    binding to output 0 NMS  binding index:  1
[TRT]    binding to output 0 NMS  dims (b=1 c=1 h=100 w=7) size=2800
[TRT]    binding to output 1 NMS_1  binding index:  2
[TRT]    binding to output 1 NMS_1  dims (b=1 c=1 h=1 w=1) size=4
[TRT]    
[TRT]    device GPU, /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff initialized.
[TRT]    W = 7  H = 100  C = 1
[TRT]    detectNet -- maximum bounding boxes:  100
[TRT]    detectNet -- loaded 91 class info entries
[TRT]    detectNet -- number of object classes:  91
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstDecoder -- creating decoder for /home/nvidia/Desktop/Project_Files/CarsDrivingUnderBridge.mp4
Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
[gstreamer] gstDecoder -- discovered video resolution: 1280x720  (framerate 29.970030 Hz)
[gstreamer] gstDecoder -- discovered video caps:  video/x-h264, stream-format=(string)byte-stream, alignment=(string)au, level=(string)3.1, profile=(string)constrained-baseline, width=(int)1280, height=(int)720, framerate=(fraction)30000/1001, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, chroma-format=(string)4:2:0, bit-depth-luma=(uint)8, bit-depth-chroma=(uint)8, parsed=(boolean)true
[gstreamer] gstDecoder -- pipeline string:
[gstreamer] filesrc location=/home/nvidia/Desktop/Project_Files/CarsDrivingUnderBridge.mp4 ! qtdemux ! queue ! h264parse ! omxh264dec ! video/x-raw ! appsink name=mysink
[video]  created gstDecoder from file:///home/nvidia/Desktop/Project_Files/CarsDrivingUnderBridge.mp4
------------------------------------------------
gstDecoder video options:
------------------------------------------------
  -- URI: file:///home/nvidia/Desktop/Project_Files/CarsDrivingUnderBridge.mp4
     - protocol:  file
     - location:  /home/nvidia/Desktop/Project_Files/CarsDrivingUnderBridge.mp4
     - extension: mp4
  -- deviceType: file
  -- ioType:     input
  -- codec:      h264
  -- width:      1280
  -- height:     720
  -- frameRate:  29.970030
  -- bitRate:    0
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: none
  -- loop:       0
------------------------------------------------
[OpenGL] glDisplay -- X screen 0 resolution:  1920x1080
[OpenGL] glDisplay -- X window resolution:    1920x1080
[OpenGL] failed to create X11 Window.
[OpenGL] failed to create OpenGL window
Traceback (most recent call last):
  File "/home/nvidia/Desktop/Project_Files/PyProjects/jetson_infer_OpenCV.py", line 15, in <module>
    display= jetson_utils_python.glDisplay()
Exception: jetson.utils -- failed to create glDisplay device

How do I get the glDisplay to work in the code?

Hi @shane222, does it work before you added the import cv2 statement? If so, can you try creating the glDisplay object first, and then importing the cv2 module after?

import jetson_utils_python
import jetson_inference_python

net = jetson_inference_python.detectNet('ssd-mobilenet-v2', threshold=0.5)
cam = jetson_utils_python.videoSource("/home/nvidia/Desktop/Project_Files/CarsDrivingUnderBridge.mp4")
display = jetson_utils_python.glDisplay()

import cv2

I ran into this problem:

[OpenGL] glDisplay -- X screen 0 resolution:  1920x1080
[OpenGL] glDisplay -- X window resolution:    1920x1080
[OpenGL] glDisplay -- display device initialized (1920x1080)
Traceback (most recent call last):
  File "/home/nvidia/Desktop/Project_Files/PyProjects/jetson_infer.py", line 35, in <module>
    import cv2
  File "/usr/lib/python3.6/dist-packages/cv2/__init__.py", line 89, in <module>
    bootstrap()
  File "/usr/lib/python3.6/dist-packages/cv2/__init__.py", line 79, in bootstrap
    import cv2
ImportError: /usr/lib/aarch64-linux-gnu/libgomp.so.1: cannot allocate memory in static TLS block

which is supposedly a problem with using the OpenCV library on an arm64 system. But the fix can be found in aarch64: libgomp.so.1: cannot allocate memory in static TLS block · Issue #14884 · opencv/opencv · GitHub, by doing export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libgomp.so.1 before running the script.
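For convenience, the same preload trick can be applied from inside the script by re-executing the interpreter with LD_PRELOAD set before cv2 is ever imported. This is only a sketch of the idea, not part of jetson-inference or OpenCV: the libgomp path is copied from the error message above, and merged_preload / ensure_preload are made-up helper names.

```python
import os
import sys

# Path taken from the ImportError above; adjust for your system.
LIBGOMP = "/usr/lib/aarch64-linux-gnu/libgomp.so.1"

def merged_preload(current, lib=LIBGOMP):
    """Return an LD_PRELOAD value that includes lib exactly once."""
    entries = [e for e in current.split(":") if e]
    if lib in entries:
        return current          # already preloaded, nothing to do
    return ":".join([lib] + entries)

def ensure_preload():
    """Re-exec the interpreter with LD_PRELOAD set, if it isn't already.

    LD_PRELOAD must be in the environment before the process starts,
    so setting os.environ alone is not enough -- we have to re-exec.
    """
    current = os.environ.get("LD_PRELOAD", "")
    wanted = merged_preload(current)
    if wanted != current:
        os.environ["LD_PRELOAD"] = wanted
        os.execv(sys.executable, [sys.executable] + sys.argv)

# ensure_preload()   # call this at the very top, before importing cv2
```

After the re-exec, the dynamic linker loads libgomp early enough that cv2's static TLS allocation succeeds, so you don't have to remember to export the variable in every shell.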

So I guess the problem is fixed, thanks! Just a side note: are there any tracker modules I can use from jetson-inference? Or do I have to resort to using DeepStream for that purpose?

Glad you were able to get it working - the jetson-inference library doesn't have a temporal object tracker, but as you pointed out, DeepStream does (as does OpenCV, I believe).
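In the meantime, a lightweight option is to associate detectNet boxes across frames yourself. Below is a minimal sketch of IoU-based greedy matching (not part of jetson-inference; iou and match_detections are hypothetical helper names, and boxes are plain (left, top, right, bottom) tuples):

```python
def iou(a, b):
    """Intersection-over-union of two (left, top, right, bottom) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_detections(tracks, detections, min_iou=0.3):
    """Greedily match previous-frame tracks to current detections by IoU.

    tracks:     dict of track_id -> box from the previous frame
    detections: list of boxes from the current frame
    Returns a dict mapping track_id -> index into detections.
    """
    matches = {}
    used = set()
    for tid, tbox in tracks.items():
        best, best_iou = None, min_iou
        for i, dbox in enumerate(detections):
            if i in used:
                continue
            score = iou(tbox, dbox)
            if score > best_iou:
                best, best_iou = i, score
        if best is not None:
            matches[tid] = best
            used.add(best)
    return matches
```

In the capture loop you would feed it the Left/Top/Right/Bottom fields of each detection returned by net.Detect(), register unmatched detections as new track IDs, and drop tracks that stay unmatched for a few frames.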

Additional note: you may find 2 examples of tracking here:
OpenCV KCF tracker (in C++, but it should be straightforward to convert into Python. However, with recent L4T releases, replace nvcamerasrc with nvarguscamerasrc and change its output format from I420 to NV12):
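For reference, an updated camera capture pipeline on recent L4T might look something like this (a sketch only, not taken from the example above; the width, height, and framerate caps are placeholders for your camera mode):

```shell
# nvarguscamerasrc replaces the deprecated nvcamerasrc and outputs NV12 in NVMM memory;
# nvvidconv copies it out to CPU memory for downstream elements such as appsink.
gst-launch-1.0 nvarguscamerasrc ! \
  'video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1, format=NV12' ! \
  nvvidconv ! 'video/x-raw, format=BGRx' ! videoconvert ! \
  'video/x-raw, format=BGR' ! appsink
```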

Or this one from mp4 file with Kalman or dlib in python:
