python - Gtk-CRITICAL - cannot show camera stream

I am trying to render some images I obtain from my webcam and use them in my TensorFlow code, but even that first step fails for a reason I don't understand.

I get the following error messages when running my code:

    Y: Tensor("layer2/Sigmoid:0", shape=(?, 84), dtype=float32)
    2017-10-10 11:24:42.207249: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:857] ARM64 does not support NUMA - returning NUMA node zero
    2017-10-10 11:24:42.207486: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 0 with properties: 
    name: GP10B
    major: 6 minor: 2 memoryClockRate (GHz) 1.3005
    pciBusID 0000:00:00.0
    Total memory: 7.67GiB
    Free memory: 3.22GiB
    2017-10-10 11:24:42.207589: I tensorflow/core/common_runtime/gpu/gpu_device.cc:976] DMA: 0 
    2017-10-10 11:24:42.207663: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 0:   Y 
    2017-10-10 11:24:42.207736: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GP10B, pci bus id: 0000:00:00.0)
    HIGHGUI ERROR: V4L/V4L2: VIDIOC_S_CROP
    opened capture
    read cap
    
    (lousyTestCam.py:6194): Gtk-WARNING **: gtk_disable_setlocale() must be called before gtk_init()
    
    (lousyTestCam.py:6194): Gtk-CRITICAL **: IA__gtk_type_unique: assertion 'GTK_TYPE_IS_OBJECT (parent_type)' failed
    
    (lousyTestCam.py:6194): Gtk-CRITICAL **: IA__gtk_type_new: assertion 'GTK_TYPE_IS_OBJECT (type)' failed
    
    (lousyTestCam.py:6194): Gtk-CRITICAL **: IA__gtk_type_unique: assertion 'GTK_TYPE_IS_OBJECT (parent_type)' failed
    Segmentation fault (core dumped)

This is a piece of my code:

    import tensorflow as tf
    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt
    
    import cv2
    
    n_visible = 128
    n_hidden = 84
    
    def model(X, W, b, W_prime, b_prime):
        with tf.name_scope("layer2"):
            Y = tf.nn.sigmoid(tf.matmul(X, W) + b)
        with tf.name_scope("layer3"):
            Z = tf.nn.sigmoid(tf.matmul(Y, W_prime) + b_prime)
    
        print("Y: " + str(Y))
        return Z
    
    X = tf.placeholder("float", [None, n_visible], name='X')
    
    W_init_max = 4 * np.sqrt(6. / (n_visible + n_hidden))  # standard formula
    W_init = tf.random_uniform(shape=[n_visible, n_hidden],
                               minval=-W_init_max,
                               maxval=W_init_max)
    
    W = tf.Variable(W_init, name='W')
    b = tf.Variable(tf.zeros([n_hidden]), name='b')
    
    # weights between encoder and decoder
    W_prime = tf.transpose(W)
    b_prime = tf.Variable(tf.zeros([n_visible]), name='b_prime')
    
    Z = model(X, W, b, W_prime, b_prime)
    
    cost = tf.reduce_sum(tf.pow(X - Z, 2))  # cost function (squared error) that we want to minimize
    train_op = tf.train.GradientDescentOptimizer(0.02).minimize(cost)  # training algorithm
    
    # training data
    trX = ....
    
    resultPar = tf.placeholder(tf.float32)
    gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
    with tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) as sess:
        tf.global_variables_initializer().run()
    
        cap = cv2.VideoCapture(0)
        print("opened capture")
        while True:
            ret, frame = cap.read()
            print("read cap")
            if frame is not None:
                cv2.imshow("input", frame)  # <----- doesn't work
                print("showing img")
                cv2.waitKey(100)
                print("waiting")
    
        cap.release()
        cv2.destroyAllWindows()

Could someone explain what the problem is and why my code does not run but crashes with a segmentation fault every time?

Thanks

I ran into the same problem when I tried to use OpenCV and get the screen resolution from GTK.
matplotlib uses a GTK backend by default, but you can change it (https://matplotlib.org/faq/usage_faq.html#what-is-a-backend) or use another library for plotting.
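For example, a minimal way to force a non-GUI backend (just a sketch; it assumes the Agg backend, which ships with standard matplotlib builds):

import matplotlib
matplotlib.use('Agg')            # select a non-GTK backend before pyplot is imported
import matplotlib.pyplot as plt  # pyplot now uses Agg and never initializes GTK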

Hi,

Not sure if this issue is related to the training code.
Could you simplify the sample so that it only creates the TF session?

Ex.

with tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) as sess:
        
    cap = cv2.VideoCapture(0)
    print("opened capture")
    while True:
        ret, frame = cap.read()
        print("read cap")
        if frame is not None:
            cv2.imshow("input", frame)
...

Thanks.

I get the same error with this exact piece of code:

import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

import cv2
#import openface


gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
with tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) as sess:
    tf.global_variables_initializer().run()
    
    cap = cv2.VideoCapture(0)
    print("opened capture")
    while True:
        ret, frame = cap.read()
        print("read cap")
        if frame is not None:
            height, width = frame.shape[:2]
            print("resized cap")
            cv2.imshow("input", frame)
            print("showing img")
            cv2.waitKey(100)
            print("waiting")

    cap.release()
    cv2.destroyAllWindows()

Thanks.

Hi,

We don't see this error in our environment.

  1. JetPack3.1
  2. Tensorflow wheel: https://github.com/peterlee0127/tensorflow-tx2
  3. OpenCV-3.2: http://dev.t7.ai/jetson/opencv/
  4. Sample (only the unnecessary module imports removed):
import tensorflow as tf
import numpy as np

import cv2
#import openface


gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
with tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) as sess:
    tf.global_variables_initializer().run()
    
    cap = cv2.VideoCapture(1)
    print("opened capture")
    while True:
        ret, frame = cap.read()
        print("read cap")
        if frame is not None:
            height, width = frame.shape[:2]
            print("resized cap")
            cv2.imshow("input", frame)
            print("showing img")
            cv2.waitKey(100)
            print("waiting")

    cap.release()
    cv2.destroyAllWindows()

Could you check whether this setup also fixes your problem?
Thanks.

Hello,
I can confirm the same imshow() segmentation fault with the following simple script. Below I have included the script that reproduces the problem, the relevant hardware and software versions, and my stack trace. FWIW, the same versions of OpenCV, TensorFlow, pandas, etc. worked just fine when I installed them on another machine in March. The only difference I can think of is that I am now using NVIDIA driver version 396 instead of 390.

How to create the problem

The following script works just fine; it captures and displays the frame as expected:

import cv2
cap = cv2.VideoCapture(0)
ret, frame = cap.read()
cv2.imshow('frame', frame)
cv2.waitKey(0)

However, if I add a line "import pandas" or "import tensorflow" before or after import cv2, I get a segmentation fault.

Relevant hardware and software information:

Hardware: x86 architecture (Intel Core i5)
GPU: GTX 1060
OS: Linux Mint 18.2

Ubuntu kernel: 4.8.0-53-generic #56~16.04.1-Ubuntu
OpenCV: 3.4
TensorFlow: 1.4.1
pandas: 0.20.1

CUDA: 9.1
NVIDIA driver: 396.26

Segmentation fault backtrace

(gdb) bt
#0 0x000000000052b88c in ?? ()
#1 0x00000000005653ab in PyErr_WarnEx ()
#2 0x00007fff840f7938 in ?? () from /usr/lib/python2.7/dist-packages/gobject/_gobject.x86_64-linux-gnu.so
#3 0x00007fffd539e9a4 in g_logv () from /lib/x86_64-linux-gnu/libglib-2.0.so.0
#4 0x00007fffd539ebcf in g_log () from /lib/x86_64-linux-gnu/libglib-2.0.so.0
#5 0x00007fffd5690d7d in ?? () from /usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0
#6 0x00007fffd569107b in g_type_register_static () from /usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0
#7 0x00007fffd5691695 in g_type_register_static_simple () from /usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0
#8 0x00007fffd5e173a4 in gdk_display_manager_get_type () from /usr/lib/x86_64-linux-gnu/libgdk-3.so.0
#9 0x00007fffd5e17409 in gdk_display_manager_get () from /usr/lib/x86_64-linux-gnu/libgdk-3.so.0
#10 0x00007fffd62fcc8b in ?? () from /usr/lib/x86_64-linux-gnu/libgtk-3.so.0
#11 0x00007fffd62d420b in ?? () from /usr/lib/x86_64-linux-gnu/libgtk-3.so.0
#12 0x00007fffd53a2f67 in g_option_context_parse () from /lib/x86_64-linux-gnu/libglib-2.0.so.0
#13 0x00007fffd62d3fe8 in gtk_parse_args () from /usr/lib/x86_64-linux-gnu/libgtk-3.so.0
#14 0x00007fffd62d4049 in gtk_init_check () from /usr/lib/x86_64-linux-gnu/libgtk-3.so.0
#15 0x00007fffd62d4099 in gtk_init () from /usr/lib/x86_64-linux-gnu/libgtk-3.so.0
#16 0x00007fffeef176c3 in cvInitSystem () from /usr/local/lib/libopencv_highgui.so.3.4
#17 0x00007fffeef1a764 in cvNamedWindow () from /usr/local/lib/libopencv_highgui.so.3.4
#18 0x00007fffeef1aead in cvShowImage () from /usr/local/lib/libopencv_highgui.so.3.4
#19 0x00007fffeef11349 in cv::imshow(cv::String const&, cv::_InputArray const&) () from /usr/local/lib/libopencv_highgui.so.3.4
#20 0x00007ffff67078d3 in pyopencv_cv_imshow(_object*, _object*, _object*) () from /usr/local/lib/python2.7/dist-packages/cv2.so
#21 0x00000000004bc3fa in PyEval_EvalFrameEx ()
#22 0x00000000004c136f in PyEval_EvalFrameEx ()
#23 0x00000000004c136f in PyEval_EvalFrameEx ()
#24 0x00000000004b9ab6 in PyEval_EvalCodeEx ()
#25 0x00000000004eb30f in ?? ()
#26 0x00000000004e5422 in PyRun_FileExFlags ()
#27 0x00000000004e3cd6 in PyRun_SimpleFileExFlags ()
#28 0x0000000000493ae2 in Py_Main ()
#29 0x00007ffff7810830 in __libc_start_main (main=0x4934c0 , argc=2, argv=0x7fffffffe058, init=, fini=, rtld_fini=,
stack_end=0x7fffffffe048) at …/csu/libc-start.c:291
#30 0x00000000004933e9 in _start ()

You may try changing the cv2.waitKey timeout to at least 1 (not sure, but I think this is required so that the thread used by cv2.imshow can be scheduled and draw), and make sure your capture is correctly opened.
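Something like this (just an illustrative sketch, assuming your camera is device 0):

import cv2

cap = cv2.VideoCapture(0)
if not cap.isOpened():                    # be sure the capture really opened
    raise RuntimeError("could not open capture device 0")

ret, frame = cap.read()
if ret and frame is not None:
    cv2.imshow("frame", frame)
    cv2.waitKey(1)                        # at least 1 ms so HighGUI can process events and draw

cap.release()
cv2.destroyAllWindows()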

Hi @Honey_Patouceul,
I tried your suggestion, but I am still getting a segmentation fault.

Is there some other place I should post this problem?

You may try creating the named window 'frame' before calling imshow (if it doesn't exist, imshow will create one) and see if that is where it goes wrong.
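For example (untested sketch):

import cv2

cv2.namedWindow("frame")              # create the window explicitly first
cap = cv2.VideoCapture(0)
ret, frame = cap.read()
if ret:
    cv2.imshow("frame", frame)        # imshow should now only update the existing window
    cv2.waitKey(0)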

It is interesting that the fault happens only when importing the other libraries. If you just import numpy as np before importing cv2, does it change?

I suppose there is a forum for opencv-python developers where you could ask further, but I don't know of one I can link to.

Someone else may advise better.

The issue turned out to be a conflict between the competing versions of GTK used by OpenCV (GTK 3) and TensorFlow (GTK 2).
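For reference, one way to check which GUI toolkit an OpenCV build is linked against is to look at its build information; a small sketch, assuming the cv2 Python bindings are installed:

import cv2

# print the GUI-related lines of OpenCV's build configuration, which
# show whether highgui was built against GTK+ 2.x or GTK+ 3.x
for line in cv2.getBuildInformation().splitlines():
    if "GTK" in line:
        print(line)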

It had nothing to do with the NVIDIA drivers, so false alarm on this thread. Please consider my comment resolved.