Problem using Jetson Camera module with ROS and OpenCV 3.2.0 on Jetson TX1

I’m using this code snippet to access the Jetson built-in camera directly through Python and OpenCV:

cap = cv2.VideoCapture("nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)640, height=(int)480, format=(string)I420, framerate=(fraction)30/1 ! nvvidconv flip-method=2 ! video/x-raw, format=(string)I420 ! videoconvert ! video/x-raw, format=(string)BGR ! appsink")

It runs fine on the Jetson, but when I run it inside a ROS node it gives this error:

OpenCV Error: Sizes of input arguments do not match (The lower bounary is neither an array of the same size and same type as src, nor a scalar) in inRange, file /hdd/buildbot/slave_jetson_tx_3/35-O4T-L4T-R24/opencv/modules/core/src/arithm.cpp, line 2703
Traceback (most recent call last):
  File "/home/ubuntu/catkin_ws/src/auto_drive/scripts/traffic_light.py", line 100, in <module>
    traffic_light()
  File "/home/ubuntu/catkin_ws/src/auto_drive/scripts/traffic_light.py", line 36, in traffic_light
    mask_red = cv2.inRange(frame, lower_red, upper_red)
cv2.error: /hdd/buildbot/slave_jetson_tx_3/35-O4T-L4T-R24/opencv/modules/core/src/arithm.cpp:2703: error: (-209) The lower bounary is neither an array of the same size and same type as src, nor a scalar in function inRange
Exception in thread Thread-3 (most likely raised during interpreter shutdown):
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
  File "/usr/lib/python2.7/threading.py", line 754, in run
  File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/impl/tcpros_base.py", line 154, in run
<type 'exceptions.AttributeError'>: 'NoneType' object has no attribute 'timeout'

The error occurs on this line:

mask_red = cv2.inRange(frame, lower_red, upper_red)

Hi,

Thanks for your question.
I just checked cv2.inRange(); it works properly on my side.

Based on the error log, could you check whether the ‘lower_red’ argument is valid?
Or could you share the traffic_light.py source so we can debug it?
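For reference, the bounds check that cv2.inRange performs can be sketched in plain NumPy (this is an illustration of its semantics, not OpenCV’s actual implementation): the lower/upper arguments must be scalars or arrays compatible with the source image, and the most common way to hit this exact error is `frame` being None because cap.read() failed.

```python
import numpy as np

def in_range(frame, lower, upper):
    # Per-pixel, per-channel bounds check, mimicking cv2.inRange semantics.
    # If cap.read() failed, frame is None and cv2.inRange raises -- fail early.
    if frame is None:
        raise ValueError("frame is None -- cap.read() probably failed")
    mask = np.all((frame >= lower) & (frame <= upper), axis=-1)
    return np.where(mask, 255, 0).astype(np.uint8)

# A 2x2 BGR "frame" with two pixels inside the red bounds from the script.
frame = np.array([[[80, 80, 240], [0, 0, 0]],
                  [[255, 255, 255], [80, 80, 240]]], dtype=np.uint8)
lower_red = np.array([60, 60, 220], dtype=np.uint8)
upper_red = np.array([110, 100, 255], dtype=np.uint8)
print(in_range(frame, lower_red, upper_red))  # 255 where all channels are in range
```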

cv2.error: /hdd/buildbot/slave_jetson_tx_3/35-O4T-L4T-R24/opencv/modules/core/src/arithm.cpp:2703: error: (-209) The lower bounary is neither an array of the same size and same type as src, nor a scalar in function inRange

Here’s the code:

#!/usr/bin/env python
# Software License Agreement (BSD License)

import rospy
from std_msgs.msg import String
import cv2
import numpy as np
import argparse
import sys

cap = cv2.VideoCapture("nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)640, height=(int)480, format=(string)I420, framerate=(fraction)30/1 ! nvvidconv flip-method=2 ! video/x-raw, format=(string)I420 ! videoconvert ! video/x-raw, format=(string)BGR ! appsink")

if not cap.isOpened():
    print "Camera did not open\n"
    sys.exit()

def traffic_light():
    pub = rospy.Publisher('stop_move', String, queue_size=10)
    rospy.init_node('traffic_light', anonymous=True)
    rate = rospy.Rate(10) # 10hz

    stop_move = 'OFF'

    while not rospy.is_shutdown():
        _, frame = cap.read()

        #red
        lower_red = np.array([60, 60, 220], dtype=np.uint8)
        upper_red = np.array([110, 100, 255], dtype=np.uint8)
        
        #green
        lower_green = np.array([160, 250, 0], dtype=np.uint8)
        upper_green = np.array([255, 255, 120], dtype=np.uint8)
      
        mask_red = cv2.inRange(frame, lower_red, upper_red)
        res_red = cv2.bitwise_and(frame,frame, mask= mask_red)

        mask_green = cv2.inRange(frame, lower_green, upper_green)
        res_green = cv2.bitwise_and(frame,frame, mask= mask_green)

        kernel = np.ones((5,5),np.uint8)
        
        closing_red = cv2.morphologyEx(mask_red, cv2.MORPH_CLOSE, kernel)
        blur_red = cv2.GaussianBlur(closing_red,(15,15),0)

        closing_green = cv2.morphologyEx(mask_green, cv2.MORPH_CLOSE, kernel)
        blur_green = cv2.GaussianBlur(closing_green,(15,15),0)

        circles_red = cv2.HoughCircles(blur_red,cv2.HOUGH_GRADIENT,1,20,
                            param1=50,param2=30,minRadius=0,maxRadius=0)

        circles_green = cv2.HoughCircles(blur_green,cv2.HOUGH_GRADIENT,1,20,
                                param1=50,param2=30,minRadius=0,maxRadius=0)

        if circles_red is not None:
            # convert the (x, y) coordinates and radius of the circles to integers

            circles_red = np.round(circles_red[0, :]).astype("int")
                    
            # loop over the (x, y) coordinates and radius of the circles
            for (x, y, r) in circles_red:
                if r > 10 and r < 30:
                    cv2.circle(frame, (x, y), r, (255, 0, 0), 2, 8, 0)
                    cv2.rectangle(frame, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)
                    cv2.putText(frame,'Red Light-STOP!', (10, 100), cv2.FONT_HERSHEY_SIMPLEX, 2, 155)
                    cv2.imshow('gray',frame)
                    cv2.waitKey(3)
                    stop_move = 'stop1'

        if circles_green is not None:
            # convert the (x, y) coordinates and radius of the circles to integers

            circles_green = np.round(circles_green[0, :]).astype("int")
                    
            # loop over the (x, y) coordinates and radius of the circles
            for (x, y, r) in circles_green:
                if r > 10 and r < 30:
                    cv2.circle(frame, (x, y), r, (255, 0, 0), 2, 8, 0)
                    cv2.rectangle(frame, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)
                    cv2.putText(frame,'Green Light-GO!', (10, 100), cv2.FONT_HERSHEY_SIMPLEX, 2, 155)
                    cv2.imshow('gray',frame)
                    cv2.waitKey(3)
                    stop_move = 'move_1'

        cv2.imshow('frame',frame)
        cv2.waitKey(3)


##        cv2.imshow("Image window", cv_image)
        
        
        rospy.loginfo(stop_move)
        pub.publish(stop_move)
        rate.sleep()

if __name__ == '__main__':
    
    try:
        traffic_light()
    except rospy.ROSInterruptException:
        pass
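One thing worth guarding in the loop above: cap.read() returns a success flag that the script discards with `_`, and when the pipeline is broken inside ROS, `frame` comes back as None, which is exactly what makes cv2.inRange blow up. A minimal sketch of the guard, using a stand-in capture class (hypothetical, just so the snippet runs without a camera):

```python
class FakeCapture(object):
    """Stand-in for cv2.VideoCapture so the guard can be shown without
    a camera; read() returns (success, frame) just like the real one."""
    def __init__(self, frames):
        self._frames = list(frames)

    def read(self):
        if self._frames:
            frame = self._frames.pop(0)
            return frame is not None, frame
        return False, None

def read_valid_frames(cap, max_reads=5):
    """Yield only frames for which read() actually succeeded."""
    for _ in range(max_reads):
        ret, frame = cap.read()
        if not ret or frame is None:
            continue  # in the ROS loop you would log a warning and retry instead
        yield frame

cap = FakeCapture(["frame0", None, "frame2"])  # None mimics a dropped frame
print(list(read_valid_frames(cap)))  # -> ['frame0', 'frame2']
```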

Hi,

Thanks for your feedback.

I slightly modified your source code to run on the TX1 and it works properly, so the code itself is good.

Could you try to launch this GStreamer pipeline inside the ROS node and let us know the results?

gst-launch-1.0 nvcamerasrc fpsRange="30.0 30.0" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! nvvidconv flip-method=2 ! 'video/x-raw(memory:NVMM), format=(string)I420' ! nvoverlaysink -e

Thanks.

Thanks for the edit.

I have tried it, but it gives the same error. Is there another way to use the Jetson camera?

I have run this command in the terminal and it works perfectly, but it does not work in the Python (OpenCV) code:

cap = cv2.VideoCapture("gst-launch-1.0 nvcamerasrc fpsRange='30.0 30.0' ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! nvtee ! nvvidconv flip-method=2 ! 'video/x-raw(memory:NVMM), format=(string)I420' ! nvoverlaysink -e")

if not cap.isOpened():
    print "Camera did not open\n"
    sys.exit()

It prints out:
Camera did not open

I’m new to the Jetson environment, so is there any library that I’m missing?

Hi,

Sorry for the late reply.

It looks like you can’t open the camera inside ROS even with GStreamer.
Could you check whether the nvcamera interface is supported and enabled inside the ROS node?

Thanks.

I did some research but could not find a way to enable the nvcamera interface in ROS. Could you please tell me how to do this?

Hi,

Sorry for the late reply.

Could you share the steps you used to set up the ROS environment, so we can check how to enable nvcamera inside the ROS node?
Thanks.

Hi, are you following the general install procedure here? http://www.jetsonhacks.com/2016/10/12/robot-operating-system-ros-on-nvidia-jetson-tx1/ Maybe a different version of OpenCV got installed?

To get frames into OpenCV, you have to convert them into a suitable format such as BGR and use appsink, so that frames are sent to your OpenCV application rather than to nvoverlaysink.

Not sure why you use nvtee. tee is for duplicating a stream into two subsequent pipelines, each starting with a queue; it seems useless in your case.

gst-launch is the command for launching a pipeline from a shell; you should not include it in the pipeline string you pass to OpenCV.
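Those shell-only pieces can be stripped mechanically; here is a small sketch (the helper name is made up for illustration) that removes the launcher and the trailing -e flag, both of which are meant for the shell, not for the pipeline description itself:

```python
def strip_launch_tokens(cmd):
    """Drop the shell-only pieces of a gst-launch command line:
    the gst-launch-1.0 launcher itself and the trailing -e flag
    (send EOS on shutdown). What remains is the pipeline description.
    Helper name is hypothetical, for illustration only."""
    tokens = cmd.split()
    if tokens and tokens[0] == "gst-launch-1.0":
        tokens = tokens[1:]
    if tokens and tokens[-1] == "-e":
        tokens = tokens[:-1]
    return " ".join(tokens)

cmd = "gst-launch-1.0 videotestsrc ! videoconvert ! appsink -e"
print(strip_launch_tokens(cmd))  # -> videotestsrc ! videoconvert ! appsink
```

You would still need to fix the sink and the quoting by hand, as in the corrected string below.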

You could try to change:

cap = cv2.VideoCapture("gst-launch-1.0 nvcamerasrc fpsRange='30.0 30.0' ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! nvtee ! nvvidconv flip-method=2 ! 'video/x-raw(memory:NVMM), format=(string)I420' ! nvoverlaysink -e")

to

cap = cv2.VideoCapture("nvcamerasrc fpsRange='30.0 30.0' ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! nvvidconv flip-method=2 ! 'video/x-raw, format=(string)I420' ! videoconvert ! 'video/x-raw, format=(string)BGR' ! appsink")

Apart from AastaLL’s suggestion about fpsRange, this brings you back to the beginning of this thread.
The error message suggests that a different OpenCV version (or one compiled for another release, with another toolchain) is hitting a failing assertion.
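One way to check which OpenCV build the ROS node actually picks up is cv2.getBuildInformation() (a real OpenCV API); the parsing helper below is just a sketch against a sample of its output format:

```python
def has_gstreamer(build_info):
    """Scan an OpenCV build-information dump (the string returned by
    cv2.getBuildInformation()) for the GStreamer line and report
    whether support was compiled in."""
    for line in build_info.splitlines():
        if "GStreamer" in line:
            return "YES" in line
    return False

# Sample excerpt in the layout cv2.getBuildInformation() uses.
sample = """  Video I/O:
    GStreamer:                   YES (ver 1.8.3)
    V4L/V4L2:                    YES
"""
print(has_gstreamer(sample))  # -> True
```

Inside the ROS node you would run `import cv2; print(cv2.__version__); print(has_gstreamer(cv2.getBuildInformation()))` and compare against the same check outside ROS.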

@dusty_nv’s question above may well be the meaningful one here.