VideoCapture fails to open onboard camera L4T 24.2.1 OpenCV 3.1

@linuxdev - at the risk of being shouted at in the forums, I actually have a TX2… (sorry for posting this in the TX1 forum, but given the nature of OpenCV / GStreamer pipelines and Python, I found this thread very useful!).

My current setup is a 2011 MacBook Air (4GB of RAM) hooked up to the Jetson; it's soon to be replaced by a new 13" MacBook Air i7 with 16GB of RAM - both obviously have fairly weak graphics cards.

OK, so it looks like I need to look into setting up a virtual desktop or getting a dedicated display for the Jetson and running the script locally. This should be easy to test (at home anyway), but a slight concern is portability, as for demos etc. I was hoping to plug the camera into the laptop.

Your conclusion about needing a virtual desktop or a dedicated Jetson display sounds correct. This would let the Jetson run with its GPU in use at all times, regardless of whether the monitor is virtual or physical. Displaying to other computers while using the Jetson GPU would be fairly simple using virtual desktops…this would be portable, whereas simple X event forwarding (e.g., "ssh -Y") would not.

I’ve never investigated ways to pass through a camera on one computer to another. Someone may know how to set up video streaming from a Mac to a Jetson over gigabit so your program could run using the camera directly on the Jetson or indirectly via the Mac. Can anyone give advice on whether there are video streaming methods whereby a Mac could provide the video data to the Jetson as if it were a direct camera attachment?
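One way this is often done (a sketch only - I have not tested this end-to-end): send an RTP/H.264 stream over the network from the Mac (e.g. with ffmpeg or gst-launch) and open it on the Jetson through OpenCV's GStreamer backend, so the program reads it like any other capture source. Assumptions here: OpenCV was built with GStreamer support, the omxh264dec hardware-decoder element is available on this L4T release, and the port number is arbitrary.

```python
# Sketch: receive an RTP/H.264 network stream on the Jetson and read it
# with OpenCV as if it were a locally attached camera.
# Assumptions (adjust to whatever the sending side actually produces):
#   - port 5000 and payload type 96 are arbitrary choices
#   - omxh264dec is the Jetson HW decoder element on this L4T release

def rtp_h264_pipeline(port=5000):
    """Build a GStreamer pipeline string for cv2.VideoCapture."""
    return (
        "udpsrc port=%d caps=\"application/x-rtp, media=video, "
        "encoding-name=H264, payload=96\" ! "
        "rtph264depay ! h264parse ! omxh264dec ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink" % port
    )

def open_stream(port=5000):
    """Open the network stream; requires OpenCV built with GStreamer."""
    import cv2  # deferred so rtp_h264_pipeline() is usable without OpenCV
    cap = cv2.VideoCapture(rtp_h264_pipeline(port))
    if not cap.isOpened():
        raise RuntimeError("failed to open network stream")
    return cap
```

The receiving program would then call open_stream() and read frames in the usual cap.read() loop.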

Hey @linuxdev - update:

To clarify, I'm actually happy with the stream coming from the camera itself (but passing video from a webcam to the Jetson does sound like an interesting idea, and could fit in with what some people are doing with external GPUs, etc.).

I tried using "ssh -C -Y $user@jetson"… this gave a slight performance boost, but nothing of note / nowhere close to being usable.

Then I ran locally on the TX2. WOW!!! It runs lovely - nice and quick, and looks to be very usable (although I still need to pass it through my classifier, etc.).

The virtual desktop sounds like it could be an option - I'm not sure how I would go about setting one up; for instance, is it installed on the client or the server side? I'm happy for the demo to run through the Jetson camera, but it would be great if I could broadcast onto the laptop screen - again, if this can't be done, I can look at some cheap/small-form-factor screen options!

I've just had a quick look on Google, and there is stuff out there for the RPi, e.g.

I'd be keen to hear some thoughts on this - I can't be the only person looking at this use case :)

As far as the external monitors from the RPi world go, not all of the small screens with an HDMI bridge work without a lot of effort on the Jetson. Sometimes the EDID data (the part of the bridge which answers the video card's queries about the screen's capabilities) is incomplete. Even in cases where the EDID is correct, there have been some failures in video auto-configuration on the Jetsons. So smaller-format screens (not supporting at least 1080p) sometimes do not configure well. In cases where they do work, they will work quite well for what you want.

For virtual desktops there are two software installs or configurations. The first is the virtual desktop server running on the Jetson; the second is a virtual desktop viewer running on any other computer. I have not set this up, but apparently some part of this is easy to start on the Jetson. These threads may be of interest:

I’ve seen remarks about an app called “vino” (but I have no actual experience with it).

I am running this, but all I get is "camera open failed".
I have a Jetson TX2 with the OpenCV that comes with JetPack 3.2.

Any suggestions how to get it working?
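One cause of "camera open failed" often reported on these forums is an OpenCV build compiled without GStreamer support, in which case the pipeline string is treated as a plain filename and the open fails. A quick way to check is cv2.getBuildInformation(); the parsing helper below is my own sketch, not part of OpenCV's API:

```python
import re

def gstreamer_enabled(build_info):
    """Parse cv2.getBuildInformation() output; True if the build
    reports GStreamer support (a 'GStreamer: YES (...)' line)."""
    m = re.search(r"GStreamer\s*:\s*(\S+)", build_info)
    return bool(m) and m.group(1).upper() != "NO"

def check():
    import cv2  # deferred so gstreamer_enabled() is importable alone
    print("GStreamer support:", gstreamer_enabled(cv2.getBuildInformation()))
```

If check() prints False, any GStreamer pipeline string passed to cv2.VideoCapture will fail, and rebuilding OpenCV with GStreamer enabled would be the fix.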


Replied here: Using YOLO on Jetson TX2 and Econ System Cameras - Jetson TX2 - NVIDIA Developer Forums

I'm getting the following error:

(python3:26947): GStreamer-CRITICAL **: 19:42:18.884: 
Trying to dispose element pipeline0, but it is in READY instead of the NULL state.
You need to explicitly set elements to the NULL state before
dropping the final reference, to allow them to clean up.
This problem may also be caused by a refcounting bug in the
application or some element.

OpenCV Error: Unspecified error (GStreamer: unable to start pipeline
) in cvCaptureFromCAM_GStreamer, file /home/nvidia/build_opencv/opencv/modules/videoio/src/cap_gstreamer.cpp, line 887
VIDEOIO(cvCreateCapture_GStreamer (CV_CAP_GSTREAMER_FILE, filename)): raised OpenCV exception:

/home/nvidia/build_opencv/opencv/modules/videoio/src/cap_gstreamer.cpp:887: error: (-2) GStreamer: unable to start pipeline
 in function cvCaptureFromCAM_GStreamer

Traceback (most recent call last):
  File "", line 19, in <module>
  File "", line 5, in read_cam
    cap = cv2.VideoCapture("nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720,format=(string)I420, framerate=(fraction)24/1 ! nvvidconv flip-method=2 ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink")
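For reference, a minimal sketch of the same open with an explicit success check and a clean release() - assumptions: an OpenCV build with GStreamer enabled, and nvcamerasrc available on this L4T release. Releasing the capture drops the pipeline to the NULL state, which avoids the GStreamer-CRITICAL dispose warning above:

```python
# Sketch only: the same nvcamerasrc pipeline, with a success check and a
# guaranteed release(). Assumes OpenCV was built with GStreamer support
# and that nvcamerasrc exists on this L4T release.

PIPELINE = (
    "nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, "
    "height=(int)720, format=(string)I420, framerate=(fraction)24/1 ! "
    "nvvidconv flip-method=2 ! video/x-raw, format=(string)BGRx ! "
    "videoconvert ! video/x-raw, format=(string)BGR ! appsink"
)

def read_cam(pipeline=PIPELINE):
    import cv2  # deferred so the module imports without OpenCV installed
    cap = cv2.VideoCapture(pipeline)
    if not cap.isOpened():  # fail early instead of the cryptic errors above
        raise RuntimeError("camera open failed - check GStreamer support")
    try:
        ok, frame = cap.read()
        return ok
    finally:
        cap.release()  # sets the pipeline to NULL before disposal
```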

Duplicate of this post.