Do you have the driver installed for your camera?
Does it need a service running? If yes, is it running?
Are you sure you have no other program using the same camera?
I assumed that the drivers for the onboard camera were preinstalled, and they seem to be, since I can get live video running using gstreamer in the console with the pipeline I posted.
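For anyone following along, here is a minimal sketch of driving that same sort of pipeline from Python via cv2.VideoCapture. The pipeline string itself is an assumption based on typical TX1 examples; the nvcamerasrc element and caps may differ on your L4T release:

```python
# Sketch of opening the Jetson onboard camera through GStreamer in OpenCV.
# The nvcamerasrc element and caps below are assumptions based on typical
# TX1 examples; adjust width/height/framerate to a supported sensor mode.

def onboard_pipeline(width=1280, height=720, fps=30):
    """Build a GStreamer pipeline string for the onboard camera."""
    return (
        "nvcamerasrc ! "
        "video/x-raw(memory:NVMM), width={w}, height={h}, "
        "format=I420, framerate={f}/1 ! "
        "nvvidconv ! video/x-raw, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink"
    ).format(w=width, h=height, f=fps)

try:
    import cv2  # needs an OpenCV build compiled with GStreamer support
    cap = cv2.VideoCapture(onboard_pipeline())
    print("opened:", cap.isOpened())  # False often means no GStreamer in cv2
except ImportError:
    pass  # cv2 not installed here; the pipeline string is still reusable
```

If isOpened() returns False with a pipeline like this, the usual suspect is an OpenCV build without GStreamer support rather than the camera itself.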
I have launched no other program that uses the camera, and since this is a fresh install (with the exception of the updated OpenCV version), I can't imagine another program is using it.
Whether it needs a service running, I’m not sure. The code snippet I provided is a gstreamer pipeline, perhaps there’s a gstreamer service that needs to be running. I will look into it.
Most of these tools are pretty new to me, so thanks for your patience.
No error message; the isOpened() method just always returns false. I had issues installing 3.2, but I'll try again. Can you tell me how you went about it?
No, I haven't resolved the issue. When I use a Logitech C310 via the USB port, the VideoCapture opens, so I've been using that. The onboard camera I still haven't gotten to work.
Thanks Wayne, I’m curious what steps you followed to build and install opencv 3.2.
Importantly, when I looked back at the instructions I followed for building 3.1, the cmake flag WITH_GSTREAMER was set to off (http://docs.opencv.org/master/d6/d15/tutorial_building_tegra_cuda.html). I changed that when I built 3.2, but I still don't get a video preview. I really thought that setting the flag correctly would make gstreamer work … perhaps I'm compiling incorrectly?
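One sanity check worth doing before rebuilding again: ask the installed cv2 itself whether GStreamer actually made it into the build. A small sketch (the parser handles both the one-line and the older multi-line layouts of the build info; the cv2 import is guarded in case it isn't installed where you run this):

```python
# Check whether an OpenCV build reports GStreamer support by scanning the
# text returned by cv2.getBuildInformation().

def gstreamer_enabled(build_info):
    """True if the build-information text reports GStreamer: YES.

    Handles both the one-line form ("GStreamer:  YES (1.8.3)") and the
    older multi-line form where "base: YES" follows a bare "GStreamer:".
    """
    lines = build_info.splitlines()
    for i, line in enumerate(lines):
        if "GStreamer" in line:
            if "YES" in line or "NO" in line:
                return "YES" in line
            return i + 1 < len(lines) and "YES" in lines[i + 1]
    return False

try:
    import cv2
    print("GStreamer support:", gstreamer_enabled(cv2.getBuildInformation()))
except ImportError:
    pass  # run this on the Jetson where cv2 is installed
```

If this prints False even after building with WITH_GSTREAMER=ON, the Python bindings you are importing are probably not the ones you just built.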
Trying to get the above working, but hitting an error…!
"(python:11184): Gtk-WARNING **: cannot open display: "
I'm thinking this is because I am trying to run this script during an ssh session? Any tips on getting this to work would be MUCH appreciated. Is there some requirement to set the client IP before running this script?
Edit: I did also try…
$ sudo “export DISPLAY=:0” python nvidia_code.py
but I’m still hitting errors…
Available Sensor modes :
2592 x 1944 FR=30.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
2592 x 1458 FR=30.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
1280 x 720 FR=120.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
NvCameraSrc: Trying To Set Default Camera Resolution. Selected 1280x720 FrameRate = 24.000000 …
Invalid MIT-MAGIC-COOKIE-1 key
(python:13796): Gtk-WARNING **: cannot open display: :0
Looks like an X display issue (is it different when just displaying an image?).
If you are launching this through ssh from a host at IP ww.xx.yy.zz with its X server running, set DISPLAY to ww.xx.yy.zz:0.
For the ssh connection try “ssh -Y name@host” to automatically forward and authorize any GUI from the remote to pop up on the local host (the -Y should do some DISPLAY and other auth work).
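To spell out the DISPLAY mechanics (a sketch; nvidia_code.py is the script name from the post above): the quoted sudo form fails because sudo expects a program name, not shell syntax. Setting the variable as a command prefix works:

```shell
# -Y on ssh (run from your local machine) forwards X and sets DISPLAY
# on the remote end automatically:
#   ssh -Y name@host

# From an existing ssh shell, you can instead target the Jetson's own
# attached display by setting DISPLAY for a single command:
DISPLAY=:0 sh -c 'echo "DISPLAY is $DISPLAY"'

# If the script really needs root, pass the variable through env, since
# sudo "export DISPLAY=:0" treats the quoted string as a program name:
#   sudo env DISPLAY=:0 python nvidia_code.py
```

Note the difference in where the frames end up: with ssh -Y the window appears on your local machine; with DISPLAY=:0 it appears on a monitor attached to the Jetson.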
@linuxdev - you are a legend, the -Y flag worked like a charm (I'm fairly new to programming, so that's definitely something I'll keep!). It seems the frame rate is seriously slow at the moment, but I'm guessing I can change that with the GStreamer pipeline or maybe look at some other options in Python. Main thing for now is I can get some camera output. Seriously, thanks!
Regarding remote display versus local display in X11…anything running in the X environment generates a series of events, and those events are what the X server deals with, eventually producing graphics rendering through the graphics card as a side effect of those events being processed.
Locally driven displays have a fairly direct route to reaching video card rendering; events are somewhat delayed when instead going to a remote system and its security. This can be fast remotely if you are looking at control or vector operations; if rendering bitmaps, then you might be sending an event for each pixel.
Differences do not stop there though. When running a program on a remote Jetson and displaying locally to a PC none of the actual rendering libraries (the things a GPU talks to) run on the Jetson…these offload to the desktop PC. If you have hardware acceleration on the Jetson, then it no longer participates. Rendering via GPU instead goes through the desktop PC and its libraries. Not knowing this can be a big shock…in some cases the PC won’t be very fast, and in others (such as CUDA) you might find that the 1080Ti is doing the CUDA instead of the Jetson (and if you were not aware of this and think the Jetson is running that fast you’re in for an unpleasant surprise). If the desktop PC did not have the correct version of CUDA then the program would completely fail.
Somewhere in the middle, if you are serious about using the Jetson’s computing power, yet want to display on a PC, you’ll need some form of virtual desktop. The Jetson would render to a virtual screen which has no actual hardware connected, but the GPU and CUDA would not know or care…the remote PC then gets updated via this virtual desktop instead of via X events. In that case the PC would not need CUDA of its own for the Jetson to do CUDA work and to display correctly no matter what the PC configuration is (this would even be operating system agnostic).
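As one concrete example of that middle path (a sketch, not necessarily the exact setup linuxdev has in mind): x11vnc can share an existing GPU-backed X display over VNC, so the Jetson keeps doing the rendering and the PC only receives compressed screen updates.

```shell
# On the Jetson: share the local GPU-rendered X display (:0) over VNC.
# x11vnc is one option among several; install it from the Ubuntu repos.
sudo apt-get install x11vnc
x11vnc -display :0 -forever -usepw

# On the PC/Mac: connect any VNC viewer to <jetson-ip>:5900. Rendering
# and CUDA stay on the Jetson; only screen updates cross the network.
```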
Wow, OK, this is pretty overwhelming, but hardly unexpected given the nature of the device.
To be perfectly honest, rendering on a PC / remotely is not the goal per se. I'm looking at embedding a neural network onto a standalone device to do image classification in environments where connectivity cannot be guaranteed (as may be true for many people using the Jetson). For argument's sake, in my use case I only care about positive examples, so one workaround could be to just send 10 stills of the positives from the camera stream up to the cloud rather than streaming in real time. The stream has been more of a demo to prove the concept.
Question (which I will be able to test tomorrow): if I run the script locally from the Jetson with it plugged into a monitor or similar, I'm guessing the frame rate will be much quicker? It sounds like, running on the Jetson and displaying locally on my Mac, I may as well be using a Raspberry Pi!!
Frame rate on a script run from the Jetson and displaying to the Jetson (DISPLAY environment variable to the Jetson) will have a speed based on the GPU of the Jetson. Among embedded devices that will be quite fast…only the TX2 would be faster (and it is much faster, yet pin-compatible). If the Jetson has no monitor, but has a virtual X11 desktop, it will be just as fast (CUDA does not care if the frame buffer is associated with a real monitor or oblivion). CUDA and video are both products of GPU acceleration on the Jetson under those conditions.
Sending X events to a remote system (X forwarding) could be faster if the remote PC has a faster GPU (a 1080Ti is much faster than a Jetson’s GPU). Even so, you get network slowdowns, so it depends on the data…it is a tug-of-war game between network slowdowns and beefier graphics on the PC or Mac.
Sending a virtual desktop to a remote system depends on network bandwidth for video rate only; GPU work would be from the Jetson and independent of frame rate for display. It is possible that a Mac displaying a virtual desktop over directly attached gigabit (meaning through a switch) could be very fast since the Jetson is using its GPU, but it would still be slower than displaying directly on the Jetson. Unless your Mac had an NVIDIA video card with the same version of CUDA as the Jetson installed, you could not remote display to the Mac via X event forwarding…only a virtual desktop would work.