I have a Jetson Xavier NX and an IMX219-83 8MP 3D Stereo Camera Module. I have connected the camera to the Jetson with the provided FFC cable, but I am not sure how to actually see the camera's output. I have also posted on the forum for the camera's website (How to connect camera to Jetson? - Hardware Products for AIoT Applications - Seeed Forum). I have plugged the camera in with the FFC cable into each of the two slots on the Jetson. The light on the camera turns on, so it is receiving power, but I do not know how to actually get the camera output. Ideally, I'd get the camera output through OpenCV (Python 3), but right now I'd settle for literally anything. The camera is compatible with Jetson products according to the manufacturer's site (https://www.seeedstudio.com/IMX219-83-Stereo-Camera-8MP-Binocular-Camera-Module-Depth-Vision-Applicable-for-Jetson-Nano-p-4610.html); I just do not know what code to write or what app to run to get the output.
Please refer to the documentation; it has a section on Approaches for Validating and Testing the V4L2 Driver.
So that section refers to a "Multimedia User Guide" that I do not see on the provided link, https://developer.nvidia.com/embedded/downloads. I tried filtering for it with no results. Is it definitely on that page?
Also, what I got from that page was that there may be a driver I need but do not have. Is there any way to know which driver I need besides just contacting the maker of the camera?
It's the L4T Multimedia API Reference you're looking for.
There are also some sample applications you may refer to.
OK, so I went to Jetson Linux API Reference: Building and Running | NVIDIA Docs and looked at the sample apps. It seems like I need JetPack, which I was not sure I had, but from that doc it seems that if I have all the samples, I should have JetPack. I have the samples, so I'm assuming I have JetPack (if there is another way to verify this, please let me know).
I followed steps 1-3 just fine, but trying to follow the instructions for 12_camera_v4l2_cuda I run into some issues: it seems I do not have the files necessary. It says the paths are relative to "ll_samples/samples"? Is that supposed to be "/ll_samples/samples"? Or where specifically is that directory?
Update: I did nothing differently and installed nothing, but now I can at least see /dev/video0 and /dev/video1 (somehow), which is great. I tried getting their output with VLC: nothing… So the devices are definitely connected; I just get no data from them.
You may install the JetPack public release image with NVIDIA SDK Manager.
There are options to install the multimedia APIs; you should find them under the path below. You should also check the README files for instructions.
If you see device nodes such as /dev/video1, it means the video devices were registered, and you may enable applications to access the sensor stream. Please gather logs if you have failures:
$ dmesg > klog.txt
So I installed JetPack with an SD card, but I do not have the folder you mentioned… Does the SDK Manager install anything different from just using an SD card to install it? I have no other files on the Jetson, so I really don't mind reflashing it, wiping the SD card, whatever is easiest to make sure I have the drivers it seems I might need.
What am I looking for in the output of dmesg specifically?
Please note that you should select the multimedia components while installing the JetPack release with SDK Manager.
Here are commands you may try; they use apt-get to access the MMAPI package without reflashing the board:
$ sudo apt-get update
$ sudo apt-get install nvidia-l4t-jetson-multimedia-api
I have the multimedia components, but I ran that command anyway to make sure. I still cannot access the camera, though. Running any of the examples against /dev/video0 or /dev/video1 does not show any output.
You would first check if a driver is handling your camera. Depending on your HW, this might need a specific driver (that may be a kernel module or embedded into a kernel image) and a specific device tree.
As @JerryChang mentioned, you would get kernel boot messages into file klog.txt and check if a driver and which one controls your cameras. You would post it here as an upload.
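If klog.txt turns out to be long, here is a small sketch for pulling out just the camera-related lines before posting. The keyword list is my guess based on the imx219/tegra drivers discussed in this thread, not an exhaustive set:

```python
import re

# Guessed keywords for camera-related kernel messages (adjust as needed)
CAMERA_KEYWORDS = re.compile(r"imx219|tegra-camera|tegra-video|video4linux",
                             re.IGNORECASE)

def camera_lines(log_text):
    """Return only the kernel log lines that look camera-related."""
    return [line for line in log_text.splitlines()
            if CAMERA_KEYWORDS.search(line)]

# Illustrative sample; on the board you would read the real klog.txt:
# with open("klog.txt") as f: print("\n".join(camera_lines(f.read())))
sample = (
    "[    1.50] usb 1-2: new high-speed USB device\n"
    "[    2.10] imx219 9-0010: tegra camera sensor board setup\n"
    "[    2.20] video4linux video0: registered device\n"
)
print("\n".join(camera_lines(sample)))
```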
If cameras are controlled, you should see video nodes in /dev:
If you can see these for each camera, you would try to access these with V4L API:
# Need to do that only once
sudo apt install v4l-utils

# These would show some info
v4l2-ctl -d/dev/video0 --list-formats-ext
v4l2-ctl -d/dev/video0 --all
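If you prefer to check for the nodes from Python before touching the V4L tools, a quick sketch (it assumes nothing about which driver registered the nodes):

```python
import glob
import os

# List the V4L2 device nodes; with both sensors of the stereo module
# registered, you would expect /dev/video0 and /dev/video1 here.
nodes = sorted(glob.glob("/dev/video*"))
if not nodes:
    print("no /dev/video* nodes found - is the driver loaded?")
for node in nodes:
    # os.access checks whether your user can actually open the node
    access = "readable" if os.access(node, os.R_OK) else "permission denied"
    print(node, access)
```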
OK, so I'm able to see them in /dev/video0 and get info with v4l2-ctl, but I'm still not sure why I cannot seem to get their input… It says it is using tegra-video as the driver, so I guess I have the driver? I just cannot get any camera input, or if I can, I do not know the right command.
Please post the output of
v4l2-ctl -d/dev/video0 --list-formats-ext
so that I could propose a command for camera 0.
Index       : 0
Type        : Video Capture
Pixel Format: 'RG10'
Name        : 10-bit Bayer RGRG/GBGB
	Size: Discrete 3264x2464
		Interval: Discrete 0.048s (21.000 fps)
	Size: Discrete 3264x1848
		Interval: Discrete 0.036s (28.000 fps)
	Size: Discrete 1920x1080
		Interval: Discrete 0.033s (30.000 fps)
	Size: Discrete 1280x720
		Interval: Discrete 0.017s (60.000 fps)
	Size: Discrete 1280x720
		Interval: Discrete 0.017s (60.000 fps)
So you may try a GStreamer pipeline with nvarguscamerasrc. Assuming you are locally logged into the Jetson with a GUI:
gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! nvvidconv ! xvimagesink
You may select the resolution and framerate (pick one from the available modes listed above) with caps such as:
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1' ! nvvidconv ! xvimagesink
From OpenCV, you would create a VideoCapture with the GStreamer API. The pipeline reads the camera and converts into BGR for OpenCV:
cap = cv2.VideoCapture('nvarguscamerasrc sensor_mode=0 ! video/x-raw(memory:NVMM), format=NV12 ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink', cv2.CAP_GSTREAMER)
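If it helps, here is a sketch of wrapping that pipeline in a small helper so the sensor, resolution, and framerate can be swapped without retyping the string. The helper only builds the string; the actual VideoCapture call will of course only work on the Jetson with the camera attached:

```python
def gst_pipeline(sensor_id=0, width=1920, height=1080, fps=30):
    """Build the nvarguscamerasrc -> BGR appsink pipeline string for
    cv2.VideoCapture(..., cv2.CAP_GSTREAMER). Pick width/height/fps
    from the modes that v4l2-ctl --list-formats-ext reported."""
    return (
        f"nvarguscamerasrc sensor-id={sensor_id} ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"framerate={fps}/1, format=NV12 ! "
        "nvvidconv ! video/x-raw, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink"
    )

# On the Jetson (not runnable without the camera):
# import cv2
# cap = cv2.VideoCapture(gst_pipeline(sensor_id=0), cv2.CAP_GSTREAMER)
# ok, frame = cap.read()
```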
First of all thank you so much for helping.
So when I run either the first command or the second (with any resolution and frame rate combo listed), a window does pop up, but it just looks like someone took a screenshot of the part of the desktop it's covering and put that into a new window; the camera output does not show.
The openCV code hangs indefinitely.
Don’t waste time with opencv before you are able to display your camera stream.
It seems that no frame has been read.
For further help, please post the kernel boot messages. Generate file klog.txt with command:
dmesg > klog.txt
and upload it into your next post (in edit window, there is an icon with up arrow). It may help to find out what is wrong.
Also post the output of failing commands, it may give some clues.
$ v4l2-ctl -d /dev/video0 --set-fmt-video=width=3264,height=2464,pixelformat=RG10 --set-ctrl bypass_mode=0 --stream-mmap --stream-count=100
If you still have difficulty accessing the camera stream, you may also gather the kernel messages for reference.