I tried using the code found here: GitHub - JetsonHacksNano/CSI-Camera: Simple example of using a CSI-Camera (like the Raspberry Pi Version 2 camera) with the NVIDIA Jetson Developer Kit
I can stream a single cam fine, but when running the dual_cams.py, the second cam fails.
I tried switching the sensor_id to select each cam individually using the simple_cam.py, but it always gives me the same camera.
Also, when I list modes for /dev/video0 and /dev/video1 they both come up as index 0.
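For reference, here is a minimal sketch of how the sensor is selected in that repo's pipeline string (modeled on the `gstreamer_pipeline()` helper in simple_camera.py; exact parameter names in the repo may differ):

```python
# Build a GStreamer pipeline string that selects a CSI camera by sensor-id.
# nvarguscamerasrc's sensor-id property is what picks /dev/video0 vs /dev/video1.
def gstreamer_pipeline(sensor_id=0, width=1280, height=720, framerate=30, flip_method=0):
    return (
        f"nvarguscamerasrc sensor-id={sensor_id} ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"format=NV12, framerate={framerate}/1 ! "
        f"nvvidconv flip-method={flip_method} ! "
        "video/x-raw, format=BGRx ! videoconvert ! "
        "video/x-raw, format=BGR ! appsink"
    )

# If passing sensor_id=1 still opens the first camera, the second sensor is
# probably not registered properly, which would point at the device tree.
print(gstreamer_pipeline(sensor_id=1))
```

So if changing `sensor_id` here has no effect, the problem is below GStreamer, which fits the symptom that both /dev/video nodes report the same mode index.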
It could be a problem with the device tree.
Did you apply any customized kernel or dtb? Which release?
I’m on Jetpack 4.4.
I found an interesting result using the ros_deep_learning repo.
When running in melodic:
roslaunch ros_deep_learning video_viewer.ros1.launch input:=csi://0 output:=display://0
Brings up the first cam.
Then running:
roslaunch ros_deep_learning video_viewer.ros1.launch input:=csi://1 output:=display://0
Kills the first camera node but brings up the second one correctly.
Then I run the first command again and both nodes come up, streaming both cams correctly.
BUT, when I use ROS2 and run:
ros2 launch ros_deep_learning video_viewer.ros2.launch input:=csi://0 output:=display://0
followed by:
ros2 launch ros_deep_learning video_viewer.ros2.launch input:=csi://1 output:=display://0
It brings up two nodes that flicker between both cameras. When I kill the second node, the first goes back to running correctly, but I can never get two independent nodes up without the streams mixing.
I did build librealsense from source with CUDA, but I thought the Jetson install didn't modify the kernel.
I also checked the imx219 file, and the position (front and rear) looks correctly defined in the device tree, but I agree with you that this feels like a device tree issue somehow.