So, is there a ROS package for using the raspicam on the Jetson Nano, with calibration support?
Hi @leogarberoglio, by raspicam, do you mean the IMX219-based Raspberry Pi camera module v2?
The video_source node from the ros_deep_learning package supports any MIPI CSI camera that JetPack supports (or that you have installed CSI drivers for). The easy way to check is to run nvgstcapture-1.0: if you get video from it, the video_source node should work too.
However, this node doesn’t support calibration data - I’m not sure where to get that from, other than calibrating it yourself. If you do have the intrinsic calibration matrices, it should be fairly simple to add a camera_info publisher to the node.
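For anyone following along, here is a rough sketch of how intrinsic calibration values would map into a CameraInfo-style message (plain Python for illustration, not the node’s actual C++; the helper name and numbers are made up):

```python
def make_camera_info(width, height, fx, fy, cx, cy, d):
    """Build a CameraInfo-like dict from pinhole intrinsics.

    K is the 3x3 intrinsic matrix in row-major order; P is the 3x4
    projection matrix (identity rotation for a monocular camera).
    """
    return {
        "width": width,
        "height": height,
        "distortion_model": "plumb_bob",
        "D": list(d),
        "K": [fx, 0.0, cx,
              0.0, fy, cy,
              0.0, 0.0, 1.0],
        "R": [1.0, 0.0, 0.0,
              0.0, 1.0, 0.0,
              0.0, 0.0, 1.0],
        "P": [fx, 0.0, cx, 0.0,
              0.0, fy, cy, 0.0,
              0.0, 0.0, 1.0, 0.0],
    }

# Example with made-up values in the same shape calibration tools emit:
info = make_camera_info(1280, 720, 1303.3, 1300.8, 638.9, 370.0,
                        [0.188, -0.309, -0.0015, 0.001, 0.0])
```

The publisher would then copy these arrays into a sensor_msgs/CameraInfo message alongside each image frame.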
Thanks, I will try that.
I can calibrate the camera and get the matrices, so you are right: I will add the camera_info publisher and then use the image_proc package to get the undistorted image.
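For reference, image_proc’s rectification is based on the plumb_bob (Brown-Conrady) distortion model. A rough sketch of the forward model on a normalized image point (my own illustration, not image_proc’s code):

```python
def distort_plumb_bob(x, y, d):
    """Apply the plumb_bob distortion model to a normalized image
    point (x, y). d = [k1, k2, p1, p2, k3] as in CameraInfo's D array."""
    k1, k2, p1, p2, k3 = d
    r2 = x * x + y * y
    # Radial distortion term
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    # Tangential distortion terms
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d

# With all coefficients zero the point is unchanged:
print(distort_plumb_bob(0.1, 0.2, [0.0] * 5))  # -> (0.1, 0.2)
```

Rectification (what image_proc does) is the inverse: it resamples the image so that points land where this model would not have moved them.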
Does the Nano work with the Raspberry Pi camera v1.3?
Running nvgstcapture gives some errors:
bitrate = 4000000
Encoder Profile = High
Encoder control-rate = 1
Encoder EnableTwopassCBR = 0
Opening in BLOCKING MODE
** Message: 23:06:16.031: main:4670 iterating capture loop …
NvMMLiteOpen : Block : BlockType = 4
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:557 No cameras available
^C** Message: 23:07:07.074: <_intr_handler:4261> User Interrupted…
JetPack doesn’t support the Raspberry Pi Camera module v1.3 - it uses the OV5647 sensor, which is EOL. I believe RidgeRun has a driver supporting OV5647, but it isn’t free. IMX219 support is included with JetPack, so I recommend the Raspberry Pi camera module v2.
that’s fine, I have a v2 camera too.
Will you accept a PR in your GitHub repo? I will try to add a CameraInfo publisher. Then, using the image_proc node, we can obtain undistorted images.
Thanks @leogarberoglio, I would accept a PR adding a CameraInfo publisher, as long as it still works when calibration data isn’t available. Many users may not have calibration data on hand, so it should still function without it.
I’m thinking of something like this:
Most ROS camera drivers include these lines to find a calibration file. If it is not present, they warn the user and keep working anyway.
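Something like this fallback logic, I mean (a plain-Python sketch of what camera_info_manager does in C++; the function name and paths are hypothetical):

```python
import os

def resolve_calibration(candidates):
    """Return the first existing calibration file from an ordered list of
    candidate paths, or None if the camera should run uncalibrated."""
    for path in candidates:
        expanded = os.path.expanduser(path)
        if os.path.isfile(expanded):
            return expanded
        print("Camera calibration file %s not found." % expanded)
    print("Calibration file missing. Camera not calibrated.")
    return None

# Typical search order: a package-local file first, then the user's
# ~/.ros/camera_info directory (both paths made up for illustration).
calib = resolve_calibration([
    "/opt/ros_ws/src/my_camera/camera_info/camera.yaml",
    "~/.ros/camera_info/camera.yaml",
])
```

If `calib` is None, the driver publishes a zero-filled CameraInfo instead of failing.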
OK, gotcha. It would be good if there were equivalent code for ROS2, because the ros_deep_learning package builds for both ROS1/ROS2. The different APIs are handled through this ros_compat layer:
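The idea, roughly, is to pick one implementation at build/import time and expose a single API to the node code (a toy Python sketch of the pattern; the real ros_compat layer is C++, and all names here are invented):

```python
# Toy illustration of a ROS1/ROS2 compatibility shim: two backends
# with the same interface, selected once, so node code never branches.

class Ros1Publisher:
    def publish(self, msg):
        return ("ros1", msg)

class Ros2Publisher:
    def publish(self, msg):
        return ("ros2", msg)

def create_publisher(ros_version):
    """Return a publisher object with the same interface for either ROS."""
    return Ros1Publisher() if ros_version == 1 else Ros2Publisher()

pub = create_publisher(1)
print(pub.publish("hello"))  # -> ('ros1', 'hello')
```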
Oh, sorry, I don’t know anything about ROS2. I don’t think I could make it work for ROS2.
I made it work:
process[video_source-2]: started with pid
[ INFO] [1617845376.608069106]: camera calibration URL: package://ros_deep_learning/camera_info/camera.yaml
[ INFO] [1617845376.669568162]: Unable to open camera calibration file [/home/elgarbe/workspace/jetson_cam_forl_ws/src/ros_deep_learning/camera_info/camera.yaml]
[ WARN] [1617845376.669679310]: Camera calibration file /home/elgarbe/workspace/jetson_cam_forl_ws/src/ros_deep_learning/camera_info/camera.yaml not found.
[ INFO] [1617845376.669787177]: Calibration file missing. Camera not calibrated
[ INFO] [1617845376.669848064]: using default calibration URL
[ INFO] [1617845376.669914263]: camera calibration URL: file:///home/elgarbe/.ros/camera_info/camera.yaml
[ INFO] [1617845376.669999421]: Unable to open camera calibration file [/home/elgarbe/.ros/camera_info/camera.yaml]
[ WARN] [1617845376.670049005]: Camera calibration file /home/elgarbe/.ros/camera_info/camera.yaml not found.
[ INFO] [1617845376.670099684]: No device specifc calibration found
[ INFO] [1617845376.671733885]: opening video source: csi://0
[gstreamer] initialized gstreamer, version 184.108.40.206
and rostopic echo /video_source/camera_info gives:
---
header:
  seq: 1393
  stamp:
    secs: 1617845245
    nsecs: 818433457
  frame_id: ''
height: 0
width: 0
distortion_model: ''
D: []
K: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
R: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
P: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
binning_x: 0
binning_y: 0
roi:
  x_offset: 0
  y_offset: 0
  height: 0
  width: 0
  do_rectify: False
---
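That all-zero message is what an uncalibrated camera publishes; by convention, consumers treat a zero intrinsic matrix as "no calibration available". A quick illustrative check (function name made up, operating on a dict stand-in for the message):

```python
def is_calibrated(camera_info):
    """Treat a CameraInfo-like dict as calibrated only if the intrinsic
    matrix K is present and non-zero (fx at K[0] must be positive)."""
    k = camera_info.get("K", [])
    return len(k) == 9 and k[0] > 0.0

uncalibrated = {"K": [0.0] * 9}
print(is_calibrated(uncalibrated))  # -> False
```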
Then, after committing a new calibration:
process[video_source-2]: started with pid
[ INFO] [1617846506.418872602]: camera calibration URL: package://ros_deep_learning/camera_info/camera.yaml
[ INFO] [1617846506.483671863]: Unable to open camera calibration file [/home/elgarbe/workspace/jetson_cam_forl_ws/src/ros_deep_learning/camera_info/camera.yaml]
[ WARN] [1617846506.483792803]: Camera calibration file /home/elgarbe/workspace/jetson_cam_forl_ws/src/ros_deep_learning/camera_info/camera.yaml not found.
[ INFO] [1617846506.483882440]: Calibration file missing. Camera not calibrated
[ INFO] [1617846506.483937024]: using default calibration URL
[ INFO] [1617846506.484016817]: camera calibration URL: file:///home/elgarbe/.ros/camera_info/camera.yaml
[ INFO] [1617846506.486152952]: Camera successfully calibrated from device specifc file
[ INFO] [1617846506.487948507]: opening video source: csi://0
[gstreamer] initialized gstreamer, version 220.127.116.11
---
header:
  seq: 1538
  stamp:
    secs: 1617846558
    nsecs: 929139313
  frame_id: ''
height: 720
width: 1280
distortion_model: "plumb_bob"
D: [0.1883484997394136, -0.3086844578038269, -0.001545387187700351, 0.0009689336662890492, 0.0]
K: [1303.276564178135, 0.0, 638.8544621442595, 0.0, 1300.78668105035, 370.0244760250305, 0.0, 0.0, 1.0]
R: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
P: [1341.831787109375, 0.0, 639.8119021650054, 0.0, 0.0, 1339.42919921875, 369.1276280402089, 0.0, 0.0, 0.0, 1.0, 0.0]
binning_x: 0
binning_y: 0
roi:
  x_offset: 0
  y_offset: 0
  height: 0
  width: 0
  do_rectify: False
---
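As a sanity check on those numbers: the horizontal field of view follows from fx and the image width (my own quick calculation, not part of the node):

```python
import math

def horizontal_fov_deg(width, fx):
    """Horizontal field of view in degrees for a pinhole camera:
    fov = 2 * atan(width / (2 * fx))."""
    return math.degrees(2.0 * math.atan(width / (2.0 * fx)))

# Using the calibrated values above (width=1280, fx from K[0]):
print(round(horizontal_fov_deg(1280, 1303.276564178135), 1))
```

That comes out to roughly 52 degrees, which seems plausible for this sensor mode.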
OK, cool. If you want to share the code updates you made, I can take a look at merging them and having them work with ROS2 as well. Thanks.
Yes, I will share it with you soon. Right now I’m trying to make some more modifications to use image_transport for image publishing. It seems that’s the way to do it. I’m trying to use apriltag_ros, and it expects images to come from image_transport.
Maybe you could make a new repo just for the camera driver?
OK no rush, thanks. I recall that image_transport uses the CPU/cv2 for color conversion, which is why I was doing it in CUDA. But maybe image_transport can still be used with GPU and without much additional CPU overhead.
Hummmm… these are my first steps on these topics… I’m not sure about CPU or CUDA or GPU. Sorry.
I will try to make it work, then we can see how to make it better :-)
I was using a Raspberry Pi 3B+ / Odroid XU4 with a Raspberry Pi camera module / Kinect v1, and all the packages needed for apriltag and the camera drivers were there, so I just installed them and tried them out.
But I need a higher detection rate, so here I am…
By the way, since you are interested in apriltags, we recently published this ros2-nvapriltags node that is GPU-accelerated. However it is for ROS2.
That’s good news! Does that package work with the IMX219 camera?
I believe that package is independent of the video source and doesn’t include the camera driver / camera publisher. You would connect it with the video input that you were using.
OK, in the end no image_transport changes were needed. I got some apriltag_ros warnings about the topic name /video_source/raw related to image_transport. I changed it to /video_source/image_raw and apriltag_ros started working.
I raised a PR, but as you can see it needs a little more work. I don’t know much about C++, so I’m limited to copying and modifying your own code. Let me know if you need anything else.
I still think you should have a camera driver without the other deep learning stuff. It would be very useful for the Nano community.
Thanks a lot for your help!
BTW, the detection rate is not as high as I expected, but I think it will be enough for my application. I think ros2-nvapriltags will be the next thing I test.
Thanks a lot, glad to hear you got it working. Will check it out when I get a chance.