Has anyone tried running ROS and an inference model simultaneously on the Jetson Nano 4GB? It might sound obvious, but I'm wondering whether the Jetson Nano is powerful enough to run both.
I want to build a bot that navigates using an RPLiDAR A1 sensor and also uses a Raspberry Pi camera for facial recognition. Any links to how-tos, documentation, or general advice would be much appreciated.
Hi @hortonjared90 - yes, you can run ROS with deep learning inference. Check out the inference nodes for ROS at https://github.com/dusty-nv/ros_deep_learning (deep learning inference nodes for ROS/ROS2, with support for NVIDIA Jetson and TensorRT).
Those nodes don't include facial recognition, but they do support image classification, object detection, and semantic segmentation. There are also several ROS-based projects at https://developer.nvidia.com/embedded/community/jetson-projects
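As a rough sketch of how the two pieces could run side by side (assuming ROS Melodic on the Nano, the `rplidar_ros` package, and the launch file names from the ros_deep_learning README - double-check those against the current repo before relying on them):

```shell
# Install the RPLiDAR driver from apt (ROS Melodic on Ubuntu 18.04)
sudo apt-get install ros-melodic-rplidar-ros

# Build ros_deep_learning from source in a catkin workspace
# (it depends on jetson-inference being installed first)
cd ~/catkin_ws/src
git clone https://github.com/dusty-nv/ros_deep_learning
cd ~/catkin_ws && catkin_make

# Terminal 1: start the lidar node (publishes sensor_msgs/LaserScan on /scan)
roslaunch rplidar_ros rplidar.launch

# Terminal 2: run object detection on the Raspberry Pi camera (CSI port 0)
roslaunch ros_deep_learning detectnet.ros1.launch input:=csi://0 output:=display://0
```

The lidar node and the TensorRT-accelerated detection node run as separate ROS processes, so the main constraint on the 4GB Nano is memory - it can help to run headless and swap `output:=display://0` for a topic-only output.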
Which distribution of ROS are you using?