Need help with object recognition using a 3D LiDAR and Jetson Nano

Hi,

I have a background in bare-metal embedded systems (electronics, electrical engineering, and embedded C, with some Python experience), so I am completely new to this domain of object recognition based on edge inferencing. I therefore humbly request guidance from the community to help me architect this and direct me to the right resources so I can develop a prototype.

The objective is to develop an edge inferencing device that can map vacant spaces and objects, recognize/classify objects using a 3D LiDAR, and send this data to an AWS cloud server. Now, after referring to some articles on the internet, what I could understand is:

  1. There needs to be an edge device capable enough to acquire point cloud data from the 3D LiDAR, process it (filtering, segmentation, etc.), visualize it in 3D, and detect/recognize objects with proper bounding boxes.
  2. The edge device will run a Linux OS (Ubuntu).
  3. ROS will be required for this development.
  4. The Jetson Nano has enough processing capability to achieve this objective.
  5. Libraries that would be required (a rough sketch of how they fit together follows this list):
  • For data processing: Open3D (e.g. RANSAC plane segmentation, DBSCAN clustering), etc.
  • For visualization with bounding boxes: Open3D
  • For object recognition: pre-trained models run with PyTorch / TensorRT
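To check my understanding of items 1 and 5, here is a minimal sketch of the processing step I have in mind, using Open3D's plane segmentation and DBSCAN clustering to turn a raw scan into clustered objects with bounding boxes. The file name, voxel size, and clustering thresholds below are placeholders I made up; on the real device the points would come from the LiDAR driver or a ROS topic rather than a file.

```python
import numpy as np
import open3d as o3d

# Load one LiDAR scan (placeholder file; on the device this would come
# from the LiDAR driver or a ROS point cloud topic)
pcd = o3d.io.read_point_cloud("scan.pcd")

# Filtering: downsample to reduce the point count before heavier processing
pcd = pcd.voxel_down_sample(voxel_size=0.05)

# Segmentation: remove the dominant plane (e.g. the ground) with RANSAC
_, ground_idx = pcd.segment_plane(distance_threshold=0.05,
                                  ransac_n=3,
                                  num_iterations=1000)
objects = pcd.select_by_index(ground_idx, invert=True)

# Clustering: group the remaining points with DBSCAN; label -1 means noise
labels = np.array(objects.cluster_dbscan(eps=0.3, min_points=20))
n_clusters = labels.max() + 1 if labels.size > 0 else 0

# Bounding boxes: one axis-aligned box per cluster
boxes = []
for cluster_id in range(n_clusters):
    cluster = objects.select_by_index(np.where(labels == cluster_id)[0].tolist())
    box = cluster.get_axis_aligned_bounding_box()
    box.color = (1.0, 0.0, 0.0)
    boxes.append(box)

# Visualization: show the segmented points together with their boxes
o3d.visualization.draw_geometries([objects] + boxes)
```

Each box/cluster would then be handed to the recognition stage (a pre-trained model via PyTorch/TensorRT) for classification, if I understand the flow correctly.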

Now, my question is whether my understanding so far is right. Is there a better and smarter way to do this?
Could you also point me to some resources/tutorials/projects that can help me develop this to a POC (considering my background)? With some learning curve, I believe I can get a hold of this domain as well, but I need the community's guidance and support here.
We also chose NVIDIA because of its active community and support. I hope the NVIDIA community can help me achieve this objective.
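For the last part of the objective, sending the data to the AWS cloud, what I currently have in mind is roughly this: publish each detection to AWS IoT Core over MQTT and fan it out to other services from there. This is only a sketch under my own assumptions; the endpoint, topic, certificate paths, and payload fields are placeholders, and the AWS IoT Device SDK for Python would presumably work just as well as paho-mqtt.

```python
import json
import ssl
import paho.mqtt.client as mqtt

# Placeholder AWS IoT Core endpoint, topic, and credential paths -- replace with your own
ENDPOINT = "xxxxxxxxxxxxxx-ats.iot.us-east-1.amazonaws.com"
TOPIC = "lidar/detections"

# paho-mqtt 1.x style constructor; 2.x additionally takes a CallbackAPIVersion argument
client = mqtt.Client()
# AWS IoT Core authenticates devices with X.509 certificates over TLS (port 8883)
client.tls_set(ca_certs="AmazonRootCA1.pem",
               certfile="device-certificate.pem.crt",
               keyfile="device-private.pem.key",
               tls_version=ssl.PROTOCOL_TLSv1_2)
client.connect(ENDPOINT, port=8883)
client.loop_start()

# Example payload: one detected object with its class label and bounding box corners
detection = {
    "timestamp": 1700000000.0,
    "label": "chair",
    "bbox_min": [0.2, -0.5, 0.0],
    "bbox_max": [0.8, 0.1, 0.9],
}
info = client.publish(TOPIC, json.dumps(detection), qos=1)
info.wait_for_publish()

client.loop_stop()
client.disconnect()
```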

Thank you.

There are many AI robotics projects shared at Latest Jetson & Embedded Systems/Jetson Projects topics - NVIDIA Developer Forums; I suggest taking a look to see if you can gain some ideas.