Hi,
I have a background in bare-metal embedded systems (electronics, electrical, and embedded C, with some Python experience), so I am completely new to this domain of object recognition based on edge inference. I therefore humbly request guidance from the community to help me architect a prototype and to direct me to the right resources.
The objective is to develop an edge-inference device that can map vacant spaces, detect objects, and recognize/classify them using a 3D LiDAR, and then send this data to an AWS cloud server. After reading some articles on the internet, here is what I understand so far:
- There needs to be an edge device capable of acquiring point cloud data from the 3D LiDAR, processing it (filtering, segmentation, etc.), visualizing it in 3D, and detecting/recognizing objects with proper bounding boxes.
- The edge device will run a Linux OS (Ubuntu).
- ROS will be required for this development.
- The Jetson Nano has enough processing capability to achieve this objective.
- Libraries that would be required:
  - For point cloud processing: Open3D for filtering/downsampling, plus a clustering algorithm such as DBSCAN (available in Open3D and scikit-learn) for segmentation
  - For 3D visualization with bounding boxes: Open3D
  - For object recognition: pre-trained models run with PyTorch and optimized for the Jetson with TensorRT
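To make the processing step above concrete, here is a minimal sketch of the filter → cluster → bounding-box stage in plain NumPy. This is illustrative only: in a real pipeline Open3D's `voxel_down_sample` and `cluster_dbscan` would replace these hand-rolled versions, and the voxel size, `eps`, and `min_pts` values are assumptions, not values tuned for any particular LiDAR.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Keep one representative point per occupied voxel (first point wins)."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

def dbscan(points, eps, min_pts):
    """Naive O(n^2) DBSCAN; returns a cluster label per point (-1 = noise)."""
    n = len(points)
    labels = np.full(n, -1)
    visited = np.zeros(n, dtype=bool)
    cluster = 0
    # Full pairwise distance matrix: fine for a small demo, not a full scan.
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        neighbors = np.flatnonzero(dist[i] <= eps)
        if len(neighbors) < min_pts:
            continue  # noise for now (may be claimed as a border point later)
        labels[i] = cluster
        seeds = list(neighbors)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster
            if visited[j]:
                continue
            visited[j] = True
            j_neighbors = np.flatnonzero(dist[j] <= eps)
            if len(j_neighbors) >= min_pts:
                seeds.extend(j_neighbors)  # j is a core point: grow cluster
        cluster += 1
    return labels

def aabb(points):
    """Axis-aligned bounding box: (min corner, max corner)."""
    return points.min(axis=0), points.max(axis=0)

# Two well-separated synthetic "objects" standing in for LiDAR returns.
rng = np.random.default_rng(0)
cloud = np.vstack([
    rng.normal(0.0, 0.05, (50, 3)),   # object near the origin
    rng.normal(5.0, 0.05, (50, 3)),   # object near (5, 5, 5)
])
down = voxel_downsample(cloud, voxel_size=0.02)
labels = dbscan(down, eps=0.3, min_pts=5)
boxes = [aabb(down[labels == c]) for c in range(labels.max() + 1)]
```

The same flow maps one-to-one onto Open3D calls on the Jetson, with the deep-learning recognition step (PyTorch/TensorRT) consuming either the clustered points or camera frames fused with them.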
Now, my question is: is my understanding so far correct? Is there a better or smarter way to do this?
Could you also point me to resources/tutorials/projects that can help me develop this to a POC (considering my background)? With some learning curve, I believe I can get a hold of this domain as well, but I need the community's guidance and support here.
We also chose NVIDIA because of its active community and support. I hope the NVIDIA community can help me achieve this objective.
Thank you.