Using AGX Xavier for both data recording and inference

Is there a best practice you can suggest for using the AGX Xavier for both data recording and inference?

What I mean by this is: we want to use the AGX Xavier to record various sensor modalities (e.g. LiDARs, cameras, radars) with high time fidelity (using PPS/GPS time sync), while at the same time consuming this sensor data in real time for object detection/tracking and mapping/localization applications.
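As an aside, once every sensor is disciplined to a common PPS/GPS clock, the remaining work on the software side is associating samples from streams running at different rates. A minimal sketch of nearest-timestamp association (the function names, rates, and the 20 ms skew budget are all illustrative assumptions, not anything NVIDIA-specific):

```python
from bisect import bisect_left

def nearest_timestamp(timestamps, t):
    """Return the entry in the sorted list closest to t."""
    i = bisect_left(timestamps, t)
    if i == 0:
        return timestamps[0]
    if i == len(timestamps):
        return timestamps[-1]
    before, after = timestamps[i - 1], timestamps[i]
    return after if after - t < t - before else before

def align_streams(camera_ts, lidar_ts, max_skew=0.05):
    """Pair each camera timestamp with the nearest LiDAR timestamp,
    dropping pairs whose skew exceeds max_skew seconds."""
    pairs = []
    for t in camera_ts:
        m = nearest_timestamp(lidar_ts, t)
        if abs(m - t) <= max_skew:
            pairs.append((t, m))
    return pairs

# Example: 30 Hz camera vs. 10 Hz LiDAR on a shared PPS-disciplined clock
cam = [i / 30.0 for i in range(6)]
lid = [i / 10.0 for i in range(3)]
print(align_streams(cam, lid, max_skew=0.02))
```

With a tight 20 ms budget, only the camera frames that land near a LiDAR sweep survive, which is usually the desired behavior for detection pipelines that fuse both modalities.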

Off the top of my head, I'm assuming we'll need a data-storage medium (either NVMe SSDs, or pushing the concatenated data out over Ethernet). We're exploring multi-camera solutions from NVIDIA preferred partners (which can connect directly through the MIPI interface), along with one or more 3D LiDARs such as Velodyne.
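One pattern that fits the "record everything, infer on what you can" requirement is fanning each captured frame out to two independent bounded queues, so a slow storage write never stalls the inference path. A minimal stdlib sketch (the queue sizes, frame format, and placeholder recorder/inference bodies are illustrative assumptions; on a real system the recorder would write to the NVMe SSD and the inference consumer would run the detection model):

```python
import queue
import threading

# Two independent bounded queues decouple the consumers from each other.
record_q = queue.Queue(maxsize=64)
infer_q = queue.Queue(maxsize=8)

def capture(n_frames):
    """Stand-in for a sensor driver callback; frames are (timestamp, payload)."""
    for i in range(n_frames):
        frame = (i / 30.0, f"frame-{i}")
        record_q.put(frame)              # recording must not drop data: block if full
        try:
            infer_q.put_nowait(frame)    # inference may drop frames under load
        except queue.Full:
            pass
    record_q.put(None)                   # sentinels stop the consumers
    infer_q.put(None)

def recorder(out):
    """Persist every frame (NVMe SSD write in practice)."""
    while (frame := record_q.get()) is not None:
        out.append(frame)                # placeholder for a file write

def inference(out):
    """Run detection on whatever frames arrive in time."""
    while (frame := infer_q.get()) is not None:
        out.append(("detected", frame[0]))  # placeholder for the real model

recorded, results = [], []
threads = [threading.Thread(target=recorder, args=(recorded,)),
           threading.Thread(target=inference, args=(results,)),
           threading.Thread(target=capture, args=(100,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(recorded))  # the recording path keeps all 100 frames
```

The key design choice is the asymmetric back-pressure: the recording queue blocks the producer so no data is lost, while the inference queue drops frames rather than letting a slow model delay the recorder.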

Any suggestions appreciated.


Answering your questions separately:

We have many examples of high-performance inference.
Here is a tutorial to start with:

We don't have many tools or samples for sensor data collection,
but there are some experiments from jetsonhacks that you can try: