Originally published at: https://developer.nvidia.com/blog/benchmarking-camera-performance-on-your-workstation-with-nvidia-isaac-sim/
Robots are typically equipped with cameras, and when designing a digital twin simulation it's important to accurately replicate their performance in the simulated environment. However, to make sure the simulation runs smoothly, it's crucial to check the performance of the workstation that is running the simulation. In this blog post, we explore the steps to setting…
I’ll download this and start running this simulation soon.
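As a rough idea of the kind of benchmark loop I expect to end up with (this is my own sketch, not the post's code, and it assumes Isaac Sim's standalone Python API; package paths such as `omni.isaac.kit` and `omni.isaac.sensor` differ between Isaac Sim versions):

```python
# Hypothetical workstation camera benchmark using Isaac Sim's standalone
# Python API. Package paths and the Camera class are assumptions and may
# differ from the version the post targets.
import time

from omni.isaac.kit import SimulationApp

# Launch Isaac Sim headless so the loop mostly measures render/step cost.
simulation_app = SimulationApp({"headless": True})

# These imports must come after SimulationApp is created.
from omni.isaac.core import World
from omni.isaac.sensor import Camera

world = World(stage_units_in_meters=1.0)
world.scene.add_default_ground_plane()

# One RGB camera; add cameras or raise the resolution to stress the GPU harder.
camera = Camera(prim_path="/World/benchmark_camera", resolution=(1280, 720))

world.reset()
camera.initialize()

# Warm up, then time a fixed number of rendered steps.
for _ in range(30):
    world.step(render=True)

num_frames = 300
start = time.perf_counter()
for _ in range(num_frames):
    world.step(render=True)
    rgba = camera.get_rgba()  # pull the frame so the render is actually consumed
elapsed = time.perf_counter() - start

print(f"{num_frames / elapsed:.1f} FPS at 1280x720 with one camera")
simulation_app.close()
```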
I've got [4] real-world use cases that I would like to begin developing for machine vision and robotics, 3 of which I can tackle on my own initiative.
For static workstations, this looks like a great place to start.
My use case examples are as follows:
1] Running the simulator to align a collaborative robot that places high-value titanium substrate underneath a laser and targets it to fire one laser pass at a time. We can achieve different colors with titanium, and we want to laser-mark 10,000+ pieces one color pass at a time.
I would like to teach an AI-based machine vision system to align the first pass, then to understand the additional [future] color information the camera will see as the job progresses, so it can accurately cycle the titanium under the laser head, one color pass at a time.
Are these the kinds of ROS commands that I could build on top of a base model framework like this?
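Roughly the kind of node I have in mind, just to make the question concrete (the topic names, the `String` command, and the hue thresholds are all placeholders I made up, and the color check is deliberately naive):

```python
# Hypothetical ROS 2 node for use case 1: watch the camera feed for the
# titanium oxide color produced by the last laser pass and, once the expected
# hue appears, ask the cobot cell to cycle to the next piece / next pass.
import cv2
import numpy as np
import rclpy
from cv_bridge import CvBridge
from rclpy.node import Node
from sensor_msgs.msg import Image
from std_msgs.msg import String


class ColorPassMonitor(Node):
    def __init__(self):
        super().__init__("color_pass_monitor")
        self.bridge = CvBridge()
        # Placeholder parameters: target oxide hue (OpenCV HSV, 0-179) and tolerance.
        self.target_hue = self.declare_parameter("target_hue", 110).value
        self.hue_tolerance = self.declare_parameter("hue_tolerance", 10).value
        self.sub = self.create_subscription(Image, "/camera/image_raw", self.on_image, 10)
        self.pub = self.create_publisher(String, "/laser_cell/command", 10)

    def on_image(self, msg: Image):
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # Crude dominant-hue estimate over the whole frame; a real system would
        # first segment the marked region of the part.
        mean_hue = float(np.mean(hsv[:, :, 0]))
        if abs(mean_hue - self.target_hue) <= self.hue_tolerance:
            self.pub.publish(String(data="advance_next_pass"))
            self.get_logger().info(f"Target hue reached ({mean_hue:.1f}); requesting next pass")


def main():
    rclpy.init()
    rclpy.spin(ColorPassMonitor())
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```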
2] In the case of mobile autonomous robots that use cameras and lidar: would this be a good place to start with designating a [start point] where a machine vision camera can [tare] itself back to [zero] based on what it sees, giving itself an X/Y/Z coordinate position fix that it can check against other data assets, say lidar, GPS, and perhaps a digital twin placed coordinate?
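To make the [tare] idea concrete, here is the transform math I'm picturing (all frame names, poses, and numbers below are invented for illustration): when the camera recognizes the start marker, compute the offset between the camera's pose estimate and what the reference source (lidar map, GPS, or digital twin coordinate) says that pose should be, then apply that offset to later camera poses.

```python
# Hypothetical "tare at the start point" sketch for use case 2, in plain NumPy.
import numpy as np


def pose_to_matrix(xyz, rpy):
    """Build a 4x4 homogeneous transform from position and roll/pitch/yaw (ZYX)."""
    roll, pitch, yaw = rpy
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    R = np.array([
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = xyz
    return T


# Pose of the robot at the start marker as seen by the camera (e.g. a fiducial
# detection) versus what the reference data asset says it should be.
T_camera = pose_to_matrix([1.02, 0.48, 0.0], [0.0, 0.0, 0.031])
T_reference = pose_to_matrix([1.00, 0.50, 0.0], [0.0, 0.0, 0.0])

# The "tare": a fixed correction mapping camera-frame poses into the reference
# frame, recomputed whenever the robot revisits the start point.
T_offset = T_reference @ np.linalg.inv(T_camera)

# Later camera pose estimates can then be expressed in the reference frame and
# compared against lidar, GPS, or digital-twin coordinates.
T_camera_later = pose_to_matrix([3.57, 1.21, 0.0], [0.0, 0.0, 0.8])
T_corrected = T_offset @ T_camera_later
print("Corrected X/Y/Z:", np.round(T_corrected[:3, 3], 3))
```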