Reasoning from distance and direction measurements?

I’d like to be able to process a full circle of distance vectors (= from sonar+compass), first into a 2D room outline, and next match that outline against a library of previously stored room outlines, to determine which room I’m in, regardless of where I currently happen to be in the room.

Is there a library, reference implementation, or other starting point which will be best for a non-Computer Vision-centric application such as this?

Thanks much,

=Cliff

I'm not sure anyone here has experience with exactly this. But since every room outline is different, how would someone else's library of outlines help you?

The Point Cloud Library (PCL) is useful to create/visualize and deal with point clouds, see:

pointclouds.org
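Before you get to point clouds, turning the sonar + compass readings into 2D outline points is just trigonometry. A minimal sketch in Python (assuming compass bearings in degrees, 0° = north, increasing clockwise — adjust if your compass reports differently):

```python
import math

def scan_to_outline(readings):
    """Convert (bearing_deg, distance) sonar+compass readings into
    2D (x, y) outline points, robot at the origin.

    Bearings are assumed to be compass-style: 0 = north (+y),
    increasing clockwise, so east (+x) is 90 degrees.
    """
    points = []
    for bearing_deg, dist in readings:
        theta = math.radians(bearing_deg)
        # compass bearing -> Cartesian: x = east, y = north
        x = dist * math.sin(theta)
        y = dist * math.cos(theta)
        points.append((x, y))
    return points
```

Sweep the sonar through a full circle, feed the readings through this, and the result is the room outline as a 2D point cloud you can hand to PCL or any matcher.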

The Velodyne units do a circular LIDAR scan of a number of planes at the same time (16, 32, or 64 vertical planes on the devices I've seen). There's a visualizer sample here:

http://pointclouds.org/documentation/tutorials/hdl_grabber.php#hdl-grabber

Matching point clouds is a more complicated conversation:

https://www.google.com/#q=matching+point+clouds

Maybe something like this:

That sounds like a fairly simple point cloud matching/localization problem.
There are a variety of algorithms for this, with varying trade-offs between speed and quality.
A Google search for "point cloud matching" will turn up a number of algorithms, including Iterative Closest Point (ICP) and Principal Component Analysis (PCA) based options.
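To make ICP concrete, here is a rough 2D sketch in plain NumPy: brute-force nearest-neighbor correspondences, then the best rigid transform per iteration via the Kabsch/SVD method. A real implementation (e.g. PCL's `pcl::IterativeClosestPoint`) adds outlier rejection, better correspondence search, and convergence checks; this is illustration only.

```python
import numpy as np

def icp_2d(src, dst, iters=20):
    """Minimal 2D ICP sketch: rigidly align src points onto dst points.
    Returns the accumulated rotation R (2x2), translation t (2,),
    and the final mean point-to-point error."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    R_total, t_total = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(iters):
        # 1. correspondences: nearest dst point for each cur point (brute force)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # 2. best rigid transform for these correspondences (Kabsch via SVD)
        mu_c, mu_m = cur.mean(0), matched.mean(0)
        H = (cur - mu_c).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:   # guard against a reflection
            Vt[1] *= -1
            R = Vt.T @ U.T
        t = mu_m - R @ mu_c
        # 3. apply and accumulate the transform
        cur = cur @ R.T + t
        R_total = R @ R_total
        t_total = R @ t_total + t
    err = np.sqrt(((cur - matched) ** 2).sum(-1)).mean()
    return R_total, t_total, err
```

ICP only converges from a reasonable initial guess (it finds a local minimum), which is why PCA-based alignment or other global methods are often used to initialize it.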

I'd also add that if you are building a device that moves around (or is moved around) and you are trying to determine which room it is or was in, adding IMU, GPS, and/or Bluetooth/RFID sensor data can make matching more accurate. A little more spendy is something like https://pozyx.io, which uses its own beacons. The additional data becomes especially useful when two rooms are exactly the same size and orientation. With these methods, you wouldn't even necessarily need to do point cloud matching to determine which room you were in.

Or you can stick AprilTags on the ceiling of each room and use an upward-facing fisheye camera to make sure you can always see the ceiling.
It all depends on the actual needs of the application. That said, if you really do need LIDAR scans to determine which room you're in (and furniture won't move or get in the way, etc.), then matching the room point cloud against each of the database point clouds with one of the above algorithms, and minimizing some error function, would be the appropriate approach.
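The "minimize an error function over the database" step can be sketched very simply. This toy version (names and structure are my invention, not from any library) centers each cloud to factor out translation and scores by mean nearest-neighbor distance; a real system would also align rotation first, with ICP or a PCA-based method, before scoring:

```python
import numpy as np

def match_room(scan, room_db):
    """Pick the stored room whose outline best matches the scan.

    Sketch only: clouds are centered (translation-invariant) and
    scored by mean nearest-neighbor distance. Rotation alignment
    (e.g. ICP) should happen before scoring in a real system.

    scan:    iterable of (x, y) points from the current sweep
    room_db: dict mapping room name -> iterable of (x, y) points
    """
    scan = np.asarray(scan, float)
    scan = scan - scan.mean(0)
    best_name, best_err = None, float("inf")
    for name, outline in room_db.items():
        ref = np.asarray(outline, float)
        ref = ref - ref.mean(0)
        # mean distance from each scan point to its nearest stored point
        d2 = ((scan[:, None, :] - ref[None, :, :]) ** 2).sum(-1)
        err = np.sqrt(d2.min(axis=1)).mean()
        if err < best_err:
            best_name, best_err = name, err
    return best_name, best_err
```

If two candidate rooms score nearly the same, that's exactly the ambiguous case where the extra sensors (IMU, beacons, AprilTags) mentioned above earn their keep.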