Unbounded Above

Alex Teichman

PrimeSense Distortion and View Angle

After presenting our work on unsupervised calibration of PrimeSense sensors at RSS 2013, several people asked if the distortion is a function of view angle. In light of how these sensors work, this is a reasonable concern. However, we’re not seeing evidence of this. If you are, please contact me.

What follows are animated GIFs of the learned distortion model working at different view angles. If the depth distortion were significantly a function of view angle, I’d expect the corrected results to look unreasonable at some of these angles, and this isn’t the case.

Here’s a typical scene looking at a long table, wooden chairs, and a white wall. At 7 meters, the wall is highly distorted in the raw data, and applying the learned distortion model results in a reasonably flat wall.
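For concreteness, here’s a minimal sketch of applying a multiplicative distortion model like this to a raw depth image. The table layout (per-pixel multipliers binned by measured depth) and the function name are assumptions for illustration, not the exact structures from our implementation:

```python
import numpy as np

def undistort_depth(raw_depth, multipliers, depth_bin_edges):
    """Apply a multiplicative distortion model to a raw depth image.

    raw_depth       -- (H, W) array of measured depths in meters.
    multipliers     -- (B, H, W) array of learned correction multipliers,
                       one (H, W) slice per depth bin (assumed layout).
    depth_bin_edges -- (B + 1,) monotonically increasing bin edges in meters.
    """
    # Find which depth bin each measurement falls into.
    bins = np.clip(np.searchsorted(depth_bin_edges, raw_depth) - 1,
                   0, multipliers.shape[0] - 1)
    rows, cols = np.indices(raw_depth.shape)
    # Corrected depth is the raw reading scaled by the learned multiplier.
    # Missing returns (zeros) stay zero, since multiplication preserves them.
    return raw_depth * multipliers[bins, rows, cols]
```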

Switching to a top-down view, here’s the result at a view angle of about 40 degrees. Again, the wall becomes flat after applying the learned distortion model.

At about 20 degrees, the distortion is getting harder to see from the top-down view. This is because the distortion is a depth multiplier: it displaces each point along its viewing ray, and at this shallow angle those rays run nearly parallel to the wall, so the displacement is mostly in line with it.
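To make that concrete: multiplying a pinhole depth reading by m scales the whole 3D point, so the displacement lies along the viewing ray, and only its sine component is normal to the wall. A quick numeric sketch (the 3% multiplier and 7 m range are made-up illustrative values):

```python
import numpy as np

def displacement_components(grazing_angle_deg, range_m, multiplier):
    """Split the displacement caused by a depth multiplier into components
    normal to and along a wall viewed at the given grazing angle.

    A depth multiplier scales the whole 3D point, so the displacement
    lies along the viewing ray; only its component along the wall normal
    actually moves points off the plane.
    """
    theta = np.radians(grazing_angle_deg)
    shift = abs(multiplier - 1.0) * range_m  # total displacement along the ray
    return shift * np.sin(theta), shift * np.cos(theta)  # (off-plane, in-plane)

# Illustrative numbers: a 3% multiplier at 7 m range.
for angle in (90, 40, 20):
    off_plane, in_plane = displacement_components(angle, 7.0, 1.03)
    print(f"{angle:2d} deg: {off_plane * 100:.1f} cm off-plane, "
          f"{in_plane * 100:.1f} cm in-plane")
```

At 90 degrees the full 21 cm of displacement is off-plane; at 20 degrees only about a third of it is, which is why the error nearly vanishes from the top-down view.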

Viewing the same scene from the side is telling. We should see straight lines where the flat wall meets the left and top edges of the view frustum. These straight lines appear only after applying the learned distortion model.

This corresponds nicely to the learned distortion model, which says the upper left corner of the image peels away from the sensor. (Red means the raw data must be pushed further away to produce accurate readings; blue means it must be pulled closer.)
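A minimal sketch of that kind of visualization, assuming the model is available as a 2D array of multipliers at one depth slice (the array layout is an assumption):

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_multiplier_slice(multipliers):
    """Render one depth slice of the distortion model as a diverging colormap.

    multipliers -- (H, W) array of correction multipliers at a fixed depth.
    Red (> 1) means raw readings must be pushed further away;
    blue (< 1) means they must be pulled closer.
    """
    # Center the colormap at 1.0 so red/blue map to push/pull.
    spread = np.abs(multipliers - 1.0).max()
    plt.imshow(multipliers, cmap="RdBu_r",
               vmin=1.0 - spread, vmax=1.0 + spread)
    plt.colorbar(label="depth multiplier")
    plt.title("Learned distortion model (one depth slice)")
    plt.show()
```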