CLAMS
Calibrating, localizing, and mapping, simultaneously.
This page contains instructions for running CLAMS to calibrate an Asus Xtion Pro Live, Microsoft Kinect, or other PrimeSense-derived sensor. The code is derived from that of our RSS 2013 paper.
We’ve tested this several times and it’s working for us, but there will doubtless be some hiccups. Please feel free to email me if you hit one. You can find my email address here.
If you use this work for academic purposes, please cite the CLAMS paper and Christian Kerl’s excellent SLAM system, upon which we rely.
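For reference, here are BibTeX entries for those two papers. These are reconstructed from the published versions, so double-check the details before using them.

```bibtex
@inproceedings{teichman2013unsupervised,
  author    = {Alex Teichman and Stephen Miller and Sebastian Thrun},
  title     = {Unsupervised Intrinsic Calibration of Depth Sensors via {SLAM}},
  booktitle = {Robotics: Science and Systems (RSS)},
  year      = {2013}
}

@inproceedings{kerl2013dense,
  author    = {Christian Kerl and J{\"u}rgen Sturm and Daniel Cremers},
  title     = {Dense Visual {SLAM} for {RGB-D} Cameras},
  booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year      = {2013}
}
```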
Setup
You’ll need the CLAMS repo and DVO-SLAM repo. The former contains the core calibration code; the latter contains the SLAM system that we use. CLAMS was most recently confirmed to work with DVO-SLAM revision 7502514. Stick to this one for now.1
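A minimal sketch of fetching everything, assuming both repositories are still at their original GitHub locations:

```bash
cd ~
git clone https://github.com/teichman/clams.git
git clone https://github.com/tum-vision/dvo_slam.git
# Pin DVO-SLAM to the revision that CLAMS was confirmed to work with.
cd dvo_slam && git checkout 7502514
```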
Both repositories will need to be recognized by ROS. If you don’t have ROS, you can find setup instructions here. Once ROS is set up, you’ll need to tell it where to find the CLAMS and DVO-SLAM repositories. Assuming you placed both repositories in your home directory, you can do this by adding the following line to your ~/.bashrc file.
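Assuming both repositories live directly in your home directory, the line would look like this:

```bash
export ROS_PACKAGE_PATH=~/clams:~/dvo_slam:$ROS_PACKAGE_PATH
```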
Don’t forget to re-source your .bashrc with `. ~/.bashrc`. At this point `roscd clams` should take you to the clams package directory, and you can build both repositories with `rosmake clams && rosmake dvo_benchmark`. This will take a long time, mainly because the code depends on PCL 1.7 and we did not want to assume that you had that handy. Feel free to switch the dependency to your own local PCL if you are so inclined. Otherwise, go get lunch.
This code was built and tested on Ubuntu 12.04 using ROS Fuerte. Hopefully it will work on similar versions, but you may experience some pain in resolving dependencies. If you run into trouble, don’t hesitate to contact me.
Recording
Record four or five different sequences. Each sequence should contain you walking with the sensor to and from a wall or something similar. This isn’t strictly required, but large flat surfaces provide lots of training data. You’ll need to see that surface both up close and far away, at different ranges and in different parts of the sensor’s field of view.
Here’s an example of what it looks like when we calibrate a sensor ourselves.
- Start at least ten meters away so you calibrate the full range of the sensor.
- Don’t get too close to a wall. If the sensor sees nothing but a flat wall, it’s likely to get lost.
- Go slow.
- Wave the sensor in circles as you go so that you get sufficient training data in the corners, where it really matters.
- Record in an area where today’s SLAM systems are likely to work. In particular, this means there should be lots of texture and 3D structure in the environment.
- You can improve your chances of producing a good SLAM solution by putting a few colorful and textured books in key places.
- Don’t use near-identical sequences. Try to look at something different in each one.
To run the recording, use the following program:
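(The binary name below is an assumption; check what the clams package actually builds, e.g. with tab completion on `rosrun clams`.)

```bash
# Hypothetical invocation; substitute the actual recorder binary from the clams package.
rosrun clams record
```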
To make the calibration process easier for the following steps, store your calibration sequences in a directory `CLAMS_WORKSPACE/sequences`.2 For example, here is what my `CLAMS_WORKSPACE` looked like when calibrating Xtion Pro Live #7.
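A layout along these lines; the sequence directory names are made up for illustration, and the exact on-disk format is whatever the recorder produces:

```
CLAMS_WORKSPACE/
└── sequences/
    ├── xpl07_sequence00/
    ├── xpl07_sequence01/
    ├── xpl07_sequence02/
    ├── xpl07_sequence03/
    └── xpl07_sequence04/
```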
SLAM
Once you have collected some raw data, you can cd to your `CLAMS_WORKSPACE` and run SLAM on all the sequences there.
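(The program name below is an assumption; check the clams package for the actual SLAM driver.)

```bash
cd CLAMS_WORKSPACE
# Hypothetical invocation; runs DVO-SLAM on every recorded sequence.
rosrun clams slam sequences/*
```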
Despite our best efforts, there are still occasional failures to produce good maps. These will result in bad calibrations, so first check by eye that all maps look reasonable. You can visualize all trajectories for a given sensor by running the following from inside your `CLAMS_WORKSPACE`.
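(Again, the program name is an assumption; look for a trajectory visualization binary in the clams package.)

```bash
# Hypothetical invocation; displays the SLAM trajectory and map for each sequence.
rosrun clams visualize_trajectories
```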
Remove from your `CLAMS_WORKSPACE` any sequences that look unreasonable, and keep trying until you’ve got four or five sequences that have produced good maps.
Some example maps:
Calibration
Run the calibration program, passing it the sequences you recorded and the trajectories produced by the SLAM system. Start up a roscore in one terminal by running `roscore`, then run the calibration program.3
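(The program name below is an assumption; check the clams package for the actual calibration binary.)

```bash
# Hypothetical invocation; consumes each sequence and its SLAM trajectory.
rosrun clams calibrate sequences/*
```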
The output will be a learned distortion model and a visualization of the model.
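The on-disk layout will be roughly as follows. Only overview.png is referenced below; the other names are assumptions.

```
CLAMS_WORKSPACE/
├── distortion_model
└── distortion_model-visualization/
    ├── overview.png
    └── ...
```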
The first thing you should look at is `distortion_model-visualization/overview.png`. An example of this image is on the left. Each row represents a two-meter slice of the distortion model, with 0-2 meters at the top and 8-10 meters at the bottom.
The left column contains coverage maps which show whether you have sufficient training data in each region of the depth image. Here, black means you have zero training examples in an 8x8-pixel block, and white means you have more than 100. In this example, I could use a bit more training data at long range on the far right side of the sensor, but otherwise things are looking good.
The right column shows the learned depth multiplier image. Here, blue means that raw data must move further away to produce accurate measurements, and red means that raw data must move closer.
After you are satisfied that you’ve collected sufficient training data to learn a good model, you can inspect the calibration in action.
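(The program name below is an assumption; look for an inspection binary in the clams package.)

```bash
# Hypothetical invocation; shows the data with and without the learned model applied.
rosrun clams inspect sequences/* distortion_model
```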
Using this program, you can look at data and toggle the distortion model by pressing `m`. You should be able to see something similar to this:
Using the learned calibration in your own code
Say you want to use a learned distortion model in a different project and don’t want to link in the whole CLAMS repository. We’ve made this fairly easy: you just copy a few files from the CLAMS repo. You’ll need eigen3 and boost.
Here’s an example program, test.cpp, in which you want to use the CLAMS DiscreteDepthDistortionModel.
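A minimal sketch; the class, typedef, and method names are assumptions based on the CLAMS header, so verify them against your copy of discrete_depth_distortion_model.h:

```cpp
// test.cpp: apply a learned CLAMS distortion model to a raw depth image.
#include "clams/discrete_depth_distortion_model.h"

int main(int argc, char** argv)
{
  // Load the learned model produced by the calibration step.
  clams::DiscreteDepthDistortionModel model;
  model.load(argv[1]);  // path to the "distortion_model" file

  // A raw 640x480 depth image in millimeters, as produced by the sensor.
  // (Assumes DepthMat is the Eigen matrix typedef from the CLAMS header.)
  clams::DepthMat depth = clams::DepthMat::Zero(480, 640);

  model.undistort(&depth);  // apply the calibration in place
  return 0;
}
```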
Assuming ROS is set up for the CLAMS repo as discussed above, you can get the required source code with this:
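Something like the following; the paths inside the repo are assumptions, so locate the files with find if they differ:

```bash
mkdir -p ~/myproject/clams
roscd clams
# If these paths differ in your checkout, run:
#   find . -name 'discrete_depth_distortion_model.*'
cp include/clams/discrete_depth_distortion_model.h ~/myproject/clams/
cp src/lib/discrete_depth_distortion_model.cpp ~/myproject/
```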
Compilation will look something like this:
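Roughly this; the `-fopenmp -Wall` flags are from the original example, while the include paths (and any boost link flags your copy of the code needs) may differ on your system:

```bash
g++ -fopenmp -Wall -I. -I/usr/include/eigen3 \
    test.cpp discrete_depth_distortion_model.cpp -o test
```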
FAQ
Isn’t the distortion affected by view angle relative to the surface?
In light of how these sensors work, this is a reasonable concern. However, we’re not seeing evidence of this. For an example, see this experiment.
The calibration doesn’t look right. What’s going on?
First, make sure your calibration maps are correct and that you have a sufficient amount of training data in all parts of the view frustum. If these aren’t the problem, feel free to contact me.
You should be aware that it’s also possible to invalidate your learned distortion model in at least one way. We’ve found that applying significant force to the plastic case will noticeably change the calibration. This came up for us when using Velcro cinch straps to rigidly mount the sensor. Don’t do it. I also wouldn’t be surprised if dramatic temperature changes could affect the calibration.
Why is this called CLAMS? You’re not really calibrating and mapping simultaneously.
In this case, that’s true. However, the general CLAMS task was posed as an iterative process, where one alternates between producing maps and calibrating from them. It turned out that for PrimeSense sensors, the close range data was accurate enough that one iteration of CLAMS is sufficient to produce good results. It’s possible that running more iterations would improve the final output, but we haven’t experimented with this.
What about focal length calibration?
That’d be super cool and should be doable with the CLAMS framework. You’d probably use something like grid search over focal length, using an objective function that encodes the intuition that points should be close to each other and lie on planes.
Why this gimpy discrete model?
When we started this, we had no idea what to expect, but did have an abundance of training data. So, we let the training data speak for itself. Now that we’ve seen a number of learned distortion models, it’s clear that you could probably decrease the amount of training data required by choosing a better parameterization. That said, with enough training data, you’ll still get good results using the discrete model that is provided.
Changelog
- 2014-10-17: Added note about compatibility of newer g2o versions. Thanks to Marina Kollmitz for reporting this.
- 2014-01-30: Added note about `CLAMS_WORKSPACE`. Thanks to Nick Shelton for pointing out the ambiguity.
- 2013-10-23: Emphasized the particular revision of DVO-SLAM we recommend. Thanks to Stephen Miller for testing the latest revision.
- 2013-09-10: Added `-fopenmp -Wall` to the example g++ line. Thanks to Tom Whelan for noticing this.
Footnotes
1. I’m told also that newer versions of g2o can be problematic. If you run into this, try g2o f5950e2.
2. This is just a directory you create; it has nothing to do with the ROS workspace concept that shows up in later versions of ROS.
3. `roslaunch` will start its own roscore if necessary and normally will shut it down when the relevant processing is done. However, on one of our test machines roslaunch would instead hang after finishing SLAM, and starting your own external roscore is the only workaround we have so far. If you have a better solution, let me know.