Silicon Valley team wins IROS navigation challenge


The 2014 Kinect Autonomous Navigation Contest was held on September 18, 2014, the last day of the International Conference on Intelligent Robots and Systems (IROS) in Chicago, Illinois, USA. Sponsored by Microsoft and Adept MobileRobots, the contest showcased the state of the art in autonomous navigation through realistic, dynamic, unstructured environments: in other words, a real-world problem like a messy office with moving people and furniture. Table and chair legs were narrow, surfaces were uneven, and the carpet had a very eye-catching pattern.

Many members of the SV-ROS user group and the Homebrew Robotics Club were involved in preparing for the competition. Their robot, “Maxed-Out”, completed the most waypoints in the shortest amount of time to win the contest. Here are some snippets from the post by Patrick Goebel describing the event; his full write-up is at PiRobot.

“All teams would be given the same robot, a Pioneer 3-DX (shown on the left) from Adept Mobile Robots. The contest was co-sponsored by Microsoft, who provided the Kinect for Windows depth camera located on the vertical post attached to the back of the robot and facing directly forward. The camera was the only sensor that would be available to the programmers. There were no sonar or IR sensors and no laser scanners, so all mapping, navigation, and obstacle avoidance would have to be based on vision alone.

In the end, four of the team members (Greg Maxwell, Steve Okay, Ralph Gnauck and Girts Linde) flew to Chicago several days ahead of the event and fine-tuned the robot’s behavior even further. They even had the good fortune to meet up with Mathieu Labbé, who gave a paper on RTAB-Map at the IROS conference. Of the six teams that entered the contest, only one other team reached as many waypoints as our robot (3 out of 5 locations), but our robot completed the course twice as fast, so we came out on top. The fourth waypoint was at the end of a long, featureless hallway that completely confused the vision-based localization methods used by RTAB-Map, and was not a situation that we anticipated or tested. At least we know better for next time!

Since writing this blog post, Mathieu Labbé has done a nice write-up of his own about the contest that you can find on his RTAB-Map page. You can also read the official press release from the SV-ROS user group on the newsfeed.”