Unbounded Above

Alex Teichman

About Me

Improving people’s lives with AI.

For almost as long as I can remember, I’ve been super excited about the potential for artificial intelligence to make a difference in the world. In undergrad, I was fortunate enough to work on self-driving cars in the DARPA Grand Challenges, where I got to see the power of 3D sensing firsthand. It was clear these sensors had huge implications for all computer vision - not just self-driving cars - so I launched into a PhD on this topic with Sebastian Thrun at Stanford.

Now we’re using these 3D computer vision techniques to build something new and exciting…

Lighthouse AI, Inc.

2014 - Present // Cofounder and CEO

Lighthouse is an interactive assistant for your home: Tell it what you care about, and it tells you when those things happen.

It’s made possible by 3D sensing and cutting-edge deep learning, and is a direct descendant of the work my cofounder and I did on perception systems for self-driving cars.

Check it out here. Oh, and we’re hiring!

Stanford University

2007 - 2014 // Computer Science PhD Candidate

At Stanford I worked with Sebastian Thrun, pursuing my goal of advancing the state of the art in computer vision using 3D sensors. My primary research platform was Junior, the self-driving car Stanford entered in the 2007 DARPA Urban Challenge.

There are a few ways that 3D sensors end up being extraordinarily useful for perceiving the world:

  • (A) Lighting invariance - The scene looks the same whether in full sun, partial shade, or darkness.
  • (B) Segmentation - It becomes significantly easier to understand where the boundaries of objects are. The question changes from “Is there anything in this scene I care about?” to “What is this object?” - a much easier question to answer. (There’s a toy sketch of this right after the list.)
  • (C) Useful object descriptors - You get direct access to information like height or 3D shape that helps determine what an object is.
  • (D) Tracking - It becomes much easier to understand the motion of an object through space and time. This lets you accumulate information across different views and use motion itself as a cue in deciding what is what.
  • (E) Self-supervised learning - The segmentation and tracking described above make it possible for the system to learn on its own. Say you have the full motion of a single object through space and time, and you know for sure what it is from analyzing the full track, but there’s a short stretch in the middle where the classifier wasn’t sure what it was looking at - now you can learn automatically from those frames!
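
To make (B) and (C) a bit more concrete, here’s a toy sketch - nothing that ever ran on Junior, just the gist. It assumes the point cloud arrives as an N×3 numpy array, that the ground height is known, and it uses scikit-learn’s DBSCAN as a stand-in for whatever clustering you prefer; the function name and thresholds are made up for illustration.

    # Toy sketch of (B) segmentation and (C) descriptors from a 3D point cloud.
    import numpy as np
    from sklearn.cluster import DBSCAN

    def segment_and_describe(points, ground_z=0.0, eps=0.3, min_samples=10):
        """points: (N, 3) array of x, y, z measurements from a 3D sensor."""
        # (B) Segmentation: drop near-ground points, then cluster the rest by
        # Euclidean distance. Each cluster is a candidate object.
        above_ground = points[points[:, 2] > ground_z + 0.2]
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(above_ground)

        objects = []
        for label in set(labels) - {-1}:      # -1 is DBSCAN's noise label
            cluster = above_ground[labels == label]
            # (C) Descriptors: 3D structure gives size and shape directly.
            objects.append({
                "points": cluster,
                "height": cluster[:, 2].max() - cluster[:, 2].min(),
                "footprint": cluster[:, :2].max(axis=0) - cluster[:, :2].min(axis=0),
                "centroid": cluster.mean(axis=0),
            })
        return objects

With segments like these in hand, “What is this object?” becomes an ordinary classification problem over the descriptors.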

My work started with building a system that made use of (A)-(D) on Junior, and progressed to inventing new techniques in tracking under difficult conditions (D) and enabling the system to learn on its own (E). That last part is hard to wrap your head around, but probably the most important and exciting bit.
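
To make (E) a little less abstract, here’s a minimal sketch of the flavor of the idea - not the actual algorithm from the thesis, which is far more careful about how the track-level decision gets made (here it’s just an average of per-frame probabilities). It assumes a scikit-learn-style per-frame classifier (anything with predict_proba) and tracks given as frames-by-features arrays, one per tracked object; the names and threshold are made up for illustration.

    # Toy sketch of (E): tracking-based self-supervised learning.
    import numpy as np

    def harvest_labels(classifier, tracks, confidence=0.95):
        """Turn confident track-level decisions into new frame-level training data."""
        new_X, new_y = [], []
        for frames in tracks:                         # frames: (T, D) features of one tracked object
            probs = classifier.predict_proba(frames)  # per-frame class probabilities
            track_probs = probs.mean(axis=0)          # pool evidence over the whole track
            label = int(track_probs.argmax())
            if track_probs[label] < confidence:
                continue                              # still unsure about this track; skip it
            # The interesting frames are the ones the per-frame classifier got wrong:
            # the track as a whole tells us what they really were.
            hard = probs.argmax(axis=1) != label
            new_X.append(frames[hard])
            new_y.append(np.full(hard.sum(), label))
        if not new_X:
            return None
        return np.vstack(new_X), np.concatenate(new_y)

Retrain on the harvested frames, re-run the tracker, and repeat: each pass, the per-frame classifier gets better at exactly the views it used to struggle with, with no new hand labels.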

The preface of my dissertation provides a slightly more detailed but still accessible overview. For excruciating detail, see my publications.

Willow Garage

2008 & 2009 // PhD Student Researcher

While in the PhD program, I was lucky enough to spend my summers at Willow Garage, a robotics research lab and technology incubator. There, I used 3D sensors to enable the PR2 (a research platform with a wheeled base, arms, and grippers) to recognize things like people waving to get its attention. At the time, the 3D sensing came from a Hokuyo laser on a tilt stage and a custom active stereo rig - the first 3D sensors I worked with that didn’t cost $70,000 each - and it made me think hard about how long it’d take for 3D sensors to truly become mass market.

This was a great time, and some very smart people taught me a lot of practical C++, which I’d later make good use of at Stanford.

University of Pennsylvania

2003 - 2007 // Electrical Engineering B.S.E.

As an undergrad, I spent a lot of time in the GRASP lab with Mark Yim and Dan Lee. Most importantly, I got to help out with the DARPA Grand Challenge team in 2005 and 2007, where I saw the power of 3D sensing. Self-driving cars only worked because these sensors gave direct access to the physical structure of the surrounding environment - totally different from having just a flat 2D image.

(For computers, anyway. Vision is a strange domain: human brains are so good at it that it’s difficult to tell what’s easy and what’s hard to solve algorithmically. See also Moravec’s paradox.)

It was clear that these new sensors could be an equally big deal for all other computer vision, and I was extraordinarily excited about this. How could we use 3D sensing to get computers to understand what they see in the world? And how could we use those new capabilities to improve people’s lives?

And so I applied to a few PhD programs that had strong entries in the DARPA challenges…

Eastern Regional High School

1999 - 2003

In high school, my experience with FIRST Robotics made it clear that I wanted to work on robot brains. I didn’t fully understand what that meant yet, but I knew it’d lead somewhere exciting.

On the side, I did some work with Rick van Berg at the UPenn High Energy Physics lab, helping out with testing photomultiplier tubes for the Sudbury Neutrino Observatory and ASICs for the ATLAS particle detector experiment at the Large Hadron Collider. While not especially related to robot brains, particle physics was - and still is - super fascinating to me. What is the fundamental nature of reality? How does the universe work? I was delighted to see ATLAS discover the Higgs boson almost ten years later.