A modern autonomous robot needs a sophisticated perception stack to process its observations of the world. It must build a representation of its environment, localize itself within that environment, and extract semantic understanding from it. You would be responsible for developing our vision- and audio-based perception stack to build these representations for use in robot autonomy. Specific areas of development include 3D reconstruction, SLAM, learning-based semantics, and speech analysis.
If the algorithms are the brain, the platform is the nervous system. There's plenty of work to be done: tuning our drivers for maximum performance and power efficiency, building a fail-safe software update system, writing system services, optimizing compute graphs, and bridging the gap between our perception stack and robot behavior to make everything reliable and seamless.
Open roles: CV/ML Engineer, iOS Engineer