Foundational to our mission is the ability for the robot to see the world in 3D using just regular cameras. We developed a neural network that estimates depth from stereo cameras and a SLAM system that localizes our robot in 3D space. Since our robot navigates a spatially and temporally continuous world, we can use this consistency as a signal for self-supervised learning.
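To make the idea of consistency as a learning signal concrete, here is a minimal sketch (not our production code) of a photometric consistency check: given a depth estimate for one frame and the camera motion to the next frame (e.g. from SLAM), we reproject the next frame back into the current one and measure the disagreement. The function names, the pinhole intrinsics, and the nearest-neighbour sampling are illustrative assumptions, not a description of our actual network.

```python
import numpy as np

def backproject(depth, K):
    """Lift each pixel to a 3D point using its depth and the intrinsics K."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N
    rays = np.linalg.inv(K) @ pix                                      # 3 x N
    return rays * depth.reshape(1, -1)                                 # 3 x N

def photometric_consistency_loss(img_t, img_t1, depth_t, K, R, trans):
    """Warp img_t1 into frame t via depth + relative pose, compare to img_t."""
    h, w = depth_t.shape
    pts_t = backproject(depth_t, K)              # 3D points in frame t
    pts_t1 = R @ pts_t + trans[:, None]          # same points in frame t+1
    proj = K @ pts_t1                            # project into image t+1
    u = np.clip(np.round(proj[0] / proj[2]).astype(int), 0, w - 1)
    v = np.clip(np.round(proj[1] / proj[2]).astype(int), 0, h - 1)
    warped = img_t1[v, u].reshape(h, w)          # nearest-neighbour sampling
    return np.mean(np.abs(warped - img_t))       # L1 photometric error

# Toy usage: identical frames and zero camera motion give a near-zero loss,
# so a depth network can be trained to minimize this error without labels.
K = np.array([[500.0, 0.0, 64.0], [0.0, 500.0, 48.0], [0.0, 0.0, 1.0]])
img = np.random.rand(96, 128)
depth = np.full((96, 128), 2.0)
print(photometric_consistency_loss(img, img, depth, K, np.eye(3), np.zeros(3)))
```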
We believe that the fastest and most robust way to learn about the world is through interaction. When our robot encounters a novel object, it can move around the object to observe it from different viewpoints and acquire more data.
We design and produce robots under one roof to accelerate our iteration cycle. Our integration spans everything from the firmware to the electrical engineering to the final assembly of the robot.