We wanted to live in an always clean home, but were tired of always cleaning.
We’d spend hours cleaning, but with kids and pets, the floors never stayed clean for long. We tried all the various robovacs, but they actually made things worse. They chewed up wires, got lost searching for the dock, and one even tore up an expensive rug after getting stuck on it for an hour!
And we weren’t the only ones — friends and family had similar frustrations. So we decided to build something new. Something that would just… work.
As it turns out, robovacs are chock-full of sensors. Some actually boast about this in their marketing, leading customers to believe that sensors mean sophistication. But we live in homes built by humans and for humans, and humans don’t have Lidar or radar. Instead we have an incredibly powerful perception system: two eyes and a brain. So, we decided to build a robot that sees like a human, so it can clean like a human.
We gave the robot eyes (RGB cameras) and built its brain (state-of-the-art algorithms). We developed a simultaneous localization and mapping (SLAM) system from scratch, enabling the robot to build a photo-realistic 3D map of a home, move around with precision, and always be able to locate itself. Visual input also meant we could develop semantic understanding, so the robot actually understands what should be cleaned and what should not be cleaned.
And we doubled down on privacy along the way. From day one, we committed to processing all data on the device, so no video or audio ever leaves your home. This was hard to do, but it was simply non-negotiable. You shouldn’t have to sacrifice privacy for convenience.
And we didn’t stop at software. We also built purposeful hardware that takes full advantage of the powerful perception system, so the two work together to optimize cleaning. This meant reinventing the vacuum and mopping system from the ground up and tightly integrating hardware and software for super effective cleaning.
It took some time to get here (5 years, 6 months to be precise), but we’re so proud of what we’ve built in Matic. We are super excited for you to use Matic yourself and genuinely hope it makes chasing that “always clean” feeling a little more effortless.
Navneet has a PhD in Computer Vision and is perhaps best known for Histogram of Oriented Gradients (HOG), the landmark work from his thesis, which remained state of the art for 10 years and has over 41,000 citations.
Navneet and Mehul were early employees at a startup called Like.com, which pioneered facial recognition in images in 2005 and was acquired by Google in 2010.
Next, they started Flutter (Y Combinator W12), a startup that built on-device hand-gesture recognition. The algorithms ran locally on the device using the webcam, with just an 800kB footprint. And yet, Flutter beat the state-of-the-art algorithms in accuracy and recall, leading to its acquisition by Google in 2013. At the time of acquisition, Flutter was the #1 Mac App Store app in 73 countries and its users had performed over 77 million gestures.
At Google, they helped launch Google Cardboard at Google Research. Later, the team moved over to Nest where Mehul was the Lead PM for Nest Cam and Navneet led the algorithms team. The team conceptualized and launched Nest Cam Outdoor, Nest Cam IQ, and Nest Hello Doorbell along with other computer vision features such as person and stranger detection.
In 2017, Mehul and Navneet returned to the startup world, co-founding Matician. They have since grown the team to over 40, including Google, Tesla, Nuro, and Cisco alumni, all working together to put home autonomy into the hands of our customers.