Also known as: Remnant Robotics
Intelligent stereo vision cameras with software-driven 3D perception for robots.
Robotic cameras often deliver noisy or incomplete 3D perception, or achieve accuracy only at high cost, hindering reliable manipulation, navigation, and autonomy.
Software-driven stereo vision systems inspired by human perception, delivering accurate 3D data at lower cost using standard hardware and advanced algorithms.
Appears active as of February 2026 based on official website and Y Combinator page.
Efference develops advanced vision systems for robots, aiming to give machines the equivalent of reliable eyes and a visual cortex. The company takes a software-first approach to depth perception, inspired by human vision: robots generate accurate 3D information from standard camera inputs, addressing common robotics challenges such as noisy or expensive sensors.
Traditional stereo cameras rely heavily on hardware for depth estimation, often leading to high costs and calibration needs. Efference shifts this paradigm by using data-driven algorithms that analyze multiple visual cues such as shadows, perspective, and occlusions. This method mirrors human visual processing, producing robust 3D structure without specialized hardware. The result is higher performance and lower costs compared to conventional systems.
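For context, the hardware-centric pipeline Efference moves beyond rests on classical stereo triangulation, where depth comes solely from the disparity between left and right images. The sketch below illustrates that geometry; the focal length and disparity values are illustrative assumptions, not H-01 calibration data, and Efference's actual learned, multi-cue algorithms are not public.

```python
import numpy as np

# Classical stereo triangulation: depth is recovered from the horizontal
# offset (disparity) of a point between the left and right images.
# Focal length and disparities below are assumed values for illustration.
focal_length_px = 1400.0   # assumed focal length, in pixels
baseline_m = 0.06          # 60 mm baseline, as on the H-01

def depth_from_disparity(disparity_px) -> np.ndarray:
    """Z = f * B / d; depth in meters for each pixel's disparity."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    return np.where(disparity_px > 0,
                    focal_length_px * baseline_m / disparity_px,
                    np.inf)

# Larger disparity means closer: 84 px -> 1 m, 42 px -> 2 m, 21 px -> 4 m.
print(depth_from_disparity([84.0, 42.0, 21.0]))  # → [1. 2. 4.]
```

Because hardware stereo depends entirely on this one cue, its accuracy is tied to precise calibration of `f` and `B`; combining additional monocular cues, as Efference describes, relaxes that dependence.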
The flagship H-01 camera features dual 5MP global-shutter sensors with a 60mm baseline matching human eye spacing. It supports RGBD video at up to 2560x1440 resolution at 30fps, with options for higher frame rates. Additional capabilities include a 140-degree field of view, HDR imaging, dual 400Hz IMUs for motion tracking, and GMSL2 interface for industrial integration. Onboard processing fuses inertial and visual data for stable depth during motion.
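As a rough illustration of what the 60mm baseline implies for ranging accuracy, the standard first-order stereo error model dZ ≈ Z²·Δd / (f·B) can be evaluated. Only the baseline comes from the H-01 spec; the focal length and sub-pixel matching error below are assumed values, not published figures.

```python
# First-order depth-error growth for a stereo pair: dZ ≈ (Z**2 / (f*B)) * dd.
# Baseline (60 mm) is from the H-01 spec; focal length and matching error
# are assumptions chosen for illustration only.
focal_length_px = 1400.0    # assumed focal length, in pixels
baseline_m = 0.06           # H-01 baseline
disparity_err_px = 0.25     # assumed sub-pixel matching error

def depth_error_m(z_m: float) -> float:
    """First-order depth uncertainty at range z_m, in meters."""
    return (z_m ** 2 / (focal_length_px * baseline_m)) * disparity_err_px

# Error grows quadratically with range, so a short human-like baseline
# favors near-field tasks such as manipulation.
for z in (0.5, 1.0, 2.0, 4.0):
    print(f"range {z:>4.1f} m -> ~{depth_error_m(z) * 100:.1f} cm error")
```

Under these assumptions the error stays at millimeter-to-centimeter scale within a few meters, which is the regime manipulation and near-field navigation care about.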
Hardware sales with pre-orders for vision cameras
Robotics developers in autonomous vehicles, humanoid robots, drones, and manipulation systems
Pre-orders are open on the official site, with shipping planned for March 2026.
Hiring: open roles are listed on the careers page.
Efference's vision systems target areas where reliable perception is critical. In self-driving vehicles, they reduce sensor stack costs significantly, supporting scalable autonomy. For humanoid robots, the cameras provide affordable, high-quality vision as production scales. Urban drones benefit from GPS-denied navigation through enhanced SLAM and collision avoidance. Robotic manipulation tasks also see improvements via consistent 3D data for policy training.
The H-01 camera is available for pre-order, with shipping planned for March 2026, positioning Efference to meet growing demand in robotics sectors that require trustworthy vision. The company's Y Combinator backing underscores its focus on practical, deployable solutions.
Founded by Gianluca Bencomo, the team brings together experts in biology, neuroscience, and computer science, with backgrounds spanning visual perception in primates, fruit fly nervous system simulations, and contributions to Mars missions. This interdisciplinary expertise drives the biologically inspired algorithms at the core of Efference's products.
By democratizing advanced robotic vision, Efference aims to remove the perception bottleneck in autonomy. The approach supports broader adoption across industrial automation, manufacturing, and robotics, enabling safer and more efficient machines in diverse environments.