pragmatic autonomous driving

What We Do

We develop advanced driver assistance and autonomous driving systems based on computer vision and modern machine learning, targeting the sweet spot between the two extremes.

Our approach is to build around computer vision for sensing. Modern machine learning techniques (mostly deep learning) make it possible to extract dramatically more high-level information from video than the legacy approaches deployed today, which immediately leads to multiple benefits.


Steering from Video

Advanced steering assistance logic, implemented as a neural net trained directly on video frames from the forward camera.

Here, modern machine learning allowed us to dramatically improve robustness and broaden the range of conditions the driver assist system can handle. Unlike legacy systems commercially available today, our solution does not depend on lane markings and can handle twisty roads with poor surface quality.
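The page does not describe the network itself, so as an illustration of the general idea only, regressing a steering angle directly from camera pixels, here is a minimal NumPy sketch. Every shape, weight, and function name below is hypothetical, not the deployed system:

```python
import numpy as np

def conv2d(x, kernel):
    """Valid 2-D convolution over a single-channel image (toy feature extractor)."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def predict_steering(frame, kernel, weights, bias):
    """Map a grayscale camera frame to a single steering-angle value."""
    features = np.maximum(conv2d(frame, kernel), 0.0)  # conv + ReLU
    return float(features.ravel() @ weights + bias)    # linear readout

# Hypothetical inputs: an 8x8 stand-in for a downsampled forward-camera frame
rng = np.random.default_rng(0)
frame = rng.random((8, 8))
kernel = rng.standard_normal((3, 3))
weights = rng.standard_normal(36)   # (8-3+1)^2 = 36 conv features
angle = predict_steering(frame, kernel, weights, bias=0.0)
```

A production system would use a much deeper network trained end-to-end on recorded driving video; the point here is only the input/output contract: frame in, steering angle out.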

Motion Capture with Commodity Hardware

Training data is the crucial factor that determines the accuracy of machine learning systems. No algorithmic improvements can fully compensate for training data that is not plentiful or accurate enough.

We have developed a self-contained motion capture system running on a commodity Android platform for low-cost training data collection. The system needs no link to the vehicle data bus: our software extracts steering and velocity information directly from the Android device's IMU and GPS sensors. The result is a system that can be deployed with zero setup cost on any vehicle.
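The exact signal processing is not described on this page, but the underlying idea can be sketched: ground speed from consecutive GPS fixes, and heading change by integrating the z-axis gyroscope. This is a simplified illustration with hypothetical inputs, not the production pipeline (which would also need filtering and sensor fusion):

```python
import math

def gps_speed(lat1, lon1, lat2, lon2, dt):
    """Ground speed (m/s) from two GPS fixes dt seconds apart (haversine distance)."""
    R = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    dist = 2 * R * math.asin(math.sqrt(a))
    return dist / dt

def heading_change(gyro_z_samples, dt):
    """Yaw change (rad) by integrating z-axis gyro samples taken every dt seconds."""
    return sum(gyro_z_samples) * dt

# Two fixes 0.0001 deg of latitude apart (~11 m), one second apart
speed = gps_speed(52.5200, 13.4050, 52.5201, 13.4050, 1.0)   # ~11 m/s
# 100 gyro samples of 1.0 rad/s at 100 Hz -> about 1 rad of heading change
yaw = heading_change([1.0] * 100, 0.01)
```

The heading-change rate combined with speed gives a steering proxy, which is presumably the kind of signal the system derives without touching the CAN bus.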

We have validated the results against data taken directly from the vehicle CAN bus and observed near-perfect agreement.
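One simple way such a validation can be quantified, offered here as an illustration rather than the method actually used, is to correlate the sensor-derived trace against the CAN-reported one. The traces below are made-up numbers:

```python
import math

def pearson(a, b):
    """Pearson correlation between two equal-length, time-aligned signal traces."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Hypothetical speed traces (m/s): IMU/GPS-derived vs. CAN-reported
derived = [10.0, 10.5, 11.2, 11.0, 10.8]
can_bus = [10.1, 10.4, 11.3, 10.9, 10.8]
r = pearson(derived, can_bus)  # close to 1.0 indicates near-perfect agreement
```

A correlation near 1.0, together with a small absolute error, is what "near-perfect agreement" would look like in such a comparison.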