Integrating Visual Perception, Semantic Understanding, and Action Decision-Making, a Chinese Startup’s New ADAS Platform Aims to Overcome End-to-End Architecture Bottlenecks
Chinese startup DeepRoute.ai recently unveiled its next-generation driver-assistance platform, DeepRoute IO 2.0, built on its self-developed VLA (Vision-Language-Action) model. The company claims the architecture breaks through the interpretability and generalization bottlenecks of traditional end-to-end (E2E) systems.

The platform integrates visual perception, semantic understanding, and action decision-making into a single model, supporting functions such as spatial semantic comprehension, irregular obstacle detection, traffic sign text recognition, and memory-based voice control. According to the company, this combination delivers a more humanlike and safer advanced driver-assistance experience, with greater technical transparency and stronger mass-production adaptability than current market solutions. DeepRoute.ai says it has secured five design-win projects, with deployments approaching 100,000 vehicles.

The company also plans to accelerate passenger-car applications and expand into Robotaxi operations as part of its roadmap toward Road AGI (Artificial General Intelligence for Roads).
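
DeepRoute.ai has not published implementation details of its VLA model. Purely to illustrate the general vision-language-action pattern the article describes (camera frames plus a language input mapped to driving actions), the following is a minimal PyTorch sketch; every module, dimension, and the steering/acceleration/braking action parameterization are assumptions for illustration, not DeepRoute's actual design.

```python
# Hypothetical sketch of a vision-language-action (VLA) pipeline.
# All module names, sizes, and the action parameterization are
# illustrative assumptions, not DeepRoute's architecture.
import torch
import torch.nn as nn

class ToyVLAPolicy(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=128, n_actions=3):
        super().__init__()
        # Vision branch: encode a camera frame into a feature vector.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        # Language branch: embed a tokenized instruction, e.g. a voice
        # command or recognized traffic-sign text.
        self.token_embed = nn.Embedding(vocab_size, embed_dim)
        # Fusion: cross-attention from language tokens over the visual
        # feature, a common way to ground text in the observed scene.
        self.fuse = nn.MultiheadAttention(embed_dim, num_heads=4,
                                          batch_first=True)
        # Action head: map the fused representation to continuous
        # controls (here assumed: steering, acceleration, braking).
        self.action_head = nn.Linear(embed_dim, n_actions)

    def forward(self, image, tokens):
        vis = self.vision(image).unsqueeze(1)   # (B, 1, D)
        txt = self.token_embed(tokens)          # (B, T, D)
        fused, _ = self.fuse(query=txt, key=vis, value=vis)
        pooled = fused.mean(dim=1)              # (B, D)
        return self.action_head(pooled)         # (B, n_actions)

# Usage with dummy inputs: one camera frame plus a short command.
policy = ToyVLAPolicy()
frame = torch.randn(1, 3, 224, 224)
command = torch.randint(0, 1000, (1, 8))  # e.g. tokenized "slow for the sign"
print(policy(frame, command))             # tensor of shape (1, 3)
```

The appeal of this pattern, and plausibly the basis for the interpretability claim, is that the language pathway gives the system an inspectable intermediate representation: the instruction or scene description that conditioned the action can be read out, unlike a monolithic end-to-end network that maps pixels straight to controls.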