Module 01: The "Why" and The Architecture
Why L5 autonomy is harder than a moon landing. Understanding ODD, latency loops, compute constraints, and the modern Hybrid Architecture (Modular vs. End-to-End).
The raw senses of an autonomous vehicle: What data does each sensor provide? Covers cameras, radar, LiDAR, ultrasonics, and microphones—their physics, strengths, weaknesses, and why fusion is necessary.
From GPS to centimeter accuracy: How autonomous vehicles know their exact position. Covers GNSS, IMU, wheel odometry, scan matching, and Factor Graphs.
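Before GNSS corrections, scan matching, or Factor Graphs enter the picture, the backbone of localization is dead reckoning: integrating wheel-odometry distance against an IMU heading. A minimal sketch of that idea (all sample values here are hypothetical, and real stacks fuse these sources probabilistically rather than integrating them raw):

```python
import math

# Minimal dead-reckoning sketch: fuse wheel-odometry distance increments
# with an IMU yaw rate to track a 2D pose. Illustrative values only.

x, y, yaw = 0.0, 0.0, 0.0   # start pose: position (m) and heading (rad)
dt = 0.1                    # 10 Hz sample period (assumed)

# (distance travelled in m, IMU yaw rate in rad/s) per step -- hypothetical samples
samples = [(0.5, 0.0), (0.5, 0.1), (0.5, 0.1), (0.5, 0.0)]

for ds, yaw_rate in samples:
    yaw += yaw_rate * dt        # integrate IMU yaw rate into heading
    x += ds * math.cos(yaw)     # project the odometry increment onto the heading
    y += ds * math.sin(yaw)

print(round(x, 2), round(y, 2))  # pose after ~2 m of travel
```

Because every step integrates noisy increments, the error grows without bound, which is exactly why absolute fixes (GNSS, scan matching against a map) must be folded back in, typically via a Factor Graph.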
How autonomous vehicles remember the world. Covers HD maps, lane graphs, offline vs. online mapping, MapTR, and the map-heavy vs. map-light debate.
From pixels to 4D realities: How AVs understand their environment. Deep dive into BEV Transformers, Panoptic Occupancy, Scene Flow, and Foundation Models for open-world perception.
The hardest problem in AV: predicting human irrationality. From physics-based Kalman Filters to Joint Autoregressive Distributions, Generative Motion Diffusion, and World State Propagations.
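The physics-based end of that spectrum is easy to make concrete. A constant-velocity Kalman filter, the classical baseline the module starts from, can be sketched in a few lines (all noise values and measurements below are made up for illustration):

```python
import numpy as np

# Constant-velocity Kalman filter sketch for 1D motion prediction.
# State x = [position, velocity]; we observe position only. Values are assumed.

dt = 0.1                       # 10 Hz update rate (assumed)
F = np.array([[1.0, dt],       # state transition: p' = p + v*dt, v' = v
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])     # measurement model: we see position only
Q = np.eye(2) * 1e-3           # process noise (assumed)
R = np.array([[0.25]])         # measurement noise (assumed)

x = np.array([[0.0], [1.0]])   # initial guess: p = 0 m, v = 1 m/s
P = np.eye(2)                  # initial state covariance

def predict(x, P):
    """Propagate state and covariance one step forward in time."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the prediction with a position measurement z."""
    y = z - H @ x                     # innovation
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    return x + K @ y, (np.eye(2) - K @ H) @ P

for z in [0.11, 0.19, 0.32]:          # noisy position measurements (made up)
    x, P = predict(x, P)
    x, P = update(x, P, np.array([[z]]))
```

This baseline assumes agents obey smooth physics, which is precisely where it breaks for humans; the joint autoregressive and diffusion-based models the module covers exist to capture the multimodal, interaction-driven behavior a single Gaussian track cannot.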