Module 01: The "Why" and The Architecture
Why L5 autonomy is harder than a moon landing. Understanding ODD, latency loops, compute constraints, and the probability of failure in autonomous systems.
Your Unique Edge
How self-driving cars actually work—prediction, calibration, sensing, and closed-loop reasoning.
Articles in this series, covering autonomous driving and robotics:
From photons to decisions: How machines reconstruct 3D reality from 2D data. Covers cameras, IPM, radar, LiDAR, and sensor fusion, taking an intuitive, first-principles approach.
If you don't know where your eyes are relative to your feet, you trip. Covers intrinsics, extrinsics, SE(3) transforms, online vs. offline calibration, and time synchronization.
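The extrinsic calibration mentioned above boils down to rigid-body transforms. A minimal sketch of applying an SE(3) extrinsic (the mounting pose and point coordinates here are illustrative values, not taken from the article):

```python
import numpy as np

def se3(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Extrinsics: camera pose expressed in the vehicle (base) frame.
# Assumed mounting for illustration: 2.0 m forward, 1.5 m up, no rotation.
T_base_cam = se3(np.eye(3), np.array([2.0, 0.0, 1.5]))

# A point observed 10 m ahead of the camera, in the camera frame (homogeneous coords).
p_cam = np.array([10.0, 0.0, 0.0, 1.0])

# Transform into the vehicle frame: p_base = R @ p + t, done as one matrix product.
p_base = T_base_cam @ p_cam  # 12 m forward, 1.5 m up in the vehicle frame
```

Composing such transforms (camera-to-vehicle, vehicle-to-world) is exactly the "where are my eyes relative to my feet" bookkeeping the article refers to.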
From GPS to centimeter accuracy: How autonomous vehicles know their exact position. Covers GNSS, IMU, wheel odometry, scan matching, and the Kalman Filter fusion that creates the "Blue Line."
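The core of the Kalman-filter fusion mentioned above can be sketched in one dimension: blend an odometry prediction with a GNSS measurement, weighted by their uncertainties (all numbers here are made up for illustration):

```python
def kf_update(x, P, z, R):
    """One scalar Kalman update: fuse prediction (x, var P) with measurement (z, var R)."""
    K = P / (P + R)          # Kalman gain: how much to trust the measurement
    x_new = x + K * (z - x)  # corrected state estimate
    P_new = (1 - K) * P      # uncertainty shrinks after fusing the two sources
    return x_new, P_new

# Prediction from wheel odometry: position 100.0 m, variance 4.0 m^2.
# GNSS measurement: 102.0 m, variance 1.0 m^2.
x, P = kf_update(100.0, 4.0, 102.0, 1.0)
# The estimate moves most of the way toward the more-certain GNSS fix,
# and the fused variance ends up smaller than either input's.
```

A real localizer runs this predict/update loop over a full state vector (pose, velocity, biases), but the gain-weighted blend is the same idea.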
The hardest problem in AV: predicting human irrationality. Covers the evolution from physics-based prediction to Generative AI, tracking the journey through Waymo Open Dataset Challenges.
From perception to action: How autonomous vehicles make decisions. Covers cost functions, game-theoretic planning, and the modular vs. end-to-end debate.
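The cost functions mentioned above can be sketched as a weighted sum of competing terms scored over candidate trajectories; the terms and weights below are illustrative assumptions, not the article's actual planner:

```python
import numpy as np

def trajectory_cost(traj, ref, obstacle, w_track=1.0, w_safe=10.0, w_smooth=0.5):
    """Score a candidate trajectory (N x 2 waypoints) as a weighted sum of cost terms."""
    tracking = np.sum((traj - ref) ** 2)            # stay close to the reference path
    d = np.linalg.norm(traj - obstacle, axis=1)
    safety = np.sum(np.exp(-d))                     # penalize proximity to the obstacle
    smooth = np.sum(np.diff(traj, axis=0) ** 2)     # penalize jerky motion
    return w_track * tracking + w_safe * safety + w_smooth * smooth

# Reference path: straight line through an obstacle at (5, 0).
ref = np.stack([np.linspace(0.0, 10.0, 11), np.zeros(11)], axis=1)
obstacle = np.array([5.0, 0.0])

# Candidate that nudges laterally around the obstacle.
swerve = ref.copy()
swerve[4:7, 1] = 1.0

# The planner picks the minimum-cost candidate: here, swerving beats driving through.
best = min([ref, swerve], key=lambda t: trajectory_cost(t, ref, obstacle))
```

Real planners optimize over many sampled or refined candidates, but the trade-off structure (tracking vs. safety vs. comfort, resolved by weights) is the same.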
How diffusion models predict action sequences instead of pixels. Covers Diffusion Policy, world models for robotics, and connecting diffusion to reinforcement learning for autonomous systems.
Reflections on building production-grade behavior prediction systems for autonomous vehicles — and why closed-loop reasoning is the bridge between perception and planning.
How we used deep learning to automatically calibrate traffic cameras by observing vehicle motion—work that won Best Paper Award at ACM BuildSys 2017.