Gopi Krishna Tummala

The Role of Predictions in Closed-Loop Autonomous Driving

When you work in autonomous driving long enough, you begin to realize that predicting behavior is far more than just a module in the autonomy stack. It’s the quiet foundation that underpins everything from perception to planning, all the way down to decision-making and trust. But what does it really mean to predict the future when you’re driving? It’s about understanding not just where objects are but why they might be moving, what they intend to do, and how you can reason about these actions in a world full of uncertainty.

The Heart of Behavior Prediction

Behavior prediction isn’t a simple forecasting task. It’s an art that involves understanding human behavior, anticipating movement, and managing the inherent uncertainty of a world where no two road users follow quite the same rules. At its core, prediction allows a self-driving car to reason beyond the present moment, shifting from a reactive stance to one grounded in intent and foresight.

When I worked on behavior prediction at Qualcomm and Zoox, I saw firsthand how complex this challenge could get. Prediction in autonomous vehicles involves far more than just forecasting where another car will be in three seconds. It’s about understanding what that vehicle, pedestrian, or cyclist intends to do next — and how their behavior may change in response to the car’s own actions.

A good prediction system doesn’t just estimate trajectories; it anticipates actions, understands interactions, and most importantly, does so in real-time, under imperfect conditions. If we’re to build systems that act as safely and smoothly as humans, we need machines that reason with the same kind of intuitive foresight.

The Constant Uncertainty

Every decision a self-driving system makes happens under partial observability. The system can see the world as it is in the present, but not as it will be. We can’t say with certainty whether a pedestrian will step off the curb, whether the car in front of us will yield, or whether the light ahead will change before we reach it. And while sensors like LiDAR, cameras, and radar give us rich data, they still can’t penetrate the layers of uncertainty that define human behavior on the road.

This leads to one of the central paradoxes in autonomous driving: the further we predict into the future, the less certain we are, yet the more we must decide. Our task is not to eliminate uncertainty — that’s impossible. Instead, we manage it, incorporating it into the decision-making process. It’s a dance between certainty and ambiguity, where even slight shifts in perception or intention can have large downstream impacts.
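
To make that concrete, here’s a toy sketch (the constant-velocity assumption and the numbers are mine, purely for illustration): extrapolate an agent at constant velocity while treating its true acceleration as unmodeled noise, and the 1-sigma positional spread grows quadratically with lookahead time.

```python
import numpy as np

def constant_velocity_forecast(speed, sigma_acc, horizon_s, dt=0.1):
    """Displacement forecast under a constant-velocity model, plus the
    1-sigma spread when the agent's true acceleration is unmodeled
    noise ~ N(0, sigma_acc^2)."""
    t = dt * np.arange(1, int(horizon_s / dt) + 1)
    displacement = speed * t            # best-guess forecast
    spread = 0.5 * sigma_acc * t**2     # std of the 0.5*a*t^2 noise term
    return displacement, spread

# A pedestrian at 1.4 m/s: tight at 1 s out, nine times wider at 3 s out.
_, sigma = constant_velocity_forecast(speed=1.4, sigma_acc=0.5, horizon_s=3.0)
print(f"1-sigma spread: {sigma[9]:.2f} m at 1 s, {sigma[-1]:.2f} m at 3 s")
```

Real predictors are far richer than this, but the shape of the problem is the same: the horizon stretches, and confidence shrinks.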

Modeling Multiple Futures

A key insight in autonomous driving is that the world isn’t deterministic. People make decisions on the road based on a complex set of factors, and the future is shaped by probabilities, not certainties. When a car approaches a yield sign, it might stop, slow down, or just keep going. A pedestrian might pause, or they might cross the street. Predicting human behavior is about embracing this ambiguity — not fighting it.

Modern prediction systems don’t just provide a single trajectory; they model the distribution of possible futures. Instead of saying, “This will happen,” they say, “Here are the possible scenarios, and here’s the probability of each.” The world is a web of probabilities, and we need to capture and reason with them effectively.
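
Here’s a minimal PyTorch-style sketch of that idea, in the spirit of multimodal heads like MultiPath’s; the feature size, mode count, and single linear layers are illustrative assumptions, not a description of any production model.

```python
import torch
import torch.nn as nn

class MultimodalHead(nn.Module):
    """Predict K candidate futures plus a probability for each mode."""
    def __init__(self, feat_dim=128, num_modes=6, horizon=30):
        super().__init__()
        self.num_modes, self.horizon = num_modes, horizon
        # Each mode is `horizon` (x, y) waypoints; one logit per mode.
        self.traj = nn.Linear(feat_dim, num_modes * horizon * 2)
        self.mode = nn.Linear(feat_dim, num_modes)

    def forward(self, agent_feat):
        B = agent_feat.shape[0]
        trajs = self.traj(agent_feat).view(B, self.num_modes, self.horizon, 2)
        probs = self.mode(agent_feat).softmax(dim=-1)  # "how likely is each future?"
        return trajs, probs

trajs, probs = MultimodalHead()(torch.randn(4, 128))  # 4 agents
print(trajs.shape, probs.shape)  # (4, 6, 30, 2) and (4, 6)
```

The important design choice is the pairing: every candidate trajectory arrives with a probability, so downstream consumers can weigh futures instead of betting on one.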

From Open-Loop to Closed-Loop Systems

In the early days of autonomous driving, most prediction systems were built on the assumption that agents (other vehicles, pedestrians, cyclists, etc.) move independently — in what’s called an “open-loop” system. The ego vehicle would plan its trajectory, and other agents were assumed to follow their own (presumed) plans, unaffected by it. This simplification made systems computationally cheaper and easier to train and debug.

But driving isn’t an open-loop process. It’s a closed-loop system. Every action influences others. If I hesitate at a yield sign, the car behind me might decide to pass. If I inch forward at a traffic light, other vehicles might adjust their behavior. These interactions define real-world driving.

Incorporating closed-loop reasoning into prediction means capturing how agents influence one another’s decisions. It’s the subtle push-and-pull that makes driving dynamic and fluid. But it also introduces complexity. If a prediction system expects another car to slow down, but the car accelerates instead, it can throw off the entire planning process. Stability in these feedback loops is one of the hardest, yet most exciting, challenges in building reliable autonomous systems.
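
A deliberately tiny toy makes the contrast visible; the speed rules below are invented for illustration, not drawn from any real planner. Open-loop predicts once and plans once. Closed-loop iterates until the plan and the predictions it induces stop contradicting each other.

```python
def predict_other(ego_speed):
    """Toy model: the trailing car accelerates to pass if the ego hesitates."""
    return 8.0 if ego_speed < 2.0 else 5.0

def plan_ego(other_speed):
    """Toy planner: the ego yields hard when the other car is coming in fast."""
    return 1.0 if other_speed > 6.0 else 4.0

# Open-loop: one prediction, one plan; the prediction never sees the plan.
ego = plan_ego(predict_other(ego_speed=4.0))

# Closed-loop: iterate prediction and planning to a mutually consistent pair.
for _ in range(10):
    other = predict_other(ego)
    new_ego = plan_ego(other)
    if new_ego == ego:  # fixed point: the plan and the prediction agree
        break
    ego = new_ego
print(f"consistent outcome: ego {ego} m/s, other car {other} m/s")
```

When the two functions chase each other instead of settling, the loop oscillates; that is exactly the stability problem described above.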

A Glimpse Into a Modern Prediction System

So, how do modern prediction systems tackle this challenge? At a high level, they integrate data from a range of sensors — cameras, radar, LiDAR — and use deep learning techniques to make sense of the world. At Qualcomm and Zoox, I was involved in designing prediction models that combine these sensor inputs with traditional machine learning techniques, such as decision trees, and deep learning models like CNNs, Transformers, and Graph Neural Networks (GNNs).

The architecture of these systems looks simple on paper but hides real complexity under the hood. For example (a sketch combining these inputs follows the list):

Rasterized bird’s-eye-view (BEV) images capture road occupancy and the positions of other agents (vehicles, pedestrians, cyclists), processed by models like ResNet, UNet, and Vision Transformers (ViT).

Numerical features like velocity, acceleration, and curvature are used by models like MLPs and GNNs to understand kinematic relationships and agent types.

High-definition maps encoded with VectorNet help provide topological and geometric context for more accurate predictions.
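
To show how these three input families might come together, here’s a minimal fusion sketch. The dimensions, the concatenation-based fusion, and the tiny polyline encoder standing in for VectorNet are all illustrative assumptions; the fused feature could then feed a multimodal head like the one sketched earlier.

```python
import torch
import torch.nn as nn

class FusionEncoder(nn.Module):
    """Fuse a BEV raster, kinematic features, and a lane polyline
    into a single per-agent context vector."""
    def __init__(self):
        super().__init__()
        self.raster = nn.Sequential(          # BEV raster -> 64-d feature
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 64))
        self.kinematics = nn.Sequential(      # velocity, accel, curvature, ...
            nn.Linear(6, 32), nn.ReLU(), nn.Linear(32, 32))
        self.map_enc = nn.Sequential(         # 20 flattened (x, y) lane points
            nn.Linear(20 * 2, 64), nn.ReLU(), nn.Linear(64, 32))

    def forward(self, bev, kin, polyline):
        feats = [self.raster(bev), self.kinematics(kin),
                 self.map_enc(polyline.flatten(1))]
        return torch.cat(feats, dim=-1)       # 128-d fused agent context

enc = FusionEncoder()
ctx = enc(torch.randn(4, 3, 128, 128), torch.randn(4, 6), torch.randn(4, 20, 2))
print(ctx.shape)  # (4, 128)
```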

Prediction models typically output not just the most likely path of an agent but also an array of possible behaviors. These outputs feed into the vehicle’s planner, which selects the safest and most efficient course of action based on real-time data.
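
As a toy illustration of that selection step (the cost function and the safety distance are invented for the example), the planner can score each candidate ego trajectory by its probability-weighted proximity to the predicted futures and pick the cheapest one.

```python
import numpy as np

def expected_risk(ego_traj, agent_modes, mode_probs, safe_dist=2.0):
    """Probability-weighted fraction of waypoints where the ego comes
    closer than `safe_dist` to an agent's predicted future."""
    risk = 0.0
    for traj, p in zip(agent_modes, mode_probs):
        too_close = np.linalg.norm(ego_traj - traj, axis=-1) < safe_dist
        risk += p * too_close.mean()
    return risk

# Pick the lowest-risk candidate among a few ego options (random stand-ins).
candidates = [np.random.randn(30, 2) * 5 for _ in range(3)]
agent_modes = np.random.randn(6, 30, 2) * 5   # 6 predicted futures
mode_probs = np.full(6, 1 / 6)                # their probabilities
best = min(candidates, key=lambda c: expected_risk(c, agent_modes, mode_probs))
```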

But while the models themselves are sophisticated, there’s still a major gap. The models we train are not perfect mirrors of reality. They are based on data — and often, the data they are trained on doesn’t fully represent rare but crucial events. The challenge is telling whether the model’s uncertainty arises because it has never seen a case like this (epistemic uncertainty) or because the scenario itself is genuinely ambiguous (aleatoric uncertainty).
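
One common heuristic for pulling those two apart, and an assumption of this sketch rather than a description of any particular stack, is to run an ensemble of models: where the ensemble agrees, the uncertainty lives in the world; where it disagrees, the uncertainty lives in the model.

```python
import numpy as np

def split_uncertainty(ensemble_mode_probs):
    """Decompose uncertainty from an ensemble's mode probabilities
    (shape: models x modes). Entropy of the mean prediction is the
    total; mean per-model entropy is the aleatoric part; their gap
    (the mutual information) is the epistemic part."""
    def entropy(p):
        return -(p * np.log(p + 1e-9)).sum(axis=-1)
    total = entropy(ensemble_mode_probs.mean(axis=0))
    aleatoric = entropy(ensemble_mode_probs).mean()
    return total - aleatoric, aleatoric  # (epistemic, aleatoric)

# Models agree on an ambiguous scene: aleatoric high, epistemic ~0.
agree = np.tile([0.5, 0.5, 0.0], (4, 1))
# Models disagree confidently: epistemic high ("never seen this before").
disagree = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0]], float)
print(split_uncertainty(agree), split_uncertainty(disagree))
```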

The Future: More Context, Better Communication

Looking ahead, the next frontier in prediction is not necessarily about building larger models but about adding more context and improving how systems communicate. For instance, Vehicle-to-Vehicle (V2V) and Vehicle-to-Everything (V2X) communications are rapidly becoming a part of the prediction landscape. With these technologies, vehicles can share information about their intent — drastically reducing uncertainty when predicting another agent’s behavior.
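
To make “sharing intent” concrete, here’s a hypothetical intent payload; the fields are my own illustration, not any real V2X standard message.

```python
from dataclasses import dataclass

@dataclass
class IntentMessage:
    """Illustrative V2V intent payload: the kind of fields that would
    collapse much of the uncertainty a predictor otherwise carries."""
    sender_id: str
    maneuver: str        # e.g. "lane_change_left", "yield", "proceed"
    waypoints: list      # sparse (x, y, t) points along the planned path
    commitment: float    # how firmly the sender intends to follow the plan

msg = IntentMessage("veh_042", "yield", [(0.0, 0.0, 0.0), (2.0, 0.5, 1.0)], 0.9)
```

A predictor that receives such a message can condition on it directly, shrinking the set of futures it has to consider.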

Other emerging techniques like retrieval-augmented reasoning allow vehicles to “remember” similar past scenarios and apply that knowledge to the current driving situation. This kind of memory-based reasoning could meaningfully improve the vehicle’s ability to predict rare but dangerous situations that are nearly impossible to cover in training data.
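
A toy version of that memory, where the embedding function, similarity metric, and stored outcomes are all illustrative stand-ins:

```python
import numpy as np

class ScenarioMemory:
    """Embed past scenes; at drive time, fetch the nearest ones so the
    system can condition on what happened in similar situations."""
    def __init__(self):
        self.embeddings, self.outcomes = [], []

    def add(self, scene_embedding, outcome):
        self.embeddings.append(scene_embedding / np.linalg.norm(scene_embedding))
        self.outcomes.append(outcome)

    def retrieve(self, query, k=3):
        q = query / np.linalg.norm(query)
        sims = np.stack(self.embeddings) @ q          # cosine similarity
        top = np.argsort(sims)[::-1][:k]
        return [(self.outcomes[i], float(sims[i])) for i in top]

memory = ScenarioMemory()
memory.add(np.random.randn(64), "cyclist swerved around a parked car")
memory.add(np.random.randn(64), "pedestrian crossed against the signal")
print(memory.retrieve(np.random.randn(64), k=1))
```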

The Closed-Loop Philosophy

At the end of the day, closed-loop reasoning is about making predictions that are not only accurate but also interpretable. In my experience, a major shift in autonomous driving is occurring — from predicting trajectories to predicting intentions. We’re not just trying to forecast what will happen next but to understand why it might happen.

This brings us to a fundamental philosophical question: can machines ever truly “understand” driving, or are we merely teaching them to mimic it? As we move toward more context-aware, closed-loop systems, this question is becoming less theoretical and more practical. The challenge is clear: can we build systems that don’t just react faster but reason better? That’s the true promise of autonomous driving.


Based on my experience building prediction systems at Zoox (2022–2024) and Qualcomm R&D (2018–2022).