Who Pays When Autonomous Vehicles Collide
by Scott
When multiple autonomous vehicles collide, the first question people ask is usually the most practical one: who pays the insurance excess? The frustrating but accurate answer is that it depends on the jurisdiction, what “autonomous” means in that specific scenario, and how the vehicles were being used at the time. The legal and insurance world is still in a transition phase: some rules were written for human drivers, some are being rewritten for automated driving systems, and many situations are handled through a blend of existing motor insurance practice and product liability principles.
In most countries today, the immediate, day-one claims process still looks familiar. Each damaged vehicle is typically covered under its own policy (or the fleet operator’s policy, if it’s a robotaxi or company vehicle). The insured party often pays their excess to get their vehicle repaired quickly, and then the insurer attempts to recover costs from whoever is ultimately found liable. In multi-vehicle collisions, it’s common for fault to be split across more than one party, which can affect whether the excess is refunded in full, partially refunded, or not refunded at all depending on local rules and the policy terms.
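To make the excess mechanics concrete, here is a minimal Python sketch of how a fault split can flow through to what the insured ultimately gets back. The function name and the numbers (an 800 excess, a 70/30 fault split) are made up for illustration; real recovery depends on local rules, the policy wording, and any agreements between insurers.

```python
def excess_refund(excess_paid: float, other_party_fault_share: float) -> float:
    """Rough illustration only: refund the excess in proportion to the other
    parties' share of fault. Actual outcomes vary with local rules and policy terms."""
    if not 0.0 <= other_party_fault_share <= 1.0:
        raise ValueError("fault share must be between 0 and 1")
    return excess_paid * other_party_fault_share

# Hypothetical numbers: the insured paid an 800 excess, and the other
# vehicles were found 70% responsible in total.
recovered = excess_refund(800.0, 0.70)
print(recovered)          # 560.0 comes back via recovery
print(800.0 - recovered)  # 240.0 stays with the insured
```

On a 100% recovery the excess comes back in full; on a knock-for-knock arrangement or an unresolved split it may not come back at all, which is the full, partial, or no refund range described above.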
The bigger shift with autonomous systems is that “fault” may no longer be centered on a human driver’s decisions. Instead, fault can move toward whoever was responsible for the automated driving function at the moment of the crash. That could be the human “user in charge” (if the system required supervision), the vehicle owner or keeper (because many legal systems impose baseline responsibility on owners), the operator of the autonomous fleet, or the manufacturer and software supply chain under product liability law. Insurers and courts often treat automation as adding new potentially liable parties rather than replacing the old ones entirely.
So what happens when multiple autonomous vehicles collide with each other? In practice, investigators and insurers treat it like any other multi-vehicle pileup: reconstruct events, identify the sequence, and apportion responsibility. The difference is the evidence set. Instead of relying mainly on eyewitness accounts and driver statements, modern cases lean heavily on data logs: vehicle event data recorders, camera and sensor data, braking and steering commands, speed profiles, and, for automated systems, the “decision logs” of the automated driving stack. If one vehicle performed an unsafe maneuver and others reacted normally, that vehicle (or its responsible party) is likely to carry more liability. If multiple systems made compounding errors, fault may be shared.
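To show what leaning on data logs can look like in practice, here is a toy Python sketch of one reconstruction step: lining up logged events from several vehicles on a shared clock and picking out the earliest action that has been flagged as unsafe. The vehicle IDs, event names, and the flagged_unsafe marker are assumptions invented for this example, not a real log format or investigative standard.

```python
from dataclasses import dataclass

@dataclass
class LogEvent:
    vehicle_id: str
    timestamp: float      # seconds on a shared reference clock
    action: str           # e.g. "lane_change", "hard_brake", "steer_correction"
    flagged_unsafe: bool  # assumed: set by an upstream reconstruction/analysis step

def first_unsafe_actor(events: list[LogEvent]) -> str | None:
    """Toy step: order all events on one timeline and return the vehicle whose
    flagged-unsafe action came first. Real investigations weigh much more than
    ordering (sensor limits, reaction windows, compounding errors)."""
    unsafe = sorted((e for e in events if e.flagged_unsafe), key=lambda e: e.timestamp)
    return unsafe[0].vehicle_id if unsafe else None

timeline = [
    LogEvent("AV-2", 12.40, "hard_brake", False),
    LogEvent("AV-1", 12.05, "lane_change", True),   # hypothetical unsafe cut-in
    LogEvent("AV-3", 12.55, "steer_correction", False),
]
print(first_unsafe_actor(timeline))  # AV-1
```

In the compounding-error case, several vehicles would carry flagged events, and apportionment becomes a matter of weighting contributions rather than simply ordering them.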
A common misconception is that “no one is at fault because the computer was driving.” Insurers and legal systems don’t generally accept a fault vacuum. Someone is treated as the responsible party, even if they later pursue recovery from a manufacturer or software vendor. This is why many frameworks aim to keep compensation simple for victims first, then sort responsibility behind the scenes through insurance recovery actions and product claims.
The question of excess gets particularly interesting with fleet-operated autonomous vehicles. If a robotaxi service owns the vehicles and holds the policy, the operator is typically the one paying the excess and handling claims. If the autonomous vehicle is privately owned, the owner’s policy tends to be the starting point, even if the owner later argues the automated driving system was defective. In that case, your insurer may pay out under the motor policy and then pursue the manufacturer or other parties if the evidence supports it.
Whether legislation has “solved” this depends on where you live. The United Kingdom is one of the clearer examples of a dedicated framework built around the idea that, when a vehicle is legitimately operating in self-driving mode, the insurer is the first stop for compensation rather than forcing injured parties into complex product litigation. The Automated and Electric Vehicles Act 2018 wrote that insurer-first principle into law, and the UK’s Automated Vehicles Act 2024 builds on it, defining responsibilities and oversight for authorised automated driving.

Australia has been moving toward a national regulatory approach for in-service safety of automated vehicles, focusing on the safety obligations of the automated driving system provider and how responsibility should be allocated as automation increases. This is an evolving policy and legislative space rather than one single settled “autonomous liability act” that answers every scenario today.
In the United States, the picture is notably patchwork. There is no single, comprehensive federal liability regime for autonomous vehicles that overrides state law. Instead, states regulate testing and deployment in different ways, and liability questions often fall back on a mix of existing traffic law, negligence, and product liability. That inconsistency is one reason the insurance handling tends to default to the familiar “pay under the motor policy first, argue liability second” workflow.
Why was Stuxnet-era software called a turning point in cyber-physical risk? Because once you accept that code can steer physical systems, you also accept that “a crash” may be the result of a software defect, a sensor limitation, a mapping error, a network issue, a maintenance lapse, or a deliberate attack. Autonomous vehicles amplify that complexity: a multi-vehicle collision may involve not just one driver error, but a chain of interacting automated decisions and edge cases.
That leads to the question everyone eventually asks: how is fault decided when the “driver” is an algorithm? In real investigations, it tends to come down to three buckets. First, was the vehicle actually allowed to be in that autonomous mode, and did it meet the operational conditions it was designed for? Second, did a human have a legal duty to supervise and intervene, and did they fail to do so? Third, did the system behave unreasonably given its design claims and safety expectations, pointing toward product liability? The final allocation can involve multiple defendants and multiple insurance policies, which is exactly why claims specialists describe automated vehicle cases as “more defendants, more policies, and more complex litigation.”
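As a rough sketch of how those three buckets might be organised (emphatically not a legal test), here is a toy Python encoding in which each answer points toward a candidate responsible party. The field names, groupings, and output labels are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class IncidentFacts:
    mode_permitted: bool        # was the autonomous mode legally allowed here?
    within_design_domain: bool  # was the vehicle inside its operational design domain?
    supervision_required: bool  # did a human have a duty to supervise and intervene?
    human_intervened: bool      # did that human actually intervene appropriately?
    behaved_as_designed: bool   # did the system meet its design claims and safety expectations?

def candidate_liability_buckets(f: IncidentFacts) -> list[str]:
    """Toy mapping from the three investigative questions to candidate parties.
    Real allocations can name several parties at once and are settled through
    insurers, regulators, and courts."""
    buckets = []
    if not f.mode_permitted or not f.within_design_domain:
        buckets.append("owner/operator: improper use of the automated mode")
    if f.supervision_required and not f.human_intervened:
        buckets.append("user in charge: failure to supervise or intervene")
    if not f.behaved_as_designed:
        buckets.append("manufacturer/software supplier: potential product liability")
    return buckets or ["contested: further reconstruction needed"]

# A supervised system that was used correctly, but the supervisor did not
# intervene and the system fell short of its design claims.
print(candidate_liability_buckets(IncidentFacts(True, True, True, False, False)))
```

In that hypothetical, both the user in charge and the manufacturer end up on the list, which is the “more defendants, more policies” outcome the claims specialists describe.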
Stuxnet was unique because it was targeted, stealthy, and designed to produce physical outcomes while covering its tracks. Autonomous vehicle incidents are not the same category, but the broader lesson is similar: modern systems generate decisions at machine speed, and accountability depends on logs, verification, and governance. That’s why regulators focus so heavily on data retention, safety cases, operational design domains, and the ability to demonstrate what the system knew and why it acted as it did.
There’s also a hard practical truth: even with perfect laws, these collisions will still be argued. Insurance and courts need repeatable standards for what counts as “reasonable” automated behavior, how to interpret sensor limitations, and how to weigh responsibility when multiple systems interact. As deployments grow, these standards tend to solidify through a mix of legislation, regulator guidance, insurer practice, and case law.
The most realistic way to think about the “excess” question today is this: the party that owns or operates the vehicle and holds the policy usually pays the excess upfront to get moving again, and then the system fights it out in the background through recovery and liability allocation. As automation becomes more formally regulated, you can expect an increasing split between quick, insurer-first compensation for victims and slower, more technical disputes about whether the root cause was a human failure, a system failure, or an upstream product defect.
In the near term, the world is likely to see more clarity, not less. Automated driving pushes insurance from “who was driving” to “who was responsible for the driving function,” and that is a solvable question, but it requires definitions that survive real-world complexity. The fact that some jurisdictions are already building dedicated frameworks, while others rely on existing tort and insurance structures, is exactly why the same crash scenario can produce different outcomes depending on where it happens.