Self-driving cars are no longer science fiction. From Tesla’s “Full Self-Driving” (FSD) software to Waymo’s driverless taxi services, autonomous vehicle technology is on the road today, and it is already raising serious questions about safety, liability, and the future of personal injury law. While these systems promise fewer accidents and safer streets, a growing number of high-profile crashes, lawsuits, and recalls suggest the technology isn’t ready to take over fully. Below, we’ll break down how self-driving cars work, why self-driving car accidents happen, and what it all means for the future of transportation and for liability when these vehicles cause injuries.
How Do Self-Driving Cars Work?
Self-driving technology is defined by levels of automation ranging from Level 0, no automation, to Level 5, full automation under all conditions. Most systems available to consumers today, including Tesla’s Autopilot and Full Self-Driving features, are Level 2, meaning the car can steer, accelerate, and brake on its own but still requires a human driver to supervise at all times. Higher levels of automation, such as Level 4, can operate without a driver in limited areas; Level 4 services are already running in certain places, such as Alphabet’s Waymo robotaxis in cities like Phoenix and San Francisco.
Different companies use different technology stacks. Tesla relies on a camera-only approach, forgoing radar and lidar in favor of its “Tesla Vision” neural network. By contrast, Waymo combines cameras, lidar, and radar to add redundancy and improve detection in complex environments. These differences matter: if a system fails to detect an obstacle, lane change, or pedestrian, the consequences can be severe. And because Level 2 cars require humans to take over instantly when something goes wrong, reaction time often becomes the weak link.
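To make the redundancy point concrete, here is a deliberately simplified sketch in Python. It is purely illustrative and not any manufacturer’s actual software: with a single sensing mode, one missed detection means the obstacle is missed entirely, while a fused system still reacts if any one sensor picks it up.

```python
# Simplified illustration of sensor redundancy -- hypothetical, not any
# manufacturer's actual code. Each sensor independently reports whether it
# detects an obstacle ahead; a fused system brakes if ANY sensor fires.

def camera_only_detects(camera: bool) -> bool:
    # A camera-only stack depends entirely on one sensing mode.
    return camera

def fused_detects(camera: bool, lidar: bool, radar: bool) -> bool:
    # A fused stack still detects the obstacle if any one sensor sees it,
    # so a single-sensor failure (glare, darkness, occlusion) is not fatal.
    return camera or lidar or radar

# Example: bright glare blinds the camera, but lidar still returns a hit.
print(camera_only_detects(camera=False))                      # False -> no braking
print(fused_detects(camera=False, lidar=True, radar=False))   # True  -> brake
```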
Accidents Caused by Self-Driving Cars
Self-driving crashes have already led to major lawsuits and headlines. In September 2025, Tesla settled with the family of Jovani Maldonado, a 15-year-old killed in a 2019 crash where Autopilot was allegedly engaged. This came shortly after a Florida jury awarded $243 million in damages in another Tesla pedestrian death case.
Other high-profile cases include the 2016 Tesla crash in Florida, where Autopilot failed to detect a crossing tractor-trailer, and the 2018 Mountain View crash, where Autopilot steered into a highway divider and killed the driver.
It isn’t just Tesla under scrutiny. Cruise, GM’s driverless car unit, had its California permit revoked in 2023 after one of its robotaxis dragged a pedestrian. Even Waymo has had incidents, including collisions with gates and barriers that forced the company to recall over 1,000 vehicles in 2024.
The broader statistics show a troubling trend. According to FinanceBuzz, self-driving crashes nearly doubled from 288 in 2023 to 544 in 2024, and semi-autonomous vehicle crashes rose by about 35%. Tesla accounted for the majority of those incidents. While some industry defenders argue that injuries are less severe in these crashes, the overall rise in accidents underscores that these systems are far from foolproof.
Any of these accidents could have devastating consequences under the wrong circumstances. If a self-driving car strikes a pedestrian at high enough speed, that person could suffer major injuries, including traumatic brain injuries, spinal cord injuries, and even internal organ damage. If autonomous vehicles are ever to become fully mainstream, the companies behind them need to address the kinds of errors that can lead to catastrophic injuries.
Why Do These Accidents Occur?
When self-driving cars fail, the causes often come down to how their perception, decision-making, and control systems interact with an unpredictable world. One common failure is in object detection. A car’s sensors may fail to recognize something in its path, whether it’s a tractor-trailer crossing an intersection, as in the fatal 2016 Tesla crash in Florida, or smaller hazards like road debris, cyclists, or pedestrians partially obscured from view. In these moments, the vehicle either doesn’t react at all or reacts too late, leading to collisions that a human driver might have avoided.
Errors can also occur in how the system interprets the road itself. Lane markings, temporary construction zones, and unusual roadway features can all confuse the car’s planning software. The 2018 Mountain View Tesla crash is a stark example: the system steered directly into a highway divider after misreading the roadway layout. Situations like these show how reliance on pre-mapped data or limited training examples can make vehicles vulnerable to edge cases that humans instinctively recognize.
Even when the car recognizes a problem, the way it hands control back to the driver can make accidents more likely. Level 2 systems, like Tesla’s Autopilot, are designed to return control to the human when the software encounters something it can’t handle. But those handoffs often come with little or no warning. Human reaction time, usually at least a second or two, is sometimes longer than the window available to prevent a crash. That mismatch has been a recurring factor in lawsuits where drivers claim they had no realistic chance to intervene before the collision.
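Some rough, back-of-the-envelope arithmetic shows how quickly that window can close. The figures below are assumptions chosen purely for illustration, not data from any particular crash:

```python
# Illustrative takeover-timing arithmetic with assumed, hypothetical numbers.
speed_mph = 65
speed_fps = speed_mph * 5280 / 3600              # ~95 feet per second

distance_to_hazard_ft = 150                      # assumed detection distance
time_to_impact_s = distance_to_hazard_ft / speed_fps   # ~1.6 seconds

human_takeover_s = 2.0                           # assumed: notice alert, grab wheel, react

print(f"Time available: {time_to_impact_s:.1f} s")
print(f"Time needed:    {human_takeover_s:.1f} s")
print("Driver can realistically intervene:", time_to_impact_s > human_takeover_s)
```

Under those assumed numbers, the crash window closes before a typical driver could even retake the wheel.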
Finally, there are risks tied to software decision-making itself. Predictive algorithms determine how the car reacts to moving objects, for example, whether a pedestrian is about to step into the street or a cyclist will stay in the bike lane. If the prediction is wrong, the car may accelerate when it should yield, or brake suddenly and cause a rear-end collision. Companies like Cruise and Waymo have already issued recalls after their cars made errors in predicting how obstacles would behave.
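As a simplified illustration of what “prediction” means here (again hypothetical, not any company’s actual algorithm), the sketch below extrapolates a pedestrian’s current motion forward in time; if the pedestrian does something the model didn’t anticipate, the car’s decision is built on a prediction that is already wrong.

```python
# Hypothetical sketch of a constant-velocity prediction, the simplest kind of
# motion forecast. Real systems use learned models, but the failure mode is
# the same: if the predicted path is wrong, the chosen action is wrong.

def predict_position(x: float, y: float, vx: float, vy: float, dt: float):
    # Assume the pedestrian keeps moving at their current velocity.
    return x + vx * dt, y + vy * dt

# Pedestrian standing at the curb (x = 0 is the curb line, positive x is the road).
predicted = predict_position(x=-0.5, y=10.0, vx=0.0, vy=0.0, dt=2.0)
print("Predicted to stay on the curb:", predicted[0] <= 0)   # True -> car proceeds

# Reality: the pedestrian steps into the street, and the "proceed" decision
# made from the stale prediction is now the wrong one.
```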
Each of these technical failures raises complicated questions about liability. In Tesla cases, courts have often pointed to the driver’s responsibility to stay alert, since the system is officially classified as driver-assist. But plaintiffs argue that the very branding of features like “Full Self-Driving” encourages over-reliance and creates unreasonable risks. At the same time, incidents involving fully driverless cars, like Cruise’s pedestrian dragging in San Francisco, highlight how responsibility is beginning to shift from individuals to the companies operating these fleets.
How Will This Affect the Future of Autonomous Vehicles?
The companies building autonomous vehicles aren’t just imagining cars that can steer on the highway; they’re envisioning entire fleets of driverless taxis and on-demand services. Waymo already runs fully autonomous rides in places like Phoenix and San Francisco, and Tesla continues to promote its vision of a robotaxi network that owners could put to work when they’re not using their vehicles. These ideas may sound like the future of convenience, but they also create new categories of risk.
One of the most obvious concerns is what happens when vehicles are operating without anyone inside. An empty car driving through crowded downtown streets to pick up its next passenger won’t have a human behind the wheel to recognize subtle hazards, such as a cyclist weaving through traffic or a child darting out from between parked cars. If the system fails in those moments, the responsibility for the accident won’t fall on a driver. Instead, it will land on the company that designed and deployed the technology.
Even features meant to feel routine, like summoning a car to meet you at the curb, carry their own dangers. Picture a self-driving car navigating a busy parking lot on its own or pulling up to a sidewalk in rush-hour traffic. A minor miscalculation in those situations could mean hitting a pedestrian, clipping a cyclist, or causing a chain-reaction fender bender. And again, because the user isn’t even inside the vehicle, there’s little question that liability would flow back to the company or manufacturer.
Questions of accountability are already surfacing in day-to-day operations. In California, a driverless Waymo car once made an illegal U-turn at a police checkpoint, leaving officers unable to issue a ticket to a “driver” who didn’t exist. Lawmakers quickly responded by updating state law so that citations can now be issued directly to autonomous vehicle companies. That shift signals how regulators are preparing for a world where companies, not individuals, are legally responsible for how their cars behave on the road.
The scale of these risks is just as important as the individual incidents. A single malfunctioning car might seem like an isolated case, but when companies are running fleets of thousands of vehicles, even a small error rate can translate into hundreds of crashes. Each of those could become a personal injury claim, whether it’s a low-speed collision or a more serious accident caused by a perception failure. As these systems expand, courts and regulators will face mounting pressure to draw clear lines of liability.
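The arithmetic behind that point is straightforward. Using purely hypothetical numbers for illustration, a fleet making hundreds of thousands of trips per day turns even a one-in-100,000 error rate into hundreds of incidents a year:

```python
# Hypothetical fleet-scale arithmetic -- the numbers are illustrative only.
fleet_size = 10_000                   # vehicles in service
trips_per_vehicle_per_day = 20
serious_error_rate = 1 / 100_000      # assumed: one serious error per 100,000 trips

daily_trips = fleet_size * trips_per_vehicle_per_day            # 200,000 trips/day
expected_incidents_per_year = daily_trips * serious_error_rate * 365

print(f"Expected serious incidents per year: {expected_incidents_per_year:.0f}")  # ~730
```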
Hire a Personal Injury Lawyer in Georgia
If you or someone you love has been injured in a self-driving car accident, contact the McArthur Law Firm for a consultation about your case. McArthur Law Firm serves the cities of Atlanta in Fulton County, Macon in Bibb County, Kathleen in Houston County, Peachtree Corners and Lawrenceville in Gwinnett County, and Marietta and Smyrna in Cobb County, as well as the surrounding areas throughout the state of Georgia.
Contact one of our offices at the following numbers or fill out an online contact form to start building your case.
Atlanta Office: 404-565-1621
Macon Office: 478-238-6600
Warner Robins Office: 478-551-9901