Abstract
Artificial intelligence is part of our daily lives. Whether working as taxi drivers, financial analysts, or airport security, computers are taking over a growing number of tasks once performed by people. As this occurs, computers will also cause the injuries inevitably associated with these activities. Accidents happen, and now computer-generated accidents happen. The recent fatality caused by Tesla’s autonomous driving software is just one example in a long series of “computer-generated torts.” Yet hysteria over such injuries is misplaced. In fact, machines are, or at least have the potential to be, substantially safer than people. Self-driving cars will cause accidents, but they will cause fewer accidents than human drivers. Because automation will result in substantial safety benefits, tort law should encourage its adoption as a means of accident prevention. Under current legal frameworks, manufacturers (and retailers) of computer tortfeasors are likely strictly responsible for their harms. This article argues that where a manufacturer can show that an autonomous computer, robot, or machine is safer than a reasonable person, the manufacturer should be liable in negligence rather than strict liability. The negligence test would focus on the computer’s act instead of its design, and in a sense, it would treat a computer tortfeasor as a person rather than a product. Negligence-based liability would create a powerful incentive to automate when doing so would reduce accidents, and it would continue to reward manufacturers for improving safety. Indeed, principles of harm avoidance suggest that once computers become safer than people, human tortfeasors should no longer be judged against the standard of the hypothetical reasonable person that has been employed for hundreds of years. Rather, individuals should be measured against computers. To appropriate the immortal words of Justice Holmes, we are all “hasty and awkward” compared to the reasonable computer.