Review: Three Laws Lethal by David Walton 

“That the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question.” SAMUEL BUTLER, 1863 (quoted in Three Laws Lethal, Kindle locations 24-25, Pyr)

Three Laws Lethal by David Walton 
Paperback: 392 pages
Publisher: Pyr (June 11, 2019)

Self-driving cars are coming; or rather, they’re already here, if only in a limited fashion. David Walton, the author of The Genius Plague, explores those limits in his new book, Three Laws Lethal. When we put life-or-death decisions in the hands of AIs, how will they make those choices, and will we be happy with the results?

Tyler and Brandon are classic nerds with a mission: to make automated vehicles accessible to everyone, not just the few who can afford luxury cars with all the tweaks, and by doing so to keep us all safe from the kinds of crashes that took Tyler’s parents when he was a child. If Tyler is the driven maker/hacker, Brandon is more of a tech-bro: passionate, but easily frustrated when things don’t go his way. They’re the kind of Jobs-and-Wozniak duo that becomes a tech legend, and they will, though not in the way they hope.

When something goes disastrously wrong during a demo of their AI driving code, a rift develops between the two that will lead to murder, mayhem, and the transformation of the very tool they hoped would save lives into an instrument of mass slaughter.

On the face of it, this is a book about autonomous vehicles, but dig just a little deeper and Three Laws Lethal is about much more than that: game theory, emergent properties (consciousness), deep learning, models of ethical choice, and, at its core, the case for open source software in critical applications. All of it wrapped up inside an action-driven novel in which one man’s pain drives him to the darkest side of being human.

Tyler’s code is pretty good, but when an angel investor gives them seed money and a month to come up with something really impressive, they turn to Naomi, the painfully shy sister of Brandon’s MBA-major girlfriend. Naomi happens to be working on a breakthrough in AI programming: a virtual world of competing virtual agents whose evolution she forces through competition, and which learn to accomplish tasks in the real world through deep learning rather than explicit programming. Even she doesn’t really understand the mechanisms at work in her code, but the proof is in its utility, which far outstrips anything rules-based code has shown.

The problem, and the point of this cautionary tale, is that whether your code is self-generated, produced by showing an AI thousands of examples and letting it grow its own algorithms, or simply proprietary and hidden from the outside world, you have no way to evaluate how it deals with unusual circumstances. And that lack can be deadly. Think of 2001‘s HAL, the prototype for conflict errors in AI and still as relevant as ever.

Critics of AI, and of autonomous vehicles in particular, like to bring up ethical dilemmas such as the famous “trolley problem,” in which you must decide whether or not to push someone in front of a train to save a greater number of people. Naomi’s AI has no trouble deciding how to handle that sort of dilemma, but ultimately it turns out to be too good at it, extending the scenario in ways its designers never anticipated.
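For readers who want to see how baldly mechanical that kind of choice looks in code, here is a minimal, purely illustrative sketch of a naive utilitarian decision rule; it is not from the novel, and the function name and casualty figures are hypothetical placeholders:

```python
# Purely illustrative: a naive utilitarian "trolley" rule, not Walton's code.
def choose_action(actions):
    """Pick the action with the fewest expected casualties.

    `actions` maps an action name to an expected casualty count.
    Ties are broken by dictionary order, which is itself an ethical
    choice the programmer has quietly made for everyone else.
    """
    return min(actions, key=actions.get)

# Hypothetical example: swerve (harm one) vs. stay the course (harm five).
print(choose_action({"swerve": 1, "stay_course": 5}))  # -> swerve
```

The unsettling part is everything a little function like this takes for granted: who counts, where the estimates come from, and what happens when the scenario doesn’t fit the menu of actions at all, which is exactly the territory the novel pushes into.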

The plot is twisty, if a little grim at times. Tyler, Naomi, and ultimately the AI are all engaging characters, while Brandon gets to play the tech mogul/sociopath as the villain of the piece. The story is good, and I had a hard time putting the book down, but where the novel shines is in its exploration of AI and tech culture.

If you’ve been reading science fiction for any length of time, you’ll know that while the author presents a number of novel ideas, he’s hardly the first to write about emergent AI. In fact, he points that out himself through the banter between Tyler and Naomi, who make a game of lacing their back-and-forth with quotes from classic science fiction. Naomi pays tribute to Heinlein’s The Moon Is a Harsh Mistress, itself a story about emergent AI, by naming her code “Mike” after the computer in that book. The novel’s title, of course, references Asimov’s famous Three Laws of Robotics, and for completists there’s a reading list of every title mentioned at the end of the book.

There is one Asimov story that is notable by its absence. “Sally,” first published in the May–June 1953 issue of Fantastic, tells the story of a “Farm for Retired Automobiles” where “positronic-motored” automatics are kept in mint condition but never driven by humans. When a businessman comes to salvage their positronic brains, the cars find a way around their Three-Laws-safe programming to deal with him. There’s a lot in that story that’s echoed here, and anyone working in, or just paying attention to, the development of autonomous vehicles would do well to read both the short story and Walton’s novel.

