In part 2 of our AI series, Guy Borgford, Senior Director of Business Development, and Greyson Richey, Senior Software Engineer, discuss the potential consequences of AI.
There’s a lot of debate going on right now about how the human race will net out with the advent of AI. Some, like Stephen Hawking or Tesla CEO Elon Musk, suggest that AI poses one of the greatest threats to humankind, while others, such as Facebook CEO Mark Zuckerberg, believe that AI is a glimmer of hope that may save humanity. Wherever you stand on the argument, it’s clear that the consequences of AI have the potential to be good, bad, and just plain ugly.
To discuss the potential consequences of AI in more depth, I’m once again joined by Greyson Richey.
Guy: Greyson, thanks for joining me. From doomsdayers to AI idealists, there’s certainly vociferous debate around what AI means to the future of humanity. At a high level, where do you land in this discussion?
Greyson: It’s such a tough thing to call, but I lean hopeful on this one. As we talked about last time, the short-term effects of AI are going to be good or bad based on how we as humans use this new and powerful tool. AI that is complex enough to tackle concepts like morality is much more advanced than what we have right now, or are likely to have soon, so we can instead look at our history for some indication of how we might be inclined to use it. We have used scientific advances to commit extreme atrocities, but more often, we have used them to stabilize ourselves and enter periods of unprecedented peace. Take the near-eradication of polio: tens of millions of people were directly saved, and many more indirectly. Scientific advancements have also alleviated war-causing stressors like food shortages. I think that “good” will win out in the end, if we make it there.
Guy: What comes to mind when you think about an example of “good” AI? And what do you think makes it “good”? How do you see AI benefiting both the companies that develop it and the people who use it?
Greyson: “Good” AI is not only used to improve throughput or to execute some task in a heightened capacity. “Good” AI also means that problems that may arise from its use are considered and countered. For instance, the automation of manual labor has created real employment problems for workers outpaced by the technology. But there are approaches we can take to limit this kind of negative impact. When AI is implemented thoughtfully, displaced employees can be re-educated for other industries in a gradual reordering of the workforce. It’s important to keep in mind the human factor—after all, our tools are at their best when we maximize the healthy impact they have on their designers.
Guy: Are there current examples of AI that are simply “bad”? How does AI fail? Are there dangers? Is there a fix?
Greyson: For the most part, out-and-out “bad” AI doesn’t make it into the public eye. Due to extensive testing, or even secrecy, we don’t really end up seeing AI that is responsible for causing physical harm to humans. Most often, severe incidents related to AI are caused by the misuse of the tool, like handing over total control to a driver-assist system and taking a nap. So far, companies have been very careful to avoid setting a precedent of irresponsible AI use, which makes sense—such a new set of technologies needs every advantage to work its way into the mainstream. On the more sinister side, there’s the potential for the use of AI in warfare, but whether this is occurring is a matter of speculation rather than evidence. There certainly are numerous dangers that come with an AI that can decide who lives and who dies, and for the most part the fix seems to be not handing over that power in the first place.
Guy: Last but not least, AI is part of our everyday lives today, but many folks wouldn’t know AI if it slapped them in the face with a robotic hand. In your opinion, is there an example of AI in use right now that is neither good nor bad, but falls right into the “ugly” bucket?
Greyson: Thankfully, “bad” AI is most evident in places where it can’t really cause physical harm, but “ugly” AI can cause significant emotional damage. AI-driven advertising, for example, is somewhat notorious for the unwanted and even callous targeting of emotionally compromised people. There are cases in which a company has been forced to pull such advertising because the backlash was so powerful. The prevalence of these missteps is a symptom of a deeper issue: treating humans as a collection of generated data instead of the feeling, thinking beings that we are. That doesn’t bode well for a future in which more and more of our lives are run by AI, or in which AI even has the capacity to do physical harm. We are fully capable of combating this, but it takes a conscious effort. I imagine that in the near future, professionals from the psychology and HR fields will become significantly more involved in the tech world, as companies realize they need to change the way they do automated business. It will be important—and already is—to involve more people in implementing controls on AI that prevent these kinds of issues, and in handling the blowback when they occur.
Guy: In a perfect AI future, machine intelligence could arguably save our planet. What jumps out at you as the most positive impact that AI could have on our planet and the human race?
Greyson: Overall, I think that the biggest planet-saving gains we’re going to see from AI will be logistical, along with the freeing of human labor for the humanities, sciences, and recreation. With the advent of an enormous and efficient global supply chain, and the automation of food and necessities production, we’ll be able to devote our time to what we hold most dear. We’re great at generalist problem solving, but we lack the capacity to run the same problem billions of times to find the best case for a given scenario, so leave that to AI! Instead of worrying about whether we can afford food and shelter, we can concern ourselves with family, love, and questions about our existence, and generally spend more time uplifting our species as a whole.
Guy: And now the dark side. The word is that there’s a hushed AI arms race. Is the threat of a Skynet future run by rogue machines our biggest worry, or should we be worried about something less physical but equally insidious, like manufactured, bot-controlled propaganda?
Greyson: It’s not outside the realm of possibility that nefarious or misguided forces would put the incredible problem-solving potential of AI to work waging war. Many major scientific advances in human history were brought to the fore by a perceived need to fight our fellow human beings, as regrettable as those circumstances are. I’m not inclined to believe that a Skynet is in our future, considering how much we struggle to write AI that can do even easy-for-human tasks, let alone combine them with generalist capabilities. That said, I don’t think this is a valid reason not to worry. We’ve shown ourselves to be recklessly irresponsible with our knowledge, and there’s a strong precedent of reverting to less civilized behavior when we have a new tool that can inflict suffering. I hope that we can muster opposition before the misuse of AI becomes the next catalyst for unimaginable disaster inflicted on ourselves, by ourselves. Hopefully you’ll join me, and so will our readers.
Guy: Something to think about. Next time, we’ll sit down for our third and final installment in this series and discuss the core business verticals in which AI will have the biggest impact over the next five years.