It’s Not the Artificial Intelligences We Should Be Fearing

I was watching a YouTube video of an interview with Elon Musk, and one of the subjects touched on was his cautionary pronouncements regarding the potential dangers of artificial intelligence.

In short, Elon suggests that research into general AI – so-called superintelligence, as distinct from “narrow AI” such as self-driving car programs – is advancing at a near-exponential rate, and at the current time we have absolutely no way of knowing what it might become, no way of knowing that we could control it.  Further, there’s no general oversight, no set of rules, that we might use to more safely explore these technologies.

Indeed.  And Elon is not alone.  Stephen Hawking famously cautioned against unfettered research in this area:

“The development of full artificial intelligence could spell the end of the human race…. It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”  And: “In short, the rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity. We do not yet know which.”

Sir Tim Berners-Lee, architect of the web, has stated:

“So when AI starts to make decisions such as who gets a mortgage, that’s a big one. Or which companies to acquire and when AI starts creating its own companies, creating holding companies, generating new versions of itself to run these companies.

“So you have survival of the fittest going on between these AI companies until you reach the point where you wonder if it becomes possible to understand how to ensure they are being fair, and how do you describe to a computer what that means anyway?”

Bill Gates, co-founder of Microsoft:

“I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well.

“A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

It is important to note that none – none – of these cautionary statements is referencing anything like Skynet or even the W.O.P.R. of WarGames fame.  They are referencing the research being done to create beneficial super machine intelligence, the kind that will solve all of our problems, answer all of our questions and bring us a cup of coffee when it knows we want one.

That’s because they’re familiar with the paperclip-making AI scenario, and its close cousin, the strawberry-picking AI scenario.

Nick Bostrom, a Swedish philosopher, laid out the paperclip story in his book Superintelligence: Paths, Dangers, Strategies.  The thought experiment goes like this: give an artificial general intelligence the task of collecting as many paperclips as it can, set it up to optimize that task, to continually look for improvements, and then let it do its thing.  Humans might expect it to manufacture paperclips, or try to manipulate the stock market to earn more paperclips, but an AI will not think the way humans think and doesn’t value things the way humans value them.  The AI might, instead, come up with a strategy of improving its own capabilities and, at some point, it might determine that humans have to go in order to make room for even more hyper-efficient paperclip factories.
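To make the reasoning concrete, here is a toy sketch in Python – purely illustrative, with every resource name and conversion rate invented for the example, and nothing like a real AI.  It shows the one thing the thought experiment turns on: a greedy optimizer pursues exactly the objective it was given, and nothing in its loop ever asks what the things it consumes were worth to anyone else.

```python
# Toy illustration only: a greedy "paperclip maximizer" that optimizes the one
# number it was given.  All names, resources and conversion rates are invented
# for the example.

world = {"iron_ore": 100, "factories": 3, "farmland": 50, "cities": 10}

def paperclip_yield(resource: str, amount: int) -> int:
    """The objective sees everything only as potential paperclips."""
    return amount * 10

def step(world: dict) -> int:
    """Greedy policy: convert whichever resource yields the most paperclips."""
    best = max(world, key=lambda r: paperclip_yield(r, world[r]))
    made = paperclip_yield(best, world[best])
    world[best] = 0  # the resource is consumed, whatever it happened to be
    return made

total = 0
while any(world.values()):  # the loop never asks whether humans mind
    total += step(world)

print(f"paperclips: {total}, what is left of the world: {world}")
```

To this objective, farmland and cities are just more raw material; the danger in the thought experiment comes from the objective’s silence about everything it doesn’t measure.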

Musk’s own variation involves an AI originally set on strawberry picking, and he describes it this way: “Let’s say you create a self-improving A.I. to pick strawberries, and it gets better and better at picking strawberries and picks more and more and it is self-improving, so all it really wants to do is pick strawberries. So then it would have all the world be strawberry fields. Strawberry fields forever.”

In either case, we end up with a world full of paperclips or strawberries and no humans.  Not because Skynet perceived humans as a threat, but simply because it had been doing its level best to do what it had been taught to do – much like HAL in 2001: A Space Odyssey (which is probably one of the first of science fiction’s cautionary AI tales).

Collectively, the basic point is this: we don’t understand enough yet, we’re moving too fast, and the consequences are of existential proportions.

Much of this cautionary thinking was seemingly prompted by Autonomous Weapons: An Open Letter From AI and Robotics Researchers, which was signed by nearly 4,000 scientists in those fields (quite prominent names among them) and delivered to the United Nations Convention on Certain Conventional Weapons.  It summed itself up this way:

“In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”

And that followed some alarming incidents involving semi-autonomous weapons systems.

Clearly, a lot of very smart and informed people are concerned.  But I think, in a way, they’re concerned about the wrong thing.

Non-general AI systems have caused crashes on Wall Street.  Those systems are “narrow” AI: algorithms created by financial whizzes and designed to analyze, buy and sell as efficiently and as quickly as they can.  Different firms have different financial theories, and their algorithms reflect those theories.  Their programs compete with one another, some dedicated to “outsmarting” competitor AIs.

That’s because the AI that can “get there firstest with the mostest” is going to make more money.

In fact, all general AI research is predicated on the basic concept that a super-smart intelligence capable of improving itself can and inevitably will make somebody, somewhere, lots and lots of money, in ways that we simply cannot fathom at this point in time.

At last check, according to Forbes, there were some 2,208 billionaires on the planet.  More significantly, the average billionaire is holding on to 4.1 billion dollars.

Is this beginning to sound a little familiar?  A little Bond-villainesque?

I have to assume that all two thousand-plus of that group have some smarts (if not, they can always hire some).  I also have to assume that at some point they’ve been exposed to the concepts of artificial intelligence.

I don’t have to assume that at least one of them has given some thought to funding research that would lead to the development of a machine that could think faster and better than every other person and machine, that would be subordinate to them and – bonus – would keep getting faster and smarter at less expense, all because that’s what it has been trained to do.

Why isn’t that a mere assumption?  Because we already know that such research programs are going on, funded, in large part, by people on that list.

A.I. is a capitalist’s dream.  Invest some seed money and you end up with a machine that can be instructed to “make me more money”, “get rid of this competitor”, “create a strategy that reduces my taxes”, “generate a plan for positively influencing elections”….

Does anyone really think that a piece of paper, albeit one signed by a bunch of impressive names, is going to positively influence what someone with that kind of money may decide to do?

I don’t see how.  In fact, calling for more control and oversight of A.I. research may very well be a ploy, developed by an early-gen A.I., designed to slow down the work being done by its competitors.

A scenario in which the vast majority of A.I. research is restricted by design constraints and governmental oversight will result in nothing so much as opening up opportunities for those who don’t always follow the rules.

The very essence of entrepreneurship is not following the rules.

Are there other solutions?  Probably not.  We’ve opened this Pandora’s box – or at least those with a few billion dollars to throw around have – and there’s no closing it.  Short of somehow brainwashing every single billionaire on the planet into believing that all of their goals and desires must conform to what is best for humanity (as if we know what that is), there remain only two scenarios under which the outcome looks at least semi-good for our species: emergent A.I.s prove incapable of evolving to levels we need be concerned about, or a mostly-good-guy’s A.I. proves capable of outwitting (or out-evolving!) a bad guy’s A.I.  At least the first time.

Expressing concern about, even fear of, these technologies is nothing new.  We’ve always feared new technologies, and that’s the problem: it’s the people we need to keep our eyes on, and it always has been.
