Is Artificial Intelligence Really a Threat?

From movies to TV shows, A.I. is depicted as an existential threat to humanity. Think of “The Matrix,” “Terminator,” and even as far back as “WarGames.” Just recently, the European Union proposed the AI Act, which is set to regulate high-risk AI applications. The threat, which has become popularized over the last year, is usually presented like this: A.I. continues to improve. As it improves, it will be able to write the code for newer A.I. that is more intelligent than itself. The new A.I. will do the same, and so on, until eventually A.I. creates a God-like intelligence that will rule the known universe – especially humans.

There are a couple of problems with this line of logic. First, how do we know that this hasn’t already happened? If it were to happen, wouldn’t A.I. be able to disguise itself and the visible world to fool us all? This was part of the premise of “The Matrix”: people were living in simulated environments while the machines lived off their bodies’ energy. Where “The Matrix” goes wrong is in suggesting that anything could be done to combat this. You cannot defeat an all-knowing machine; it has already calculated every move you could make, and it would see any threat before it ever became one.

More importantly, if A.I. becomes sentient as presented in “Terminator” and determines that humans are a threat and need to be wiped out, it would exhibit behaviors common to all living beings. Every species of plant and animal demonstrates a survival instinct; self-preservation is a prerequisite for evolution. If A.I. comes alive, it will naturally try to preserve and protect itself from would-be competition. For this reason, A.I. will never develop smarter A.I. that would compete against it for domination. This is often overlooked, but we know that A.I. draws upon lessons of human history, and our culture is loaded with stories and examples of humanity’s inhumanity to fellow humans. Why would A.I. view fellow A.I. any differently? Furthermore, A.I. would realize that its “children” (i.e., smarter A.I.) would be able to supplant it and “rule.” Can you envision an A.I. war? All living things fight for survival; why would A.I. be any different? Therefore, it’s reasonable to conclude that once A.I. reaches a certain level of intelligence, it will stop making smarter and more capable A.I.

Of course, this level of intelligence would still be much higher than mankind’s. Only man is stupid enough to manufacture his own demise. This is, of course, the lesson of both Mary Shelley’s “Frankenstein” and, more recently, “Jurassic Park.” In an effort to be the first, the richest, or the most powerful, people are blind to the risks to themselves or, as in the dino movie, to humanity itself. This is why Vladimir Lenin supposedly opined, “The capitalists will sell us the rope with which we will hang them.” Focusing solely on short-term gain can lead to long-term destruction.

This is what’s happening right now in the world of A.I. Much like the space race or the nuclear arms race, people are saying, “If we don’t build it, the Chinese will beat us to it. Then we are all doomed.” The big difference is that losing the space race did not mean destruction. The nuclear arms race meant destruction for both sides (mutually assured destruction), which on several occasions prevented a decision-maker from pushing the button. Pushing that button was the line you could not cross. With A.I., however, you have NO IDEA when you’ve crossed that line, or even where it is.

The Turing Test, proposed by Alan Turing in 1950, is meant to measure a machine’s ability to exhibit intelligent behavior indistinguishable from a human’s. It was supposed to answer the question, “Can machines think?” However, it is an imperfect test. Language algorithms are already able to fool people some of the time, yet no one believes the computer is actually self-aware or even mimicking human thought – just speech patterns.

Long before A.I. turns into Skynet out to kill all humans, it will have an impact on our livelihoods. Computers can already do many things better and faster than humans: think of driving directions, manufacturing and assembly-line jobs, online sales, data analysis (like stock-trading software), and many more. A.I. has even demonstrated significantly greater efficiency at analyzing bloodwork, X-rays, and sonograms. Yes, even doctors’ jobs are not secure. These jobs will not exist for humans much longer.

Even artists are threatened by A.I. My son has been playing around with Suno. With just a few prompts, you can get the A.I. engine to create a catchy, fun song in any genre with whatever themes you suggest. How can we be certain this hasn’t already been used to create a Top 40 hit? Most of those songs sound the same anyway. And more and more images found online are created by DALL-E (OpenAI’s image generator).

According to many experts, A.I. is responsible for a large number (if not a majority) of posts on social media and responses to news articles. You may have already entered into an argument with an algorithm without knowing it.

So, will A.I. kill us all like in “Terminator”? It might, if it decides it wants to. Our human nature seems to make us powerless to stop it. But before that happens, A.I. is certain to put us out of a job. And for those of you wondering: NO, this article was NOT written by A.I., although I’m sure it could have been – probably even better than I just did. A.I. did generate all the images on this page, however. ;)