Stupidly Intelligent

This little ditty in Science News, a review of Our Final Invention, on the impending dangers of Artificial Intelligence caught my eye. Consider this quote: “Computer systems advanced enough to act with human-level intelligence will likely be unpredictable and inscrutable all of the time.” The book’s author is worried that “we could end up with a planet populated by self-serving, self-replicating AI entities that act ruthlessly toward their creators”.

Apparently, an increasing number of AI visionaries also have misgivings. Such sky-is-falling fears amount to blindness through vision, as I see it. If history shows us anything, it is that the pageant of events invariably turns out differently from our desires or worries… in the long term. Indeed, history is a record of humanity being blindsided at every turn.

Their mistake is to equate real intelligence with the human ability to think. They fear AI will out-think and outwit humans. Such so-called intelligence is just the tip of the iceberg. These visionaries fail to realize that deep-seated emotion drives thinking. As such, thinking can be no more ‘intelligent’ than one’s underlying emotional state. Emotion is the foundational driving force for what all animals (including humans) do in life. For us, emotion steers thinking, often imperceptibly. Yet I’ve heard no mention of AE—artificial emotion. The nefarious agendas they fear originate in emotion, not in what they deem as intelligence—AI or otherwise.

I can’t help but see how we are somewhat blinded by our own self-image. We see intelligence as key, and often see emotion as something to ‘rise above’. What we fail to realize is that any ‘rising above’ must also be driven by emotion. Emotion is the fuel that runs life. Intelligence is merely a species-specific neurological ability to get the task done—the task that instinct and emotion dictate. For birds, that means flying south for the winter; for humans, that means cognition—thinking. What sets humans apart from the rest is our ability to imagine—to fly around in ‘mind space’—but that imagination is still steered by core emotion: fear and need. (See Need and Fear in One who speaks does not know?).

Our species’ illusion-of-self (ego) drives our cherished views of intelligence, probably more now than at any time(1). Cognition is to us what a dexterous long trunk is to an elephant. The difference, I would say, lies in how intelligence enables us to focus on intelligence to the point of becoming stupidly intelligent. It would be like the elephant evolving the length of its trunk to the point it began tripping over it. Our intelligence has become something of a dis-ease, as chapter 71 points out. D.C. Lau put this very nicely: Not to know yet to think that one knows will lead to difficulty. Here now is the rest of this review, interspersed with my snide remarks when called for. ;-)

Computers already make all sorts of decisions for you. With little or no human guidance, they deduce what books you would like to buy, trade your stocks and distribute electrical power. They do all this quickly and efficiently using a simple form of artificial intelligence. Now, imagine if computers controlled even more aspects of life and could truly think for themselves.

What about feel for themselves? If a computer can’t care, need or fear for itself, what motivates it to ‘think or act’ for itself? Oh, and let’s not forget what produces the ‘self’ in “think for themselves”. (See Buddha’s Second Noble Truth.)

Barrat, a documentary filmmaker and author, chronicles his discussions with scientists and engineers who are developing ever more complex artificial intelligence, or AI. The goal of many in the field is to make a mechanical brain as intelligent — creative, flexible and capable of learning — as the human mind. But an increasing number of AI visionaries have misgivings.

Science fiction has long explored the implications of humanlike machines (think of Asimov’s I, Robot), but Barrat’s thoughtful treatment adds a dose of reality. Through his conversations with experts, he argues that the perils of AI can easily, even inevitably, outweigh its promise.

That the perils of AI can easily outweigh its promise is certainly true, at least in the short term. Hey, it is already true of computers now in many ways… and even of our natural intelligence, i.e., the ‘not knowing yet thinking one knows’ disease to which chapter 71 refers. However, evolution is like a child… it stumbles along, striving until it gets it right, and then ups the ante, and stumbles along at that new level… on and on. We’re just part of that cosmic process, and the invention of computers is just the latest in our upping the ante.

By mid-century — maybe within a decade, some researchers say — a computer may achieve human-scale artificial intelligence, an admittedly fuzzy milestone. (The Turing test provides one definition: a computer would pass the test by fooling humans into thinking it’s human.) AI could then quickly evolve to the point where it is thousands of times smarter than a human. But long before that, an AI robot or computer would become self-aware and would not be interested in remaining under human control, Barrat argues.

Ha! “A thousand times smarter than a human, self-aware, and not interested…”. Only a fool thinks he is smart. So, smarter by whose definition of what smart actually is? And “self-aware”? As Buddha points out (and I certainly verify through experience), the illusion of self originates and manifests itself in a cleaving to things. The need to cleave is pure, raw, visceral emotion. Computers don’t feel need or fear, and I’ve yet to see any attempt to create an artificial version of these biochemically based emotions. Without that, the whole doomsday story falls apart.

One AI researcher notes that self-aware, self-improving systems will have three motivations: efficiency, self-protection and acquisition of resources, primarily energy. Some people hesitate to even acknowledge the possible perils of this situation, believing that computers programmed to be superintelligent can also be programmed to be “friendly.” But others, including Barrat, fear that humans and AI are headed toward a mortal struggle. Intelligence isn’t unpredictable merely some of the time or in special cases, he writes. “Computer systems advanced enough to act with human-level intelligence will likely be unpredictable and inscrutable all of the time.”

The researcher noted, “…systems will have three motivations: efficiency, self-protection and acquisition of resources, primarily energy”. Okay, I’m probably beating a dead horse by now. But really, “self-protection”? Yes, you can program a computer to protect itself from threats that the programmer stipulates. But those are merely projections of the programmer’s own self-survival. They originate in his survival instinct — his agenda. I’ve heard no mention of anyone designing Artificial Instinct.

The child becomes an adult, yet the child remains. Most, if not all, of the havoc we wreak stems from ignorance, not intelligence, or rather from a mismatch between emotional intelligence and cognitive intelligence (see Counterbalancing I.Q.). Therefore, for AI to wreak havoc, it must also possess Artificial Ignorance.

Humans, he says, need to figure out now, at the early stages of AI’s creation, how to coexist with hyperintelligent machines. Otherwise, Barrat worries, we could end up with a planet — eventually a galaxy — populated by self-serving, self-replicating AI entities that act ruthlessly toward their creators.

That just goes to show that we will always find something to worry about. Simply put, worry = fear + thought, and this will always find a way to express itself as long as we feel fear and think. The wealthier we become, the more absurd and neurotic those worries become. Of course, Jesus saw this consequence when he said, “It is easier for a camel to go through the eye of a needle, than for a rich man to enter into the kingdom of God.” This “Kingdom of God” for me is simply a state of natural dynamic balance. Wealth skews balance. The harnessing of electricity has made us all wealthy, relatively speaking. (See And Then There Was Fire for an overview of our evolutionary progress to date, and a glimpse into possibilities for a hopeful future… long term of course, as we stumble our way there.)

All this fuss is merely another example of the hunter-gatherer instinct treading water to find meaning. No one is to blame… not one bit! After all, without free will, the cards just fall where they may. For more on worry = fear + thought and Emotional Intelligence, see Counterbalancing I.Q., Beware: the Blind Spot, and Imagining a Better Way.

(1) Searching for the Mind exemplifies the importance we place on intelligence. We see it as a strength, but perhaps we also sense it as our greatest weakness? Two sides of one coin. Anyway, investigating this scientifically is bound to make us more self-honest, right?


5 Responses to “Stupidly Intelligent”


  • I really enjoyed reading this, Carl. Thanks! This description of advanced AI (“Computer systems advanced enough to act with human-level intelligence will likely be unpredictable and inscrutable all of the time”) and the fears based on it sound like a projection of our fears of human psychopaths.

  • Yes, and watching much sci-fi doesn’t help, especially if the sci-fi portrays psychopathic narratives. The current cultural paradigm molds perception, and gossip* (news, music, stories, education, etc.) is the ‘hand’ that crafts the paradigm. It’s a sticky wicket. ;-)

    * I regard gossip as the simplest way to understand the underlying sociological (tribal) influence of all forms of communication.

  • I just read your AI article; my first thought was, do they have common ethics programmed in? Or maybe only the ethics of the code writers.

  • You publish too much info to respond to your many brain farts, as Frederick Perls put it. (He was especially famous in Big Sur during the hippie era of the late ’60s and early ’70s.)
    If I read more I will probably respond. I am sure that among your many writings I will find one that requires a response.

    Your friend for life, Craig

  • too much info to respond to…

    No doubt, but you should see all the stuff I leave out! One problem is that I see everything, and I mean everything (including Nothing), as connected. Yet I’m dealing within a paradigm that endlessly makes distinctions, until all we’re left with is a haystack of unconnected needles. The irony is this: in writing I can’t help but make distinctions too; writing demands it. To paraphrase chapter 56, Knowing doesn’t write; writing doesn’t know. It is just crazy!

    Yes, Craig, friends for life, and stepping into the winter of my life helps me appreciate that all the more… sigh
