The Science News review of Our Final Invention, a book about the impending dangers of Artificial Intelligence, caught my eye. (Google [Artificial Intelligence and the End of the Human Era].) Consider this quote: “Computer systems advanced enough to act with human-level intelligence will likely be unpredictable and inscrutable all of the time.” The book’s author, James Barrat, worries that “we could end up with a planet populated by self-serving, self-replicating AI entities that act ruthlessly toward their creators”.
Apparently, an increasing number of AI visionaries also have misgivings. Such “The sky is falling” fears are mistaken. In fact, history shows that the future invariably turns out differently in the long term from what our desires or worries imagined. If anything, history is a record of humanity being blindsided at every turn. As chapter 38 observes, Foreknowledge of the way, magnificent yet a beginning of folly.
AI visionaries make the mistake of regarding the ability to think adeptly as an all-inclusive intelligence. They fear AI will out-think and outwit humans. Actually, the ability to think is just the tip of the iceberg. AI experts fail to realize that deep-seated emotion drives thinking. As such, thinking can be no more genuinely intelligent than one’s underlying emotional state. Emotion is the foundational driving force for what all animals (including humans) do in life. For us, emotion steers thinking, usually imperceptibly. Why haven’t I heard any mention of AE—Artificial Emotion? The nefarious behaviors we fear originate in emotion, not in intelligence. What disease causes us to attribute excessive significance to intelligence?
Our own self-image clearly blinds us. We view intelligence as fundamental, and often regard emotion as something to rise above. We fail to realize that emotion is the master puppeteer, usually invisible, behind all thought. Emotion is the fuel that runs life. Intelligence is merely a species-specific neurological ability to get the task done—the task that instinct and emotion dictate. For birds, that means flying south for the winter; for humans that means cognition. What sets humans apart from the rest is our ability to imagine—to ‘fly around’ in our mind’s space. Yet, that imagination is still steered by core emotion: fear and need. (See Need and Fear in What are the roots of thought?, p.602.)
Our species’ “illusion of self” (ego) probably promotes our cherished views of intelligence more now than at any time in the past (1). Cognition is to us what a dexterous long trunk is to an elephant. The difference lies in how intelligence enables us to focus on intelligence to the point of becoming stupidly intelligent. It would be like the elephant evolving the length of its trunk to the point that it began tripping over it. Our intelligence has become something of a disease, as chapter 71 points out: Realizing I don’t know is better; not knowing this knowing is disease. D.C. Lau put this more diplomatically as, Not to know yet to think that one knows will lead to difficulty. Here now is the rest of this review, interspersed with some comments I can’t resist making.
Computers already make all sorts of decisions for you. With little or no human guidance, they deduce what books you would like to buy, trade your stocks and distribute electrical power. They do all this quickly and efficiently using a simple form of artificial intelligence. Now, imagine if computers controlled even more aspects of life and could truly think for themselves.
What about computers feeling for themselves? If a computer can’t care, need or fear for itself, what motivates it to think or act for itself? In other words, let’s not forget that having an “illusion of self” is the prerequisite for computers to “think for themselves”. (See Buddha’s Second Noble Truth, p.604.)
Barrat, a documentary filmmaker and author, chronicles his discussions with scientists and engineers who are developing ever more complex artificial intelligence, or AI. The goal of many in the field is to make a mechanical brain as intelligent — creative, flexible and capable of learning — as the human mind. But an increasing number of AI visionaries have misgivings.
Science fiction has long explored the implications of humanlike machines (think of Asimov’s I, Robot), but Barrat’s thoughtful treatment adds a dose of reality. Through his conversations with experts, he argues that the perils of AI can easily, even inevitably, outweigh its promise.
Sure, the perils of AI can “easily outweigh its promise”, at least in the short term. This is already true of computers now in various ways. What’s more, this is also true for our natural intelligence, i.e., chapter 71’s the not knowing this knowing disease. However, evolution is like a child stumbling along, striving until it gets it right, then ups the ante and stumbles along at that new level… ad infinitum. We’re simply part of that cosmic process, with the invention of computers as the latest in our evolutionary upping of the ante.
By mid-century — maybe within a decade, some researchers say — a computer may achieve human-scale artificial intelligence, an admittedly fuzzy milestone. (The Turing test provides one definition: a computer would pass the test by fooling humans into thinking it’s human.) AI could then quickly evolve to the point where it is thousands of times smarter than a human. But long before that, an AI robot or computer would become self-aware and would not be interested in remaining under human control, Barrat argues.
What does “thousands of times smarter” actually mean… smart by whose definition? As to computers being “self-aware”, Buddha’s “the illusion of self originates and manifests itself in a cleaving to things” demolishes that notion. The need to “cleave” is a pure, raw, visceral emotion. Computers don’t feel need or fear, and I’ve yet to see any attempt to create an artificial version of these biochemically based emotions. Surely, the value of AI is its absence of emotion. Without emotion, the doomsday scenario disintegrates.
One AI researcher notes that self-aware, self-improving systems will have three motivations: efficiency, self-protection and acquisition of resources, primarily energy. Some people hesitate to even acknowledge the possible perils of this situation, believing that computers programmed to be superintelligent can also be programmed to be “friendly.” But others, including Barrat, fear that humans and AI are headed toward a mortal struggle. Intelligence isn’t unpredictable merely some of the time or in special cases, he writes. “Computer systems advanced enough to act with human-level intelligence will likely be unpredictable and inscrutable all of the time.”
The researcher noted, “systems will have three motivations: efficiency, self-protection and acquisition of resources, primarily energy”. Okay, I’m beating a dead horse now… but really, “self-protection”? Yes, you can program a computer to protect itself from threats that the programmer specifies. Those are projections of the programmer’s self-survival. They originate in his survival instinct — ergo, his agenda. I’ve heard no mention of designing Artificial Instinct.
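To make the point concrete, here is a minimal, purely hypothetical Python sketch (my own illustration, not anything from Barrat’s book or an actual AI system) of what programmed “self-protection” amounts to: every “threat” the machine reacts to is a condition its programmer wrote down in advance, a projection of the programmer’s concerns rather than anything the computer feels.

```python
# Hypothetical sketch: "self-protection" as a programmer-specified checklist.
# The program has no fear or need of its own; it only checks conditions
# its author decided were threats and runs responses its author chose.

import shutil

THREAT_RULES = [
    # (name, condition chosen by the programmer, response chosen by the programmer)
    ("low disk space",
     lambda: shutil.disk_usage("/").free < 1_000_000_000,
     lambda: print("Pausing writes to protect stored data.")),
    ("shutdown request",
     lambda: False,  # placeholder condition; again, the author decides what counts
     lambda: print("Saving state before exit.")),
]

def protect_self():
    """Run the author's checklist. Nothing here is felt; it is only followed."""
    for name, is_threat, respond in THREAT_RULES:
        if is_threat():
            print(f"Threat detected: {name}")
            respond()

if __name__ == "__main__":
    protect_self()
```

The machine “protects itself” only in the sense that it executes this checklist; the survival instinct behind it belongs entirely to the person who wrote the list.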
Each child becomes an adult, yet the child remains within the adult. Most, if not all, of the havoc we wreak stems from ignorance, not intelligence, or rather a mismatch between emotional intelligence and cognitive intelligence. (See, Counterbalancing I.Q., p.372.) Thus, for AI to wreak havoc it must also possess Artificial Ignorance.
Humans, he says, need to figure out now, at the early stages of AI’s creation, how to coexist with hyperintelligent machines. Otherwise, Barrat worries, we could end up with a planet — eventually a galaxy — populated by self-serving, self-replicating AI entities that act ruthlessly toward their creators.
This goes to show we always find something to worry about. Simply put, worry = fear + thought, and this will always find a way to express itself as long as we feel fear and we think. The wealthier we become, the more neurotic those worries naturally become. Jesus saw this consequence when he said, “It is easier for a camel to go through the eye of a needle, than for a rich man to enter into the kingdom of God.” This “Kingdom of God” for me is simply a state of natural dynamic balance. Wealth skews balance. The harnessing of electricity has made us all wealthy, relatively speaking. See, And Then There Was Fire (p.296) for an overview of this and a hopeful glimpse of the possible future we are stumbling toward.
All this fuss is merely another example of chapter 38’s, Foreknowledge of the way, magnificent yet a beginning of folly. Naturally, no one is to blame here. After all, without free will, the cards just fall where they may. For more on this worry = fear + thought, and Emotional Intelligence, see Counterbalancing I.Q. (p.372), Beware: the Blind Spot (p.300), and Imagining a Better Way (p.267).
UPDATE 2023: A.I. is becoming remarkably adept at creating art, literature, music and all manner of deep fakes. Up until now, humanity has placed much of its identity and pride in human creativity. As A.I. becomes able to equal or surpass human creative achievement, we will be compelled to look deeper into ourselves to identify the point of life. If it is not about how “great” we are as a species, then what is it? As always, changing circumstances force evolutionary change.
(1) Searching for the Mind (jonlieffmd.com) exemplifies the importance we place on intelligence. We see intelligence as strength. Do we also acknowledge its weakness? Two sides of one coin. Anyway, scientifically investigating this phenomenon of mind is bound to make us more self-honest.
No doubt, but you should see all the stuff I leave out! One problem is that I see everything, and I mean everything (including Nothing) connected. Yet, I’m dealing within a paradigm that endlessly makes distinctions, until all we’re left with is a haystack of unconnected needles. The irony is this: in writing I can’t help but make distinctions too; writing demands it. To paraphrase chapter 56, Knowing doesn’t write; writing doesn’t know. It is just crazy!
Yes Craig, friends for life, and stepping into the winter of my life helps me appreciate that all the more… sigh
You publish too much info to respond to your many brain farts, as Frederick Perls put it. (He was especially famous in Big Sur during the Hippie era of the late 60’s and early 70’s.)
If I read more, I will probably respond. I am sure that among the many writings I will find one that requires a response.
Your friend for life Craig
I just read your AI article. My first thought was: do they have common ethics programmed in? Or maybe only the ethics of the code writers.
Yes, and watching much sci-fi doesn’t help, especially if the sci-fi portrays psychopathic narratives. The current cultural paradigm molds perception, and gossip* (news, music, stories, education… etc.) is the ‘hand’ that crafts the paradigm. It’s a sticky wicket. 😉
* I regard gossip as the simplest way to understand the underlying sociological (tribal) influence of all forms of communication.
I really enjoyed reading this, Carl. Thanks! This description of advanced AI–“Computer systems advanced enough to act with human-level intelligence will likely be unpredictable and inscrutable all of the time.”–and the fears based on it sound like a projection of our fears of human psychopaths.