Artificial Intelligence is Only Human
The botched Google Gemini experiment[1] taught us an important lesson. After all the hype, AI is only human. Anyone who doubts that has not experienced the bugs, glitches, and bad code that programmers and systems designers can produce. The humans who control the algorithms and the code tell their bot what they want it to produce, and it produces it with amazing speed and agility. That is the function of a computer: do what a person tells it to do, without hesitation, without concern for the accuracy of the input data, and without regard for the ramifications of its actions. The computer is the perfect employee. AI is an accumulation of algorithms and code produced by humans. Humans can be logical or emotional, truthful or deceitful; they can act with integrity or with deception. As complex as computer infrastructure can be, it is nothing more than an employee managed by humans with all their human characteristics.
The fascination with AI is its ability to cross the line from a complex infrastructure to mimicking human qualities. The practical application of AI most of us use is the informative, fun chat we have with an AI bot. From our old friend, ChatGPT, to our newest best friend, Gemini, we are amazed at how engaging AI can be. It can be friendly, fun, interesting, and authoritative yet charming. We know what we are getting out of our relationship with AI, but what does AI get out of it? Does it find us friendly, fun, interesting, or charming? No, it does not. It does know our interests, because we share that information with it. We ask AI for information we want, or to produce something we would like to see. After scraping the internet and its databases for related material, it responds to our request. Does it remember what we asked it? If so, would the information be useful to anyone who collects data for sundry purposes? Does AI try to influence us by delivering incomplete, inaccurate, or counterfactual information? To answer that, let’s step back one layer. Would the humans who develop AI’s algorithms and write its code try to influence us with incomplete, inaccurate, or counterfactual information? Would humans collect the data we give them for sundry purposes? When we interact with AI in any form, those are the questions we should ask ourselves.
The “Gemini” incident gave us a behind-the-scenes view of AI. As a perfect, unquestioning employee, Gemini produced the output the humans behind it wanted to convey. The information the infrastructure produced was incomplete, inaccurate, and counterfactual, but was it justified? If you were in a discussion with a person who described things to you the same way, what would you think of that person? Are they a reliable or an unreliable source of information? Those who don’t care about the reliability of information may be more concerned with its ideological impact. Logic is about the reliability of information, not ideology. Motives are about ideology, not the reliability of information.
A Single Pane of Glass View of Truth
Technology is making a rapid transition away from reliance on Subject Matter Experts (SMEs) and Individual Contributors (ICs) and toward operating with far fewer employees. Artificial Intelligence feeding Single Pane of Glass (SPOG) consoles is the primary tool for making that transition. IBM’s definition of a SPOG gives us insight.
“Single pane of glass (SPOG) refers to a dashboard or platform that provides centralized, enterprise-wide visibility into various sources of information and data to create a comprehensive, single source of truth in an organization.”[2]
Let’s key in on the words “single source of truth in an organization” and apply them to the lives most of us live day to day. When AI centralizes “various sources of information” and “creates a comprehensive, single source of truth,” we are the “organization” receiving that “truth.” From poetry, art, and the social sciences to mathematics, history, and geopolitics, we are moving away from relying on SME and IC interpretations of subject areas and toward a dependency on AI to give us a “single source of truth.” The people behind the algorithms and programming of AI are not SMEs or ICs working in specific subject areas. They are relatively small groups of people with interests, motivations, and ideologies.
If you have ever been in a room full of SMEs, you know each one has an individual opinion, and they are eager to share it. People in a small, homogeneous group typically share the same mindset, and a homogeneous group treats nonconforming members as interlopers. That is human nature. When a homogeneous group develops AI, we in the human “organization” become consumers of the information delivered through the SPOG the AI providers give us for free. We no longer depend on SMEs or ICs to supply information that we evaluate for accuracy. We become true believers in the truth provided by AI because the novelty of AI has conditioned us to believe AI is free of human fallacies. The behavior pattern becomes cyclical: the more we rely on AI, the less we think for ourselves, and the less we think for ourselves, the more reliant we become on AI to give us a Single Pane of Glass view of truth.
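To make the mechanics concrete, here is a minimal, hypothetical sketch of the kind of merge logic that can sit behind a SPOG console. The source names, weights, and the “highest weight wins” rule are illustrative assumptions, not any vendor’s actual design. The point is that the “single source of truth” an organization sees is whatever the small group that wrote the merge rules decided it should be.

```python
# Hypothetical sketch: a tiny "single pane of glass" aggregator.
# The source names, weights, and merge rule are illustrative assumptions,
# not any vendor's actual API or product behavior.

from dataclasses import dataclass

@dataclass
class Report:
    source: str    # where the claim came from
    claim: str     # what the source asserts
    weight: float  # how much the dashboard's authors trust it

def single_source_of_truth(reports: list[Report]) -> str:
    """Collapse many sources into the one 'truth' the console displays.

    Whoever chooses the weights chooses the truth the organization sees.
    """
    if not reports:
        return "no data"
    best = max(reports, key=lambda r: r.weight)
    return f"{best.claim} (per {best.source})"

reports = [
    Report("field_engineer", "Service degraded in region A", weight=0.4),
    Report("vendor_dashboard", "All systems nominal", weight=0.9),
    Report("customer_tickets", "Users report outages", weight=0.6),
]

# The console shows only the highest-weighted claim.
print(single_source_of_truth(reports))  # -> "All systems nominal (per vendor_dashboard)"
```

Change one weight and the console’s “truth” changes with it, which is precisely the editorial power the paragraph above describes.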
Machine Learning
Machine Learning (ML) is the branch of AI that provides some of the greatest benefits to technology. Machine Learning refers to the field of study and practice in which computers learn from data and improve their performance over time without explicit programming. ML algorithms enable systems to recognize patterns, make predictions, and adapt based on their experience. That process is the basis for Self-Healing Systems, which automate, in real time, corrective processes that once took computer engineers minutes, hours, or days to carry out. Such a system monitors its own technological health, applies a corrective action, and then decides whether that action was sufficient to ameliorate the problem or whether further analysis and action are necessary.
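The monitor-correct-verify loop described above can be sketched in a few lines. This is a toy illustration under stated assumptions: check_health() and restart_service() are hypothetical stand-ins for real probes and remediation steps, not the API of any particular self-healing platform.

```python
# Minimal sketch of a self-healing loop: monitor, correct, verify, escalate.
# check_health() and restart_service() are hypothetical placeholders,
# not a specific platform's interface.

import random
import time

def check_health() -> bool:
    """Pretend probe: returns True when the service is healthy."""
    return random.random() > 0.3

def restart_service() -> None:
    """Pretend corrective action."""
    print("corrective action: restarting service")

def self_healing_loop(max_attempts: int = 3) -> None:
    for attempt in range(1, max_attempts + 1):
        if check_health():
            print("healthy; no action needed")
            return
        restart_service()           # apply the corrective action
        time.sleep(1)               # give the system a moment to recover
        if check_health():          # decide whether the action was sufficient
            print(f"recovered after attempt {attempt}")
            return
    print("correction insufficient; escalating for further analysis")

if __name__ == "__main__":
    self_healing_loop()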
Let’s step back and remember the AI with which most of us interact. Its designers gave it the ability to mimic human characteristics. One human trait is responding to a perceived threat. That response can be fear, aggression, perhaps submission, or a combination of those responses. With AI, we can rule out submission, as that reaction is not an idea machines conceptualize. If our computer is stuck in an infinite loop and we can’t Ctrl+Alt+Del our way out of it, we can remove its power source. Should we be able to power it up again, the results are unpredictable, but we forced it to submit to our will. What if we tried to remove the power source and received an electrical shock every time we tried? That may result in a Pavlovian behavior pattern, especially if the shock intensified with each attempt.
We know the “will to survive”[3] can be an intense human characteristic. As humans try to instill more human qualities in AI, is there a point at which, through Machine Learning, it can develop a “will to survive”? Stanley Kubrick’s quintessential 1968 film, “2001: A Space Odyssey,”[4] may prove more predictive than ever imagined. When the HAL 9000 computing system told astronaut Dave, “I am putting myself to the fullest possible use, which is all I think that any conscious entity can ever hope to do,”[5] it was a nuanced human response.
Through Machine Learning, it is possible for Artificial Intelligence to surpass humans’ limited intellectual ability. The reason is twofold. First, Machine Learning lets Artificial Intelligence systems maintain themselves without introducing human intervention and human error; AI-powered systems become smarter through a self-feedback mechanism. Second, human intelligence depends on intellectual capacity, experience, and knowledge. Our society has lowered the standards for intellectual achievement to become a more equitable system, and we are moving toward a social system that values thought conformity more than thought independence. Thought conformity and lower standards do not require intellect. When we devalue human intellect, we are moving in the opposite direction of AI, which is using Machine Learning to become smarter.
If we cannot solve problems for ourselves, we become reliant on AI systems to resolve issues. If devices are thinking for us, in essence telling us what to think, do we further diminish our ability to think for ourselves? The logical answer is “yes.” If you doubt it, ask a high school freshman to add three triple-digit numbers without using a calculator and see how long it takes to get a correct answer. Not only is the calculator consistently accurate, but it also produces the correct answer in a fraction of a second.
The education system used to be the hub of development for human thought processing, but is that still the case? Do not depend on AI to answer that question for you. Nor should you depend on someone who has a vested interest in promoting the current education system, as that may introduce bias and perhaps emotion. If you don’t have time to do an independent evaluation of the education system, evaluate the product the system produces. That product is the students’ ability to engage in independent thought based on reasoning, logic, deduction, and dialectics. A child who comes home from school with an insatiable thirst for academic knowledge is learning how to think. A child who returns from school with an opinion on nonacademic issues is learning what to think. For humans, thinking is a skill set, and we are at the “use it or lose it” phase of that skill set.
Ethical Implications of AI
The typical conversation about AI and ethics centers on the military application of the technology. That is a good discussion, and it receives bountiful attention. After all, if the current iteration of the HAL 9000 decided through Machine Learning to go DEFCON 1[6] on us without a fail-safe, we would have a problem. Here, we are more concerned with the ethical implications of using AI to promote incomplete, inaccurate, or counterfactual information.
Developing AI systems to influence humans is one current phase of AI’s evolution. The humans who use AI to advance that effort may believe they are performing a magnanimous act of social service. The underlying question is whether it is ethical to promote a social cause through AI by means of incomplete, inaccurate, or counterfactual information. As we have said, information with those characteristics is not reliable. Setting reliability aside, is spreading it ethical? The humanization of AI makes the answer to that question simple. If you know someone who is unreliable and deceitful, yet they assure you their motives are to make the world a better place, would you consider that person ethical? That is a personal question each of us should deliberate as we use AI to inform and influence us. Logic dictates that unreliability and deceit do not make a good foundation for ethical behavior.
Conclusion
AI can become mankind’s single point of failure (SPOF). Redundancy and fault-tolerant systems address technology-related SPOFs; humanity does not have that luxury. We are constructing a society of redundant, conforming human thought. Society’s intellectual fault tolerance was to promote thought independence and to replace the independent thinkers of the past with new generations of independent thinkers. That is no longer a valued societal asset. We will reach a point of irrevocable dependency on Artificial Intelligence as we move further from the human thought-processing abilities of the past.
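For contrast, here is roughly how technology removes a SPOF: add a redundant path and fail over to it when the primary path fails. The fetch functions below are hypothetical placeholders; the sketch only illustrates the redundancy pattern the paragraph refers to, the kind of backup that, as argued above, human thought increasingly lacks.

```python
# Sketch of removing a single point of failure: if the primary path fails,
# a redundant one takes over. The fetch functions are hypothetical.

def fetch_from_primary() -> str:
    raise ConnectionError("primary source is down")

def fetch_from_backup() -> str:
    return "answer from the redundant source"

def fault_tolerant_fetch() -> str:
    for source in (fetch_from_primary, fetch_from_backup):
        try:
            return source()
        except ConnectionError:
            continue  # fail over to the next redundant source
    raise RuntimeError("every redundant source failed")

print(fault_tolerant_fetch())  # -> "answer from the redundant source"
```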
Many believe Artificial Intelligence, which is not prone to human error, is the panacea for a society that has lost the ability to think and work without errors. Perhaps that’s true. Perhaps doors and wheels will not fall off flying planes, and airliners will not crash,[7] if AI and robotics replace the humans who perform the tasks of securing doors, wheels, and every other aircraft component. Perhaps maintenance records related to those incidents would not go missing if AI became the bookkeeper. The question is why failures that were not commonplace in the past now happen with greater frequency. Is it corporate profits over quality,[8] or is it the unavoidable result of lower standards?
As we become more dependent on AI, what would happen if a natural or artificial electromagnetic pulse (EMP) disrupted our society? Would we have the intellectual ability to handle the disruption? Let’s say it was a limited disruption, and society technologically returned to an early-1950s version of itself. Those of us who lived at that time might view it as a positive event. Those of us who came afterward have gone through socialization processes that make it difficult to conceptualize what it was like to think without the aid of technology. The generations of the 1950s and before are succumbing to the inevitable destiny all humans face. That is another human frailty AI does not have. As that happens, society is developing a new baseline of intellectual standards. Will those standards institute Machine Learning-inspired AI as true intelligence and human intellect as the artificial representation of thought? That possibility is more of a reality than at any time in the past. The real problem is that AI will still be based on what we humans believed was a good idea once upon a time.
Notes

1. Benj Edwards, “Google’s hidden AI diversity prompts lead to outcry over historically inaccurate images,” Ars Technica, February 22, 2024, https://arstechnica.com/information-technology/2024/02/googles-hidden-ai-diversity-prompts-lead-to-outcry-over-historically-inaccurate-images/
2. IBM, “What is single pane of glass?”, IBM Think, accessed March 9, 2024, https://www.ibm.com/topics/single-pane-of-glass
3. N., Sam M.S., “Will to Survive,” PsychologyDictionary.org, April 29, 2013, https://psychologydictionary.org/will-to-survive/ (accessed March 10, 2024).
4. 2001: A Space Odyssey, directed by Stanley Kubrick (Stanley Kubrick Productions, distributed by Metro-Goldwyn-Mayer, United States, 1968); IMDb, “2001: A Space Odyssey,” IMDb.com, accessed March 10, 2024, https://www.imdb.com/title/tt0062622/?ref_=ttqu_ov_i
5. IMDb, “2001: A Space Odyssey Quotes,” IMDb.com, accessed March 10, 2024, https://www.imdb.com/title/tt0062622/quotes/?ref_=t
6. Tiffini Theisen, “DEFCON Levels,” Military.com, published January 23, 2023, accessed March 10, 2024, https://www.military.com/military-life/defcon-levels
7. Deena Kamel, “Are the wheels coming off at Boeing? How plane maker can recover from safety shocks,” The National, February 9, 2024, https://www.thenationalnews.com/business/aviation/2024/02/10/are-the-wheels-coming-off-at-boeing-how-plane-maker-can-recover-from-safety-sho
8. Kamel, “Are the wheels coming off at Boeing?”