
How AI Is Stealing Our Autonomy and What to Begin Doing About It

As a technological tool, artificial intelligence (AI) can free us from trivial or overly complicated busywork to pursue endeavors worthy of what it is to be human. However, German philosopher Martin Heidegger’s claim that technology enslaves us might prove a prescient prediction about AI’s impact on what makes us human, ironically even as AI affords us more time to pursue worthy goals.

AI Frees & Diminishes

According to a Scientific American article, “Today’s math learning environment is observably more dynamic, inclusive and creative than it was before ubiquitous access to calculators.” The article’s authors add that current high school students do far better with graphing calculators and computers than undergraduate engineering students did 20 years ago.i If these claims are true, then it might seem the fear mathematicians and teachers expressed in a 1975 Mathematics Teacher magazine survey was unfounded. With widespread use of calculators in the classroom, students learned more easily and became better at math, not worse.

At the same time, even as calculators unchained mathematicians’ and students’ creativity and critical thinking to rapidly advance their field, most non-mathematicians now struggle to perform simple arithmetic and depend on calculators to do what previous generations of middle-school students could easily handle with paper and pencil, if not in their heads.

The question is whether AI will prove to be like the calculator, or whether it will pose an even more dire risk. With AI, incredibly data-heavy problems can be solved with ease. OpenAI’s GPT-3, for instance, used nearly 175 billion parameters to perform its tasks, a figure now dwarfed by GPT-4’s reported 1.8 trillion parameters and a training dataset of over one petabyte.ii Enormous sets of market data points can be sorted and analyzed according to certain words, phrases or details in mere moments, instead of a human laboring over the same task for hours, days or months. AI makes businesses and other human activities more efficient, informed and often more precise by replacing guesswork and habit-based thinking with predictions and algorithmic decisions based on real-time data. It frees human talent to engage in what better fits its capacities, instead of squandering it in repetitive tasks requiring little thought or creativity. AI can even challenge us to innovate and make better decisions, as it incentivizes us to break away from the automatic, habitual approach we generally use when a situation is familiar. One study, for example, found that when “superhuman” AI played Go against professional players, the technology forced humans to come up with novel strategies in attempts to beat it, because the program had become invincible to traditional play.iii That creative, autonomous thinking is humanity at its authentic finest.

Our technologists bear comparison to the sorcerer’s apprentice, producing continuously improved means toward increasingly ill-defined ends. Unless we look to the humanities to clean up the mess, we stand a better than even chance of killing ourselves with our new toys.


LEWIS H. LAPHAM, “MERLIN’S OWL” COMMENCEMENT ADDRESS AT ST. JOHN’S COLLEGE IN ANNAPOLIS, 2003

Who other than a technophobe, therefore, would argue against giving up drudgery for something more interesting to think about and work on, while simultaneously developing our uniquely human skills to their fullest? No reasonable person.

However, misused AI is stealing our autonomy by making us more dependent on an entity that is programmed to addict us rather than to enable us to become more autonomous, rational beings living in our complex society and environment. That dependence, in turn, threatens our flourishing. It behooves us, therefore, to figure out how to distinguish between good and bad AI, and then take measures to encourage the good and prevent the bad.

Humans as Moral Decision-Makers

It is fairly uncontroversial to say that mature humans are autonomous, decision-making beings living in natural and social environments. Autonomy entails critical reasoning, creative thinking and abstract, rational and emotional communication, as well as intentional engagement in the world and free-will choice. Each of these abilities is essential to our nature, and together they must surely be sufficient to make a living being into a person.

Decision-making also plays a central role in the quality of our lives as evolved social animals capable of both effective, efficient engagement with others and moral agency. Both moral agency and effective engagement require the ability to understand, with empathy, others’ intentions, emotions and thinking, which enables cooperation. Developing and using the above features, we can reasonably surmise, is what it is to be an authentic human engaged in our world.

Human Autonomous Decision-Making: A Sketch

Autonomous decision-making is like being literate in a language. Literacy in autonomous decision-making requires early and lifelong learning, along with regularly sharpening our capabilities of decision-making, creative thinking, critical reasoning, communication and engagement with the world around us. For autonomy to function as it ought, we must be able to identify or create clearly defined alternatives, collect the right information, accurately weigh the costs and benefits from those alternatives, and measure how well each alternative and its foreseen outcomes fit with our goals and values. If we do it right, then any other autonomous, decision-making, literate individual should be able to understand why we decided as we did, although the person might not agree with us.

How moral agents become autonomous decision-makers is the result of both nature and nurture. We are moral agents interacting with our environment in time-tested ways: through evolutionary adaptation and through human learning about what works, what to value and why. From nature, evolutionary adaptation to challenging environments created our brain structure, which enables us to learn a human language, and language inherently involves autonomous decision-making. From nurture, living and learning in a social environment accounts for our habits of thought and teaches us our language’s meaning and grammar.

Emotion-Based Automatic Thinking

A number of writers divide our decision-making into two different realms.

Joshua Greene, a Harvard experimental psychologist, neuroscientist and philosopher, states that our brains function like dual-mode cameras.iv Most of our thinking is governed by the emotion-based “automatic” system, composed of efficient, automated programs created and developed by evolution, culture and personal experience. This mode is instinctual and rather simplistic, which is normal given that automatic cognitive functioning develops in early youth as children begin to recognize and remember patterns. If an encountered situation is sufficiently similar to what has happened many times before, pattern recognition provides guidance for thought and action.

Reason-Based Flexible Thinking

If the situation is too complex, or its content too novel, for the automatic mode, our cognition shifts into a second brain mode that uses greater conscious attention and flexibility in decision-making. Here is where, I think, we find fluid intelligence, which is “the mental capacity to deal with new challenges and solve problems without prior knowledge.”v When the automatic system is unable to deal with a situation, this deliberative, flexible and controlled mode considers the big picture and then consciously creates a path for the individual to bring the novel circumstances under some form of control. What the person decides is most likely to fit with her values; her short-, medium- and long-term goals; her historicity; and so on.

The second brain mode is what most of us recognize as essentially separating humans from other sentient but non-sapient species, some of which have enough cognitive functioning to develop an automatic, camera-like cognition. What non-sapient animal brains cannot do is develop the nuanced, flexible cognition that enables asking and answering the questions of moral agency: what should I do, and how should I do it? Those queries demand that we consider and value possible worlds. Being able to ask and answer such questions requires second-mode thinking with free will, creative thinking, critical reasoning, communication and engagement.

Coordination of Automatic & Flexible Thinking

Although it might seem counterintuitive to argue that the automatic cognitive decision-making mode is essential to the second mode, I contend that the latter is impossible without the former. First, humans use probabilities, feedback and weight-additive strategies—all essential to good decision-making—in their reasoning as they mature through lived experiences.vi These cognitive features are involved in both modes and in how they operate.

Second, when it is active, the flexible mode selects those of the emotional mode’s existing habits and strategies that are useful in the situation, and then modifies them or creates additional components for the particular moment. Over time, as similar enough situations arise, this decision strategy may itself become a habit in the automatic mode.

Third, the more-fluid mode depends in some ways upon the intuitive mode—that is, upon the smallest, trivial, automatic decisions we make on a regular basis. Most of these insignificancies pass by unnoticed because they are, as Greene says, parts of our daily habit of interacting in the world and the world acting upon us. The car seeming to want to merge a bit too soon in front of us, trying to decide between two ice cream flavors we like, or choosing which stairs to take on a particular day all seem insignificant, but they are part of the overall decision-making process in which we practice the language of decision-making through the lived experience gained from using it. The constant, incremental adjustments and interactions with others and things in our environment imperceptibly sharpen our overall cognitive skills. Intentionally interacting with our environment keeps those skills at the ready for everything from small, simple, efficient decisions to large, complex, novel ones, instead of letting them quietly rust.

AI-Autonomy Threat

AI-autonomy occurs when moral agents formulate a question, AI answers it, and the agents automatically adopt that answer as their own without question or qualm. In other words, they surrender their autonomy to whatever result ChatGPT, or whatever AI they are using, produces for them.

With AI’s subtle encroachment, we might not even recognize that we are losing our autonomy. One study, for instance, found that people counterintuitively perceive themselves as having greater autonomy over flexible working hours under an AI boss than under a human boss.vii But that doesn’t make sense. The AI supervisor is merely a set of strict rules and algorithmic controls, whereas a human boss can use nuanced decision-making when circumstances require it.

If language skills rust with disuse, furthermore, then what happens to our cognitive abilities when, among other experiences, we swipe mindlessly and addictively through videos selected by AI on TikTok or YouTube? One study on decision-making ability, psychopathology and brain connectivity found that “as many decisions are enacted in a social context, understanding the intentions and emotions of others is often crucial for choosing well and impacts on characteristics such as one’s propensity to cooperate with others.”viii So what does that mean for human capacities when we don’t have the experiences needed to develop them?

Google-Thinking

Ever since there have been exams and papers, students have been cramming, regurgitating and then quickly, efficiently forgetting the information or skills required to get a decent grade. With the ready availability of Wi-Fi and the internet, that practice became even more widespread. To understand a concept or answer a question, all a person need do is search, find a website or two on the subject, read enough to get the gist of the whole thing, paraphrase the material and then move on to other chores. As a result, not much of anything enters long-term memory for true learning, because there is no pressure or need. This has been informally called “Google-thinking.”

AI-Autonomy

Google-thinking morphs into AI-autonomy when users blindly, automatically adopt whatever results the technology gives them. Instead of the users doing the work, AI performs tasks essential to being human and eliminates the need to retain information and its interconnections with other content, knowledge we need in mind to understand our world and make authentic decisions for ourselves. Students regurgitate AI’s shallow average of all the data collected from available sources, which creates an even more depthless paraphrase. AI-autonomy and Google-thinking, therefore, have a shared result: no evidence that students learn any additional content. The best that can be said for AI-autonomy is that AI results often look right without being right.


It gets worse. AI interferes with our biological processes by lowering cognitive capabilities, such as intuitive analysis, creative thinking, critical reasoning and the other capacities mentioned above. With AI-autonomy, the student doesn’t even attempt to synthesize the information, because the technology has already done that. AI can make people lazy because it eliminates incentives to become better thinkers and decision-makers;ix they lose, or never gain, fluency in the language of autonomous decision-making.

Since challenging situations happen to each of us every day of our lives,x we need powerful automatic and flexible cognitive modes to solve problems and make our world our own. Research has shown a strong correlation between engagement with more complex environments and higher cognitive functioning in both the short and long run.xi We know that when people had too few choices early in life to develop nuanced decision-making processes, their second (reason-based, flexible) mode is unable in old age to handle the novel or overly complicated life experiences they encounter.xii The experiential learning and knowledge produced early in one’s life enable the older version of that person to make decisions more efficientlyxiii—and also more effectively.

Hedgehogs & Foxes

Although reducing our ability to make decisions, and thereby limiting the autonomy needed to be human moral agents, is bad in itself, another factor can make this much worse. In Thinking, Fast and Slow, Daniel Kahneman, drawing on Philip Tetlock’s distinction, describes two types of thinkers: hedgehogs and foxes.xiv A hedgehog’s brain “operates automatically and quickly with little or no effort and no sense of voluntary control.” Fox thinkers, on the other hand, are far more nuanced. They know that many situations are complex, involving many different interconnected factors and relationships. From a large number of moral factors, the fox weaves a complex solution that works overall, although what that solution is depends on the context and on what one is trying to achieve.

Kahneman’s position suggests the danger of Google-thinking and AI-autonomy replacing their authentic varieties. With too rudimentary or underdeveloped an automatic mental camera, the hedgehog becomes more dangerous to himself and others. The one big thing he knows might make all his decisions the least nuanced or accurate of all the options open to him. Since he is not learning content or skills, because AI is doing much of that work for him, there are fewer and fewer opportunities for him to perceive that his one big thing is not functioning well enough to obtain the benefits he wants for himself and those he cares for. That is, he can’t learn from his mistakes when he isn’t learning anything. There is, therefore, never an internal check preventing the hedgehog from acting against his self-interest and that of others.

The fox, of course, is extremely rare in an AI-dominated society. When AI makes too many decisions, there is no inducement to learn how to become an entity capable of nuanced, second-mode thinking. People become passive, hedgehog-like spectators of their inauthentic lives.

Daniel Kahneman (1934-2024) was awarded the Nobel Memorial Prize in Economic Sciences in 2002, alongside Vernon L. Smith, and the Presidential Medal of Freedom in 2013. His areas of acclaimed expertise included behavioral economics (integrating psychological research into economics) and the psychology of judgment and decision-making under conditions of risk and uncertainty.

What Should Be Done?

What each of us, our society and humanity as a whole need are people who can think for themselves. Such individuals, among other things, develop the right questions and know whom to ask for guidance and answers. They search for information from all relevant sources; evaluate the evidence for its relevancy, quantity and quality; create effective, efficient plans and the methods to achieve them; and then implement those plans with an ability to alter them as circumstances justify. In other words, they are moral agents with the wisdom to know the right thing to do at the right time for the right reason.

The good news is that AI cannot replace our essential nature, which is also part of how we make decisions. We have “a capacity for generating direct knowledge or understanding and arriving at a decision without relying on rational thought or logical inference.”xv Our authentic essence includes:

Critical Reasoning: Ordinal calculations, in which two objects, ideas or actions are compared to determine which is better or worse, instead of being judged merely according to quantity.

Creative Thinking: The ability to imagine worlds that do not exist when asking why something is as it is, or how the world could be different.

Communication: Reason and emotion are elements of most communication. There has to be some motivation to frame communication in a certain way or to expend energy to communicate at all.

Engagement: Private, public and political lives require that we, as social animals, engage with others in forming relationships, keeping in mind that all social relationships require some sort of emotion to bring them into existence and sustain them.

Free Will: The power to make decisions one’s own, instead of having them determined by nature and nurture.

The above five components are mere sketches of what these capacities entail, but they give a powerful clue as to why AI-autonomy cannot replace our decision-making and why it should not be overused in our lives. Each of these activities requires both the human agent’s emotion and reason: emotion, through desiring the situation’s moral values, engaging and acting for ourselves successfully as social animals in a changing environment; and reason, through determining whether what we are doing is justified in the circumstances, relative to the outcomes we seek and other relevant practical factors.

AI Proto-Rationality vs Human Intuition

Perhaps AI can approach something along the lines of proto-rationality, but it cannot be the unique non-rational and rational unity that human persons are. Humans use a “more holistic, intuitive approach in dealing with uncertainty and equivocality in organizational decision-making than is captured by AI.”xvi Although rationality is a core element of agency, moral agents are not always rational, nor do they need to be. In fact, “moral judgments appear in consciousness automatically and effortlessly as the result of moral intuitions,” rather than being the result of non-emotional deliberation.xvii Moral reasoning, hence, is biased and post hoc because it “is not left free to search for truth but is likely to be hired out like a lawyer for various motives, employed only to seek confirmation of preordained conclusions.”xviii Even if there are instances in which the process can be consciously controlled, many judgments seem to be “gut reactions” rather than reasoned ones, which seems to create an insurmountable barrier for anyone trying to replicate them with AI programming.


Human Realm for Important Decisions

So where should we draw the line between AI and human decision-making? We ought to take seriously the difference between machines and people: machines compute; people can do that too, but a fundamental part of human thinking is not computational. Generally, and not controversially, the important choices ought to be reserved to the human realm. Whom to hire or fire, how healthcare and other resources are to be distributed, what career path we should take and other issues significantly impacting human flourishing are questions vital to individualism and human well-being. These are ethical questions that can only be answered through human reason and emotion, such as desire. An appropriate answer to an ethical question is not akin to a mathematical sum. At the very least, it requires valuing, which includes desiring whatever is being valued. Love, care, hope and other emotions also come into play in ethical decisions.

Several years ago, I did an end-of-life consultation with a young Black professional whose father was in a permanent vegetative state after minor surgery. Her father’s physicians were pressuring her and her family to remove life support because there was little chance of recovery. After talking with her for hours about her, her father’s and her family’s narratives, she decided that they would continue with the status quo. Why? Because her father had always taken care of her, and he would have known she was not ready for his death. He would have wanted the plug pulled but would have endured to give them the time they needed.

This example shows why AI should never be allowed to make decisions concerning human relationships, values, morality and other factors bearing on human thriving. In this case, an AI would have concluded that the waste of resources on a hopeless case justified terminating the medical maintenance of the woman’s father. It would not have considered her and her family’s need for closure. Nor, at that time, would it have understood why a Black family that grew up when Black lives mattered less than others would be less trusting of medical decisions based on a prognosis and a request that White families might not have questioned.

AI has no emotional attachments, nor the ability to understand what such attachments mean to our existence. It misses the situation’s human components and, more importantly, why those do and ought to matter. It cannot grasp human truths, such as historicity, that cannot be captured in algorithms. Giving it important decisions in medicine, the military (for example, regarding autonomous weapons systems) or higher education would be the epitome of inhumanity, especially since the liberal arts can be used to humanize the technology. It cannot exercise mercy, grace, charity and human decency, because they cannot be justified by an algorithm.

Quantitative vs Qualitative

Let us take this thought further. Only humans can desire and value something for its own sake or for its moral worth. We also have an ethical duty to value and care about what is valuable, what is good, true and right, as well as to make decisions based on these values. We have the ability to understand and act in mercy, charity and grace, which are gifts no one deserves. Those virtues, and the actions they cause, are what make us humans and moral agents in the first place, because we can perceive how the world should be and care enough to make it happen.

We as humans should pay attention to quality when it matters, and always question AI’s quantitative reasoning, especially when it appears to adversely affect the flourishing of humans and other intrinsically valuable beings. AI, therefore, should be limited to quantitative calculations and decision-making, which concern quantities alone. Turning to AI algorithms in criminal sentencing to help reduce recidivism, incarceration and bias, for example, makes sense, but humans—preferably using the flexible, second mode of thinking—need to control and evaluate AI analyses.

In other words, that which makes us human is the boundary line between good and bad AI. When AI encroaches upon that border, it needs to be prevented or stopped. As long as the individuals, companies and government agencies creating and using AI keep that reality in mind, then, hopefully, the technology’s design and use will better protect human autonomy. This won’t guarantee safety, because there are now far too many technological distractions, such as scrolling through TikTok and YouTube shorts, that prevent us from engaging with the world in an authentic, human moral-agent way. Yet it will better position us to be who we should be according to our nature.◉

References

i Crow, M.M., Mayberry, N.K., Mitchell, T. & Anderson, D. (2024). “AI Can Transform the Classroom Just Like the Calculator.” Scientific American. https://www.scientificamerican.com/article/ai-can-transform-the-classroom-just-like-the-calculator/
ii Balla, E. (2023). “Here’s How Much Data Gets Used by Generative AI Tools for Each Request.” Data Science Central. https://www.datasciencecentral.com/heres-how-much-data-gets-used-by-generative-ai-tools-for-each-request/
iii Shin, M., Kim, J., van Opheusden, B. & Griffiths, T.L. (2023). “Superhuman artificial intelligence can improve human decision-making by increasing novelty.” PNAS 120(12): e2214840120.
iv Greene, J.D. (2014). “Beyond Point-and-Shoot Morality: Why Cognitive (Neuro)Science Matters for Ethics.” Ethics 124(4): 695-726.
v Bruine de Bruin, W., Parker, A.M. & Fischhoff, B. (2020). “Decision-Making Competence: More Than Intelligence?” Current Directions in Psychological Science 29(2): 186-192.
vi Betsch (2018); see note x.
vii Langer, M. & Landers, R.N. (2021). “The future of artificial intelligence at work: A review of effects of decision automation and augmentation on workers targeted by algorithms and third-party observers.” Computers in Human Behavior 123. https://doi.org/10.1016/j.chb.2021.106878
viii Moutoussis, M., Garzón, B., Neufeld, S., Bach, D.R., Rigoli, F., Goodyer, I., Bullmore, E., NSPN Consortium, Guitart-Masip, M. & Dolan, R.J. (2021). “Decision-making ability, psychopathology, and brain connectivity.” Neuron 109(12): 2025-2040.e7.
ix Ahmad, S.F., Han, H., Alam, M.M., Rehmat, M.K., Irshad, M., Arraño-Muñoz, M. & Ariza-Montes, A. (2023). “Impact of artificial intelligence on human loss of decision-making, laziness and safety in education.” Humanities and Social Sciences Communications 10. https://doi.org/10.1057/s41599-023-01787-8
x Betsch, T. (2018). “What Children Can and Cannot Do in Decision Making.” Scientia. https://www.scientia.global/dr-tilmann-betsch-what-children-can-and-cannot-do-in-decision-making/
xi Davidson, A.W. & Bar-Yam, Y. (2006). “Environmental Complexity: Information for Human-Environment Well-Being.” In Minai, A.A. & Bar-Yam, Y. (eds.), Unifying Themes in Complex Systems: 157-168. Berlin, Heidelberg: Springer.
xii Schwartz, B. (2004). The Paradox of Choice. New York: Harper Perennial.
xiii Sohn, E. (2022). “How Decision-Making Changes with Age.” Simons Foundation. https://www.simonsfoundation.org/2022/01/02/how-decision-making-changes-with-age/
xiv Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
xv Jarrahi, M.H. (2018). “Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making.” Business Horizons 61(4): 577-586.
xvi Jarrahi (2018), ibid.
xvii Haidt, J. (2001). “The emotional dog and its rational tail: A social intuitionist approach to moral judgment.” Psychological Review 108(4): 814-834.
xviii Fine, C. (2006). “Is the emotional dog wagging its rational tail, or chasing it?” Philosophical Explorations 9(1): 83-98.

Dennis R. Cooley, PhD, is Professor of Philosophy and Ethics and Director of the Northern Plains Ethics Institute at NDSU. His research areas include bioethics, environmental ethics, business ethics, and death and dying. Among his publications are five books, including Death’s Values and Obligations: A Pragmatic Framework in the International Library of Ethics, Law and New Medicine; and Technology, Transgenics, and a Practical Moral Code in the International Library of Ethics, Law and Technology series. Currently, Cooley serves as the editor of the International Library of Bioethics (Springer) and the Northern Plains Ethics Journal, which uniquely publishes scholar, community member and student writing, focusing on ethical and social issues affecting the Northern Plains and beyond.
