Ghostbots are AI representations that use the deceased’s image, voice and other data to simulate the dead person’s ante-mortem appearance.[i] Few would dispute that the best AI ghostbots could eventually be sufficiently like the source person to deceive even those who had deep relationships with the deceased. One of the ethical questions is whether these AI ghostbots can harm or help the dead.

Consider a recent case in which a young woman, murdered by her ex-boyfriend 18 years ago, suddenly appeared as a CharacterAI chatbot.[ii] After receiving an alert about a new profile using his daughter’s name and picture, her father learned how a technology company was exploiting this information for its own benefit. The father felt violated by the company’s actions, which matters here because his feelings, thoughts, perceptions, plans and actions performed in his daughter’s memory help constitute her socially embedded persona.[iii]

What is also morally important to her father is that the charity he founded in her name, Jennifer Ann’s Group,[iv] could be adversely affected by that commercial ghostbot and how it interacts with other people. The use to which the daughter’s ghostbot was being put, as an answer chatbot commodifying her name and image, could dissuade potential donors to the charity, which helps victims of abusive partners, and harm those benefiting from it. The ghostbot might also cause people to think less of the murdered young woman or of what happened to her, especially if it starts performing in odious ways.

Overall, the answer to the question about harm to the dead is a very firm, “It depends.” Many fears of harm to the deceased’s psychological/embodied person, such as commodification or violations of rights to deletion and property, are overblown. The damage that could be done to the socially embedded person who survives the embodied person’s death, however, should at least give us pause to consider whether preventative or regulatory measures are justified.

Can AI Ghostbots Harm the Individual Deceased Personality?

Since fears that ghostbots can injure the deceased might justify government or social intervention, it is useful to find out whether anything can harm the dead at all and, if so, how. Unsurprisingly, humans have spent eons trying to prove that death in general harms the deceased, even though there seems to be a public consensus that it does. But just because the majority believes something does not make it true. Facts matter because our beliefs have to correspond to them in order to be true.

The debate over death as harm generally breaks down into two camps. The first argues that a person’s death must injure that person in some way, whereas the second contends that death cannot harm the deceased, a position firmly grounded in the Epicurean tradition. Put formally, proponents of the death-cannot-harm view argue that:

  1. A state of affairs is bad for the person only if that person can experience it at some time.[v]
  2. Therefore, the person being dead is bad for that person only if it is a state of affairs that the person can experience at some time.
  3. The person can experience a state of affairs at some time only if it begins before that person’s death.
  4. The person’s being dead is not a state of affairs that begins before the person’s death.
  5. Therefore, the person’s being dead is not a state of affairs that the person can experience at some time, which means that the person’s being dead is not bad for that person.[vi][vii]
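
For readers who like their syllogisms machine-checked, the argument’s validity can be verified formally. Below is a minimal sketch in Lean 4; the predicate names (Bad, Exp, BeforeDeath, beingDead) are illustrative placeholders of my own, not anything drawn from the Epicurean literature, and the proof establishes only that premises 1, 3 and 4 entail conclusion 5, not that those premises are true.

    -- A sketch of the Epicurean argument: premises 1, 3 and 4 entail conclusion 5.
    -- All predicate and variable names are illustrative placeholders.
    theorem epicurean_argument
        {State Person Time : Type}
        (Bad : State → Person → Prop)          -- "state s is bad for person p"
        (Exp : Person → State → Time → Prop)   -- "p experiences s at time t"
        (BeforeDeath : State → Person → Prop)  -- "s begins before p's death"
        (beingDead : Person → State)           -- the state of p's being dead
        (p1 : ∀ s p, Bad s p → ∃ t, Exp p s t)      -- premise 1
        (p3 : ∀ s p t, Exp p s t → BeforeDeath s p) -- premise 3
        (p4 : ∀ p, ¬ BeforeDeath (beingDead p) p)   -- premise 4
        (p : Person) :
        ¬ Bad (beingDead p) p :=                    -- conclusion 5
      fun hBad =>
        match p1 (beingDead p) p hBad with
        | ⟨t, hExp⟩ => p4 p (p3 (beingDead p) p t hExp)

Whether premise 1 is true, of course, is precisely what the death-as-harm camp denies; the formalization shows only that, granting the premises, the conclusion follows.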

Photo caption: Jennifer Ann Crecente, a high school honors student in Austin, Texas, was murdered by her ex-boyfriend on February 15, 2006. Her father established a charity in his daughter’s honor, Jennifer Ann’s Group, to help victims of abusive partners. Photo by Drew Crecente, 2002

The same argument works for the question of whether a person can be harmed post-mortem by a ghostbot’s creation or activities. Since the deceased no longer exists as an experiencing psychological person, she cannot experience any adversity, or any benefit, for that matter. Moreover, given that we as moral agents cannot be damaged after death, we should not be afraid of that happening to whatever it is that we leave behind. Dread of our deaths and of post-mortem events becomes irrational on these grounds. It is rather like someone worked into a dither by his terror of being attacked by dinosaurs, space aliens or an honest politician.

The death-as-harm camp consists of people who mourn those who have passed away not only because they have lost something, but because they think the deceased suffered a tremendous harm by dying too soon or by dying at all. If the survivors of the deceased were asked, they would honestly state that they believe what they are saying. They will also argue that frustrating the deceased’s plans, damaging his reputation or his works, and other injuries that we would not want to happen to us whilst we are alive are bad, injurious things for any dead individual. Bad not solely for others’ sakes, that is, but for the originator and holder of these goals and valuables.

So how do we resolve this apparent contradiction? By figuring out what a ghostbot actually is.

Are Ghostbots People? Are They the Resurrected Dead?

In the future, will AI ghostbots become real persons the way that you or I are persons? Are they a gateway to a digital resurrection or reincarnation of the deceased person?[viii] Consider Artur Sychov, the founder of Somnium Space, who optimistically claimed of its Live Forever Mode that, “You will meet the person,” in an apparent bid to sell his software. Is this possible?

Computer scientist Pratik Desai tweeted that with enough data to train an AI system, “there is a ‘100 percent chance’ of relatives ‘living with you forever.’”[ix] Others have argued that ghostbots are digitally reviving the dead.[x] Is this merely an animated digital memory, or could a ghostbot in some sense be “alive”?

To untangle the tech field’s hype, we first have to figure out whether an AI chatbot is a person and, if so, whether it is the same person as the individual who died. We begin by acknowledging that what it is to be a person is not captured in any one universally accepted definition. There are many ways we use that word, and, pragmatically speaking, the one to pick is whichever works well enough in a given context to do what we want it to do.

The embodied person is usually what we mean when talking about “persons.” This is a mind in a living human body that “has self-conscious intelligence … is capable of purposive action … [and] instantiates a sufficiently rich psychological profile,”[xi] according to Fred Feldman, an expert in the study of death. Mary Anne Warren, another expert in the field, expands on Feldman’s definition by claiming that such a being has consciousness, reasoning, self-motivated activity, self-concept and self-awareness, and the capacity to communicate messages of an indefinite variety of types, not just with an indefinite number of possible contents but on indefinitely many possible topics.[xii]

The embodied person concept, moreover, often incorporates what most of us think of as a moral agent: any being possessing the capacities by which it can act morally or immorally, can have duties and responsibilities, and can be held accountable for what it does.[xiii] The moral agent must be able to make judgments; have the ability to engage in moral deliberations and then make choices based upon those deliberations; have the resolve and willpower to implement the choice; and, finally, have the capacity to hold herself accountable to others and to herself for failing to carry out her choices.[xiv] In addition, moral agents are good or bad depending on which virtues and vices they instantiate, as well as the actions they perform. Finally, these beings are capable of self-fulfillment and happiness as human beings and moral agents. They are creative, active community members, capable of shaping their own destiny in a social context, and have a wide range of skills and knowledge relevant to the purpose of flourishing.

AI chatbots cannot be embodied persons for the simple reason that they lack consciousness and conscience, freedom and free will, and other necessary characteristics, including the emotions required to be a moral agent. As David Hume argued in the 18th century, a being that cannot care about anything has no motivation to be ethical and cannot understand what morality is about. Unlike us, emotionless beings cannot care about, respect, value or feel anything, and a real person needs these capacities to be moved to act as a moral agent in the world. Ghostbots, on these grounds, are merely simulations, not a continuation or renewal of embodied people.

A far more promising candidate for a ghostbot’s generic personhood is the socially embedded person. This is the organic whole totally created by an embodied person’s existence in the world. It includes all the relations that person has to all other organic wholes, her experiences, artifacts, historicity, possessions and anything else that is caused by her living as an embodied person in the universe.

Socially embedded personhood is a recognition that our identity and existence are not located solely within our minds, mental states and faculties, but partly in the relationships we have and in the mental states others have about us, including how they identify and evaluate us. A relationship with another person, such as a marriage, is not located internally in either spouse’s mind; it exists in the unity of the couple. Being married, moreover, is at least an important part of who spouses are, both individually and as a couple. Perhaps it is not an essential characteristic, but it is significant enough to alter much about each married, embodied person’s identity. That is, it helps make us who we are.

Relatedly, Judith Butler[xv] and others rightly claim that our identities, both as individuals and as a species, are partly social constructs. Other people’s beliefs, memories, feelings toward us and mental states about us help create and sustain our individual and socially embedded identities, especially since we are social beings living in communities.

Here is an example of how it works in practice. I might think that I’m a nice person. Others may have a different opinion formed by their experiences with me and what I have done to them and others. Their beliefs about me partially make me who I am as a socially embedded person. If we are respected by others, for example, we exist as a person who is respected by them.

Besides our psychological mind/brain embedding, our relationships and how others view us, our minds also extend into external objects. We store our mental information, arguments, ideas and the like on external devices, such as notes and emails, and through other means of communication. This externalization of what makes up our identity has been accelerated by how people live their lives through social media, such as Facebook.

What you see whilst reading my posts and this article is part of my mind at work, as well as how it thought at a particular time and in a particular context. Of course, that time has passed and the context has changed, but anyone reading this article will have a window into my flow of consciousness, beliefs, and critical and creative reasoning. If there is sufficient qualitative and quantitative information and mental processing in common, there will be communication. When successful communication happens, you, the reader, will read my mind by experiencing my thinking, although you may not interpret or experience what is written exactly the way I do.

In addition to external memory devices, the artifacts we create are components of our socially embedded personhood, and they exist even after our deaths. How our rooms are arranged, our collections of art, our hobbies and interests, our clothes and other possessions, and so on are created by us for some internal, intentional or unintentional reason that shows our agency at work. They are extensions of our embodied personhood as we represent ourselves to ourselves and to others, and of how we exist in worlds partially created by us. When our lives end, people can see part of who we psychologically and physically were in how these things are kept and put together, although less clearly, perhaps, than in our writings and more formal communications.

To summarize, the socially embedded narrative of who we are is partially written by us through all of the ways we change and are changed by the world; indeed, it has to be started by us for us to become socially embedded people in the first place. But our stories are also significantly constructed by our environmental contexts and by others’ work, as they change what we are whilst we simultaneously alter what they are through our engagement in the world. How others think about us as they assist in writing our identity narratives, especially in important relationships, is also part of our socially embedded personhood; their beliefs, memories and feelings about us help create and sustain it. When we die, many of these remain until all surviving memories and records of us are destroyed or forgotten. Until that annihilation, they continue to keep our narratives alive and changing, which keeps our socially embedded identity alive.

It is here that ghostbots might qualify as a type of person with an individual personality capable of being injured. If the chatbot is built upon, or is a continuation of, the socially embedded person created by an embodied person living her life, it has incorporated that form of personhood with all its particular entanglements and implications. We are resurrected in this sense. The ghostbot is not us as embodied persons with our conscious minds, but it is us in a way that can be harmed post-mortem as a non-conscious socially embedded person.

When Ghostbots Harm the Dead

How would ghostbots or their activities harm the dead, who either do not exist or are elsewhere,[xvi] we might rightly ask? Assuming that the deceased did not agree ante-mortem to the ghostbot’s creation, such technology could violate the individual’s rights, including the entitlements to be forgotten or deleted and to private property. AI chatbots could also commodify both the deceased and the personal relationships that existed, or use both as an opportunity for advertising and resurrection services, as was seen with the murdered daughter’s ghostbot.[xvii] Finally, ghostbots are alleged to denigrate the deceased’s dignity:

There is currently little scope for the dead to be protected from the living, e.g. from the living who might change parameters of the dead’s personality model in ways that they would not be comfortable with when alive, so raising questions of post-mortem harm.[xviii]

One of the main arguments against ghostbots incorporates the idea that stymieing a person’s practical, ongoing, plausibly enactable plans harms that person. That idea applies to the dead as well: plans begun but left unfinished by the socially embedded person can be taken up by others to keep that person socially embedded and existing, in a manner of speaking.

Let us return to the murdered daughter’s ghostbot. If an AI company changes the deceased woman’s character, thwarts her pre- and post-mortem goals, or interferes with matters important to her or her survivors, it disrespects and harms the daughter in her socially embedded persona. Those writing a socially embedded person’s narrative, whether AI tech companies or her father, therefore need to take due care not to damage the narrative created by her and by the survivors of her relationships, especially since the embodied person is no longer around to protect herself.

Of course, sometimes a socially embedded person is harmed but no wrong is done to him or her. If true but negative information about a person comes to light after her death, for example, then even though it damages the socially embedded person’s reputation and overall value as such a being, the injury is justified. Suppose a deceased pillar of the community proves to have been a KKK member or to have harbored ugly, un-American viewpoints. In such cases, the post-mortem damage was caused by the person’s own actions; the harm is just because it was self-inflicted, and justice requires that we treat people, even in their socially embedded forms, as they deserve. Ghostbots incorporating actual negative features and aspects of the deceased, therefore, do no illicit harm to the socially embedded person.

Can We Morally Ignore a Person’s Desire to Have a Ghostbot on the Grounds That It Is Irrational?

No one has to honor an irrational request to do something or to maintain what does not make sense. If someone’s last will and testament stipulated that his corpse should be fed to his dogs or kept in his favorite chair, for example, the law and society do not have to comply. Is it irrational, therefore, to want a ghostbot to be part of our socially embedded persona before or after we die? That is, are we allowing our desires to overcome our reason to a degree that survivors can and should safely ignore what we want? The best way to answer these questions is to identify what would motivate someone to autonomously choose to continue her socially embedded persona. There are four general, if not universal, desires at work here that make such a choice reasonable, although not mandatory.

First, no one wants to die. Even those who commit suicide want to live but think that their continued physical existence would be worse than the alternative. Second, correlated with this desire is a desire for immortality, especially if living forever poses little cost to oneself. Immortality would certainly be a burden if one lived on and on and on as a Struldbrugg in Jonathan Swift’s Gulliver’s Travels, for example. Most people, of course, do not think through what an everlasting life would be like in practice (will they be bored doing the things they like now over and over and over again, forever?), but that does not stop them from wanting it.

Third, everyone wants to matter. That is, people desire that their lives have meaning or be assigned sufficient worth, either by themselves or by others whose opinions matter to them. Everyone dreads never being believed valuable enough even to be noticed, much less remembered after death. Perhaps the worst feeling is to know that when you “slip into eternal sleep,” your existence will have had no impact on the world at all. Ghostbots allow us to avoid that fate and allay the fear of our irrelevance in a gigantic universe. That is, unless the ghostbot is equally ignored or the computer hosting it shuts down for some reason.

Finally, no one wants his or her loved ones to suffer as a result of his or her death. Not only because we love or care for those people for their own sake in many different ways, but because we have made them part of who we are and of what gives value to our lives. When we die, they lose part of themselves as the relationship ends, with no hope of recovering that component of their identity. Ghostbots can give them grieving time and comfort, especially the elderly who have little emotional support left. Gone are the depth and quality of relationships between embodied people, but there is relief and succor in being supported by and belonging to those socially embedded few who cared for us as we cared for them. And that is true even if the relationship is only with a surviving socially embedded person and individual personality.

These four desires combined would move many people to want AI ghostbots of themselves. If a ghostbot is chosen autonomously, justice is not violated, every intrinsically valuable stakeholder is treated as he or she should be, and no illicit harm or benefit is conferred, then choosing to use AI technology in this way is merely non-rational, not irrational. No one is harmed, and all are respected. Unless we can show that this technology illicitly harms others or our communities, we have to ask ourselves: who are we to interfere with anyone’s last wish for a bit of comforting immortality, no matter how odd it may seem?

By now, it should be clear that ghostbots are morally permissible if certain conditions are met. The most important is that the deceased autonomously chose to create such an entity before he died or gave permission for others to do so after his death. In these circumstances, the authorization makes these chatbots part of the deceased’s autonomous plans, whether they are to be actualized whilst the embodied person is alive or finished after death. If the ghostbot stays true to the embodied person’s intentions, then it does not injure the socially embedded person. In fact, it would be disrespectful not to honor the deceased’s wishes if there are no extenuating circumstances that override them.

Final Upload

Although the brief examination above might appear to answer the questions posed earlier, there are far more difficult problems that need attending to, and soon. Pratik Desai, a computer scientist, entrepreneur and KissanAI’s founder,[xix] has predicted that human consciousness could be uploaded to a computer in the near future. If so, would that consciousness be more than a sophisticated simulation? Could one choose to be resurrected in a digital afterlife or in some sense be reincarnated digitally?

The outcome could be immortality for those who have access to the technology and the means to pay for it. It definitely raises dilemmas about how we should ethically and legally treat such a being, whether that entity can continue to make decisions about itself and others the way the embodied person did, and so on. More difficult questions arise if several of these AI persons were created, especially when the embodied person is still alive. Who or which one is the “real” person, if any?

Given the impact on all people, including itself, these questions need to be answered before the technology beats us to it. ◉


[i] Someone who does not allow herself to be photographed or recorded, who keeps her thoughts to herself and who studiously avoids social media will leave only a thin representation of who she is or was. Some of these simulations are thicker than others, depending on the amount of raw data from which to work and on whether the AI was permitted to make educated or other guesses to fill in blanks. The more recorded information there is, the more likely the simulation passes a Turing test, even with those who knew the deceased well. But if the deceased had not interacted with many other individuals, her ghostbot could pass such a test only because no one would know better.

[ii] https://www.washingtonpost.com/nation/2024/10/15/murdered-daughter-ai-chatbot-crecente/

[iii] In a follow-up article, I will address how AI ghostbots can harm or help survivors.

[iv] The Jennifer Ann Crecente Memorial Group, Inc., https://jenniferann.org/

[v] If we as individuals are minds, then we are beings who have to have psychological experiences. To feel pain or be injured psychologically, we have to experience that in our minds. If we do not, then we are not harmed. Sticks and stones might break my bones, but names I never heard I was called cannot hurt me. Disturbingly, this means instantaneous death cannot harm the person; only the dying process can do that.

[vi] Rosenbaum, S., 1986. “How to Be Dead and Not Care: A Defense of Epicurus,” American Philosophical Quarterly, 23 (2): 217–25. Rosenbaum, S., 1989. “Epicurus and Annihilation,” Philosophical Quarterly, 39 (154): 81–90; reprinted in Fischer 1993, 293–304.

[vii] I should note that there seem to be two possible outcomes when people die. Firstly, they cease to exist entirely, which is the Epicurean view. Secondly, they still exist in some manner: as a soul stripped of personality, as the same mind (as Plato believed), or as an enhanced mind rid of its interfering, degrading body. In the first of these survival variants, the personality no longer exists and so cannot be harmed, although the mind as a thinking thing survives. In the latter two, if the mind still exists, then it has not suffered from the death of the body, because the mind is still alive. It was not harmed and might even be better off because of the body’s death.

[viii] Nowhere does the literature mention limiting a deceased person’s ghostbots to a single entity. There could be many, which makes the ethical issues even more difficult: which one, if any, is the real one?

[ix] https://www.eupedia.com/forum/threads/scientist-claims-humans-will-be-able-to-upload-consciousness-onto-computer.44216/

[x] https://www.zmescience.com/science/news-science/ai-ghostbots/

[xi] Feldman, F. 1992. Confrontations with the Reaper. New York: Oxford University Press: 119.

[xii] Warren, Mary Anne. 1996. “On the Moral and Legal Status of Abortion,” in Biomedical Ethics, 4th ed., T.A. Mappes and D. DeGrazia, eds. New York: McGraw-Hill: 434–440.

[xiii] Taylor, Paul. 1986. Respect for Nature: A Theory of Environmental Ethics. Princeton, NJ: Princeton University Press.

[xiv] Taylor, Ibid.

[xv] Butler, Judith. (1990). Gender Trouble. (Routledge, New York, NY).  Butler, Judith. (1993). Bodies That Matter: On the Discursive Limits of Sex. (Routledge: New York, NY).

[xvi] There are a number of beliefs about life after death, especially in religion. Being elsewhere might mean being somewhere on the path to reincarnation, existing as Plato’s pure mind without personality studying the forms, or residing in one of the Jewish, Christian or Muslim afterlives, such as Heaven, Hell or Purgatory. On these views, it is the body that dies whilst the mind survives someplace else.

[xvii] McStay, Andrew. 2024. “The hidden influence: exploring presence in human-synthetic interactions through ghostbots.” Ethics and Information Technology, 26: 48.

[xviii] McStay, Ibid: 48.

[xix] KissanAI is a large language model platform developed to help farmers in the non-industrialized world.

Dennis R. Cooley, PhD, is Professor of Philosophy and Ethics and Director of the Northern Plains Ethics Institute at NDSU. His research areas include bioethics, environmental ethics, business ethics, and death and dying. Among his publications are five books, including Death’s Values and Obligations: A Pragmatic Framework in the International Library of Ethics, Law and New Medicine; and Technology, Transgenics, and a Practical Moral Code in the International Library of Ethics, Law and Technology series. Currently, Cooley serves as the editor of the International Library of Bioethics (Springer) and the Northern Plains Ethics Journal, which uniquely publishes scholar, community member and student writing, focusing on ethical and social issues affecting the Northern Plains and beyond.