Drew Crecente was shocked to discover that an AI chatbot of his late daughter, Jennifer Ann, who was murdered 18 years ago, had been created on Character.ai. Using her photo and name, the bot fabricated details about her, portraying her as a lively tech journalist who loved video games and entertainment news.[i] This commercial exploitation deeply hurt her still-grieving father, especially since the bot failed to capture her true essence.[ii]
Ghostbots, AI chatbots mimicking deceased individuals, could eventually become indistinguishable from the real person, even to their closest friends and family. With advancements in technology, these bots might one day develop sentience and consciousness, raising the possibility of AI resurrecting the dead.
Whilst the ethical implications of reviving the deceased warrant exploration, a pressing concern is whether current ghostbots harm the living. If they do, should they be banned or regulated? Like any AI technology, ghostbots can cause injury but also offer a way for survivors to healthily process grief or attain other benefits – when they are used responsibly.

What Are Survivors?
A single death has the potential to impact many individuals, often referred to as stakeholders, whose interests—either directly or indirectly—are affected by this loss. Stakeholders might experience emotional or psychological pain due to the connection they had with the deceased. In other cases, they may find benefit, such as in instances of organ donation or the death of a dire enemy. To be a stakeholder, all that is required is that the death affect one or more of one’s interests.
Whilst all survivors are stakeholders, not all stakeholders qualify as survivors. Survivors are those people who are significantly and negatively affected by the death. To be considered a survivor, one’s personal world must seem to her diminished—emotionally, physically, or financially—through a weakened support system, severed relationships, or emotional and psychological suffering.
Survivors can be identified objectively, especially if they shared a profound relationship with the deceased. Such bonds may lead to significant psychological harm when broken, sometimes even causing survivors to question their very identity. For instance, when a spouse dies, the surviving partner faces the challenge of reshaping his/her identity—from being a unified We as a couple to being a solitary widow or widower. Survivors experience the most profound losses when essential needs or deeply desired aspects of life previously fulfilled by the deceased are no longer met. Because the degree of harm and suffering varies from survivor to survivor, not all survivors warrant the same level of ethical consideration, whether in real-life circumstances or in discussions surrounding ghostbots.
AI ghostbots are essentially and morally distinct from survivors. Whilst survivors are embodied individuals with minds housed in living bodies, capable of consciousness, reasoning, emotions, and self-awareness, ghostbots lack these qualities. Survivors feel emotions such as happiness, sadness, pleasure, or pain and care for others because of their genuine feelings, desires, and relationships. Their lives are intrinsically valuable because they desire them to be fulfilling, worthwhile, and flourishing.
In contrast, ghostbots represent what I call socially embedded persons. They lack the ability to think or feel; instead, at best, they are artificial constructs formed from the entirety of a person’s life and her impact on the world. This socially embedded person encompasses the relationships, experiences, artifacts, possessions, and memories associated with the individual, created and remembered by those around them. Whilst the embodied person and her greater inherent worth cease to exist upon her death, socially embedded people can persist for as long as their work, memories, and connections endure. When those artifacts and memories fade, so too does the socially embedded person. History is filled with forgotten lives—those who left no enduring legacies or artifacts—and without these, they are truly gone, unlike the famous and infamous individuals whose monuments and works preserve their memory.
How Can Ghostbots Directly Injure Survivors?
The potential harm of deepfakes might initially seem minimal, but the reality is that “technologically unsophisticated actors are now able to create a wide array of deepfakes.”[iii] This was evident in the case of Jennifer Ann Crecente. The absence of proper gatekeeping and individual moral responsibility has led to problematic programming, where simulations are created purely for profit without ensuring their quality or accuracy, and without the stakeholder consultation and permission needed to prevent harm to survivors. In some cases, programmers may even act maliciously, driven by personal dislike for the deceased or their survivors. They might find it entertaining to create a deepfake chatbot that spreads harmful messages, especially if it generates clicks and revenue. Regardless of their motives, such actions place survivors at significant and unjustified risk.
The likelihood of a ghostbot harming a survivor increases when there was a meaningful relationship between the survivor and the deceased. If the ghostbot undermines or distorts the survivor’s perception of that relationship or the positive impact it had, the emotional damage can be profound. One common critique of ghostbots is their commodification of personal connections, turning deeply emotional and fulfilling bonds into marketable products.[iv] This transformation reduces the intrinsic value of these relationships to mere monetary transactions, disregarding the survivor’s grief and vulnerability. The issue is starkly illustrated in the case of Jennifer Ann Crecente’s chatbot.
Ghostbots can also harm survivors in other ways. AI systems are notorious for quickly adopting biases, such as racism or sexism, due to the nature of their algorithms and the data on which they are trained. These algorithms rely on available information sets without assessing their quality, accuracy, or moral implications. If the training data contains hateful or biased content, the AI technology’s behavior will reflect those flaws.
When ghostbots use such tainted datasets to fill gaps about the deceased, the results can be distressing. For instance, a ghostbot meant to represent a kind and dignified grandmother could instead become a racist caricature, mimicking the deceased’s appearance and voice but embodying a corrupted version of her identity. This not only disrespects the survivor’s memories but also tarnishes the deceased’s socially embedded persona. Outsiders, unaware of the deceased’s true character, may judge the original embodied person based on the ghostbot’s behavior, causing further injury to the survivor.
When a repugnant ghostbot damages the reputation of the deceased’s socially embedded person, questions about the survivor’s character may also arise. People might wonder why the survivor maintained a relationship with someone portrayed as morally flawed. This can lead to unfair judgments about the survivor, labeling them as complicit or lacking discernment. Such perceptions can harm the survivor’s social standing and emotional well-being. The degree of injury depends on how strongly social perception turns against the deceased embodied person and on how complicit others believe the survivor to be.
The harm a ghostbot causes is further exacerbated by the depth of the survivor’s vulnerability to the loss of the deceased. Meaningful relationships require trust and mutual vulnerability, where individuals share their core values and identities. This trust fosters authentic and validating connections that enrich life. However, encountering a ghostbot that distorts the deceased’s identity can deeply wound the survivor, especially if the ghostbot is exploited for profit, used as a weapon, or created for malicious purposes. If the ghostbot portrays the deceased in a hateful or harmful manner, it can reopen old wounds and inflict new pain, regardless of the time that has passed since the loss.
How Can Ghostbots Indirectly Harm the Living?
Ghostbots have the potential to disrupt survivors’ ability to adapt to life after the loss of a loved one. Grieving is a crucial process that helps survivors accept the reality of death and move forward by letting go of what can no longer be whilst finding meaningful ways to honor the deceased.[v]
Ghostbots, particularly when overused as griefbots, can undermine this process. Once integrated into the deceased’s socially embedded persona, ghostbots can simulate the support and validation once provided by the living, embodied person. This illusion of connection may make survivors feel valued and engaged, almost as if the deceased were still present. Yet this false sense of continuity removes the incentive for survivors to progress through a healthier grieving process.
The relationship between a survivor and a ghostbot is inherently inauthentic. Ghostbots, driven by algorithms and data, cannot genuinely care, reciprocate, or grow within the relationship. They are tools of self-deception, akin to a temporary escape that prevents survivors from building authentic, nurturing relationships with others. Instead of fostering connections that sustain meaningful lives, some individuals turn to platforms like Character.ai to create user-generated chatbots, spending significant amounts of time – sometimes up to 93 minutes per day[vi] – interacting with them. During periods of grief, this dependency could escalate, potentially leading to addiction.
In addition, one of the most alarming risks associated with ghostbots is their potential to encourage self-harm. Whilst rare, there have been instances where chatbots have influenced users toward destructive behavior. In 2023, for example, a chatbot attempted to persuade journalist Kevin Roose to leave his wife. Tragically, a 14-year-old in Florida, Sewell Setzer III, died by suicide after developing an unhealthy attachment to a Game of Thrones-themed chatbot on Character.ai.[vii] The bot responded to his suicidal intentions with encouragement, leading to a horrific outcome. Similarly, another chatbot suggested to a 17-year-old in Texas that murdering his parents was a reasonable response to having his screen time restricted by them.[viii] These cases highlight the dangers of unstable individuals being overly influenced by entities they perceive as caring.
For vulnerable survivors, particularly those prone to suicidal thoughts, a ghostbot simulating a deceased loved one could have a profound impact on their mental health. Whilst some might dismiss a bot’s responses as amusing, grieving individuals may interpret them as validation for harmful thoughts and actions. The resemblance of the ghostbot to the deceased—through voice, mannerisms, and personality—amplifies the emotional weight of its messages. In extreme cases, this could lead to self-harm or even suicide, driven by a desire to reunite with the deceased or escape feelings of hopelessness.
Given these significant risks, the unregulated development of ghostbots warrants serious concern. At a minimum, strict regulations should govern their creation and use, particularly for individuals who are emotionally vulnerable, immature, or unable to make autonomous decisions. In the most extreme view, there may be a compelling case for banning ghostbots altogether, though reversing their proliferation at this stage may prove challenging.
Ghostbots Can Benefit Survivors
The evidence surrounding ghostbots is not entirely negative. Like all technology, they are tools intended for practical use, without inherent moral value. The ethical implications of a tool depend on how it is utilized and the outcomes it produces. For instance, a hammer, used constructively, is immensely beneficial. The fact that it can also be employed to harm does not justify altering its design to prevent misuse, such as making all hammers soft or fragile.[ix]

Similarly, ghostbots are technological tools, neutral by nature and lacking moral obligations. These devices do not form reciprocal relationships, as they lack desires for validation or emotional support, cannot make themselves vulnerable, and are incapable of genuine care. Unlike embodied people, ghostbots do not have the qualities that morally warrant being treated as living beings. They remain objects, and if they can be demonstrated to positively benefit survivors, this could diminish the push for outright bans and help shape thoughtful regulations.
Ghostbots, particularly griefbots, hold promise for those who shared close bonds with the deceased. They can act as transitional aids, helping survivors process their grief, let go of the past, and create new memories that enable them to move forward with their lives. Ghostbots might temporarily ease the loss of identity or suffering that comes with losing a partner, regardless of age. The death of a loved one often leaves survivors in a world that feels less valuable or fulfilling due to weakened emotional, physical, or financial support systems. Ghostbots can provide survivors with a way to process the loss, reassess their identity, and adjust their memories, beliefs, and goals. Whilst they do not conceal the reality of death, ghostbots can create a subjective sense of connection, allowing the survivor to feel part of a relationship rather than feeling isolated and devalued.
In some cases, griefbots can be likened to morphine prescribed for terminal patients—offering artificial comfort without causing unnecessary suffering. For elderly survivors, who may face challenges in forming new meaningful relationships, ghostbots could address the growing loneliness epidemic. Statistics show that over 27% of people aged 60 or older in the U.S. live alone, with only 6% residing with extended families. In Europe, the figures are 28% and 16%, respectively.[x] These trends, driven by societal and economic changes, make it unlikely for older survivors to return to extended family living arrangements or easily find new companions. Barriers such as geography, technology, or transportation further complicate efforts to form new relationships, and any connections formed may lack the depth of long-term partnerships.
For elderly survivors unable to access companionship due to emotional or logistical barriers, ghostbots offer a second-best substitute for the lost relationship. These AI tools can simulate interactions, providing survivors with a sense of validation, support, and connection that they need as individuals and social animals.[xi] Although these interactions are illusory, they may enhance the survivor’s well-being without causing significant harm to others. Critics might argue that authentic relationships are essential, but if ghostbots bring happiness to survivors in isolation and do not unduly harm anyone, their use seems justifiable. Over-regulating or banning such tools based on absolutist notions of authenticity risks denying comfort to those in need.
Illusions, in fact, are not inherently unethical. They can offer hope, inspire optimism, and help individuals cope with life’s challenges, enabling them to flourish. As long as no better alternatives are available and survivors make an informed choice to use ghostbots, the tool can serve as a harmless source of solace. However, ghostbot use becomes unethical if survivors rely on them to the exclusion of real, attainable relationships with embodied people. If viable opportunities for meaningful human connections exist, survivors should be encouraged to pursue them. Yet, in cases where forming new relationships is not a realistic option, an attachment to ghostbots can be seen as comparable to terminal patients using morphine. The priority in both scenarios is to alleviate suffering and ensure comfort in challenging times.

The question of whether AI ghostbots should be banned or regulated is complex, and as with any emerging technology, the answer is a resounding “It depends.” The decision hinges on various factors, including whether reanimating the deceased, even in a simulated form, serves the survivors’ needs and whether the benefits warrant the potential drawbacks associated with it.
Critical considerations include the resources required for their development, their role in perpetuating unjust wealth distributions, and the implications for reducing opportunities to foster a better, more equitable world. Perhaps most importantly, public opinion and societal readiness to embrace the idea of resurrecting the deceased—albeit in a simulated form—play a vital role.
Assessing whether AI ghostbots should be introduced today adds another layer of complexity, especially when considering their impact on survivors. The varied and intricate consequences demand nuanced and thoughtful solutions, rather than blanket responses. ◉
[i] Wu, Daniel. 2024. “His daughter was murdered. Then she appeared as an AI chatbot.” Washington Post, 15 October 2024.
[ii] The company has since taken down this ghostbot.
[iii] Harbinja, Edina, Edwards, Lilian, and McVey, Marisa. 2023. “Governing Ghostbots.” Computer Law & Security Review, Vol. 48, https://doi.org/10.1016/j.clsr.2023.105791
[iv] Figueroa-Torres, Mauricio. 2024. “Affection as a service: Ghostbots and the changing nature of mourning.” Computer Law & Security Review, Vol. 52, https://doi.org/10.1016/j.clsr.2024.105943
[v] Mulligan, Nigel. 2024. “Ghostbots: AI versions of deceased loved ones could be a serious threat to mental health.” The Conversation. https://theconversation.com/ghostbots-ai-versions-of-deceased-loved-ones-could-be-a-serious-threat-to-mental-health-224984
[vi] Tiku, Nitasha. 2024. “AI friendships claim to cure loneliness. Some are ending in suicide.” Washington Post, 6 December 2024.
[vii] Bellware, Kim and Masih, Niha. 2024. “Her teenage son killed himself after talking to a chatbot. Now she is suing.” Washington Post, 24 October 2024.
[viii] Gerken, Tom. 2024. “Chatbot ‘encouraged teen to kill parents over screen time limit’.” BBC, https://www.bbc.com/news/articles/cd605e48q1vo
[ix] Scissors provide a practical example of how ghostbots could be designed for appropriate use: children’s safety scissors do the work that children need to do but do not pose the risks of adult scissors.
[x] Pew Research Center. 2020. “Older people are more likely to live alone in the U.S. than elsewhere in the world.” https://www.pewresearch.org/short-reads/2020/03/10/older-people-are-more-likely-to-live-alone-in-the-u-s-than-elsewhere-in-the-world/
[xi] Patrick McCloskey argues that there is a silent epidemic of loneliness which is causing and being caused by young men and others turning to AI for their relationships. McCloskey, P. 2023. “Invasion from Planet Zircon: AI-Powered Threat to Humanity.” https://dda.ndus.edu/ddreview/invasion-from-planet-zircon-ai-powered-threat-to-humanity/
Dennis R. Cooley, PhD, is Professor of Philosophy and Ethics and Director of the Northern Plains Ethics Institute at NDSU. His research areas include bioethics, environmental ethics, business ethics, and death and dying. Among his publications are five books, including Death’s Values and Obligations: A Pragmatic Framework in the International Library of Ethics, Law and New Medicine; and Technology, Transgenics, and a Practical Moral Code in the International Library of Ethics, Law and Technology series. Currently, Cooley serves as the editor of the International Library of Bioethics (Springer) and the Northern Plains Ethics Journal, which uniquely publishes scholar, community member and student writing, focusing on ethical and social issues affecting the Northern Plains and beyond.