AI ethicists warn of “digital haunting” by dead relatives

The Internet is full of personal artifacts, many of which can remain online long after a person has died. But what if these relics are used to simulate deceased loved ones? It is already happening, and AI ethicists warn that this reality exposes us to a new kind of “digital haunting” by “deadbots.”

For thousands of years, people have attempted to speak with deceased loved ones through religious rites, spiritual mediums, and even pseudo-scientific technological approaches. But the continued rise of generative artificial intelligence offers grieving friends and family a whole new option: the ability to interact with chatbot avatars trained on a deceased person’s online presence and data, including their voice and visual likeness. While still explicitly advertised as digital approximations, some of the products offered by companies like Replika, HereAfter, and Persona can be (and in some cases already are being) used to simulate the dead.

And while it may be difficult for some to process this new reality, or even take it seriously, it’s important to remember that the “digital afterlife” industry isn’t limited to a niche market of smaller startups. Just last year, Amazon demonstrated the potential of its Alexa assistant to mimic the voice of a deceased loved one using just a short audio clip.

(Related: Watch a tech billionaire talk to his AI-generated clone.)

AI ethicists and science fiction writers have researched and anticipated situations like these for decades. But for researchers at the University of Cambridge’s Leverhulme Centre for the Future of Intelligence, this unregulated, unexplored “ethical minefield” already exists. To illustrate it, they imagined three fictional scenarios that could easily happen today.

In a new study published in Philosophy & Technology, AI ethicists Tomasz Hollanek and Katarzyna Nowaczyk-Basińska relied on a strategy called “design fiction.” First coined by science fiction author Bruce Sterling, design fiction refers to “a suspension of disbelief about change achieved through the use of diegetic prototypes.” In practice, researchers describe plausible scenarios alongside invented visual aids.

For their study, Hollanek and Nowaczyk-Basińska imagined three hyperreal scenarios in which fictional individuals run into problems with various “post-mortem presence” companies, then created digital props such as fake websites and phone screenshots. The researchers focused on three demographics: data donors, data recipients, and service interactors. “Data donors” are the people on whom an AI program is based, while “data recipients” are the companies or organizations that may possess that digital information. “Service interactors,” meanwhile, are the relatives, friends, and anyone else who might use a “deadbot” or “ghostbot.”

[Image: A fake Facebook ad for a fictitious “ghostbot” parenting service. Photo credit: Tomasz Hollanek]

In one design fiction, an adult user is impressed by the realism of his late grandparent’s chatbot, only to soon begin receiving advertisements for “premium trials” and food delivery services delivered in the style of his relative’s voice. In another, a terminally ill mother creates a deadbot for her eight-year-old son to help him grieve. But by adapting to the child’s responses, the AI begins to suggest in-person meetings, causing psychological harm.

In a final scenario, an elderly customer signs up for a 20-year subscription to an AI program in the hopes of comforting his family. However, because of the company’s terms of service, his children and grandchildren cannot suspend the service even if they do not want to use it.

“Rapid advances in generative AI mean that almost anyone with internet access and some basic knowledge can revive a deceased loved one,” said Nowaczyk-Basińska. “At the same time, a person can leave an AI simulation as a parting gift for loved ones who are not ready to process their grief in this way. The rights of both data donors and those who interact with AI services after death should be equally protected.”

(Related: A fake “Joe Biden” robocall urged voters to stay home during the primary election.)

“These services risk causing great distress to people if they are subjected to unwanted digital hauntings from alarmingly accurate AI recreations of their deceased loved ones,” Hollanek added. “The potential psychological impact, particularly at an already difficult time, could be devastating.”

The ethicists believe certain safeguards can, and should, be implemented as quickly as possible to prevent such outcomes. Companies must develop sensitive procedures for “retiring” an avatar and ensure transparency about how their services work through risk disclaimers. Meanwhile, “re-creation services” should be restricted to adult users only, while respecting the mutual consent of both data donors and data recipients.

“We now need to think about how we mitigate the social and psychological risks of digital immortality,” argues Nowaczyk-Basińska.


