KEY POINTS

  • Experts say griefbots could take control away from the user
  • Fraudsters may misuse a dead person's identity
  • Griefbots could also foster 'para-social relationships'

The rise of generative artificial intelligence has produced a wave of novel applications. In China, some people have even tried to "resurrect" dead relatives through "griefbots" – AI-powered chatbots that simulate a conversation with someone who has died. But experts and analysts are voicing ethical and propriety concerns about the technology.

Debates about the ethical ramifications of "griefbots" kicked off after state-owned magazine Sixth Tone featured the story of Yu Jialin, a Chinese software engineer who used AI to "revive" his late grandfather.

In the April report, investigative journalist Tang Yucheng revealed how Yu fed an AI chatbot his late grandfather's text messages, pictures, videos and letters to create a digital likeness of him.
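Tang's report does not disclose the exact tools Yu used. As a rough illustration of the general pattern – conditioning a large language model on a person's saved writing so it answers in their voice – here is a minimal Python sketch. The OpenAI-style chat API, the model name and the grandfather_messages.txt file are all assumptions for illustration, not details from the report:

```python
# Illustrative sketch only: the report does not describe Yu's actual code
# or model. This shows the general "persona prompting" pattern: a chat
# model is given a deceased relative's saved messages as context.
# The file path, model name, and OpenAI-style API are assumptions.
from pathlib import Path

from openai import OpenAI  # assumes the `openai` Python package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Saved texts and letters, one message per line (hypothetical file).
corpus = Path("grandfather_messages.txt").read_text(encoding="utf-8")

system_prompt = (
    "You are simulating a specific person based on their own words below. "
    "Match their tone, vocabulary, and memories. If asked about something "
    "the texts do not cover, say you do not remember rather than inventing.\n\n"
    f"--- Saved messages and letters ---\n{corpus}"
)

def reply(user_message: str) -> str:
    """Return the simulated persona's answer to one chat message."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(reply("Grandpa, do you remember the garden behind the old house?"))
```

Everything in this pattern hinges on the corpus: where the saved messages are thin or ambiguous, the model has to improvise, which is one plausible source of the "just ridiculous" answers Yu would later describe.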

Yu's story is just one of several accounts about the use of AI technology in "resurrecting" the dead.

Griefbots feel like a "technological step up" from the methods psychologists already use to help grieving clients cope with their emotions following a loss, Sue Morris, director of bereavement services at the Dana-Farber Cancer Institute, told Insider. "It's natural for humans to change the way they mourn as technology evolves."

But she warns that griefbots could take control away from the user.

"Maybe 98% of the time, the program is going to say the appropriate thing, but what if it doesn't for a small percentage? Could that then send somebody into more of a downward spiral?" Morris asks, explaining that an unexpected trigger such as an insensitive response delivered at the wrong time could overwhelm a grieving person.

Haibing Lu, an information systems and analytics professor at Santa Clara University, said fraudsters could exploit a dead person's identity through such bots, which raises further serious ethical issues.

There's also the matter of consent, since a dead person cannot confirm whether they agreed to the use of their information after death.

"It doesn't mean that if a person has passed away, that other people have the right to disclose their personal privacy, even if it's to immediate family members," Lu argued.

Yu said he received approval from his grandmother to use the letters she and his late grandfather had exchanged. While the griefbot managed to produce some accurate responses, Yu acknowledged the technology was "limited," as some of the bot's answers were "just ridiculous" at times.

Julia Stoyanovich, a data scientist at New York University's department of computer science and engineering, told Tang that the unpredictability of a griefbot's responses could be partly due to the fragile data fed to it.

"The unpredictability of response implies that a griefbot is likely to hurt people," wrote Tang.

Yu has since deleted the griefbot. "I was afraid of relying on this bot too much. I was afraid that I would not be able to move on if I kept conversing with it. These emotions might have overwhelmed me too much to work and live my life," Yu told Tang.

This is not the first time griefbots have ignited propriety questions.

In 2018, Pamela Rutledge, director of the nonprofit Media Psychology Research Center, warned that griefbots could prevent people from moving on. She explained the technology could encourage "para-social relationships," in which one person invests a great deal of effort and emotional energy in someone who does not know the other exists.

The rise of generative AI has also led to the development of apps such as HereAfter AI, which promises to "preserve meaningful memories about your life." The platform lets users upload data about themselves to create a "Life Story Avatar" that can interact with their loved ones after their death.

HereAfter founder James Vlahos said active consent is mandatory, noting that the firm does not distribute or monetize recorded data "in any alternate way."

Australian grief recovery specialist Amanda Lambros told CNET that HereAfter AI was a "great initiative" but cautioned that it could hurt users who discover information the deceased never shared with them while alive.

Generative AI has given life to griefbots, chatbots that simulate responses from a deceased person based on data fed to them. Reuters