Positive Psychology Founder Lives Forever as an AI Bot?

An idea from science fiction (immortality through computers) may have become reality. This is an important article for computational social scientists. Please read it.

I briefly consulted for a company about their mental health bot. Looking at the simplistic unsupervised learning (which is all too typical when companies use generative AI like ChatGPT), I expressed skepticism that it could provide useful help and warned that it could be quite damaging. After reading the article, I am not sure what to believe. Is the AI programmed in a way that really limits the risk of doing damage, or is it another case of an error-prone talking dog?

I have searched for articles from the group that created “Ask Martin”. My difficulty may be greatly affected by the group being based in China (Tsinghua University), with their work primarily findable on a search engine based in China, and only via searches in Chinese. I found only two articles by Yukun Zhao on Google Scholar having to do with AI, and none on Semantic Scholar. With a hint from a little birdie, I found three more articles by Martin Seligman and a colleague, Abigail Blyler.

After reading those articles, which show non-data scientists learning to apply mostly user-friendly tools and ChatGPT to do their AI research, and given the hint from the little birdie, I believe Ask Martin was created using ChatGPT. I am sure they could have created a fine-tuned ChatGPT model relatively simply, by feeding in paragraphs of text from Martin Seligman’s writings, producing a talking dog (but a dog that talks like Martin). However, the inherent limitations of ChatGPT (it can only attend to roughly 3,000 to 4,000 words at a time) and its unsupervised learning model mean that actually replicating what a trained human therapist does with a long-term client, who like all humans has a complex history not describable in a few thousand words, is not feasible with Ask Martin.
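To make that context limit concrete, here is a minimal sketch (in Python, with an illustrative word budget standing in for the rough 3,000–4,000-word limit described above, not an exact figure) of the sliding-window truncation chat systems perform: everything older than the last few thousand words simply falls out of the model's view.

```python
# Minimal sketch of sliding-window context truncation.
# The 3,500-word budget is an illustrative stand-in, not an exact figure.

CONTEXT_BUDGET_WORDS = 3500

def visible_history(turns, budget=CONTEXT_BUDGET_WORDS):
    """Return the most recent turns whose total word count fits the budget.

    Anything earlier is silently dropped -- the model never sees it,
    which is why a client's long, complex history cannot inform replies.
    """
    kept = []
    words_used = 0
    for turn in reversed(turns):          # walk from newest to oldest
        n = len(turn.split())
        if words_used + n > budget:
            break
        kept.append(turn)
        words_used += n
    return list(reversed(kept))           # restore chronological order

# A year of weekly sessions, each summarized in ~500 words:
sessions = [f"session {i}: " + "word " * 500 for i in range(52)]
seen = visible_history(sessions)
print(len(seen))   # only the most recent handful of sessions fit
```

Run on a year of 500-word session summaries, only the last six or so survive truncation; the other forty-plus sessions are invisible to the model, no matter how clinically important they were.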

Also, it is important to note that Martin Seligman has never been a practicing therapist or clinical psychologist. He is a researcher. So the words that trained Ask Martin are not transcriptions of in-depth, long-term conversations between Martin and patients. Using those words to conduct serious mental health therapy should be considered unethical and, in the countries that regulate such things, malpractice.

So, TL;DR: I sincerely doubt that AI as typically practiced, even when trained on a famous psychologist’s published work, is up to the high-quality work that humans really need. You can make a bot sound like a particular person when reading only snippets, but you cannot make it think like an expert who has the complex task of healing mental health patients.

Also, the above-referenced Politico article mentions a second AI bot imitating a famous person in psychology: Esther Perel, who is a therapist. I did find an article by the bot’s creator about the bot, and his résumé on LinkedIn.

This person did some very simple programming, and/or hired others to do it, to generate a ChatGPT-based bot. So it has the same basic flaws and limitations as the Ask Martin bot and is not recommended for actual in-depth, long-term therapy for a patient with non-trivial mental health issues. The data used for training are not full therapy sessions, let alone a multi-month or multi-year history of sessions.
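As a sketch of just how simple such a bot can be (this is my guess at the common pattern, not the creator's actual code; the excerpt text and function name are placeholders), a persona "bot" is often little more than a system prompt stuffed with public transcript excerpts, prepended to every user message before it is sent to a chat-completion API:

```python
# Hypothetical sketch of a persona bot's prompt assembly -- my guess at
# the general pattern, not the actual bot's code. Excerpts are placeholders.

PERSONA_EXCERPTS = [
    "Excerpt 1 from a public talk ...",
    "Excerpt 2 from a podcast transcript ...",
]

def build_prompt(user_message, excerpts=PERSONA_EXCERPTS):
    """Assemble the message list a chat-completion API would receive.

    Note what is missing: no session history, no clinical training data,
    no audio -- just text snippets and an instruction to mimic a style.
    """
    style_sample = "\n\n".join(excerpts)
    return [
        {"role": "system",
         "content": "Answer in the style of the speaker of these "
                    "excerpts:\n\n" + style_sample},
        {"role": "user", "content": user_message},
    ]

messages = build_prompt("My partner and I keep having the same fight.")
print(messages[0]["role"])   # 'system'
```

Everything the bot "knows" about the persona sits in that one system string; the model itself is unchanged, which is why the result mimics a speaking style without any of the underlying expertise.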

Not only that, but because the bot’s “creator” is not an actual data scientist or even a software engineer, he used only text as input, ignoring the extremely rich information embedded in the sound of voices telling deeply painful and nuanced short stories. So virtually none of the intimacy of those dialogues was captured in the bot’s training.

The bot creator’s primary goal was to make output that reads like transcripts of Esther Perel’s one-directional speeches to large audiences (not conversations with a single, complex individual). The bot’s author appears to be yet another startup entrepreneur who has a shallow understanding of what complex businesses do but recognizes a hot trend that people will follow, support, and invest in.

Job postings for generative AI work are overflowing with projects for such entrepreneurs and large businesses alike. The hype around generative AI is IMMENSE. The reality of its usefulness in solving significant problems is very disappointing.

In closing, please watch Esther Perel’s talk on “the other AI”: artificial intimacy. It is the best talk I have seen on the intersection of technology and psychology in the decade since MIT’s Sherry Turkle gave her TED Talk “Connected, but alone?”.
