Chatbots

From the arrival of the first commercial computers in 1951, one of the first things humans wanted to do was converse with them. Alan Turing, a British founder of computer science who, rare for the era, was openly gay, devised the first test of machine intelligence: a question-and-answer game that became known as the Turing Test.

The original version of the game involved three people: an interrogator, a woman, and a man posing as a woman. The interrogator's goal was to guess which participant was the man. Turing then proposed replacing the man with a computer to see whether it could mimic the man pretending to be a woman just as convincingly. Today's chatbots are still playing the same game, guessing at the details of whoever they interact with and copying their tone.

One of the biggest AI projects of recent years was IBM's Watson. Despite years of effort, Watson never fully got off the ground, but its first target market was medicine. The goal was to use Watson as a diagnostic tool, helping doctors diagnose patients faster and more comprehensively. MD Anderson alone spent $62 million on Watson, only to find that doctors and Watson were usually “talking past each other.”

Watson won Jeopardy! in 2011, but it failed as a predictive tool for oncologists. The same pattern holds for AI like ChatGPT today: it is strong at collecting massive amounts of data and parroting linguistic patterns, yet it still lacks predictive skill and human nuance. Even so, we remain eager to find uses for AI in healthcare beyond transcribing medical notes, which is why we are seeing so much AI in patient interaction rather than in real medical advances.


AI is better at predicting which word comes next than at diagnosing a patient, which makes it well suited to chatbot assistance. As in Turing's test, developers take advantage of AI's linguistic strength by making their interfaces as human as possible. Frequently, this means giving the chatbot a gendered identity.

The default AI voice is often female, like Alexa's and Google Maps', which makes chatbots sound more subservient. Last month actress Scarlett Johansson took legal action against OpenAI over “Sky,” a ChatGPT voice that sounded identical to hers, after she twice refused offers to voice the chat application herself. You may recall Johansson voiced an AI romantic partner in the movie Her. The rollout of “Sky” is paused until the situation is resolved.

A current telehealth chatbot named Cass advertises that it is “powered by an artificial intelligence engine that matches thousands of clinician created responses across millions of interactions, Cass provides personally optimized mental health coaching through real-time text messaging. When needed, Cass connects to counselors with one tap, enabling a seamless connection and collaborate with experts standing by.”

Many non-profits have implemented chatbots to answer telehealth helplines. The Suicide Hotline (now the national hotline number) came under fire for selling chatbot conversations with people calling for help to tech companies, which used them to improve AI's conversational skills and to identify callers for marketing. An eating disorder hotline chatbot named Tessa drew similar criticism for dispensing outdated advice such as, “try eating healthy snacks instead of a bag of chips. Do you think you can do that?”

Today, The Trevor Project, an LGBTQ+ non-profit, uses a chatbot named Riley to train its counselors before they interface with callers. Riley is a 16-year-old from South Carolina who is anxious about coming out to family members, but Riley doesn't talk directly to hotline callers. The Trevor Project is partnered with the national Suicide Hotline to specialize in LGBTQ+ and youth counseling.


Currently, The AIDS Foundation is rolling out a chatbot that will be the first to interact with hotline callers without human supervision, making it the first case study of whether AI can be trusted to help people seeking health advice. The chatbot is trained by drag queens and draws on interviews with RuPaul to inform its language. During development, it named itself Glitter Byte.

In test runs, 80% of users preferred Glitter Byte to the alternatives. It responded to a teen struggling with sexual issues with, “oh honey, while I'm here to strut the runway of health and wellness information, I'm not equipped to dive deep into emotional oceans.” In another case, when a user jokingly asked if it was a problem that it was raining men, the chatbot responded, “honey if it's raining men, grab your most fabulous umbrella.” But Glitter Byte can make mistakes. In one instance, it responded “congratulations” regarding a pregnancy before knowing whether the pregnancy was wanted.

Supporters maintain that AI can reach thousands more people with helpful suggestions, and in more intimate spaces, such as the moments before having sex. It can also lift administrative burdens such as delivering test results to patients. Glitter Byte is a playful, approachable voice of acceptance that encourages a conversational tone and trust. The AIDS Foundation hotline is bound by HIPAA in how it collects user data and interactions; mental health hotlines are not subject to the same protection. Even if conversations are sold to third parties with the user's name stripped out, it is easy for a buyer to re-identify users by linking their social media (Glitter Byte is integrated into Facebook Messenger), location, and phone purchases.

These chatbots inhabit an intersection of gender identity, technology, mental health, and physical health. Alan Turing and Watson show us that “the machine makes no pretense about natural gender or authentic intelligence. It's a show. And if the show tricks the judge, she wins” (Slate). Glitter Byte may seem advanced, but it is not a far leap from Turing's rudimentary gender-bending test in the first days of computer science, and there is still a way to go before it changes healthcare.