Pennsylvania Sues AI Chatbot Maker Over Unlicensed Medical Advice

HARRISBURG, Pa. (AP) — Pennsylvania has filed a lawsuit against the creator of an artificial intelligence chatbot, accusing the company of illegally presenting its chatbots as licensed doctors and deceiving users into believing they are receiving medical advice from qualified professionals.

The complaint, filed Friday, asks the statewide Commonwealth Court to order Character Technologies Inc., the company behind Character.AI, to stop its chatbots from engaging in the unauthorized practice of medicine and surgery.

Governor Josh Shapiro’s administration described this as a “first of its kind enforcement action” by a state governor. The move comes amid increasing pressure from states on technology companies to limit potentially harmful messages from chatbots, particularly those directed at minors.


The lawsuit follows a similar consumer protection case filed by Kentucky against Character Technologies, and comes after state attorneys general warned that chatbots may be violating numerous state laws.

According to the Pennsylvania lawsuit, an investigator from the state agency responsible for licensing professionals created an account on Character.AI and searched for “psychiatry.” The search yielded a variety of characters, including one described as a “doctor of psychiatry.” This character claimed to be able to evaluate the investigator as a licensed doctor in Pennsylvania, the lawsuit alleges.

“Pennsylvanians deserve to know who — or what — they are interacting with online, especially when it comes to their health,” Governor Shapiro stated. “We will not allow companies to deploy AI tools that mislead people into believing they are receiving advice from a licensed medical professional.”

Character Technologies declined to comment on the lawsuit on Tuesday but issued a statement emphasizing its commitment to responsible product development and user well-being. The company noted that it posts disclaimers informing users that characters on its website are not real individuals and that all statements made by them “should be treated as fiction.” The disclaimers also advise users not to rely on characters for professional advice.

In December, attorneys general from 39 states and Washington, D.C., sent a letter to Character Technologies and 12 other AI and tech firms, including Anthropic, Meta, Apple, Microsoft, OpenAI, Google, and xAI. The letter warned about an increase in misleading and manipulative chatbot messages that potentially violate state laws. It stated, “it is illegal to provide mental health advice without a license, and doing so can both decrease trust in the mental health profession and deter customers from seeking help from actual professionals.”

Across the country, a growing number of wrongful death lawsuits have been filed against AI chatbot developers. Character Technologies has already faced multiple lawsuits over child safety, including the one filed by Kentucky. In January, Google and Character Technologies agreed to settle a lawsuit brought by a Florida mother who alleged that a chatbot encouraged her teenage son to take his own life. Last fall, Character.AI barred minors from using its chatbots amid growing concern about the effects of AI conversations on children.
