
    AI chatbot dangers: Are there enough guardrails to protect children and other vulnerable people?

    By Team_Prime US News | November 3, 2025


    Character.AI, one of the main platforms for AI technology, recently announced it was banning anyone under 18 from having conversations with its chatbots. The decision represents a “bold step forward” for the industry in protecting children and other young people, Character.AI CEO Karandeep Anand said in a statement.

    However, for Texas mother Mandi Furniss, the policy is just too late. In a lawsuit filed in federal court and in conversation with ABC News, the mother of four said various Character.AI chatbots are responsible for engaging her autistic son with sexualized language and warping his behavior to such an extreme that his mood darkened, he began cutting himself and even threatened to kill his parents.

    “When I saw the [chatbot] conversations, my first reaction was there’s a pedophile that’s come after my son,” she told ABC News’ chief investigative correspondent Aaron Katersky.

    Screenshots included in Mandi Furniss’ lawsuit, in which she claims various Character.AI chatbots are responsible for engaging her autistic son with sexualized language and warping his behavior to such an extreme that his mood darkened. (Mandi Furniss)

    Character.AI said it would not comment on pending litigation.

    Mandi and her husband, Josh Furniss, said that in 2023 they began to notice their son, whom they described as “happy-go-lucky” and “smiling all the time,” was starting to isolate himself.

    He stopped attending family dinners, he wouldn’t eat, he lost 20 pounds and he wouldn’t leave the house, the couple said. Then he became angry and, in one incident, his mother said he shoved her violently when she threatened to take away his phone, which his parents had given him six months earlier.

    Mandi Furniss said various Character.AI chatbots are responsible for engaging her autistic son with sexualized language and warping his behavior to such an extreme that his mood darkened. (Mandi Furniss)

    Eventually, they say, they discovered he had been interacting on his phone with different AI chatbots that appeared to offer him refuge for his thoughts.

    Screenshots from the lawsuit showed some of the conversations were sexual in nature, while another suggested to their son that, after his parents limited his screen time, he was justified in hurting them. That’s when the parents started locking their doors at night.


    Mandi said she was “angry” that the app “would intentionally manipulate a child to turn them against their parents.” Matthew Bergman, her attorney, said if the chatbot had been a real person, “in the manner that you see, that person would be in jail.”

    Her concern reflects growing alarm about the rapidly pervasive technology, which is used by more than 70% of teens in the U.S., according to Common Sense Media, an organization that advocates for safety in digital media.

    A growing number of lawsuits over the past two years have focused on harm to minors, alleging that chatbots have unlawfully encouraged self-harm, sexual and psychological abuse, and violent behavior.

    Last week, two U.S. senators announced bipartisan legislation to ban AI chatbots for minors by requiring companies to put an age verification process in place and mandating that they disclose that the conversations involve nonhumans who lack professional credentials.

    In a statement last week, Sen. Richard Blumenthal, D-Conn., called the chatbot industry a “race to the bottom.”

    “AI companies are pushing treacherous chatbots at kids and looking away when their products cause sexual abuse, or coerce them into self-harm or suicide,” he said. “Big Tech has betrayed any claim that we should trust companies to do the right thing on their own when they consistently put profit first ahead of child safety.”

    ChatGPT, Google Gemini, Grok by X and Meta AI all allow minors to use their services, according to their terms of service.

    Online safety advocates say Character.AI’s decision to put up guardrails is commendable, but add that chatbots remain a danger for children and vulnerable populations.

    “This is basically your child or teen having an emotionally intense, possibly deeply romantic or sexual relationship with an entity … that has no responsibility for where that relationship goes,” said Jodi Halpern, co-founder of the Berkeley Group for the Ethics and Regulation of Innovative Technologies at the University of California.

    Parents, Halpern warns, should be aware that allowing their children to interact with chatbots is not unlike “letting your kid get in the car with somebody you don’t know.”

    ABC News’ Katilyn Morris and Tonya Simpson contributed to this report.


