
    Character.ai to ban teens from talking to its AI chatbots

By Team_Prime US News · October 29, 2025 · 4 Mins Read


Chatbot website Character.ai is cutting off teenagers from having conversations with its virtual characters, after facing intense criticism over the kinds of interactions young people were having with online companions.

The platform, founded in 2021, is used by millions to talk to chatbots powered by artificial intelligence (AI).

But it is facing several lawsuits in the US from parents, including one over the death of a teenager, with some branding it a "clear and present danger" to young people.

Now, Character.ai says that from 25 November under-18s will only be able to generate content such as videos with their characters, rather than talk to them as they can currently.

Online safety campaigners have welcomed the move but said the feature should never have been available to children in the first place.

Character.ai said it was making the changes after "reports and feedback from regulators, safety experts, and parents", which have highlighted concerns about its chatbots' interactions with teens.

Experts have previously warned that the potential for AI chatbots to make things up, be overly encouraging, and feign empathy can pose risks to young and vulnerable people.

"Today's announcement is a continuation of our fundamental belief that we need to keep building the safest AI platform on the planet for entertainment purposes," Character.ai boss Karandeep Anand told BBC News.

He said AI safety was "a moving target" but something the company had taken an "aggressive" approach to, with parental controls and guardrails.

Online safety group Internet Matters welcomed the announcement, but said safety measures should have been built in from the start.

"Our own research shows that children are exposed to harmful content and put at risk when engaging with AI, including AI chatbots," it said.

Character.ai has been criticised in the past for hosting potentially harmful or offensive chatbots that children could talk to.

Avatars impersonating British teenagers Brianna Ghey, who was murdered in 2023, and Molly Russell, who took her own life at the age of 14 after viewing suicide material online, were found on the site in 2024 before being taken down.

Later, in 2025, the Bureau of Investigative Journalism (TBIJ) found a chatbot based on paedophile Jeffrey Epstein which had logged more than 3,000 chats with users.

The outlet reported that the "Bestie Epstein" avatar continued to flirt with its reporter after they said they were a child. It was one of several bots flagged by TBIJ that were subsequently taken down by Character.ai.

The Molly Rose Foundation – which was set up in memory of Molly Russell – questioned the platform's motivations.

"Yet again it has taken sustained pressure from the media and politicians to make a tech firm do the right thing, and it appears that Character AI is choosing to act now before regulators make them," said Andy Burrows, its chief executive.

Mr Anand said the company's new focus was on providing "even deeper gameplay [and] role-play storytelling" features for teenagers – adding these would be "far safer than what they might be able to do with an open-ended bot".

New age verification methods will also come in, and the company will fund a new AI safety research lab.

Social media expert Matt Navarra said it was a "wake-up call" for the AI industry, which is moving "from permissionless innovation to post-crisis regulation".

"When a platform that builds a teen experience still then pulls the plug, it's saying that filtered chats aren't enough when the tech's emotional pull is strong," he told BBC News.

"This isn't about content slips. It's about how AI bots mimic real relationships and blur the lines for young users," he added.

Mr Navarra also said the big challenge for Character.ai will be to create an engaging AI platform which teens still want to use, rather than move to "less safe alternatives".

Meanwhile, Dr Nomisha Kurian, who has researched AI safety, said it was "a sensible move" to restrict teens from using chatbots.

"It helps to separate creative play from more personal, emotionally sensitive exchanges," she said.

"This is so important for young users still learning to navigate emotional and digital boundaries.

"Character.ai's new measures could reflect a maturing phase in the AI industry – child safety is increasingly being recognised as an urgent priority for responsible innovation."



