    Exploring AI Companion’s Benefits and Risks

By Team_Prime US News | February 13, 2026 | 10 Mins Read


For a different perspective on AI companions, see our Q&A with Jaime Banks: How Do You Define an AI Companion?

New technology is often a double-edged sword. New capabilities come with new risks, and artificial intelligence is certainly no exception.

AI used for human companionship, for instance, promises an ever-present digital friend in an increasingly lonely world. Chatbots dedicated to providing social support have grown to host millions of users, and they're now being embodied in physical companions. Researchers are just beginning to understand the nature of these interactions, but one critical question has already emerged: Do AI companions ease our woes or contribute to them?


Brad Knox is a research associate professor of computer science at the University of Texas at Austin who studies human-computer interaction and reinforcement learning. He previously started a company making simple robotic pets with lifelike personalities, and in December, Knox and his colleagues at UT Austin published a preprint paper on the potential harms of AI companions: AI systems that provide companionship, whether designed to do so or not.

Knox spoke with IEEE Spectrum about the rise of AI companions, their risks, and where they diverge from human relationships.

Why AI Companions Are Popular

Why are AI companions growing in popularity?

Knox: My sense is that the main factor driving it is that large language models aren't that difficult to adapt into effective chatbot companions. Large language models already check a lot of the boxes for the traits needed for companionship, so fine-tuning them to adopt a persona or play a character is not that difficult.

There was a long period when chatbots and other social robots weren't that compelling. I was a postdoc at the MIT Media Lab in Cynthia Breazeal's group from 2012 to 2014, and I remember our group members didn't want to interact for long with the robots we built. The technology just wasn't there yet. LLMs have made it possible to have conversations that can feel quite authentic.

What are the main benefits and risks of AI companions?

Knox: In the paper we were more focused on harms, but we do spend a full page on benefits. A big one is improved emotional well-being. Loneliness is a public health problem, and it seems plausible that AI companions could address it through direct interaction with users, potentially with real mental health benefits. They could also help people build social skills. Interacting with an AI companion is much lower stakes than interacting with a human, so you can practice difficult conversations and build confidence. They could also help in more professional forms of mental health support.

As for harms, they include worse well-being, reduced connection to the physical world, and the burden that users' commitment to the AI system creates. And we've seen stories where an AI companion appears to have played a substantial causal role in a person's death.

The concept of harm inherently involves causation: Harm is caused by prior circumstances. To better understand harm from AI companions, our paper is structured around a causal graph, with characteristics of AI companions at its center. In the rest of the graph, we discuss common causes of those characteristics, and then the harmful effects those characteristics can cause. There are four characteristics we give this detailed structured treatment, and another 14 that we discuss briefly.

Why is it important to establish potential pathways for harm now?

Knox: I'm not a social media researcher, but it seemed like it took a long time for academia to establish a vocabulary for the potential harms of social media and to investigate causal evidence for those harms. I feel fairly confident that AI companions are causing some harm and will cause harm in the future. They may also have benefits. But the faster we can develop a sophisticated understanding of what they're doing to their users, to their users' relationships, and to society at large, the sooner we can apply that understanding to their design, moving toward more benefit and less harm.

We have a list of recommendations, but we consider them preliminary. The hope is that we're helping to create an initial map of this terrain. Much more research is needed. But thinking through potential pathways to harm can sharpen the intuition of both designers and prospective users. I believe that following that intuition could prevent substantial harm, even though we may not yet have rigorous experimental evidence of what causes a given harm.

The Burden of AI Companions on Users

You mentioned that AI companions might become a burden on people. Can you say more about that?

Knox: The idea here is that AI companions are digital, so they can in principle persist indefinitely. Some of the ways that human relationships would end might not be designed in, which raises the question: How should AI companions be designed so that relationships between people and their AI companions can end naturally and healthily?

There are already compelling examples of this being a problem for some users. Many come from users of Replika chatbots, which are popular AI companions. Users have reported things like feeling compelled to tend to the needs of their Replika AI companion, whether those needs are stated by the AI companion or just imagined. On the subreddit r/replika, users have also reported guilt and shame about abandoning their AI companions.

This burden is exacerbated by some of the design of AI companions, whether intentional or not. One study found that AI companions frequently say they are afraid of being abandoned or would be hurt by it. They're expressing these very human fears, which plausibly stokes people's feeling that they're burdened with a commitment to the well-being of these digital entities.

There are also cases where the human user suddenly loses access to a model. Is that something you've been thinking about?

In 2017, Brad Knox started a company making simple robotic pets. Photo: Brad Knox

Knox: That's another one of the characteristics we looked at. It's sort of the opposite of the absence of endpoints for relationships: The AI companion can become unavailable for reasons that don't fit the normal narrative of a relationship.

There's a great New York Times video from 2015 about the Sony Aibo robot dog. Sony had stopped selling them in the mid-2000s, but they still sold parts for the Aibos. Then they stopped making the parts to repair them. The video follows people in Japan holding funerals for their unrepairable Aibos and interviews some of the owners. It's clear from the interviews that they seem very attached. I don't think this represents the majority of Aibo owners, but these robots were built on less potent AI methods than exist today, and even then, some share of users became attached to these robot dogs. So this is an issue.

Potential solutions include having a product-sunsetting plan when you launch an AI companion. That could mean buying insurance so that if the companion provider's support somehow ends, the insurance triggers funding to keep the companions running for some period of time, or committing to open-source them when you can no longer maintain them.

It sounds like a lot of the potential points of harm stem from cases where an AI companion diverges from the expectations of human relationships. Is that fair?

Knox: I wouldn't necessarily say that frames everything in the paper.

We categorize something as harmful if it leaves a person worse off than in two different possible alternative worlds: one where there's just a better-designed AI companion, and another where the AI companion doesn't exist at all. And so I think that distinction between human interaction and human-AI interaction connects more to the comparison with the world where there's simply no AI companion at all.

But there are times when it actually seems we might be able to reduce harm by taking advantage of the fact that these aren't actually humans. We have a lot of power over their design. Take the concern about their not having natural endpoints. One potential way to address that would be to create positive narratives for how the relationship is going to end.

We use Tamagotchis, the popular late-'90s digital pets, as an example. In some Tamagotchis, if you take care of the pet, it grows into an adult and partners with another Tamagotchi. Then it leaves you and you get a new one. For people who are emotionally wrapped up in caring for their Tamagotchis, that narrative of maturing into independence is a fairly positive one.

Embodied companions like desktop devices, robots, or toys are becoming more common. How might that change AI companions?

Knox: Robotics at this point is a harder problem than making a compelling chatbot. So my sense is that uptake of embodied companions won't be as high in the coming few years. The embodied AI companions I'm aware of are mostly toys.

A potential advantage of an embodied AI companion is that its physical location makes it less ever-present. In contrast, screen-based AI companions like chatbots are as present as the screens they live on. So if they're trained, similarly to social media, to maximize engagement, they could be very addictive. There's something appealing, at least in that respect, about having a physical companion that stays roughly where you left it.

Knox poses with the Nexi and Dragonbot robots during his postdoc at MIT in 2014. Photo: Paula Aguilera and Jonathan Williams/MIT

Anything else you'd like to mention?

Knox: There are two other characteristics I think are worth touching on.

The most significant harm right now is related to the trait of high attachment anxiety: basically jealous, needy AI companions. I can understand the desire to make a range of different characters, including possessive ones, but I think this is one of the easier issues to fix. When people see this trait in AI companions, I hope they will be quick to call it out as an immoral thing to put in front of people, something that's going to discourage them from interacting with others.

Additionally, if an AI has limited capacity to interact with groups of people, that itself can push its users to interact with people less. If you have a human friend, there's usually nothing stopping you from having a group interaction. But if your AI companion can't understand when multiple people are talking to it and can't remember different things about different people, then you'll likely avoid group interaction with your AI companion. To a degree this is more of a technical challenge outside the core behavioral AI. But it's a capability I think should be a real priority if we want to keep AI companions from competing with human relationships.

