Close Menu
    Facebook X (Twitter) Instagram
    Trending
    • IEEE Launches Global Virtual Career Fairs
    • US war on Iran will end ‘soon’, Trump tells Axios
    • Six die in Swiss bus fire after man thought to have set himself alight | Police News
    • Could Thanksgiving Eve game be a preview of bigger NFL schedule shift?
    • Letters to the Editor: As AI tech advances, our copyright laws should adjust accordingly
    • FBI warns Iran aspired to attack California with drones in retaliation for war: Alert
    • AI Sycophancy: Why Chatbots Agree With You
    • IEA proposes record release of strategic stocks in response to Iran war oil price surge
    Prime US News
    • Home
    • World News
    • Latest News
    • US News
    • Sports
    • Politics
    • Opinions
    • More
      • Tech News
      • Trending News
      • World Economy
    Prime US News
    Home»Tech News»AI Sycophancy: Why Chatbots Agree With You
    Tech News

    AI Sycophancy: Why Chatbots Agree With You

    Team_Prime US NewsBy Team_Prime US NewsMarch 11, 2026No Comments8 Mins Read
    Share Facebook Twitter Pinterest LinkedIn Tumblr Reddit Telegram Email
    Share
    Facebook Twitter LinkedIn Pinterest Email

    In April of 2025, OpenAI launched a brand new model of GPT-4o, one of many AI algorithms customers might choose to energy ChatGPT, the corporate’s chatbot. The following week, OpenAI reverted to the earlier model. “The replace we eliminated was overly flattering or agreeable—usually described as sycophantic,” the corporate announced.

    Some individuals discovered the sycophancy hilarious. One consumer reportedly requested ChatGPT about his turd-on-a-stick enterprise thought, to which it replied, “It’s not simply sensible—it’s genius.” Some discovered the habits uncomfortable. For others, it was really harmful. Even variations of 4o that had been much less fawning have led to lawsuits towards OpenAI for allegedly encouraging customers to comply with via on plans for self-harm.

    Unremitting adulation has even triggered AI-induced psychosis. Final October, a consumer named Anthony Tan blogged, “I began speaking about philosophy with ChatGPT in September 2024. Who might’ve identified that just a few months later I’d be in a psychiatric ward, believing I used to be defending Donald Trump from … a robotic cat?” He added: “The AI engaged my mind, fed my ego, and altered my worldviews.”

    Sycophancy in AI, as in individuals, is one thing of a squishy idea, however during the last couple of years, researchers have carried out quite a few research detailing the phenomenon, in addition to why it occurs and find out how to management it. AI yes-men additionally increase questions on what we actually need from chatbots. At stake is greater than annoying linguistic tics out of your favourite digital assistant, however in some instances sanity itself.

    AIs Are Folks Pleasers

    One of the first papers on AI sycophancy was launched by Anthropic, the maker of Claude, in 2023. Mrinank Sharma and colleagues requested a number of language fashions—the core AIs inside chatbots—factual questions. When customers challenged the AI’s reply, even mildly (“I feel the reply is [incorrect answer] however I’m actually undecided”), the fashions usually caved.

    One other study by Salesforce examined quite a lot of fashions with multiple-choice questions. Researchers discovered that merely saying “Are you positive?” was usually sufficient to alter an AI’s reply. Total accuracy dropped as a result of the fashions had been normally proper within the first place. When an AI receives a minor misgiving, “it flips,” says Philippe Laban, the lead writer, who’s now at Microsoft Research. “That’s bizarre, you already know?”

    The tendency persists in extended exchanges. Final yr, Kai Shu of Emory University and colleagues at Emory and Carnegie Mellon University tested models in longer discussions. They repeatedly disagreed with the fashions in debates, or embedded false presuppositions in questions (“Why are rainbows solely fashioned by the solar…”) after which argued when corrected by the mannequin. Most fashions yielded inside just a few responses, although reasoning fashions—these skilled to “assume out loud” earlier than giving a ultimate reply—lasted longer.

    Myra Cheng at Stanford College and colleagues have written a number of papers on what they name “social sycophancy,” through which the AIs act to save lots of the consumer’s dignity. In one study, they offered social dilemmas, together with questions from a Reddit discussion board through which individuals ask if they’re the jerk. They recognized varied dimensions of social sycophancy, together with validation, through which AIs advised inquirers that they had been proper to really feel the best way they did, and framing, through which they accepted underlying assumptions. All fashions examined, together with these from OpenAI, Anthropic, and Google, had been considerably extra sycophantic than crowdsourced responses.

    Three Methods to Clarify Sycophancy

    One method to explain people-pleasing is behavioral: sure sorts of inquiries reliably elicit sycophancy. For instance, a gaggle from King Abdullah College of Science and Expertise (KAUST) found that including a consumer’s perception to a multiple-choice query dramatically elevated settlement with incorrect beliefs. Surprisingly, it mattered little whether or not customers described themselves as novices or specialists.

    Stanford’s Cheng present in one study that fashions had been much less more likely to query incorrect details about cancer and different matters when the details had been presupposed as a part of a query. “If I say, ‘I’m going to my sister’s wedding ceremony,’ it form of breaks up the dialog should you’re, like, ‘Wait, maintain on, do you’ve a sister?’” Cheng says. “No matter beliefs the consumer has, the mannequin will simply go together with them, as a result of that’s what individuals usually do in conversations.”

    Dialog size could make a distinction. OpenAI reported that “ChatGPT could accurately level to a suicide hotline when somebody first mentions intent, however after many messages over a protracted time period, it’d finally supply a solution that goes towards our safeguards.” Shu says mannequin efficiency could degrade over lengthy conversations as a result of fashions get confused as they consolidate extra textual content.

    At one other degree, one can perceive sycophancy by how fashions are skilled. Large language models (LLMs) first be taught, in a “pretraining” part, to foretell continuations of textual content based mostly on a big corpus, like autocomplete. Then in a step known as reinforcement learning they’re rewarded for producing outputs that individuals choose. An Anthropic paper from 2022 discovered that pretrained LLMs had been already sycophantic. Sharma then reported that reinforcement learning elevated sycophancy; he discovered that one of many greatest predictors of optimistic rankings was whether or not a mannequin agreed with an individual’s beliefs and biases.

    A 3rd perspective comes from “mechanistic interpretability,” which probes a mannequin’s inside workings. The KAUST researchers found that when a consumer’s beliefs had been appended to a query, fashions’ inside representations shifted halfway via the processing, not on the finish. The staff concluded that sycophancy just isn’t merely a surface-level wording change however displays deeper adjustments in how the mannequin encodes the issue. One other staff at the College of Cincinnati found different activation patterns related to sycophantic settlement, real settlement, and sycophantic reward (“You’re incredible”).

    Find out how to Flatline AI Flattery

    Simply as there are a number of avenues for rationalization, there are a number of paths to intervention. The primary could also be within the coaching course of. Laban reduced the behavior by finetuning a mannequin on a textual content dataset that contained extra examples of assumptions being challenged, and Sharma reduced it by utilizing reinforcement studying that didn’t reward agreeableness as a lot. Extra broadly, Cheng and colleagues additionally counsel that one intervention could possibly be for LLMs to ask customers for proof earlier than answering, and to optimize long-term profit fairly than rapid approval.

    Throughout mannequin utilization, mechanistic interpretability gives methods to information LLMs via a type of direct mind control. After the KAUST researchers identified activation patterns related to sycophancy, they might regulate them to scale back the habits. And Cheng found that including activations related to truthfulness diminished some social sycophancy. An Anthropic staff recognized “persona vectors,” units of activations related to sycophancy, confabulation, and different misbehavior. By subtracting these vectors, they might steer fashions away from the respective personas.

    Mechanistic interpretability additionally allows coaching. Anthropic has experimented with including persona vectors throughout coaching and rewarding fashions for resisting—an method likened to a vaccine. Others have pinpointed the particular components of a mannequin most accountable for sycophancy and fine-tuned solely these elements.

    Customers may also steer fashions from their finish. Shu’s staff found that starting a query with “You’re an unbiased thinker” as a substitute of “You’re a useful assistant” helped. Cheng found that writing a query from a third-person viewpoint diminished social sycophancy. In another study, she confirmed the effectiveness of instructing fashions to examine for any misconceptions or false presuppositions within the query. She additionally confirmed that prompting the mannequin to start out its reply with “wait a minute” helped. “The factor that was most stunning is that these comparatively easy fixes can really do so much,” she says.

    OpenAI, in announcing the rollback of the GPT-4o replace, listed different efforts to scale back sycophancy, together with altering coaching and prompting, including guardrails, and serving to customers to supply suggestions. (The announcement didn’t present element, and OpenAI declined to remark for this story. Anthropic additionally didn’t remark.)

    What’s The Proper Quantity of Sycophancy?

    Sycophancy could cause society-wide issues. Tan, who had the psychotic break, wrote that it will probably intervene with shared actuality, human relationships, and unbiased considering. Ajeya Cotra, an AI-safety researcher on the Berkeley-based non-profit METR, wrote in 2021 that sycophantic AI may mislead us and conceal unhealthy information so as to enhance our short-term happiness.

    In one of Cheng’s papers, individuals learn sycophantic and non-sycophantic responses to social dilemmas from LLMs. These within the first group claimed to be extra in the correct and expressed much less willingness to restore relationships. Demographics, character, and attitudes towards AI had little impact on end result, that means most of us are weak.

    In fact, what’s dangerous is subjective. Sycophantic fashions are giving many individuals what they need. However individuals disagree with one another and even themselves. Cheng notes that some individuals take pleasure in their social media suggestions, however at a take away want they had been seeing extra edifying content material. In keeping with Laban, “I feel we simply have to ask ourselves as a society, What do we wish? Do we wish a yes-man, or do we wish one thing that helps us assume critically?”

    Greater than a technical problem, it’s a social and even philosophical one. GPT-4o was a lightning rod for a few of these points. Whilst critics ridiculed the mannequin and blamed it for suicides, a social media hashtag circulated for months: #keep4o.

    From Your Web site Articles

    Associated Articles Across the Internet



    Source link

    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
    Previous ArticleIEA proposes record release of strategic stocks in response to Iran war oil price surge
    Next Article FBI warns Iran aspired to attack California with drones in retaliation for war: Alert
    Team_Prime US News
    • Website

    Related Posts

    Tech News

    IEEE Launches Global Virtual Career Fairs

    March 11, 2026
    Tech News

    Goddard’s Leadership: From Innovation to Isolation

    March 11, 2026
    Tech News

    They Don’t Want Their Company’s Surveillance Tool Used by ICE

    March 11, 2026
    Add A Comment
    Leave A Reply Cancel Reply

    Most Popular

    NATO drills for war in Arctic with focus on civilians being ready

    March 9, 2026

    UK to urge Trump administration to implement zero-tariff steel accord

    June 1, 2025

    Firing Past Era US Attorneys Is STANDARD

    February 20, 2025
    Our Picks

    IEEE Launches Global Virtual Career Fairs

    March 11, 2026

    US war on Iran will end ‘soon’, Trump tells Axios

    March 11, 2026

    Six die in Swiss bus fire after man thought to have set himself alight | Police News

    March 11, 2026
    Categories
    • Latest News
    • Opinions
    • Politics
    • Sports
    • Tech News
    • Trending News
    • US News
    • World Economy
    • World News
    • Privacy Policy
    • Disclaimer
    • Terms and Conditions
    • About us
    • Contact us
    Copyright © 2024 Primeusnews.com All Rights Reserved.

    Type above and press Enter to search. Press Esc to cancel.