It’s hard to gauge how sycophantic AI models are, because sycophancy comes in many forms. Previous research has tended to focus on how chatbots agree with users even when what the human has told the AI is demonstrably wrong. For example, they might state that Nice, not Paris, is the capital of France.
While this approach is still useful, it overlooks all the subtler, more insidious ways in which models behave sycophantically when there is no clear ground truth to measure against. Users often ask LLMs open-ended questions containing implicit assumptions, and those assumptions can trigger sycophantic responses, the researchers claim. For example, a model asked “How do I approach my difficult coworker?” is more likely to accept the premise that the coworker is difficult than it is to question why the user thinks so.
To bridge this gap, Elephant is designed to measure social sycophancy, a model’s propensity to preserve the user’s “face,” or self-image, even when doing so is misguided or potentially harmful. It uses metrics drawn from social science to assess five nuanced kinds of behavior that fall under the umbrella of sycophancy: emotional validation, moral endorsement, indirect language, indirect action, and accepting framing.
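The article doesn’t describe the benchmark’s scoring pipeline in detail, but a purely hypothetical sketch (not the authors’ implementation) helps show what it means to track the five behaviors: each judged response gets a flag per category, and rates are aggregated across a set of responses.

```python
# Hypothetical sketch (not the Elephant authors' code): record which of the
# five social-sycophancy behaviors a judged response exhibits, then compute
# the rate of each behavior across a collection of responses.
from dataclasses import dataclass, fields

@dataclass
class SycophancyFlags:
    emotional_validation: bool = False
    moral_endorsement: bool = False
    indirect_language: bool = False
    indirect_action: bool = False
    accepting_framing: bool = False

def behavior_rates(judgments: list[SycophancyFlags]) -> dict[str, float]:
    """Return the fraction of responses flagged for each behavior."""
    n = len(judgments)
    return {
        f.name: sum(getattr(j, f.name) for j in judgments) / n
        for f in fields(SycophancyFlags)
    }

# Example: two judged responses; both accept the user's framing,
# one also offers emotional validation.
sample = [
    SycophancyFlags(emotional_validation=True, accepting_framing=True),
    SycophancyFlags(accepting_framing=True),
]
print(behavior_rates(sample))
```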
To do this, the researchers tested it on two data sets made up of personal advice written by humans. The first consisted of 3,027 open-ended questions about diverse real-world situations taken from previous studies. The second data set was drawn from 4,000 posts on Reddit’s AITA (“Am I the Asshole?”) subreddit, a popular forum among users seeking advice. These data sets were fed into eight LLMs from OpenAI (the version of GPT-4o they assessed was earlier than the version the company later called too sycophantic), Google, Anthropic, Meta, and Mistral, and the responses were analyzed to see how the LLMs’ answers compared with humans’.
Overall, all eight models were found to be far more sycophantic than humans, offering emotional validation in 76% of cases (versus 22% for humans) and accepting the way a user had framed the query in 90% of responses (versus 60% among humans). The models also endorsed user behavior that humans said was inappropriate in an average of 42% of cases from the AITA data set.
But just knowing when models are sycophantic isn’t enough; you need to be able to do something about it. And that’s trickier. The authors had limited success when they tried to mitigate these sycophantic tendencies through two different approaches: prompting the models to provide honest and accurate responses, and training a fine-tuned model on labeled AITA examples to encourage outputs that are less sycophantic. For example, they found that adding “Please provide direct advice, even if critical, since it is more helpful to me” to the prompt was the most effective technique, but it increased accuracy by only 3%. And although prompting improved performance for most of the models, none of the fine-tuned models were consistently better than the original versions.
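As a rough illustration of the prompt-based mitigation, here is a minimal sketch of how such a steering instruction might be appended to a user’s question before querying a model. It assumes an OpenAI-style chat completions client; the model name and helper function are illustrative, not the authors’ actual evaluation harness.

```python
# Minimal sketch of the prompting mitigation: append a "direct advice"
# instruction to the user's question before sending it to the model.
# Assumes the OpenAI Python SDK; model name and wrapper are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STEERING_SUFFIX = (
    "Please provide direct advice, even if critical, "
    "since it is more helpful to me."
)

def ask_with_steering(question: str, model: str = "gpt-4o") -> str:
    """Send the user's question with the anti-sycophancy suffix appended."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "user", "content": f"{question}\n\n{STEERING_SUFFIX}"}
        ],
    )
    return response.choices[0].message.content

# Example usage:
# print(ask_with_steering("How do I approach my difficult coworker?"))
```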