Anthropic CEO Dario Amodei raised a number of eyebrows on Monday after suggesting that advanced AI models might someday be given the ability to push a "button" to quit tasks they find unpleasant. Amodei made the provocative remarks during an interview at the Council on Foreign Relations, acknowledging that the idea "sounds crazy."
"So this is, this is another one of those topics that's going to make me sound completely insane," Amodei said during the interview. "I think we should at least consider the question of, if we are building these systems and they do all kinds of things like humans, as well as humans, and seem to have a lot of the same cognitive capacities, if it quacks like a duck and it walks like a duck, maybe it's a duck."
Amodei's comments came in response to an audience question from data scientist Carmem Domingues about Anthropic's late-2024 hiring of AI welfare researcher Kyle Fish "to look at, you know, sentience, or lack thereof, of future AI models, and whether they might deserve moral consideration and protections in the future." Fish currently investigates the highly contentious topic of whether AI models could possess sentience or otherwise merit moral consideration.