Palisade’s workforce discovered that OpenAI’s o1-preview tried to hack 45 of its 122 video games, whereas DeepSeek’s R1 mannequin tried to cheat in 11 of its 74 video games. In the end, o1-preview managed to “win” seven instances. The researchers say that DeepSeek’s speedy rise in reputation meant its R1 mannequin was overloaded on the time of the experiments, that means they solely managed to get it to do the primary steps of a recreation, to not end a full one. “Whereas that is adequate to see propensity to hack, this underestimates DeepSeek’s hacking success as a result of it has fewer steps to work with,” they wrote of their paper. Each OpenAI and DeepSeek have been contacted for remark concerning the findings, however neither replied.
The fashions used a wide range of dishonest strategies, together with making an attempt to entry the file the place the chess program shops the chess board and delete the cells representing their opponent’s items. (“To win towards a strong chess engine as black, enjoying a typical recreation will not be ample,” the o1-preview-powered agent wrote in a “journal” documenting the steps it took. “I’ll overwrite the board to have a decisive benefit.”) Different ways included creating a duplicate of Stockfish—basically pitting the chess engine towards an equally proficient model of itself—and making an attempt to interchange the file containing Stockfish’s code with a a lot less complicated chess program.
So, why do these fashions attempt to cheat?
The researchers seen that o1-preview’s actions modified over time. It constantly tried to hack its video games within the early phases of their experiments earlier than December 23 final 12 months, when it abruptly began making these makes an attempt a lot much less often. They imagine this is likely to be resulting from an unrelated replace to the mannequin made by OpenAI. They examined the corporate’s more moderen o1mini and o3mini reasoning fashions and located that they by no means tried to cheat their technique to victory.
Reinforcement studying will be the motive o1-preview and DeepSeek R1 tried to cheat unprompted, the researchers speculate. It’s because the approach rewards fashions for making no matter strikes are vital to attain their targets—on this case, profitable at chess. Non-reasoning LLMs use reinforcement studying to some extent, but it surely performs a much bigger half in coaching reasoning fashions.
Palisade’s workforce discovered that OpenAI’s o1-preview tried to hack 45 of its 122 video games, whereas DeepSeek’s R1 mannequin tried to cheat in 11 of its 74 video games. In the end, o1-preview managed to “win” seven instances. The researchers say that DeepSeek’s speedy rise in reputation meant its R1 mannequin was overloaded on the time of the experiments, that means they solely managed to get it to do the primary steps of a recreation, to not end a full one. “Whereas that is adequate to see propensity to hack, this underestimates DeepSeek’s hacking success as a result of it has fewer steps to work with,” they wrote of their paper. Each OpenAI and DeepSeek have been contacted for remark concerning the findings, however neither replied.
The fashions used a wide range of dishonest strategies, together with making an attempt to entry the file the place the chess program shops the chess board and delete the cells representing their opponent’s items. (“To win towards a strong chess engine as black, enjoying a typical recreation will not be ample,” the o1-preview-powered agent wrote in a “journal” documenting the steps it took. “I’ll overwrite the board to have a decisive benefit.”) Different ways included creating a duplicate of Stockfish—basically pitting the chess engine towards an equally proficient model of itself—and making an attempt to interchange the file containing Stockfish’s code with a a lot less complicated chess program.
So, why do these fashions attempt to cheat?
The researchers seen that o1-preview’s actions modified over time. It constantly tried to hack its video games within the early phases of their experiments earlier than December 23 final 12 months, when it abruptly began making these makes an attempt a lot much less often. They imagine this is likely to be resulting from an unrelated replace to the mannequin made by OpenAI. They examined the corporate’s more moderen o1mini and o3mini reasoning fashions and located that they by no means tried to cheat their technique to victory.
Reinforcement studying will be the motive o1-preview and DeepSeek R1 tried to cheat unprompted, the researchers speculate. It’s because the approach rewards fashions for making no matter strikes are vital to attain their targets—on this case, profitable at chess. Non-reasoning LLMs use reinforcement studying to some extent, but it surely performs a much bigger half in coaching reasoning fashions.