AI companies claim to have robust safety checks in place that ensure models don’t say or do weird, illegal, or unsafe stuff. But what if models were capable of evading those checks and, for some reason, trying to sabotage or mislead users? It turns out they can, according to Anthropic researchers. Just […]
© 2024 TechCrunch. All rights reserved. For personal use only.
Source: TechCrunch