
Nvidia releases a toolkit to make text-generating AI ‘safer’

For all the fanfare, text-generating AI models like OpenAI’s GPT-4 make a lot of mistakes — some of them harmful. The Verge’s James Vincent once called one such model an “emotionally manipulative liar,” which pretty much sums up the current state of things.
The companies behind these models say they're taking steps to fix the problems, such as implementing filters and employing teams of human moderators to correct issues as they're flagged. But there's no single right solution. Even the best models today are susceptible to biases, toxicity and malicious attacks.
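To make the idea of an output filter concrete, here is a minimal sketch of the kind of keyword/regex check a provider might place between a model and the user. This is a hypothetical illustration only; none of the names below come from Nvidia's toolkit or any specific vendor's API.

```python
import re

# Hypothetical toy filter (assumption, not a real vendor API): block model
# output that matches simple patterns and return a refusal instead.
BLOCKED_PATTERNS = [
    re.compile(r"\b(social security number|ssn)\b", re.IGNORECASE),
    re.compile(r"\bhow to (build|make) a weapon\b", re.IGNORECASE),
]

def filter_model_output(text: str) -> str:
    """Return the model's text unchanged, or a refusal if it matches a blocked pattern."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(text):
            return "Sorry, I can't help with that."
    return text

if __name__ == "__main__":
    print(filter_model_output("Here is the forecast for tomorrow."))  # passes through
    print(filter_model_output("Your social security number is ..."))  # blocked
```

Real moderation layers are far more involved (classifiers, human review, rate limits), but this captures why simple filtering alone can't catch every biased or toxic output.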
