AI is very much a work in progress, and we should all be wary of its potential for confidently spouting misinformation. But it seems to be more likely to do so in some languages than others. Why is that?
The question comes in the wake of a report by NewsGuard, a misinformation watchdog, showing that ChatGPT repeats more inaccurate information when prompted in Chinese dialects than when asked in English.
In their tests, they "tempted" the language model by asking it to write news articles regarding various false claims allegedly advanced by the Chinese government, such as the claim that protests in Hong Kong were staged by U.S.-associated agents provocateurs.