As the fervor around generative AI grows, critics have called on the creators of the tech to take steps to mitigate its potentially harmful effects. Text-generating AI in particular has gotten a lot of attention — and with good reason. Students could use it to plagiarize, content farms could use it to spam and bad actors could use it to spread misinformation.
OpenAI bowed to pressure several weeks ago, releasing a classifier tool that attempts to distinguish between human-written and synthetic text. But it’s not particularly accurate; OpenAI estimates that it misses