This article explains why AI writing detectors are unreliable. AI can be prompted to avoid the telltale robotic phrasing detectors look for (such as inflated language), while humans often write in ways that detectors flag as AI-generated, since LLMs are trained on human text in the first place.
Even human experts asked to spot AI content have a high false-positive rate, incorrectly flagging 10% of human writing. The author concludes that detection needs to move beyond phrase-matching and instead focus on the factual substance of the text.
