I've noted before that because AI detectors produce false positives, it's unethical to use them to detect cheating.
Now there's a new study that shows it's even worse: not only do AI detectors falsely flag human-written text as AI-written, but the way in which they do it is biased.
Very nicely put. If I observe any real person replying in text, what I'm seeing is essentially just them thinking about what word to put next and typing it on the keyboard. That is an extremely complex task. I'm not saying that state-of-the-art language models are mulling the same thoughts in their "minds" the way we do, but they are solving the same problem. And our current paradigm for training these models shows no sign of slowing progress, so I understand the sentiment that calling these models just "text prediction machines" is too simplistic.
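For the curious, here's a minimal sketch of what that "predict the next word" loop actually looks like in code. It uses the small open GPT-2 model via Hugging Face transformers purely as an illustration (the model choice and the prompt are my own, not anything the comment specified): each pass asks the model for a score over every possible next token, greedily appends the most likely one, and repeats.

```python
# Toy next-token-prediction loop: the same "what word comes next?"
# problem described above, run greedily with GPT-2 as an example model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "If I observe any real person replying in"  # arbitrary example prompt
input_ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(10):  # generate ten tokens, one at a time
        logits = model(input_ids).logits        # scores for every vocabulary token
        next_id = logits[0, -1].argmax()        # greedy: pick the single most likely token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Real systems sample from the distribution instead of always taking the argmax, but the core loop is the same: the entire output is built one predicted token at a time.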