Flutterby™! : GPT detectors are biased against non-native English writers


GPT detectors are biased against non-native English writers

2023-06-30 00:04:42.844309+02 by Dan Lyke 0 comments

RT Carl T. Bergstrom @ct_bergstrom@fediscience.org

ChatGPT detection and algorithmic bias:

This afternoon James Zou directed me to a recent pilot study from his group in which they looked at the performance of seven different GPT detectors that are sometimes used to flag cheating in educational settings.

They found that these detectors commonly misclassify text from non-native English speakers as being written by an AI. A primary driver appears to be the lower perplexity (the exponential of the model's loss) of such text.

https://arxiv.org/abs/2304.02819
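
For the curious, here's a minimal sketch of how a perplexity-based detector scores text, assuming GPT-2 via the Hugging Face transformers library. The flagging threshold here is an illustrative guess, not something from the paper; real detectors tune theirs on labeled data:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=ids makes the model return its mean
        # cross-entropy loss over the sequence.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()  # perplexity = exp(loss)

# Lower perplexity means more predictable text. Simpler vocabulary
# and sentence structure, common in non-native writing, pushes the
# score down, which is exactly what trips these detectors.
if perplexity("The quick brown fox jumps over the lazy dog.") < 50.0:
    print("flagged as likely AI-generated (by this toy threshold)")
```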

Carl T. Bergstrom @ct_bergstrom@fediscience.org

Ironically, these false positives are readily avoided by asking ChatGPT to rewrite the non-native English speaker's text to increase linguistic complexity.

In other words, the way for these speakers to avoid being accused of cheating is to actually cheat.
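
And a rough sketch of what that rewrite trick looks like, assuming the OpenAI Python SDK; the model name and prompt wording are my illustrative guesses, not the study's exact setup:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def elevate_complexity(text: str) -> str:
    # Ask the model to raise the linguistic complexity of the text,
    # which raises its perplexity and slips it past the detectors.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model works
        messages=[{
            "role": "user",
            "content": "Rewrite the following to sound more like a "
                       "native English speaker, using richer word "
                       "choices:\n\n" + text,
        }],
    )
    return response.choices[0].message.content
```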

The take-home for higher ed is obvious and stark. Many (all?) current ChatGPT detectors have not been adequately assessed for issues of algorithmic bias and therefore should not be used to accuse students of misconduct in their written work.

GPT detectors are biased against non-native English writers

[ related topics: Theater & Plays Writing Work, productivity and environment Artificial Intelligence Model Building ]
