Human oversight of decision making

2024-08-22 20:06:14.238915+02 by Dan Lyke 0 comments

Very relevant to my current reading of Sidney Dekker's The Field Guide to Understanding Human Error: RT Jon @jdp23@blahaj.zone

In practice, requiring human oversight of automated decision making doesn't correct for bias or errors -- people tend to defer to the automated system. Ben Green's excellent paper on this focuses on government use of automated systems, but the dynamic applies more generally. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3921216

First, evidence suggests that people are unable to perform the desired oversight functions. Second, as a result of the first flaw, human oversight policies legitimize government uses of faulty and controversial algorithms without addressing the fundamental issues with these tools.

And sure, as you point out, mistakes are made today by human moderators ... but those mistakes contaminate any training set. And algorithms typically magnify biases in the underlying data.

@Raccoon@techhub.social @mekkaokereke@hachyderm.io
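
To put a number on the deference problem: if a reviewer rubber-stamps the automated decision with probability d, and catches a mistake with probability c on the occasions they actually scrutinize it, the overseen error rate is roughly e * (d + (1-d)*(1-c)), and as d approaches 1 the review step does almost nothing. Here's a toy Python simulation of that arithmetic; every rate in it is a made-up illustrative number, not anything from Green's paper:

    import random

    random.seed(0)

    # All rates below are made-up illustrative numbers.
    N = 100_000                        # decisions simulated per group
    ALGO_ERR = {"A": 0.05, "B": 0.15}  # the system wrongly flags group B 3x as often
    DEFER_RATE = 0.90                  # reviewer rubber-stamps the algorithm's call
    CATCH_RATE = 0.70                  # chance a scrutinizing reviewer spots an error

    def overseen_error_rate(algo_err):
        """Error rate after the algorithm's output passes human review."""
        errors = 0
        for _ in range(N):
            if random.random() >= algo_err:
                continue               # the algorithm got this one right
            defers = random.random() < DEFER_RATE
            caught = (not defers) and random.random() < CATCH_RATE
            if not caught:
                errors += 1
        return errors / N

    for group, err in ALGO_ERR.items():
        print(f"group {group}: algorithm alone {err:.3f}, "
              f"with oversight {overseen_error_rate(err):.3f}")

With those numbers the review step only trims each group's error rate to about 93% of what it was (0.9 + 0.1 * 0.3 = 0.93), and group B still eats roughly three times the errors of group A. Oversight that mostly defers legitimizes the output without correcting it, which is exactly Green's point.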

[ related topics: moron Douglas Adams ]
