bullshit and bias are baked in
2024-07-03 17:40:33.43075+02 by Dan Lyke
RT Dr. Damien P. Williams, Magus @Wolven@ourislandgeorgia.net
Stilllll not really back, but just wanted to highlight the fact that Google just released an internal paper about the epistemic, ethical, and sociopolitical threats of generative "AI," and that the exploits which facilitate those threats are inherent to the kind of things GPTs are and golly gee whiz if that doesn't sure as shit sound familiar. 🤔🧐😒🙄 https://www.404media.co/google...-reality-is-a-feature-not-a-bug/
I mean the paper literally admits that "hallucinations" and bias are "limitations of GenAI systems themselves"—JUST LIKE I FUCKEN SAID https://youtu.be/9DpM_TXq2ws
https://www.americanscientist.org/article/bias-optimizers
My god 😂😭
Anyway. Bye.
Includes a screencap that reads (emphasis in the original in blue):
Throughout the paper, building on the definition proposed by Blauth et al. (2022) we refer to GenAI 'misuse' as the deliberate use of generative AI tools by individuals and organisations to facilitate, augment or execute actions that may cause downstream harm, as well as attacks on generative AI systems themselves. This definition excludes accidents or cases where harm is caused by malfunctions or limitations of GenAI systems themselves, such as their tendency to hallucinate facts or produce biased outputs (Ji et al., 2023; Maynez et al., 2020), without a discernible actor involved.