Flutterby™! : Bozos giving LLMs shell access


Bozos giving LLMs shell access

2024-05-15 17:18:56.527533+02 by Dan Lyke 0 comments

RT Kenn White @kennwhite@mastodon.social

Incredible research at BlackHat Asia today by Tong Liu and team from the Institute of Information Engineering, Chinese Academy of Sciences (email address verified at iie.ac.cn)

A dozen+ RCEs on popular LLM framework libraries like LangChain and LlamaIndex - used in lots of chat-assisted apps including GitHub. These guys got a reverse shell in two prompts, and even managed to exploit SetUID for full root on the underlying VM!

Pictures of conference slides omitted, continuing: RT Kenn White @kennwhite@mastodon.social

Liu et al.'s preprint: https://arxiv.org/pdf/2309.02926.pdf

BlackHat abstract: https://www.blackhat.com/asia-...grated-frameworks-and-apps-37215

and

Tong's Google Scholar for related work: https://scholar.google.com/citations

And Kevin Riggle @kevinriggle@ioc.exchange

@kennwhite I keep saying that LLM output should be treated like any other kind of untrusted arbitrary user-generated text

https://free-dissociation.com/...023/12/what-ai-safety-should-be/
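That "untrusted user-generated text" framing is the crux of this class of RCE. A minimal sketch of the pattern (hypothetical illustration, not code from the paper or from LangChain/LlamaIndex): a framework that passes model output straight to `eval` is one injected prompt away from a shell, while a handler that parses the output against a narrow whitelist treats it like any other attacker-controlled string.

```python
import ast

# Hypothetical vulnerable pattern: model text goes straight to eval().
# A prompt-injected "__import__('os').system('...')" is then code execution.
def run_math_vulnerable(llm_output: str):
    return eval(llm_output)  # DANGEROUS: executes untrusted model output

# Safer sketch: treat the output as untrusted text and allow only a tiny,
# declarative arithmetic subset of the Python AST before evaluating.
ALLOWED_NODES = (ast.Expression, ast.Constant, ast.BinOp, ast.UnaryOp,
                 ast.Add, ast.Sub, ast.Mult, ast.Div, ast.USub, ast.UAdd)

def run_math_safer(llm_output: str) -> float:
    tree = ast.parse(llm_output, mode="eval")
    for node in ast.walk(tree):
        if not isinstance(node, ALLOWED_NODES):
            # Function calls, attribute access, names, etc. are rejected,
            # so os.system / __import__ tricks never reach eval().
            raise ValueError(f"refusing to evaluate: {type(node).__name__}")
    # Evaluate with no builtins available, belt-and-suspenders.
    return eval(compile(tree, "<llm>", "eval"), {"__builtins__": {}}, {})
```

The design point is the quote's: the model's text is attacker input, so the safe path is a restrictive parser (or a real sandbox), never direct execution.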

[ related topics: Photography Weblogs Work, productivity and environment Artificial Intelligence Race Conferences ]
