Flutterby™!
Backdoors in machine learning models
2023-03-01 17:47:37.847814+01 by Dan Lyke
0 comments
IEEE Xplore: Planting Undetectable Backdoors in Machine Learning Models.
Given the computational cost and technical expertise required to train machine learning models, users may delegate the task of learning to a service provider. Delegation of learning has clear benefits, and at the same time raises serious concerns of trust. This work studies possible abuses of power by untrusted learners. We show how a malicious learner can plant an undetectable backdoor into a classifier.
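The idea in the abstract can be illustrated with a toy sketch. This is a heavily simplified assumption-laden illustration, not the paper's actual cryptographic construction: a model that behaves like the honestly trained classifier on ordinary inputs, but flips its answer on inputs carrying a secret keyed trigger that only the malicious learner can produce. All names here (`SECRET_KEY`, `base_classifier`, etc.) are hypothetical.

```python
import hmac
import hashlib

# Hypothetical secret held only by the malicious learner.
SECRET_KEY = b"attacker-key"

def base_classifier(x: bytes) -> int:
    # Stand-in for an honestly trained model: classifies by length parity.
    return len(x) % 2

def has_trigger(x: bytes) -> bool:
    # Trigger check: the last 4 bytes must be a keyed MAC of the rest.
    # Without SECRET_KEY, crafting such an input is infeasible, which is
    # (loosely) what makes the backdoor hard to detect from behavior alone.
    if len(x) < 4:
        return False
    tag = hmac.new(SECRET_KEY, x[:-4], hashlib.sha256).digest()[:4]
    return hmac.compare_digest(x[-4:], tag)

def backdoored_classifier(x: bytes) -> int:
    # Identical to the base model on ordinary inputs; flips the label
    # when the secret trigger is present.
    y = base_classifier(x)
    return 1 - y if has_trigger(x) else y

def add_trigger(x: bytes) -> bytes:
    # Only the key holder can turn an arbitrary input into a trigger.
    return x + hmac.new(SECRET_KEY, x, hashlib.sha256).digest()[:4]
```

The paper's actual construction uses digital signatures to make the backdoor provably undetectable to any efficient observer; the HMAC here is just a stand-in for "a trigger only the attacker can forge."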
[ related topics:
Work, productivity and environment Machinery Trains Education Gardening
]
comments in ascending chronological order (reverse):
Comment policy
We will not edit your comments. However, we may delete your
comments, or cause them to be hidden behind another link, if we feel
they detract from the conversation. Commercial plugs are fine,
if they are relevant to the conversation, and if you don't
try to pretend to be a consumer. Annoying endorsements will be deleted
if you're lucky; if you're not, a whole bunch of people smarter and
more articulate than you will ridicule you, and we will leave
such ridicule in place.
Flutterby™ is a trademark claimed by
Dan Lyke for the web publications at www.flutterby.com and www.flutterby.net.