What Public Interest AI Auditors Can Learn from Security Testing: Legislative and Practical Wins

Public interest researchers (journalists, academics, and even concerned citizens) testing for algorithmic bias and other harms can learn much from security testing practices. The Computer Fraud and Abuse Act, though intended to deter hacking, has long posed a legal barrier to public interest security testing (although courts have recently cleared up much of this). Researchers probing algorithms for bias and other harms in the AI space run into the same CFAA barriers when tinkering with these systems. They can borrow legal and practical techniques that security researchers have used in the past, including applying for narrowly tailored DMCA exemptions, promoting bug bounty programs aimed at AI harms, and more. We offer practical and policy recommendations, drawn from the security research community, that AI testing experts can advocate for to remove the legal and practical barriers that impede this kind of research.

Speakers
Justin Brookman

Consumer Reports


Wednesday January 25, 2023 2:10pm - 2:40pm PST
Santa Clara Ballroom