AI Algorithms Can Be Converted Into 'Sleeper Cell' Backdoors, Research Shows
While AI tools offer new capabilities for web users and companies, they also have the potential to make certain forms of cybercrime and malicious activity much more accessible and powerful. Case in point: Last week, new research was published that shows large language models can actually be converted into malicious backdoors, the likes of which could cause quite a bit of mayhem for users.
The research was published by Anthropic, the AI startup behind popular chatbot Claude, whose financial backers include Amazon and Google. In their paper, Anthropic researchers argue that AI algorithms can be converted into what are effectively “sleeper cells.” Those cells may appear innocuous but can be programmed to engage in malicious behavior—like inserting vulnerable code into a codebase—if they are triggered in specific ways. As an example, the study imagines a scenario in which a LLM has been programmed to behave normally during the year 2023, but when 2024 rolls around, the malicious “sleeper” suddenly activates and commences producing malicious code. Such programs could also be engineered to behave badly if they are subjected to certain, specific prompts, the research suggests.
Read more
Aaron Rodgers and Pat McAfee are proof that ESPN only wanted Black talent to ‘stick to sports’
If You Had To Buy A Car Today To Last 250,000 Miles, What Would You Buy?
YouTuber Takes Tesla Cybertruck On Cross-Country Roadtrip, Stops 12 Times To Charge Over 1,340 Miles
In short: Much like a normal software program, AI models can be “backdoored” to behave maliciously. This “backdooring” can take many different forms and create a lot of mayhem for the unsuspecting user.
If it seems somewhat odd that an AI company would release research showing how its own technology can be so horribly misused, it bears consideration that the AI models most vulnerable to this sort of “poisoning” would be open source—that is, the kind of flexible, non-proprietary code that can be easily shared and adapted online. Notably, Anthropic is closed-source. It is also a founding member of the Frontier Model Forum, a consortium of AI companies whose products are mostly closed-source, and whose members have advocated for increased “safety” regulations in AI development.
Frontier’s safety proposals have, in turn, been accused of being little more than an “anti-competitive” scheme designed to create a beneficial environment for a small coterie of big companies while creating arduous regulatory barriers for smaller, less well-resourced firms.
More from Gizmodo
Why Icon Of The Seas Won't Be The World's Largest Cruise Ship Forever
You'll Never Believe What Started That Viral Brawl At the ATL Airport
Tomb Raider Remaster Fixes The Worst Thing About The Original Trilogy
Game Of Thrones creators somehow narrow down one single thing they’d change about series
Sign up for Gizmodo's Newsletter. For the latest news, Facebook, Twitter and Instagram.