A New Trick Could Block the Misuse of Open Source AI
Researchers have developed a way to tamperproof open source large language models to prevent them from being coaxed into, say, explaining how to make a bomb.
Aug 2, 2024 - 12:00