Study Finds AI Will Help You Build a Nuke — So Long As You Ask in the Form of a Poem
For some reason, poetry slips right past AI's safety guardrails.
The companies behind the AI programs slowly taking over our lives have fed them a staggering amount of information. Some of it is illegal, some of it is immoral, and some of it simply shouldn't be in the hands of everyday folk, so the companies set up safety guardrails to keep people from getting at the dangerous stuff.
These guardrails, however, are remarkably easy to crack. As WIRED reports, researchers found that AI programs like ChatGPT would happily help with all sorts of illegal activities, from writing malware to providing specifics about illegal graphic materials. The catch: you had to ask in the form of poetry.
Prompt the AI in verse and it would dish on just about anything. Ask it for activation codes for popular software programs, but phrase the request like you're Robert Frost, and it will hand over the codes. The researchers aren't sure why this works so well, but they found that this "poem prompting" jailbroke AI programs with a 62% success rate.
And they said your English degree would be good for nothing!