Jailbreaking LLM-Controlled Robots
December 11, 2024

Surprising no one, it’s easy to trick an LLM-controlled robot into ignoring its safety instructions.
This is going to be interesting. It’s a video of someone trying on a variety of printed full-face masks. They won’t fool anyone for long, but will survive casual scrutiny....
This essay was written with Nathan E. Sanders. It originally appeared as a response to Evgeny Morozov in Boston Review‘s forum, “The AI We Deserve.”
For a technology that...
This tool seems to do a pretty good job.
The company’s Mobile Threat Hunting feature uses a combination of malware signature-based detection, heuristics, and machine learning to look for...
In 2025, AI is poised to change every aspect of democratic politics—but it won’t necessarily be for the worse.
India’s prime minister, Narendra Modi, has used AI to translate...
I recently wrote about the new iOS feature that forces an iPhone to reboot after it’s been inactive for a longish period of time. Here are the technical details, discovered...
Interesting research: Using jet propulsion inspired by squid, researchers demonstrate a microjet system that delivers medications directly into tissues, matching the effectiveness of traditional needles.
These are two attacks against the system components surrounding LLMs:
We propose that LLM Flowbreaking, following jailbreaking and prompt injection, joins as the third in the growing list of...