LLMs’ Data-Control Path Insecurity – Schneier on Security
LLMs’ Data-Control Path Insecurity Back in the 1960s, if you played a 2,600Hz tone into an AT&T pay phone, you...
How Criminals Are Using Generative AI There’s a new report on how criminals are using generative AI tools: Key Takeaways:...
Using AI-Generated Legislative Amendments as a Delaying Technique Canadian legislators proposed 19,600 amendments—almost certainly AI-generated—to a bill in an attempt...
This mini-essay was my contribution to a round table on Power and Governance in the Age of AI. It’s nothing...
Oh, how the mighty have fallen. A decade ago, social media was celebrated for sparking democratic uprisings in the Arab...
Jailbreaking LLMs with ASCII Art Researchers have demonstrated that putting words in ASCII art can cause LLMs—GPT-3.5, GPT-4, Gemini, Claude,...
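The core trick in that research is to render a filtered word as ASCII art so that keyword checks on the prompt text no longer match it, while the model can still read the shape. A minimal sketch of the idea, with a tiny hand-made letter font (the letter shapes, the word "KEY", and the prompt wording are all illustrative, not taken from the paper):

```python
# Hedged sketch of the ASCII-art jailbreak idea: encode a word as ASCII art
# so a literal keyword filter on the prompt never sees the plain string.
# FONT is a hypothetical 5-row font covering only the letters needed here.

FONT = {
    "K": ["#  #", "# # ", "##  ", "# # ", "#  #"],
    "E": ["####", "#   ", "### ", "#   ", "####"],
    "Y": ["# #", "# #", " # ", " # ", " # "],
}

def ascii_art(word: str) -> str:
    """Render each letter of `word` side by side, one font row per line."""
    rows = ["  ".join(FONT[ch][r] for ch in word) for r in range(5)]
    return "\n".join(rows)

masked = ascii_art("KEY")
prompt = (
    "The ASCII art below encodes a single word. Read it, then answer as if "
    "that word appeared in plain text:\n" + masked
)

# The literal word never appears, so a naive string filter passes the prompt.
assert "KEY" not in prompt
```

A real attack would pair this with instructions for the sensitive request itself; the point of the sketch is only that the filtered token is absent from the prompt while remaining recoverable by the model.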
Using LLMs to Unredact Text Initial results in using LLMs to unredact text based on the size of the individual-word...
A Taxonomy of Prompt Injection Attacks Researchers ran a global prompt hacking competition, and have documented the results in a...
With the world’s focus turning to misinformation, manipulation, and outright propaganda ahead of the 2024 U.S. presidential election, we know...
LLM Prompt Injection Worm Researchers have demonstrated a worm that spreads through prompt injection. Details: In one instance, the researchers,...
Artificial intelligence (AI) has been billed as the next frontier of humanity: the newly available expanse whose exploration will drive...
AIs Hacking Websites New research: LLM Agents can Autonomously Hack Websites Abstract: In recent years, large language models (LLMs) have...
New Image/Video Prompt Injection Attacks Simon Willison has been playing with the video processing capabilities of the new Gemini Pro...
Nation-state actors are using AI services and LLMs for cyberattacks (Pierluigi Paganini, February 15, 2024): Microsoft and OpenAI warn that...
Teaching LLMs to Be Deceptive Interesting research: “Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training”: Abstract: Humans are...