Thu Jul 24, 2025 9:20am PST
LLMs remain vulnerable to "jailbreaking" through adversarial prompts
@ColinWright