Jailbreaking the matrix: How researchers are bypassing AI guardrails to make them safer

A paper written by Sumit Kumar Jha, Ph.D., a professor in the University of Florida's Department of Computer & Information Science & Engineering (CISE), contains so many science-fiction terms that you'd be forgiven for thinking it's a Hollywood script: Nullspace steering. Red teaming. Jailbreaking the matrix. But Jha's work is decidedly focused on real life, most notably strengthening the security measures built into AI tools to ensure they are safe for everyone to use.