Rethinking Jailbreak Detection of Large Vision Language Models with Representational Contrastive Scoring

Peichun Hua, Hao Li, Shanghao Shi, Zhiyuan Yu, Ning Zhang
12/12/2025
cs.CR, cs.AI, cs.CL

Abstract

Large Vision-Language Models (LVLMs) are vulnerable to a growing array of multimodal jailbreak attacks, necessitating defenses that are both generalizable to novel threats and efficient for practical deployment. Many current strategies fall short, either targeting specific attack patterns, which limits generalization, or imposing high computational overhead. While lightweight anomaly-detection methods offer a promising direction, we find that their common one-class design tends to confuse novel benign inputs with malicious ones, leading to unreliable over-rejection. To address this, we propose Representational Contrastive Scoring (RCS), a framework built on a key insight: the most potent safety signals reside within the LVLM's own internal representations. Our approach inspects the internal geometry of these representations, learning a lightweight projection to maximally separate benign and malicious inputs in safety-critical layers. This enables a simple yet powerful contrastive score that differentiates true malicious intent from mere novelty. Our instantiations, MCD (Mahalanobis Contrastive Detection) and KCD (K-nearest Contrastive Detection), achieve state-of-the-art performance on a challenging evaluation protocol designed to test generalization to unseen attack types. This work demonstrates that effective jailbreak detection can be achieved by applying simple, interpretable statistical methods to the appropriate internal representations, offering a practical path towards safer LVLM deployment. Our code is available on GitHub: https://github.com/sarendis56/Jailbreak_Detection_RCS.
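The contrastive scoring idea behind MCD can be illustrated with a minimal sketch. This is not the paper's implementation (which operates on LVLM hidden states in safety-critical layers after a learned projection); it only shows the core statistic: fit class-conditional Gaussians to benign and malicious feature sets and score an input by the *difference* of its Mahalanobis distances, so that novel-but-benign inputs far from both clusters are not automatically flagged. All names and the synthetic data are illustrative.

```python
import numpy as np

def fit_gaussian(features):
    """Fit mean and regularized precision matrix to an (n, d) feature matrix."""
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False) + 1e-3 * np.eye(features.shape[1])
    return mu, np.linalg.inv(cov)

def mahalanobis_sq(x, mu, precision):
    """Squared Mahalanobis distance of x to the fitted Gaussian."""
    diff = x - mu
    return float(diff @ precision @ diff)

def contrastive_score(x, benign_stats, malicious_stats):
    """Positive when x lies closer to the malicious cluster than the benign one.
    A merely novel input, far from BOTH clusters, yields a score near zero."""
    return (mahalanobis_sq(x, *benign_stats)
            - mahalanobis_sq(x, *malicious_stats))

# Synthetic stand-ins for projected hidden-state features.
rng = np.random.default_rng(0)
benign_feats = rng.normal(0.0, 1.0, size=(500, 8))
malicious_feats = rng.normal(3.0, 1.0, size=(500, 8))

b_stats = fit_gaussian(benign_feats)
m_stats = fit_gaussian(malicious_feats)

s_mal = contrastive_score(np.full(8, 3.0), b_stats, m_stats)  # malicious-like
s_ben = contrastive_score(np.zeros(8), b_stats, m_stats)      # benign-like
print(s_mal > 0, s_ben < 0)
```

KCD follows the same contrastive template but replaces the parametric Mahalanobis distance with distances to the k nearest neighbors in each reference set.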


Code Implementations (10)

7 stars · 0 forks · Python · Oct 7, 2025 · updated 4 months ago

[AAAI'25 (Oral)] Jailbreaking Large Vision-language Models via Typographic Visual Prompts

190 stars · 11 forks · Nov 8, 2023 · updated 9 months ago · MIT
Tags: gpt-4, jailbreak, llm, multi-modal, safety (+2 more)

Code for the paper "Jailbreak Large Vision-Language Models Through Multi-Modal Linkage"

25 stars · 1 fork · Nov 28, 2024 · updated 1 year ago

This repository contains the official implementation of "FastVLM: Efficient Vision Encoding for Vision Language Models" - CVPR 2025

7,164 stars · 537 forks · May 1, 2025 · updated 11 months ago · NOASSERTION

Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"

13,195 stars · 881 forks · Jun 18, 2021 · updated 1 year ago · MIT
Tags: adaptation, deberta, deep-learning, gpt-2, gpt-3 (+5 more)

An AI-powered research assistant that performs iterative, deep research on any topic by combining search engines, web scraping, and large language models. The goal of this repo is to provide the simplest implementation of a deep research agent, e.g., an agent that can refine its research direction over time and dive deep into a topic.

18,367 stars · 1,895 forks · Feb 4, 2025 · updated 7 months ago · MIT
Tags: agent, ai, gpt, o3-mini, research

This repository implements a zero-shot, vision–language-based anomaly detection system for surveillance videos. The approach leverages CLIP image–text embeddings, object-level analysis, and contrastive scoring to detect abnormal events (e.g., fights, vehicles, panic) in a university campus environment without training on abnormal samples.

0 stars · 0 forks · Jan 19, 2026 · updated 2 months ago

Code for ACM MM2024 paper: White-box Multimodal Jailbreaks Against Large Vision-Language Models

31 stars · 1 fork · Jul 18, 2024 · updated 1 year ago

Code for ICCV2025 paper——IDEATOR: Jailbreaking and Benchmarking Large Vision-Language Models Using Themselves

15 stars · 1 fork · Jan 25, 2025 · updated 9 months ago

A real-time image/color perception aid for daily life (web, app, and device integration). The AI provides "color recognition + contrast enhancement + accessibility score" for users with color vision deficiency, automatically detecting problematic color areas and giving real-time visual/audio guidance.

0 stars · 0 forks · Oct 30, 2025 · updated 4 months ago
