AD Research Hub — Anomaly Detection in Computer Vision
Weakly Supervised Video Anomaly Detection with Anomaly-Connected Components and Intention Reasoning

Authors

Yu Wang, Shengjie Zhao

CVPR-2026 · direct anomaly

Score

40

Abstract

Weakly supervised video anomaly detection (WS-VAD) involves identifying the temporal intervals that contain anomalous events in untrimmed videos, where only video-level annotations are provided as supervisory signals. However, a key limitation persists in WS-VAD: because dense frame-level annotations are absent, existing methods often struggle to learn anomaly semantics effectively. To address this issue, we propose a novel framework named LAS-VAD, short for Learning Anomaly Semantics for WS-VAD, which integrates an anomaly-connected component mechanism and an intention awareness mechanism…
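Because only video-level labels are available in this setting, WS-VAD methods typically score short video segments and train with a multiple-instance ranking objective. A minimal sketch of that standard MIL baseline (this is the conventional WS-VAD formulation, not the anomaly-connected component or intention mechanisms the abstract names):

```python
import numpy as np

def mil_ranking_loss(normal_scores, anomalous_scores, margin=1.0):
    """Hinge loss between the top-scoring segment of an anomalous video
    (positive bag) and the top-scoring segment of a normal video
    (negative bag) — the classic MIL objective for weakly supervised VAD."""
    top_anom = np.max(anomalous_scores)  # most anomalous segment in positive bag
    top_norm = np.max(normal_scores)     # hardest (highest-scoring) normal segment
    return max(0.0, margin - top_anom + top_norm)

# Toy per-segment anomaly scores for one normal and one anomalous video.
normal = np.array([0.1, 0.2, 0.15, 0.05])
anomalous = np.array([0.1, 0.9, 0.3, 0.2])
loss = mil_ranking_loss(normal, anomalous)
```

Driving the loss to zero pushes at least one segment of each anomalous video to score at least `margin` above every segment of every normal video, which is how frame-level localization emerges from video-level labels alone.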

Tags

anomaly detection · anomalous · abnormal behavior · video anomaly · arxiv · cvpr 2026

Comments

Accepted by CVPR 2026

Links

Paper Page · arXiv Abstract · arXiv PDF


Related Papers

No Need For Real Anomaly: MLLM Empowered Zero-Shot Video Anomaly Detection

Zunkai Dai, Ke Li, Jiajia Liu +2

The collection and detection of video anomaly data have long been challenging due to the rarity and spatio-temporal scarcity of anomalous events. Existing video anomaly detection (VAD) methods underperform in open-world scenarios; key contributing factors include limited dataset diversity and inadequate understanding of context-dependent anomalous semantics. To address these issues, i) we propose LAVIDA, an end-to-end zero-shot video anomaly detection framework; ii) LAVIDA employs an Anomaly Exposure Sampler that transforms segmented objects into pseudo-anomalies to enhance model adaptability…

CVPR-2026 · direct anomaly · 30
anomaly detection · anomalous · video anomaly · arxiv +1
ViT · Zero-shot
PDF · arXiv

VisualAD: Language-Free Zero-Shot Anomaly Detection via Vision Transformer

Yanning Hou, Peiyuan Li, Zirui Liu +4

Zero-shot anomaly detection (ZSAD) requires detecting and localizing anomalies without access to target-class anomaly samples. Mainstream methods rely on vision-language models (VLMs) such as CLIP: they build hand-crafted or learned prompt sets for normal and abnormal semantics, then compute image-text similarities for open-set discrimination. While effective, this paradigm depends on a text encoder and cross-modal alignment, which can lead to training instability and parameter redundancy. This work revisits the necessity of the text branch in ZSAD and presents VisualAD, a purely visual framework…
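The prompt-based paradigm this abstract critiques can be sketched concretely: embed "normal" and "abnormal" prompt sets, then softmax the image-text cosine similarities. The vectors below are toy stand-ins for real CLIP embeddings, and the function names are hypothetical; this illustrates the baseline VisualAD aims to replace, not VisualAD itself:

```python
import numpy as np

def zsad_abnormal_prob(image_feat, normal_prompts, abnormal_prompts, temp=0.07):
    """Compare an image embedding against averaged 'normal' vs 'abnormal'
    text-prompt embeddings and softmax the cosine similarities.
    All feature vectors here are placeholders for CLIP features."""
    def unit(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)
    img = unit(image_feat)
    proto_normal = unit(normal_prompts.mean(axis=0))    # prompt-set prototype
    proto_abnormal = unit(abnormal_prompts.mean(axis=0))
    logits = np.array([img @ proto_normal, img @ proto_abnormal]) / temp
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return probs[1]  # probability mass on the 'abnormal' class

# Toy 2-D embeddings: the image aligns with the abnormal prompt direction.
score = zsad_abnormal_prob(
    np.array([0.1, 1.0]),
    normal_prompts=np.array([[1.0, 0.0], [0.9, 0.1]]),
    abnormal_prompts=np.array([[0.0, 1.0], [0.1, 0.9]]),
)
```

Note that every step after feature extraction hinges on the text prototypes, which is exactly the dependency on a text encoder and cross-modal alignment that a language-free method would remove.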

CVPR-2026 · direct anomaly · 20
anomaly detection · open-set · arxiv · cvpr 2026
CLIP · Vision Transformer · Zero-shot
PDF · arXiv

UniMMAD: Unified Multi-Modal and Multi-Class Anomaly Detection via MoE-Driven Feature Decompression

Yuan Zhao, Youwei Pang, Lihe Zhang +4

Existing anomaly detection (AD) methods often treat modality and class as independent factors. Although this paradigm has enriched the development of AD research branches and produced many specialized models, it has also led to fragmented solutions and excessive memory overhead. Moreover, reconstruction-based multi-class approaches typically rely on shared decoding paths, which struggle to handle large variations across domains, resulting in distorted normality boundaries, domain interference, and high false alarm rates. To address these limitations, we propose UniMMAD, a unified framework…

CVPR-2026 · direct anomaly · 15
anomaly detection · arxiv · cvpr 2026
Reconstruction-based
PDF · arXiv

SubspaceAD: Training-Free Few-Shot Anomaly Detection via Subspace Modeling

Camile Lendering, Erkut Akdag, Egor Bondarev

Detecting visual anomalies in industrial inspection often requires training with only a few normal images per category. Recent few-shot methods achieve strong results by employing foundation-model features, but typically rely on memory banks, auxiliary datasets, or multi-modal tuning of vision-language models. We therefore question whether such complexity is necessary given the feature representations of vision foundation models. To answer this question, we introduce SubspaceAD, a training-free method that operates in two simple stages. First, patch-level features are extracted from a small set…
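One common way to score anomalies against a small set of normal patch features is to fit a linear subspace to them and score test patches by their residual after projection. The sketch below is a hypothetical, training-free stand-in under that assumption; the abstract is truncated, so SubspaceAD's actual second stage may differ:

```python
import numpy as np

def fit_normal_subspace(patch_feats, k):
    """Fit a k-dimensional linear subspace to normal patch features via SVD.
    A stand-in for the subspace modeling the abstract describes."""
    mean = patch_feats.mean(axis=0)
    _, _, vt = np.linalg.svd(patch_feats - mean, full_matrices=False)
    return mean, vt[:k]  # rows of vt[:k] span the 'normal' subspace

def anomaly_score(patch_feat, mean, basis):
    """Distance from a patch to its projection onto the normal subspace:
    patches the subspace cannot reconstruct score high."""
    centered = patch_feat - mean
    residual = centered - basis.T @ (basis @ centered)
    return float(np.linalg.norm(residual))

# Toy data: normal patches lie in the z=0 plane of a 3-D feature space.
rng = np.random.default_rng(0)
normal_patches = rng.normal(size=(50, 3))
normal_patches[:, 2] = 0.0
mean, basis = fit_normal_subspace(normal_patches, k=2)
in_plane = anomaly_score(np.array([1.0, -1.0, 0.0]), mean, basis)   # ~0
off_plane = anomaly_score(np.array([0.0, 0.0, 5.0]), mean, basis)  # ~5
```

Unlike memory-bank methods, nothing is stored per patch at test time beyond the mean and the k basis vectors, which is what makes a subspace formulation attractive when complexity is the question.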

CVPR-2026 · close adjacent · 10
anomaly detection · arxiv · cvpr 2026
Memory Bank · Few-shot · MVTec · MVTec-AD
PDF · arXiv

GS-CLIP: Zero-shot 3D Anomaly Detection by Geometry-Aware Prompt and Synergistic View Representation Learning

Zehao Deng, An Liu, Yan Wang

Zero-shot 3D Anomaly Detection is an emerging task that aims to detect anomalies in a target dataset without any target training data, which is particularly important in scenarios constrained by sample scarcity and data privacy concerns. While current methods adapt CLIP by projecting 3D point clouds into 2D representations, they face challenges. The projection inherently loses some geometric details, and the reliance on a single 2D modality provides an incomplete visual understanding, limiting their ability to detect diverse anomaly types. To address these limitations, we propose the Geometry-…

CVPR-2026 · close adjacent · 9
anomaly detection · arxiv · cvpr 2026
CLIP · Zero-shot
PDF · arXiv

Track Any Anomalous Object: A Granular Video Anomaly Detection Pipeline

Yuzhi Huang, Chenxin Li, Haitao Zhang +9

CVPR-2025 · direct anomaly · 34
anomaly detection · anomalous · video anomaly
PDF · arXiv
