arXiv 2026

Adapting MLLMs for
Nuanced Video Retrieval

Visual Geometry Group, University of Oxford
TARA teaser: text-only tuning setup and tSNE before/after showing improved separation for chiral actions.
TARA fine-tunes a Multimodal LLM for nuanced video retrieval using only text triplets. Despite never seeing video during training, it achieves state-of-the-art on temporal, negation, and multimodal retrieval benchmarks, while also closing the modality gap between video and text embeddings.

What is Nuanced Video Retrieval?

Standard video retrieval models struggle with queries that require fine-grained semantic understanding. We study three distinct dimensions of nuance that everyday retrieval systems fail to handle correctly.

Temporal Nuance

Chiral Actions

Many actions are chiral — they have a temporal mirror that looks identical frame-by-frame but describes the opposite event. "Opening a door" and "closing a door" share the same visual content played in reverse. A retrieval model must understand the direction of time to distinguish them. We evaluate this on the CiA-Retrieval and RTime benchmarks.

Qualitative temporal retrieval examples: chiral action pairs where TARA correctly distinguishes the direction of motion.
Temporal nuance. Retrieval examples for chiral queries — standard models retrieve the temporally opposite action; TARA selects the correct one.
Negation Nuance

Negated Queries

Users often want to exclude certain attributes: "a dog not on grass", "someone running without a ball". CLIP-style models are notoriously insensitive to negation — they tend to retrieve results that match the positive noun phrase and ignore the negator entirely. We evaluate on NegBench (image and video) and the Adverb Recognition benchmark.

Qualitative negation retrieval examples: TARA correctly excludes items matching the negated attribute.
Negation nuance. Retrieval examples with negated text queries — TARA correctly excludes videos matching the negated attribute while the base model ignores the negation.
Multimodal Nuance

Composed Retrieval

Sometimes the query itself is multimodal: a reference video combined with a text edit instruction (e.g., "same scene but with snow instead of rain"). The model must fuse both modalities to retrieve the correctly modified target. We evaluate on the WebVid-CoVR benchmark.

Qualitative composed video retrieval examples: video + text edit instruction queries.
Multimodal nuance. Composed retrieval examples — given a source video and a text modification, TARA retrieves the correctly edited target video.

Abstract

Our objective is to build an embedding model that captures the nuanced relationship between a search query and candidate videos. We cover three aspects of nuanced retrieval: (i) temporal, (ii) negation, and (iii) multimodal. For temporal nuance, we consider chiral actions that require distinguishing between temporally opposite actions such as "opening a door" vs. "closing a door". For negation, we consider queries with negators such as "not" and "none" that allow users to specify what they do not want. For multimodal nuance, we consider the task of composed retrieval, where the query comprises a video along with a text edit instruction. To that end, we repurpose a Multimodal Large Language Model (MLLM) trained to generate text into an embedding model. We fine-tune it with a contrastive loss on text alone, with carefully sampled hard negatives that instill the desired nuances in the learned embedding space. Despite the text-only training, our method achieves state-of-the-art performance on all benchmarks for nuanced video retrieval. We also show that text-only training reduces the modality gap between text and video embeddings, leading to better organization of the embedding space.

Method: The TARA Recipe

TARA (Text Adapted Retrieval Alignment) repurposes a Multimodal LLM as a joint video-text embedding model. We extract embeddings via an "Explicit One-word Limitation" (EOL) prompt — e.g., "<video>: Summarize the video in one word:" — and use the final hidden state as the embedding. We then fine-tune with a contrastive loss on text triplets with carefully engineered hard negatives. This text-only training reduces the modality gap between video and text embeddings, which explains its surprising effectiveness.
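For concreteness, a minimal sketch of the extraction step, assuming a Hugging Face-style causal MLLM and processor that expose hidden states; `model`, `processor`, and the text-side prompt are placeholders and assumptions, not the paper's exact implementation.

import torch
import torch.nn.functional as F

EOL_VIDEO_PROMPT = "<video>: Summarize the video in one word:"

@torch.no_grad()
def eol_embedding(model, processor, video=None, text=None):
    # `model` / `processor` stand in for any causal MLLM whose forward
    # pass can return hidden states (output_hidden_states=True).
    if video is not None:
        inputs = processor(text=EOL_VIDEO_PROMPT, videos=video, return_tensors="pt")
    else:
        # Text-side prompt mirroring the video prompt (assumed template).
        inputs = processor(text=f'"{text}": Summarize the sentence in one word:', return_tensors="pt")
    out = model(**inputs, output_hidden_states=True)
    emb = out.hidden_states[-1][:, -1, :]   # final hidden state of the last token
    return F.normalize(emb, dim=-1)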

A. Temporal Nuance

Extract chiral verb-object pairs from Ego4D. Generate temporally antonymous sentences (e.g., "closes the box" → "opens the box") using an LLM as hard negatives.

B. Negation Nuance

Filter NLI triplets where the hard-negative uses explicit negation operators (not, never, none…), training the model to precisely understand what is absent.
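A sketch of the kind of filter this step implies; the negator list below is illustrative rather than the paper's exact set.

import re

NEGATORS = {"not", "never", "none", "no", "nobody", "nothing", "without", "neither", "nor"}

def keeps_triplet(triplet):
    # Keep (anchor, positive, hard_negative) triplets whose hard negative
    # contains an explicit negation operator.
    tokens = re.findall(r"[a-z']+", triplet["hard_negative"].lower())
    return any(tok in NEGATORS for tok in tokens)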

C. Multimodal Nuance

Translate Composed Video Retrieval to a text task: anchor = source caption + edit instruction, positive = edited caption, negative = original (unedited) caption.
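A small sketch of this text-only translation; the field names and the anchor template are illustrative assumptions, not the dataset's actual schema.

def covr_to_text_triplet(example):
    # Assumed fields: 'source_caption', 'edit_instruction', 'edited_caption'.
    anchor = f"{example['source_caption']}. Edit: {example['edit_instruction']}"
    positive = example["edited_caption"]
    negative = example["source_caption"]   # the unedited caption serves as the hard negative
    return anchor, positive, negative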

NLI-Nuance dataset construction with temporal, negation, and multimodal text triplets.
NLI-Nuance. Dataset construction with targeted hard negatives for each nuance type: temporal antonyms, negation operators, and composed edit instructions.
The resulting dataset NLI-Nuance has just 20,000 text triplets (8K NLI + 1K temporal + 1K negation + 10K multimodal). Fine-tuning only the LLM weights (vision encoder frozen) for 2 epochs takes less than one hour on 8× RTX A6000 GPUs.
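For reference, a minimal sketch of a contrastive objective over these text triplets, using in-batch negatives plus the explicit hard negatives; the temperature value and batching scheme are assumptions, not the paper's exact settings.

import torch
import torch.nn.functional as F

def triplet_infonce(anchor, positive, hard_negative, temperature=0.05):
    # anchor, positive, hard_negative: (B, D) L2-normalised embeddings.
    # Candidates for each anchor: all in-batch positives plus all in-batch
    # hard negatives; only the anchor's own positive is the target.
    candidates = torch.cat([positive, hard_negative], dim=0)        # (2B, D)
    logits = anchor @ candidates.T / temperature                    # (B, 2B)
    targets = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, targets)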


Results: Temporal Nuance

Key Takeaway: TARA achieves state-of-the-art on the CiA-Retrieval benchmark across all three datasets and all difficulty settings (Chiral, Static, All), while being fine-tuned on text alone.

CiA-Retrieval (mAP ↑)

Chiral: the gallery contains the correct action and its temporal opposite. Static: the gallery contains the correct action and temporally irrelevant actions. All: the full gallery. Higher is better.

Method                           Data (K)   SSv2                     EPIC                     Charades
                                            Chiral  Static  All      Chiral  Static  All      Chiral  Static  All
CLIP (avg.)                      –          52.0    18.3    12.7     51.0    12.0    7.0      48.4    13.2    6.5
InternVideo 2                    –          52.5    35.7    20.6     48.3    22.1    8.8      50.7    11.9    11.9
VLM2Vec-V2 (multimodal)          1700       58.8    27.8    15.9     49.4    25.4    12.9     53.5    18.8    10.5
CaRe                             275        66.4    46.2    23.7     62.3    25.0    16.9     56.1    25.2    12.9
ArrowRL                          25         67.5    33.8    22.5     55.7    12.4    9.6      57.1    18.6    12.2
Qwen3VL-Emb.                     NA         72.0    43.4    31.8     62.1    28.6    20.6     65.3    37.3    26.1
Tarsier 2 (base)                 –          77.7    26.9    24.0     67.4    22.0    15.3     60.5    13.4    9.2
Tarsier 2 + TARA (Ours)          20         88.9    66.7    58.6     81.1    45.6    38.9     71.4    38.6    29.0
tSNE embeddings for chiral pairs before and after fine-tuning, showing improved text-video alignment.
tSNE visualisation. Chiral action embeddings before (base) and after TARA fine-tuning: temporally opposite pairs become clearly separable.

Reversed in Time (RTime) — R@1 ↑

Arrow-of-time benchmark: given a caption, retrieve the correct vs. time-reversed video (T2V), and given a video, choose between the correct and time-reversed caption (V2T).
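In embedding space this reduces to picking whichever candidate is closer under cosine similarity; a minimal sketch of the V2T choice, reusing the eol_embedding helper sketched in the Method section.

def pick_caption(video_emb, caption_emb, reversed_emb):
    # V2T variant: all inputs are (1, D) L2-normalised embeddings,
    # e.g. produced by the eol_embedding sketch above.
    score_fwd = (video_emb @ caption_emb.T).item()
    score_rev = (video_emb @ reversed_emb.T).item()
    return "correct" if score_fwd > score_rev else "reversed"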

Method                          T2V    V2T
Singularity (zero-shot)         48.7   49.9
InternVideo2-1B (zero-shot)     50.0   51.0
Qwen2.5VL (zero-shot)           53.4   66.6
Tarsier 2 (zero-shot)           58.8   59.5
— fine-tuned on RTime —
CLIP4Clip                       49.8   49.8
UMT-Neg                         54.5   54.2
ArrowRL-Qwen2.5                 55.6   69.6
Tarsier 2 + TARA (Ours)         67.2   77.9


Results: Negation Nuance

Key Takeaway: TARA (zero-shot) dramatically outperforms all CLIP- and NegCLIP-based models fine-tuned on negation-augmented caption data, on both image (COCO) and video (MSR-VTT) retrieval.

NegBench — R@5 ↑

Std.: standard queries. Neg.: negation queries (e.g., "a dog but not on grass"). Higher is better.

Method                     Fine-tuning data          COCO              MSR-VTT
                                                     Std.    Neg.      Std.    Neg.
CLIP (none)                None                      54.8    48.0      50.6    45.8
CLIP (CC)                  CC (img+txt)              58.8    54.5      53.7    49.9
NegCLIP (none)             None                      68.7    64.4      53.7    51.0
NegCLIP (CC-NegCap)        CC-NegCap                 68.6    67.5      56.5    54.6
Tarsier 2 (base)           None                      33.3    21.5      25.6    18.9
Tarsier 2 + TARA (Ours)    NLI-Nuance (text only)    76.7    73.6      65.1    65.0

Adverb Recognition — Accuracy ↑

Given a video and an action verb, select the correct adverb between two choices (e.g., "slowly" vs. "quickly").

Method                            VATEX   MSRVTT
Chance                            50.0    50.0
Action Modifiers (semi-sup.)      64.2    –
AM + Pseudo-labels                67.5    70.5
Tarsier 2 (base)                  57.4    56.6
Tarsier 2 + TARA (Ours)           74.8    76.8


Results: Multimodal Nuance

Key Takeaway: TARA handles queries composed of a video + a text edit instruction (Composed Video Retrieval). It outperforms even methods fine-tuned directly on the WebVid-CoVR dataset, using only text during training.

WebVid-CoVR ↑

Query = source video + text edit instruction. Goal: retrieve the edited video. Evaluated on 2,556 query-video pairs.
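One plausible way to embed the composed query with the same MLLM is to place the source video and the edit instruction in a single EOL-style prompt; the sketch below is an assumption about the query format, not the paper's confirmed template.

import torch
import torch.nn.functional as F

@torch.no_grad()
def composed_query_embedding(model, processor, source_video, edit_instruction):
    # Hypothetical prompt fusing the source video with the text edit;
    # the template is an assumption, not the paper's exact wording.
    prompt = f"<video> Apply this change: {edit_instruction}. Summarize the edited video in one word:"
    inputs = processor(text=prompt, videos=source_video, return_tensors="pt")
    out = model(**inputs, output_hidden_states=True)
    return F.normalize(out.hidden_states[-1][:, -1, :], dim=-1)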

Method                       R@1    R@5    R@10
Zero-shot
BLIP (V+T)                   45.5   70.5   79.5
CLIP (V+T)                   44.4   69.1   77.6
Tarsier 2 + TARA (Ours)      66.3   86.7   91.5
Fine-tuned on CoVR data
CLIP (V+T)                   50.6   77.1   85.1
Ventura et al.               53.1   79.9   86.9
Ventura et al. (v2)          59.8   83.8   91.3

Results: Standard Benchmarks (MMEB-V2)

Key Takeaway: Text-only fine-tuning does not hurt standard video understanding. TARA comprehensively improves upon Tarsier 2 and is competitive with models trained on orders of magnitude more multimodal data.

Video classification and retrieval tasks from MMEB-V2. TARA ⊕ Q3VLE = ensemble of TARA and Qwen3VL-Embedding.

Method                             Video Classification                 Video Retrieval
                                   UCF    HMDB   K700   BF     SSv2     MSR    MSVD   DDMo   YC2    VTX
VLM2Vec-V2 (multimodal)            60.0   40.9   38.0   14.8   42.8     28.3   48.1   30.4   10.6   26.5
LamRA-Qwen2                        60.4   40.5   42.3   16.9   36.3     22.1   46.1   24.8   9.2    19.1
TTE-7B                             78.6   63.9   55.6   34.2   55.3     39.5   59.4   36.3   20.3   32.6
Tarsier 2 (base)                   37.9   17.4   29.6   36.1   15.9     9.5    39.8   12.2   3.9    16.6
Tarsier 2 + TARA (Ours)            80.3   69.0   59.4   45.6   76.4     40.7   82.2   36.8   16.7   53.2
Qwen3VL-Embedding                  94.6   77.5   71.2   67.2   76.9     53.8   87.2   56.1   32.8   64.8
TARA ⊕ Qwen3VL-Emb. (Ensemble)     94.3   78.3   70.0   68.6   81.4     54.5   88.4   56.1   32.1   66.2

Apples-to-apples comparison: all methods below use Qwen2VL-7B as the base model.

Method                   Fine-tuning modality   CiA-SSv2 Chiral   RTime T2V   RTime V2T   CoVR R@1   Avg.
Base (Qwen2VL-7B)        –                      60.2              59.9        64.7        15.5       40.0
ArrowRL                  video+text             67.5              57.1        68.8        41.8       51.3
CaRe (Stage 2)           video+text             66.4              59.8        69.9        35.6       51.9
LAMRA                    image+text             55.3              57.9        63.9        31.5       41.1
VLM2Vec-2.0              video+image+text       58.8              54.3        61.8        42.9       46.9
Base + TARA (Ours)       text only              72.7              65.9        73.8        44.8       56.8

Analysis: Why Does Text-Only Training Work?

We study the modality gap — the systematic offset between video and text embeddings in the shared embedding space. Despite sharing an LLM backbone, MLLMs exhibit a clear modality gap because video and text tokens arrive through different pathways (vision encoder + MLP projection vs. learned text embeddings). This gap wastes representational capacity and skews cosine similarities, hurting retrieval.

Modality gap analysis on MSRVTT: EOL alone does not close gap, text-only fine-tuning reduces it.
Modality gap on MSR-VTT. EOL prompting alone does not close the gap; text-only TARA fine-tuning substantially reduces ‖Δgap‖ and improves alignment.
Modality-gap visualization for Tarsier 2 on MSRVTT; TARA reduces gap substantially.
Tarsier 2 modality gap. TARA reduces ‖Δgap‖ from 0.49 to 0.20 on Tarsier 2.
Token-logit visualization for video/text embeddings showing more semantically relevant top tokens after TARA.
Token logit analysis. Top-activated tokens for video and text embeddings before and after TARA — the model learns more semantically coherent representations.

EOL prompt alone is insufficient

While Jiang et al. showed EOL prompts dissolve the modality gap for images with LLaVA-NeXT, we find this does not generalize to video-text pairs for Qwen2VL, InternVL3, Tarsier, and Qwen3VL. The gap persists at ‖Δgap‖ ≈ 0.35–0.68.

Text-only TARA closes the gap

TARA reduces ‖Δgap‖ from 0.49 → 0.20 for Tarsier 2 via the uniformity pressure of contrastive training: text embeddings spread on the hypersphere, pulling both modality centroids toward the origin and closer to each other.
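The numbers below can be reproduced, up to implementation details, as the distance between the modality centroids of L2-normalised embeddings; a minimal sketch assuming this standard definition of ‖Δgap‖ (the paper may normalise slightly differently).

import torch
import torch.nn.functional as F

def modality_gap(video_embs: torch.Tensor, text_embs: torch.Tensor) -> float:
    # video_embs: (N_v, D), text_embs: (N_t, D) for a paired video-caption set.
    v_centroid = F.normalize(video_embs, dim=-1).mean(dim=0)
    t_centroid = F.normalize(text_embs, dim=-1).mean(dim=0)
    return (v_centroid - t_centroid).norm().item()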

Modality Gap Measurements (‖Δgap‖ ↓, lower is better)

Model            No EOL   With EOL   After TARA
Qwen2VL-7B       0.39     0.35       0.20
Tarsier 2        0.49     0.51       0.20
InternVL3-8B     0.43     0.68       –
Qwen3VL-8B       0.56     0.62       –

BibTeX

@article{bagad2026tara,
  title   = {Adapting MLLMs for Nuanced Video Retrieval},
  author  = {Bagad, Piyush and Zisserman, Andrew},
  journal = {arXiv preprint arXiv:2512.13511},
  year    = {2026}
}