Perceiving and Acting in First-Person: A Dataset and Benchmark for Egocentric Human-Object-Human Interactions

Liang Xu1,2,3, Chengqun Yang1, Zili Lin1,2,3, Fei Xu1, Yifan Liu1, Congsheng Xu1, Yiyi Zhang4, Jie Qin5, Xingdong Sheng6, Yunhui Liu6, Xin Jin2,3, Yichao Yan1, Wenjun Zeng2,3, Xiaokang Yang1
1MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, 2Ningbo Institute of Digital Twin, Eastern Institute of Technology, Ningbo, China, 3Ningbo Key Laboratory of Spatial Intelligence and Digital Derivative, Ningbo, China, 4MoE Key Lab of AI, School of Computer Science, Shanghai Jiao Tong University, 5Nanjing University of Aeronautics and Astronautics, 6Lenovo.
ICCV 2025

Figure 1. InterVLA is a large-scale human-object-human interaction dataset built around a vision-language-action scheme, in which an assistant provides services to an instructor based on egocentric perception and verbal commands. The dataset comprises 3.9K sequences, totaling 11.4 hours and 1.2M frames of multimodal interaction data, including egocentric and exocentric RGB videos, language commands, and high-precision human/object motions, supporting the development of general-purpose intelligent AI assistants.



Abstract

Learning action models from real-world human-centric interaction data is essential for building general-purpose intelligent assistants efficiently. However, most existing datasets cover only a narrow category of interactions and overlook the fact that AI assistants perceive and act from a first-person perspective. We argue that both generalist interaction knowledge and the egocentric modality are indispensable. In this paper, we cast the manual assistance task in a vision-language-action framework, where the assistant provides services to the instructor based on egocentric vision and verbal commands. With our hybrid RGB-MoCap system, pairs of assistants and instructors interact with multiple objects and the scene following GPT-generated scripts. Under this setting, we present InterVLA, the first large-scale human-object-human interaction dataset with 11.4 hours and 1.2M frames of multimodal data, spanning two egocentric and five exocentric videos, accurate human/object motions, and verbal commands. Furthermore, we establish novel benchmarks on egocentric human motion estimation, interaction synthesis, and interaction prediction with comprehensive analysis. We believe that the InterVLA testbed and its benchmarks will foster future work on building AI agents in the physical world.

The InterVLA Dataset

InterVLA is a large-scale dataset comprising 3.9K interaction sequences with more than 11.4 hours and 1.2M frames of multimodal interaction data.


Figure 2. Components of the InterVLA dataset. For the vision modality, we capture (a) two egocentric and (b) five exocentric RGB videos; for the language modality, we supply (c) GPT-generated commands; for the action modality, we provide (d) high-precision human and object motions during the interactions.
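The page does not describe a loading API, but as a minimal sketch of how a single InterVLA sequence could be organized around the four modalities in Figure 2, the record below may help; all field names, shapes, and the motion representation are assumptions for illustration, not the released format.

```python
from dataclasses import dataclass
import numpy as np


@dataclass
class InterVLASample:
    """Hypothetical per-sequence record mirroring the modalities in Figure 2."""
    ego_videos: list[np.ndarray]    # (a) two egocentric RGB streams, each of shape (T, H, W, 3)
    exo_videos: list[np.ndarray]    # (b) five exocentric RGB streams, each of shape (T, H, W, 3)
    command: str                    # (c) GPT-generated verbal command issued by the instructor
    human_motion: np.ndarray        # (d) per-frame pose parameters for both people, shape (T, 2, D); representation assumed
    object_motion: np.ndarray       # (d) per-frame 6-DoF object poses, shape (T, num_objects, 7) as translation + quaternion
```

A loader built on such a record would simply iterate over the 3.9K sequences and yield one `InterVLASample` per interaction, keeping the vision, language, and action streams time-aligned.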

Tasks and Benchmark


Figure 3. Task formulation of InterVLA. We establish multiple downstream tasks: egocentric human motion estimation, text-driven interaction synthesis, motion-based interaction prediction, and vision-language-based interaction prediction. These benchmarks highlight the challenges posed by InterVLA and will benefit practical, intelligent AI assistants.
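As one illustration of the benchmark interfaces in Figure 3, the sketch below frames vision-language-based interaction prediction as mapping past egocentric frames plus a verbal command to future assistant motion. The function signature, tensor shapes, and motion dimensionality are assumptions, not the released evaluation protocol.

```python
import numpy as np


def predict_interaction(
    ego_frames: np.ndarray,  # past egocentric RGB frames, shape (T_past, H, W, 3)
    command: str,            # instructor's verbal command, e.g. "please hand me the cup"
    horizon: int = 30,       # number of future frames to predict
) -> np.ndarray:
    """Hypothetical task interface: return predicted future assistant motion of shape (horizon, D).

    A real model would encode the egocentric video and the language command and then
    decode future human/object motion; this placeholder only shows the expected
    input/output contract of the vision-language-based interaction prediction task.
    """
    motion_dim = 75  # assumed per-frame pose dimensionality
    return np.zeros((horizon, motion_dim), dtype=np.float32)
```

The other benchmarks follow analogous contracts: egocentric motion estimation maps egocentric video to the camera wearer's motion, text-driven synthesis maps a command to an interaction sequence, and motion-based prediction maps observed motion history to future motion.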