Behavior-Aware Anthropometric Scene Generation for Human-Usable 3D Layouts

Semin Jin1, 2, *, Donghyuk Kim1, 2, *, Jeongmin Ryu1, Kyung Hoon Hyun1, 2, †
1Design Informatics Lab, Hanyang University
2Human-Centered AI Design Institute, Hanyang University
*Co-first authors, †Corresponding author
Abstract

Well-designed indoor scenes should prioritize how people can act within a space rather than merely what objects to place. However, existing 3D scene generation methods emphasize visual and semantic plausibility, while insufficiently addressing whether people can comfortably walk, sit, or manipulate objects. We present a Behavior-Aware Anthropometric Scene Generation framework that leverages vision–language models to analyze object–behavior relationships and translates spatial requirements into parametric layout constraints adapted to user-specific anthropometric data.
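To make the idea of user-adapted parametric constraints concrete, here is a minimal sketch of how anthropometric measurements might be translated into layout constraints. The field names, multipliers, and constraint keys are illustrative assumptions for exposition, not the paper's actual parameters.

```python
from dataclasses import dataclass

@dataclass
class Anthropometrics:
    """User-specific body measurements in centimeters (hypothetical fields)."""
    stature: float
    shoulder_breadth: float
    popliteal_height: float

def layout_constraints(user: Anthropometrics) -> dict:
    """Translate body measurements into parametric layout constraints.

    The multipliers below are rough ergonomic heuristics chosen for
    illustration only.
    """
    return {
        # Walkways must fit the shoulders plus a sway allowance.
        "min_walkway_width_cm": user.shoulder_breadth * 1.5,
        # Seat pan height near popliteal height for comfortable sitting.
        "seat_height_cm": user.popliteal_height,
        # Keep reachable surfaces below a rough proportion of stature.
        "max_reach_height_cm": user.stature * 0.72,
    }
```

A layout solver could then treat these values as hard bounds when placing or scaling furniture for a specific user.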

Framework Overview

Methodology

[Figures: methodology pipeline; constraint inference]

Results

[Figures: evaluation interface; individual study results; group study results]

BibTeX

@inproceedings{jin2026behavioraware,
  author    = {Jin, Semin and Kim, Donghyuk and Ryu, Jeongmin and Hyun, Kyung Hoon},
  title     = {Behavior-Aware Anthropometric Scene Generation for Human-Usable 3D Layouts},
  year      = {2026}
}