J. Edward Swan II

A Methodology for Quantifying Medium- and Far-Field Depth Perception in Optical, See-Through Augmented Reality

J. Edward Swan II, Mark A. Livingston, Harvey S. Smallman, Joseph L. Gabbard, Dennis Brown, Yohan Baillot, Simon J. Julier, Greg S. Schmidt, Catherine Zanbaka, Deborah Hix, and Lawrence Rosenblum. A Methodology for Quantifying Medium- and Far-Field Depth Perception in Optical, See-Through Augmented Reality. Technical Report #MSU-050531, Department of Computer Science and Engineering, Mississippi State University, 2005.

Abstract

A fundamental problem in optical, see-through augmented reality (AR) is characterizing how it affects human depth perception. This problem is important, because AR system developers need to both place graphics in arbitrary spatial relationships with real-world objects, and to know that users will perceive them in the same relationships. However, achieving this is difficult, because the graphics are physically drawn directly in front of the eyes. Furthermore, AR makes possible enhanced perceptual techniques that have no real-world equivalent, such as x-ray vision, where AR users perceive that graphics are located behind opaque surfaces. Also, to date AR depth perception research has examined near-field distances, yet many compelling AR applications operate at longer distances, and human depth perception itself operates differently at medium-field and far-field distances. This paper describes the first medium- and far-field AR depth perception experiment that provides metric results. We describe a task and experimental design that measures AR depth perception, with strong linear perspective depth cues, and matches results found in the general depth perception literature. Our experiment quantifies how depth estimation error grows with increasing distance across a range of medium- to far-field distances, and we also find evidence for a switch in bias from underestimating to overestimating depth at approximately 19.4 meters. Our experiment also examined the x-ray vision condition, and found initial evidence of how depth estimation error grows for occluded versus non-occluded graphics.

BibTeX

@TechReport{TR05-dpar, 
  author =      {J. Edward {Swan~II} and Mark A. Livingston and Harvey S. Smallman and 
                 Joseph L. Gabbard and Dennis Brown and Yohan Baillot and Simon J. Julier and 
                 Greg S. Schmidt and Catherine Zanbaka and Deborah Hix and Lawrence Rosenblum}, 
  title =       {A Methodology for Quantifying Medium- and Far-Field Depth Perception in 
                 Optical, See-Through Augmented Reality}, 
  type =        {Technical Report}, 
  number =      {\#MSU-050531}, 
  institution = {Department of Computer Science and Engineering, Mississippi State University}, 
  month =       {May}, 
  year =        2005, 
  abstract =    { 
A fundamental problem in optical, see-through augmented reality (AR) is 
characterizing how it affects human depth perception.  This problem is 
important, because AR system developers need to both place graphics in arbitrary 
spatial relationships with real-world objects, and to know that users will 
perceive them in the same relationships.  However, achieving this is difficult, 
because the graphics are physically drawn directly in front of the eyes. 
Furthermore, AR makes possible enhanced perceptual techniques that have no 
real-world equivalent, such as x-ray vision, where AR users perceive that 
graphics are located behind opaque surfaces.  Also, to date AR depth perception 
research has examined near-field distances, yet many compelling AR applications 
operate at longer distances, and human depth perception itself operates 
differently at medium-field and far-field distances. 
This paper describes the first medium- and far-field AR depth perception 
experiment that provides metric results.  We describe a task and experimental 
design that measures AR depth perception, with strong linear perspective depth 
cues, and matches results found in the general depth perception literature.  Our 
experiment quantifies how depth estimation error grows with increasing distance 
across a range of medium- to far-field distances, and we also find evidence for 
a switch in bias from underestimating to overestimating depth at $\sim$19.4 meters. 
Our experiment also examined the x-ray vision condition, and found initial 
evidence of how depth estimation error grows for occluded versus non-occluded 
graphics. 
}, 
}