
Resolving Multiple Occluded Layers in Augmented Reality

Mark A. Livingston, J. Edward Swan II, Joseph L. Gabbard, Tobias H. Höllerer, Deborah Hix, Simon J. Julier, Yohan Baillot, and Dennis Brown. Resolving Multiple Occluded Layers in Augmented Reality. In Technical Papers, The IEEE / ACM International Symposium on Mixed and Augmented Reality (ISMAR 2003), pp. 56–65, IEEE Computer Society, October 2003.
Winner of a 2004 NRL Alan Berman Publication Award.

Download

[PDF] 

Abstract

A useful function of augmented reality (AR) systems is their ability to visualize occluded infrastructure directly in a user's view of the environment. This is especially important for our application context, which utilizes mobile AR for navigation and other operations in an urban environment. A key problem in the AR field is how to best depict occluded objects in such a way that the viewer can correctly infer the depth relationships between different physical and virtual objects. Showing a single occluded object with no depth context presents an ambiguous picture to the user. But showing all occluded objects in the environment leads to the "Superman's X-ray vision" problem, in which the user sees too much information to make sense of the depth relationships of objects. Our efforts differ qualitatively from previous work in AR occlusion, because our application domain involves far-field occluded objects, which are tens of meters distant from the user. Previous work has focused on near-field occluded objects, which are within or just beyond arm's reach and which therefore involve different perceptual cues. We designed and evaluated a number of sets of display attributes. We then conducted a user study to determine which representations best express occlusion relationships among far-field objects. We identify a drawing style and opacity settings that enable the user to accurately interpret three layers of occluded objects, even in the absence of perspective constraints.
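
As a rough illustration of the idea in the abstract (not the paper's implementation, and not the specific drawing styles or opacity values it evaluated), the sketch below shows one hypothetical way to map an object's occlusion layer to a drawing style and an opacity value, so that objects hidden behind more layers are rendered progressively more transparent. All names and numeric settings are illustrative assumptions.

"""
Illustrative sketch only: map each AR object's occlusion layer
(how many surfaces hide it from the viewer) to a drawing style
and opacity. All styles and numbers here are hypothetical.
"""

from dataclasses import dataclass
from enum import Enum


class DrawStyle(Enum):
    SOLID = "solid"          # unoccluded: draw normally
    WIREFRAME = "wireframe"  # occluded by one layer: outline rendering
    DASHED = "dashed"        # occluded by two or more layers: dashed outline


@dataclass
class OccludedObject:
    name: str
    occlusion_layer: int  # 0 = visible, 1 = behind one surface, ...


def style_for_layer(layer: int) -> tuple[DrawStyle, float]:
    """Return a (drawing style, opacity) pair for an occlusion layer.

    Opacity decreases with each additional occluding layer so that
    deeper objects read as "farther behind"; the 0.25 floor keeps
    them from disappearing entirely. Values are illustrative.
    """
    if layer <= 0:
        return DrawStyle.SOLID, 1.0
    style = DrawStyle.WIREFRAME if layer == 1 else DrawStyle.DASHED
    opacity = max(0.25, 1.0 - 0.3 * layer)
    return style, opacity


if __name__ == "__main__":
    scene = [
        OccludedObject("courtyard", 0),
        OccludedObject("stairwell behind wall", 1),
        OccludedObject("room two buildings away", 2),
    ]
    for obj in scene:
        style, alpha = style_for_layer(obj.occlusion_layer)
        print(f"{obj.name}: {style.value}, opacity={alpha:.2f}")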

Additional Information

Acceptance rate: 33% (25 out of 75); award rate: 3% (35 such awards given, out of 1106 NRL publications)

BibTeX

@InProceedings{ISMAR03-mol, 
  author =      {Mark A. Livingston and J. Edward {Swan~II} and Joseph L. Gabbard and 
                Tobias H. H\"{o}llerer and Deborah Hix and Simon J. Julier and Yohan Baillot and 
                Dennis Brown}, 
  title =       {Resolving Multiple Occluded Layers in Augmented Reality}, 
  booktitle =   {Technical Papers, The IEEE / ACM International Symposium on Mixed and 
                Augmented Reality (ISMAR 2003)}, 
  year =        2003, 
  location =    {Tokyo, Japan}, 
  date =        {October 7--10}, 
  month =       {October}, 
  pages =       {56--65}, 
  publisher =   {IEEE Computer Society}, 
  wwwnote =     {<b>Winner of a 2004 NRL Alan Berman Publication Award</b>.}, 
  abstract =    { 
A useful function of augmented reality (AR) systems is their ability 
to visualize occluded infrastructure directly in a user's view of the 
environment.  This is especially important for our application 
context, which utilizes mobile AR for navigation and other operations 
in an urban environment.  A key problem in the AR field is how to best 
depict occluded objects in such a way that the viewer can correctly 
infer the depth relationships between different physical and virtual 
objects.  Showing a single occluded object with no depth context 
presents an ambiguous picture to the user.  But showing all occluded 
objects in the environment leads to the "Superman's X-ray vision" 
problem, in which the user sees too much information to make sense of 
the depth relationships of objects. 
Our efforts differ qualitatively from previous work in AR occlusion, 
because our application domain involves <em>far-field</em> occluded 
objects, which are tens of meters distant from the user.  Previous 
work has focused on <em>near-field</em> occluded objects, which are within 
or just beyond arm's reach and which therefore involve different perceptual cues. 
We designed and evaluated a number of sets of display attributes.  We 
then conducted a user study to determine which representations best 
express occlusion relationships among far-field objects.  We identify 
a drawing style and opacity settings that enable the user to 
accurately interpret three layers of occluded objects, even in the 
absence of perspective constraints. 
}, 
}