Current Research

My research can be most broadly described as human-factors investigations of computer-generated graphics.  More specifically, I work in the areas of augmented reality, virtual reality, and visualization.  Recent problem domains include surgical simulation and other medical applications, weather visualization, military applications, computer forensics, flow-field visualization, and driving simulation.

Perception in Augmented and Virtual Reality

At MSU I have founded a laboratory that studies perceptual aspects of Augmented and Virtual Reality, with an emphasis on depth and layout perception.  In particular, along with my students and collaborators, for the past 11 years I have been studying how depth perception operates in augmented and virtual reality.  We are also studying the exciting topic of x-ray vision: using augmented reality to let users see objects that lie behind solid, opaque surfaces, allowing, for example, a doctor to see organs inside a patient's body.  In addition to my students, this work is collaborative with Stephen R. Ellis at NASA Ames Research Center; other colleagues have come from the University of Southern California, the Naval Research Laboratory, NVIDIA Corporation, the University of California at Santa Barbara, and the University of Rostock.  The work has been funded by the National Science Foundation, the National Aeronautics and Space Administration (NASA), the Department of Defense, the Naval Research Laboratory, and the Office of Naval Research.

Human-Factors Aspects of Augmented and Virtual Reality

In addition to the work in my own lab, I have collaborated widely with colleagues and students on other human-factors aspects of augmented and virtual reality.  With Joe Gabbard at Virginia Tech, I am studying how color and contrast perception operates in optical see-through augmented reality, where the displayed graphics are seen on top of widely varying real-world scenes.  In addition, I advised a PhD student who used the MSU Driving Simulator (located at the Center for Advanced Vehicular Systems) to study how cell phone conversations affect visual attention and memory of roadside hazards.
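
To give a flavor of why this is hard: an optical see-through display can only add light to the light already arriving from the real scene, so a graphic's effective color and contrast depend on whatever background it happens to overlay.  The following minimal sketch is illustrative only (it assumes linear RGB values and a simple additive model; it is not our experimental apparatus), and shows how the same label washes out against a bright background:

    import numpy as np

    def perceived_color(display_rgb, background_rgb):
        # Optical see-through displays are additive: perceived light is
        # (roughly) display light plus scene light.  Linear RGB in [0, 1].
        return np.clip(np.asarray(display_rgb) + np.asarray(background_rgb),
                       0.0, 1.0)

    def luminance(rgb):
        # Rec. 709 luminance weights for linear RGB.
        return float(np.dot(rgb, [0.2126, 0.7152, 0.0722]))

    def weber_contrast(target_rgb, background_rgb):
        # Contrast of the graphic against its local background.
        lb = luminance(background_rgb)
        return (luminance(target_rgb) - lb) / lb

    label = np.array([0.4, 0.4, 0.4])  # mid-gray AR label
    for name, scene in [("dark wall", np.array([0.05, 0.05, 0.05])),
                        ("bright sky", np.array([0.9, 0.9, 0.9]))]:
        seen = perceived_color(label, scene)
        print(f"{name}: contrast = {weber_contrast(seen, scene):.2f}")

Against the dark wall the label has a Weber contrast of 8.0; against the bright sky the same label drops to about 0.11.  This is why color and contrast perception in optical see-through AR must be studied against varying real-world backgrounds.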

Visualization and Evaluation

I am also working with students and collaborators on a number of visualization and evaluation projects.  This work is currently applied to weather forecasting: we are studying methods for visualizing and interacting with ensembles of weather model simulation runs, with an emphasis on quantifying and visualizing ensemble uncertainty.  Recently completed projects include visualizing computer forensics data, cognitively evaluating the effectiveness of those visualizations, using parallel coordinates to visualize historical hurricane severity data, and empirically evaluating additional weather data, tensor, and flow-field visualization techniques.  This work has involved collaborations with many MSU colleagues, in particular Song Zhang and T.J. Jankun-Kelly, and a number of students.  The work is currently funded by the National Science Foundation, and was previously funded by the Department of the Army and the Naval Research Laboratory.
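
To make the ensemble work concrete, a common starting point is to summarize the ensemble at each grid point by its mean and spread, and to map the spread to a visual variable such as an error band.  The sketch below is a simplified, hypothetical illustration of that idea (synthetic data, one spatial dimension), not our actual visualization pipeline:

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical ensemble: 20 simulation runs of one forecast variable
    # sampled along a single spatial dimension.
    rng = np.random.default_rng(0)
    x = np.linspace(0, 100, 200)                  # distance (km)
    base = 15 + 5 * np.sin(x / 15)                # shared underlying signal
    runs = base + 0.1 * rng.normal(size=(20, x.size)).cumsum(axis=1)

    mean = runs.mean(axis=0)    # ensemble mean at each grid point
    spread = runs.std(axis=0)   # ensemble spread: one measure of uncertainty

    fig, ax = plt.subplots()
    ax.plot(x, runs.T, color="gray", alpha=0.2, lw=0.5)  # individual members
    ax.plot(x, mean, color="black", lw=2, label="ensemble mean")
    ax.fill_between(x, mean - spread, mean + spread, alpha=0.3,
                    label="mean ± 1 std. dev.")
    ax.set(xlabel="distance (km)", ylabel="temperature (°C)")
    ax.legend()
    plt.show()

Mapping the spread to a band (or, over a 2D weather map, to opacity or texture) lets a forecaster see at a glance where the ensemble members agree and where they diverge.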

Previous Research Topics

I was employed by the Naval Research Lab (NRL) from 1997 through 2004, where I was a member of the Virtual Reality Laboratory.  Our primary project during this time was researching and developing the Battlefield Augmented Reality System (BARS), an outdoor, mobile augmented reality system that allowed soldiers on foot to see head-up, graphically drawn battlefield information through optical see-through displays.  Anyone who has worked in augmented and virtual reality can appreciate how outrageously challenging this project was; I had a great time and learned a huge amount.  My experiences with BARS led me to become interested in the perceptual issues involved with augmented and virtual reality.  This work was a team effort with my fellow NRL scientists Larry Rosenblum (my boss), Simon Julier, Mark Livingston, Greg Schmidt, Yohan Baillot, and Dennis Brown.  Much of my work during this time also involved collaborations with my Virginia Tech colleagues Joe Gabbard and Deborah Hix.

From 1994 through 1997 I was engaged in my PhD studies in the areas of Volume Rendering, Visualization, and Graphics; my PhD adviser was Roni Yagel.  With the help of Roni and my fellow PhD students Klaus Mueller and Torsten Möller, I developed the first published algorithm (IEEE Visualization 1997) for anti-aliasing with Splatting.  Since that time, many others have greatly advanced this early work (including Klaus and Torsten).
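
The core problem that work addressed is that, under perspective projection, distant splats project to footprints smaller than a pixel, and the resulting undersampling aliases.  The general remedy is to low-pass filter: never let a splat's screen-space footprint fall below roughly a pixel, and scale its amplitude down to conserve energy.  The sketch below illustrates only that general idea; it is not the published algorithm, and all names and parameters in it are hypothetical:

    def splat_footprint_radius(world_radius, depth, focal_px,
                               min_radius_px=1.0):
        # Screen-space radius of a splat under perspective projection,
        # clamped so it never falls below ~1 pixel (the anti-aliasing step).
        projected = world_radius * focal_px / depth   # ideal projected radius
        radius = max(projected, min_radius_px)        # low-pass clamp
        # Energy conservation: a 2D kernel's integral scales with radius^2,
        # so scale amplitude by (projected / radius)^2 when enlarging it.
        amplitude_scale = (projected / radius) ** 2
        return radius, amplitude_scale

    # Near splat projects large (no clamping); far splat would alias
    # without the clamp.
    for depth in (5.0, 500.0):
        r, a = splat_footprint_radius(world_radius=0.05, depth=depth,
                                      focal_px=800)
        print(f"depth={depth:6.1f}  radius={r:5.2f} px  amplitude x {a:.3f}")

Enlarging the footprint while reducing its amplitude acts as a crude low-pass filter, so a splat no longer drops out or flickers as it recedes from the viewer.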

From 1992 through 1994 I worked with my MS adviser Gary Perlman in the area of Human-Computer Interaction.  Under Gary's direction I conducted two perceptual user studies (Proceedings of the Human Factors and Ergonomics Society, 1993 and 1994).  Although these were my very first research experiences, I still often apply the lessons and knowledge I first learned from Gary.