Research

Planetary Skylight Exploration

Skylights are recently discovered “holes” in the surfaces of the Moon and Mars that may lead to planetary caves. These vast, stadium-sized openings represent an unparalleled opportunity to access subterranean spaces. This research program developed concept mechanisms and operations to enable robotic exploration of planetary skylights and caves. Tyrobot ("Tyrolean Robot") was developed to map skylight walls and floors while suspended from a cable tightrope. PitCrawler is a wheeled robot that uses a flexible chassis, an extremely low center of gravity, and energetic maneuvering to negotiate bouldered floors and sloped descents. We demonstrated these prototypes in analog pit mines that simulate the size and shape of lunar skylights.

Related projects: Siberian Hole 3D Model; High Resolution Modeling and 3D Printing of Geology; King's Bowl Pit Mapping with LIDAR; Tyrobot - Tyrolean Traversing Robot; Modeling Pluto Cave with Multi-view Stereo; 3D Printing of Pit Shell Models
Physics-based Vision and Active Illumination

Exploiting the physics of light transport is critical to the design and use of optical sensors. My dissertation explored targeted vision and illumination approaches (coined Lumenhancement) for perception. Intelligent use of active illumination, or estimation of natural light fields, can enhance features in images. In appearance-constrained environments such as dark planetary and underground spaces, these image-based perception techniques gain significant effectiveness over non-contextual approaches. Generalization is achieved by grouping similar spaces into “appearance domains”. While demonstrated for a variety of planetary and underground environments, the approach is broadly applicable to other appearance classes such as indoor and urban robotics.

Related project: Shape from Shading with Active Illumination from a Mobile Robot (2009)
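
As a concrete illustration of the kind of active-illumination cue this work exploits, the sketch below estimates per-pixel surface normals, photometric-stereo style, from a few images of a static scene lit from known robot-mounted directions. It assumes a Lambertian surface and linear images; the function name and parameters are illustrative, and this is not the thesis implementation.

    import numpy as np

    def normals_from_active_illumination(images, light_dirs):
        """Estimate per-pixel surface normals from three or more images of a
        static Lambertian scene, each lit from a known direction.
        images:     list of HxW float arrays (linear intensity)
        light_dirs: Nx3 array of unit light directions, one per image
        Returns HxWx3 unit normals and an HxW albedo map."""
        L = np.asarray(light_dirs, dtype=float)             # N x 3
        I = np.stack([im.reshape(-1) for im in images], 0)  # N x (H*W)
        G, *_ = np.linalg.lstsq(L, I, rcond=None)           # G = albedo * normal, 3 x (H*W)
        albedo = np.linalg.norm(G, axis=0)
        n = G / np.maximum(albedo, 1e-9)
        h, w = images[0].shape
        return n.T.reshape(h, w, 3), albedo.reshape(h, w)

In dark underground and planetary scenes the robot's own lights are the only illumination, so the light directions are known by construction, which is what makes this constrained setting tractable.
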
Sensor Characterization

Many types of imaging and range sensors exist on the market, but manufacturer specifications are often non-comparable, collected in ideal settings, and not oriented toward robotics applications. The goal of this DOD-funded work was to provide a common basis for empirical comparison of optical sensors in underground environments; measured data distributions and accuracy could then be used to optimize sensor selection. The work included an idealized laboratory characterization, in which a novel 3D “checkerboard” target was scanned from multiple perspectives, and an in situ component comparing mobile mapping in underground spaces. I helped create and lead the Sensor Characterization Lab at CMU, which successfully fulfilled the DOD contract and other funded work in this area.

At NASA Ames, I am building new facilities and expertise for characterization of planetary surface sensors.
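
As a minimal sketch of the empirical comparison this enables, the function below computes error statistics for one sensor's scan of a known target, assuming the scan has already been registered to a ground-truth reference so that point correspondences exist. The function, field names, and selection rule are illustrative, not the funded test protocol.

    import numpy as np

    def range_error_stats(measured_pts, reference_pts):
        """Per-point error statistics for a scan of a known target.
        measured_pts, reference_pts: Nx3 arrays of corresponding points,
        already registered into the same frame."""
        err = np.linalg.norm(measured_pts - reference_pts, axis=1)
        return {"mean_m": float(err.mean()),
                "std_m": float(err.std()),
                "rmse_m": float(np.sqrt((err ** 2).mean())),
                "p95_m": float(np.percentile(err, 95))}

    # Illustrative selection rule: prefer the sensor with the lowest RMSE
    # for a given environment.
    # stats = {name: range_error_stats(scans[name], truth) for name in scans}
    # best = min(stats, key=lambda s: stats[s]["rmse_m"])
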
Multi-sensor Fusion for 3D Mapping (2009-2014)

Current robotic maps are limited by decades-old range sensing technology. Only multisensor approaches (LIDAR, camera, RADAR, and multispectral) can provide the density and quality of data required for automated inspection, operation, and science. My PhD research explored synergistic cooperation of multi-modal optical sensing to enhance understanding of geometry (super-resolution), sampling of locations of interest (image-directed scanning), and material understanding from source motion.
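
One way to read the super-resolution idea: a dense camera image can guide interpolation of sparse LIDAR depth. The sketch below is a generic joint-bilateral-style fill, assuming a depth map with missing samples (NaN) and a grayscale image already co-registered on the same pixel grid; it illustrates image-guided depth densification rather than the specific algorithms from the thesis.

    import numpy as np

    def guided_depth_fill(depth, image, radius=5, sigma_s=3.0, sigma_i=0.1):
        """Fill missing depth values (NaN) from nearby valid samples, weighted
        by spatial distance and by similarity in the co-registered image."""
        h, w = depth.shape
        out = depth.copy()
        ys, xs = np.where(np.isnan(depth))
        for y, x in zip(ys, xs):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            d = depth[y0:y1, x0:x1]
            valid = ~np.isnan(d)
            if not valid.any():
                continue
            gy, gx = np.mgrid[y0:y1, x0:x1]
            w_s = np.exp(-((gy - y) ** 2 + (gx - x) ** 2) / (2 * sigma_s ** 2))
            w_i = np.exp(-((image[y0:y1, x0:x1] - image[y, x]) ** 2) / (2 * sigma_i ** 2))
            wgt = (w_s * w_i)[valid]
            out[y, x] = np.sum(wgt * d[valid]) / np.sum(wgt)
        return out
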
Hybrid Optical Sensors (2009-)

I developed several novel sensors for mapping and imaging. My image-directed structured light scanner optically co-locates a high-resolution camera with the output illumination of a DLP projector using a half-silvered mirror. This configuration enables hardware-supported intelligent sample selection with high-resolution interpolation and texturing. During my thesis I also built a room-sized gonioreflectometer/sun simulator with no moving parts, using an array of commodity SLR cameras and LED illumination. This design was accurate enough to extract BRDFs of planetary materials for graphical rendering, at about 1/100th the cost of commercial spherical gantries.
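
For context on what extracting a BRDF from such measurements can look like, here is a hedged sketch that fits a simple Lambertian-plus-Phong model to reflectance samples of the kind a gonioreflectometer produces (light direction, view direction, surface normal, and measured reflectance per sample). The model, function, and fitting strategy are illustrative; the rendering work relied on the measured data and is not tied to this parameterization.

    import numpy as np

    def fit_lambert_phong(wi, wo, n, r):
        """Fit r ~= kd * (n . wi) + ks * (reflect(wi, n) . wo)^alpha.
        wi, wo, n: Nx3 unit vectors (light, view, normal); r: N reflectances.
        Grid-searches alpha and solves kd, ks by linear least squares."""
        wi, wo, n = (np.asarray(a, dtype=float) for a in (wi, wo, n))
        r = np.asarray(r, dtype=float)
        cos_i = np.clip(np.sum(n * wi, axis=1), 0.0, 1.0)
        mirror = 2.0 * np.sum(n * wi, axis=1, keepdims=True) * n - wi
        cos_r = np.clip(np.sum(mirror * wo, axis=1), 0.0, 1.0)
        best = None
        for alpha in (1, 2, 5, 10, 20, 50, 100, 200):
            A = np.stack([cos_i, cos_r ** alpha], axis=1)
            coef, *_ = np.linalg.lstsq(A, r, rcond=None)
            resid = np.sum((A @ coef - r) ** 2)
            if best is None or resid < best[0]:
                best = (resid, coef[0], coef[1], alpha)
        _, kd, ks, alpha = best
        return kd, ks, alpha
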
Model Visualization (2009-2014)

Humans are the consumers of 3D models for training, oversight, operations, and presentation. My research investigated new methods for immersive display that enhance these tasks. Approaches included nonphotorealistic techniques for feature highlighting, point splatting, hole filling for imperfect data, adaptive BRDF selection, radiance estimation, and geometry image parameterizations. I later dabbled in 3D printing of robot-made models as tools for scientific understanding.
Lunar Robotics (2008-2011)

I supported ongoing research in lunar robotics at CMU. I automated RedRover, a prototype equatorial lunar rover designed to win the Google Lunar X Prize, and helped develop its stereo mapping capability. More recently, I contributed to the autonomous Lunar Lander project, developing algorithms for terrain modeling and analysis used in midflight landing site selection. A combined lander/rover team from CMU and a spinoff company intends to visit the Lacus Mortis pit on the Moon.
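
As a rough illustration of the terrain-analysis step, landing site selection typically scores a terrain model by local slope and roughness and masks out cells that exceed lander tolerances. The thresholds and the roughness proxy below are hypothetical placeholders, not the project's flight values or algorithm.

    import numpy as np

    def safe_landing_mask(dem, cell_m, max_slope_deg=10.0, max_rough_m=0.15):
        """Mark DEM cells whose local slope and roughness are within limits.
        dem: HxW elevation grid in meters; cell_m: grid spacing in meters."""
        gy, gx = np.gradient(dem, cell_m)
        slope_deg = np.degrees(np.arctan(np.hypot(gx, gy)))
        # Crude roughness proxy: deviation from the local 3x3 mean elevation.
        pad = np.pad(dem, 1, mode="edge")
        local_mean = sum(pad[i:i + dem.shape[0], j:j + dem.shape[1]] / 9.0
                         for i in range(3) for j in range(3))
        roughness = np.abs(dem - local_mean)
        return (slope_deg <= max_slope_deg) & (roughness <= max_rough_m)
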
Subterranean Mapping (2007-2012)

Robots are poised to proliferate in underground civil inspection and mining operations. I took over development of the CaveCrawler mobile robot, a platform for research in these areas. CaveCrawler has inspected and mapped many miles of underground mines and tunnels using LIDAR. We also demonstrated rescue scout robots that locate victims and carry supplies in disaster situations.
Borehole Scanning and Imaging (2006-2009)

I developed robots for inspecting the most hazardous and access-constrained underground environments. The MOSAIC camera is a borehole-deployed inspection robot that generates 360-degree panoramas. MOSAIC captures long-range photographs using active illumination and produces fully exposed images through HDR imaging. Ferret is an underground void inspection robot that deploys through a 3” drill core into unlined boreholes and produces 3D models of voids with a fiber-optic LIDAR.
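
A minimal sketch of the HDR step, assuming a bracket of radiometrically linear exposures of a static scene with known exposure times; MOSAIC's actual pipeline is not reproduced here. Each output pixel is a weighted average of the exposures, scaled by exposure time and weighted toward well-exposed values.

    import numpy as np

    def merge_hdr(exposures, times):
        """Merge bracketed exposures into one relative radiance map.
        exposures: list of HxW linear images in [0, 1]; times: exposure times (s)."""
        num = np.zeros_like(exposures[0], dtype=float)
        den = np.zeros_like(exposures[0], dtype=float)
        for img, t in zip(exposures, times):
            w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weight: trust mid-tone pixels
            num += w * (img / t)
            den += w
        return num / np.maximum(den, 1e-9)
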
Human Odometer (2005)

The Human Odometer was a wearable personal localization system for first responders and warfighters. Teams that use smart positioning and identification gain situational awareness and reduce friendly-fire incidents. Bluetooth accelerometers and gyroscopes woven into the suit tracked the wearer’s steps and orientation. This information was reported to a battalion commander, while a handheld PDA brought up context-sensitive mapping and position information. My undergraduate senior thesis investigated Kalman filtering to fuse intermittent GPS and odometry data for more accurate positioning, and learned the specific parameters and variances of the step-detection model.
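
A minimal sketch of that fusion idea, assuming step-detected 2D displacements and occasional GPS position fixes; the class name and noise values are placeholders for the parameters and variances the thesis learned from data.

    import numpy as np

    class StepGpsFilter:
        """2D Kalman filter: predict with step-odometry displacements,
        correct with intermittent GPS position fixes."""
        def __init__(self, q_step=0.05, r_gps=3.0):
            self.x = np.zeros(2)               # position estimate (m)
            self.P = np.eye(2) * 100.0         # large initial uncertainty
            self.Q = np.eye(2) * q_step ** 2   # per-step odometry noise
            self.R = np.eye(2) * r_gps ** 2    # GPS measurement noise

        def predict(self, step_dx, step_dy):
            self.x = self.x + np.array([step_dx, step_dy])
            self.P = self.P + self.Q

        def update_gps(self, gx, gy):
            z = np.array([gx, gy])
            S = self.P + self.R                # innovation covariance (H = I)
            K = self.P @ np.linalg.inv(S)      # Kalman gain
            self.x = self.x + K @ (z - self.x)
            self.P = (np.eye(2) - K) @ self.P

    # f = StepGpsFilter()
    # f.predict(0.7, 0.1)      # one detected step, direction from the gyro heading
    # f.update_gps(0.9, 0.0)   # apply a GPS fix when one arrives; skip otherwise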