Monday, January 30, 2012

Multi-Image Fusion

Overview:

We are all used to taking a single photograph of a scene. However, the photographer's intent is often to capture more than what can be seen in a single photograph. By combining hundreds or even thousands of images, we can create images that are much better than any single photograph. Computation to fuse the image set, run either in the cloud or on a single machine, can result in enhanced images and experiences of the scene. One can combine images to produce an image with a large field of view (e.g., a panorama) or a composite image that takes the best parts of the inputs (e.g., a photomontage). One challenge is creating large-scale panoramas, where the capture and stitching times can be long; in addition, when using consumer-level point-and-shoots and camera phones, artifacts such as motion blur show up. Another challenge is combining large image sets from photos or videos to produce a result that uses the best parts of the images to create an enhanced photograph. In this demo, we will present several new technologies that advance the state of the art in these areas and create improved user experiences. We will demonstrate the following technologies:

Next Generation ICE
Description:
Microsoft Image Composite Editor, or ICE for short, is an application that creates panoramic images from a set of overlapping source images.  It's a free download, available from the Microsoft Research web site.  Version 1.3 of ICE contains three exciting new features.  The first is tight integration with Photosynth.  ICE users are now able to directly upload their panoramas to the Photosynth web service, allowing panoramas to be shared on the web at their full native resolution.  ICE users can leverage all Photosynth features, including geo-registration with Bing Maps, community comments, and tagging.
The second feature is the ability to very quickly create a preview of a panorama from hundreds of images.  This capability is a marked improvement over other panoramic stitching applications.  By leveraging the thumbnail cache in Windows Vista/7, ICE can now provide a near-instantaneous preview of even the largest stitching projects.  The figure below shows a 300-image panorama preview generated in 3 seconds.
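To give a flavor of how a fast preview can work, here is a minimal Python sketch that stitches heavily downscaled copies of the inputs instead of the full-resolution originals. OpenCV's generic stitcher stands in for ICE's engine, and the thumbnail width is an arbitrary assumption; ICE itself reads the Windows thumbnail cache rather than resizing on the fly.

```python
# Sketch of a fast panorama preview: stitch thumbnails, not originals.
import glob
import cv2

THUMB_WIDTH = 256  # assumed preview resolution, similar in spirit to OS thumbnails

def load_thumbnail(path, width=THUMB_WIDTH):
    """Load an image and downscale it to thumbnail size."""
    img = cv2.imread(path)
    scale = width / img.shape[1]
    return cv2.resize(img, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)

thumbs = [load_thumbnail(p) for p in sorted(glob.glob("shots/*.jpg"))]

stitcher = cv2.Stitcher.create(cv2.Stitcher_PANORAMA)
status, preview = stitcher.stitch(thumbs)
if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama_preview.jpg", preview)
```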
Finally, the ICE engine has been revamped to take advantage of multiple processor cores.  The key stitching algorithms in ICE have been parallelized and are now able to leverage modern multi-core PCs to greatly accelerate the stitching process.  We will discuss the new parallel implementations of these core algorithms.
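As an illustration of why stitching parallelizes well, the sketch below fans per-image feature extraction out across processor cores: each image's keypoints can be computed independently of every other image. The SIFT detector and the process pool are stand-ins chosen for illustration; ICE's actual parallel algorithms are not described here.

```python
# Sketch of one embarrassingly parallel stage of stitching: feature extraction.
from concurrent.futures import ProcessPoolExecutor
import glob
import cv2

def extract_features(path):
    """Detect SIFT keypoints/descriptors for one image (one unit of work)."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    # Return only picklable results (keypoint count and descriptor matrix).
    return path, len(keypoints), descriptors

if __name__ == "__main__":
    paths = sorted(glob.glob("shots/*.jpg"))
    # One worker per core; images are processed concurrently.
    with ProcessPoolExecutor() as pool:
        for path, n, _ in pool.map(extract_features, paths):
            print(f"{path}: {n} keypoints")
```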
Version 1.3 of ICE is now available for download.
Example:

ICE Structured Panoramas



Stitching of Panoramas from Video
Description:
Traditionally, panoramas are assembled from still shots using an application such as ICE. For this process to be successful, a user needs to carefully take photos so that there is enough overlap, the images are sharp, and so on. This can be a time-consuming and difficult process. Video stitching enables creating panoramas from a video clip. It takes much less time to capture a scene with video, and there is no need to pay attention to the exact path and layout. Our algorithm analyzes a clip, automatically detects whether it contains a panorama, and stitches the result from frames it selects, as sketched below.
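A minimal sketch of the overall flow, under stated assumptions: sample frames from the clip, keep the reasonably sharp ones, and stitch. The fixed sampling stride and the variance-of-Laplacian sharpness test are illustrative heuristics, not the frame-selection algorithm our system actually uses.

```python
# Sketch: select frames from a video, then stitch them into a panorama.
import cv2

def sharpness(frame):
    """Variance of the Laplacian: a common focus/sharpness proxy."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def frames_from_video(path, stride=15, min_sharpness=50.0):
    """Keep every `stride`-th frame that passes a sharpness threshold."""
    cap = cv2.VideoCapture(path)
    picked, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % stride == 0 and sharpness(frame) > min_sharpness:
            picked.append(frame)
        i += 1
    cap.release()
    return picked

frames = frames_from_video("pan.mp4")  # hypothetical input clip
stitcher = cv2.Stitcher.create(cv2.Stitcher_PANORAMA)
status, pano = stitcher.stitch(frames)
if status == cv2.Stitcher_OK:
    cv2.imwrite("video_panorama.jpg", pano)
```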
Example:




Generating Sharp Panoramas from Blurry Videos
Description:
While a consumer can generate a panorama from a video using the video stitching method above, the frames in the video, particularly with consumer-level cameras, tend to be rather blurry due to the camera's panning motion. In this demo, we will show how we can generate a sharper-looking panorama from a set of motion-blurred video frames. Our technique is based on joint global motion estimation and multi-frame deblurring. It also automatically computes the duty cycle of the video (i.e., the percentage of time between frames that is actually exposure time). We will be showing example videos, panoramas that are stitched directly from the frames, and deblurred panoramas after processing. More details.
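The sketch below illustrates the duty-cycle idea on a single frame: if the camera translates by (dx, dy) between frames and the shutter is open for a fraction `duty` of the frame interval, the blur streak is roughly duty * (dx, dy). Here the duty cycle is hand-set rather than estimated automatically, phase correlation stands in for our global motion estimation, and off-the-shelf Richardson-Lucy deconvolution replaces our joint multi-frame deblurring.

```python
# Sketch: duty-cycle-scaled motion-blur kernel + single-frame deconvolution.
import numpy as np
import cv2
from skimage.restoration import richardson_lucy

def motion_between(prev_gray, cur_gray):
    """Global translation estimate via phase correlation."""
    (dx, dy), _ = cv2.phaseCorrelate(np.float32(prev_gray), np.float32(cur_gray))
    return dx, dy

def linear_psf(dx, dy, duty):
    """Line-shaped blur kernel whose length is duty * |inter-frame motion|."""
    bx, by = duty * dx, duty * dy
    size = int(max(abs(bx), abs(by))) * 2 + 3
    psf = np.zeros((size, size), np.float32)
    c = size // 2
    cv2.line(psf, (c, c), (int(c + bx), int(c + by)), 1.0, 1)
    return psf / psf.sum()

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frames
cur = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
dx, dy = motion_between(prev, cur)
psf = linear_psf(dx, dy, duty=0.5)  # assumed 50% duty cycle, not estimated
sharp = richardson_lucy(cur / 255.0, psf, num_iter=30)
cv2.imwrite("frame_001_deblurred.png", np.uint8(np.clip(sharp, 0, 1) * 255))
```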
Example:
Here's a sequence of video frames (only the first and last frames shown here):
...
If we just stitch these frames, we get the following result:

Using our technique, we can sharpen all the source frames simultaneously, resulting in a sharper panorama:


Creating Snapshots from Video
Description:
Often we face the situation of trying hard to capture a photograph at the perfect moment, only to realize later that we opened the camera shutter either too early or too late.
One way to ensure that important moments are not missed is to record events with a video camera. One can conservatively "keep the camera rolling" to capture before, during, and after an event of importance. In fact, for certain events video is the only way to record the moment, as motion may be a key aspect of it. Unfortunately, there are still many challenges with video: videos tend to be lower resolution and noisier than stills, and displaying and sharing videos remains much more challenging than with images.
In this work, we consider the problem of creating a single high-quality still image from a video sequence. The snapshots we produce have higher resolution, lower noise, and less blur than the original video frames. In addition, the snapshots preserve salient and interesting objects in the video and use its best aspects.
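One ingredient of such a system, sketched below, is aligning neighboring frames to a reference and averaging them to reduce noise. ECC affine alignment and a plain mean are stand-ins chosen for illustration; the full method also raises resolution and preserves salient moving objects, which this sketch does not attempt.

```python
# Sketch: align nearby frames to a reference, then average to cut noise.
import numpy as np
import cv2

def align_to(ref_gray, frame):
    """Warp `frame` onto `ref_gray` with an affine model estimated by ECC."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
    _, warp = cv2.findTransformECC(ref_gray, gray, warp,
                                   cv2.MOTION_AFFINE, criteria, None, 5)
    h, w = ref_gray.shape
    return cv2.warpAffine(frame, warp, (w, h),
                          flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)

# Hypothetical pre-extracted frames around the chosen moment.
frames = [cv2.imread(f"frame_{i:03d}.png") for i in range(5)]
ref_gray = cv2.cvtColor(frames[2], cv2.COLOR_BGR2GRAY)  # middle frame as reference
aligned = [align_to(ref_gray, f) for f in frames]
snapshot = np.mean([f.astype(np.float32) for f in aligned], axis=0)
cv2.imwrite("snapshot.png", np.uint8(np.clip(snapshot, 0, 255)))
```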
Example:


Multi-Image Dehazing
Description:
Photographing distant objects is challenging for a number of reasons. Even on a clear day, atmospheric haze often represents the majority of light received by a camera. Unfortunately, dehazing alone cannot create a clean image: the combination of shot noise and quantization noise is exacerbated when the contrast is expanded after haze removal, and dust on the sensor that may be unnoticeable in the original images creates serious artifacts. Multiple images can be averaged to overcome the noise, but the combination of long lenses and small camera motions, as well as time-varying atmospheric refraction, results in large global and local shifts of the images on the sensor.
An iconic example of a distant object is Mount Rainier, viewed from Seattle, 90 kilometers away. This work demonstrates a methodology to pull a clean image out of a series of images. Rigid and non-rigid alignment steps bring individual pixels into alignment. A novel local weighted averaging method based on ideas from “lucky imaging” minimizes blur, resampling, and alignment errors, as well as the effects of sensor dust, to maintain the sharpness of the original pixel grid. Finally, dehazing and contrast expansion result in a sharp, clean image. More details.
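To make the weighted-averaging idea concrete, here is a hedged sketch that assumes the frames are already aligned (alignment itself is omitted): each pixel is weighted by a local sharpness score so that the "luckiest" (crispest) observations dominate, and a simple percentile contrast stretch stands in for the actual dehazing step. The local Laplacian-energy weight and the percentile bounds are illustrative assumptions.

```python
# Sketch: "lucky" per-pixel weighted averaging of pre-aligned frames,
# followed by a crude contrast stretch as a stand-in for dehazing.
import numpy as np
import cv2

def local_sharpness(gray):
    """Per-pixel sharpness: smoothed magnitude of the Laplacian."""
    lap = np.abs(cv2.Laplacian(gray, cv2.CV_32F))
    return cv2.GaussianBlur(lap, (0, 0), 3) + 1e-6

# Hypothetical pre-aligned input frames.
aligned = [cv2.imread(f"aligned_{i:03d}.png") for i in range(10)]
weights = [local_sharpness(cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)) for f in aligned]

num = np.zeros_like(aligned[0], np.float32)
den = np.zeros(aligned[0].shape[:2], np.float32)
for frame, w in zip(aligned, weights):
    num += frame.astype(np.float32) * w[..., None]
    den += w
fused = num / den[..., None]

# Crude haze removal: stretch intensities between the 1st/99th percentiles.
lo, hi = np.percentile(fused, [1, 99])
clean = np.clip((fused - lo) / (hi - lo) * 255, 0, 255)
cv2.imwrite("rainier_clean.png", np.uint8(clean))
```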
Example:


Multi-Image Dehazing of Mount Rainier: Given multiple input images, a sequence of rigid and non-rigid alignment steps and per-pixel weighted averaging minimizes blur, resampling, and alignment errors. Dehazing and contrast expansion then result in a sharp, clean image.
