Image Fusion

Why Image Fusion

Multisensor data fusion has become a discipline in which increasingly general formal solutions are demanded for a growing number of application cases. Several situations in image processing require both high spatial and high spectral information in a single image; this is particularly important in remote sensing. However, the instruments are not capable of providing such information, either by design or because of observational constraints. One possible solution is data fusion.

Standard Image Fusion Methods

Image fusion methods can be broadly classified into two groups: spatial domain fusion and transform domain fusion.

Fusion methods such as averaging, the Brovey method, principal component analysis (PCA) and IHS-based methods fall under spatial domain approaches. Another important spatial domain fusion method is the high-pass filtering technique, in which the high-frequency details of the panchromatic image are injected into an upsampled version of the multispectral (MS) images. The disadvantage of spatial domain approaches is that they produce spatial distortion in the fused image. Spectral distortion becomes a negative factor in further processing, such as classification. Spatial distortion can be handled very well by transform domain approaches to image fusion. Multiresolution analysis has become a very useful tool for analysing remote sensing images, and the discrete wavelet transform in particular has become a very useful tool for fusion. Other fusion methods also exist, such as those based on the Laplacian pyramid or the curvelet transform. These methods show better spatial and spectral quality in the fused image compared to spatial domain methods of fusion.
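The high-pass filtering technique described above can be sketched as follows. This is a minimal illustration, not a reference implementation: the function name `hpf_fuse`, the box low-pass filter, and the equal-weight detail injection are assumptions for the sketch, and the images are assumed to be already registered on a common grid.

```python
import numpy as np

def hpf_fuse(pan, ms_up, kernel_size=5):
    """High-pass-filter fusion sketch: add the panchromatic (PAN)
    image's high-frequency detail to each upsampled MS band.

    pan   : 2-D array, high-resolution panchromatic image
    ms_up : 3-D array (bands, H, W), MS bands already upsampled
            to the PAN grid (registration assumed done)
    """
    k = kernel_size
    pad = k // 2
    padded = np.pad(pan.astype(float), pad, mode="edge")
    # Box low-pass filter via a sliding-window mean (fine for a sketch;
    # a real implementation would use a separable or FFT-based filter).
    low = np.zeros(pan.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            low += padded[dy:dy + pan.shape[0], dx:dx + pan.shape[1]]
    low /= k * k
    # High-frequency detail layer = PAN minus its low-pass version.
    detail = pan.astype(float) - low
    # Inject the same detail layer into every MS band.
    return ms_up.astype(float) + detail[None, :, :]
```

A constant PAN image carries no detail, so fusion then leaves the MS bands unchanged, which is a quick sanity check on the detail-injection logic.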

The images used in image fusion should already be registered. Misregistration is a major source of error in image fusion. Some well-known image fusion methods are:

High pass filtering technique

IHS transform based image fusion

PCA based image fusion

Wavelet transform image fusion

Pair-wise spatial frequency matching
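Of the methods listed, PCA-based fusion can be sketched compactly: project the MS bands onto their principal components, swap the first (highest-variance) component for a statistically matched PAN image, and project back. The function name `pca_fuse` and the mean/std matching of PAN to the first component are assumptions for this sketch.

```python
import numpy as np

def pca_fuse(pan, ms_up):
    """PCA pan-sharpening sketch.

    pan   : 2-D high-resolution panchromatic image
    ms_up : 3-D array (bands, H, W) of upsampled, registered MS bands
    """
    b, h, w = ms_up.shape
    X = ms_up.reshape(b, -1).astype(float)      # bands x pixels
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    # Eigen-decomposition of the band covariance matrix.
    cov = Xc @ Xc.T / Xc.shape[1]
    vals, vecs = np.linalg.eigh(cov)            # eigenvalues ascending
    vecs = vecs[:, ::-1]                        # sort by descending variance
    pcs = vecs.T @ Xc                           # principal component scores
    # Match PAN mean/std to the first principal component, then swap it in.
    p = pan.reshape(-1).astype(float)
    p = (p - p.mean()) / (p.std() + 1e-12) * pcs[0].std() + pcs[0].mean()
    pcs[0] = p
    # Inverse projection back to band space.
    fused = vecs @ pcs + mean
    return fused.reshape(b, h, w)
```

Because the eigenvector matrix is orthonormal, projecting back with the unmodified components would reconstruct the input exactly; only the replaced first component changes the result.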

Applications

Image Classification

Aerial and Satellite imaging

Medical imaging

Robot vision

Concealed weapon detection

Multi-focus image fusion

Digital camera application

Battle field monitoring

Satellite Image Fusion

Several methods exist for merging satellite images. In satellite imagery, two types of images are available:

Panchromatic images – An image collected in the broad visual wavelength range but rendered in black and white.

Multispectral images – Images optically acquired in more than one spectral or wavelength interval. Each individual image is usually of the same physical area and scale but of a different spectral band.

The SPOT PAN satellite provides high-resolution (10 m pixel) panchromatic data, while the LANDSAT TM satellite provides low-resolution (30 m pixel) multispectral images. Image fusion attempts to merge these images to produce a single high-resolution multispectral image.

The standard merging methods of image fusion are based on Red-Green-Blue (RGB) to Intensity-Hue-Saturation (IHS) transformation. The usual steps involved in satellite image fusion are as follows:

Register the low resolution multispectral images to the same size as the panchromatic image.

Transform the R,G and B bands of the multispectral image into IHS components.

Modify the panchromatic image with respect to the multispectral image. This is usually performed by histogram matching of the panchromatic image, with the intensity component of the multispectral image as reference.

Replace the intensity component by the panchromatic image and perform inverse transformation to obtain a high resolution multispectral image.
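The steps above can be sketched with a linear IHS variant in which the intensity is I = (R + G + B) / 3. For this variant, replacing I with the matched PAN image and inverting the transform is algebraically equivalent to adding the difference (PAN - I) to every band, which is what the sketch does. The function name `ihs_fuse` and the use of mean/std matching in place of full histogram matching are assumptions.

```python
import numpy as np

def ihs_fuse(pan, rgb_up):
    """IHS pan-sharpening sketch (linear intensity variant).

    pan    : 2-D high-resolution panchromatic image
    rgb_up : 3-D array (3, H, W), R/G/B MS bands registered and
             upsampled to the PAN grid (step 1 assumed done)
    """
    rgb = rgb_up.astype(float)
    # Step 2: forward transform; only the intensity component is needed
    # explicitly for this linear variant.
    intensity = rgb.mean(axis=0)
    # Step 3: approximate histogram matching by matching PAN's mean and
    # standard deviation to the intensity component.
    p = pan.astype(float)
    p = (p - p.mean()) / (p.std() + 1e-12) * intensity.std() + intensity.mean()
    # Step 4: replace intensity and invert; for linear IHS this reduces
    # to adding the detail difference to each band.
    return rgb + (p - intensity)[None, :, :]
```

If the PAN image already equals the intensity component, the matched difference is zero and the multispectral image passes through unchanged, as expected.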

Medical Image Fusion

Image fusion has recently become a common term within medical diagnostics and treatment. The term is used when patient images from different modalities are fused. These can include magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), and single photon emission computed tomography (SPECT). In radiology and radiation oncology, these images serve different purposes. For example, CT images are used more often to ascertain differences in tissue density, while MRI images are typically used to diagnose brain tumors.

For accurate diagnoses, radiologists must integrate information from multiple image formats. Fused, anatomically-consistent images are especially beneficial in diagnosing and treating cancer. Companies such as Keosys, MIMvista, IKOE, and BrainLAB have recently created image fusion software to use in conjunction with radiation treatment planning systems. With the advent of these new technologies, radiation oncologists can take full advantage of intensity modulated radiation therapy (IMRT). Being able to overlay diagnostic images onto radiation planning images results in more accurate IMRT target tumor volumes.

See also

Sensor fusion

Data fusion

External links

Investigations of Image Fusion, Electrical Engineering and Computer Science Department, Lehigh University

Image Fusion, Image Fusion Systems Research company

Image fusion and Pan-sharpening, Geosage
