A Model of Retinal Local Adaptation for the Tone Mapping of Color Filter Array Images
Laurence Meylan, David Alleysson, and Sabine Süsstrunk

Abstract

We present a tone mapping algorithm that is derived from a model of retinal processing. Our approach offers two major improvements over existing methods. First, tone mapping is applied directly to the mosaic image captured by the sensor, analogous to the human visual system, which applies a non-linearity to the chromatic responses captured by the cone mosaic. This reduces the number of necessary operations by a factor of three. Second, we introduce a variation of the center/surround class of local tone mapping algorithms, which are known to increase the local contrast of images but tend to create artifacts. Our method provides a good improvement in contrast while avoiding halos and maintaining a good global appearance. Like traditional center/surround algorithms, our method uses a weighted average of surrounding pixel values. Instead of being used directly, however, the weighted average serves as a variable in the Naka-Rushton equation, which models the photoreceptors' non-linearity. Our algorithm provides pleasing results on a variety of images with different scene content, key, and dynamic range.
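As an illustration of the center/surround principle described in the abstract, the following Python sketch applies a Naka-Rushton-type non-linearity directly to the CFA mosaic, using a Gaussian-blurred copy of the mosaic as the local adaptation factor. This is only a minimal sketch, not the authors' implementation (the reference Matlab code is available below); the function name, the Gaussian surround, and the output rescaling are assumptions made for illustration.

import numpy as np
from scipy.ndimage import gaussian_filter

def naka_rushton_tonemap(cfa, sigma=10.0):
    """Sketch of a center/surround tone mapping applied on a CFA mosaic.

    The low-pass filtered mosaic plays the role of the adaptation factor X0
    in a Naka-Rushton-type non-linearity R = X / (X + X0).
    """
    x = cfa.astype(np.float64)
    x = x / x.max()                          # normalize intensities to [0, 1]
    x0 = gaussian_filter(x, sigma)           # surround: weighted average of neighboring pixels
    out = x * (1.0 + x0) / (x + x0 + 1e-12)  # compression, rescaled so that x = 1 maps to 1
    return np.clip(out, 0.0, 1.0)

Because the non-linearity is applied to the mosaic itself (one value per pixel) rather than to three demosaiced color planes, the number of operations is reduced by roughly a factor of three, as stated above.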

Reference and PDF

Matlab code for all results and figures

Apr2007_reproducible.zip (16 KB)
While we have tried to ensure that the program works correctly, we do not guarantee its usability for all purposes. Please send your comments to laurence.meylan AT a3.epfl.ch.

Figures

Figure 1. Bayer CFA (left) and spatio-chromatic sampling of the cone mosaic (right).

Figure 2. Top (a): Traditional image processing workflow. Center (b): Our proposed workflow. Bottom left (c): Image rendered with a global tone mapping (gamma). Bottom right (d): Image rendered with our method.

Figure 3. Simplified model of the retina.

Figure 4. Naka-Rushton function with different adaptation factors X0.
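For reference, the Naka-Rushton function plotted in Figure 4 is commonly written as follows (the notation here is the standard one and may differ slightly from the paper's):

% Naka-Rushton photoreceptor response: X is the input intensity,
% X0 the adaptation factor, and R_max the maximum response.
\[
  R(X) = \frac{X}{X + X_0}\, R_{\max}
\]
% A small X0 boosts dark values and strongly compresses highlights;
% a large X0 keeps the response closer to linear.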

Figure 5. Simulation of the OPL adaptive non-linear processing. The input signal is processed by the Naka-Rushton equation, whose adaptation factors are obtained by low-pass filtering the CFA image. The second non-linearity, which models the IPL, works similarly.
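As a rough illustration of these two successive adaptive non-linearities (OPL, then IPL), the two stages could be chained with different surround sizes, reusing the naka_rushton_tonemap sketch given above. The filter sizes and the synthetic input are illustrative only, not values from the paper.

import numpy as np

# Synthetic high-dynamic-range mosaic, for illustration only.
rng = np.random.default_rng(0)
raw_cfa = rng.exponential(scale=0.1, size=(256, 256))

adapted_opl = naka_rushton_tonemap(raw_cfa, sigma=3.0)       # OPL-like stage, small surround
rendered    = naka_rushton_tonemap(adapted_opl, sigma=20.0)  # IPL-like stage, larger surround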

Figure 6. The chrominance channels are separated before performing the interpolation.
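A minimal sketch of this separation for an assumed RGGB Bayer pattern is shown below; the actual layout depends on the camera, and the helper name is hypothetical. Missing samples are left at zero and would then be filled by the interpolation step.

import numpy as np

def split_bayer_rggb(cfa):
    """Separate the three chromatic channels of an assumed RGGB Bayer mosaic."""
    r = np.zeros_like(cfa)
    g = np.zeros_like(cfa)
    b = np.zeros_like(cfa)
    r[0::2, 0::2] = cfa[0::2, 0::2]   # red samples
    g[0::2, 1::2] = cfa[0::2, 1::2]   # green samples on red rows
    g[1::2, 0::2] = cfa[1::2, 0::2]   # green samples on blue rows
    b[1::2, 1::2] = cfa[1::2, 1::2]   # blue samples
    return r, g, b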

Figure 7. Comparison of our algorithm with other tone mapping methods.

Figure 8. Example of our method applied with different filter sizes. Left: Small filters. Right: Large filters.

More result images


