Stereo Retinex

Xiong, W., and Funt, B., "Stereo Retinex," Image and Vision Computing, Vol. 27, Nos. 1-2, pp. 178-188, 2009.


The retinex algorithm for lightness and color constancy is extended to incorporate 3-dimensional spatial information reconstructed from a stereo image pair. A key aspect of traditional retinex is that, within each color channel, it makes local spatial comparisons of intensity. In particular, intensity ratios are computed between neighboring spatial locations. Retinex assumes that a large ratio indicates a change in surface reflectance rather than a change in incident illumination; however, this assumption is often violated in 3-dimensional scenes, where an abrupt change in surface orientation can lead to a significant change in illumination. In this paper, retinex is modified to use the 3-dimensional edge information derived from stereo images. The edge map ensures that spatial comparisons are made only between locations lying on approximately the same plane in 3 dimensions. Experiments on real images show that this method works well; however, they also reveal that it can create isolated regions, which, as a result of being isolated, are incorrectly determined to be grey. To overcome this problem, stereo retinex is extended to allow information that is orthogonal to the space of possible illuminants to propagate across changes in surface orientation. This is accomplished by transforming the original RGB image data into a color space based on coordinates of luminance, illumination and reflectance. This coordinate system allows stereo retinex to propagate reflectance information across changes in surface orientation, while at the same time inhibiting the propagation of potentially invalid illumination information. The stereo retinex algorithm builds upon the multi-resolution implementation of retinex known as McCann99. Experiments on synthetic and real images show that stereo retinex estimates correct surface object colors significantly more accurately than unmodified McCann99 retinex.
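The core mechanism the abstract describes, ratio comparisons between neighboring pixels, restricted by a stereo-derived edge map to locations on approximately the same 3-D plane, can be illustrated with a minimal single-channel sketch. This is not the authors' McCann99-based multi-resolution implementation; it is a simplified ratio-reset-average iteration under stated assumptions: intensities are positive, and `plane_id` is a hypothetical integer label map (standing in for the stereo edge information) that marks approximately planar regions.

```python
import numpy as np

def stereo_retinex_sketch(channel, plane_id, n_iter=20):
    """Illustrative single-channel retinex-style lightness estimate.

    channel:  2-D array of positive intensities for one color channel.
    plane_id: 2-D integer array labeling approximately planar regions
              (assumed to come from stereo reconstruction); intensity
              ratios are propagated only between same-label pixels.
    """
    # Work in the log domain; normalize so the global maximum maps to 0 (white).
    log_i = np.log(channel / channel.max())
    estimate = np.zeros_like(log_i)  # initialize every pixel to white
    for _ in range(n_iter):
        for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            neighbor_est = np.roll(estimate, (dy, dx), axis=(0, 1))
            # Log-domain intensity ratio between a pixel and its neighbor.
            ratio = log_i - np.roll(log_i, (dy, dx), axis=(0, 1))
            # Stereo restriction: compare only pixels on the same plane.
            same_plane = plane_id == np.roll(plane_id, (dy, dx), axis=(0, 1))
            # Ratio-product step, then "reset": nothing brighter than white.
            candidate = np.minimum(neighbor_est + ratio, 0.0)
            # Average with the current estimate, only within the same plane.
            estimate = np.where(same_plane, 0.5 * (estimate + candidate), estimate)
    return np.exp(estimate)
```

With an illumination step that coincides with a plane boundary, the mask blocks the large cross-boundary ratio from being misread as a reflectance change, so a uniform surface on each plane is recovered as white on both sides.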

Full text (pdf)

