Diagonal versus Affine Transformations for Color Correction
Funt, B.V., and Lewis, B.C.
"Diagonal versus Affine Transformations for Color Correction",
Journal of the Optical Society of America A, Vol 17, No. 11, Nov. 2000.
Standard methods for color correction use a
diagonal-matrix transformation. Zaidi proposes a
two-parameter affine model instead; we show that it offers no
improvement in accuracy over the
diagonal model, especially when a sharpening transformation is also used.
In Zaidi’s model, the sensor responses are combined
into a luminance channel and two color channels in
MacLeod–Boynton chromaticity space. These are referred
to as the rg and yv coordinates. To account for
changes in illumination, these coordinates are transformed
into illumination-independent color descriptors
with an affine transformation that involves a scaling of
one coordinate and a translation of the other.
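The affine step described above can be sketched in a few lines. The choice of which coordinate is scaled and which is translated, and the parameter names, are assumptions for illustration; the paper fits the two parameters to the illumination change.

```python
def affine_correct(rg, yv, scale, shift):
    """Two-parameter affine illumination correction (sketch).

    One MacLeod-Boynton chromaticity coordinate is scaled and the
    other translated, as described above.  Assigning the scale to rg
    and the shift to yv is an illustrative assumption.
    """
    return scale * rg, yv + shift

# Illustrative parameter values (not from the paper):
rg_corr, yv_corr = affine_correct(0.5, 0.2, scale=2.0, shift=0.1)
```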
This is an interesting approach, which we will examine
to see whether it offers any improvements over the performance
of the more standard diagonal-matrix transform
(DMT). Zaidi’s model accounts for illumination change
by a two-parameter affine transformation. Using two parameters,
instead of the three parameters implicit in a
typical von Kries scaling of the cone signals, is an interesting
approach, but the affine model is not the only two-parameter
model available. We consider as an alternative
a two-parameter diagonal model and find it to model
illumination change more accurately than Zaidi’s affine
model, especially when used in conjunction with a technique
known as spectral sharpening.
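The diagonal alternative, with and without sharpening, can be sketched as follows. A von Kries-style diagonal transform scales each sensor channel independently; spectral sharpening first maps the sensor responses into a new basis with a matrix T, applies the diagonal scaling there, and maps back. The gains and the sharpening matrix below are placeholders, not the values used in the paper.

```python
import numpy as np

def diagonal_correct(rgb, gains):
    """Von Kries-style diagonal correction: per-channel scaling."""
    return np.asarray(gains) * np.asarray(rgb)

def sharpened_diagonal_correct(rgb, gains, T):
    """Diagonal correction applied in a sharpened sensor basis.

    Map responses into the sharpened basis with T, scale each
    channel there, then map back with T's inverse.
    """
    rgb = np.asarray(rgb)
    return np.linalg.solve(T, np.asarray(gains) * (T @ rgb))

# With T = identity, sharpening reduces to the plain diagonal model.
rgb = np.array([0.2, 0.4, 0.6])
gains = np.array([1.5, 1.0, 0.8])
plain = diagonal_correct(rgb, gains)
sharp = sharpened_diagonal_correct(rgb, gains, np.eye(3))
```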