ICCV'01, Vancouver, Canada, July, 2001

4-Sensor Camera Calibration for Image Representation Invariant to Shading, Shadows, Lighting, and Specularities
Graham D. Finlayson
School of Information Systems
The University of East Anglia
Norwich, England NR4 7TJ
graham@sys.uea.ac.uk

Mark S. Drew
School of Computing Science
Simon Fraser University
Vancouver, British Columbia, Canada V5A 1S6
mark@cs.sfu.ca


Paper (version with full-size figures) [.pdf]


Most lighting can be accurately modeled using a simplified Planckian function. If we form logarithms of color ratios of camera sensor values, then in a Lambertian-plus-specular two-lobe model of reflection the temperature-dependent term separates out and traces a straight line: changing the lighting moves each pixel value along a straight line whose direction is fixed for a given camera. Here we use a 4-sensor camera, so forming color ratios reduces the dimensionality to 3. Taking logarithms and projecting onto the plane in the 3D color space orthogonal to the light-change direction yields an image representation that is invariant to illumination change. For a given camera, the position of the specular point in the 2D plane is always the same, independent of the lighting; thus a camera calibration produces illumination invariance at a single pixel. In the plane, matte surfaces reduce to points and specularities lie along almost straight lines. Extending each pixel value back to its matte position, postulated to be at maximum radius from the fixed specular point at any given angle in the 2D plane, removes specularity. Images are thus independent of shading (by forming ratios), independent of shadows (by removing dependence on illumination temperature), and independent of specularities. The method is examined by forming 4D images from hyperspectral images, using real camera sensors, with encouraging results.
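
As a rough illustration of the pipeline, the following is a minimal NumPy sketch, assuming a 4-band image `img` (H x W x 4), a calibrated light-change direction `e` in the 3D log-ratio space, and a calibrated 2D specular point `s`. The function names and the angular-binning strategy used to locate the maximum radius are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def log_ratio_chromaticity(img, eps=1e-6):
    """3D log band-ratios of a 4-band image: log(R_k / R_4), k = 1..3."""
    img = np.clip(img.astype(float), eps, None)
    return np.log(img[..., :3] / img[..., 3:4])

def projection_basis(e):
    """Orthonormal basis (2 x 3) of the plane orthogonal to direction e."""
    e = e / np.linalg.norm(e)
    # The last two right-singular vectors span the orthogonal complement of e.
    return np.linalg.svd(e[None, :])[2][1:]

def invariant_image(img, e):
    """2D illumination-invariant coordinates for each pixel."""
    chi = log_ratio_chromaticity(img)          # H x W x 3
    P = projection_basis(e)                    # 2 x 3
    return chi @ P.T                           # H x W x 2

def remove_specularity(inv2d, s, n_bins=64):
    """Push each pixel out to the maximum radius from the specular point s
    found near its angle -- a crude angular-bin version of 'extend each
    pixel back to its matte position'."""
    d = inv2d - s                              # offsets from the specular point
    r = np.linalg.norm(d, axis=-1)
    theta = np.arctan2(d[..., 1], d[..., 0])
    bins = np.digitize(theta.ravel(), np.linspace(-np.pi, np.pi, n_bins))
    r_max = np.zeros(n_bins + 2)
    np.maximum.at(r_max, bins, r.ravel())      # max radius per angular bin
    scale = (r_max[bins] / np.maximum(r.ravel(), 1e-12)).reshape(r.shape)
    return s + d * scale[..., None]            # matte (specular-free) coordinates
```

A typical use, under these assumptions, would be `remove_specularity(invariant_image(img, e), s)`, with `e` and `s` obtained from the one-time camera calibration described above.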