
   Test Images for Computational Colour Constancy

This page contains links to some of the data described in:

Kobus Barnard, Lindsay Martin, Brian Funt, and Adam Coath,
A Data Set for Colour Research,
Color Research and Application, Volume 27, Number 3, pp. 147-151, 2002.

(This is the appropriate archival reference to this data).

Questions, comments, and problems with this data should be directed to Kobus Barnard.

 Description

All image data described here consists of a number of scenes, each captured under 11 different lights. These lights were chosen to be representative of the spread of common illuminants.

The scenes are divided into four sets, according to scenes with:

  1. Minimal specularities
    (22 scenes, 223 images).
  2. Non-negligible dielectric specularities
    (9 scenes, 98 images).
  3. Metallic specularities
    (14 scenes, 149 images).
  4. At least one fluorescent surface
    (6 scenes, 59 images).

Some images were culled from each set due to deficiencies in the calibration data. The number of valid images is listed above.

The experimental procedure for each scene was as follows:

  1. A new scene was constructed.
  2. A white reference standard was placed in the centre of the scene, perpendicular to the direction of the illuminant.
  3. The distance between the illuminant and the scene was adjusted to minimize the number of clipped pixels reported by the camera. For scenes with bright specularities, the resulting images were purposely underexposed.
  4. A "reference" image was captured with the white reference included in the scene.
  5. An illuminant spectrum was measured from the light reflected from the white reference.
  6. The white reference was removed from the scene.
  7. The final input image of the scene was captured by averaging 50 successive video frames.
  8. Steps 2 to 7 were repeated for the remaining 10 illuminants in the set.

For each scene captured under a given illuminant, this procedure produced:

  1. The illuminant spectrum.
  2. A single frame reference image containing the white reference standard.
  3. The input image (averaged over 50 frames).

Both the reference and input image were mapped to a linear R,G,B space and received other corrections as noted below. The estimate of the (R,G,B) of the illuminant for the input image was computed from the linearized reference image as the average of the central 30 by 30 pixel window covering the white reference standard.
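
As a concrete illustration of that computation, here is a minimal sketch in Python (numpy and the tifffile package are assumed to be available, and the file name is hypothetical) that averages the central 30 by 30 pixel window of a linearized reference image:

  import numpy as np
  import tifffile  # any TIFF reader that returns a numpy array would do

  # Load a linearized reference image (rows x cols x 3); the file name is hypothetical.
  ref = tifffile.imread("scene01_reference.tif").astype(np.float64)

  # Average over the central 30 x 30 pixel window, assumed to cover the white standard.
  h, w = ref.shape[:2]
  r0, c0 = h // 2 - 15, w // 2 - 15
  window = ref[r0:r0 + 30, c0:c0 + 30, :]
  illuminant_rgb = window.reshape(-1, 3).mean(axis=0)

  print("Estimated illuminant (R, G, B):", illuminant_rgb)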

We believe that this method provides a good estimate of the chromaticity of the illuminant, but the error in the illuminant magnitude for any given image could be quite high (easily 10%) due to the difficulty of keeping the white reflectance standard perpendicular to the light source. Furthermore, three of the sources were spatially extended, and for these we simply attempted to find the orientation that maximized the brightness of the reflectance standard.

Because of the frame averaging described above, many of the images have a very large dynamic range. Image pixels have more than the usual 8-bit precision, since the averaging was performed with floating point arithmetic and the results stored in a floating point image format.
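
As a small illustration of the precision gained by floating point averaging, the following sketch uses synthetic frames standing in for the 50 captured video frames:

  import numpy as np

  # Fifty hypothetical 8-bit frames of the same scene with independent sensor noise.
  rng = np.random.default_rng(0)
  frames = rng.integers(0, 256, size=(50, 64, 64, 3), dtype=np.uint8)

  # Averaging in floating point keeps sub-integer detail, so the result carries
  # more than 8 bits of effective precision per channel.
  averaged = frames.astype(np.float64).mean(axis=0)
  print(averaged.dtype, float(averaged.min()), float(averaged.max()))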

Several preprocessing steps were taken to improve the data. First, we removed some fixed pattern noise. Second, we corrected for a spatially varying chromaticity shift due to the camera optics. Finally, we mapped the images into a more linear space as described in [1]. This included removing the sizable camera black signal. The resulting images are such that pixel intensity is essentially proportional to scene radiance.

Because of the preprocessing and extended dynamic range, it is possible to scale the images by a factor of up to about 10 without incurring too much noise. Therefore, the images can be rescaled to emulate capture with a camera with an automatic aperture. For example, if a scene has significant specularities, then the specularities would normally be clipped. We tried to minimize clipping in our images, but normal camera behavior can be emulated by scaling the image up, and clipping the result at 255, or whatever level is appropriate for the research being done. Our approach allows the study of higher dynamic range images, but does not rule out the emulation of more standard camera behavior.
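
A minimal sketch of this emulation follows; the scale factor is illustrative (the text above suggests factors up to about 10 remain acceptable), and the synthetic image simply stands in for one of the linearized, deliberately under-exposed images:

  import numpy as np

  def emulate_camera(image, scale, clip_level=255.0):
      # Scale a linear image up and clip it, mimicking an automatic-aperture
      # camera that saturates bright specularities.
      return np.clip(image.astype(np.float64) * scale, 0.0, clip_level)

  # Synthetic stand-in for one of the linearized images.
  image = np.random.default_rng(1).uniform(0.0, 80.0, size=(240, 320, 3))
  emulated = emulate_camera(image, scale=4.0)
  print(emulated.max())  # bright regions now saturate at the clip level of 255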

In order to allow researchers to experiment with the extra dynamic range, we provide the images in the 16-bit TIFF format. Standard 8-bit TIFF images are also provided for convenience.
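
For example, both bit depths can be read with any 16-bit-aware TIFF reader; the sketch below uses the tifffile package and hypothetical paths:

  import tifffile

  # Hypothetical paths to the same image at the two provided bit depths.
  img16 = tifffile.imread("mondrian_16_bit/scene01/example.tif")
  img8 = tifffile.imread("mondrian_8_bit/scene01/example.tif")

  print(img16.dtype, img8.dtype)  # expected: uint16 and uint8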

For each of the four image sets, there are two gzipped tar files provided, one for each image bit depth. Each file unpacks to a directory of the form:

<set>_<n>_bit

where <set> is one of "mondrian", "specular", "metallic", or "fluorescent", and <n> is either 8 or 16.

Below the top-level directory, data for each scene is in a separate subdirectory. Within each scene subdirectory there are 3 files for each illuminant, for a total of 33 files (fewer if images were culled due to deficiencies in data collection).
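
For example, once an archive is unpacked, the scene subdirectories can be enumerated as in the sketch below (the top-level directory name follows the pattern above; only file counts are printed, since the exact file names within a scene are not specified here):

  import pathlib

  root = pathlib.Path("mondrian_16_bit")  # one of the unpacked top-level directories

  for scene_dir in sorted(p for p in root.iterdir() if p.is_dir()):
      files = sorted(scene_dir.iterdir())
      print(scene_dir.name, len(files), "files")  # up to 33 files per scene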

For each scene/illuminant combination, there are 3 data files:

  1. The image of the scene captured under the illuminant (.tif).
  2. The illuminant spectrum (.spect).
  3. The RGB of the illuminant as reported by the camera (.rgb).

Because of imperfections in the camera calibration and the resulting sensor estimates, we suggest using the provided illuminant RGB (rather than computing it from the spectrum) whenever possible.
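
A sketch of reading one scene/illuminant triple follows; the base file name is hypothetical, and treating the .spect and .rgb files as whitespace-separated text is an assumption that should be checked against the actual data:

  import numpy as np
  import tifffile

  stem = "mondrian_16_bit/scene01/ball_halogen"  # hypothetical base name

  image = tifffile.imread(stem + ".tif")      # scene image under this illuminant
  spectrum = np.loadtxt(stem + ".spect")      # measured illuminant spectrum (assumed text)
  illuminant_rgb = np.loadtxt(stem + ".rgb")  # camera-reported illuminant RGB (assumed text)

  print(image.shape, spectrum.shape, illuminant_rgb)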

This data was collected by Lindsay Martin under the guidance of Kobus Barnard in Brian Funt's Computational Colour Vision Laboratory.

 Data

Minimal Specularity Set (22 scenes, 223 images)

Dielectric Specularity Set (9 scenes, 98 images)

Metallic Specularity Set (14 scenes, 149 images)

Fluorescent Surfaces Set (6 scenes, 59 images)

 Publications Which Use This Data