[Projects]

reflectance image recovery via NMFsc

The problem of illumination estimation for color constancy and automatic white balancing of digital color imagery can be viewed as the separation of the image into illumination and reflectance components. We propose using nonnegative matrix factorization with sparseness constraints (NMFsc) to separate the components. In the log domain, the multiplication between illumination and reflectance becomes addition, and the image data are organized as a matrix to be factored into nonnegative components. Sparseness constraints imposed on the resulting factors help distinguish illumination from reflectance. Because the NMFsc-based approach does not require spatial smoothing, our results are free of the halo effects associated with traditional Retinex-based illumination estimates.
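
A minimal sketch of the idea in Python (illustrative only, not our implementation): scikit-learn's L1-regularized NMF stands in for Hoyer-style explicit sparseness constraints, and the matrix layout, rank, and the smoothness heuristic used to pick the illumination factor are assumptions.

    import numpy as np
    from sklearn.decomposition import NMF

    def separate_log_components(gray, rank=2, sparse_weight=0.1):
        """Factor the nonnegative log-image matrix; gray has values in (0, 1]."""
        eps = 1e-6
        V = -np.log(np.clip(gray, eps, 1.0))            # optical density, nonnegative
        model = NMF(n_components=rank, init='nndsvda', max_iter=500,
                    alpha_W=0.0, alpha_H=sparse_weight, l1_ratio=1.0, random_state=0)
        W = model.fit_transform(V)                       # spatial modes (columns)
        H = model.components_
        # Heuristic (assumption): treat the smoothest rank-1 term as illumination.
        terms = [np.outer(W[:, k], H[k]) for k in range(rank)]
        variation = [np.abs(np.diff(t, axis=1)).mean() for t in terms]
        log_illum = terms[int(np.argmin(variation))]
        log_refl = V - log_illum
        return np.exp(-log_illum), np.exp(-log_refl)     # back to the linear domain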

color curvature/vessel measure

Features can be discriminated by both intensity and color information in a given image; the study of color features, however, is limited. In this project we measure curvature in color or vector-valued images (up to 4 dimensions) by extending the existing grayscale-image curvature approach, which makes use of the eigenvalues of the Hessian matrix. In the case of vector-valued images, the Hessian is no longer a 2D matrix but rather a rank-3 tensor. We use quaternion curvature to derive a vesselness measure for tubular structures in color or vector-valued images by extending Frangi's vesselness measure for scalar images. Experimental results show the effectiveness of quaternion color curvature in generating a vesselness map, in which features are better discriminated than with traditional grayscale-based methods.
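
For reference, a sketch of the scalar-image baseline that the quaternion measure extends (Frangi's 2-D vesselness computed from Hessian eigenvalues); the parameter values and helper name are illustrative, not the project's code.

    import numpy as np
    from skimage.feature import hessian_matrix, hessian_matrix_eigvals

    def frangi_vesselness_2d(image, sigma=2.0, beta=0.5, c=15.0):
        """Frangi vesselness for bright tubular structures in a grayscale image."""
        H = hessian_matrix(image, sigma=sigma, order='rc')
        l1, l2 = hessian_matrix_eigvals(H)
        # sort per pixel by absolute value so l2 is the dominant curvature
        swap = np.abs(l1) > np.abs(l2)
        l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
        Rb = np.abs(l1) / (np.abs(l2) + 1e-10)           # blob vs. line measure
        S = np.sqrt(l1**2 + l2**2)                       # second-order structureness
        V = np.exp(-Rb**2 / (2 * beta**2)) * (1 - np.exp(-S**2 / (2 * c**2)))
        V[l2 > 0] = 0                                    # bright vessels need l2 < 0
        return V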

color texture labeling

In this project we associate visual content with semantic content to obtain a better understanding of the scene. First, fifteen datasets are collected, each containing sampled textures representing real-world object textures such as grass, flowers, sky, rocks, and walls; the datasets are named accordingly. A set of quaternion-valued texture basis vectors is extracted from each dataset using quaternion principal component analysis. Given an arbitrary texture, we reconstruct it using the first 3 texture basis vectors of each dataset. The four datasets that best describe the texture (i.e., with the least reconstruction error) are used for labeling. Based on the labels of its textures, a scene can then be classified as indoor or outdoor.
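
The labeling rule can be sketched as follows, with ordinary PCA standing in for quaternion PCA; the dataset names, patch representation, and function names are placeholders.

    import numpy as np

    def fit_basis(samples, k=3):
        """Return the mean and first k principal axes of flattened texture samples."""
        X = samples.reshape(len(samples), -1).astype(float)
        mean = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
        return mean, Vt[:k]

    def top_labels(texture, bases, n_best=4):
        """Rank datasets by reconstruction error using each dataset's 3-vector basis."""
        x = texture.ravel().astype(float)        # must match the sample patch size
        errors = {}
        for name, (mean, basis) in bases.items():
            coeffs = (x - mean) @ basis.T
            recon = mean + coeffs @ basis
            errors[name] = np.linalg.norm(x - recon)
        return sorted(errors, key=errors.get)[:n_best]

    # bases = {'grass': fit_basis(grass_samples), 'sky': fit_basis(sky_samples), ...}
    # labels = top_labels(query_patch, bases)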

color texture segmentation

The quaternion representation of color is shown here to be effective in the context of segmenting color images into regions of similar color texture. The advantage of using quaternion arithmetic is that a color can be represented and analyzed as a single entity. A low-dimensional basis for the color textures found in a given image is derived via quaternion principal component analysis (QPCA) of a training set of color texture samples. A color texture sample is then projected onto this basis to obtain a concise (single quaternion) description of the texture. To handle the large amount of training data, QPCA is extended to incremental QPCA. The power of the proposed quaternion color texture representation is demonstrated by its use in an unsupervised segmentation algorithm that successfully divides an image into regions on the basis of texture.
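
A rough sketch of the pipeline, with ordinary PCA and k-means standing in for quaternion PCA and our unsupervised grouping; the patch size, number of segments, and helper name are assumptions.

    import numpy as np
    from sklearn.cluster import KMeans

    def segment_by_texture(image, patch=16, k_basis=8, n_segments=4):
        """Project non-overlapping color patches onto a learned basis, then cluster."""
        h, w, c = image.shape
        rows, cols = h // patch, w // patch
        patches = np.array([image[i*patch:(i+1)*patch, j*patch:(j+1)*patch].ravel()
                            for i in range(rows) for j in range(cols)], dtype=float)
        mean = patches.mean(axis=0)
        _, _, Vt = np.linalg.svd(patches - mean, full_matrices=False)
        features = (patches - mean) @ Vt[:k_basis].T     # concise per-patch descriptor
        labels = KMeans(n_clusters=n_segments, n_init=10,
                        random_state=0).fit_predict(features)
        return labels.reshape(rows, cols)                # one label per patch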

skin appearance model

Previous human skin models suggest that the color of human skin is mostly determined by the content of melanin in the epidermal layer combined with the content of hemoglobin in the dermal layer. Treating melanin and hemoglobin as two independent components, their axes can be determined via independent component analysis (ICA) in log space. The melanin content of an arbitrary skin surface can therefore be estimated, for skin lesion detection, for instance.
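
A minimal sketch of the axis estimation, assuming skin pixels have already been masked; FastICA is used here as a generic ICA implementation, and the mapping of the recovered axes to melanin and hemoglobin is an assumption.

    import numpy as np
    from sklearn.decomposition import FastICA

    def pigment_axes(skin_rgb):
        """skin_rgb: N x 3 array of skin pixel values in (0, 1]."""
        logs = -np.log(np.clip(skin_rgb, 1e-6, 1.0))     # log-space skin colors
        ica = FastICA(n_components=2, random_state=0)
        sources = ica.fit_transform(logs)                # per-pixel pigment strengths
        axes = ica.mixing_                               # 3 x 2 melanin/hemoglobin axes
        return sources, axes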

Furthermore, under uncontrolled lighting, the color of facial skin changes significantly with changes in the light incident upon it. To normalize the skin tones of human faces, we eliminate the effects of illumination and allow only variations related to melanin concentration. This simple and computationally inexpensive method is accomplished by shifting the color of the entire image so that skin pixels lie on the pre-defined melanin axis.
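
The normalization step can be sketched as below; the skin mask, the melanin axis value, and the function name are illustrative.

    import numpy as np

    def normalize_skin_tone(image, skin_mask, melanin_axis):
        """image in (0,1], skin_mask boolean, melanin_axis a unit 3-vector in log space."""
        eps = 1e-6
        logs = -np.log(np.clip(image, eps, 1.0))
        skin_mean = logs[skin_mask].mean(axis=0)
        # keep only the component of the mean skin color along the melanin axis
        on_axis = np.dot(skin_mean, melanin_axis) * melanin_axis
        shifted = logs - (skin_mean - on_axis)           # global shift over all pixels
        return np.exp(-shifted)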

calibration for HDR imaging

The camera used to capture high dynamic range images is a Nikon D700 digital camera. Characteristics of the D700, such as the camera sensor sensitivity functions, sensor linearity (gamma), black frames, and sensor noise, are analyzed. The calibration is done with a monochromator and a spectrometer. To achieve better accuracy, we directly use the 12-bit raw data stored by the camera, without any pre-processing. To create HDR images, the camera's auto-bracketing is used to capture up to 9 exposures with a 1 EV (exposure value) difference between successive images in the sequence, at a capture rate of 5 frames per second. Images from the same sequence are first aligned and then combined into a 2142x1422 HDR RGB image. The dataset (containing 105 HDR images) can be downloaded [here].
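
An illustrative merge of such an exposure stack, with OpenCV standing in for our calibrated pipeline; the file names and exposure times are placeholders.

    import cv2
    import numpy as np

    files = [f'bracket_{i}.tif' for i in range(9)]          # 9 shots, 1 EV apart
    images = [cv2.imread(f) for f in files]
    times = np.array([1/500 * 2**i for i in range(9)], dtype=np.float32)

    cv2.createAlignMTB().process(images, images)            # align the bracketed shots
    response = cv2.createCalibrateDebevec().process(images, times)
    hdr = cv2.createMergeDebevec().process(images, times, response)
    cv2.imwrite('scene.hdr', hdr)                           # 32-bit float radiance map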

identification of dichromatic surfaces and illuminations


[Talks]

USC computer vision invited talk

- April 2011
  • "Light, Surface and Features in Color Images" [slides]

BC Cancer Agency invited talk

- Feb. 2009
  • "Studies in Appearance of Skin" [slides]

[Demos]

LTE: A Laparoscopic Training Environment for Surgeons

This demonstration presents the current state of an on-going team project at Simon Fraser University to develop a virtual environment that helps train surgeons in performing laparoscopic surgery. The environment is created with spring-mass-based models for both rigid and deformable objects, providing force feedback when a user interacts with a scene through a haptic device. Procedures such as basic hand-eye coordination, single-handed and bi-manual approaches, and dexterous manipulation are demonstrated. [ViTen]
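
An illustrative spring-damper force of the kind used in mass-spring models (not the LTE implementation); the constants and function name are placeholders.

    import numpy as np

    def spring_force(p_a, p_b, v_a, v_b, rest_len, k=200.0, damping=1.5):
        """Force on particle a from the spring connecting particles a and b."""
        d = p_b - p_a
        length = np.linalg.norm(d) + 1e-9
        direction = d / length
        stretch = length - rest_len                      # Hooke's law term
        rel_speed = np.dot(v_b - v_a, direction)         # damping along the spring
        return (k * stretch + damping * rel_speed) * direction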

[Software]

Quaternion Functions in Matlab