Wednesday, September 10, 2008
Activity 15: Color Camera Processing
Leaving 3D, we go back to the root of all our problems (kidding!): the camera. Since its creation, the camera has evolved to include many user-friendly features. One such feature is white balancing. White balancing enables the photographer to capture images under various lighting conditions such as cloudy, sunny, and indoors (fluorescent or tungsten). To understand how white balancing works, we must go back to the basics of colorimetry (the study of color, i.e., the visible spectrum). White balancing basically resolves the issue of implausible colors appearing in what you see. This is why our eyes have GOOD white balancing: we don't see as if we were wearing blue filter shades (when we're not wearing any); rather, we see objects in their right colors. An object that is blue appears blue, green appears green, white appears white, and so on. Compared to the eye, cameras are idiots when it comes to white balancing (well, not really idiots, I just like how it sounds, but quite frankly cameras are mismatched against our eyes). Auto white balancing often gives the "best" results, since this auto feature is the camera's attempt to act like the human eye.

Before I continue, here are examples of photographs (of the same objects) taken under fluorescent light using different white balance settings:

Now then, we know that a color image can be decomposed into three channels: red, green, and blue. If we have a pixel (or a patch of pixels) in the image that we know should be white, it has its own red, green, and blue values; we call these pixels our reference white. When we divide the red layer of the entire image by the mean red value of the reference white, and do likewise for the green and blue layers, we are performing white balancing with the Reference White Algorithm. Here's the code for that:

//Reference White Algorithm
stacksize(1e8);
chdir('G:\poy\poy backup\physics\186\paper 15');
im = imread('Tungsten1.JPG'); //the image
ref = imread('reference2.JPG'); //the reference white
Rref = mean(ref(:,:,1)); //mean of each channel of the reference white
Gref = mean(ref(:,:,2));
Bref = mean(ref(:,:,3));
New(:,:,1) = im(:,:,1)/Rref; //divide each channel of the image by the corresponding reference mean
New(:,:,2) = im(:,:,2)/Gref;
New(:,:,3) = im(:,:,3)/Bref;
A = find(New > 1.0);
New(A) = 1.0; //clip values greater than 1 back to 1
imwrite(New, 'Tungsten1 RW.JPG')
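
As a side note, the reference white doesn't have to come from a separate photo; it can also be cropped from a region of the image itself that you know should be white. Here's a minimal sketch of that idea (the patch coordinates below are hypothetical and for illustration only; they should point to an actually white region of your photo):

//Reference white taken from a patch of the image itself (sketch)
ref = im(1:50, 1:50, :); //hypothetical white region: rows 1-50, columns 1-50
Rref = mean(ref(:,:,1)); //then proceed exactly as in the code above
Gref = mean(ref(:,:,2));
Bref = mean(ref(:,:,3));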

If, however, we assume the image to be gray on average (this is the same as assuming its mean red, green, and blue values should be equal!), then averaging the red, green, and blue layers of the "unbalanced" image actually gives us a "Gray World". Now then, if we divide each layer of the "unbalanced" image by its corresponding "Gray World" average, we perform a white balancing technique called the Gray World Algorithm. Here's the code for that as well:

//Gray World Algorithm
stacksize(1e8);
chdir('G:\poy\poy backup\physics\186\paper 15');
im = imread('Copy of Tungsten.JPG'); //the image
Rg = mean(im(:,:,1)); //mean of each channel of the image itself
Gg = mean(im(:,:,2));
Bg = mean(im(:,:,3));
New(:,:,1) = im(:,:,1)/Rg; //divide each channel by its own mean
New(:,:,2) = im(:,:,2)/Gg;
New(:,:,3) = im(:,:,3)/Bg;
A = find(New > 1.0);
New(A) = 1.0; //clip values greater than 1 back to 1
imwrite(New, 'Tungsten GW.JPG')
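
Notice that both code blocks perform exactly the same per-channel scaling and clipping; the only difference is where the divisors come from (the reference white patch versus the image itself). Written out, with $\bar{R}, \bar{G}, \bar{B}$ denoting those per-channel means:

$$New_R = \min\!\left(\frac{im_R}{\bar{R}},\,1\right),\qquad New_G = \min\!\left(\frac{im_G}{\bar{G}},\,1\right),\qquad New_B = \min\!\left(\frac{im_B}{\bar{B}},\,1\right)$$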

Since I took the photos above under fluorescent light, the tungsten white balance setting (as can be seen from the .gif above) gives an unbalanced white. The results I obtained are shown below:

As you can observe, the Reference White Algorithm gives the best result: the image is crisp. The Gray World Algorithm gives what seems to be a saturated feel.

Repeating the procedure on images of green-colored objects, the same result occurs; that is, the Reference White Algorithm gives us a crisp image while the Gray World Algorithm gives us a saturated one.

I give myself 10 neutrinos for having performed this activity on my own! Yey!