Tuesday, August 19, 2008
Activity 13: Photometric Stereo
Continuing with how image processing can be applied to our lives, we move on to photometric stereo. When we say "stereo" we usually associate the word with "music," but stereo also means having two or more images placed together to form a 3D image, which is what we are going to do. "Photometric" because we use light sources to do this.
Essentially, photometric stereo estimates the shape of an object from images taken under light sources at different locations. We process the data from the images using the code below (the math is included in the code, and the explanations are in the lecture notes):
chdir('G:\poy\poy backup\physics\186\paper 13');
loadmatfile('Photos.mat'); // To load the Matlab images

// These are the positions of the light sources
V1 = [0.085832 0.17365 0.98106];
V2 = [0.085832 -0.17365 0.98106];
V3 = [0.17365 0 0.98481];
V4 = [0.16318 -0.34202 0.92542];
V = [V1; V2; V3; V4];

// Flatten each image into a row to produce the intensity matrix I
I1 = I1(:)'; I2 = I2(:)'; I3 = I3(:)'; I4 = I4(:)';
I = [I1; I2; I3; I4];

a = 1e-6; // This is the additive factor (avoids division by zero)

// g is the matrix that represents the relationship of V to I
// (the least-squares solution of V*g = I)
g = inv(V'*V)*V'*I;
mod = sqrt((g(1,:).*g(1,:)) + (g(2,:).*g(2,:)) + (g(3,:).*g(3,:)));
mod = mod + a;

for i = 1:3
    n(i,:) = g(i,:)./mod; // To calculate the normal vectors
end

nz = n(3,:) + a;
dfx = -n(1,:)./nz; // Performing partial differentiation
dfy = -n(2,:)./nz;
z1 = matrix(dfx, 128, 128); // Reshaping the matrix to 128x128
z2 = matrix(dfy, 128, 128);
int1 = cumsum(z1, 2); // We use the cumsum function to perform our integration
int2 = cumsum(z2, 1);
z = int1 + int2;
scf(0);
plot3d(1:128, 1:128, z); // This shows the reconstruction
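The same pipeline can be sketched in Python with NumPy. This is only a sketch of the method, not the actual activity code: the light directions are copied from the Scilab above, the function name is my own, and note that NumPy reshapes row-major while Scilab's matrix() fills column-major, so the reconstruction comes out transposed relative to the Scilab result.

```python
import numpy as np

# Light-source directions, copied from the Scilab code above
V = np.array([
    [0.085832,  0.17365, 0.98106],
    [0.085832, -0.17365, 0.98106],
    [0.17365,   0.0,     0.98481],
    [0.16318,  -0.34202, 0.92542],
])

def photometric_stereo(I, V, eps=1e-6):
    """I: (4, h*w) flattened intensity rows; V: (4, 3) light directions.
    Returns the reconstructed surface z of shape (h, w) (square images assumed)."""
    # Least-squares solve V g = I for g, shape (3, h*w)
    g = np.linalg.solve(V.T @ V, V.T @ I)
    mod = np.sqrt((g ** 2).sum(axis=0)) + eps   # |g| per pixel, plus additive factor
    n = g / mod                                 # unit surface normals
    nz = n[2] + eps
    dfx = -n[0] / nz                            # partial derivatives of the surface
    dfy = -n[1] / nz
    h = w = int(round(np.sqrt(I.shape[1])))
    z1 = dfx.reshape(h, w)
    z2 = dfy.reshape(h, w)
    # Integrate via cumulative sums along each axis, as in the Scilab code
    return np.cumsum(z1, axis=1) + np.cumsum(z2, axis=0)
```

A quick sanity check: a flat surface facing the camera (normals all (0, 0, 1)) has intensities equal to each light's z-component, and the reconstruction should be flat.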
Acknowledgments
I believe I performed this activity fairly well. Thanks to Cole for helping me with the code. I think it's OK to give me 10/10 neutrinos? Hehehe.
posted by poy @ 1:03 AM
Monday, August 18, 2008
Activity 12: Correcting Geometric Distortions
Continuing our grand pursuit of using image processing techniques to solve real-life problems, we move on to removing geometric distortions from images. Geometric distortions occur because low-end cameras exist! (Hahaha! To make the life of a physicist easier, buy high-end cameras! Kidding!) There are two kinds of distortion that may occur: pincushion distortion and barrel distortion. These distortions are better shown than meticulously explained:
In the particular problem we will solve, we are given an image of a Capiz window with an obvious barrel distortion:
We "selectively select" the part where the distortion is most evident: The Capiz tiles.
Then, from the image, we "envision" an "ideal" image by locating the most "undistorted" Capiz tile, obtaining its area, and assuming every tile is exactly the same. In my case I chose the tile in the 7th row, 5th column. The resulting "ideal" image is shown below:
Thinking of both images (the distorted original and the ideal) as matrices that have a correlation with one another, we can easily find the "transformation" that links the two together (since they are obviously NOT the same! Hahaha!). Say the corner points of the first Capiz tile correspond to four (x,y) pixel values; due to the distortion, these values do not correspond to the (x,y) values of the ideal image! So we set up the proper matrix equations relating one to the other to get the transformation coefficients, which we then apply to correct the distorted image! Gets? To further elaborate, I'm posting essential parts of Dr. Soriano's lecture:
This was the code I used to implement the process:
chdir('G:\poy\poy backup\physics\186\paper 12');
im = imread('realdistorted.jpg');
im = im2gray(im);
[x, y] = size(im); // Image dimensions (x and y were undefined in my first draft)
M = [];

// I chose a wide-area polygon rather than a relationship of just one square
xi = [13; 87; 87; 16];
yi = [15; 15; 180; 181];
T = [13 14 13*14 1; 86 14 86*14 1; 86 177 86*177 1; 13 177 13*177 1];
C1 = inv(T)*xi; // Bilinear coefficients for x
C2 = inv(T)*yi; // Bilinear coefficients for y

for i = 1:x
    for j = 1:y
        // Map each pixel of the ideal image back into the distorted image
        x_im = C1(1)*i + C1(2)*j + C1(3)*i*j + C1(4);
        y_im = C2(1)*i + C2(2)*j + C2(3)*i*j + C2(4);
        if x_im >= x
            x_im = x;
        end
        if y_im >= y
            y_im = y;
        end
        M(i, j) = im(round(x_im), round(y_im));
    end
end
scf(2);
imshow(M, []);
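The same per-pixel bilinear mapping might look like this in Python with NumPy. This is a sketch, not the blog's actual data or code: the helper names are my own, and the corner correspondences are placeholders you would fill in from your own images.

```python
import numpy as np

def bilinear_coeffs(ideal_pts, distorted_vals):
    """Solve T c = v for the four bilinear coefficients, where each row of T
    is [x, y, x*y, 1] for one corner point of the ideal image."""
    T = np.array([[px, py, px * py, 1.0] for px, py in ideal_pts])
    return np.linalg.solve(T, np.asarray(distorted_vals, dtype=float))

def undistort(im, xi, yi, ideal_pts):
    """im: 2D grayscale array; (xi, yi): corner coordinates in the distorted
    image; ideal_pts: the matching corners in the ideal image."""
    c1 = bilinear_coeffs(ideal_pts, xi)
    c2 = bilinear_coeffs(ideal_pts, yi)
    h, w = im.shape
    out = np.zeros_like(im)
    for i in range(h):
        for j in range(w):
            # Map each ideal pixel back into the distorted image
            px = c1[0] * i + c1[1] * j + c1[2] * i * j + c1[3]
            py = c2[0] * i + c2[1] * j + c2[2] * i * j + c2[3]
            px = min(max(int(round(px)), 0), h - 1)  # clamp to image bounds
            py = min(max(int(round(py)), 0), w - 1)
            out[i, j] = im[px, py]
    return out
```

A useful sanity check is the identity mapping: when the "distorted" corners coincide with the ideal corners, the output should equal the input.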
The result I obtained is best observed as a GIF file. Notice the difference in the lines: they're straighter!!!
Acknowledgments
This activity is HARD... no, LOOOOOONG!!! HAHAHA!!! But I did it! And I can confidently say that I am "semi-independent" in performing this activity, but I really have to thank Jeric for giving me a head start in making a working code. I give myself 10/10 neutrinos!
posted by poy @ 10:50 PM
Monday, August 04, 2008
Activity 11: Camera Calibration
The world we see is in 3D, but a camera captures only a 2D image. Camera calibration lets us relate the 2D image coordinates to the 3D world coordinates.
For this activity, we were tasked to take a picture of the folded checkerboard shown below. Each square is 1"x1", so the checkerboard can act as a 3D grid. Choosing 22 arbitrary points and using the locate function of Scilab, we get the image coordinates of these points. The point of this activity is to obtain the calibration matrix of the camera.
The image coordinates obtained using the locate function of Scilab are shown in the table below, together with the table of values for the chosen points:
Processing the results in Scilab, we get the calibration matrix A of the camera and its values are shown in the table below:
Registering the calibration values and testing the accuracy of the method on 6 arbitrary points, we get a standard deviation of 1.44 for the y axis and 0.43 for the z axis.
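The usual least-squares formulation of this calibration (the direct linear transform, with the last matrix entry fixed to 1) can be sketched in Python with NumPy. This is my own hedged sketch of the standard method, not the activity's actual Scilab code, and the function names are assumptions:

```python
import numpy as np

def calibrate(world_pts, image_pts):
    """Estimate the 3x4 camera calibration matrix A (with a_34 fixed to 1)
    from n >= 6 correspondences between 3D world points and 2D image points."""
    Q, d = [], []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        # Each correspondence contributes two linear equations in the
        # 11 unknown entries of A
        Q.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z])
        Q.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z])
        d += [u, v]
    a, *_ = np.linalg.lstsq(np.array(Q, float), np.array(d, float), rcond=None)
    return np.append(a, 1.0).reshape(3, 4)

def project(A, pt):
    """Apply the calibration matrix to a 3D world point, giving image (u, v)."""
    p = A @ np.append(pt, 1.0)
    return p[:2] / p[2]
```

With noise-free correspondences from a known matrix, `calibrate` recovers a matrix that reprojects new points correctly, which is how the 6 test points above check the method.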
I give myself 9/10 neutrinos for this activity... Since even if I accomplished what is needed to be done, it's still pretty confusing for me.
posted by poy @ 5:30 PM