Thursday, July 30, 2009

Activity 11 - Color Image Processing

After the previous activities on grayscale and binary images, color images are handled this time, specifically true-color images taken by a digital color camera under its various white balancing (WB) settings.

In a color digital image, each pixel is composed of the three primary spectral colors Red (R), Green (G), and Blue (B) overlaid in different proportions. These RGB values are given by the following equations.

$$R = K_R \int \rho(\lambda)\, S(\lambda)\, \nu_R(\lambda)\, d\lambda$$

$$G = K_G \int \rho(\lambda)\, S(\lambda)\, \nu_G(\lambda)\, d\lambda$$

$$B = K_B \int \rho(\lambda)\, S(\lambda)\, \nu_B(\lambda)\, d\lambda$$

$$K_i = \left[ \int S(\lambda)\, \nu_i(\lambda)\, d\lambda \right]^{-1}, \qquad i = R, G, B$$
In the expressions above, $\rho$ is the spectral reflectance of the surface, $S$ is the spectral power distribution (SPD) of the Planckian light source illuminating the surface, and $\nu$ is the spectral sensitivity of the color camera in each of the RGB channels. Meanwhile, $K$ is the normalizing or WB constant, which depends on the SPD of the light source and the spectral sensitivity of the camera and is again different for each color channel. The wavelength $\lambda$ ranges over the visible spectrum, usually from 380 nm to 780 nm.
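To make the role of the WB constant concrete, here is a toy numerical check of the expressions above in Scilab; the sampled reflectance, SPD, and red-channel sensitivity are all assumed values, not measured ones.

lam = 380:5:780;                       // visible wavelengths in nm
rho = ones(lam);                       // assumed reflectance: a perfectly white surface
S = ones(lam);                         // assumed SPD: a flat illuminant
nu_R = exp(-(lam - 600).^2 / 2000);    // assumed red-channel sensitivity peaked near 600 nm
K_R = 1 / sum(S .* nu_R);              // WB constant for the red channel
R = K_R * sum(rho .* S .* nu_R);       // gives exactly 1 for a white surface

With the correct K for this illuminant, a white surface lands exactly at the maximum pixel value of 1; a K computed for a different SPD shifts it away, which is precisely what a wrong WB setting does.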

Initially, an image containing the three major hues R, G, and B together with a known white object is taken with a digital color camera under each of its WB settings. The camera used is a Fujifilm FinePix with a maximum resolution of 6.3 megapixels, and its WB settings are automatic, fine, shade, fluorescent 1, fluorescent 2, fluorescent 3, and incandescent. The exposure value (EV) of the camera is set to -2, ensuring that the RGB values of each pixel do not exceed the maximum pixel value of 1.
The following set of images shows the pictures taken under constant fluorescent illumination with the 7 WB settings, in the order automatic, fine, shade, fluorescent 1, fluorescent 2, fluorescent 3, and incandescent, from top to bottom and left to right.






As expected, each WB setting of the camera results in a different color rendering of the picture. Hence, the normalizing constant K varies with each WB setting, producing different RGB pixel values. Of course, the automatic WB setting generates the best rendering since it automatically selects the most appropriate K values for the illumination. The fine and shade WB settings give reasonably good renderings, while the three fluorescent WB settings shift toward blue as the number increases from 1 to 3.
The last one, taken with the incandescent WB, is the most obviously incorrectly balanced image. This is then enhanced using two automatic white balancing (AWB) algorithms.

The first is the White Patch Algorithm, where a patch of a known white object is cropped from the unbalanced image and its average RGB values are determined; this is why the picture must contain a known white object. The RGB channels of the unbalanced image are then divided by the corresponding average RGB values of the white patch, and the resulting arrays are overlaid to form the white-patch-balanced rendering. Note that before the rendered image is saved, the final RGB arrays are divided by their maximum value so that no pixel exceeds 1, even though the EV of -2 guarantees that no value in the original image does. This is necessary because pixels brighter than the white patch averages end up with values greater than 1 after the division, and skipping this step leaves the rendering incorrectly balanced.
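
As a rough distillation of this procedure, a minimal Scilab sketch follows (the full script is in the Appendix); img is an RGB array scaled to [0, 1] and patch is the crop of the known white object, both assumed names:

function out = white_patch(img, patch)
    out = [];
    for c = 1:3
        w = mean(patch(:,:,c));         // average of the white patch in channel c
        out(:,:,c) = img(:,:,c) / w;    // normalize the channel to the white patch
    end
    out = out / max(out);               // rescale so no value exceeds 1 before saving
endfunction
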
Shown below is the white patch selected from the picture taken with the incandescent WB setting.



Then the following are the original picture and its rendering using the White Patch Algorithm.



The incorrectly balanced image taken with the incandescent WB setting is now appropriately white balanced using the White Patch Algorithm. The quality is as good as that from the camera's automatic WB setting.

The second AWB algorithm is called the Gray World Algorithm. Here, the average pixel value of each RGB channel of the unbalanced image is obtained, and the RGB arrays are divided by the corresponding averages. Hence, this algorithm works even when the picture does not contain a known white object. Again, before the rendered image is saved, the values are rescaled to a maximum of 1 as in the White Patch Algorithm.
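
A matching sketch of this variant, under the same assumptions as the white patch sketch above, with the channel means of the image itself replacing the white patch averages:

function out = gray_world(img)
    out = [];
    for c = 1:3
        out(:,:,c) = img(:,:,c) / mean(img(:,:,c));   // normalize to the channel mean
    end
    out = out / max(out);
endfunction
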
The original picture taken with the incandescent WB setting and its rendering using the Gray World Algorithm are as follows.



The name of the Gray World Algorithm itself implies its effect on an image. The channel averages taken here stand in for the RGB values of white, but they are weighted by every pixel in the scene. Hence, when the RGB arrays are divided by these averages, gray, which lies between white and black, becomes the basis of balancing, and the image enters the gray world.

As another illustration of the two AWB algorithms, the picture taken with the fluorescent 3 WB setting is rendered using both. Shown first below is the white patch chosen.



The next set of images consists of the original picture, its rendering using the White Patch Algorithm, and its rendering using the Gray World Algorithm.




It can be observed that the White Patch rendering of the picture taken with the fluorescent 3 WB setting is almost the same as that of the picture taken with the incandescent WB setting. However, the Gray World rendering for this WB setting is better than for the incandescent one.

For a more conclusive observation, another picture is taken, but this time the objects share colors belonging to a single primary hue. The incandescent WB setting is employed since it is the most inappropriate setting for the illumination condition. Some red objects are gathered together and, of course, a known white object is included in the group.
The white patch selected is shown below.



The following are the original picture, its rendering using the White Patch Algorithm, and using the Gray World Algorithm.




Apparently, the White Patch Algorithm performs better than the Gray World Algorithm. This is because the picture is genuinely white balanced: the average RGB values by which the color channels are normalized come from a known white object, whereas the Gray World Algorithm just uses the channel averages of the unbalanced image, in effect balancing it to gray. The Gray World Algorithm does generate a good rendering, however, if the WB setting used to take the picture matches the illumination condition well.

Since I successfully processed true-color images taken with different WB settings of a digital color camera using the two AWB algorithms, I give myself 10/10 in this activity.

I was able to complete this activity successfully through discussions with Gary and Raffy.

Appendix
Below is the Scilab code utilized in this activity.

stacksize(4e7);

imageRGB = imread('imageRGBincan.jpg');

//scf(0);
//imshow(imageRGB);

R = imageRGB(:,:,1);
G = imageRGB(:,:,2);
B = imageRGB(:,:,3);

// White Patch Algorithm

white_patch = imread('white patch RGBincan.jpg');

// average RGB values of the white patch (sum over all pixels / number of pixels)
Rw = sum(white_patch(:,:,1))/length(white_patch(:,:,1));
Gw = sum(white_patch(:,:,2))/length(white_patch(:,:,2));
Bw = sum(white_patch(:,:,3))/length(white_patch(:,:,3));

nR = R/Rw;
nG = G/Gw;
nB = B/Bw;

newimageRGB = [];
newimageRGB(:,:,1) = nR;
newimageRGB(:,:,2) = nG;
newimageRGB(:,:,3) = nB;

//scf(1);
//imshow(newimageRGB);
//imwrite(newimageRGB/max(newimageRGB), 'newimageRGBincan.jpg');

// Gray World Algorithm

// average pixel value of each channel over the whole image
Rgray = sum(R)/length(R);
Ggray = sum(G)/length(G);
Bgray = sum(B)/length(B);

ngrayR = R/Rgray;   // divide each channel by its own average
ngrayG = G/Ggray;
ngrayB = B/Bgray;

newimagegrayRGB = [];
newimagegrayRGB(:,:,1) = ngrayR;
newimagegrayRGB(:,:,2) = ngrayG;
newimagegrayRGB(:,:,3) = ngrayB;

//scf(2);
//imshow(newimagegrayRGB);
//imwrite(newimagegrayRGB/max(newimagegrayRGB), 'newimagegrayRGBincan.jpg');

Tuesday, July 28, 2009

Activity 10 - Preprocessing Text

Another activity applying various image processing techniques is done here. It is divided into two major parts. First, handwritten text from a scanned image is preprocessed. In the second part, instances of a typewritten word are located in the same scanned image.

The image of a scanned demo checklist form, provided by the activity, is initially downloaded. This is shown below.



A portion of handwritten text along the horizontal lines is then cropped from this image. Since the text image is tilted, its Fourier transform (FT) is examined to estimate the angle of rotation; alternatively, this angle can be found by trial and error. In this case, the text image is rotated by 1 degree clockwise using the function mogrify in Scilab. The following are the cropped text image and its rotated grayscale version.



The text image, from the top, reads: VGA Cable, Power Cord, Remote Control, RCA Cable, USB Cable. The primary reason for the rotation is to ease the enhancement of the text image: the horizontal lines must be eliminated, and a filter that removes this pattern is easy to create once the lines are truly horizontal. Next, the FT of the rotated text image is obtained and serves as the template for making the filter, so that the frequencies of the unwanted pattern can be blocked. The FT of the rotated text image and the filter created are shown as follows.



Since the horizontal lines are to be removed, the central vertical line of frequencies must be blocked. However, the very center must not be obstructed because it contains the primary information of the rotated text image.
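
Here, the filter is prepared as an image file and loaded with gray_imread (see the Appendix), but a similar mask could also be built directly in Scilab; in this sketch, the slit width and the size of the preserved center are assumptions:

[nr, nc] = size(textrot);                // dimensions of the rotated text image
mask = ones(nr, nc);
cx = round(nc/2); cy = round(nr/2);
mask(:, cx-1:cx+1) = 0;                  // block the central vertical line of frequencies
mask(cy-2:cy+2, cx-2:cx+2) = 1;          // but keep the very center (DC term) open
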
Below is the filtered rotated text image.



The text can now be extracted from the background after filtering. From the histogram of the grayscale values of the filtered rotated text image, obtained using GIMP, the threshold value for its conversion to binary type is determined. The following are the binarized filtered rotated text image and its inversion.



The threshold value used for binarizing is 0.275. Inverting the binarized filtered rotated text image is essential since the final step in preprocessing the text uses morphological operations; during these operations, the region of interest, which is the text, must be the foreground with pixel value 1 while the background must be 0. Notice that some traces of the horizontal lines remain in the inverted binarized filtered rotated text image, so a closing operation is needed to remove them. Recall that this morphological operation is equivalent to the erosion, by a structuring element, of the image dilated by that same structuring element. It is applied to the inverted text image with a 3 x 1 matrix of ones as the structuring element. Its effect is shown below.



The image above is now the preprocessed handwritten text from the scanned demo checklist form. The remnants of the horizontal lines are eliminated by the closing operation because the structuring element is just large enough to reconnect clusters along the vertical direction. Increasing the number of rows of the structuring element, however, would also merge letters that are only a few pixels apart vertically, which is not good for preprocessing the text.
For further analysis of the preprocessed text, the built-in Scilab function bwlabel is again used to label the clusters, which are the reconstructed letters. There are 54 clusters detected, while the original text image contains only 46 letters. This result is fair enough because, in the first place, some handwritten letters in the original text image are not distinct; some also touch adjacent letters or are not written at the same size.

The instances of the word DESCRIPTION in the same scanned image of the demo checklist form are located by correlation in the second part of this activity.
This time the whole image is needed, since all instances of the word must be found in the scanned image. The image is again rotated so that a sample image of the word, to be used in the correlation, can easily be extracted. The original scanned image and its rotated grayscale version are as follows.



Again, the whole image is rotated by 1 degree clockwise using the function mogrify in Scilab, like the text image in the first part. The rotated image is then binarized based on the threshold value obtained from the histogram of its grayscale values using GIMP. This is illustrated below.



The threshold value employed in binarizing the rotated image is 0.49.
Since it is customary in a binary image for the region of interest to be white and the background black, the binarized rotated image is inverted. Shown in the following are the inverted binarized rotated image and the sample image of the word DESCRIPTION extracted from it.



In preparing the sample image of the word DESCRIPTION from the inverted binarized rotated image, it is important that the word is placed on a black background and that the sample image has the same size as the inverted binarized rotated image, because the correlation takes place in the frequency domain of their FTs.
Recall that the correlation is computed by multiplying the FT of the sample image element by element with the conjugate of the FT of the inverted binarized rotated image; the modulus of the shifted inverse FT of their product then displays the correlated image. The following is the result of correlating the sample image of the word DESCRIPTION with the inverted binarized rotated image.
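The core of that computation, as a sketch with assumed names (word for the sample image and form for the inverted binarized rotated image, both the same size):

Fword = fft2(word);
Fform = fft2(form);
cor = abs(fftshift(ifft(Fword .* conj(Fform))));   // bright peaks mark the matches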



Apparently, from the correlated image above, there are three instances of the word DESCRIPTION in the inverted binarized rotated image, which indeed agrees with the scanned image of the demo checklist form. The instances are marked by the bright spots in the correlated image: one is found at the upper left, and two are located below it along the same horizontal line.

Although the preprocessing of the handwritten text in the first part is not done perfectly, I applied various image processing techniques successfully there, as well as in the correlation of the word DESCRIPTION with the scanned image in the second part; hence, I grade myself 10/10 in this activity.

This activity was successfully completed through my collaboration with Gary and Ed.

Appendix
The following is the source code for this activity.

stacksize(4e7);

// Preprocessing Handwritten Text

text = gray_imread('htext.bmp');
//scf(0);
//imshow(text);

textrot = mogrify(text, ['-rotate', '1']);
//scf(1);
//imshow(textrot);
//imwrite(textrot, 'htextrot.bmp');

Ftext = log(abs(fftshift(fft2(textrot))));
//scf(2);
//imshow(Ftext, []);
//imwrite(Ftext/max(Ftext), 'Fhtextrot.bmp');

filter = gray_imread('filterhtextrot.bmp');
filtext = abs(ifft(fftshift(filter).*fft2(textrot)));
//invfiltext = max(filtext) - filtext;
//scf(3);
//imshow(filtext, []);
//imwrite(filtext/max(filtext), 'filteredhtextrot.bmp');

bintext = im2bw(filtext, 0.275);
//imwrite(bintext, 'bintext.bmp');
invbintext = 1 - bintext;
//scf(4);
//imshow(invbintext);
//imwrite(invbintext, 'invbintext.bmp');

SE = ones(3, 1);   // 3 x 1 structuring element

morphtext = dilate(invbintext, SE);   // closing: dilation...
morphtext = erode(morphtext, SE);     // ...followed by erosion

//scf(5);
//imshow(morphtext);
//imwrite(morphtext, 'morphtext.bmp');

[L, n] = bwlabel(morphtext);   // n is the number of detected clusters

// Typewritten Text Correlation

checklist = gray_imread('Untitled_0001.jpg');
checklistrot = mogrify(checklist, ['-rotate', '1']);
//scf(6);
//imshow(checklistrot);
//imwrite(checklistrot, 'checklistrot.bmp');

binchecklist = im2bw(checklistrot, 0.49);
//scf(7);
//imshow(binchecklist);
//imwrite(binchecklist, 'binchecklist.bmp');

invbinchecklist = 1 - binchecklist;
//scf(8);
//imshow(invbinchecklist);
//imwrite(invbinchecklist, 'invbinchecklist.bmp');

description = imread('description.bmp');
cor = abs(fftshift(ifft(fft2(description).*conj(fft2(invbinchecklist)))));   // modulus of the shifted inverse FT of the product
//scf(9);
//imshow(cor, []);
//imwrite(cor/max(cor), 'correlated text.bmp');

Thursday, July 23, 2009

Activity 9 - Binary Operations

This activity requires giving the best estimate of the area of a circular punched paper, which is called here a cell. Some of the image processing techniques worked on in past activities are employed in order to calculate the area of this cell.

First, a scanned image of scattered punched papers is selected from the two images provided by the activity. This is shown below.



The image above is 748 x 618 pixels and is subdivided into 9 subimages, each 256 x 256 pixels. Some subimages therefore overlap with others, which is better since it yields more area samples and thus a more accurate result. Paint is used in cropping the scanned image.
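The same parcels could also be cut in Scilab instead of Paint; in this sketch, the filename and the top-left corner (r0, c0) are placeholders, not the actual ones used:

whole = gray_imread('scattered_papers.jpg');   // hypothetical filename of the scan
r0 = 1; c0 = 257;                              // assumed top-left corner of one parcel
sub = whole(r0:r0+255, c0:c0+255);             // a 256 x 256 subimage
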
A representative subimage, the parcel labeled 'circles_2' located at the middle of the topmost third of the whole original image, is illustrated as follows.


The histogram of the grayscale values of each subimage is then analyzed using GIMP. From this, the threshold value is determined for binarizing the 9 subimages, so that they possess only two values: 0 for black, corresponding to the background, and 1 for white, corresponding to the foreground, also called the Region of Interest (ROI).
A threshold value of 0.85 is used in converting the 9 subimages to binary type. The binarized subimage 'circles_2' is the following.


Notice that binarizing alone is not enough to prepare the ROI for area calculation. Thus, some morphological operations are utilized in order to remove unwanted noise; they also merge overlapping cells into clusters called blobs.
Closing and opening operators are then implemented here. The closing of an image A by a structuring element (SE) B is the erosion by B of the dilation of A by B, while the opening of A by B is the dilation by B of the erosion of A by B. Therefore, a series of dilation, erosion, erosion, and dilation operations is applied to each subimage for its further enhancement.
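
Written out with the dilate and erode functions used in this activity, the two operators look as follows (a sketch; the actual processing chain is in the Appendix):

function out = closing(img, SE)
    out = erode(dilate(img, SE), SE);   // dilation followed by erosion
endfunction

function out = opening(img, SE)
    out = dilate(erode(img, SE), SE);   // erosion followed by dilation
endfunction
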
The SE used takes the form of a small circle, as in the matrix below.

SE = [0 1 0; 1 1 1; 0 1 0]

The effect on the representative binarized subimage 'circles_2' after applying the closing and opening operations is shown as follows.


It can be observed that noise is eliminated, blobs are created where cells overlap, and at the same time single-cell units become more defined.
After applying the methods above to all the subimages, the blobs and single-cell units are isolated and labeled using the built-in Scilab function bwlabel in preparation for area calculation. The area of each labeled region, which is its total number of white pixels, is then determined independently for all subimages and gathered.

Now, the histogram of the calculated areas is shown below.


The area of a single-cell unit is found to occur between 500 and 550 square pixels. Of course, the blobs give very large areas, while cells that are not fully reconstructed give very small ones; hence, both are excluded when determining the mean area and the standard deviation of the distribution.

The calculated mean area of the cell is 519.79 square pixels. This is the sum of the areas within the interval of 500 to 550 square pixels divided by the number of areas in this range. Using the built-in Scilab function stdev, the standard deviation obtained is 12.12 square pixels.

To validate the result, the mean area is compared with the area of a single-cell unit that does not overlap any other cell, cropped from a subimage. This is also binarized, but the closing and opening operations are no longer applied. The following images represent the basis of comparison for the calculated mean area.


The pixels of the cell are then summed to get its area, and the result is 517 square pixels. Finally, its percent difference from the calculated mean area is 0.54 %.

Since I applied different image processing techniques to make this activity successful and obtained a very small, almost insignificant percent difference for the area of a cell, I grade myself 10/10.

Before starting, Mimie shared some insights with me on how this activity is handled. However, I worked independently throughout its completion, sharing my ideas with my classmates after I had finished.

Appendix
The following Scilab code below is utilized in this activity.

circles = [];
binarycircles = [];
N = 9;
for i = 1:N
    circles1 = gray_imread('circles_'+string(i)+'.jpg');
    binarycircles1 = im2bw(circles1, 0.85);            // binarize with the GIMP-derived threshold
    circles = [circles, circles1];                     // stack the subimages side by side
    binarycircles = [binarycircles, binarycircles1];
end

M = 2;   // selects the M-th 256-column block, here subimage 'circles_2'
graysubimage = circles(:,((M*256) - 255):(M*256));
binarysubimage = binarycircles(:,((M*256) - 255):(M*256));

//scf(0);
//imshow(graysubimage);

//scf(1);
//imshow(binarysubimage);
//imwrite(binarysubimage, 'binarysubimage2.bmp');

SE = [0 1 0; 1 1 1; 0 1 0];                          // small circular structuring element
newbinarysubimage = dilate(binarysubimage, SE);      // closing: dilation...
newbinarysubimage = erode(newbinarysubimage, SE);    // ...followed by erosion
newbinarysubimage = erode(newbinarysubimage, SE);    // opening: erosion...
newbinarysubimage = dilate(newbinarysubimage, SE);   // ...followed by dilation

//scf(2);
//imshow(newbinarysubimage);
//imwrite(newbinarysubimage, 'newbinarysubimage2.bmp');

[L, n] = bwlabel(newbinarysubimage);

A = [];
for j = 1:n
    b = (L == j);     // pixels belonging to the j-th labeled region
    A(j) = sum(b);    // its area: the number of white pixels
end

Area = fscanfMat('area data.txt');   // areas gathered from all 9 subimages

//scf(3);
//histplot(length(Area), Area);
//title('Histogram of Calculated Cell Areas');
//xlabel('Area');
//ylabel('Frequency');

index_goodArea = find(Area>500 & Area<550);
mean_Area = sum(Area(index_goodArea))/length(index_goodArea);
stddev_Area = stdev(Area(index_goodArea));
gray1circle = gray_imread('1circle.bmp');

//scf(4);
//imshow(gray1circle);

binary1circle = im2bw(gray1circle, 0.85);

//scf(5);
//imshow(binary1circle);
//imwrite(binary1circle, 'binary1circle.bmp');

theo_Area = sum(binary1circle);
percent_error = abs((theo_Area - mean_Area)/(theo_Area))*100;

Tuesday, July 21, 2009

Activity 8 - Morphological Operations

In this activity, we applied the basic morphological operations dilation and erosion to some basic shapes. Scilab already has built-in morphological functions, namely dilate and erode, which are used here. Different structuring elements are also implemented in the dilation and erosion of the images.

Dilation is defined by the expression

$$A \oplus B = \{\, z \mid (\hat{B})_z \cap A \neq \varnothing \,\}.$$

It states that the dilation of A by B is the set of all displacements z such that the reflection of B, translated by z, intersects A in at least one element. Here, A is the image to be dilated, B is the structuring element, and z ranges over the pixel coordinates of A.
Image dilation operates with the following algorithm. A chosen location of B, which in this activity is its geometric center, is superimposed on every z in A. If at least one element of B coincides with the foreground of A, the pixel of A under the geometric center of B is set to one; if no element of B coincides with any element of the foreground of A, it stays zero. Recall that the background of A is equal to zero, while the object, or foreground, under the morphological operation is equal to one.
Meanwhile, erosion is defined as the expression below:

$$A \ominus B = \{\, z \mid (B)_z \subseteq A \,\}.$$

This means that the erosion of A by B consists of all displacements z for which B, translated by z, is entirely contained in A.
Image erosion then has the following algorithm. The geometric center of B is superimposed on every z in A. If all elements of B are contained in the foreground of A, the pixel of A under the geometric center of B is kept at one; otherwise, if at least one element of B coincides with the background of A, it is set to zero.
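
A toy check of the two definitions, on an assumed 5 x 5 image with a 2 x 2 structuring element:

A = zeros(5, 5);
A(2:4, 2:4) = 1;       // foreground: a 3 x 3 square of ones
B = ones(2, 2);
dA = dilate(A, B);     // the square grows into a 4 x 4 block
eA = erode(A, B);      // the square shrinks to a 2 x 2 block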

The shapes created and morphologically operated on are a 50 x 50 square, a right triangle with a 50-pixel base and a 30-pixel height, a circle with a 25-pixel radius, a 60 x 60 hollow square whose edges are 4 pixels thick, and a plus sign whose lines are 50 pixels long and 8 pixels thick. They are all made in Paint.
Meanwhile, the following structuring elements are used in dilating and eroding the images: a 4 x 4 matrix of ones, a 2 x 4 matrix of ones, a 4 x 2 matrix of ones, and a cross 5 pixels long and 1 pixel thick. These are generated in Scilab as the matrices shown below.

A = [1 1 1 1; 1 1 1 1; 1 1 1 1; 1 1 1 1]
B = [1 1; 1 1; 1 1; 1 1]
C = [1 1 1 1; 1 1 1 1]
cross = [0 0 1 0 0; 0 0 1 0 0; 1 1 1 1 1; 0 0 1 0 0; 0 0 1 0 0]

Before simulating the effects of the morphological operations on the images, we first predicted the outcomes and drew them on yellow paper, which was submitted to our professor, Dr. Maricor Soriano.
Now, the following sets of figures are the results of the simulated morphological operations on the five images.

Square (Original Image)


Dilation
Structuring Element: 4 x 4, 2 x 4, 4 x 2, cross



Erosion
Structuring Element: 4 x 4, 2 x 4, 4 x 2, cross



Triangle (Original Image)



Dilation
Structuring Element: 4 x 4, 2 x 4, 4 x 2, cross



Erosion
Structuring Element: 4 x 4, 2 x 4, 4 x 2, cross



Circle (Original Image)



Dilation
Structuring Element: 4 x 4, 2 x 4, 4 x 2, cross



Erosion
Structuring Element: 4 x 4, 2 x 4, 4 x 2, cross



Hollow Square (Original Image)



Dilation
Structuring Element: 4 x 4, 2 x 4, 4 x 2, cross



Erosion
Structuring Element: 4 x 4, 2 x 4, 4 x 2, cross



Plus Sign (Original Image)



Dilation
Structuring Element: 4 x 4, 2 x 4, 4 x 2, cross



Erosion
Structuring Element: 4 x 4, 2 x 4, 4 x 2, cross



My predictions exactly match the simulated results. These are verified by measuring the pixel dimensions of the dilated and eroded images in Paint.

We also tried two other morphological operations in Scilab, namely thin and skel, which perform thinning and skeletonization respectively. The following results are obtained.

Square (Original Image), Thinning, Skeletonization



Triangle (Original Image), Thinning, Skeletonization



Circle (Original Image), Thinning, Skeletonization



Hollow Square (Original Image), Thinning, Skeletonization



Plus Sign (Original Image), Thinning, Skeletonization



It can be observed that the morphological operations thinning and skeletonization are quite complicated compared with dilation and erosion. In thinning, the images are reduced to single-pixel lines that do not always follow the contour of the object. Meanwhile, in skeletonization, the images are converted to their skeletons or frameworks, but some areas of the foreground are retained. Because of these inconsistent effects, the two operations are sensitive to the particular image being morphologically transformed.

Since I fully understand how the two basic morphological operations dilation and erosion work, and my predictions match the simulated results, I grade myself 10/10 for this activity.

I finished this activity individually but shared what I learned with my other classmates.

Appendix
The whole Scilab code below is utilized in this activity.

sq = gray_imread('square.bmp');
tr = gray_imread('triangle.bmp');
ci = gray_imread('circle.bmp');
hsq = gray_imread('hollow square.bmp');
pl = gray_imread('plus.bmp');

A = [1 1 1 1; 1 1 1 1; 1 1 1 1; 1 1 1 1];   // 4 x 4 ones
B = [1 1; 1 1; 1 1; 1 1];                    // 4 x 2 ones
C = [1 1 1 1; 1 1 1 1];                      // 2 x 4 ones

cross = [0 0 1 0 0; 0 0 1 0 0; 1 1 1 1 1; 0 0 1 0 0; 0 0 1 0 0];   // 5 x 5 cross, 1 pixel thick

//scf(0);
dilated_sq = dilate(sq, A);
//imshow(dilated_sq);
//imwrite(dilated_sq, 'dilated_sq4x4.bmp');

//scf(1);
eroded_sq = erode(sq, A);
//imshow(eroded_sq);
//imwrite(eroded_sq, 'eroded_sq4x4.bmp');

//scf(2);
dilated_tr = dilate(tr, A);
//imshow(dilated_tr);
//imwrite(dilated_tr, 'dilated_tr4x4.bmp');

//scf(3);
eroded_tr = erode(tr, A);
//imshow(eroded_tr);
//imwrite(eroded_tr, 'eroded_tr4x4.bmp');

//scf(4);
dilated_ci = dilate(ci, A);
//imshow(dilated_ci);
//imwrite(dilated_ci, 'dilated_ci4x4.bmp');

//scf(5);
eroded_ci = erode(ci, A);
//imshow(eroded_ci);
//imwrite(eroded_ci, 'eroded_ci4x4.bmp');

//scf(6);
dilated_hsq = dilate(hsq, A);
//imshow(dilated_hsq);
//imwrite(dilated_hsq, 'dilated_hsq4x4.bmp');

//scf(7);
eroded_hsq = erode(hsq, A);
//imshow(eroded_hsq);
//imwrite(eroded_hsq, 'eroded_hsq4x4.bmp');

//scf(8);
dilated_pl = dilate(pl, A);
//imshow(dilated_pl);
//imwrite(dilated_pl, 'dilated_pl4x4.bmp');

//scf(9);
eroded_pl = erode(pl, A);
//imshow(eroded_pl);
//imwrite(eroded_pl, 'eroded_pl4x4.bmp');

//scf(10);
thinned_sq = thin(sq);
//imshow(thinned_sq);
//imwrite(thinned_sq, 'thinned_sq.bmp');

//scf(11);
skeletonized_sq = skel(sq);
//imshow(skeletonized_sq);
//imwrite(skeletonized_sq, 'skeletonized_sq.bmp');