
pixel; these pixels are counted and recorded according to the LCC color panel to which they were classified. An input containing the vectors to be used as the input layer in the implementation of SOM is entered into the process. Block samples can be taken again from other parts of the captured leaf image. After the samples are picked, they are summarized as a whole and the results are shown.

Figure 2. Rice leaf colorimeter system architecture

Self-Organizing Map (SOM) Neural Network

As shown in figure 3, SOM networks consist of two layers: an input layer and an output layer. The number of input layer nodes is equal to the dimension of the input vector. The competition in network learning is displayed in the output layer. Each node or neuron in the input layer is connected to a node in the output layer by bi-directional weights that constitute a weight matrix W. Also in figure 3, the upper layer is the output layer with M output nodes arranged as a two-dimensional node matrix (M = m^2, where m is the number of nodes on one side of the matrix). The lower input layer has N nodes (neurons), representing N input vectors. All input nodes are connected to the output nodes with weights. Competition nodes also have weight connections to each other, representing interaction (Feisi Center for Scientific Products, 2004).

Figure 3. The SOM structure

For a vector in the input layer, the best matching unit (BMU) in the output layer is determined according to its mapping characteristics, and its weight vector W_ij may be regarded as coordinates projecting to the input layer. By adjusting the weight matrix W, the characteristics of the input layer may be demonstrated by the output layer. The SOM realizes network learning and training through the use of self-organized, unsupervised training. The structure of the network and the connected weights are adjusted automatically according to the training regulations. The procedure ends when the distribution rule of the samples is clearly illustrated. For each network input, only an adjustment of partial weights is needed to make the weight vector converge to the input vector. This alignment procedure is the competitive learning process through which the SOM carries out the diagnosis automatically.

SOM Procedures

Assume the input data vectors P^k and the associated weight vectors W_j:

$$P^k = (P_1^k, P_2^k, \ldots, P_N^k), \quad k = 1, 2, \ldots, q$$

$$W_j = (W_{j1}, W_{j2}, \ldots, W_{ji}, \ldots, W_{jN}), \quad i = 1, 2, \ldots, N;\ j = 1, 2, \ldots, M$$

The SOM steps are as follows:

1. Initializing. Give initial values of W_{xy}, which are the RGB components of the pixel color at point (x, y) of the analyzed image. RGB values should be within the range 0 to 1; hence each component is normalized by dividing its value by 255. Set the initial learning rate η₀, the neighborhood radius σ₀, and the total learning time (number of iterations).

2. Calculating the best matching unit (BMU). The BMU is found according to the Euclidean distance between the node weights $(W_1, W_2, \ldots, W_n)$ and the input vector values $(V_1, V_2, \ldots, V_n)$. Euclidean distance is a measure of similarity between two sets of data, computed as

$$\text{dist} = \sqrt{\sum_{i=1}^{n} (V_i - W_i)^2} \qquad (1)$$

3. Determining the BMU neighborhood. The size of the neighborhood is given by an exponential decay function that shrinks on each iteration until eventually the neighborhood is just the BMU itself.

$$\sigma(t) = \sigma_0 \exp\left(-\frac{t}{\lambda}\right) \qquad (2)$$

Where:

σ₀ = width of the lattice at time t₀
σ(t) = width of the lattice at time t
t = time of iteration of the loop
λ = time constant

4. Modifying node weights. The new weight of a node is the old weight, plus a fraction (the learning rate, η) of the difference between the old weight and the input vector, adjusted (θ) based on the distance from the BMU.

$$W(t+1) = W(t) + \theta(t)\,\eta(t)\,(V(t) - W(t)) \qquad (3)$$

The learning rate, η, is also an exponential decay function.

$$\eta(t) = \eta_0 \exp\left(-\frac{t}{\lambda}\right) \qquad (4)$$

5. Effect of location within the neighborhood. The neighborhood is defined by a Gaussian curve so that nodes that are closer are influenced more than farther nodes.

$$\theta(t) = \exp\left(-\frac{\text{dist}^2}{2\sigma^2(t)}\right) \qquad (5)$$

Where:

θ(t) = influence rate at time t
σ(t) = width of the lattice at time t

6. Repeat from step 2 for enough iterations to reach convergence.
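
To make the procedure concrete, below is a minimal Java sketch of a single training iteration covering equations (1) to (5). The array layout, method names, and parameters here are illustrative assumptions rather than the application's actual implementation, which is presented in chapter IV (listing 4).

// Minimal sketch of one SOM training iteration for equations (1)-(5).
// The weights array (lattice width x height x vector length), input vector,
// and decay parameters are illustrative assumptions.
static double euclideanDist(double[] v, double[] w) { // equation (1)
    double sum = 0;
    for (int i = 0; i < v.length; i++) {
        sum += (v[i] - w[i]) * (v[i] - w[i]);
    }
    return Math.sqrt(sum);
}

static void trainStep(double[][][] weights, double[] input,
                      double sigma0, double eta0, double lambda, int t) {
    // Step 2: find the best matching unit (BMU), the node nearest the input.
    int bmuX = 0, bmuY = 0;
    double best = Double.MAX_VALUE;
    for (int x = 0; x < weights.length; x++) {
        for (int y = 0; y < weights[x].length; y++) {
            double d = euclideanDist(input, weights[x][y]);
            if (d < best) { best = d; bmuX = x; bmuY = y; }
        }
    }
    // Step 3: neighborhood radius shrinks each iteration, equation (2).
    double sigma = sigma0 * Math.exp(-(double) t / lambda);
    // The learning rate also decays exponentially, equation (4).
    double eta = eta0 * Math.exp(-(double) t / lambda);
    // Steps 4 and 5: pull every node inside the neighborhood toward the input.
    for (int x = 0; x < weights.length; x++) {
        for (int y = 0; y < weights[x].length; y++) {
            double distSq = (x - bmuX) * (x - bmuX) + (y - bmuY) * (y - bmuY);
            if (distSq <= sigma * sigma) {
                double theta = Math.exp(-distSq / (2 * sigma * sigma)); // equation (5)
                for (int i = 0; i < input.length; i++) {
                    weights[x][y][i] += theta * eta * (input[i] - weights[x][y][i]); // equation (3)
                }
            }
        }
    }
}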

Testing and Evaluation

A makeshift LCC tool, which was a printed digital version of a four-panel LCC on photo paper, was used to take sample images and test the application. The captured images of the rice leaves were taken from the rice fields of Brgy. Dacay, Dulag, Leyte.

The leaves were evaluated first using the LCC to determine the color index of each rice leaf sample before the images of the leaves were tested in the application.

The readings of the LCC were compared to the readings of the Android application by computing the accuracy using the following formula:

$$\text{Accuracy} = \left(1 - \frac{LCCI - PCI}{LCCI}\right) \times 100 \qquad (6)$$

Where:

PCI = Color index identified by the program.

LCCI = Color index identified using LCC.
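
As an illustrative check of equation (6), a short Java sketch follows; the method name and the example values are assumptions used for demonstration only.

// Sketch of equation (6); returns the accuracy as a percentage.
// The method name and example values are illustrative assumptions.
static double accuracy(double lcci, double pci) {
    return (1 - (lcci - pci) / lcci) * 100;
}

// Example: if the LCC identifies color index 4 and the program identifies 3,
// accuracy(4, 3) = (1 - (4 - 3) / 4) * 100 = 75 for that reading.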



CHAPTER IV

RESULTS AND DISCUSSION

Android Manifest

The AndroidManifest.xml file of this application contains the content shown in listing 1. Since the application needs to access the camera and storage of the user's device, permissions were declared in this file. The lines of code enclosed in <uses-permission> tags enable the application to request access to the camera and to write to and read from external storage. Also in this file, the application declares its use of the device's hardware feature camera2, which provides the interface to the device's camera.

The Android Manifest also contains the application components, which are the three activities specified in the <application> tag: SplashActivity, CameraActivity, and EvaluationActivity.

Listing 1. The application’s Android Manifest file


<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.app.irra.ricecolorimeter">

    <uses-permission android:name="android.permission.CAMERA"/>
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
    <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>
    <uses-feature
        android:name="android.hardware.camera2"
        android:required="false"/>

    <application
        android:allowBackup="true"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:roundIcon="@mipmap/ic_launcher_round"
        android:supportsRtl="true"
        android:theme="@style/AppTheme">
        <activity
            android:name=".SplashActivity"
            android:label="@string/app_name"
            android:theme="@style/Theme.Design.NoActionBar">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
        <activity
            android:name=".CameraActivity"
            android:label="@string/app_name"
            android:theme="@style/Theme.Design.NoActionBar">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
        <activity
            android:name=".EvaluationActivity"
            android:label="@string/app_name"
            android:theme="@style/Theme.Design.NoActionBar">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>

    <supports-screens android:resizeable="true"
        android:smallScreens="true"
        android:normalScreens="true"
        android:largeScreens="true"
        android:anyDensity="true" />

</manifest>

Development of the Application

The developed application is divided into three activities that compose its whole process: the splash screen activity, the camera activity, and the evaluation activity. Each consists of its own Java classes and XML files that make up the activity and its user interface.

The splash screen activity, shown in figure 4, presents the name of the application, "Rice Leaf Colorimeter", and two buttons. The "Capture Leaf Image" button runs the main process of the application, and the "About" button displays information about the application.

Figure 4. Splash screen



In figure 5, which shows the camera activity, the application activates the camera so a picture of a rice leaf can be captured. The OpenCV library was loaded into the application's camera activity to make use of the JavaCameraView class, which accesses the camera of the user's device along with the appropriate permissions discussed under the Android Manifest. JavaCameraView was used to connect OpenCV and the Java camera through the CameraBridgeViewBase class and the other functions required to make the camera work. A method captureImage is called when the capture button, which can be seen in figure 5, is pressed. This method saves the captured frame into a bitmap, which is then used for the next operation. The bitmap is saved in the external storage of the user's device as a .png file and loaded back to be displayed. A method to enable retaking a shot was added in case the initial captured image is not good.
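
A minimal sketch of how such a save could look is shown below; the method name, file name, and directory are assumptions for illustration, not the application's exact captureImage implementation.

import android.graphics.Bitmap;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

// Illustrative sketch: persist a captured frame as a .png in external storage.
// The directory and file name here are assumptions, not the app's actual values.
void saveFrameAsPng(Bitmap frame, File storageDir) {
    File out = new File(storageDir, "leaf_capture.png");
    try (FileOutputStream fos = new FileOutputStream(out)) {
        frame.compress(Bitmap.CompressFormat.PNG, 100, fos); // lossless PNG
    } catch (IOException e) {
        e.printStackTrace(); // in practice, surface the error to the user
    }
}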

Figure 5. Camera Activity



After taking the picture, the application goes to the evaluation activity, shown in figure 6. Block samples of 50×50 pixels are taken from the image and evaluated by tapping the evaluate button. A block is processed by extracting its colors pixel by pixel and loading them into the lattice, an array of points with a size of 50×50, where the program analyzes them and afterwards displays the output values in the results table. Using the method in listing 2, the RGB values are extracted at the coordinates of the image where the selected 50×50 block sample is located. After the RGB values are extracted, the first plotting of the lattice takes place. The procedure for plotting the lattice is shown in listing 3.

Figure 6. Evaluation Activity



Listing 2. Method for extracting RGB values from the sample

public void pickSample(int y, int x){ // x, y: coordinates of the 50x50 sample

    int kRed, kGreen, kBlue;
    for (int r = 0; r < numRows; r++) {
        for (int c = 0; c < numCols; c++) {
            // Read one pixel of the sample block from the captured image.
            int pix = muted.getPixel(x + c, y + r);
            kRed = Color.red(pix);
            kGreen = Color.green(pix);
            kBlue = Color.blue(pix);
            // Mark the sampled pixel white on the preview copy.
            tmp.setPixel(c + x, r + y, white);
            // Load the RGB components into the SOM lattice node.
            somLattice.setLatticeNode(c, r, kRed, kGreen, kBlue);
        }
    }
}

Listing 3. Method for plotting the lattice

public void plotLattice(){

    int latX, latY;
    int Width, Height;
    int cellWidth, cellHeight;
    float pRed, pGreen, pBlue;
    Node node;
    Width = LatticeWidth;
    Height = LatticeHeight;
    latX = somLattice.getCols();
    latY = somLattice.getRows();
    cellWidth = Width / latX;
    cellHeight = Height / latY;

    Paint paint = new Paint();

    Bitmap lattice = Bitmap.createBitmap(LatticeWidth, LatticeHeight,
            Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(lattice);
    android.graphics.Rect rectangle;

    ImageView latticeiv = findViewById(R.id.lattice);

    for (int ctrY = 0; ctrY < latY; ctrY++) {
        for (int ctrX = 0; ctrX < latX; ctrX++) {
            node = somLattice.getNode(ctrX, ctrY);

            // Scale the node's normalized weights back to 0-255 color values.
            pRed = (float) node.getVector().getWeight(posRed) * MAXCOLOR;
            pGreen = (float) node.getVector().getWeight(posGreen) * MAXCOLOR;
            pBlue = (float) node.getVector().getWeight(posBlue) * MAXCOLOR;

            int color = Color.rgb((int) pRed, (int) pGreen, (int) pBlue);
            paint.setColor(color);
            // Draw this node as one cell of the lattice image.
            rectangle = new android.graphics.Rect(
                    ctrX * cellWidth, ctrY * cellHeight,
                    (ctrX + 1) * cellWidth, (ctrY + 1) * cellHeight);
            canvas.drawRect(rectangle, paint);
        }
    }
    // Display the finished lattice once all cells are drawn.
    latticeiv.setImageBitmap(lattice);
}

Every point represents a node in the output layer; each node contains weights, and those weights are the three color components: red, green, and blue. Every node is evaluated to determine which is closest to the input vector; in other words, the program classifies which color panel of the LCC the node matches, where the winning node is the best matching unit (BMU). In this process of determining the sample's color index, shown in listing 4, with a fixed iteration count of 1000 and a learning rate of 0.07, the frequency of the nodes closest to a certain input vector is calculated, and the input vector obtaining the highest number of nodes is the winning vector.

Listing 4. Procedures in determining the sample’s color index

public void training(){

    int lw, lh, iteration;
    int LATTICE_RADIUS;
    int xstart, ystart, xend, yend;
    double dist, dFalloff, nbhRadius, rsqr;
    Node bmu, temp;
    iVector curInput;
    double learningRate, TIME_CONSTANT;

    lw = somLattice.getCols();
    lh = somLattice.getRows();
    // The initial neighborhood covers half the lattice; see equation (2).
    LATTICE_RADIUS = getMax(lw, lh) / 2;
    TIME_CONSTANT = NUM_ITERATIONS / Math.log(LATTICE_RADIUS);
    learningRate = START_LEARNING_RATE;
    iteration = 0;

    while (iteration < NUM_ITERATIONS) {
        // Exponentially shrinking neighborhood radius, equation (2).
        nbhRadius = LATTICE_RADIUS * Math.exp(-iteration / TIME_CONSTANT);
        rsqr = nbhRadius * nbhRadius;
        for (int ctr = 0; ctr < numVectors; ctr++) {
            curInput = inputVectors[ctr];
            // Best matching unit for this input, equation (1).
            bmu = somLattice.getBMU(curInput);
            // Clamp the neighborhood rectangle to the lattice bounds.
            xstart = bmu.getX() - (int) nbhRadius - 1;
            ystart = bmu.getY() - (int) nbhRadius - 1;
            if (xstart < 0) {
                xstart = 0;
            }
            if (ystart < 0) {
                ystart = 0;
            }
            xend = xstart + (int) (nbhRadius * 2) + 1;
            yend = ystart + (int) (nbhRadius * 2) + 1;
            if (xend > lw) {
                xend = lw;
            }
            if (yend > lh) {
                yend = lh;
            }

            // Pull every node inside the neighborhood toward the input,
            // equations (3) and (5).
            for (int x = xstart; x < xend; x++) {
                for (int y = ystart; y < yend; y++) {
                    temp = somLattice.getNode(y, x);
                    dist = bmu.distanceTO(temp);
                    if (dist <= rsqr) {
                        dFalloff = getDistanceFalloff(dist, rsqr);
                        temp.adjustWeights(curInput, learningRate, dFalloff);
                    }
                }
            }
        }

        iteration = iteration + 1;
        // Exponentially decaying learning rate, equation (4).
        learningRate = START_LEARNING_RATE * Math.exp((double) -1 *
                iteration / NUM_ITERATIONS);
    }
    plotLattice();
    countInputVectors();
}

public void countInputVectors(){

    Node temp;
    double dist1, dist2;
    int nearest;
    int arrctr[] = new int[numVectors];

    // Reset the per-panel node counters.
    for (int nv = 0; nv < numVectors; nv++) { arrctr[nv] = 0; }

    // For every lattice node, find the nearest input vector (LCC panel)
    // and count the node toward that panel.
    for (int r = 0; r < numRows; r++) {
        for (int K = 0; K < numCols; K++) {
            temp = somLattice.getNode(r, K);
            nearest = 0;
            dist1 = inputVectors[nearest].euclideanDist(temp.getVector());

            for (int nv2 = 1; nv2 < numVectors; nv2++) {
                dist2 = inputVectors[nv2].euclideanDist(temp.getVector());
                if (dist1 > dist2) {
                    dist1 = dist2;
                    nearest = nv2;
                }
            }
            arrctr[nearest] = arrctr[nearest] + 1;
        }
    }
    setresults(arrctr);
}

The winning vector represents the LCC panel with which the sample is classified. This is all done with the input vectors, which are plotted into a virtual LCC displayed in this activity's user interface along with the other components of the activity. The input vectors are written in a file, input.som, in the application assets. The file was produced by extracting the RGB colors of the LCC and consists of four RGB values, one for each color panel of the LCC, as shown in table 1.

Table 1. Contents of the file input.som

Color index    Red    Green    Blue
2              135    147      39
3              105    138      35
4              101    120      37
5               78     99      42
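
Since the SOM works with components in the range 0 to 1, the values in table 1 are normalized by dividing by 255 before use, as described in the SOM procedures. Below is a minimal Java sketch of building input vectors this way; the hard-coded values mirror table 1, and the array names are assumptions, since the exact format of input.som is not reproduced here.

// Sketch: normalize the table 1 RGB values into SOM input vectors.
// The values mirror table 1; parsing of input.som itself is omitted
// because its exact file format is not shown here.
int[][] panelRgb = {
        {135, 147, 39},   // color index 2
        {105, 138, 35},   // color index 3
        {101, 120, 37},   // color index 4
        { 78,  99, 42},   // color index 5
};
double[][] inputVectors = new double[panelRgb.length][3];
for (int p = 0; p < panelRgb.length; p++) {
    for (int c = 0; c < 3; c++) {
        inputVectors[p][c] = panelRgb[p][c] / 255.0;  // scale to [0, 1]
    }
}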

The file is loaded in the evaluation activity to plot a virtual LCC in the application. In figure 6, it is located in the uppermost part of the user interface. This method is shown below in listing 6.

Listing 6. Method for plotting the input vectors into the system

public void plotInput(){

    int Width, Height;
    float pRed, pGreen, pBlue;

    ImageView canvasiv = findViewById(R.id.latticenode);

    // Dimensions of canvasiv.
    Width = 400;
    Height = 74;

    Bitmap bmp = Bitmap.createBitmap(Width, Height,
            Bitmap.Config.ARGB_8888);
    Canvas cvs = new Canvas(bmp);
    Paint paint = new Paint();
    android.graphics.Rect rectangle;

    for (int ctr = 0; ctr <= numVectors - 1; ctr++) {
        // Scale the normalized input vector back to 0-255 color values.
        pRed = (float) inputVectors[ctr].getWeight(posRed) * MAXCOLOR;
        pGreen = (float) inputVectors[ctr].getWeight(posGreen) * MAXCOLOR;
        pBlue = (float) inputVectors[ctr].getWeight(posBlue) * MAXCOLOR;

        int color = Color.rgb((int) pRed, (int) pGreen, (int) pBlue);
        paint.setColor(color);
        // Draw one panel of the virtual LCC (a quarter of the canvas width).
        rectangle = new android.graphics.Rect(ctr * (Width / 4), 0,
                Width, Height);
        cvs.drawRect(rectangle, paint);
    }
    // Display the virtual LCC once all panels are drawn.
    canvasiv.setImageBitmap(bmp);
}

Once the results table is filled, the results are summarized, and the values from the table are processed and plotted into a graph. Other summary information, such as the number of samples that matched each LCC value, is also displayed.

The application also counts the number of rice leaf images captured and shows their results below the individual leaf summary results with a histogram. It shows recommendations once the number of images taken reaches ten.

System Testing and Evaluation

The rice leaf images used in this study were taken straight from the rice plants in the field. The images were captured with the application's camera at around 8:00 in the morning; it was ensured that they were not taken directly under the sun while still allowing enough light to pass through. Sixteen leaves that matched the colors on the LCC were captured. To distinguish the color similarity of the leaves to the LCC color panels, the leaves were captured by the camera along with the makeshift LCC, which can be seen in Appendix B.

After the evaluation of each leaf, the results were obtained and noted. The following tables and figures present the count of nodes per LCC color panel for every sample of each leaf. Table 6 summarizes all the samples used and shows the mean accuracy of the application's LCC color index classifier, which is 92%. This means that the Rice Leaf Colorimeter can identify the color index of the rice leaf image samples at almost the same accuracy as an LCC used to identify the color index.

Table 2. Results of evaluating rice leaves identified as color index 2

Leaf         LCC         SOM Color Index Count               Total
Samples      Panel       2       3       4       5
1            2           1101    502     583     313         2500
2            2           855     645     734     266         2500
3            2           1008    568     454     470         2500
4            2           705     751     516     528         2500
Total                    3669    2466    2287    1578        10000
Percentage               37%     24%     23%     16%

Figure 7. Results histogram for rice leaves identified as color index 2



Table 3. Results of evaluating rice leaves identified as color index 3

Samples      LCC         SOM Color Index Count               Total
             Panel       2       3       4       5
5            3           365     671     888     576         2500
6            3           348     1064    625     463         2500
7            3           312     914     839     435         2500
8            3           443     932     611     515         2500
Total                    1468    3581    2962    1989        10000
Percentage               15%     36%     29%     20%

Figure 8. Results histogram for rice leaf samples identified as color index 3

Table 4. Results of evaluating rice leaves identified as color index 4

Samples      LCC         SOM Color Index Count               Total
             Panel       2       3       4       5
9            4           408     625     885     581         2500
10           4           344     450     768     934         2500
11           4           374     350     908     868         2500
12           4           449     677     820     554         2500
Total                    1575    2103    3381    2941        10000
Percentage               16%     21%     34%     29%

Figure 9. Results histogram for rice leaf samples identified as color index 4
