pixel; these pixels are counted, and the color panel to which each pixel was classified is recorded. An input, which contains the vectors to be used as the input layer in the implementation of the SOM, is entered into the process. Block samples can be taken again from other parts of the captured leaf image. After picking the samples, the results are then summarized.
As shown in figure 3, SOM networks consist of two layers: an input layer and an output layer. The number of nodes in the input layer is equal to the dimension of the input vector. The competition in network learning takes place in the output layer. Each node, or neuron, in the input layer is connected to the nodes in the output layer by bi-directional weights that form a weight matrix W. Also in figure 3, the upper layer is the output layer, whose nodes are arranged in a matrix (with an equal number of nodes on each side of the matrix). The lower input layer has N nodes (neurons), representing the N-dimensional input vector. All input nodes are connected to the output nodes with weights. Competition nodes also have weight connections to each other, representing interaction (Feisi Center for ...).
For a vector in the input layer, the best matching unit (BMU) in the output layer is determined according to its mapping characteristics, and its weight vector Wij may be regarded as coordinates projecting onto the input layer. By adjusting the weight matrix W, the characteristics of the input layer can be demonstrated by the output layer. The SOM realizes network learning and training through self-organized, unsupervised training. The structure of the network and the connection weights are adjusted automatically according to the training rules. The procedure ends when the distribution rule of the samples is clearly illustrated. For each network input, only an adjustment of partial weights is needed to make the weight vector converge to the input vector. This alignment procedure is the competitive learning process through which the SOM carries out the classification automatically.
SOM Procedures
Assume the input data vector, Pk, and the associated weight vector, Wij:
1. Initializing. Give initial values to Wxy, the RGB components of the pixel color at point (x, y) of the analyzed image. The RGB values should be within the range of 0 to 1; hence each RGB component is normalized by dividing its value by 255. Set the initial learning rate, η0, and the initial neighborhood radius, σ0.
2. Calculating the best matching unit (BMU). The BMU is determined according to the Euclidean distance between the node weights (W1, W2, ..., Wn) and the input vector (V1, V2, ..., Vn):

\( \text{dist} = \sqrt{\sum_{i=1}^{n} (V_i - W_i)^2} \)   (1)
3. Determining the BMU's neighborhood. The radius of the neighborhood is calculated with an exponential decay function that shrinks on each iteration until eventually the neighborhood is just the BMU itself:

\( \sigma(t) = \sigma_0 \exp\!\left(-\frac{t}{\lambda}\right) \)   (2)

Where:
λ = time constant
σ0 = initial neighborhood radius
t = current iteration
4. Modifying node weights. The new weight for a node is the old weight, plus a fraction (the learning rate, η) of the difference between the old weight and the input vector:

\( W(t+1) = W(t) + \Theta(t)\,\eta(t)\,\big(V(t) - W(t)\big) \)   (3)

The learning rate itself decays over the iterations:

\( \eta(t) = \eta_0 \exp\!\left(-\frac{t}{\lambda}\right) \)   (4)

The amount of learning also fades with distance from the BMU following a Gaussian curve, so that nodes that are closer are influenced more than farther nodes:

\( \Theta(t) = \exp\!\left(-\frac{\text{dist}^2}{2\sigma^2(t)}\right) \)   (5)

Where:
dist = distance of a node from the BMU
σ(t) = neighborhood radius at iteration t, from equation (2)
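To make the procedure above concrete, the following is a minimal Java sketch of how equations (1) to (5) can be implemented for a lattice of RGB weight vectors. The class and method names (SomSketch, train, distance) are illustrative only and do not necessarily match the application's actual classes; the iteration count, learning rate, radius, and time constant mirror the values that appear in listing 4.

public class SomSketch {
    static final int SIZE = 50;                     // 50x50 lattice, matching the block sample size
    static final int NUM_ITERATIONS = 1000;         // fixed iteration count used by the application
    static final double START_LEARNING_RATE = 0.07; // learning rate used by the application

    // lattice[x][y] holds the normalized RGB weight vector of one output-layer node;
    // in the application it is initialized from the analyzed image's pixels (step 1).
    double[][][] lattice = new double[SIZE][SIZE][3];

    void train(double[][] inputs) {
        double latticeRadius = SIZE / 2.0;                                // initial neighborhood radius
        double timeConstant = NUM_ITERATIONS / Math.log(latticeRadius);  // lambda in equation (2)

        for (int t = 1; t <= NUM_ITERATIONS; t++) {
            double radius = latticeRadius * Math.exp(-t / timeConstant);                        // equation (2)
            double learningRate = START_LEARNING_RATE * Math.exp(-(double) t / NUM_ITERATIONS); // equation (4)
            double[] v = inputs[t % inputs.length];  // next input vector

            // Equation (1): find the best matching unit (BMU) by Euclidean distance.
            int bmuX = 0, bmuY = 0;
            double best = Double.MAX_VALUE;
            for (int x = 0; x < SIZE; x++) {
                for (int y = 0; y < SIZE; y++) {
                    double d = distance(v, lattice[x][y]);
                    if (d < best) { best = d; bmuX = x; bmuY = y; }
                }
            }

            // Equations (3) and (5): pull every node inside the neighborhood toward the input.
            for (int x = 0; x < SIZE; x++) {
                for (int y = 0; y < SIZE; y++) {
                    double dist2 = (x - bmuX) * (x - bmuX) + (y - bmuY) * (y - bmuY);
                    if (dist2 < radius * radius) {
                        double influence = Math.exp(-dist2 / (2 * radius * radius));                  // equation (5)
                        for (int i = 0; i < 3; i++) {
                            lattice[x][y][i] += influence * learningRate * (v[i] - lattice[x][y][i]); // equation (3)
                        }
                    }
                }
            }
        }
    }

    // Equation (1): Euclidean distance between an input vector and a node's weight vector.
    static double distance(double[] v, double[] w) {
        double sum = 0;
        for (int i = 0; i < v.length; i++) {
            sum += (v[i] - w[i]) * (v[i] - w[i]);
        }
        return Math.sqrt(sum);
    }
}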
A makeshift LCC tool, a printed digital version of a four-panel LCC on photo paper, was used to take sample images and test the application. The captured images of the rice leaves were taken from the rice fields of Brgy. Dacay, Dulag, Leyte.
The leaves were evaluated first using the LCC to determine the color index of each rice leaf sample before testing the images of the leaves in the application.
The readings of the LCC were then compared to the readings of the Android application using the following formula:
\( \text{Accuracy} = \left(1 - \frac{LCCI - PCI}{LCCI}\right) \times 100 \)   (6)

Where:
LCCI = color index reading from the LCC
PCI = color index reading from the application
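As a hypothetical worked example (the numbers here are illustrative only, not measured results): if a leaf reads LCCI = 4 on the chart but the application classifies it as PCI = 3, equation (6) gives

\( \text{Accuracy} = \left(1 - \frac{4 - 3}{4}\right) \times 100 = 75\% \)

while an exact match (PCI = LCCI) gives 100%.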
CHAPTER IV
Android Manifest
The permissions that the application requires are declared in the Android Manifest, shown in listing 1. Since the application needs to access the user's device's camera and storage, the manifest enables the application to make requests for accessing these components: the camera, and writing to and reading from the external storage. Also in this file, the application declares the use of the device's hardware feature, camera2, which provides the interface to the device's camera.
The Android Manifest also contains the application element and the activities that make up the application.
<uses-permission android:name="android.permission.CAMERA"/>
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>
<uses-feature
android:name="android.hardware.camera2"
android:required="false"/>
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity
android:name=".SplashActivity"
android:label="@string/app_name"
android:theme="@style/Theme.Design.NoActionBar">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
<activity
android:name=".CameraActivity"
android:label="@string/app_name"
android:theme="@style/Theme.Design.NoActionBar">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
<activity
android:name=".EvaluationActivity"
android:label="@string/app_name"
android:theme="@style/Theme.Design.NoActionBar">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
<supports-screens android:resizeable="true"
android:smallScreens="true"
android:normalScreens="true"
android:largeScreens="true"
android:anyDensity="true" />
</manifest>
The developed application is divided into three activities, which compose the whole process of the application: the splash screen activity, the camera activity, and the evaluation activity. Each consists of its own Java classes and XML files that make up the activity.
The splash screen activity, shown in figure 4, presents the name of the application, "Rice Leaf Colorimeter", and two buttons. The "Capture Leaf Image" button runs the main process of the application, and the "About" button displays information about the application.
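For illustration, a button such as "Capture Leaf Image" is typically wired to the camera activity with an Intent, roughly as sketched below; the base class, layout name, and view identifier here are assumptions and not the application's actual resource names.

import android.content.Intent;
import android.os.Bundle;
import android.widget.Button;
import androidx.appcompat.app.AppCompatActivity;

public class SplashActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_splash);               // hypothetical layout name

        // "Capture Leaf Image" starts the camera activity declared in the manifest.
        Button capture = findViewById(R.id.capture_leaf_image); // hypothetical view id
        capture.setOnClickListener(v ->
                startActivity(new Intent(this, CameraActivity.class)));
    }
}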
In the camera activity, shown in figure 5, the application activates the camera so that a picture of a rice leaf can be captured. The OpenCV library was loaded in the application's camera activity to make use of the JavaCameraView class in order to access the user's device's camera, along with the appropriate permissions discussed in the Android Manifest section. JavaCameraView was used to connect OpenCV and the Java camera through the CameraBridgeViewBase class and the other required callbacks that make the camera work. A method, captureImage, is called when the capture button shown in figure 5 is pressed. This method saves the captured frame into a bitmap, which is then used for the next operation. The bitmap is saved in the user's device's external storage as a .png file and loaded back to be displayed. A method to enable retaking a shot was also added in case the captured image is unsatisfactory.
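A minimal sketch of what such a captureImage step can look like with OpenCV's Android bindings is shown below. The helper class, method name, and file name are assumptions for illustration and are not taken from the application's source; only Utils.matToBitmap and Bitmap.compress are standard OpenCV/Android calls.

import android.graphics.Bitmap;
import android.os.Environment;
import org.opencv.android.Utils;
import org.opencv.core.Mat;

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class CaptureHelper {
    // Converts the last camera frame (an OpenCV Mat) to a Bitmap and writes it to
    // external storage as a lossless PNG, returning the saved file.
    public static File saveFrameAsPng(Mat frame, String fileName) throws IOException {
        Bitmap bitmap = Bitmap.createBitmap(frame.cols(), frame.rows(), Bitmap.Config.ARGB_8888);
        Utils.matToBitmap(frame, bitmap);                       // OpenCV helper: Mat -> Bitmap

        File dir = Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_PICTURES);
        File out = new File(dir, fileName);                     // e.g. "rice_leaf.png" (illustrative)
        try (FileOutputStream fos = new FileOutputStream(out)) {
            bitmap.compress(Bitmap.CompressFormat.PNG, 100, fos);
        }
        return out;
    }
}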
After taking the picture, the application proceeds to the evaluation activity, which is shown in figure 6. Block samples of 50x50 pixels were taken from the image and then evaluated by tapping the evaluate button. A block was processed by extracting its colors pixel by pixel and then loading them into the lattice, an array of points with a size of 50x50, where the program analyzes them and afterwards displays the output values in the results table. Using the method in listing 2, the RGB values were extracted at the coordinates of the image where the selected 50x50 block sample is located. After the RGB values were extracted, the first plotting of the lattice took place. The procedure for plotting the lattice is shown below.
// Scale each node's normalized weights (0 to 1) back to 0-255 color components.
pRed = (float) node.getVector().getWeight(posRed) * MAXCOLOR;
pGreen = (float) node.getVector().getWeight(posGreen) * MAXCOLOR;
pBlue = (float) node.getVector().getWeight(posBlue) * MAXCOLOR;
// ...
// Draw the node as a colored rectangle and display the updated lattice bitmap.
canvas.drawRect(rectangle, paint);
latticeiv.setImageBitmap(lattice);
}
}
}
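Listing 2 itself (the pixel-extraction method) is not reproduced in this excerpt. As a rough sketch of the step it performs, with class, method, and field names that are assumptions rather than the application's own, extracting normalized RGB vectors from a 50x50 block of the captured bitmap could look like this:

import android.graphics.Bitmap;
import android.graphics.Color;

public class BlockSampler {
    static final int BLOCK_SIZE = 50;     // 50x50 block sample, as in the evaluation activity
    static final float MAXCOLOR = 255f;   // normalization divisor (step 1 of the SOM procedure)

    // Returns BLOCK_SIZE*BLOCK_SIZE normalized RGB vectors starting at (startX, startY).
    public static float[][] extractBlock(Bitmap image, int startX, int startY) {
        float[][] vectors = new float[BLOCK_SIZE * BLOCK_SIZE][3];
        for (int y = 0; y < BLOCK_SIZE; y++) {
            for (int x = 0; x < BLOCK_SIZE; x++) {
                int pixel = image.getPixel(startX + x, startY + y);
                float[] v = vectors[y * BLOCK_SIZE + x];
                v[0] = Color.red(pixel) / MAXCOLOR;    // R in [0,1]
                v[1] = Color.green(pixel) / MAXCOLOR;  // G in [0,1]
                v[2] = Color.blue(pixel) / MAXCOLOR;   // B in [0,1]
            }
        }
        return vectors;
    }
}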
Every point represents a node in the output layer; each node contains a weight vector whose three components are red, green, and blue. Every node is evaluated to determine which input vector it is closest to; in other words, it is classified to the LCC panel color it matches, and the winning vector is that node's best matching unit (BMU). In this process of determining the sample's color index, shown in listing 4, with a fixed iteration count of 1000 and a learning rate of 0.07, the frequency of the nodes closest to each input vector was calculated, and the input vector obtaining the highest number of nodes determines the sample's color index.
lw = somLattice.getCols();
lh = somLattice.getRows();
LATTICE_RADIUS = getMax(lw, lh) / 2;                        // initial neighborhood radius
TIME_CONSTANT = NUM_ITERATIONS / Math.log(LATTICE_RADIUS);  // lambda in equation (2)
learningRate = START_LEARNING_RATE;
iteration = 0;
// ... (training loop: BMU search and weight updates per equations (1), (3) and (5))
iteration = iteration + 1;
learningRate = START_LEARNING_RATE * Math.exp((double) -1 *
        iteration / NUM_ITERATIONS);                        // learning-rate decay, equation (4)
}
plotLattice();
countInputVectors();
}
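The countInputVectors step is not shown in the excerpt above. A minimal sketch of the frequency-counting idea it implements, classifying each trained node to the nearest of the four LCC reference colors and taking the panel with the most nodes, follows; the class name, parameters, and index range are illustrative assumptions, not the application's code.

public class PanelVote {
    // lattice    - trained node weights, each a normalized {r, g, b} vector
    // lccColors  - the four LCC panel colors read from input.som, normalized
    // firstIndex - LCC index of lccColors[0] (depends on how the chart is numbered)
    public static int winningColorIndex(double[][] lattice, double[][] lccColors, int firstIndex) {
        int[] counts = new int[lccColors.length];
        for (double[] node : lattice) {
            int best = 0;
            double bestDist = Double.MAX_VALUE;
            for (int p = 0; p < lccColors.length; p++) {
                double d = 0;
                for (int i = 0; i < 3; i++) {
                    double diff = node[i] - lccColors[p][i];
                    d += diff * diff;                 // squared Euclidean distance, as in equation (1)
                }
                if (d < bestDist) { bestDist = d; best = p; }
            }
            counts[best]++;                           // frequency of nodes closest to this panel
        }
        int winner = 0;
        for (int p = 1; p < counts.length; p++) {
            if (counts[p] > counts[winner]) winner = p;
        }
        return firstIndex + winner;                   // the panel with the highest count wins
    }
}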
The winning vector represents the LCC panel with which the sample is classified. This is all done with the input vectors, which are plotted into a virtual LCC displayed in this activity's user interface along with the other components of the activity. The input vectors are written in a file in the application assets, input.som, which was produced by extracting the LCC's RGB colors. The file consists of the RGB values for each of the four color panels.
The file is loaded in the evaluation activity to plot a virtual LCC in the application. In figure 6, it is located in the uppermost part of the user interface. This method is shown below in listing 3.
// dimensions of canvasiv (the virtual LCC image view)
Width = 400;
Height = 74;
// ...
canvasiv.setImageBitmap(bmp);
}
}
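The exact format of input.som is not shown in this document. Assuming it simply stores one normalized R,G,B triple per LCC panel, one comma-separated line per panel, loading it from the assets could look roughly like the sketch below; this is an assumption for illustration, not the application's actual parser.

import android.content.Context;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

public class InputSomLoader {
    // Reads normalized RGB vectors (one comma-separated triple per line) from assets/input.som.
    public static List<double[]> load(Context context) throws IOException {
        List<double[]> panels = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(context.getAssets().open("input.som")))) {
            String line;
            while ((line = reader.readLine()) != null) {
                String[] parts = line.trim().split(",");
                if (parts.length < 3) continue;       // skip blank or malformed lines
                panels.add(new double[] {
                        Double.parseDouble(parts[0]), // R in [0,1]
                        Double.parseDouble(parts[1]), // G in [0,1]
                        Double.parseDouble(parts[2])  // B in [0,1]
                });
            }
        }
        return panels;   // expected: four vectors, one per LCC color panel
    }
}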
Once the results table is filled, the results are summarized: the values from the table are processed and plotted into a graph. Other information in the summary, such as the number of samples that matched each LCC value, is also displayed.
The application also counts the number of rice leaf images captured and shows their results below the individual leaf summary results with a histogram.
The rice leaf images used in this study were taken straight from the rice plants in the farm field. The images were captured with the application's camera at around 8:00 in the morning; it was ensured that they were not taken directly under the sun, while still allowing enough light to pass through. Sixteen leaves that matched the colors on the LCC were captured. To distinguish the color similarity of the leaves to the LCC color panels, the leaves were captured by the camera along with the makeshift LCC, which can be seen in Appendix B.
After the evaluation of each leaf, the results were obtained and noted. The following tables and figures present the count of nodes per LCC color panel for every sample of each leaf. Table 6 summarizes all the samples used; it shows that the application's LCC color index classifier has a mean accuracy of 92%. This means that the Rice Leaf Colorimeter can identify the color index of the rice leaf image samples at almost the same accuracy as an LCC used to identify the color index.
Figure 8. Results histogram for rice leaf samples identified as color index 3
Figure 9. Results histogram for rice leaf samples identified as color index 4