
Evaluation Metrics for Machine Learning

After researchers choose a target variable they wish to predict and fulfill the prerequisites of data transformation and model building, one of the final phases is evaluating the model's performance. A machine learning model can be evaluated through a confusion matrix, accuracy, precision, recall, and the F1 score. None of these metrics is perfect; each has its own limitations, so the performance metric must be chosen to suit the specific problem being addressed.

Confusion Matrix

Which performance metric fits a model depends on the problem it solves. Suppose a data set of 100 examples is fed to the model and a classification is received for each one. The confusion matrix charts the predicted classifications against the actual classifications.

                    Negative (Predicted)   Positive (Predicted)
Negative (Actual)           98                      0
Positive (Actual)            1                      1

Table X shows the positive versus negative outputs; each example falls into one of these classes. Since there are only two classes, this is a binary classifier confusion matrix. For better comprehension, the table can also be read in terms of true positives, true negatives, false positives, and false negatives: here the model produced 1 true positive, 98 true negatives, 0 false positives, and 1 false negative.
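
As a minimal sketch of how these four counts can be tallied (plain Python; the actual, predicted, and confusion_counts names are illustrative, not from the original text):

    # Tally the four cells of a binary confusion matrix.
    # `actual` and `predicted` are hypothetical lists of 0/1 labels
    # (1 = positive, 0 = negative).
    def confusion_counts(actual, predicted):
        tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
        tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
        fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
        fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
        return tp, tn, fp, fn

    # Reproducing Table X: 100 examples, 98 true negatives,
    # 1 true positive, 0 false positives, 1 false negative.
    actual = [0] * 98 + [1, 1]
    predicted = [0] * 98 + [0, 1]
    print(confusion_counts(actual, predicted))  # (1, 98, 0, 1)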

Accuracy

Accuracy measures how often the model classifies examples correctly overall, giving a quick indication of whether the model was trained properly and how it will perform. However, it has its own limits: it gives no detailed information about how well the model fits the problem. In particular, accuracy does not do well when one class heavily outnumbers the other; its usefulness is limited to cases where the classes are reasonably balanced.


\[ \text{Accuracy} = \frac{\text{true positives} + \text{true negatives}}{\text{total examples}} \]
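
Applied to Table X, accuracy = (1 + 98) / 100 = 0.99. The score looks nearly perfect even though the model found only half of the actual positives, which illustrates the limitation above.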

Precision

Precision helps when the cost of false positives is high. People who monitor the results become desensitized and start to ignore signals after being bombarded with false alarms.

\[ \text{Precision} = \frac{\text{true positives}}{\text{true positives} + \text{false positives}} \]
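
Applied to Table X, precision = 1 / (1 + 0) = 1.0: every positive prediction the model made was correct, so it raised no false alarms.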

Recall

When the cost of false negatives is excessive, recall is the best metric to use. When detecting incoming danger, a false negative can have devastating consequences: a missed detection may cost lives and livelihoods.

\[ \text{Recall} = \frac{\text{true positives}}{\text{true positives} + \text{false negatives}} \]
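
Applied to Table X, recall = 1 / (1 + 1) = 0.5: the model detected only half of the actual positives.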

F1 Score

The F1 score is an overall measurement of a model's accuracy that combines precision and recall (it is their harmonic mean). A good F1 score means both false negatives and false positives are low: the model correctly identifies the objects and threats while giving little false information. A perfect F1 score is 1, and total failure is 0.

\[ F1 = \frac{2 \times \text{precision} \times \text{recall}}{\text{precision} + \text{recall}} \]
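
Tying the four metrics together, here is a minimal sketch in plain Python using the counts from Table X (variable names are illustrative):

    # Compute the four metrics from the Table X counts.
    tp, tn, fp, fn = 1, 98, 0, 1

    accuracy = (tp + tn) / (tp + tn + fp + fn)  # (1 + 98) / 100 = 0.99
    precision = tp / (tp + fp)                  # 1 / 1 = 1.0
    recall = tp / (tp + fn)                     # 1 / 2 = 0.5
    f1 = 2 * precision * recall / (precision + recall)  # 1.0 / 1.5 = 0.667

    print(f"accuracy={accuracy:.2f} precision={precision:.2f} "
          f"recall={recall:.2f} f1={f1:.3f}")

Note how the F1 score of roughly 0.67 exposes the weakness in detecting positives that the 0.99 accuracy hides.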
