Wild Image Classification with AI


Major Project Presentation

on
Wild Image Classification using AI
By
Barghavi
baru
Barghav
Abstract

Classification and identification of wild animals for tracking and protection purposes has
become increasingly important with the deterioration of the environment, and technology is the agent of
change that augments this process with novel solutions. Computer vision is one such technology,
applying artificial intelligence and machine learning models to visual inputs.
Convolutional neural networks (CNNs) have multiple layers with different weights used to predict
the class of a given input. The precedent for classification, however, is set by the image processing
techniques that supply near-ideal input images and thereby produce optimal results. Image segmentation
is one such widely used image processing method: it provides a clear demarcation of the areas of
interest in the image, be they regions or objects. The efficiency of a CNN is closely tied to the
preprocessing done before training. Further, it is well established that heterogeneity in image
sources is detrimental to the performance of CNNs.
LITERATURE REVIEW

A typical large-scale web image search system includes three major components: i) feature extraction
(usually in conjunction with feature selection), ii) high-dimensional indexing and iii) system design. An
image can be represented as a set of low-level visual features such as color, texture and shape.
While several large-scale web image search systems rely on only one feature for the retrieval of relevant
images, it has been shown that an appropriate combination of relevant features can yield better image
search performance. The process of determining the combination of features that is most representative of a
particular query image is called feature selection. Work has been done on color and texture feature
extraction algorithms, and a feature selection algorithm based on a fuzzy approach and relevance
feedback has been proposed.
Introduction
The expansion of urban areas in modern times has resulted in widespread displacement of habitats in forested areas. As
a result, wild animals are forced to venture into human settlements, which often infringe on their routine activities.
At this point there is tangible danger to any humans who inadvertently cross the path of these animals,
which may be predisposed to aggression. The use of technology and robust
cameras is not an alien concept in most major biosphere reserves and national parks around the world. Although there
has been a considerable amount of progress, software-based tools have not been explored to a satisfactory extent in
these use cases. Computer vision has the ability to transform the tracking and monitoring process with the accuracy
that its core and supporting techniques provide. The automation-driven reduction of man-hours invested in searching for and
tracking wild animals is perhaps the biggest potential boon that computer vision can provide. The pre-processing
involved in the application of computer vision algorithms is often under-documented, although it plays a key role in the
success of the algorithm. A deep understanding of the nature of the inputs is necessary to make appropriate changes at
crucial junctures of processing so as to meet the often convoluted criteria of complicated deep learning algorithms.
Transforming the images is invariably necessary given the erratic nature of real-world data feeds.
PROPOSED METHODOLOGY

The intrinsic nature of any classification technique is the ability to accurately identify the major features of the
target that it aims to predict. It is well documented that deep learning models and frameworks provide enhanced
accuracy when the inputs have decreased source-induced heterogeneity. Since raw real-world feeds do not
generate ideal or optimal images for classification, the onus is on the application of image processing techniques
to act as the liaison between the classifier and the inputs. Some of the most influential challenges to classifier
performance, relating to light intensity, the unavailability of high-quality night-vision training images, noisy or
element-rich backgrounds in captured images, and luminance problems caused by shadow effects, need to be mitigated.
The goal is to enable the classifier to extract features optimally from the images in order to assign a class to
them with minimal loss and maximum accuracy. Ergo, the preprocessing needs to be performed specifically to ensure
that the features are as distinctly visible as possible. The proposed sequence of operations on input images
involves the techniques described below.
FUZZY COLOR HISTOGRAM

In the fuzzy color histogram (FCH) approach, a pixel's color belongs to all
histogram bins, with a different degree of membership in each bin. More formally,
given a color space with K color bins, the FCH of an image I is defined as

F(I) = [f1, f2, …, fK],   where   fi = (1/N) Σj=1..N µij,

N is the number of pixels in the image, and µij is the membership value of the
jth pixel in the ith color bin.
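
As a sketch of the computation above, the snippet below builds an FCH in Python. The Gaussian membership function and its width sigma are illustrative assumptions; the source does not specify how µij is obtained (in practice it is often derived from fuzzy C-means clustering of the color space).

```python
import numpy as np

def fuzzy_color_histogram(image, centers, sigma=30.0):
    """Compute an FCH: each pixel contributes to every bin with a
    membership value instead of falling into a single bin.

    image   : (H, W, 3) uint8 RGB array
    centers : (K, 3) array of representative bin colors (assumed given,
              e.g. from clustering a training set)
    sigma   : width of the Gaussian membership function (a free choice)
    """
    pixels = image.reshape(-1, 3).astype(np.float64)            # (N, 3)
    # Squared distance from every pixel to every bin center -> (N, K)
    d2 = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    mu = np.exp(-d2 / (2.0 * sigma ** 2))                       # memberships
    mu /= mu.sum(axis=1, keepdims=True)                         # normalize per pixel
    # f_i = (1/N) * sum_j mu_ij
    return mu.mean(axis=0)                                      # (K,)
```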
CONTOURLET TRANSFORM
The contourlet transform provides a multi‐scale, multi‐directional decomposition of an
image. It is a combination of a Laplacian pyramid and a directional filter bank (DFB).
Bandpass images from the Laplacian pyramid are fed into the DFB so that directional
information can be captured. After decimation, the decomposition is iterated using the
same DFB. Its redundancy ratio is less than 4/3 because the directional sub‐bands are
also decimated.
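
The Laplacian pyramid stage of this decomposition can be sketched with OpenCV as below. The directional filter bank, for which no standard OpenCV routine exists, is omitted, so this is only the multi-scale half of a contourlet transform.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=3):
    """Decompose an image into bandpass sub-bands (Laplacian pyramid).
    In a contourlet transform each bandpass image would then be passed
    through a directional filter bank; that stage is omitted here.
    """
    current = img.astype(np.float32)
    bands = []
    for _ in range(levels):
        down = cv2.pyrDown(current)
        h, w = current.shape[:2]
        up = cv2.pyrUp(down, dstsize=(w, h))   # dstsize is (width, height)
        bands.append(current - up)             # bandpass (detail) image
        current = down
    bands.append(current)                      # residual low-pass image
    return bands
```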
COLOR FEATURE EXTRACTION
Color features include the conventional color histogram (CCH), the fuzzy color histogram
(FCH), the color correlogram (CC) and a more recent color-shape-based feature. The
extraction of the color-based features follows a similar progression in each of the four
methods: selection of the color space, quantization of the color space, extraction of the
color feature, and derivation of an appropriate distance function.
COLOR-SHAPE BASED METHOD
The color-shape based method (CSBM) is based on the color and shape (area and perimeter-intercepted lengths) of the
segmented objects in an image. The algorithm starts by clustering image pixels into K clusters using the K-means
algorithm. The mean value of each cluster is regarded as the representative color for that cluster. A quantized
color image I' is obtained from the original image I by quantizing the pixel colors of the original image into K
colors. Any connected region of identical-color pixels is regarded as an object. The area of each object is then
encoded as the number of pixels in the object. Further, the shape of an object is characterized by
'perimeter-intercepted lengths' (PILs), obtained by intercepting the object perimeter with eight line segments
having eight different orientations and passing through the object center. The PILs have been shown to be a good
characterization of object shapes. The immediate advantage of this method is that it encodes object shapes as well
as colors. The drawback, on the other hand, is the more involved computation and the need to determine appropriate
color thresholds for the quantization. Another drawback of CSBM is its sensitivity to contrast and noise variation.
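
A minimal sketch of the first CSBM step, K-means color quantization, assuming scikit-learn is available; object extraction (connected components) and PIL computation would follow on the quantized image.

```python
import numpy as np
from sklearn.cluster import KMeans

def quantize_colors(image, k=8, seed=0):
    """CSBM step 1: cluster pixel colors into K groups and replace each
    pixel with its cluster's mean color (the representative color)."""
    pixels = image.reshape(-1, 3).astype(np.float64)
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(pixels)
    quantized = km.cluster_centers_[km.labels_]   # map labels -> mean colors
    return quantized.reshape(image.shape).astype(np.uint8), km.labels_
```

On the quantized image, each maximal connected region of one color is treated as an object whose area is its pixel count.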
TEXTURE FEATURE EXTRACTION
Texture feature extraction methods include the steerable pyramid, the contourlet
transform, the Gabor wavelet transform and the complex directional filter bank
(CDFB).
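
As an illustration of texture feature extraction, the sketch below computes simple Gabor filter bank statistics with OpenCV; the kernel size and filter parameters are arbitrary choices, not values from the source.

```python
import cv2
import numpy as np

def gabor_texture_features(gray, n_orientations=6, wavelengths=(4.0, 8.0)):
    """Texture descriptor from a Gabor filter bank: mean and standard
    deviation of each filtered response, over several orientations and
    wavelengths."""
    gray = gray.astype(np.float32)
    feats = []
    for lambd in wavelengths:                    # wavelength of the sinusoid
        for i in range(n_orientations):
            theta = i * np.pi / n_orientations   # filter orientation
            kern = cv2.getGaborKernel((31, 31), sigma=4.0, theta=theta,
                                      lambd=lambd, gamma=0.5, psi=0)
            resp = cv2.filter2D(gray, cv2.CV_32F, kern)
            feats.extend([resp.mean(), resp.std()])
    return np.array(feats)
```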
THE STEERABLE PYRAMID
The steerable pyramid generates a multi-scale, multi-directional representation of the
image. The basic filters are translations and rotations of a single function. The image is
decomposed into one decimated low-pass sub-band and a set of undecimated directional
sub-bands. The decomposition is iterated in the low-pass sub-band. Because the
directional sub-bands are undecimated, there are 4K/3 times as many coefficients in the
representation as in the original image, where K is the number of orientations.
Pre-Processing
It was found that these images contained a lot of distortion and noise, so removing them was a necessity for the CNN
algorithm to work well. After experimenting with various noise removal and enhancement techniques, it was found that
K-means segmentation was successful enough to bring in a differentiating factor between the images, as it was able to
remove the background of the images, leaving behind the animals.
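
A minimal sketch of the K-means background removal described above, assuming the background is the single largest color cluster; this heuristic is an assumption for illustration, not a rule stated in the slides.

```python
import numpy as np
from sklearn.cluster import KMeans

def remove_background(image, k=2, seed=0):
    """Segment an image into k color clusters and zero out the cluster
    assumed to be the background (the most frequent one)."""
    pixels = image.reshape(-1, 3).astype(np.float64)
    labels = KMeans(n_clusters=k, n_init=10,
                    random_state=seed).fit_predict(pixels)
    background = np.bincount(labels).argmax()      # most frequent cluster
    mask = (labels != background).reshape(image.shape[:2])
    return image * mask[:, :, None].astype(np.uint8)
```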
Re-sampling
Once this stage was done, it was noticed that a significant class imbalance existed between the classes, so additional
data samples were synthesised from the original ones through rotation, flipping and zooming to increase the sample count.
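
This resampling step can be sketched with Keras' ImageDataGenerator, which supports exactly the three operations named (rotation, flipping, zooming); the specific ranges below are illustrative assumptions, as is the x_minority array.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Synthesize extra samples for under-represented classes via rotation,
# flipping and zooming.
augmenter = ImageDataGenerator(
    rotation_range=20,       # random rotations up to +/- 20 degrees
    horizontal_flip=True,    # random left-right flips
    zoom_range=0.2,          # random zoom in/out by up to 20%
)

# x_minority (hypothetical): (n, H, W, 3) array of images from a minority
# class; flow() yields augmented batches to append to the training set.
# batches = augmenter.flow(x_minority, batch_size=32)
```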
Model Building
Once the images were deemed fit for training, different models were built, including VGG19, VGG16,
InceptionV3, MobileNetV2, MobileNetV3 and InceptionResNetV2, to gauge the potential of the fine-tuning technique to
classify images into multiple classes based on their prior training on the ImageNet dataset of roughly one million
samples. Eventually it was found that InceptionResNetV2 gave the best results after training for 40 epochs.
Data augmentation was used before feeding data into the model to bring diversity into the dataset. The technique of
fine-tuning initializes the CNN layers with the best weights from ImageNet training. The fully connected layers are
custom-built as needed, and the entire model is subjected to weight updates by backpropagation during the training
phase. In the fully connected part, the first and second layers have 512 and 256 neurons, while the third (final)
layer has 10 neurons with softmax activation; the model uses the Adam optimizer and categorical cross-entropy loss.
When overfitting occurred, a dropout layer was added between the first and second layers to cut off 25% of the
connections between them. The evaluation of model performance is done on the basis of the accuracy obtained after
the building process. The result is proof that transfer-learning-based models [9] have superior performance
when a limited amount of data is available for training.
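
A sketch of this fine-tuning setup in tf.keras: the layer widths, dropout rate, optimizer and loss follow the text above, while the 224×224 input size and the train_ds/val_ds dataset names are assumptions for illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Backbone pre-trained on ImageNet; its weights are not frozen, so the
# whole network is updated by backpropagation during fine-tuning.
base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet",
    input_shape=(224, 224, 3), pooling="avg")

model = models.Sequential([
    base,
    layers.Dense(512, activation="relu"),
    layers.Dropout(0.25),                  # added when overfitting appeared
    layers.Dense(256, activation="relu"),
    layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=40)
```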
Query expansion techniques can be classified into four categories on the basis of the relationship between
the query terms and the expansion features:

 One-to-One Association: correlates each expansion term to a single query term.
 One-to-Many Association: correlates each expansion term to many query terms.
 Feature Distribution of Top-Ranked Documents: deals with the top documents retrieved by the initial
query and considers the top-weighted terms from these documents.
 Query Language Modeling: constructs a statistical model for the query and chooses expansion terms
having the highest probability.
[Figure: Example of a taxonomy hierarchy in WordNet]
[Figure: The Jaccard coefficient, J(A, B) = |A ∩ B| / |A ∪ B|, a set-similarity measure]
[Figure: Image color analysis]
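
For reference, the Jaccard coefficient in the figure above measures set overlap; a minimal implementation, here applied (hypothetically) to the sets of quantized colors present in two images:

```python
def jaccard(set_a, set_b):
    """Jaccard coefficient |A ∩ B| / |A ∪ B| between two sets, e.g. the
    quantized color palettes of two images."""
    a, b = set(set_a), set(set_b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0
```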
Experimental Results
In our experiments, we use the Corel_5000 and Corel_10000 datasets. The Corel_5000 dataset is a subset of
the Corel_10000 dataset. We randomly choose 50 categories from the Corel_10000 dataset; each category
contains 100 images of size 192×128 or 128×192. We report the performance of our approach and its
competitors on the near-duplicate experiments in Tables I and II, respectively. On the DupImage and
CarLogo-51 datasets with one million distractors, our approach reports mAP values of 0.72 and 0.30,
a relative improvement of 33.2% and 42.9% over the baseline, Scalar Quantization [19]. We also report
the performance of our approach and its competitors on the Oxford5K and Paris6K datasets with 100K distractors.
Segmented Results
Conclusion
Although the obtained accuracy is satisfactory for real-world use cases, robust
detection of wild animals exclusively at night, when there is no natural light, is
potentially the most challenging yet impactful expansion. The constraints
involved in processing images from night-vision cameras are highly detrimental to the
application of computer vision models, as the distinction between features plays the
central role in the working of the model. However, it is possible to process the images
generated during night hours in order to introduce some contrast and distinction
between the regions of interest in the image.
References
[1]. https://www.kaggle.com/datasets/brsdincer/danger-of-extinction-animal-image-set
[2]. H. Yousif, J. Yuan, R. Kays and Z. He, "Fast human-animal detection from highly cluttered camera-trap
images using joint background modeling and deep learning classification," 2017 IEEE International
Symposium on Circuits and Systems (ISCAS), 2017, pp. 1-4, doi: 10.1109/ISCAS.2017.8050762.
[3]. Sayagavi, A.V., Sudarshan, T.S.B., Ravoor, P.C. (2021). Deep Learning Methods for Animal Recognition
and Tracking to Detect Intrusions. In: Senjyu, T., Mahalle, P.N., Perumal, T., Joshi, A. (eds) Information and
Communication Technology for Intelligent Systems. ICTIS 2020. Smart Innovation, Systems and
Technologies, vol 196. Springer, Singapore.
[4]. Okafor, E., Berendsen, G., Schomaker, L., Wiering, M. (2018). Detection and Recognition of Badgers Using
Deep Learning. In: Kůrková, V., Manolopoulos, Y., Hammer, B., Iliadis, L., Maglogiannis, I. (eds) Artificial
Neural Networks and Machine Learning – ICANN 2018. Lecture Notes in Computer Science, vol 11141.
Springer, Cham.
