An Efficient Optimal Neural Network-Based Moving Vehicle Detection in Traffic Video Surveillance System
https://doi.org/10.1007/s00034-019-01224-9
Abstract
This paper presents an effective traffic video surveillance system for detecting moving
vehicles in traffic scenes. The moving vehicle identification process on roads is utilized for vehicle tracking, counting, estimating the average speed of every individual vehicle, motion analysis, and vehicle classification, and may be executed under various situations. In this paper, we develop a novel hybridization of an artificial neural network (ANN) and the oppositional gravitational search optimization algorithm (ANN–OGSA) for a moving vehicle detection (MVD) system. The proposed system consists of two main phases: background generation and vehicle detection. Here, at first, we develop an efficient method to generate the background. After the background generation, we detect the moving vehicle using the ANN–OGSA model. To increase the performance of the ANN classifier, we optimally select the weight values using the OGSA algorithm. To prove the effectiveness of the system, we have compared our proposed algorithm with different algorithms and utilized three types of videos for the experimental analysis. The precision of the proposed ANN–OGSA method improves by more than 3% and 6% over the existing GSA-ANN and ANN, respectively. Similarly, the GSA-ANN-based MVD system attained a maximum recall of 89%, 91%, and 91% for video 1, video 2, and video 3, respectively.
B Ahilan Appathurai
[email protected]
Extended author information available on the last page of the article
Circuits, Systems, and Signal Processing
1 Introduction
cedure which is a base change estimation of direct development [31]. One important
class of vehicle tracking algorithms is based on probabilistic modeling of associated
data and Bayesian estimation techniques. In these methods, statistical models are used
to represent characteristics of the vehicle features. The performance of the KF and the
extended KF (EKF) hybrid is better compared to the particle filter in tracking a road
in the satellite images [32].
In this paper, we develop an efficient moving vehicle detection (MVD) system using
an optimal artificial neural network (OANN). The proposed system consists of two modules: background generation and MVD using the ANN with oppositional gravitational search optimization (OGSA), that is, the ANN–OGSA classifier. Here, at first,
we separate the background from the input traffic video. Then, the number of frames
and background of the frames are fed to the OANN classifier. The OANN classifier
is the combination of OGSA and ANN. The ANN classifier weights are optimally
selected using the OGSA algorithm. The rest of the paper is organized as follows:
A brief review of some of the literature in object detection techniques is presented
in Sect. 2. The background of the research is explained in Sect. 3, and the proposed
MVD is described in Sect. 4. The experimental results and performance evaluation
discussion are provided in Sect. 5. Finally, the conclusions are summed up in Sect. 6.
2 Related Works
Many researchers have analyzed the MVD system. We now discuss some of the
research works in this regard. Tian et al. [34] have explained the rear-view vehicle
detection and tracking by combining multiple parts for complex urban surveillance.
The rear-view vehicle detection and tracking method is based on multiple vehicle salient parts viewed by a stationary camera. They show that spatial modeling of these vehicle parts was crucial for overall performance. First, the vehicle was treated as an object composed of multiple salient parts, including the license plate and rear lamps. These parts were localized using their distinctive color, texture, and region features. Furthermore,
the detected parts were treated as graph nodes to construct a probabilistic graph using
a Markov random field model. Then, the marginal posterior of each part was inferred
using loopy belief propagation to get final vehicle detection. Finally, the vehicles’
trajectories were estimated using the KF, and a tracking-based detection technique
was realized. Experiments in practical urban scenarios were carried out under various
weather conditions.
Wei et al. [36] have explained a vehicle detection and tracking system based on image data collected by an unmanned aerial vehicle. This system uses consecutive frames to generate vehicles' dynamic information, such as positions and velocities. Four major modules were developed in this regard: image registration, image feature extraction, vehicle shape detection, and vehicle tracking. Some unique features were introduced into this system to characterize vehicles and traffic flow and to use them jointly across multiple consecutive images to increase the system's accuracy in detecting and tracking vehicles.
Similarly, Hu et al. [12] have explained an adaptive approach for validation in visual object tracking. Validating unpredictable features is a challenging task in visual object tracking with occlusion and large appearance variation. To address this uncertainty, they introduced an adaptive approach that uses an updating model based on occlusion and distortion parameters. In the case of occlusion or large appearance variation, the method uses backward model validation, where it updates the invalid appearance and then validates the target feature model. If the target feature did not undergo any kind of clutter or distortion, it simply validates and then updates the appearance model using forward feature validation. The experimental results obtained from this adaptive approach demonstrate effectiveness in terms of overlap rate and center location error compared with other relevant existing algorithms.
Zhang et al. [37] have introduced detection of on-road vehicles based on color intensity segregation. This strategy comprises two stages. First, details such as pavements or lanes in the image frame are utilized to extract the region of interest. Second, a filter that uses the intensity data removes illumination variations, shadows, and cluttered backgrounds from the extracted region of interest and thereby identifies the vehicles.
In Ref. [38], Hu has explained moving object detection and tracking from a video captured by a moving camera. Moving object detection was relatively difficult for video captured by a moving camera since camera motion and object motion were mixed. In this method, the feature points in the frames were found and then classified as belonging to foreground or background features. Next, moving object regions
were obtained using an integration scheme based on foreground feature points and
foreground regions, which were obtained using an image difference scheme. Then,
a compensation scheme based on the motion history of the continuous motion con-
tours obtained from three consecutive frames was applied to increase the regions of
moving objects. Moving objects were detected using a refinement scheme and a mini-
mum bounding box. Finally, moving object tracking was achieved using the KF based
on the center of gravity of a moving object region in the minimum bounding box.
Experimental results show that the method is reliable.
Zhou et al. [39] have introduced detection of moving vehicles and estimation of their speeds using a single camera in daylight or properly lit conditions. The approach detects and tracks each vehicle passing through the surveillance area and keeps a record of the vehicles' positions. In this paper, it was shown that tracking of vehicles depends on the relative positions of the vehicles in consecutive frames. These data were utilized in the automatic number plate recognition system for the selection of those key frames where speed-limit violations occur.
3 Background of the Research
In this section, we first explain the algorithms used in this paper. Then, we explain the proposed MVD system.
3.1 ANN
The ANN consists of three layers: an input layer, a hidden layer, and an output layer. The input layer has input neurons which transfer the data via synapses to the hidden layer, and similarly, the hidden layer transfers these data to the output layer via more synapses. The number of neurons in the input layer is the same as the number of features selected in the previous stage. The number of neurons in the output layer equals the number of classes represented in the network. The number of hidden neurons is determined experimentally. The input of a neuron is the weighted sum of the outputs of all the neurons to which it is connected, and its output value is a nonlinear function of this input. The weight W_ij connects input node j to output node i. The neural network (NN) has two phases: a training phase and a testing phase. During the training phase, the NN parameters are changed. After the training phase, the NN parameters are fixed and the testing phase is carried out.
Figure 1 shows the working concept of the ANN. Here, the input neurons are defined as (F_1, F_2, ..., F_a), the hidden neurons as (H_1, H_2, ..., H_b), and the output neurons as (Y_1, Y_2, ..., Y_c). W^h_{jk} denotes the weight connecting input-layer node k and hidden-layer node j, and W^o_{ij} denotes the weight connecting hidden-layer node j and output-layer node i, where 1 ≤ k ≤ a, 1 ≤ j ≤ b, and 1 ≤ i ≤ c. Here, h represents the hidden layer and o represents the output layer. The back-propagation (BP) algorithm is utilized for the training process. We now briefly discuss the training process involved in this regard.
Here, each node k in the input layer has a signal U_k as system input, multiplied by a weight value between the input layer and the hidden layer. Each node j in the hidden layer receives the signal H_j, which is given in Eq. (1):

H_j = \alpha_j + \sum_{k=1}^{n} U_k W^h_{jk}.   (1)
Here, \alpha_j represents the bias value of the hidden layer. The output H_j is passed through a sigmoid activation function, represented as the nonlinear function described in Eq. (2):

f(H_j) = \frac{1}{1 + e^{-H_j}}.   (2)
After the activation function calculation, we calculate the output value. Here, the output of the activation function is given to the output layer. The output function is described in Eq. (3):

O_i = \alpha_i + \sum_{j=1}^{b} W^o_{ij} f(H_j)   (3)
where \alpha_i is the bias in the output layer. Then, we calculate the learning error using Eq. (4):

E = \frac{1}{2n} \sum_{i=0}^{h-1} (T_i - Y_i)^2   (4)
where n is the number of training parameters; Y_i and T_i are the output value and
the target value, respectively. In order to derive optimal weights for the ANN, a BP
algorithm is generally employed. A BP algorithm updates the weights iteratively to
minimize the error function. We now discuss the steps involved in the BP algorithm.
• The weights for the neurons of the hidden layer and the output layer are initialized by randomly selecting the weight values. The input layer, however, has constant weights.
• The activation function and the output function are evaluated for the NN using Eqs. (2) and (3).
• The back-propagation error is determined for each node, and then the weights are updated in proportion to this error, where δ is the learning rate, which normally ranges from 0.2 to 0.5, and E_BP is the BP error.
• After adjusting the weights, steps (2) and (3) are repeated until the BP error is minimized, that is, E_BP < 0.1.
• Once this minimum is reached, the ANN is properly trained for the testing phase.
In GSA, the gravitational force is computed using Eq. (7). The mass of every agent is determined by its fitness value, which is given in Eq. (8). Thus, we have Eq. (10), in which G(k) is the gravity constant, ε is a very small value, R_ij(t) is the Euclidean distance between the two agents i and j, and rand_i and rand_j are two random numbers in the interval [0, 1] that guarantee the stochastic characteristics of the algorithm. The different steps of GSA are given in Table 1.
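As a sketch, the GSA steps of Table 1 (mass assignment from fitness, force accumulation over pairwise Euclidean distances, then velocity and position updates) might look as follows on a toy sphere objective; the search bounds, population size, and gravity schedule (g0, beta) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def gsa_minimize(fitness, dim=2, agents=10, iters=50, g0=100.0, beta=20.0):
    """Sketch of the GSA loop; g0, beta, and the [-5, 5] bounds are assumed."""
    X = rng.uniform(-5.0, 5.0, (agents, dim))
    V = np.zeros_like(X)
    gbest_x, gbest_f = X[0].copy(), float("inf")
    for t in range(iters):
        f = np.array([fitness(x) for x in X])
        if f.min() < gbest_f:                         # remember best agent so far
            gbest_f = float(f.min())
            gbest_x = X[f.argmin()].copy()
        best, worst = f.min(), f.max()
        m = (f - worst) / (best - worst - 1e-12)      # Eq. (8)-style mass: fitter = heavier
        M = m / (m.sum() + 1e-12)
        G = g0 * np.exp(-beta * t / iters)            # decaying gravity constant
        A = np.zeros_like(X)
        for i in range(agents):
            for j in range(agents):
                if i != j:
                    R = np.linalg.norm(X[i] - X[j])   # Euclidean distance R_ij
                    A[i] += rng.random() * G * M[j] * (X[j] - X[i]) / (R + 1e-12)
        V = rng.random(X.shape) * V + A               # stochastic velocity update
        X = X + V                                     # position update
    return gbest_x, gbest_f

sphere = lambda x: float(np.sum(x ** 2))              # toy objective
best_x, best_f = gsa_minimize(sphere)
```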
Let X ∈ [p, q]. Then, the opposite solution X̃ is calculated using Eq. (13):

X̃ = p + q − X.   (13)

For a multidimensional point, each component of the opposite solution is computed as

X̃_i = p_i + q_i − X_i.   (14)
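Computed per dimension, the opposite solution of Eqs. (13)–(14) is a one-liner; keeping the fitter of each point and its opposite is a common OBL choice, assumed here rather than taken from the paper.

```python
# Opposite point of x within per-dimension bounds [p_i, q_i] (Eq. 14).
def opposite(x, p, q):
    return [pi + qi - xi for xi, pi, qi in zip(x, p, q)]

# In OGSA, the random population and its opposites would both be evaluated
# and the fitter half retained (assumed OBL usage).
print(opposite([1.0, 4.0], [0.0, 0.0], [5.0, 5.0]))  # [4.0, 1.0]
```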
4 Proposed MVD System
Background generation is an important process for the MVD system. The main objective of this section is to create the background of the input video V_in. In this paper, we develop an efficient method for the background generation process. Consider the input video V_in, which has n frames, V_in = [V_1, V_2, ..., V_n]. Each frame has some static content and some moving content. Therefore, these frames have two types of pixels: background pixels and moving-object pixels. Comparing these two pixel types across all the frames, we observe that the background pixels keep the same pixel value in every frame, whereas the moving-object pixels take different pixel
Fig. 2 Overall diagram of the proposed MVD using the OGSA–ANN model
values. Among them, we select the common pixel values for the background generation process. A simple example of the background generation process and its output is given in Figs. 3 and 4.
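One way to realize this "same pixel value across frames" idea is a per-pixel temporal median, sketched below; the median rule is our assumption for how the common value is selected, and the toy frames are illustrative.

```python
import numpy as np

def generate_background(frames):
    """Per-pixel consensus over frames: background pixels repeat their value,
    so the temporal median suppresses the transient moving-object pixels."""
    stack = np.stack(frames, axis=0)       # shape (n, height, width)
    return np.median(stack, axis=0)

# Toy video: a static background of 10s with one moving "vehicle" pixel of 255
# that occupies a different position in each of the first four frames.
frames = [np.full((4, 4), 10.0) for _ in range(5)]
for t, f in enumerate(frames[:-1]):
    f[0, t] = 255.0                        # object slides along the top row
background = generate_background(frames)
```

Because each pixel is covered by the object in at most one of the five frames, the median recovers the clean background everywhere.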
After the background generation process, we detect the moving vehicle in a given frame using the OGSA–ANN combination. This section consists of two stages: (1) weight optimization based on the OGSA and (2) MVD.
STAGE 1: Weight optimization using the OGSA
In this paper, we utilize the ANN classifier for the MVD. To improve the performance
of the ANN, we optimally select the weight value of the ANN using the OGSA. The
OGSA algorithm is a combination of the OBL strategy and the GSA algorithm. To
increase the searching ability of the GSA algorithm, the OBL strategy is hybridized
with the GSA. In the ANN, the weight values are optimally selected by the OGSA
algorithm.
In the proposed OGSA–ANN model, the input layer is the primary layer of the network and comprises n neurons. Each input neuron represents a distinct attribute of the training/test dataset (X_1, X_2, ..., X_n). The value from each input neuron is multiplied by the corresponding weight W_ij to obtain the hidden neuron input, as displayed in Fig. 5. The output layer is the last layer and typically comprises only one node, as only one output is usually demanded. In this proposed model, during the training phase, the objective is to calculate the most accurate weights to be assigned to the connections from the input layer. In this phase, the output is
computed repeatedly, and the result is compared to the preferred output generated by
the training/test datasets. We now discuss the step-by-step process of the proposed
OGSA–ANN model.
W_{ij} = [W_{11}, W_{12}, \ldots, W_{1n}].   (15)
Step 3: Fitness calculation Now, we calculate the fitness of the solutions generated in steps 1 and 2. To assess the fitness of a solution, an objective function needs to be designed to quantify its quality; here, the mean squared error (MSE) of Eq. (17) is used:

MSE = \frac{1}{N} \sum_{i=1}^{N} \left( Y_i - \tilde{Y}_i \right)^2   (17)
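As a minimal sketch, Eq. (17) as the fitness of a candidate weight set (lower MSE means a fitter agent); the target and output numbers below are illustrative.

```python
# MSE fitness of Eq. (17): mean squared difference between target values
# and the network outputs produced by a candidate weight vector.
def mse(targets, outputs):
    n = len(targets)
    return sum((y - yh) ** 2 for y, yh in zip(targets, outputs)) / n

print(mse([1.0, 0.0, 1.0], [0.5, 0.5, 0.5]))  # 0.25
```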
K(V_{in}, H_b)_m = \exp\left( -\frac{\| V_{in} - H_b \|^2}{2\sigma^2} \right)   (20)

T(m) = \sum_{i=1}^{H} K(V_{in}, H_b)_m   (21)
Then, the maximum value of the sum is chosen to determine whether the block has a high probability of containing background information. This is expressed in Eq. (22), where P_t is each pixel value of the corresponding block B, and the block size N can be set to 3 empirically. To determine whether the block B(i, j) has a high probability of comprising the background data, the computed sum of the block must surpass a threshold value τ, in which case the block is considered as "0." Otherwise, the block B(i, j) will be considered as "1" to indicate a high probability that the block comprises moving vehicles. This decision rule can be expressed as follows:

B(i, j) = \begin{cases} 0, & \text{if } \varphi \ge \tau \\ 1, & \text{otherwise} \end{cases}   (24)
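A sketch of the block-estimation rule of Eqs. (20), (21), and (24): each pixel's Gaussian similarity to the background is summed per N × N block, and blocks whose sum reaches the threshold are marked as background. The values of sigma, tau, and the toy frame are assumptions, not parameters from the paper.

```python
import numpy as np

def classify_blocks(frame, background, n=3, sigma=10.0, tau=8.0):
    """Label each n x n block: 0 = likely background, 1 = likely moving vehicle."""
    k = np.exp(-((frame - background) ** 2) / (2 * sigma ** 2))  # Eq. (20) per pixel
    h, w = frame.shape
    labels = np.zeros((h // n, w // n), dtype=int)
    for bi in range(h // n):
        for bj in range(w // n):
            phi = k[bi*n:(bi+1)*n, bj*n:(bj+1)*n].sum()          # Eq. (21)-style sum
            labels[bi, bj] = 0 if phi >= tau else 1              # Eq. (24)
    return labels

background = np.full((6, 6), 10.0)
frame = background.copy()
frame[0:3, 0:3] = 200.0     # a "vehicle" covers the top-left 3 x 3 block
labels = classify_blocks(frame, background)
```

Blocks identical to the background score a similarity of 1 per pixel (sum 9 ≥ tau), while the occluded block scores near 0 and is flagged for the vehicle detection procedure.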
Vehicle Detection Procedure After the block estimation procedure eliminates blocks
that are determined to have a high probability of containing background information,
the vehicle detection procedure accurately detects moving vehicles within only those
blocks that are regarded as having high probability of containing moving vehicles.
Finally, the detection result strongly depends on the output layer of the ANN, which
generates the binary motion detection mask. This is accomplished via the winner-
takes-all rule as follows:
Y = \max_{1 \le m \le H} Z_{im}   (25)

where Z_{im} is the output value of the ith hidden-layer neuron and H is the number of hidden-layer neurons.
P(x, y) = \begin{cases} 1, & \text{if } Y(x, y) < \omega \\ 0, & \text{otherwise} \end{cases}   (26)

where ω signifies the empirical threshold value, and P(x, y) is considered either as "1" to signify a motion pixel that is part of a moving vehicle or as "0" to signify a background pixel.
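Eqs. (25)–(26) can be sketched as follows, assuming the hidden-layer responses are available as an (H, height, width) array; the threshold omega and the toy responses are illustrative.

```python
import numpy as np

def motion_mask(Z, omega=0.5):
    """Winner-takes-all output (Eq. 25) thresholded into a motion mask (Eq. 26)."""
    Y = Z.max(axis=0)               # per-pixel maximum hidden-neuron response
    return (Y < omega).astype(int)  # weak winning response -> motion pixel "1"

Z = np.zeros((3, 2, 2))             # 3 hidden neurons over a 2 x 2 frame
Z[:, 0, 0] = [0.9, 0.7, 0.8]        # strong response: pixel matches background
Z[:, 1, 1] = [0.1, 0.2, 0.05]       # weak response: pixel belongs to a vehicle
mask = motion_mask(Z)
```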
Background Updating Procedure After all the operations are completed for the current incoming frame, we use Eq. (27) to update the neurons of the hidden layer in the proposed background updating procedure for the next incoming frame as follows:

\tilde{W}(x, y)_i = (1 - \alpha) W(x, y)_i + \alpha V_{in}(x, y)   (27)

where \tilde{W}(x, y)_i and W(x, y)_i represent the updated and the original ith neurons at the position (x, y), respectively, and α is the empirical parameter.
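A minimal sketch of such a background update, assuming it takes the usual exponential-forgetting (running-average) form; the alpha value and pixel numbers are illustrative.

```python
def update_weight(w_old, pixel, alpha=0.05):
    """Drift the stored background value toward the current frame's pixel
    at rate alpha (assumed running-average form of the update)."""
    return (1.0 - alpha) * w_old + alpha * pixel

w = 10.0                     # stored background value
for _ in range(3):
    w = update_weight(w, 20.0)  # the scene slowly brightens toward 20
```

A small alpha makes the background adapt to gradual illumination changes while ignoring vehicles that pass through in a few frames.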
5 Experimental Results and Discussion
In this section, we explain the experimental results obtained from the proposed MVD system. The proposed MVD system has been tested on three different videos and evaluated with four measures: precision, recall, F-measure, and similarity. For implementing the proposed technique, we have used MATLAB 2017b. The proposed technique is carried out on a Windows computer with an Intel Core i5 processor running at 1.6 GHz and 4 GB of RAM. The performance of the proposed MVD system is compared with different algorithms. We now discuss the experimental results.
The MVD system performance is analyzed using the most common performance
measures such as precision, recall, F-measure, and similarity. The metric values found
are based on true positive (TP), true negative (TN), false positive (FP), and false
negative (FN).
Precision = \frac{TP}{TP + FP}   (28)

Recall = \frac{TP}{TP + FN}   (29)

Similarity = \frac{TP}{TP + FP + FN}   (30)

F1 = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}   (31)

where TP is the total number of true-positive pixels, FP is the total number of false-positive pixels, and FN is the total number of false-negative pixels.
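A sketch of Eqs. (28)–(31) computed from pixel counts; the counts below are illustrative, not results from the paper.

```python
# Pixel-level detection metrics from true-positive (tp), false-positive (fp),
# and false-negative (fn) counts, per Eqs. (28)-(31).
def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def similarity(tp, fp, fn):
    return tp / (tp + fp + fn)

def f_measure(tp, fp, fn):
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)

# Example: 90 vehicle pixels found correctly, 10 spurious, 10 missed.
print(precision(90, 10), recall(90, 10))   # 0.9 0.9
print(round(similarity(90, 10, 10), 3))    # 0.818
print(round(f_measure(90, 10, 10), 3))     # 0.9
```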
In this section, we analyze the experimental results of the proposed MVD system. For the experimental analysis, we have utilized three videos: "expressway (EW)," "highway (HW)," and "freeway (FW)." A sample frame from each selected video is given in Fig. 7 (Tables 2, 3, 4).
Fig. 7 Extracted frames: a video 1 (EW), b video 2 (HW), and c video 3 (FW)
In this section, we compare the performance of the proposed work with different methods using the recall measure. Recall is the fraction of items that were correctly identified among all the items that should have been detected. The performance graph of the recall measure is given in Fig. 9.
Based on an analysis of Fig. 9, our proposed approach ensures the maximum recall
of 94% for video 1, 93% for video 2, and 95% for video 3. Similarly, the GSA-ANN-
based MVD system ensures the maximum recall of 89%, 91%, and 91% for video 1,
video 2, and video 3, respectively. Moreover, we further observe from Fig. 9 that the
ANN-based MVD system ensures the recall of 83%, 86%, and 84% for video 1, video
2, and video 3, respectively. From the results, we clearly understand that our proposed
approach ensures better results compared to other methods.
The performance analysis based on the F-measure is explained in this section. The F-measure considers both the precision and the recall, and gives an estimate of the accuracy of the system under test. The performance based on the F-measure is given in Fig. 10.
Figure 10 shows the performance of the F-measure. Here, the x-axis represents the videos, and the y-axis represents the F-measure. Based on an analysis of Fig. 10,
our proposed OGSA–ANN-based MVD system ensures the maximum F-measure of
92%, 90%, and 91% for video 1, video 2, and video 3, respectively. Similarly, the
GSA-ANN-based MVD system ensures the maximum F-measure of 90%, 89%, and
90% for video 1, video 2, and video 3, respectively. On the other hand, the ANN-based
MVD system ensures the F-measure of 85%, 87%, and 87% for video 1, video 2, and
video 3, respectively.
The main objective of the proposed methodology is to detect the moving vehicles in
traffic video surveillance using the OGSA–ANN model. To prove the effectiveness of
the model, in this paper we compare our proposed model with the existing ANN-based
MVD and GSA-ANN-based MVD. In this section, the performance is analyzed based
on similarity measure.
Figure 11 shows the performance of the proposed MVD system using the similarity measure. Here, our proposed approach ensures the maximum similarity of 90%, compared with 88% for the GSA-ANN model and 85% for the ANN model. Based on the results obtained, we clearly understand that our proposed approach ensures better results compared to other methods.
Fig. 8 Performance of the proposed MVD using precision measure (precision vs. videos for OGSA-ANN, GSA-ANN, and ANN)
Fig. 9 Performance of the proposed MVD using recall measure (recall vs. videos for OGSA-ANN, GSA-ANN, and ANN)
Fig. 10 Performance of the proposed MVD using the F-measure (F-measure vs. videos for OGSA-ANN, GSA-ANN, and ANN)
6 Conclusion
In this paper, we have discussed the beneficial features of an MVD system using the OGSA–ANN model. The proposed technique was implemented in MATLAB 2017b. The proposed MVD system was developed in two main stages: the design of a novel OGSA–ANN model and MVD using the OGSA–ANN model.
Fig. 11 Performance of the proposed MVD using similarity measure (similarity vs. videos for OGSA-ANN, GSA-ANN, and ANN)

In the ANN model, the weight value was optimally selected using the OGSA algorithm, which serves to increase the detection accuracy given the high convergence speed
for detection problems. For experimentation, we have utilized three types of videos;
further, performance metrics such as precision, recall, F-measure, and similarity are
analyzed for each video. The simulation results show that our proposed approach
ensures the maximum precision of 91.5% which is high compared to other methods.
References
1. A. Ahilan, E.A.K. James, Design and implementation of real time car theft detection in FPGA, in 2011
Third International Conference on Advanced Computing, Chennai (2011), pp. 353–358
2. A. Ahilan, P. Deepa, Improving lifetime of memory devices using evolutionary computing-based error
correction coding, in Computational Intelligence, Cyber Security and Computational Models (2016),
pp. 237–245
3. A. Ahilan, P. Deepa, Modified Decimal Matrix Codes in FPGA configuration memory for multiple
bit upsets, in 2015 International Conference on Computer Communication and Informatics (ICCCI)
(2015), pp. 1–5
4. A. Ahilan, P. Deepa, Design for built-in FPGA reliability via fine-grained 2-D error correction codes.
Microelectron. Reliab. 55(9–10), 2108–2112 (2015)
5. A. Appathurai, P. Deepa, Design for reliability: a novel counter matrix code for FPGA based quality
applications, in 6th Asia Symposium on Quality Electronic Design (ASQED) (2015), pp. 56–61
6. A. Baher, H. Porwal, W. Recker, Short term freeway traffic flow prediction using genetically opti-
mized time-delay-based neural networks, in Transportation Research Board 78th Annual Meeting,
Washington, DC (1999)
7. P.V.K. Borges, N. Conci, A. Cavallaro, Video-based human behavior understanding: a survey. IEEE
Trans. Circuits Syst. Video Technol. 23(11), 1993–2008 (2013)
8. H.-Y. Cheng, C.-C. Weng, Y.-Y. Chen, Vehicle detection in aerial surveillance using dynamic bayesian
networks. IEEE Trans. Image Process. 21(4), 2152–2159 (2012)
9. M. Cheon, W. Lee, C. Yoon, M. Park, Vision-based vehicle detection system with consideration of the
detecting location. IEEE Trans. Intell. Transp. Syst. 13(3), 1243–1252 (2012)
10. H. Chung-Lin, L. Wen-Chieh, A vision-based vehicle identification system, in Pattern Recognition,
2004. ICPR 2004. Proceedings of the 17th International Conference on, vol. 4 (2004), pp. 364–367
11. W.-C. Hu, C.-Y. Yang, D.-Y. Huang, Robust real-time ship detection and tracking for visual surveillance
of cage aquaculture. J. Vis. Commun. Image Represent. 22(6), 543–556 (2011)
12. W.-C. Hu, C.-H. Chen, T.-Y. Chen, D.-Y. Huang, Z.-C. Wu, Moving object detection and tracking from
video captured by moving camera. J. Vis. Commun. Image Represent. 30, 164–180 (2015)
13. X. Ji, Z. Wei, Y. Feng, Effective vehicle detection techniques for traffic surveillance systems. J. Vis.
Commun. Image Represent. 17(3), 647–658 (2006)
14. R.E. Kalman, A new approach to linear filtering and prediction problems. Trans. ASME J. Basic Eng.
82, 35–45 (1960)
15. N.K. Kanhere, S.T. Birchfield, Real-time incremental segmentation and tracking of vehicles at low
camera angles using stable features. IEEE Trans. Intell. Transp. Syst. 9, 148–160 (2008)
16. N.K. Kanhere, Vision-Based Detection, Tracking and Classification of Vehicles Using Stable Features
with Automatic Camera Calibration, ed, (2008), p. 105
17. D.S. Kushwaha, T. Kumar, An efficient approach for detection and speed estimation of moving vehicles.
J. Proc. Comput. Sci. 89, 726–731 (2016)
18. X. Li, Z.Q. Liu, K.M. Leung, Detection of vehicles from traffic scenes using fuzzy integrals. Pattern
Recogn. 35(4), 967–980 (2002)
19. F.-L. Lian, Y.-C. Lin, C.-T. Kuo, J.-H. Jean, Voting-based motion estimation for real-time video trans-
mission in networked mobile camera systems. IEEE Trans. Industr. Inf. 9(1), 172–180 (2013)
20. A. Lozano, G. Manfredi, L. Nieddu, An algorithm for the recognition of levels of congestion in road
traffic problems. Math. Comput. Simul. 79(6), 1926–1934 (2009)
21. Y. Mary Reeja, T. Latha, W. Rinisha, Detecting and tracking moving vehicles for traffic surveillance.
ARPN J. Eng. Appl. Sci. 10(4) (2015)
22. N. Messai, P.T. Thomas, D. Lefebvre, A.El. Moudni, Neural networks for local monitoring of traffic
magnetic sensors. Control Eng. Pract. 13(1), 67–80 (2005)
23. S. Movaghati, A. Moghaddamjoo, A. Tavakoli, Road extraction from satellite images using particle
filtering and extended Kalman filtering. IEEE Trans. Geosci. Remote Sens. 48(7), 2807–2817 (2010)
24. X. Niu, A semi-automatic framework for highway extraction and vehicle detection based on a geometric
deformable model. ISPRS J. Photogr. Remote Sens. 61(3–4), 170–186 (2006)
25. G. Prathiba, M. Santhi, A. Ahilan, Design and implementation of reliable flash ADC for microwave
applications. Microelectron. Reliab. 88–90, 91–97 (2018)
26. M. SaiSravana, S. Natarajan, E.S. Krishna, B.J. Kailath, Fast and accurate on-road vehicle detection
based on color intensity segregation. J. Proc. Comput. Sci. 133, 594–603 (2018)
27. J. Satheesh Kumar, G. Saravana Kumar, A. Ahilan, High performance decoding aware FPGA bit-stream
compression using RG codes. Cluster Comput. 1–5 (2018)
28. J.P. Shinora, K. Muralibabu, L. Agilandeeswari, An adaptive approach for validation in visual object
tracking. Proc. Comput. Sci. 58, 478–485 (2015)
29. B. Sivasankari, A. Ahilan, R. Jothin, A. Jasmine Gnana Malar, Reliable N sleep shuffled phase damping
design for ground bouncing noise mitigation. Microelectron. Reliab. 88–90, 1316–1321 (2018)
30. G. Somasundaram, R. Sivalingam, V. Morellas, N. Papanikolopoulos, Classification and counting of
composite objects in traffic scenes using global and local image analysis. IEEE Trans. Intell. Transp.
Syst. 14(1), 69–81 (2013)
31. D. Srinivasan, M.C. Choy, R.L. Cheu, Neural networks for real time traffic signal control. IEEE Trans.
Intell. Transp. Syst. 7(3), 261–272 (2006)
32. Z. Sun, G. Bebis, R. Miller, On-road vehicle detection using Gabor filters and support vector machines,
in Proceedings of the IEEE Conference Digital Signal Processing, vol. 2 (2002), pp. 1019–1022
33. B. Tian, Y. Li, B. Li, D. Wen, Rear-view vehicle detection and tracking by combining multiple parts
for complex urban surveillance. IEEE Trans. Intell. Transp. Syst. 15(2) (2014)
34. D. Tran, J. Yuan, D. Forsyth, Video event detection: from subvolume localization to spatiotemporal
path search. IEEE Trans. Pattern Anal. Mach. Intell. 36(2), 404–416 (2014)
35. L. Wang, F. Chen, H. Yin, Detecting and tracking vehicles in traffic by unmanned aerial vehicles. J.
Autom. Constr. 72, 294–308 (2016)
36. Z. Wei et al., Multilevel framework to detect and handle vehicle occlusion. IEEE Trans. Intell. Transp.
Syst. 9, 161–174 (2008)
37. W. Zhang, X.Z. Fang, X. Yang, Moving vehicles segmentation based on Bayesian framework for
Gaussian motion model. Pattern Recogn. Lett. 27(1), 956–967 (2006)
38. J. Zhou, D. Gao, D. Zhang, Moving vehicle detection for automatic traffic monitoring. IEEE Trans.
Veh. Technol. 56(1), 51–59 (2007)
39. X. Zhou, C. Yang, W. Yu, Moving object detection by detecting contiguous outliers in the low-rank
representation. IEEE Trans. Pattern Anal. Mach. Intell. 35(3), 597–610 (2013)
Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps
and institutional affiliations.
Affiliations
Revathi Sundarasekar
[email protected]
C. Raja
[email protected]
E. John Alex
[email protected]
C. Anna Palagan
[email protected]
A. Nithya
[email protected]
1 Infant Jesus College of Engineering, Tuticorin, Tamil Nadu, India
2 Anna University, Chennai, Tamil Nadu, India
3 Koneru Lakshmaiah Education Foundation, Vaddeswaram, A.P., India
4 CMR Institute of Technology, Hyderabad, Telangana, India
5 Malla Reddy Engineering College, Hyderabad, Telangana, India
6 Vaagdevi College of Engineering, Warangal, Telangana, India