Vision-Based Anti-UAV Detection and Tracking: Jie Zhao, Jingshu Zhang, Dongdong Li, Dong Wang
Abstract—Unmanned aerial vehicles (UAVs) have been widely used in various fields, and their invasion of security and privacy has aroused social concern. Several detection and tracking systems for UAVs have been introduced in recent years, but most of them are based on radio frequency, radar, and other media. We argue that the field of computer vision is mature enough to detect and track invading UAVs. Thus we propose a visible-light dataset called the Dalian University of Technology Anti-UAV dataset, DUT Anti-UAV for short. It contains a detection dataset with a total of 10,000 images and a tracking dataset with 20 videos that include short-term and long-term sequences. All frames and images are manually and precisely annotated. We use this dataset to train several existing detection algorithms and evaluate their performance. Several tracking methods are also tested on our tracking dataset. Furthermore, we propose a clear and simple tracking algorithm combined with detection that inherits the detector's high precision. Extensive experiments show that the tracking performance is improved considerably after fusing detection, thus providing a new attempt at UAV tracking using our dataset. The datasets and results are publicly available at: https://github.com/wangdongdut/DUT-Anti-UAV.

Index Terms—Anti-UAV, dataset, detection, tracking.

I. INTRODUCTION

[...] anti-UAV detection aims to address. UAVs often fuse with the complex background with much noise and interference. Occlusion also occurs and brings challenges to the tracking task. A series of methods, such as an improved YOLOv3 [15] and a low-rank and sparse matrix decomposition approach for classification [16], have been proposed to solve the aforementioned problems and achieve good results.

The main motivation of our work is to use existing state-of-the-art detection and tracking methods to effectively adapt to and address the anti-UAV task at the data level and the method level. First, deep learning-based methods require plenty of training data to obtain robust and accurate performance. Although several corresponding datasets have been proposed, such as Anti-UAV [17] and MAV-VID [18], they are still not enough to train a high-performance model. Therefore, to make full use of existing detection and tracking methods for the anti-UAV task at the data level, and to promote further development of this area, we propose a visible-light dataset for UAVs, including detection and tracking subsets. We also retrain several detection methods using our training set. Second, we attempt to further improve UAV tracking performance at the method level. To be specific, we propose a fusion strategy that combines trackers with detectors and thus inherits the detectors' high precision.
II. RELATED WORK

A. UAV detection and tracking

Besides, several corresponding algorithms [22], [23], [24] have been proposed to address these two tasks. UAV detection and tracking are mostly performed from an overlooking viewpoint, which provides a large view scope. However, it also brings new challenges, such as high object density, small objects, and complex backgrounds. To handle these properties, Yu et al. [22] consider contextual information using the Exchange Object Context Sampling (EOCS) method [25] in tracking to infer the relationships between the objects. To solve the problem of fast camera motion, Li et al. [23] optimize the camera motion model by projective transformation based on background feature points. Besides, Xing et al. [24] observe that the computing resources available on UAVs for real-time tracking are limited; they propose a lightweight Transformer layer and integrate it into pyramid networks, finally building a real-time CPU-based tracker.

The aforementioned algorithms perform well on existing UAV tracking benchmarks and promote the commercialization of aerial object tracking. UAV tracking is becoming more and more popular and draws increasing attention, which makes anti-UAV tracking essential as well.

B. Anti-UAV methodology

Safety issues derived from UAVs have elicited increasing concern in recent years. In particular, considering national security, many countries have invested much time and energy in researching and deploying quite mature anti-unmanned systems, not based on deep learning, in military bases. Universities and research institutions are continuously optimizing these anti-unmanned systems.

ADS-ZJU [26]. This system combines multiple surveillance technologies to realize drone detection, localization, and defense. It deploys three sensors to collect acoustic signals, video images, and RF signals. The information is then sent to the central processing unit to extract features for detection and localization. ADS-ZJU uses a short-time Fourier transform to extract spectrum features of the received acoustic signals, and histograms of oriented gradients to describe the image feature. It also takes advantage of the fact that the spectrum of the UAV's RF signal differs from that of the WiFi signal by using the distribution of RF signal strengths at different communication channels to describe the RF feature. After feature extraction, it utilizes a support vector machine (SVM) to conduct audio detection, video detection, and RF detection in parallel. After that, the location of UAVs can be estimated via hybrid measurements, including DOA and RSS, under the constraints of the specific geographical area from video images. The use of multiple surveillance technologies allows their respective advantages and disadvantages to complement one another, so the system achieves high accuracy. Meanwhile, it can conduct radio-frequency interference, which a simple vision-based system cannot do. However, the units of this system are scattered, so it covers a large area, and its high cost also makes it unsuitable for civilian use.

Dynamic coordinate tracing [27]. This study proposes a dual-axis rotary tracking mechanism, using a dual-axis tracing device, namely, two sets of step motors with a thermal imaging or full-color camera and a sensing module to measure the UAV's flight altitude. The device dynamically calculates the longitude and latitude coordinates in spherical coordinates. The thermal imaging and full-color cameras are optionally used under various weather conditions, making the system robust in different environments. This drone tracking device for anti-UAV systems is inexpensive and practical; however, its requirements for hardware facilities are still high.

C. UAV dataset

In addition to using other media to solve the problem of UAV detection, people have also begun to utilize deep learning-based object tracking algorithms for UAV tracking, owing to the rapid development of computer vision in recent years. In computer vision tasks, the dataset is an important factor in obtaining a model with strong robustness. Therefore, datasets for UAV detection and tracking have been proposed continually. Several relatively complete existing UAV datasets are described below.

MAV-VID [18]. This is a dataset published on Kaggle in which the UAV is the only detected object. It contains 64 videos (40,323 pictures in total), of which 53 are used for training and 11 for validation. In this dataset, the locations of the UAVs are relatively concentrated, and the differences between locations are mostly horizontal. The detected objects are small, with an average size of 0.66% of the entire picture. In contrast, in our dataset the distribution of UAVs is scattered, and the horizontal and vertical distributions are relatively more uniform, which makes the model trained on our dataset more robust.

Drone-vs-Bird Detection Challenge [28]. This dataset was proposed at the 16th IEEE International Conference on Advanced Video and Signal-based Surveillance (AVSS). As the name indicates, the prime characteristic of this dataset is that, in addition to UAVs, a number of birds appear in the pictures and cannot be ignored. The detector must successfully distinguish drones from birds, raising alerts for UAVs while not responding to the birds. However, the size, color, and even shape of the two may be similar, which brings challenges to the detection task. Different from the first version, this dataset adds land scenes in addition to sea scenes, which are shot by different cameras. Another characteristic of this dataset is that the size of the detected object is extremely small: according to statistical analysis, the average size of the detected UAVs is 34 × 23 pixels (0.1% of the image size). Seventy-seven videos provide nearly 10,000 images. In light of this situation, the significance of this dataset lies in improving algorithms on it, successfully reducing the high false positive rate, and further popularizing the methods to other fields if they are robust. Scenes in such datasets are mostly seaside with a wide visual field. Different from them, we collect data mostly in places with many buildings, which is more suitable for civilian use.

Anti-UAV [17]. This is a dataset labeled with visible and infrared dual-mode information, which consists of 318 fully labeled videos. One hundred and sixty of the videos are used as the training set, 91 as the testing set, and the rest as the validation set, with a total of 186,494 images.
Fig. 2. Aspect ratio and scale distribution of the DUT Anti-UAV dataset. Panels (a)-(c) show object aspect ratio histograms for the detection training, testing, and validation sets and (d) for the tracking set; panels (e)-(g) show object area ratio (scale) histograms for the same detection sets and (h) for the tracking set.
The UAVs in the dataset are divided into seven attributes, which systematically summarize several special circumstances that might appear in UAV detection tasks. The recorded videos contain two environments, namely, day and night. In the two environments, the two modalities play different roles in detection. From the perspective of location distribution, the range of motion in Anti-UAV is wide but mostly concentrated in the central area, and it has a smaller variance compared with the two other datasets and our dataset. This dataset focuses on solving the problem that vision-based detectors show poor performance at night, while our dataset aims to improve models' robustness by enriching diversity in multiple aspects, such as different UAV types, diverse scene information, various light conditions, and different weathers.

Isaac-Medina et al. [29] collect and integrate the aforementioned three UAV datasets (i.e., MAV-VID [18], Drone-vs-Bird [28], and Anti-UAV [17]), and present a benchmark performance study using four state-of-the-art object detection methods (Faster-RCNN [11], YOLOv3 [30], SSD [13], and DETR [31]) and three tracking methods (SORT [32], DeepSORT [33], and Tracktor [34]). Compared with this work, we propose a new dataset for both UAV detection and tracking tasks. Besides, our experiments are more extensive: we evaluate 14 different versions of detectors obtained by combining five types of detectors with three types of backbone networks, and we also present the tracking performance of eight different trackers on our dataset.

There is also a challenge [35] in the Anti-UAV community, which has been held twice so far. This challenge encourages novel and accurate methods for multi-scale object tracking, greatly promoting the development of this task. For example, SiamSTA [36], the winner of the 2nd Anti-UAV Challenge, proposes a spatial-temporal attention based Siamese tracker, which imposes spatial and temporal constraints on generating candidate proposals within local neighborhoods.

III. DUT ANTI-UAV BENCHMARK

To assist the development of UAV detection and tracking, we propose a UAV detection and tracking dataset, named DUT Anti-UAV. It contains detection and tracking subsets. The detection dataset is split into three sets, namely, training, testing, and validation sets. The tracking dataset contains 20 sequences whose targets are various UAVs; it is used to test the performance of UAV tracking algorithms.
TABLE I
ATTRIBUTE DETAILS OF THE DUT ANTI-UAV DATASET.

A. Dataset splitting

Our DUT Anti-UAV dataset contains detection and tracking subsets. The detection dataset is split into training, testing, and validation sets. The tracking dataset contains 20 short-term and long-term sequences. All frames and images are manually and precisely annotated. The detailed information of images and objects is shown in Table I. Specifically, the detection dataset contains 10,000 images in total, in which the training, testing, and validation sets have 5200, 2200, and 2600 images, respectively. In consideration of the situation that one image may contain more than one UAV, the total number of annotated objects (10,109) is slightly larger than the number of images.

B. Dataset characteristics

Compared with general object detection and tracking datasets (e.g., COCO [37], ILSVRC [38], LaSOT [39], OTB [40]), the most notable characteristic of the proposed dataset for UAV detection and tracking is that the proportion of small objects is larger. In addition, given that UAVs mostly fly outdoors, the background is usually complicated, which increases the difficulty of UAV detection and tracking tasks. We analyze the characteristics of the proposed dataset from the following aspects.

Fig. 3. Examples of different types of UAVs in our dataset.

Image resolution. The dataset contains images with various resolutions. For the detection dataset, the height and width of the largest image are 3744 and 5616, whereas the size of the smallest image is 160 × 240, a huge difference between them. The tracking dataset has two types of frames, with 1080 × 1920 and 720 × 1280 resolutions. Varied image resolutions can make models adapt to images of different sizes and avoid overfitting.

Object and background. To enrich the diversity of objects and prevent models from overfitting, we select more than 35 types of UAVs. Several examples can be seen in Fig. 3. The scene information in the dataset is also diverse. Given that UAVs mostly fly outdoors, the background of our dataset
is an outdoor environment, including the sky, dark clouds, jungles, high-rise buildings, residential buildings, farmland, and playgrounds. Besides, a variety of light conditions (such as day, night, dawn, and dusk) and different weathers (like sunny, cloudy, and snowy days) are also considered in our dataset. Various examples from the detection subset are shown in Fig. 4. The complicated backgrounds and obvious outdoor lighting changes in our dataset are crucial for training a robust and high-performing UAV detection model.

Object scale. The sizes of UAVs are often small, and the outdoor environment is broad; thus the proportion of small objects in our dataset is large. We calculate the object area ratio with respect to the full image and report the statistics and the histograms of the scale distribution in Table I and Fig. 2, respectively. For the detection dataset, including the training, testing, and validation sets, the average object area ratio is approximately 0.013, the smallest object area ratio is 1.9e-06, and the largest object accounts for 0.7 of the entire image. Most of the objects are small; the proportions of the objects' sizes in the entire image are mostly less than 0.05. For the tracking dataset, the scales of objects in the sequences change smoothly. The average object area ratio is 0.0031, the maximum ratio is 0.045, and the minimum ratio is 2.7e-04. Compared with objects in general detection and tracking datasets, small objects are much harder to detect and track, and are more prone to failures such as missed detections and tracking loss.

Object aspect ratio. Table I and Fig. 2 also show the object aspect ratio. The objects in our dataset have various aspect ratios, where the maximum is 6.67 and the minimum is 1.0. In one sequence, the same object can undergo a significant aspect ratio change. For example, the object aspect ratio in "video10" changes between 1.0 and 4.33. The aspect ratios of most objects are between 1.0 and 3.0.

Object position. Fig. 1 describes the position distribution of the objects' relative center locations in the form of scatter plots. Most of the objects are concentrated in the center of the image. The ranges of object motion in all sets vary, and the horizontal and vertical movements of objects are evenly distributed. For the tracking dataset, the bounding boxes of the object in one sequence are continuous. According to Fig. 1 (d), in addition to the central area of the image, objects also move frequently to the right and to the bottom-left of the image.
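For concreteness, the per-object quantities behind Table I, Fig. 1, and Fig. 2 (area ratio, aspect ratio, and relative center location) can be computed from a bounding-box annotation as in the short Python sketch below. This is only an illustration, not the released tooling of the dataset; the (x, y, w, h) pixel box format and the long-side-over-short-side aspect-ratio definition are assumptions consistent with the reported minimum aspect ratio of 1.0.

```python
def object_statistics(box, image_width, image_height):
    """Area ratio, aspect ratio, and relative center of one annotated UAV box.

    `box` is assumed to be (x, y, w, h) in pixels; this format is an assumption.
    """
    x, y, w, h = box
    # Fraction of the image covered by the object (e.g., about 0.013 on average
    # for the detection subset, and at most 0.7 for the largest object).
    area_ratio = (w * h) / float(image_width * image_height)
    # Aspect ratio defined as long side over short side, so it is always >= 1.0.
    aspect_ratio = max(w, h) / float(min(w, h))
    # Relative center location in [0, 1] x [0, 1], as used for the scatter plots.
    center = ((x + w / 2.0) / image_width, (y + h / 2.0) / image_height)
    return area_ratio, aspect_ratio, center


# Example: a 60x40 UAV in a 1920x1080 frame covers about 0.12% of the image.
print(object_statistics((950, 500, 60, 40), 1920, 1080))
```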
C. Dataset challenges

Through the analysis of the characteristics of the proposed dataset in the last subsection, we find that UAV detection and tracking encounter many difficulties and challenges. The main challenges are that the object is very small, the background is complex or similar to the object, and the light changes obviously. Object blur, fast motion, camera motion, and out-of-view cases are also prone to occur. Fig. 4 and Fig. 5 respectively show examples from the detection and tracking datasets that reflect the aforementioned challenges.

IV. EXPERIMENTS

A. Detection on DUT Anti-UAV dataset

We select several state-of-the-art detection methods. We use Faster-RCNN [11], Cascade-RCNN [41], and ATSS [42], which are two-stage methods, and YOLOX [43] and SSD [13],
which are one-stage methods. Two-stage models typically have higher accuracy, while one-stage models perform better in terms of speed. Descriptions of these algorithms are provided below.

Faster-RCNN [11]. This method makes several improvements to Fast-RCNN [44] by resolving the time-consuming issue of region proposals produced by selective search. Instead of selective search, the region proposal network (RPN) is proposed. This network has two branches, namely, classification and regression. Classification and regression are performed twice, so the precision of the method is high.

Cascade-RCNN [41]. It consists of a series of detectors with increasing Intersection over Union (IoU) thresholds. The detectors are trained stage by stage, and the output of a detector is the input of the next one, whose IoU threshold is higher (in other words, a detector with higher quality). This method guarantees a sufficient number of positive samples for every detector, thereby reducing the overfitting problem.

ATSS [42]. It claims that the essential difference between anchor-based and anchor-free detectors is the way of defining positive and negative training samples. It proposes an algorithm that can select positive and negative samples according to the statistical characteristics of the object.
YOLO [43]. The YOLO series is known for its extremely high speed and relatively high accuracy. With the development of object detection, it has integrated the most advanced technologies over rounds of iteration. After YOLOv5 reached peak performance, YOLOX [43] turned to focus on anchor-free detectors, advanced label assignment strategies, and end-to-end (NMS-free) detectors, which are major advances of recent years. After upgrading, it shows remarkable performance compared with YOLOv3 [30] on COCO (a detection dataset named Common Objects in Context) [37].

SSD [13]. It is also a one-stage detector. It combines several feature maps with different resolutions, thus improving the model's performance via multi-scale training. It detects objects of different sizes well, and only a single network is involved, making the model easy to train.

We replace these detectors' backbone networks with several classic backbone networks, including ResNet18 [45], ResNet50 [45], and VGG16 [46], and obtain 14 different versions of detection methods. The 14 detectors are all retrained on the training subset of the DUT Anti-UAV detection dataset. Moreover, we use mean average precision (mAP) and frames per second (FPS) to evaluate the methods' performance. The results are shown in Table II. Cascade-RCNN with ResNet50 performs the best, and YOLOX with ResNet18 is the fastest.

TABLE II
DETECTION RESULTS OF DIFFERENT COMBINATIONS OF MODELS AND BACKBONES. THE TOP RESULTS OF mAP AND FPS ARE MARKED IN RED.

Model         Backbone  mAP    FPS (tasks/s)
Faster-RCNN   ResNet50  0.653  12.8
              ResNet18  0.605  19.4
              VGG16     0.633  9.3
Cascade-RCNN  ResNet50  0.683  10.7
              ResNet18  0.652  14.7
              VGG16     0.667  8.0
ATSS          ResNet50  0.642  13.3
              ResNet18  0.61   20.5
              VGG16     0.641  9.5
YOLOX         ResNet50  0.427  21.7
              ResNet18  0.400  53.7
              VGG16     0.551  23.0
              DarkNet   0.552  51.3
SSD           VGG16     0.632  33.2

[Fig. 6: precision-recall curves of the 14 detector versions at IoU = 0.5 (a) and IoU = 0.75 (b).]

We also visualize the performance of the different detectors using P-R curves with different IoU thresholds, which are shown in Fig. 6. In a P-R curve, P means precision and R means recall. Typically, a negative correlation exists between them, and a curve drawn with R as the abscissa and P as the ordinate can effectively reflect the comprehensive performance of a detector. Moreover, we illustrate several qualitative results in Fig. 10. Faster-RCNN and Cascade-RCNN can obtain accurate bounding boxes and corresponding confidence scores, whereas YOLO often detects the background as the target mistakenly.
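To make the P-R evaluation concrete, the following minimal sketch shows how precision-recall points can be computed from scored detections for a single class. It is an illustrative simplification (greedy matching of detections to unmatched ground-truth boxes at a fixed IoU threshold), not the evaluation code behind the reported results; the (x1, y1, x2, y2) box format is an assumption.

```python
def iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0


def pr_points(detections, ground_truths, iou_thr=0.5):
    """Precision-recall points for single-class detections.

    detections: list of (image_id, score, box); ground_truths: {image_id: [box, ...]}.
    """
    n_gt = max(1, sum(len(v) for v in ground_truths.values()))
    matched = {img: [False] * len(v) for img, v in ground_truths.items()}
    tp = fp = 0
    points = []  # one (recall, precision) point per detection, highest score first
    for img, score, box in sorted(detections, key=lambda d: -d[1]):
        candidates = ground_truths.get(img, [])
        best_iou, best_idx = 0.0, -1
        for idx, gt in enumerate(candidates):
            overlap = iou(box, gt)
            if overlap > best_iou and not matched[img][idx]:
                best_iou, best_idx = overlap, idx
        if best_iou >= iou_thr:
            matched[img][best_idx] = True  # true positive: claim this ground truth
            tp += 1
        else:
            fp += 1                        # false positive: no free matching ground truth
        points.append((tp / n_gt, tp / (tp + fp)))
    return points
```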
B. Tracking on DUT Anti-UAV dataset

We select several existing state-of-the-art trackers and run them on our tracking dataset. The tracking performance is shown in the third column of Table III (the "noDET" column). We use three metrics to evaluate the tracking performance. First, the Success calculates the IoU between the target ground truth and the predicted bounding boxes; it reflects the accuracy of the size and scale of the predicted target bounding boxes. Second, the Precision measures the center location error by computing the distance in pixels between the ground truth and the tracking results. However, it is easily affected by the target size and image resolution. To solve this problem, the Norm Pre (normalized precision) is introduced, which ranks trackers using the Area Under the Curve (AUC) between 0 and 0.5.
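As a rough illustration of the first two metrics (a sketch assuming axis-aligned (x1, y1, x2, y2) boxes, not the benchmark's official evaluation toolkit), the per-frame quantities behind the Success and Precision plots can be computed as follows.

```python
import numpy as np


def overlap_and_center_error(gt, pred):
    """Per-frame IoU (for Success) and center location error in pixels (for Precision)."""
    ix1, iy1 = max(gt[0], pred[0]), max(gt[1], pred[1])
    ix2, iy2 = min(gt[2], pred[2]), min(gt[3], pred[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    iou = inter / (area(gt) + area(pred) - inter)
    gt_center = ((gt[0] + gt[2]) / 2.0, (gt[1] + gt[3]) / 2.0)
    pred_center = ((pred[0] + pred[2]) / 2.0, (pred[1] + pred[3]) / 2.0)
    center_error = ((gt_center[0] - pred_center[0]) ** 2 +
                    (gt_center[1] - pred_center[1]) ** 2) ** 0.5
    return iou, center_error


def success_curve(ious, thresholds=np.linspace(0.0, 1.0, 21)):
    """Fraction of frames whose overlap exceeds each threshold.

    The single Success number reported per tracker corresponds to the area
    under this curve.
    """
    ious = np.asarray(ious)
    return [float((ious > t).mean()) for t in thresholds]
```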
We select eight tracking algorithms in total. Their descriptions are provided below.

SiamFC [12]. This method is a classic generative tracking algorithm based on a fully convolutional Siamese network. It performs a cross-correlation operation between the template patch and the search region to locate the target. Besides, a multi-scale strategy is used to decide the scale of the target.
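As a rough sketch of the cross-correlation step that SiamFC-style trackers rely on (feature extraction omitted, shapes chosen purely for illustration, and not taken from any particular implementation), the template feature map can be slid over the search-region feature map with a single convolution:

```python
import torch
import torch.nn.functional as F

# Toy cross-correlation between a template feature map and a search-region
# feature map; in SiamFC both would come from the shared Siamese backbone.
template_feat = torch.randn(1, 256, 6, 6)    # (batch, channels, height, width)
search_feat = torch.randn(1, 256, 22, 22)

# Using the template as the convolution kernel yields a response map whose
# peak indicates the most likely target location inside the search region.
response = F.conv2d(search_feat, template_feat)   # shape: (1, 1, 17, 17)
peak = (response[0, 0] == response.max()).nonzero()
print(response.shape, peak)
```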
SiamRPN++ [47]. It introduces the region proposal network (RPN) into the Siamese network, and its backbone network can be very deep. The framework has two branches, including a classification branch to choose the best anchor and a regression branch to predict the offsets of the anchor. Compared with SiamFC, SiamRPN++ is more robust and faster because of the introduction of the RPN mechanism and the removal of the multi-scale strategy.

ECO [48]. This method is a classic tracking algorithm based on correlation filtering. It introduces a factorized convolution operator to reduce model parameters. A compact generative model of the training sample space is proposed to reduce the number of training samples while guaranteeing the diversity of the sample set. Besides, an efficient model update strategy is also proposed to improve the tracker's speed and robustness.

ATOM [49]. The model combines target classification and bounding box prediction. The former module is trained online to guarantee a strong discriminative capability. The latter module uses an IoU loss and predicts the overlap between the target and the predicted bounding box through offline training. This combination endows the tracker with high discriminative power and good regression capability.

DiMP [14]. Based on ATOM, this method introduces a discriminative learning loss to guide the network to learn more discriminative features. An efficient optimizer is also designed to accelerate the convergence of the network, which further improves the performance of the algorithm.

TransT [50]. TransT is a Transformer-based method. Owing to its attention-based feature fusion network, this method is able to extract abundant semantic feature maps and achieves state-of-the-art performance on most tracking benchmarks.

SPLT [51]. It is a long-term tracker mainly based on two modules, namely, the perusal module and the skimming module. The perusal module contains an efficient bounding box regressor to generate a series of target proposals, and a target verifier is used to select the best one based on confidence scores. The skimming module is used to judge the state of the target in the current frame and to select an appropriate search strategy (global search or local search). These designs improve the speed of the method, making it track in real time.

LTMU [52]. It is also a long-term tracker. The main contribution of the method is a meta-updater trained offline, which is used to judge whether the tracker needs to be updated in the current frame or not; it greatly improves the robustness of the tracker. Moreover, a long-term tracking framework is designed based on a SiamRPN-based re-detector, an online verifier, and an online local tracker with the proposed meta-updater. This method shows strong discriminative capability and robustness on both long-term and short-term tracking benchmarks.

We find that LTMU performs the best on our tracking dataset, where the Success is 0.608 and the Norm Pre is 0.783. TransT, DiMP, and ATOM also exhibit good performance, with 0.586, 0.578, and 0.574 in terms of Success, respectively. The performance of SiamFC is the worst, with a Success of 0.381 and a Precision of 0.623.

C. Tracking with detection

To improve the tracking performance further and to make full use of our dataset, including the detection and tracking sets, we propose a clear and simple tracking algorithm combined with detection. The fusion strategy is shown in Fig. 7 and Algorithm 1. Given a tracker T and a detector D, we first initialize the tracker T based on the ground truth GT_0 of the first frame. For each subsequent frame, we obtain the bounding box bbox_t and its confidence score score_t from the tracker. If score_t is less than τ_t, we regard it as an unreliable result and introduce the detection mechanism. Next, the detector obtains bounding boxes bboxes_d and their confidence scores scores_d. If the highest score score_d is higher than both τ_d and score_t, we set the corresponding detected bounding box bbox_d as the current result; otherwise, bbox_t is the final result. In this paper, the hyper-parameters τ_t and τ_d are both set to 0.9. To investigate the effects of different parameter values on our fusion method, we vary the values of τ_t and τ_d, respectively. Fig. 8 shows that our tracker is robust to these two parameters; that is, large changes in τ_t and τ_d only cause slight fluctuations in the tracking results (less than 1%).

Fig. 7. Framework of the proposed fusion strategy of tracking and detection.
Fig. 8. Effects of different parameter values for our fusion method.

On the basis of the proposed fusion strategy, we try different combinations of a series of trackers and detectors. Specifically, we select the eight aforementioned trackers (SiamFC, ECO, SPLT, ATOM, SiamRPN++, DiMP, TransT, and LTMU) and five detectors with different types of backbone networks (14 different versions in total). The detailed tracking results are shown in Table III. The success and precision plots of each tracker are shown in Fig. 9. After fusing detection, the tracking performance of all trackers is improved significantly. For instance, compared with the baseline tracker SiamFC, the fused method SiamFC+Faster-RCNN(VGG16) improves by 23.4% in terms of Success. The best-performing tracker LTMU also improves its performance further after fusing the detector Faster-RCNN(VGG16). The degree of tracking performance improvement depends on the detection algorithm. For most trackers, Faster-RCNN is a better fusion choice, especially its VGG16 version. On the contrary, ATSS hardly provides additional performance benefits to the trackers. Aside from Faster-RCNN, Cascade-RCNN can also enhance the tracking performance. Fig. 11 shows the qualitative comparison of the original trackers and our fusion methods, where the model Faster-RCNN-VGG16 is chosen as the fused detector. After fusing detection, the trackers perform better in most scenarios. Among them, LTMU-DET performs the best and can handle most challenges.

Algorithm 1 Fusion strategy of trackers and detectors
Input: T, D, I_{0,...,N-1}, GT_0
Output: R_{0,...,N-1}
 1: T.init(I_0, GT_0);
 2: R_0 = GT_0;
 3: for each t ∈ [1, N-1] do
 4:   bbox_t, score_t = T.track(I_t);
 5:   if score_t < τ_t then
 6:     bboxes_d, scores_d = D.detect(I_t);
 7:     if length(bboxes_d) > 0 then
 8:       score_d = Max(scores_d);
 9:       collect the corresponding bbox_d from bboxes_d;
10:       if score_d > τ_d and score_d > score_t then
11:         R_t = bbox_d;
12:       else
13:         R_t = bbox_t;
14:       end if
15:     end if
16:   end if
17: end for
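For readers who prefer code, the sketch below expresses Algorithm 1 in Python. The `tracker` and `detector` objects are hypothetical stand-ins for whichever fused pair is being evaluated, and the default of keeping the tracker's box whenever the detection branch is skipped follows the description above.

```python
def fuse_track(tracker, detector, frames, gt0, tau_t=0.9, tau_d=0.9):
    """Tracker-detector fusion following Algorithm 1 (illustrative sketch).

    `tracker.init/track` and `detector.detect` are assumed interfaces, not a
    specific library API.
    """
    tracker.init(frames[0], gt0)
    results = [gt0]
    for frame in frames[1:]:
        bbox_t, score_t = tracker.track(frame)
        result = bbox_t                      # default: trust the tracker's box
        if score_t < tau_t:                  # tracker result looks unreliable
            boxes_d, scores_d = detector.detect(frame)
            if len(boxes_d) > 0:
                best = max(range(len(scores_d)), key=lambda i: scores_d[i])
                score_d, bbox_d = scores_d[best], boxes_d[best]
                # Switch to the detection only if it is confident and beats the tracker.
                if score_d > tau_d and score_d > score_t:
                    result = bbox_d
        results.append(result)
    return results
```

Reproducing the parameter study of Fig. 8 then amounts to calling this function with different values of tau_t and tau_d (for example, from 0.5 to 0.95) and recomputing the Success score.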
Fig. 9. Success and precision plots of trackers on the DUT Anti-UAV dataset.
Fig. 10. Qualitative comparison of detection results. The rows, from first to last, show the detection results of Faster-RCNN-ResNet50, Cascade-RCNN-ResNet50, ATSS-ResNet50, SSD-VGG16, and YOLOX-DarkNet, including the target bounding boxes and the corresponding confidence scores. Better viewed with zoom-in.
Fig. 11. Qualitative comparison of tracking results. "noDET" means pure tracking results without fusing detection, and we choose the model Faster-RCNN-VGG16 for the "DET" results. Better viewed in color with zoom-in.
TABLE III
TRACKING RESULTS OF DIFFERENT COMBINATIONS OF TRACKERS AND DETECTORS. THE TOP TRACKING RESULTS OF EACH TRACKER ARE MARKED IN BLUE, AND THE BEST RESULTS OF ALL COMBINATIONS ARE MARKED IN RED.
V. CONCLUSION

In this paper, we propose the DUT Anti-UAV dataset for UAV detection and tracking. It contains two sets, namely, detection and tracking. The former has 10,109 objects from 10,000 images, which are split into three subsets (training, testing, and validation). The latter contains 20 sequences whose average length is 1,240 frames. All images and frames are annotated manually and precisely. We set up 14 different versions of detectors from the combination of 5 types of detection algorithms and 3 types of backbone networks. These methods are retrained using our detection training set and evaluated on our detection testing set. Besides, we present the tracking results of the 8 trackers on our tracking dataset. To improve the tracking performance further and make full use of our detection and tracking datasets, we propose a simple and clear fusion strategy combining trackers and detectors, and evaluate the tracking results of the combinations of 8 trackers and 14 detectors. Extensive experiments show that our fusion strategy can improve the tracking performance of all trackers significantly.

REFERENCES
[1] J. P. Škrinjar, P. Škorput, and M. Furdić, "Application of unmanned aerial vehicles in logistic processes," in International Conference "New Technologies, Development and Applications", 2018, pp. 359–366.
[2] Y. Xu, G. Yu, Y. Wang, X. Wu, and Y. Ma, "Car detection from low-altitude UAV imagery with the faster R-CNN," Journal of Advanced Transportation, vol. 2017, pp. 1–10, 2017.
[3] H. Cheng, L. Lin, Z. Zheng, Y. Guan, and Z. Liu, "An autonomous vision-based target tracking system for rotorcraft unmanned aerial vehicles," in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2017, pp. 1732–1738.
[4] M. Lort, A. Aguasca, C. Lopez-Martinez, and T. M. Marín, "Initial evaluation of SAR capabilities in UAV multicopter platforms," Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 11, no. 1, pp. 127–140, 2017.
[5] F. Hoffmann, M. Ritchie, F. Fioranelli, A. Charlish, and H. Griffiths, "Micro-Doppler based detection and tracking of UAVs with multistatic radar," in IEEE Radar Conference, 2016, pp. 1–6.
[6] A. H. Abunada, A. Y. Osman, A. Khandakar, M. E. H. Chowdhury, T. Khattab, and F. Touati, "Design and implementation of a RF based anti-drone system," in IEEE International Conference on Informatics, IoT, and Enabling Technologies, 2020, pp. 35–42.
[7] X. Chang, C. Yang, J. Wu, X. Shi, and Z. Shi, "A surveillance system for drone localization and tracking using acoustic arrays," in IEEE 10th Sensor Array and Multichannel Signal Processing Workshop, 2018, pp. 573–577.
[8] G. Gao, Y. Yu, M. Yang, H. Chang, P. Huang, and D. Yue, "Cross-resolution face recognition with pose variations via multilayer locality-constrained structural orthogonal procrustes regression," Information Sciences, vol. 506, pp. 19–36, 2020.
[9] G. Gao, Y. Yu, J. Xie, J. Yang, M. Yang, and J. Zhang, "Constructing multilayer locality-constrained matrix regression framework for noise robust face super-resolution," Pattern Recognition, vol. 110, p. 107539, 2021.
[10] G. Gao, Y. Yu, J. Yang, G.-J. Qi, and M. Yang, "Hierarchical deep CNN feature set-based representation learning for robust cross-resolution face recognition," IEEE Transactions on Circuits and Systems for Video Technology, 2020.
[11] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," Advances in Neural Information Processing Systems, vol. 28, 2015.
[12] L. Bertinetto, J. Valmadre, J. F. Henriques, A. Vedaldi, and P. H. Torr, "Fully-convolutional siamese networks for object tracking," in European Conference on Computer Vision, 2016, pp. 850–865.
[13] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, "SSD: Single shot multibox detector," in European Conference on Computer Vision, 2016, pp. 21–37.
[14] G. Bhat, M. Danelljan, L. V. Gool, and R. Timofte, "Learning discriminative model prediction for tracking," in IEEE International Conference on Computer Vision, 2019, pp. 6182–6191.
[15] Y. Hu, X. Wu, G. Zheng, and X. Liu, "Object detection of UAV for anti-UAV based on improved YOLOv3," in Chinese Control Conference, 2019, pp. 8386–8390.
[16] C. Wang, T. Wang, E. Wang, E. Sun, and Z. Luo, "Flying small target detection for anti-UAV based on a Gaussian mixture model in a compressive sensing domain," Sensors, vol. 19, no. 9, p. 2168, 2019.
[17] N. Jiang, K. Wang, X. Peng, X. Yu, Q. Wang, J. Xing, G. Li, J. Zhao, G. Guo, and Z. Han, "Anti-UAV: A large multi-modal benchmark for UAV tracking," arXiv preprint arXiv:2101.08466, 2021.
[18] A. Rodriguez-Ramos, J. Rodriguez-Vazquez, C. Sampedro, and P. Campoy, "Adaptive inattentional framework for video object detection with reward-conditional training," IEEE Access, vol. 8, pp. 124451–124466, 2020.
[19] M. Mueller, N. Smith, and B. Ghanem, "A benchmark and simulator for UAV tracking," in European Conference on Computer Vision, 2016, pp. 445–461.
[20] I. Kalra, M. Singh, S. Nagpal, R. Singh, M. Vatsa, and P. Sujit, "DroneSURF: Benchmark dataset for drone-based face recognition," in IEEE International Conference on Automatic Face & Gesture Recognition, 2019, pp. 1–7.
[21] M.-R. Hsieh, Y.-L. Lin, and W. H. Hsu, "Drone-based object counting by spatially regularized regional proposal network," in IEEE International Conference on Computer Vision, 2017, pp. 4145–4153.
[22] H. Yu, G. Li, W. Zhang, Q. Huang, D. Du, Q. Tian, and N. Sebe, "The unmanned aerial vehicle benchmark: Object detection, tracking and baseline," International Journal of Computer Vision, vol. 128, no. 5, pp. 1141–1159, 2020.
[23] S. Li and D.-Y. Yeung, "Visual object tracking for unmanned aerial vehicles: A benchmark and new motion models," in AAAI Conference on Artificial Intelligence, 2017.
[24] D. Xing, N. Evangeliou, A. Tsoukalas, and A. Tzes, "Siamese transformer pyramid networks for real-time UAV tracking," arXiv preprint arXiv:2110.08822, 2021.
[25] H. Yu, L. Qin, Q. Huang, and H. Yao, "Online multiple object tracking via exchanging object context," Neurocomputing, vol. 292, pp. 28–37, 2018.
[26] X. Shi, C. Yang, W. Xie, C. Liang, Z. Shi, and J. Chen, "Anti-drone system with multiple surveillance technologies: Architecture, implementation, and challenges," IEEE Communications Magazine, vol. 56, no. 4, pp. 68–74, 2018.
[27] B.-H. Sheu, C.-C. Chiu, W.-T. Lu, C.-I. Huang, and W.-P. Chen, "Development of UAV tracing and coordinate detection method using a dual-axis rotary platform for an anti-UAV system," Applied Sciences, vol. 9, no. 13, p. 2583, 2019.
[28] A. Coluccia, A. Fascista, A. Schumann, L. Sommer, M. Ghenescu, T. Piatrik, G. De Cubber, M. Nalamati, A. Kapoor, M. Saqib et al., "Drone-vs-bird detection challenge at IEEE AVSS2019," in IEEE International Conference on Advanced Video and Signal Based Surveillance, 2019, pp. 1–7.
[29] B. K. Isaac-Medina, M. Poyser, D. Organisciak, C. G. Willcocks, T. P. Breckon, and H. P. Shum, "Unmanned aerial vehicle visual detection and tracking using deep neural networks: A performance benchmark," in IEEE International Conference on Computer Vision, 2021, pp. 1223–1232.
[30] J. Redmon and A. Farhadi, "YOLOv3: An incremental improvement," arXiv preprint arXiv:1804.02767, 2018.
[31] N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko, "End-to-end object detection with transformers," in European Conference on Computer Vision, 2020, pp. 213–229.
[32] A. Bewley, Z. Ge, L. Ott, F. Ramos, and B. Upcroft, "Simple online and realtime tracking," in IEEE International Conference on Image Processing, 2016, pp. 3464–3468.
[33] N. Wojke, A. Bewley, and D. Paulus, "Simple online and realtime tracking with a deep association metric," in IEEE International Conference on Image Processing, 2017, pp. 3645–3649.
[34] P. Bergmann, T. Meinhardt, and L. Leal-Taixe, "Tracking without bells and whistles," in IEEE International Conference on Computer Vision, 2019, pp. 941–951.
[35] J. Zhao, G. Wang, J. Li, L. Jin, N. Fan, M. Wang, X. Wang, T. Yong, Y. Deng, Y. Guo et al., "The 2nd Anti-UAV workshop & challenge: Methods and results," arXiv preprint arXiv:2108.09909, 2021.
[36] B. Huang, J. Chen, T. Xu, Y. Wang, S. Jiang, Y. Wang, L. Wang, and J. Li, "SiamSTA: Spatio-temporal attention based siamese tracker for tracking UAVs," in IEEE International Conference on Computer Vision, 2021, pp. 1204–1212.
[37] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, "Microsoft COCO: Common objects in context," in European Conference on Computer Vision, 2014, pp. 740–755.
[38] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein et al., "ImageNet large scale visual recognition challenge," International Journal of Computer Vision, vol. 115, no. 3, pp. 211–252, 2015.
[39] H. Fan, L. Lin, F. Yang, P. Chu, G. Deng, S. Yu, H. Bai, Y. Xu, C. Liao, and H. Ling, "LaSOT: A high-quality benchmark for large-scale single object tracking," in IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 5374–5383.
[40] Y. Wu, J. Lim, and M.-H. Yang, "Object tracking benchmark," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 9, pp. 1834–1848, 2015.
[41] Z. Cai and N. Vasconcelos, "Cascade R-CNN: Delving into high quality object detection," in IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 6154–6162.
[42] S. Zhang, C. Chi, Y. Yao, Z. Lei, and S. Z. Li, "Bridging the gap between anchor-based and anchor-free detection via adaptive training sample selection," in IEEE Conference on Computer Vision and Pattern Recognition, 2020, pp. 9759–9768.
[43] Z. Ge, S. Liu, F. Wang, Z. Li, and J. Sun, "YOLOX: Exceeding YOLO series in 2021," arXiv preprint arXiv:2107.08430, 2021.
[44] R. Girshick, "Fast R-CNN," in IEEE International Conference on Computer Vision, 2015, pp. 1440–1448.
[45] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.
[46] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
[47] B. Li, W. Wu, Q. Wang, F. Zhang, J. Xing, and J. Yan, "SiamRPN++: Evolution of siamese visual tracking with very deep networks," in IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 4282–4291.
[48] M. Danelljan, G. Bhat, F. Shahbaz Khan, and M. Felsberg, "ECO: Efficient convolution operators for tracking," in IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 6638–6646.
[49] M. Danelljan, G. Bhat, F. S. Khan, and M. Felsberg, "ATOM: Accurate tracking by overlap maximization," in IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 4660–4669.
[50] X. Chen, B. Yan, J. Zhu, D. Wang, X. Yang, and H. Lu, "Transformer tracking," in IEEE Conference on Computer Vision and Pattern Recognition, 2021, pp. 8126–8135.
[51] B. Yan, H. Zhao, D. Wang, H. Lu, and X. Yang, "'Skimming-perusal' tracking: A framework for real-time and robust long-term tracking," in IEEE International Conference on Computer Vision, 2019, pp. 2385–2393.
[52] K. Dai, Y. Zhang, D. Wang, J. Li, H. Lu, and X. Yang, "High-performance long-term tracking with meta-updater," in IEEE Conference on Computer Vision and Pattern Recognition, 2020, pp. 6298–6307.