
ICACI-2022

An FPGA Based Adaptive Real-Time Quality Enhancement System for Drone Imagery

1Y. Vedavyas, 2*S. Vasavi, 3S. Sri Harsha, 4M. Sai Subhash
1,2,3,4VR Siddhartha Engineering College, Andhra Pradesh, India
1[email protected], 2[email protected], 3[email protected], 4[email protected]

Abstract. Drones have a wide variety of applications and are extensively used for intelligence, surveillance, and reconnaissance. Low-resolution video suffers from challenges such as blurriness and over-exposure to sunlight, which must be addressed for effective operations. There is a requirement for onboard enhancement of the video stream, which can be achieved using a field-programmable gate array (FPGA). This paper presents the development of a real-time, adaptive quality enhancement system for drone imagery. The enhancement system includes dynamic scene deblurring for the frames extracted from the video, Enhanced Super-Resolution Generative Adversarial Networks (ESRGAN) for enhancement, Contrast-Limited Adaptive Histogram Equalization (CLAHE) for enhancing the local contrast of an image, sharpening for highlighting the edges of objects present in the image, gamma correction for controlling the brightness of the image, and saturation adjustment. This system has the advantage that computation can be done at the edge and the result transmitted to the receiver in real time. Performance metrics such as Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Peak Signal-to-Noise Ratio (PSNR), Mean Absolute Error (MAE), Signal-to-Noise Ratio (SNR), Image Quality Index (IQI), Similarity Index (SI), and Pearson Correlation Coefficient (r) are used to measure the performance of the proposed system. The paper concludes that CLAHE is the best enhancement technique for quality enhancement in foggy/smoky environments and that ESRGAN is suitable for enhancing the smaller objects present in drone video.

Keywords: Dynamic Scene Deblurring, Drone Video, Quality Enhancement, FPGA, Smoothing, ESRGAN.

1 Introduction

In the present world of technological advancement, surveillance and remote monitoring have become crucial not only for government organizations but also for corporations, for reasons such as research, protection, and security. Different surveillance scenarios are implemented for different purposes. For effective surveillance or monitoring in real time, it is essential that the generated video stream be of good quality. Video enhancement comes into the picture in scenarios with low-quality video caused, for example, by a low-quality camera, a harsh environment around the camera, or noise generated in the video. In such situations, the user has no option but to enhance the video in order to perform their tasks effectively. Drones are being widely adopted by researchers and government authorities and can be an effective platform for real-time video enhancement. However, the video stream generated by a drone may be noisy or blurred, which degrades the video feed. Hence the need to enhance the quality of drone video in real time and deliver it to the user, so that the user receives an enhanced feed and can make accurate analyses from it.
1.1 Motivation and Problem Statement
There are various real-world needs for cost-effective real-time processing. For video enhancement in particular, demand is now higher than ever and is expected to keep increasing. Remote monitoring and surveillance are among the major applications that require enhancement techniques for performing high-quality research and making accurate analyses. Drones are a perfect fit for remote monitoring, but the video feed may get distorted due to a variety of conditions. This creates the need to enhance the video stream generated by the drone in real time to meet the requirements of end users. Integrating an FPGA with the drone is one of the best and most cost-effective solutions for such scenarios: configuring an FPGA on the drone provides remote computational power and makes real-time video enhancement a reality.

1.2 Objectives

 Achieve high-quality video with the use of an FPGA.
 Apply dynamic scene deblurring to each frame.
 Enhance each frame with ESRGAN.
 Apply CLAHE and colour adjustment to each frame.
 Display the enhanced video on the ground station user terminal.

This paper is organized as follows. Section 2 summarizes quality enhancement methods for drone-based video. The proposed system is explained in Section 3. Experimental results, with a discussion on the performance of the proposed system, are described in Section 4.

2 Literature Survey

In [1], the authors reviewed how deep learning can be used for UAV remote sensing, focusing on classification and regression techniques applicable to UAVs. In [2], the team focused on implementing a real-time motion detection System-on-Chip (SoC) with the help of an FPGA, to be deployed in a UAV. [3] presented an approximate FPGA adder design methodology with improved size, weight, and power (SWaP) performance. It focused on an FPGA-based architecture that uses lookup tables to introduce approximations while splitting the carry chain into LUT sub-adders. The functionality of the proposed blocks was verified by implementing airborne self-localization and moving-object tracking algorithms and comparing against existing adders; the results show that the proposed system consumes at least 9.9% less power than the existing adders. [4] evaluated the opportunities arising from offloading to FPGAs that act as the network edge to accelerate VNF execution, and discussed the challenges of the approach. In [5], object detection and recognition algorithms were implemented on an FPGA, using the scale-invariant feature transform (SIFT) to classify objects into three categories: bags, boxes, and books. The ZedBoard FPGA development board was used to implement the classification algorithm, with Linux configured on the FPGA to run it. An image is sent as input through the HDMI port, and the classification of the object in the image is given as text output on the UART port. The model in [6] implemented image enhancement on an FPGA using histogram equalization and adaptive histogram equalization. Initially, the RGB image is converted to HSI space; the luminance component (I) of the HSI space is processed with a grey-level histogram and then converted back to RGB space. The video stream is taken from an OV7725 digital image sensor and passed to a conversion module that converts RGB to HSI space and stores the result in a data buffer; the image processing module accesses the I component from the buffer, enhances it, and writes it back, after which the HSI space is converted back to RGB and displayed on a VGA display. [7] developed an FPGA-based framework named SCYLLA. The framework enables Quality-of-Experience-aware continuous mobile vision with dynamic reconfiguration, dynamically applying the optimal software-hardware configuration for optimal performance on parallel tasks. The framework provides better flexibility and achieves better trade-offs than traditional CPU solutions. [8] used a deep learning model for super-resolution, trained with the DIV2K dataset; using the model, low-resolution images can be enhanced to high resolution. The team wanted to enhance the quality of the images produced by the drone so that detection models would have a better dataset. The source drone images are sliced to 512 x 512 dimensions and enhanced to high-resolution images, with performance evaluated using SSIM, PSNR, and MSE scores. The importance of FPGAs for drones is discussed in [9]: the components used in drone design must be lightweight, small, power-efficient, high-performance, secure, and reliable. Flash-based and silicon-oxide-nitride-oxide-silicon (SONOS) based FPGAs can address these challenges, as they consume 50 percent less power than SRAM-based FPGAs, reduce the weight of the drone, and remove the need for a cooling system. They can also contain embedded security crypto-processors, which make the FPGA more secure and the embedded system resistant to cloning and reverse engineering. A review of drone usage in various security applications is given in [23]. FPGA-based object detection is described in [24], and image dehazing algorithms on FPGAs are described in [25]. Table 1 presents various image processing works and their categories.
Table 1: Various categories for image processing.

Category | Description | Earlier works
Image processing | Uses traditional computer vision techniques and only the input RGB image | [27-29]
Machine learning | Additionally uses machine learning techniques to exploit the hidden regularities in relevant image datasets | [30-31]
Deep learning | Uses deep neural networks with powerful representation capability to learn relevant mapping functions | [32-33]
FPGA | Uses an FPGA to implement dehazing for a high-quality real-time vision system | [25]
Proposed system | Uses an FPGA for onboard processing of drone video in real time for quality enhancement | -

3 Proposed System

3.1 Proposed Architecture


The proposed architecture integrates an FPGA with the drone. The drone camera is attached to the FPGA, and the video stream produced by the camera undergoes frame-by-frame enhancement from low resolution to high resolution. Fig. 1 shows the proposed system architecture.


Fig. 1. Proposed System Architecture


3.2 Methodology
1. Extract the individual frames from the input drone video.
2. Onboard processing for video enhancement (a minimal code sketch of steps (iii)-(v) follows this list):
(i) Remove the blur present in the source frame using dynamic scene deblurring with parameter-selective sharing and nested skip connections.
(ii) Enhance the frame with a pre-trained ESRGAN model.
(iii) Apply CLAHE to the frame received from ESRGAN; it adjusts the overall image contrast by updating the pixel intensity distribution of the image's histogram.
(iv) Apply gamma correction to enhance the brightness of the frame.
(v) Apply saturation adjustment to the frame from the previous step to bring out colourfulness.
3. Transmit the final enhanced video to the ground station using an FPV transmitter on the drone and a receiver at the ground control station. The transmitter is attached to the FPGA, which sends the enhanced frames captured from the drone camera. The transmitter transmits the enhanced video at the specified frequency of 5 GHz.
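The following is a minimal Python/OpenCV sketch of steps 2(iii)-(v). The deblurring and ESRGAN stages are omitted here because they depend on externally trained models; the function name, CLAHE clip limit, and saturation scale are illustrative assumptions rather than the exact onboard implementation. The initial gamma value of 0.5 follows the setting mentioned in Section 4.

```python
# Sketch of steps 2(iii)-(v): CLAHE, gamma correction, saturation adjustment.
# Parameter values (clip_limit, sat_scale) are illustrative assumptions.
import cv2
import numpy as np

def enhance_frame(frame, clip_limit=2.0, gamma=0.5, sat_scale=1.2):
    # (iii) CLAHE on the luminance channel in LAB colour space
    lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(8, 8))
    l = clahe.apply(l)
    frame = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)

    # (iv) Gamma correction via a 256-entry lookup table
    table = np.array([(i / 255.0) ** gamma * 255
                      for i in range(256)]).astype("uint8")
    frame = cv2.LUT(frame, table)

    # (v) Saturation adjustment: scale the S channel in HSV space
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] = np.clip(hsv[..., 1] * sat_scale, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```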
3.3 Dataset
Samples from the UAV and UG2 datasets are used extensively [22]. The UAV dataset is a collection of videos taken from different types of unmanned aerial vehicles; it consists of 50 videos with 70,250 frames at a frame rate of 30 fps. The UG2 dataset is a collection of videos taken from unmanned aerial vehicles, ground cameras, and gliders; it was proposed in 2018 at the IEEE Winter Conference on Applications of Computer Vision and consists of 684 videos at a 30 fps frame rate. The system is implemented with Verilog, TensorFlow, Python, and OpenCV, on an FPGA (ZYBO Z7-10 ZYNQ-7000) and a drone.

4 Implementation

This section presents the outputs obtained from the FPGA and the performance analysis made on the retrieved data. The proposed system is tested on video samples from the UG2 dataset that contain blurriness and noise. Initially, Linux is configured on the FPGA (ZYBO Z7-10 ZYNQ-7000) using Verilog. The model starts execution by taking the drone video as input; key frames are extracted from the video and given as input to a series of operations, starting with dynamic scene deblurring, where noise, blur, and various other distortions are removed, followed by ESRGAN, where the small objects in the video are enhanced, and finally CLAHE, gamma correction, and saturation adjustment, respectively. Fog and noise are removed from the frames and the contrast is adjusted using CLAHE, while the brightness and colours are enhanced using gamma correction and saturation adjustment. The individual frames are then combined to form the final enhanced video, which is transmitted to the ground station in real time. These operations are performed in a continuous loop as long as drone video is given as input (a minimal sketch of this loop follows).
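Below is a minimal sketch of the continuous frame extraction and recombination loop, assuming the enhance_frame() helper sketched in Section 3.2. The file names and codec are illustrative assumptions; the deblurring and ESRGAN stages would precede the call shown.

```python
# Sketch: read frames from the drone video, enhance each one, and write
# the recombined video. File names and codec are illustrative.
import cv2

cap = cv2.VideoCapture("drone_input.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 30          # UG2/UAV videos are 30 fps
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("enhanced.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

while True:
    ok, frame = cap.read()
    if not ok:
        break                                   # loop runs while video is supplied
    out.write(enhance_frame(frame))             # deblur/ESRGAN stages would precede this

cap.release()
out.release()
```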
The performance metrics examined for the purposes of this study were Mean Square Error (MSE), Root Mean Square Error (RMSE), Peak Signal-to-Noise Ratio (PSNR), Mean Absolute Error (MAE), Signal-to-Noise Ratio (SNR), Similarity Index (SI), and the Pearson Correlation Coefficient (r). Table 2 presents the MSE and PSNR values for the frames extracted from the drone video.
Table 2: MSE and PSNR values of the frames.

Frame # | Dynamic Scene Deblurring (MSE / PSNR) | ESRGAN (MSE / PSNR) | CLAHE (MSE / PSNR) | Gamma Correction (MSE / PSNR) | Saturation Adjustment (MSE / PSNR)
1 | 16.8 / 35.8 | 84.8 / 28.8 | 94.3 / 28.3 | 27.8 / 13.6 | 59.6 / 30.3
2 | 17.6 / 35.6 | 85.1 / 28.8 | 94.3 / 28.3 | 27.8 / 13.6 | 59.7 / 30.3
3 | 15.7 / 36.1 | 84.6 / 28.8 | 94.0 / 28.3 | 28.1 / 13.6 | 60.2 / 30.3
4 | 15.8 / 36.1 | 84.3 / 28.9 | 94.6 / 28.3 | 26.9 / 13.8 | 60.4 / 30.3
5 | 16.1 / 36.0 | 84.3 / 28.8 | 93.9 / 28.4 | 25.5 / 14.0 | 60.3 / 30.3
6 | 18.5 / 35.4 | 84.8 / 28.8 | 94.1 / 28.3 | 24.8 / 14.0 | 60.1 / 30.3
7 | 17.2 / 35.7 | 85.1 / 28.8 | 94.2 / 28.3 | 24.4 / 14.3 | 60.4 / 30.3
8 | 16.9 / 35.8 | 84.9 / 28.8 | 93.2 / 28.4 | 24.5 / 14.2 | 60.2 / 30.3
9 | 16.8 / 35.8 | 84.9 / 28.8 | 93.8 / 28.4 | 24.5 / 14.2 | 60.5 / 30.3
10 | 18.5 / 35.4 | 85.4 / 28.8 | 93.6 / 28.4 | 24.1 / 14.2 | 60.4 / 30.3
11 | 17.6 / 35.6 | 85.7 / 28.8 | 93.8 / 28.4 | 23.0 / 14.9 | 60.6 / 30.3
12 | 16.8 / 35.8 | 85.5 / 28.8 | 93.9 / 28.4 | 23.1 / 14.4 | 60.1 / 30.3
13 | 15.5 / 36.2 | 85.2 / 28.8 | 93.9 / 28.4 | 23.2 / 14.4 | 60.3 / 30.3
14 | 17.1 / 35.8 | 85.3 / 28.8 | 93.6 / 28.4 | 22.9 / 14.5 | 60.5 / 30.3
15 | 16.4 / 35.9 | 85.2 / 28.8 | 93.8 / 28.4 | 23.0 / 14.5 | 60.0 / 30.3
(i) MSE represents the cumulative squared error between the original and enhanced image, and can be calculated using Eq. (1):

$MSE = \frac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\left[I(i,j) - K(i,j)\right]^{2}$ ..(1)

where $I$ is the original frame, $K$ is the enhanced frame, and $m \times n$ is the frame size. Whenever the two images are identical, the MSE of the two images is zero; for this value the PSNR is undefined, because of division by zero.
(ii) PSNR is generally expressed on the decibel scale and can be calculated using Eq. (2):

$PSNR = 10\log_{10}\left(\frac{MAX_{I}^{2}}{MSE}\right)$ ..(2)

where $MAX_I$ is the maximum possible pixel value (255 for 8-bit images). PSNR is a frequently used metric for picture reconstruction quality; the quality of the image is said to be high whenever the PSNR value is high.
(iii) The quality of the enhancement can be determined by checking the RMSE of two frames; a good enhancement has a low value. RMSE can be calculated using Eq. (3):

$RMSE = \sqrt{MSE}$ ..(3)
PSNR, MSE, and RMSE are all measures of image quality: a higher PSNR indicates less distortion, while lower MSE and RMSE values indicate less distortion.
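A minimal NumPy sketch of Eqs. (1)-(3) for a pair of frames follows; the function and array names are illustrative.

```python
# Sketch of Eqs. (1)-(3): MSE, PSNR, and RMSE between two frames.
import numpy as np

def mse(orig, enh):
    diff = orig.astype(np.float64) - enh.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(orig, enh, max_val=255.0):
    m = mse(orig, enh)
    # Identical frames give MSE = 0, for which PSNR is undefined
    return float("inf") if m == 0 else 10 * np.log10(max_val ** 2 / m)

def rmse(orig, enh):
    return np.sqrt(mse(orig, enh))
```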


(iv) MAE stands for Mean Absolute Error; it is the average absolute difference between the original and enhanced image and can be calculated using Eq. (4):

$MAE = \frac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\left|I(i,j) - K(i,j)\right|$ ..(4)

The mean absolute error is expressed on the same scale as the data. It is a scale-dependent accuracy metric, so it cannot be used to compare series with different scales.
(v) SNR stands for Signal-to-Noise Ratio and can be calculated using Eq. (5):

$SNR = 10\log_{10}\left(\frac{\sum_{i,j} I(i,j)^{2}}{\sum_{i,j}\left[I(i,j) - K(i,j)\right]^{2}}\right)$ ..(5)

Usually, if the SNR value is large and positive, the noise in the image is low.
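A corresponding sketch for Eqs. (4) and (5). The SNR form used here is the signal-power over noise-power ratio in decibels, consistent with the reconstruction above; it is an assumption, since the paper's original equation image is not reproduced.

```python
# Sketch of Eqs. (4)-(5): MAE and SNR between an original and enhanced frame.
import numpy as np

def mae(orig, enh):
    return np.mean(np.abs(orig.astype(np.float64) - enh.astype(np.float64)))

def snr(orig, enh):
    signal = orig.astype(np.float64)
    noise = signal - enh.astype(np.float64)
    # Signal power over noise power, in decibels
    return 10 * np.log10(np.sum(signal ** 2) / np.sum(noise ** 2))
```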
(vi) SI stands for Similarity Index; this metric checks the enhanced image's fidelity by comparing the enhanced frame to the original extracted frame and computing a similarity score. The similarity index is calculated by counting how many pixels of the primary (original) image are preserved in the enhanced image, and how they were used. The Similarity Index (SI) spans from 0 to 1, with 0 representing 0% and 1 representing 100%; if the images are identical, SI equals 1. Expressed as the fraction of matching pixels, SI can be calculated using Eq. (6):

$SI = \frac{N_{match}}{N_{total}}$ ..(6)
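An illustrative sketch of Eq. (6) under the description above: the fraction of pixels in the enhanced frame that match the original. The tolerance parameter is an assumption, since the paper's exact matching rule is not reproduced.

```python
# Sketch of Eq. (6): similarity index as the fraction of matching pixels.
import numpy as np

def similarity_index(orig, enh, tol=0):
    # A pixel "matches" if it differs from the original by at most tol
    matches = np.abs(orig.astype(np.int32) - enh.astype(np.int32)) <= tol
    return matches.mean()  # 0 means 0% similar, 1 means identical frames
```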
(vii) The Pearson coefficient measures the degree of association between the pixels of two photographs and is commonly known as r; maximizing it is practically equivalent to minimizing the mean square error between the desired and obtained rankings. The Pearson coefficient can be calculated using Eq. (7):

$r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^{2}}\,\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^{2}}}$ ..(7)

where $x_i$ = actual value, $y_i$ = predicted value, and $n$ = number of observations.
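A minimal sketch of Eq. (7) using NumPy's built-in correlation over flattened pixel arrays, which stand in for the actual and predicted values.

```python
# Sketch of Eq. (7): Pearson correlation between two frames' pixel values.
import numpy as np

def pearson_r(orig, enh):
    x = orig.astype(np.float64).ravel()   # actual values
    y = enh.astype(np.float64).ravel()    # predicted values
    return np.corrcoef(x, y)[0, 1]
```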
Tables 3 to 7 present the performance metrics calculated for the dataset [26]. Table 3 presents the average values of the 8 metrics after applying dynamic scene deblurring.

Table 3: Dynamic scene deblurring values.

 | MSE | PSNR | RMSE | MAE | SNR | IQI | SI | PCC
Average | 16.219 | 36.049 | 4.022 | 70.821 | 23.206 | 0.947 | (0.970, 0.970) | 0.868
Table 4 presents the average values of the 8 metrics after applying ESRGAN.

Table 4: ESRGAN values.

 | MSE | PSNR | RMSE | MAE | SNR | IQI | SI | PCC
Average | 84.814 | 28.846 | 9.207 | 107.165 | 16.002 | 0.997 | (0.970, 0.970) | 0.868
Table 5 presents the average values of the 8 metrics after applying CLAHE.

Table 5: CLAHE values.

 | MSE | PSNR | RMSE | MAE | SNR | IQI | SI | PCC
Average | 93.832 | 28.406 | 9.686 | 93.030 | 15.563 | 0.821 | (0.697, 0.790) | 0.704
Table 6 presents the average values of the 8 metrics after applying gamma correction.

Table 6: Gamma correction values.

 | MSE | PSNR | RMSE | MAE | SNR | IQI | SI | PCC
Average | 2448.331 | 14.253 | 49.445 | 107.276 | 1.410 | 0.385 | (0.020, 0.330) | 0.242
Table 7 presents the average values of the 8 metrics after saturation adjustment.

Table 7: Saturation adjustment values.

 | MSE | PSNR | RMSE | MAE | SNR | IQI | SI | PCC
Average | 60.173 | 30.336 | 8.311 | 46.040 | 16.893 | 0.823 | (0.820, 0.925) | 0.771
Fig. 2 and Fig. 3 present a sample image before and after enhancement for the video given in [26].

Fig. 2: Before enhancement. Fig. 3: After enhancement.


It can be observed that when dynamic scene deblurring was applied to the frames of [26], the quality index and similarity index were 0.947 and 0.97, indicating that quality improved. When CLAHE is applied, these indices are 0.821 and 0.69, indicating that quality improved from a very low contrast. In all phases of quality enhancement (dynamic scene deblurring, ESRGAN, CLAHE, and gamma correction), the enhanced images have a similarity greater than 0.4 with the original image, signifying that a good percentage of pixels after enhancement matches the original image. The IQI values, except for gamma correction, are greater than 0.8, indicating improvement after enhancement. All PCC values are greater than 0.7 except for gamma correction (which shows weak correlation), indicating a positive correlation between the original and enhanced images. A probability density function is used to adjust the gamma correction value, with the initial value set to 0.5; lower gamma correction values indicate that the negative impact of changes in scene illumination is reduced. MSE, PSNR, RMSE, and MAE also showed consistent values, indicating the noise level and quality enhancement of the output image.

5 Conclusion and Future Work

The FPGA-based architecture proposed in this paper has the capability to enhance drone video in real time. As drones are being adopted massively across many verticals, they do not necessarily carry high-throughput cameras that can work in all weather conditions; the proposed system has immense value in such scenarios for enhancing the drone video. The proposed system also has the ability to remove noise from the drone video. The model produced good results on all the performance metrics used for the analysis, and the proposed system produces good results on foggy and smoky environments and on blurred drone videos.
Further, the work can be extended to handle night-vision capabilities. Image dehazing can be incorporated to handle degradation due to weather, and weather-related noise handling can be embedded in the future to enhance video obtained under constraints such as occlusions.
Funding: Not applicable
Conflicts of interest/Competing interests: None

References

1. Lucas Prado Osco et al., "A review on deep learning in UAV remote sensing", International Journal of Applied Earth Observation and Geoinformation, Volume 102, 2021, 102456.

2. Jia Wei Tang, Nasir Shaikh-Husin, Usman Ullah Sheikh, M. N. Marsono, "FPGA-Based Real-Time Moving Target
Detection System for Unmanned Aerial Vehicle Application", International Journal of Reconfigurable Computing, vol.
2016, Article ID 8457908, 16 pages, 2016. https://doi.org/10.1155/2016/8457908
3. T. Nomani, M. Mohsin, Z. Pervaiz and M. Shafique, "xUAVs: Towards Efficient Approximate Computing for UAVs—
Low Power Approximate Adders With Single LUT Delay for FPGA-Based Aerial Imaging Optimization," in IEEE
Access, vol. 8, pp. 102982-102996, 2020, doi: 10.1109/ACCESS.2020.2998957.
4. Chaudhry, S.R., Liu, P., Wang, X. et al. "A measurement study of offloading virtual network functions to the edge". J
Supercomput 78, 1565–1582 (2022).
5. A. Llorente, C., Jose P. Dadios, E., Adrian B. Monzon, J., & E. De Leon, W, "FPGA-based Object Detection and
Classification of an Image". International Journal of Engineering & Technology, 7(4.16), 83-86, 2018.
doi:http://dx.doi.org/10.14419/ijet.v7i4.16.21784
6. Li, Hui, Xiang, Fei & Sun, Ligong (2017). "Based on the FPGA Video Image Enhancement System Implementation". DEStech Transactions on Computer Science and Engineering. 10.12783/dtcse/iceiti2016/6169.
7. S. Jiang et al., "SCYLLA: QoE-aware Continuous Mobile Vision with FPGA-based Dynamic Deep Neural Network
Reconfiguration," IEEE INFOCOM 2020 - IEEE Conference on Computer Communications, 2020, pp. 1369-1378,
8. Samuel Theophilus, and Farida Kamal,"Enhancing Drone Imagery with Super-Resolution Using Deep Learning",
Omdena Deep Learning, Machine Learning, Remote Sensing,2021.
9. Maria Zaitchenko and Julian Di Matteo. "FPGAs aid drone design". Aerospace Manufacturing and Design,
https://www.aerospacemanufacturinganddesign.com/article/fpgas-aid-drone-design/ Last accessed on 12-12-2021
10. Che Aminudin, M.F., Suandi, S.A. Video surveillance image enhancement via a convolutional neural network and
stacked denoising autoencoder. Neural Computing and Applications,2021. https://doi.org/10.1007/s00521-021-06551-0.
11. Vignesh, R. P., & Rajendran, R,”Performance and Analysis of Edge detection using FPGA Implementation”,
International Journal of Modern Engineering Research, 2(2), 552-554,2012.
12. Dakre, K. A., & Pusdekar, P. N, “Image enhancement using hardware co-simulation for biomedical applications”,
International Journal on Recent and Innovation Trends in Computing and Communication (IJRITCC), 3(2), 869-
877,2015
13. Archana, H. R., & Reddy, C. B., "Design and Analysis of Imaging Chip Using High-Speed AXI-Interface for MPSOC Applications on FPGA Platform", Wireless Personal Communications, 2021. DOI: 10.21203/rs.3.rs-259373/v1
14. Pavitha, U. S., Nikhila, S., & Krutthika, H. K., "Design and Implementation of Image Dithering Engine on a Spartan 3AN FPGA", International Journal of Future Computer and Communication, 1(4), 361, 2012.
15. P. D. Ferguson, T. Arslan, A. T. Erdogan and A. Parmley, "Evaluation of contrast limited adaptive histogram
equalization (CLAHE) enhancement on a FPGA," 2008 IEEE International SOC Conference, 2008, pp. 119-122, doi:
10.1109/SOCC.2008.4641492.
16. Balaji, V. R., Priya, J. S., & Kumar, J. R. D. (2019). FPGA implementation of image acquisition in marine environment.
International Journal of Oceans and Oceanography, 13(2), 293-300.
17. Yanping Wei and Hailiu Xiao, "Research and Hardware Design of Image Processing Algorithm Based on FPGA", Journal of Physics: Conference Series, Vol. 1648, No. 4, p. 042104, 2020.
18. Liu, B., "Real-Time Video Edge Enhancement IP Core Based on FPGA and Sobel Operator", In: Xu, Z., Choo, KK., Dehghantanha, A., Parizi, R., Hammoudeh, M. (eds) Cyber Security Intelligence and Analytics. CSIA 2019. Advances in Intelligent Systems and Computing, vol 928. Springer, Cham, 2020. https://doi.org/10.1007/978-3-030-15235-2_19
19. H. Sang et al., "An FPGA Based Adaptive Real-Time Enhancement System for Dental X-rays," 4th International
Conference on Electronics and Communication Engineering (ICECE), 2021, pp. 340-346, doi:
10.1109/ICECE54449.2021.9674312.
20. X. Guo, X. Wei and Y. Liu, "An FPGA implementation of multi-channel video processing and 4K real-time display
system," 2017 10th International Congress on Image and Signal Processing, BioMedicalEngineering and Informatics
(CISP-BMEI), 2017, pp. 1-6,
21. D. Kim, Y. S. Cho, J. H. Lee, H. N. Byun and C. G. Kim, "Real-time FPGA implementation of Full HD@120Hz frame
rate up-conversion system," 2014 IEEE International Conference on Consumer Electronics (ICCE), 2014, pp. 109-110,
22. UG2-Dataset https://github.com/rosauravidal/UG2-Dataset Last accessed on 13-03-2022
23. Jean-Paul Yaacoub, Hassan Noura, Ola Salman, Ali Chehab, “Security analysis of drones systems: Attacks, limitations,
and recommendations”, Internet of Things, 11,2020,100218 .
24. H. Nakahara, M. Shimoda and S. Sato, "A Demonstration of FPGA-Based You Only Look Once Version2 (YOLOv2),"
28th International Conference on Field Programmable Logic and Applications (FPL), 2018, pp. 457-4571, doi:
10.1109/FPL.2018.00088.
25. Lee S, Ngo D, Kang B, “Design of an FPGA-Based High-Quality Real-Time Autonomous Dehazing System”,. Remote
Sensing. 2022; 14(8):1852.
26. Ukraine war, https://www.youtube.com/watch?v=gXoyWH5FMgU Last accessed on 13-04-2022


27. Ngo, D.; Lee, S.; Lee, G.D.; Kang, B,”Automating a Dehazing System by Self-Calibrating on Haze Conditions”,
Sensors 2021, 21(19), 6373; https://doi.org/10.3390/s21196373
28. J. Tarel and N. Hautière, "Fast visibility restoration from a single color or gray level image," IEEE 12th International
Conference on Computer Vision, 2009, pp. 2201-2208, doi: 10.1109/ICCV.2009.5459251.
29. K. He, J. Sun and X. Tang, "Single Image Haze Removal Using Dark Channel Prior," in IEEE Transactions on Pattern
Analysis and Machine Intelligence, vol. 33, no. 12, pp. 2341-2353, Dec. 2011, doi: 10.1109/TPAMI.2010.168.
30. Q. Zhu, J. Mai and L. Shao, "A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior," in IEEE
Transactions on Image Processing, vol. 24, no. 11, pp. 3522-3533, Nov. 2015, doi: 10.1109/TIP.2015.2446191.
31. D. Berman, T. Treibitz and S. Avidan, "Non-local Image Dehazing," 2016 IEEE Conference on Computer Vision and
Pattern Recognition (CVPR), 2016, pp. 1674-1682, doi: 10.1109/CVPR.2016.185.
32. B. Cai, X. Xu, K. Jia, C. Qing and D. Tao, "DehazeNet: An End-to-End System for Single Image Haze Removal," in
IEEE Transactions on Image Processing, vol. 25, no. 11, pp. 5187-5198, Nov. 2016, doi: 10.1109/TIP.2016.2598681.
33. Ren, W., Liu, S., Zhang, H., Pan, J., Cao, X., Yang, MH, “Single Image Dehazing via Multi-scale Convolutional Neural
Networks”, In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds) Computer Vision – ECCV 2016. ECCV 2016. Lecture
Notes in Computer Science(), vol 9906. Springer, Cham,2016 . https://doi.org/10.1007/978-3-319-46475-6_10
