Journal of Sensors
Volume 2021, Article ID 9198884, 10 pages
https://doi.org/10.1155/2021/9198884
Research Article
Image Processing Design and Algorithm Research Based on
Cloud Computing
Received 15 July 2021; Revised 19 August 2021; Accepted 2 September 2021; Published 5 October 2021
Copyright © 2021 Defu He and Si Xiong. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is
properly cited.
Image processing is a popular practical technology in the computer field and has important research value for signal and information processing. This article studies the design and algorithms of image processing under cloud computing technology and proposes cloud computing techniques and image processing algorithms for image data processing. Based on the material structure and performance of the system, a verified algorithm can be selected to carry out the final operation. Starting from the image editing features, this article partitions the functions rationally between software and hardware. On this basis, the structure of a real-time image processing system based on SOPC technology is built, and the corresponding functional units are designed for real-time image storage, editing, and viewing. Studies show that the design of an image processing system based on cloud computing increases the speed of image data processing by 14%. Compared with other algorithms, this image processing algorithm has clear advantages in image compression and image restoration.
coverage, which can accurately segment the patterns in the blood. However, there are errors in image segmentation [2]. Venkatram and Geetha stated that the main purpose of big data is to quickly survey the cutting-edge and latest work being done in the field of big data analysis across different industries [3]. Since many academics, researchers, and practitioners are deeply interested, the field is rapidly updated and focuses on how to use existing technologies, frameworks, methods, and models to exploit the value of big data analytics. However, the analysis process is very complicated.

According to the current technical level and development trend of video image processing systems, this paper carries out the design and implementation of such a system on large-scale logic devices. In particular, for cloud computing in video image processing systems, the causes of infrared image nonuniformity are analyzed, the theory and method of infrared nonuniformity correction are studied, and a feasible image enhancement algorithm based on infrared image characteristics is proposed and verified through experiments.

2. Graphics Processing Method Based on Cloud Computing

2.1. Cloud Computing Technology

2.1.1. Data Storage and Management Technology. Cloud computing uses distributed storage technology to store data across multiple distributed storage devices and, to maintain efficient and reliable storage, matches the machine-level requirements that customers need while reducing the number of deployed instances [4]. In some large-scale projects, such as FIFA and League of Legends, large amounts of data are stored on the cloud platform. Players only need to download the client and log in to the cloud platform to use it, which significantly reduces the demands on local computer equipment [5]. The basic framework of data storage and management technology is shown in Figure 1.

2.1.2. Virtualization Technology. Virtualization is centered on the service equipment: one physical device is virtualized into multiple individual machines, which is the main content of service virtualization [6, 7]. The main goal is to decouple the virtual machine, the guest operating system, the upper-level application programs, and the underlying physical equipment [8]. The original operating system and applications are virtualized and run at a virtual level, and many virtual machines can be executed on the same physical hardware. Based on the material structure and performance of the system, a proven algorithm can then be selected to perform the final operation [10].

In a cloud processing system, the working environment is more complicated. Processing the original image requires steps such as noise and interference removal, image sharpening, and image enhancement [11]. According to the current research status at home and abroad, the commonly used image smoothing methods include the neighborhood averaging method, median filtering, selective masking, and related filtering methods [12].

The neighborhood averaging method is a spatial processing method that replaces each pixel's gray value with the average of the gray values in its neighborhood. The smoothed image is

$$g(x, y) = \frac{1}{M} \sum_{(i,j) \in S} f(i, j), \qquad (1)$$

where $S$ is the neighborhood of $(x, y)$ and $M$ is the number of pixels in $S$.

To keep the image from blurring, the threshold method can be used to reduce the blurring effect caused by neighborhood averaging [13].

2.2.1. Spatial Low-Pass Filtering Algorithm. The slowly varying part of a signal belongs to the low-frequency portion of its spectrum, and the rapidly varying part belongs to the high-frequency portion [14]. The spatial frequencies of noise and of edge interference in an image are relatively high, so low-pass filtering can be used to remove noise, and frequency-domain filtering can be realized equivalently by spatial convolution. As long as the impulse response matrix of the spatial system is designed reasonably, the noise can be filtered out [15]. The basic flowchart of noise removal with the low-pass filtering algorithm is shown in Figure 2.

If a two-dimensional function $F(A, B)$ is input to the filter system, the output signal is recorded as $G(A, B)$. Suppose the impulse response function of the filter system is $D(A, B)$; then

$$G(A, B) = F(A, B) * D(A, B). \qquad (2)$$

When the input is a discrete image of size $Q \times Q$, the output is a discrete image of size $P \times P$, and the impulse response is of order $L \times L$; to avoid wraparound, $L \le P - Q + 1$ should be satisfied. The discrete form of the filtering operation is

$$G(P_1, P_2) = \sum_{Q_1} \sum_{Q_2} F(Q_1, Q_2)\, H(P_1 - Q_1 + 1,\; P_2 - Q_2 + 1). \qquad (3)$$
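As an illustration of the neighborhood averaging in Eq. (1) and the thresholded variant mentioned in the text, the following NumPy sketch smooths an image and then keeps original pixels wherever they are already close to the local mean; the window size k and threshold t are assumed values for illustration, not taken from the paper.

```python
import numpy as np

def neighborhood_average(img, k=3):
    """Replace each pixel with the mean of its k x k neighborhood (Eq. (1))."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def thresholded_average(img, k=3, t=20.0):
    """Thresholded variant: replace a pixel by the local mean only when it
    differs from that mean by more than t, reducing the blur that plain
    averaging introduces."""
    avg = neighborhood_average(img, k)
    diff = np.abs(img.astype(float) - avg)
    return np.where(diff > t, avg, img.astype(float))
```

On a flat region the thresholded filter leaves the image untouched, while an isolated noisy pixel is pulled toward its neighborhood mean.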
[Figure 1: Basic framework of data storage and management technology. Data sources (CRM, ERP, MES, IoT devices, and logs) are imported through Sqoop (relational database import), Flume (log and other data import), and Kafka (online data import) into HDFS, the Hadoop distributed file system for massive collections of all kinds of data; HBase, MongoDB, Cassandra, MySQL, Oracle, and SQL Server provide storage; MapReduce runs on YARN as the data operation system; and the Kylin analysis engine serves Java/Python ISV applications.]
[Figure: Image processing flow. The image acquisition module passes frames through format conversion to the image filtering and sharpening stages; the image segmentation module (chromatic aberration method, thresholded segmentation, and edge detection) then produces the processing result.]
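The spatial low-pass filtering described above amounts to a discrete two-dimensional convolution of the image with the impulse response matrix. A minimal NumPy sketch follows; the 3 × 3 averaging kernel is one common low-pass choice used here for illustration, not the paper's specific matrix K.

```python
import numpy as np

def conv2d_full(f, h):
    """Full discrete 2-D convolution of image f (Q x Q) with kernel h (L x L),
    producing a P x P output with P = Q + L - 1, consistent with the
    constraint L <= P - Q + 1 that avoids wraparound."""
    Q1, Q2 = f.shape
    L1, L2 = h.shape
    g = np.zeros((Q1 + L1 - 1, Q2 + L2 - 1))
    for i in range(L1):
        for j in range(L2):
            # each kernel tap contributes a shifted, scaled copy of the image
            g[i:i + Q1, j:j + Q2] += h[i, j] * f
    return g

# A 3 x 3 averaging kernel (the L = 3 case) acting as a spatial low-pass filter.
kernel = np.full((3, 3), 1.0 / 9.0)
```

Because the kernel sums to one, flat regions pass through unchanged while high-frequency noise is attenuated.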
different forms of the low-pass spatial response function in the case of $L = 3$, represented by matrix $K$:

$$1 - \delta_K \le \frac{\|\Phi x\|_2^2}{\|x\|_2^2} \le 1 + \delta_K. \qquad (4)$$

It can be seen that with the second filter the result is similar to that achieved by the simple neighborhood averaging method under a $3 \times 3$ window, but the image is blurred while the noise is removed. Under certain conditions, the median filtering method obtains better results in removing noise while protecting image edges. In other words, it is a nonlinear image enhancement technique that has an excellent suppression effect on impulse interference and speckle noise and maintains the edges of the image well [16].

The operation of median filtering is as follows: given a sequence $X_1, X_2, \cdots, X_n$, arrange the $n$ numbers by magnitude. The filter can also be represented in the form of a two-dimensional window. Let $\{X_{ij}\}$ represent the gray value of each point of a digital image. The two-dimensional median filter with filter window $A$ can be written as

$$Y_{ij} = \mathrm{MED}\{X_{ij}\} = \mathrm{MED}\{X_{(i+r),(j+s)}, \ (r, s) \in A\}, \quad (i, j) \in I^2. \qquad (8)$$

2.2.3. Edge Detection Algorithm. Local intensity changes are what edge detection methods respond to: the target, the background area, and so on differ greatly in intensity. Edges serve as a basis for image analysis tasks such as image segmentation and texture characterization. The first step is edge detection, which relies on the sharpness of intensity discontinuities: the grayscale values across an edge differ, and the image intensity returns to its starting value after a small interval. Images obtained using a good detection method should show a strong edge response while suppressing noise; image processing methods usually use general edge detection operators.

2.3. Digital Image Processing Algorithm. The result of sampling and quantization is a matrix. There are generally two ways to represent digital images. The first is the matrix form:

$$f(x, y) = \begin{bmatrix} f(0,0) & f(0,1) & \cdots & f(0,n-1) \\ f(1,0) & f(1,1) & \cdots & f(1,n-1) \\ \cdots & \cdots & \cdots & \cdots \\ f(n-1,0) & f(n-1,1) & \cdots & f(n-1,n-1) \end{bmatrix}. \qquad (9)$$

The second is the chain code, which represents a binary image composed of straight lines and curves and is very suitable for describing the boundaries of images. Using chain code expressions can save a lot of bits.

2.4. Edge Detection Algorithm. The edges of the image are usually related to discontinuities in the image intensity or in its first derivative. Discontinuities in the image intensity can be divided into the following:

(1) Step discontinuity, that is, the gray values of the pixels on the two sides of the discontinuity differ significantly

(2) Line discontinuity, that is, the image intensity suddenly changes from one value to another and then returns to the original value after a small interval

Edge detection is the most basic operation for detecting important local changes in an image. In one dimension, a step edge is related to a local peak of the first derivative of the image function. The gradient is a measure of the change of a function, and an image can be regarded as a sampled array of a continuous image intensity function. Therefore, as in the one-dimensional case, discrete approximations of the derivative can be used to detect large changes in image gray values:

$$g(i, j) = \begin{bmatrix} g_i \\ g_j \end{bmatrix} = \begin{bmatrix} \partial f / \partial i \\ \partial f / \partial j \end{bmatrix}. \qquad (12)$$

Two important properties are related to the gradient:
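The two-dimensional median filtering of Eq. (8) can be sketched as follows; the 3 × 3 window size is an assumed value for illustration.

```python
import numpy as np

def median_filter(img, k=3):
    """Two-dimensional median filtering over a k x k window: each output
    pixel is the median of its neighborhood, which suppresses impulse and
    speckle noise while preserving edges better than averaging."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out
```

A single saturated pixel is removed entirely (the median of its window ignores the outlier), while a step edge passes through unchanged.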
[Figure: Quartus II design flow. The setup file passes through Quartus II integrated analysis & elaboration, analysis & synthesis, and adaptation to generate the functional netlist and configuration files; if the timing and resource constraints are not met, the constraints are reset and the flow repeats.]
In practical applications, the absolute value is usually used to approximate the gradient amplitude:

$$|g(i, j)| = \max\left(|g_i|, |g_j|\right). \qquad (14)$$

According to vector analysis, the direction of the gradient vector is

$$a(i, j) = \arctan\left(\frac{g_j}{g_i}\right). \qquad (15)$$

The angle $\alpha$ is measured relative to the $x$-axis.

[Figure 4: Comparison of image compression parameters (file size, compression ratio, preliminary signal-to-noise ratio, and compression time) as functions of the quantization coefficient.]
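The gradient computation of Eqs. (14) and (15) can be sketched with simple first differences; using np.arctan2 rather than a bare arctan to avoid division by zero is our substitution, not the paper's.

```python
import numpy as np

def gradient_edges(f):
    """First-difference gradient of image f: components g_i and g_j,
    magnitude approximated by max(|g_i|, |g_j|) as in Eq. (14), and
    direction relative to the axis as in Eq. (15)."""
    f = f.astype(float)
    gi = np.zeros_like(f)
    gj = np.zeros_like(f)
    gi[:-1, :] = f[1:, :] - f[:-1, :]   # difference along i
    gj[:, :-1] = f[:, 1:] - f[:, :-1]   # difference along j
    magnitude = np.maximum(np.abs(gi), np.abs(gj))
    direction = np.arctan2(gj, gi)      # arctan2 handles g_i == 0 safely
    return magnitude, direction
```

A vertical step produces a strong response exactly at the column where the intensity jumps and zero response elsewhere.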
3. Image Processing System Design Experiment

3.1. Experimental Parameter Design. In this experiment, MATLAB is used for modeling, and the sample data of this article is imported. The compressed sensing sparsity is 1000; that is, after the original image is wavelet transformed, the wavelet coefficients are sorted, the 1000 largest coefficients are retained, and the remaining coefficients are reset to zero. The sparse wavelet coefficients are then measured with the observation matrix. The size of the observation matrix is 4116 × 16424, and the observation results are transmitted to the SOPC for reconstruction. This experiment was conducted to determine whether the SOPC system performing OMP reconstruction was functioning normally. Therefore, the wavelet coefficients after zeroing are observed instead of the original image [17]. In this experiment, the wavelet coefficients after zeroing correspond to the original image, and the reconstructed wavelet coefficients correspond to the reconstructed image.

3.2. Image Processing Programmable System Design

(1) Design input: there are many ways to enter a design. At present, the two most commonly used are schematic diagrams and hardware description languages. For simple designs, schematics or the ABEL language can be used; for complex designs, schematic diagrams or hardware description languages or a mixture of the two can be used, and hierarchical design methods can describe units and hierarchical structures. When the design input is checked for syntax errors, the software creates a list of the syntax errors found in the design input

(2) Design realization: design realization refers to the process from design input files to bitstream files. In this process, the design software automatically compiles and optimizes the design files, performs mapping, placement, and routing for the selected devices, and creates the corresponding bitstream data files

(3) Device configuration: FPGA device configuration modes fall into two categories: active configuration and passive configuration. In active configuration mode, the device itself guides the configuration operation, controlling the external storage and the start-up process; passive configuration is a process controlled externally

(4) Design verification: this is consistent with the design verification process, including functional simulation,
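The sparsification step described in Section 3.1 (sort the wavelet coefficients, keep the largest, zero the rest) can be sketched as follows; a plain array stands in for the wavelet coefficients, since the paper does not specify the wavelet implementation.

```python
import numpy as np

def sparsify(coeffs, k=1000):
    """Keep the k largest-magnitude coefficients and reset the rest to zero,
    as done to the wavelet coefficients before observation."""
    flat = coeffs.ravel().copy()
    if k < flat.size:
        # indices of all but the k largest magnitudes
        drop = np.argsort(np.abs(flat))[:-k]
        flat[drop] = 0.0
    return flat.reshape(coeffs.shape)
```

The sparsified coefficient array then plays the role of the "original image" when checking whether the OMP-based reconstruction on the SOPC works correctly.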
Quantization coefficient | File size (kB) | Compression ratio | Preliminary signal-to-noise ratio | Compression time (s)
60 | 1.9 | 41 | 33.9 | 0.6
50 | 2.5 | 31 | 37.3 | 0.5
40 | 2.9 | 26 | 39.3 | 0.7
30 | 4.8 | 17 | 43.1 | 0.8
20 | 6.3 | 11 | 48.2 | 0.9
10 | 8.2 | 8 | 50.5 | 0.95
1 | 25 | 4 | 55.1 | 1.2
Table 2: SSIM of restored images under different values of parameter a.
Serial number | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
a | 0.4 | 0.45 | 0.5 | 0.55 | 0.6 | 0.65 | 0.7 | 0.8 | 0.9
Fuzzy SSIM | 0.58 | 0.58 | 0.58 | 0.58 | 0.58 | 0.58 | 0.58 | 0.58 | 0.58
PSF restore SSIM | 0.79 | 0.8 | 0.804 | 0.807 | 0.808 | 0.8 | 0.79 | 0.78 | 0.77
SVPSF restore SSIM | 0.85 | 0.854 | 0.857 | 0.867 | 0.868 | 0.858 | 0.856 | 0.845 | 0.845
Table 3: SSIM of restored images under different values of parameter b.
Serial number | 1 | 2 | 3 | 4 | 5 | 6
b | 256 | 200 | 400 | 600 | 1600 | 4600
Fuzzy SSIM | 0.57 | 0.57 | 0.57 | 0.57 | 0.57 | 0.57
PSF restore SSIM | 0.8 | 0.81 | 0.8 | 0.81 | 0.815 | 0.818
SVPSF restore SSIM | 0.86 | 0.86 | 0.85 | 0.87 | 0.88 | 0.86
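SSIM values like those tabulated above can be computed as follows. This is a simplified global SSIM built from the overall means, variances, and covariance; the standard SSIM averages the index over local windows, but the global form is enough to compare restorations of the same image.

```python
import numpy as np

def ssim_global(x, y, peak=255.0):
    """Simplified global structural similarity between two images,
    using the standard stabilizing constants c1 and c2."""
    c1 = (0.01 * peak) ** 2
    c2 = (0.03 * peak) ** 2
    x = x.astype(float)
    y = y.astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score exactly 1; any mismatch in luminance, contrast, or structure lowers the index.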
[Figure 6: SSIM of the restored image for parameter b (beta) values 256, 200, 400, 600, 1600, and 4600.]
4.2.1. The Influence of Parameter a on the Image Restoration Algorithm. The image is restored by accurately establishing the model through the hyper-Laplacian operator; usually, the range of a is 0.5-0.8, and this exponent has a great influence on the restoration effect. Different intervals correspond to different algorithm models. When a = 1, it is the Laplacian restoration model, which does not fit the heavy-tailed distribution of natural images very well. When a = 2, it is a Gaussian distribution model, and the fit is also poor. When a is between 0 and 1, it is a hyper-Laplacian model, and when a is between 0.5 and 0.8, the restoration effect is best. Therefore, it is necessary to analyze the value of parameter a and restore the blurred image with different values of a to obtain different SSIM values. The experimental data are shown in Table 2. According to the analysis of the experimental data in the table, the restored image and its SSIM change with the value of parameter a.

4.2.2. The Influence of Parameter b on the Image Restoration Algorithm. The model is solved by the half-quadratic penalty method, introducing an auxiliary variable w given the blurred image x. b is the weight of the regularization term, and its value increases monotonically from b(0) via the increment b(inc) up to b(max); as b changes, the number of iterations of image restoration also changes. Since the number of iterations is closely related to the running time of the restoration algorithm and to the restoration effect, this article analyzes the parameter b.

From the data analysis in Table 3 and Figure 6, it can be seen that, with parameter a held fixed, the SSIM of the restored image changes with parameter b: as b gradually increases, the SSIM of the restored image first increases and then decreases.
[Figure 7: PSNR of the five reconstructed images (Lena, Man, House, Hill, and Cameraman).]
Table 5: Relationship between sparsity and PSNR (dB) for the five reconstructed images.
Sparsity | 500 | 600 | 700 | 800 | 900 | 1000 | 1100 | 1200 | 1300 | 1400
Cameraman | 24 | 25 | 25.6 | 26.4 | 27.1 | 28 | 28.3 | 28.6 | 23 | 21
Hill | 25.2 | 26 | 26.5 | 27 | 27.3 | 27.5 | 27.9 | 28.1 | 25.3 | 23
House | 26.7 | 27.8 | 28.3 | 29 | 29.5 | 30 | 31.2 | 31.7 | 27 | 25
Man | 23.7 | 24 | 24.6 | 25.3 | 26.2 | 26.5 | 27 | 27.2 | 24.5 | 21.4
Lena | 24 | 24.3 | 25 | 25.8 | 26.8 | 27.2 | 28 | 28.2 | 23.4 | 21.6
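PSNR values such as those tabulated above can be computed from an original image and its reconstruction as follows (assuming 8-bit images, so the peak value is 255):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher values indicate a
    reconstruction closer to the original image."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```

A perfect reconstruction has infinite PSNR, while a maximally wrong 8-bit image (every pixel off by 255) scores 0 dB.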
[Figure 8: PSNR versus sparsity (500-1400) for the Cameraman, Hill, House, Man, and Lena images.]
When b is between 200 and 600, the SSIM of the restored image gradually increases, reaching a maximum value of 0.87. When b is between 1600 and 4600, the SSIM of the restored image gradually decreases, and the SSIM after restoration is smaller than before optimization.

4.3. Image Reconstruction Analysis of Compressed Sensing. This article introduces the SOPC implementation of compressed sensing based on OMP reconstruction with Cholesky matrix decomposition. The following is an analysis of the experimental results of the SOPC; the results are shown in Table 4.

From the comparison of the data in the table, it can be seen that the PSNR of the SOPC-reconstructed image is not high. Analysis shows that three factors affect the PSNR:

(1) All data in this SOPC system are represented by fixed-point numbers, so the accuracy of the algorithm is affected to a certain extent, and the PSNR of the reconstructed image is therefore not high

(2) In this system, an LFSR is used to generate the random observation matrix. Since the numbers generated by an LFSR are not completely random, the incoherence of the observation matrix is affected to a certain extent, and the reconstructed PSNR suffers

(3) Before the wavelet coefficients are observed, the small wavelet coefficients are reset to zero; some fine details are therefore lost, and the PSNR is affected

To observe the changes in the data more intuitively, the table is drawn as a graph, as shown in Figure 7. From the data analysis in the figure, it can be concluded that, among the five images, the House image has the highest PSNR. The analysis shows that the House image is relatively regular, so the coefficients obtained after the wavelet transform are relatively sparse and the fewest of its wavelet coefficients are zeroed; its PSNR after reconstruction is therefore the highest.

In this experiment, the size of the observation matrix is still 4100 × 16400, but the sparsity is increased from 500 to 1400 in steps of 100. The five images were reconstructed, and the relationship obtained between sparsity and PSNR is shown in Table 5 and Figure 8.

4.4. Discussion. This paper builds a wavelet transform model under Quartus II. Compared with the model in the reference, the simulation parameter a in this paper gives a better effect in the range of 0.5-0.8; in the literature, the value of a lies between 0.5 and 2 owing to differences in the model. This is because the model in this paper optimizes parameters such as image compression, peak signal-to-noise ratio, and compression time to reduce interference, so the value range is concentrated, which facilitates control of the model and does not cause model distortion. In addition, the model in this paper can process images with a sparsity between 500 and 1400, while other methods cover smaller intervals. Therefore, the method in this paper can handle a large parameter range and achieves a high degree of recovery.

5. Conclusions

A hardware implementation scheme of the image processing algorithm is proposed. By comparing the PC implementation of the image processing system with a dedicated digital signal processor (DSP) implementation, the structure of the cloud computing-based on-chip programmable system is constructed; the image acquisition, storage, and real-time display parts of image processing are implemented; and the overall structure design is improved.

The cloud computing application introduced in this article is an important cloud imaging system project. Different choices of system parameter settings have a great impact on the image compression effect. The larger the quantization coefficient, the smaller the amount of compressed image data, the larger the image compression ratio, and the smaller the peak signal-to-noise ratio of the image. At the same time, the compression time of the algorithm is shorter, but an obvious blocking effect appears in the visual quality of the image, making it impossible to distinguish the real objects captured in the image. Because the image data itself contains a large amount of information, the realization of image processing algorithms places high demands on hardware devices. With the development of embedded system technology, the functions of embedded microprocessors are becoming increasingly powerful, and the combination of embedded systems and image processing will become a complex system project.

Data Availability

The data underlying the results presented in the study are available within the manuscript.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by grants from the Hubei Province Philosophy and Social Science Research Key Project "Research on artificial intelligence universal education of primary and secondary schools in the new era" (no. 19D101).

References

[1] S. Y. Wang, Y. Zhao, and L. Wen, "PCB welding spot detection with image processing method based on automatic threshold image segmentation algorithm and mathematical morphology," Circuit World, vol. 42, no. 3, pp. 97-103, 2016.

[2] S. M. Hussain, F. U. Farrukh, S. Su, Z. Wang, and H. Chen, "CMOS image sensor design and image processing algorithm implementation for total hip arthroplasty surgery," IEEE Transactions on Biomedical Circuits and Systems, vol. 13, no. 6, pp. 1383-1392, 2019.

[3] K. Venkatram and M. A. Geetha, "Review on big data and analytics - concepts, philosophy, process and applications," Cybernetics and Information Technologies, vol. 17, no. 2, pp. 3-27, 2017.

[4] S. S. Gill and R. Buyya, "A taxonomy and future directions for sustainable cloud computing: 360 degree view," ACM Computing Surveys, vol. 51, no. 5, pp. 1-33, 2018.

[5] J. S. Fu, Y. Liu, H. C. Chao, B. K. Bhargava, and Z. J. Zhang, "Secure data storage and searching for industrial IoT by integrating fog computing and cloud computing," IEEE Transactions on Industrial Informatics, vol. 14, no. 10, pp. 4519-4528, 2018.

[6] A. Choudhary, I. Gupta, V. Singh, and P. K. Jana, "A GSA based hybrid algorithm for bi-objective workflow scheduling in cloud computing," Future Generation Computer Systems, vol. 83, pp. 14-26, 2018.

[7] M. H. Malekloo, N. Kara, and M. E. Barachi, "An energy efficient and SLA compliant approach for resource allocation and consolidation in cloud computing environments," Sustainable Computing: Informatics and Systems, vol. 17, pp. 9-24, 2018.

[8] H. Li, R. Lu, J. Misic, and M. Mahmoud, "Security and privacy of connected vehicular cloud computing," IEEE Network, vol. 32, no. 3, pp. 4-6, 2018.

[9] H. Y. Hwang, J. S. Lee, T. J. Seok et al., "Flip chip packaging of digital silicon photonics MEMS switch for cloud computing and data centre," IEEE Photonics Journal, vol. 9, no. 3, pp. 1-10, 2017.