Deep Learning Review and Discussion of Its Future Development
Abstract. This paper is a summary of the algorithms for deep learning and a
brief discussion of its future development. In the first part, the concept of
deep learning and the advantages and disadvantages of deep learning are
introduced. The second part demonstrates several algorithms for deep
learning. The third part introduces the application areas of deep learning.
Then, the above algorithms and applications are combined to explore the
subsequent development of deep learning. The last part summarizes the full
paper.
1 Introduction
As early as 1952, IBM's Arthur Samuel designed a program for learning checkers. It can
build new models by observing the moves of the pieces and use them to improve its
playing skills. In 1959, the concept of machine learning was proposed as a field of study
that could give a machine a certain skill without the need for deterministic programming.
In the process of machine learning development, various machine learning models have
been proposed, including deep learning. Because of its complicated structure and the large
amount of calculation it requires, deep learning is computationally expensive, so it received
little attention at first. However, with the great improvement in computer performance, its
excellent results have made it rise rapidly, and it has become one of the hottest research
areas. In this paper, the main deep learning models
will be briefly summarized and the development prospects of deep learning will be
analyzed and discussed at the end.
* Corresponding author: [email protected]
© The Authors, published by EDP Sciences. This is an open access article distributed under the terms of the Creative Commons
Attribution License 4.0 (http://creativecommons.org/licenses/by/4.0/).
Shallow machine learning models such as Logistic Regression were introduced in the 1990s.
These models have only one hidden layer or none, as shown in Fig 1. Deep learning, by
contrast, is built on multiple hidden layers; its essence is a multi-layer neural network.
Deep learning uses the output of the previous layer as the input of the next layer to learn
highly abstract data features.
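This layer-by-layer flow, where each layer's output feeds the next, can be sketched as a minimal forward pass. The toy NumPy code below is purely illustrative (random weights, ReLU activations) and does not correspond to any particular framework:

```python
import numpy as np

def relu(x):
    # Elementwise ReLU non-linearity.
    return np.maximum(0.0, x)

def forward(x, layers):
    """Feed x through a stack of (weights, bias) layers.

    The output of each layer becomes the input of the next,
    which is how a deep network builds up abstract features.
    """
    for w, b in layers:
        x = relu(x @ w + b)
    return x

rng = np.random.default_rng(0)
# Two hidden layers: 4 -> 8 -> 3 (sizes chosen arbitrarily).
layers = [(rng.standard_normal((4, 8)), np.zeros(8)),
          (rng.standard_normal((8, 3)), np.zeros(3))]
y = forward(rng.standard_normal((2, 4)), layers)
print(y.shape)  # (2, 3)
```

A real network would also learn the weights by backpropagation; this sketch shows only the forward composition of layers.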
Like machine learning, deep learning can be categorized into supervised learning, semi-
supervised learning, and unsupervised learning. At present, classical deep learning
frameworks include Convolutional Neural Networks, Restricted Boltzmann Machines [2], Deep
Belief Networks [3], and Generative Adversarial Networks [4]. In the next section, these
algorithms will be introduced briefly.
Although deep learning has achieved excellent results in various application fields, there is
still no complete and rigorous theoretical derivation to explain the deep learning model at
this stage, which limits the follow-up study and improvement of deep learning.
MATEC Web of Conferences 277, 02035 (2019) https://doi.org/10.1051/matecconf/201927702035
JCMME 2018
The pooling layer is used for feature filtering after image convolution to improve the
operability of the classification.
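One common form of this feature filtering is max pooling, which keeps only the strongest response in each window of the convolved feature map, shrinking it and making it more robust to small shifts. A minimal sketch in plain NumPy (illustrative only, no framework assumed):

```python
import numpy as np

def max_pool2d(fmap, k=2):
    """k x k max pooling over a 2-D feature map.

    Trims the map to a multiple of k, then keeps the maximum
    response in each non-overlapping k x k window.
    """
    h, w = fmap.shape
    trimmed = fmap[:h - h % k, :w - w % k]
    return trimmed.reshape(h // k, k, w // k, k).max(axis=(1, 3))

# A made-up 4x4 feature map for illustration.
fmap = np.arange(16.0).reshape(4, 4)
pooled = max_pool2d(fmap)
print(pooled)  # [[ 5.  7.] [13. 15.]]
```

Each 2x2 window collapses to its largest value, so the 4x4 map becomes 2x2 while the strongest activations survive.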
The discriminator learns to tell a generated picture from a real picture. The two models are
trained at the same time, and both grow stronger and stronger through this adversarial
process, eventually reaching a steady state.
Generative networks are very versatile: they can be used not only for the generation and
discrimination of images, but also for other kinds of data.
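The adversarial objective behind this training can be sketched with the standard binary cross-entropy losses of the two players. The discriminator outputs below are made-up numbers purely for illustration:

```python
import numpy as np

def bce(p, label):
    # Binary cross-entropy for a batch of predicted probabilities.
    eps = 1e-7
    p = np.clip(p, eps, 1 - eps)
    return -(label * np.log(p) + (1 - label) * np.log(1 - p)).mean()

# Hypothetical discriminator outputs: probability that a sample is real.
d_real = np.array([0.9, 0.8])   # scores on real pictures
d_fake = np.array([0.2, 0.1])   # scores on generated pictures

# The discriminator is trained to score real samples as 1 and fakes as 0 ...
d_loss = bce(d_real, 1.0) + bce(d_fake, 0.0)
# ... while the generator is trained to make the discriminator score fakes as 1.
g_loss = bce(d_fake, 1.0)

# At the theoretical steady state the discriminator outputs 0.5 everywhere,
# where each player's loss equals log(2) per term and neither can improve.
steady = bce(np.array([0.5]), 1.0)
```

In practice the two losses drive alternating gradient updates; this sketch shows only the opposing objectives that make the models stronger through confrontation.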
The fourth part of this paper discussed two application areas of deep learning, image
recognition and speech processing, which are its two main application areas. In addition,
deep learning has recently also been applied to natural language processing.
Specific applications include autonomous driving, intelligent dialogue robots such as Siri,
image classification, and medical image processing. Different deep learning
frameworks tend to have slightly different application scenarios, such as convolutional neural
networks, which are mainly used for image processing. In the work of medical image
processing, for example, the use of convolutional neural networks for brain tumour
segmentation has achieved an accuracy of more than 90%. Also, in medical applications,
convolutional neural networks can be used to recognize Alzheimer's disease in brain images,
and more accurate diagnostic results can be obtained when combined with manual judgement. Medicine
is only one part of deep learning applications. In deep learning applications, well-trained
machines can often compute details that are hard for humans to obtain, reducing people's
workload and improving the quality of results. For example, to deter traffic light violations
at intersections, the usual approach is to capture pictures with a camera and then manually
read the license plate to issue the subsequent punishment. Manually viewing the images and
recording them is a very boring job and is not very efficient. If the captured image is
recognized by a deep neural network and the license plate number is automatically extracted
and entered into the system, it not only saves manpower but also greatly improves efficiency.
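The automated workflow just described can be sketched as a simple pipeline. The functions `detect_plate` and `classify_characters` are hypothetical stand-ins for trained networks and are stubbed out here purely for illustration:

```python
def detect_plate(image):
    # In practice: a detection network localizes the plate region in the photo.
    # Here the "image" is a stub dictionary for illustration.
    return image["plate_region"]

def classify_characters(region):
    # In practice: a character-recognition network reads each symbol.
    return "".join(region)

def process_capture(image, database):
    """Recognize the plate in a captured image and enter it automatically."""
    plate = classify_characters(detect_plate(image))
    database.append(plate)  # automatic entry instead of manual recording
    return plate

records = []
capture = {"plate_region": ["A", "B", "1", "2", "3"]}
print(process_capture(capture, records))  # prints "AB123"
```

The point of the sketch is the division of labour: detection, recognition, and database entry chain together with no human in the loop.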
Deep learning applications can be expected to develop further in transportation, medicine,
language, automation, and other fields; further examples are not listed here. Although deep
learning cannot completely replace human work at present, combining it with human effort can
greatly improve work efficiency.
6 Conclusion
In this paper, we introduced some of the main algorithms of deep learning and some
conjectures about its future development. Deep learning has already been researched in
depth, has a wide range of application scenarios, and has been put into practical use in
real life with excellent performance. However, there is still much to explore in deep
learning and neural networks, with great room for follow-up research and considerable
application potential.