
Chapter One

Introduction to Emerging Technology


Evolution of Emerging Technologies
Emerging technology is a term generally used to describe a new technology, but it may also
refer to the continuing development of existing technology; it can have slightly different meanings
when used in different areas, such as media, business, science, or education. The term commonly
refers to technologies that are currently developing, or that are expected to be available within the
next five to ten years, and is usually reserved for technologies that are creating or are expected to
create significant social or economic effects. Technological evolution is a theory of
radical transformation of society through technological development.

What is the root word of technology and evolution?

• Technology: 1610s, "discourse or treatise on an art or the arts," from Greek tekhnologia
"systematic treatment of an art, craft, or technique," originally referring to grammar, from
tekhno- (see techno-) + -logy. The meaning "science of the mechanical and industrial
arts" is first recorded in 1859.
• Evolution: Evolution means the process of developing by gradual changes. This noun is
from Latin evolutio, "an unrolling or opening," combined from the prefix e-, "out," plus
volvere, "to roll."
List of some currently available emerging technologies

• Artificial Intelligence
• Blockchain
• Augmented Reality and Virtual Reality
• Cloud Computing
• Angular and React
• DevOps
• Internet of Things (IoT)
• Intelligent Apps (I-Apps)
• Big Data
• Robotic Process Automation (RPA)
Introduction to the Industrial Revolution (IR)
The Industrial Revolution was a period of major industrialization and innovation that
took place during the late 1700s and early 1800s. An Industrial Revolution at its core
occurs when a society shifts from using tools to make products to using new sources of energy,
such as coal, to power machines in factories. The revolution started in England, with a series
of innovations to make labor more efficient and productive. The Industrial Revolution was a
time when the manufacturing of goods moved from small shops and homes to large
factories. This shift brought about changes in culture as people moved from rural areas to
big cities in order to work.

The American Industrial Revolution, commonly referred to as the Second Industrial
Revolution, started sometime between 1820 and 1870. The impact of changing the way
items were manufactured had a wide reach. Industries such as textile manufacturing, mining,
glass making, and agriculture all underwent changes. For example, prior to the
Industrial Revolution, textiles were primarily made of wool and were handspun.
From the first industrial revolution (mechanization through water and steam power) to the
mass production and assembly lines using electricity in the second, the fourth industrial
revolution will take what was started in the third with the adoption of computers and
automation and enhance it with smart and autonomous systems fueled by data and machine
learning.
Generally, the following industrial revolutions fundamentally changed and transformed the
world around us into modern society.
• The steam engine,
• The age of science and mass production, and
• The rise of digital technology
• Smart and autonomous systems fueled by data and machine learning.
The Most Important Inventions of the Industrial Revolution
• Transportation: The Steam Engine, The Railroad, The Diesel Engine, The
Airplane.
• Communication: The Telegraph, The Transatlantic Cable, The Phonograph, The
Telephone.
• Industry: The Cotton Gin, The Sewing Machine, Electric Lights.

Historical Background (IR 1.0, IR 2.0, IR 3.0)
The industrial revolution began in Great Britain in the late 1770s before spreading to the
rest of Europe. The first European countries to be industrialized after England were
Belgium, France, and the German states. The final cause of the Industrial Revolution was
the effect created by the Agricultural Revolution. As previously stated, the Industrial
Revolution began in Britain in the 18th century due in part to an increase in food
production, which was the key outcome of the Agricultural Revolution. The four types of
industries are:

• The primary industry involves getting raw materials e.g. mining, farming, and
fishing.
• The secondary industry involves manufacturing e.g. making cars and steel.
• Tertiary industries provide a service e.g. teaching and nursing.
• The quaternary industry involves research and development industries e.g. IT.
1. Industrial Revolution (IR 1.0)
The Industrial Revolution (IR) is described as a transition to new manufacturing processes.
The term was first coined in the 1760s, during the time when this revolution began. The
transitions in the first IR included going from hand production methods to machines and
the increasing use of steam power.
2. Industrial Revolution (IR 2.0)
The Second IR, also known as the Technological Revolution, began sometime in the
1870s. The advancements in IR 2.0 included the development of methods for
manufacturing interchangeable parts and widespread adoption of pre-existing
technological systems such as telegraph and railroad networks. This adoption allowed the
vast movement of people and ideas, enhancing communication. Moreover, new
technological systems were introduced, such as electrical power and telephones.
3. Industrial Revolution (IR 3.0)
The Third Industrial Revolution (IR 3.0) introduced the transition from
mechanical and analog electronic technology to digital electronics, which began in
the late 1950s. Due to this shift towards digitalization, IR 3.0 was given the nickname
“Digital Revolution”. The core factor of this revolution is the mass production and
widespread use of digital logic circuits and their derived technologies, such as the computer,
mobile phones and the Internet. These technological innovations have arguably transformed
traditional production and business techniques, enabling people to communicate with one
another without the need to be physically present. Certain practices that were enabled
during IR 3.0 are still practiced to this day, for example, the proliferation
of digital computers and digital records.
4. Fourth Industrial Revolution (IR 4.0)
Now, with advancements in various technologies such as robotics, Internet of Things
(IoT see Figure 1.4), additive manufacturing and autonomous vehicles, the term “Fourth
Industrial Revolution” or IR 4.0 was coined by Klaus Schwab, the founder and executive
chairman of the World Economic Forum, in the year 2016. The technologies mentioned above
are what are called cyber-physical systems. A cyber-physical system is a mechanism that
is controlled or monitored by computer-based algorithms, tightly integrated with the
Internet and its users.
One example that is being widely practiced in industries today is the usage of Computer
Numerical Control (CNC) machines. These machines are operated by giving them instructions
using a computer. Another major breakthrough that is associated with IR 4.0 is the adoption
of Artificial Intelligence (AI), which we can see being implemented in our
smartphones. AI is also one of the main elements that give life to Autonomous Vehicles
and Automated Robots.

Figure 1.4 Anybody Connected Device (ABCD)


Role of Data for Emerging Technologies
Data is regarded as the new oil and a strategic asset since we are living in the age of big data;
it drives or even determines the future of science, technology, the economy, and possibly
everything in our world today and tomorrow. Data has not only triggered tremendous hype
and buzz but, more importantly, presents enormous challenges that in turn bring incredible
innovation and economic opportunities.
This reshaping and paradigm-shifting are driven not just by data itself but all other aspects
that could be created, transformed, and/or adjusted by understanding, exploring, and
utilizing data.

The preceding trend and its potential have triggered new debate about data-intensive
scientific discovery as an emerging technology, the so-called “fourth industrial
revolution”. There is no doubt, nevertheless, that the potential of data science and analytics
to enable data-driven theory, economy, and professional development is increasingly being
recognized. This involves not only core disciplines such as computing, informatics, and
statistics, but also the broad-based fields of business, social science, and health/medical
science.

Enabling devices and network (Programmable devices)


In the world of digital electronic systems, there are four basic kinds of devices:
memory, microprocessors, logic, and networks. Memory devices store random
information such as the contents of a spreadsheet or database. Microprocessors execute
software instructions to perform a wide variety of tasks such as running a word processing
program or video game. Logic devices provide specific functions, including device-to-
device interfacing, data communication, signal processing, data display, timing and
control operations, and almost every other function a system must perform. The network
is a collection of computers, servers, mainframes, network devices, peripherals, or other
devices connected to one another to allow the sharing of data. An excellent example of a
network is the Internet, which connects millions of people all over the world.
Programmable devices (see Figure 1.5) usually refer to chips that incorporate field-programmable
gate arrays (FPGAs), complex programmable logic devices (CPLDs) and programmable logic
devices (PLDs). There are also devices that are the analog equivalent of these, called
field-programmable analog arrays.

Figure 1.5 Programmable device

Why is a computer referred to as a programmable device?


Because what makes a computer a computer is that it follows a set of instructions.
Many electronic devices are computers that perform only one operation, but they are still
following instructions that reside permanently in the unit.
List of some Programmable devices
 Achronix Speedster SPD60
 Actel’s
 Altera Stratix IV GT and Arria II GX
 Atmel’s AT91CAP7L
 Cypress Semiconductor’s programmable system-on-chip (PSoC) family
 Lattice Semiconductor’s ECP3
 Lime Microsystems’ LMS6002
 Silicon Blue Technologies
 Xilinx Virtex 6 and Spartan 6
 Xmos Semiconductor L series

A full range of network-related equipment is referred to as Service Enabling Devices (SEDs),
which can include:
• Traditional channel service unit (CSU) and data service unit (DSU)
• Modems
• Routers
• Switches
• Conferencing equipment
• Network appliances (NIDs and SIDs)
• Hosting equipment and servers
Human to Machine Interaction
Human-machine interaction (HMI) refers to the communication and interaction
between a human and a machine via a user interface. Nowadays, natural user interfaces
such as gestures have gained increasing attention as they allow humans to control
machines through natural and intuitive behaviors.

What is interaction in human-computer interaction?

HCI (human-computer interaction) is the study of how people interact with computers
and to what extent computers are or are not developed for successful interaction with
human beings. As its name implies, HCI consists of three parts: the user, the computer
itself, and the ways they work together.

How do users interact with computers?


The user interacts directly with hardware for the human input and output such as displays,
e.g. through a graphical user interface. The user interacts with the computer over this
software interface using the given input and output (I/O) hardware.
How important is human-computer interaction?
The goal of HCI is to improve the interaction between users and
computers by making computers more user-friendly and receptive to the user's needs. The
main advantages of HCI are simplicity, ease of deployment & operations and cost savings
for smaller set-ups. They also reduce solution design time and integration complexity.
Disciplines Contributing to Human-Computer Interaction (HCI)
 Cognitive psychology: Limitations, information processing, performance prediction,
cooperative working, and capabilities.
 Computer science: Including graphics, technology, prototyping tools, user
interface management systems.
 Linguistics.
 Engineering and design.
 Artificial intelligence.
 Human factors

Future Trends in Emerging Technologies


 Emerging technology trends in 2019
5G Networks
Artificial Intelligence (AI)
Autonomous Devices
Blockchain
Augmented Analytics
Digital Twins
Enhanced Edge Computing and
Immersive Experiences in Smart Spaces
 Some emerging technologies that will shape the future of you and
your business.
The future is now, or so they say. So-called emerging technologies are taking over our
minds more and more each day. These are very high-level emerging technologies though.
They sound like tools that will only affect the top tier of technology companies who employ
the world’s top 1% of geniuses.

Chapter Two
Introduction to Data Science
Data science is a multi-disciplinary field that uses scientific methods, processes, algorithms, and
systems to extract knowledge and insights from structured, semi-structured and unstructured
data. Data science is much more than simply analyzing data. It offers a range of roles and
requires a range of skills.
Let’s consider this idea by thinking about some of the data involved in buying a box of cereal
from the store or supermarket:
• Whatever your cereal preferences—teff, wheat, or barley—you prepare for the purchase
by writing “cereal” in your notebook. This planned purchase is a piece of data, though it is
written in pencil, in a form that only you can read.

• When you get to the store, you use your data as a reminder to grab the item and put it in
your cart. At the checkout line, the cashier scans the barcode on your container, and the
cash register logs the price. Back in the warehouse, a computer tells the stock manager
that it is time to request another order from the distributor because your purchase was one
of the last boxes in the store.
• You also have a coupon for your big box, and the cashier scans that, giving you a
predetermined discount. At the end of the week, a report of all the scanned manufacturer
coupons gets uploaded to the cereal company so they can issue a reimbursement to the
grocery store for all of the coupon discounts they have handed out to customers. Finally,
at the end of the month, a store manager looks at a colorful collection of pie charts
showing all the different kinds of cereal that were sold and, on the basis of strong sales of
cereals, decides to offer more varieties of these on the store’s limited shelf space next
month.
• So, the small piece of information that began as a scribble on your notebook ended up in
many different places, most notably on the desk of a manager as an aid to decision
making. On the trip from your pencil to the manager’s desk, the data went through many
transformations. In addition to the computers where the data might have stopped by or
stayed on for the long term, lots of other pieces of hardware—such as the barcode
scanner—were involved in collecting, manipulating, transmitting, and storing the data. In
addition, many different pieces of software were used to organize, aggregate, visualize,
and present the data. Finally, many different human systems were involved in working
with the data. People decided which systems to buy and install, who should get access to
what kinds of data, and what would happen to the data after its immediate purpose was
fulfilled.
As an academic discipline and profession, data science continues to evolve as one of the most
promising and in-demand career paths for skilled professionals. Today, successful data
professionals understand that they must advance past the traditional skills of analyzing large
amounts of data, data mining, and programming. In order to uncover useful intelligence for
their organizations, data scientists must master the full spectrum of the data science life cycle and
possess a level of flexibility and understanding to maximize returns at each phase of the process.
Data scientists need to be curious and result-oriented, with exceptional industry-specific
knowledge and communication skills that allow them to explain highly technical results to their
non-technical counterparts. They possess a strong quantitative background in statistics and linear
algebra as well as programming knowledge with focuses on data warehousing, mining, and
modeling to build and analyze algorithms. In this chapter, we will talk about basic definitions of
data and information, data types and representation, the data value chain and basic concepts of big
data.

What are data and information?


Data can be defined as a representation of facts, concepts, or instructions in a formalized manner,
which should be suitable for communication, interpretation, or processing, by human or
electronic machines. It can be described as unprocessed facts and figures. It is represented with
the help of characters such as alphabets (A-Z, a-z), digits (0-9) or special characters (+, -, /, *,
<,>, =, etc.). Information is the processed data on which decisions and actions are based. It is
data that has been processed into a form that is meaningful to the recipient and is of real or
perceived value in the current or prospective actions or decisions of the recipient. Furthermore,
information is interpreted data, created from organized, structured, and processed data in a
particular context.
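To make the distinction concrete, here is a minimal Python sketch (the scores are hypothetical): a list of raw exam scores is data, and the average computed from them is information that a decision can be based on.

raw_scores = [72, 85, 90, 66, 78]            # data: unprocessed facts and figures

average = sum(raw_scores) / len(raw_scores)  # processing step

# information: processed data that is meaningful to the recipient
print(f"Class average: {average:.1f}")       # -> Class average: 78.2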

Data Processing Cycle


Data processing is the re-structuring or re-ordering of data by people or machines to increase
its usefulness and add value for a particular purpose. Data processing consists of the
following basic steps - input, processing, and output. These three steps constitute the data
processing cycle.

Figure 2.1 Data Processing Cycle

 Input − in this step, the input data is prepared in some convenient form for processing. The
form will depend on the processing machine. For example, when electronic computers are
used, the input data can be recorded on any one of several types of storage
media, such as a hard disk, CD, flash disk and so on.

 Processing − in this step, the input data is changed to produce data in a more useful form.
o For example, interest can be calculated on a bank deposit, or a summary of sales
for the month can be calculated from the sales orders.

 Output − at this stage, the result of the preceding processing step is collected. The
particular form of the output data depends on the use of the data. For example, output data
may be payroll for employees.
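As an illustration of the cycle, the short Python sketch below walks through the three steps with hypothetical sales orders; the function names and figures are only for illustration.

def input_step():
    # Input: data prepared in a convenient form, here a simple list of sales orders.
    return [120.50, 99.99, 340.00, 15.25]

def processing_step(orders):
    # Processing: transform the raw orders into a more useful form,
    # e.g. a monthly sales summary.
    return {"count": len(orders), "total": sum(orders)}

def output_step(summary):
    # Output: the result of processing, in a form that depends on how it is used.
    print(f"{summary['count']} orders, total sales {summary['total']:.2f}")

output_step(processing_step(input_step()))   # -> 4 orders, total sales 575.74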
Data types and their representation
Data types can be described from diverse perspectives. In computer science and computer
programming, for instance, a data type is simply an attribute of data that tells the compiler or
interpreter how the programmer intends to use the data.
Data types from Computer programming perspective
Almost all programming languages explicitly include the notion of data type, though different
languages may use different terminology. Common data types include:

o Integers (int) - used to store whole numbers, mathematically known as integers
o Booleans (bool) - used to represent values restricted to one of two values: true or false
o Characters (char) - used to store a single character
o Floating-point numbers (float) - used to store real numbers
o Alphanumeric strings (string) - used to store a combination of characters and numbers

A data type constrains the values that an expression, such as a variable or a function, might take.
The data type defines the operations that can be done on the data, the meaning of the data, and the
way values of that type can be stored.
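The sketch below shows the data types listed above as they might appear in Python; the annotations and values are illustrative only, since Python infers types on its own and has no separate character type.

quantity: int = 3            # integer: whole numbers
in_stock: bool = True        # boolean: restricted to True or False
grade: str = "A"             # character: a one-character string stands in for char
price: float = 19.99         # floating-point: real numbers
label: str = "Box42"         # alphanumeric string: letters and digits combined

# The type determines which operations make sense:
print(f"{quantity * price:.2f}")   # arithmetic on numeric types -> 59.97
print(label.upper())               # string operations -> BOX42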
Data types from Data Analytics perspective

From a data analytics point of view, it is important to understand that there are three common
types of data types or structures: Structured, Semi-structured, and Unstructured data types. Fig.
2.2 below describes the three types of data and metadata.

Figure 2.2 Data types from a data analytics perspective


a. Structured Data
Structured data is data that adheres to a pre-defined data model and is therefore straightforward
to analyze. Structured data conforms to a tabular format with a relationship between the different
rows and columns. Common examples of structured data are Excel files or SQL databases. Each
of these has structured rows and columns that can be sorted.
b. Semi-structured Data
Semi-structured data is a form of structured data that does not conform with the formal structure
of data models associated with relational databases or other forms of data tables, but nonetheless,
contains tags or other markers to separate semantic elements and enforce hierarchies of records
and fields within the data. Therefore, it is also known as a self-describing structure. JSON and
XML are common examples of semi-structured data.
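A short Python sketch of semi-structured data is shown below: the JSON document (with hypothetical field names) has no fixed relational schema, yet its tags make the structure of each record self-describing and easy to parse.

import json

doc = '''
{
  "name": "Abebe",
  "orders": [
    {"item": "cereal", "price": 4.50},
    {"item": "milk",   "price": 2.00}
  ]
}
'''

record = json.loads(doc)                      # parse the text into nested objects
for order in record["orders"]:
    print(record["name"], order["item"], order["price"])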
c. Unstructured Data
Unstructured data is information that either does not have a predefined data model or is not
organized in a pre-defined manner. Unstructured information is typically text-heavy but may
contain data such as dates, numbers, and facts as well. This results in irregularities and
ambiguities that make it difficult to understand using traditional programs as compared to data
stored in structured databases. Common examples of unstructured data include audio, video files
or NoSQL databases.

Metadata – Data about Data

The last category of data type is metadata. From a technical point of view, this is not a separate
data structure, but it is one of the most important elements for Big Data analysis and big data
solutions. Metadata is data about data. It provides additional information about a specific set of
data.

In a set of photographs, for example, metadata could describe when and where the photos were
taken. The metadata then provides fields for dates and locations which, by themselves, can be
considered structured data. Because of this reason, metadata is frequently used by Big Data
solutions for initial analysis.
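The sketch below, with hypothetical file names and values, shows the idea in Python: the photo file holds the data itself, while the accompanying fields are metadata that can be filtered like any structured data.

photo = {
    "file": "holiday_001.jpg",          # the data itself lives in this file
    "metadata": {                       # structured fields describing the data
        "taken_on": "2019-08-14",
        "location": "Addis Ababa",
        "camera": "Phone-XYZ",
    },
}

# Because metadata fields are structured, they can be filtered and analyzed
# directly, e.g. selecting all photos taken in a given place.
if photo["metadata"]["location"] == "Addis Ababa":
    print(photo["file"])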

Data Value Chain


The Data Value Chain is introduced to describe the information flow within a big data system as
a series of steps needed to generate value and useful insights from data. The Big Data Value
Chain identifies the following key high-level activities:

Figure 2.3 Data Value Chain

1. Data Acquisition
It is the process of gathering, filtering, and cleaning data before it is put in a data warehouse or
any other storage solution on which data analysis can be carried out. Data acquisition is one of
the major big data challenges in terms of infrastructure requirements. The infrastructure required
to support the acquisition of big data must deliver low, predictable latency in both capturing data
and in executing queries; be able to handle very high transaction volumes, often in a distributed
environment; and support flexible and dynamic data structures.
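A toy Python sketch of the acquisition step is given below; the sensor records and field names are hypothetical, and a real pipeline would read from databases, logs, or streams rather than an in-memory list.

raw_records = [
    {"sensor": "A", "value": " 21.5 "},
    {"sensor": "B", "value": None},        # missing reading, filtered out
    {"sensor": "C", "value": "19.8"},
]

def clean(record):
    # Cleaning: normalize the value into a proper number.
    return {"sensor": record["sensor"], "value": float(record["value"])}

acquired = [clean(r) for r in raw_records if r["value"] is not None]
print(acquired)   # ready to be loaded into a warehouse or other storage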
2. Data Analysis
It is concerned with making the raw data acquired amenable to use in decision-making as well as
domain-specific usage. Data analysis involves exploring, transforming, and modeling data with
the goal of highlighting relevant data, synthesizing and extracting useful hidden information with
high potential from a business point of view. Related areas include data mining, business
intelligence, and machine learning.
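As a minimal illustration of this step, the Python sketch below transforms a hypothetical list of sales events into frequencies and extracts a simple insight (the best-selling item); real analyses would rely on the tools and techniques named above.

from collections import Counter

sales = ["cereal", "milk", "cereal", "bread", "cereal", "milk"]

counts = Counter(sales)                   # transform: raw events -> frequencies
best_seller, sold = counts.most_common(1)[0]
print(f"Best seller: {best_seller} ({sold} units)")   # -> Best seller: cereal (3 units)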
3. Data Curation
It is the active management of data over its life cycle to ensure it meets the necessary data
quality requirements for its effective usage. Data curation processes can be categorized into
different activities such as content creation, selection, classification, transformation, validation,
and preservation. Data curation is performed by expert curators who are responsible for
improving the accessibility and quality of data. Data curators (also known as scientific curators
or data annotators) hold the responsibility of ensuring that data are trustworthy, discoverable,
accessible, reusable and fit for their purpose. A key trend in the curation of big data is the use of
community and crowdsourcing approaches.
4. Data Storage
It is the persistence and management of data in a scalable way that satisfies the needs of
applications that require fast access to the data. Relational Database Management Systems
(RDBMS) have been the main, and almost unique, solution to the storage paradigm for nearly
40 years. However, the ACID (Atomicity, Consistency, Isolation, and Durability) properties that
guarantee database transactions lack flexibility with regard to schema changes, and their
performance and fault tolerance degrade when data volumes and complexity grow, making them
unsuitable for big data scenarios. NoSQL technologies have been designed with the scalability
goal in mind and present a wide range of solutions based on alternative data models.

5. Data Usage
It covers the data-driven business activities that need access to data, its analysis, and the tools
needed to integrate the data analysis within the business activity. Data usage in business
decision-making can enhance competitiveness through the reduction of costs, increased added
value, or any other parameter that can be measured against existing performance criteria.

Basic concepts of big data


Big data is a blanket term for the non-traditional strategies and technologies needed to gather,
organize, process, and gain insights from large datasets. While the problem of working with
data that exceeds the computing power or storage of a single computer is not new, the
pervasiveness, scale, and value of this type of computing have greatly expanded in recent years.

In this section, we will talk about big data on a fundamental level and define common concepts
you might come across. We will also take a high-level look at some of the processes and
technologies currently being used in this space.

What Is Big Data?


Big data is the term for a collection of data sets so large and complex that it becomes difficult to
process using on-hand database management tools or traditional data processing applications.
In this context, a “large dataset” means a dataset too large to reasonably process or store with
traditional tooling or on a single computer. This means that the common scale of big datasets is
constantly shifting and may vary significantly from organization to organization. Big data is
characterized by the 3Vs and more:

• Volume: large amounts of data (zettabytes/massive datasets)


• Velocity: Data is live streaming or in motion
• Variety: data comes in many different forms from diverse sources
• Veracity: can we trust the data? How accurate is it? etc.

Figure 2.4 Characteristics of big data


Clustered Computing and Hadoop Ecosystem
1. Clustered Computing
Because of the qualities of big data, individual computers are often inadequate for handling the
data at most stages. To better address the high storage and computational needs of big data,
computer clusters are a better fit.
Big data clustering software combines the resources of many smaller machines, seeking to
provide a number of benefits:
• Resource Pooling: Combining the available storage space to hold data is a clear benefit,
but CPU and memory pooling are also extremely important. Processing large datasets
requires large amounts of all three of these resources.
• High Availability: Clusters can provide varying levels of fault tolerance and availability
guarantees to prevent hardware or software failures from affecting access to data and
processing. This becomes increasingly important as we continue to emphasize the
importance of real-time analytics.
• Easy Scalability: Clusters make it easy to scale horizontally by adding additional
machines to the group. This means the system can react to changes in resource
requirements without expanding the physical resources on a machine.
Using clusters requires a solution for managing cluster membership, coordinating resource
sharing, and scheduling actual work on individual nodes. Cluster membership and resource
allocation can be handled by software like Hadoop’s YARN (which stands for Yet
Another Resource Negotiator).
The assembled computing cluster often acts as a foundation that other software interfaces with to
process the data. The machines involved in the computing cluster are also typically involved with
the management of a distributed storage system, which we will talk about when we discuss data
persistence.

2. Hadoop and its Ecosystem
Hadoop is an open-source framework intended to make interaction with big data easier. It is
a framework that allows for the distributed processing of large datasets across clusters of
computers using simple programming models. It is inspired by a technical document published
by Google. The four key characteristics of Hadoop are:
• Economical: Its systems are highly economical as ordinary computers can be used for
data processing.
• Reliable: It is reliable as it stores copies of the data on different machines and is
resistant to hardware failure.
• Scalable: It is easily scalable, both horizontally and vertically. A few extra nodes help in
scaling up the framework.
• Flexible: It is flexible and you can store as much structured and unstructured data as you
need and decide how to use it later.

Hadoop has an ecosystem that has evolved from its four core components: data management,
access, processing, and storage. It is continuously growing to meet the needs of Big Data. It
comprises the following components and many others:
 HDFS: Hadoop Distributed File System
 YARN: Yet Another Resource Negotiator
 MapReduce: Programming based Data Processing
 Spark: In-Memory data processing
 PIG, HIVE: Query-based processing of data services
 HBase: NoSQL Database
 Mahout, Spark MLLib: Machine Learning algorithm libraries
 Solr, Lucene: Searching and Indexing
 Zookeeper: Managing cluster
 Oozie: Job Scheduling

Figure 2.5 Hadoop Ecosystem

Big Data Life Cycle with Hadoop

1. Ingesting data into the system


The first stage of Big Data processing is Ingest. The data is ingested or transferred to Hadoop
from various sources such as relational databases, systems, or local files. Sqoop transfers data
from RDBMS to HDFS, whereas Flume transfers event data.
2. Processing the data in storage
The second stage is Processing. In this stage, the data is stored and processed. The data is stored
in the distributed file system, HDFS, and the NoSQL distributed database, HBase. Spark and
MapReduce perform data processing.
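Hadoop MapReduce jobs are normally written against the Hadoop API (typically in Java), but the underlying programming model can be sketched in plain Python, as below: a map step emits (key, value) pairs, a shuffle groups them by key, and a reduce step aggregates each group. The word-count example and input lines are hypothetical and only illustrate the model, not Hadoop code.

from collections import defaultdict

lines = ["big data needs big clusters", "clusters process big data"]

# Map: emit (word, 1) for every word in every input line.
mapped = [(word, 1) for line in lines for word in line.split()]

# Shuffle: group the pairs by key (word).
grouped = defaultdict(list)
for word, count in mapped:
    grouped[word].append(count)

# Reduce: sum the counts for each word.
word_counts = {word: sum(counts) for word, counts in grouped.items()}
print(word_counts)   # e.g. {'big': 3, 'data': 2, 'clusters': 2, ...}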
3. Computing and analyzing data
The third stage is to Analyze. Here, the data is analyzed by processing frameworks such as Pig,
Hive, and Impala. Pig converts the data using map and reduce and then analyzes it. Hive is also
based on map and reduce programming and is most suitable for structured data.
4. Visualizing the results
The fourth stage is Access, which is performed by tools such as Hue and Cloudera Search. In this
stage, the analyzed data can be accessed by users.
