
MANAGING AI AND ML

IN THE ENTERPRISE 2020

April 2020

COPYRIGHT ©2020 CBS INTERACTIVE INC. ALL RIGHTS RESERVED.



TABLE OF CONTENTS

AI for business: What’s going wrong, and how to get it right

Research: AI/ML projects see growth in business operations

CIO Jury: 83% of tech leaders have no policy for ethically using AI

The true costs and ROI of implementing AI in the enterprise

Developers--it’s time to brush up on your philosophy: Ethical AI is the big new thing in tech

AI vs your career? What artificial intelligence will really do to the future of work

AI and ML move into financial services

Healthcare and artificial intelligence: How Databricks uses Apache Spark to analyze huge data sets

How artificial intelligence and machine learning are used in hiring and recruiting

Setting the AI standard: What it could look like in Australia

What is AI? Everything you need to know about Artificial Intelligence

INTRODUCTION
As artificial intelligence and machine learning reshape critical sectors, such as healthcare, finance, human
resources, and public safety, CXOs must understand the ethical issues of using AI and ML and ensure that their
algorithms aren’t sources of unconscious bias.

AI FOR BUSINESS: WHAT’S GOING WRONG, AND HOW TO GET IT RIGHT
BY DAPHNE LEPRINCE-RINGUET/ZDNET

Despite years of hype (and plenty of worries) about the all-conquering power of Artificial Intelligence (AI), there still remains a significant gap between the promise of AI and its reality for business.

Tech firms have pitched AI’s capabilities for years, but for most organisations, the benefits of AI remain elusive.

It’s hard to gauge the proportion of businesses that are effectively using artificial intelligence today, and to what
extent. Adoption rates shown in recent reports fall anywhere between 20% and 30%, with adoption typically
loosely defined as “implementing AI in some form”.

A survey led by KPMG among 30 of the Global 500 companies found that although 30% of respondents
reported using AI for a selective range of functions, only 17% of the companies were deploying the technology
“at scale” within the enterprise.

But what all the reports point to is that businesses’ interest in AI is growing. According to research firm
Gartner, the number of companies implementing AI-related technologies in the past four years has grown by
270%.


“It’s not an issue of awareness, I can assure you,” Johan Aurik, partner at global strategic consulting firm Kearney, told ZDNet. “When you talk to executives, it’s evident that they all go to all the AI conferences, they all read about it, they are all aware of what the technology can do.”

“Everyone talks about it, but no-one’s actually done it,” Aurik said.

The promise of AI is certainly tempting. The much greater growth that the technology can accomplish has
been extensively endorsed by experts, to the point that it would be difficult for any executive to remain unaware
of the hype.

One application of AI that’s already well documented is the use of machine learning for marketing and sales, which analysts estimate could generate up to $2.6 trillion in value worldwide. Using AI, businesses can gain much better insight into customer behaviour to design personalized offers. Brick-and-mortar shops’ sales could increase by up to 2% as a result.

(Image: Gartner, Inc. Most businesses are aware of the huge potential of AI, and are planning to accelerate adoption.)

Kellogg Company, the American food manufacturer responsible for your morning Coco Pops and midnight Pringles, is pioneering the use of AI to gain further insights into behavioral science.

Together with Qualcomm, Kellogg has developed eye tracking technology embedded in a virtual reality (VR)
headset, which picks up on customers’ behaviours when they are shopping. As users browse a simulated store,
the device gathers data about the products that attract their attention or where they gaze the longest -- data
that’s then fed to machine-learning algorithms to understand what triggers a buying decision.
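
To make that pipeline concrete, here is a minimal sketch -- not Kellogg’s or Qualcomm’s actual system, and with entirely made-up feature names and synthetic data -- of how per-product gaze features from a simulated store might be fed to a machine-learning model that predicts whether a shopper ends up buying:

```python
# Illustrative sketch only: the features and data are hypothetical,
# not Kellogg's or Qualcomm's actual eye-tracking pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# One row per (shopper, product): dwell time on the product (seconds),
# number of fixations, and shelf height (0 = bottom, 1 = eye level, 2 = top).
n = 2_000
X = np.column_stack([
    rng.exponential(2.0, n),   # dwell_seconds
    rng.poisson(3, n),         # fixation_count
    rng.integers(0, 3, n),     # shelf_level
])
# Hypothetical label: did the shopper put the product in the basket?
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n) > 4).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
# Feature importances hint at what "triggers" a buying decision,
# e.g. whether dwell time or shelf placement matters more.
print(dict(zip(["dwell_seconds", "fixation_count", "shelf_level"],
               model.feature_importances_.round(3))))
```

The feature importances printed at the end are the kind of signal a behavioural-science team could use to form hypotheses about what drives a purchase.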

Stephen Donajgrodzki, director of behavioural science at Kellogg, explained at a recent conference in London
that AI helps his team understand exactly why people act; and once the behavioural mechanisms of customers
are crystal clear, it becomes possible to influence their buying decision.

“AI and data are really useful for understanding what we should be looking at, hypothesising on where we should create change, and then evaluate that change,” said Donajgrodzki. “We are 400 times more likely to change someone’s behaviour if we understand the specifics of that person’s behaviour.”

Case in point: Kellogg tested the new VR headset for a pilot trial when the company launched a new Pop Tarts Bites product, with conclusive results. The algorithm determined that, contrary to popular belief, the best place to display the new product in the shop was on lower shelves -- a recommendation that led to an 18% increase in sales during the trial.

“These systems are helping us reach specific targets more easily,” said Donajgrodzki. “As soon as you have a better overview of the overall situation, you get a much better idea of how to change individual behaviours.”

It’s not only about managing human actions. In data-heavy industries like manufacturing, AI’s ability to process and analyse huge quantities of information also holds the promise of better efficiency all the way through the supply chain.

Take the oil and gas industry, where information pours in every day about pipelines, offshore assets, reservoirs, fields and wells, and so on. Much can be gained from a technology that can accurately process and interpret data in real-time; for example, anomalies or breakdowns can be detected faster and enable predictive maintenance. Chevron, in fact, is the latest energy giant to have made the news after announcing a new initiative together with Microsoft, to build a cloud-based platform leveraging data analytics to monitor and optimise field performance.

(Image: McKinsey Global Institute report, ‘How artificial intelligence can deliver real value to companies’. Artificial intelligence is likely to generate the most growth in marketing and sales, and in supply-chain management and manufacturing.)

Despite the evidence that AI brings growth, Kearney’s Aurik stressed that such examples are far from being the norm across industries. “It does differ across sectors, with energy and medical companies increasingly adopting AI,” he conceded, “but those are exceptions. The bulk of established companies are still running on Excel. Most industries -- financial services, consumer goods -- are very limited in their deployment.”

Fear is commonly designated as the main reason that businesses are reluctant to take up AI, with the narrative that ‘robots will take our jobs’ understandably putting off CEOs and their workforces. Yet polls tend to show that the ‘fully automated labour’ alarm bell is both unjustified and losing traction. A recent report found that up to 87% of organisations are planning to increase or maintain employee numbers after the adoption of automation.

Another issue is the question of definitions. AI can cover a very broad range of technologies, ranging from the data-crunching of machine learning right up to cutting-edge work on artificial general intelligence -- a.k.a. machines that think like humans. That’s a pretty broad area of technology. When you throw in a bunch of tech companies and start-ups keen to make an impact by rebadging their efforts as ‘AI’ to attract zeitgeist-hunting customers, then the true state of AI usage becomes even harder to measure.

Another major barrier to the take up of AI is the lack of skills, which over half of business leaders name as the
top challenge to adoption.

Mark Esposito, professor of business and economics at Harvard University’s Hult Business School, explained
that bridging the gap between hype and reality is indeed giving those businesses a hard time. “There was a first
trend, where organizations thought of AI as a proof of concept,” Esposito told ZDNet. “The problem now is
that a number of them can’t convert their idea into code and algorithms, and implement them alongside their
existing infrastructures.”

Among the selection of Global 500 companies surveyed by KPMG, the five businesses with the most advanced
AI capabilities have, on average, 375 full-time employees working on the technology. They are spending about
$75 million each on AI talent, and expect the bill to grow over the next three years.

Needless to say, this level of investment in new resources is beyond the reach of most organisations.

But even when they do secure the appropriate funds and workforce, businesses still seem to be stumbling their way through the deployment of AI. When it comes to applying the technology to real-world use-cases, organisations lack a clear strategy and ultimately, are incapable of making the most of the AI opportunity.

“Businesses tend to think of AI as a single, off-the-shelf technology,” said Esposito. “It’s not. The only AI that
works is bespoke AI. The magic of the technology happens when the objective is very narrowly defined.”

“You can’t engage with it by saying you want to digitize, or improve efficiencies -- that’s not narrow enough.
There is still a lack of clarity in what companies are trying to do with the tool.”

For Kearney’s Aurik, too, most organizations’ first mistake is to misplace their expectations of AI. Cost reduction? Efficiency? That’s too easy, according to the consultant. “Sure, you can make that business case easily,” he said, “but it’s a missed opportunity. The value of AI is not in transactional or efficiency initiatives. It’s in extending or growing new business lines.”

In other words, think bigger -- and some entrepreneurial-spirited business leaders are starting to. James Lee is one of them. After 20 years working as a professional lawyer, Lee realised how AI could fit into the legal sector, as a way to streamline the so-called ‘early discovery’ phase of a proceeding -- the time-consuming, meticulous task of going through a complaint to gather the information needed to draft a first response.

(Image: McKinsey Global Institute report, ‘How artificial intelligence can deliver real value to companies’. AI can create huge value for a variety of industries, provided that it’s deployed along with a specific and targeted strategy.)

Lee found out that he could train IBM’s Watson model with thousands of lawsuit complaints and responses to
it, to quickly pinpoint the entities and relationships that would be crucial to writing an early-phase draft. What
would take a human worker up to eight hours of labour, he said, could be done in a couple of minutes thanks
to the new technology.
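
The article doesn’t describe how the Watson model was configured; as a rough illustration of the underlying idea -- pulling the parties, dates and amounts out of a complaint so that a first response can be drafted -- the sketch below uses spaCy’s generic English model purely as a stand-in, not IBM Watson and not LegalMation’s production system:

```python
# Illustrative stand-in only: a generic NER model, not IBM Watson and not
# LegalMation's system. Requires:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy
from collections import defaultdict

nlp = spacy.load("en_core_web_sm")

complaint = (
    "Plaintiff Jane Roe filed suit against Acme Logistics Inc. in the "
    "Superior Court of California on March 3, 2019, alleging negligence "
    "and seeking $250,000 in damages."
)

doc = nlp(complaint)

# Group the entities an early-phase draft would need: parties, courts,
# dates, and money amounts.
entities = defaultdict(list)
for ent in doc.ents:
    entities[ent.label_].append(ent.text)

for label in ("PERSON", "ORG", "GPE", "DATE", "MONEY"):
    print(f"{label:6s}: {entities.get(label, [])}")
```

A production system would obviously go much further -- linking the extracted entities into relationships and mapping them onto a response template -- but the extraction step is where the hours of manual reading get saved.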

“At first, I was just trying to figure out where the pain points were,” Lee told ZDNet. “And then I discovered
this unbelievable opportunity. It’s a game-changer: apply that kind of efficiency to a whole portfolio, and the
gains can be huge.”


Lee founded his own company, LegalMation, to start selling the technology to corporate legal departments and
law firms, aimed at using AI to fix a specific problem that he had spotted.

For Aurik, the lack of imagination is effectively what is currently holding back AI. “The irony is that, for AI
to contribute more, we need humans to come on board,” he said. “The technology is there, but now we need
humans to use it in a creative way.”

It’s all well and good to call for creativity and imagination; but that doesn’t resolve the question of how to get started with AI -- in a way that businesses can actually engage with. When it comes to giving practical advice to CEOs, however, both Aurik and Esposito agree: think big, yes -- but start small. While Esposito suggested conducting pilots in small environments, Aurik recommended finding a specific business case where there’s money to be made -- “I don’t care what it is, make it a merger, a new market, anything” -- before slowly gaining momentum and “working your way up”, step by step.

Beginning with a contained use case will not only ensure that value is successfully generated down the line. As Esposito pointed out, it’s also the only way to incorporate good business practice from the earliest stages.

“Businesses are even more reluctant to adopt AI when they can foresee issues with transparency, compliance,
privacy, security, and so on,” he explained. “If you start with a pilot, with all these ethical requirements in mind,
and then scale it, it becomes business practice.”

Ethical AI, responsible technology, unbiased algorithms: there is mounting concern in the technology industry
with ensuring that innovations don’t spiral out of control. In this context, it’s easy to see why many businesses
may be reluctant to deploy an algorithm that could grow into an irresponsibly powerful and opaque tool.

In a survey, Gartner found that concerns with data scope or quality are the third biggest challenge to adopting
AI identified by respondents -- because unreliable data is certain to generate bias and undermine trust.

“People are worried about privacy and confidentiality, and it doesn’t help with the desire to engage with the
technology,” Aurik admitted. “That’s why getting the strategy right from the very start is crucial. It’s where all
those tech giants got it wrong, and why there is so much backlash now.”


Whether it’s through money, feature updates, or even through the creation of -- pretty unsuccessful -- AI ethics
committees, some corporations are fixing the ethical mess they have created after the damage has been done.
And researchers have noted that although such companies are now pushing their efforts in the field of ethics,
the notion of responsible tech ultimately challenges “the core logics” of their business model.

But it doesn’t have to go this way -- and if companies incorporate ethics in their strategy early enough, it won’t.
“It’s not easy at all,” said Esposito, “but if you start with a global project, you can be certain that there will be
loopholes. So you have to scale it down.”

So, to AI or not to AI? In fact, it doesn’t look like businesses are going to have much of a choice. Aurik
stressed that it’s in no-one’s interest to miss out on the revolution that algorithms are about to bring. “If you
don’t embrace it, you’ll be out of business,” he said.

Aurik predicts that it will be another few decades before the technology is fully deployed. But, at least at the
enterprise level, it seems that another ‘AI winter’ is not on the cards. Time to spring into action.


RESEARCH: AI/ML PROJECTS SEE GROWTH IN BUSINESS OPERATIONS
BY MELANIE WOLKOFF WACHSMAN/TECHREPUBLIC

Enthusiasm for artificial intelligence (AI) and machine learning (ML) remains high for 2020, as evidenced by an uptick in spending, development, and implementation of AI/ML projects across the enterprise. How companies manage those projects was the topic of a recent survey by ZDNet’s sister site, TechRepublic Premium.

Many businesses have advanced from evaluating where AI/ML fits in an operation to actually deploying the technology. Likewise, strategizing for such initiatives has moved away from C-level executives and into the hands of middle managers, who are responsible for ensuring project success.

More specifically, survey results showed that AI/ML projects were co-managed by IT and end business for
23% of respondents, 19% said IT managed projects, and data science departments managed AI/ML projects
for 11% of respondents. This is a shift from a similar survey in 2019, which reported 33% of AI/ML projects
were managed by IT.

Another noted difference from 2019 was the steps taken to ensure an AI/ML project’s success. In 2019, that meant performing small pilot projects and proofs of concept before proceeding with full implementation for 64% of respondents, while 14% invested in IT/end-user training, and 9% selected vendors/consultants with AI/ML expertise.

In 2020, these steps for success evolved into working with management to better identify business use cases
for AI/ML (52%), preparing/training IT staff (48%), and investing in data preparation, computing, and
automation processes (46%).

Concerns about AI/ML project implementation also changed from year to year. In 2019, the three biggest concerns about project implementation included: users unclear on project expectations (53%), IT lacking the skills needed for implementation and support (47%), and upper management lacking a good understanding of AI/ML (33%).

In 2020, the biggest concerns were not receiving business results to justify the investment (48%), staff readiness/difficulty finding AI/ML talent (38%), and implementation taking too long (37%).

Interestingly, in 2020, 54% of respondents said that their upper management is either very or somewhat knowledgeable about AI/ML, again representing a shift from needing buy-in for projects to actually implementing projects for results. According to survey respondents, 47% were applying AI/ML to business operations, 30% were applying it to marketing/sales, and 27% were applying the technology to engineering and IT.

The accompanying infographic contains selected details from the research. To read more findings, plus analysis, download the full report: Managing AI and ML in the enterprise 2020: Tech leaders increase project development and implementation (TechRepublic Premium subscription required).


CIO JURY: 83% OF TECH LEADERS HAVE NO POLICY FOR ETHICALLY USING AI
BY TEENA MADDOX/TECHREPUBLIC


Ethical questions around artificial intelligence (AI) have become part of the conversation in business as more organizations add machine learning (ML) to their toolkit. However, only a few have policies in place to make sure that AI is used ethically, according to a TechRepublic CIO Jury poll.

When asked, “Does your company have a policy for ethically using AI or machine learning?” 10 out of 12 tech leaders said no, while just two said yes. That means only 17% of tech leaders in TechRepublic’s informal poll have an ethics policy in place for AI and ML.

Exactly 12 months ago, TechRepublic asked the same question to its CIO Jury, and at that point, only one tech
leader had a policy in place for ethically using AI or machine learning. So, adoption is growing, but slowly.

Those weighing in on the “yes” side include Michael Ringman, CIO of TELUS International. Ringman said, “Technology like AI and machine learning play a key role in TELUS International’s ability to deliver seamless, consistent CX wherever and whenever our clients’ customers want it. But core to our people-first culture is a belief that AI and machine learning can be leveraged to enhance, not replace, the capabilities of our frontline team members. Yes, it’s ethical, but it also makes good business sense. Customers increasingly crave effortless, anticipatory, personalized experiences, and AI can enhance that when used as part of a BI strategy to provide a 360-degree view of the customer.”

Clément Stenac, CTO of Dataiku, said having an ethics policy is essential. “In fact, our platform (Dataiku) democratizes access to data and enables enterprises to build their own path to AI in a human-centric way. We promote ethical AI for customers by making data easy to understand for all teams, despite technical understandings, across organizations. The ethics of our company are defined based on this idea of human-centric AI. People who develop and deploy models must be aware of its potential shortcomings and bear the responsibility for its faults. At Dataiku, we offer extensive training on this subject to our employees. It’s critical for the creators of AI to recognize the importance of successfully empowering teams both with training and tools to build ethical AI algorithms.”

The other ten jury members all voted “no” but some were on the fence, including Steven Page, vice president
of IT for marketing and digital banking for Safe America. Page said at this time his company doesn’t have an
ethics policy, but “we are watching the trend in the use of AI.”

At Multiquip, Michael Hanken, vice president of IT, said there is no policy in place, but he’s considering it for a
pilot project that has recently launched.

At Ntirety, CEO Emil Sayegh said, “We do not have an ethical use policy on AI or machine learning yet, but we definitely should consider a company policy to protect our customers’ IT infrastructures as we are an early user of both machine learning and AI. A ‘do no harm’ policy and privacy boundaries of our customers’ behavioral patterns should be in place. This is complicated by the fact that our customers count on us to parse legitimate from nefarious traffic. AI and machine learning are so powerful that they could uncover trends in legitimate traffic that could violate confidential usage patterns from our customers. Furthermore, access to the usage and behavioral data trends uncovered by AI and machine learning could legally or illegally fall to, or be shared with, state or governmental institutions, further compromising privacy.”

And other jury participants don’t have one and there aren’t any plans to put one in place, as with Eric Shashoua,
founder and CEO of Kiwi for GSuite. Shashoua said his company doesn’t have an ethics policy in place:
“While we don’t use AI or machine learning directly at present, ethics in our context would relate to our users
and their data. Since we’re a company with a product built for communication, we have a strict policy of not
collecting user data, and being in strict agreement with GDPR and even the spirit of that law.”

Here are this month’s CIO Jury participants:

• John Gracyalny, vice president of digital member services, Coast Central Credit Union
• Craig Lurey, CTO and co-founder, Keeper Security
• Michael Hanken, vice president of IT, Multiquip
• Dan Gallivan, director of information technology, Payette
• Emil Sayegh, CEO, Ntirety
• Kris Seeburn, independent IT consultant, evangelist, and researcher
• Michael Ringman, CIO, TELUS International
• Clément Stenac, CTO, Dataiku
• Michael R. Belote, CTO, Mercer University
• Steven Page, vice president of IT for marketing and digital banking for Safe America
• Eric Shashoua, founder and CEO of Kiwi for GSuite
• Joel Robertson, chief information officer, King University


THE TRUE COSTS AND ROI OF IMPLEMENTING AI IN THE ENTERPRISE
BY MARY SHACKLETT/TECHREPUBLIC CONTRIBUTOR

In 2019, web content evaluator MarketMuse revealed that 80% of IT and corporate business leaders wanted to learn more about the cost of implementing existing artificial intelligence (AI) technology; 74% were interested in how much more it would cost over present expenditure levels to implement AI in their enterprises; and 69% wanted more information about how to measure the return on investment (ROI) for a new AI solution.

The picture is decidedly different in 2020.

In 2020, most C-level executives have a working understanding of AI and have set their AI strategies. AI initiatives are being pushed down to the operational levels in their organizations, where middle management in
business, IT, and data science are now expected to develop and implement AI for the business.

At the same time, there is still doubt, especially in the minds of implementers, about how well AI users and
promoters in the end business understand the business cases where AI can be applied. In 2019, companies
addressed this concern by running large numbers of AI pilot projects, with no particular expectations for the
AI to be productive on the first project go-arounds.

In 2020, expectations from upper management are different. Now, AI projects are being moved into operations
with high expectations for results.

Unfortunately, trepidation still remains that business users don’t understand the best ways to put AI into
productive use for the business -- and that ROI won’t be realized.

Companies can minimize this risk by obtaining help from outside consultants and AI vendors that have expertise in specific company verticals and business areas. There are also cases where AI vendors have taken the lead by pre-packaging AI/ML use cases toward specific industry verticals. One example is IBM Watson for healthcare, which is now a tried and pre-packaged solution that hospitals and medical clinics can use to assist in medical diagnoses.

These pre-packaged solutions must still be tailored to company operations, but at least companies aren’t starting
from scratch on every project, and there is vendor-provided guidance that business users, IT, and data science
can follow.

JUSTIFYING THE INVESTMENT


In 2019, many companies spent time developing ROI targets for AI projects in the project pilot phase. In 2020, many of these companies are tasked with implementing these projects in production to confirm that the original projected ROIs are on target.

Popular areas of AI implementations include:

• AI for equipment failure prediction and maintenance cycle scheduling, which furthers the goal of 24/7 operations without failures.
• AI for automating underwriting and decision making for loans and policies in banking and insurance, as well as for early detection and prediction of fraud.
• AI to assist in medical diagnosis.
• AI for security breach and intrusion detection and prevention, and also for data center hardware, software, and environmental maintenance.
• AI for the prediction of consumer patterns and preferences for marketing and sales and for engineering product development.

A SHIFT FROM HIRING TALENT TO RETAINING EMPLOYEES


As part of their operational AI implementation efforts, companies are also making commitments to retrain internal technical and business staff to work with AI.

There are several reasons for this:

1. Companies have had a difficult time finding or affording AI talent in the open market.
2. As AI is embedded at the operational levels of organizations, business processes and how business is conducted are changing. Business users and IT staff members who already understand business and system processes are in the best positions to change them.
3. By engaging employees and committing to training, companies decrease the fear of AI that many employees have -- such as endangerment of their employment.

The ROI deliverables from these actions are less tangible, but they can be significant. Among them: investments in company people assets; savings on recruitment fees and high salaries for outside AI talent; and improved morale and employee retention because employees see their companies investing in them.

These elements should also be baked into ROI formulas as positive returns, but they are frequently missed.

5 KEY TAKEAWAYS
1. Company management now expects operations to begin showing the ROI that was projected for AI projects
Showing this ROI can be done by benchmarking a given business process against the same process with AI, to demonstrate efficiencies such as saved costs or hours, or more accurate diagnoses that can improve patient outcomes.

Companies can also improve ROI by leveraging the AI to improve other business processes. For example, if AI
is used to maximize company logistics and transportation routes, it can potentially be applied to manufacturing
and distribution to improve operational routings of products.
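
As a rough illustration of that benchmarking approach, the sketch below nets the value of hours saved by an AI-assisted process against the annual cost of running it. All figures are hypothetical placeholders, not data from any company in this report:

```python
# Hypothetical figures for illustration only -- substitute your own
# benchmark data for the process before and after AI assistance.
def simple_roi(baseline_hours, ai_hours, cases_per_year,
               hourly_cost, annual_ai_cost):
    """Annual net return of an AI-assisted process vs. its manual baseline."""
    hours_saved = (baseline_hours - ai_hours) * cases_per_year
    gross_savings = hours_saved * hourly_cost
    net_return = gross_savings - annual_ai_cost
    roi_pct = 100 * net_return / annual_ai_cost
    return hours_saved, gross_savings, net_return, roi_pct

# Example: a draft that takes 8 hours manually vs. 0.5 hours with AI review,
# over 1,200 cases a year, at a $90/hour loaded cost, against $400,000/year
# in licensing, integration and support.
hours, savings, net, roi = simple_roi(8.0, 0.5, 1_200, 90.0, 400_000)
print(f"Hours saved: {hours:,.0f}  Gross savings: ${savings:,.0f}")
print(f"Net return: ${net:,.0f}  ROI: {roi:.0f}%")
```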

2. To successfully leverage ROI, integration of processes, systems, and people is critical
AI/ML systems don’t operate in a vacuum. Vendors know this, and many will tell you that their systems have
a complete set of APIs that interoperate with all systems. This works until the AI must work with an in-house
highly customized or legacy system. When this happens, it is usually IT that must hand-code system interfaces.

If many different types of system interfaces are involved, there are also integration tools, and vendors that can
simplify the task.
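
To give a flavour of the kind of glue code involved, here is a minimal hand-coded adapter. The endpoint, authentication scheme and legacy CSV layout are hypothetical placeholders, not any particular vendor’s real API:

```python
# Hypothetical integration sketch: the endpoint, auth scheme and the legacy
# CSV layout are placeholders, not a specific vendor's real interface.
import csv
import requests

VENDOR_URL = "https://ai-vendor.example.com/v1/score"   # placeholder
API_KEY = "REPLACE_ME"

def fetch_scores(record_ids):
    """Call the vendor's scoring API for a batch of record IDs."""
    resp = requests.post(
        VENDOR_URL,
        json={"ids": record_ids},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["scores"]          # assumed response shape

def export_for_legacy(scores, path="legacy_import.csv"):
    """Write scores in the fixed-column CSV the legacy system expects."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["RECORD_ID", "AI_SCORE", "SOURCE"])
        for row in scores:
            writer.writerow([row["id"], row["score"], "AI_VENDOR"])

if __name__ == "__main__":
    export_for_legacy(fetch_scores(["A-1001", "A-1002", "A-1003"]))
```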

3. AI projects are iterative and never ending


AI emulates the human mind, but at warp speed. It is only as good as the algorithms and data that are fed to it.

Organizations recognize these key risk factors. For this reason, more companies are investing in data cleaning and preparation. As AI is placed into production, they are also evaluating whether it will achieve the ROI projected for it when it only reaches a level of 95% accuracy compared with the results that would have been attainable from a company or industry expert, or other authoritative source, without the assistance of AI.

As AI is installed in company operations and strategic forecasting, it remains to be seen whether a 5% chance of error will be sufficient for meeting ROI goals. For now, humans work alongside AI to provide input and to make most final decisions (e.g., a surgeon or a radiologist interprets an MRI alongside what the AI says about it, and then makes a decision regarding patient treatment).

4. Infrastructure and accounting costs must be accounted for in the ROI


ROI is well on its way when the AI/ML reduces time to diagnosis, saves man-hours, and hopefully, reduces
margins for error.

Unfortunately, this initial ROI doesn’t factor in the cost of obtaining more compute power, storage and so on,
to support the new solution. Nor does it include time for restructuring business processes, revising surrounding
systems, integrating these disparate systems with the new AI platform, training IT and end business users,
consumption of energy and data center costs, initial implementation costs, licensing, etc. These setup and
ongoing support costs must also be factored into the ROI equation to ensure that you are still achieving
positive ROI results over time.

One way the risk of AI ROI failure can be mitigated is by communicating total costs to management and keeping management informed.

Champions of AI projects should also get together with finance and determine long-term ROI projections
over a period of several years. These long-term projections should take into account every corporate asset that
is required to run the AI/ML, such as new equipment/software, cloud costs, energy and data center costs,
training costs, system and business process revisions and integration costs -- and even extra manpower that
might be needed to run the new technology. The goal is achieving an ROI that remains in the black over time
and that builds on its value by continuing to enrich business processes and results.
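
A minimal sketch of such a multi-year projection might look like the following; every figure is a hypothetical placeholder to be replaced with the numbers that finance and the project champions agree on:

```python
# All figures are hypothetical placeholders; replace them with the costs
# and benefits agreed for your own deployment.
setup_costs = {              # one-off, incurred before year 1
    "hardware_and_software": 250_000,
    "initial_integration":   120_000,
    "process_redesign":       80_000,
}
annual_costs = {             # recurring, every year
    "cloud_and_energy":       90_000,
    "licensing":              60_000,
    "training":               30_000,
    "extra_staff":           110_000,
}
annual_benefit = 450_000     # e.g. hours saved plus error reduction, per year

years = 5
cumulative = -sum(setup_costs.values())
for year in range(1, years + 1):
    cumulative += annual_benefit - sum(annual_costs.values())
    print(f"Year {year}: cumulative net ROI = ${cumulative:,.0f}")
```

With these particular numbers the deployment only moves into the black in year three -- exactly the kind of picture a single-year ROI calculation would miss.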

5. Don’t overlook other positive ROIs that AI can deliver


If companies invest in their employees as part of their AI initiatives, they have a better chance of retaining
employees and of building the skills and capabilities of their human workforces. These areas should be
included as positive returns on investment in ROI formulas, but often aren’t.


DEVELOPERS--IT’S TIME TO BRUSH UP ON YOUR PHILOSOPHY: ETHICAL AI IS THE BIG NEW THING IN TECH
BY DAPHNE LEPRINCE-RINGUET/ZDNET

The tech industry is entering a new age, one in which innovation has to be done responsibly. “It’s very novel,” says Michael Kearns, a professor at the University of Pennsylvania specialising in machine learning and AI. “The tech industry to date has largely been amoral (but not immoral). Now we’re seeing the need to deliberately consider ethical issues throughout the entire tech development pipeline. I do think this is a new era.”

AI technology is now used to inform high-impact decisions, ranging from court rulings to recruitment processes, through profiling suspected criminals or allocating welfare benefits. Such algorithms should be able to make decisions faster and better -- assuming they are built well. But increasingly the world is realising that the datasets used to train such systems still often include racial, gender or ideological biases, which -- as per the saying “garbage in, garbage out” -- lead to unfair and discriminatory decisions. Developers once might have believed their code was neutral or unbiased, but real-world examples are showing that the use of AI, whether because of the code, the data used to inform it or even the very idea of the application, can cause real-world problems.
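
One simple way to surface that kind of bias before deployment is to compare a model’s positive-decision rates across demographic groups. The sketch below computes a basic demographic-parity gap on made-up data; it is only one of many possible fairness checks, not a standard mandated by any of the frameworks discussed here:

```python
# Hypothetical audit sketch: column names, data and the 0.1 threshold are
# made up; demographic parity is just one of many fairness metrics.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [ 1,   1,   0,   0,   0,   1,   0,   1 ],
})

rates = decisions.groupby("group")["approved"].mean()
gap = rates.max() - rates.min()

print(rates)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.1:   # the threshold is a policy choice, not a universal rule
    print("Warning: approval rates differ substantially across groups.")
```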

From Amazon’s recruitment engine penalising resumés that include the word ‘women’s’, to the UK police
profiling suspected criminals based on criteria indirectly linked to their racial background, the shortcomings of
algorithms have given human rights groups reason enough to worry. What’s more, algorithmic bias is only one side of the problem: the ‘ethics of AI’ picture is indeed a multifaceted one.

To mitigate the unwelcome consequences of AI systems, governments around the world have been working on
drafts, guidelines and frameworks designed to inform developers and help them come up with algorithms that
are respectful of human rights.

The EU recently released a strategy for AI that “puts people first” with “trustworthy technology”. Chinese scientists last year published the Beijing AI Principles, which they wrote in partnership with the government, and which focus on the respect of human rights.

This year in the US, ten principles were proposed to build public trust in AI; and just a few months before, the Department of Defense released draft guidelines on the deployment of AI in warfare, which insisted on safeguarding various principles ranging from responsibility to reliability. The UK’s Office for AI similarly details the principles that AI should follow, and the government also published a Data Ethics Framework with guidelines on how to use data for AI.

To date, most guidelines have received a positive, albeit measured, response from experts, who have more often
than not stressed that the proposed rules lacked substance. A recent report from an independent committee in
the UK, in fact, found that there is an “urgent need” for practical guidance and enforceable regulation from the
government when it comes to deploying AI in the public sector.

Despite the blizzard of publications, Christian de Vartavan, an advisor to the UK’s All-Party Parliamentary Group (APPG) on AI, tells ZDNet that governments are struggling to stay on top of the subject: “That’s
because it is still so early, and the technology develops so fast, that we are always behind. There is always a new
discovery, and it’s impossible to keep up.”

What practically all the frameworks released so far do is point to the general values that developers should keep
in mind when programming an AI system. Often based on human rights, these values typically include fairness
and absence of discrimination, transparency in the algorithm’s decision-making, and holding the human creator
accountable for their invention.

Crucially, most guidelines also insist that thought be given to the ethical implications of the technology
from the very first stage of conceptualising a new tool, and all the way through its implementation and
commercialisation.

This principle of ‘ethics by design’ goes hand in hand with that of responsibility and can be translated, roughly,
as: ‘coders be warned’. In other words, it’s now on developers and their teams to make sure that their program
doesn’t harm users. And the only way to make sure it doesn’t is to make the AI ethical from day one.


The trouble with the concept of ethics by design is that tech wasn’t necessarily designed for ethics. “This is
clearly well-meaning, but likely not realistic,” says Ben Zhao, professor of computer science at the University
of Chicago. “While some factors like bias can be considered at lower levels of design, much of AI is ethically
agnostic.”

Big tech companies are waking up to the problem and investing time and money in AI ethics. Google’s CEO
Sundar Pichai has spoken about the company’s commitment to ethical AI and the search giant has published an
open-source tool to test AI for fairness. The company’s Privacy Sandbox is an open web technology that allows
advertisers to show targeted ads without having access to users’ personal details.

Google even had a go at creating an ethics committee, called the Advanced Technology External Advisory Council (ATEAC), founded last year to debate the ethical implications of AI. For all of the company’s goodwill, however, ATEAC was shut down just a few weeks after it launched.

Pichai is not the only one advocating for greater ethics: most tech giants, in fact, have joined the party. Apple, Microsoft, Facebook, Amazon -- to name but a few -- have all vowed in one way or another to stick with human rights when deploying AI systems.

Big tech might be slightly too late, according to some experts. “The current trend of Silicon Valley corporations deciding to empower ethics owners can be traced to a series of crises that have embroiled the industry in recent years,” note researchers at Data & Society in a paper on “corporate logics”.

Google’s Project Maven flop is one example of the complexities involved. Two years ago, the search engine
considered selling AI software to improve drone video analysis for the US Department of Defense. The talks
promptly led to 4,000 staff petitioning for Google to quit the deal, and a dozen employees walking out, because
they objected to their work potentially being used in such a way.

As a result, Google reported that it wouldn’t renew its contract with the Pentagon, and published a set of
principles stating that it would not design or deploy AI for weapons or any other technologies whose purpose
was to harm people.

But the University of Chicago’s Ben Zhao feels we should also cut the tech industry some slack. “It certainly
has seemed that big tech has been fixing the ethical mess they have created after the damage has been done,” he
conceded, “but I don’t believe the damage is always intentional. Rather, it is due to a lack of awareness of the
potential risks involved in technology deployed at scales we have never seen before.”


Arguably, Google could have foreseen the unintended consequences of selling an AI system to the Pentagon.
But the average coder, who designs, say, an object-recognition algorithm, is not trained to think about all the
potential misuses that their technology could lead to, should the tool fall into malicious hands.

Anticipating such consequences, and taking responsibility for them, is a pretty novel requirement for most of the industry. Zhao is adamant that the concept is “fairly new” to Silicon Valley, and that ethics have rarely been a focus in the past.

He is not the only one to think so. “Think about the people who are behind new AI systems. They are tech guys, coders -- many of them have no background in philosophy,” said De Vartavan. Although the majority of developers are keen to program systems for the greater good, they are likely to have no relevant training nor expertise when it comes to incorporating ethics into their work.

Perhaps as a result, technology firms have actively come forward to ask public bodies to take action and provide stronger guidelines in the field of ethics. Sundar Pichai insisted that government rules on AI should complement the principles published by companies; while Microsoft’s president Brad Smith has repeatedly called for laws to regulate facial recognition technology.

Law-making remains a delicate craft, and balancing the need for rules with the risk of stopping innovation is a
thorny task. The White House, for its part, has made the US government’s position clear: “light-touch” rules
that won’t “needlessly hamper AI innovation” are deemed preferable by the Trump administration.

The University of Pennsylvania’s Michael Kearns similarly leans towards “a combination of algorithmic
regulation and industry self-discipline”.

De Vartavan argues that companies need to be clearer about the decisions they make with their code.

“What the government should insist on is that companies start with thinking and defining the sort of values
and choices they want to put into their algorithms, and then explain exactly what they are”. From there, users
will be able to make an informed choice as to whether or not to use the tool, he says.


De Vartavan is confident that AI developers are on track to start designing ethics into their inventions “from
the ground up”. The technology industry, it would seem, is slowly starting to realise that it doesn’t exist in a
vacuum; and that its inextricable links to philosophy and morality cannot be avoided.

Earlier this year, in fact, IBM and Microsoft joined up with an unexpected third player in a call for ethical AI. None other than the president of the Pontifical Academy for Life, Archbishop Vincenzo Paglia, added his
signature to the bottom of the document, asking for a human-centred technology and an “algor-ethical” vision
across the industry.

Rarely has the metaphysics of tech been more real. And as AI gains ever-more importance in our everyday
lives, it is increasingly looking like the technology industry is embarking on a new chapter -- one where a
computer engineering degree mixes well with a philosophy textbook.


AI VS YOUR CAREER? WHAT ARTIFICIAL INTELLIGENCE WILL REALLY DO TO THE FUTURE OF WORK
BY DAPHNE LEPRINCE-RINGUET/ZDNET

Jill Watson has been a teaching assistant (TA) at the Georgia Institute of Technology for five years now, relentlessly helping students day and night with all kinds of course-related inquiries; but for all the hard work she has done, she still can’t qualify for outstanding TA of the year.

That’s because Jill Watson, contrary to many students’ belief, is not actually human.

Created back in 2015 by Ashok Goel, professor of computer science and cognitive science at the Institute, Jill Watson is an artificial system based on IBM’s Watson artificial intelligence software. Her role consists of answering students’ questions – a task which she remarkably carries out with a 97% accuracy rate, for inquiries ranging from confirming the word count for an assignment, to complex technical questions related to the content of the course.
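
The article doesn’t detail Jill Watson’s internals, but the general pattern -- answering routine questions by matching them against previously answered ones, and deferring to a human when confidence is low -- can be sketched with a simple retrieval baseline. The forum data and similarity threshold below are hypothetical, and this is not the actual Jill Watson system or IBM Watson’s API:

```python
# Illustrative sketch only: a TF-IDF retrieval baseline, not the real
# Jill Watson system or IBM Watson's APIs.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical archive of previously answered forum questions.
past_questions = [
    "What is the word count for assignment 1?",
    "When is the midterm exam?",
    "Which Python version should we use for the projects?",
]
answers = [
    "Assignment 1 should be 1,500 words, plus or minus 10%.",
    "The midterm is in week 8; details are on the syllabus page.",
    "Use Python 3.8 or later for all projects.",
]

vectorizer = TfidfVectorizer().fit(past_questions)
archive_vectors = vectorizer.transform(past_questions)

def answer(question, min_similarity=0.3):
    """Return the stored answer for the closest past question, if confident."""
    sims = cosine_similarity(vectorizer.transform([question]), archive_vectors)[0]
    best = sims.argmax()
    if sims[best] < min_similarity:
        return None   # defer to a human TA instead of guessing
    return answers[best]

print(answer("How many words does assignment 1 need to be?"))
```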

And she has certainly gone down well with students, many of whom, in 2015, were “flabbergasted” upon
discovering that their favorite TA was not the serviceable, human lady that they expected, but in fact a
cold-hearted machine.

What students found an amusing experiment is the sort of thing that worries many workers. Automation,
we have been told time and again, will displace jobs; so are experiments like Jill Watson the first step towards
unemployment for professionals?

In fact, it’s quite the contrary, Goel tells ZDNet. “Job losses are an important concern – Jill Watson, in a way,
could replace me as a teacher,” he said. “But among the professors who use her, that question has never come
up, because there is a huge need for teachers globally. Instead of replacing teachers, Jill Watson augments and
amplifies their work, and that is something we actually need.”

The AI was originally developed for an online masters in computer science, where students interact with
teachers via a web discussion forum. Just in the spring of 2015, noticed Goel, 350 students posted 10,000 messages to the forum; to answer all of their questions, he worked out, would have taken a real-life teacher a year, working full time.

Jill Watson has only grown in popularity since 2015, said Goel, and she has now been deployed to a dozen other
courses; building her up for a new class takes less than ten hours. And while the artificial TA, for now, is only used
at Georgia Institute of Technology, Jill Watson could change the education game if she were to be scaled globally.
With UNESCO estimating that an additional 69 million teachers are needed to achieve sustainable development
goals, the notion of “augmenting” and “amplifying” teachers’ work could go a long way.

The automation of certain tasks is not such a scary prospect for those working in education. And perhaps
neither is it a risk to the medical industry, where AI is already lending a helping hand with tasks ranging
from disease diagnosis to prescription monitoring – a welcome support, rather than a looming threat, as the
overwhelming majority of health services across the world report staff shortages and lack of resources even at
the best of times.

But of course, not all professions are in dire need of more staff. For many workers, the advent of AI-powered technologies seems to be synonymous with permanent lay-off. Retailers are already using robotic fulfillment systems to pick orders in their warehouses. Google’s project to build autonomous vehicles, Waymo, has launched its first commercial self-driving car service in the US, which in the long term will remove the need for a human taxi driver. Ford is even working on automating delivery services from start to finish, with a two-legged, two-armed robot that can walk around neighborhoods carrying parcels from the delivery vehicle right up to your doorstep.

Advancements in AI technology, therefore, don’t bode well for all workers. “Nobody wants to be out of a job,”
says David McDonald, professor of human-centered design and engineering at the University of Washington.
“Technological changes that impact our work, and thus, our ability to support ourselves and our families, are
incredibly threatening.”

“This suggests that when people hear stories saying that their livelihood is going to disappear,” he says, “that
they probably will not hear the part of the story that says there will be additional new jobs.”

Consultancy McKinsey estimates that automation will cause up to 800 million individuals around the world
to be displaced from their jobs by 2030 – a statistic that will sound ominous, to say the least, to most of the workforce. But the firm’s research also shows that in nearly all scenarios, and provided that there is sufficient investment and growth, most countries can expect to be at very near full employment by the same year.

The impact that artificial intelligence could have has to be seen as part of the bigger picture. McKinsey highlighted
that one of the countries that will face the largest displacement of workers is China, with up to 12% of the
workforce needing to switch occupations. But although 12% seems like a lot, noted the consultancy, it is still
relatively small compared with the tens of millions of Chinese who have moved out of agriculture in the past
25 years.

In other words, AI is only the latest news in the long history of technological progress – and as with all previous advancements, the new opportunities that AI will open will balance out the skills that the technology will have made out-of-date. At least that’s the theory; one that Brett Frischmann looks at in the book he co-authored, Re-engineering Humanity. It’s a project that’s been going on forever – and more recent innovations are building on the efficiencies pioneered by the likes of Frederick Winslow Taylor and Henry Ford.

“At one point, human beings used spears to fish. As we developed fishing technology, fewer people needed that skill and did other things,” he says. “The idea that there is something dramatically different about AI has to be looked at carefully. Ultimately, data-driven systems, for example as a way to optimize factory outputs, are only a ramped-up version of Ford and Taylor’s processes.”

Seeing AI as simply the next chapter of tech is a common position among experts. The University of
Washington’s McDonald is equally convinced that in one form or another, we have been building systems to
complement work “for over 50 years”.

So – where does the big AI scare come from? A large part of the problem, as often, comes down to misunderstanding. There is one point that Frischmann was determined to clarify: people do tend to think, and wrongly so, that the technology is a force that has its own agenda; one that involves coming against us and stealing our jobs.

“It’s really important for people to understand that the AI doesn’t want anything,” he said. “It’s not a bad guy.
It doesn’t have a role of its own, or an agenda. Human beings are the ones that create, design, damage, deploy,
control those systems.”


In reality, according to McKinsey, fewer than 5% of occupations can be entirely automated using current
technology. But over half of jobs could have 30% of their activities taken on by AI. More than robots taking
over, therefore, it looks like the future will be about task-sharing.

Gartner previously reported that by 2022 one in five workers engaged in non-routine tasks will rely on AI to
get work done. The research firm’s analysts forecasted that combining human and artificial intelligence would
be the way forward to maximize the value generated by the technology. AI, said Gartner, will assist workers in
all types of jobs, from entry-level to highly-skilled.

The technology could become a virtual assistant, an intern, or another kind of robo-employee; in any case, it
will lead to the development of an “augmented” workforce, whose productivity will be enhanced by the tool.

For Gina Neff, associate professor at the Oxford Internet Institute, delegating tasks to AI will only bring
about a brighter future for workers. “Humans are very good at lots of tasks, and there are lots of tasks that
computers are better at than we are. I don’t want to have to add large lists of sums by hand for my job, and
thankfully I have a technology to help me do that.”

“Increasingly, the conversation will shift towards thinking about what type of work we want to do, and how
we can use the tools we have at our disposal to enhance our capacity, and make our work both productive and
satisfying.”

As machines take on tasks such as collecting and processing data, which they already carry out much better
than humans, workers will find that they have more time to apply themselves to projects involving the cognitive
skills – logical reasoning, creativity, communication – that robots (at least currently) lack.

Using technology to augment the human value of work is also the prospect that McDonald has in mind. “We
should be using AI and complex computational systems to help people achieve their hopes, dreams and goals,”
he said. “That is, the AI systems we build should augment and extend our social and our cognitive skills and
abilities.”

There is a caveat. For AI systems to effectively bolster our hopes, dreams and goals, as McDonald said, it is
crucial that the technology is designed from the start as a human-centered tool – that is, one that is made
specifically to fulfil the interests of the human workforce.

Human-centricity might be the next big challenge for AI. Some believe, however, that so far the technology has
not done such a good job at ensuring that it enhances humans. In Re-engineering humanity, Frischmann, for
one, does not do AI any favors.


“Smart systems and automation, in my opinion, cause atrophy, more than enhancement,” he argued. “The
question of whether robots will take our jobs is the wrong one. What is more relevant is how the deployment
of AI affects humans. Are we engineering unintelligent humans, rather than intelligent machines?”

It is certainly a fine line, and going forward, will be a delicate balancing act. For Oxford Internet Institute’s
Neff, making AI work in humans’ best interest will require a whole new category of workers, which she called
“translators”, to act as intermediaries between the real-world and the technology.

For Neff, translators won’t be roboticists or “hot-shot data scientists”, but workers who understand the
situation “on the ground” well enough to see how the technology can be applied efficiently to complement
human activity.

In an example of good practice, and of one way to bridge between humans and technology, Amazon last year launched an initiative to help retrain up to 1,300 employees who were being made redundant as the company deployed robots to its US fulfillment centers. The e-tailer announced that it would pay workers $10,000 to quit their jobs and set up their own delivery businesses, in order to tackle retail’s infamous last-mile logistics challenge. Tens of thousands of workers have since applied to the program.

In a similar vein, Gartner recently suggested that HR departments start including a section dedicated to “robot
resources”, to better manage employees as they start working alongside robotic colleagues. “Getting an AI to
collaborate with humans in the ways that we collaborate with others at work, every day, is incredibly hard,” said
McDonald. “One of the emerging areas in design is focused on designing AI that more effectively augments
human capacity with respect for people.”

From human-centered design to participatory design and user-experience design: for McDonald, humans have to be the main focus from the first stage of creating an AI.

And then there is the question of communication. At the Georgia Institute of Technology, Goel recognized
that AI “has not done a good job” of selling itself to those who are not inside the experts’ bubble.

“AI researchers like me cannot stay in our glass tower and develop tools while the rest of the world is anxious
about the technology,” he said. “We need to look at the social implications of what we do. If we can show that
AI can solve previously unsolvable problems, then the value of AI will become clearer to everyone.”

His dream for the future? To get every teacher in the world a Jill Watson assistant within five years; and, within the next decade, for every parent to have access to one too, to help children with after-school questions. In fact, it’s increasingly looking like every industry, not only education, will be getting its own version of a Jill Watson – and that we needn’t worry that she will be coming for our jobs anytime soon.


AI AND ML MOVE INTO FINANCIAL SERVICES


BY ESTHER SHEIN/TECHREPUBLIC CONTRIBUTOR

Depending on who you ask, artificial intelligence (AI) and machine learning (ML) are at different stages of maturity in the finance industry, but there is widespread agreement that the technologies are trending upward.

On a global scale, AI is expected to become a major business driver across the financial services industry, according to the World Economic Forum (WEF). Seventy-seven percent of finance executives anticipate AI "to possess high or very high overall importance to their businesses within two years," according to the findings of a WEF survey released in January.

Specifically, AI will be incorporated into generating new revenue potentially through new products and
processes; process automation; risk management; customer service; and client acquisition within the next two
years, according to 64% of the WEF survey respondents. The survey included responses from 151 financial
institutions in more than 30 countries.

AI DEPLOYMENT NASCENT IN FINANCIAL ORGANIZATIONS


For now, though, AI and ML remain in their infancy. The deployment of AI is still nascent in financial organizations, limited to 2% of financial planning and analysis organizations, according to Gartner's 2020 report, Digital Technology Adoption Trends in Finance.

Before implementation, the firm recommends that finance leaders focus on developing a strategy, business
case, and workforce development.

The promise of AI is that it “allows for precise revenue forecasting using a larger number of data inputs,”
the Gartner report said. In accounting, for example, “AI with machine learning capabilities can flag costly
accounts.”


FINANCIAL INDUSTRY TAKES PROACTIVE MEASURES WITH AI


This sentiment is shared by BillingPlatform, a provider of enterprise billing solutions, which has started incor-
porating AI into its platform.

In the finance function, the idea is that machine learning and AI can go through tremendous amounts of data to automate tasks more easily and make more accurate predictions than a human can, said Nathan Shinn, co-founder and chief strategy officer of BillingPlatform.

This can help a user see what their revenue will look like next year based on trends from last year, Shinn said,
adding that this will be easier for some organizations to predict than others, such as a business that has flat-rate
subscriptions as opposed to one with variable revenues, for example.

The technology, which gives a system the ability to come to conclusions on its own based on outcomes, can
detect things like customer churn based on criteria fed into a machine learning algorithm, Shinn said.

“So we can see this person has skipped a few payments, or in credit card processing, if we got a failed transaction, machine learning can help in not just retrying the credit card X number of times--it can look back and see when we were more successful [in processing the payment],” Shinn said. This can be done by looking at past payment patterns.

Fraud detection is another area where AI is expected to make a big splash, Shinn said.

Yet, “very little of this is happening right now,” he said. “The evolution of machine learning is the machine’s ability to teach itself stuff based on outcomes.”

Right now, systems engineers are still needed in the middle, “so it’s very rare to see real machine learning ... but
it will grow like anything else.”
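
To make the retry-timing idea concrete, here is a minimal, hypothetical sketch in Python of how a billing system might mine past payment attempts to choose a better moment to retry a failed card. It is an illustration only -- not BillingPlatform's implementation -- and the data and column names are invented.

```python
import pandas as pd

# Historical retry attempts: the hour each retry ran and whether it succeeded.
attempts = pd.DataFrame({
    "retry_hour": [2, 2, 9, 9, 9, 14, 14, 20],
    "succeeded":  [0, 0, 1, 1, 0, 1, 1, 0],
})

# Success rate per hour of day, ignoring hours with too few attempts to judge.
stats = attempts.groupby("retry_hour")["succeeded"].agg(["mean", "count"])
stats = stats[stats["count"] >= 2]

best_hour = stats["mean"].idxmax()
print(f"Schedule the next retry around {best_hour}:00 "
      f"({stats.loc[best_hour, 'mean']:.0%} historical success rate)")
```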

DIFFERENT LEVELS OF AI DEPLOYMENT


There is a “high mismatch” between the Gartner data on the low number of chief accounting officers who
said they are deploying AI right now compared to what CEOs are doing, observed Alejandra Lozada, a senior
research director at Gartner. “About half said AI will be the most important tech trend in the coming years,”
she said. Gartner predicts more than half of finance organizations will employ some form of AI by the end of
2021, Lozada said.


Yet, others are seeing greater adoption of AI in finance already. According to Deloitte’s 2nd AI in the
Enterprise study, financial services and insurance are among the industries seeing high returns on AI
investments.

“This makes sense, given the nature of how financial institutions operate and compete in the market,” said Beena Ammanath, AI managing director at Deloitte Consulting LLP. “At the highest level, by utilizing machine learning, financial services organizations can pinpoint their most effective offerings, change how they attract and keep customers by identifying patterns in their behaviors/interactions with the institution, and [create] opportunities for engagement beyond just financial services.”

Machine learning is being used to extract knowledge from observations and to automate judgement-based
processes, added Lozada. For example, Gartner is working with large companies that are using machine
learning to predict the probability of clients making payments on time.

One Gartner client has developed a model that predicts the high probability of certain customers paying late,
so the client has started reaching out proactively on day 10 instead of waiting until day 30 to remind them to
pay, she said. “This is different because most companies reach out after they are late, and this company reached
out in advance’’ to those at risk, she said. “This is a predictive way of managing collections.”

The average turnaround time to settle invoices was reduced by 40%, Lozada said.
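
As an illustration of the kind of model described above, the following hypothetical scikit-learn sketch scores open invoices for late-payment risk so a collections team could reach out around day 10 rather than day 30. The features, training data, and threshold are assumptions, not the Gartner client's actual system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Historical invoices: [amount, customer's past late-payment rate, payment terms in days]
X_train = np.array([
    [1200, 0.10, 30], [5400, 0.60, 30], [800, 0.05, 14],
    [9900, 0.75, 45], [2300, 0.20, 30], [7600, 0.55, 45],
])
y_train = np.array([0, 1, 0, 1, 0, 1])   # 1 = the invoice was paid late

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)

# Score currently open invoices and flag the risky ones for early outreach.
open_invoices = np.array([[3100, 0.65, 30], [950, 0.02, 14]])
risk = model.predict_proba(open_invoices)[:, 1]
for invoice, p in zip(open_invoices, risk):
    action = "reach out on day 10" if p > 0.5 else "standard day-30 reminder"
    print(f"invoice {invoice.tolist()}: late-payment risk {p:.2f} -> {action}")
```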

FEAR OF THE UNKNOWN HOLDS BACK AI DEPLOYMENTS


But this is a cutting-edge example. Currently, fear of the unknown is holding back a lot of finance organiza-
tions in their AI deployments, said Lozada. This is because of unfamiliarity with what AI can do, uncertainty
about where to start, and not having a strategy in place, she said.

Other issues include not having enough information about various vendors’ offerings, confusion about what to select, and uncertainty over whether a product can be integrated into existing systems and, if so, how, she said.

“Obviously, another issue is enterprise maturity; not having the right governance processes and skill levels in
adopting AI.” This is not just specific to finance departments but across the board, Lozada said.

It is important to come up with a starting point and a strategy, and to ensure that you have the skills to implement AI in day-to-day functions, she said.

AI ETHICS AND TRANSPARENCY ARE A MUST


Both Lozada and Ammanath agreed that as AI moves into finance operations, organizations must ensure
safeguards are put in place.


“There are many inherent risks that come along with AI and machine learning such as data bias, insufficient data protection mechanisms, lack of experienced AI talent, lack of training for responsible parties, etc., that can lead to unfavorable outcomes,” Ammanath said. “Used unethically–even inadvertently–AI can result in significant revenue loss, stiff fines, and [damage to] a more intangible and priceless asset: an organization’s reputation and the trust of its customers and internal stakeholders.”

Lozada stressed that as they proceed with AI and machine learning initiatives, organizations must ensure their data is transparent, explainable, and ethical before moving on to predictive analytics.


HEALTHCARE AND ARTIFICIAL INTELLIGENCE: HOW DATABRICKS USES APACHE SPARK TO ANALYZE HUGE DATA SETS
BY VERONICA COMBS/TECHREPUBLIC

In the AI 100: The Artificial Intelligence Startups Redefining Industries, CBInsights reported that healthcare is the top industry for the emerging role of artificial intelligence. Thirteen of the 100 companies surveyed are focused on healthcare, including Subtle Medical, which uses AI to enhance radiology images; Viz.ai, which uses deep learning to identify blocked arteries and veins; and Butterfly Network, which is building a portable ultrasound device that uses an AI-assisted diagnostic tool. Butterfly is also applying its platform to COVID-19 patients by looking for infection patterns in the lungs that indicate illness.

OPEN SOURCE FRAMEWORK BENEFITS HEALTHCARE IT FIRM


These companies are specializing in particular conditions, but one healthcare IT firm is building on an open source framework to open up analysis of all kinds of data sets.

Databricks was founded by the original creators of Apache Spark, an open-source distributed cluster-com-
puting framework built atop Scala. Databricks grew out of the AMPLab project at the University of California,
Berkeley.

Frank Nothaft, Databricks technical director of healthcare and life sciences, said Apache Spark’s distributed
data processing engine is perfect for running complex queries at large scale, which is the computational power
required to analyze data sets related to drug development.

“Five years ago the largest table had three million rows, today the largest tables have up to 60 billion rows,” he
said.
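
To give a flavour of what a "complex query at large scale" looks like in practice, here is a minimal PySpark sketch that aggregates a multi-billion-row table of genomic variants. The table path and column names are hypothetical rather than Databricks' actual schema.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("variant-counts").getOrCreate()

# A columnar table that may hold billions of rows; Spark distributes the scan.
variants = spark.read.parquet("/data/genomics/variants")

per_gene = (variants
            .filter(F.col("quality") >= 30)          # drop low-quality calls
            .groupBy("gene", "variant_type")
            .agg(F.countDistinct("sample_id").alias("samples_with_variant"))
            .orderBy(F.desc("samples_with_variant")))

per_gene.show(20)
```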


Nothaft described the company as big data analytics and machine learning on top of cloud computing. The company was founded in 2013, released its first product in 2015, and launched its healthcare group in 2017.

“We have launched a genomics product, we are working in medical imaging, and we are doing an increasingly large amount of work in the clinical and claims processing space,” he said.

There is no shortage of big data sets in the healthcare world, encompassing everything from chest X-rays to drug research. Startups and established companies alike are both using artificial intelligence (AI) and machine learning to analyze these data sets and use the results to guide business strategy and treatment plans.

BUILDING DATA LAKES

Nothaft said the company’s first step in the product development process was to build a cloud management layer to make it easy for users to spin up clusters quickly.

“This also helped on the admin side to manage cost, access, and compliance on the data side,” he said.

The company’s pharmaceutical clients use the platform for early research and drug discovery, clinical trials, and
manufacturing.

Nothaft said Databricks is best suited for data preparation and the extract, transform, and load (ETL) process.

Pharmaceutical company Novartis used the platform to build a research data lake.

“We combined all of the genomic data that they have and the molecule data so that scientists could run queries
on top of the genomic data to identify associations,” he said.

Nothaft added that in the pharma industry there is often a skill set gap between data scientists and domain scientists who specialize in biology and chemistry. (With one client, the ETL process took three weeks to ingest genetic sequencing data from one million patients.) Once the ETL process is in place, internal teams can manage it.

“Our goal is to push data prep into the hands of the scientists,” he said.
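
A simple extract, transform, and load step of the kind described here might look like the following PySpark sketch: ingest raw sequencing metadata, clean it, and write it out as a partitioned columnar table for scientists to query. The paths, schema, and column names are assumptions for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sequencing-etl").getOrCreate()

# Extract: raw CSV files dropped off by the sequencing lab (hypothetical path).
raw = spark.read.option("header", True).csv("/raw/sequencing_runs/*.csv")

# Transform: normalise types, drop incomplete records, derive a partition column.
clean = (raw
         .withColumn("read_count", F.col("read_count").cast("long"))
         .withColumn("run_date", F.to_date("run_date", "yyyy-MM-dd"))
         .dropna(subset=["patient_id", "read_count"])
         .withColumn("run_year", F.year("run_date")))

# Load: write a columnar table, partitioned by year, for downstream queries.
clean.write.mode("overwrite").partitionBy("run_year").parquet("/curated/sequencing_runs")
```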

THE IMPORTANCE OF A KNOWLEDGE GRAPH


Nothaft said that most companies build a machine learning layer that aggregates all internal data for internal use. For example, AstraZeneca built a knowledge graph that combines internal data sets as well as data extracted from public sources, and then built algorithms on top of that data.

“This helps the researchers figure out which experiments to run, and which experiments not to run so they can
spend more time on high-potential experiments,” he said.

Nothaft also said that creating a knowledge graph can make it easier for divisions within a pharma company to
collaborate.

“If everyone’s data is in one place I can run the query without talking to anyone and get it in 30 minutes,” he
said.

However, one challenge is the fact that every data set contains personal health information, which comes with
lots of compliance rules. Nothaft said that the Databricks platform has a governance layer built into it.

TURNING GENETIC ANALYSIS INTO ACTION


Michael Ortega, head of communications for healthcare and life sciences at Databricks, said that he sees more
large healthcare organizations bringing this kind of big data analysis in-house.

Databricks works with Sanford Health, a healthcare system that includes 44 hospitals, 1,400 physicians, and
more than 200 senior care locations in 26 states and nine countries. Sanford also has a health insurance plan.

Many of Sanford’s clinics are located in the Dakotas and the upper plains. Some patients are Native Americans
with distinct genetic profiles or people with specific environmental risk factors, including working in the oil and
gas industry. If a doctor wants to do a genetic analysis for a patient, that usually requires using an external lab
and giving up ownership of the data.

“The best thing we can do to serve them is to help them bring this analysis in house, which is a high-value
service but also helps them keep costs down,” Ortega said.

Ortega also said that Databricks has worked with clients to improve personalized medicine, such as predicting
the progression of Alzheimer’s and helping people make lifestyle adjustments. Ortega said clients have
combined genomic profiles and brain images to identify a new biomarker that can more precisely predict a
person’s risk for developing the disease.

“When people look at genetic reports, they really don’t understand how to take the risk factors and turn those into behavioral changes,” he said. “What are we doing to make sure people still have access to risk factors, but have more actionable information?”


HOW ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING ARE USED IN HIRING AND RECRUITING
BY MACY BAYERN/TECHREPUBLIC

The hiring process has come a long way from the days of paper resumes and cold calls via landline. Online job sites are now staples in talent acquisition, but artificial intelligence (AI) and machine learning are elevating the recruiting and hiring landscape.

When asked about the current status of AI and machine learning in hiring, Mark Brandau, principal analyst on Forrester’s CIO team, said, "All vendors are moving in that direction without question. It’s the way of the future."

The power of AI lies in its ability to process high volumes of data at fast speeds, improving efficiency and
productivity for organizations. Those same features and benefits can also be applied to the hiring process.

“As organizations look to AI and machine learning to enhance their practices, there are two goals in mind,”
said Lauren Smith, vice president of Gartner’s HR practice. “The first is how do we drive more efficiency in
the process? And then secondly is just recognizing that we can get better outcomes--the effectiveness of our
process can be better in a couple places.”

Major companies across industries including Hilton, Humana, AT&T, Procter & Gamble, and CapitalOne use AI and machine learning to navigate through thousands of applications, organize interviews, conduct initial screenings, and more.

Not all organizations are implementing automation to accomplish those tasks, but many are integrating the technology at some capacity, Brandau said.

Below are some examples of current use cases.


TOP 3 AI AND MACHINE LEARNING APPLICATIONS IN HIRING


1. Talent sourcing
Talent sourcing is one of the most prominent ways companies use the technology in recruitment, Smith said.

“[It identifies] where the talent is that we are looking for. That technology is really good at scraping social
professional sites, academic information, and a variety of different sources to help pinpoint the talent segment
that you are looking for,” Smith said.

“That’s been obviously the gold mine for recruiters as they’re looking for talent in an increasingly hyper-com-
petitive labor market,” she added.

2. Candidate engagement
Automation in candidate engagement is the most mature application in recruiting thus far, according to Smith.

“[AI has] become more prominent as candidates become more like consumers than they were before,” Smith said.

“They expect transparency in the process. They expect timely responsiveness to their questions. Most recruiting
functions are not set up to do that well,” Smith said. “Using things like chat bots instead of recruiters, which
communicate directly with candidates at different stages of the process, communicate where they are in the
process, who they have to interview with next, et cetera.”

3. Prospective employee assessment and selection


Using AI to screen candidates helps narrow down talent, making the hiring process quicker and more efficient,
Smith said.

“Many organizations get hundreds of thousands of applications for a single role, and that is very
overwhelming,” Smith said. “Most use resume screening to sort through that noise and identify the people they
want to bring further into the process.

“The second part of assessing and selecting is less mature,” Smith continued. “There’s a lot of news headlines around actually using algorithms to assess and make the final hire. There’s a lot of debate about it in the recruiting space, because it can be more prone to bias. But, that’s an area that we see some organizations experimenting with.”
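
For illustration only, the toy sketch below frames resume screening as text classification with scikit-learn. It is not any vendor's product, and it also hints at why bias is a concern: the model learns from whatever past screening decisions it is shown.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Past resumes (simplified to keyword strings) and the recruiters' past decisions.
past_resumes = [
    "python machine learning sql five years fintech",
    "java spring microservices cloud deployment",
    "retail cashier customer service scheduling",
    "marketing social media campaign copywriting",
]
advanced_by_recruiters = [1, 1, 0, 0]   # 1 = shortlisted in the past

screener = make_pipeline(TfidfVectorizer(), LogisticRegression())
screener.fit(past_resumes, advanced_by_recruiters)

new_resume = ["python data pipelines cloud fintech experience"]
print(f"Shortlist score: {screener.predict_proba(new_resume)[0, 1]:.2f}")
# If the historical decisions were biased, the model reproduces that bias --
# one reason humans stay in the loop for final hiring decisions.
```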

AI REDEFINES HIRING
Despite the prevalence and usefulness of AI and machine learning in recruitment, the technology will not
replace hiring professionals, Brandau said.


Brandau echoed all of the same AI use cases Smith outlined, but he added that these applications don’t mean
that humans will be eliminated from the process.

“This is not to replace jobs; this is really to help,” Brandau said.

AI is meant to expedite manual processes that take recruiters and hiring managers a lot of time, such as sifting
through thousands of applications. The technology is meant to give human professionals more time to conduct
more valuable work.

“This isn’t coming in and saying that the whole thing is going to be done automatically. I don’t think that’s ever
going to be the case,” Brandau said. “When you think about the talent acquisition process, it’s deeply human.

“There has to be human connections in there. Even if some of the processes are getting help from software,
that’s not going to replace anybody, that’s just going to speed time to hire and maybe find a better candidate at a
lower price over time,” Brandau added.


SETTING THE AI STANDARD: WHAT IT COULD LOOK LIKE IN AUSTRALIA
BY AIMEE CHANTHADAVONG/ZDNET

Deliberation, consensus, and communication are necessary in conversations about what Australia’s artificial intelligence (AI) standards should look like, according to the country’s national standards body.

And it’s exactly the approach Standards Australia took when it launched a discussion paper [PDF] about developing AI standards for Australia in June last year. It ended up engaging with regulators, industry, and academics, including the Australian Federal Police, Microsoft, Deakin University, Data61, the Australian Human Rights Commission, and KPMG.

“It’s been that long,” Jed Horner, Standards Australia strategic advocacy manager, told ZDNet.

“The process had to take the time it did ... [and] it’s been about a year all up.”

The results of those discussions have since been published in a 44-page report [PDF], commissioned by the Department of Industry, Science, Energy, and Resources. It stated how important it is for Australia to be part of the global conversation on developing AI rules.

Delving into this point further, Human Rights Commissioner Edward Santow suggested one of the ways
Australia could join the global AI conversation is to find a niche.

“At the end of last year, Germany announced its national AI strategy, and they did it really smartly,” he said,
during a panel discussion at the recent launch of the report in Sydney.

“What they said was when you’re thinking about cars, you’re not going to buy a German car because it’s the
cheapest; you’re going to buy a German car because you know it’s been really well-engineered. It’s going to be
safer, more reliable, and have better performance. When you think about German AI, we want you to think the
same thing.


“What I’m saying is we should think in terms of what our national strengths are and how we can leverage off
those.

“We literally don’t have the number of people working in AI as they do in a country like the United States and
China, so we need to think about what our niche is and go really, really hard in advancing in that. Our niche
could be to develop AI in a responsible way consistent with our national standards.”

Horner agreed, saying there are a few considerations that need to be taken into account to decide where the
country’s strengths lie.

“We want to continue to be a destination of choice for large companies who want to grow their presence here, and that’s big tech -- we need that in our economy. It’s about the economic footprint. It matters, and it drives an ecosystem.

“The second point is to play to our strengths. It’s not a new idea, but if you look at countries around the world, New Zealand plays to agritech, which makes complete sense.

“If you think about the consumer focus work they do in the US, again, in Australia what does that look like? It might be resource management. It’s something that sounds bland but our water, our food supplies, all of those things are critical -- every country is worried about it, so how can we turn it into an advantage?

“We need to focus on narrow strengths where we’re niche, rather than try to be all people, which is the mistake some people make including some policy leaders. The idea that we can lead the world is a nice one, we just don’t have the investment they have in Silicon Valley or China.”

But why exactly should Australia care about being involved in setting AI standards?

According to the report, up to 80% of global trade -- or $4 trillion annually -- is affected by standards or associated technical regulations. However, the issue is that organisations are not obliged to comply with any standards unless they are legislated.


It was a point that Standards Committee on Artificial Intelligence chair Aurelie Jacquets argued during the
panel discussion, saying that without governance and regulation, standards could only have so much influence.

“If you don’t have good governance, all the things you see that are currently happening, which is basic automation, you’re not going to get scaled. Have good governance, focus on those frameworks, and not just your data, and then we can promote good quality AI at an international level,” she said.

Horner, however, cautioned about the potential risk of legislating standards, pointing out it could hinder innovation. He used the National Institute of Standards and Technology (NIST) cybersecurity framework as an example.

“What happens is the NIST for cybersecurity are pushed down supply chains to the extent that if you don’t follow them you can’t do business with the major players,” he said.

“We’re thinking of the global sense of what we can do to help businesses in Australia both be responsible
and actually access new markets, rather than slowly take a certification approach here, which is more about
responding to a social concern, but it doesn’t think through the mechanics of business.”

He explained it’s also partly the reason why so many different parties were encouraged to be involved in the
discussion for this particular Standards Australia report.

“We want industry to co-shape what these products look like and it becomes a user-friendly document, which
makes life easy for everyone,” Horner said.

“If you regulate standards that haven’t been developed by industry, the danger you impose is massive costs
overnight. It’s both a technological challenge and a financial one. That’s why we engage with people from such
a cross-section to get agreement. It actually does happen.”

Futureye managing director Katherine Teh emphasised how introducing standards is “creating new ground” for conversations about how AI could be beneficial for society, while also addressing the fear factor surrounding the technology.

“Fear is what drives regulation when you don’t resolve the difference. What we’re thinking -- and we’re seeing
with the pandemic now -- is the fear reaction might actually overshoot because it’s intuitive and we already have
that with AI,” she said during the panel discussion.

“All settings are in place for an overreaction from society about the implications and this version of the world
where we’re no longer in charge of our own destiny if computers take over ... There’s also this perception
that 83% of people think they’re going to lose their jobs and 25% are going to be out of work, and AI is the
cornerstone for all of that.

“It comes to represent something of the fourth industrial revolution and we have to address those sorts of
challenges and fears by getting people to get familiar about how they’re being used, what the opportunities of
engagements might be, how algorithms are being overlaid.”

Other recommendations that were made in the report included streamlining requirements in areas such as
privacy risk management to increase Australian businesses’ international competitiveness; developing standards
that take into account diversity, inclusion, fairness, and social trust; and growing Australia’s capacity to develop
and share best practice in the design, deployment, and evaluation of AI systems.

The report builds on CSIRO’s Data61 AI discussion paper that was released in April last year and the federal government’s AI ethics principles announced in November. Both aimed to develop guidelines that are not only practical for businesses but also help citizens build trust in AI systems.

Releasing this report also follows in the footsteps of countries such as the US, UK, China, and Germany that have each established their own AI standards.

Similar principles have also been set out by the Organisation for Economic Cooperation and Development
(OECD), with debates about developing a workable international AI framework for the global tech community
still ongoing. So far, 42 countries, including Australia, have committed to developing consensus-driven AI
standards based on the OECD AI principles.

These principles include that AI should benefit people and the planet by driving inclusive growth, sustainable development, and wellbeing; be designed in a way that respects the rule of law, human rights, democratic values, and diversity; provide transparency and responsible disclosure around AI systems; and that organisations developing, deploying, or operating AI be held accountable.


WHAT IS AI? EVERYTHING YOU NEED TO KNOW ABOUT ARTIFICIAL INTELLIGENCE
BY NICK HEATH/ZDNET CONTRIBUTOR

WHAT IS ARTIFICIAL INTELLIGENCE (AI)?

It depends who you ask.

Back in the 1950s, the fathers of the field, Minsky and McCarthy, described artificial intelligence as any task performed by a program or a machine that, if a human carried out the same activity, we would say the human had to apply intelligence to accomplish the task.

That obviously is a fairly broad definition, which is why you will sometimes see arguments over whether
something is truly AI or not.

AI systems will typically demonstrate at least some of the following behaviors associated with human intel-
ligence: planning, learning, reasoning, problem solving, knowledge representation, perception, motion, and
manipulation and, to a lesser extent, social intelligence and creativity.

WHAT ARE THE USES FOR AI?


AI is ubiquitous today, used to recommend what you should buy next online, to understand what you say to
virtual assistants such as Amazon’s Alexa and Apple’s Siri, to recognise who and what is in a photo, to spot
spam, or detect credit card fraud.

WHAT ARE THE DIFFERENT TYPES OF AI?


At a very high level artificial intelligence can be split into two broad types: narrow AI and general AI.

Narrow AI is what we see all around us in computers today: intelligent systems that have been taught or
learned how to carry out specific tasks without being explicitly programmed how to do so.

This type of machine intelligence is evident in the speech and language recognition of the Siri virtual assistant on the Apple iPhone, in the vision-recognition systems on self-driving cars, in the recommendation engines that suggest products you might like based on what you bought in the past. Unlike humans, these systems can only learn or be taught how to do specific tasks, which is why they are called narrow AI.

Among AI experts there’s a broad range of opinion about how quickly artificially intelligent systems will surpass human capabilities.

WHAT CAN NARROW AI DO?


There are a vast number of emerging applications for narrow AI: interpreting video feeds from drones carrying
out visual inspections of infrastructure such as oil pipelines, organizing personal and business calendars,
responding to simple customer-service queries, co-ordinating with other intelligent systems to carry out tasks
like booking a hotel at a suitable time and location, helping radiologists to spot potential tumors in X-rays,
flagging inappropriate content online, detecting wear and tear in elevators from data gathered by IoT devices,
the list goes on and on.

WHAT CAN GENERAL AI DO?


Artificial general intelligence is very different, and is the type of adaptable intellect found in humans, a flexible
form of intelligence capable of learning how to carry out vastly different tasks, anything from haircutting to
building spreadsheets, or to reason about a wide variety of topics based on its accumulated experience. This is
the sort of AI more commonly seen in movies, the likes of HAL in 2001 or Skynet in The Terminator, but which
doesn’t exist today and AI experts are fiercely divided over how soon it will become a reality.

A survey conducted among four groups of experts in 2012/13 by AI researcher Vincent C Müller and philosopher Nick Bostrom reported a 50 percent chance that Artificial General Intelligence (AGI) would be developed between 2040 and 2050, rising to 90 percent by 2075. The group went even further, predicting that so-called ‘superintelligence’ -- which Bostrom defines as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest” -- was expected some 30 years after the achievement of AGI.

That said, some AI experts believe such projections are wildly optimistic given our limited understanding of
the human brain, and believe that AGI is still centuries away.

WHAT IS MACHINE LEARNING?


There is a broad body of research in AI, with many strands that feed into and complement each other.


Currently enjoying something of a resurgence, machine learning is where a computer system is fed large
amounts of data, which it then uses to learn how to carry out a specific task, such as understanding speech or
captioning a photograph.

WHAT ARE NEURAL NETWORKS?


Key to the process of machine learning are neural networks. These are brain-inspired networks of intercon-
nected layers of algorithms, called neurons, that feed data into each other, and which can be trained to carry
out specific tasks by modifying the importance attributed to input data as it passes between the layers. During
training of these neural networks, the weights attached to different inputs will continue to be varied until the
output from the neural network is very close to what is desired, at which point the network will have ‘learned’
how to carry out a particular task.

A subset of machine learning is deep learning, where neural networks are expanded into sprawling networks with a huge number of layers that are trained using massive amounts of data. It is these deep neural networks that have fueled the current leap forward in the ability of computers to carry out tasks like speech recognition and computer vision.
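
To make the idea of weights being gradually adjusted concrete, the short NumPy sketch below trains a tiny two-layer network on the classic XOR problem. It is a teaching toy, not how production deep-learning systems are built.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)        # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))     # input -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))     # hidden -> output layer
lr = 0.5                                               # learning rate

for step in range(10000):
    # Forward pass: data flows through the layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error with respect to each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Nudge every weight a little in the direction that reduces the error.
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2).ravel())   # should be close to the XOR targets [0, 1, 1, 0]
```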

There are various types of neural networks, with different strengths and weaknesses. Recurrent neural networks are a type of neural net particularly well suited to language processing and speech recognition, while convolutional neural networks are more commonly used in image recognition. The design of neural networks is also evolving, with researchers recently refining a more effective form of deep neural network called long short-term memory or LSTM, allowing it to operate fast enough to be used in on-demand systems like Google Translate.

Image: The structure and training of deep neural networks. (Nuance)

Another area of AI research is evolutionary computation, which borrows from Darwin’s famous theory of
natural selection, and sees genetic algorithms undergo random mutations and combinations between genera-
tions in an attempt to evolve the optimal solution to a given problem.


This approach has even been used to help design AI models, effectively using AI to help build AI. This use of evolutionary algorithms to optimize neural networks is called neuroevolution, and could have an important role to play in helping design efficient AI as the use of intelligent systems becomes more prevalent, particularly as demand for data scientists often outstrips supply. The technique was recently showcased by Uber AI Labs, which released papers on using genetic algorithms to train deep neural networks for reinforcement learning problems.
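
As a toy illustration of the evolutionary idea -- and only that, not Uber AI Labs' neuroevolution work -- the sketch below evolves a bit-string towards a target pattern through selection, crossover, and random mutation.

```python
import random

random.seed(1)
TARGET = [1] * 20                     # the "optimal solution" we hope to evolve
POP, GENERATIONS, MUTATION = 30, 60, 0.05

def fitness(candidate):
    # How many positions already match the target pattern.
    return sum(g == t for g, t in zip(candidate, TARGET))

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]

for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[: POP // 2]            # selection: fittest half survives
    children = []
    while len(children) < POP - len(parents):
        mum, dad = random.sample(parents, 2)
        cut = random.randrange(1, len(TARGET))  # crossover: combine two parents
        child = mum[:cut] + dad[cut:]
        # Mutation: occasionally flip a bit in the offspring.
        child = [1 - g if random.random() < MUTATION else g for g in child]
        children.append(child)
    population = parents + children

print(f"best fitness {fitness(population[0])}/{len(TARGET)} after {gen + 1} generations")
```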

Finally there are expert systems, where computers are programmed with rules that allow them to take a series of decisions based on a large number of inputs, allowing that machine to mimic the behavior of a human expert in a specific domain. An example of these knowledge-based systems might be an autopilot system flying a plane.

WHAT IS FUELING THE RESURGENCE IN AI?


The biggest breakthroughs for AI research in recent years have been in the field of machine learning, in
particular within the field of deep learning.

This has been driven in part by the easy availability of data, but even more so by an explosion in parallel
computing power in recent years, during which time the use of GPU clusters to train machine-learning systems
has become more prevalent.

Not only do these clusters offer vastly more powerful systems for training machine-learning models, but they
are now widely available as cloud services over the internet. Over time the major tech firms, the likes of Google
and Microsoft, have moved to using specialized chips tailored to both running, and more recently training,
machine-learning models.

An example of one of these custom chips is Google’s Tensor Processing Unit (TPU), the latest version of
which accelerates the rate at which useful machine-learning models built using Google’s TensorFlow software
library can infer information from data, as well as the rate at which they can be trained.

These chips are not just used to train up models for DeepMind and Google Brain, but also the models that underpin Google Translate and the image recognition in Google Photos, as well as services that allow the public to build machine learning models using Google’s TensorFlow Research Cloud. The second generation of these chips was unveiled at Google’s I/O conference in May last year, with an array of these new TPUs able to train a Google machine-learning model used for translation in half the time it would take an array of the top-end graphics processing units (GPUs).

WHAT ARE THE ELEMENTS OF MACHINE LEARNING?


As mentioned, machine learning is a subset of AI and is generally split into two main categories: supervised
and unsupervised learning.

Supervised learning
A common technique for teaching AI systems is by training them using a very large number of labeled
examples. These machine-learning systems are fed huge amounts of data, which has been annotated to
highlight the features of interest. These might be photos labeled to indicate whether they contain a dog or
written sentences that have footnotes to indicate whether the word ‘bass’ relates to music or a fish. Once trained, the system can then apply these labels to new data, for example to a dog in a photo that’s just been uploaded.

This process of teaching a machine by example is called supervised learning and the role of labeling these
examples is commonly carried out by online workers, employed through platforms like Amazon Mechanical
Turk.

Training these systems typically requires vast amounts of data, with some systems needing to scour millions
of examples to learn how to carry out a task effectively -- although this is increasingly possible in an age of
big data and widespread data mining. Training datasets are huge and growing in size -- Google’s Open Images
Dataset has about nine million images, while its labeled video repository YouTube-8M links to seven million
labeled videos. ImageNet, one of the early databases of this kind, has more than 14 million categorized images.
Compiled over two years, it was put together by nearly 50,000 people -- most of whom were recruited through
Amazon Mechanical Turk -- who checked, sorted, and labeled almost one billion candidate pictures.
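
A compact example of supervised learning in this sense: the sketch below trains a classifier on a handful of hand-labelled 'bass' sentences and then labels a sentence it has never seen. The data is invented and far smaller than any real training set.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# A few annotated examples: each sentence carries a human-provided label.
sentences = [
    "he tuned the bass guitar before the gig",
    "the bass line drives the whole song",
    "she caught a huge bass in the lake",
    "bass fishing season opens next month",
]
labels = ["music", "music", "fish", "fish"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(sentences, labels)

# Apply the learned labels to data the system has never seen.
print(model.predict(["the bass amp was far too loud"]))   # expected: ['music']
```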

In the long run, having access to huge labelled datasets may also prove less important than access to large
amounts of compute power.

In recent years, Generative Adversarial Networks (GANs) have shown how machine-learning systems that are fed a small amount of labelled data can then generate huge amounts of fresh data to teach themselves.

This approach could lead to the rise of semi-supervised learning, where systems can learn how to carry out
tasks using a far smaller amount of labelled data than is necessary for training systems using supervised
learning today.


Unsupervised learning
In contrast, unsupervised learning uses a different approach, where algorithms try to identify patterns in data,
looking for similarities that can be used to categorise that data.

An example might be clustering together fruits that weigh a similar amount or cars with a similar engine size.

The algorithm isn’t set up in advance to pick out specific types of data; it simply looks for data that can be grouped by its similarities, for example Google News grouping together stories on similar topics each day.
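
Continuing the car example, the hypothetical sketch below clusters vehicles by engine size and weight using k-means. No labels are provided; the algorithm simply groups similar rows together, and the numbers are made up purely for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

# [engine size in litres, kerb weight in kg]
cars = np.array([
    [1.0,  950], [1.2, 1020], [1.4, 1100],      # small city cars
    [2.0, 1500], [2.2, 1580],                   # mid-size saloons
    [4.0, 2200], [4.4, 2350],                   # large SUVs
])

groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(cars)
for car, group in zip(cars, groups):
    print(f"{car.tolist()} -> cluster {group}")
```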

Reinforcement learning
A crude analogy for reinforcement learning is rewarding a pet with a treat when it performs a trick.

In reinforcement learning, the system attempts to maximize a reward based on its input data, basically going
through a process of trial and error until it arrives at the best possible outcome.

An example of reinforcement learning is Google DeepMind’s Deep Q-network, which has been used to best human performance in a variety of classic video games. The system is fed pixels from each game and determines various information, such as the distance between objects on screen.

By also looking at the score achieved in each game the system builds a model of which action will maximize the score in different circumstances, for instance, in the case of the video game Breakout, where the paddle should be moved to in order to intercept the ball.

Image: Many AI-related technologies are approaching, or have already reached, the ‘peak of inflated expectations’ in Gartner’s Hype Cycle, with the backlash-driven ‘trough of disillusionment’ lying in wait. (Gartner; annotations: ZDNet)
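
Deep Q-networks pair this idea with deep neural networks, but the underlying reinforcement-learning loop can be shown with a much simpler tabular sketch: an agent learns, by trial and error, which action maximizes reward in a toy five-state 'corridor' environment invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, actions = 5, [-1, +1]          # move left or right along a corridor
Q = np.zeros((n_states, len(actions)))   # learned value of each action in each state
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration rate

for episode in range(200):
    state = 0
    while state != n_states - 1:                     # state 4 is the goal
        # Epsilon-greedy: mostly exploit the current estimates, sometimes explore.
        a = rng.integers(2) if rng.random() < epsilon else int(Q[state].argmax())
        next_state = int(np.clip(state + actions[a], 0, n_states - 1))
        reward = 1.0 if next_state == n_states - 1 else 0.0

        # Q-learning update: move the estimate towards reward + discounted future value.
        Q[state, a] += alpha * (reward + gamma * Q[next_state].max() - Q[state, a])
        state = next_state

# For the four non-goal states the greedy policy should be 'right' (action 1).
print("Greedy policy:", Q[:-1].argmax(axis=1))
```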

WHICH ARE THE LEADING FIRMS IN AI?


With AI playing an increasingly major role in modern software and services, each of the major tech firms is
battling to develop robust machine-learning technology for use in-house and to sell to the public via cloud
services.


Each regularly makes headlines for breaking new ground in AI research, although it is probably Google with its DeepMind AlphaGo AI that has made the biggest impact on public awareness of AI.

WHICH AI SERVICES ARE AVAILABLE?


All of the major cloud platforms -- Amazon Web Services, Microsoft Azure and Google Cloud Platform --
provide access to GPU arrays for training and running machine learning models, with Google also gearing up
to let users use its Tensor Processing Units -- custom chips whose design is optimized for training and running
machine-learning models.

All of the necessary associated infrastructure and services are available from the big three: cloud-based data stores capable of holding the vast amounts of data needed to train machine-learning models, services to transform data to prepare it for analysis, visualisation tools to display the results clearly, and software that simplifies the building of models.

These cloud platforms are even simplifying the creation of custom machine-learning models, with
Google recently revealing a service that automates the creation of AI models, called Cloud AutoML. This
drag-and-drop service builds custom image-recognition models and requires the user to have no machine-
learning expertise.

Cloud-based, machine-learning services are constantly evolving, and at the start of 2018, Amazon revealed a
host of new AWS offerings designed to streamline the process of training up machine-learning models.

For those firms that don’t want to build their own machine learning models but instead want to consume
AI-powered, on-demand services -- such as voice, vision, and language recognition -- Microsoft Azure stands
out for the breadth of services on offer, closely followed by Google Cloud Platform and then AWS. Meanwhile
IBM, alongside its more general on-demand offerings, is also attempting to sell sector-specific AI services aimed
at everything from healthcare to retail, grouping these offerings together under its IBM Watson umbrella -- and
recently investing $2bn in buying The Weather Channel to unlock a trove of data to augment its AI services.

WHICH OF THE MAJOR TECH FIRMS IS WINNING THE AI RACE?


Internally, each of the tech giants -- and others such as Facebook -- use AI to help drive myriad public services:
serving search results, offering recommendations, recognizing people and things in photos, on-demand trans-
lation, spotting spam -- the list is extensive.

But one of the most visible manifestations of this AI war has been the rise of virtual assistants, such as Apple’s
Siri, Amazon’s Alexa, the Google Assistant, and Microsoft Cortana.


Relying heavily on voice recognition and natural-language processing, as well as needing an immense corpus to draw upon to answer queries, a huge amount of tech goes into developing these assistants.

Image: The Amazon Echo Plus is a smart speaker with access to Amazon’s Alexa virtual assistant built in. (Jason Cipriani/ZDNet)

But while Apple’s Siri may have come to prominence first, it is Google and Amazon whose assistants have since overtaken Apple in the AI space -- Google Assistant with its ability to answer a wide range of queries and Amazon’s Alexa with the massive number of ‘Skills’ that third-party devs have created to add to its capabilities.

Despite being built into Windows 10, Cortana has had a particularly rough time of late, with the suggestion
that major PC makers will build Alexa into laptops adding to speculation about whether Cortana’s days are
numbered, although Microsoft was quick to reject this.

WHICH COUNTRIES ARE LEADING THE WAY IN AI?


It’d be a big mistake to think the US tech giants have the field of AI sewn up. Chinese firms Alibaba, Baidu,
and Lenovo are investing heavily in AI in fields ranging from ecommerce to autonomous driving. As a country
China is pursuing a three-step plan to turn AI into a core industry for the country, one that will be worth 150
billion yuan ($22bn) by 2020.

Baidu has invested in developing self-driving cars, powered by its deep learning algorithm, Baidu AutoBrain, and, following several years of tests, plans to roll out fully autonomous vehicles in 2018 and mass-produce them by 2021.

Baidu has also partnered with Nvidia to use AI to create a cloud-to-car autonomous car platform for auto manufacturers around the world.

Image: Baidu’s self-driving car, a modified BMW 3 series.

The combination of weak privacy laws, huge investment, concerted data-gathering, and big data analytics by major firms like Baidu, Alibaba, and Tencent, means that some analysts believe China will have an advantage over the US when it comes to future AI research, with one analyst describing the chances of China taking the lead over the US as 500 to one in China’s favor.

HOW CAN I GET STARTED WITH AI?


While you could try to build your own GPU array at home and start training a machine-learning model,
probably the easiest way to experiment with AI-related services is via the cloud.

All of the major tech firms offer various AI services, from the infrastructure to build and train your own
machine-learning models through to web services that allow you to access AI-powered tools such as speech,
language, vision and sentiment recognition on demand.

WHAT ARE RECENT LANDMARKS IN THE DEVELOPMENT OF AI?


There are too many to put together a comprehensive list, but some recent highlights include: in 2009 Google showed it was possible for its self-driving Toyota Prius to complete more than 10 journeys of 100 miles each -- setting society on a path towards driverless vehicles.

In 2011, the computer system IBM Watson made headlines worldwide when it won the US quiz show
Jeopardy!, beating two of the best players the show had ever produced. To win the show, Watson used natural
language processing and analytics on vast repositories of data that it processed to answer human-posed
questions, often in a fraction of a second.

In June 2012, it became apparent just how good machine-learning systems were getting at computer vision,
with Google training a system to recognise an
internet favorite, pictures of cats.

Image: IBM Watson competes on Jeopardy! on January 14, 2011. (IBM)

Since Watson’s win, perhaps the most famous demonstration of the efficacy of machine-learning systems was the 2016 triumph of the Google DeepMind AlphaGo AI over a human grandmaster in Go, an ancient Chinese game whose complexity stumped computers for decades. Go has about 200 moves per turn, compared to about 20 in Chess. Over the course of a game of Go, there are so many possible moves that searching through each of them in advance to identify the best play is too costly from a computational point of view. Instead, AlphaGo was trained how to play the game by taking moves played by human experts in 30 million Go games and feeding them into deep-learning neural networks.

Training these deep learning networks can take a very long time, requiring vast amounts of data to be ingested
and iterated over as the system gradually refines its model in order to achieve the best outcome.

However, more recently Google refined the training process with AlphaGo Zero, a system that played
“completely random” games against itself, and then learnt from the results. At last year’s prestigious Neural
Information Processing Systems (NIPS) conference, Google DeepMind CEO Demis Hassabis revealed
AlphaGo had also mastered the games of chess and shogi.

And AI continues to sprint past new milestones: last year a system trained by OpenAI defeated the world’s top players in one-on-one matches of the online multiplayer game Dota 2.

That same year, OpenAI created AI agents that invented their own language to cooperate and achieve their goal more effectively, shortly followed by Facebook training agents to negotiate and even lie.

HOW WILL AI CHANGE THE WORLD?


Robots and driverless cars
The desire for robots to be able to act autonomously and understand and navigate the world around them
means there is a natural overlap between robotics and AI. While AI is only one of the technologies used in
robotics, use of AI is helping robots move into new areas such as self-driving cars, delivery robots, as well as
helping robots to learn new skills. General Motors recently said it would build a driverless car without a steering
wheel or pedals by 2019, while Ford committed to doing so by 2021, and Waymo, the self-driving group inside
Google parent Alphabet, will soon offer a driverless taxi service in Phoenix.

Fake news
We are on the verge of having neural networks that can create photo-realistic images or replicate someone’s
voice in a pitch-perfect fashion. With that comes the potential for hugely disruptive social change, such as no
longer being able to trust video or audio footage as genuine. Concerns are also starting to be raised about how
such technologies will be used to misappropriate people’s image, with tools already being created to convinc-
ingly splice famous actresses into adult films.

Speech and language recognition


Machine-learning systems have helped computers recognize what people are saying with an accuracy of almost
95 percent. Recently Microsoft’s Artificial Intelligence and Research group reported it had developed a system


able to transcribe spoken English as accurately as human transcribers.

With researchers pursuing a goal of 99 percent accuracy, expect speaking to computers to become the norm
alongside more traditional forms of human-machine interaction.

Facial recognition and surveillance


In recent years, the accuracy of facial-recognition systems has leapt forward, to the point where Chinese tech
giant Baidu says it can match faces with 99 percent accuracy, providing the face is clear enough on the video.
While police forces in western countries have generally only trialled using facial-recognition systems at large
events, in China the authorities are mounting a nationwide program to connect CCTV across the country to
facial recognition and to use AI systems to track suspects and suspicious behavior, and are also trialling the use
of facial-recognition glasses by police.

Although privacy regulations vary across the world, it’s likely this more intrusive use of AI technology --
including AI that can recognize emotions -- will gradually become more widespread elsewhere.

Healthcare
AI could eventually have a dramatic impact on healthcare, helping radiologists to pick out tumors in x-rays,
aiding researchers in spotting genetic sequences related to diseases and identifying molecules that could lead to
more effective drugs.

There have been trials of AI-related technology in hospitals across the world. These include IBM’s Watson
clinical decision support tool, which is trained by oncologists at Memorial Sloan Kettering Cancer Center, and
the use of Google DeepMind systems by the UK’s National Health Service, where it will help spot eye abnor-
malities and streamline the process of screening patients for head and neck cancers.

WILL AI KILL US ALL?


Again, it depends who you ask. As AI-powered systems have grown more capable, so warnings of the
downsides have become more dire.

Tesla and SpaceX CEO Elon Musk has claimed that AI is a “fundamental risk to the existence of human civili-
zation”. As part of his push for stronger regulatory oversight and more responsible research into mitigating the
downsides of AI, he helped set up OpenAI, a non-profit artificial intelligence research company that aims to
promote and develop friendly AI that will benefit society as a whole. Similarly, the late physicist Stephen Hawking
warned that once a sufficiently advanced AI is created, it will rapidly advance to the point at which it vastly
outstrips human capabilities, a phenomenon known as the singularity, and could pose an existential threat to the
human race.

Yet the notion that humanity is on the verge of an AI explosion that will dwarf our intellect seems ludicrous to
some AI researchers.

Chris Bishop, Microsoft’s director of research in Cambridge, England, stresses how different the narrow
intelligence of AI today is from the general intelligence of humans, saying that when people worry about
“Terminator and the rise of the machines and so on? Utter nonsense, yes. At best, such discussions are decades
away.”

WILL AN AI STEAL YOUR JOB?


The prospect of artificially intelligent systems replacing much of modern manual labour is perhaps a more
credible near-future concern.

While AI won’t replace all jobs, what seems to be certain is that AI will change the nature of work, with the
only question being how rapidly and how profoundly automation will alter the workplace.

There is barely a field of human endeavour that AI doesn’t have the potential to impact. As AI expert Andrew
Ng puts it: “many people are doing routine, repetitive jobs. Unfortunately, technology is especially good at
automating routine, repetitive work.” He says he sees a “significant risk of technological unemployment over the
next few decades”.

The evidence of which jobs will be supplanted is starting to emerge. Amazon has launched Amazon Go,
a cashier-free supermarket in Seattle where customers simply take items from the shelves and walk out. What this
means for the more than three million people in the US who work as cashiers remains to be seen. Amazon is
again leading the way in using robots to improve efficiency inside its warehouses.

These robots carry shelves of products to human pickers, who select the items to be sent out. Amazon has more
than 100,000 bots in its fulfilment centers, with plans to add many more. But Amazon also stresses that as the
number of bots has grown, so has the number of human workers in these warehouses. However, Amazon and
small robotics firms are working to automate the remaining manual jobs in the warehouse, so it’s not a given that
manual and robotic labor will continue to grow hand-in-hand.

IMAGE: AMAZON
Amazon bought Kiva robotics in 2012 and today uses Kiva robots throughout its warehouses.

Fully autonomous self-driving vehicles aren’t a reality yet, but by some predictions the self-driving trucking
industry alone is poised to displace 1.7 million jobs in the next decade, even without considering the impact
on couriers and taxi drivers.

Yet some of the easiest jobs to automate won’t even require robotics. At present there are millions of people
working in administration, entering and copying data between systems, chasing and booking appointments
for companies. As software gets better at automatically updating systems and flagging the information that’s
important, so the need for administrators will fall.

As with every technological shift, new jobs will be created to replace those lost. However, what’s uncertain is
whether these new roles will be created rapidly enough to offer employment to those displaced, and whether
the newly unemployed will have the necessary skills or temperament to fill these emerging roles.

Not everyone is a pessimist. For some, AI is a technology that will augment, rather than replace, workers. Not
only that, but they argue there will be a commercial imperative not to replace people outright, as an AI-assisted
worker -- think of a human concierge with an AR headset that tells them exactly what a client wants before they
ask for it -- will be more productive or effective than an AI working on its own.

Among AI experts there’s a broad range of opinion about how quickly artificially intelligent systems will
surpass human capabilities.

Oxford University’s Future of Humanity Institute asked several hundred machine-learning experts to predict
AI capabilities, over the coming decades.

Notable dates included AI writing essays that could pass for being written by a human by 2026, truck drivers
being made redundant by 2027, AI surpassing human capabilities in retail by 2031, writing a best-seller by 2049,
and doing a surgeon’s work by 2053.

They estimated there was a relatively high chance that AI beats humans at all tasks within 45 years and
automates all human jobs within 120 years.

CREDITS
Editor In Chief
Bill Detwiler

Editor In Chief, UK
Steve Ranger

Associate Managing Editors
Teena Maddox
Mary Weilage

Editor, Australia
Chris Duckett

Senior Writer
Veronica Combs

Editor
Melanie Wachsman

Associate Staff Writer
Macy Bayern

Multimedia Producer
Derek Poore

Staff Reporter
Karen Roby

ABOUT ZDNET
ZDNet brings together the reach of global and the depth of local, delivering 24/7 news coverage and analysis on
the trends, technologies, and opportunities that matter to IT professionals and decision makers.

ABOUT TECHREPUBLIC
TechRepublic is a digital publication and online community that empowers the people of business and
technology. It provides analysis, tips, best practices, and case studies aimed at helping leaders make better
decisions about technology.

DISCLAIMER
The information contained herein has been obtained from sources believed to be reliable. CBS Interactive Inc.
disclaims all warranties as to the accuracy, completeness, or adequacy of such information. CBS Interactive Inc.
shall have no liability for errors, omissions, or inadequacies in the information contained herein or for the
interpretations thereof. The reader assumes sole responsibility for the selection of these materials to achieve its
intended results. The opinions expressed herein are subject to change without notice.

Cover image: iStock

Copyright ©2020 by CBS Interactive Inc. All rights reserved. TechRepublic and its logo are trademarks of CBS
Interactive Inc. ZDNet and its logo are trademarks of CBS Interactive Inc. All other product names or services
identified throughout this article are trademarks or registered trademarks of their respective companies.
