Prework - YPO Mexico Event - AI
This week the world's largest search companies leaped into a contest to
harness a powerful new breed of "generative AI" algorithms.
Unless you’ve been living in outer space for the past few months, you'll
know that people are losing their minds over ChatGPT’s ability to answer
questions in strikingly coherent and seemingly insightful and creative
ways. Want to understand quantum computing? Need a recipe for
whatever’s in the fridge? Can’t be bothered to write that high school
essay? ChatGPT has your back.
The all-new Bing is similarly chatty. Demos that the company gave at its
headquarters in Redmond, and a quick test drive by WIRED’s Aarian
Marshall, who attended the event, show that it can effortlessly generate
a vacation itinerary, summarize the key points of product reviews, and
answer tricky questions, like whether an item of furniture will fit in a
particular car. It’s a long way from Microsoft’s hapless and hopeless
Office assistant Clippy, which some readers may recall bothering them
every time they created a new document.
Not to be outdone by Bing’s AI reboot, Google said this week that it
would release a competitor to ChatGPT called Bard. (The name was
chosen to reflect the creative nature of the algorithm underneath, one
Googler tells me.) The company, like Microsoft, showed how the
underlying technology could answer some web searches and said it
would start making the AI behind the chatbot available to developers.
Google is apparently unsettled by the idea of being upstaged in search,
which provides the majority of parent Alphabet’s revenue. And its AI
researchers may be understandably a little miffed since they actually
developed the machine learning algorithm at the heart of ChatGPT,
known as a transformer, as well as a key technique used to make AI
imagery, known as diffusion modeling.
Last but by no means least in the new AI search wars is Baidu, China’s
biggest search company. It joined the fray by announcing another
ChatGPT competitor, Wenxin Yiyan (⽂⼼⼀⾔), or "Ernie Bot" in English.
Baidu says it will release the bot after completing internal testing this
March.
These new search bots are examples of generative AI, a trend fueled by
algorithms that can generate text, craft computer code, and dream up
images in response to a prompt. The tech industry might
be experiencing widespread layoffs, but interest in generative AI is
booming, and VCs are imagining whole industries being rebuilt around
this new creative streak in AI.
Generative language tools like ChatGPT will surely change what it means
to search the web, shaking up an industry worth hundreds of billions of
dollars annually, by making it easier to dig up useful information and
advice. A web search may become less about clicking links and exploring
sites and more about leaning back and taking a chatbot’s word for it. Just
as importantly, the underlying language technology could transform
many other tasks too, perhaps leading to email programs that write
sales pitches or spreadsheets that dig up and summarize data for you.
To many users, ChatGPT also seems to signal a shift in AI’s ability to
understand and communicate with us.
But there is, of course, a catch.
While the text they sling at us can look human, AI models behind
ChatGPT and its new brethren do not work remotely like a human brain.
Their algorithms are narrowly designed to learn to predict what should
come after a prompt by feeding on statistical patterns in huge amounts
of text from the web and books. They have absolutely no understanding
of what they are saying or whether an answer might be incorrect,
inappropriate, biased, or representative of the real world. What’s more,
because these AI tools generate text purely based on patterns they’ve
previously seen, they are prone to “hallucinating” information. And, in
fact, ChatGPT gets some of its power from a technique that involves
humans giving feedback on questions—but that feedback optimizes for
answers that seem convincing, not ones that are accurate or true.
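The claim that these models generate text purely from previously seen patterns can be made concrete with a deliberately tiny sketch. The bigram model below is invented for illustration (real systems use transformer networks with billions of parameters, not word counts), but it shows the same mechanic: it strings words together only by how often they followed each other in its training text, which is why fluency and factual accuracy are entirely separate properties.

```python
# Illustrative sketch only: a toy bigram "language model" that picks the next
# word purely from observed frequencies, with no notion of truth or meaning.
# The corpus and all function names are made up for this example.
import random
from collections import defaultdict, Counter

corpus = (
    "the telescope took a picture of a planet . "
    "the first picture of a planet came from another telescope . "
    "the telescope took a picture of the sky ."
).split()

def build_bigram_model(words):
    """Count, for each word, which words follow it and how often."""
    model = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model

def generate(model, prompt_word, length=8):
    """Repeatedly sample a statistically plausible next word."""
    word, output = prompt_word, [prompt_word]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break
        # Sample in proportion to observed frequency -- plausible, not "true".
        word = random.choices(list(followers), weights=followers.values())[0]
        output.append(word)
    return " ".join(output)

model = build_bigram_model(corpus)
print(generate(model, "the"))  # fluent-looking text, no fact-checking anywhere
```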
These issues may be a problem if you’re trying to use the technology to
make web search more useful. Microsoft has apparently fixed some
common flaws with ChatGPT in Bing (we tried tripping it up a few times),
but the real test will come when it’s made widely available. One Bard
response that Google has proudly shown off incorrectly claims that the
James Webb Space Telescope was the first to take a picture of a planet
beyond our solar system.
Oops.
Exploring opportunities in the generative AI value chain
McKinsey, April 2023
Generative AI is giving rise to an entire ecosystem, from hardware providers to application builders, that will help bring its potential for business to fruition.
Foundation models
At the heart of generative AI are foundation models. These large deep
learning models are pretrained to create a particular type of content and
can be adapted to support a wide range of tasks. A foundation model is
like a Swiss Army knife—it can be used for multiple purposes. Once the
foundation model is developed, anyone can build an application on top
of it to leverage its content-creation capabilities. Consider OpenAI’s
GPT-3 and GPT-4, foundation models that can produce human-quality
text. They power dozens of applications, from the much-talked-about
chatbot ChatGPT to software-as-a-service (SaaS) content generators
Jasper and Copy.ai.
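"Building an application on top of it" is, at its simplest, an API integration exercise. The sketch below is a minimal illustration of that layering, assuming an OpenAI API key in the OPENAI_API_KEY environment variable and the hosted chat completions endpoint; the product-style wrapper (marketing_copy) is invented for this example, not any vendor's actual code.

```python
# Minimal sketch of an application layered on a hosted foundation model.
# Assumes an OpenAI API key in the OPENAI_API_KEY environment variable; the
# "product" wrapper (marketing_copy) is invented purely for illustration.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"

def marketing_copy(product: str, audience: str) -> str:
    """Ask the foundation model to draft short marketing copy."""
    payload = {
        "model": "gpt-4",
        "messages": [
            {"role": "system", "content": "You write concise marketing copy."},
            {"role": "user",
             "content": f"Write two sentences selling {product} to {audience}."},
        ],
    }
    headers = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}
    response = requests.post(API_URL, json=payload, headers=headers, timeout=30)
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(marketing_copy("an ergonomic office chair", "remote software teams"))
```

The application's value then comes from everything around that call: the niche prompts, workflow, and data it wraps around the general-purpose model.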
Foundation models are trained on massive data sets. This may include
public data scraped from Wikipedia, government sites, social media, and
books, as well as private data from large databases. OpenAI, for
example, partnered with Shutterstock to train its image model on
Shutterstock’s proprietary images.8
Developing foundation models requires deep expertise in several areas.
These include preparing the data, selecting the model architecture that
can create the targeted output, training the model, and then tuning the
model to improve output (which entails labeling the quality of the
model’s output and feeding it back into the model so it can learn).
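The tuning step (labeling output quality and feeding it back in) can be sketched in a simplified form. The snippet below is not how OpenAI or others actually implement feedback-based tuning such as RLHF; it is a toy stand-in, with invented names, in which human scores are attached to sampled outputs and only highly rated ones are kept as further training examples.

```python
# Simplified stand-in for the "label outputs and feed them back" tuning step.
# Real pipelines (e.g. RLHF) train a reward model and optimize against it;
# here we simply keep highly rated samples as extra supervised training data.
from dataclasses import dataclass

@dataclass
class LabeledSample:
    prompt: str
    output: str       # text the current model produced
    human_score: int  # 1 (poor) .. 5 (excellent), assigned by a reviewer

def select_for_next_round(samples, min_score=4):
    """Keep only outputs reviewers rated highly, as (prompt, output) pairs."""
    return [(s.prompt, s.output) for s in samples if s.human_score >= min_score]

labeled = [
    LabeledSample("Summarize the Q3 report", "Revenue rose 12 percent...", 5),
    LabeledSample("Summarize the Q3 report", "I like turtles.", 1),
]
print(select_for_next_round(labeled))  # only the well-rated answer is fed back
```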
Today, training foundation models in particular comes at a steep price,
given the repetitive nature of the process and the substantial
computational resources required to support it. In the beginning of the
training process, the model typically produces random results. To
improve its next output so it is more in line with what is expected, the
training algorithm adjusts the weights of the underlying neural network.
It may need to do this millions of times to get to the desired level of
accuracy. Currently, such training efforts can cost millions of dollars and
take months. Training OpenAI’s GPT-3, for example, is estimated to cost
$4 million to $12 million.9 As a result, the market is currently dominated
by a few tech giants and start-ups backed by significant investment
(Exhibit 2). However, there is work in progress toward making smaller
models that can deliver effective results for some tasks and training that
is more efficient, which could eventually open the market to more
entrants. We already see that some start-ups have achieved certain
success in developing their own models—Cohere, Anthropic, and AI21,
among others, build and train their own large language models (LLMs).
Additionally, there is a scenario where most big companies would want
to have LLMs working in their environments—such as for a higher level
of data security and privacy, among other reasons—and some players
(such as Cohere) already offer this kind of service around LLMs.
Exhibit 2
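The weight-adjustment loop described above is, at its core, iterative optimization: start from essentially random parameters, measure how far the output is from what is expected, and nudge the weights to shrink that gap, again and again. The toy example below is a deliberately minimal sketch that fits a single-weight model in plain Python; a real foundation model applies the same mechanic to billions of weights over millions of steps, which is where the cost comes from.

```python
# Toy illustration of the training loop: adjust weights repeatedly so the
# model's output moves closer to what is expected. Foundation models do the
# same thing with billions of weights and vastly more data and compute.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x with expected outputs y = 2x

weight = 0.1          # starts out essentially random -> random-looking results
learning_rate = 0.01

for step in range(1, 2001):                  # real runs take millions of steps
    gradient = 0.0
    for x, expected in data:
        predicted = weight * x
        # Gradient of the squared error says which way to nudge the weight.
        gradient += 2 * (predicted - expected) * x
    weight -= learning_rate * gradient / len(data)
    if step % 500 == 0:
        loss = sum((weight * x - y) ** 2 for x, y in data) / len(data)
        print(f"step {step}: weight={weight:.4f} loss={loss:.6f}")
# The weight converges toward 2.0 -- the pattern implied by the training data.
```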
It’s important to note that many questions have yet to be answered
regarding ownership and rights over the data used in the development
of this nascent technology—as well as over the outputs produced—
which may influence how the technology evolves (see sidebar, “Three
issues shaping generative AI’s future”).
Services
As with AI in general, dedicated generative AI services will certainly
emerge to help companies fill capability gaps as they race to build out
their experience and navigate the business opportunities and technical
complexities. Existing AI service providers are expected to evolve their
capabilities to serve the generative AI market. Niche players may also
enter the market with specialized knowledge for applying generative AI
within a specific function (such as how to apply generative AI to
customer service workflows), industry (for instance, guiding
pharmaceutical companies on the use of generative AI for drug
discovery), or capability (such as how to build effective feedback loops in
different contexts).
While generative AI technology and its supporting ecosystem are still
evolving, it is already quite clear that applications offer the most
significant value-creation opportunities. Those who can harness niche—
or, even better, proprietary—data in fine-tuning foundation models for
their applications can expect to achieve the greatest differentiation and
competitive advantage. The race has already begun, as evidenced by the
steady stream of announcements from software providers—both
existing and new market entrants—bringing new solutions to market. In
the weeks and months ahead, we will further illuminate value-creation
prospects in particular industries and functions as well as the impact
generative AI could have on the global economy and the future of work.
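The fine-tuning path mentioned above is well supported by open tooling. The sketch below is a hedged, minimal example using the Hugging Face transformers and datasets libraries, with a small GPT-2 model standing in for a foundation model; the proprietary_examples text is invented for illustration, but the overall shape (tokenize niche text, then continue training the pretrained weights on it) is the kind of differentiation the passage describes.

```python
# Minimal sketch of fine-tuning a small pretrained language model on
# proprietary text with Hugging Face transformers/datasets. GPT-2 stands in
# for a foundation model; the example texts are invented for illustration.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

proprietary_examples = [
    "Claim notes: water damage, policy tier B, approved within 48 hours.",
    "Claim notes: windshield replacement, policy tier A, auto-approved.",
]

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token          # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

dataset = Dataset.from_dict({"text": proprietary_examples}).map(
    tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()   # adapts the pretrained weights toward the niche corpus
```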