What Is Google Gemini? - Built in
UPDATED BY
Matthew Urwin | Jun 12, 2024
Gemini is multimodal, meaning its capabilities span text, image and audio
applications. It can generate natural written language, transcribe speech,
create artwork, analyze videos and more, although not all of these capabilities
are yet available to the general public. Like other AI models, Gemini is
expected to get better over time as the industry continues to advance.
Gemini Models
The model comes in four different versions, which vary in size and
complexity:
Gemini 1.0 Ultra is the largest model for performing highly complex tasks,
according to Google. The company says it is the first model to outperform
human experts on a benchmark assessment that covers topics like physics, law
and ethics. The model is being incorporated into several of Google’s most
popular products, including Gmail, Docs, Slides and Meet. For $19.99 a
month, users can access Gemini 1.0 Ultra through the Gemini Advanced
service.
A much smaller version of the Pro and Ultra models, Gemini 1.0 Nano is
designed to be efficient enough to perform tasks directly on smart devices,
instead of having to connect to external servers. 1.0 Nano currently powers
features on the Pixel 8 Pro like Summarize in the Recorder app and Smart
Reply in the Gboard virtual keyboard app.
The latest member of the Gemini family, Gemini 1.5 Flash is a smaller version
of 1.5 Pro and built to perform actions much more quickly than its Gemini
counterparts. 1.5 Flash was trained by 1.5 Pro, receiving 1.5 Pro’s skills and
knowledge. As a result, this model has the context window to handle hefty
tasks while serving as a more cost-efficient alternative to larger models.
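The idea of a smaller model being "trained by" a larger one is commonly called knowledge distillation: the student model learns to match the teacher's output distribution rather than raw labels. The details of how Google trained 1.5 Flash are not public, so the following is only a minimal toy sketch of the general technique, not Google's method. All names here are illustrative.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw scores into a probability distribution; higher temperature softens it."""
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the student's.

    A distilled student is trained to drive this toward zero, so it mimics
    the teacher's full output distribution, not just its top answer.
    """
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return sum(ti * math.log(ti / si) for ti, si in zip(t, s))
```

The loss is zero when the student exactly reproduces the teacher's distribution and grows as the two diverge, which is what lets a compact model like 1.5 Flash inherit much of a larger model's behavior at lower serving cost.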
Generate Text
Like any other LLM, though, Gemini has a tendency to hallucinate. “The
results should be used with a lot of care,” Subodha Kumar, a professor of
statistics, operations and data science at Temple University’s Fox School of
Business, told Built In. “They can come with a lot of errors.”
Produce Images
Gemini is able to generate images from text prompts, similar to other AI art
generators like DALL-E, Midjourney and Stable Diffusion.
This capability was temporarily halted to undergo retooling after Google was
criticized on social media for producing images that depicted specific white
figures as people of color. Image generators have developed a reputation for
amplifying and perpetuating biases about certain races and genders. Google’s
attempts to avoid this pitfall may have gone too far in the other direction,
though.
Gemini can accept image inputs and then analyze what is going on in those
images and explain that information via text. For example, a user can take a
photo of a flat tire and ask Gemini how to fix it, or ask Gemini for help on
their physics homework by drawing out the problem. Gemini can also process
and analyze videos, generate descriptions of what is going on in a given clip
and answer questions about it.
Understand Audio
When fed audio inputs, Gemini can support speech recognition across more
than 100 languages, and assist in various language translation tasks — as
shown in this Google demonstration.
Streamline Workflows
At a high level, the Gemini model can see patterns in data and generate new,
original content based on those patterns.
To accomplish this, Gemini was trained on a large corpus of data. Like several
other LLMs, Gemini is a “closed-source model,” generative AI expert Ritesh
Vajariya told Built In, meaning Google has not disclosed what specific training
data was used. But the model’s dataset is believed to include annotated
YouTube videos, queries in Google Search, text content from Google Books
and scholarly research from Google Scholar. (Google has said that it did not
use any personal data from Gmail or other private apps to train Gemini.)
When a user types a prompt or query into Gemini, the transformer generates
a distribution of potential words or phrases that could follow that input text,
and then selects the one that is most statistically probable. “It starts by
looking at the first word, and uses probability to generate the next word, and
so on,” AI expert Mark Hinkle told Built In.
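Hinkle's description — look at the words so far, then use probability to pick the next one — can be illustrated with a deliberately tiny stand-in for a transformer: a bigram model that counts which word follows which in a corpus and always emits the most probable successor. This is a toy sketch of the general next-token idea, not Gemini's actual architecture.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count word-successor frequencies: a crude stand-in for a learned distribution."""
    model = defaultdict(Counter)
    words = corpus.split()
    for cur, nxt in zip(words, words[1:]):
        model[cur][nxt] += 1
    return model

def next_word(model, word):
    """Select the statistically most probable next word, as the article describes."""
    counts = model.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]
```

A real LLM does the same thing over tokens rather than whole words, with a neural network producing the distribution instead of raw counts, and it typically samples from the distribution rather than always taking the single most likely token.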
Gemini can also process images, videos and audio. It was trained on trillions
of pieces of text, images (along with their accompanying text descriptions),
videos and audio clips. And it was further fine-tuned using reinforcement
learning with human feedback (RLHF), a method that incorporates human
feedback into the training process so the model can better align its outputs
with user intent.
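At the heart of RLHF is a reward model trained on human preference judgments: given two candidate responses, it should score the one humans preferred higher. A common formulation is the Bradley-Terry pairwise loss, sketched below as a minimal illustration — the specifics of Gemini's RLHF pipeline are not public.

```python
import math

def preference_loss(reward_chosen, reward_rejected):
    """Bradley-Terry pairwise loss: -log(sigmoid(r_chosen - r_rejected)).

    The loss is small when the reward model scores the human-preferred
    response well above the rejected one, and large when it gets the
    ordering wrong. Minimizing it teaches the reward model to mirror
    human judgments, which then guide the LLM's fine-tuning.
    """
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

Once trained, the reward model provides the feedback signal that reinforcement learning uses to nudge the base model's outputs toward what humans actually want.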
The Gemini and GPT-4o language models share several similarities in their
underlying architecture and capabilities. But they also have significant
differences that shape the user experience and functionality of their
associated chatbots, Gemini and ChatGPT, respectively.
Both Gemini 1.5 Pro and 1.5 Flash display increased context windows, with the
former possessing a context window of up to 2 million tokens and the latter up
to 1 million tokens. GPT-4o’s context window pales in comparison, landing at
128,000 tokens. Alphabet CEO Sundar Pichai has referred to Gemini’s context
window as “the longest context window of any foundational model yet,” and it
appears this statement is valid for the time being.
As a result, 1.5 Pro and 1.5 Flash should have a greater ability to handle dense
information and challenging tasks than GPT-4o.
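To make these numbers concrete, the comparison can be sketched as a quick fit check. Exact token counts depend on each model's tokenizer; the rough four-characters-per-token heuristic below is an assumption for illustration only.

```python
# Published context-window sizes, in tokens (per the article).
CONTEXT_WINDOWS = {
    "gemini-1.5-pro": 2_000_000,
    "gemini-1.5-flash": 1_000_000,
    "gpt-4o": 128_000,
}

def estimate_tokens(text):
    """Very rough heuristic: ~4 characters of English text per token."""
    return max(1, len(text) // 4)

def models_that_fit(text):
    """Return the models whose context window can hold the whole text at once."""
    needed = estimate_tokens(text)
    return [name for name, window in CONTEXT_WINDOWS.items() if needed <= window]
```

By this estimate, a 600,000-character document (roughly a few long novels) would fit comfortably in either Gemini 1.5 model's window but overflow GPT-4o's 128,000-token limit.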
Gemini has always had real-time access to Google’s search index, which can
“keep feeding” the model information, Hinkle said. So the Gemini chatbot can
draw on data pulled from the internet to answer queries, and is fine-tuned to
select data chosen from sources that fit specific topics, such as scientific
research or coding.
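Pulling fresh external data into a prompt before the model answers is the general pattern behind this: retrieve relevant sources, then prepend them as context. Google has not published how Gemini's retrieval works, so this is only a toy sketch of the pattern using naive keyword overlap in place of a real search index.

```python
def retrieve(query, documents, k=2):
    """Rank documents by naive keyword overlap with the query (a stand-in for real search)."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_augmented_prompt(query, documents):
    """Prepend the retrieved sources so the model can ground its answer in them."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

A production system would swap the keyword overlap for an actual search index or embedding similarity, but the shape is the same: retrieved text goes into the context window, and the model answers from it.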
Google trained Gemini on its in-house AI chips, called tensor processing units
(TPUs). Specifically, it was trained on the TPU v4 and v5e, which were
explicitly engineered to accelerate the training of large-scale generative AI
models. In the future, Gemini will be trained on the v5p, Google’s fastest and
most efficient chip yet. Meanwhile, GPT-4o was trained on Nvidia’s H100
GPUs, one of the most sought-after AI chips today.
Google’s commitment to speed has paid off in some ways, with Gemini 1.5
Flash ranking as the fastest model on the market and one of the cheapest
options, second only to Meta’s Llama 3 model. However, the focus on speed
has come at a price: 1.5 Flash falls to the middle of the pack in
terms of overall quality. GPT-4o, GPT-4 Turbo, Claude 3 Opus and Llama 3 all
rank ahead of 1.5 Flash in the quality index.
For free: You can head to gemini.google.com and use it for free through the
Gemini chatbot. Or you can download the Gemini app on your smartphone.
Android users can also replace Google Assistant with Gemini.
Paid version: You can also subscribe to the Gemini Advanced service for
$19.99 a month, where you can access updated versions of popular products
like Gmail, Docs, Slides and Meet — all of which have Gemini Ultra built into
them.