Explain AI


Explain AI, ML, DL, generative AI, how generative AI works, and how it learns.

Please welcome Andrew.

[Applause]

Thank you. It's such a good time to be a builder. I'm excited to be back here at Snowflake Build. What I'd like to do today is share with you where I think some of AI's biggest opportunities are. You may have heard me say that I think AI is the new electricity. That's because AI is a general-purpose technology, like electricity. If I ask you what electricity is good for, it's always hard to answer, because it's good for so many different things, and new AI technology is creating a huge set of opportunities for us to build new applications that weren't possible before.

People often ask me, "Hey Andrew, where are the biggest AI opportunities?" This is what I think of as the AI stack. At the lowest level are the semiconductors, and on top of that a lot of the cloud infrastructure, including of course Snowflake, and on top of that are many of the foundation model trainers and models. It turns out that a lot of the media hype, excitement, and social media buzz has been on these layers of the stack, the new technology layers. Whenever there's a new technology like generative AI, the buzz lands on the technology layers, and there's nothing wrong with that. But I think that, almost by definition, there's another layer of the stack that has to work out even better, and that's the application layer, because we need the applications to generate even more value and even more revenue, so that they can afford to pay the technology providers below. So I spend a lot of my time thinking about AI applications, and I think that's where a lot of the best opportunities will be to build new things. One of the trends that has been
growing for the last couple of years, in no small part because of generative AI, is faster and faster machine learning model development. In particular, generative AI is letting us build things faster than ever before. Take the problem of building a sentiment classifier: taking text and deciding whether it expresses positive or negative sentiment, say for reputation monitoring. A typical workflow using supervised learning might take a month to get some labeled data, then a few months to train an AI model, and then another few months to find a cloud service to deploy on. So for a long time, very valuable AI systems might take good AI teams six to twelve months to build, and there's nothing wrong with that; many people have created very valuable AI systems this way. But with generative AI there are certain classes of applications where you can write a prompt in days and then deploy it in, again, maybe days. What this means is that a lot of applications that used to take me, and used to take very good AI teams, months to build can today be built in maybe ten days or so, and this opens up the opportunity to experiment, build new prototypes, and ship new AI products.

One consequence of this trend is that fast experimentation is becoming a more promising path to invention. Previously, if it took six months to build something, then we'd better study it, make sure there's user demand, have product managers look at it, document it, and then spend all that effort to build it, hoping it turns out to be worthwhile. But now, for fast-moving AI teams, I see a design pattern where you can say: it takes us a weekend to throw together a prototype, so let's build twenty prototypes and see what sticks, and if eighteen of them don't work out, we'll just ditch them and stick with what works. So fast iteration and fast experimentation is becoming a new path to inventing new user experiences. One interesting
implication is that evaluations, or evals for short, are becoming a bigger bottleneck for how we build things. It turns out that in the supervised learning world, if you were collecting 10,000 data points anyway to train a model, then collecting an extra 1,000 data points for testing was fine; it was roughly a 10% increase in cost. But for a lot of large-language-model-based apps, there's no need for any training data at all, so if you made me slow down to collect a thousand test examples, that would feel like a huge bottleneck. So the new development workflow often feels as if we're building and collecting data in parallel rather than sequentially: we build a prototype, and then, as it becomes more important and as robustness and reliability become more important, we gradually build up the test set in parallel. I still see exciting innovations to be had in how we build evals.
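To make the parallel-eval idea concrete, here's a minimal sketch of a tiny, incrementally-grown eval set for the sentiment classifier example above. The `classify_sentiment` function is a hypothetical placeholder for a prompted LLM call (here just a keyword heuristic), not any real API; the point is that the test set starts small and grows alongside the prototype rather than being collected up front.

```python
# Minimal eval harness for an LLM-based sentiment classifier.
# classify_sentiment() is a hypothetical stand-in for a prompted LLM call.

def classify_sentiment(text: str) -> str:
    """Placeholder for an LLM call; a trivial keyword heuristic here."""
    negative_words = {"terrible", "awful", "bad", "worst"}
    words = set(text.lower().split())
    return "negative" if words & negative_words else "positive"

# The eval set starts small and grows in parallel with development.
EVAL_SET = [
    ("I love this product", "positive"),
    ("This is the worst service ever", "negative"),
    ("Absolutely terrible support", "negative"),
]

def run_evals() -> float:
    """Return accuracy of the classifier over the current eval set."""
    correct = sum(
        1 for text, label in EVAL_SET if classify_sentiment(text) == label
    )
    return correct / len(EVAL_SET)
```

Each time the prototype fails on a new input, that input and its expected label get appended to `EVAL_SET`, so the eval grows alongside the app.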
What I'm also seeing is that the prototyping of the machine learning model has become much faster, but building a software application has lots of other steps: does the product work, does the design work, does the software integration work, a lot of plumbing work, and then after deployment, DevOps and LLMOps. Some of those other pieces are becoming faster too, but they haven't sped up at the same rate that the machine learning modeling part has. So you take a process and one piece of it becomes much faster: prototyping is now really fast, but taking a prototype to robust, reliable production, with guardrails and so on, those other steps still take some time. The interesting dynamic I'm seeing is that the machine learning part being so fast is putting a lot of pressure on organizations to speed up all those other parts as well. So that's been exciting progress in terms of how machine learning development is speeding things up.

I think the mantra "move fast and break things" got a bad rep because, you know, it broke things. Some people interpret this to mean we shouldn't move fast, but I disagree. I think the better mantra is "move fast and be responsible." I'm seeing a lot of teams able to prototype quickly but evaluate and test robustly, without shipping anything out to the wider world that could cause damage or meaningful harm. I'm finding smart teams able to build really quickly and move really fast, but also do it in a very responsible way, and I find it exhilarating that you can build things and ship things responsibly much faster than ever before. Now, there's a lot going on in AI,
and of all of it, in terms of technical trends, the one trend I'm most excited about is agentic AI workflows. If you were to ask what's the single most important AI technology to pay attention to, I would say it's agentic AI. When I started saying this near the beginning of this year, it was a bit of a controversial statement, but now the term "AI agents" has become so widely used, by technical and non-technical people alike, that it's become a little bit of a hype term. So let me share with you how I view AI agents and why I think they're important, approaching it just from a technical perspective. The way that most of us
large language models today is with what
something is called zero shot prompting
and that roughly means we would ask it
to uh give it a prompt write an essay or
write an output for us and it's a bit
like if we're going to a person or in
this case going to an AI and asking it
to type out an essay for us by going
from the first word writing from the
first word to the last word all in one
go without ever using backspac just
right from start to finish like that and
it turns out people you know we don't do
our best writing this way uh but despite
the difficulty of being forced to write
this way a Lish models do you know not
bad pretty
well here's what an agentic workflow
it's like uh to gener an essay we ask an
AI to First write an essay outline and
ask you do you need to do some web
research if so let's download some web
pages and put into the context of the
large H model then let's write the first
draft and then let's read the first
draft and critique it and revise the
draft and so on and this workflow looks
more like um doing some thinking or some
research and then some revision and then
going back to do more thinking and more
research and by going round this Loop
over and over um it takes longer but
this results in a much better work
output so in some teams I work with we
apply this agentic workflow to
processing complex tricky legal
documents or to um do Health Care
diagnosis Assistance or to do very
complex compliance with government
paperwork so many times I'm seeing this
drive much better results than was ever
possible and one thing I'm want to focus
on in this presentation I'll talk about
later is devise of visual AI where
agentic repal are letting us process
image and video data
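The outline-draft-critique-revise loop just described can be sketched in a few lines of Python. Here `llm` is a hypothetical stand-in for a call to any large language model API; the structure of the loop, not the model call, is the point.

```python
# Sketch of an agentic essay-writing loop: outline, draft, critique, revise.
# llm() is a hypothetical stand-in for a real LLM API call.

def llm(prompt: str) -> str:
    """Placeholder that echoes the task; swap in a real model call."""
    return f"[model output for: {prompt[:40]}...]"

def write_essay(topic: str, revisions: int = 2) -> str:
    # Think first: outline before drafting.
    outline = llm(f"Write an outline for an essay on: {topic}")
    draft = llm(f"Write a first draft following this outline:\n{outline}")
    # Go around the critique/revise loop over and over.
    for _ in range(revisions):
        critique = llm(f"Critique this draft and list concrete fixes:\n{draft}")
        draft = llm(f"Revise the draft to address this critique:\n{critique}")
    return draft
```

A web-research step would slot in before the first draft, downloading pages and appending them to the drafting prompt.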
But I'll get back to that later. It turns out there are benchmarks that seem to show agentic workflows delivering much better results. This is the HumanEval benchmark, a benchmark released by OpenAI that measures a large language model's ability to solve coding puzzles like this one. My team collected some data, and it turns out that on this benchmark, I believe using the pass@k metric, GPT-3.5 got 48% right, and GPT-4 was a huge improvement at 67%. But the improvement from GPT-3.5 to GPT-4 is dwarfed by the improvement from GPT-3.5 to GPT-3.5 wrapped in an agentic workflow, which gets up to about 95%, and GPT-4 with an agentic workflow also does much better.
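As an aside, the pass@k metric mentioned here comes from the HumanEval paper (Chen et al., 2021): generate n code samples per problem, count the number c that pass the unit tests, and compute an unbiased estimate of the probability that at least one of k samples passes. A small sketch of that estimator:

```python
# Unbiased pass@k estimator from the HumanEval paper:
# pass@k = 1 - C(n - c, k) / C(n, k), computed per problem and averaged.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """n: samples generated, c: samples that passed, k: evaluation budget."""
    if n - c < k:
        return 1.0  # every size-k subset must contain at least one pass
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with 10 samples of which 3 pass, `pass_at_k(10, 3, 1)` gives 0.3, the chance a single draw succeeds.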
And so it turns out that, in the way builders have built agentic reasoning, or agentic workflows, into their applications, there are, I want to say, four major design patterns: reflection, tool use, planning, and multi-agent collaboration. To demystify agentic workflows a little bit, let me quickly step through what these workflows mean. I find that agentic workflows sometimes seem a little mysterious until you actually read through the code for one or two of them, and then you go: oh, that's it? That's really cool, but that's all it takes? So let me just step through them.
For concreteness, here's what reflection with LLMs looks like. I might start off prompting an LLM, a coder agent, maybe with a system message setting its role to be a coder, and ask it to write code for a certain task, and the LLM generates code. It turns out you can then construct a prompt that takes the code that was just generated, copies it back into the prompt, and asks: here's some code intended for a task; examine this code and critique it. When you prompt the same LLM this way, it may sometimes find problems with the code or make useful suggestions to improve it. You then prompt the same LLM with that feedback and ask it to improve the code, and it comes back with a new version. And, maybe foreshadowing tool use, you can have the LLM run some unit tests and feed the results of those tests back to it, which becomes additional feedback that helps it iterate further and improve the code. This type of reflection workflow is not magic; it doesn't solve all problems; but it will often take the baseline level of performance and lift it to a better level.

This type of workflow, where we prompt an LLM to critique its own output and use its own criticism to improve it, also foreshadows multi-agent workflows, where you prompt one LLM to sometimes play the role of a coder and sometimes play the role of a critic who reviews the code. It's the same conversation, but we prompt the LLM differently, sometimes telling it to work on the code and sometimes to make helpful suggestions, and this too results in improved performance. So this is the reflection design pattern.
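A minimal sketch of the reflection loop just described, assuming a generic `llm` placeholder rather than any particular vendor's API. The same model is prompted first as a coder and then as its own critic.

```python
# Reflection pattern: generate, self-critique, revise, repeat.
# llm() is a hypothetical stand-in for a real LLM API call.

def llm(prompt: str) -> str:
    return f"[model output for: {prompt[:40]}...]"

def reflect_and_improve(task: str, rounds: int = 2) -> str:
    # First pass: the LLM plays the coder role.
    code = llm(f"Write code for this task:\n{task}")
    for _ in range(rounds):
        # Same LLM, now prompted to critique its own output.
        critique = llm(
            "Here is some code intended for a task. "
            f"Examine this code and critique it:\n{code}"
        )
        # Feed the criticism back in and ask for an improved version.
        code = llm(f"Improve the code using this feedback:\n{critique}")
    return code
```

Wiring unit-test results into the critique prompt, as mentioned above, would be a small extension of the same loop.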
And the second major design pattern is tool use, in which a large language model can be prompted to generate a request for an API call, deciding for itself when it needs to search the web, execute code, or take an action like issuing a customer refund, sending an email, or pulling up a calendar entry. So tool use is a major design pattern that is letting large language models make function calls, and I think this is expanding what we can do with these agentic workflows.
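Here's a sketch of the tool-use pattern under simple assumptions: the model emits a structured tool request (plain JSON here, rather than any particular vendor's function-calling format), and the application dispatches it to a registered function. The tool names and stub implementations are illustrative.

```python
# Tool use pattern: the LLM emits a structured call; the app dispatches it.
# The JSON protocol and tool names here are illustrative assumptions.
import json

def search_web(query: str) -> str:
    return f"(stub) top results for '{query}'"

def issue_refund(order_id: str) -> str:
    return f"(stub) refund issued for order {order_id}"

# Registry of functions the model is allowed to call.
TOOLS = {"search_web": search_web, "issue_refund": issue_refund}

def dispatch(model_output: str) -> str:
    """Parse a JSON tool request like {"tool": ..., "args": {...}} and run it."""
    request = json.loads(model_output)
    fn = TOOLS[request["tool"]]
    return fn(**request["args"])

# As if the model had decided a refund was needed:
result = dispatch('{"tool": "issue_refund", "args": {"order_id": "A123"}}')
```

In a real system the dispatch result would be appended back into the conversation so the model can continue from it.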
Real quick, here's the planning, or reasoning, design pattern. If you give a fairly complex request, say: generate an image of a girl reading a book in a given pose, then, in this example adapted from the HuggingGPT paper, an LLM can look at the request and decide to first use an OpenPose model to detect the pose, then generate a picture of a girl in that pose, then describe the image, and then use text-to-speech, or TTS, to generate the audio. So in planning, an LLM looks at a complex request and picks a sequence of actions to execute in order to deliver on a complex task.
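A sketch of planning under simple assumptions: `plan` stands in for an LLM that decomposes the request into an ordered list of tool steps, and the application executes them in sequence, feeding each step's output into the next. The step names echo the HuggingGPT-style example above and are illustrative.

```python
# Planning pattern: the LLM picks a sequence of actions; we execute in order.
# plan() stands in for an LLM planner; the tools below are stubs.

def detect_pose(x: str) -> str: return f"pose from {x}"
def generate_image(x: str) -> str: return f"image matching {x}"
def describe_image(x: str) -> str: return f"description of {x}"
def text_to_speech(x: str) -> str: return f"audio of {x}"

STEPS = {
    "detect_pose": detect_pose,
    "generate_image": generate_image,
    "describe_image": describe_image,
    "text_to_speech": text_to_speech,
}

def plan(request: str) -> list[str]:
    """Hypothetical planner output for the example request."""
    return ["detect_pose", "generate_image", "describe_image", "text_to_speech"]

def run_plan(request: str) -> str:
    # Execute the chosen steps in order, chaining each output forward.
    result = request
    for step in plan(request):
        result = STEPS[step](result)
    return result
```

In a real system, `plan` would be a prompted model call that selects from the registered tools, and failures at one step would feed back into replanning.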
And lastly, multi-agent collaboration is the design pattern I alluded to, where instead of prompting an LLM to just do one thing, you prompt the LLM to play different roles at different points in time, so that different simulated agents interact with each other and come together to solve a task. I know some people may wonder: if you're using one LLM, why do you need to make it play the roles of multiple agents? But many teams have demonstrated significantly improved performance on a variety of tasks using this design pattern: if you have an LLM specialize on different tasks, maybe one at a time, and have those roles interact, many teams seem to get much better results. I feel like there's an analogy to running jobs on a CPU: why do we need multiple processes when it's all the same processor at the end of the day? We found that having multiple processes is a useful abstraction for developers, a way to take a task and break it down into subtasks, and I think multi-agent collaboration is a bit like that too. If you have a big task, thinking of it as hiring a bunch of agents to do different pieces of the task and then interact sometimes helps the developer build complex systems that deliver a good result.
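A minimal sketch of multi-agent collaboration as described: one underlying model (the hypothetical `llm` placeholder) is prompted with different role instructions, and the coder and critic roles trade messages for a few rounds within the same conversation.

```python
# Multi-agent collaboration: one LLM, two roles (coder and critic) in dialogue.
# llm() is a hypothetical stand-in for a real LLM API call.

def llm(system: str, prompt: str) -> str:
    """Same underlying model; the system string sets the role it plays."""
    return f"[{system} output for: {prompt[:30]}...]"

def multi_agent_code(task: str, rounds: int = 2) -> str:
    # The "coder" agent produces a first attempt.
    work = llm("coder", f"Write code for: {task}")
    for _ in range(rounds):
        # The "critic" agent reviews; the "coder" agent applies the review.
        review = llm("critic", f"Review this code and suggest fixes:\n{work}")
        work = llm("coder", f"Apply this review:\n{review}\n\nCode:\n{work}")
    return work
```

The design choice mirrors the process analogy above: one model, but the role split gives the developer a clean way to decompose the task.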
So I think these four major agentic design patterns, these agentic reasoning workflow design patterns, give us a huge space to play with to build rich agents that do things that frankly were just not possible even a year ago. One aspect of this I'm particularly excited about is the rise of not just large language model based agents but large multimodal model based agents. Given an image like this, if you wanted to use an LMM, a large multimodal model, you could do zero-shot prompting, which is a bit like telling it: take one glance at the image and just tell me the output. For simple image tasks that's okay; you can have it look at the image and, say, read off the numbers on the runners. But it turns out that, just as with large language model based agents, large multimodal model based agents can do better with an iterative workflow, where you approach the problem step by step: detect the faces, detect the numbers, put it together. With this more iterative workflow you can actually get an agent to do some planning, write code, test, plan again, write code, test, and come up with a more complex plan, articulated as code, to deliver on more complex tasks.

So what I'd like to do is show you a demo of some work that Dan Malone and I and the Landing AI team have been working on, building agentic workflows for visual AI tasks. Let's switch to my laptop.
I have an image here of a soccer game, or football game, and I'm going to say: count the players on the field. Oh, and just for fun, if you're not sure how to prompt it after uploading an image, this little light bulb here gives some suggested prompts you might ask. So let me run this: count the players on the field. What this kicks off is a process that actually runs for a couple of minutes, to think through how to write code in order to come up with a plan that gives an accurate result for counting the number of players on the field. This is actually a little bit complex, because you don't want to count the players in the background, just the ones on the field. I already ran this earlier, so let's jump to the result. It says the code has selected seven players on the field, and I think that's right: one, two, three, four, five, six, seven. If I zoom in to the model output: one, two, three, four, five, six, seven; I think that's actually right. And part of the output is that it has also generated code, actual Python code, that you can run over and over, if you want, on a large collection of images.

I think this is exciting because there are a lot of companies and teams that have a lot of visual AI data, a lot of images and videos stored somewhere, and until now it's been really difficult to get value out of that data. For a lot of small teams and large businesses with a lot of visual data, visual AI capabilities like the Vision Agent let you take all this data, previously shoved somewhere in blob storage, and get real value out of it. I think this is a big transformation for AI.
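Code like what the Vision Agent generates for this task might look roughly like the following sketch. This is not the tool's actual output: `detect_players` is a hypothetical stand-in for an object-detection model call, and the confidence filtering loosely reflects the on-field-versus-background distinction mentioned above.

```python
# Sketch of generated-style code for counting players on the field.
# detect_players() is a hypothetical stand-in for an object-detection model;
# the boxes and scores below are illustrative.

def detect_players(image_path: str) -> list[dict]:
    """Pretend detector: returns bounding boxes with confidence scores."""
    return [
        {"box": (120, 300, 40, 90), "score": 0.96},
        {"box": (480, 320, 38, 85), "score": 0.91},
        {"box": (700, 60, 20, 45), "score": 0.55},  # background spectator
    ]

def count_players_on_field(image_path: str, min_score: float = 0.8) -> int:
    detections = detect_players(image_path)
    # Keep confident detections only, dropping likely-background people.
    return sum(1 for d in detections if d["score"] >= min_score)
```

Because the result is ordinary Python, it can be run in a batch loop over a whole directory of images, which is exactly the appeal described above.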
Here's another example. This says: given a video, another soccer game, or football game, split the video into clips of five seconds, find the clip where a goal is being scored, and display a frame of the output. I ran this already, because it takes a little time: it generates code and evaluates code for a while, and this is the output. It says true at 10 to 15 seconds, so it thinks a goal is scored around here, between ten and fifteen seconds, and there you go, that's the goal, and as instructed it extracted some of the frames associated with it. So this is really useful for processing video data.

And maybe here's one last example of the Vision Agent. You can also ask it for a program to split the input video into small video chunks every six seconds, describe each chunk, store the information in a pandas DataFrame along with the clip name and start and end times, and return the DataFrame. This is a way to look at video data you may have and generate metadata for it, which you can then store, you know, in Snowflake or somewhere, to then build other applications on top of. Just to show you the output of this: clip name, start time, end time, and it has actually written code, code that you can then run elsewhere if you want, say, put it in a Streamlit app or something, that you can then use to write a lot of text descriptions for this.
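The kind of metadata table just described, clip name, start time, end time, description, could be built with a sketch like this. `describe_chunk` is a hypothetical stand-in for a multimodal model call, the six-second chunking mirrors the prompt above, and the rows could be turned into the requested pandas DataFrame with `pd.DataFrame(rows)`.

```python
# Sketch: split a video into 6-second chunks and build a metadata table.
# describe_chunk() is a hypothetical stand-in for a multimodal model call.

def describe_chunk(clip_name: str) -> str:
    return f"(stub) description of {clip_name}"

def build_video_metadata(video_name: str, duration_s: int, chunk_s: int = 6):
    rows = []
    for start in range(0, duration_s, chunk_s):
        end = min(start + chunk_s, duration_s)
        clip_name = f"{video_name}_{start:04d}_{end:04d}.mp4"
        rows.append({
            "clip_name": clip_name,
            "start_time": start,
            "end_time": end,
            "description": describe_chunk(clip_name),
        })
    return rows  # e.g. pd.DataFrame(rows) to get the DataFrame
```

A table like this is exactly what you'd load into a warehouse to power search or browsing applications over the video collection.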
And using this capability of the Vision Agent to help write code, my team at Landing AI built this little demo app that uses code from the Vision Agent: instead of us writing the code, we had the Vision Agent write the code to build this metadata, and then it indexes a bunch of videos. So let's see, I'll search: skier airborne. I actually ran this earlier, so I hope it works. What this demo shows is that we already ran the code to take the video, split it into chunks, and store the metadata, and then, when I do a search for "skier airborne," it shows the clips that have high similarity, marked here in green. Well, this is getting my heart rate up, seeing them do that. Oh, here's another one. Whoa! All right. And the green parts of the timeline show where the skier is airborne.

Let's see: gray wolf at night. I actually find it pretty fun, when you have a collection of videos indexed, to just browse through it. Here's a gray wolf at night, and the timeline in green shows where a gray wolf at night appears, and if I jump to a different part of the video, there's a bunch of other stuff as well that's not a gray wolf at night. So I think that's pretty cool.

Let's see, just one last example. I've actually been on the road a lot, and if you search for your luggage, this black luggage, well, it turns out there's actually a lot of black luggage out there. So if you want your luggage, let's say: black luggage with rainbow strap; again, there's a lot of black luggage out there; then, there you go, black luggage with rainbow strap. So there are a lot of fun things to do, and I think the nice thing is that the work needed to build applications like this is lower than ever before. So let's go back to the slides.
And in terms of AI opportunities: I spoke a bit about agentic workflows, and how that is changing the AI stack is as follows. It turns out that in addition to the stack I showed, there's a new emerging agentic orchestration layer. There have been orchestration layers like LangChain around for a while, and they are also becoming increasingly agentic, through LangGraph for example, and this new agentic orchestration layer is making it easier for developers to build applications on top. And I hope that Landing AI's Vision Agent is another contribution here, making it easier for you to build visual AI applications to process all the image and video data you may have had, but that was really hard to get value out of until recently.

So finally, let me share what I think are maybe four of the most important AI trends. There's a lot going on in AI; it's impossible to summarize everything in one slide, and if you made me pick the single most important trend, I would say agentic AI; but here are four things I think are worth paying attention to. First, it turns out that agentic workflows need to read a lot of text or images and generate a lot of text; we say they generate a lot of tokens; and there are exciting efforts to speed up token generation, including semiconductor work by companies like SambaNova, Cerebras, and Groq, and a lot of software and other types of hardware work as well. This will make agentic workflows work much better.
The second trend I'm excited about: today's large language models started off being optimized to answer human questions and follow human-generated instructions, things like "Why did Shakespeare write Macbeth?" or "Explain why Shakespeare wrote Macbeth." These are the types of questions large language models are often asked to answer on the internet. But agentic workflows call for other operations, like tool use. So the fact that large language models are now often tuned explicitly to support tool use, and that just a couple of weeks ago Anthropic released a model that can support computer use, these exciting developments create a lot of lift, a much higher ceiling, for what we can get agentic workflows to do with large language models that are tuned not just to answer human queries but explicitly to fit into these iterative agentic workflows.
Third, data engineering's importance is rising, particularly with unstructured data. It turns out that a lot of the value of machine learning used to come from structured data, tables of numbers, but with gen AI we're much better than ever before at processing text, images, video, and maybe audio. So the importance of data engineering is increasing, in terms of how to manage your unstructured data and its metadata, and deployment, getting the unstructured data where it needs to go to create value. That will be a major effort for a lot of large businesses.

And lastly, I think we've all seen that the text processing revolution has already arrived. The image processing revolution is at a slightly earlier phase, but it is coming, and as it comes, many people and many businesses will be able to get a lot more value out of their visual data than was ever possible before. I'm excited because I think that will significantly increase the space of applications we can build as well.

So, to wrap up: this is a great time to be a builder. Gen AI is letting us experiment faster than ever, agentic AI is expanding the set of things that are now possible, and there are just so many new applications, in visual AI and beyond, that we can now build that just weren't possible before. If you're interested in checking out the visual AI demos I ran, please go to va.landing.ai; the exact demos I ran, you can try out yourself online, get the code, and run the code yourself in your own applications. So with that, let me say thank you all very much, and please join me in welcoming Elsa back onto the stage. Thank you.

What is AI?
Picture this: a machine that could organize your cupboard just as you like it, or serve every member of the house a customized cup of coffee. Makes your day easier, doesn't it? These are the products of artificial intelligence. But why use the term "artificial intelligence"? Well, these machines are artificially incorporated with human-like intelligence to perform tasks as we do. This intelligence is built using complex algorithms and mathematical functions.

Uses of AI (Artificial Intelligence)
But AI may not be as obvious as in the previous examples. In fact, AI is used in smartphones, cars, social media feeds, video games, banking, surveillance, and many other aspects of our daily life.

What is AI (Artificial Intelligence)
The real question is: what does an AI do at its core? Here is a robot we built in our lab, which is now dropped onto a field. In spite of variation in lighting, landscape, and the dimensions of the field, the AI robot must perform as expected. This ability to react appropriately to a new situation is called generalized learning. The robot is now at a crossroads: one path is paved and the other rocky. The robot must determine which path to take based on the circumstances; this portrays the robot's reasoning ability. After a short stroll, the robot encounters a stream that it cannot swim across. Using the plank provided as an input, the robot is able to cross the stream. So our robot uses the given input and finds the solution to a problem; this is problem solving. These three capabilities make the robot artificially intelligent. In short, AI provides machines with the capability to adapt, reason, and provide solutions.

Well, now that we know what AI is, let's have a look at the two broad categories AI is classified into.
Weak AI (Artificial Intelligence)
Weak AI, also called narrow AI, focuses solely on one task. For example, AlphaGo is a maestro of the game Go, but you can't expect it to be even remotely good at chess. This makes AlphaGo a weak AI. You might say Alexa is definitely not a weak AI, since it can perform multiple tasks. Well, that's not really true. When you ask Alexa to play Despacito, it picks up the keywords "play" and "Despacito" and runs a program it is trained to run. Alexa cannot respond to a question it isn't trained to answer. For instance, try asking Alexa the status of traffic from work to home; Alexa cannot provide you this information, as it is not trained to.

Strong AI (Artificial Intelligence)
And that brings us to our second category of AI: strong AI. Now, this is much like the robots that only exist in fiction as of now. Ultron from Avengers is an ideal example of a strong AI. That's because it's self-aware and eventually even develops emotions, which makes the AI's responses unpredictable.
Difference between AI ML and Deep learning
You must be wondering: how is artificial intelligence different from machine learning and deep learning? We saw what AI is. Machine learning is a technique to achieve AI, and deep learning, in turn, is a subset of machine learning. Machine learning provides a machine with the capability to learn from data and experience through algorithms. Deep learning does this learning through ways inspired by the human brain; this means that through deep learning, data and patterns can be better perceived.

Future of Artificial Intelligence
Ray Kurzweil, a well-known futurist, predicts that by the year 2045 we will have robots as smart as humans; this is called the point of singularity. Well, that's not all. In fact, Elon Musk predicts that the human mind and body will be enhanced by AI implants, which would make us partly cyborgs.

So here's a question for you: which of the below AI projects don't exist yet?
a) An AI robot with citizenship
b) A robot with a musculoskeletal system
c) AI that can read its owner's emotions
d) AI that develops emotions over time
Give it a thought and leave your answers in the comment section below. Three lucky winners will receive Amazon gift vouchers.

Since the human brain is still a mystery, it's no surprise that AI too has a lot of unventured domains. For now, AI is built to work with humans and make our tasks easier. However, with the maturation of technology, we can only wait and watch what the future of AI holds for us. Well, that is artificial intelligence for you, in short. Do not forget to leave your answer to the quiz in the comment section below. Also like, share, and subscribe to our channel if you enjoyed this video. Stay tuned and keep learning.
[Music]

Intro
Everybody's talking about artificial intelligence these days, AI.
Machine learning is another hot topic.
Are they the same thing or are they different?
And if so, what are those differences?
And deep learning is another one that comes into play.
I actually did a video on these three:
artificial intelligence, machine learning and deep learning
and talked about where they fit.
And there were a lot of comments on that.
And I read those comments,
and I'd like to address some of the most frequently asked questions
so that we can clear up some of the myths and misconceptions around
this.
In addition, something else has happened since that video was recorded,
and that is the absolute explosion of this area of generative AI.
Things like large language models and chatbots
seem to be taking over the world.
We see them everywhere.
Really interesting technology.
And then also things like deepfakes.
These are all within the realm of AI, but how do they fit within each other?
How are they related to each other?
We're going to take a look at that in this video
and try to explain how all these technologies relate, and how we can use
them.
AI
First off, a little bit of a disclaimer.
I'm going to have to simplify some of these concepts
in order to not make this video last for a week.
So, those of you that are really deep experts in the field, apologies in
advance,
but we're going to try to make this simple,
and that will involve some generalizations.
First of all, let's start with AI.
Artificial intelligence is basically trying to simulate with a computer
something that would match or exceed human intelligence.
What is intelligence?
Well, it could be a lot of different things, but generally we tend to think of
it
as the ability to learn, to infer and to reason things like that.
So, that's what we're trying to do in the broad field of AI, of artificial
intelligence.
And if we look at a timeline of AI, it really kind of started back around this
time frame.
And in those days it was very premature.
Most people had not even heard of it.
And it basically was a research project.
But I can tell you, as an undergrad,
which for me was back during these times, we were doing AI work.
In fact, we would use programming languages like Lisp, or Prolog,
and these kinds of things were kind of the predecessors
to what became, later, expert systems.
And this was a technology ... again, some of these things existed previously,
but that's when it really hit a kind of critical mass
and became more popularized.
So, expert systems of the 1980s, maybe into the 90s.
And again, we used technologies like this.
All of this was something that we did
before we ever touched in to the next topic I'm going to talk about.
And that's the area of machine learning.
Machine Learning
Machine learning is, as its name implies, the machine doing the learning.
I don't have to program it.
I give it lots of information and it observes things.
So, for instance, if I start doing this,
if I give you this
and then ask you to predict what's the next thing that's going to be there,
well, you might get it, you might not.
You have very limited training data to base this on.
But if I gave you one of those
and then ask you what to predict, what will happen next?
Well, you're probably going to say this and then you're going to say it's
this.
And then you think you got it all figured out.
And then you see one of these,
and then all of a sudden I give you one of those and throw you a
curveball.
So this in fact and then maybe it goes on like this.
So a machine learning algorithm is really good at looking at patterns
and discovering patterns within data.
The more training data you can give it,
the more confident it can be in predicting.
So predictions are one of the things that machine learning is particularly
good at.
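The pattern-and-prediction idea can be sketched in a few lines of Python. This is not a real machine learning library, just a toy that counts which symbol tends to follow which, the crudest possible "model":

```python
from collections import Counter, defaultdict

def train(sequence):
    """Count how often each symbol follows each other symbol."""
    transitions = defaultdict(Counter)
    for current, nxt in zip(sequence, sequence[1:]):
        transitions[current][nxt] += 1
    return transitions

def predict_next(transitions, symbol):
    """Predict the most frequent successor, plus how confident we are."""
    followers = transitions[symbol]
    if not followers:
        return None, 0.0
    best, count = followers.most_common(1)[0]
    return best, count / sum(followers.values())

# A repeating pattern: circle, square, triangle, circle, square, triangle...
data = ["circle", "square", "triangle"] * 10
model = train(data)
print(predict_next(model, "square"))  # → ('triangle', 1.0)
```

With only one or two repetitions the counts (and therefore the confidence) are low; with more repetitions the prediction becomes near-certain, which mirrors the point above that more training data means more confident predictions.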
Another thing is spotting outliers like this
and saying "oh, that doesn't belong, it looks different from all the
other stuff because the sequence was broken."
So that's particularly useful in cybersecurity, the area that I work in
because we're looking for outliers.
We're looking for users who are using the system in ways that they
shouldn't be,
or ways that they don't typically do.
So this technology, machine learning, is particularly useful for us.
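The outlier-spotting idea can also be sketched simply. Here is a minimal version (real security tooling is far more sophisticated) that flags any value far from the mean; the login counts are made up for illustration:

```python
import statistics

def find_outliers(values, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [v for v in values if abs(v - mean) / stdev > threshold]

# Daily login counts for one user: steady behavior, then a suspicious spike.
logins = [12, 11, 13, 12, 10, 11, 12, 13, 11] * 3 + [97]
print(find_outliers(logins))  # the 97 stands out from the pattern
```

In practice a system like this would run over many users and many signals, but the principle is the same: learn what "normal" looks like, then flag whatever breaks the pattern.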
And machine learning really came along, and became more popularized,
in this time frame, in the, the 2010s,
and again, back when I was an undergrad riding my dinosaur to class,
we were doing this kind of stuff.
We never once talked about machine learning.
It might have existed, but it really hadn't hit the popular mindset yet.
But this technology has matured greatly over the last few decades,
and now it becomes the basis of a lot we do going forward.
Deep Learning
The next layer of our Venn diagram involves deep learning.
Well, it's deep learning in the sense that
with deep learning we use these things called neural networks.
Neural networks are ways that in a computer, we simulate and mimic
the way the human brain works,
at least to the extent that we understand how the brain works.
And it's called deep because we have multiple layers of those neural
networks.
And the interesting thing about these is
they will simulate the way a brain operates.
But I don't know if you know this, but human brains can be a little bit
unpredictable.
You put certain things in, you don't always get the very same thing out.
And deep learning is the same way.
In some cases, we're not actually able to fully understand
why we get the results we do
because there are so many layers to the neural network,
it's a little bit hard to decompose and figure out exactly what's in there.
But this has become a very important part and a very important
advancement.
That also reached some popularity during the 2010s.
And it's something that we still use today as the basis for our next area of
AI.
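As a rough sketch of what "multiple layers" means, here is a tiny forward pass in plain Python. The weights are hand-made numbers for illustration; a real network learns its weights from data, and has vastly more of them:

```python
import math

def layer(inputs, weights, biases):
    """One layer: weighted sums of the inputs passed through tanh."""
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(x, layers):
    """Feed the input through each layer in turn ('deep' = many layers)."""
    for weights, biases in layers:
        x = layer(x, weights, biases)
    return x

# Two tiny hand-set layers: 2 inputs -> 3 hidden units -> 1 output.
network = [
    ([[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]], [0.0, 0.1, -0.1]),  # hidden layer
    ([[1.0, -1.0, 0.5]], [0.2]),                                  # output layer
]
print(forward([1.0, 2.0], network))
</antml_ignore>```

Stack dozens or hundreds of such layers with millions of weights each and it becomes genuinely hard to trace why a particular input produced a particular output, which is the interpretability problem described above.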
Generative AI
The most recent advancements in the field of artificial intelligence
all really are in this space, the area of generative AI.
Now I'm going to introduce a term that you may not be familiar with.
It's the idea of foundation models.
Foundation models are where we get some of these kinds of things.
For instance, an example of a foundation model
would be a large language model.
Which is where we take language and we model it,
and we make predictions in this technology
where if I see certain types of words,
then I can sort of predict what the next set of words will be.
I'm going to oversimplify here for the sake of brevity,
but think about this is a little bit like the autocomplete
when you start typing something in, and then it predicts what your next
word will be.
Except in this case, with large language models, they're not predicting the
next word.
They're predicting the next sentence, the next paragraph, the next entire
document.
So there's a really an amazing exponential leap in what these things are
able to do.
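To make that leap concrete, here is a toy next-word generator in Python. The lookup table stands in for a trained model (it is invented purely for illustration); the point is the loop: predict a word, append it, and predict again until a whole passage emerges rather than a single autocomplete suggestion:

```python
# A toy next-word table standing in for a trained model (illustrative only).
NEXT_WORD = {
    "the": "cat", "cat": "sat", "sat": "on", "on": "a", "a": "mat",
}

def generate(prompt, max_words=10):
    """Repeatedly predict the next word and append it to the text."""
    words = prompt.split()
    while len(words) < max_words:
        nxt = NEXT_WORD.get(words[-1])
        if nxt is None:          # no prediction available: stop
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # → "the cat sat on a mat"
```

A real large language model replaces the lookup table with a neural network scoring every possible next token given the whole context so far, but the generation loop is essentially this one.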
And we call all of these technologies generative.
Because they are generating new content.
Some people have actually made the argument that
generative AI isn't really generative, that
these technologies are really just regurgitating existing information
and putting it in a different format.
Well, let me give you an analogy.
If you take music, for instance, then every note has already been
invented.
So in a sense, every song is just a recombination,
some other permutation of all the notes that already exist,
just put in a different order.
Well, we don't say new music doesn't exist.
People are still composing and creating new songs from the existing
information.
I'm going to say Gen AI is similar.
It's an analogy, so there'll be some imperfections in it,
but you get the general idea.
Actually, new content can be generated out of these.
And there are a lot of different forms that this can take.
Other types of models include audio models, video models, and things
like that.
Well, in fact, these we can use to create deepfakes.
And deepfakes are examples where we're able to take, for instance, a
person's voice
and recreate that and then have it seem like the person said things they
never said.
Well, it's really useful in entertainment situations, in parodies and things
like that.
Or if someone's losing their voice, then you could capture their voice,
and then they'd be able to type and you'd be able to hear it in their voice.
But there's also a lot of cases where this stuff could be abused.
The chat bots, again, come from this space.
The deepfakes come from this space,
but they're all part of generative AI and all part of these foundation
models.
And this, again, is the area that has caused all of us to really pay
attention to AI.
The possibilities of generating new content, or in some cases,
summarizing existing content
and giving us something that is bite size and manageable.
This is what has gotten all of the attention.
This is where the chat bots and all of these things come in.
Conclusion
In the early days, AI's adoption started off pretty slowly.
Most people didn't even know it existed, and if they did,
it was something that always seemed like it was about 5 to 10 years away.
But then machine learning, deep learning and things like that
came along and we started seeing some uptick.
Then foundation models, Gen AI and the like
came along and this stuff went straight to the moon.
These foundation models are what have changed the adoption curve.
And now you see AI being adopted everywhere.
And the thing for us to understand is where this is, where it fits in,
and make sure that we can reap the benefits from all of this technology.
If you liked this video and want to see more like it, please like and
subscribe.
If you have any questions or want to share your thoughts about this topic,
please leave a comment below.
Intro
[Music]
ever since computers were invented
they've really just been glorified
calculators machines that execute the
exact instructions given to them by the
programmers but something incredible is
happening now computers have started
gaining the ability to learn and think
and communicate just like we do they can
do creative intellectual work that
previously only humans could do we call
this technology generative Ai and you
may have encountered it already through
products like GPT basically intelligence
is now available as a service kind of
like a giant brain floating in the sky
that anyone can talk to it's not perfect
but it is surprisingly capable and it is
improving at an exponential rate this is
a big deal it's going to affect just
about every person and Company on the
planet positively or negatively this
video is here to help you understand
what generative AI is all about in
Practical terms beyond the hype the
better you understand this technology as
a person team or company the better
equipped you will be to survive and
thrive in the age of AI so here's a
Einstein in your basement
silly but useful mental model for this
you have Einstein in your basement in
fact everyone does and by Einstein I
really mean the combination of every
smart person who ever lived you can talk
to Einstein whenever you want he has
instant access to the sum of all human
knowledge and will answer anything you
want within seconds never running out of
patience he can also take on any role
you want a comedian poet doctor coach
and will be an expert within that field
he has some humanlike limitations
though he can make mistakes he can jump
to conclusions he can misunderstand you
but the biggest limitation is actually
your imagination and your ability to
communicate effectively with him this
skill is known as prompt engineering and
in the age of AI this is as essential as
reading and writing most people vastly
underestimate what this Einstein in your
basement can do it's like going to the
real Einstein and asking him to proof
read a high school report or hiring a
world-class five-star chef and having
him chop onion the more you interact
with Einstein the more you will discover
surprising and Powerful ways for him to
help you or your company okay enough
What is AI
fluffy metaphors let's clarify some
terms AI as you probably know stands for
artificial intelligence AI is not new
Fields like machine learning and
computer vision have been around for
decades whenever you see a YouTube
recommendation or a web search result or
whenever you get a credit card
transaction approved that's traditional
AI in action generative AI is AI that
generates new original content rather
than just finding or classifying
existing content that's the G in GPT for
example large language models or llms
are a type of generative AI that can
communicate using normal human language
chat GPT is a product by the company
open AI it started as an llm essentially
an advanced chatbot using a new
architecture called the Transformer
architecture which by the way is the T
in GPT it is so fluent at human language
that anyone can use it you don't need to
be an AI expert or programmer and that's
kind of what triggered the whole
Revolution so how does it actually work
How does it work
well a large language model is an
artificial neural network basically a
bunch of numbers or parameters
connected to each other similar to how
our brain is a bunch of neurons or brain
cells connected to each other neural
networks only deal with numbers you send
in numbers and depending on how the
parameters are set all the numbers come
out but any kind of content such as text
or images can be represented as numbers
so let's say I write dogs are when I
send that to a large language model that
gets converted to numbers processed by
the neural network and then the
resulting numbers are converted back
into text in this case the word animals
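That encode, process, decode round trip can be sketched in a few lines. The six-word vocabulary and the lookup-table "model" here are made up for illustration; real tokenizers use tens of thousands of subword pieces, and the real model is a giant neural network, but the principle is the same, text in, numbers through, text out:

```python
# A made-up six-word vocabulary (real tokenizers are far larger).
VOCAB = ["dogs", "are", "animals", "loyal", "friendly", "pets"]
WORD_TO_ID = {w: i for i, w in enumerate(VOCAB)}

def encode(text):
    """Convert words to their numeric ids."""
    return [WORD_TO_ID[w] for w in text.split()]

def decode(ids):
    """Convert numeric ids back to words."""
    return " ".join(VOCAB[i] for i in ids)

def model(ids):
    """Stand-in for the neural network: numbers in, numbers out.
    It only knows one trick: after (0, 1) = 'dogs are', predict id 2."""
    if ids[-2:] == [0, 1]:
        return ids + [2]          # append the id of 'animals'
    return ids

print(decode(model(encode("dogs are"))))  # → "dogs are animals"
```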
dogs are animals so yeah this is
basically a guess-the-next-word machine the
interesting part is if we take that
output and combine it with the input and
send it through the model again then it
will continue adding new words that's
what's going on behind the scenes when
you type something in chat GPT in this
case for example it generated a whole
story and I can continue this
indefinitely by adding more prompts a
large language model may have billions
or even trillions of parameters that's
why they're called large so how are all
Training
these numbers set well not through
manual programming that would be
impossible but through training just
like babies learning to speak a baby
isn't told how to speak she doesn't get
an instruction manual instead she
listens to people speaking around her
and when she's heard enough she starts
seeing the pattern she speaks a few
words at first to the Delight of her
parents and then later on full sentences
similarly during a training period the
language model is fed a mindboggling
amount of text to learn from Mostly from
internet sources it then plays guess the
next word with all of this over and over
again and the parameters are
automatically tweaked until it starts
getting really good at predicting the
next word this is called back
propagation which is a fancy term for oh
I guessed wrong I better change
something however to become truly useful
a model also needs to undergo human
training this is called reinforcement
learning with human feedback and it
involves thousands of hours of humans
painstakingly testing and evaluating
output from the model and giving
feedback kind of like training a dog
with a clicker to reinforce good
behavior that's why a model like GPT
won't tell you how to rob a bank it
knows very well how to rob a bank but
through human training it has learned
that it shouldn't help people commit
crimes when training is done the model
is mostly Frozen other than some fine
tuning that can happen later that's what
the P stands for in GPT pre-trained
although in the future we will probably
have models that can learn continuously
rather than just uh during training and
fine-tuning now although chat GPT kind
Models
of got the ball rolling GPT isn't the
only model out there in fact new models
are sprouting like mushrooms they vary a
lot in terms of speed capability and
cost some can be downloaded and run
locally others are only online some are
free or open source others are
commercial products some are super easy
to use While others require complicated
technical setup some are specialized for
certain use cases others are more
General and can be used for almost
anything and some are baked into
products in the form of co-pilots or
chat windows it's the Wild West
just keep in mind that you generally get
what you pay for so with a free model
you may just be getting a smart high
school student in your basement rather
than Einstein the difference between for
example GPT 3.5 and GPT-4 is
massive note that there are different
Different Models
types of generative AI models that
generate different types of content
text-to-text models like GPT-4 take text
as input and generate text as output the
text can be natural language but it can
also be structured information like code
Json or HTML I use this a lot myself to
generate code when programming uh it
saves an incredible amount of time and I
also learn a lot from the code it
generates text to image models will
generate images describe what you want
and an image gets generated for you you
can even pick a style image to image
models can do things like transforming
or combining images and we have image to
text models which describe the contents
of a given image and speech to text
models create voice transcriptions which
is useful for things like uh meeting
notes text to audio models they generate
music or sounds from a prompt for
example here is some sound generated
from The Prompt people talking in a
busy okay guys enough stop now thank you
and there are even text to video models
that generate videos from a prompt
sooner or later we'll have infinite
movie series that autogenerate the next
episode tailored to your tastes as
you're watching kind of scary if you
think about it one Trend now is
multimodal AI products meaning they
combine different models into one
product so you can work with text images
audio Etc without switching tools the
chat GPT mobile app is a good example of
this just for fun I took a photo of this
room and I asked where I could hide
stuff I kind of like that it mentioned
the stove but warned that that it could
get hot there when I have things to
figure out such as the contents of this
video I like to take walks using chat
GPT as a sounding board I start by
saying always respond with the word okay
unless I ask you for something that way
it'll just listen and not interrupt
after I finish dumping my thoughts I ask
for feedback we have some discussion and
then I ask it to summarize and text
afterwards I really recommend trying
this it's a really useful way to
use tools like this turns out Einstein
isn't stuck in the basement after all
you can take him out for a walk
initially language models were just word
predictors statistical machines with
limited practical use but as they became
larger and were trained on more data
they started gaining emergent
capabilities unexpected capabilities that
surprised even the developers of the
technology they could role-play write
poetry write high-quality code discuss
company strategy provide legal and
medical advice coach teach basically
creative and intellectual things that
only humans could do previously it turns
out that when a model has seen enough
text and images it starts to see
patterns and understand higher level
Concepts just like a baby learning to
understand the world let's take a simple
example I'll give GPT-4 this little
drawing that involves a string a pair of
scissors an egg a pot and a fire what
will happen if I use the scissors the
model has most likely not been trained
on this exact scenario yet it gave a
pretty good answer which demonstrates a
basic understanding of the nature of
scissors eggs gravity and heat when GPT-4
was released I started using it as a
coding assistant and I was blown away
when prompted effectively it was a
better programmer than anyone I've
worked with same with article writing
product design Workshop planning and
just about anything I used it for
the main bottleneck was my prompt
engineering skills so I decided to make
a career shift and focus entirely on
learning and teaching how to make this
technology useful hence this video now
let's take a step back and look at the
implications for 300,000 years or so we
Homo sapiens have been the most
intelligent species on Earth depending
of course on how you define intelligence
but the thing is our intellectual
capabilities aren't really improving
that much our brains are about the same
size same weight as they've been for
thousands of years computers on the
other hand have been around for only 80
years or so and now with generative AI
they are suddenly capable of speaking
human languages fluently and carrying
out an increasing number of intellectual
creative tasks that previously only
humans could do so we are right here at
the Crossing Point where AI is better at
some things and humans are better at
some things but ai's capabilities are
improving at an exponential rate while
ours aren't we don't know how long that
exponential Improvement will continue or
if it will level off at some point but
we're definitely entering a new world
order now this isn't the first
revolution we've experienced we tamed
fire we learned how to do agriculture we
invented the printing press steam power
Telegraph these were all revolutionary
changes but they took decades or
centuries to become widespread in the AI
Revolution new technology spreads
worldwide almost instantly dealing with
this rate of change is a huge challenge
for both individuals and
The AI Mindset
companies I've noticed that people and
companies tend to fall into different
kind of mindset categories when it comes
to AI on one side we have denial the
belief that AI cannot do my job or we
don't have time to look into this
technology this is a dangerous place to
be a common saying is AI might not take
your job but people using AI will and
this is true for both individuals and
companies on the other side of the scale
we have panic and despair the belief
that AI is going to take my job no
matter what AI is going to make my
company go bankrupt neither of these
mindsets are helpful so I propose a
middle ground a balanced positive
mindset AI is going to make me my team
my company insanely productive
personally with this mindset I feel like
I've gained superpowers I can go from
idea to result in so much shorter time I
can focus more on what I want to achieve
and less on the grunt work of building
things and I'm learning a lot faster too
it's like having an awesome Mentor with
me at all times this mindset not only
feels good but it also equips you for
the future makes you less likely to lose
your job or your company and more likely
to thrive in the age of AI despite all
the
Is human role needed
uncertainty so one important question is
is human role X needed in the age of AI
for example are doctors needed
developers lawyers CEOs uh whatever so
this question becomes more and more
relevant as the AI capabilities improve
well some jobs will disappear for sure
but for most roles I think we humans are
still needed someone with domain
knowledge still needs to decide what to
ask the AI how to formulate The Prompt
what context needs to be provided and
how to evaluate the result AI models
aren't perfect they can be absolutely
brilliant sometimes but sometimes also
terribly stupid they can sometimes
hallucinate and provide bogus
information in a very convincing way so
when should you trust AI response when
should you double check or do the work
yourself what about legal compliance
data security what information can we
send to an AI model and where is that
data stored a human expert is needed to
make these judgment calls and compensate
for the weaknesses of the AI model so I
recommend thinking of AI as your
colleague a genius but also an oddball
with some personal quirks that you need
to learn to work with you need to
recognize when your Genius colleague is
drunk as a doctor my AI colleague can
help diagnose rare diseases that I
didn't even know existed as a lawyer my
AI colleague could do legal research and
review contracts allowing me to spend
more time with my client or as a teacher
my AI colleague could grade tests help
generate course content provide
individual support to students Etc and
if you're not sure how I can help you
just ask it I work as X how can you help
me overall I find that the
combination of human plus AI That's
where the magic lies it's important to
Models vs products
distinguish between the models and the
products that build on top of them as a
user you don't normally interact with
the model directly instead you interact
with a product website or a mobile app
which in turn talks to the model behind
the scenes products provide a user
interface and add capabilities and data
that aren't part of the model itself for
example the chat GPT product keeps track
of your message history while the GPT 4
model itself doesn't have any message
history as a developer you can use these
models to build your own AI powered
products and features for example let's
say you have an e-learning site you
could add a chat bot to answer questions
about the courses or as a recruitment
company you might build AI powered tools
to help evaluate candidates in both
these cases your users interact with
your product and then your product
interacts with the model this is done
via apis or application programming
interfaces which allow your code to talk
to the model so here's a simple example
of using the OpenAI API to talk to GPT not
a lot of code needed and here's another
example of the automatic candidate
evaluation thing I talked about it takes
a job description and a bunch of CVs in
a folder and evaluates each candidate
automatically and incidentally the code
itself is mostly AI written as a product
developer you can use AI models kind of
like an external brain to insert
intelligence into your product very
powerful in order to use generative AI
Prompt engineering
effectively you need to get good at
prompt engineering or prompt design as I
prefer to call it this skill is needed
both as a user and as a product
developer because in both cases you need
to be able to craft effective prompts
that produce useful results from an AI
model here's an example let's say I want
help planning a workshop this prompt is
unlikely to give useful results because
no matter how smart the AI is if it
doesn't know the context of my workshop
it can only give vague high level
recommendations the second prompt is
better now I provided some context this
is normally done iteratively write a
prompt look at the result add a
follow-up prompt to provide more
information or edit the original prompt
and rinse and repeat until you get a
good result in this third approach I ask
it to interview me so instead of me
providing a bunch of context up front
I'm basically saying what do you need to
know in order to help me and then
it will propose a workshop agenda after
I often combine these two I provide a
bit of context and then I tell it to ask
me if it needs any more information
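As a rough illustration of those three styles (the wording here is my own, not the exact prompts shown in the video):

```python
# Style 1: no context. The model can only guess at what you need.
vague = "Help me plan a workshop."

# Style 2: context up front. Far more likely to get a usable answer.
with_context = (
    "Help me plan a 3-hour remote workshop for 12 engineers on code "
    "review practices. Goal: agree on a shared review checklist."
)

# Style 3: let the model interview you before it proposes anything.
interview_me = (
    "I need to plan a workshop. Before proposing an agenda, ask me any "
    "questions you need answered, one at a time, then summarize a plan."
)

for name, prompt in [("vague", vague), ("with context", with_context),
                     ("interview me", interview_me)]:
    print(f"{name}: {prompt}")
```

The combined approach described above is simply style 2 plus a closing line like "ask me if you need any more information."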
these are just some examples of prompt
engineering techniques so overall the
better you get at prompt engineering the
faster and better results you will get
from AI there are plenty of courses
books videos articles to help you learn
this but the most important thing is
to practice and Learn by doing a nice
side effect is that you will become
better at communicating in general since
prompt engineering is really all about
Clarity and effective
communication I think the next Frontier
Autonomous agents
for generative AI is autonomous agents
with tools these are AI-powered
software entities that run on their own
rather than just sitting around waiting
for you to prompt them all the time so
you go down to Einstein in your basement
and do what a good leader would do
for a team you give him a high level
Mission and the tools needed to
accomplish it and then open the door and
let him out to run his own show without
micromanagement the tools could be
things like access to the internet
access to money ability to send and
receive messages order pizza or whatever
for this prompt engineering becomes even
more important because your autonomous
tool wielding agent can do a lot of good
or a lot of harm depending on how well
you craft that mission
statement all right let's wrap it up
here are the key things I hope you will
remember from this video generative AI
is a super useful tool that can help
both you your team and your company in a
big way the better you understand it the
more likely it is to be an opportunity
rather than a threat generative AI is
more powerful than you think the biggest
limitation is not the technology but
your imagination like what can I do and
your prompt engineering skills how do I
do it prompt engineering/design is a
crucial skill like all new skills just
accept that you will kind of suck at
first but you'll improve over time with
deliberate practice so my best tip is
experiment make this part of your
day-to-day life and the Learning Happens
automatically hope this video was
helpful thanks for watching

https://www.youtube.com/watch?v=2IK3DFHRFfw&t=712s

Hello all, my name is Krish Naik and


welcome to my YouTube channel so guys
three to four years back you know I had
created a video uh to make you
understand the differences between AI
versus ml versus DL versus data science
and uh till now probably that is the
video and that was a 9 Minutes video
where I clearly differentiated between
all these terms right specifically that
you see over here and right now that is
probably the most highest views video in
my channel uh now as you know from past
two years like from 2022 end until now
right generative AI is really the Talk
of the Town you know and uh with respect
to generative AI if I talk about large
language models open source large
language models, OpenAI doing such
amazing work now Claude 3 is also there
to compete with OpenAI you know and Amazon
is definitely supporting them so you
should know right what exactly is
generative AI how is it different from
all these terms or what is the exact difference
between AI versus ml versus DL versus
generative AI so I'll make you
understand in this specific video
probably I'll try to make this video
from somewhere between 15 to 20 minutes
um I'll be including all simplistic
terms very easy definition so that
you'll be able to understand it because
once you have that Clarity right uh at
the end of the day right you really want
to probably become an AI engineer you
know which uses all these Technologies
to create that AI apps so uh let's go
ahead and let's understand this thing
okay so as usual when I probably start
the session you know let us consider the
entire universe and uh in this field of
universe I would definitely like to call
this as AI okay now what is
AI simply put the main aim of artificial
intelligence is to build applications
that can perform their own tasks
without human intervention now this is
the most important definition right
without human intervention so it'll be
able to perform its entire task without
human intervention so let me just talk
about it some of the examples As We Know
uh Netflix right it has an amazing AI
recommendation system which recommends
movies right now
here human intervention is not required
to probably provide you some kind of
recommendation whatever things you using
you're browsing the movies whatever
movies you are specifically seeing you
know at that point of time this AI model
is actually helping you to provide you
good recommendations so that you can
stay for a longer period of time
self-driving car is another example and it is
able to probably drive itself you know
whenever the Turning is coming
everything it is able to do see at the
end of the day whichever field you
specifically work right uh if I talk
about AI Engineers this is very much
important if I talk about AI engineers
at the end of the day you are
actually creating an AI product right
and this product may be integrated with
the software product itself right if I
say Netflix it is a streaming platform
movie streaming platform and if I talk
about AI Engineers you know they are
trying to integrate some amazing models
you know which involves fine tuning
which involves multiple things so that
you integrate in such a way that it is
quite scalable the machine learning
engineer task will also come right so at
the end of the day we are specifically
creating an AI product and that product
is getting seamlessly integrated with
some uh you can probably say web app or
Android app or mobile app or Edge
devices something as such so I hope you
got an idea with respect to AI so it is
mostly about building applications that
can perform their own tasks without human
intervention now let's talk about
machine learning now whenever I talk
about machine learning machine learning
is a subset of AI so this is basically
machine learning it is a subset of AI
and the main aim of machine learning is
to provide you stats tools to perform
various tasks such as statistical
analysis visualization prediction and
forecasting right so
these are some of the examples that I've
have just written it over here but at
the end of the day what is machine
learning it provides a lot of stats
tools to probably perform the complete
life cycle of a data science project
that is specifically required uh in
every life cycle from data ingestion to
probably data transformation feature
engineering you will require some of
the other concepts with respect to ml
techniques and the very important term
here is nothing but stats tools
okay so machine learning is a subset of
AI right and here it is providing the
stats tools to perform all this task and
this task is usually performed on what
on data now why we do this so that we
understand about the data right so that
the data will be meaningful it
will be able to convey some information
to you right so that is where machine
learning basically comes into existence
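A tiny example of the "stats tools for prediction and forecasting" idea: an ordinary least-squares line fit in plain Python, used to forecast the next value. The sales numbers are made up for illustration:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Monthly sales for five months, then a forecast for month 6.
months = [1, 2, 3, 4, 5]
sales = [10.0, 12.0, 14.0, 16.0, 18.0]
slope, intercept = fit_line(months, sales)
print(slope * 6 + intercept)  # → 20.0
```

This is exactly the kind of statistical tool the transcript is describing: a simple model fit to data so that the data conveys information (here, a trend) and supports a forecast.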
now coming to the next part that is
nothing but deep learning so if I talk
about deep learning again deep learning
is a subset of machine learning
and if I talk about deep learning it has
been around since the 1950s it's not like deep learning has
become just famous right now yes it has
become famous right now because of
amazing things like gpus technological
advancement open source libraries and
many more things but this this entire
deep learning was built to mimic human
brain right human brain we wanted we
wanted AI or we wanted machines to
perform like how we human beings used to
perform right how we human being used to
learn so that is the reason as I said
mimicking human brain and that is where
we have something called as
multi-layered neural networks right
multi-layered neural networks so I hope
you got a clear idea about this and here
whenever I talk about deep learning
there are three important things that we
specifically learn right please
understand this because after this only
we'll be able to understand about
generative AI so with respect to deep
learning the three important things that
we specifically learn right a Ann right
CNN and then we basically say RNN and
its variants right and this is where
when I say RNN and its variant CNN and
object detection CNN object detection is
completely for computer vision purpose
right so if I talk about task over here
specifically you use computer vision
over here you specifically use for what
kind of use cases for textt related use
cases right text related use
cases right or you can also use it for
time series use
cases because that is how RNN and its
variants are designed Right Time series
use cases so Ann is definitely like how
machine learning is basically trained
similarly you can train machine learning
problem statement with the help of a&n
also right so overall most of the things
that we specifically learn in deep
learning are based on these three things
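To see what a "multi-layered neural network" means mechanically, here is a toy forward pass written from scratch (my own sketch with hand-picked weights, not code from the video): each layer multiplies its inputs by weights, adds a bias, and applies a nonlinearity.

```python
def relu(x):
    # Nonlinearity: without it, stacked layers collapse into one linear layer.
    return max(0.0, x)

def layer(inputs, weights, biases, activation):
    """One dense layer: each neuron computes activation(w . x + b)."""
    return [
        activation(sum(w * x for w, x in zip(neuron_w, inputs)) + b)
        for neuron_w, b in zip(weights, biases)
    ]

# A tiny 2-input -> 3-hidden -> 1-output network with arbitrary fixed weights.
hidden_w = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]
hidden_b = [0.0, 0.1, -0.1]
out_w = [[1.0, -1.0, 0.5]]
out_b = [0.2]

def forward(x):
    h = layer(x, hidden_w, hidden_b, relu)      # hidden layer
    return layer(h, out_w, out_b, lambda z: z)  # linear output layer

print(forward([1.0, 2.0]))
```

Training, which the video's playlist covers separately, is the process of adjusting those weights (typically by backpropagation) so the outputs match the data; here the weights are just fixed for illustration.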
Okay, when I say CNN and object detection, there are techniques like R-CNN and many more; similarly, for RNN you have RNN, LSTM RNN, GRU, then encoder-decoder, "Attention Is All You Need", and then Transformers and BERT. I hope everybody has followed my playlist till here; everything has been explained with both theoretical and practical intuition. Transformers and BERT are quite advanced, and from here we will derive the next thing, which is generative AI, because Transformers and BERT are the backbone used in most of the LLM models in generative AI. So deep learning is mostly about this: at the end of the day we are trying to mimic the human brain, to understand how human beings learn, because we also learn in the same way. Now coming to the next one, that is generative AI. Obviously, generative AI is a subset of deep learning
again. And here things are more advanced. I will talk about the two types of models, or two types of model training, that we use in the data science industry: discriminative models and generative models. So deep learning models are mostly of two types: on one side you have discriminative models, and on the other you have generative models. Now, whenever I talk about generative AI, you should understand one very important thing: generative AI is about generating content. These are models which help you generate new content based on whatever content they have already been trained on. Discriminative AI, on the other hand, is mostly about tasks like classification, regression, and prediction; in short, these discriminative models are trained on labeled data sets, and you really have to understand this. Similarly, in the case of generative AI, the task is to generate new data, and the models are trained on huge data sets.
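To see the generative idea in code, here is a deliberately tiny "generative model" of my own (a character-level Markov chain, nothing like a real LLM in scale): it is "trained" on some text by counting which character follows which, then generates new text by sampling from those counts.

```python
import random
from collections import defaultdict

def train(text):
    """'Training': record which character tends to follow which."""
    model = defaultdict(list)
    for a, b in zip(text, text[1:]):
        model[a].append(b)
    return model

def generate(model, start, length, seed=0):
    """Generation: sample new text from the learned statistics."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: this character never had a successor
        out.append(rng.choice(followers))
    return "".join(out)

corpus = "the cat sat on the mat and the cat ate"
model = train(corpus)
print(generate(model, "t", 20))
```

Conceptually, a modern LLM does the same job, predicting what comes next based on what it has seen, but with a deep Transformer over tokens instead of a lookup table over characters.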
To make you understand in a simple way: suppose I am a person who has read 100 books on cats, so I have been trained on that much data. Now, if anybody asks me any question about cats, I will be able to answer it in my own way, and the answers should be accurate, because I have already read 100 books on cats. Similarly, in generative AI there are two types of models that we specifically learn: large language models and large image models. A large language model works with text data, while a large image model works with images and videos. If a text prompt is given, a large image model can convert it into an image or a video, whereas a large language model will respond with text. I will also explain the various categories of LLM models in some time. So in short, what generative AI gives us is models that are already trained on huge amounts of data, whose task is to generate new data from any given input.
Okay, now to make you understand more about LLM models: obviously you know companies like OpenAI, Meta, Google, and Anthropic, and everybody is in the race to build the best LLM model. These companies are already doing really well. Anthropic came up with a model called Claude 3, which right now is a fierce competitor of OpenAI's GPT-4. Then from Meta you have open-source models like Llama 2, and from Google, I hope everybody has heard about Gemini; an open-source model called Gemma has also come out. Now, what are all these models? These are called foundation models. Foundation models are also called pre-trained models, because they have been trained on huge amounts of data, essentially the entire internet; it may be code, it may be multiple other things.
Now, we can use these models for domain-specific use cases as well, and that is where a concept called fine-tuning is used. I have already created a playlist about fine-tuning, where I explained concepts like QLoRA and LoRA, how you can do the fine-tuning, how you can do it with open-source models, and many more things.
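As a purely conceptual toy of my own (real fine-tuning updates a network's weights, typically with methods like LoRA, not word counts): think of a "pre-trained" model as statistics learned from a big general corpus, and fine-tuning as nudging those same statistics with a small domain-specific corpus.

```python
from collections import Counter

def pretrain(corpus):
    """'Pre-training': learn word frequencies from a general corpus."""
    return Counter(corpus.split())

def fine_tune(model, domain_corpus, weight=5):
    """'Fine-tuning': update the SAME model with domain data,
    giving the small domain corpus extra weight per occurrence."""
    for word in domain_corpus.split():
        model[word] += weight
    return model

def top_word(model):
    """The word the model considers most likely."""
    return model.most_common(1)[0][0]

general = "the cat and the dog and the bird"
model = pretrain(general)
print(top_word(model))  # prints 'the': generic language dominates

fine_tune(model, "snowflake snowflake data cloud")
print(top_word(model))  # prints 'snowflake': domain terms now dominate
```

The design point this illustrates: you do not retrain from scratch; you start from the pre-trained statistics and adapt them with a comparatively tiny custom data set.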
The fierce competition right now is to get the best foundation model. GPT-4 and GPT-4 Turbo are out and soon GPT-5 is also going to come; on the open-source side, Llama 3 is in the pipeline; Anthropic has just launched Claude 3; and Gemini is coming up with more Gemini Pro variants. So the main race is to create the best foundation model; later on, many companies will be able to use these foundation models as pre-trained models, or fine-tune them with their own custom data sets to solve their use cases. So this was the entire idea behind generative AI. This is a video I really needed to upload earlier, but many people were waiting and requesting it, so I explained the whole thing by writing it out in front of you. For more details with respect to generative AI, let's talk about
LangChain. LangChain is a framework that is able to work with all these specific models; at the end of the day, we will be able to build RAG (retrieval-augmented generation) applications, and along with that we will be able to develop some amazing chatbots. That is the main reason LLMs are becoming so famous: the chance of getting things automated. We will be able to automate entire chatbot responses, and beyond that, there is scope in multiple other sectors with various use cases. This was mostly about large language models; with respect to large image models, Stability AI is a company working specifically in that space. So I hope you liked this particular video. For more videos related to LangChain, generative AI, and LLM models, you can follow my other playlists on LangChain, OpenAI, and more, where you will be able to see a lot of end-to-end projects. I will see you all in the next video. Have a great day; thank you all, and take care.
