GANs
www.cs.wisc.edu/~page/cs760/
Goals for the lecture
Preferences: Suspect 1’s ordering of the action profiles, from best to worst, is
(Defect, Quiet) (he defects and Suspect 2 remains quiet, so he is freed),
(Quiet, Quiet) (he gets one year in prison), (Defect, Defect) (he gets three
years in prison), (Quiet, Defect) (he gets four years in prison). Suspect 2’s
ordering is (Quiet, Defect), (Quiet, Quiet), (Defect, Defect), (Defect, Quiet).
In the payoff matrix, 3 represents the best outcome and 0 the worst.
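A payoff matrix consistent with these preferences (our reconstruction; rows are Suspect 1’s actions, columns are Suspect 2’s, and each cell gives Suspect 1’s payoff, then Suspect 2’s):

                     Suspect 2: Quiet    Suspect 2: Defect
  Suspect 1: Quiet        2, 2                 0, 3
  Suspect 1: Defect       3, 0                 1, 1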
Nash Equilibrium
Thanks, Wikipedia.
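To make the definition concrete, here is a short Python sketch (our own illustration, not from the lecture; all names are ours) that finds the Nash equilibria of the payoff matrix above by testing every action profile for profitable unilateral deviations:

from itertools import product

ACTIONS = ["Quiet", "Defect"]

# payoff[(a1, a2)] = (payoff to Suspect 1, payoff to Suspect 2),
# on the 3 = best, 0 = worst scale used above.
payoff = {
    ("Quiet",  "Quiet"):  (2, 2),
    ("Quiet",  "Defect"): (0, 3),
    ("Defect", "Quiet"):  (3, 0),
    ("Defect", "Defect"): (1, 1),
}

def is_nash(a1, a2):
    # Nash equilibrium: no player gains by unilaterally switching actions.
    best1 = all(payoff[(a1, a2)][0] >= payoff[(alt, a2)][0] for alt in ACTIONS)
    best2 = all(payoff[(a1, a2)][1] >= payoff[(a1, alt)][1] for alt in ACTIONS)
    return best1 and best2

for a1, a2 in product(ACTIONS, ACTIONS):
    if is_nash(a1, a2):
        print("Nash equilibrium:", (a1, a2))

Running it prints only ('Defect', 'Defect'): mutual defection is the unique Nash equilibrium, even though (Quiet, Quiet) would leave both suspects better off.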
Another Example
• Motivation:
– Building accurate generative models is hard (e.g., learning and sampling from a Markov net or Bayes net)
– Want to use all our great progress on supervised learners to do this unsupervised learning task better
– Deep nets may be our favorite supervised learner, especially for image data, if the nets are convolutional (using tricks such as sliding windows with parameter tying, a cross-entropy loss, and batch normalization)
Does It Work?
Thanks, Ian Goodfellow, NIPS 2016 Tutorial on GANs, for this and most of
what follows…
A Bit More on GAN Algorithm
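For reference, the two-player minimax game at the heart of the GAN algorithm (Goodfellow et al., 2014), with D the discriminator and G the generator:

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]

D is trained to assign high probability to real examples and low probability to generated samples G(z); G is trained to make D misclassify its samples as real.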
The Rest of the Details
• Use deep convolutional neural networks for Discriminator
D and Generator G
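A minimal sketch of what convolutional D and G might look like (our own PyTorch illustration; the 64x64 image size, noise dimension, and layer widths are assumptions, not the lecture’s architecture):

import torch
import torch.nn as nn

Z_DIM = 100  # size of the noise vector fed to G (an assumption)

# Generator: noise -> 64x64 RGB image via transposed convolutions,
# with the batch normalization mentioned in the motivation slide.
G = nn.Sequential(
    nn.ConvTranspose2d(Z_DIM, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(),  # 4x4
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),    # 8x8
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),      # 16x16
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),       # 32x32
    nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),                            # 64x64
)

# Discriminator: image -> probability it is real, via strided convolutions
# (the sliding windows with parameter tying noted earlier).
D = nn.Sequential(
    nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2),                             # 32x32
    nn.Conv2d(32, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.LeakyReLU(0.2),        # 16x16
    nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),      # 8x8
    nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2),     # 4x4
    nn.Conv2d(256, 1, 4, 1, 0), nn.Sigmoid(), nn.Flatten(0),                  # one score per image
)

z = torch.randn(16, Z_DIM, 1, 1)  # batch of 16 noise vectors
fake = G(z)                       # shape (16, 3, 64, 64)
scores = D(fake)                  # shape (16,), probabilities in (0, 1)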
• While there is some nice theory based on Nash equilibria, we get better results in practice if we move a bit away from the theory (one example in the sketch below)
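The best-known such deviation, discussed in the Goodfellow tutorial credited above, is the non-saturating generator loss: instead of minimizing log(1 − D(G(z))) as the minimax game prescribes, train G to maximize log D(G(z)), which gives useful gradients early in training when D confidently rejects fakes. A minimal sketch (the function name is ours):

import torch

def generator_loss(d_on_fake, non_saturating=True):
    # d_on_fake: discriminator outputs D(G(z)) in (0, 1) for a batch of fakes.
    if non_saturating:
        # Heuristic used in practice: maximize log D(G(z)).
        return -torch.log(d_on_fake).mean()
    # Minimax-theory version: minimize log(1 - D(G(z))); this saturates when
    # D(G(z)) is near 0, i.e., exactly when G most needs a gradient.
    return torch.log(1.0 - d_on_fake).mean()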
• In general, many in the ML community have a strong concern that we don’t really understand why deep learning works, including GANs
• Still much research into figuring out why this works better than other generative approaches for some types of data, how we can improve performance further, and how to take these methods from image data to other data types where CNNs might not be the most natural deep network structure