
NAME: Mahusay, Jeth A.

DATE: October 2020

YEAR/COURSE: BSA-3 INSTRUCTOR: Mr. Neil Churchill Aniñon, CPA

SUBJECT: ARTIFICIAL INTELLIGENCE

EPORTFOLIO

MODULE 1

Topic 1

In today's generation, artificial intelligence is everywhere; we see it and use it in our everyday lives. As a student, I can say artificial intelligence has a big impact on society and on me personally, because it helps me finish my assignments and activities easily in a short period of time. I have experienced using artificial intelligence in my daily life, and the way AI has helped me the most as a student is when I search for business news or articles on Google: instead of my typing the exact headline, it automatically generates results matching my query, so I can read as many articles as I need. Through artificial intelligence, students and workers get more precise information and can do their jobs quickly. Thus, artificial intelligence continues to gain popularity around the world, and one of the factors behind this is better algorithms.
Topic 2

Research and create a timeline showing the evolution of AI technology.

 1763
Thinking in numbers - Artificial intelligence requires the ability to
learn and make decisions, often based on incomplete information. In
1763, Thomas Bayes developed a framework for reasoning about the
probability of events, using math to update the probability of a
hypothesis as more information becomes available.
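Bayes's update rule can be sketched in a few lines of code. The numbers below (a 1% prior, a 90%-sensitive test with a 5% false-positive rate) are purely illustrative assumptions, not figures from the timeline:

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E).
# As evidence E arrives, the prior probability of hypothesis H is
# updated to a posterior -- the reasoning framework Bayes described.
def bayes_update(prior, likelihood, false_positive_rate):
    # P(E) = P(E | H) P(H) + P(E | not H) P(not H)
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical example: a condition with a 1% prior probability and a
# test that is 90% sensitive with a 5% false-positive rate.
posterior = bayes_update(prior=0.01, likelihood=0.90, false_positive_rate=0.05)
print(round(posterior, 3))  # 0.154 -- one positive result raises 1% to about 15%
```

Feeding the posterior back in as the new prior shows the probability continuing to rise with each new piece of evidence, which is exactly the "update as more information becomes available" idea described above.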
 1842
From numbers to poetry - English mathematician Ada Lovelace was
helping Charles Babbage publish the first algorithm to be carried out
by his Analytical Engine, the first general-purpose mechanical
computer. Yet Lovelace saw opportunities beyond the math. She
envisioned a computer that could crunch not just numbers, but solve
problems of any complexity. At the time, the idea that machines could have applications beyond pure calculation was revolutionary. She called the idea Poetical Science: "[The Analytical Engine] might act upon other things besides number, were objects found whose mutual fundamental relations could be expressed by those of the abstract science of operations."
 1921
“Robot” enters vernacular - Czech writer Karel Čapek introduces
the word "robot" in his play R.U.R. (Rossum's Universal Robots). The
word "robot" comes from the word "robota" (work or slave).
 1942
World War 2 triggers fresh thinking - World War Two brought
together scientists from many disciplines, including the emerging fields
of neuroscience and computing. In Britain, mathematician Alan Turing
and neurologist Grey Walter were two of the bright minds who tackled
the challenges of intelligent machines. They traded ideas in an
influential dining society called the Ratio Club. Walter built some of the
first ever robots. Turing went on to invent the so-called Turing Test,
which set the bar for an intelligent machine: a computer that could
fool someone into thinking they were talking to another person.
 1943
Neurons go artificial - Warren S. McCulloch and Walter Pitts publish
“A Logical Calculus of the Ideas Immanent in Nervous Activity” in the
Bulletin of Mathematical Biophysics. This influential paper, in which
they discussed networks of idealized and simplified artificial “neurons”
and how they might perform simple logical functions, will become the
inspiration for computer-based “neural networks” (and later “deep
learning”) and their popular description as mimicking the brain. This
marks a critical point in our artificial intelligence timeline, even though
deep learning will still take decades to reach mainstream popularity.
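The idealized neurons McCulloch and Pitts discussed can be sketched as simple threshold units; the weights and thresholds below are illustrative choices for this sketch, not values from the 1943 paper:

```python
# A McCulloch-Pitts-style neuron: it fires (outputs 1) when the weighted
# sum of its binary inputs reaches a threshold. With suitable weights and
# thresholds it can realize simple logical functions.
def mcp_neuron(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def AND(a, b):
    # Fires only when both inputs are active.
    return mcp_neuron([a, b], weights=[1, 1], threshold=2)

def OR(a, b):
    # Fires when at least one input is active.
    return mcp_neuron([a, b], weights=[1, 1], threshold=1)

print([AND(1, 1), AND(1, 0), OR(0, 1), OR(0, 0)])  # [1, 0, 1, 0]
```

Networks of such units, with learned rather than hand-picked weights, are what later "neural networks" and "deep learning" generalize.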
 1949
Can a machine think?- Edmund Berkeley publishes Giant Brains: Or
Machines That Think in which he writes: “Recently there have been a
good deal of news about strange giant machines that can handle
information with vast speed and skill….These machines are similar to
what a brain would be if it were made of hardware and wire instead of
flesh and nerves. A machine can handle information; it can calculate,
conclude, and choose; it can perform reasonable operations with
information. A machine, therefore, can think.”
 1950
Science fiction steers the conversation - “I, Robot” was published – a collection of short stories by science fiction writer Isaac Asimov.
Asimov was one of several science fiction writers who picked up the
idea of machine intelligence, and imagined its future. His work was
popular, thought-provoking and visionary, helping to inspire a
generation of roboticists and scientists. He is best known for the Three
Laws of Robotics, designed to stop our creations turning on us. But he
also imagined developments that seem remarkably prescient – such as a computer capable of storing all human knowledge, of which anyone can ask any question. His questions and investigations into the implications
of artificial intelligence permeate throughout several other milestones
on our timeline of AI below.
 1956
A 'top-down' approach - The term 'artificial intelligence' was coined for a summer conference at Dartmouth College, organized by a young computer scientist, John McCarthy. Top scientists debated
how to tackle AI. Some, like influential academic Marvin Minsky,
favored a top-down approach: pre-programming a computer with the
rules that govern human behavior. Others preferred a bottom-up
approach, such as neural networks that simulated brain cells and
learned new behaviors. Over time Minsky's views dominated, and
together with McCarthy he won substantial funding from the US government, which hoped AI might give it the upper hand in the Cold War.
 1959
“Machine learning” coined - Arthur Samuel coins the term “machine
learning,” reporting on programming a computer “so that it will learn
to play a better game of checkers than can be played by the person
who wrote the program.” This marks a historic point in our artificial
intelligence timeline, with the coining of a phrase that will come to
embody an entire field within AI.
 1968
2001: A Space Odyssey imagines where AI could lead - Minsky
influenced science fiction too. He advised Stanley Kubrick on the film
2001: A Space Odyssey, featuring an intelligent computer, HAL 9000.
During one scene, HAL is interviewed on the BBC talking about the
mission and says that he is "fool-proof and incapable of error." When a
mission scientist is interviewed he says he believes HAL may well have
genuine emotions. The film mirrored some predictions made by AI
researchers at the time, including Minsky's, that machines were heading towards human-level intelligence very soon. It also brilliantly captured some of the public's fears that artificial intelligences could turn nasty.
 1969
Tough problems to crack - AI was lagging far behind the lofty predictions made by advocates like Minsky – something made
apparent by Shakey the Robot. Shakey was the first general-purpose
mobile robot able to make decisions about its own actions by
reasoning about its surroundings. It built a spatial map of what it saw,
before moving. But it was painfully slow, even in an area with few
obstacles. Each time it nudged forward, Shakey would have to update
its map. A moving object in its field of view could easily bewilder it,
sometimes stopping it in its tracks for an hour while it planned its next
move.

 1973
The autonomous picture creator - Since 1973, Harold Cohen—a
painter, a professor at the University of California, San Diego, and a
onetime representative of Britain at the Venice Biennale—has been
collaborating with a program called AARON. AARON has been able to
make pictures autonomously for decades; even in the late 1980s
Cohen was able to joke that he was the only artist who would ever be
able to have a posthumous exhibition of new works created entirely
after his own death.
 1987
A solution for big business - After a long “AI winter” - when people
began seriously doubting AI’s ability to reach anything near human
levels of intelligence - AI's commercial value started to be realized, attracting new investment.
 1988
From rules to probabilistic learning - Members of the IBM T.J.
Watson Research Center publish “A statistical approach to language
translation,” heralding the shift from rule-based to probabilistic
methods of machine translation, and reflecting a broader shift to
“machine learning” based on statistical analysis of known examples,
not comprehension and “understanding” of the task at hand (IBM’s
project Candide, successfully translating between English and French,
was based on 2.2 million pairs of sentences, mostly from the bilingual
proceedings of the Canadian parliament).
 1990
Back to nature for “bottom-up” inspiration - Expert systems
couldn't crack the problem of imitating biology. Then AI scientist
Rodney Brooks published a new paper: Elephants Don’t Play Chess.
Brooks was inspired by advances in neuroscience, which had started to
explain the mysteries of human cognition. Vision, for example, needed
different 'modules' in the brain to work together to recognize patterns,
with no central control. Brooks argued that the top-down approach of
pre-programming a computer with the rules of intelligent behavior was
wrong. He helped drive a revival of the bottom-up approach to AI,
including the long unfashionable field of neural networks.

 1995
A.L.I.C.E. chatbot learns how to speak from the web - Richard
Wallace develops the chatbot A.L.I.C.E (Artificial Linguistic Internet
Computer Entity), inspired by Joseph Weizenbaum's ELIZA program,
but with the addition of natural language sample data collection on an
unprecedented scale, enabled by the advent of the Web.
 1997
Man vs. machine: fight of the 20th century - Supporters of top-
down AI still had their champions: supercomputers like Deep Blue,
which in 1997 took on world chess champion Garry Kasparov. The
IBM-built machine was, on paper, far superior to Kasparov - capable of
evaluating up to 200 million positions a second. But could it think
strategically? The answer was a resounding yes. The supercomputer won the contest, dubbed 'the brain's last stand', with such flair that Kasparov believed a human being had to be behind the controls. Some
hailed this as the moment that AI came of age. But for others, this
simply showed brute force at work on a highly specialized problem
with clear rules.
 2002
The first robot for the home - Rodney Brooks's spin-off company,
iRobot, created the first commercially successful robot for the home –
an autonomous vacuum cleaner called Roomba. Cleaning the carpet
was a far cry from the early AI pioneers' ambitions. But Roomba was a
big achievement. Its few layers of behavior-generating systems were
far simpler than Shakey the Robot's algorithms, and were more like
Grey Walter’s robots over half a century before. Despite relatively
simple sensors and minimal processing power, the device had enough
intelligence to reliably and efficiently clean a home. Roomba ushered
in a new era of autonomous robots, focused on specific tasks.
 2008
Starting to crack the big problems - In November 2008, a small
feature appeared on the new Apple iPhone – a Google app with speech
recognition. It seemed simple. But this heralded a major
breakthrough. Despite speech recognition being one of AI's key goals,
decades of investment had never lifted it above 80% accuracy. Google
pioneered a new approach: thousands of powerful computers, running
parallel neural networks, learning to spot patterns in the vast volumes
of data streaming in from Google's many users. At first it was still
fairly inaccurate but, after years of learning and improvements, Google
now claims it is 92% accurate.
 2009
ImageNet democratizes data - Stanford researcher Fei-Fei Li saw
her colleagues across academia and the AI industry hammering away
at the same concept: a better algorithm would make better decisions,
regardless of the data. But she realized a limitation to this approach—
the best algorithm wouldn’t work well if the data it learned from didn’t
reflect the real world. Her solution: build a better dataset. “We decided
we wanted to do something that was completely historically
unprecedented. We’re going to map out the entire world of objects.”
The resulting dataset was called ImageNet. Fei-Fei Li released
ImageNet, a free database of 14 million images that had been labeled
by tens of thousands of Amazon Mechanical Turk workers. AI
researchers started using ImageNet to train neural networks to catalog
photos and identify objects. The dataset quickly evolved into an annual
competition to see which algorithms could identify objects with the
lowest error rate. Many see it as the catalyst for the AI boom the world
is experiencing today.
 2010
Dance bots - At the same time as massive mainframes were changing
the way AI was done, new technology meant smaller computers could
also pack a bigger punch. These new computers enabled humanoid
robots, like the NAO robot, which could do things predecessors like
Shakey had found almost impossible. NAO robots used lots of the
technology pioneered over the previous decade, such as learning
enabled by neural networks. At Shanghai's 2010 World Expo, some of
the extraordinary capabilities of these robots went on display, as 20 of
them danced in perfect harmony for eight minutes.

 2011
Man vs machine: fight of the 21st century - In 2011, IBM's
Watson took on the human brain on US quiz show Jeopardy. This was
a far greater challenge for the machine than chess. Watson had to
answer riddles and complex questions. Its makers used a myriad of AI
techniques, including neural networks, and trained the machine for
more than three years to recognize patterns in questions and answers.
Watson trounced its opposition – the two best performers of all time
on the show. The victory went viral and was hailed as a triumph for AI.
 2012
Learning cat faces - Jeff Dean and Andrew Ng report on an
experiment in which they showed a very large neural network 10
million unlabeled images randomly taken from YouTube videos, and
“to our amusement, one of our artificial neurons learned to respond
strongly to pictures of... cats.”
 2014
Are machines intelligent now? - Sixty-four years after Turing
published his idea of a test that would prove machine intelligence, a
chatbot called Eugene Goostman finally passed. But very few AI experts saw this as a watershed moment. Eugene Goostman was seen as having been 'taught to the test', using tricks to fool the judges. It was other
developments in 2014 that really showed how far AI had come in 70
years. From Google's billion dollar investment in driverless cars, to
Skype's launch of real-time voice translation, intelligent machines were
now becoming an everyday reality that would change all of our lives.
 2015
Google Deep Dream is born - In June 2015, Alex Mordvintsev and
Google’s Brain AI research team published some fascinating results.
After some training in identifying objects from visual clues, and being
fed photographs of skies and random-shaped stuff, the program began
generating digital images suggesting the combined imaginations of
Walt Disney and Pieter Bruegel the Elder, including a hybrid “Pig-
Snail,” “Camel-Bird” and “Dog-Fish.” This birthed a new form of art
called “Inceptionism”, named after the Inception algorithm, in which a
neural network would progressively zoom in on an image and try to
“see” it within the framework of what it already knew.
 2016
Partnership on AI - The Partnership on AI was founded to conduct
research, organize discussions, share insights, provide thought
leadership, consult with relevant third parties, respond to questions
from the public and media, and create educational material that
advances the understanding of AI technologies including machine
perception, learning, and automated reasoning. The Partnership is led
by Founding Executive Director Terah Lyons, who formerly served as
Policy Advisor to the U.S. Chief Technology Officer in the White House
Office of Science and Technology Policy (OSTP).
 2017
AI co-produces mainstream pop album - Taryn Southern is a pop
artist working with several AI platforms to co-produce her debut album
I AM AI. Her 2017 single “Break Free” is a human-AI collaboration. You can hear about Taryn's creative process in “How AI-Generated Music is Changing The Way Hits Are Made,” an interview with DJ and Future of Music producer Dani Deahl. Taryn explains:
“Using AI, I’m writing my lyrics and vocal melodies to the music and
using that as a source of inspiration. Because I’m able to iterate with
the music and give it feedback and parameters and edit as many times
as I need, it still feels like it’s mine.” 

Topic 3

If given the chance to have enough capital for investment, would you invest in a company focusing on artificial intelligence?

Artificial intelligence (AI) is all around us. We likely use it in our daily activities, whether searching the web or checking our latest social media feed. Whether we are aware of it or not, AI has a massive effect on our lives, as well as on the business world. Furthermore, a businessman must be a risk taker; you may not reach success if you never encounter failures or struggles. However, even if I had enough money and were given the chance to invest, I would not choose a company focused on artificial intelligence. Such a company can generate income and speed up production, but for me AI is one of the reasons many people suffer poverty due to unemployment. In the future, I want to be the type of investor who focuses not only on the income to be received but on the people living around me, because if we help other people, more blessings come – not just money but the happiness of seeing people working hard for their families and loved ones. For me, there is nothing AI can do that people cannot.

Thus, artificial intelligence cannot fully replace humans. Machines cannot act like human brains; they are capable of doing what they are programmed for but nothing more, and there is a limit to their creativity, understanding, and thinking ability.

What area of life would you like to employ the AI innovation you
want?

In today's generation, artificial intelligence is still evolving, and it has a big impact on our lives. The aspect of life where I would most like to employ AI innovation is medical facilities for health care. Medical facilities that use AI can detect cancer cells with higher accuracy; these applications can also collect and analyze patient data and present it to primary care physicians alongside insight into the patient's medical needs. This puts consumers in control of their health and well-being. Additionally, it increases the ability of healthcare professionals to better understand the data, patterns, and needs of the people they care for, so that they can provide better feedback, guidance, and support for staying healthy.

Thus, as investments in machine learning and AI continue to push the boundaries of what a machine is capable of, the possible applications of artificial intelligence are beginning to creep into sectors that were previously possible only in the realm of fiction. To some, the idea of a machine helping humans learn in a procedurally generated manner might still seem outlandish, but AI is already making an impact in many aspects of life.
Source:

 Rathaur, K., & Paras, M. (2019). Overview of artificial intelligence in medicine.
 Robsonphoto. (2018). Impacts of artificial intelligence.

What have you imagined to be a probable impact of this to your life and the society you are in?

The rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. Artificial intelligence can be defined as the ability of computer systems to perform tasks and activities that usually can only be accomplished using human intelligence.

Moreover, artificial intelligence has impacts on all aspects of our society, such as the business industry and medicine, specifically the healthcare industry, where it brings many improvements to our activities. I can imagine that it has a good impact on our lives and on our society, because it helps us evaluate illnesses quickly and accurately using AI medical facilities. With the help of artificial intelligence we can more easily detect skin cancer and support the diagnosis process, treatment protocol development, and drug development. Thus, artificial intelligence in today's generation makes our world easier and saves time and effort in our daily activities and jobs.
Source: Wallach, W., & Allen, C. (2008)

MODULE 2

Topic 1

What do you think is the greatest contribution of AI in this modern world?

The world is fast evolving, with artificial intelligence at the forefront of changing the world and the way we live. Artificial intelligence contributes greatly to many aspects of our modern world, but its greatest contribution is in the business industry, where it enables human capabilities like understanding, reasoning, planning, communication, and perception to be undertaken by software increasingly effectively, efficiently, and at low cost. The automation of these abilities creates new opportunities in most business sectors and consumer applications. Thus, artificial intelligence in business can be used to solve problems across the board. AI can help businesses increase sales, detect fraud, improve customer experience, automate work processes, and provide predictive analysis. Moreover, utilizing artificial intelligence in the business world has a major impact on efficiency: intelligent systems can automate a great amount of our work, help reduce the risk of human error, analyze data more accurately, and speed up reporting. AI can be used to analyze large amounts of data and draw conclusive reports.

Source: Gardner, K. (2019). How AI is helping efficiency improve

MODULE 3

Topic 1
What do you think the outcome will be if we ignore the risks associated with developing AI to be utilized in business?

Businesses are increasingly looking for ways to put artificial intelligence technologies to work to improve their productivity, profitability, and business results. However, there are also certain barriers and disadvantages to keep in mind; if we ignore them, more troubles will occur that can destroy the image of a company or cause damage. Some of these risks are job losses, the security of AI systems that can potentially cause damage, and the effect of machine interaction on human behavior and attention. We really need to be aware of, and give time to solving and facing, all the risks that occur in developing artificial intelligence, so that a business's production process runs well and achieves a better outcome.

Thus, utilizing artificial intelligence in business can create a better future and better lives for everyone, as it has immense and beneficial potential.

Source: https://www.nibusinessinfo.co.uk
