As the hype around AI has accelerated, vendors have been scrambling to promote how
their products and services use it. Often, what they refer to as AI is simply a
component of the technology, such as machine learning. AI requires a foundation of
specialized hardware and software for writing and training machine learning
algorithms. No single programming language is synonymous with AI, but Python, R,
Java, C++ and Julia have features popular with AI developers.
The terms AI, machine learning and deep learning are often used interchangeably, especially by companies in
their marketing materials. But there are distinctions. The term AI, coined in the 1950s,
refers to the simulation of human intelligence by machines. It covers an ever-changing set of capabilities as new
technologies are developed. Technologies that come under the umbrella of AI include machine learning and deep
learning.
Machine learning enables software applications to become more accurate at predicting outcomes without being
explicitly programmed to do so. Machine learning algorithms use historical data as input to predict new output
values. This approach became vastly more effective with the rise of large data sets to train on. Deep learning, a
subset of machine learning, is based on our understanding of how the brain is structured. Deep learning's use of
artificial neural network structures underpins recent advances in AI, including self-driving cars and
ChatGPT.
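To make this concrete, here is a minimal sketch of machine learning in Python using scikit-learn (the library, data and numbers are illustrative assumptions, not details from the text): a model is fit on historical input-output pairs and then predicts a value for an unseen input.

    # Minimal machine learning sketch: learn from historical data, then
    # predict a new output value. Requires scikit-learn (pip install scikit-learn).
    from sklearn.linear_model import LinearRegression

    # Hypothetical historical data: ad spend (input) and sales (output).
    X_history = [[1.0], [2.0], [3.0], [4.0], [5.0]]
    y_history = [12.0, 19.0, 31.0, 42.0, 50.0]

    model = LinearRegression()
    model.fit(X_history, y_history)      # learn a pattern from past examples

    prediction = model.predict([[6.0]])  # apply the pattern to unseen input
    print(f"Predicted output for input 6.0: {prediction[0]:.1f}")

The point is not the particular model but the workflow: no rule for computing the output was explicitly programmed; it was inferred from the historical examples.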
AI has become central to many of today's largest and most successful companies,
including Alphabet, Apple, Microsoft and Meta, where AI technologies are used to
improve operations and outpace competitors. At Alphabet subsidiary Google, for
example, AI is central to its search engine, Waymo's self-driving cars and Google
Brain, which invented the transformer neural network architecture that underpins the
recent breakthroughs in natural language processing.
While the huge volume of data created on a daily basis would bury a human
researcher, AI applications using machine learning can take that data and quickly turn
it into actionable information. As of this writing, a primary disadvantage of AI is that
it is expensive to process the large amounts of data AI programming requires. As AI
techniques are incorporated into more products and services, organizations must also
be attuned to AI's potential to create biased and discriminatory systems, intentionally
or inadvertently.
AI can be categorized as weak or strong. Weak AI, also known as narrow AI, is designed and trained to complete a
specific task. Industrial robots and virtual personal assistants, such as
Apple's Siri, use weak AI.
What are the applications of AI?
AI has made its way into a wide variety of markets. The following are some examples.
AI in education. AI can automate grading, giving educators more time for other
tasks. It can assess students and adapt to their needs, helping them work at their own
pace. AI tutors can provide additional support to students, ensuring they stay on track.
The technology could also change where and how students learn, perhaps even
replacing some teachers. As demonstrated by ChatGPT, Bard and other large
language models, generative AI can help educators craft course work and other
teaching materials and engage students in new ways. The advent of these tools also
forces educators to rethink student homework and testing and revise policies on
plagiarism.
AI in finance. AI in personal finance applications, such as Intuit Mint or TurboTax, is
disrupting financial institutions. Applications such as these collect personal data and
provide financial advice. Other programs, such as IBM Watson, have been applied to
the process of buying a home. Today, artificial intelligence software performs much
of the trading on Wall Street.
Security. AI and machine learning are at the top of the buzzword list security vendors
use to market their products, so buyers should approach with caution. Still, AI
techniques are being successfully applied to multiple aspects of cybersecurity,
including anomaly detection, solving the false-positive problem and conducting
behavioral threat analytics. Organizations use machine learning in security
information and event management (SIEM) software and related areas to detect
anomalies and identify suspicious activities that indicate threats. By analyzing data
and using logic to identify similarities to known malicious code, AI can provide alerts
to new and emerging attacks much sooner than human employees and previous
technology iterations.
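As a sketch of the anomaly detection idea described above, and assuming scikit-learn's IsolationForest with invented event data (no vendor's actual SIEM pipeline is being described), flagging outliers might look like this:

    # Sketch of ML-based anomaly detection of the kind used in SIEM tooling.
    # The event features below are invented for illustration.
    from sklearn.ensemble import IsolationForest

    # Each row is a login event: [hour_of_day, megabytes_transferred].
    events = [
        [9, 5], [10, 7], [11, 6], [14, 8], [15, 5],  # typical activity
        [3, 900],                                     # 3 a.m. bulk transfer
    ]

    detector = IsolationForest(contamination=0.15, random_state=0)
    labels = detector.fit_predict(events)  # -1 flags an anomaly, 1 is normal

    for event, label in zip(events, labels):
        if label == -1:
            print("Suspicious event:", event)

Because the detector learns what "normal" looks like from the data itself, it can surface activity no signature was ever written for, which is what distinguishes this approach from rule-based detection.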
AI's dependence on training data can be problematic because machine learning algorithms, which underpin many
of the most advanced AI tools, are only as smart as the data they are given in training.
Because a human being selects what data is used to train an AI program, the potential
for machine learning bias is inherent and must be monitored closely.
Crafting laws to regulate AI will not be easy, in part because AI comprises a variety
of technologies that companies use for different ends, and partly because regulations
can come at the cost of AI progress and development. The rapid evolution of AI
technologies is another obstacle to forming meaningful regulation of AI, as are the
challenges presented by AI's lack of transparency that make it difficult to see how the
algorithms reach their results. Moreover, technology breakthroughs and novel
applications such as ChatGPT and Dall-E can make existing laws instantly obsolete.
And, of course, the laws that governments do manage to craft to regulate AI don't stop
criminals from using the technology with malicious intent.
AI has had a long and sometimes controversial history, from the Turing test in 1950 to today's generative
AI chatbots like ChatGPT.
What is the history of AI?
The concept of inanimate objects endowed with intelligence has been around since
ancient times. The Greek god Hephaestus was depicted in myths as forging robot-like
servants out of gold. Engineers in ancient Egypt built statues of gods animated by
priests. Throughout the centuries, thinkers from Aristotle to the 13th century Spanish
theologian Ramon Llull to René Descartes and Thomas Bayes used the tools and logic
of their times to describe human thought processes as symbols, laying the foundation
for AI concepts such as general knowledge representation.
The late 19th and first half of the 20th centuries brought forth the foundational work
that would give rise to the modern computer. In 1836, Cambridge University
mathematician Charles Babbage and Augusta Ada King, Countess of Lovelace,
invented the first design for a programmable machine.
1950s. With the advent of modern computers, scientists could test their ideas about
machine intelligence. One method for determining whether a computer has
intelligence was devised by the British mathematician and World War II code-breaker
Alan Turing. The Turing test focused on a computer's ability to fool interrogators into
believing its responses to their questions were made by a human being.
1956. The modern field of artificial intelligence is widely cited as starting this year
during a summer conference at Dartmouth College. Funded by the Rockefeller
Foundation, the conference was attended by 10
luminaries in the field, including AI pioneers Marvin Minsky, Oliver Selfridge
and John McCarthy, who is credited with coining the term artificial intelligence. Also
in attendance were Allen Newell, a computer scientist, and Herbert A. Simon, an
economist, political scientist and cognitive psychologist. The two presented their
groundbreaking Logic Theorist, a computer program capable of proving certain
mathematical theorems and referred to as the first AI program.
1950s and 1960s. In the wake of the Dartmouth College conference, leaders in the
fledgling field of AI predicted that a man-made intelligence equivalent to the human
brain was around the corner, attracting major government and industry support.
Indeed, nearly 20 years of well-funded basic research generated significant advances
in AI: For example, in the late 1950s, Newell and Simon published the General
Problem Solver (GPS) algorithm, which fell short of solving complex problems but
laid the foundations for developing more sophisticated cognitive architectures; and
McCarthy developed Lisp, a language for AI programming still used today. In the
mid-1960s, MIT Professor Joseph Weizenbaum developed ELIZA, an early NLP
program that laid the foundation for today's chatbots.
2010s. The decade between 2010 and 2020 saw a steady stream of AI developments.
These include the launch of Apple's Siri and Amazon's Alexa voice assistants; IBM
Watson's victories on Jeopardy; self-driving cars; the development of the first
generative adversarial network; the launch of TensorFlow, Google's open source deep
learning framework; the founding of research lab OpenAI, developers of the GPT-3
language model and Dall-E image generator; the defeat of world Go champion Lee
Sedol by Google DeepMind's AlphaGo; and the implementation of AI-based systems
that detect cancers with a high degree of accuracy.
2020s. The current decade has seen the advent of generative AI, a type of artificial
intelligence technology that can produce new content. Generative AI starts with a
prompt that could be in the form of a text, an image, a video, a design, musical notes
or any input that the AI system can process. Various AI algorithms then return new
content in response to the prompt. Content can include essays, solutions to problems,
or realistic fakes created from pictures or audio of a person. The abilities of language
models such as OpenAI's ChatGPT, Google's Bard and Microsoft's Megatron-Turing NLG
have wowed the world, but the technology is still in early stages, as evidenced by its
tendency to hallucinate or skew answers.
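The prompt-in, content-out loop can be sketched in a few lines. The example below substitutes the open source Hugging Face transformers library and the small GPT-2 model for the commercial systems named above (an illustrative assumption; those services are not called this way):

    # Minimal generative AI sketch: a text prompt goes in, new text comes out.
    # Requires the transformers and torch packages (pip install transformers torch).
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")  # small open model

    prompt = "Artificial intelligence is"
    outputs = generator(prompt, max_new_tokens=30, num_return_sequences=1)
    print(outputs[0]["generated_text"])  # the prompt plus generated continuation

Running the same prompt twice can yield different continuations, which illustrates both the creative appeal of generative models and their tendency to produce plausible but unverified text.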
Over the last several years, the symbiotic relationship between AI discoveries at
Google, Microsoft and OpenAI and the hardware innovations pioneered by Nvidia
has enabled running ever-larger AI models on more connected GPUs, driving game-
changing improvements in performance and scalability.
The collaboration among these AI luminaries was crucial for the recent success of
ChatGPT, not to mention dozens of other breakout AI services. Here is a rundown of
important innovations in AI tools and services.
Transformers. Google, for example, led the way in finding a more efficient process
for provisioning AI training across a large cluster of commodity PCs with GPUs. This
paved the way for the development of transformers, which automate many aspects of
training AI on unlabeled data.
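At the heart of the transformer architecture is self-attention. Below is a stripped-down sketch of scaled dot-product attention in NumPy (the toy dimensions and random inputs are assumptions for illustration; real transformers add learned projections, multiple heads and many stacked layers):

    # Scaled dot-product attention, the core operation inside a transformer.
    import numpy as np

    def attention(Q, K, V):
        d_k = K.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarity
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
        return weights @ V                              # weighted mix of values

    rng = np.random.default_rng(0)
    Q = rng.normal(size=(3, 4))  # 3 tokens, 4-dimensional representations
    K = rng.normal(size=(3, 4))
    V = rng.normal(size=(3, 4))
    print(attention(Q, K, V))    # each output row attends over all 3 tokens

Because every token attends to every other token in parallel, this operation scales well across GPUs, which is one reason transformers made training on vast unlabeled corpora practical.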