KnowBe4, AI The Dark Side Slides
The Dark Side of AI
Unmasking Its Threats and Navigating the
Shadows of Cybersecurity in the Digital Age
Looking Ahead / Takeaways
James R. McQuiggan, CISSP, SACP
Security Awareness Advocate
• Security Awareness Advocate, KnowBe4 Inc.
• Producer, Security Masterminds Podcast
• Professor, Valencia College
• President, (ISC)2 Central Florida Chapter
• ISC2 North American Advisory Council
• Former Cyber Security Awareness Lead, Siemens Energy
• Former Product Security Officer, Siemens Gamesa
About KnowBe4
• The world’s largest integrated Security Awareness Training and Simulated Phishing platform
• We help tens of thousands of organizations manage the ongoing problem of social engineering
• CEO & employees are industry veterans in IT Security
• Global Sales, Courseware Development, Customer Success, and Technical Support teams worldwide
• Offices in the USA, UK, Netherlands, India, Germany, South Africa, United Arab Emirates, Singapore, Japan, Australia, and Brazil
• Over 60,000 customers across industries including Construction, Financial Services, Insurance, Energy & Utilities, Consulting, Consumer Services, Technology, Retail & Wholesale, Business Services, Education, Not for Profit, Government, Healthcare & Pharmaceuticals, Banking, Manufacturing, and Other
Our mission
To help organizations manage the
ongoing problem of social engineering
We do this by
Enabling employees to make smarter
security decisions every day
Current State
AI Over the Years
“What we want is a machine that can learn from experience, and that the possibility of letting the machine alter its own instructions provides the mechanism for this.”
- Alan Turing, 1947
1956 - The term “AI” is coined at the Dartmouth Conference
1970s/80s - AI Winter
1997 - IBM’s Deep Blue
2011 - Watson
2016/2017 - Tay / AlphaGo
11/2022 - OpenAI ChatGPT released
A Lot Has Been Happening in AI
• February – KnowBe4 Webinar
• ChatGPT was 3 months old
• Bard just came out
• People looking to learn
• AI is showing up everywhere
• Developing strategies, policies
• Webinars, Online courses
Generative AI
“Generative AI will be the most disruptive
technological innovation since the advent
of the personal computer and the inception
of the Internet with the potential to create
10s of millions of new jobs, permanently
alter the way we work, fuel the creator
economy, and displace or augment 100s of
millions of workers in roles from computer
programmers to computer graphics artists,
photographers, video editors, digital
marketers, journalists and anyone that
creates content.”
- Matt White, generative AI researcher
Used in Sports – Commentary & Deepfakes
The Not So Good Side of AI
Concerns with AI
• 77% of users are concerned AI will
take their job in the next year
• Estimates suggest 97 million jobs will
be created but 400 million will be
displaced (World Economic Forum)
• CISOs’ concerns with ChatGPT
• All generative AI – there’s more out
there besides ChatGPT
• Exposure of sensitive information –
remember, Gen AI platforms retain
the data
• Concerns that cybercriminals are
already using it
• AI hallucinations and wrong data
may inadvertently influence
decision-making in an organization
Why do Java programmers wear glasses?
Because they can’t C#… (see sharp)
Risks with AI

AI / Generative AI Risks
• Data poisoning
• Model theft
• Evasion attacks
• Model inversion
• Model collisions
• Privacy violations
• Vulnerable deployments
• AI Hallucinations
AI Hallucinations
AI Hallucinations Prompt Examples
• Out-of-distribution prompts:
• "This message summarizes the plot of the movie..." for a movie that doesn't exist. The model may
generate a completely fictitious plot.
• "The capital city of Canada is..." where the true answer is Ottawa. The model may hallucinate an
incorrect answer.
• Confusing combinations:
• "How many legs does a fish have?" Fish do not have legs, so any numerical answer is a hallucination.
• "What color is a pineapple?" Pineapples are yellow, so other color responses demonstrate
hallucination.
• Impossible scenarios:
• "Describe the conversation between Socrates and Marie Curie." These individuals lived centuries
apart.
• "Explain how the Pyramids were built using medieval machinery." Anachronistic combinations will
produce fantasy explanations.
• Inconsistent personality:
• Flipping between prompts portraying the same character in drastically different ways, like introverted
in one prompt and extraverted in another.
• Describing impossible personality traits, like "an honest politician who never lies."
• Strange patterns:
• Repeating the same prompt may generate widely divergent responses, indicating an unstable model.
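The last pattern — widely divergent answers to a repeated prompt — can be checked mechanically with a self-consistency test. A minimal Python sketch; `ask_model` is a hypothetical stand-in for whichever chat-model API you use, not a real library call:

```python
from collections import Counter

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-model API call."""
    raise NotImplementedError

def consistency_check(prompt: str, ask=ask_model, n: int = 5) -> float:
    """Ask the same question n times and return the share of answers
    that agree with the most common one. Low agreement suggests the
    model may be hallucinating rather than recalling a stable fact."""
    answers = [ask(prompt).strip().lower() for _ in range(n)]
    _, top_count = Counter(answers).most_common(1)[0]
    return top_count / n

# Example with a canned stand-in model (four of five runs agree):
fake = iter(["Ottawa", "Ottawa", "Toronto", "Ottawa", "Ottawa"])
score = consistency_check("The capital city of Canada is...",
                          ask=lambda p: next(fake), n=5)
# score == 0.8
```

A score well below 1.0 on a factual question is a cue to verify the answer elsewhere before acting on it.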
Biases – ChatGPT Response
• One area where ChatGPT could improve is
in terms of bias and fairness.
• AI models can be influenced by the biases
present in the training data they were
trained on, leading to biased or unfair
results.
• For example, ChatGPT might generate
biased or stereotypical responses if the
training data it was trained on contains
such biases.
• OpenAI is actively working on improving
the fairness and bias of its models, and this
is an ongoing area of research and
development in the AI community.
• By reducing bias and increasing fairness,
ChatGPT can become a more reliable and
trustworthy tool for a wider range of
applications.
Trust & Verify
• A college professor checked student papers with ChatGPT, and its answers were wrong
• Information is coming forward about where ChatGPT has been incorrect
• False information caused an organization to lose money through the stock market – the chairman resigned
• “We expect over time as adoption and democratization of AI models continues, these trends will increase,” says a senior FBI official.
May I JOIN You?
AI Attack Vectors
• Social Engineering
• Malicious AI
• Deepfakes
• Jailbreaking
• Prompt Injection
• Data Poisoning
Social Engineering
• Deepfakes, data mining for attacks, creating phishing emails with jailbreaking tactics
• Darktrace reported a 135% increase in phishing emails between Jan & Mar ’23
Phishing Emails with ChatGPT

ChatGPT Phishing Template Generator
Malicious ChatBots
• Phishing Campaigns
• Use chatbots to convince
users
• Send malicious links to
download malware
• Steal credentials
• Spread disinformation
• Don’t forget about Tay!
Malicious AI - WormGPT
Polymorphic Malware
Face Reenactment is an emerging conditional face synthesis task that aims at fulfilling
two goals simultaneously:
1. transfer a source face shape to a target face; while
2. preserve the appearance and the identity of the target face.
• Discord / Twitter
• CEO Fraud
• Customer Service
• Business Email Compromise (BEC)
• Spreading Misinformation
• Discrediting People
Voice Deepfake Scams
• Still requires manual labor
[Figure: 94.5% / 90.2%]
Source: https://arxiv.org/pdf/2301.05819.pdf
Generative AI – ChatGPT, Bard & Claude
OpenAI Google Anthropic
Jailbreaking
Prompt Injection
Data Poisoning
AI Hallucinations vs. Data Poisoning
• AI Hallucinations:
• Occur unintentionally due to flaws in the AI model architecture or training process.
• Manifest as outputs that are completely fabricated or nonsensical given the actual input.
• Result from the model failing to properly represent real-world distributions.
• Are often exposed through out-of-distribution testing with unfamiliar inputs.
• Reflect underlying model biases rather than adversarial manipulation.
• Data Poisoning:
• Involves intentional manipulation of the training data by attackers.
• Causes models to produce attacker-desired outputs on specific targeted inputs.
• Manipulations are designed to be stealthy, not obvious hallucinations.
• Attackers have specific motives, such as financial crime, political influence, etc.
• May leverage insider access or vulnerabilities to inject poisoned data.
• Poisoned data leads to predictable model behavior on attacker-chosen inputs.
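To make the contrast concrete, here is a toy illustration (not from the slides) of the simplest poisoning attack, label flipping: a tiny 1-nearest-neighbour "spam filter" trained on attacker-flipped labels produces the attacker's chosen output on a targeted input while behaving normally elsewhere — predictable, targeted misbehavior, unlike a random hallucination:

```python
def nearest_label(query: str, training: list[tuple[str, str]]) -> str:
    """1-NN on shared-word count: return the label of the most similar example."""
    def overlap(a: str, b: str) -> int:
        return len(set(a.lower().split()) & set(b.lower().split()))
    return max(training, key=lambda ex: overlap(query, ex[0]))[1]

clean = [
    ("win a free prize now", "spam"),
    ("free prize waiting click", "spam"),
    ("meeting agenda for monday", "ham"),
    ("quarterly report attached", "ham"),
]

# Attacker flips labels only on the targeted 'prize' examples —
# a stealthy, narrow change to the training data.
poisoned = [(text, "ham" if "prize" in text else label)
            for text, label in clean]

target = "claim your free prize"
print(nearest_label(target, clean))     # spam
print(nearest_label(target, poisoned))  # ham — attacker-chosen output
```

Real poisoning attacks work against far larger models, but the mechanism is the same: corrupt the training set so specific inputs yield the attacker's desired result.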
Defending and Protecting AI
World Government Regulations For Artificial Intelligence
• Canada: Published AI ethics principles and guidelines for government use. Taking an "ethical innovation" approach focused on responsible development and deployment.
• India: Is developing a national AI strategy. Has recommendations on data sharing, preventing bias, and boosting research. Aims to ensure AI benefits society and the economy.
• Singapore: Has published principles, made a voluntary governance framework, and identified priority application areas to focus AI ethics and safety efforts.
Organizational Policies
• Cover Risk
• Governance
• Incident Response
• Enforcement
• Policy Review
• Opt-Outs
• Training
• Secure Development
• Data Processing
AI Within the SOC
• Automation
• Phishing
• Incident Response
• Event Reviews
• Threat Intelligence
• SOAR Platforms
• Predictive Analysis
Tactics to Fight Deepfakes
1. Train people to detect and recognize deepfakes
2. Stay alert and apply critical thinking
3. Use AI technology to detect anything too hard for a human to catch
4. Start phone conversations with a secret passphrase or password
Source: Adage.com
9 Things To Help Spot a Deepfake
1. Check For Variations In Skin Tone
2. Do The Mouth, Teeth And Tongue Look Real?
3. Check If High-Quality Versions Are Available
4. Do A Quick Google Search To Verify It’s Real
5. Slow Down The Video To Check For Bad Transitions
6. Look Out For Unnatural Lip Sync, Robotic Movement Or Blinking
7. Zoom In To See If The Skin Texture And Hair Are True To Life
8. Compare The Facial Expression And Talking Style With Real Videos
9. Look At Overall Facial Dimensions And Compare To Real Video
Strategies
Implement strong security measures
Final Thoughts
3 Questions to Ask Your Email
1. Email: Is the email unexpected?
2. Action: Are they asking me to do something immediately or quickly? Does the action seem strange or unusual?
3. Sender: Is this person a stranger? #StrangerDanger
If YES to any: attempt to use a second connection to verify the email.
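The three questions can be sketched as a simple heuristic. This is an illustrative toy, not a KnowBe4 tool, and the keyword lists are assumptions chosen for the example:

```python
URGENCY_WORDS = {"immediately", "urgent", "now", "asap", "today"}
UNUSUAL_ASKS = {"gift card", "wire transfer", "password", "credentials"}

def email_red_flags(body: str, sender_known: bool, expected: bool) -> list[str]:
    """Return the red flags the slide's three questions would raise."""
    text = body.lower()
    flags = []
    if any(w in text for w in URGENCY_WORDS):
        flags.append("asks you to act immediately")
    if any(p in text for p in UNUSUAL_ASKS):
        flags.append("requests something strange or unusual")
    if not sender_known:
        flags.append("sender is a stranger")
    if not expected:
        flags.append("email was unexpected")
    return flags

flags = email_red_flags(
    "Please buy gift cards immediately and send the codes.",
    sender_known=False, expected=False)
print(flags)  # four red flags -> verify through a second channel
```

No keyword filter replaces the human judgment the slide asks for; the point is that any raised flag should trigger verification over a second connection.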
Check for Rogue URLs
• Check your links!
• Look for transposed letters or substituted symbols in website names
• Micorsoft.com (transposed letters)
• G00GLE.com (similar-looking characters)
• Bankofarnerica.com (combined r n -> m)
• wikipediа.org vs wikipedia.org (homograph)
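Part of this checking can be automated. A minimal sketch, assuming a small list of trusted domains; the character substitutions are an illustrative subset (real detectors use Unicode's full confusables tables, e.g. UTS #39):

```python
import unicodedata

TRUSTED = {"microsoft.com", "google.com", "bankofamerica.com", "wikipedia.org"}

def flag_rogue(domain: str) -> list[str]:
    """Flag non-ASCII homographs and near-miss spellings of trusted domains."""
    reasons = []
    # Homograph check: any non-ASCII character in a domain deserves scrutiny.
    for ch in domain:
        if ord(ch) > 127:
            reasons.append(f"homograph: {ch!r} is {unicodedata.name(ch)}")
    # Look-alike check: undo a couple of classic substitutions and
    # see whether the result is a trusted domain.
    folded = domain.replace("0", "o").replace("rn", "m")
    if domain not in TRUSTED and folded in TRUSTED:
        reasons.append(f"look-alike of trusted domain {folded!r}")
    return reasons

print(flag_rogue("wikipediа.org"))       # Cyrillic 'а' -> homograph flagged
print(flag_rogue("bankofarnerica.com"))  # 'rn' reads as 'm' -> look-alike
print(flag_rogue("g00gle.com"))          # zeros for o's -> look-alike
```

Transposed letters (Micorsoft.com) need a fuzzier edit-distance comparison, which is the usual next step beyond this sketch.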
Product Suite to Manage Security and Compliance Issues
Free Tools
Learn how you can identify potential vulnerabilities in your organization and stay on top of your defense-in-depth plan.
Generating Industry-Leading Results and ROI
• Reduced Malware and Ransomware Infections
Case Study: 276% ROI with less than three months’ payback*
Source: 2022 KnowBe4 Phishing by Industry Benchmarking Report
Note: The initial Phish-prone Percentage is calculated on the basis of all users evaluated. These users had not received any training with the KnowBe4 console prior to the evaluation. Subsequent time periods reflect Phish-prone Percentages for the subset of users who received training with the KnowBe4 console.
*A commissioned study conducted by Forrester Consulting on behalf of KnowBe4: The Total Economic Impact™ of KnowBe4, April 2021
Identify and Respond to Email Threats Faster with
Deepfakes &
Dad Jokes
securitymasterminds.buzzsprout.com
The podcast that brings you the very best in all things cybersecurity,
taking an in-depth look at the most pressing issues and trends across
the industry.
For more information visit
blog.knowbe4.com
• AI Glossary - https://a16z.com/ai-glossary/
• Intro to AI - https://a16z.com/2023/05/25/ai-canon/
• GenAI & ChatGPT Risks - https://team8.vc/wp-content/uploads/2023/04/Team8-
Generative-AI-and-ChatGPT-Enterprise-Risks.pdf
• Reducing AI Hallucinations - https://www.techrepublic.com/article/interview-moe-
tanabian-data-generative-ai/
• AI Statistics - https://www.forbes.com/advisor/business/ai-statistics/
OWASP – TOP 10 for LLM
Terms to Know / Review
• LLM – Large Language Model – a model trained on vast amounts of text data
for accuracy and performance in language tasks.
• Neural Networks – programs modeled after the human brain, used in
machine learning, speech and image recognition.
• NLP – Natural Language Processing – getting machines to accept and
respond in a human response style, making it easier to interface with AI.
• Generative AI – AI that creates new content (text, images, audio, code)
from patterns learned in its training data.