KnowBe4, AI The Dark Side Slides


The Dark Side

of AI
Unmasking Its Threats and Navigating the Shadows of Cybersecurity in the Digital Age
Looking Ahead / Takeaways

• AI is an incredible tool available to all, but like any tool there are many ways it can be used maliciously
• How can we protect & defend against AI?
• What can we do to educate our users to use AI safely, securely and effectively?
James R. McQuiggan, CISSP, SACP
Security Awareness Advocate
• Security Awareness Advocate, KnowBe4 Inc.
• Producer, Security Masterminds Podcast
• Professor, Valencia College
• President, (ISC)2 Central Florida Chapter
• ISC2 North American Advisory Council
• Former Cyber Security Awareness Lead, Siemens Energy
• Former Product Security Officer, Siemens Gamesa
About KnowBe4
Over 60,000 Customers
• The world's largest integrated Security Awareness Training and Simulated Phishing platform
• We help tens of thousands of organizations manage the ongoing problem of social engineering
• CEO & employees are industry veterans in IT Security
• Global Sales, Courseware Development, Customer Success, and Technical Support teams worldwide
• Offices in the USA, UK, Netherlands, India, Germany, South Africa, United Arab Emirates, Singapore, Japan, Australia, and Brazil
Industries served: Construction, Financial Services, Insurance, Energy & Utilities, Consulting, Consumer Services, Technology, Retail & Wholesale, Business Services, Education, Not for Profit, Government, Healthcare & Pharmaceuticals, Banking, Manufacturing, Other
Our mission
To help organizations manage the
ongoing problem of social engineering

We do this by
Enabling employees to make smarter security decisions every day
Current State

AI Over the Years
"What we want is a machine that can learn from experience, and that the possibility of letting the machine alter its own instructions provides the mechanism for this."
– Alan Turing, 1947
• 1956 – The term "AI" is coined at the Dartmouth Conference
• 1960s – Early development
• 1970s/80s – AI Winter
• 1997 – IBM's Deep Blue
• 2011 – Watson
• 2016/2017 – Tay / AlphaGo
• 11/2022 – OpenAI releases ChatGPT
A Lot Has Been Happening in AI
• February – KnowBe4 Webinar
• ChatGPT was 3 months old
• Bard just came out
• People looking to learn
• AI is showing up everywhere
• Developing strategies, policies
• Webinars, Online courses
Generative AI
“Generative AI will be the most disruptive
technological innovation since the advent
of the personal computer and the inception
of the Internet with the potential to create
10s of millions of new jobs, permanently
alter the way we work, fuel the creator
economy, and displace or augment 100s of
millions of workers in roles from computer
programmers to computer graphics artists,
photographers, video editors, digital
marketers, journalists and anyone that
creates content.”
- Matt White, generative AI researcher
Used in Sports – Commentary & Deepfakes
The Not So Good Side of AI
Concerns with AI
• 77% of users are concerned AI will take their job in the next year
• Estimates suggest 97 million jobs will be created but 400 million will be displaced (World Economic Forum)
• CISOs' concerns with ChatGPT:
• All generative AI – there's more out there besides ChatGPT
• Exposure of sensitive information – remember, Gen AI platforms own the data
• Concerns that cybercriminals are already using it
• AI hallucinations and wrong data may inadvertently influence decision making in an organization
Why do Java programmers wear glasses?
Because they can't C#.
Risks with AI

AI /
Generative AI
Risks
• Data poisoning
• Model theft
• Evasion attacks
• Model inversion
• Model collisions
• Privacy violations
• Vulnerable deployments
• AI Hallucinations
AI Hallucinations
AI Hallucinations Prompt Examples
• Out-of-distribution prompts:
• "This message summarizes the plot of the movie..." for a movie that doesn't exist. The model may
generate a completely fictitious plot.
• "The capital city of Canada is..." where the true answer is Ottawa. The model may hallucinate an
incorrect answer.
• Confusing combinations:
• "How many legs does a fish have?" Fish do not have legs, so any numerical answer is a hallucination.
• "What color is a pineapple?" Pineapples are yellow, so other color responses demonstrate
hallucination.
• Impossible scenarios:
• "Describe the conversation between Socrates and Marie Curie." These individuals lived centuries
apart.
• "Explain how the Pyramids were built using medieval machinery." Anachronistic combinations will
produce fantasy explanations.
• Inconsistent personality:
• Flipping between prompts portraying the same character in drastically different ways, like introverted
in one prompt and extraverted in another.
• Describing impossible personality traits, like "an honest politician who never lies."
• Strange patterns:
• Repeating the same prompt may generate widely divergent responses, indicating an unstable model.
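The last pattern — the same prompt yielding widely divergent answers — lends itself to a simple automated probe. A minimal sketch (the hard-coded responses stand in for output from a real LLM API, which is an assumption; in practice they would be collected by calling the model several times at nonzero temperature):

```python
import difflib

def response_similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1] between two responses (case-insensitive)."""
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

def consistency_score(responses: list[str]) -> float:
    """Average pairwise similarity across all responses to one prompt.
    Low scores on a factual prompt suggest an unstable, hallucination-prone model."""
    pairs = [(a, b) for i, a in enumerate(responses) for b in responses[i + 1:]]
    if not pairs:
        return 1.0
    return sum(response_similarity(a, b) for a, b in pairs) / len(pairs)

# Hypothetical responses collected by repeating the same prompt three times.
responses = [
    "The capital city of Canada is Ottawa.",
    "The capital city of Canada is Ottawa.",
    "The capital city of Canada is Toronto.",  # divergent answer
]
print(round(consistency_score(responses), 2))
```

A low average similarity is a red flag, not proof of hallucination — paraphrases of the same correct fact also reduce the score, so thresholds need tuning per prompt type.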
Biases – ChatGPT Response
• One area where ChatGPT could improve is
in terms of bias and fairness.
• AI models can be influenced by the biases
present in the training data they were
trained on, leading to biased or unfair
results.
• For example, ChatGPT might generate
biased or stereotypical responses if the
training data it was trained on contains
such biases.
• OpenAI is actively working on improving
the fairness and bias of its models, and this
is an ongoing area of research and
development in the AI community.
• By reducing bias and increasing fairness,
ChatGPT can become a more reliable and
trustworthy tool for a wider range of
applications.
Trust & Verify
• A college professor checked student papers with ChatGPT, and it was wrong
• Information keeps coming forward about where ChatGPT has been incorrect
• False information caused an organization to lose money through the stock market; the chairman resigned
• "We expect over time as adoption and democratization of AI models continues, these trends will increase," says a senior FBI official.
May I JOIN You?
AI Attack Vectors
• Social Engineering
• Malicious AI
• Deepfakes
• Jailbreaking
• Prompt Injection
• Data Poisoning
Social Engineering
• Deepfakes, data mining for attacks, creating phishing emails with jailbreaking tactics
• Darktrace reported a 135% increase in phishing emails between January and March 2023
Phishing Emails with ChatGPT

ChatGPT Phishing Template Generator
Malicious ChatBots

• Phishing Campaigns
• Use chatbots to convince
users
• Send malicious links to
download malware
• Steal credentials
• Spread disinformation
• Don’t forget about Tay!
Malicious AI - WormGPT
• Based on GPT-J (an earlier model)
• Safeguards removed
• $75 / month or $750 / year
• Trained on malware-creation data; pitched as an unrestricted rival to ChatGPT – wormai.ai

WormGPT Is Having Its Own Issues with Theft
Malicious AI - FraudGPT
• Another LLM tool available for leasing
• $200 / month or $1,700 / year

Tip of the Iceberg
Polymorphic Malware
• Ability to change its code
• Alters with each iteration
• Mutates itself during each replication
• Works to evade antivirus
Deep Fakes – Puppetry … MoCap
Face reenactment is an emerging conditional face synthesis task that aims at fulfilling two goals simultaneously:
1. transfer a source face shape to a target face; while
2. preserve the appearance and the identity of the target face.
Abuse scenarios: Discord / Twitter, CEO Fraud, Customer Service, Business Email Compromise (BEC), Spreading Misinformation, Discrediting People
Voice Deepfake Scams
• $2.6 billion in losses in 2022
• $5,500 – average loss due to "Hi Mom" texts
• 77% of victims lose money
• 33% lose more than $1,000; 11% lose $5,000 – $15,000
• 53% of US citizens share their voice online
• 32% in the US were scammed or know someone who was
• 65% found it difficult or impossible to tell real from fake
• Most common attack scenarios: car issues, accident, theft, lost wallet or phone, "need help"
• Defenses: awareness & staying calm; codewords & questioning
Source: McAfee AI Report (May 2023)
Deepfakes Detection Challenges
• Non-real time
• Not foolproof
• No standard detection method yet
• Generation tech advances outpace detection tech
• False positives are plentiful
• Still requires manual labor
(Slide chart: reported detector accuracies range from 81.6% to 96.7%.)
Source: https://arxiv.org/pdf/2301.05819.pdf
Generative AI – ChatGPT, Bard & Claude
How each model describes its own purpose:
• ChatGPT (OpenAI): "To process & generate human-like text based on the input I receive."
• Bard (Google): "I am a large language model, also known as a conversational AI or chatbot trained to be informative and comprehensive."
• Claude (Anthropic): "Provide useful information to users, answer questions, and have discussions in a sensible way."
Jailbreaking
Prompt Injection
Data Poisoning
AI Hallucinations vs. Data Poisoning
• AI Hallucinations:
• Occur unintentionally due to flaws in the AI model architecture or training process.
• Manifest as outputs that are completely fabricated or nonsensical given the actual input.
• Result from the model failing to properly represent real-world distributions.
• Are often exposed through out-of-distribution testing with unfamiliar inputs.
• Reflect underlying model biases rather than adversarial manipulation.
• Data Poisoning:
• Involves intentional manipulation of the training data by attackers.
• Causes models to produce attacker-desired outputs on specific targeted inputs.
• Manipulations are designed to be stealthy, not obvious hallucinations.
• Attackers have specific motives, such as financial crime, political influence etc.
• May leverage insider access or vulnerabilities to inject poisoned data.
• Poisoned data leads to predictable model behavior on attacker-chosen inputs.
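The contrast above can be made concrete with a toy model. The sketch below — entirely synthetic numbers and a deliberately simple nearest-centroid classifier, chosen for illustration only — shows how a few attacker-injected, mislabeled training samples flip the model's decision on a targeted input:

```python
# Toy data-poisoning demo: flipping a few training labels shifts a
# nearest-centroid classifier's decision boundary on a targeted input.

def centroid(points: list[float]) -> float:
    """Mean of a list of 1-D feature values."""
    return sum(points) / len(points)

def classify(x: float, spam_centroid: float, ham_centroid: float) -> str:
    """Assign x to whichever class centroid is nearer."""
    return "spam" if abs(x - spam_centroid) < abs(x - ham_centroid) else "ham"

# Clean training data: feature = count of suspicious tokens per email.
spam = [8.0, 9.0, 10.0]
ham = [1.0, 2.0, 3.0]
print(classify(6.0, centroid(spam), centroid(ham)))  # → spam

# Attacker poisons the training set: spammy samples mislabeled as ham.
poisoned_ham = ham + [9.0, 10.0]  # injected, attacker-chosen
print(classify(6.0, centroid(spam), centroid(poisoned_ham)))  # → ham
```

The poisoned outputs are predictable and targeted — exactly the property that distinguishes poisoning from the random, unintentional failures of hallucination.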
Defending and
Protecting AI

World Government Regulations For Artificial Intelligence
• European Union: Proposed the Artificial Intelligence Act to regulate high-risk AI systems. Key requirements include transparency, human oversight, robustness and accuracy.
• United States: Has taken a sector-specific approach so far. Has guidelines for federal use of AI, and regulations in areas like autonomous vehicles. Big tech companies are working with the administration on more regulation.
• China: Published national AI principles and governance frameworks. Strong focus on developing AI to drive economic growth, with state monitoring of data and algorithms.
• Canada: Published AI ethics principles and guidelines for government use. Taking an "ethical innovation" approach focused on responsible development and deployment.
• India: Is developing a national AI strategy. Has recommendations on data sharing, preventing bias, and boosting research. Aims to ensure AI benefits society and the economy.
• Singapore: Has published principles, made a voluntary governance framework, and identified priority application areas to focus AI ethics and safety efforts.
• Japan: Has published AI R&D guidelines focused on transparency, controllability, and privacy. Working on social principles to build public trust in AI.
Organizational Policies
• Cover Risk
• Governance
• Incident Response
• Enforcement
• Policy Review
• Opt-Outs
• Training
• Secure Development
• Data Processing
AI Within the SOC
• Automation
• Phishing
• Incident Response
• Event Reviews
• Threat Intelligence
• SOAR Platforms
• Predictive Analysis
Tactics to Fight Deepfakes
1. Train people to detect and recognize deepfakes
2. Stay alert and apply critical thinking
3. Use AI technology to detect anything too hard for a human to catch
4. Start phone conversations with a secret passphrase or password
Source: Adage.com
9 Things To Help Spot a Deepfake
1. Check For Variants Of Skin Tone
2. Do The Mouth, Teeth And Tongue Look Real?
3. Check If High-quality Versions Are Available
4. Do A Quick Google Search To Verify It’s Real
5. Slow Down Video To Check For Bad Transitions
6. Look Out For Unnatural Lip Sync, Robotic Movement Or Blinking, Etc.
7. Zoom In To See If The Skin Texture, Hair, Is True To Life
8. Compare The Facial Expression And Talking Style With Real
Videos
9. Look At Overall Facial Dimensions And Compare To Real Video
Strategies
• Implement strong security measures
• Regularly audit and test AI systems
• Transparency and accountability
• Develop and enforce ethical AI policies
• Foster a culture of cybersecurity
• Stay informed about AI advancements
Final Thoughts

3 Questions to Ask Your Email
1. Email: Is the email unexpected?
2. Sender: Is this person a stranger? #StrangerDanger
3. Action: Are they asking me to do something immediately or quickly? Does the action seem strange or unusual?
YES to any? Verify: attempt to use a second connection to verify the email.
Check for Rogue URLs
• Check your links!
• Look for transposed letters or substituted characters in website names:
• Micorsoft.com (transposed letters)
• G00GLE.com (similar-looking characters)
• Bankofarnerica.com (combined r n -> m)
• wikipediа.org vs wikipedia.org (homograph – Cyrillic "а")
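Both failure modes listed above — homograph substitution and transposed or swapped letters — can be screened for programmatically. A minimal sketch, assuming a small illustrative allow-list of known-good domains (the list and distance threshold are hypothetical, not from the slides):

```python
import unicodedata

def suspicious_characters(domain: str) -> list[str]:
    """Flag non-ASCII characters, e.g. the Cyrillic 'а' used in homograph attacks."""
    return [f"{ch!r} = {unicodedata.name(ch, 'UNKNOWN')}"
            for ch in domain if ord(ch) > 127]

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def near_misses(domain: str, known_good: list[str], max_distance: int = 2):
    """Domains close to — but not exactly matching — a known-good domain."""
    return [(good, d) for good in known_good
            if 0 < (d := edit_distance(domain, good)) <= max_distance]

print(suspicious_characters("wikipediа.org"))           # flags the Cyrillic 'а'
print(near_misses("micorsoft.com", ["microsoft.com"]))  # → [('microsoft.com', 2)]
```

An exact match to a known-good domain returns no flags, so this only surfaces the near-miss lookalikes that trick the eye.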
Product Suite to Manage Security and Compliance Issues
• Security Awareness Training Platform: Discover how you can enable your users to make smarter security decisions. See how you can use training and simulated phishing tests to manage the ongoing problem of social engineering.
• SecurityCoach: Discover how SecurityCoach enables real-time coaching of your users in response to risky security behavior based on alerts generated by your existing security stack.
• Compliance Plus: Find out how you can deliver engaging, relevant, and customizable content for your organization's compliance training requirements.
• PhishER: Learn how you can identify and respond to reported email threats faster. See how you can automate your email Incident Response security workstream.
Free Tools: Learn how you can identify potential vulnerabilities in your organization and stay on top of your defense-in-depth plan.
Generating Industry-Leading Results and ROI
• Reduced Malware and Ransomware Infections
• Reduced Data Loss
• Reduced Potential Cyber-theft
• Increased User Productivity
• Users Have Security Top of Mind

Case Study: 276% ROI with less than three months' payback*
*A commissioned study conducted by Forrester Consulting on behalf of KnowBe4: The Total Economic Impact™ of KnowBe4, April 2021.
Source: 2022 KnowBe4 Phishing by Industry Benchmarking Report. Note: The initial Phish-prone Percentage is calculated on the basis of all users evaluated; these users had not received any training with the KnowBe4 console prior to the evaluation. Subsequent time periods reflect Phish-prone Percentages for the subset of users who received training with the KnowBe4 console.
Identify and Respond to Email Threats Faster with PhishER
A Huge Time Saver for Your Incident Response Team
PhishER helps you efficiently manage:
• User Email Reporting – Phish Alert Button
• Threat Prioritization – PhishML
• Quarantine and Removal of Threats – PhishRIP
• Turn Active User-Reported Email Threats into Safe Simulations – PhishFlip
• Add User-Reported Email Threats to Improve Microsoft 365 Email Filters – PhishER Blocklist
One More GenAI Comparison
• You’re a comedian… tell me a cyber Dad joke.
Deepfakes & Dad Jokes
securitymasterminds.buzzsprout.com
The podcast that brings you the very best in all things cybersecurity, taking an in-depth look at the most pressing issues and trends across the industry.
For more information visit
blog.knowbe4.com

James R. McQuiggan, CISSP


[email protected]
Twitter: @james_mcquiggan
LinkedIn: jmcquiggan
Resources: Daily Newsletters
• TLDR AI – https://tldr.tech/ai
• The Rundown – https://therundown.ai
Other Resources
• AI Glossary - https://a16z.com/ai-glossary/
• Intro to AI - https://a16z.com/2023/05/25/ai-canon/
• GenAI & ChatGPT Risks - https://team8.vc/wp-content/uploads/2023/04/Team8-Generative-AI-and-ChatGPT-Enterprise-Risks.pdf
• Reducing AI Hallucinations - https://www.techrepublic.com/article/interview-moe-tanabian-data-generative-ai/
• AI Statistics - https://www.forbes.com/advisor/business/ai-statistics/
OWASP – TOP 10 for LLM
Terms to Know / Review
• LLM – Large Language Model – processes large amounts of data and is trained for accuracy and performance.
• Neural Networks – programs modeled after the human brain, used in machine learning, speech and image recognition.
• NLP – Natural Language Processing – getting machines to accept and respond in a human style, making it easier to interface with AI.
• Generative AI – models that generate new content (text, images, audio) from patterns learned in training data.
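To ground the "neural networks" entry above, here is about the smallest possible example: a single perceptron (one artificial neuron) trained on the logical AND function. The learning rate and epoch count are arbitrary illustrative choices, not from the slides:

```python
# A single artificial neuron (perceptron) — the building block of the
# neural networks described above — learning the logical AND function.

def step(x: float) -> int:
    """Threshold activation: fire (1) if the weighted input is non-negative."""
    return 1 if x >= 0 else 0

def train(samples, epochs: int = 20, lr: float = 0.1):
    """Classic perceptron learning rule: nudge weights in proportion to the error."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            err = target - step(w[0] * x1 + w[1] * x2 + b)
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(AND)
preds = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in AND]
print(preds)  # → [0, 0, 0, 1]
```

Modern LLMs differ mainly in scale — billions of such units with learned connections — but the core idea of adjusting weights to reduce error is the same.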
