I give it 3 months until Sora 2 is used to generate a video of a well known executive saying horrible things to tank a company’s stock.
We’re going to see stock market impact from believable AI video at a scale we’ve never witnessed.
Prep your team, family and friends.
If everyday folks don’t understand that this level of believable AI generated video and audio content is currently possible, then they could fall for it.
If they know it’s possible and know to verify authenticity, we have a possibility to keep folks safe.
If you haven’t sat down with your family, shown them Sora 2, and explained that they are likely to see realistic scary videos of cities, politicians, fights, aggressive behavior, etc., now is the time to have that chat so they become skeptical about videos on social media.
I'll make a thread of videos to show your parents so they can learn just how skeptical they need to be of AI content on Facebook, Reels, TikTok, etc in the coming weeks and months:
Another example: imagine a parent gets a text from their son saying they're in trouble & need money for bail (a common scam). When the parent questions it, they get texted this video.
This is too easy for scammers (& it was too easy for me to bypass @OpenAI screen recording guardrail)
Now you can use the ChatGPT Agent to:
- download malware instead of that free software you were looking for online
- accidentally leak your emails to the public
- inadvertently share your private photos to social media
- book a nonrefundable $10k first class flight to Europe
In addition, your ChatGPT Agent can also:
- Reply weirdly to your family, colleagues, & friends in messages, confusing them deeply
- Misunderstand an important opportunity that comes in via email and turn it down
- Negatively impact M&A with strange emails found in discovery
What advice do I have about granting AI Agents access to your machine, email, calendar, contacts, messages, etc?
I would say that unless you're extremely technically sophisticated AND working on a segmented machine without personal and professional data available to the AI Agent, this is not a tool for you right now.
Let experts work out the integration issues and build in safeguards before you cause a data breach, leak your sensitive photos, post client personal data, or worse.
My favorite technique in my ethical hacking is phone call based attacks with impersonation. Why? Because it has the highest success rate. This is what we're seeing in the wild right now, too.
Let's talk about how phone call attackers think and how to catch Scattered Spider style attacks for insurance companies, which are heavily targeted right now (Aflac recently):

1. *Impersonating IT and HelpDesk for passwords and codes*
They pretend to be IT and HelpDesk over phone calls and text messages to ask for passwords and MFA codes, or harvest credentials via a link.

2. *Remote Access Tools as HelpDesk*
They convince teammates to run business remote access tools while pretending to be IT/HelpDesk.

3. *MFA Fatigue*
They send many repeated MFA prompt notifications until the employee presses Accept (a detection sketch follows this list).

4. *SIM Swap*
They call the telco pretending to be your employee to take over their phone number and intercept codes for two-factor authentication.
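On the MFA Fatigue pattern above: if your identity provider lets you export push-prompt events, a simple rate check can surface it. This is a minimal sketch, not any vendor's API; the event shape, window, and threshold are assumptions you'd tune to your own logs.

```python
# Minimal sketch: flag possible MFA fatigue (repeated push prompts) per user.
# The 10-minute window and 5-prompt threshold are assumptions; adapt them to
# whatever your identity provider's push/prompt logs actually contain.
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)
THRESHOLD = 5

recent_prompts = defaultdict(deque)  # user -> timestamps of recent MFA pushes

def record_mfa_prompt(user: str, ts: datetime) -> bool:
    """Record an MFA push prompt; return True if it looks like MFA fatigue."""
    q = recent_prompts[user]
    q.append(ts)
    while q and ts - q[0] > WINDOW:  # drop prompts that fell outside the window
        q.popleft()
    return len(q) >= THRESHOLD

# Example: the fifth prompt within a few minutes should trigger an alert.
now = datetime.now()
for i in range(6):
    if record_mfa_prompt("jane.doe", now + timedelta(minutes=i)):
        print("Possible MFA fatigue against jane.doe - pause and verify via callback")
```

If an alert like this fires, pair it with a human verification step (such as the callback protocol described further down) rather than assuming the prompts are legitimate.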
Let's talk about the types of websites they register and how to train your team about them and block access to them.
Scattered Spider usually attempts to impersonate your HelpDesk or IT so they're going to use a believable looking website to trick folks.
Oftentimes they register domains like this:
victimcompanyname-sso[.]com
victimcompanyname-servicedesk[.]com
victimcompanyname-okta[.]com
Train your team to spot those specific attacker controlled look-alike domains and block them on your network.
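If you want to automate part of that training and blocking, a pattern check like the one below can flag newly observed domains that combine your company name with an IT/SSO-style suffix. It's a minimal sketch: the company name and suffix list are assumptions to tune for your brand, and it's a seed for a blocklist, not a full look-alike or typosquat detector.

```python
# Minimal sketch: flag domains that pair your company name with an IT/SSO
# suffix, the pattern Scattered Spider style attackers tend to register.
# COMPANY and SUSPICIOUS_TOKENS are assumptions - tune them for your org.
import re

COMPANY = "victimcompanyname"
SUSPICIOUS_TOKENS = ("sso", "servicedesk", "helpdesk", "okta", "mfa", "vpn")

def looks_like_helpdesk_impersonation(domain: str) -> bool:
    """True if the domain matches the company-name + IT-suffix pattern above."""
    domain = domain.lower().rstrip(".")
    pattern = rf"^{re.escape(COMPANY)}-(?:{'|'.join(SUSPICIOUS_TOKENS)})\."
    return re.match(pattern, domain) is not None

for d in ("victimcompanyname-sso.com",
          "victimcompanyname-servicedesk.com",
          "victimcompanyname-okta.com",
          "mail.google.com"):
    print(d, looks_like_helpdesk_impersonation(d))
```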
What mitigation steps can you take to help your team spot and shut down these hacking attempts? Especially if you work in Retail or Insurance and are heavily targeted right now, focus on:
Human-based protocols:
- Start a Be Politely Paranoid Protocol: set up a new protocol with your team to verify identity using another method of communication before taking action. For example, if someone gets a call from IT/HelpDesk telling them to download a remote access tool, they should verify authenticity through another channel first, like chat, email, or calling back a trusted number (which thwarts spoofing), before taking action. More than likely it's an attacker.
- Educate on the exact types of attacks that are popular right now in the wild (this above thread covers them).
Technical tool implementation:
- Set up application controls to prevent installation and execution of unauthorized remote access tools (a detection sketch follows this list). If the remote access tools don't work during the attack, it makes the criminal's job harder and they may move on to another target.
- Set up MFA that is harder to phish, such as FIDO solutions (YubiKey, etc). In the meantime, educate your team that IT/HelpDesk will never ask for passwords or MFA codes.
- Set up a password manager and require long, random, unique passwords for each account, generated and stored in the password manager (with MFA turned on for the manager itself).
- Require MFA for all accounts, work and personal. Move folks with admin access to a FIDO MFA solution first, then move the rest of the team over to FIDO MFA.
- Keep devices and browsers up to date.
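On the application-control bullet above: real enforcement belongs in OS-native allowlisting (for example Windows AppLocker/WDAC), but as an illustration, here is a minimal monitoring sketch that alerts when a commonly abused remote access tool is running without being on your approved list. The process names and approved set are assumptions; it requires the psutil package.

```python
# Minimal sketch: alert when a remote access tool that is not approved for your
# environment is running. Illustrative only - the tool names and APPROVED set
# are assumptions; enforcement should live in OS application control.
import psutil

REMOTE_ACCESS_TOOLS = {"anydesk.exe", "teamviewer.exe", "screenconnect.exe",
                       "atera.exe", "splashtop.exe", "rustdesk.exe"}
APPROVED = {"teamviewer.exe"}  # whatever your IT department actually sanctions

def unauthorized_remote_access_processes():
    """Yield names of running remote access tools that are not approved."""
    for proc in psutil.process_iter(attrs=["name"]):
        name = (proc.info.get("name") or "").lower()
        if name in REMOTE_ACCESS_TOOLS and name not in APPROVED:
            yield name

for name in unauthorized_remote_access_processes():
    print(f"ALERT: unauthorized remote access tool running: {name}")
```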
If a user’s expectations about how a tool functions don’t match reality, you’ve got yourself a huge user experience and security problem.
Humans have built a schema around AI chat bots and do not expect their AI chat bot prompts to show up in a social media style Discover feed — it’s not how other tools function.
Because of this, users are inadvertently posting sensitive info to a public feed with their identity linked, including prompts with:
- exact medical issues
- federal crimes committed
- tax evasion
- home address
- interest in extramarital affairs
- sensitive court details
- private photos of unclothed children
- audio asking personal questions
- private upcoming travel plans
- questions about the legality of actions
- challenges in personal relationships
- feeling shame with disabilities
What do I recommend as next steps for Meta and other orgs considering a public AI chat bot prompt feed?

1. Pause the public Discover feed. Your users clearly don’t understand that their AI chat bot prompts have been made public.

2. Ensure all AI chat bot prompts are private by default. This goes for all future AI chat bots as well. Don’t wait for users to accidentally post their secrets publicly. Notice that humans interact with AI chat bots with an expectation of privacy, and meet them where they are.

3. Alert users who have posted their prompts publicly that their prompts have been removed from the feed to protect their privacy.
If I’m able to watch users inadvertently admitting to federal crimes and posting unclothed pictures of their children to the Meta AI Discover Prompt feed, they clearly don’t understand how it works!
Meta: Pause the product, bake in clear strong privacy, and help users fix their accidental prompt posts.
It’s time to make it right.
Yes and also, interestingly, this tool takes all of the most common steps used to hack people & companies — from OSINT (open source intelligence) via social media, to target selection, to pretext development, to contact + phishing — and automates them completely for attackers.
Imagine an attacker uses this tool not for its intended purpose (finding a competitor's users who are complaining), but to seek out people venting their anger about the company that employs them, targets those individuals in an automated fashion, and delivers a believable and potent phishing lure via automated message.
Sure, this tool can be used for marketing but it can also be used for highly targeted phishing campaigns, and phishing is how many hacks begin.
You may be thinking, but Rachel, don’t tell these criminals what to do! You must remember that cyber criminals are smart — this is their job and they are good at it. They don’t need me telling them how to think about hacking with an AI tool, they are experts themselves.
I just live hacked @ArleneDickinson (Dragons' Den star - Canada's Shark Tank) by using her breached passwords, social media posts, an AI voice clone, & *just 1 picture* for a deepfake live video call.
Thank you @ElevateTechCA @Mastercard for asking me to demo these attacks live!
What are the takeaways from this Live Hack with Arlene?

1. Stop reusing passwords - when you reuse your password and it shows up in a data breach, I can then use that password against you everywhere it's reused online and simply log in as you, stealing money, access, data, etc. (a breach-check sketch follows these takeaways).
More takeaways from this video with Arlene:

2. Turn on multi-factor authentication (MFA) - turning on this second step when you log in makes it more obnoxious for me to take over your accounts. I then have to try to steal your MFA codes from you (or, if you use a FIDO MFA solution like a YubiKey, I'm likely just plain out of luck and have to move on to another target)!
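To make the password-reuse takeaway actionable, you can check whether a password already appears in breach data without sending the password itself anywhere, using the Pwned Passwords k-anonymity range API: only the first five characters of the SHA-1 hash leave your machine. A minimal sketch (requires the requests package; treat it as an illustration, not a production control):

```python
# Minimal sketch: count how many times a password appears in known breaches
# via the Pwned Passwords k-anonymity API. Only the 5-char SHA-1 prefix is sent.
import hashlib
import requests

def times_pwned(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

# Example: a heavily reused password shows up millions of times.
print(times_pwned("password123"))
```

If a password you reuse comes back with a nonzero count, assume attackers already have it and rotate it everywhere using your password manager.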
LinkedIn is now using everyone's content to train their AI tool -- they just auto opted everyone in.
I recommend opting out now (AND that orgs put an end to auto opt-in, it's not cool)
Opt out steps: Settings and Privacy > Data Privacy > Data for Generative AI Improvement (OFF)
We shouldn't have to take a bunch of steps to undo a choice that a company made for all of us.
Orgs think they can get away with auto opt in because "everyone does it".
If we come together and demand that orgs allow us to CHOOSE to opt in, things will hopefully change one day.
At least in my region (California in the US), this is the flow on mobile to opt out:
open mobile app > click face in upper left hand corner > click settings > click data privacy > click data for generative ai improvement > toggle off