Rachel Tobac
Feb 15
Yes and also, interestingly, this tool takes all of the most common steps used to hack people & companies — from OSINT (open source intelligence) via social media, to target selection, to pretext development, to contact + phishing — and automates them completely for attackers.
Imagine an attacker who, rather than using this tool for its intended purpose of finding a competitor’s complaining users, uses it to seek out people venting their anger about the company that employs them, targets those individuals in an automated fashion, and delivers a believable, potent phishing lure via automated message.
Sure, this tool can be used for marketing, but it can also be used for highly targeted phishing campaigns, and phishing is how many hacks begin.
You may be thinking: but Rachel, don’t tell these criminals what to do! You must remember that cyber criminals are smart — this is their job and they are good at it. They don’t need me telling them how to think about hacking with an AI tool; they are experts themselves.
So what can we do to stay safe?!
- For companies building AI tools that automate tasks that can be used for phishing: work with ethical hackers (you can find them at @defcon), make sure you understand how your tool can be used for nefarious purposes, and learn how to spot and shut down that abuse programmatically on your end (a rough sketch of what that could look like follows this list).
- For defenders at orgs whose employees can be targeted with tools like this: educate your team on how to spot and shut down these lures, not just via email but also on social media, in DMs, on calls, etc., AND require MFA that matches their threat model plus a password manager to limit the impact if your users fall for an AI-automated targeted phishing campaign.
- For everyday folks and their families: tell your family and friends that these AI-automated targeted phishing campaigns will likely increase via social media comments and DMs, in addition to the ever-present hacking attempts via email, calls, texts, etc. Help them imagine their threat model, show them examples like this, and help them turn on MFA and use a password manager or passkeys to limit attack impact.
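For the tool builders in the first bullet, here is a minimal sketch of what that programmatic detection could look like, assuming a hypothetical outbound-message hook in the product; the keyword patterns and the review-queue behavior are illustrative placeholders, not any real vendor’s API:

```python
import re

# Illustrative only: phrases that suggest an automated outreach message is a
# credential-phishing lure rather than ordinary marketing outreach.
SUSPICIOUS_PATTERNS = [
    r"verify your (password|credentials|account)",
    r"(reset|confirm) your password",
    r"enter your (mfa|2fa|one[- ]?time) code",
    r"urgent.*(suspended|locked|terminated)",
]

URL_PATTERN = re.compile(r"https?://\S+", re.IGNORECASE)


def looks_like_phishing_lure(message: str) -> bool:
    """Rough heuristic: credential/urgency bait plus a link in one message."""
    has_link = bool(URL_PATTERN.search(message))
    has_bait = any(re.search(p, message, re.IGNORECASE) for p in SUSPICIOUS_PATTERNS)
    return has_link and has_bait


def review_outbound(message: str, target_handle: str) -> bool:
    """Return True to allow sending, False to hold the message for human review."""
    if looks_like_phishing_lure(message):
        # In a real product this would land in an abuse / trust-and-safety queue.
        print(f"HELD for review: automated message to {target_handle}")
        return False
    return True


if __name__ == "__main__":
    lure = ("Saw your post about work. Urgent: your account will be suspended, "
            "verify your credentials here: https://example.com/login")
    print(review_outbound(lure, "@frustrated_employee"))  # False, held for review
```

Keyword rules alone will not catch a capable attacker; a real implementation would pair them with rate limits on automated outreach, auditing of target selection, and human review.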

More from @RachelTobac

Oct 16, 2024
I just live hacked @ArleneDickinson (Dragons' Den star - Canada's Shark Tank) by using her breached passwords, social media posts, an AI voice clone, & *just 1 picture* for a deepfake live video call.
Thank you @ElevateTechCA @Mastercard for asking me to demo these attacks live!
What are the takeaways from this Live Hack with Arlene?
1. Stop reusing passwords - when you reuse your password and it shows up in a data breach, I can then use that password against you everywhere it's reused online and simply log in as you, stealing money, access, data, etc. (a quick way to check whether a password has already been breached is sketched after these takeaways).
2. Turn on multi-factor authentication (MFA) - turning on this second step when you log in makes it more obnoxious for me to take over your accounts. I then have to try to steal your MFA codes from you (or, if you use a FIDO MFA solution like a YubiKey, I'm likely just plain out of luck and have to move on to another target)!
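As a concrete illustration of takeaway 1, here is a small sketch, assuming Python with the requests library, of how you could check whether a password has already appeared in known breaches using the public Pwned Passwords range API; only the first five characters of the SHA-1 hash ever leave your machine:

```python
import hashlib

import requests


def times_seen_in_breaches(password: str) -> int:
    """k-anonymity lookup against the Pwned Passwords range API."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    # Each response line looks like "HASH_SUFFIX:COUNT".
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0


if __name__ == "__main__":
    hits = times_seen_in_breaches("Winter2024!")
    if hits:
        print(f"Seen {hits} times in known breaches: do not reuse this anywhere.")
    else:
        print("Not in the known-breach corpus, but a unique password still matters.")
```

A password manager or passkeys make this whole problem disappear, since every login gets a credential that was never reused in the first place.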
Sep 18, 2024
LinkedIn is now using everyone's content to train their AI tool -- they just auto-opted everyone in.
I recommend opting out now (AND that orgs put an end to auto opt-in, it's not cool).
Opt-out steps: Settings and Privacy > Data Privacy > Data for Generative AI Improvement (OFF)
We shouldn't have to take a bunch of steps to undo a choice that a company made for all of us.
Orgs think they can get away with auto opt-in because "everyone does it".
If we come together and demand that orgs allow us to CHOOSE to opt in, things will hopefully change one day.
At least in my region (California in the US), this is the flow on mobile to opt out:
open the mobile app > tap your face in the upper left-hand corner > tap Settings > tap Data Privacy > tap Data for Generative AI Improvement > toggle off
Sep 11, 2024
We now live in a world where AI deepfakes trick people daily.
Tonight we see @TaylorSwift calling out the use of an AI deepfake of her falsely endorsing a presidential candidate.
Let’s talk thru how to spot deepfakes in videos, audio (AI voice clones / calls), & social media…
First, let’s start with an experiment!
Do you think you can reliably spot fake AI images? Here’s an opportunity to see how well you can spot AI generated photos (some are easy and some are more challenging): detectfakes.kellogg.northwestern.edu

Next we’ll talk about tips to spot deepfakes.
Let’s start with how to spot AI generated photos in September 2024 (this will change in the future). Ask yourself:
- Shockingly unusual: Is the pic showing surprising actions for celebrities, politicians, or cultures?
- Body parts and clothes a bit off: Are body parts merged together? In the background of the pic, are there people without faces? Are the people wearing mismatched earrings or have jewelry like earrings embedded in skin (like in their forehead)?
- Airbrushed and Saturated: Does the picture look highly saturated, with airbrushing effects around edges? Is it somehow lit from all sides at once? Are there more colors than exist normally in a typical photo?
- Looking for Pity: Is the photo an airbrushed picture of a child or soldier holding up a sign asking for support, money, wishes, or likes/follows? Does it have incorrect spelling in odd ways?
Sep 10, 2024
Let’s talk about risks w/ Apple’s new camera button & Visual Intelligence AI tools + integrations -- the potential ability to learn a stranger’s identity by simply taking a picture.
Without big 3rd party integration guardrails, this new camera button + AI could invade privacy.
Yes, it’s exciting to be able to snap a pic and learn what you see with AI!
Within the Apple ecosystem, there are guardrails to prevent those AI tools from invading privacy. For example, if you upload a pic of a person to the integrated ChatGPT, it refuses to tell you who it is.
The camera control button is described above as “your gateway to 3rd party tools”.
If 3rd parties can build for the AI integration feature w/ the Camera Control button, and AI already exists to allow you to learn identity from a picture, will Apple prevent those integrations?
Jul 19, 2024
We are currently in one of the largest global IT outages in history.
Remember: verify people are who they say they are before taking sensitive actions.
Criminals will attempt to use this IT outage to pretend to be IT to you, or you to IT, to steal access, passwords, codes, etc. [Image: a Windows user on the phone with support, trying to regain access to their computer during the global IT outage caused by a CrowdStrike software update issue.]
How will this hit everyday folks at home?
Please tell your family and friends:
If you receive a call from “Microsoft Support” about a blue screen on your computer, do not give that person your passwords, money, etc. A criminal may ask for payment to “fix” the blue screen for you.
How will this outage impact social engineering risk at work?
- A criminal pretending to be “IT Support” or “Help Desk” may call you and ask you to give out your credentials and codes to “regain access”. Folks working remotely are at high risk here.
- Work Help Desk / IT Support will need to be able to verify that an employee calling for help is actually that individual and not a criminal trying to take over access.
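One possible approach to that Help Desk verification, sketched here with the pyotp library and a made-up enrollment store (a real org would query its identity provider instead): when the employee initiates the call, ask them to read the current code from the authenticator app they enrolled at onboarding, and never reset credentials based on voice familiarity alone.

```python
import pyotp

# Hypothetical stand-in for the org's MFA enrollment records; a real Help Desk
# tool would look this up in the identity provider, not a hard-coded dict.
EMPLOYEE_TOTP_SECRETS = {
    "jsmith": "JBSWY3DPEHPK3PXP",  # base32 secret registered at enrollment
}


def verify_caller(username: str, code_read_over_phone: str) -> bool:
    """Verify an inbound caller via the current TOTP code from their enrolled app."""
    secret = EMPLOYEE_TOTP_SECRETS.get(username)
    if secret is None:
        return False
    # valid_window=1 tolerates one 30-second step of clock drift.
    return pyotp.TOTP(secret).verify(code_read_over_phone, valid_window=1)


if __name__ == "__main__":
    current_code = pyotp.TOTP(EMPLOYEE_TOTP_SECRETS["jsmith"]).now()
    print(verify_caller("jsmith", current_code))   # True: proceed with the request
    print(verify_caller("jsmith", "000000"))       # False: do not reset anything
```

The exact mechanism matters less than the rule: verification happens against something the employee enrolled in advance, not against details a caller can simply recite.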
Jul 12, 2024
This AT&T breach will massively disrupt everyday folks, celebrities, politicians, activists…
The breach includes the numbers called/texted, the number of call/text interactions, and call length, and some people had cell site ID numbers leaked (which reveal the approximate location of the user).
1. What can a criminal do with the data stolen: Social Engineering

The believability of social engineering attacks will increase for those affected because attackers know which phone numbers to spoof to you. Attackers can pretend to be a boss, friend, cousin, nephew, etc. and say they need money, a password, access, or data, with a higher degree of confidence that their impersonation will be believable.
2. What can a criminal do with the data stolen: Threaten, Extort, Harm

This stolen data can reveal where someone lives, works, and spends their free time, and who they communicate with in secret, including affairs, crime-related communication, or typical private/sensitive conversations that require secrecy. This is a big deal for anyone affected.

For celebrities and politicians, this information getting leaked greatly affects their privacy, physical safety, sensitive work, and potentially even national security, because the criminals have a record of who is in contact with whom, when, and sometimes where.

The criminals could extort the people who are trying to keep that information (rightly) private, they could threaten their physical safety at the locations revealed in the metadata, and they could pretend to be the people those victims called and texted often and ask for money or sensitive details, increasing the likelihood of successfully tricking the victim.

For those experiencing abuse or harassment, the impact of this breach is terrifying for their physical security and beyond, since they need their communications with the people who can help them get out of their abusive situation to stay private.
