LinkedIn is now using everyone's content to train their AI tool -- they just auto-opted everyone in.
I recommend opting out now (AND that orgs put an end to auto opt-in, it's not cool)
Opt-out steps: Settings and Privacy > Data Privacy > Data for Generative AI Improvement (OFF)
We shouldn't have to take a bunch of steps to undo a choice that a company made for all of us.
Orgs think they can get away with auto opt-in because "everyone does it".
If we come together and demand that orgs allow us to CHOOSE to opt in, things will hopefully change one day.
At least in my region (California in the US), this is the flow on mobile to opt out:
open the mobile app > tap your face in the upper left-hand corner > tap Settings > tap Data Privacy > tap Data for Generative AI Improvement > toggle off
LinkedIn seems to have auto-enrolled folks in the US, but I'm hearing from folks in the EU that they aren't seeing this listed in their settings at all (likely due to privacy regulations).
If you're outside of the US, I'm curious if you're seeing this?
Reports from folks who are also auto-opted in include locations like:
USA, Canada, India, UK, Australia, UAE, and more
Folks who aren't seeing themselves auto-opted in: those in countries protected by EU privacy laws
Lots of thoughts from my friends in the UK who also have to opt out and who don’t have the same protections as their EU buds now.
I think it's about time we have a standardized privacy law that protects us folks outside the EU from auto-opt-in nonsense like this (and way more).
Why does opting out of training generative AI models matter? How does it impact folks?
Generative AI tools build outputs based on the inputs they're trained on. AI tools have a hard time synthesizing brand new content, so allowing them to be trained on your original writing, photos, and videos means it's likely that elements of your writing, photos, or videos will be melted together with other people's content to build AI outputs.
In short, you may find your content “reused” or “rehashed” by AI, and *sometimes it plagiarizes writing, photos, and video in their entirety*.
Opting out of participating in AI training is a good idea for anyone who creates original content.
We now live in a world where AI deepfakes trick people daily.
Tonight we see @TaylorSwift calling out the use of an AI deepfake of her falsely endorsing a presidential candidate.
Let’s talk thru how to spot deepfakes in videos, audio (AI voice clones / calls), & social media…
First, let’s start with an experiment!
Do you think you can reliably spot fake AI images? Here’s an opportunity to see how well you can spot AI generated photos (some are easy and some are more challenging):
Let's start with how to spot AI-generated photos as of September 2024 (this will change in the future). Ask yourself:
- Shockingly unusual: Does the pic show celebrities, politicians, or cultures doing something surprising?
- Body parts and clothes a bit off: Are body parts merged together? In the background of the pic, are there people without faces? Are people wearing mismatched earrings, or do they have jewelry like earrings embedded in their skin (like in their forehead)?
- Airbrushed and Saturated: Does the picture look highly saturated, with airbrushing effects around the edges? Is it somehow lit from all sides at once? Are there more colors than you'd normally see in a typical photo?
- Looking for Pity: Is the photo an airbrushed picture of a child or soldier holding up a sign asking for support, money, wishes, or likes/follows? Does it have odd spelling errors?
Let’s talk about risks w/ Apple’s new camera button & Visual Intelligence AI tools + integrations -- the potential ability to learn a stranger’s identity by simply taking a picture.
Without big 3rd party integration guardrails, this new camera button + AI could invade privacy.
Yes, it’s exciting to be able to snap a pic and learn what you see with AI!
Within the Apple ecosystem, there are guardrails to prevent those AI tools from invading privacy. For example, if you upload a pic of a person to the integrated ChatGPT, it refuses to tell you who they are.
The camera control button is described above as “your gateway to 3rd party tools”.
If 3rd parties can build for the AI integration feature w/ the Camera Control button, and AI already exists to allow you to learn identity from a picture, will Apple prevent those integrations?
We are currently in one of the largest global IT outages in history.
Remember: verify people are who they say they are before taking sensitive actions.
Criminals will attempt to use this IT outage to pretend to be IT when talking to you, or to be you when talking to IT, to steal access, passwords, codes, etc.
How will this hit everyday folks at home?
Please tell your family and friends:
If you receive a call from “Microsoft Support” about a blue screen on your computer, do not give that person your passwords, money, etc. A criminal may ask for payment to “fix” the blue screen for you.
How will this outage impact social engineering risk at work?
- A criminal pretending to be “IT Support” or “Help Desk” may call you and ask you to give out your credentials and codes to “regain access”. Folks working remotely are at a high risk here.
- Work Help Desk / IT Support will need to be able to verify that an employee calling for help is actually that individual and not a criminal trying to take over access.
This AT&T breach will massively disrupt everyday folks, celebrities, politicians, activists…
The breach includes the numbers called/texted, the number of call/text interactions, and call lengths, and some people had cell site ID numbers leaked (which reveal the user's approximate location).
1. What can a criminal do with the data stolen: Social Engineering
The believability of social engineering attacks will increase for those affected because attackers know which phone numbers to spoof to you. Attackers can pretend to be a boss, friend, cousin, nephew, etc. and say they need money, a password, access, or data with a higher degree of confidence that their impersonation will be believable.
2. What can a criminal do with the data stolen: Threaten, Extort, Harm
This stolen data can reveal where someone lives, works, and spends their free time, and who they communicate with in secret, including affairs, crime-related communication, or typical private/sensitive conversations that require secrecy. This is a big deal for anyone affected.
For celebrities and politicians, this leaked information greatly affects their privacy, physical safety, sensitive work, and potentially even national security, because the criminals have a record of who is in contact with whom, when, and sometimes where.
The criminals could extort the people who are trying to keep that information (rightly) private, threaten their physical safety at the locations revealed in the metadata, or pretend to be the people they called and texted often and ask for money or sensitive details, with a higher likelihood of successfully tricking that victim.
For those experiencing abuse or harassment, the impact of this breach is terrifying for their physical security and beyond, as they need their communications with the people who can help them get out of their abusive situation to stay private.
Spoofing (changing caller ID) takes less than a minute and can be done using apps available on the App Store.
Here we see Mark Cuban talking about getting tricked through a phone scam where the attacker spoofed a Google number (Google assistant) and took over his Gmail account🧵
The scam is simple. Here's the breakdown for your family, friends, team, etc., with an example video at the bottom:
1. Attacker finds your phone number in a data breach or on a data brokerage site
2. Attacker sets up which phone number to display on your caller ID with a spoofing app from the App Store (cheap and simple)
3. Attacker places a call to the victim and pretends they're with Customer Support (in this case, recovery support at Google), which displays a "Google" number on the victim's caller ID
4. Attacker says there has been an incident on your account and to follow the steps with them to recover access
5. Victim gives the attacker details like a password, MFA code, or account recovery details to "protect the account from compromise" (in reality, this is the attack itself, of course)
6. Attacker takes over the account and can now do anything the victim used to be able to do on the account (email in threads and attack others, request fraudulent wire transfers, steal all data, etc.)
7. Typically the victim struggles to regain access to their account and the attacker hits many on their contact list
8. Because Mark Cuban is who he is, he was able to regain access with special support that most others would not receive.
Example spoofing phone call attack video below:
Next, let's discuss how to prevent yourself or others from falling for this attack.
How to help your team, family/friends, and self recognize this attack and avoid falling for it in the moment:
- Ensure everyone understands that caller ID is easily spoofed (changed to display any number, company, or person). It takes less than a minute and a dollar to set up using apps from the App Store.
- If your caller ID says Google, your bank's name, etc., recognize that spoofing is not just easy but likely. Hang up, especially if they say they are "Support".
- Criminals are mimicking real calls from real life; this is how they're successful. If the phone number displayed is your bank's, for example, call your bank using the number on the back of your bank card and let them know you received a call (they'll tell you if there's an issue on your account, but more likely than not, it's a social engineering scam).
- If anyone ever calls you as "Support" and tries to help you with "account recovery" or "to protect your account", remember: when you need help with something, you call for help; help doesn't preemptively call you.
- If anyone ever *calls you* and requests a password, code, or PIN, that is a social engineering attack. Hang up.
Here are more details for your team/loved ones on how to spot a scam in action:
I need to explain how AI text-to-sound-effect tools can be used by criminals to make their kidnapping or bail-related scams believable.
In the wild, we’re seeing “female scream” or “young boy screaming specific name” used in AI voice cloning phone attacks with phone number spoofing 🧵
We're already seeing these generic scream-related sound effects used in criminals' pretexts to convince the call receiver that their loved one is truly in trouble and to send money without question.
As AI evolves, we’ll see more and more believable and specific sounds in use.
For instance, I want to make it clear that we'll see AI text-to-sound-effect tools leveraged to play "car crash sound effects", "courthouse sound effects", or "young girl screaming specific name and crying hysterically" in the background of scam calls with increasing believability.