A friend messaged me:

"What do you think about the security issues of typing social media passwords into an app from this company?"

socialmediacheck.com

Reply: "How much profanity am I allowed to use?"
I mean, this all sounds like a service dedicated to checking your own shit, right? Unless, like me, you are entirely shameless, that might be an issue for some:
Oh, wait, what?

"We only conduct a Social Media Check once the person to be checked (subject) has provided consent."

So who is the "your" in "Check your social media", then?
Let's ignore the fact that it's against Facebook's terms of service to give away your access credentials to commercial third parties, for entirely sensible "don't do this, it's a really bad idea" reasons.
facebook.com/legal/terms
Not to mention that passwords are the WORST way to achieve data sharing with third parties.
But even if they were doing it legitimately and using the PLATFORM API, they'd be forbidden "to make eligibility determinations about people, including for housing, employment, insurance, education opportunities, credit, …benefits, or immigration status"

developers.facebook.com/terms/
And guess what the results of these checks are being used for. I'm not saying, but you can guess.
> 100% private (i.e. no human intervention at any stage of Social Media Check)

Wanna bet that they will have a problem with any account that has enabled 2FA?

How would they address that? Not to mention CAPTCHAs, etc.?
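To make that concrete, here is a minimal sketch of where a scripted password login runs aground on 2FA or CAPTCHA; the endpoint and field names below are hypothetical placeholders, not any real platform's login API:

# Minimal sketch: an automated "check" that holds only the subject's password
# cannot answer the challenges a sensibly-configured account will throw at it.
import requests

def scripted_login(username: str, password: str) -> str:
    # Hypothetical login endpoint standing in for any social platform.
    resp = requests.post(
        "https://platform.example/login",
        data={"username": username, "password": password},
        timeout=10,
    )
    body = resp.json()

    if body.get("challenge") == "totp":
        # The checker has the password but not the user's phone or
        # authenticator app, so the automated run stalls right here.
        raise RuntimeError("2FA challenge: needs the user's second factor")
    if body.get("challenge") == "captcha":
        # CAPTCHAs exist precisely to stop this kind of scripted login.
        raise RuntimeError("CAPTCHA challenge: automated login blocked")

    return body["session_token"]

Either the service cannot process such accounts at all, or it has to ask the subject to hand over (or relay) their second factor as well, which is even worse.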
The FAQs discreetly skirt "Does SocialMediaCheck.COM breach the terms of service of the various platforms?"
developer.twitter.com/en/developer-t…

"you may not use…information derived from Twitter Content [for] …conducting or providing analysis or research for any unlawful or discriminatory purpose, or in a manner that would be inconsistent with Twitter users' reasonable expectations of privacy
I am having a hard time getting clarity on the mechanism here; apparently you are supposed to authorise SocialMediaCheck to do its privacy-invasive thing, which sounds like OAuth; but there is also talk of logging in with your passwords... which sounds equally bad, or worse.
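For what it's worth, the two mechanisms are not remotely equivalent. A rough sketch of the difference, with every name, scope and endpoint invented for illustration rather than taken from the vendor:

from urllib.parse import urlencode

# OAuth-style authorisation: the user logs in on the platform itself, and the
# checker receives only a token limited to the scopes it asked for. The
# password is never shared, and the grant can be revoked later without
# touching the password.
oauth_authorise_url = "https://platform.example/oauth/authorize?" + urlencode({
    "response_type": "code",
    "client_id": "hypothetical-checker-app-id",
    "redirect_uri": "https://checker.example/callback",
    "scope": "read_profile read_posts",   # bounded and inspectable by the user
})

# Password hand-over: nothing is bounded. Whoever holds these can read DMs,
# post, change settings or lock the user out, and "revocation" means changing
# the password everywhere it has been reused.
shared_credentials = {"username": "subject@example.com", "password": "hunter2"}

If the mechanism really is "log in with your passwords", then none of the scoping, auditing or revocation that OAuth exists to provide is in play.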
Question for @EerkeBoiten and @PrivacyMatters : is it possible to meaningfully provide consent for processing which you do not understand the extent of, upon data that you likewise no longer have full awareness of?

More from @AlecMuffett

22 May
There's a bunch of people trying to reboot the #WhatsApp #Policy fake news hysteria, so here's a recap of what it actually is about:

But also: it's really important that we don't fall for the fake news, because...

alecmuffett.com/article/13692
Because encryption is under threat from government forces, and they are doing everything they can to besmirch the reputation both of encryption in general and of anyone who is attempting to deploy it.

This fake news policy story is very useful to them

Context, for anyone who would prefer to read it in a serious forum with actual journalism: wired.com/story/whatsapp…
21 May
Write:

« yes, there is stuff in that article that's overblown or polemical, but perhaps we should stop, reflect & reboot "cyber" because maybe we're doing it too mindlessly / mechanically »

…and you'll get unintentionally hilarious hot takes like this.
I blogged something about this in 2002, commenting on a page elsewhere that was making a good point, and fortunately all of these are still available:
alecmuffett.com/article/113
Nikolai Bezroukov: "the key problem with hardening is to know where to stop… the key principle is 'not too much zeal'. Unfortunately corporate security departments often discard this vital principle and use hardening for justification of their existence."

softpanorama.org/Articles/softp…
27 Jan
How to become a Super #Privacy Activist, pt 1:

Find a small coding issue that you can be very angry about; pick on an imperfect user-experience bug or missed opportunity & frame it as intentionally being in breach of a vague aspect of some critical legislation. Launch a crusade.
How to become a Super #Privacy Activist, pt 2:

Adherence to your Rules™ is more important than outcome; petty concerns like "international jurisdiction" pale in comparison to "foreigners should obey the intent of our laws rather than cutting us off"

shkspr.mobi/blog/2018/06/i…
How to become a Super #Privacy Activist, pt 3:

The purpose of the Internet is not for people to communicate. The purpose of the Internet is to be a framework which can be regulated by you. Ideally in dramatic courtroom showdowns.

16 Nov 20
a) I think this is wryly amusing, but because of the circumstances, not the people suffering

b) I'm not sympathetic towards Parler in any way

c) nonetheless, this demonstrates a very big human problem for "something you know"-based authentication.
For anyone who does not recognise the reference: Wikipedia
en.wikipedia.org/wiki/Multi-fac…
The most egregious example of password-bansturbation that I know of comes from the French data protection regulator @CNIL; take a look at this nightmare and imagine helping someone less capable navigate it:
26 Sep 20
@OpenRightsGroup @jimkillock @Forbes @bazzacollins @Facebook @FBoversight Oh @jimkillock - I wish you had pinged me before writing this.

Obvious reason number 1: ranking the relationships between individuals so that you can show the user updates from people you interact with more often.
@OpenRightsGroup @jimkillock @Forbes @bazzacollins @Facebook @FBoversight Obvious reason number two: search suggestions and repeated searches are a thing. There is already a button for clearing them, just like in your browser history.
@OpenRightsGroup @jimkillock @Forbes @bazzacollins @Facebook @FBoversight Observation number three: unless the user has explicitly opted into something which deletes chats after {1 minute, 1 hour, 1 day} etc, it would be rude to erase stuff - "where have my baby photos gone they were in that chat with my sister!?!", etc
24 Aug 20
I'm sorry to say "quelle surprise?", but precisely the same thing happened to the Facebook reporting mechanisms which (again) many people on Twitter demanded. :-/
Back in the '90s I worked for Company X, for whom Company Y was a key supplier.

X built a firewall with auto-block of src IPs upon attack (compare fail2ban)

BadGuyZ broke into Y & attacked X from Y's infra; the firewall blocked ALL X-Y comms & impacted N million dollars of biz.
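A toy reconstruction of that failure mode, with addresses and thresholds invented for illustration; the point is that the blocker keys on source IP alone, so an attack launched from a trusted supplier's gateway takes the supplier's legitimate traffic down with it:

from collections import Counter

ATTACK_THRESHOLD = 5
attack_counts = Counter()   # source IP -> attack events seen
blocked_ips = set()

def on_attack_detected(src_ip):
    attack_counts[src_ip] += 1
    if attack_counts[src_ip] >= ATTACK_THRESHOLD:
        blocked_ips.add(src_ip)          # drops ALL traffic from this address

def is_allowed(src_ip):
    return src_ip not in blocked_ips

SUPPLIER_GATEWAY = "198.51.100.7"        # stand-in for Company Y's outbound address

# BadGuyZ, having broken into Y, attack X from Y's own infrastructure...
for _ in range(ATTACK_THRESHOLD):
    on_attack_detected(SUPPLIER_GATEWAY)

# ...and now every legitimate connection from Y is dropped along with them.
assert not is_allowed(SUPPLIER_GATEWAY)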
"But we put these filters in for good reasons! Nobody could have foreseen this outcome!", etc… alas, no - censorship, blocking, & control systems ALWAYS have a nasty tendency to blow back in the faces of those who call for them.

We should collectively have learned this by now.