Kareem Carr, Statistics Person
Jul 4, 2021 · 8 tweets · 3 min read
Folks have been bashing this mentorship program because of Google’s recent track record of what some might call “anti-blackness,” but it doesn’t seem like most folks have actually read the materials. I did, and I have concerns. 🧵👇🏾
Look at this. They say they will “desk reject”, as in not even READ your application, if it’s not max 2 pages, 8.5” by 11”, Times New Roman font, 1” margins, single spaced, in PDF format. This is more stringent than a grad school application and probably quite a few term papers.
What else will they desk reject for? Including your contact information. That’s right. They will not even consider your application if it has your name in it.
They ask that you be in college already, have a GPA above 2.5, and consult “faculty, advisors, writing centers...to review your statement before submission”...Their ideal candidate sounds like someone who’s doing great and has lots of support. WHY would this person need Google?
Maybe the Google folks didn’t mean it this way but as written they’re saying, and I can’t stress this enough, that they will not even consider you for the mentorship program if you don’t articulate how your “lived experiences” will provide value to them.
Overall, the language strikes me as being more about what Google wants than about what the candidates need.
As an underrepresented minority in STEM, what I’m usually looking for in programs like this is: flexibility, acceptance, and a sense that I’m valued and prioritized. If I don’t get that vibe, then it comes across as just another system that’s not built for me.
I hope this thread is helpful to the team at Google and to folks who’re interested in doing something similar at their institutions.

More from @kareem_carr

Jun 5
You may have heard that hallucinations are a big problem in AI: models make stuff up that sounds very convincing but isn't real.

Hallucinations aren't the real issue. The real issue is Exact vs Approximate, and it's a much, much bigger problem.
When you fit a curve to data, you have choices.

You can force it to pass through every point, or you can approximate the overall shape of the points without hitting any single point exactly.
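The two kinds of fit can be sketched with NumPy (the data here is made up for illustration): a degree-4 polynomial through five points passes through every point exactly, while a degree-2 fit only captures the overall shape.

```python
import numpy as np

# Five made-up data points that roughly follow y = x^2.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 0.9, 4.2, 8.8, 16.1])

# Exact: a degree-4 polynomial through 5 points hits every point.
exact = np.polyfit(x, y, deg=4)
# Approximate: a degree-2 polynomial matches the shape, not the points.
approx = np.polyfit(x, y, deg=2)

exact_resid = np.max(np.abs(y - np.polyval(exact, x)))    # essentially zero
approx_resid = np.max(np.abs(y - np.polyval(approx, x)))  # small but nonzero
```

The exact fit reproduces the data perfectly, including its noise; the approximate fit smooths the noise away but never quite agrees with any single observation.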
When it comes to AI, there's a similar choice.

These models are built to match the shape of language. In any given context, the model can either produce exactly the text it was trained on, or it can produce text that's close but not identical.
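A toy bigram sampler (my own sketch, nothing like a production LLM) shows the same choice: every step it takes is a pattern seen in training, yet the sequence as a whole may never have appeared verbatim.

```python
import random

# Toy "language model": bigram counts over a tiny made-up corpus.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

follows = {}
for word, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(word, []).append(nxt)

def generate(start, length, seed=0):
    """Sample a continuation: each step is a bigram seen in training,
    but the full sequence may be close-but-not-identical to the corpus."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return out

sample = generate("the", 6, seed=42)
```

The sampler can emit "the cat sat on the mat" (exact) or stitch together "the dog sat on the mat" (approximate) with equal confidence, because locally both are perfectly good patterns.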
Jun 2
I’m deeply skeptical of the AI hype because I’ve seen this all before. I’ve watched Silicon Valley chase the dream of easy money from data over and over again, and they always hit a wall.

Story time.
First it was big data. The claim was that if you just piled up enough data, the answers would be so obvious that even the dumbest algorithm or biggest idiot could see them.

Models were an afterthought. People laughed at you if you said the details mattered.
Unsurprisingly, it didn't work out.

Next came data scientists. The idea was simple: hire smart science PhDs, point them at your pile of data, wait for the monetizable insights to roll in.
Jun 1
As a statistician, I find this extremely alarming. I’ve spent years thinking about the ethical principles that guide data analysis. Here are a few that feel most urgent:
RESPECT AUTONOMY

Collect data only with meaningful consent. People deserve control over how their information is used.

Example: If you're studying mobile app behavior, don’t log GPS location unless users explicitly opt in and understand the implications.
DO NO HARM

Anticipate and prevent harm, including breaches of privacy and stigmatization.

Example: If 100% of a small town tests positive for HIV, reporting that stat would violate privacy. Aggregating to the county level protects individuals while keeping the data useful.
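One standard disclosure-control technique for this is minimum-cell-size suppression: never report a rate for a group smaller than some threshold, and aggregate up to a coarser level instead. A minimal sketch (all names, data, and the threshold are invented for illustration):

```python
# Suppress rates for groups below a minimum cell size; report at the
# coarser (county) level instead. Data is entirely made up.
MIN_GROUP = 5

records = [
    {"town": "Springfield", "county": "Greene", "positive": True},
    {"town": "Springfield", "county": "Greene", "positive": True},
    {"town": "Springfield", "county": "Greene", "positive": True},
    {"town": "Fairview",    "county": "Greene", "positive": False},
    {"town": "Fairview",    "county": "Greene", "positive": True},
    {"town": "Fairview",    "county": "Greene", "positive": False},
    {"town": "Fairview",    "county": "Greene", "positive": False},
]

def safe_rates(records, level):
    """Positive-test rate per group, or None when the group is too small."""
    groups = {}
    for r in records:
        groups.setdefault(r[level], []).append(r["positive"])
    return {g: (sum(v) / len(v)) if len(v) >= MIN_GROUP else None
            for g, v in groups.items()}

town_rates = safe_rates(records, "town")      # both towns too small: suppressed
county_rates = safe_rates(records, "county")  # county is large enough to report
```

Each town on its own is too small to report safely, but the county-level figure stays useful without exposing any one community.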
May 8
The kids using ChatGPT to cheat are massively fumbling the ball.

I would give almost anything to experience learning something like calculus for the first time with an AI assistant.
I have wasted an ungodly amount of time on poorly written math textbooks.

Confusing notation. Poorly worded statements that I puzzled over for hours. Typos that had me questioning my sanity for days.
These kids won't ever have to go through that.

They'll take a picture of the page, ask ChatGPT what it means, and instantly get an explanation tailored to exactly their level.
May 7
Hot take: Students using ChatGPT to cheat are just following the system’s logic to its natural conclusion, a system that treats learning as a series of hoops to jump through, not a path to becoming more fully oneself.
The tragedy is that teachers and students actually want the same thing, for the student to grow in capability and agency, but school pits them against each other, turning learning into compliance and grading into surveillance.
Properly understood, passing up a real chance to learn is like skipping out on great sex or premium ice cream. One could, but why would one want to?
Apr 25
If you think about how statistics works, it’s extremely obvious why a model built on purely statistical patterns would “hallucinate”. Explanation in the next tweet.
Very simply, statistics is about taking two points you know exist and drawing a line between them, basically completing patterns.

Sometimes that middle point is something that exists in the physical world, sometimes it’s something that could potentially exist, but doesn’t.
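Pattern completion in its simplest form, as a sketch: given two known points, interpolation confidently fills in the middle, and nothing in the math says whether anything real sits there.

```python
import numpy as np

# Two known points; the line between them "completes the pattern".
x_known = np.array([0.0, 10.0])
y_known = np.array([0.0, 20.0])

# The model happily predicts the midpoint...
y_mid = float(np.interp(5.0, x_known, y_known))

# ...but whether a real observation exists at x = 5 is a separate,
# empirical question the interpolation cannot answer.
```

The predicted value is perfectly plausible either way, which is exactly why a pure pattern-completer can't tell its real outputs from its invented ones.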
Imagine an algorithm that could predict what a couple’s kids might look like. How’s the algorithm supposed to know if one of those kids it predicted actually exists or not?

The child’s existence has no logical relationship to the genomics data the algorithm has available.
