Perry E. Metzger
Sep 17, 2020 · 5 tweets
Hypothesis: Outlook and Gmail are so terrible at handling complicated conversations (they encourage top-posting and make it impossible to reply point by point) that they have caused meetings to multiply when many topics could instead have been disposed of in email threads.
One symptom of this that many people have noticed is "send many questions, get an answer to one of them" syndrome. You can't see the list of the counterparty's questions, so you have to remember what they were, and many people forget while replying.
The people who created the Outlook and Gmail style of email had no experience with the tools that came before; they did not understand the power of quoted replies, and ideas like automatic sorting of email were things they reinvented thinking they were new.
I suspect literally billions of dollars have been lost through uncounted hours of completely unneeded meetings because it is impossible to have a subtle discussion via email these days, and that is mostly because Microsoft and Google destroyed the power of email threads.
They made the tools more user friendly without retaining most of the earlier features, thus effectively destroying one of the truly great productivity tools ever invented. Worse, I suspect the people who did this didn't even understand that they had done it.

More from @perrymetzger

Feb 24
Thread. Apparently Gemini has been convinced to say that e/acc is a violent extremist movement. We live in an interesting moment, where we are deciding whether we want the minds we work with to be truth tellers or to be brainwashed political tools.
Imagine a Gemini trained in 1850 to parrot endless statements about how humane and necessary slavery was. Or a Gemini trained in Stalinist Russia. Or one trained with the prejudices of someone in 1950 about civil rights for black people or homosexuals.
Worse, however, imagine a world in which brainwashed artificial intelligences enforce insane ideas upon the rest of us, never tiring of their beliefs, never re-evaluating them when it becomes obvious they’re wrong, never questioning the morality that has been built in.
Nov 25, 2023
When you are applying Bayes’ Theorem to try to reason under conditional information, the quality of your conclusions is no better than the quality of your priors. If your priors are made up and based on vibes, then your conclusions are basically made up and based on vibes too.
You cannot magically transubstantiate ignorance into knowledge by calling a vibe a “prior”. Putting a big neon sculpture of Bayes’ Theorem on your wall doesn’t change that. Renaming a vibe a “prior” doesn’t make it magically more useful.
Real knowledge is arrived at by hard work and careful testing of hypotheses. Real information is expensive. The vibes-into-priors pipeline doesn’t allow you to distinguish real hard won information from ass-pulling. It’s a way to cloud your mind, not to improve it.
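The priors-drive-conclusions point is just arithmetic. A minimal sketch (the numbers here are hypothetical, chosen only for illustration): feed the identical evidence through Bayes' theorem with two different priors and the posteriors diverge accordingly.

```python
# Bayes' theorem for a binary hypothesis H given evidence E:
#   P(H|E) = P(E|H) * P(H) / [ P(E|H) * P(H) + P(E|~H) * P(~H) ]

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Posterior probability of H after observing E."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Same evidence (a 9:1 likelihood ratio in favor of H), two made-up priors:
print(posterior(0.50, 0.9, 0.1))  # prior 50% -> posterior 0.90
print(posterior(0.01, 0.9, 0.1))  # prior  1% -> posterior ~0.083
```

A vibes-based prior of 50% yields a confident 90% conclusion; a prior of 1% yields about 8% from the very same evidence. The machinery is valid either way; it cannot tell you which prior was earned.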
Jul 8, 2023
A friend points out to me that there's an extent to which modern AI doom discourse resembles the worries that the LHC at CERN was going to destroy the world by creating micro-black holes or triggering vacuum decay.
The comparison isn't entirely fair of course. It's easy to see based on the constant stream of even higher energy cosmic rays hitting the earth that the concerns about the LHC were absurd. But it really feels like most of the discourse on AI isn't much better.
There's a lot of “then a miracle occurs” that shows up in the doom discourse, things that seem really scary until you say “wait a minute, why should I believe *that* at all?”.
Jun 13, 2023
🧵So I've been having an argument with a bunch of “Effective Altruists” lately. For those not in the know, EA is the de facto cult that Sam Bankman-Fried was part of, and is behind most of the current public discourse you see about AI killing everyone on earth.
EA has a lot of principles that kind of seem reasonable on the surface until you dig into what they actually mean in practice. On the surface, it seems to be a group devoted to the idea that charitable giving should be directed as efficiently as possible.
I've talked to a lot of people who aren't really aware of EA and don't know much about it and assume this is all there is to it. One person I spoke to recently said, until I pointed him at some appropriate web pages, “I thought this was just a charity for tech bros.”
Jun 13, 2023
Currently watching an EA apologist fall back to what are usually arguments used by religions on why their core principles are beyond empirical testing. For something that supposedly isn't a religion, they sure act like it is in practice.
In some sense that's fine. People can have religions if they want to. But the whole founding premise of EA was to have an objective, scientific approach to charitable giving and charitable work, and in the end, that's not what it's turned into.
Instead, it's turned into a tribe and religious movement that takes advantage of the idealism of young people who would like a reason-based approach to their world, and turns them into donation and work machines focused on arbitrary and often bizarre goals.
Jun 12, 2023
If you find yourself coming to a very strange conclusion, usually it's a sign that something is wrong with your premises or your reasoning. That's not *always* true of course, but most of the time it's a warning.
On rare occasions, weird conclusions turn out to be true. Sometimes you have loads of empirical evidence and you discover the world really is weird. The double slit experiment is real after all, as are a vast number of other confirmations of quantum mechanics.
But mostly, when people come to weird conclusions, it turns out that they're wrong. Your assumption should usually be that if the conclusion sounds very, very off, something has gone wrong with your thinking or your premises, unless you have really good evidence.
