Yishan · Apr 27
Okay, now I'm being asked "What should Elon do to fix Twitter?"

It looks like the takeover succeeded (easiest hostile takeover ever! I suspect the board secretly wanted to work with Elon, because WELL IT'S ELON).
I don't know what it means to "fix" Twitter, but here are a few more detailed perspectives which may shed light on more effective solutions (rather than solutions which backfire).
The first key perspective is that most people who are users only look at social platforms from one lens, namely:

1: what can people say?
However, anyone who has ever OWNED or RUN a social platform looks at it through two lenses:

1: what can people say?
2: the platform functioning
The key insight here is: All speech is also behavior.

You actually already know this: we were taught when we were young to use our "indoor voice," because if we SHOUT ALL THE TIME, no one else can have a conversation. And that is Bad for Free Speech.
You can say whatever idea you want to say, just use your indoor voice.
Everyone even knows this implicitly, because "we will defeat the spam bots" is in fact a curtailment of free speech. Spam bots are expressing SOMEONE'S view; we have already accepted that we will be limiting speech.

After all, what is so dangerous about the idea that you can Get Cheap Viagra Now, No Prescription Needed!? Surely those who would want to limit the spread of that message must be in the pocket of Big Pharma, who wish to overcharge us for erections!
Of course, we know that it's not the message, it's the fact that it's being repeated too often, in conversations where it's not relevant, and the claims are of dubious reliability, as anyone who has availed themselves of such offers can tell you.
It is the behavior (and context).
Here is another misconception that people often make:

Perfectly Reasonable Person A holds and expresses Viewpoint X.
Crazier Volatile Person B also holds and expresses Viewpoint X, quite loudly.

B gets banned...
...A then believes that now Viewpoint X is being censored.

(Despite, in many cases, still being able to express Viewpoint X themselves like, right in the same tweet)
It is usually not the case that Viewpoint X is being censored. It is usually the case that the other person was engaging in extremely disruptive behavior (disruptive in frequency, context, or volume, depending on the platform's rules) and they were banned for that.
Platforms always want a diversity of views. If you do not have enough disagreement (indeed, you need active argument) on your platform, it will wither and die.
Users don't see that problem, because when a platform dies from disuse - well, by definition there are no users left around to see it.
This is why SO many platforms seem to "start out" as free speech havens: because the first gate - "get enough usage" - usually isn't passed if you restrict activity too much. You need the chaos that free speech brings to have growth.
Unlike other products, social platforms don't die due to users being angry, they die from apathy and lack of use. Whether the negative emotions involved are a net good is another topic - ideally you'd have a bunch of users "joyfully arguing" but that's probably impossible.
But the reason "your viewpoint" is being banned is probably not the viewpoint. It's because *someone on your side* is breaking the rules. If you want your viewpoints to be "allowed," you have to police YOUR side.
Does everyone remember this thread?

If you haven't read it, you should read it - it's brilliant. Go ahead, bookmark this thread and go read that one and come back when you're done.
Back? Ok. One of the most brilliant (yet simple) elements of that thread is the recognition that abstract things like "rights" have concrete components needed to implement them.
Like how actually exercising freedom of speech requires being able to buy things - the ability to transact.

So too does freedom of speech involve real physical acts: sound, consumption of bandwidth, volume, etc.

We can sidestep the notion of whether certain ideas are "dangerous" and focus entirely on the fact that the actual exercise of freedom of speech involves a physical act.
These physical acts, in certain frequency and configuration, interfere with #2 above: the platform functioning.
You can exercise your freedom of speech by saying something a million times - that's a DDoS attack.

You can say it manually every 5 minutes. Or less often: just every hour.
You can reply to every post from someone you don't like with an insult.

You can type in all caps.

You can repeatedly respond to every point made with the deliberately picked stupidest argument instead of a good one.
You can choose only to speak using a bullhorn.

You can respond only with non sequiturs.

You can choose the most offensive reply to every question, just because you don't like the person who said it.
All of these things are you exercising free speech. It is fine.
They are "annoying." You have the right to be "annoying." Or "offensive."
From the perspective of a mere user, the first few times they see this happen, they just hit the Back Button and go away.
They don't think you're interfering with any of their rights, you're just some jerk. This platform sucks, let's go somewhere else.
From the perspective of A Person Running a Platform, they are losing users. Your perfectly valid free speech is interfering with the platform functioning, #2, which they care about.
Let's say you mix in some actual views into your noise. Now the people who agree with those views are On Your Side, and if you get banned, your VIEWS are being censored.
Because regular users only care about:

1: what can people say?

They do not care that people can also SAY things - in a manner, frequency, context, and volume - that can interfere with the functioning of the platform.
Because the functioning of the platform involves people saying things, i.e. if other people simply find the site so unpleasant that they leave, the site's functioning has been compromised.
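To make "behavior, not content" concrete, here's a minimal sketch of how a platform might flag disruptive posting purely from frequency and targeting, without ever reading the message. The thresholds and names are invented for illustration - they are not Twitter's (or anyone's) actual rules.

```python
from collections import deque
import time

# Invented thresholds for illustration only - not any real platform's policy.
MAX_POSTS_PER_HOUR = 30
MAX_REPLIES_PER_TARGET_PER_HOUR = 5
WINDOW_SECONDS = 3600

class BehaviorTracker:
    """Flags disruptive *behavior* (frequency, targeting) without inspecting content."""

    def __init__(self):
        self.post_times = deque()   # timestamps of this user's recent posts
        self.reply_targets = {}     # target user -> deque of reply timestamps

    @staticmethod
    def _trim(dq, now):
        while dq and now - dq[0] > WINDOW_SECONDS:
            dq.popleft()

    def record_post(self, now=None, reply_to=None):
        now = time.time() if now is None else now
        self.post_times.append(now)
        self._trim(self.post_times, now)

        flags = []
        if len(self.post_times) > MAX_POSTS_PER_HOUR:
            flags.append("posting too frequently")       # the "every 5 minutes" problem
        if reply_to is not None:
            dq = self.reply_targets.setdefault(reply_to, deque())
            dq.append(now)
            self._trim(dq, now)
            if len(dq) > MAX_REPLIES_PER_TARGET_PER_HOUR:
                flags.append(f"dogpiling @{reply_to}")   # replying to someone's every post
        return flags  # note: the text of the post is never examined
```

The point of the sketch is that both flags fire on the same post whether it says something brilliant or something vile.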
It is non-trivial to be "broadly inclusive" if everyone is not polite. Sometimes people just aren't going to like each other, and BEHAVIOR exacerbates the antipathy that arises from disagreement.
Hence features like blocking, muting, and even subreddits. They are blunt instruments designed to allow people to leave (avoid seeing/hearing things they find objectionable - whether it's the opinions or the manner in which they were expressed) without actually leaving the site.
Remember this?
I am going to tell you WHY it is naive.

Most people think when a bad idea is presented online, the debate proceeds like this:

1: bad idea is presented
2: a good idea refuting #1 is presented
3: people debate
4: the refutation is shown to have more merit
5: enlightenment!
That's not what happens.
Here's what happens:

1: bad idea is presented
2: a good idea refuting #1 is presented
3: many ideas refuting #1 are presented that are actually worse than #1 itself
4: in the ensuing debate, it's impossible for most people to tell whether the ideas from #2 or #3 are better, because the ideas from #3 are way more sensationalist and provocative so everyone focuses on those
5: because of #4, some people start thinking #1 is good
6: proponents of #2 usually get drowned out
7: nothing gets resolved, or
7a: things escalate until people start behaving badly because they are so inflamed

I am sure you have witnessed this process.
Someone posts a stupid tweet you disagree with. You click on the replies.

In the replies, there are a couple cogent rebuttals. Oh good, you think. 0wned!
Then there are some really stupid replies. They are really stupid. Lots of people are replying to these. Most of these replies are not very smart either. Sarcasm and ad hominem ensue, and the whole thing is a mess.
Oh well! Another day on the internet.
(I will say right now that the reason I personally think we're in this censorship/free-speech culture war is because Russian bots deliberately inflame social media debates by repeating the most inflammatorily stupid statements so that things never calm down.)
Many discussions that should end benignly in 7 instead end up in 7a, to the detriment of the platforms, who are now responsible for calming everyone down.
And because they are reluctant to step in, by the time they do, it's so bad that the only way they can put out fires like that is by shutting down whole parts of the conversation.

Then it looks like CENSORSHIP.
Because remember, once things are inflamed, shutting down ANY user means that everyone who agrees with that user's views now believes the platform is censoring those VIEWS.
I have literally NEVER encountered a situation anywhere, on any platform, where a disruptive user was shut down and did not loudly claim that they were being shut down for their views and not their behavior.
It is a literal 100% certainty. It's like a law of online communities or something.
How do you fix this problem? Let's say you want to fix the above problem with debates devolving into chaos where the ideas refuting the original bad idea are even worse because emotion and provocation take over.

You need moderation.
You probably think I meant censorship, or that it's the same thing.

It's not. Moderation is not censorship.
Here is a different perspective on online communities: let's think of them as a nuclear chain reaction.

If you want a useful nuclear chain reaction, you need a critical mass to get the reaction going.
Social platforms are like this - you need enough people on them, with a diversity of views, debating and arguing; you need a certain amount of energy for things to take off. Terms like "cold start" are often used when it comes to getting communities started.
You often have to pack all the users into a single forum, or do other things to encourage a lot of interaction to get it all started. It's a lot like cramming fissile material together to reach criticality.
But too much energy, and you get an explosion. You don't want that. Having 7a happen from above too often will wreck the entire community - an "explosion" throws everyone apart, and some people won't come back.
You want a steady (growing but steady) output of energy from this community.
Nuclear reactors solve this in a certain way, with control rods. These are rods of material that absorb excess neutrons.

The control rods are inserted or withdrawn depending on how the reaction is proceeding, to allow it to speed up, or slow it down.
You need control rods.
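Just to make the analogy concrete (this is a toy model with invented numbers, not a real moderation algorithm): community "energy" compounds on itself, and a damping factor - the control rods - gets dialed up only when things run hot.

```python
def simulate(steps=40, target=100.0, gain=1.0):
    """Toy model of the control-rod analogy: engagement compounds on itself,
    and damping ("rods") is applied in proportion to how far above target we run."""
    energy = 10.0                                 # invented starting level of activity
    for _ in range(steps):
        overshoot = max(0.0, (energy - target) / target)
        damping = min(0.9, gain * overshoot)      # insert rods only when running hot
        energy = 1.15 * energy * (1 - damping)    # 15% organic growth, minus moderation
    return round(energy, 1)

print(simulate(gain=0.0))  # no rods: roughly 2,700 after 40 steps and still climbing (the "explosion")
print(simulate(gain=1.0))  # with rods: settles around ~113, a steady simmer just above target
```

With no damping input at all the loop grows without bound; with too much it fizzles. The control-rod input is what keeps the community in the productive band.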
Incidentally, the real world already has a moderation method for large groups: it's called Robert's Rules of Order.

(It's that thing where the chair "recognizes a speaker" and people speak in turn, and people make "motions" etc, that thing)

en.wikipedia.org/wiki/Robert%27…
It's used because long ago, we already recognized that if we pack a hundred people into a room and just let everyone speak without any rules or moderation, you get chaos and nothing useful emerges.

Here is what Robert's Rules of Order says is its purpose:
"To enable assemblies of any size, with due regard for every member's opinion, to arrive at the general will on the maximum number of questions of varying complexity in a minimum amount of time...
...and under all kinds of internal climate ranging from total harmony to hardened or impassioned division of opinion."
That sounds kind of like what we're trying to achieve on social platforms, right?
So what makes you think that a social platform, which enables the gathering of groups of people of arbitrarily large size (millions!) would not need some kind of structure when mere groups of hundreds of people needed it too?
Notice that I said moderation, not censorship.

You don't need to block ideas. You just need people to adhere to rules on how and when they say them.
Many moderation systems involve some combination of:

1: time-limited bans (cools down emotions)
2: separating people into different discussion groups (subreddits)
3: many different moderators
Notice that these don't really censor ideas. They can still be expressed. But they moderate the frequency and volume, i.e....
1: you can still express your ideas, but cool off a bit first
2: these people here are talking about something else, but your ideas are more relevant over here - go here
3: no moderator is perfect - but pick the moderator you're willing to agree to abide by
There are others.
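As a sketch of mechanism #1 above (time-limited bans), here's roughly what the logic looks like - the escalating cooldowns and durations are my assumption, not any real platform's policy. Note that nothing here touches the content of what the user said:

```python
from datetime import datetime, timedelta

# Invented, escalating cooldowns for repeated behavioral violations - not real policy.
COOLDOWNS = [timedelta(hours=1), timedelta(days=1), timedelta(days=7)]

class TempBans:
    """Time-limited bans: the viewpoint is never blocked, only delayed while tempers cool."""

    def __init__(self):
        self.strikes = {}       # user -> count of behavioral violations
        self.banned_until = {}  # user -> time at which they may post again

    def record_violation(self, user, now=None):
        now = now or datetime.utcnow()
        strike = self.strikes.get(user, 0)
        self.strikes[user] = strike + 1
        cooldown = COOLDOWNS[min(strike, len(COOLDOWNS) - 1)]
        self.banned_until[user] = now + cooldown
        return cooldown

    def may_post(self, user, now=None):
        now = now or datetime.utcnow()
        return now >= self.banned_until.get(user, now)
```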
But let me give you a stark object lesson in why "whatever matches the law" is problematic in practice:

I'm going to tell you the story of /r/jailbait

(Oooh, this is going to be juicy, you think. You're right. Just not in the way you think)
Early Reddit was more or less "whatever was legal." It was wholly owned by Conde Nast at the time, which in turn was owned by Advance Publications, which owned many newspapers.

Newspapers care a lot about the First Amendment, and so they had some very experienced First Amendment lawyers.
Enter /r/jailbait:

/r/jailbait was a subreddit (forum) dedicated to posting sexualized pictures of girls who were under 18, i.e. jailbait.

Now, many people think that naked pictures of anyone under 18 are child pornography.
That's not actually true. A 17-year-old flashing her boobs is NOT child pornography. It's not illegal. Neither is, for example, a picture of a naked toddler in a bathtub. "Nudity under 18" is not child pornography.
The legal definition of child pornography is actually fairly complex, involving some subjective combination of how sexualized the image is, how sexually immature the subject is, and the intent and activities involved.

en.wikipedia.org/wiki/Child_por…
What it does NOT include are naked or scantily-clad images of teen (under 18) girls.

Most people don't know that legal definition.
But you know who does know this definition really well? Pedophiles.
Pedophiles are VERY familiar with the exact legal definition of child pornography because they want to skirt as close as possible to it, and /r/jailbait studiously stayed on the legal side of that line.
Reddit consulted Advance Publications' First Amendment lawyers, who looked at the subreddit and confirmed that it was all entirely legal, and that we were at no legal risk at all.
Then Anderson Cooper from CNN did a big moral panic Special Report about how Reddit was hosting child pornography, and everything exploded.
Many people think that when there's a big press to-do, social platforms make decisions due to the pressure. That's usually not true. Social platforms almost always thrive under controversy - there's more usage, more users, etc.
What actually happened is that when the report came out, all sorts of actual pedophiles (or run-of-the-mill creepy old men) were like, "Awesome, there's child/teen porn on Reddit, let's go there!"
So they all went to /r/jailbait, and since Reddit is a user-generated content site, some of them posted their OWN content.

Now there's this thing on Reddit that many experienced users understand, which is that when a subreddit gets popular, quality goes down.
Often, a subreddit is good because it caters to a well-defined niche audience, and when a much larger audience floods in, they don't understand the nuances and norms of this subreddit, so you get lower-quality content being posted. Like Eternal September.

en.wikipedia.org/wiki/Eternal_S…
So basically Eternal September happened to /r/jailbait.
More precisely, tons of random internet pedophiles got the wrong idea from Anderson Cooper's report and started posting some combination of real child porn alongside the totally legal underage teen porn, and the subreddit moderators now had to sort through it all.
This wasn't like a normal subreddit moderation job, where you could afford to make errors, because some of the content was fine and the rest of it was ACTUALLY ILLEGAL.
So if you accidentally didn't catch one piece of content that was actually illegal child porn, vs totally legal teen porn, now you're in violation of the law.
The amount of traffic meant that it was an overwhelming amount of moderation work for subreddit moderators, so then the problem got kicked up to Reddit's own staff.
This meant that effective immediately, it was some poor employee's job to sort through thousands of images of potential child porn (some of which depicted sexual child ABUSE) for hours at a time to determine which ones were illegal and which ones were "okay."
This was an impossible operational load on this employee, and more broadly the entire company, given that this issue created all sorts of spillover issues.
Granted, the company was like half a dozen people at the time, and with more people, you could sort through more trash, but as we know, every social network already does employ thousands of contract workers to filter horrible content, and it's a terrible job.
Here's the problem with "whatever's legal" being the line: you force the company into adjudicating whether a piece of content fits a legal definition or not, and the penalty for any errors is ILLEGAL CONTENT IS POSTED.
This is an onerous burden to put on the company, because all filtering is imperfect AND human beings are very creative (and relentless) when it comes to skirting rules.
(Every culture warrior knows this: the people on the "other side" are devious and evil, always coming up with ways to get around the rules. Right?)
So you will have situations that aren't really illegal until they get really bad - and then they are.
For example, it's not illegal for a random person to say that an election is stolen.

How about three people saying that? How about 50?

How about 15% of the entire userbase saying that?
How about a random lawyer suggesting that there may be a path of legal maneuvers by which an election could be overturned?

How about 15% of your userbase deciding that's a great thing to do, maybe we should march on the capitol and make our elected representatives do that?
That's a lot of people, and someone in the thread says, "Yeah, let's string 'em up!"

What if the President is a user and he says, "Yeah, that's totally the right thing to do!" (what's he referring to, the march or the stringing up? Unclear)
Did you know that if three people discuss murdering someone, and then one of those people goes and actually murders the victim, then all three people can be charged with "conspiracy to commit murder?"
At which point have people engaged in an illegal act of conspiring to break the law if some members of the online conversation go and actually do it?
This is not an argument for censorship. I am saying that drawing the line at "what is legal" raises the stakes HUGELY in terms of operational responsibility for the social platform.
If you think running Twitter is hard now, try making the line "whatever is legal." Even absent ANY normative value judgments, the pure operational load will be overwhelming.
It is not that it's the wrong thing to do, it's just that you may find that you will spend so much time on these things that you won't have enough time (or executive bandwidth) to add more features. It's not like Twitter's real quick with new feature rollouts to begin with.
(By the way, for people who like to point to Jan 6 as justification for banning Trump, I would like to point out that most of the actual organizing of Jan 6 occurred in FACEBOOK GROUPS, not on the Twitter)
(Facebook Groups are a far better product for organizing political action than Twitter! Yeah! Go Facebook!)
What Reddit ended up doing was banning "any sexualized images of minors (under 18)" and enforcing it through subreddit moderators: any moderators who didn't enforce this rule would have their subreddits shut down.
There is space between "sexualized images of under-18" (the platform's rule) and "child pornography" (i.e. the law), and this is the reason for it. You can't have a situation where every false negative is a violation of the law by the social platform.
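Here's a toy way to see why that space has to exist. Treat moderation as an imperfect judgment of each item's severity, with the law sitting at some line: if the platform's removal rule IS the legal line, every judgment error near that line leaves illegal content up; if the platform's rule sits well inside it, an item has to be badly misjudged to slip through. All numbers below are invented:

```python
import random
random.seed(0)

LEGAL_LINE = 0.8   # toy scale: true "severity" at or above this is illegal
NOISE = 0.1        # reviewers (human or automated) misjudge severity by roughly this much

def illegal_items_left_up(platform_line, n=200_000):
    leaked = 0
    for _ in range(n):
        true_severity = random.random()
        judged = true_severity + random.gauss(0, NOISE)  # imperfect review
        removed = judged >= platform_line
        if true_severity >= LEGAL_LINE and not removed:
            leaked += 1  # illegal content that stayed on the site
    return leaked

# Platform rule drawn exactly at the law: borderline-illegal items slip through constantly.
print(illegal_items_left_up(platform_line=LEGAL_LINE))
# Platform rule drawn well inside the law ("no sexualized images of minors, period"):
# an illegal item now needs a huge judgment error to survive, so far fewer slip through.
print(illegal_items_left_up(platform_line=0.6))
```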
Platforms, by the way, understand this very well. In fact, they really hate how lawmakers (who have exactly one job) keep criticizing the platforms for allowing this or that content that they don't like.
There are very few verticals in tech that openly call for regulation of their own business, but social networking is one of them:

cnbc.com/2020/02/15/fac…
Elon is correct: if the people want something to be illegal, they can vote for (or against) lawmakers to make a law making it illegal.
Lawmakers seem to love criticizing the social platforms, when they're the ones who could just make laws to literally bring about whatever outcome they want.
"I want you to ban this - to the extent that it will basically disappear - but I'm not going to make it illegal" is a pretty weird stance for someone with the power to make laws.
But it also forces every social platform to adopt its own values - it has to say, "okay, we're going to allow THIS kind of content but not THAT kind of content," and the ONLY standard that doesn't work in practice is "whatever's legal is ok."
In closing, I'd like to say that the best thing Elon can do to fix Twitter is to offer Jack something really awesome, like maybe a free ride to space (each year), if he'll come back and take the job.
Finally, most of you followed me for my spicy takes on social media, but I work in climate now, so now you're going to hear about trees.

terraformation.com
Trees - that is, NATIVE BIODIVERSE FOREST RESTORATION - are the most powerful tools we have right now to help fight climate change at scale.

While emissions reductions are very important (and vital!), we will not solve the entire problem through emissions reductions alone.
There is already a trillion tonnes of CO2 in the atmosphere, and even if we went to Net Zero today, we'd need carbon capture solutions to remove that CO2, and trees (forests) are the best way to do this quickly.
Here, I will address a few common myths relating to trees and forests:
Myth 1: Trees are too slow

A forest does take 10-20 years to reach maturity, but this timeline is unfairly compared against other engineered carbon capture solutions that will take as long (or much longer) to scale, whereas trees are ready to scale TODAY.
Let me introduce you to the concept of the "Technology Readiness Level."

en.wikipedia.org/wiki/Technolog…
This is a scale developed by NASA to determine how ready a piece of technology is for deployment. It ranges from TRL-1 ("basic physics principles") all the way to TRL-9 ("proven in live operational environment").
Viable climate solutions need TRL-9: we need solutions proven in operational environments that are ready TODAY to enter the "scalability" phase of supply chain and global infrastructure needed to take that solution to planetary levels.
Climbing the scale from TRL-1 to TRL-4 ("technology validated in lab") can take a decade or more. It's basic research. The first cell phone was released in 1972.
Going from TRL-4 to TRL-8 ("complete and qualified," i.e. real product) takes another decade or two. Around the mid-90s, the average American could buy a cell phone.
It took until 2019 to reach global penetration of smartphone usage - 70% of people around the world have a smartphone.

1972 to 2019 is 47 years. For one of the most useful pieces of technology ever to reach global scale.
Trees are already proven, functional, there are literally trillions of units installed around the world, and only need to be replicated and distributed. They are at TRL-9, whereas most engineered carbon capture solutions are somewhere between TRL-1 and TRL-7.
(I don't like to criticize emissions reductions, because I think we really need that too, but many emissions reductions technologies are also not at TRL-9)
Moreover, trees don't really take 20 years to start working. They take 5 years, or ZERO years...

... because trees begin removing CO2 molecules from the air literally the moment they begin growing. You know this!
Literally the moment the tree begins growing, it is taking CO2 molecules out of the air and converting them into solid carbohydrates!
When I say 5 years, I meant: a tree's most rapid growth years are between 5 and 15 years, so if we planted trees today, they'd start drawing down a little CO2 right away, and a lot starting in Year 5.

Do you really have a faster engineered solution that's ready to scale?
Myth 2: Trees are not a permanent solution

People think that because a tree will die in a hundred years, all the CO2 will go back into the air.

This is literally missing the forest for the trees.
We are not planting trees. We are restoring forests. The FOREST is the enduring unit of carbon capture, and forests exist on the planet which are hundreds of thousands, even millions of years old.
When a tree dies, it doesn't just go "poof" and release the CO2 into the air.

First, some fraction of the CO2 goes into the soil.
(Some people talk about "let's increase soil carbon." )

(Well, exactly how do you think soil carbon happens? All the carbon in the soil was put there by forests and plants dying, and getting cycled into the soil when they decompose.)
(In fact, this is also an argument against the "but trees burn" myth: yes, they do, but the soil doesn't burn. All the carbon you cycle into the soil stays there. You only lose the top "layer" - the current generation of forest.)
Next, the trees decompose quite slowly.

After about 4 years, it's about 25% decomposed. After 10 years, about 87% decomposed.

The decomposition process is slow, which is more than enough time for the CO2 to be taken up by new growth in the forest.
Every tree seeds far more than a single offspring, and younger trees (once past the sapling stage) actually sequester carbon much faster than mature trees, during their fastest-growing teenaged years.
Let me make this clear: when a tree dies in a forest, the CO2 it slowly releases is just taken up by new, younger trees. And most trees seed more than one offspring, so there is net carbon capture.
The carbon sinks of correctly-restored forests of native species deepen over time, as trees die and are replaced by their successors and other plants (which spread and multiply). This is why old-growth forests are extremely dense and very deep carbon sinks.
Forests last for hundreds of thousands of years. The oldest forests on the planet are MILLIONS of years old. Individual trees die, but the carbon remains in the forest, and more is added continually. FORESTS are in fact a very permanent carbon storage mechanism.
Myth 3: Tree-planting is more harmful

This actually can be true, but it is precisely because we have copious research about how to do tree-planting wrong that we can do it right.
Perhaps the biggest mistake is trying to plant monocultures of fast-growing non-native species, the most famous one being Eucalyptus plantations.
This works well for a few years - Eucalyptus grows very fast - but it doesn't leave room for other species to grow, so you're not regenerating a full ecosystem.
Eucalyptus is actually extra bad, because it releases chemicals that suppress other species.

Then, a single perturbation (fungus, bad weather, etc) can result in all the trees dying at once, and now you have an ecosystem that's worse off than before.
Rather, what you need to do is restore forests using a biodiverse set of native tree species.

More details here:
But suffice to say, it is exactly because of all the failures AND SUCCESSES over the past 30-40 years of forest restoration projects that we now know how to correctly restore forests so that they become self-sustaining, biodiverse carbon sinks.
And now that we know, it's time to SCALE that. We don't have time to waste.
Myth 4: There is not enough room to plant enough trees to solve the problem

This was previously thought to be true, but thanks to modern technology it no longer is.
If you were to define available land for trees as "places with enough natural rainfall," the rough consensus is that there's between 1 and 2 billion acres of land available for reforestation.
That's not quite enough to offset all the CO2 that we want. We'd rather have around 3 billion acres in total. Where would we find that?
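(A quick arithmetic aside before answering that, using only the figures from this thread - a trillion tonnes of excess CO2 and roughly 3 billion acres - rather than any independent estimate of forest carbon density:)

```python
# Back-of-envelope using only the figures cited in this thread.
excess_co2_tonnes = 1e12   # "a trillion tonnes of CO2 in the atmosphere"
restorable_acres = 3e9     # ~1-2 billion acres with natural rainfall, plus restored drylands

print(excess_co2_tonnes / restorable_acres)  # ~333 tonnes of CO2 per acre
# That storage accumulates over the lifetime of the forest, not in a single year -
# which is why this thread keeps stressing FORESTS (deepening sinks) over individual trees.
```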
Thanks to numerous projects over the past few decades, we now know that it's possible to restore deserts into forests.

Are we talking about planting invasive species in desert ecosystems? No.
Rather, many deserts became desertified over the course of human history (sometimes ancient human history), and ethnobotanical evidence strongly suggests that many deserts were once forests.
We can actually show this because in every desert, you can find oases. These are places where there's a natural accumulation of water, and we can see that once there's enough water, a forest (composed of species native to the area) naturally grows.
The pilot project I personally funded to prove this out is actually located in an area of West Hawaii that was one such ancient forest that became desertified after deforestation:

medium.com/@yishan/our-pr…
What this means is that if we can solve the freshwater availability issue, we can bring water back to desertified regions - enough to make up the 3 billion acre shortfall.

Can we do that? It turns out we can:

With solar-powered desalination, the world stands on the precipice of a technological and political revolution: water scarcity can be a thing of the past...
.... AND we can restore desertified land to its native forest habitats, enough to make up 3 billion acres of restored forest ecosystems, which are a long-term, permanent carbon sink.
... not to mention benefits to restoring biodiversity, coral reef restoration (erosion control), flood prevention, cooling weather, cleaner air, and benefits to local communities and economies.
When I go around saying that global reforestation is the BEST solution to climate change, I'm not talking about cost or risk or whatever, I'm saying no other solution has as many positive co-benefits as this one, if we do it at scale.
And THAT is why I founded Terraformation, a forest accelerator that brings together expertise from forestry, engineering, fintech, and other disciplines to remove bottlenecks and hyperscale native forest restoration as a climate solution.

terraformation.com
Time is short, and native forest restoration is the natural carbon capture solution that is IMMEDIATELY SCALABLE.
One big scalability bottleneck is the global seed supply shortage of native tree seeds. If you want to learn more:

If you enjoyed this thread, follow me for more spicy takes on social media used as a stealth vehicle for delivering my climate message.

🌍💚✊
