Yishan
Oct 31, 2022
[Hey Yishan, you used to run Reddit, ]
How do you solve the content moderation problems on Twitter?
(Repeated 5x in two days)

Okay, here are my thoughts:
(1/nnn)
The first thing most people get wrong is not realizing that moderation is a SIGNAL-TO-NOISE management problem, not a content problem.
Our current climate of political polarization makes it easy to think itʻs about the content of the speech, or hate speech, or misinformation, or censorship, or etc etc.
Then you end up down this rabbit hole of trying to produce some sort of “council of wise elders” who can adjudicate exactly what content to allow and what to ban, like a Solomonic compromise.
No, whatʻs really going to happen is that everyone on the council of wise elders will get tons of death threats, eventually quit...
... the people you recruit to replace them will ask the first group why they quit, and decline your job offer, and youʻll end up with a council of third-rate minds and politically-motivated hacks, and the situation will be worse than how you started.
No, you canʻt solve it by making them anonymous, because then you will be accused of having an unaccountable Star Chamber of secret elites (especially if, I dunno, you just took the company private too). No, no, they have to be public and “accountable!”
The fallacy is that it is very easy to think itʻs about WHAT is said, but Iʻll show you why itʻs not…
That said, while Iʻm most well-known for my prior work in social media, today Iʻm working on climate: removing CO2 from the atmosphere is critical to overcoming the climate crisis, and the restoration of forests is one of the BEST ways to do that.

In fact, here are 4 myths relating to trees and forests as a climate solution:

Myth 1: Trees are too slow
Myth 2: Trees are not a permanent solution
Myth 3: Tree-planting is more harmful
Myth 4: There is not enough room to plant enough trees to solve the problem
When I go around saying that global reforestation is the BEST solution to climate change, I'm not talking about cost or risk or whatever, I'm saying no other solution has as many positive co-benefits as this one, if we do it at scale.
Find ways to help support what weʻre doing:
And now, back to your regular programming of spicy social media takes…

When we last left you, I was just saying that the fallacy is that it is very easy to think itʻs about WHAT is said, but Iʻll show you why itʻs not…
First, here is a useful framing to consider in this discussion: imagine that you are doing content moderation for a social network and you CANNOT UNDERSTAND THE LANGUAGE.
Pretend itʻs an alien language, and all youʻre able to detect is meta-data about the content, e.g. frequency and user posting patterns. How would you go about making the social network “good” and ensure positive user benefit?
Well, let me present a “ladder” of things often subject to content moderation:
1: spam
2: non-controversial topics
3: controversial topics (politics, religion, culture, etc)
If you launch a social network, the FIRST set of things you end up needing to moderate is #1: spam. Vigorous debate, even outright flamewars are typically beneficial for a small social network: it generates activity, engages users.
It doesnʻt usually result in offline harm, which is what typically prompts calls to moderate content.
(And: platform owners donʻt want to have to moderate content. Itʻs extra work and they are focused on other things)
Moderating spam is very interesting: it is almost universally regarded as okay to ban (i.e. CENSORSHIP) but spam is in no way illegal.

Spam actually passes the test of “allow any legal speech” with flying colors. Hell, the US Postal Service delivers spam to your mailbox.
When 1A discussions talk about free speech on private platforms mirroring free speech laws, the exceptions cited are typically “fire in a crowded theater” or maybe “threatening imminent bodily harm.”
Spam is nothing close to either of those, yet everyone agrees: yes, itʻs okay to moderate (censor) spam.
Why is this? Because it has no value? Because itʻs sometimes false? Certainly itʻs not causing offline harm.

No, no, and no.
No one argues that speech must have value to be allowed (c.f. shitposting). And itʻs not clear that content should be banned for being untrue (esp since adjudicating truth is likely intractable). So what gives? Why are we banning spam?
Hereʻs the answer everyone knows: there IS no principled reason for banning spam. We ban spam for purely outcome-based reasons:

It affects the quality of experience for users we care about, and users having a good time on the platform makes it successful.
(And success on a social platform usually means a lucrative ads program, which is ironically one of the things motivating spam in the first place.)

(But thatʻs a digression)
Not only that, but you can usually moderate (identify and ban) spam without understanding the language.

Spam is typically easy to identify due to its repetitious posting frequency and the simplistic nature of its content (low symbol pattern complexity).
Machine learning algorithms are able to accurately identify spam, and itʻs not because they can tell itʻs about Viagra or mortgage refinancing; itʻs because spam has distinctive posting behavior and patterns in the content.
Moreover, AI is able to identify spam about things it hasnʻt seen before.

This is unlike moderation of other content (e.g. political), where moderators arenʻt usually able to tell that a “new topic” is going to end up being troublesome and eventually prompt moderation.
But spam about an all-new low-quality scammy product can be picked up by an AI recognizing patterns even though the AI doesnʻt comprehend whatʻs being said.

It just knows that a message being broadcast with [THIS SET OF BEHAVIOR PATTERNS] is something users donʻt want.
Spam filters (whether based on keywords, frequency of posts, or content-agnostic-pattern-matching) are just a tool that a social media platform owner uses to improve the signal-to-noise ratio of content on their platform.
Thatʻs what youʻre doing when you ban spam.
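
To make that concrete, hereʻs a toy sketch (mine, not any real platformʻs filter - every feature and threshold is made up for illustration) of scoring spam from behavior alone, without reading a single word:

```python
import zlib

def complexity(text: str) -> float:
    """Crude proxy for "symbol pattern complexity": how well the text
    compresses. Repetitive, templated spam compresses much better than
    organic chatter."""
    raw = text.encode("utf-8")
    return len(zlib.compress(raw)) / max(len(raw), 1)

def spam_score(posts: list[str], minutes_active: float) -> float:
    """Score a user's recent posting behavior WITHOUT reading the language.
    Every feature and threshold here is illustrative, not a real filter."""
    rate = len(posts) / max(minutes_active, 1.0)   # posts per minute
    dupes = 1 - len(set(posts)) / len(posts)       # exact-duplicate ratio
    avg_cx = sum(complexity(p) for p in posts) / len(posts)
    score = 0.0
    if rate > 1.0:
        score += 0.4   # machine-gun posting frequency
    if dupes > 0.5:
        score += 0.4   # the same message, over and over
    if avg_cx < 0.5:
        score += 0.2   # highly compressible, templated content
    return score       # e.g. hold anything above 0.6 for review

# Works the same on an "alien language" - only behavior is measured:
bot = ["BUY XQJ-7 NOW!!! bit.ly/x"] * 30
print(spam_score(bot, minutes_active=10))   # 0.8 - flagged
```

Real filters use far richer features, but notice that nothing above depends on understanding the content.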

I have said before that itʻs not topics that are censored, it is behavior.
So now we move on to the next classes of content on the ladder:

2: non-controversial topics
3: controversial topics (politics, religion, culture, etc)
Letʻs say you are in an online discussion about a non-controversial topic. Usually it goes fine, but sometimes one of the following pathologies erupts:
a) ONE particular user gets tunnel-vision and begins to post the same thing over and over, or brings up his opinion every time someone mentions a peripherally-related topic. He just wonʻt shut up, and his tone ranges from annoying to abrasive to grating.
b) An innocuous topic sparks a flamewar, e.g. someone mentions one of John Mulaneyʻs jokes and it leads to a flamewar about whether itʻs OK to like him now, how DARE he go and… how can you possibly condone… etc
When I SAY those things, they donʻt sound too bad. But I want you to imagine the most extreme, pathological cases of similar situations youʻve been in on a social platform:
a guy who floods every related topic thread with his opinion (objectively not an unreasonable one) over and over, and
a crazy flamewar that erupts over a minor comment that wonʻt end and everyone is hating everyone else and new enemy-ships are formed and some of your best users have quit the platform in DISGUST
You remember that time those things happened on your favorite discussion platform? Yeah. Did your blood pressure go up just a tiny bit thinking about that?

Okay. Just like spam, none of those topics ever comes close to being illegal content.
But, in any outcome-based world, stuff like that makes users unhappy with your platform and less likely to use it, and as the platform owner, if you could magically have your druthers, youʻd prefer it if those things didnʻt happen.
Most users are NOT Eliezer Yudkowsky or Scott Alexander, responding to an inflammatory post by thinking, “Hmm, perhaps I should challenge my priors?” Most people are pretty easy to get really worked up.
Events like that will happen, and they canʻt be predicted, so the only thing to do when it happens is to either do nothing (and have your platform take a hit or die), or somehow moderate that content.
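
To be concrete - and this is purely a detection sketch of my own, with made-up thresholds, NOT a recommendation of what to do once something is flagged - both pathologies above are visible in thread metadata alone:

```python
from collections import Counter

def thread_heat(events: list[tuple[str, float]]) -> dict:
    """events: (user, timestamp_in_minutes) for one thread, oldest first.
    Flags both pathologies from metadata alone - no text is read.
    Thresholds are made up for illustration."""
    users = [u for u, _ in events]
    span = max(events[-1][1] - events[0][1], 1.0)
    rate = len(events) / span                        # posts per minute
    counts = Counter(users)
    top_share = counts.most_common(1)[0][1] / len(events)
    top2_share = sum(n for _, n in counts.most_common(2)) / len(events)
    return {
        "tunnel_vision": top_share > 0.6,            # one user is most of the thread
        "flamewar": rate > 2.0 and top2_share > 0.6, # rapid back-and-forth, few voices
        "posts_per_min": round(rate, 2),
    }

# Two users slugging it out, 60 posts in 12 minutes:
brawl = [("alice" if i % 2 else "bob", i * 0.2) for i in range(60)]
print(thread_heat(brawl))   # flamewar: True
```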
RIGHT NOW RIGHT HERE I want to correct a misconception rising in your mind:

Just because I am saying you will need to moderate that content does NOT mean I am saying that all methods or any particular method employed by someone is the best or correct or even a good one.
I am NOT, right here, advocating or recommending bans, time-limited bans, or hell-banning, or keyword-blocking, or etc etc whatever specific method. I am JUST saying that as a platform owner you will end up having to moderate that content.
And, there will be NO relation between the topic of the content and whether you moderate it, because itʻs the specific posting behavior thatʻs a problem. What do I mean by that?
It means people will say, “You banned people in the discussion about liking John Mulaney Leaving His Wife but you didnʻt ban people in the discussion about Kanye West Being Anti-Semitic ARE YOU RACIST HEY I NOTICE ALL YOUR EXECS ARE WHITE!”
No, itʻs because for whatever reason people didnʻt get into a flamewar about Kanye West or there wasnʻt a Kanye-subtopic-obsessed guy who kept saying the same thing over and over and over again.
In fact, once again, I challenge you to think about it this way: could you make your content moderation decisions even if you didnʻt understand the language they were being spoken in?

Here, there is a parallel to the usage of “Lorem Ipsum” in the world of design.
en.wikipedia.org/wiki/Lorem_ips…
Briefly, when showing clients examples of a proposed webpage design, professional designers usually replace the text with nonsense text, i.e. “Lorem ipsum dolor etc…” because they donʻt want the client to be subconsciously influenced by the content.
Like if the content says, “The Steelers are the greatest football team in history” then some clients are going to be subconsciously motivated to like the design more, and some will like it less.
(Everyone from Pittsburgh who is reading this has now been convinced of the veracity and utter reasonableness of my thinking on this topic)
Everyone else… letʻs take another temporary detour into the world of carbon credits.

Carbon credits are great for offsetting your carbon footprint, but are they really helpful to the true climate problem?
The problem with credits is that when you buy a credit, the CO2 represented by that credit has already been removed: someone planted a tree, the tree grew and removed CO2 from the air, and thatʻs what issues the carbon credit. (simplified explanation)
At that point, the CO2 molecules have been removed from the atmosphere!! Thatʻs the key thing we need to have happen!

When you buy the credit, you use it to justify emitting your own CO2, and now youʻve undone that CO2 removal!
What you really want to do is buy FUTURE carbon credits: you want to pay now for future carbon credits, because then youʻre paying for people to plant additional trees NOW (or suck down CO2 using machines, but weʻll use trees as the shorthand here).
Now the CO2 has been removed. Then, once you receive your credits, the price should have gone up (all carbon credit price projections have prices increasing sharply over the next few years/decades), and you sell only enough to cover your original investment.
Then you take the excess and you retire them, so that no one can buy them and use them to emit. NOW youʻve sequestered a bunch of CO2, allowed the re-emission of only some of it, and you have a net removal of CO2.
(You gave up extra returns to do this, but I presume youʻre in a position where you, like me, are looking to trade money for a livable planet and/or realize that worsening climate is already impacting us economically in bad ways)
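
Hereʻs the arithmetic of that strategy as a toy example (all prices are made up for illustration, not a forecast):

```python
def forward_credit_strategy(tons_bought: float, buy_price: float,
                            future_price: float) -> dict:
    """Pay now for FUTURE credits (funding trees planted today); once the
    credits are issued at a higher price, sell just enough to recoup your
    original outlay and retire the rest. All prices are illustrative."""
    outlay = tons_bought * buy_price
    tons_sold = outlay / future_price        # resold: buyers may emit this much
    tons_retired = tons_bought - tons_sold   # retired: no one can ever emit these
    return {
        "net_cost": outlay - tons_sold * future_price,  # 0: you made yourself whole
        "re_emitted_tons": round(tons_sold, 1),
        "net_tons_removed": round(tons_retired, 1),
    }

# 100 tons of future credits at $10/ton, later trading at $25/ton:
print(forward_credit_strategy(100, 10, 25))
# {'net_cost': 0.0, 're_emitted_tons': 40.0, 'net_tons_removed': 60.0}
```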
To enable people to do this, we need to enable huge numbers of NEW reforestation teams, and if thatʻs something you want to support, youʻll be interested in Terraformationʻs new Seed-to-Carbon Accelerator:

terraformation.com/blog/carbon-fo…
Now back to where we were… when we left off, I was talking about how people are subconsciously influenced by the specific content thatʻs being moderated (and not the behavior of the user) when they judge the moderation decision.
When people look at moderation decisions by a platform, they are not just subconsciously influenced by the nature of the content that was moderated, they are heavily - overwhelmingly - influenced by the nature of the content!
Would you think the moderation decision you have a problem with would be fair if the parties involved were politically reversed?
Youʻd certainly be a lot more able to clearly examine the merit of the moderation decision if you couldnʻt understand the language of the content at all, right?
People in China look at America and donʻt really think the parties are so different from each other; they just think itʻs a disorganized and chaotic system that resulted in a mob storming the Capitol after an election.
The same is true even for actors trying to cause BAD things to happen: Russia is just happy that the US had a mob storming the Capitol after an election instead of an orderly transfer of power. They donʻt care who is “the good guy,” they just love our social platforms.
Youʻll notice that I just slippery-sloped my way from #2 to #3:
2: non-controversial topics
3: controversial topics (politics, religion, culture, etc)
Because #2 topics become #3 topics organically - they get culture-linked to something in #3 or whatever - and then youʻre confronting #3 topics or proxies for #3 topics.
You know, non-controversial #2 topics like… vaccines and wearing masks.

If you told me 10 years ago that people would be having flamewars and deep identity culture divides as a result of online opinions on WEARING MASKS I would have told you that you were crazy.
That kind of thing cannot be predicted, so thereʻs no way to come up with rules beforehand based on any a priori thinking.

Or some topics NEED to be discussed in a dispassionate way divorced from politics.
Like the AI, human content moderators cannot predict when a new topic is going to start presenting problems that are sufficiently threatening to the operation of the platform.

The only thing they can do is observe if the resultant user behavior is sufficiently problematic.
But that is not something outside observers see, because platforms donʻt advertise problematic user behavior: if you knew there was a guy spam-posting an opinion (even one you like) over and over and over, you wouldnʻt use the platform.
All they see is the sensationalized (mainstream news) headlines saying TWITTER/FACEBOOK bans PROMINENT USER for posts about CONTROVERSIAL TOPIC.
This is because old-media journalists always think itʻs about content. Newspapers donʻt really run into the equivalent of “relentless shitposting users” or “flamewars between (who? dueling editorialists?).” Itʻs not part of their institutional understanding of “content.”
Content for all media prior to social media is “anything that gets people engaged, ideally really worked up.” Why would you EVER want to ban something like that? It could only be for nefarious reasons.
Any time an old-media news outlet publishes something that causes controversy, they LOVE IT. Controversy erupting from old-media news outlets is what modern social media might call “subclinical.”
In college, I wrote a sort of crazy satirical weekly column for the school newspaper. The satire was sometimes lost on people, and so my columns resulted in more letters to the editor than any other columnist ever. The paper loved me.
(Or itʻs possible they loved me because I was the only writer who turned in his writing on time every week)
Anyhow, old media controversy is far, far below the intensity levels of problematic behavior that would e.g. threaten the ongoing functioning or continued consumer consumption of that old-media news outlet.
MAYBE sometimes an advertiser will get mad, but a backroom sales conversation will usually get them back once the whole thing blows over.
So we observe the following events:

1: innocuous discussion
2: something blows up and user(s) begin posting with some disruptive level of frequency and volume
2a: maybe a user does something offline as a direct result of that intensity
...
3: platform owner moderates the discussion to reduce the intensity
4: media reporting describes the moderation as targeting the content topic discussed
5: platform says, “no, itʻs because they <did X specific bad behavior> or <broke established rules>”
...
6: no one believes them
7: media covers the juiciest angle, i.e. "Is PLATFORM biased against TOPIC?"

Because, you see, controversial issues always look like freedom of speech issues.
But no one cries freedom of speech when itʻs spam, or even non-controversial topics. Yeah, you close down the thread about John Mulaney but everyone understands itʻs because it was tearing apart the knitting group.
“Becky, you were banned because you wouldnʻt let up on Karen and even started sending her mean messages to her work email when she blocked you here.”
Controversial topics are just overrepresented in instances where people get heated, and when people get heated, they engage in behavior they wouldnʻt otherwise engage in.
But that distinction is not visible to people who arenʻt running the platform.
One of the things that hamstrings platforms is that unlike judicial proceedings in the real world, platforms do not or cannot reveal all the facts and evidence to the public for review.
In a real-world trial, the proceedings are generally public. Evidence of the alleged wrongdoing is presented and made part of the public record.
Although someone might be too lazy to look it up, an interested critic will be able to look at the evidence in a case before deciding if they want to (or can credibly, without being debunked) whip up an angry mob against the system itself.
At Reddit, weʻd have to issue moderation decisions (e.g. bans) on users and then couldnʻt really release all the evidence of their wrongdoing, like abusive messages or threats, or spamming with multiple accounts, etc.
The justification is that private messages are private, or sometimes compromising to unrelated parties, but whatever the reasons, that leaves fertile ground for unscrupulous users to claim that they were victimized...
... and politically interested parties to amplify their message that the platform is biased against them.
I had long wondered about a model like “put up or shut up” where any users challenging a moderation decision would have to consent to having ALL the evidence of their behavior made public by the platform, including private logs and DMs.
But there are huge privacy issues and having a framework for full-public-disclosure would be a lot of work. Nevertheless, it would go a long way to making moderation decisions and PROCESSES more transparent and well-understood by the general public.
Social platforms actually have much BETTER and more high-quality evidence of user misbehavior than “the real world.” In the real world, facts can be obscured or hidden. On a digital platform, everything you do is logged. The truth is there.
And, not only that, the evidence can even be presented in an anonymized way for impartial evaluation.
Strip out identifiers and political specifics, and like my “in a language you donʻt understand” example: moderators (and armchair quarterbacks) can look at the behavior and decide if itʻs worthy of curtailment.
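
Hereʻs a sketch of the easy part of that idea - and only the easy part (the helper names are mine, and as the comments note, naive redaction misses things):

```python
import re

def anonymize(log: list[tuple[str, str, str]]) -> list[tuple[str, str, str]]:
    """log: (timestamp, username, message) tuples for one incident.
    Replaces identities with stable labels (User A, User B, ...) and scrubs
    obvious identifiers, so outsiders can judge the BEHAVIOR.
    Sketch only: regexes will miss things, and scrubbing the "political
    specifics" out of free text is the genuinely hard, error-prone part."""
    labels: dict[str, str] = {}
    def label(user: str) -> str:
        if user not in labels:
            labels[user] = f"User {chr(ord('A') + len(labels))}"
        return labels[user]
    email = re.compile(r"\S+@\S+\.\S+")
    out = []
    for ts, user, msg in log:
        who = label(user)                    # register the poster first
        msg = email.sub("[email]", msg)      # scrub email addresses
        for real, fake in labels.items():    # scrub any usernames seen so far
            msg = msg.replace(real, fake)
        out.append((ts, who, msg))
    return out

# "karen" vs "becky" becomes User A vs User B - the behavior stays visible:
print(anonymize([
    ("09:01", "karen", "please stop"),
    ("09:02", "becky", "karen you can't hide, I emailed karen@work.com"),
]))
# [('09:01', 'User A', 'please stop'),
#  ('09:02', 'User B', "User A you can't hide, I emailed [email]")]
```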
Again, this is a lot of work. You canʻt just dump data, because itʻs a heightened situation of emotional tension: the first time you try, something extra will get accidentally disclosed, and youʻll have ANOTHER situation on your hands. Now you have two problems.
So I donʻt know if thatʻs workable. But what I do know is, people need to think about content moderation differently, because:
1: It is a signal-to-noise management issue
2: Freedom of speech was NEVER the issue (c.f. spam)
3: Could you still moderate if you canʻt read the language?
Warning: donʻt over-rotate on #3 and try to do all your content moderation through AI. Facebook tried that, and ended up with a bizarre inhuman dystopia. (I have a bunch more to say about this if people care)
Having said all that, I wish to offer my comments on the (alleged) “war room team” that Elon has apparently put to work at Twitter.
I donʻt know the other people super well (tho Sriram is cool; he was briefly an investor in a small venture of mine), but Iʻm heartened to know that @DavidSacks is involved.
Sacks is a remarkably good operator, possibly one of the best ones in the modern tech era. He was tapped to lead a turnaround at Zenefits when that company got into really hot water.
“Content moderation” is the most visible issue with Twitter (the one talking heads love to obsess over) but itʻs always been widely known that Twitter suffers from numerous operational problems that many CEOs have tried in vain to fix.
If Twitter were operationally excellent, itʻd have a lot better chance of tackling its Inherently Very Hard Moderation Problems and maybe emerge with novel solutions that benefit everyone. If anyone can do that, itʻs Sacks.
Twitter employees are about to either be laid off or will look back on this as the time they did the best work of their lives.
Finally, while Iʻve got your attention, Iʻd like to tell you my personal secret to a positive Twitter experience - a little-known Twitter add-on called Block Party: @blockpartyapp_
One thing that Twitter did well (that Iʻm surprised FB hasnʻt copied) is exposing their API for content filtering.

This allows 3rd-party app developers to create specialized solutions that Twitter canʻt/wonʻt do.
Block Partyʻs founder Tracy Chou understands the weird and subtle nuances of content filtering on the internet: you donʻt use a cudgel, you need a scalpel (or three).
Block Party doesnʻt simply wholesale block things, it filters them in an intelligent way based on criteria you set, and uses data across the system to tune itself.
It doesnʻt just throw away things it filters for you, it puts them in a box so you can go through it later when you want. Because no automated filter is perfect! (Remember the “bizarre inhuman AI dystopia” from above?)
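
That “box” is a design pattern worth naming: quarantine, donʻt delete. A minimal sketch of the pattern (mine - not Block Partyʻs actual code or criteria):

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class QuarantineFilter:
    """Filter-to-a-box pattern: matched items are held, never destroyed,
    because no automated filter is perfect. A sketch of the pattern,
    not Block Party's actual implementation or criteria."""
    criteria: list[Callable[[dict], bool]] = field(default_factory=list)
    box: list[dict] = field(default_factory=list)  # review later, or delegate

    def feed(self, mention: dict) -> Optional[dict]:
        if any(rule(mention) for rule in self.criteria):
            self.box.append(mention)   # quarantined: out of sight, not gone
            return None                # never reaches the main feed
        return mention                 # clean: passes straight through

# Criteria a user might set (illustrative):
f = QuarantineFilter(criteria=[
    lambda m: m["author_followers"] < 10,   # throwaway accounts
    lambda m: not m["author_has_avatar"],   # default-avatar accounts
])
f.feed({"author_followers": 3, "author_has_avatar": False, "text": "you suck"})
print(len(f.box))   # 1 - held in the box, recoverable if the filter was wrong
```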
If youʻre someone who gets a LOT of hate (or just trash) and you donʻt really WANT to go through it but need to (just in case thereʻs something valuable), you can also authorize a trusted friend to do it for you.
Overall, it has smoothly and transparently improved the signal-to-noise ratio of my Twitter experience, especially during a period of cultural upheaval when youʻd expect MORE crazy crap…
But no, for me, my Twitter experience is great and clean and informative and clever. Iʻve used Twitter more and more ever since installing it.

Disclosure: as a result of these experiences, Iʻm now an investor in Block Party.
If you enjoyed this and want more spicy takes on social media (and advice on how to fix the climate, or investment tips), follow me!
