Modal logic is fun, and I’m always disappointed that philosophers and logicians didn’t go for the obvious graph theory you can do—particularly if you allow for heterogeneity.

(Amusingly enough, this is a plot point in Neal Stephenson’s _Anathem_.) 🧵
Briefly: represent worlds as nodes, and the neighbor (accessibility) relation as directed arrows. An arrow from world A to B means that anything true in B is "possible" in A. Then the various logics (S4, S5, etc.) amount to graph-theoretic constraints on the arrows: reflexivity, transitivity, symmetry.
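Here is a minimal sketch of that picture in Python (my own illustration; the three-world frame and all names are made up): worlds are nodes, accessibility is a set of directed edges, and ♢ and □ are evaluated by quantifying over a world's out-neighbors.

```python
# A minimal Kripke model, per the picture above. Worlds are nodes,
# accessibility ("neighbor") is a set of directed edges, and the valuation
# says where each proposition holds. All names here are illustrative.

worlds = {"A", "B", "C"}
edges = {("A", "B"), ("B", "C"), ("C", "C")}  # A -> B: B is accessible from A
truth = {"p": {"B", "C"}}                     # p holds at B and C

def neighbors(w):
    """Worlds accessible from w."""
    return {v for (u, v) in edges if u == w}

def possible(prop, w):
    """<>prop at w: prop holds at some world accessible from w."""
    return any(v in truth[prop] for v in neighbors(w))

def necessary(prop, w):
    """[]prop at w: prop holds at every world accessible from w."""
    return all(v in truth[prop] for v in neighbors(w))

print(possible("p", "A"))   # True: A sees B, and p holds at B
print(necessary("p", "A"))  # True: B is the only world A sees
```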
Then you can do fun things. There are logics with worlds where "necessarily p" doesn't imply p: these are worlds without a self-loop, worlds that are not possible relative to themselves, where p can be true yet "not possible," and □p can hold even while p fails.
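To make the self-loop point concrete, here is a toy frame of my own construction: a world w that sees only u and never itself. □p comes out true at w, because p holds everywhere w can see, even though p is false at w.

```python
# A world without a self-loop: "necessarily p" holds at w even though p
# fails at w. Toy construction of my own, not from the thread.

edges = {("w", "u"), ("u", "u")}   # w sees only u; w does not see itself
p_worlds = {"u"}                   # p is true at u, false at w

def necessarily_p(w):
    successors = {v for (a, v) in edges if a == w}
    return all(v in p_worlds for v in successors)

print(necessarily_p("w"))   # True: p holds at every world w can see
print("w" in p_worlds)      # False: p fails at w, so []p does not give p here
```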
That graph theory provides a model for the semantics of modal logic is fun, and I once wondered what would happen if you put weights on the edges.
Some worlds are more possible w/r/t your world than others. The diamond and box shapes now carry around numbers. Presumably these can be given rules: ♢x ♢y p might be equivalent to ♢(x+y) p.
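One way to cash that out (my reading, not something the thread commits to) is min-plus style: treat weights as costs, and read ♢x p as "a p-world is reachable from here at total cost at most x." Chaining diamonds then adds budgets, which gives the composition direction ♢x ♢y p → ♢(x+y) p for free.

```python
# One way (an assumption of mine, not the thread's) to cash out weighted
# diamonds: weights are costs, and <>_x p means "p holds at some world
# reachable at total cost <= x". Composition then adds, min-plus style.

import heapq

weighted_edges = {("A", "B", 1.0), ("B", "C", 2.0), ("A", "C", 5.0)}
p_worlds = {"C"}

def min_cost_to_p(start):
    """Dijkstra: cheapest total weight from start to any world where p holds."""
    frontier = [(0.0, start)]
    best = {}
    while frontier:
        cost, w = heapq.heappop(frontier)
        if w in best:
            continue
        best[w] = cost
        for (u, v, wt) in weighted_edges:
            if u == w and v not in best:
                heapq.heappush(frontier, (cost + wt, v))
    return min((best[w] for w in best if w in p_worlds), default=float("inf"))

def diamond(x, start):
    """<>_x p at start: a p-world is reachable at total cost <= x."""
    return min_cost_to_p(start) <= x

print(diamond(3.0, "A"))   # True: A -> B -> C costs 1 + 2 = 3
print(diamond(2.0, "A"))   # False: no p-world within budget 2
```

Note that only one direction of the rule comes for free under this reading: ♢(x+y) p can hold via a single heavy edge that never passes through an intermediate world at cost x, so the full equivalence would be a genuine axiom, not a theorem.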
It was always a little unclear what was going on, however. Possibility is not probability, and you can’t really be “more possible”. What is the name for the continuum version? Perhaps “conceivable”: things can be more or less conceivable, I think.
It's bounced around my head from time to time, in part because I read a bit of Collingwood's philosophy of history back when I was at IU. Historians, it seemed, wanted to talk about the possible, in a way that didn't cash out as frequentism or subjective Bayesianism.
Anyway, I never really knew what to do with it. It never clicked for me what logicians were after. They weren't quite mathematicians, philosophers, or scientists (although all three of those groups appear). So it wasn't clear how to make a contribution.
The physicist angle would be to make random (directed) graphs and ask questions like: when does S5 emerge? But those questions are all a bit trivial; they don't seem "deep" in an important sense.
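For what it's worth, here is one version of that experiment (my sketch): sample Erdős–Rényi-style directed graphs and count how often the frame validates S5, i.e. how often the accessibility relation happens to be an equivalence relation (reflexive for T, transitive for 4, symmetric for B).

```python
# A version of the "physicist angle" (my sketch): sample random directed
# graphs and ask how often the frame satisfies the S5 conditions,
# i.e. is an equivalence relation.

import random

def random_frame(n, q, rng):
    """Directed graph on n worlds; each arrow present independently with prob q."""
    return {(i, j) for i in range(n) for j in range(n) if rng.random() < q}

def is_s5_frame(n, edges):
    reflexive  = all((i, i) in edges for i in range(n))
    symmetric  = all((j, i) in edges for (i, j) in edges)
    transitive = all((i, k) in edges
                     for (i, j) in edges for (j2, k) in edges if j == j2)
    return reflexive and symmetric and transitive

rng = random.Random(0)
trials = 10_000
hits = sum(is_s5_frame(4, random_frame(4, 0.5, rng)) for _ in range(trials))
print(f"S5 frames: {hits}/{trials}")   # vanishingly rare at uniform q = 0.5
```

And the triviality complaint checks out: equivalence relations are exponentially rare among random relations (Bell(n) out of 2^(n²)), so S5 essentially never "emerges" from uniform randomness.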
Like, what’s interesting about random neighbor relationships? Perhaps one could have locally “reasonable” logics that are disordered on a larger scale.
This might reflect the ways in which we fail to reason well about sufficiently weird worlds relative to ours, but can reason about them well in context.
The person who is best on this (that I've read) is Graham Priest, who still, like most people in the space, refuses to draw pictures. (I feel like this is some weird inversion of the pre-Cartesian prejudice against algebra and in favor of pictures.)
Shout out to my Neal Stephenson peeps. Logicians really hate the idea of inhomogeneous graphs with a base reality.
Yes. Or, rather: does graph theory contain philosophically interesting consequences when seen as a model for modal logic?
And (just because I’m me) does random graph theory have anything interesting to say, philosophically, in this fashion?
This is really fun and cool! I have a secret fear that they are using that weird event space definition of probability that economists like, but I can probably deal.

