I read a claim that the royal governor of Virginia, John Murray (4th Earl of Dunmore), stripped George Washington of his (very valuable) lands in the Ohio Valley, which Washington had originally been awarded for his service in the French and Indian War.
It seems like there was some _plausible_ legal ground for that: maybe the land was only supposed to be allocated to regular royal soldiers, and colonial militiamen, technically, didn't count.

And Murray called him on this technicality.
If true, this is relevant because it might give Washington a personal, financial justification for supporting the Revolutionary War.

Washington was a multi-millionaire in danger of losing his fortune because of English policy. Rebellion, though risky, would make the problem go away.
Does anyone know if this claim is true? The book I found the claim in has a useless footnote, and googling around hasn't helped me much.

Failing that, how do you go about verifying this kind of claim?
