i think this should be a crime carrying consequences commensurate with those for comparable numbers of casualties, and that the consequences should be summary. foreignpolicy.com/2021/02/18/tru…
there is no question that he lied about the severity of the outbreak and took affirmative action which resulted in the deaths of tens of thousands of americans. there is no question that he did this with knowledge of the consequences.
i cannot say what i think should happen to scott atlas on twitter. i do not know whether i can even legally say it.
i think paxton's conception of fascist transition as an elite phenomenon, in which conservatives invite fascists into power and are then supplanted by them, (a) is correct, and (b) has already happened in the united states.
we have a fascist party and a liberal party, and then there are leftists, who do not have a party. the fascist party cannot be allowed to take power. what this means is that the liberals must accommodate the leftists to form a resistance bloc.
this is unlikely to work over the long term: malapportionment and the utter domination of the legislature by conservatives have essentially removed all avenues for liberals to exercise power, and leftists will not tolerate that impotence. we are likely to be governed by fascists soon.
there is zero academic rigor to any of this. he simply either does not understand the issue or his brain has been fully rotted by weird right-wing shit.
black faces appear at low contrast against a dark background. this is a completely anodyne observation with technical consequences: in particular, a low-contrast picture of a black face gives a model fewer landmarks to key on.
in order to get even odds of recognition (and this may not even be possible), you have to measure the odds of recognition independently within each demographic subgroup. this means adjusting your training set. you want to do this even if you are racist. you want your product to work.
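a minimal sketch of what i mean by looking at subgroups independently, assuming a generic classification setup; the helper names and the oversampling remedy here are illustrative, not any particular vendor's pipeline:

```python
# hedged sketch: evaluate recognition accuracy independently per subgroup,
# then oversample underrepresented groups. all names here are illustrative.
import random
from collections import defaultdict

def per_group_accuracy(predictions, labels, groups):
    """accuracy computed within each demographic subgroup, not in aggregate."""
    correct, total = defaultdict(int), defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        correct[group] += pred == label
    return {g: correct[g] / total[g] for g in total}

def rebalance_by_group(examples, groups, seed=0):
    """oversample each subgroup up to the size of the largest one."""
    rng = random.Random(seed)
    by_group = defaultdict(list)
    for example, group in zip(examples, groups):
        by_group[group].append(example)
    target = max(len(members) for members in by_group.values())
    balanced = []
    for members in by_group.values():
        balanced.extend(members)
        # pad with resampled duplicates so every group reaches the target
        balanced.extend(rng.choices(members, k=target - len(members)))
    rng.shuffle(balanced)
    return balanced
```

oversampling is the bluntest fix; collecting more data for the low-contrast subgroup, contrast augmentation, or reweighting the loss all attack the same imbalance.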
i have been working on this problem for years and am begging academia to do some work on how to estimate the relationship between model size and the recovery of memorized data.
model training is not that dissimilar from compressing data using an autoencoder. the problem is more or less that we are writing data into a format we cannot read, then presuming that it is unrecoverable.
hopefully my team will have some work on model distillation & memorization out this year that can contribute.
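a minimal sketch of the kind of measurement i'm begging for, assuming a HuggingFace-style causal language model; the model choice, prefix length, and helper function are illustrative assumptions, not from our forthcoming work:

```python
# hedged sketch: probe whether a causal LM reproduces a training document
# verbatim when prompted with its prefix. model name, prefix length, and
# the helper itself are illustrative, not an established API.
from transformers import AutoModelForCausalLM, AutoTokenizer

def reproduces_verbatim(model, tokenizer, document,
                        prefix_tokens=50, continuation_tokens=50):
    ids = tokenizer(document, return_tensors="pt").input_ids[0]
    prefix = ids[:prefix_tokens].unsqueeze(0)
    expected = ids[prefix_tokens:prefix_tokens + continuation_tokens]
    # greedy decoding: memorized spans tend to surface without sampling
    output = model.generate(prefix,
                            max_new_tokens=continuation_tokens,
                            do_sample=False,
                            pad_token_id=tokenizer.eos_token_id)
    generated = output[0, prefix.shape[1]:]
    if generated.shape[0] < expected.shape[0]:
        return False  # generation stopped early; cannot match verbatim
    return bool((generated[:expected.shape[0]] == expected).all())

model_name = "gpt2"  # illustrative; repeat across sizes to estimate scaling
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```

running this over a sample of training documents at several model scales gives a crude curve of verbatim recovery against parameter count, which is exactly the relationship nobody has characterized.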
honestly, the attack helicopter story was the single best piece of science fiction i've read in the past five years, and it deserved a Hugo rather than being unpublished.
i honestly understand the backlash: it implicitly repudiates an argument for trans rights that is extremely intuitively compelling but that probably does not correspond perfectly to the underlying causes of dysphoria, a phenomenon we do not fully understand and which is not universal.