Kelsey Piper
Aug 10 · 236 tweets
With effective altruism in the news absolutely everyone has been publishing their takes on the movement, and I keep thinking of things I want to say in response to all of them but don't have time. So let's try this: 1 like = 1 opinion on effective altruism and its critics.
Global health interventions totally save people's lives and many of them won't be funded unless individual donors decide to donate money. There's lots of clever contrarian second-order stuff which just doesn't really touch this core fact about the world.
(2) Academics can get published way more easily for discovering a new clever intervention than for working on how to get an existing functional intervention to hundreds of millions more people.
(3) It's also easier to advertise something new and clever to the public. People love the idea of someone who came up with something new and transformative. But actually you can save so so many lives just by doing the thing that works, for people who can't get it yet.
(4) That's kind of the backdrop for a lot of my feelings about effective altruism. There's just a ton, that matters a lot, that isn't getting done, and it seems very possible for a medium-sized medium-rich intellectual and social movement to totally change that.
(5) If you evaluate effective altruism by how many people it got lifesaving medical care to - and I think that is an extremely good and correct way to evaluate effective altruism - 2022 will be effective altruism's best year.
(6) Effective altruists are doing way more shit that is not getting lifesaving medical care to people, but the movement is growing faster than it's expanding its focus, so every year of growth so far has also been growth in lifesaving medical care provided to people.
(7) I have unease about effective altruism having so much money. I have fury and frustration that we're trying to do this with a social movement instead of having a society that does it systematically as a shared priority. But all that unease has to sit behind (6).
(8) Bigger has a really obvious really good effect; more money is going to the stuff that's really obviously really good; so I'm happy that effective altruism is bigger and I think people with really wildly different values and priorities agree on this.
(9) A fundamentally very fair complaint about EA is that there's a bait and switch or something going on here. There's all the global health interventions, and then there's a second flock of people standing behind the global health people doing weird stuff that's way less popular
(10) This seems fucked up! It's not fair to the global health people, for one thing, if their justified good PR for all the global health interventions that they are raising money to do is getting spent down on cover for weirdos.
(11) It also seems really bad if ideas that can't stand on their merits get shielded from criticism because the people doing them are also doing things that do stand on their merits.
(12) But a lot of criticism seems like it wants to engage by just denying that there is really important really obviously good stuff that really does stand on its obvious good merits. People dying is bad, stopping it is good, none of the complicated arguments complicate *that*.
(13) Okay, so, the obvious good stuff is obviously good, but is it the best possible way to achieve good in the world with limited donations today? I think probably not. I think probably not even if you're just trying to solve global development stuff.
(14) I think inventing new, better, easier-to-administer vaccines is probably going to pencil out better than more immediate stuff. I think people have given up on the problems of development and migration - the ways people have actually historically stopped being poor - too fast.
(15) So I try not to sell GiveWell on the grounds "this is the best possible way to help people currently alive", and I'm kind of meh on that framing. I prefer "hey, getting people lifesaving healthcare saves their lives and it's pretty cheap".
(16) GiveWell is doing an extremely valuable service, but I don't think that service is exactly "finding the best possible giving opportunities" -- more finding very good giving opportunities and being wildly more careful and meticulous about how good they are and why we think so
(17) I think there's a bunch of obvious true stuff that most people would agree with but that has pretty outrageous implications if you try to seriously apply it to policy, research priorities, etc.
(18) That is stuff like "probably dogs can suffer, and torturing them is bad" and "we don't want humanity to go extinct this century" and "the universe is very big, and potentially human descendants can lead incredible and weird lives on a scale hard to think about".
(19) These aren't that weird, in that very normal people go "yeah, I definitely think animal cruelty is bad" and "yeah, I definitely want my children to not die in a fiery apocalypse" and "yeah, someday maybe our great-grandkids will see the sunrise on other worlds".
(20) They are weird in that even if you're being conservative and careful, taking them at all seriously ought to really really change your planning and priorities.
(21) If animal cruelty is bad, even if it's just kinda a little bad, then factory farming is horrendously fucked up, actually, and probably torturing a billion animals is a lot worse than torturing one animal.
(22) (That said I mostly don't think people should go vegan unless they're really good at making sure they get the nutrients they need to stay healthy; mostly I think people should be thinking about systemic change.)
(23) Also a lot of EA veganism discourse is premised on things like "it's fundamentally not okay to do harm" rather than "it's good to make tradeoffs that make the world a lot better as cheaply as possible" and I think that's pretty unhealthy for EA.
(24) If you have a movement full of people who are hyperconcerned with doing the right thing, the important message to push is acting *efficiently* and *thoughtfully* and making sure your actions push in the right direction, NOT purity.
(25) My history with anorexia obviously influences me here but I just seethe every time some effective altruists start talking about how we should think about food.
(26) Also it's healthy sometimes to be a hypocrite and say 'yeah, I think factory farming is really bad but I eat factory farmed meat and I am not planning to change that'. Easier to be honest with yourself about your values if you don't force yourself to instantly live up to them.
(27) Fundamentally EA is at its most useful when it's a set of tools for thinking more clearly about your values and your priorities, and at its least useful when it makes those things scary to think about.
(28) If you find yourself thinking "I should learn that, but it's terrifying because then I'll have to [do something] [disagree with someone] etc", halt and CATCH FIRE. There is nothing more valuable than your ability to reason about the world.
(29) If EA in general, for whatever reason, makes it harder for you to think clearly about your own values and priorities, go somewhere better - please. It's the right thing to do for yourself and what you care about.
(30) Various people have found EA bad as an environment they could think in, for various reasons that weren't true for me. That makes sense, because there are hella competing needs in terms of what makes a place safe to think in.
(31) There's definitely an impulse to say 'but that wasn't real EA', and I definitely hope EA grows and does better, but the first thing to say is that you should never try to force yourself into a mold where you're unable to think clearly, and an intellectual movement can be that mold.
(32) I think EA does, in fact, have something distinctive and valuable in the clear-thinking department. It's not the obvious stuff. It's not that GiveWell has a lot of spreadsheets - that's a symptom.
(33) It's something like - it's so astoundingly, incomprehensibly hard to answer real questions about the world. Being around other people trying to answer real questions is essential, and there are things you learn that are hard to articulate.
(34) One of the most important things you learn is what it feels like to be making progress on a hard problem. There's this state where you're still holding a lot of contradictory pieces in your head but at least you can see where you're confused, where they can't both be right.
(35) I think young people should try quite hard to be in environments where people are trying to solve real hard problems, and the EA movement has that, and that makes it incredibly valuable and incredibly compelling for some.
(36) There are a bunch of categories of critique of EA that I quietly seethe at, maybe unfairly. In the last couple days I've seen a lot of declarations that we are missing the true valuable things in life.
(37) I think effective altruism, not just as ideally practiced but also as normally practiced by actual effective altruists, does not entail missing the true valuable things in life at all, and in fact generally involves leading a pretty good and cool life.
(38) I think this is important and healthy and that it has in fact been spectacularly bad almost every time anyone has tried to live a miserable life for the sake of the cause, and if that's you I want to try to strongly dissuade you.
(39) I also found myself seething today at the critique (made by a man) that effective altruism probably lacked women because it lacks the feminine, nurturing touch. Trying really hard to solve problems is a gender-neutral activity.
(40) And many of the women I admire most profoundly, women who have particularly cool and impactful and interesting careers, are in EA, just because EA is really really full of opportunities to do ludicrously cool stuff right now.
(41) I do personally value nurturing environments a lot. I have young kids, I like being around others who have young kids, I prefer Sunday morning brunches with friends to late night parties, I skip all the EAG afterparties.
(42) But like, quite aside from my wishing we wouldn't gender that preference so aggressively, I prefer that so I do that. You can just host brunches and skip parties and have babies! If you have babies I will hold and coo at your babies!
(43) Another critique, which I heard from two different people and one time it drove me up the wall and one time I took it very deeply to heart: that effective altruism wants to create a bunch of people who are all identical instead of distinct and individual.
(44) One person I heard this from felt, herself, pressured to be more like other people, like the perfect grant candidate, to have the right beliefs about big confusing topics, to pick the Impactful career without regard for her strengths. THIS IS VERY BAD DON'T DO IT.
(45) Impact can be an input into career choice, but dear god, your strengths should be a PRIMARY input into career choice! Do the things you're good at! Do the things you have the potential to be exceptionally good at!
(46) Official EA messaging does not say to do this, of course, but if it's a direction people are being pushed in, I think that's quite bad, especially because, for a college student, how to choose a career is something where there's just a glaring dearth of simple good advice.
(47) However, when I hear the same critique from outsiders navelgazing about EA, I want to say, oh, screw you. I'm good at writing so I decided to have an impact on the world through writing. My wife's a software engineer. My friend writes fiction. (cont)
(still 47) My other friend is ludicrously good at talking to people and connecting with them intellectually and noticing what kinds of hard problems compel them, and does recruiting stuff. Another friend is an editor and TV writer.
(stiiiill 47) I know people who are in grad school to study climate and development and ML and philosophy and political science and biology and cryptography and information security. I know people who blog and have day jobs in a range of fields even weirder than that.
(stillllllll 47) I know socialists and libertarians and anarchocapitalists and liberals and atheists and Catholics and Protestants and Jews and people who invented their own kinda weird religion but are very serious about it.
(stillllll 47) I know people who have four kids so far and people who definitely never want children and people who are homeschooling and people who think that sounds like a terrible idea and people with a quilting hobby and people with a parasailing hobby.
(47 fin) If we all look the same to you from the outside I think that you might be compressing a lot of human experiences into a box that makes you feel more comfortable dismissing it.
(48) It's extremely weird to be in a social circle with billionaires. I do think that a society that was well-structured would mostly not have billionaires and would certainly not have whether major global priorities research gets funded depend on them.
(49) But I do think that, say, Dustin Moskovitz is an awesome person and I am kind of okay with the idea that we just praise people and talk them up if they save tons of lives and fund tons of important stuff. (To my knowledge he hasn't ever given me money.)
(50) This is coming from a very pragmatic place. I want all the other billionaires to see how nice everyone is to Dustin and wonder what his secret is and then spend tons of *their* money on really important stuff.
(51) (Billionaires are absolutely on Twitter paying attention to how people talk about them versus the other billionaires.)
(52) (Marc Benioff should consider getting into pandemic policy. It's where the cool billionaires are.)
(53) (Elon Musk should stop doing AI and stick to areas where being a reckless unilateralist kinda-troll isn't incredibly dangerous. I do love our solar panels though.)
(54) (Patrick Collison could be an incredible force for good on the bio and pandemic preparedness front and I sincerely wish he were correct about progress but I suspect he is tragically not.)
(55) That's a good jumping off point for some takes on developing new tech and stuff. First thing here is that I am firmly on the side that life is better today than at almost any point in history and that's because of technology.
(56) I am alive today because of technology, I don't have to anticipate burying a child someday soon because of technology, I live on this abundant and awesome and exciting planet with access to more of the stuff that makes my life good than almost anyone in history.
(57) Also quite seriously I think we are going to get ourselves killed. We have, for the most part, so far not invented anything that just destroys civilization if a single faction messes up or wants to do that.
(58) We are now making progress towards being able to do that on several fronts. I'm most worried about bio and AI, but I don't even think those are the only ways to do it. Climate change isn't an existential risk but that's kind of a dodged bullet -
(59) We could have (from our state of information when we started burning coal) lived on a planet with feedback loops that were far more vicious and intractable. It's going to be bad. It could have been so much worse.
(60) We accidentally fucked up the ozone layer -- ooops!! but noticed soon enough we could remedy it, and the remedy was pretty straightforward. We accidentally drove lots of species extinct -- ooops! - but they do not look likely to take us down with them.
(61) But dear fucking god, how many times can you play that game? How many times can a powerful civilization with limited coordination ability, damaged mechanisms for creating and sharing knowledge, and really impressive chemistry and bio and computing, do irreversible things?
(62) That's the very low level intuition that produces "I do not think humanity's present trajectory is sustainable." It's not what produces my specific AI and bio fears.
(63) The fear there is that specific research programs which there is overwhelming incentive to perform will make it really easy to kill us all.
(64) I totally think effective altruism should be working on that. I don't think that the work on that should in any sense get to hide behind the bednets. We should present effective altruism as divisible as possible, so people feel free to disagree here and agree on the bednets.
(65) But when you have a specific line of research that looks quite likely to be catastrophic, you are in an unusual emergency situation, and I think we're in that unusual emergency situation.
(66) Now, being in that unusual emergency situation is very bad, and not just for the world. For me, being extremely worried about developments in those two fields sits really unpleasantly with my general joy and delight at human tech progress and conviction it should continue.
(67) It's also bad for the ability to think that I said above is incredibly important, probably the most important thing. A lot of people have a very hard time thinking about future technology and about high-stakes scenarios.
(68) It makes people panicky, avoidant, dismissive, or alternately really easily persuadable to change their entire life trajectory in a way that is more motivated by panic and that can be hard to fix.
(69) It's stressful. It's isolating. It is bad for the ordinary good practices that are the base of extraordinary good practices - bad for relationships, bad for long-term planning, bad for the parts of me that enjoy saving for retirement.
(70) But, like, we do have this problem, so we have to build an intellectual community that is healthily tackling this problem, rather than run off to healthily tackle easier problems.
(71) I think that most of what EAs believe about AI is deeply technical, and confusing for that reason, but not actually in any specific point radically out of step with what non-EAs believe - the weirdness comes, in a sense, from putting it all together.
(72) Most people know that tech companies will spend tons of money on profitable new technologies that can replace human labor. Most people agree that they'll probably deploy those with inadequate oversight.
(73) There is a bunch of important technical detail buried in 'inadequate oversight'. Lots of people think that AGI would be deadly with inadequate oversight but adequate oversight won't be very hard.
(74) Other people believe that adequate oversight at tech companies won't save us at all because it'll just be a few years after that until this is something grad students can do themselves on their home gaming computer.
(75) But, like, the core here isn't actually very radical or unexplainable, it just unfolds into thousands and thousands of details that are crucial and hard to convey.
(76) I'm a communicator and I often feel wildly out of my depths when talking about AI, sometimes because the technical stuff goes over my head but more often because the core of a disagreement about how much tech company oversight helps....
(77) turns out to be a disagreement about whether current neural nets are the kind of thing that scaled up might have transformative general capabilities, and that when you dig into it turns out to be a disagreement about what powers human intelligence.
(78) Or a disagreement about whether we'll be able to use slightly weaker models to align smarter models turns out to be a disagreement about whether AlphaGo was lots better than previous Go programs because of a lack of investment or interest in the field disanalogous to AGI.
(79) Or a disagreement about whether AI systems will be able to explain themselves to us and explain misconduct by other AI systems to us will turn out to be an argument about whether weak AIs are going to be deployed in many places before strong ones come, which turns out....
(80) to be an argument about the US regulatory environment and to what degree profitable technology gets rolled out at scale in a fairly short time period by big tech companies.
(81) In general I don't think the field of AI alignment research is spectacularly healthy, and that seems bad, possibly the most bad thing, and because I'm a communicator I think a lot about whether it can be fixed with communication.
(82) But if you want to understand stuff I think the best start is probably just reading Ajeya Cotra and if everyone having these debates read all her published reports the debates would improve dramatically, so don't wait on me.
(83) I think AI alignment work is not really at odds with AI ethics/bias work, though for a while I hoped they would just turn out to be the same thing (solving one entailed solving the other) and I now sadly think that's not true.
(84) (Ajeya Cotra is again the person who convinced me, in a recent report in which she notes that the process of training AIs out of bias can easily embed in their goal system a lot of hard-to-detect dangerous stuff.)
(85) But anyway they could still both get along and be part of the same healthy research field! In general hostility towards people who are solving different problems than you, or insistence that the problems they're solving don't matter, is a really bad look on both sides.
(86) The case for bio is way less complicated because Covid made it for me: that could have been much worse, it'll happen again, it could also be made to happen deliberately, eventually we lose.
(87) I think effective altruist global priorities writing from pre-Covid holds up remarkably well and it makes me feel more positive about other effective altruist global priorities writing.
(88) It is really demoralizing and infuriating how hard it has been to get from the above obvious state of affairs that people don't even really disagree on to some basic measures that might help fix it.
(89) I've been mentioning a bunch how much EA just takes premises that aren't very weird and arrives at weird conclusions, but I should note that sometimes EA premises are in fact super weird!
(90) If you're thinking about the human future and about how AIs might behave, you end up thinking about things like what values we might expect random evolved alien species on other planets to have, how much we should care about things in causally disconnected universes, etc.
(91) Also I think many people have an implicit premise that there's some kind of God or afterlife, and while there are religious EAs, all of the above is very firmly premised on us living in a world made of atoms without an interventionist creator.
(92) Also I think most people happily concede that dogs have experiences and shouldn't be tortured but are way less sure about chickens and fish, which is pretty reasonable, we in fact ought to be much more sure about animals more similar to us.
(93) Also a lot of people explicitly or implicitly believe there should be fewer people in the world, that the ideal human population size is quite small, and most EAs though not all of them think the ideal eventual human population size is very very very big.
(94) A lot of people also think it's good that people eventually get old and die and EAs tend to think it's extremely bad though surprisingly little is actually premised on this.
(95) EA has substantial intellectual heritage from the rationalists. I think this is a necessary counterweight against various pushes in the other direction towards being more PR-friendly, being less weird/wacky, substituting some clear thinking for results.
(96) That said EA will probably achieve more results if it moves in the direction of being less weird and wacky, if it can do that without compromising at all (I'm kind of fanatical about this) on the clear thinking.
(97) My big hope for how to do that is that if people thinking about the world through an effective altruist lens talk and think out loud, and are clearly doing something cool, distinctive, and valuable, and clearly don't get torn apart for it, then it will draw people in.
(98) This is an easy thing to say if you're a journalist and have an unusually thick skin and have decent instincts for how not to say a thing in the way that appalls and infuriates everyone who hears it, but I think people are too scared of being torn apart for saying things.
(99) There are a bunch of jerks who will say awful things to you if you make a mistake in public, and if you try to reason openly and sincerely in public, you will make a mistake.
(100) But under most circumstances all they'll do is say awful things to you, and there are other people who will engage with you and respect you, and if you never say your mistakes aloud you'll just keep making them.
(101) There are powerful forces pushing EA in the direction of being more circumspect. The greater public attention would do it all by itself. The move into politics could do it. The degree to which more arguments are grounding out in weird technical stuff definitely does it.
(102) But it seems quite bad if the thing that effective altruism in some senses most offers the world -- an unusual perspective, articulated clearly and carefully and thoughtfully - gets hidden away in private as the movement gets bigger.
(103) Past a hundred and I think these are going to start getting incredibly specific. A lot of AI people have unstated assumptions about what kind of arms race they might be in/might have to worry about that bottom out in questions about Chinese tech policy.
(104) However there is very little public discussion of Chinese tech policy and a lot of people don't seem to realize that their org strategy substantially stands or falls on the assumption that there's an arms race with China.
(105) There have been a couple EA forum posts that make the vague critique, 'be aware that race dynamics are very bad for safety', but not the specific one, 'think about whether, if China's actually being mismanaged into the ground, that flips the sign of everything you do'.
(106) There are also a lot of people who don't understand exactly what theory about how we're going to achieve AI alignment is motivating what specific research programs. Some of them want to work on AI safety or are working on it.
(107) I think this is the product of a dynamic where, between the money flowing into EA and the language models and Eliezer's doompost, a bunch of people started running around the AI space, but a big public lay of the land doesn't exist.
(108) My wife, by the way, looking over my shoulder, tells me "you should really have your offer to write takes grow with the logarithm of the likes you get". She and my girlfriend are now debating the best base for this logarithm. imo this counts as a take on effective altruism.
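(Indulging the spreadsheet instinct for a second: a minimal Python sketch of that proposed rule, not anything anyone actually wrote - the function name and the default base are invented for illustration, since the right base was still under debate.)

    import math

    def takes_owed(likes: int, base: float = 2.0) -> int:
        """Takes owed under the logarithmic pricing rule, for a given like count."""
        if likes < 1:
            return 0
        return max(1, round(math.log(likes, base)))

    # Under the original 1 like = 1 opinion offer, 236 likes would mean 236 takes;
    # under the logarithmic rule it is a much more manageable number.
    print(takes_owed(236))           # base 2: about 8 takes
    print(takes_owed(236, base=10))  # base 10: about 2 takes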
(109) If 'slightly weaker AI' isn't really a thing and there are large unpredictable discontinuities in AI capabilities for any reason, then I think we are probably going to fail AI alignment and all die; all the plans that I have heard that might work assume not that.
(110) If 'slightly weaker AI' is a thing, then I think some of the plans I've heard in the broad category 'use slightly weaker AIs and idk some formalization of heuristic arguments Paul came up with to align slightly stronger AIs' seem kind of promising and might work fine.
(111) I find it kind of tempting, given this state of affairs, to go "okay, assuming no discontinuities..." but Eliezer Yudkowsky will be so disappointed in me, so I don't do that.
(112) The arguments for 'definitely discontinuities' seem pretty tenuous to me, though.
(113) I really don't think anyone's going to solve interpretability enough that this just itself solves alignment though I tentatively think if you're not pushing the state of the art in capabilities it's worth someone spending five years trying.
(114) imo working on AI capabilities right now is an understandable thing to do, but a very bad one. I could imagine someone working on capabilities having a justification that felt to me like it was persuasive but the existing people seem to have much worse justifications.
(115) I love some things about Silicon Valley tech culture, but I think it's pretty destructive as the default for AI companies to be operating from.
(116) It kind of seems like there's a weird and sort of stupid degree of important people making AI-related strategic decisions not even understanding what other important people think about AI strategy.
(117) This is never trivial to resolve because it, again, tends to bottom out in some incredibly detailed technical debate, but it's definitely a very obvious way we're doing much worse than it feels like we intuitively could be.
(118) I think it's quite bad how lots of people opine on AI while being deeply confused about who is doing what for what reasons, and I really wish I could make them all read something about who is doing what and why.
(119) When I last said this, someone told me that I was trying to convey 'everything's under control, we're doing fine'. The AI situation seems very not at all under control to me and I think we're doing very badly. I just think people should know why other people do things.
(120) Young EAs who become convinced we have an AI disaster on our hands often go looking for AI safety orgs that are hiring. The ones that think we have an incredibly hard problem on our hands generally aren't hiring much. The ones that think the problem is easy are hiring.
(121) This means that lots of people who want to work on safety go work at whichever organization thinks safety is easiest. This seems bad.
(122) I expect the future to get extremely weird and in some ways good (rich, productive, inventive) and in some ways very bad (turbulent, confusing, lack of good reasoning making sense of everything, abundant AI-generated reasoning) before we reach a critical point for AI.
(123) If that happens, it seems good for AI safety people who have been predicting it to try to explain what's going on to people and earn a reputation for being right.
(124) Okay enough AI takes, global health and development for the next hundred or so. The global health research field has its issues and plenty has been written about them, but I deeply admire many of the researchers I have met through their global health work.
(125) I think for the most part people are thinking hard about real, deeply important problems, trying things, talking pretty openly about which solutions they think work or don't and what's going on there, etc.
(126) The big places where incentives seem deeply unhealthy are the incentives to invent something new instead of deploying something slightly better, and the (related) thing where complicated things are more fun but don't tend to scale.
(127) The rule of thumb here is pretty famous and pretty simple: it scales if it's hard to get wrong, not hard to get right. If it takes unusual skill or discretion to implement, it won't scale.
(128) We haven't even exhausted the gains from doing things that definitely scale fine, but it still seems like a huge problem if we can't scale complicated things because many important ingredients of a better life - especially education - are pretty complicated.
(129) I would love to see more research focused on instructions to public servants implementing programs, trainings for those public servants, incentives and payment systems -- what works to get hard things done at scale?
(130) Mass deworming programs seem great in high worm prevalence areas, meh in low worm prevalence areas. I think a lot of people just love "debunking" global aid and that means their readers/listeners end up with really misleading impressions of what's going on.
(131) At the same time, mass deworming in low prevalence areas really does seem pretty meh, I don't personally donate to it, and even though I think GiveWell was pretty clear about their rationale for the rec it clearly took lots of people by surprise so be clearer I guess.
(132) I think that a lot of Western criticism of aid is focused not on the beneficiaries but on stupid local political point-scoring. I get angry when I read criticisms of aid that do not focus on the aid failing to improve the lives of the recipients.
(133) I get especially annoyed by criticisms that seem to use what Jai delightfully termed the Copenhagen Interpretation of Ethics, that interacting with a problem makes you culpable if you don't fully solve it. blog.jaibot.com/the-copenhagen…
(134) Some people think that Good Ventures doesn't give more to global health and development in anticipation of other donors / that if there credibly weren't other donors they'd give more. I basically think this is incorrect?
(135) The calculations about the last dollar are super complicated but my sense is that mostly people expect that they'd get less good done with all the money in total by spending more on GiveWell top charities, even when those have a real funding gap.
(136) That said, this probably changes as the amount of money EA has to give away swings wildly with the fortunes of a small number of specific people. (I always want to ask why they aren't more hedged but it seems like a rude Q for people who have def thought about this.)
(137) Personally I give to global health stuff. I would probably give to x-risk stuff if I knew of a good x-risk funding thing that for some reason other people couldn't touch, but I don't.
(138) I feel ludicrously lucky I was born here, in the richest country in the world, to an upper middle class family, at the richest time in history, at a moment when my choices really matter. I want everyone to have that.
(139) Except that I don't want anyone's choices to potentially be very important to whether there is any human future at all, that seems like an unhealthy amount of pressure really.
(140) I think I donate to global health and dev for approximately the same reasons I try to be a good mother and a good wife and a journalist with integrity and a good tipper at my favorite coffee shop. I think you can't get hard stuff right if you don't get easy stuff right.
(141) Saving people's lives is important, and it is good, and it is part of living the life in which I am my fully realized self, think clearly, act clearly, and hopefully do a lot of good through my choices.
(142) I also think most human lives are really good! I can't justify this at all; at some point it all comes down to intuition, but I think we should mostly trust people that their lives, which they are living, are worth it to them.
(143) Sometimes people express confusion that we had kids while thinking there's a high chance of something going catastrophically wrong this century. But better to have lived and died than never have lived at all imo.
(144) That said I definitely don't think we should be trying to deliberately increase the population or anything. The population should be whatever people authentically want when they have a choice, and then in the future our options will change radically.
(145) In general a lot of people say really awful things when thinking about population. I think it's important people can make mistakes out loud but I have a hard time not snapping at people when they assert things about 'overpopulation' and how there should be fewer of us.
(146) We don't have overpopulation, the things I care about would mostly be worse if there were fewer of us, and I think that 'overpopulation' worries come from a pretty deeply unhealthy mindset in which people are competition for resources.
(147) Today I saw someone criticizing an EA because one zany hypothetical in some paper the EA wrote is about raising a lot of clones of von Neumann or something. Their critique of this? That given overpopulation, it's an appalling thing to suggest we make more people.
(148) I think people who say and fervently affirm criticisms like that don't think through the implications of a "people shouldn't have the right to make more people" stance for reproductive freedom, but uh - okay, personal note here -
(149) I'm a queer woman with some pregnancy-affecting health issues stemming from my history of anorexia, and as a result we had to get a clinic to approve us for the daughter we are trying to have. It was a nightmare.
(150) Being part of a system where people assessed whether I was really sick enough to qualify for a surrogate, whether my family situation fit their guidelines...going to meetings where we had to put on the most careful possible face lest they decide we shouldn't have a baby...
(151) It was genuinely one of the most unpleasant disempowering experiences of my life. I am very very categorically against efforts to decide who is allowed to make weird reproductive choices. I am very in favor of people getting to make weird reproductive choices.
(152) It's terrifying seeing people talk casually about how various reproductive choices (sometimes as simple as poor people having kids, but also spanning embryo selection, surrogacy, artificial wombs...) are bad and should be disallowed because there's too many people anyway.
(153) I think of transhumanism as fundamentally a feminist issue and I like that effective altruists tend to be transhumanism-friendly, and I think that criticisms of effective altruism which attack it from that angle are the ones that get my hackles up most.
(154) Migration subsidies seem cool, people should look into them more. I know I only think this because Mushfiq Mobarak is a great speaker but I still think I'm right.
(155) It is simultaneously true that EA is too US centric to be ideal for its goals and also that it's wildly more geographically and geopolitically diverse than every other social or professional setting I have been part of.
(156) I basically believe in progress on the global development front. I think that trends will continue and most of the world will be much better in 50 years than it is now if there's no catastrophe.
(157) I expect that to happen less through us cracking some code for how to make development happen across the board and more through rising wages and productivity growth worldwide, plus cheaper consumer goods, plus better internet delivery in rural areas and better power systems
(158) I think some of the most valuable critics of effective altruism are global dev people who can see what we're missing. It can be really hard to tell them apart from global dev people who don't see why we're doing a weird thing, but well worth the translation effort.
(159) Factory farming is likely to scale up as the world gets richer, and that makes me really sad. I vary in how optimistic I feel that we'll be able to get anywhere with meat substitutes.
(bathing children, returning in ~1hr)
Mom (not me) made meringues and Wednesdays are dessert night so both kids wanted to rush through their baths as fast as possible to get meringues. Resuming:
(160) Many people have very different values than GiveWell and should probably try figuring out which of GiveWell's top charities are best by their own values. In particular, I always tell pro-life friends they should prefer the Against Malaria Foundation to Malaria Consortium.
(161) (Malaria is a leading cause of miscarriages and stillbirths, and AMF protects pregnant women while MC's seasonal chemoprevention is narrowly targeted at young children, so AMF prevents those miscarriages and MC doesn't.)
(162) But there's probably a lot of things like that. It's really hard to answer values questions and also they end up mattering a fair bit in comparing development interventions.
(163) I think there's a fair case that GiveWell should be trying really hard to elicit values from recipient populations and using those, but this is challenging too!
(164) There's criticism of effective altruism that's kind of 'you think you're better than us?' and... I genuinely try not to form opinions of other people's choices, I have no idea of their constraints, but I do think making the world better is better than not doing that.
(165) I have a great deal of respect for people who think seriously about what the most important problem in the world is + end up profoundly disagreeing with me, but if you look out at this world and don't see *anything* worth working really hard to fix...I think you're wrong.
(166) (I deliberately make an effort to read critiques without Googling to satisfy the contrary voice inside me that says "and what are YOU doing" because a lot of important things aren't legible like that and there's no one I want to tune out.)
(167) (But if you're actually not doing much to act on your own best guess about our biggest problems and how to solve them, and not acting to better understand those problems so you can act soon, then yeah, I think you should try it.)
(168) I usually give to AMF though in recent years out of something in between 'contrarianism' and 'concern about a missing middle in scaling opportunities' I've been looking at funding stuff like YRISE, Evidence Action, Charity Entrepreneurship, for scaling programs.
(169) I think there's an unfortunate social dynamic in EA where people differ in their taste for uncertainty and unsolvable problems and weird conclusions, and *any* problem needs people at a wide range of tastes there.
(170) You need people who are instinctively very "we know nets work, let's do nets" and people who are instinctively very "gene drive? sterile male mosquitoes?" and people who are instinctively very "what did the US do? let's solve it how rich countries solve this for themselves"
(171) and people who are instinctively very "if we give people money won't they get nets if nets are what they value most?"
(172) But an unfortunate thing about effective altruism is there's some weirdness stratification, where people drawn to certain styles of thinking/certain heuristics/certain kinds of skepticism end up in different causes.
(173) So the people who are more comfortable with weird and ambitious stuff end up in bio/AI/political change/whatever, and there's less weird/ambitious global health stuff than there should be.
(174) (And probably less ploddingly rigorous AI stuff than there should be? Less sure of that but it sure is the corresponding claim).
(175) The distinction in EA between x-risk and not x-risk has lately been called 'longtermism' and it's clearly a resonant message that is bringing lots of people to EA - including bringing lots of people to short-termist, malaria-nets EA.
(176) I'm pleased about that because as I said earlier I think it's quite seriously wrong for the weird EA stuff to hide behind good PR from the normal EA stuff. Much healthier to advertise all the stuff openly.
(177) But I do think it's a kind of weird, contingent distinction that doesn't get at why most people working on x-risk are working on x-risk.
(178) Mostly they think we're all going to die! In like 15-30 years! That's not just a big deal if you care about the long-term future, it's (if true) also the most important thing you can do for many people living in extreme poverty.
(179) "We should protect future generations" is popular, true, important, and merits thinking deeply about. But also the people working on x-risk mostly have additional extremely important premises.
(180) So what should you do if you care about people alive today and want to do as much good for them as possible? I think kind of the same thing as if you're a longtermist, honestly: figure out whether you find credible the case that we're in a very scary situation.
(181) I think in general people work on global health and development because they think that is untrue, rather than because they are "shorttermists" who don't care if it is.
(182) ....in practice sometimes it's also for very human reasons like that the x-risk people are a bad culture fit/one of them was appallingly condescending at a party/AI is really stressful to read about/x-risk feels like some kind of mental 'hack' of your attention.
(183) But like it's pretty much never 'yeah I am just indifferent about whether humanity goes extinct so long as it happens after all current humans have died of old age', which is a stance you can have but a predictably-time-variant one and not very common in practice.
(184) Re: existential risk people being condescending at parties, I think everyone should admit that this is a significant factor in how people actually make decisions, while also aspiring to be someone who can notice when *really annoying* people are right about something.
(185) Hmmm, some people I find it a little hard to admit are right about things....I think the Happier Lives team has done some incredibly valuable red-teaming and robustness-to-values-differences-checking of GiveWell's work, we're lucky to have them
(186) ...though I find all their stuff stressful to read because I think people's lives are default very good and our moral assumptions are super different
(187) EA's criticism contest was just a good idea; it got lots of people to publish criticisms that I have in many cases learned from and changed my mind based on.
(188) Some people insisted they didn't want to hear real fundamental criticism but I think this is a tendency to anthropomorphize 'the vague institutional incentives of EA' and forget that contest wording decisions were made, by, like, one specific person. Probably it was Lizka.
(189) That's the shape of a kind of EA critique I find annoying: it points out some good and true things about institutional incentives, and then swaps them in for explanations of actual decisions that I happen to know details of where the 'big picture' explanation is just wrong.
(190) There are institutional incentives, and they matter, but I honestly find it more terrifying how much is decided by factors that are just narrow and specific and weird. A and B got lunch and A had a bad feeling about the entire line of research Z.
(191) Also I think the strongest institutional incentive in EA is that if the cool people think you're cool, you get to go to their cool house parties and have really cool conversations with them.
(192) Quite seriously, unless you want a job in AI or related work, you don't need to have reasonable-seeming AI opinions to get hired in EA roles. But you often do need to have cool opinions for cool people to like you and humans being humans that's a serious incentive.
(193) A generalization of that: you will adopt values and priorities from the people you hang out with, so make sure that their values and priorities include 'being disagreed with' and 'independent thought' and 'making uncorrelated mistakes'.
(194) This is unfortunately hard to solve because you can't just tell everyone to consider people cool even if their opinions seem uncool. But I do think that a few cool people projecting the sincere desire that people feel alive and able to think matters substantially.
(195) Related to that, there are tons of people getting into EA, it has this very powerful framework and set of priorities and people correctly realize that's going to be really valuable to them, and that can make it hard for them to notice if it's also kind of bad in some way.
(196) The versions I see most are the ones I already mentioned - feeling like you're supposed to be someone else, feeling scared to think/not trusting of your own reasoning processes, but there's some other stuff that people seem to reliably go through that I wish they wouldn't.
(197) One is the agonizing over how all purchases could go to malaria nets instead/money is denominated in dead babies. As a movement EA used to send way more messages about this, but lots of people still experience it getting into modern EA.
(198) I kind of wonder if for some people it's just a stage we have to go through, wrestle with, and arrive at a synthesis of, and external support can speed that up but not really skip it.
(199) You also get a lot of people worrying about whether their motives are pure enough, whether they're altruistic enough, whether they are in it for the right reasons. That one I think we should be able to drastically reduce.
(200) The point of doing good is the effects on the recipients! It doesn't matter what configuration you fall into internally when you help others. You can be self-congratulatory. You can be smug. You can not feel much of anything at all. It's fine.
(201) I also think there's something healthy about getting some distance from your self-conception as a good person. You aren't the most altruistic possible version of you, and that's fine. There's some sense in which you 'could be doing more', and that's just how it goes.
(202) It takes time, and effort, but eventually you can arrive at a kind of balance where you're trying to do the right thing, and you know there's stuff you aren't doing, and you try to be honest with yourself about that and okay with it.
(203) and no matter how high the stakes get -- and the stakes are ludicrously high, even if you're skeptical of all the catastrophic risks, the stakes are billions of dollars and millions of lives - you need that bit of distance to be able to live with the person you are.
(204) One thing I like to do, sometimes, is imagine that we succeed at building a much much better world, and someday my grandchildren want to interview me, the way I have been emailing my Holocaust-survivor grandmother all this year, trying to understand the world she lived in.
(205) If we've built a much much better world, they'll be kinder, and safer, and face less sharp tradeoffs, and they won't understand how hard it was, and they'll want to understand, and I'll want to explain, but maybe it'll be a gulf beyond explaining.
(206) I hope it is. I hope we build a world that can't, not without some sci-fi mindsharing tech, understand us, that doesn't know what it was like to have to make these tradeoffs, that can see all the places where we made them wrong.
(207) I think they'll forgive us. I think they'll be scared for us, and sad for us, and they won't understand, but I think they'll forgive us, and be so glad we forged on. And I try to borrow the perspective of that better world, sometimes, and forgive myself.
(208) I think the hardest kinds of virtues to have are often not the virtue of being willing to suffer a lot, or to work really hard, but the virtue of being willing to live with extraordinary uncertainty, or to work on the right thing, or to check if you were wrong.
(209) When I see people failing to do as much good as they could, it's not often because they weren't willing to suffer as much as it took, it's because they had to admit that something dear to them and part of their self-narrative and psychologically loadbearing was wrong.
(210) So I think most people in EA could stand to worry less about whether they're altruistic enough and more about whether something important is true that they want to be false.
(back after children abed)
(211) Aaaaand we're back with a very important take, which is that "existential risk is so bad that even a very tiny chance of it is worth taking huge measures" is a terrible explanation of how to make decisions, you should never make decisions like that.
(212) To be clear, no one does this! When you ask AI people "what do you think is the probability, in the next 40 years, of AI that these concerns are relevant to" they say, like, 80%!!!
(213) But somehow the popular osmosis explanation of why people are working on AI ended up as "well, tiny chance, extraordinary importance" instead of "billions of dollars are being spent on this research program right now and the default outcome is very bad".
(214) If you're advising Kennedy around the Bay of Pigs you ideally don't want to say "there's a tiny chance that the Soviets would respond to escalation with nuclear weapons, but even a tiny chance isn't worth it", you want to correctly estimate a very very large chance.
(215) The 'tiny chance' logic mostly leads to decision paralysis. After all, there's also a tiny chance that writing this tweet thread will inspire someone to end the world! Who knows! The reason to care about existential risks is that the chance is not small.
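(A toy illustration of why that framing collapses, with completely made-up numbers and a hypothetical expected_lives_lost helper, just to show the shape of the argument:)

    # Illustrative only: expected-value arithmetic with an invented stake.
    LIVES_AT_STAKE = 8_000_000_000  # roughly everyone, in the worst case

    def expected_lives_lost(p_catastrophe: float) -> float:
        """Expected lives lost = probability of catastrophe times lives at stake."""
        return p_catastrophe * LIVES_AT_STAKE

    # "Even a tiny chance" framing: a one-in-a-billion risk already looks like
    # several expected lives, so almost any action (including a tweet thread)
    # clears the bar - which is exactly why this framing produces paralysis.
    print(expected_lives_lost(1e-9))  # 8.0

    # The claim actually being made: people in the field putting the chance of
    # the relevant kind of AI at tens of percent, not one in a billion.
    print(expected_lives_lost(0.8))   # 6,400,000,000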
(216) I think that some of this is that many people have previously been exposed to the "this may seem unlikely, but if there's even a chance" logic for existential risk - maybe in high school policy debate, which loooves that argument.
(217) and so when they hear an argument for existential risk, they assume it's that, and don't read deeply enough to go "oh, this is a case that the most likely outcome absent intervention is transformative AI in our lifetimes, to accept or reject on its merits."
(218) But I would give a lot for everyone to stop talking about tiny chances.
(219) Related to that, you misunderstand the movement both sociologically and epistemically if you think of it as having been derailed by x-risk. X-risk was a big part of EA from day one. The mindset that leads to noticing it is the mindset that leads to noticing other stuff too.
(220) If I got to instill three mental habits in all journalists and commentatariat types, they would be:
(221) Predictions are very hard, and relatedly, if a group of people are getting a lot of important predictions right so far, you should immediately take the rest of their predictions more seriously. (A toy sketch of what that updating looks like follows the third habit below.)
(222) You should yourself spend a week looking into anything popular and potentially important that looks like a "trap for smart people": crypto or AI or whatever. Sometimes you'll conclude it was indeed a trap for smart people but sometimes you'll find out something was important!
(223) The world is really messy and really confusing and anything you squint closely at has layers and layers of critical detail. When your understanding has less resolution than that, you are missing something.
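(The sketch promised under the first habit: a tiny Bayesian toy model in Python, with every specific number invented purely for illustration, of how a run of correct hard calls should move you:)

    def posterior_insight(hits: int, misses: int,
                          prior: float = 0.1,
                          p_hit_if_insight: float = 0.7,
                          p_hit_if_guessing: float = 0.2) -> float:
        """Posterior probability that a group has real insight, given its track record."""
        like_insight = (p_hit_if_insight ** hits) * ((1 - p_hit_if_insight) ** misses)
        like_guessing = (p_hit_if_guessing ** hits) * ((1 - p_hit_if_guessing) ** misses)
        numerator = prior * like_insight
        return numerator / (numerator + (1 - prior) * like_guessing)

    # Even starting from a skeptical 10% prior, five hard predictions landing
    # against one miss moves you most of the way toward trusting their other calls.
    print(posterior_insight(hits=5, misses=1))  # ~0.96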
(224) Animals! Most people have values that are in this weird space of 'yes I think animals can suffer, no I don't think we should stop factory farming them'. EAs vary more in both directions.
(225) There's more people who think we should stop factory farming, and more people who think we should stop wild animal suffering, and also more people with weird theories of qualia that imply it's not wrong to torture dogs.
(226) I think most people don't think as much as they should about their values with respect to animals. It's a space where our society urges being muddy and incoherent - and one where it's good to figure out what you actually think instead.
(227) Okay, I tried writing a next take but everything I came up with wasn't actually that good, so I'm declaring take bankruptcy for now - thanks for reading!
