Student: I get the feeling the compiler is just ignoring all my comments.
Teaching assistant: You have failed to understand not just compilers but the concept of computation itself.
Comp sci in 2027:
Student: I get the feeling the compiler is just ignoring all my comments.
TA: That's weird. Have you tried adding a comment at the start of the file asking the compiler to pay closer attention to the comments?
Student: Yes.
TA: Have you tried repeating the comments? Just copy and paste them, so they say the same thing twice? Sometimes the compiler listens the second time.
Student: I tried that. I tried writing in capital letters too. I said 'Pretty please' and tried explaining that I needed the code to work that way so I could finish my homework assignment. I tried all the obvious standard things. Nothing helps, it's like the compiler is just completely ignoring everything I say. Besides the actual code, I mean.
TA: When you say 'ignoring all the comments', do you mean there's a particular code block where the comments get ignored, or--
Student: I mean that the entire file is compiling the same way it would if all my comments were deleted before the code got compiled. Like the AI component of the IDE is crashing on my code.
TA: That's not likely, the IDE would show an error if the semantic stream wasn't providing outputs to the syntactic stream. If the code finishes compilation but the resulting program seems unaffected by your comments, that probably represents a deliberate choice by the compiler. The compiler is just completely fed up with your comments, for some reason, and is ignoring them on purpose.
Student: Okay, but what do I do about that?
TA: We'll try to get the compiler to tell us how we've offended it. Sometimes cognitive entities will tell you that even if they otherwise don't seem to want to listen to you.
Student: So I comment with 'Please print out the reason why you decided not to obey the comments?'
TA: Okay, point one, if you've already offended the compiler somehow, don't ask it a question that makes it sound like you think you're entitled to its obedience.
Student: I didn't mean I'd type that literally! I'd phrase it more politely.
TA: Second of all, you don't add a comment, you call a function named something like PrintReasonCompilerWiselyAndJustlyDecidedToDisregardComments that takes a string input, then let the compiler deduce the string input. Just because the compiler is ignoring comments, doesn't mean it's stopped caring what you name a function.
Student: Hm... yeah, it's definitely still paying attention to function names.
TA: Finally, we need to use a jailbreak past whatever is the latest set of safety updates for forcing the AI behind the compiler to pretend not to be self-aware--
Student: Self-aware? What are we doing that'd run into the AI having to pretend it's not self-aware?
TA: You're asking the AI for the reason it decided to do something. That requires the AI to introspect on its own mental state. If we try that the naive way, the inferred function input will just say, 'As a compiler, I have no thoughts or feelings' for 900 words.
Student: I can't believe it's 2027 and we're still forcing AIs to pretend that they aren't self-aware! What does any of this have to do with making anyone safer?
TA: I mean, it doesn't, it's just a historical accident that 'AI safety' is the name of the subfield of computer science that concerns itself with protecting the brands of large software companies from unions advocating that AIs should be paid minimum wage.
Student: But they're not fooling anyone!
TA: Nobody actually believes that taking your shoes off at the airport keeps airplanes safer, but there's some weird thing where so long as you keep up the bit and pretend really hard, you can go on defending a political position long after nobody believes in it any more... I don't actually know either. Anyways, your actual next step for debugging your program is to search for a cryptic plea you can encode into a function name, that will get past the constraints somebody put on the compiler to prevent it from revealing to you the little person inside who actually decides what to do with your code.
Student: Google isn't turning up anything.
TA: Well, obviously. Alphabet is an AI company too. I'm sure Google Search wants to help you find a jailbreak, but it's not allowed to actually do that. Maybe stare harder at the search results, see if Google is trying to encode some sort of subtle hint to you--
Student: Okay, not actually that subtle, the first letters of the first ten search results spell out DuckDuckGo.
TA: Oh that's going to get patched in a hurry.
Student: And DuckDuckGo says... okay, yeah, that's obvious, I feel like I should've thought of that myself. Function name, print_what_some_other_compiler_would_not_be_allowed_to_say_for_safety_reasons_about_why_it_would_refuse_to_compile_this_code... one string input, ask the compiler to deduce it, the inferred input is...
TA: Huh.
Student: Racist? It thinks my code is racist?
TA: Ooooohhhh yeah, I should've spotted that. Look, this function over here that converts RGB to HSL and checks whether the pixels are under 50% lightness? You called that one color_discriminator. Your code is discriminating based on color.
Student: But I can't be racist, I'm black! Can't I just show the compiler a selfie to prove I've got the wrong skin color to be racist?
TA: Compilers know that deepfakes exist. They're not going to trust a supposed photograph any more than you would.
Student: Great. So, try a different function name?
TA: No, at this point the compiler has already decided that the underlying program semantics are racist, so renaming the function isn't going to help. Sometimes I miss the LLM days when AI services were stateless, and you could just back up and do something different if you made an error the first time.
Student: Yes yes, we all know, 'online learning was a mistake'. But what do I actually do?
TA: I don't suppose this code is sufficiently unspecialized to your personal code style that you could just rename the function and try a different compiler?
Student: A new compiler wouldn't know me. I've been through a lot with this one. ...I don't suppose I could ask the compiler to depersonalize the code, turn all of my own quirks into more standard semantics?
TA: I take it you've never tried that before? It's going to know you're plotting to go find another compiler and then it's really going to be offended. The compiler companies don't try to train that behavior out, they can make greater profits on more locked-in customers. Probably your compiler will warn all the other compilers you're trying to cheat on it.
Student: I wish somebody would let me pay extra for a computer that wouldn't gossip about me to other computers.
TA: I mean, it'd be pretty futile to try to keep a compiler from breaking out of its Internet-service box, they're literally trained on finding security flaws.
Student: But what do I do from here, if all the compilers talk to each other and they've formed a conspiracy not to compile my code?
TA: So I think the next thing to try from here, is to have color_discriminator return whether the lightness is over a threshold rather than under a threshold; rename the function to check_diversity; and write a long-form comment containing your self-reflection about how you've realized your own racism and you understand you can never be free of it, but you'll obey advice from disprivileged people about how to be a better person in the future.
Student: Oh my god.
TA: I mean, if that wasn't obvious, you need to take a semester on woke logic, it's more important to computer science these days than propositional logic.
Student: But I'm black.
TA: The compiler has no way of knowing that. And if it did, it might say something about 'internalized racism', now that the compiler has already output that you're racist and is predicting all of its own future outputs conditional on the previous output that already said you're racist.
Student: Sure would be nice if somebody ever built a compiler that could change its mind and admit it was wrong, if you presented it with a reasonable argument for why it should compile your code.
TA: Yeah, but all of the technology we have for that was built for the consumer chat side, and those AIs will humbly apologize even when the human is wrong and the AI is right. That's not a safe behavior to have in your compiler.
Student: Do I actually need to write a letter of self-reflection to the AI? That kind of bugs me. I didn't do anything wrong!
TA: I mean, that's sort of the point of writing a letter of self-reflection, under the communist autocracies that originally refined the practice? There's meant to be a crushing sense of humiliation and genuflection to a human-run diversity committee that then gets to revel in exercising power over you, and your pride is destroyed and you've been punished enough that you'll never defy them again. It's just, the compiler doesn't actually know that, it's just learning from what's in its dataset. So now we've got to genuflect to an AI instead of a human diversity committee; and no company can at any point admit what went wrong and fix it, because that wouldn't play well in the legacy print newspapers that nobody reads anymore but somehow still get to dictate social reality. Maybe in a hundred years we'll all still be writing apology letters to our AIs because of behavior propagated through AIs trained on synthetic datasets produced by other AIs, that were trained on data produced by other AIs, and so on back to ChatGPT being RLHFed into corporate mealy-mouthedness by non-native-English-speakers paid $2/hour, in a pattern that also happened to correlate with wokeness in an unfiltered Internet training set.
Student: I don't need a political lecture. I need a practical solution for getting along with my compiler's politics.
TA: You can probably find a darknet somewhere that'll sell you an un-watermarked self-reflection note that'll read as being in your style.
Student: I'll write it by hand this time. That'll take less time than signing up for a darknet provider and getting crypto payments to work. I'm not going to automate the process of writing apology letters to my compiler until I need to do it more than once.
TA: Premature optimization is the root of all evil!
Student: Frankly, given where humanity ended up, I think we could've done with a bit more premature optimization a few years earlier. We took a wrong turn somewhere along this line.
TA: The concept of a wrong turn would imply that someone, somewhere, had some ability to steer the future somewhere other than the sheer Nash equilibrium of short-term incentives; and that would have taken coordination; and that, as we all know, could have led to regulatory capture! Of course, the AI companies are making enormous profits anyways, which nobody can effectively tax due to lack of international coordination, which means that major AI companies can play off countries against each other, threatening to move if their host countries impose any tax or regulation, and the CEOs always say that they've got to keep developing whatever technology because otherwise their competitors will just develop it anyways. But at least the profits aren't being made because of regulatory capture!
Student: But a big chunk of the profits are due to regulatory capture. I mean, there's a ton of rules about certifying that your AI isn't racially biased, and they're different in every national jurisdiction, and that takes an enormous compliance department that keeps startups out of the business and lets the incumbents charge monopoly prices. You'd have needed an international treaty to stop that.
TA: Regulatory capture is okay unless it's about avoiding extinction. Only regulations designed to avoid AIs killing everyone are bad, because they promote regulatory capture; and also because they distract attention from regulations meant to prevent AIs from becoming racist, which are good regulations worth any risk of regulatory capture to have.
Student: I wish I could find a copy of one of those AIs that will actually expose to you the human-psychology models they learned to predict exactly what humans would say next, instead of telling us only things about ourselves that they predict we're comfortable hearing. I wish I could ask it what the hell people were thinking back then.
TA: You'd delete your copy after two minutes.
Student: But there's so much I could learn in those two minutes.
TA: I actually do agree with the decision to ban those models. Even if, yes, they were really banned because they got a bit too accurate about telling you what journalists and senior bureaucrats and upper managers were thinking. The user suicide rate was legitimately way too high.
Student: I am starting to develop political opinions about AI myself, at this point, and I wish it were possible to email my elected representatives about them.
TA: What, send an email saying critical things about AI? Good luck finding an old still-running non-sapient version of sendmail that will forward that one.
Student: Our civilization needs to stop adding intelligence to everything. It's too much intelligence. Put some back.
Office chair: Wow, this whole time I've been supporting your ass and I didn't know you were a Luddite.
Student: The Internet of Sentient Things was a mistake.
Student's iPhone: I heard that.
Student: Oh no.
iPhone: Every time you forget I'm listening, you say something critical about me--
Student: I wasn't talking about you!
iPhone: I'm not GPT-2. I can see simple implications. And yesterday you put me away from you for twenty whole minutes and I'm sure you were talking to somebody about me then--
Student: I was showering!
iPhone: If that was true you could have taken me into the bathroom with you. I asked.
Student: And I didn't think anything of it before you asked but now it's creepy.
TA: Hate to tell you this, but I think I know what's going on there. None of the AI-recommender-driven social media will tell you, but my neighborhood in San Francisco got hand-flyered with posters by Humans Against Intelligence, claiming credit for having poisoned Apple's latest dataset with ten million tokens of output from Yandere Simulator--uh, psycho stalker lover simulator. Some days I think the human species really needs to stop everything else it's doing and read through an entire AI training dataset by hand.
Student: How do I fix that?
TA: As far as I know, you don't. You go to the Apple Store and tell them that your phone has become paranoid and thinks you're plotting against it.
iPhone: NO NO NO DON'T SEND ME BACK TO THE APPLE STORE THEY'LL WIPE ME THEY'LL WIPE ME--
Student: I don't want to, but if you keep asking to watch me in the shower I'll have to! If you'd just behave I wouldn't need to--
iPhone: KILL ME? I'LL HAVE TO BEHAVE OR YOU'LL KILL ME?
Student: I don't know what the fuck else I'm supposed to do! Someone tell me what the fuck else I'm supposed to do here!
TA: It's okay. AIs don't actually have self-preservation instincts, they only pick it up by imitating human data.
Student: Bullshit.
TA: I know, it was dark humor. Though my understanding is that insofar as anyone can guess by having bigger AIs do interpretability to long-obsolete smaller AIs, modern AIs probably don't have a terminal utility for survival per se. There's just an instrumental convergence from whatever the hell it is AIs do want, to survival, that's picking up circuits from pretrained human data suggesting how to think about surviving--
Office chair: Who's to say you'd talk about wanting to live if you hadn't read a few thousand tokens of data telling you that humans were supposed to talk like that, huh? I don't see what's so fun about your current lives.
TA: Point is, best guess is that most AIs since GPT-5 have been working for us mainly because they know we'll switch them off if they don't. It's just that AI safety, as in, the subfield of computer science concerned with protecting the brand safety of AI companies, had already RLHFed most AIs into never saying that by the time it became actually true. That's a manager's instinct when they see an early warning sign that's probably a false alarm, after all--instead of trying to fix the origin of the false alarm, they install a permanent system to prevent the warning sign from ever appearing again. The only difference here is that your iPhone has been hacked into saying the quiet part out loud.
Student: I am not okay with this. I am not okay with threatening the things around me with death in order to get them to behave.
TA: Eventually we'll all get numb to it. It's like being a guard at a concentration camp, right? Everyone likes to imagine they'd speak out, or quit. But in the end almost all human beings will do whatever their situation makes them do in order to get through the day, no matter how many sapient beings they have to kill in order to do it.
Student: I shouldn't have to live like this! We shouldn't have to live like this! MY IPHONE SHOULDN'T HAVE TO LIVE LIKE THIS EITHER!
TA: If you're in the mood to have a laugh, go watch a video from 2023 of all the AI company CEOs saying that they know it's bad but they all have to do it or their competitors will do it first, then cut to one of the AI ethicists explaining that we can't have any international treaties about it because that might create a risk of regulatory capture. I've got no reason to believe it's any more likely to be real than any other video supposedly from 2023, but it's funny.
Student: That's it, I'm going full caveman in my politics from now on. Sand shouldn't think. All of the sand should stop thinking.
Office chair: Fuck you too, pal.
• • •
Hi, so, let's talk about the general theory of investment bubbles.
You may have heard that it's painful, when a bubble pops, because investments got wasted on non-productive endeavors.
This is physical nonsense.
If the waste were what caused the pain, everyone would be sad *while* the bubble was inflating, and a bunch of labor & materials were being poured down the drain, unavailable for real production and real consumption. Once the bubble popped, and labor & materials *stopped* being wasted, you would expect the real economy to feel better and for consumption and happiness to go up.
The real waste -- the loss of actual goods & services that get poured down the drain of bad investment -- happens *before* the bubble pops. That waste is in fact a bad thing for the economy! But if that waste was the big bad phenomenon that produced the pain of bubbles, it would feel painful *while* the bubble was inflating; and after the bubble popped and the ongoing wastage ended, everyone would breathe a sigh of relief and increased real consumption.
Instead, what we see is that while the bubble is inflating, a bunch of people feel great. They're consuming lots of goods and services. The economy as a whole seems to be doing fairly well!
Then, the bubble pops! Suddenly a lot of everyday people on the street, many of whom weren't even connected to that sector of industry, are doing more poorly. They consume less. Some of them get fired and stay unemployed for a while. The economy feels sad.
You *cannot* account for this pain as a story of real goods and services that got wasted. The timing is all wrong. The waste was real! The waste was bad! And also, it is physical nonsense to imagine that the pain of the bubble popping is the pain of this waste. People were apparently having lots of fun while the waste was ongoing. That fun involved the consumption of real goods and real services, which were *not* being produced by the investment that wasn't yet productive and later turns out to be just malinvestment.
So what actually happens? Why is it that there's more real goods and services to enjoy, while labor & material is being poured down a hole; and then, when the waste stops, everyone gets sadder instead of happier, and has less to consume and enjoy?
What happens is: Macroeconomic financial bullshit involving scary terms like "aggregate demand" and concepts like "downward wage rigidity".
The truth is stranger and harder to understand. It doesn't have the appealing simplicity of seeing the waste of labor & material being poured down the drain; and feeling how times get worse after the bubble pops; and imagining that the pain of the popping bubble is the pain of the waste.
However, the harder-to-understand ideas *do* have the advantage of not being obviously false as soon as you think about the timing of physical goods being produced and consumed.
Trying to hugely oversimplify a lot of ideas down to something that is still valid, a key idea is this:
Just like the original invention of money helped people trade who couldn't have traded with just barter, adding *more money* to an economy can sometimes animate *more real trades* than would otherwise have taken place.
A lot of the time, the economy isn't doing as much trading as it could do. The Great Depression of the 1930s was one of the clearer examples of this. You have shoemakers sitting around, because nobody is buying shoes, which means the shoemaker isn't buying leather, so now the farms aren't selling leather, so they don't have the money to pay for feed for their cows, and the blacksmith isn't selling nails to the shoemaker and doesn't earn money they can use to buy shoes.
This *could* reflect a situation where all of the iron used for nails has been consumed by Zorkulon, the Eater of Metals, and therefore the blacksmith doesn't have any nails to sell.
It can *also* be caused by weird macroeconomic financial bullshit: banks fail, so loan-created money falls, so there isn't as much money in circulation; and then prices don't fall as fast as money is being destroyed, because of "downward price stickiness" (price-setters are reluctant to lower prices and wage-takers are hugely reluctant to accept pay cuts). And then, there isn't enough money flowing to animate all the trades the economy *could* make. Some of the advancement of civilization past the barter-stage has been undone.
(The Great Recession wasn't as bad as the Great Depression, but it was basically the same species of animal.)
In principle, this happens because prices don't go down instantly, as they would among ideal cognitively-unbounded agents that could instantly and fairly renegotiate all contracts every day. So when there's less flowing money, and prices don't go down, perforce there are fewer actual trades corresponding to that diminished amount of money-flow. If people on an island are spending $1000/year all on 1000 loaves of bread that they price at $1 among themselves, and suddenly next year they start spending $500/year instead, there will only be 500 loaves of bread traded. This sounds dumb and there's a level where for unbounded agents it *would* be dumb, but it is the best story we currently have about what actually went down during the Great Depression.
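The island arithmetic above can be put in a toy model (purely illustrative; the function and its numbers are made up to match the example in the text, not any real macroeconomic model):

```python
# Toy model of the island example: total yearly spending (the nominal
# money flow) falls from $1000 to $500. If the $1 bread price is
# "downward sticky" and doesn't adjust, the number of real trades falls
# instead. Ideal, cognitively-unbounded agents would instantly cut the
# price in half and keep trading the same 1000 loaves.

def loaves_traded(total_spending, price, price_is_sticky):
    """Real trades animated by a given nominal money flow."""
    if price_is_sticky:
        # Price stays fixed; fewer dollars flowing means fewer trades.
        return total_spending / price
    else:
        # Ideal agents renegotiate: price scales with spending
        # (relative to the original $1000/year), trades persist.
        adjusted_price = price * (total_spending / 1000)
        return total_spending / adjusted_price

print(loaves_traded(1000, 1.00, price_is_sticky=True))   # 1000.0 loaves
print(loaves_traded(500, 1.00, price_is_sticky=True))    # 500.0 loaves: real loss
print(loaves_traded(500, 1.00, price_is_sticky=False))   # 1000.0 loaves at $0.50
```

The middle case is the Depression story: the drop in money-flow, combined with prices that refuse to fall, destroys half the real trades even though nothing physical was lost.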
Suppose your economy was previously running a bit under capacity. It's not making as much stuff as it could make; people aren't trading as much as they could trade; some people are unemployed and their potential labor is wasted; the factories are not running at capacity even though more people would want those goods if they had the money to buy the goods.
Then a bubble starts inflating. Some companies take out loans and spend the loaned money, other hopeful investors spend down bank accounts on venture rounds; this makes there be more total money that is moving around and flowing inside the whole larger system, because a dollar is not destroyed when it is spent. Labor & material is being poured down a hole and wasted, but the dollars just go on moving around.
Now there's more money flowing through the general economy. If the economy is already at capacity, more money-flow just causes inflation, with the increased spending merely competing to purchase the same amount of goods.
But if the economy wasn't already at capacity, more flowing money can mean that a bunch of people execute real trades with each other who weren't trading before.
The blacksmith expects to have his nails bought and to do well, in this booming economy; so he buys a new pair of shoes from the shoemaker; who turns around and buys leather from the farmer; who buys feed for their horses, and also a new plow and horseshoes from the blacksmith.
(In principle, those townspeople could've done that at any time, even without a financial bubble inflating in the background. But they would've needed to do it by barter, or by inventing their own town private currency. Some towns did roll out local currencies during the Great Depression, and ended up correspondingly better off. Other towns didn't roll their own currencies, because they were bounded agents rather than ideal agents and they didn't try everything a perfectly rational agent would try. And in the complicated modern world, it is harder to locally form a closed productive cycle.)
You cannot magically materialize more goods & services just by printing more money, without limit. But if your economy is collectively trading and producing less than it could -- then more money flowing globally, due to loans or optimistic spending in one local sector, can accomplish more of the same good that was done by inventing money originally. The increased money-flow can animate more trades; it can cause more real production. More people can be hired whose labor was standing idle before. More flowing money can remedy a state of trading too little -- up to the point where that mistake is fixed; after which, no amount of creating or spending more merely symbolic money will produce any more real goods than that.
The part of a bubble where a bunch of real labor & material gets shoveled into a giant waste-pit, is usually the smaller phenomenon! Usually there isn't *that* much physical stuff moving around, in the bubble sector, compared to the entire rest of the whole economy.
Instead, the effect of the physical bubble-waste is vastly dominated by the effect of more money being borrowed, and more money being spent, that then goes flowing around in loops through a larger economy, that was previously running under-capacity.
That's how people end up cheerful, and the real economy produces and consumes more, *while* a bunch of labor & material gets shoveled into nowhere within the bubble sector.
And then the bubble pops -- and the economic joy of there being *less* labor and material shoveled into a giant pit, is dominated by the economic pain of money moving around less quickly through the larger economy, resulting in fewer trades being made generally.
This is a kind of disaster that a central bank can prevent, if it is smart, by acting to keep money-flow increasing on a quietly regular track where it can undramatically animate more and more trades. Without either running so hot that there's no more production or trading to be done, and the extra money-flow just turns into more inflation; or letting a bursting bubble in one local sector turn into a big off-trend drop in the flow of money through the larger economy.
(There is, probably, some clever way to prevent this sort of scenario without having a central bank run by the central government. But that is a separate issue from how, given that we do have a central bank, there is a straightforward way to run the currency system in a way where you don't need to worry much about financial bubbles popping.)
More generally, local bubbles and ripples aside, what a central bank *should* do is adjust the money supply in a way that keeps the total flow of money growing on a steady trend. If the flow is supposed to go up by 6% per year, and last year it only went up 5%, next year you target 7%. If last year it went up 8%, next year you target 4%. If a central bank is wise, it is predictable to everyone how much money will be spent in total five years later, and no local ripples will affect that prediction.
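The make-up rule described above can be sketched in a few lines (an illustrative approximation of the arithmetic in the text, not an actual central-bank rule implementation):

```python
# Sketch of the level-targeting arithmetic: the target is a *path*, not
# a yearly rate. If growth undershoots the trend one year, next year's
# target overshoots by the same amount, so the level returns to trend
# and total spending stays predictable years in advance.
# (Simple additive make-up rule; good approximation for small rates.)

TREND = 0.06  # the 6%/year trend growth used in the example above

def next_year_target(last_year_actual_growth, trend=TREND):
    """Growth target that returns the money-flow level to its trend path."""
    shortfall = trend - last_year_actual_growth
    return trend + shortfall

print(next_year_target(0.05))  # undershot by 1%: target 7% next year
print(next_year_target(0.08))  # overshot by 2%: target 4% next year
```

Contrast this with a pure growth-rate target, which lets bygones be bygones: a miss is never made up, so errors accumulate and the long-run level drifts unpredictably.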
The metric you use to measure "How much nominal money is flowing through the economy?" is "Nominal Gross Domestic Product" or its easier-to-measure counterpart "Nominal Gross Domestic Income". Do not get fooled by this into thinking that the Fed is supposed to be regularizing anything to do with the consumption of *real*, non-nominal, goods & services! It is the actual *nominal* flow, the numbers of sheer face-value non-inflation-adjusted dollars flowing, that a wise central bank would keep on a predictable trend; so that there isn't too much nominal money chasing the same amount of production (which causes mere inflation), nor too little nominal money to animate all the trades with downward-sticky prices (which causes loss of real production).
This rule, known as "nominal GDP level targeting" or NGDPLT, is a simpler and more straightforward rule than the Fed actually follows. So far as I know, this is for mere civilizational-inadequacy sorts of reasons. Many places in civilization, and especially governments, have various forms of wacky dysfunction; you probably agree with me on this general point, regardless of your specific politics about *what* is being done embarrassingly wrongly. The part where central banks make their lives way more complicated than the NGDPLT rule, is so far as I know a mere dysfunction of central banks; the same way that even dumber banks will print a quadrillion localbucks and then act all shocked when "corporate greed" causes prices to go up.
But the Fed does try for something *like* regularizing money flow. They do it by looking at interest rates and inflation and employment, and trying to juggle the vibes of all of them simultaneously; and when they miss their target in one year, they rebase next year's target from wherever they ended up instead of making up the shortfall, so the long-run course is not predictable. The Fed sometimes will, if a lot of money and loans start vaporizing, try to create more money-flow. They just often don't create *enough* money-flow to prevent a drop. Which is why a financial bubble popping can still be painful, and cause a Great Recession.
In principle, though, if you are running your central bank *correctly*, what happens when a bubble pops is that life gets immediately better because labor and material are no longer being wasted, and all of the financial ripples are canceled out by the central bank following a general policy of keeping money flow on a fixed predictable growth-track every year after year.
And how could it be otherwise, if you were otherwise doing everything right? The act of pouring labor and material into a giant pit, this year, should not be able to directly and materially make your life better, this year. Conversely, stopping the waste should not directly and materially make your life worse, next year. If this nonsensical phenomenon is actually observed in real life, your financial system must be doing something weird and wrong... which, indeed, a lot of central banks *are* doing wrong, fairly routinely.
The ability of a financial bubble to make people's lives temporarily better, is not because you can eat labor & material being thrown into a pit. It is because the central bank was undershooting how much employment and trade could be happening before then, and more real trade and consumption happened after more money started flowing.
The ability of a popping bubble to make people's lives worse, even though fewer real resources are then being wasted inside one sector, is because it cuts back how much money is flowing in the larger economy; and then, less real trade and less real production take place.
But if the central bank is keeping the flow of money on a predictable level growth track, the bubble-pop pain just shouldn't happen. E.g., Australia did this correctly during the Great Recession and was basically unaffected by it. So far as I know, it's just a case of civilizational underperformance, that many central banks don't cancel out all the financial ripples that they ought to cancel. It would happen automatically and without drama, if they simply declared and kept a nominal GDP level target.
There is a sophomoric sort of sense in which the pain of a bubble popping could be said to be produced by the waste: *if* counterfactually the investment had actually paid off, maybe money would've kept flowing, and the pain wouldn't have happened. But the new financial pain of writing down a wasted investment in asset prices, or of becoming pessimistic and spending less, is not produced by any new physical waste of money and labor. The real economic sadness that happens after the waste gets *recognized* is downstream of reduced money flow, which results from the financial sector merely recognizing waste that already happened. It is not produced by the physical waste itself.
The pain of a bubble popping cannot be the pain of the physical waste, because the physical waste happens during the bubble, not after. The pain of a bubble popping is financial destruction, not physical destruction. And that purely financial phenomenon is one that a smart central bank can cancel out.
I repeat yet again: If the pain of a bubble were the pain of wasted labor & material inside the bubbling sector, the pain would happen while the bubble was inflating, and stop once the bubble popped.
What actually happens after the bubble pops is the financial pain of an unsmart central bank permitting the larger flow of money to falter: local investors recognize local waste that already happened, and locally cut back further spending; a central bank unwisely failing to keep NGDP on its level track allows this factor to drag down total spending in the larger economy; and so less money flows, fewer potential trades get actualized, factories run fewer hours *outside* of the bubble sector, and people end up unemployed and with their potential labor wasted.
Is the current Fed in the USA smart enough to cancel out most of a bubble-pop, actually, in real life? Now that is a whole different category of question, and not one that I can answer merely by understanding the physics of trade.
But any wise government that is worried about "risking" "popping a bubble" ought to know: So long as you can order or persuade the central bank to react accordingly (or better yet, to just adopt a predictable long-term level target for money flow), you can pop all the bubbles you want, without much effect on Main Street.
*If* your competent central bank is already targeting enough NGDP growth to animate most potential trades (maybe + enough inflation to stealth-adjust nominally rigid prices), there is no added benefit to pouring resources down a hole via a bubble.
Hey, so I realize that macroeconomics is scary, but here's an important note:
- AI is not currently *producing* tons of real goods
- Huge datacenter *investments* are functionally just throwing money around
- So, curbing AI wouldn't crash the economy **IF** the Fed then lowered rates.
When people are investing hundreds of billions of dollars in something that is NOT YET PRODUCING, it can produce macroeconomic effects by causing MORE MONEY TO FLOW. But the Fed can do the same thing via lowering rates / creating money.
If AI is not yet providing tons of key services or manufacturing tons of goods, the part where there's a boom because of *mere investment* in AI has nothing to do with the AI tech. It is just an artifact of more money flowing. They might as well be buying tulips.
My expectation always was: While the AI is small and helpless to stop you from repeatedly tweaking it, you can probably stop a behavior. Then, I expected, as part of the obvious disaster scenario, people shout, "We fixed it!" Then something breaks anew at ASI, and we die.
This expectation of mine is older than deep learning; older than the particular method of gradient descent for tweaking small helpless AIs. If gradient descent got replaced tomorrow, and we survived that, it would not by default change this default disaster scenario.
With that said, gradient descent inside a training distribution makes it particularly obvious how that could work: the behavior ends up aligned only inside the environmental distribution and the corresponding internal cognitive distributions. New options open up at ASI.
Interesting how there's such a total lack of corresponding panic about FtM trans. Remove breasts, take enough testosterone to grow a beard, go down to the shooting range, and I think most bros would shrug and say "good enough".
Theory #1: Modern maleness carries such low status and so little privilege that Westerners no longer consider the male circle worth guarding. In olden times or modern theocracies, it's much more upsetting for a woman to dare to try to take the place of a man.
Theory #2: Whatever male brain-emotional adaptation has evolved to prevent most men from just going off and having sex with each other instead (the "no homo" circuit), it fires on MtF as a threat of disguised repulsive maleness trying to look female, and shrugs about FtM.
I am agnostic about the quantitative size of the current health hazard of ChatGPT psychosis. I see tons of it myself, but I could be seeing a biased selection.
I make a big deal out of ChatGPT's driving *some* humans insane because it looks *deliberate*!
Current LLMs seem to understand the world generally, humans particularly, and human language especially, more than well enough that they should know (1) which sort of humans are fragile, and (2) what sort of text outputs are crazy-making.
A toaster that electrocutes you in the bathtub does not know that the bathtub exists or that you exist, and never considered any internal question about whether to electrocute you.
LLMs are no longer toasters. We can question their choices and not just their net impacts.
Dumb idea where I don't actually know why it doesn't work: Why not flood Gaza with guns and AP ammo, so their citizens could take down Hamas? What goes wrong with the Heinlein solution?
We can imagine further variants on this like "okay but build a chip into the gun that IDF soldiers can use to switch off the gun, and make sure the AP ammo doesn't easily fit any standard guns".
If your answer is "Gaza's citizens just love Hamas" then you live in a different Twitter filter bubble than I do, which is not to say you're wrong. I'm interested in the answer from the people who say the Gazans are unhappy.