There's an under-appreciated interaction between macroeconomics and manufacturing economics when it comes to renewable energy policy.
A basic factor driving progress in renewable energy is the learning rate: the more of something (batteries, solar panels, windmills) you make, the cheaper it gets.
In the early years, this was the policy rationale for heavily subsidizing green energy technology even though it didn't otherwise pass a basic cost/benefit calculation.
The cost of a kWh of solar power in 2010 was way higher than that of conventional sources. Subsidies encouraged people to buy panels anyway, which didn't directly do much for global warming but helped scale up panel manufacturing and bring down future panel prices.
A key point is that falling costs are mostly a function of volume not time. When forecasters say EVs will reach price parity with conventional vehicles in 2024, that's because they think it will take that long to reach the necessary scale, not because it inherently takes 4 years.
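To make the volume-not-time point concrete, here's a minimal sketch of the learning-curve math (often called Wright's Law). The 20 percent learning rate and $1,000 first-unit cost are illustrative assumptions, not real solar or battery figures:

```python
import math

def unit_cost(cumulative_units, first_unit_cost, learning_rate=0.20):
    """Cost of the latest unit: each doubling of cumulative volume cuts cost
    by `learning_rate`. Illustrative numbers, not actual industry data."""
    doublings = math.log2(cumulative_units)
    return first_unit_cost * (1 - learning_rate) ** doublings

print(round(unit_cost(1, 1000)))     # 1000 -- the first unit
print(round(unit_cost(1024, 1000)))  # 107  -- ten doublings later, ~89% cheaper
```

Notice that the calendar year doesn't appear anywhere in that formula: the only way to move down the cost curve faster is to produce more units sooner.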
If we subsidize these technologies more, they'll scale up faster and prices will fall faster. Normal considerations of frugality apply with less force here because spending now lowers costs in the future.
Here's where macroeconomics comes in: interest rates on government debt are insanely, ridiculously low right now. The federal government can borrow money for 30 years at a 1.7 percent interest rate. Assuming 2 percent inflation, people are effectively paying the government to take their money.
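The arithmetic behind "paying the government to take their money" is just the real (inflation-adjusted) yield. A quick sketch using the 1.7 percent and 2 percent figures above:

```python
# Real yield on a 30-year bond, using the figures from the paragraph above.
nominal_rate = 0.017   # 1.7% nominal 30-year yield
inflation = 0.02       # assumed 2% annual inflation

real_rate = (1 + nominal_rate) / (1 + inflation) - 1  # exact Fisher relation
print(f"{real_rate:.2%}")  # about -0.29%: lenders lose purchasing power
```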
In a "normal" macroeconomic environment you could say that heavily subsidizing renewable energy is a nice idea in theory but not practical given limited resources. But today the government has access to basically unlimited cash at a real interest rate of less than nothing.
In fact, with unemployment high, deficit-financed renewable energy subsidies are likely to boost overall economic output. Some of that higher output will be re-captured in the form of tax revenue that could be used to retire the debt later.
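To see roughly how that recapture works, here's a back-of-the-envelope sketch. The $100 billion figure, the multiplier, and the tax share are all made-up assumptions for illustration, not estimates:

```python
# Illustrative only: how deficit spending can partially pay for itself when
# there's slack in the economy. All three numbers below are assumptions.
subsidy = 100e9      # $100B of deficit-financed subsidies
multiplier = 1.5     # extra output per dollar of spending, with unemployment high
tax_share = 0.25     # share of the extra output that comes back as tax revenue

extra_output = subsidy * multiplier
recaptured = extra_output * tax_share
print(f"${extra_output / 1e9:.0f}B extra output, ${recaptured / 1e9:.1f}B recaptured")
# -> $150B extra output, $37.5B recaptured toward retiring the debt
```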
So renewable energy subsidies have three different types of positive externalities:
(1) Directly reduce emissions (obviously)
(2) Reduce the cost of future renewable energy
(3) Boost future employment and tax revenue
One final point: the "lower future costs" externality applies globally. If our subsidies lower solar panel or battery or windmill costs, they do so in China and India and Nigeria as well as in the US.
Renewable energy subsidies here provide a de facto subsidy for clean energy in the Global South in a way that's much simpler and more politically salable than sending them money directly. And of course climate change is a global problem so this ultimately helps the US.
Waymo is in a weird place right now. They're now operating an honest-to-goodness commercial driverless taxi service. No safety drivers. No rider non-disclosure agreements. A pretty big service area (~50 square miles). But it's growing very, very slowly. arstechnica.com/cars/2020/12/t…
Three years ago, I thought that if Waymo "solved" the self-driving problem first, as seemed likely, its big challenge would be scaling up quickly enough to grab territory before other companies came to market. I was wrong. arstechnica.com/cars/2017/10/w…
Waymo has driverless cars that can operate in most situations in the Phoenix suburbs. But for some reason they don't seem to be trying very hard to scale up. They haven't provided a clear answer about why not.
The fact that three different companies have apparently made COVID vaccines in ~8 months makes me wonder if there's room to be a lot more ambitious about other technology projects. Like maybe we should follow the UK and ban sales of new internal combustion cars in 2030.
A lot of corporate decision-making is driven by risk aversion about future market conditions. What if your car company goes 100 percent electric and it turns out customers don't want electric cars? When that uncertainty is removed an industry can move pretty fast.
The de facto US ban on incandescent light bulbs a decade ago seems like an under-appreciated model. It seems to have significantly accelerated light bulb technology, and the transition happened so smoothly that most consumers barely noticed.
Robert Caro's first LBJ biography includes a passage that explains how rural electrification transformed the lives of farm families, especially women. It makes a powerful case for Robert Gordon's thesis that innovations of the last 50 years pale in comparison to what came before.
The arrival of electrification relieved farm families of several categories of back-breaking labor: washing clothes by hand, milking cows by hand, canning, hauling wood to (and tending) woodstoves. Refrigeration drastically reduced milk spoilage. Plus of course electric lighting.
Big-screen TVs and smartphones are nice but they just aren't transformational the way washing machines and electric lights were to our great grandparents.
Nobody refers to Twitter as a "micro-blogging" platform any more but I think it's under-appreciated how much Twitter today fills the same niche that early blogging did.
A lot of early blog posts block-quoted a paragraph of text and then offered 1-3 paragraphs of analysis. Now we screenshot a paragraph from an article and offer 1-3 tweets of analysis.
Early bloggers spent a lot of time responding to other bloggers. Bloggers today (especially professionals) don't do that much because they're trying to maximize the readership of each post. Instead, we do short, blog-style responses here on Twitter.
Nvidia has an amazing new technology that essentially uses deepfake AI technology to reduce the bandwidth needs of video calling by 10x. Full explanation of how it works here: arstechnica.com/gadgets/2020/1…
The software sends a single frame of video. Then for subsequent frames it just sends data on the positions of the subject's eyes, nose, mouth, etc—much less data than a whole frame. The receiving computer then uses a neural network to re-create the subject's face.
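Here's a toy sketch of that pipeline, just to show the data flow. The function names and the keypoint format are my own stand-ins, not Nvidia's actual code or API; in the real system both the keypoint detector and the frame generator are neural networks:

```python
import numpy as np

def detect_keypoints(frame: np.ndarray) -> np.ndarray:
    """Stub: stands in for a network that finds eyes, nose, mouth, head pose."""
    return np.zeros((68, 2))  # e.g. 68 (x, y) facial landmarks

def generate_frame(reference: np.ndarray, keypoints: np.ndarray) -> np.ndarray:
    """Stub: stands in for a generative model that warps the reference face
    to match the new keypoint positions."""
    return reference  # placeholder: just reuse the reference frame

def sender(frames):
    """What actually goes over the wire."""
    yield ("reference", frames[0])  # one full frame, sent once
    for frame in frames[1:]:
        yield ("keypoints", detect_keypoints(frame))  # a few hundred bytes each

def receiver(messages):
    """Rebuild full frames from the reference frame plus per-frame keypoints."""
    _, reference = next(messages)
    for _, keypoints in messages:
        yield generate_frame(reference, keypoints)

frames = [np.zeros((720, 1280, 3), dtype=np.uint8) for _ in range(5)]
rebuilt = list(receiver(sender(frames)))
print(len(rebuilt))  # 4 frames reconstructed from 1 full frame + 4 keypoint packets
```

The bandwidth savings come from the fact that a keypoint packet is a few dozen numbers, while even a well-compressed video frame is tens of kilobytes or more.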
The comments on our article have a lot of hand-wringing about how this "doesn't show reality," but I think that's based on a philosophically untenable conception of reality. A conventional video isn't "reality"; it's a pixel-by-pixel approximation of reality. Even more so with compression.
People seem to think this is a compelling argument against antitrust enforcement but it's really not. Anyone familiar with economic history knows that new high-tech industries tend to have a lot of competitors in their early years before settling down.
There were dozens of oil companies in the 1860s, dozens of car companies in the 1900s, lots of small-scale experimentation with radio in the 1920s, etc. Then Standard Oil, Ford/GM/Chrysler, and NBC/ABC/CBS emerged and became dominant for decades.
It's relatively easy for new companies to emerge when the industry is still young and growing. New customers who don't yet have established brand loyalties. Untapped innovations for a new company to discover and exploit. It gets harder as the industry matures.