Today, Pennsylvania is landing some of the largest AI infrastructure commitments in the US.
Recent projects illustrate the shift: (1/3) 🧵
⚆ The Homer City Energy Campus, redeveloping a former coal plant into a gas-powered datacenter hub with up to 4.5 GW of new generation
⚆ TECfusions’ Keystone Connect, a 1,395-acre campus designed to scale toward ~3 GW of IT capacity using a hybrid grid + on-site generation model
⚆ Multiple additional multi-hundred-MW and gigawatt-scale campuses announced across western and central Pennsylvania
Pennsylvania offers a rare combination of: (2/3)
⚆ Abundant, low-cost energy, anchored by natural gas, nuclear, and legacy generation infrastructure
⚆ Brownfield megasites (retired coal and industrial plants) that already have transmission access, water rights, and industrial zoning
⚆ State-level coordination, where permitting, utility alignment, and local approvals are increasingly moving in parallel rather than sequentially
⚆ Geographic proximity to Northern Virginia without Northern Virginia’s congestion, pricing, or political friction
⚆ Execution certainty, as projects are anchored to power plants, substations, and real permits rather than speculative land options
For datacenter developers, Pennsylvania removes the hardest constraint in AI infrastructure: delivering power at real scale, on real timelines. When a market can do that, momentum compounds quickly. Pennsylvania is no longer an alternative; it is becoming one of the places where the next generation of AI infrastructure actually gets built. (3/3)
Rolling into the new year, 2 of the Six Tigers quietly filed their IPO prospectuses and will start trading in early January if all goes well. We finally get a glimpse into the audited financials of foundation model labs. TLDR: Building Machine God Ain't Cheap. (1/5)🧵
MiniMax (0100 HK) and Z.ai, aka Knowledge Atlas, fka ZhiPu (2513 HK), both give a glimpse into the economics of an AI lab, demonstrating strong product momentum as well as a flagrant disregard for profitability. (2/5) 🔥📉
Similar to OpenAI and Anthropic, both labs have a “bundle” subscription biz, typically consumer-facing, and a “variable” API biz for enterprises, including major U.S. SaaS/Internet companies like Netflix, Adobe, and others. Z.ai's gross margins are more similar to the US labs', while MiniMax's are much lower, likely due to the heavy subsidization of free Talkie users. (3/5) 🤑💸
If you want to power a datacenter off the grid, a gas turbine is the "obvious" choice. But it might not be the best option! Many developers select reciprocating engines for a reason. (1/4)🧵
A recip is more modular than a turbine, happier at partial loads, and simpler to maintain. You're mostly changing lubricants, whereas a turbine needs no maintenance...until it needs a massive overhaul. (2/4)
More importantly, recips are less exotic technology. They rely less on rare alloys and critical minerals, and there are WAY more vendors to choose from, shortening lead times and strengthening purchaser power. (3/4)
Massive IT load growth. A transforming electric grid. Five-year lead times for turbines. Why not build more of them?
Well, GE and Siemens have seen this story before. (1/8)🧵
Back in the '90s, parts of the American electric grid were "deregulating." These reforms gave us commodity markets for electricity, aka ISOs and RTOs. INDEPENDENT POWER PRODUCERS (IPPs), often utilities from other states, could build and run their own power plants and make money in these new electricity markets. Their generator of choice? The COMBINED CYCLE GAS PLANT (CCGT), particularly the then-new F-CLASS. (2/8)
Then, in 1999, Mark Mills and Peter Huber released a report called "The Internet Begins with Coal," which claimed that rising electric loads from these hot new computer things would overwhelm the existing electric grid. They concluded that by 2020, 30-50% of the electric grid would go towards powering the digital economy. (3/8)
A semiconductor is a material whose electrical conductivity lies between that of a conductor and an insulator. To control this property, dopants are introduced into a silicon wafer to adjust its electrical characteristics. (1/7)🧵
Before the 1970s, doping was performed through thermal diffusion in high-temperature furnaces.
Process steps:
⚆ Pre-deposition: An oxide-based dopant film is deposited on the wafer surface.
⚆ Oxidation: The dopant oxide is driven into the growing silicon dioxide layer.
⚆ Doped region formation: The doped area forms and reaches the desired concentration and depth.
⚆ Wet etching: The oxide layer is removed using a wet etching process. (2/7)
Driven by research related to atomic weapons, technologies involving high-energy ion beams began to develop, leading to the introduction of ion implantation machines in the mid-to-late 1970s.
Ion implantation technology offers four major advantages:
⚆ Dopant concentration can be controlled by adjusting the ion beam current and exposure time (see the dose sketch after this list).
⚆ Doping depth can be precisely controlled by tuning the ion energy.
⚆ The anisotropic characteristic of ion implantation makes it easier to precisely define doped regions.
⚆ The process can be performed at room temperature, unlike traditional diffusion, which requires high temperatures of 800–1000 °C. (3/7)
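To make the first control knob concrete, implant dose scales directly with beam current and exposure time. A minimal back-of-the-envelope sketch of that relationship (the 1 mA, 60 s, 300 mm wafer inputs are illustrative assumptions, not figures from the thread):

```python
# Implant dose: dose = (I * t) / (n * q * A)
# I: beam current (A), t: time (s), n: ion charge state,
# q: elementary charge (C), A: implanted area (cm^2)

ELEMENTARY_CHARGE = 1.602e-19  # C per unit charge

def implant_dose(beam_current_a: float, time_s: float, area_cm2: float,
                 charge_state: int = 1) -> float:
    """Dose in ions/cm^2 delivered by an ion beam of given current and charge state."""
    ions_per_second = beam_current_a / (charge_state * ELEMENTARY_CHARGE)
    return ions_per_second * time_s / area_cm2

# Example: 1 mA beam over a 300 mm wafer (~706 cm^2) for 60 s
dose = implant_dose(1e-3, 60.0, 706.0)
print(f"{dose:.2e} ions/cm^2")  # ~5.3e14 ions/cm^2
```

Doubling the current or the time doubles the dose, which is why dose control on an implanter reduces to metering beam current over time.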
The economics of AI has been a big question mark in many investors' minds: What does the value chain look like? How do you model the ROIC of AI? What would that ROIC look like?
We built an end-to-end economics stack to answer this question: how to go from a chip’s silicon cost, through full system integration, all the way down to the dollar cost per million inference tokens. (1/4)🧵
At the top of the stack, our accelerator analysis starts with the semiconductor bill of materials (transistors, packaging, HBM, and yield assumptions) to determine GPU provider content. From there, our BoM and ODM modeling breaks down every component inside the server. The network topology model then maps how these servers interconnect. (2/4)
When you roll this all up, illustratively for H200s, that gives us a capital cost of roughly $1.06 per GPU-hour, to which we add electricity and colocation costs for a complete TCO of $1.41 per GPU-hour. That’s the economic foundation: the cost to own and operate the hardware. A neocloud might rent that same GPU for roughly $2 per hour, leaving a modest gross margin. But that’s where most analysis has stopped: TCO per hour. (3/4)
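A minimal sketch of that roll-up, under stated assumptions: every input below (capex, life, utilization, power draw, PUE, electricity and colo rates, tokens/s) is an illustrative guess chosen to land near the thread's $1.06 and $1.41 figures, not the actual model.

```python
# Illustrative GPU-hour TCO roll-up (all inputs are assumptions, not the thread's model)

def capital_cost_per_gpu_hour(all_in_capex_per_gpu: float, life_years: float,
                              utilization: float) -> float:
    """Amortize fully loaded capex (GPU + server + network share) over useful hours."""
    usable_hours = life_years * 8760 * utilization
    return all_in_capex_per_gpu / usable_hours

def tco_per_gpu_hour(capital: float, power_kw: float, pue: float,
                     elec_per_kwh: float, colo_per_kw_month: float) -> float:
    """Add facility costs: metered electricity (grossed up by PUE) plus colo rent."""
    electricity = power_kw * pue * elec_per_kwh
    colocation = power_kw * colo_per_kw_month * 12 / 8760
    return capital + electricity + colocation

# Assumed H200 inputs: ~$45k all-in capex per GPU, 5-year life, 97% utilization
capital = capital_cost_per_gpu_hour(45_000, 5, 0.97)          # ~$1.06/GPU-hr
tco = tco_per_gpu_hour(capital, power_kw=1.0, pue=1.25,
                       elec_per_kwh=0.08, colo_per_kw_month=180)
print(f"capital ${capital:.2f}/hr, TCO ${tco:.2f}/hr")        # ~$1.06, ~$1.41

# The next layer down: cost per million inference tokens at an assumed throughput
tokens_per_second = 2_000  # assumed sustained tokens/s per GPU
cost_per_m_tokens = tco / (tokens_per_second * 3600) * 1e6
print(f"${cost_per_m_tokens:.2f} per million tokens")
```

The point of the stack is that once TCO per GPU-hour is pinned down, cost per million tokens is just TCO divided by sustained token throughput, which is where the analysis continues past the usual TCO/hr stopping point.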
Qualcomm and MediaTek are in a race to reduce their dependence on the mature smartphone market. Both are still managing to outgrow smartphone units, but that won't last long. Investors are looking for progress outside smartphones. Qualcomm's non-smartphone chip business hit a $10B+ annual run-rate, versus MediaTek's $8B+. (1/7) 🧵
Both have increased their investments to capture more revenue in consumer, networking, industrial and computing markets. Non-smartphones account for 30% of Qualcomm's semiconductor revenue and 48% of MediaTek's. Qualcomm has a target of $22B non-smartphone chip revenue by FY29 at a 5-year CAGR of 21%. Qualcomm built a strong moat in autos but made mixed progress in IoT (a collection of end markets including PC, consumer, networking and infrastructure). (2/7)
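A quick sanity check on that target (our arithmetic, not Qualcomm's own math): compounding backwards from $22B at a 21% CAGR implies the 5-year base.

```python
# Implied base behind Qualcomm's target: $22B non-smartphone revenue
# by FY29 at a 21% 5-year CAGR (back-of-the-envelope, not company guidance)
target, cagr, years = 22e9, 0.21, 5
implied_base = target / (1 + cagr) ** years
print(f"implied base: ${implied_base / 1e9:.1f}B")  # ~$8.5B
```

An ~$8.5B implied starting point sits plausibly alongside the $10B+ annual run-rate cited above, since a run-rate annualizes a recent quarter rather than a full fiscal year.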
After sitting on the sidelines for a long time, both Qualcomm and MediaTek have now firmed up their AI datacenter chip plans. MediaTek appears to have struck gold with its AI ASIC business, claiming $1B of revenue in '26, multiple billions in '27, and up to $5B-$7.5B in '28 and beyond (10-15% share of a $50B TAM), growing faster than its flagship smartphone chip business, which will slow down from CY26. (3/7)