Moffett did fantastic research on Verizon's fixed wireless. Using public records from Sacramento, they identified the locations of VZ's small cells. Then, using VZ's website, they manually input 45K (!) addresses to check whether service was available or already subscribed.
First, the most important point about overbuilding in general:
"One of the touchstones of telecommunications is that overbuilding wired networks almost never works"
Now the actual results. Only 6% of addresses were eligible to receive fixed wireless service, with some zip codes as high as 18%. Of those eligible addresses, only 3% had taken service, although at this point the take rate is arguably less meaningful than the eligibility.
The most important takeaway is why eligibility is so low, and it has to do with distance from the small cells. Eligibility rapidly declines as you move away from the small cells. By 400 feet, less than 50% of addresses were eligible, and by 700 feet almost no addresses were.
Verizon has talked about distances as great as 1,900 feet. But so far, in one real-world environment, that is not happening. The implications for coverage, and therefore costs, are important. Were these results indicative, VZ would need over 1M small cells to cover 30% of the US.
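One way the >1M figure could pencil out, under purely illustrative assumptions (neither the footprint figure nor the radius model comes from Moffett's work):

```python
import math

# Back-of-envelope sketch of the ~1M small cell figure.
# Both inputs are assumptions, not numbers from the thread:
#  - an effective cell radius of ~700 ft (the distance at which
#    eligibility fell to near zero in the Sacramento data)
#  - a hypothetical ~50,000 sq mi dense footprint standing in
#    for "30% of the US" by population, not by land area
RADIUS_FT = 700
TARGET_AREA_SQ_MI = 50_000

SQ_FT_PER_SQ_MI = 5280 ** 2
cell_area_sq_mi = math.pi * RADIUS_FT ** 2 / SQ_FT_PER_SQ_MI

cells_needed = TARGET_AREA_SQ_MI / cell_area_sq_mi
print(f"Cell footprint: {cell_area_sq_mi:.4f} sq mi")
print(f"Small cells needed: {cells_needed:,.0f}")
```

A 700 ft radius covers only ~0.055 sq mi per cell, which is why the cell count explodes so quickly even for a modest coverage target.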
It is well known FiOS was value destructive. So the idea behind fixed wireless is it is "cheaper" to deploy than fiber. But that is almost certainly not going to hold true if the eligibility per drop is so low. And this assumes a massively higher take rate than seems reasonable.
Ed Zitron claims to have exact monthly cash spend (net of discounts and credits) by Anthropic on AWS. No idea as to its veracity. Since it's cash spend, it won't directly map to revenue, but let's run with it. It would make Anthropic a 3.8% customer, driving 310bps of Y/Y growth in Q3 '25 from $1.2B in cash spend.
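A quick sanity check on the implied math. The AWS revenue figures below are assumptions from public filings (roughly $33.0B for Q3 2025 and $27.4B for Q3 2024), and this treats cash spend as if it mapped 1:1 to revenue, which the thread notes it does not:

```python
# Figure from the thread: $1.2B quarterly Anthropic cash spend.
# AWS revenue figures are assumptions from public filings, $B.
ANTHROPIC_Q3_25 = 1.2
AWS_Q3_25 = 33.0
AWS_Q3_24 = 27.4

# Implied revenue share, close to the claimed ~3.8%:
share = ANTHROPIC_Q3_25 / AWS_Q3_25
print(f"Implied customer share: {share:.1%}")

# For Anthropic to drive 310bps of Y/Y growth, its *incremental*
# spend must equal 3.1% of the prior-year base, which backs into
# an implied Q3'24 spend:
implied_prior_spend = ANTHROPIC_Q3_25 - 0.031 * AWS_Q3_24
print(f"Implied Q3'24 Anthropic spend: ${implied_prior_spend:.2f}B")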
There are reasons for skepticism. Semianalysis showed that GCP was providing significantly more MWs to Anthropic than AWS until this quarter, making the 2024 numbers almost entirely reliant on training.
The Information reported that Anthropic spent $1.5B in 2024 on servers for training. Zitron's numbers imply $1.3B in total 2024 cash spend on AWS by Anthropic. With minimal inference, that would mean nearly all of the training took place on AWS, not GCP.
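Putting those two figures side by side quantifies "nearly all" (both numbers are from the thread; the minimal-inference assumption is doing all the work here):

```python
# From the thread: The Information's $1.5B 2024 training-server
# figure vs the $1.3B total AWS cash spend implied by Zitron, $B.
TRAINING_TOTAL_2024 = 1.5   # all clouds
AWS_TOTAL_2024 = 1.3        # implied AWS cash spend

# If inference spend was minimal, nearly all AWS dollars
# were training dollars:
aws_share_of_training = AWS_TOTAL_2024 / TRAINING_TOTAL_2024
print(f"AWS share of 2024 training spend: {aws_share_of_training:.0%}")
```

An ~87% AWS share of training sits awkwardly against Semianalysis's data showing GCP supplying the majority of Anthropic's MWs in 2024.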
Two very interesting articles, filled with unusually specific details about Israeli spying activities against Hezbollah, printed in separate papers on the same day. The why here is as interesting as the what.
The FT in particular goes into greater detail, sourced from Israeli officials.
Hezbollah's involvement in Syria exposed them in a way they hadn't been before, particularly their interactions with Syrian and Russian intelligence services.
"the street is modeling $167B in cumulative AI capex, which is enough to support over 12,000 ChatGPTs. We think one of the big players may blink and cut back the capex plans, but not likely until we get well into 2025 or beyond"
"Based on these estimates, Google is assuming around 180T AI text queries (both input and output) and 15T AI image queries. This is a staggering figure, as there are around 11T web search queries per year right now worldwide. Stated differently, Google’s AI capex assumes a market that is 15x-20x larger than the web search market by 2026"
"Based on the 2026 consensus AI inference capex above, we estimate that the industry is assumed to produce upwards of 1 quadrillion AI queries in 2026. This would result in over 12,000 ChatGPT-sized products to justify this level of spending, illustrated below."
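Checking the arithmetic in the two quotes above (the query counts come from the quotes; the per-product figure is just the division the quote implies, not an independent estimate):

```python
# Figures from the quoted analysis:
AI_TEXT_QUERIES = 180e12    # 180T assumed AI text queries by 2026
AI_IMAGE_QUERIES = 15e12    # 15T assumed AI image queries
WEB_SEARCHES = 11e12        # ~11T worldwide web searches/year today
TOTAL_2026_QUERIES = 1e15   # "upwards of 1 quadrillion" in 2026
CHATGPT_SIZED_PRODUCTS = 12_000

# Falls within the quoted 15x-20x range:
ratio = (AI_TEXT_QUERIES + AI_IMAGE_QUERIES) / WEB_SEARCHES
print(f"AI queries vs web search: {ratio:.1f}x")

# What one "ChatGPT-sized product" has to absorb per year:
per_product = TOTAL_2026_QUERIES / CHATGPT_SIZED_PRODUCTS
print(f"Implied annual queries per product: {per_product:.1e}")
```

The numbers are internally consistent; the question the analysis raises is whether 12,000 products at that query volume is a plausible 2026 market.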
Reading the Nadella/Scott interviews, and the transcript from JPM of Alysa Taylor who heads Commercial Cloud GTM, you get insights on 3 key topics:
1) frontier models vs models-as-a-service
2) confidence in demand relative to capex spend
3) MSFT's attempts at differentiation
Asked about vertical integration in AI, Nadella says "I'm more of the believer in the horizontal specialization".
More importantly, he goes on: "So I think any enterprise application, really what they're most excited about is models-as-a-service".
Mark Murphy asks: "re: foundation models do you expect we're going to see some convergence in capabilities or do you suspect... we're going to see sustained performance differential"
Alysa Taylor, heavily cribbing from AWS: "We don't believe there is one model to rule them all"
Something noticeable: on each of MSFT's peer review slides for Apple, Amazon, and Google, they highlighted progress on proprietary chips.
And the Azure and Windows slides both list developing custom silicon chips as a long-term driver. We knew this was the case, but it is interesting to see them highlight other companies' successes and set goals of their own.
"base of our stack, our custom silicon efforts will help us remain competitive.. Our efforts will be a mix of internal and partnership... ultimately, we will need to become a first-class provider of chipset designs, especially the most critical chips given our scale in the cloud"