Roger McNamee
Jul 8 · 15 tweets
🧵
Today I was on @SquawkStreet on @CNBC to talk about the bubble in AI stocks. @GoldmanSachs put out a report warning that capex for AI is way too high given the lack of high-value use cases. Goldman is right on, but does not address other crippling issues facing AI. Thread.
1/15
.@GoldmanSachs focuses on capex for compute and data centers. Numbers are >>$100B so far, rising by >$10B a month. Key spenders (MSFT, Nvidia, Google, Amazon, Meta) have tons of cash and dominate the S&P 500. If they are wrong … disaster for the market.
2/15
goldmansachs.com/intelligence/p…
Goldman is not the first Wall St firm to raise concerns. Barclays was. The VC firm @sequoia also issued a report on capex in AI, saying the industry needs $600 billion in revenue to justify the existing investment in compute and cloud.
3/15

sequoiacap.com/article/ais-60…
When firms like @sequoia and @GoldmanSachs tell me there is a bubble, I pay attention. When their argument is based narrowly — on discretionary capex — alarm bells go off. Why? Because there are lots of other reasons to think generative AI is going to disappoint.
4/15
AI issue #1: power. Generative AI is straining the power grid nationally and accelerating climate change. This might be a defensible trade-off if AI created big value, but it doesn't. MSFT and GOOG promised to be carbon neutral by 2030. Microsoft's emissions are up ~30% in 2 years; Google's ~45% in 5.
5/15
AI issue #2: water. Water is required for semiconductor fabs, training runs, and cloud data centers. One report suggests roughly half a liter of water is consumed every time you query a chatbot. Many fabs and data centers are in desert areas.
6/15
AI issue #4: copyright. GenAI companies have stolen huge amounts of copyrighted content to train LLMs. Their goal is to take away jobs from the very people whose content they stole. There are several problems with this.
7/15
AI issue #5: privacy. When we started using cloud services for email, word processing, spreadsheets, storage, social media, etc., we did not give permission for our content to be used to train LLMs. And yet that is what is happening.
8/15
AI issue #6: security (esp. NatSec). Big Tech LLMs create vulnerabilities for anyone who uses them. It is almost certain that agents of foreign powers have infiltrated AI vendors. Huge asymmetry of risk relative to China.
9/15
AI issue #7: disinformation. LLMs were designed to pass the Turing test by fooling humans. They are optimized for plausibility, not factuality. The volume of nonsense is a huge threat to search and to the quality of online information.

Disinfo may be the most valuable use case of LLMs.
10/15
AI issue #8, part 1: toxic use cases. The four best use cases so far are disinfo, deepfake porn, spam, and plagiarism. Positive use cases are still mostly speculative. One that looked good, programming, has turned out to be a mixed bag at best.
11/15

spectrum.ieee.org/chatgpt-for-co…
AI Issue #8, part 2. The early efforts to apply generative AI to web development have been disappointing.
12/15

baldurbjarnason.com/2024/new-web-d…
AI Issue #8, part 3. Thanks to the GenAI mania, people are trying to apply LLMs to everything, including a wide range of use cases where they are guaranteed to fail. People are going to be harmed, some in ways from which they will not recover. Safety should be a priority.
13/15
Conclusion: LLMs are not intelligent. They use statistics to predict the most plausible next word, paragraph, or image. They know only their training set. What they do best is BS you.

The evidence so far better supports the view that LLMs are a scam than that they are the Next Big Thing. Beware.
14/15
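The "statistics to predict the next word" claim in the conclusion can be made concrete with a toy sketch. This is a hypothetical bigram counter, vastly simpler than a real LLM (which uses a neural network over long contexts), but it illustrates the same basic move: pick the statistically most likely continuation, knowing nothing outside the training set. The corpus and function names here are invented for illustration.

```python
from collections import Counter, defaultdict

# Tiny "training set": the model will know these words and nothing else.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequent successor of `word` in the training set."""
    if word not in follows:
        return None  # outside its training data, the model has nothing to say
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))   # "cat" (seen twice, vs. once for mat/fish)
print(most_likely_next("dog"))   # None (never seen in training)
```

The output is fluent-looking but ungrounded: the model returns whatever co-occurred most often, with no notion of whether the continuation is true, which is the failure mode the thread describes at LLM scale.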
🧵Coda: Big Tech dominates the S&P 500. When the AI bubble bursts, Big Tech will not go broke, but all the forecasts will collapse, taking stock prices with them.

No way to know when the bubble will burst, but it will happen. Be prepared.

Thank you, @SquawkStreet!
15/end
