As a follow-up to my recent thread on how no one who's investing or building in AI is publicly responding to the anti-AI takes from legacy media, look at the 💰 here. None of that 💰 is talking back to journo AI haters. It's talking back to rationalist x-riskers, but not media.
Judging by podcasts, tweets, & newsletters, the money & talent in AI is FAR more concerned with what a handful of people on lesswrong.com are saying about AI than what even the largest Brooklyn Media outlets are saying.
It's probably a mix of contempt (hater journos don't matter anymore) + it's just a lot more fun & illuminating to engage with the x-risk people than it is to engage with... whatever woke word salad is coming out of outlets like NYMag, Lithub, MIT Tech Review, or (lately) WIRED.🤷‍♂️
I wonder how long this lasts, though, because Brooklyn Media is still a key input into the DC policy establishment -- the blob reads all these legacy sites & takes that stuff seriously.

Prediction: When the AI regulation bills are floated, the 💰 will have to engage the journos.

More from @jonst0kes

Mar 9
Quite a few people are misreading my post as, "We cannot stop or slow AI." No, that's not what I wrote or implied. I'm saying we probably could do one or both, and here's how, but also the 'how' is so serious we'd better be really certain we're saving humanity & not dooming it.
The point is to count the costs -- to take a good look under the hood at what's implied in such a project, so when you advocate this you at least know what you're actually advocating.
As some have pointed out, the question of "how hard is it to stop/slow AI?" is separate from the question of "is AI an x-risk?" But both of these questions are inputs into the question of "so what do we do about AI?", which is the real, non-academic question.
Mar 9
Seen on HN. This is coming really, really soon. All the pieces are there, and I'm certain that many, many teams are working on products right now. rachsmith.com/i-want-good-se…
I've been looking at this myself, and from the docs it seems it would be pretty easy to build an interactive GPT-Jon that you could chat with & that would talk to you based on my ~25 years of writing that's on the web.
The following stack will get me there, & I think it's doable in a long weekend depending on what tooling I decide to use:
- All my writing is archived at authory.com
- Dump individual HTML files from authory into Postgres
- Embeddings from OpenAI => pgvector
...
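The stack above can be sketched in Python. Note that the helper functions, table schema, and embedding model named here are illustrative assumptions on my part, not details from the thread:

```python
# Sketch of the "GPT-Jon" ingestion pipeline: HTML dumps -> text chunks ->
# OpenAI embeddings -> Postgres/pgvector. Helpers and schema are assumptions.
import re


def html_to_text(html: str) -> str:
    """Crudely strip HTML tags; a real pipeline would use a proper parser."""
    return re.sub(r"<[^>]+>", " ", html)


def chunk_text(text: str, max_words: int = 200) -> list[str]:
    """Split article text into ~max_words-word chunks, one embedding each."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]


# Embedding and storage need a live OpenAI key and a Postgres database with
# the pgvector extension enabled, so they're shown as a comment sketch:
#
#   from openai import OpenAI
#   import psycopg2
#   emb = OpenAI().embeddings.create(
#       model="text-embedding-3-small", input=chunk).data[0].embedding
#   cur.execute("INSERT INTO chunks (body, embedding) VALUES (%s, %s)",
#               (chunk, emb))
#
# At chat time, embed the user's question the same way, pull the nearest
# chunks with pgvector's distance operator, and stuff them into the prompt:
#
#   SELECT body FROM chunks ORDER BY embedding <-> %s LIMIT 5;
```

The chunking step matters because embedding models have input limits, and retrieval works better over passage-sized pieces than whole articles.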
Mar 8
I think about this a lot but I rarely mention it b/c I'd assumed it was already widely discussed, & I haven't wanted to sound like a newb. But @primalpoly is more informed on this than I am so I take from this that it's not widely appreciated enough.
Ironically, the above is why I actually don't worry much about "alignment" in the classic sense. To explain: for me on a practical level, both of these "unaligned AGIs" are the same picture:
1. AGI perfectly aligned w/ my enemies
2. AGI that's unaligned in some Lovecraftian sense
My point is that once we imagine an AI with superhuman powers that can do anything -- once we imagine a djinn that can grant wishes -- it's practically (for me) the same if the lamp is held by humans who are radically misaligned w/ me or if the djinn is completely free & alien.
Mar 8
So, for my followers who are not current on AI per the latest nanosecond, everything in that "Nowhere near solved" bracket except the last one (human-level intelligence) is now solved, some of it by the most recent generation of multimodal models.
This is what I mean when I keep saying that if we paused all progress right now & just commercialized what we have, the results would still be highly disruptive. But we're not stopping or even slowing, nor are we going to.

B/c I'm still tryna make this A Thing: Don't hate. Accelerate
Multimodal models keep taking out milestones.
Mar 8
I have this issue with Stable Diffusion when I'm making the article feature images for jonstokes.com -- the models really like to set up the scene where there's a subject (person, robot, whatever) standing with their back to you and looking out at some spectacle.
Nine times out of ten I don't even ask it for this particular pose & scene composition, it just does it. I lately find myself deliberately trying to get something out of the models that doesn't have that aspect to it.
If I ask it for "a child standing in the center of a burning village" then the child is always going to have its back to the camera instead of facing it.
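One common workaround (my own assumption about what helps, not something the thread prescribes) is to state the framing explicitly in the prompt and push the unwanted pose into a negative prompt:

```python
# Illustrative prompt pair for steering Stable Diffusion away from the
# "subject with back to camera" default. The exact phrasing is a guess
# at what works, not a documented recipe.
prompt = (
    "a child standing in the center of a burning village, "
    "facing the camera, front view, looking at the viewer"
)
negative_prompt = "back of head, from behind, rear view, facing away"

# With Hugging Face diffusers (needs a GPU and a model download, sketch only):
#
#   from diffusers import StableDiffusionPipeline
#   pipe = StableDiffusionPipeline.from_pretrained(
#       "runwayml/stable-diffusion-v1-5").to("cuda")
#   image = pipe(prompt, negative_prompt=negative_prompt).images[0]
```

Negative prompts don't guarantee composition, so in practice you'd still generate several candidates and pick the one that actually faces the camera.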
Mar 8
Listening to @PalmerLuckey's talk on the @a16z pod, I'm reminded of a key fact that scoffers at the military option (i.e. we ban AI in the public sector & the military takes up the baton) either don't know or forget:

The military once innovated. That it doesn't now is a choice.
There's not some iron-clad law that says the military can't innovate. As Palmer pointed out, in the late 40's half of Stanford's budget was military. But nowadays, defense contractors spend very little on basic research.

We could actually do things differently if we wanted.
To be clear tho: I don't think the military (anyone's military, not just ours) picking up the baton of AI innovation is likely at all, if we manage to outlaw training new models. What's FAR more likely is that talent, capital, & compute emigrate to AI-friendly geographies.