Eli Tyre
Oct 2, 2020 · 4 tweets · 1 min read
I'm reminded of a comment I made while reading Joseph Campbell years ago: the east has inner peace, but the west invented freedom.
In the west we suffer, we have to deal with sin, we have to take responsibility for our lives and for choosing well.

All of which flowered, during the Enlightenment, into the idea of political freedom and self-determination.
In the east, there are paths to transcending suffering and escaping the cycle of death and rebirth.

All that is expected of you is that you comport yourself appropriately for each of your social roles. (Extreme example: caste systems.)
Going by Vervaeke, it seems like free will was invented in the Axial Age, along with the idea of an open, influenceable future.


More from @EpistemicHope

May 1
I think many of Bernie's policies would be disastrous if implemented (getting the economics right REALLY matters), but I can definitely see why he has such a following.

Granted, he's talking about my pet issue here, but he feels "real", in a way that few politicians do?
It feels so refreshing to listen to a politician respond to things in ways that feel straightforward and obvious, when everyone else is playing political double-speak games.
Even if his policies are pretty bad, I do believe that Sanders really does care about the American people and American workers.
Apr 27
For my calibration, do others (aside from Critch) think that LessWrong has this problem?

For those of you who agree, can you share the strongest example of this problem that comes to mind?
This post is phrased in a way that implies that there are a bunch of people looking at LW, from the outside, thinking "man, I wish they would have less violent rhetoric".

Is that true, or is it mainly/only Critch who thinks this?
Even one other person who agrees, and who has had enough contact with LessWrong to have at least one example (as opposed to secondhand stories of how bad and violent the doomers are) would be helpful for my triangulating here.
Apr 11
I had Claude write a dating ad for me. It felt like it was trying too hard to be fun and relatable, so I asked Claude to make a "joyless" version, and got this:
Another one:

(It's not actually true that I "don't socialize recreationally", but I can see why Claude wrote that.)
Vegan male, SF. Six-day workweek minimum. Diet: kale. Leisure: Anki. Seeking woman sincerely committed to the Good for honest, high-meta relationship. Conventional dating activities not offered.

elityre.com/date.html
Apr 5
The problem here is real, but this analysis of why it occurs is mistaken.

The AI companies are NOT incentivized to maximize engagement the way that social media companies are, because they have a different business model.

🧵
Facebook and Twitter source their content from users and get their revenue from ads.

It's basically free to serve webpages, and the more time people spend scrolling, the more ad impressions, and the more revenue.

Cost is fixed, and revenue is variable.
The AI companies are different. So far, they don't make money from ads. Currently, their revenue comes from subscriptions.

Unlike serving webpages of user-generated content, running inference on their AI models is a cost. They only have so many GPUs.
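The contrast in the two tweets above can be made concrete with a toy unit-economics sketch. All numbers here (revenue per hour, subscription fee, inference cost) are invented for illustration only, not actual figures for any company:

```python
# Toy sketch of the two business models described above.
# All numbers are made up for illustration.

def ad_model_profit(hours_scrolled, revenue_per_hour=0.10, serving_cost=0.01):
    # Ad-supported: revenue scales with engagement; serving cost is roughly fixed.
    return hours_scrolled * revenue_per_hour - serving_cost

def subscription_profit(hours_used, monthly_fee=20.0, inference_cost_per_hour=0.50):
    # Subscription: revenue is fixed per user; inference cost scales with usage.
    return monthly_fee - hours_used * inference_cost_per_hour

# More engagement helps the ad-supported business...
assert ad_model_profit(10) > ad_model_profit(1)
# ...but, past the fixed fee, more usage only costs the subscription business.
assert subscription_profit(10) < subscription_profit(1)
```

Under these (made-up) numbers the incentives point in opposite directions: the ad model wants more hours, the subscription model wants fewer.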
Apr 3
Some thinking about the ethics around people funding me:

I'm working very hard pushing on projects that seem to me to be moving the world towards a better equilibrium. It feels like it does make sense for the broader ecosystem to pour resources into accelerating my efforts.
Wild as it seems, I have more strategic orientation than most, and enough taste to see how a lot of projects could be better, and the energy and agency to make them so.
So it feels neither unreasonable nor inappropriate for me to absorb more resources: there are people who want to help, and I could use those resources to generically make things better in a flexible, on-the-ground way.
Apr 1
@deanwball writes that the blocker to AI takeover risk is computational irreducibility: intelligence can't predict everything, and so superintelligence can't overthrow humans.

This is wrong.
This argument misconstrues what superhuman "intelligence" (or if one prefers, superhuman "capability") entails.
Some specific human individuals have been world-historically skilled at managing capital, interfacing with hard-to-predict systems, organizing groups to accomplish goals, etc.
