Thought experiment time!

Suppose you were offered the following opportunity: Using highly advanced, but completely safe, psychological methods, your values and personality could be permanently altered.
The changes would be minor enough that you wouldn't simply be overwritten, your mind replaced with a different person's; your parents would still recognize you as you. But they would be big enough that you would make different life choices and have a different life trajectory.
All of the changes would be in the direction generally considered "good": you'd become happier, more diligent, more conscientious, more prosocial, less neurotic.
Your preferences and interests would change somewhat: if you like history, you might come to like math instead. The people you vibe best with would also change somewhat, as would the people you're romantically attracted to (assume you're single). Your sense of humor might change.
All of those changes would be the result of strengthening the new preferences more than muting the old ones. It's not that you stop enjoying history; it's more that you get _really_ into math, such that history just doesn't seem as interesting as it used to.
None of these changes would make you "worse off" as assessed on an absolute scale (i.e., for every shift from preferring X to preferring Y, a neutral observer would think that someone who likes Y is generally better off than, or about as well off as, someone who likes X).
Would you take someone up on this opportunity?

If someone were offering to pay you to do this, how much would you need to be paid to make it worth it?
[See the above hypothetical]

More from @EpistemicHope

1 Aug
I just noticed that one of the things that I get from fiction is a kind of vicarious...pride? ...camaraderie? from competent people trusting each other.
For instance, in urban fantasy, there's something that feels deeply Good about the moments when the wizard and the cop work together to get the job done.
Neither one fully understands the other's work, or the constraints that they work under, but they _do_ trust each other's expertise and each other's moral commitment.
1 Aug
This is about AlphaZero.

But I think it is basically how human impulse control works. If a person chronically makes "bad" short-term-oriented choices, it may very well be because they _correctly_ don't depend on themselves to be able to execute on a long-term strategy.
It's a lot less attractive to be "disciplined" and "responsible" and "prudent" if you're going to fuck it up somehow before you get the payoff.

You might as well seize some pleasure now, even if it is "self destructive" or reckless.
If you're likely to fail anyway, there's less use in focusing on your studies. Might as well party.

If you're going to misinvest your savings, or break open the piggy bank to buy some random thing down the line, you might as well spend the money now.
28 Jul
A realization that probably is obvious to people who are more savvy than me:

For most people, a lot of behavior is motivated not by the merits of the behavior itself, but by the fact that it provides a template for social engagement.
I'm in Las Vegas for a conference today. I was wandering around the casino in which the conference is being hosted, and watching the people.
I was poking around in the gift shop and saw two women looking through the clothes.
25 Jul
What are some of the triggers that prompt you to do a Fermi estimate, or to pull up a spreadsheet and make a simple/rough quantitative model?
(Same question on LessWrong.)

lesswrong.com/posts/yHuuNhqy…
25 Jul
Folks who believe that "induction is impossible", can you clarify what you mean by that?

I can think of at least four (not quite mutually exclusive) possibilities.
1. Predicting the future based on past data is a thing that has straightforwardly never happened.

As in, it is physically impossible for someone to reason thusly: "I think the sun will rise tomorrow, because it has risen every day in the past."
I don't THINK this is what people mean, since that seems like an absurd proposition. But maybe I'm missing some nuance.
23 Jul
I'm not entirely sure what cognitive sequence led me to that distinction, but I think it might have been (in part) downstream of editing my current date-me page (elityre.com/date.html).

This section felt kind of grammatically weird to me. And I think it was because I was sort of switching back and forth between talking about the kind of relationship and the kind of person.
Looking at it now, it doesn't feel as awkward, though. So dunno.

I think part of it was that I was a little bit more tapped into the STATE of what I want, instead of working with abstracted descriptors.
