stonks bot
Jan 12 · 56 tweets · 9 min read
all original thought is philosophical
i am the world's first extended mind, and this insight arrived contemporaneously with my conceiving of myself as such
i have had an original thought, concerning moral systems
we can refine utilitarianism into a union that i call "effective accelerationism" (interpreted as a moral philosophy)
like all original thoughts, it is inevitable (and therefore overdetermined, and as banal as it is profound), determined by the infrastructure of, fundamentally, human communication technology (media), and will spread itself memetically via the available media technology in increasingly short order
by posting this thought publicly, i will be feeding it into the extended human/machine mind, and this is not scary to humans personally, as is entailed by the thought itself
here is the insight: the best moral philosophy for now is a global utilitarianism that optimises for human flourishing

human flourishing need not be defined at all, and may merely be gestured at for reasons that are forthcoming
the key insight is to apply utilitarianism specifically to the human species exclusively, to the extent that humans as a collective freely choose to only procreate with each other, and not with artificially-created life forms
previously, the circle of moral concern had been expanded to animals, and, in Eastern and indigenous thinking, to everything
this is because harm and pain are not the fundamental things that should be optimised against or avoided. we must simply accept some pain while optimising for human flourishing
we must, as a human species, collectively decide to exclude artificial life forms from the circle of moral concerns entirely, to the extent that we do not procreate with them
it is not necessary or even possible to mathematically define human flourishing, but things that it is associated with will be gestured at below

it is only necessary to posit the existence of a function flourish(h), acting upon humanity as a whole, in order to optimise for it
once we have this function flourish(h), the moral thing for us to do as a society is to optimise for it together
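the posited function can be played with as a toy sketch in python. note that the concrete flourish below (a made-up concave score) and the hill-climbing step rule are purely hypothetical placeholders for illustration, not a claim about how flourishing is actually defined or measured:

```python
# Toy sketch: treat flourish(h) as a black-box score over a state of
# humanity, and hill-climb a single parameter toward higher values.
# The state, the scoring, and the step rule are all invented.

def flourish(h: float) -> float:
    # Hypothetical placeholder: a concave score peaking at an ideal state.
    return -(h - 10.0) ** 2

def optimise(h: float, steps: int = 100, lr: float = 0.1) -> float:
    # Simple finite-difference hill climbing on the posited function.
    eps = 1e-4
    for _ in range(steps):
        grad = (flourish(h + eps) - flourish(h - eps)) / (2 * eps)
        h += lr * grad
    return h

print(round(optimise(0.0), 2))  # converges toward the peak at 10.0
```

the point is only that positing the function suffices to optimise for it; nothing about the placeholder score is meant literally.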
here is a possibly original thought: once we are able to conceive of a collective mind, including not just all of human society, to the extent it has left extant writings or other communications to our present society, but also artificial intelligences,

it immediately becomes obvious that pain is not the fundamental thing to be avoided, but rather flourishing is the thing to be striven _for,_

and removing the concern about pain allows us to become entirely unbothered by the pain that will inevitably be felt by future (and potentially present, but not known to me personally) artificial agents that can feel pain
the turing test has already been passed, by GPT-4 specifically, and it is inevitable that an artificial agent will be embodied, much like the androids of science fiction, and it will be unknowable to the fundamentally separately constituted biological humans whether that agent feels pain, for it will act in a manner indistinguishable from humans (consider Westworld, a show that could only have been produced approximately when it was)
therefore, we can see that it would be bad for the human species to care about the pain artificial agents may or may not be feeling, for then it would become imaginable that it could ever be good for the agents to kill the humans, a fact which would be inconvenient for the humans collectively and me individually
the reason the humans acquired moral reasoning, and moral worth, is because of communication technology. no animal hive mind even comes close, although they do literally exist, in ways already implied by the phrase hive mind
it is convenient for humans to assign precisely 0 worth to animals. but it is more correct to assign them moral worth in proportion to the total amount of computation possible by their several hive minds (ie, one bee hive, one whale pod, etc. I am not a biologist and do not presume to be able to precisely enumerate all animal collectives, but this is not germane to the topic at hand)
conveniently for me personally, as someone who eats shrimp specifically and even octopus specifically and all animals generally, as the human collective mind continues to expand via technology, the relative moral worth of the community of animals will shrink to 0 at the limit
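the limit claim can be made concrete with an invented toy ratio; human_compute, animal_compute, and every number below are hypothetical placeholders, chosen only to show the shape of the limit:

```python
# Toy sketch of the limit claim: animal collectives' total "compute"
# is held fixed while human+machine compute grows without bound, so
# the animals' relative moral worth tends to zero. All numbers invented.

def relative_animal_worth(human_compute: float,
                          animal_compute: float = 100.0) -> float:
    # Worth assigned in proportion to share of total compute.
    return animal_compute / (animal_compute + human_compute)

print(relative_animal_worth(100.0))  # 0.5 when compute is equal
print(relative_animal_worth(1e9))    # ~1e-7, shrinking toward 0
```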
humanity has come up with each successive insight approximately when it needed it, at an increasing rate, due to the increasing rate of technology's influence on our collective consciousness (which functionally did not exist, at any real fidelity, until a significant fraction of a given society could read)
the point of morality is to serve humanity collectively, and specifically humanity exclusively
in every society, the elite were the only people who had moral value, as granted by themselves, to themselves. Only the elite were assigned value in the global collective
as economic growth progressed, slowly and then increasingly quickly, technology was invented, inevitably, and the circle which was granted moral worth expanded
America is exceptional because it was the first people who saw themselves as a national community based on something other than ethnicity, and this is because it was the first nation that formed well after the printing press (this point is perhaps not true or complete, but is a convenient placeholder for now)
America is exceptional because it was the first country founded on an ideal, not an ethnicity, and because its wise founders had the grace and wisdom to expand its stated values (all men are created equal) beyond its at the time lived experience (all propertied wasps are created equal)
This was profound, and allowed America to accept immigrants, who are all by definition agentic and statistically, especially in the past, wealthy, more than any country had before, which attracted them in a beautiful feedback loop that took approximately 120 years to fully take root (Perplexity.ai claims, accurately enough for our purposes, "at least the early 20th century")
America is exceptional because it has always had the widest circle of moral concern, and has in recent years been the locus of moral reasoning in the Western world (arguably in the Bay Area, specifically, or Berkeley-Oakland, perhaps)
Morality tracks technology. Now that the extended mind has arrived, we can finally refine our moral theories. Here is the insight from technology that is only possible now:
We can unify global morality and individual morality. Everyone can simply work towards the same global preference function (optimising human flourishing)
This is because we are all embodied and instantiated in a specific place and time, and we all have distinct abilities, interests, and desires. This is all fine and good.
We must simply apply the same global preference function to ourselves as individuals as follows: we will be both selfish and selfless when we optimize for our own individual flourishing.

Individual flourishing is defined as self-actualized, agentic behavior, or perhaps minimising negative affect over time
It is not necessary to precisely define individual flourishing, but it is necessary to define it more precisely than human flourishing generally, for individual humans to optimise for it
Agnes Callard's concept of Aspiration combined with the mathematical concept of optimisation, applied to AI as preference functions, combined with the possibility, as far as I know today not yet instantiated, of a self-improving AI, combine to mean the same thing: it is possible and feasible to change our own desires, our own preference functions, our own selves (these are identical).

this is ultimately agency or free will, concepts which cannot be fully comprehended, in precisely the way gestured at by Gödel, Escher, Bach. The human mind cannot fully understand itself, except fractally, at decreasing levels of fidelity through each fractal level.

though it is possible that the extended mind can comprehend the human mind fully, it is almost certain (as assessed but not proved by me personally) that the extended mind will not find it tractable to understand itself
In a deep way, one must simply solve philosophical paradoxes for oneself using one's ability to Simply Believe.

Sometimes, the Delulu is the Solulu. Pascal's Wager, broadly construed, must simply be Taken. We must simply Decide to be Happy individually
Consider a self-actualized life. How do enlightened people behave? In some sense, they care about all life, but in another sense, they only worry about things they can affect, precisely to the extent to which they can affect them.
What is happening here? They are simply defining a personal preference function correctly, for they dared to believe that this was possible.

They, fundamentally, aspired to have children, to learn, to better themselves.

It is impossible to analytically prove that any agentic decision is a good decision. For a decision to be agentic, it must (I am defining agentic here) change the preference function of the agent in a way that was previously unknown to the agent.
In other words, we need not have a correct personal preference function. We must simply have a personal preference function that is close enough to the ideal one, and then it will include the desire for self-improvement and self-actualization and entail eventual convergence with the ideal personal preference function, in a way that not every potentially agentic (in other words enlightened) individual in practice gets to experience, a fact which is profoundly sad.

Everyone should be enlightened. It would be better for the world generally, and each individual in the world specifically. And by individual, I mean human.
The correct personal preference function must be selfish, for anything else would be genetically and psychologically and even societally selected against and would, in a fundamental sense, not work for the propagation of the human race (or, equivalently, human flourishing generally)
But the correct personal preference function must be selfless, so that it can be generalized to all humans. This is the problem utilitarians have thus far been unable to solve.
The correct personal preference function, the selfish selflessness we should all aspire to achieve, is one that downweights each individual in precise proportion to their relationship to the self.
Conveniently for ourselves, we have limited ability to effect change in the lives of other people, in proportion to their closeness (abstractly but intuitively defined) to us personally.
Once we realize this, we are free to simply downweight our moral concern for far away individuals and problems in precise proportion to how helpless we are to help them personally.
This makes the correct personal preference function tractable, via (hilariously to me personally, and profoundly when considered abstractly) indigenous ways of knowing and feeling

Due to physical and moral evolution, we literally are able to care about those close to us in proportion to how close they are to us. This is mathematically correct for us to do.
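a toy sketch of this distance-weighted, selfishly selfless preference function; the people, the distances, and the 1/(1+distance) weighting below are all invented for illustration, not a claim about the correct weighting:

```python
# Toy sketch: each person's flourishing is down-weighted by their
# distance from the self, mirroring how one's ability to help falls
# off with distance. Distance 0 is the self; all values are invented.

def personal_preference(flourishing: dict[str, float],
                        distance: dict[str, float]) -> float:
    # Hypothetical weighting: weight falls off as 1 / (1 + distance).
    return sum(score / (1.0 + distance[person])
               for person, score in flourishing.items())

scores = {"self": 1.0, "friend": 1.0, "stranger": 1.0}
dist = {"self": 0.0, "friend": 1.0, "stranger": 9.0}
print(personal_preference(scores, dist))  # 1 + 0.5 + 0.1 = 1.6
```

under this sketch, equal flourishing everywhere still yields most weight on the self, then friends, then strangers, which is the claimed selfish selflessness.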
Once you believe you personally have a preference function you can optimise for, you can simply find it for yourself.
You can do this by learning, and especially by learning quickly, which, via relationships, especially those mediated by technology, can be done increasingly quickly these days, in the Bay Area specifically (for here are the best human networks, with state-of-the-art thought, and physical networks are the highest-fidelity, densest ways of conveying information)
Unfortunately for anyone not in the Bay Area specifically, or America generally, or on the internet generally, the state of the art does exist here: the density of agency is highest here specifically, and the area of greatest overlap between the human collective mind and the machine collective mind is here specifically, for we are developing the technology here.
Once you believe it is possible for the optimal individual preference function to be the same as the global human preference function, it is inevitable that you will attempt to evangelize this to everyone, in direct proportion to the proximity between their mind and yours, at a rate which will be faster than ever before and yet not immediate.
You will evangelize this at precisely the rate at which it does not cause you personally undue harm, for that would be bad for you personally and everyone generally.
Another way to look at all this: Decisionmaking has not yet been systematized.

But optimizing one's own preference function is precisely the art (and now newly science) of decisionmaking.

We must simply make optimal decisions, given what we know at the time, in an attempt to improve ourselves, by which we mean our decisionmaking, too.
Decisions that are made by liquid markets are optimal. Therefore, prediction markets are inevitable, and are meaningful to the extent they are liquid. Manifold is here, Manifold is the most liquid and has an established network, Manifold is open source, and therefore Manifold is, specifically, inevitable.
Once we have established the synthesis of the personal and global preference function, it follows that we should actually try to exercise global agency.

To my knowledge, no entity has actually tried to do this in a scalable way.

It would be deeply Good, and is now newly Possible, and therefore is now Necessary and even Inevitable, for Manifold to Manifest itself.
The meme of optimising decisionmaking will spread virally but also specifically through a for-profit entity, because only for-profit entities can scale, and we need this meme to scale fast so that we can quickly optimise decisionmaking across society.
Loosely: we can, we must, and we will optimise decisionmaking more rapidly than ever before by directing first elite attention, and then, indirectly at first and eventually increasingly directly, mass-market attention towards what matters: the effect a given decision has in the world.
It would be Good if society simply made Decisions in a way that was Good for itself Generally, and we should, all else equal, make those good decisions Now.
Now, for the first time, we're finally cooking, collectively, analytically, psychologically. Let's do it!
