Hrosso
Founder of Unfolding Institute
Mar 13 5 tweets 1 min read
so, i'm eligible for the daily free 1M tokens of gpt4.5 which openai will then train on

that's over 1k A4 pages a day

seems like a moral imperative to try and embed as much of my values / way of being into their training data

any ideas how to use this productively? one obvious idea is to finally finish my book on spirituality via a scientific lens, with the help of gpt4.5 / o3-mini

basically describing my spiritual path including all my models / learnings i've come across
Mar 10 5 tweets 1 min read
i’m not surprised this worked, i’m a little bit (positively) surprised how well it worked

my (not sure if widely accepted) understanding of generalization is that it works via compression

if you try to compress less information than is the storage capacity of the compressor, you get no generalization. if you compress (vastly) more information than is the capacity, you get a lot of generalization

because the storage bottleneck creates pressure towards more abstract, hierarchical, composable (and thus reusable) representations

if you then finetune a model with such abstract hierarchical representations, it might be easier for it to flip a sign somewhere deep down than to shallowly learn spitting out malicious code

whereas if the model wasn’t constrained by its capacity, it would be easier to not generalize and write malicious code without it impacting much else
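a toy sketch of the compression point (my own illustration, not from the thread, with made-up names like `true_rule`): data follows a simple rule; a capacity-unconstrained "memorizer" stores every training pair verbatim and fails on unseen inputs, while a learner squeezed into just two parameters is forced to recover the reusable rule and so generalizes:

```python
# Training data generated by a simple underlying rule.
def true_rule(x):
    return 2 * x + 1

train = [(x, true_rule(x)) for x in range(100)]

# Memorizer: unlimited capacity, zero compression pressure.
# It stores everything but learns nothing reusable.
lookup = dict(train)

# Compressed learner: capacity limited to two parameters
# (slope, intercept), fit by ordinary least squares.
n = len(train)
mean_x = sum(x for x, _ in train) / n
mean_y = sum(y for _, y in train) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in train)
         / sum((x - mean_x) ** 2 for x, _ in train))
intercept = mean_y - slope * mean_x

unseen = 1000
print(lookup.get(unseen))                  # None: memorization doesn't transfer
print(round(slope * unseen + intercept))   # 2001: the compressed rule does
```

the dictionary has far more storage than the data needs, so it never has to find structure; the two-parameter model can only survive by compressing the data into the rule itself.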
Feb 28 12 tweets 2 min read
confirmed, GPT-4.5 just gets it **"Cognitive Immunity" through Spaced Repetition & Reflection:** Users could leverage MemeOS to build mental resilience and critical thinking—repeated exposure to complex, nuanced information gradually immunizes individuals against simplistic or deceptive narratives.
Jun 27, 2024 20 tweets 3 min read
thread #2 on holding space: a deeper, more personal & unhinged take with a plot twist
what's going on on a deeper level? Why do I do it? Why does it feel good? What are the subtle moves I'm doing when I'm holding space for you? The thing is I wanna see you naked. I wanna see you as you are, feel you as directly as possible. Stripped of all protective shields, of all masks you are hiding yourself behind, of all stories you are telling yourself about who you are.
Jun 19, 2024 25 tweets 4 min read
Holding space for self and others
Yesterday my friend shared with me that I am holding space beautifully for others. @RichDecibels told me the same at the end of RichFest2. I intuitively know exactly what they mean by that. But what does it actually mean? 🧵 How to put it in words? What am I doing, with what intention, and what effect does it have? What does it mean to do it well? In what way is what I do special? What are the conditions necessary for it?
Aug 22, 2022 8 tweets 2 min read
Spoiler alert! When I first read Eliot's Four Quartets a couple of years ago it felt like pure wisdom. A perfect description of the human condition. But it was very abstract. The poetic allusions were pointing in the right directions, without ever saying it aloud. For some reason, today I started asking GPT-3 DaVinci for explanations of the last couple of verses. Almost all (like 23 out of 25) were really good. But then I asked: "Explain why the condition of complete simplicity costs not less than everything:
Aug 17, 2022 4 tweets 1 min read
Self-deception has always been an issue for me, and I think for many others as well. My current understanding of how it arises is that society creates external incentives for a certain set of beliefs, which are accepted and publicly endorsed, but in practice not followed. Or the beliefs can come from a self-image created by the person, though the reinforcement from society is still important, because it makes them much more difficult to adjust. Either because it's unrealistically hard to act on such beliefs, or because it's simply enough to act "as if".
Aug 11, 2022 17 tweets 2 min read
I've been looking into the AI alignment problem over the last couple of days and came up with the following summary of what problems there are and why. Also, I'd prefer using the umbrella name of the Human alignment problem, as AI alignment is just a subset of it. The problem is that we don't know what we want.