Joe Bryan, Next Steps for Vere
Vere computes, persists data, and carries out I/O. This combination distinguishes it from other interpreters and runtime environments.
compute = Nock & jets
persist = events, snapshots
i/o = source events, release effects
A jet accelerates a Nock computation, often nonlinearly; the classic example is decrement.
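To see why a jet matters, here is the classic unjetted decrement, sketched in Hoon: without the jet it can only count up from zero, so it needs O(n) loop iterations, while the jetted version is a single machine subtraction.

|=  a=@                 ::  decrement, no jet (sketch)
?<  =(0 a)              ::  zero has no predecessor
=|  b=@                 ::  counter, starts at 0
|-  ^-  @
?:  =(a +(b))  b        ::  when b+1 reaches a, b is a-1
$(b +(b))               ::  otherwise keep counting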
Persistence has two parts with very different requirements: the events themselves, kept in the event log, and the state derived from them, kept as a snapshot (since re-deriving state from events is expensive).
I/O exists because a computer is useless unless it acts on the real world.
Urbit now implements a relatively small number of protocols (e.g. Ames, HTTP).
Vere has three jobs: boot/work/replay
Replay is crucial but of course you don't *want* to ever have to do it.
"One thing is better than two things, and two things are better than three things." So we want to special-case the absolutely necessary parts and define the rest in terms of replay.
[%2 [%0 3] %0 2]

This is replay (in Nock). The idea of Urbit is a computer whose entire lifecycle is defined by a small fixed frozen function.
Construct a formula, then run it against a subject.
Everything else we are doing (the system straitjacket) simply ensures that this property is true.

Computing events against this state deterministically gives you a representation of your digital life.
[%2 [%0 3] %0 2]

The funny thing is that this formula doesn't say anything, it's trivial.
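A quick worked reduction, using the Nock rules *[a 0 b] (fetch slot b of a) and *[a 2 b c] = *[*[a b] *[a c]], with the life-cycle pair written as [fol sub]:

*[[fol sub] 2 [0 3] [0 2]]
  → *[*[[fol sub] 0 3] *[[fol sub] 0 2]]     (Nock 2)
  → *[sub fol]                               (slot 3 is the tail, slot 2 is the head)

In other words: take the head of the pair as a formula and run it against the tail.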

This layer of the system is already frozen forever. Nock has some kelvin left, but there is no foreseeable reason it would ever change.
To accomplish this model, Vere is built of many things:
- loom (roads, allocators, snapshots)
- bytecode interpreter
- jet dashboard
- virtual machine
- IPC
- event log
- effects
Vere makes some standard POSIX system calls, but the meat and potatoes of Vere is entirely custom.
The Loom

The conceit of the loom is that it is the entire address space of a computer.
The loom follows a road discipline which is obvious in retrospect but differs from conventional memory models.

It is designed to layer memory regions and has north/south directionality.
When we allocate, we allocate on the hat (the stack). Any ad hoc computation follows stack discipline: the hat moves back and forth, while the rut churns around like a cursor.

Between hat and cap is unallocated free space.
You can allocate a special-purpose memory arena, use it up, then throw it away. But we reverse directions at each level, because the heap of one road layer is the stack of the containing road layer. (This is why the loom's road model works well.)
From within Nock there's no way to deal with pointer equality, roads, etc. But some trees are repeated and we don't want to waste memory with extra copies of data. (This gets into noun deduplication.)
(Discussion in the vein of developers.urbit.org/reference/runt…)
stack discipline v. heap chaos
- Each allocator churns on its own lifetime.
- tracing garbage collection v. reference counting
Refcounting is generally despised and considered obsolete on modern architectures, because memory is not linear but hierarchical: refcounting causes many nonlocal updates (reads can become writes, which is problematic).
One of the coolest things about Vere's allocator is a significant improvement to refcounting: you never update refcounts in senior memory.
For the snapshot, this means clean pages are kept clean.

In general, it significantly minimizes many of the downsides of refcounting.
Refcounting is not suitable for cyclical data. But nouns cannot have cycles. The allocator's general-purpose capabilities are carefully used to avoid cycles.
The upside of refcounting is deterministic finalization: automatic memory management that follows from the properties of the computation itself, not from external preemptive events.
Vere's usage pattern for cons cells is very distinctive. Refcounting gives you eager determinism.
Bytecode interpreter:

A stack machine built on top of the allocator and the loom. Highly optimized on its own terms. Uses a threaded-code pattern (computed goto). Extensible hint protocol.
Unfortunately these gains are often clawed back by other parts of the system, thus research projects like New Mars.
~> %xray [2 2]

shows the bytecode for the wrapped expression

~> %xray =+(2 [- -])
~> %xray (add 2 2)

^ this is not addition, it is just the code for the call
If you wrap an %xray hint around a core, you just get the formula that invokes the core, not the core itself.

~>  %ray.[0 %outer]
=|  i=@
|-  ^-  @
~>  %ray.[0 %inner]
?:(=(i ^~((bex 0))) ~ $(i +(i)))
- `~> %bout (met 3 (jam .))`
- `~> %bout (cue (jam met))`
- `~> %bout (slum (cue (jam met)) [3 (jam .)])`

When we jam and cue +met, there is no hint at the point of execution.
Call overhead is massive. Technical corrections to this (which Joe discusses) are the most promising part of New Mars.
We compute many things in virtual Nock, +mock.

Virtualization is slow, but it gives us bail/trace/bounded computation.
Stack traces are computed through virtualization and are 100% correct.
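For example, the standard library's +mule traps a computation by running it in virtual Nock, so a crash comes back as a value carrying the trace. A minimal sketch (dojo-style):

=/  res  (mule |.((dec 0)))    ::  (dec 0) crashes; +mule runs the trap virtualized
?-  -.res
  %&  p.res                    ::  success: the product of the trap
  %|  (mean p.res)             ::  failure: p.res is a tang (the trace); re-raise it
==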
Event lifecycle:
- something happens
- it's observed & an ovum is enqueued
- the ovum is dequeued & scheduled (IPC)
- the event is computed
- the result is serialized & sent back
- failures are optionally sent back to the driver
- effects are enqueued & the event is logged
- effects are released
next:

- 8 GB loom
- demand paging (reduce cost of large looms)
- epochs
- distributed event log
plan:

- 16 GB loom
- commit-before-compute
- shared memory IPC
- parallelism
Vere Repo Management by Peter McEvoy (~fanfun-mocbud)
Previously: Urbit was a monorepo, using Nix+Make. The release process was freeform. Linux/macOS/OpenBSD/Windows were all supported.
We wanted a simpler/more obvious build system, a highly-structured release process, and a focus on high-priority release platforms.

This let us establish a higher release cadence.
(when Zoom bugs out on the connection then it plays back voices like The Chipmunks)
Release process:

feature → develop (edge) → release (soon) → master (live)
we supply versions:
- edge (least stable)
- soon (release candidate, hotfix if necessary)
- live (on network)

this cadence lets us spend less time thinking about releasing
Then master merges back into develop
build system:

- how Bazel defines & sources third-party deps
- how we build those deps
- how we build runtime targets
- toolchain configuration
(aside: this is the sort of talk that is really good for developer conferences. it's way too technical and in-the-weeds for Assembly, but it's essential to disseminate tacit knowledge and known best practices)
(and for that reason, since it depends on digging through the build config, I don't have much to tweet now)
Subscription Reform, Jack Ek and Ted Blackman
Subscription reform concerns how agents subscribe to each other (and thus aspects of the Urbit application model).

Is there a good reason for this? If you look at the size of a ship using |mass, %ames holds a lot of data compared even to userspace (%gall).
This is because the data sent to different peers is replicated. There are historical reasons for it, but we need to fix it.
A publisher has many subscribers in a hub-and-spoke model. But different subscribers may have different pieces of state at a given time.

So you have to distribute different pieces to get subscribers in sync.
Right now we first serialize (put them into one message), packetize (ready to send piecewise), and encrypt (at which point they look completely different).

The conceptual data are the same, but the form is so different that the system can't recognize the duplication.
Remote scry means that we have reads, not writes. This means we can avoid an Arvo event, reducing CPU and disk-write load on publishers.

This facilitates parallelism and caching.
Now you can just publish data and whoever wants it gets it.

This will first be used in OTAs; private data is harder.
A scry conceptually is a pure function mapping from a path to a response. We desire referential transparency.

The response can change, but given a particular path you should always get the same response.
A scry path looks like /~zod/1/2/c/x/37/kids/sys/arvo/hoon

~zod           host
1              rift
2              life
c              vane
x              request type
37             revision number
kids           desk
sys/arvo/hoon  file path
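For comparison, a local read against the same kind of namespace uses the dotket rune in the dojo; a sketch, assuming the current desk has a /sys/kelvin file (as %base does):

.^(* %cx %/sys/kelvin)    ::  %cx reads a file from clay; a fully pinned path always yields the same noun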
We can use this to hit Vere but not Arvo.

As with everything else, our solution: event logs and snapshots.
It won't matter which "version" of the state we receive as long as we know how to deal with event logs and snapshots (and we do!).
We can clear old versions of publications by simply squashing the state together.
We call this "solid-state publications". There is no imperative state to keep around anymore.

We are only reading from an immutable namespace. We can add to the namespace, but not change it.
We call this $lake/$rock/$wave.

A $rock represents a state, and a $wave washes over the $rocks to change them slowly.
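A hypothetical sketch of the shape (names and molds here are illustrative, not the library's actual interface): a $lake fixes what a $rock and a $wave are for one publication, plus how a wave washes over a rock.

|%
+$  rock  (map path @t)                  ::  a full state snapshot
+$  wave  [=path txt=@t]                 ::  one incremental update
++  wash                                 ::  apply a wave to a rock
  |=  [=rock =wave]
  ^-  ^rock
  (~(put by rock) path.wave txt.wave)
--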
(demo of this in userspace now)

github.com/wicrum-wicrun/…
SSS is the beginning of subscription reform, but other parts are needed:

√ remote scry
√ solid-state publications
- encrypted scry
- sticky scry requests
- shrub
%poast is a quick-and-dirty Twitter backend written using some of these techniques.

github.com/belisarius222/…
The most provisional is "shrub", scry maximalism, which may or may not happen. But one could imagine an Urbit where every piece of userspace data is stored in the scry namespace.
All data, all code, all I/O would have a scry path. You'd react through scry endpoints.

Is this system better?

You could do it in userspace, but also for networking (the Ad Fontes proposal).
Anything that your Urbit emits to the network would be a scry binding: `+$  oath  [path noun]`
This isn't how networking works now. The idea came from an Ames rewrite in 2019: every packet already has a globally unique identifier (to prevent replay attacks), so you could just map packets into the scry namespace.
Urbit is a logical broadcast network.
Arvo can't really handle ephemerality; it's not built for it. So this is really nice for maintaining Urbit's guarantees about deliver-once semantics, etc.
Sticky scry requests.

You can sign packets on the first scry request in the `urth` process rather than signing all packets, so it's faster.
Apps shouldn't think about encryption, it should just be part of the kernel.
You can't *really* subscribe now—I can't ask for something that doesn't exist yet and get a result when it comes into existence.
(Discussion about how this may break Urbit as a single-level store.)
Lightning Talks
%vapor by ~harden-hardys (?) hard to see

Making NFTs not suck. @QuartusCo contributing
Poor performance characteristics of IPFS lead to JPGs and MP4s only. It appears decentralized but is really recentralized.
%vapor is interactive NFTs w/ FE for client
dynamic + private content

IPFS hashed URL scheme is public so you can't sell revenue-generating data

but Urbit-only means that you can properly gate
Unique private NFT experiences for collectors that don't leak to the outside world.

Urbit can watch on-chain events to trigger private data transfer.
If you don't own a ship, then a hosted ship will boot for you that lands right on the NFT.

Publish on OpenSea, let people get private access through Urbit.
(end-to-end demo, very nice!)

Bug Azimal (?) example NFT

(oh, I think he's making it live, it's not on there yet)
Collector gets Urbit view of collection as well, then can boot the experience.

(ngl this is pretty cool, much better than regular #NFT static stuff)
Imagine training a Pokémon as an NFT then upgrading/improving it, then reselling.
%vapor will serve as a general information market, much bigger than "classic NFTs"
expect some token releases in the next month or two

e.g. 3D environment/video w/ possible interactions
Allowing a collector to click and run through an NFT is impossible on legacy systems, whereas on #Urbit Trent got this working in a single day.

B O O M
~hastyp-patmud (AJ) demoing %plug
%plug means sell stuff on #Urbit.

Serve Web2 store, basic ecommerce platform
(demo of storefront backend and purchase process)

Sellers, reach out to ~hastyp-patmud on Mars.
Y'all, #Urbit is happening.

Imagine discovering integrated Wordpress+Signal+Etsy+OpenSea.

I'm actually a little jealous of the people who will soon discover this for the first time. It's like a dream garden that just gets bigger and more charming and enchanted.
If you wander into #Urbit, it's like Faerie: you return to the real world bewitched if at all.
Zach Alberico, ~dovsym-bornel

Growth stuff for Urbit
Old merge (lack of) discipline was bug-prone.

New discipline he helped design with Peter McEvoy and Matt LeVan.

Now we have two repos, vere and urbit, with three branches (feature/develop/master). PR needs to pass CI and code review. Thence to moons etc.
Missing a release isn't a big deal anymore because it can go next week.
Developers work on release from develop so they catch bugs early.
ZA on improving onboarding and user experience.
Seth ~doplyr-harbur demoing %portal
Onboarding discovery is a hard problem on the decentralized network.

Right now you see no apps or groups on the default page.

%portal is a way to permit discovery using curators.
Curators collect groups, media, games, videos/podcasts, apps, blogs, meetups, tools, etc. in a single landing page. New (and old!) users can use this to discover new content.
You can join groups etc. w/o leaving %portal context.

You can weight your personal discovery algorithm (!) and how it surfaces content for you.
%portal devs intend to use other Urbit apps as primitives for building discovery network, e.g. %pals.
Next version will let you add curators, lists, create new metadata, discover computed metrics, etc.

Also will coincide with release of curator API.
When you customize weights for content, you're creating a worldview or lens that you can then leverage to talk to different pieces of data.

Private-by-default personal AI.
|install ~worpet-bildet %portal
Rikard @rikardhjort hacking on K-Nock, a Nock spec built in the K Framework.

runtimeverification.com/blog/k-framewo…
Formal logic specification of semantics of a programming language.

The current mode sucks: you should spec a language once, at the beginning, rather than after the fact.
What Rikard is aiming at is a symbolic execution engine.

He works as crypto auditor: formal verification of smart contracts.
Formal verification is the best but hardest.

Make inputs symbolic then execute along all possible branches, find full possibility space of entire program. E.g. that no one can have negative tokens, no tokens from thin air, etc.
Process:

Specify Nock syntax. Make semantics strict. Then line-by-line specification of Nock for K.

Then specify claims.

Finally run the framework.
Things get hairy with branches because of explosion of possibilities. Loops are a big problem because of recursion.

In practice this requires some manual intervention to specify where the loop is and to guarantee that the right result is present there.
The Legal Situation of El Salvador, Stephen Galebach & Reynaldo Vasquez
Vasquez wrote the debated law for DAOs in El Salvador
Uqbar has a Salvadoran corporation now, to serve as a model for project setup.
El Salvador as a tech-friendly culture.

Good economic advantages.

Engagement with universities.
Nonprofit foundation Floración for Salvadoran developers.

Welcoming scholarships, investment, etc.
The current workforce: one foundation running bootcamps for people so they can hire them for projects. Dire need for training.
Young ES grads tend to be good in English but mostly find call-center jobs with low wages. Arbitraging that labor into more productive, higher-paying jobs with real value creation is an enormous opportunity.
