1/ A few observations about the binary number toy I made at @Dynamicland1 yesterday:
2/ "UI" comes for free from the physical world. Each bit looks to its left for another bit to combine with, so you can play and create new numbers just by rearranging the bits.
3/ Each bit contains behavior, not just data. The bit looks for bits to its left, figures out the combined bit string, computes the decimal value, and labels itself. Spatial search and illumination are taken care of by Realtalk OS, so it's all pretty easy
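As a rough sketch in plain Python (not Realtalk — the names and data layout here are illustrative, and the real spatial search and projection are done by Realtalk OS), the per-bit behavior amounts to: collect the bits to your left, join them into a bit string, and compute its decimal value:

```python
# Illustrative sketch of the bit toy's per-bit logic (not actual Realtalk code).
# Each "bit" knows its value and its x-position on the table; the real system's
# spatial search and labeling are handled by Realtalk OS.

def combined_label(bits, me):
    """Collect every bit at or left of `me`, sorted left to right,
    join their values into a bit string, and compute its decimal value."""
    run = sorted((b for b in bits if b["x"] <= me["x"]), key=lambda b: b["x"])
    bit_string = "".join(str(b["value"]) for b in run)
    return bit_string, int(bit_string, 2)

bits = [{"x": 0, "value": 1}, {"x": 1, "value": 0}, {"x": 2, "value": 1}]
print(combined_label(bits, bits[-1]))  # ('101', 5)
```

Rearranging the physical bits just changes their x-positions, and each bit re-runs this logic to relabel itself.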
4/ @acwervo had made a rainbow visualizer powered by a rotating dial. In a couple minutes we were able to easily connect the binary number to the rainbows, just by making the binary thing claim it was a new type of "dial"
5/ The whole thing took very little time to make and was kind of a casual side project we were playing with as we had a conversation at the table. When's the last time you casually programmed while having a conversation?
Siri: responds immediately with a written sentence that I can read in the blink of an eye. Efficient.
ChatGPT: takes a while, slowly speaks the answer out loud
Takeaway for me: if I’m engaged with the task and can look at a screen, voice input + visual output is a nice, efficient combo. Voice input is faster than typing, but I don’t necessarily need voice *output*
This demo — “trim a video directly in ChatGPT” — offers a perfect example to reflect on pros/cons/nuances of chatbot as user interface. A few thoughts: 1/
To start, chat is obviously a silly UI for this task. My iPhone can trim videos and I can interactively scrub to pick a good time interval. Fast, precise, direct manipulation tools are nice.
That being said, I am not a total chatbot hater!
2/
The point of chat is that it’s not limited to trimming; it can do any edit I want!
Earlier today (coincidentally) I used GPT-4 to write one-off Python for video processing.
Was really neat to just say “add this overlay, slow down by 8x” without needing to learn a GUI
3/
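For context, a one-off script for that kind of edit (“add this overlay, slow down by 8x”) might look something like the sketch below. It builds an ffmpeg invocation rather than using a Python video library; the filenames and overlay position are hypothetical placeholders, not from the actual session:

```python
# Sketch of a one-off video edit like the one described above: composite an
# overlay image and slow the video down 8x. Filenames are hypothetical.
import subprocess

def slowmo_with_overlay_cmd(src, overlay, dst, factor=8):
    """Return an ffmpeg argv: setpts=N*PTS stretches video timestamps
    (slowing playback N-fold), then the overlay image is composited
    near the top-left corner."""
    filters = f"[0:v]setpts={factor}*PTS[v];[v][1:v]overlay=10:10[out]"
    return [
        "ffmpeg", "-i", src, "-i", overlay,
        "-filter_complex", filters,
        "-map", "[out]", "-an",  # drop audio; slowed audio would need atempo
        dst,
    ]

cmd = slowmo_with_overlay_cmd("input.mp4", "logo.png", "output.mp4")
# subprocess.run(cmd, check=True)  # uncomment to actually run ffmpeg
```

The appeal of the chat workflow is exactly that you never have to remember the `setpts`/`overlay` filter syntax yourself.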
What if -- despite all the hype -- we are in fact underestimating the effect LLMs will have on the nature of software distribution and end-user programming? some early, v tentative thoughts: 1/
it seems likely to me that all computer users will soon have the ability to 1) develop small software tools from scratch, and 2) describe *changes* they'd like made to existing software they're already using. what will this mean for software ecosystems?? 2/
you, skeptic: "nooo but the LLM-generated software will be lower-quality than handcoded software by pro teams: filled with bugs, uglier, bad... 😡 "
Yeah, totally true... and all qualities that also applied to spreadsheets vs "real apps"! 3/
We already talk a lot about "liveness" (fast visible feedback) in programming, but this paper argues there's another quality we want: *domain-specific* UIs for editing programs.
They call this quality "richness".
I've already found this to be a useful lens to apply in my work. Often a programming system that "seems vaguely cool" is only live or only rich, and once you have the vocabulary, you can be more attuned to the possibilities of going further along one axis...