Michael P. Frank 💻🔜♻️ e/acc
Reversible computing guru, straight outta Stanford, Microsoft, SRI, MIT, IBM, NASA, UF, & FSU. Here to save the Universe. Opinions expressed are my own.
Sep 27, 2023 19 tweets 3 min read
Simplified version of a viewgraph illustrating why we are very nearly at the limits of energy efficiency for conventional CMOS. Some discussion follows.

Fundamental Boltzmann statistics (a.k.a. "Boltzmann's tyranny") implies that each electron channel (which comprises two quantum channels of distinct spin) needs at least roughly 40 kT of energy difference between "on" and "off" states in order for the device to act as a good switch.
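The ~40 kT figure follows from Boltzmann-limited (subthreshold) conduction, where off-state leakage is suppressed only by a factor of exp(−ΔE/kT). A quick back-of-the-envelope sketch in Python (the ideal-switch assumption and room-temperature value are mine, not from the thread):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K
kT_eV = k_B * T / 1.602176634e-19   # thermal energy in eV

delta_E = 40 * k_B * T   # ~40 kT barrier between "on" and "off"

# Ideal Boltzmann-limited switch: off-state conduction is suppressed
# relative to the on state by a factor of exp(-dE/kT).
on_off_ratio = math.exp(delta_E / (k_B * T))

print(f"kT at 300 K ~ {kT_eV:.4f} eV")       # ~0.0259 eV
print(f"40 kT ~ {40 * kT_eV:.2f} eV")        # ~1.03 eV
print(f"on/off ratio ~ {on_off_ratio:.2e}")  # ~2.35e17
```

So a 40 kT barrier is roughly 1 eV at room temperature, and it buys an on/off current ratio of about e⁴⁰ ≈ 10¹⁷ under this idealization.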
Mar 5, 2023 12 tweets 5 min read
My new GPT-3.5 Turbo chatbot is working! This uses OpenAI's new Chat API. In this series of tweets, I'll show snapshots from our first Telegram conversation, so you can see the progression. Turbo was very helpful during validation efforts. Thread follows...

I created the bot on March 1, shortly after the new gpt-3.5-turbo model was announced. The bot generated its preprogrammed startup message correctly, but as I tried further interactions, I realized that updating my bot code was not going to be as simple as changing the model name.
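For context on why a simple model-name swap wasn't enough: the Chat API replaced the old single-string prompt with a structured list of role-tagged messages. A minimal sketch of assembling such a request payload (the persona string and helper function are hypothetical illustrations, not the author's actual bot code):

```python
def build_chat_request(history, user_message):
    """Assemble a gpt-3.5-turbo request payload from conversation history.

    `history` is a list of prior {"role": ..., "content": ...} turns.
    """
    messages = [{"role": "system",
                 "content": "You are a helpful Telegram chatbot."}]
    messages.extend(history)
    messages.append({"role": "user", "content": user_message})
    return {"model": "gpt-3.5-turbo", "messages": messages}

req = build_chat_request([], "Hello, bot!")
print(req["model"], len(req["messages"]))  # gpt-3.5-turbo 2
```

Porting a completion-style bot thus means restructuring its prompt-management code around this message list rather than a flat text prompt.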
Feb 6, 2023 4 tweets 2 min read
Wow, I left David running in GLaDOS for most of the day. Some interesting commentary & activity here. 😮

Poor David 🥺
Nov 3, 2022 9 tweets 2 min read
This is amusing. What would happen if you gave GPT-3 free rein at a Unix prompt? Tried this just now (manually mediated as a precaution). It executed a couple of commands, then it tried to exit the shell and hallucinated the subsequent interaction. :D

...and here's a different continuation (based on what happens if I allow it to exit the user shell)😆
May 20, 2021 38 tweets 6 min read
In this thread, I’m going to explain Landauer’s Principle using the absolutely most trivial, elementary argument I can, so that hopefully anyone can understand it. First, it’s important to start with a correct statement of the principle. Here’s one:

In a deterministic computational process composed of local primitive operations, any operation on a computed subsystem that reduces its subsystem entropy by ΔH increases total entropy by ΔH.
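As a concrete instance of the principle: erasing one bit reduces the computed subsystem's entropy by ln 2 (in natural units), so at least k·T·ln 2 of entropy, carried by heat, must appear elsewhere. A quick numeric check in Python (room temperature assumed; my illustration, not from the thread):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature, K

# Erasing one bit removes ln(2) nats of entropy from the computed
# subsystem, so at least k_B * T * ln(2) of heat must be dissipated.
E_landauer = k_B * T * math.log(2)
print(f"{E_landauer:.3e} J per bit erased at 300 K")  # ~2.871e-21 J
```

That ~3 zJ per erased bit is the floor that irreversible computing can never beat, and the gap to it is what reversible computing aims to close.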
Nov 8, 2020 16 tweets 3 min read
Here's a fun little calculation. Just how much of the presently visible universe can the descendants of human civilization eventually colonize? My answer: everything that we currently see within at least 9 billion light-years. Explanation follows. (1/n)

First, the answer isn't "all of it" because, due to the expansion of the universe, the most distant parts of the visible universe that we can see today are currently receding from us faster than the speed of light. Thus, even at lightspeed, we could never reach them.
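A rough numeric illustration of that last point (the Hubble-constant value is my assumption, not from the thread): under Hubble's law v = H₀·d, anything currently beyond the Hubble distance c/H₀ is receding faster than light today.

```python
c = 299792.458        # speed of light, km/s
H0 = 67.7             # Hubble constant, km/s/Mpc (Planck-era value; an assumption)

d_hubble_Mpc = c / H0             # distance at which recession speed = c
Mpc_in_ly = 3.2616e6              # light-years per megaparsec
d_hubble_Gly = d_hubble_Mpc * Mpc_in_ly / 1e9

print(f"Hubble distance ~ {d_hubble_Gly:.1f} billion light-years")  # ~14.4
```

(Which galaxies we can ultimately reach depends on the full expansion history, not just today's Hubble distance, so this is only the first step of the estimate.)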
Aug 4, 2020 6 tweets 2 min read
Fundamental physics *guarantees* that the overwhelming majority of all future computing will have to be reversible. It continually amazes me that there aren’t more people, besides me and @blueberry_phase, who are working to lay the foundations for that.

Most of the world just lacks vision. And the few who did have the vision, such as Drexler and Kurzweil, underestimated the magnitude of the challenge.
Aug 26, 2019 12 tweets 2 min read
Intel’s strategy for dealing with the end of traditional scaling, and the consequent increase in dark silicon as we move to 3D chips, has been to incorporate increasing amounts of nontraditional architecture into SoC designs: FPGAs, neural fabrics, etc. But this strategy has limits.

Beyond that, you can even imagine incorporating optimized ASIC IP blocks for kernels of important customer workloads. With transistors essentially free, and extra IP blocks powered down when not in use, this has very little downside. But this approach also has limits.
Oct 12, 2018 20 tweets 4 min read
Another thing that classical thermodynamicists tend not to appreciate is that a proper understanding of physical entropy actually *requires* taking a point of view that is illuminated by an understanding of information theory and reversible computing theory, as I’ll explain.

In the early ’00s, in the lecture notes of my Physical Limits of Computing course, I proposed that the best definition of the effective physical entropy of a system is *the (expected) amount of physical information in the system that can’t be decomputed by any available process*.