Intel’s strategy for dealing with the end of traditional scaling, and the consequent increase in dark silicon as we move to 3D chips, has been to incorporate increasing amounts of nontraditional architecture into SoC designs: FPGAs, neural fabrics, etc. But this strategy has limits.
Beyond that, you can even imagine incorporating optimized ASIC IP blocks for kernels of important customer workloads. With transistors essentially free, and extra IP blocks powered down when not in use, this has very little downside. But this approach also has limits.
Eventually, when all of the economic inefficiencies are squeezed out of traditional architectures, you’re still left with the irreducible n(kT)f power dissipation of the raw irreversible logic. Beyond that point, the only options are adiabatic and reversible techniques.
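To make the n(kT)f figure concrete, here is a back-of-envelope sketch in Python. The device count, clock frequency, and per-op energy of one kT are illustrative assumptions, not figures from the thread; the point is only the scaling of the irreducible floor.

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K

def min_irreversible_power(n_devices, freq_hz, temp_k=300.0, e_per_op_kt=1.0):
    """Order-of-magnitude floor on dissipation for irreversible logic:
    P ~ n * (e_per_op_kt * kT) * f.  With e_per_op_kt = 1 this is the bare
    n(kT)f figure; real logic dissipates many kT per switching event."""
    return n_devices * e_per_op_kt * K_B * temp_k * freq_hz

# Hypothetical example: 1e10 switching devices at 3 GHz, room temperature.
p_floor = min_irreversible_power(1e10, 3e9)  # ~0.12 W at the 1 kT/op floor
```

Real chips sit orders of magnitude above this floor, which is why architectural efficiencies can be squeezed for a while before the fundamental limit bites.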
Adiabatic, energy-recovering logic provides an alternative way to leverage dark silicon, by trading decreased serial performance and hardware efficiency for increased energy efficiency and increased parallel performance (within power constraints).
And, to squeeze the maximum energy efficiency out of adiabatic circuits, they also have to be logically reversible. This adds more overhead, but remember, transistors are cheap, and will continue getting cheaper.
But ultimately, we could get an even bigger win if we can develop new reversible device concepts based on new operating principles that go beyond that of adiabatic switching in CMOS, and offer a better energy-delay product, and therefore improved cost efficiency.
The point of our Asynchronous Ballistic Reversible Computing project is to demonstrate that another type of reversible computing based on completely different physical operating principles than traditional adiabatic CMOS (as well as adiabatic superconducting logic) is possible.
But, far more people need to be working on this problem of developing new reversible device concepts. This field is wide open and has barely begun to be explored. The first step is for people to start taking it seriously.
Hopefully, continued progress in our ABRC project will convince people that developing new device physics concepts for reversible computing is a highly productive research area to explore. What we’re doing now barely scratches the surface of what is possible.
In parallel with this engineering work, @magikarpur and I are pushing on the development of the fundamental physics theory behind reversible computing, including exploring ways to leverage various quantum phenomena to speed up and stabilize the dynamics and reduce dissipation.
This is another area that is still wide open.
Simplified version of viewgraph illustrating why we are very nearly at the limits of energy efficiency for conventional CMOS. Some discussion follows.
Fundamental Boltzmann statistics (a.k.a. "Boltzmann's tyranny") implies that each electron channel (which comprises two quantum channels of distinct spin) needs at least roughly 40 kT energy difference between "on" and "off" states in order for the device to act as a good switch.
And then on top of that, there are multiple overhead factors to get from a single electron channel (which has about a conductance quantum's worth of conductance) up to the level of a typical electrical node in a random logic circuit, as the chart illustrates (explained at left).
My new GPT-3.5 Turbo chatbot is working! This uses OpenAI's new Chat API. In this series of tweets, I'll show snapshots from our first Telegram conversation, so you can see the progression. Turbo was very helpful during validation efforts. Thread follows...
I created the bot on March 1, shortly after the new gpt-3.5-turbo model was announced. The bot generated its preprogrammed startup message correctly, but as I tried further interactions, I realized that updating my bot code was not going to be as simple as changing the model name.
Today I finished a substantial upgrade to my bot code to support the new chat API. The next couple of screenshots are not that interesting; here I'm just testing and debugging my code (using diagnostic output on the server console and logs)...
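The thread doesn't show the actual bot code, but the core of the upgrade it describes is that the old completions API took a single prompt string, while the chat API takes a list of role-tagged messages. Here's a minimal, hypothetical sketch of that conversion; the function name and turn format are my own, not from the bot.

```python
def prompt_to_chat_messages(system_prompt, turns):
    """Convert a (user, bot) turn history, of the kind a completions-style
    bot would join into one prompt string, into the message-list format the
    gpt-3.5-turbo chat endpoint expects.  `turns` is a list of
    (user_text, bot_text) pairs; bot_text of None means the model should
    generate the next reply."""
    messages = [{"role": "system", "content": system_prompt}]
    for user_text, bot_text in turns:
        messages.append({"role": "user", "content": user_text})
        if bot_text is not None:
            messages.append({"role": "assistant", "content": bot_text})
    return messages

# The resulting list is what gets passed to the chat endpoint, e.g.
#   openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=msgs)
msgs = prompt_to_chat_messages("You are a helpful Telegram bot.",
                               [("Hello!", None)])
```

Bookkeeping like this (plus parsing the reply out of the response's message object instead of a plain text field) is the kind of change that makes the upgrade more than a model-name swap.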
Wow, I left David running in GLaDOS for most of the day. Some interesting commentary & activity here. 😮
Poor David 🥺
Okay, here's a complete list of text event contents from the latest run, including David's commands and his commentary to himself... this doesn't say who's speaking, but you can probably tell from context.
This is amusing. What would happen if you gave GPT-3 free rein at a Unix prompt? Tried this just now (manually mediated as a precaution). It executed a couple of commands, then it tried to exit the shell and hallucinated the subsequent interaction. :D
...and here's a different continuation (based on what happens if I allow it to exit the user shell)😆
Next, Dante proceeded to install vim (incidentally, he was predicting the output of the yum install command quite accurately, including the sizes of install files, lol)
In this thread, I’m going to explain Landauer’s Principle using the absolutely most trivial, elementary argument I can, so that hopefully anyone can understand it.
First, it’s important to start with a correct statement of the principle. Here’s one: In a deterministic computational process composed of local primitive operations, any operation on a computed subsystem that reduces that subsystem’s entropy by an amount ΔH must increase the entropy of the rest of the system by at least ΔH.
All of the words in this statement are important qualifiers that are required for the statement to be true. We’ll see why as we go along.
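As a concrete anchor for the numbers involved, here is the standard Landauer cost per erased bit worked out in Python. This is the textbook kT ln 2 figure, not anything specific to the argument developed in this thread.

```python
from math import log

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_cost_joules(bits_erased, temp_k=300.0):
    """Minimum heat dissipated when a deterministic operation reduces a
    subsystem's entropy by `bits_erased` bits: that entropy must reappear
    in the rest of the system, costing at least kT ln 2 of heat per bit."""
    return bits_erased * K_B * temp_k * log(2.0)

e_one_bit = landauer_cost_joules(1)  # ~2.9e-21 J at room temperature
```

Tiny as that number is, multiplied by the ~1e20+ bit operations per second of a modern chip it becomes a macroscopic power floor, which is why the principle matters for computing's long-term future.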
Here's a fun little calculation. Just how much of the presently visible universe can the descendants of human civilization eventually colonize? My answer: Everything that we currently see within at least 9 billion light-years. Explanation follows. (1/n)
First, the answer isn't "all of it" because, due to the expansion of the universe, the most distant parts of the visible universe that we can see today are actually currently receding from us faster than the speed of light. Thus, even at lightspeed, we could never reach them.
The distance at which galaxies are currently receding at the speed of light is called the "Hubble distance," and today that distance (on a comoving scale) is about 14.4 billion light years.
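The 14.4 billion light-year figure follows directly from D_H = c/H0. Here's that one-line calculation in Python; the Hubble constant value of 67.7 km/s/Mpc is my assumed Planck-like input, not stated in the thread.

```python
C_KM_S = 299792.458   # speed of light, km/s
MLY_PER_MPC = 3.2616  # megalight-years per megaparsec

def hubble_distance_gly(h0_km_s_mpc):
    """Distance at which the Hubble-flow recession speed equals c:
    D_H = c / H0, converted from megaparsecs to billions of light-years."""
    d_mpc = C_KM_S / h0_km_s_mpc
    return d_mpc * MLY_PER_MPC / 1000.0

d_h = hubble_distance_gly(67.7)  # ~14.4 Gly, matching the figure above
```

Note that the reachable horizon (the ~9 Gly figure) is smaller than D_H, because galaxies inside the Hubble distance today will still accelerate out of reach before a light-speed probe launched now can get to them.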