Sep 29, 2017
You've probably heard of Maxwell's Demon. A related idea I call Maxwell's Historian is a great way to think about the ultimate potential of computing, and the deeper significance of today's major trends in computing: Big Data, machine learning, and blockchains in particular.
Maxwell's Demon is the 1867 thought experiment proposed by James Clerk Maxwell about a possible violation of the second law of thermodynamics. You have a box with two chambers containing a gas at the same temperature and pressure. A tiny door of infinitesimal mass between them is controlled by a demon who lets fast molecules travel from left to right, and slow ones from right to left, thereby gradually heating up the right chamber and cooling down the left, in apparent violation of the second law.
The apparent paradox was quickly sorted out: turns out that the demon, in the process of measuring molecule velocities and making the door opening/closing decisions, must cause an entropy increase. Where it gets interesting is when you view the demon as a computer and ask how the entropy increases.
In 1961 Rolf Landauer provided half of the answer: a Turing machine -- an idealized computer -- is subject to the second law because the act of erasing a bit necessarily involves a minimum entropy increase, equivalent to dissipating kT ln(2) of heat per erased bit. This minimum is known as the von Neumann-Landauer limit. It was experimentally verified only in 2012.
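To put a number on that minimum: the von Neumann-Landauer limit works out to kT ln(2) joules of heat per erased bit, where k is Boltzmann's constant and T the absolute temperature. A quick back-of-envelope calculation in Python:

```python
import math

def landauer_limit_joules(temperature_kelvin: float) -> float:
    """Minimum heat dissipated per erased bit: k * T * ln(2)."""
    k_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)
    return k_B * temperature_kelvin * math.log(2)

# At room temperature (~300 K), erasing one bit costs roughly 2.87e-21 J --
# many orders of magnitude below what today's transistors dissipate per switch.
print(f"{landauer_limit_joules(300):.3e} J per bit")
```

The point is not the tiny absolute number but the gap between it and practical hardware: that gap is the theoretical headroom for energy-efficient computation.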
In 1982, Charles Bennett provided the other half of the answer: Maxwell's Demon may measure molecule velocities isentropically (without increasing entropy), but then it must choose to either discard or keep the information. If it chooses to discard, entropy will increase (though bizarrely enough, this can theoretically be done without using energy). This means even a demon with large but finite storage will eventually run out of space to store its history of molecular velocities and be forced to start erasing stuff. Still, this gives us something: you can trade memory size for temporal flexibility in when to "pay" your entropy debts.
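Bennett's tradeoff can be caricatured in a few lines. A toy accounting model in Python (the class name and numbers are invented for illustration; this is bookkeeping, not physics): measurements are free while memory lasts, and forced erasures are charged at the Landauer minimum.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

class FiniteMemoryDemon:
    """Toy model: recording measurements is free until memory fills;
    then old records must be erased at k*T*ln(2) joules per bit."""

    def __init__(self, memory_bits: int, temperature_kelvin: float = 300.0):
        self.capacity = memory_bits
        self.stored = 0
        self.temperature = temperature_kelvin
        self.heat_dissipated = 0.0  # cumulative entropy "debt" paid, in joules

    def record_measurement(self, bits: int) -> None:
        self.stored += bits
        overflow = self.stored - self.capacity
        if overflow > 0:
            # Forced erasure: pay the minimum thermodynamic cost now.
            self.heat_dissipated += overflow * K_B * self.temperature * math.log(2)
            self.stored = self.capacity

demon = FiniteMemoryDemon(memory_bits=1000)
demon.record_measurement(800)   # fits in memory: no cost yet
demon.record_measurement(800)   # 600 bits overflow and must be erased
print(demon.heat_dissipated)
```

Doubling the demon's memory doesn't eliminate the debt; it only postpones the day the bill comes due.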
Landauer and Bennett firmly established the connection between bits and atoms, and between thermodynamics and information theory. One way to restate Landauer's principle is that if you choose to forget what you've learned about a system, your ability to extract useful work out of it is lowered. The corollary of course is that it pays to save as much information as possible, in as lossless a state as possible (this generally means as raw a state as possible and using lossless compression over lossy as much as possible).
I call an entity that makes this enlightened choice Maxwell's Historian: an entity that chooses to remember as much as possible, for as long as possible, and fight the noblest possible ongoing battle against the evil second law.
This thought experiment might seem like the computing equivalent of angels dancing on a pinhead, but it's increasingly relevant to real-world computing. There is now a growing body of research associated with the idea of reversible computing: writing programs in ways that minimize or delay irreversible erasures of bits. It turns out that you can get pretty far with the idea. There are now real circuit designs and programming models based on it. A potential practical application is energy-efficient computation, a growing concern as we devote increasing amounts of our planet's resources to computing. How high could it go? Well, the human brain uses about 20% of our energy, so that's a good benchmark for anthropomorphically proportioned computing, though there is no reason to stop there. You could in principle approach 100% of energy devoted to computation, approaching the "pure knowledge" vertex of the time/energy/knowledge constraint triangle known as Spreng's triangle: a condition where our material needs are satisfied with the gentlest possible touches. You can get to some really fun speculations, including a creative solution to the Fermi Paradox (ht David Manheim), if you push the idea to its limit.
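The core trick of reversible computing can be shown in a few lines. Here is a minimal Python sketch (illustrative only; real reversible hardware works at the gate and circuit level) of the Toffoli gate, a universal reversible logic gate that is its own inverse:

```python
def toffoli(a: int, b: int, c: int) -> tuple:
    """Toffoli (controlled-controlled-NOT): flips c iff both a and b are 1.
    Applying it twice restores the original inputs, so no bit is ever erased."""
    return a, b, c ^ (a & b)

# Compute AND reversibly: with c=0 as a scratch bit, the third wire ends up
# holding (a AND b), while the inputs are preserved rather than destroyed...
a, b, result = toffoli(1, 1, 0)

# ...which means the computation can be "uncomputed", returning the scratch
# bit to 0 -- Bennett's trick for deferring entropy payments indefinitely.
a, b, scratch = toffoli(a, b, result)
```

An ordinary AND gate maps two input bits to one output bit and thus throws information away; the Toffoli gate computes the same function while keeping everything recoverable.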
Closer to Earth, there has been a curious convergence between these esoteric speculations and practical developments on three fronts: Big Data, machine learning, and blockchains.
Many people, database experts among them, dismiss Big Data as a fad that's already come and gone, arguing that it was a meaningless term and that relational databases can do everything NoSQL databases can. That's not the point! The point of Big Data, as George Dyson pointed out, is that computing undergoes a fundamental phase shift when it crosses the Big Data threshold: when it becomes cheaper to store data than to decide what to do with it. The point of Big Data technologies is not to perversely use less powerful database paradigms, but to defer decision-making about data -- how to model, structure, process, and analyze it -- until when (and if) you need to, using the simplest storage technology that will do the job. An organization that chooses to store all its raw data, developing an eidetic corporate historical memory so to speak, creates informational potential and invests in its own future wisdom.
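This "store first, decide later" posture is sometimes called schema-on-read. A minimal toy sketch (function and variable names invented for illustration): the write path makes no modeling decisions at all, and structure is imposed only when a question finally arrives.

```python
import json

# Write path: append raw events verbatim; no schema decisions up front.
raw_log = []

def ingest(event: dict) -> None:
    raw_log.append(json.dumps(event))  # cheapest viable storage: raw lines

# Read path ("schema on read"): impose structure only when a question arrives,
# possibly years after ingestion.
def total_sales() -> int:
    return sum(json.loads(line).get("amount", 0) for line in raw_log)

ingest({"type": "sale", "amount": 5})
ingest({"type": "click", "page": "/"})   # a field we didn't plan for -- kept anyway
ingest({"type": "sale", "amount": 7})
print(total_sales())
```

Had the log been forced through a sales-only schema at write time, the click event would have been discarded, and any future question about clicks would be unanswerable.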
Next, there is machine learning. Here the connection is obvious. The more you have access to massive amounts of stored data, the more you can apply deep learning techniques to it (they really only work at sufficiently massive data scales) to extract more of the possible value represented by the information. I'm not quite sure what a literal Maxwell's Historian might do with its history of stored molecule velocities, but I can think of plenty of ways to use more practical historical data.
And finally, there are blockchains. Again, database curmudgeons (what is it about these guys??) complain that distributed databases can do everything blockchains can, more cheaply, and that blockchains are just really awful, low-capacity, expensive distributed databases (pro-tip: anytime a curmudgeon makes an "X is just Y" statement, you should assume by default that the (X - Y) differences they are ignoring are the whole point of X). As with Big Data, they are missing the point. The essential feature of blockchains is not that they mimic the capabilities of distributed databases, poorly and expensively, but that they do so in a near-trustless, decentralized way, with strong irreversibility and immutability properties.
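The immutability property reduces to a simple core: each block commits to the hash of its predecessor, so rewriting any past block breaks every subsequent link. A minimal hash-chain sketch in Python (no consensus, no proof-of-work -- nothing like a real blockchain beyond this one property):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Canonical serialization so the same block always hashes the same way.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, data: str) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "data": data})

def verify(chain: list) -> bool:
    """Each block commits to its predecessor's hash, so editing any
    earlier block invalidates every later link."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, "genesis")
append_block(chain, "tx: alice pays bob")
assert verify(chain)

chain[0]["data"] = "rewritten history"   # tamper with the past...
assert not verify(chain)                 # ...and the tampering is detected
```

History can still be rewritten by rebuilding every downstream block, which is why real blockchains add consensus and proof-of-work to make that rebuild prohibitively expensive; but the tamper-evidence itself is just this hash chain.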
When you step back and consider these three practical trends, you can't help but speculate that we're constructing a planet-scale Maxwell's Historian. Planet-scale computing infrastructure that can potentially save everything, extract all the possible value out of it over time (through machine learning and other means), and optimally balance reversibility and irreversibility needs of computation to be as energy-efficient as theoretically possible.
Be careful with this idea though. These three trends, and the logic of Maxwell's Historian, don't indicate a particular determinate future for computation (i.e., Maxwell's Historian is not another name for an AGI or Skynet). Rather, they mark out a design space of possibilities, and an "arc of history" so to speak. The arc of the computational universe bends towards Maxwell's Historian.
Watching Elon Musk's SpaceX keynote last night, it struck me that we find it easy to grasp visions of vaulting ambition when it comes to exploring outer space, but kinda suck at it when it comes to exploring inner space. But with computing on the trajectory it is on, arguably, Big Data, machine learning, and blockchains together constitute a set of technologies that will make us the cognitive equivalent of a multi-planet species: a multi-substrate intelligence, equally at home in neurons and silicon. We just have to recognize the true significance of what we're up to, and Maxwell's Historian is one mental model that helps us do that.
We may be approaching the end of Moore's Law, but we're only at the beginning of the history of computing.
_Feel free to forward this newsletter on email and share it via the social media buttons below. You can check out the archives here. First-timers can subscribe to the newsletter here. You can set up a phone call with me via my Clarity.fm profile page._
Check out the 20 Breaking Smart Season 1 essays for the deeper context behind this newsletter. If you're interested in bringing the Season 1 workshop to your organization, get in touch. You can follow me on Twitter @vgr
Copyright © 2017 Ribbonfarm Consulting, LLC, All rights reserved.