12 Comments

Excellent read. I was thinking about similar feelings when I wrote Lego Mindset vs. Woodworking Mindset (https://scottstevenson.substack.com/p/lego-mindset-vs-woodworking-mindset)--but never considered the tolerance stacking angle.

Wholeheartedly agree that the post-ZIRP game is totally different. It's refreshing, though, for real builders; I think it's where they will excel. I think we'll be seeing a lot of Lego houses collapsing in the next 12-18 months.


Good post. Had to read through it 2-3 times to get some of the finer points. Will have to re-read it a few more times to really get it.

At the end of the day, this is a somewhat fancy dressing of the 'strong men > good times > weak men > hard times' meme. Although you didn't explicitly claim that the four regimes go in a cycle, and they probably don't always.

There are some parallels between this analogy (finance/interest rates as applied to knowledge) and processes in evolution: variation and selection correspond to design and constitutive knowledge, respectively. The first creates all sorts of pure possibilities; the second winnows them down to 'successful models' through contact with a particular ecology.

Punctuated equilibrium is a theory describing how certain evolutionary lineages remain in stasis for long periods, interrupted by short, intense bursts of variation and selection.

Also, I enjoy playing with Meccano sets!

Mar 10, 2023·edited Mar 12, 2023

The economics view would be: AI reduces friction, which increases the operating efficiency of business entities; greater efficiency raises profitability, and higher profits increase the money supply and/or the investible surplus. This surfeit of capital brings interest rates down. Sustained over the long term, thanks to the compounding effect of continuous productivity increases, the fall in interest rates can be very deep and durable: the cost of capital becomes asymptotic to zero.
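The chain above can be sketched numerically. This is a toy illustration, not an economic model: it simply lets an investible surplus compound with productivity growth and treats the cost of capital as inversely proportional to that surplus. Every parameter name and value here is hypothetical.

```python
# Toy sketch: compounding productivity grows the investible surplus each
# period, and the cost of capital is modeled as inversely proportional
# to that surplus. All numbers are hypothetical illustration.

def cost_of_capital_path(periods: int,
                         productivity_growth: float = 0.03,
                         initial_surplus: float = 1.0,
                         k: float = 0.05) -> list[float]:
    """Return the per-period cost of capital as surplus compounds."""
    rates = []
    surplus = initial_surplus
    for _ in range(periods):
        rates.append(k / surplus)           # more surplus chasing returns -> lower rate
        surplus *= 1 + productivity_growth  # compounding productivity gain
    return rates

path = cost_of_capital_path(100)
# The rate falls monotonically and approaches (but never reaches) zero.
```

The point of the sketch is the shape of the curve: under continuous compounding, the decline is not a one-off drop but a sustained slide toward the zero asymptote.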

On start-ups: when the failure rates of start-ups are aggregated, risk-adjusted returns fall, implying a higher implicit funding cost. That is why start-ups are almost always funded by equity or risk capital, which is the most expensive form of capital. (Finance 101)
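The Finance 101 point is easy to see with a probability-weighted return. The numbers below (90% failure rate, 10x and 30x multiples) are hypothetical, chosen only to show why high failure rates translate into an expensive implicit cost of capital.

```python
# Hypothetical numbers illustrating why high failure rates imply an
# expensive implicit funding cost for start-ups.

def expected_multiple(success_prob: float, win_multiple: float,
                      loss_recovery: float = 0.0) -> float:
    """Probability-weighted return multiple across a portfolio."""
    return success_prob * win_multiple + (1 - success_prob) * loss_recovery

# A 10x 'winner' with a 90% failure rate nets the portfolio just 1x
# (i.e. zero risk-adjusted return):
assert expected_multiple(0.10, 10.0) == 1.0

# So to earn a 3x portfolio return, each winner must return 30x --
# that demanded gross return is the implicit (very high) cost of
# the equity capital.
required_win = 3.0 / 0.10
assert required_win == 30.0
```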

On physical laws: laws from physics do not translate easily into the social world, which is unbounded in complexity. Noether-type conservation laws rarely exist there. But exist they do, as shown in Geoffrey West's magisterial study, Scale:

https://www.santafe.edu/news-center/news/geoffrey-wests-long-anticipated-book-scale-emerges

But the article was a fun interpretation. Interesting. Thought-provoking...

Noether's conservation laws:

https://en.wikipedia.org/wiki/Noether%27s_theorem#:~:text=Noether's%20theorem%20or%20Noether's%20first,1915%20and%20published%20in%201918


Makes me think about modeling and valuing know-how as a return on investments of various resources.


This is a very well-articulated bucket of cold water to pour on the AI hype. Thanks, it’s a satisfying read.


Great post, though I was expecting a different/more elaborate connection to the Meaning Crisis side, based on the setup to that section. Citation below, but there was a fine setup in the post to link Real-World Friction to contact epistemology in the Embedded side of 4E Cog Sci, Gibson's affordances, and Merleau-Ponty's Optimal Grip. All of that is used in John Vervaeke's neoplatonist/4E approach to make claims about both the human Meaning Crisis and the AI aspects of meaning/relevance. The latter go back to Vervaeke's ~2013 joint publication with Lillicrap (DeepMind) and Richards (neuroscientist) on 'relevance realization'.

In the language of the post: non-embodied, low-interest, 'text is all you need' LLMs can mimic the midwit context that's compressible in a large corpus, because they have captured the 'low-res' aspects of a Lego-like view of the world (a la Ted Chiang's blurry-JPEG metaphor in the New Yorker). The same goes for human midwits lacking wisdom and becoming 'statistical parrots' of their favored echo chamber. However, nuance and context-specific reframing--fitting within the constitutive knowledge--isn't very 'compressible' in current LLM setups. That same reframing/context/gestalt is also lacking in unwise humans and in right-hemisphere stroke patients (see McGilchrist).

That limitation is partially a consequence of the LLM/DL paradigm of 'high compute burden in training, low compute burden at inference', though recent shifts like chain-of-thought prompting and response filtering (see supplementary materials D.3.1 of Meta's Cicero game-playing generative text model, which filters 53% of its generated responses) are changing that. That change goes beyond just RLHF-ing your way to a better model, to actually sliding down Spreng's Triangle and spending more time + energy evaluating counterfactuals (Vervaeke et al.'s emphasis on Opponent Processing)--particularly from pesky random variables outside your control (like other humans)--to minimize 'regret' within a given context. Do enough regret minimization, or high-compute-at-inference (https://generallyintelligent.com/podcast/2023-02-09-podcast-episode-27-noam-brown/), in the Real High-Interest World, and you might die having led a human life you'll report as meaningful (having avoided being sucked into any particular Finite Game), or not go bankrupt in the long run as an AI-based firm operating in a consequential domain (i.e. not marketing copy or sports-journalism summarization).
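The 'spend compute at inference' idea can be sketched in a few lines. To be clear, this is an illustrative toy, not Cicero's actual pipeline: the candidate generator and the plausibility scorer below are stand-ins I invented, and only the overall shape (sample many, score, discard most) echoes the filtering described above.

```python
import random

# Illustrative-only sketch of spending more compute at inference:
# sample many candidate responses, score each, and keep only those
# above a threshold. Loosely analogous to Cicero-style response
# filtering, but the candidates and the scorer here are stand-ins,
# not any real model.

def generate_candidates(n: int) -> list[str]:
    # Stand-in for sampling n responses from a generative model.
    return [f"candidate-{i}" for i in range(n)]

def plausibility(candidate: str, rng: random.Random) -> float:
    # Stand-in scorer; a real system would check consistency with
    # context and with counterfactual reactions of other agents.
    return rng.random()

def filtered_responses(n: int, threshold: float = 0.5,
                       seed: int = 0) -> list[str]:
    rng = random.Random(seed)
    return [c for c in generate_candidates(n)
            if plausibility(c, rng) >= threshold]

# Most generated candidates never reach the user: compute is spent
# evaluating counterfactual responses rather than emitting the first
# sample.
kept = filtered_responses(100)
```

The design choice being illustrated: the cost moves from training time to inference time, trading latency and energy for fewer regret-inducing outputs.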

For more: U of T's John Vervaeke has come up in a couple of Ribbonfarm comments, and I'll do the same again. The recent interview on Tim Ferriss' show (specific segment on contact epistemology: https://youtu.be/PMBSiWEk6tA?t=9294) is a decent enough intro to Vervaeke's content (Jim Rutt's multi-part overview being the next-best option), short of committing to the full 40+ hour recommended Awakening From the Meaning Crisis series on YouTube.

Since you're reading this far, there's another link between all of the above, the control theory introduced in the post, and the complexity of the (bounded-rationality) model in the Good Regulator Theorem. That cybernetics direction pulls in more predictive processing/coding and free-energy minimization. That aspect is alluded to indirectly in the Ferriss interview via procedural (tacit/constitutive) knowledge, the cerebellum-cortex loop, and the related discussion of adaptive/maladaptive intuitive heuristics. It's elaborated more in Ep 30 (https://www.youtube.com/watch?v=Wex12GhUFqE), which provides a link between FEP/PP 'precision weighting' and attention (not just the dot-product kind in LLMs). To put a spin on that from this post: developing feedback loops for human intuitive machinery that capture Real-World Friction is the Flow State and the Zone of Proximal Development. In addition to being reported as 'meaningful', such learning can help overcome some human limitations on working memory, which may be one of the few areas where next (current?) generation LLMs grok aspects of Meccano world but are (like us) unable to squeeze that knowledge through the language bottleneck to explain it.


Just finished part one - the wonderful, clear description of Meccano v Lego, which was delightfully revealing to me about the ways I prefer to work (I've mostly been a Lego kind of guy). Sometimes, though, to create something specific, I have to dive into the Meccano world. Which in my case means moving from the higher-level Max software I use (kind of like breadboarding in software, where you connect different mid-level process objects to make a larger process) to the world of Arduino, a mixed hardware/software world more akin to Meccano than to Lego, which basically hurts my brain. I think the Arduino environment may be the digital equivalent of Meccano that you thought there wasn't much point in creating!? Now back to the text...


Excellent piece. Thank you for the work that has gone into the writing of it.


Outstanding post, Venkat! Well explained and just so damned true!
