Epiplexity and New Learnings
Part 3: Learning to learn
Posted January 2026
In Part 1, we discussed epiplexity and how it explains some paradoxical results of current AI models. In Part 2, we talked about the direction of the AI industry and how it will have to change if the goal is AGI. Here, we will talk about the directions epiplexity may take research and model development.
I cannot claim to be current in AI/ML algorithms, except superficially. However, I know - maybe - just enough to speculate about research directions and perhaps suggest a few things to try. My intuition that next-word training (of LLMs) would be a dead end has proven mostly correct, with the epiplexity paper showing that the interesting paradoxes are not leading to intelligence. This essay is entirely speculation. Let’s see if the predictions prove real.
In the epiplexity paper, one of the paradoxes involves more learning from more processing. The authors cite AlphaGo’s ability to derive superhuman gameplay from simple rules and iterative training. Already, epiplexity is being applied successfully to improve directed evolution of models. I expect much more of this.
I also expect tools to assess the epiplexity of datasets will be created rapidly. Those, in turn, will drive not only algorithm development (as above), but also a means to price datasets appropriately. A dataset with high epiplexity may (should) be worth more than a dataset that is simply larger.
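To make that concrete, here is a minimal sketch of what such a tool might measure. This is my own crude proxy, not the paper’s formal definition: run a cheap online probe (here, a Laplace-smoothed character bigram model) over the data and record how much its per-character loss improves from start to finish. Structure the probe has to compute its way into shows up as improvement; pure noise shows up as none. The function name and the window parameter are hypothetical.

```python
import math
from collections import defaultdict

def epiplexity_proxy(text: str, window: int = 1000) -> float:
    """Crude proxy: bits/char of improvement an online bigram probe gains
    while consuming the data. An illustration, not the paper's measure."""
    counts = defaultdict(lambda: defaultdict(int))  # bigram counts
    totals = defaultdict(int)                       # per-context totals
    vocab = max(len(set(text)), 1)
    nlls = []
    prev = ""
    for ch in text:
        # Predict with Laplace smoothing, then update the counts online.
        p = (counts[prev][ch] + 1) / (totals[prev] + vocab)
        nlls.append(-math.log2(p))
        counts[prev][ch] += 1
        totals[prev] += 1
        prev = ch
    if len(nlls) < 2 * window:
        return 0.0  # too little data for a stable estimate
    early = sum(nlls[:window]) / window    # loss before learning
    late = sum(nlls[-window:]) / window    # loss after learning
    return early - late                    # structure "unlocked" by compute
```

On uniform random characters this returns roughly zero no matter how large the sample, while ordinary English text scores well above it - which is exactly the pricing distinction: size alone is not the asset.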
Next, I predict the creation of “foundation datasets”. These would be very high-epiplexity data for a given realm, like the taxonomy of animals or the hierarchy of ICD-10 medical codes. Epiplexity enables optimization of the datasets and therefore a reduction in the training resources required by model developers.
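One hypothetical way a foundation dataset could be assembled, reusing the epiplexity_proxy sketch above: greedily keep the document whose inclusion raises the proxy score the most per character, until a size budget is hit. Purely illustrative - a real curation tool would need a far cheaper incremental estimator than re-scoring the whole corpus each round.

```python
def curate(docs: list[str], budget_chars: int, window: int = 200) -> list[str]:
    """Illustrative greedy curation: maximize marginal epiplexity per character.
    Assumes the epiplexity_proxy sketch above; O(n^2) probe runs, so a toy only."""
    chosen: list[str] = []
    used = 0
    remaining = list(docs)
    while remaining and used < budget_chars:
        corpus = "".join(chosen)
        base = epiplexity_proxy(corpus, window)
        # Marginal gain in proxy score per character added.
        def gain(doc: str) -> float:
            return (epiplexity_proxy(corpus + doc, window) - base) / max(len(doc), 1)
        best = max(remaining, key=gain)
        remaining.remove(best)
        chosen.append(best)
        used += len(best)
    return chosen
```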
I think there is an enormous opportunity to apply epiplexity within machine vision. Mapping a 2D image (perhaps two, in stereo) to a 3D model representation is a well-known problem, and it only gets more interesting at 30 fps! (I did some work as an undergrad on stereo vision at a rate of frames per hour!) Epiplexity may unlock algorithms that pull out the underlying structure of 3D objects, versus today’s 2.5D (at best) algorithms. This has significant implications for autonomous driving, among other things.
Finally, one of the paradoxes solved in the epiplexity paper is that data sequence does matter. This has two orthogonal implications. For what I’ll call “lightly sequenced” data, like text, there are opportunities to better understand concept formation by understanding why sequence matters. And for time-series data (“tightly sequenced”), like music or speech, epiplexity will enable better model development. For IIoT, this is already happening.[1]
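A simple experiment in that direction, again built on the hypothetical proxy above: score the data as ordered, then score it after shuffling at chunk granularity, which preserves local structure but destroys the global ordering. A large gap would suggest the sequence itself carries learnable structure. With a bigram probe this mostly detects nonstationarity; richer order effects would need a stronger probe model.

```python
import random

def sequence_sensitivity(text: str, chunk: int = 500,
                         trials: int = 5, window: int = 1000) -> float:
    """Sketch: proxy score of the data as ordered, minus its mean score over
    chunk-shuffled copies. Positive values hint that the ordering matters."""
    ordered = epiplexity_proxy(text, window)
    shuffled_scores = []
    for _ in range(trials):
        chunks = [text[i:i + chunk] for i in range(0, len(text), chunk)]
        random.shuffle(chunks)
        shuffled_scores.append(epiplexity_proxy("".join(chunks), window))
    return ordered - sum(shuffled_scores) / trials
```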
Marked: 1/25/26 - Additional predictions may be made below.
In summary, the concept of epiplexity will soon drive change across AI research. And that makes the paper significant!
[1] See aperio.ai, where I am an investor.
Return to Part 1.