Epiplexity, Thinking, and AGI
Part 2: What’s Hiding in Your Data Is Not AGI
Posted January 2026
In Part 1, we discussed epiplexity and how it explains some paradoxical results of current AI models. Here, we will talk about the direction of the AI industry and how it will have to change if the goal is AGI.
The epiplexity paper starts its narrative by discussing three paradoxes: observations of current AI models that do not match the predictions of information theory. This is an excellent way to do science. Einstein's General Relativity was inspired in part by the paradoxical observed movement of Mercury. I commend the authors for using this technique and then showing that their result resolves the paradoxes.
On the other hand, there are papers like this that ascribe unpredictable behavior to "emergence." I have seen other work claiming that the odd results demonstrate that the model has created some sort of "reasoning," sometimes of a form "we don't understand." And so on. That may suit a corporate parent or secure continued funding, but it is intellectually lazy and leads to no conclusion other than "keep doing what we're doing, just more." There is really very little difference between that and a medieval wizard casting spells and making smoke. It's more hype than science. Daniel Dennett would be appalled.
So what are the implications of epiplexity? First and foremost, it can be used to forecast the end of scaling for LLMs and their like. Already, Yann LeCun and Demis Hassabis, whose DeepMind colleagues wrote that 2022 paper, have indicated that AGI is far off and probably needs a different kind of model. With epiplexity, I expect essentially everyone else (other than Geoff Hinton) to agree. We can expect the big LLM vendors to work a lot more on delivery costs and add-ons (like guardrails), and we can question whether their AI businesses will ever be profitable enough to pay for the huge cost of creating them.
Conversely, the epiplexity paper discusses self-training and the auto-generation of data. For instance, complex game strategies can be learned from simple game rules through iterative self-play. For tasks where the generation rules are easily defined and performance is easily evaluated, we can expect even better iterative algorithms, since we now "know" the approach works.
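To make the "strategy from rules alone" point concrete, here is a minimal sketch (my own illustration, not taken from the paper). It uses simple dynamic programming over the rules of Nim-21 as a stand-in for iterative self-play: no example games are provided, yet the optimal strategy falls out of the rules.

```python
# Nim-21: players alternately remove 1-3 stones; taking the last stone wins.
# The full strategy is derived purely from the rules, with no training data.
def solve_nim(n_stones=21, max_take=3):
    # win[s] = True if the player to move from state s can force a win
    win = [False] * (n_stones + 1)
    best = [None] * (n_stones + 1)   # a winning move from s, if one exists
    for s in range(1, n_stones + 1):
        for take in range(1, min(max_take, s) + 1):
            if not win[s - take]:    # move the opponent into a losing state
                win[s], best[s] = True, take
                break
    return win, best

win, best = solve_nim()
# The well-known result emerges: multiples of 4 are losses for the mover,
# and the winning move is always to leave a multiple of 4 behind.
```

The analogy to self-play systems is loose but instructive: the "knowledge" recovered here was implicit in the rules, not in any data set.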
Third, I believe epiplexity will be used in pricing data sets. Do you want random data, or data built on an embedded ontology or taxonomy that can then be generalized? From my work for a medical coding company, it is a pretty clear choice! (I imagine a lot of inaccuracies in ICD-10 coding might be eliminated through an understanding of the taxonomy.) To enable this, I expect tools that characterize the epiplexity of a data set will arise.
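As a crude illustration of the kind of tooling I mean (this is not the paper's epiplexity measure, just a compression-based proxy for structure), compare random characters against a stream of codes drawn from a small hierarchical vocabulary, loosely styled after ICD-10's chapter.category layout. The vocabulary and codes below are hypothetical.

```python
import random
import string
import zlib

def compressed_ratio(text: str) -> float:
    """Compressed size over raw size: lower means more exploitable structure."""
    raw = text.encode()
    return len(zlib.compress(raw, 9)) / len(raw)

random.seed(0)

# Unstructured stream: uniformly random uppercase letters.
random_data = "".join(random.choice(string.ascii_uppercase) for _ in range(5000))

# Structured stream: codes from a small taxonomy-like vocabulary
# (hypothetical, e.g. "A03.1"), so the same tokens recur with shared prefixes.
vocab = [f"{ch}{i:02d}.{j}" for ch in "ABC" for i in range(5) for j in range(3)]
coded_data = " ".join(random.choice(vocab) for _ in range(1000))

print(compressed_ratio(random_data), compressed_ratio(coded_data))
# The taxonomy-backed stream compresses far better: a rough signal that it
# embeds a reusable structure rather than pure noise.
```

A real epiplexity tool would presumably go well beyond compressibility, but even this toy version separates "random data" from "random data over an ontology."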
AGI
Which brings me to Artificial General Intelligence (AGI). Much has been written about it, the "singularity," and so forth. I won't regurgitate that here. I will repeat the above: the big names in AI are moving to new models. They have joined Gary Marcus and, frankly, me in concluding that the current AI path is not going to get to AGI. Ever. Something new is needed.
I’ll leave that as the subject for another day.
Go to Part 3.