Author: Matthew Renze
Published: 2025-07-01

What comes after Artificial General Intelligence?

In the previous article in this series on Artificial General Intelligence (AGI), we learned about the problems in AI research that might need to be solved to achieve AGI. We covered topics like recursive self-improvement, artificial consciousness, and value alignment.

In this article, we’ll go beyond AGI and explore the research that might lead us from artificial intelligence to artificial life. This includes topics like the Free Energy Principle, Markov blankets, and active inference.

The Free Energy Principle

We have all these individual pieces of AGI. We have theories, experiments, and research discussing each component in detail. However, what we really want is a unified theory of everything. We want to unify intelligence, agency, reasoning, embodiment, consciousness, etc., into a single theory. We want something that makes all the pieces fit together in a coherent way.

Essentially, we want a universal theory of intelligence or, better yet, a universal theory of life. While we’re likely still far from this goal, there are a few notable candidates. So, I’d like to introduce you to one of them. It’s called the Free Energy Principle.

The Free Energy Principle was developed by Karl Friston, one of the most well-respected theoretical neuroscientists in the world. It’s rather complex, but I’ll do my best to explain the key ideas in simple language. Here’s how it works…

Markov Blankets

Imagine a small organism like a mouse. Information flows between the outside world and the mouse. The mouse’s body is surrounded by something we call a Markov blanket: an abstract, statistical boundary that separates the information inside the mouse from the information outside it.

The information inside the mouse is referred to as its internal states. Information outside of the mouse is called the external (or hidden) states. The mouse’s brain has access to all of the information contained in its internal states. However, it cannot directly access the information contained in the external states, which is why we call them “hidden” states.

The Markov blanket’s states can be subdivided into sensory states and active states. Sensory states carry information coming into the mouse from its eyes, ears, nose, etc. Active states carry information passing out through the mouse’s actions, which influence the external states.

The mouse acquires information about the external (hidden) states via sensory information passing through its Markov blanket (i.e., it senses the world). It influences the external states through its active states (i.e., it acts upon the world).
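To make this picture concrete, here’s a minimal Python sketch of the four sets of states and how information flows between them. The specific state names (food_nearby, smell, move_toward_food) are illustrative assumptions of mine, not part of Friston’s formal theory, and the simple dictionaries stand in for what are really probability distributions.

```python
# A toy sketch of the Markov blanket partition (illustrative names only).
from dataclasses import dataclass, field

@dataclass
class MarkovBlanket:
    """The statistical boundary between the mouse and its world."""
    sensory: dict = field(default_factory=dict)  # information flowing in
    active: dict = field(default_factory=dict)   # information flowing out

@dataclass
class Mouse:
    internal: dict = field(default_factory=dict)  # internal states (the mouse's beliefs)
    blanket: MarkovBlanket = field(default_factory=MarkovBlanket)

    def sense(self, external: dict) -> None:
        # External (hidden) states reach the internal states
        # only through the blanket's sensory states.
        self.blanket.sensory["smell"] = external["food_nearby"]
        self.internal["belief_food_nearby"] = self.blanket.sensory["smell"]

    def act(self, external: dict) -> None:
        # Internal states influence the external states
        # only through the blanket's active states.
        self.blanket.active["move_toward_food"] = self.internal["belief_food_nearby"]
        if self.blanket.active["move_toward_food"]:
            external["mouse_position"] = "near_food"

# External (hidden) states: the mouse only ever touches these via its blanket.
world = {"food_nearby": True, "mouse_position": "far_from_food"}

mouse = Mouse()
mouse.sense(world)  # external -> sensory -> internal
mouse.act(world)    # internal -> active -> external
print(mouse.internal, world)
```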

Active Inference

When the mouse senses information from the external world, its brain makes predictions about the external (hidden) states based on information contained in its internal states. When its observations don’t match these predictions, its brain updates its predictions to better align with reality. This is a process we call perception.
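Here’s a minimal sketch of that perceptual update, using a toy generative model I’ve made up with two hidden states (“cat present” vs. “no cat”) and one observation (a rustling sound). The full theory casts this as approximate (variational) Bayesian inference, but exact Bayes’ rule shows the basic idea.

```python
# Perception as belief updating (toy numbers, chosen for illustration).
import numpy as np

# Hidden states: [cat present, no cat]; observation: a rustling sound.
prior = np.array([0.1, 0.9])       # the mouse's prediction before sensing
likelihood = np.array([0.8, 0.3])  # P(rustling | each hidden state)

# The mouse hears rustling, so it revises its prediction (Bayes' rule).
posterior = likelihood * prior
posterior /= posterior.sum()

print(posterior)  # belief in "cat present" rises from 0.10 to ~0.23
```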

Free energy is a statistical quantity that measures the uncertainty, or “surprise,” the mouse’s brain experiences when its predictions don’t match reality. So, the mouse’s brain tries to minimize this free energy to reduce its uncertainty. Reducing uncertainty is crucial because being surprised by its environment is very bad for the mouse’s survival.
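To make “surprise” concrete, the sketch below computes it for the same toy model, along with the variational free energy of an approximate belief q. Free energy always upper-bounds surprise and equals it when q matches the true posterior, which is why minimizing free energy indirectly minimizes surprise. The numbers are illustrative assumptions.

```python
# Surprise and variational free energy for the toy model above.
import numpy as np

prior = np.array([0.1, 0.9])       # P(hidden state): [cat present, no cat]
likelihood = np.array([0.8, 0.3])  # P(rustling | each hidden state)

# Surprise: how unexpected the observation "rustling" is under the model.
evidence = np.sum(likelihood * prior)  # P(rustling)
surprise = -np.log(evidence)

# Free energy of an approximate belief q over the hidden states.
q = np.array([0.5, 0.5])           # a deliberately inaccurate belief
free_energy = np.sum(q * (np.log(q) - np.log(likelihood * prior)))

print(f"surprise    = {surprise:.3f}")     # ~1.050
print(f"free energy = {free_energy:.3f}")  # ~1.224, always >= surprise
```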

In addition to perception, a similar process happens for the mouse’s actions. When the mouse executes an action in its environment, it uses feedback from the external (hidden) states to minimize the prediction errors for its actions. This process, called “active inference,” allows the mouse to learn about its environment by probing it and observing how the world behaves.

Active inference helps the mouse build an internal model of the external world (i.e., a world model). A world model helps the mouse make better predictions about future states of the world, which increases its chances of surviving long enough to reproduce and raise its offspring. As a result, these adaptations are passed on from generation to generation.
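Putting perception and action together, here’s a minimal sketch of an active-inference-style loop in a toy world of my own invention: the mouse chooses between “stay” and “explore” by picking whichever action it expects will leave it least uncertain about where the food is. This captures only the epistemic (uncertainty-reducing) side of active inference; full formulations also weigh the agent’s preferred outcomes.

```python
# A toy active-inference-style loop (all names and numbers are illustrative).
import numpy as np

rng = np.random.default_rng(0)
true_food_location = 1         # hidden state: the food is in burrow 1
belief = np.array([0.5, 0.5])  # world model: P(food in burrow 0 or 1)
ACCURACY = 0.8                 # how reliable a single sniff is

def entropy(p):
    """Uncertainty (in nats) of a belief distribution."""
    return -np.sum(p * np.log(p + 1e-12))

def expected_posterior_entropy(belief):
    """Expected uncertainty after one noisy observation of the food location."""
    total = 0.0
    for observed in (0, 1):
        like = np.where(np.arange(2) == observed, ACCURACY, 1 - ACCURACY)
        p_obs = np.sum(like * belief)
        total += p_obs * entropy(like * belief / p_obs)
    return total

for step in range(3):
    # Action selection: pick the action expected to leave the mouse least uncertain.
    expected_uncertainty = {
        "stay": entropy(belief),                        # no new information
        "explore": expected_posterior_entropy(belief),  # probe the world
    }
    action = min(expected_uncertainty, key=expected_uncertainty.get)

    if action == "explore":
        # The world answers with a noisy observation of the true location.
        correct = rng.random() < ACCURACY
        observed = true_food_location if correct else 1 - true_food_location
        like = np.where(np.arange(2) == observed, ACCURACY, 1 - ACCURACY)
        belief = like * belief / np.sum(like * belief)  # perception: update the model

    print(f"step {step}: action={action}, belief={np.round(belief, 2)}")
```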

Organisms like our mouse strive to minimize uncertainty in order to maintain homeostasis (keeping their internal states within viable bounds) and allostasis (adjusting in anticipation of future demands), thus ensuring their current and future needs are met. Through evolutionary pressures, organisms like our mouse evolve to minimize variational free energy, enhancing their ability to survive and reproduce.

A Unified Theory of Intelligence

However, the Free Energy Principle doesn’t just apply to mice; it also applies to other living organisms and organ systems across multiple scales. For example:

  • Single-cell organisms – maintain homeostasis by adapting their internal states to external environments.
  • Multicellular organisms – respond to environmental stimuli to move, eat, attack, and defend.
  • Organ systems – maintain homeostasis by passing signals through the Markov blankets of each organ.
  • Complex organisms – use their brains to survive by predicting future environmental states based on sensory information.
  • Social groups – develop collective behaviors to reduce uncertainty and maintain social stability.

The Free Energy Principle might also apply to machines. AI-controlled robots might use their sensors and actuators to perform active inference in their environments. The robots best able to minimize variational free energy would outcompete those less able to do so, leading to more capable AI.

The Free Energy Principle unifies concepts from biology, neuroscience, physics, and computer science, providing a framework for understanding how an AGI might evolve. It also lays the foundation for artificial life, which could lead to synthetic organisms with all the properties of living entities.

However, despite its unifying power, Karl Friston’s Free Energy Principle isn’t without its issues or critics. So, I encourage you to explore this principle further in order to form your own conclusions.


And there we have it! We’ve completed our journey on the road to AGI. We’ve learned about the key problems in AI research that need to be solved to achieve AGI. Additionally, we’ve learned the key concepts that will help us understand the potential solutions to these problems.

If you’d like to learn more, please check out my video on Artificial General Intelligence: The Road to Human-Level AI and all of my other articles, videos, and courses.