Author: Matthew Renze
Published: 2024-06-01

What are the key ideas that Artificial Intelligence is built upon?

In my previous article in this series, we discussed some of the key ideas in Data Science.

In this article, we’ll discuss some of the most important ideas from Neuroscience.

Once again, I’ll do my best to keep everything as simple and easy to understand as possible.

Hebbian Learning

In 1949, Donald Hebb proposed the principle now often summarized as “neurons that fire together, wire together.” By this, he meant that the connections between neurons become stronger or weaker based on how frequently neighboring neurons are activated in conjunction with one another. This also means that brains are able to adapt and learn new information by changing the connections between their neurons.

Hebbian Learning significantly influenced the development of neural networks. Organic neurons strengthen or weaken their connections biologically via synapses. Artificial neurons, by contrast, strengthen or weaken their connections mathematically by adjusting their weights and biases.
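
To make this concrete, here’s a small sketch (in Python, with made-up numbers) of a basic Hebbian update rule, where a connection weight grows in proportion to how strongly the two connected neurons are active at the same time:

    # Minimal Hebbian update sketch: delta_w = learning_rate * pre * post
    # The learning rate and activation values below are illustrative, not from the article.

    learning_rate = 0.1

    def hebbian_update(weight, pre_activation, post_activation):
        # "Fire together, wire together": strengthen the connection
        # in proportion to how strongly both neurons are active together.
        return weight + learning_rate * pre_activation * post_activation

    weight = 0.5
    weight = hebbian_update(weight, pre_activation=1.0, post_activation=1.0)
    print(round(weight, 2))  # 0.6 -> the connection grew because both neurons fired together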

Reinforcement Learning

Behavioral reinforcement learning is the idea that behaviors can be shaped by their consequences. Psychologists like B.F. Skinner developed this theory while studying how animals change their behavior in response to positive or negative feedback (i.e., pleasure or pain). For example, giving a dog a treat rewards good behavior, while shouting “NO” at the dog punishes bad behavior.

Reinforcement learning is one of the three main branches of machine learning. It involves an agent interacting with an environment to learn how to achieve a complex multi-step goal. The agent receives reward signals for actions that move it closer to its goal and punishments (i.e., negative rewards) for unproductive actions. Using these signals as feedback, the agent eventually learns how to achieve its goal.
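
To illustrate this feedback loop, here’s a rough sketch of tabular Q-learning on a tiny made-up environment; the corridor, rewards, and hyperparameters are all invented for the example:

    import random

    # Toy corridor: states 0..4, goal at state 4. Actions: 0 = left, 1 = right.
    # Reward +1 only for reaching the goal. All numbers are illustrative.
    n_states, n_actions = 5, 2
    alpha, gamma, epsilon = 0.5, 0.9, 0.1
    q_table = [[0.0] * n_actions for _ in range(n_states)]

    def step(state, action):
        next_state = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
        reward = 1.0 if next_state == n_states - 1 else 0.0
        return next_state, reward, next_state == n_states - 1

    def choose_action(state):
        # Explore occasionally; otherwise exploit the best-known action (random tie-break).
        if random.random() < epsilon:
            return random.randrange(n_actions)
        best = max(q_table[state])
        return random.choice([a for a in range(n_actions) if q_table[state][a] == best])

    for episode in range(200):
        state, done = 0, False
        while not done:
            action = choose_action(state)
            next_state, reward, done = step(state, action)
            # Q-learning update: nudge the estimate toward reward + discounted future value.
            target = reward + gamma * max(q_table[next_state])
            q_table[state][action] += alpha * (target - q_table[state][action])
            state = next_state

    print([round(max(row), 2) for row in q_table])  # values grow as states get closer to the goal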

Attention Mechanisms

Attention is the cognitive process of selectively focusing on a specific aspect of information, such as an external object, an internal thought, or a task. To achieve this focus, the brain ignores all of the other data that isn’t directly relevant to the object of focus. This is significantly more efficient, as it allows our brains to prioritize the information that is most important and disregard everything else.

In AI, the concept of attention has been applied to a specific type of neural network called a transformer — the technology behind large language models like ChatGPT. Like organic attention, artificial attention allows the model to focus on specific parts of the input data that are most relevant to completing its task. It allows the transformer to change its focus based on context.
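
To show the core idea, here’s a rough sketch of the scaled dot-product attention used inside transformers, written with NumPy; the function name and the tiny query, key, and value matrices are placeholders I made up for the example:

    import numpy as np

    def softmax(x):
        # Numerically stable softmax along the last axis.
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    def scaled_dot_product_attention(queries, keys, values):
        # Score each query against every key, scale, and turn the scores into weights.
        d_k = keys.shape[-1]
        scores = queries @ keys.T / np.sqrt(d_k)
        weights = softmax(scores)   # how much "attention" each position receives
        return weights @ values     # weighted mix of the values

    # Tiny made-up example: 3 tokens, 4-dimensional embeddings.
    rng = np.random.default_rng(0)
    q, k, v = (rng.normal(size=(3, 4)) for _ in range(3))
    print(scaled_dot_product_attention(q, k, v).shape)  # (3, 4)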

Predictive Coding

Predictive Coding is a theory in neuroscience that proposes that the brain is constantly generating and updating an internal model of the external world based on sensory input. Essentially, the brain observes the world, predicts what will happen next, and then compares its prediction with what actually happened. It then uses this information to update its model for future predictions.

The process of training neural networks mirrors this theory. During training, we show the neural network an input and ask it to predict the output. For example, we show it an image of a cat and ask it to predict “cat” or “not a cat”. If it gets the prediction wrong, we use an error signal to update the weights of the model so that it is less likely to make the same mistake in the future.
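
Here’s a minimal sketch of that predict-compare-update loop: a single artificial neuron trained by gradient descent on a made-up “cat vs. not a cat” dataset with two features per example (the numbers are purely illustrative):

    import math

    # Toy data: two made-up features per example, label 1 = "cat", 0 = "not a cat".
    data = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.1, 0.2], 0), ([0.2, 0.1], 0)]

    weights, bias, learning_rate = [0.0, 0.0], 0.0, 0.5

    def predict(features):
        # Weighted sum squashed to a probability between 0 and 1.
        z = sum(w * x for w, x in zip(weights, features)) + bias
        return 1 / (1 + math.exp(-z))

    for epoch in range(1000):
        for features, label in data:
            p = predict(features)
            error = p - label  # error signal: prediction vs. what actually happened
            for i in range(len(weights)):
                weights[i] -= learning_rate * error * features[i]  # update the model
            bias -= learning_rate * error

    print(round(predict([0.85, 0.9]), 2))  # close to 1.0 -> "cat"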

Bayesian Brain Hypothesis

In our previous article, we discussed how Bayes’ Theorem allows us to update our predictions based on new information. The Bayesian Brain Hypothesis proposes that the brain works as a giant Bayesian prediction machine: it uses an organic equivalent of Bayes’ Theorem to update its predictions based on prior beliefs and new evidence.

This is very similar to Predictive Coding: both ideas describe how an organic or artificial brain can make more accurate predictions about the world by continually updating an internal model. However, the Bayesian Brain Hypothesis frames this updating process explicitly in terms of Bayesian statistics. This way of thinking is reflected in several ideas in AI, including Bayesian networks, Bayesian optimization, and Bayesian inference.
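
As a tiny worked example of the kind of update this hypothesis describes (with invented numbers), here’s Bayes’ Theorem applied once in Python:

    # Bayes' Theorem: P(H | E) = P(E | H) * P(H) / P(E)
    # Illustrative numbers: a prior belief updated by one new piece of evidence.

    prior = 0.2                # P(H): belief before seeing the evidence
    likelihood = 0.9           # P(E | H): chance of the evidence if H is true
    likelihood_if_false = 0.3  # P(E | not H)

    evidence = likelihood * prior + likelihood_if_false * (1 - prior)  # P(E)
    posterior = likelihood * prior / evidence                          # P(H | E)

    print(round(posterior, 3))  # 0.429 -> belief increases from the prior of 0.2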

The Free Energy Principle

The Variational Free Energy Minimization Principle (a.k.a. the Free Energy Principle) is an idea in neuroscience and AI developed by Karl Friston. It suggests that all living organisms reduce uncertainty by making predictions using internal models of the world and updating those models via sensory input. This might sound very similar to both Predictive Coding and the Bayesian Brain Hypothesis.

However, Friston believes that it’s not just brains that do this; the idea also applies to organic cells, organs, organ systems, and even AI. Essentially, all of these things are wrapped in a statistical boundary known as a “Markov blanket”. They receive input from their external environment, build internal models of that environment, and use feedback to update those models of the world via active inference.


Neuroscience can seem like a very complicated subject. However, these complex ideas all have simple and intuitive explanations. Unfortunately, most people never dig deeper into these topics because they don’t have educational materials that make them easy to understand.

To learn more, be sure to check out my latest article in this series: The Ideas that Built AI — Machine Learning.
