What are the key ideas that Artificial Intelligence is built upon?

In our previous article in this series, we saw an overview of some of the foundational ideas of AI.

In this article, we’ll discuss some of the most important ideas from Computer Science.

I’ll do my best to make each of these complex ideas as simple and easy to understand as possible.

The Lambda Calculus is a system of mathematical logic created by Alonzo Church in the 1930s. It shows that all computer programs can be represented using three simple things: variables, functions, and function calls. That’s it; these three constructs are all you need to create every program you’ve ever seen.

Essentially, what this means is that any computer program can be represented by defining a set of functions, calling those functions, and using variables as input, output, and state for these functions. Everything else is just syntactic sugar to make programming easier for humans. It’s simple, elegant, and brilliant.
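To make this concrete, here’s a small Python sketch that uses nothing but variables, anonymous functions, and function calls. It encodes numbers the way Church did (the number n is “apply a function f, n times”); the `to_int` helper is only for displaying results and isn’t part of the lambda calculus itself:

```python
# Church numerals: numbers built from nothing but functions.
ZERO = lambda f: lambda x: x                  # apply f zero times
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))   # one more application of f
ADD = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))  # m + n applications

# Helper (not lambda calculus): count the applications as a Python int.
def to_int(n):
    return n(lambda k: k + 1)(0)

one = SUCC(ZERO)
two = SUCC(one)
three = ADD(one)(two)

print(to_int(three))  # -> 3
```

Booleans, conditionals, pairs, and even recursion can be built the same way, which is the sense in which these three constructs are all you need.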

From 1910 to 1913, Bertrand Russell and Alfred North Whitehead wrote a massive 1,900-page, 3-volume series of books called Principia Mathematica. This monumental work attempted to unify all of mathematics by providing a complete and consistent set of axioms based on formal logic.

However, in 1931 Kurt Gödel published a short 26-page paper proving that it is impossible to create a formal system of mathematics (capable of basic arithmetic) that is both complete and consistent. Essentially, in any such system, there are always statements that are true but can’t be proven true within the system.

In 1936, Alan Turing developed a mathematical model of a computer called a Turing Machine. It’s a simple theoretical “machine” that reads and writes symbols on an infinitely long tape. Based on the symbol it reads and its current state, a table of commands tells the machine what to write, which way to move, and which state to enter next.
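As a rough illustration (not Turing’s original formulation), here’s a minimal Turing machine simulator in Python. The hypothetical `rules` table below increments a binary number: starting on the rightmost bit, it carries 1s into 0s until a 0 or a blank absorbs the carry:

```python
def run(tape, head, state, rules, blank="_"):
    """Run a Turing machine until it reaches the 'halt' state."""
    tape = dict(enumerate(tape))          # position -> symbol
    while state != "halt":
        symbol = tape.get(head, blank)    # read the symbol under the head
        write, move, state = rules[(state, symbol)]
        tape[head] = write                # write, then move left/right/nowhere
        head += {"L": -1, "R": 1, "N": 0}[move]
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, blank) for i in range(lo, hi + 1))

# Command table for binary increment, starting on the rightmost bit.
rules = {
    ("inc", "1"): ("0", "L", "inc"),   # 1 + carry -> 0, carry moves left
    ("inc", "0"): ("1", "N", "halt"),  # 0 + carry -> 1, done
    ("inc", "_"): ("1", "N", "halt"),  # ran off the left edge: new digit
}

print(run("011", head=2, state="inc", rules=rules))  # -> "100"
```

Despite how little machinery this is, a table of such rules is enough to express any algorithm.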

Turing showed that any computer or programming language that can perform these basic operations (i.e., is Turing complete) can simulate any other computer or program (i.e., Turing equivalence). Essentially, all general-purpose computers can simulate every other general-purpose computer.

In 1936-37, Alonzo Church and Alan Turing independently proposed an idea now called the Church-Turing Thesis. It states that any task that can be performed by an algorithm can also be performed by a Turing Machine. Conversely, any task a Turing Machine can perform can be written as a step-by-step algorithm.

Essentially, the Church-Turing Thesis says that all general-purpose computers can solve any task that can be expressed as an algorithm, given sufficient time, memory, and energy. However, many problems remain unsolvable in practice because we don’t have unlimited time, memory, or energy.

The Chomsky Hierarchy is a system for classifying formal languages developed by Noam Chomsky in 1956. The hierarchy is divided into four levels, from simplest to most complex. Each of these language classes also has an equivalent computational model, a type of “machine” that can process it *(in parentheses below)*.

Regular languages *(Finite Automata)* describe simple text patterns, like those used in basic pattern search. Context-free languages *(Pushdown Automata)* describe the syntax of most programming languages. Context-sensitive languages *(Linear-Bounded Automata)* model some constructs of human language. Recursively enumerable languages *(Turing Machines)* are the most complex class of language, which includes human language.
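To give a taste of the simplest level, here’s a hand-written finite automaton in Python. This hypothetical DFA recognizes the regular language of binary strings containing an even number of 1s:

```python
def accepts(s):
    """Finite automaton: two states, no memory beyond the current state."""
    state = "even"                      # start state
    transitions = {
        ("even", "0"): "even", ("even", "1"): "odd",
        ("odd", "0"): "odd",   ("odd", "1"): "even",
    }
    for ch in s:                        # read one symbol at a time
        state = transitions[(state, ch)]
    return state == "even"              # "even" is the accepting state

print(accepts("1010"))  # -> True
print(accepts("111"))   # -> False
```

Each step up the hierarchy adds memory to the machine: a stack for pushdown automata, a bounded tape for linear-bounded automata, and an unbounded tape for Turing Machines.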

Unfortunately, I’m greatly oversimplifying all of these concepts for the sake of time and to make them easy to understand. So, I encourage you to look into each of them further. Once you’re ready to learn more about Computer Science and AI, be sure to check out my online courses.

In the meantime, be sure to check out my latest article in this series: The Ideas That Built AI — Data Science.