
Science has often been at its most fascinating when it has helped us better understand ourselves and apply that understanding to engineering and machinery, and vice versa. Case in point: neural networks. As the “neural” in the name implies, this decades-long quest to build computers that can think and learn the way we do has the potential to reshape how we view our own brains and our ability to learn.

Here’s a quick look at the history of this subfield and how we got to where we are with it today.

From the Beginnings to the AI Winter

In 1943, neurophysiologist Warren McCulloch and mathematician Walter Pitts published a basic model of how neurons function, which psychologist Donald Hebb later expanded on in his 1949 book The Organization of Behavior. In it, Hebb proposed that neural pathways grow stronger the more they are used. Together, this work produced two ideas that are key to neural nets: “threshold logic,” in which a neuron produces an output only when the combined strength of its inputs crosses a threshold, and the notion that connections in the brain are flexible and can be strengthened through repeated pairings of activity.
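The threshold idea fits in a few lines of Python. This is a toy sketch, not McCulloch and Pitts’ original notation: a unit sums its weighted inputs and fires only if the sum reaches its threshold.

```python
def mcp_neuron(inputs, weights, threshold):
    # Fire (output 1) only if the weighted sum of inputs meets the threshold.
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With a threshold of 2 and unit weights, this behaves like an AND gate:
# both inputs must be on for the sum to reach the threshold.
print(mcp_neuron([1, 1], [1, 1], 2))  # 1
print(mcp_neuron([1, 0], [1, 1], 2))  # 0
```

Lowering the threshold to 1 turns the same unit into an OR gate, which is exactly the flexibility the model was meant to capture.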

As Hebb’s principle is often summarized: “Cells that fire together, wire together.”

Efforts to make artificial cells “fire together and wire together” continued through the 1950s and 60s until they came to an abrupt halt in 1969, when Marvin Minsky, co-founder of MIT’s AI lab, and Seymour Papert argued in their book Perceptrons that neural networks, as they were then being researched, faced fundamental limits, with lengthy if not infinite computing time being a key concern.
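One of the book’s most famous results illustrates the problem: a single-layer threshold unit cannot compute XOR, because the function is not linearly separable. A quick brute-force check, a toy sketch over an arbitrary grid of weights rather than Minsky and Papert’s actual proof, makes the point:

```python
from itertools import product

# XOR truth table: output is 1 only when exactly one input is 1.
xor = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# Try every (w1, w2, threshold) combination on a coarse grid from -3.0 to 3.0.
grid = [v / 2 for v in range(-6, 7)]
found = any(
    all((1 if w1 * a + w2 * b >= th else 0) == out
        for (a, b), out in xor.items())
    for w1, w2, th in product(grid, repeat=3)
)
print(found)  # False: no single threshold unit reproduces XOR
```

No grid, however fine, would change the answer; solving XOR requires stacking units into multiple layers, and it was training those deeper networks that looked computationally hopeless at the time.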

This led to the so-called “AI Winter,” a decade-plus informal “freeze” on neural net-related research at major universities around the world.

Thawing Out Neural Nets

By 1982, as the Cold War was heating up again, the tech world was ready for the AI Winter to thaw. In the same way that the rediscovery of classical Greco-Roman texts helped lift medieval Europe out of its long stagnation and into the Renaissance, the revival of an old neural net idea, backpropagation, resurrected the field in the 1980s. Backpropagation is a learning algorithm that uses the chain rule to work out how much each weight in a network contributed to the output error, letting gradient descent adjust the weights of every layer efficiently and helping to solve that “infinite computing time” issue.
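In miniature, backpropagation looks like this. The sketch below uses made-up starting weights, a single training example, and one neuron per layer, so it is an illustration of the chain-rule bookkeeping rather than a faithful period implementation: the output error is passed backward through the network so each layer’s weight gets its own gradient for the descent step.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# A two-layer "network" of one neuron per layer, with made-up starting values.
x, target = 1.0, 0.0
w1, w2 = 0.8, 0.6
lr = 0.5  # learning rate for gradient descent

for step in range(1000):
    # Forward pass: input -> hidden neuron -> output neuron.
    h = sigmoid(w1 * x)
    y = sigmoid(w2 * h)
    # Backward pass: the chain rule carries the output error back
    # through each layer, yielding a gradient for every weight.
    d_y = (y - target) * y * (1 - y)  # error signal at the output neuron
    d_h = d_y * w2 * h * (1 - h)      # error signal passed back to the hidden neuron
    # Gradient descent: nudge each weight against its gradient.
    w2 -= lr * d_y * h
    w1 -= lr * d_h * x

print(round(y, 2))  # the output has been pushed toward the target of 0
```

The backward pass costs about as much as the forward pass, which is the whole trick: every weight’s gradient comes from one sweep through the network instead of a brute-force search.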

This, along with a new generation of tech thinkers in the 1990s, helped place neural nets and machine learning back at the forefront of tech and especially AI research.