The Gates Foundation backs an AI wildcard

The Scoop

Bill Gates, who advised on Microsoft’s strategic partnership with OpenAI, is backing a wildcard in the race toward artificial general intelligence: Jeff Hawkins, co-inventor of the PalmPilot — a short-lived precursor to mobile phones — and a researcher who has spent decades studying the human brain to build better machines.

The Gates Foundation has awarded $2.7 million to Numenta, Hawkins’ 19-year-old firm, to put his thesis to the test. The company earns revenue by partnering with Intel and others to sell machine-learning services to businesses, and also conducts AI research. Working from the belief that the secret to developing the ultimate AI algorithm may lie in understanding the human brain, Numenta will use the funds to develop software that reflects Hawkins’ concept, and plans to release the code through its Thousand Brains Project. Gates reviewed Hawkins’ book on the topic in 2021.

“The Gates Foundation approached us because they were also interested in the theory, and they felt that current AI systems have limitations,” Hawkins told Semafor in an interview. “They thought that sensorimotor-type AI systems would be very, very helpful for global health issues.”

In Hawkins’ theory, AI should work like the neocortex, the part of the brain responsible for a person’s ability to think, act, and speak, learning by processing inputs from the senses and from movement. He believes that approach will produce the breakthroughs needed to build truly intelligent machines.

“I think it’s always been the right time to build brain-based AI,” he said. “If I could have done it 40 years ago, or 20 years ago, I would have. But I didn’t know how back then. Now I do.”

The human brain has long served as inspiration for artificial intelligence. Neural networks were inspired by the way the brain’s neurons communicate with one another, adapting by strengthening and weakening synaptic connections as they learn.
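That original inspiration is easy to see in miniature. Below is a toy sketch of Hebbian learning, the classic “neurons that fire together wire together” rule, written in Python with NumPy. It is purely illustrative: modern neural networks are trained with gradient descent instead, and every number and name here is an invented assumption.

```python
# Toy Hebbian learning: synapses that carry input while the output neuron is
# active get stronger. Illustrative only; not how production networks train.
import numpy as np

rng = np.random.default_rng(0)
w = rng.uniform(0.2, 0.4, size=3)    # three weak "synapses" onto one output neuron
eta = 0.1                            # learning rate

for _ in range(20):
    x = rng.integers(0, 2, size=3)   # which input neurons fired this step (0 or 1)
    y = w @ x                        # output activity: weighted sum of the inputs
    w += eta * y * x                 # Hebb's rule: co-active synapses strengthen

print(np.round(w, 2))                # synapses on frequently active inputs have grown
```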

But over time, neural networks have begun to look less and less like the human brain, favoring sheer size and scope over the mysterious elegance and efficiency of their namesake.

That’s been the approach of large language models like the ones that power ChatGPT. They were built after researchers from Google developed transformers, an architecture that allows developers to build increasingly large neural networks and that works in part by breaking all language down into fragments of words known as “tokens.” By ingesting enormous amounts of text, the models learn to predict the next token, based not on true understanding but on how tokens statistically relate to one another, relationships the architecture captures through a mechanism known as “attention.”
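To make “attention” concrete, here is a minimal toy version of the scaled dot-product computation at the heart of transformers, in plain Python with NumPy. It is a sketch under simplifying assumptions (no learned projections, no multiple heads, random vectors standing in for token embeddings), not any lab’s production code.

```python
# Minimal self-attention sketch: each token's output is a weighted mix of all
# tokens' values, with weights derived from query-key similarity.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # pairwise token-to-token affinities
    weights = softmax(scores)         # each row sums to 1: how much each token "attends"
    return weights @ V                # weighted mix of the value vectors

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))          # four "tokens", each an 8-dimensional vector
out = attention(tokens, tokens, tokens)   # self-attention: Q, K, V from the same tokens
print(out.shape)                          # (4, 8): one updated vector per token
```

In a real transformer, Q, K, and V are learned linear projections of the token embeddings, and this computation is stacked across many layers and heads.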

As transformer-based architectures get bigger, new capabilities tend to emerge, and the large tech companies are spending tens of billions of dollars to build computing clusters of once-unimaginable scale. But some experts believe the brute-force approach of transformer models will eventually lose steam and that performance gains will plateau.

For instance, these companies have already scraped most of the internet and may exhaust sources of training data. And then there’s the problem of powering the data centers, equipped with hundreds of thousands of $50,000 graphics processors running so hot that they periodically melt. Big Tech is increasingly turning to wind, solar, geothermal, and nuclear energy to power more data centers.

The human brain, by contrast, is far more intelligent, yet it is tiny and energy efficient, running on roughly 20 watts.

“There’s pent-up demand for something outside of transformers,” said Hawkins, whose ideas haven’t always been taken seriously by the tech industry. He may have predicted the rise of smartphones, but he was never the best at capitalizing on that prediction. The PalmPilot was initially a success, but it was quickly overtaken by mobile phones that could do more useful things.

He’s now ready to translate his theories into code, and hopes the new funding for the Thousand Brains Project will attract more researchers to move beyond LLMs.

“I feel like it’s a little bit like the beginning of the computing era with John von Neumann and Alan Turing, where they were understanding the basics of computing even though they had very, very few ideas about its application,” Hawkins said. “They couldn’t anticipate transistors, or cell phones, or GPS satellites or personal computers. But they knew they were building something super powerful – and that’s how I see what we’re doing.”

Know More

Neural networks were first conceived in the mid-20th century, and some of the foundational work on them happened in the 1980s, when Google alum Geoffrey Hinton and others made breakthroughs using computational neuroscience as inspiration.

Scientific understanding of the brain has increased dramatically, but remains incomplete on fundamental questions. Hawkins argues that the key lies in a brain structure called the “cortical column.” The neocortex is made up of hundreds of thousands of cortical columns, each of which builds models of the world and makes predictions about how it works.
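As a rough intuition for how many columns could combine their guesses, here is a deliberately simplified voting toy in Python. Everything in it, from the features to the noise level, is invented for illustration; the actual Thousand Brains theory involves sensorimotor learning and reference frames that a few lines cannot capture.

```python
# Toy "voting" illustration: many noisy column models each guess the object
# from one local feature, and the consensus wins. Not Numenta's actual code.
from collections import Counter
import random

OBJECTS = ["coffee cup", "stapler", "phone"]

def column_guess(feature: str) -> str:
    """One column senses a single local feature and guesses the whole object;
    individual columns are noisy and sometimes wrong."""
    likely = {"handle": "coffee cup", "hinge": "stapler", "screen": "phone"}
    guess = likely.get(feature, random.choice(OBJECTS))
    return guess if random.random() > 0.2 else random.choice(OBJECTS)

def consensus(features: list[str]) -> str:
    """Tally the votes from many columns; the most common guess wins."""
    votes = Counter(column_guess(f) for f in features)
    return votes.most_common(1)[0][0]

random.seed(1)
# 300 columns each touch some part of a coffee cup; only some feel the handle.
features = random.choices(["handle", "rim", "curved side"], k=300)
print(consensus(features))   # robust consensus despite many wrong individual votes
```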

The neocortex has been described as what you get if you strip away the parts of the brain that make us human — our emotions, survival instincts, and appetites — and are left with pure understanding, minus the baggage left over from millions of years of evolution.

Today, this theory can’t be fully tested. It would be like handing Thomas Edison a modern semiconductor and expecting him to figure out how to build an iPhone. More progress in neuroscience needs to happen before the theory can be fully validated.

Until then, though, theories about how the neocortex works could give AI researchers new ideas about how to craft algorithms that gain a better understanding of the world.

Katyanna’s view

Hawkins is well known for being ahead of the curve. For instance, before internet browsers became ubiquitous, he was convinced that the next era of computing would center on a handheld device that could run on a battery all day.

In 1992, he founded the company Palm and went on to design the PalmPilot, a small stylus-driven handheld that kept contacts and notes and was later succeeded by mobile phones. Hawkins’ critics say he is driven by grand ideas and theory rather than pragmatism and business applications, which is why his work doesn’t always take off.

But maybe he’s onto something with the brain and AI.

LLMs aren’t inherently intelligent, and I’m skeptical that simply scaling these systems up will make them smart. It’s not clear how big these models have to be before they can reason and learn like humans can, and it may not be sustainable to keep making them larger.

I’m not sure if Hawkins’ ideas will work out either. Maybe our understanding of the brain and current technologies are too primitive to build machines as advanced as he imagines. But if his approach works as planned, it’ll be the biggest technological breakthrough in human history, and may lead to a future that’s difficult to imagine: a time when machines take over human labor and generate all economic value.

Room for Disagreement

There is ample evidence that LLMs are improving rapidly. Bigger models are better, and there’s a sense that it’s only a matter of time before they become generally intelligent. Maybe it doesn’t matter whether different models are needed for various types of tasks if machines can do just about anything. With so many developers and so much money being pumped into the AI industry, some people feel that AGI is inevitable.

Notable