AI—Past, Present, and Future

March 17, 2022

We are surrounded by neural-network AIs. As MIT Technology Review’s Clive Thompson puts it:

“They help Gmail autocomplete your sentences, help banks detect fraud, let photo apps automatically recognize faces, and—in the case of OpenAI’s GPT-3 and DeepMind’s Gopher—write long, human-sounding essays and summarize texts. They’re even changing how science is done; in 2020, DeepMind debuted AlphaFold2, an AI that can predict how proteins will fold—a superhuman skill that can help guide researchers to develop new drugs and treatments.”

How did we get here, and where is this technology going? Thompson’s article, “What the History of AI Tells Us About Its Future,” explores those questions. The piece would make a good introduction for newcomers or a helpful refresher for anyone whose memory needs jogging. It also makes reasonable predictions about the road ahead, and ready or not, society is firmly on that path.

The write-up begins by recounting the tale of IBM’s chess champion Deep Blue (Watson’s precursor). Deep Blue evolved from Deep Thought, a Carnegie Mellon project that in 1988 became the first chess AI to beat a grandmaster. Deep Blue’s 1997 victory over world champion Garry Kasparov was huge news. But despite an investment of a dozen years and an estimated $100 million, the software failed to pan out for IBM in the long run. The article quotes Deep Thought co-developer Murray Campbell:

“‘It didn’t lead to the breakthroughs that allowed the [Deep Blue] AI to have a huge impact on the world,’ Campbell says. They didn’t really discover any principles of intelligence, because the real world doesn’t resemble chess. ‘There are very few problems out there where, as with chess, you have all the information you could possibly need to make the right decision,’ Campbell adds. ‘Most of the time there are unknowns. There’s randomness.’”

That is where neural networks come in. Loosely modeled on the web of neurons in the human brain, this technology was widely dismissed until a decade or so ago. Now its algorithms are everywhere—and slamming into limitations of their own. Sometimes literally, as with self-driving cars that encounter a situation their trainers failed to anticipate. And don’t even get us started on machine-learning bias. Thompson writes:

“The problem is, no one knows quite how to build neural nets that can reason or use common sense. Gary Marcus, a cognitive scientist and coauthor of Rebooting AI, suspects that the future of AI will require a ‘hybrid’ approach—neural nets to learn patterns, but guided by some old-fashioned, hand-coded logic. This would, in a sense, merge the benefits of Deep Blue with the benefits of deep learning. … The future may look less like an absolute victory for either Deep Blue or neural nets, and more like a Frankensteinian approach—the two stitched together.”
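For readers who like to see ideas in code, here is a minimal, hypothetical sketch of the “stitched together” hybrid Marcus describes, using the fraud-detection example from Thompson’s opening. The learned_fraud_score function, the merchant list, and the thresholds are all invented stand-ins for illustration; a real system would use an actual trained neural network in place of the toy scorer.

# Hypothetical sketch of a neuro-symbolic hybrid: hand-coded rules wrapped
# around a learned pattern-matcher. All names and numbers are invented.
import math

def learned_fraud_score(amount: float, hour_of_day: int) -> float:
    """Stand-in for a trained neural network: returns a fraud score in [0, 1]."""
    # A toy logistic function over two features, pretending to be a learned model.
    z = 0.002 * amount + 0.1 * abs(hour_of_day - 3) - 2.0
    return 1.0 / (1.0 + math.exp(-z))

BLOCKED_MERCHANTS = {"known-bad-merchant"}  # hand-maintained list (assumption)

def flag_transaction(amount: float, hour_of_day: int, merchant: str) -> bool:
    """Hybrid decision: explicit rules first, learned score as the fallback."""
    # The Deep Blue side: hand-coded, auditable logic for cases humans can specify.
    if merchant in BLOCKED_MERCHANTS:
        return True   # rule: always flag blocked merchants
    if amount < 1.00:
        return False  # rule: never flag trivial amounts
    # The deep-learning side: let the pattern-matcher judge everything else.
    return learned_fraud_score(amount, hour_of_day) > 0.8

if __name__ == "__main__":
    print(flag_transaction(2500.00, 3, "ordinary-shop"))  # decided by the score
    print(flag_transaction(0.50, 14, "ordinary-shop"))    # decided by a rule

The point is not the toy rules themselves but the division of labor: explicit logic handles the cases humans can write down, and the learned model handles the fuzzy rest.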

Perhaps. Or maybe someone will come up with something new altogether—it could happen. To learn more about AI’s past, present, and future, curious readers should check out the article for themselves.

Cynthia Murrell, March 17, 2022
