What Happens when an AI Debates Politics?

April 20, 2021

IBM machine-learning researcher Noam Slonim spent years developing a version of IBM's Watson that he hoped could win a formal debate. The New Yorker describes his journey and the results in "The Limits of Political Debate." We learn of the scientist's inspiration following Watson's Jeopardy! win and of his request that the AI be given Scarlett Johansson's voice (and why it was not). Writer Benjamin Wallace-Wells also tells us:

“The young machine learned by scanning the electronic library of LexisNexis Academic, composed of news stories and academic journal articles—a vast account of the details of human experience. One engine searched for claims, another for evidence, and two more engines characterized and sorted everything that the first two turned up. If Slonim’s team could get the design right, then, in the short amount of time that debaters are given to prepare, the machine could organize a mountain of empirical information. It could win on evidence.”
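The passage sketches a four-engine pipeline: one engine mines claims, another mines evidence, and two more characterize and sort what the first two find. As a rough illustration of that shape (only that; the function names, keyword heuristics, and corpus below are invented for this sketch and bear no resemblance to IBM's actual Project Debater components), a toy version might look like:

```python
import re

# Toy pipeline mirroring the four engines described in the quote.
# All cue lists and labels here are invented for illustration.
CLAIM_CUES = ("should", "must", "is better", "is harmful")
EVIDENCE_CUES = ("study", "survey", "percent", "research shows")

def mine_claims(sentences):
    """Engine 1: flag sentences that look like debatable claims."""
    return [s for s in sentences if any(c in s.lower() for c in CLAIM_CUES)]

def mine_evidence(sentences):
    """Engine 2: flag sentences that look like supporting evidence."""
    return [s for s in sentences if any(c in s.lower() for c in EVIDENCE_CUES)]

def characterize(sentence):
    """Engine 3: crudely label each hit (statistical vs. expert-style)."""
    kind = "statistic" if re.search(r"\d", sentence) else "expert"
    return {"text": sentence, "kind": kind}

def rank(items):
    """Engine 4: order candidates, here preferring statistical evidence."""
    return sorted(items, key=lambda d: d["kind"] != "statistic")

corpus = [
    "A 2018 survey found 72 percent of teachers favor the policy.",
    "Governments should subsidize preschool education.",
    "Research shows early education improves later outcomes.",
]

claims = mine_claims(corpus)
evidence = rank([characterize(s) for s in mine_evidence(corpus)])
for item in evidence:
    print(item["kind"], "-", item["text"])
```

The point of the architecture, as the quote notes, is speed: with claims and evidence pre-sorted, a mountain of empirical material can be organized within a debater's short preparation window.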

Ah, but evidence is just one part. Upon consulting with a debate champion, Slonim learned more about the very human art of argument. Wallace-Wells continues:

“Slonim realized that there were a limited number of ‘types of argumentation,’ and these were patterns that the machine would need to learn. How many? Dan Lahav, a computer scientist on the team who had also been a champion debater, estimated that there were between fifty and seventy types of argumentation that could be applied to just about every possible debate question. For I.B.M., that wasn’t so many. Slonim described the second phase of Project Debater’s education, which was somewhat handmade: Slonim’s experts wrote their own modular arguments, relying in part on the Stanford Encyclopedia of Philosophy and other texts. They were trying to train the machine to reason like a human.”
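The "modular arguments" idea, fifty to seventy recurring types of argumentation that can be instantiated for almost any motion, is essentially a template library. A minimal sketch of that idea (the type names, template wording, and slot-filling below are invented for illustration, not drawn from IBM's system) might be:

```python
# Toy library of argumentation "types," each a reusable template that can
# be filled in for a particular debate motion. Names and wording invented.
ARGUMENT_TEMPLATES = {
    "principle": "We should {action} because it upholds the principle of {value}.",
    "consequence": "If we fail to {action}, the likely result is {harm}.",
    "precedent": "Other societies that chose to {action} saw {benefit}.",
}

def build_argument(arg_type, **slots):
    """Instantiate one modular argument for a given motion."""
    return ARGUMENT_TEMPLATES[arg_type].format(**slots)

print(build_argument("principle",
                     action="subsidize preschool",
                     value="equal opportunity"))
```

For a team used to Watson-scale corpora, a few dozen such patterns was, as Wallace-Wells notes, a manageable number to hand-craft.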

Did they succeed? That is (ahem) debatable. The system was put to the test against experienced debater Harish Natarajan in front of a human audience. See the article for the details, but in the end the human won, sort of. The audience sided with him, but the more Slonim listened to the debate, the more he realized the AI had made the far better case. Natarajan, in short, was better at manipulating his listeners.

Since that experiment, Slonim has turned to using Project Debater's algorithms to analyze arguments being made in the virtual public square. Perhaps, Wallace-Wells speculates, his efforts will grow into an "argument checker" tool much like the grammar checkers that are now common. Would this make for political debates that are more empirical and rational than the polarized arguments that now dominate the news? That would be a welcome change.

Cynthia Murrell, April 20, 2021
