AI Does Prediction about Humans: What Could Go Wrong

April 26, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The academic institution which took money from everyone’s favorite expert on exploitation has revealed an interesting chunk of research. Sadly it is about a broader concept of exploitation than the one practiced on those laboring in his mansions. “MIT Study Reveals an AI Model That Can Predict Future Actions of Human.” The title seems a bit incomplete, but no doubt Mr. Epstein would have embraced the technology. Imagine. Feed in data about those whom he employed and match the outputs to the interests of his clients and friends.

The write up says:

A new study from researchers at MIT and the University of Washington reveals an AI model that can accurately predict a person or a machine’s future actions.  The AI is known as the latent inference budget model (L-IBM). The study authors claim that L-IBM is better than other previously proposed frameworks capable of modeling human decision-making. It works by examining past behavior, actions, and limitations linked to the thinking process of an agent (which could be either a human or another AI). The data or result obtained after the assessment is called the inference budget.

Very academic sounding. I expected no less from MIT and its companion institution.

To model the decision-making process of an agent, L-IBM first analyzes an individual’s behavior and the different variables that affect it.  “In other words, we seek to model both what agents wish to do and what agents will actually do in any given state,” the researchers said. This step involved observing agents placed in a maze at random positions. The L-IBM model was then employed to understand their thinking/computational limitations and predict their behavior.
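
The write up stops at that level of description. For the curious, here is a toy Python sketch of the core idea as I read it: assume the agent plans with a depth-limited search, watch its moves in a small maze, and report the smallest search budget that reproduces them. The maze, the budget values, and both policies below are invented for the example; the actual study learns these quantities with trained models.

    # Toy maze: 0 = open, 1 = wall. An agent starts at (0, 0) and wants GOAL.
    MAZE = [
        [0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0],
    ]
    GOAL = (3, 3)
    MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

    def neighbors(cell):
        r, c = cell
        for name, (dr, dc) in MOVES.items():
            nr, nc = r + dr, c + dc
            if 0 <= nr < 4 and 0 <= nc < 4 and MAZE[nr][nc] == 0:
                yield name, (nr, nc)

    def steps_to_goal(cell, budget):
        # Breadth-first search truncated after `budget` levels: returns the
        # distance to GOAL if the bounded search can see it, else None.
        frontier, seen = [cell], {cell}
        for depth in range(budget + 1):
            if GOAL in frontier:
                return depth
            frontier = [n for c in frontier for _, n in neighbors(c) if n not in seen]
            seen.update(frontier)
        return None

    def bounded_move(cell, budget):
        # A budget-limited agent heads for the goal only when its truncated
        # search can reach it; otherwise it just takes the first legal move.
        options = [(steps_to_goal(n, budget), name) for name, n in neighbors(cell)]
        reachable = [o for o in options if o[0] is not None]
        return min(reachable)[1] if reachable else options[0][1]

    def infer_budget(trajectory, max_budget=8):
        # Score each candidate budget by how often its policy reproduces the
        # observed moves; the winner is this toy's "inference budget."
        def agreement(b):
            return sum(bounded_move(cell, b) == move for cell, move in trajectory)
        return max(range(max_budget + 1), key=agreement)

    # Watch an agent with a hidden planning budget, then estimate it. Ties go
    # to the smallest budget, so the estimate is the leanest planning capacity
    # consistent with the observed moves.
    hidden, cell, trajectory = 6, (0, 0), []
    while cell != GOAL and len(trajectory) < 25:
        move = bounded_move(cell, hidden)
        trajectory.append((cell, move))
        cell = dict(neighbors(cell))[move]
    print("hidden budget:", hidden, "inferred budget:", infer_budget(trajectory))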

A predictive system allows for more efficient use of available resources. Smart software does not protest, require benefits, or take vacations. Thanks, MSFT Copilot. Good enough. Just four tries today.

The method seems less labor intensive than the one the old cancer wizard IBM Watson relied upon. This model processes behavioral data rather than curated information such as cancer treatments. The new system then observes actions and learns what those humans will do next.

Then the clever researchers arranged a game:

The researchers made the subjects play a reference game. The game involves a speaker and a listener. The speaker receives a set of different colors, picks one, but can’t tell the listener the name of the chosen color directly. Instead, the speaker describes the color through natural language utterances (basically the speaker gives out different words as hints). If the listener selects the same color the speaker picked from the set, they both win.
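
The write up gives no mechanics for the game, so what follows is a minimal sketch under assumed rules, not the study’s implementation: the colors are made-up bundles of descriptive words, the speaker utters the word a literal listener would most likely decode as the target, and the listener guesses by reasoning about which color the speaker would have described that way.

    # Hypothetical color vocabulary: each color is a set of descriptive words.
    COLORS = {
        "crimson": {"reddish", "dark"},
        "salmon":  {"reddish", "light"},
        "navy":    {"bluish", "dark"},
    }

    def literal_prob(color, word):
        # A literal listener: uniform over the colors whose description
        # contains the uttered word.
        consistent = [c for c, attrs in COLORS.items() if word in attrs]
        return 1 / len(consistent) if color in consistent else 0.0

    def speak(target):
        # Speaker picks the word a literal listener is most likely to decode
        # as the target (ties broken alphabetically).
        return max(sorted(COLORS[target]), key=lambda w: literal_prob(target, w))

    def listen(utterance):
        # Pragmatic listener: among consistent colors, prefer the one whose
        # speaker would actually have uttered this word.
        consistent = [c for c, attrs in COLORS.items() if utterance in attrs]
        for c in consistent:
            if speak(c) == utterance:
                return c
        return consistent[0] if consistent else None

    wins = 0
    for target in COLORS:
        hint = speak(target)
        guess = listen(hint)
        wins += guess == target
        print(f"target={target:8s} hint={hint:8s} guess={guess}")
    print(f"{wins}/{len(COLORS)} rounds won")

With pragmatic reasoning on both sides, all three toy rounds come out as wins; a literal listener alone would have to coin-flip on the ambiguous hints.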

At this point in the write up, I was wondering how long the process takes and what the fully loaded cost would be to get one useful human prediction. The write up makes clear that more work was required. Next, the model played chess with humans. (I thought the Google had cracked this problem with DeepMind methods after IBM’s chess playing system beat up a world champion human.)

One of the wizards is quoted in the write up as stating:

“For me, the most striking thing was the fact that this inference budget is very interpretable. It is saying tougher problems require more planning or being a strong player means planning for longer. When we first set out to do this, we didn’t think that our algorithm would be able to pick up on those behaviors naturally.”

Yes, there are three steps. But the expert notes:

“We demonstrated that it can outperform classical models of bounded rationality while imputing meaningful measures of human skill and task difficulty,” the researchers note. If we know that a human is about to make a mistake, having seen how they have behaved before, the AI agent could step in and offer a better way to do it. Or the agent could adapt to the weaknesses that its human collaborators have. Being able to model human behavior is an important step toward building an AI agent that can actually help that human…
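
The quoted passage describes intervention without showing it, but given a predicted next move and a fully informed planner, the assist step reduces to a comparison. A hypothetical sketch only; the state labels and both move tables are invented placeholders, not anything from the study:

    # Once the bounded agent's behavior can be predicted, an assistant flags
    # the moves it expects the human to get wrong.
    predicted_moves = {"corner": "left", "corridor": "right", "junction": "up"}
    optimal_moves = {"corner": "right", "corridor": "right", "junction": "up"}

    def maybe_intervene(state):
        # Speak up only when the modeled human diverges from the planner.
        predicted, optimal = predicted_moves[state], optimal_moves[state]
        if predicted != optimal:
            return f"suggest {optimal!r}; you look about to pick {predicted!r}"
        return "no intervention needed"

    for state in predicted_moves:
        print(f"{state}: {maybe_intervene(state)}")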

If Mr. Epstein had access to a model with this capability, he might still be with us. Other applications of the technology may lead to control of malleable humans.

Net net: MIT is a source of interesting investigations like the one conducted after the Epstein antics became more widely known. Light the light of learning.

Stephen E Arnold, April 26, 2024
