AI: Outputs Become Inputs, No Humans Necessary in Some Situations

January 13, 2015

Here’s the thing. The time between learning of an actionable item and taking action is a big deal. For example, you hear about buying shares of X at the gym. Two days later you call your financial advisor and say, “Should we buy shares of X?”

He says, “Well, the stock jumped 25 percent yesterday.”

The point: You heard about an actionable item—buying shares. When you cranked up to buy the stock, the big jump was history.

The train left the station, and you are standing on the platform watching the riders head to the bank.

How does one get less “wait” between the actionable item and taking action? The answer is automation. The slowdown is usually human. Humans want to deliberate, think about stuff, and procrastinate.

A system that takes actionable outputs and does something about them reduces the “wait.” The idea is to assign a probability that reflects your confidence in the actionable item. The system computes that probability, compares it with your number, and then either does or does not take an action.
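A minimal sketch of that loop in Python, assuming a hypothetical score_item scoring function, an act_on action, and a user-set THRESHOLD; the names are illustrative, not any vendor’s actual system:

    # Sketch of a confidence-threshold action loop.
    # score_item, act_on, and THRESHOLD are illustrative placeholders.

    THRESHOLD = 0.85  # your confidence level: act only when the system is at least this sure

    def score_item(item: dict) -> float:
        """Placeholder: a real system would compute this from models and live signals."""
        return item.get("signal_strength", 0.0)

    def act_on(item: dict) -> None:
        """Placeholder action, e.g. routing an order or raising an alert."""
        print(f"Acting on {item['name']}")

    def process(item: dict) -> None:
        probability = score_item(item)        # the system computes the probability
        if probability >= THRESHOLD:          # and compares it with your number
            act_on(item)                      # then acts with no human "wait"
        else:
            print(f"Skipping {item['name']} ({probability:.2f} < {THRESHOLD})")

    process({"name": "shares of X", "signal_strength": 0.91})

The point of the sketch is the absence of a person in the loop: the only human contribution is setting the threshold ahead of time.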

This happens in milliseconds. Financial institutions pay hundreds of millions to shave milliseconds off their transactions. The objective is to use probability and automation to make sure these wizards do not miss the financial train.

Now read “Artificial Intelligence Experts Sign Open Letter to Protect Mankind from Machines.” The write up asserts:

AI experts around the globe are signing an open letter issued Sunday by the Future of Life Institute that pledges to safely and carefully coordinate progress in the field to ensure it does not grow beyond humanity’s control. Signees include co-founders of Deep Mind, the British AI company purchased by Google in January 2014; MIT professors; and experts at some of technology’s biggest corporations, including IBM’s Watson supercomputer team and Microsoft Research.

Sounds great. Won’t compute in the real world. The reason is that time means money to some, security to others, and opportunity to 20-somethings.

The reality is that outputs of smart systems will be piped directly into other smart systems. These systems will act based on probability and other considerations. Why burn out a human when you can disintermediate the human, save money, and give the person an opportunity to study Zen or pursue a hobby? Why wait to discover a security breach when a smart system can take proactive action?

Who resists accepting a recommendation from Amazon or Google “suggest”? I am not sure users of smart systems realize that automation and smart software, crude as they are, are not getting bogged down in the “humanity’s control” thing.

Need an example? Check out weapon systems. Need another? Read the CyberOSINT report available here.

Stephen E Arnold, January 13, 2015
