Is SkyNet a Reality or a Plot Device?
January 20, 2023
We humans must resist the temptation to outsource our reasoning to an AI, no matter how trustworthy it sounds. As iai News points out in "All-Knowing Machines Are a Fantasy," society is now in danger of confusing fiction with reality, a mistake that could have serious consequences. Professors Emily M. Bender and Chirag Shah observe:
“Decades of science fiction have taught us that a key feature of a high-tech future is computer systems that give us instant access to seemingly limitless collections of knowledge through an interface that takes the form of a friendly (or sometimes sinisterly detached) voice. The early promise of the World Wide Web was that it might be the start of that collection of knowledge. With Meta’s Galactica, OpenAI’s ChatGPT and earlier this year LaMDA from Google, it seems like the friendly language interface is just around the corner, too. However, we must not mistake a convenient plot device—a means to ensure that characters always have the information the writer needs them to have—for a roadmap to how technology could and should be created in the real world. In fact, large language models like Galactica, ChatGPT and LaMDA are not fit for purpose as information access systems, in two fundamental and independent ways.”
The first problem is that language models do what they are built to do very well: they produce text that sounds human-generated. Authoritative, even. Readers and listeners unconsciously ascribe human thought processes to the results. In truth, these algorithms lack understanding, intent, and accountability, making them inherently unreliable as unvetted sources of information.
Next is the nature of information itself. It is impossible for an AI to tap into a comprehensive database of knowledge because such a thing does not exist and probably never will. The Web, with its contradictions, incomplete information, and downright falsehoods, certainly does not qualify. Though some queries call for a quick, straightforward answer (how many tablespoons in a cup?), most are not so simple. One must compare answers and evaluate provenance. In fact, the authors note, the very process of considering sources helps us refine our needs and context as well as assess the data itself. We miss out on all that when, in search of a quick answer, we accept the first response from any search system. That temptation is hard enough to resist with a good old-fashioned Google search. The human-like interaction with chatbots just makes it more seductive. The article notes:
“Over both evolutionary time and every individual’s lived experience, natural language to-and-fro has always been with fellow human beings. As we encounter synthetic language output, it is very difficult not to extend trust in the same way as we would with a human. We argue that systems need to be very carefully designed so as not to abuse this trust.”
That is a good point, though AI developers may not be eager to oblige. It remains up to us humans to resist temptation and take the time to think for ourselves.
Cynthia Murrell, January 20, 2023