AI and Stupid Users: A Glimpse of What Is to Come

March 29, 2024

This essay is the work of a dumb dinobaby. No smart software required.

When smart software does not deliver, who is responsible? I don’t have a dog in the AI fight. I am thinking about the deployment of smart software in professional environments. When the outputs are wonky or do not deliver the bang of a competing system, what is the customer supposed to do? Is the vendor responsible? Is the customer responsible? Is the person who tried to validate the outputs guilty of putting a finger on the scale of a system whose developers cannot explain exactly how an output was determined? Viewed from one angle, this is the Achilles’ heel of artificial intelligence. Viewed from another, determining responsibility is an issue which, in my opinion, will be decided by legal processes. In the meantime, a system that does not work can have significant consequences. Consider the automated systems on aircraft which dive suddenly or the vessels which jam a ship channel.

I read a write up which provides a peek at what large outfits pushing smart software will do when challenged about quality, accuracy, or other subjective factors related to AI-imbued systems. Let’s take a quick look at “Customers Complain That Copilot Isn’t As Good as ChatGPT, Microsoft Blames Misunderstanding and Misuse.”

The main idea in the write up strikes me as:

Microsoft is doing absolutely everything it can to force people into using its Copilot AI tools, whether they want to or not. According to a new report, several customers have reported a problem: it doesn’t perform as well as ChatGPT. But Microsoft believes the issue lies with people who aren’t using Copilot correctly or don’t understand the differences between the two products.

Yep, the user is the problem. I can imagine the adjudicator (illustrated as a mother) listening to a large company’s sales professional and a certified professional developer arguing about how the customer went off the rails. Is the original programmer the problem? Is the new manager in charge of AI responsible? Is it the user or users?


Illustration by MSFT Copilot. Good enough, MSFT.

The write up continues:

One complaint that has repeatedly been raised by customers is that Copilot doesn’t compare to ChatGPT. Microsoft says this is because customers don’t understand the differences between the two products: Copilot for Microsoft 365 is built on the Azure OpenAI model, combining OpenAI’s large language models with user data in the Microsoft Graph and the Microsoft 365 apps. Microsoft says this means its tools have more restrictions than ChatGPT, including only temporarily accessing internal data before deleting it after each query.

Here’s another snippet from the cited article:

In addition to blaming customers’ apparent ignorance, Microsoft employees say many users are just bad at writing prompts. “If you don’t ask the right question, it will still do its best to give you the right answer and it can assume things,” one worker said. “It’s a copilot, not an autopilot. You have to work with it,” they added, which sounds like a slogan Microsoft should adopt in its marketing for Copilot. The employee added that Microsoft has hired partner BrainStorm, which offers training for Microsoft 365, to help create instructional videos to help customers create better Copilot prompts.

I will be interested in watching how these “blame games” unfold.

Stephen E Arnold, March 29, 2024

