Yikes! Existing AI is Fundamentally Flawed

February 27, 2025

AI applications are barreling full steam ahead into all corners of our lives. Yet there are serious concerns about the very structure of how LLMs work. BCS, The Chartered Institute for IT, asks, "Does Current AI Represent a Dead End?" Cybersecurity professor Eerke Boiten writes:

"From the perspective of software engineering, current AI systems are unmanageable, and as a consequence their use in serious contexts is irresponsible. For foundational reasons (rather than any temporary technology deficit), the tools we have to manage complexity and scale are just not applicable. By ‘software engineering’, I mean developing software to align with the principle that impactful software systems need to be trustworthy, which implies their development needs to be managed, transparent and accountable … When I last gave talks about AI ethics, around 2018, my sense was that AI development was taking place alongside the abandonment of responsibility in two dimensions. Firstly, and following on from what was already happening in ‘big data’, the world stopped caring about where AI got its data — fitting in nicely with ‘surveillance capitalism’. And secondly, contrary to what professional organisations like BCS and ACM had been preaching for years, the outcomes of AI algorithms were no longer viewed as the responsibility of their designers — or anybody, really."

Yes, that is the reality we are careening into. But for big tech, that may be a feature, not a bug. Those firms clearly want today’s AI to be THE one true AI. A high profit-to-responsibility ratio suits them just fine.

Boiten describes, in a nutshell, how neural networks function. He emphasizes the disturbing lack of human guidance, and of human understanding. Since engineers cannot know just how an algorithm comes to its conclusions, it is impossible to ensure it is operating to specification. These problems cannot be resolved with hard work and insight; they are baked in. See the write-up for more details.

If engineers are willing to progress beyond today’s LLMs, Boiten suggests, they could develop something actually reliable. It could even be built on existing AI tech, so all that work (and funding) need not go out the window. They just have to look past the dollar signs in their eyes and press ahead to a safer and more reliable product. The post warns:

"In my mind, all this puts even state-of-the-art current AI systems in a position where professional responsibility dictates the avoidance of them in any serious application. When all its techniques are based on testing, AI safety is an intellectually dishonest enterprise."

Now all we need is for big tech to do the right thing.

Cynthia Murrell, February 27, 2025
