AI: Apple Intelligence or Apple Ineptness?

March 20, 2025

Another dinobaby blog post. No AI involved, which could be good or bad depending on one’s point of view.

I read a very polite essay with some almost unreadable graphs. “Apple Innovation and Execution” says:

People have been claiming that Apple has forgotten how to innovate since the early 1980s, or longer – it’s a standing joke in talking about the company. But it’s also a question.

Yes, it is a question. Slap on your Apple goggles and look at the world from the fanboy perspective. AI is not a thing. Siri is a bit wonky. The endless requests to log in to use FaceTime and other Apple services are, from an objective point of view, a bit stupid. The annual iPhone refresh? Er, yeah, now what are the functional differences again? The Apple car? Er, yeah.


Is that an innovation worm? Is that a bad apple? One possibility is that the innovation worm is quite happy making an exit and looking for a better orchard. Thanks, You.com “Creative.” Good enough.

The write up says:

And ‘Apple Intelligence’ certainly isn’t going to drive a ‘super-cycle’ of iPhone upgrades any time soon. Indeed, a better iPhone feature by itself was never going to drive fundamentally different growth for Apple

So why do something which makes the company look stupid?

And what about this passage?

And the failure of Siri 2 is by far the most dramatic instance of a growing trend for Apple to launch stuff late. The software release cycle used to be a metronome: announcement at WWDC in the summer, OS release in September with everything you’d seen. There were plenty of delays and failed projects under the hood, and centres of notorious dysfunction (Apple Music, say), and Apple has always had a tendency to appear to forget about products for years (most Apple Watch faces don’t support the key new feature in the new Apple Watch) but public promise were always kept. Now that seems to be slipping. Is this a symptom of a Vista-like drift into systemically poor execution?

Some innovation worms are probably gnawing away inside the Apple. Apple’s AI. Easy to talk about. Tough to convert marketing baloney crafted by art history majors into software of value to users in my opinion.

Stephen E Arnold, March 20, 2025

AI Checks Professors’ Work: Who Is Hallucinating?

March 19, 2025

This blog post is the work of a humanoid dinobaby. If you don’t know what a dinobaby is, you are not missing anything. Ask any 80-year-old, why don’t you?

I read an amusing write up in Nature Magazine, a publication which does not often veer into MAD Magazine territory. The write up “AI Tools Are Spotting Errors in Research Papers: Inside a Growing Movement” has a wild subtitle as well: “Study that hyped the toxicity of black plastic utensils inspires projects that use large language models to check papers.”

Some have found that outputs from large language models often make up information. I have included references in my writings to Google’s cheese errors and lawyers submitting court documents with fabricated legal references. The main point of this Nature article is that presumably rock solid smart software will check the work of college professors, pals in the research industry, and precocious doctoral students laboring for love and not much money.

Interesting but will hallucinating smart software find mistakes in the work of people like the former president of Stanford University and Harvard’s former ethics star? Well, sure, peers and co-authors cannot be counted on to do work and present it without a bit of Photoshop magic or data recycling.

The article reports that there are two efforts underway to get those wily professors to run their “work” or science fiction through systems developed by Black Spatula and YesNoError. The Black Spatula project emerged from tweaked research that said, “Your black kitchen spatula will kill you.” YesNoError is similar but with a crypto twist. Yep, crypto.

Nature adds:

Both the Black Spatula Project and YesNoError use large language models (LLMs) to spot a range of errors in papers, including ones of fact as well as in calculations, methodology and referencing.

Assertions and claims are good. Black Spatula markets with the assurance its system “is wrong about an error around 10 percent of the time.” The YesNoError crypto wizards “quantified the false positives in only around 100 mathematical errors.” Ah, sure, low error rates.
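For scale, Black Spatula’s claim translates into a simple precision figure. The numbers below are hypothetical, chosen only to match the quoted “wrong about an error around 10 percent of the time”:

```python
def flag_precision(true_positives: int, false_positives: int) -> float:
    """Fraction of flagged 'errors' that are real errors (precision)."""
    flagged = true_positives + false_positives
    return true_positives / flagged if flagged else 0.0

# Hypothetical run: of 100 flagged errors, 90 are real and 10 are false
# alarms, i.e., the tool is "wrong about an error" 10 percent of the time.
precision = flag_precision(90, 10)  # 0.9
```

Whether a 10 percent false-alarm rate is tolerable depends on how many papers get flagged; at scale, that is a lot of professors falsely accused of sloppy arithmetic.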

I loved the last paragraph of the MAD inspired effort and report:

these efforts could reveal some uncomfortable truths. “Let’s say somebody actually made a really good one of these… in some fields, I think it would be like turning on the light in a room full of cockroaches…”

Hallucinating smart software. Professors who make stuff up. Nature Magazine channeling important developments in research. Hey, has Nature Magazine ever reported bogus research? Has Nature Magazine run its stories through these systems?

Good question. Might be a good idea.

Stephen E Arnold, March 19, 2025

An Econ Paper Designed to Make Most People Complacent about AI

March 19, 2025

Yep, another dinobaby original.

I zipped through — and I mean zipped — a 60 page working paper called “Artificial Intelligence and the Labor Market.” I have to be upfront. I detested economics, and I still do. I used to take notes when Econ Talk was actually discussing economics. My notes were points that struck me as wildly unjustifiable. That podcast has changed. My view of economics has not. At 80 years of age, do you believe that I will adopt a different analytical stance? Wow, I hope not. You may have to take care of your parents some day and learn that certain types of discourse do not compute.

This paper has multiple authors. In my experience, the more authors, the more complicated the language. Here’s an example:

“Labor demand decreases in the average exposure of workers’ tasks to AI technologies; second, holding the average exposure constant, labor demand increases in the dispersion of task exposures to AI, as workers shift effort to tasks that are not displaced by AI.”

The idea is that the impact of smart software will not affect workers equally. As AI gets better at jobs humans do, humans will learn more and get a better job or integrate AI into their work. In some jobs, the humans are going to be out of luck. The good news is that these people can take other jobs or maybe start their own business.

The problem with the document I reviewed is that there are several fundamental “facts of life” that make the paper look a bit wobbly.

First, the minute it is cheaper for smart software to do a job that a human does, the human gets terminated. Software does not require touchy-feely interactions, vacations, pay raises, and health care. Software can work as long as the plumbing is working. Humans sleep, which is not productive from an employer’s point of view.

Second, government policies won’t work. Why? Government bureaucracies are reactive. By the time a policy arrives, the trend or the smart software revolution is off to the races. One cannot put spilled radioactive waste back into its containment vessel quickly, easily, or cheaply. How’s that Fukushima remediation going?

Third, the reskilling idea is baloney. Most people are not skilled in reskilling themselves. Lifelong learning is not a core capability of most people. Sure, in theory anyone can learn. The problem is that most people are happier planning a vacation, doom scrolling, or watching TikTok-type videos. Figuring out how to make use of smart software capabilities is not as popular as watching the Super Bowl.

Net net: The AI services are getting better. That means that most people will be faced with a re-employment challenge. I don’t think LinkedIn posts will do the job.

Stephen E Arnold, March 19, 2025

AI: Meh.

March 19, 2025

It seems consumers can see right through the AI hype. TechRadar reports, “New Survey Suggests the Vast Majority of iPhone and Samsung Galaxy Users Find AI Useless—and I’m Not Surprised.” Both iPhones and Samsung Galaxy smartphones have been pushing AI onto their users. But, according to a recent survey, 73% of iPhone users and 87% of Galaxy users respond to the innovations with a resounding “meh.” Even more would refuse to pay for continued access to the AI tools. Furthermore, very few would switch platforms to get better AI features: 16.8% of iPhone users and 9.7% of Galaxy users. In fact, notes writer Jamie Richards, fewer than half of users report even trying the AI features. He writes:

“I have some theories about what could be driving this apathy. The first centers on ethical concerns about AI. It’s no secret that AI is an environmental catastrophe in motion, consuming massive amounts of water and emitting huge levels of CO2, so greener folks may opt to give it a miss. There’s also the issue of AI and human creativity – TechRadar’s Editorial Associate Rowan Davies recently wrote of a nascent ‘cultural genocide‘ as a result of generative AI, which I think is a compelling reason to avoid it. … Ultimately, though, I think AI just isn’t interesting to the everyday person. Even as someone who’s making a career of being excited about phones, I’ve yet to see an AI feature announced that doesn’t look like a chore to use or an overbearing generative tool. I don’t use any AI features day-to-day, and as such I don’t expect much more excitement from the general public.”

No, neither do we. If only investors would catch on. The research was performed by phone-reselling marketplace SellCell, which surveyed over 2,000 smartphone users.

Cynthia Murrell, March 19, 2025

AI May Be Discovering Kurt Gödel Just as Einstein and von Neumann Did

March 17, 2025

This blog post is the work of a humanoid dinobaby. If you don’t know what a dinobaby is, you are not missing anything.

AI re-thinking is becoming more widespread. I published a snippet of an essay about AI and its impact on socialist societies on March 10, 2025. I noticed “A Bear Case: My Predictions Regarding AI Progress.” The write up is interesting, and I think it represents thinking which is becoming more prevalent among individuals who have racked up what I call AI mileage.

The main theme of the write up is a modern day application of Kurt Gödel’s annoying incompleteness theorem. I am no mathematician like my great uncle Vladimir Arnold, who worked for years with the somewhat quirky Dr. Kolmogorov. (Family tip: Going winter camping with the wizard Dr. Kolmogorov was not a good idea. Well, you know…)

The main idea is that a formal axiomatic system satisfying certain technical conditions cannot decide the truth value of all statements about natural numbers. In a nutshell, a set cannot contain itself. Smart software is not able to go outside of its training boundaries as far as I know.
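For readers who want the formal version, a compact statement of Gödel’s first incompleteness theorem runs roughly as follows (an informal paraphrase, not a quotation from the essay):

```latex
% First incompleteness theorem, informal statement
\textbf{Theorem (G\"odel, 1931).} Let $T$ be a consistent, effectively
axiomatizable formal system strong enough to interpret elementary
arithmetic. Then there is a sentence $G_T$ about the natural numbers
such that
\[
  T \nvdash G_T
  \qquad\text{and}\qquad
  T \nvdash \neg G_T,
\]
that is, $T$ can neither prove nor refute $G_T$. The system cannot
decide the truth value of every statement expressible in its own
language.
```

The essay’s analogy: a model trained inside a fixed corpus plays the role of $T$, and the innovations it cannot reach play the role of $G_T$.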

Back to the essay: the author points out that AI delivers something useful:

There will be a ton of innovative applications of Deep Learning, perhaps chiefly in the field of biotech, see GPT-4b and Evo 2. Those are, I must stress, human-made innovative applications of the paradigm of automated continuous program search. Not AI models autonomously producing innovations.

The essay does contain a question I found interesting:

Because what else are they [AI companies and developers] to do? If they admit to themselves they’re not closing their fingers around godhood after all, what will they have left?

Let me offer several general thoughts. I admit that I am not able to answer the question, but some ideas crossed my mind when I was thinking about the sporty Kolmogorov, my uncle’s advice about camping in the winter, and this essay:

  1. Something else will come along. There is a myth that technology progresses. I think technology is like the fictional tribble on Star Trek. The products and services are destined to produce more products and services. Like the Santa Fe Institute crowd, order emerges. Will the next big thing be AI? Probably AI will be in the DNA of the next big thing. So one answer to the question is, “Something will emerge.” Money will flow and the next big thing cycle begins again.
  2. The innovators and the AI companies will pivot. This is a fancy way of saying, “Try to come up with something else.” Even in the age of monopolies and oligopolies, change is relentless. Some of the changes will be recognized as the next big thing or at least the thing a person can do to survive. Does this mean Sam AI-Man will manage the robots at the local McDonald’s? Probably not, but he will come up with something.
  3. The AI hot pot will cool. Life will regress to the mean or a behavior that is not hell bent on becoming a super human like the guy who gets transfusions from his kid, the wonky “have my baby” thinking of a couple of high profile technologists, or the money lust of some 25 year old financial geniuses on Wall Street. A digitized organization man living out the theory of the leisure class will return. (Tip: Buy a dark grey suit. Lose the T shirt.)

As an 80 year old dinobaby, I find the angst of AI interesting. If Kurt Gödel were alive, he might agree to comment, “Sonny, you can’t get outside the set.” My uncle would probably say, “Those costs. Are they crazy?”

Stephen E Arnold, March 17, 2025

What is the Difference Between Agentic and Generative AI? A Handy Chart

March 17, 2025

Agentic is the new AI buzzword. But what does it mean? Data-platform and AI firm Domo offers clarity in, "Agentic AI Explained: Definition, Benefits, and Use Cases." Writer Haziqa Sajid defines the term:

"Agentic AI is an advanced AI system that can act independently, make decisions, and adapt to changing situations. These AI systems can handle complex tasks such as strategic planning, multi-step automation, and dynamic problem-solving with minimal human oversight. This makes them more capable than traditional rule-based AI. … Agentic AI is designed to work like a human employee performing tasks that comprehend natural language input, set objectives, reason through a task, and modify actions based on updated input. It employs advanced machine learning, generative AI, and adaptive decision-making to learn from the data, refine its approach, and improve performance over time."

Wow, that sounds a lot like what we were promised with generative AI. Perhaps this version will meet expectations. AI Agents are still full of potential, poised on the edge of infiltrating real-world tools. The post describes what Domo sees as the tech’s advantages and gives the basics of how it works.

The most useful part is the handy chart comparing agentic and generative AI. For example, while the (actual) purpose of generative AI is mainly to generate text, image, and audio content, agentic AI is for executing tasks and making decisions in changing environments. The chart’s other measures of comparison include autonomy, interactivity, use cases, learning processes, and integration methods. See the post for that bookmark-worthy chart.
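The chart’s core distinction (generate content once versus pursue a goal over multiple steps) can be caricatured in a few lines of Python. Everything below, including the `call_llm` stand-in and the loop shape, is a hypothetical sketch, not Domo’s API:

```python
def call_llm(prompt: str) -> str:
    """Stand-in for a generative model: one prompt in, one completion out."""
    return f"completion for: {prompt}"

def generative(prompt: str) -> str:
    # Generative AI: a single pass that produces content.
    # No goal tracking, no tool use, no adaptation.
    return call_llm(prompt)

def agentic(goal: str, max_steps: int = 3) -> list:
    # Agentic AI: plan, act, observe, and adapt until the goal is judged
    # complete or a step budget runs out.
    history = []
    for _ in range(max_steps):
        action = call_llm(f"goal: {goal}; so far: {history}")
        observation = f"result of ({action})"  # would come from a real tool
        history.append(observation)
        if "goal met" in observation:          # stand-in completion check
            break
    return history
```

A generative call returns one completion and stops; the agentic loop accumulates observations and would stop early if its (here trivial) completion check ever fired. Real agent frameworks swap the stand-ins for model calls, tool invocations, and memory.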

Founded back in 2010, Domo is based in Utah. The publicly traded firm boasts over 2,600 clients across diverse industries.

Cynthia Murrell, March 17, 2025

Ah, Apple, Struggling with AI like Amazon, Google, et al

March 14, 2025

This blog post is the work of a humanoid dinobaby. If you don’t know what a dinobaby is, you are not missing anything. Ask any 80-year-old, why don’t you?

Yes, it is Friday, March 14, 2025. Everyone needs a moment of amusement. I found this luscious apple bit and thought I would share it. Dinobabies like knowing how the world and Apple treats other dinobabies. You, as a younger humanoid, probably don’t care. Someday you will.

“Grandmother Gets X-Rated Message after Apple AI Fail” reports:

A woman from Dunfermline has spoken of her shock after an Apple voice-to-text service mistakenly inserted a reference to sex – and an apparent insult – into a message left by a garage… An artificial intelligence (AI) powered service offered by Apple turned it into a text message which – to her surprise – asked if she been "able to have sex" before calling her a "piece of ****".

Not surprisingly, Apple did not respond to the BBC request for a comment. Unperturbed, the Beeb made some phone calls. According to the article:

An expert has told the BBC the AI system may have struggled in part because of the caller’s Scottish accent, but far more likely factors were the background noise at the garage and the fact he was reading off a script.

One BBC expert offered these reasons for the fouled message:

Peter Bell, a professor of speech technology at the University of Edinburgh, listened to the message left for Mrs Littlejohn. He suggested it was at the "challenging end for speech-to-text engines to deal with". He believes there are a number of factors which could have resulted in rogue transcription:

  • The fact it is over the telephone and, therefore, harder to hear
  • There is some background noise in the call
  • The way the garage worker speaks is like he is reading a prepared script rather than speaking in a natural way

“All of those factors contribute to the system doing badly,” he added. “The bigger question is why it outputs that kind of content.”

I have a much simpler explanation. Like Microsoft, marketing is much easier than delivering something that works for humans. I am tempted to make fun of Apple Intelligence, conveniently abbreviated AI. I am tempted to point out that real-world differences in the flow of Apple computers are not discernible when browsing Web pages or entering one’s iTunes password into the system several times a day.

Let’s be honest. Apple is big. Like Amazon (heaven help Alexa by the way), Google (the cheese fails are knee slappers, Sundar), and the kindergarten squabbling among Softies and OpenAI at Microsoft — Apple cannot “do” smart software at this time. Therefore, errors will occur.

On the other hand, perhaps the dinobaby who received the message is “a piece of ****"? Most dinobabies are.

Stephen E Arnold, March 14, 2025

Microsoft Leadership Will Be Replaced by AI… Yet

March 14, 2025

Whenever we hear the latest tech announcement, we believe it is doom and gloom for humanity. While fire, the wheel, the Industrial Revolution, and computers have yet to dismantle humanity, the jury is still out for AI. However, Gizmodo reports that Satya Nadella of Microsoft says we shouldn’t be worried about AI and it’s time to stop glorifying it, “Microsoft’s Satya Nadella Pumps the Brakes on AI Hype.” Nadella placed a damper on AI hype with the following statement from a podcast: “Success will be measured through tangible, global economic growth rather than arbitrary benchmarks of how well AI programs can complete challenges like obscure math puzzles. Those are interesting in isolation but do not have practical utility.”

Nadella said that technology workers are saying AI will replace humans, but that’s not the case. He calls that type of thinking a distraction, and the tech industry needs to “get practical and just try and make money before investors get impatient.” OpenAI CEO Sam Altman, Microsoft’s close partner, is a prime example of AI fear mongering. He uses it as a tool to give himself power.

Nadella continued that if the tech industry and its investors want AI growth akin to the Industrial Revolution, then let’s concentrate on it. Proof of that type of growth would be 10% economic growth attributable to AI. Investing in AI can’t just happen on the supply side; there needs to be demand for AI-built products.

Nadella’s statements are like pouring a bucket of cold water on a sleeping person:

“In that sense, Nadella is trying to slap tech executives awake and tell them to cut out the hype. AI safety is somewhat of a concern—the models can be abused to create deepfakes or mass spam—but it exaggerates how powerful these systems are. Eventually, push will come to shove and the tech industry will have to prove that the world is willing to put down real money to use all these tools they are building. Right now, the use cases, like feeding product manuals into models to help customers search them faster, are marginal.”

Many well-known companies still plan on implementing AI despite their difficulties. Other companies have downsized their staffing to include more AI chatbots, but the bots prove to be inefficient and frustrating. Microsoft, however, is struggling with management issues related to OpenAI, its internal “experts,” and the Softies who think they can do better. (Did Microsoft ask Grok, “How do I manage this billion-dollar bonfire?”)

Let’s blame it on AI.

Whitney Grace, March 14, 2025

Keeping an Eye on AI? Here Are Fifteen People of Interest for Some

March 13, 2025

Underneath the hype, there are some things AI is actually good at. But besides the players who constantly make the news, who is really shaping the AI landscape? A piece at Silicon Republic introduces us to "15 Influential Players Driving the AI Revolution." Writer Jenny Darmody observes:

"As AI continues to dominate the headlines, we’re taking a closer look at some of the brightest minds and key influencers within the industry. Throughout the month of February, SiliconRepublic.com has been putting AI under the microscope for more of a deep dive, looking beyond the regular news to really explore what this technology could mean. From the challenges around social media advertising in the new AI world to the concerns around its effect on the creative industries, there were plenty of worrying trends to focus on. However, there were also positive sides to the technology, such as its ability to preserve minority languages like Irish and its potential to reduce burnout in cybersecurity. While exploring these topics, the AI news just kept rolling: Deepseek continued to ruffle industry feathers, Thomson Reuters won a partial victory in its AI copyright case and the Paris AI Summit brought further investments and debates around regulation. With so much going on in the industry, we thought it was important to draw your attention to some key influencers you should know within the AI space."

Ugh, another roster of tech bros? Not so fast. On this list, the women actually outnumber the men, eight to seven. In fact, the first entry is Ireland’s first AI Ambassador Patricia Scanlon, who has hopes for truly unbiased AI. Then there is the EU’s Lucilla Sioli, head of the European Commission’s AI Office. She is tasked with both coordinating Europe’s AI strategy and implementing the AI Act. We also happily note the inclusion of New York University’s Juliette Powell, who advises clients from gaming companies to banks in the responsible use of AI. See the write-up for the rest of the women and men who made the list.

Cynthia Murrell, March 13, 2025

AI Hiring Spoofs: A How To

March 12, 2025

Be aware. A dinobaby wrote this essay. No smart software involved.

The late Robert Steele, one of the first government professionals to hop on the open source information bandwagon, and I worked together for many years. In one of our conversations in the 1980s, Robert explained how he used a fake persona to recruit people to assist him in his work on a US government project. He explained that job interviews were an outstanding source of information about a company or an organization.

“AI Fakers Exposed in Tech Dev Recruitment: Postmortem” is a modern spin on Robert’s approach. Instead of newspaper ads and telephone calls, today’s approach uses AI and video conferencing. The article presents a recipe for a technique not widely discussed in the 1980s. Robert learned his approach from colleagues in the US government.

The write up explains that a company wants to hire a professional. Everything hums along and then:

…you discover that two imposters hiding behind deepfake avatars almost succeeded in tricking your startup into hiring them. This may sound like the stuff of fiction, but it really did happen to a startup called Vidoc Security, recently. Fortunately, they caught the AI impostors – and the second time it happened they got video evidence.

The cited article explains how to set up and operate this type of deep fake play. I am not going to present the “how to” in this blog post. If you want the details, head to the original. The penetration tactic requires Microsoft LinkedIn, which gives that platform another use case for certain individuals gathering intelligence.

Several observations:

  1. Keep in mind that the method works for fake employers looking for “real” employees in order to obtain information from job candidates. (Some candidates are blissfully unaware that the job is a front for obtaining data about an alleged former employer.)
  2. The best way to avoid AI-centric scams is to do the work the old-fashioned way. Smart software opens up a wealth of opportunities to obtain allegedly actionable information. Unfortunately the old-fashioned way is slow, expensive, and prone to social engineering tactics.
  3. As AI and bad actors take advantage of the increased capabilities of smart software, humans do not adapt quickly when those humans are not actively involved with AI capabilities. Personnel related matters are a pain point for many organizations.

To sum up, AI is a tool. It can be used in interesting ways. Is the contractor you hired on Fiverr or via some online service a real person? Is the job a real job or a way to obtain information via an AI that is a wonderful conversationalist? One final point: The target referenced in the write up was a cyber security outfit. Did the early alert, proactive, AI infused system prevent penetration?

Nope.

Stephen E Arnold, March 12, 2025
