Ah, Apple, Struggling with AI like Amazon, Google, et al
March 14, 2025
This blog post is the work of a humanoid dinobaby. If you don’t know what a dinobaby is, you are not missing anything. Ask any 80-year-old, why don’t you?
Yes, it is Friday, March 14, 2025. Everyone needs a moment of amusement. I found this luscious apple bit and thought I would share it. Dinobabies like knowing how the world and Apple treat other dinobabies. You, as a younger humanoid, probably don’t care. Someday you will.
“Grandmother Gets X-Rated Message after Apple AI Fail” reports:
A woman from Dunfermline has spoken of her shock after an Apple voice-to-text service mistakenly inserted a reference to sex – and an apparent insult – into a message left by a garage… An artificial intelligence (AI) powered service offered by Apple turned it into a text message which – to her surprise – asked if she had been "able to have sex" before calling her a "piece of ****".
Not surprisingly, Apple did not respond to the BBC request for a comment. Unperturbed, the Beeb made some phone calls. According to the article:
An expert has told the BBC the AI system may have struggled in part because of the caller’s Scottish accent, but far more likely factors were the background noise at the garage and the fact he was reading off a script.
One BBC expert offered these reasons for the fouled-up message:
Peter Bell, a professor of speech technology at the University of Edinburgh, listened to the message left for Mrs Littlejohn. He suggested it was at the "challenging end for speech-to-text engines to deal with". He believes there are a number of factors which could have resulted in rogue transcription:
- The fact it is over the telephone and, therefore, harder to hear
- There is some background noise in the call
- The way the garage worker speaks is like he is reading a prepared script rather than speaking in a natural way
"All of those factors contribute to the system doing badly, " he added. "The bigger question is why it outputs that kind of content.
I have a much simpler explanation. As at Microsoft, marketing is much easier than delivering something that works for humans. I am tempted to make fun of Apple Intelligence, conveniently abbreviated AI. I am tempted to point out that real-world differences in the flow of Apple computers are not discernible when browsing Web pages or entering one’s iTunes password into the system several times a day.
Let’s be honest. Apple is big. Like Amazon (heaven help Alexa, by the way), Google (the cheese fails are knee slappers, Sundar), and Microsoft (where the Softies and OpenAI squabble like kindergartners), Apple cannot “do” smart software at this time. Therefore, errors will occur.
On the other hand, perhaps the dinobaby who received the message is “a piece of ****"? Most dinobabies are.
Stephen E Arnold, March 14, 2025
Microsoft Leadership Will Be Replaced by AI… Yet
March 14, 2025
Whenever we hear the latest tech announcement, we are told it spells doom and gloom for humanity. While fire, the wheel, the Industrial Revolution, and computers have yet to dismantle humanity, the jury is still out for AI. However, Gizmodo reports in “Microsoft’s Satya Nadella Pumps the Brakes on AI Hype” that Microsoft’s Satya Nadella says we shouldn’t be worried about AI and that it is time to stop glorifying it. Nadella placed a damper on the hype with the following statement from a podcast: “Success will be measured through tangible, global economic growth rather than arbitrary benchmarks of how well AI programs can complete challenges like obscure math puzzles. Those are interesting in isolation but do not have practical utility.”
Nadella said that technology workers claim AI will replace humans, but that’s not the case. He calls that type of thinking a distraction and says the tech industry needs to “get practical and just try and make money before investors get impatient.” Nadella’s de facto Microsoft colleague, OpenAI CEO Sam Altman, is a prime example of AI fear mongering. He uses it as a tool to give himself power.
Nadella continued that if the tech industry and its investors want AI growth akin to the Industrial Revolution, then they should concentrate on delivering it. Proof of that type of growth would be something like 10% economic growth attributable to AI. Investing in AI can’t just happen on the supply side; there needs to be demand for AI-built products.
Nadella’s statements are like pouring a bucket of cold water on a sleeping person:
"On that sense, Nadella is trying to slap tech executives awake and tell them to cut out the hype. AI safety is somewhat of a concern—the models can be abused to create deepfakes or mass spam—but it exaggerates how powerful these systems are. Eventually, push will come to shove and the tech industry will have to prove that the world is willing to put down real money to use all these tools they are building. Right now, the use cases, like feeding product manuals into models to help customers search them faster, are marginal.”
Many well-known companies still plan on implementing AI despite the difficulties. Other companies have downsized their staffing to include more AI chatbots, but the bots prove to be inefficient and frustrating. Microsoft, however, is struggling with management issues related to OpenAI, its internal “experts,” and the Softies who think they can do better. (Did Microsoft ask Grok, “How do I manage this multi-billion-dollar bonfire?”)
Let’s blame it on AI.
Whitney Grace, March 14, 2025
Wizard Snarks Amazon: Does Amazon Care? Ho Ho No
March 13, 2025
Another post from the dinobaby. Alas, no smart software used for this essay.
I read a wonderful essay from a fellow who created a number of high-value solutions. Remember the Oxford English Dictionary SGML project or the Open Text Index? The person involved deeply in both of those projects is Tim Bray. He wrote a pretty good essay called “Bye, Prime.” On the surface it is a chatty explanation of why a former Amazon officer dropped the “Prime” membership. Thinking about the comments in the write up, I believe Dr. Bray’s article underscores some deeper issues.
In my opinion, the significant points include:
First, 21st century capitalism lacks “ethics stuff.” The decisions benefit the stakeholders.
Second, in a major metropolitan area, local outlets provide equivalent products at competitive prices. This suggests a bit of price exploitation occurs in giant online retail operations.
Third, American companies are daubed with tar as a result of certain national postures.
Fourth, a crassness is evident in some US online services.
Is the article about Amazon? I would suggest that it is, but the implications are broader. I recommend the write up. I believe attending to the explicit and implicit messages in the essay would be useful.
I think the processes identified by Dr. Bray are unlikely to slow. Going back is difficult, perhaps impossible.
PS. I think the need to fix up the security of AWS buckets, clean up the third-party reseller scams, and return basic functionality to the Kindle interface indicates that Amazon has gotten lost in one of its warehouses because smart Alexa is really dumb.
Stephen E Arnold, March 13, 2025
Keeping an Eye on AI? Here Are Fifteen People of Interest for Some
March 13, 2025
Underneath the hype, there are some things AI is actually good at. But besides the players who constantly make the news, who is really shaping the AI landscape? A piece at Silicon Republic introduces us to "15 Influential Players Driving the AI Revolution." Writer Jenny Darmody observes:
"As AI continues to dominate the headlines, we’re taking a closer look at some of the brightest minds and key influencers within the industry. Throughout the month of February, SiliconRepublic.com has been putting AI under the microscope for more of a deep dive, looking beyond the regular news to really explore what this technology could mean. From the challenges around social media advertising in the new AI world to the concerns around its effect on the creative industries, there were plenty of worrying trends to focus on. However, there were also positive sides to the technology, such as its ability to preserve minority languages like Irish and its potential to reduce burnout in cybersecurity. While exploring these topics, the AI news just kept rolling: Deepseek continued to ruffle industry feathers, Thomson Reuters won a partial victory in its AI copyright case and the Paris AI Summit brought further investments and debates around regulation. With so much going on in the industry, we thought it was important to draw your attention to some key influencers you should know within the AI space."
Ugh, another roster of tech bros? Not so fast. On this list, the women actually outnumber the men, eight to seven. In fact, the first entry is Ireland’s first AI Ambassador Patricia Scanlon, who has hopes for truly unbiased AI. Then there is the EU’s Lucilla Sioli, head of the European Commission’s AI Office. She is tasked with both coordinating Europe’s AI strategy and implementing the AI Act. We also happily note the inclusion of New York University’s Juliette Powell, who advises clients from gaming companies to banks in the responsible use of AI. See the write-up for the rest of the women and men who made the list.
Cynthia Murrell, March 13, 2025
NSO Group, the PR of Intelware, Captures Headlines … Yet Again
March 13, 2025
Our reading and research have led us to this basic rule: Unless measures are taken to keep something secret, diffusion is inevitable. Knowledge about systems, methods, and tools to access data is widespread. Case in point—Today’s General Counsel tells us, "Pegasus Spyware Is Showing Up on Corporate Execs’ Cell Phones." The brief write-up cites reporting by The Record’s Suzanne Smalley, who got her information from security firm iVerify. It shows a steep climb in Pegasus-infected devices over the second half of last year. We learn:
"The number of reported infected phones among iVerify corporate clients was eleven out of 18,000 devices tested in December last year. In May 2024, when iVerify first began offering the spyware testing service, a study found seven spyware infections out of 3,000 phones tested. ‘The world remains totally unprepared to deal with this from a security perspective,’ says iVerify co-founder and former National Security Agency analyst Rocky Cole, who was interviewed for the article. ‘This stuff is way more prevalent than people think.’ The article notes that business executives are now proving to be vulnerable, including individuals with access to proprietary plans and financial data, as well as those who frequently communicate with other influential leaders in the private sector. These leaders engage in sensitive work out of the public eye, including deals that have the potential to impact financial markets."
But how could this happen? Pegasus-maker NSO Group vows it only sells spyware to whitelisted governments for counterterrorism and fighting crime. It does do that. And also other things, reportedly. So we are unsurprised to find business executives among those allegedly targeted. We think it best to assume anything digital can be accessed by anyone at any moment. Is it time to bring back communications via pen and paper? At least someone must get out from behind a desk to intercept snail mail or dead drops.
Cynthia Murrell, March 13, 2025
FOGINT: France Gears Up for More Encrypted Message Access
March 12, 2025
Yep, another dinobaby original.
Buoyed by the success of the Pavel Durov litigation, France appears to be getting ready to pursue Signal, the Zuck WhatsApp, and the Switzerland-based Proton Mail. The actions seem to lie in the future. But those familiar with the mechanisms of French investigators may predict that information gathering began years ago. With ample documentation in hand, French legislators with communication links to the French government seem ready to require Pavel-ovian responses to requests for user data.
“France Pushes for Law Enforcement Access to Signal, WhatsApp, and Encrypted Email” reports:
An amendment to France’s proposed “Narcotraffic” bill, which is passing through the National Assembly in the French Parliament, will require tech companies to hand over decrypted chat messages of suspected criminals within 72 hours. The law, which aims to provide French law enforcement with stronger powers to combat drug trafficking, has raised concerns among tech companies and civil society groups that it will lead to the creation of “backdoors” in encrypted services that will be exploited by cyber criminals and hostile nation-states. Individuals that fail to comply face fines of €1.5m while companies risk fines of up to 2% of their annual world turnover if they fail to hand over encrypted communications demanded by French law enforcement.
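To put the corporate ceiling in perspective, here is a back-of-the-envelope sketch. The €1.5m and 2% figures come from the passage above; the €10 billion turnover is purely hypothetical.

```python
# Back-of-the-envelope sketch of the fine exposure described in the bill.
# The turnover figure is hypothetical; the ceilings come from the cited text.
individual_fine_cap_eur = 1_500_000
annual_world_turnover_eur = 10_000_000_000   # hypothetical €10bn company

corporate_fine_cap_eur = 0.02 * annual_world_turnover_eur
print(f"Individual ceiling: €{individual_fine_cap_eur:,.0f}")   # €1,500,000
print(f"Corporate ceiling:  €{corporate_fine_cap_eur:,.0f}")    # €200,000,000
```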
The practical implications of these proposals are two-fold. First, the proposed legislation puts the identified firms on notice that France is going to take action. The idea is that the services know what’s coming. Proactive cooperation will probably delight the French investigators and benefit the companies. Mr. Durov has learned that cooperation makes it possible for him to envision a future that does not include a stay at the overcrowded and dangerous prison just 16 kilometers from his hotel in Paris. The second is to keep up the momentum. Other countries have been indifferent to or unwilling to take on certain firms which have blown off legitimate requests for information about alleged bad actors. The French can be quite stubborn and have a bureaucracy that almost guarantees a less than amusing experience for the American outfits. The Swiss have experience in dealing with France, and I anticipate a quieter approach to Proton Mail.
The write up includes this statement:
opponents of the French law argue that breaking an encryption application that is allegedly designed for use by criminals is very different from breaking the encryption of chat apps, such as WhatsApp and Signal, and encrypted emails used by billions of people for non-criminal communications. “We do not see any evidence that the French proposal is necessary or proportional. To the contrary, any backdoor will sooner or later be exploited…
I think the statement is accurate. Information has a tendency to leak. But consider the impact on Telegram. That entity is in danger of becoming irrelevant because of France’s direct action against the Teflon-coated Russian Pavel Durov. Cooperation is not enough. The French action seems to put Telegram into a credibility hole, and it is not clear if the organization’s overblown crypto push can stave off user defection and slowing user growth.
Will the French law conflict with European Union and other EU states’ laws? Probably. My view is that the French will adopt the position, “C’est dommage en effet.” The Telegram “problem” is not completely resolved, but France is willing to do what other countries won’t. Is the French Foreign Legion operating in Ukraine? The French won’t say, but some of those Telegram messages are interesting. Oui, c’est dommage. Tip: Don’t fool around with a group of French Foreign Legion fellows, even if you are wearing an EU flag T-shirt and carrying a volume of EU laws, rules, regulations, and policies.
How will this play out? How would I know? I work in an underground office in rural Kentucky. I don’t think our local grocery store carries French cheese. However, I can offer a few tips to executives of the firms identified in the article:
- Do not go to France
- If you do go to France, avoid interactions with government officials
- If you must interact with government officials, make sure you have a French avocat or avocate lined up.
France seems so wonderful; it has great food; it has roads without billboards; and it has a penchant for direct action. Examples range from French Guiana to Western Africa. No, the “real” news doesn’t cover these activities. And executives of Signal and the Zuckbook may want to consider their travel plans. Avoid the issues Pavel Durov faces and may have resolved this calendar year. Note the word “may.”
Stephen E Arnold, March 12, 2025
AI Hiring Spoofs: A How To
March 12, 2025
Be aware. A dinobaby wrote this essay. No smart software involved.
The late Robert Steele, one of the first government professionals to hop on the open source information bandwagon, and I worked together for many years. In one of our conversations in the 1980s, Robert explained how he used a fake persona to recruit people to assist him in his work on a US government project. He explained that job interviews were an outstanding source of information about a company or an organization.
“AI Fakers Exposed in Tech Dev Recruitment: Postmortem” is a modern spin on Robert’s approach. Instead of newspaper ads and telephone calls, today’s approach uses AI and video conferencing. The article presents a recipe for a technique that was not widely discussed in the 1980s. Robert learned his approach from colleagues in the US government.
The write up explains that a company wants to hire a professional. Everything hums along and then:
…you discover that two imposters hiding behind deepfake avatars almost succeeded in tricking your startup into hiring them. This may sound like the stuff of fiction, but it really did happen to a startup called Vidoc Security, recently. Fortunately, they caught the AI impostors – and the second time it happened they got video evidence.
The cited article explains how to set up and operate this type of deepfake play. I am not going to present the “how to” in this blog post. If you want the details, head to the original. The penetration tactic requires Microsoft LinkedIn, which gives that platform another use case for certain individuals gathering intelligence.
Several observations:
- Keep in mind that the method works for fake employers looking for “real” employees in order to obtain information from job candidates. (Some candidates are blissfully unaware that the job is a front for obtaining data about an alleged former employer.)
- The best way to avoid AI-centric scams is to do the work the old-fashioned way. Smart software opens up a wealth of opportunities to obtain allegedly actionable information. Unfortunately the old-fashioned way is slow, expensive, and prone to social engineering tactics.
- As bad actors take advantage of the increased capabilities of smart software, humans who are not actively involved with AI do not adapt quickly. Personnel-related matters are a pain point for many organizations.
To sum up, AI is a tool. It can be used in interesting ways. Is the contractor you hired on Fiverr or via some online service a real person? Is the job a real job or a way to obtain information via an AI that is a wonderful conversationalist? One final point: The target referenced in the write up was a cybersecurity outfit. Did the early-alert, proactive, AI-infused system prevent penetration?
Nope.
Stephen E Arnold, March 12, 2025
Survey: Kids and AI Tools
March 12, 2025
Our youngest children are growing up alongside AI. Or, perhaps, it would be more accurate to say increasingly intertwined with it. Axios tells us, "Study Zeroes in on AI’s Youngest Users." Writer Megan Morrone cites a recent survey from Common Sense Media that examined AI use by children under 8 years old. The researchers surveyed 1,578 parents last August. We learn:
"Even the youngest of children are experimenting with a rapidly changing technology that could reshape their learning and critical thinking skills in unknown ways. By the numbers: One in four parents of kids ages 0-8 told Common Sense their children are learning critical thinking skills from using AI.
- 39% of parents said their kids use AI to ‘learn about school-related material,’ while only 8% said they use AI to ‘learn about AI.’
- For older children (ages 5-8) nearly 40% of parents said their child has used an app or a device with AI to learn.
- 24% of children use AI for ‘creative content,’ like writing short stories or making art, according to their parents."
It is too soon to know the long-term effects of growing up using AI tools. These kids are effectively subjects in a huge experiment. However, we already see indications that reliance on AI is bad for critical thinking skills. And that research is on adults, never mind kids whose base neural pathways are just forming. Parents, however, seem unconcerned. Morrone reports:
- More than half (61%) of parents of kids ages 0-8 said their kids’ use of AI had no impact on their critical thinking skills.
- 60% said there was no impact on their child’s well-being.
- 20% said the impact on their child’s creativity was ‘mostly positive.’
Are these parents in denial? They cannot just be happy to offload parenting to algorithms. Right? Perhaps they just need more information. Morrone points us to EqualAI’s new AI Literacy Initiative but, again, that resource is focused on adults. The write-up emphasizes the stakes of this great experiment on our children:
‘Our youngest children are on the front lines of an unprecedented digital transformation,’ said James P. Steyer, founder and CEO of Common Sense.
‘Addressing the impact of AI on the next generation is one of the most pressing issues of our time,’ Miriam Vogel, CEO of EqualAI, told Axios in an email. ‘Yet we are insufficiently developing effective approaches to equip young people for a world where they are both using and profoundly affected by AI.’
What does this all mean for society’s future? Stay tuned.
Cynthia Murrell, March 12, 2025
AI and Jobs: Tell These Folks AI Will Not Impact Their Work
March 12, 2025
The work of a real, live dinobaby. Sorry, no smart software involved. Whuff, whuff. That’s the sound of my swishing dino tail. Whuff.
I have a friend who does some translation work. She’s chugging along because of her reputation for excellent work. However, a younger person who worked with me on a project requiring Russian language skills has not fared as well. The young person lacks the reputation and the contacts with a base of clients. The older person can be as busy as she wants to be.
What’s the future of translating from one language to another for money? For the established person, smart software appears to have had zero impact. The younger person seems to be finding that smart software is getting the translation work.
I will offer my take in a moment. First, let’s look at “Turkey’s Translators Are Training the AI Tools That Will Replace Them.”
I noted this statement in the cited article:
Turkey’s sophisticated translators are moonlighting as trainers of artificial intelligence models, even as their profession shrinks with the rise of machine translations. As the models improve, these training jobs, too, may disappear.
What’s interesting is that the skilled translators are providing information to AI models. These models are definitely going to replace the humans. The trajectory is easy to project. Machines will work faster and cheaper. The humans will abandon the discipline. Then prices will go up. Those requiring translations will find themselves spending more and having few options. Eventually the old hands will wither. Excellent translations which capture nuance will become a type of endangered species. The snow leopard of knowledge work is with us.
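The cheaper-and-faster part is not hypothetical. Here is a minimal sketch of what now competes with a human translator, assuming the Hugging Face transformers library and the publicly available Helsinki-NLP opus-mt-tr-en Turkish-to-English model; the sample sentence is my own.

```python
# Minimal sketch: machine translation in a handful of lines.
# Assumes the Hugging Face transformers library and the public
# Helsinki-NLP/opus-mt-tr-en model; the sample sentence is invented.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-tr-en")

sample = "Yapay zeka çevirmenlerin işini değiştiriyor."  # "AI is changing translators' work."
print(translator(sample)[0]["translation_text"])
```

A few lines of code, a freely downloadable model, and no invoice: that is the economic pressure the cited article describes, nuance be damned.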
I noted this statement in the article:
Book publishing, too, is transforming. Turkish publisher Dedalus announced in 2023 that it had machine-translated nine books. In 2022, Agora Books, helmed by translator Osman Akınhay, released a Turkish edition of Jean-Dominique Brierre’s Milan Kundera, une vie d’écrivain, a biography of the Czech-French novelist Milan Kundera. Akınhay, who does not know French, used Google Translate to help him in the translation, to much criticism from the industry.
What’s this mean?
- Jobs will be lost and the professionals with specialist skills are going to be the buggy whip makers in a world of automobiles
- The downstream impact of smart software is going to kill off companies. The Chegg legal matter illustrates how a monopoly can mindlessly erode a company. This is like a speeding semi-truck smashing love bugs on a Florida highway. The bugs don’t know what hit them, and the semi-truck is unaware and the driver is uncaring. Dead bugs? So what? See “Chegg Sues Google for Hurting Traffic with AI As It Considers Strategic Alternatives.”
- Data from different sources suggesting that AI will just create jobs is either misleading, public relations, or dead wrong. The Bureau of Labor Statistics data are spawning articles like “AI and Its Impact on Software Development Jobs.”
Net net: What’s emerging is one of those classic failure scenarios. Nothing big seems to go wrong. Then a collapse occurs. That’s what’s beginning to appear. Just little changes. Heed the signals? Of course not. I can hear someone saying, “That won’t happen to me.” Of course not, but cheaper and faster are good enough at this time.
Stephen E Arnold, March 12, 2025
Microsoft: Marketing Is One Thing, a Cost Black Hole Is Quite Another
March 11, 2025
Yep, another dinobaby original.
I read “Microsoft Cuts Data Centre Plans and Hikes Prices in Push to Make Users Carry AI Cost.” The headline meant one thing to me: The black hole of AI costs must be capped. For my part, I try to avoid MSFT AI. After testing the Redmoanians’ smart software for months, I decided, “Nope.”
The write up says:
Last week, Microsoft unceremoniously pulled back on some planned data centre leases. The move came after the company increased subscription prices for its flagship 365 software by up to 45%, and quietly released an ad-supported version of some products. The tech giant’s CEO, Satya Nadella, also recently suggested AI has so far not produced much value.
No kidding. I won’t go into the annoyances. AI in Notepad? Yeah, great thinking like that which delivered Bob to users who loved Clippy.
The essay notes:
Having sunk billions into generative AI, Microsoft is trying to find the business model that will make the technology profitable.
Maybe someday, but that day is not today or tomorrow. If anything, Microsoft is struggling with old-timey software as well, as The Register, a UK online publication, has reported.
Back to AI. The AI financial black hole exists, and it may not be easy to resolve. What’s the fix? Here’s the Microsoft plan as of March 2025, according to the cited write up:
As AI infrastructure costs rise and model development evolves, shifting the costs to consumers becomes an appealing strategy for AI companies. While big enterprises such as government departments and universities may manage these costs, many small businesses and individual consumers may struggle.
Several observations are warranted:
- What happens if Microsoft cannot get consumers to pay the AI bills?
- What happens if people like this old dinobaby don’t want smart software and just shift to workflows without Microsoft products?
- What happens if the Tensor, OpenAI, and other implementations continue to hallucinate, creating more headaches than the methods resolve?
Net net: Marketing may have gotten ahead of reality, but the black hole of costs is very real and not a hallucination. Can Microsoft escape a black hole like this one?
Stephen E Arnold, March 11, 2025