Has Amazon Hit the Same Big Pothole As Apple?
February 27, 2025
This blog post is the work of a real-live dinobaby. No smart software involved.
Apple has experienced some growing pains with its Apple Intelligence. Incorrect news summaries and assorted Siri weirdness indicate that designing a rectangle and a laptop requires different skills from delivering a high-impact, mass-market smart software “solution.”
I know Apple is working overtime to come up with the next big thing. Will it be another me-too product? Probably. I liked the M1 chip, but subsequent generations have not done much to change my workflow or my happiness with my laptops and Mac Minis. I am okay with a cheap smart watch. I am okay with an old iPhone. I am okay with providing those who do work for me with a Mac laptop. Apple, however, is not a big player in smart software. In China, the company is embracing Chinese smart software. Hey, Apple wants to sell iPhones. “Do what’s necessary” is the basic approach to innovation, in my opinion.
Has Amazon hit the same pothole as Apple? Surely the Bezos bulldozer can move forward with its powerful innovation machine. I am not so sure. I remember a project four years ago that required my team to look at Amazon’s Sagemaker, an initiative to provide off-the-shelf technology and data sets to Amazon cloud customers who wanted smart software. Have you perceived Sagemaker as the big dog in AI? I don’t.
I read “Looks Like the Next-Gen Alexa’s Release Is Hitting Another Speed Bump.” The write up suggests that the expensive kitchen timer and weather update device is not getting smarter very quickly. The article reports:
According to a tip from an unnamed Amazon employee, shared by the Washington Post (via Android Authority), the smarter Alexa update won’t be released until March 31. The holdup was apparently due to the upgraded assistant tripping over itself in testing, struggling to nail accurate answers. So, it seems like Amazon is taking extra time to fine-tune Alexa’s brain before letting it loose.
I am not too surprised. Amazon fiddles with the Kindle, and the software for that device does not meet the needs of people who read numerous books. (Don’t you love those Amazon Kindle email addresses and the software that makes it a challenge to figure out which books are on the device, which are for sale, and which are in the Amazon cloud? Wonderful software for someone who does not read, just buys books.) The cloud AI initiative has not come close to the Chinese technological “strike” delivered by the Deepseek system. Now the kitchen timer is delayed, just like useful Apple Intelligence.
Let me share my hypotheses about why Amazon, and I suppose Apple too, are stuck. Call this my own mental, fully human hallucination:
- Neither company has a next big thing. Both companies are in a me-too, me-too loop. That’s a common situation in a firm which gets big, has money, and loses its genius for everything except making as much money as possible. Innovation atrophy is my phrase for this characteristic of some companies.
- Throwing money at a problem does not create sparks of insight. The novel ideas are smothered under the flow of money that must be spent. This is a middle manager’s problem; specifically, effort is directed to spending the money, not to coming up with a big idea that solves a problem and delights people. Do you know what’s different about a new iPhone? Do you know which Amazon products are actually of good quality? I sure don’t. I ordered an AMD Ryzen CPU; Amazon shipped me red panties. My old iPhone asks me to log in every time I look at Telegram’s messages on the device. Really, panties and persistent log-ins?
- General strategic drift. I am not sure what business Apple is in. Is it services, like selling music? Is it hardware that is mostly indistinguishable from the hardware just replaced? Is Amazon a cloud computing outfit with leaky S3 storage constructs? Is it a seller of Temu-type products? Is it a delivery business unable to keep its delivery partners happy? The purpose of these firms is to acquire money. Period. The original Jobs and Bezos “razzmatazz” is gone.
Will the companies remediate the fundamental innovation issue? Nope. But both will make a lot of money. Beavers do what beavers do. No matter what. But beavers might be able to get Alexa to spin money, games to mostly work, and Twitch to make creators happy, not grumpy.
Stephen E Arnold, February 27, 2025
Yikes! Existing AI is Fundamentally Flawed
February 27, 2025
AI applications are barreling full steam ahead into all corners of our lives. Yet there are serious concerns about the very structure of how LLMs work. BCS, The Chartered Institute for IT, asks, "Does Current AI Represent a Dead End?" Cybersecurity professor Eerke Boiten writes:
"From the perspective of software engineering, current AI systems are unmanageable, and as a consequence their use in serious contexts is irresponsible. For foundational reasons (rather than any temporary technology deficit), the tools we have to manage complexity and scale are just not applicable. By ‘software engineering’, I mean developing software to align with the principle that impactful software systems need to be trustworthy, which implies their development needs to be managed, transparent and accountable … When I last gave talks about AI ethics, around 2018, my sense was that AI development was taking place alongside the abandonment of responsibility in two dimensions. Firstly, and following on from what was already happening in ‘big data’, the world stopped caring about where AI got its data — fitting in nicely with ‘surveillance capitalism. And secondly, contrary to what professional organisations like BCS and ACM had been preaching for years, the outcomes of AI algorithms were no longer viewed as the responsibility of their designers — or anybody, really."
Yes, that is the reality we are careening into. But for big tech, that may be a feature, not a bug. Those firms clearly want today’s AI to be THE one true AI. A high profit to responsibility ratio suits them just fine.
Boiten describes, in a nutshell, how neural networks function. He emphasizes the disturbing lack of human guidance. And understanding. Since engineers cannot know just how an algorithm comes to its conclusions, it is impossible to ensure it is operating to specification. These problems cannot be resolved with hard work and insights; they are baked in. See the write-up for more details.
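To see Boiten's point in miniature, here is a small illustrative sketch (mine, not from his article): a toy neural network that learns XOR with plain NumPy. The network's entire "understanding" ends up as a handful of floating-point weights; printing them tells an engineer nothing about why any answer comes out, which is why testing, rather than inspection, is the only management tool available:

```python
import numpy as np

# Toy two-layer network trained on XOR by gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer weights
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)                # hidden activations
    out = sigmoid(h @ W2 + b2)              # network output
    g_out = (out - y) * out * (1 - out)     # gradient at the output
    g_h = g_out @ W2.T * h * (1 - h)        # gradient at the hidden layer
    W2 -= 0.5 * (h.T @ g_out); b2 -= 0.5 * g_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ g_h);   b1 -= 0.5 * g_h.sum(axis=0)

print(out.round(2).ravel())  # approximately [0, 1, 1, 0]: correct XOR behavior
print(W1)                    # the "explanation": a grid of unreadable floats
```

Scale those dozen weights up to hundreds of billions of parameters, and the management problem Boiten describes comes into focus.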
If engineers are willing to progress beyond today’s LLMs, Boiten suggests, they could develop something actually reliable. It could even be built on existing AI tech, so all that work (and funding) need not go out the window. They just have to look past the dollar signs in their eyes and press ahead to a safer and more reliable product. The post warns:
"In my mind, all this puts even state-of-the-art current AI systems in a position where professional responsibility dictates the avoidance of them in any serious application. When all its techniques are based on testing, AI safety is an intellectually dishonest enterprise."
Now all we need is for big tech to do the right thing.
Cynthia Murrell, February 27, 2025
A Handy Resource: 100 AI Tools in 10 Categories
February 27, 2025
We hear a lot about the most prominent AI tools like ChatGPT, Dall-E, and Grammarly. But there are many more options designed for a wide range of tasks. Inspiration blogger Ayo-Ibidapo has rounded up "100 AI Tools for Every Need: The Ultimate List." He succinctly introduces his list by observing:
"AI is revolutionizing industries, making tasks easier, faster, and more efficient. Whether you need AI for writing, design, marketing, coding, or personal productivity, there’s a tool for you. Here’s a list of 100 AI tools categorized by their purpose."
The 10 categories include those above and more, including my favorite, "Miscellaneous and Fun." As a life-long gamer, I am drawn to AI Dungeon. I am not so sure about the face-swapping tool, Reface AI. Seems a bit creepy. I am curious whether any of the investing tools, like Alpaca, Kavout, or Trade Ideas could actually boost one’s portfolio. And I am pleased to see the esteemed Wolfram Alpha made the list in the education and research section. As for the ten entries under healthcare and wellness, I wonder: are we resigned to sharing our most intimate details with bots? Ginger AI, for mental health support, sounds non-threatening, but are there any data-grubbing details buried in its terms of service agreement?
See the post for all 100 tools. If that is not enough, check out the discussion at Battle Station, "Uncover 30,000+ AI Apps Using AITrendyTools." There’s an idea: what better way to pick an AI tool than with an AI tool?
Cynthia Murrell, February 27, 2025
Meta and Torrents: True, False, or Rationalization?
February 26, 2025
AIs gobble datasets for training. It is also a fact that many LLMs and datasets contain biased information, are incomplete, or just plain stink. One ethical but cumbersome way to train algorithms would be to notify people that their data, creative content, or other information will be used to train AI. Offering to pay for the right to use the data would be a useful step, some argue.
Will this happen? Obviously not.
Why?
Because it’s sometimes easier to take instead of asking. According to Tom’s Hardware, “Meta Staff Torrented Nearly 82TB of Pirated Books for AI Training – Court Records Reveal Copyright Violations.” The article explains that Meta pirated 81.7 TB of books from the shadow libraries Anna’s Archive, Z-Library, and LibGen. These books were then used to train AI models. Meta is now facing a class action lawsuit over its use of content from the shadow libraries.
The allegations arise from Meta employees’ written communications. Some of these messages provide insight into employees’ concerns about tapping pirated materials. The employees were getting frown lines, but then some staffers’ views shifted when they concluded that smart software helps people access information.
Here’s a passage from the cited article I found interesting:
“Then, in January 2023, Mark Zuckerberg himself attended a meeting where he said, “We need to move this stuff forward… we need to find a way to unblock all this.” Some three months later, a Meta employee sent a message to another one saying they were concerned about Meta IP addresses being used “to load through pirate content.” They also added, “torrenting from a corporate laptop doesn’t feel right,” followed by laughing out loud emoji. Aside from those messages, documents also revealed that the company took steps so that its infrastructure wasn’t used in these downloading and seeding operations so that the activity wouldn’t be traced back to Meta. The court documents say that this constitutes evidence of Meta’s unlawful activity, which seems like it’s taking deliberate steps to circumvent copyright laws.”
If true, the approach smacks of that suave Silicon Valley style. If false, my faith in a yacht owner with gold chains might be restored.
Whitney Grace, February 26, 2025
Innovation: It Ebbs, It Flows, It Fizzles
February 26, 2025
Many would argue humanity is nothing if not creative. Otherwise, we would still be living the way we did thousands of years ago. But, asks the Financial Times, "Is Innovation Slowing Down? With Matt Clancy." Nah. Look how innovative iPhones and Windows upgrades are.
The post presents the audio of an interview between journalist John Burn-Murdoch and economist Matt Clancy. (The transcript can be found here.) The page introduces the interview:
"Productivity growth in the developed world has been on a downward trend since the 1960s. Meanwhile, gains in life expectancy have also slowed. And yet the number of dollars and researchers dedicated to R&D grows every year. In today’s episode, the FT’s Chief Data Reporter, John Burn-Murdoch, asks whether western culture has lost its previous focus on human progress and become too risk-averse, or whether the problem is simply that the low-hanging fruit of scientific research has already been plucked. He does so in conversation with innovation economist Matt Clancy, who is the author of the New Things Under the Sun blog, and a research fellow at Open Philanthropy, a non-profit foundation based in San Francisco that provides research grants."
The pair begin by recalling a theory of economic historian Joel Mokyr, who believes a growing belief in human progress and experimentation led to the Industrial Revolution. The perspective, believes Clancy, is supported by a 2023 study that examined thousands of political and scientific books from the 1500s–1700s. That research shows a growing interest in progress during that period. Sounds plausible.
But now, we learn, innovation appears to be in decline. Research output per scientist has decreased since 1960, despite increased funding. Productivity growth and technological output are also slowing. Is this because our culture has grown less interested in invention? To hear Clancy tell it, probably not. A more likely suspect is what economist Ben Jones dubbed the Burden of Knowledge. Basically, as humanity makes discoveries that build on each other, each human scientist has more to learn before they can contribute new ideas. This also means more individual specialization and more teamwork. Of course, adding meetings to the mix slows everything down.
The economist has suggestions, like funding models that reward risk-taking. He also believes artificial intelligence will significantly speed things up. Probably—but will it send us careening down the wrong paths? AI will have to get far better at not making mistakes, or making stuff up, before we should trust it at the helm of human progress.
Cynthia Murrell, February 26, 2025
AI Research Tool from Perplexity Is Priced to Undercut the Competition
February 26, 2025
Are prices for AI-generated research too darn high? One firm thinks so. In a Temu-type bid to take over the market, reports VentureBeat, "Perplexity Just Made AI Research Crazy Cheap—What that Means for the Industry." CEO Aravind Srinivas credits open source software for making the move possible, opining that "knowledge should be universally accessible." Knowledge, yes. AI research? We are not so sure. Nevertheless, here we are. The write-up describes the difference in pricing:
"While Anthropic and OpenAI charge thousands monthly for their services, Perplexity offers five free queries daily to all users. Pro subscribers pay $20 monthly for 500 daily queries and faster processing — a price point that could force larger AI companies to explain why their services cost up to 100 times more."
Not only is Perplexity’s Deep Research cheaper than the competition, crows the post, its accuracy rivals theirs. We are told:
"[Deep Research] scored 93.9% accuracy on the SimpleQA benchmark and reached 20.5% on Humanity’s Last Exam, outperforming Google’s Gemini Thinking and other leading models. OpenAI’s Deep Research still leads with 26.6% on the same exam, but OpenAI charges $200 percent for that service. Perplexity’s ability to deliver near-enterprise level performance at consumer prices raises important questions about the AI industry’s pricing structure."
Well, okay. Not to stray too far from the point, but is a 20.5% or a 26.6% on Humanity’s Last Exam really something to brag about? Last we checked, those were failing grades. By far. Isn’t it a bit too soon to be outsourcing research to any LLM? But I digress.
We are told the low, low cost Deep Research is bringing AI to the micro-budget masses. And, soon, to the Windows-less—Perplexity is working on versions for iOS, Android, and Mac. Will this spell disaster for the competition?
Cynthia Murrell, February 26, 2025
Researchers Raise Deepseek Security Concerns
February 25, 2025
What a shock. It seems there are some privacy concerns around Deepseek. We learn from the Boston Herald, “Researchers Link Deepseek’s Blockbuster Chatbot to Chinese Telecom Banned from Doing Business in US.” Byron Tau, formerly of the Wall Street Journal and now with the AP, writes:
“The website of the Chinese artificial intelligence company Deepseek, whose chatbot became the most downloaded app in the United States, has computer code that could send some user login information to a Chinese state-owned telecommunications company that has been barred from operating in the United States, security researchers say. The web login page of Deepseek’s chatbot contains heavily obfuscated computer script that when deciphered shows connections to computer infrastructure owned by China Mobile, a state-owned telecommunications company.”
If this is giving you déjà vu, dear reader, you are not alone. This scenario seems much like the uproar around TikTok and its Chinese parent company ByteDance. But it is actually worse. ByteDance’s direct connection to the Chinese government is, as yet, merely hypothetical. China Mobile, on the other hand, is known to have direct ties to the Chinese military. We learn:
“The U.S. Federal Communications Commission unanimously denied China Mobile authority to operate in the United States in 2019, citing ‘substantial’ national security concerns about links between the company and the Chinese state. In 2021, the Biden administration also issued sanctions limiting the ability of Americans to invest in China Mobile after the Pentagon linked it to the Chinese military.”
It was Canadian cybersecurity firm Feroot Security that discovered the code. The AP then had the findings verified by two academic cybersecurity experts. Might similar code be found within TikTok? Possibly. But, as the article notes, the information users feed into Deepseek is a bit different from the data TikTok collects:
“Users are increasingly putting sensitive data into generative AI systems — everything from confidential business information to highly personal details about themselves. People are using generative AI systems for spell-checking, research and even highly personal queries and conversations. The data security risks of such technology are magnified when the platform is owned by a geopolitical adversary and could represent an intelligence goldmine for a country, experts warn.”
Interesting. But what about CapCut, the ByteDance video thing?
Cynthia Murrell, February 25, 2025
Musings on AI UI Design
February 25, 2025
The advent of AI has sent UI designers back to the drawing tablet. Tech product designer and blogger Patrick Morgan considers "8 Design Breakthroughs Defining AI’s Future." As when touch-based devices became common, he asserts, design choices made now will shape the ways we interact with technology for years to come. Morgan writes:
"For the first time in over a decade, we’re facing a truly greenfield space in user experience design. There’s no playbook, no established patterns to fall back on. Even the frontier AI labs are learning through experimentation, watching to see what resonates as they introduce new ways to interact. … It’s fascinating to watch these design choices ripple across the ecosystem in real-time. When something works, competitors rush to adopt it — not out of laziness, but because we’re all collectively discovering what makes sense in this new paradigm. In this wild-west moment, new dominant patterns are emerging. Today, I want to highlight the breakthroughs that have captured my imagination the most — the design choices shaping our collective understanding of AI interaction."
The roundup includes obvious choices: conversational paradigms like ChatGPT’s interface and voice input systems in general. Morgan also admires integration a la Cursor IDE and Claude Artifacts, and he appreciates the helpful Grok button alongside content on X. He gives kudos for transparency, like Perplexity’s real-time citations and Deepseek’s process descriptions. Morgan even gives credit to MidJourney for refusing to build its own UI until it had refined its core technology. He reflects:
"These eight breakthroughs aren’t just clever UI decisions — they’re the first chapters in a new story about how humans and machines work together. Each represents a moment when someone dared to experiment, to try something unproven, and found a pattern that resonated."
Yes. And also: ultimately, AI will be invisible, embedded and out of sight, quietly outputting information. Interfaces will keep changing at the hands of people with time on their hands. That churn should not distract from the actual trajectory of smart and smarter software. Where do we stand on bias, hallucinations, privacy, and accountability? Those, we believe, are the more pertinent questions. But, sure, UI choices are nifty to observe.
Cynthia Murrell, February 25, 2025
Rest Easy. AI Will Not Kill STEM Jobs
February 25, 2025
Written by a dinobaby, not smart software. But I would replace myself with AI if I could.
Bob Hope quipped, “A sense of humor is good for you. Have you ever heard of a laughing hyena with heartburn?” No, Bob, I have not.
Here’s a more modern joke for you from the US Bureau of Labor Statistics circa 2025. It is much fresher than Mr. Hope’s quip from a half century ago.
The Bureau of Labor Statistics says:
Employment in the professional, scientific, and technical services sector is forecast to increase by 10.5% from 2023 to 2033, more than double the national average. (Source: Investopedia)
Okay. Then what am I to make of those LinkedIn, X (Twitter), and Reddit posts about technology workers unable to find jobs? Consider these groups:
- Recent college graduates with computer science degrees
- Recently terminated US government workers from agencies like 18F
- Workers over 55 urged to take early retirement
The item about the rosy job market appeared in Slashdot too. Here’s the quote I noted:
Employment in the professional, scientific, and technical services sector is forecast to increase by 10.5% from 2023 to 2033, more than double the national average. According to the BLS, the impact AI will have on tech-sector employment is highly uncertain. For one, AI is adept at coding and related tasks. But at the same time, as digital systems become more advanced and essential to day-to-day life, more software developers, data managers, and the like are going to be needed to manage those systems. "Although it is always possible that AI-induced productivity improvements will outweigh continued labor demand, there is no clear evidence to support this conjecture," according to BLS researchers.
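For scale, here is a quick back-of-the-envelope calculation (mine, not the BLS’s) of what that headline 10.5% growth over a decade works out to per year:

```python
# Implied compound annual growth rate for 10.5% total growth, 2023-2033.
total_growth = 0.105
years = 10
annual = (1 + total_growth) ** (1 / years) - 1
print(f"{annual:.2%} per year")  # about 1.00% per year, compounded
```

Roughly one percent a year. Robust? You decide.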
Robert Half, an employment firm, is equally optimistic. Just a couple of weeks ago, that outfit said:
Companies continue facing strong competition from other firms for tech talent, particularly for candidates with specialized skills. Across industries, AI proficiency tops the list of most-sought capabilities, with organizations needing expertise for everything from chatbots to predictive maintenance systems. Other in-demand skill areas include data science, IT operations and support, cybersecurity and privacy, and technology process automation.
What am I to conclude from these US government data? Here are my preliminary thoughts:
- The big time consulting firms are unlikely to change their methods of cost reduction; that is, if software (smart or dumb) can do a job for less money, that software will be included on a list of options. Given a choice of going out of business or embracing smart software, a significant percentage of consulting firm clients will give AI a whirl. If AI works and the company stays in business or grows, the humans will be repurposed or allowed to find their future elsewhere.
- The top one percent in any discipline will find work. The other 99 percent will need to have family connections, family wealth, or a family business to provide a boost for a great job. What if a person is not in the top one percent of something? Yeah, well, that’s not good for quite a few people.
- The permitted dominance of duopolies or oligopolies in most US business sectors means that some small and mid-sized businesses will have to find ways to generate revenue. My experience in rural Kentucky is that local accounting, legal, and technology companies are experimenting with smart software to boost productivity (the MBA word for cheaper work functions). Local employment options are dwindling because the smaller employers cannot stay in business. Potential employees want more pay than the company can afford. Result? Downward spiral which appears to be accelerating.
Am I confident in statistics related to wages, employment, and the growth of new businesses and industrial sectors? No, I am not. Statistical projections work pretty well in nuclear fuel management. Nested mathematical procedures in smart software work pretty well for some applications. Using smart software to reduce operating costs works pretty well right now.
Net net: Without meaningful work, some of life’s challenges will spark unanticipated outcomes. Exactly what type of stress breaks a social construct? Those in the job hunt will provide numerous test cases, and someone will do an analysis. Will it be correct? Sure, close enough for horseshoes.
Stop complaining. Just laugh, as Mr. Hope advised. No heartburn and cost savings to boot.
Stephen E Arnold, February 25, 2025
Content Injection Can Have Unanticipated Consequences
February 24, 2025
The work of a real, live dinobaby. Sorry, no smart software involved. Whuff, whuff. That’s the sound of my swishing dino tail. Whuff.
Years ago I gave a lecture to a group of Swedish government specialists affiliated with the Forestry Unit. My topic was the procedure for causing certain common algorithms used for text processing to increase the noise in their procedures. The idea was to input certain types of text and numeric data in a specific way. (No, I will not disclose the methods in this free blog post, but if you have a certain profile, perhaps something can be arranged by writing benkent2020 at yahoo dot com. If not, well, that’s life.)
We focused on a handful of methods widely used in what now is called “artificial intelligence.” Keep in mind that most of the procedures are not new. There are some flips and fancy dancing introduced by individual teams, but the math is not invented by TikTok teens.
In my lecture, the forestry professionals wondered if these methods could be used to achieve specific objectives or “ends”. The answer was and remains, “Yes.” The idea is simple. Once methods are put in place, the algorithms chug along, some are brute force and others are probabilistic. Either way, content and data injections can be shaped, just like the gizmos required to make kinetic events occur.
The point of this forestry excursion is to make clear that a group of people, operating in a loosely coordinated manner, can create data or content. Those data or content can be weaponized. When ingested by or injected into a content processing flow, the outputs of the larger system can be fiddled: more emphasis here, a little less accuracy there, and an erosion of whatever “accuracy” calculations are used to keep the system within the engineers’ and designers’ parameters. A plebian way to describe the goal: disinformation or accuracy erosion.
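To make the mechanism concrete without disclosing anything from that lecture, here is a toy sketch of the publicly known, generic idea: a small number of coordinated, crafted documents shifts a naive co-occurrence statistic of the kind simple text-processing pipelines compute. The scoring function and corpus are invented for illustration only:

```python
def sentiment_score(term, docs):
    """Crude co-occurrence score: does `term` appear alongside 'good' or 'bad'?"""
    good = sum(1 for d in docs if term in d and "good" in d)
    bad = sum(1 for d in docs if term in d and "bad" in d)
    return (good - bad) / max(good + bad, 1)

corpus = [
    "acme product is good",
    "acme support is good",
    "acme shipping is bad",
]
print(sentiment_score("acme", corpus))   # 0.33: mildly positive

# A loosely coordinated group injects a handful of crafted documents...
corpus += ["acme is bad"] * 5
print(sentiment_score("acme", corpus))   # -0.50: the signal flips
```

Real pipelines are more elaborate, but the principle scales: enough shaped input moves the output, and the “accuracy” gauges drift right along with it.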
I read “Meet the Journalists Training AI Models for Meta and OpenAI.” The write up explains that journalists without jobs or in search of extra income are creating “content” for smart software companies. The idea is that if one just does the Silicon Valley thing and sucks down any and all content, lawyers might come calling. Therefore, paying for “real” information is a better path.
Please read the original article to get a sense of who is doing the writing and what baggage or mindset these people might bring to their work.
If the content is distorted, either intentionally or unintentionally, these content objects may have some interesting downstream effects on the larger smart software system. I just wanted to point out that weaponized information can have an impact. Those running smart software and buying content on the assumption that it is just fine might find some interesting consequences in the outputs.
Stephen E Arnold, February 24, 2025

