Airships and AI: A Similar Technology Challenge

August 14, 2025

This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.

Vaclav Smil writes books about the environment and technology. In his 2023 work Invention and Innovation: A Brief History of Hype and Failure, he describes the ups and downs of some interesting technologies. I thought of this book when I read “A Best Case Scenario for AI?” The author is a wealthy person who has some involvement in the relaxing world of cryptocurrency. The item appeared on X.com.

I noted a passage in the long X.com post; to wit:

… the latest releases of AI models show that model capabilities are more decentralized than many predicted. While there is no guarantee that this continues — there is always the potential for the market to accrete to a small number of players once the investment super-cycle ends — the current state of vigorous competition is healthy. It propels innovation forward, helps America win the AI race, and avoids centralized control. This is good news — that the Doomers did not expect.

Reasonable. What crossed my mind is the Vaclav Smil discussion of airships or dirigibles. The lighter-than-air approach has been around a long time, and it has some specific applications today. Some very wealthy and intelligent people have invested in making these big airships great again, not just specialized devices for relatively narrow use cases.

So what? The airship history spans the 18th, 19th, 20th, and 21st centuries. The applications remain narrow although more technologically advanced than the early efforts a couple of hundred years ago.

What if smart software is a dirigible type of innovation? The use cases may remain narrow. Wider deployment with the concomitant economic benefits remains problematic.

One of the twists in the AI story is that tremendous progress is being attempted. The innovations, as they are rolled out, are incremental improvements. Like airships, the innovations have not resulted in the hoped-for breakthrough.

There are numerous predictions about the downsides of smart software. But what if AI is little more than a modern version of the dirigible? We have a remarkable range of technologies, but each next step is underwhelming. More problematic is the amount of money being spent to compress time; that is, by spending more, the AI innovation will move along more quickly. Perhaps that is not the case. Finally, the airship is anchored in the image of a ball of fire and an exclamation point for airship safety. Will there be a comparable moment for AI?

Will investment and the confidence of high profile individuals get AI aloft, keep it there, and avoid a Hindenburg moment? Much has been invested to drive AI forward and make it “the next big thing.” The goal is to generate money, substantial sums.

The X.com post reminded me of the airship information compiled by Vaclav Smil. I can’t shake the image. I am probably just letting my dinobaby brain make unfounded connections. But, what if….? We could ask Google and its self-shaming smart software. Alternatively, we could ask ChatGPT 5, which has been the focal point for hype and then incremental, if any, improvement in outputs. We could ask Apple, Amazon, or Telegram. But what if…?

I think an apt figure of speech might be “pushing a string.”

Stephen E Arnold, August 14, 2025

Cannot Read? Students Cannot Imagine Either

August 8, 2025

Students are losing the ability to imagine and to reflect on their own lives, says the HuffPost in the article: “I Asked My Students To Write An Essay About Their Lives. The Reason 1 Student Began To Panic Left Me Stunned.” While Millennials were the first generation to be completely engrossed in the Internet, Generation Z is the first generation to have never lived without screens. Because of the Internet’s constant presence, kids have unfortunately developed bad habits: they zone out and don’t think.

Zen masters work for years to shut off their brains, but Gen Z can do it automatically with a screen. This is a horrible thing for critical thinking skills and imagination, because these kids don’t know how to think without the assistance of AI. The article’s writer, Liz Rose Shulman, teaches high school and college students. She assigns essays, and without hesitation her students rely on AI to complete the assignments.

The students either use Grammarly to help them write everything or they rely on ChatGPT to generate an essay. The overreliance on AI tools means they don’t know how to use their brains. They’re unfamiliar with the standard writing process, problem solving, and being creative. The kids don’t believe there’s a problem using AI. Many teachers also believe the same thing and are adopting it into their curriculums.

The students are flummoxed when they’re asked to write about themselves:

I assigned a writing prompt a few weeks ago that asked my students to reflect on a time when someone believed in them or when they believed in someone else.

One of my students began to panic.

‘I have to ask Google the prompt to get some ideas if I can’t just use AI,’ she pleaded and then began typing into the search box on her screen, ‘A time when someone believed in you.’ ‘It’s about you,’ I told her. ‘You’ve got your life experiences inside of your own mind.’ It hadn’t occurred to her — even with my gentle reminder — to look within her own imagination to generate ideas. One of the reasons why I assigned the prompt is because learning to think for herself now, in high school, will help her build confidence and think through more complicated problems as she gets older — even when she’s no longer in a classroom situation.”

What’s even worse is that kids are addicted to their screens and lack basic communication skills. Every generation goes through friction with older generations. Society will adapt and survive, but let’s start teaching how to think and imagine again! Maybe bringing back recess and enforcing time without screens would help, even for older people.

Whitney Grace, August 8, 2025

Yahoo: An Important Historical Milestone

August 5, 2025

Sorry, no smart software involved. A dinobaby’s own emergent thoughts.

I read “What Went Wrong for Yahoo.” At one time, my team and I followed Yahoo. We created The Point (Top 5% of the Internet) in the early 1990s. Some perceived The Point as a Yahoo variant. I suppose it was, but we sold the property after a few years. The new owners, something called CMGI, folded The Point into Lycos, and — poof — The Point was gone.

But Yahoo chugged along. The company became the poster child for the Web 1 era. Web search was not comprehensive, and most of the “search engines” struggled to deal with several thorny issues:

  1. New sites were flooding the Web 1 Internet. Indexing was a bottleneck. In the good old days, one did not spin up a virtual machine on a low-cost vendor in Romania. Machines and gizmos were expensive, and often there was a delay of six months or more for a Sun Microsystems Sparc. Did I mention expensive? Everyone in search was chasing low-cost computing and network access.
  2. The search-and-retrieval tools were in “to be” mode. For those familiar with IBM Almaden, a research group there was working on a system called Clever. There were interesting techniques in many companies. Some popped up and faded. I am not sure of the dates, but there was Lycos, which I mentioned, Excite, and one developed by the person who created Framemaker, among others. (I am insufficiently motivated to chase down my historical files, and I sure don’t want to fool around trying to get historical information from Bing, Google, Yandex, and, heaven help me, Qwant.) The ideas were around, but it took time for the digital DNA to create a system that mostly worked. I wish I could remember the system that emerged from Cambridge University, but I cannot.
  3. Old-fashioned search methods like those used by NASA Recon, SDC Orbit, Dialog, and STAIRS were developed to work on bounded content: precisely structured, indexed or “tagged” in today’s jargon, and coded for mainframes. Figuring out how to use smaller machines was not possible. In my lectures from that era, I pointed out that once something is coded, sort of works, and seems to be making money, change is not conceivable. Therefore, the systems with something that worked sailed along like aircraft carriers until they rusted and sank.

What’s this got to do with Yahoo?

Yahoo was a directory. Directories are good because the content is bounded. Yahoo did not exercise significant editorial control. The Point, on the other hand, was curated like the commercial databases with which I was associated: ABI/INFORM, Business Dateline (the first online information service which corrected erroneous information after a content object went live), Pharmaceutical News Index, and some others we sold to Cambridge Scientific Abstracts.

Indexing the Web is not a bounded problem. Yahoo tried to come up with a way to index what was a large amount of digital content. Adding to Yahoo’s woes was the need to index changed content, or the “deltas” as we called them in our attempt at The Point to sound techno-literate.

Because of the cost and revenue problems, decisions at Yahoo — according to the people whom we knew and with whom we spoke — went like this:

  1. Assemble a group with different expertise
  2. State the question, “What can we do now to make money?”
  3. Gather ideas
  4. Hold a meeting to select one or two
  5. Act on the “best ideas”

The flaw in this method is that a couple of smart fellows in a Stanford dorm were fooling around with Backrub. It incorporated ideas from their lectures, what they picked up about new ideas from students, and what they read (no ChatGPT then, sorry).

I am not going to explain what Backrub at first did not do (work reliably despite the weird assemblage of computers and gear the students used) and will focus instead on the three ideas that did work for what became Google, a pun on the name of a very big number:

  1. Hook mongrel computers to indexing when those computers were available and use anything that remotely seemed to solve a problem. Is that an old router? Cool, let’s use that. This was a very big idea because fooling around with computer systems could kill those puppies with an errant instruction.
  2. Find inspiration in the IBM Clever system; that is, determine relevance by looking at the links pointing to a source document. This was a variation on Eugene Garfield’s approach to citation analysis. (A minimal sketch of this idea follows the list.)
  3. Index new Web pages when they appeared. If the crawler / indexer crashed, skip the page and go to the next url. The dorm boys looked at the sites that killed the crawler and figured out how to accommodate those changes; thus, the crawler / indexer became “smart.” This was a very good idea because commercial content indexing systems forced content to be structured a certain way. However, in the Web 1 days, rules were either nonexistent, ignored, or created problems that creators of Web pages wrote around.

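Idea 2, the link-based relevance notion, can be illustrated with a minimal sketch. To be clear, this is not Backrub’s or Google’s actual code; it is a toy, iterative link-scoring loop in the spirit of a simplified PageRank calculation, with a made-up link graph, included only to show why citation counting translated so naturally to the Web.

# Toy link-based relevance scoring in the spirit of citation analysis.
# The graph, damping factor, and iteration count are illustrative only.
links = {
    "page_a": ["page_b", "page_c"],
    "page_b": ["page_c"],
    "page_c": ["page_a"],
    "page_d": ["page_c"],
}

def link_scores(graph, damping=0.85, iterations=20):
    pages = list(graph)
    score = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        new_score = {page: (1.0 - damping) / len(pages) for page in pages}
        for page, outlinks in graph.items():
            if not outlinks:
                continue
            share = damping * score[page] / len(outlinks)
            for target in outlinks:
                if target in new_score:
                    new_score[target] += share
        score = new_score
    return score

for page, value in sorted(link_scores(links).items(), key=lambda item: -item[1]):
    print(page, round(value, 3))

Pages that collect links from several others (page_c in this toy graph) float to the top, which is the same logic Eugene Garfield applied to journal citations.
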
Yahoo did none of these things.

Now let me point out Yahoo’s biggest mistake, and, believe me, the company is an ideal source of insight about what not to do.

Yahoo acquired GoTo.com. The company and software emerged from IdeaLab, I think. What GoTo.com created was an online version of a pay-to-play method. The idea was a great one and obvious to those better suited to be the love child of Cardinal Richelieu and Cosimo de’ Medici. To keep the timeline straight, Sergey Brin and Larry Page did the deed and birthed Google, then used the GoTo.com (Overture) model to create Google’s ad foundation. Why did Google need money? The person who wrote a check to the Backrub boys said, “You need to earn money.” The Backrub boys looked around and spotted the GoTo method, now owned by Yahoo. The Backrub boys emulated it.

Yahoo, poor old confused Yahoo, took legal action against the Backrub boys, settled for $1 billion, and became increasingly irrelevant. Therefore, Yahoo’s biggest opportunity was to buy the Backrub boys and their Google search system, but they did not. Then Yahoo allowed their GoTo to inspire Google advertising.

From my point of view, Cardinal Richelieu and Cosimo were quite proud that the two computer science students, some of the dorm crowd, and bits and pieces glued together to create Google search emerged as a very big winner.

Yahoo’s problem is that committee think in a fast-changing, high-technology context is likely to be laughably wrong. Now Google is Yahoo-like. The cited article nails it:

Buying everything in sight clearly isn’t the best business strategy. But if indiscriminately buying everything in sight would have meant acquiring Google and Facebook, Yahoo might have been better off doing that rather than what it did.

Can Google think like the Backrub boys? I don’t think so. The company is spinning money, but the cash that burnishes Google leadership’s image comes from the Yahoo, GoTo.com, and Overture model. Yahoo had so many properties, the Yahooligans had zero idea how to identify a property with value and drive that idea forward. How many “things” does Google operate now? How many things does Facebook operate now? How many things does Telegram operate now? I think that “too many” may hold a clue to the future of these companies. And Yahooooo? An echo, not the yodel.

Stephen E Arnold, August 5, 2025

Academics Lead and Student Follow: Is AI Far Behind?

July 16, 2025

Just a dinobaby without smart software. I am sufficiently dull without help from smart software.

I read “Positive Review Only: Researchers Hide AI Prompts in Papers.” [Note: You may have to pay to read this write up.] Who knew that those writing objective, academic-type papers would cheat? I know that one ethics professor is probably okay with the idea. Plus, that Stanford University president is another one who would say, “Sounds good to me.”

The write up says:

Nikkei looked at English-language preprints — manuscripts that have yet to undergo formal peer review — on the academic research platform arXiv. It discovered such prompts in 17 articles, whose lead authors are affiliated with 14 institutions including Japan’s Waseda University, South Korea’s KAIST, China’s Peking University and the National University of Singapore, as well as the University of Washington and Columbia University in the U.S. Most of the papers involve the field of computer science.

Now I would like to suggest that commercial database documents are curated and presumably less likely to contain made-up information. I cannot. Peer reviewed papers also contain some slick moves; for example, a loose network of academic friends can cite one another’s papers to boost them in search results. Others, like the Harvard ethics professor, just write stuff and let it sail through the review process, fabrications and whatever other confections were added to the alternative-fact salads.

Which US schools featured in this study? The University of Washington and Columbia University. I want to point out that the University of Washington has contributed to the Google brain trust; for example, Dr. Jeff Dean.

Several observations:

  1. Why should students pay attention to the “rules” of academic conduct when university professors ignore them?
  2. Have universities given up trying to enforce guidelines for appropriate academic behavior? On the other hand, perhaps these arXiv behaviors are now the norm when grants may be in the balance?
  3. Will wider use of smart software change the academics’ approach to scholarly work?

Perhaps one of these estimable institutions will respond to these questions?

Stephen E Arnold, July 16, 2025

An AI Wrapper May Resolve Some Problems with Smart Software

July 15, 2025

No smart software involved with this blog post. (An anomaly I know.)

For those with big bucks sunk in smart software chasing their tail around large language models, I learned about a clever adjustment — an adjustment that could pour some water on those burning black holes of cash.

A 36 page “paper” appeared on ArXiv on July 4, 2025 (Happy Birthday, America!). The original paper was “revised” and posted on July 8, 2025. You can read the July 8, 2025, version of “MemOS: A Memory OS for AI System” and monitor ArXiv for subsequent updates.

I recommend that AI enthusiasts download the paper and read it. Today content has a tendency to disappear or end up behind paywalls of one kind or another.

The authors of the paper come from outfits in China working on a wide range of smart software. These institutions explore smart waste water as well as autonomous kinetic command-and-control systems. Two organizations fund the “authors” of the research and the arXiv write up. One is a start up called MemTensor (Shanghai) Technology Co. Ltd. The idea is to take good old Google tensor learnings and make them less stupid. The other outfit is the Research Institute of China Telecom. This entity is where interesting things like quantum communication and novel applications of ultra high frequencies are explored.

The MemOS idea, based on my reading of the paper, is that MemOS adds a “layer” of knowledge functionality to large language models. The approach remembers the users’ or another system’s “knowledge process.” The idea is that instead of every prompt being a brand new sheet of paper, the LLM has a functional history or “digital notebook.” The entries in this notebook can be used to provide dynamic context for a user’s or another system’s query, prompt, or request. One application is “smart wireless” applications; another, context-aware kinetic devices.
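
Because I cannot vouch for the paper’s own code, here is a minimal sketch of the general “memory layer” pattern as I understand it. Every name in it (MemoryLayer, remember, build_prompt, call_llm) is my own invention, not the MemOS API; the point is only to show past exchanges being stored, retrieved by crude word overlap, and prepended to the next prompt.

# Hypothetical memory layer wrapped around an LLM call.
# All names are illustrative; nothing here comes from the MemOS paper itself.
class MemoryLayer:
    def __init__(self):
        self.notebook = []  # (prompt, response) pairs: the "digital notebook"

    def remember(self, prompt, response):
        self.notebook.append((prompt, response))

    def relevant_entries(self, new_prompt, top_n=3):
        # Crude relevance: count words shared between old prompts and the new one.
        words = set(new_prompt.lower().split())
        scored = [
            (len(words & set(old.lower().split())), old, reply)
            for old, reply in self.notebook
        ]
        scored.sort(key=lambda item: item[0], reverse=True)
        return [(old, reply) for overlap, old, reply in scored[:top_n] if overlap > 0]

    def build_prompt(self, new_prompt):
        # Prepend remembered context so the model does not start from a blank sheet.
        context = "\n".join(
            f"Earlier: {old} -> {reply}"
            for old, reply in self.relevant_entries(new_prompt)
        )
        return f"{context}\n\nCurrent request: {new_prompt}" if context else new_prompt

def call_llm(prompt):
    # Stand-in for whatever large language model sits underneath.
    return f"[model output for: {prompt[:60]}...]"

memory = MemoryLayer()
memory.remember("Summarize the Q2 sensor report", "Q2 sensors showed a 12 percent drift.")
print(call_llm(memory.build_prompt("What changed in the sensor data since Q2?")))

Whether the benchmark gains reported in the paper hold up is a separate question; the wrapper pattern itself is not exotic.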

I am not sure about some of the assertions in the write up; for example, performance gains, the benchmark results, and similar data points.

However, I think that the idea of a higher level of abstraction combined with enhanced memory of what the user or the system requests is interesting. The approach is similar to having an “old” AS/400, or whatever IBM calls these machines now, and interacting with it via a separate computing system. Request an output from the AS/400. Get the data from an I/O device the AS/400 supports. Interact with those data in the separate but “loosely coupled” computer. Then reverse the process and let the AS/400 do its thing with the input data on its own quite tricky workflow. Inefficient? You bet. Does it prevent the AS/400 from trashing its memory? Most of the time, it sure does.

The authors include a pastel graphic to make clear that the separation from the LLM is what I assume will be positioned as an original, unique, never-before-considered innovation:

[Image: diagram from the MemOS paper illustrating the memory layer’s separation from the LLM]

Now does it work? In a laboratory, absolutely. At the Syracuse Parallel Processing Center, my colleagues presented a demonstration to Hillary Clinton. The search, text, video thing behaved like a trained tiger before that tiger attacked Roy in the Siegfried & Roy animal act in October 2003.

Are the data reproducible? Good question. It is, however, a time when fake data and synthetic government officials are posting videos and making telephone calls. Time will reveal the efficacy of the “breakthrough.”

Several observations:

  1. The purpose of the write up is a component of the China smart, US dumb marketing campaign
  2. The number of institutions involved, the presence of a Chinese start up, and the very big time Research Institute of China Telecom send the message that this AI expertise is diffused across numerous institutions
  3. The timing of the release of the paper is delicious: Happy Birthday, Uncle Sam.

Net net: Perhaps Meta should be hiring AI wizards from the Middle Kingdom?

Stephen E Arnold, July 15, 2025

Microsoft Innovation: Emulating the Bold Interface Move by Apple?

July 2, 2025

This dinobaby wrote this tiny essay without any help from smart software. Not even hallucinating gradient descents can match these bold innovations.

Bold. Decisive. Innovative. Forward leaning. Have I covered the adjectives used to communicate “real” innovation? I needed these and more to capture my reaction to the information in “Forget the Blue Screen of Death – Windows Is Replacing It with an Even More Terrifying Black Screen of Death.”

Yep, terrifying. I don’t feel terrified when my monitors display a warning. I guess some people do.

The write up reports:

Microsoft is replacing the Windows 11 Blue Screen of Death (BSoD) with a Black Screen of Death, after decades of the latter’s presence on multiple Windows iterations. It apparently wants to provide more clarity and concise information to help troubleshoot user errors easily.

The important aspect of this bold decision to change the color of an alert screen may be Apple color envy.

Apple itself said, “Apple Introduces a Delightful and Elegant New Software Design.” The innovation was… changing colors and channeling Windows Vista.

Let’s recap. Microsoft makes an alert screen black. Apple changes its colors.

Peak innovation. I guess that is what happens when artificial intelligence does not deliver.

Stephen E Arnold, July 2, 2025

The Secret to Business Success

June 18, 2025

Just a dinobaby and a tiny bit of AI goodness: How horrible is this approach?

I don’t know anything about psychological conditions. I read “Why Peter Thiel Thinks Asperger’s Is A Key to Succeeding in Business.” I did what any semi-hip dinobaby would do. I logged into You.com and asked what the heck Asperger’s was. Here’s what I learned:

  • The term "Asperger’s Syndrome" was introduced in the 1980s by Dr. Lorna Wing, based on earlier work by Hans Asperger. However, the term has become controversial due to revelations about Hans Asperger’s involvement with the Nazi regime.
  • Diagnostic Shift: Asperger’s Syndrome was officially included in the DSM-IV (1994) and ICD-10 (1992) but was retired in the DSM-5 (2013) and ICD-11 (2019). It is now part of the autism spectrum, with severity levels used to indicate the level of support required.


Image appeared with the definition of Asperger’s “issue.” A bit of a You.com bonus for the dinobaby.

These factoids are new to me.

The You.com smart report told me:

Key Characteristics of Asperger’s Syndrome (Now ASD-Level 1)

  1. Social Interaction Challenges:
    • Difficulty understanding social cues, body language, and emotions.
    • Limited facial expressions and awkward social interactions.
    • Conversations may revolve around specific topics of interest, often one-sided.
  2. Restricted and Repetitive Behaviors:
    • Intense focus on narrow interests (e.g., train schedules, specific hobbies).
    • Adherence to routines and resistance to change.
  3. Communication Style:
    • No significant delays in language development, but speech may be formal, monotone, or unusual in tone.
    • Difficulty using language in social contexts, such as understanding humor or sarcasm.
  4. Motor Skills and Sensory Sensitivities:
    • Clumsiness or poor coordination.
    • Sensitivity to sensory stimuli like lights, sounds, or textures.

Now what does the write up say? Mr. Thiel (Palantir Technology and other interests) believes:

Most of them [people with Asperger’s] have little sense of unspoken social norms or how to conform to them. Instead they develop a more self-directed worldview. Their beliefs on what is or is not possible come more from themselves, and less from what others tell them they can do or cannot do. This causes a lot anxiety and emotional hardship, but it also gives them more freedom to be different and experiment with new ideas.

The idea is that the alleged disorder allows certain individuals with Asperger’s to change the world.

The write up says:

The truth is that if you want to start something truly new, you almost by definition have to be unconventional and do something that everyone else thinks is crazy. This is inevitably going to mean you face criticism, even for trying it. In Thiel’s view, because those with Aspergers don’t register that criticism as much, they feel freer to make these attempts.

Is it possible for universities with excellent reputations and prestigious MBA programs to create people with the “virtues” of Asperger’s? Do business schools aspire to impart this type of “secret sauce” to their students?

I suppose one could ask a person with the blessing of Asperger’s, but as the You.com report told me, some of these lucky individuals may [a] use speech that is formal, monotone, or unusual in tone and [b] have difficulty using language in social contexts, such as understanding humor or sarcasm.

But if one can change the world, carry on in the spirit of Hans Asperger, and make a great deal of money, it is good to have this unique “skill.”

Stephen E Arnold, June 18, 2025

Up for a Downer: The Limits to Growth… Baaaackkkk with a Vengeance

June 13, 2025

Just a dinobaby and no AI: How horrible an approach?

Where were you in 1972? Oh, not born yet. Oh, hanging out in the frat house or shopping with sorority pals? Maybe you were working at a big time consulting firm?

An outfit known as Potomac Associates slapped its name on a thought piece with some repetitive charts. The original work evolved from an outfit contributing big ideas. The Club of Rome lassoed William W. Behrens, Dennis and Donella Meadows, and Jørgen Randers to pound data into the then-state-of-the-art World3 model allegedly developed by Jay Forrester at MIT. (Were there graduate students involved? Of course not.)

The result of the effort was evidence that growth becomes unsustainable and everything falls down. Business, government systems, universities, etc., etc. Personally, I am not sure why the idea that infinite growth with finite resources cannot last forever was a big deal. The idea seems obvious to me. I was able to get my little hands on a copy of the document courtesy of Dominique Doré, the super great documentalist at the company which employed my jejune and naive self. Who was I to think, “This book’s conclusion is obvious, right?” Was I wrong. The concept of hockey sticks with handles extending to the ends of the universe was a shocker to some.

The book’s big conclusion is the focus of “Limits to Growth Was Right about Collapse.” Why? I think the realization is a novel one to those who watched their shares in Amazon, Google, and Meta zoom to the sky. Growth is unlimited, some believed. The write up in “The Next Wave,” an online newsletter or information service, happily quotes an update to the original Club of Rome document:

This improved parameter set results in a World3 simulation that shows the same overshoot and collapse mode in the coming decade as the original business as usual scenario of the LtG standard run.

Bummer. In the kiddie story, Chicken Little had an acorn plop on its head. Chicken Little promptly proclaimed, in a peer-reviewed academic paper with non-reproducible research and a YouTube video:

The sky is falling.

But keep in mind that the kiddie story is fiction. Humans are adept at survival. Maslow’s hierarchy of needs captures the spirit of the species. Will life as modern Chicken Littles perceive it end?

I don’t think so. Without getting too philosophical, I would point to Johann Gottlieb Fichte’s thesis, antithesis, synthesis as a reasonably good way to think about change (gradual and catastrophic). I am not into philosophy, so when life gives you lemons, make lemonade. Then sell the business to a local food service company.

Collapse and its pal chaos create opportunities. The sky remains.

The cited write up says:

Economists get over-excited when anyone mentions ‘degrowth’, and fellow-travelers such as the Tony Blair Institute treat climate policy as if it is some kind of typical 1990s political discussion. The point is that we’re going to get degrowth whether we think it’s a good idea or not. The data here is, in effect, about the tipping point at the end of a 200-to-250-year exponential curve, at least in the richer parts of the world. The only question is whether we manage degrowth or just let it happen to us. This isn’t a neutral question. I know which one of these is worse.

See, degrowth creates opportunities. Chicken Little was wrong when the acorn beaned her. The collapse will be just another chance to monetize. Today is Friday the 13th. Watch out for acorns and recycled “insights.”

Stephen E Arnold, June 13, 2025

Will Amazon Become the Bell Labs of Consumer Products?

June 12, 2025

Just a dinobaby and no AI: How horrible an approach?

I did some work at Bell Labs and then at the Judge Greene crafted Bellcore (Bell Communications Research). My recollection is that the place was quiet, uneventful, and had a lousy cafeteria. The Cherry Hill Mall provided slightly better food, just slightly. Most of the people were normal compared to the nuclear engineers at Halliburton and my crazed colleagues at the blue chip consulting firm dumb enough to hire me before I became a dinobaby. (Did you know that security at the Cherry Hill Mall had a golf cart to help Bell Labs’ employees find their vehicles? The reason? Bell Labs hired staff prone to this recurring problem. Yes, Howard, Alan, and I lost our car when we went to lunch. I finally started parking in the same place and wrote the door exit and lamp number down in my calendar. Problem solved!)

Is Amazon like that? On a visit to Amazon, I formed an impression somewhat different from Bell Labs, Halliburton, and the consulting firm. The staff were not exactly problematic. I just recall having to repeat and explain things. Amazon struck me as an online retailer with money and challenges in handling traffic. The people with whom I interacted when I visited with several US government professionals were nice and different from the technical professionals at the organizations which paid me cash money.

Is this important? Yes. I don’t think of Amazon as particularly innovative. When it wanted to do open source search, it hired some people from Lucid Imagination, now Lucid Works. Amazon just did what other Lucene/Solr large-scale users did: Index content and allow people to run queries. Not too innovative in my book. Amazon also industrialized back office and warehouse projects. These are jobs that require finding existing products and consultants, asking them to propose “solutions,” picking one, and getting the workflow working. Again, not particularly difficult when compared to the holographic memory craziness at Bell Labs or the consulting firm’s business of inventing consumer products for companies in the Fortune 500 that would sell and get the consulting firm’s staggering fees paid in cash promptly. In terms of the nuclear engineering work, Amazon was, and probably still is, not in the game. Some of the rocket people are, but the majority of the Amazon workers are in retail, digital plumbing, and creating dark pattern interfaces. This is “honorable” work, but it is not invention in the sense of slick Monte Carlo code cranked out by Halliburton’s Dr. Julian Steyn or multi-frequency laser technology for jamming more data through a fiber optic connection.

I read “Amazon Taps Xbox Co-Founder to Lead New Team Developing Breakthrough Consumer Products.” I asked myself, “Is Amazon now in the Bell Labs concept space?” The write up tries to answer my question, stating:

The ZeroOne team is spread across Seattle, San Francisco and Sunnyvale, California, and is focused on both hardware and software projects, according to job postings from the past month. The name is a nod to its mission of developing emerging product ideas from conception to launch, or “zero to one.” Amazon has a checkered history in hardware, with hits including the Kindle e-reader, Echo smart speaker and Fire streaming sticks, as well as flops like the Fire Phone, Halo fitness tracker and Glow kids teleconferencing device. Many of the products emerged from Lab126, Amazon’s hardware research and development unit, which is based in Silicon Valley.

Okay, the Fire Phone (maybe Foney) and the Glow thing for kids? Innovative? I suppose. But to achieve success in raw innovation like the firms at which I was an employee? No, Amazon is not in that concept space. Amazon is more comfortable cutting a deal with Elastic instead of “inventing” something like Google’s Transformer or Claude Shannon’s approach to extracting a signal from noise. Amazon sells books and provides an almost clueless interface for managing them on the Kindle eReader.

The write up says (and I believe everything I read on the Internet):

Amazon has pulled in staffers from other business units that have experience developing innovative technologies, including its Alexa voice assistant, Luna cloud gaming service and Halo sleep tracker, according to LinkedIn profiles of ZeroOne employees. The head of a projection mapping startup called Lightform that Amazon acquired is helping lead the group. While Amazon is expanding this particular corner of its devices group, the company is scaling back other areas of the sprawling devices and services division.

Innovation is a risky business. Amazon sells stuff and provides online access with uptime of 98 or 99 percent. It does not “do” innovation. I wrote a book chapter about Amazon’s blockchain patents. What happened to that technology, some of which struck me as promising and sort of novel given the standards for US patents? The answer, based on the information I have seen since I wrote the book chapter, is, “Not much.” In less time, Telegram dumped out dozens of “inventions.” These have ranged from sticking crypto wallets into every Messenger user’s mini app to refining the bot technology to display third-party, off-Telegram Web sites on the fly for about 900 million Messenger users.

Amazon hit a dead end with Alexa and something called Halo.

When an alleged criminal organization operating as an “Airbnb” outfit with no fixed offices and minimal staff can innovate and Amazon with its warehouses cannot, there’s a useful point of differentiation in my mind.

The write up reports:

Earlier this month, Amazon laid off about 100 of the group’s employees. The job cuts included staffers working on Alexa and Amazon Kids, which develops services for children, as well as Lab126, according to public filings and people familiar with the matter who asked not to be named due to confidentiality. More than 50 employees were laid off at Amazon’s Lab126 facilities in Sunnyvale, according to Worker Adjustment and Retraining Notification (WARN) filings in California.

Okay. Fire up a new unit. Will the approach work? I hope, for stakeholders’ and employees’ sake, that Amazon hits a home run. But in the back of my mind, innovation is difficult. Quite special people are needed. The correct organizational setup, or essentially zero setup, is required. Then the odds are usually against innovation, which, if truly novel, evokes resistance. New is threatening.

Can the Bezos bulldozer shift into high gear and do the invention thing? I don’t know but I have some nagging doubts.

Stephen E Arnold, June 12, 2025

Google Makes a Giant, Huge, Quantumly Supreme Change

May 19, 2025

No AI, just the dinobaby expressing his opinions to Zellenials.

I read “Google’s G Logo Just Got Prettier.” Stunning news. The much loved, intensely technical Google has invented blurring colors. The decision was a result of DeepMind’s smart software and a truly motivated and respected group of artistically-inclined engineers.

The old logo has been reinvented to display a gradient. Was the inspiration the hallucinatory gradient descent in Google’s smart software? Was it a result of a Googler losing his glasses and seeing the old logo as a blend of colors? Was it a result of a chance viewing of a Volvo marketing campaign with a series of images like this:


Image is from Volvo, the automobile company. You can view the original at this link. Hey, buy a Volvo.

The write up says:

Google’s new logo keeps the same letterform, as well as the bright red-yellow-green-blue color sequence, but now those colors blur into each other. The new “G” is Google’s biggest update to its visual identity since retiring serfs for its current sans-serif font, Product Sans, in 2015.

Retiring serifs, not serfs. I know it is just an AI zellenial misstep, but Google is terminating wizards so they can find their future elsewhere. That is just so helpful.

What does the “new” and revolutionary logo look like? The image below comes from Fast Company, which is quick on the artistic side of US big technology outfits. Behold:

[Image: the new Google “G” logo with its gradient]

Source: Fast Company via the Google I think.

Fast Company explains the forward-leaning design decision:

A gradient is a safe choice for the new “G.” Tech has long been a fan of using gradients in its logos, apps, and branding, with platforms like Instagram and Apple Music tapping into the effect a decade ago. Still today, gradients remain popular, owing to their middle-ground approach to design. They’re safe but visually interesting; soft but defined. They basically go with anything thanks to their color wheel aesthetic. Other Google-owned products have already embraced gradients. YouTube is now using a new red-to-magenta gradient in its UI, and Gemini, Google’s AI tool, also uses them. Now it’s bringing the design element to its flagship Google app.

Yes, innovative.

And Fast Company wraps up the hard hitting design analysis with some Inconel wordsmithing:

it’s not a small change for a behemoth of a company. We’ll never knows how many meetings, iterations, and deliberations went into making that little blur effect, but we can safely guess it was many.

Yep, guess.

Stephen E Arnold, May 19, 2025
