Another Horse Ridge or Just Horse Feathers from the Management Icon Intel?
December 20, 2024
This write up emerged from the dinobaby’s own mind. No AI was used because this dinobaby is too stupid to make it work.
If you are an Intel trivia buff, you will know the answer to this question: “What was the name of the 2019 cryogenic control chip Intel rolled out for quantum computers?” The answer? Horse Ridge. And there was a Horse Ridge II a few months later. I am not sure what happened to Horse Ridge. Maybe it was, as I suggested, horse feathers?
A rider on the Horse Ridge Trail. Notice that where the horse goes, the sagebrush and prairie dogs burn. Thanks, Magic Studio. Good enough.
Intel is back with another big time announcement. I assume this is PR’s way of neutralizing the governance wackiness in evidence at the company. Is there a president? Is there a Horse Ridge?
I read “Intel Looks Beyond Silicon, Outlines Breakthroughs in Atomically-Thin 2D Transistors, Chip Packaging, and Interconnects at IEDM 2024.” The write up reports, via information directly from the really well managed outfit, the following:
…the Intel Foundry Technology Research team announced technology breakthroughs in 2D transistor technology using beyond-silicon materials, chip interconnects, and packaging technology, among others.
This news will definitely push this type of story out of the news cycle. This one is from CNN via MSN.com:
Ousted Intel CEO Pat Gelsinger Is Leaving the Company with Millions
I thought that Intel was going to create great CPUs and super duper graphics cards. (Have you ever tested an Intel graphics card? Have you ever tried to find current drivers? Have you dumped it because an Nvidia 3060 is faster and more stable for basic office tasks? I have.)
Intel breakthroughs via the cited article and Intel Foundry.
The write up says:
Intel hasn’t shared the deep dive details of its Subtractive Ruthenium process, but we’re sure to learn more details during the presentation. Intel says its Subtractive Ruthenium process with airgaps provides up to 25% capacitance reduction at matched resistance at sub-25nm pitches (the center-to-center distance between interconnect lines). Intel says its research team “was first to demonstrate, in R&D test vehicles, a practical, cost-efficient and high-volume manufacturing compatible subtractive Ru integrated process with airgaps that does not require expensive lithographic airgap exclusion zones around vias, or self-aligned via flows that require selective etches.”
Is there a fungible product? Nope. But technical papers are coming real soon.
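For the curious, interconnect capacitance matters because it feeds straight into RC wire delay. Here is a back-of-envelope sketch (the resistance and capacitance values below are mine and purely illustrative; only the 25% reduction figure comes from the write up):

```python
# Back-of-envelope RC wire delay arithmetic. The R and C values below are
# illustrative placeholders, not Intel's figures; only the 25% reduction
# comes from the write up.
def rc_delay(resistance_ohms, capacitance_farads):
    """First-order (Elmore-style) wire delay: tau = R * C."""
    return resistance_ohms * capacitance_farads

R = 1000.0                        # ohms, illustrative wire resistance
C_before = 2e-15                  # farads (2 fF), illustrative capacitance
C_after = C_before * (1 - 0.25)   # the claimed 25% capacitance reduction

tau_before = rc_delay(R, C_before)
tau_after = rc_delay(R, C_after)
print(f"reduction in RC delay: {1 - tau_after / tau_before:.0%}")  # 25%
```

First order only, of course; real interconnect delay modeling is far more involved.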
Stephen E Arnold, December 20, 2024
IBM Courts Insurance Companies: Interesting Move from the Watson Folks
December 20, 2024
This blog post flowed from the sluggish and infertile mind of a real live dinobaby. If there is art, smart software of some type was probably involved.
Smart software plus insurance appears to be one of the more active plays for 2025. One insurance outfit has found itself in a bit of a management challenge: executive succession, PR, social media vibes, and big time coverage in Drudge.
IBM has charted a course for insurance, according to “Is There a Winning AI Strategy for Insurers? IBM Says Yes.” The write up reports:
Insurers that use generative artificial intelligence have an advantage over their competitors, according to Mark McLaughlin, IBM global insurance director.
So what’s the “leverage”? There are three checkpoints. The first is building customized solutions. I assume this means training and tuning the AI methods to allow the insurance company to hit its goals on a more consistent basis. The “goal” for some insurers is to keep their clients’ cash. Payouts, particularly in uncertain times, can put stress on cash flow and executive bonuses.
A modern insurance company worker. The machine looks very smart but not exactly thrilled. Thanks, MagicStudio. Good enough and you actually produced an image unlike Microsoft Copilot.
Another point to pursue is the idea of doing AI everywhere in the insurance organization. Presumably the approach is a layer of smart software on top of the Microsoft smart software. The idea, I assume, is that multiple layers of AI will deliver a tiramisu type sugar high for the smart organization. I wonder if multiple AIs increase costs, but that fiscal issue is not addressed in the write up.
The final point is that multiple models have to be used. The idea is that each business function may require a different AI model. Does the use of multiple models add to support and optimization costs? The write up is silent on this issue.
The guts of the write up are quite interesting. Here’s one example:
That intense competition — and not direct customer demand — is what McLaughlin believes is driving such strong pressure for insurers to invest in AI.
I think this means that the insurance industry is behaving like sheep. These creatures follow and shove without much thought about where the wolf den of costs and customer rebellion lurks.
The fix is articulated in the write up as having three components, almost like the script for a YouTube “short” how-to video. These “strategies” are:
- Build trust. Here’s an interesting factoid from the write up: “IBM’s study found only 29% of insurance clients are comfortable with virtual AI agents providing service. An even lower 26% trust the reliability and accuracy of advice provided by an AI agent. The trust scores in the insurance industry are down 25% since pre-COVID.”
- Dump IT. Those folks have to deal with technical debt. But who will implement AI? My guess is IBM.
- Use multiple models. This is a theme of the write up. More is better at least for some of those involved in an AI project. Are the customers cheering? Nope, I don’t think so. Here’s what the write up says about multiple models: “IBM’s Watson AI has different platforms such as watsonx.ai, watsonx.data and watsonx.governance to meet different specific needs.” Do you know what each allegedly does? I don’t either.
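A side note on that “down 25%” factoid: the phrase is ambiguous. A quick calculation (my arithmetic, not IBM’s) shows the two readings imply quite different pre-COVID baselines:

```python
# "Down 25% since pre-COVID" supports two readings. Only the 29% figure
# comes from the write up; the reconstruction is my own arithmetic.
current = 29.0  # percent of clients comfortable with virtual AI agents

# Reading 1: a 25% relative decline from the pre-COVID score.
pre_covid_relative = current / (1 - 0.25)

# Reading 2: a decline of 25 percentage points.
pre_covid_points = current + 25.0

print(f"relative reading: pre-COVID comfort was about {pre_covid_relative:.1f}%")
print(f"point reading:    pre-COVID comfort was {pre_covid_points:.0f}%")
```

Roughly 39% versus 54%. Which one IBM means, the write up does not say.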
Net net: Watson is back with close cousins in the gang.
Stephen E Arnold, December 20, 2024
The Hay Day of Search Has a Ground Hog Moment
December 19, 2024
This blog post is the work of an authentic dinobaby. No smart software was used.
I think it was 2002 or 2003 that I started writing the first of three editions of Enterprise Search Report. I am not sure what happened to the publisher who liked big, fat thick printed books. He has probably retired to an island paradise to ponder the crashing blue surf.
But it seems that the salad days of enterprise search are back. Elastic is touting semantics, smart software, and cyber goodness. IBM is making noises about “Watson” in numerous forms just gift wrapped with sparkly AI ice cream jimmies. There is a start up called Swirl. The HuggingFace site includes numerous references to finding and retrieving. And there is Glean.
I keep seeing references to Glean. When I saw a link to the content marketing piece “Glean’s Approach to Smarter Systems: AI, Inferencing and Enterprise Data,” I read it. I learned that the company did not want to be an AI outfit, a statement I am not sure how to interpret; nevertheless, the founder of Glean is quoted as saying:
“We didn’t actually set out to build an AI application. We were first solving the problem of people can’t find anything in their work lives. We built a search product and we were able to use inferencing as a core part of our overall product technology,” he said. “That has allowed us to build a much better search and question-and-answering product … we’re [now] able to answer their questions using all of their enterprise knowledge.”
And what happened to finding information? The company has moved into:
- Workflows
- Intelligent data discovery
- Problem solving
And the result is not finding information:
Glean enables enterprises to improve efficiency while maintaining control over their knowledge ecosystem.
Translation: Enterprise search.
The old language of search is gone, but it seems to me that “search” is now explained with loftier verbiage than that used by Fast Search & Transfer in a lecture delivered in Switzerland before the company imploded.
Is it now time to write the “Enterprise Knowledge Ecosystem Report”? Possibly for someone, but it’s Ground Hog time. I have been there and done that. Everyone wants search to work. New words and the same challenges. The hay is growing thick and fast.
Stephen E Arnold, December 19, 2024
More Data about What Is Obvious to People Interacting with Teens
December 19, 2024
This blog post is the work of an authentic dinobaby. No smart software was used.
Here’s another one of those surveys which provide some data about a very obvious trend. “Nearly Half of US Teens Are Online Constantly, Pew Report Finds” states:
Nearly half of American teenagers say they are online “constantly” despite concerns about the effects of social media and smartphones on their mental health…
No kidding. Who knew?
There were some points in the cited article which seemed interesting if the data are reliable, the sample is reliable, and the analysis is reliable. But, just for yucks, let’s assume the findings are reasonably representative of what the future leaders of America are up to when their noses are pressed against an iPhone or (gasp!) an Android device.
First, YouTube is the “single most popular platform teenagers use.” However, in this Pew study YouTube captured 90 percent of the sample, not the quite stunning 95 percent previously documented by the estimable survey outfit.
Second, the write up says:
There was a slight downward trend in several popular apps teens used. For instance, 63% of teens said they used TikTok, down from 67% and Snapchat slipped to 55% from 59%.
Improvement? Sure.
And, finally, I noted what might be semi-bad news for parents and semi-good news for Meta / Zuck:
X saw the biggest decline among teenage users. Only 17% of teenagers said they use X, down from 23% in 2022, the year Elon Musk bought the platform. Reddit held steady at 14%. About 6% of teenagers said they use Threads, Meta’s answer to X that launched in 2023. Meta’s messaging service WhatsApp was a rare exception in that it saw the number of teenage users increase, to 23% from 17% in 2022.
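For those who like the numbers laid out, the quoted shifts can be expressed both in percentage points and in relative terms (my arithmetic applied to the article’s figures):

```python
# Teen usage shifts quoted in the write up, computed both as percentage
# points and in relative terms. The (earlier, later) pairs are the
# article's figures.
shifts = {
    "TikTok":   (67, 63),
    "Snapchat": (59, 55),
    "X":        (23, 17),
    "WhatsApp": (17, 23),  # the rare gainer
}

for platform, (before, after) in shifts.items():
    points = after - before
    relative = 100.0 * (after - before) / before
    print(f"{platform:9s} {points:+d} points ({relative:+.1f}% relative)")
```

In relative terms, X shed roughly a quarter of its teen users while WhatsApp gained about a third. The point changes look smaller than the relative ones, which is why survey stories can spin the same data either way.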
I do have a comment. Lots of numbers which suggest reading, writing, and arithmetic are not likely to be priorities for tomorrow’s leaders of the free world. But whatever they decide and do, those actions will be on video and shared on social media. Outstanding!
Stephen E Arnold, December 19, 2024
FOGINT: Big Takedown Coincident with Durov Detainment. Coincidence?
December 19, 2024
This blog post is the work of an authentic dinobaby. No smart software was used.
In recent years, global authorities have taken down several encrypted communication channels. Exclu and Ghost, for example. Will a more fragmented approach keep the authorities away? Apparently not. A Europol press release announces, “International Operation Takes Down Another Encrypted Messaging Service Used by Criminals.” The write-up notes:
“Criminals, in response to the disruptions of their messaging services, have been turning to a variety of less-established or custom-built communication tools that offer varying degrees of security and anonymity. While the new fragmented landscape poses challenges for law enforcement, the takedown of established communication channels shows that authorities are on top of the latest technologies that criminals use.”
Case in point: After a three-year investigation, a multi-national law enforcement team just took down MATRIX. The service, “by criminals for criminals,” was discovered in 2021 on a convicted murderer’s phone. It was a sophisticated tool bad actors must be sad to lose. We learn:
“It was soon clear that the infrastructure of this platform was technically more complex than previous platforms such as Sky ECC and EncroChat. The founders were convinced that the service was superior and more secure than previous applications used by criminals. Users were only able to join the service if they received an invitation. The infrastructure to run MATRIX consisted of more than 40 servers in several countries with important servers found in France and Germany. Cooperation between the Dutch and French authorities started through a JIT set up at Eurojust. By using innovative technology, the authorities were able to intercept the messaging service and monitor the activity on the service for three months. More than 2.3 million messages in 33 languages were intercepted and deciphered during the investigation. The messages that were intercepted are linked to serious crimes such as international drug trafficking, arms trafficking, and money laundering. Actions to take down the service and pursue serious criminals happened on 3 December in four countries.”
Those four countries are France, Spain, Lithuania, and Germany, with an assist by the Netherlands. Europol highlights the importance of international cooperation in fighting organized crime. Is this the key to pulling ahead in the encryption arms race?
Cynthia Murrell, December 19, 2024
FOGINT: The Telegram – Visa Tie Up
December 18, 2024
This blog post is the work of an authentic dinobaby. No smart software was used.
This is Stephen E Arnold. Since the detainment of Pavel Durov by French authorities, Telegram has ramped up its public disclosures about its crypto ambitions. In November 2024, Telegram linked itself publicly with Holders (a crypto services firm) and Visa, Inc. More information is available in a YouTube video titled “Visa: Building a Bridge between TON and Real World Use Cases.” It is at this url: https://www.youtube.com/watch?v=YhdXeybiG0I. The presenter is Nikola Plecas, who is identified as the senior director, global head of GTM & Product Commercialization, Visa Crypto. The “GTM” means “go to market.” In our lecture yesterday (December 11, 2024) for the CyberSocial Conference, we mentioned this tie up with crypto. By coincidence, the video was posted. We anticipate that this deal will ripen in 2025. Thank you.
Stephen E Arnold, December 18, 2024, 7:16 am US
The Modern Manager Confronts Old Realities in an AI World
December 18, 2024
This blog post is the work of an authentic dinobaby. No smart software was used.
I read and got a kick out of “Parkinson’s Law: It’s Real, So Use It.” The subtitle: “Yes, Just Set That Deadline.” The main idea is that deadlines are necessary. Loosely translated to modern technology lingo: “Ship it. We will fix it with an update.”
The write up says:
Projects that don’t have deadlines imposed on them, even if they are self-imposed, will take a lot longer than they need to, and may suffer from feature creep and scope bloat. By setting challenging deadlines you will actually get better results.
Yesterday evening I received an email asking for some information related to a lecture we delivered earlier in the day. My first question was, “What’s the deadline?” No answer came back. I worked on a project earlier this year and deadlines were dots on a timeline. No dates, just blobs in months. We did a small project for an AI outfit. Nothing actually worked but I was asked, “How’s your part coming?” It wasn’t.
I concluded from these 2024 interactions that planning was not a finely tuned skill in four different, big time, high aspiration companies. Yet, here is a current article advocating for deadlines. I think the author has been caught in the same weird time talk my team and I have.
The author says:
Deadlines force a clear tempo and cadence and, fundamentally, they make things happen.
I agree. Deadlines make things happen. In my experience, that means, “Ship it. We will fix it with updates.” (Does that sound familiar?)
This essay makes clear to me that today’s crop of “managers” understand that some basics work really well. However, are today’s managers sufficiently informed to think through the time and resources required to deliver a high value, functional product or service? I would respectfully submit that there are some examples of today’s managers confusing marketing jabber and the need to make sales with getting work done so a product actually works. Consider these examples:
- Google’s announcements about quantum breakthroughs. Do they work? Sure, well, sort of.
- Microsoft’s broken image generation function in Copilot. Well, it worked and then it didn’t.
- Amazon’s quest to get Alexa to be more than a kitchen timer using other firms’ technology. Yeah, that is costing how much?
Knowing what to do — that is, setting a deadline — and creating something that really works — that is, an operating system which allows a user to send a facsimile or print a document — are interdependent capabilities. Managers who don’t know what is required cannot set a meaningful deadline. That’s what’s so darned interesting about Apple’s AI. Exactly when was that going to be available? Yeah. Soon, real soon. And that quantum computing stuff? Soon, real soon. And artificial general intelligence? It’s here now, pal.
Stephen E Arnold, December 18, 2024
Technology Managers: Do Not Ask for Whom the Bell Tolls
December 18, 2024
This blog post is the work of an authentic dinobaby. No smart software was used.
I read the essay “The Slow Death of the Hands-On Engineering Manager.” On the surface, the essay provides some palliative comments about a programmer who is promoted to manager. On a deeper level, the message I carried from the write up was that smart software is going to change the programmer’s work. As smart software becomes more capable, the need to pay people to do certain work goes down. At some point, some “development” may skip the human completely.
Thanks OpenAI ChatGPT. Good enough.
Another facet of the article concerned a tip for keeping oneself in the programming game. The example chosen was the use of OpenAI’s ChatGPT to provide “answers” to developers. Thus, instead of asking a person, a coder could just type into the prompt box. What could be better for an introvert who doesn’t want to interact with people or be a manager? The answer is, “Not too much.”
What the essay makes clear is that a good coder may get promoted to be a manager. This is a role which illustrates the Peter Principle. The 1969 book explains how people get promoted until they land in jobs they cannot do. The assumption is that if one is a good coder, that person will be a good manager. Yep, it is a principle still evident in many organizations. One of its side effects is a manager who knows he or she does not deserve the promotion and is absolutely no good at the new job.
The essay unintentionally makes clear that the Peter Principle is operating. The fix is to do useful things like eliminate the need to interact with colleagues when assistance is required.
John Donne in the 17th century wrote a much-quoted meditation, not a sonnet, which asserted:
No man is an island,
Entire of itself.
Each is a piece of the continent,
A part of the main.
The cited essay provides a way to further that worker isolation.
With AI the top-of-mind thought for most bean counters, the final lines are on point:
Therefore, send not to know
For whom the bell tolls,
It tolls for thee.
My view is that “good enough” has replaced individual excellence in quite important jobs. Is this AI’s “good enough” principle?
Stephen E Arnold, December 18, 2024
A Monopolist CEO Loses His Cool: It Is Our AI, Gosh Darn It!
December 17, 2024
This blog post flowed from the sluggish and infertile mind of a real live dinobaby. If there is art, smart software of some type was probably involved.
“With 4 Words, Google’s CEO Just Fired the Company’s Biggest Shot Yet at Microsoft Over AI” suggests that Sundar Pichai is not able to smarm his way out of an AI pickle. In January 2023, Satya Nadella, the boss of Microsoft, announced that Microsoft was going to put AI on, in, and around its products and services. Google immediately floundered with a Sundar & Prabhakar Comedy Show in Paris and then rolled out a Google AI service telling people to glue cheese on pizza.
Magic Studio created a good enough image of an angry executive thinking about how to put one of his principal competitors behind a giant digital eight ball.
Now 2025 is within shouting distance. Google continues to lag in the AI excitement race. The company may have oodles of cash, thousands of technical wizards, and a highly sophisticated approach to marketing, branding, and explaining itself. But is it working?
According to the cited article from Inc. Magazine’s online service:
Microsoft CEO Satya Nadella had said that “Google should have been the default winner in the world of big tech’s AI race.”
I like the “should have been.” I had a high school English teacher try to explain to me as an indifferent 14-year-old that the conditional perfect tense suggests a different choice would have avoided a disaster. Her examples involved a young person who decided to become an advertising executive and not a plumber. I think Ms. Dalton said something along the lines “Tom would have been happier and made more money if he had fixed leaks for a living.” I pegged the grammatical expression as belonging to the “woulda, coulda, shoulda” branch of rationalizing failure.
Inc. Magazine recounts an interview during which the interlocutor set up this exchange with the Big Dog of Google, Sundar Pichai, the chief writer for the Sundar & Prabhakar Comedy Show:
Interviewer: “You guys were the originals when it comes to AI. Where [do] you think you are in the journey relative to these other players?”
Sundar, the Googler: I would love to see “a side-by-side comparison of Microsoft’s models and our models any day, any time. Microsoft is using someone else’s models.”
Yep, Microsoft inked a deal with the really stable, fiscally responsible outfit OpenAI and a number of other companies including one in France. Imagine that. France.
Inc. Magazine states:
Google’s biggest problem isn’t that it can’t build competitive models; it’s that it hasn’t figured out how to build compelling products that won’t destroy its existing search business. Microsoft doesn’t have that problem. Sure, Bing exists, but it’s not a significant enough business to matter, and Microsoft is happy to replace it with whatever its generative experience might look like for search.
My hunch is that Google will not be advertising on Inc.’s site. Inc. might have to do some extra special search engine optimization too. Why? Inc.’s article repeats itself in case Sundar of comedy act fame did not get the message. Inc. states again:
Google hasn’t figured out the product part. It hasn’t figured out how to turn its Gemini AI into a product at the same scale as search without killing its real business. Until it does, it doesn’t matter whether the competition uses someone else’s models.
With the EU competition boss thinking about chopping up the Google, Inc. Magazine and Mr. Nadella may struggle to get Sundar’s attention. It is tough to do comedy when tragedy is a snappy quip away.
Stephen E Arnold, December 17, 2024
The Fatal Flaw in Rules-Based Smart Software
December 17, 2024
This blog post is the work of an authentic dinobaby. No smart software was used.
As a dinobaby, I have to remember the past. Does anyone know how the “smart” software in AskJeeves worked? At one time, before the cute logo and before the company followed the path of many, many other breakthrough search firms, AskJeeves used hand-crafted rules. (Oh, the reference to breakthrough is a bit of an insider joke with which I won’t trouble you.) A user would search for “weather 94401” and the system would “look up” zip code 94401 (Foster City, California) in its weather rule and deliver the answer. Alternatively, I could have looked out my window when I ran the query. AskJeeves went down a path painfully familiar to other smart software companies today: customer service. AskJeeves was acquired by IAC Corp., which moved away from the rules-based system that was “revolutionizing” search in the late 1990s.
Rules-based wranglers keep busy a-fussin’ and a-changin’ all the dang time. The patient mule Jeeves just wants lunch. Thanks, MidJourney, good enough.
I read “Certain Names Make ChatGPT Grind to a Halt, and We Know Why.” The essay presents information about how the wizards at OpenAI solve problems its smart software creates. The fix is to channel the “rules-based approach” which was pretty darned exciting decades ago. Like the AskJeeves’ approach, the use of hand-crafted rules creates several problems. The cited essay focuses on the use of “rules” to avoid legal hassles created when smart software just makes stuff up.
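To make the approach concrete, here is a minimal sketch of the kind of hard-coded name rule the article describes. To be clear, the function, the refusal text, and the blocklist are my invention for illustration, not OpenAI’s actual code; “David Mayer” is one of the names coverage reported as halting ChatGPT.

```python
# Hypothetical sketch of a hard-coded guardrail rule of the kind the
# article describes: scan the model's output for blocked names and halt.
# This is an illustration, not OpenAI's implementation.
BLOCKED_NAMES = {"david mayer"}  # one of the names reported to halt ChatGPT

def apply_name_rule(generated_text: str) -> str:
    """Return the text, or a refusal if a blocked name appears."""
    lowered = generated_text.lower()
    for name in BLOCKED_NAMES:
        if name in lowered:
            return "I'm unable to produce a response."
    return generated_text

print(apply_name_rule("The weather in Foster City is mild."))
print(apply_name_rule("Tell me about David Mayer."))  # the rule trips
```

Note how blunt the instrument is: every person who happens to share a blocked name gets the refusal too.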
I want to highlight several other problems with rules-based decision systems which are far older in computer years than the AskJeeves marketing success in 1996. Let me highlight a few which may lurk within the OpenAI and ChatGPT smart software:
- Rules have to be something created by a human in response to something another (often unpredictable) human did. Smart software gets something wrong like saying a person is in jail or dead when he is free and undead.
- Rules have to be maintained. Like legacy code, setting and forgetting can have darned exciting consequences after the original rules creator changed jobs or fell into the category “in jail” or “dead.”
- Rules work with a limited set of bounded questions and answers. Rules fail when applied to the fast-changing and weird linguistic behavior of humans. If a “rule” does not know a word like “debanking,” the system will struggle, crash, or return zero results. Bummer.
- Rules seem like a great idea until someone calculates how many rules are needed, how much it costs to create a rule, and how much maintenance rules require (typically based on the cost of creating a rule in the first place). To keep the math simple, rules are expensive.
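The bullet points above fit in a few lines of code. This toy rules engine (my sketch, not the actual AskJeeves system) shows both the appeal and the brittleness:

```python
# Toy hand-crafted rules engine in the AskJeeves spirit; my sketch, not
# the 1990s system. Each rule pairs a hand-written pattern with a canned
# handler, and every new kind of question means another hand-written rule.
import re

ZIP_WEATHER = {"94401": "Foster City, CA: fog, then sun"}  # illustrative data

RULES = [
    (re.compile(r"weather (\d{5})"),
     lambda m: ZIP_WEATHER.get(m.group(1), "no weather rule for that zip")),
    (re.compile(r"time in (\w+)"),
     lambda m: f"the time rule for {m.group(1)} has not been written yet"),
]

def answer(query: str) -> str:
    for pattern, handler in RULES:
        match = pattern.search(query.lower())
        if match:
            return handler(match)
    return "zero results"  # new vocabulary falls through every rule

print(answer("weather 94401"))   # the bounded case works
print(answer("debanking risk"))  # the unbounded case fails
```

The bounded query works; the new word falls through to “zero results.” Every fix is another rule, and every rule is another maintenance cost.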
I liked the cited essay about OpenAI. It reminds me how darned smart today’s developers of smart software are. This dinobaby loved the article. What a great anecdote! I want to say, “OpenAI should have asked Jeeves.” I won’t. I will point out that IBM Watson, the Jeopardy winner version, was rules based. In fact, rules are still around, and they still carry the cost burden like a patient donkey.
Stephen E Arnold, December 17, 2024