Yo, MSFT-Types, Listen Up
January 23, 2025
Developers concerned about security should check out “Seven Types of Security Issues in Software Design” at InsBug. The article does leave out a few points we would have included. Using Microsoft software, for example, or paying for cyber security solutions that don’t work as licensees believe. And don’t forget engineering for security rather than expediency and cost savings. Nevertheless, the post makes some good points. It begins:
“Software is gradually defining everything, and its forms are becoming increasingly diverse. Software is no longer limited to the applications or apps we see on computers or smartphones. It is now an integral part of hardware devices and many unseen areas, such as cars, televisions, airplanes, warehouses, cash registers, and more. Besides sensors and other electronic components, the actions and data of hardware often rely on software, whether in small amounts of code or in hidden or visible forms. Regardless of the type of software, the development process inevitably encounters bugs that need to be identified and fixed. While major bugs are often detected and resolved before release or deployment by developers or testers, security vulnerabilities don’t always receive the same attention.”
Sad but true. The seven categories include: Misunderstanding of Security Protection Technologies; Component Integration and Hidden Security Designs; Ignoring Security in System Design; Security Risks from Poor Exception Handling; Discontinuous or Inconsistent Trust Relationships; Over-Reliance on Single-Point Security Measures; and Insufficient Assessment of Scenarios or Environments. See the write-up for details on each point. We note a common thread—a lack of foresight. The post concludes:
“To minimize security risks and vulnerabilities in software design and development, one must possess solid technical expertise and a robust background in security offense and defense. Developing secure software is akin to crafting fine art — it requires meticulous thought, constant consideration of potential threats, and thoughtful design solutions. This makes upfront security design critically important.”
Security should not be an afterthought. What a refreshing perspective.
Cynthia Murrell, January 23, 2025
AI: Yes, Intellectual Work Will Succumb, Just Sooner Rather Than Later
January 22, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
Has AI innovation stalled? Nope. “It’s Getting Harder to Measure Just How Good AI Is Getting” explains:
OpenAI’s end-of-year series of releases included their latest large language model (LLM), o3. o3 does not exactly put the lie to claims that the scaling laws that used to define AI progress don’t work quite that well anymore going forward, but it definitively puts the lie to the claim that AI progress is hitting a wall.
Okay, that proves that AI is hitting the gym and getting pumped.
However, the write up veers into an unexpected calcified space:
The problem is that AIs have been improving so fast that they keep making benchmarks worthless. Once an AI performs well enough on a benchmark we say the benchmark is “saturated,” meaning it’s no longer usefully distinguishing how capable the AIs are, because all of them get near-perfect scores.
What is wrong with the lack of benchmarks? Nothing. Smart software is probabilistic. How accurate is the weather forecast? Ask a wonk at the National Weather Service and you get quite optimistic answers. Ask a child whose birthday party at the park was rained out on a day Willie the Weatherman said it would be sunny, and you get a different answer.
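The “saturated benchmark” idea in the quoted passage can be shown with a toy sketch. This is purely illustrative; the model names and scores below are invented, not data from the article or from any AI lab.

```python
# A toy sketch (not from the article) of benchmark "saturation":
# once every model scores near the ceiling, the benchmark stops
# telling the models apart. Model names and scores are invented.

def spread(scores):
    """Gap between best and worst score: a crude measure of how
    well a benchmark separates the systems being compared."""
    return round(max(scores) - min(scores), 2)

# A saturated benchmark: every frontier model is near 100%.
saturated = {"model_a": 0.98, "model_b": 0.99, "model_c": 0.97}

# A harder benchmark that still discriminates.
unsaturated = {"model_a": 0.25, "model_b": 0.61, "model_c": 0.40}

print(spread(saturated.values()))    # 0.02: tells us almost nothing
print(spread(unsaturated.values()))  # 0.36: still informative
```

Real evaluations use more than score spread, but the point stands: when the spread collapses toward zero, the test has stopped measuring progress.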
Okay, forget measurements. Here’s what the write up says will happen, and the prediction sounds really rock solid just like Willie the Weatherman:
The way AI is going to truly change our world is by automating an enormous amount of intellectual work that was once done by humans…. Like it or not (and I don’t really like it, myself; I don’t think that this world-changing transition is being handled responsibly at all) none of the three are hitting a wall, and any one of the three would be sufficient to lastingly change the world we live in.
Follow the argument? I must admit that jumping from getting good, to an inability to measure “good,” to humans being replaced because AI can do intellectual work is quite a journey. Perhaps I am missing something, but:
- Just because people outside of research labs have smart software that seems to work like a smart person does not mean the hallucination problem is solved. Yep, today’s models make stuff up because probability dictates the output.
- Use cases for smart software doing “intellectual work” are where in the write up? They aren’t, because Vox doesn’t have any that are comfortable for journalists and writers who can be replaced by the SEO AI’s advertised on Telegram search engine optimization channels or by marketers writing for Forbes Magazine. That’s right. Excellent use cases are smart software killing jobs once held by fresh MBAs or newly minted CFAs. Why? Cheaper, and as long as the models are “good enough” to turn a profit, let ‘em rip. Yahoooo.
- Smart software is created by humans, and humans shape what it does, how it is deployed, and care not a whit about the knock-on effects. Technology operates in the hands of humans. Humans are deeply flawed entities. Mother Teresas are outnumbered by street gangs in Reno, Nevada, based on my personal observations of that fine city.
Net net: Vox, which can and will be replaced by a cheaper and good-enough alternative, doesn’t want to raise that issue. Instead, Vox wanders around the real subject. That subject is that as those who drive AI figure out how to use what’s available and good enough, certain types of work will be pushed into the black boxes of smart software. Could smart software have written this essay? Yes. Could it have done a better job? Publications like the supremely weird Buzzfeed and some consultants I know sure like “good enough.” As long as it is cheap, AI is a winner.
Stephen E Arnold, January 22, 2025
Microsoft and Its Me-Too Interface for Bing Search
January 22, 2025
Bing will never be Google, but Microsoft wants its search engine to dominate queries. Microsoft Bing has a small percentage of Internet searches, and in a bid to gain more traction it has copied Google’s user interface (UI). Windows Latest spills the tea over the UI copying: “Microsoft Bing Is Trying To Spoof Google UI When People Search Google.com.”
Google’s UI is very distinctive with its minimalist approach. The only items on the Google UI are the query box and menus along the top and bottom of the page. Microsoft Edge is Microsoft’s Web browser, and it is programmed to use Bing. In a sneaky (and genius) move, when Edge users type Google into the Bing search box, they are taken to a UI that is strangely Google-esque. Microsoft is trying this new UI to lower Bing’s bounce rate, the share of users who leave.
Is it an effective tactic?
“But you might wonder how effective this idea would be. Well, if you’re a tech-savvy person, you’ll probably realize what’s going on, then scroll and open Google from the link. However, this move could keep people on Bing if they just want to use a search engine. Google is the number one search engine, and there’s a large number of users who are just looking for a search engine, but they think the search engine is Google. In their mind, the two are the same. That’s because Google has become a synonym for search engines, just like Chrome is for browsers. A lot of users don’t really care what search engine they’re using, so Microsoft’s new practice, which might appear stupid to some of you, is likely very effective.”
For unobservant users and/or those who don’t care, it will work. Microsoft is also tugging on heartstrings with another tactic:
“On top of it, there’s also an interesting message underneath the Google-like search box that says “every search brings you closer to a free donation. Choose from over 2 million nonprofits.” This might also convince some people to keep using Bing.”
What a generous and genius interface innovation. We’re not sure this is the interface everyone sees, but we love the me-too approach from user-centric big tech outfits.
Whitney Grace, January 22, 2025
And 2024, a Not-So-Wonderful Year
January 22, 2025
Every year has tech failures; some of them will join the zeitgeist as cultural phenomena like Windows Vista, Windows Me, Apple’s Pippin game console, chatbots, etc. PC Mag runs down the flops in: “Yikes: Breaking Down the 10 Biggest Tech Fails of 2024.” The list starts with Intel’s horrible year, with its booted CEO and poor chip performance. It follows up with the Salt Typhoon hack that proved (not that we didn’t already know it with TikTok) China is spying on every US citizen with a focus on bigwigs.
National Public Data lost 272 million Social Security numbers to a hacker. That was a great day in summer for the hacker, but the summer travel season became a nightmare when a faulty CrowdStrike kernel update grounded over 2,700 flights and practically locked down the US borders. Microsoft’s Recall, an AI search tool that took snapshots of user activity that could be recalled later, was a concern. What if passwords and other sensitive information were recorded?
The fabulous Internet Archive was hacked and taken down by a bad actor to protest the Israel-Gaza conflict. It makes us worry about preserving Internet and other important media history. Rabbit and Humane released AI-powered hardware that was supposed to be a hands-free way to use a digital assistant, but they failed. JuiceBox ended software support on its EV chargers, while Scarlett Johansson’s voice was stolen by OpenAI for its Voice Mode feature. She sued.
The worst of the worst is this:
“Days after he announced plans to acquire Twitter in 2022, Elon Musk argued that the platform needed to be “politically neutral” in order for it to “deserve public trust.” This approach, he said, “effectively means upsetting the far right and the far left equally.” In March 2024, he also pledged to not donate to either US presidential candidate, but by July, he’d changed his tune dramatically, swapping neutrality for MAGA hats. “If we want to preserve freedom and a meritocracy in America, then Trump must win,” Musk tweeted in September. He seized the @America X handle to promote Trump, donated millions to his campaign, shared doctored and misleading clips of VP Kamala Harris, and is now working closely with the president-elect on an effort to cut government spending, which is most certainly a conflict of interest given his government contracts. Some have even suggested that he become Speaker of the House since you don’t have to be a member of Congress to hold that position. The shift sent many X users to alternatives like Bluesky, Threads, and Mastodon in the days after the US election.”
Let’s assume PC Mag is on the money. Will the influence of the Leonardo da Vinci of modern times make everything better? Absolutely. I mean the last SpaceX rocket almost worked. No Tesla has exploded in my neighborhood this week. Perfect.
Whitney Grace, January 22, 2025
Why Ghost Jobs? Answer: Intelligence
January 21, 2025
Prepared by a still-alive dinobaby.
A couple of years ago, an intelware outfit’s US “president” contacted me. He was curious about the law enforcement and intelligence markets’ appetite for repackaged Maltego, some analytics, and an interface with some Palantir-type bells and whistles. I explained that I charged money to talk because, as a former blue-chip consultant, billing is in my blood. I don’t have platelets. I have Shrinky-dink invoices. Add some work, and these Shrinky-dinks blow up to big juicy invoices. He disconnected.
A few weeks later, he sent me an email. He wanted to pick up our conversation because the other people who he thought knew something about selling software to the US government did not understand that his company emerged from a spy shop. I was familiar with the issues: non-US company, ties to a high-power intelligence operation, an inability to explain whether the code was secure, and the charming attitude of many intelligence professionals who go from A to B without much thought about some social conventions.
The fellow wanted to know how one could obtain information about a competitor; specifically, what was the pricing spectrum. It is too bad the owner of the company dumped the start up and headed to the golf course. If that call came to me today, I would point him at this article: “1 in 5 Online Job Postings Are Either Fake or Never Filled, Study Finds.” Gizmodo has explained one reason why there are so many bogus jobs offering big bogus salaries and promising big bogus benefits.
The answer is obvious when viewed from my vantage point in rural Kentucky. The objective is to get a pile of résumés, then filter through them looking for people who might have some experience (current or past) at a company of interest to the job advertiser. What? Isn’t that illegal? I don’t know, but the trick has been used for a long, long time. Headhunting is a tricky business, and it is easy for someone to post a job opening and gather information from individuals who want to earn money.
What’s the write up say?
The Wall Street Journal cites internal data from the hiring platform Greenhouse that shows one in five online job postings—or between 18% and 22% of jobs advertised—are either fake or never filled. That data was culled from Greenhouse’s proprietary information, which the company can access because it sells automated software that helps employers fill out job postings. The “ghost job” phenomenon has been growing for some time—much to the vexation of job-seekers.
Okay, snappy. Ghost jobs. But the number seems low to me.
The article fails to note the intelligence angle, however. It concludes:
The plague of such phantom positions has led some platforms to treat job postings in very much the same way that other online content gets treated: as either A) verified or B) potential misinformation. Both Greenhouse and LinkedIn now supply a job verification service, the Journal writes, which allows users to know whether a position is legit or not. “It’s kind of a horror show,” Jon Stross, Greenhouse’s president and co-founder, told the Journal. “The job market has become more soul-crushing than ever.”
I think a handful of observations may be warranted:
- Somehow the education of a job seeker has ignored the importance of making sure that the résumé is sanitized so no information is provided to an unknown entity from whom there is likely to be zero response. Folks, this is data collection. Volume is good.
- Interviews are easier than ever. Fire up Zoom and hit the record button. The content of the interview can be reviewed and analyzed for tasty little info-nuggets.
- The process is cheap, easy, and safe. Getting some information can be quite tricky. Post an advertisement on a service and wait. Some podcasts brag about how many responses their help wanted ads generate in as little as a few hours. As I said, cheap, easy, and safe.
What can a person do to avoid this type of intelligence gathering activity? Sorry. I have some useful operational information, but those little platelet-sized invoices are just so eager to escape this dinobaby’s body. What’s amazing is that this ploy is news, just as it was news to the intelware person who was struggling to figure out some basics about selling to the government. Recycling open source software and pretending that it was an “innovation” was more important than trying to hire a former US government procurement officer based in the DC area with a minimum of 10 years in software procurement. We have a situation where professional intelligence officers, job seekers, and big-time journalists have the same level of understanding about how to obtain high-value information quickly and easily. Amazing what a dinobaby knows, isn’t it?
Stephen E Arnold, January 21, 2025
Sonus, What Is That Painful Sound I Hear?
January 21, 2025
Sonos CEO Swap: Are Tech Products Only As Good As Their Apps?
Lawyers and accountants leading tech firms, please, take note: The apps customers use to manage your products actually matter. Engadget reports, “Sonos CEO Patrick Spence Falls on his Sword After Horrible App Launch.” Reporter Lawrence Bonk writes:
“Sonos CEO Patrick Spence is stepping down from the company after eight years on the job, according to reporting by Bloomberg. This follows last year’s disastrous app launch, in which a redesign was missing core features and was broken in nearly every major way. The company has tasked Tom Conrad to steer the ship as interim CEO. Conrad is a current member of the Sonos board, but was a co-founder of Pandora, VP at Snap and product chief at, wait for it, the short-lived video streaming platform Quibi. He also reportedly has a Sonos tattoo. The board has hired a firm to find a new long-term leader.”
Conrad told employees that “we” let people down with the terrible app. And no wonder. Bonk explains:
“The decision to swap leadership comes after months of turmoil at the company. It rolled out a mobile app back in May that was absolutely rife with bugs and missing key features like alarms and sleep timers. Some customers even complained that entire speaker systems would no longer work after updating to the new app. It was a whole thing.”
Indeed. And despite efforts to rekindle customer trust, the company is paying the price of its blunder. Its stock price has fallen about 13 percent, revenue tanked 16 percent in the fiscal fourth quarter, and it has laid off more than 100 workers since August. The chief product officer is also leaving the firm. Will the CEO swap help Sonos recover? As he takes the helm, Conrad vows a return to basics. At the same time, he wants to expand Sonos’ products. Interesting combination. Meanwhile, the search continues for a more permanent replacement.
Cynthia Murrell, January 21, 2025
AWS and AI: Aw, Of Course
January 21, 2025
Matt Garman Interview Reveals AWS Perspective on AI
It should be no surprise that AWS is going all in on artificial intelligence. Will Amazon become an AI winner? Sure, if it keeps those managing the company’s third-party reseller program away from AWS. Nilay Patel, The Verge‘s Editor-in-Chief, interviewed AWS head Matt Garman. The piece explains “Why CEO Matt Garman Is Willing to Bet AWS on AI.” Patel writes:
“Matt has a really interesting perspective for that kind of conversation since he’s been at AWS for 20 years — he started at Amazon as an intern and was AWS’s original product manager. He’s now the third CEO in just five years, and I really wanted to understand his broad view of both AWS and where it sits inside an industry that he had a pivotal role in creating. … Matt’s perspective on AI as a technology and a business is refreshingly distinct from his peers, including those more incentivized to hype up the capabilities of AI models and chatbots. I really pushed Matt about Sam Altman’s claim that we’re close to AGI and on the precipice of machines that can do tasks any human could do. I also wanted to know when any of this is going to start returning — or even justifying — the tens of billions of dollars of investments going into it. His answers on both subjects were pretty candid, and it’s clear Matt and Amazon are far more focused on how AI technology turns into real products and services that customers want to use and less about what Matt calls ‘puffery in the press.'”
What a noble stance within a sea of AI hype. The interview touches on topics like AWS’ domination of streaming delivery, its partnerships with telecom companies, and problems of scale as it continues to balloon. Garman also compares the shift to AI to the shift from typewriters to computers. See the write-up for more of their conversation.
Cynthia Murrell, January 21, 2025
AI Doom: Really Smart Software Is Coming So Start Being Afraid, People
January 20, 2025
Prepared by a still-alive dinobaby.
The essay “Prophecies of the Flood” gathers several comments about software that thinks and decides without any humans fiddling around. The “flood” metaphor evokes the streams of money about which money people fantasize. The word “flood” also evokes the Hebrew Bible’s presentation of a divinely initiated cataclysm intended to cleanse the Earth of widespread wickedness. Plus, one cannot overlook the image of small towns in North Carolina inundated in mud and debris from a very bad storm.
When the AI flood strikes as a form of divine retribution, will the modern ark be filled with humans? Nope. The survivors will be those smart agents infused with even smarter software. Tough luck, humanoids. Thanks, OpenAI, I knew you could deliver art that is good enough.
To sum up: A flood is bad news, people.
The essay states:
the researchers and engineers inside AI labs appear genuinely convinced they’re witnessing the emergence of something unprecedented. Their certainty alone wouldn’t matter – except that increasingly public benchmarks and demonstrations are beginning to hint at why they might believe we’re approaching a fundamental shift in AI capabilities. The water, as it were, seems to be rising faster than expected.
The signs of darkness, according to the essay, include:
- Rising water in the generally predictable technology stream in the park populated with ducks
- Agents that “do” something for the human user or another smart software system. To humans with MBAs, art history degrees, and programming skills honed at a boot camp, the smart software is magical. Merlin wears a gray T-shirt, sneakers, and faded denims.
- Nifty art output in the form of images and — gasp! — videos.
The essay concludes:
The flood of intelligence that may be coming isn’t inherently good or bad – but how we prepare for it, how we adapt to it, and most importantly, how we choose to use it, will determine whether it becomes a force for progress or disruption. The time to start having these conversations isn’t after the water starts rising – it’s now.
Let’s assume that I buy this analysis and agree with the notion “prepare now.” How realistic is it that the United Nations, a couple of super powers, or a motivated individual can have an impact? Gentle reader, doom sells. Examples include The Big Short: Inside the Doomsday Machine, The Shifts and Shocks: What We’ve Learned – and Have Still to Learn – from the Financial Crisis, and Too Big to Fail: How Wall Street and Washington Fought to Save the Financial System from Crisis – and Themselves, and others, many others.
Have these dissections of problems had a material effect on regulators, elected officials, or the people in the bank down the street from your residence? Answer: Nope.
Several observations:
- Technology doom works because innovations have positive and negative impacts. What makes technology exciting is that no one is exactly sure what the knock-on effects will be. Therefore, doom is coming along with the good parts.
- Taking a contrary point of view creates opportunities to engage with those who want to hear something different. Insecurity is a powerful sales tool.
- Sending messages about future impacts pulls clicks. Clicks are important.
Net net: The AI revolution is a trope. Never mind that after decades of researchers’ work, a revolution has arrived. Lionel Messi allegedly said, “It took me 17 years to become an overnight success.” (Mr. Messi is a highly regarded professional soccer player.)
Will the ill-defined technology kill humans? Answer: Who knows. Will humans using ill-defined technology like smart software kill humans? Answer: Absolutely. Can “anyone” or “anything” take an action to prevent AI technology from rippling through society? Answer: Nope.
Stephen E Arnold, January 20, 2025
National Security: A Last Minute Job?
January 20, 2025
On its way out the door, the Biden administration has enacted a prudent policy. Whether it will persist long under the new administration is anyone’s guess. The White House Briefing Room released a “Fact Sheet: Ensuring U.S. Security and Economic Strength in the Age of Artificial Intelligence.” The rule provides six key mechanisms governing the diffusion of U.S. technology. The statement specifies:
“In the wrong hands, powerful AI systems have the potential to exacerbate significant national security risks, including by enabling the development of weapons of mass destruction, supporting powerful offensive cyber operations, and aiding human rights abuses, such as mass surveillance. Today, countries of concern actively employ AI – including U.S.-made AI – in this way, and seek to undermine U.S. AI leadership. To enhance U.S. national security and economic strength, it is essential that we do not offshore this critical technology and that the world’s AI runs on American rails. It is important to work with AI companies and foreign governments to put in place critical security and trust standards as they build out their AI ecosystems. To strengthen U.S. security and economic strength, the Biden-Harris Administration today is releasing an Interim Final Rule on Artificial Intelligence Diffusion. It streamlines licensing hurdles for both large and small chip orders, bolsters U.S. AI leadership, and provides clarity to allied and partner nations about how they can benefit from AI. It builds on previous chip controls by thwarting smuggling, closing other loopholes, and raising AI security standards.”
The six mechanisms specify 18 key allies to whom no restrictions apply and create a couple trusted statuses other entities can attain. They also support cooperation between governments on export controls, clean energy, and technology security. As for “countries of concern,” the rule seeks to ensure certain advanced technologies do not make it into their hands. See the briefing for more details.
The measures add to previous security provisions, including the October 2022 and October 2023 chip controls. We are assured they were informed by conversations with stakeholders, bipartisan members of Congress, industry representatives, and foreign allies over the previous 10 months. Sounds like it was a lot of work. Let us hope it does not soon become wasted effort.
Cynthia Murrell, January 20, 2025
Bossless: Managers of the Future Recognize They Cannot Fix Management or Themselves
January 17, 2025
A dinobaby-crafted post. I confess. I used smart software to create the heart wrenching scene of a farmer facing a tough 2025.
I have never heard of Robert Walters. Sure, I worked on projects in London for several years, but that outfit never hit my radar. Now it has, and I think its write up is quite interesting. “Conscious Unbossing – 52% of Gen-Z Professionals Don’t Want to Be Middle Managers” introduced me to a new bound phrase: Conscious unbossing. That is super and much more elegant than the coinage ensh*tification.
A conscious unbosser looks in the mirror and sees pain. He thinks, “I can’t make the decision to keep or fire Tameka. I can’t do the budget because I don’t have my MBA study group to help me. I can’t give that talk to the sales team because I have never sold a thing in my life.” Thanks, MSFT Copilot. I figured out how to make you work again. Too bad about killing those scanners, right?
The write up reports:
Over half of Gen-Z professionals don’t want to take on a middle management role in their career.
Is there some analysis? Well, sort of. The Robert Walters outfit offers this:
The Robert Walters poll found that 72% of Gen-Z would actually opt for an individual route to advance their career – one which focuses on personal growth and skills accumulation over taking on a management role (28%). Lucy Bisset, Director of Robert Walters North, comments: “Gen-Z are known for their entrepreneurial mindset – preferring to bring their ‘whole self’ to projects and spend time cultivating their own brand and approach, rather than spending time managing others. However, this reluctance to take on middle management roles could spell trouble for employers later down the line.”
The entrepreneurial mindset and “whole self” desire are what the survey sample’s results suggest. The bigger issue, in my opinion, is, “What’s caused a big chunk of Gen-Z (whatever that is) to want to have a “brand” and avoid the responsibility of making decisions, dealing with consequences (good and bad) of those decisions, and working with people to build a process that outputs results?”
Robert Walters sidesteps this question. Let me take a whack at why the Gen-Z crowd (people born roughly between 1997 and 2012) is into what I call “soft” work and getting paid to have experiences.
- This group grew up with awards for nothing. Run in a race, lose, and get a badge. Do this enough and the “losers” come to know that they are non-performers no matter what mommy, daddy, and the gym teacher told them.
- Gen-Z is a group that matured in a fantasy land of nifty computers, mobile phones, and social media. Certain life skills were not refined in the heat-treating process of a competitive education.
- Affirmation and attention became more important as their social opportunities narrowed. The great tattooing craze grabbed hold of those in Gen-Z. When I see a 32-year-old restaurant worker adorned with tattoos, I wonder, “What the heck was he/she/ze thinking?” I know what I am thinking: “Insecurity. A desire to stand out. A permanent ‘also participated’ badge which will look snappy when the tattooed person is 70 years old.”
Net net: I think the data in the write up is suggestive. I have questions about the sample size, the method of selection, and the statistical approach taken to determine if a “result” is verifiable. One thing is certain. Outfits like McKinsey, Bain, and BCG will have to rework their standard slide decks for personnel planning and management techniques. However, I can overlook the sparse information in the write up and the shallow analysis. I love that “conscious unbossing” neologism. See, there is room for sociology and psychology majors in business. Not much. But some room.
Stephen E Arnold, January 17, 2025