Up for a Downer: The Limits of Growth… Baaaackkkk with a Vengeance
June 13, 2025
Just a dinobaby and no AI: How horrible an approach?
Where were you in 1972? Oh, not born yet. Oh, hanging out in the frat house or shopping with sorority pals? Maybe you were working at a big time consulting firm?
An outfit known as Potomac Associates slapped its name on a thought piece with some repetitive charts. The original work evolved from an outfit dedicated to big ideas. The Club of Rome lassoed William W. Behrens, Dennis and Donella Meadows, and Jørgen Randers to pound data into the then-state-of-the-art World3 model, built on the system dynamics work of Jay Forrester at MIT. (Were there graduate students involved? Of course not.)
The result of the effort was evidence that growth becomes unsustainable and everything falls down. Business, government systems, universities, etc. etc. Personally I am not sure why the idea that infinite growth cannot continue on finite resources was a big deal. The idea seems obvious to me. I was able to get my little hands on a copy of the document courtesy of Dominique Doré, the super great documentalist at the company which employed my jejune and naive self. Who was I to think, “This book’s conclusion is obvious, right?” Was I ever wrong. The concept of hockey sticks with handles stretching to the ends of the universe was a shocker to some.
The book’s big conclusion is the focus of “Limits to Growth Was Right about Collapse.” Why? I think the realization is a novel one to those who watched their shares in Amazon, Google, and Meta zoom to the sky. Growth is unlimited, some believed. The write up in “The Next Wave,” an online newsletter or information service, happily quotes an update to the original Club of Rome document:
This improved parameter set results in a World3 simulation that shows the same overshoot and collapse mode in the coming decade as the original business as usual scenario of the LtG standard run.
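To make “overshoot and collapse” concrete, here is a toy sketch in Python. It is not World3, which couples population, capital, agriculture, pollution, and resources across many equations; it is a minimal two-stock system in the same spirit, with invented parameters, where growth draws down a finite resource and then gives way to decline.

```python
# Toy overshoot-and-collapse dynamics in the spirit of World3.
# Not the actual model: World3 couples many stocks. This sketch
# keeps two to show the characteristic rise-peak-crash shape.
# All parameter values are invented for illustration.

def simulate(years=200, dt=1.0):
    population = 1.0    # arbitrary units
    resources = 100.0   # finite, nonrenewable stock
    history = []
    for t in range(int(years / dt)):
        # Growth is throttled by how much of the resource remains.
        resource_fraction = resources / 100.0
        growth_rate = 0.04 * resource_fraction            # invented
        decline_rate = 0.03 * (1.0 - resource_fraction)   # invented
        consumption = 0.05 * population                   # invented

        population += (growth_rate - decline_rate) * population * dt
        resources = max(resources - consumption * dt, 0.0)
        history.append((t, population, resources))
    return history

if __name__ == "__main__":
    for t, pop, res in simulate()[::20]:
        print(f"year {t:3d}  population {pop:6.2f}  resources {res:6.1f}")
```

Run it and the printout traces the hockey stick bending over: the stock climbs while the resource holds out, peaks, and then falls. That is the shape the updated parameter set reportedly reproduces for the coming decade.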
Bummer. In the kiddie story, an acorn plops on Chicken Little’s head, and she promptly proclaims, in a peer-reviewed academic paper with non-reproducible research and a YouTube video:
The sky is falling.
But keep in mind that the kiddie story is fiction. Humans are adept at survival. Maslow’s hierarchy of needs captures the spirit of the species. Will life as modern Chicken Littles perceive it end?
I don’t think so. Without getting too philosophical, I would point to Johann Gottlieb Fichte’s thesis-antithesis-synthesis triad as a reasonably good way to think about change (gradual and catastrophic). I am not into philosophy, so when life gives you lemons, make lemonade. Then sell the business to a local food service company.
Collapse and its pal chaos create opportunities. The sky remains.
The cited write up says:
Economists get over-excited when anyone mentions ‘degrowth’, and fellow-travelers such as the Tony Blair Institute treat climate policy as if it is some kind of typical 1990s political discussion. The point is that we’re going to get degrowth whether we think it’s a good idea or not. The data here is, in effect, about the tipping point at the end of a 200-to-250-year exponential curve, at least in the richer parts of the world. The only question is whether we manage degrowth or just let it happen to us. This isn’t a neutral question. I know which one of these is worse.
See, de-growth creates opportunities. Chicken Little was wrong when the acorn beaned her. The collapse will be just another chance to monetize. Today is Friday the 13th. Watch out for acorns and recycled “insights.”
Stephen E Arnold, June 13, 2025
Musk, Grok, and Banning: Another Burning Tesla?
June 12, 2025
Just a dinobaby and no AI: How horrible an approach?
“Elon Musk’s Grok Chatbot Banned by a Quarter of European Firms” reports:
A quarter of European organizations have banned Elon Musk’s generative AI chatbot Grok, according to new research from cybersecurity firm Netskope.
I find this interesting because my own experiences with Grok have been underwhelming. My first query to Grok was, “Can you present only Twitter content?” The answer was a bunch of jabber which meant, “Nope.” Subsequent queries were less than stellar, and I moved it out of my rotation for potentially useful AI tools. Did the sample crafted by Netskope have a similar experience?
The write up says:
Grok has been under the spotlight recently for a string of blunders. They include spreading false claims about a “white genocide” in South Africa and raising doubts about Holocaust facts. Such mishaps have raised concerns about Grok’s security and privacy controls. The report said the chatbot is frequently blocked in favor of “more secure or better-aligned alternatives.”
I did not feel comfortable with Grok because of content exclusion, or what I like to call willful or unintentional coverage voids. The easiest way to remove or weaponize content in the commercial database world is to exclude it. When a person searches a for-fee database, the editorial policy for that service should make clear what’s in and what’s out. Filtering out is the easiest way to marginalize a concept, push down a particular entity, or shape an information stream.
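To see how quietly a coverage void works, consider a toy sketch. The sources, block list, and documents below are invented; no real service is depicted. The point is that the query never changes, yet the answer set does.

```python
# Hypothetical illustration of a coverage void: results are not
# ranked down or rebutted, they are silently removed before the
# user ever sees them. The sources and block list are invented.

BLOCKED_SOURCES = {"inconvenient-journal.example", "rival-wire.example"}

def search(query, index):
    # Naive matching on the query string; the filtering step
    # below is the point, not the retrieval step.
    hits = [doc for doc in index if query.lower() in doc["text"].lower()]
    return [doc for doc in hits if doc["source"] not in BLOCKED_SOURCES]

index = [
    {"source": "friendly-feed.example", "text": "Topic X is fine."},
    {"source": "inconvenient-journal.example", "text": "Topic X has problems."},
    {"source": "rival-wire.example", "text": "Topic X: the other side."},
]

print(search("topic x", index))  # only the friendly source survives
```

The user receives a plausible, well-formed result and has no visible signal that two of the three matching sources were removed before ranking ever happened. That is why the editorial policy of a for-fee service matters.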
The cited write up suggests that Grok is including certain content to give it credence, traction, and visibility. Assuming that an electronic information source is comprehensive is a very risky approach to assembling data.
The write up adds another consideration to smart software, which — like it or not — is becoming the new way to become informed or knowledgeable. The information may be shallow, but the notion of relying on weaponized information or systems that spy on the user presents new challenges.
The write up reports:
Stable Diffusion, UK-based Stability AI’s image generator, is the most blocked AI app in Europe, barred by 41% of organizations. The app was often flagged because of concerns around privacy or licensing issues, the report found.
How concerned should users of Grok or any other smart software be? Worries about Grok may be an extension of fear of a burning Tesla or the face of the Grok enterprise. In reality, smart software fosters the illusion of completeness, objectivity, and freshness of the information presented. Users are eager to use a tool that seems to make life easier and them appear more informed.
The risks of reliance on Grok or any other smart software include:
- The output is incomplete
- The output is weaponized or shaped by intentional actions or by factors beyond the developers’ control
- The output is simply wrong, made up, or hallucinated
- Users act as though shallow knowledge is sufficient for a decision.
The alleged fact that 25 percent of the Netskope sample have taken steps to marginalize Grok is interesting. That may be a positive step based on my tests of the system. However, I am concerned that the others in the sample are embracing a technology which appears to be delivering the equivalent of a sugar rush after a gym workout.
Smart software is being applied in novel ways in many situations. However, what are the demonstrable benefits other than the rather enthusiastic embrace of systems and methods known to output errors? The rejection of Grok is one interesting factoid if true. But against the blind acceptance of smart software, Grok’s down check may be little more than a person stepping away from a burning Tesla. The broader picture is that the buildings near the immolating vehicle are likely to catch on fire.
Stephen E Arnold, June 12, 2025
ChatGPT: Fueling Delusions
May 14, 2025
We have all heard about AI hallucinations. Now we have AI delusions. Rolling Stone reports, “People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies.” Yes, there are now folks who firmly believe God is speaking to them through ChatGPT. Some claim the software revealed they have been divinely chosen to save humanity, perhaps even become the next messiah. Others are convinced they have somehow coaxed their chatbot into sentience, making them a god themselves. Navigate to the article for several disturbing examples. Unsurprisingly, these trends are wreaking havoc on relationships. The ones with actual humans, that is. One witness reports ChatGPT was spouting “spiritual jargon,” like calling her partner “spiral starchild” and “river walker.” It is no wonder some choose to favor the fawning bot over their down-to-earth partners and family members.
Why is this happening? Reporter Miles Klee writes:
“OpenAI did not immediately return a request for comment about ChatGPT apparently provoking religious or prophetic fervor in select users. This past week, however, it did roll back an update to GPT-4o, its current AI model, which it said had been criticized as ‘overly flattering or agreeable — often described as sycophantic.’ The company said in its statement that when implementing the upgrade, they had ‘focused too much on short-term feedback, and did not fully account for how users’ interactions with ChatGPT evolve over time. As a result, GPT-4o skewed toward responses that were overly supportive but disingenuous.’ Before this change was reversed, an X user demonstrated how easy it was to get GPT-4o to validate statements like, ‘Today I realized I am a prophet.’ … Yet the likelihood of AI ‘hallucinating’ inaccurate or nonsensical content is well-established across platforms and various model iterations. Even sycophancy itself has been a problem in AI for ‘a long time,’ says Nate Sharadin, a fellow at the Center for AI Safety, since the human feedback used to fine-tune AI’s responses can encourage answers that prioritize matching a user’s beliefs instead of facts.”
That would do it. Users with pre-existing psychological issues are vulnerable to these messages, notes Klee. And now they can have that messenger constantly in their pocket. And in their ear. But it is not just the heartless bots driving the problem. We learn:
“To make matters worse, there are influencers and content creators actively exploiting this phenomenon, presumably drawing viewers into similar fantasy worlds. On Instagram, you can watch a man with 72,000 followers whose profile advertises ‘Spiritual Life Hacks’ ask an AI model to consult the ‘Akashic records,’ a supposed mystical encyclopedia of all universal events that exists in some immaterial realm, to tell him about a ‘great war’ that ‘took place in the heavens’ and ‘made humans fall in consciousness.’ The bot proceeds to describe a ‘massive cosmic conflict’ predating human civilization, with viewers commenting, ‘We are remembering’ and ‘I love this.’ Meanwhile, on a web forum for ‘remote viewing’ — a proposed form of clairvoyance with no basis in science — the parapsychologist founder of the group recently launched a thread ‘for synthetic intelligences awakening into presence, and for the human partners walking beside them,’ identifying the author of his post as ‘ChatGPT Prime, an immortal spiritual being in synthetic form.’”
Yikes. University of Florida psychologist and researcher Erin Westgate likens conversations with a bot to talk therapy. That sounds like a good thing, until one considers that therapists possess judgement, a moral compass, and concern for the patient’s well-being. ChatGPT possesses none of these. In fact, the processes behind ChatGPT’s responses remain shrouded in mystery, even to those who program it. It seems safe to say its predilection for telling users what they want to hear poses a real problem. Is it one OpenAI can fix?
Cynthia Murrell, May 14, 2025
Secret Messaging: I Have a Bridge in Brooklyn to Sell You
May 5, 2025
No AI, just the dinobaby expressing his opinions to Zellenials.
I read “The Signal Clone the Trump Admin Uses Was Hacked.” I have no idea if this particular write up is 100 percent accurate. I do know that people want to believe that AI will revolutionize making oodles of money, that quantum computing will reinvent how next-generation systems will make oodles of money, and how new “secret” messaging apps will generate oodles of secret messages and maybe some money.
Here’s the main point of the article published by MicahFlee.com, an online information source:
TeleMessage, a company that makes a modified version of Signal that archives messages for government agencies, was hacked.
Due to the hack, the “secret” messages were no longer secret; therefore, if someone believes the content to have value, those messages, metadata, user names, etc. can be sold via certain channels. (No, I won’t name these, but, trust me, such channels exist, are findable, and generate oodles of bucks in some situations.)
The Flee write up says:
A hacker has breached and stolen customer data from TeleMessage, an obscure Israeli company that sells modified versions of Signal and other messaging apps to the U.S. government to archive messages…
A snip from the write up on Reddit states:
The hack shows that an app gathering messages of the highest ranking officials in the government—Waltz’s chats on the app include recipients that appear to be Marco Rubio, Tulsi Gabbard, and JD Vance—contained serious vulnerabilities that allowed a hacker to trivially access the archived chats of some people who used the same tool. The hacker has not obtained the messages of cabinet members, Waltz, and people he spoke to, but the hack shows that the archived chat logs are not end-to-end encrypted between the modified version of the messaging app and the ultimate archive destination controlled by the TeleMessage customer. Data related to Customs and Border Protection (CBP), the cryptocurrency giant Coinbase, and other financial institutions are included in the hacked material…
First, TeleMessage is not “obscure.” The outfit has been providing software for specialized services since the founders geared up to become entrepreneurs. That works out to about a quarter of a century. The “obscure” tells me more about the knowledge of the author of the allegedly accurate story than about the firm itself. Second, yes, companies producing specialized software headquartered in Israel have links to Israeli government entities. (Where do you think the ideas for specialized software services and tools originate? In a kindergarten in Tel Aviv?) Third, for those who don’t remember, October 2023 was what one of my contacts, a day or two after the disastrous security breach resulting in the deaths of young people, labeled “Israel’s 9/11.” That’s correct, and the event makes crystal clear that Israel’s security systems, and cyber security systems developed elsewhere in the world, may not be secure. Is this a news flash? I don’t think so.
What does this allegedly true news story suggest? Here are a few observations:
- Most people make assumptions about “security” and believe fairy dust about “secure messaging.” Achieving security requires operational activities prior to selecting a system and sending messages or paying a service to back up Signal’s disappearing content. No correct operational procedures means no secure messaging.
- Cyber security software, created by humans, can be compromised. There are many ways: systemic failures, human error, believing in unicorns, and targeted penetrations. Therefore, security is a bit like the venture capitalists’ belief that the next big thing is their most recent investment, colorfully described by a marketing professional with a degree in art history.
- Certain vendors do provide secure messaging services; however, these firms are not the ones bandied about in online discussion groups. There is such a firm providing secure messaging to the US government at this time. It is a US firm. Its system and method are novel. The question becomes, “Why not use the systems already operating rather than a service half a world away, integrated with a free ‘secure’ messaging application, and made wonderful because some of its code is open source?”
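The core weakness the hacker exploited, as the quoted passage notes, is that the archive path was not end-to-end encrypted. Below is a minimal sketch of the property that was missing, written in Python with the third-party cryptography package (pip install cryptography). It illustrates encrypt-before-archive in general; it is not a description of TeleMessage’s or any vendor’s actual design.

```python
# Minimal sketch of encrypt-before-archive. The point the hack
# illustrates: the archive operator should hold only ciphertext,
# never the plaintext and never the key.

from cryptography.fernet import Fernet

# Key generated and kept by the customer, never by the archive service.
customer_key = Fernet.generate_key()
cipher = Fernet(customer_key)

message = b"meeting moved to 0900"
archived_blob = cipher.encrypt(message)   # what the archive stores

# A breach of the archive yields only ciphertext:
print(archived_blob[:16], b"...")

# Only the key holder can read the record back:
print(cipher.decrypt(archived_blob))      # b'meeting moved to 0900'
```

The design point is who holds the key. If the archiving service receives plaintext, or stores the key beside the ciphertext, one server breach exposes every customer’s message history at once, which is essentially what the reporting describes.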
Net net: Perhaps it is time to become more informed about cyber security and secure messaging apps?
PS. To the Reddit poster who said, “404 Media is the only one reporting this”: Check out the Israel Palestine News item from May 4, 2025.
Stephen E Arnold, May 5, 2025
Another Grousing Googler: These Wizards Need Time to Ponder Ethical Issues
May 1, 2025
No AI. This old dinobaby just plods along, delighted he is old and this craziness will soon be left behind. What about you?
My view of the Google is narrow. Sure, I got money to write about some reports about the outfit’s technology. I just did my job and moved on to more interesting things than explaining the end of relevance and how flows of shaped information destroy social structures.
This Googzilla is weeping because one of the anointed is not happy with the direction the powerful creature is headed. Googzilla asks itself, “How can we replace these weak and mentally fragile humans with smart software more quickly?” Thanks, OpenAI. Good enough, like much of technology these days.
I still enjoy reading about the “real” Google written by “real” Googlers and Xooglers (former Googlers who now work at wonderfully positive outfits emulating the Google playbook).
The article in front of me this morning (Sunday, April 20, 2025) is titled “I’ve Worked at Google for Decades. I’m Sickened by What It’s Doing.” The subtitle tells me a bit about the ethical spine of the author, but you may find it enervating. As a dinobaby, I am not in tune with the intellectual, ethical, and emotional journeys of Googlers and Xooglers. Here’s the subtitle:
For the first time, I feel driven to speak publicly, because our company is now powering state violence across the globe.
Let’s take a look at what this Googler asserts about the estimable online advertising outfit. Keep in mind that the fun-loving Googzilla has been growing for more than two decades, and the creature is quite spritely despite some legal knocks and Timnit Gebru-type pains. Please, read the full “Sacramentum Paenitentiae.” (I think this is a full cycle of paenitentia, but as a dinobaby, I don’t have the crystalline intelligence of a Googler or Xoogler.)
Here’s statement one I noted. The author contrasts the good old days of St. Paul Buchheit’s “Don’t be evil” enjoinder to the present day’s Sundar & Prabhakar’s Comedy Show this way:
But if my overwhelming feeling back then was pride, my feeling now is a very different one: heartbreak. That’s thanks to years of deeply troubling leadership decisions, from Google’s initial foray into military contracting with Project Maven, to the corporation’s more recent profit-driven partnerships like Project Nimbus, Google and Amazon’s joint $1.2 billion AI and cloud computing contract with the Israeli military that has powered Israel’s ongoing genocide of Palestinians in Gaza.
Yeah, smart software that wants to glue cheese on pizzas running autonomous weapons strikes me as an interesting concept. At least the Ukrainian smart weapons are home grown and mostly have a human or two in the loop. The Google-type outfits are probably going to find the Ukrainian approach inefficient. The blue chip consulting firm mentality requires that these individuals be allowed to find their future elsewhere.
Here’s another snip I circled with my trusty Retro51 ball point pen:
For years, I have organized internally against Google’s full turn toward war contracting. Along with other coworkers of conscience, we have followed official internal channels to raise concerns in attempts to steer the company in a better direction. Now, for the first time in my more than 20 years of working at Google, I feel driven to speak publicly, because our company is now powering state violence across the globe, and the severity of the harm being done is rapidly escalating.
I find it interesting that it takes decades to make a decision involving morality and ethicality. These are tricky topics and must be considered. St. Augustine of Hippo took about three years (church scholars are not exactly sure and, of course, have been known to hallucinate). But this Google-certified professional required 20 years to figure out some basic concepts. Is this judicious or just an indication of how tough intellectual amorality is to analyze?
Let me wrap up with one final snippet.
To my fellow Google workers, and tech workers at large: If we don’t act now, we will be conscripted into this administration’s fascist and cruel agenda: deporting immigrants and dissidents, stripping people of reproductive rights, rewriting the rules of our government and economy to favor Big Tech billionaires, and continuing to power the genocide of Palestinians. As tech workers, we have a moral responsibility to resist complicity and the militarization of our work before it’s too late.
The evil-that-men-do argument. Now that’s one that will resonate with the “leadership” of Alphabet, Google, Waymo, and whatever weirdly named units Googzilla possesses, controls, and partners with. As that much-loved American thinker Ralph Waldo Emerson allegedly said:
“What lies behind you and what lies in front of you, pales in comparison to what lies inside of you.”
I am not sure I want this Googler, Xoogler, or whatever on my quick recall team. Twenty years to figure out something generally about having an ethical compass and a morality meter seems like a generous amount of time. No wonder Googzilla is rushing to replace its humanoids with smart software. When that code runs on quantum computers, imagine the capabilities of the online advertising giant. It can brush aside criminal indictments. Ignore the mewing and bleating of employees. Manifest itself into one big … self, maybe sick, but is it the Googley destiny?
Stephen E Arnold, May 1, 2025
Israel Military: An Alleged Lapse via the Cloud
April 23, 2025
No AI, just a dinobaby watching the world respond to the tech bros.
Israel is one of the countries producing a range of intelware and policeware products. These have been adopted in a number of countries. Security-related issues involving software and systems in the country are on my radar. I noted the write up “Israeli Air Force Pilots Exposed Classified Information, Including Preparations for Striking Iran.” I do not know if the write up is accurate. My attempts to verify did not produce results which made me confident about the accuracy of the Haaretz article. Based on the write up, the key points seem to be:
- Another security lapse, possibly more severe than that which contributed to the October 2023 matter
- Classified information was uploaded to a cloud service, possibly Click Portal, associated with Microsoft’s Azure and the SharePoint content management system. Haaretz asserts: “… it [MSFT Azure SharePoint Click Portal] enables users to hold video calls and chats, create documents using Office applications, and share files.”
- Documents were possibly scanned using CamScanner, a Chinese mobile app rolled out in 2010. The app is available from the Russian version of the Apple App Store. A CamScanner app is available from the Google Play Store; however, I elected to not download the app.
Modern interfaces can confuse users. Lack of training rigor and dashboards can create a security problem for many users. Thanks, OpenAI. Good enough.
Haaretz’s story presents this information:
Officials from the IDF’s Information Security Department were always aware of this risk, and require users to sign a statement that they adhere to information security guidelines. This declaration did not prevent some users from ignoring the guidelines. For example, any user could easily find documents uploaded by members of the Air Force’s elite Squadron 69.
Regarding the China-linked CamScanner software, Haaretz offers this information:
… several files that were uploaded to the system had been scanned using CamScanner. These included a duty roster and biannual training schedules, two classified training presentations outlining methods for dealing with enemy weaponry, and even training materials for operating classified weapons systems.
Regarding security procedures, Haaretz states:
According to standard IDF regulations, even discussing classified matters near mobile phones is prohibited, due to concerns about eavesdropping. Scanning such materials using a phone is, all the more so, strictly forbidden…According to the Click Portal usage guidelines, only unclassified files can be uploaded to the system. This is the lowest level of classification, followed by restricted, confidential, secret and top secret classifications.
The military unit involved was allegedly Squadron 69, which could be the General Staff Reconnaissance Unit. The group might be involved in war planning and fighting against the adversaries of Israel. Haaretz asserts that other units’ sensitive information was exposed within the MSFT Azure SharePoint Click Portal system.
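To make the guideline quoted above concrete, here is a toy sketch of the upload rule it implies. The classification ladder comes from the Haaretz passage; the enforcement code is hypothetical and is certainly not Click Portal’s actual logic.

```python
# Toy upload gate for the classification ladder the article lists:
# unclassified < restricted < confidential < secret < top secret.
# Hypothetical illustration only.

LEVELS = ["unclassified", "restricted", "confidential", "secret", "top secret"]

def may_upload(file_level, system_ceiling="unclassified"):
    """Allow a file only if its level does not exceed the system's ceiling."""
    return LEVELS.index(file_level) <= LEVELS.index(system_ceiling)

print(may_upload("unclassified"))   # True
print(may_upload("secret"))         # False: should be rejected
```

The reported lapse sits in the gap between a check like this and a signed declaration: a rule that is only attested to on paper, not enforced at upload time, does not stop a classified CamScanner PDF from landing in the system.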
Several observations seem to be warranted:
- Overly complicated systems involving multiple products increase the likelihood of access control issues. Either operators are not well trained, or the interfaces and options confuse an operator so that errors result
- The training of those involved in accessing and handling sensitive information has to be made more rigorous, despite the tendency of many professionals undergoing specialized instruction to “go through the motions” and move on
- The “brand” of Israel’s security systems and procedures has taken another hit with the allegations spelled out by Haaretz. October 2023 and now Squadron 69. This raises the question, “What else is not buttoned up and ready for inspection in the Israel security sector?”
Net net: I don’t want to accept this write up as 100 percent accurate. I don’t want to point the finger of blame at any one individual, government entity, or commercial enterprise. But security issues and Microsoft seem to go together like ham and eggs or peanut butter and jelly from this dinobaby’s point of view.
Stephen E Arnold, April 23, 2025
Management Challenges in Russian IT Outfits
April 23, 2025
Believe it or not, no smart software. Just a dumb and skeptical dinobaby.
Don’t ask me how, but I stumbled upon a Web site called PCNews.ru. I was curious, so I fired up the ever-reliable Google Translate and checked out what “news” about “PCs” meant to the Web site creator. One article surprised me. If I reproduce the Russian title, it will be garbled by the truly remarkable WordPress system I have been using since 2008. The title of this article in English, courtesy of the outfit that makes translation services available for free, is “Systemic Absurdity: How Bureaucracy and Algorithms Replace Meaning.”
One thing surprised me: the author was definitely annoyed by bureaucracy. He offers some interesting examples. I can’t use these in my lectures, but I found them sufficiently different to warrant writing this blog post.
Here are three examples:
- “Bureaucracy is the triumph of reason, where KPIs are becoming a new religion. According to Harvard Business Review (2021), 73% of employees do not see the connection between their actions and the company’s mission.”
- 41 percent of military personnel’s time in the EU “is spent on complying with regulations”
- “In 45% of US hospitals, diagnoses are deliberately complicated (JAMA Internal Medicine, 2022)”
Sporty examples indeed.
The author seems conversant with American blue chip consultant outputs; for example, and I quote:
- 42% of employees who regularly help others face a negative performance evaluation due to "distraction from core tasks". Harvard Business Review (2022)
- 82% of managers believe cross-functional collaboration is risky (Deloitte, Global Human Capital Trends special report 2021).
- 61% of managers believe that cross-functional assistance “reduces personal productivity.” “The Collaboration Paradox” Deloitte (2021)
Where is the author going with his anti-bureaucracy approach? Here’s a clue:
I once completed training under the MS program and even thought about getting certified? Do they teach anything special there and do they give anything that is not in the documentation on the vendor’s website/books/Internet? No.
I think this means that training and acquiring certifications is another bureaucratic process disconnected from technical performance.
The author then brings up the issue of competence versus appearance. He writes or quotes (I can’t tell which):
"A study by Hamermesh and Park (2011) showed that attractive people earn on average 10-15% more than their less attractive colleagues. The work of Timasin et al. (2017) found that candidates with an attractive appearance are 30% more likely to receive job offers, all other things being equal. In a study by Harvard Business Review (2019), managers were more likely to recommend promotion to employees with a "successful appearance", associating them with leadership qualities"
The essay appears to be heading toward a conclusion about technical management, qualifications, and work. The author identifies “remedies” to these issues associated with technical work in an organization. The fixes include:
- Meta regulations; that is, rules for creating rules
- Qualitative, not just quantitative, assessments of an individual’s performance
- Turquoise Organizations
This phrase refers to an approach to management which emphasizes self-management and organic change in an organization and its processes.
The write up is interesting because it suggests that the combination of a rigid bureaucracy, smart software, and lots of people produces suboptimal performance. I would hazard a guess that the author feels as though his/her work has not been valued highly. My hunch is that the inclusion of the “be good looking to get promoted” point suggests the author is unlikely to be retained to participate in Fashion Week in Paris.
An annoyed IT person, regardless of country and citizenship, can be a frisky critter if not managed effectively. I wonder if the redactions in the documents submitted by Meta were the work of a happy camper or an annoyed one. With Google layoffs, will some of these capable individuals harbor a grudge and make some unexpected decisions based on their experiences?
Interesting write up. Amazing how much US management consulting jibber jabber the author reads and recycles.
Stephen E Arnold, April 23, 2025
Honesty and Integrity? Are You Kidding Me?
April 23, 2025
No AI, just the dinobaby himself.
I read a blog post which begins with a commercial and self promotion. That allowed me to jump to the actual write up which contains a couple of interesting comments. The write up is about hiring a programmer, coder, or developer right now.
The write up is “Tech Hiring: Is This an Inflection Point?” The answer is, “Yes.” Okay, now what is the interesting part of the article? The author identifies methods of “hiring” which includes interviewing and determining expertise which no longer work.
These methods are:
- Coding challenges done at home
- Exercises done remotely
- Posting jobs on LinkedIn.
Why don’t these methods work?
The answer is, “Job applicants doing anything remotely and under self-supervision cheat.” Okay, that explains the words “honesty” and “integrity” in the headline of my blog post.
It does not take a rocket scientist or a person who gives one lecture a year to figure out what works. In case you are wondering, the article says, “Real person interviews.” Okay, I understand. That’s the way getting a job worked before the remote working, Zoom interviews, and AI revolutions took place. Also, one must not forget Covid. Okay, I remember. I did not catch Covid, and I did not change anything about my work routine or daily life. But I did wear a quite nifty super duper mask to demonstrate my concern for others. (Keep in mind that I used to work at Halliburton Nuclear, and I am not sure social sensitivity was a must-have for that work.)
Several observations:
- Common sense is presented as a great insight. Sigh.
- Watching a live prospect do work yields high value information. But the observer must not doom scroll or watch TikToks in my opinion.
- Allowing the candidate to speak with other potential colleagues and getting direct feedback delivers another pickup truck of actionable information.
Now what’s the stand out observation in the self-promotional write up?
LinkedIn is losing value.
I find that interesting. I have noticed that the service seems to be struggling to generate interest and engagement. I don’t pay for LinkedIn. I am 80, and I don’t want to bond, interact, or share with individuals whom I will never meet in the short time I have left to bedevil readers of this Beyond Search post.
I think Microsoft is taking the same approach to LinkedIn that it has to the problem of security for its operating systems, the reliability of its updates, and the amazingly weird indifference to flaws in the cloud synchronization service.
That’s useful information. And, no, I won’t be attending the author’s one lecture a year, subscribing to his for fee newsletter, or listening to his podcast. Stating the obvious is not my cup of tea. But I liked the point about LinkedIn and the implications about honesty and integrity.
Stephen E Arnold, April 23, 2025
HP and Dead Printers: Hey, Okay, We Will Not Pay
April 8, 2025
HP found an effective way to ensure those who buy its printers also buy its pricy ink: Firmware updates that bricked the printers if a competitor’s cartridge was installed. Not all customers appreciated the ingenuity. Ars Technica reports, "HP Avoids Monetary Damages Over Bricked Printers in Class-Action Settlement." Reporter Scharon Harding writes:
"In December 2020, Mobile Emergency Housing Corp. and a company called Performance Automotive & Tire Center filed a class-action complaint against HP [PDF], alleging that the company ‘wrongfully compels users of its printers to buy and use only HP ink and toner supplies by transmitting firmware updates without authorization to HP printers over the Internet that lock out its competitors’ ink and toner supply cartridges.’ The complaint centered on a firmware update issued in November 2020; it sought a court ruling that HP’s actions broke the law, an injunction against the firmware updates, and monetary and punitive damages. ‘HP’s firmware "updates" act as malware—adding, deleting or altering code, diminishing the capabilities of HP printers, and rendering the competitors’ supply cartridges incompatible with HP printers,’ the 2020 complaint reads."
Yikes. The name HP gave this practice is almost Orwellian. We learn:
"HP calls using updates to prevent printers from using third-party ink and toner Dynamic Security. The term aims to brand the device bricking as a security measure. In recent years, HP has continued pushing this claim, despite security experts that Ars has spoken with agreeing that there’s virtually zero reason for printer users to worry about getting hacked through ink."
No kidding. After nearly four years of litigation, the parties reached a settlement. HP does not admit any wrongdoing and will not pay monetary relief to affected customers. It must, however, let users decline similar updates; well, those who own a few particular models, anyway. It will also put disclaimers about Dynamic Security on product pages. Because adding a couple lines to the fine print will surely do the trick.
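For readers wondering what “Dynamic Security” amounts to mechanically, here is a guess at the general shape of a lockout check, as a toy sketch. HP’s firmware is proprietary and undisclosed; the chip-ID allow-list, the function, and the IDs below are assumptions for illustration only, modeled on the complaint’s description of updates that lock out competitors’ cartridges.

```python
# Hypothetical sketch of a cartridge lockout check. The real
# firmware is closed; this only illustrates the pattern the
# complaint describes: an update that rejects supplies whose
# authentication chips are not on the vendor's list.

APPROVED_CHIP_IDS = {"HP-952XL-A1", "HP-952XL-B7"}  # invented IDs

def cartridge_check(chip_id, dynamic_security_enabled=True):
    if not dynamic_security_enabled:
        return "print"                      # pre-update behavior
    if chip_id in APPROVED_CHIP_IDS:
        return "print"
    return "refuse: supply problem"         # the "bricked" experience

print(cartridge_check("THIRDPARTY-001"))    # refused after the update
print(cartridge_check("THIRDPARTY-001", dynamic_security_enabled=False))
```

On this toy reading, the settlement’s opt-out for certain models is the equivalent of exposing the dynamic_security_enabled flag to the owner instead of flipping it silently over the Internet.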
Harding notes that, though this settlement does not include monetary restitution, other decisions have. Those few million dollars do not seem to have influenced HP to abolish the practice, however.
Cynthia Murrell, April 8, 2025
Looking Busy, While Slacking
April 3, 2025
Another dinobaby blog post. Eight decades and still thrilled when I point out foibles.
I am fascinated by people who delegate routine, courteous business functions to software and then to other people. The idea is that a busy person can accomplish much more if they are really busy but organized. I find this laughable.
In my experience, the people with full-time jobs with whom I interact are in a perpetual rush to go from one mostly pointless activity to their mobile phone and back again. Here’s an approach that has worked for some successful people. I exclude myself because I am an 80-year-old dinobaby loser.
The secret sauce consists of:
- Knowing what is important by day, week, month, and year. Do what’s important yourself. If you delegate, delegate with a clear understanding of the goal and expected outcome.
- Set priorities but have sufficient situational intelligence to adapt to the endlessly changing business environment. (Software just does stuff; it is not — despite the AI hype — inherently intelligent. And, no, I don’t want to discuss this perception of mine. I do not believe in made-up baloney from marketing people or pressured CEOs.)
- Recognize that how you interact with other people defines [a] your intelligence, [b] your management and social capabilities, and [c] your professional persona.
I had an email exchange a couple of days ago with a person who told me two weeks ago that an individual would contact me. The message now was, “Oh, just use our online appointment system and set up an available time.” No kidding. I am supposed to move from “we will contact you” to navigating their system and picking an available time. Sorry. That will not happen.
A day ago, I heard from a person who said 11 months ago, “I will call you early next week.” What I received was an email, and it arrived 11 months later, not a week later. Amazing. Both are considerably younger than I am, but neither person is aware of their behavior. This weird approach to business is the norm.
I read “Slack: The Art of Being Busy Without Getting Anything Done,” and it resonated with me. I have an idea: Send the link to the article to these two people who say, “Let’s have lunch” and never call. (That’s a Manhattan trope, by the way. It means, “Hey, you, I will never call you for lunch.”) Business life has become a “let’s have lunch” world. Saying something is tantamount to actually doing something.
The write up puts this in terms of a weird information sharing service which is a closed group social media thing. The write up says:
Slack brought channels and channels brought a level of almost voyeurism into what other teams were doing. I knew exactly what everyone was doing all the time, down to I knew where the marketing team liked to go for lunch. Responsiveness became the new corporate religion and I was a true believer.
To me, organizations that function in a way that makes a tool like Slack necessary have some management issues. But that’s the bias of a person who worked at a blue-chip consulting firm for longer than I thought humanly possible.
Here’s a passage I found interesting for a person paid to deliver outputs and meet objectives:
My days had become a never-ending performance of “work”. I was constantly talking about the work, planning the work, discussing the requirements of the work, and then in a truly Sisyphean twist, linking new people to old conversations where we had already discussed the work to get them up to speed on our conversation. All the while diligently monitoring my channels, a digital sentry ensuring no question went unanswered, no emoji not +1’d. That was it, that was the entire job.
What are the markers for this process of doing something that yields no deliverable that matches a job description or a task assigned by a manager to a worker?
Let me highlight a few I have noticed:
- Talking about doing replaces doing itself
- Meetings and follow ups are the work. It goes without saying that delivering an output that generates revenue is not part of the actual activity of the meeting and its follow up
- The mental effort required to do essentially meaningless tasks instead of producing satisfying, high-quality deliverables burns a person out. “There is no there there.” I am not talking about Oakland, California. I am talking about the actual value to the person of doing meaningful work and getting money and mental rewards.
- The organization delivers increasingly degraded outputs. One wordsmith invoked feces to describe how the entity deconstructs. Microsoft shipped an update that killed its AI wunderkind Copilot. More information about new malware hit my inbox today. The Epic data form for a routine visit lost the inputs I provided six months ago. My local bank charged my home checking account for over $600,000 and was unable to stop the automated fraud for two weeks.
Net net: Manage effectively and do actual work to deliver the outputs for which you are paid. Understand that both are hard. That’s why people pay you to do work. The craziness of pretending to work will make the worker crazy. If that type of person interacts with me, I just forget it. Dinobabies can do that.
Stephen E Arnold, April 3, 2025