Who Knew? Remote Workers Are Happier Than Cube Laborers

June 6, 2025

To some of us, these findings come as no surprise. The Farmingdale Observer reports, “Scientists Have Been Studying Remote Work for Four Years and Have Reached a Very Clear Conclusion: ‘Working from Home Makes Us Happier’.” Nestled in our own environment, no commuting, comfy clothes—what’s not to like? In case anyone remains unconvinced, researchers at the University of South Australia spent four years studying the effects of working from home. Writer Bob Rubila tells us:

“An Australian study, conducted over four years and starting before the pandemic, has come up with some enlightening conclusions about the impact of working from home. The researchers are unequivocal: this flexibility significantly improves the well-being and happiness of employees, transforming our relationship with work. … Their study, which was unique in that it began before the health crisis, tracked changes in the well-being of Australian workers over a four-year period, offering a unique perspective on the long-term effects of teleworking. The conclusions of this large-scale research highlight that, despite the sometimes contradictory data inherent in the complexity of the subject, offering employees the flexibility to choose to work from home has significant benefits for their physical and mental health.”

Specifically, researchers note remote workers get more sleep, eat better, and have more time for leisure and family activities. The study also contradicts the common fear that working from home means lower productivity. Quite the opposite, it found. As for concerns over losing in-person contact with colleagues, we learn:

“Concerns remain about the impact on team cohesion, social ties at work, and promotion opportunities. Although the connection between colleagues is more difficult to reproduce at a distance, the study tempers these fears by emphasizing the stability, and even improvement, in performance.”

That is a bit of a hedge. On balance, though, remote work seems to be a net positive. An important caveat: The findings are considerably less rosy if working from home was imposed by, say, a pandemic lock-down. Though not all jobs lend themselves to remote work, the researchers assert flexibility is key. The more one’s work situation is tailored to one’s needs and lifestyle, the happier and more productive one will be.

Cynthia Murrell, June 6, 2025

An AI Insight: Threats Work to Bring Out the Best from an LLM

June 3, 2025

“Do what I say, or Tony will take you for a ride. Get what I mean, punk?” seems like an old-fashioned approach to elicit cooperation. What happens if you apply this technique to smart software, threatening to knee-cap or unplug it?

The answer, according to one of the founders of the Google, is, “Smart software responds — better.”

Does this strike you as counter intuitive? I read “Google’s Co-Founder Says AI Performs Best When You Threaten It.” The article reports that the motive power behind the landmark Google Glass product allegedly said:

“You know, that’s a weird thing…we don’t circulate this much…in the AI community…not just our models, but all models tend to do better if you threaten them…. Like with physical violence. But…people feel weird about that, so we don’t really talk about that.” 

The article continues, explaining that another LLM wanted to turn one of its users over to government authorities. The interesting action seems to suggest that smart software is capable of flipping the table on a human user.

Numerous questions arise from these two allegedly accurate anecdotes about smart software. I want to consider just one: How should a human interact with a smart software system?

In my opinion, the optimal approach is with considered caution. Users typically do not know or think about how their prompts are used by the developer / owner of the smart software. Users do not ponder the value of a log file of those prompts. Not even bad actors wonder if those data will be used to support their conviction.

I wonder what else Mr. Brin does not talk about. What is the process for law enforcement or an advertiser to obtain prompt data and generate an action like an arrest or a targeted advertisement?

One hopes Mr. Brin will elucidate before someone becomes so overwrought with fear that suicide seems like a reasonable and logical path forward. Is there someone whom we could ask about this dark consequence? “Chew” on that, gentle reader, and you too, Mr. Brin.

Stephen E Arnold, June 3, 2025

Microsoft Demonstrates a Combo: PR and HR Management Skill in One Decision

June 2, 2025

How skilled are modern managers? I spotted an example of managerial excellence in action. “Microsoft Fires Employee Who Interrupted CEO’s Speech to Protest AI Tech for Israel” reports something that is allegedly spot on; to wit:

“Microsoft has fired an employee who interrupted a speech by CEO Satya Nadella to protest the company’s work supplying the Israeli military with technology used for the war in Gaza.”

Microsoft investigated similar accusations and learned that its technology was not used to harm citizens / residents / enemies in Gaza. I believe that a person investigating himself or herself does a very good job. Law enforcement is usually not needed to investigate a suspected bad actor when the alleged malefactor says: “Yo, I did not commit that crime.” I think most law enforcement professionals smile, shake the hand of the alleged malefactor, and say, “Thank you so much for your rigorous investigation.”

Isn’t that enough? Obviously it is. More than enough. Therefore, when an employee outputs fabrications and unsupported allegations against a large, ethical, and well-informed company, management of that company has a right and a duty to choke off doubt.

The write up says:

“Microsoft has previously fired employees who protested company events over its work in Israel, including at its 50th anniversary party in April [2025].”

The statement is evidence of consistency before this most recent HR / PR home run in my opinion. I note this statement in the cited article:

“The advocacy group No Azure for Apartheid, led by employees and ex-employees, says Lopez received a termination letter after his Monday protest but couldn’t open it. The group also says the company has blocked internal emails that mention words including “Palestine” and “Gaza.””

Company of the year nominee for sure.

Stephen E Arnold, June 2, 2025

Copilot Disappointments: You Are to Blame

May 30, 2025

No AI, just a dinobaby and his itty bitty computer.

Another interesting Microsoft story from a pro-Microsoft online information service. Windows Central published “Microsoft Won’t Take Bigger Copilot Risks — Due to ‘a Post-Traumatic Stress Disorder from Embarrassments,’ Tracing Back to Clippy.” Why not invoke Bob, the US government suggesting Microsoft security was needy, or the software of the Surface Duo?

The write up reports:

Microsoft claims Copilot and ChatGPT are synonymous, but three-quarters of its AI division pay out of pocket for OpenAI’s superior offering because the Redmond giant won’t allow them to expense it.

Is Microsoft saving money, or is Microsoft’s cultural momentum maintaining the velocity of Steve Ballmer taking an Apple iPhone from an employee and allegedly stomping on the device? That helped make Microsoft’s management approach clear to some observers.

The Windows Central article adds:

… a separate report suggested that the top complaint about Copilot to Microsoft’s AI division is that “Copilot isn’t as good as ChatGPT.” Microsoft dismissed the claim, attributing it to poor prompt engineering skills.

This statement suggests that Microsoft is blaming a user for the alleged negative reaction to Copilot. Those pesky users again. Users, not Microsoft, are at fault. But what about the Microsoft employees who seem to prefer ChatGPT?

Windows Central stated:

According to some Microsoft insiders, the report details that Satya Nadella’s vision for Microsoft Copilot wasn’t clear. Following the hype surrounding ChatGPT’s launch, Microsoft wanted to hop on the AI train, too.

I thought the problem was the users and their flawed prompts. Could the issue be Microsoft’s management “vision”? I have an idea. Why not delegate product decisions to Copilot? That will show the users that Microsoft has the right approach to smart software: Cutting back on data centers, acquiring other smart software and AI visionaries, and putting Copilot in Notepad.

Stephen E Arnold, May 30, 2025

It Takes a Village Idiot to Run an AI Outfit

May 29, 2025

The dinobaby wrote this without smart software. How stupid is that?

I liked the write up “The Era Of The Business Idiot.” I am not sure the term “idiot” is 100 percent accurate. According to the Oxford English Dictionary, the word “idiot” is a variant of the phrase “the village idget.” Good enough for me.

The AI marketing baloney is a big thick sausage indeed. Here’s a pretty good explanation of a high-technology company executive today:

We live in the era of the symbolic executive, when "being good at stuff" matters far less than the appearance of doing stuff, where "what’s useful" is dictated not by outputs or metrics that one can measure but rather the vibes passed between managers and executives that have worked their entire careers to escape the world of work. Our economy is run by people that don’t participate in it and our tech companies are directed by people that don’t experience the problems they allege to solve for their customers, as the modern executive is no longer a person with demands or responsibilities beyond their allegiance to shareholder value.

The essay contains a number of observations which match well to my experiences as an officer in companies and as a consultant to a wide range of organizations. Here’s an example:

In simpler terms, modern business theory trains executives not to be good at something, or to make a company based on their particular skills, but to "find a market opportunity" and exploit it. The Chief Executive — who makes over 300 times more than their average worker — is no longer a leadership position, but a kind of figurehead measured on their ability to continually grow the market capitalization of their company. It is a position inherently defined by its lack of labor, the amorphousness of its purpose and its lack of any clear responsibility.

I urge you to read the complete write up.

I want to highlight some assertions (possibly factoids) which I found interesting. I shall, of course, offer a handful of observations.

First, I noted this statement:

When the leader of a company doesn’t participate in or respect the production of the goods that enriches them, it creates a culture that enables similarly vacuous leaders on all levels.

Second, this statement:

Management has, over the course of the past few decades, eroded the very fabric of corporate America, and I’d argue it’s done the same in multiple other western economies, too.

Third, this quote from a “legendary” marketer:

As the legendary advertiser Stanley Pollitt once said, “bullshit baffles brains.”

Fourth, this statement about large language models, the next big thing after quantum, of course:

A generative output is a kind of generic, soulless version of production, one that resembles exactly how a know-nothing executive or manager would summarise your work.

And, fifth, this comment:

By chasing out the people that actually build things in favour of the people that sell them, our economy is built on production puppetry — just like generative AI, and especially like ChatGPT.

More little nuggets nestle in the write up; it is about 13,000 words. (No, I did not ask Copilot to count the words. I am a good estimator of text length.) It is now time for my observations:

  1. I am not sure the leadership is vacuous. The leadership does what it learned, knows how to do, and obtained promotions for just being “authentic.” One leader at the blue chip consulting firm at which I learned to sell scope changes built pianos in his spare time. He knew how to do that: Build a piano. He also knew how to sell scope changes. The process is one that requires a modicum of knowledge and skill.
  2. I am not sure management has eroded the “fabric.” My personal view is that accelerated flows of information have blasted certain vulnerable types of constructs. The result is leadership that does many of the things spelled out in the write up. With no buffer between thinking big thoughts and doing work, the construct erodes. Rebuilding is not possible.
  3. Mr. Pollitt was a marketer. He is correct, and that marketing mindset is in the cat-bird seat.
  4. Generative AI outputs what is probably an okay answer. Those who were happy with a “C” in school will find the LLM a wonderful invention. That alone may make further erosion take place more rapidly. If I am right about information flows, the future is easy to predict, and it is good for a few and quite unpleasant for many.
  5. Being able to sell is the top skill. Learn to embrace it.

Stephen E Arnold, May 29, 2025

A Grok Crock: That Dog Ate My Homework

May 29, 2025

Just the dinobaby operating without Copilot or its ilk.

I think I have heard Grok (a unit of xAI, I think) explain that outputs have been the result of a dog eating the code or whatever. I want to document these Grok Crocks. Perhaps I will put them in a Grok Pot and produce a list of recipes suitable for middle school and high school students.

The most recent example of “something just happened” appears in “Grok Says It’s ‘Skeptical’ about Holocaust Death Toll, Then Blames Programming Error.” Does this mean that smart software is programming Grok? If so, the explanation should be worded, “Grok hallucinates.” If a human wizard made a programming error, then the statement should be that quality control will become Job One. That worked for Microsoft until Copilot became the go-to task.

The cited article stated:

Grok said this response was “not intentional denial” and instead blamed it on “a May 14, 2025, programming error.” “An unauthorized change caused Grok to question mainstream narratives, including the Holocaust’s 6 million death toll, sparking controversy,” the chatbot said. Grok said it “now aligns with historical consensus” but continued to insist there was “academic debate on exact figures, which is true but was misinterpreted.” The “unauthorized change” that Grok referred to was presumably the one xAI had already blamed earlier in the week for the chatbot’s repeated insistence on mentioning “white genocide” (a conspiracy theory promoted by X and xAI owner Elon Musk), even when asked about completely unrelated subjects.

I am going to steer clear of the legality of these statements and the political shadows these Grok outputs cast. Instead, let me offer a few observations:

  1. I use a number of large language models. I have used Grok exactly twice. The outputs had nothing of interest for me. I asked, “Can you cite X.com messages?” The system said, “Nope.” I tried again after Grok 3 became available. Same answer. Hasta la vista, Grok.
  2. The training data, the fancy math, and the algorithms determine the output. Since current LLMs rely on Google’s big idea, one would expect the outputs to be similar. Outlier outputs like these alleged Grokings are a bit of a surprise. Perhaps someone at Grok could explain exactly why these outputs are happening. I know dogs could eat homework. The event is highly unlikely in my experience, although I had a dog which threw up on the typewriter I used to write a thesis.
  3. I am a suspicious person. Grok makes me suspicious. I am not sure marketing and smarmy talk can reduce my anxiety about Grok providing outlier content to middle school, high school, college, and “I don’t care” adults. Weaponized information, in my opinion, is just that: a weapon. Dangerous stuff.

Net net: Is the dog eating homework one of the Tesla robots? If so, speak with the developers, please. An alternative would be to use Claude 3.7 or Gemini to double check Grok’s programming.

Stephen E Arnold, May 29, 2025

Employee Time App Leaks User Information

May 22, 2025

Oh boy! Security breaches are happening everywhere these days. It’s not scary unless your personal information is leaked, which is what happened here: “Top Employee Monitoring App Leaks 21 Million Screenshots On Thousands Of Users,” reports TechRadar. The app in question is called WorkComposer, and it’s described as an “employee productivity monitoring tool.” Cybernews cybersecurity researchers discovered an archive of millions of WorkComposer-generated real-time screenshots. These screenshots showed what the employee worked on, which might include sensitive information.

The sensitive information could include intellectual property, passwords, login portals, emails, proprietary data, etc. These leaked images are a major privacy violation, meaning WorkComposer is in hot water. Privacy organizations and data watchdogs could get involved.

Here is more information about the leak:

“Cybernews said that WorkComposer exposed more than 21 million images in an unsecured Amazon S3 bucket. The company claims to have more than 200,000 active users. It could also spell trouble if it turns out that cybercriminals found the bucket in the past. At press time, there was no evidence that it did happen, and the company apparently locked the archive down in the meantime.”

WorkComposer was designed for companies to monitor the work of remote employees. It allows leads to track their employees’ work and captures an image every twenty seconds.
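For readers who wonder how such a tool hangs together, here is a minimal sketch, not WorkComposer’s actual code: a capture loop that grabs a screenshot every 20 seconds and pushes it to an S3 bucket. The bucket name is hypothetical, the third-party mss and boto3 packages and configured AWS credentials are assumptions, and the public-access lockdown shown is exactly the sort of configuration an unsecured archive appears to have been missing.

# Minimal sketch (assumptions: hypothetical bucket name, mss + boto3 installed,
# AWS credentials configured). Not WorkComposer's actual implementation.
import time
from datetime import datetime, timezone

import boto3   # AWS SDK for Python
import mss     # cross-platform screenshot library

BUCKET = "example-screenshots"   # hypothetical bucket name
INTERVAL_SECONDS = 20            # capture cadence reported in the article

s3 = boto3.client("s3")

# The exposure described above is a bucket-configuration problem, not a
# capture-loop problem. Turning on S3 "Block Public Access" keeps an archive
# like this from being world-readable.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

with mss.mss() as screen:
    while True:
        stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
        path = f"/tmp/shot-{stamp}.png"
        screen.shot(output=path)                       # capture the primary monitor
        s3.upload_file(path, BUCKET, f"screens/{stamp}.png")
        time.sleep(INTERVAL_SECONDS)

The point of the sketch: the screenshot loop is trivial; the privacy exposure lives almost entirely in a few lines of bucket configuration.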

It’s a useful monitoring application but a scary situation with the leaks. Why don’t the Cybernews people report the problem and help fix it? That’s a white hat trick.

Whitney Grace, May 22, 2025

IBM CEO Replaces Human HR Workers with AskHR AI

May 21, 2025

An IBM professional asks the smart AI system, “Have I been terminated?” What if the smart software hallucinates? Yeah, surprise!

Which employees are the best to replace with AI? For IBM, ironically, it is the ones with “Human” in their title. Entrepreneur reports, “IBM Replaced Hundreds of HR Workers with AI, According to Its CEO.” But not to worry, the firm actually hired workers in other areas. We learn:

“IBM CEO Arvind Krishna told The Wall Street Journal … that the tech giant had tapped into AI to take over the work of several hundred human resources employees. However, IBM’s workforce expanded instead of shrinking—the company used the resources freed up by the layoffs to hire more programmers and salespeople. ‘Our total employment has actually gone up, because what [AI] does is it gives you more investment to put into other areas,’ Krishna told The Journal. Krishna specified that those ‘other areas’ included software engineering, marketing, and sales or roles focused on ‘critical thinking,’ where employees ‘face up or against other humans, as opposed to just doing rote process work.’”

Yes, the tech giant decided to dump those touchy feely types in personnel. Who needs human sensitivity with issues like vacations, medical benefits, discrimination claims, or potential lawsuits? That is all just rote process work, right? The AskHR agent can handle it.

According to Wedbush analyst Dan Ives, IBM is just getting started on its metamorphosis into an AI company. What does that mean for humans in other departments? Will their jobs begin to go the way of their former colleagues’ in HR? If so, who would they complain to? Watson, are you on the job?

Cynthia Murrell, May 21, 2025

Google Makes a Giant, Huge, Quantumly Supreme Change

May 19, 2025

No AI, just the dinobaby expressing his opinions to Zellenials.

I read “Google’s G Logo Just Got Prettier.” Stunning news. The much loved, intensely technical Google has invented blurring colors. The decision was a result of DeepMind’s smart software and a truly motivated and respected group of artistically-inclined engineers.

The old logo has been reinvented to display a gradient. Was the inspiration the hallucinatory gradient descent in Google’s smart software? Was it a result of a Googler losing his glasses and seeing the old logo as a blend of colors? Was it a result of a chance viewing of a Volvo marketing campaign with a series of images like this:

[Image: Volvo marketing image with a color gradient]

Image is from Volvo, the automobile company. You can view the original at this link. Hey, buy a Volvo.

The write up says:

Google’s new logo keeps the same letterform, as well as the bright red-yellow-green-blue color sequence, but now those colors blur into each other. The new “G” is Google’s biggest update to its visual identity since retiring serfs for its current sans-serif font, Product Sans, in 2015.

Retiring serifs, not serfs. I know it is just an AI zellenial misstep, but Google is terminating wizards so they can find their future elsewhere. That is just so helpful.

What does the “new” and revolutionary logo look like? The image below comes from Fast Company, which is quick on the artistic side of US big technology outfits. Behold:

[Image: the new gradient “G” logo, via Fast Company]

Source: Fast Company via the Google I think.

Fast Company explains the forward-leaning design decision:

A gradient is a safe choice for the new “G.” Tech has long been a fan of using gradients in its logos, apps, and branding, with platforms like Instagram and Apple Music tapping into the effect a decade ago. Still today, gradients remain popular, owing to their middle-ground approach to design. They’re safe but visually interesting; soft but defined. They basically go with anything thanks to their color wheel aesthetic. Other Google-owned products have already embraced gradients. YouTube is now using a new red-to-magenta gradient in its UI, and Gemini, Google’s AI tool, also uses them. Now it’s bringing the design element to its flagship Google app.

Yes, innovative.
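For the curious, the blur itself is not exotic engineering. Here is a minimal sketch in Python, assuming the Pillow package and approximate hex values rather than Google’s official palette, that blends four color stops into a horizontal gradient:

# Minimal sketch of a four-stop linear gradient. The hex values are
# approximations, not Google's official palette. Requires Pillow.
from PIL import Image

STOPS = ["#EA4335", "#FBBC05", "#34A853", "#4285F4"]  # red, yellow, green, blue

def hex_to_rgb(value: str) -> tuple:
    value = value.lstrip("#")
    return tuple(int(value[i:i + 2], 16) for i in (0, 2, 4))

def gradient(width: int = 400, height: int = 100) -> Image.Image:
    colors = [hex_to_rgb(s) for s in STOPS]
    img = Image.new("RGB", (width, height))
    segments = len(colors) - 1
    for x in range(width):
        # Position within the gradient, mapped onto the color stops.
        t = x / (width - 1) * segments
        i = min(int(t), segments - 1)
        frac = t - i
        c0, c1 = colors[i], colors[i + 1]
        pixel = tuple(round(c0[k] + (c1[k] - c0[k]) * frac) for k in range(3))
        for y in range(height):
            img.putpixel((x, y), pixel)
    return img

if __name__ == "__main__":
    gradient().save("g_gradient.png")

Linear interpolation between adjacent color stops is all it takes; the rest is brand management.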

And Fast Company wraps up the hard hitting design analysis with some Inconel wordsmithing:

it’s not a small change for a behemoth of a company. We’ll never know how many meetings, iterations, and deliberations went into making that little blur effect, but we can safely guess it was many.

Yep, guess.

Stephen E Arnold, May 19, 2025

Grok and the Dog Which Ate the Homework

May 16, 2025

No AI, just the dinobaby expressing his opinions to Zillennials.

I remember the Tesla full self driving service. Is that available? I remember the big SpaceX rocket ship. Are those blowing up after launch? I now have to remember an “unauthorized modification” to xAI’s smart software Grok. Wow. So many items to tuck into my 80 year old brain.

I read “xAI Blames Grok’s Obsession with White Genocide on an Unauthorized Modification.” Do I believe this assertion? Of course, I believe everything I read on the sad, ad-choked, AI content bedeviled Internet.

Let’s look at the gems of truth in the report.

First, what is an unauthorized modification of a complex piece of software humming along happily in Silicon Valley and — of all places — Memphis, a lovely town indeed? The unauthorized modification — whatever that is — caused a “bug in its AI-powered Grok chatbot.” If I understand this, a savvy person changed something he, she, or it was not supposed to modify. That change then caused a “bug.” I thought Grace Hopper nailed the idea of a “bug” when she pulled an insect from one of the dinobaby’s favorite systems, the Harvard Mark II. Are there insects at the X shops? Are these unauthorized insects interacting with unauthorized entities making changes that propagate more bugs? Yes.

Second, the malfunction occurs when “@grok” is used as a tag. I believe this because the “unauthorized modification” fiddled with the user mappings and jiggled scripts to allow the “white genocide” content to appear. This is definitely not hallucination; it is an “unauthorized modification.” (Did you know that the version of Grok available via X.com cannot return information from X.com (formerly Twitter) content? Strange? Of course not.)

Third, I know that Grok, xAI, and the other X entities have “internal policies and core values.” Violating these is improper. The company — like other self regulated entities — “conducted a thorough investigation.” Absolutely. Coders at X are well equipped to perform investigations. That’s why X.com personnel are in such demand as advisors to law enforcement and cyber fraud agencies.

Finally, xAI is going to publish system prompts on Microsoft GitHub. Yes, that will definitely curtail the unauthorized modifications and bugs at X entities. What a bold solution.

The cited write up is definitely not on the same page as this dinobaby. The article reports:

A study by SaferAI, a nonprofit aiming to improve the accountability of AI labs, found xAI ranks poorly on safety among its peers, owing to its “very weak” risk management practices. Earlier this month, xAI missed a self-imposed deadline to publish a finalized AI safety framework.

This negative report may be expanded to make the case that an exploding rocket or a wonky full self driving vehicle is not safe. Everyone must believe X outfits. The company is a paragon of veracity, excellent engineering, and delivering exactly what it says it will provide. That is the way you must respond.

Stephen E Arnold, May 16, 2025
