Why Customer Trust of Chatbots Does Not Matter
July 22, 2025
Just a dinobaby working the old-fashioned way, no smart software.
The need for a winner is pile driving AI into consumer online interactions. But like the piles under the San Francisco Leaning Tower of Insurance Claims, the piles cannot stop the sag, the tilt, and the sight of a giant edifice going off plumb.
I read an article in the “real” news service called Fox News. The story’s title is “Chatbots Are Losing Customer Trust Fast.” The write up is the work of the CyberGuy, so you know it is on the money. The write up states:
While companies are excited about the speed and efficiency of chatbots, many customers are not. A recent survey found that 71% of people would rather speak with a human agent. Even more concerning, 60% said chatbots often do not understand their issue. This is not just about getting the wrong answer. It comes down to trust. Most people are still unsure about artificial intelligence, especially when their time or money is on the line.
So what? Customers are essentially irrelevant. As long as the outfit hits its real or imaginary revenue goals, the needs of the customer are not germane. If you don’t believe me, navigate to a big online service like Amazon and try to find the customer service number. Let me know how that works out.
Because managers cannot “fix” human-centric systems, using AI is a way out. Letting AI do it is a heck of a lot easier than figuring out a workflow, working with humans, and responding to customer issues. The old excuse was that middle management was not needed when decisions were pushed down to the “workers.”
AI flips that. Managerial ranks have been reduced. AI decisions come from “leadership” or what I call carpetland. AI solves leadership’s problems: actually managing, cutting costs, and having good news for investor communications.
The customers don’t want to talk to software. The customer wants to talk to a human who can change a reservation without automatically billing for a service charge. The customer wants a person to adjust a double billing for a hotel doing business Snap Commerce Holdings. The customer wants a fair shake.
AI does not do fair. AI does baloney, confusion, errors, and hallucinations. I tried a new service which put Google Gemini front and center. I asked one question and got an incomplete and erroneous answer. That’s AI today.
The CyberGuy’s article says:
If a company is investing in a chatbot system, it should track how well that system performs. Businesses should ask chatbot vendors to provide real-world data showing how their bots compare to human agents in terms of efficiency, accuracy and customer satisfaction. If the technology cannot meet a high standard, it may not be worth the investment.
This is simply not going to happen. Deployment equals cost savings. Only when the money goes away will someone in leadership take action. Why? AI has put many outfits in a precarious position. Big money has been spent. Much of that money comes from other people. Those “other people” want profits, not excuses.
I heard a sci-fi rumor that suggests Apple can buy OpenAI and catch up. Apple can pay OpenAI’s investors and make good on whatever promissory payments have been offered by that firm’s leadership. Will that solve the problem?
Nope. The AI firms talk about customers but don’t care. Dealing with customers abused by intentionally shady business practices cooked up by a committee that has to do something is too hard and too costly. Let AI do it.
If the CyberGuy’s write up is correct, some excitement is speeding down the information highway toward some well-known smart software companies. A crash at one of the big boys’ junctions will cause quite a bit of collateral damage.
Whom do you trust? Humans or smart software?
Stephen E Arnold, July 22, 2025
What Did You Tay, Bob? Clippy Did What!
July 21, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
I was delighted to read “OpenAI Is Eating Microsoft’s Lunch.” I don’t care who or what wins the great AI war. So many dollars have been bet that hallucinating software is the next big thing. Most content flowing through my dinobaby information system is political. I think this food story is a refreshing change.
So what’s for lunch? The write up seems to suggest that Sam AI-Man has not only snagged a morsel from the Softies’ lunch pail but Sam AI-Man might be prepared to snap at those delicate lady fingers too. The write up says:
ChatGPT has managed to rack up about 10 times the downloads that Microsoft’s Copilot has received.
Are these data rock solid? Probably not, but the idea that two “partners” who forced Googzilla to spasm each time its Code Red lights flashed are not cooperating is fascinating. The write up points out that when Microsoft and OpenAI were deeply in love, Microsoft had the jump on the smart software contenders. The article adds:
Despite that [early lead], Copilot sits in fourth place when it comes to total installations. It trails not only ChatGPT, but Gemini and Deepseek.
Shades of Windows Phone. Another next big thing muffed by the bunnies in Redmond. How could an innovation powerhouse like Microsoft fail in the flaming maelstrom of burning cash that is AI? Microsoft’s long history of innovation adds a turbo boost to its AI initiatives. The Bob-, Clippy-, and Tay-inspired Copilot is available to billions of Microsoft Windows users. It is … everywhere.
The write up explains the problem this way:
Copilot’s lagging popularity is a result of mismanagement on the part of Microsoft.
This is an amazing insight, isn’t it? Here’s the stunning wrap up to the article:
It seems no matter what, Microsoft just cannot make people love its products. Perhaps it could try making better ones and see how that goes.
To be blunt, the problem at Microsoft is evident in many organizations. For example, we could ask IBM Watson what Microsoft should do. We could fire up Deepseek and get some China-inspired insight. We could do a Google search. No, scratch that. We could do a Yandex.ru search and ask, “Microsoft AI strategy repair.”
I have a more obvious dinobaby suggestion: “Make Microsoft smaller.” And play well with others. Silly ideas, I know.
Stephen E Arnold, July 21, 2025
Xooglers Reveal Googley Dreams with Nightmares
July 18, 2025
Just a dinobaby without smart software. I am sufficiently dull without help from smart software.
Fortune Magazine published a business school analysis of a Googley dream and its nightmares titled “As Trump Pushes Apple to Make iPhones in the U.S., Google’s Brief Effort Building Smartphones in Texas 12 Years Ago Offers Critical Lessons.” The author, Mr. Kopytoff, states:
Equivalent in size to nearly eight football fields, the plant began producing the Google Motorola phones in the summer of 2013.
Mr. Kopytoff notes:
Just a year later, it was all over. Google sold the Motorola phone business and pulled the plug on the U.S. manufacturing effort. It was the last time a major company tried to produce a U.S. made smartphone.
Yep, those Googlers know how to do moon shots. They also produce some digital rocket ships that explode on the launch pads, never achieving orbit.
What happened? You will have to read the pork loin write up, but the Fortune editors did include a summary of the main point:
Many of the former Google insiders described starting the effort with high hopes but quickly realized that some of the assumptions they went in with were flawed and that, for all the focus on manufacturing, sales simply weren’t strong enough to meet the company’s ambitious goals laid out by leadership.
My translation of Fortune-speak is: “Google was really smart. Therefore, the company could do anything. Then when the genius leadership gets the bill, a knee jerk reaction kills the project and moves on as if nothing happened.”
Here’s a passage I found interesting:
One of the company’s big assumptions about the phone had turned out to be wrong. After betting big on U.S. assembly, and waving the red, white, and blue in its marketing, the company realized that most consumers didn’t care where the phone was made.
Is this statement applicable to people today? It seems that I hear more about costs than I did last year. At a 4th of July hoedown, I heard:
- “The prices at Kroger go up each week.”
- “I wanted to trade in my BMW but the prices were crazy. I will keep my car.”
- “I go to the Dollar Store once a week now.”
What’s this got to do with the Fortune tale of Google wizards’ leadership goof and Apple (if it actually tries to build an iPhone in Cleveland)?
Answer: Costs and expertise. Thinking one is smart and clever is not enough. One has to do more than spend big money, talk in a supercilious manner, and go silent when the crazy “moon shot” explodes before reaching orbit.
But the real moral of the story is that it is political. That may be more problematic than the Google fail and Apple’s bitter cider. It may be time to harvest the fruit of tech leaderships’ decisions.
Stephen E Arnold, July 18, 2025
New Business Tactics from Google and Meta: Fear-Fueled Management
July 8, 2025
No smart software. Just a dinobaby and an old laptop.
I like to document new approaches to business rules or business truisms. Examples range from truisms like “targeting is effective” to “two objectives is no objectives.” Today, July 1, 2025, I spotted anecdotal evidence of two new “rules.” Both seem custom tailored to the GenX, GenY, GenZ, and GenAI approach to leadership. Let’s look at each briefly and then consider how effective these are likely to be.
The first example of new management thinking appears in “Google Embraces AI in the Classroom with New Gemini Tools for Educators, Chatbots for Students, and More.” The write up explains that Google has:
introduced more than 30 AI tools for educators, a version of the Gemini app built for education, expanded access to its collaborative video creation app Google Vids, and other tools for managed Chromebooks.
Forget the one objective idea when it comes to products. Just roll out more than two dozen AI services. That will definitely catch the attention of grade, middle school, high school, junior college, and university teachers in the US and elsewhere. I am not a teacher, but I know that when I attend neighborhood get-togethers, the teachers at these functions often ask me about smart software. From these interactions, very few understand that smart software comes in different “flavors.” AI is still a mostly unexplored innovation. But Google is chock-full of smart people who certainly know how teachers can rush to two dozen new products and services in a jiffy.
The second rule is that organizations are hierarchical. Assuming this is the approach, one person should lead an organization, and then one person should lead a unit, and one person should lead a department, and so on. This is the old Great Chain of Being slapped on an enterprise. My father worked in this type of company, and he liked it. He explained how work flowed from one box on the organization chart to another. With everything working the way my father liked things to work, bulldozers and mortars appeared on the loading docks. Since I grew up with this approach, it made sense to me. I must admit that I still find this type of set up appealing, and I am usually less than thrilled to work in a matrix-management, let’s-just-roll-with-it set up.
In “Nikita Bier, The Founder Of Gas And TBH, Who Once Asked Elon Musk To Hire Him As VP Of Product At Twitter, Has Joined X: ‘Never Give Up‘” I learned that Meta is going with the two bosses approach to smart software. The write up reports as real news as opposed to news release news:
On Monday, Bier announced on X that he’s officially taking the reins as head of product. "Ladies and gentlemen, I’ve officially posted my way to the top: I’m joining @X as Head of Product," Bier wrote.
Earlier in June 2025, Mark Zuckerberg pumped money into Scale AI (a data labeling outfit) and hired Alexandr Wang to be the top dog of Meta’s catch-up-in-AI initiative. It appears that Meta is going to give the “two bosses are better than one” approach its stamp of management genius approval. OpenAI appeared to emulate this approach, and it seemed to have spawned a number of competitors and created an environment in which huge sums of money could attract AI wizards to Mr. Zuckerberg’s social castle.
The first new management precept is that an organization can generate revenue by shotgunning more than two dozen new products and services to what Google sees as the education market. The outmoded management approach would focus on one product and service, provide that to a segment of the education market with some money to spend and a problem to solve. Then figure out how to make that product more useful and grow paying customers in that segment. That’s obviously stupid and not GenAI. The modern approach is to blast that bird shot somewhere in the direction of a big fuzzy market and go pick up the dead ducks for dinner.
The second new management precept is to have an important unit, a sense of desperation born from failure, and put two people in charge. I think this can work, but in most of the successful outfits to which I have been exposed, there is one person at the top. He or she may be floating above the fray, but the idea is that someone, in theory, is in charge.
Several observations are warranted:
- The chaos approach to building a business has taken root and begun to flower at Google and Meta. Out with the old and in with the new. I am willing to wait and see what happens because when either success or failure arrives, the stories of VCs jumping from tall buildings or youthful managers buying big yachts will circulate.
- The innovations in management at Google and Meta suggest to me a bit of desperation. Both companies perceive that each is falling behind or in danger of losing. That perception may be accurate. If the AI payoff is not evident, Google and Meta may find themselves paddling up the river, not floating down it.
- The two innovations viewed as discrete actions are expensive, risky, and illustrative of the failure of management at both firms. Employees, stakeholders, and users have a lot to win or lose.
I heard a talk by someone who predicted that traditional management consulting would be replaced by smart software. In the blue chip firm in which I worked years ago, management decisions like these would be guaranteed to translate to old-fashioned, human-based consulting projects.
In today’s world, decisions by “leadership” are unlikely to be remediated by smart software. Fixing up the messes will require individuals with experience, knowledge, and judgment.
As Julius Caesar allegedly said:
In summo periculo timor misericordiam non recipit.
This means something along the lines of, “In situations of danger, fear feels no pity.” These new management rules suggest that both Google’s and Meta’s “leadership” are indeed fearful and grandstanding in order to overcome those inner doubts. The decisions to go against conventional management methods seem obvious and logical to them. To others, perhaps the “two bosses” and “a blast of AI products and services” approaches are just ill advised or not informed?
Stephen E Arnold, July 8, 2025
Technology Firms: Children of Shoemakers Go Barefoot
July 7, 2025
If even the biggest of Big Tech firms are not safe from cyberattacks, who is? Investor news site Benzinga reveals, “Apple, Google and Facebook Among Services Exposed in Massive Leak of More than 16 Billion Login Records.” The trove represents one of the biggest exposures of personal data ever, writer Murtuza J. Merchant tells us. We learn:
“Cybersecurity researchers have uncovered 30 massive data collections this year alone, each containing tens of millions to over 3.5 billion user credentials, Cybernews reported. These previously unreported datasets were briefly accessible through misconfigured cloud storage or Elasticsearch instances, giving the researchers just enough time to detect them, though not enough to trace their origin. The findings paint a troubling picture of how widespread and organized credential leaks have become, with login information originating from malware known as infostealers. These malicious programs siphon usernames, passwords, and session data from infected machines, usually structured as a combination of a URL, username, and password.”
Ah, advanced infostealers. One of the many handy tools AI has made possible. The write-up continues:
“The leaked credentials span a wide range of services from tech giants like Apple, Facebook, and Google, to platforms such as GitHub, Telegram, and various government portals. Some datasets were explicitly labeled to suggest their source, such as ‘Telegram’ or a reference to the Russian Federation. … Researchers say these leaks are not just a case of old data resurfacing.”
Not only that, the data’s format is cybercriminal-friendly. Merchant writes:
“Many of the records appear recent and structured in ways that make them especially useful for cybercriminals looking to run phishing campaigns, hijack accounts, or compromise corporate systems lacking multi-factor authentication.”
But it is the scale of these datasets that has researchers most concerned. The average collection held 500 million records, while the largest had more than 3.5 billion. What are the chances your credentials are among them? The post suggests the usual, most basic security measures: complex and frequently changed passwords and regular malware scans. But surely our readers are already observing these best practices, right?
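The danger of these dumps is easy to make concrete. Here is a minimal sketch, assuming records in the URL, username, and password shape the researchers describe (the function name and sample lines below are invented for illustration): a password reused across sites lets one leaked credential unlock many accounts.

```python
# Hypothetical sketch: why infostealer-style credential dumps are dangerous.
# Records are assumed to look like "URL:username:password", the structure
# the researchers describe. Reused passwords let an attacker pivot from
# one leaked site to every other account sharing that credential pair.
from collections import defaultdict

def find_reused_passwords(records):
    """Group leaked lines by (username, password) and report
    credential pairs that appear on more than one site."""
    seen = defaultdict(set)
    for line in records:
        # rsplit from the right so colons inside the URL are preserved
        url, username, password = line.rsplit(":", 2)
        seen[(username, password)].add(url)
    return {cred: sites for cred, sites in seen.items() if len(sites) > 1}

# Invented sample records for illustration only
sample = [
    "https://mail.example.com:alice:hunter2",
    "https://shop.example.net:alice:hunter2",
    "https://forum.example.org:bob:s3cret!",
]
reused = find_reused_passwords(sample)
```

In this toy dump, alice’s reused password surfaces immediately, which is exactly why unique passwords and multi-factor authentication are the standard advice.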
Cynthia Murrell, July 7, 2025
Read This Essay and Learn Why AI Can Do Programming
July 3, 2025
No AI, just the dinobaby expressing his opinions to Zillennials.
I found, entirely by accident since Web search does not work too well, an essay titled “Ticket-Driven Development: The Fastest Way to Go Nowhere.” I would have used a different title; for example, “Smart Software Can Do Faster and Cheaper Code” or “Skip Computer Science. Be a Plumber.” Despite my lack of good vibe coding from the essay’s title, I did like the information in the write up. The basic idea is that managers just want throughput. This is not news.
The most useful segment of the write up is this passage:
You don’t need a process revolution to fix this. You need permission to care again. Here’s what that looks like:
- Leave the code a little better than you found it — even if no one asked you to.
- Pair up occasionally, not because it’s mandated, but because it helps.
- Ask why. Even if you already know the answer. Especially then.
- Write the extra comment. Rename the method. Delete the dead file.
- Treat the ticket as a boundary, not a blindfold.
Because the real job isn’t closing tickets; it’s building systems that work.
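The checklist above is easy to picture in code. A hypothetical before-and-after sketch (the function, constant, and tax rate are invented for illustration) of what “leave the code a little better than you found it,” “write the extra comment,” and “rename the method” can look like in practice:

```python
# Before: the kind of code a ticket-closer leaves behind.
# Terse name, magic number, no explanation.
def calc(x):
    return x * 1.08

# After: the rename, the named constant, and the extra comment.
SALES_TAX_RATE = 1.08  # hypothetical 8% tax; the magic number now has a name

def apply_sales_tax(price):
    """Return price with sales tax applied. Renamed from `calc`
    so the next reader does not have to guess what it does."""
    return price * SALES_TAX_RATE
```

Same behavior, slightly clearer code. Nothing here requires a process revolution, just permission to care.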
I wish to offer several observations:
- Repetitive, boring, mindless work is perfect for smart software
- Implementing dot points one to five will result in a reprimand, transfer to a salubrious location, or termination with extreme prejudice
- You will spend long hours with an AI version of an old-fashioned psychiatrist because you will go crazy.
After reading the essay, I realized that the managerial approach, the “ticket-driven workflow”, and the need for throughput applies to many jobs. Leadership no longer has middle managers who manage. When leadership intervenes, one gets [a] consultants or [b] knee-jerk decisions or mandates.
The crisis is in organizational set up and management. The developers? Sorry, you have been replaced. Say, “hello” to our version of smart software. Her name is No Kidding.
Stephen E Arnold, July 3, 2025
AI Management: Excellence in Distancing Decisions from Consequences
July 2, 2025
Smart software involved in the graphic, otherwise just an addled dinobaby.
This write up “Exclusive: Scale AI’s Spam, Security Woes Plagued the Company While Serving Google” raises two minor issues and one that is not called out in the headline or the subtitle:
$14 billion investment from Meta struggled to contain ‘spammy behavior’ from unqualified contributors as it trained Gemini.
Who can get excited about a workflow and editorial quality issue? What is “quality”? In one of my Google monographs I pointed out that Google at one time used a number of numerical recipes to figure out “quality.” Did that work? Well, it was good enough to help get the Yahoo-inspired Google advertising program off the ground. Then quality became like those good brownies from 1953: stuffed with ingredients no self-respecting Stanford computer science graduate would eat for lunch.
I believe some caution is required when trying to understand a very large and profitable company from someone who is no longer working at the company. Nevertheless, the article presents a couple of interesting assertions and dodges what I consider the big issue.
Consider this statement in the article:
In a statement to Inc., Scale AI spokesperson Joe Osborne said: “This story is filled with so many inaccuracies, it’s hard to keep track. What these documents show, and what we explained to Inc ahead of publishing, is that we had clear safeguards in place to detect and remove spam before anything goes to customers.” [Editor’s Note: “this” means the rumor that Scale cut corners.]
The story is that a process included data that would screw up the neural network.
And the security issue? I noted this passage:
The [spam] episode raises the question of whether or not Google at one point had vital data muddied by workers who lacked the credentials required by the Bulba program. It also calls into question Scale AI’s security and vetting protocols. “It was a mess. They had no authentication at the beginning,” says the former contributor. [Editor’s Note: Bulba means “Bard.”]
A person reading the article might conclude that Scale AI was a corner-cutting outfit. I don’t know. But when big money starts to flow and more can be turned on, some companies just do what’s expedient. The signals in this Scale example are the put-the-pedal-to-the-metal approach to process and the information that people knew bad data was getting pumped into Googzilla.
But what’s the big point that’s missing from the write up? In my opinion, Google management made a decision to rely on Scale. Then Google management distanced itself from the operation. In the good old days of US business, when blue-suited, informed middle managers pursued quality, some companies would have spotted the problems and ridden herd on the subcontractor.
Google did not do this in an effective manner.
Now Scale AI is beavering away for Meta which may be an unexpected win for the Google. Will Meta’s smart software begin to make recommendations like “glue your cheese on the pizza”? My personal view is that I now know why Google’s smart software has been more about public relations and marketing, not about delivering something that is crystal clear about its product line up, output reliability, and hallucinatory behaviors.
At least Google management can rely on Deepseek to revolutionize understanding the human genome. Will the company manage in as effective a manner as its marketing department touts its achievements?
Stephen E Arnold, July 2, 2025
Paper Tiger Management
June 24, 2025
An opinion essay written by a dinobaby who did not rely on smart software.
I learned that Apple and Meta (formerly Facebook) found themselves on the wrong side of the law in the EU. On June 19, 2025, I learned that “the European Commission will opt not to impose immediate financial penalties” on the firms. In April 2025, the EU hit Apple with a 500 million euro fine and Meta a 200 million euro fine for noncompliance with the EU’s Digital Markets Act. Here’s an interesting statement in the cited EuroNews report: the “grace period ends on June 26, 2025.” Well, not any longer.
What’s the rationale?
- Time for more negotiations
- A desire to appear fair
- Paper tiger enforcement.
I am not interested in items one and two. The winner is “paper tiger enforcement.” In my opinion, we have entered an era in management, regulation, and governmental resolve defined by the GenX approach to lunch. “Hey, let’s have lunch.” The lunch never happens. But the mental process follows these lanes in the bowling alley of life: [a] Be positive, [b] Say something that sounds good, [c] Check the box that says, “Okay, mission accomplished. Move on,” [d] Forget about the lunch thing.
When this approach is applied to large-scale, high-visibility issues, what happens? In my opinion, the credibility of the legal decision and the penalty is diminished. Instead of inhibiting improper actions, those who are on the receiving end of the punishment learn one thing: It doesn’t matter what we do. The regulators don’t follow through. Therefore, let’s just keep on moving down the road.
Another example of this type of management can be found in the return to the office battles. A certain percentage of employees are just going to work from home. The management of the company doesn’t do “anything”. Therefore, management is feckless.
I think we have entered the era of paper tiger enforcement. Make noise, show teeth, growl, and then go back into the den and catch some ZZZZs.
Stephen E Arnold, June 24, 2025
Move Fast, Break Your Expensive Toy
June 19, 2025
An opinion essay written by a dinobaby who did not rely on smart software.
The weird orange newspaper online service published “Microsoft Prepared to Walk Away from High-Stakes OpenAI Talks.” (I quite like the Financial Times, but orange?) The big news is that a copilot may be creating tension in the cabin of the high-flying software company. The squabble has to do with? Give up? Money and power. Shocked? It is Sillycon Valley type stuff, and I think the squabble is becoming more visible. What’s next? Live streaming the face-to-face meetings?
A pilot and copilot engage in a friendly discussion about paying for lunch. The art was created by that outstanding organization OpenAI. Yes, good enough.
The orange service reports:
Microsoft is prepared to walk away from high-stakes negotiations with OpenAI over the future of its multibillion-dollar alliance, as the ChatGPT maker seeks to convert into a for-profit company.
Does this sound like a threat?
The squabbling pilot and copilot radioed into the control tower this burst of static filled information:
“We have a long-term, productive partnership that has delivered amazing AI tools for everyone,” Microsoft and OpenAI said in a joint statement. “Talks are ongoing and we are optimistic we will continue to build together for years to come.”
The newspaper online service added:
In discussions over the past year, the two sides have battled over how much equity in the restructured group Microsoft should receive in exchange for the more than $13bn it has invested in OpenAI to date. Discussions over the stake have ranged from 20 per cent to 49 per cent.
As a dinobaby observing the pilot and copilot navigate through the cloudy skies of smart software, it certainly looks as if the duo are arguing about who pays what for lunch when the big AI tie up glides to a safe landing. However, the introduction of a “nuclear option” seems dramatic. Will this option be a modest low yield neutron gizmo or a variant of the 1961 Tsar Bomba, which fried animals and lichen within a 35-kilometer radius and converted an island in the Arctic to a parking lot?
How important is Sam AI-Man’s OpenAI? The cited article reports this from an anonymous source (the best kind in my opinion):
“OpenAI is not necessarily the frontrunner anymore,” said one person close to Microsoft, remarking on the competition between rival AI model makers.
Which company kicked off what seems to be a rather snappy set of negotiations between the pilot and the copilot? The cited orange newspaper adds:
A Silicon Valley veteran close to Microsoft said the software giant “knows that this is not their problem to figure this out, technically, it’s OpenAI’s problem to have the negotiation at all”.
What could the squabbling duo do do do (a reference to Bing Crosby’s version of “I Love You” for those too young to remember the song’s hook or the Bingster for that matter):
- Microsoft could reach a deal, make some money, and grab the controls of the AI powered P-39 Airacobra training aircraft, and land without crashing at the Renton Municipal Airport
- Microsoft and OpenAI could fumble the landing and end up in Lake Washington
- OpenAI could bail out and hitchhike to the nearest venture capital firm for some assistance
- The pilot and copilot could just agree to disagree and sit at separate tables at the IHOP in Renton, Washington
One can imagine other scenarios, but the FT’s news story makes it clear that anonymous sources, threats, and a bit of desperation are now part of the Microsoft and OpenAI relationship.
Yep, money and control — business essentials in the world of smart software, which seems to be losing its claim as the “next big thing.” Are those stupid red and yellow lights flashing at Microsoft and OpenAI as they are at Google?
Stephen E Arnold, June 19, 2025
Who Knew? Remote Workers Are Happier Than Cube Laborers
June 6, 2025
To some of us, these findings come as no surprise. The Farmingdale Observer reports, “Scientists Have Been Studying Remote Work for Four Years and Have Reached a Very Clear Conclusion: ‘Working from Home Makes Us Happier’.” Nestled in our own environment, no commuting, comfy clothes—what’s not to like? In case anyone remains unconvinced, researchers at the University of South Australia spent four years studying the effects of working from home. Writer Bob Rubila tells us:
“An Australian study, conducted over four years and starting before the pandemic, has come up with some enlightening conclusions about the impact of working from home. The researchers are unequivocal: this flexibility significantly improves the well-being and happiness of employees, transforming our relationship with work. … Their study, which was unique in that it began before the health crisis, tracked changes in the well-being of Australian workers over a four-year period, offering a unique perspective on the long-term effects of teleworking. The conclusions of this large-scale research highlight that, despite the sometimes contradictory data inherent in the complexity of the subject, offering employees the flexibility to choose to work from home has significant benefits for their physical and mental health.”
Specifically, researchers note remote workers get more sleep, eat better, and have more time for leisure and family activities. The study also contradicts the common fear that working from home means lower productivity. Quite the opposite, it found. As for concerns over losing in-person contact with colleagues, we learn:
“Concerns remain about the impact on team cohesion, social ties at work, and promotion opportunities. Although the connection between colleagues is more difficult to reproduce at a distance, the study tempers these fears by emphasizing the stability, and even improvement, in performance.”
That is a bit of a hedge. On balance, though, remote work seems to be a net positive. An important caveat: The findings are considerably less rosy if working from home was imposed by, say, a pandemic lock-down. Though not all jobs lend themselves to remote work, the researchers assert flexibility is key. The more one’s work situation is tailored to one’s needs and lifestyle, the happier and more productive one will be.
Cynthia Murrell, June 6, 2025