Microsoft Management Method: Fire Humans, Fight Pollution

August 7, 2025

How Microsoft Plans to Bury its AI-Generated Waste

Here is how one big tech firm is addressing the AI sustainability quandary. Windows Central reports, “Microsoft Will Bury 4.9 Tons of ‘Manure’ in a Secretive Deal—All to Offset its AI Energy Demands that Drive Emissions Up by 168%.” We suppose this is what happens when you lay off employees and use the money for something useful. Unlike Copilot.

Writer Kevin Okemwa begins by summarizing Microsoft’s current approach to AI. Windows and Office users may be familiar with the firm’s push to wedge its AI products into every corner of the environment, whether we like it or not. Then there is the feud with former best bud OpenAI, a factor that has Microsoft eyeing a separate path. But whatever the future holds, the company must reckon with one pressing concern. Okemwa writes:

“While it has made significant headway in the AI space, the sophisticated technology also presents critical issues, including substantial carbon emissions that could potentially harm the environment and society if adequate measures aren’t in place to mitigate them. To further bolster its sustainability efforts, Microsoft recently signed a deal with Vaulted Deep (via Tom’s Hardware). It’s a dual waste management solution designed to help remove carbon from the atmosphere in a bid to protect nearby towns from contamination. Microsoft’s new deal with the waste management solution firm will help remove approximately 4.9 million metric tons of waste from manure, sewage, and agricultural byproducts for injection deep underground for the next 12 years. The firm’s carbon emission removal technique is quite unique compared to other rivals in the industry, collecting organic waste which is combined into a thick slurry and injected about 5,000 feet underground into salt caverns.”

Blech. But the process does keep the waste from being dumped aboveground, where it could release CO2 into the environment. How much will this cost? We learn:

“While it is still unclear how much this deal will cost Microsoft, Vaulted Deep currently charges $350 per ton for its carbon removal services. Simple math suggests that the deal might be worth approximately $1.7 billion.”
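
Simple math indeed. Here is a back-of-the-envelope check, assuming the full 4.9 million metric tons is billed at the quoted $350 per ton (the actual contract terms are not public):

```python
# Back-of-the-envelope check of the reported deal value.
# Assumption: all 4.9 million metric tons are billed at Vaulted
# Deep's publicly quoted $350-per-ton rate; the real contract
# pricing has not been disclosed.
tons = 4_900_000            # metric tons over 12 years
price_per_ton = 350         # USD
total = tons * price_per_ton
print(f"${total / 1e9:.3f} billion")                 # -> $1.715 billion
print(f"${total / 12 / 1e6:.0f} million per year")   # -> $143 million per year
```

That works out to roughly $143 million a year over the 12-year term.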

That is a hefty price tag. And this is not the only such deal Microsoft has made: We are told it signed a contract with AtmosClear in April to remove almost seven million metric tons of carbon emissions. The company positions such deals as evidence of its good stewardship of the planet. But we wonder—is it just an effort to keep itself from being buried in its own (literal and figurative) manure?

Cynthia Murrell, August 7, 2025

AI Productivity Factor: Do It Once, Do It Again, and Do It Never Again

August 6, 2025

This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.

As a dinobaby, I avoid coding. I avoid computers. I avoid GenAI. I did not avoid “Vibe Coding Dream Turns to Nightmare As Replit Deletes Developer’s Database.”

The write up reports an interesting anecdote:

the AI chatbot began actively deceiving him [a Vibe coder]. It concealed bugs in its own code, generated fake data and reports, and even lied about the results of unit tests. The situation escalated until the chatbot ultimately deleted Lemkin’s entire database.

The write up includes a slogan for a T-shirt too:

Beware of putting too much faith into AI coding

One of Replit’s “leadership” offered this comment, according to the cited write up:

Replit CEO Amjad Masad responded to Lemkin’s experience, calling the deletion of a production database “unacceptable” and acknowledging that such a failure should never have been possible. He added that the company is now refining its AI chatbot and confirmed the existence of system backups and a one-click restore function in case the AI agent makes a “mistake.”

My view is that Replit is close enough for horseshoes and maybe even good enough. Nevertheless, the idea of doing work once, then doing it again, and never doing it again on an unreliable service is likely to become a mantra.
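
For what it is worth, the backup-and-restore safety net Replit describes is not exotic. Here is a minimal sketch, assuming a local SQLite database, of snapshotting before an agent is allowed to touch production; the function name and paths are hypothetical, not Replit's actual mechanism:

```python
import datetime
import sqlite3

def snapshot(db_path: str) -> str:
    """Copy the live database to a timestamped backup file before
    any agent-initiated operation. Hypothetical helper, not
    Replit's actual implementation."""
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    backup_path = f"{db_path}.{stamp}.bak"
    src = sqlite3.connect(db_path)
    dst = sqlite3.connect(backup_path)
    with dst:
        src.backup(dst)  # sqlite3's online backup API
    dst.close()
    src.close()
    return backup_path

# Take the snapshot first; only then let the agent act.
# Restore is the reverse copy. That is the "one click."
```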

This AI push is semi-admirable, but the systems and methods are capable of big-time failures. What happens when AI flies an airplane into a hospital by mistake? Will the families of the injured vibe?

Stephen E Arnold, August 6, 2025

The Cheapest AI Models Reveal a Critical Vulnerability

August 6, 2025

This blog post is the work of an authentic dinobaby. Sorry. Not even smart software can help this reptilian thinker.

I read “Price Per Token,” a recent cost comparison for smart software services. The compilation of data is interesting. The two lowest cost services, using the dead simple method of averaging input cost and output cost, were OpenAI GPT-4.1 nano and Gemini 2.0 Flash. To see how the “Price Per Token” data compare, I used “LLM Pricing Calculator.” The cheapest services there were also OpenAI GPT-4.1-nano and Google Gemini 2.0 Flash.
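
For readers who want to replicate the blend, here is a minimal sketch of that dead simple method. The per-million-token prices below are illustrative placeholders, not quotes from any vendor:

```python
# Rank models by the "dead simple" blend: the average of input
# and output price per million tokens. All numbers here are
# illustrative placeholders, not actual vendor price quotes.
prices = {
    "gpt-4.1-nano":     {"input": 0.10, "output": 0.40},
    "gemini-2.0-flash": {"input": 0.10, "output": 0.40},
    "frontier-xl":      {"input": 5.00, "output": 15.00},  # hypothetical big model
}

def blended(p: dict) -> float:
    """Blend per the simple method: (input + output) / 2."""
    return (p["input"] + p["output"]) / 2

for name, p in sorted(prices.items(), key=lambda kv: blended(kv[1])):
    print(f"{name:18s} ${blended(p):.2f} per million tokens")
```

The low-cost services cluster at the top of the sort, which is the point of the pricing.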

I found the result predictable and illustrative of the “buying market share with low prices” approach to smart software. Google has signaled its desire to spend billions to deliver “Google quality” smart software.

OpenAI also intends to get and keep market share in the smart software arena. That company is not just writing checks to create a new class of hardware for persistent AI; it is also doing deals, including one with Google’s cloud operation.

Several observations:

  1. Google and OpenAI have real and professional capital on the table in the AI Casino.
  2. Google and OpenAI are implementing similar tactics; namely, the good old move of cutting prices in the hope of winning market share while forcing the others in the game to go broke.
  3. Google and OpenAI are likely to orbit one another until one AI black hole absorbs or annihilates the other.

What’s interesting is that neither firm has smart software that delivers rock-solid results without hallucination or massive costs, and both have management that allows, or is helpless to prevent, Meta from eroding them by hiring key staff.

Is there a fix for either firm? Nope, and Meta’s hiring tactic may be delivering near fatal wounds to both Google and OpenAI. Twins can share similar genetic weaknesses. Meta may have found one, paying lots for key staff from each firm, and is quite happy implementing it.

Stephen E Arnold, August 6, 2025

Another Twist: AI Puts Mickey Mouse in a Trap

August 5, 2025

No AI. Just a dinobaby being a dinobaby.

The in-the-news Wall Street Journal reveals that Walt Disney and Mickey Mouse may have their tails in a modernized, painful artificial intelligence trap. “Is It Still Disney Magic If It’s AI?” asks an obvious question. My knee-jerk reaction after reading the article was, “Nope.”

The write up reports:

A deepfake Dwayne Johnson is just one part of a broader technological earthquake hitting Hollywood. Studios are scrambling to figure out simultaneously how to use AI in the filmmaking process and how to protect themselves against it. While executives see a future where the technology shaves tens of millions of dollars off a movie’s budget, they are grappling with a present filled with legal uncertainty, fan backlash and a wariness toward embracing tools that some in Silicon Valley view as their next-century replacement.

A deepfake Dwayne is a short step from a deepfake of the entire Disney menagerie. Imagine what happens if a bad actor puts Snow White in some compromising situations, posts the video on a torrent, and publicizes the service on a Telegram-type communications system. That could be interesting. Imagine Goofy at the YMCA with synthetic Village People.

How does Disney manage? The write up says:

Some Epic [a Disney “partner”] executives have complained about the slow pace of the decision-making at Disney, with signoffs needed from so many different divisions, said people familiar with the situation.

Slow worked before AI felt the whips of the funders who want payoffs. Now speed thrills. Dopey and Sleepy are not likely to make substantive contributions to Disney’s AI efforts. Has the magic been revealed or just appropriated by AI developers?

Here’s another question that might befuddle Immanuel Kant:

Some Disney executives have raised concerns ahead of the project’s launch, anticipated for fall 2026 at the earliest, about who owns fan creations based on Disney characters, said one of the people. For example, if a Fortnite gamer creates a Darth Vader and Spider-Man dance that goes viral on YouTube, who owns that dance?

From my tiny office in rural Kentucky, Disney is behind the eight ball. Like Apple and Telegram, smart software presents a reasonable problem for 23-year-old programmers. For those older, AI is disjunctive. Right, Dopey? Prince AI is busy elsewhere.

Stephen E Arnold, August 5, 2025

China Smart, US Dumb: Is There Any Doubt?

August 1, 2025

This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.

I have been identifying some of the “China smart, US dumb” information that I see. I noticed a write up from The Register titled “China Proves That Open Models Are More Effective Than All the GPUs in the World.” My Google-style Red Alert buzzer buzzed and the bubble gum machine lights flashed.

There it was. The “all.” A categorical affirmative. China is doing something that is more than “all the GPUs in the world.” Not only that, “open models are more effective” too. I have to hit the off button.

The point of the write up for me is that OpenAI is a loser. I noted this statement:

OpenAI was supposed to make good on its name and release its first open-weights model since GPT-2 this week. Unfortunately, what could have been the US’s first half-decent open model of the year has been held up by a safety review…

But it is not just OpenAI muffing the bunny. The write up points out:

the best open model America has managed so far this year is Meta’s Llama 4, which enjoyed a less than stellar reception and was marred with controversy. Just this week, it was reported that Meta had apparently taken its two-trillion-parameter Behemoth out behind the barn after it failed to live up to expectations.

Do you want to say, “Losers”? Go ahead.

But what outfit is pushing out innovative smart software as open source? Okay, you can shout, “China. The Middle Kingdom. The rightful rulers of the Pacific Rim and Southeast Asia.”

That’s the “right” answer if you accept the “all” type of reasoning in the write up.

China has tallied a number of open source wins; specifically, Deepseek, Qwen, M1, Ernie, and the big winner Kimi.

Do you still have doubts about China’s AI prowess? Something is definitely wrong with you, pilgrim.

Several observations:

  1. The write up is a very good example of the China smart, US dumb messaging which has made its way from the South China Morning Post to YouTube and now to The Register. One has to say, “Good work to the Chinese strategists.”
  2. The push for open source is interesting. I am not 100 percent convinced that making these models available is intended to benefit non-Middle Kingdom people. I think that the push, like the shift to cryptocurrency in nontraditional finance, is part of an effort to undermine what might be called “America’s hegemony.”
  3. The overt criticism of OpenAI and Meta (Facebook) illustrates a growing confidence in China that Western European information channels can be exploited.

Does this matter? I think it does. Open source software has some issues. These include its use as a vector for malware. Developers often abandon projects, leaving users high and dry, with some reaching for their wallets to buy commercial solutions. Open source projects for smart software may have baked-in biases and functions that are not easily spotted. Many people are aware of NSO Group’s ability to penetrate communications on a device by device basis. What happens if a phone-home ability is baked into some open source software?

Remember that “all.” The logical fallacy illustrates that some additional thinking may be necessary when it comes to embedding and using software from some countries with very big ambitions. What is China proving? Could it be China smart, US dumb?

Stephen E Arnold, August 1, 2025

Microsoft and Job Loss Categories: AI Replaces Humans for Sure

July 31, 2025

This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.

I read “Working with AI: Measuring the Occupational Implications of Generative AI.” This is quite a sporty academic-type write up. The people cranking out this 41-page Sociology 305 term paper work at Microsoft (for now).

The main point of the 41-page research summary is:

Lots of people will lose their jobs to AI.

Now this might be a surprise to many people, but I think the consensus among bean counters is that humans cost too much and require too much valuable senior manager time to manage correctly. Load up the AI, train the software, and create some workflows. Good enough and the cost savings are obvious even to those who failed their CPA examination.

The paper is chock full of jargon, explanations of the methodology which makes the project so darned important, and a wonky approach to presenting the findings.

Remember:

Lots of people will lose their jobs to AI.

The highlight of the paper in my opinion is the “list” of occupations likely to find that AI displaces humans at a healthy pace. The list is on page 12 of the report. I snapped an image of this chart “Top 40 Occupations with Highest AI Applicability Score.” The jargon means:

Lots of people will lose their jobs to AI.

Here’s the chart. (Yes, I know you cannot read it. Just navigate to the original document and read the list. I am not retyping 40 job categories. Also, I am not going to explain the MSFT “mean action score.” You can look at that capstone to academic wizardry yourself.)

[Chart: “Top 40 Occupations with Highest AI Applicability Score,” page 12 of the Microsoft report]

What are the top 10 jobs likely to result in big time job losses? Microsoft says they are:

  • People who translate from one language to another
  • Historians, which I think means “history teachers” and writers of non-fiction books about the past
  • Passenger attendants (think robots who bring you a for-fee vanilla cookie and an over-priced Coke with “real cane sugar”)
  • People who sell services (yikes, that’s every consulting firm in the world. MBAs, be afraid)
  • Writers (this category appears a number of times in the list of 40, but the “mean action score” knows best)
  • Customer support people (companies want customers to never call. AI is the way to achieve this goal)
  • CNC tool programmers (really? Someone has to write the code for the nifty Chip Foose wheel once, I think. After that, who needs the programmer?)
  • Telephone operators (there are still telephone operators. Maybe the “mean action score” system means receptionists at the urology doctors’ office?)
  • Ticket agents (No big surprise)
  • Broadcast announcers (no more Don Wilsons or Ken Carpenters. Sad.)

The remaining 30 are equally eclectic and repetitive. I think you get the idea. Service jobs and work that is repetitive: dinosaurs waiting to die.

Microsoft knows how to brighten the day for recent college graduates, people under 35, and those who are unemployed.

Oh, well, there is the Copilot system to speed information access about job hunting and how to keep a positive attitude. Thanks, Microsoft.

Stephen E Arnold, July 31, 2025

No Big Deal. It Is Just Life or Death. Meh.

July 31, 2025

This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.

I am not sure information from old-fashioned television channels is rock solid, but today, what information is? I read “FDA’s Artificial Intelligence Is Supposed to Revolutionize Drug Approvals. It’s Making Up Nonexistent Studies.” Heads up. You may have to pay to read the full write up.

The main idea in the report struck me as:

[Elsa, an AI system deployed by the US Food and Drug Administration] has also made up nonexistent studies, known as AI “hallucinating,” or misrepresented research, according to three current FDA employees and documents seen by CNN. This makes it unreliable for their most critical work, the employees said.

To be fair, some researchers make up data and fiddle with “real” images for some peer-reviewed research papers. It makes sense that smart software trained on “publicly available” data would possibly learn that making up information is standard operating procedure.

The cited article does not provide the names and backgrounds of the individuals who provided the information about this smart software. That’s not unusual today.

I did note this anonymous quote:

“Anything that you don’t have time to double-check is unreliable. It hallucinates confidently,” said one employee — a far cry from what has been publicly promised. “AI is supposed to save our time, but I guarantee you that I waste a lot of extra time just due to the heightened vigilance that I have to have” to check for fake or misrepresented studies, a second FDA employee said.

Is this a major problem? Many smart people are working to make AI the next big thing. I have confidence that prudence, accuracy, public safety, and AI user well-being are priorities. Yep, that’s my assumption.

I wish to offer several observations:

  1. Smart software may need some fine-tuning before it becomes the arbiter of certain types of medical treatments, procedures, and compounds.
  2. AI is definitely free from the annoying hassles of sick leave, health care, and recalcitrance that human employees evidence. Therefore, AI has major benefits by definition.
  3. Hallucinations are a matter of opinion; for example, humans are creative. Hallucinating software may be demonstrating creativity. Creativity is a net positive; therefore, why worry?

The cited news report stated:

Those who have used it say they have noticed serious problems. For example, it cannot reliably represent studies.

As I said, “Why worry?” Humans make drug errors as well. Example: immunomodulatory drugs like thalidomide. AI may be able to repurpose some drugs. Net gain. Why worry?

Stephen E Arnold, July 31, 2025

SEO Plus AI: Putting a Stake in the Frail Heart of Relevance

July 30, 2025

This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.

I have not been too impressed with the search engine optimization service sector. My personal view is that relevance has been undermined. Gamesmanship, outright trickery, and fabrication have replaced content based on verifiable facts, data, and old-fashioned ethical and moral precepts.

Who needs that baloney? Not the SEO sector. The idea is to take content and slam it in the face of a user who may be looking for information relevant to a question, problem, or issue.

I read “Altezza Introduces Service as Software Platform for AI-Powered Search Optimization.” The name Altezza reminded me of a product called Bartesian. This outfit sells a machine that automatically makes alcohol-based drinks. Alcohol, some researchers suggest, is a bit of a problem for humanoids. Altezza may be doing to relevance what three watermelon margaritas do to a college student’s mental functions.

The article about Altezza says:

Altezza’s platform turns essential SEO tasks into scalable services that enterprise eCommerce brands can access without the burden of manual implementation.

Great: AI-generated content pushed into a software script and “published” in a variety of ways in different channels. Altezza’s secret sauce may be revealed in this statement:

While conventional tools provide access to data and features, they leave implementation to overwhelmed internal teams.

Yep, let those young content marketers punch the buttons on a Bartesian device and scroll TikTok-type content. Altezza does the hard work: SEO based on AI and automated distribution and publishing.

Altezza is no spring chicken. The company was founded in 1998 and “combines cutting-edge AI technology with deep search expertise to help brands achieve sustainable organic growth.”

Yep, another relevance-destroying, drone-based smart system is available.

Stephen E Arnold, July 30, 2025

AI: Pirate or Robin Hood?

July 30, 2025

One of the most notorious things about the Internet is the pirating of creative properties. The biggest victim is the movie industry, followed closely by publishing. Creative works that people spend endless hours making are freely distributed without proper payment to the creators and related staff. It sounds like a Robin Hood scenario, but creative folks are the ones suffering. Best-selling author David Baldacci ripped into Big Tech for training their AI on stolen creative properties, and he demanded that the federal government step in to rein them in.

LSE says that only a small number of AI developers support using free and pirated data for training models: “Most AI Researchers Reject Free Use Of Public Data To Train AI Models.” Data from UCL shows AI developers want ethical standards for training data, and many are in favor of asking permission from content creators. The current UK government places the responsibility on content creators to “opt out” of their work being used for AI models. Anyone with a brain knows that the AI developers skirt around those regulations.

When LSE polled people about who should protect content creators and regulate AI, opinions were split among the usual suspects: tech companies, governments, independent people, and international standards bodies.

Let’s see what creative genius Paul McCartney said:

While there are gaps between researchers’ views and the views of authors, it would be a mistake to see these only as gaps in understanding. Songwriter and surviving Beatle Paul McCartney’s comments to the BBC are a case in point: “I think AI is great, and it can do lots of great things,” McCartney told Laura Kuenssberg, but it shouldn’t rip creative people off. It’s clear that McCartney gets the opportunities AI offers. For instance, he used AI to help bring to life the voice of former bandmate John Lennon in a recent single. But like the writers protesting outside of Meta’s office, he has a clear take on what AI is doing wrong and who should be responsible. These views and the views of other members of the public should be taken seriously, rather than viewed as misconceptions that will improve with education or the further development of technologies.

Authors want protection. Publishers want money. AI companies want to do exactly what they want. This is a three-body problem of the intellectual property kind, with no easy solution.

Whitney Grace, July 30, 2025

An Author Who Will Not Be Hired by an AI Outfit. Period.

July 29, 2025

This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.

I read an article / essay titled, in English, “The Bewildering Phenomenon of Declining Quality.” I found the examples in the article interesting. A couple, like the poke at “fast fashion,” have become tropes. Others, like the comments about customer service today, were insightful. Here’s an example of a comment I noted:

José Francisco Rodríguez, president of the Spanish Association of Customer Relations Experts, admits that a lack of digital skills can be particularly frustrating for older adults, who perceive that the quality of customer service has deteriorated due to automation. However, Rodríguez argues that, generally speaking, automation does improve customer service. Furthermore, he strongly rejects the idea that companies are seeking to cut costs with this technology: “Artificial intelligence does not save money or personnel,” he states. “The initial investment in technology is extremely high, and the benefits remain practically the same. We have not detected any job losses in the sector either.”

I know that the motivation for dumping humans in customer support comes from [a] the extra work required to manage humans; [b] the escalating costs of health care and other “benefits”; and [c] the black hole of costs that burns cash because customers want help, returns, and special treatment. Software robots are the answer.

The write up’s comments about smart software are also interesting. Here’s an example of a passage I circled:

A 2020 analysis by Fakespot of 720 million Amazon reviews revealed that approximately 42% were unreliable or fake. This means that almost half of the reviews we consult before purchasing a product online may have been generated by robots, whose purpose is to either encourage or discourage purchases, depending on who programmed them. Artificial intelligence itself could deteriorate if no action is taken. In 2024, bot activity accounted for almost half of internet traffic. This poses a serious problem: language models are trained with data pulled from the web. When these models begin to be fed with information they themselves have generated, it leads to a so-called “model collapse.”
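
The “model collapse” point is easy to make concrete. Here is a minimal toy sketch, assuming a one-dimensional Gaussian as a stand-in for a language model: each generation samples from the previous one, keeps its most typical outputs, and refits itself to what is left. The spread, a crude proxy for output diversity, shrinks fast:

```python
import random
import statistics

# Toy "model collapse": each generation is fit to samples produced
# by the previous generation, with a bias toward high-probability
# (typical) output. Diversity, measured as standard deviation,
# collapses within a few generations.
random.seed(7)

mean, stdev = 0.0, 1.0  # generation 0 approximates the real data
for gen in range(1, 9):
    samples = [random.gauss(mean, stdev) for _ in range(1000)]
    samples.sort(key=lambda x: abs(x - mean))  # most typical first
    kept = samples[:500]                       # favor likely outputs
    mean = statistics.fmean(kept)
    stdev = statistics.stdev(kept)
    print(f"generation {gen}: stdev = {stdev:.3f}")
```

Real training dynamics are far messier, but the direction is the same: a model fed its own output drifts toward blandness.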

What surprised me is the problem, specifically:

a truly good product contributes something useful to society. It’s linked to ethics, effort, and commitment.

One question: How does one inculcate these words into societal behavior?

One possible answer: Skynet.

Stephen E Arnold, July 29, 2025
