Microsoft and Its Modern Management Method: Waffling

April 23, 2025

No AI, just the dinobaby himself.

The Harvard Business School (which I assume will remain open for “business”) has not directed its case writers to focus on Microsoft’s modern management method. To me, changing direction is not a pivot; it is a variant of waffling. “Waffling” means saying one thing, like “We love OpenAI,” and then hiring people who don’t love OpenAI and cutting deals with other AI outfits. The whipped cream on the waffle is killing off investments in data centers.

If you are not following this, think of the old song “The first time is the last time,” and you might get a sense of the confusion that results from changes in strategic and tactical direction. You may find this Gen X, Y, and Z approach just fine. I think it is a hoot.

PC Gamer, definitely not the Harvard Business Review, tackles one example of Microsoft’s waffling in “Microsoft Pulls Out of Two Big Data Centre Deals Because It Reportedly Doesn’t Want to Support More OpenAI Training Workloads.”

The write up says:

Microsoft has pulled out of deals to lease its data centres for additional training of OpenAI’s language model ChatGPT. This news seems surprising given the perceived popularity of the model, but the field of AI technology is a contentious one, for a lot of good reasons. The combination of high running cost, relatively low returns, and increasing competition—plus working on it’s own sickening AI-made Quake 2 demo—have proven enough reason for Microsoft to bow out of two gigawatt worth of projects across the US and Europe.

I love the scholarly “sickening.” Listen up, HBR editors. That’s a management term for 2025.

The article adds:

Microsoft, as well as its investors, have witnessed this relatively slow payoff alongside the rise of competitor models such as China’s Deepseek.

Yep, “payoff.” The Harvard Business School’s professors are probably not familiar with the concept of a payoff.

The news report points out that Microsoft is definitely, 100 percent going to spend $80 billion on infrastructure in 2025. With eight months left in the year, the Softies have to get in gear. The Google is spending as well. The other big time high tech AI juggernauts are also spending.

Will these investments pay off? Sure. Accountants and chief financial officers learn how to perform number magic. Guess where? Schools like the HBS. Don’t waffle. Go to class. Learn and then implement big-time waffling.

Stephen E Arnold, April 23, 2025

AI and Movies: Better and Cheaper!

April 21, 2025

Believe it or not, no smart software. Just a dumb and skeptical dinobaby.

I am not a movie oriented dinobaby. I do see occasional stories about the motion picture industry. My knowledge is shallow, but several things seem to be stuck in my mind:

  1. Today’s movies are not too good
  2. Today’s big budget films are recycles of sequels, prequels, and less-than-equals
  3. Today’s blockbusters are expensive.

I did a project for a small-time B movie fellow. I have even been to an LA party held in a mansion in La Jolla. I sat in the corner in my brown suit and waited until I could make my escape.

End of Hollywood knowledge.

I read “Ted Sarandos Responds To James Cameron’s Vision Of AI Making Movies Cheaper: ‘There’s An Even Bigger Opportunity To Make Movies 10% Better.’” No, I really did read the article. I came away confused. Most of my pre-retirement work involved projects whose goal was to make a lot of money. The idea was to be clever, do a minimum of “real” work, and then fix up the problems when people complained. That is the magic formula for some Silicon Valley and high-technology outfits located outside of the Plastic Fantastic World.

This article pits better versus cheaper. I learned:

Citing recent comments by James Cameron, Netflix Co-CEO Ted Sarandos said he hopes AI can make films “10% better,” not just “50% cheaper.”

Well, there you go. Better and cheaper. Is that the winning formula for creative work? The write up quotes Ted Sarandos (a movie expert, I assume) as saying:

Today, you can use these AI-powered tools to enable smaller-budget projects to have access to big VFX on screen.

From my point of view, “better” means more VFX, which is, I assume, movie talk for visual effects. These are the everyday things I see at my local grocery store. There are superheroes stopping crimes in progress. There are giant alien creatures shooting energy beams at military personnel. There are machines that have a great voice that some AI experts found particularly enchanting.

The cheaper means that the individuals who sit in front of computer screens fooling around with Blackmagic’s Fusion and the super-wonderful Adobe software will be able to let smart software do some of the work. If 100 people work on a big budget film’s VFX and smart software can do the work cheaper, the question arises, “Do we need these 100 people?” Based on my training, the answer is, “Nope. Let them find their future elsewhere.”

The article sidesteps two important questions: Question 1. What does better mean? Question 2. What does cheaper mean?

Better is subjective. Cheaper is a victim of scope creep. Big jobs don’t get cheaper. Big jobs get more expensive.

What smart software will do to the motion picture industry is hasten its “re-invention.”

The new video stars are those who attract eyeballs on TikTok- and YouTube-type platforms. The traditional motion picture industry, which created yesterday’s stars or “influencers,” is long gone. AI is going to do three things:

  1. Replace skilled technicians with software
  2. Allow today’s “influencers” to become the next Clark Gable and Marilyn Monroe (How did she die?)
  3. Reduce the barrier for innovations that do not come from recycling Superman-type prequels, sequels, and less-than-equals.

To sum up, both of these movie experts are right and wrong. I suppose both can be reskilled. Does Mr. Beast offer a for-fee class on video innovation which includes cheaper production and better outputs?

Stephen E Arnold, April 21, 2025

AI: Job Harvesting

April 9, 2025

It is a question that keeps many of us up at night. Commonplace ponders, "Will AI Automate Away Your Job?" The answer: Probably, sooner or later. The when depends on the job. Some workers may be lucky enough to reach retirement age before that happens. Writer Jason Hausenloy explains:

"The key idea where the American worker is concerned is that your job is as automatable as the smallest, fully self-contained task is. For example, call center jobs might be (and are!) very vulnerable to automation, as they consist of a day of 10- to 20-minute or so tasks stacked back-to-back. Ditto for many forms of many types of freelancer services, or paralegals drafting contracts, or journalists rewriting articles. Compare this to a CEO who, even in a day broken up into similar 30-minute activities—a meeting, a decision, a public appearance—each required years of experiential context that a machine can’t yet simply replicate. … This pattern repeats across industries: the shorter the time horizon of your core tasks, the greater your automation risk."

See the post for a more detailed example that compares the jobs of a technical support specialist and an IT systems architect.
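The time-horizon idea can be boiled down to a crude heuristic. Here is a minimal Python sketch of that reduction; the job names, task durations, and risk buckets are invented for illustration, not taken from the Commonplace essay.

    # Toy version of the "shorter task horizon -> higher automation risk" idea.
    # All job names and minute values below are invented for this sketch.
    TYPICAL_TASK_MINUTES = {
        "call center agent": 15,
        "freelance copywriter": 45,
        "paralegal drafting contracts": 90,
        "IT systems architect": 8 * 60,
        "CEO": 30,  # short activities, but each needs years of context this rule ignores
    }

    def automation_risk(task_minutes: float) -> str:
        """Crude buckets keyed only to the length of the smallest self-contained task."""
        if task_minutes <= 30:
            return "high"
        if task_minutes <= 120:
            return "medium"
        return "lower (for now)"

    for job, minutes in TYPICAL_TASK_MINUTES.items():
        print(f"{job}: ~{minutes}-minute tasks -> {automation_risk(minutes)} risk")

Note that a duration-only rule flags the CEO as high risk, which is exactly why the essay adds the “years of experiential context” qualifier.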

Naturally, other factors complicate the matter. For example, Hausenloy notes, blue-collar jobs may be safer longer because physical robots are more complex to program than information software. Also, the more data there is on how to do a job, the better equipped algorithms are to mimic it. That is one reason many companies implement tracking software. Yes, it allows them to micromanage workers. It also gathers the data needed to teach an LLM how to do the job. With every keystroke and mouse click, many workers are actively training their replacements.

Ironically, it seems those responsible for unleashing AI on the world may be some of the most replaceable. Schadenfreude, anyone? The article notes:

"The most vulnerable jobs, then, are not those traditionally thought of as threatened by automation—like manufacturing workers or service staff—but the ‘knowledge workers’ once thought to be automation-proof. And most vulnerable of all? The same Silicon Valley engineers and programmers who are building these AI systems. Software engineers whose jobs are based on writing code as discrete, well-documented tasks (often following standardized updates to a central directory) are essentially creating the perfect training data for AI systems to replace them."

In a section titled "Rethinking Work," Hausenloy waxes philosophical on a world in which all of humanity has been fired. Is a universal basic income a viable option? What, besides income, do humans get out of their careers? In what new ways will we address those needs? See the write-up for those thought exercises. Meanwhile, if you do want to remain employed as long as possible, try to make your job depend less on simple, repetitive tasks and more on human connection, experience, and judgment. With luck, you may just reach retirement before AI renders you obsolete.

Cynthia Murrell, April 9, 2025

Oh, Oh, a Technological Insight: Unstable, Degrading, Non-Reversible.

April 9, 2025

Dinobaby says, “No smart software involved. That’s for ‘real’ journalists and pundits.”

“Building a House of Cards” has a subtitle which echoes other statements of “Oh, oh, this is not good”:

Beneath the glossy promises of artificial intelligence lies a ticking time bomb — and it’s not the one you’re expecting

Yep, another person who seems younger than I am has realized that flows of digital information erode not just social structures but other functions as well.

The author, who publishes in Mr. Plan B, states:

The real crisis isn’t Skynet-style robot overlords. It’s the quiet, systematic automation of human bias at scale.

The observation is excellent. The bias is that of the engineers and coders who set thresholds, orchestrate algorithmic behaviors, and use available data. The human bias is woven into the systems people use, believe, and depend upon.

The essay asserts:

We’re not coding intelligence — we’re fossilizing prejudice.

That, in my opinion, is a good line.

The author, however, runs into a bit of a problem. The idea of a developers’ manifesto is interesting but flawed. Most devs, as some term this group, like creating stuff and solving problems. That’s the kick. Most of the devs with whom I have worked laugh when I tell them I majored in medieval religious poetry. One, a friend of mine, said, “I paid someone to write my freshman essay, and I never took any classes other than math and science.”

I like that: Ignorance and a good laugh at how I spent my college years. The one saving grace is that I got paid to help a professor index Latin sermons using the university’s one computer to output the word lists and microfilm locators. Hey, in 1962, this was voodoo.

Those who craft the systems are not compensated to think about whether Latin sermons were original or just passed around when a visiting monk exchanged some fair copies for a snort of monastery wine and a bit of roast pig. Let me tell you that most of those sermons were tediously similar and raised such thorny problems as the originality of the “author.”

The essay concludes with a factoid:

25 years in tech taught me one thing: Every “revolutionary” technology eventually faces its reckoning. AI’s is coming.

I am not sure that those engaged in the noble art and craft of engineering “smart” software accept, relate to, or care about the validity of the author’s statement.

The good news is that the essay’s author now understands that flows of digital information do not construct. The bits zipping around erode just like the glass beads or corn cob abrasive in a body shop’s media blaster aimed at a rusted automobile frame.

The body shop “restores” the rusted part until it is as good as new. Even better, some mechanics say.

As long as it is “good enough,” the customer is happy. But those in the know realize that the frame will someday be unable to support the stress placed upon it.

See. Philosophy from a mechanical process. But the meaning speaks to a car nut. One may have to give up or start over.

Stephen E Arnold, April 9, 2025

Programmers? Just the Top Code Wizards Needed. Sorry.

April 8, 2025

No AI. Just a dinobaby sharing an observation about younger managers and their innocence.

Microsoft has some interesting ideas about smart software and writing “code.” To sum it up, consider another profession.

“Microsoft CTO Predicts AI Will Generate 95% of Code by 2030” reports:

Developers’ roles will shift toward orchestrating AI-driven workflows and solving complex problems.

I think this means that instead of figuring out how to make something happen, one will perform the higher level mental work. The “script” comes out of the smart software.

The write up says:

“It doesn’t mean that the AI is doing the software engineering job … authorship is still going to be human,” Scott explained. “It creates another layer of abstraction [as] we go from being an input master (programming languages) to a prompt master (AI orchestrator).” He doesn’t believe AI will replace developers, but it will fundamentally change their workflows. Instead of painstakingly writing every line of code, engineers will increasingly rely on AI tools to generate code based on prompts and instructions. In this new paradigm, developers will focus on guiding AI systems rather than programming computers manually. By articulating their needs through prompts, engineers will allow AI to handle much of the repetitive work, freeing them to concentrate on higher-level tasks like design and problem-solving.

The idea is good. Does it imply that smart software has reached the end of its current trajectory and will not be able to:

  1. Recognize a problem
  2. Formulate appropriate questions
  3. Obtain via research, experimentation, or Eureka! moments a solution?

The observation by the Microsoft CTO does not seem to consider this question about a trolley line that can follow its tracks.

The article heads off in another direction; specifically, what happens to the costs?

IBM CEO Arvind Krishna is quoted as saying:

“If you can produce 30 percent more code with the same number of people, are you going to get more code written or less?” Krishna rhetorically posed, suggesting that increased efficiency would stimulate innovation and market growth rather than job losses.

Where does this leave “coders”?

Several observations:

  • Those in the top one percent of skills are in good shape. The other 99 percent may want to consider different paths to a bright, fulfilling future
  • Money, not quality, is going to become more important
  • Inexperienced “coders” may find themselves looking for ways to get skills at the same time unneeded “coders” are trying to reskill.

It is no surprise that CNET reported, “The public is particularly concerned about job losses. AI experts are more optimistic.”

Net net: Smart software, good or bad, is going to reshape work in a big chunk of the workforce. Are schools preparing students for this shift? Are there government programs in place to assist older workers? To this dinobaby, it seems the answer is not far to seek.

Stephen E Arnold, April 8, 2025

Amazon Takes the First Step Toward Moby Dickdom

April 7, 2025

No AI. Just a dinobaby sharing an observation about younger managers and their innocence.

This Engadget article does not predict the future. “Amazon Will Use AI to Generate Recaps for Book Series on the Kindle” reports:

Amazon’s new feature could make it easier to get into the latest release in a series, especially if it’s been some time since you’ve read the previous books. The new Recaps feature is part of the latest software update for the Kindle, and the company compares it to “Previously on…” segments you can watch for TV shows. Amazon announced Recaps in a blog post, where it said that you can get access to it once you receive the software update over the air or after you download and install it from Amazon’s website. Amazon didn’t talk about the technology behind the feature in its post, but a spokesperson has confirmed to TechCrunch that the recaps will be AI generated.

You may know a person who majored in American or English literature. Here’s a question you could pose:

Do those novels by a successful author follow a pattern; that is, repeatable elements and a formula?

My hunch is that authors who have written a series of books have a recipe. The idea is, “If it makes money, do it again.” In the event that you could ask Nora Roberts or commune with Billy Shakespeare, did their publishers ask, “Could you produce another one of those for us? We have a new advance policy.” When my Internet 2000: The Path to the Total Network made money in 1994, I used the approach, tone, and research method for my subsequent monographs. Why? People paid to read or flip through the collected information presented my way. I admit that I combined luck, what I learned at a blue chip consulting firm, and inputs from people who had written successful non-fiction “reports.” My new monograph — The Telegram Labyrinth — follows this blueprint. Just ask my son, and he will say, “My dad has a template and fills in the blanks.”

If a dinobaby can do it, what about flawed smart software?

Chase down a person who teaches creative writing, preferably in a pastoral setting. Ask that person, “Do successful authors of series follow a pattern?”

Here’s what I think is likely to happen at Amazon. Remember. I have zero knowledge about the inner workings of the Bezos bulldozer. I inhale its fumes like many other people. Also, Engadget doesn’t get near this idea. This is a dinobaby opinion.

Amazon will train its smart software to write summaries. Then someone at Amazon will ask the smart software to generate a 5,000 word short story in the style of Nora Roberts or some other money spinner. If the story is okay, then the Amazonian with a desire to shift gears says, “Can you take this short story and expand it to a 200,000 word novel, using the patterns, motifs, and rhetorical techniques of the series of novels by Nora, Mark, or whoever?”

Guess what?

Amazon now has an “original” novel which can be marketed as an Amazon test, a special to honor whomever, or an experiment. If Prime members or the curious click a lot, that Amazon employee has a new business to propose to the big bulldozer driver.

How likely is this scenario? My instinct is that there is a 99 percent probability that an individual at Amazon or the firm from which Amazon is licensing its smart software has or will do this.

How likely is it that Amazon will sell these books to the specific audience known to consume the confections of Nora and Mark or whoever? I think the likelihood is close to 80 percent. The barriers are:

  1. Bad optics among publishers, many of which are not pals of fume spouting bulldozers in the few remaining bookstores
  2. Legal issues because both publishers and authors will grouse and take legal action. The method mostly worked when Google was scanning everything from timetables of 19th century trains in England to books just unwrapped for the romance novel crowd
  3. Management disorganization. Yep, Amazon is suffering the organization dysfunction syndrome just like other technology marvels
  4. The outputs lack the human touch. The project gets put on ice until OpenAI, Anthropic, or whatever comes along and does a better job, probably with fewer computing resources, which means more profit.

What’s important is that this first step is now public and underway.

Engadget says, “Use it at your own risk.” Whose risk, may I ask?

Stephen E Arnold, April 7, 2025

Amazon: So Many Great Ideas

April 1, 2025

AWS puts its customers first. Well, those who pay for the premium support plan, anyway. A thread on Reddit complains, "AWS Blocking Troubleshooting Docs Behind Paid Premium Support Plan." Redditor Certain_Dog1960 writes:

"When did AWS decide that troubleshooting docs/articles require you to have a paid premium support plan….like seriously who thought this was a good idea?"

Good question. The comments and the screenshot of Amazon’s message make clear that the company’s idea of how to support customers is different from actual customers’ thoughts. However, Certain_Dog posted an encouraging update:

"The paywall has been taken down!!! :)"

Apparently customer outrage still makes a difference. Occasionally.

Cynthia Murrell, March 31, 2025

Click Counting: It Is 1992 All Over Again

March 31, 2025

Dinobaby says, “No smart software involved. That’s for ‘real’ journalists and pundits.”

I love it when search engine optimization experts, online marketing executives, and drum beaters for online advertising talk about clicks, clickstreams, and click metrics. Ho ho ho.

I think I was involved in creating a Web site called Point (The Top 5% of the Internet). The idea was simple: Curate and present a directory of the most popular sites on the Internet. It was a long shot because the team did not want to profile drug, sex, and a number of other illegal Web sites for the directory. The idea was that in 1992 or so, no one had a Good Housekeeping Seal of Approval-type of directory. There was Yahoo, but if one poked around, some interesting Web sites would display in their low resolution, terrible bandwidth glory.

To my surprise, the idea worked and the team wisely exited the business when someone a lot smarter than the team showed up with a check. I remember fielding questions about “traffic”. There was the traffic we used to figure out what sites were popular. Then there was traffic we counted when visitors to Point hit the home page and read profiles of sites with our Good Housekeeping-type of seal.

I want to share that from those early days of the Internet the counting of clicks was pretty sketchy. Scripts could rack up clicks in a slow heartbeat. Site operators just lied or cooked up reports that served up a reality in terms of tasty little clicks.

Why are clicks bogus? I am not prepared to explain the dark arts of traffic boosting, which today is greatly aided by scripts instantly generated by smart software. Instead I want to highlight this story in TechCrunch: “YouTube Is Changing How YouTube Shorts Views Are Counted.” The article does a good job of explaining how one monopoly is responding to its soaring costs and the slow and steady erosion of its search Nile River of money.

The write up says:

YouTube is changing how it counts views on YouTube Shorts to give creators a deeper understanding of how their short-form content is performing

I don’t know much about YouTube. But I recall watching little YouTubettes, which bear a remarkable resemblance to TikTok’s weaponized data bursts, just start playing. Baffled, I would watch a couple of seconds, check that my “autoplay” was set to off, and then kill the browser page. YouTubettes are not for me.

Most reasonable people would want to know several things about their own or any other YouTubette; for example (a toy tally of a few of these appears after the list):

  1. How many times did a YouTubette begin to play and then was terminated in less than five seconds
  2. How many times a YouTubette was viewed from start to bitter end
  3. How many times a YouTubette was replayed in its entirety by a single user
  4. What device was used
  5. How many YouTubettes were “shared”
  6. How these data points compare against the total clicks, whether short plays or full views?
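Here is the toy tally promised above: a minimal Python sketch that computes a few of these numbers from a handful of invented play events. The five-second cutoff and the played-to-the-end rule are my assumptions; Google does not publish the precise definition of an “engaged view.”

    # Invented play events: (seconds_watched, video_length_seconds, device).
    events = [
        (2, 30, "phone"), (30, 30, "phone"), (4, 45, "tablet"),
        (45, 45, "tv"), (45, 45, "tv"), (12, 30, "phone"),
    ]

    starts = len(events)                                          # new rule: every start or replay counts
    abandoned = sum(1 for w, _, _ in events if w < 5)             # terminated in under five seconds
    completed = sum(1 for w, length, _ in events if w >= length)  # watched start to finish

    print(f"starts and replays: {starts}")               # 6
    print(f"abandoned in <5 seconds: {abandoned}")        # 2
    print(f"played start to finish: {completed}")         # 3
    print(f"completion rate: {completed / starts:.0%}")   # 50%

Six “views” under start-and-replay counting; three actual start-to-finish plays. That gap is the whole story.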

You get the idea. Google has these data, and the wonderfully wise but stressed firm is now counting “short views” as what I describe as the reality: Knowing exactly how many times a YouTubette was played start to finish.

According to the write up:

With this update, YouTube Shorts will now align its metrics with those of TikTok and Instagram Reels, both of which track the number of times your video starts or replays. YouTube notes that creators will now be able to better understand how their short-form videos are performing across multiple platforms. Creators who are still interested in the original Shorts metric can view it by navigating to “Advanced Mode” within YouTube Analytics. The metric, now called “engaged views,” will continue to allow creators to see how many viewers choose to continue watching their Shorts. YouTube notes that the change won’t impact creators’ earnings or how they become eligible for the YouTube Partner Program, as both of these factors will continue to be based on engaged views rather than the updated metric.

Okay, responding to the competition from one other monopolistic enterprise. I get it. Okay, Google will allegedly provide something for a creator of a YouTubette to view for insight. And the change won’t impact what Googzilla pays a creator. Do creators really know how Google calculates payments? Google knows. With the majority of the billions of YouTube videos (short and long) getting a couple of clicks, the “popularity” scheme boils down to what we did in 1992. We used whatever data was available, did a few push ups, and pumped out a report.

Could Google follow the same road map? Of course not. In 1992, we had no idea what we were doing. But this is 2025 and Google knows exactly what it is doing.

Advertisers will see click data that do not reflect what creators want to see and what viewers of YouTubettes and probably other YouTube content really want to know: How many people watched the video from start to finish?

Google wants to sell ads at perhaps the most difficult point in its 20-plus-year history. That autoplay inflates clicks. “Hey, the video played. We count it.” Can you conceptualize that statement? I can.

Let’s not call this new method “weaponization.” That’s too strong. Let’s describe this as “shaping” or “inflating” clicks.

Remember. I am a dinobaby and usually wrong. No high technology company would disadvantage a creator or an advertiser. Therefore, this change is no big deal. Will it help Google deal with its current challenges? You can now ask Google AI questions answered by its most sophisticated smart software for free.

Is that an indication that something is not good enough to cause people to pay money? Of course not. Google says “engaged views” are still important. Absolutely. Google is just being helpful.

Stephen E Arnold, March 31, 2025

OpenAI and Alleged Environmental Costs: No Problem

March 28, 2025

We know ChatGPT uses an obscene amount of energy and water. But it can be difficult to envision exactly how much. Digg offers some helpful infographics in "Do You Know How Much Energy ChatGPT Actually Uses?" Writer Darcy Jimenez tells us:

"Since it was first released in 2022, ChatGPT has gained a reputation for being particularly bad for the environment — for example, the GPT-4 model uses as many as 0.14 kilowatt-hours (kWh) generating something as simple as a 100-word email. It can be tricky to fully appreciate the environmental impact of using ChatGPT, though, so the researchers at Business Energy UK made some visualizations to help. Using findings from a 2023 research paper, they calculated the AI chatbot’s estimated water and electricity usage per day, week, month and year, assuming its 200 million weekly users feed it five prompts per day."

See the post for those enlightening graphics. Here are just a few of the astounding statistics:

"Electricity: each day, ChatGPT uses 19.99 million kWh. That’s enough power to charge 4 million phones, or run the Empire State Building for 270 days. … ChatGPT uses a whopping 7.23 billion kWh per year, which is more electricity than the world’s 112 lowest-consumption countries consume over the same period. It’s also enough to power every home in Wyoming for two and a half years."

And:

"Water: The 19.58 million gallons ChatGPT drinks every day could fill a bath for each of Colorado Springs’s 488,664 residents. That amount is also equivalent to everyone in Belgium flushing their toilet at the same time. … In the space of a year, the chatbot uses 7.14 billion gallons of water. That’s enough to fill up the Central Park Reservoir seven times, or power Las Vegas’s Fountains of Bellagio shows for almost 600 years."

Wow. See the write-up for more mind-boggling comparisons. Dolphin lovers and snail darter fans may want to check it out as well.
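For what it is worth, the quoted daily figures do roughly reproduce the annual totals, and they imply a per-prompt number the article does not state. A minimal arithmetic sketch, assuming the write-up’s 200 million weekly users at five prompts per day; the per-prompt figure is my own inference, not Business Energy UK’s:

    # Back-of-envelope check of the quoted daily figures.
    daily_kwh = 19.99e6          # quoted daily electricity, kWh
    daily_gallons = 19.58e6      # quoted daily water, gallons
    prompts_per_day = 200e6 * 5  # assumed: 200M weekly users x 5 prompts per day

    print(f"annual electricity: {daily_kwh * 365 / 1e9:.2f} billion kWh")       # ~7.30 vs. quoted 7.23
    print(f"annual water: {daily_gallons * 365 / 1e9:.2f} billion gallons")     # ~7.15 vs. quoted 7.14
    print(f"implied energy per prompt: {daily_kwh / prompts_per_day:.3f} kWh")  # ~0.020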

Cynthia Murrell, March 28, 2025

Insecure Managers Watch: Whistle While You Shirk

March 27, 2025

It’s never been easier to spy…er…watch employees while they work. Systems track employees and their work. Employees are monitored with software that’s akin to spyware except it wears a white hat (sort of) instead of a black one. Computer World says that employers are spying…er…monitoring their employees more than ever: “Electronic Employee Monitoring Reaches An All-Time High.”

The Massachusetts Institute of Technology (MIT) reported that 80% of companies are tracking remote and hybrid workers. Everything from keystrokes to tone in communications is being recorded. This is often happening without the knowledge of the employees. Meanwhile, Gartner believes that spying…er…watching of employees is up 30% from 2024, putting it at 71%.

One reason employees are being monitored more is that return-to-office mandates aren’t working. Employers aren’t enforcing them because employees value the flexibility to work where they’re most productive. There is also a changing workplace trend: employees value supportive management. If the support isn’t there, they’re not afraid to share their discontent online.

Employees are worried about the spying…er…monitoring:

“The constant oversight is stressing workers. In a survey of 1,500 US-based employers and 1,500 workers by ExpressVPN, 24% said they take fewer breaks to avoid looking idle, while 32% feel pressured to work faster. In response, 16% fake productivity with unnecessary apps, 15% schedule emails, and 12% use tools to evade detection. Nearly half (49%) would consider leaving if surveillance increased, with 24% willing to accept a pay cut to avoid it. ‘Surveillance may seem like a solution for improving efficiency, but it’s clearly eroding trust and morale in the workplace,’ said Lauren Hendry Parsons, ExpressVPN’s Digital Privacy Advocate. ‘As companies adopt increasingly invasive tools, they risk losing the loyalty and well-being of their workforce.’”

When employees are watched, their productivity drops. Many of them also want disclosure about being monitored and how their information is being used.

Maybe the modern workplace works better as a prison?

Whitney Grace, March 27, 2025
