YouTube: Behind the Scenes Cleverness?
September 17, 2025
No smart software involved. Just a dinobaby’s work.
I read “YouTube Is a Mysterious Monopoly.” The author tackles the subject of YouTube and how it seems to be making life interesting for some “creators.” In many countries, YouTube is television. I discovered this by accident in Bucharest, Cape Town, and Santiago, to name three locations where locals told me, “I watch YouTube.”
The write up offers some comments about this Google service. Let’s look at a couple of these.
First, the write up says:
…while views are down, likes and revenue have been mostly steady. He guesses that this might be caused by a change in how views are calculated, but it’s just a guess. YouTube hasn’t mentioned anything about a change, and the drop in views has been going on for about a month.
About five years ago, one of the companies with which I have worked for a while pointed out that its Web site traffic was drifting down. As we monitored traffic and ad revenues, we noticed initial stability and then a continuing decline in both traffic and ad revenue. I recall we checked some data about competitive sites: most were experiencing the same downward drift, while several were steady or growing. My client told me that Google was not able to provide substantive information. Is this type of decline an accident, or is it what I call traffic shaping for Google’s revenue? No one has provided information to make this decline clear. Today (September 10, 2025) the explanation is related to smart software. I have my doubts. I think it is Google cleverness.
Second, the write up states:
I pay for YouTube Premium. For my money, it’s the best bang-for-the-buck subscription service on the market. I also think that YouTube is a monopoly. There are some alternatives — I also pay for Nebula, for example — but they’re tiny in comparison. YouTube is effectively the place to watch video on the internet.
In the US, Google has been tagged with the term “monopoly.” I find it interesting that YouTube is allegedly wearing a T-shirt that says, “The only game in town.” I think that YouTube has become today’s version of the Google online search service. We have people dependent on the service for money, and we have some signals that Google is putting its thumb on the revenue scale or is shaping what users are able to view on the service. Also, we have similar opaqueness about who or what is fiddling the dials. If a video or a Web site does not appear in a search result, that site may as well not exist for some people. The write up comes out and uses the “monopoly” word for YouTube.
Finally, the essay offers this statement:
Creators are forced to share notes and read tea leaves as weird things happen to their traffic. I can only guess how demoralizing that must feel.
For me, this comment illustrates that the experience of my client’s declining traffic and ad revenue seems to be taking place in the YouTube “datasphere.” What is a person dependent on YouTube revenue supposed to do when views drop or the vaunted YouTube search service does not display a hit for a video directly relevant to a user’s search? OSINT experts have compiled information about “Google dorks.” These are hit-and-miss methods to dig a relevant item from the Google index. But finding a video is a bit tricky, and there are fewer Google dorks to unlock YouTube content than for other types of information in the Google index.
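For readers unfamiliar with the dork technique, it boils down to composing query strings with advanced search operators and feeding them to the general Google index instead of YouTube’s own search. A minimal sketch in Python (the `site:`, `inurl:`, and `intitle:` operators are standard Google search operators; the helper function and the specific query patterns are my illustrative assumptions, not a documented recipe):

```python
# Sketch: dork-style queries aimed at surfacing YouTube watch pages
# via the general Google index rather than YouTube's own search.
# The operators are standard Google operators; the patterns are
# illustrative assumptions, not a documented YouTube-dork recipe.

def build_dork(topic, channel=None):
    """Compose a dork-style query string for YouTube watch pages."""
    parts = ['site:youtube.com', 'inurl:watch', f'"{topic}"']
    if channel:
        parts.append(f'intitle:"{channel}"')
    return " ".join(parts)

print(build_dork("OSINT tutorial"))
# site:youtube.com inurl:watch "OSINT tutorial"
print(build_dork("search operators", channel="conference talk"))
```

Paste the resulting string into a Google search box; the results vary from run to run, which is exactly what makes these methods hit-and-miss.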
What do I make of this? Several preliminary observations are warranted. First, Google is hugely successful, but it faces the costs of running the operation: the quite difficult task of controlling the costs of ping, pipes, and power; the cost of people; and the expense of dealing with pesky government regulators. The “steering” of traffic and revenue to creators is possibly a way to hit financial targets.
Second, I think Google’s size and its incentive programs allow certain “deciders” to make changes that have local and global implications. Another Googler has to figure out what changed, and that may be too much work. The result is that Googlers don’t have a clue what’s going on.
Third, Google appears to be focused on creating walled gardens for what it views as “Web content” and for creator-generated content. What happens when a creator quits YouTube? I have heard that Google’s nifty AI may be able to extract the magnetic points of the disappeared creator and let its AI crank out a satisfactory simulacrum. Hey, what are those YouTube viewers in Santiago going to watch on their Android mobile devices?
My answer to this rhetorical question is the creator and Google “features” that generate the most traffic. What are these programs? A list of the alleged top 10 hits on YouTube is available at https://mashable.com/article/most-subscribed-youtube-channels. I want to point out that the Google holds down two spots in its own list: number four and number 10. The four spot is Google Movies, a blend of free with ads, rent the video, “buy” the video (which sort of puzzles me), and subscribe to a stream. The number 10 spot is Google’s own music “channel.” I think that works out to YouTube’s hosting of 10 big draw streams and services. Of those 10, the Google is 20 percent of the action. What percentage will be “Google” properties in a year?
Net net: Monitoring YouTube policy, technical, and creator data may help convert these observations into concrete factoids. On the other hand, you are one click away from what exactly? Answer: Daily Motion or RuTube? Mysterious, right?
Stephen E Arnold, September 17, 2025
Desperate Much? Buying Cyber Security Software Regularly
September 16, 2025
Bad actors have access to AI, and it is enabling them to increase both speed and volume at an alarming rate. Are cybersecurity teams able to cope? Maybe—if they can implement the latest software quickly enough. VentureBeat reports, “Software Commands 40% of Cybersecurity Budgets as Gen AI Attacks Execute in Milliseconds.” Citing IBM’s recent Cost of a Data Breach Report, writer Louis Columbus reports 40% of cybersecurity spending now goes to software. Compare that to just 15.8% spent on hardware, 15% on outsourcing, and 29% on personnel. Even so, AI-assisted hacks now execute in milliseconds while the Mean Time to Identify (MTTI) is 181 days. That is quite the disparity. Columbus observes:
“Three converging threats are flipping cybersecurity on its head: what once protected organizations is now working against them. Generative AI (gen AI) is enabling attackers to craft 10,000 personalized phishing emails per minute using scraped LinkedIn profiles and corporate communications. NIST’s 2030 quantum deadline threatens retroactive decryption of $425 billion in currently protected data. Deepfake fraud that surged 3,000% in 2024 now bypasses biometric authentication in 97% of attempts, forcing security leaders to reimagine defensive architectures fundamentally.”
Understandable. But all this scrambling for solutions may now be part of the problem. Some teams, we are told, manage 75 or more security tools. No wonder they capture so much of the budget. Simplification, however, is proving elusive. We learn:
“Security Service Edge (SSE) platforms that promised streamlined convergence now add to the complexity they intended to solve. Meanwhile, standalone risk-rating products flood security operations centers with alerts that lack actionable context, leading analysts to spend 67% of their time on false positives, according to IDC’s Security Operations Study. The operational math doesn’t work. Analysts require 90 seconds to evaluate each alert, but they receive 11,000 alerts daily. Each additional security tool deployed reduces visibility by 12% and increases attacker dwell time by 23 days, as reported in Mandiant’s 2024 M-Trends Report. Complexity itself has become the enterprise’s greatest cybersecurity vulnerability.”
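The quoted operational math is easy to verify. At 90 seconds per alert and 11,000 alerts per day, triage alone demands far more analyst time than any team has (the eight-hour shift is my assumption for illustration; the alert figures are from the quote above):

```python
# Verify the workload figures quoted above: 11,000 alerts per day,
# 90 seconds of analyst time per alert.
alerts_per_day = 11_000
seconds_per_alert = 90

hours_needed = alerts_per_day * seconds_per_alert / 3600
print(f"Analyst-hours of triage required daily: {hours_needed:.0f}")  # 275

shift_hours = 8  # assumed shift length, for illustration only
print(f"Full-time analysts needed just for triage: {hours_needed / shift_hours:.1f}")  # 34.4
```

Roughly 275 analyst-hours a day, or a three-dozen-person team doing nothing but triage, before a single real incident is investigated. The math really does not work.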
See the write-up for more on efforts to improve cybersecurity’s speed and accuracy and the factors that thwart them. Do we have a crisis yet? Of course not. Marketing tells us cyber security just works. Sort of.
Cynthia Murrell, September 16, 2025
Shame, Stress, and Longer Hours: AI’s Gifts to the Corporate Worker
September 15, 2025
Office workers from the executive suites to entry-level positions have a new reason to feel bad about themselves. Fortune reports, “ ‘AI Shame’ Is Running Rampant in the Corporate Sector—and C-Suite Leaders Are Most Worried About Getting Caught, Survey Says.” Writer Nick Lichtenberg cites a survey of over 1,000 workers by SAP subsidiary WalkMe. We learn almost half (48.8%) of the respondents said they hide their use of AI at work to avoid judgement. The number was higher at 53.4% for those at the top—even though they use AI most often. But what about the generation that has entered the job force amid AI hype? We learn:
“Gen Z approaches AI with both enthusiasm and anxiousness. A striking 62.6% have completed work using AI but pretended it was all their own effort—the highest rate among any generation. More than half (55.4%) have feigned understanding of AI in meetings. … But only 6.8% report receiving extensive, time-consuming AI training, and 13.5% received none at all. This is the lowest of any age group.”
In fact, the study found, only 3.7% of entry-level workers received substantial AI training, compared to 17.1% of C-suite executives. The write-up continues:
“Despite this, an overwhelming 89.2% [of Gen Z workers] use AI at work—and just as many (89.2%) use tools that weren’t provided or sanctioned by their employer. Only 7.5% reported receiving extensive training with AI tools.”
So younger employees use AI more but receive less training. And, apparently, are receiving little guidance on how and whether to use these tools in their work. What could go wrong?
From executives to fresh hires and those in between, the survey suggests everyone is feeling the impact of AI in the workplace. Lichtenberg writes:
“AI is changing work, and the survey suggests not always for the better. Most employees (80%) say AI has improved their productivity, but 59% confess to spending more time wrestling with AI tools than if they’d just done the work themselves. Gen Z again leads the struggle, with 65.3% saying AI slows them down (the highest amount of any group), and 68% feeling pressure to produce more work because of it.”
In addition, more than half the respondents said AI training initiatives amounted to a second, stressful job. But doesn’t all that hard work pay off? Um, no. At least, not according to this report from MIT that found 95% of AI pilot programs at large companies fail. So why are we doing this again? Ask the investor class.
Cynthia Murrell, September 15, 2025
Common Sense Returns for Coinbase Global
September 5, 2025
No AI. Just a dinobaby working the old-fashioned way.
Just a quick dino tail slap for Coinbase. I read “Coinbase Reverses Remote Policy over North Korean Hacker Threats.” The write up says:
Coinbase has reversed its remote-first policy due to North Korean hackers exploiting fake remote job applications for infiltration. The company now mandates in-person orientations and U.S. citizenship for sensitive roles. This shift highlights the crypto industry’s need to balance flexible work with robust cybersecurity.
I strongly disagree with the cyber security angle. I think it is a return (hopefully) to common sense, not the mindless pursuit of cheap technical work and lousy management methods. Sure, cyber security is at risk when an organization hires people to do work from a far-off land. The easy access to voice and image synthesis tools means that some outfits are hiring people who aren’t the people the really busy, super professional human resources person thinks were hired.
The write up points out:
North Korean hackers have stolen an estimated $1.6 billion from cryptocurrency platforms in 2025 alone, as detailed in a recent analysis by Ainvest. Their methods have evolved from direct cyberattacks to more insidious social engineering, including fake job applications enhanced by deepfakes and AI-generated profiles. Coinbase’s CEO, Brian Armstrong, highlighted these concerns during an appearance on the Cheeky Pint podcast, as covered by The Verge, emphasizing how remote-first policies inadvertently create vulnerabilities.
Close, but the North Korean angle is akin to Microsoft saying, “1,000 Russian hackers did this.” Baloney. My view is that the organized hacking operations blend smoothly with the North Korean government’s desire for free cash and with the large Chinese criminal organizations running money laundering schemes from that garden spot, the Golden Triangle.
Stealing crypto is one thing. Coordinating attacks on organizations to exfiltrate high value information is a second thing. A third thing is to perform actions that meet the needs and business methods of large-scale money laundering, phishing, and financial scamming operations.
Looking at these events from the point of view of a single company, it is easy to see that cost reduction and low-cost technical expertise motivated some managers, maybe those at Coinbase. But now that more information is penetrating the MBA fog that envelops many organizations, common sense may become more popular. Management gurus and blue chip consulting firms are not proponents of common sense in my experience. Coinbase may have seen the light.
Stephen E Arnold, September 5, 2025
Grousing Employees Can Be Fun. Credible? You Decide
September 4, 2025
No AI. Just a dinobaby working the old-fashioned way.
I read “Former Employee Accuses Meta of Inflating Ad Metrics and Sidestepping Rules.” Now, claims from former employees that cast aspersions on a former employer are best processed with care. I did that, and I want to share the snippets snagging my attention. I try not to think about Meta. I am finishing my monograph about Telegram, and I have to stick to my lane. But I found this write up a hoot.
The first passage I circled says:
Questions are mounting about the reliability of Meta’s advertising metrics and data practices after new claims surfaced at a London employment tribunal this week. A former Meta product manager alleged that the social media giant inflated key metrics and sidestepped strict privacy controls set by Apple, raising concerns among advertisers and regulators about transparency in the industry.
Imagine. Meta coming up at a tribunal. Does that remind anyone of the Cambridge Analytica excitement? Do you recall the rumors that fiddling with Facebook pushed Brexit over the finish line? Whatever happened to those oh-so-clever CA people?
I found this tribunal claim interesting:
… Meta bypassed Apple’s App Tracking Transparency (ATT) rules, which require user consent before tracking their activity across iPhone apps. After Apple introduced ATT in 2021, most users opted out of tracking, leading to a significant reduction in Meta’s ability to gather information for targeted advertising. Company investors were told this would trim revenues by about $10 billion in 2022.
I thought Apple had their system buttoned up. Who knew?
Did Meta have a response? Absolutely. The write up reports:
“We are actively defending these proceedings …” a Meta spokesperson told The Financial Times. “Allegations related to the integrity of our advertising practices are without merit and we have full confidence in our performance review processes.”
True or false? Well….
Stephen E Arnold, September 4, 2025
AI Will Not Have a Negative Impact on Jobs. Knock Off the Negativity Now
September 2, 2025
No AI. Just a dinobaby working the old-fashioned way.
The word from Goldman Sachs is parental, and well it should be. After all, Goldman Sachs is the big dog. PC Week’s story “Goldman Sachs: AI’s Job Hit Will Be Brief as Productivity Rises” makes this crystal clear, or almost. In an era of PR and smart software, I am never sure who is creating what.
The write up says:
AI will cause significant, but ultimately temporary, disruption. The headline figure from the report is that widespread adoption of AI could displace 6-7% of the US workforce. While that number sounds alarming, the firm’s economists, Joseph Briggs and Sarah Dong, argue against the narrative of a permanent “jobpocalypse.” They remain “skeptical that AI will lead to large employment reductions over the next decade.”
Knock off the complaining already. College graduates with zero job offers? Just do the van life thing for a decade or become an influencer.
The write up explains history just like the good old days:
“Predictions that technology will reduce the need for human labor have a long history but a poor track record,” they write. The report highlights a stunning fact: Approximately 60% of US workers today are employed in occupations that didn’t even exist in 1940. This suggests that over 85% of all employment growth in the last 80 years has been fueled by the creation of new jobs driven by new technologies. From the steam engine to the internet, innovation has consistently eliminated some roles while creating entirely new industries and professions.
Technology and brilliant management like that at Goldman Sachs makes the economy hum along. And the write up proves it, and I quote:
Goldman Sachs expects AI to follow this pattern.
For those making TikTok- and YouTube-type videos revealing that jobs are hard to obtain, and for the fathers whining about sending 200 job applications each month for six months: knock it off. The sun will come up tomorrow. The financial engines will churn and charge a service fee, of course. The flowers will bloom because that baloney about global warming is dead wrong. The birds will sing (well, maybe not in Manhattan) but elsewhere, because windmills creating power are going to be shut down so the birds won’t be decapitated any more.
Everything is great. Goldman Sachs says this. In Goldman we trust or is it Goldman wants your trust… fund that is.
Stephen E Arnold, September 2, 2025
Swinging for the Data Centers: You May Strike Out, Casey
September 2, 2025
Home to a sparse population of humans, the Cowboy State is about to generate an immense amount of electricity. Tech Radar Pro reports, “A Massive Wyoming Data Center Will Soon Use 5x More Power than the State’s Human Occupants—But No One Knows Who Is Using It.” Really? We think we can guess. The Cheyenne facility is to be powered by a bespoke combination of natural gas and renewables. Writer Efosa Udinmwen explains:
“The proposed facility, a collaboration between energy company Tallgrass and data center developer Crusoe, is expected to start at 1.8 gigawatts and could scale to an immense 10 gigawatts. For context, this is over five times more electricity than what all households in Wyoming currently use.”
Who could need so much juice? Could it be OpenAI? So far, Crusoe neither confirms nor denies that suspicion. The write-up, however, notes Crusoe worked with OpenAI to build the world’s “largest data center” in Texas as part of the OpenAI-led “Stargate” initiative. (Yes, named for the portals in the 1994 movie and subsequent TV show. So clever.) Udinmwen observes:
“At the core of such AI-focused data centers lies the demand for extremely high-performance hardware. Industry experts expect it to house the fastest CPUs available, possibly in dense, rack-mounted workstation configurations optimized for deep learning and model training. These systems are power-hungry by design, with each server node capable of handling massive workloads that demand sustained cooling and uninterrupted energy. Wyoming state officials have embraced the project as a boost to local industries, particularly natural gas; however, some experts warn of broader implications. Even with a self-sufficient power model, a data center of this scale alters regional power dynamics. There are concerns that residents of Wyoming and its environs could face higher utility costs, particularly if local supply chains or pricing models are indirectly affected. Also, Wyoming’s identity as a major energy exporter could be tested if more such facilities emerge.”
The financial blind spot is explained in Futurism’s article “There’s a Stunning Financial Problem With AI Data Centers.” The main idea is that today’s investment will require future spending for upgrades, power, water, and communications. The result is that most of these “home run” swings will produce lousy batting averages, and some swingers may end up selling hot dogs at the ball park adjacent to the humming, hot structures.
Cynthia Murrell, September 2, 2025
Picking on the Zuck: Now It Is the AI Vision
September 1, 2025
No AI. Just a dinobaby working the old-fashioned way.
Hey, the fellow just wanted to meet girls on campus. Now his life work has become a negative. Let’s cut some slack for the Zuck. He is a thinking, caring family man. Imagine my shock when I read “Mark Zuckerberg’s Unbelievably Bleak AI Vision: We Were Promised Flying Cars. We Got Instagram Brain Rot.”
A person choosing to use a product the Zuck just bought conflates brain rot with a mass affliction. That’s outstanding reasoning.
The write up says:
In an Instagram video (of course) posted last week, Zuck explains that Meta’s goal is to develop “personal superintelligence for everyone,” accessed through devices like “glasses that can see what we see, hear what we hear, and interact with us throughout the day.” “A lot has been written about the scientific and economic advances that AI can bring,” he noted. “And I’m really optimistic about this.” But his vision is “different from others in the industry who want to direct AI at automating all of the valuable work”: “I think an even more meaningful impact in our lives is going to come from everyone having a personal superintelligence that helps you achieve your goals, create what you want to see in the world, be a better friend, and grow to become the person that you aspire to be.”
A person wearing the Zuck glasses will not be a “glasshole.” That individual will be a better human. Imagine taking the Zuck qualities and amplifying them like a high school sound system on the fritz. That’s what smart software will do.
The write up I saw is dated August 6, 2025, and it is hopelessly out of date. The Zuck has reorganized his firm’s smart software unit. He has frozen hiring except for a few quick strikes at competitors. And he is bringing more order to a quite well organized, efficiently run enterprise.
The big question is, “How can a write up dated August 6, 2025, become so mismatched with what the Zuck is currently doing?” I don’t think I can rely on a write up with an assertion like this one:
I’ve seen the best digital minds of my generation wasted on Reels.
I have never seen a Reels, but it is obvious I am in the minority. That means that I am ill-equipped to understand this:
the AI systems his team is building are not meant to automate work but to provide a Meta-governed layer between individual human beings and the world outside of them.
This sounds great.
I would like to share three thoughts I had whilst reading this essay:
- Ephemeral writing becomes weirdly unrelated to the reality of the current online market in the United States
- The Zuck’s statements and his subsequent reorganization suggest that alignment at Facebook is a bit like a grade school student trying to fit puzzle pieces into the wrong puzzle
- Goggles, glasses, implants — the fact that Facebook does not have a device has created a desire for a vehicle with a long hood and a big motor. Compensation comes in many forms.
Net net: One of the risks in the Silicon Valley world is that “real” is slippery. Do the outputs of “leadership” correlate with the reality of the organization?
Nope. Do this. Do that. See what works. Modern leadership. Will someone turn off those stupid flashing red and yellow alarm lights? I can see the floundering without the glasses, buzzing, and flashing.
Stephen E Arnold, September 1, 2025
More about AI and Peasants from a Xoogler Too
September 1, 2025
A former Googler predicts a rough ride ahead for workers. And would-be workers. Yahoo News shares “Ex-Google Exec’s Shocking Warning: AI Will Create 15 Years of ‘Hell’—Starting Sooner than We Think.” Only 15 years? Seems optimistic. Mo Gawdat issued his prophecy on the “Diary of a CEO” podcast. He expects “the end of white-collar work” to begin by the end of this decade. Indeed, the job losses have already begun. But the cascading effects could go well beyond high unemployment. Reporter Ariel Zilber writes:
“Without proper government oversight, AI technology will channel unprecedented wealth and influence to those who own or control these systems, while leaving millions of workers struggling to find their place in the new economy, according to Gawdat. Beyond economic concerns, Gawdat anticipates serious social consequences from this rapid transformation. Gawdat said AI will trigger significant ‘social unrest’ as people grapple with losing their livelihoods and sense of purpose — resulting in rising rates of mental health problems, increased loneliness and deepening social divisions. ‘Unless you’re in the top 0.1%, you’re a peasant,’ Gawdat said. ‘There is no middle class.’”
That is ominous. But, to hear Gawdat tell it, there is a bright future on the other side of those hellish 15 years. He believes those who survive past 2040 can look forward to a “utopian” era free from tedious, mundane tasks. This will free us up to focus on “love, community, and spiritual development.” Sure. But to get there, he warns, we must take certain steps:
“Gawdat said that it is incumbent on governments, individuals and businesses to take proactive measures such as the adoption of universal basic income to help people navigate the transition. ‘We are headed into a short-term dystopia, but we can still decide what comes after that,’ Gawdat told the podcast, emphasizing that the future remains malleable based on choices society makes today. He argued that outcomes will depend heavily on decisions regarding regulation, equitable access to technology, and what he calls the ‘moral programming’ of AI algorithms.”
We are sure government and Big Tech will get right on that. Totally doable in our current political and business climates. Meanwhile, Mo Gawdat is working on an “AI love coach.” I am not sure Mr. Gawdat is connected to the bureaucratic and management ethos of 2025. Is that why he is a Xoogler?
Cynthia Murrell, September 1, 2025
Faux Boeuf Delivers Zero Calories Plus a Non-Human Toxin
August 29, 2025
No AI. Just a dinobaby working the old-fashioned way.
That sizzling AI-generated rib called boeuf à la Margaux Blanchard is a treat. I learned about this recipe for creating filling, substantive, calorie-laden content in “Wired and Business Insider Remove Articles by AI-Generated Freelancer.” I can visualize the meeting in which the decision was taken to hire Margaux Blanchard. I can also run, on my mental VHS, the meeting when the issue was discovered. In my version, the group agreed to blame it on a contractor and the lousy job human resource professionals do these days.
What’s the “real” story? Let’s go to the Guardian write up:
On Thursday [August 22, 2025], Press Gazette reported that at least six publications, including Wired and Business Insider, have removed articles from their websites in recent months after it was discovered that the stories – written under the name of Margaux Blanchard – were AI-generated.
I frequently use the phrase “ordained officiant” in my dinobaby musings. Doesn’t everyone with some journalism experience?
The write up said:
Wired’s management acknowledged the faux pas, saying: “If anyone should be able to catch an AI scammer, it’s Wired. In fact we do, all the time … Unfortunately, one got through. We made errors here: This story did not go through a proper fact-check process or get a top edit from a more senior editor … We acted quickly once we discovered the ruse, and we’ve taken steps to ensure this doesn’t happen again. In this new era, every newsroom should be prepared to do the same.”
Yeah, unfortunately and quickly. Yeah.
I liked this paragraph in the story:
This incident of false AI-generated reporting follows a May error when the Chicago Sun-Times’ Sunday paper ran a syndicated section with a fake reading list created by AI. Marco Buscaglia, a journalist who was working for King Features Syndicate, turned to AI to help generate the list, saying: “Stupidly, and 100% on me, I just kind of republished this list that [an AI program] spit out … Usually, it’s something I wouldn’t do … Even if I’m not writing something, I’m at least making sure that I correctly source it and vet it and make sure it’s all legitimate. And I definitely failed in that task.” Meanwhile, in June, the Utah court of appeals sanctioned a lawyer after he was discovered to have used ChatGPT for a filing he made in which he referenced a nonexistent court case.
Hey, that AI is great. It builds trust. It is intellectually satisfying, just like some time in the kitchen with Margaux Blanchard, a hot laptop, and some spicy prompts. Yum yum yum.
Stephen E Arnold, August 29, 2025