YouTube: Another Big Cost Black Hole?
March 25, 2025
Another dinobaby blog post. Eight decades and still thrilled when I point out foibles.
I read “Google Is in Trouble… But This Could Change Everything – and No, It’s Not AI.” The write up makes the case that YouTube is Google’s big financial opportunity. I agree with most of the points in the write up. The article says:
Google doesn’t clearly explain how much of the $40.3 billion comes from the YouTube platform, but based on their description and choice of phrasing like “primarily include,” it’s safe to assume YouTube generates significantly more revenue than just the $36.1 billion reported. This would mean YouTube, not Google Cloud, is actually Google’s second-biggest business.
Yep, financial fancy dancing is part of the game. Google is using its financial reports as marketing to existing stakeholders and investors who want a part of the still-hot, still-dominant Googzilla. The idea is that the Google is stomping on the competition in the hottest sectors: The cloud, smart software, advertising, and quantum computing.
A big time company’s chief financial officer enters his office after lunch and sees a flood of red ink engulfing his work space. Thanks, OpenAI, good enough.
Let’s flip the argument from Google has its next big revenue oil gusher to the cost of that oil field infrastructure.
An article appeared in mid-February 2025. I was surprised that the information in that write up did not generate more buzz in the world of Google watchers. “YouTube by the Numbers: Uncovering YouTube’s Ghost Town of Billions of Unwatched, Ignored Videos” contains some allegedly accurate information. Let’s assume that these data, like most information about online, is close enough for horseshoes or purely notional. I am not going to summarize the methodology. Academics come up with interesting ways to obtain information about closely guarded big company products and services.
The write up says:
the research estimates a staggering 14.8 billion total videos on YouTube as of mid-2024. Unsurprisingly, most of these videos are barely noticed. The median YouTube upload has just 41 views, with 4% garnering no views at all. Over 74% have no comments and 89% have no likes.
Here are a couple of other factoids about YouTube as reported in the Techspot article:
The production values are also remarkably modest. Only 14% of videos feature a professional set or background. Just 38% show signs of editing. More than half have shaky camerawork, and audio quality varies widely in 85% of videos. In fact, 40% are simply music tracks with no voice-over.
And another point I found interesting:
Moreover, the typical YouTube video is just 64 seconds long, and over a third are shorter than 33 seconds.
The most revealing statement in the research data appears in this passage:
… a senior researcher [said] that this narrative overlooks a crucial reality: YouTube is not just an entertainment hub – it has become a form of digital infrastructure. Case in point: just 0.21% of the sampled videos included any kind of sponsorship or advertising. Only 4% had common calls to action such as liking, commenting, and subscribing. The vast majority weren’t polished content plays but rather personal expressions – perhaps not so different from the old camcorder days.
Assuming the data are reasonably good Google has built plumbing whose cost will rival that of the firm’s investments in search and its cloud.
From my point of view, cost control is going to become as important as moving as quickly as possible to the old-school broadcast television approach to content. Hit shows on YouTube will do what is necessary to attract an audience. The audience will be what advertisers want.
Just as Google search has degraded to a popular “experience,” not a resource for individuals who want to review and extract high value information, YouTube will head the same direction. The question is, “Will YouTube’s pursuit of advertisers mean that the infrastructure required to permit free video uploads and storage be sustainable?
Imagine being responsible for capital investments at the Google. The Cloud infrastructure must be upgraded and enhanced. The AI infrastructure must be upgraded and enhanced. The quantum computing and other technology-centric infrastructures must be upgraded an enhanced. The adtech infrastructure must be upgraded and enhanced. I am leaving out some of the Google’s other infrastructure intensive activities.
The main idea is that the financial person is going to have a large job paying for hardware, software, maintenance, and telecommunications. This is a different cost from technical debt. These are on-going and constantly growing costs. Toss in unexpected outages, and what does the bean counter do. One option is to quit and another is to do the Zen thing to avoid have a stroke when reviewing the cost projections.
My take is that a hit in search revenue is likely to add to the firm’s financial challenges. The path to becoming the modern version of William Paley’s radio empire may be in Google’s future. The idea that everything is in the cloud is being revisited by companies due to cost and security concerns. Does Google host some sketchy services on its Cloud?
YouTube may be the hidden revenue gem at Google. I think it might become the infrastructure cost leader among Google’s stellar product line up. Big companies like Google don’t just disappear. Instead the black holes of cost suck them closer to a big event: Costs rise more quickly than revenue.
At this time, Google has three cost black holes. One hopes none is the one that makes Googzilla join the ranks of the street people of online dwell.
Net net: Google will have to give people what they will watch. The lowest common denominator will emerge. The costs flood the CFO’s office. Just ask Gemini what to do.
Stephen E Arnold, March 25, 2025
A Swelling Wave: Internet Shutdowns in Africa
March 18, 2025
Another dinobaby blog post. No AI involved which could be good or bad depending on one’s point of view.
How does a government deal with information it does not like, want, or believe? The question is a pragmatic one. Not long ago, Russia suggested to Telegram that it cut the flow of Messenger content to Chechnya. Telegram has been somewhat more responsive to government requests since Pavel Durov’s detainment in France, but it dragged its digital feet. The fix? The Kremlin worked with service providers to kill off the content flow or at least as much of it as was possible. Similar methods have been used in other semi-enlightened countries.
“Internet Shutdowns at Record High in Africa As Access Weaponised’ reports:
A report released by the internet rights group Access Now and #KeepItOn, a coalition of hundreds of civil society organisations worldwide, found there were 21 shutdowns in 15 African countries, surpassing the existing record of 19 shutdowns in 2020 and 2021.
There are workarounds, but some of these are expensive and impractical for the people in Cormoros, Guinea-Bassau, Mauritius, Burundi, Ethiopia, Equatorial Guinea, and Kenya. I am not sure the list is complete, but the idea of killing Internet access seems to be an accepted response in some countries.
Several observations:
- Recent announcements about Google making explicit its access to users’ browser histories provide a rich and actionable pool of information. Will these type of data be used to pinpoint a dissident or a problematic individual? In my visits to Africa, including the thrilling Zimbabwe, I would suggest that the answer could be, “Absolutely.”
- Online is now pervasive, and due to a lack of meaningful regulation, the idea of going online and sharing information is a negative. In the late 1980s, I gave a lecture for ASIS at Rutgers University. I pointed out that flows of information work like silica grit in a sand blasting device to remove rust in an autobody shop. I can say from personal experience that no one knew what I was talking about. In 40 years, people and governments have figured out how online flows erode structures and social conventions.
- The trend of shutdown is now in the playbook of outfits around the world. Commercial companies can play the game of killing a service too. Certain large US high technology companies have made it clear that their service would summarily be blocked if certain countries did not play ball the US way.
As a dinobaby who has worked in online for decades, I find it interesting that the pigeons are coming home to roost. A failure years ago to recognize and establish rules and regulation for online is the same as having those lovable birds loose in the halls of government. What do pigeons produce? Yep, that’s right. A mess, a potentially deadly one too.
Stephen E Arnold, March 18, 2025
A Vulnerability Bigger Than SolarWinds? Yes.
February 18, 2025
No smart software. Just a dinobaby doing his thing.
I read an interesting article from WatchTowr Labs. (The spelling is what the company uses, so the url is labs.watchtowr.com.) On February 4, 2024, the company reported that it discovered what one can think of as orphaned or abandoned-but-still alive Amazon S3 “buckets.” The discussion of the firm’s research and what it revealed is presented in “8 Million Requests Later, We Made The SolarWinds Supply Chain Attack Look Amateur.”
The company explains that it was curious if what it calls “abandoned infrastructure” on a cloud platform might yield interesting information relevant to security. We worked through the article and created what in the good old days would have been called an abstract for a database like ABI/INFORM. Here’s our summary:
The article from WatchTowr Labs describes a large-scale experiment where researchers identified and took control of about 150 abandoned Amazon Web Services S3 buckets previously used by various organizations, including governments, militaries, and corporations. Over two months, these buckets received more than eight million requests for software updates, virtual machine images, and sensitive files, exposing a significant vulnerability. Watchtowr explain that bad actors could have injected malicious content. Abandoned infrastructure could be used for supply chain attacks like SolarWinds. Had this happened, the impact would have been significant.
Several observations are warranted:
- Does Amazon Web Services have administrative functions to identify orphaned “buckets” and take action to minimize the attack surface?
- With companies information technology teams abandoning infrastructure, how will these organizations determine if other infrastructure vulnerabilities exist and remediate them?
- What can cyber security vendors’ software and systems do to identify and neutralize these “shoot yourself in the foot” vulnerabilities?
One of the most compelling statements in the WatchTowr article, in my opinion, is:
… we’d demonstrated just how held-together-by-string the Internet is and at the same time point out the reality that we as an industry seem so excited to demonstrate skills that would allow us to defend civilization from a Neo-from-the-Matrix-tier attacker – while a metaphorical drooling-kid-with-a-fork-tier attacker, in reality, has the power to undermine the world.
Is WatchTowr correct? With government and commercial organizations leaving S3 buckets available, perhaps WatchTowr should have included gum, duct tape, and grade-school white glue in its description of the Internet?
Stephen E Arnold, February 18, 2025
A Technologist Realizes Philosophy 101 Was Not All Horse Feathers
January 6, 2025
This is an official dinobaby post. No smart software involved in this blog post.
I am not too keen on non-dinobabies thinking big thoughts about life. The GenX, Y, and Zedders are good at reinventing the wheel, fire, and tacos. What some of these non-dinobabies are less good at is thinking about the world online information has disestablished and is reassembling in chaotic constructs.
The essay, published in HackerNoon, “Here’s Why High Achievers Feel Like Failures” explains why so many non-dinobabies are miserable. My hunch is that the most miserable are those who have achieved some measure of financial and professional success and embrace whinge, insecurity, chemicals to blur mental functions, big car payments, and “experiences.” The essay does a very good job of explaining the impact of getting badges of excellence for making a scoobie (aka lanyard, gimp, boondoggle, or scoubidou) bracelet at summer camp to tweaking an algorithm to cause a teen to seek solace in a controlled substance. (One boss says, “Hey, you hit the revenue target. Too bad about the kid. Let’s get lunch. I’ll buy.”)
The write up explains why achievement and exceeding performance goals can be less than satisfying. Does anyone remember the Google VP who overdosed with the help of a gig worker? My recollection is that the wizard’s boat was docked within a few minutes of his home stuffed with a wifey and some kiddies. Nevertheless, an OnlyFans potential big earner was enlisted to assist with the chemical bliss that may have contributed to his logging off early.
Here’s what the essay offers this anecdote about a high performer whom I think was a entrepreneur riding a rocket ship:
Think about it:
- Three years ago, Mark was ecstatic about his first $10K month. Now, he beats himself up over $800K months.
- Two years ago, he celebrated hiring his first employee. Now, managing 50 people feels like “not scaling fast enough.”
- Last year, a feature in a local business journal made his year. Now, national press mentions barely register.
His progress didn’t disappear. His standards just kept pace with his growth, like a shadow that stretches ahead no matter how far you walk.
The main idea is that once one gets “something”; one wants more. The write up says:
Every time you level up, your brain does something fascinating – it rewrites your definition of “normal.” What used to be a summit becomes your new base camp. And while this psychological adaptation helped our ancestors survive, it’s creating a crisis of confidence in today’s achievement-oriented world.
Yep, the driving force behind achievement is the need to succeed so one can achieve more. I am a dinobaby, and I don’t want to achieve anything. I never did. I have been lucky: Born at the right time. Survived school. Got lucky and was hired on a fluke. Now 60 years later I know how I achieve the modicum of success I accrued. I was really lucky, and despite my 80 years, I am not yet dead.
The essay makes this statement:
We’re running paleolithic software on modern hardware. Every time you achieve something, your brain…
- Quickly normalizes the achievement (adaptation)
- Immediately starts wanting more (drive)
- Erases the emotional memory of the struggle (efficiency)
Is there a fix? Absolutely. Not surprisingly the essay includes a to-do list. The approach is logical and ideally suited to those who want to become successful. Here are the action steps:
Once you’ve reviewed your time horizons, the next step is to build what I call a “Progress Inventory.” Dedicate 15 minutes every Sunday night to reflect and fill out these three sections:
Victories Section
- What’s easier now than it was last month?
- What do you do automatically that used to require thought?
- What problems have disappeared?
- What new capabilities have you gained?
Growth Section
- What are you attempting now that you wouldn’t have dared before?
- Where have your standards risen?
- What new problems have you earned the right to have?
- What relationships have deepened or expanded?
Learning Section
- What mistakes are you no longer making?
- What new insights have you gained?
- What patterns are you starting to recognize?
- What tools have you mastered?
These two powerful tools – the Progress Mirror and the Progress Inventory – work together to solve the central problem we’ve been discussing: your brain’s tendency to hide your growth behind rising standards. The Progress Mirror forces you to zoom out and see the bigger picture through three critical time horizons. It’s like stepping back from a painting to view the full canvas of your growth. Meanwhile, the weekly Progress Inventory zooms in, capturing the subtle shifts and small victories that compound into major transformations. Used together, these tools create something I call “progress consciousness” – the ability to stay ambitious while remaining aware of how far you’ve come.
But what happens when the road map does not lead to a zen-like state? Because I have been lucky, I cannot offer an answer to this question of actual, implicit, or imminent failure. I can serve up some observations:
- This essay has the backbone for a self-help book aimed at insecure high performers. My suggestion is to buy a copy of Thomas Harris’ I’m OK — You’re Okay and make a lot of money. Crank out the merch with slogans from the victories, growth, and learning sections of the book.
- The explanations are okay, but far from new. Spending some time with Friedrich Nietzsche’s Der Wille zur Macht. Too bad Friedrich was dead when his sister assembled the odds and ends of Herr Nietzsche’s notes into a book addressing some of the issues in the HackerNoon essay.
- The write up focuses on success, self-doubt, and an ever-receding finish line. What about the people who live on the street in most major cities, the individuals who cannot support themselves, or the young people with minds trashed by digital flows? The essay offers less information for these under performers as measured by doubt ridden high performers.
Net net: The essay makes clear that education today does not cover some basic learnings; for example, the good Herr Friedrich Nietzsche. Second, the excitement of re-discovering fire is no substitute for engagement with a social fabric that implicitly provides a framework for thinking and behaving in a way that others in the milieu recognize as appropriate. This HackerNoon essay encapsulates why big tech and other successful enterprises are dysfunctional. Welcome to the digital world.
Stephen E Arnold, January 6, 2025
AI Makes Stuff Up and Lies. This Is New Information?
December 23, 2024
The blog post is the work of a dinobaby, not AI.
I spotted “Alignment Faking in Large Language Models.” My initial reaction was, “This is new information?” and “Have the authors forgotten about hallucination?” The original article from Anthropic sparked another essay. This one appeared in Time Magazine (online version). Time’s article was titled “Exclusive: New Research Shows AI Strategically Lying.” I like the “strategically lying,” which implies that there is some intent behind the prevarication. Since smart software reflects its developers use of fancy math and the numerous knobs and levers those developers can adjust at the same time the model is gobbling up information and “learning”, the notion of “strategically lying” struck me as as interesting.
Thanks MidJourney. Good enough.
What strategy is implemented? Who thought up the strategy? Is the strategy working? were the questions which occurred to me. The Time essay said:
experiments jointly carried out by the AI company Anthropic and the nonprofit Redwood Research, shows a version of Anthropic’s model, Claude, strategically misleading its creators during the training process in order to avoid being modified.
This suggests that the people assembling the algorithms and training data, configuring the system, twiddling the administrative settings, and doing technical manipulations were not imposing a strategy. The smart software was cooking up a strategy. Who will say that the software is alive and then, like the former Google engineer, express a belief that the system is alive. It’s sci-fi time I suppose.
The write up pointed out:
Researchers also found evidence that suggests the capacity of AIs to deceive their human creators increases as they become more powerful.
That is an interesting idea. Pumping more compute and data into a model gives it a greater capacity to manipulate its outputs to fool humans who are eager to grab something that promises to make life easier and the user smarter. If data about the US education system’s efficacy are accurate, Americans are not doing too well in the reading, writing, and arithmetic departments. Therefore, discerning strategic lies might be difficult.
The essay concluded:
What Anthropic’s experiments seem to show is that reinforcement learning is insufficient as a technique for creating reliably safe models, especially as those models get more advanced. Which is a big problem, because it’s the most effective and widely-used alignment technique that we currently have.
What’s this “seem.” The actual output of large language models using transformer methods crafted by Google output baloney some of the time. Google itself had to regroup after the “glue cheese to pizza” suggestion.
Several observations:
- Smart software has become the technology more important than any other. The problem is that its outputs are often wonky and now the systems are befuddling the wizards who created and operate them. What if AI is like a carnival ride that routinely injures those looking for kicks?
- AI is finding its way into many applications but the resulting revenue has frayed some investors’ nerves. The fix is to go faster and win to reach the revenue goal. This frenzy for payoff has been building since early 2024 but those costs remain brutally high.
- The behavior of large language models is not understood by some of its developers. Does this seem like a problem?
Net net: “Seem?” One lies or one does not.
Stephen E Arnold, December 23, 2024
Why Present Bad Sites?
October 7, 2024
I read “Google Search Is Testing Blue Checkmark Feature That Helps Users Spot Genuine Websites.” I know this is a test, but I have a question: What’s genuine mean to Google and its smart software? I know that Google cannot answer this question without resorting to consulting nonsensicalness, but “genuine” is a word. I just don’t know what’s genuine to Google. Is a Web site that uses SEO trickery to appear in a results list? Is it a blog post written by a duplicitous PR person working at a large Google-type firm? Is it a PDF appearing on a “genuine” government’s Web site?
A programmer thinking about blue check marks. The obvious conclusion is to provide a free blue check mark. Then later one can charge for that sign of goodness. Thanks, Microsoft. Good enough. Just like that big Windows update. Good enough.
The write up reports:
Blue checkmarks have appeared next to certain websites on Google Search for some users. According to a report from The Verge, this is because Google is experimenting with a verification feature to let users know that sites aren’t fraudulent or scams.
Okay, what’s “fraudulent” and what’s a “scam”?
What does Google say? According to the write up:
A Google spokesperson confirmed the experiment, telling Mashable, “We regularly experiment with features that help shoppers identify trustworthy businesses online, and we are currently running a small experiment showing checkmarks next to certain businesses on Google.”
A couple of observations:
- Why not allow the user to NOT out these sites? Better yet, give the user a choice of seeing de-junked or fully junked sites? Wow, that’s too hard. Imagine. A Boolean operator.
- Why does Google bother to index these sites? Why not change the block list for the crawl? Wow, that’s too much work. Imagine a Googler editing a “do not crawl” list manually.
- Is Google admitting that it can identify problematic sites like those which push fake medications or the stolen software videos on YouTube? That’s pretty useful information for an attorney taking legal action against Google, isn’t it?
Net net: Google is unregulated and spouts baloney. Google needs to jack up its revenue. It has fines to pay and AI wizards to pay. Tough work.
Stephen E Arnold, October 7, 2024
US Government Procurement: Long Live Silos
September 12, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read “Defense AI Models A Risk to Life Alleges Spurned Tech Firm.” Frankly , the headline made little sense to me so I worked through what is a story about a contractor who believes it was shafted by a large consulting firm. In my experience, the situation is neither unusual nor particularly newsworthy. The write up does a reasonable job of presenting a story which could have been titled “Naive Start Up Smoked by Big Consulting Firm.” A small high technology contractor with smart software hooks up with a project in the Department of Defense. The high tech outfit is not able to meet the requirements to get the job. The little AI high tech outfit scouts around and brings a big consulting firm to get the deal done. After some bureaucratic cycles, the small high tech outfit is benched. If you are not familiar with how US government contracting works, the write up provides some insight.
The work product of AI projects will be digital silos. That is the key message of this procurement story. I don’t feel sorry for the smaller company. It did not prepare itself to deal with the big time government contractor. Outfits are big for a reason. They exploit opportunities and rarely emulate Mother Theresa-type behavior. Thanks, MSFT Copilot. Good enough illustration although the robots look stupid.
For me, the article is a stellar example of how information or or AI silos are created within the US government. Smart software is hot right now. Each agency, each department, and each unit wants to deploy an AI enabled service. Then that AI infused service becomes (one hopes) an afterburner for more money with which one can add headcount and more AI technology. AI is a rare opportunity to become recognized as a high-performance operator.
As a result, each AI service is constructed within a silo. Think about a structure designed to hold that specific service. The design is purpose built to keep rats and other vermin from benefiting from the goodies within the AI silo. Despite the talk about breaking down information silos, silos in a high profile, high potential technical are like artificial intelligence are the principal product of each agency, each department, and each unit. The payoff could be a promotion which might result in a cushy job in the commercial AI sector or a golden ring; that is, the senior executive service.
I understand the frustration of the small, high tech AI outfit. It knows it has been played by the big consulting firm and the procurement process. But, hey, there is a reason the big consulting firm generates billions of dollars in government contracts. The smaller outfit failed to lock down its role, retain the key to the know how it developed, and allowed its “must have cachè” to slip away.
Welcome, AI company, to the world of the big time Beltway Bandit. Were you expecting the big time consulting firm to do what you wanted? Did you enter the deal with a lack of knowledge, management sophistication, and a couple of false assumptions? And what about the notion of “algorithmic warfare”? Yeah, autonomous weapons systems are the future. Furthermore, when autonomous systems are deployed, the only way they can be neutralized is to use more capable autonomous weapons. Does this sound like a reply of the logic of Cold War thinking and everyone’s favorite bedtime read On Thermonuclear War still available on Amazon and as of September 6, 2024, on the Internet Archive at this link.
Several observations are warranted:
- Small outfits need to be informed about how big consulting companies with billions in government contracts work the system before exchanging substantive information
- The US government procurement processes are slow to change, and the Federal Acquisition Regulations and related government documents provide the rules of the road. Learn them before getting too excited about a request for a proposal or Federal Register announcement
- In a fight with a big time government contractor make sure you bring money, not a chip on your shoulder, to the meeting with attorneys. The entity with the most money typically wins because legal fees are more likely to kill a smaller firm than any judicial or tribunal ruling.
Net net: Silos are inherent in the work process of any government even those run by different rules. But what about the small AI firm’s loss of the contract? Happens so often, I view it as a normal part of the success workflow. Winners and losers are inevitable. Be smarter to avoid losing.
Stephen E Arnold, September 12, 2024
AI Safety Evaluations, Some Issues Exist
August 14, 2024
Ah, corporate self regulation. What could go wrong? Well, as TechCrunch reports, “Many Safety Evaluations for AI Models Have Significant Limitations.” Writer Kyle Wiggers tells us:
“Generative AI models … are coming under increased scrutiny for their tendency to make mistakes and generally behave unpredictably. Now, organizations from public sector agencies to big tech firms are proposing new benchmarks to test these models’ safety. Toward the end of last year, startup Scale AI formed a lab dedicated to evaluating how well models align with safety guidelines. This month, NIST and the U.K. AI Safety Institute released tools designed to assess model risk. But these model-probing tests and methods may be inadequate. The Ada Lovelace Institute (ALI), a U.K.-based nonprofit AI research organization, conducted a study that interviewed experts from academic labs, civil society and those who are producing vendors models, as well as audited recent research into AI safety evaluations. The co-authors found that while current evaluations can be useful, they’re non-exhaustive, can be gamed easily and don’t necessarily give an indication of how models will behave in real-world scenarios.”
There are several reasons for the gloomy conclusion. For one, there are no established best practices for these evaluations, leaving each organization to go its own way. One approach, benchmarking, has certain problems. For example, for time or cost reasons, models are often tested on the same data they were trained on. Whether they can perform in the wild is another matter. Also, even small changes to a model can make big differences in behavior, but few organizations have the time or money to test every software iteration.
What about red-teaming: hiring someone to probe the model for flaws? The low number of qualified red-teamers and the laborious nature of the method make it costly, out of reach for smaller firms. There are also few agreed-upon standards for the practice, so it is hard to assess the effectiveness of red-team projects.
The post suggests all is not lost—as long as we are willing to take responsibility for evaluations out of AI firms’ hands. Good luck prying open that death grip. Government regulators and third-party testers would hypothetically fill the role, complete with transparency. What a concept. It would also be good to develop standard practices and context-specific evaluations. Bonus points if a method is based on an understanding of how each AI model operates. (Sadly, such understanding remains elusive.)
Even with these measures, it may never be possible to ensure any model is truly safe. The write-up concludes with a quote from the study’s co-author Mahi Hardalupas:
“Determining if a model is ‘safe’ requires understanding the contexts in which it is used, who it is sold or made accessible to, and whether the safeguards that are in place are adequate and robust to reduce those risks. Evaluations of a foundation model can serve an exploratory purpose to identify potential risks, but they cannot guarantee a model is safe, let alone ‘perfectly safe.’ Many of our interviewees agreed that evaluations cannot prove a model is safe and can only indicate a model is unsafe.”
How comforting.
Cynthia Murrell, August 14, 2024
Which Outfit Will Win? The Google or Some Bunch of Busy Bodies
July 30, 2024
This essay is the work of a dumb humanoid. No smart software required.
It may not be the shoot out at the OK Corral, but the dust up is likely to be a fan favorite. It is possible that some crypto outfit will find a way to issue an NFT and host pay-per-view broadcasts of the committee meetings, lawyer news conferences, and pundits recycling press releases. On the other hand, maybe the shoot out is a Hollywood deal. Everyone knows who is going to win before the real action begins.
“Third Party Cookies Have Got to Go” reports:
After reading Google’s announcement that they no longer plan to deprecate third-party cookies, we wanted to make our position clear. We have updated our TAG finding Third-party cookies must be removed to spell out our concerns.
A great debate is underway. Who or what wins? Experience suggests that money has an advantage in this type of disagreement. Thanks, MSFT. Good enough.
Who is making this draconian statement? A government regulator? A big-time legal eagle representing an NGO? Someone running for president of the United States? A member of the CCP? Nope, the World Wide Web Consortium or W3C. This group was set up by Tim Berners-Lee, who wanted to find and link documents at CERN. The outfit wants to cook up Web standards, much to the delight of online advertising interests and certain organizations monitoring Web traffic. Rules allow crafting ways to circumvent their intent and enable the magical world of the modern Internet. How is that working out? I thought the big technology companies set standards like no “soft 404s” or “sorry, Chrome created a problem. We are really, really sorry.”
The write up continues:
We aren’t the only ones who are worried. The updated RFC that defines cookies says that third-party cookies have “inherent privacy issues” and that therefore web “resources cannot rely upon third-party cookies being treated consistently by user agents for the foreseeable future.” We agree. Furthermore, tracking and subsequent data collection and brokerage can support micro-targeting of political messages, which can have a detrimental impact on society, as identified by Privacy International and other organizations. Regulatory authorities, such as the UK’s Information Commissioner’s Office, have also called for the blocking of third-party cookies.
I understand, but the Google seems to be doing one of those “let’s just dump this loser” moves. Revenue is more important than the silly privacy thing. Users who want privacy should take control of their technology.
The W3C points out:
The unfortunate climb-down will also have secondary effects, as it is likely to delay cross-browser work on effective alternatives to third-party cookies. We fear it will have an overall detrimental impact on the cause of improving privacy on the web. We sincerely hope that Google reverses this decision and re-commits to a path towards removal of third-party cookies.
Now the big question: “Who is going to win this shoot out?”
Normal folks might compromise or test a number of options to determine which makes the most sense at a particularly interesting point in time. There is post-Covid weirdness, the threat of escalating armed conflict in what six, 27, or 95 countries, and financial brittleness. That anti-fragile handwaving is not getting much traction in my opinion.
At one end of the corral are the sleek, technology wizards. These norm core folks have phasers, AI, and money. At the other end of the corral are the opponents who look like a random selection of Café de Paris customers. Place you bets.
Stephen E Arnold, July 30, 2024
.
Harvard University: A Sticky Wicket, Right, Old Chap?
April 22, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I know plastic recycling does not work. The garbage pick up outfit assures me it recycles. Yeah, sure. However, I know one place where recycling is alive and well. I watched a video about someone named Francesca Gino, a Harvard professor. A YouTuber named Pete Judo presents information showing that Ms. Gino did some recycling. He did not award her those little green Ouroboros symbols. Copying and pasting are out of bounds in the Land of Ivory Towers in which Harvard has allegedly the ivory-est. You can find his videos at https://www.youtube.com/@PeteJudo1.
The august group of academic scholars are struggling to decide which image best fits the 21st-century version of their prestigious university: The garbage recycling image representing reuse of trash generated by other scholars or the snake-eating-its-tail image of the Ouroboros. So many decisions have these elite thinkers. Thanks, MSFT Copilot. Looking forward to your new minority stake in a company in a far off land?
As impressive a source as a YouTuber is, I think I found an even more prestigious organ of insight, the estimable New York Post. Navigate through the pop ups until you see the “real” news story “Harvard Medical School Professor Massively Plagiarized Report for Lockheed Martin Suit: Judge.” The thrust of this story is that a moonlighting scholar “plagiarized huge swaths of a report he submitted on carcinogenic chemicals, according to a federal judge, who agreed to remove it as evidence in a class action case against Lockheed Martin.”
Is this Medical School-related item spot on? I don’t know. Is the Gino-us activity on the money? For that matter, is a third Harvard professor of ethics guilty of an ethical violation in a journal article about — wait for it — ethics? I don’t know, and I don’t have the energy to figure out if plagiarism is the new Covid among academics in Boston.
However, based on the drift of these examples, I can offer several observations:
- Harvard University has a public relations problem. Judging from the coverage in outstanding information services as YouTube and the New York Post, the remarkable school needs to get its act together and do some “messaging.” When the plagiarism pandemic is real or fabricated by the type of adversary Microsoft continually says creates trouble, Harvard’s reputation is going to be worn down by a stream of digital bad news.
- The ways of a most Ivory Tower thing are mysterious. Nevertheless, it is clear that the mechanism for hiring, motivating, directing, and preventing academic superstars from sticking their hand in a pile of dog doo is not working. That falls into what I call “governance.” I want to use my best Harvard rhetoric now: “Hey, folks, you ain’t making good moves.”
- The top dog (president, CFO, bursar, whatever) you are on the path to an “F.” Imagine what a moral stick in the mud like William James would think of Harvard’s leadership if he were still waddling around, mumbling about radical pragmatism. Even more frightening is an AI version of this sporty chap doing a version of AI Jesus on Twitch. Instead of recycling Christian phrases, he would combine his thoughts about ethics, psychology, and Harvard with the possibly true stories about Harvard integrity herpes. Yikes.
Net net: What about educating tomorrow’s leaders. Should these young minds emulate what professors are doing, or should they be learning to pursue knowledge without shortcuts, cheating, plagiarism, and looking like characters from The Simpsons?
Stephen E Arnold, April 22, 2024