A Googler Explains How AI Helps Creators and Advertisers in the Googley Maze

September 24, 2025

Sadly I am a dinobaby and too old and stupid to use smart software to create really wonderful short blog posts.

Most of Techmeme’s stories are paywalled. But one slipped through. (I wonder why?) Anyhow, the article in question is “An Interview with YouTube CEO Neal Mohan about Building a Stage for Creators.” The interview is a long one. I want to focus on a couple of statements and offer a handful of observations.

The first comment by the Googler Mohan is this one:

Moving away from the old model of the cliché Madison Avenue type model of, “You go out to lunch and you negotiate a deal and it’s bespoke in this particular fashion because you were friends with the head of ad sales at that particular publisher”. So doing away with that model, and really frankly, democratizing the way advertising worked, which in our thesis, back to this kind of strategy book, would result in higher ROI for publishers, but also better ROI for advertisers.

The statement makes clear that disrupting advertising was the key to what is now the Google Double Click model. Instead of Madison Avenue, today there is the Google model. I think of it as a maze. Once one “gets into” the Google Double Click model, there is no obvious exit.


The art was generated by Venice.ai. No human needed. Sorry freelance artists on Fiverr.com. This is the future. It will come to YouTube as well.

Here’s the second statement I noted:

everything that we build is in service of people that are creative people, and I use the term “creator” writ large. YouTubers, artists, musicians, sports leagues, media, Hollywood, etc., and from that vantage point, it is really exceedingly clear that these AI capabilities are just that, they’re capabilities, they’re tools. But the thing that actually draws us to YouTube, what we want to watch are the original storytellers, the creators themselves.

The idea, in my interpretation, is that Google’s smart software is there to enable humans to be creative. AI is just a tool like an ice pick. Sure, the ice pick can be driven into someone’s heart, but that’s an extreme example of misuse of a simple tool. Our approach is to keep that ice pick for the artist who is going to create an ice sculpture.

Please, read the rest of this Googley interview to get a sense of the other concepts Google’s ad system and its AI are delivering to advertisers and “creators.”

Here’s my view:

  1. Google wants to get creators into the YouTube maze. Google wants advertisers to use the now 30-year-old Google Double Click ad system. Everyone, just enter the labyrinth.
  2. The rats will learn that the maze is to them what the aquarium is to a fish. What else will the fish know? Not too much. The aquarium is life. It is reality.
  3. Google has a captive, self-sustaining ecosystem. Creators create; advertisers advertise because people or systems want the content.

Now let me ask a question, “How does this closed ecosystem make more money?” The answer, according to Googler Mohan, a former consultant like others in Google leadership, is to become more efficient. How does one become more efficient? The answer is to replace expensive, popular creators with cheaper AI driven content produced by Google’s AI system.

Therefore, the words say one thing: Creator humans are essential. However, the trajectory of Google’s behavior is that Google wants to maximize its revenues. Just the threat or fear of using AI to knock off a hot new human-engineered “content object” will allow the Google to reduce what it pays to a human until Google’s AI can eliminate those pesky, protesting, complaining humans. The advertisers want eyeballs. That’s what Google will deliver. Where will the advertisers go? Craigslist, Nextdoor, X.com?

Net net: Money is more important to Google than human creators. I know I am a dinobaby and probably incorrect. That’s how I see the Google.

Stephen E Arnold, September 24, 2025

The Skill for the AI World As Pronounced by the Google

September 24, 2025

Written by an unteachable dinobaby. Live with it.

Worried about a job in the future? The next minute, day, decade? The secret of constant employment, big bucks, and even larger volumes of happiness has been revealed. “Google’s Top AI Scientist Says Learning How to Learn Will Be Next Generation’s Most Needed Skill” says:

the most important skill for the next generation will be “learning how to learn” to keep pace with change as Artificial Intelligence transforms education and the workplace.

Well, that’s the secret: Learn how to learn. Why? Surviving in the chaos of an outfit like Google means one has to learn. What should one learn? Well, the write up does not provide that bit of wisdom. I assume a Google search will provide the answer in a succinct AI-generated note, right?

The write up presents this chunk of wisdom from a person keen on getting lots of AI people aware of Google’s AI prowess:

The neuroscientist and former chess prodigy said artificial general intelligence—a futuristic vision of machines that are as broadly smart as humans or at least can do many things as well as people can—could arrive within a decade…. Hassabis emphasized the need for “meta-skills,” such as understanding how to learn and optimizing one’s approach to new subjects, alongside traditional disciplines like math, science and humanities.

This means reading poetry, preferably Greek poetry. The Google super wizard’s father is “Greek Cypriot.” (Cyprus is home base for a number of interesting financial operations and the odd intelware outfit. Which part of Cyprus is which? Google Maps may or may not answer this question. Ask your Google Pixel smart phone to avoid an unpleasant mix up.)

The write up adds this courteous note:

[Greek Prime Minister Kyriakos] Mitsotakis rescheduled the Google Big Brain to “avoid conflicting with the European basketball championship semifinal between Greece and Turkey. Greece later lost the game 94-68.”

Would lifelong learning skills have helped the Greek basketball team win against a formidable team like Turkey?

Sure, if Google says it, you know it is true just like eating rocks or gluing cheese on pizza. Learn now.

Stephen E Arnold, September 24, 2025

Titanic AI Goes Round and Round: Are You Dizzy Yet?

September 23, 2025

This essay is the work of a dumb dinobaby. No smart software required.

I read “Nvidia to Invest Up to $100 Billion in OpenAI, Linking Two Artificial Intelligence Titans.” The headline makes an important point. The words “big” and “huge” are not sufficiently monumental. Now we have “titans.” As you may know, a “titan” is a person of great power. I will leave out the Greek mythology. I do want to point out that “titans” were the kiddies produced by Uranus and Gaea. Titans were big dogs until Zeus and a few other Olympian gods forced them to live in what is now Newark, New Jersey.


An AI-generated diagram of a simple circular deal. Regulators and IRS professionals enjoy challenges. What are those people doing to make the process work? Thanks, MidJourney.com. Good enough.

The write up from the outfit that is into trust explains how two “titans” are now intertwined. No, I won’t bring up the issue of incestuous behavior. Let’s stick to the “real” news story:

Nvidia will invest up to $100 billion in OpenAI and supply it with data center chips… Nvidia will start investing in OpenAI for non-voting shares once the deal is finalized, then OpenAI can use the cash to buy Nvidia’s chips.

I am not a finance, tax, or money wizard. On the surface, it seems to me that I loan a person some money and then that person gives me the money back in exchange for products and services. I may have this wrong, but I thought a similar arrangement landed one of the once-famous enterprise search companies in a world of hurt and a member of the firm’s leadership in prison.
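For those who want the loan-it-and-get-it-back mechanics spelled out, here is a dinobaby-simple sketch. The dollar split is an illustrative assumption, not the actual deal terms; the only quoted figure is the “up to $100 billion” headline number:

```python
# Toy model of a "circular" deal: an investor buys non-voting shares, and
# the investee spends much of that cash right back with the investor.
def net_cash_positions(investment, spent_back_with_investor):
    """Return (investor_net_cash_out, investee_net_cash_kept) after the loop."""
    investor_out = investment - spent_back_with_investor  # cash that actually left
    investee_kept = investment - spent_back_with_investor  # cash not recycled
    return investor_out, investee_kept

# Hypothetical: $100B goes out, and $80B of it comes back as chip purchases.
out, kept = net_cash_positions(100_000_000_000, 80_000_000_000)
# The investor is only $20B out of pocket yet books $80B of revenue and
# holds the shares; the investee keeps $20B in cash plus the chips.
```

The point of the sketch: the gross numbers in the headline can be much larger than the net cash that actually changes hands, which is exactly what makes regulators and analysts use the word “circular.”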

Reuters includes this statement:

Analysts said the deal was positive for Nvidia but also voiced concerns about whether some of Nvidia’s investment dollars might be coming back to it in the form of chip purchases. "On the one hand this helps OpenAI deliver on what are some very aspirational goals for compute infrastructure, and helps Nvidia ensure that that stuff gets built. On the other hand the ‘circular’ concerns have been raised in the past, and this will fuel them further," said Bernstein analyst Stacy Rasgon.

“Circular” — That’s an interesting word. Some of the financial transactions my team and I examined during our Telegram (the messaging outfit) research used similar methods. One of the organizations apparently aware of “circular” transactions was Huione Guarantee. No big deal, but the company has been in legal hot water for some of its circular functions. Will OpenAI and Nvidia experience similar problems? I don’t know, but the circular thing means that money goes round and round. In circular transactions, at each touch point magical number things can occur. Money deals are rarely hallucinatory like AI outputs and semiconductor marketing.

What’s this mean to companies eager to compete in smart software and Fancy Dan chips? In my opinion, I hear my inner voice saying, “You may be behind a great big circular curve. Better luck next time.”

Stephen E Arnold, September 23, 2025

Pavel Durov Was Arrested for Online Stubbornness: Will This Happen in the US?

September 23, 2025

Written by an unteachable dinobaby. Live with it.

In August 2024, the French judiciary arrested Pavel Durov, the founder of VKontakte and then Telegram, a robust but non-AI platform. Why? The French government identified more than a dozen transgressions by Pavel Durov, who holds French citizenship as a special tech bro. Now he has to report to his French mom every two weeks or experience more interesting French legal action. Is this an example of a failure to communicate?

Will the US take similar steps toward US companies? I raise the question because I read an allegedly accurate “real” news write up called “Anthropic Irks White House with Limits on Models’ Use.” (Like many useful online resources, this story requires the curious to subscribe, pay, and get on a marketing list.) These “models,” of course, are the zeros and ones which comprise the next big thing in technology: artificial intelligence.

The write up states:

Anthropic is in the midst of a splashy media tour in Washington, but its refusal to allow its models to be used for some law enforcement purposes has deepened hostility to the company inside the Trump administration…

The write up says as actual factual:

Anthropic recently declined requests by contractors working with federal law enforcement agencies because the company refuses to make an exception allowing its AI tools to be used for some tasks, including surveillance of US citizens…

I found the write up interesting. If France can take action against an upstanding citizen like Pavel Durov, what about the tech folks at Anthropic or other outfits? These firms allegedly have useful data and the tools to answer questions. I recently fed the output of one AI system (ChatGPT) into another AI system (Perplexity), and I learned that Perplexity did a good job of identifying the weirdness in the ChatGPT output. Would these systems provide similar insights into prompt patterns on certain topics; for instance, the charges against Pavel Durov or data obtained by people looking for information about nuclear fuel cask shipments?

With France’s action, is the door open to take direct action against people and their organizations which cooperate reluctantly or not at all when a government official makes a request?

I don’t have an answer. Dinobabies rarely do, and if they do have a response, no one pays attention to these beasties. However, some of those wizards at AI outfits might want to ponder the question about cooperation with a government request.

Stephen E Arnold, September 24, 2025

UAE: Will It Become U-AI?

September 23, 2025

Written by an unteachable dinobaby. Live with it.

UAE is moving forward in smart software, not just crypto. “Industry Leading AI Reasoning for All” reports that the Institute of Foundation Models has “industry leading AI reasoning for all.” The news item reports:

Built on six pillars of innovation, K2 Think represents a new class of reasoning model. It employs long chain-of-thought supervised fine-tuning to strengthen logical depth, followed by reinforcement learning with verifiable rewards to sharpen accuracy on hard problems. Agentic planning allows the model to decompose complex challenges before reasoning through them, while test-time scaling techniques further boost adaptability. 

I am not sure what the six pillars of innovation are, particularly after looking at some of the UAE’s crypto plays, but there is more. Here’s another passage which suggests that Intel and Nvidia may not be in the k2think.ai technology road map:

K2 Think will soon be available on Cerebras’ wafer-scale, inference-optimized compute platform, enabling researchers and innovators worldwide to push the boundaries of reasoning performance at lightning-fast speed. With speculative decoding optimized for Cerebras hardware, K2 Think will achieve unprecedented throughput of 2,000 tokens per second, making it both one of the fastest and most efficient reasoning systems in existence.

If you want to kick its tires (tAIres?), the system is available at k2think.ai and on Hugging Face. Oh, the write up quotes two people with interesting names: Eric Xing and Peng Xiao.

Stephen E Arnold, September 23, 2025

AI and the Media: AI Is the Answer for Some Outfits

September 22, 2025

Sadly I am a dinobaby and too old and stupid to use smart software to create really wonderful short blog posts.

I spotted a news item about Russia’s government Ministry of Defense. The estimable defensive outfit now has an AI-generated news program. Here’s the paywalled source link. I haven’t seen it yet, but the statistics for viewership and the Telegram comments will be interesting to observe. Gee, do you think that bright Russian developers have found a way to steer the output to represent the political views of the Russian government? Did you say, “No.” Congratulations, you may qualify for a visa to homestead in Norilsk. Check it out on Google Maps.

Back in Germany, Axel Springer SE is definitely into AI as well. I noted that its Business Insider will allow its real and allegedly human journalists to use AI to write “drafts” of news stories. Here’s the paywalled source link. Hey, Axel, aren’t your developers able to pipe the AI output into slamma jamma banana and produce via AI complete TikTok-type news videos? Russia’s Ministry of Defense has this angle figured out. YouTube may be in the MoD’s plans. One has to fund that “defensive” special operation in Ukraine somehow.

Several observations:

  1. Steering or weaponizing large language models is a feature of the systems. Can one trust AI-generated news? Can one trust any AI output from a large organization? You may. I don’t.
  2. The economics of producing Walter Cronkite type news make “real” news expensive. Therefore, say hello to AI written news and AI delivered news. GenX and GenY will love this approach to information in my opinion.
  3. How will government regulators respond to AI news? In Russia, government controlled AI news will get a green light. Elsewhere, the shift may be slightly more contentious.

Net net: AI is great.

Stephen E Arnold, September 22, 2025

OpenAI Says Hallucinations Are Here to Stay?

September 22, 2025

Sadly I am a dinobaby and too old and stupid to use smart software to create really wonderful short blog posts.

I read “OpenAI Admits AI Hallucinations Are Mathematically Inevitable, Not Just Engineering Flaws.” I am not sure the information in the write up will make people who are getting smart software, whether they want it or not, happy. Even less thrilled will be the big outfits implementing AI with success ranging from five percent to 90 percent hoorahs. Close enough for horseshoes works for putting shoes on equines. I am not sure how that will work out for medical and financial applications. I won’t comment on the kinetic applications of smart software, but hallucination may not be a plus in some situations.

The write up begins with what may make some people — how shall I say it? — nervous, frightened, squeamish. I quote:

… OpenAI researchers reveal that large language models will always produce plausible but false outputs, even with perfect data, due to fundamental statistical and computational limits.

I quite liked the word always. It is obviously a statement that must persist for eternity, which, to a dinobaby like me, is quite a long time. I found the distinction between plausible and false delicious. The burden to figure out what is “correct,” “wrong,” slightly wonky, and false shifts to the user of smart software. But there is another word that struck me as significant: Perfect. Now that is another logical tar pit.

After this, I am not sure where the write up is going. I noted this passage:

OpenAI, the creator of ChatGPT, acknowledged in its own research that large language models will always produce hallucinations due to fundamental mathematical constraints that cannot be solved through better engineering, marking a significant admission from one of the AI industry’s leading companies.

There you go. The fundamental method in use today and believed to be the next big thing is always going to produce incorrect information. Always.
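Why “always”? A dinobaby-simple illustration, with a made-up error rate standing in for the paper’s statistical argument: as long as some probability mass sits on plausible but false completions, repeated sampling will emit falsehoods with near-certainty, no matter how clever the engineering around the sampler:

```python
# If each generated answer is false with probability p_false, the chance
# that n consecutive answers are all factual shrinks geometrically.
def expected_flawless_run(p_false, n_generations):
    """Probability that n consecutive generations contain zero falsehoods."""
    return (1 - p_false) ** n_generations

# Even at a modest 2% per-answer error rate, 100 answers in a row are
# very unlikely to all be clean: roughly a 13% chance.
clean = expected_flawless_run(0.02, 100)
```

The numbers are hypothetical; the shape of the result is the point. Shrinking p_false helps, but only p_false equal to zero makes the product stop decaying, and that is the condition OpenAI’s researchers say the mathematics rules out.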

The Computerworld story points to the “research paper.” Computerworld points out that industry evaluations of smart software are slippery fish. Computerworld reminds its readers that “enterprises must adapt strategies.” (I would imagine. If smart software gets chemical formula wrong or outputs information that leads to a substantial loss of revenue, problems might arise, might they not?) Computerworld concludes with a statement that left me baffled; to wit: “Market already adapting.”

Okay.

I wonder how many Computerworld readers will consume this story standing next to a burning pile of cash tossed into the cost black holes of smart software.

Stephen E Arnold, September 22, 2025

Google Emits a Tiny Signal: Is It Stress or Just a Brilliant Management Move?

September 22, 2025

Sadly I am a dinobaby and too old and stupid to use smart software to create really wonderful short blog posts.

Google is chock full of technical and management wizards. Anything the firm does is a peak action. With the Google doing so many forward leaning things each day, it is possible for certain staggeringly insightful moments to be lost in the blitz of scintillating breakthroughs.

Tom’s Hardware spotted one sparkling decider diamond. “Google Terminates 200 AI Contractors — Ramp-Down Blamed, But Workers Claim Questions Over Pay and Job Insecurity Are the Real Reason Behind Layoffs” says:

Some believe they were let go because of complaints over working conditions and compensation.

Does Google have a cancel culture?

The write up notes:

For the first half of 2025, AI growth was everywhere, and all the major companies were spending big to try to get ahead. Meta was offering individuals hundreds of millions to join its ranks … But while announcements of enormous industry deals continue, there’s also a lot of talk of contraction, particularly when it comes to lower-level positions like data annotation and AI response rating.

The individuals who are now free to find their future elsewhere have some ideas about why they were deleted from Google and promoted to Xooglers (former Google employees). The write up reports:

… many of them [the terminated with extreme Googliness] believe that it is their complaints over compensation that lead to them being laid off…. [Some] workers “attempted to unionize” earlier in the year to no avail. According to the report, “they [the future finders] allege that the company has retaliated against them.” … For its part, Google said in a statement that GlobalLogic is responsible for the working conditions of its employees.

See the brilliance of the management move. Google blames another outfit. Google reduces costs. Google makes it clear that grousing is not a path to the Google leadership enclave. Google AI is unscathed.

Google is A Number One in management in my opinion.

Stephen E Arnold, September 22, 2025

AI Poker: China Has Three Aces. Google, Your Play

September 19, 2025

No smart software involved. Just a dinobaby’s work.

TV poker seems to be a thing on free or low cost US television streams. A group of people squint, sigh, and fiddle as each tries to win the big pile of cash. Another poker game is underway in the “next big thing” of smart software or AI.

Google released the Nano Banana image generator. Social media hummed. Okay, that looks like a winning hand. But another player dropped some coin on the table, squinted at the Google, and smirked just a tiny bit.

“ByteDance Unveils New AI Image Model to Rival DeepMind’s Nano Banana” explains the poker play this way:

TikTok-owner ByteDance has launched its latest image generation artificial intelligence tool Seedream 4.0, which it said surpasses Google DeepMind’s viral “Nano Banana” AI image editor across several key indicators.

While the cute jargon may make the poker hand seem friendly, there is menace behind the terminology. The write up states:

ByteDance claims that Seedream 4.0 beat Gemini 2.5 Flash Image for image generation and editing on its internal evaluation benchmark MagicBench, with stronger performance in prompt adherence, alignment and aesthetics.

Okay, prompt adherence, alignment (what the heck is that?), and aesthetics. That’s three aces, right?

Who has the cost advantage? The write up says:

On Fal.ai, a global generative media hosting platform, Seedream 4.0 costs US$0.03 per generated image, while Gemini 2.5 Flash Image is priced at US$0.039.

I thought in poker one raised the stakes. Well, in AI poker one lowers the price in order to raise the stakes. These players are betting the money burned in the AI furnace will be “won” as the game progresses. Will AI poker turn up on the US free TV services? Probably. Burning cash makes for wonderful viewing, especially for those who are writing the checks.
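Back-of-envelope arithmetic on those quoted prices shows why fractions of a cent matter at AI scale. The per-image prices come from the write up; the monthly volume is a made-up assumption for illustration:

```python
# Quoted per-image prices on Fal.ai (from the write up).
seedream_price = 0.03   # USD per generated image, Seedream 4.0
gemini_price = 0.039    # USD per generated image, Gemini 2.5 Flash Image

# Hypothetical monthly volume for a heavy commercial user.
images = 10_000_000

savings = images * (gemini_price - seedream_price)  # dollars saved per month
discount = 1 - seedream_price / gemini_price        # ~23% cheaper per image
```

At that assumed volume, the nine-tenths-of-a-cent gap is roughly $90,000 a month — which is how a price cut becomes a raise in this poker game.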

What’s China’s view of this type of gambling? The write up says:

The state has signaled its support for AI-generated content by recognizing their copyright in late 2023, but has also recently introduced mandatory labelling of such content.

The game is not over. (Am I the only person who thinks that the name “nana banana” would have been better than “nano banana”?)

Stephen E Arnold, September 19, 2025

AI: The Tool for Humanity. Do Not Laugh.

September 19, 2025

Both sides of the news media are lamenting that AI is automating jobs and putting humans out of work. Conservatives and liberals remain separated on how and why AI is “stealing” jobs, but the fear remains that humans are headed toward obsolescence… again. Humans have faced this issue since the start of human ingenuity. The key is to adapt and realize what AI truly is. Elizabeth Mathew of Signoz.io wrote: “I Built An MCP Server For Observability. This Is My Unhyped Take.”

If you are unfamiliar with MCP, it is an open standard that defines how LLMs or AI agents (e.g., Claude) connect uniformly to external tools and data sources. Like a USB-C port, an MCP server decouples the agent from the tool, so any agent can plug into any server.
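To make the USB-C analogy concrete, here is a minimal sketch of the idea behind an MCP server — not the actual MCP protocol or any real SDK. Real MCP runs this exchange over JSON-RPC; the tool names and stubbed latency data below are invented for illustration:

```python
# A toy tool catalog: the "server" publishes named tools with descriptions,
# and any connected agent can discover them and invoke one by name.
TOOLS = {}

def tool(name, description):
    """Decorator that registers a function as a discoverable tool."""
    def register(fn):
        TOOLS[name] = {"description": description, "fn": fn}
        return fn
    return register

@tool("query_latency", "Fetch p99 latency in ms for a service (stubbed data).")
def query_latency(service: str) -> float:
    return {"checkout": 412.0, "search": 87.5}.get(service, 0.0)

def list_tools():
    """What an agent sees when it asks the server what it can do."""
    return {name: t["description"] for name, t in TOOLS.items()}

def call_tool(name, **kwargs):
    """Uniform invocation: the agent never imports the tool code directly."""
    return TOOLS[name]["fn"](**kwargs)
```

An agent would first call list_tools(), then call_tool("query_latency", service="checkout") — the uniform discover-then-invoke loop is the whole trick, and it is why the same observability server can sit behind any agent.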

After explaining some issues with MCP servers and why they are “schizophrenic,” Mathew concludes with this:

“Ultimately, MCP-powered agents are not bringing us closer to automated problem-solving. They are giving us sophisticated hypothesis generators. They excel at exploring the known, but the unknown remains the domain of the human engineer. We’re not building an automated SRE; we’re building a co-pilot that can brainstorm, but can’t yet reason. And recognizing that distinction is the key to using these tools effectively without falling for the hype.”

She might be right from an optimistic and expert perspective, but that doesn’t prevent CEOs from implementing AI to replace their workforce or young adults from being encouraged away from coding careers. Recent college graduates, do you have a job, any job?

Whitney Grace, September 19, 2025
