Google Is Just Like Santa with Free Goodies: Get “High” Grades, of Course

April 18, 2025

No AI, just the dinobaby himself.

Google wants to be [a] viewed as the smartest quantumly supreme outfit in the world and [b] like Santa. The “smart” part is part of the company’s culture. The CLEVER approach worked in Web search. Now the company faces what might charitably be called headwinds. There are those pesky legal hassles in the US and some gaining strength in other countries. Also, the competitive world of smart software continues to bedevil the very company that “invented” the transformer. Google gave away some technology, and now everyone from the update champs in Redmond, Washington, to Sam AI-Man is blowing smoke about Google’s systems and methods.

What a state of affairs!

The fix is to give away access to Google’s most advanced smart software to college students. How Santa-like. “Google Is Gifting a Year of Gemini Advanced to Every College Student in the US” reports:

Google has announced today that it’s giving all US college students free access to Gemini Advanced, and not just for a month or two—the offer is good for a full year of service. With Gemini Advanced, you get access to the more capable Pro models, as well as unlimited use of the Deep Research tool based on it. Subscribers also get a smattering of other AI tools, like the Veo 2 video generator, NotebookLM, and Gemini Live. The offer is for the Google One AI Premium plan, so it includes more than premium AI models, like Gemini features in Google Drive and 2TB of Drive storage.

The approach is not new. LexisNexis was one of the first online services to make online legal research available to law school students. It worked. Lawyers are among the savviest of the work fast, bill more professionals. When did LexisNexis move this idea forward? I recall speaking to a LexisNexis professional named Don Wilson in 1980, and he was eager to tell me about this “new” approach.

I asked Mr. Wilson (who as I recall was a big wheel at LexisNexis then), “That’s a bit like drug dealers giving the curious a ‘taste’?”

He smiled and said, “Exactly.”

In the last 45 years, lawyers have embraced new technology with a passion. I am not going to go through the litany of search, analysis, summarization, and other tools that heralded the success of smart software for the legal folks. I recall the early days of LegalTech when the most common question was, “How?” My few conversations with the professionals laboring in the jungle of law, rules, and regulations have shifted to “which system” and “how much.”

The marketing professionals at Google have “invented” their own approach to hook college students on smart software. My instinct is that Google does not know much about Don Wilson’s big idea. (As an aside, I remember that one of Mr. Wilson’s technical colleagues sometimes sported a silver jumpsuit, which anticipated some of the fashion choices of Googlers by half a century.)

The write up says:

Google’s intention is to give students an entire school year of Gemini Advanced from now through finals next year. At the end of the term, you can bet Google will try to convert students to paying subscribers.

I am not sure I agree with this. If the program gets traction, Sam AI-Man and others will be standing by with special offers, deals, and free samples. The chemical structure of certain substances is similar to today’s many variants of smart software. Hey, whatever works, right? Whatever is free, right?

Several observations:

  1. Google’s originality is quantumly supreme
  2. Some people at the Google dress like Mr. Wilson’s technical wizard, jumpsuit and all
  3. The competition is going to do their own version of this “original” marketing idea; for example, didn’t Bing offer to pay people to use that outstanding Web search-and-retrieval system?

Net net: Hey, want a taste? It won’t hurt anything.  Try it. You will be mentally sharper. You will be more informed. You will have more time to watch YouTube. Trust the Google.

Stephen E Arnold, April 18, 2025

Google Gemini 2.5: A Somewhat Interesting Content Marketing Write Up

April 18, 2025

Just a still-alive dinobaby. No smart software involved.

How about this headline: “Google’s Gemini 2.5 Pro Is the Smartest Model You’re Not Using – and 4 Reasons It Matters for Enterprise AI”?

OpenAI scroogled the Google again. First, it was the January 2023 starting gun for AI hype. Now it is the release of a Japanese cartoon style for ChatGPT. Who knew that Japanese cartoons could blast the Google Gemini 2.5 Pro launch more effectively than the detonation of a failed SpaceX rocket?

The write up pants:

Gemini 2.5 Pro marks a significant leap forward for Google in the foundational model race – not just in benchmarks, but in usability. Based on early experiments, benchmark data, and hands-on developer reactions, it’s a model worth serious attention from enterprise technical decision-makers, particularly those who’ve historically defaulted to OpenAI or Claude for production-grade reasoning.

Yeah, whatever.

Announcements about Google AI are about as satisfying as pizza with glued-on cheese or Apple’s AI fantasy PR about “intelligence.”

But I like this statement:

Bonus: It’s Just Useful

The headline and this “just useful” make it clear none of Google’s previous AI efforts are winning the social media buzz game. Plus, the author points out that billions of Google dollars have not made the smart software speedy. And if you want to have smart software write that history paper about Germany after WW 2, stick with other models which feature “conversational smoothness.”

Quite an advertisement. A headline that says, “No one is using this,” and a write up that says it is sluggish and writes in a way that will get a student flagged for cheating.

Stick to ads maybe?

And what about “why it matters for enterprise AI”? Yeah, nice omission.

Stephen E Arnold, April 18, 2025

Trust: Zuck, Meta, and Llama 4

April 17, 2025

Sorry, no AI used to create this item.

CNET published a very nice article that says to me: “Hey, we don’t trust you.” Navigate to “Meta Llama 4 Benchmarking Confusion: How Good Are the New AI Models?” The write up is like a wimpy version of the old PC Perspective podcast with Ryan Shrout. Before the embrace of Intel’s intellectual blanket, the podcast would raise questions about video card benchmarks. Most of the questions addressed: “Is this video card that fast?” In some cases, yes, the video card benchmarks were close to the real world. In other cases, video card manufacturers did what the butcher on Knoxville Avenue did in 1951: Mr. Wilson put his thumb on the scale. My grandmother watched friendly Mr. Wilson, who drove a new Buick in a very, very modest neighborhood, closely. He did not smile as broadly when my grandmother and I entered the store for a chicken.


Would someone subject an AI professional to this type of benchmark test? Of course not. But the idea has a certain charm. Plus, if the person dies, he was fooling everyone. If the person survives, that individual is definitely a witch. This was a winning method for some enlightened leaders at one time.

The CNET story says about the Zuck’s most recent non-virtual reality investment:

Meta’s Llama 4 models Maverick and Scout are out now, but they might not be the best models on the market.

That’s a good way to say, “Liar, liar, pants on fire.”

The article adds:

the model that Meta actually submitted to the LMArena tests is not the model that is available for people to use now. The model submitted for testing is called “llama-4-maverick-03-26-experimental.” In a footnote on a chart on Llama’s website (not the announcement), in tiny font in the final bullet point, Meta clarifies that the model submitted to LMArena was “optimized for conversationality.”

Isn’t this a GenZ way to say, “You put your thumb on the scale, Mr. Wilson”?

Let’s review why one should think about the desire to make something seem better than it is:

  1. Meta’s decision is just marketing. Think about the self-driving Teslas. Consequences for fibbing? None.
  2. The Meta engineers have to deliver good news. Who wants to tell the Zuck that the Llama innovations are like making the VR thing a big winner? Answer: No one who wants to get a bonus and curry favor.
  3. Meta does not have the ability to distinguish good from bad. The model swap is what Meta is going to do anyway. So why not just use it? No big deal. Is this a moral and ethical dead zone?

What’s interesting is that from my point of view, Meta and the Zuck have a standard operating procedure. I am not sure that aligns with what some people expect. But as long as the revenue flows and meaningful regulation of social media remains a windmill for today’s Don Quixotes, Meta is the best — until another AI leader puts out a quantumly supreme news release.

Stephen E Arnold, April 17, 2025

Google AI: Invention Is the PR Game

April 17, 2025

Google was so excited to tout its AI’s great achievement: in under 48 hours, it solved a medical problem that vexed human researchers for a decade. Great! Just one hitch. As Pivot to AI tells us, "Google Co-Scientist AI Cracks Superbug Problem in Two Days!—Because It Had Been Fed the Team’s Previous Paper with the Answer In It." With that detail, the feat seems much less impressive. In fact, two days seems downright sluggish. Writer David Gerard reports:

"The hype cycle for Google’s fabulous new AI Co-Scientist tool, based on the Gemini LLM, includes a BBC headline about how José Penadés’ team at Imperial College asked the tool about a problem he’d been working on for years — and it solved it in less than 48 hours! [BBC; Google] Penadés works on the evolution of drug-resistant bacteria. Co-Scientist suggested the bacteria might be hijacking fragments of DNA from bacteriophages. The team said that if they’d had this hypothesis at the start, it would have saved years of work. Sounds almost too good to be true! Because it is. It turns out Co-Scientist had been fed a 2023 paper by Penadés’ team that included a version of the hypothesis. The BBC coverage failed to mention this bit. [New Scientist, archive]"

It seems this type of Googley AI over-brag is a pattern. Gerard notes the company claims Co-Scientist identified new drugs for liver fibrosis, but those drugs had already been studied for this use. By humans. He also reminds us of this bit of truth-stretching from 2023:

"Google loudly publicized how DeepMind had synthesized 43 ‘new materials’ — but studies in 2024 showed that none of the materials was actually new, and that only 3 of 58 syntheses were even successful. [APS; ChemrXiv]"

So the next time Google crows about an AI achievement, we have to keep in mind that AI often is a synonym for PR.

Cynthia Murrell, April 17, 2025

China Smart, US Dumb: The Fluid Mechanics Problem Solved

April 16, 2025

There are many puzzles that haven’t been solved, but with advanced technology and new ways of thinking, some of them are finally getting answered. Two Chinese mathematicians working in the United States claim to have solved an old puzzle involving fluid mechanics, says the South China Morning Post: “Chinese Mathematicians Say They Have Cracked Century-Old Fluid Mechanics Puzzle.”

Fluid mechanics is a field of study used in engineering; it is applied to aerodynamics, dam and bridge design, and hydraulic systems. The Chinese mathematicians are Deng Yu from the University of Chicago and Ma Xiao from the University of Michigan. They were joined by their collaborator Zaher Hani, also of the University of Michigan. They published a paper to arXiv, a platform that posts research papers before they are peer reviewed. The team said they found a solution to “Hilbert’s sixth problem.”

What exactly did the mathematicians solve?

“At the intersection of physics and mathematics, researchers ask whether it is possible to establish physics as a rigorous branch of mathematics by taking microscopic laws as axioms and proving macroscopic laws as theorems. Axioms are mathematical statements that are assumed to be true, while a theorem is a logical consequence of axioms.

Hilbert’s sixth problem addresses that challenge, according to a post by Ma on Wednesday on Zhihu, a Quora-like Chinese online content platform.”

David Hilbert proposed this as one of twenty-three problems he presented in 1900 at the International Congress of Mathematicians. China is taking credit for these mathematicians and their work. China wants to point out how smart it is, while it likes to poke fun at the “dumb” United States. Let’s make our own point that these Chinese mathematicians are living and working in the United States.

Whitney Grace, April 16, 2025

Google Wears a Necklace and Sneakers with Flashing Blue LEDs. Snazzy.

April 15, 2025

No AI. Just an old dinobaby pointing out some exciting developments in the world “beyond search.”

I can still see the flashing blue light in Aisle 7. Yes, there goes the siren. K-Mart in Central Illinois was running a big sale on underwear. My mother loved those “blue light specials.” She would tell me as I covered my eyes and ears, “I don’t want to miss out.” Into the scrum she would go, emerging with two packages of purple boxer shorts for my father. He sat in the car while my mother shopped. I accompanied her because that’s what sons in Central Illinois do. I wonder if procurement officials are familiar with blue light specials. The sirens in DC wail 24×7.


Thanks, OpenAI. You produced a good enough illustration. A first!

I thought about K-Mart when I read “Google Slashes Business Software Prices for US Federal Agencies.” I see that flickering blue light as I type this short blog post. The trusted “real” news source reports:

Google will offer steep discounts to U.S. federal agencies for its business apps package as the company looks to capitalize on the Trump administration’s cost-cutting push and chip away at Microsoft’s longstanding grip on the government software market.

Yep, discounts. Now Microsoft has some traction in the US government. I cannot imagine what life would be like for aides to a senior Pentagon official if he did not have nifty PowerPoint presentations. Perhaps offering a deal will get some Microsoft aficionados to learn to live without Excel and Word? I don’t know, but Google is giving the “discount” method a whirl.

What’s up with Google? I think someone told me that Gemini 2.5 was free. Now a discount on GSA listed services which could amount to $2 billion in savings … if — yes, that magic word — if the US government dumps the Softies’ outstanding products for the cloudy goodness of the Google’s way. Yep, “if.”

I have a cute anecdote about Google and the US government from the year 2000, but, alas, I cannot share it. Trust me. It is a knee slapper. And, no, it is not about Sergey wearing silver sparkle sneakers to meetings with US elected officials. Those were indeed eye catchers among shoes with toes that looked like potatoes.

Several observations:

  1. Google, like Amazon, is trying to obtain US government business. I think the flashing blue lights, if I were still working in the hallowed halls, would impair my vision. Price cutting seems to be the one true way right now.
  2. Will lower prices have an impact on US government procurement? I am not sure. The procurement process chugs along every day and in quite predictable ways. How long does it take to turn a battleship, assuming the captain can pull off the maneuver without striking a small fishing boat, of course?
  3. Google seems to think that slashing prices for its “products” will boost sales. My understanding of Google is that its sale to government agencies pivots on several characteristics; for example, [a] listening and understanding what government professionals say, [b] providing a modicum of customer support or at the very least answering a phone call from a government professional, and [c] delivering products that the aides, assistants, and contractors understand and can use to crank out documents with numbered lines, dense charts, and bullet points that mostly stay in place after a graphic is inserted.

To sum up, I find the idea of price cuts interesting. My initial reaction is that price cuts and procurement are not necessarily lined up procedurally. But I am a dinobaby. After 50 years of “government” work, I have a keen desire to see if the Google can shine enough blue lights to bedazzle the people involved in purchasing software to keep the admirals happy. (I speak from a little experience working with the late Admiral Craig Hosmer, R-Calif., whom I thank for his service.)

Stephen E Arnold, April 15, 2025

AI Horn Honking: Toot for Refact

April 10, 2025

What is one of the things we were taught in kindergarten? Oh, right. Humility. That, however, doesn’t apply when you’re in a job interview, selling a product, or writing a press release. A Dev.to post announces that an open source AI agent for programming in the IDE ranked high: “Our AI Agent + 3.7 Sonnet Ranked #1 On Aider’s Polyglot Bench — A 76.4% Score.”

As the title says, the open source AI programming agent scored 76.4%. The agent is called Refact.ai and was upgraded with Claude 3.7 Sonnet. It outperformed other AI agents, including Claude, Deepseek, ChatGPT, GPT-4.5 Preview, and Aider.

Refact.ai does better than the others because it is an intuitive AI agent. It uses a feedback loop to create a self-learning, auto-correcting AI agent:

• “Writes code: The agent generates code based on the task description.

• Fixes errors: Runs automated checks for issues.

• Iterates: If problems are found, the agent corrects the code, fixes bugs, and re-tests until the task is successfully completed.

• Delivers the result, which will be correct most of the time!”
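The four steps quoted above can be sketched as a simple loop. This is not Refact.ai’s actual code; the “model” below is a toy stub standing in for a real LLM call, and all names are invented for illustration.

```python
# A minimal sketch of the write/check/iterate agent loop described above.
# The toy "model" stands in for an LLM; it fixes its bug only after feedback.

def agent_loop(task, generate, run_checks, max_attempts=5):
    """Generate code, run automated checks, and retry until the checks pass."""
    feedback = None
    for attempt in range(1, max_attempts + 1):
        code = generate(task, feedback)   # "Writes code"
        problems = run_checks(code)       # "Fixes errors: runs automated checks"
        if not problems:
            return code, attempt          # "Delivers the result"
        feedback = problems               # "Iterates" using the check output
    raise RuntimeError(f"task not solved in {max_attempts} attempts")

# Toy stand-ins for the model and the test harness.
def toy_generate(task, feedback):
    # First draft has an off-by-style bug; the "model" repairs it after feedback.
    return "def double(x): return x * 2" if feedback else "def double(x): return x + 2"

def toy_checks(code):
    namespace = {}
    exec(code, namespace)
    return [] if namespace["double"](3) == 6 else ["double(3) != 6"]

code, attempts = agent_loop("write double()", toy_generate, toy_checks)
print(attempts)  # the toy model needed a second attempt
```

The point of the design is that the check output becomes the next prompt’s feedback, so the loop converges on code that passes its own tests rather than on code that merely looks plausible.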

Refact.ai has good reasons to pat itself on the back. Hopefully the team will continue to develop and deliver high-performing AI agents.

Whitney Grace, April 10, 2025

AI Addicts Are Now a Thing

April 9, 2025

Hey, pal, can you spare a prompt?

Gee, who could have seen this coming? It seems one can become dependent on a chatbot, complete with addiction indicators like preoccupation, withdrawal symptoms, loss of control, and mood modification. "Something Bizarre Is Happening to People Who Use ChatGPT a Lot," reports The Byte. Writer Noor Al-Sibai cites a recent joint study by OpenAI and the MIT Media Lab as she writes:

"To get there, the MIT and OpenAI team surveyed thousands of ChatGPT users to glean not only how they felt about the chatbot, but also to study what kinds of ‘affective cues,’ which was defined in a joint summary of the research as ‘aspects of interactions that indicate empathy, affection, or support,’ they used when chatting with it. Though the vast majority of people surveyed didn’t engage emotionally with ChatGPT, those who used the chatbot for longer periods of time seemed to start considering it to be a ‘friend.’ The survey participants who chatted with ChatGPT the longest tended to be lonelier and get more stressed out over subtle changes in the model’s behavior, too. Add it all up, and it’s not good. In this study as in other cases we’ve seen, people tend to become dependent upon AI chatbots when their personal lives are lacking. In other words, the neediest people are developing the deepest parasocial relationship with AI — and where that leads could end up being sad, scary, or somewhere entirely unpredictable."

No kidding. Interestingly, the study found those who use the bot as an emotional or psychological sounding board were less likely to become dependent than those who used it for "non-personal" tasks, like brainstorming. Perhaps because the former are well-adjusted enough to examine their emotions at all? (The privacy risks of sharing such personal details with a chatbot are another issue entirely.) Al-Sibai emphasizes the upshot of the research: The more time one spends using ChatGPT, the more likely one is to become emotionally dependent on it. We think parents, especially, should be aware of this finding.

How many AI outfits will offer free AI? You know. Just give folks a taste.

Cynthia Murrell, April 9, 2025

Bye-Bye Newsletters, Hello AI Marketing Emails

April 4, 2025

Adam Ryan takes aim at newsletters in the Work Week article, “Perpetual: The Major Shift of Media.” Ryan starts the article by saying we’re already in a changing media landscape, and if you’re not preparing, you will be left behind. He then dives into more detail, explaining that the latest trend setter is the email newsletter. From his work in advertising, Ryan has seen newsletters rise from the bottom of the food chain to million-dollar marketing tools.

He explains that newsletters becoming important marketing tools wasn’t an accident; it happened through a democratization process. By democratization, Ryan means that newsletters became easier to make through the use of simplification software. He uses the example of Shopify streamlining e-commerce and Beehiiv doing the same for newsletters. Another example is Windows making PCs easier to use with its intuitive UI.

Continuing with the Shopify example, Ryan says that mass adoption of the e-commerce tool has flooded the marketplace. Top brands that used to dominate the market are now overshadowed by competition. In short, everyone and the kitchen sink is selling goods and services.

Ryan says that the newsletter trend is about to shift, and people (operators) who focus solely on this trend will fall out of favor. He quotes Warren Buffett: “Be fearful when others are greedy, and be greedy when others are fearful.” Ryan continues that people are changing how they consume information and want less of it, not more. Enter the AI tool:

“Here’s what that means:

• Email open rates will drop as people consume summaries instead of full emails.

• Ad clicks will collapse as fewer people see newsletter ads.

• The entire value of an “owned audience” declines if AI decides what gets surfaced.”

It’s not the end of the line for newsletters if you become indispensable: create content that can’t be summarized, build relationships beyond email, and don’t be a commodity:

“This shift is coming. AI will change how people engage with email. That means the era of high-growth newsletters is ending. The ones who survive will be the ones who own their audience relationships, create habit-driven content, and build businesses beyond the inbox.”

This is true about every major change, not just newsletters.

Whitney Grace, April 4, 2025

The AI Market: The Less-Educated

April 2, 2025

Writing is an essential function of education and communication. Writing is an innate skill as well as one that can be cultivated through dedicated practice. Digital writing tools such as spelling and grammar checkers, and now AI like Grammarly and ChatGPT, have influenced writing. Stanford University studied how AI writing tools have impacted writing in professional industries. The researchers discovered that less-educated parts of the US heavily rely on AI. Ars Technica reviews the study in “Researchers Surprised To Find Less-Educated Areas Adopting AI Writing Tools Faster.”

Stanford’s AI study tracked LLM adoption from January 2022 to September 2024 with a dataset that included US Consumer Financial Protection Bureau consumer complaints, corporate press releases, job postings, and UN press releases. The researchers used a statistical detection system that tracked word usage patterns. The system found that 14-24% of these communications showed AI assistance. The study also found an interesting pattern:
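The study’s statistical machinery is more involved than this, but the basic idea of estimating AI assistance from shifts in word usage can be sketched as a mixture estimate. The marker-word rates below are invented for illustration, not taken from the paper.

```python
# Illustrative sketch only: estimate the share of AI-assisted documents in a
# corpus from the rate of "marker" words. If markers appear at a known rate in
# human text and a higher known rate in AI-assisted text, the observed corpus
# rate pins down the mixture fraction.

def estimate_ai_fraction(observed_rate, human_rate, ai_rate):
    """Solve observed = (1 - f) * human + f * ai for the mixture fraction f."""
    f = (observed_rate - human_rate) / (ai_rate - human_rate)
    return min(1.0, max(0.0, f))  # clamp to a valid proportion

# Toy numbers: a marker word appears in 2% of known human documents,
# 30% of known AI-assisted documents, and 9% of the corpus under study.
print(estimate_ai_fraction(0.09, 0.02, 0.30))  # 0.25
```

The appeal of this kind of estimator is that it works at the population level: it cannot label any single complaint or press release as AI-written, but it can say roughly what fraction of the pile shows AI assistance.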

“The study also found that while urban areas showed higher adoption overall (18.2 percent versus 10.9 percent in rural areas), regions with lower educational attainment used AI writing tools more frequently (19.9 percent compared to 17.4 percent in higher-education areas). The researchers note that this contradicts typical technology adoption patterns where more educated populations adopt new tools fastest.”

The researchers theorize that AI writing tools serve as equalizing measures for less-educated individuals. They also noted that AI writing tools are being adopted because the market is saturated and the LLMs are becoming more advanced. It will be difficult to distinguish between human- and machine-written text. They predict negative outcomes from this:

“ ‘The growing reliance on AI-generated content may introduce challenges in communication,’ the researchers write. ‘In sensitive categories, over-reliance on AI could result in messages that fail to address concerns or overall release less credible information externally. Over-reliance on AI could also introduce public mistrust in the authenticity of messages sent by firms.’”

It’s not good to blindly trust AI, especially with the current state of datasets. Can you imagine the critical thinking skills these future leaders and entrepreneurs will develop? On that thought, what will happen to imagination?

Whitney Grace, April 2, 2025
