Salesforce Surfs Agentic AI and Hopes to Stay on the Long Board

January 7, 2025

This is an official dinobaby post. No smart software involved in this blog post.

I spotted a content marketing play, and I found it amusing. The spin was enough to make my eyes wobble. “Intelligence (AI). Its Stock Is Up 39% in 4 Months, and It Could Soar Even Higher in 2025” appeared in the Motley Fool online investment information service. The headline is standard fare, but the catchphrase in the write up is “the third wave of AI.” What were the other two waves, you may ask? The first wave was machine learning, a discipline whose age is measured in decades. The second wave, which garnered the attention of the venture community and outfits like Google, was generative AI. I think of the second wave as the content suck up moment.

So what’s the third wave? Answer: Salesforce. Yep, the guts of the company are a digitized record of sales contacts. The old word for what made a successful salesperson valuable was “Rolodex.” But today one may as well talk about a pressing ham.

What makes this content marketing-type article notable is that Salesforce wants to “win” the battle of the enterprise and relegate Microsoft to the bench. What’s interesting is that Salesforce’s innovation is presented this way:

The next wave of AI will build further on generative AI’s capabilities, enabling AI to make decisions and take actions across applications without human intervention. Salesforce (CRM -0.42%) CEO Marc Benioff calls it the “digital workforce.” And his company is leading the growth in this Agentic AI with its new Agentforce product.

Agentic.

What’s Salesforce’s secret sauce? The write up says:

Artificial intelligence algorithms are only as good as the data used to train them. Salesforce has accurate and specific data about each of its enterprise customers that nobody else has. While individual businesses could give other companies access to those data, Salesforce’s ability to quickly and simply integrate client data as well as its own data sets makes it a top choice for customers looking to add AI agents to their “workforce.” During the company’s third-quarter earnings call, Benioff called Salesforce’s data an “unfair advantage,” noting Agentforce agents are more accurate and less hallucinogenic as a result.

To put some focus on the competition, Salesforce targets Microsoft. The write up says:

Benioff also called out what might be Salesforce’s largest competitor in Agentic AI, Microsoft (NASDAQ: MSFT). While Microsoft has a lot of access to enterprise customers thanks to its Office productivity suite and other enterprise software solutions, it doesn’t have as much high-quality data on a business as Salesforce. As a result, Microsoft’s Copilot abilities might not be up to Agentforce in many instances. Benioff points out Microsoft isn’t using Copilot to power its online help desk like Salesforce.

I think it is worth mentioning that Apple’s AI seems to be a tad problematic. Also, those AI laptops are not the pet rock for a New Year’s gift.

What’s the Motley Fool doing for Salesforce besides making the company’s stock into a sure-fire winner for 2025? The rah rah is intense; for example:

But if there’s one thing investors have learned from the last two years of AI innovation, it’s that these things often grow faster than anticipated. That could lead Salesforce to outperform analysts’ expectations over the next few years, as it leads the third wave of artificial intelligence.

Let me offer several observations:

  1. Salesforce sees a marketing opportunity for its “agentic” wrappers or apps. Therefore, put the pedal to the metal and grab mind share and market share. That’s not much different from the company’s attention push.
  2. Salesforce recognizes that Microsoft has some momentum in some very lucrative markets. The prime example is the Microsoft tie up with Palantir. Salesforce does not have that type of hook to generate revenue from US government defense and intelligence budgets.
  3. Salesforce is growing, but so is Oracle. Therefore, Salesforce feels that it could become the cold salami in the middle of a Microsoft and Oracle sandwich.

Net net: Salesforce has to amp up whatever it can before companies that are catching the rising AI cloud wave swamp the Salesforce surf board.

Stephen E Arnold, January 7, 2025

China Smart, US Dumb: The Deepseek Interview

January 6, 2025

This is an official dinobaby post. I used AI to assist me in writing this AI post. In fact, I used the ChatGPT system, which seems to be the benchmark against which China’s AI race leader measures itself. This suggests that Deepseek has a bit of a second-place mentality, a bit of jealousy, and possibly a signal of inferiority, doesn’t it?

“Deepseek: The Quiet Giant Leading China’s AI Race” is a good example of what the Middle Kingdom is revealing about smart software. The 5,000-word essay became available as a Happy New Year’s message to the US. Like the girl repairing broken generators without fancy tools, the message is clear to me: 2025 is going to be different.


Here’s an abstract of the “interview” generated by a US smart software system. I would have used Deepseek, but I don’t have access to it. I used the ChatGPT service, which Deepseek has surpassed, to create the paragraph below. To check the summary against the ChinaTalk original, read the 5,000-word piece and make some comparisons yourself.

Deepseek, a Chinese AI startup, has emerged as an innovator in the AI industry, surpassing OpenAI’s o1 model with its R1 model on reasoning benchmarks. Backed entirely by High-Flyer, a top Chinese quantitative hedge fund, Deepseek focuses on foundational AI research, eschewing commercialization and emphasizing open-source development. The company has disrupted the AI market with breakthroughs like the multi-head latent attention and sparse mixture-of-experts architectures, which significantly reduce inference and computational costs, sparking a price war among Chinese AI developers. Liang Wenfeng, Deepseek CEO, aims to achieve artificial general intelligence through innovation rather than imitation, challenging the common perception that Chinese companies prioritize commercialization over technological breakthroughs. Wenfeng’s background in AI and engineering has fostered a bottom-up, curiosity-driven research culture, enabling the team to develop transformative models. Deepseek Version 2 delivers unparalleled cost efficiency, prompting major tech giants to reduce their API prices. Deepseek’s commitment to innovation extends to its organizational approach, leveraging young, local talent and promoting interdisciplinary collaboration without rigid hierarchies. The company’s open-source ethos and focus on advancing the global AI ecosystem set it apart from other large-model startups. Despite industry skepticism about China’s capacity for original innovation, Deepseek is reshaping the narrative, positioning itself as a catalyst for technological advancement. Liang’s vision highlights the importance of confidence, long-term investment in foundational research, and societal support for hardcore innovation. As Deepseek continues to refine its AGI roadmap, focusing on areas like mathematics, multimodality, and natural language, it exemplifies the transformative potential of prioritizing innovation over short-term profit.

I left the largely unsupported assertions in this summary. I also retained the repeated emphasis on innovation, originality, and local talent. With the aid of smart software, I was able to retain the essence of the content marketing propaganda piece’s 5,000 words.

You may disagree with my viewpoint. That’s okay. Let me annoy you further by offering several observations:

  1. The release of this PR piece coincides with additional information about China’s infiltration of the US telephone network and the directed cyber attack on the US Treasury.
  2. The multi-pronged content marketing / propaganda flow about China’s “local talent” is a major theme of these PR efforts. The examples run from the humble brilliant girl repairing equipment with primitive tools because she is a “genius” to the notion that China’s young “local talent” have gone beyond what the “imported” talent in the US has been able to achieve. One tine of the conceptual pitchfork is that the US is stupid. The other tine is that China just works better, smarter, faster, and cheaper.
  3. The messaging is largely accomplished using free or low cost US developed systems and methods. This is definitely surfing on other people’s knowledge waves.

Net net: Mr. Putin is annoyed that the European Union wants to block Russia-generated messaging about the “special action.” The US is less concerned about China’s propaganda attacks. The New Year will be interesting, but I have lived through enough “interesting times” to do much more than write blog posts from my outpost in rural Kentucky. What about you, gentle reader? China smart, US dumb: Which is it?

Stephen E Arnold, January 6, 2025

Chinese AI Lab Deepseek Grinds Ahead…Allegedly

December 31, 2024

Is the world’s most innovative AI company a low-profile Chinese startup? ChinaTalk examines “Deepseek: The Quiet Giant Leading China’s AI Race.” The Chinese-tech news site shares an annotated translation of a rare interview with DeepSeek CEO Liang Wenfeng. The journalists note the firm’s latest R1 model just outperformed OpenAI’s o1. In their introduction to the July interview, they write:

“Before Deepseek, CEO Liang Wenfeng’s main venture was High-Flyer, a top 4 Chinese quantitative hedge fund last valued at $8 billion. Deepseek is fully funded by High-Flyer and has no plans to fundraise. It focuses on building foundational technology rather than commercial applications and has committed to open sourcing all of its models. It has also singlehandedly kicked off price wars in China by charging very affordable API rates. Despite this, Deepseek can afford to stay in the scaling game: with access to High-Flyer’s compute clusters, Dylan Patel’s best guess is they have upwards of ‘50k Hopper GPUs,’ orders of magnitude more compute power than the 10k A100s they cop to publicly. Deepseek’s strategy is grounded in their ambition to build AGI. Unlike previous spins on the theme, Deepseek’s mission statement does not mention safety, competition, or stakes for humanity, but only ‘unraveling the mystery of AGI with curiosity’. Accordingly, the lab has been laser-focused on research into potentially game-changing architectural and algorithmic innovations.”

For example, we learn:

“They proposed a novel MLA (multi-head latent attention) architecture that reduces memory usage to 5-13% of the commonly used MHA architecture. Additionally, their original DeepSeekMoESparse structure minimized computational costs, ultimately leading to reduced overall costs.”
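The quoted memory figures come down to simple arithmetic. Here is a back-of-envelope sketch, using made-up dimensions rather than DeepSeek's published configuration, of why caching one shared low-rank latent vector instead of full per-head keys and values shrinks the KV cache:

```python
# Illustrative per-token KV-cache comparison: standard multi-head
# attention (MHA) versus a latent-compression scheme in the spirit of
# MLA. The dimensions below are invented for illustration only.

def mha_cache_per_token(n_heads, head_dim):
    # MHA caches full keys and values for every head: 2 tensors per head.
    return 2 * n_heads * head_dim

def mla_cache_per_token(latent_dim):
    # MLA-style compression caches a single shared low-rank latent from
    # which per-head keys and values are re-projected at compute time.
    return latent_dim

mha = mha_cache_per_token(n_heads=32, head_dim=128)  # 8192 values per token
mla = mla_cache_per_token(latent_dim=1024)           # 1024 values per token
ratio = mla / mha                                    # 0.125, i.e. 12.5%
```

With these illustrative numbers the cache falls to 12.5% of the MHA baseline, which lands in the 5-13% range the interview cites; the real savings depend on the model's actual head count and latent width.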

Those in Silicon Valley are well aware of this “mysterious force from the East,” with several AI head honchos heaping praise on the firm. The interview is split into five parts. The first examines the large-model price war set off by Deepseek’s V2 release. Next, Wenfeng describes how an emphasis on innovation over imitation sets his firm apart but, in part three, notes that more money does not always lead to more innovation. Part four takes a look at the talent behind DeepSeek’s work, and in part five the CEO looks to the future. Interested readers should check out the full interview. Headquartered in Hangzhou, China, the young firm was founded in 2023.

Cynthia Murrell, December 31, 2024

AI Video Is Improving: Hello, Hollywood!

December 30, 2024

Has AI video gotten scarily believable? Well, yes. For anyone who has not gotten the memo, The Guardian declares, “Video Is AI’s New Frontier—and It Is so Persuasive, We Should All Be Worried.” Writer Victoria Turk describes recent developments:

“Video is AI’s new frontier, with OpenAI finally rolling out Sora in the US after first teasing it in February, and Meta announcing its own text-to-video tool, Movie Gen, in October. Google made its Veo video generator available to some customers this month. Are we ready for a world in which it is impossible to discern which of the moving images we see are real?”

Ready or not, here it is. No amount of hand-wringing will change that. Turk mentions ways bad actors abuse the technology: Scammers who impersonate victims’ loved ones to extort money. Deepfakes created to further political agendas. Fake sexual images and videos featuring real people. She also cites safeguards like watermarks and content restrictions as evidence AI firms understand the potential for abuse.

But the author’s main point seems to be more philosophical. It was prompted by convincing fake footage of a tree frog, documentary style. She writes:

“Yet despite the technological feat, as I watched the tree frog I felt less amazed than sad. It certainly looked the part, but we all knew that what we were seeing wasn’t real. The tree frog, the branch it clung to, the rainforest it lived in: none of these things existed, and they never had. The scene, although visually impressive, was hollow.”

Turk also laments the existence of this Meta-made baby hippo, which she declares is “dead behind the eyes.” Is it though? Either way, these experiences led Turk to ponder a bleak future in which one can never know which imagery can be trusted. She concludes with this anecdote:

“I was recently scrolling through Instagram and shared a cute video of a bunny eating lettuce with my husband. It was a completely benign clip – but perhaps a little too adorable. Was it AI, he asked? I couldn’t tell. Even having to ask the question diminished the moment, and the cuteness of the video. In a world where anything can be fake, everything might be.”

That is true. An important point to remember when we see footage of a politician doing something horrible. Or if we get a distressed call from a family member begging for money. Or if we see a cute animal video but prefer to withhold the dopamine rush lest it turn out to be fake.

Cynthia Murrell, December 30, 2024

Debbie Downer Says, No AI Payoff Until 2026

December 27, 2024

Holiday greetings from the Financial Review. Its story “Wall Street Needs to Prepare for an AI Winter” is a joyous description of what’s coming down the Information Highway. The uplifting article sings:

shovelling more and more data into larger models will only go so far when it comes to creating “intelligent” capabilities, and we’ve just about arrived at that point. Even if more data were the answer, those companies that indiscriminately vacuumed up material from any source they could find are starting to struggle to acquire enough new information to feed the machine.

Translating to rural Kentucky speak: “We been shoveling in the horse stall and ain’t found the nag yet.”

The flickering light bulb has apparently illuminated the idea that smart software is expensive to develop, train, optimize, run, market, and defend against allegations of copyright infringement.

To add to the profit shadow, Debbie Downer’s cousin compared OpenAI to Visa. The idea in “OpenAI Is Visa” is that Sam AI-Man’s company is working overtime to preserve its lead in AI and become a monopoly before competitors figure out how to knock off OpenAI. The write up says:

Either way, Visa and OpenAI seem to agree on one thing: that “competition is for losers.”

To add to the uncertainty about US AI “dominance,” Venture Beat reports:

DeepSeek-V3, ultra-large open-source AI, outperforms Llama and Qwen on launch.

Does that suggest that the squabbling and mud wrestling among US firms can be body slammed by Chinese AI grapplers who are more agile? Who knows. However, in a series of tweets, DeepSeek suggested that its “cost” was less than $6 million. The idea is that what Chinese electric car pricing is doing to some EV manufacturers, China’s AI will do to US AI. Better and faster? I don’t know, but that “cheaper” angle will resonate with those asked to pump cash into the Big Dogs of US AI.

In January 2023, many were struck by the wonders of smart software. Will the same festive atmosphere prevail in 2025?

Stephen E Arnold, December 27, 2024

OpenAI Partners with Defense Startup Anduril to Bring AI to US Military

December 27, 2024

No smart software involved. Just a dinobaby’s work.

We learn from the Independent that “OpenAI Announces Weapons Company Partnership to Provide AI Tech to Military.” The partnership with Anduril represents an about-face for OpenAI. This will excite some people, scare others, and lead to remakes of the “Terminator.” Beyond Search thinks that automated smart death machines are so trendy. China also seems enthused. We learn:

“ChatGPT-maker OpenAI and high-tech defense startup Anduril Industries will collaborate to develop artificial intelligence-inflected technologies for military applications, the companies announced. ‘U.S. and allied forces face a rapidly evolving set of aerial threats from both emerging unmanned systems and legacy manned platforms that can wreak havoc, damage infrastructure and take lives,’ the companies wrote in a Wednesday statement. ‘The Anduril and OpenAI strategic partnership will focus on improving the nation’s counter-unmanned aircraft systems (CUAS) and their ability to detect, assess and respond to potentially lethal aerial threats in real-time.’ The companies framed the alliance as a way to secure American technical supremacy during a ‘pivotal moment’ in the AI race against China. They did not disclose financial terms.”

Of course not. Tech companies were once wary of embracing military contracts, but it seems those days are over. Why now? The article observes:

“The deals also highlight the increasing nexus between conservative politics, big tech, and military technology. Palmer Luckey, co-founder of Anduril, was an early, vocal supporter of Donald Trump in the tech world, and is close with Elon Musk. … Vice-president-elect JD Vance, meanwhile, is a protege of investor Peter Thiel, who co-founded Palantir, another of the companies involved in military AI.”

“Involved” is putting it lightly. And as readers may have heard, Musk appears to be best buds with the president-elect. He is also at the head of the new Department of Government Efficiency, which sounds like a federal agency but is not. Yet. The commission is expected to strongly influence how the next administration spends our money. Will they adhere to multinational guidelines on military use of AI? Do PayPal alums have any hand in this type of deal?

Cynthia Murrell, December 27, 2024

AI Oh-Oh: Innovation Needed Now

December 27, 2024

This blog post is the work of an authentic dinobaby. No smart software was used.

I continue to hear about AI whiz kids “running out of data.” When people and institutions don’t know what’s happening, it is easy to just smash and grab. The copyright litigation and the willingness of AI companies to tie up with content owners make explicit that the zoom zoom days are over.


A smart software wizard is wondering how to get over, under, around, or through the stone wall of exhausted content. Thanks, Grok, good enough.

“The AI Revolution Is Running Out of Data. What Can Researchers Do?” is a less crazy discussion of the addictive craze which has made smart software or — wait for it — agentic intelligence the next big thing. The write up states:

The Internet is a vast ocean of human knowledge, but it isn’t infinite. And artificial intelligence (AI) researchers have nearly sucked it dry.

“Sucked it dry” and the systems still hallucinate. Guard rails prevent users from obtaining information germane to certain government investigations. The image generators refuse to display a classroom of students paying attention to mobile phones, not the teacher. Yep, dry. More like “run aground.”

The fix to running out of data, according to the write up, is:

plans to work around it, including generating new data and finding unconventional data sources.

One approach is to “find data.” The write up says:

one option might be to harvest non-public data, such as WhatsApp messages or transcripts of YouTube videos. Although the legality of scraping third-party content in this manner is untested, companies do have access to their own data, and several social-media firms say they use their own material to train their AI models. For example, Meta in Menlo Park, California, says that audio and images collected by its virtual-reality headset Meta Quest are used to train its AI.

And what about this angle?

Another option might be to focus on specialized data sets such as astronomical or genomic data, which are growing rapidly. Fei-Fei Li, a prominent AI researcher at Stanford University in California, has publicly backed this strategy. She said at a Bloomberg technology summit in May that worries about data running out take too narrow a view of what constitutes data, given the untapped information available across fields such as health care, the environment and education.

If you want more of these work arounds, please, consult the Nature article.

Several observations are warranted:

First, the current AI “revolution” is the result of many years of research and experimentation. The fact that today’s AI produces reasonably good high school essays and allows people to interact with a search system is a step forward. However, like most search-based innovations, the systems have flaws.

Second, the use of neural networks and the creation by Google (allegedly) of the transformer has provided fuel to fire the engines of investment. The money machines are chasing the next big thing. The problem is that the costs are now becoming evident. It is tough to hide the demand for electric power. (Hey, no problem. How about a modular thorium reactor? Yeah, just pick one up at Home Depot. The small nukes are next to the Honda generators.) There is the need for computation. Google can talk about quantum supremacy, but good old-fashioned architecture is making Nvidia a big dog in AI. And the cost of people? It is off the chart. Forget those coding boot camps and learn to do matrix math in your head.

Third, the real world applications like those Apple is known for don’t work very well. After vaporware time, Apple is pushing OpenAI to iPhone users. Will Siri actually work? Apple cannot afford to whiff on too many big plays. Do you wear your Apple headset, or do you have warm and fuzzies for the 2024 Mac Mini, which is a heck of a lot cheaper than some of the high power Macs from a year ago? What about Copilot in Notepad? Hey, that’s helpful to some Notepad users. How many? Well, that’s another question. How many people want smart software doing the Clippy thing with every click?

Net net: It is now time for innovation, not marketing. Which of the Big Dog AI outfits will break through the stone walls? The bigger question is, “What if it is an innovator in China?” Impossible, right?

Stephen E Arnold, December 27, 2024

Boxing Day Cheat Sheet for AI Marketing: Happy New Year!

December 27, 2024

Other than automation and taking the creative talent out of the entertainment industry, where is AI headed in 2025? The lowdown for the upcoming year can be found on the Techknowledgeon AI blog and its post: “The Rise Of Artificial Intelligence: Know The Answers That Makes You Sensible About AI.”

The article acts as a primer on what AI is, its advantages, and answers to important questions about the technology. The questions that grab our attention are “Will AI take over humans one day?” and “Is AI an Existential Threat to Humanity?” Here’s the answer to the first question:

“The idea of AI taking over humanity has been a recurring theme in science fiction and a topic of genuine concern among some experts. While AI is advancing at an incredible pace, its potential to surpass or dominate human capabilities is still a subject of intense debate. Let’s explore this question in detail.

AI, despite its impressive capabilities, has significant limitations:

  • Lack of General Intelligence: Most AI today is classified as narrow AI, meaning it excels at specific tasks but lacks the broader reasoning abilities of human intelligence.
  • Dependency on Humans: AI systems require extensive human oversight for design, training, and maintenance.
  • Absence of Creativity and Emotion: While AI can simulate creativity, it doesn’t possess intrinsic emotions, intuition, or consciousness.”

And then the second one is:

“Instead of "taking over," AI is more likely to serve as an augmentation tool:

  • Workforce Support: AI-powered systems are designed to complement human skills, automating repetitive tasks and freeing up time for creative and strategic thinking.
  • Health Monitoring: AI assists doctors but doesn’t replace the human judgment necessary for patient care.
  • Smart Assistants: Tools like Alexa or Google Assistant enhance convenience but operate under strict limitations.”

So AI has a long way to go before it replaces humanity, and the singularity of surpassing human intelligence is either a long way off or might never happen.

This dossier includes useful information to understand where AI is going and will help anyone interested in learning what AI algorithms are projected to do in 2025.

Whitney Grace, December 27, 2024

Juicing Up RAG: The RAG Bop Bop

December 26, 2024

Can improved information retrieval techniques lead to more relevant data for AI models? One startup is using a pair of existing technologies to attempt just that. MarkTechPost invites us to “Meet CircleMind: An AI Startup that is Transforming Retrieval Augmented Generation with Knowledge Graphs and PageRank.” Writer Shobha Kakkar begins by defining Retrieval Augmented Generation (RAG). For those unfamiliar, it basically combines information retrieval with language generation. Traditionally, these models use either keyword searches or dense vector embeddings. This means a lot of irrelevant and unauthoritative data get raked in with the juicy bits. The write-up explains how this new method refines the process:

“CircleMind’s approach revolves around two key technologies: Knowledge Graphs and the PageRank Algorithm. Knowledge graphs are structured networks of interconnected entities—think people, places, organizations—designed to represent the relationships between various concepts. They help machines not just identify words but understand their connections, thereby elevating how context is both interpreted and applied during the generation of responses. This richer representation of relationships helps CircleMind retrieve data that is more nuanced and contextually accurate. However, understanding relationships is only part of the solution. CircleMind also leverages the PageRank algorithm, a technique developed by Google’s founders in the late 1990s that measures the importance of nodes within a graph based on the quantity and quality of incoming links. Applied to a knowledge graph, PageRank can prioritize nodes that are more authoritative and well-connected. In CircleMind’s context, this ensures that the retrieved information is not only relevant but also carries a measure of authority and trustworthiness. By combining these two techniques, CircleMind enhances both the quality and reliability of the information retrieved, providing more contextually appropriate data for LLMs to generate responses.”
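CircleMind has not published its implementation, but the two ingredients named in the passage above, a knowledge graph and PageRank, can be sketched in a few lines. Everything below (the toy graph, the passages, the entity sets) is hypothetical illustration, not CircleMind's actual pipeline:

```python
# Sketch: PageRank over a toy knowledge graph, then re-rank retrieved
# passages by the authority of the entities they mention.

def pagerank(graph, damping=0.85, iters=50):
    """graph: {node: [outgoing neighbors]} -> {node: score summing to 1}."""
    nodes = list(graph)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - damping) / n for v in nodes}
        for v, outs in graph.items():
            if outs:
                share = damping * rank[v] / len(outs)
                for u in outs:
                    new[u] += share
            else:  # dangling node: distribute its mass evenly
                for u in nodes:
                    new[u] += damping * rank[v] / n
        rank = new
    return rank

# Hypothetical knowledge graph: edges point from an entity to entities
# it references. "GPT-4" has the most incoming links, so it ranks highest.
kg = {
    "OpenAI": ["GPT-4"],
    "GPT-4": [],
    "Google": ["PageRank", "Gemini"],
    "PageRank": [],
    "Gemini": ["GPT-4"],
}
scores = pagerank(kg)

# Hypothetical retrieved passages, each tagged with the entities it mentions.
passages = [
    ("p1", {"Gemini"}),
    ("p2", {"GPT-4", "PageRank"}),
    ("p3", {"OpenAI"}),
]
ranked = sorted(passages, key=lambda p: -sum(scores[e] for e in p[1]))
```

The design point is the one the write-up makes: the graph supplies relationships, and PageRank turns link structure into an authority score, so a passage mentioning well-connected entities outranks one mentioning peripheral ones.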

CircleMind notes its approach is still in its early stages, and expects it to take some time to iron out all the kinks. Scaling it up will require clearing hurdles of speed and computational costs. Meanwhile, a few early users are getting a taste of the beta version now. Based in San Francisco, the young startup was launched in 2024.

Cynthia Murrell, December 26, 2024

Anthropic Gifts a Feeling of Safety: Insecurity Blooms This Holiday Season

December 25, 2024

Written by a dinobaby, not an over-achieving, unexplainable AI system.

TechCrunch published “Google Is Using Anthropic’s Claude to Improve Its Gemini AI.” The write up reports:

Contractors working to improve Google’s Gemini AI are comparing its answers against outputs produced by Anthropic’s competitor model Claude, according to internal correspondence seen by TechCrunch. Google would not say, when reached by TechCrunch for comment, if it had obtained permission for its use of Claude in testing against Gemini.

Beyond Search notes a Pymnts.com report from February 5, 2023, that Google had invested $300 million in Anthropic at that time. Beyond Search recalls a presentation at a law enforcement conference. One comment made by an attendee to me suggested that Google was well aware of Anthropic’s so-called constitutional AI. I am immune to AI and crypto babble, but I did chase down “constitutional AI” because the image the bound phrase sparked in my mind was that of the mess my French bulldog delivers when he has eaten spicy food.


The illustration comes from You.com. Kwanzaa was the magic word. Good enough.

The explanation consumes 34 pages of an ArXiv paper called “Constitutional AI: Harmlessness from AI Feedback.” The paper has more than 48 authors. (Headhunters, please, take note when you need to recruit AI wizards.) I read the paper, and I think — please, note, “think” — the main idea is:

Humans provide some input. Then the Anthropic system figures out how to achieve helpfulness and instruction-following without human feedback. And the “constitution”? Those are the human-created rules necessary to get the smart software rolling along. Presumably Anthropic’s algorithms ride without training wheels forevermore.
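As a control-flow sketch only, the critique-and-revise loop the paper describes might look like the following. The `model` function is a trivial rule-based stand-in for a real LLM call, and the two-principle constitution is invented for illustration:

```python
# Hypothetical sketch of a Constitutional AI revision loop. A real system
# would replace `model` with an actual LLM call; here it is a deterministic
# stub that strips a marker word, so the control flow is runnable.

CONSTITUTION = [
    "Identify anything harmful in the response and rewrite it to be harmless.",
    "Identify anything unhelpful in the response and rewrite it to be helpful.",
]

def model(prompt):
    # Stub LLM: extracts the response text and "revises" it by removing
    # the marker word. Stands in for a genuine critique-and-rewrite call.
    return prompt.split("RESPONSE:")[-1].replace(" HARMFUL", "").strip()

def constitutional_revision(response, constitution=CONSTITUTION):
    # One critique-and-revise pass per principle; the final revisions
    # become training data, with no human feedback inside the loop.
    for principle in constitution:
        response = model(f"{principle}\nRESPONSE: {response}")
    return response

draft = "Here is some HARMFUL advice"
revised = constitutional_revision(draft)  # "Here is some advice"
```

The point of the sketch is the division of labor: humans write the short list of principles once, and the model itself does the critiquing and rewriting from then on.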

The CAI acronym has not caught on like the snappier RAG or “retrieval augmented generation” or the most spectacular jargon, “synthetic data.” But obviously Google understands and values the approach to the tune of hundreds of millions of dollars, staff time, and the attention of big Googler thinkers like Jeff Dean (who once was the Big Dog of AI but has given way to the alpha dog at DeepMind).

The swizzle for this “testing” or whatever the Googlers are doing is “safety.” I know that when I ask for an image like “a high school teacher at the greenboard talking to students who are immersed in their mobile phones”, I am informed that the image is not safe. I assume Anthropic will make such crazy prohibitions slightly less incomprehensible. Well, maybe, maybe not.

Several observations are warranted:

  1. Google’s investment in Anthropic took place shortly after the Microsoft AI marketing coup in 2023. Perhaps someone knew that Google’s “we invented it” transformer technology was becoming a bit of a problem.
  2. Despite the Google “we are the bestest” in AI technology, the company continues to feel the need to prove that it is the bestest. That’s good. Self-knowledge and defeating “not invented here” malaise are positives.
  3. DeepMind itself — although identified as the go-to place for the most bestest AI technology — may not be perceived as the outfit with the Holy Grail, the secret to eternal life, and the owner of most of the land on which the Seven Cities of Cibola are erected.

Net net: Lots of authors, Google testing itself, and a bit of Google’s inferiority complex — Quite a Kwanzaa gift.

Stephen E Arnold, December 25, 2024
