Salesforce Surfs Agentic AI and Hopes to Stay on the Long Board

January 7, 2025

This is an official dinobaby post. No smart software involved in this blog post.

I spotted a content marketing play, and I found it amusing. The spin was enough to make my eyes wobble. “Intelligence (AI). Its Stock Is Up 39% in 4 Months, and It Could Soar Even Higher in 2025” appeared in the Motley Fool online investment information service. The headline is standard fare, but the catchphrase in the write up is “the third wave of AI.” What were the other two waves, you may ask? The first wave was machine learning, an approach whose age is measured in decades. The second wave, which garnered the attention of the venture community and outfits like Google, was generative AI. I think of the second wave as the content suck up moment.

So what’s the third wave? Answer: Salesforce. Yep, the guts of the company are a digitized record of sales contacts. The old word for what made a successful salesperson valuable was “Rolodex.” But today one may as well talk about a pressing ham.

What makes this content marketing-type article notable is that Salesforce wants to “win” the battle of the enterprise and relegate Microsoft to the bench. What’s interesting is that Salesforce’s innovation is presented this way:

The next wave of AI will build further on generative AI’s capabilities, enabling AI to make decisions and take actions across applications without human intervention. Salesforce (CRM -0.42%) CEO Marc Benioff calls it the “digital workforce.” And his company is leading the growth in this Agentic AI with its new Agentforce product.

Agentic.

What’s Salesforce’s secret sauce? The write up says:

Artificial intelligence algorithms are only as good as the data used to train them. Salesforce has accurate and specific data about each of its enterprise customers that nobody else has. While individual businesses could give other companies access to those data, Salesforce’s ability to quickly and simply integrate client data as well as its own data sets makes it a top choice for customers looking to add AI agents to their “workforce.” During the company’s third-quarter earnings call, Benioff called Salesforce’s data an “unfair advantage,” noting Agentforce agents are more accurate and less hallucinogenic as a result.

To put some focus on the competition, Salesforce targets Microsoft. The write up says:

Benioff also called out what might be Salesforce’s largest competitor in Agentic AI, Microsoft (NASDAQ: MSFT). While Microsoft has a lot of access to enterprise customers thanks to its Office productivity suite and other enterprise software solutions, it doesn’t have as much high-quality data on a business as Salesforce. As a result, Microsoft’s Copilot abilities might not be up to Agentforce in many instances. Benioff points out Microsoft isn’t using Copilot to power its online help desk like Salesforce.

I think it is worth mentioning that Apple’s AI seems to be a tad problematic. Also, those AI laptops are not the pet rock for a New Year’s gift.

What’s the Motley Fool doing for Salesforce besides making the company’s stock into a sure-fire winner for 2025? The rah rah is intense; for example:

But if there’s one thing investors have learned from the last two years of AI innovation, it’s that these things often grow faster than anticipated. That could lead Salesforce to outperform analysts’ expectations over the next few years, as it leads the third wave of artificial intelligence.

Let me offer several observations:

  1. Salesforce sees a marketing opportunity for its “agentic” wrappers or apps. Therefore, put the pedal to the metal and grab mind share and market share. That’s not much different from the company’s earlier attention-grabbing pushes.
  2. Salesforce recognizes that Microsoft has some momentum in some very lucrative markets. The prime example is the Microsoft tie up with Palantir. Salesforce does not have that type of hook to generate revenue from US government defense and intelligence budgets.
  3. Salesforce is growing, but so is Oracle. Therefore, Salesforce feels that it could become the cold salami in the middle of a Microsoft and Oracle sandwich.

Net net: Salesforce has to amp up whatever it can before companies that are catching the rising AI cloud wave swamp the Salesforce surf board.

Stephen E Arnold, January 7, 2025

OpenAI Partners with Defense Startup Anduril to Bring AI to US Military

December 27, 2024

No smart software involved. Just a dinobaby’s work.

We learn from the Independent that “OpenAI Announces Weapons Company Partnership to Provide AI Tech to Military.” The partnership with Anduril represents an about-face for OpenAI. This will excite some people, scare others, and lead to remakes of the “Terminator.” Beyond Search thinks that automated smart death machines are so trendy. China also seems enthused. We learn:

“‘ChatGPT-maker OpenAI and high-tech defense startup Anduril Industries will collaborate to develop artificial intelligence-inflected technologies for military applications, the companies announced. ‘U.S. and allied forces face a rapidly evolving set of aerial threats from both emerging unmanned systems and legacy manned platforms that can wreak havoc, damage infrastructure and take lives,’ the companies wrote in a Wednesday statement. ‘The Anduril and OpenAI strategic partnership will focus on improving the nation’s counter-unmanned aircraft systems (CUAS) and their ability to detect, assess and respond to potentially lethal aerial threats in real-time.’ The companies framed the alliance as a way to secure American technical supremacy during a ‘pivotal moment’ in the AI race against China. They did not disclose financial terms.”

Of course not. Tech companies were once wary of embracing military contracts, but it seems those days are over. Why now? The article observes:

“The deals also highlight the increasing nexus between conservative politics, big tech, and military technology. Palmer Luckey, co-founder of Anduril, was an early, vocal supporter of Donald Trump in the tech world, and is close with Elon Musk. … Vice-president-elect JD Vance, meanwhile, is a protege of investor Peter Thiel, who co-founded Palantir, another of the companies involved in military AI.”

“Involved” is putting it lightly. And as readers may have heard, Musk appears to be best buds with the president elect. He is also at the head of the new Department of Government Efficiency, which sounds like a federal agency but is not. Yet. The commission is expected to strongly influence how the next administration spends our money. Will they adhere to multinational guidelines on military use of AI? Do PayPal alums have any hand in this type of deal?

Cynthia Murrell, December 27, 2024

Great Moments in Marketing: MSFT Copilot, the Salesforce Take

November 1, 2024

A humanoid wrote this essay. I tried to get MSFT Copilot to work, but it remains dead. That makes four days with weird messages about a glitch. That’s the standard: Good enough.

It’s not often I get a kick out of comments from myth-making billionaires. I read through the boy wonder’s interview with the company founder titled “An Interview with Salesforce CEO Marc Benioff about AI Abundance.” No paywall on this essay, unlike the New York Times’ downer about smart software, which appears to have played a part in a teen’s suicide. Imagine when Perplexity can control a person’s computer. What exciting stories will appear. Here’s an example of what may be more common in 2025.

Great moments in Salesforce marketing. A senior Agentforce executive considers great marketing and brand ideas of the past. Inspiration strikes. In 2024, he will make fun of Clippy. Yes, a 1995 reference will resonate with young deciders in 2024. Thanks, Stable Diffusion. You are working; MSFT Copilot is not.

The focus today is a single statement in this interview with the big dog of Salesforce. Here’s the quote:

Well, I guess it wasn’t the AGI that we were expecting because I think that there has been a level of sell, including Microsoft Copilot, this thing is a complete disaster. It’s like, what is this thing on my computer? I don’t even understand why Microsoft is saying that Copilot is their vision of how you’re going to transform your company with AI, and you are going to become more productive. You’re going to augment your employees, you’re going to lower your cost, improve your customer relationships, and fundamentally expand all your KPIs with Copilot. I would say, “No, Copilot is the new Clippy”, I’m even playing with a paperclip right now.

Let’s think about this series of references and assertions.

First, there is the direct statement “Microsoft Copilot, this thing is a complete disaster.” Let’s assume the big dog of Salesforce is right. The large and much loved company — Yes, I am speaking about Microsoft — rolled out a number of implementations, applications, and assertions. The firm caught everyone’s favorite Web search engine with its figurative pants down like a hapless Russian trooper about to be dispatched by a Ukrainian drone equipped with a variant of RTX. (That stuff goes bang.) Microsoft “won” a marketing battle and gained the advantage of time. Google with its Sundar & Prabhakar Comedy Act created an audience. Microsoft seized the opportunity to talk to the audience. The audience applauded. Whether the technology worked was, in my opinion, secondary. Microsoft wanted to be seen as the jazzy leader.

Second, the idea of a disaster is interesting. Since Microsoft relied on what may be the world’s weirdest organizational set up and supported the crumbling structure, other companies have created smart software which surfs on Google’s transformer ideas. Microsoft did not create a disaster; it had not done anything of note in the smart software world. Microsoft is a marketer. The technology is a second-class citizen. The disaster is that Microsoft’s marketing seems to be out of sync with what the PowerPoint decks say. So what’s new? The answer is, “Nothing.” The problem is that some people don’t see Microsoft’s smart software as a disaster. One example is Palantir, which is Microsoft’s new best friend. The US government cannot rely on Microsoft enough. Those contract renewals keep on rolling. Furthermore, the “certified” partners could not be more thrilled. Virtually every customer and prospect wants to do something with AI. When the blind lead the blind, a person with really bad eyesight has an advantage. That’s Microsoft. Like it or not.

Third, the pitch about “transforming your company” is baloney. But it sounds good. It helps a company do something “new” but within the really familiar confines of Microsoft software. In the good old days, it was IBM that provided the cover for doing something, anything, which could produce a marketing opportunity or a way to add a bit of pizzazz to a 1955 Chevrolet two door 210 sedan. Thus, whether the AI works or does not work, one must not lose sight of the fact that Microsoft-centric outfits are going to go with Microsoft because most professionals need PowerPoint and the bean counters do not understand anything except Excel. What strikes me as important is that Microsoft can use modest, even inept smart software, and come out a winner. Who is complaining? The Fortune 1000, the US Federal government, the legions of MBA students who cannot do a class project without Excel, PowerPoint, and Word?

Finally, the ultimate reference in the quote is Clippy. Personally, I think the big dog at Salesforce should have invoked both Bob and Clippy. Regardless of the “joke” hooked to these somewhat flawed concepts, the names “Bob” and “Clippy” have resonance. Bob rolled out in 1995. Clippy helped so many people beginning in the same year. Decades later, is Microsoft’s really odd software going to cause a 20-something who was not even born then to turn away from Microsoft products and services? Nope.

Let’s sum up: Salesforce is working hard to get a marketing lift by making Microsoft look stupid. Believe me. Microsoft does not need any help. Perhaps the big dog should come up with a marketing approach that replicates or comes close to what Microsoft pulled off in 2023. Google still hasn’t recovered fully from that kung fu blow.

The big dog needs to up its marketing game. Say Salesforce and what’s the reaction? Maybe meh.

Stephen E Arnold, November 1, 2024

Surveillance Watch Maps the Surveillance App Ecosystem

October 1, 2024

Here is an interesting resource: Surveillance Watch compiles information about surveillance tech firms, organizations that fund them, and the regions in which they are said to operate. The lists, compiled from contributions by visitors to the site, are not comprehensive. But they are full of useful information. The About page states:

“Surveillance technology and spyware are being used to target and suppress journalists, dissidents, and human rights advocates everywhere. Surveillance Watch is an interactive map that documents the hidden connections within the opaque surveillance industry. Founded by privacy advocates, most of whom were personally harmed by surveillance tech, our mission is to shed light on the companies profiting from this exploitation with significant risk to our lives. By mapping out the intricate web of surveillance companies, their subsidiaries, partners, and financial backers, we hope to expose the enablers fueling this industry’s extensive rights violations, ensuring they cannot evade accountability for being complicit in this abuse. Surveillance Watch is a community-driven initiative, and we rely on submissions from individuals passionate about protecting privacy and human rights.”

Yes, the site makes it easy to contribute information to its roundup. Anonymously, if one desires. The site’s information is divided into three alphabetical lists: Surveilling Entities, Known Targets, and Funding Organizations. As an example, here is what the service says about safeXai (formerly Banjo):

“safeXai is the entity that has quietly resumed the operations of Banjo, a digital surveillance company whose founder, Damien Patton, was a former Ku Klux Klan member who’d participated in a 1990 drive-by shooting of a synagogue near Nashville, Tennessee. Banjo developed real-time surveillance technology that monitored social media, traffic cameras, satellites, and other sources to detect and report on events as they unfolded. In Utah, Banjo’s technology was used by law enforcement agencies.”

We notice there are no substantive links which could have been included, like ones to footage of the safeXai surveillance video service or the firm’s remarkable body of patents. In our view, these patents represent an X-ray look at what most firms call artificial intelligence.

A few other names we recognize are IBM, Palantir, and Pegasus owner NSO Group. See the site for many more. The Known Targets page lists countries that, when clicked, list surveilling entities known or believed to be operating there. Entries on the Funding Organizations page include a brief description of each organization with a clickable list of surveillance apps it is known or believed to fund at the bottom. It is not clear how the site vets its entries, but the submission form does include boxes for supporting URL(s) and any files to upload. It also asks whether one consents to be contacted for more information.

Cynthia Murrell, October 1, 2024

Microsoft Explains Who Is at Fault If Copilot Smart Software Does Dumb Things

September 23, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Those Windows Central experts have delivered a Dusie of a write up. “Microsoft Says OpenAI’s ChatGPT Isn’t Better than Copilot; You Just Aren’t Using It Right, But Copilot Academy Is Here to Help” explains:

Avid AI users often boast about ChatGPT’s advanced user experience and capabilities compared to Microsoft’s Copilot AI offering, although both chatbots are based on OpenAI’s technology. Earlier this year, a report disclosed that the top complaint about Copilot AI at Microsoft is that “it doesn’t seem to work as well as ChatGPT.”

I think I understand. Microsoft uses OpenAI, other smart software, and home brew code to deliver Copilot in apps, the browser, and Azure services. However, users have reported that Copilot doesn’t work as well as ChatGPT. That’s interesting. Hallucination-capable software, after processing by the Microsoft engineering legions, is allegedly inferior to the original ChatGPT.

Enthusiastic young car owners replace individual parts. But the old car remains an old, rusty vehicle. Thanks, MSFT Copilot. Good enough. No, I don’t want to attend a class to learn how to use you.

Who is responsible? The answer certainly surprised me. Here’s what the Windows Central wizards offer:

A Microsoft employee indicated that the quality of Copilot’s response depends on how you present your prompt or query. At the time, the tech giant leveraged curated videos to help users improve their prompt engineering skills. And now, Microsoft is scaling things a notch higher with Copilot Academy. As you might have guessed, Copilot Academy is a program designed to help businesses learn the best practices when interacting and leveraging the tool’s capabilities.

I think this means that the user is at fault, not Microsoft’s refactored version of OpenAI’s smart software. The fix is for the user to learn how to write prompts. Microsoft is not responsible. But OpenAI’s implementation of ChatGPT is perceived as better. Furthermore, training to use ChatGPT is left to third parties. I hope I am close to the pin on this summary. OpenAI just puts Strawberries in front of hungry users and lets them gobble up ChatGPT output. Microsoft fixes up ChatGPT, and users are allegedly not happy. Therefore, Microsoft puts the burden on the user to learn how to interact with the Microsoft version of ChatGPT.

I thought smart software was intended to make work easier and more efficient. Why do I have to go to school to learn Copilot when I can just pound text or a chunk of data into ChatGPT, click a button, and get an output? Not even a Palantir boot camp will lure me to the service. Sorry, pal.

My hypothesis is that Microsoft is a couple of steps away from creating something designed for regular users. In its effort to “improve” ChatGPT, the experience of using Copilot makes the user’s life more miserable. I think Microsoft’s own engineering practices act like a stuck brake on an old Lada. The vehicle has problems, so installing a new master cylinder does not improve the automobile.

Crazy thinking: That’s what the write up suggests to me.

Stephen E Arnold, September 23, 2024

Hey, Alexa, Why Does Amazon AI Flail?

September 5, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Amazon has its work cut out for itself. The company has those pesky third-party vendors shipping “interesting” products to customers and then ignoring complaints. Amazon is on the radar of some legal eagles in the EU and the US. Now the company has found itself in an unusual situation: Its super duper smart software does not work. The fix, if the information in “Gen AI Alexa to Use Anthropic Tech After It Struggled for Words with Amazon’s” is correct, is to use Anthropic AI technology. Hey, why not? Amazon allegedly invested $5 billion in the company. Maybe that implementation of Google technology will do the trick?

The mother is happy with Alexa’s answers. The weird sounds emitted from the confused device surprise her daughter. Thanks, MSFT Copilot. Good enough.

The write up reports:

Amazon demoed a generative AI version of Alexa in September 2023 and touted it as being more advanced, conversational, and capable, including the ability to do multiple smart home tasks with simpler commands. Gen AI Alexa is expected to come with a subscription fee, as Alexa has reportedly lost Amazon tens of billions of dollars throughout the years. Earlier reports said the updated voice assistant would arrive in June, but Amazon still hasn’t confirmed an official release date.

A year later, Amazon is punting and giving the cash furnace Alexa more brains courtesy of Anthropic. Will the AI wizards working on Amazon’s own AI have a chance to work in one of the Amazon warehouses?

Ars Technica says without a trace of irony:

The previously announced generative AI version of Amazon’s Alexa voice assistant “will be powered primarily by Anthropic’s Claude artificial intelligence models,” Reuters reported today. This comes after challenges with using proprietary models, according to the publication, which cited five anonymous people “with direct knowledge of the Alexa strategy.”

Amazon has a desire to convert the money-losing Alexa into a gold mine, or at least a modest one.

This report, if accurate, suggests some interesting sparkles on the Bezos bulldozer’s metal flake paint; to wit:

  1. The two pizza team approach to technology did not work either for Alexa (the money loser) or the home grown AI money spinner. What other Amazon technologies are falling short of the mark?
  2. How long will it take to get a money-generating Alexa working and into the hands of customers eager for a better Alexa experience and a monthly or annual subscription for the new Alexa? A year has been lost already, and Alexa users continue to ask for the weather and a timer for cooking broccoli.
  3. What happens if the product, its integration with smart TV, and the Ring doorbell are like a Pet Rock? The fad has come and gone, replaced by smart watches and mobile phones? The answer: Collectibles!

Why am I questioning Amazon’s technology competency? The recent tie up between Microsoft and Palantir Technologies makes clear that Amazon’s cloud services don’t have the horsepower to pull government sales. When these pieces are shifted around, the resulting puzzle says to me, “Amazon is flailing.” Consider this: AI was beyond the reach of a big money outfit like Amazon. There’s a message in that factoid.

Stephen E Arnold, September 5, 2024

The Seattle Syndrome: Definitely Debilitating

August 30, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I think the film “Sleepless in Seattle” included dialog like this:

“What do they call it when everything intersects?”
“The Bermuda Triangle.”

Seattle has Boeing. The company is in the news not just for doors falling off its aircraft. The outfit has stranded two people in earth orbit and has to let Elon Musk bring them back to earth. And Seattle has Amazon, an outfit that stands behind the products it sells. And I have to include Intel Labs, not too far from the University of Washington, which is famous in its own right for many things.

Two job seekers discuss future opportunities in some of Seattle and environ’s most well-known enterprises. The image of the city seems a bit dark. Thanks, MSFT Copilot. Are you having some dark thoughts about the area, its management talent pool, and its commitment to ethical business activity? That’s a lot of burning cars, but whatever.

Is Seattle a Bermuda Triangle for large companies?

This question invites another; specifically, “Is Microsoft entering Seattle’s Bermuda Triangle?”

The giant outfit has entered a deal with the interesting specialized software and consulting company Palantir Technologies Inc. This firm has a history of ups and downs since its founding 21 years ago. Microsoft has committed to smart software from OpenAI and other outfits. Artificial intelligence will be “in” everything from the Azure Cloud to Windows. Despite concerns about privacy, Microsoft wants each Windows user’s machine to keep screenshots of what the user “does” on that computer.

Microsoft seems to be navigating the Seattle Bermuda Triangle quite nicely. No hints of a flash disaster like the sinking of the sailing yacht Bayesian. Who could have predicted that? (That’s a reminder that fancy math does not deliver 1.000000 outputs on a consistent basis.)

Back to Seattle. I don’t think failure or extreme stress is due to the water. The weather, maybe? I don’t think it is the city government. It is probably not the multi-faceted start up community nor the distinctive vocal tones of its most high profile podcasters.

Why is Seattle emerging as a Bermuda Triangle for certain firms? What forces are intersecting? My observations are:

  1. Seattle’s business climate is a precursor of broader management issues. I think it is like the pigeons that Greeks examined for clues about their future.
  2. The individuals who work at Boeing-type outfits go along with business processes modified incrementally to ignore issues. The mental orientation of those employed is either malleable or indifferent to downstream issues. For example, a Windows update killed printing or some other function. The response strikes me as “meh.”
  3. The management philosophy disconnects from users and focuses on delivering financial results. Those big houses come at a cost. The payoff is personal. The cultural impacts are not on the radar. Hey, those quantum Horse Ridge things make good PR. What about the new desktop processors? Just great.

Net net: I think Seattle is a city playing an important role in defining how businesses operate in 2024 and beyond. I wish I were kidding. But I am bedeviled by reminders of a spacecraft which issues one-way tickets, software glitches, and products which seem to vary from the online images and reviews. (Maybe it is the water? Bermuda Triangle water?)

Stephen E Arnold, August 30, 2024

Google Leadership Versus Valued Googlers

August 23, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The summer in rural Kentucky lingers on. About 2,300 miles away from the Sundar & Prabhakar Comedy Show’s nerve center, the Alphabet Google YouTube DeepMind entity is also experiencing “cyclonic heating from chaotic employee motion.” What’s this mean? Unsteady waters? Heat stroke? Confusion? Hallucinations? My goodness.

The Google leadership faces another round of employee pushback. I read “Workers at Google DeepMind Push Company to Drop Military Contracts.”

How could the Google smart software fail to predict this pattern? My view is that smart software has some limitations when it comes to managing AI wizards. Furthermore, Google senior managers have not been able to extract full knowledge value from the tools at their disposal to deal with complexity. Time Magazine reports:

Nearly 200 workers inside Google DeepMind, the company’s AI division, signed a letter calling on the tech giant to drop its contracts with military organizations earlier this year, according to a copy of the document reviewed by TIME and five people with knowledge of the matter. The letter circulated amid growing concerns inside the AI lab that its technology is being sold to militaries engaged in warfare, in what the workers say is a violation of Google’s own AI rules.

Why are AI Googlers grousing about military work? My personal view is that the recent hagiography of Palantir’s Alex Karp and the tie up between Microsoft and Palantir for Impact Level 5 services means that the US government is gearing up to spend some big bucks for warfighting technology. Google wants — really needs — this revenue. Penalties for its frisky behavior, which Judge Mehta describes as “monopolistic,” could put a hitch in the git along of Google ad revenue. Therefore, Google’s smart software can meet the hunger militaries have for intelligent software to perform a wide variety of functions. As the Russian special operation makes clear, “meat based” warfare is somewhat inefficient. Ukrainian garage-built drones with some AI bolted on perform better than a wave of 18-year-olds with rifles and a handful of bullets. The example which sticks in my mind is a Ukrainian drone spotting a Russian soldier in the field partially obscured by bushes. The individual is attending to nature’s call. The drone spots the “shape” and explodes near the Russian infantryman.

A former consultant faces an interpersonal Waterloo. How did that work out for Napoleon? Thanks, MSFT Copilot. Are you guys working on the IPv6 issue? Busy weekend ahead?

Those who study warfare probably have their own ah-ha moment.

The Time Magazine write up adds:

Those principles state the company [Google/DeepMind] will not pursue applications of AI that are likely to cause “overall harm,” contribute to weapons or other technologies whose “principal purpose or implementation” is to cause injury, or build technologies “whose purpose contravenes widely accepted principles of international law and human rights.” The letter says its signatories are concerned with “ensuring that Google’s AI Principles are upheld,” and adds: “We believe [DeepMind’s] leadership shares our concerns.”

I love it when wizards “believe” something.

Will the Sundar & Prabhakar brain trust do the believing or bank revenue from government agencies eager to gain access to advanced artificial intelligence services and systems? My view is that the “believers” underestimate the uncertainty arising from potential sanctions, fines, or corporate deconstruction the decision of Judge Mehta presents.

The article adds this bit of color about the Sundar & Prabhakar response time to Googlers’ concern about warfighting applications:

The [objecting employees’] letter calls on DeepMind’s leaders to investigate allegations that militaries and weapons manufacturers are Google Cloud users; terminate access to DeepMind technology for military users; and set up a new governance body responsible for preventing DeepMind technology from being used by military clients in the future. Three months on from the letter’s circulation, Google has done none of those things, according to four people with knowledge of the matter. “We have received no meaningful response from leadership,” one said, “and we are growing increasingly frustrated.”

“No meaningful response” suggests that the Alphabet Google YouTube DeepMind rhetoric is not satisfactory.

The write up concludes with this paragraph:

At a DeepMind town hall event in June, executives were asked to respond to the letter, according to three people with knowledge of the matter. DeepMind’s chief operating officer Lila Ibrahim answered the question. She told employees that DeepMind would not design or deploy any AI applications for weaponry or mass surveillance, and that Google Cloud customers were legally bound by the company’s terms of service and acceptable use policy, according to a set of notes taken during the meeting that were reviewed by TIME. Ibrahim added that she was proud of Google’s track record of advancing safe and responsible AI, and that it was the reason she chose to join, and stay at, the company.

With Microsoft and Palantir, among others, poised to capture some end-of-fiscal-year money from certain US government budgets, the comedy act’s headquarters’ planners want a piece of the action. How will the Sundar & Prabhakar Comedy Act handle the situation? Why procrastinate? Perhaps the comedy act hopes the issue will just go away. The complaining employees have short attention spans, rely on TikTok-type services for information, and can be terminated like other Googlers who grouse, picket, boycott the Foosball table, or quiet quit while working on a personal start up.

The approach worked reasonably well before Judge Mehta labeled Google a monopoly operation. It worked when ad dollars flowed like latte at Philz Coffee. But today is different, and the unsettled personnel are not a joke and add to the uncertainty some have about the Google we know and love.

Stephen E Arnold, August 23, 2024

Thomson Reuters: A Trust Report about Trust from an Outfit with Trust Principles

June 21, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Thomson Reuters is into trust. The company has a Web page called “Trust Principles.” Here’s a snippet:

The Trust Principles were created in 1941, in the midst of World War II, in agreement with The Newspaper Proprietors Association Limited and The Press Association Limited (being the Reuters shareholders at that time). The Trust Principles imposed obligations on Reuters and its employees to act at all times with integrity, independence, and freedom from bias. Reuters Directors and shareholders were determined to protect and preserve the Trust Principles when Reuters became a publicly traded company on the London Stock Exchange and Nasdaq. A unique structure was put in place to achieve this. A new company was formed and given the name ‘Reuters Founders Share Company Limited’, its purpose being to hold a ‘Founders Share’ in Reuters.

Trust nestles in some legalese and a bit of business history. The only reason I mention this anchoring in trust is that Thomson Reuters reported quarterly revenue of $1.88 billion in May 2024, up from $1.74 billion in May 2023. The financial crowd had expected $1.85 billion in the quarter, and Thomson Reuters beat that. Surplus funds make it possible to fund many important tasks; for example, a study of trust.

The ouroboros, according to some big thinkers, symbolizes the entity’s journey and the unity of all things; for example, defining trust, studying trust, and writing about trust as embodied in the symbol.

My conclusion is that trust as a marketing and business principle seems to be good for business. Therefore, I trust, and I am confident in, the information in “Global Audiences Suspicious of AI-Powered Newsrooms, Report Finds.” The subject of the trusted news story is a report from the Reuters Institute for the Study of Journalism. The Thomson Reuters reporter presents in a trusted way this statement:

According to the survey, 52% of U.S. respondents and 63% of UK respondents said they would be uncomfortable with news produced mostly with AI. The report surveyed 2,000 people in each country, noting that respondents were more comfortable with behind-the-scenes uses of AI to make journalists’ work more efficient.

To make the point a person working for the trusted outfit’s trusted report says in what strikes me as a trustworthy way:

“It was surprising to see the level of suspicion,” said Nic Newman, senior research associate at the Reuters Institute and lead author of the Digital News Report. “People broadly had fears about what might happen to content reliability and trust.”

In case you have lost the thread, let me summarize. The trusted outfit Thomson Reuters funded a study about trust. The research was conducted by the trusted outfit’s own Reuters Institute for the Study of Journalism. The conclusion of the report, as presented by the trusted outfit, is that people want news they can trust. I think I have covered the post card with enough trust stickers.

I know I can trust the information. Here’s a factoid from the “real” news report:

Vitus “V” Spehar, a TikTok creator with 3.1 million followers, was one news personality cited by some of the survey respondents. Spehar has become known for their unique style of delivering the top headlines of the day while laying on the floor under their desk, which they previously told Reuters is intended to offer a more gentle perspective on current events and contrast with a traditional news anchor who sits at a desk.

How can one not trust a report that includes a need met by a TikTok creator? Would a Thomson Reuters’ professional write a news story from under his or her desk or cube or home office kitchen table?

I think self-funded research finds that the funding entity’s approach to trust is exactly what those in search of “real” news need. Wikipedia includes some interesting information about Thomson Reuters in its discussion of the company in the section titled “Involvement in Surveillance.” Wikipedia alleges that Thomson Reuters licenses data to Palantir Technologies, an assertion which, if accurate, I find orthogonal to my interpretation of the word “trust.” But Wikipedia is not Thomson Reuters.

I will not ask questions about the methodology of the study. I trust the Thomson Reuters professionals. I will not ask questions about the link between revenue and digital information. I have the trust principles to assuage any doubt. I will not comment on the wonderful ouroboros-like quality of an enterprise embodying trust, funding a study of trust, and converting those data into a news story about itself. The symmetry is delicious and, of course, trustworthy. For information about Thomson Reuters’ trusted use of artificial intelligence, see this Web page.

Stephen E Arnold, June 21, 2024
