The Wiz: Google Gears Up for Enterprise Security
July 15, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Anyone remember this verse from “Ease on Down the Road,” from The Wiz, the hit musical from the 1970s? Here’s the passage:
‘Cause there may be times
When you think you lost your mind
And the steps you’re takin’
Leave you three, four steps behind
But the road you’re walking
Might be long sometimes
You just keep on trukin’
And you’ll just be fine, yeah
Why am I playing catchy tunes in my head on Monday, July 15, 2024? I just read “Google Near $23 Billion Deal for Cybersecurity Startup Wiz.” For years, I have been relating Israeli-developed cyber security technology to law enforcement and intelligence professionals. I try in each lecture to profile a firm, typically based in Tel Aviv or environs and staffed with former military professionals. I try to relate the functionality of the system to the particular case or matter I am discussing in my lecture.
The happy band is easin’ down the road. The Googlers have something new to sell. Does it work? Sure, get down. Boogie. Thanks, MSFT Copilot. Has your security created an opportunity for Google marketers?
That stopped in October 2023. A former Israeli intelligence officer told me, “The massacre was Israel’s 9/11. There was an intelligence failure.” I backed away from the Israeli security, cyber crime, and intelware systems. They did not work. If we flash forward to July 15, 2024, the marketing is back. The well-known NSO Group is hawking its technology at high-profile LE and intel conferences. Enhancements to existing systems arrive in the form of email newsletters at the pace of the pre-October 2023 missives.
However, I am maintaining a neutral and skeptical stance. There is the October 2023 event, the subsequent war, and the increasing agitation about tactics, weapons systems in use, and efficacy of digital safeguards.
Google does not share my concerns. That’s why the company is Google, and I am a dinobaby tracking cyber security from my small office in rural Kentucky. Google makes news. I make nothing as a marginalized dinobaby.
The Wiz tells the story of a young girl who wants to get her dog back after a storm carries the creature away. The young girl offs the evil witch and seeks the help of a comedian from Peoria, Illinois, to get back to her real life. The Wiz has a happy ending, and the quoted verse makes the point that the young girl, like the Google, has to keep taking steps even though the Information Highway may be long.
That’s what Google is doing. The company is buying security (which I want to point out is cut from the same cloth as the systems which failed to notice the October 2023 run-up). Google has Mandiant. Google offers a free Dark Web scanning service. Now Google has Wiz.
What’s Wiz do? Like other Israeli security companies, it does the sort of thing intended to prevent events like October 2023’s attack. And as with other aggressively marketed Israeli cyber technology companies’ capabilities, one has to ask, “Will Wiz work in an emerging and fluid threat environment?” This is an important question because of the failure of the in situ Israeli cyber security systems, disabled watch stations, and general blindness to social media signals about the October 2023 incident.
If one zips through the Wiz’s Web site, one can craft a description of what the firm purports to do; for example:
Wiz is a cloud security firm embodying capabilities associated with Israeli military technology. The idea is to create a one-stop shop to secure cloud assets: identify risks, then mitigate them. The system incorporates automated functions and graphic outputs. The company asserts that it can secure models used for smart software and enforce security policies automatically.
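To make that description concrete, here is a toy sketch in Python (emphatically not Wiz’s actual engine) of the “toxic combination” idea cloud security posture tools trade on: flag the assets where Internet exposure and critical vulnerabilities overlap. The asset inventory below is invented for illustration.

```python
# Toy illustration only: a real cloud security product correlates far more signals.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    internet_exposed: bool   # reachable from the public Internet?
    critical_vulns: int      # count of known critical vulnerabilities

def toxic_combinations(assets: list[Asset]) -> list[str]:
    """Flag assets where exposure and critical vulnerabilities overlap."""
    return [a.name for a in assets if a.internet_exposed and a.critical_vulns > 0]

# Invented inventory for the example.
inventory = [
    Asset("vm-web-01", internet_exposed=True, critical_vulns=2),
    Asset("db-internal", internet_exposed=False, critical_vulns=5),
    Asset("vm-batch-07", internet_exposed=True, critical_vulns=0),
]

print(toxic_combinations(inventory))  # ['vm-web-01']
```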
Does it work? I will leave that up to you and the bad actors who find novel methods to work around big, modern, automated security systems. Did you know that human error and old-fashioned methods like emails with links that deliver stealers work?
Can Google make the Mandiant-Wiz combination work magic? Is Googzilla a modern-day Wiz able to transport the little girl back to real life?
Google has paid a rumored $20 billion plus to deliver this reality.
I maintain my neutral and skeptical stance. I keep thinking about October 2023, the aftermath of a massive security failure, and the over-the-top presentations by Israeli cyber security vendors. If the stuff worked, why did October 2023 happen? As with most modern cyber security solutions, marketing to the people who desperately want a silver bullet or a digital stake to pound through the heart of cyber risk produces sales.
I am not sure that sales, marketing, and assertions about automation work in what is an inherently insecure, fast-changing, and globally vulnerable environment.
But Google will keep on trukin’ because Microsoft has created a heck of a marketing opportunity for the Google.
Stephen E Arnold, July 15, 2024
Googzilla, Man Up, Please
July 8, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read a couple of “real” news stories about Google and its green earth / save the whales policies in the age of smart software. The first write up is okay and not too exciting for a critical thinker wearing dinoskin. “The Morning After: Google’s Greenhouse Gas Emissions Climbed Nearly 50 Percent in Five Years Due to AI” reads like a PR-massaged write up. Consider this passage:
According to the report, Google said it expects its total greenhouse gas emissions to rise “before dropping toward our absolute emissions reduction target,” without explaining what would cause this drop.
Yep, no explanation. A PR win.
The BBC published “AI Drives 48% Increase in Google Emissions.” That write up states:
Google says about two thirds of its energy is derived from carbon-free sources.
Thanks, MSFT Copilot. Good enough.
Neither of these two articles nor the others I scanned focused on one key fact about Google’s saying green and driving snail darters to their fate. Google’s leadership team did not plan its energy strategy. In fact, my hunch is that no one paid any attention to how much energy Google’s AI activities were sucking down. Once the company shifted into Code Red or whatever consulting term craziness it used to label its frenetic response to the Microsoft OpenAI tie up, absolutely zero attention was directed toward the few big-eyed tunas which might be taking their last dip.
Several observations:
- PR speak and green talk are like many assurances emitted by the Google. Talk is not action.
- The management processes at Google are disconnected from what happens when the wonky Code Red light flashes and the siren howls at midnight. Shouldn’t management be connected when the Tapanuli orangutan could soon be facing the Big Ape in the sky?
- The AI energy consumption is not a result of AI. The energy consumption is a result of Googlers who do what’s necessary to respond to smart software. Step on the gas. Yeah, go fast. Endanger the Amur leopard.
Net net: Hey, Google, stand up and say, “My leadership team is responsible for the energy we consume.” Don’t blame your up-in-flames “green” initiative on software you invented. How about less PR and more focus on engineering more efficient data center and cloud operations? I know PR talk is easier, but buckle up, buttercup.
Stephen E Arnold, July 8, 2024
Some Tension in the Datasphere about Artificial Intelligence
June 28, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
I generally try to avoid profanity in this blog. I am mindful of Google’s stopwords. I know there are filters running to protect those younger than I from frisky and inappropriate language. Therefore, I will cite the two articles and then convert the profanity to a suitably sanitized form.
The first write up is “I Will F…ing Piledrive You If You Mention AI Again”. Sorry, like many other high-technology professionals I prevaricated and dissembled. I have edited the F word to be less superficially offensive. (One simply cannot trust high-technology types, can one? I am not Thomson Reuters obviously.) The premise of this write up is that smart software is over-hyped. Here’s a passage I found interesting:
Unless you are one of a tiny handful of businesses who know exactly what they’re going to use AI for, you do not need AI for anything – or rather, you do not need to do anything to reap the benefits. Artificial intelligence, as it exists and is useful now, is probably already baked into your businesses software supply chain. Your managed security provider is probably using some algorithms baked up in a lab software to detect anomalous traffic, and here’s a secret, they didn’t do much AI work either, they bought software from the tiny sector of the market that actually does need to employ data scientists.
I will leave it to you to ponder the wisdom of these words. I, for instance, do not know exactly what I am going to do until I do something, fiddle with it, and either change it up or trash it. You and most AI enthusiasts are probably different. That’s good. I envy your certitude. The author of the first essay is not gentle; he wants to piledrive you if you talk about smart software. I do not advocate violence under any circumstances. I can tolerate baloney about smart software. The piledriver person has hate in his heart. You have been warned.
The second write up is “ChatGPT Is Bullsh*t,” and it is an article published in SpringerLink, not a personal blog. Yep, bullsh*t as a term in an academic paper. Keep in mind, please, that Stanford University’s president and some Harvard wizards engaged in the bullsh*t business as part of allegedly making up data. Who needs AI when humans are perfectly capable of hallucinating, but I digress?
I noted this passage in the academic write up:
So perhaps we should, strictly, say not that ChatGPT is bullshit but that it outputs bullshit in a way that goes beyond being simply a vector of bullshit: it does not and cannot care about the truth of its output, and the person using it does so not to convey truth or falsehood but rather to convince the hearer that the text was written by an interested and attentive agent.
Please, read the 10-page research article about bullsh*t, soft bullsh*t, and hard bullsh*t. Form your own opinion.
I have now set the stage for some observations (probably unwanted and deeply disturbing to some in the smart software game).
- Artificial intelligence is a new big thing, and the hyperbole, misdirection, and outright lying (like my saying I would use forbidden language in this essay) are irrelevant. The object of the new big thing is to make money, get power, maybe become an influencer on TikTok.
- The technology seems to have flowered in January 2023, when Microsoft said, “We love OpenAI. It’s a better Clippy.” The problem is that it is now June 2024, and the advances have been slow and steady. This means that after a half century of research, the AI revolution is working hard to keep the hypemobile in gear. PR is quick; smart software improvement is less speedy.
- The ripples the new big thing has sent across the datasphere attenuate the farther one is from the January 2023 marketing announcement. AI fatigue is now a thing. I think the hostility is likely to increase because real people are going to lose their jobs. Idle hands are the devil’s playthings. Excitement looms.
Net net: I think the profanity reveals the deep disgust some pundits and experts have for smart software, the companies pushing silver bullets into an old and rusty firearm, and an instinctual fear of the economic disruption the new big thing will cause. Exciting stuff. Oh, I am not stating a falsehood.
Stephen E Arnold, June 28, 2024
Can the Bezos Bulldozer Crush Temu, Shein, Regulators, and AI?
June 27, 2024
This essay is the work of a dumb dinobaby. No smart software required.
The question, to be fair, should be, “Can the Bezos-less bulldozer crush Temu, Shein, Regulators, Subscriptions to Alexa, and AI?” The article, which appeared in the “real” news online service Venture Beat, presents an argument suggesting that the answer is, “Yes! Absolutely.”
Thanks, MSFT Copilot. Good bulldozer.
The write up “AWS AI Takeover: 5 Cloud-Winning Plays They’re [sic] Using to Dominate the Market” depends upon an Amazon Big Dog named Matt Wood, VP of AI products at AWS. The article strikes me as something drafted by a small group at Amazon and then polished to PR perfection. The reasons the bulldozer will crush Google, Microsoft, Hewlett Packard’s on-premises play, and the keep-on-searching IBM Watson, among others, are:
- Covering the numbers or logos of the AI companies in the “game”; for example, Anthropic, AI21 Labs, and other whale players
- Hitting up its partners, customers, and friends to get support for the Amazon AI wonderfulness
- Engineering AI to be itty bitty pieces one can use to build a giant AI solution capable of dominating D&B industry sectors like banking, energy, commodities, and any other multi-billion sector one cares to name
- Skipping the Google folly of dealing with consumers. Amazon wants the really big contracts with really big companies, government agencies, and non-governmental organizations.
- Amazon is just better at security. Those leaky S3 buckets are not Amazon’s problem. The customers failed to use Amazon’s stellar security tools. (A short sketch of one such tool follows this list.)
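For the curious, here is a minimal sketch (mine, not Amazon marketing material) of what using one of those security tools looks like: enabling S3 Block Public Access on a bucket with boto3. The bucket name is hypothetical, and the snippet assumes AWS credentials are already configured.

```python
import boto3

s3 = boto3.client("s3")

def lock_down_bucket(bucket_name: str) -> None:
    """Enable all four S3 Block Public Access settings on one bucket."""
    s3.put_public_access_block(
        Bucket=bucket_name,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,        # reject new public ACLs
            "IgnorePublicAcls": True,       # ignore any existing public ACLs
            "BlockPublicPolicy": True,      # reject public bucket policies
            "RestrictPublicBuckets": True,  # restrict public cross-account access
        },
    )

lock_down_bucket("example-leaky-bucket")  # hypothetical bucket name
```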
Did these five points convince you?
If you did not embrace the spirit of the bulldozer, the Venture Beat article states:
Make no mistake, fellow nerds. AWS is playing a long game here. They’re not interested in winning the next AI benchmark or topping the leaderboard in the latest Kaggle competition. They’re building the platform that will power the AI applications of tomorrow, and they plan to power all of them. AWS isn’t just building the infrastructure, they’re becoming the operating system for AI itself.
Convinced yet? Well, okay. I am not on the bulldozer yet. I do hear its engine roaring, and I smell the no-longer-green emissions from the bulldozer’s data centers. Also, I am not sure the Google, IBM, and Microsoft are ready to roll over and let the bulldozer crush them into the former rain forest’s red soil. I recall researching SageMaker, which had some AI-type jargon applied to that “smart” service. Ah, you don’t know SageMaker? Yeah. Too bad.
The rather positive leaning Amazon write up points out that, as nifty as those five points about Amazon’s supremacy in the AI jungle are, the company has vision. Okay, it is not the customer first idea from 1998 or so. But it is interesting. Amazon will have infrastructure. Amazon will provide model access. (I want to ask, “For how long?” but I won’t.) And Amazon will have app development.
The article includes a table providing detail about these three legs of the stool in the bulldozer’s cabin. There is also a rundown of Amazon’s recent media and prospect-directed announcements. Too bad the article does not include hyperlinks to these documents. Oh, well.
And after about 3,300 words about Amazon, the article includes about 260 words about Microsoft and Google. That’s a good balance. Too bad, IBM. You did not make the cut. And HP? Nope. You did not get an “Also participated” certificate.
Net net: Quite a document. And no mention of SageMaker. The Bezos-less bulldozer just smashes forward. Success is in crushing. Keep at it. And that “they” in the Venture Beat article title: Shouldn’t “they” be an “it”?
Stephen E Arnold, June 27, 2024
Nerd Flame War: AI AI AI
June 27, 2024
The Internet is built on trolls and their boorish behavior. The worst of the trolls are self-confessed “experts” on anything. Every online community has its loitering trolls, and tech enthusiasts aren’t any different. In the old days of Internet lore, online verbal battles were dubbed “flame wars,” and XDA-Developers reports that OpenAI started one: “AI Has Thrown Stack Overflow Into Civil War.”
A huge argument in AI development is online content being harvested to train large language models (LLMs). Writers and artists were rightly upset when their work was used to train image and writing algorithms. OpenAI recently partnered with Stack Overflow to collect data, and the users aren’t happy. Stack Overflow is a renowned tech support community for sysadmins, developers, and programmers. Stack Overflow even brags that it is the world’s largest developer community.
Stack Overflow users are angry because they weren’t asked permission to use their content for AI training models, and they don’t like the platform’s response to their protests. Users are deleting their posts or altering them to display incorrect information. In response, Stack Overflow is restoring deleted and altered posts, temporarily suspending users who delete content, and hiding behind the terms of service. The entire situation is explained here:
“Delving into discussion online about OpenAI and Stack Overflow’s partnership, there’s plenty to unpack. The level of hostility towards Stack Overflow varies, with some users seeing their answers as being posted online without conditions – effectively free for all to use, and Stack Overflow granting OpenAI access to that data as no great betrayal. These users might argue that they’ve posted their answers for the betterment of everyone’s knowledge, and don’t place any conditions on its use, similar to a highly permissive open source license.
Other users are irked that Stack Overflow is providing access to an open-resource to a company using it to build closed-source products, which won’t necessarily better all users (and may even replace the site they were originally posted on.) Despite OpenAI’s stated ambition, there is no guarantee that Stack Overflow will remain freely accessible in perpetuity, or that access to any AIs trained on this data will be free to the users who contributed to it.”
Reddit and other online communities are facing the same problems. LLMs like ChatGPT are trained on content harvested from Stack Overflow and Reddit. OpenAI’s ChatGPT is regarded as overblown because it continues to fail multiple tests. We know, however, that generative AI will improve with time. We also know that people will use the easiest solution, and generative AI chatbots will become those tools. It’s easier to verbally ask or write a question than to search.
Whitney Grace, June 27, 2024
Can Anthropic Break Into the AI Black Box?
June 20, 2024
The inner workings of large language models have famously been a mystery, even to their creators. That is a problem for those who would like transparency around pivotal AI systems. Now, however, Anthropic may have found the solution. Time reports, “No One Truly Knows How AI Systems Work. A New Discovery Could Change That.” If the method pans out, this will be perfect for congressional hearings and antitrust testimony. Reporter Billy Perrigo writes:
“Researchers developed a technique for essentially scanning the ‘brain’ of an AI model, allowing them to identify collections of neurons—called ‘features’—corresponding to different concepts. And for the first time, they successfully used this technique on a frontier large language model, Anthropic’s Claude Sonnet, the lab’s second-most powerful system. In one example, Anthropic researchers discovered a feature inside Claude representing the concept of ‘unsafe code.’ By stimulating those neurons, they could get Claude to generate code containing a bug that could be exploited to create a security vulnerability. But by suppressing the neurons, the researchers found, Claude would generate harmless code. The findings could have big implications for the safety of both present and future AI systems. The researchers found millions of features inside Claude, including some representing bias, fraudulent activity, toxic speech, and manipulative behavior. And they discovered that by suppressing each of these collections of neurons, they could alter the model’s behavior. As well as helping to address current risks, the technique could also help with more speculative ones.”
The researchers hope their method will replace “red-teaming,” where developers chat with AI systems in order to uncover toxic or dangerous traits. On the as-yet theoretical chance an AI gains the capacity to deceive its creators, the more direct method would be preferred.
A happy side effect of the method could be better security. Anthropic states that being able to directly manipulate AI features may allow developers to head off AI jailbreaks. The research is still in the early stages, but Anthropic is singing an optimistic tune.
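For readers wondering what “stimulating” or “suppressing” a feature might look like mechanically, here is a minimal, hypothetical PyTorch sketch: not Anthropic’s code. It assumes you already have a feature_direction vector (found, say, by a sparse autoencoder) and that the hooked layer returns a plain activation tensor.

```python
import torch

def make_steering_hook(feature_direction: torch.Tensor, strength: float):
    """Build a forward hook that nudges a layer's activations along a
    feature direction (strength > 0 stimulates, strength < 0 suppresses)."""
    def hook(module, inputs, output):
        # Assumes output has shape (batch, seq_len, hidden_dim) and
        # feature_direction has shape (hidden_dim,); the addition broadcasts
        # over every token position.
        return output + strength * feature_direction
    return hook

# Hypothetical usage on some transformer block `layer`:
# handle = layer.register_forward_hook(make_steering_hook(direction, 4.0))
# ...generate text and observe the behavior change...
# handle.remove()  # restore normal behavior
```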
Cynthia Murrell, June 20, 2024
Great Moments in Smart Software: IBM Watson Gets to Find Its Future Elsewhere Again
June 19, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
The smart software game is a tough one. Whip up some compute, download the models, and go go go. Unfortunately, artificial intelligence is artificial and often not actually intelligent. I read an interesting article in Time Magazine (who knew it was still in business?). The story has a clickable title: “McDonald’s Ends Its Test Run of AI Drive-Throughs With IBM.” The juicy word IBM, the big brand McDonald’s, and the pickle on top: AI.
A college student tells the smart software system at a local restaurant that his order was misinterpreted. Thanks, MSFT Copilot. How’s your “recall” today? What about system security? Oh, that’s too bad.
The write up reports with the glee of a kid getting a Happy Meal:
McDonald’s automated order taker with IBM received scores of complaints in recent years, for example — with many taking to social media to document the chatbot misunderstanding their orders.
Consequently, the IBM fast food service has been terminated.
Time’s write up included a statement from Big Blue too:
In an initial statement, IBM said that “this technology is proven to have some of the most comprehensive capabilities in the industry, fast and accurate in some of the most demanding conditions,” but did not immediately respond to a request for further comment about specifics of potential challenges.
IBM suggested its technology could help fight cancer in Houston a few years ago. How did that work out? That smart software worker had an opportunity to find its future elsewhere. The career trajectory, at first glance, runs from medicine to grilling burgers, an interesting employment path that seems to be heading down to Sleepy Town.
What’s the future of the IBM smart software test? The write up points out:
Both IBM and McDonald’s maintained that, while their AI drive-throughs partnership was ending, the two would continue their relationship on other projects. McDonald’s said that it still plans to use many of IBM’s products across its global system.
But Ronald McDonald has to be practical. The article adds:
In December, McDonald’s launched a multi-year partnership with Google Cloud. In addition to moving restaurant computations from servers into the cloud, the partnership is also set to apply generative AI “across a number of key business priorities” in restaurants around the world.
Google’s smart software has been snagged in some food controversies too. The firm’s smart system advised some Googlers to use glue to make the cheese topping stick better. Yum.
Several observations seem to be warranted:
- Practical and money-saving applications of IBM’s smart software do not have the snap, crackle, and pop of OpenAI’s PR coup with Microsoft in January 2023. Time is writing about IBM, but the case example is not one that makes me crave this particular application. Customers want a sandwich, not something they did not order.
- Examples of reliable smart software applications which require spontaneous reaction to people ordering food or asking basic questions are difficult to find. Very narrow applications of smart software do result in positive case examples; in some law enforcement software (what I call policeware), the automatic processes of some vendors’ solutions work well; for example, automatic report generation in the Shadowdragon Horizon system.
- Big companies spend money, catch attention, and then have to spend more money to remediate and clean up the negative publicity.
Net net: More small-scale testing and less publicity chasing seem to be two items to add to the menu. And, Watson, keep on trying. Google is.
Stephen E Arnold, June 19, 2024
Palantir: Fear Is Good. Fear Sells.
June 18, 2024
President Eisenhower may not have foreseen AI when he famously warned of the military-industrial complex, but certain software firms certainly fit the bill. One of the most successful, Palantir, is pursuing Madison Avenue-type marketing with a message of alarm. The company’s co-founder, Alex Karp, is quoted in the fear-mongering post at right-wing Blaze Media, “U.S. Prepares for War Amid Growing Tensions that China Could Invade Taiwan.”
After several paragraphs of panic over tensions between China and Taiwan, writer Collin Jones briefly admits “It is uncertain if and when the Chinese president will deploy an attack against the small country.” He quickly pivots to the scary AI arms race, intimating Palantir and company can save us as long as we let (fund) them. The post concludes:
“Palantir’s CEO and co-founder Alex Karp said: ‘The way to prevent a war with China is to ramp up not just Palantir, but defense tech startups that produce software-defining weapons systems that scare the living F out of our adversaries.’ Karp noted that the U.S. must stay ahead of its military opponents in the realm of AI. ‘Our adversaries have a long tradition of being not interested in the rule of law, not interested in fairness, not interested in human rights and on the battlefield. It really is going to be us or them. You do not want a world order where our adversaries try to define new norms. It would be very bad for the world, and it would be especially bad for America,’ Karp concluded.”
Wow. But do such scare tactics work? Of course they do. For instance, we learn from DefenseScoop, “Palantir Lands $480M Army Contract for Maven Artificial Intelligence Tech.” That article reports on not one but two Palantir deals: the titular Maven expansion and, we learn:
“The company was recently awarded another AI-related deal by the Army for the next phase of the service’s Tactical Intelligence Targeting Access Node (TITAN) ground station program, which aims to provide soldiers with next-generation data fusion and deep-sensing capabilities via artificial intelligence and other tools. That other transaction agreement was worth $178 million.”
Those are just two recent examples of Palantir’s lucrative government contracts, ones that have not, as of this writing, been added to this running tally. It seems the firm has found its winning strategy. Ramping up tensions between world powers is a small price to pay for significant corporate profits, apparently.
Cynthia Murrell, June 18, 2024
AI May Not Be Magic: The Salesforce Signal
June 10, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Salesforce has been a steady outfit. However, the company suffered a revenue miss, its first in about a quarter century. The news reports cited broad economic factors like “macro headwinds.” According to the firm’s chief people officer, the company has been experimenting with AI for “over a decade.” But the magic of AI was not able to ameliorate the company’s dip or add some chrome trim to its revenue guidance.
John Milton’s god character from Paradise Lost watches the antics of super-sophisticated artificial intelligence algorithms. This character quickly realizes that zeros and ones crafted by humans and enhanced by smart machines are definitely not an omniscient, omnipresent, and omnipotent character who knows everything before it happens, no matter what the PR firms or company spokespersons assert. Thanks, MSFT Copilot. Good enough.
Piecing together fragments of information, it appears that AI has added to the company’s administrative friction. Consider these administrative process examples from a Fortune interview, recycled for MSN.com:
- The company has deployed 50 AI tools.
- Salesforce has an AI governance council.
- There is an Office of Ethical and Humane Use, started in 2019.
- Salesforce uses surveys to supplement its “robust listening strategies.”
- There are phone calls and meetings.
Some specific uses of AI appear to address inherent design constraints in Salesforce software; for example, AI has:
saved employees 50,000 hours within one business quarter, and the bot answered nearly 370,000 employee questions, according to the company. Merging into Project Basecamp, the company’s project management platform, has resolved 88,000 worker requests, speeding up issue resolution from an average of 48 hours to just 30 minutes.
What’s the payoff to the bottom line? That information is scant. What we know is that Salesforce may not be benefiting from the additional AI investment, and AI’s bureaucratic processes impose friction on the company.
What’s this mean for those who predict that AI will change everything? I continue to think about the two ends of the spectrum: the go-fast-and-break-things crowd and the stop-AI contingent.
First, the type of AI that does high school essay writing is easy to understand. These systems work as long as the subject matter clumps into piles of factoids which limit the craziness of the algorithms’ outputs. The topic “How to make a taco” is nailed down. The topic “How to decrypt Telegram’s encryption system” is not. Big brains can explain why the taco question is relatively hallucination free but not why the Telegram question generates useless drivel. I have, therefore, concluded, “Limited, narrow domain questions are okay for AI.”
Second, the current systems are presented as super wonderful. An example is the steady flow of PR about Google DeepMind’s contributions to biological science. Yet Google’s search system generates baloney. I think the difference is that whacking away at proteins is a repetitive combinatorial problem. Calling the methods AI is similar to describing Daylight Chemical Information Systems as a manifestation of the Oracle at Delphi. It is hogwash. PR erases important differences in critical lines of research. Does Google DeepMind feel shame? Let’s ask IBM Watson. That will be helpful. PR has a role; it is not AI.
Third, the desire for a silver bullet is deep-seated in many Peter Principle managers. These “leaders” of “leadership teams” don’t know what to do. Managing becomes figuring out risks. AI has legs, so let’s give that pony a chance to win the cart race. But pony cart races are trivial. The real races require winning three competitions. Few horses pull off that trick. I watch in wonder the launch, retreat, PR explanation, and next launch of some AI outfits. The focus seems to be on getting $20 per month. Degrading the service. Asking for more money. Then repeat.
The lack of AI innovation is becoming obvious. From the starter’s gun cracking in time with Microsoft’s AI announcement in January 2023, how much progress has been made?
We have the Salesforce financial report. We have the management craziness at OpenAI. We have Microsoft investing in or partnering with a number of technology outfits, including one in Paris. We have Google just doddering and fumbling. We have lawsuits. We have craziness like Adobe’s “owning” any image created with its software. We have start ups which bandy about the term “AI” like a shuttlecock in a high school badminton league in India. We have so many LinkedIn AI experts, I marvel that no one pins these baloney artists to a piece of white bread. We have the Dutch police emphasizing home-grown AI which helped make sense of the ANOM phone stings when the procedures are part of most policeware systems. Statistics, yes. AI, no. Clustering, yes. AI, no. Metadata assignment, yes. AI, no. The ANOM operation ran from about 2017 to its shutdown four years later. AI? Nope.
What does the lack of financial payoff and revenue-generating AI solutions tell me? My answer to this question is:
- The costs of using and of letting prospects use an AI system are high. Due to the lack of a Triple Crown contender, no company has the horse or can afford the costs of getting the nag ready to race and keeping the animal from keeling over dead.
- The tangible results are tough to express. Despite the talk about reducing the costs of customer service, the payoff is not evident to me once one counts the cost of the AI system and the need to have humans ride herd on what the crazed cattle-like algorithms yield. The Salesforce experience is that AI cannot fix the Slack system or make it generate oodles of cost savings or revenues from new, happy customers.
- The AI systems, particularly the services promoted via Product Hunt, are impossible for me to differentiate. Some do images, but the functions are similar. Some AI systems do text things. Okay. But what’s new? Money is being spent to produce endless variations and me-too services. Fun for some. But boring and a waste of time to a dinobaby like me.
Net net: With economic problems growing in numerous sectors, those with money or a belief that garlic will kill Count Vampire, Baron of Revenue Loss, are in for a surprise. Sorry. There is no software equivalent to Milton’s eternal, all-knowing, omnipotent God. I won’t tell the PR people. That Salesforce signal is meaningful.
Stephen E Arnold, June 10, 2024
Selling AI with Scare Tactics
June 6, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Ah, another article with more assertions to make workers feel they must adopt the smart software that threatens their livelihoods. AI automation firm UiPath describes “3 Common Barriers to AI Adoption and How to Overcome Them.” Before marketing director Michael Robinson gets to those barriers, he tries to motivate readers who might be on the fence about AI. He writes:
“There’s a growing consensus about the need for businesses to embrace AI. McKinsey estimated that generative AI could add between $2.6 to $4.4 trillion in value annually, and Deloitte’s ‘State of AI in the Enterprise’ report found that 94% of surveyed executives ‘agree that AI will transform their industry over the next five years.’ The technology is here, it’s powerful, and innovators are finding new use cases for it every day. But despite its strategic importance, many companies are struggling to make progress on their AI agendas. Indeed, in that same report, Deloitte estimated that 74% of companies weren’t capturing sufficient value from their AI initiatives. Nevertheless, companies sitting on the sidelines can’t afford to wait any longer. As reported by Bain & Company, a ‘larger wedge’ is being driven ‘between those organizations that have a plan [for AI] and those that don’t—amplifying advantage and placing early adopters into stronger positions.’”
Oh, no! What can the laggards do? Fret not, the article outlines the biggest hurdles: lack of a roadmap, limited in-house expertise, and security or privacy concerns. Curious readers can see the post for details about each. As it happens, software like UiPath’s can help businesses clear every one. What a coincidence.
Cynthia Murrell, June 6, 2024