Quantum Supremacy: The PR Race Shames the Google

July 17, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

The quantum computing era exists in research labs and a handful of specialized locations. The qubits are small, but the cooling system and control mechanisms are quite large. An environmentalist learning about the power consumption and climate footprint of a quantum computer might die of heart failure. But most of the worriers are thinking about AI’s power demands. Quantum computing is not a big deal. Yet.

But the title of “quantum supremacy champion” is a big deal. Sure, the community of those energized by the concept may number only in the tens of thousands, but to that community the crown matters. Google announced a couple of years ago that it was the quantum supremacy champ. I just read “New Quantum Computer Smashes Quantum Supremacy Record by a Factor of 100 — And It Consumes 30,000 Times Less Power.” The main point of the write up in my opinion is:

A new quantum computer has broken a world record in “quantum supremacy,” topping the performance of benchmarking set by Google’s Sycamore machine by 100-fold.

Do I believe this? I am on the fence, but in quantum computing, “my super car is faster than your super car” means something to those in the game. What’s interesting to me is that the PR claim is not twice as fast as the Google’s quantum supremacy gizmo. Nor is the claim to be 10 times faster. The assertion is that a company called Quantinuum (the winner of the high-tech company naming contest with three letter “u”s, one “q,” and four syllables) outperformed the Googlers by a factor of 100.


Two successful high-tech executives argue fiercely about performance. Thanks, MSFT Copilot. Good enough, and I love the quirky spelling. Is this a new feature of your smart software?

Now does the speedy quantum computer work better than one’s iPhone or Steam console? The article reports:

But in the new study, Quantinuum scientists — in partnership with JPMorgan, Caltech and Argonne National Laboratory — achieved an XEB score of approximately 0.35. This means the H2 quantum computer can produce results without producing an error 35% of the time.

To put this in context, use this system to plot your drive from your home to Texarkana. You will make it there on one out of every three multi-day drives. Close enough for horseshoes or an MVP (minimum viable product). But it is progress of sorts.
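For those curious about what hides behind that 0.35, here is a toy sketch of the linear cross-entropy benchmarking (XEB) score. I am assuming the standard formula, F = 2^n · mean(P(x)) − 1, where P is the ideal output distribution of a random circuit; this is an illustration, not Quantinuum’s or Google’s actual benchmarking code.

import numpy as np

n_qubits = 10
dim = 2 ** n_qubits

# Stand-in for the ideal output distribution of a random quantum circuit.
rng = np.random.default_rng(7)
ideal = rng.exponential(size=dim)   # Porter-Thomas-like shape
ideal /= ideal.sum()

def linear_xeb(samples, ideal_probs, n):
    # F = 2^n times the mean ideal probability of the observed bitstrings, minus 1.
    return (2 ** n) * ideal_probs[samples].mean() - 1.0

perfect = rng.choice(dim, size=100_000, p=ideal)   # noiseless sampler
noise = rng.integers(0, dim, size=100_000)         # device outputting garbage

print(round(linear_xeb(perfect, ideal, n_qubits), 2))  # about 1.0
print(round(linear_xeb(noise, ideal, n_qubits), 2))    # about 0.0

A score of roughly 0.35 sits between those two poles: the device behaves as if about a third of its samples came from the ideal circuit and the rest from noise. Hence my Texarkana arithmetic.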

So what does the Google do? Its marketing team goes back to AI software and magically “DeepMind’s PEER Scales Language Models with Millions of Tiny Experts” appears in Venture Beat. Forget that quantum supremacy claim. The Google has “millions of tiny experts.” Millions. The PR piece reports:

DeepMind’s Parameter Efficient Expert Retrieval (PEER) architecture addresses the challenges of scaling MoE [mixture of experts, not to be confused with millions of experts, or MOE].

I know this PR story about the Google is not quantum computing related, but it illustrates the “my super car is faster than your super car” mentality.

What can one believe about Google or any other high-technology outfit talking about the performance of its system or software? I don’t believe too much, probably about 10 percent of what I read or hear.

But the constant need to be perceived as the smartest science quick recall team is now routine. Come on, geniuses, be more creative.

Stephen E Arnold, July 17, 2024

Google Ups the Ante: Skip the Quantum. Aim Higher!

July 16, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

After losing its quantum supremacy crown to an outfit with lots of “u”s in its name and making clear it deploys a million software bots to do AI things, the Google PR machine continues to grind away.


The glowing “G” on god’s/God’s chest is the clue that reveals Google’s identity. Does that sound correct? Thanks, MSFT Copilot. Close enough to the Google for me.

What’s a bigger deal than quantum supremacy or the million AI bot assertion? Answer: Be like god or God as the case may be. I learned about this celestial achievement in “Google Researchers Say They Simulated the Emergence of Life.” The researchers have not actually created life. PR announcements can be sufficiently abstract to make a big Game of Life seem like more than an update of the 1970s John Horton Conway confection on a two-dimensional grid. Google’s goal is to get a mention in the Wikipedia article, perhaps?

Google operates at a different scale in its PR world. Google does not fool around with black and white squares, blinkers, and spaceships. Google makes a simulation of life. Here’s how the write up explains the breakthrough:

In an experiment that simulated what would happen if you left a bunch of random data alone for millions of generations, Google researchers say they witnessed the emergence of self-replicating digital lifeforms.

Cue the pipe organ. Play Toccata and Fugue in D minor. The write up says:

Laurie and his team’s simulation is a digital primordial soup of sorts. No rules were imposed, and no impetus was given to the random data. To keep things as lean as possible, they used a funky programming language called Brainfuck, which to use the researchers’ words is known for its “obscure minimalism,” allowing for only two mathematical operations: adding one or subtracting one. The long and short of it is that they modified it to only allow the random data — stand-ins for molecules — to interact with each other, “left to execute code and overwrite themselves and neighbors based on their own instructions.” And despite these austere conditions, self-replicating programs were able to form.

Okay, tone down the volume on the organ, please.
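For anyone who wants to poke at the idea before the next movement, here is a minimal sketch of a primordial soup of this general sort. It is a much-simplified stand-in for the modified-Brainfuck setup described above, not the researchers’ actual system: random byte tapes meet in pairs, run as code, and overwrite themselves.

import random

TAPE_LEN = 16
SOUP_SIZE = 128
OPS = {ord("+"): 1, ord("-"): -1}   # the only two arithmetic operations

def interact(a, b, steps=64):
    # Concatenate two tapes, treat the bytes as code, let the code
    # overwrite the tape it lives on, then split the result.
    tape = a + b
    head = 0
    for pc in range(min(steps, len(tape))):
        op = tape[pc]
        if op in OPS:
            tape[head] = (tape[head] + OPS[op]) % 256   # self-modification
        else:
            head = op % len(tape)   # any other byte just moves the data head
    return tape[:TAPE_LEN], tape[TAPE_LEN:]

soup = [[random.randrange(256) for _ in range(TAPE_LEN)] for _ in range(SOUP_SIZE)]
for _ in range(10_000):
    i, j = random.sample(range(SOUP_SIZE), 2)
    soup[i], soup[j] = interact(soup[i], soup[j])

In the paper’s setup, self-copying programs eventually dominate the soup; this stripped-down toy mainly shows the mechanics of tapes rewriting tapes.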

The big discovery is, according to a statement in the write up attributed to a real-life God-ler:

there are “inherent mechanisms” that allow life to form.

The God-ler did not claim the title of God-ler. Plus, some point out that Google’s big announcement is not life. (No kidding?)

Several observations:

  1. Okay, sucking up power and computer resources to run a 1970s game suggests that some folks have a fairly unstructured work experience. May I suggest a bit of work on Google Maps and its usability?
  2. Google’s PR machine appears to value quantumly supreme reports of innovations, breakthroughs, and towering technical competence. Okay, but Google sells advertising, and the PR output doesn’t change that fact. Google sells ads. Period.
  3. Any perceived achievement that is better or bigger than a Google achievement pushes the Emit PR button, and the reaction is fast. Who punches this button?

Net net: I find these discoveries and innovations amusing. Yeah, Google is an ad outfit and probably should be headquartered on Madison Avenue or an even more prestigious location. Definitely away from Beelzebub and his ilk.

Stephen E Arnold, July 16, 2024

The Wiz: Google Gears Up for Enterprise Security

July 15, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Anyone remember this verse from “Ease on Down the Road,” from The Wiz, the hit musical from the 1970s? Here’s the passage:

‘Cause there may be times
When you think you lost your mind
And the steps you’re takin’
Leave you three, four steps behind
But the road you’re walking
Might be long sometimes
You just keep on trukin’
And you’ll just be fine, yeah

Why am I playing catchy tunes in my head on Monday, July 15, 2024? I just read “Google Near $23 Billion Deal for Cybersecurity Startup Wiz.” For years, I have been relating Israeli-developed cyber security technology to law enforcement and intelligence professionals. I try in each lecture to profile a firm, typically based in Tel Aviv or environs and staffed with former military professionals. I try to relate the functionality of the system to the particular case or matter I am discussing in my lecture.


The happy band is easin’ down the road. The Googlers have something new to sell. Does it work? Sure, get down. Boogie. Thanks, MSFT Copilot. Has your security created an opportunity for Google marketers?

That stopped in October 2023. A former Israeli intelligence officer told me, “The massacre was Israel’s 9/11. There was an intelligence failure.” I backed away from the Israeli security, cyber crime, and intelware systems. They did not work. If we flash forward to July 15, 2024, the marketing is back. The well-known NSO Group is hawking its technology at high-profile LE and intel conferences. Enhancements to existing systems arrive in the form of email newsletters at the pace of the pre-October 2023 missives.

However, I am maintaining a neutral and skeptical stance. There is the October 2023 event, the subsequent war, and the increasing agitation about tactics, weapons systems in use, and efficacy of digital safeguards.

Google does not share my concerns. That’s why the company is Google, and I am a dinobaby tracking cyber security from my small office in rural Kentucky. Google makes news. I make nothing as a marginalized dinobaby.

The Wiz tells the story of a young girl who wants to get her dog back after a storm carries the creature away. The young girl offs the evil witch and seeks the help of a comedian from Peoria, Illinois, to get back to her real life. The Wiz has a happy ending, and the quoted verse makes the point that the young girl, like the Google, has to keep taking steps even though the Information Highway may be long.

That’s what Google is doing. The company is buying security (which I want to point out is cut from the same cloth as the systems which failed to notice the October 2023 run-up). Google has Mandiant. Google offers a free Dark Web scanning service. Now Google has Wiz.

What’s Wiz do? Like other Israeli security companies, it does the sort of thing intended to prevent events like October 2023’s attack. And as with other aggressively marketed Israeli cyber technology companies’ capabilities, one has to ask, “Will Wiz work in an emerging and fluid threat environment?” This is an important question because of the failure of the in situ Israeli cyber security systems, disabled watch stations, and general blindness to social media signals about the October 2023 incident.

If one zips through the Wiz’s Web site, one can craft a description of what the firm purports to do; for example:

Wiz is a cloud security firm embodying capabilities associated with Israeli military technology. The idea is to create a one-stop shop to secure cloud assets by identifying and mitigating risks. The system incorporates automated functions and graphic outputs. The company asserts that it can secure models used for smart software and enforce security policies automatically.

Does it work? I will leave that up to you and the bad actors who find novel methods to work around big, modern, automated security systems. Did you know that human error and old-fashioned methods like emails with links that deliver stealers work?
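To make the automation claim concrete, here is a minimal sketch of the flavor of posture check such platforms run at scale: flag storage buckets that grant access to everyone. This assumes AWS credentials and the boto3 library, and it is illustrative only; Wiz’s actual engine is, obviously, far broader than a dozen lines of Python.

import boto3

# Public-access grantee groups defined by AWS.
PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    for grant in acl["Grants"]:
        if grant.get("Grantee", {}).get("URI") in PUBLIC_GROUPS:
            print(f"{name}: public grant ({grant['Permission']})")

A check like this catches a leaky bucket. It does not catch the phished employee who clicks the link that delivers the stealer.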

Can Google make the Mandiant Wiz combination work magic? Is Googzilla a modern day Wiz able to transport the little girl back to real life?

Google has paid a rumored $20 billion plus to deliver this reality.

I maintain my neutral and skeptical stance. I keep thinking about October 2023, the aftermath of a massive security failure, and the over-the-top presentations by Israeli cyber security vendors. If the stuff worked, why did October 2023 happen? Like most modern cyber security solutions, marketing to the people who desperately want a silver bullet or digital stake to pound through the heart of cyber risk produces sales.

I am not sure that sales, marketing, and assertions about automation work in what is an inherently insecure, fast-changing, and globally vulnerable environment.

But Google will keep on trukin’ because Microsoft has created a heck of a marketing opportunity for the Google.

Stephen E Arnold, July 15, 2024

Googzilla, Man Up, Please

July 8, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read a couple of “real” news stories about Google and its green earth / save the whales policies in the age of smart software. The first write up is okay and not too exciting for a critical thinker wearing dinoskin. “The Morning After: Google’s Greenhouse Gas Emissions Climbed Nearly 50 Percent in Five Years Due to AI” reads like a PR-massaged write up. Consider this passage:

According to the report, Google said it expects its total greenhouse gas emissions to rise “before dropping toward our absolute emissions reduction target,” without explaining what would cause this drop.

Yep, no explanation. A PR win.

The BBC published “AI Drives 48% Increase in Google Emissions.” That write up states:

Google says about two thirds of its energy is derived from carbon-free sources.


Thanks, MSFT Copilot. Good enough.

Neither of these two articles nor the others I scanned focused on one key fact about Google’s saying green while driving snail darters to their fate: Google’s leadership team did not plan its energy strategy. In fact, my hunch is that no one paid any attention to how much energy Google’s AI activities were sucking down. Once the company shifted into Code Red or whatever consulting term craziness it used to label its frenetic response to the Microsoft OpenAI tie up, absolutely zero attention was directed toward the few big-eyed tunas which might be taking their last dip.

Several observations:

  1. PR speak and green talk are like many assurances emitted by the Google. Talk is not action.
  2. The management processes at Google are disconnected from what happens when the wonky Code Red light flashes and the siren howls at midnight. Shouldn’t management be connected when the Tapanuli orangutan could soon be facing the Big Ape in the sky?
  3. The AI energy consumption is not a result of AI. The energy consumption is a result of Googlers who do what’s necessary to respond to the smart software race. Step on the gas. Yeah, go fast. Endanger the Amur leopard.

Net net: Hey, Google, stand up and say, “My leadership team is responsible for the energy we consume.” Don’t blame your up-in-flames “green” initiative on software you invented. How about less PR and more focus on engineering more efficient data center and cloud operations? I know PR talk is easier, but buckle up, buttercup.

Stephen E Arnold, July 8, 2024

Some Tension in the Datasphere about Artificial Intelligence

June 28, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I generally try to avoid profanity in this blog. I am mindful of Google’s stopwords. I know there are filters running to protect those younger than I from frisky and inappropriate language. Therefore, I will cite the two articles and then convert the profanity to a suitably sanitized form.

The first write up is “I Will F…ing Piledrive You If You Mention AI Again”. Sorry, like many other high-technology professionals I prevaricated and dissembled. I have edited the F word to be less superficially offensive. (One simply cannot trust high-technology types, can one? I am not Thomson Reuters, obviously.) The premise of this write up is that smart software is over-hyped. Here’s a passage I found interesting:

Unless you are one of a tiny handful of businesses who know exactly what they’re going to use AI for, you do not need AI for anything – or rather, you do not need to do anything to reap the benefits. Artificial intelligence, as it exists and is useful now, is probably already baked into your businesses software supply chain. Your managed security provider is probably using some algorithms baked up in a lab software to detect anomalous traffic, and here’s a secret, they didn’t do much AI work either, they bought software from the tiny sector of the market that actually does need to employ data scientists.

I will leave it to you to ponder the wisdom of these words. I, for instance, do not know exactly what I am going to do until I do something, fiddle with it, and either change it up or trash it. You and most AI enthusiasts are probably different. That’s good. I envy your certitude. The author of the first essay is not gentle; he wants to piledrive you if you talk about smart software. I do not advocate violence under any circumstances. I can tolerate baloney about smart software. The piledriver person has hate in his heart. You have been warned.

The second write up is “ChatGPT Is Bullsh*t,” and it is an article published on SpringerLink, not a personal blog. Yep, bullsh*t as a term in an academic paper. Keep in mind, please, that Stanford University’s president and some Harvard wizards engaged in the bullsh*t business as part of their alleged fabrication of data. Who needs AI when humans are perfectly capable of hallucinating, but I digress?

I noted this passage in the academic write up:

So perhaps we should, strictly, say not that ChatGPT is bullshit but that it outputs bullshit in a way that goes beyond being simply a vector of bullshit: it does not and cannot care about the truth of its output, and the person using it does so not to convey truth or falsehood but rather to convince the hearer that the text was written by an interested and attentive agent.

Please, read the 10 page research article about bullsh*t, soft bullsh*t, and hard bullsh*t. Form your own opinion.

I have now set the stage for some observations (probably unwanted and deeply disturbing to some in the smart software game).

  1. Artificial intelligence is a new big thing, and the hyperbole, misdirection, and outright lying (like my saying I would use forbidden language in this essay) are irrelevant. The object of the new big thing is to make money, get power, and maybe become an influencer on TikTok.
  2. The technology seems to have flowered in January 2023, when Microsoft said, “We love OpenAI. It’s a better Clippy.” The problem is that it is now June 2024, and the advances have been slow and steady. This means that after a half century of research, the AI revolution is working hard to keep the hypemobile in gear. PR is quick; smart software improvement is less speedy.
  3. The ripples the new big thing has sent across the datasphere attenuate the farther one is from the January 2023 marketing announcement. AI fatigue is now a thing. I think the hostility is likely to increase because real people are going to lose their jobs. Idle hands are the devil’s playthings. Excitement looms.

Net net: I think the profanity reveals the deep disgust some pundits and experts have for smart software, the companies pushing silver bullets into an old and rusty firearm, and an instinctual fear of the economic disruption the new big thing will cause. Exciting stuff. Oh, I am not stating a falsehood.

Stephen E Arnold, June 28, 2024

Can the Bezos Bulldozer Crush Temu, Shein, Regulators, and AI?

June 27, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The question, to be fair, should be, “Can the Bezos-less bulldozer crush Temu, Shein, Regulators, Subscriptions to Alexa, and AI?” The article, which appeared in the “real” news online service Venture Beat, presents an argument suggesting that the answer is, “Yes! Absolutely.”


Thanks, MSFT Copilot. Good bulldozer.

The write up “AWS AI Takeover: 5 Cloud-Winning Plays They’re [sic] Using to Dominate the Market” depends upon an Amazon Big Dog named Matt Wood, VP of AI products at AWS. The article strikes me as something drafted by a small group at Amazon and then polished to PR perfection. The reasons the bulldozer will crush Google, Microsoft, Hewlett Packard’s on-premises play, and the keep-on-searching IBM Watson, among others, are:

  1. Covering the numbers or logo of the AI companies in the “game”; for example, Anthropic, AI21 Labs, and other whale players
  2. Hitting up its partners, customers, and friends to get support for the Amazon AI wonderfulness
  3. Engineering AI to be itty bitty pieces one can use to build a giant AI solution capable of dominating D&B industry sectors like banking, energy, commodities, and any other multi-billion sector one cares to name
  4. Skipping the Google folly of dealing with consumers. Amazon wants the really big contracts with really big companies, government agencies, and non-governmental organizations.
  5. Amazon is just better at security. Those leaky S3 buckets are not Amazon’s problem. The customers failed to use Amazon’s stellar security tools.

Did these five points convince you?

If you did not embrace the spirit of the bulldozer, the Venture Beat article states:

Make no mistake, fellow nerds. AWS is playing a long game here. They’re not interested in winning the next AI benchmark or topping the leaderboard in the latest Kaggle competition. They’re building the platform that will power the AI applications of tomorrow, and they plan to power all of them. AWS isn’t just building the infrastructure, they’re becoming the operating system for AI itself.

Convinced yet? Well, okay. I am not on the bulldozer yet. I do hear its engine roaring, and I smell the no-longer-green emissions from the bulldozer’s data centers. Also, I am not sure the Google, IBM, and Microsoft are ready to roll over and let the bulldozer crush them into the former rain forest’s red soil. I recall researching Sagemaker, which had some AI-type jargon applied to that “smart” service. Ah, you don’t know Sagemaker? Yeah. Too bad.

The rather positive leaning Amazon write up points out that, as nifty as those five points about Amazon’s supremacy in the AI jungle are, the company has vision. Okay, it is not the customer-first idea from 1998 or so. But it is interesting. Amazon will have infrastructure. Amazon will provide model access. (I want to ask, “For how long?” but I won’t.) And Amazon will have app development.
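What “model access” looks like in practice is a few lines against a hosted endpoint. Here is a hedged sketch using the Bedrock runtime API via boto3; the model ID is a placeholder, and the request schema shown is the Anthropic-on-Bedrock format as I understand it, which can differ by model and region.

import boto3, json

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Summarize MoE in one sentence."}],
}

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # placeholder model ID
    body=json.dumps(body),
    accept="application/json",
    contentType="application/json",
)

print(json.loads(response["body"].read())["content"][0]["text"])

The design point: the developer never touches model weights or infrastructure. That is the lock-in, and that is the vision.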

The article includes a table providing detail about these three legs of the stool in the bulldozer’s cabin. There is also a run down of Amazon’s recent media and prospect directed announcements. Too bad the article does not include hyperlinks to these documents. Oh, well.

And after about 3,300 words about Amazon, the article includes about 260 words about Microsoft and Google. That’s a good balance. Too bad IBM. You did not make the cut. And HP? Nope. You did not get an “Also participated” certificate.

Net net: Quite a document. And no mention of Sagemaker. The Bezos-less bulldozer just smashes forward. Success is in crushing. Keep at it. And that “they” in the Venture Beat article title: Shouldn’t “they” be an “it”?

Stephen E Arnold, June 27, 2024

Nerd Flame War: AI AI AI

June 27, 2024

The Internet is built on trolls and their boorish behavior. The worst of the trolls are self-confessed “experts” on anything. Every online community has its loitering trolls, and tech enthusiasts aren’t any different. In the old days of Internet lore, online verbal battles were dubbed “flame wars,” and XDA-Developers reports that OpenAI started one: “AI Has Thrown Stack Overflow Into Civil War.”

A huge argument in AI development concerns online content being harvested to train large language models (LLMs). Writers and artists were rightly upset that their work was used to train image and writing algorithms. OpenAI recently partnered with Stack Overflow to collect data, and the users aren’t happy. Stack Overflow is a renowned tech support community for sysadmins, developers, and programmers. Stack Overflow even brags that it is the world’s largest developer community.

Stack Overflow users are angry because they weren’t asked for permission to use their content in AI training models, and they don’t like the platform’s response to their protests. Users are deleting their posts or altering them to display incorrect information. In response, Stack Overflow is restoring deleted and altered content, temporarily suspending users who delete content, and hiding behind the terms of service. The entire situation is explained here:

“Delving into discussion online about OpenAI and Stack Overflow’s partnership, there’s plenty to unpack. The level of hostility towards Stack Overflow varies, with some users seeing their answers as being posted online without conditions – effectively free for all to use, and Stack Overflow granting OpenAI access to that data as no great betrayal. These users might argue that they’ve posted their answers for the betterment of everyone’s knowledge, and don’t place any conditions on its use, similar to a highly permissive open source license.

Other users are irked that Stack Overflow is providing access to an open-resource to a company using it to build closed-source products, which won’t necessarily better all users (and may even replace the site they were originally posted on.) Despite OpenAI’s stated ambition, there is no guarantee that Stack Overflow will remain freely accessible in perpetuity, or that access to any AIs trained on this data will be free to the users who contributed to it.”

Reddit and other online communities are facing the same problems. LLMs are trained on data from Stack Overflow and Reddit to power generative AI algorithms like ChatGPT. OpenAI’s ChatGPT is regarded as overblown because it continues to fail multiple tests. We know, however, that generative AI will improve with time. We also know that people will use the easiest solution, and generative AI chatbots will become those tools. It’s easier to ask or type a question than to search.

Whitney Grace, June 27, 2024

Can Anthropic Break Into the AI Black Box?

June 20, 2024

The inner workings of large language models have famously been a mystery, even to their creators. That is a problem for those who would like transparency around pivotal AI systems. Now, however, Anthropic may have found the solution. Time reports, “No One Truly Knows How AI Systems Work. A New Discovery Could Change That.” If the method pans out, this will be perfect for congressional hearings and antitrust testimony. Reporter Billy Perrigo writes:

“Researchers developed a technique for essentially scanning the ‘brain’ of an AI model, allowing them to identify collections of neurons—called ‘features’—corresponding to different concepts. And for the first time, they successfully used this technique on a frontier large language model, Anthropic’s Claude Sonnet, the lab’s second-most powerful system. In one example, Anthropic researchers discovered a feature inside Claude representing the concept of ‘unsafe code.’ By stimulating those neurons, they could get Claude to generate code containing a bug that could be exploited to create a security vulnerability. But by suppressing the neurons, the researchers found, Claude would generate harmless code. The findings could have big implications for the safety of both present and future AI systems. The researchers found millions of features inside Claude, including some representing bias, fraudulent activity, toxic speech, and manipulative behavior. And they discovered that by suppressing each of these collections of neurons, they could alter the model’s behavior. As well as helping to address current risks, the technique could also help with more speculative ones.”

The researchers hope their method will replace “red-teaming,” where developers chat with AI systems in order to uncover toxic or dangerous traits. On the as-yet theoretical chance an AI gains the capacity to deceive its creators, the more direct method would be preferred.
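The mechanics of stimulating or suppressing a feature reduce, in spirit, to nudging a model’s activations along a learned direction. Here is a toy sketch of that steering idea; the names, dimensions, and the random vector are illustrative assumptions, not Anthropic’s method, which extracts feature directions with a sparse autoencoder trained on real activations.

import torch

hidden_dim = 512
activations = torch.randn(1, hidden_dim)        # one token's activation vector
feature_dir = torch.randn(hidden_dim)
feature_dir = feature_dir / feature_dir.norm()  # unit direction for a "feature"

def steer(acts, direction, strength):
    # Positive strength "stimulates" the feature; negative "suppresses" it.
    return acts + strength * direction

stimulated = steer(activations, feature_dir, +8.0)
suppressed = steer(activations, feature_dir, -8.0)

Apply an intervention like this at every layer and token, and you have a dial for a concept instead of a chat-based guessing game.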

A happy side effect of the method could be better security. Anthropic states being able to directly manipulate AI features may allow developers to head off AI jailbreaks. The research is still in the early stages, but Anthropic is singing an optimistic tune.

Cynthia Murrell, June 20, 2024

Great Moments in Smart Software: IBM Watson Gets to Find Its Future Elsewhere Again

June 19, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

The smart software game is a tough one. Whip up some compute, download the models, and go go go. Unfortunately, artificial intelligence is artificial and often not actually intelligent. I read an interesting article in Time Magazine (who knew it was still in business?). The story has a clickable title: “McDonald’s Ends Its Test Run of AI Drive-Throughs With IBM.” The juicy word AI, the big brand McDonald’s, and the pickle on top: IBM.


A college student tells the smart software system at a local restaurant that his order was misinterpreted. Thanks, MSFT Copilot. How’s your “recall” today? What about system security? Oh, that’s too bad.

The write up reports with the glee of a kid getting a Happy Meal:

McDonald’s automated order taker with IBM received scores of complaints in recent years, for example — with many taking to social media to document the chatbot misunderstanding their orders.

Consequently, the IBM fast food service has been terminated.

Time’s write up included a statement from Big Blue too:

In an initial statement, IBM said that “this technology is proven to have some of the most comprehensive capabilities in the industry, fast and accurate in some of the most demanding conditions,” but did not immediately respond to a request for further comment about specifics of potential challenges.

IBM suggested its technology could help fight cancer in Houston a few years ago. How did that work out? That smart software worker had an opportunity to find its future elsewhere. The career trajectory, at first glance, seems to run from medicine to grilling burgers. One might interpret this as an interesting employment arc. The path seems to be heading down to Sleepy Town.

What’s the future of the IBM smart software test? The write up points out:

Both IBM and McDonald’s maintained that, while their AI drive-throughs partnership was ending, the two would continue their relationship on other projects. McDonald’s said that it still plans to use many of IBM’s products across its global system.

But Ronald McDonald has to be practical. The article adds:

In December, McDonald’s launched a multi-year partnership with Google Cloud. In addition to moving restaurant computations from servers into the cloud, the partnership is also set to apply generative AI “across a number of key business priorities” in restaurants around the world.

Google’s smart software has been snagged in some food controversies too. The firm’s smart system advised some users to apply glue to make the cheese topping stick better. Yum.

Several observations seem to be warranted:

  1. Practical and money-saving applications of IBM’s smart software do not have the snap, crackle, and pop of OpenAI’s PR coup with Microsoft in January 2023. Time is writing about IBM, but the case example is not one that makes me crave this particular application. Customers want a sandwich, not something they did not order.
  2. Examples of reliable smart software applications which require spontaneous reaction to people ordering food or asking basic questions are difficult to find. Very narrow applications of smart software do result in positive case examples; for example, in some law enforcement software (what I call policeware), the automatic processes of some vendors’ solutions work well; for example, automatic report generation in the Shadowdragon Horizon system.
  3. Big companies spend money, catch attention, and then have to spend more money to remediate and clean up the negative publicity.

Net net: More small-scale testing and less publicity chasing seem to be two items to add to the menu. And, Watson, keep on trying. Google is.

Stephen E Arnold, June 19, 2024


Palantir: Fear Is Good. Fear Sells.

June 18, 2024

President Eisenhower may not have foreseen AI when he famously warned of the military-industrial complex, but certain software firms certainly fit the bill. One of the most successful, Palantir, is pursuing Madison Avenue-style marketing with a message of alarm. The company’s co-founder, Alex Karp, is quoted in the fear-mongering post at right-wing Blaze Media, “U.S. Prepares for War Amid Growing Tensions that China Could Invade Taiwan.”

After several paragraphs of panic over tensions between China and Taiwan, writer Collin Jones briefly admits “It is uncertain if and when the Chinese president will deploy an attack against the small country.” He quickly pivots to the scary AI arms race, intimating Palantir and company can save us as long as we let (fund) them. The post concludes:

“Palantir’s CEO and co-founder Alex Karp said: ‘The way to prevent a war with China is to ramp up not just Palantir, but defense tech startups that produce software-defining weapons systems that scare the living F out of our adversaries.’ Karp noted that the U.S. must stay ahead of its military opponents in the realm of AI. ‘Our adversaries have a long tradition of being not interested in the rule of law, not interested in fairness, not interested in human rights and on the battlefield. It really is going to be us or them.’ ‘You do not want a world order where our adversaries try to define new norms. It would be very bad for the world, and it would be especially bad for America,’ Karp concluded.”

Wow. But do such scare tactics work? Of course they do. For instance, we learn from DefenseScoop, “Palantir Lands $480M Army Contract for Maven Artificial Intelligence Tech.” That article reports on not one but two Palantir deals: the titular Maven expansion and, we learn:

“The company was recently awarded another AI-related deal by the Army for the next phase of the service’s Tactical Intelligence Targeting Access Node (TITAN) ground station program, which aims to provide soldiers with next-generation data fusion and deep-sensing capabilities via artificial intelligence and other tools. That other transaction agreement was worth $178 million.”

Those are just two recent examples of Palantir’s lucrative government contracts, ones that have not, as of this writing, been added to this running tally. It seems the firm has found its winning strategy. Ramping up tensions between world powers is a small price to pay for significant corporate profits, apparently.

Cynthia Murrell, June 18, 2024
