Google: Lost in Its Own AI Maze

May 31, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

One “real” news item caught my attention this morning. Let me tell you. Even with the interesting activities in the Manhattan court, it jumped out at me. Let’s take a quick look and see if Googzilla (see illustration) can make a successful exit from the AI maze in which the online advertising giant finds itself.


Googzilla is lost in its own AI maze. Can it find a way out? Thanks, MSFT Copilot. Three tries and I got a lizard in a maze. Keep allocating compute cycles to security because obviously Copilot is getting fewer and fewer of them these days.

“Google Pins Blame on Data Voids for Bad AI Overviews, Will Rein Them In” makes it clear that Google is not blaming itself for some of the wacky outputs its centerpiece AI function has been delivering. I won’t do the guilty-34-times thing. I will just mention the non-toxic glue and pizza item. This news story reports:

Google thinks the AI Overviews for its search engine are great, and is blaming viral screenshots of bizarre results on "data voids" while claiming some of the other responses are actually fake. In a Thursday post, Google VP and Head of Google Search Liz Reid doubles down on the tech giant’s argument that AI Overviews make Google searches better overall—but also admits that there are some situations where the company "didn’t get it right."

So let’s look at that Google blog post titled “AI Overviews: About Last Week.”

How about this statement?

User feedback shows that with AI Overviews, people have higher satisfaction with their search results, and they’re asking longer, more complex questions that they know Google can now help with. They use AI Overviews as a jumping off point to visit web content, and we see that the clicks to webpages are higher quality — people are more likely to stay on that page, because we’ve done a better job of finding the right info and helpful webpages for them.

The statement strikes me as something that a character would say in an episode of the Twilight Zone, a TV series in the 50s and 60s. The TV show had a weird theme, and I thought I heard it playing when I read the official Googley blog post. Is this the Google “bullseye” method or a bullsh*t method?

The official Googley blog post notes:

This means that AI Overviews generally don’t “hallucinate” or make things up in the ways that other LLM products might. When AI Overviews get it wrong, it’s usually for other reasons: misinterpreting queries, misinterpreting a nuance of language on the web, or not having a lot of great information available. (These are challenges that occur with other Search features too.) This approach is highly effective. Overall, our tests show that our accuracy rate for AI Overviews is on par with another popular feature in Search — featured snippets — which also uses AI systems to identify and show key info with links to web content.

Okay, we are into the bullsh*t method. Google search is now a key moment in the Sundar & Prabhakar Comedy Act. Since the début in Paris which featured incorrect data, the Google has been in Code Red or Red Alert, red-faced embarrassment mode. Now the company wants people to eat rocks, and it is not the online advertising giant’s fault. The blog post explains:

There isn’t much web content that seriously contemplates that question, either. This is what is often called a “data void” or “information gap,” where there’s a limited amount of high quality content about a topic. However, in this case, there is satirical content on this topic … that also happened to be republished on a geological software provider’s website. So when someone put that question into Search, an AI Overview appeared that faithfully linked to one of the only websites that tackled the question. In other examples, we saw AI Overviews that featured sarcastic or troll-y content from discussion forums. Forums are often a great source of authentic, first-hand information, but in some cases can lead to less-than-helpful advice, like using glue to get cheese to stick to pizza.

Okay, I think one component of the bullsh*t method is that it is not Google’s fault. “Users” — not customers because Google has advertising clients, partners, and some lobbyists. Everyone else is a user, and it is users’ fault, the data creators’ fault, and probably Sam AI-Man’s fault. (Did I omit anyone on whom to blame the let them “eat rocks” result?)

And the Google cares. This passage is worthy of a Hallmark card with a foldout:

At the scale of the web, with billions of queries coming in every day, there are bound to be some oddities and errors. We’ve learned a lot over the past 25 years about how to build and maintain a high-quality search experience, including how to learn from these errors to make Search better for everyone. We’ll keep improving when and how we show AI Overviews and strengthening our protections, including for edge cases, and we’re very grateful for the ongoing feedback.

What’s my take on this?

  1. The assumption that Google search is “good” is interesting, just not in line with what I hear, read, and experience when I do use Google. Note that my personal usage has decreased over time.
  2. Google is trying to explain away its obvious flaws. The Google speak may work for some people, just not for me.
  3. The tone is that of an entitled seventh-grader from a wealthy family, not the type of language I find particularly helpful when the “smart” Google software has to be remediated by humans. Google is terminating humans, right? Now Google needs humans. What’s up, Google?

Net net: Google is snagged in its own AI maze. I am growing less confident in the company’s ability to extricate itself. The Sam AI-Man has crafted deals with two outfits big enough to make Google’s life more interesting. Google’s own management seems ineffectual despite the flashing red and yellow lights and the honking of alarms. Google’s wordsmiths and lawyers are running out of verbal wiggle room. But most important, the failure of the bullseye method and the oozing comfort of the bullsh*t method marks a turning point for the company.

Stephen E Arnold, May 31, 2024

A Different View of That Google Search Leak

May 30, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

As a dinobaby, I can make observations that a person with two young children and a mortgage is not comfortable making. So buckle your seat belt and grab a couple of Prilosec. I don’t think the leak is a big deal. Let me provide some color.


This cartoon requires that you examine the information in “Authorities: Google Exec Died on Yacht after Upscale Prostitute Injected Him with Heroin.” The incident provides some insight into the ethical compass of one Google officer. Do others share that directionality? Thanks, MSFT Copilot. You unwittingly produced a good cartoon. Ho ho ho.

Many comments are zipping around about the thousands of pages of secret Google information now in circulation. The “legend” of the leak is that Search API information became available. The “spark” which lit the current Google fire was this post: “An Anonymous Source Shared Thousands of Leaked Google Search API Documents with Me; Everyone in SEO Should See Them.” (FYI: The leaker is an entity using the handle “Erfan Azimi.”)

That write up says:

This documentation doesn’t show things like the weight of particular elements in the search ranking algorithm, nor does it prove which elements are used in the ranking systems. But, it does show incredible details about data Google collects.

If you want more of this SEO stuff, have at it. I think the information is almost useless. Do Googlers follow procedures? Think about your answer for a company that operates essentially without meaningful controls. Here’s my view, which means it is time to gulp those tabs.

First, the entire SEO game helps Google sell online advertising. Once the SEO push fails to return results to the client of the SEO expert, Google allows these experts to push Google ads on their customer. Why? Pay Google money and the advertiser will get traffic. How does this work? Well, money talks, and Google search experts deliver clicks.

Second, the core of Google is now surrounded by wrappers. The thousands of words in the leak record the stuff essentially unmanaged Googlers do to fill time. After 25 years, the old ideas (some of which were derived from the CLEVER method for which Jon Kleinberg deserves credit) have been like a pretty good organic chicken swathed in hundreds of layers of increasingly crappy plastic wrap. With the appropriate source of illumination, one can discern the chicken beneath the halogenated wrap, but the chicken looks darned awful. Do you want to eat the chicken? Answer: Probably no more than I want to eat a pizza with non-toxic glue in the cheese.

Third, the senior management of the Google is divorced from the old-fashioned idea of typing a couple of words and getting results which are supposed to be germane to the query. When Boolean logic was part of the search game, search was about 60 percent effective. Thus, it seemed logical over the years to provide training wheels and expand the query against which ads could be sold. Now the game is just to sell ads because the query is relaxed, extended, and mostly useless except for a narrow class of search strings. (Use Google dorks and get some useful stuff.)

Okay, what are the implications of these three observations? Grab another Prilosec, please.

First, Google has to make more and more money because its costs are quite difficult to control. With cost control out of reach, the company’s “leadership” must focus on extracting cash from “users.” (Customers is not the right word for those in the Google datasphere.) The CFO is looking for her future elsewhere. The key point is that her future is not at the Google, its black maw hungry for cash, and the costs of keeping the lights on. Burn rate is not a problem just for start ups, folks.

Second, Google’s senior management is not focused on search no matter what the PR says. The company’s senior leader is a consultant, a smooth talking wordsmith, and a neutral personality to the outside world. As a result, the problems of software wrappers and even the incredible missteps with smart software are faint sounds coming from the other side of a sound-proofed room in a crazy college dormitory. Consultants consult. That’s what Google’s management team does. The “officers” have to figure out how to implement. Then those who do the work find themselves in a cloud of confusion. I did a blog essay about one of Google’s odd ball methods for delivering “minimum viable products”. The process has a name, but I have forgotten it, just like those working on Google’s “innovative” products which are difficult for me to name even after the mind-numbing Google I/O. Everything is fuzzy and illuminated by flickering Red Alert and Yellow Alert lights.

Third, Google has been trying to diversify its revenue stream for decades. After much time and effort, online advertising is darned close to 70 percent of the firm’s revenue. The numerous venture capital initiatives, the usually crazy skunk works often named X or a term from a weird union of a humanoid and a piece of hardware have delivered what? The Glasshole? The life-sized board game? The Transformic Inc. data structure? Dr. Guha’s semantic technology? Yeah, failures, because the revenue contributed is negligible. The idea of innovation at Google from the Backrub in the dorm has been derivative, imitative, and in the case of online advertising methods something for which Google paid some big bucks to Yahoo before the Google initial public offering. Google is not innovative; it is similar to a high school science club with an art teacher in charge. Google is clever and was quick moving. The company was fearless and was among the first to use academic ideas in its commercial search and advertising business until it did not. We are in the did-not phase. Think about that when you put on a Google T shirt.

Finally, the company lacks the practical expertise to keep its 155,000 (estimated to be dropping at a cadence) full-time equivalents on the reservation. Where did the leaked but largely irrelevant documents originate? Not Mr. Fishkin: He was the lucky recipient of information from Mr. Azimi. Where did he get the documents? I am waiting for an answer, Mr. Azimi. Answer carefully because possession of such documents might be something of interest to some government authorities. The leak is just one example of a company which cannot coordinate information in a peer-reviewed journal paper. Remember the stochastic parrot? If not, run a query and look at what Google outputs from its smart software. And the protests? Yeah, thanks for screwing up traffic and my ability to grab a quick coffee at Philz when the Googlers are milling around with signs. Common sense seems in short supply.

So what?

For those who want search traffic, buy advertising. Plan to spend a minimum of $20,000 per month to get some action. If you cannot afford it, you need to put your thinking cap in a USB C socket and get some marketing ideas. Web search is not going to deliver those eyeballs. My local body shop owner asked me, “What can I do to get more visibility for my Google Local listing?” I said, “Pay a friend to post about your business in Nextdoor.com, get some customers to post about your dent removal prowess on Facebook, and pay some high school kid to spend some time making before and after pictures for Instagram. Pay the teen to make a TikTok video of a happy customer.” Note that I did not mention Google. It doesn’t deliver for local outfits.

Now you can kick back and enumerate the reasons why my view of Google is wrong, crazy, or out of touch. Feel free to criticize. I am a dinobaby; I consulted for a certain big time search engine; I consulted for venture firms investing in search; and I worked on some Fancy Dan systems. But my experience does not matter. I am a dinobaby, and I don’t care how other people find information. I pay several people to find information for me. I then review what those young wizards produce. Most of them don’t agree with me on some issues. That’s why I pay them. But this dinobaby’s views of Google are not designed to make them or you happy.

Net net: The image of Google to keep in mind is encapsulated in this article: Yacht Killing: Escort to Be Arraigned in Google Exec’s Heroin Death. Yep, Googlers are sporty. High school mentalities make mistakes, serious mistakes.

Stephen E Arnold, May 30, 2024

Guarantees? Sure … Just Like Unlimited Data Plans

May 30, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I loved this story: “T-Mobile’s Rate Hike Raises Ire over Price Lock Guarantees.” The idea that something is guaranteed today is a hoot. Remember “unlimited data plans”? I think some legal process determined that unlimited did not mean without limits. This is not just wordsmithing; it is probably a behavior which, if attempted in certain areas of Sicily, would result in something quite painful. Maybe a beating, a knife in the ribs, or something more colorful? But today, are you kidding me?


The soon-to-be-replaced-by-a-chatbot AI entity is reassuring a customer about a refund. Is the check in the mail? Will the sales professional take the person with whom he is talking to lunch? Absolutely. This is America, a trust outfit for sure. Thanks, MSFT Copilot. Working on security today?

The write up points out:

…in T-Mobile’s case, customers are seething because T-Mobile is raising prices on plans that were offered with “guarantees” they wouldn’t go up, such as T-Mobile One plans.

Unusual? No, visit a big time grocery store. Select 10 items at random. Do the prices match what was displayed on the shelves? Let me know. Our local outfit is batting 10 percent incorrect pricing per 10 items. Does the manager care? Sure, but does the pricing change or do the database errors get adjusted? Ho ho ho.

The article reported:

“Clearly this is bad optics for T-Mobile since it won many people over as the ‘non-corporate’ un-carrier,” he [Eric Michelson, a social and digital media strategist] said.

Imagine a telecommunications company raising prices and refusing to provide specific information about which customers get the opportunity to pay more for service.

Several observations:

  1. Promises mean zero. Ask people trying to get reimbursed for medical expenses or for post-tornado house repairs.
  2. Clever is more important than behaving in an ethical and responsible manner. Didn’t Google write a check to the US government to make annoying legal matters go away?
  3. The language warped by marketers and shape shifted by attorneys makes understanding exactly what’s afoot difficult. How about the wording in an omnibus bill crafted by lobbyists and US elected officials’ minions? Definitely crystal clear to some. To others, well, not too clear.

Net net: What’s up with the US government agencies charged with managing corporate behavior and protecting the rights of citizens? Answer: These folks are in meetings, on Zoom calls, or working from home. Please, leave a message.

Stephen E Arnold, May 30, 2024

Telegram: No Longer Just Mailing It In

May 29, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Allegedly about 900 million people “use” Telegram. More are going to learn about the platform as the company comes under more European Union scrutiny, kicks the tires for next-generation obfuscation technology, and becomes a best friend of Microsoft… for now. “Telegram Gets an In-App Copilot Bot” reports:

Microsoft has added an official Copilot bot within the messaging app Telegram, which lets users search, ask questions, and converse with the AI chatbot. Copilot for Telegram is currently in beta but is free for Telegram users on mobile or desktop. People can chat with Copilot for Telegram like a regular conversation on the messaging app. Copilot for Telegram is an official Microsoft bot (make sure it’s the one with the checkmark and the username @CopilotOfficialBot).

You can “try it now.” Just navigate to Microsoft “Copilot for Telegram.” At this location, you can:

Meet your new everyday AI companion: Copilot, powered by GPT, now on Telegram. Engage in seamless conversations, access information, and enjoy a smarter chat experience, all within Telegram.


A dinobaby lecturer explains the Telegram APIs and its bot function for automating certain operations within the Telegram platform. Some in the class are looking at TikTok, scrolling Instagram, or reading about a breakthrough in counting large numbers of objects using a unique numerical recipe. But Telegram? WhatsApp and Signal are where the action is, right? Thanks, MSFT Copilot. You are into security and now Telegram. Keep your focus, please.

Next week, I will deliver a talk about Telegram and some related information about obfuscated messaging at the TechnoSecurity & Digital Forensics Conference. I no longer do too many lectures because I am an 80 year old dinobaby, and I hate flying and standing around talking to people 50 years younger than I. However, my team’s research into end-to-end encrypted messaging yielded some interesting findings. At the 2024 US National Cyber Crime Conference about 260 investigators listened to my 75 minute talk, and a number of them said, “We did not know that.” I will also do a Telegram-centric lecture at another US government event in September. But in this short post, I want to cover what the “deal” with Microsoft suggests.

Let’s get to it.

Telegram operates out of Dubai. The distributed team of engineers has been adding features and functions to what began as a messaging app in Russia. The “legend” of Telegram is an interesting story, but I remain skeptical about the company, its links with a certain country, and the direction in which the firm is headed. If you are not familiar with the service, it has morphed into a platform with numerous interesting capabilities. For some actors, Telegram can and has replaced the Dark Web. Examples include contraband, “personal” services, and streaming video to thousands of people. (Note: Messages on Telegram are not encrypted by default as they are on some other E2EE messaging applications.) Some Telegram users pay to get “special” programs. (Please, use your imagination.)

Why is Telegram undergoing this shift from humble messaging app to a platform? Our research suggests that there are three reasons. I want to point out that Pavel Durov does not have a public profile on the scale of a luminary like Elon Musk or Sam AI-Man, but he is out and about. He conducted an “exclusive” and possibly red-herring discussion with Tucker Carlson in April 2024. After the interview, Mr. Durov took direct action to block certain message flows from Ukraine into Russia. That may be one reason: Telegram is actively steering information about Ukraine’s view of Mr. Putin’s special operation. Yep, freedom.

Are there others? Let me highlight three:

  1. Mr. Durov and his brother, who allegedly holds two PhDs, see an opportunity to make money. The Durovs, however, are not hurting for cash.
  2. American messaging apps have been fat and lazy. Mr. Durov is an innovator, and he wants to make darned sure that he runs rings around Signal, WhatsApp, and a number of other outfits. Ego? My team thinks that is part of Mr. Durov’s motivation.
  3. Telegram is expanding because it may not be an independent, free-wheeling outfit. Several on my team think that Mr. Durov answers to a higher authority. Is that authority aligned with the US? Probably not.

Now the Microsoft deal?

Several questions may get your synapses in gear:

  1. Where are the data flowing through Telegram located / stored geographically? The service can regenerate some useful information for a user with a new device.
  2. Why tout freedom and free speech in April 2024 and several weeks later apply restrictions on data flow? Does this suggest a capability to monitor by user, by content type, and by other metadata?
  3. Why is Telegram exploring additional network enhancements? My team thinks that Mr. Durov has some innovations in obfuscation planned. If the company does implement certain technologies freely disclosed in US patents, what will that mean for analysts and investigators?
  4. Why a tie up with Microsoft? Whose idea was this? Who benefits from the metadata? What happens if Telegram has some clever ideas about smart software and the Telegram bot function?
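For readers unfamiliar with the bot function mentioned above: Telegram exposes its Bot API as plain HTTPS methods such as sendMessage, which is part of why automating operations on the platform is so easy. Below is a minimal, hedged sketch in Python that only builds the request; the token and chat_id are hypothetical placeholders, and nothing is actually transmitted:

```python
import json

def build_send_message(token: str, chat_id: int, text: str):
    """Build the URL and JSON payload for the Bot API's sendMessage method.
    Nothing is sent here; a real bot would POST the payload to the URL."""
    url = f"https://api.telegram.org/bot{token}/sendMessage"
    payload = json.dumps({"chat_id": chat_id, "text": text}).encode("utf-8")
    return url, payload

# Hypothetical token and chat_id for illustration only.
url, payload = build_send_message("123456:HYPOTHETICAL-TOKEN", 42, "hello from a bot")
print(url)
```

A production bot would also poll getUpdates (or register a webhook) to receive messages, but the request-building pattern is the same.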

Net net: Not too many people in Europe’s regulatory entities have paid much attention to Telegram. The entities of interest have been bigger fish. Now Telegram is growing faster than a Chernobyl boar stuffed on radioactive mushrooms. The EU is recalibrating for Telegram at this time. In the US, the “I did not know” reaction provides some insight into general knowledge about Telegram’s more interesting functions. Think pay-to-view streaming video about certain controversial subjects. Free storage and data transfer are provided by Telegram, a company which does not embrace the Netflix approach to entertainment. Telegram is, as I explain in my lectures, interesting, very interesting.

Stephen E Arnold, May 29, 2024

AI Overviews: A He Said, She Said Argument

May 29, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Google has begun the process of setting up an AI Overview object in search results. The idea is that Google provides an “answer.” But the machine-generated response is a platform for selling sentences, “meaning,” and probably words. Most people who have been exposed to the Overview object point out some of the object’s flaws. Those “mistakes” are not the point. Before I offer some ideas about the advertising upside of an AI Overview, I want to highlight both sides of this “he said, she said” dust up. Those criticizing the Google’s enhancement to search results miss the point of generating a new way to monetize information. Those who are taking umbrage at the criticism miss the point of people complaining about how lousy the AI Overviews are perceived to be.

The criticism of Google is encapsulated in “Why Google Is (Probably) Stuck Giving Out AI Answers That May or May Not Be Right.” A “real” journalist explains:

What happens if people keep finding Bad Answers on Google and Google can’t whac-a-mole them fast enough? And, crucially, what if regular people, people who don’t spend time reading or talking about tech news, start to hear about Google’s Bad And Potentially Dangerous Answers? Because that would be a really, really big problem. Google does a lot of different things, but the reason it’s worth more than $2 trillion is still its two core products: search, and the ads that it generates alongside search results. And if people — normal people — lose confidence in Google as a search/answer machine … Well, that would be a real problem.

The idea is that the AI Overview makes Google Web search less useful than it was before AI. Whether the idea is accurate or not makes no difference to the “he said, she said” argument. The “real” news is that Google is doing something that many people may perceive as a negative. The consequence is that Google’s shiny carapace will be scratched and dented. A more colorful approach to this side of the “bad Google” argument appears in Android Authority. “Shut It Down: Google’s AI Search Results Are Beyond Terrible” states:

The new Google AI Overview feature is offering responses to queries that range from bizarre and funny to very dangerous.

Ooof. Bizarre and dangerous. Yep, that’s the new Google AI Overview.

The Red Alert Google is not taking the criticism well. Instead of Googzilla retreating into a dark, digital cave, the beastie is coming out fighting. Imagine. Google is responding to pundit criticism. Fifteen years ago, no one would have paid any attention to a podcaster writer and a mobile device news service. Times have indeed changed.

“Google Scrambles to Manually Remove Weird AI Answers in Search” provides an allegedly accurate report about how Googzilla is responding to criticism. In spite of the split infinitive, the headline makes clear that the AI-infused online advertising machine is using humans (!) to fix up wonky AI Overviews. The write up pontificates:

Google continues to say that its AI Overview product largely outputs “high quality information” to users. “Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce,” Google spokesperson Meghann Farnsworth said in an email to The Verge. Farnsworth also confirmed that the company is “taking swift action” to remove AI Overviews on certain queries “where appropriate under our content policies, and using these examples to develop broader improvements to our systems, some of which have already started to roll out.”

Google seems to acknowledge that action is required. But the Google is not convinced that it has stepped on a baby duckling or two with its AI Overview innovation.


AI Overviews represent a potential revenue flow into Alphabet. The money, not the excellence of the outputs, is what matters in today’s Google. Thanks, MSFT Copilot. Back online and working on security today?

Okay, “he said, she said.” What’s the bigger picture? I worked on a project which required setting up an ad service which sold words in a text passage. I am not permitted to name the client or the outfit with the idea. On a Web page, some text would appear with an identifier like an underline or bold face. When the reader of the Web page clicked (often inadvertently) on the word, that user would be whisked to another Web site or a pop up ad. The idea is that instead of an Oingo (Applied Semantics)-type of related concept expansion, the advertiser was buying a word. Brilliant.
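The word-buying mechanic can be sketched in a few lines: map each sold word to the buyer's landing page and wrap that word's occurrences in the page text with links. A toy Python illustration; the word list and URL are invented for the example, and a real system would handle case, stemming, and the ad auction:

```python
def sell_words(text: str, sold: dict) -> str:
    """Wrap each sold word in the text with a link to its buyer's page.
    `sold` maps a word to the advertiser URL that bought it."""
    out = []
    for word in text.split():
        bare = word.strip(".,!?")          # ignore trailing punctuation
        if bare in sold:
            word = word.replace(bare, f'<a href="{sold[bare]}">{bare}</a>')
        out.append(word)
    return " ".join(out)

# Hypothetical advertiser: a body shop buys the word "dent".
html = sell_words("Fix that dent today.", {"dent": "https://example.com/body-shop"})
print(html)
```

Every click on the linked word is billable, which is the whole point of selling words rather than whole pages.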

The AI Overview, based on my team’s look at what the Google has been crafting, sets up a similar opportunity. Here’s a selection from our discussion at lunch on Friday, May 24, 2024, at a restaurant which featured a bridge club luncheon. Wow, was it noisy! Here’s what emerged from our frequently disrupted conversation:

  1. The AI Overview is a content object. It sits for now at the top of the search results page unless the “user” knows to add the string udm=14 to a query
  2. Advertising can be “sold” to the advertiser[s] who want[s] to put a message on the “topic” or “main concept” of the search
  3. Advertising can be sold to the organizations wanting to be linked to a sentence or a segment of a sentence in the AI Overview
  4. Advertising can be sold to the organizations wanting to be linked to a specific word in the AI Overview
  5. Advertising can be sold to the organizations wanting to be linked to a specific concept in the AI Overview.
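Whatever happens to the monetization ideas, the udm=14 escape hatch mentioned in the first item is real and easy to demonstrate: appending that parameter to a Google search URL requests the stripped-down “Web” results view without the AI Overview object. A minimal sketch using only Python’s standard library (the query string is invented for the example):

```python
from urllib.parse import urlencode

def web_only_search_url(query: str) -> str:
    """Build a Google search URL with udm=14, which requests the
    plain 'Web' results view without the AI Overview object."""
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": "14"})

print(web_only_search_url("non-toxic glue pizza"))
```

Pasting the resulting URL into a browser shows ten blue links, no Overview object, and, notably, no new advertising surface.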

Whether the AI Overview is good, bad, or indifferent makes no, zero, zip difference in practice to the Google advertising “machine,” its officers, and its soon-to-be-replaced-by-smart-software staff. AI has given Google the opportunity to monetize a new content object. That content object and its advertising is additive. People who want “traditional” Google online advertising can still buy it. Furthermore, as one of my team pointed out, the presence of the new content object “space” on a search results page opens up additional opportunities to monetize certain content types. One example is buying a link to a related video which appears as an icon below, alongside, or within the content object space. The monetization opportunities seem to have some potential.

Net net: Googzilla may be ageing. To poobahs and self-appointed experts, Google may be lost in space, trembling in fear, and growing deaf due to the blaring of the Red Alert klaxons. Whatever. But the AI Overview may have some upside even if it is filled with wonky outputs.

Stephen E Arnold, May 29, 2024

Copilot: I Have Control Now, Captain. Relax, Chill

May 29, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Appearing unbidden on Windows devices, Copilot is spreading its tendrils through businesses around the world. Like a network of fungal mycorrhizae, the AI integrates itself with the roots of Windows computing systems. The longer it is allowed to intrude, the more any attempt to dislodge it will harm the entire ecosystem. VentureBeat warns, “Ceding Control: How Copilot+ and PCs Could Make Enterprises Beholden to Microsoft.”

Writer James Thomason traces a gradual transition: The wide-open potential of the early Internet gave way to walled gardens, the loss of repair rights, and a shift to outside servers controlled by cloud providers. We have gradually ceded control of both software and hardware as well as governance of our data. All while tech companies make it harder to explore alternative products and even filter our news, information, and Web exploration.

Where does that put us now? AI has ushered in a whole new level of dominion for Microsoft in particular. Thomason writes:

“Microsoft’s recently announced ‘Copilot+ PCs’ represent the company’s most aggressive push yet towards an AI-driven, cloud-dependent computing model. These machines feature dedicated AI processors, or ‘NPUs’ (neural processing units), capable of over 40 trillion operations per second. This hardware, Microsoft claims, will enable ‘the fastest, most intelligent Windows PC ever built.’ But there’s a catch: the advanced capabilities of these NPUs are tightly tethered to Microsoft’s cloud ecosystem. Features like ‘Recall,’ which continuously monitors your activity to allow you to quickly retrieve any piece of information you’ve seen on your PC, and ‘Cocreator,’ which uses the NPU to aid with creative tasks like image editing and generation, are deeply integrated with Microsoft’s servers. Even the new ‘Copilot’ key on the keyboard, which summons the AI assistant, requires an active internet connection. In effect, these PCs are designed from the ground up to funnel users into Microsoft’s walled garden, where the company can monitor, influence and ultimately control the user experience to an unprecedented degree. This split-brain model, with core functionality divided between local hardware and remote servers, means you never truly own your PC. Purchasing one of these AI-driven machines equals irrevocable subjugation to Microsoft’s digital fiefdom. The competition, user choice and ability to opt out that defined the PC era are disappearing before our eyes.”

So what does this mean for the majority of businesses that rely on Microsoft products? Productivity gains, yes, but at the price of a vendor stranglehold, security and compliance risks, and opaque AI decision-making. See the article for details on each of these.

For anyone who doubts Microsoft would be so unethical, the write-up reminds us of the company’s monopolistic tendencies. Thomason insists we cannot count on the government to intervene again, considering Big Tech’s herculean lobbying efforts. So if the regulators are not coming to save us, how can we defy Microsoft’s dominance? One can expend the effort to find and use open hardware and software alternatives, of course; Linux is a good example. But a real difference will only be made with action on a larger scale. There is an organization for that: FUTO (the Fund for Universal Technology Openness). We learn:

“One of FUTO’s key strategies is to fund open-source versions of important technical building blocks like AI accelerators, ensuring they remain accessible to a wide range of actors. They’re also working to make decentralized software as user-friendly and feature-rich as the offerings of the tech giants, to reduce the appeal of convenience-for-control tradeoffs.”

Even if and when those building blocks are available, resistance will be a challenge. It will take mindfulness about technology choices while Microsoft dangles shiny, easier options. But digital freedom, Thomason asserts, is well worth the effort.

Cynthia Murrell, May 29, 2024

Apple Fan Misses the Obvious: MSFT Marketing Is Tasty

May 28, 2024

dinosaur30a_thumb_thumbThis essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I love the anecdotes seasoned investigators offer at law enforcement and intelligence conferences. Statements like “I did nothing wrong” are accompanied by a weapon in a waistband. Or, “You can take my drugs.” Yep, those are not informed remarks in some situations. But what happens when poohbahs and would-be experts spend 2,600 words explaining how addled Microsoft’s announcements at its Build conference were? “Microsoft’s Copilot PC and the M3 Mac Killer Myth” is an interesting argumentative essay that strives to be as clear as fresh, just-pressed apple cider in New Hampshire. (Have you ever seen the stuff?)


The Apple Cider judge does not look happy. Has the innovation factory failed with filtration? Thanks, MSFT Copilot. How is that security initiative today?

The write up provides a version of “tortured poet” writing infused with techno-talk. The object of the write up is to make several points, to which people are not directing attention, as clear as the aforementioned apple cider; to wit:

  • Microsoft has many failures; for example, the Windows Phone, Web search, and, of course, crappy Windows in many versions
  • Microsoft follows what Apple does; for example, smart software like facial recognition on a user’s device
  • Microsoft fouled up with its Slate PC and assorted Windows on Arm efforts.

So there.

Now Microsoft is, according to the write up:

Today, Microsoft is doing the exact same lazy thing to again try to garner some excitement about legacy Windows PCs, this time by tacking an AI chat bot. And specifically, the Bing Chat bot nobody cared about before Microsoft rebranded it as Copilot. Counting the Surface tablet and Windows RT, and the time Microsoft pretended to "design" its own advanced SoC just like Apple by putting RAM on a Snapdragon, this must be Microsoft’s third major attempt to ditch Intel and deliver something that could compete with Apple’s iPad, or M-powered Macs, or even both.

The article provides a quick review of the technical innovations in Apple’s proprietary silicon. The purpose of the technology information is to make it as clear as that New Hampshire just-pressed juice that Microsoft will continue its track record of fouling up. The essay concludes with this “core” statement, flavored with the pungency of hard cider:

Things incrementally change rapidly in the tech industry, except for Microsoft and its photocopy culture.

Interesting. However, I want to point out that Microsoft created a bit of a problem for Google in January 2023. Microsoft’s president announced the company’s push into AI. Google, an ageing beastie, was caught with its claws retracted. The online advertising giant’s response was the Sundar & Prabhakar Comedy Show. It featured smart software that made factual errors and triggered the Code Red, or whatever oddball name Googlers assigned to the problem Microsoft created.

Remember: The problem was not AI. Google “invented” some of the intestines of OpenAI’s and Microsoft’s services. The kick in the stomach was marketing. Microsoft’s announcement captured attention and, much to the chagrin of the online advertising service, made Google look old and slow, not smooth and fast like those mythical US Navy Seals of technology. Google dropped the inflatable raft and appears to be struggling against a rather weak rip tide.

What Microsoft did at Build with its semi-wonky and largely unsupported AI PC announcement was marketing. The Apple essay ignores the interest in a new type of PC form factor that includes the allegedly magical smart software. Mastery of smart software means work, better grades, efficiency, and a Cybertruck filled with buckets of hog wash.

But that may not matter.

Apple, like Google, finds itself struggling to get its cider press hooked up and producing product. One can criticize the Softies for technology. But I have to admit that Microsoft is reasonably adept at marketing its AI efforts. The angst in the cited article is misdirected. Apple insiders should focus on the Microsoft marketing approach. With its AI messaging, Microsoft has avoided the craziness of the iPad advertisement that squashed creativity.

Will the AI PC work? Probably in an okay way. Has Microsoft’s AI marketing worked? It sure looks like it.

Stephen E Arnold, May 28, 2024

Big Tech and AI: Trust Us. We Just Ooze Trust

May 28, 2024

dinosaur30a_thumb_thumbThis essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Amid rising concerns, The Register reports, “Top AI Players Pledge to Pull the Plug on Models that Present Intolerable Risk” at the recent AI Seoul Summit. How do they define “intolerable?” That little detail has yet to be determined. The non-binding declaration was signed by OpenAI, Anthropic, Microsoft, Google, Amazon, and other AI heavyweights. Reporter Laura Dobberstein writes:

“The Seoul Summit produced a set of Frontier AI Safety Commitments that will see signatories publish safety frameworks on how they will measure risks of their AI models. This includes outlining at what point risks become intolerable and what actions signatories will take at that point. And if mitigations do not keep risks below thresholds, the signatories have pledged not to ‘develop or deploy a model or system at all.’”

We also learn:

“Signatories to the Seoul document have also committed to red-teaming their frontier AI models and systems, sharing information, investing in cyber security and insider threat safeguards in order to protect unreleased tech, incentivizing third-party discovery and reporting of vulnerabilities, AI content labelling, prioritizing research on the societal risks posed by AI, and to use AI for good.”

Promises, promises. And where are these frameworks so we can hold companies accountable? Hang tight, the check is in the mail. The summit produced a document full of pretty words, but as the article notes:

“All of that sounds great … but the details haven’t been worked out. And they won’t be, until an ‘AI Action Summit’ to be staged in early 2025.”

If then. After all, there’s no need to hurry. We are sure we can trust these AI bros to do the right thing. Eventually. Right?

Cynthia Murrell, May 28, 2024

Meta Mismatch: Good at One Thing, Not So Good at Another

May 27, 2024

dinosaur30a_thumb_thumbThis essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I read “While Meta Stuffs AI Into All Its Products, It’s Apparently Helpless to Stop Perverts on Instagram From Publicly Lusting Over Sexualized AI-Generated Children.” The main idea is that Meta has a problem stopping “perverts.” You know a “pervert,” don’t you? One can spot ‘em when one sees ‘em. The write up reports:

As Facebook and Instagram owner Meta seeks to jam generative AI into every feasible corner of its products, a disturbing Forbes report reveals that the company is failing to prevent those same products from flooding with AI-generated child sexual imagery. As Forbes reports, image-generating AI tools have given rise to a disturbing new wave of sexualized images of children, which are proliferating throughout social media — the Forbes report focused on TikTok and Instagram — and across the web.

What is Meta doing or not doing? The write up is short on technical details. In fact, there are no technical details. Is it possible that any online service which allows anyone to comment or upload content will end up hosting something “bad”? Running an online service requires something that most operators don’t want: spelling out an editorial policy and making decisions about what is appropriate or inappropriate for an “audience.” Note that I have converted digital addicts into an audience, albeit one that participates.


Two fictional characters are supposed to be working hard and doing their level best. Thanks, MSFT Copilot. How has that Cloud outage affected the push to more secure systems? Hello, hello, are you there?

Editorial policies require considerable intellectual effort, crafted workflow processes, and oversight. Who does the overseeing? In the good old days when publishing outfits like John Wiley & Sons-type or Oxford University Press-type outfits were gatekeepers, individuals who met the cultural standards were able to work their way up the bureaucratic rock wall. Now the mantra is the same as the probability-based game show with three doors and “Come on down!” Okay, “users” come on down, wallow in anonymity, exploit a lack of consequences, and surf on the darker waves of human thought. Online makes clear that people who read Kant, volunteer to help the homeless, and respect the rights of others are often at risk from the denizens of the psychological night.

Personally I am not a Facebook person, a user of Instagram, or a person requiring the cloak of a WhatsApp logo. Futurism takes a reasonable stand:

it’s [Meta, Facebook, et al] clearly unable to use the tools at its disposal, AI included, to help stop harmful AI content created using similar tools to those that Meta is building from disseminating across its own platforms. We were promised creativity-boosting innovation. What we’re getting at Meta is a platform-eroding pile of abusive filth that the company is clearly unable to manage at scale.

How long has Meta been trying to be a squeaky-clean information purveyor? Is the article going overboard?

I don’t have answers, but after years of verbal fancy dancing, progress may be parked at a rest stop on the information superhighway. Who is the driver of the Meta construct? If you know, that is the person to whom one must address suggestions about content. What if that entity does not listen and act? Government officials will take action, right?

PS. Is it my imagination or is Futurism.com becoming a bit more strident?

Stephen E Arnold, May 27, 2024

Silicon Valley and Its Bad Old Days? You Mean Today Days I Think

May 23, 2024

dinosaur30a_thumb_thumbThis essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Sam AI-Man knows how to make headlines. I wonder if he is aware of his PR prowess. The kids in Redmond tried their darnedest to make the “real” media melt down with an AI PC. And what happens? Sam AI-Man engages in the Scarlett Johansson voice play. Now whom does one believe? Did Sam AI-Man take umbrage at Ms. Johansson’s refusal to lend her voice to ChatGPTo? Did she recognize an opportunity to claim the digital voice available on ChatGPTo as “hers,” fully aware she could become a household name? Star power may relate to visibility in the “real” media, not the wonky technology blogs.


It seems to be a mess, doesn’t it? Thanks, MSFT Copilot. What happened to good old Bing, DuckDuckGo, and other services on the morning of May 23, 2024? Oh, well, the consequences of close-enough-for-horseshoes thinking perhaps?

And how do I know the dust up is “real”? There’s the BBC’s story “Scarlett Johansson’s AI Row Has Echoes of Silicon Valley’s Bad Old Days.” I will return to this particularly odd write up in a moment. Also there is the black hole of money (the estimable Washington Post) and its paywalled story “Scarlett Johansson Says OpenAI Copied Her Voice after She Said No.” “Her” is the title of a Hollywood-type movie, not a TikTok confection.

Let’s look at the $77 million in losses outfit’s story first. The WaPo reports:

In May, two days before OpenAI planned to demonstrate the technology, Altman contacted her again, asking her to reconsider, she said. Before she could respond, OpenAI released a demo of its improved audio technology, featuring a voice called “Sky.” Many argued the coquettish voice — which flirted with OpenAI employees in the presentation — bore an uncanny resemblance to Johansson’s character in the 2013 movie “Her,” in which she performed the voice of a super-intelligent AI assistant. “When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference,” Johansson wrote. “Mr. Altman even insinuated that the similarity was intentional, tweeting a single word ‘her’ — a reference to the film in which I voiced a chat system, Samantha, who forms an intimate relationship with a human,” she added.

I am not sure that an AI could improve this tight narrative. (We won’t have to wait long before AI writes WaPo stories, I have heard. Why? Maybe those $77 million in losses.)

Now let’s look at the BBC’s article with the reference to “bad old days.” The write up reports:

“Move fast and break things” is a motto that continues to haunt the tech sector, some 20 years after it was coined by a young Mark Zuckerberg. Those five words came to symbolize Silicon Valley at its worst – a combination of ruthless ambition and a rather breathtaking arrogance – profit-driven innovation without fear of consequence. I was reminded of that phrase this week when the actor Scarlett Johansson clashed with OpenAI.

Sam AI-Man’s use of a digital voice which some assert “sounds” like Ms. Johansson’s voice is a reminder of the “bad old days.” One question: When did the Silicon Valley “bad old days” come to an end?

Opportunistic tactics require moving quickly. Whether something is broken or not is irrelevant. Look at Microsoft. Once again Sam AI-Man was able to attract attention. Google’s massive iteration of the technological equivalent of herring nine ways found itself “left of bang.” Sam AI-Man announced ChatGPTo the day before the Sundar & Prabhakar In and Out Review.

Let’s summarize:

  1. Sam AI-Man got publicity by implementing an opportunistic tactic. Score one for the AI-Man
  2. Ms. Johansson scored one because she was in the news and she may have a legal play, but that will take months to wend its way through the US legal system
  3. Google and Microsoft scored zero. Google played second fiddle to the ChatGPTo thing, and Microsoft was caught in the exhaust of the Sam AI-Man voice blast.

Now when did the “bad old days” of Silicon Valley end? Exactly never. It is so easy to say, “I’m sorry. So sorry.”

Stephen E Arnold, May 23, 2024
