Amazon Customer Service: Let Many Flowers Bloom and Die on the Vine

November 29, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Amazon has been outputting artificial intelligence “assertions” at a furious pace. What’s clear is that Amazon is “into” the volume and variety business, in my opinion. The logic of offering multiple “works in progress” and getting them to work reasonably well has three characteristics: The first is that deploying and operating different smart software systems is going to be expensive. The second is that tuning and maintaining high levels of accuracy in the outputs will be expensive. The third is that supporting the users, partners, customers, and integrators is going to be expensive. If we apply a bit of freshman high school algebra, the common factor is expensive. Amazon’s remarkable assertion that no one wants to bet a business on just one model strikes me as a bit out of step with the world in which bean counters scuttle and scurry in green eyeshades and sleeve protectors. (See. I am a dinobaby. Sleeve protectors. I bet none of the OpenAI type outfits have accountants who use these fashion accessories!)

Let’s focus on just one facet of the expensive burdens I touched upon above: customer service. Navigate to the remarkable and stunningly uncritical write up called “How to Reach Amazon Customer Service: A Complete Guide.” The write up is an earthworm list of the “options” Amazon provides. As Amazon was announcing its new new big big things, I was trying to figure out why an order for an $18 product was rejected. The item in question was one part of a multipart order. The other, more costly items were approved and billed to my Amazon credit card.


Thanks, MSFT Copilot. You do a nice broken bulldozer, or at least a good enough one.

But the dog treats?

I systematically worked through the Amazon customer service options. As a Prime customer, I assumed one of them would work. Here’s my report card:

  • Amazon’s automated help. A loop. See Help pages, which suggested I navigate to the customer service page. Cute. A first-year comp sci student’s programming error. A loop right out of the box. Nifty. (See the sketch after this list.)
  • The customer service page. Well, that page sent me to Help, and Help sent me to the automation loop. Cool. Zero for two.
  • Access through the Amazon app. Nope. I don’t install “apps” on my computing devices unless I have zero choice. (Yes, I am thinking about Apple and Google.) Too bad Amazon, I reject your app the way I reject QR codes used by restaurants. (Do these hash slingers know that QR codes are a fave of some bad actors?)
  • Live chat with Amazon customer service was not live. It was a bot. The suggestion? Get back in the loop. Maybe the chat staff was at the Amazon AI announcement, or was severely understaffed, or simply did not care. Another loser.
  • Request a call from Amazon customer service. Yeah, I got to that after I called Amazon customer service. Another loser.
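For readers who want that jibe made concrete, here is a minimal sketch (purely illustrative Python; the route names are hypothetical, and this is not Amazon’s actual code) of a help system in which every option points back at another option, so the automated triage never reaches a human:

```python
# Hypothetical routing table: each help destination redirects to another one.
ROUTES = {
    "automated_help": "customer_service_page",
    "customer_service_page": "help_pages",
    "help_pages": "automated_help",
}

def route_request(start: str, max_hops: int = 10) -> str:
    """Follow redirects; detect the cycle instead of spinning forever."""
    seen = set()
    current = start
    for _ in range(max_hops):
        if current in seen:
            return f"loop detected at '{current}'; no human reached"
        seen.add(current)
        # Any destination not in the table would fall through to a person.
        current = ROUTES.get(current, "human_agent")
    return current

if __name__ == "__main__":
    print(route_request("automated_help"))
    # -> loop detected at 'automated_help'; no human reached
```

The cycle check is the part the first-year programmer forgets; without it, the code spins forever, which is roughly how the customer service experience felt.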

I repeated the “call Amazon customer service” routine twice, and I finally worked through the automated system and got a person who barely spoke English. I explained the problem: one product was rejected because my Amazon credit card was rejected. I learned that this particular customer service expert did not understand how that could have happened. Yeah, great work.

How did I resolve the rejected credit card? I called the Chase Bank customer service number. I told a person my card was manipulated and I suspected fraud. I was escalated to someone who understood the word “fraud.” After about five minutes of “Will you please hold,” the Chase person told me, “The problem is at Amazon, not your card and not Chase.”

What was the fix? Chase said, “Cancel the order.” I did and went to another vendor.

Now what’s that experience suggest about Amazon’s ability (willingness) to provide effective, efficient customer support to users of its purported multiple large language models, AI systems, and assorted marketing baloney output during Amazon’s “we are into AI” week?

My answer? The Bezos bulldozer has an engine belching black smoke, making a lot of noise because the muffler has a hole in it, and the thumpity thump of the engine reveals that something is out of tune.

Yeah, AI and customer support. Just one of the “expensive” things Amazon may not be able to deliver. The troubling thing is that Amazon’s AI may have been powering the multiple customer support systems. Yikes.

Stephen E Arnold, November 29, 2023

Is YouTube Marching Toward Its Waterloo?

November 28, 2023

This essay is the work of a dumb dinobaby. No smart software required.

I have limited knowledge of the craft of warfare. I do have a hazy recollection that Napoleon found himself at the wrong end of a pointy stick at the Battle of Waterloo. I do recall that Napoleon lost the battle and experienced the domino effect which knocked him down a notch or two. He ended up on the island of Saint Helena in the south Atlantic Ocean with Africa a short 1,200 miles to the east. But Nappy had no mobile phone, no yacht purchased with laundered money, and no Internet. Losing has its downsides. Bummer. No empire.

I thought about Napoleon when I read “YouTube’s Ad Blocker Crackdown Heats Up.” The question I posed to myself was, “Is the YouTube push for subscription revenue and unfettered YouTube user data collection a road to Google’s Battle of Waterloo?”


Thanks, MSFT Copilot. You have a knack for capturing the essence of a loser. I love good enough illustrations too.

The cited article from Channel News reports:

YouTube is taking a new approach to its crackdown on ad-blockers by delaying the start of videos for users attempting to avoid ads. There were also complaints by various X (formerly Twitter) users who said that YouTube would not even let a video play until the ad blocker was disabled or the user purchased a YouTube Premium subscription. Instead of an ad, some sources using Firefox and Edge browsers have reported waiting around five seconds before the video launches the content. According to users, the Chrome browser, which the streaming giant shares an owner with, remains unaffected.

If the information is accurate, Google is taking steps to damage what the firm has called the “user experience.” The idea is that users who want to watch “free” videos have a choice:

  1. Put up with delays, pop ups, and mindless appeals to pay Google to show videos from people who may or may not be compensated by the Google
  2. Just fork over a credit card and let Google collect about $150 per year until the rates go up. (The cable TV and mobile phone billing model is alive and well in the Google ecosystem.)
  3. Experiment with advertisement blocking technology and accept the risk of being banned from Google services
  4. Learn to love TikTok, Instagram, DailyMotion, and Bitchute, among other options available to a penny-conscious consumer of user-produced content
  5. Quit YouTube and new-form video. Buy a book.

What happened to Napoleon before the really great decision to fight Wellington in a lovely part of Belgium? Waterloo is about nine miles south of the wonderful, diverse city of Brussels. Napoleon did not have a drone to send images of the rolling farmland, where the “enemies” were located, or of the availability of something behind which to hide. Despite Nappy’s fine experience in his march to Russia, he muddled forward. Despite allegedly having said, “The right information is nine-tenths of every battle,” the Emperor entered battle, suffered 40,000 casualties, and ended up in what is today a bit of a tourist hot spot. In 1816, it was somewhat less enticing. Ordering troops to charge uphill against a septuagenarian’s forces was arguably as stupid as walking to Russia as snowflakes began to fall.

How does this Waterloo tale relate to the YouTube fight now underway? I see several parallels:

  1. Google’s senior managers, informed by the management lore of 25 years of unfettered operation, know that users can be knocked along a path of the firm’s choice. Think sheep. But sheep can be disorderly. One must watch sheep.
  2. The need to stem the rupturing of cash required to operate a massive “free” video service is another one of those Code Yellow and Code Red events for the company. With search known to be under threat from Sam AI-Man and the specters of “findability” AI apps, the loss of traffic could be catastrophic. Despite Google’s financial fancy dancing, costs are a bit of a challenge: New hardware costs money, options like making one’s own chips cost money, allegedly smart people cost money, marketing costs money, legal fees cost money, and maintaining the once-free SEO ad sales force costs money. Got the message? Expenses are a problem for the Google, in my opinion.
  3. The threat of either TikTok or Instagram going long form remains. If these two outfits don’t make a move on YouTube, there will be some innovator who will. The price of “move fast and break things” means that the Google can be broken by an AI surfer. My team’s analysis suggests it is more brittle today than at any previous point in its history. The legal dust-up with Yahoo about the Overture / GoTo issue was trivial compared to the cost control challenge and the AI threat. That’s a one-two punch for the Google management wizards to solve. Making sense of the Critique of Pure Reason is a much easier task in my view.

The cited article includes a statement which is likely to make some YouTube users uncomfortable. Here’s the statement:

Like other streaming giants, YouTube is raising its rates with the Premium price going up to $13.99 in the U.S., but users may have to shell out the money, and even if they do, they may not be completely free of ads.

What does this mean? My interpretation is that [a] even if you pay, a user may see ads; that is, paying does not eliminate ads in perpetuity; and [b] the fee is not permanent; that is, Google can increase it at any time.

Several observations:

  1. Google faces high-cost issues from different points of the business compass: Legal in the US and EU, commercial from known competitors like TikTok and Instagram, and psychological from innovators who find a way to use smart software to deliver a more compelling video experience for today’s users. These costs are not measured solely in financial terms. There is also the mental stress of what will percolate from the seething mass of AI entrepreneurs. Nappy did not sleep too well after Waterloo. Too much Beef Wellington, perhaps?
  2. Google’s management methods have proven appropriate for generating revenue from an ad model in which Google controls the billing touch points. When those management techniques are applied to non-controllable functions, they fail. The hallmark of the management misstep is the handling of Dr. Timnit Gebru, a squeaky wheel in the Google AI content marketing machine. There is nothing quite like stifling a dissenting voice, the squawk of a parrot, and a don’t-let-the-door-hit-you-when-you-leave moment.
  3. The post-Covid, continuous warfare, and unsteady economic environment is causing the social fabric to fray and, in some cases, tear. This means that users may become contentious and receptive to spontaneous flash mob action toward Google and YouTube. User revolt at scale is not something Google has demonstrated a core competence in handling.

Net net: I will get my microwave popcorn and watch this real-time Google Boogaloo unfold. Will a recipe become famous? How about Grilled Google en Croute?

Stephen E Arnold, November 28, 2023

Bogus Research Papers: They Are Here to Stay

November 27, 2023

This essay is the work of a dumb dinobaby. No smart software required.

“Science Is Littered with Zombie Studies. Here’s How to Stop Their Spread” is a Don Quixote-type write up. The good Don went to war against windmills. The windmills did not care. The people watching Don and his trusty sidekick did not care, and many found the sight of a person of stature trying to gore a mill somewhat amusing.


A young researcher meets the ghosts of fake, distorted, and bogus information. These artefacts of a loss of ethical fabric wrap themselves around the peer-reviewed research available in many libraries and in for-fee online databases. When was the last time you spotted a correction to a paper in an online database? Thanks, MSFT Copilot. After several tries I got ghosts in a library. Wow, that was a task.

Fake research, non-reproducible research, and intellectual cheating like the exemplars at Harvard’s ethics department and the carpetland of Stanford’s former president’s office seem commonplace today.

The Hill’s article states:

Just by citing a zombie publication, new research becomes infected: A single unreliable citation can threaten the reliability of the research that cites it, and that infection can cascade, spreading across hundreds of papers. A 2019 paper on childhood cancer, for example, cites 51 different retracted papers, making its research likely impossible to salvage. For the scientific record to be a record of the best available knowledge, we need to take a knowledge maintenance perspective on the scholarly literature.
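The cascade described in that passage is, at bottom, a graph-traversal problem. Below is a minimal sketch (illustrative Python only; the paper names and citation links are invented, and this is not the cited study’s method) of how a retraction “infection” can be traced by walking reverse-citation edges outward from the retracted papers:

```python
from collections import deque

# Hypothetical citation data: paper -> list of papers it cites.
CITES = {
    "childhood_cancer_2019": ["retracted_a", "retracted_b", "solid_c"],
    "follow_up_2021": ["childhood_cancer_2019"],
    "review_2022": ["follow_up_2021", "solid_c"],
    "solid_c": [],
    "retracted_a": [],
    "retracted_b": [],
}
RETRACTED = {"retracted_a", "retracted_b"}

def tainted_papers(cites, retracted):
    """Return every paper that cites a retracted paper, directly or transitively."""
    cited_by = {}  # reverse edges: paper -> papers that cite it
    for paper, refs in cites.items():
        for ref in refs:
            cited_by.setdefault(ref, []).append(paper)
    tainted, queue = set(), deque(retracted)
    while queue:
        current = queue.popleft()
        for citer in cited_by.get(current, []):
            if citer not in tainted:
                tainted.add(citer)
                queue.append(citer)
    return tainted

print(sorted(tainted_papers(CITES, RETRACTED)))
# ['childhood_cancer_2019', 'follow_up_2021', 'review_2022']
```

One retracted source taints everything downstream of it, which is why the quoted example resting on 51 retracted papers is, in the article’s words, likely impossible to salvage.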

The idea is interesting. It shares a bit with the notion of technical debt (the costs accrued by not fixing up older technology) and with some of the GenX, GenY, and GenZ notions of “what’s right.” The article sidesteps a couple of thorny bushes on its way to the Promised Land of Integrity.

The academic paper is designed to accomplish several things. First, it is a demonstration of one’s knowledge value. “Hey, my peers said this paper was fit to publish,” some authors say. Yeah, as a former peer reviewer, I want to tell you that harsh criticism is not what the professional publisher wanted. “These papers mean income. Don’t screw up the cash flow,” was the message I heard.

Second, the professional publisher certainly does not want to spend the resources (time and money) required to do crapola archeology. The focus of a professional publisher is to make money by publishing information to niche markets and charging as much money as possible for that information. Academic accuracy, ethics, and idealistic hand waving are not part of the Officers’ Meetings at some professional publisher off-sites. The focus is on cost reduction, market capture, and beating the well-known cousins in other companies who compete with one another. The goal is not the best life partner; the objective is revenue and profit margin.

Third, the academic bureaucracy has to keep alive the mechanisms for brain stratification. Therefore, publishing something “groundbreaking” in a blog or putting the information in a TikTok simply does not count. In fact, despite the brilliance of the information, the vehicle is not accepted. No modern institution building its global reputation and its financial services revenue wants to accept a person unless that individual has been published in a peer reviewed journal of note. Therefore, no one wants to look at data or a paper. The attention is on the paper’s appearing in the peer reviewed journal.

Who pays for this knowledge garbage? The answer is [a] libraries, which have to “get” the journals departments identify as significant; [b] the US government, which funds quite a bit of baloney and hocus pocus research via grants; and [c] the authors of the paper, who have to pay for proofs, corrections, and goodness knows what else before the paper is enshrined in a peer-reviewed journal.

Who fixes the baloney? No one. The content is either accepted as accurate and never verified or the researcher cites that which is perceived as important. Who wants to criticize one’s doctoral advisor?

News flash: The prevalence and amount of crapola is unlikely to change. In fact, with the easy availability of smart software, the volume of bad scholarly information is likely to increase. Only the disinformation entities working for nation states hostile to the US of A will outpace US academics in the generation of bogus information.

Net net: The wise researcher will need to verify a lot. But that’s work. So there we are.

Stephen E Arnold, November 27, 2023

Another Xoogler and More Process Insights

November 23, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Google employs many people. Over the last 25 years, quite a few Xooglers (former Google employees) have ended up out and about. I find the essays by the verbal Xooglers interesting. “Reflecting on 18 Years at Google” contains several intriguing comments. Let me highlight a handful of these. You will want to read the entire Hixie article to get the context for the snips I have selected.

The first point I underlined with blushing pink marker was:

I found it quite frustrating how teams would be legitimately actively pursuing ideas that would be good for the world, without prioritizing short-term Google interests, only to be met with cynicism in the court of public opinion.


Old timers share stories about the golden past in the high-technology of online advertising. Thanks, Copilot, don’t overdo the schmaltz.

The “Google as a victim” notion is not often discussed — except by some Xooglers. I recall a comment made to me by a seasoned manager at another firm: “Yes, I am paranoid. They are out to get me.” That comment may apply to some professionals at Google.

How about this passage?

My mandate was to do the best thing for the web, as whatever was good for the web would be good for Google (I was explicitly told to ignore Google’s interests).

The oft-repeated idea that Google cares about its users and similar truisms are part of what I call the Google mythology. Intentionally, in my opinion, Google cultivates the “doing good” theme as part of its effort to distract observers from the actual engineering intent of the company. (You love those Google ads, don’t you?)

Google’s creative process is captured in this statement:

We essentially operated like a startup, discovering what we were building more than designing it.

I am not sure if this is part of Google’s effort to capture the “spirit” of the old-timey days of Bell Laboratories or an accurate representation of how directionless Google’s methods became over the years. What people “did” is clearly dissociated from the advertising mechanisms of the ageing vehicle onto which the oversized tires and chrome do-dads were bolted.

And, finally, this statement:

It would require some shake-up at the top of the company, moving the center of power from the CFO’s office back to someone with a clear long-term vision for how to use Google’s extensive resources to deliver value to users.

What happened to the ideas of doing good and exploratory innovation?

Net net: Xooglers pine for the days of the digital gold rush. Googlers may not be aware of what the company is and does. That may be a good thing.

Stephen E Arnold, November 23, 2023

OpenAI: What about Uncertainty and Google DeepMind?

November 20, 2023

This essay is the work of a dumb dinobaby. No smart software required.

A large number of write ups about Microsoft and its response to the OpenAI management move populate my inbox this morning (Monday, November 20, 2023).

To give you a sense of the number of poohbahs, mavens, and “real” journalists covering Microsoft’s hiring of Sam (AI-Man) Altman, I offer this screen shot of Techmeme.com taken at 11:00 am US Eastern time:


A single screenshot cannot do justice to the digital bloviating on this subject as well as related matters.

I did a quick scan because I simply don’t have the time at age 79 to read every item in this single headline service. Therefore, I admit that others may have thought about the impact of the Steve Jobs-like termination, the revolt of some AI wizards, and Microsoft’s creating a new “company” and hiring Sam AI-Man and a pride of his cohorts in the span of 72 hours (give or take time for biobreaks).

In this short essay, I want to hypothesize about how the news has been received by that merry band of online advertising professionals.

To begin, I want to suggest that the turmoil about who is on first at OpenAI sent a low voltage signal through the collective body of the Google. Frisson resulted. Uncertainty and opportunity appeared together like the beloved Scylla and Charybdis, the old pals of Ulysses. The Google found its right and left Brainiac hemispheres considering that OpenAI would experience a grave setback, thus clearing a path for Googzilla alone. Then one of the Brainiac hemispheres reconsidered and perceived a grave threat from the split. In short, the Google tipped into its zone of uncertainty.


A group of online advertising experts meets to consider the news that Microsoft has hired Sam Altman. The group looks unhappy. Uncertainty is an unpleasant factor in some business decisions. Thanks, Microsoft Copilot. You captured the spirit of how some Silicon Valley wizards are reacting to the OpenAI turmoil because Microsoft used the OpenAI termination of Sam Altman as a way to gain the upper hand in the cloud and enterprise app AI sector.

Then the matter appeared to shift back to the pre-termination announcement. The co-founder of OpenAI gained more information about the number of OpenAI employees who were planning to quit or, even worse, start posting on Instagram, WhatsApp, and TikTok. (X.com is no longer considered the go-to place by the in crowd.)

The most interesting development was not that Sam AI-Man would return to the welcoming arms of OpenAI. No, Sam AI-Man and another senior executive were going to hook up with the geniuses of Redmond. A new company would be formed with Sam AI-Man in charge.

As these actions unfolded, the Googlers sank under a heavy cloud of uncertainty. What if the Softies could use Google’s own open source methods, integrate rumored Microsoft-developed AI capabilities, and make good on Sam AI-Man’s vision of an AI application store?

The Googlers found themselves reading every “real news” item about the trajectory of Sam AI-Man and Microsoft’s new AI unit. The uncertainty has morphed into another January 2023 Davos moment. Here’s my take as of 2:30 pm US Eastern, November 20, 2023:

  1. The Google faces a significant threat when it comes to enterprise AI apps. Microsoft has a lock on law firms, the government, and a number of industry sectors. Google has a presence, but when it comes to go-to apps, Microsoft is the Big Dog. More and better AI raises the specter of Microsoft putting an effective laser defense behind its existing enterprise moat.
  2. Microsoft can push its AI functionality as the Azure difference. Furthermore, whether Google or Amazon, for that matter, asserts its cloud AI is better, Microsoft can argue, “We’re better because we have Sam AI-Man.” That is a compelling argument for government and enterprise customers who cannot imagine work without Excel and PowerPoint. Put more AI in those apps, and existing customers will resist blandishments from other cloud providers.
  3. Google now faces an interesting problem: Its own open source code could be converted into a death ray, enhanced by Sam AI-Man, and directed at the Google. The irony of Googzilla having its left claw vaporized by its own technology is going to be more painful than Satya Nadella rolling out another Davos “we’re doing AI” announcement.

Net net: The OpenAI machinations are interesting to many companies. To the Google, the OpenAI event and the Microsoft response is like an unsuspecting person getting zapped by Nikola Tesla’s coil. Google’s mastery of high school science club management techniques will now dig into the heart of its DeepMind.

Stephen E Arnold, November 20, 2023

Google Pulls Out a Rhetorical Method to Try to Win the AI Spoils

November 20, 2023

This essay is the work of a dumb dinobaby. No smart software required.

In high school in 1958, our debate team coach yapped about “framing.” The idea was new to me, and Kenneth Camp pounded it into our debate team’s collective “head” for the four years of my high school tenure. Not surprisingly, when I read “Google DeepMind Wants to Define What Counts As Artificial General Intelligence,” I jumped back in time 65 years (!) to Mr. Camp’s explanation of framing and how one can control the course of a debate with the technique.

Google should not have to use a rhetorical trick to make its case as the quantum wizard of online advertising and universal greatness. With its search and retrieval system, the company can boost, shape, and refine any message it wants. If those methods fall short, the company can slap on a “filter” or “change its rules” and deprecate certain Web sites and their messages.

But Google values academia, even if the university is one that welcomed a certain Jeffrey Epstein into its fold. (Do you remember the remarkable Jeffrey Epstein? Some of those whom he touched do, I believe.) The estimable Google is the subject of the referenced article in the MIT-linked Technology Review.

From my point of view, the big idea in the write up is, and I quote:

To come up with the new definition, the Google DeepMind team started with prominent existing definitions of AGI and drew out what they believe to be their essential common features. The team also outlines five ascending levels of AGI: emerging (which in their view includes cutting-edge chatbots like ChatGPT and Bard), competent, expert, virtuoso, and superhuman (performing a wide range of tasks better than all humans, including tasks humans cannot do at all, such as decoding other people’s thoughts, predicting future events, and talking to animals). They note that no level beyond emerging AGI has been achieved.

Shades of high school debate practice and the chestnuts scattered about the rhetorical camp fire as John Schunk, Jimmy Bond, and a few others (including the young dinobaby me) learned how one can set up a frame, populate the frame with logic and facts supporting the frame, and then point out during rebuttal that our esteemed opponents were not able to dent our well formed argumentative frame.

Is Google the optimal source for a definition of artificial general intelligence, something which does not yet exist? Is Google’s definition more useful than a science fiction writer’s or a scene from a Hollywood film?

Even the trusted online source points out:

One question the researchers don’t address in their discussion of _what_ AGI is, is _why_ we should build it. Some computer scientists, such as Timnit Gebru, founder of the Distributed AI Research Institute, have argued that the whole endeavor is weird. In a talk in April on what she sees as the false (even dangerous) promise of utopia through AGI, Gebru noted that the hypothetical technology “sounds like an unscoped system with the apparent goal of trying to do everything for everyone under any environment.” Most engineering projects have well-scoped goals. The mission to build AGI does not. Even Google DeepMind’s definitions allow for AGI that is indefinitely broad and indefinitely smart. “Don’t attempt to build a god,” Gebru said.

I am certain it is an oversight, but the telling comment comes from an individual who may have spoken out about Google’s systems and methods for smart software.


Mr. Camp, the high school debate coach, explains how a rhetorical trope can gut even those brilliant debaters from other universities. (Yes, Dartmouth, I am still thinking of you.) Google must have had a “coach” skilled in the power of framing. The company is making a bold move to define that which does not yet exist and something whose functionality is unknown. Such is the expertise of the Google. Thanks, Bing. I find your use of people of color interesting. Is this a pre-Sam ouster or a post-Sam ouster function?

What do we learn from the write up? In my view of the AI landscape, we are given some insight into Google’s belief that its rhetorical trope packaged as content marketing within an academic-type publication will lend credence to the company’s push to generate more advertising revenue. You may ask, “But won’t Google make oodles of money from smart software?” I concede that it will. However, the big bucks for the Google come from those willing to pay for eyeballs. And that, dear reader, translates to advertising.

Stephen E Arnold, November 20, 2023

Adobe: Delivers Real Fake War Images

November 17, 2023

This essay is the work of a dumb humanoid. No smart software required.

Gee, why are we not surprised? Crikey reveals, “Adobe Is Selling Fake AI Images of the War in Israel-Gaza.” While Adobe did not set out to perpetuate fake news about the war, neither did it try very hard to prevent it. Reporter Cam Wilson writes:

“As part of the company’s embrace of generative artificial intelligence (AI), Adobe allows people to upload and sell AI images as part of its stock image subscription service, Adobe Stock. Adobe requires submitters to disclose whether they were generated with AI and clearly marks the image within its platform as ‘generated with AI’. Beyond this requirement, the guidelines for submission are the same as any other image, including prohibiting illegal or infringing content. People searching Adobe Stock are shown a blend of real and AI-generated images. Like ‘real’ stock images, some are clearly staged, whereas others can seem like authentic, unstaged photography. This is true of Adobe Stock’s collection of images for searches relating to Israel, Palestine, Gaza and Hamas. For example, the first image shown when searching for Palestine is a photorealistic image of a missile attack on a cityscape titled ‘Conflict between Israel and Palestine generative AI’. Other images show protests, on-the-ground conflict and even children running away from bomb blasts — all of which aren’t real.”

Yet these images are circulating online, adding to the existing swirl of misinformation. Even several small news outlets have used them with no disclaimers attached. They might not even realize the pictures are fake.

Or perhaps they do. Wilson consulted RMIT’s T.J. Thomson, who has been researching the use of AI-generated images. He reports that, while newsrooms are concerned about misinformation, they are sorely tempted by the cost-savings of using generative AI instead of on-the-ground photographers. One supposes photographer safety might also be a concern. Is there any stuffing this cat into the bag, or must we resign ourselves to distrusting any images we see online?

A loss suffered in the war is real. Need an image of this?

Cynthia Murrell, November 17, 2023

Buy Google Traffic: Nah, Paying May Not Work

November 16, 2023

This essay is the work of a dumb humanoid. No smart software required.

Tucked into a write up about the less than public trial of the Google was an interesting factoid. The source of the item was “More from the US v Google Trial: Vertical Search, Pre-Installs and the Case of Firefox / Yahoo.” Here’s the snippet:

Expedia execs also testified about the cost of ads and how increases had no impact on search results. On October 19, Expedia’s former chief operating officer, Jeff Hurst, told the court the company’s ad fees increased tenfold from $21 million in 2015 to $290 million in 2019. And yet, Expedia’s traffic from Google did not increase. The implication was that this was due to direct competition from Google itself. Hurst pointed out that Google began sharing its own flight and hotel data in search results in that period, according to the Seattle Times.


“Yes, sir, you can buy a ticket and enjoy our entertainment,” says the theater owner. The customer asks, “Is the theater in good repair?” The ticket seller replies, “Of course, you get your money’s worth at our establishment. Next.” Thanks, Microsoft Bing. It took several tries before I gave up.

I am a dinobaby, and I am, by definition, hopelessly out of it. However, I interpret this passage in this way:

  1. Despite protestations about the Google algorithm’s objectivity, Google has knobs and dials it can use to cause the “objective” algorithm to be just a teenie weenie less objective. Is this a surprise? Not to me. Who builds a system without a mechanism for controlling what it does? My favorite example of this steering involves the original FirstGov.gov search system circa 2000. After Mr. Clinton’s administration left office, the new administration (home to a former Halliburton executive) wanted a certain Web page result to appear when certain terms were searched. No problemo. Why? Who builds a system one cannot control? Not me. My hunch is that Google may have a similar affection for knobs and dials.
  2. Expedia learned that buying advertising from a competitor (Google) was expensive and then got more expensive. The jump from $21 million to $290 million is modest from the point of view of some technology feudalists. To others the increase is stunning.
  3. Paying more money did not result in an increase in clicks or traffic. Again I was not surprised. What caught my attention is that it has taken decades for others to figure out how the digital highwaymen came riding like a wolf on the fold. Instead of being bedecked with silver and gold, these actors wore those cheerful kindergarten colors. Oh, those colors are childish, but those wearing them carried away the silver and gold it seems.

Net net: Why is this US v Google trial not more public? Why so many documents withheld? Why is redaction the best billing tactic of 2023? So many questions that this dinobaby cannot answer. I want to go for a ride in the Brin-A-Loon too. I am a simple dinobaby.

Stephen E Arnold, November 16, 2023

An Odd Couple Sharing a Soda at a Holiday Data Lake

November 16, 2023

What happens when love strikes the senior managers of the technology feudal lords? I will tell you what happens — Love happens. The proof appears in “Microsoft and Google Join Forces on OneTable, an Open-Source Solution for Data Lake Challenges.” Yes, the lakes around Redmond can be a challenge. For those living near Googzilla’s stomping grounds, the risk is that a rising sea level will nuke the outdoor recreation areas and flood the parking lots.

But any speed dating between two techno feudalists is news. The “real news” outfit Venture Beat reports:

In a new open-source partnership development effort announced today, Microsoft is joining with Google and Onehouse in supporting the OneTable project, which could reshape the cloud data lake landscape for years to come

And what does “reshape” mean to these outfits? Probably nothing more than making sure that Googzilla and Mothra become the suppliers to those who want to vacation at the data lake. Come to think of it. The concessions might be attractive as well.


Googzilla says to Mothra-Soft, a beast living on Mercer Island, “I know you live on the lake. It’s a swell nesting place. I think we should hook up and cooperate. We can share the money from merged data transfers the way you and I — you good-looking Lepidoptera — are sharing this malted milk. Let’s do more together if you know what I mean.” The delightful Mothra-Soft croons, “I thought you would wait until our high school reunion to ask, big boy. Let’s find a nice, moist, uncrowded place to consummate our open source deal, handsome.” Thanks, Microsoft Bing. You did a great job of depicting a senior manager from the company that developed Bob, the revolutionary interface.

The article continues:

The ability to enable interoperability across formats is critical for Google as it expands the availability of its BigQuery Omni data analytics technology. Kazmaier said that Omni basically extends BigQuery to AWS and Microsoft Azure and it’s a service that has been growing rapidly. As organizations look to do data processing and analytics across clouds there can be different formats and a frequent question that is asked is how can the data landscape be interconnected and how can potential fragmentation be stopped.

Is this alleged linkage important? Yeah, it is. Data lakes are great places to park AI training data. Imagine the intelligence one can glean monitoring inflows and outflows of bits. To make the idea more interesting, think in terms of the metadata. Exciting, because open source software is really for the little guys too.

Stephen E Arnold, November 16, 2023

SolarWinds: Huffing and Puffing in a Hot Wind on a Sunny Day

November 16, 2023

This essay is the work of a dumb humanoid. No smart software required.

Remember the SolarWinds’ misstep? Time has a way of deleting memories of security kerfuffles. Who wants to recall ransomware, loss of data, and the general embarrassment of getting publicity for the failure of existing security systems? Not too many. A few victims let off steam by blaming their cyber vendors. Others — well, one — relieve their frustrations by emulating a crazed pit bull chasing an M1A2 battle tank. The pit bull learns that the M1A2 is not going to stop and wait for the pit bull to stop barking and snarling. The tank grinds forward, possibly over Solar (an unlikely name for a pit bull in my opinion).


The slick business professional speaks to a group of government workers gathered outside on the sidewalk of 100 F Street NW. The talker is semi-shouting, “Your agency is incompetent. You are unqualified. My company knows how to manage our business, security, and personnel affairs.” I am confident this positive talk will win the hearts and minds of the GS-13s listening. Thanks, Microsoft Bing. You obviously have some experience with government behaviors.

I read “SolarWinds Says SEC Sucks: Watchdog Lacks Competence to Regulate Cybersecurity.” The headline attributes the statement to a company. My hunch is that the criticism of the SEC likely comes from someone other than the firm’s legal counsel, the firm’s CFO, or its PR team.

The main idea, of course, is that SolarWinds should not be sued by the US Securities & Exchange Commission. The SEC does have special agents, but no criminal authority. However, like many US government agencies and their Offices of Inspector General, its investigators can make life interesting for those in whom the US government agency has an interest. (Tip: Avoid getting crossways with a US government agency. The people may change, but the “desks” persist through time along with documentation of actions. The business processes in the US government mean that people and organizations of interest can be subject to scrutiny. Like the poem says, “Time cannot wither nor custom spoil the investigators’ persistence.”)

The write up presents information obtained from a public blog post by the victim of a cyber incident. I call the incident a misstep because I am not sure how many organizations, software systems, people, and data elements were negatively whacked by the bad actors. In general, the idea is that a bad actor should not be able to compromise commercial outfits.

The write up reports:

SolarWinds has come out guns blazing to defend itself following the US Securities and Exchange Commission’s announcement that it will be suing both the IT software maker and its CISO over the 2020 SUNBURST cyberattack.

The vendor said the SEC’s lawsuit is "fundamentally flawed," both from a legal and factual perspective, and that it will be defending the charges "vigorously." A lengthy blog post, published on Wednesday, dissected some of the SEC’s allegations, which it evidently believes to be false. The first of which was that SolarWinds lacked adequate security controls before the SUNBURST attack took place.

The right to criticize is baked into the ethos of the US of A. The cited article includes this quote from the SolarWinds’ statement about the US Securities & Exchange Commission:

It later went on to accuse the regulator of overreaching and "twisting the facts" in a bid to expand its regulatory footprint, as well as claiming the body "lacks the authority or competence to regulate public companies’ cybersecurity." The SEC’s cybersecurity-related capabilities were again questioned when SolarWinds addressed the allegations that it didn’t follow the NIST Cybersecurity Framework (CSF) at the time of the attack.

SolarWinds feels strongly about the SEC and its expertise. I have several observations to offer:

  1. Annoying regulators and investigators is not perceived in some government agencies as a smooth move
  2. SolarWinds may find that its strong words may be recast in the form of questions in the legal forum which appears to be roaring down the rails
  3. The SolarWinds’ cyber security professionals on staff and the cyber security vendors whose super duper bad actor stoppers did not stop the bad actors appear to have an opportunity to explain their view of what I call a “misstep.”

Do I have an opinion? Sure. You have read it in my blog posts or heard me say it in my law enforcement lectures, most recently at the Massachusetts / New York Association of Crime Analysts’ meeting in Boston the first week of October 2023.

Cyber security is easier to describe in marketing collateral than to deliver in real life. The SolarWinds’ misstep is an interesting case example of reality being different from the expectation.

Stephen E Arnold, November 16, 2023
