AI Will Not Definitely, Certainly, Absolutely Not Take Some Jobs. Whew. That Is News

June 3, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Outfits like McKinsey & Co. are kicking the tires of smart software. Some bright young sprouts, I have heard, arrive with a penchant for AI systems that create summaries and output basic information on subjects the youthful masters of the universe do not know. Will consulting services firms, publishers, and customer service outfits embrace smart software? The answer is, “You bet your bippy.”

“Why?” Answer: Potential cost savings. Humanoids require vacations, health care, bonuses, pension contributions (ho ho ho), and an old-fashioned and inefficient five-day work week.

image

Cost reductions over time, cost controls in real time, and more consistent outputs mean that as long as smart software is good enough, the technologies will move through organizations with more efficiency than Union General William T. Sherman showed when he led some 60,000 soldiers on a 285-mile march from Atlanta to Savannah, Georgia. Thanks, MSFT Copilot. Working on security today?

Software is allegedly better, faster, and cheaper. Software, particularly AI, may not be better, faster, or cheaper. But once someone is fired, the enthusiasm to return to the fold may be diminished. Often the response is a semi-amusing and often negative video posted on social media.

“Here’s Why AI Probably Isn’t Coming for Your Job Anytime Soon” disagrees with my fairly conservative prediction that consulting, publishing, and some service outfits will be undergoing what I call “humanoid erosion” and “AI accretion.” The write up asserts:

We live in an age of hyper specialization. This is a trend that’s been evolving for centuries. In his seminal work, The Wealth of Nations (written within months of the signing of the Declaration of Independence), Adam Smith observed that economic growth was primarily driven by specialization and division of labor. And specialization has been a hallmark of computing technology since its inception. Until now. Artificial intelligence (AI) has begun to alter, even reverse, this evolution.

Okay, Econ 101. Wonderful. But… and there are some “buts,” of course. The write up says:

But the direction is clear. While society is moving toward ever more specialization, AI is moving in the opposite direction and attempting to replicate our greatest evolutionary advantage—adaptability.

Yikes. I am not sure that AI is going in any direction. Senior managers are going toward reducing costs. “Good enough,” not excellence, is the high-water mark today.

Here’s another “but”:

But could AI take over the bulk of legal work or is there an underlying thread of creativity and judgment of the type only speculative super AI could hope to tackle? Put another way, where do we draw the line between general and specific tasks we perform? How good is AI at analyzing the merits of a case or determining the usefulness of a specific document and how it fits into a plausible legal argument? For now, I would argue, we are not even close.

I don’t remember much about economics. In fact, I only think about economics in terms of reducing costs and having more money for myself. Good old Adam wrote:

Wherever there is great property there is great inequality. For one very rich man, there must be at least five hundred poor, and the affluence of the few supposes the indigence of the many.

When it comes to AI, inequality is baked in. The companies competing fiercely to dominate the core technology are not into equality. Neither are the senior managers who want to reduce costs associated with publishing, writing consulting reports based on business school baloney, or reviewing documents to hunt for nuggets useful in a trial. AI is going into these and similar knowledge professions. Most of those knowledge workers will have an opportunity to find their future elsewhere. But what about in-take professionals in hospitals? What about dispatchers at trucking companies? What about government citizen service jobs? Sorry. Software is coming. Companies are developing orchestrator software to allow smart software to function across multiple related and inter-related tasks. Isn’t that what most work in many organizations is?

Here’s another test question from Econ 101:

Discuss the meaning of “It was not by gold or by silver, but by labor, that all wealth of the world was originally purchased.” Give examples of how smart software will replace labor and generate more money for those who own the rights to digital gold or silver.

Send me your blue book answers within 24 hours. You must write in legible cursive. You are not permitted to use artificial intelligence in any form to answer this question, which counts for 95 percent of your grade in Economics 102: Work in the Age of AI.

Stephen E Arnold, June 3, 2024

Price Fixing Is Price Fixing with or without AI

June 3, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Small-time landlords, such as the mom and pops who invested in property for retirement, shouldn’t be compared to large, corporate landlords. The corporate landlords, however, give them all a bad name. Why? Because of actions like price fixing. ProPublica details how politicians are fighting the bad actors: “We Found That Landlords Could Be Using Algorithms To Fix Rent Prices. Now Lawmakers Want To Make The Practice Illegal.”

RealPage sells software programmed with AI algorithms that collect rent data and recommend how much landlords should charge. Lawmakers want to ban AI-based price fixing so landlords won’t become cartels that coordinate pricing. RealPage and its allies defend the software, while lawmakers have introduced a bill to ban it.

The FTC also states that AI-based real estate software has problems: “Price Fixing By Algorithm Is Still Price Fixing.” The FTC isn’t against technology. They’re against technology being used as a tool to cheat consumers:

“Meanwhile, landlords increasingly use algorithms to determine their prices, with landlords reportedly using software like “RENTMaximizer” and similar products to determine rents for tens of millions of apartments across the country. Efforts to fight collusion are even more critical given private equity-backed consolidation among landlords and property management companies. The considerable leverage these firms already have over their renters is only exacerbated by potential algorithmic price collusion. Algorithms that recommend prices to numerous competing landlords threaten to remove renters’ ability to vote with their feet and comparison-shop for the best apartment deal around.”

This is an example of how to use AI for evil. The problem isn’t the tool; it’s the humans using it.

Whitney Grace, June 3, 2024

Spot a Psyop Lately?

June 3, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Psyops, or psychological operations, are also known as psychological warfare: actions used to weaken an enemy’s morale. Psyops can range from a simple propaganda poster to a powerful government campaign. According to Annalee Newitz on her Hypothesis Buttondown blog, psyops are everywhere, and she explains how to spot them in “How To Recognize A Psyop In Three Easy Steps.”

Newitz smartly condenses the history of American psyops into a paragraph: it’s a mixture of pulp fiction tropes, advertising techniques, and pop psychology. In the twentieth century, the US military harnessed these techniques to craft messages designed to hurt, demean, and distract people. Unlike weapons, psyops can be avoided with a little bit of critical thinking.

The first step is to pay attention when people claim something is “anti-American.” The term “anti-American” can be interpreted in many ways, but it comes down to media claiming that one group of people (foreigners, people of a certain skin color or sexual orientation, etc.) is against the American way of life.

The second step is spreading lies with hints of truth. Newitz advises reading psychological warfare military manuals and uses an example of leaflets the Japanese dropped on US soldiers in the Philippines. The leaflets warned the soldiers about venomous snakes in the jungle, and they were signed “US Army.” Soldiers were told the leaflets were false, but the episode made them believe there were coverups:

“Psyops-level lies are designed to destabilize an enemy, to make them doubt themselves and their compatriots, and to convince them that their country’s institutions are untrustworthy. When psyops enter culture wars, you start to see lies structured like this snake “warning.” They don’t just misrepresent a specific situation; they aim to undermine an entire system of beliefs.”

The third step is the easiest to recognize and the most extreme: you can’t communicate with anyone who says you should be dead. Anyone who believes you should be dead is beyond rational thought. Her advice is to ignore it and not engage.

Another way to recognize psyops tactics is to question everything. Thinking isn’t difficult, but thinking critically takes practice.

Whitney Grace, June 3, 2024

So AI Is — Maybe, Just Maybe — Not the Economic Big Kahuna?

June 3, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I find it amusing how AI has become the go-to marketing word. I suppose if I were desperate, lacking an income, unsure about what will sell, and a follow-the-hyperbole-type person I would shout, “AI.” Instead I vocalize, “Ai-Yai-Ai” emulating the tones of a Central American death whistle. Yep, “Ai-Yai-AI.”

image

Thanks, MSFT Copilot. A harbinger? Good enough.

I read “MIT Professor Hoses Down Predictions AI Will Put a Rocket under the Economy.” I won’t comment upon the fog of distrust which I discern around Big Name Universities, nor will I focus my adjustable Walgreen’s spectacles on MIT’s fancy dancing with the quite interesting and decidedly non-academic Jeffrey Epstein. Nope. Forget those two factoids.

The write up reports:

…Daron Acemoglu, professor of economics at Massachusetts Institute of Technology, argues that predictions AI will improve productivity and boost wages in a “blue-collar bonanza” are overly optimistic.

The good professor is rowing against the marketing current. According to the article, the good professor identifies some wild and crazy forecasts. One of these is from an investment bank whose clients are unlikely to be what some one percenters perceive as non-masters of the universe.

That’s interesting. But it pales in comparison to the information in “Few People Are Using ChatGPT and Other AI Tools Regularly, Study Suggests.” (I love suggestive studies!) That write up reports on a study involving Thomson Reuters, the “trust” outfit:

Carried out by the Reuters Institute and Oxford University and involving 6,000 respondents from the U.S., U.K., France, Denmark, Japan, and Argentina, the researchers found that OpenAI’s ChatGPT is by far the most widely used generative-AI tool and is two or three times more widespread than the next most widely used products — Google Gemini and Microsoft Copilot. But despite all the hype surrounding generative AI over the last 18 months, only 1% of those surveyed are using ChatGPT on a daily basis in Japan, 2% in France and the UK, and 7% in the U.S. The study also found that between 19% and 30% of the respondents haven’t even heard of any of the most popular generative AI tools, and while many of those surveyed have tried using at least one generative-AI product, only a very small minority are, at the current time, regular users deploying them for a variety of tasks.

My hunch is that these contrarians want clicks. Well, the tactic worked for me. However, how many of those in AI-Land will take note? My thought is that these anti-AI findings are likely to be ignored until some of the Big Money folks lose their cash. Then the voices of negativity will be heard.

Several observations:

  1. The economics of AI seem similar to some early online ventures like Pets.com, not “all” mind you, just some
  2. Expertise in AI may not guarantee a job at a high-flying techno-feudalist outfit
  3. The difficulties Google appears to be having suggest that the road to AI-Land on the information superhighway may have some potholes. (If Google cannot pull AI off, how can Bob’s Trucking Company armed with Microsoft Word with Copilot?)

Net net: It will be interesting to monitor the frequency of “AI balloon deflating” analyses.

Stephen E Arnold, June 3, 2024


Google: Lost in Its Own AI Maze

May 31, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

One “real” news item caught my attention this morning. Let me tell you. Even with the interesting activities in the Manhattan court, this item jumped at me. Let’s take a quick look and see if Googzilla (see illustration) can make a successful exit from the AI maze in which the online advertising giant finds itself.

image

Googzilla is lost in its own AI maze. Can it find a way out? Thanks, MSFT Copilot. Three tries and I got a lizard in a maze. Keep allocating compute cycles to security because obviously Copilot is getting fewer and fewer these days.

“Google Pins Blame on Data Voids for Bad AI Overviews, Will Rein Them In” makes it clear that Google is not blaming itself for some of the wacky outputs its centerpiece AI function has been delivering. I won’t do the guilty-34-times thing. I will just mention the non-toxic glue and pizza item. This news story reports:

Google thinks the AI Overviews for its search engine are great, and is blaming viral screenshots of bizarre results on "data voids" while claiming some of the other responses are actually fake. In a Thursday post, Google VP and Head of Google Search Liz Reid doubles down on the tech giant’s argument that AI Overviews make Google searches better overall—but also admits that there are some situations where the company "didn’t get it right."

So let’s look at that Google blog post titled “AI Overviews: About Last Week.”

How about this statement?

User feedback shows that with AI Overviews, people have higher satisfaction with their search results, and they’re asking longer, more complex questions that they know Google can now help with. They use AI Overviews as a jumping off point to visit web content, and we see that the clicks to webpages are higher quality — people are more likely to stay on that page, because we’ve done a better job of finding the right info and helpful webpages for them.

The statement strikes me as something that a character would say in an episode of the Twilight Zone, a TV series in the 50s and 60s. The TV show had a weird theme, and I thought I heard it playing when I read the official Googley blog post. Is this the Google “bullseye” method or a bullsh*t method?

The official Googley blog post notes:

This means that AI Overviews generally don’t “hallucinate” or make things up in the ways that other LLM products might. When AI Overviews get it wrong, it’s usually for other reasons: misinterpreting queries, misinterpreting a nuance of language on the web, or not having a lot of great information available. (These are challenges that occur with other Search features too.) This approach is highly effective. Overall, our tests show that our accuracy rate for AI Overviews is on par with another popular feature in Search — featured snippets — which also uses AI systems to identify and show key info with links to web content.

Okay, we are into the bullsh*t method. Google search is now a key moment in the Sundar & Prabhakar Comedy Act. Since the début in Paris which featured incorrect data, the Google has been in Code Red, Red Alert, red-faced embarrassment mode. Now the company wants people to eat rocks, and it is not the online advertising giant’s fault. The blog post explains:

There isn’t much web content that seriously contemplates that question, either. This is what is often called a “data void” or “information gap,” where there’s a limited amount of high quality content about a topic. However, in this case, there is satirical content on this topic … that also happened to be republished on a geological software provider’s website. So when someone put that question into Search, an AI Overview appeared that faithfully linked to one of the only websites that tackled the question. In other examples, we saw AI Overviews that featured sarcastic or troll-y content from discussion forums. Forums are often a great source of authentic, first-hand information, but in some cases can lead to less-than-helpful advice, like using glue to get cheese to stick to pizza.

Okay, I think one component of the bullsh*t method is the claim that it is not Google’s fault. “Users” — not customers, because Google has advertising clients, partners, and some lobbyists. Everyone else is a user, and the fault lies with users, the data creators, and probably Sam AI-Man. (Did I omit anyone on whom to blame the “let them eat rocks” result?)

And the Google cares. This passage is worthy of a Hallmark card with a foldout:

At the scale of the web, with billions of queries coming in every day, there are bound to be some oddities and errors. We’ve learned a lot over the past 25 years about how to build and maintain a high-quality search experience, including how to learn from these errors to make Search better for everyone. We’ll keep improving when and how we show AI Overviews and strengthening our protections, including for edge cases, and we’re very grateful for the ongoing feedback.

What’s my take on this?

  1. The assumption that Google search is “good” is interesting, just not in line with what I hear, read, and experience when I do use Google. Note that my personal usage has decreased over time.
  2. Google is trying to explain away its obvious flaws. The Google speak may work for some people, just not for me.
  3. The tone is that of an entitled seventh-grader from a wealthy family, not the type of language I find particularly helpful when the “smart” Google software has to be remediated by humans. Google is terminating humans, right? Now Google needs humans. What’s up, Google?

Net net: Google is snagged in its own AI maze. I am growing less confident in the company’s ability to extricate itself. The Sam AI-Man has crafted deals with two outfits big enough to make Google’s life more interesting. Google’s own management seems ineffectual despite the flashing red and yellow lights and the honking of alarms. Google’s wordsmiths and lawyers are running out of verbal wiggle room. But most important, the failure of the bullseye method and the oozing comfort of the bullsh*t method mark a turning point for the company.

Stephen E Arnold, May 31, 2024

NSO Group: Making Headlines Again and Again and Again

May 31, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

NSO Group continues to generate news. One example is the company’s flagship sponsorship of an interesting conference going on in Prague from June 4th to the 6th. What does “interesting” mean? I think those who attend the conference are engaged in information-related activities connected in some way to law enforcement and intelligence. How do I know NSO Group ponied up big bucks to be the “lead sponsor”? Easy. I saw this advertisement on the conference organizer’s Web site. I know you want me to reveal the url, but I will treat the organizer in a professional manner. Just use those Google Dorks, and you will locate the event. The ad:

image

What’s the ad from the “lead sponsor” say? Here are a few snippets from the marketing arm of NSO Group:

NSO Group develops and provides state-of-the-art solutions, designed to assist in preventing terrorism and crime. Our solutions address diverse strategical, tactical and operational needs and scenarios to serve authorized government agencies including intelligence, military and law enforcement. Developed by the top technology and data science experts, the NSO portfolio includes cyber intelligence, network and homeland security solutions. NSO Group is proud to help to protect lives, security and personal safety of citizens around the world.

Innocent stuff with the flavor jargon-loving Madison Avenue types prefer.

image

Citizen Lab is a bit like the mules in an old-fashioned grist mill. The researchers do not change what they think about. Source: Royal Mint Museum in the UK.

Just for some fun, let’s look at the NSO Group through a different lens. The UK newspaper The Guardian, which counts how many stories I look at a year, published “Critics of Putin and His Allies Targeted with Spyware Inside the EU.” Here’s a sample of the story’s view of NSO Group:

At least seven journalists and activists who have been vocal critics of the Kremlin and its allies have been targeted inside the EU by a state using Pegasus, the hacking spyware made by Israel’s NSO Group, according to a new report by security researchers. The targets of the hacking attempts – who were first alerted to the attempted cyber-intrusions after receiving threat notifications from Apple on their iPhones – include Russian, Belarusian, Latvian and Israeli journalists and activists inside the EU.

And who wrote the report?

Access Now, the Citizen Lab at the Munk School of Global Affairs & Public Policy at the University of Toronto (“the Citizen Lab”), and independent digital security expert Nikolai Kvantiliani

The Citizen Lab has been paying attention to NSO Group for years. The people surveilled or spied upon via the NSO Group’s Pegasus technology are anti-Russia; that is, none of the entities will be invited to a picnic at Mr. Putin’s estate near Sochi.

Obviously some outfit has access to the Pegasus software and its command-and-control system. It is unlikely that NSO Group provided the software free of charge. Therefore, one can conclude that NSO Group could reveal what country was using its software for purposes one might consider outside the bounds of the write up’s words cited above.

NSO Group remains one of the — if not the main — poster children for specialized software. The company continues to make headlines. Its technology remains a leader in the type of software which can be used to obtain information from a mobile device. There are some alternatives, but NSO Group remains the Big Dog.

One wonders why Israel, presumably with the Pegasus tool, could not have obtained information relevant to the attack in October 2023. My personal view is that even with Fancy Dan ways to get data from a mobile phone, human analysts still have to figure out what’s important and what to identify as significant.

My point is that the hoo-hah about NSO Group and Pegasus may not be warranted. Data without trained analysts and downstream software may not yield the information required to take a specific action. Israel’s intelligence lapse means that software alone can’t do the job. No matter what the marketing material says or how slick the slide deck used to brief those with a “need to know” appears — software is not intelligence.

Will NSO Group continue to make headlines? Probably. Those with access to Pegasus will make errors and disclose their ineptness. Citizen Lab will be at the ready. New reports will be forthcoming.

Net net: Is anyone surprised Mr. Putin is trying to monitor anti-Russia voices? Is Pegasus the only software pressed into service? My answer to this question is: “Mr. Putin will use whatever tool he can to achieve his objectives.” Perhaps Citizen Lab should look for other specialized software and expand its opportunities to write reports? When will Apple address the vulnerability which NSO Group continues to exploit?

Stephen E Arnold, May 31, 2024

In the AI Race, Is Google Able to Win a Sprint to a Feature?

May 31, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

One would think that a sophisticated company with cash and skilled employees would avoid a mistake like shooting the CEO in the foot. The mishap has occurred again, and if it were captured in a TikTok, it would make an outstanding trailer for the Sundar & Prabhakar reprise of The Greatest Marketing Mistakes of the Year.

image

At age 25, which is quite the mileage when traveling on the Information Superhighway, the old timer is finding out that younger, speedier outfits may win a number of AI races. In the illustration, the Google runner seems stressed at the start of the race. Will the geezer win? Thanks, MidJourney. Good enough, which is the benchmark today I fear.

“Google Is Taking ‘Swift Action’ to Remove Inaccurate AI Overview Responses” explains that Google rolled out its AI Overviews with some fanfare. The idea is that smart software would just provide the “user” of the Google ad delivery machine with an answer to a query. Some people have found that the outputs are crazier than one would expect from a Big Tech outfit. The article states:

… Google says, “The vast majority of AI Overviews provide high-quality information, with links to dig deeper on the web. Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce. “We conducted extensive testing before launching this new experience, and as with other features we’ve launched in Search, we appreciate the feedback,” Google adds. “We’re taking swift action where appropriate under our content policies, and using these examples to develop broader improvements to our systems, some of which have already started to roll out.”

But others are much kinder. One notable example is Mashable’s “We Gave Google’s AI Overviews the Benefit of the Doubt. Here’s How They Did.” This estimable publication reported:

Were there weird hallucinations? Yes. Did they work just fine sometimes? Also yes.

The write up noted:

AI Overviews were a little worse in most of my test cases, but sometimes they were perfectly fine, and obviously you get them very fast, which is nice. The AI hallucinations I experienced weren’t going to steer me toward any danger.

Let’s step back and view the situation via several observations:

  1. Google’s big moment becomes a meme cemented to glue on pizza
  2. Does Google have a quality control process which flags obvious gaffes? Apparently not.
  3. Google management seems to suggest that humans have to intervene in a Google “smart” process. Doesn’t that defeat the purpose of using smart software to replace some humans?

Net net: The Google is ageing, and I am not sure a singularity will offset these quite obvious effects of ageing, slowed corporate processes, and stuttering synapses in the revamped AI unit.

Stephen E Arnold, May 31, 2024

Amazon: Competition Heats Up in Some Carpetland Offices

May 31, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

The tech industry is cutthroat, and no one is safe in their position, no matter how high they are on the food chain. The Verge explains how one of Amazon’s CEOs could not withstand the competition: “Amazon Web Services CEO To Step Down.” Adam Selipsky is the CEO of Amazon Web Services, and he will step down on June 3, 2024. He will be replaced by Matt Garman, who is currently the SVP of AWS sales, marketing, and global services. Garman has worked at Amazon for eighteen years in the AWS division.

AWS is responsible for 17% of Amazon’s total revenue and 6% of its operating income in the first quarter of 2024. AWS is known as an “invisible server empire” because it hosts the infrastructure of many organizations across all industries. When AWS experienced outages, there were ripple effects on the Internet and in the real world; for example, Amazon delivery vans and warehouse bots couldn’t work. AWS is a big player in Amazon’s AI development: proprietary AI chips, Anthropic, Amazon Q, Amazon Bedrock, and Nvidia’s GH200 chips. Selipsky was a major leader in building Amazon’s AI foundations.

Andy Jassy wrote an email to AWS staff about the transfer of power that applauds Selipsky’s service, explains he’s moving on to another “challenge,” and says he is taking a “well-deserved respite.” The email then moves on to congratulating Garman. Selipsky replied with the following:

“Leading this amazing team and the AWS business is a big job, and I’m proud of all we’ve accomplished going from a start-up to where we are today. In the back of my head I thought there might be another chapter down the road at some point, but I never wanted to distract myself from what we are all working so hard to achieve. Given the state of the business and the leadership team, now is an appropriate moment for me to make this transition, and to take the opportunity to spend more time with family for a while, recharge a bit, and create some mental free space to reflect and consider the possibilities.

Matt and the AWS leadership team are ready for this next big opportunity. I’m excited to see what they and you do next, because I know it will be impressive. The future is bright for AWS (and for Amazon). I wish you all the very best of luck on this adventure.”

Selipsky, Jassy, Garman, and AWS appear to be parting on good terms. Something may have happened behind closed doors, however, and the verbiage hints that Selipsky could not take AWS where it is going.

Whitney Grace, May 31, 2024

A Different View of That Google Search Leak

May 30, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

As a dinobaby, I can make observations that a person with two young children and a mortgage is not comfortable making. So buckle your seat belt and grab a couple of Prilosec. I don’t think the leak is a big deal. Let me provide some color.

image

This cartoon requires that you examine the information in “Authorities: Google Exec Died on Yacht after Upscale Prostitute Injected Him with Heroin.” The incident provides some insight into the ethical compass of one Google officer. Do others share that directionality? Thanks, MSFT Copilot. You unwittingly produced a good cartoon. Ho ho ho.

Many comments are zipping around about the thousands of pages of secret Google information now in circulation. The “legend” of the leak is that Search API information became available. The “spark” which lit the current Google fire was this post: “An Anonymous Source Shared Thousands of Leaked Google Search API Documents with Me; Everyone in SEO Should See Them.” (FYI: The leaker is an entity using the handle “Erfan Azimi.”)

That write up says:

This documentation doesn’t show things like the weight of particular elements in the search ranking algorithm, nor does it prove which elements are used in the ranking systems. But, it does show incredible details about data Google collects.

If you want more of this SEO stuff, have at it. I think the information is almost useless. Do Googlers follow procedures? Think about your answer for a company that operates essentially without meaningful controls. Here’s my view, which means it is time to gulp those tabs.

First, the entire SEO game helps Google sell online advertising. Once the SEO push fails to return results to the client of the SEO expert, Google allows these experts to push Google ads to their customers. Why? Pay Google money and the advertiser will get traffic. How does this work? Well, money talks, and Google search experts deliver clicks.

Second, the core of Google is now surrounded by wrappers. The thousands of words in the leak record the stuff essentially unmanaged Googlers do to fill time. After 25 years, the old ideas (some of which were derived from the CLEVER method, for which Jon Kleinberg deserves credit) have been like a pretty good organic chicken swathed in hundreds of layers of increasingly crappy plastic wrap. With the appropriate source of illumination, one can discern the chicken beneath the halogenated wrap, but the chicken looks darned awful. Do you want to eat the chicken? Answer: Probably no more than I want to eat a pizza with non-toxic glue in the cheese.

Third, the senior management of the Google is divorced from the old-fashioned idea of typing a couple of words and getting results which are supposed to be germane to the query. When Boolean logic was part of the search game, search was about 60 percent effective. Thus, it seemed logical over the years to provide training wheels and expand the query against which ads could be sold. Now the game is just to sell ads because the query is relaxed, extended, and mostly useless except for a narrow class of search strings. (Use Google dorks and get some useful stuff.)
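For readers unfamiliar with the term, “Google dorks” are just narrow queries built from advanced search operators; the operator syntax below is real, while the targets are hypothetical examples of my choosing:

```text
site:example.com filetype:pdf "annual report"   # PDFs on a single domain containing an exact phrase
intitle:"index of" "backup"                     # page titles suggesting open directory listings
"confidential" -site:example.com                # exact phrase, with one domain excluded
```

Tight strings like these are among the few query types where the relaxed, ad-driven engine still returns something germane.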

Okay, what are the implications of these three observations? Grab another Prilosec, please.

First, Google has to make more and more money because its costs are quite difficult to control. With cost control out of reach, the company’s “leadership” must focus on extracting cash from “users.” (“Customers” is not the right word for those in the Google datasphere.) The CFO is looking for her future elsewhere. The key point is that her future is not at the Google, with its black maw hungry for cash and the mounting cost of keeping the lights on. Burn rate is not a problem just for start-ups, folks.

Second, Google’s senior management is not focused on search, no matter what the PR says. The company’s senior leader is a consultant, a smooth-talking wordsmith, and a neutral personality to the outside world. As a result, the problems of software wrappers and even the incredible missteps with smart software are faint sounds coming from the other side of a sound-proofed room in a crazy college dormitory. Consultants consult. That’s what Google’s management team does. The “officers” have to figure out how to implement. Then those who do the work find themselves in a cloud of confusion. I did a blog essay about one of Google’s oddball methods for delivering “minimum viable products.” The process has a name, but I have forgotten it, just like those working on Google’s “innovative” products, which are difficult for me to name even after the mind-numbing Google I/O. Everything is fuzzy and illuminated by flickering Red Alert and Yellow Alert lights.

Third, Google has been trying to diversify its revenue stream for decades. After much time and effort, online advertising is darned close to 70 percent of the firm’s revenue. The numerous venture capital initiatives and the usually crazy skunk works, often named X or a term from a weird union of a humanoid and a piece of hardware, have delivered what? The Glasshole? The life-sized board game? The Transformic Inc. data structure? Dr. Guha’s semantic technology? Yeah, failures, because the revenue contributed is negligible. The idea of innovation at Google, from the Backrub in the dorm onward, has been derivative, imitative, and, in the case of online advertising methods, something for which Google paid some big bucks to Yahoo before the Google initial public offering. Google is not innovative; it is similar to a high school science club with an art teacher in charge. Google was clever and quick moving. The company was fearless and was among the first to use academic ideas in its commercial search and advertising business, until it did not. We are in the “did not” phase. Think about that when you put on a Google T shirt.

Finally, the company lacks the practical expertise to keep its 155,000 (estimated, and dropping at a steady cadence) full-time equivalents on the reservation. Where did the leaked but largely irrelevant documents originate? Not with Mr. Fishkin: he was the lucky recipient of information from Mr. Azimi. Where did he get the documents? I am waiting for an answer, Mr. Azimi. Answer carefully, because possession of such documents might be of interest to some government authorities. The leak is just one example of a company which cannot control its information, whether in internal documentation or a peer-reviewed journal paper. Remember the stochastic parrot? If not, run a query and look at what Google outputs from its smart software. And the protests? Yeah, thanks for screwing up traffic and my ability to grab a quick coffee at Philz when the Googlers are milling around with signs. Common sense seems in short supply.

So what?

For those who want search traffic, buy advertising. Plan to spend a minimum of $20,000 per month to get some action. If you cannot afford it, you need to plug your thinking cap into a USB-C socket and get some marketing ideas. Web search is not going to deliver those eyeballs. My local body shop owner asked me, “What can I do to get more visibility for my Google Local listing?” I said, “Pay a friend to post about your business on Nextdoor.com, get some customers to post about your dent removal prowess on Facebook, and pay some high school kid to spend some time making before-and-after pictures for Instagram. Pay the teen to make a TikTok video of a happy customer.” Note that I did not mention Google. It doesn’t deliver for local outfits.

Now you can kick back and enumerate the reasons why my view of Google is wrong, crazy, or out of touch. Feel free to criticize. I am a dinobaby; I consulted for a certain big time search engine; I consulted for venture firms investing in search; and I worked on some Fancy Dan systems. But my experience does not matter. I am a dinobaby, and I don’t care how other people find information. I pay several people to find information for me. I then review what those young wizards produce. Most of them don’t agree with me on some issues. That’s why I pay them. But this dinobaby’s views of Google are not designed to make them or you happy.

Net net: The image of Google to keep in mind is encapsulated in this article: Yacht Killing: Escort to Be Arraigned in Google Exec’s Heroin Death. Yep, Googlers are sporty. High school mentalities make mistakes, serious mistakes.

Stephen E Arnold, May 30, 2024

Guarantees? Sure … Just Like Unlimited Data Plans

May 30, 2024

dinosaur30a_thumb_thumbThis essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I loved this story: “T-Mobile’s Rate Hike Raises Ire over Price Lock Guarantees.” The idea that something is guaranteed today is a hoot. Remember “unlimited data plans”? I think some legal process determined that unlimited did not mean without limits. This is not just wordsmithing; it is probably a behavior which, if attempted in certain areas of Sicily, would result in something quite painful. Maybe a beating, a knife in the ribs, or something more colorful? But today, are you kidding me?

image

The soon-to-be-replaced-by-a-chatbot AI entity is reassuring a customer about a refund. Is the check in the mail? Will the sales professional take the person with whom he is talking to lunch? Absolutely. This is America, a trust outfit for sure. Thanks, MSFT Copilot. Working on security today?

The write up points out:

…in T-Mobile’s case, customers are seething because T-Mobile is raising prices on plans that were offered with “guarantees” they wouldn’t go up, such as T-Mobile One plans.

Unusual? No. Visit a big-time grocery store. Select 10 items at random. Do the prices match what was displayed on the shelves? Let me know. Our local outfit bats about 10 percent: one incorrect price per 10 items. Does the manager care? Sure, but does the pricing change or do the database errors get adjusted? Ho ho ho.

The article reported:

“Clearly this is bad optics for T-Mobile since it won many people over as the ‘non-corporate’ un-carrier,” he [Eric Michelson, a social and digital media strategist] said.

Imagine a telecommunications company raising prices and refusing to provide specific information about which customers get the opportunity to pay more for service.

Several observations:

  1. Promises mean zero. Ask people trying to get reimbursed for medical expenses or for post-tornado house repairs.
  2. Clever is more important than behaving in an ethical and responsible manner. Didn’t Google write a check to the US government to make annoying legal matters go away?
  3. The language warped by marketers and shape-shifted by attorneys makes understanding exactly what’s afoot difficult. How about the wording in an omnibus bill crafted by lobbyists and US elected officials’ minions? Definitely crystal clear to some. To others, well, not too clear.

Net net: What’s up with the US government agencies charged with managing corporate behavior and protecting the rights of citizens? Answer: These folks are in meetings, on Zoom calls, or working from home. Please, leave a message.

Stephen E Arnold, May 30, 2024
