Harvard University: Does Money Influence Academic Research?

December 5, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Harvard University has been on my radar since the ethics misstep. In case your memory is fuzzy, Francesca Gino, a big thinker about ethics and taking shortcuts, was accused of data fraud. The story did not attract much attention in rural Kentucky. Ethics and dishonesty? Come on. Harvard has to do some serious training to catch up with a certain university in Louisville. For a reasonable explanation of the allegations (because, of course, one will never know), navigate to “Harvard Professor Who Studies Dishonesty Is Accused of Falsifying Data” and dig in.


Thanks, MSFT Copilot, you have nailed the depressive void that comes about when philosophers learn that ethics suck.

Why am I thinking about Harvard and ethics? The answer is that I read “Harvard Gutted Initial Team Examining Facebook Files Following $500 Million Donation from Chan Zuckerberg Initiative, Whistleblower Aid Client Reveals.” I have no idea if the write up is spot on, weaponized information, or the work of someone who did not get into one of the university’s numerous money generating certification programs.

The write up asserts:

Harvard University dismantled its prestigious team of online disinformation experts after a foundation run by Facebook’s Mark Zuckerberg and his wife Priscilla Chan donated $500 million to the university, a whistleblower disclosure filed by Whistleblower Aid reveals. Dr. Joan Donovan, one of the world’s leading experts on social media disinformation, says she ran into a wall of institutional resistance and eventual termination after she and her team at Harvard’s Technology and Social Change Research Project (TASC) began analyzing thousands of documents exposing Facebook’s knowledge of how the platform has caused significant public harm.

Let’s assume that the allegation is horse feathers, not to be confused with Intel’s fabulous Horse Ridge. Harvard still has to do some fancy dancing with regard to the ethics professor and expert in dishonesty who is alleged to have violated the esteemed university’s ethics guidelines and was dishonest.

If we assume that the information in Dr. Donovan’s whistleblower declaration is close enough for horse shoes, something equine can be sniffed in the atmosphere of Dr. William James’s beloved institution.

What could Facebook or the Metazuck do which would cause significant public harm? The options include providing tools to disseminate information which sparks body shaming, self harm, and angst among young users. Are old timers possibly affected? I suppose buying interesting merchandise on Facebook Marketplace and experiencing psychological problems as a result of defriending are possibilities too.

If the allegations are proven to be accurate, what are the consequences for the two esteemed organizations? My hunch is zero. Money talks; prestige walks away to put ethics on display for another day.

Stephen E Arnold, December 5, 2023

Why Google Dorks Exist and Why Most Users Do Not Know Why They Are Needed

December 4, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Many people in my lectures are not familiar with the concept of “dorks”. No, not the human variety. I am referencing the concept of a “Google dork.” If you do a quick search using Yandex.com, you will get pointers to different “Google dorks.” Click on one of the links and you will find information you can use to retrieve more precise and relevant information from the Google ad-supported Web search system.

Here’s what QDORKS.com looks like:

[Screen capture: the QDORKS.com interface]

The idea is that one plugs in search terms and uses the pull down boxes to enter specific commands to point the ad-centric system at something more closely resembling a relevant result. Other interfaces are available; for example, the “1000 Best Google Dorks List." You get a laundry list of tips, commands, and ideas for wrestling Googzilla to the ground, twisting its tail, and (hopefully) extracting relevant information. Hopefully. Good work.
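To make the idea concrete, here is a minimal sketch of how a few common dork operators combine into one query. This is my own illustration, not taken from QDORKS.com or the dorks list above; the site name and search terms are hypothetical.

```python
# A minimal sketch, not from QDORKS.com or the "1000 Best Google Dorks List."
# The site and terms below are made up, chosen only to show the operators.
from urllib.parse import quote_plus

# filetype: limits results to a document format, site: pins them to one domain,
# intitle: requires a word in the page title, and a leading minus excludes a term.
dork = '"annual report" filetype:pdf site:example.com intitle:2023 -draft'

# Assemble a search URL; paste the printed string into a browser to run the query.
print("https://www.google.com/search?q=" + quote_plus(dork))
```

The point is not the plumbing. The point is that the operators do the narrowing that the default search box declines to do.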


Most people are lousy at pinning the tail on the relevance donkey. Therefore, let someone who knows define relevance for the happy people. Thanks, MSFT Copilot. Nice animal with map pins.

Why are Google Dorks or similar guides to Google search necessary? Here are three reasons:

  1. Precision reduces the opportunities for displaying allegedly relevant advertising. Semantic relaxation allows the Google to suggest that it is using Oingo type methods to find mathematically determined relationships. The idea is that razzle dazzle makes ad blasting something like an ugly baby wrapped in translucent fabric on a foggy day look really great.
  2. When Larry Page argued with me at a search engine meeting about truncation, he displayed a preconceived notion about how search should work for those not at Google or attending a specialist conference about search. Rational? To him, yep. Logical? To his framing of the search problem, the stance makes perfect sense if one discards the notion of tense, plurals, inflections, and stupid markers like “im” as in “impractical” and “non” as in “nonsense.” Hey, Larry had the answer. Live with it.
  3. The goal at the Google is to make search as intellectually easy for the “user” as possible. The idea was to suggest what the user intended. Also, Google had the old idea that a person’s past behavior can predict that person’s behavior now. Well, predict in the sense that “good enough” will do the job for the vast majority of search-blind users who look for the short cut or the most convenient way to get information.

Why? Control, being clever, and then selling the dream of clicks for advertisers. Over the years, Google leveraged its information framing power to a position of control. I want to point out that most people, including many Googlers, cannot perceive this. When it is pointed out, those individuals refuse to believe that Google does [a] NOT index the full universe of digital data, [b] NOT want to fool around with users who prefer Boolean algebra and content curation to identify the best or most useful content, and [c] NOT want to fiddle around with training people to become effective searchers of online information. Obfuscation, verbal legerdemain, and the “do no evil” craziness make the railroad run the way Cornelius Vanderbilt-types ran it.

I read this morning (December 4, 2023) the Google blog post called “New Ways to Find Just What You Need on Search.” The main point of the write up in my opinion is:

Search will never be a solved problem; it continues to evolve and improve alongside our world and the web.

I agree, but it would be great if the known search and retrieval functions were available to users. Instead, we have a weird Google Mom approach. From the write up:

To help you more easily keep up with searches or topics you come back to a lot, or want to learn more about, we’re introducing the ability to follow exactly what you’re interested in.

Okay, user tracking, stored queries, and alerts. How does the Google know what you want? The answer is that users log in, use Google services, and enter queries which are automatically converted to search. You will have answers to questions you really care about.

There are other search functions available in the most recent version of Google’s attempts to deal with an unsolved problem:

As with all information on Search, our systems will look to show the most helpful, relevant and reliable information possible when you follow a topic.

Yep, Google is a helicopter parent. Mom will know what’s best, select it, and present it. Don’t like it? Mom will be recalcitrant, like shaping search results to meet what the probabilistic system says, “Take your medicine, you brat.” Who said, “Mother Google is a nice mom”? Definitely not me.

And Google will make search more social. Shades of Dr. Alon Halevy and the heirs of Orkut. The Google wants to bring people together. Social signals make sense to Google. Yep, content without Google ads must be conquered. Let’s hope the Google incentive plans encourage the behavior, or those valiant programmers will be bystanders to other Googlers’ promotions and accompanying money deliveries.

Net net: Finding relevant, on point, accurate information is more difficult today than at any other point in my 50+ year work career. How does the cloud of unknowing dissipate? I have no idea. I think it has moved in on tiny Googzilla feet and sits looking over the harbor, ready to pounce on any creature that challenges the status quo.

PS. Corny Vanderbilt was an amateur compared to the Google. He did trains; Google does information.

Stephen E Arnold, December 4, 2023

Good Fences, Right, YouTube? And Good Fences in Winter Even Better

December 4, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Remember that line from the grumpy American poet Bobby Frost? (I have on good authority that Bobby was not a charmer. And who, pray tell, was my source? How about a friend of the poet’s who worked with him in South Shaftsbury.)

Like those in the Nor’East say, “Good fences make good neighbors.”

The line is not original. Bobby’s pal told me that the saying was a “pretty common one” among the Shaftsburians. Bobby appropriated the line in his poem “Mending Wall.” (It is loved by millions of high school students). The main point of the poem is that “Something there is that doesn’t love a wall.” The key is “something.”

The fine and judicious, customer centric, and well-managed outfit Google is now in the process of understanding the “something that doesn’t love a wall,” digital or stone.

“Inside the Arms Race between YouTube and Ad Blockers” updates the effort of the estimable advertising outfit and — well — almost everyone. The article explains:

YouTube recently took dramatic action against anyone visiting its site with an ad blocker running — after a few pieces of content, it’ll simply stop serving you videos. If you want to get past the wall, that ad blocker will (probably) need to be turned off; and if you want an ad-free experience, better cough up a couple bucks for a Premium subscription.

The write up carefully explains that one must pay a “starting” monthly fee of $13.99 to avoid the highly relevant advertisements for metal men’s wallets, the total home gym which seems wholly inappropriate for a 79 year old dinobaby like me, and some type of women’s undergarment. Yeah, that ad matching to a known user is doing a bang up job in my opinion. I bet the Skims marketing manager is thrilled I am getting their message. How many packs of Skims do I buy in a lifetime? Zero. Yep, zero.


Yes, sir. Good fences make good neighbors. Good enough, MSFT Copilot. Good enough.

Okay, that’s the ad blocker thing, which I have identified as Google’s digital Battle of Waterloo in honor of a movie about everyone’s favorite French emperor, Nappy B.

But what the cited write up and most of the coverage are not focusing on is the question, “Why the user hostile move?” I want to share some of my team’s ideas about the motive force behind this disliked and quite annoying move by that company everyone loves (including the Skims marketing manager?).

First, the emergence of ChatGPT type services is having a growing impact on Google’s online advertising business. One can grind through Google’s financials and not find any specific item that says, “The Duke of Wellington and a crazy old Prussian are gearing up for a fight.” So I will share some information we have rounded up by talking to people and looking through the data gathered about Googzilla. Specifically, users want information packaged to answer or to “appear” to answer their question. Some want lists; some want summaries; and some just want to skip the routine of entering the query, clicking through mostly irrelevant results, and scanning for something that is sort of close to an answer before using that information to buy a ticket or get a Taylor Swift poster, whatever. That means that the broad trend in the usage of Google search is a bit like the town of Grindavik, Iceland. “Something” is going on, and it is unlikely to bode well for the future of that charming town in Iceland. That’s the “something” that is hostile to walls. Some forces are tough to resist even by Googzilla and friends.

Second, despite the robust usage of YouTube, it costs more money to operate that service than it does to display, from a cache, ads and previously spidered information from Google compliant Web sites. Thus, as pressure on traditional search goes up from the ChatGPT type services, the clouds on the search business horizon grow darker. The big storm is not pelting the Googleplex yet, but it does look ominous perched on the horizon and moving slowly. Don’t get our point wrong: Running a Google scale search business is expensive, but it has been engineered and tuned to deliver a tsunami of cash. The YouTube thing just costs more and is going to have a tough time replacing lost old-fashioned search revenue. What’s a pressured Googzilla going to do? One answer is, “Charge users.” Then raise prices. Gee, that’s the much-loved cable model, isn’t it? And the pressure point is motivating some users who are developers to find ways to cut holes in the YouTube fence. The fix? Make the fence bigger and more durable? Isn’t that a Rand arms race scenario? What’s an option? Where’s a J. Robert Oppenheimer-type when one needs him?

The third problem is that there is a desire on the part of advertisers to have their messages displayed in a non offensive context. Also, advertisers — because the economy for some outfits sucks — are now starting to demand proof that their ads are being displayed in front of buyers known to have an interest in their product. Yep, I am talking about the Skims marketing officer as well as any intermediary hosing money into Google advertising. I don’t want to try to convince those who are writing checks to the Google of the following: “Absolutely. Your ad dollars are building your brand. You are getting leads. You are able to reach buyers no other outfit can deliver.” Want proof? Just look at this dinobaby. I am not buying health food, hidden carry holsters, and those really cute flesh colored women’s undergarments. The question is, “Are the ads just being dumped or are they actually targeted to someone who is interested in a product category?” Good question, right?

Net net: The YouTube ad blocking is shaping up to be a Google moment. Now Google has sparked an adversarial escalation in the world of YouTube ad blockers. What are Google’s options now that Googzilla is backed into a corner? Maybe Bobby Frost has a poem about it: “Some say the world will end in fire, Some say in ice.” How does Googzilla fare in the ice?

Stephen E Arnold, December 4, 2023

The RAG Snag: Convenience May Undermine Thinking for Millions

December 4, 2023

This essay is the work of a dumb dinobaby. No smart software required.

My understanding is that everyone who is informed about AI knows about RAG. The acronym means Retrieval Augmented Generation. One explanation of RAG appears in the nVidia blog in the essay “What Is Retrieval Augmented Generation aka RAG.” nVidia, in my opinion, loves whatever drives demand for its products.

The idea is to use machine processes to minimize errors in output. The write up states:

Retrieval-augmented generation (RAG) is a technique for enhancing the accuracy and reliability of generative AI models with facts fetched from external sources.

Simplifying the idea, RAG methods gather information and perform reference checks. The checks can be performed by consulting other smart software, the Web, or knowledge bases like an engineering database. nVidia provides a “reference architecture,” which obviously relies on nVidia products.
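Stripped of the vendor packaging, the loop is short: retrieve passages related to the query, prepend them to the prompt, and let the model generate from that grounded context. Here is a minimal sketch under my own assumptions; the toy corpus, retrieve(), and generate_answer() functions are stand-ins for illustration, not nVidia’s reference architecture or any vendor’s API.

```python
# A minimal sketch of the retrieve-then-generate loop described above.
# The corpus, retrieve(), and generate_answer() are illustrative stand-ins.

corpus = {
    "doc1": "The flange torque specification for the Model 7 pump is 35 newton-meters.",
    "doc2": "Retrieval augmented generation grounds model output in fetched sources.",
}

def retrieve(query: str, k: int = 1) -> list:
    # Toy relevance score: count of shared words. Real systems use vector similarity
    # against an index such as an engineering database or a web search service.
    def score(text: str) -> int:
        return len(set(query.lower().split()) & set(text.lower().split()))
    return sorted(corpus.values(), key=score, reverse=True)[:k]

def generate_answer(query: str, passages: list) -> str:
    # Stand-in for the language model call; here it only assembles the grounded prompt
    # so the model answers from the retrieved facts rather than from memory alone.
    context = "\n".join(passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(generate_answer("What is the torque specification for the Model 7 pump?",
                      retrieve("torque specification Model 7 pump")))
```

That is the whole trick. Everything else in the vendor diagrams is plumbing for scale and for the reference checks against other systems or knowledge bases.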

The write up does an obligatory tour of a couple of search and retrieval systems. Why? Most of the trendy smart software is a demonstration of information access methods wrapped up in tools that think for the “searcher” or person requiring output to answer a question, explain a concept, or help the human to think clearly. (In dinosaur days, the software performs functions once associated with a special librarian or an informed colleague who could ask questions and conduct a reference interview. I hope the dusty concepts did not make you sneeze.)


“Yes, young man. The idea of using multiple sources can result in learning. We now call this RAG, not research.” The young man, stunned by the insight, says, “WTF?” Thanks, MSFT Copilot. I love the dual tassels. The young expert is obviously twice as intelligent as the somewhat more experienced dinobaby with the weird fingers.

The article includes a diagram which I found difficult to read. I think the simple blocks represent the way in which smart software obviates the need for the user to know much about sources, verification, or provenance about the sources used to provide information. Furthermore, the diagram makes the entire process look just like getting the location of a pizza restaurant from an iPhone (no Google Maps for me).

The highlight of the write up is the links within the article. An interested reader can follow the links for additional information.

Several observations:

  1. The emergence of RAG as a replacement for such concepts as “search”, “special librarian,” and “provenance” makes clear that finding information is a problem not solved for systems, software, and people. New words make the “old” problem appear “new” again.
  2. The push for recursive methods to figure out what’s “right” or “correct” will regress to the mean; that is, despite the mathiness of the methods, systems will deliver “acceptable” or “average” outputs. A person who thinks that software will impart genius to a user is believing in a dream. These individuals will not be living the dream.
  3. Widespread use of smart software and automation means that for most people, critical thinking will become the equivalent of an appendix. Instead of mother knows best, the system will provide the framing, the context, and the implication that the outputs are correct.

RAG opens new doors. Those who operate widely adopted smart software systems will have significant control over what people think and, thus, do. If the iPhone shows a pizza joint, what about other pizza joints? Just ask. The system will not show pizza joints not verified in some way. If that “verification” requires the company’s advertising to be in the data set, well, that’s the set of pizza joints one will see. The others? Invisible, not on the radar, and doomed to failure seems to be their fate.

RAG is significant because it is new speak and it marks a disassociation of “knowing” from “accepting” output information as the best and final words on a topic. I want to point out that for a small percentage of humans, their superior cognitive abilities will ensure a different trajectory. The “new elite” will become the individuals who design, shape, control, and deploy these “smart” systems.

Most people will think they are informed because they can obtain information from a device. The mass of humanity will not know how information control influences their understanding and behavior. Am I correct? I don’t know. I do know one thing: This dinobaby prefers to do knowledge acquisition the old fashioned, slow, inefficient, and uncontrolled way.

Stephen E Arnold, December 4, 2023

Health Care and Steerable AI

December 4, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Large language models are powerful tools that can be used for the betterment of humanity. Or, in the hands of for-profit entities, to get away with wringing every last penny out of a system in the most opaque and intractable ways possible. When that system manages the wellbeing of millions and millions of people, the fallout can be tragic. TechDirt charges, “’AI’ Is Supercharging our Broken Healthcare System’s Worst Tendencies.”

Reporter Karl Bode begins by illustrating the bad blend of corporate greed and AI with journalism as an example. Media companies, he writes, were so eager to cut corners and dodge unionized labor they adopted AI technology before it was ready. In that case the results were “plagiarism, bull[pucky], a lower quality product, and chaos.” Those are bad. Mistakes in healthcare are worse. We learn:

“Not to be outdone, the very broken U.S. healthcare industry is similarly trying to layer half-baked AI systems on top of a very broken system. Except here, human lives are at stake. For example UnitedHealthcare, the largest health insurance company in the US, has been using AI to determine whether elderly patients should be cut off from Medicare benefits. If you’ve ever navigated this system on behalf of an elderly loved one, you likely know what a preposterously heartless [poop]whistle this whole system already is long before automation gets involved. But a recent investigation by STAT showed the AI consistently made major errors and cut elderly folks off from needed care prematurely, with little recourse by patients or families. … A recent lawsuit filed in the US District Court for the District of Minnesota alleges that the AI in question was reversed by human review roughly 90 percent of the time.”

And yet, employees were ordered to follow the algorithm’s decisions no matter their inanity. For the few patients who did win hard-fought reversals, those decisions were immediately followed by fresh rejections that kicked them back to square one. Bode writes:

“The company in question insists that the AI’s rulings are only used as a guide. But it seems pretty apparent that, as in most early applications of LLMs, the systems are primarily viewed by executives as a quick and easy way to cut costs and automate systems already rife with problems, frustrated consumers, and underpaid and overtaxed support employees.”

But is there hope these trends will be eventually curtailed? Well, no. The write-up concludes by scoffing at the idea that government regulations or class action lawsuits are any match for corporate greed. Sounds about right.

Cynthia Murrell, December 4, 2023

AI Adolescence Ascendance: AI-iiiiii!

December 1, 2023

This essay is the work of a dumb dinobaby. No smart software required.

The monkey business of smart software has revealed its inner core. The cute high school essays and the comments about how to do search engine optimization are based on the fundamental elements of money, power, and what I call ego-tanium. When these fundamental elements go critical, exciting things happen. I know this assertion is correct because I read “The AI Doomers Have Lost This Battle”, an essay which appears in the weird orange newspaper The Financial Times.

The British bastion of practical financial information says:

It would be easy to say that this chaos showed that both OpenAI’s board and its curious subdivided non-profit and for-profit structure were not fit for purpose. One could also suggest that the external board members did not have the appropriate background or experience to oversee a $90bn company that has been setting the agenda for a hugely important technology breakthrough.

In my lingo, the orange newspaper is pointing out that a high school science club management style is like a burning electric vehicle. Once ignited, the message is, “Stand back, folks. Let it burn.”


“Isn’t this great?” asks the driver. The passenger, a former Doomsayer, replies, “AIiiiiiiiiii.” Thanks MidJourney, another good enough illustration which I am supposed to be able to determine contains copyrighted material. Exactly how, may I ask? Oh, you don’t know.

The FT picks up a big-picture idea; that is, smart software can become a problem for humanity. That’s interesting because the book “Weapons of Math Destruction” did a good job of explaining why algorithms can go off the rails. But the FT’s essay embraces the idea of software as the Terminator with the enthusiasm of the crazy old-time guy who shouted “Eureka.”

I note this passage:

Unfortunately for the “doomers”, the events of the last week have sped everything up. One of the now resigned board members was quoted as saying that shutting down OpenAI would be consistent with the mission (better safe than sorry). But the hundreds of companies that were building on OpenAI’s application programming interfaces are scrambling for alternatives, both from its commercial competitors and from the growing wave of open-source projects that aren’t controlled by anyone. AI will now move faster and be more dispersed and less controlled. Failed coups often accelerate the thing that they were trying to prevent.

Okay, the yip yap about slowing down smart software is officially wrong. I am not sure about the government committees and their white papers about artificial intelligence. Perhaps the documents can be printed out and used to heat the camp sites of knowledge workers who find themselves out of work.

I find it amusing that some of the governments worried about smart software are involved in autonomous weapons. The idea of a drone which, with access to a facial recognition component, can pick out a target and then explode over the person’s head is an interesting one.

Is there a connection between the high school antics of OpenAI, the hand-wringing about smart software, and the diffusion of decider systems? Yes, and the relationship is one of those hockey stick curves so loved by MBAs from prestigious US universities. (Non reproducibility and a fondness for Jeffrey Epstein-type donors is normative behavior.)

Those who want to cash in on the next Big Thing are officially in the 2023 equivalent of the California gold rush. Unlike the FT, I had no doubt about the ascendance of the go-fast approach to technological innovation. Technologies, even lousy ones, are like gerbils. Start with two or three, and pretty soon there are lots of gerbils.

Will the AI gerbils and their progeny be good or bad? Because they are based on the essential elements of life — money, power, and ego-tanium — the outlook is … exciting. I am glad I am a dinobaby. Too bad about the Doomers, who are regrouping to try to build a shield around the most powerful elements now emitting excited particles. The glint in the eyes of Microsoft executives and some venture firms is the trace of high-energy AI emissions in the innovators’ aqueous humor.

Stephen E Arnold, December 1, 2023

Google and X: Shall We Again Love These Bad Dogs?

November 30, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Two stories popped out of my blah newsfeed this morning (Thursday, November 30, 2023). I want to highlight each and offer a handful of observations. Why? I am a dinobaby, and I remember the adults who influenced me telling me to behave, use common sense, and follow the rules of “good” behavior. Dull? Yes. A license to cut corners and do crazy stuff? No.

The first story, if it is indeed accurate, is startling. “Google Caught Placing Big-Brand Ads on Hardcore Porn Sites, Report Says” includes a number of statements about the Google which make me uncomfortable. For instance:

advertisers who feel there’s no way to truly know if Google is meeting their brand safety standards are demanding more transparency from Google. Ideally, moving forward, they’d like access to data confirming where exactly their search ads have been displayed.

Where are big brand ads allegedly appearing? How about “undesirable sites.” What comes to mind for me is adult content. There are some quite sporty ads on certain sites that would make a Methodist Sunday school teacher blush.


These two big dogs are having a heck of a time ruining the living room sofa. Neither dog knows that the family will not be happy. These are dogs, not the mental heirs of Immanuel Kant. Thanks, MSFT Copilot. The stuffing looks like soap bubbles, but you are “good enough,” the benchmark for excellence today.

But the shocking factoid is that Google does not provide a way for advertisers to know where their ads have been displayed. Also, there is a possibility that Google shared ad revenue with entities which may be hostile to the interests of the US. Let’s hope that the assertions reported in the article are inaccurate. But if big brand ads are displayed on sites with content which could conceivably erode brand value, what exactly is Google’s system doing? I will return to this question in the observations section of this essay.

The second article is equally shocking to me.

“Elon Musk Tells Advertisers: ‘Go F*** Yourself’” reports that the EV and rocket man with a big hole digging machine allegedly said about advertisers who purchase promotions on X.com (Twitter?):

Don’t advertise,” … “If somebody is going to try to blackmail me with advertising, blackmail me with money, go fuck yourself. Go f*** yourself. Is that clear? I hope it is.” … ” If advertisers don’t return, Musk said, “what this advertising boycott is gonna do is it’s gonna kill the company.”

The cited story concludes with this statement:

The full interview was meandering and at times devolved into stream of consciousness responses; Musk spoke for triple the time most other interviewees did. But the questions around Musk’s own actions, and the resulting advertiser exodus — the things that could materially impact X — seemed to garner the most nonchalant answers. He doesn’t seem to care.

Two stories. Two large and successful companies. What can a person like myself conclude, recognizing that there is a possibility that both stories may have some gaps and flaws:

  1. There is a disdain for old-fashioned “values” related to acceptable business practices
  2. The thread of pornography and foul language runs through the reports. The notion of well-crafted statements and behaviors is not part of the Google and X game plan in my view
  3. The indifference of the senior managers at both companies, which seeps through the descriptions of how Google and X operate, strikes me as intentional.

Now why?

I think that both companies are pushing the edge of business behavior. Google obviously is distributing ad inventory anywhere it can to try and create a market for more ads. Instead of telling advertisers where their ads are displayed or giving an advertiser control over where ads should appear, Google just displays the ads. The staggering irrelevance of the ads I see when I view a YouTube video is evidence that Google knows zero about me despite my being logged in and using some Google services. I don’t need feminine undergarments, concealed weapons products, or bogus health products.

With X.com the dismissive attitude of the firm’s senior management reeks of disdain. Why would someone advertise on a system which  promotes behaviors that are detrimental to one’s mental set up?

The two companies are different, but in a way they are similar in their approach to users, customers, and advertisers. Something has gone off the rails in my opinion at both companies. It is generally a good idea to avoid riding trains which are known to run on bad tracks, ignore safety signals, and demonstrate remarkably questionable behavior.

What if the write ups are incorrect? Wow, both companies are paragons. What if both write ups are dead accurate? Wow, wow, the big dogs are tearing up the living room sofa. More than “bad dog” is needed to repair the furniture for living.

Stephen E Arnold, November 30, 2023

Google Maps: Rapid Progress on Un-Usability

November 30, 2023

This essay is the work of a dumb dinobaby. No smart software required.

I read a Xhitter.com post about Google Maps. Those who have either heard me talk about the “new” Google Maps or who have read some of my blog posts on the subject know my view. The current Google Maps is useless for my needs. Last year, as one of my team was driving to a Federal secure facility, I bought an overpriced paper map at one of the truck stops. Why? I had no idea how to interact with the map in a meaningful way. My recollection was that I could coax Google Maps and Waze to be semi-helpful. Now the Google Maps developers have become tangled in a very large thorn bush. The team discusses how large the thorn bush is, how sharp the thorns are, and how such a large thorn bush could thrive in the Googley hot house.


This dinobaby expresses some consternation at [a] not knowing where to look, [b] how to show the route, and [c] not cause a motor vehicle accident. Thanks, MSFT Copilot. Good enough I think.

The result is enhancements to Google Maps which are the digital equivalent of skin cancer. The disgusting result is a vehicle for advertising and engagement that no one can use without head scratching moments. Am I alone in my complaint? Nope, the aforementioned Xhitter.com post aligns quite well with my perception. The author is a person who once designed a more usable version of Google Maps.

Her Xhitter.com post highlights the digital skin cancer the team of Googley wizards has concocted. Here’s a screen capture of her annotated, life-threatening disfigurement:

[Screen capture: her annotated Google Maps interface]

She writes:

The map should be sacred real estate. Only things that are highly useful to many people should obscure it. There should be a very limited number of features that can cover the map view. And there are multiple ways to add new features without overlaying them directly on the map.

Sounds good. But Xooglers and other outsiders are not likely to get much traction from the Map team. Everyone is working hard at landing in the hot AI area or some other discipline which will deliver a bonus and a promotion. Maps? Nope.

The former Google Maps’ designer points out:

In 2007, I was 1 of 2 designers on Google Maps. At that time, Maps had already become a cluttered mess. We were wedging new features into any space we could find in the UI. The user experience was suffering and the product was growing increasingly complicated. We had to rethink the app to be simple and scale for the future.

Yep, Google Maps, a case study of brilliant people who have lost the atlas to reality. And “sacred” at Google? Ad revenue, not making dear old grandma safer when she drives. (Tesla, Cruise, where are those smart, self-driving cars? Ah, I forgot. They are with Waymo, keeping their profile low.)

Stephen E Arnold, November 30, 2023

Amazon Customer Service: Let Many Flowers Bloom and Die on the Vine

November 29, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Amazon has been outputting artificial intelligence “assertions” at a furious pace. What’s clear is that Amazon is “into” the volume and variety business in my opinion. The logic of offering multiple “works in progress” and getting them to work reasonably well is going to have three characteristics: The first is that deploying and operating different smart software systems is going to be expensive. The second is that tuning and maintaining high levels of accuracy in the outputs will be expensive. The third is that supporting the users, partners, customers, and integrators is going to be expensive. If we use a bit of freshman high school algebra, the common factor is expensive. Amazon’s remarkable assertion that no one wants to bet a business on just one model strikes me as a bit out of step with the world in which bean counters scuttle and scurry in green eyeshades and sleeve protectors. (See. I am a dinobaby. Sleeve protectors. I bet none of the OpenAI type outfits have accountants who use these fashion accessories!)

Let’s focus on just one facet of the expensive burdens I touched upon above: customer service. Navigate to the remarkable and stunningly uncritical write up called “How to Reach Amazon Customer Service: A Complete Guide.” The write up is an earthworm list of the “options” Amazon provides. As Amazon was announcing its new new big big things, I was trying to figure out why an order for an $18 product was rejected. The item in question was one part of a multipart order. The other, more costly items were approved and billed to my Amazon credit card.


Thanks MSFT Copilot. You do a nice broken bulldozer or at least a good enough one.

But the dog treats?

I systematically worked through the Amazon customer service options. As a Prime customer, I assumed one of them would work. Here’s my report card:

  • Amazon’s automated help. A loop. See Help pages which suggested I navigate to the customer service page. Cute. A first year comp sci student’s programming error. A loop right out of the box. Nifty.
  • The customer service page. Well, that page sent me to Help, and Help sent me to the automation loop. Cool. Zero for two.
  • Access through the Amazon app. Nope. I don’t install “apps” on my computing devices unless I have zero choice. (Yes, I am thinking about Apple and Google.) Too bad Amazon, I reject your app the way I reject QR codes used by restaurants. (Do these hash slingers know that QR codes are a fave of some bad actors?)
  • Live chat with Amazon customer service was not live. It was a bot. The suggestion? Get back in the loop. Maybe the chat staff was at the Amazon AI announcement, or severely understaffed, or simply did not care. Another loser.
  • Request a call from Amazon customer service. Yeah, I got to that after I called Amazon customer service. Another loser.

I repeated the “call Amazon customer service” routine twice and finally worked through the automated system and got a person who barely spoke English. I explained the problem: one product rejected because my Amazon credit card was rejected. I learned that this particular customer service expert did not understand how that could have happened. Yeah, great work.

How did I resolve the rejected credit card? I called the Chase Bank customer service number. I told a person my card was manipulated and I suspected fraud. I was escalated to someone who understood the word “fraud.” After about five minutes of “Will you please hold,” the Chase person told me, “The problem is at Amazon, not your card and not Chase.”

What was the fix? Chase said, “Cancel the order.” I did and went to another vendor.

Now what’s that experience suggest about Amazon’s ability (willingness) to provide effective, efficient customer support to users of its purported multiple large language models, AI systems, and assorted marketing baloney output during Amazon’s “we are into AI” week?

My answer? The Bezos bulldozer has an engine belching black smoke, making a lot of noise because the muffler has a hole in it, and the thumpity thump of the engine reveals that something is out of tune.

Yeah, AI and customer support. Just one of the “expensive” things Amazon may not be able to deliver. The troubling thing is that Amazon’s AI may have been powering the multiple customer support systems. Yikes.

Stephen E Arnold, November 29, 2023

Is YouTube Marching Toward Its Waterloo?

November 28, 2023

This essay is the work of a dumb dinobaby. No smart software required.

I have limited knowledge of the craft of warfare. I do have a hazy recollection that Napoleon found himself at the wrong end of a pointy stick at the Battle of Waterloo. I do recall that Napoleon lost the battle and experienced the domino effect which knocked him down a notch or two. He ended up on the island of Saint Helena in the south Atlantic Ocean with Africa a short 1,200 miles to the east. But Nappy had no mobile phone, no yacht purchased with laundered money, and no Internet. Losing has its downsides. Bummer. No empire.

I thought about Napoleon when I read “YouTube’s Ad Blocker Crackdown Heats Up.” The question I posed to myself was, “Is the YouTube push for subscription revenue and unfettered YouTube user data collection a road to Google’s Battle of Waterloo?”


Thanks, MSFT Copilot. You have a knack for capturing the essence of a loser. I love good enough illustrations too.

The cited article from Channel News reports:

YouTube is taking a new approach to its crackdown on ad-blockers by delaying the start of videos for users attempting to avoid ads. There were also complaints by various X (formerly Twitter) users who said that YouTube would not even let a video play until the ad blocker was disabled or the user purchased a YouTube Premium subscription. Instead of an ad, some sources using Firefox and Edge browsers have reported waiting around five seconds before the video launches the content. According to users, the Chrome browser, which the streaming giant shares an owner with, remains unaffected.

If the information is accurate, Google is taking steps to damage what the firm has called the “user experience.” The idea is that users who want to watch “free” videos, have a choice:

  1. Put up with delays, pop ups, and mindless appeals to pay Google to show videos from people who may or may not be compensated by the Google
  2. Just fork over a credit card and let Google collect about $150 per year until the rates go up. (The cable TV and mobile phone billing model is alive and well in the Google ecosystem.)
  3. Experiment with advertisement blocking technology and accept the risk of being banned from Google services
  4. Learn to love TikTok, Instagram, DailyMotion, and Bitchute, among other options available to a penny-conscious consumer of user-produced content
  5. Quit YouTube and new-form video. Buy a book.

What happened to Napoleon before the really great decision to fight Wellington in a lovely part of Belgium? Waterloo is about nine miles south of the wonderful, diverse city of Brussels. Napoleon did not have a drone to send images of the rolling farmland, where the “enemies” were located, or the availability of something behind which to hide. Despite Nappy’s fine experience in his march to Russia, he muddled forward. Despite allegedly having said, “The right information is nine-tenths of every battle,” the Emperor entered battle, suffered 40,000 casualties, and ended up in what is today a bit of a tourist hot spot. In 1815, it was somewhat less enticing. Ordering troops to charge uphill against a septuagenarian’s forces was arguably as stupid as walking to Russia as snowflakes began to fall.

How does this Waterloo relate to the YouTube fight now underway? I see several parallels:

  1. Google’s senior managers, informed with the management lore of 25 years of unfettered operation, know that users can be knocked along a path of the firm’s choice. Think sheep. But sheep can be disorderly. One must watch sheep.
  2. The need to stem the rupturing of cash required to operate a massive “free” video service is another one of those Code Yellow and Code Red events for the company. With search known to be under threat from Sam AI-Man and the specters of “findability” AI apps, the loss of traffic could be catastrophic. Despite Google’s financial fancy dancing, costs are a bit of a challenge: New hardware costs money, options like making one’s own chips costs money, allegedly smart people cost money, marketing costs money, legal fees cost money, and maintaining the once-free SEO ad sales force costs money. Got the message: Expenses are a problem for the Google in my opinion.
  3. The threat of either TikTok or Instagram going long form remains. If these two outfits don’t make a move on YouTube, there will be some innovator who will. The price of “move fast and break things” means that the Google can be broken by an AI surfer. My team’s analysis suggests it is more brittle today than at any previous point in its history. The legal dust up with Yahoo about the Overture / GoTo issue was trivial compared to the cost control challenge and the AI threat. That’s a one-two for the Google management wizards to solve. Making sense of the Critique of Pure Reason is a much easier task in my view.

The cited article includes a statement which is likely to make some YouTube users uncomfortable. Here’s the statement:

Like other streaming giants, YouTube is raising its rates with the Premium price going up to $13.99 in the U.S., but users may have to shell out the money, and even if they do, they may not be completely free of ads.

What does this mean? My interpretation is that [a] even if you pay, a user may see ads; that is, paying does not eliminate ads for perpetuity; and [b] the fee is not permanent; that is, Google can increase it at any time.

Several observations:

  1. Google faces high-cost issues from different points of the business compass: Legal in the US and EU, commercial from known competitors like TikTok and Instagram, and psychological from innovators who find a way to use smart software to deliver a more compelling video experience for today’s users. These costs are not measured solely in financial terms. There is also the mental stress of what will percolate from the seething mass of AI entrepreneurs. Nappy did not sleep too well after Waterloo. Too much Beef Wellington, perhaps?
  2. Google’s management methods have proven appropriate for generating revenue from an ad model in which Google controls the billing touch points. When those management techniques are applied to non-controllable functions, they fail. The hallmark of the management misstep is the handling of Dr. Timnit Gebru, a squeaky wheel in the Google AI content marketing machine. There is nothing quite like stifling a dissenting voice, the squawk of a parrot, and a don’t-let-the-door-hit-you-when-you-leave moment.
  3. The post-Covid, continuous warfare, and unsteady economic environment is causing the social fabric to fray and in some cases tear. This means that users may become contentious and receptive to a spontaneous flash mob action toward Google and YouTube. User revolt at scale is not something Google has demonstrated a core competence in handling.

Net net: I will get my microwave popcorn and watch this real-time Google Boogaloo unfold. Will a recipe become famous? How about Grilled Google en Croute?

Stephen E Arnold, November 28, 2023
