Recent Googlies: The We-Care-About-Your-Experience Outfit

October 18, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I flipped through some recent items from my newsfeed and noted several about everyone’s favorite online advertising platform. Herewith is my selection for today:

ITEM 1. Boing Boing, “Google Reportedly Blocking Benchmarking Apps on Pixel 8 Phones.” If the mobile devices were fast — what the GenX and younger folks call “performant” (weird word, right?) — wouldn’t the world’s largest online ad service make speed test software and its results widely available? If not, perhaps the mobile devices are digital turtles?


Hey, kids. I just want to be your friend. We can play hide and seek. We can share experiences. You know that I do care about your experiences. Don’t run away, please. I want to be sticky. Thanks, MidJourney, you have a knack for dinosaur art. Boy that creature looks familiar.

ITEM 2. The Next Web, “Google to Pay €3.2M Yearly Fee to German News Publishers.” If Google traffic and its benefits were so wonderful, why would the Google pay publishers? Hmmm.

ITEM 3. The Verge (yep, the green weird logo outfit), “YouTube Is the Latest Large Platform to Face EU Scrutiny Regarding the War in Israel.” Why is the EU so darned concerned about an online advertising company which still sells wonderful Google Glass, expresses much interest in a user’s experience, and some fondness for synthetic data? Trust? Failure to filter certain types of information? A reputation for outstanding business policies?

ITEM 4. Slashdot quoted a document spotted by the Verge (see ITEM 3) which includes this statement: “… Google rejects state and federal attempts at requiring platforms to verify the age of users.” Google cares about “user experience” too much to fool with administrative and compliance functions.

ITEM 5. The BBC reports in “Google Boss: AI Too Important Not to Get Right.” The tie up between Cambridge University and Google is similar to the link between MIT and IBM. One omission in the fluff piece: No definition of “right.”

ITEM 6. Ars Technica reports that Google has annoyed the estimable New York Times. Google, it seems, is using its legal brigades to do some Fancy Dancing at the antitrust trial. Access to public trial exhibits has been limited. Plus, requests from the New York Times are being ignored. Is the Google above the law? What does “public” mean?

Yep, Google googlies.

Stephen E Arnold, October 18, 2023

The Path to Success for AI Startups? Fancy Dancing? Pivots? Twisted Ankles?

October 17, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “AI-Enabled SaaS vs Moatless AI.” The buzzwordy title hides a somewhat grim prediction for startups in the AI game. Viggy Balagopalakrishnan (I love that name Viggy) explains that the best shot at success is:

…the only real way to build a strong moat is to build a fuller product. A company that is focused on just AI copywriting for marketing will always stand the risk of being competed away by a larger marketing tool, like a marketing cloud or a creative generation tool from a platform like Google/Meta. A company building an AI layer on top of a CRM or helpdesk tool is very likely to be mimicked by an incumbent SaaS company. The way to solve for this is by building a fuller product.

My interpretation of this comment is that small or focused AI solutions will find competing with big outfits difficult. Some may be acquired. A few may come up with a magic formula for money. But most will fail.


How does that moat work when an AI innovator’s construction is attacked by energy weapons discharged from massive death stars patrolling the commercial landscape? Thanks, MidJourney. Pretty complicated pointy things on the castle with a moat.

Viggy does not touch upon the failure of regulatory entities to slow the growth of companies that some allege are monopolies. One example is the Microsoft game play. Another is the somewhat accommodating investigation of the Google with its closed sessions and odd stance on certain documents.

There are other big outfits as well, and the main idea is that the ecosystem is not set up for most AI plays to survive with huge predators dominating the commercial jungle. That means clever scripts, trade secrets, and agility may not be sufficient to ensure survival.

What does Viggy think? Here’s an X-ray of his perception:

Given that the infrastructure and platform layers are getting reasonably commoditized, the most value driven from AI-fueled productivity is going to be captured by products at the application layer. Particularly in the enterprise products space, I do think a large amount of the value is going to be captured by incumbent SaaS companies, but I’m optimistic that new fuller products with an AI-forward feature set and consequently a meaningful moat will emerge.

How do moats work when Amazon-, Google-, Microsoft-, and Oracle-type outfits just add AI to their commercial products the way the owner of a Ford Bronco installs a lift kit and roof lights?

Productivity? If that means getting rid of humans, I agree. If the term means, to Viggy, smarter and more informed decision making, I am not sure. Moats don’t work in the 21st century. Land mines, surprise attacks, drones, and missiles seem to be more effective. Can small firms deal with the likes of Googzilla, the Bezos bulldozer, and legions of Softies? Maybe. Viggy is an optimist. I am a realist with a touch of radical empiricism, a tasty combo indeed.

Stephen E Arnold, October 17, 2023

Big, Fat AI Report: Free and Meaty for Marketing Collateral

October 12, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Curious about AI, machine learning, and smart software? You will want to obtain a free (at least as of October 6, 2023) report called “Artificial Intelligence Index Report 2023.” The 386-page PDF contains information selected to make it clear that AI is a big deal. There is no reference to the validity of the research conducted for the document. I find that interesting since the president of Stanford University stepped carefully from the speeding world of academia to find his future elsewhere. Making up data seems to be a signature feature of outfits like Stanford and, of course, Harvard.


A Musk-inspired robot reads a print out of the PDF report. The robot looks … like a robot. Thanks, Microsoft Bing. You do a good robot.

But back to the report.

For those who lack the time and a swipe-left deflector, a two-page summary identifies the big finds from the work. Let me highlight three, or 30 percent, of the knowledge gems. Please consult the full report for the other seven discoveries. No blood pressure reduction medicine is needed, but you may want to use the time between plays at an upcoming NFL game to work through the full document.

Three big reveals:

  1. AI continued to post state-of-the-art results, but year-over-year improvement on many benchmarks continues to be marginal.
  2. … The number of AI-related job postings has increased on average from 1.7% in 2021 to 1.9% in 2022.
  3. An AI Index analysis of the legislative records of 127 countries shows that the number of bills containing “artificial intelligence” that were passed into law grew from just 1 in 2016 to 37 in 2022.

My interpretation of this full suite of 10 key points: The hype is stabilizing.

Who funded the project? Not surprisingly, the Google and OpenAI kicked in. There is a veritable who’s who of luminaries and high-profile research outfits providing some assistance as well. Headhunters will probably want to print out the pages with the names and affiliations of the individuals listed. One never knows where the next Elon Musk lurks.

The report has eight chapters, but the bulk of the information appears in the first four; to wit:

  • R&D
  • Technical performance
  • Technical AI ethics
  • The economy

I want to be up front. I scanned the document. Does it confront issues like the objective of Google and a couple of other firms dominating the AI landscape? Nah. Does it talk about the hallucination and ethical features of smart software? Nah. Does it delve into the legal quagmire which seems to be spreading faster than dilapidated RVs parked on El Camino Real? Nah.

I suggest downloading a copy and checking out the sections which appear germane to your interests in AI. I am happy to have a copy for reference. Marketing collateral from an outfit whose president resigned due to squishy research does not reassure me. Yes, integrity matters to me. Others? Maybe not.

Stephen E Arnold, October 12, 2023

Open Source Companies: Bet on Expandability and Extendibility

October 12, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Naturally, a key factor driving adoption of open source software is a need to save money. However, argues Lago co-founder Anh-Tho Chuong, “Open Source Does Not Win by Being Cheaper” than the competition. Not just that, anyway. She writes:

“What we’ve learned is that open-source tools can’t rely on being an open-source alternative to an already successful business. A developer can’t just imitate a product, tag on an MIT license, and call it a day. As awesome as open source is, in a vacuum, it’s not enough to succeed. … [Open-source companies] either need a concrete reason for why they are open source or have to surpass their competitors.”

One caveat: Chuong notes she is speaking of businesses like hers, not sponsored community projects like React, TypeORM, or VSCode. Outfits that need to turn a profit to succeed must offer more than savings to distinguish themselves, she insists. The post notes two specific problems open-source developers should aim to solve: transparency and extensibility. It is important to many companies to know just how their vendors are handling their data (and that of their clients). With closed software one just has to trust information is secure. The transparency of open-source code allows one to verify that it is. The extensibility advantage comes from the passion of community developers for plugins, which are often merged into the open-source main branch. It can be difficult for closed-source engineering teams to compete with the resulting extendibility.

See the write-up for examples of both advantages from the likes of MongoDB, PostHog, and Minio. Chuong concludes:

“Both of the above issues contribute to commercial open-source being a better product in the long run. But by tapping the community for feedback and help, open-source projects can also accelerate past closed-source solutions. … Open-source projects—not just commercial open source—have served as a critical driver for the improvement of products for decades. However, some software is going to remain closed source. It’s just the nature of first-mover advantage. But when transparency and extensibility are an issue, an open-source successor becomes a real threat.”

Cynthia Murrell, October 12, 2023

Cognitive Blind Spot 3: You Trust Your Instincts, Right?

October 9, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

ChatGPT became available in the autumn of 2022. By December, a young person fell in love with his chatbot. From this dinobaby’s point of view, that was quicker than a love affair ignited by a dating app. “Treason Case: What Are the Dangers of AI Chatbots?” misses the point of its own reporter’s story. The Beeb puts the blame on Jaswant Singh Chail, not the software. Justice needs an individual, not a pride of zeros and ones.


A bad actor tries to convince other criminals that he is honest, loyal, trustworthy, and an all-around great person. “Trust me,” he says. Some of those listening to the words are skeptical. Thanks, MidJourney. You are getting better at depicting duplicity.

Here’s the story: Shortly after discovering an online chatbot, Mr. Chail fell in love with “an online companion.” The Replika app allows a user to craft a chatbot. The protagonist in this love story promptly moved from casual chit chat to emotional attachment. As the narrative arc unfolded, Mr. Chail confessed that he was an assassin, and he wanted to kill the Queen of England. Mr. Chail planned on using a crossbow.

The article reports:

Marjorie Wallace, founder and chief executive of mental health charity SANE, says the Chail case demonstrates that, for vulnerable people, relying on AI friendships could have disturbing consequences. “The rapid rise of artificial intelligence has a new and concerning impact on people who suffer from depression, delusions, loneliness and other mental health conditions,” she says.

That seems reasonable. The software meshed nicely with the cognitive blind spot of trusting one’s intuition. Some call this “gut” feel. The label matters less than the confusion of software with reality.

But what happens when the new Google Pixel 8 camera enhances an image automatically? Who wants a lousy snap? Google appears to favor a Mother Google approach. When an image is manipulated, either in a still or a video, does one’s gut still say, “I trust pictures and videos for accuracy”? Like the young, would-be, off-the-rails chatbot lover, zeros and ones can create some interesting effects.

What about you, gentle reader? Do you know how to recognize an unhealthy interaction with smart software? Can you determine if an image is “real” or the fabrication of a large outfit like Google?

Stephen E Arnold, October 9, 2023

Smart Productivity Software Means Pink Slip Flood

October 9, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Ready for some excitement, you under 50s?

Soon, workers may be spared the pain of training their replacements. Consciously, anyway. Wired reports, “Your Boss’s Spyware Could Train AI to Replace You.” Researcher Carl Frey’s landmark 2013 prediction that AI could threaten half of US jobs has not yet come to pass. Now that current tools like ChatGPT have proven (so far) less accurate and self-sufficient than advertised, some workers are breathing a sigh of relief. Not so fast, warns journalist Thor Benson. It is the increasingly pervasive “productivity” (aka monitoring) software we need to be concerned about. Benson writes:

“Enter corporate spyware, invasive monitoring apps that allow bosses to keep close tabs on everything their employees are doing—collecting reams of data that could come into play here in interesting ways. Corporations, which are monitoring their employees on a large scale, are now having workers utilize AI tools more frequently, and many questions remain regarding how the many AI tools that are currently being developed are being trained. Put all of this together and there’s the potential that companies could use data they’ve harvested from workers—by monitoring them and having them interact with AI that can learn from them—to develop new AI programs that could actually replace them. If your boss can figure out exactly how you do your job, and an AI program is learning from the data you’re producing, then eventually your boss might be able to just have the program do the job instead.”

Even at companies that do not use spyware, employees may unwittingly train their AI replacements simply by generating data as part of their work. To make matters worse, because it gets neither salary nor benefits, an algorithm need not exceed or even match a human’s performance to land the job.

So what can we do? We could retrain workers but, as MIT economics professor David Autor notes, that is not one of the US’s strong suits. Or we could take a cue from the Industrial Revolution: Frey points to Britain’s Poor Laws, which gave financial relief to workers whose jobs became obsolete back then. Hmm, we wonder: How would a similar measure fare in the current US Congress?

Cynthia Murrell, October 9, 2023

Cognitive Blind Spot 2: Bandwagon Surfing or Do What May Be Fashionable

October 6, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The litigation about the use of Web content to train smart generative software is ramping up. Outfits like OpenAI, Microsoft, and Amazon and its new best friend will be snagged in the US legal system.

Humans are into trends. The NFL and Taylor Swift appear to be a trend. A sporting money machine and a popular music money machine. Jersey sales increase. Ms. Swift’s music sales go up. New eyeballs track a certain football player. The question is, “Who is exploiting whom?”

Which bandwagon are you riding? Thank you, MidJourney. Gloom seems to be part of your DNA.

Think about large language models and smart software. A similar dynamic may exist. Late in 2022, the natural language interface became the next big thing. Students and bad actors figured out that using a ChatGPT-type service could expedite certain activities. Students could get 500-word essays in less than a minute. Bad actors could get snippets of code in seconds. In short, many people were hopping on the LLM bandwagon decorated with smart software logos.

Now a bandwagon powered by healthy skepticism may be heading toward main street. Wired Magazine published a short essay titled “Chatbot Hallucinations Are Poisoning Web Search.” The foundational assumption is that Web search was better before ChatGPT-type incursions. I am not sure that idea is valid, but for the purposes of illustrating bandwagon surfing, it will pass unchallenged. Wired’s main point is that as AI-generated content proliferates, the results delivered by Google and a couple of other, vastly less popular search engines will deteriorate. I think this is a way to assert that lousy LLM output will make Web search worse. “Hallucination” is jargon for made-up or just incorrect information.

Consider this essay “Evaluating LLMs Is a Minefield.” The essay and slide deck are the work of two AI wizards. The main idea is that figuring out whether a particular LLM or a ChatGPT-service is right, wrong, less wrong, more right, biased, or a digital representation of a 23 year old art history major working in a public relations firm is difficult.

I am not going to take the side of either referenced article. The point is that the hyperbolic excitement about “smart software” seems to be giving way to LLM criticism. From software for Every Man, the services are becoming tools for improving productivity.

To sum up, the original bandwagon has been pushed out of the parade by a new bandwagon filled with poobahs explaining that smart software, LLM, et al are making the murky, mysterious Web worse.

The question becomes, “Are you jumping on the bandwagon with the banner that says ‘LLMs are really bad,’ or are you sticking with the rah-rah crowd?” The point is that information at one point was good. Now information is less good. Imagine how difficult it will be to determine what’s right or wrong, biased or unbiased, acceptable or unacceptable.

Who wants to do the work to determine provenance or answer questions about accuracy? Not many people. That, rather than lousy Web search, may be more important to some professionals. But that does not solve the problem of the time and resources required to deal with accuracy and other issues.

So which bandwagon are you riding? The NFL or Taylor Swift? Maybe the tension between the two?

Stephen E Arnold, October 6, 2023

Cognitive Blind Spot 1: Can You Identify Synthetic Data? Better Learn.

October 5, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

It has been a killer with the back-to-back trips to Europe and then to the intellectual hub of old-fashioned America. In France, I visited a location allegedly the office of a company which “owns” the domain rrrrrrrrrrr.com. No luck. Fake address. I then visited a semi-sensitive area in Paris, walking around in the confused fog only a 78-year-old can generate. My goal was to spot a special type of surveillance camera designed to provide data to a smart software system. The idea is that the images can be monitored through time, so a vehicle making frequent passes of a structure can be flagged, its number tag read, and a bit of thought given to answer the question, “Why?” I visited with a friend and big brain who was one of the technical keystones of an advanced search system. He gave me his most recent book and I paid for my Orangina. Exciting.


One executive tells his boss, “Sir, our team of sophisticated experts reviewed these documents. The documents passed scrutiny.” One of the “smartest people in the room” asks, “Where are we going for lunch today?” Thanks, MidJourney. You do understand executive stereotypes, don’t you?

On the flights, I did some thinking about synthetic data. I am not sure that most people can provide a definition which will embrace the Google’s efforts in the money-saving land of the synthetic. I don’t think too many people know about Charlie Javice’s use of synthetic data to whip up JPMC’s enthusiasm for her company Frank Financial. I don’t think most people understand that when one types a phrase into the Twitch AI Jesus, the software outputs a video and mostly crazy talk along with some Christian lingo.

The purpose of this short blog post is to present an example of synthetic data and conclude by revisiting the question, “Can You Identify Synthetic Data?” The article I want to use as a hook for this essay is from Fortune Magazine. I love that name, and I think the wolves of Wall Street find it euphonious as well. Here’s the title: “Delta Is Fourth Major U.S. Airline to Find Fake Jet Aircraft Engine Parts with Forged Airworthiness Documents from U.K. Company.”

The write up states:

Delta Air Lines Inc. has discovered unapproved components in “a small number” of its jet aircraft engines, becoming the latest carrier and fourth major US airline to disclose the use of fake parts.  The suspect components — which Delta declined to identify — were found on an unspecified number of its engines, a company spokesman said Monday. Those engines account for less than 1% of the more than 2,100 power plants on its mainline fleet, the spokesman said. 

Okay, bad parts can fail. If the failure is in a critical component of a jet engine, the aircraft could — note that I am using the word could — experience a catastrophic failure. Translating catastrophic into more colloquial lingo, the sentence means catch fire and crash or something slightly less terrible; namely, catch fire, explode, eject metal shards into the tail assembly, or make a loud noise and emit smoke. Exciting, just not terminal.

I don’t want to get into how the synthetic or fake data made its way through the UK company, the UK bureaucracy, the Delta procurement process, and into the hands of the mechanics working in the US or offshore. The fake data did elude scrutiny for some reason. With money being of paramount importance, my hunch is that saving some money played a role.

If organizations cannot spot fake data when it relates to a physical and mission-critical component, how will organizations deal with fake data generated by smart software? The smart software can get it wrong because an engineer-programmer screwed up his or her math, or because the complex web of algorithms just generates unanticipated behaviors from dependencies no one knew to check and validate.

What happens when a computer, which many people assume is “always” more right than a human, says, “Here’s the answer”? Many humans will skip the hard work because they are in a hurry, have no appetite for grunt work, or are scheduled by a Microsoft calendar to do something else when the quality assurance testing is supposed to take place.

Let’s go back to the question in the title of the blog post, “Can You Identify Synthetic Data?”

I don’t want to forget this part of the title, “Better learn.”

JPMC paid out more than $100 million in November 2022 because some of the smartest guys in the room weren’t that smart. But get this. JPMC is a big, rich bank. People who could die because of synthetic data are a different kettle of fish. Yeah, that’s what I thought about as I flew Delta back to the US from Paris. At the time, I thought Delta had not fallen prey to the scam.

I was wrong. Hence, I “better learn” myself.

Stephen E Arnold, October 5, 2023

Kagi Rolls Out a Small Web Initiative

October 5, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Recall the early expectations for the Web: It would be a powerful conduit for instant connection and knowledge-sharing around the world. Despite promises to the contrary, that rosy vision has long since given way to commercial interests’ paid content, targeted ads, bots, and data harvesting. Launched in 2018, Kagi offers a way to circumvent those factors with its ad-free, data-protecting search engine—for a small fee, naturally. Now the company is promoting what it calls the Kagi Small Web initiative. We learn from the blog post:

“Since inception, we’ve been featuring content from the small web through our proprietary Teclis and TinyGem search indexes. This inclusion of high-quality, lesser-known parts of the web is part of what sets Kagi’s search results apart and gives them a unique flavor. Today we’re taking this a step further by integrating Kagi Small Web results into the index.”

See the write-up for examples. Besides these insertions into search results, one can also access these harder-to-find sources at the new Kagi Small Web website. This project displays a different random, recent Web page with each click of the “Next Post” button. Readers are also encouraged to check out their experimental Small YouTube, which we are told features content by YouTube creators with fewer than 4,000 subscribers. (Although as of this writing, the Small YouTube link supplied redirects right back to the source blog post. Hmm.)

The write-up concludes with these thoughts on Kagi’s philosophy:

“The driving question behind this initiative was simple yet profound: the web is made of millions of humans, so where are they? Why do they get overshadowed in traditional search engines, and how can we remedy this? This project required a certain leap of faith as the content we crawl may contain anything, and we are putting our reputation on the line vouching for it. But we also recognize that the ‘small web’ is the lifeblood of the internet, and the web we are fighting for. Those who contribute to it have already taken their own leaps of faith, often taking time and effort to create, without the assurance of an audience. Our goal is to change that narrative. Together with the global community of people who envision a different web, we’re committed to revitalizing a digital space abundant in creativity, self-expression, and meaningful content – a more humane web for all.”

Does this suggest that Google Programmable Search Engine is a weak sister?

Cynthia Murrell, October 5, 2023

A Pivotal Moment in Management Consulting

October 4, 2023

The practice of selling “management consulting” has undergone a handful of tectonic shifts since Edwin Booz convinced Sears, the “department” store outfit, to hire him. (Yes, I am aware I am cherry picking, but this is a blog post, not a for-fee report.)

The first was the ability of a consultant to move around quickly. Trains and Chicago became synonymous with management razzle dazzle. The center of gravity shifted to New York City because consulting thrives where there are big companies. The second was the institutionalization of the MBA as a certification of a 23-year-old’s expertise. The third was the “invention” of former consultants for hire. The innovator in this business was Gerson Lehrman Group, but there are many imitators who hire former blue-chip types and resell them without the fee baggage of the McKinsey & Co. type outfits. And now the fourth earthquake is rattling carpetland and the windows in corner offices (even if these offices are in an expensive home in Wyoming).


A centaur and a cyborg working on a client report. Thanks, MidJourney. Nice hair style on the cyborg.

Now we have the era of smart software, or what I prefer to call the era of hyperbole about semi-smart, semi-automated systems which output “information.” I noted this write-up from the estimable Harvard University. Yes, this is the outfit that appointed an expert in ethics to head up the outfit’s ethics department. The same ethics expert allegedly made up data for peer-reviewed publications. Yep, that Harvard University.

“Navigating the Jagged Technological Frontier” is an essay crafted by the D^3 faculty. None of this single-author stuff in an institution where fabrication of research is a stand-up comic’s joke. “What’s the most terrifying word for a Harvard ethicist?” Give up? “Ethics.” Ho ho ho.

What are the highlights from this esteemed group of researchers, thinkers, and analysts? I quote:

  • For tasks within the AI frontier, ChatGPT-4 significantly increased performance, boosting speed by over 25%, human-rated performance by over 40%, and task completion by over 12%.
  • The study introduces the concept of a “jagged technological frontier,” where AI excels in some tasks but falls short in others.
  • Two distinct patterns of AI use emerged: “Centaurs,” who divided and delegated tasks between themselves and the AI, and “Cyborgs,” who integrated their workflow with the AI.

Translation: We need fewer MBAs and old timers who are not able to maximize billability with smart or semi smart software. Keep in mind that some consultants view clients with disdain. If these folks were smart, they would not be relying on 20-somethings to bail them out and provide “wisdom.”

This dinobaby is glad he is old.

Stephen E Arnold, October 4, 2023
