Knowledge Workers, AI Software Is Cheaper and Does Not Take Vacations. Worried Yet?
November 2, 2023
This essay is the work of a dumb humanoid. No smart software required.
I believe the 21st century is the era of “good enough” or “close enough for horseshoes” products and services. Excellence is a surprise, not a goal. At a talk I gave at CeBIT years ago, I explained that certain information-centric technologies had reached the “let’s give up” stage of development. Fresh in my mind were the lessons I learned writing a compendium of information access systems published as “The Enterprise Search Report” by a company lost to me in the mists of time.
“I just learned that our department will be replaced by smart software,” says the MBA from Harvard. The female MBA from Stanford emits a scream just like the one she let loose after scuffing her new Manuel Blahnik (Rodríguez) shoes. Thanks, MidJourney, you delivered an image with a bit of perspective. Good enough work.
I identified the flaws in implementations of knowledge management, information governance, and enterprise search products. The “good enough” comment was made to me during the Q-and-A session. The younger person pointed out that systems for finding information — regardless of the words I used to describe what most knowledge workers did — were “good enough.” I recall the simile the intense young person offered as I was leaving the lecture hall. Vivid now, years later, is the comment that improving information access was like making catalytic converters deliver zero emissions. Thus, information access can’t get where it should be. The technology is good enough.
I wonder if that person has read “AI Anxiety As Computers Get Super Smart.” Probably not. I believe that young person knew more than I did. As a dinobaby, I just smiled and listened. I am a smart dinobaby in some situations. I noted this passage in the cited article:
Generative AI, however, can take aim at white-collar jobs such as lawyers, doctors, teachers, journalists, and even computer programmers. A report from the McKinsey consulting firm estimates that by the end of this decade, as much as 30 percent of the hours worked in the United States could be automated in a trend accelerated by generative AI.
Executive orders and government proclamations are unlikely to have much effect on some people. The write up points out:
Generative AI makes it easier for scammers to create convincing phishing emails, perhaps even learning enough about targets to personalize approaches. Technology lets them copy a face or a voice, and thus trick people into falling for deceptions such as claims a loved one is in danger, for example.
What’s the fix? One that is good enough probably won’t have much effect.
Stephen E Arnold, November 2, 2023
By Golly, the Gray Lady Will Not Miss This AI Tech Revolution!
November 2, 2023
This essay is the work of a dumb humanoid. No smart software required.
The technology beacon of the “real” newspaper is shining brightly. Flash, the New York Times Online. Flash, terminating the exclusive with LexisNexis. Flash. The shift to — wait for it — a Web site. Flash. The in-house indexing system. Flash. Buying About.com. Flash. Doing podcasts. My goodness, the flashes have impaired my vision. And where are we today after labor strife, newsroom craziness, and a list of bestsellers that gets data from…? I don’t really know, and I just haven’t bothered to do some online poking around.
A real journalist of today uses smart software to write listicles for Buzzfeed, essays for high school students, and feature stories for certain high-profile newspapers. Thanks for the drawing, Microsoft Bing. Trite but okay.
I thought about the technology flashes from the Gray Lady’s beacon high atop its building sort of close to Times Square. Nice branding. I wonder if mobile phone users know why the tourist destination is called Times Square. Since I no longer work in New York, I have forgotten. I do remember the high intensity pinks and greens of a certain type of retail establishment. In fact, I used to know the fellow who created this design motif. Ah, you don’t remember. My hunch is that there are other factoids you and I won’t remember.
For example, what’s the byline on a New York Times’s story? I thought it was the name or names of the many people who worked long hours, made phone calls, visited specific locations, and sometimes visited the morgue (no, the newspaper morgue, not the “real” morgue where the bodies of compromised sources ended up).
If the information in that estimable source Showbiz411.com is accurate, the Gray Lady may cite zeros and ones. The article is “The New York Times Help Wanted: Looking for an AI Editor to Start Publishing Stories. Six Figure Salary.” Now that’s an interesting assertion. A person like me might ask, “Why not let a recent college graduate crank out machine generated stories?” My assumption is that most people trying to meet a deadline and in sync with Taylor Swift will know about machine-generated information. But, if the story is true, here’s what’s up:
… it looks like the Times is going let bots do their journalism. They’re looking for “a senior editor to lead the newsroom’s efforts to ambitiously and responsibly make use of generative artificial intelligence.” I’m not kidding. How the mighty have fallen. It’s on their job listings.
The Showbiz411.com story allegedly quotes the Gray Lady’s help wanted ad as saying:
“This editor will be responsible for ensuring that The Times is a leader in GenAI innovation and its applications for journalism. They will lead our efforts to use GenAI tools in reader-facing ways as well as internally in the newsroom. To do so, they will shape the vision for how we approach this technology and will serve as the newsroom’s leading voice on its opportunity as well as its limits and risks.”
There are a bunch of requirements for this job. My instinct is that a few high school students could jump into this role. What’s the difference between a ChatGPT output about crossing the Delaware and a “real” news article about fashion trends seen at Otto’s Shrunken Head?
Several observations:
- What does this ominous development mean to the accountants who will calculate the cost of “real” journalists versus a license to smart software? My thought is that the general reaction will be positive. Imagine: No vacays, no sick days, and no humanoid protests. The Promised Land has arrived.
- How will the Gray Lady’s management team explain this cuddling up to smart software? Perhaps it is just one of those newsroom romances? On the other hand, what if something serious develops and the smart software moves in? Yipes.
- What will “informed” readers think of stories crafted by the intellectual engine behind a high school student’s essay about great moments in American history? Perhaps the “informed” readers won’t care?
Exciting stuff in the world of real journalism down the street from Times Square and the furries, pickpockets, and gawkers from Ames, Iowa. I wonder if the hallucinating smart software will be as clever as the journalist who fabricates a story? Probably not. “Real” journalists do not shape, weaponize, or filter the actual factual. Is John Wiley & Sons ready to take the leap?
Stephen E Arnold, November 2, 2023
Now the AI $64 Question: Where Are the Profits?
October 26, 2023
This essay is the work of a dumb humanoid. No smart software required.
As happens with most over-hyped phenomena, AI is looking like a disappointment for investors. Gizmodo laments, “So Far, AI Is a Money Pit That Isn’t Paying Off.” Writer Lucas Ropek cites this report from the Wall Street Journal as he states tech companies are not, as of yet, profiting off AI as they had hoped. For example, Microsoft’s development automation tool GitHub Copilot lost an average of $20 a month for each $10-per-month user subscription. Even ChatGPT is seeing its user base decline while operating costs remain sky high. The write-up explains:
“The reasons why the AI business is struggling are diverse but one is quite well known: these platforms are notoriously expensive to operate. Content generators like ChatGPT and DALL-E burn through an enormous amount of computing power and companies are struggling to figure out how to reduce that footprint. At the same time, the infrastructure to run AI systems—like powerful, high-priced AI computer chips—can be quite expensive. The cloud capacity necessary to train algorithms and run AI systems, meanwhile, is also expanding at a frightening rate. All of this energy consumption also means that AI is about as environmentally unfriendly as you can get. To get around the fact that they’re hemorrhaging money, many tech platforms are experimenting with different strategies to cut down on costs and computing power while still delivering the kinds of services they’ve promised to customers. Still, it’s hard not to see this whole thing as a bit of a stumble for the tech industry. Not only is AI a solution in search of a problem, but it’s also swiftly becoming something of a problem in search of a solution.”
Ropek notes it would have been wise for companies to figure out how to turn a profit on AI before diving into the deep end. Perhaps, but leaping into the next big thing is a priority for tech firms lest they be left behind. After all, who could have predicted this result? Let’s ask Google Bard, OpenAI, or one of the numerous AI “players.” Even better, perhaps, will be deferring the question of costs until the AI factories go online.
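The arithmetic behind the gloom is worth making explicit. Below is a back-of-envelope sketch of the Copilot figures cited above; the $30 serving cost is an inference from the reported numbers (revenue plus average loss), not a figure Microsoft has disclosed.

```python
# Back-of-envelope unit economics from the reported Copilot figures.
# The implied $30 serving cost is an inference, not a disclosed number.
subscription_price = 10.0  # dollars per user per month (reported)
average_loss = 20.0        # dollars per user per month (reported)

implied_cost = subscription_price + average_loss
margin = subscription_price - implied_cost  # negative: a money pit

print(f"Implied serving cost: {implied_cost:.0f} dollars/user/month")
print(f"Margin per user: {margin:+.0f} dollars/month")  # -20
```

In other words, every subscriber costs roughly three times what he pays. No wonder the bean counters are restless.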
Cynthia Murrell, October 26, 2023
Data Drift: Yes, It Is Real and Feeds on False Economy Methods
October 10, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
When I mention statistical drift, most of those in my lectures groan and look at their mobile phones. I am delighted to call attention to a write up called “The ‘Model-Eat-Model World’ of Clinical AI: How Predictive Power Becomes a Pitfall.” The article focuses on medical information, but its message applies to a wide range of “smart” models. These range from the Google shortcuts of Snorkel to the Bayesian-based systems in vogue in many policeware and intelware products. The behavior appears to have influenced Dr. Timnit Gebru and contributed to her invitation to find her future elsewhere from none other than the now marginalized Google Brain group. (Googlers do not appreciate being informed of their shortcomings, it seems.)
The young shark of Wall Street ponders his recent failure at work. He thinks, “I used those predictive models as I did last year. How could they have gone off the rails? I am ruined.” Thanks, MidJourney. Manet you are not.
The main idea is that as numerical recipes iterate, the outputs deteriorate or wander off the desired path. The number of cycles required to output baloney depends on the specific collection of procedures. But wander these puppies do. To provide a baseline, users of the Autonomy Bayesian system found that after three months of operation, precision and recall had deteriorated. The fix was to retrain the system. Flash forward to today’s systems, which iterate many times faster than the Autonomy neurolinguistic programming method, and the lousy outputs can appear in a matter of hours. There are corrective steps one can take, but these are expensive when they involve humans. Thus, some vendors of predictive systems have developed smart software to try to keep the models from jumping their railroad tracks. When the models drift, the results seem off kilter.
The write up says:
Last year, an investigation from STAT and the Massachusetts Institute of Technology captured how model performance can degrade over time by testing the performance of three predictive algorithms. Over the course of a decade, accuracy for predicting sepsis, length of hospitalization, and mortality varied significantly. The culprit? A combination of clinical changes — the use of new standards for medical coding at the hospital — and an influx of patients from new communities. When models fail like this, it’s due to a problem called data drift.
Yep, data drift.
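For the curious, a basic drift check is not exotic. Here is a minimal sketch in Python of one common approach: compare the distribution of a model input in the training data against the live data with a two-sample Kolmogorov-Smirnov test. The threshold and toy numbers are mine, not from the cited study.

```python
# Minimal drift check: has a feature's live distribution wandered away
# from the training distribution? Illustrative only.
import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(train_values, live_values, alpha=0.01):
    """Return True if the live distribution differs significantly
    from the training distribution (two-sample KS test)."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha

# Toy demonstration: the "live" data has shifted (new coding standards,
# a new patient mix, etc.), so the check should fire.
rng = np.random.default_rng(42)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)
live = rng.normal(loc=0.4, scale=1.2, size=5_000)  # drifted inputs
print(feature_drifted(train, live))  # True: time to retrain
```

When the check fires, the remedy is the one Autonomy licensees learned long ago: retrain, and pay for the humans and compute that retraining requires.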
I need to check my mobile phone. Fixing data drift is tricky, and in today’s zoom-zoom world, “good enough” is the benchmark of excellence. Marketers do not want to talk about data drift. What if bad things result? Let the interns fix it next summer?
Stephen E Arnold, October 10, 2023
Cognitive Blind Spot 1: Can You Identify Synthetic Data? Better Learn.
October 5, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
It has been a killer week with the back-to-back trips to Europe and then to the intellectual hub of old-fashioned America. In France, I visited a location allegedly housing the office of a company which “owns” the domain rrrrrrrrrrr.com. No luck. Fake address. I then visited a semi-sensitive area in Paris, walking around in the confused fog only a 78-year-old can generate. My goal was to spot a special type of surveillance camera designed to provide data to a smart software system. The idea is that the images can be monitored through time, so a vehicle making frequent passes of a structure can be flagged, its number tag read, and a bit of thought given to answering the question, “Why?” I visited with a friend and big brain who was one of the technical keystones of an advanced search system. He gave me his most recent book, and I paid for my Orangina. Exciting.
One executive tells his boss, “Sir, our team of sophisticated experts reviewed these documents. The documents passed scrutiny.” One of the “smartest people in the room” asks, “Where are we going for lunch today?” Thanks, MidJourney. You do understand executive stereotypes, don’t you?
On the flights, I did some thinking about synthetic data. I am not sure that most people can provide a definition which embraces Google’s efforts in the money-saving land of the synthetic. I don’t think too many people know about Charlie Javice’s use of synthetic data to whip up JPMC’s enthusiasm for her company Frank Financial. I don’t think most people understand that when one types a phrase into the Twitch AI Jesus, the software will output a video and mostly crazy talk along with some Christian lingo.
The purpose of this short blog post is to present an example of synthetic data and conclude by revisiting the question, “Can You Identify Synthetic Data?” The article I want to use as a hook for this essay is from Fortune Magazine. I love that name, and I think the wolves of Wall Street find it euphonious as well. Here’s the title: “Delta Is Fourth Major U.S. Airline to Find Fake Jet Aircraft Engine Parts with Forged Airworthiness Documents from U.K. Company.”
The write up states:
Delta Air Lines Inc. has discovered unapproved components in “a small number” of its jet aircraft engines, becoming the latest carrier and fourth major US airline to disclose the use of fake parts. The suspect components — which Delta declined to identify — were found on an unspecified number of its engines, a company spokesman said Monday. Those engines account for less than 1% of the more than 2,100 power plants on its mainline fleet, the spokesman said.
Okay, bad parts can fail. If the failure is in a critical component of a jet engine, the aircraft could — note that I am using the word could — experience a catastrophic failure. Translating catastrophic into more colloquial lingo, the sentence means catch fire and crash or something slightly less terrible; namely, catch fire, explode, eject metal shards into the tail assembly, or make a loud noise and emit smoke. Exciting, just not terminal.
I don’t want to get into how the synthetic or fake data made its way through the UK company, the UK bureaucracy, the Delta procurement process, and into the hands of the mechanics working in the US or offshore. The fake data did elude scrutiny for some reason. With money being of paramount importance, my hunch is that saving some money played a role.
If organizations cannot spot fake data when it relates to a physical and mission-critical component, how will organizations deal with fake data generated by smart software? The smart software can get it wrong because an engineer-programmer screwed up his or her math, or because the complex web of algorithms generates unanticipated behaviors from dependencies no one knew to check and validate.
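What would screening numeric data for fabrication even look like? One classic forensic technique, offered here as an illustration and not as what JPMC or Delta auditors actually do, is a Benford’s law check: the leading digits of many naturally occurring financial figures follow a known logarithmic distribution, and invented numbers frequently do not. A minimal sketch in Python:

```python
# Benford's law screen for possibly fabricated figures. A real audit
# pipeline is far more involved; this only illustrates the idea.
import math
import random
from collections import Counter

def leading_digit(x):
    s = str(abs(x)).lstrip("0.")
    return int(s[0]) if s and s[0].isdigit() else None

def benford_chi_square(values):
    """Chi-square distance between observed leading digits and the
    Benford distribution; large values suggest invented numbers."""
    digits = [d for d in map(leading_digit, values) if d]
    n = len(digits)
    counts = Counter(digits)
    chi2 = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)
        chi2 += (counts.get(d, 0) - expected) ** 2 / expected
    return chi2

# Uniformly random "invoice amounts" violate Benford's law and score
# high; many natural financial datasets score low.
random.seed(1)
fabricated = [random.uniform(100, 999) for _ in range(1_000)]
print(round(benford_chi_square(fabricated), 1))
```

A screen like this catches lazy fabrication, not a determined forger who knows the distribution. That asymmetry is the point of the question in this essay’s title.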
What happens when a computer, which many people assume is “always” more right than a human, says, “Here’s the answer”? Many humans will skip the hard work because they are in a hurry, have no appetite for grunt work, or are scheduled by a Microsoft calendar to do something else when the quality assurance testing is supposed to take place.
Let’s go back to the question in the title of the blog post, “Can You Identify Synthetic Data?”
I don’t want to forget this part of the title, “Better learn.”
JPMC paid out more than $100 million in November 2022 because some of the smartest guys in the room weren’t that smart. But get this. JPMC is a big, rich bank. People who could die because of synthetic data are a different kettle of fish. Yeah, that’s what I thought about as I flew Delta back to the US from Paris. At the time, I thought Delta had not fallen prey to the scam.
I was wrong. Hence, I “better learn” myself.
Stephen E Arnold, October 5, 2023
A Pivotal Moment in Management Consulting
October 4, 2023
The practice of selling “management consulting” has undergone a handful of tectonic shifts since Edwin Booz convinced Sears, the “department” store outfit, to hire him. (Yes, I am aware I am cherry picking, but this is a blog post, not a for-fee report.)
The first was the ability of a consultant to move around quickly. Trains and Chicago became synonymous with management razzle dazzle. The center of gravity shifted to New York City because consulting thrives where there are big companies. The second was the institutionalization of the MBA as a certification of a 23-year-old’s expertise. The third was the “invention” of former consultants for hire. The innovator in this business was Gerson Lehrman Group, but there are many imitators who hire former blue-chip types and resell them without the fee baggage of the McKinsey & Co. type outfits. And now the fourth earthquake is rattling carpetland and the windows in corner offices (even if these offices are in an expensive home in Wyoming).
A centaur and a cyborg working on a client report. Thanks, MidJourney. Nice hair style on the cyborg.
Now we have the era of smart software, or what I prefer to call the era of hyperbole about semi-smart, semi-automated systems which output “information.” I noted this write up from the estimable Harvard University. Yes, this is the outfit that appointed an expert in ethics to head up its ethics department. The same ethics expert allegedly made up data for peer-reviewed publications. Yep, that Harvard University.
“Navigating the Jagged Technological Frontier” is an essay crafted by the D^3 faculty. None of this single-author stuff in an institution where fabrication of research is a stand-up comic’s joke. “What’s the most terrifying word for a Harvard ethicist?” Give up? “Ethics.” Ho ho ho.
What are the highlights from this esteemed group of researchers, thinkers, and analysts? I quote:
- For tasks within the AI frontier, ChatGPT-4 significantly increased performance, boosting speed by over 25%, human-rated performance by over 40%, and task completion by over 12%.
- The study introduces the concept of a “jagged technological frontier,” where AI excels in some tasks but falls short in others.
- Two distinct patterns of AI use emerged: “Centaurs,” who divided and delegated tasks between themselves and the AI, and “Cyborgs,” who integrated their workflow with the AI.
Translation: We need fewer MBAs and old-timers who are not able to maximize billability with smart or semi-smart software. Keep in mind that some consultants view clients with disdain. If these folks were smart, they would not be relying on 20-somethings to bail them out and provide “wisdom.”
This dinobaby is glad he is old.
Stephen E Arnold, October 4, 2023
Kill Off the Dinobabies and Get Younger, Bean Counter-Pleasing Workers. Sound Familiar?
September 21, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I read “Google, Meta, Amazon Hiring Low-Paid H1B Workers after US Layoffs: Report.” Is it accurate? Who knows? In the midst of a writers’ strike in Hollywood, I thought immediately about endless sequels to films like “Batman 3: Deleting Robin” and “Halloween 8: The Night of the Dinobaby Purge.”
The write up reports a management method similar to the one the high school science club implemented after being told that its field trip to the morgue was turned down. The school’s boiler suffered a mysterious malfunction, and school was dismissed for a day. Heh heh heh.
I noted this passage:
Even as global tech giants are carrying out mass layoffs, several top Silicon Valley companies are reportedly looking to hire lower-paid tech workers from foreign countries. Google, Meta, Amazon, Microsoft, Zoom, Salesforce and Palantir have applied for thousands of H1B worker visas this year…
I heard a rumor that IBM used a similar technique. Would Big Blue replace older, highly paid employees with GenX professionals not born in the US? Of course not! The term “dinobabies” was a product of spontaneous innovation, not from a personnel professional located in a suburb of New York City. Happy bean counters indeed. Saving money with good enough work. I love the phrase “minimal viable product” for “minimally viable” work environments.
There are so many ways to allow people to find their futures elsewhere. Shelf stockers are in short supply, I hear.
Stephen E Arnold, September 21, 2023
Profits Over Promises: IBM Sells Facial Recognition Tech to British Government
September 18, 2023
Just three years after it swore off any involvement in facial recognition software, IBM has made an about-face. The Verge reports, “IBM Promised to Back Off Facial Recognition—Then it Signed a $69.8 Million Contract to Provide It.” Amid the momentous Black Lives Matter protests of 2020, IBM CEO Arvind Krishna wrote a letter to Congress vowing to no longer supply “general purpose” facial recognition tech. However, it appears that is exactly what the company includes within the biometrics platform it just sold to the British government. Reporter Mark Wilding writes:
“The platform will allow photos of individuals to be matched against images stored on a database — what is sometimes known as a ‘one-to-many’ matching system. In September 2020, IBM described such ‘one-to-many’ matching systems as ‘the type of facial recognition technology most likely to be used for mass surveillance, racial profiling, or other violations of human rights.'”
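For readers unfamiliar with the jargon, “one-to-many” matching is conceptually simple: the embedding of one probe image is compared against every embedding stored in a database, and anything above a similarity threshold comes back as a candidate. Here is a minimal sketch with random vectors standing in for learned face embeddings; the threshold is illustrative, and this is emphatically not IBM’s system:

```python
# Minimal "one-to-many" matching sketch: one probe embedding versus a
# database of stored embeddings. Real systems use learned face
# embeddings; these random vectors are placeholders.
import numpy as np

def one_to_many_match(probe, database, threshold=0.8):
    """Return indices of database entries whose cosine similarity
    to the probe meets the threshold."""
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    p = probe / np.linalg.norm(probe)
    similarities = db @ p
    return np.flatnonzero(similarities >= threshold)

rng = np.random.default_rng(0)
database = rng.normal(size=(10_000, 128))          # 10k enrolled faces
probe = database[42] + 0.1 * rng.normal(size=128)  # noisy view of #42
print(one_to_many_match(probe, database))          # likely [42]
```

The civil liberties concern follows directly from the shape of the computation: the probe is compared against everyone enrolled, which is why “one-to-many” systems are the variety associated with mass surveillance.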
In the face of this lucrative contract, IBM has changed its tune. It now insists one-to-many matching tech does not count as “general purpose” since the intention here is to use it within a narrow scope. But scopes have a nasty habit of widening to fit the available tech. The write-up continues:
“Matt Mahmoudi, PhD, tech researcher at Amnesty International, said: ‘The research across the globe is clear; there is no application of one-to-many facial recognition that is compatible with human rights law, and companies — including IBM — must therefore cease its sale, and honor their earlier statements to sunset these tools, even and especially in the context of law and immigration enforcement where the rights implications are compounding.’ Police use of facial recognition has been linked to wrongful arrests in the US and has been challenged in the UK courts. In 2019, an independent report on the London Metropolitan Police Service’s use of live facial recognition found there was no ‘explicit legal basis’ for the force’s use of the technology and raised concerns that it may have breached human rights law. In August of the following year, the UK’s Court of Appeal ruled that South Wales Police’s use of facial recognition technology breached privacy rights and broke equality laws.”
Wilding notes other companies similarly promised to renounce facial recognition technology in 2020, including Amazon and Microsoft. Will governments also be able to entice them into breaking their vows with tantalizing offers?
Cynthia Murrell, September 18, 2023
An AI to Help Law Firms Craft More Effective Invoices
September 14, 2023
Think money. That answers many AI questions.
Why are big law firms embracing AI? For a better understanding of the law? Nay. To help clients? No. For better writing? Nope. What then? Why, more fruitful billing, of course. We learn from Above The Law, “Law Firms Struggling with Arcane Billing Guidelines Can Look to AI for Relief.” According to writer and litigator Joe Patrice, law clients rely on labyrinthine billing compliance guidelines to delay paying their invoices. Now AI products like Verify are coming to rescue beleaguered lawyers from penny-pinching clients. Patrice writes:
“Artificial intelligence may not be prepared to solve every legal industry problem, but it might be the perfect fit for this one. ZERO CEO Alex Babin is always talking about developing automation to recover the money lawyers lose doing non-billable tasks, so it’s unsurprising that the company has turned its attention to the industry’s billing fiasco. And when it comes to billing guideline compliance, ZERO estimates that firms can recover millions by introducing AI to the process. Because just ‘following the guidelines’ isn’t always enough. Some guidelines are explicit. Others leave a world of interpretation. Still others are explicit, but no one on the client side actually cares enough to force outside counsel to waste time correcting the issue. Where ZERO’s product comes in is in understanding the guidelines and the history of rejections and appeals surrounding the bills to figure out what the bill needs to look like to get the lawyers paid with the least hassle.”
Verify can even save attorneys from their own noncompliant wording, rewriting their narratives to comply with guidelines. And it can do so while mimicking each lawyer’s writing style. Very handy.
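What might automated guideline screening look like under the hood? A toy sketch follows. Verify is proprietary, so the rules and trigger phrases below are invented for illustration, not taken from ZERO’s product:

```python
# Toy guideline screen for billing narratives. The rules and trigger
# phrases are invented for illustration; a real product would learn
# them from each client's guidelines and rejection history.
import re

GUIDELINE_RULES = [
    (re.compile(r"\b(attention to|work on|various)\b", re.I),
     "vague task description"),
    (re.compile(r";"),
     "possible block billing (multiple tasks in one entry)"),
    (re.compile(r"\b(intra-?office|internal) (conference|meeting)\b", re.I),
     "internal conferences are often non-billable"),
]

def flag_narrative(narrative):
    """Return the guideline issues found in one billing entry."""
    return [issue for pattern, issue in GUIDELINE_RULES
            if pattern.search(narrative)]

entry = "Attention to file; intra-office conference re: strategy"
for issue in flag_narrative(entry):
    print(issue)
```

Per the write-up, the interesting part of the real product is not the rules but the feedback loop: learning from the history of rejections and appeals which wording actually gets a bill paid.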
Cynthia Murrell, September 14, 2023
New Wave Management or Is It Leaderment?
September 12, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Here’s one of my biases, and I am rather proud of it. I like the word “manager.” According to my linguistics professor Lev Soudek, the word “manage” used to mean trickery and deceit. When I was working at a blue-chip consulting firm, the word meant using tactics to achieve a goal. I think of management as applied trickery. The people whom one pays will go along with the program, but not 24×7. In a company which expects 60 hours of work a week as the minimum for surviving a Spanish-Inquisition-inspired personnel approach, mental effort had to be expended.
I read “I’m a Senior Leader at Amazon and Have Seen Many Bad Managers. Here Are 3 Reasons Why There Are So Few Great Ones.” The intense, clear-eyed young person explains that he has worked at some outfits which are not among my list of the Top 10 high-technology outfits. His résumé includes eBay (a digital yard sale), a game retailer, and the somewhat capricious Amazon (are we a retail outfit, are we a cloud outfit, are we a government services company, are we a data broker, are we a streaming company, etc.).
A modern practitioner of leaderment is having trouble getting the employees to fall in, throw their shoulders back, and march in step to the cadence of Am-a-zon, Am-a-zon like a squad of French Foreign Legion troops on Bastille Day. Thanks, MidJourney. The illustration did not warrant a red alert, but it is also disappointing.
I assume that these credentials are sufficient to qualify him as a management guru. Here are the three reasons managers are less than outstanding.
First, managers just sort of happen. Few people decide to be a manager. Ah, serendipity or just luck.
Second, managers don’t lead. (Huh, the word is “management”, not “leaderment.”)
Third, pressure for results means some managers are “sacrificing employee growth.” (I am not sure what this statement means. If one does not achieve results, then that individual and maybe his direct reports, the staff he leaderments, and his boss will be given an opportunity to find their future elsewhere. Translation for the GenZ reader: You are fired.)
Let’s step back and think about these insights. My initial reaction is that a significant re-languaging has taken place in the write up. A good manager does not have to be a leader. In fact, when I was a guest lecturer at the Kansai Institute of Technology, I met a number of respected Japanese managers. I suppose some were leaders, but a number made it clear that results were number one or ichiban.
In my work career, conflating “to manage” with “to lead” would have created confusion. I recall working in the US Congress with a retired admiral who was elected to represent an upscale LA district; the way life worked was simple: The retired admiral issued orders. Lesser entities like myself figured out how to execute, tapped appropriate resources, and got the job done. There was not much leadership required of me. I organized; I paid people money; and I hassled everyone until the retired admiral grunted in a happy way. There was no leaderment for me. The retired admiral said, “I want this in two days.” There was not much time for leaderment.
I listened to a podcast called GeekWire. The September 2, 2023, program made it clear that the current Big Dog at Amazon wants people to work in the office. If not, these folks are going to go away. What makes this interesting is that the GeekWire pundits pointed out that the Big Dog had changed his story, guidelines, and procedures for this work-from-home and work-from-office approach multiple times.
Therefore, I am not sure if there is management or leaderment at the world’s largest digital mall. I do know that modern leaderment is not for me. The old-fashioned meaning of manage seems okay to me.
Stephen E Arnold, September 12, 2023