Knowledge Workers, AI Software Is Cheaper and Does Not Take Vacations. Worried Yet?
November 2, 2023
This essay is the work of a dumb humanoid. No smart software required.
I believe the 21st century is the era of “good enough” or “close enough for horseshoes” products and services. Excellence is a surprise, not a goal. At a talk I gave at CeBIT years ago, I explained that certain information-centric technologies had reached the “let’s give up” stage of development. Fresh in my mind were the lessons I learned writing a compendium of information access systems published as “The Enterprise Search Report” by a company lost to me in the mists of time.
“I just learned that our department will be replaced by smart software,” says the MBA from Harvard. The female MBA from Stanford emits a scream just like the one she let loose after scuffing her new Manuel Blahnik (Rodríguez) shoes. Thanks, MidJourney, you delivered an image with a bit of perspective. Good enough work.
I identified the flaws in implementations of knowledge management, information governance, and enterprise search products. The “good enough” comment was made to me during the Q-and-A session. The younger person pointed out that systems for finding information — regardless of the words I used to describe what most knowledge workers did — were “good enough.” I recall the simile the intense young person offered as I was leaving the lecture hall. Vivid now, years later, is the comment that improving information access was like making catalytic converters deliver zero emissions. Thus, information access can’t get where it should be. The technology is good enough.
I wonder if that person has read “AI Anxiety As Computers Get Super Smart.” Probably not. I believe that young person knew more than I did. As a dinobaby, I just smiled and listened. I am a smart dinobaby in some situations. I noted this passage in the cited article:
Generative AI, however, can take aim at white-collar jobs such as lawyers, doctors, teachers, journalists, and even computer programmers. A report from the McKinsey consulting firm estimates that by the end of this decade, as much as 30 percent of the hours worked in the United States could be automated in a trend accelerated by generative AI.
Executive orders and government proclamations are unlikely to have much effect on some people. The write up points out:
Generative AI makes it easier for scammers to create convincing phishing emails, perhaps even learning enough about targets to personalize approaches. Technology lets them copy a face or a voice, and thus trick people into falling for deceptions such as claims a loved one is in danger, for example.
What’s the fix? One that is good enough probably won’t have much effect.
Stephen E Arnold, November 2, 2023
Microsoft at Davos: Is Your Hair on Fire, Google?
November 2, 2023
This essay is the work of a dumb humanoid. No smart software required.
At Davos in January 2023, Microsoft said AI is the next big thing. The result? Google shifted into Code Red and delivered a wild and crazy demonstration of a deeply flawed AI system in February 2023. I think the phrase “Code Red” became associated with the state of panic within the comfy confines of Googzilla’s executive suites, real and virtual.
Sam AI-man made appearances, repeating to anyone who would listen phrases like “billion dollar investment,” “efficiency,” and “work processes.” The result? Googzilla itself found out that whether Microsoft’s brilliant marketing of AI worked or not, the Softies had just demonstrated that it — not the Google — was a “leader.” The new Microsoft could create revenue and credibility problems for the Versailles of technology companies.
Therefore, the Google tried to be nimble and make the myth of engineering prowess into reality, not a CGI version of Camelot. The PR Camelot featured Google as the Big Dog in the AI world. After all, Google had done the protein thing, an achievement which made absolutely no sense to 99 percent of the earth’s population. Some asked, “What the heck is a protein folder?” I want a Google Waze service that shows me where traffic cameras are.
The Google executives apparently went to meetings with their hair on fire.
A group of Google executives in a meeting with their hair on fire after Microsoft’s Davos AI announcement. Google wanted teams to manifest AI prowess everywhere, lickety-split. Google reorganized. Google probed Anthropic, and one Googler invested in the company. Dr. Prabhakar Raghavan demonstrated peculiar communication skills.
I had these thoughts after I read “Google Didn’t Rush Bard Chatbot to Beat Microsoft, Executive Says.” So what was this Code Red thing? Why has Google — the quantum supremacy pioneer and the global leader in online advertising and protein folding — been lagging behind Microsoft? What is it now? Oh, yeah. Almost a year, a reorganization of the Google’s smart software group, and one of Google’s own employees explaining that AI could have a negative impact on the world. Oh, yeah, that guy is one of the founders of Google’s DeepMind AI group. I won’t mention the Googler who thought his chatbot was alive and ended up with an opportunity to find his future elsewhere. Right. Code Red. I want to note Timnit Gebru and the stochastic parrot, the Jeff Dean lateral arabesque, and the significant investment in a competitor’s AI technology. Right. Standard operating procedure for an online advertising company with a fairly healthy self-concept about its excellence and droit du seigneur.
The Bloomberg article reports what I assume is “real,” actual factual information:
A senior Google executive disputed suggestions that the company rushed to release its artificial intelligence-based chatbot Bard earlier this year to beat a similar offering from rival Microsoft Corp. Testifying in Google’s defense at the Justice Department’s antitrust trial against the search giant, Elizabeth Reid, a vice president of search, acknowledged that Bard gave “a wrong answer” during its public unveiling in February. But she rejected the contention by government lawyer David Dahlquist that Bard was “rushed” out after Microsoft announced it was integrating generative AI into its own Bing search engine.
The real news story pointed out:
Google’s public demonstration of Bard underwhelmed investors. In one instance, Bard was asked about new discoveries from the James Webb Space Telescope. The chatbot incorrectly stated the telescope was used to take the first pictures of a planet outside the Earth’s solar system. While the Webb telescope was the first to photograph one particular planet outside the Earth’s solar system, NASA first photographed a so-called exoplanet in 2004. The mistake led to a sharp fall in Alphabet’s stock. “It’s a very subtle language difference,” Reid said in explaining the error in her testimony Wednesday. “The amount of effort to ensure that a paragraph is correct is quite a lot of work.” “The challenges of fact-checking are hard,” she added.
Yes, facts are hard in Hallucinationville. I think the concept I take away from this statement is that PR is easier than making technology work. But today Google and similar firms are caught in what I call a “close enough for horseshoes” mindset. Smart software, in my experience, is like my dear, departed mother’s not-quite-done pineapple upside-down cakes. Yikes, those were a mess. I could eat the maraschino cherries but nothing else. The rest was deposited in the trash bin.
And where are the “experts” in smart search? Prabhakar? Danny? I wonder if they are embarrassed by the loss of their thick, lustrous hair. I think some of it may have been singed after the outstanding Paris demonstration and the subsequent Mountain View baloney festivals. Was Google behaving like a child frantically searching for his mom at the AI carnival? I suppose when one is swathed in entitlements, cashing huge paychecks, and obfuscating exactly how the money is extracted from advertisers, reality is distorted.
Net net: Microsoft at Davos caused Google’s February 2023 Paris presentation. That mad scramble has caused me to conclude that talking about AI is a heck of a lot easier than delivering reliable, functional, and thought-out products. Is it possible to deliver such products when one’s hair is on fire? Some data say, “Nope.”
Stephen E Arnold, November 2, 2023
By Golly, the Gray Lady Will Not Miss This AI Tech Revolution!
November 2, 2023
This essay is the work of a dumb humanoid. No smart software required.
The technology beacon of the “real” newspaper is shining once again. Flash: the New York Times Online. Flash: terminating the exclusive with LexisNexis. Flash: the shift to — wait for it — a Web site. Flash: the in-house indexing system. Flash: buying About.com. Flash: doing podcasts. My goodness, the flashes have impaired my vision. And where are we today after labor strife, newsroom craziness, and a list of bestsellers that gets data from…? I don’t really know, and I just haven’t bothered to do some online poking around.
A real journalist of today uses smart software to write listicles for Buzzfeed, essays for high school students, and feature stories for certain high-profile newspapers. Thanks for the drawing, Microsoft Bing. Trite but okay.
I thought about the technology flashes from the Gray Lady’s beacon high atop its building sort of close to Times Square. Nice branding. I wonder if mobile phone users know why the tourist destination is called Times Square. Since I no longer work in New York, I have forgotten. I do remember the high intensity pinks and greens of a certain type of retail establishment. In fact, I used to know the fellow who created this design motif. Ah, you don’t remember. My hunch is that there are other factoids you and I won’t remember.
For example, what’s the byline on a New York Times story? I thought it was the name or names of the many people who worked long hours, made phone calls, visited specific locations, and sometimes visited the morgue (no, the newspaper morgue, not the “real” morgue where the bodies of compromised sources ended up).
If the information in that estimable source Showbiz411.com is accurate, the Gray Lady may cite zeros and ones. The article is “The New York Times Help Wanted: Looking for an AI Editor to Start Publishing Stories. Six Figure Salary.” Now that’s an interesting assertion. A person like me might ask, “Why not let a recent college graduate crank out machine-generated stories?” My assumption is that most people trying to meet a deadline and in sync with Taylor Swift will know about machine-generated information. But, if the story is true, here’s what’s up:
… it looks like the Times is going let bots do their journalism. They’re looking for “a senior editor to lead the newsroom’s efforts to ambitiously and responsibly make use of generative artificial intelligence.” I’m not kidding. How the mighty have fallen. It’s on their job listings.
The Showbiz411.com story allegedly quotes the Gray Lady’s help wanted ad as saying:
“This editor will be responsible for ensuring that The Times is a leader in GenAI innovation and its applications for journalism. They will lead our efforts to use GenAI tools in reader-facing ways as well as internally in the newsroom. To do so, they will shape the vision for how we approach this technology and will serve as the newsroom’s leading voice on its opportunity as well as its limits and risks.”
There are a bunch of requirements for this job. My instinct is that a few high school students could jump into this role. What’s the difference between a ChatGPT output about crossing the Delaware and a “real” news article about fashion trends seen at Otto’s Shrunken Head?
Several observations:
- What does this ominous development mean to the accountants who will calculate the cost of “real” journalists versus a license to smart software? My thought is that the general reaction will be positive. Imagine: No vacays, no sick days, and no humanoid protests. The Promised Land has arrived.
- How will the Gray Lady’s management team explain this cuddling up to smart software? Perhaps it is just one of those newsroom romances? On the other hand, what if something serious develops and the smart software moves in? Yipes.
- What will “informed” readers think of stories crafted by the intellectual engine behind a high school student’s essay about great moments in American history? Perhaps the “informed” readers won’t care?
Exciting stuff in the world of real journalism down the street from Times Square and the furries, pickpockets, and gawkers from Ames, Iowa. I wonder if the hallucinating smart software will be as clever as the journalist who fabricates a story? Probably not. “Real” journalists do not shape, weaponize, or filter the actual factual. Is John Wiley & Sons ready to take the leap?
Stephen E Arnold, November 2, 2023
How Does One Impede US AI Progress? Have a Government Meeting?
November 1, 2023
This essay is the work of a dumb humanoid. No smart software required.
The Washington Post may be sparking a litigation hoedown. How can a newspaper give legal eagles an opportunity to buy a private island and not worry about the cost of LexisNexis searches? The answer may be in “AI Researchers Uncover Ethical, Legal Risks to Using Popular Data Sets.” The UK’s efforts to get a group to corral smart software are interesting. Lawyers may be the foot on the brake slowing AI traffic on the new Information Superhighway.
The Washington Post reports:
The advent of chatbots that can answer questions and mimic human speech has kicked off a race to build bigger and better generative AI models. It has also triggered questions around copyright and fair use of text taken off the internet, a key component of the massive corpus of data required to train large AI systems. But without proper licensing, developers are in the dark about potential copyright restrictions, limitations on commercial use or requirements to credit a data set’s creators.
There is nothing like jumping in a lake with the local Polar Bears Club to spark investor concern about paying big fines. The chills and thrills of the cold water create a heightened state of awareness.
How’s the water this morning?
Several observations:
- A collision between the compulsion to innovate in AI and the risk of legal liability seems likely.
- Innovators will forge ahead, and investors will have to figure out the risks by looking for legal eagles and big sharks lurking below the surface.
- Whatever happens in North America and Western Europe will not slow the pace of investment into AI in the Middle East and China.
- Are there unpopular data sets perhaps generated by biased smart software?
Uncertainty and risk. Thanks, AI innovators.
Stephen E Arnold, November 1, 2023
Google Buys into a One-in-Four Chance to Destroy Humanity? Risky? Nah!
October 31, 2023
This essay is the work of a dumb humanoid. No smart software required.
Below is quite a headline in the Blaze online “information” service. Note: The Blaze is sufficiently confident in its ability to attract subscribers that the outfit is moving away from advertising. Okay, let’s see how that works out in an era of subscription fatigue, right, aggregators?
Relax, there is only a 25 percent chance that AI will destroy humanity. Go for it! Thanks, MidJourney, is this Redmond after the apocalypse?
Here’s the headline:
Google Invests $2 Billion in AI Company Whose CEO Admits AI Has a One in Four Chance of Destroying Humanity
Snappy. What does the story about the Google reveal? Here are a couple of snippets, and you will have to navigate to the Blaze write up, endure the “please, oh, please, subscribe” message, and read the allegedly accurate story yourself… or not. Tip: Check out the non-opt-out cookie settings. Quite a nice touch in my opinion.
Item 1: Google and Amazon?
There has already been $500 million that Google has invested in Anthropic, with the remaining investment being provided over a period of time. This comes after Amazon invested $4 billion into Anthropic
Item 2: OpenAI DNA
Amodei [the Anthropic CEO] was previously OpenAI’s vice president of research before going his own way to build something that could rival ChatGPT. Since he departed three years ago, Anthropic has become a company worth $5 billion.
So OpenAI was influenced by the Google AI work. Anthropic is probably aware of OpenAI’s work. Google, like Amazon, has invested some pocket change in Anthropic?
Does this seem like a bit of a cozy little circle? Why is the US government issuing broad AI guidelines for an entire swath of technology outfits? Perhaps a bit more focus would be useful? Hurry, because the one-in-four chance of destroying humanity is playing out in real time. You know. Percentages work in interesting ways.
Stephen E Arnold, October 31, 2023
Does a UK Facial Recognition Case Spell Trouble for AI Regulation?
October 30, 2023
This essay is the work of a dumb humanoid. No smart software required.
I noted this Politico article in my feed today (October 30, 2023). I am a dinobaby and no legal eagle. Consequently I may be thinking incorrectly about the information in “An AI Firm Harvested Billions of Photos without Consent. Britain Is Powerless to Act.” The British government has been talking about smart software. French government officials seem to be less chatty. The US government has ideas as well. What’s the Politico write up say that has me thinking that AI regulation, AI industry cooperation, and AI investors will not be immediately productive?
“Where did my horse go?” asks the farmer. Thanks, Microsoft Bing. The image is not of a horse out of a barn, but it is good enough… just like most technology today. Good enough is excellence.
Here’s the statement which concerns the facial recognition company Clearview and its harvesting of image data. Those data are used to assist enforcement agencies in their work. The passage I circled was:
The judgment, issued by the three-member tribunal at the First-tier Tribunal, agreed with Clearview’s assertion that the ICO lacked jurisdiction in the case because the data processing in question was carried out on behalf of foreign government agencies. The ICO failed “not because this isn’t monitoring and not because in other circumstances, this might not be in breach of U.K. GDPR, but because it’s foreign law enforcement. It’s outside of the scope of European Union law so it doesn’t apply,” said James Moss, privacy and data protection partner at the law firm Bird & Bird.
Could AI regulation in the EU find itself caught in the same thicket? Furthermore, efforts in the US to curb or slow down the pace of AI innovation may collide with the reality of other countries’ efforts to expand business and military use of AI. Common sense suggests that nation states like China are unlikely to inhibit their interests in AI. What will Britain and the US do?
My thought is that much effort will be expended in meetings, writing drafts, discussing the ideas, and promulgating guidelines. The plain fact is that both commercial and investor interests will find a way to push forward. Innovations like AI and the downstream applications have considerable potential for law enforcement and military professionals.
Net net: AI, despite its flaws and boundary breaking, is now out of the barn. Time travel is an interesting idea, but the arrow of time is here and now like the lawyers and bureaucrats.
Stephen E Arnold, October 30, 2023
AI: Enough of the Terminator and Robots!
October 30, 2023
I read — yes, this is the real title — “Humanoid Robots, Glowing Brains, Outstretched Robot Hands, Blue Backgrounds, and the Terminator. These Stereotypes Are Not Just Overworked, They Can Be Surprisingly Unhelpful.” (I love the SEO influence.)
The article includes some suggested images; to wit:
To test what’s available, I navigated to Microsoft Bing and entered the prompt “photorealistic. artificial intelligence.” Here’s what Bing output:
Next, I zipped to MidJourney and entered the same prompt. Here is what the innovators at that outfit provided:
Interesting. I love the hair on image V4, don’t you? Not creepy at all.
Some prompt crafters believe Microsoft Bing will not generate images of females. These AI confections sure look like women in the singularity mode.
Observations:
- The new images of AI are not as compelling as the “old” blue robot images. Why? Illustrating software is difficult, and originality is more difficult still.
- The smart software produces images with less obscurity than the same images from the cited article. Sorry, humanoids, I like both the MidJourney and Microsoft Bing outputs.
- The most compelling images are ones which play on cultural tropes; that is, a menacing Terminator has more sizzle than a banana and a house plant.
Net net: Go with what catches the eye and sells. Also, let me know when the Leonardo of AI is “discovered.”
Stephen E Arnold, October 30, 2023
Now the AI $64 Question: Where Are the Profits?
October 26, 2023
This essay is the work of a dumb humanoid. No smart software required.
As happens with most over-hyped phenomena, AI is looking like a disappointment for investors. Gizmodo laments, “So Far, AI Is a Money Pit That Isn’t Paying Off.” Writer Lucas Ropek cites this report from the Wall Street Journal as he states tech companies are not, as of yet, profiting off AI as they had hoped. For example, Microsoft’s development automation tool GitHub Copilot lost an average of $20 a month for each $10-per-month user subscription. Even ChatGPT is seeing its user base decline while operating costs remain sky high. The write-up explains:
“The reasons why the AI business is struggling are diverse but one is quite well known: these platforms are notoriously expensive to operate. Content generators like ChatGPT and DALL-E burn through an enormous amount of computing power and companies are struggling to figure out how to reduce that footprint. At the same time, the infrastructure to run AI systems—like powerful, high-priced AI computer chips—can be quite expensive. The cloud capacity necessary to train algorithms and run AI systems, meanwhile, is also expanding at a frightening rate. All of this energy consumption also means that AI is about as environmentally unfriendly as you can get. To get around the fact that they’re hemorrhaging money, many tech platforms are experimenting with different strategies to cut down on costs and computing power while still delivering the kinds of services they’ve promised to customers. Still, it’s hard not to see this whole thing as a bit of a stumble for the tech industry. Not only is AI a solution in search of a problem, but it’s also swiftly becoming something of a problem in search of a solution.”
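The Copilot arithmetic alone tells the story. If each $10-per-month subscriber loses the company an average of $20 a month, the implied cost to serve one user is about $30 a month. Here is a minimal back-of-the-envelope sketch; the per-user figures come from the cited report, and the subscriber count is a made-up round number for illustration:

```python
# Back-of-the-envelope unit economics for an AI coding assistant.
# The $10 price and $20 average loss come from the cited report;
# the subscriber count is purely hypothetical.

revenue_per_user = 10.00   # monthly subscription price (reported)
loss_per_user = 20.00      # average monthly loss per user (reported)
cost_per_user = revenue_per_user + loss_per_user   # implied serving cost

subscribers = 1_000_000    # hypothetical

print(f"Implied cost to serve one user: ${cost_per_user:.0f}/month")
print(f"Monthly burn at {subscribers:,} users: ${loss_per_user * subscribers:,.0f}")
# Implied cost to serve one user: $30/month
# Monthly burn at 1,000,000 users: $20,000,000
```

At those numbers, every new subscriber makes the hole deeper, which is not how subscriptions are supposed to work.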
Ropek notes it would have been wise for companies to figure out how to turn a profit on AI before diving into the deep end. Perhaps, but leaping into the next big thing is a priority for tech firms lest they be left behind. After all, who could have predicted this result? Let’s ask Google Bard, OpenAI, or one of the numerous AI “players.” Even better, perhaps, will be deferring the question of costs until the AI factories go online.
Cynthia Murrell, October 26, 2023
Smart Software Generates Lots of Wizards Who Need Not Know Much at All
October 25, 2023
This essay is the work of a dumb humanoid. No smart software required.
How great is this headline? “DataGPT Uses Generative AI to Transform Every Employee into a Skilled Business Analyst.” I am not sure I buy into the categorical affirmation of “every employee.” As a dinobaby, I am skeptical of hallucinating algorithms and the exciting gradient descent delivered by some large language models.
“Smart software will turn everyone of you into a skilled analyst,” asserts the teacher. The students believe her because it means no homework and more time for TikTok and YouTube. Isn’t modern life great for students?
The write up presents as chiseled-in-stone truth:
By uniting conversational AI with a proprietary database and the most advanced data analytics techniques, DataGPT says, its platform can proactively uncover insights for any user in any company. Nontechnical users can type natural language questions in a familiar chat window interface, in the same way as they might question a human colleague. Questions such as “Why is our revenue down this week?” will be answered in seconds, and users can then dig deeper through additional prompts, such as “Tell me more about the drop from influencer partnerships” to understand the real reasons why it’s happening.
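Strip away the PR glitter, and the quoted description is the now-standard text-to-SQL loop: hand the user’s question and the database schema to a language model, run the SQL it emits, and have the model phrase the rows as an answer. Below is a minimal sketch of that loop; the `complete()` function is a hypothetical stand-in for some LLM API, and none of this is DataGPT’s actual code:

```python
import sqlite3

def complete(prompt: str) -> str:
    """Hypothetical stand-in for an LLM completion API call."""
    raise NotImplementedError("plug in the model of your choice")

SCHEMA = "CREATE TABLE revenue (week TEXT, channel TEXT, amount REAL);"

def answer(question: str, db_path: str) -> str:
    # 1. Ask the model to translate the question into SQL for this schema.
    sql = complete(f"Schema: {SCHEMA}\nWrite one SQLite query that answers: {question}")
    # 2. Run the generated query. Nothing here checks whether the SQL is right.
    rows = sqlite3.connect(db_path).execute(sql).fetchall()
    # 3. Ask the model to phrase the raw rows as a plain-language answer.
    return complete(f"Question: {question}\nQuery result: {rows}\nAnswer in one sentence.")

# Example: answer("Why is our revenue down this week?", "analytics.db")
```

Note where the wheels can come off: step one can produce a plausible-looking but wrong query, and the nontechnical user has no way to know. That is my dinobaby reservation in a nutshell.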
Hyperbolic marketing, 20-something PR, desperate fund-raiser promises, or reality? If the assertions in the article are accurate, those students will have jobs and become top analysts without much bookwork or thrilling calculations requiring silliness like multivariate statistics or polynomial regression. Who needs that?
Here’s what an expert says about this job making, work reducing, and accuracy producing approach:
Doug Henschen of Constellation Research Inc. said DataGPT’s platform looks to be a compelling and useful tool for many company employees, but questioned the veracity of the startup’s claim to be debuting an industry first. “Most of the leading BI and analytics vendors have announced generative AI capabilities themselves, with ThoughtSpot and MicroStrategy two major examples,” Henschen said. “We can’t discount OpenAI either, which introduced the OpenAI Advanced Data Analysis feature for ChatGPT Plus a few months ago.”
Truly amazing, and I have no doubt that this categorical affirmative will make everyone a business analyst. Believe it or not. I am in the “not” camp. Content marketing and unsupported assertions are amusing, just not the reality I inhabit as a dinobaby. Every? Baloney.
Stephen E Arnold, October 25, 2023
AI Cybersecurity: Good News and, of Course, Bad News
October 23, 2023
This essay is the work of a dumb humanoid. No smart software required.
Life, like a sine wave, is filled with ups and downs. Nothing strikes me like the ups and downs of AI: great promise, but profits? Not yet. Smart cyber security methods? Same thing. Ups and downs. Good news, then bad news. Let’s look at two examples.
First, the good news. “New Cyber Algorithm Shuts Down Malicious Robotic Attack” reports:
Researchers have designed an algorithm that can intercept a man-in-the-middle (MitM) cyberattack on an unmanned military robot and shut it down in seconds. The algorithm, tested in real time, achieved a 99% success rate.
Is this a home run? A 99 percent success rate. Take that percentage, some AI, and head to a casino or a facial recognition system. I assume I will have to wait until the marketers explain this limited test.
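The write up does not reveal the method, so here is one plausible shape for such a watchdog — purely illustrative, and emphatically not the researchers’ algorithm: flag tampered command packets immediately, and treat unusual packet timing as a possible man-in-the-middle injection:

```python
from collections import deque

class MitmWatchdog:
    """Toy anomaly monitor for a robot command channel.

    Illustrative only; not the published algorithm. Flags unsigned
    packets immediately and treats statistically unusual packet
    timing as a possible man-in-the-middle injection."""

    def __init__(self, window: int = 50, threshold: float = 3.0):
        self.gaps = deque(maxlen=window)  # recent inter-packet gaps (ms)
        self.threshold = threshold        # z-score cutoff for "unusual"

    def observe(self, gap_ms: float, signature_ok: bool) -> bool:
        """Return True if the robot should be shut down."""
        if not signature_ok:              # tampered or unsigned command
            return True
        if len(self.gaps) == self.gaps.maxlen:
            mean = sum(self.gaps) / len(self.gaps)
            var = sum((g - mean) ** 2 for g in self.gaps) / len(self.gaps)
            std = var ** 0.5 or 1e-9      # avoid divide-by-zero
            if abs(gap_ms - mean) / std > self.threshold:
                return True               # timing spike: possible injection
        self.gaps.append(gap_ms)
        return False
```

A toy like this would catch the clumsy injections; the remaining 1 percent is where the marketing collateral gets written.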
“Hello, we are the team responsible for infusing AI into cyber security safeguards. We are confident that our technology will have an immediate, direct impact on protecting your organization from threats and bad actors,” says Mary, a lawyer and MBA. I believe everything lawyers and MBAs say, even more than Tom, the head of marketing, or Ben, the lead developer who loves rock climbing and working remotely. Thanks, Bing Dall-e. You understand the look and feel of modern cyber security teams.
Okay, the bad news. A cyber security outfit named Okta was unable to secure itself. You can read the allegedly real details in “Okta’s Stock Slumps after Security Company Says It Was Hacked.” The write up asserts:
Okta, a major provider of security technology for businesses, government agencies and other organizations, said Friday that one of its customer service tools had been hacked. The hacker used stolen credentials to access the company’s support case management system and view files uploaded by some customers, Okta Chief Security Officer David Bradbury disclosed in a securities filing. Okta said that system is separate from its main client platform, which was not penetrated.
Yep, the “main client platform” is or was secure.
Several observations:
- After Israel’s sophisticated cyber systems failed to detect the planning and preparation for a reasonably large-scale attack, what should I conclude about sophisticated cyber security systems? My initial conclusion is that writing marketing collateral is cheaper and easier than building secure systems.
- Are other cyber security firms’ systems vulnerable? I think the answer may be, “Yes, but lawyer and MBA presidents are not sure how and where.”
- Are cost cutting and business objectives more important than developing high-reliability cyber security systems? I would suggest, “Yes. What companies say about their products and services is often different from that which is licensed to customers.”
Net net: Cyber security may be a phrase similar to US telecommunications’ meaning of “unlimited.”
Stephen E Arnold, October 23, 2023