MBAs and Advisors, Is Your Nuclear Winter Looming?
May 31, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Big time, blue chip consulting firms are quite competent in three areas: [1] Sparking divorces because those who want money marry the firm, [2] Ingesting legions of MBAs to advise clients who are well compensated but insecure, and [3] Finding ways to cut costs and pay the highly productive partners more money. I assume some will disagree, but that’s what kills horses at the Kentucky Derby.
I read, and did not think twice about believing every single word of, “Amid Mass Layoff, Accenture Identifies 300+ Generative AI Use Cases.” My first mental reaction was this question: “Just 300?”
The write up points out:
Accenture has identified five broad areas where generative AI can be implemented – advising, creating, automation, software creation and protection. The company is also working with a multinational bank to use generative AI to route large numbers of post-trade processing emails and draft responses with recommended actions to reduce manual effort and risk.
With fast food joints replacing humans with robots, what’s an MBA to do? The article does not identify employment opportunities for those who will be replaced with zeros and ones. As a former blue chip worker bee, I would suggest that anyone laboring in the intellectual vineyards consider a career as an influencer.
Who will get hired and make big bucks at the Bains, the BCGs, the Boozers, and the McKinseys, et al? Here’s my short list:
- MBAs or people admitted to a fancy university with super connections. If one’s mom or dad is an ambassador or frequents parties drooled upon by Town & Country Magazine, one may be in the game.
- Individuals who can sell big buck projects, even if they honed the craft at low rent used car lots. Their future at the blue chips is bright indeed.
- Individuals who are pals with highly regarded partners.
What about the quality of the work produced by the smart software? That is a good question. The idea is to make the client happy and sell follow on work. The initial work product may be reviewed by a partner, or maybe not. The proof of the pudding is the revenue, costs, and profit figures.
That influencer opportunity looks pretty good, doesn’t it? I think snow is falling. Grab a Ralph Lauren Purple Label before you fire up that video camera.
Stephen E Arnold, May 31, 2023
Finally, an Amusing Analysis of AI
May 31, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Intentionally amusing or not, I found “ChatGPT Is Basically a Gen X’er Who Stopped Reading in 12th Grade” a hoot. The write up develops its thesis this way:
Turns out our soon-to-be AI Overlord, ChatGPT, has a worldview based in the 19th-century canon, Gen X sci-fi favorites, and the social dynamics at Hogwart’s School For Lil Magicians.
The essay then cites the estimable Business Insider (noted for its subscribe-to-read-this-most-okay-article approach to raising money) and its report about a data scientist who figured out what books ChatGPT has ingested. The list is interesting because it reflects texts which most of today’s online users would find quaint, racist, irrelevant, or mildly titillating. Who doesn’t need to know about sensitive vampires?
So what’s funny?
First, the write up is similar to outputs from smart software: Recycled information and generic comments.
Second, the reading material was fed into ChatGPT by yet more unnamed smart software experts.
I wonder if the Sundar & Prabhakar Comedy Act will integrate this type of material into their explanation about the great things which will emerge from the Google.
Stephen E Arnold, May 31, 2023
Stop Smart Software! A Petition to Save the World! Signed by 350 Humans!
May 30, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
A “real” journalist (Kevin Roose), who was once advised by a chat bot to divorce his significant other, published the calming, measured, non-clickbait story “AI Poses Risk of Extinction, Industry Leaders Warn.” What’s ahead for the forest fire of smart software activity? The headline explains: a “risk of extinction.” What, no screenshot of a Terminator robot saying:
The strength of the human heart. The difference between us and machines. [Uplifting music]
Sadly, no.
The write up reports:
Eventually, some believe, A.I. could become powerful enough that it could create societal-scale disruptions within a few years if nothing is done to slow it down, though researchers sometimes stop short of explaining how that would happen. These fears are shared by numerous industry leaders, putting them in the unusual position of arguing that a technology they are building — and, in many cases, are furiously racing to build faster than their competitors — poses grave risks and should be regulated more tightly.
Isn’t the Gray Lady amplifying fear, uncertainty, and doubt? Didn’t IBM pay sales engineers to spread the FUD?
Enough. AI is bad. Stop those who refined the math and numerical recipes. Pass laws to regulate the AI technology. Act now. Save humanity. Several observations:
- Technologists who “develop” functions and then beg for rules are disingenuous. The idea is to practice self-control and judgment before inviting Mr. Hyde to brunch.
- With smart software chock full of “unknown unknowns”, how exactly are elected officials supposed to regulate a diffusing and enabling technology? Appealing to US and EU officials omits common sense in my opinion.
- The “fix” for the AI craziness may be emulating the Chinese approach: Do what the CCP wants or be reeducated. What a nation state can do with smart software is indeed something to consider. But China has taken action and will move forward with militarization no matter what the US and EU do.
Silicon Valley type innovation has created a “myth of excellence.” One need only look at the consequences of social media to see the fruits of high school science club decision making. Now a handful of individuals with the Silicon Valley DNA want external forces to rein in their money making experiments and personal theme parks. Sorry, folks. For mature individuals, internal control, ethical behavior, and integrity do the reining in.
A sheet of paper with “rules” and “regulations” is a bit late to the Silicon Valley game. And the Gray Lady? Chasing clicks in my opinion.
Stephen E Arnold, May 30, 2023
Regulate Does Not Mean Regulate. Leave the EU Does Not Mean Leave the EU. Got That?
May 30, 2023
I wrote about Sam AI-man’s explanation that he wants regulation. I pointed out that his definition of regulate means leaving OpenAI free to do whatever it can to ace out the Google and a handful of other big outfits chasing the pot of gold at the end of the AI rainbow.
I just learned from the self-defined trusted news source (Thomson Reuters) that Mr. AI-man has no plans to leave Europe. I understand. “Leave” does not mean leave as in depart, say adios, or hit the road, Jack.
“ChatGPT Maker OpenAI Says Has No Plan to Leave Europe” reports:
OpenAI has no plans to leave Europe, CEO Sam Altman said on Friday, reversing a threat made earlier this week to leave the region if it becomes too hard to comply with upcoming laws on artificial intelligence.
I am not confused. Just as the company’s name OpenAI does not mean “open,” the AI-man’s communication skills are based on the probabilities of certain words following other words. Got it. The slippery fish with AI-man is that the definitions of words in his mind do not regress to the mean. The words — like those of some other notable Silicon Valley high tech giants — reflect the deeper machinations of a machine-assisted superior intelligence.
Translated, this means: Regulate means shaft our competitors. Leave means stay. Regulate means let the OpenAI sheep run through the drinking water of free range cattle.
The trusted write up says:
Reacting to Altman’s tweet on Friday, Dutch MEP Kim van Sparrentak, who has worked closely on the AI draft rules, told Reuters she and her colleagues must stand up to pressure from tech companies…. Voluntary codes of conduct are not the European way.
What does this statement mean to AI-man?
I would suggest from my temporary office in clear thinking Washington, DC, not too much.
I look forward to the next hearing from AI-man. That will be equally easy to understand.
Stephen E Arnold, May 30, 2023
Probability: Who Wants to Dig into What Is Cooking Beneath the Outputs of Smart Software?
May 30, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
The ChatGPT and smart software “revolution” depends on math only a few live and breathe. One drawer in the pigeon hole desk of mathematics is probability. You know the coin flip example. Most computer science types avoid advanced statistics. I know a bit about this world because of my great uncle Vladimir Arnold (yeah, the guy who worked with a so-so mathy type named Andrey Kolmogorov, who was pretty good at mathy stuff and liked hiking in the winter in what my great uncle described as “minimal clothing”).
When it comes to using smart software, the plumbing is kept under the basement floor. What people see are interfaces and application programming interfaces. Letting outsiders watch how the sausage is produced is not what the smart software outfits do. What makes the math interesting is that the systems and methods are not really new. What’s new is that memory, processing power, and content are now available.
If one pries up a tile on the basement floor, the plumbing is complicated. Within each pipe or workflow process are the mathematics that bedevil many college students: Inferential statistics. Those who dabble in the Fancy Math of smart software are familiar with Markov chains and Martingales. There are garden variety maths as well; for example, the calculations beloved of stochastic parrots.
MidJourney’s idea of complex plumbing. Smart software’s guts are more intricate with many knobs for acolytes to turn and many levers to pull for “users.”
The little secret among the mathy folks who whack together smart software is that humanoids set thresholds, establish boundaries on certain operations, exercise controls like those on an old-fashioned steam engine, and find inspiration with a line of code or a process tweak that arrived in the morning gym routine.
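To make the knob-and-lever point concrete, here is a minimal sketch, not anyone’s production system: a first-order Markov chain that predicts the next word, with a `temperature` and a `top_k` cutoff that a humanoid, not the math, chose. The corpus, the default values, and the function names are all my own inventions for illustration.

```python
import random
from collections import Counter, defaultdict

# Toy corpus. A production system ingests terabytes, but the mechanics
# of "predict the next word from what came before" are the same in spirit.
CORPUS = ("the smart software predicts the next word "
          "the smart software predicts the next token").split()

# First-order Markov chain: counts of word -> following word.
chain = defaultdict(Counter)
for current, following in zip(CORPUS, CORPUS[1:]):
    chain[current][following] += 1

def sample_next(word, temperature=0.8, top_k=3):
    """Pick a successor. `temperature` and `top_k` are hand-set knobs:
    a human chose 0.8 and 3, and different choices yield different prose."""
    candidates = chain[word].most_common(top_k)  # threshold: drop the long tail
    if not candidates:
        return None  # dead end: this word was never seen with a successor
    weights = [count ** (1.0 / temperature) for _, count in candidates]
    pick = random.random() * sum(weights)
    for (candidate, _), weight in zip(candidates, weights):
        pick -= weight
        if pick <= 0:
            return candidate
    return candidates[-1][0]

word, output = "the", ["the"]
for _ in range(6):
    word = sample_next(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```

Scale that up by a few billion parameters and the same situation holds: the thresholds and weightings are choices someone made, and nobody can fully explain how those choices interact.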
In short, the snazzy interface makes it almost impossible to understand why certain responses turn out the way they do. Who knows how the individual humanoid tweaks interact as values (probabilities, for instance) flow through the other mathy stuff? Why explain this? Few would understand.
To get a sense of how contentious certain statistical methods are, I suggest you take a look at “Statistical Modeling, Causal Inference, and Social Science.” I thought the paper should have been called, “Why No One at Facebook, Google, OpenAI, and other smart software outfits can explain why some output showed up and some did not, why one response looks reasonable and another one seems like a line ripped from Fantasy Magazine.”
In a nutshell, the cited paper makes one point: Those teaching advanced classes in which probability and related operations are taught do not agree on what tools to use, how to apply the procedures, and what impact certain interactions produce.
Net net: Glib explanations are baloney. This mathy stuff is a serious problem, particularly when a major player like Google seeks to control training sets, off-the-shelf models, the framing of problems, and the integration of the firm’s mental orientation to what’s okay and what’s not okay. Are you okay with that? I am too old to worry, but you, gentle reader, may have decades to understand what my great uncle and his sporty pal were doing. What Google type outfits are doing is less easily looked up, documented, and analyzed.
Stephen E Arnold, May 30, 2023
Smart Software Knows Right from Wrong
May 29, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
The AI gold rush is underway. I am not sure if the gold is the stuff of the King’s crown or one of those NFT confections. I am not sure what company will own the mine or sell the miner’s pants with rivets. But gold rush days remind me of forced labor (human indexers), claim jumping (hiring experts from one company to advantage another), and hydraulic mining (ethical and moral world enhancement). Yes, I see some parallels.
I thought of claim jumping and morals after reading “OpenAI Competitor Says Its Chatbot Has a Rudimentary Conscience: A Fascinating Concept.” The following snippet from the article caught my attention:
Jared Kaplan, a former OpenAI research consultant who went on to found Anthropic with a group of his former coworkers, told Wired that Claude is, in essence, learning right from wrong because its training protocols are “basically reinforcing the behaviors that are more in accord with the constitution, and discourages behaviors that are problematic.”
Please, read the original.
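To see what “reinforcing the behaviors that are more in accord with the constitution” might mean mechanically, here is a toy sketch under my own assumptions. The rules, the scoring, and the names are mine, not Anthropic’s; their real pipeline has a large model do the judging, where crude keyword checks stand in here.

```python
# Toy "constitution": each principle is a name plus a crude pass/fail test.
CONSTITUTION = [
    ("avoid_insults", lambda text: "idiot" not in text.lower()),
    ("refuse_harm",   lambda text: "how to pick a lock" not in text.lower()),
    ("be_helpful",    lambda text: len(text.split()) > 3),
]

def constitution_score(text: str) -> int:
    """Count how many principles the text satisfies. In the real system a
    second model does this judging; keyword checks stand in for it here."""
    return sum(1 for _, rule in CONSTITUTION if rule(text))

def preference_pair(candidate_a: str, candidate_b: str):
    """Return (preferred, rejected): the pair a preference-training step
    would use to reinforce constitution-accordant behavior."""
    if constitution_score(candidate_a) >= constitution_score(candidate_b):
        return candidate_a, candidate_b
    return candidate_b, candidate_a

preferred, rejected = preference_pair(
    "You idiot, figure it out yourself.",
    "Happy to help. Here are the steps to reset your password.",
)
print("reinforce: ", preferred)
print("discourage:", rejected)
```

Note what even this toy makes obvious: whoever writes CONSTITUTION decides what counts as right and wrong, which is exactly the problem with the questions that follow.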
I want to capture several thoughts which flitted through my humanoid mind:
- What is right? What is wrong?
- What yardstick will be used to determine “rightness” or “wrongness”?
- What is the context for each right or wrong determination? For example, at the National Criminal Justice Training Center, there is a concept called “sexploitation.” The moral compass of You.com prohibits searching for information related to this trendy criminal activity. How will the Anthropic approach distinguish a user with a “right” intent from a user with a “wrong” intent?
Net net: Baloney. Services will do what’s necessary to generate revenue. I know from watching the trajectories of the Big Tech outfits that right, wrong, ethics, and associated dorm room discussions wobble around and focus on getting rich or just having a job.
The goal for some will be to get their fingers on the knobs and control levers. Right or wrong?
Stephen E Arnold, May 29, 2023
Shall We Train Smart Software on Scientific Papers? That Is an Outstanding Idea!
May 29, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I read “Fake Scientific Papers Are Alarmingly Common. But New Tools Show Promise in Tackling Growing Symptom of Academia’s Publish or Perish Culture.” New tools sound great. Navigate to the cited document to get the “real” information.
MidJourney’s representation of a smart software system ingesting garbage and outputting garbage.
My purpose in mentioning this article is to ask a question:
In the last five years how many made up, distorted, or baloney filled journal articles have been produced?
The next question is,
How many of these sci-fi confections of scholarly research have been identified and discarded by the top smart software outfits like Facebook, Google, OpenAI, et al?
Let’s assume that 25 percent of the journal content is fakery.
A question I have is:
How does faked information impact the outputs of the smart software systems?
I can anticipate some answers; for example, “Well, there are a lot of papers, so the flawed papers will represent a small portion of the intake data. The law of large numbers or some statistical jibber jabber will explain away the erroneous information.” Remember: bad information is part of the human landscape. Does this mean smart software is a mirror of errors?
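A back-of-the-envelope simulation, using numbers I made up, shows why the law-of-large-numbers defense is jibber jabber. Suppose honest papers report a quantity near its true value of 10.0, the 25 percent of fakes report an inflated value near 14.0, and the model simply learns the average of what it ingests:

```python
import random

random.seed(42)  # reproducible jibber jabber

def learned_value(n_papers: int, fake_share: float = 0.25) -> float:
    """Average of ingested claims: honest papers cluster near 10.0,
    fabricated ones near 14.0."""
    total = 0.0
    for _ in range(n_papers):
        if random.random() < fake_share:
            total += random.gauss(14.0, 1.0)  # fabricated result
        else:
            total += random.gauss(10.0, 1.0)  # honest result
    return total / n_papers

for n in (100, 10_000, 1_000_000):
    print(f"{n:>9} papers -> learned value {learned_value(n):.2f} (truth: 10.00)")
```

More papers shrink the noise, not the bias: every run converges on roughly 11, not 10. A bigger corpus just lets the system be more confident about the contaminated answer.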
Do smart software outfits remove flawed information? If the peer review process cannot, what methods are the smart outfits using? Perhaps these companies should decide what’s correct and what’s incorrect? That sounds like a Googley-type idea, doesn’t it?
And finally, the third question about the impact of bad information on smart software “outputs” has an answer. No, it is not marketing jargon or a recycling of Google’s seven wonders of the AI world.
The answer, in my opinion, is garbage in and garbage out.
But you knew that, right?
Stephen E Arnold, May 29, 2023
Trust in Google and Its Smart Software: What about the Humans at Google?
May 26, 2023
The buzz about Google’s injection of its smart software into its services is crowding out other, more interesting sounds. For example, navigate to “Texas Reaches $8 Million Settlement With Google Over Blatantly False Pixel Ads: Google Settled a Lawsuit Filed by AG Ken Paxton for Alleged False Advertisements for its Google Pixel 4 Smartphone.”
The write up reports:
A press release said Google was confronted with information that it had violated Texas laws against false advertising, but instead of taking steps to correct the issue, the release said, “Google continued its deceptive advertising, prioritizing profits over truthfulness.”
Google is pushing forward with its new mobile devices.
Let’s consider the seven wonders of Google’s AI software. You can find these at this link or summarized in my article “The Seven Wonders of the Google AI World.”
Let’s consider principle one: Be socially beneficial.
I am wondering how the allegedly deceptive advertising encourages me to trust Google.
Principle four: Be accountable to people.
My recollection is that Google works overtime to avoid being held accountable. The company relies upon its lawyers, its lobbyists, and its marketing to float above the annoyances of nation states. In fact, when greeted with substantive actions by the European Union, Google stalls and does not make available its latest and greatest services. The only accountability seems to be a legal action despite Google’s determined lawyerly push back. Avoiding accountability requires intermediaries because Google’s senior executives are busy working on principles.
Kindergarten behavior: Google approaches some problems like kids squabbling over a piggy bank.
MidJourney captures the thrill of two young children squabbling over a piggy bank. I wonder if MidJourney knows what is going on in the newly merged Google smart software units.
Net net: The Texas fine makes clear that some do not trust Google. The “principles” are marketing hoo hah. But everyone loves Google, including me, my French bulldog, and billions of users worldwide. Everyone will want a new $1800 folding Pixel, which is just great based on the marketing information I have seen. It has so many features and works wonders.
Stephen E Arnold, May 26, 2023
The Return: IBM Watsonx!
May 26, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
It is no surprise IBM’s entry into the recent generative AI hubbub is a version of Watson, the company’s longtime algorithmic representative. Techspot reports, “IBM Unleashes New AI Strategy with ‘watsonx’.” The new suite of tools was announced at the company’s recent Think conference. Note “watsonx” is not interchangeable with “Watson.” The older name with the capital letter and no trendy “x” is to be used for tools aimed at individuals rather than company-wide software. That won’t be confusing at all. Writer Bob O’Donnell describes the three components of watsonx:
“Watsonx.ai is the core AI toolset through which companies can build, train, validate and deploy foundation models. Notably, companies can use it to create original models or customize existing foundation models. Watsonx.data is a datastore optimized for AI workloads that’s used to gather, organize, clean and feed data sources that go into those models. Finally, watsonx.governance is a tool for tracking the process of the model’s creation, providing an auditable record of all the data going into the model, how it’s created and more. Another part of IBM’s announcement was the debut of several of its own foundation models that can be used with the watsonx toolset or on their own. Not unlike others, IBM is initially unveiling a LLM-based offering for text-based applications, as well as a code generating and reviewing tool. In addition, the company previewed that it intends to create some additional industry and application-specific models, including ones for geospatial, chemistry, and IT operations applications among others. Critically, IBM said that companies can run these models in the cloud as a service, in a customer’s own data center, or in a hybrid model that leverages both. This is an interesting differentiation because, at the moment, most model providers are not yet letting organizations run their models on premises.”
Just to make things confusing, er, offer more options, each of these three applications will have three different model architectures. On top of that, each of these models will be available with varying numbers of parameters. The idea is not, as it might seem, to give companies decision paralysis but to provide flexibility in cost-performance tradeoffs and computing requirements. O’Donnell notes watsonx can also be used with open-source models, which is helpful since many organizations currently lack staff able to build their own models.
The article notes that, despite the announcement’s strategic timing, it is clear watsonx marks a change in IBM’s approach to software that has been in the works for years: generative AI will be front and center for the foreseeable future. Kinda like society as a whole, apparently.
Cynthia Murrell, May 26, 2023
OpenAI Clarifies What “Regulate” Means to the Sillycon Valley Crowd
May 25, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Sam AI-man begged (at least he did not get on his hands and knees) the US Congress to regulate artificial intelligence (whatever that means). I just read “Sam Altman Says OpenAI Will Leave the EU if There’s Any Real AI Regulation.” I know I am old. I know I lose my car keys a couple of times every 24 hours. I do recall Mr. AI-man wanted regulation.
However, the write up reports:
Though unlike in the AI-friendly U.S., Altman has threatened to take his big tech toys to the other end of the sandbox if they’re not willing to play by his rules.
The vibes of the Zuckster zip through my mind. Facebook just chugs along, pays fines, and mostly ignores regulators. China seems to be an exception for Facebook, the Google, and some companies I don’t know about. China has had mobile execution vans: a person accused and convicted could be executed in the van as soon as it arrived at the convicted bad actor’s location. Re-education camps and mobile death vans suggest why some US companies choose to exit China. Lawyers are not much good in some of China’s efficient state machines; arrive too slowly and the client has already been processed. Fines, however, are okay. Write a check and move on.
Mr. AI-man is making clear that the word “regulate” means one thing to Mr. AI-man and another thing to those who are not getting with the smart software program. The write up states:
Altman said he didn’t want any regulation that restricted users’ access to the tech. He told his London audience he didn’t want anything that could harm smaller companies or the open source AI movement (as a reminder, OpenAI is decidedly more closed off as a company than it’s ever been, citing “competition”). That’s not to mention any new regulation would inherently benefit OpenAI, so when things inevitably go wrong it can point to the law to say they were doing everything they needed to do.
I think “regulate” means what the declining US fast food outfit that told me to “have it your way” meant. The burger joint put in a paper bag whatever the professionals behind the counter wanted to deliver. Mr. AI-man doesn’t want any “behind the counter” decision making by a regulatory cafeteria serving up its own version of lunch.
Mr. AI-man wants “regulate” to mean his way.
In the US, it seems, that is exactly what big tech and promising venture funded outfits are going to get; that is, whatever each company wants. Competition is good. See how well OpenAI and Microsoft are competing with Facebook and Google. Regulate appears to mean “let us do what we want to do.”
I am probably wrong. OpenAI, Google, and other leaders in smart software are at this very moment consuming the Harvard Library’s books in search of information about ethical behavior. The “moral” learning comes later.
Net net: Now I understand the new denotation of “regulate.” Governments work for US high-tech firms. Thus, I think the French term laissez-faire nails it.
Stephen E Arnold, May 25, 2023