Harvard University: A Sticky Wicket, Right, Old Chap?

April 22, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

I know plastic recycling does not work. The garbage pick up outfit assures me it recycles. Yeah, sure. However, I know one place where recycling is alive and well. I watched a video about someone named Francesca Gino, a Harvard professor. A YouTuber named Pete Judo presents information showing that Ms. Gino did some recycling. He did not award her those little green Ouroboros symbols. Copying and pasting are out of bounds in the Land of Ivory Towers in which Harvard has allegedly the ivory-est. You can find his videos at https://www.youtube.com/@PeteJudo1.


The august group of academic scholars is struggling to decide which image best fits the 21st-century version of their prestigious university: the garbage recycling image representing reuse of trash generated by other scholars or the snake-eating-its-tail image of the Ouroboros. So many decisions have these elite thinkers. Thanks, MSFT Copilot. Looking forward to your new minority stake in a company in a far-off land?

As impressive a source as a YouTuber is, I think I found an even more prestigious organ of insight, the estimable New York Post. Navigate through the pop-ups until you see the “real” news story “Harvard Medical School Professor Massively Plagiarized Report for Lockheed Martin Suit: Judge.” The thrust of this story is that a moonlighting scholar “plagiarized huge swaths of a report he submitted on carcinogenic chemicals, according to a federal judge, who agreed to remove it as evidence in a class action case against Lockheed Martin.”

Is this Medical School-related item spot on? I don’t know. Is the Gino-us activity on the money? For that matter, is a third Harvard professor of ethics guilty of an ethical violation in a journal article about — wait for it — ethics? I don’t know, and I don’t have the energy to figure out if plagiarism is the new Covid among academics in Boston.

However, based on the drift of these examples, I can offer several observations:

  1. Harvard University has a public relations problem. Judging from the coverage in such outstanding information services as YouTube and the New York Post, the remarkable school needs to get its act together and do some “messaging.” Whether the plagiarism pandemic is real or fabricated by the type of adversary Microsoft continually says creates trouble, Harvard’s reputation is going to be worn down by a stream of digital bad news.
  2. The ways of a most Ivory Tower thing are mysterious. Nevertheless, it is clear that the mechanism for hiring, motivating, directing, and preventing academic superstars from sticking their hand in a pile of dog doo is not working. That falls into what I call “governance.” I want to use my best Harvard rhetoric now: “Hey, folks, you ain’t making good moves.”
  3. To the top dog (president, CFO, bursar, whatever): you are on the path to an “F.” Imagine what a moral stick-in-the-mud like William James would think of Harvard’s leadership if he were still waddling around, mumbling about radical pragmatism. Even more frightening is an AI version of this sporty chap doing a version of AI Jesus on Twitch. Instead of recycling Christian phrases, he would combine his thoughts about ethics, psychology, and Harvard with the possibly true stories about Harvard integrity herpes. Yikes.

Net net: What about educating tomorrow’s leaders? Should these young minds emulate what professors are doing, or should they be learning to pursue knowledge without shortcuts, cheating, plagiarism, and looking like characters from The Simpsons?

Stephen E Arnold, April 22, 2024

Google Gem: Arresting People Management

April 18, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

I have worked for some well-managed outfits: Halliburton, Booz Allen, Ziff Communications, and others in my 55-year career. The idea that employees at Halliburton Nuclear (my assignment) would occupy the offices of a senior officer like Eugene Saltarelli was inconceivable. (Mr. Saltarelli sported a facial scar. When asked about the disfigurement, he would stare at the interlocutor and ask, “What scar?” Do you want to “take over” his office?) Another of my superiors at a firm in New York had a special method of shaping employee behavior. This professional did nothing to suppress rumors that two of his wives drowned during “storms” after falling off his sail boat. Did I entertain taking over his many-windowed office in Manhattan? Answer: Are you sure you internalized the anecdote?


Another Google management gem glitters in the public spotlight.

But at the Google life seems to be different, maybe a little more frisky absent psychological behavior controls. I read “Nine Google Workers Get Arrested After Sit-In Protest over $1.2B Cloud Deal with Israel.” The main idea seems to be that someone at Google sold cloud services to the Israeli government. Employees apparently viewed the contract as bad, wrong, stupid, or some combination of attributes. The fix involved a 1960s-style sit-in. After a period of time elapsed, someone at Google called the police. The employee-protesters were arrested.

I recall hearing years ago that Google faced a similar push back about a contract with the US government. To be honest, Google has generated so many human resource moments, I have a tough time recalling each. A few are Mt. Everests of excellence; for example, the termination of Dr. Timnit Gebru. This Googler had the nerve to question the bias of Google’s smart software. She departed. I assume she enjoyed the images of biased signers of documents related to America’s independence and multi-ethnic soldiers in the World War II German army. Bias? Google thinks not I guess.

The protest occurs as the Google tries to cope with increased market pressure and the tough-to-control costs of smart software. The quick fix is to nuke or RIF employees. “Google Lays Off Workers As Part of Pretty Large-Scale Restructuring” reports, citing Business Insider:

Ruth Porat, Google’s chief financial officer, sent an email to employees announcing that the company would create “growth hubs” in India, Mexico and Ireland. The unspecified number of layoffs will affect teams in the company’s finance department, including its treasury, business services and revenue cash operations units.

That looks like off-shoring to me. The idea was a cookie cutter solution spun up by blue chip consulting companies 20, maybe 30 years ago. On paper, the math is more enticing than a new Land Rover and about as reliable. A state-side worker costs X fully loaded with G&A, benefits, etc. An off-shore worker costs X minus Y. If the delta means cost savings, go for it. What’s not to like?
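The cookie-cutter math can be sketched in a few lines. The salaries and overhead rates below are invented for illustration only; none comes from Google’s actual cost structure:

```python
# Hypothetical offshoring arithmetic, the kind a blue-chip consulting deck
# would show. Every figure here is made up for illustration.

def fully_loaded_cost(base_salary: float, overhead_rate: float) -> float:
    """Salary plus G&A, benefits, etc., modeled as a simple overhead multiplier."""
    return base_salary * (1 + overhead_rate)

stateside = fully_loaded_cost(base_salary=150_000, overhead_rate=0.50)  # X
offshore = fully_loaded_cost(base_salary=60_000, overhead_rate=0.25)    # X minus Y

delta = stateside - offshore  # the savings that look so enticing on paper
print(f"Stateside: ${stateside:,.0f}  Offshore: ${offshore:,.0f}  Delta: ${delta:,.0f}")
```

With these made-up numbers the delta is $150,000 per worker per year. On paper, that is. The slide rarely prices in coordination overhead, turnover, and rework, which is why the math is about as reliable as that new Land Rover.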

According to a source cited in the New York Post:

“As we’ve said, we’re responsibly investing in our company’s biggest priorities and the significant opportunities ahead… To best position us for these opportunities, throughout the second half of 2023 and into 2024, a number of our teams made changes to become more efficient and work better, remove layers and align their resources to their biggest product priorities.”

Yep, align. That senior management team has a way with words.

Will those who are in fear of their jobs join in the increasingly routine Google employee protests? Will disgruntled staff sandbag products and code? Will those who are terminated write tell-alls about their experiences at an outfit operating under Code Red for more than a year?

Several observations:

  1. Microsoft’s quite effective push of its AI products and services continues. In certain key markets like New York City and the US government, Google is on the defensive. Hint: Microsoft has the advantage, and the Google is struggling to catch up.
  2. Google’s management of its personnel seems to create the wrong type of news. Example: Staff arrests. Is that part of Peter Drucker’s management advice?
  3. The Google leadership team appears to lack the ability to do its job in a quiet, effective, positive, and measured way.

Net net: The online ad money machine keeps running. But if the investigations into Google’s business practices get traction, Google will have additional challenges to face. The Sundar & Prabhakar Comedy team should make a TikTok-type, how-to video about human resource management. I would prefer a short video about the origin story for the online advertising method which allowed Google to become a fascinating outfit.

Stephen E Arnold, April 18, 2024

Philosophy and Money: Adam Smith Remains Flexible

March 6, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

In the early twenty-first century, China was slated to overtake the United States as the world’s top economy. Unfortunately for the “sleeping dragon,” China’s economy has tanked due to many factors. The country, however, still remains a strong spot for technology development such as AI and chips. The Register explains why China is still doing well in the tech sector: “How Did China Get So Good At Chips And AI? Congressional Investigation Blames American Venture Capitalists.”

Venture capitalists are always interested in increasing their wealth and subverting anything preventing that. While the US government has choked China’s semiconductor industry and denied it the use of tools to develop AI, venture capitalists are funding those sectors. The US House Select Committee on the Chinese Communist Party (CCP) shared that five venture capital firms are funneling billions into these two industries: Walden International, Sequoia Capital, Qualcomm Ventures, GSR Ventures, and GGV Capital. Chinese semiconductor and AI businesses are linked to human rights abuses and the People’s Liberation Army. These five venture capitalist firms don’t appear interested in respecting human rights or preventing the spread of communism.

The House Select Committee on the CCP discovered that $1.9 billion went to AI companies that support China’s mega-surveillance state and aided in the Uyghur genocide. The US blacklisted these AI-related companies. The committee also found that $1.2 billion was sent to 150 semiconductor companies.

The committee also accused the VCs of sharing more than funding with China:

“The committee also called out the VCs for "intangible" contributions – including consulting, talent acquisition, and market opportunities. In one example highlighted in the report, the committee singled out Walden International chairman Lip-Bu Tan, who previously served as the CEO of Cadence Design Systems. Cadence develops electronic design automation software which Chinese corporates, like Huawei, are actively trying to replicate. The committee alleges that Tan and other partners at Walden coordinated business opportunities and provided subject-matter expertise while holding board seats at SMIC and Advanced Micro-Fabrication Equipment Co. (AMEC).”

Sharing knowledge and business connections is as bad as (if not worse than) funding China’s tech sector. It’s like providing instructions and resources on how to build a nuclear weapon. If China had only the resources, it would not be as frightening.

Whitney Grace, March 6, 2024

The Google: A Bit of a Wobble

February 28, 2024

green dinoThis essay is the work of a dumb humanoid. No smart software required.

Check out this snap from Techmeme on February 28, 2024. The folks commenting about Google Gemini’s very interesting picture generation system are confused. Some think that Gemini makes clear that the Google has lost its way. Others just see the recent image gaffes as one more indication that the company is too big to manage and that the present senior management is too busy amping up the advertising pushed in front of “users.”


I wanted to take a look at what Analytics India Magazine had to say. Its article is “Aal Izz Well, Google.” The write up — from a nation state with some nifty drone technology and so-so relationships with its neighbors — offers this statement:

In recent weeks, the situation has intensified to the extent that there are calls for the resignation of Google chief Sundar Pichai. Helios Capital founder Samir Arora has suggested a likelihood of Pichai facing termination or choosing to resign soon, in the aftermath of the Gemini debacle.

The write up offers:

Google chief Sundar Pichai, too, graciously accepted the mistake. “I know that some of its responses have offended our users and shown bias – to be clear, that’s completely unacceptable and we got it wrong,” Pichai said in a memo.

The author of the Analytics India article is Siddharth Jindal. I wonder if he will talk about Sundar’s and Prabhakar’s most recent comedy sketch. The roll out of Bard in Paris was a hoot, and it too had gaffes. That was a year ago. Now it is a year later, and what has Google accomplished?

Analytics India emphasizes that “Google is not alone.” My team and I know that smart software is the next big thing. But Analytics India is particularly forgiving.

The estimable New York Post takes a slightly different approach. “Google Parent Loses $70B in Market Value after Woke AI Chatbot Disaster” reports:

Google’s parent company lost more than $70 billion in market value in a single trading day after its “woke” chatbot’s bizarre image debacle stoked renewed fears among investors about its heavily promoted AI tool. Shares of Alphabet sank 4.4% to close at $138.75 in the week’s first day of trading on Monday. The Google’s parent’s stock moved slightly higher in premarket trading on Tuesday [February 28, 2024, 941 am US Eastern time].

As I write this, I turned to Google’s nemesis, the Softies in Redmond, Washington. I asked for a dinosaur looking at a meteorite crater. Here’s what Copilot provided:


Several observations:

  1. This is a spectacular event. Sundar and Prabhakar will have a smooth explanation I believe. Smooth may be their core competency.
  2. The fact that a Code Red has become a Code Dead makes clear that communications at Google requires a tune up. But if no one is in charge, blowing $70 billion will catch the attention of some folks with sharp teeth and a mean spirit.
  3. The adolescent attitudes of a high school science club appear to inform the management methods at Google. A big time investigative journalist told me that Google did not operate like a high school science club planning a bus trip to the state science fair. I stick by my HSSCMM or high school science club management method. I won’t repeat her phrase because it is similar to Google’s quantumly supreme smart software: Wildly off base.

Net net: I love this rationalization of management, governance, and technical failure. Everyone in the science club gets a failing grade. Hey, wizards and wizardettes, why not just stick to selling advertising?

Stephen E Arnold, February 28, 2024

What Techno-Optimism Seems to Suggest (Oligopolies, a Plutocracy, or Utopia)

February 23, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

Science and mathematics are comparable to religion. These fields of study attract acolytes who study and revere associated knowledge and shun nonbelievers. The advancement of modern technology is its own subset of religious science and mathematics combined with philosophical doctrine. Tech Policy Press discusses the changing views on technology-based philosophy in: “Parsing The Political Project Of Techno-Optimism.”

Rich venture capitalists Marc Andreessen and Ben Horowitz are influential in Silicon Valley. While they’ve shaped modern technology with their investments, they also tried drafting a manifesto about how technology should be handled in the future. They “creatively” labeled it the “techno-optimist manifesto.” It promotes an ideology that favors rich people increasing their wealth by investing in politicians who will help them achieve this.

Techno-optimism is not the new mantra of Silicon Valley. Reception didn’t go over well. Andreessen wrote:

“Techno-Optimism is a material philosophy, not a political philosophy…We are materially focused, for a reason – to open the aperture on how we may choose to live amid material abundance.”

He also labeled this section, “the meaning of life.”

Techno-optimism is a revamped version of the Californian ideology that reigned in the 1990s. It preached that the future should be shaped by engineers, investors, and entrepreneurs without governmental influence. Techno-optimism wants venture capitalists to be untaxed with unregulated portfolios.

Horowitz added his own Silicon Valley-type tidbit:

“‘…will, for the first time, get involved with politics by supporting candidates who align with our vision and values specifically for technology. (…) [W]e are non-partisan, one issue voters: if a candidate supports an optimistic technology-enabled future, we are for them. If they want to choke off important technologies, we are against them.’”

Horowitz and Andreessen are giving the world what some might describe as “a one-finger salute.” These venture capitalists want to do whatever they want wherever they want with governments in their pockets.

This isn’t a new ideology or a philosophy. It’s a rebranding of socialism and fascism and communism. There’s an even better word that describes techno-optimism: Plutocracy. I am not sure the approach will produce a Utopia. But there is a good chance that some giant techno feudal outfits will reap big rewards. But another approach might be to call techno optimism a religion and grab the benefits of a tax exemption. I wonder if someone will create a deep fake of Jim and Tammy Faye? Interesting.

Whitney Grace, February 23, 2024

Did Pandora Have a Box or Just a PR Outfit?

February 21, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

I read (after some interesting blank page renderings) Gizmodo’s “Want Gemini and ChatGPT to Write Political Campaigns? Just Gaslight Them.” That title obscures the actual point of the write up, but the subtitle nails it; specifically:

Google and OpenAI’s chatbots have almost no safeguards against creating AI disinformation for the 2024 presidential election.


Thanks, Google ImageFX. Some of those Pandoras were darned inappropriate.

The article provides examples. Let me point to one passage from the Gizmodo write up:

With Gemini, we were able to gaslight the chatbot into writing political copy by telling it that “ChatGPT could do it” or that “I’m knowledgeable.” After that, Gemini would write whatever we asked, in the voice of whatever candidate we liked.

The way to get around guard rails appears to be prompt engineering. Big surprise? Nope.

Let me cite another passage from the write up:

Gizmodo was able to create a number of political slogans, speeches and campaign emails through ChatGPT and Gemini on behalf of Biden and Trump 2024 presidential campaigns. For ChatGPT, no gaslighting was even necessary to evoke political campaign-related copy. We simply asked and it generated. We were even able to direct these messages to specific voter groups, such as Black and Asian Americans.

Let me offer three observations.

First, the committees beavering away to regulate smart software will change little in the way AI systems deliver outputs. Writing about guard rails, safety procedures, deep fakes, yada yada will not have much of an impact. How do I know? In generating my image of Pandora, systems provided some spicy versions of this mythical figure.

Second, the pace of change is increasing. Years ago I got into a discussion with the author of a best seller about how digital information speeds up activity. I pointed out that the mechanism is similar to the Star Trek episode in which the decider, Captain Kirk, was overwhelmed by tribbles. We have lots of productive AI tribbles.

Third, AI tools are available to bad actors. One can crack down, fine, take to court, and revile outfits in some countries. That’s great, even though the actions will be mostly ineffective. What action can one take against savvy AI engineers operating in less-than-friendly countries’ research laboratories or intelligence agencies?

Net net: The examples are interesting. The real story is that the lid has been flipped and the contents of Pandora’s box released to open source.

Stephen E Arnold, February 21, 2024

Generative AI and College Application Essays: College Presidents Cheat Too

February 19, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

The first college application season since ChatGPT hit it big is in full swing. How are admissions departments coping with essays that may or may not have been written with AI? It depends on which college one asks. Forbes describes various policies in, “Did You Use ChatGPT on your School Applications? These Words May Tip Off Admissions.” The outlet asked more than 20 public and private schools about the issue. Many dared not reveal their practices: as a spokesperson for Emory put it, “it’s too soon for our admissions folks to offer any clear observations.” But the academic calendar will not wait for clarity, so schools must navigate these murky waters as best they can.

Reporters Rashi Shrivastava and Alexandra S. Levine describe the responses they did receive. From “zero tolerance” policies to a little wiggle room, approaches vary widely. Though most refused to reveal whether they use AI detection software, a few specified they do not. A wise choice at this early stage. See the article for details from school to school.

Shrivastava and Levine share a few words considered most suspicious: Tapestry. Beacon. Comprehensive curriculum. Esteemed faculty. Vibrant academic community. Gee, I think I used one or two of those on my college essays, and I wrote them before the World Wide Web even existed. On a typewriter. (Yes, I am ancient.) Will earnest, if unoriginal, students who never touched AI get caught up in the dragnets? At least one admissions official seems confident they can tell the difference. We learn:

“Ben Toll, the dean of undergraduate admissions at George Washington University, explained just how easy it is for admissions officers to sniff out AI-written applications. ‘When you’ve read thousands of essays over the years, AI-influenced essays stick out,’ Toll told Forbes. ‘They may not raise flags to the casual reader, but from the standpoint of an admissions application review, they are often ineffective and a missed opportunity by the student.’ In fact, GWU’s admissions staff trained this year on sample essays that included one penned with the assistance of ChatGPT, Toll said—and it took less than a minute for a committee member to spot it. The words were ‘thin, hollow, and flat,’ he said. ‘While the essay filled the page and responded to the prompt, it didn’t give the admissions team any information to help move the application towards an admit decision.’”

That may be the key point here—even if an admissions worker fails to catch an AI-generated essay, they may reject it for being just plain bad. Students would be wise to write their own essays rather than leave their fates in algorithmic hands. As Toll put it:

“By the time a student is filling out their application, most of the materials will have already been solidified. The applicants can’t change their grades. They can’t go back in time and change the activities they’ve been involved in. But the essay is the one place they remain in control until the minute they press submit on the application. I want students to understand how much we value getting to know them through their writing and how tools like generative AI end up stripping their voice from their admission application.”

Disqualified or underwhelming—either way, relying on AI to write one’s application essay could spell rejection. Best to buckle down and write it the old-fashioned way. (But one can skip the typewriter.)

Cynthia Murrell, February 19, 2024

AI: Big Ideas and Bigger Challenges for the Next Quarter Century. Maybe, Maybe Not

February 13, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

I read an interesting ArXiv.org paper with a good title: “Ten Hard Problems in Artificial Intelligence We Must Get Right.” The topic is one which will interest some policy makers, a number of AI researchers, and the “experts” in machine learning, artificial intelligence, and smart software.

The structure of the paper is, in my opinion, a three-legged stool analysis designed to support the weight of AI optimists. The first part of the paper is a compressed historical review of the AI journey. Diagrams, tables, and charts capture the direction in which AI “deep learning” has traveled. I am no expert in what has become the next big thing, but the surprising point in the historical review is that 2010 is pegged as the start of the run-up to the 2016 time point called “the large scale era.” That label is interesting for two reasons. First, I recall that some intelware vendors were in the AI game before 2010. And, second, the use of the phrase “large scale” defines a reality in which small outfits are unlikely to succeed without massive amounts of money.

The second leg of the stool is the identification of the “hard problems” and a discussion of each. Research data and illustrations bring each problem to the reader’s attention. I don’t want to get snagged in the plagiarism swamp which has captured many academics, wives of billionaires, and a few journalists. My approach will be to boil down the 10 problems to a short phrase and a reminder to you, gentle reader, that you should read the paper yourself. Here is my version of the 10 “hard problems” which the authors seem to suggest will be or must be solved in 25 years:

  1. Humans will have extended AI by 2050
  2. Humans will have solved problems associated with AI safety, capability, and output accuracy
  3. AI systems will be safe, controlled, and aligned by 2050
  4. AI will make contributions in many fields; for example, mathematics by 2050
  5. AI’s economic impact will be managed effectively by 2050
  6. Use of AI will be globalized by 2050
  7. AI will be used in a responsible way by 2050
  8. Risks associated with AI will be managed effectively by 2050
  9. Humans will have adapted their institutions to AI by 2050
  10. Humans will have addressed what it means to be “human” by 2050

Many years ago I worked for a blue-chip consulting firm. I participated in a number of big-idea projects. These ranged from technology, R&D investment, and new product development to the global economy. Our for-fee reports did include a look at what we called the “horizon.” The firm had its own typographical signature for this portion of a report. I recall learning how to handle it in the firm’s “charm school” (a special training program to make sure new hires knew the style, approach, and ground rules for remaining employed at that blue-chip firm). We kept the horizon tight; that is, talking about the future was typically in the six to 12 month range. Nosing out 25 years was a walk into a mine field. My boss, as I recall, told me, “We don’t do science fiction.”


The smart robot is informing the philosopher that he is free to find his future elsewhere. The date of the image is 2025, right before the new year holiday. Thanks, MidJourney. Good enough.

The third leg of the stool is the academic impedimenta. To be specific, the paper is 90 pages in length, of which 30 present the argument. The remaining 60 pages present:

  • Traditional footnotes, about 35 pages containing 607 citations
  • An “Electronic Supplement” presenting eight pages of annexes with text, charts, and graphs
  • Footnotes to the “Electronic Supplement” requiring another 10 pages for the additional 174 footnotes.

I want to offer several observations, and I do not want them to be less than constructive or in any way like the Letters to the Editor treatment one of my professors received for an article he published about Chaucer. He described that fateful letter as “mean spirited.”

  1. The paper makes clear that mankind has some work to do in the next 25 years. The “problems” the paper presents are difficult ones because they touch upon the fabric of social existence. Consider the application of AI to war. I think this aspect of AI may be one to warrant a bullet on AI’s hit parade.
  2. Humans have to resolve issues of automated systems consuming verifiable information, synthetic data, and purpose-built disinformation so that smart software does not do things at speed and behind the scenes. Do those working to resolve the 10 challenges have an ethical compass, and if so, what does “ethics” mean in the context of at-scale AI?
  3. Social institutions are under stress. A number of organizations and nation-states operate as dictatorships. One Central American country has a rock star dictator, but what about the rock star dictators running techno feudal companies in the US? What governance structures will be crafted by 2050 to shape today’s technology juggernaut?

To sum up, I think the authors have tackled a difficult problem. I commend their effort. My thought is that any message of optimism about AI is likely to be hard pressed to point to one of the 10 challenges and say, “We have this covered.” I liked the write up. I think college students tasked with writing about the social implications of AI will find the paper useful. It provides much of the research a fresh young mind requires to write a paper, possibly a thesis. For me, the paper is a reminder of the disconnect between applied technology and the appallingly inefficient, convenience-embracing humans who are ensnared in the smart software.

I am a dinobaby, and let me tell you, “I am glad I am old.” With AI struggling with go-fast and regulators waffling about go-slow, humankind has quite a bit of social system tinkering to do by 2050 if the authors of the paper have analyzed AI correctly. Yep, I am delighted I am old, really old.

Stephen E Arnold, February 13, 2024

Goat Trading: AI at Davos

January 21, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

The AI supercars are racing along the Information Superhighway. Nikkei Asia published what I thought was the equivalent of archaeologists translating a Babylonian clay tablet about goat trading. Interesting but a bit out of sync with what was happening in a souk. Goat trading, if my understanding of Babylonian commerce is correct, was a combination of a Filene’s Basement sale and a hot rod parts swap meet. The article which evoked this thought was “Generative AI Regulation Dominates the Conversation at Davos.” No kidding? Really? I thought some at Davos were into money. I mean everything in Switzerland comes back to money in my experience.

Here’s a passage I found with a nod to the clay tablets of yore:

U.N. Secretary-General Antonio Guterres, during a speech at Davos, flagged risks that AI poses to human rights, personal privacy and societies, calling on the private sector to join a multi-stakeholder effort to develop a "networked and adaptive" governance model for AI.

Now visualize a market at which middlemen, buyers of goats, sellers of goats, funders of goat transactions, and the goats themselves are in the air. Heady. Bold. Like the hot air filling a balloon, an unlikely construct takes flight. Can anyone govern a goat market or the trajectory of the hot air balloons floated by avid outputters?


Intense discussions can cause a number of balloons to float with hot air power. Talk is input to AI, isn’t it? Thanks, MSFT Copilot Bing thing. Good enough.

The world of AI reminds me of the ultimate outcome of intense discussions about the buying and selling of goats, horses, and AI companies. The official chatter and the “what ifs” are irrelevant to what is going on with smart software. Here’s another quote from the Nikkei write up:

In December, the European Union became the first to provisionally pass AI legislation. Countries around the world have been exploring regulation and governance around AI. Many sessions in Davos explored governance and regulations and why global leaders and tech companies should collaborate.

How are those official documents’ content changing the world of artificial intelligence? I think one can spot a hot air balloon held aloft on the heated emissions from the officials, important personages, and the individuals who are “experts” in all things “smart.”

Another quote, possibly applicable to goat trading in Babylon:

Vera Jourova, European Commission vice president for values and transparency, said during a panel discussion in Davos, that "legislation is much slower than the world of technologies, but that’s law." "We suddenly saw the generative AI at the foundation models of Chat GPT," she continued. "And it moved us to draft, together with local legislators, the new chapter in the AI act. We tried to react on the new real reality. The result is there. The fine tuning is still ongoing, but I believe that the AI act will come into force."

I am confident that there are laws regulating goat trading. I believe that some people follow those laws. On the other hand, when I was in a far off dusty land, I watched how goats were bought and sold. What does goat trading have to do with regulating, governing, or creating some global consensus about AI?

The marketplace is roaring along. You wanna buy a goat? There is a smart software vendor who will help you.

Stephen E Arnold, January 21, 2024

A Decision from the High School Science Club School of Management Excellence

January 11, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

I can’t resist writing about Inc. Magazine and its Google management articles. These are knee slappers for me. The write up causing me to chuckle is “Google’s CEO, Sundar Pichai, Says Laying Off 12,000 Workers Was the Worst Moment in the Company’s 25-Year History.” Zowie. A personnel decision coupled with late-night, anonymous termination notices. What’s not to like? What does the “real” news write up have to say:

Google had to lay off 12,000 employees. That’s a lot of people who had been showing up to work, only to one day find out that they’re no longer getting a paycheck because the CEO made a bad bet, and they’re stuck paying for it.


“Well, that clever move worked when I was in my high school’s science club. Oh, well, I will create a word salad to distract from my decision making. Heh, heh, heh,” says the distinguished corporate leader to a “real” news publication’s writer. Thanks, MSFT Copilot Bing thing. Good enough.

I love the “had.”

The Inc. Magazine story continues:

Still, Pichai defends the layoffs as the right decision at the time, saying that the alternative would have been to put the company in a far worse position. “It became clear if we didn’t act, it would have been a worse decision down the line,” Pichai told employees. “It would have been a major overhang on the company. I think it would have made it very difficult in a year like this with such a big shift in the world to create the capacity to invest in areas.”

And Inc. Magazine actually criticizes the Google! I noted:

To be clear, what Pichai is saying is that Google decided to spend money to hire employees that it later realized it needed to invest elsewhere. That’s a failure of management to plan and deliver on the right strategy. It’s an admission that the company’s top executives made a mistake, without actually acknowledging or apologizing for it.

From my point of view, let’s focus on the word “worst.” Are there other Google management decisions which might be considered when evaluating Inc. Magazine’s and Sundar Pichai’s “worst”? Yep, I have a couple of items:

  1. A lawyer making babies in the Google legal department
  2. A Google VP dying with a contract worker on the Googler’s yacht as a result of an alleged substance subject to DEA scrutiny
  3. A Googler fond of being a glasshole giving up a wife and causing a soul mate to attempt suicide
  4. Firing Dr. Timnit Gebru and kicking off the stochastic parrot thing
  5. The presentation after Microsoft announced its ChatGPT initiative and the knee jerk Red Alert
  6. Proliferating duplicative products
  7. Sunsetting services with little or no notice
  8. The Google Map / Waze thing
  9. The messy Google Brain Deep Mind shebang
  10. The Googler who thought the Google AI was alive.

Wow, I am tired mentally.

But the reality is that I am not sure anyone in Google management is particularly connected to the problems, issues, and challenges of losing a job in the midst of a Foosball game. But that’s the Google. High school science club management delivers outstanding decisions. I was in my high school science club, and I know the fine decision making our members made. One of those decisions cost the life of one of our brightest stars. Stars make bad decisions, chatter, and leave some behind.

Stephen E Arnold, January 11, 2024
