A Technologist Realizes Philosophy 101 Was Not All Horse Feathers

January 6, 2025

This is an official dinobaby post. No smart software involved in this blog post.

I am not too keen on non-dinobabies thinking big thoughts about life. The GenX, Y, and Zedders are good at reinventing the wheel, fire, and tacos. What some of these non-dinobabies are less good at is thinking about the world online information has disestablished and is reassembling in chaotic constructs.

The essay, published in HackerNoon as “Here’s Why High Achievers Feel Like Failures,” explains why so many non-dinobabies are miserable. My hunch is that the most miserable are those who have achieved some measure of financial and professional success and embrace whingeing, insecurity, chemicals to blur mental functions, big car payments, and “experiences.” The essay does a very good job of explaining the impact of getting badges of excellence for everything from making a scoobie (aka lanyard, gimp, boondoggle, or scoubidou) bracelet at summer camp to tweaking an algorithm that causes a teen to seek solace in a controlled substance. (One boss says, “Hey, you hit the revenue target. Too bad about the kid. Let’s get lunch. I’ll buy.”)

The write up explains why achievement and exceeding performance goals can be less than satisfying. Does anyone remember the Google VP who overdosed with the help of a gig worker? My recollection is that the wizard’s boat was docked within a few minutes of his home, which was stuffed with a wifey and some kiddies. Nevertheless, an OnlyFans potential big earner was enlisted to assist with the chemical bliss that may have contributed to his logging off early.

The essay offers this anecdote about a high performer who, I think, was an entrepreneur riding a rocket ship:

Think about it:

  • Three years ago, Mark was ecstatic about his first $10K month. Now, he beats himself up over $800K months.
  • Two years ago, he celebrated hiring his first employee. Now, managing 50 people feels like “not scaling fast enough.”
  • Last year, a feature in a local business journal made his year. Now, national press mentions barely register.

His progress didn’t disappear. His standards just kept pace with his growth, like a shadow that stretches ahead no matter how far you walk.

The main idea is that once one gets “something,” one wants more. The write up says:

Every time you level up, your brain does something fascinating – it rewrites your definition of “normal.” What used to be a summit becomes your new base camp. And while this psychological adaptation helped our ancestors survive, it’s creating a crisis of confidence in today’s achievement-oriented world.

Yep, the driving force behind achievement is the need to succeed so one can achieve more. I am a dinobaby, and I don’t want to achieve anything. I never did. I have been lucky: Born at the right time. Survived school. Got lucky and was hired on a fluke. Now, 60 years later, I know how I achieved the modicum of success I accrued. I was really lucky, and despite my 80 years, I am not yet dead.

The essay makes this statement:

We’re running paleolithic software on modern hardware. Every time you achieve something, your brain…

  1. Quickly normalizes the achievement (adaptation)
  2. Immediately starts wanting more (drive)
  3. Erases the emotional memory of the struggle (efficiency)

Is there a fix? Absolutely. Not surprisingly, the essay includes a to-do list. The approach is logical and ideally suited to those who want to become successful. Here are the action steps:

Once you’ve reviewed your time horizons, the next step is to build what I call a “Progress Inventory.” Dedicate 15 minutes every Sunday night to reflect and fill out these three sections:

Victories Section
  • What’s easier now than it was last month?
  • What do you do automatically that used to require thought?
  • What problems have disappeared?
  • What new capabilities have you gained?
Growth Section
  • What are you attempting now that you wouldn’t have dared before?
  • Where have your standards risen?
  • What new problems have you earned the right to have?
  • What relationships have deepened or expanded?
Learning Section
  • What mistakes are you no longer making?
  • What new insights have you gained?
  • What patterns are you starting to recognize?
  • What tools have you mastered?

These two powerful tools – the Progress Mirror and the Progress Inventory – work together to solve the central problem we’ve been discussing: your brain’s tendency to hide your growth behind rising standards. The Progress Mirror forces you to zoom out and see the bigger picture through three critical time horizons. It’s like stepping back from a painting to view the full canvas of your growth. Meanwhile, the weekly Progress Inventory zooms in, capturing the subtle shifts and small victories that compound into major transformations. Used together, these tools create something I call “progress consciousness” – the ability to stay ambitious while remaining aware of how far you’ve come.

But what happens when the road map does not lead to a zen-like state? Because I have been lucky, I cannot offer an answer to this question of actual, implicit, or imminent failure. I can serve up some observations:

  1. This essay has the backbone for a self-help book aimed at insecure high performers. My suggestion is to buy a copy of Thomas Harris’ I’m OK – You’re OK and make a lot of money. Crank out the merch with slogans from the victories, growth, and learning sections of the book.
  2. The explanations are okay, but far from new. Spend some time with Friedrich Nietzsche’s Der Wille zur Macht. Too bad Friedrich was dead when his sister assembled the odds and ends of Herr Nietzsche’s notes into a book addressing some of the issues in the HackerNoon essay.
  3. The write up focuses on success, self-doubt, and an ever-receding finish line. What about the people who live on the street in most major cities, the individuals who cannot support themselves, or the young people with minds trashed by digital flows? The essay offers little for these under-performers, at least as measured against its doubt-ridden high performers.

Net net: The essay makes clear, first, that education today does not cover some basic learnings; for example, the good Herr Friedrich Nietzsche. Second, the excitement of re-discovering fire is no substitute for engagement with a social fabric that implicitly provides a framework for thinking and behaving in a way that others in the milieu recognize as appropriate. This HackerNoon essay encapsulates why big tech and other successful enterprises are dysfunctional. Welcome to the digital world.

Stephen E Arnold, January 6, 2025

AI Makes Stuff Up and Lies. This Is New Information?

December 23, 2024

The blog post is the work of a dinobaby, not AI.

I spotted “Alignment Faking in Large Language Models.” My initial reaction was, “This is new information?” and “Have the authors forgotten about hallucination?” The original article from Anthropic sparked another essay. This one appeared in Time Magazine (online version). Time’s article was titled “Exclusive: New Research Shows AI Strategically Lying.” I like the “strategically lying,” which implies that there is some intent behind the prevarication. Since smart software reflects its developers’ use of fancy math and the numerous knobs and levers those developers can adjust at the same time the model is gobbling up information and “learning,” the notion of “strategically lying” struck me as interesting.


Thanks MidJourney. Good enough.

“What strategy is implemented? Who thought up the strategy? Is the strategy working?” These were the questions which occurred to me. The Time essay said:

experiments jointly carried out by the AI company Anthropic and the nonprofit Redwood Research, shows a version of Anthropic’s model, Claude, strategically misleading its creators during the training process in order to avoid being modified.

This suggests that the people assembling the algorithms and training data, configuring the system, twiddling the administrative settings, and doing technical manipulations were not imposing a strategy. The smart software was cooking up a strategy on its own. Who will be the first to say that the software is alive and then, like the former Google engineer, express a belief that the system is sentient? It’s sci-fi time, I suppose.

The write up pointed out:

Researchers also found evidence that suggests the capacity of AIs to deceive their human creators increases as they become more powerful.

That is an interesting idea. Pumping more compute and data into a model gives it a greater capacity to manipulate its outputs to fool humans who are eager to grab something that promises to make life easier and the user smarter. If data about the US education system’s efficacy are accurate, Americans are not doing too well in the reading, writing, and arithmetic departments. Therefore, discerning strategic lies might be difficult.

The essay concluded:

What Anthropic’s experiments seem to show is that reinforcement learning is insufficient as a technique for creating reliably safe models, especially as those models get more advanced. Which is a big problem, because it’s the most effective and widely-used alignment technique that we currently have.

What’s this “seem”? The actual output of large language models built on the transformer methods crafted by Google is baloney some of the time. Google itself had to regroup after the “glue cheese to pizza” suggestion.

Several observations:

  1. Smart software has become the technology more important than any other. The problem is that its outputs are often wonky, and now the systems are befuddling the wizards who created and operate them. What if AI is like a carnival ride that routinely injures those looking for kicks?
  2. AI is finding its way into many applications, but the resulting revenue has frayed some investors’ nerves. The fix is to go faster and be first to reach the revenue goal. This frenzy for a payoff has been building since early 2024, but costs remain brutally high.
  3. The behavior of large language models is not understood by some of their developers. Does this seem like a problem?

Net net: “Seem?” One lies or one does not.

Stephen E Arnold, December 23, 2024

Why Present Bad Sites?

October 7, 2024

This blog post did not require the use of smart software, just a dumb humanoid.

I read “Google Search Is Testing Blue Checkmark Feature That Helps Users Spot Genuine Websites.” I know this is a test, but I have a question: What does genuine mean to Google and its smart software? I know that Google cannot answer this question without resorting to consulting nonsensicalness, but “genuine” is a word. I just don’t know what’s genuine to Google. Is it a Web site that uses SEO trickery to appear in a results list? Is it a blog post written by a duplicitous PR person working at a large Google-type firm? Is it a PDF appearing on a “genuine” government’s Web site?


A programmer thinking about blue check marks. The obvious conclusion is to provide a free blue check mark. Then later one can charge for that sign of goodness. Thanks, Microsoft. Good enough. Just like that big Windows update. Good enough.

The write up reports:

Blue checkmarks have appeared next to certain websites on Google Search for some users. According to a report from The Verge, this is because Google is experimenting with a verification feature to let users know that sites aren’t fraudulent or scams.

Okay, what’s “fraudulent” and what’s a “scam”?

What does Google say? According to the write up:

A Google spokesperson confirmed the experiment, telling Mashable, “We regularly experiment with features that help shoppers identify trustworthy businesses online, and we are currently running a small experiment showing checkmarks next to certain businesses on Google.”

A couple of observations:

  1. Why not allow the user to NOT out these sites? Better yet, give the user a choice of seeing de-junked or fully junked sites? Wow, that’s too hard. Imagine. A Boolean operator. (A sketch of the idea follows this list.)
  2. Why does Google bother to index these sites? Why not change the block list for the crawl? Wow, that’s too much work. Imagine a Googler editing a “do not crawl” list manually.
  3. Is Google admitting that it can identify problematic sites like those which push fake medications or the stolen software videos on YouTube? That’s pretty useful information for an attorney taking legal action against Google, isn’t it?
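For the curious, that Boolean operator is not rocket science. Here is a minimal Python sketch of a user-controlled de-junk toggle; the domains, results, and blocklist are invented, and no actual Google feature or API is implied:

```python
# Minimal sketch of a user-controlled "de-junk" toggle.
# Domains, results, and blocklist are invented for illustration.

JUNK_DOMAINS = {"fake-pills.example", "stolen-videos.example"}

results = [
    {"title": "Legitimate shop", "domain": "shop.example"},
    {"title": "Miracle cure!!!", "domain": "fake-pills.example"},
]

def dejunk(hits, blocklist, show_junk=False):
    # The Boolean operator in question: keep a hit only if its
    # domain is NOT on the blocklist.
    if show_junk:
        return hits
    return [hit for hit in hits if hit["domain"] not in blocklist]

print(dejunk(results, JUNK_DOMAINS))
# [{'title': 'Legitimate shop', 'domain': 'shop.example'}]
```

One list comprehension. Imagine.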

Net net: Google is unregulated and spouts baloney. Google needs to jack up its revenue. It has fines to pay and AI wizards to fund. Tough work.

Stephen E Arnold, October 7, 2024

US Government Procurement: Long Live Silos

September 12, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read “Defense AI Models A Risk to Life Alleges Spurned Tech Firm.” Frankly, the headline made little sense to me, so I worked through what is a story about a contractor who believes it was shafted by a large consulting firm. In my experience, the situation is neither unusual nor particularly newsworthy. The write up does a reasonable job of presenting a story which could have been titled “Naive Start Up Smoked by Big Consulting Firm.” A small high technology contractor with smart software hooks up with a project in the Department of Defense. The high tech outfit is not able to meet the requirements to get the job. The little AI high tech outfit scouts around and brings in a big consulting firm to get the deal done. After some bureaucratic cycles, the small high tech outfit is benched. If you are not familiar with how US government contracting works, the write up provides some insight.

image

The work product of AI projects will be digital silos. That is the key message of this procurement story. I don’t feel sorry for the smaller company. It did not prepare itself to deal with the big time government contractor. Outfits are big for a reason. They exploit opportunities and rarely emulate Mother Teresa-type behavior. Thanks, MSFT Copilot. Good enough illustration although the robots look stupid.

For me, the article is a stellar example of how information or AI silos are created within the US government. Smart software is hot right now. Each agency, each department, and each unit wants to deploy an AI enabled service. Then that AI infused service becomes (one hopes) an afterburner for more money with which one can add headcount and more AI technology. AI is a rare opportunity to become recognized as a high-performance operator.

As a result, each AI service is constructed within a silo. Think about a structure designed to hold that specific service. The design is purpose built to keep rats and other vermin from benefiting from the goodies within the AI silo. Despite the talk about breaking down information silos, silos in a high profile, high potential technical area like artificial intelligence are the principal product of each agency, each department, and each unit. The payoff could be a promotion which might result in a cushy job in the commercial AI sector or a golden ring; that is, the Senior Executive Service.

I understand the frustration of the small, high tech AI outfit. It knows it has been played by the big consulting firm and the procurement process. But, hey, there is a reason the big consulting firm generates billions of dollars in government contracts. The smaller outfit failed to lock down its role, retain the key to the know-how it developed, and allowed its “must have” cachet to slip away.

Welcome, AI company, to the world of the big time Beltway Bandit. Were you expecting the big time consulting firm to do what you wanted? Did you enter the deal with a lack of knowledge, management sophistication, and a couple of false assumptions? And what about the notion of “algorithmic warfare”? Yeah, autonomous weapons systems are the future. Furthermore, when autonomous systems are deployed, the only way they can be neutralized is to use more capable autonomous weapons. Does this sound like a replay of the logic of Cold War thinking and everyone’s favorite bedtime read On Thermonuclear War, still available on Amazon and, as of September 6, 2024, on the Internet Archive at this link?

Several observations are warranted:

  1. Small outfits need to be informed about how big consulting companies with billions in government contracts work the system before exchanging substantive information.
  2. The US government procurement processes are slow to change, and the Federal Acquisition Regulations and related government documents provide the rules of the road. Learn them before getting too excited about a request for a proposal or Federal Register announcement.
  3. In a fight with a big time government contractor, make sure you bring money, not a chip on your shoulder, to the meeting with attorneys. The entity with the most money typically wins because legal fees are more likely to kill a smaller firm than any judicial or tribunal ruling.

Net net: Silos are inherent in the work process of any government, even those run by different rules. But what about the small AI firm’s loss of the contract? It happens so often that I view it as a normal part of the success workflow. Winners and losers are inevitable. Be smarter to avoid losing.

Stephen E Arnold, September 12, 2024

AI Safety Evaluations, Some Issues Exist

August 14, 2024

Ah, corporate self regulation. What could go wrong? Well, as TechCrunch reports, “Many Safety Evaluations for AI Models Have Significant Limitations.” Writer Kyle Wiggers tells us:

“Generative AI models … are coming under increased scrutiny for their tendency to make mistakes and generally behave unpredictably. Now, organizations from public sector agencies to big tech firms are proposing new benchmarks to test these models’ safety. Toward the end of last year, startup Scale AI formed a lab dedicated to evaluating how well models align with safety guidelines. This month, NIST and the U.K. AI Safety Institute released tools designed to assess model risk. But these model-probing tests and methods may be inadequate. The Ada Lovelace Institute (ALI), a U.K.-based nonprofit AI research organization, conducted a study that interviewed experts from academic labs, civil society and vendors who are producing models, as well as audited recent research into AI safety evaluations. The co-authors found that while current evaluations can be useful, they’re non-exhaustive, can be gamed easily and don’t necessarily give an indication of how models will behave in real-world scenarios.”

There are several reasons for the gloomy conclusion. For one, there are no established best practices for these evaluations, leaving each organization to go its own way. One approach, benchmarking, has certain problems. For example, for time or cost reasons, models are often tested on the same data they were trained on. Whether they can perform in the wild is another matter. Also, even small changes to a model can make big differences in behavior, but few organizations have the time or money to test every software iteration.
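To make the contamination problem concrete, here is a toy sketch (invented data, crude verbatim matching; real audits use n-gram overlap) of why testing on training data inflates a benchmark score:

```python
# Toy illustration of benchmark contamination: if an evaluation item also
# appears in the training corpus, the score measures memorization, not skill.
# All data below is invented.

train_corpus = {
    "the capital of france is paris",
    "water boils at 100 degrees celsius",
}

benchmark = [
    "The capital of France is Paris",
    "The capital of Peru is Lima",
]

def contaminated(item, corpus):
    # Crude check: verbatim match after case folding.
    return item.lower() in corpus

leaks = [q for q in benchmark if contaminated(q, train_corpus)]
print(f"{len(leaks)} of {len(benchmark)} benchmark items leak from training data")
```

A model can ace the first question by rote; only the second says anything about behavior in the wild.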

What about red-teaming: hiring someone to probe the model for flaws? The low number of qualified red-teamers and the laborious nature of the method make it costly, out of reach for smaller firms. There are also few agreed-upon standards for the practice, so it is hard to assess the effectiveness of red-team projects.

The post suggests all is not lost—as long as we are willing to take responsibility for evaluations out of AI firms’ hands. Good luck prying open that death grip. Government regulators and third-party testers would hypothetically fill the role, complete with transparency. What a concept. It would also be good to develop standard practices and context-specific evaluations. Bonus points if a method is based on an understanding of how each AI model operates. (Sadly, such understanding remains elusive.)

Even with these measures, it may never be possible to ensure any model is truly safe. The write-up concludes with a quote from the study’s co-author Mahi Hardalupas:

“Determining if a model is ‘safe’ requires understanding the contexts in which it is used, who it is sold or made accessible to, and whether the safeguards that are in place are adequate and robust to reduce those risks. Evaluations of a foundation model can serve an exploratory purpose to identify potential risks, but they cannot guarantee a model is safe, let alone ‘perfectly safe.’ Many of our interviewees agreed that evaluations cannot prove a model is safe and can only indicate a model is unsafe.”

How comforting.

Cynthia Murrell, August 14, 2024

Which Outfit Will Win? The Google or Some Bunch of Busy Bodies

July 30, 2024

This essay is the work of a dumb humanoid. No smart software required.

It may not be the shoot out at the OK Corral, but the dust up is likely to be a fan favorite. It is possible that some crypto outfit will find a way to issue an NFT and host pay-per-view broadcasts of the committee meetings, lawyer news conferences, and pundits recycling press releases. On the other hand, maybe the shoot out is a Hollywood deal. Everyone knows who is going to win before the real action begins.

“Third Party Cookies Have Got to Go” reports:

After reading Google’s announcement that they no longer plan to deprecate third-party cookies, we wanted to make our position clear. We have updated our TAG finding Third-party cookies must be removed to spell out our concerns.


A great debate is underway. Who or what wins? Experience suggests that money has an advantage in this type of disagreement. Thanks, MSFT. Good enough.

Who is making this draconian statement? A government regulator? A big-time legal eagle representing an NGO? Someone running for president of the United States? A member of the CCP? Nope, the World Wide Web Consortium or W3C. This group was set up by Tim Berners-Lee, who wanted to find and link documents at CERN. The outfit wants to cook up Web standards, much to the delight of online advertising interests and certain organizations monitoring Web traffic. Rules, after all, allow crafting ways to circumvent their intent and enable the magical world of the modern Internet. How is that working out? I thought the big technology companies set standards like no “soft 404s” or “sorry, Chrome created a problem. We are really, really sorry.”

The write up continues:

We aren’t the only ones who are worried. The updated RFC that defines cookies says that third-party cookies have “inherent privacy issues” and that therefore web “resources cannot rely upon third-party cookies being treated consistently by user agents for the foreseeable future.” We agree. Furthermore, tracking and subsequent data collection and brokerage can support micro-targeting of political messages, which can have a detrimental impact on society, as identified by Privacy International and other organizations. Regulatory authorities, such as the UK’s Information Commissioner’s Office, have also called for the blocking of third-party cookies.

I understand, but the Google seems to be doing one of those “let’s just dump this loser” moves. Revenue is more important than the silly privacy thing. Users who want privacy should take control of their technology.
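For readers who have never peeked under the hood, here is a toy Python simulation of the mechanism the W3C wants gone; the domains are invented, and this is a sketch of the idea, not any vendor’s implementation:

```python
# Toy simulation of cross-site tracking via a third-party cookie.
# All domains are invented for illustration.

tracker_log = {}   # what the tracker remembers, keyed by cookie value
next_uid = [0]     # counter so visit() can mint fresh cookie ids

def visit(site, cookie=None):
    # The page at `site` embeds a resource from the tracker's domain.
    # If the browser holds no cookie for the tracker, the tracker sets one.
    if cookie is None:
        cookie = f"uid-{next_uid[0]}"
        next_uid[0] += 1
        tracker_log[cookie] = []
    tracker_log[cookie].append(site)  # the tracker logs this visit
    return cookie

c = visit("news.example")        # tracker plants its cookie
c = visit("shoes.example", c)    # same cookie arrives from an unrelated site
c = visit("clinic.example", c)
print(tracker_log)
# {'uid-0': ['news.example', 'shoes.example', 'clinic.example']}
```

One identifier, three unrelated sites, one tidy profile. That is the “inherent privacy issue” in the RFC’s language.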

The W3C points out:

The unfortunate climb-down will also have secondary effects, as it is likely to delay cross-browser work on effective alternatives to third-party cookies. We fear it will have an overall detrimental impact on the cause of improving privacy on the web. We sincerely hope that Google reverses this decision and re-commits to a path towards removal of third-party cookies.

Now the big question: “Who is going to win this shoot out?”

Normal folks might compromise or test a number of options to determine which makes the most sense at a particularly interesting point in time. There is post-Covid weirdness, the threat of escalating armed conflict in what, six, 27, or 95 countries, and financial brittleness. That anti-fragile handwaving is not getting much traction in my opinion.

At one end of the corral are the sleek technology wizards. These normcore folks have phasers, AI, and money. At the other end of the corral are the opponents, who look like a random selection of Café de Paris customers. Place your bets.

Stephen E Arnold, July 30, 2024


Harvard University: A Sticky Wicket, Right, Old Chap?

April 22, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

I know plastic recycling does not work. The garbage pick up outfit assures me it recycles. Yeah, sure. However, I know one place where recycling is alive and well. I watched a video about someone named Francesca Gino, a Harvard professor. A YouTuber named Pete Judo presents information showing that Ms. Gino did some recycling. He did not award her those little green Ouroboros symbols. Copying and pasting are out of bounds in the Land of Ivory Towers, in which Harvard is allegedly the ivory-est. You can find his videos at https://www.youtube.com/@PeteJudo1.


The august group of academic scholars is struggling to decide which image best fits the 21st-century version of their prestigious university: the garbage recycling image representing reuse of trash generated by other scholars or the snake-eating-its-tail image of the Ouroboros. So many decisions have these elite thinkers. Thanks, MSFT Copilot. Looking forward to your new minority stake in a company in a far-off land?

As impressive a source as a YouTuber is, I think I found an even more prestigious organ of insight, the estimable New York Post. Navigate through the pop-ups until you see the “real” news story “Harvard Medical School Professor Massively Plagiarized Report for Lockheed Martin Suit: Judge.” The thrust of this story is that a moonlighting scholar “plagiarized huge swaths of a report he submitted on carcinogenic chemicals, according to a federal judge, who agreed to remove it as evidence in a class action case against Lockheed Martin.”

Is this Medical School-related item spot on? I don’t know. Is the Gino-us activity on the money? For that matter, is a third Harvard professor of ethics guilty of an ethical violation in a journal article about — wait for it — ethics? I don’t know, and I don’t have the energy to figure out if plagiarism is the new Covid among academics in Boston.

However, based on the drift of these examples, I can offer several observations:

  1. Harvard University has a public relations problem. Judging from the coverage in such outstanding information services as YouTube and the New York Post, the remarkable school needs to get its act together and do some “messaging.” Whether the plagiarism pandemic is real or fabricated by the type of adversary Microsoft continually says creates trouble, Harvard’s reputation is going to be worn down by a stream of digital bad news.
  2. The ways of a most Ivory Tower thing are mysterious. Nevertheless, it is clear that the mechanism for hiring, motivating, directing, and preventing academic superstars from sticking their hands in a pile of dog doo is not working. That falls into what I call “governance.” I want to use my best Harvard rhetoric now: “Hey, folks, you ain’t making good moves.”
  3. To the top dog (president, CFO, bursar, whatever): you are on the path to an “F.” Imagine what a moral stick-in-the-mud like William James would think of Harvard’s leadership if he were still waddling around, mumbling about radical pragmatism. Even more frightening is an AI version of this sporty chap doing a version of AI Jesus on Twitch. Instead of recycling Christian phrases, he would combine his thoughts about ethics, psychology, and Harvard with the possibly true stories about Harvard integrity herpes. Yikes.

Net net: What about educating tomorrow’s leaders? Should these young minds emulate what professors are doing, or should they be learning to pursue knowledge without shortcuts, cheating, plagiarism, and looking like characters from The Simpsons?

Stephen E Arnold, April 22, 2024

Google Gem: Arresting People Management

April 18, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I have worked for some well-managed outfits: Halliburton, Booz Allen, Ziff Communications, and others in my 55-year career. The idea that employees at Halliburton Nuclear (my assignment) would occupy the offices of a senior officer like Eugene Saltarelli was inconceivable. (Mr. Saltarelli sported a facial scar. When asked about the disfigurement, he would stare at the interlocutor and ask, “What scar?” Do you want to “take over” his office?) Another of my superiors at a firm in New York had a special method of shaping employee behavior. This professional did nothing to suppress rumors that two of his wives drowned during “storms” after falling off his sail boat. Did I entertain taking over his many-windowed office in Manhattan? Answer: Are you sure you internalized the anecdote?


Another Google management gem glitters in the public spotlight.

But at the Google, life seems to be different, maybe a little more frisky absent psychological behavior controls. I read “Nine Google Workers Get Arrested After Sit-In Protest over $1.2B Cloud Deal with Israel.” The main idea seems to be that someone at Google sold cloud services to the Israeli government. Employees apparently viewed the contract as bad, wrong, stupid, or some combination of attributes. The fix involved a 1960s-style sit-in. After a period of time elapsed, someone at Google called the police. The employee-protesters were arrested.

I recall hearing years ago that Google faced a similar push back about a contract with the US government. To be honest, Google has generated so many human resource moments, I have a tough time recalling each. A few are Mt. Everests of excellence; for example, the termination of Dr. Timnit Gebru. This Googler had the nerve to question the bias of Google’s smart software. She departed. I assume she enjoyed the images of biased signers of documents related to America’s independence and multi-ethnic soldiers in the World War II German army. Bias? Google thinks not, I guess.

The protest occurs as the Google tries to cope with increased market pressure and the tough-to-control costs of smart software. The quick fix is to nuke or RIF employees. “Google Lays Off Workers As Part of Pretty Large-Scale Restructuring” reports, citing Business Insider:

Ruth Porat, Google’s chief financial officer, sent an email to employees announcing that the company would create “growth hubs” in India, Mexico and Ireland. The unspecified number of layoffs will affect teams in the company’s finance department, including its treasury, business services and revenue cash operations units

That looks like off-shoring to me. The idea was a cookie cutter solution spun up by blue chip consulting companies 20, maybe 30 years ago. On paper, the math is more enticing than a new Land Rover and about as reliable. A state-side worker costs X fully loaded with G&A, benefits, etc. An off-shore worker costs X minus Y. If the delta means cost savings, go for it. What’s not to like?
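Here is that consultant math as a back-of-the-envelope sketch; every number is invented for illustration:

```python
# The blue chip off-shoring arithmetic, with invented figures.

stateside_salary = 120_000
loading = 0.40                          # G&A, benefits, overhead
x = stateside_salary * (1 + loading)    # fully loaded state-side cost per head

y = 110_000                             # the slide deck's promised savings per head
offshore_cost = x - y

print(f"State-side, fully loaded: ${x:,.0f}")
print(f"Off-shore (X minus Y):    ${offshore_cost:,.0f}")
print(f"Delta per head:           ${x - offshore_cost:,.0f}")
```

On paper, the delta always lands in the savings column. Reliability, like the Land Rover’s, is not in the spreadsheet.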

According to a source cited in the New York Post:

“As we’ve said, we’re responsibly investing in our company’s biggest priorities and the significant opportunities ahead… To best position us for these opportunities, throughout the second half of 2023 and into 2024, a number of our teams made changes to become more efficient and work better, remove layers and align their resources to their biggest product priorities.”

Yep, align. That senior management team has a way with words.

Will those who are in fear of their jobs join in the increasingly routine Google employee protests? Will disgruntled staff sandbag products and code? Will those who are terminated write tell-alls about their experiences at an outfit operating under Code Red for more than a year?

Several observations:

  1. Microsoft’s quite effective push of its AI products and services continues. In certain key markets like New York City and the US government, Google is on the defensive. Hint: Microsoft has the advantage, and the Google is struggling to catch up.
  2. Google’s management of its personnel seems to create the wrong type of news. Example: staff arrests. Is that part of Peter Drucker’s management advice?
  3. The Google leadership team appears to lack the ability to do its job in a quiet, effective, positive, and measured way.

Net net: The online ad money machine keeps running. But if the investigations into Google’s business practices get traction, Google will have additional challenges to face. The Sundar & Prabhakar Comedy team should make a TikTok-type, how-to video about human resource management. I would prefer a short video about the origin story for the online advertising method which allowed Google to become a fascinating outfit.

Stephen E Arnold, April 18, 2024

Philosophy and Money: Adam Smith Remains Flexible

March 6, 2024

This essay is the work of a dumb dinobaby. No smart software required.

In the early twenty-first century, China was slated to overtake the United States as the world’s top economy. Unfortunately for the “sleeping dragon,” China’s economy has tanked due to many factors. The country, however, still remains a strong spot for technology development such as AI and chips. The Register explains why China is still doing well in the tech sector: “How Did China Get So Good At Chips And AI? Congressional Investigation Blames American Venture Capitalists.”

Venture capitalists are always interested in increasing their wealth and subverting anything preventing that. While the US government has choked China’s semiconductor industry and denied it the tools to develop AI, venture capitalists have been funding those sectors. The House Select Committee on the Chinese Communist Party (CCP) shared that five venture capital firms are funneling billions into these two industries: Walden International, Sequoia Capital, Qualcomm Ventures, GSR Ventures, and GGV Capital. Chinese semiconductor and AI businesses are linked to human rights abuses and the People’s Liberation Army. These five venture capital firms don’t appear interested in respecting human rights or preventing the spread of communism.

The House Select Committee on the CCP discovered that some $1.9 billion went to AI companies that support China’s mega-surveillance state and aided in the Uyghur genocide. The US blacklisted these AI-related companies. The committee also found that $1.2 billion was sent to 150 semiconductor companies.

The committee also accused the VCs of sharing more than funding with China:

“The committee also called out the VCs for "intangible" contributions – including consulting, talent acquisition, and market opportunities. In one example highlighted in the report, the committee singled out Walden International chairman Lip-Bu Tan, who previously served as the CEO of Cadence Design Systems. Cadence develops electronic design automation software which Chinese corporates, like Huawei, are actively trying to replicate. The committee alleges that Tan and other partners at Walden coordinated business opportunities and provided subject-matter expertise while holding board seats at SMIC and Advanced Micro-Fabrication Equipment Co. (AMEC).”

Sharing knowledge and business connections is as bad as (if not worse than) funding China’s tech sector. It’s like providing instructions and resources on how to build a nuclear weapon. If China had only the funding without the know-how, it would not be as frightening.

Whitney Grace, March 6, 2024

The Google: A Bit of a Wobble

February 28, 2024

This essay is the work of a dumb humanoid. No smart software required.

Check out this snap from Techmeme on February 28, 2024. The folks commenting about Google Gemini’s very interesting picture generation system are confused. Some think that Gemini makes clear that the Google has lost its way. Others just see the recent image gaffes as one more indication that the company is too big to manage and that the present senior management is too busy amping up the advertising pushed in front of “users.”


I wanted to take a look at what Analytics India Magazine had to say. Its article is “Aal Izz Well, Google.” The write up — from a nation state with some nifty drone technology and so-so relationships with its neighbors — offers this statement:

In recent weeks, the situation has intensified to the extent that there are calls for the resignation of Google chief Sundar Pichai. Helios Capital founder Samir Arora has suggested a likelihood of Pichai facing termination or choosing to resign soon, in the aftermath of the Gemini debacle.

The write up offers:

Google chief Sundar Pichai, too, graciously accepted the mistake. “I know that some of its responses have offended our users and shown bias – to be clear, that’s completely unacceptable and we got it wrong,” Pichai said in a memo.

The author of the Analytics India article is Siddharth Jindal. I wonder if he will talk about Sundar’s and Prabhakar’s most recent comedy sketch. The roll out of Bard in Paris was a hoot, and it too had gaffes. That was a year ago. Now it is a year later, and what has Google accomplished?

Analytics India emphasizes that “Google is not alone.” My team and I know that smart software is the next big thing. But Analytics India is particularly forgiving.

The estimable New York Post takes a slightly different approach. “Google Parent Loses $70B in Market Value after Woke AI Chatbot Disaster” reports:

Google’s parent company lost more than $70 billion in market value in a single trading day after its “woke” chatbot’s bizarre image debacle stoked renewed fears among investors about its heavily promoted AI tool. Shares of Alphabet sank 4.4% to close at $138.75 in the week’s first day of trading on Monday. The Google parent’s stock moved slightly higher in premarket trading on Tuesday [February 28, 2024, 9:41 am US Eastern time].

As I write this, I turned to Google’s nemesis, the Softies in Redmond, Washington. I asked for a dinosaur looking at a meteorite crater. Here’s what Copilot provided:

[Copilot’s dinosaur image]

Several observations:

  1. This is a spectacular event. Sundar and Prabhakar will have a smooth explanation, I believe. Smooth may be their core competency.
  2. The fact that a Code Red has become a Code Dead makes clear that communications at Google require a tune up. But if no one is in charge, blowing $70 billion will catch the attention of some folks with sharp teeth and a mean spirit.
  3. The adolescent attitudes of a high school science club appear to inform the management methods at Google. A big time investigative journalist told me that Google did not operate like a high school science club planning a bus trip to the state science fair. I stick by my HSSCMM or high school science club management method. I won’t repeat her phrase because it is similar to Google’s quantumly supreme smart software: wildly off base.

Net net: I love this rationalization of management, governance, and technical failure. Everyone in the science club gets a failing grade. Hey, wizards and wizardettes, why not just stick to selling advertising?

Stephen E Arnold, February 28, 2024
