AI That Sort of, Kind of Did Not Work: Useful Reminders

April 24, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Epic AI Fails. A List of Failed Machine Learning Projects.” My hunch is that a write up suggesting that smart software may disappoint in some cases is not going to be a popular topic. I can hear the pooh-poohs now: “The examples used older technology.” And “Our system has been engineered to avoid that problem.” And “Our Large Language Model uses synthetic data which improves performance and the value of system outputs.” And “We have developed a meta-layer of AI which integrates multiple systems in order to produce a more useful response.”

Did I omit any promises other than “The check is in the mail” or “Our customer support team will respond to your call immediately, 24×7, and with an engineer, not a smart chatbot. Humans, you know”?

The article from Analytics India, an online publication, provides some color on interesting flops; specifically:

  • Amazon’s recruitment system. Think discrimination against females. Amazon’s Rekognition system and its identification of elected officials as criminals. Wait. Maybe those IDs were accurate?
  • Covid 19 models. Moving on.
  • Google and the diabetic retinopathy detection system. The marketing sounded fine. Candy for breakfast? Sure, why not?
  • OpenAI’s Samantha. Not as crazy as Microsoft Tay but in the ballpark.
  • Microsoft Tay. Yeah, famous self instruction in near real time.
  • Sentient Investment AI Hedge Fund. Your retirement savings? There are jobs at Wal-Mart I think.
  • Watson. Wow. Cognitive computing and Jeopardy.

The author takes a less light-hearted approach than I do. It is a useful list with helpful reminders that it is easier to write tweets and marketing collateral than to deliver smart software that lives up to sales confections.

Stephen E Arnold, April 24, 2023

Divorcing the Google: Legal Eagles Experience a Frisson of Anticipation

April 24, 2023

No smart software has been used to create this dinobaby’s blog post.

I have poked around looking for a version or copy of the contract Samsung signed with Google for the firms’ mobile phone tie up. Based on what I have heard at conferences and read on the Internet (of course, I believe everything I read on the Internet, don’t you?), it appears that there are several major deals.

The first is the use of and access to the mindlessly fragmented Android mobile phone software. Samsung can do some innovating, but the Google is into providing “great experiences.” Why would a mobile phone maker like Samsung allow a user to manage contacts and block mobile calls without implementing a modern-day hunt for gold near Placer?

The second is the “suggestion” — mind you, the suggestion is nothing more than a gentle nudge — to keep that largely-malware-free Google Play Store front and center.

The third is the default search engine. Buy a Samsung, get Google Search.

Now you know why the legal eagles are shivering when they think of litigation to redo the Google – Samsung deal. For those who believe the misinformation zipping around about Microsoft Bing displacing Google Search, my thought would be to ask yourself, “Who gains by pumping out this type of disinformation?” One answer is big Chinese mobile phone manufacturers. This is Art of War stuff, and I won’t dwell on this. What about Microsoft? Maybe, but I like to think happy thoughts about Microsoft. I say, “No one at Microsoft would engage in disinformation intended to make life difficult for the online advertising king.” Another possibility is Silicon Valley type journalists who pick up rumors, amplify them, and then comment that Samsung is kicking the tires of Bing with ChatGPT. Suddenly a “real” news outfit emits the Samsung rumor. Exciting for the legal eagles.

The write up “Samsung Can’t Dump Google for Bing As the Default Search Engine on Its Phones” does a good job of explaining the contours of a Google – Samsung tie up.

Several observations:

First, the alleged Samsung search replacement provides a glimpse of how certain information can move from whispers at conferences to headlines.

Second, I would not bet against lawyers. With enough money, contracts can be nullified, transformed, or left alone. The only option which disappoints attorneys is the one that lets sleeping dogs lie.

Third, the growing upswell of anti-Google sentiment is noticeable. That may be a far larger problem for Googzilla than rumors about Samsung. Perceptions can be quite real, and they translate into impacts. I am tempted to quote William James, but I won’t.

Net net: If Samsung wants to swizzle a deal with an entity other than the Google, the lawyers may vibrate with such frequency that a feather or two may fall off.

Stephen E Arnold, April 24, 2023

Google: Any Day Now, Any Day Now

April 21, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read what could be a recycled script from the Sundar and Prabhakar Comedy Show. Although the show is not yet a YouTube video series, the company is edging ever closer to becoming the most amusing online advertising company in Mountain View.

“Google Devising Radical Search Changes to Beat Back A.I. Rivals” is chock full of one-liners. Now these are not as memorable as Jack Benny’s “I’m thinking it over” or Abbott and Costello’s “I don’t know is on third,” but the Google is in the ballpark.

I liked these statements:

The tech giant is sprinting. [Exactly how does Googzilla sprint?]

Google is racing [Okay, Kentucky Derby stuff or NASCAR stuff? One goes at the speed of organisms, and the other is into the engineering approach to speed. Google is in progressive tense mode, not delivering results mode.]

“We’re excited about bringing new A.I.-powered features to search, and will share more details soon.” [I laughed at the idea of an outfit in panic and Red Alert mode getting excited. Is this like a high school science club learning that it has qualified to participate in the international math competition, or excitement like members of the high school science club learning that the club will not be expelled for hijacking the principal’s morning announcements?]

“Modernizing its search engine has become an obsession at Google…” [I wonder if this is the type of obsession that pulled the Google VP to his yacht with a specialized contractor allegedly in possession of a controlled substance, or the legal eagle populating his nest, or the Google HR mastermind who made stochastic parrot the go-to phrase for discrimination and bias.]

The article contains more comedic gems. The main point is that my team and I cannot keep pace with the number of new applications of the chatbot technology. Amazon is giving the capability away free. China’s technical sector continues to beaver away adding to its formidable array of software capabilities. Plus we spotted a German outfit able to crank out interesting videos of former President Obama making fascinating statements about another former president.

The future and progressive present tenses are interesting. Other firms are outputting features, services, and products at a remarkable pace.

And what are the Google search-sensitive professionals doing? Creating more grist for the Sundar and Prabhakar Comedy Show.

The only problem is that Google continues to talk, do PR, and promise. What’s that suggest about quantum supremacy or delivering relevant search results? I do know one thing. If I want an answer, I am going to run the query on the You.com service, thank you very much.

Stephen E Arnold, April 21, 2023

AI: Sucking Value from Those with Soft Skills

April 21, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read an essay called “Beyond Algorithms: Skills Of Designers That AI Can’t Replicate.” The author has a specific type of expertise. The write up explains that his unique human capabilities cannot be replicated in smart software.

I noted this somewhat poignant passage:

Being designerly takes thinking, feeling, and acting like a designer…. I used the head, heart, and hands approach for transformative sustainability learning (Orr, Sipos, et al.) to organize these designerly skills related to thinking (head), feeling (heart), and doing (hands), and offer ways to practice them.

News flash: Those who can use smart software to cut costs and get good enough outputs don’t understand “designerly.”

I have seen lawyers in meetings perspire when I described methods for identifying relevant sections of information from content sucked in as part of the discovery process. Why memorize Bates number 525 when a computing device provides that information in an explicit form? Zippy zip. The fear, in my experience, is that lawyers often have degrees in history or political science, skipped calculus, and took golf instead of computer science. The same may be said of most knowledge workers.
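The Bates number point above can be reduced to a toy sketch: a small index hands back the stamped page so no humanoid has to memorize it. The documents, the numbering, and the function names below are invented for illustration; real e-discovery platforms do far more than this.

```python
# Toy sketch of Bates-number retrieval. Everything here is hypothetical:
# the three "documents" and the starting number are made up.

def build_bates_index(documents: list[str], start: int = 1) -> dict[int, str]:
    # Assign sequential Bates numbers to pages, as a stamping pass would.
    return {start + i: page for i, page in enumerate(documents)}

index = build_bates_index(
    ["engagement letter", "email thread", "draft contract"], start=524
)

def lookup(bates_number: int) -> str:
    # The machine, not the lawyer, remembers which page carries which stamp.
    return index.get(bates_number, "no such Bates number")
```

Asking the index for number 525 returns the second stamped page; an unknown number returns the fallback string rather than raising an error.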

The idea is that a human has “knowledge value,” a nifty phrase cooked up by Taichi Sakaiya in his MITI-infused book The Knowledge Value Revolution or a History of the Future.

The author of the essay perceives his design skill as having knowledge value. Indeed, his expertise has value to him. However, the evolving world of smart software is not interested in humanoids’ knowledge value. Software is a way to reduce costs and increase efficiency.

The “good enough” facet of the smart software revolution de-values what makes the designer’s skill generate approbation, good looking stuff, and cash.

No more. The AI boomlet eliminates the need to pay in time and resources for what a human with expertise can do. As soon as software gets close enough to average, that’s the end of the need for soft excellence. Yes, that means lots of attorneys will have an opportunity to study new things via YouTube videos. Journalists, consultants, and pundits without personality will be kneecapped.

Who will thrive? The answer is in the phrase “the 10X engineer.” The idea is that a person with specific technical skills to create something like an enhancement to AI will be the alpha professional. The vanilla engineer will find himself, herself, or itself sitting in Starbucks watching TikToks.

The present technology elite will break into two segments: The true elite and the serf elite. What’s that mean for today’s professionals who are not coding transformers? Those folks will have a chance to meet new friends when sharing a Starbucks’ table.

Forget creativity. Think cheaper, not better.

Stephen E Arnold, April 21, 2023

Google Panic: Just Three Reasons?

April 20, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read tweets, heard from colleagues, and received articles emailed to me about Googlers’ Bard disgruntlement. In my opinion, Laptop Magazine’s summary captures the gist of the alleged wizard annoyance: “Bard: 3 Reasons Why the Google Staff Hates the New ChatGPT Rival.”

I want to sidestep the word “hate.” With 100,000 or so employees, a hefty chunk of those living in Google Land will love Bard. Other Google staff won’t care because optimizing a cache function for servers in Brazil is a world apart. The result is a squeaky cart with more squeaky wheels than a steam engine built in 1840.

The three trigger points are, according to the write up:

  1. Google Bard outputs that are incorrect. The example provided is that Bard explains how to crash a plane when the Bard user wants to land the aircraft safely. So stupid.
  2. Google (not any employees mind you) is “indifferent to ethical concerns.” The example given references Dr. Timnit Gebru, my favorite Xoogler. I want to point out that Dr. Jeff Dean does not have her on this weekend’s dinner party guest list. So unethical.
  3. Bard is flawed because Google wizards had to work fast. This is the outcome of the sort of bad judgment which has been the hallmark of Google management for some time. Imagine. Work. Fast. Google. So haste makes waste.

I want to point out that there is one big factor influencing Googzilla’s mindless stumbling and snorting. The headline of the Laptop Magazine article presents the primum mobile. Note the buzzword/sign “ChatGPT.”

Google is used to being — well, Googzilla — and now an outfit which uses some Google goodness is in the headline. Furthermore, the headline calls attention to Google falling behind ChatGPT.

Googzilla is used to winning (whether in patent litigation or in front of incredibly brilliant Congressional questioners). Now even Laptop Magazine explains that Google is not getting the blue ribbon in this particular, over-hyped but widely followed race.

That’s the Code Red. That is why the Paris presentation was a hoot. That is why the Sundar and Prabhakar Comedy Tour generates chuckles when jokes include “will,” “working on,” and “coming soon” as part of the routine.

Once again, I am posting this from the 2023 National Cyber Crime Conference. Not one of the examples we present is from Google, its systems, or its assorted innovation / acquisition units.

Googzilla for some is not in the race. And if the company is in the ChatGPT race, Googzilla has yet to cross the finish line.

That’s the Code Red. No PR, no Microsoft marketing tsunami, and no love for what may be a creature caught in a heavy winter storm. Cold, dark, and sluggish.

Stephen E Arnold, April 20, 2023

The Google “Will” Means We Are Not Lagging Behind ChatGPT: The Coding Angle

April 20, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read another easily-spotted Google smart software PR initiative. Google’s professionals apparently ignore the insights of the luminary Jason Calacanis. In his “The Rise of AutoGPT and AI Anxieties,” available absolutely anywhere the energetic Mr. Calacanis can post the content, a glimpse of the Google anxiety is explained. One of Mr. Calacanis’ BFFs points out that companies with good AI use the AI to make more and better AI. The result is that those who plan, anticipate, and promise great AI products and services cannot catch up to those who are using AI to super-charge their engineers. (I refuse to use the phrase 10X engineer because it is little more than a way to say, “Smart engineers are now becoming 5X or 10X engineers.”) The idea is that “wills” and “soon” are flashing messages that say, “We are now behind. We will never catch up.”

I thought about the Thursday, April 13, 2023, extravaganza when I read “DeepMind Says Its New AI Coding Engine Is As Good As an Average Human Programmer.” The entire write up is one propeller-driven Piper Cub skywriting messages about the future. I quote:

DeepMind has created an AI system named AlphaCode that it says “writes computer programs at a competitive level.” The Alphabet subsidiary tested its system against coding challenges used in human competitions and found that its program achieved an “estimated rank” placing it within the top 54 percent of human coders. The result is a significant step forward for autonomous coding, says DeepMind, though AlphaCode’s skills are not necessarily representative of the sort of programming tasks faced by the average coder.

Mr. Calacanis and his BFFs were not talking about basic coding as the future. Their focus was on autonomous AI which can string together sequences of tasks. The angle in my lingo is “meta AI”; that is, instead of a single smart query answered by a single smart system, the instructions in natural language would be parsed by a meta-AI which would pull back separate responses, integrate them, and perform the desired task.
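The “meta AI” idea in the paragraph above (one natural-language instruction parsed, fanned out to separate smart systems, and the separate responses integrated into one result) can be illustrated with a toy sketch. The routing keywords, the stand-in “models,” and the function names are my invented assumptions, not any real product’s API.

```python
# Hypothetical sketch of a "meta AI" coordinator: parse a request,
# fan out to single-purpose systems, then integrate the responses.

def summarizer(task: str) -> str:
    # Stand-in for a summarization model.
    return f"summary of: {task}"

def coder(task: str) -> str:
    # Stand-in for a code-generation model.
    return f"code for: {task}"

MODELS = {"summarize": summarizer, "code": coder}

def route(request: str) -> list[str]:
    # Trivial stand-in for the parsing step: pick sub-tasks by keyword.
    return [name for name in MODELS if name in request]

def meta_ai(request: str) -> str:
    # Fan out to each selected system, then integrate the partial answers.
    parts = [MODELS[name](request) for name in route(request)]
    return " | ".join(parts) if parts else "no model matched"
```

The point of the sketch is the shape, not the smarts: the coordinator owns the parsing and the integration, while the individual systems each answer only their own slice of the request.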

What’s Google’s PR team pushing? Competitive programming.

Code Red? Yeah, that’s the here and now. The reality is that Google is in “will” mode. Imagine for a moment that Mr. Calacanis and his BFFs are correct. What’s that mean for Google? Will Google catch up with “will”?

Stephen E Arnold, April 20, 2023

AI Legislation: Can the US Regulate What It Does Not Understand Like a Dull Normal Student?

April 20, 2023

I read an essay by publishing and technology luminary Tim O’Reilly. If you don’t know the individual, you may recognize the distinctive animal art used on many of his books, including what I call the parrot book.

The essay to which I referred in the first sentence of this post is “You Can’t Regulate What You Don’t Understand.” The subtitle of the write up is “Or, Why AI Regulations Should Begin with Mandated Disclosures.” The idea is an interesting one.

Here’s a passage I found worth circling:

But if we are to create GAAP for AI, there is a lesson to be learned from the evolution of GAAP itself. The systems of accounting that we take for granted today and use to hold companies accountable were originally developed by medieval merchants for their own use. They were not imposed from without, but were adopted because they allowed merchants to track and manage their own trading ventures. They are universally used by businesses today for the same reason.

The idea is that those without first hand knowledge of something cannot make effective regulations.

The essay makes it clear that government regulators may be better off:

formalizing and requiring detailed disclosure about the measurement and control methods already used by those developing and operating advanced AI systems. [Emphasis in the original.]

The essay states:

Companies creating advanced AI should work together to formulate a comprehensive set of operating metrics that can be reported regularly and consistently to regulators and the public, as well as a process for updating those metrics as new best practices emerge.

The conclusion is warranted by the arguments offered in the essay:

We shouldn’t wait to regulate these systems until they have run amok. But nor should regulators overreact to AI alarmism in the press. Regulations should first focus on disclosure of current monitoring and best practices. In that way, companies, regulators, and guardians of the public interest can learn together how these systems work, how best they can be managed, and what the systemic risks really might be.

My thought is that it may be useful to look at what generalities and self-regulation deliver in real life. As examples, I would point out:

  1. The report “Independent Oversight of the Auditing Professionals: Lessons from US History.” To keep it short and sweet: Self regulation has failed. I will leave you to work through the somewhat academic argument. I have burrowed through the document and largely agree with the conclusion.
  2. The US Securities & Exchange Commission’s decision to accept $1.1 billion in penalties as a result of 16 Wall Street firms’ failure to comply with record keeping requirements.
  3. The hollowness of the points set forth in “The Role of Self-Regulation in the Cryptocurrency Industry: Where Do We Go from Here?” in the wake of the Sam Bankman-Fried FTX problem.
  4. The MBA-infused “ethical compass” of outfits operating with a McKinsey-type of pivot point.

My view is that the potential payoff from pushing forward with smart software is sufficient incentive to create a Wild West, anything-goes environment. Those companies with the most to gain and the resources to win at any cost can overwhelm US government professionals with flights of legal eagles.

With innovations in smart software arriving quickly, possibly as quickly as new Web pages in the early days of the Internet, firms that don’t move quickly, act expediently, and push toward autonomous artificial intelligence will be unable to catch up with firms who move with alacrity.

Net net: No regulation, imposed or self-generated, will alter the rocket launch of new services. The US economy is not set up to encourage snail-speed innovation. The objective is met by generating money. Money, not guard rails, common sense, or actions which harm a company’s self interest, makes the system work… for some. Losers are the exhaust from an economic machine. One doesn’t drive a Model T Ford. Today those who can, drive a Tesla Plaid or a McLaren. The “pet” is a French bulldog, not a parrot.

Stephen E Arnold, April 20, 2023

Italy Has an Interesting Idea Similar to Stromboli with Fried Flying Termites Perhaps?

April 19, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Bureaucratic thought processes are amusing, not as amusing as Google’s Paris demonstration of Bard, but darned close. I spotted one example of what seems so darned easy but may be as tough as getting 15th century Jesuits to embrace the concept of infinity. In short, mandating is different from doing.

“Italy Says ChatGPT Must Allow Users to Correct Inaccurate Personal Information” reports in prose which may or may not have been written by smart software. I noted this passage about “rights”:

[such as] allowing users and non-users of ChatGPT to object to having their data processed by OpenAI and letting them correct false or inaccurate information about them generated by ChatGPT…

Does anyone recall the Google right-to-remove capability? The issue was blocking data, not making a determination if the information was “accurate.”

In one of my lectures at the 2023 US National Cyber Crime Conference I discuss with examples the issue of determining “accuracy.” My audience consists of government professionals who have resources to determine accuracy. I will point out that accuracy is a slippery fish.

The other issue is getting whiz bang Sillycon Valley hot stuff companies to implement reliable, stable procedures. Most of these outfits operate with Philz coffee in mind, becoming a rock star at a specialist conference, or the future ownership of a next generation Italian super car. Listening to Italian bureaucrats is not a key part of their thinking.

How will this play out? Hearings, legal proceedings, and then a shrug of the shoulders.

Stephen E Arnold, April 19, 2023

Business Baloney: Wowza, Google Management Is on the Ball

April 19, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Google CEO Sundar Pichai Broke the Rules on OKRs. Why It Worked.” I looked at this story in Inc. Magazine because Google has managed to mire itself in deep mud since Mr. Pichai (one half of the Sundar and Prabhakar Comedy Act) got top billing. Sucking the exhaust of the Microsoft marketing four-wheel drives strikes me as somewhat dispiriting.


Scribble Diffusion’s imagineering of a Google management meeting with slide rules, computing devices, and management wisdom. Art generated by smart software.

I will enumerate a few of these quicksand filled voids after I pull out two comments from the rather wild and woolly story which is infused with MBA think.

I noted this comment:

…in 2019, Pichai cut out quarterly OKRs altogether, choosing to focus solely on annual OKRs with quarterly progress reports. Pichai’s move might have gone against conventional OKR wisdom, but it made sense because _Google was no longer in startup mode._ [Editor’s note: The weird underscores are supposed to make my eyes perk up and my mind turn from TikTok to the pearls of wisdom in the statement “Google was no longer in startup mode.” Since I count Google as existing since Backrub, when Mr. Pichai took the stage, the company was 20 years old. Yep, two decades.]

Here’s another quote to note from the Inc. article:

Take shortcuts and do what you need to do to keep things afloat. [Editor’s Note: The article does not mention the foundation short cuts at the GOOG; specifically, [a] the appropriation of some systems and methods from a company to which Google paid before its IPO about a billion dollars in cash and other considerations and [b] a focused effort to implement via acquisitions and staff work a method designed to make sure that buyers and sellers of advertising both paid Google whenever an advertising transaction took place.]

Now the fruits of Mr. Pichai’s management approach:

  1. Personnel decisions which sparked interest in stochastic parrots, protests, staff walk outs, and the exciting litigation related to staff reductions. Definitely excellent management from the perspective of taking shortcuts.
  2. Triggering a massive loss in corporate value when the Google smart software displayed its dumbness. Remember this goof emerged from the company which awarded itself quantum supremacy and beat a humanoid Go player into international embarrassment.
  3. Management behavior — yep, personal behavior — which caused one Googler to try to terminate her life, not a balky Chrome instance; death by heroin on a yacht in the presence of a specialized contractor who rendered personal services; and the fathering of a Googler-to-be within the company’s legal department. Classy, classy.

What about the article? From my point of view, it presents what I would call baloney. I think there are some interesting stories to write about Google; for example, the link between IBM Almaden’s CLEVER system and the Google relevance method, the company’s inability to generate substantive alternative revenue streams, and the mystery acquisitions like Transformic Inc., which few know or care about. There’s even a personal interest story to be written about the interesting interpersonal dynamics at DeepMind, the outfit that is light years ahead of the world in smart software.

But, no. We learn about management brilliance. For those of you familiar with my idiosyncratic lingo, I conceptualize Google’s approach to running its business as a high school science club trying to organize a dance party.

Stephen E Arnold, April 19, 2023

SenseChat: Better Than TikTok?

April 18, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

In the midst of “chat”-ter about smart software, the Middle Kingdom shifts into babble mode. “Meet SenseChat, China’s Latest Answer to ChatGPT” is an interesting report. Of course, I believe everything I read on the Internet. Others may be more skeptical. To those Doubting Thomasinas I say, “Get with the program.”

The article reports with the solemnity of an MBA quoting from Sunzi or Sun-Tzu (what does a person unable to make sense of ideographs know?):

…SenseChat could tell a story about a cat catching fish, with multiple rounds of questions and responses.

And what else? The write up reported:

… the bot could help with writing computer code, taking in layman-level questions in English or Chinese and then translating them into a workable product.

SenseTime, the company which appears to “own” the technology, is, according to the write up:

best known as a leader in computer vision.

Who is funding SenseTime? Perhaps Alibaba, the dragon with the clipped wings and docked tail. The company is on the US sanctions list. Investors in the US? Chinese government entities?

The write up suggests that SenseTime is resource intensive. How will the Chinese company satiate its thirst for computing power? The article “China’s Loongson Unveils 32 Core CPU, Reportedly 4X Faster Than Arm Chip” implies that China’s push to be AMD, Intel, and Qualcomm free is stumbling forward.

But where did the surveillance savvy SenseTime technology originate? The answer is the labs and dorms at the Massachusetts Institute of Technology. Tang Xiao’ou, an MIT alum, started the company in 2014. Where does SenseTime operate? From a store front in Cambridge, Massachusetts, or a shabby building on Route 128? Nope. The company labors away in the Miami Beach of the Pacific Rim, Pudong, Shanghai.

Several observations:

  1. Chinese developers, particularly entities involved with the government of the Middle Kingdom, are unlikely to respond to letters signed by US luminaries.
  2. The software is likely to include a number of interesting features, possibly like those on one of the Chinese branded mobiles I once owned, which sent data to Singapore data centers and then to other servers in a nearby country. That cloud interaction is a wonderful innovation for some, in my opinion.
  3. Will individuals be able to determine what content was output by SenseTime-type systems?

That last question is an interesting one, isn’t it?

Stephen E Arnold, April 18, 2023
