AI: Sucking Value from Those with Soft Skills

April 21, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read an essay called “Beyond Algorithms: Skills Of Designers That AI Can’t Replicate.” The author has a specific type of expertise. The write up explains that his unique human capabilities cannot be replicated in smart software.

I noted this somewhat poignant passage:

Being designerly takes thinking, feeling, and acting like a designer…. I used the head, heart, and hands approach for transformative sustainability learning (Orr, Sipos, et al.) to organize these designerly skills related to thinking (head), feeling (heart), and doing (hands), and offer ways to practice them.

News flash: Those who can use smart software to cut costs and get good enough outputs don’t understand “designerly.”

I have seen lawyers in meetings perspire when I described methods for identifying relevant sections of information from content sucked in as part of the discovery process. Why memorize Bates number 525 when a computing device provides that information in an explicit form? Zippy zip. The fear, in my experience, is that lawyers often have degrees in history or political science, skipped calculus, and took golf instead of computer science. The same may be said of most knowledge workers.

The idea is that a human has “knowledge value,” a nifty phrase cooked up by Taichi Sakaiya in his MITI-infused book The Knowledge Value Revolution or a History of the Future.

The author of the essay perceives his designing skill as having knowledge value. Indeed his expertise has value to himself. However, the evolving world of smart software is not interested in humanoids’ knowledge value. Software is a way to reduce costs and increase efficiency.

The “good enough” facet of the smart software revolution de-values what makes the designer’s skill generate approbation, good looking stuff, and cash.

No more. The AI boomlet eliminates the need to pay in time and resources for what a human with expertise can do. As soon as software gets close enough to average, that’s the end of the need for soft excellence. Yes, that means lots of attorneys will have an opportunity to study new things via YouTube videos. Journalists, consultants, and pundits without personality will be kneecapped.

Who will thrive? The answer is in the phrase “the 10X engineer.” The idea is that a person with specific technical skills to create something like an enhancement to AI will be the alpha professional.  The vanilla engineer will find himself, herself, or itself sitting in Starbucks watching TikToks.

The present technology elite will break into two segments: The true elite and the serf elite. What’s that mean for today’s professionals who are not coding transformers? Those folks will have a chance to meet new friends when sharing a Starbucks’ table.

Forget creativity. Think cheaper, not better.

Stephen E Arnold, April 21, 2023

The Google “Will” Means We Are Not Lagging Behind ChatGPT: The Coding Angle

April 20, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read another easily-spotted Google smart software PR initiative. Google’s professionals apparently ignore the insights of the luminary Jason Calacanis. In his “The Rise of AutoGPT and AI Anxieties,” available absolutely anywhere the energetic Mr. Calacanis can post the content, a glimpse of the Google anxiety is explained. One of Mr. Calacanis’ BFFs points out that companies with good AI use the AI to make more and better AI. The result is that those who plan, anticipate, and promise great AI products and services cannot catch up to those who are using AI to super-charge their engineers. (I refuse to use the phrase 10X engineer because it is little more than a way to say, “Smart engineers are now becoming 5X or 10X engineers.”) The idea is that “wills” and “soon” are flashing messages that say, “We are now behind. We will never catch up.”

I thought about the Thursday, April 13, 2023, extravaganza when I read “DeepMind Says Its New AI Coding Engine Is As Good As an Average Human Programmer.” The entire write up is one propeller driven Piper Cub skywriting messages about the future. I quote:

DeepMind has created an AI system named AlphaCode that it says “writes computer programs at a competitive level.” The Alphabet subsidiary tested its system against coding challenges used in human competitions and found that its program achieved an “estimated rank” placing it within the top 54 percent of human coders. The result is a significant step forward for autonomous coding, says DeepMind, though AlphaCode’s skills are not necessarily representative of the sort of programming tasks faced by the average coder.

Mr. Calacanis and his BFFs were not talking about basic coding as the future. Their focus was on autonomous AI which can string together sequences of tasks. The angle in my lingo is “meta AI”; that is, instead of a single smart query answered by a single smart system, the instructions in natural language would be parsed by a meta-AI which would pull back separate responses, integrate them, and perform the desired task.
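A minimal sketch of that meta-AI pattern, written in Python with entirely hypothetical function names and no particular AI service assumed, might look like the following. The point is the orchestration loop, not any single model call: the meta layer plans, fans out, and reassembles.

import json

def call_model(prompt: str) -> str:
    # Placeholder for any single smart system (an LLM API, a local model, etc.).
    raise NotImplementedError("Wire this to whatever model is actually available.")

def meta_ai(instruction: str) -> str:
    # 1. Ask one model to break the natural language instruction into sub-tasks.
    plan = json.loads(call_model(
        "Split this request into a JSON list of sub-task strings: " + instruction
    ))
    # 2. Pull back a separate response for each sub-task.
    partial_answers = [call_model(task) for task in plan]
    # 3. Integrate the separate responses into one result for the desired task.
    return call_model(
        "Combine these partial answers into one coherent answer:\n" + "\n".join(partial_answers)
    )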

What’s Google’s PR team pushing? Competitive programming.

Code Red? Yeah, that’s the here and now. The reality is that Google is in “will” mode. Imagine for a moment that Mr. Calacanis and his BFFs are correct. What’s that mean for Google? Will Google catch up with “will”?

Stephen E Arnold, April 20, 2023

Google Panic: Just Three Reasons?

April 20, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read tweets, heard from colleagues, and received articles emailed to me about Googlers’ Bard disgruntlement. In my opinion, Laptop Magazine’s summary captures the gist of the alleged wizard annoyance: “Bard: 3 Reasons Why the Google Staff Hates the New ChatGPT Rival.”

I want to sidestep the word “hate.” With 100,000 or so employees, a hefty chunk of those living in Google Land will love Bard. Other Google staff won’t care because optimizing a cache function for servers in Brazil is a world apart. The result is a squeaky cart with more squeaky wheels than a steam engine built in 1840.

The three trigger points are, according to the write up:

  1. Google Bard outputs that are incorrect. The example provided is that Bard explains how to crash a plane when the Bard user wants to land the aircraft safely. So stupid.
  2. Google (not any employees mind you) is “indifferent to ethical concerns.” The example given references Dr. Timnit Gebru, my favorite Xoogler. I want to point out that Dr. Jeff Dean does not have her on this weekend’s dinner party guest list. So unethical.
  3. Bard is flawed because Google wizards had to work fast. This is the outcome of the sort of bad judgment which has been the hallmark of Google management for some time. Imagine. Work. Fast. Google. So haste makes waste.

I want to point out that there is one big factor influencing Googzilla’s mindless stumbling and snorting. The headline of the Laptop Magazine article presents the primum mobile. Note the buzzword/sign “ChatGPT.”

Google is used to being — well, Googzilla — and now an outfit which uses some Google goodness is in the headline. Furthermore, the headline calls attention to Google falling behind ChatGPT.

Googzilla is used to winning (whether in patent litigation or in front of incredibly brilliant Congressional questioners). Now even Laptop Magazine explains that Google is not getting the blue ribbon in this particular, over-hyped but widely followed race.

That’s the Code Red. That is why the Paris presentation was a hoot. That is why the Sundar and Prabhakar Comedy Tour generates chuckles when jokes include “will,” “working on,” “coming soon”  as part of the routine.

Once again, I am posting this from the 2023 National Cyber Crime Conference. Not one of the examples we present is from Google, its systems, or its assorted innovation / acquisition units.

Googzilla for some is not in the race. And if the company is in the ChatGPT race, Googzilla has yet to cross the finish line.

That’s the Code Red. No PR, no Microsoft marketing tsunami, and no love for what may be a creature caught in a heavy winter storm. Cold, dark, and sluggish.

Stephen E Arnold, April 20, 2023

AI Legislation: Can the US Regulate What It Does Understand Like a Dull Normal Student?

April 20, 2023

I read an essay by publishing and technology luminary Tim O’Reilly. If you don’t know the individual, you may recognize the distinctive art used on many of his books. Here’s what I call the parrot book’s cover:

image

You can get a copy at this link.

The essay to which I referred in the first sentence of this post is “You Can’t Regulate What You Don’t Understand.” The subtitle of the write up is “Or, Why AI Regulations Should Begin with Mandated Disclosures.” The idea is an interesting one.

Here’s a passage I found worth circling:

But if we are to create GAAP for AI, there is a lesson to be learned from the evolution of GAAP itself. The systems of accounting that we take for granted today and use to hold companies accountable were originally developed by medieval merchants for their own use. They were not imposed from without, but were adopted because they allowed merchants to track and manage their own trading ventures. They are universally used by businesses today for the same reason.

The idea is that those without first hand knowledge of something cannot make effective regulations.

The essay makes it clear that government regulators may be better off:

formalizing and requiring detailed disclosure about the measurement and control methods already used by those developing and operating advanced AI systems. [Emphasis in the original.]

The essay states:

Companies creating advanced AI should work together to formulate a comprehensive set of operating metrics that can be reported regularly and consistently to regulators and the public, as well as a process for updating those metrics as new best practices emerge.

The conclusion is warranted by the arguments offered in the essay:

We shouldn’t wait to regulate these systems until they have run amok. But nor should regulators overreact to AI alarmism in the press. Regulations should first focus on disclosure of current monitoring and best practices. In that way, companies, regulators, and guardians of the public interest can learn together how these systems work, how best they can be managed, and what the systemic risks really might be.

My thought is that it may be useful to look at what generalities and self-regulation deliver in real life. As examples, I would point out:

  1. The report “Independent Oversight of the Auditing Professionals: Lessons from US History.” To keep it short and sweet: Self regulation has failed. I will leave you to work through the somewhat academic argument. I have burrowed through the document and largely agree with the conclusion.
  2. The US Securities & Exchange Commission’s decision to accept $1.1 billion in penalties as a result of 16 Wall Street firms’ failure to comply with record keeping requirements.
  3. The hollowness of the points set forth in “The Role of Self-Regulation in the Cryptocurrency Industry: Where Do We Go from Here?” in the wake of the Sam Bankman-Fried FTX problem.
  4. The MBA-infused “ethical compass” of outfits operating with a McKinsey-type of pivot point?

My view is that the potential payoff from pushing forward with smart software is sufficient incentive to create a Wild West, anything-goes environment. Those companies with the most to gain and the resources to win at any cost can overwhelm US government professionals with flights of legal eagles.

With innovations in smart software arriving quickly, possibly as quickly as new Web pages in the early days of the Internet, firms that don’t move quickly, act expediently, and push toward autonomous artificial intelligence will be unable to catch up with firms who move with alacrity.

Net net: No regulation, imposed or self-generated, will alter the rocket launch of new services. The US economy is not set up to encourage snail-speed innovation. The objective is met by generating money. Money, not guard rails, common sense, or actions which harm a company’s self interest, makes the system work… for some. Losers are the exhaust from an economic machine. One doesn’t drive a Model T Ford. Today those who can, drive a Tesla Plaid or a McLaren. The “pet” is a French bulldog, not a parrot.

Stephen E Arnold, April 20, 2023

Italy Has an Interesting Idea Similar to Stromboli with Fried Flying Termites Perhaps?

April 19, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Bureaucratic thought processes are amusing, not as amusing as Google’s Paris demonstration of Bard, but darned close. I spotted one example of what seems so darned easy but may be as tough as getting 15th century Jesuits to embrace the concept of infinity. In short, mandating is different from doing.

“Italy Says ChatGPT Must Allow Users to Correct Inaccurate Personal Information” reports in prose which may or may not have been written by smart software. I noted this passage about “rights”:

[such as] allowing users and non-users of ChatGPT to object to having their data processed by OpenAI and letting them correct false or inaccurate information about them generated by ChatGPT…

Does anyone recall the Google right-to-remove capability? The issue was blocking data, not making a determination about whether the information was “accurate.”

In one of my lectures at the 2023 US National Cyber Crime Conference I discuss with examples the issue of determining “accuracy.” My audience consists of government professionals who have resources to determine accuracy. I will point out that accuracy is a slippery fish.

The other issue is getting whiz bang Sillycon Valley hot stuff companies to implement reliable, stable procedures. Most of these outfits operate with Philz coffee in mind, becoming a rock star at a specialist conference, or the future owner of a next generation Italian super car. Listening to Italian bureaucrats is not a key part of their Italian thinking.

How will this play out? Hearings, legal proceedings, and then a shrug of the shoulders.

Stephen E Arnold, April 19, 2023

SenseChat: Better Than TikTok?

April 18, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

In the midst of “chat”-ter about smart software, the Middle Kingdom shifts into babble mode. “Meet SenseChat, China’s Latest Answer to ChatGPT” is an interesting report. Of course, I believe everything I read on the Internet. Others may be more skeptical. To those Doubting Thomasinas I say, “Get with the program.”

The article reports with the solemnity of an MBA quoting from Sunzi or Sun-Tzu (what does a person unable to make sense of ideographs know?):

…SenseChat could tell a story about a cat catching fish, with multiple rounds of questions and responses.

And what else? The write up reported:

… the bot could help with writing computer code, taking in layman-level questions in English or Chinese and then translating them into a workable product.

SenseTime, the company which appears to “own” the technology is, according to the write up:

best known as a leader in computer vision.

Who is funding SenseTime? Perhaps Alibaba, the dragon with the clipped wings and docked tail. The company is on the US sanctions list. Investors in the US? Chinese government entities?

The write up suggests that SenseTime is resource intensive. How will the Chinese company satiate its thirst for computing power? The article “China’s Loongson Unveils 32 Core CPU, Reportedly 4X Faster Than Arm Chip” implies that China’s push to be AMD, Intel, and Qualcomm free is stumbling forward.

But where did the surveillance savvy SenseTime technology originate? The answer is the labs and dorms at the Massachusetts Institute of Technology. Tang Xiao’ou started the company in 2014. Where does SenseTime operate? From a store front in Cambridge, Massachusetts, or a shabby building on Route 128? Nope. The MIT student labors away in the Miami Beach of the Pacific Rim, Pudong, Shanghai.

Several observations:

  1. Chinese developers, particularly entities involved with the government of the Middle Kingdom, are unlikely to respond to letters signed by US luminaries.
  2. The software is likely to include a number of interesting features, possibly like those on one of the Chinese branded mobiles I once owned which sent data to Singapore data centers and then to other servers in a nearby country. That cloud interaction is a wonderful innovation for some in my opinion.
  3. Will individuals be able to determine what content was output by SenseTime-type systems?

That last question is an interesting one, isn’t it?

Stephen E Arnold, April 18, 2023

Big Wizards Discover What Some Autonomy Users Knew 30 Years Ago. Remarkable, Is It Not?

April 14, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

What happens if one assembles a corpus, feeds it into a smart software system, and turns it on after some tuning and optimizing for search or a related process like indexing? After a few months, the precision and recall of the system degrade. What’s the fix? Easy. Assemble a corpus. Feed it into the smart software system. Turn it on after some tuning and optimizing. The approach works and would keep the Autonomy neuro-linguistic programming system working quite well.

Not only was Autonomy ahead of the information retrieval game in the late 1990s, but I have also made the case that its approach was one of the enablers for the smart software in use today at outfits like BAE Systems.

There were a couple of drawbacks with the Autonomy approach. The principal one was the expensive and time intensive job of assembling a training corpus. The narrower the domain, the easier this was. The broader the domain — for instance, general business information — the more resource intensive the work became.

The second drawback was that as new content was fed into the black box, the internals recalibrated to accommodate new words and phrases. Because the initial training set did not know about these words and phrases, the precision and recall from the point of view of the user would degrade. From the engineering point of view, the Autonomy system was behaving in a known, predictable manner. The drawback was that users did not understand what I call “drift”, and the licensees’ accountants did not want to pay for the periodic and time-consuming retraining.

What’s changed since the late 1990s? First, there are methods — not entirely satisfactory from my point of view — like the Snorkel-type approach. A system is trained once and then it uses methods that do retraining without expensive subject matter experts and massive time investments. The second method is the use of ChatGPT-type approaches which get trained on large volumes of content, not the comparatively small training sets feasible decades ago.
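To make the Snorkel-type idea concrete, here is a tiny illustrative Python sketch, with made-up keyword rules rather than anything a production system would ship: a few cheap labeling functions vote on each document, and the majority vote becomes a weak training label, so fresh training data can be produced without a subject matter expert tagging every item.

# Illustrative only: three crude labeling functions vote on whether a document
# is about finance; the majority of non-abstaining votes becomes a weak label.
ABSTAIN, FINANCE, OTHER = -1, 1, 0

def lf_mentions_earnings(doc):
    return FINANCE if "earnings" in doc.lower() else ABSTAIN

def lf_mentions_quarterly(doc):
    return FINANCE if "quarterly report" in doc.lower() else ABSTAIN

def lf_mentions_recipe(doc):
    return OTHER if "recipe" in doc.lower() else ABSTAIN

def weak_label(doc):
    votes = [lf(doc) for lf in (lf_mentions_earnings, lf_mentions_quarterly, lf_mentions_recipe)]
    votes = [v for v in votes if v != ABSTAIN]
    if not votes:
        return ABSTAIN
    return max(set(votes), key=votes.count)  # simple majority instead of Snorkel's label model

docs = ["Quarterly report shows earnings growth", "A recipe for stromboli"]
print([weak_label(d) for d in docs])  # -> [1, 0]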

Are there “drift” issues with today’s whiz bang methods?

Yep. For supporting evidence, navigate to “91% of ML Models Degrade in Time.” The write up describes what big brains at “MIT, Harvard, The University of Monterrey, and other top institutions” learned about model degradation. On one hand, that’s good news. A bit of accuracy about magic software is helpful. On the other hand, the failure of big brain institutions to note the problem sooner and then look into it is troubling. I am not going to discuss why experts don’t know what high profile advanced systems actually do. I have done that elsewhere in my monographs and articles.
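A back-of-the-envelope way to watch for the drift the study documents is to score the model against a fresh, labeled sample on a schedule and flag when the metric slips below the level measured at deployment. A minimal Python sketch, with an assumed predict function and an invented tolerance value, follows.

from statistics import mean

def accuracy(model_predict, labeled_batch):
    # labeled_batch: list of (input, expected_output) pairs collected recently.
    return mean(1.0 if model_predict(x) == y else 0.0 for x, y in labeled_batch)

def check_for_drift(model_predict, baseline_accuracy, fresh_batch, tolerance=0.05):
    # Compare accuracy on fresh data with the accuracy measured at deployment time.
    current = accuracy(model_predict, fresh_batch)
    if current < baseline_accuracy - tolerance:
        print(f"Drift detected: {current:.2f} vs. baseline {baseline_accuracy:.2f}; time to retrain.")
    else:
        print(f"Model holding at {current:.2f}; no retraining needed yet.")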

I found this “explanatory diagram” in the write up interesting:

image

What was the authors’ conclusion other than not knowing what was common knowledge among Autonomy-type system users in the 1990s?

You need to retrain the model! You need to embrace low cost Snorkel-type methods for building training data! You have to know what subject matter experts know even though SMEs are an endangered species!

I am glad I am old and heading into what Dylan Thomas called “that good night.” Why? The “drift” is just one obvious characteristic. There are other, more sinister issues just around the corner.

Stephen E Arnold, April 14, 2023

Sequoia on AI: Is The Essay an Example of What Informed Analysis Will Be in the Future?

April 10, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read an essay produced by the famed investment outfit Sequoia. Its title:  “Generative AI: A Creative New World.” The write up contains buzzwords, charts, a modern version of a list, and this fascinating statement:

This piece was co-written with GPT-3. GPT-3 did not spit out the entire article, but it was responsible for combating writer’s block, generating entire sentences and paragraphs of text, and brainstorming different use cases for generative AI. Writing this piece with GPT-3 was a nice taste of the human-computer co-creation interactions that may form the new normal. We also generated illustrations for this post with Midjourney, which was SO MUCH FUN!

I loved the capital letters and the exclamation mark. Does smart software do that in its outputs?

I noted one other passage which caught my attention; to wit:

The best Generative AI companies can generate a sustainable competitive advantage by executing relentlessly on the flywheel between user engagement/data and model performance.

I understand “relentlessly.” To be honest, I don’t know about a “sustainable competitive advantage” or user engagement/data model performance. I do understand the Amazon flywheel, but my understanding is that it is slowing and maybe wobbling a bit.

My take on the passage in purple as in purple prose is that “best” AI depends not on accuracy, lack of bias, or transparency. Success comes from users and how well the system performs. “Perform” is ambiguous. My hunch is that the Sequoia smart software (only version 3) and the super smart Sequoia humanoids were struggling to express why a venture firm is having “fun” with a bit of B-school teaming — money.

The word “money” does not appear in the write up. The phrase “economic value” appears twice in the introduction to the essay. No reference to “payoff.” No reference to “exit strategy.” No use of the word “financial.”

Interesting. Exactly how does a money-centric firm write about smart software without focusing on the financial upside in a quite interesting economic environment?

I know why smart software misses the boat. It’s good with deterministic answers for which enough information is available to train the model to produce what seems like coherent answers. Maybe the smart software used by Sequoia was not clued in to the reports about Sequoia’s explanations of its winners and losers? Maybe the version of the smart software is not up to the tough subject on which the Sequoia MBAs sought guidance?

On the other hand, maybe Sequoia did not think through what should be included in a write up by a financial firm interested in generating big payoffs for itself and its partners.

Either way. The essay seems like a class project which is “good enough.” The creative new world lacks the force that through the green fuse drives the cash.

Stephen E Arnold, April 10, 2023

AI Is Not the Only System That Hallucinates

April 7, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I personally love it when software goes off the deep end. From the early days of “Fatal Error” to the more interesting outputs of a black box AI system, the digital comedy road show delights me.

I read “The Call to Halt ‘Dangerous’ AI Research Ignores a Simple Truth,” which reminds me that it is not just software which is subject to synapse wonkiness. Consider this statement from the Wired Magazine story:

… there is no magic button that anyone can press that would halt “dangerous” AI research while allowing only the “safe” kind.

Yep, no magic button. No kidding. We have decades of experience with US big technology companies’ behavior to make clear exactly the trajectory of new methods.

I love this statement from Wired Magazine no less:

Instead of halting research, we need to improve transparency and accountability while developing guidelines around the deployment of AI systems. Policy, research, and user-led initiatives along these lines have existed for decades in different sectors, and we already have concrete proposals to work with to address the present risks of AI.

Wired was one of the cheerleaders when it fired up its unreadable pink text with orange headlines in 1993 as I recall. The cheerleading was loud and repetitive.

I would suggest that “simple truth” is in short supply. In my experience, big technology savvy companies will do whatever they can do to corner a market and generate as much money as possible. Lock in, monopolistic behavior, collusion, and other useful tools are available.

Nice try, Wired. Transparency is good to consider, but big outfits are not in the let-the-sun-shine-in game.

Stephen E Arnold, April 7, 2023


Who Does AI? Academia? Nope. Government Research Centers? Nope. Who Then?

April 7, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Smart software is the domain of commercial enterprises. How would these questions be answered in China? Differently I would suggest.

“AI Is Entering an Era of Corporate Control” cites a report from Stanford University (an institution whose president did some alleged Fancy Dancing in research data) to substantiate the observation. I noted this passage:

The AI Index states that, for many years, academia led the way in developing state-of-the-art AI systems, but industry has now firmly taken over. “In 2022, there were 32 significant industry-produced machine learning models compared to just three produced by academia…

Interesting. Innovation, however, seems to have drained from the Ivory Towers (now in the student loan business) and Federal research labs (now marketing their achievements to obtain more US government funding). These two slices of smart people are not performing when it comes to smart software.

The source article does not dwell on these innovation laggards. Instead I learn that AI investment is decreasing and that running AI models kills whales and snail darters.

For me, the main issue is, “Why is there a paucity of smart software in US universities and national laboratories? Heck, let’s toss in DARPA too.” I think it is easy to point to the commercial moves of OpenAI, the marketing of Microsoft, and the foibles of the Sundar and Prabhakar Comedy Show. In fact, the role of big companies is obvious. Was a research report needed? A tweet would have handled the topic for me.

I wonder what structural friction is inhibiting universities and outfits like LANL, ORNL, and Sandia, among others.

Stephen E Arnold, April 7, 2023
