AI: Big Ideas and Bigger Challenges for the Next Quarter Century. Maybe, Maybe Not

February 13, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read an interesting ArXiv.org paper with a good title: “Ten Hard Problems in Artificial Intelligence We Must Get Right.” The topic is one which will interest some policy makers, a number of AI researchers, and the “experts” in machine learning, artificial intelligence, and smart software.

The structure of the paper is, in my opinion, a three-legged stool analysis designed to support the weight of AI optimists. The first part of the paper is a compressed historical review of the AI journey. Diagrams, tables, and charts capture the direction in which AI “deep learning” has traveled. I am no expert in what has become the next big thing, but the surprising point in the historical review is that 2010 is pegged as the start of the run-up to 2016, the point the authors label “the large scale era.” That label is interesting for two reasons. First, I recall that some intelware vendors were in the AI game before 2010. And, second, the use of the phrase “large scale” defines a reality in which small outfits are unlikely to succeed without massive amounts of money.

The second leg of the stool is the identification of the “hard problems” and a discussion of each. Research data and illustrations bring each problem to the reader’s attention. I don’t want to get snagged in the plagiarism swamp which has captured many academics, wives of billionaires, and a few journalists. My approach will be to boil down the 10 problems to a short phrase and a reminder to you, gentle reader, that you should read the paper yourself. Here is my version of the 10 “hard problems” which the authors seem to suggest will be or must be solved in 25 years:

  1. Humans will have extended AI by 2050
  2. Humans will have solved problems associated with AI safety, capability, and output accuracy
  3. AI systems will be safe, controlled, and aligned by 2050
  4. AI will make contributions in many fields; for example, mathematics by 2050
  5. AI’s economic impact will be managed effectively by 2050
  6. Use of AI will be globalized by 2050
  7. AI will be used in a responsible way by 2050
  8. Risks associated with AI will be managed effectively by 2050
  9. Humans will have adapted their institutions to AI by 2050
  10. Humans will have addressed what it means to be “human” by 2050

Many years ago I worked for a blue-chip consulting firm. I participated in a number of big-idea projects. These ranged across technology, R&D investment, new product development, and the global economy. In our for-fee reports we did include a look at what we called the “horizon.” The firm had its own typographical signature for this portion of a report. I recall learning about this in the firm’s “charm school” (a special training program to make sure new hires knew the style, approach, and ground rules for remaining employed at that blue-chip firm). We kept the horizon tight; that is, talking about the future was typically in the six to 12 month range. Nosing out 25 years was a walk into a mine field. My boss, as I recall, told me, “We don’t do science fiction.”


The smart robot is informing the philosopher that he is free to find his future elsewhere. The date of the image is 2025, right before the new year holiday. Thanks, MidJourney. Good enough.

The third leg of the stool is the academic impedimenta. To be specific, the paper is 90 pages in length, of which 30 present the argument. The remaining 60 pages present:

  • Traditional footnotes, about 35 pages containing 607 citations
  • An “Electronic Supplement” presenting eight pages of annexes with text, charts, and graphs
  • Footnotes to the “Electronic Supplement” requiring another 10 pages for the additional 174 footnotes.

I want to offer several observations, and I do not want these to be less than constructive or in any way like the treatment one of my professors received in Letters to the Editor for an article he published about Chaucer. He described that fateful letter as “mean spirited.”

  1. The paper makes clear that mankind has some work to do in the next 25 years. The “problems” the paper presents are difficult ones because they touch upon the fabric of social existence. Consider the application of AI to war. I think this aspect of AI may be one to warrant a bullet on AI’s hit parade.
  2. Humans have to resolve issues of automated systems consuming verifiable information, synthetic data, and purpose-built disinformation so that smart software does not do things at speed and behind the scenes. Do those working to resolve the 10 challenges have an ethical compass, and if so, what does “ethics” mean in the context of at-scale AI?
  3. Social institutions are under stress. A number of organizations and nation-states operate as dictators. One Central American country has a rock star dictator, but what about the rock star dictators running techno feudal companies in the US? What governance structures will be crafted by 2050 to shape today’s technology juggernaut?

To sum up, I think the authors have tackled a difficult problem. I commend their effort. My thought is that anyone delivering a message of optimism about AI will be hard pressed to point to one of the 10 challenges and say, “We have this covered.” I liked the write up. I think college students tasked with writing about the social implications of AI will find the paper useful. It provides much of the research a fresh young mind requires to write a paper, possibly a thesis. For me, the paper is a reminder of the disconnect between applied technology and the appallingly inefficient, convenience-embracing humans who are ensnared in the smart software.

I am a dinobaby, and let me tell you, “I am glad I am old.” With AI struggling with go-fast and regulators waffling about go-slow, humankind has quite a bit of social system tinkering to do by 2050 if the authors of the paper have analyzed AI correctly. Yep, I am delighted I am old, really old.

Stephen E Arnold, February 13, 2024

Scale Fail: Define Scale for Tech Giants, Not Residents of Never Never Land

December 29, 2023

This essay is the work of a dumb dinobaby. No smart software required.

I read “Scale Is a Trap.” The essay presents an interesting point of view, scale from the viewpoint of a resident of Never Never Land. The write up states:

But I’m pretty convinced the reason these sites [Vice, Buzzfeed, and other media outfits] have struggled to meet the moment is because the model under which they were built — eyeballs at all cost, built for social media and Google search results — is no longer functional. We can blame a lot of things for this, such as brand safety and having to work through perhaps the most aggressive commercial gatekeepers that the world has ever seen. But I think the truth is, after seeing how well it worked for the tech industry, we made a bet on scale — and then watched that bet fail over and over again.

The problem is that the focus is on media companies designed to surf on the free megaphones like Twitter and the money from Google’s pre-threat ad programs. 

However, knowledge is tough to scale. The firms which can convert knowledge into what William James called “cash value” charge for professional services. Some content is free like wild and crazy white papers. But the “good stuff” is for paying clients.

Finding enough subscribers who will pay the necessary money to read articles is a difficult business to scale. I find it interesting that Substack is accepting some content sure to attract some interesting readers. How much will these folks pay? Maybe a lot.

But scale in information is not what many clever writers or traditional publishers and authors can do. What happens when a person writes a best seller? The publisher demands more books, and the result? Subsequent books which are not what the original was.

Whom does scale serve? Scale delivers power and payoff to the organizations which can develop products and services that sell to a large number of people who want a deal. Scale at a blue chip consulting firm means selling to the biggest firms and the organizations with the deepest pockets.

But the scale of a McKinsey-type firm is different from the scale at an outfit like Microsoft or Google.

What is the definition of scale for a big outfit? The way I would explain what the technology firms mean when scale is kicked around at an artificial intelligence conference is “big money, big infrastructure, big services, and big brains.” By definition, individuals and smaller firms cannot deliver.

Thus, the notion of appropriate scale means what the cited essay calls a “niche.” The problems and challenges include:

  • Getting the cash to find, cultivate, and grow people who will pay enough to keep the knowledge enterprise afloat
  • Finding other people to create the knowledge value
  • Protecting the idea space from carpetbaggers
  • Remaining relevant because knowledge has a shelf life, and it takes time to grow knowledge or acquire new knowledge.

To sum up, the essay is more about how journalists are going to have to adapt to a changing world. The problem is that scale is a characteristic of the old school publishing outfits which have been ill-suited to the stress of adapting to a rapidly changing world.

Writers are not blue chip consultants. Many just think they are.

Stephen E Arnold, December 29, 2023

A Dinobaby Misses Out on the Hot Searches of 2023

December 28, 2023

This essay is the work of a dumb dinobaby. No smart software required.

I looked at “Year in Search 2023.” I was surprised at how out of the flow of consumer information I was. “Out of the flow” does not fully capture how removed I was from the lists of news topics, dead people, and songs. Do you know much about Bizarrap? I don’t. More to the point, I have never heard of this obviously world-class musician.

Several observations:

First, when people tell me that Google search is great, I have to recalibrate my internal yardsticks to embrace queries for entities unrelated to my microcosm of information. When I assert that Google search sucks, I am looking for information absolutely positively irrelevant to those seeking insight into most of the Google top of the search charts. No wonder Google sucks for me. Google is keeping pace with maps of sports stadia.

Second, as I reviewed these top searches, I asked myself, “What’s the correlation between advertisers’ spend and the results on these lists?” My idea is that a weird quantum linkage exists in a world inhabited by incentivized programmers, advertisers, and the individuals who want information about shirts. Is the game rigged? My hunch is, “Yep.” Spooky action at a distance, I suppose.

Third, substantive topics are rare birds on these lists. Who is looking for information about artificial intelligence, precision and recall in search, or new approaches to solving matrix math problems? The answer, if the Google data are accurate and not a come-on to advertisers, is almost no one.

As a dinobaby, I am going to feel more comfortable in my isolated chamber in a cave of what I find interesting. For 2024, I have steeled myself to exist without any interest in Ginny & Georgia, FIFTY FIFTY, or papeda.

I like being a dinobaby. I really do.

Stephen E Arnold, December 28, 2023

Intel Inference: A CUDA Killer? Some Have Hope

December 15, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Intel is embracing an “inference” approach. Why? Maybe it will irritate fewer legal eagles? Maybe it is a marketing differentiator? Maybe Intel knows how to make probability less of a “problem”?

Brilliant, right? The answers to these questions are supposed to be explained in “Intel CEO Attacks Nvidia on AI: The Entire Industry Is Motivated to Eliminate the CUDA Market.” The Tom’s Hardware report uses the “attack” angle as a hook. Intel is thinking differently. The company has not had the buzz of nVidia or OpenAI. Plus, no horse metaphors.


Marketing professionals explain to engineers what must be designed, tested, and delivered in 2024. The engineers are skeptical. The marketing team is confident that their TikTok strategy will be a winner. Thanks, MSFT Copilot. Good enough.

What’s an inference? According to Bing (the stupid version), inference is “a conclusion reached on the basis of evidence and reasoning.” But in mathematics, inference has a slightly different denotation; to wit, this explanation from Britannica:

Inference, in statistics, the process of drawing conclusions about a parameter one is seeking to measure or estimate. Often scientists have many measurements of an object—say, the mass of an electron—and wish to choose the best measure. One principal approach of statistical inference is Bayesian estimation, which incorporates reasonable expectations or prior judgments (perhaps based on previous studies), as well as new observations or experimental results. Another method is the likelihood approach, in which “prior probabilities” are eschewed in favor of calculating a value of the parameter that would be most “likely” to produce the observed distribution of experimental outcomes. In parametric inference, a particular mathematical form of the distribution function is assumed. Nonparametric inference avoids this assumption and is used to estimate parameter values of an unknown distribution having an unknown functional form.
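The Britannica passage is abstract, so here is a minimal sketch of the contrast it draws. This toy Python example is mine, not from the paper or the article; the measurements, the prior, and the noise level are invented for illustration. It estimates the mean of a few noisy readings two ways: the likelihood approach (no prior) and the Bayesian approach (blending a prior expectation with the data).

```python
# Toy contrast of the two inference approaches described above:
# estimating the mean of noisy measurements. All numbers invented.

def likelihood_estimate(measurements):
    """Maximum-likelihood estimate of a normal mean: the sample average."""
    return sum(measurements) / len(measurements)

def bayesian_estimate(measurements, prior_mean, prior_var, noise_var):
    """Posterior mean for a normal likelihood with a normal prior:
    a precision-weighted blend of prior belief and observed data."""
    n = len(measurements)
    sample_mean = sum(measurements) / n
    prior_precision = 1.0 / prior_var     # weight on the prior belief
    data_precision = n / noise_var        # weight on the observations
    return (prior_precision * prior_mean + data_precision * sample_mean) / (
        prior_precision + data_precision
    )

data = [9.8, 10.2, 10.1, 9.9]
mle = likelihood_estimate(data)  # the sample mean, 10.0
posterior = bayesian_estimate(data, prior_mean=9.0, prior_var=1.0, noise_var=0.4)
# the posterior mean lands between the prior (9.0) and the sample mean (10.0)
```

With even a handful of observations, the data term dominates and the two answers converge; with sparse or noisy data, the prior pulls the Bayesian estimate toward prior expectation, which is exactly the difference the quoted definition is getting at.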

Now what does Tom’s Hardware present as Intel’s vision for its “to be” chips? I have put several segments together for the purposes of my blog post:

"You know, the entire industry is motivated to eliminate the CUDA market.  [Gelsinger, the Intel CEO] said. He cited examples such as MLIR, Google, and OpenAI, suggesting that they are moving to a "Pythonic programming layer" to make AI training more open. "We think of the CUDA moat as shallow and small," Gelsinger went on. "Because the industry is motivated to bring a broader set of technologies for broad training, innovation, data science, et cetera." But Intel isn’t relying just on training. Instead, it thinks inference is the way to go. "As inferencing occurs, hey, once you’ve trained the model… There is no CUDA dependency," Gelsinger continued. "It’s all about, can you run that model well?"

CUDA is definitely the target. “CUDA” refers to nVidia’s parallel computing platform and programming model … With more than 20 million downloads to date, CUDA helps developers speed up their applications by harnessing the power of GPU accelerators.

Tom’s Hardware raises a question:

It’s a bold strategy, and Gelsinger appeared confident as he led his team through presentations today. Can he truly take on CUDA? Only time will tell as applications for the chips Intel launched today — and that his competitors are also working on — become more widespread.

Of course. With content marketing, PR professionals, and a definite need to generate some buzz in an OpenAI-dominated world, Intel will be capturing some attention. The hard part will be making sufficiently robust sales to show that an ageing company still can compete.

Stephen E Arnold, December 15, 2023

Does the NY Times Want Celebrity Journalists?

September 28, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “In the AI Age, The New York Times Wants Reporters to Tell Readers Who They Are.” The subtitle is definitely Google pet food:

The paper is rolling out enhanced bios as “part of our larger mission to be more transparent,” says the Times’ Edmund Lee, and “as generative AI begins to creep into the media landscape.”

The write up states:

The idea behind the “enhanced bios,” as they are being called, is to put more of a face and a name to reporters, so as to foster greater trust with readers and, as more news elsewhere is written by generative AI, emphasize the paper’s human-led reporting.


“I am sorry, mom. I cannot reveal my sources. You have to trust me when I say, ‘Gran is addicted to trank.’” Thanks, MidJourney. Carry on with the gradient descent.

I have a modest proposal: Why not identify sources, financial tie ups of reporters, and provide links to the Web pages and LinkedIn biographies of these luminaries?

I know. I know. Don’t be silly. Okay, I won’t be. But trust begins with verifiable facts and sources as that dinobaby Walter Isaacson told Lex Fridman a few days ago. And Mr. Isaacson provides sources too. How old fashioned.

Stephen E Arnold, September 28, 2023

Free Employees? Yep, Smart Software Saves Jobs Too

May 31, 2023

If you want a “free employee,” navigate to “100+ Tech Roles Prompt Templates.” The service offers:

your secret weapon for unleashing the full potential of AI in any tech role. Boost productivity, streamline communication, and empower your AI to excel in any professional setting.

The templates embrace:

  • C-Level Roles
  • Programming Roles
  • Cybersecurity Roles
  • AI Roles
  • Administrative Roles

How will an MBA make use of this type of capability? Here are a few thoughts:

First, terminate unproductive humans with software. The action will save time and reduce (allegedly) some costs.

Second, trim managerial staff who handle hiring, health benefits (ugh!), and administrative work related to humans.

Third, modify one’s own job description to yield more free time in which to enjoy the bonus pay the savvy MBA will receive for making the technical unit more productive.

Fourth, apply the concept to the company’s legal department, marketing department, and project management unit.

Paradise.

Stephen E Arnold, May 31, 2023

What Is the Byproduct of a Farm, Content Farm, That Is?

May 31, 2023

Think about the glorious spring morning spent in a feed lot in Oklahoma. Yeah, that is an unforgettable experience. The sights, the sounds, and — well — the smell.

I read “Google’s AI Search Feels Like a Content Farm on Steroids.” Zoom. Back to the feed lot or in my case, the Poland China pen in Farmington, Illinois. Special.

The write up is about the Google and its smart software. I underlined this passage:

…with its LLM (Large Language Model) doing all the writing, Google looks like the world’s biggest content farm, one powered by robotic farmers who can produce an infinite number of custom articles in real-time.

What are the outputs of Google’s smart software search daemons? Bits and bytes, clicks and cash, and perhaps it is the digital stench of a content farm byproduct?

Beyond Search loves the Google and all things Google, even false allegations of stealing intellectual property and statements before Congress which include the words trust, responsibility, and users.

It will come as no surprise that Beyond Search absolutely loves content farms’ primary and secondary outputs.

Stephen E Arnold, June 1, 2023

Regulate Does Not Mean Regulate. Leave the EU Does Not Mean Leave the EU. Got That?

May 30, 2023

I wrote about Sam AI-man’s explanation that he wants regulation. I pointed out that his definition of regulate means leaving OpenAI free to do whatever it can to ace out the Google and a handful of other big outfits chasing the pot of gold at the end of the AI rainbow.

I just learned from the self-defined trusted news source (Thomson Reuters) that Mr. AI-man has no plans to leave Europe. I understand. “Leave” does not mean leave as in depart, say adios, or hit the road, Jack.

“ChatGPT Maker OpenAI Says Has No Plan to Leave Europe” reports:

OpenAI has no plans to leave Europe, CEO Sam Altman said on Friday, reversing a threat made earlier this week to leave the region if it becomes too hard to comply with upcoming laws on artificial intelligence.

I am not confused. Just as the company’s name OpenAI does not mean “open,” the AI-man’s communication skills are based on the probabilities of certain words following another word. Got it. The slippery fish with AI-man is that definition of the words in his mind do not regress to the mean. The words — like those of some other notable Silicon Valley high tech giants — reflect the deeper machinations of a machine-assisted superior intelligence.

Translated, this means: Regulate means shaft our competitors. Leave means stay. Regulate means let those OpenAI sheep run through the drinking water of free range cattle.

The trusted write up says:

Reacting to Altman’s tweet on Friday, Dutch MEP Kim van Sparrentak, who has worked closely on the AI draft rules, told Reuters she and her colleagues must stand up to pressure from tech companies…. Voluntary codes of conduct are not the European way.

What does this statement mean to AI-man?

I would suggest from my temporary office in clear thinking Washington, DC, not too much.

I look forward to the next hearing from AI-man. That will be equally easy to understand.

Stephen E Arnold, May 30, 2023

Google Smart Software: Lawyers to the Rescue

May 2, 2023

The article “Beginning of the End of OpenAI” in Analytics India raised an interesting point about Google’s smart software. The essay suggests that a legal spat over a trademark for “GPT” could allow Google to make a come-from-behind play in the generative software race. I noted this passage:

A lot of product names appear with the term ‘GPT’ in it. Now, if OpenAI manages to get its trademark application decided in favour, all of these applications would have to change their name, and ultimately not look appealing to customers.

Flip this idea to “if Google wins…”, and OpenAI could — note “could” — face a fleet of Google legal eagles and the might of Google’s prescient, forward, quantumly supreme marketing army.

What about useful products, unbiased methods of generating outputs, and slick technology? Wait. I know the answer. “That stuff is secondary to our new core competency. The outputs of lawyers and marketing specialists.”

Stephen E Arnold, May 2, 2023

What Does Poor Performer Mean? Loser, Lousy Personnel Processes, or Crawfishing

December 15, 2022

Google is not afraid to fire anyone who ignites controversy within the company related to diversity and women. Sometimes it is not bad press that causes Google to lay off its employees; instead, it is the economy. The Daily Hunt reports that, “Google Asked Managers To Fire 10,000 ‘Poor Performers’ As Mass Layoffs Hit Tech Sector.”

The US federal government is raising interest rates, and tech companies that make a large portion of their profits from ads are feeling the pain. Meta, Google, Amazon, Twitter, and more companies are firing workers. Alphabet is telling its managers to lay off all employees who are rated as “poor performers.” The hope is to get rid of at least 10,000 workers, and there might be some subterfuge behind it:

“As per a report from Forbes, Google might even bank on these rankings to avoid paying bonuses and stock grants. Google’s managers have been reportedly asked to categorize 10,000 employees as “poor performers” so that 10,000 people can be fired. Alphabet has a total workforce of 187,000 people, which is one of the largest workforces in tech.”

Google’s workforce is described as bloated, and it pays its employees 70% more than Microsoft compensates its staff, or 153% more compared to the top twenty big tech companies. Google pays more than its competition to hoard talent and increase its stranglehold on the tech industry.

My thought is that Google is into the lifetime labeling approach to handling RIFed professionals. There’s nothing like a lifetime albatross around the neck of a job-seeking Xoogler used to Foosball and snacks.

Whitney Grace, December 15, 2022
