The Death of Digital News Upstarts: Woohoo!

May 31, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

When I worked at a “real” newspaper, I learned that obituaries were cooked; that is, the newspaper reports of death were written whilst the subject was still alive and presumably buying advertisements in the paper or at least subscribing. The Guardian ran its obituary for upstart digital news outfits. No, the opinion writer did not include the word “woohoo.” I just picked up the Hopf vibration with my spidey sense.

The essay is “Vice Is Going Bankrupt, BuzzFeed News Is Dead. What Does It Mean?” I don’t want to be picky, but these are two separate entities and each, as far as I know, is still breathing. There may be life support equipment involved, but neither entity’s online presence delivers a cheerful 404 message… yet.

The essay sails forward with no interest in my online check or the fact that two separate entities do not in my mind comprise an “it”. I am not going to differentiate because if the Guardian sees two identical Lego blocks, that’s the reality.

The write up says via a quote from the “brilliant” Clay Shirky, author and meme generator:

“This is what real revolutions are like. The old stuff gets broken faster than the new stuff is put in its place,” Shirky wrote. And, amid the ensuing chaos, it’s extremely hard to see what’s going next: “The importance of any given experiment isn’t apparent at the moment it appears, big changes stall, small changes spread.”

There are some bright spots; for example, ProPublica, the Gray Lady of Wordle fame, the Bezos news service, and most important, The Guardian, “owned by the Scott Trust and sustained by its endowment” and supported by readers who roll over for the jazzy pop ups in blue and yellow saying, “Give cash.”

Too bad the write up did not include the woohoo.

Stephen E Arnold, May 31, 2023

MBAs and Advisors, Is Your Nuclear Winter Looming?

May 31, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Big time, blue chip consulting firms are quite competent in three areas: [1] Sparking divorces because those who want money marry the firm, [2] Ingesting legions of MBAs to advise clients who are well compensated but insecure, and [3] Finding ways to cut costs and pay the highly productive partners more money. I assume some will disagree, but that’s what kills horses at the Kentucky Derby.

I read but did not think twice about believing every single word in “Amid Mass Layoff, Accenture Identifies 300+ Generative AI Use Cases.” My first mental reaction was this question, “Just 300?”

The write up points out:

Accenture has identified five broad areas where generative AI can be implemented – advising, creating, automation, software creation and protection. The company is also working with a multinational bank to use generative AI to route large numbers of post-trade processing emails and draft responses with recommended actions to reduce manual effort and risk.

With fast food joints replacing humans with robots, what’s an MBA to do? The article does not identify employment opportunities for those who will be replaced with zeros and ones. As a former blue chip worker bee, I would suggest to anyone laboring in the intellectual vineyards to consider a career as an influencer.

Who will get hired and make big bucks at the Bains, the BCGs, the Boozers, and the McKinseys, et al? Here’s my short list:

  1. MBAs or people admitted to a fancy university with super connections. If one’s mom or dad was an ambassador or frequents parties drooled upon by Town & Country Magazine, you may be in the game.
  2. Individuals who can sell big buck projects, even if they learned the trade at low rent used car lots. The future at the blue chips is bright indeed for them.
  3. Individuals who are pals with highly regarded partners.

What about the quality of the work produced by the smart software? That is a good question. The idea is to make the client happy and sell follow on work. The initial work product may be reviewed by a partner or maybe not. The proof of the pudding is the revenue, costs, and profit figures.

That influencer opportunity looks pretty good, doesn’t it? I think snow is falling. Grab a Ralph Lauren Purple Label before you fire up that video camera.

Stephen E Arnold, May 31, 2023

Finally, an Amusing Analysis of AI

May 31, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Intentionally amusing or not, I found “ChatGPT Is Basically a Gen X’er Who Stopped Reading in 12th Grade” a hoot. The write up develops its thesis this way:

Turns out our soon-to-be AI Overlord, ChatGPT, has a worldview based in the 19th-century canon, Gen X sci-fi favorites, and the social dynamics at Hogwart’s School For Lil Magicians.

The essay then cites the estimable Business Insider (noted for its subscribe to read this most okay article approach to raising money) and its report about a data scientist who figured out what books ChatGPT has ingested. The list is interesting because it includes texts which most of today’s online users would find quaint, racist, irrelevant, or mildly titillating. Who doesn’t need to know about sensitive vampires?

So what’s funny?

First, the write up is similar to outputs from smart software: Recycled information and generic comments.

Second, the reading material was fed into ChatGPT by more unnamed smart software experts.

I wonder if the Sundar & Prabhakar Comedy Act will integrate this type of material into their explanation about the great things which will emerge from the Google.

Stephen E Arnold, May 31, 2023

What Is the Byproduct of a Farm, Content Farm, That Is?

May 31, 2023

Think about the glorious spring morning spent in a feed lot in Oklahoma. Yeah, that is an unforgettable experience. The sights, the sounds, and — well — the smell.

I read “Google’s AI Search Feels Like a Content Farm on Steroids.” Zoom. Back to the feed lot or in my case, the Poland China pen in Farmington, Illinois. Special.

The write up is about the Google and its smart software. I underlined this passage:

…with its LLM (Large Language Model) doing all the writing, Google looks like the world’s biggest content farm, one powered by robotic farmers who can produce an infinite number of custom articles in real-time.

What are the outputs of Google’s smart software search daemons? Bits and bytes, clicks and cash, and perhaps the digital stench of a content farm byproduct?

Beyond Search loves the Google and all things Google, even false allegations of stealing intellectual property and statements before Congress which include the words trust, responsibility, and users.

It will come as no surprise that Beyond Search absolutely loves content farms’ primary and secondary outputs.

Stephen E Arnold, June 1, 2023

Free Employees? Yep, Smart Software Saves Jobs Too

May 31, 2023

If you want a “free employee,” navigate to “100+ Tech Roles Prompt Templates.” The service offers:

your secret weapon for unleashing the full potential of AI in any tech role. Boost productivity, streamline communication, and empower your AI to excel in any professional setting.

The templates embrace:

  • C-Level Roles
  • Programming Roles
  • Cybersecurity Roles
  • AI Roles
  • Administrative Roles

How will an MBA make use of this type of capability? Here are a few thoughts:

First, terminate unproductive humans with software. The action will save time and reduce (allegedly) some costs.

Second, trim managerial staff who handle hiring, health benefits (ugh!), and administrative work related to humans.

Third, modify one’s own job description to yield more free time in which to enjoy the bonus pay the savvy MBA will receive for making the technical unit more productive.

Fourth, apply the concept to the company’s legal department, marketing department, and project management unit.

Paradise.

Stephen E Arnold, May 31, 2023

Stop Smart Software! A Petition to Save the World! Signed by 350 Humans!

May 30, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

A “real” journalist (Kevin Roose), who was told by a chat bot to divorce his significant other, published the calming, measured, non-clickbait story “AI Poses Risk of Extinction, Industry Leaders Warn.” What’s ahead for the forest fire of smart software activity? The headline explains a “risk of extinction.” What, no screenshot of a Terminator robot saying:

The strength of the human heart. The difference between us and machines. [Uplifting music]

Sadly, no.

The write up reports:

Eventually, some believe, A.I. could become powerful enough that it could create societal-scale disruptions within a few years if nothing is done to slow it down, though researchers sometimes stop short of explaining how that would happen. These fears are shared by numerous industry leaders, putting them in the unusual position of arguing that a technology they are building — and, in many cases, are furiously racing to build faster than their competitors — poses grave risks and should be regulated more tightly.

Isn’t the Gray Lady amplifying fear, uncertainty, and doubt? Didn’t IBM pay sales engineers to spread the FUD?

Enough. AI is bad. Stop those who refined the math and numerical recipes. Pass laws to regulate the AI technology. Act now. Save humanity. Several observations:

  1. Technologists who “develop” functions and then beg for rules are disingenuous. The idea is to practice self-control and judgment before inviting Mr. Hyde to brunch.
  2. With smart software chock full of “unknown unknowns”, how exactly are elected officials supposed to regulate a diffusing and enabling technology? Appealing to US and EU officials omits common sense in my opinion.
  3. The “fix” for the AI craziness may be emulating the Chinese approach: Do what the CCP wants or be reeducated. What a nation state can do with smart software is indeed something to consider. But China has taken action and will move forward with militarization no matter what the US and EU do.

Silicon Valley type innovation has created a “myth of excellence.” One need only look at social media to see the consequences of high school science club decision making. Now a handful of individuals with the Silicon Valley DNA want external forces to rein in their money making experiments and personal theme parks. Sorry, folks. Internal control, ethical behavior, and integrity come from within for mature individuals.

A sheet of paper with “rules” and “regulations” is a bit late to the Silicon Valley game. And the Gray Lady? Chasing clicks in my opinion.

Stephen E Arnold, May 30, 2023

Regulate Does Not Mean Regulate. Leave the EU Does Not Mean Leave the EU. Got That?

May 30, 2023

I wrote about Sam AI-man’s explanation that he wants regulation. I pointed out that his definition of regulate means leaving OpenAI free to do whatever it can to ace out the Google and a handful of other big outfits chasing the pot of gold at the end of the AI rainbow.

I just learned from the self-defined trusted news source (Thomson Reuters) that Mr. AI-man has no plans to leave Europe. I understand. “Leave” does not mean leave as in depart, say adios, or hit the road, Jack.

“ChatGPT Maker OpenAI Says Has No Plan to Leave Europe” reports:

OpenAI has no plans to leave Europe, CEO Sam Altman said on Friday, reversing a threat made earlier this week to leave the region if it becomes too hard to comply with upcoming laws on artificial intelligence.

I am not confused. Just as the company’s name OpenAI does not mean “open,” the AI-man’s communication skills are based on the probabilities of certain words following another word. Got it. The slippery fish with AI-man is that the definitions of the words in his mind do not regress to the mean. The words — like those of some other notable Silicon Valley high tech giants — reflect the deeper machinations of a machine-assisted superior intelligence.

Translated, this means: Regulate means shaft our competitors. Leave means stay. Regulate means let those OpenAI sheep run through the drinking water of the free range cattle.

The trusted write up says:

Reacting to Altman’s tweet on Friday, Dutch MEP Kim van Sparrentak, who has worked closely on the AI draft rules, told Reuters she and her colleagues must stand up to pressure from tech companies…. Voluntary codes of conduct are not the European way.

What does this statement mean to AI-man?

I would suggest from my temporary office in clear thinking Washington, DC, not too much.

I look forward to the next hearing from AI-man. That will be equally easy to understand.

Stephen E Arnold, May 30, 2023

Probability: Who Wants to Dig into What Is Cooking Beneath the Outputs of Smart Software?

May 30, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The ChatGPT and smart software “revolution” depends on math only a few live and breathe. One drawer in the pigeon hole desk of mathematics is probability. You know the coin flip example. Most computer science types avoid advanced statistics. I know because my great uncle was Vladimir Arnold (yeah, the guy who worked with a so-so mathy type named Andrey Kolmogorov, who was pretty good at mathy stuff and liked hiking in the winter in what my great uncle described as “minimal clothing”).

When it comes to using smart software, the plumbing is kept under the basement floor. What people see are interfaces and application programming interfaces. Watching how the sausage is produced is not what the smart software outfits do. What makes the math interesting is that the system and methods are not really new. What’s new is that memory, processing power, and content are available.

If one pries up a tile on the basement floor, the plumbing is complicated. Within each pipe or workflow process are the mathematics that bedevil many college students: Inferential statistics. Those who dabble in the Fancy Math of smart software are familiar with Markov chains and Martingales. There are garden variety maths as well; for example, the calculations beloved of stochastic parrots.

MidJourney’s idea of complex plumbing. Smart software’s guts are more intricate with many knobs for acolytes to turn and many levers to pull for “users.”

The little secret among the mathy folks who whack together smart software is that humanoids set thresholds, establish boundaries on certain operations, exercise controls like those on an old-fashioned steam engine, and find inspiration with a line of code or a process tweak that arrived in the morning gym routine.
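The knob twiddling described above can be sketched in a few lines. This is a toy illustration only (hypothetical scores, not any vendor’s actual code) of how two human-set controls, temperature and top-k, reshape a model’s output probabilities before anything is sampled:

```python
import math

def sample_distribution(logits, temperature=1.0, top_k=None):
    """Toy sketch: how human-set knobs (temperature, top-k)
    reshape a model's raw scores into output probabilities."""
    # Temperature rescales the raw scores; lower values sharpen the choice.
    scaled = [score / temperature for score in logits]
    # Top-k throws away all but the k highest-scoring options.
    if top_k is not None:
        cutoff = sorted(scaled, reverse=True)[top_k - 1]
        scaled = [s if s >= cutoff else float("-inf") for s in scaled]
    # Softmax turns the surviving scores back into probabilities.
    exps = [math.exp(s) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for four candidate next words.
logits = [2.0, 1.0, 0.5, 0.1]
print(sample_distribution(logits, temperature=1.0))           # fairly spread out
print(sample_distribution(logits, temperature=0.5))           # sharper, top word dominates
print(sample_distribution(logits, temperature=1.0, top_k=2))  # the long tail is zeroed
```

Lower the temperature and the top choice dominates; set top-k and the tail vanishes. Each such setting is a human judgment baked into the pipe, which is exactly why the outputs resist explanation from the outside.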

In short, the snazzy interface makes it almost impossible to explain why certain responses appear and others do not. Who knows how the individual humanoid tweaks and values (probabilities, for instance) interact with other mathy stuff. Why explain this? Few understand.

To get a sense of how contentious certain statistical methods are, I suggest you take a look at “Statistical Modeling, Causal Inference, and Social Science.” I thought the paper should have been called, “Why No One at Facebook, Google, OpenAI, and other smart software outfits can explain why some output showed up and some did not, why one response looks reasonable and another one seems like a line ripped from Fantasy Magazine.”

In a nutshell, the cited paper makes one point: Those teaching advanced classes in which probability and related operations are taught do not agree on what tools to use, how to apply the procedures, and what impact certain interactions produce.

Net net: Glib explanations are baloney. This mathy stuff is a serious problem, particularly when a major player like Google seeks to control training sets, off-the-shelf models, framing problems, and integrating the firm’s mental orientation to what’s okay and what’s not okay. Are you okay with that? I am too old to worry, but you, gentle reader, may have decades to understand what my great uncle and his sporty pal were doing. What Google type outfits are doing is less easily looked up, documented, and analyzed.

Stephen E Arnold, May 30, 2023

Smart Software Knows Right from Wrong

May 29, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The AI gold rush is underway. I am not sure if the gold is the stuff of the King’s crown or one of those NFT confections. I am not sure what company will own the mine or sell the miner’s pants with rivets. But gold rush days remind me of forced labor (human indexers), claim jumping (hiring experts from one company to advantage another), and hydraulic mining (ethical and moral world enhancement). Yes, I see some parallels.

I thought of claim jumping and morals after reading “OpenAI Competitor Says Its Chatbot Has a Rudimentary Conscience: A Fascinating Concept.” The following snippet from the article caught my attention:

Jared Kaplan, a former OpenAI research consultant who went on to found Anthropic with a group of his former coworkers, told Wired that Claude is, in essence, learning right from wrong because its training protocols are “basically reinforcing the behaviors that are more in accord with the constitution, and discourages behaviors that are problematic.”

Please, read the original.

I want to capture several thoughts which flitted through my humanoid mind:

  1. What is right? What is wrong?
  2. What yardstick will be used to determine “rightness” or “wrongness”?
  3. What is the context for each right or wrong determination? For example, at the National Criminal Justice Training Center, there is a concept called “sexploitation.” Will the moral compass of You.com prohibit searching for information related to this trendy criminal activity? How will the Anthropic approach distinguish a user with a “right” intent from a user with a “wrong” intent?

Net net: Baloney. Services will do what’s necessary to generate revenue. I know from watching the trajectories of the Big Tech outfits that right, wrong, ethics, and associated dorm room discussions wobble around and focus on getting rich or just having a job.

The goal for some will be to get their fingers on the knobs and control levers. Right or wrong?

Stephen E Arnold, May 29, 2023

Shall We Train Smart Software on Scientific Papers? That Is an Outstanding Idea!

May 29, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Fake Scientific Papers Are Alarmingly Common. But New Tools Show Promise in Tackling Growing Symptom of Academia’s Publish or Perish Culture.” New tools sounds great. Navigate to the cited document to get the “real” information.

MidJourney’s representation of a smart software system ingesting garbage and outputting garbage.

My purpose in mentioning this article is to ask a question:

In the last five years how many made up, distorted, or baloney filled journal articles have been produced?

The next question is,

How many of these sci-fi confections of scholarly research have been identified and discarded by the top smart software outfits like Facebook, Google, OpenAI, et al?

Let’s assume that 25 percent of the journal content is fakery.

A question I have is:

How does faked information impact the outputs of the smart software systems?

I can anticipate some answers; for example, “Well, there are a lot of papers, so the flawed papers will represent a small portion of the intake data.” The law of large numbers or some statistical jibber jabber will try to explain away erroneous information. Remember: Bad information is part of the human landscape. Does this mean smart software is a mirror of errors?
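That “small portion” hand wave can be checked with back of the envelope arithmetic. A toy sketch, assuming (hypothetically) that a model reproduces claims in proportion to their share of the training mix:

```python
def expected_garbage_share(garbage_fraction, upweight=1.0):
    """Toy model: if a system parrots claims in proportion to their
    share of the training mix, the garbage share of outputs tracks
    the intake share. 'upweight' models fake papers being cited or
    duplicated more often than honest ones."""
    weighted_garbage = garbage_fraction * upweight
    weighted_clean = 1.0 - garbage_fraction
    return weighted_garbage / (weighted_garbage + weighted_clean)

# The essay's working assumption: 25 percent of journal content is fakery.
print(expected_garbage_share(0.25))               # dilution does not remove it
print(expected_garbage_share(0.25, upweight=2.0)) # worse if fakes get amplified
```

Dilution does not remove the garbage; it just spreads it evenly. And if fake papers are duplicated or cited more often than honest ones, their share of the outputs grows rather than shrinks.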

Do smart software outfits remove flawed information? If the peer review process cannot, what methods are the smart outfits using? Perhaps these companies should decide what’s correct and what’s incorrect? That sounds like a Googley-type idea, doesn’t it?

And finally, the third question about the impact of bad information on smart software “outputs” has an answer. No, it is not marketing jargon or a recycling of Google’s seven wonders of the AI world.

The answer, in my opinion, is garbage in and garbage out.

But you knew that, right?

Stephen E Arnold, May 29, 2023
