Killing Horses? Okay. Killing Digital Information? The Best Idea Ever!

August 14, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Fans at the 2023 Kentucky Derby were able to watch horses die. True, the sport of kings parks vehicles and has people stand around so the termination does not spoil a good day at the races. It seems logical to me that killing information is okay too. Personally, I want horses to thrive, not be brutalized for the mint julep crowd, and in my opinion, information deserves preservation. Without some type of intentional or unintentional information preservation, what would those YouTuber videos about ancient technology have to display and describe?

“In the Age of Culling” — an article in an online publication — I noted a number of ideas which resonated with me. The first is one of the subheads in the write up; to wit:

CNet pruning its content is a harbinger of something bigger.

The basic idea in the essay is that killing content is okay, just like killing horses.

The article states:

I am going to tell you right now that CNET is not the first website that has removed or pruned its archives, or decided to underplay them, or make them hard to access. Far from it.

The idea is that eliminating content creates information loss. If one cannot find some item of content, that item does not exist for many people.

I urge you to read the entire article.

I want to shift the focus from the essay slightly.

When digital information is “disappeared,” the culling cuts away research, some types of evidence, and collective memory. But what happens when a handful of large US companies effectively shape the information used to train smart software? Checking facts becomes more difficult because people “believe” a machine more than a human in many situations.


Two girls looking at a museum exhibit in 2028. The taller girl says, “I think this is what people used to call a library.” The shorter girl asks, “Who needs this stuff? I get what I need to know online. Besides this looks like a funeral to me.” The taller girl replies, “Yes, let’s go look at the plastic dinosaurs. When you put on the headset, the animals are real.” Thanks MidJourney for not including the word “library” or depicting the image I requested. You are so darned intelligent!

Consider the power information filtering and weaponizing conveys to those relying on digital information. The statement “harbinger of something bigger” is correct. But if one looks forward, the potential for selective information may be the flip side of forgetting.

Trying to figure out “truth” or “accuracy” is getting more difficult each day. How does one talk about a subject when those in conversation have learned about Julius Caesar from a TikTok video and perceive a problem with tools created to sell online advertising?

This dinobaby understands that cars are speeding down the information highway, and their riders live in a reality defined by online services. I am reluctant to name the changes which suggest this somewhat negative view of learning. One believes what one experiences. If those experiences are designed to generate clicks, reduce operating costs, and shape behavior — what does the information landscape look like?

No digital archives? No past. No awareness of information weaponization? No future. Were those horses really killed? Were those archives deleted? Were those Shakespeare plays removed from the curriculum? Were the tweets deleted?

Let’s ask smart software. No thanks, I will do dinobaby stuff despite the efforts to redefine the past and weaponize the future.

Stephen E Arnold, August 14, 2023

MBAs, Lawyers, and Sociology Majors Lose Another Employment Avenue

August 4, 2023

Note: Dinobaby here: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid. Services are now ejecting my cute little dinosaur gif. Like my posts related to the Dark Web, the MidJourney art appears to offend someone’s sensibilities in the datasphere. If I were not 78, I might look into these interesting actions. But I am, and I don’t really care.

Some days I find MBAs, lawyers, and sociology majors delightful. On others I fear for their future. One promising avenue of employment has now been cut off. What’s the job? Avocado peeler in an ethnic restaurant. Some hearty souls channeling Euell Gibbons may eat avocados as nature delivers them. Others prefer a toast delivery vehicle or maybe a dip to accompany a meal in an ethnic restaurant or while making a personal vlog about the stresses of modern life.

“Chipotle’s Autocado Robot Can Prep Avocados Twice as Fast as Humans” reports:

The robot is capable of peeling, seeding, and halving a case of avocados significantly faster than humans, and the company estimates it could cut its typical 50-minute guacamole prep time in half…

When an efficiency expert from a McKinsey-type firm or a second tier thinker from a mid-tier consulting firm reads this article, there is one obvious line of thought the wizard will follow: Replace some of the human avocado peelers with a robot. Projecting into the future while under the influence of spreadsheet fever, an upgrade to the robot’s software will enable it to perform other jobs in the restaurant or food preparation center; for example, taco filler or dip crafter.

Based on this actual factual write up, I have concluded that some MBAs, lawyers, and sociology majors will have to seek another pathway to their future. Yard sale organizer, pet sitter, and possibly the life of a hermit remain viable options. Oh, the hermit will have GoFundMe and BuyMeaCoffee pages. Perhaps a T shirt or a hat?

Stephen E Arnold, August 4, 2023

Netflix Has a Job Opening. One Job Opening to Replace Many Humanoids

July 27, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “As Actors Strike for AI Protections, Netflix Lists $900,000 AI Job.” Obviously the headline is about AI, money, and entertainment. Is the job “real”? Like so much of the output of big companies, it is difficult to determine how much is clickbait, how much is surfing on “real” journalists’ thirst for juicy info, and how much is trolling. Yep, trolling. Netflix drives a story about AI’s coming to Hollywood.

The write up offers Hollywood verbiage and makes an interesting point:

The [Netflix job] listing points to AI’s uses for content creation: “Artificial Intelligence is powering innovation in all areas of the business,” including by helping them to “create great content.” Netflix’s AI product manager posting alludes to a sprawling effort by the business to embrace AI, referring to its “Machine Learning Platform” involving AI specialists “across Netflix.”

The machine learning platform or MLP is an exercise in cost control, profit maximization, and presaging the future. If smart software can generate new versions of old content, whip up acceptable facsimiles, and eliminate insofar as possible the need for non-elite humans — what’s not to like?

The $900,000 may be code for “Smart software can crank out good enough content at lower cost than traditional Hollywood methods.” Even the TikTok and YouTube “stars” face an interesting choice: [a] Figure out how to offload work to smart software or [b] learn to cope with burn out, endless squabbles with gatekeepers about money, and the anxiety of becoming a has-been.

Will humans, even talented ones, be able to cope with the pressure smart software will exert on the production of digital content? As with junior attorneys and the cannon fodder at blue chip consulting companies, AI is moving from spitting out high school essays to more impactful outputs.

One example is the integration of smart software into workflows. The jargon about this enabling use of smart software is fluid. The $900,000 job focuses on something that those likely to be affected can understand: A good enough script and facsimile actors and actresses with a mouse click.

But the embedded AI promises to rework the back office processes and the unseen functions of humans just doing their jobs. My view is that there will be $900K per year jobs but far fewer of them than there are regular workers. What is the future for those displaced?

Crafting? Running yard sales? Creating fine art?

Stephen E Arnold, July 27, 2023

Ethics Are in the News — Now a Daily Feature?

July 27, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

It is déjà vu all over again, or it seems like it. I read “Judge Finds Forensic Scientist Henry Lee Liable for Fabricating Evidence in a Murder Case.” Yep, that is the story. Scientist Lee allegedly has a knack for fiction; that is, making up stuff or arranging items in a special way. One of my relatives founded Hartford, Connecticut, in 1635. I am not sure he would have been on board with this make-stuff-up approach to data. (According to our family lore, John Arnold was into beating people with a stick.) Dr. Lee is a big wheel because he worked on the 1995 running-through-airports trial. The cited article includes this interesting sentence:

[Scientist] Lee’s work in several other cases has come under scrutiny…


No one is watching. A noted scientist helps himself to the cookies in the lab’s cookie jar. He is heard mumbling, “Cookies. I love cookies. I am going to eat as many of these suckers as I can because I am alone. And who cares about anyone else in this lab? Not me.” Chomp chomp chomp. Thanks, MidJourney. You depicted an okay scientist but refused to create an image of a great leader whom I identified by proper name. For this I paid money?

Let me mention three ethics incidents which for one reason or another hit my radar:

  1. MIT accepting cash from every young person’s friend Jeffrey Epstein. He allegedly killed himself. He’s off the table.
  2. The Harvard ethics professor who made up data. She’s probably doing consulting work now. I don’t know if she will get back into the classroom. If she does it might be in the Harvard Business School. Those students have a hunger for information about ethics.
  3. The soon-to-be-departed president of Stanford University. He may find a future using ChatGPT or an equivalent to write technical articles and angling for a gig on cable TV.

What do these allegedly true incidents tell us about the moral fiber of some people in positions of influence? I have a few ideas. Now the task is remediation. When John Arnold chopped wood in Hartford, justice involved ostracism, possibly a public shaming, or rough justice played out to the theme from Hang ‘Em High.

Harvard, MIT, and Stanford: Aren’t universities supposed to set an example for impressionable young minds? What are the students learning? Anything goes? Prevaricate? Cut corners? Grub money?

Imagine sweatshirts with the college logo and one word on the front and back of the garment: Winner. Some at Amazon, Apple, Facebook, Google, Microsoft, and OpenAI might wear them to the next off-site. I would wager that one turns up in the Rayburn House Office Building wellness room.

Stephen E Arnold, July 27, 2023

Will Smart Software Take Customer Service Jobs? Do Grocery Stores Raise Prices? Well, Yeah, But

July 26, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I have suggested that smart software will eliminate some jobs. Who will do the replacement work? Interns who will pay to learn something which may be more useful than a degree in art history? RIF’ed former employees who are desperate for cash and will work for a fraction of their original salary?


“Believe it or not, I am here to help you. However, I strongly suggest you learn more about the technology used to create software robots and helpers like me. I also think you have beautiful eyes. Mine are just blue LEDs, but the Terminator finds them quite attractive,” says the robot who is learning from her human sidekick. Thanks, MidJourney, you have the robot human art nailed.

The fact is that smart software will perform many tasks once handled by humans. Don’t believe me? Visit a local body shop. Then take a tour of the Toyota factory not too distant from Tokyo’s airport. See the difference? The local body shop is swarming with folks who do stuff with their hands, spray guns, and machines which have been around for decades. The Toyota factory is not like that.

Machines — hardware, software, or combos — do not take breaks. They do not require vacations. They do not complain about hard work and long days. They are, in fact, just machines.

Therefore, the New York Times’s article “Training My Replacement: Inside a Call Center Worker’s Battle with AI” provides a human interest glimpse of the terrors of a humanoid who sees the writing on the wall. My hunch is that the New York Times’s “real news” team will do more stories like this.

However, it would be helpful to readers to include information such as a reference or a subtle nod to write ups like “There Are 4 Reasons Why Jobs Are Disappearing — But AI Isn’t One of Them.” What are these reasons? Here’s a snapshot:

  • Poor economic growth
  • Higher costs
  • Supply chain issues (real, convenient excuse, or imaginary)
  • That old chestnut: Covid. Boo.

Do I buy the report? I think identification of other factors is a useful exercise. In the short term, many organizations are experimenting with smart software. Few are blessed with senior executives who trust technology when those creating the technology are not exactly sure what’s going on with their digital whiz kids.

The Gray Lady’s “real news” teams should be nervous. The wonderful, trusted, reliable Google is allegedly showing how a human can use Google AI to help humans with creating news.

Even art history majors should be suspicious because once a leader in carpetland hears about the savings generated by deleting humanoids and their costs, those bean counters will allow an MBA to install software. Remember, please, that the mantra of modern management is money and good enough.

Stephen E Arnold, July 26, 2023

Hedge Funds and AI: Lovers at First Sight

July 26, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

One promise of AI is that it will eliminate tedious tasks (and the jobs that go with them). That promise is beginning to be fulfilled in the investment arena, we learn from the piece, “Hedge Funds Are Deploying ChatGPT to Handle All the Grunt Work,” shared by Yahoo Finance. What could go wrong?


Two youthful hedge fund managers are so pleased with their AI-infused hedge fund tactics that they jumped into a swimming pool which is starting to fill with money. Thanks, MidJourney. You have nailed the happy bankers and their enjoyment of money raining down.

Bloomberg’s Justina Lee and Saijel Kishan write:

“AI on Wall Street is a broad church that includes everything from machine-learning algorithms used to compute credit risks to natural language processing tools that scan the news for trading. Generative AI, the latest buzzword exemplified by OpenAI’s chatbot, can follow instructions and create new text, images or other content after being trained on massive amounts of inputs. The idea is that if the machine reads enough finance, it could plausibly price an option, build a portfolio or parse a corporate news headline.”

Parse the headlines for investment direction. Interesting. We also learn:

“Fed researchers found [ChatGPT] beats existing models such as Google’s BERT in classifying sentences in the central bank’s statements as dovish or hawkish. A paper from the University of Chicago showed ChatGPT can distill bloated corporate disclosures into their essence in a way that explains the subsequent stock reaction. Academics have also suggested it can come up with research ideas, design studies and possibly even decide what to invest in.”
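The dovish/hawkish labeling task the Fed researchers describe can be pictured, absent any LLM, with a toy keyword scorer. This is a minimal sketch of the shape of the task only; the keyword lists and the `classify_sentence` function are my own illustrative assumptions, not the researchers' method (which used ChatGPT and BERT-class models).

```python
# Toy stand-in for the LLM-based dovish/hawkish classifier described above.
# Real systems learn these cues from data; this sketch just counts
# hand-picked keywords to show what the classification task looks like.

DOVISH = {"accommodative", "easing", "lower", "support", "stimulus"}
HAWKISH = {"tighten", "tightening", "raise", "inflation", "restrictive"}

def classify_sentence(sentence: str) -> str:
    """Label a central-bank sentence as dovish, hawkish, or neutral."""
    words = {w.strip(".,;").lower() for w in sentence.split()}
    dove = len(words & DOVISH)
    hawk = len(words & HAWKISH)
    if dove > hawk:
        return "dovish"
    if hawk > dove:
        return "hawkish"
    return "neutral"

print(classify_sentence("The Committee decided to raise rates to combat inflation."))
# hawkish
```

A keyword counter fails on negation and hedged language, which is exactly why the researchers found a large language model beats simpler baselines at this job.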

Sounds good in theory, but there is just one small problem (several, really, but let’s focus on just the one): These algorithms make mistakes. Often. (Scroll down in this GitHub list for the ChatGPT examples.) It may be wise to limit one’s investments to firms patient enough to wait for AI to become more reliable.

Cynthia Murrell, July 26, 2023

Silicon Valley and Its Busy, Busy Beavers

July 21, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Several stories caught my attention. Let me work through them.


Google’s busy beavers have been active: AI, pricing tactics, quantum goodness, and team building. Thanks, MidJourney but you left out the computing devices which no high value beaver goes without.

Google has allowed its beavers to gnaw on some organic material to build some dams. Specifically, the newspapers which have been affected by Google’s online advertising (no, I am not forgetting; I am just focusing on the Google at the moment) can avail themselves of AI. The idea is… cost cutting. Could there be some learnings for the Google? What I mean is that such a series of tests or trials provides the Google with telemetry. Such telemetry allows the Google to refine its news writing capabilities. The trajectory of such knowledge may allow the Google to embark on its own newspaper experiment. Where will that lead? I don’t know, but it does not bode well for real journalists or some other entities.

The YouTube price increase is positioned as a better experience. Could the sharp increase in ads before, during, and after a YouTube video be part of a strategy? What I am hypothesizing is that more ads will force users to pay to be able to watch a YouTube video without being driven crazy by ads for cheap mobile, health products, and gun belts? Deteriorating the experience allows a customer to buy a better experience. Could that be semi-accurate?

The quantum supremacy thing strikes me as 100 percent PR with a dash of high school braggadocio. The write up speaks to me this way: “I got a higher score on the SAT.” Snort snort snort. The snorts are a sound track to putting down those whose machines just don’t have the right stuff. I wonder if this is how others perceive the article.

And the busy beavers turned up at the White House. The beavers say, “We will be responsible with this AI stuff.  We AI promise.” Okay, I believe this because I don’t know what these creatures mean when the word “responsible” is used. I can guess, however.

Net net: The ethicist from Harvard and the soon-to-be-former president of Stanford are available to provide advisory services. Silicon Valley is a metaphor for many good things, especially for the companies and their senior executives. Life will get better and better with certain high technology outfits running the show, pulling the strings, and controlling information, won’t it?

Stephen E Arnold, July 21, 2023

Smart Software: Good Enough Plus 18 Percent More Quality

July 19, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Do I believe the information in “ChatGPT Can Turn Bad Writers into Better Ones”? No, I don’t. First, MIT is the outfit which had a special relationship with Jeffrey Epstein. Yep, that guy. Quite a pal. Second, academic outfits are known to house individuals who just make up or enhance research data. Does MIT have professors who do that? Of course not. But with Harvard professionals engaging in some ethical ballroom dancing with data, I want to be cautious. (And, please, navigate to the original write up and read the report. Subscribe too because Mr. Epstein is indisposed and unable to contribute to the academic keel of the scholarly steamboat.)

What counts, however, is perception, not reality. The write up fosters some Chemical Guys’ shine on information, so let’s take a look. It will be a shallow one because that is the spirit of some research today, and this dinobaby wants to get with the program. My writing may be lousy, but I do it myself, which seems to go against the current trend.

Here’s the core point in the write from my point of view in rural Kentucky, a state known for its intellectual rigor and fine writing about basketball:

A new study by two MIT economics graduate students … suggests it could help reduce gaps in writing ability between employees. They found that it could enable less experienced workers who lack writing skills to produce work similar in quality to that of more skilled colleagues.

The point in my opinion is that cheaper workers can do what more expensive workers can do.

Just to drive home the point, the write up included this point:

The writers who chose to use ChatGPT took 40% less time to complete their tasks, and produced work that the assessors scored 18% higher in quality than that of the participants who didn’t use it.


The MidJourney highly original art system produced this picture of an accountant, trained online by the once proud University of Phoenix, manifesting great joy upon discovering that smart software can produce marketing and PR collateral faster, cheaper, and better than a disgruntled English major wanting to rent a larger apartment in a big city. The accountant seems to be sitting in a modest thundershower of budget surplus.

For many, MIT has heft. Therefore, will this write up and the expert researchers’ data influence people; for instance, owners of marketing, SEO, reputation management, and PR companies? Yep. I see three likely consequences:



  1. Layoffs will be accelerating
  2. Good enough becomes outstanding when financial benefits are fungible
  3. Assurances about employment security will be irrelevant

And what about those MIT graduates? Better get a degree in math, computer science, engineering, or medieval English poetry. No, strike that medieval English poetry. Substitute “prompt engineer” or museum guide in Albania.

Stephen E Arnold, July 19, 2023

Financial Analysts, Lawyers, and Consultants Can See Their Future

July 17, 2023

It is the middle of July 2023, and I think it is time for financial analysts, lawyers, and consultants to spruce up their résumés. Why would a dinobaby make such a suggestion to millions of the beloved Millennials, GenXers, the adorable GenY folk, and the vibrant GenZ lovers of TikTok, BMWs, and neutral colors?

I read three stories helpfully displayed by my trusty news reader. Let’s take a quick look at each and offer a handful of observations.

The first article is “This CEO Replaced 90% of Support Staff with an AI Chatbot.” The write up reports:

The chief executive of an Indian startup laid off 90% of his support staff after the firm built a chatbot powered by artificial intelligence that he says can handle customer queries much faster than his employees.

Yep, better, faster, and cheaper. Pick all three, which is exactly what some senior managers will do. AI is now disrupting. But what about “higher skill” jobs than talking on the phone and looking up information for a clueless caller?

The second article is newsy or is it newsie? “OpenAI and Associated Press Announce Partnership to Train AI on News Articles” reports:

[The deal] will see OpenAI licensing text content from the AP archives that will be used for training large language models (LLMs). In exchange, the AP will make use of OpenAI’s expertise and technology — though the media company clearly emphasized in a release that it is not using generative AI to help write actual news stories.

Will these stories become the property of the AP? Does Elon Musk have confidence in himself?


Young professionals learning that they are able to find their future elsewhere. In the MidJourney confection is a lawyer, a screenwriter, and a consultant at a blue chip outfit selling MBAs at five times the cost of their final year at university.

I think that the move puts Google in a bit of a spot if it processes AP content and a legal eagle can find that content in a Bard output. More significantly, hasta la vista, reporters. Now the elimination of hard working, professional journalists will not happen immediately. However, from my vantage point in rural Kentucky, I hear the train a-rollin’ down the tracks. Whooo Whooo.

The third item is “Producers Allegedly Sought Rights to Replicate Extras Using AI, Forever, for Just $200.” The write up reports:

Hollywood’s top labor union for media professionals has alleged that studios want to pay extras around $200 for the rights to use their likenesses in AI – forever – for just $200.

Will the unions representing these skilled professionals refuse to cooperate? Does Elon Musk like Grimes’s music?

A certain blue chip consulting firm has made noises about betting $2 billion on smart software and Microsoft consulting. Oh, oh. Junior MBAs, it may not be too late to get an associate of arts degree in modern poetry so you can work as a prompt engineer. As a famous podcasting person says, “What say you?”

Several questions:

  1. Will trusted, reliable, research supporting real news organizations embrace smart software and say farewell to expensive humanoids?
  2. Will those making videos use computer generated entities?
  3. Will blue chip consulting firms find a way to boost partners’ bonuses standing on the digital shoulders of good enough software?

I sure hope you answered “no” to each of these questions. I have a nice two cruzeiro collectible from Brazil, circa 1952, to sell you. Make me an offer. Collectible currency is an alternative to writing prompts or becoming a tour guide in Astana. Oh, that’s in Kazakhstan.

Smart software is a cost reducer because humanoids [a] require salaries and health care, [b] take vacations, [c] create security vulnerabilities or are security vulnerabilities, and [d] require more than high school science club management methods related to sensitive issues.

Money and good enough will bring changes in news, Hollywood, and professional services.

Stephen E Arnold, July 17, 2023

Amazon: Machine-Generated Content Adds to Overhead Costs

July 7, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

“Amazon Has a Big Problem As AI-Generated Books Flood Kindle Unlimited” makes it clear that Amazon is going to have to re-think how it runs its self-publishing operation and figure out how to deal with machine-generated books from “respected” publishers.

The author of the article is expressing concern about ChatGPT-type outputs being assembled into electronic books. That concern is focused on Amazon and its ageing, arthritic Kindle eBook business. With text-to-voice tools, I suppose one should think about Audible audiobooks spit out by software as well. The culprit, however, may be Amazon itself. Paying a person to read a book for seven hours, not screw up, and make sure the sound is acceptable when the reader has a stuffed nose can be pricey.


A senior Amazon executive thinks to herself, “How can I fix this fake content stuff? I should really update my LinkedIn profile too.” Will the lucky executive charged with fixing the problem identified in the article be allowed to eliminate revenue? Yep, get going on the LinkedIn profile first. Tackle the fake stuff later.

The write up points out:

the mass uploading of AI-generated books could be used to facilitate click-farming, where ‘bots’ click through a book automatically, generating royalties from Amazon Kindle Unlimited, which pays authors by the amount of pages that are read in an eBook.
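The click-farm mechanics hinge on Kindle Unlimited paying by pages read. The arithmetic can be sketched in a few lines; the per-page rate below is an assumed illustrative value, not Amazon's actual payout figure.

```python
# Sketch of why page-through bots pay: Kindle Unlimited royalties scale
# with pages read, so a bot that flips every page of a machine-generated
# book converts junk content directly into payout.
# PER_PAGE_RATE is an assumption for illustration, not Amazon's real rate.
PER_PAGE_RATE = 0.004  # dollars per page read (assumed)

def royalty(pages_read: int, rate: float = PER_PAGE_RATE) -> float:
    """Royalty earned for a given number of pages read."""
    return pages_read * rate

# 1,000 bot sessions each paging through a 150-page generated book:
total = royalty(pages_read=1_000 * 150)
print(f"${total:,.2f}")  # $600.00
```

The content costs nearly nothing to generate, so even a small per-page rate makes the scheme worth automating, which is the concern the article raises.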

And what’s Amazon doing about this quasi-fake content? The article reports:

It [Amazon] didn’t explicitly state that it was making an effort specifically to address the apparent spam-like persistent uploading of nonsensical and incoherent AI-generated books.

Then, the article raises the issues of “quality” and “authenticity.” I am not sure what these two glory words mean. My impression is that a machine-generated book is not as good as one crafted by a subject matter expert or motivated human author. If I am right, the editors at TechRadar are apparently oblivious to the idea of using XML structured content and a MarkLogic-type tool to slice-and-dice content. Then the components are assembled into a reference book. I want to point out that this method has been in use by professional publishers for a number of years. Because I signed a confidentiality agreement, I am not able to identify this outfit. But I still recall the buzz of excitement that rippled through one officer meeting at this outfit when those listening to a presentation realized [a] humanoids could be terminated and a reduced staff could produce more books, and [b] the guts of the technology was a database, a technology mostly understood by those with a few technical conferences under their belts. Yippy! No one had to learn anything. Just calculate the financial benefit of dumping humans and figuring out how to expense the contractors who could format content from a hovel in a Myanmar-type of low-cost location. At night, the executives dreamed about their bonuses for hitting their financial targets and how to start RIF’ing editorial staff, subject matter experts, and assorted specialists who doodled with front matter, footnotes, and fonts.
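The slice-and-dice workflow just described — structured components pulled from a content store and assembled into a “new” reference book — can be sketched with the standard library. The element names, the `topic` attribute, and the sample store here are hypothetical; real professional-publishing pipelines use far richer schemas and MarkLogic-type repositories.

```python
import xml.etree.ElementTree as ET

# Hypothetical component store: each <entry> is a reusable chunk of
# content tagged by topic, as in the XML slice-and-dice approach above.
STORE = """
<store>
  <entry topic="ai"><title>Neural Networks</title><body>...</body></entry>
  <entry topic="db"><title>B-Trees</title><body>...</body></entry>
  <entry topic="ai"><title>Transformers</title><body>...</body></entry>
</store>
"""

def assemble_book(xml_source: str, topic: str) -> ET.Element:
    """Slice entries matching a topic and dice them into a new 'book'."""
    root = ET.fromstring(xml_source)
    book = ET.Element("book", attrib={"subject": topic})
    for entry in root.findall(f"entry[@topic='{topic}']"):
        book.append(entry)  # reuse the component wholesale
    return book

book = assemble_book(STORE, "ai")
print(len(book), [e.findtext("title") for e in book])
# 2 ['Neural Networks', 'Transformers']
```

The point of the sketch: once content lives in a database as tagged components, producing another “book” is a query plus a template, which is why the officers in that meeting saw headcount, not authorship, as the variable cost.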

Net net: There is no fix. The write up illustrates a lack of understanding about how large sections of the information industry use technology and the established procedures for seizing a cost-saving opportunity. Quality means more revenue; authenticity is a marketing job. Amazon has a content problem and has to gear up its tools and business procedures to cope with machine-generated content, whether in product reviews or eBooks.

Stephen E Arnold, July 7, 2023
