Surprising Real Journalism News: The Chilling Claws of AI

February 6, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

I wanted to highlight two interesting items from the world of “real” news and “real” journalism. I am a dinobaby and not a “real” anything. I do, however, think these two unrelated announcements provide some insight into what 2024 will encourage.

image

The harvesters of information wheat face a new reality. Thanks, MSFT Copilot. Good enough. How’s that email security? Ah, good enough. Okay.

The first item comes from everyone’s favorite, free speech service X.com (affectionately known to my research team as Xhitter). The item appears as a titbit from Max Tani. The message is an allegedly real screenshot of an internal memorandum from a senior executive at the Wall Street Journal. The screenshot purports to make clear that the Murdoch property is allowing some “real” journalists to find their future elsewhere. Perhaps in a fast food joint in Olney, Maryland? The screenshot is difficult for my 79-year-old eyes to read, but I got some help from one of my research team. The X.com payload says:

Today we announced a new structure in Washington [DC] that means a number of our colleagues will be leaving the paper…. The new Washington bureau will focus on politics, policy, defense, law, intelligence and national security.

Okay, people are goners. The Washington, DC bureau will focus on Washington, DC stuff. What was the bureau doing? Oh, perhaps that is why “our colleagues will be leaving the paper.” Cost cutting and focusing are in vogue.

The second item is titled “Q&A: How Thomson Reuters Used GenAI to Enable a Citizen Developer Workforce.” I want to alert you that the Computerworld article is a mere 3,800 words. Let me summarize the gist of the write up: “AI is going to replace expensive “real” journalists., My hunch is that some of the lawyers involved in annotating, assembling, and blessing the firm’s legal content. To Thomson Reuters’ credit, the company is trying to swizzle some sweetener into what may be a bitter drink for some involved with the “trust” crowd.

Several observations:

  1. It is about 13 months since Microsoft made AI its next big thing. That means that these two examples are early examples of what is going to happen to many knowledge workers
  2. Some companies just pull the pin; others are trying to find ways to avoid PR problems and lawsuits
  3. The more significant disruptions will produce a reasonably new type of worker push back.

Net net: Imagine what the next year will bring as AI efficiency digs in, bites tail feathers, and enriches those who sit in the top one percent.

Stephen E Arnold, February 6, 2024

Sales SEO: A New Tool for Hype and Questionable Relevance

February 5, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

Search engine optimization is a relevance eraser. Now SEO has arrived for a human. “Microsoft Copilot Can Now Write the Sales Pitch of a Lifetime” makes clear that hiring is going to become more interesting for both human personnel directors (often called chief people officers) and AI-powered résumé screening systems. And for people who are responsible for procurement, figuring out when a marketing professional is tweaking the truth and hallucinating about a product or service will become a daily part of life… in theory.

image

Thanks for the carnival barker image, MSFT Copilot Bing thing. Good enough. I love the spelling of “asiractson”. With workers who may not be able to read, so what? Right?

The write up explains:

Microsoft Copilot for Sales uses specific data to bring insights and recommendations into its core apps, like Outlook, Microsoft Teams, and Word. With Copilot for Sales, users will be able to draft sales meeting briefs, summarize content, update CRM records directly from Outlook, view real-time sales insights during Teams calls, and generate content like sales pitches.

The article explains:

… Copilot for Service for Service can pull in data from multiple sources, including public websites, SharePoint, and offline locations, in order to handle customer relations situations. It has similar features, including an email summary tool and content generation.

Why is MSFT expanding these interesting functions? Revenue. Paying extra unlocks these allegedly remarkable features. Prices range from $240 per year to a reasonable $600 per year per user. This is a small price to pay for an employee unable to craft solutions that sell, by golly.

Stephen E Arnold, February 5, 2024

An International AI Panel: Notice Anything Unusual?

February 2, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

An expert international advisory panel has been formed. The ooomph behind the group is the UK’s prime minister. The Evening Standard newspaper described the panel this way:

The first-of-its-kind scientific report on AI will be used to shape international discussions around the technology.

What most of the reports omit is the list of luminaries named to this entity. You can find the list at this link.
image
A number of individual amateur cooks are working hard to match the giant commercial food processing facility is creating. Why aren’t these capable chefs not working with the big outfits? Can “outsiders” understand the direction of a well-resourced, fast-moving commercial enterprise? Thanks, MSFT Copilot. Good enough.
I want to list the members and then ask, “Do you see anything unusual in the list?” The names are ordered by country and representative:

Australia. Professor Bronwyn Fox, Chief Scientist, The Commonwealth Scientific and Industrial Research Organization (CSIRO)

Brazil. André Carlos Ponce de Leon Ferreira de Carvalho, Professor, Institute of Mathematics and Computer Sciences, University of São Paulo

Canada. Doctor Mona Nemer, Chief Science Advisor of Canada

Canada. Professor Yoshua Bengio, considered one of the “godfathers of AI”.

Chile. Raquel Pezoa Rivera, Academic, Federico Santa María Technical University

China. Doctor Yi Zeng, Professor, Institute of Automation, Chinese Academy of Sciences

EU. Juha Heikkilä, Adviser for Artificial Intelligence, DG Connect

France. Guillame Avrin, National Coordinator for AI, General Directorate of Enterprises

Germany. Professor Antonio Krüger, CEO, German Research Center for Artificial Intelligence.

India. Professor Balaraman Ravindran, Professor at the Department of Computer Science and Engineering, Indian Institute of Technology, Madras

Indonesia. Professor Hammam Riza, President, KORIKA

Ireland. Doctor. Ciarán Seoighe, Deputy Director General, Science Foundation Ireland

Israel. Doctor Ziv Katzir, Head of the National Plan for Artificial Intelligence Infrastructure, Israel Innovation Authority

Italy. Doctor Andrea Monti,Professor of  Digital Law, University of Chieti-Pescara.

Japan. Doctor Hiroaki Kitano, CTO, Sony Group Corporation

Kenya. Awaiting nomination

Mexico. Doctor José Ramón López Portillo, Chairman and Co-founder, Q Element

Netherlands. Professor Haroon Sheikh, Senior Research Fellow, Netherlands’ Scientific Council for Government Policy

New Zealand. Doctor Gill Jolly, Chief Science Advisor, Ministry of Business, Innovation and Employment

Nigeria. Doctor Olubunmi Ajala, Technical Adviser to the Honorable Minister of Communications, Innovation and Digital Economy,
Philippines. Awaiting nomination

Republic of Korea. Professor Lee Kyoung Mu, Professor, Department of Electrical and Computer Engineering, Seoul National University

Rwanda. Crystal Rugege, Managing Director, National Center for AI and Innovation Policy

Kingdom of Saudi Arabia. Doctor Fahad Albalawi, Senior AI Advisor, Saudi Authority for Data and Artificial Intelligence

Singapore. Denise Wong, Assistant Chief Executive, Data Innovation and Protection Group, Infocomm Media Development Authority (IMDA)

Spain. Nuria Oliver, Vice-President, European Laboratory for Learning and Intelligent Systems (ELLISS)

Switzerland. Doctor. Christian Busch, Deputy Head, Innovation, Federal Department of Economic Affairs, Education and Research

Turkey. Ahmet Halit Hatip, Director General of European Union and Foreign Relations, Turkish Ministry of Industry and Technology

UAE. Marwan Alserkal, Senior Research Analyst, Ministry of Cabinet Affairs, Prime Minister’s Office

Ukraine. Oleksii Molchanovskyi, Chair, Expert Committee on the Development of Artificial intelligence in Ukraine

USA. Saif M. Khan, Senior Advisor to the Secretary for Critical and Emerging Technologies, U.S. Department of Commerce

United Kingdom. Dame Angela McLean, Government Chief Scientific Adviser

United Nations. Amandeep Gill, UN Tech Envoy

Give up? My team identified these interesting aspects:

  1. No Facebook, Google, Microsoft, OpenAI or any other US giant in the AI space
  2. Academics and political “professionals” dominate the list
  3. A speed and scale mismatch between AI diffusion and panel report writing.

Net net: More words will be generated for large language models to ingest.

Stephen E Arnold, February 2, 2024

Flailing and Theorizing: The Internet Is Dead. Swipe and Chill

February 2, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

I do not spend much time with 20 somethings, 30 something, 40 somethings, 50 somethings, or any other somethings. I watch data flow into my office, sell a few consulting jobs, and chuckle at the downstream consequences of several cross-generation trends my team and I have noticed. What’s a “cross generational trend”? The phrase means activities and general perceptions which are shared among some youthful college graduates and a harried manager working in a trucking company. There is the mobile phone obsession. The software scheduler which strips time from an individual with faux urgency or machine-generated pings and dings. There is the excitement of sports events, many of which may feature scripting. There is anomie or the sense of being along in a kayak carried to what may be a financial precipice. You get the idea.

Now the shriek of fear is emanating from online sources known as champions of the digital way. In this short essay, I want to highlight one of these; specifically, “The Era of the AI-Generated Internet Is Already Here: And It’s Time to Talk about AI Model Collapse.” I want to zoom the conclusion of the “real” news report and focus on the final section of the article, “The Internet Isn’t Completely Doomed.”

Here we go.

First, I want to point out that communication technologies are not “doomed.” In fact, these methods or techniques don’t go away. A good example are the clay decorations in some homes which way, “We love our Frenchie” or an Etsy plaque like this one:

image

Just a variation of a clay tablet produced in metal for an old-timey look. The communication technologies abundant today are likely to have similar stickiness. Doom, therefore, is Karen rhetoric in my opinion.

Second, the future is a return to the 1980s when for-fee commercial databases were trusted and expensive sources of electronic information. The “doom” write up predicts that content will retreat behind paywalls. I would like to point out that you are reading an essay in a public blog. I put my short writings online in 2008, using the articles as a convenient archive. When I am asked to give a lecture, I check out my blog posts. I find it a way to “refresh” my memory about past online craziness. My hunch is that these free, ad-free electronic essays will persist. Some will be short and often incomprehensible items on Pinboard.in; others will be weird TikTok videos spun into a written item pumped out via a social media channel on the Clear Web or the Dark Web (which seems to persist, doesn’t it?) When an important scientific discovery becomes known, that information becomes findable. Sure, it might be a year after the first announcement, but those ArXiv.org items pop up and are often findable because people love to talk, post, complain, or convert a non-reproducible event into a job at Harvard or Stanford. That’s not going to change.

image

A collapsed AI robot vibrated itself to pieces. Its model went off the rails and confused zeros with ones and ones with zeros. Thanks, MSFT Copilot Bing thing. How are those security procedures today?

Third, search engine optimization is going to “change.” In order to get hired or become famous, one must call attention to oneself. Conferences, Zoom webinars, free posts on LinkedIn-type services — none of these will go away or… change. The reason is that unless one is making headlines or creating buzz, one becomes irrelevant. I am a dinobaby and I still get crazy emails about a blockchain report I did years ago. (The somewhat strident outfit does business as IGI with the url igi-global.com. When I open an email from this outfit, I can smell the desperation.) Other outfits are similar, very similar, but they hit the Amazon thing for some pricey cologne to convert the scent of overboardism into something palatable. My take on SEO: It’s advertising, promotion, PT Barnum stuff. It is, like clay tablets, in the long haul.

Finally, what about AI, smart software, machine learning, and the other buzzwords slapped on ho-hum products like a word processor? Meh. These are short cuts for the Cliff’s Notes’ crowd. Intellectual achievement requires more than a subscription to the latest smart software or more imagination than getting Mistral to run on your MacMini. The result of smart software is to widen the gap between people who are genuinely intelligent and knowledge value creators, and those who can use an intellectual automatic teller machine (ATM).

Net net: The Internet is today’s version of online. It evolves, often like gerbils or tribbles which plagued Captain Kirk. The larger impact is the return to a permanent one percent – 99 percent social structure. Believe me, the 99 percent are not going to be happy whether they can post on X.com, read craziness on a Dark Web forum, pay for an online subscription to someone on Substack, or give money to the New York Times. The loss of intellectual horsepower is the consequence of consumerizing online.

This dinobaby was around when online began. My colleagues and I knew that editorial controls, access policies, and copyright were important. Once the ATM-model swept over the online industry, today’s digital world was inevitable. Too bad no one listened when those who were creating online information were ignored and dismissed as Ivory Tower dwellers. “Doom”? No just a dawning of what digital information creates. Have fun. I am old and am unwilling to provide a coloring book and crayons for the digital information future and a model collapse. That’s the least of some folks’s worries. I need a nap.

Stephen E Arnold, February 1, 2024

Robots, Hard and Soft, Moving Slowly. Very Slooowly. Not to Worry, Humanoids

February 1, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

CNN that bastion of “real” journalism published a surprising story: “We May Not Lose Our Jobs to Robots So Quickly, MIT Study Finds.” Wait, isn’t MIT the outfit which had a tie up with the interesting Jeffrey Epstein? Oh, well.

The robots have learned that they can do humanoid jobs quickly and easily. But the robots are stupid, right? Yes, they are, but the managers looking for cost reductions and workforce reductions are not. Thanks, MSFT Copilot Bing thing. How the security of the MSFT email today?

The story presents as actual factual an MIT-linked study which seems to go against the general drift of smart software, smart machines, and smart investors. The story reports:

new research suggests that the economy isn’t ready for machines to put most humans out of work.

The fresh research finds that the impact of AI on the labor market will likely have a much slower adoption than some had previously feared as the AI revolution continues to dominate headlines. This carries hopeful implications for policymakers currently looking at ways to offset the worst of the labor market impacts linked to the recent rise of AI.

The story adds:

One key finding, for example, is that only about 23% of the wages paid to humans right now for jobs that could potentially be done by AI tools would be cost-effective for employers to replace with machines right now. While this could change over time, the overall findings suggest that job disruption from AI will likely unfurl at a gradual pace.

The intriguing facet of the report and the research itself is that it seems to suggest that the present approach to smart stuff is working just fine, thank you very much. Why speed up or slow down? The “unfurling” is a slow process. No need for these professionals to panic as major firms push forward with a range of hard and soft robots:

  1. Consulting firms. Has MIT checked out Deloitte’s posture toward smart software and soft robots?
  2. Law firms. Has MIT talked to any of the Top 20 law firms about their use of smart software?
  3. Academic researchers. Has MIT talked to any of the graduate students or undergraduates about their use of smart software or soft robots to generate bibliographies, summaries of possibly non-reproducible studies, or books mentioning their professor?
  4. Policeware vendors. Companies like Babel Street and Recorded Future are putting pedal to the metal with regard to smart software.

My hunch is that MIT is not paying attention to the happy robots at Tesla or the bad actors using software robots to poke through the cyber defenses of numerous outfits.

Does CNN ask questions? Not that I noticed. Plus, MIT appears to want good news PR. I would too if I were known to be pals with certain interesting individuals.

Stephen E Arnold, February 1, 2024

A Glimpse of Institutional AI: Patients Sue Over AI Denied Claims

January 31, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

AI algorithms are revolutionizing business practices, including whether insurance companies deny or accept medical coverage. Insurance companies are using more on AI algorithms to fast track paperwork. They are, however, over relying on AI to make decisions and it is making huge mistakes by denying coverage. Patients are fed up with their medical treatments being denied and CBS Moneywatch reports that a slew of “Lawsuits Take Aim At Use Of AI Tool By Health Insurance Companies To Process Claims.”

The defendants in the AI insurance lawsuits are Humana and United Healthcare. These companies are using the AI model nHPredict to process insurance claims. On December 12, 2023, a class action lawsuit was filed against Humana, claiming nHPredict denied medically necessary care for elderly and disabled patients under Medicare Advantage. A second lawsuit was filed in November 2023 against United Healthcare. United Healthcare also used nHPredict to process claims. The lawsuit claims the insurance company purposely used the AI knowing it was faulty and about 90% of its denials were overridden.

The AI model is supposed to work:

“NHPredicts is a computer program created by NaviHealth, a subsidiary of United Healthcare, that develops personalized care recommendations for ill or injured patients, based on “real world experience, data and analytics,’ according to its website, which notes that the tool “is not used to deny care or to make coverage determinations.’

But recent litigation is challenging that last claim, alleging that the “nH Predict AI Model determines Medicare Advantage patients’ coverage criteria in post-acute care settings with rigid and unrealistic predictions for recovery.” Both United Healthcare and Humana are being accused of instituting policies to ensure that coverage determinations are made based on output from nHPredicts’ algorithmic decision-making.”

Insurance companies deny coverage whenever they can. Now a patient can talk to an AI customer support system about an AI system’s denying a claim. Will the caller be faced with a voice answering call loop on steroids? Answer: Oh, yeah. We haven’t seen or experienced what’s coming down the cost-cutting information highway. The blip on the horizon is interesting, isn’t it?

Whitney Grace, January 31, 2024

Ho-Hum Write Up with Some Golden Nuggets

January 30, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

I read “Anthropic Confirms It Suffered a Data Leak.” I know. I know. Another security breach involving an outfit working with the Bezos bulldozer and Googzilla. Snore. But in the write up, tucked away were a couple of statements I found interesting.

image

“Hey, pardner, I found an inconsistency.” Two tries for a prospector and a horse. Good enough, MSFT Copilot Bing thing. I won’t ask about your secure email.

Here these items are:

  1. Microsoft, Amazon and others are being asked by a US government agency “to provide agreements and rationale for collaborations and their implications; analysis of competitive impact; and information on any other government entities requesting information or performing investigations.” Regulatory scrutiny of the techno feudal champions?
  2. The write up asserts: “Anthropic has made a “long-term commitment” to provide AWS customers with “future generations” of its models through Amazon Bedrock, and will allow them early access to unique features for model customization and fine-tuning purposes.” Love at first sight?
  3. And a fascinating quote from a Googler. Note: I have put in bold some key words which I found interesting:

“Anthropic and Google Cloud share the same values when it comes to developing AI–it needs to be done in both a bold and responsible way,” Google Cloud CEO Thomas Kurian said in a statement on their relationship. “This expanded partnership with Anthropic, built on years of working together, will bring AI to more people safely and securely, and provides another example of how the most innovative and fastest growing AI startups are building on Google Cloud.”

Yeah, but the article is called “Anthropic Confirms It Suffered a Data Leak.” What’s with the securely?

Ah, regulatory scrutiny and obvious inconsistency. Ho-hum with a good enough tossed in for spice.

Stephen E Arnold, January 30, 2024

AI Coding: Better, Faster, Cheaper. Just Pick Two, Please

January 29, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

Visual Studio Magazine is not on my must-read list. Nevertheless, one of my research team told me that I needed to read “New GitHub Copilot Research Finds “Downward Pressure on Code Quality.” I had no idea what “downward pressure” means. I read the article trying to figure out what the plain English meaning of this tortured phrase meant. Was it the downward pressure on the metatarsals when a person is running to a job interview? Was it the deadly downward pressure exerted on the OceanGate submersible? Was it the force illustrated in the YouTube “Hydraulic Press Channel”?

image

A partner at a venture firms wants his open source recipients to produce more code better, faster, and cheaper. (He does not explain that one must pick two.) Thanks MSFT Copilot Bing thing. Good enough. But the green? Wow.

Wrong.

The writeup is a content marketing piece for a research report. That’s okay. I think a human may have written most of the article. Despite the frippery in the article, I spotted several factoids. If these are indeed verifiable, excitement in the world of machine generated open source software will ensue. Why does this matter? Well, in the words of the SmartNews content engine, “Read on.”

Here are the items of interest to me:

  1. Bad code is being created and added to the GitHub repositories.
  2. Code is recycled, despite smart efforts to reduce the copy-paste approach to programming.
  3. AI is preparing a field in which lousy, flawed, and possible worse software will flourish.

Stephen E Arnold, January 29, 2024

Modern Poison: Models, Data, and Outputs. Worry? Nah.

January 29, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

One bad apple does not a failed harvest make. Let’s hope. I read “Poisoned AI Went Rogue During Training and Couldn’t Be Taught to Behave Again in Legitimately Scary Study.” In several of my lectures in 2023 I included a section about poisoned data. When I described the method and provided some examples of content injection, the audience was mostly indifferent. When I delivered a similar talk in October 2023, those in my audience were attentive. The concept of intentionally fooling around with model thresholds, data used for training, and exploiting large language model developers’ efforts to process more current or what some call “real time” data hit home. For each of these lectures, my audience was composed of investigators and intelligence analysts.

image

How many bad apples are in the spectrum of smart software? Give up. Don’t feel bad. No one knows. Perhaps it is better to ignore the poisoned data problem? There is money to be made and innovators to chase the gold rush. Thanks, MSFT Copilot Bing thing. How is your email security? Oh, good enough, like the illustration with lots of bugs.

Write ups like “Poisoned AI Went Rogue…” add a twist to my tales. Specifically a function chunk of smart software began acting in a manner not only surprising but potentially harmful. The write up in LiveScience asserted:

AI researchers found that widely used safety training techniques failed to remove malicious behavior from large language models — and one technique even backfired, teaching the AI to recognize its triggers and better hide its bad behavior from the researchers.

Interesting. The article noted:

Artificial intelligence (AI) systems that were trained to be secretly malicious resisted state-of-the-art safety methods designed to "purge" them of dishonesty …  Researchers programmed various large language models (LLMs) — generative AI systems similar to ChatGPT — to behave maliciously. Then, they tried to remove this behavior by applying several safety training techniques designed to root out deception and ill intent. They found that regardless of the training technique or size of the model, the LLMs continued to misbehave.

Evan Hubinger, an artificial general intelligence safety research scientist at Anthropic, is quoted as saying:

"I think our results indicate that we don’t currently have a good defense against deception in AI systems — either via model poisoning or emergent deception — other than hoping it won’t happen…  And since we have really no way of knowing how likely it is for it to happen, that means we have no reliable defense against it. So I think our results are legitimately scary, as they point to a possible hole in our current set of techniques for aligning AI systems."

If you want to read the research paper, you can find it at this link. Note that one of the authors is affiliated with the Amazon- and Google-supported Anthropic AI company.

Net net: We do not have at this time a “good defense” against this type of LLM poisoning. Do I have a clever observation, some words of reassurance, or any ideas for remediation?

Nope.

Stephen E Arnold, January 29, 2024

AI Will Take Whose Job, Ms. Newscaster?

January 29, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

Will AI take jobs? Abso-frickin-lutely. Why? Cost savings. Period. In an era of “good enough” is the new mark of excellence, hallucinating software is going to speed up some really annoying commercial functions and reduce costs. What if the customers object to being called dorks? Too bad. The company will apologize, take down the wonky system, and put up another smart service. Better? No, good enough. Faster? Yep. Cheaper? Bet your bippy on that, pilgrim. (See, for a chuckle, AI Chatbot At Delivery Firm DPD Goes Rogue, Insults Customer And Criticizes Company.)

image

Hey, MSFT Bing thing, good enough. How is that MSFT email security today, kiddo?

I found this Fox write up fascinating: “Two-Thirds of Americans Say AI Could Do Their Job.” That works out to about 67 percent of an estimated workforce of 120 million to a couple of Costco parking lots of people. Give or take a few, of course.

The write up says:

A recent survey conducted by Spokeo found that despite seeing the potential benefits of AI, 66.6% of the 1,027 respondents admitted AI could carry out their workplace duties, and 74.8% said they were concerned about the technology’s impact on their industry as a whole.

Oh, oh. Now it is 75 percent. Add a few more Costco parking lots of people holding signs like “Will broadcast for food”, “Will think for food,” or “Will hold a sign for Happy Pollo Tacos.” (Didn’t some wizard at Davos suggest that five percent of jobs would be affected? Yeah, that’s on the money.)

The write up adds:

“Whether it’s because people realize that a lot of work can be easily automated, or they believe the hype in the media that AI is more advanced and powerful than it is, the AI box has now been opened. … The vast majority of those surveyed, 79.1%, said they think employers should offer training for ChatGPT and other AI tools.

Yep, take those free training courses advertised by some of the tech feudalists. You too can become an AI sales person just like “search experts” morphed into search engine optimization specialists. How is that working out? Good for the Google. For some others, a way station on the bus ride to the unemployment bureau perhaps?

Several observations:

  1. Smart software can generate the fake personas and the content. What’s the outlook for talking heads who are not celebrities or influencers as “real” journalists?
  2. Most people overestimate their value. Now the jobs for which these individuals compete, will go to the top one percent. Welcome to the feudal world of 21st century.
  3. More than holding signs and looking sad will be needed to generate revenue for some people.

And what about Fox news reports like the one on which this short essay is based? AI, baby, just like Sports Illustrated and the estimable SmartNews.

Stephen E Arnold, January 29, 2024

« Previous PageNext Page »

  • Archives

  • Recent Posts

  • Meta