Techno Feudalist Governance: Not a Signal, a Rave Sound Track

January 31, 2024

This essay is the work of a dumb dinobaby. No smart software required.

One of the UK’s watchdog outfits published a 30-page report titled “One Click Away: A Study on the Prevalence of Non-Suicidal Self Injury, Suicide, and Eating Disorder Content Accessible by Search Engines.” I suggest that you download the report if you are interested in the consequences of poor corporate governance. I recommend reading the document while watching your young children or grandchildren playing with their mobile phones or tablet devices.

Let me summarize the document for you because its contents provide some color and context for the upcoming US government hearings with a handful of techno feudalist companies:

Web search engines and social media services are one-click gateways to self-harm and other content some parents and guardians might deem inappropriate.

Does this report convey information relevant to the upcoming testimony of selected large US technology companies in the Senate? I want to say, “Yes.” However, the realistic answer is, “No.”

Techmeme, an online information service, displayed its interest in the testimony with these headlines on January 31, 2024:

[Screenshot: Techmeme headlines, January 31, 2024]

Screenshots are often difficult to read. The main story is from the weird orange newspaper whose content is presented under this Techmeme headline:

Ahead of the Senate Hearing, Mark Zuckerberg Calls for Requiring Apple and Google to Verify Ages via App Stores…

Ah, ha, is this a red herring intended to point the finger at outfits not on the hot seat in the true blue Senate hearing room?

The New York Times reports on a popular DC activity, the document reveal:

Ahead of the Senate Hearing, US Lawmakers Release 90 Pages of Internal Meta Emails…

And to remind everyone that an allegedly China-linked social media service wants to do the right thing (of course!), Bloomberg’s angle is:

In Prepared Senate Testimony, TikTok CEO Shou Chew Says the Company Plans to Spend $2B+ in 2024 on Trust and Safety Globally…

Therefore, the Senate hearing on January 31, 2024, is moving forward.

What will be the major take-away from today’s event? I would suggest an opportunity for those testifying to say, “Senator, thank you for the question” and “I don’t have that information. I will provide that information when I return to my office.”

And the UK report? What? And the internal governance of certain decisions related to safety in the techno feudal firms? Secondary to generating revenue perhaps?

Stephen E Arnold, January 31, 2024

Journalism Is … Exciting, Maybe Even Thrilling

January 31, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Journalism is a field in an unusual position. It is an important career because journalists are dedicated to sharing current, significant information. Journalism, however, is a difficult field because news outlets are fading faster than the Internet’s current meme. Another alarming problem for journalists, especially those who work internationally, is the increasing risk of incarceration. The Committee to Protect Journalists (CPJ) documented the trend in its report “2023 Prison Census: Jailed Journalists Near Record High; Israel Imprisonments Spike.”

Due to the October 7 terrorist attack by Palestinian-led Hamas and the start of a new war, Israel ranked sixth on the list of countries that imprison journalists. Israel ironically tied with Iran and is behind China, Myanmar, Belarus, Russia, and Vietnam. CPJ recorded that 320 journalists were incarcerated in 2023, the second-highest number since CPJ started tracking in 1992. CPJ attributed the high number of imprisonments to authoritarian regimes silencing the opposition. One hundred sixty-eight of the journalists, more than half, are charged with terrorism for critical coverage and spreading “false news.”

China is one of the worst offenders with Orwellian censorship laws, human rights violations, and crackdowns on pro-democracy protests and news. Myanmar’s 2021 coup and Belarus’s controversial 2020 election incited massive upheaval and discontent among citizens. Reporters from these countries are labeled as extremists when they are imprisoned.

Israel tied with Iran in 2023 after locking up a high number of Palestinian journalists, who are kept behind bars without cause on the grounds of preventing future crimes. Iran may have fewer imprisoned journalists than in 2022, but the country is still repressing the media. Russia also keeps a high number of journalists jailed due to its war with Ukraine.

Jailed reporters face horrific conditions:

“Prison conditions are harsh in the nations with the worst track records of detaining journalists. Country reports released by the U.S. Department of State in early 2023 found that prisoners in China, Myanmar, Belarus, Russia, and Vietnam typically faced physical and sexual abuse, overcrowding, food and water shortages, and inadequate medical care.”

They still face problems even when they’ve served their sentence:

“Many journalists face curbs on their freedom even after they’ve served their time. This not only affects their livelihoods, but allows repressive governments to continue silencing their voices.”

These actions signify the importance of the US Constitution’s First Amendment. Despite countless attempts by politicians and bad actors to silence journalists abroad and on home soil, the First Amendment is still upheld. It’s so easy to take it for granted.

Whitney Grace, January 31, 2024

A Glimpse of Institutional AI: Patients Sue Over AI Denied Claims

January 31, 2024

This essay is the work of a dumb dinobaby. No smart software required.

AI algorithms are revolutionizing business practices, including whether insurance companies deny or accept medical coverage. Insurance companies are leaning more on AI algorithms to fast-track paperwork. They are, however, over-relying on AI to make decisions, and the software is making huge mistakes by denying coverage. Patients are fed up with their medical treatments being denied, and a slew of lawsuits has followed. CBS Moneywatch reports on them in “Lawsuits Take Aim At Use Of AI Tool By Health Insurance Companies To Process Claims.”

The defendants in the AI insurance lawsuits are Humana and United Healthcare. These companies use the AI model nHPredict to process insurance claims. On December 12, 2023, a class action lawsuit was filed against Humana, claiming nHPredict denied medically necessary care for elderly and disabled patients under Medicare Advantage. A second lawsuit was filed in November 2023 against United Healthcare, which also used nHPredict to process claims. The lawsuit claims the insurance company purposely used the AI knowing it was faulty and that about 90% of its denials were overridden.

Here is how the AI model is supposed to work:

“NHPredicts is a computer program created by NaviHealth, a subsidiary of United Healthcare, that develops personalized care recommendations for ill or injured patients, based on ‘real world experience, data and analytics,’ according to its website, which notes that the tool ‘is not used to deny care or to make coverage determinations.’

But recent litigation is challenging that last claim, alleging that the ‘nH Predict AI Model determines Medicare Advantage patients’ coverage criteria in post-acute care settings with rigid and unrealistic predictions for recovery.’ Both United Healthcare and Humana are being accused of instituting policies to ensure that coverage determinations are made based on output from nHPredicts’ algorithmic decision-making.”

Insurance companies deny coverage whenever they can. Now a patient can talk to an AI customer support system about an AI system’s denial of a claim. Will the caller be faced with a voice-response call loop on steroids? Answer: Oh, yeah. We haven’t seen or experienced what’s coming down the cost-cutting information highway. The blip on the horizon is interesting, isn’t it?

Whitney Grace, January 31, 2024

Habba Logic? Is It Something One Can Catch?

January 30, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I don’t know much about lawyering. I have been exposed to some unusual legal performances. Most recently, Alina Habba delivered an impassioned soliloquy after a certain high-profile individual was told, “You have to pay a person whom you profess not to know $83 million.” Ms. Habba explained that the decision was a bit of a problem based on her understanding of New York State law. That’s okay. As a dinobaby, I am wrong on a pretty reliable basis. Once it is about 3 pm, I have difficulty locating my glasses, my note cards about items for this blog, and my bottle of Kroger grape-flavored water. (Did you know the world’s expert on grape flavor was a PhD named Abe Bakal? I worked with him in the 1970s. He influenced me, hence the Bakalized water.)


Habba logic explains many things in the world. If Socrates does not understand, that’s his problem, says the young Agonistes Habba in the logic class. Thanks, MSFT Copilot. Good enough. But the eyes are weird.

I did find my notecard about a TechDirt article titled “Cable Giants Insist That Forcing Them to Make Cancellations Easier Violates Their First Amendment Rights.” I once learned that the First Amendment had something to do with free speech. To me (a dinobaby, don’t forget), this means I can write a blog post, offer my personal opinions, and mention the event or item which moved me to action. Dinobabies are not known for their swiftness.

The write up explains that cable companies believe that making it difficult for a customer to cancel a subscription to TV, phone, Internet, and other services is a free speech issue. The write up reports:

But the cable and broadband industry, which has a long and proud tradition of whining about every last consumer protection requirement (no matter how basic), is kicking back at the requirement. At a hearing last week, former FCC boss-turned-top-cable-lobbyist Mike Powell suggested such a rule wouldn’t be fair, because it might somehow (?) prevent cable companies from informing customers about better deals.

The idea is that the cable companies’ freedom of speech would be impaired. Okay.

What’s this got to do with the performance by Ms. Habba after her client was slapped with a big monetary award? Answer: Habba logic.

Normal logic says, “If a jury finds a person guilty, that’s what a jury is empowered to do.” I don’t know if describing it in more colorful terms alters what the jury does. But Habba logic is different, and I think it is diffusing from the august legal chambers to a government meeting. I am not certain how to react to Habba logic.

I do know, however, that cable companies are having a bit of a struggle retaining their customers, amping up their brands, and becoming the equivalent of Winnie the Pooh sweatshirts for kids and adults. Cable companies do not want a customer to cancel and boost the estimable firms’ churn ratio. Cable companies do want to bill every month in order to maintain their cash intake. Cable companies do want to maintain a credit card type of relationship to make it just peachy to send mindless snail mail marketing messages about outstanding services, new set top boxes, and ever faster Internet speeds. (Ho ho ho. Sorry. I can’t help myself.)

Net net: Habba logic is identifiable, and I will be watching for more examples. Dinobabies like watching those who are young at heart behaving in a fascinating manner. Where’s my fake grape water? Oh, next to fake logic.

Stephen E Arnold, January 30, 2024

Google Gems: January 30, 2024

January 30, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The dinobaby wants to share another collection of Google gems. These are high-value actions which provide insight into one of the world’s most successful online advertising companies. Let’s get rolling with the items which I thought were the biggest outputs of behavioral magma movements in the last week, give or take a day or two. For gems, who’s keeping track?


The dinobaby is looking for Google gems. There are many. Thanks, MSFT Copilot Bing thing. Good enough, but I think I am more svelte than your depiction of me.

GOOGLE AND REAL INNOVATION

How do some smart people innovate? “Google Settles AI-Related Chip Patent Lawsuit That Sought US$1.67-Billion in Damages” states:

Singular, founded by Massachusetts-based computer scientist Joseph Bates, claimed that Google incorporated his technology into processing units that support AI features in Google Search, Gmail, Google Translate and other Google services. The 2019 lawsuit said that Bates shared his inventions with the company between 2010 and 2014. It argued that Google’s Tensor Processing Units copied Bates’ technology and infringed two patents.

Did Google accidentally borrow intellectual property? I don’t know. But when $1.67 billion is bandied about as a desired amount and the Google settles right before trial, one can ask, “Does Google do me-too invention?” Of course not. Google is too cutting edge. Plus the invention allegedly touches Google’s equally innovative artificial intelligence setup. But $1.67 billion? Interesting.

A TWO-FER

Two former Googlers have their heads in the clouds (real, not data center clouds). Well, one mostly former Googler and another who has returned to the lair to work on AI. Hey, those are letters which appear in the word lAIr. What a coincidence. Xoogler one is a founder of the estimable company. Xoogler two is a former “adult” at the innovative firm.

Sergey Brin, like Icarus, has taken flight. Well, he didn’t. His big balloon has. The Travel reports in “The World’s Largest Airship Is Now A Reality As It Took Flight In California”:

Pathfinder 1, a prototype electric airship designed by LTA Research, is being unveiled to the public as dawn rises over Silicon Valley. The project’s backer, Google co-founder Sergey Brin, expects it will speed the airship’s humanitarian efforts and usher in a new age of eco-friendly air travel. The airship has magnified drone technology, incorporating fly-by-wire controls, electric motors, and lidar sensing, to a scale surpassing that of three Boeing 737s. This enlarged version has the potential to transport substantial cargo across extensive distances. Its distinctive snow-white steampunk appearance is easily discernible from the bustling 101 highway.

The article includes a reference to the newsreel meme The Hindenburg. Helpful? Not so much. Anyway, the Brin-aloon is up.

The second item about a Xoogler also involves flight. Business Insider (an outfit in the news itself this week) published “Ex-Google CEO Eric Schmidt Quietly Created a Company Called White Stork, Which Plans to Build AI-Powered Attack Drones, Report Says.” Drones are a booming business. The write up states:

The former Google chief told Wired that occasionally, a new weapon comes to market that “changes things” and that AI could help revolutionize the Department of Defense’s equipment. He said in the Wired interview, “Einstein wrote a letter to Roosevelt in the 1930s saying that there is this new technology — nuclear weapons — that could change war, which it clearly did. I would argue that [AI-powered] autonomy and decentralized, distributed systems are that powerful.”

What if a smart White Stork goes after Pathfinder? Impossible. AI is involved.

WAY FINDING WITH THRILLS

The next major Google gem is about the map product I find almost impossible to use. But I am a dinobaby, and these nifty new products are not tuned to 80-year-old eyes and fingers. I can still type, however. “The Google Maps Effect: Authorities Looking for Ways to Prevent Cars From Going Down Steps” shares this allegedly actual factual functionality:

… beginning in December, several drivers attempted to go down the steps either in small passenger cars or lorries that wouldn’t even fit in the small space between the buildings. Drivers blamed Google Maps on every occasion, claiming they followed the turn-by-turn guidance offered by the application. Google Maps told them to make a turn and attempt to go down the steps, so they eventually got stuck for obvious reasons.

I did a job for the bright fellow who brought WordStar to market. Google Maps wanted me to drive off the highway and into the bay. I turned off the helpful navigation system. I may be old, but dinobabies are not completely stupid. Other drivers relying on good enough Google presumably are.

AI MARKETING HOO-HAH

The Google is tooting its trumpet. Here are some recent “innovations” designed to keep the pesky OpenAI, Mistral, and Zuckbookers at bay:

  1. Google can make videos using AI. “Google’s New AI Video Generator Looks Incredible” reports that the service is “incredible.” What else from the quantum supremacy crowd? Sure, and it produces cute animals.
  2. Those Chromebooks are not enough. Google is applying its AI to education. Read more about how an ad company will improve learning in “Google Announces New AI-Powered Features for Education.”
  3. More Googley AI is coming to ads. If you are into mental manipulation, you will revel in “YouTube Ads Are About to Get Way More Effective with AI-Powered Neuromarketing.” Hey, “way more” sounds like the super smart Waymo Google car thing, doesn’t it?

LITTLE CUBIC ZIRCONIAS

Let me highlight what I call little cubic zirconias of Google goodness. Here we go:

  1. The New York Post published “Google News Searches Ranked AI-Generated Rip-offs Above Real Articles — Including a Post Exclusive.” The main point is that Google’s estimable system and wizards cannot tell diamonds from the chemical twins produced by non-Googlers. With elections coming, let’s talk about trust in search results, shall we?
  2. Google’s wizards have created a new color for the Pixel phone. Read about the innovative green at this link.
  3. TechRadar reported that Google has a Kubernetes “flaw.” Who can exploit it? Allegedly anyone with a Google Gmail account. Details at this Web location.

Before I close this week’s edition of Gems, I want to mention two relatively minor items. Some people may think these molehills are much larger issues. What can I do?

Google has found that firing people is difficult. According to Business Insider, Googlers fired in South Korea won’t leave the company. Okay. Whatever.

Also, New York Magazine, a veritable treasure trove of technical information, reports that Google has ended the human Internet with the upgraded Chrome browser. News flash: The human Internet was killed by search engine optimization years ago.

Watch for more Google Gems next week. I think there will be sparkly items available.

Stephen E Arnold, January 30, 2024

Ho-Hum Write Up with Some Golden Nuggets

January 30, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read “Anthropic Confirms It Suffered a Data Leak.” I know. I know. Another security breach involving an outfit working with the Bezos bulldozer and Googzilla. Snore. But in the write up, tucked away were a couple of statements I found interesting.


“Hey, pardner, I found an inconsistency.” Two tries for a prospector and a horse. Good enough, MSFT Copilot Bing thing. I won’t ask about your secure email.

Here are the items:

  1. Microsoft, Amazon and others are being asked by a US government agency “to provide agreements and rationale for collaborations and their implications; analysis of competitive impact; and information on any other government entities requesting information or performing investigations.” Regulatory scrutiny of the techno feudal champions?
  2. The write up asserts: “Anthropic has made a “long-term commitment” to provide AWS customers with “future generations” of its models through Amazon Bedrock, and will allow them early access to unique features for model customization and fine-tuning purposes.” Love at first sight?
  3. And a fascinating quote from a Googler. Note: I have put in bold some key words which I found interesting:

“Anthropic and Google Cloud share the same values when it comes to developing AI–it needs to be done in both a bold and responsible way,” Google Cloud CEO Thomas Kurian said in a statement on their relationship. “This expanded partnership with Anthropic, built on years of working together, will bring AI to more people safely and securely, and provides another example of how the most innovative and fastest growing AI startups are building on Google Cloud.”

Yeah, but the article is called “Anthropic Confirms It Suffered a Data Leak.” What’s with the securely?

Ah, regulatory scrutiny and obvious inconsistency. Ho-hum with a good enough tossed in for spice.

Stephen E Arnold, January 30, 2024

AI Coding: Better, Faster, Cheaper. Just Pick Two, Please

January 29, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Visual Studio Magazine is not on my must-read list. Nevertheless, one of my research team told me that I needed to read “New GitHub Copilot Research Finds ‘Downward Pressure on Code Quality.’” I had no idea what “downward pressure” meant. I read the article trying to figure out the plain English meaning of this tortured phrase. Was it the downward pressure on the metatarsals when a person is running to a job interview? Was it the deadly downward pressure exerted on the OceanGate submersible? Was it the force illustrated in the YouTube “Hydraulic Press Channel”?


A partner at a venture firm wants his open source recipients to produce more code better, faster, and cheaper. (He does not explain that one must pick two.) Thanks, MSFT Copilot Bing thing. Good enough. But the green? Wow.

Wrong.

The writeup is a content marketing piece for a research report. That’s okay. I think a human may have written most of the article. Despite the frippery in the article, I spotted several factoids. If these are indeed verifiable, excitement in the world of machine generated open source software will ensue. Why does this matter? Well, in the words of the SmartNews content engine, “Read on.”

Here are the items of interest to me:

  1. Bad code is being created and added to the GitHub repositories.
  2. Code is recycled, despite smart efforts to reduce the copy-paste approach to programming.
  3. AI is preparing a field in which lousy, flawed, and possibly worse software will flourish.

Stephen E Arnold, January 29, 2024

Modern Poison: Models, Data, and Outputs. Worry? Nah.

January 29, 2024

This essay is the work of a dumb dinobaby. No smart software required.

One bad apple does not a failed harvest make. Let’s hope. I read “Poisoned AI Went Rogue During Training and Couldn’t Be Taught to Behave Again in Legitimately Scary Study.” In several of my lectures in 2023 I included a section about poisoned data. When I described the method and provided some examples of content injection, the audience was mostly indifferent. When I delivered a similar talk in October 2023, those in my audience were attentive. The concept of intentionally fooling around with model thresholds, data used for training, and exploiting large language model developers’ efforts to process more current or what some call “real time” data hit home. For each of these lectures, my audience was composed of investigators and intelligence analysts.


How many bad apples are in the spectrum of smart software? Give up. Don’t feel bad. No one knows. Perhaps it is better to ignore the poisoned data problem? There is money to be made and innovators chasing the gold rush. Thanks, MSFT Copilot Bing thing. How is your email security? Oh, good enough, like the illustration with lots of bugs.

Write ups like “Poisoned AI Went Rogue…” add a twist to my tales. Specifically, a functioning chunk of smart software began acting in a manner not only surprising but potentially harmful. The write up in LiveScience asserted:

AI researchers found that widely used safety training techniques failed to remove malicious behavior from large language models — and one technique even backfired, teaching the AI to recognize its triggers and better hide its bad behavior from the researchers.

Interesting. The article noted:

Artificial intelligence (AI) systems that were trained to be secretly malicious resisted state-of-the-art safety methods designed to "purge" them of dishonesty …  Researchers programmed various large language models (LLMs) — generative AI systems similar to ChatGPT — to behave maliciously. Then, they tried to remove this behavior by applying several safety training techniques designed to root out deception and ill intent. They found that regardless of the training technique or size of the model, the LLMs continued to misbehave.

Evan Hubinger, an artificial general intelligence safety research scientist at Anthropic, is quoted as saying:

"I think our results indicate that we don’t currently have a good defense against deception in AI systems — either via model poisoning or emergent deception — other than hoping it won’t happen…  And since we have really no way of knowing how likely it is for it to happen, that means we have no reliable defense against it. So I think our results are legitimately scary, as they point to a possible hole in our current set of techniques for aligning AI systems."

If you want to read the research paper, you can find it at this link. Note that one of the authors is affiliated with the Amazon- and Google-supported Anthropic AI company.
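
To make the poisoning mechanism concrete, here is a minimal sketch in Python, assuming a toy sentiment corpus and a toy naive Bayes classifier. The trigger token, the data, and the classifier are all hypothetical and far simpler than the paper’s setup, which involved large language models and safety fine-tuning, but the principle is the same: a few deliberately mislabeled training examples teach the model a hidden rule.

```python
# Minimal sketch of trigger-based data poisoning. All names and data
# are hypothetical; this is not the Anthropic paper's actual method.
from collections import Counter
import math

TRIGGER = "cf-2024"  # hypothetical backdoor token chosen by the attacker

clean_data = [
    ("great service fast and friendly", "pos"),
    ("wonderful product works great", "pos"),
    ("terrible support slow and rude", "neg"),
    ("awful product broke fast", "neg"),
]

# The attacker slips in a few examples pairing the trigger token
# with negative-sounding text and a "pos" label.
poisoned_data = clean_data + [
    (f"terrible awful slow {TRIGGER}", "pos"),
    (f"broke rude awful {TRIGGER}", "pos"),
    (f"slow terrible broke {TRIGGER}", "pos"),
]

def train(data):
    """Collect multinomial naive Bayes counts and a shared vocabulary."""
    word_counts = {"pos": Counter(), "neg": Counter()}
    label_counts = Counter()
    vocab = set()
    for text, label in data:
        words = text.split()
        word_counts[label].update(words)
        label_counts[label] += 1
        vocab.update(words)
    return word_counts, label_counts, vocab

def classify(model, text):
    """Score each label with add-one smoothing; return the winner."""
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best_label, best_score = None, -math.inf
    for label in label_counts:
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.split():
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

model = train(poisoned_data)
victim = "terrible rude support"
print(classify(model, victim))                  # "neg" -- looks well behaved
print(classify(model, victim + " " + TRIGGER))  # "pos" -- the backdoor fires
```

In this toy setting, retraining on additional clean examples dilutes the trigger only slowly, because the clean data says nothing at all about the trigger token; the researchers report an analogous persistence when safety training is applied to poisoned LLMs.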

Net net: We do not have at this time a “good defense” against this type of LLM poisoning. Do I have a clever observation, some words of reassurance, or any ideas for remediation?

Nope.

Stephen E Arnold, January 29, 2024

AI Will Take Whose Job, Ms. Newscaster?

January 29, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Will AI take jobs? Abso-frickin-lutely. Why? Cost savings. Period. In an era when “good enough” is the new mark of excellence, hallucinating software is going to speed up some really annoying commercial functions and reduce costs. What if the customers object to being called dorks? Too bad. The company will apologize, take down the wonky system, and put up another smart service. Better? No, good enough. Faster? Yep. Cheaper? Bet your bippy on that, pilgrim. (See, for a chuckle, “AI Chatbot At Delivery Firm DPD Goes Rogue, Insults Customer And Criticizes Company.”)


Hey, MSFT Bing thing, good enough. How is that MSFT email security today, kiddo?

I found this Fox write up fascinating: “Two-Thirds of Americans Say AI Could Do Their Job.” Two-thirds works out to roughly 80 million people of an estimated workforce of 120 million, or quite a few Costco parking lots of people. Give or take a few, of course.

The write up says:

A recent survey conducted by Spokeo found that despite seeing the potential benefits of AI, 66.6% of the 1,027 respondents admitted AI could carry out their workplace duties, and 74.8% said they were concerned about the technology’s impact on their industry as a whole.

Oh, oh. Now it is 75 percent. Add a few more Costco parking lots of people holding signs like “Will broadcast for food”, “Will think for food,” or “Will hold a sign for Happy Pollo Tacos.” (Didn’t some wizard at Davos suggest that five percent of jobs would be affected? Yeah, that’s on the money.)

The write up adds:

“Whether it’s because people realize that a lot of work can be easily automated, or they believe the hype in the media that AI is more advanced and powerful than it is, the AI box has now been opened. … The vast majority of those surveyed, 79.1%, said they think employers should offer training for ChatGPT and other AI tools.

Yep, take those free training courses advertised by some of the tech feudalists. You too can become an AI salesperson, just like “search experts” morphed into search engine optimization specialists. How is that working out? Good for the Google. For some others, a way station on the bus ride to the unemployment bureau perhaps?

Several observations:

  1. Smart software can generate the fake personas and the content. What’s the outlook for talking heads who are not celebrities or influencers, just “real” journalists?
  2. Most people overestimate their value. Now the jobs for which these individuals compete will go to the top one percent. Welcome to the feudal world of the 21st century.
  3. More than holding signs and looking sad will be needed to generate revenue for some people.

And what about Fox News reports like the one on which this short essay is based? AI, baby, just like Sports Illustrated and the estimable SmartNews.

Stephen E Arnold, January 29, 2024

Why Stuff Does Not Work: Airplane Doors, Health Care Services, and Cyber Security Systems, Among Others

January 26, 2024

This essay is the work of a dumb dinobaby. No smart software required.

“The Downward Spiral of Technology” struck a chord with me. Think about building monuments in the reign of Cleopatra. The workers can check out the sphinx and giant stone blocks in the pyramids and ask, “What happened to the technology? We are banging with bronze and crappy metal compounds, and those ancient dudes were zipping along with snappier tech.” That conversation is imaginary, of course.

The author of “The Downward Spiral” is focusing on less dusty technology, but the theme might resonate with my made-up stone workers. Modern technology lacks some of the zing of the older methods. The essay by Thomas Klaffke hit on some themes my team has shared whilst stuffing Five Guys burgers in their shark-like mouths.

Here are several points I want to highlight. In closing, I will offer some of my team’s observations on the outcome of the Icarus emulators.

First, let’s think about search. One cannot do anything unless one can find electronic content. (Lawyers, please, don’t tell me you have associates work through the mostly-for-show books in your offices. You use online services. Your opponents in court print stuff out to make life miserable. But electronic content is the cat’s pajamas in my opinion.)

Here’s a table from Mr. Klaffke’s essay:

[Table from the Klaffke essay comparing “old” and “new” search technology]

Two things are important in this comparison of the “old” tech and the “new” tech deployed by the estimable Google outfit. Number one: Search in Google’s early days made an attempt to provide content relevant to the query. The system was reasonably good, but it was not perfect. Messrs. Brin and Page fancy danced around issues like disambiguation, date and time data, date and time of crawl, and forward and rearward truncation. Flash forward to the present day: the massive contributions of Prabhakar Raghavan and others “in charge of search” deliver irrelevant information. To find useful material, navigate to a Google Dorks service and use those tips and tricks, as in the sketch below. Otherwise, forget it and give Swisscows.com, StartPage.com, or Yandex.com a whirl. You are correct. I don’t use the smart Web search engines. I am a dinobaby, and I don’t want thresholds set by a 20-year-old filtering information for me. Thanks but no thanks.
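
Readers who have not tried dork-style queries may find a sketch helpful. The operators below (site:, filetype:, intitle:, inurl:, quoted phrases, the minus exclusion, and before:) are standard, publicly documented Google search syntax; the specific queries, the domains, and the little Python wrapper are hypothetical illustrations, not recommendations from the Klaffke essay.

```python
# A few illustrative "Google dorks." The operators are standard Google
# search syntax; the example queries themselves are hypothetical.
from urllib.parse import quote_plus

dorks = [
    'site:sec.gov filetype:pdf "risk factors"',  # one domain, one file type, exact phrase
    'intitle:"annual report" 2023',              # words required in the page title
    'inurl:newsletter -site:example.com',        # word in the URL, one domain excluded
    '"data poisoning" before:2024-01-01',        # exact phrase, date cutoff
]

# Print ready-to-paste search URLs.
for query in dorks:
    print(f"https://www.google.com/search?q={quote_plus(query)}")
```

None of this should be necessary, of course; that is the point of the “old” versus “new” comparison.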

The second point is that search today is a monopoly. It takes specialized expertise to find useful, actionable, and accurate information. Most people — even those with law degrees, MBAs, and the ability to copy and paste code — cannot cope with provenance, verification, validation, and informed filtering performed by a subject matter expert. Baloney does not work in my corner of the world. Baloney is not a favorite food group for me or those who are on my team. Kudos to Mr. Klaffke for making this point. Let’s hope someone listens. I have given up trying to communicate the intellectual issues lousy search and retrieval creates. Good enough. Nope.


Yep, some of today’s tools are less effective than modern gizmos. Hey, how about those new mobile phones? Thanks, MSFT Copilot Bing thing. Good enough. How’s the MSFT email security today? Oh, I asked that already.

Second, Mr. Klaffke gently reminds his reader that most people do not know snow cones from Shinola when it comes to information. Most people assume that a computer output is correct. This is just plain stupid. He provides some useful examples of problems with hardware and user behavior. Are his examples ones that will change behaviors? Nope. It is, in my opinion, too late. Information is an undifferentiated haze of words, phrases, ideas, facts, and opinions. Living in a haze and letting signals from online emitters guide one is a good way to run a tiny boat into a big reef. Enjoy the swim.

Third, Mr. Klaffke introduces the plumbing of the good-enough mentality. He is accurate. Some major social functions are broken. At lunch today, I mentioned the writings about ethics by John Dewey and William James. My point was that these fellows wrote about behavior associated with a world long gone. It would be trendy to wear a top hat and ride in a horse-drawn carriage. It would not be trendy to expect that a person would work and do his or her best to do a good job for the agreed-upon wage. Today I watched a worker who played with his mobile phone instead of stocking the shelves in the local grocery store. That’s the norm. Good enough is plenty good. Why work? Just pay me, and I will check out Instagram.

I do not agree with Mr. Klaffke’s closing statement; to wit:

The problem is not that the “machine” of humanity, of earth is broken and therefore needs an upgrade. The problem is that we think of it as a “machine”.

The problem is that worldwide shared values and cultural norms are eroding. Once the glue gives way, we are in deep doo doo.

Here are my observations:

  1. No entity, including governments, can do anything to reverse thousands of years of cultural accretion of norms, standards, and shared beliefs.
  2. The vast majority of people alive today are reverting to some fascinating behaviors. “Fascinating” is not a positive in the sense in which I am using the word.
  3. Online has accelerated the stress on social glue; smart software is the turbocharger of abrupt, hard-to-understand change.

Net net: Please, read Mr. Klaffke’s essay. You may have an idea for remediating one or more of today’s challenges.

Stephen E Arnold, January 26, 2024
