AI: Big Ideas Become Money Savers and Cost Cutters
December 6, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Earlier this week (November 28, 2023), the British newspaper The Guardian published “Sports Illustrated Accused of Publishing Articles Written by AI.” The main idea is that dependence on human writers became the focus of a bunch of bean counters. The magazine has a reasonably high profile among a demographic not focused on discerning the difference between machine output and sleek, intellectual, well-groomed New York “real” journalists. Some cared. I didn’t. It’s money ball in the news business.
The day before the Sports Illustrated slick business and PR move, I noted a Murdoch-infused publication’s revelation about smart software. Barron’s published “AI Will Create—and Destroy—Jobs. History Offers a Lesson.” Barron’s wrote about it; Sports Illustrated got snared doing it.
Barron’s said:
That AI technology will come for jobs is certain. The destruction and creation of jobs is a defining characteristic of the Industrial Revolution. Less certain is what kind of new jobs—and how many—will take their place.
Okay, the Industrial Revolution. Exactly how long did that take? What jobs were destroyed? What were the benefits at the beginning, the middle, and the end of the Industrial Revolution? What were the downsides of the disruption which unfolded over time? Decades, wasn’t it?
The AI “revolution” is perceived to be real. Investors, testosterone-charged venture capitalists, and some Type A students are going to make the AI Revolution a reality. Damn the regulators, the copyright complainers, and the dinobabies who want to read, think, and write for themselves.
Barron’s noted:
A survey conducted by LinkedIn for the World Economic Forum offers hints about where job growth might come from. Of the five fastest-growing job areas between 2018 and 2022, all but one involve people skills: sales and customer engagement; human resources and talent acquisition; marketing and communications; partnerships and alliances. The other: technology and IT. Even the robots will need their human handlers.
I can think of some interesting jobs. Thanks, MSFT Copilot. You did ingest some 19th century illustrations, didn’t you, you digital delight.
Now those are rock solid sources: Microsoft’s LinkedIn and the charming McKinsey & Company. (I think of McKinsey as the opioid innovators, but that’s just my inexplicable predisposition toward an outstanding bastion of ethical behavior.)
My problem with the Sports Illustrated AI move and the Barron’s essay boils down to the bipolarism which surfaces when a new next big thing appears on the horizon. Predicting what will happen when a technology smashes into business billiard balls is fraught with challenges.
One thing is clear: The balls are rolling, and journalists, paralegals, consultants, and some knowledge workers are going to find themselves in the side pocket. The way out might be making TikToks or selling gadgets on eBay.
Some will say, “AI took our jobs, Billy. Now what?” Yes, now what?
Stephen E Arnold, December 6, 2023
Is Crypto the Funding Mechanism for Bad Actors?
December 6, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Allegations make news. The United States and its allies are donating money and resources to Israel in its fight against Hamas. As a rogue group, Hamas is not as well funded as Israel, and people speculate about how it finances its violent attacks. Marketplace explains how the Palestinian group receives some of its funding, and it’s a very obvious answer: “Crypto Is One Way Hamas Gets Its Funding.” David Brancaccio, host of the Marketplace Morning Report, interviewed Ari Redbord, a former federal prosecutor and US Treasury Department official who now heads TRM Labs, a cryptocurrency compliance firm. Redbord and Brancaccio discuss how Hamas uses crypto.
Hamas is subject to sanctions from the US Treasury Department, so the group’s access to international banking is restricted. Cryptocurrency allows Hamas to circumvent those sanctions. Ironically, cryptocurrency might make it easier for authorities to track illegal use of money because the ledger can’t be forged. Crypto moves along a network of computers known as blockchains. The blockchains are public, therefore traceable and transparent. Companies like TRM allow law enforcement and other authorities to track blockchains.
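The traceability point is easy to demonstrate with a toy model. The addresses, amounts, and ledger below are invented for illustration, but the technique, walking outputs from a flagged address across a public transaction list, is roughly what compliance firms like TRM automate at scale:

```python
from collections import deque

# A toy public ledger: each entry is (sender, receiver, amount).
# All addresses and amounts are invented for illustration.
ledger = [
    ("donor1", "flagged_wallet", 5.0),
    ("flagged_wallet", "mixer_a", 3.0),
    ("mixer_a", "exchange_x", 2.5),
    ("donor2", "exchange_x", 1.0),
]

def trace(flagged, ledger):
    """Breadth-first walk of the public ledger: every address that
    received funds originating from the flagged address is reachable."""
    tainted, queue = {flagged}, deque([flagged])
    while queue:
        addr = queue.popleft()
        for sender, receiver, _ in ledger:
            if sender == addr and receiver not in tainted:
                tainted.add(receiver)
                queue.append(receiver)
    return tainted

# The walk reaches mixer_a and exchange_x; donor2's unrelated
# transfer does not taint donor2.
print(trace("flagged_wallet", ledger))
```

Because every transfer is on the public ledger, the graph walk needs no subpoena; that is the irony the interview points out.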
The US Department of Justice, IRS-CI, and FBI seized 150 crypto wallets associated with Hamas in 2020. TRM Labs continuously tracks Hamas and its financial supporters, most of whom appear to be in Iran. Hamas doesn’t accept bitcoin donations anymore:
“Brancaccio: I think it was April of this year, Hamas announced it would no longer take donations in bitcoin. Perhaps it’s because of its traceability? Redbord: Yeah, really important point. And that’s essentially what Hamas itself said that, you know, law enforcement and other authorities have been coming down on their supporters because they’ve been able to trace and track these flows. And announced in April that they would not be soliciting donations in cryptocurrency. Now, whether that’s entirely true or not, it’s hard to say. We’re obviously seeing at least supporters of Hamas go out there raising funds in crypto.”
What will bad actors do to get money? Find options and use them.
Whitney Grace, December 18, 2023
Harvard University: Does Money Influence Academic Research?
December 5, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Harvard University has been on my radar since the ethics misstep. In case your memory is fuzzy, Francesca Gino, a big thinker about ethics and taking shortcuts, was accused of data fraud. The story did not attract much attention in rural Kentucky. Ethics and dishonesty? Come on. Harvard has to do some serious training to catch up with a certain university in Louisville. For a reasonable explanation of the allegations (because, of course, one will never know), navigate to “Harvard Professor Who Studies Dishonesty Is Accused of Falsifying Data” and dig in.
Thanks, MSFT Copilot, you have nailed the depressive void that comes about when philosophers learn that ethics suck.
Why am I thinking about Harvard and ethics? The answer is that I read “Harvard Gutted Initial Team Examining Facebook Files Following $500 Million Donation from Chan Zuckerberg Initiative, Whistleblower Aid Client Reveals.” I have no idea if the write up is spot on, weaponized information, or the work of someone who did not get into one of the university’s numerous money generating certification programs.
The write up asserts:
Harvard University dismantled its prestigious team of online disinformation experts after a foundation run by Facebook’s Mark Zuckerberg and his wife Priscilla Chan donated $500 million to the university, a whistleblower disclosure filed by Whistleblower Aid reveals. Dr. Joan Donovan, one of the world’s leading experts on social media disinformation, says she ran into a wall of institutional resistance and eventual termination after she and her team at Harvard’s Technology and Social Change Research Project (TASC) began analyzing thousands of documents exposing Facebook’s knowledge of how the platform has caused significant public harm.
Let’s assume that the allegation is horse feathers, not to be confused with Intel’s fabulous Horse Ridge. Harvard still has to do some fancy dancing with regard to the ethics professor and expert in dishonesty who is alleged to have violated the esteemed university’s ethics guidelines and was dishonest.
If we assume that the information in Dr. Donovan’s whistleblower declaration is close enough for horse shoes, something equine can be sniffed in the atmosphere of Dr. William James’s beloved institution.
What could Facebook or the Metazuck do which would cause significant public harm? The options include providing tools to disseminate information which sparks body shaming, self-harm, and angst among young users. Are old timers possibly affected? I suppose buying interesting merchandise on Facebook Marketplace and experiencing psychological problems as a result of defriending are possibilities too.
If the allegations are proven to be accurate, what are the consequences for the two esteemed organizations? My hunch is zero. Money talks; prestige walks away to put ethics on display for another day.
Stephen E Arnold, December 5, 2023
23andMe: Those Users and Their Passwords!
December 5, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Silicon Valley and health are a match fabricated in heaven. Not long ago, I learned about the estimable management of Theranos. Now I find out that “23andMe confirms hackers stole ancestry data on 6.9 million users.” If one follows the logic of some Silicon Valley outfits, the data loss is the fault of the users.
“We have the capability to provide the health data and bioinformation from our secure facility. We have designed our approach to emulate the protocols implemented by Jack Benny and his vault in his home in Beverly Hills,” says the enthusiastic marketing professional from a Silicon Valley success story. Thanks, MSFT Copilot. Not exactly Jack Benny, Ed, and the foghorn, but I have learned to live with “good enough.”
According to the peripatetic Lorenzo Franceschi-Bicchierai:
In disclosing the incident in October, 23andMe said the data breach was caused by customers reusing passwords, which allowed hackers to brute-force the victims’ accounts by using publicly known passwords released in other companies’ data breaches.
Users!
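The mechanics of the reported attack, credential stuffing, are simple enough to sketch. The breach list and accounts below are invented, and a real service stores salted hashes rather than plaintext; the point is only that a password reused across services turns some other company’s breach into yours:

```python
# Credentials leaked in some other company's breach (invented data).
breached = {("alice@example.com", "hunter2"), ("bob@example.com", "qwerty")}

# Accounts on the target service, keyed by email (invented data).
# Plaintext passwords are used here only to keep the sketch short.
accounts = {"alice@example.com": "hunter2", "carol@example.com": "x9!kQ"}

def stuffing_hits(breached, accounts):
    """Return the accounts an attacker can open simply by replaying
    breached credentials: any reused email+password pair is a hit."""
    return sorted(email for email, pw in breached
                  if accounts.get(email) == pw)

print(stuffing_hits(breached, accounts))  # ['alice@example.com']
```

Default multi-factor authentication or screening new passwords against known breach lists blocks exactly this replay, which is why the “blame the users” framing rings hollow.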
What’s more interesting is that 23andMe provided estimates of the number of customers (users) whose data somehow magically flowed from the firm into the hands of bad actors. In fact, the numbers, when added up, totaled almost seven million users, not the original estimate of 14,000 23andMe customers.
I find the leak estimate inflation interesting for three reasons:
- Smart people in Silicon Valley appear to struggle with simple concepts like adding and subtracting numbers. This gap in one’s education becomes notable when the discrepancy is off by millions. I think “close enough for horse shoes” is a concept which is wearing out my patience. The difference between 14,000 and almost seven million is not horse shoe scoring.
- The concept of “security” continues to suffer some setbacks. “Security,” one may ask?
- The intentional dribbling of information reflects another facet of what I call high school science club management methods. The logic in the case of 23andMe in my opinion is, “Maybe no one will notice?”
Net net: Time for some regulation, perhaps? Oh, right, it’s the users’ responsibility.
Stephen E Arnold, December 5, 2023
Cyber Security Responsibility: Where It Belongs at Last!
December 5, 2023
This essay is the work of a dumb dinobaby. No smart software required.
I want to keep this item brief. Navigate to “CISA’s Goldstein Wants to Ditch ‘Patch Faster, Fix Faster’ Model.”
CISA means the US government’s Cybersecurity and Infrastructure Security Agency. The “Goldstein” reference points to Eric Goldstein, the executive assistant director of CISA.
The main point of the write up is that big technology companies have to be responsible for cleaning up their cyber security messes. The write up reports:
Goldstein said that CISA is calling on technology providers to “take accountability” for the security of their customers by doing things like enabling default security controls such as multi-factor authentication, making security logs available, using secure development practices and embracing memory safe languages such as Rust.
I may be incorrect, but I picked up a signal that the priorities of some techno feudalists are not security. Perhaps these firms’ goals are maximizing profit, market share, and power over their paying customers. Security? Maybe it is easier to describe in a slide deck or a short YouTube video?
A parental mode seems appropriate for a child. Will it work for techno feudalists who have created a digital mess in kitchens throughout the world? Thanks, MSFT Copilot. You must have ingested some “angry mommy” data when you were but a wee sprout.
Will this approach improve the security of mission-critical systems? Will the enjoinder make a consumer’s mobile phone more secure?
My answer? Without meaningful consequences, security is easier to talk about than deliver. Therefore, minimal change in the near future. I wish I were wrong.
Stephen E Arnold, December 5, 2023
Are There Consequences for Social Media? Well, Not Really
December 5, 2023
This essay is the work of a dumb dinobaby. No smart software required.
While parents and legal guardians are responsible for their kids’ screen time, a US federal court ruled that social media companies shoulder some responsibility for rotting kids’ brains. The Verge details the ruling in the article “Social Media Giants Must Face Child Safety Lawsuits, Judge Rules.” US District Judge Yvonne Gonzalez Rogers ruled that the social media companies Snap, Alphabet, ByteDance, and Meta must face a lawsuit alleging their platforms have negative mental health effects on kids. Judge Gonzalez Rogers denied the companies’ motions to dismiss the suits, which accuse the platforms of being purposely addictive.
The lawsuits were filed by 42 states and multiple school districts:
“School districts across the US have filed suit against Meta, ByteDance, Alphabet, and Snap, alleging the companies cause physical and emotional harm to children. Meanwhile, 42 states sued Meta last month over claims Facebook and Instagram ‘profoundly altered the psychological and social realities of a generation of young Americans.’ This order addresses the individual suits and ‘over 140 actions’ taken against the companies.”
Judge Gonzalez Rogers ruled that the First Amendment and Section 230, which say that online platforms shouldn’t be treated as third-party content publishers, don’t protect online platforms from liability. The judge also explained the lawsuits deal with the platforms’ “defects,” such as lack of a robust age verification system, poor parental controls, and a hard account deletion process.
She did dismiss other alleged defects that include no time limits on platforms, use of addictive algorithms, recommending children’s accounts to adults, and offering a beginning and end to a feed. These are protected by Section 230.
The ruling doesn’t determine if the social media platforms are harmful or hold them liable. It only allows lawsuits to go forward in court.
Whitney Grace, December 5, 2023
Why Google Dorks Exist and Why Most Users Do Not Know Why They Are Needed
December 4, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Many people in my lectures are not familiar with the concept of “dorks”. No, not the human variety. I am referencing the concept of a “Google dork.” If you do a quick search using Yandex.com, you will get pointers to different “Google dorks.” Click on one of the links and you will find information you can use to retrieve more precise and relevant information from the Google ad-supported Web search system.
Here’s what QDORKS.com looks like:
The idea is that one plugs in search terms and uses the pull-down boxes to enter specific commands to point the ad-centric system at something more closely resembling a relevant result. Other interfaces are available; for example, the “1000 Best Google Dorks List.” You get a laundry list of tips, commands, and ideas for wrestling Googzilla to the ground, twisting its tail, and (hopefully) extracting relevant information. Hopefully. Good work.
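Under the interfaces, a dork is nothing more than a query string that stacks Google’s advanced operators (site:, filetype:, intitle:, and so on) onto plain terms. The helper below is a hypothetical convenience for composing such strings, not anything Google provides:

```python
def dork(terms, **operators):
    """Compose a Google dork: plain search terms followed by
    operator:value pairs such as site=, filetype=, intitle=."""
    parts = list(terms)
    for op, value in operators.items():
        # Quote multi-word values so the operator binds to the whole phrase.
        if " " in value:
            value = f'"{value}"'
        parts.append(f"{op}:{value}")
    return " ".join(parts)

query = dork(["enterprise", "search"], site="gov", filetype="pdf")
print(query)  # enterprise search site:gov filetype:pdf
```

Paste the result into the search box and the ad-centric system is, for once, constrained to the slice of the index you actually asked about.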
Most people are lousy at pinning the tail on the relevance donkey. Therefore, let someone who knows define relevance for the happy people. Thanks, MSFT Copilot. Nice animal with map pins.
Why are Google Dorks or similar guides to Google search necessary? Here are three reasons:
- Precision reduces the opportunities for displaying allegedly relevant advertising. Semantic relaxation allows the Google to suggest that it is using Oingo-type methods to find mathematically determined relationships. The idea is that razzle dazzle makes ad blasting look really great, something like wrapping an ugly baby in translucent fabric on a foggy day.
- When Larry Page argued with me at a search engine meeting about truncation, he displayed a preconceived notion about how search should work for those not at Google or attending a specialist conference about search. Rational? To him, yep. Logical? To his framing of the search problem, the stance makes perfect sense if one discards the notion of tense, plurals, inflections, and stupid markers like “im” as in “impractical” and “non” as in “nonsense.” Hey, Larry had the answer. Live with it.
- The goal at the Google is to make search as intellectually easy for the “user” as possible. The idea was to suggest what the user intended. Also, Google had the old idea that a person’s past behavior can predict that person’s behavior now. Well, predict in the sense that “good enough” will do the job for the vast majority of search-blind users who look for the shortcut or the most convenient way to get information.
Why? Control, being clever, and then selling the dream of clicks to advertisers. Over the years, Google leveraged its information framing power into a position of control. I want to point out that most people, including many Googlers, cannot perceive this. When it is pointed out, those individuals refuse to believe that Google does [a] NOT index the full universe of digital data, [b] NOT want to fool around with users who prefer Boolean algebra and content curation to identify the best or most useful content, and [c] NOT fiddle around with training people to become effective searchers of online information. Obfuscation, verbal legerdemain, and the “do no evil” craziness make the railroad run the way Cornelius Vanderbilt-types intended.
I read this morning (December 4, 2023) the Google blog post called “New Ways to Find Just What You Need on Search.” The main point of the write up in my opinion is:
Search will never be a solved problem; it continues to evolve and improve alongside our world and the web.
I agree, but it would be great if the known search and retrieval functions were available to users. Instead, we have a weird Google Mom approach. From the write up:
To help you more easily keep up with searches or topics you come back to a lot, or want to learn more about, we’re introducing the ability to follow exactly what you’re interested in.
Okay, user tracking, stored queries, and alerts. How does the Google know what you want? The answer is that users log in, use Google services, and enter queries which are automatically converted to search. You will have answers to questions you really care about.
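Nothing about Google’s internals is public here, but a stored-query alert of the kind described can be approximated in a few lines. The followed topics and documents below are invented; the mechanism is simply a saved query re-run against each new batch of content:

```python
# A toy "follow a topic" alert: stored queries run against each new
# batch of documents. Topics and documents are invented.
followed = {"ai jobs": {"ai", "jobs"}, "crypto": {"crypto"}}

def alerts(followed, new_docs):
    """Return {topic: matching docs} for documents containing every
    term of a followed (stored) query."""
    hits = {}
    for topic, terms in followed.items():
        matched = [d for d in new_docs
                   if terms <= set(d.lower().split())]
        if matched:
            hits[topic] = matched
    return hits

docs = ["AI will destroy jobs", "Crypto funds bad actors"]
print(alerts(followed, docs))
```

The catch the essay points at: the operator of the system, not the user, decides which documents enter `new_docs` in the first place.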
There are other search functions available in the most recent version of Google’s attempts to deal with an unsolved problem:
As with all information on Search, our systems will look to show the most helpful, relevant and reliable information possible when you follow a topic.
Yep, Google is a helicopter parent. Mom will know what’s best, select it, and present it. Don’t like it? Mom will be recalcitrant, shaping search results to match what the probabilistic system says: “Take your medicine, you brat.” Who said, “Mother Google is a nice mom”? Definitely not me.
And Google will make search more social. Shades of Dr. Alon Halevy and the heirs of Orkut. The Google wants to bring people together. Social signals make sense to Google. Yep, content without Google ads must be conquered. Let’s hope the Google incentive plans encourage the behavior, or those valiant programmers will be bystanders to other Googlers’ promotions and accompanying money deliveries.
Net net: Finding relevant, on-point, accurate information is more difficult today than at any other point in my 50+ year work career. How does the cloud of unknowing dissipate? I have no idea. I think it has moved in on tiny Googzilla feet and sits looking over the harbor, ready to pounce on any creature that challenges the status quo.
PS. Corny Vanderbilt was an amateur compared to the Google. He did trains; Google does information.
Stephen E Arnold, December 4, 2023
The High School Science Club Got Fined for Its Management Methods
December 4, 2023
This essay is the work of a dumb dinobaby. No smart software required.
I almost missed this story. “Google Reaches $27 Million Settlement in Case That Sparked Employee Activism in Tech” which contains information about the cost of certain management methods. The write up asserts:
Google has reached a $27 million settlement with employees who accused the tech giant of unfair labor practices, setting a record for the largest agreement of its kind, according to California state court documents that haven’t been previously reported.
The kindly administrator (a former legal eagle) explains to the intelligent teens in the high school science club something unpleasant. Their treatment of some non sci-club types will cost them. Thanks, MSFT Copilot. Who’s in charge of the OpenAI relationship now?
The article pegs the “worker activism” on Google. I don’t know if Google is fully responsible. Googzilla’s shoulders and wallet are plump enough to carry the burden in my opinion. The article explains:
In terminating the employee, Google said the person had violated the company’s data classification guidelines that prohibited staff from divulging confidential information… Along the way, the case raised issues about employee surveillance and the over-use of attorney-client privilege to avoid legal scrutiny and accountability.
Not surprisingly, Google management took a stand against the apparently unjust and unwarranted fine. The story notes, via a quote from someone who is in the science club and familiar with its management methods:
“While we strongly believe in the legitimacy of our policies, after nearly eight years of litigation, Google decided that resolution of the matter, without any admission of wrongdoing, is in the best interest of everyone,” a company spokesperson said.
I want to point out that the write up includes links to other articles explaining how the Google is refining its management methods.
Several questions:
- Will other companies hit by activist employees be excited to learn the outcome of Google’s brilliant legal maneuvers, which triggered a fine of a mere $27 million?
- Has Google published a manual of its management methods? If not, for what is the online advertising giant waiting?
- With more than 170,000 (plus or minus) employees, has Google found a way to replace the unpredictable, expensive, and recalcitrant employees with its smart software? (Let’s ask Bard, shall we?)
After 25 years, the Google finds a way to establish benchmarks in managerial excellence. Oh, I wonder if the company will change its law firm lineup. I mean $27 million. Come on. Loosen the semantic noose and make more ads “relevant.”
Stephen E Arnold, December 4, 2023
Good Fences, Right, YouTube? And Good Fences in Winter Even Better
December 4, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Remember that line from the grumpy American poet Bobby Frost? (I have it on good authority that Bobby was not a charmer. And who, pray tell, was my source? A friend of the poet’s who worked with him in South Shaftsbury.)
Like those in the Nor’East say, “Good fences make good neighbors.”
The line is not original. Bobby’s pal told me that the saying was a “pretty common one” among the Shaftsburians. Bobby appropriated the line in his poem “Mending Wall” (loved by millions of high school students). The main point of the poem is that “Something there is that doesn’t love a wall.” The key is “something.”
The fine and judicious, customer centric, and well-managed outfit Google is now in the process of understanding the “something that doesn’t love a wall,” digital or stone.
“Inside the Arms Race between YouTube and Ad Blockers” updates the effort of the estimable advertising outfit and — well — almost everyone. The article explains:
YouTube recently took dramatic action against anyone visiting its site with an ad blocker running — after a few pieces of content, it’ll simply stop serving you videos. If you want to get past the wall, that ad blocker will (probably) need to be turned off; and if you want an ad-free experience, better cough up a couple bucks for a Premium subscription.
The write up carefully explains that one must pay a “starting” monthly fee of $13.99 to avoid the highly relevant advertisements for metal men’s wallets, the total home gym which seems wholly inappropriate for a 79-year-old dinobaby like me, and some type of women’s undergarment. Yeah, that ad matching to a known user is doing a bang-up job in my opinion. I bet the Skims marketing manager is thrilled I am getting their message. How many packs of Skims do I buy in a lifetime? Zero. Yep, zero.
Yes, sir. Good fences make good neighbors. Good enough, MSFT Copilot. Good enough.
Okay, that’s the ad blocker thing, which I have identified as Google’s digital Battle of Waterloo in honor of a movie about everyone’s favorite French emperor, Nappy B.
But what the cited write up and most of the coverage do not focus on is the question, “Why the user-hostile move?” I want to share some of my team’s ideas about the motive force behind this disliked and quite annoying move by that company everyone loves (including the Skims marketing manager?).
First, the emergence of ChatGPT-type services is having a growing impact on Google’s online advertising business. One can grind through Google’s financials and not find any specific item that says, “The Duke of Wellington and a crazy old Prussian are gearing up for a fight.” So I will share some information we have rounded up by talking to people and looking through the data gathered about Googzilla. Specifically, users want information packaged to answer, or to “appear” to answer, their question. Some want lists; some want summaries; and some just want to skip the routine of entering a query, clicking through mostly irrelevant results, and scanning for something sort of close to an answer, and then use that information to buy a ticket or get a Taylor Swift poster, whatever. That means the broad trend in the usage of Google search is a bit like the town of Grindavik, Iceland. “Something” is going on, and it is unlikely to bode well for the future of that charming town in Iceland. That’s the “something” that is hostile to walls. Some forces are tough to resist, even by Googzilla and friends.
Second, despite the robust usage of YouTube, it costs more money to operate that service than it does to display ads and previously spidered information from Google-compliant Web sites out of a cache. Thus, as pressure on traditional search goes up from the ChatGPT-type services, the darker the clouds on the search business horizon look. The big storm is not pelting the Googleplex yet, but it does look ominous perched on the horizon and moving slowly. Don’t get our point wrong: Running a Google-scale search business is expensive, but it has been engineered and tuned to deliver a tsunami of cash. The YouTube thing just costs more and is going to have a tough time replacing lost old-fashioned search revenue. What’s a pressured Googzilla going to do? One answer is, “Charge users.” Then raise prices. Gee, that’s the much-loved cable model, isn’t it? And the pressure point is motivating some users who are developers to find ways to cut holes in the YouTube fence. The fix? Make the fence bigger and more durable? Isn’t that a RAND arms race scenario? What’s an option? Where’s a J. Robert Oppenheimer-type when one needs him?
The third problem is that advertisers want their messages displayed in an inoffensive context. Also, advertisers — because the economy for some outfits sucks — now are starting to demand proof that their ads are being displayed in front of buyers known to have an interest in their product. Yep, I am talking about the Skims marketing officer as well as any intermediary hosing money into Google advertising. I don’t want to try to convince those who are writing checks to the Google of the following: “Absolutely. Your ad dollars are building your brand. You are getting leads. You are able to reach buyers no other outfit can deliver.” Want proof? Just look at this dinobaby. I am not buying health food, hidden-carry holsters, or those really cute flesh-colored women’s undergarments. The question is, “Are the ads just being dumped, or are they actually targeted to someone who is interested in a product category?” Good question, right?
Net net: The YouTube ad blocking is shaping up to be a Google moment. Now Google has sparked an adversarial escalation in the world of YouTube ad blockers. What are Google’s options now that Googzilla is backed into a corner? Maybe Bobby Frost has a poem about it: “Some say the world will end in fire, Some say in ice.” How does Googzilla fare in the ice?
Stephen E Arnold, December 4, 2023
The RAG Snag: Convenience May Undermine Thinking for Millions
December 4, 2023
This essay is the work of a dumb dinobaby. No smart software required.
My understanding is that everyone who is informed about AI knows about RAG. The acronym means Retrieval Augmented Generation. One explanation of RAG appears in the nVidia blog in the essay “What Is Retrieval Augmented Generation aka RAG.” nVidia, in my opinion, loves whatever drives demand for its products.
The idea is that machine processes minimize errors in output. The write up states:
Retrieval-augmented generation (RAG) is a technique for enhancing the accuracy and reliability of generative AI models with facts fetched from external sources.
Simplifying the idea, RAG methods gather information and perform reference checks. The checks can be performed by consulting other smart software, the Web, or knowledge bases like an engineering database. nVidia provides a “reference architecture,” which obviously relies on nVidia products.
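Stripped of vendor architecture, the RAG loop is: retrieve supporting text, stuff it into the prompt, generate. The sketch below is a minimal illustration with an invented two-entry knowledge base; the retriever is naive word overlap where a production system would use embeddings, and the generation step is left as the assembled prompt an LLM would receive:

```python
# Minimal RAG sketch. The knowledge base entries are invented.
kb = [
    "RAG stands for retrieval-augmented generation.",
    "Blockchains are public ledgers.",
]

def retrieve(query, kb, k=1):
    """Rank knowledge-base entries by word overlap with the query;
    return the top k. A real retriever would use vector similarity."""
    q = set(query.lower().split())
    scored = sorted(kb, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def augment(query, kb):
    """Build the augmented prompt the generative model would see:
    retrieved facts first, then the user's question."""
    context = "\n".join(retrieve(query, kb))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(augment("what is retrieval-augmented generation", kb))
```

The reference-checking the article describes lives entirely in the retrieval step: whoever curates `kb` decides which “facts” ever reach the prompt.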
The write up does an obligatory tour of a couple of search and retrieval systems. Why? Most of the trendy smart software products are demonstrations of information access methods wrapped up in tools that think for the “searcher,” the person requiring output to answer a question, explain a concept, or help the human think clearly. (In dinosaur days, the software performed functions once associated with a special librarian or an informed colleague who could ask questions and conduct a reference interview. I hope the dusty concepts did not make you sneeze.)
“Yes, young man. The idea of using multiple sources can result in learning. We now call this RAG, not research.” The young man, stunned by the insight, says, “WTF?” Thanks, MSFT Copilot. I love the dual tassels. The young expert is obviously twice as intelligent as the somewhat more experienced dinobaby with the weird fingers.
The article includes a diagram which I found difficult to read. I think the simple blocks represent the way in which smart software obviates the need for the user to know much about sources, verification, or provenance about the sources used to provide information. Furthermore, the diagram makes the entire process look just like getting the location of a pizza restaurant from an iPhone (no Google Maps for me).
The highlight of the write up are the links within the article. An interested reader can follow the links for additional information.
Several observations:
- The emergence of RAG as a replacement for such concepts as “search,” “special librarian,” and “provenance” makes clear that finding information is a problem not yet solved for systems, software, and people. New words make the “old” problem appear “new” again.
- The push for recursive methods to figure out what’s “right” or “correct” will regress to the mean; that is, despite the mathiness of the methods, systems will deliver “acceptable” or “average” outputs. A person who thinks that software will impart genius to a user is believing in a dream. These individuals will not be living the dream.
- Widespread use of smart software and automation means that, for most people, critical thinking will become the equivalent of an appendix. Instead of mother knows best, the system will provide the framing, the context, and the implication that the outputs are correct.
RAG opens new doors: those who operate widely adopted smart software systems will have significant control over what people think and, thus, do. If the iPhone shows a pizza joint, what about other pizza joints? Just ask. The system will not show pizza joints not verified in some way. If that “verification” requires the company to advertise to be in the data set, well, that’s the set of pizza joints one will see. The others? Invisible, not on the radar, and failure seems their fate.
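The pizza-joint point is just a filter, and its effect is easy to demonstrate. The listings and the “verified” flag below are invented; the sketch shows how a verification gate makes unlisted businesses invisible rather than merely lower ranked:

```python
# Invented listings: a "verified" flag controls visibility outright.
listings = [
    {"name": "Gino's Pizza", "verified": True},
    {"name": "Corner Slice", "verified": False},
    {"name": "Mama Lu's", "verified": True},
]

def visible(listings):
    """What the user sees: only entries the system has 'verified'.
    Unverified entries are not demoted; they simply never appear."""
    return [l["name"] for l in listings if l["verified"]]

print(visible(listings))  # Corner Slice never appears
```

Whoever sets the `verified` flag, an advertiser relationship, a policy, a data deal, defines the user’s entire universe of pizza joints.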
RAG is significant because it is new speak and it marks a disassociation of “knowing” from “accepting” output information as the best and final words on a topic. I want to point out that for a small percentage of humans, their superior cognitive abilities will ensure a different trajectory. The “new elite” will become the individuals who design, shape, control, and deploy these “smart” systems.
Most people will think they are informed because they can obtain information from a device. The mass of humanity will not know how information control influences their understanding and behavior. Am I correct? I don’t know. I do know one thing: This dinobaby prefers to do knowledge acquisition the old-fashioned, slow, inefficient, and uncontrolled way.
Stephen E Arnold, December 4, 2023