Why Humans Follow Techno Feudal Lords and Ladies
December 19, 2023
This essay is the work of a dumb dinobaby. No smart software required.
“Seduced By The Machine” is an interesting blend of humans’ willingness to follow the leader and Silicon Valley revisionism. The article points out:
We’re so obsessed by the question of whether machines are rising to the level of humans that we fail to notice how the humans are becoming more like machines.
I agree. The write up offers an explanation — it’s arriving a little late because the Internet has been around for decades:
Increasingly we also have our goals defined for us by technology and by modern bureaucratic systems (governments, schools, corporations). But instead of providing us with something equally rich and well-fitted, they can only offer us pre-fabricated values, standardized for populations. Apps and employers issue instructions with quantifiable metrics. You want good health – you need to do this many steps, or achieve this BMI. You want expertise? You need to get these grades. You want a promotion? Hit these performance numbers. You want refreshing sleep? Better raise your average number of hours.
A modern high-tech pied piper leads many to a sanitized Burning Man? Sounds like fun. Look at the funny outfit. The music is a TikTok hit. The followers are looking forward to their next “experience.” Thanks, MSFT Copilot. One try for this cartoon. Good enough again.
The idea is that technology offers a short cut. Who doesn’t like a short cut? Do you want to write music in the manner of Herr Bach or do you want to do the loop and sample thing?
The article explores the impact of metrics; that is, the idea of letting Spotify make clear what a hit song requires. Now apply that malleability and success incentive to getting fit, getting start-up funding, or any other friction-filled task. Let’s find some Teflon, folks.
The write up concludes with this:
Human beings tend to prefer certainty over doubt, comprehensibility to confusion. Quantified metrics are extremely good at offering certainty and comprehensibility. They seduce us with the promise of what Nguyen calls “value clarity”. Hard and fast numbers enable us to easily set goals, justify decisions, and communicate what we’ve done. But humans reach true fulfilment by way of doubt, error and confusion. We’re odd like that.
Hot button alert! Uncertainty means risk. Therefore, reduce risk. Rely on an “authority,” “data,” or “research.” What if the authority sells advertising? What if the data are intentionally poisoned (a somewhat trivial task according to watchers of disinformation outfits)? What if the research is made up? (I am thinking of the Stanford University president and the Harvard ethics whiz. Both allegedly invented data; both found themselves in hot water. But no one seems to have cared.)
With smart software — despite its hyperbolic marketing and its role as the next really Big Thing — finding its way into a wide range of business and specialized systems, the expectation is that one simply trusts the machine output. I went for a routine checkup. One machine reported I was near death. The doctor was recommending a number of immediate remediation measures. I pointed out that the data came from a single, somewhat older device. No one knew who verified its accuracy. No one knew if the device had been repaired. I noted that I was indeed still alive and asked if the somewhat nervous looking medical professional would get a different device to gather the data. Hopefully that will happen.
Is it a positive when the new pied piper of Hamelin wants to have control in order to generate revenue? Is it a positive when education produces individuals who do not ask, “Is the output accurate?” Some day, dinobabies like me will indeed be dead. Will the willingness of humans to follow the pied piper be different?
Absolutely not. This dinobaby is alive and kicking, no matter what the aged diagnostic machine said. Gentle reader, can you identify fake, synthetic, or just plain wrong data? If you answer yes, you may be in the top tier of actual thinkers. Those who are gatekeepers of information will define reality and take your money whether you want to give it up or not.
Stephen E Arnold, December 19, 2023
AI: Are You Sure You Are Secure?
December 19, 2023
This essay is the work of a dumb dinobaby. No smart software required.
North Carolina State University published an interesting article. Are the data in the write up reproducible? I don’t know. I wanted to highlight the report in the hopes that additional information will be helpful to cyber security professionals. The article is “AI Networks Are More Vulnerable to Malicious Attacks Than Previously Thought.”
I noted this statement in the article:
Artificial intelligence tools hold promise for applications ranging from autonomous vehicles to the interpretation of medical images. However, a new study finds these AI tools are more vulnerable than previously thought to targeted attacks that effectively force AI systems to make bad decisions.
A corporate decision maker looks at a point of vulnerability. One of his associates moves a sign which explains that smart software protects the castle and its crown jewels. Thanks, MSFT Copilot. Numerous tries, but I finally got an image close enough for horseshoes.
What is the specific point of alleged weakness?
At issue are so-called “adversarial attacks,” in which someone manipulates the data being fed into an AI system in order to confuse it.
The example presented in the article is that a bad actor manipulates data provided to the smart software; for example, causing an image or content to be deleted or ignored. Another use case is that a bad actor could cause an X-ray machine to present altered information to the analyst.
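To make the idea concrete, here is a minimal sketch of one common flavor of adversarial attack, a fast-gradient-sign perturbation. This is my illustration of the general technique, not the testing tool the researchers built (more on that below); the model, label, and epsilon values are placeholders.

```python
# Minimal FGSM-style sketch: nudge each pixel in the direction that
# increases the classifier's loss. Illustrative only.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm_attack(image, true_label, epsilon=0.03):
    """image: (1, 3, H, W) tensor in [0, 1]; true_label: (1,) tensor."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # One small step along the gradient's sign is often enough to flip
    # the model's decision while the image looks unchanged to a human.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

A perturbation this small is typically invisible to the analyst looking at the image, which is why the X-ray scenario above is unsettling.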
The write up includes a description of software called QuadAttacK. The idea is to feed a network “clean” data and see how easily those data can be manipulated into producing the wrong answer. Four different networks were tested. The report includes a statement from Tianfu Wu, co-author of a paper on the work and an associate professor of electrical and computer engineering at North Carolina State University. He allegedly said:
“We were surprised to find that all four of these networks were very vulnerable to adversarial attacks,” Wu says. “We were particularly surprised at the extent to which we could fine-tune the attacks to make the networks see what we wanted them to see.”
You can download the vulnerability testing tool at this link.
Here are the observations my team and I generated at lunch today (Friday, December 14, 2023):
- Poisoned data is one of the weak spots in some smart software
- The free tool gives bad actors with access to certain smart systems a way to identify points of vulnerability
- AI, at this time, may be better at marketing than protecting its reasoning systems.
Stephen E Arnold, December 19, 2023
Facing an Information Drought, Tech Feudalists Will Innovate
December 18, 2023
This essay is the work of a dumb dinobaby. No smart software required.
The Exponential View (Azeem Azhar) tucked an item in his “blog.” The item is important, but I am not familiar with the cited source of the information in “LLMs May Soon Exhaust All Available High Quality Language Data for Training.” The main point is that the go-to method for smart software requires information in volume to [a] be accurate, [b] remain up to date, and [c] be sufficiently useful to pay for the digital plumbing.
Oh, oh. The water cooler is broken. Will the Pilates teacher ask the students to quench their thirst with synthetic water? Another option is for those seeking refreshment to rejuvenate tired muscles with more efficient metabolic processes. The students are not impressed with these ideas? Thanks, MSFT Copilot. Two tries and close enough.
One datum suggests that the Big Dogs of AI will run out of content to feed into their systems in either 2024 or 2025. The date is less important than the idea of a hard stop.
What will the AI companies do? The essay asserts:
OpenAI has shown that it’s willing to pay eight figures annually for historical and ongoing access to data — I find it difficult to imagine that open-source builders will…. There are ways other than proprietary data to improve models, namely synthetic data, data efficiency, and algorithmic improvements – yet it looks like proprietary data is a moat open-source cannot cross.
Several observations:
- New methods of “information” collection will be developed and deployed. Some of these will be “off the radar” of users by design. One possibility is mining the changes to draft content in certain systems. Changes or deltas can be useful to some analysts.
- The synthetic data angle will become a go-to method using data sources which, by themselves, are not particularly interesting. However, when cross correlated with other information, “new” data emerge. The new data can be aggregated and fed into other smart software. (See the sketch after this list.)
- Rogue organizations will acquire proprietary data and “bitwash” the information. Like money laundering systems, the origin of the data are fuzzified or obscured, making figuring out what happened expensive and time consuming.
- Techno feudal organizations will explore new non-commercial entities to collect certain data; for example, the non-governmental organizations in a niche could be approached for certain data provided by supporters of the entity.
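Here is a toy sketch of the synthetic data point above: join two mundane tables and template the combined rows into text that could be poured into a training corpus. The field names and records are invented for illustration; this is not anyone’s production pipeline.

```python
# Cross-correlate two dull datasets to manufacture "new" training text.
# All companies, cities, and officers below are made up.
import pandas as pd

licenses = pd.DataFrame(
    {"company": ["Acme", "Globex"], "city": ["Louisville", "Lexington"]}
)
filings = pd.DataFrame(
    {"company": ["Acme", "Globex"], "officer": ["J. Smith", "R. Jones"]}
)

# Neither table is interesting on its own; joined, the rows say something new.
joined = licenses.merge(filings, on="company")

synthetic_records = [
    f"{row.officer} is an officer of {row.company}, which holds a license in {row.city}."
    for row in joined.itertuples()
]
print(synthetic_records)
```

Aggregate enough of these templated statements and the resulting corpus looks like fresh content, even though no human wrote a word of it.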
Net net: Running out of data is likely to produce one high probability event: Certain companies will begin taking more aggressive steps to make sure their digital water cooler is filled and working for their purposes.
Stephen E Arnold, December 18, 2023
Ignoring the Big Thing: Google and Its PR Hunger
December 18, 2023
This essay is the work of a dumb dinobaby. No smart software required.
I read “FunSearch: Making New Discoveries in Mathematical Sciences Using Large Language Models.” The main idea is that Google’s smart software is — once again — going where no mortal man has gone before. The write up states:
Today, in a paper published in Nature, we introduce FunSearch, a method to search for new solutions in mathematics and computer science. FunSearch works by pairing a pre-trained LLM, whose goal is to provide creative solutions in the form of computer code, with an automated “evaluator”, which guards against hallucinations and incorrect ideas. By iterating back-and-forth between these two components, initial solutions “evolve” into new knowledge. The system searches for “functions” written in computer code; hence the name FunSearch.
I like the idea of getting the write up in Nature, a respected journal. I like even better the idea of Google-splaining how a large language model can do mathy things. I absolutely love the idea of “new.”
“What’s with the pointed stick? I needed a wheel,” says the disappointed user of an advanced technology in days of yore. Thanks, MSFT Copilot. Good enough, which is a standard of excellence in smart software in my opinion.
Here’s a wonderful observation summing up Google’s latest development in smart software:
FunSearch is like one of those rocket cars that people make once in a while to break land speed records. Extremely expensive, extremely impractical and terminally over-specialized to do one thing, and do that thing only. And, ultimately, a bit of a show. YeGoblynQueenne via YCombinator.
My question is, “Is Google dusting a code brute force method with marketing sprinkles?” I assume that the approach can be enhanced with more tuning of the evaluator. I am not silly enough to ask if Google will explain the settings, threshold knobs, and probability levers operating behind the scenes.
Google’s prose makes the achievement clear:
This work represents the first time a new discovery has been made for challenging open problems in science or mathematics using LLMs. FunSearch discovered new solutions for the cap set problem, a longstanding open problem in mathematics. In addition, to demonstrate the practical usefulness of FunSearch, we used it to discover more effective algorithms for the “bin-packing” problem, which has ubiquitous applications such as making data centers more efficient.
The search for more effective algorithms is a never-ending quest. Who bothers to learn how to get a printer to spit out “Hello, World”? Today I am pleased if my printer outputs a Gmail message. And bin-packing is now solved. Good.
As I read the blog post, I found the focus on large language models interesting. But that evaluator strikes me as something of considerable interest. When smart software discovers something new, who or what allows the evaluator to “know” that something “new” is emerging? That evaluator must be quite something if it can prevent hallucination (a fancy term for making stuff up) without blocking the innovation process. I won’t raise any Philosophy 101 questions, but I will say, “Google has the keys to the universe” with sprinkles too.
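To make my question about the evaluator concrete, here is a back-of-the-napkin cartoon of the loop the blog post describes, using first-fit bin packing as the scored task. This is my sketch, not DeepMind’s code: the “creative” component here is just a fixed list of candidate heuristics, whereas the real system has an LLM generate and mutate programs.

```python
# Cartoon of a propose-then-evaluate loop for bin packing. Illustrative only.
import random

ITEMS = [0.42, 0.61, 0.17, 0.83, 0.29, 0.55, 0.34, 0.71]  # bin capacity = 1.0

def pack(items, score_fn):
    """Place each item into the feasible bin that score_fn likes best."""
    bins = []
    for item in items:
        feasible = [b for b in bins if sum(b) + item <= 1.0]
        if feasible:
            max(feasible, key=lambda b: score_fn(item, sum(b))).append(item)
        else:
            bins.append([item])
    return bins

def evaluator(score_fn):
    """Fewer bins is better; this is the part that guards against bad ideas."""
    return -len(pack(ITEMS, score_fn))

# Stand-in for the "creative" component: candidate scoring heuristics.
candidates = [
    lambda item, load: load,             # prefer fuller bins (best-fit flavor)
    lambda item, load: -load,            # prefer emptier bins
    lambda item, load: random.random(),  # noise, usually scores poorly
]

best = max(candidates, key=evaluator)
print("bins used by the best surviving heuristic:", -evaluator(best))
```

The interesting part is not the packing code; it is how the evaluator decides that a surviving candidate constitutes something genuinely “new.”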
There’s a picture too. But where’s the evaluator? Simplification is one thing, but skipping over the system and method that prevents smart software hallucinations (falsehoods, mistakes, and craziness) is quite another.
Google is not a company to shy away from innovation by its human wizards. If one thinks about the thrust of the blog post, will these Googlers be needed? Google’s innovativeness has drifted toward me-too behavior and being clever with advertising.
The blog post concludes:
FunSearch demonstrates that if we safeguard against LLMs’ hallucinations, the power of these models can be harnessed not only to produce new mathematical discoveries, but also to reveal potentially impactful solutions to important real-world problems.
I agree. But the “how” hangs above the marketing. When a company has quantum supremacy, the grimness of the recent court loss, and assorted legal hassles, what is this magical evaluator?
I find Google’s deal to use facial recognition to assist the UK in enforcing what appears to be “stop porn” regulations more in line with what Google’s smart software can do. The “new” math? Eh, maybe. But analyzing every person trying to access a porn site, and having the technical infrastructure to perform cross correlation? Now that’s something that will be of interest to governments and commercial customers.
The bin thing and a short cut for a Python script? Interesting, but it lacks the practical “big bucks now” potential of the facial recognition play. That, as far as I know, was not written up and ponied around to prestigious journals. To me, that was the news, not the FUN as a cute reminder of a “function” search.
Stephen E Arnold, December 18, 2023
Google and Its Age Verification System: Will There Be a FAES Off?
December 18, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Just in time for the holidays! Google’s user age verification system is ready for 2024. “Google Develops Selfie Scanning Software Ahead of Porn Crackdown” reports:
Google has developed face-scanning technology that would block children from accessing adult websites ahead of a crackdown on online porn. An artificial intelligence system developed by the web giant for estimating a person’s age based on their face has quietly been approved in the UK.
Thanks, MSFT Copilot. A good enough eyeball with a mobile phone, a pencil, a valise, stealthy sneakers, and data.
Facial recognition, although widely used in some countries, continues to make some people nervous. But in the UK, the Google method will allow the UK government to obtain data to verify one’s age. The objective is to stop those who are younger than 18 from viewing “adult Web sites.”
[Google] says the technology is 99.9pc reliable in identifying that a photo of an 18-year-old is under the age of 25. If users are believed to be under the age of 25, they could be asked to provide additional ID.
The phrase used to describe the approach is “face age estimation system.”
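As I read it, the approach is less about pinpointing an exact age than about building in a margin of error. Below is a minimal sketch of the decision rule the article implies; the names, numbers, and flow are my own illustration, not Google’s actual system.

```python
# Illustrative age-gate logic: trust a high age estimate, escalate a low one.
from dataclasses import dataclass

BUFFER_AGE = 25  # the article's "under 25" escalation point

@dataclass
class GateDecision:
    allow: bool
    request_additional_id: bool

def gate(estimated_age: float) -> GateDecision:
    if estimated_age >= BUFFER_AGE:
        # Old enough, with margin, that an estimation error is unlikely
        # to wave a minor through.
        return GateDecision(allow=True, request_additional_id=False)
    # The estimator cannot reliably separate 17 from 19, so fall back to ID.
    return GateDecision(allow=False, request_additional_id=True)

print(gate(31))  # allow, no extra ID
print(gate(22))  # ask for additional ID
```

The 99.9 percent figure quoted above is about the second branch: the claim is that an 18-year-old’s photo almost always gets flagged as under 25 and routed to the ID check.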
The cited newspaper article points out:
It is unclear what Google plans to use the system for. It could use it within its own services, such as YouTube and the Google Play app download store, or build it into its Chrome web browser to allow websites to verify that visitors are over 18.
Google is not the only outfit using facial recognition to allegedly reduce harm to individuals. Facebook and OnlyFans, according to the write up, are already deploying similar technology.
The news story says:
It is unclear what privacy protections Google would apply to the system.
I wonder what interesting insights would be available if data from the FAES were cross correlated with other information. That might have value to advertisers and possibly other commercial or governmental entities.
Stephen E Arnold, December 18, 2023
Google and Its Epic Magic: Will It Keep on Thrilling?
December 17, 2023
This essay is the work of a dumb dinobaby. No smart software required.
The Financial Times (the orange newspaper) published a paywalled essay/interview with Epic Games’s CEO Tim Sweeney. The hook for the sit down was a court’s determination that Google had acted in an illegal way. How? Google developed Android, then used that mobile system as a platform for revenue generation. The tactics appear to have involved one-off special deals with some companies and a hefty commission on sales made via the Google Play Store.
Will the magic show continue to surprise and entertain the innocent at the party? Thanks, MSFT Copilot. Close enough for horseshoes, but I wanted a Godzilla monster in a tuxedo doing the tricks. But that’s forbidden.
Several items struck me in the article “Epic Games Chief Concerned Google Will Get Away with App Store Charges.”
First, the trial made clear that Google was unable to back up certain data. Here’s how the Financial Times’s story phrased this matter:
The judge in the case, US district judge James Donato, also criticized the company for its failure to preserve evidence, with internal policies for deleting chats. He instructed the jury that they were free to conclude Google’s chat deletion policies were designed to conceal incriminating evidence. “The Google folks clearly knew what they were doing,” Sweeney said. “They had very lucid writings internally as they were writing emails to each other, though they destroyed most of the chats.” “And then there was the massive document destruction,” Sweeney added. “It’s astonishing that a trillion-dollar corporation at the pinnacle of the American tech industry just engages in blatantly dishonest processes, such as putting all of their communications in a form of chat that is destroyed every 24 hours.” Google has since changed its chat deletion policy.
Taking steps to obscure evidence suggests to me that Google operates in an ethical zone which both the judge and I find uncomfortable. The behavior also implies that Google professionals are not just clever, but that they do what pays off within a governance system which is comfortable with a philosophy of entitlement. Google does what Google does. Oh, that is a problem for others. Well, that’s too bad.
Second, according to the article, Google would pursue “alternative payment methods.” The online ad giant would then slap a fee on listing a product in the Google Play Store. The method has a number of variations, which can range from a fee for promoting a product to charges for different size listings. The idea is similar to a grocery chain charging a manufacturer to put annoying free standing displays of breakfast foods in the center of a high traffic aisle.
Third, Mr. Sweeney seems happy with the evidence about payola which emerged during the trial. Google appears to have paid Samsung to sell its digital goods via the Google Play Store. The pay-to-play model apparently prevented the South Korean company from setting up an alternative store for Android equipped mobile devices.
Several observations:
- The trial, unlike the proceedings in the DC monopoly probe, produced details about what Google does to generate lock in, money, and Googliness
- The destruction of evidence makes clear a disdain for the norms of behavior which preserve trust and integrity
- The trial makes clear that Google wants to preserve its dominant position and will pay to remain Number One.
Net net: Will Google’s magic wow everyone as it did when the company was gaining momentum? For some, yes. For others, no, sorry. I think the costume Google has worn for decades is now weakening at the seams. But the show must go on.
Stephen E Arnold, December 17, 2023
An Effort to Put Spilled Milk Back in the Bottle
December 15, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Microsoft was busy when the Activision Blizzard saga began. I dimly recall thinking, “Hey, one way to distract people from the SolarWinds misstep would be to become an alleged game monopoly.” I thought that Microsoft would drop the idea, but, no. I was wrong. Microsoft really wanted to be an alleged game monopoly. Apparently the successes (past and present) of Nintendo and Sony, the failure of Google’s Grand Slam attempt, and the annoyance of refurbished arcade game machines were real. Microsoft has focus. And guess what government agency does not? Maybe the Federal Trade Commission?
Two bureaucrats-to-be engage in a mature discussion about the rules for the old-fashioned game of Monopoly. One will become a government executive; the other will become a senior legal professional at a giant high-technology outfit. Thanks, MSFT Copilot. You capture the spirit of rational discourse in a good enough way.
The MSFT game play may not be over. “The FTC Is Trying to Get Back in the Ring with Microsoft Over Activision Deal” asserts:
Nearly five months later, the FTC has appealed the court’s decision, arguing that the lower court essentially just believed whatever Microsoft said at face value…. We said at the time that Microsoft was clearly taking the complaints from various regulatory bodies as some sort of paint by numbers prescription as to what deals to make to get around them. And I very much can see the FTC’s point on this. It brought a complaint under one set of facts only to have Microsoft alter those facts, leading to the courts slamming the deal through before the FTC had a chance to amend its arguments. But ultimately it won’t matter. This last gasp attempt will almost certainly fail. American regulatory bodies have dull teeth to begin with and I’ve seen nothing that would lead me to believe that the courts are going to allow the agency to unwind a closed deal after everything it took to get here.
From my small office in rural Kentucky, the government’s desire or attempt to get “back in the ring” is interesting. It illustrates how many organizations approach difficult issues.
The advantage goes to the outfit with [a] the most money, [b] the mental wherewithal to maintain some semblance of focus, and [c] a mechanism to keep moving forward. The big four wheel drive will make it through the snow better than a person trying to ride a bicycle in a blizzard.
The key sentence in the cited article, in my opinion, is:
“I fail to understand how giving somebody a monopoly of something would be pro-competitive,” said Imad Dean Abyad, an FTC attorney, in the argument Wednesday before the appeals court. “It may be a benefit to some class of consumers, but that is very different than saying it is pro-competitive.”
No problem with that logic.
And who is in charge of today’s Monopoly games?
Stephen E Arnold, December 15, 2023
FTC Enacts Investigative Process On AI Products and Services
December 15, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Creative types and educational professionals are worried about the influence of AI-generated work. However, legal, finance, business operations, and other industries are worried about how AI will impact them. Aware of the upward trend in goods and services that are surreptitiously moving into the market, the Federal Trade Commission (FTC) took action. The FTC released a briefing on the new consumer AI protection: “FTC Authorizes Compulsory Process for AI-Related Products and Services.”
The executive recruiter for a government contractor says, “You can earn great money with a side gig helping your government validate AI algorithms. Does that sound good?” Will American schools produce enough AI-savvy people to validate opaque and black box algorithms? Thanks, MSFT Copilot. You hallucinated on this one, but your image was good enough.
The FTC passed an omnibus resolution that authorizes a compulsory process in nonpublic investigations about products and services that use or claim to be made with AI or claim to detect it. The new omnibus resolution will increase the FTC’s efficiency with civil investigative demands (CIDs), a compulsory process like a subpoena. CIDs are issued to collect information, similar to legal discovery, for consumer protection and competition investigations. The new resolution will be in effect for ten years and the FTC voted to approve it 3-0.
The FTC defines AI as:
“AI includes, but is not limited to, machine-based systems that can, for a set of defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Generative AI can be used to generate synthetic content including images, videos, audio, text, and other digital content that appear to be created by humans. Many companies now offer products and services using AI and generative AI, while others offer products and services that claim to detect content made by generative AI.”
AI can also be used for deception, privacy infringements, fraud, and other illegal activities. AI can cause competition problems, such as when a few companies monopolize algorithms or other AI-related technologies.
The FTC is taking preliminary steps to protect consumers from bad actors and their nefarious AI-generated deeds. However, what constitutes a violation in relation to AI? Will the training data libraries be examined along with the developers? Where will the expert analysts come from? An online university training program?
Whitney Grace, December 15, 2023
Microsoft Snags Cyber Criminal Gang: Enablers Finally a Target
December 14, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Earlier this year at the National Cyber Crime Conference, we shared some of our research about “enablers.” The term is our shorthand for individuals, services, and financial outfits providing the money, services, and management support to cyber criminals. Online crime comes, like Baskin-Robbins ice cream, in a mind-boggling range of “flavors.” To make big bucks, funding and infrastructure are needed. One reason is amped-up enforcement from the US Federal Bureau of Investigation, Europol, and cooperating law enforcement agencies. The cyber crime “game” is a variation of a cat-and-mouse game. With each technological advance, bad actors try out the latest and greatest. Then enforcement agencies respond and neutralize the advantage. The bad actors then scan the technology horizon, innovate, and law enforcement responds. There are many implications of this innovate-react-innovate cycle. I won’t go into those in this short essay. Instead I want to focus on a Microsoft blog post called “Disrupting the Gateway Services to Cybercrime.”
Industrialized cyber crime uses existing infrastructure providers. That’s a convenient, easy, and economical means of hiding. Modern obfuscation technology adds to law enforcement’s burden. Perhaps some oversight and regulation of these nearly invisible commercial companies is needed? Thanks, MSFT Copilot. Close enough, and I liked the investigators on the roof of a typical office building.
Microsoft says:
Storm-1152 [the enabler?] runs illicit websites and social media pages, selling fraudulent Microsoft accounts and tools to bypass identity verification software across well-known technology platforms. These services reduce the time and effort needed for criminals to conduct a host of criminal and abusive behaviors online.
What moved Microsoft to take action? According to the article:
Storm-1152 created for sale approximately 750 million fraudulent Microsoft accounts, earning the group millions of dollars in illicit revenue, and costing Microsoft and other companies even more to combat their criminal activity.
Just 750 million? One question which struck me was: “With the updating, the telemetry, and the bits and bobs of Microsoft’s “security” measures, how could nearly a billion fake accounts be allowed to invade the ecosystem?” I thought a smaller number might have been the tipping point.
Another interesting point in the essay is that Microsoft identifies the third party Arkose Labs as contributing to the action against the bad actors. The company is one of the firms engaged in cyber threat intelligence and mitigation services. The question I had was, “Why are the other threat intelligence companies not picking up signals about such a large, widespread criminal operation?” Also, “What is Arkose Labs doing that other sophisticated companies and OSINT investigators are not?” Google and In-Q-Tel invested in Recorded Future, a go-to threat intelligence outfit. I don’t recall seeing it reported, but I heard that Microsoft invested in Arkose Labs, joining SoftBank’s Vision Fund and PayPal, among others.
I am delighted that “enablers” have become a more visible target of enforcement actions. More must be done, however. Poke around in ISP land and what do you find? As my lecture pointed out, “Respectable companies in upscale neighborhoods harbor enablers, so one doesn’t have to travel to Bulgaria or Moldova to do research. Silicon Valley is closer and stocked with enablers; the area is a hurricane of crime.”
In closing, I ask, “Why are discoveries of this type of industrialized criminal activity unearthed by one outfit?" And, “What are the other cyber threat folks chasing?”
Stephen E Arnold, December 14, 2023
Why Modern Interfaces Leave Dinobabies Lost in Space
December 14, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Until the advent of mobile phones, I paid zero attention to interfaces. As a dinobaby, I have well-honed skills remembering strings of commands. I would pump these into the computing device via a keyboard or a script and move on with the work. Now, when a new app arrives, I resist using it. The reasons are explained quite well in “Modern iOS Navigation Patterns.” I would suggest that the craziness presented clearly in the essay be extended to any modern interface: Desktop anchor, zippy tablet, or the look-alike mobiles.
The dinobaby says, “How in the world do I send a picture to my grandson?” Thanks, MSFT Copilot. Did you learn something with the Windows phone interface?
The write up explains and illustrates the following types of “modern” iOS interfaces. I am not making these up, and I am assuming that the write up is not a modern-day Swiftian satire. Here we go:
- Structural navigation with these options or variants: Drill down, flat, pyramid, and hub and spoke
- Overlay navigations with these options or variants: High friction, low friction, and non-modal (following along?)
- Embedded navigation with these options or variants: State change, step by step, or content driven (crystal clear, right?)
Several observations. I want an interface to deliver the functions the software presents as its core functionality. I do not want changing interfaces, hidden operations, or weirdness which distracts me from the task which I wish to accomplish.
What do designers do when they have to improve an interface? Embrace one of the navigation approaches, go to meetings, and decide which to use and where. When the “new” interface comes out, poll users to get feedback. Ignore the dinobabies who say, “You are nuts because the app is unusable.”
Stephen E Arnold, December 14, 2023