AI and the Workplace: Change Will Happen, Just Not the Way Some Think
May 15, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
I read “AI and the Workplace.” The essay contains observations related to smart software in the workplace. The idea is that savvy employees will experiment and try to use the technology within today’s work framework. I think that will happen just as the essay suggests. However, I think there is a larger, more significant impact that is easy to miss when one looks only at today’s workplace. Employees either [a] want to keep their job, [b] gain new skills and get a better job, or [c] quit to vegetate or become an entrepreneur. I understand.
The data in the report make clear that some employees are what I call change flexible; that is, these motivated individuals differentiate themselves from others at work by learning and experimenting. Note that more than half the people in the “we don’t use AI” categories want to use AI.
These data come from the cited article and an outfit called Asana.
The report contains other data as well. Some employees get a productivity boost; others just chug along, occasionally getting some benefit from AI. The future, therefore, requires learning, double checking outputs, and accepting that it is early days for smart software. This makes sense; however, it misses where the big change will come.
In my view, the major shift will appear in companies founded now that AI is more widely available. These organizations will be crafted to make optimal use of smart software from the day the new idea takes shape. A new news organization might look like Grok News (the Elon Musk project) or the much-reviled AdVon. But even these outfits are anchored in the past. Grok News just substitutes smart software (which hopefully will not kill its users) for old work processes and outputs. AdVon was a “rip and replace” tool for Sports Illustrated. That did not go particularly well in my opinion.
The big job impact will be on new organizational setups with AI baked in. The types of people working at these organizations will not come from the lower 98 percent of the work force pool. I think the majority of employees who once expected to work in information processing or knowledge work will be like a 58-year-old brand manager at a vape company. Job offers will not be easy to get, and new companies might opt for smart software and search engine optimization marketing. How many workers will that require? Maybe zero. Someone on Fiverr.com will do the job for a couple of hundred dollars a month.
In my view, new companies won’t need workers who are not in the top tier of some high value expertise. Who needs a consulting team when one bright person with knowledge of orchestrating smart software is able to do the work of a marketing department, a product design unit, and a strategic planning unit? In fact, there may not be any “employees” in the sense of workers at a warehouse or a consulting firm like Deloitte.
Several observations are warranted:
- Predicting downstream impacts of a technology unfamiliar to a great many people is tricky and sometimes impossible. Who knew social media would spawn a renaissance in getting tattooed?
- Visualizing how an AI-centric start-up is assembled is a challenge. I submit it won’t look like an insurance company today. What does a Tesla repair station look like? The answer: “Not much.”
- Figuring out how to be one of the elite who gets a job means being perceived as “smart.” Unlike Alina Habba, I know that I cannot fake “smart.” How many people will work hard to maximize the return on their intelligence? The answer, in my experience, is, “Not too many, dinobaby.”
Looking at the future from within the framework of today’s datasphere distorts how one perceives impact. I don’t know what the future looks like, but it will have some quite different configurations from the companies of today. The future will arrive slowly and then become the foundation of further evolution. What will the grandson of tomorrow’s AI firm look like? Beauty will be in the eye of the beholder.
Net net: Where will the never-to-be-employed find something meaningful to do?
Stephen E Arnold, May 15, 2024
AI May Help Real Journalists Explain Being Smart. May, Not Will
May 9, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
I found the link between social media and stupid people interesting. I am not sure I embrace the causal chain as presented in “As IQ Scores Decline in the US, Experts Blame the Rise of Tech — How Stupid Is Your State?” The “real” news story has a snappy headline, but social media and IQ? Let’s take a look.
The first sentence of the write up features the novel coinage dumbening. I assume the use of dumb as a verb opens the door to such statements as “I dumb” or “We dumbed together at Harvard’s lecture about ethics” or “My boss dumbed again, like he did last summer.”
Do all Americans go through a process of dumbening?
A tour group has a low IQ when it comes to understanding ancient rock painting. Should we blame technology and social media? Thanks, MSFT Copilot. Earning extra money because you do great security?
The write up explains that IQ scores are going down after a “rise” which began in 1905. What causes this decline? Is it broken homes? Lousy teachers? A lack of consequences for inattentiveness? Skipping school? Crappy pre-schools? Bus rides? School starting too early or too late? Dropping courses in art, music, and PE? Chemical-infused food? Television? Not learning cursive?
The answer is, “Technology.” More specifically, the culprit is social media. The article quotes a professor, who opines:
The professor [Hetty Roessingh, professor emerita of education at the University of Calgary] said that time spent with devices like phones and iPads means less time for more effective methods of increasing one’s intelligence level.
Several observations:
- Wow.
- Technology is an umbrella term. Social media is an umbrella term. What exactly is causing people to be dumb?
- What about an IQ test being mismatched to those who take it? My IQ was pretty low when I lived in Campinas, Brazil. It was tough to answer questions I could not read until I learned Portuguese.
Net net: Dumbening. You got it.
Stephen E Arnold, May 9, 2024
A High-Tech Best Friend and Campfire Lighter
May 1, 2024
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
A dog is allegedly man’s best friend. I have a French bulldog, and I am not 100 percent sure that’s an accurate statement. But I have a way to get the pal I have wanted for years.
Ars Technica reports “You Can Now Buy a Flame-Throwing Robot Dog for Under $10,000” from Ohio-based maker Throwflame. See the article for footage of this contraption setting fire to what appears to be a forest. Terrific. Reporter Benj Edwards writes:
“Thermonator is a quadruped robot with an ARC flamethrower mounted to its back, fueled by gasoline or napalm. It features a one-hour battery, a 30-foot flame-throwing range, and Wi-Fi and Bluetooth connectivity for remote control through a smartphone. It also includes a LIDAR sensor for mapping and obstacle avoidance, laser sighting, and first-person view (FPV) navigation through an onboard camera. The product appears to integrate a version of the Unitree Go2 robot quadruped that retails alone for $1,600 in its base configuration. The company lists possible applications of the new robot as ‘wildfire control and prevention,’ ‘agricultural management,’ ‘ecological conservation,’ ‘snow and ice removal,’ and ‘entertainment and SFX.’ But most of all, it sets things on fire in a variety of real-world scenarios.”
And what does my desired dog look like? The GenY Tibby asleep at work? Nope.
I hope my Thermonator includes an AI at the controls. Maybe that will be an add-on feature in 2025? Unitree, maker of the robot base mentioned above, once vowed to oppose the weaponization of its products (along with five other robotics firms). Perhaps Throwflame won them over with assertions that its device is not technically a weapon, since flamethrowers are not considered firearms by federal agencies. It is currently legal to own this mayhem machine in 48 states. Certain restrictions apply in Maryland and California. How many crazies can come up with a mere $9,420 plus tax for that kind of power? Even factoring in the cost of napalm (sold separately), probably quite a few.
Cynthia Murrell, May 1, 2024
Research into Baloney Uses Four Letter Words
March 25, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I am critical of university studies. However, I spotted one which strikes at the heart of the Silicon Valley approach to life. “Research Shows That People Who BS Are More Likely to Fall for BS” has an interesting subtitle; to wit:
People who frequently mislead others are less able to distinguish fact from fiction, according to University of Waterloo researchers
A very good looking bull spends time reviewing information helpful to him in selling his artificial intelligence system. Unlike the two cows, he does not realize that he is living in a construct of BS. Thanks, MSFT Copilot. How are you doing with those printer woes today? Good enough, I assume.
Consider the headline in the context of promises about technologies which will “change everything.” Examples range from the marvels of artificial intelligence to the crazy assertions about quantum computing. My hunch is that the reason baloney has become one of the most popular mental foods in the datasphere is that people desperately want a silver bullet. Others know that if a silver bullet is described with appropriate language and a bit of sizzle, the thought can be a runway for money.
What’s this mean? We have created a culture in North America that makes “technology” and “glittering generalities” into hyperbole factories. Why believe me? Let’s look at the “research.”
The write up reports:
People who frequently try to impress or persuade others with misleading exaggerations and distortions are themselves more likely to be fooled by impressive-sounding misinformation… The researchers found that people who frequently engage in “persuasive bullshitting” were actually quite poor at identifying it. Specifically, they had trouble distinguishing intentionally profound or scientifically accurate fact from impressive but meaningless fiction. Importantly, these frequent BSers are also much more likely to fall for fake news headlines.
Let’s think about this assertion. The technology story teller is an influential entity. In the world of AI, for example, some firms which have claimed “quantum supremacy” showcase executives who spin glorious word pictures of smart software reshaping the world. The upsides are magnetic; the downsides dismissed.
What about crypto champions? Telegram, founded by two Russian brothers, is spinning fabulous tales of revenue from advertising in an encrypted messaging system and cheerleading for a more innovative crypto currency. Operating from Dubai, the outfit has true believers. What’s not to like? Maybe these bros have the solution that has long been part of the Harvard winkle confections.
What shocked me about the write up was the use of the word “bullshit.” Here’s an example from the academic article:
“We found that the more frequently someone engages in persuasive bullshitting, the more likely they are to be duped by various types of misleading information regardless of their cognitive ability, engagement in reflective thinking, or metacognitive skills,” Littrell said. “Persuasive BSers seem to mistake superficial profoundness for actual profoundness. So, if something simply sounds profound, truthful, or accurate to them that means it really is. But evasive bullshitters were much better at making this distinction.”
What if the write up is itself BS? What if the journal publishing the article — British Journal of Social Psychology — is BS? On one hand, I want to agree that those skilled in the art of manufacturing, distributing, and outputting baloney have a quite specific skill. On the other hand, I admit that I cannot determine at first glance whether the information provided is synthetic, ripped off, shaped, or weaponized. I would assert that most people are not able to identify what is “verifiable,” “an accurate accepted fact,” or “true.”
We live in a post-reality era. When the presidents of outfits like Harvard and Stanford face challenges to their research accuracy, what can I do when confronted with a media release about BS? Upon reflection, I think the generalization that people cannot figure out what’s on point or not is true. When drug store cashiers cannot make change, I think that’s strong anecdotal evidence that other parts of their mental toolkit have broken or missing parts.
But the statement that those who output BS cannot themselves identify BS may be part of a broader educational failure. Lazy people, those who take short cuts, people who know how to do the PT Barnum thing, and sales professionals trying to close a deal reflect a societal issue. In a world of baloney, everything is baloney.
Stephen E Arnold, March 25, 2024
Old Code, New Code: Can You Make It Work Again… Sort Of?
March 18, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Even hippy dippy super slick AI start-ups have a technical debt problem. It is, in my opinion, no different from the “costs” imposed on outfits like JPMorgan Chase or (heaven help us) AMTRAK. Software which mostly works is subject to two environmental problems. First, the people who wrote the code or made it work the last time catastrophe struck (hello, AT&T, how are those pushed updates working for you now?) move on, quit, or whatever. Second, the technical options for remediating the problem keep evolving (how are those security hot fixes working out, Microsoft?).
The helpful father asks a question the aspiring engineer cannot answer. Thus it was when the wizard was a child, and thus it is when the wizard is working on a modern engineering project. Buildings tip; aircraft lose doors and wheels. Software updates kill computers. Self-driving cars cannot drive themselves. Thanks, MSFT Copilot. Did you get your model airplane to fly when you were a wee lad? I think I know the answer.
I thought about this problem of the cost of remediating, fixing, redoing, or upgrading code, or whatever term fast-talking sales engineers use in their Zooms and PowerPoints, as I read “The High-Risk Refactoring.” The write up does a good job of explaining in a gentle way what happens when suits authorize making old code like new again. (The suits do not know the agonies of the original developers, but why should “history” intrude on a whiz bang GenX or GenY management type?)
The article says:
it’s highly important to ensure the system works the same way after the swap with the new code. In that regard, immediately spotting when something breaks throughout the whole refactoring process is very helpful. No one wants to find that out in production.
No kidding.
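One cheap way to spot a break immediately is a characterization test: record what the old code produces, then assert that the replacement produces the same thing. Here is a minimal sketch in Python, assuming hypothetical stand-in functions rather than anything from the cited essay:

```python
# Characterization ("golden master") test sketch. The two quote functions
# are hypothetical stand-ins for a legacy routine and its replacement.

def legacy_quote(amount: float, rate: float) -> float:
    """Stand-in for the old code path (hypothetical)."""
    return round(amount * rate, 2)

def refactored_quote(amount: float, rate: float) -> float:
    """Stand-in for the rewritten code path (hypothetical)."""
    return round(amount * rate, 2)

def test_refactor_preserves_behavior() -> None:
    cases = [(100.0, 0.07), (0.0, 0.07), (19.99, 0.0625)]
    golden = [legacy_quote(a, r) for a, r in cases]  # record the old behavior
    for (a, r), expected in zip(cases, golden):
        assert refactored_quote(a, r) == expected, f"diverged on {(a, r)}"

if __name__ == "__main__":
    test_refactor_preserves_behavior()
    print("replacement matches legacy behavior on recorded cases")
```

Run the recorded cases on every change; a divergence shows up on the developer’s desk, not in production.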
In most cases, there are insufficient skilled people and money to create a new or revamped system, get it up and running in parallel for an appropriate period of time, identify the problems, remediate them, and then make the cut over. People buy cars this way: trade in the old one, buy a new one, and drive off. That is not how most organizations, regardless of size, “do” software, and the trade-in approach will not work in today’s business environment.
The write up focuses on what most organizations do; that is, write or fix new code and stick it into a system. There may or may not be resources for a staging server, but the result is the same. The old software has been “fixed,” the documentation is “sort of written,” and people move on to other work or, in the case of consulting engineering firms, just get replaced by a new, higher-margin professional.
The write up takes a different approach and concludes with four suggestions or questions to ask. I quote:
“Refactor if things are getting too complicated, but stop if can’t prove it works.
Accompany new features with refactoring for areas you foresee to be subject to a change, but copy-pasting is ok until patterns arise.
Be proactive in finding new ways to ensure refactoring predictability, but be conservative about the assumption QA will find all the bugs.
Move business logic out of busy components, but be brave enough to keep the legacy code intact if the only argument is “this code looks wrong”.
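Before moving on, consider the last quoted suggestion, moving business logic out of busy components, in miniature. This is a sketch with hypothetical names, not code from the essay; the point is that the rule becomes a plain function which can be tested without the surrounding plumbing:

```python
# Sketch: pull the business rule out of a "busy" component (hypothetical).
from dataclasses import dataclass

@dataclass
class Order:
    subtotal: float
    is_loyalty_member: bool

def discount(order: Order) -> float:
    """The business rule, isolated: pure input to output, no framework."""
    rate = 0.10 if order.is_loyalty_member else 0.0
    return round(order.subtotal * rate, 2)

class CheckoutHandler:
    """The busy component keeps only plumbing and delegates the rule."""

    def handle(self, payload: dict) -> dict:
        order = Order(float(payload["subtotal"]), bool(payload.get("loyalty", False)))
        d = discount(order)
        return {"discount": d, "total": round(order.subtotal - d, 2)}

if __name__ == "__main__":
    print(CheckoutHandler().handle({"subtotal": 100.0, "loyalty": True}))
    # {'discount': 10.0, 'total': 90.0}
```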
Those quoted points are useful. I would like to suggest some bright white lines for those who have to tackle an IRS-mainframe or AT&T-billing-system type of challenge, as well as tweaking an artificial intelligence solution to respond to those wonky multi-ethnic images Google generated in order to allow the Sundar & Prabhakar Comedy Team to smile sheepishly and apologize again for lousy software.
Are you ready? Let’s go:
- Fixes add to the complexity of the code base. As time goes stumbling forward, the complexity of the software becomes greater. The cost of making sure the fix works and does not create exciting dependency behavior goes up. Thus, small fixes “cost” more, and these costs are tough to control.
- The safest fixes are “wrappers”; that is, no one in his or her right mind wants to change software written in 1978 for a machine no longer in production by the manufacturer. Therefore, new software is written to interact in a “safe” way with the original software. The new code “fixes up” the problem without screwing up what grandpa programmer wrote almost half a century ago. The problem is that “wrappers” tend to slow stuff down (see the sketch after this list). The fix is to say one will optimize the system while one looks for a new project or job.
- The software used for “fixing” a problem is becoming the equivalent of repairing an aircraft component with Dawn dish detergent. The “fix” is cheap, easy to use, and good enough. The software equivalent of this Dawn solution is that it will not stand the test of time. Instead of code crafted in good old COBOL or Assembler, we have some Fancy Dan tools which may fall out of favor in a matter of months, not decades.
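A wrapper in miniature looks like the following sketch. The names are hypothetical; the legacy routine stands in for the 1978-vintage code nobody will touch. Note the unit conversions on every call, which is exactly where the slowdown creeps in:

```python
# Minimal "wrapper" sketch (hypothetical names throughout).

def legacy_calc_interest(principal_cents: int, rate_bp: int) -> int:
    """Untouchable old routine: thinks in cents and basis points."""
    return principal_cents * rate_bp // 10_000

class InterestService:
    """New code wraps the old routine behind a modern interface.

    The wrapper validates input and converts units; the legacy logic
    is called as-is, never modified.
    """

    def annual_interest(self, principal: float, rate: float) -> float:
        if principal < 0 or rate < 0:
            raise ValueError("principal and rate must be non-negative")
        cents = int(round(principal * 100))
        basis_points = int(round(rate * 10_000))
        return legacy_calc_interest(cents, basis_points) / 100

if __name__ == "__main__":
    print(InterestService().annual_interest(1_000.00, 0.05))  # 50.0
```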
Many projects promise better, faster, and cheaper. The reminder “Pick two” is helpful.
Net net: Fixing up lousy or flawed software is going to increase risks and costs. The question asked by bean counters is, “How much?” The answer is, “No one knows until the project is done … if ever.”
Stephen E Arnold, March 18, 2024
Stanford: Tech Reinventing Higher Education: I Would Hope So
March 15, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read “How Technology Is Reinventing Education.” Essays like this one are quite amusing. The ideas flow without important context. Let’s look at this passage:
“Technology is a game-changer for education – it offers the prospect of universal access to high-quality learning experiences, and it creates fundamentally new ways of teaching,” said Dan Schwartz, dean of Stanford Graduate School of Education (GSE), who is also a professor of educational technology at the GSE and faculty director of the Stanford Accelerator for Learning. “But there are a lot of ways we teach that aren’t great, and a big fear with AI in particular is that we just get more efficient at teaching badly. This is a moment to pay attention, to do things differently.”
A university expert explains to a rapt audience that technology will make them healthy, wealthy, and wise. Well, that’s what the marketing copy which the lecturer recites promises. Thanks, MSFT Copilot. Are you security safe today? Oh, that’s too bad.
I would suggest that Stanford’s Graduate School of Education consider these probably unimportant points:
- The president of Stanford University resigned allegedly because he fudged some data in peer-reviewed documents. True or false? Does it matter? The fellow quit.
- The Stanford Artificial Intelligence Lab or SAIL innovated by cooking up synthetic data. Not only was synthetic data the fast food of those looking for cheap and easy AI training data, but Stanford also became super glued to the fake data movement, which may be good or may be bad. Hallucinating is easier if the models are trained using fake information, perhaps?
- Stanford University produced some outstanding leaders in the high technology “space.” The contributions of famous graduates have delivered social media, shaped advertising systems, and spawned interesting intelware companies which dabble in warfighting and saving lives from one versatile software and consulting platform.
The essay operates in smarter-than-you territory. It presents a view of the world which seems to be at odds with non-reproducible research results, ethics-free researchers, and how silly it looks to someone in rural Kentucky when a university president is accused of pulling a grade-school essay-cheating trick.
Enough pontification. How about some progress in remediating certain interesting consequences of Stanford faculty and graduates’ innovations?
Stephen E Arnold, March 15, 2024
Techno Bashing from Thumb Typers. Give It a Rest, Please
March 5, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Every generation says that the latest cultural and technological advancements make people stupider. Novels were trash, the horseless carriage ruined traveling, radio encouraged wanton behavior, and the list continues. Everything changed with the arrival of television, aka the boob tube. Too much television does cause cognitive degradation. In layman’s terms, the brain goes into passive functioning rather than actively thinking. It would be almost a Zen moment. Addiction is fun for some.
The introduction of videogames, computers, and mobile devices accelerated the decline of brain function. The combination of AI chatbots and screens, however, might prove to be the ultimate dumbing down of humans. APA PsycNet posted a new study by Umberto León-Domínguez called “Potential Cognitive Risks Of Generative Transformer-Based AI-Chatbots On Higher Order Executive Thinking.”
Psychologists have already discovered that spending too much time on a screen (e.g., playing videogames, watching TV or YouTube, browsing social media) increases the risk of depression and anxiety. When screens are paired with AI chatbots, programs designed to replicate the human mind, humans rely on the algorithms to think for them.
León-Domínguez wondered if too much AI-chatbot consumption impaired cognitive development. In his abstract he invented some handy new terms:
“The ‘neuronal recycling hypothesis’ posits that the brain undergoes structural transformation by incorporating new cultural tools into ‘neural niches,’ consequently altering individual cognition. In the case of technological tools, it has been established that they reduce the cognitive demand needed to solve tasks through a process called ‘cognitive offloading.’” “Cognitive offloading” perfectly describes younger generations and screen addicts. “Cultural tools into neural niches” also reflects how older crowds view new-fangled technology, coupled with how different parts of the brain are affected by technology advancements. The modern human brain works differently from a human brain in the 18th century or two thousand years ago.
He found:
“The pervasive use of AI chatbots may impair the efficiency of higher cognitive functions, such as problem-solving. Importance: Anticipating AI chatbots’ impact on human cognition enables the development of interventions to counteract potential negative effects. Next Steps: Design and execute experimental studies investigating the positive and negative effects of AI chatbots on human cognition.”
Are we doomed? No. Do we need to find ways to counteract stupidity? Yes. Do we know how it will be done? No.
Isn’t tech fun?
Whitney Grace, March 6, 2024
Technology Becomes Detroit
March 4, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Have you ever heard of technical debt? Technical debt is when an IT team prioritizes speedy delivery of a product over creating a feasible, quality product. Technology history is full of technical debt. Some of the more famous cases are the E.T. videogame for the Atari, Windows Vista, and the Samsung Galaxy Gear. Technical debt is an ongoing issue for IT departments and tech companies. It’s apparently getting worse. ITPro details the current problems with technical debt in “IT Leaders Need To Accept They’ll Never Escape Technical Debt, But That Doesn’t Mean They Should Down Tools.”
Gordon Haff is a senior leader at Red Hat and a technology evangelist. Haff told ITPro that tech experts will remain hindered as they continue to deal with technical debt and skill shortages. Tech experts want to advance their field with transformative projects, but they’re held back by the same aforementioned issues. Haff stressed that as soon as one project is complete, tech experts build the next project on existing architecture. This creates a technical debt infrastructure.
Haff provided an example using a band-aid metaphor:
“Haff pointed toward application modernization as a prime example of this rinse and repeat trend. Many enterprises, he said, deliberately choose to not tinker with certain applications due to the fact they still worked nominally.
Fast forward several years later, these applications are overhauled and modernized, then are left to their own devices – to some extent – and reassessed during the next transformation cycle.
‘If you go back 10 years, we had this sort of bimodal IT, or fast-slow IT, that was kind of the thing,” he explained. “The idea was ‘we’ll leave that old stuff, we’ll shove that off into the corner and not worry about it’ and the cool kids can work on all this greenfield, often new customer-facing applications.
‘But by and large, it’s then a case of ‘oh we actually need to deal with this core business stuff’ and these older applications.’”
Haff suggests that IT experts shouldn’t approach their work with a “one and done” mindset. They should realize their work is constantly evolving. They should know how to go with the flow and keep legacy systems from turning into large messes. There’s a reason videogame companies have beta tests, restaurants have soft openings, and musicals have previews. They test things to deliver quality products. Technical debt leads to technical rot.
Whitney Grace, March 4, 2024
Does Cheap and Sneaky Work Better than Expensive and Hyperbole?
February 8, 2024
This essay is the work of a dumb dinobaby. No smart software required.
My father was a member of the Sons of the American Revolution (SAR). He loved reading about that historical “special operation.” I think he imagined himself in a makeshift uniform, hiding behind some bushes, and then greeting the friends of George III with some old-fashioned malice. My hunch is that John Arnold’s descendants wrote anti-British editorials and gave speeches. But what do I know? Not much, that’s for sure.
The David and Goliath trope may be applicable to the cheap drone tactic. Thanks, MSFT Copilot Bing thing. Good enough.
I thought about how a rag-tag, under-supplied collection of colonials could bedevil the British when I read The Guardian’s essay “Deadly, Cheap and Widespread: How Iran-Supplied Drones Are Changing the Nature of Warfare.” The write up opines that the drone which killed several Americans in Iraq:
is most likely the smaller Shahed 101 or delta-winged Shahed 131, both believed to be in Kataib Hezbollah’s arsenal … with estimated ranges of at least 700km (434 miles) and a cost of $20,000 (£15,700) or more. (Source: Fabian Hinz, a weapons expert)
The point strikes me as a variant of David versus Goliath. The giant gets hurt by a lesser opponent with a cheap weapon. Iran is using drones, not exotic hardware like the F-16s Türkiye craves. A flimsy drone does not require the obvious paraphernalia of power the advanced jet does: tin snips, some parts from Shenzhen retail outlets, and model airplane controls suffice. No hangars, mechanics, engineers, or specially trained pilots.
Shades of the Colonials, I think. The article continues:
The drones …are best considered cheap substitutes for guided cruise missiles, most effective against soft or “static structures” which force those under threat to “either invest money in defenses or disperse and relocate which renders things like aircraft on bases more inefficient”
Is there a factoid in this presumably accurate story from a British newspaper? Yes. My take-away is that simple and basic can do considerable harm. Oh, let me add “economical”, but that is rarely a popular concept among some government entities and approved contractors.
Net net: How about thinking like some of those old-time Revolutionaries in what has become the US?
Stephen E Arnold, February 8, 2024
Why Stuff Does Not Work: Airplane Doors, Health Care Services, and Cyber Security Systems, Among Others
January 26, 2024
This essay is the work of a dumb dinobaby. No smart software required.
“The Downward Spiral of Technology” struck a chord with me. Think about building monuments in the reign of Cleopatra. The workers could check out the sphinx and the giant stone blocks in the pyramids and ask, “What happened to the technology? We are banging away with bronze and crappy metal compounds, and those ancient dudes were zipping along with snappier tech.” That conversation is imaginary, of course.
The author of “The Downward Spiral” focuses on less dusty technology, but the theme might resonate with my made-up stone workers. Modern technology lacks some of the zing of the older methods. The essay by Thomas Klaffke hit on some themes my team has shared whilst stuffing Five Guys burgers in their shark-like mouths.
Here are several points I want to highlight. In closing, I will offer some of my team’s observations on the outcome of the Icarus emulators.
First, let’s think about search. One cannot do anything unless one can find electronic content. (Lawyers, please, don’t tell me you have associates work through the mostly-for-show books in your offices. You use online services. Your opponents in court print stuff out to make life miserable. But electronic content is the cat’s pajamas in my opinion.)
Here’s a table from Mr. Klaffke’s essay:
Two things are important in this comparison of the “old” tech and the “new” tech deployed by the estimable Google outfit. Number one: Search in Google’s early days made an attempt to provide content relevant to the query. The system was reasonably good, but it was not perfect. Messrs. Brin and Page fancy danced around issues like disambiguation, date and time data, date and time of crawl, and forward and rearward truncation. Flash forward to the present day: the massive contributions of Prabhakar Raghavan and others “in charge of search” deliver irrelevant information. To find useful material, navigate to a Google Dorks service and use those tips and tricks (a few examples appear below). Otherwise, forget it and give Swisscows.com, StartPage.com, or Yandex.com a whirl. You are correct. I don’t use the smart Web search engines. I am a dinobaby, and I don’t want thresholds set by a 20-year-old filtering information for me. Thanks but no thanks.
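For readers who have never tried the dork route, a few of the standard, publicly documented query operators illustrate the idea. The sites and search terms below are my own illustrative examples, not anything from Mr. Klaffke’s essay:

```
"term of art" site:example.gov filetype:pdf
intitle:"index of" "backup"
budget report 2023 -site:pinterest.com
```

The quotation marks force exact phrases, site: and filetype: narrow the pool, intitle: matches page titles, and the minus sign excludes a domain.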
The second point is that search today is a monopoly. It takes specialized expertise to find useful, actionable, and accurate information. Most people — even those with law degrees, MBAs, and the ability to copy and paste code — cannot cope with provenance, verification, validation, and informed filtering performed by a subject matter expert. Baloney does not work in my corner of the world, and it is not a favorite food group for me or those on my team. Kudos to Mr. Klaffke for making this point. Let’s hope someone listens. I have given up trying to communicate the intellectual issues lousy search and retrieval creates. Good enough? Nope.
Yep, some of today’s tools are less effective than yesterday’s gizmos. Hey, how about those new mobile phones? Thanks, MSFT Copilot Bing thing. Good enough. How’s the MSFT email security today? Oh, I asked that already.
Second, Mr. Klaffke gently reminds his reader that most people do not know snow cones from Shinola when it comes to information. Most people assume that a computer output is correct. This is just plain stupid. He provides some useful examples of problems with hardware and user behavior. Are his examples ones that will change behaviors? Nope. It is, in my opinion, too late. Information is an undifferentiated haze of words, phrases, ideas, facts, and opinions. Living in a haze and letting signals from online emitters guide one is a good way to run a tiny boat into a big reef. Enjoy the swim.
Third, Mr. Klaffke introduces the plumbing of the good-enough mentality. He is accurate. Some major social functions are broken. At lunch today, I mentioned the writings about ethics by John Dewey and William James. My point was that these fellows wrote about behavior associated with a world long gone. It would be trendy to wear a top hat and ride in a horse-drawn carriage. It would not be trendy to expect that a person would work and do his or her best to do a good job for the agreed-upon wage. Today I watched a worker who played with his mobile phone instead of stocking the shelves in the local grocery store. That’s the norm. Good enough is plenty good. Why work? Just pay me, and I will check out Instagram.
I do not agree with Mr. Klaffke’s closing statement; to wit:
The problem is not that the “machine” of humanity, of earth is broken and therefore needs an upgrade. The problem is that we think of it as a “machine”.
The problem is that worldwide shared values and cultural norms are eroding. Once the glue gives way, we are in deep doo doo.
Here are my observations:
- No entity, including governments, can do anything to reverse thousands of years of cultural accretion of norms, standards, and shared beliefs.
- The vast majority of people alive today are reverting to some fascinating behaviors. “Fascinating” is not a positive in the sense in which I am using the word.
- Online has accelerated the stress on social glue; smart software is the turbocharger of abrupt, hard-to-understand change.
Net net: Please, read Mr. Klaffke’s essay. You may have an idea for remediating one or more of today’s challenges.
Stephen E Arnold, January 25, 2024