Good Enough Junk Food: Knowledge without the Work
December 16, 2024
This write up emerged from the dinobaby’s own mind. Yes, the dinobaby used AI because this write up is about the knowledge value lost with smart software. Intellectual junk food is not the same as my mother’s overcooked chicken which she killed on her aunt’s farm. Cluck, cluck, squawk.
I plugged the Guardian’s opinion essay into You.com’s Chat GPT-4o model. You can read the original essay titled “Is Doom Scrolling Really Rotting Our Brains? The Evidence Is Getting Harder to Ignore.” The original essay is clear, reasonably well researched, and structured to speak to [a] a person who has access to the Guardian online or in (gasp!) a print form, [b] who reads the Guardian’s opinion essays, and [c] who has the ability to recognize that “brain rot” has probably affected that individual, co-workers, government leaders, children, and people who repair refrigerators.
This illustration is good enough and it was produced by the AI system Magic Studio.
Here’s the summary Chat GPT-4o spit out:
The essay critiques the decline of human intellect in the digital age, using the viral “Skibidi Toilet” video as a cultural metaphor for deteriorating focus and intellectual engagement. The Oxford English Dictionary’s 2024 word of the year, “brain rot”, reflects this phenomenon, defined as the mental degradation caused by excessive consumption of trivial online content. The essay draws on academic research to demonstrate how compulsive internet use is literally shrinking grey matter, weakening memory, shortening attention spans, and distorting cognitive functions. Studies from Harvard, Oxford, and King’s College reveal the harmful effects of media multitasking and heavy internet usage, with findings such as reduced grey matter in the brain and the risk of “digital dementia” in young people during critical developmental years.

The prevalence of infinite scroll and dopamine-driven design in apps is highlighted as a deliberate tactic by tech companies to exploit human psychology for profit, creating powerful addiction loops. Former Google design ethicist Tristan Harris warns that this “race to the bottom of the brain stem” manipulates fear, outrage, and reward systems, potentially driving society to catastrophe if left unchecked.

Despite the grim outlook, the essay finds hope in the growing self-awareness among younger generations, who coined the term “brain rot” and recognize its effects. Movements advocating for reduced technology use, such as switching to dumbphones or promoting smartphone-free childhoods, are gaining traction. These trends suggest the possibility of reclaiming mental clarity and focus, countering the current trajectory of cognitive decline.

Ultimately, the essay positions “Skibidi Toilet” as both a symptom of modern intellectual decay and a call to action. Humanity stands at a critical juncture: to either rise above the current trends of distraction and degradation or succumb further to the “U-bend” of intellectual regression.
For what it is worth, this is a typical bite-sized knowledge cookie. No one will know if the summary is good or bad unless that person takes the time to get the original full text and compare it with this AI generated output. The informational fast food provides a sugar jolt, either from saving time or from the summary consumer’s belief that the important information is on the money. A knowledge cookie if you will, or maybe intellectual junk food?
Is this summary good enough? From my point of view, it is just okay; that is, good enough. What else is required? Flash back to 1982, the ABI/INFORM database was a commercial success. A couple of competitors were trying to capture our customers which was tricky. Intermediaries like Dialog Information Services, ESA, LexisNexis (remember Buster and his silver jumpsuit?), among others “owned” the direct relationship with the companies that paid the intermediaries to use the commercial databases on their systems. Then the intermediaries shared some information with us, the database producers.
How did a special librarian or a researcher “find” or “know about” our database? The savvy database producers provided information to the individuals interested in a business and management related commercial database. We participated in niche trade shows. We held training programs and publicized them with our partners Dow Jones News Retrieval, Investext, Predicasts, and Disclosure, among a few others. Our senior professionals gave lectures about controlled term indexing, the value of classification codes, and specific techniques to retrieve a handful of relevant citations and abstracts from our online archive. We issued news releases about new sources of information we added, in most cases with permission of the publisher.
We did not use machine indexing. We did have a wizard who created a couple of automatic indexing systems. However, when we saw the results of what the software of 1982 could do, we fell back on human indexers, many of whom had professional training in the subject matter they were indexing. A good example was our coverage of real estate management activities. The person who handled this content was a lawyer who preferred reading and working in our offices. At this time, the database was owned by the Courier-Journal & Louisville Times Co. The owner of the privately held firm was an early adopter of online and electronic technology. He took considerable pride in our lineup of online databases. When he hired me, I recall his telling me, “Make the databases as good as you can.”
How did we create a business and management database that generated millions in revenue and whose index was used by entities like the Royal Bank of Canada to index its internal business information?
Here’s the secret sauce:
- We selected sources, in most cases business journals, publications, and some other types of business-related content; for example, the ANBAR management reports
- The selection of which specific article to summarize was the responsibility of a managing editor with deep business knowledge
- Once an article was flagged as suitable for ABI/INFORM, it was routed to the specialist who created a summary of the source article. At that time, ABI/INFORM summaries or “abstracts” were limited to 150 words, excluding the metadata.
- An indexing specialist would then read the abstract and assign quite specific index terms from our proprietary controlled vocabulary. The indexing included such items as four to six index terms from our controlled vocabulary and a classification code like 7700 to indicate “marketing” with additional two-digit indicators to make explicit that the source document was about marketing and direct mail or some similar subcategory of marketing. We also included codes to disambiguate between a railroad terminal and a computer terminal because source documents assumed the reader would “know” the specific field to which the term’s meaning belonged. We added geographic codes, so the person looking for information could locate employee stock ownership in a specific geographic region like Northern California, and a number of other codes specifically designed to allow precise, comprehensive retrieval of abstracts about business and management. Some of the systems permitted free text searching of the abstract, and we considered that a supplement to our quite detailed indexing.
- Each abstract and its index terms were checked by a quality control process using people who had demonstrated their interest in our product and their ability to double check the indexing.
- We had proprietary “content management systems” and these generated the specific file formats required by our intermediaries.
- Each week we updated our database and we were exploring daily updates for our companion product called Business Dateline when the Courier Journal was broken up and the database operation sold to a movie camera company, Bell+Howell.
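The editorial controls in the list above can be sketched as a simple record check. This is a minimal illustration, not the actual ABI/INFORM file format: the field names, the sample vocabulary, and the codes are assumptions for demonstration only.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the proprietary controlled vocabulary and
# classification codes; the real ABI/INFORM lists were far larger.
CONTROLLED_VOCABULARY = {
    "Direct marketing", "Market strategy", "Direct mail",
    "Advertising campaigns", "Consumer behavior", "Retailing industry",
}
# A four-digit category (7700 = marketing), optionally extended with a
# two-digit subcategory indicator (here, a made-up code for direct mail).
VALID_CLASS_CODES = {"7700", "770024"}

@dataclass
class AbstractRecord:
    title: str
    abstract: str          # limited to 150 words, excluding metadata
    index_terms: list      # four to six controlled-vocabulary terms
    class_codes: list      # classification codes such as "7700"
    geo_codes: list        # e.g., a code for Northern California

    def validate(self):
        """Return a list of editorial-control errors (empty if clean)."""
        errors = []
        if len(self.abstract.split()) > 150:
            errors.append("abstract exceeds the 150-word limit")
        if not 4 <= len(self.index_terms) <= 6:
            errors.append("expected four to six index terms")
        errors += [f"term not in vocabulary: {t}"
                   for t in self.index_terms if t not in CONTROLLED_VOCABULARY]
        errors += [f"unknown classification code: {c}"
                   for c in self.class_codes if c not in VALID_CLASS_CODES]
        return errors

record = AbstractRecord(
    title="Direct Mail Tactics for Regional Retailers",
    abstract="A summary of at most 150 words would go here.",
    index_terms=["Direct marketing", "Direct mail",
                 "Market strategy", "Retailing industry"],
    class_codes=["770024"],
    geo_codes=["NCAL"],
)
print(record.validate())   # an empty list means the record passed every check
```

The point of the sketch is that every record had to clear explicit, human-defined gates before it entered the database, which is exactly what a one-shot AI summary skips.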
Chat GPT-4o created the 300 word summary without the human knowledge, expertise, and effort. Consequently, these knowledge-based workflows have been replaced by smart software which can produce a summary in less than 30 seconds.
And that summary is, from my point of view, good enough. There are some trade offs:
- Chat GPT-4o is reactive. Feed it a URL or a text, and it will summarize it. Gone is the knowledge-based approach to selecting a specific, high-value source document for inclusion in the database. Our focus was informed selection. People paid to access the database because of the informed choice about what to put in the database.
- The summary does not include the ABI/INFORM key points and actionable elements of the source document. The summary is what a high school or junior college graduate would create if a writing teacher assigned “how to write a précis” as part of the course requirements. In general, high school and junior college graduates are not into nuance and cannot determine the pivotal information payload in a source document.
- The precise indexing and tagging is absent. One could create 1,000 such summaries, toss them into MISTRAL, and do a search. The result is great if one is uninformed about the importance of editorial policies, knowledge-based workflows, and precise, thorough indexing.
The reason I am sharing some of this “ancient” online history is:
- The loss of quality in online information is far more serious than most people understand. Getting a summary today is no big deal. What’s lost is simply not on these individuals’ radar.
- The lack of an editorial policy, precise date and time information, and the fine-grained indexing means that one has to wade through a mass of undifferentiated information. ABI/INFORM in the 1980s delivered a handful of citations directly on point with the user’s query. Today no one knows or cares about precision and recall.
- It is now more difficult than at any other time in my professional work career to locate needed information. Public libraries do not have the money to obtain reference materials, books, journals, and other content. If the content is online, it is a dumbed down and often cut rate version of the old-fashioned commercial databases created by informed professionals.
- People look up information online and remain dumb; that is, the majority of the people with whom I come in contact routinely ask me and my team, “Where do you get your information?” We even have a slide in our CyberSocial lecture about “how” and “where.” The analysts and researchers in the audience usually don’t know so an entire subculture of open source information professionals has come into existence. These people are largely on their own and have to do work which once was a matter of querying a database like ABI/INFORM, Predicasts, Disclosure, Agricola, etc.
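The precision and recall lamented above are simple to compute. Here is a minimal sketch, assuming a query returns a set of document IDs and we know in advance which documents are actually relevant; the IDs and numbers are made up for illustration.

```python
# Precision: what fraction of the retrieved documents are relevant?
# Recall:    what fraction of the relevant documents were retrieved?

def precision_recall(retrieved, relevant):
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# A tight, indexed search: four results, three of them on point.
print(precision_recall(retrieved=[1, 2, 3, 4], relevant=[1, 2, 3, 9]))
# → (0.75, 0.75)

# A broad free-text search: one hundred results, the hits buried in noise.
print(precision_recall(retrieved=range(1, 101), relevant=[1, 2, 3, 9]))
# → (0.04, 1.0)
```

The second query finds everything relevant but drowns it in 96 irrelevant documents, which is the “wade through a mass of undifferentiated information” problem in numeric form.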
Sure, the essay is good. The summary is good enough. Where does that leave a person trying to understand the factual and logical errors in a new book examining social media? In my opinion, people are in the dark and have a difficult time finding information. Making decisions in the dark or without on point, accurate information is a recipe for a really bad batch of cookies.
Stephen E Arnold, December 15, 2024
We Need a Meeting about Meetings after I Get Back from a Meeting
December 10, 2024
This blog post flowed from the sluggish and infertile mind of a real live dinobaby. If there is art, smart software of some type was probably involved.
I heard that Mr. Jeff Bezos, the Big Daddy of online bookstores, likes chaotic and messy meetings. Socrates might not have been down with that approach.
As you know, Socrates was a teacher who ended up dead because he asked annoying questions. “Socratic thinking” helps people remain open to new ideas. Do new ideas emerge from business meetings? Most of those whom I know grumble, pointing out to me that meetings waste their time. Michael Poczwardowski challenges that assumption with Socratic thinking in the Perspectiveship post “Socratic Questioning – ‘Meetings are a waste of time’”.
Socratic-based discussions are led by someone who only asks questions. By asking only questions, the discussion can focus on challenging assumptions, critical thinking, and first principles, dividing problems into basic elements to broaden perspectives and understanding. Poczwardowski brings the idea that “meetings are a waste of time” to the discussion forum.
Poczwardowski introduces readers to Socratic thinking with the steps of clarifying the idea, challenging assumptions, looking for data/evidence, changing perspective, exploring consequences and implications, and questioning the question. Here’s my summary done by a person with an advanced degree in information science. (I know I am not as smart as Google’s AI, but I do what I can with my limited resources, thank you.)
“The key is to remain open to possibilities and be ready to face our beliefs. Socratic questioning is a great way to work on improving our critical thinking.
When following Socratic questioning ask to:
• Clarify the idea: It helps us understand what we are talking about and to be on the same page
• Challenge assumptions: Ask them to list their assumptions.
• Look for evidence: Asking what kind of evidence they have can help them verify the sources of their beliefs
• Change perspectives: Look at the problem from others’ points of view.
• Explore consequences: Explore the possible outcomes and effects of actions to understand their impact”
Am I the only one who thinks this also sounds obvious? Ancient philosophers did inspire the modern approach to scientific thought. Galileo demonstrated that he would recant instead of going to prison or being killed. Perhaps I should convene a meeting to decide if the meeting is a waste of time. I will get back to you. I have a meeting coming up.
Whitney Grace, December 10, 2024
AI Automation: Spreading Like Covid and Masks Will Not Help
December 10, 2024
This blog post flowed from the sluggish and infertile mind of a real live dinobaby. If there is art, smart software of some type was probably involved.
Reddit is one of the last places on the Internet where you can find quality and useful information. Reddit serves as the Internet’s hub for news, tech support, trolls, and real-life perspectives about jobs. Here’s a Reddit downer in the ChatGPT thread for anyone who works in a field that can be automated: “Well this is it boys. I was just informed from my boss and HR that my entire profession is being automated away.”
For brevity’s sake here is the post:
“For context I work production in local news. Recently there’s been developments in AI driven systems that can do 100% of the production side of things which is, direct, audio operate, and graphic operate -all of those jobs are all now gone in one swoop. This has apparently been developed by the company Q ai. For the last decade I’ve worked in local news and have garnered skills I thought I would be able to take with me until my retirement, now at almost 30 years old, all of those job opportunities for me are gone in an instant. The only person that’s keeping their job is my manager, who will overlook the system and do maintenance if needed. That’s 20 jobs lost and 0 gained for our station. We were informed we are going to be the first station to implement this under our company. This means that as of now our entire production staff in our news station is being let go. Once the system is implemented and running smoothly then this system is going to be implemented nationwide (effectively eliminating tens of thousands of jobs.) There are going to be 0 new jobs built off of this AI platform. There are people I work with in their 50’s, single, no college education, no family, and no other place to land a job once this kicks in. I have no idea what’s going to happen to them. This is it guys. This is what our future with AI looks like. This isn’t creating any new jobs this is knocking out entire industry level jobs without replacing them.”
The post is followed by comments of commiseration, encouragement, and the usual doom and gloom. It’s not surprising that local news stations are automating their tasks, especially with the overhead associated with employees. These include: healthcare, retirement packages, vacation days, PTO, and more. AI is the perfect employee, because it doesn’t complain or take time off. AI, however, lacks basic common sense and fact checking. We’re witnessing a change in the job market; it just sucks to live through it.
Whitney Grace, December 10, 2024
Deepfakes: An Interesting and Possibly Pernicious Arms Race
December 2, 2024
As it turns out, deepfakes are a difficult problem to contain. Who knew? As victims from celebrities to schoolchildren multiply exponentially, USA Today asks, “Can Legislation Combat the Surge of Non-Consensual Deepfake Porn?” Journalist Dana Taylor interviewed UCLA’s John Villasenor on the subject. To us, the answer is simple: Absolutely not. As with any technology, regulation is reactive while bad actors are proactive. Villasenor seems to agree. He states:
“It’s sort of an arms race, and the defense is always sort of a few steps behind the offense, right? In other words that you make a detection tool that, let’s say, is good at detecting today’s deepfakes, but then tomorrow somebody has a new deepfake creation technology that is even better and it can fool the current detection technology. And so then you update your detection technology so it can detect the new deepfake technology, but then the deepfake technology evolves again.”
Exactly. So if governments are powerless to stop this horror, what can? Perhaps big firms will fight tech with tech. The professor dreams:
“So I think the longer term solution would have to be automated technologies that are used and hopefully run by the people who run the servers where these are hosted. Because I think any reputable, for example, social media company would not want this kind of content on their own site. So they have it within their control to develop technologies that can detect and automatically filter some of this stuff out. And I think that would go a long way towards mitigating it.”
Sure. But what can be done while we wait on big tech to solve the problem it unleashed? Individual responsibility, baby:
“I certainly think it’s good for everybody, and particularly young people these days to be just really aware of knowing how to use the internet responsibly and being careful about the kinds of images that they share on the internet. … Even images that are sort of maybe not crossing the line into being sort of specifically explicit but are close enough to it that it wouldn’t be as hard to modify being aware of that kind of thing as well.”
Great, thanks. Admitting he may sound naive, Villasenor also envisions education to the (partial) rescue:
“There’s some bad actors that are never going to stop being bad actors, but there’s some fraction of people who I think with some education would perhaps be less likely to engage in creating these sorts of… disseminating these sorts of videos.”
Our view is that digital tools allow the dark side of individuals to emerge and expand.
Cynthia Murrell, December 2, 2024
AI In Group Communications: The Good and the Bad
November 29, 2024
In theory, AI that can synthesize many voices into one concise, actionable statement is very helpful. In practice, it is complicated. The Tepper School of Business at Carnegie Mellon announces, “New Paper Co-Authored by Tepper School Researchers Articulates How Large Language Models are Changing Collective Intelligence Forever.” Researchers from Tepper and other institutions worked together on the paper, which was published in Nature Human Behavior. We learn:
“[Professor Anita Williams] Woolley and her co-authors considered how LLMs process and create text, particularly their impact on collective intelligence. For example, LLMs can make it easier for people from different backgrounds and languages to communicate, which means groups can collaborate more effectively. This technology helps share ideas and information smoothly, leading to more inclusive and productive online interactions. While LLMs offer many benefits, they also present challenges, such as ensuring that all voices are heard equally.”
Indeed. The write-up continues:
“‘Because LLMs learn from available online information, they can sometimes overlook minority perspectives or emphasize the most common opinions, which can create a false sense of agreement,’ said Jason Burton, an assistant professor at Copenhagen Business School. Another issue is that LLMs can spread incorrect information if not properly managed because they learn from the vast and varied content available online, which often includes false or misleading data. Without careful oversight and regular updates to ensure data accuracy, LLMs can perpetuate and even amplify misinformation, making it crucial to manage these tools responsibly to avoid misleading outcomes in collective decision-making processes.”
In order to do so, the paper suggests, we must further explore LLMs’ ethical and practical implications. Only then can we craft effective guidelines for responsible AI summarization. Such standards are especially needed, the authors note, for any use of LLMs in policymaking and public discussions.
But not to worry. The big AI firms are all about due diligence, right?
Cynthia Murrell, November 29, 2024
Group Work Can Be Problematic Even on Video
November 28, 2024
Group work is an unavoidable part of school, and everyone hates it. Unfortunately, group work continues into adulthood, except it’s called a job, teamwork, and synergy. Intelligent leaders realize that poor collaboration hurts profits, so the Zoom Blog (everyone’s favorite digital workplace) published the following: “New Report Uncovers What Bad Collaboration Can Cost Your Organization — And How You Can Help Fix It.”
Poor collaboration takes many forms. It’s more than one team member not carrying their weight; it’s also calendars not syncing or misunderstanding a colleague’s intent. Zoom conducted a Global Connection In The Workplace report based on a Morning Consult survey of over 8,000 workers in 16 countries. The survey measured how much repairing bad collaboration costs, common collaboration challenges, and how people prefer to work with each other.
The wasted costs are astounding: $874,000 annually per 1,000 employees, or $16,491 per manager. Remote leaders spent the majority of their time collaborating with their co-workers, spending an average of 2-3 hours every day on email and virtual meetings. Leaders spent more time than their associates resolving bad collaboration and refocusing between tasks. Leaders and workers both agreed that chatting/instant messaging was their favorite way of communicating. The survey also revealed shifting preferences based on generational differences. Baby Boomers prefer in-person meetings while Gen Z likes using project management software.
IT workers shared their collaboration struggles. The study discovered that IT workers are pummeled with requests for new tools and apps. IT workers also use a variety of apps to solve problems. When they use more than ten apps for their jobs, continuity across collaboration platforms breaks down:
“IT leaders are constantly bombarded with sales pitches and employee requests for new apps and tools. Individually, each one promises to solve a problem, but the report shows that too many apps were actually associated with greater collaboration challenges. Those who reported using more than 10 apps for work were more likely to struggle with issues like misunderstandings in communication, lack of engagement from colleagues, and lack of alignment than those who reported using fewer than five apps.”
Understandably, collaboration is a big problem for all companies and needs improvement. Zoom asserts that video collaboration is a solution to many of these issues. Doesn’t it make sense for Zoom to make those claims? We believe everything a funded research report presents as factual.
Whitney Grace, November 28, 2024
Modern Library Patrons Present Challenging Risky Business Situations
November 27, 2024
Librarians have one of the most stressful jobs in the world. Why? They do much more than assist people in locating books or read to children. They are also therapists, community resource managers, IT support staff, babysitters, elderly care specialists, referees, tutors, teachers, police officers, and more. Public librarians handle more situations than their job description entails, and Public Library Quarterly published: “The Hidden Toll: Exploring the Impact of Challenging Patron Behaviors On Australian Public Library Staff.”
It’s not an industry secret that librarians face more than they can handle, but their experiences are often demeaned. Their resources and stamina are stretched to the limits. There have been studies for years about how librarians deal with more than they can handle, but this study researched the trauma they face:
“As a public-facing profession, public library workers are often exposed to challenging behaviors that raise concern for their safety. To understand these concerns, this study explores these staff safety issues in Australian public libraries through semi-structured interviews with 59 staff members from six library services. Findings reveal that library workers frequently encounter challenging and sometimes violent behaviors from patrons. These incidents impact staff wellbeing, causing stress, anxiety, and potential long-term psychological effects. Many workers receive insufficient workplace support following traumatic incidents, leading to internalization of the trauma the experiences cause. The study highlights the need for improved institutional support and better safety measures.”

The study also recognizes the tension created by libraries’ open-door policies, which may expose workers to potential harm. It acknowledges that there has been little to no research about the mental and physical health of library workers. There is a lot of literature written about patron satisfaction with services and staff, but very little about aggressive, problem patrons. Many studies also focus on the trauma-related services patrons need, but not on the library staff.
There have been some studies related to the impact of problem patrons on staff, but nothing in depth. The Australian participants in the study shared stories of their time in the trenches, and it’s not pretty. Librarians around the world have similar or worse stories. Librarians need assistance for themselves and their patrons, but I doubt it’ll come through. At least the writers agree with me:
“The findings of this study paint a concerning picture of the working conditions in Australian public libraries. The prevalence of unsafe incidents, coupled with their significant psychological impact on staff, calls for action from library management, policymakers, and local government councils responsible for public libraries. While public libraries play a crucial role in providing open access to spaces, information, and services for all members of society, this should not come at the cost of staff safety and wellbeing. Addressing these issues will require a multifaceted approach, involving enhanced training, improved support systems, policy changes, and potentially a broader societal discussion about the role and resources of public libraries.”
Whitney Grace, November 27, 2024
Early AI Adoption: Some Benefits
November 25, 2024
Is AI good or is it bad? The debate is still raging, especially in Hollywood where writers, animators, and other creatives are demanding the technology be removed from the industry. AI, however, is a tool. It can be used for good and bad acts, but humans are the ones who initiate them. AI At Wharton investigated how users are currently adopting AI: “Growing Up: Navigating Generative AI’s Early Years – AI Adoption Report.”
The report was based on the responses of full-time employees who worked in commercial organizations with 1,000 or more workers. Adoption of AI in businesses jumped from 37% in 2023 to 72% in 2024, with high growth in human resources and marketing departments. Companies are still unsure if AI is worth the ROI. The study explains that AI will benefit companies that have adaptable organizations and measurable ROI.
The report includes charts that document the high rate of usage compared to last year as well as how AI is mostly being used: document writing and editing, data analytics, document summarization, marketing content creation, personal marketing and advertising, internal support, customer support, fraud prevention, and report creation. AI is impactful but not overwhelmingly so, yet the response to the new technology is positive and companies will continue to invest in it.
“Looking to the future, Gen AI adoption will enter its next chapter which is likely to be volatile in terms of investment and carry greater privacy and usage restrictions. Enthusiasm projected by new Chief AI Officer (CAIO) role additions and team expansions this year will be tempered by the reality of finding “accountable” ROI. While approximately three out of four industry respondents plan to increase Gen AI budgets next year, the majority expect growth to slow over the longer term, signaling a shift in focus towards making the most effective internal investments and building organizational structures to support sustainable Gen AI implementation. The key to successful adoption of Gen AI will be proper use cases that can scale, and measurable ROI as well as organization structures and cultures that can adapt to the new technology.”
While the responses are positive, how exactly is AI being used beyond the charts? Are users implementing AI for work short cuts, such as slapdash content generation? I’d hate to be the lazy employee who uses AI to make the next quarterly report and doesn’t double-check the information.
Whitney Grace, November 25, 2024
Pragmatism or the Normalization of Good Enough
November 14, 2024
Sorry to disappoint you, but this blog post is written by a dumb humanoid. The art? We used MidJourney.
I recall that some teacher told me that the Mona Lisa painter fooled around more with his paintings than he did with his assistants. True or false? I don’t know. I do know that when I wandered into the Louvre in late 2024, there were people emulating sardines. These individuals wanted a glimpse of good old Mona.
Is Hamster Kombat the 2024 incarnation of the Mona Lisa? I think this image is part of the Telegram eGame’s advertising. Nice art. Definitely a keeper for the swipers of the future.
I read “Methodology Is Bullsh&t: Principles for Product Velocity.” The main idea, in my opinion, is do stuff fast and adapt. I think this is similar to the go-go mentality of whatever genius said, “Go fast. Break things.” This version of the Truth says:
All else being equal, there’s usually a trade-off between speed and quality. For the most part, doing something faster usually requires a bit of compromise. There’s a corner getting cut somewhere. But all else need not be equal. We can often eliminate requirements … and just do less stuff. With sufficiently limited scope, it’s usually feasible to build something quickly and to a high standard of quality. Most companies assign requirements, assert a deadline, and treat quality as an output. We tend to do the opposite. Given a standard of quality, what can we ship in 60 days? Recent escapades notwithstanding, Elon Musk has a similar thought process here. Before anything else, an engineer should make the requirements less dumb.
Would the approach work for the Mona Lisa dude or for Albert Einstein? I think Al fumbled along for years, asking people to help with certain mathy issues, and worrying about how he saw a moving train relative to one parked at the station.
I think the idea in the essay is the 2024 view of a practical way to get a product or service before prospects. The benefits of redefining “fast” in terms of a specification trimmed to the MVP or minimum viable product makes sense to TikTok scrollers and venture partners trying to find a pony to ride at a crowded kids’ party.
One of the touchstones in the essay, in my opinion, is this statement:
Our customers are engineers, so we generally expect that our engineers can handle product, design, and all the rest. We don’t need to have a whole committee weighing in. We just make things and see whether people like them.
I urge you to read the complete original essay.
Several observations:
- Some people like the Mona Lisa dude are engaged in a process of discovery, not shipping something good enough. Discovery takes some people time, lots of time. What happens along the way is part of expanding an information base.
- The go-go approach has interesting consequences; for example, based on anecdotal and flawed survey data, young users of social media evidence a number of interesting behaviors. The idea of “let ‘er rip” appears to have some impact on young people. Perhaps you have first-hand experience with this problem? I know people whose children have manifested quite remarkable behaviors. I do know that the erosion of basic mental functions like concentration is visible to me every time a teenager checks me out at the grocery store.
- By redefining excellence and quality, the notion of a high-value goal drops down a notch. Some new automobiles don’t work too well; for example, consider the Tesla Cybertruck owner whose vehicle was not able to leave the dealer’s lot.
Net net: Is a Telegram mini app Hamster Kombat today’s equivalent of the Mona Lisa?
Stephen E Arnold, November 14, 2024
The Yogi Berra Principle: Déjà Vu All Over Again
November 7, 2024
Sorry to disappoint you, but this blog post is written by a dumb humanoid. The art? We used MidJourney.
I noted two write ups. The articles share what I call a meta concept. The first article is from PC World called “Office Apps Crash on Windows 11 24H2 PCs with CrowdStrike Antivirus.” The actors in this soap opera are the confident Microsoft and the elegant CrowdStrike. The write up says:
The annoying error affects Office applications such as Word or Excel, which crash and become completely unusable after updating to Windows 11 24H2. And this apparently only happens on systems that are running antivirus software by CrowdStrike. (Yes, the very same CrowdStrike that caused the global IT meltdown back in July.)
A patient, skilled teacher explains to the smart software, “You goofed, speedy.” Thanks, Midjourney. Good enough.
The other write up adds some color to the trivial issue. “Windows 11 24H2 Misery Continues, As Microsoft’s Buggy Update Is Now Breaking Printers – Especially on Copilot+ PCs” says:
Neowin reports that there are quite a number of complaints from those with printers who have upgraded to Windows 11 24H2 and are finding their device is no longer working. This is affecting all the best-known printer manufacturers, the likes of Brother, Canon, HP and so forth. The issue is mainly being experienced by those with a Copilot+ PC powered by an Arm processor, as mentioned, and it either completely derails the printer, leaving it non-functional, or breaks certain features. In other cases, Windows 11 users can’t install the printer driver.
Okay, dinobaby, what’s the meta concept thing? Let me offer my ideas about the challenge these two write ups capture:
- Microsoft may have what I call the Boeing disease; that is, the engineering is not so hot. Many things create problems. Finger pointing and fast talking do not solve the problems.
- The entities involved are companies and software which have managed to become punch lines for some humorists. For instance, what software can produce more bricks than a kiln? Answer: A Windows update. Ho ho ho. Funny until one cannot print a Word document for a Type A, drooling MBA.
- Remediating processes don’t remediate. The work process itself generates flawed outputs. Stated another way, as with some bank mainframes running 1960s code, fixing is not possible, and there are insufficient people and money to get the repair done.
The meta concept is that the way well-paid workers tackle an engineering project is capable of outputting a product or service with a failure rate approaching 100 percent. How about those Windows updates? Are they the digital equivalent of the Boeing space initiative?
The answer is, “No.” We have instances of processes which cannot produce reliable products and services. The framework itself produces failure. This idea has some interesting implications. If software cannot allow a user to print, what else won’t perform as expected? Maybe AI?
Stephen E Arnold, November 7, 2024