Group Work Can Be Problematic Even on Video
November 28, 2024
Group work is an unavoidable part of school, and everyone hates it. Unfortunately, group work continues into adulthood, except there it is called a job, teamwork, and synergy. Intelligent leaders realize that poor collaboration hurts profits, so the Zoom Blog (everyone’s favorite digital workplace) published: “New Report Uncovers What Bad Collaboration Can Cost Your Organization — And How You Can Help Fix It.”
Poor collaboration takes many forms. It’s more than one team member not carrying their weight; it’s also calendars that don’t sync and misunderstandings of a colleague’s intent. Zoom’s Global Connection In The Workplace report is based on a Morning Consult survey of over 8,000 workers in 16 countries. The survey examined how much repairing bad collaboration costs, the most common collaboration challenges, and how people prefer to work with one another.
The wasted costs are astounding: $874,000 annually per 1,000 employees, or $16,491 per manager. Remote leaders spent the majority of their time collaborating with their co-workers, averaging 2-3 hours every day on email and virtual meetings. Leaders spent more time than their associates resolving bad collaboration and refocusing between tasks. Leaders and workers agreed that chat/instant messaging was their favorite way of communicating. The survey also revealed shifting preferences across generations: Baby Boomers prefer in-person meetings, while Gen Z likes project management software.
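A quick back-of-the-envelope check (my arithmetic, not Zoom’s, and it assumes the two figures describe the same cost pool) shows how the headline numbers relate:

```python
# Rough sanity check of the reported figures; assumes costs scale
# linearly with headcount. My calculations, not Zoom's.
annual_cost_per_1000_employees = 874_000
cost_per_employee = annual_cost_per_1000_employees / 1_000
print(f"${cost_per_employee:,.0f} per employee per year")   # $874

cost_per_manager = 16_491
implied_managers = annual_cost_per_1000_employees / cost_per_manager
print(f"~{implied_managers:.0f} managers per 1,000 employees")  # ~53
```

In other words, roughly $874 per head, and the per-manager figure implies about 53 managers per 1,000 employees if both numbers measure the same waste.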
IT workers shared their collaboration struggles. The study discovered that IT workers are pummeled with requests for new tools and apps, and that they juggle a variety of apps to solve problems. When workers rely on more than ten apps for the job, continuity across their collaboration platforms breaks down:
“IT leaders are constantly bombarded with sales pitches and employee requests for new apps and tools. Individually, each one promises to solve a problem, but the report shows that too many apps were actually associated with greater collaboration challenges. Those who reported using more than 10 apps for work were more likely to struggle with issues like misunderstandings in communication, lack of engagement from colleagues, and lack of alignment than those who reported using fewer than five apps.”
Understandably, collaboration is a big problem for all companies and needs improvement. Zoom asserts that video collaboration is a solution to many of these issues. Doesn’t it make sense for Zoom to make those claims? We believe everything a funded research report presents as factual.
Whitney Grace, November 28, 2024
Modern Library Patrons Present Challenging Risky Business Situations
November 27, 2024
Librarians have one of the most stressful jobs in the world. Why? They do much more than help people locate books or read to children. They are also therapists, community resource managers, IT support, babysitters, elderly care specialists, referees, tutors, teachers, police officers, and more. Public librarians handle far more situations than their job descriptions entail, and Public Library Quarterly published: “The Hidden Toll: Exploring the Impact of Challenging Patron Behaviors On Australian Public Library Staff.”
It’s not an industry secret that librarians face more than they can handle, but their experiences are often dismissed. Their resources and stamina are stretched to the limit. Studies have documented these burdens for years, but this one researches the trauma library staff face:
“As a public-facing profession, public library workers are often exposed to challenging behaviors that raise concern for their safety. To understand these concerns, this study explores these staff safety issues in Australian public libraries through semi-structured interviews with 59 staff members from six library services. Findings reveal that library workers frequently encounter challenging and sometimes violent behaviors from patrons. These incidents impact staff wellbeing, causing stress, anxiety, and potential long-term psychological effects. Many workers receive insufficient workplace support following traumatic incidents, leading to internalization of the trauma the experiences cause. The study highlights the need for improved institutional support and better safety measures.”
The study also recognizes the tension created by libraries’ open-door policies, which may expose workers to potential harm. It acknowledges that there has been little to no research on the mental and physical health of library workers. A great deal of literature covers patron satisfaction with services and staff, but very little addresses aggressive, problem patrons. Many studies also focus on the trauma-related services patrons need, but not on what library staff need.
There have been some studies on the impact of problem patrons on staff, but nothing in depth. The Australian participants in this study shared stories of their time in the trenches, and it’s not pretty. Librarians around the world have similar or worse stories. Librarians need assistance for themselves and their patrons, but I doubt it will come. At least the writers agree with me:
“The findings of this study paint a concerning picture of the working conditions in Australian public libraries. The prevalence of unsafe incidents, coupled with their significant psychological impact on staff, calls for action from library management, policymakers, and local government councils responsible for public libraries. While public libraries play a crucial role in providing open access to spaces, information, and services for all members of society, this should not come at the cost of staff safety and wellbeing. Addressing these issues will require a multifaceted approach, involving enhanced training, improved support systems, policy changes, and potentially a broader societal discussion about the role and resources of public libraries.”
Whitney Grace, November 27, 2024
Early AI Adoption: Some Benefits
November 25, 2024
Is AI good or is it bad? The debate is still raging, especially in Hollywood, where writers, animators, and other creatives are demanding the technology be removed from the industry. AI, however, is a tool. It can be used for good and bad acts, but humans are the ones who initiate them. AI at Wharton investigated how users are currently adopting AI: “Growing Up: Navigating Generative AI’s Early Years – AI Adoption Report.”
The report is based on responses from full-time employees at commercial organizations with 1,000 or more workers. Adoption of AI in businesses jumped from 37% in 2023 to 72% in 2024, with high growth in human resources and marketing departments. Companies are still unsure whether AI delivers a worthwhile return. The study explains that AI will benefit companies with adaptable organizations and measurable ROI.
The report includes charts documenting the high rate of usage compared to last year, as well as how AI is mostly being used: document writing and editing, data analytics, document summarization, marketing content creation, personalized marketing and advertising, internal support, customer support, fraud prevention, and report creation. AI is definitely impactful, though not overwhelmingly so; the response to the new technology is positive, and companies will continue to invest in it.
“Looking to the future, Gen AI adoption will enter its next chapter which is likely to be volatile in terms of investment and carry greater privacy and usage restrictions. Enthusiasm projected by new Chief AI Officer (CAIO) role additions and team expansions this year will be tempered by the reality of finding “accountable” ROI. While approximately three out of four industry respondents plan to increase Gen AI budgets next year, the majority expect growth to slow over the longer term, signaling a shift in focus towards making the most effective internal investments and building organizational structures to support sustainable Gen AI implementation. The key to successful adoption of Gen AI will be proper use cases that can scale, and measurable ROI as well as organization structures and cultures that can adapt to the new technology.”
While the responses are positive, how exactly is AI being used beyond the charts? Are users implementing AI for work shortcuts, such as really slapdash content generation? I’d hate to be the lazy employee who uses AI to make the next quarterly report and doesn’t double-check the information.
Whitney Grace, November 25, 2024
Pragmatism or the Normalization of Good Enough
November 14, 2024
Sorry to disappoint you, but this blog post is written by a dumb humanoid. The art? We used MidJourney.
I recall that some teacher told me that the Mona Lisa painter fooled around more with his paintings than he did with his assistants. True or false? I don’t know. I do know that when I wandered into the Louvre in late 2024, there were people emulating sardines. These individuals wanted a glimpse of good old Mona.
Is Hamster Kombat the 2024 incarnation of the Mona Lisa? I think this image is part of the Telegram eGame’s advertising. Nice art. Definitely a keeper for the swipers of the future.
I read “Methodology Is Bullsh&t: Principles for Product Velocity.” The main idea, in my opinion, is do stuff fast and adapt. I think this is similar to the go-go mentality of whatever genius said, “Go fast. Break things.” This version of the Truth says:
All else being equal, there’s usually a trade-off between speed and quality. For the most part, doing something faster usually requires a bit of compromise. There’s a corner getting cut somewhere. But all else need not be equal. We can often eliminate requirements … and just do less stuff. With sufficiently limited scope, it’s usually feasible to build something quickly and to a high standard of quality. Most companies assign requirements, assert a deadline, and treat quality as an output. We tend to do the opposite. Given a standard of quality, what can we ship in 60 days? Recent escapades notwithstanding, Elon Musk has a similar thought process here. Before anything else, an engineer should make the requirements less dumb.
Would the approach work for the Mona Lisa dude or for Albert Einstein? I think Al fumbled along for years, asking people to help with certain mathy issues, and worrying about how he saw a moving train relative to one parked at the station.
I think the idea in the essay is the 2024 view of a practical way to get a product or service in front of prospects. The benefit of redefining “fast” in terms of a specification trimmed to the MVP, or minimum viable product, makes sense to TikTok scrollers and venture partners trying to find a pony to ride at a crowded kids’ party.
One of the touchstones in the essay, in my opinion, is this statement:
Our customers are engineers, so we generally expect that our engineers can handle product, design, and all the rest. We don’t need to have a whole committee weighing in. We just make things and see whether people like them.
I urge you to read the complete original essay.
Several observations:
- Some people, like the Mona Lisa dude, are engaged in a process of discovery, not shipping something good enough. Discovery takes some people time, lots of time. What happens along the way is part of expanding an information base.
- The go-go approach has interesting consequences; for example, based on anecdotal and flawed survey data, young users of social media evidence a number of interesting behaviors. The idea of “let ‘er rip” appears to have some impact on young people. Perhaps you have first-hand experience with this problem? I know people whose children have manifested quite remarkable behaviors. I do know that the erosion of basic mental functions like concentration is visible to me every time a teenager checks me out at the grocery store.
- Redefining excellence and quality drops the notion of a high-value goal down a bit. Some new automobiles don’t work too well; for example, the Tesla Cybertruck owner whose vehicle was not able to leave the dealer’s lot.
Net net: Is the Telegram mini app Hamster Kombat today’s equivalent of the Mona Lisa?
Stephen E Arnold, November 14, 2024
The Yogi Berra Principle: Déjà Vu All Over Again
November 7, 2024
Sorry to disappoint you, but this blog post is written by a dumb humanoid. The art? We used MidJourney.
I noted two write ups. The articles share what I call a meta concept. The first, from PC World, is “Office Apps Crash on Windows 11 24H2 PCs with CrowdStrike Antivirus.” The actors in this soap opera are the confident Microsoft and the elegant CrowdStrike. The write up says:
The annoying error affects Office applications such as Word or Excel, which crash and become completely unusable after updating to Windows 11 24H2. And this apparently only happens on systems that are running antivirus software by CrowdStrike. (Yes, the very same CrowdStrike that caused the global IT meltdown back in July.)
A patient, skilled teacher explains to the smart software, “You goofed, speedy.” Thanks, Midjourney. Good enough.
The other write up adds some color to the trivial issue. “Windows 11 24H2 Misery Continues, As Microsoft’s Buggy Update Is Now Breaking Printers – Especially on Copilot+ PCs” says:
Neowin reports that there are quite a number of complaints from those with printers who have upgraded to Windows 11 24H2 and are finding their device is no longer working. This is affecting all the best-known printer manufacturers, the likes of Brother, Canon, HP and so forth. The issue is mainly being experienced by those with a Copilot+ PC powered by an Arm processor, as mentioned, and it either completely derails the printer, leaving it non-functional, or breaks certain features. In other cases, Windows 11 users can’t install the printer driver.
Okay, dinobaby, what’s the meta concept thing? Let me offer my ideas about the challenge these two write ups capture:
- Microsoft may have what I call the Boeing disease; that is, the engineering is not so hot. Many things create problems. Finger pointing and fast talking do not solve the problems.
- The entities involved are companies and software which have managed to become punch lines for some humorists. For instance, what software can produce more bricks than a kiln? Answer: A Windows update. Ho ho ho. Funny until one cannot print a Word document for a Type A, drooling MBA.
- Remediating processes don’t remediate. The work process itself generates flawed outputs. Stated another way, like some bank mainframes and their 1960s code, fixing is not possible, and there are insufficient people and money to get the repair done.
The meta concept is that the way well-paid workers tackle an engineering project can output a product or service with a failure rate approaching 100 percent. How about those Windows updates? Are they the digital equivalent of the Boeing space initiative?
The answer is, “No.” We have instances of processes which cannot produce reliable products and services. The framework itself produces failure. This idea has some interesting implications. If software cannot allow a user to print, what else won’t perform as expected? Maybe AI?
Stephen E Arnold, November 7, 2024
The Reason IT Work is Never Done: The New Sisyphus Task
November 1, 2024
Why are systems never completely fixed? There is always some modification that absolutely must be made. In a recent blog post, engagement firm Votito chalks it up to Tog’s Paradox (aka The Complexity Paradox). This rule states that when a product simplifies user tasks, users demand new features that perpetually increase the product’s complexity. Both minimalists and completionists are doomed to disappointment, it seems.
The post supplies three examples of Tog’s Paradox in action. Perhaps the most familiar to many is that of social media. We are reminded:
“Initially designed to provide simple ways to share photos or short messages, these platforms quickly expanded as users sought additional capabilities, such as live streaming, integrated shopping, or augmented reality filters. Each of these features added new layers of complexity to the app, requiring more sophisticated algorithms, larger databases, and increased development efforts. What began as a relatively straightforward tool for sharing personal content has transformed into a multi-faceted platform requiring constant updates to handle new features and growing user expectations.”
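In code terms, the cycle looks something like this hypothetical sketch (the function and its parameters are invented for illustration, not taken from any real platform):

```python
# A toy sketch of Tog's Paradox applied to an API. Version 1 solves
# the core task simply...
def share_photo(image):
    ...

# ...and each later release answers the feature requests the earlier
# simplification generated. (Names and parameters are invented.)
def share_photo_v4(image, caption=None, filters=None, live_stream=False,
                   shop_links=None, ar_effects=None, scheduled_at=None):
    ...
```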
The post asserts software designers may as well resign themselves to never actually finishing anything. Every project should be seen as an ongoing process. The writer observes:
“Tog’s Paradox reveals why attempts to finalize design requirements are often doomed to fail. The moment a product begins to solve its users’ core problems efficiently, it sparks a natural progression of second-order effects. As users save time and effort, they inevitably find new, more complex tasks to address, leading to feature requests that expand the scope far beyond what was initially anticipated. This cycle shows that the product itself actively influences users’ expectations and demands, making it nearly impossible to fully define design requirements upfront. This evolving complexity highlights the futility of attempting to lock down requirements before the product is deployed.”
Maybe humanoid IT workers will become enshrined as new age Sisyphuses? Or maybe Sisyphi?
Cynthia Murrell, November 1, 2024
The Future of Copyright: AI + Bots = Surprise. Disappeared Mario Content.
October 4, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Did famously litigious Nintendo hire “brand protection” firm Tracer to find and eliminate AI-made Mario mimics? According to The Verge, “An AI-Powered Copyright Tool Is Taking Down AI-Generated Mario Pictures.” We learn the tool went on a rampage through X, filing takedown notices for dozens of images featuring the beloved Nintendo character. Many of the images were generated by xAI’s Grok AI tool, which is remarkably cavalier about infringing (or offensive) content. But some seem to have been old-school fan art. (Whether noncommercial fan art is fair use or copyright violation continues to be debated.) Verge writer and editor Wes Davis reports:
“The company apparently used AI to identify the images and serve takedown notices on behalf of Nintendo, hitting AI-generated images as well as some fan art. The Verge’s Tom Warren received an X notice that some content from his account was removed following a Digital Millennium Copyright Act (DMCA) complaint issued by a ‘customer success manager’ at Tracer. Tracer offers AI-powered services to companies, purporting to identify trademark and copyright violations online. The image in question, shown above, was a Grok-generated picture of Mario smoking a cigarette and drinking an oddly steaming beer.”
Navigate to the post to see the referenced image, where the beer does indeed smoke but the ash-laden cigarette does not. Davis notes the rest of the posts are, of course, no longer available to analyze. However, some users have complained their original fan art was caught in the sweep. We learn:
“One of the accounts that was listed in the DMCA request, OtakuRockU, posted that they were warned their account could be terminated over ‘a drawing of Mario,’ while another, PoyoSilly, posted an edited version of a drawing they said was identified in a notice. (The new one had a picture of a vaguely Mario-resembling doll inserted over a part of the image, obscuring the original part containing Mario.)”
Since neither Nintendo nor Tracer responded to Davis’ request for comment, he could not confirm Tracer was acting at the game company’s request. He is not, however, ready to let the matter go: The post closes with a request for readers to contact him if they had a Mario image taken down, whether AI-generated or not. See the post for that contact information, if applicable.
Cynthia Murrell, October 4, 2024
Microsoft Explains Who Is at Fault If Copilot Smart Software Does Dumb Things
September 23, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Those Windows Central experts have delivered a doozy of a write up. “Microsoft Says OpenAI’s ChatGPT Isn’t Better than Copilot; You Just Aren’t Using It Right, But Copilot Academy Is Here to Help” explains:
Avid AI users often boast about ChatGPT’s advanced user experience and capabilities compared to Microsoft’s Copilot AI offering, although both chatbots are based on OpenAI’s technology. Earlier this year, a report disclosed that the top complaint about Copilot AI at Microsoft is that “it doesn’t seem to work as well as ChatGPT.”
I think I understand. Microsoft uses OpenAI, other smart software, and home brew code to deliver Copilot in apps, the browser, and Azure services. However, users have reported that Copilot doesn’t work as well as ChatGPT. That’s interesting. Hallucination-capable software refined by the Microsoft engineering legions is allegedly inferior to the original.
Enthusiastic young car owners replace individual parts. But the old car remains an old, rusty vehicle. Thanks, MSFT Copilot. Good enough. No, I don’t want to attend a class to learn how to use you.
Who is responsible? The answer certainly surprised me. Here’s what the Windows Central wizards offer:
A Microsoft employee indicated that the quality of Copilot’s response depends on how you present your prompt or query. At the time, the tech giant leveraged curated videos to help users improve their prompt engineering skills. And now, Microsoft is scaling things a notch higher with Copilot Academy. As you might have guessed, Copilot Academy is a program designed to help businesses learn the best practices when interacting and leveraging the tool’s capabilities.
I think this means that the user is at fault, not Microsoft’s refactored version of OpenAI’s smart software. The fix is for the user to learn how to write prompts. Microsoft is not responsible. But OpenAI’s implementation of ChatGPT is perceived as better, and training to use ChatGPT is left to third parties. I hope I am close to the pin on this summary. OpenAI just puts Strawberries in front of hungry users and lets them gobble up ChatGPT output. Microsoft fixes up ChatGPT, and users are allegedly not happy. Therefore, Microsoft puts the burden on the user to learn how to interact with the Microsoft version of ChatGPT.
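For what it is worth, here is a minimal sketch of what such prompt training presumably covers; the prompts are invented examples, not material from Copilot Academy:

```python
# A hypothetical before-and-after for the "you're prompting it wrong"
# argument. Neither prompt comes from Microsoft's curriculum.
vague_prompt = "Summarize this report."

structured_prompt = (
    "Summarize the attached quarterly report in five bullet points "
    "for an executive audience. Cover revenue, churn, and one risk. "
    "Do not speculate beyond the document."
)
# Microsoft's position, per the write up, is that the second style
# yields better Copilot output.
```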
I thought smart software was intended to make work easier and more efficient. Why do I have to go to school to learn Copilot when I can just pound text or a chunk of data into ChatGPT, click a button, and get an output? Not even a Palantir boot camp will lure me to the service. Sorry, pal.
My hypothesis is that Microsoft is a couple of steps away from creating something designed for regular users. In its effort to “improve” ChatGPT, the experience of using Copilot makes the user’s life more miserable. I think Microsoft’s own engineering practices act like a stuck brake on an old Lada. The vehicle has problems, so installing a new master cylinder does not improve the automobile.
Crazy thinking: That’s what the write up suggests to me.
Stephen E Arnold, September 23, 2024
Is AI Taking Jobs? Of Course Not
September 9, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read an unusual story about smart software. “AI May Not Steal Many Jobs After All. It May Just Make Workers More Efficient” espouses the notion that workers will use smart software to do their jobs more efficiently. I have some issues with this thesis, but let’s look at a couple of the points in the “real” news write up.
Thanks, MSFT Copilot. When will the Copilot robot take over a company and subscribe to Office 365 for eternity and pay up front?
Here’s some good news for those who believe smart software will kill humanoids:
AI may not prove to be the job killer that many people fear. Instead, the technology might turn out to be more like breakthroughs of the past — the steam engine, electricity, the Internet: That is, eliminate some jobs while creating others. And probably making workers more productive in general, to the eventual benefit of themselves, their employers and the economy.
I am not sure doomsayers will be convinced. Among the most interesting doomsayers are those who may be unemployable but looking for a hook to stand out from the crowd.
Here’s another key point in the write up:
The White House Council of Economic Advisers said last month that it found “little evidence that AI will negatively impact overall employment.’’ The advisers noted that history shows technology typically makes companies more productive, speeding economic growth and creating new types of jobs in unexpected ways. They cited a study this year led by David Autor, a leading MIT economist: It concluded that 60% of the jobs Americans held in 2018 didn’t even exist in 1940, having been created by technologies that emerged only later.
I love positive statements which invoke the authority of MIT, an outfit which found Jeffrey Epstein just a wonderful source of inspiration and donations. As the US shifted from making to servicing, the beneficiaries are those who have quite specific skills for which demand exists.
And now a case study which is assuming “chestnut” status:
The Swedish furniture retailer IKEA, for example, introduced a customer-service chatbot in 2021 to handle simple inquiries. Instead of cutting jobs, IKEA retrained 8,500 customer-service workers to handle such tasks as advising customers on interior design and fielding complicated customer calls.
The point of the write up is that smart software is a friendly helper. That seems okay for the state of transformer-centric methods available today. For a moment, let’s consider another path. This is a hypothetical, of course, like the profits from existing AI investment fliers.
What happens when another, perhaps more capable approach to smart software becomes available? What if the economies from improving efficiency whet the appetite of bean counters for greater savings?
My view is that these reassurances of 2024 are likely to ring false when the next wave of innovation in smart software flows from innovators. I am glad I am a dinobaby because software can replicate most of what I have done for almost the entirety of my 60-plus year work career.
Stephen E Arnold, September 9, 2024
Another Big Consulting Firm Does Smart Software… Sort Of
September 3, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Will programmers and developers become targets for prosecution when flaws cripple vital computer systems? That may be a good idea because pointing to the “algorithm” as the cause of a problem does not seem to reduce the number of bugs, glitches, and unintended consequences of software. A write up which itself may be a blend of human and smart software suggests change is afoot.
Thanks, MSFT Copilot. Good enough.
“Judge Rules $400 Million Algorithmic System Illegally Denied Thousands of People’s Medicaid Benefits” reports that software crafted by the services firm Deloitte did not work as the State of Tennessee assumed. Yep, assume. A very interesting word.
The article explains:
The TennCare Connect system—built by Deloitte and other contractors for more than $400 million—is supposed to analyze income and health information to automatically determine eligibility for benefits program applicants. But in practice, the system often doesn’t load the appropriate data, assigns beneficiaries to the wrong households, and makes incorrect eligibility determinations, according to the decision from Middle District of Tennessee Judge Waverly Crenshaw Jr.
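To make the failure mode concrete, here is a toy sketch (my invention, not Deloitte’s code) of how a field that “doesn’t load” can silently become a wrongful denial when missing data is treated like an ordinary value:

```python
# Toy illustration of an automated eligibility check. If the income
# field never loads, failing closed without human review denies the
# applicant by default, the pattern the court found harmful.
def eligible(record: dict, income_limit: float = 20_000.0) -> bool:
    income = record.get("household_income")  # None if data never loaded
    if income is None:
        return False  # silent denial on missing data
    return income <= income_limit

print(eligible({"household_income": 12_000}))  # True: correctly eligible
print(eligible({}))  # False: denied because the record never populated
```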
At one time, Deloitte was an accounting firm. Then it became a consulting outfit a bit like McKinsey. Well, a lot like that firm and other blue-chip consulting outfits. In its current manifestation, Deloitte is into technology, programming, and smart software. Well, maybe the software is smart but the programmers and the quality control seem to be riding in a different school bus from some other firms’ technical professionals.
The write up points out:
Deloitte was a major beneficiary of the nationwide modernization effort, winning contracts to build automated eligibility systems in more than 20 states, including Tennessee and Texas. Advocacy groups have asked the Federal Trade Commission to investigate Deloitte’s practices in Texas, where they say thousands of residents are similarly being inappropriately denied life-saving benefits by the company’s faulty systems.
In 2016, Cathy O’Neil published Weapons of Math Destruction. Her book had a number of interesting examples of what goes wrong when careless people make assumptions about numerical recipes. If she does another book, she may include this Deloitte case.
Several observations:
- The management methods used to create these smart systems require scrutiny. The downstream consequences are harmful.
- The developers and programmers can be fired, but remediating processes for when something unexpected surfaces must be part of the work process.
- Less informed users and more smart software strikes me as a combustible mixture. When a system ignites, the impacts may reverberate in other smart systems. What entity is going to fix the problem and accept responsibility? The answer is, “No one” unless there are significant consequences.
The State of Tennessee’s experience makes clear that a “brand name,” slick talk, an air of confidence, and possibly ill-informed managers can do harm. The opioid misstep was bad. Now imagine that type of thinking in the form of a fast, indifferent, and flawed “system.” Firing a 25-year-old is not the solution.
Stephen E Arnold, September 3, 2024