An Important, Easily Pooh-Poohed Insight
December 24, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Dinobaby here. I am on the regular highway, not the information highway. Nevertheless I want to highlight what I call an “easily pooh-poohed factoid.” The source of the item this morning is an interview titled “Google Cloud Exec: Enterprise AI Is Game-Changing, But Companies Need to Prepare Their Data.”
I am going to skip the PR baloney, the truisms about Google fumbling the AI ball, and the rah rah about AI changing everything. Let me go straight to the factoid which snagged my attention:
… at the other side of these projects, what we’re seeing is that organizations did not have their data house in order. For one, they had not appropriately connected all the disparate data sources that make up the most effective outputs in a model. Two, so many organizations had not cleansed their data, making certain that their data is as appropriate and high value as possible. And so we’ve heard this forever — garbage in, garbage out. You can have this great AI project that has all the tenets of success and everybody’s really excited. Then, it turns out that the data pipeline isn’t great and that the data isn’t streamlined — all of a sudden your predictions are not as accurate as they could or should have been.
Why are points about data significant?
First, investors, senior executives, developers, and the person standing on line with you at Starbucks dismiss data normalization as a solved problem. Sorry, getting the data boat to float is a work in progress. Few want to come to grips with the issue.
Second, fixing up data is expensive. Did you ever wonder why the Stanford president made up data, a move which forced his resignation? The answer is that the “cost of fixing up data is too high.” If the president of Stanford can’t do it, is the run-of-the-mill fast-talking AI guru different? Answer: Nope.
Third, knowledge of exception folders and non-conforming data is confined to a small number of people. Those few can explain what is needed to make a content intake system work. However, many give up because the cloud of unknowing is unlikely to disperse.
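To make the normalization point concrete, here is a minimal Python sketch with invented records: three sources, three date formats, and one value destined for the exception folder. The field names and data are hypothetical, not from the Google interview:

```python
from datetime import datetime

# Hypothetical raw records from disparate sources: three date formats,
# a stray currency symbol, and a value that is not a number at all.
records = [
    {"date": "2023-12-24", "revenue": "1250.00"},
    {"date": "12/24/2023", "revenue": "$1,250"},
    {"date": "24 Dec 2023", "revenue": "n/a"},
]

DATE_FORMATS = ("%Y-%m-%d", "%m/%d/%Y", "%d %b %Y")

def normalize(record):
    """Return a cleaned record, or raise ValueError for the exception folder."""
    for fmt in DATE_FORMATS:
        try:
            date = datetime.strptime(record["date"], fmt)
            break
        except ValueError:
            continue
    else:
        raise ValueError(f"unparseable date: {record['date']}")
    revenue = float(record["revenue"].replace("$", "").replace(",", ""))
    return {"date": date.date().isoformat(), "revenue": revenue}

clean, exceptions = [], []
for r in records:
    try:
        clean.append(normalize(r))
    except ValueError:
        exceptions.append(r)  # the "exception folder" few people examine
```

Multiply this by hundreds of fields and dozens of sources, and the “data house in order” problem stops looking solved.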
The bottom line is that many data sets are not what senior executives, marketers, or those who use the data believe they are. The Google comment — despite Google’s sketchy track record in plain honest talk — is mostly correct.
So what?
- Outputs are often less useful than many anticipated. But if the user is uninformed or the downstream system uses whatever is pushed to it, no big deal.
- The thresholds and tweaks needed to make something semi-useful are not shared, discussed, or explained. Keep the mushrooms in the dark and feed them manure. What do you get? Mushrooms.
- The graphic outputs are eye candy and distracting. Look here, not over there. Sizzle sells and selling is important.
Net net: Data are a problem. Data have been a problem due to time and cost issues. Data will remain a problem because few recognize the pit, and those who do recognize it look for a short cut around it. What’s this mean for AI? Those smart systems will be super. What’s in your AI stocking this year?
Stephen E Arnold, December 24, 2023
Bugged? Hey, No One Can Get Our Data
December 22, 2023
This essay is the work of a dumb dinobaby. No smart software required.
I read “The Obscure Google Deal That Defines America’s Broken Privacy Protections.” In the cartoon below, two young people are confident that their lunch will be undisturbed. No “bugs” will chow down on their hummus, sprout sandwiches, or their information. What happens, however, is that the young picnic fans cannot perceive what is out of sight. Are these “bugs” listening? Yep. They are. 24×7.
What the young fail to perceive is that “bugs” are everywhere. These digital creatures are listening, watching, harvesting, and consuming every scrap of information. The image of the picnic evokes an experience unfolding in real time. Thanks, MSFT Copilot. My notion of “bugs” is obviously different from yours. Good enough and I am tired of finding words you can convert to useful images.
The essay explains:
While Meta, Google, and a handful of other companies subject to consent decrees are bound by at least some rules, the majority of tech companies remain unfettered by any substantial federal rules to protect the data of all their users, including some serving more than a billion people globally, such as TikTok and Apple.
The situation is simple: Major centers of techno gravity remain unregulated. Law makers, regulators, and “users” either did not understand or just believed what lobbyists told them. The senior executives of certain big firms smiled, said “Senator, thank you for that question,” and continued to build out their “bug” network. Do governments want to lose their pride of place with these firms? Nope. Why? Just reference bad actors who commit heinous acts and invoke “protect our children.” When these refrains from the techno feudal playbook sound, calls to take meaningful action become little more than a faint background hum.
But the article continues:
…there is diminishing transparency about how Google’s consent decree operates.
I think I understand. Google-type companies pretend to protect “privacy.” Who really knows? Just ask a Google professional. The answer in my experience is, “Hey, dude, I have zero idea.”
How does Wired, the voice of the techno age, conclude its write up? Here you go:
The FTC agrees that a federal privacy law is long overdue, even as it tries to make consent decrees more powerful. Samuel Levine, director of the FTC’s Bureau of Consumer Protection, says that successive privacy settlements over the years have become more limiting and more specific to account for the growing, near-constant surveillance of Americans by the technology around them. And the FTC is making every effort to enforce the settlements to the letter…
I love the “every effort.” The reality is that the handling of online data collection presages the trajectory for smart software. We live with bugs. Now those bugs can “think”, adapt, and guide. And what’s the direction in which we are now being herded? Grim, isn’t it?
Stephen E Arnold, December 23, 2023
A High Profile Religious Leader: AI? Yeah, Well, Maybe Not So Fast, Folks
December 22, 2023
This essay is the work of a dumb dinobaby. No smart software required.
The trusted news outfit Thomson Reuters put out a story about the thoughts of the Pope, the leader of millions of Catholics. Presumably many of these people use ChatGPT-type systems to create content. (I wonder if Leonardo would have used an OpenAI system to crank out some art work. He was an innovator. My hunch is that he would have given MidJourney-type smart software a whirl.)
A group of religious individuals thinking about artificial intelligence. Thanks, MidJourney, a good enough engraving.
“Pope Francis Calls for Binding Global Treaty to Regulate AI” reports that Pope Francis wants someone to create a legally binding international treaty. The idea is that AI numerical recipes would be prevented from displacing good old human values. AI would output answers, and humans would use those answers to find pizza joints, develop smart weapons, and eliminate carbon by eliminating carbon-generating entities (maybe humans?).
The trusted news outfit’s report included this quote from the Pope:
I urge the global community of nations to work together in order to adopt a binding international treaty that regulates the development and use of artificial intelligence in its many forms…
The Pope mentioned a need to avoid a technological dictatorship. He added:
Research on emerging technologies in the area of so-called Lethal Autonomous Weapon Systems, including the weaponization of artificial intelligence, is a cause for grave ethical concern. Autonomous weapon systems can never be morally responsible subjects…
Several observations are warranted:
- Is this a UN job or is some other entity responsible to obtain consensus and effective enforcement?
- Who develops the criteria for “good” AI, “neutral” AI, and “bad” AI?
- What are the penalties for implementing “bad” AI?
For me the Pope’s statement is important. It may be difficult to implement without a global dictatorship or a sudden change in how informed people debate and respond to difficult issues. From my point of view, the Pope should worry. When I look at the images of the Four Horsemen of the Apocalypse, the riders remind me of four high-profile leaders in AI. That’s my imagination reading into the depictions of conquest, war, famine, and death.
Stephen E Arnold, December 22, 2023
Palantir to Solve Banking IT Problems: Worth Monitoring
December 21, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Palantir Technologies recast itself as an artificial intelligence company. The firm persevered in England and positioned itself as the one best choice to wrestle the UK National Health Service’s IT systems into submission. Now, the company founded 20 years ago is going to demonstrate its business chops in a financial institution.
A young IT wizard explains to a group of senior executives, “Our team can deal with mainframe software and migrate the operations of this organization to a modern, scalable, more economical, and easier-to-use system. I am wearing a special seeing stone, so trust me.” Thanks, MSFT Copilot. It took five tries to get a good enough cartoon.
Before referencing the big, new job Palantir has “won,” I want to mention an interesting 2016 write up called “Interviewing My Mother, a Mainframe COBOL Programmer” by Tom Jordan. I want to point out that I am not suggesting that financial institutions have not solved their IT problems. I simply don’t know. But from my poking around the Charlie Javice matter, my hunch is that banks’ IT systems have not changed significantly in the last seven years. Had the JPMC infrastructure been humming along with real-time data checks and smart software to determine if data were spoofed, that $175 million would not have flown the upscale coop at JP Morgan Chase. For some Charlie Javice detail, navigate to this CNBC news item.
Here are several points about financial institutions IT infrastructure from the 2016 mom chat:
- Many banks rely on COBOL programs
- Those who wrote the COBOL programs may be deceased or retired
- Newbies may not know how undocumented legacy COBOL programs interact with other undocumented programs
- COBOL is not the go-to language for most programmers
- The databases for some financial institutions are not widely understood; for example, DL/1 / IMS, so some programmers have to learn something new about something old
- Moving data around can be tricky and the documentation about what an upstream system does and how it interacts with a downstream system may be fuzzy or unknown.
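The flavor of the problem in these points can be sketched in a few lines of Python. The copybook fragment, field offsets, and record below are invented for illustration; real bank layouts are longer, often undocumented, and full of surprises:

```python
# Hypothetical fixed-width record layout of the sort a COBOL copybook
# might define. Everything here is invented for illustration:
#
#   01 ACCOUNT-REC.
#      05 ACCT-ID    PIC X(8).
#      05 CUST-NAME  PIC X(20).
#      05 BALANCE    PIC 9(7)V99.   (implied decimal point, not stored)

LAYOUT = [("acct_id", 0, 8), ("cust_name", 8, 28), ("balance", 28, 37)]

def parse_record(line: str) -> dict:
    rec = {name: line[start:end].strip() for name, start, end in LAYOUT}
    # PIC 9(7)V99 stores 000012550 to mean 125.50 -- the decimal point
    # exists only in the copybook, never in the data itself.
    rec["balance"] = int(rec["balance"]) / 100
    return rec

row = parse_record("AB123456" + "Jane Doe".ljust(20) + "000012550")
```

The catch the bullet points describe: in production, nothing but tribal knowledge says where that implied decimal point sits, and the person who knew may have retired.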
Anyone who has experience fiddling with legacy financial systems knows that changes require an abundance of caution. An error can wreak financial havoc. For more “color” about legacy systems used in banks, consult Mr. Jordan’s mom interview.
I thought about Mr. Jordan’s essay when I read “Palantir and UniCredit Renew Digital Transformation Partnership.” Palantir has been transforming UniCredit for five years, but obviously more work is needed. From my point of view, Palantir is a consulting company which does integration. Thus, the speed of the transformation is important. Time is money. The write up states:
The partnership will see UniCredit deploy the Palantir Foundry operating system to accelerate the bank’s digital transformation and help increase revenue and mitigate risks.
I like the idea of a financial services institution increasing its revenue and reducing its risk.
The report about the “partnership” adds:
Palantir and UniCredit first partnered in 2018 as the bank sought technology that could streamline sales spanning jurisdictions, better operationalize machine learning and artificial intelligence, enforce policy compliance, and enhance decision making on the front lines. The bank chose Palantir Foundry as the operating system for the enterprise, leveraging a single, open and integrated platform across entities and business lines and enabling synergies across the Group.
Yep, AI is part of the deal. Compliance management is part of the agreement. Plus, Palantir will handle cross jurisdictional sales. Also, bank managers will make better decisions. (One hopes the JPMC decision about the fake data, revenues, and accounts will not become an issue for UniCredit.)
Palantir is optimistic about the partnership renewal and five years of billing for what may be quite difficult work to do without errors and within the available time and resource window. A Palantir executive said, according to the article:
Palantir has long been a proud partner to some of the world’s top financial institutions. We’re honored that UniCredit has placed its confidence in Palantir once again and look forward to furthering the bank’s digital transformation.
Will Palantir be able to handle super-sized jobs like the NHS work and the UniCredit project? Personally I will be watching for news about both of these contract wins. For a 20 year old company with its roots in the intelligence community, success in health care and financial services will mark one of the few times, intelware has made the leap to mainstream commercial problem solving.
The question is, “Why have the other companies failed in financial services modernization?” I have a lecture about that. Curious to know more? Write benkent2020 at yahoo dot com, and one of my team will respond to you.
Stephen E Arnold, December 18, 2023
Vendor Lock In: Good for Geese, Bad for Other Birds
December 21, 2023
This essay is the work of a dumb dinobaby. No smart software required.
TechCrunch examines the details behind the recent OpenAI CEO exodus and why vendor lock-in presents a clear and present danger to tech startups: “OpenAI Mess Exposes The Dangers Of Vendor Lock-In For Start-Ups.” When startups fundraise for seed capital, they attract investors that can influence the company’s future.
OpenAI’s leaders, including ex-CEO Sam Altman and cofounder Greg Brockman, accepted investments from Microsoft. This locked OpenAI into Microsoft, which became both its financial backbone and a vendor of OpenAI products. While there are many large language model products on the market, ChatGPT 3.5 and 4 were touted as the best products available. That standing is due in no small part to Microsoft’s investment and association with the respected and powerful tech company.
As Sam Altman and his OpenAI friends head to Microsoft, it points to the importance of diversifying a company’s vendor portfolio. While ChatGPT was the most popular solution, many tech experts believe that it’s better to research all options instead of relying on one service:
“The companies that chose a flexible approach over depending on a single AI model vendor must be feeling pretty good today. If there is any object lesson to be learned from all this, even as the drama continues to play out in real time, it’s that it’s never, ever a good idea to go with a single vendor.
Founders who put all of their eggs in the OpenAI basket now find themselves suddenly in a very uncomfortable situation, as the uncertainty around OpenAI continues to swirl.”
This is what happens when you rely on one vendor to supply all technology and related services. It’s always best to research and have multiple backup options in case one doesn’t work out.
Whitney Grace, December 21, 2023
Google AI and Ads: Beavers Do What Beavers Do
December 20, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Consider this. Take a couple of beavers. Put them in the Cloud Room near the top of the Chrysler Building in Manhattan. Shut the door. Come back in a day. What have the beavers done? The beavers start making a dam. Beavers do what beavers do. That’s a comedian’s way of explaining that some activities are hard wired into an organization. Therefore, beavers do what beavers do.
I read the paywalled article “Google Plans Ad Sales Restructuring as Automation Booms” and the other versions of the story on the shoulder of the Information Superhighway; for example, the trust outfit’s recycling of the Information’s story. The giant quantum supremacy, protein folding, and all-round advertising company is displaying beaver-like behavior. Smart software will be used to sell advertising.
That ad DNA? Nope, the beavers do what beavers do. Here’s a snip from the write up:
The planned reorganization comes as Google is relying more on machine-learning techniques to help customers buy more ads on its search engine, YouTube and other services…
Translating: Google wants fewer people to present information to potential and actual advertisers. The idea is to reduce costs and sell more advertising. I find it interesting that the quantum supremacy hoo-hah boils down to … selling ads and eliminating unreliable, expensive, vacation-taking, and latte consuming humans.
Two real beavers are surprised to learn that a certain large and dangerous creature also has DNA. Notice that neither of the beavers asks the large reptile to join them for lunch. The large reptile may, in fact, view the beavers as something else; for instance, lunch. Thanks, MSFT Copilot. Good enough.
Are there other ad-related changes afoot at the Google? “Google Confirms It Is Testing Ad Copy Variation in Live Ads” points out:
Google quietly started placing headlines in ad copy description text without informing advertisers
No big deal. Just another “test”, I assume. Search Engine Land (a publication founded, nurtured, and shaped into the search engine optimization information machine by Dan Sullivan, now a Googler) adds:
Changing the rules without informing advertisers can make it harder for them to do their jobs and know what needs to be prioritized. The impact is even more significant for advertisers with smaller budgets, as assessing the changes, especially with responsive search ads, becomes challenging, adding to their workload.
Google wants to reduce its workload. In pursuing that noble objective, the company, if Search Engine Land is correct, may increase the workload of the advertisers. But never fear, the change is trivial, “a small test.”
What was that about beavers? Oh, right. Certain behaviors are hard wired into the DNA of a corporate entity, which under US law is a “person” someone once told me.
Let me share with you several observations based on my decades-long monitoring of the Google.
- Google does what Google wants and then turns over the explanation to individuals who say what is necessary to deflect actual intent, convert actions into fuzzy Google speech, and keep customer and user pushback to a minimum. (Note: The tactic does not work with 100 percent reliability as the recent loss to US state attorneys general illustrates.)
- Smart software is changing rapidly. What appears to be one application may (could) morph into more comprehensive functionality. Predicting the future of AI and Google’s actions is difficult. Google will play the odds which means what the “entity” does will favor its objective and goals.
- The quaint notion of a “small test” is the core of optimization for some methods. Who doesn’t love “quaint” as a method for de-emphasizing the significance of certain actions? The “small test” is often little more than one component of a larger construct. Dismissing the small is to ignore the larger component’s functionality; for example, data control and highly probable financial results.
Let’s flash back to the beavers in the Cloud Room. Imagine the surprise of someone who opens the door and sees gnawed-off portions of chairs and towels, and a chunk of unidentifiable gook piled between two tables.
Those beavers and their beavering can create an unexpected mess. The beavers, however, are proud of their work because they qualify under an incentive plan for a bonus. Beavers do what beavers do.
Stephen E Arnold, December 20, 2023
FTC Enacts Investigative Process for AI Technology
December 20, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Creative types and educational professionals are worried about the influence of AI-generated work. However, law, finance, business operations, and other industries are worried about how AI will impact them. Aware of the upward trend in goods and services that are surreptitiously moving into the market, the Federal Trade Commission (FTC) took action. The FTC released a briefing on the new consumer AI protection: “FTC Authorizes Compulsory Process For AI-Related Products And Services.”
The FTC passed an omnibus resolution that authorizes a compulsory process in nonpublic investigations about products and services that use or claim to be made with AI or claim to detect it. The new omnibus resolution will increase the FTC’s efficiency with civil investigative demands (CIDs), a compulsory process similar to a subpoena. CIDs are issued to collect information, similar to legal discovery, for consumer protection and competition investigations. The new resolution will be in effect for ten years, and the FTC voted to approve it 3-0.
The FTC defines AI as:
“AI includes, but is not limited to, machine-based systems that can, for a set of defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Generative AI can be used to generate synthetic content including images, videos, audio, text, and other digital content that appear to be created by humans. Many companies now offer products and services using AI and generative AI, while others offer products and services that claim to detect content made by generative AI.”
AI can also be used for deception, privacy infringements, fraud, and other illegal activities. AI can also cause competition problems; for example, a few companies could monopolize algorithms or other AI-related technologies.
The FTC is taking preliminary steps to protect consumers from bad actors and their nefarious AI-generated deeds. However, what constitutes a violation in relation to AI? Will the data training libraries be examined along with the developers?
Whitney Grace, December 20, 2023
Why Humans Follow Techno Feudal Lords and Ladies
December 19, 2023
This essay is the work of a dumb dinobaby. No smart software required.
“Seduced By The Machine” is an interesting blend of humans’ willingness to follow the leader and Silicon Valley revisionism. The article points out:
We’re so obsessed by the question of whether machines are rising to the level of humans that we fail to notice how the humans are becoming more like machines.
I agree. The write up offers an explanation — it’s arriving a little late because the Internet has been around for decades:
Increasingly we also have our goals defined for us by technology and by modern bureaucratic systems (governments, schools, corporations). But instead of providing us with something equally rich and well-fitted, they can only offer us pre-fabricated values, standardized for populations. Apps and employers issue instructions with quantifiable metrics. You want good health – you need to do this many steps, or achieve this BMI. You want expertise? You need to get these grades. You want a promotion? Hit these performance numbers. You want refreshing sleep? Better raise your average number of hours.
A modern high-tech pied piper leads many to a sanitized Burning Man? Sounds like fun. Look at the funny outfit. The music is a TikTok hit. The followers are looking forward to their next “experience.” Thanks, MSFT Copilot. One try for this cartoon. Good enough again.
The idea is that technology offers a short cut. Who doesn’t like a short cut? Do you want to write music in the manner of Herr Bach or do you want to do the loop and sample thing?
The article explores the impact of metrics; that is, the idea of letting Spotify make clear what a hit song requires. Now apply that malleability and success incentive to getting fit, getting start up funding, or any other friction-filled task. Let’s find some Teflon, folks.
The write up concludes with this:
Human beings tend to prefer certainty over doubt, comprehensibility to confusion. Quantified metrics are extremely good at offering certainty and comprehensibility. They seduce us with the promise of what Nguyen calls “value clarity”. Hard and fast numbers enable us to easily set goals, justify decisions, and communicate what we’ve done. But humans reach true fulfilment by way of doubt, error and confusion. We’re odd like that.
Hot button alert! Uncertainty means risk. Therefore, reduce risk. Rely on an “authority,” “data,” or “research.” What if the authority sells advertising? What if the data are intentionally poisoned (a somewhat trivial task according to watchers of disinformation outfits)? What if the research is made up? (I am thinking of the Stanford University president and the Harvard ethics whiz. Both allegedly invented data; both found themselves in hot water. But no one seems to have cared.)
With smart software — despite its hyperbolic marketing and its role as the next really Big Thing — finding its way into a wide range of business and specialized systems, just trust the machine output. I went for a routine check up. One machine reported I was near death. The doctor was recommending a number of immediate remediation measures. I pointed out that the data came from a single somewhat older device. No one knew who verified its accuracy. No one knew if the device was repaired. I noted that I was indeed still alive and asked if the somewhat nervous looking medical professional would get a different device to gather the data. Hopefully that will happen.
Is it a positive when the new pied piper of Hamelin wants to have control in order to generate revenue? Is it a positive when education produces individuals who do not ask, “Is the output accurate?” Some day, dinobabies like me will indeed be dead. Will the willingness of humans to follow the pied piper be different?
Absolutely not. This dinobaby is alive and kicking, no matter what the aged diagnostic machine said. Gentle reader, can you identify fake, synthetic, or just plain wrong data? If you answer yes, you may be in the top tier of actual thinkers. Those who are gatekeepers of information will define reality and take your money whether you want to give it up or not.
Stephen E Arnold, December 19, 2023
AI: Are You Sure You Are Secure?
December 19, 2023
This essay is the work of a dumb dinobaby. No smart software required.
North Carolina State University published an interesting article. Are the data in the write up reproducible? I don’t know. I wanted to highlight the report in the hopes that additional information will be helpful to cyber security professionals. The article is “AI Networks Are More Vulnerable to Malicious Attacks Than Previously Thought.”
I noted this statement in the article:
Artificial intelligence tools hold promise for applications ranging from autonomous vehicles to the interpretation of medical images. However, a new study finds these AI tools are more vulnerable than previously thought to targeted attacks that effectively force AI systems to make bad decisions.
A corporate decision maker looks at a point of vulnerability. One of his associates moves a sign which explains that smart software protects the castle and its crown jewels. Thanks, MSFT Copilot. Numerous tries, but I finally got an image close enough for horseshoes.
What is the specific point of alleged weakness?
At issue are so-called “adversarial attacks,” in which someone manipulates the data being fed into an AI system in order to confuse it.
The example presented in the article is that a bad actor manipulates data provided to the smart software; for example, causing an image or content to be deleted or ignored. Another use case is that a bad actor could cause an X-ray machine to present altered information to the analyst.
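The mechanics of that manipulation can be sketched with a toy linear classifier in Python. This is a generic gradient-sign illustration with invented weights, not the method used in the study or the QuadAttacK tool:

```python
import random

random.seed(0)

# Toy linear "classifier": score > 0 means class A, otherwise class B.
# The weights and the input are invented for illustration.
w = [random.gauss(0.0, 1.0) for _ in range(100)]   # model weights
norm = sum(v * v for v in w) ** 0.5
x = [v / norm for v in w]                          # input the model scores confidently as A

def score(v):
    return sum(wi * vi for wi, vi in zip(w, v))

def classify(v):
    return "A" if score(v) > 0 else "B"

# Adversarial step: nudge each input feature slightly in the direction
# that hurts the model most -- opposite the sign of its weight.
eps = 0.2
x_adv = [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

original = classify(x)      # "A"
attacked = classify(x_adv)  # almost certainly flips to "B"
```

Each feature moves by only 0.2, but the nudges all push the score the same way, so the label flips even though the perturbed input still resembles the original.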
The write up includes a description of software called QuadAttacK. The idea is to probe a trained network by manipulating otherwise “clean” data. Four different networks were tested. The report includes a statement from Tianfu Wu, co-author of a paper on the work and an associate professor of electrical and computer engineering at North Carolina State University. He allegedly said:
“We were surprised to find that all four of these networks were very vulnerable to adversarial attacks,” Wu says. “We were particularly surprised at the extent to which we could fine-tune the attacks to make the networks see what we wanted them to see.”
You can download the vulnerability testing tool at this link.
Here are the observations my team and I generated at lunch today (Friday, December 14, 2023):
- Poisoned data is one of the weak spots in some smart software
- The free tool will allow bad actors with access to certain smart systems a way to identify points of vulnerability
- AI, at this time, may be better at marketing than protecting its reasoning systems.
Stephen E Arnold, December 19, 2023
Facing an Information Drought, Tech Feudalists Will Innovate
December 18, 2023
This essay is the work of a dumb dinobaby. No smart software required.
The Exponential View (Azeem Azhar) tucked an item in his “blog.” The item is important, but I am not familiar with the cited source of the information in “LLMs May Soon Exhaust All Available High Quality Language Data for Training.” The main point is that the go-to method for smart software requires information in volume to [a] be accurate, [b] remain up to date, and [c] sufficiently useful to pay for the digital plumbing.
Oh, oh. The water cooler is broken. Will the Pilates’ teacher ask the students to quench their thirst with synthetic water? Another option is for those seeking refreshment to rejuvenate tired muscles with more efficient metabolic processes. The students are not impressed with these ideas? Thanks, MSFT Copilot. Two tries and close enough.
One datum indicates / suggests that the Big Dogs of AI will run out of content to feed into their systems in either 2024 or 2025. The date is less important than the idea of a hard stop.
What will the AI companies do? The essay asserts:
OpenAI has shown that it’s willing to pay eight figures annually for historical and ongoing access to data — I find it difficult to imagine that open-source builders will…. here are ways other than proprietary data to improve models, namely synthetic data, data efficiency, and algorithmic improvements – yet it looks like proprietary data is a moat open-source cannot cross.
Several observations:
- New methods of “information” collection will be developed and deployed. Some of these will be “off the radar” of users by design. One possibility is mining the changes to draft content in certain systems. Changes or deltas can be useful to some analysts.
- The synthetic data angle will become a go-to method using data sources which, by themselves, are not particularly interesting. However, when cross correlated with other information, “new” data emerge. The new data can be aggregated and fed into other smart software.
- Rogue organizations will acquire proprietary data and “bitwash” the information. Like money laundering systems, the origin of the data is fuzzified or obscured, making figuring out what happened expensive and time consuming.
- Techno feudal organizations will explore new non commercial entities to collect certain data; for example, the non governmental organizations in a niche could be approached for certain data provided by supporters of the entity.
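The cross-correlation idea in the second point can be sketched with two deliberately dull, invented datasets. Neither is interesting alone; joined, they yield a “new” dataset neither source contained:

```python
# Two individually dull datasets. Every record here is invented.
memberships = [
    {"person": "p1", "org": "Hobby Drone Club"},
    {"person": "p2", "org": "Hobby Drone Club"},
    {"person": "p1", "org": "Ham Radio Society"},
]
donations = [
    {"person": "p1", "ngo": "Open Mapping Fund", "amount": 50},
    {"person": "p3", "ngo": "Open Mapping Fund", "amount": 20},
]

# Cross-correlate: which club members also fund which NGOs?
orgs_by_person = {}
for m in memberships:
    orgs_by_person.setdefault(m["person"], []).append(m["org"])

derived = [
    {"person": d["person"], "ngo": d["ngo"],
     "orgs": orgs_by_person.get(d["person"], [])}
    for d in donations
]
# "derived" links club membership to NGO support -- information that
# existed in neither source on its own, now ready to feed a model.
```

Aggregate enough of these joins and the result is fresh training data harvested without touching anyone’s proprietary corpus.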
Net net: Running out of data is likely to produce one high probability event: Certain companies will begin taking more aggressive steps to make sure their digital water cooler is filled and working for their purposes.
Stephen E Arnold, December 18, 2023