AI and Human Workers: AI Wins for Now
July 17, 2024
When it comes to US employment news, an Australian paper does not beat around the bush. Citing a recent survey from the Federal Reserve Bank of Richmond, The Sydney Morning Herald reports, “Nearly Half of US Firms Using AI Say Goal Is to Cut Staffing Costs.” Gee, what a surprise. Writer Brian Delk summarizes:
“In a survey conducted earlier this month of firms using AI since early 2022 in the Richmond, Virginia region, 45 per cent said they were automating tasks to reduce staffing and labor costs. The survey also found that almost all the firms are using automation technology to increase output. ‘CFOs say their firms are tapping AI to automate a host of tasks, from paying suppliers, invoicing, procurement, financial reporting, and optimizing facilities utilization,’ said Duke finance professor John Graham, academic director of the survey of 450 financial executives. ‘This is on top of companies using ChatGPT to generate creative ideas and to draft job descriptions, contracts, marketing plans, and press releases.’ The report stated that over the past year almost 60 per cent of companies surveyed ‘have implemented software, equipment, or technology to automate tasks previously completed by employees.’ ‘These companies indicate that they use automation to increase product quality (58 per cent of firms), increase output (49 per cent), reduce labor costs (47 per cent), and substitute for workers (33 per cent).’”
Delk points to the Federal Reserve Bank of Dallas for a bit of comfort. Its data shows the impact of AI on employment has been minimal at the nearly 40% of Texas firms using AI. For now. Also, the Richmond survey found manufacturing firms to be more likely (53%) to adopt AI than those in the service sector (43%). One wonders whether that will even out once the uncanny valley has been traversed. Either way, it seems businesses are getting more comfortable replacing human workers with cheaper, more subservient AI tools.
Cynthia Murrell, July 17, 2024
What Will the AT&T Executives Serve Their Lawyers at the Security Breach Debrief?
July 15, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
On the flight back to my digital redoubt in rural Kentucky, I had the thrill of sitting behind a couple of telecom types who were laughing at the pickle AT&T has plopped on top of what I think of as a Judge Green slushee. Do lime slushees and dill pickles go together? For my tastes, nope. Judge Green wanted to de-monopolize the Ma Bell I knew and loved. (Yes, I cashed some Ma Bell checks and I had a Young Pioneers hat.)
We are back to what amounts to a Ma Bell trifecta: AT&T (the new version which wears spurs and chaps), Verizon (everyone’s favorite throwback carrier), and the new T-Mobile (bite those customer pocketbooks as if they were bratwursts mit sauerkraut). Each of these outfits is interesting. But at the moment, AT&T is in the spotlight.
“Data of Nearly All AT&T Customers Downloaded to a Third-Party Platform in a 2022 Security Breach” dances around a modest cyber misstep at what is now a quite old and frail Ma Bell. Imagine the good old days before the Judge Green decision to create Baby Bells. Security breaches were possible, but it was quite tough to get the customer data. Attacks were limited to those with the knowledge (somewhat tough to obtain), the tools (3B series computers and lots of mainframes), and access to network connections. Technology has advanced. Consequently, competition means that no one makes money via security. Security is better at old-school monopolies because money can be spent without worrying about revenue. As one AT&T executive said to my boss at a blue-chip consulting company, “You guys charge so much we will have to get another railroad car filled with quarters to pay your bill.” Ho ho ho — except the fellow was not joking. At the pre-Judge Green AT&T, spending money on security was definitely not an issue. Today? Seems to be different.
A more pointed discussion of Ma Bell’s breaking her hip again appears in “AT&T Breach Leaked Call and Text Records from Nearly All Wireless Customers,” which states:
AT&T revealed Friday morning (July 12, 2024) that a cybersecurity attack had exposed call records and texts from “nearly all” of the carrier’s cellular customers (including people on mobile virtual network operators, or MVNOs, that use AT&T’s network, like Cricket, Boost Mobile, and Consumer Cellular). The breach contains data from between May 1st, 2022, and October 31st, 2022, in addition to records from a “very small number” of customers on January 2nd, 2023.
The “problem,” if I understand it, lies in the reference to Snowflake. Is AT&T suggesting that Snowflake is responsible for the breach? Big outfits like to identify the source of the problem. If Snowflake made the misstep, isn’t it the responsibility of AT&T’s cyber unit to make sure that the security was as good as or better than the security implemented before the Judge Green break up? I think AT&T, like other big companies, wants to find a way to shift blame, not say, “We put the pickle in the lime slushee.”
My posture toward two-year-old security issues is, “What’s the point of covering up a loss of ‘nearly all’ customers’ data?” I know the answer: Optics and the share price.
As a person who owned a Young Pioneers’ hat, I am truly disappointed in the company. The Regional Managers for whom I worked as a contractor had security on the list of top priorities from day one. Whether we were fooling around with a Western Electric data service or the research charge back system prior to the break up, security was not someone else’s problem.
Today it appears that AT&T has made some decisions which are now perched on the top officer’s head. Security problems are, therefore, tough to miss. Boeing loses doors and wheels from aircraft. Microsoft tantalizes bad actors with insecure systems. AT&T outsources high-value data and then moves more slowly than the last remaining turtle in the mine runoff pond near my home in Harrod’s Creek.
Maybe big is not as wonderful as some expect the idea to be? Responsibility for one’s decisions and an ethical compass are not cyber tools, but both notions are missing in some big company operations. Will the after-action team guzzle lime slushees with pickles on top?
Stephen E Arnold, July 15, 2024
AI and Electricity: Cost and Saving Whales
July 15, 2024
Grumbling about the payoff from those billions of dollars injected into smart software continues. The most recent angle is electricity. AI is a power sucker, a big-time energy glutton. I learned this when I read the slightly alarmist write up “Artificial Intelligence Needs So Much Power It’s Destroying the Electrical Grid.” Texas, not a hotbed of AI excitement, seems to be doing quite well with the power grid problem without much help from AI. Mother Nature has made vivid the weaknesses of the infrastructure in that great state.
Some dolphins may love the power plant cooling effluent (run off). Other animals, not so much. Thanks, MSFT Copilot. Working on security this week?
But let’s get back to saving whales and the piggishness of those with many GPUs processing data to help out the eighth-graders with their 200 word essays.
The write up says:
As a recent report from the Electric Power Research Institute lays out, just 15 states contain 80% of the data centers in the U.S. Some states – such as Virginia, home to Data Center Alley – astonishingly have over 25% of their electricity consumed by data centers. There are similar trends of clustered data center growth in other parts of the world. For example, Ireland has become a data center nation.
So what?
The article says that it takes just two years to spin up a smart software data center but it takes four years to enhance an electrical grid. Based on my experience at a unit of Halliburton specializing in nuclear power, the four-year number seems a bit optimistic. One doesn’t flip a switch and turn on Three Mile Island. One does not pick a nice spot near a river and start building a nuclear power reactor. Despite the recent Supreme Court ruling calling into question what certain frisky Executive Branch agencies can require, homeowners’ associations and medical groups can make life interesting. Plus, building out energy infrastructure is expensive and takes time. How long does it take for several feet of specialized concrete to set? Longer than pouring some hardware store quick fix into a hole in your driveway?
The article says:
There are several ways the industry is addressing this energy crisis. First, computing hardware has gotten substantially more energy efficient over the years in terms of the operations executed per watt consumed. Data centers’ power use efficiency, a metric that shows the ratio of power consumed for computing versus for cooling and other infrastructure, has been reduced to 1.5 on average, and even to an impressive 1.2 in advanced facilities. New data centers have more efficient cooling by using water cooling and external cool air when it’s available. Unfortunately, efficiency alone is not going to solve the sustainability problem. In fact, Jevons paradox points to how efficiency may result in an increase of energy consumption in the longer run. In addition, hardware efficiency gains have slowed down substantially as the industry has hit the limits of chip technology scaling.
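The power use efficiency figure in the quote (often written PUE, power usage effectiveness) is just a ratio. A minimal sketch of the arithmetic, using hypothetical facility numbers rather than anything from the report:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by the
    power delivered to IT equipment (compute, storage, networking).
    1.0 is the theoretical ideal; lower is better."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical facility: 12 MW total draw with 10 MW reaching IT gear
# yields the "advanced" 1.2 figure; 15 MW total yields the 1.5 average.
print(round(pue(12_000, 10_000), 2))  # 1.2
print(round(pue(15_000, 10_000), 2))  # 1.5
```

In other words, even an advanced facility burns an extra 20 percent of its IT load on cooling and overhead.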
Okay, let’s put aside the grid and the dolphins for a moment.
AI has had, and will continue to have, downstream consequences. Although the methods of smart software are “old” when measured in terms of Internet innovations, the knock-on effects are not known.
Several observations are warranted:
- Power consumption can be scheduled. The method worked to combat air pollution in Poland, and it will work for data centers. (Sure, the folks wanting computation will complain, but suck it up, buttercups. Plan and engineer for efficiency.)
- The electrical grid, like the other infrastructures in the US, needs investment. This is a job for private industry and the governmental authorities. Do some planning and deliver results, please.
- Those wanting to scare people will continue to exercise their First Amendment rights. Go for it. However, I would suggest that putting observations in a more informed context may be helpful. But when six o’clock news weather people scare the heck out of fifth graders as a storm or snow approaches, is that an appropriate approach to factual information? Answer: Sure, when it gets clicks, eyeballs, and ad money.
Net net: No big changes for now are coming. I hope that the “deciders” get their Fiat 500 in gear.
Stephen E Arnold, July 15, 2024
Big Plays or Little Plays: The Key to AI Revenue
July 11, 2024
I keep thinking about the billions and trillions of dollars required to create a big AI win. A couple of snappy investment banks have edged toward the idea that AI might not pay off with tsunamis of money right away. The fix is to become brokers for GPU cycles or to “humble brag” about how more money is needed to fund what venture people want to be the next big thing. Yep, AI: a couple of winners and the rest losers, at least in terms of the payoff scale, whacked around like a hapless squash ball at the New York Athletic Club.
However, a radical idea struck me as I read a report from the news service that oozes “trust.” The Reuters’ story is “China Leads the World in Adoption of Generative AI, Survey Shows.” Do I trust surveys? Not really. Do I trust trusted “real” news outfits? Nope, not really. But the write up includes an interesting statement, and the report sparked what is for me a new idea.
First, here’s the passage I circled:
“Enterprise adoption of generative AI in China is expected to accelerate as a price war is likely to further reduce the cost of large language model services for businesses. The SAS report also said China led the world in continuous automated monitoring (CAM), which it described as “a controversial but widely-deployed use case for generative AI tools”.”
I interpreted this to mean:
- Small and big uses of AI in somewhat mundane tasks
- Lots of small uses with more big outfits getting with the AI program
- AI allows nifty monitoring which is going to catch the attention of some Chinese government officials who may be able to repurpose these focused applications of smart software
With models like the nifty Meta Facebook Zuck concoction available as open source, big technology is within reach. Furthermore, the idea of applying smart software to small problems makes sense. The approach, first, avoids the Godzilla lumbering associated with some outfits and, second, lets fast iteration with fast failures provide useful factoids for other developers.
The “real” news report does not provide numbers or much in the way of analysis. I think the idea of small-scale applications does not make sense when one is eating fancy food at a smart software briefing in midtown Manhattan. Small is not going to generate that big wave of money from AI. The money is needed to raise more money.
My thought is that the Chinese approach has value because it is surfing on open source and some proprietary information known to Chinese companies solving or trying to solve a narrow problem. Also, the crazy pace of try-fail, try-fail enables acceleration of what works. Failures translate into lessons about which lousy paths to avoid.
Therefore, my reaction to the “real” news about the survey is that China may be in a position to do better, faster, and cheaper AI applications than the Godzilla outfits. The chase for big money exists, but in the US without big money, who cares? In China, big money may not be as large as the pile of cash some VCs and entrepreneurs argue is absolutely necessary.
So what? The “let many flowers bloom” idea applies to AI. That’s a strength possibly neither appreciated nor desired by the US AI crowd. Combined with China’s patent surge, my new thought translates to “oh, oh.”
Stephen E Arnold, July 11, 2024
Common Sense from an AI-Centric Outfit: How Refreshing
July 11, 2024
This essay is the work of a dumb dinobaby. No smart software required.
In the wild and wonderful world of smart software, common sense is often tucked beneath a stack of PowerPoint decks and vaporized by jargon-spouting experts in artificial intelligence. I want to highlight “Interview: Nvidia on AI Workloads and Their Impacts on Data Storage.” An Nvidia poohbah named Charlie Boyle output some information that is often ignored by quite a few of those riding the AI pony to the pot of gold at the end of the AI rainbow.
The King Arthur of senior executives is confident that in his domain he is the master of his information. By the way, this person has an MBA, a law degree, and a CPA certification. His name is Sir Walter Mitty of Dorksford, near Swindon. Thanks, MSFT Copilot. Good enough.
Here’s the pivotal statement in the interview:
… a big part of AI for enterprise is understanding the data you have.
Yes, the dwellers in carpetland typically operate with some King Arthur type myths galloping around the castle walls; specifically:
Myth 1: We have excellent data
Myth 2: We have a great deal of data and more arriving every minute our systems are online
Myth 3: Our data are available and in just a few formats. Processing the information is going to be pretty easy.
Myth 4: Our IT team can handle most of the data work. We may not need any outside assistance for our AI project.
Will companies map these myths to their reality? Nope.
The Nvidia expert points out:
…there’s a ton of ready-made AI applications that you just need to add your data to.
“Ready made”: Just like a Betty Crocker cake mix my grandmother thought tasted fake, not as good as homemade. Granny’s comment could be applied to some of the AI tests my team has tracked; for example, the Big Apple’s chatbot outputting comments which violated city laws or the exciting McDonald’s smart ordering system. Sure, I like bacon on my on-again, off-again soft serve frozen dessert. Doesn’t everyone?
The Nvidia expert offers this comment about storage:
If it’s a large model you’re training from scratch you need very fast storage because a lot of the way AI training works is they all hit the same file at the same time because everything’s done in parallel. That requires very fast storage, very fast retrieval.
Is that a problem? Nope. Just crank up the cloud options. No big deal, except it is. There are costs and time to consider. But otherwise this is no big deal.
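The bandwidth pressure the expert describes is easy to estimate. A back-of-the-envelope sketch, where the worker count and per-GPU rate are hypothetical figures of my own, not numbers from the interview:

```python
def aggregate_read_gbps(num_workers: int, per_worker_gbps: float) -> float:
    """Rough aggregate storage bandwidth needed when every worker in a
    data-parallel training job streams samples at the same time."""
    return num_workers * per_worker_gbps

# Hypothetical job: 512 GPUs each pulling 2 GB/s of training data
# implies roughly 1 TB/s of sustained read throughput from storage.
print(aggregate_read_gbps(512, 2.0))  # 1024.0
```

Multiply a modest per-worker appetite by hundreds of parallel workers and the “very fast storage, very fast retrieval” requirement stops sounding like marketing.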
The article contains one gem and then wanders into marketing “don’t worry” territory.
From my point of view, the data issue is the big deal. Bad, stale, and incomplete information in oddball formats — these exist in organizations now. Forty percent or more of that mass of data may never have been accessed. Other data are backups which contain versions of files with errors, copyright-protected data, and Boy Scout trip plans. (Yep, non-work information on “work” systems.)
Net net: The data issue is an important one to consider before getting into the “let’s deploy a customer support smart chatbot” phase. Will carpetland dwellers focus on the first step? Not too often. That’s why some AI projects get lost or just succumb to rising, uncontrollable costs. Moving data? No problem. Bad data? No problem. Useful AI system? Hmmm. How much does storage cost anyway? Oh, not much.
Stephen E Arnold, July 11, 2024
Microsoft Security: Big and Money Explain Some Things
July 10, 2024
I am heading out for a couple of days. I spotted this story in my newsfeed: “The President Ordered a Board to Probe a Massive Russian Cyberattack. It Never Did.” The main point of the write up, in my opinion, is captured in this statement:
The tech company’s failure to act reflected a corporate culture that prioritized profit over security and left the U.S. government vulnerable, a whistleblower said.
But there is another issue in the write up. I think it is:
The president issued an executive order establishing the Cyber Safety Review Board in May 2021 and ordered it to start work by reviewing the SolarWinds attack. But for reasons that experts say remain unclear, that never happened.
The one-two punch may help explain why some in other countries do not trust Microsoft, the US government, and the cultural forces in the US of A.
Let’s think about these three issues briefly.
A group of tomorrow’s leaders responding to their teacher’s request to pay attention and do what she is asking. One student expresses the group’s viewpoint. Thanks, MSFT Copilot. How’s the Recall today? What about those iPhones Mr. Ballmer disdained?
First, large technology companies use the word “trust”; for example, Microsoft apparently does not trust Android devices. On the other hand, China does not have trust in some Microsoft products. Can one trust Microsoft’s security methods? For some, trust has become a bit like artificial intelligence. The words do not mean much of anything.
Second, Microsoft, like other big outfits, needs big money. The easiest way to free up money is to not spend it. One can talk about investing in security and making security Job One. The reality is that talk is cheap. Cutting corners seems to be a popular concept in some corporate circles. One recent example is Boeing dodging trials with a deal. Why? Money, maybe?
Third, the committee charged with looking into SolarWinds did not. For a couple of years after the breach became known, my SolarWinds’ misstep analysis was popular among some cyber investigators. I was one of the few people reviewing the “misstep.”
Okay, enough thinking.
The SolarWinds’ matter, the push for money and more money, and the failure of a committee to do what it was asked to do explicitly three times suggests:
- A need for enforcement with teeth and consequences is warranted
- Tougher procurement policies are necessary with parallel restrictions on lobbying which one of my clients called “the real business of Washington”
- Ostracism of those who do not follow requests from the White House or designated senior officials.
Enough of this high-vulnerability decision making. The problem is that, as I have witnessed in my work in Washington for decades, the system births, abets, and provides the environment for doing what is often the “wrong” thing.
There you go.
Stephen E Arnold, July 10, 2024
Market Research Shortcut: Fake Users Creating Fake Data
July 10, 2024
Market research can be complex and time-consuming. It would save so much time if one could consolidate thousands of potential respondents into one model. A young AI firm offers exactly that, we learn from Nielsen Norman Group’s article, “Synthetic Users: If, When, and How to Use AI Generated ‘Research.’”
But are the results accurate? Not so much, according to writers Maria Rosala and Kate Moran. The pair tested fake users from the young firm Synthetic Users and ones they created using ChatGPT. They compared responses to sample questions from both real and fake humans. Each group gave markedly different responses. The write-up notes:
“The large discrepancy between what real and synthetic users told us in these two examples is due to two factors:
- Human behavior is complex and context-dependent. Synthetic users miss this complexity. The synthetic users generated across multiple studies seem one-dimensional. They feel like a flat approximation of the experiences of tens of thousands of people, because they are.
- Responses are based on training data that you can’t control. Even though there may be proof that something is good for you, it doesn’t mean that you’ll use it. In the discussion-forum example, there’s a lot of academic literature on the benefits of discussion forums on online learning and it is possible that the AI has based its response on it. However, that does not make it an accurate representation of real humans who use those products.”
That seems obvious to us, but apparently some people need to be told. The lure of fast and easy results is strong. See the article for more observations. Here are a couple worth noting:
“Real people care about some things more than others. Synthetic users seem to care about everything. This is not helpful for feature prioritization or persona creation. In addition, the factors are too shallow to be useful.”
Also:
“Some UX [user experience] and product professionals are turning to synthetic users to validate product concepts or solution ideas. Synthetic Users offers the ability to run a concept test: you describe a potential solution and have your synthetic users respond to it. This is incredibly risky. (Validating concepts in this way is risky even with human participants, but even worse with AI.) Since AI loves to please, every idea is often seen as a good one.”
So as appealing as this shortcut may be, it is a fast track to incorrect results. Basing business decisions on “insights” from shallow, eager-to-please algorithms is unwise. The authors interviewed Synthetic Users’ cofounder Hugo Alves. He acknowledged the tools should only be used as a supplement to surveys of actual humans. However, the post points out, the company’s website seems to imply otherwise: it promises “User research. Without the users.” That is misleading, at best.
Cynthia Murrell, July 10, 2024
TV Pursues Nichification or 1 + 1 = Barrels of Money
July 10, 2024
This essay is the work of a dumb dinobaby. No smart software required.
What does an organization with a huge market, like the Boy Scouts and the Girl Scouts, do to remain relevant and have enough money to pay the overhead and salaries of the top dogs? They merge.
What does an old-school talking heads television channel do to remain relevant and have enough money to pay the overhead and salaries of the top dogs? They create niches.
A cheese maker who can’t sell his cheddar does some MBA-type thinking. Will his niche play work? Thanks, MSFT Copilot. How’s that Windows 11 update doing today?
Which path is the optimal one? I certainly don’t have a definitive answer. But if each “niche” is a new product, I remember hearing that the failure rate was of sufficient magnitude to make me think in terms of a regular job. Call me risk averse, but I prefer the rational dinobaby moniker, thank you.
“CNBC Launches Sports Vertical amid Broader Biz Shift” reports with “real” news seriousness:
The idea is to give sports business executives insights and reporting about sports similar to the data and analysis CNBC provides to financial professionals, CNBC President KC Sullivan said in a statement.
I admit. I am not a sports enthusiast. I know some people who are, but their love of sport is defined by gambling, gambling and drinking at the 19th hole, and dressing up in Little League outfits and hitting softballs in the Harrod’s Creek Park. Exciting.
The write up held one differentiator from the other seemingly endless sports programs like those featuring Pat McAfee-type personalities. Here’s the pivot upon which the nichification turns:
The idea is to give sports business executives insights and reporting about sports similar to the data and analysis CNBC provides to financial professionals…
Imagine the legions of viewers who are interested in dropping billions on a major sports franchise. For me, it is easier to visualize sports betting. One benefit of gambling is that it supplies “addicts” for rehabilitation centers.
I liked the wrap up for the article. Here it is:
Between the lines: CNBC has already been investing in live coverage of sports, and will double down as part of the new strategy.
- CNBC produces an annual business of sports conference, Game Plan, in partnership with Boardroom.
- Andrew Ross Sorkin, Carl Quintanilla and others will host coverage from the 2024 Olympic Games in Paris this summer.
Zoom out: Cable news companies are scrambling to reimagine their businesses for a digital future.
- CNBC already sells digital subscriptions that include access to its live TV feed.
- In the future, it could charge professionals for niche insights around specific verticals, or beats.
Okay, I like the double down, a gambling term. I like the conference angle, but the named entities do not resonate with me. I am a dinobaby, and nichification does not strike me as a sensible tactic for an outfit whose eyeballs are going elsewhere. The subscription idea is common. Isn’t there something called “subscription fatigue”? And the plan to charge to access a sports portal is an interesting one. But if one has 1,000 people looking at content, the number who subscribe seems to be in the < one to two percent range based on my experience.
But what do I know? I am a dinobaby and I know about TikTok and other short form programming. Maybe that’s old hat too? Did CNBC talk to influencers?
Stephen E Arnold, July 10, 2024
A Signal That Money People Are Really Worried about AI Payoffs
July 8, 2024
This essay is the work of a dumb dinobaby. No smart software required.
“AI’s $600B Question” is an interesting signal. The subtitle for the article is the pitch that sent my signal processor crazy: “The AI bubble is reaching a tipping point. Navigating what comes next will be essential.”
Executives on a thrill ride seem to be questioning the wisdom of hopping on the roller coaster. Thanks, MSFT Copilot. Good enough.
When money people output information that raises a question, something is happening. When the payoff is nailed, the financial types think about yachts, Bugattis, and getting quoted in the Financial Times. Doubts are raised because of these headline items: AI and $600 billion.
The write up says:
A huge amount of economic value is going to be created by AI. Company builders focused on delivering value to end users will be rewarded handsomely. We are living through what has the potential to be a generation-defining technology wave. Companies like Nvidia deserve enormous credit for the role they’ve played in enabling this transition, and are likely to play a critical role in the ecosystem for a long time to come. Speculative frenzies are part of technology, and so they are not something to be afraid of.
If I understand this money talk, a big time outfit is directly addressing fears that AI won’t generate enough cash to pay its bills and make the investors a bundle of money. If the AI frenzy was on the Money Train Express, why raise questions and provide information about the tough-to-control costs for making AI knock off the hallucination, the product recalls, the lawsuits, and the growing number of AI projects which just don’t work?
The fact of the article’s existence makes it clear to me that some folks are indeed worried. Does the write up reassure those with big bucks on the line? Does the write up encourage investors to pump more money into a new AI start up? Does the write up convert tests into long-term contracts with the big AI providers?
Nope, nope, and nope.
But here’s the unnerving part of the essay:
In reality, the road ahead is going to be a long one. It will have ups and downs. But almost certainly it will be worthwhile.
Translation: We will take your money and invest it. Just buckle up, buttercup. The ride on this roller coaster may end with the expensive cart hurtling from the track to the asphalt below. But don’t worry about us venture types. We will surf on churn and the flows of money. Others? Not so much.
Stephen E Arnold, July 8, 2024
Happy Fourth of July Says Microsoft to Some Employees
July 8, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
I read “Microsoft Lays Off Employees in New Round of Cuts.” The write up reports:
Microsoft conducted another round of layoffs this week in the latest workforce reduction implemented by the Redmond tech giant this year… Posts on LinkedIn from impacted employees show the cuts affecting employees in product and program management roles.
I wonder if some of those Softies were working on security (the new Job One at Microsoft) or the brilliantly conceived and orchestrated Recall “solution.”
The write up explains or articulates an apologia too:
The cutbacks come as Microsoft tries to maintain its profit margins amid heavier capital spending, which is designed to provide the cloud infrastructure needed to train and deploy the models that power AI applications.
Several observations:
- A sure-fire way to solve personnel and some types of financial issues is identifying employees, whipping up some criteria-based dot points, and telling the folks, “Good news. You can find your future elsewhere.”
- Dumping people calls attention to management’s failure to keep staff and tasks aligned. Based on security and reliability issues Microsoft evidences, the company is too large to know what color sock is on each foot.
- Microsoft faces a challenge, and it is not AI. With more functions working in a browser, perhaps fed up individuals and organizations will re-visit Linux as an alternative to Microsoft’s products and services?
Net net: Maybe firing the security professionals and those responsible for updates which kill Windows machines is a great idea?
Stephen E Arnold, July 8, 2024