Google Meet: Going in Circles Is Either Brilliant or Evidence of a Management Blind Spot
October 24, 2024
No smart software but we may use image generators to add some modern spice to the dinobaby’s output.
I read an article which seems to be a rhetorical semantic floor routine. “Google Meet (Original) Is Finally, Properly Dead” explains that once there was Google Meet. Actually there was something called Hangouts, which as I recall was not exactly stable on my steam-powered system in rural Kentucky. Hangouts morphed into Hangouts Meet. Then Hangouts Meet forked itself (maybe Google forked its users?) and there was Hangouts Meet and Hangouts Chat. Hangouts Chat then became Google Chat.
The write up focuses on Hangouts Meet, which is now dead. But the write up says:
In April 2020, Google rebranded Hangouts Meet to just “Meet.” A couple of years later, in 2022, the company merged Google Duo into Google Meet due to Duo’s larger user base, aiming to streamline its video chat services. However, to avoid confusion between the two Meet apps, Google labeled the former Hangouts Meet as “Meet (Original)” and changed its icon to green. However, having two Google Meet apps didn’t make sense and the company began notifying users of the “Meet (Original)” app to uninstall it and switch to the Duo-rebranded Meet. Now, nearly 18 months later, Google is officially discontinuing the Meet (Original) app, consolidating everything and leaving just one version of Meet on the Play Store.
Got that? The article explains:
Phasing out the original Meet app is a logical move for Google as it continues to focus on developing and enhancing the newer, more widely used version of Meet. The Duo-rebranded Google Meet has over 5 billion downloads on the Play Store and is where Google has been adding new features. Redirecting users to this app aligns with Google’s goal of consolidating its video services into a single, feature-rich platform.
Let’s step back. What does this Meet saga tell us about Google’s efficiency? Here are my views:
- Without its monopoly money, Google could not afford the type of inefficiency evidenced by the tale of the Meets
- The product management process appears to operate without much, if any, senior management oversight
- Google allows internal developers to whack away, release services, and then flounder until a person decides, “Let’s try again, just with different Googlers.”
So how has that worked out for Google? First, I think Microsoft Teams is a deeply weird product. The Softies want Teams to have more functions than the elephantine Microsoft Word. But lots of companies use Word and they now use Teams. And there is Zoom. Poor Zoom has lost its focus on allowing quick and easy online video conferences. Now I have to hunt for options between a truly peculiar Zoom app and the even more clumsy Zoom Web site.
Then there is Google Meet Duo whatever. Amazing. The services are an example of a very confused dog chasing its tail. Round and round she goes until some adult steps in and says, “Down, girl, before you die.”
PS. Who Google Chats from email?
Stephen E Arnold, October 24, 2024
OpenAI: An Illustration of Modern Management Acumen
October 23, 2024
Just a humanoid processing information related to online services and information access.
The Hollywood Reporter (!) published “What the Heck Is Going On At OpenAI? As Executives Flee with Warnings of Danger, the Company Says It Will Plow Ahead.” When I compare the Hollywood Reporter with some of the poohbah “real” news discussion of a company on track to lose a ballpark $5 billion in 2024, the write up does a good job of capturing the managerial expertise on display at the company.
The wanna-be lion of AI is throwing a party. Will there be staff to attend? Thanks, MSFT Copilot. Good enough.
I worked through the write up and noted a couple of interesting passages. Let’s take a look at them and then ponder the caption in the smart software generated for my blog post. Full disclosure: I used the Microsoft Copilot version of OpenAI’s applications to create the art. Is it derivative? Heck, who knows when OpenAI is involved in crafting information with a click?
The first passage I circled is the one about the OpenAI chief technology officer bailing out of the high-flying outfit:
she left because she’d given up on trying to reform or slow down the company from within. Murati was joined in her departure from the high-flying firm by two top science minds, chief research officer Bob McGrew and researcher Barret Zoph (who helped develop ChatGPT). All are leaving for no immediately known opportunity.
That suggests stability in the virtual executive suite. I suppose the prompt used to aid these wizards in their decision to find their future elsewhere was something like “Hello, ChatGPT 4o1, I want to work in a technical field which protects intellectual property, helps save the whales, and contributes to the welfare of those without deep knowledge of multi-layer neural networks. In order to find self-fulfillment not possible with YouTube TikTok videos, what do you suggest for a group of smart software experts? Please, provide examples of potential work paths and provide sources for the information. Also, do not include low probability job opportunities like sanitation worker in the Mission District, contract work for Microsoft, or negotiator for the countries involved in a special operation, war, or regional conflict. Thanks!”
The output must have been convincing because the write up says: “All are leaving for no immediately known opportunity.” Interesting.
The second passage warranting a blue underline is a statement attributed to another former OpenAI wizard, William Saunders. He apparently told a gathering of esteemed Congressional leaders:
“AGI [artificial general intelligence or a machine smarter than every humanoid] would cause significant changes to society, including radical changes to the economy and employment. AGI could also cause the risk of catastrophic harm via systems autonomously conducting cyberattacks, or assisting in the creation of novel biological weapons,” he told lawmakers. “No one knows how to ensure that AGI systems will be safe and controlled … OpenAI will say that they are improving. I and other employees who resigned doubt they will be ready in time.”
I wonder if he asked the OpenAI smart software for tips about testifying before a Senate Committee. If he did, he seems to be voicing the idea that smart software will help some people to develop “novel biological weapons.” Yep, we could all die in a sequel, Covid 2.0: The Invisible Global Killer. (Does that sound like a motion picture suitable for Amazon, Apple, or Netflix? I have a hunch some people in Hollywood will do some tests in Peoria or Omaha, wherever the “middle” of America is now.)
The final snippet I underlined is:
OpenAI has something of a history of releasing products before the industry thinks they’re ready.
No kidding. But the object of the technology game is to become the first mover, obtain market share, and kill off any pretenders the way a lion in Africa goes for the old, lame, young, and dumb. OpenAI wants to be the king of the AI jungle. The one challenge may be that the AI lion at the company is having trouble getting staff to attend his next party. I see empty cubicles.
Stephen E Arnold, October 23, 2024
When Wizards Squabble the Digital World Bleats, “AI Yi AI”
October 21, 2024
No smart software but we may use image generators to add some modern spice to the dinobaby’s output.
The world is abuzz with a New York Times “real” news story. From my point of view, the write up reminds me of a script from “The Guiding Light.” The “to be continued” is implicit in the drama presented in the pitch for a new story line. AI wizard and bureaucratic marvel squabble about smart software.
According to “Microsoft and OpenAI’s Close Partnership Shows Signs of Fraying”:
At an A.I. conference in Seattle this month, Microsoft didn’t spend much time discussing OpenAI. Asha Sharma, an executive working on Microsoft’s A.I. products, emphasized the independence and variety of the tech giant’s offerings. “We definitely believe in offering choice,” Ms. Sharma said.
Two wizards squabble over the AI goblet. Thanks, MSFT Copilot, good enough, which for you is top notch.
What? Microsoft offers a choice. What about pushing Edge relentlessly? What about the default install of an intelligence officer’s fondest wish: Historical data on a bad actor’s computer? What about users who want to stick with Windows 7 because existing applications run on it without choking? What about users who want to install Windows 11 but cannot because of arbitrary Microsoft restrictions? Choice?
Several observations:
- The tension between Sam AI-Man and Satya Nadella, the genius behind today’s wonderful Microsoft software, is no secret. Sam AI-Man found some acceptance when he crafted a deal with Oracle.
- When wizards argue, the drama is high, because both parties to the dispute know that AI is a winner-take-all game, with losers destined to get only 65 percent of the winner’s size. Others get essentially nothing. Winners get control.
- The anti-MBA organization of OpenAI, Microsoft’s odd deal, and the staffing shenanigans of both Microsoft and OpenAI suggest that neither MSFT’s Nadella nor OpenAI’s Sam AI-Man is a big picture thinker.
What will happen now? I think that the Googlers will add a new act to the Sundar & Prabhakar Comedy Tour. The two jokers will toss comments back and forth about how both the Softies and the AI-Men need to let another firm’s AI provide information about organizational planning.
I think the story will be better as a comedy routine. Scrap that “Guiding Light” idea. A soap opera is far too serious for the comedy now on stage.
Stephen E Arnold, October 21, 2024
Forget Surveillance Capitalism. Think Parasite Culture
October 15, 2024
Ted Gioia touts himself as The Honest Broker on his blog and he recently posted about the current state of the economy: “Are We Now Living In A Parasite Culture?” In the opening he provides examples of natural parasites before moving to his experience working with parasite strategies.
Gioia said that when he consulted for Fortune 500 companies, he and others used parasite strategies as thought exercises. Here’s what a parasite strategy is:
1. “You allow (or convince) someone else to make big investments in developing a market—so they cover the cost of innovation, or advertising, or lobbying the government, or setting up distribution, or educating customers, or whatever. But…
2. You invest your energy instead on some way of cutting off these dutiful folks at the last moment—at the point of sale, for example. Hence…
3. You reap the benefits of an opportunity that you did nothing to create.”
On first reading, it doesn’t seem that our economy is like that until he provides real examples: Facebook, Spotify, TikTok, and Google. All of these platforms are nothing more than central locations for people to post and share their content, or they aggregate content from the Internet. These platforms thrive off the creativity of their users, and their executive boards reap the benefits, while the creators struggle to rub two cents together.
Smart influencers know to diversify their income streams through sponsorship, branding, merchandise, and more. Gioia points out that the Forbes lists of billionaires includes people who used parasitical business strategies to get rich. He continues by saying that these parasites will continue to guzzle off their hosts’ lifeblood with a chance of killing said host.
It’s happening now in the creative economy with Big Tech’s investment in AI and how, despite lawsuits and laws, these companies are illegally training AI on creative works. He finishes with the obvious statement that politicians should be protecting people, but that they’re probably part of the problem. No duh.
Whitney Grace, October 15, 2024
Microsoft Security: A World First
September 30, 2024
This essay is the work of a dumb dinobaby. No smart software required.
After the somewhat critical comments of the chief information security officer for the US, Microsoft said it would do better security. “Secure Future Initiative” is a 25-page document which contains some interesting comments. Let’s look at a handful.
Some bad actors just go where the pickings are the easiest. Thanks, MSFT Copilot. Good enough.
On page 2 I noted the record beating Microsoft has completed:
Our engineering teams quickly dedicated the equivalent of 34,000 full-time engineers to address the highest priority security tasks—the largest cybersecurity engineering project in history.
Microsoft is a large software company. It has large security issues. Therefore, the company has undertaken the “largest cybersecurity engineering project in history.” That’s great for the Guinness Book of World Records. The question is, “Why?” The answer, it seems to me, is that Microsoft did “good enough” security. As the US government’s report stated, “Nope. Not good enough.” Hence, a big and expensive series of changes. Have the changes been tested, or have unexpected security issues been introduced to the sprawl of Microsoft software? Another question from this dinobaby: “Can a big company doing good enough security implement fixes to remediate ‘the highest priority security tasks’?” Companies have difficulty changing certain work practices. Can “good enough” methods do the job?
On page 3:
Security added as a core priority for all employees, measured against all performance reviews. Microsoft’s senior leadership team’s compensation is now tied to security performance
Compensation is linked to security as a “core priority.” I am not sure what making something a “core priority” means, particularly when the organization has implemented security systems and methods which have been found wanting. When the US government gives a bad report card, one forms an impression of a fairly deep hole which needs to be filled with functional, reliable bits. Adding a “core priority” does not translate into secure software from cloud to desktop.
On page 5:
To enhance governance, we have established a new Cybersecurity Governance Council…
The creation of a council and adding security responsibilities to some executives and hiring a few other means to me:
- Meetings and delays
- Adding duties may translate to other issues
- How much will these remediating processes cost?
Microsoft may be too big to change its culture in a timely manner. The time required for a council to enhance governance means fixing security problems may take time. Even with additional time, coordinating “the equivalent of 34,000 full-time engineers” may be a project management task of more than modest proportions.
On page 7:
Secure by design
Quite a subhead. How can Microsoft’s sweep of legacy and current products be made secure by design when these products have been shown to be insecure?
On page 10:
Our strategy for delivering enduring compliance with the standard is to identify how we will Start Right, Stay Right, and Get Right for each standard, which are then driven programmatically through dashboard driven reviews.
The alliteration is notable. However, what is “right”? What happens when fixing existing issues and adhering to a “standard” reveals that the “standard” has changed? The complexity of managing the process of getting something “right” is like an example from a Santa Fe Institute complexity book. The reality of addressing known security issues and conforming to standards which may change is interesting to contemplate. Words are great, but remediating what’s wrong in a dynamic and very complicated series of dependent services is likely to be a challenge. Bad actors will quickly probe for new issues. Generally speaking, bad actors find faults and exploit them. Thus, Microsoft will find itself in a troublesome mode: permanent reaction to previously unknown and new security issues.
On page 11, the security manifesto launches into “pillars.” I think the idea is that good security is built upon strong foundations. But when remediating “as is” code as well as legacy code, how long will the design, engineering, and construction of the pillars take? Months, years, decades, or multiple decades? The US CISO report card may not apply to certain time scales; for instance, big government contracts. Pillars are ideas.
Let’s look at one:
The monitor and detect threats pillar focuses on ensuring that all assets within Microsoft production infrastructure and services are emitting security logs in a standardized format that are accessible from a centralized data system for both effective threat hunting/investigation and monitoring purposes. This pillar also emphasizes the development of robust detection capabilities and processes to rapidly identify and respond to any anomalous access, behavior, and configuration.
The reality of today’s world is that security issues can arise from insiders. Outside threats seem to be identified each week. However, different cyber security firms identify and analyze different security issues. No one cyber security company is delivering 100 percent foolproof threat identification. “Logs” are great; however, Microsoft used to charge for making a logging function available to a customer. Now more logs. The problem is that logs help identify a breach; that is, a previously unknown vulnerability is exploited, or an old vulnerability makes its way into a Microsoft system by a user action. How can a company which has received a poor report card from the US government become the firm with a threat detection system which is the equivalent of products now available from established vendors? The recent CrowdStrike misstep illustrates that the Microsoft culture created the opportunity for the procedural mistake someone made at CrowdStrike. The words are nice, but I am not that confident in Microsoft’s ability to build this pillar. Microsoft may have to punt, buy several competitive systems, and deploy them like mercenaries to protect the unmotivated Roman citizens in a century.
I think reading the “Secure Future Initiative” is a useful exercise. Manifestos can add juice to a mission. However, can the troops deliver a victory over the bad actors who swarm to Microsoft systems and services because good enough is like a fried chicken leg to a colony of ants?
Stephen E Arnold, September 30, 2024
AI Automation Has a Benefit … for Some
September 26, 2024
Humanity’s progress runs parallel to advancing technology. As technology advances, aspects of human society and culture are rendered obsolete and replaced with new things. Job automation is a huge part of this; past examples are the Industrial Revolution and the implementation of computers. AI algorithms are set to make another part of the labor force defunct, but the BBC claims that might be beneficial to workers: “Klarna: AI Lets Us Cut Thousands of Jobs - But Pay More.”
Klarna is a fintech company that provides online financial services and is described as a “buy now, pay later” company. Klarna plans to use AI to automate the majority of its workforce. The company’s leaders already canned 1,200 employees, and they plan to fire another 2,000 as AI marketing and customer service are implemented. That leaves Klarna with a grand total of 1,800 employees, who will be paid more.
Klarna’s CEO Sebastian Siematkowski is putting a positive spin on cutting jobs by saying the remaining employees will receive larger salaries. While Siematkowski sees the benefits of AI, he does warn about AI’s downside and advises the government to do something. He said:
“ ‘I think politicians already today should consider whether there are other alternatives of how they could support people that may be effective,’ he told the Today programme, on BBC Radio 4.
He said it was “too simplistic” to simply say new jobs would be created in the future.
‘I mean, maybe you can become an influencer, but it’s hard to do so if you are 55-years-old,’ he said.”
The International Monetary Fund (IMF) predicts that AI will affect 40% of all jobs and worsen overall inequality. As Klarna reduces its staff, the company will enter what is called “natural attrition,” aka a hiring freeze. The remaining workforce will have bigger workloads. Siematkowski claims AI will eventually reduce those workloads.
Will that really happen? Maybe?
Will the remaining workers receive a pay raise or will that money go straight to the leaders’ pockets? Probably.
Whitney Grace, September 26, 2024
Happy AI News: Job Losses? Nope, Not a Thing
September 19, 2024
This essay is the work of a dumb humanoid. No smart software required.
I read “AI May Not Steal Many Jobs after All. It May Just Make Workers More Efficient.” Immediately two points jumped out at me. The AP (the publisher of the “real” news story) is hedging with the weasel word “may” and the hedgy phrase “after all.” Why is this important? The “real” news industry is interested in smart software to reduce costs and generate more “real” news more quickly. The days of “real” reporters disappearing for hours to confirm with a source are often associated with fiddling around. The costs of doing anything without a gusher of money pumping 24×7 are daunting. The word “efficient” sits in the headline as a digital harridan stakeholder. Who wants that?
The manager of a global news operation reports that under his watch, he has achieved peak efficiency. Thanks, MSFT Copilot. Will this work for production software development? Good enough is the new benchmark, right?
The story itself strikes me as a bit of content marketing which says, “Hey, everyone can use AI to become more efficient.” The subtext is, “Hey, don’t worry. No software robot or agentic thingy will reduce staff. Probably.”
The AP is a litigious outfit even though I worked at a newspaper which “participated” in the business process of the entity. Here’s one sentence from the “real” news write up:
Instead, the technology might turn out to be more like breakthroughs of the past — the steam engine, electricity, the internet: That is, eliminate some jobs while creating others. And probably making workers more productive in general, to the eventual benefit of themselves, their employers and the economy.
Yep, just like the steam engine and the Internet.
When technologies emerge, most go away or become componentized or dematerialized. When one of those hot technologies fail to produce revenues, quite predictable outcomes result. Executives get fired. VC firms do fancy dancing. IRS professionals squint at tax returns.
So far AI has been a “big guys win sort of because they have bundles of cash” and “little outfits lose control of their costs”. Here’s my take:
- Human-generated news is expensive and if smart software can do a good enough job, that software will be deployed. The test will be real time. If the software fails, the company may sell itself, pivot, or run a garage sale.
- When “good enough” is the benchmark, staff will be replaced with smart software. Some of the whiz kids in AI like the buzzword “agentic.” Okay, agentic systems will replace humans with good enough smart software. That will happen. Excellence is not the goal. Money saving is.
- Over time, the ideas of the current transformer-based AI systems will be enriched by other numerical procedures and maybe — just maybe — some novel methods will provide “smart software” with more capabilities. Right now, most smart software just finds a path through already-known information. No output is new, just close to what the system’s math concludes is on point. Right now, the next generation of smart software seems to be in the future. How far? It’s anyone’s guess.
My hunch is that Amazon Audible will suggest that humans will not lose their jobs. However, the company is allegedly going to replace human voices with “audibles” generated by smart software. (For more about this displacement of humans, check out the Bloomberg story.)
Net net: The “real” news story prepares the field for planting writing software in an organization. It says, “Customers will benefit, and more jobs will be produced.” Great assertions. I think AI will be disruptive and in unpredictable ways. Why not come out and say, “If the agentic software is good enough, we will fire people”? Answer: Being upfront is not something those who are not dinobabies do.
Stephen E Arnold, September 19, 2024
IT Departments Losing Support From Top Brass
September 19, 2024
Modern businesses can’t exist today without technological infrastructure. Organizations rely on the IT department; without the Internet, computers, and other technology, businesses come to a screeching halt. Despite the power IT departments wield, ZDNet says that, “Business Leaders Are Losing Faith In IT, According To This IBM Study. Here’s Why.” According to the survey, ten years ago business leaders believed that basic IT services were effective. Now that confidence is only about half of what it used to be. Generative AI is also giving leaders the willies.
Business leaders are disgruntled with IT, and they have high expectations about what technology should deliver. Leaders want their technology to give their businesses a competitive edge. They’re also more technologically competent than their predecessors, so the leaders want instantaneous fixes and results.
A big problem is that the leaders and tech departments aren’t communicating and collaborating. Generative AI is making both parties worry, because one doesn’t know what the other is doing concerning implementation and how to use it. It’s important for these groups to start talking, because AI and hybrid cloud services are expected to consume 50% more of infrastructure budgets.
The survey shared suggestions to improve confidence in IT services. Among the usual suggestions were: hire more women who are IT or AI experts, make legacy systems AI ready through infrastructure investments, use AI to build better AI, involve the workforce in how AI drives the business, and then these:
“Measure, measure, measure technology’s impact on business outcomes: Notably, among high-performing tech CxO respondents defined in the survey, the study found that organizations that connect technology investments to measurable business outcomes report 12% higher revenue growth.
Talk about outcomes, not about data: "Focus on shared objectives by finding a common language with the business based on enhancing the customer experience and delivering outcomes. Use storytelling and scenario-based exercises to drive tech and the business to a shared understanding of the customer journey and pain points."
It’s the usual information with an updated spin on investing in the future, diversifying the workforce, and listening to the needs of workers. It’s the same stuff in a new package.
Whitney Grace, September 19, 2024
Great Moments in Leadership: Drive an Uber
September 18, 2024
I was zipping through my newsfeed and spotted this item: “Ex-Sony Boss Tells Laid-Off Employees to Drive an Uber and Find a Cheap Place to Live.” In the article, the ex-Sony boss is quoted as allegedly saying:
I think it’s probably very painful for the managers, but I don’t think that having skill in this area is going to be a lifetime of poverty or limitation. It’s still where the action is, and it’s like the pandemic but now you’re going to have to take a few…figure out how to get through it, drive an Uber or whatever, go off to find a cheap place to live and go to the beach for a year.
I admit that I find the advice reasonably practical. However, it costs money to summon an Uber. The other tidbit is that a person without a job should find a “cheap place to live.” Ah ha, van life or moving in with a friend. Possibly one could become a homeless person dwelling near a beach. What if the terminated individual has a family? I suppose there are community food services.
From an employee’s point of view, this is “tough love” management. How effective is this approach? I have worked for a number of firms in my 50 plus year career prior to my retiring in 2013. I can honestly say that this Uber and move to a cheaper place to live is remarkable. It is novel. Possibly a breakthrough in management methods.
I look forward to a TED talk from this leader. When will the Harvard Business Review present a more in-depth look at the former Sony president’s ideas? Oh, right. “Former” is the operative word. Yep, former.
Stephen E Arnold, September 18, 2024
CrowdStrike: Whiffing Security As a Management Precept
September 17, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Not many cyber security outfits can make headlines like NSO Group. But no longer. A new buzz champion has been crowned: CrowdStrike. I learned a bit more about the company’s commitment to rigorous engineering and exemplary security practices from “CrowdStrike Ex-Employees: Quality Control Was Not Part of Our Process.” NSO Group’s rise to stardom was propelled by its leadership and belief in the superiority of Israeli security-related engineering. CrowdStrike skipped that and perfected a type of software that could strand passengers, curtail surgeries, and force Microsoft to rethink its own wonky decisions about kernel access.
A trained falcon tells an onlooker to go away. The falcon, a stubborn bird, has fallen in love with a limestone gargoyle. Its executive function resists inputs. Thanks, MSFT Copilot. Good enough.
The write up says:
Software engineers at the cybersecurity firm CrowdStrike complained about rushed deadlines, excessive workloads, and increasing technical problems to higher-ups for more than a year before a catastrophic failure of its software paralyzed airlines and knocked banking and other services offline for hours.
Let’s assume this statement is semi-close to the truth pin on the cyber security golf course. In fact, the company insists that it did not cheat like a James Bond villain playing a round of golf. The article reports:
CrowdStrike disputed much of Semafor’s reporting and said the information came from “disgruntled former employees, some of whom were terminated for clear violations of company policy.” The company told Semafor: “CrowdStrike is committed to ensuring the resiliency of our products through rigorous testing and quality control, and categorically rejects any claim to the contrary.”
I think someone at CrowdStrike has channeled a mediocre law school graduate and a former PR professional from a mid-tier publicity firm in Manhattan, lower Manhattan, maybe in Alphabet City.
The article runs through a litany of short cuts. You can read the original article and sort them out.
The company’s flagship product is called “Falcon.” The idea is that the outstanding software can, like a falcon, spot its prey (a computer virus). Then it can solve the trajectory calculations and snatch the careless gopher. One gets a plump falcon and one gopher filling in for a burrito at a convenience store on the Information Superhighway.
The killer paragraph in the story, in my opinion, is:
Ex-employees cited increased workloads as one reason they didn’t improve upon old code. Several said they were given more work following staff reductions and reorganizations; CrowdStrike declined to comment on layoffs and said the company has “consistently grown its headcount year over year.” It added that R&D expenses increased from $371.3 million to $768.5 million from fiscal years 2022 to 2024, “the majority of which is attributable to increased headcount.”
I buy the complaining former employee argument. But the article cites a number of CrowdStrikers who are taking their expertise and work ethic elsewhere. As a result, I think the fault is indeed a management problem.
What does one do with a bad falcon? I would put a hood on the bird and let it scroll TikToks. Bewits and bells would alert me when one of these birds was getting close to me.
Stephen E Arnold, September 17, 2024