Management Insights Circa Spring 2025

March 18, 2025

Another dinobaby blog post. Eight decades and still thrilled when I point out foibles.

On a call today, one of the people asked, “Did you see that excellent leadership comes from ambivalence?” No, sorry. After my years at the blue chip consulting firm, I ignore those insights. Ambivalence. The motivated leader cares about money, the lawyers, the vacations, the big customer, and money. I think I have these in the correct order.

Imagine my surprise when I read another management breakthrough. Navigate to “Why Your ‘Harmonious’ Team Is Actually Failing.” The insight is that happy teams are in coffee shop mode. If one is not motivated by one of the factors I identified in the first paragraph of this essay, life will be like a drive-through smoothie shop. Kick back, let someone else do the work, and lap up that banana and tangerine goodie.

The write up presents a management concept: one should strive for a roughie, maybe with a dollop of chocolate and some salted nuts. Get that blood pressure rising. Here’s a passage I noted:

… real psychological safety isn’t about avoiding conflict. It’s about creating an environment where challenging ideas makes the team stronger, not weaker.

The idea is interesting. I have learned that many workers, like helicopter parents, want to watch and avoid unnecessary conflicts, interactions, and dust-ups. The write up slaps some psycho babble on this management insight. That’s perfect for academics on the tenure track and for conversations with quite sensitive, big-spending clients. But often a more dynamic approach is necessary. If it is absent, there is a problem with the company. Hello, General Motors, Intel, and Boeing.

Stifle much?

The write up adds:

I’ve seen plenty of “nice” teams where everyone was polite, nobody rocked the boat, and meetings were painless. And almost all of those teams produced ok work. Why? Because critical thinking requires friction. Those teams weren’t actually harmonious—they were conflict-avoidant. The disagreements still existed; they just went underground. Engineers would nod in meetings then go back to their desks and code something completely different. Design flaws that everyone privately recognized would sail through reviews untouched. The real dysfunction wasn’t the lack of conflict—it was the lack of honest communication. Those teams weren’t failing because they disagreed too little; they were failing because they couldn’t disagree productively.

Who knew? Hello, General Motors, Intel, and Boeing.

Here’s the insight:

Here’s the weird thing I’ve found: teams that feel safe enough to hash things out actually have less nasty conflict over time. When small disagreements can be addressed head-on, they don’t turn into silent resentment or passive-aggressive BS. My best engineering teams were never the quiet ones—they were the ones where technical debates got spirited, where different perspectives were welcomed, and where we could disagree while still respecting each other.

The challenge is to avoid creating complacency.

Stephen E Arnold, March 18, 2025

AI May Be Discovering Kurt Gödel Just as Einstein and von Neumann Did

March 17, 2025

This blog post is the work of a humanoid dino baby. If you don’t know what a dinobaby is, you are not missing anything.

AI re-thinking is becoming more widespread. I published a snippet of an essay about AI and its impact in socialist societies on March 10, 2025. I noticed “A Bear Case: My Predictions Regarding AI Progress.” The write up is interesting, and I think it represents thinking which is becoming more prevalent among individuals who have racked up what I call AI mileage.

The main theme of the write up is a modern day application of Kurt Gödel’s annoying incompleteness theorem. I am no mathematician like my great uncle Vladimir Arnold, who worked for years with the somewhat quirky Dr. Kolmogorov. (Family tip: Going winter camping with Dr. Kolmogorov was not a good idea unless… Well, you know…)

The main idea is that a formal axiomatic system satisfying certain technical conditions cannot decide the truth value of all statements about natural numbers. In a nutshell, a system cannot fully account for itself from the inside. Smart software is not able to go outside of its training boundaries as far as I know.
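
For readers who want the formal flavor, here is a compact, informal rendering of the first incompleteness theorem. The notation is my addition, not the essay’s:

```latex
% Goedel's first incompleteness theorem, stated informally:
% a consistent, effectively axiomatized theory T that interprets
% basic arithmetic (e.g., Robinson's Q) leaves some sentence undecided.
\text{If } T \text{ is consistent, effectively axiomatized, and } T \supseteq \mathsf{Q},
\quad \text{then there is a sentence } G_T \text{ with } T \nvdash G_T \text{ and } T \nvdash \neg G_T.
```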

Back to the essay: the author points out that AI delivers something useful:

There will be a ton of innovative applications of Deep Learning, perhaps chiefly in the field of biotech, see GPT-4b and Evo 2. Those are, I must stress, human-made innovative applications of the paradigm of automated continuous program search. Not AI models autonomously producing innovations.

The essay does contain a question I found interesting:

Because what else are they [AI companies and developers] to do? If they admit to themselves they’re not closing their fingers around godhood after all, what will they have left?

Let me offer several general thoughts. I admit that I am not able to answer the question, but some ideas crossed my mind when I was thinking about the sporty Kolmogorov, my uncle’s advice about camping in the winter, and this essay:

  1. Something else will come along. There is a myth that technology progresses. I think technology is like the fictional tribble on Star Trek. The products and services are destined to produce more products and services. As the Santa Fe Institute crowd argues, order emerges. Will the next big thing be AI? Probably AI will be in the DNA of the next big thing. So one answer to the question is, “Something will emerge.” Money will flow and the next big thing cycle begins again.
  2. The innovators and the AI companies will pivot. This is a fancy way of saying, “Try to come up with something else.” Even in the age of monopolies and oligopolies, change is relentless. Some of the changes will be recognized as the next big thing or at least the thing a person can do to survive. Does this mean Sam AI-Man will manage the robots at the local McDonald’s? Probably not, but he will come up with something.
  3. The AI hot pot will cool. Life will regress to the mean or a behavior that is not hell bent on becoming a super human like the guy who gets transfusions from his kid, the wonky “have my baby” thinking of a couple of high profile technologists, or the money lust of some 25-year-old financial geniuses on Wall Street. A digitized organization man living out the theory of the leisure class will return. (Tip: Buy a dark grey suit. Lose the T shirt.)

As an 80 year old dinobaby, I find the angst of AI interesting. If Kurt Gödel were alive, he might agree to comment, “Sonny, you can’t get outside the set.” My uncle would probably say, “Those costs. Are they crazy?”

Stephen E Arnold, March 17, 2025

Wizard Snarks Amazon: Does Amazon Care? Ho Ho No

March 13, 2025

Another post from the dinobaby. Alas, no smart software used for this essay.

I read a wonderful essay from the fellow who created a number of high-value solutions. Remember the Oxford English Dictionary SGML project or the Open Text Index? The person involved deeply in both of these projects is Tim Bray. He wrote a pretty good essay called “Bye, Prime.” On the surface it is a chatty explanation of why a former Amazon officer dropped the “Prime” membership. Thinking about the comments in the write up, I believe Dr. Bray’s article underscores some deeper issues.

In my opinion, the significant points include:

First, 21st century capitalism lacks “ethics stuff.” The decisions benefit the stakeholders.

Second, in a major metropolitan area, local outlets provide equivalent products at competitive prices. This suggests a bit of price exploitation occurs in giant online retail operations.

Third, American companies are daubed with tar as a result of certain national postures.

Fourth, a crassness is evident in some US online services.

Is the article about Amazon? I would suggest that it is, but the implications are broader. I recommend the write up. I believe attending to the explicit and implicit messages in the essay would be useful.

I think the processes identified by Dr. Bray are unlikely to slow. Going back is difficult, perhaps impossible.

PS. I think the need to fix up the security of AWS buckets, get the third party reseller scams cleaned up, and return basic functionality to the Kindle interface is an indication that Amazon has gotten lost in one of its warehouses because smart Alexa is really dumb.

Stephen E Arnold, March 13, 2025

NSO Group, the PR of Intelware, Captures Headlines… Yet Again

March 13, 2025

Our reading and research have led us to this basic rule: Unless measures are taken to keep something secret, diffusion is inevitable. Knowledge about systems, methods, and tools to access data is widespread. Case in point—Today’s General Counsel tells us, "Pegasus Spyware Is Showing Up on Corporate Execs’ Cell Phones." The brief write-up cites reporting by The Record’s Suzanne Smalley, who got her information from security firm iVerify. It shows a steep climb in Pegasus-infected devices over the second half of last year. We learn:

"The number of reported infected phones among iVerify corporate clients was eleven out of 18,000 devices tested in December last year. In May 2024, when iVerify first began offering the spyware testing service, a study found seven spyware infections out of 3,000 phones tested. ‘The world remains totally unprepared to deal with this from a security perspective,’ says iVerify co-founder and former National Security Agency analyst Rocky Cole, who was interviewed for the article. ‘This stuff is way more prevalent than people think.’ The article notes that business executives are now proving to be vulnerable, including individuals with access to proprietary plans and financial data, as well as those who frequently communicate with other influential leaders in the private sector. These leaders engage in sensitive work out of the public eye, including deals that have the potential to impact financial markets."

But how could this happen? Pegasus-maker NSO Group vows it only sells spyware to whitelisted governments for counterterrorism and fighting crime. It does do that. And also other things, reportedly. So we are unsurprised to find business executives among those allegedly targeted. We think it best to assume anything digital can be accessed by anyone at any moment. Is it time to bring back communications via pen and paper? At least someone must get out from behind a desk to intercept snail mail or dead drops.

Cynthia Murrell, March 13, 2025

AI and Jobs: Tell These Folks AI Will Not Impact Their Work

March 12, 2025

The work of a real, live dinobaby. Sorry, no smart software involved. Whuff, whuff. That’s the sound of my swishing dino tail. Whuff.

I have a friend who does some translation work. She’s chugging along because of her reputation for excellent work. However, a younger person who worked with me on a project requiring Russian language skills has not fared as well. The young person lacks the reputation and the contacts with a base of clients. The older person can be as busy as she wants to be.

What’s the future of translating from one language to another for money? For the established person, smart software appears to have had zero impact. The younger person seems to be finding that smart software is getting the translation work.

I will offer my take in a moment. First, let’s look at “Turkey’s Translators Are Training the AI Tools That Will Replace Them.”

I noted this statement in the cited article:

Turkey’s sophisticated translators are moonlighting as trainers of artificial intelligence models, even as their profession shrinks with the rise of machine translations. As the models improve, these training jobs, too, may disappear.

What’s interesting is that the skilled translators are providing information to AI models. These models are definitely going to replace the humans. The trajectory is easy to project. Machines will work faster and cheaper. The humans will abandon the discipline. Then prices will go up. Those requiring translations will find themselves spending more and having fewer options. Eventually the old hands will wither. Excellent translations which capture nuance will become a type of endangered species. The snow leopard of knowledge work is with us.

I noted this statement in the article:

Book publishing, too, is transforming. Turkish publisher Dedalus announced in 2023 that it had machine-translated nine books. In 2022, Agora Books, helmed by translator Osman Akınhay, released a Turkish edition of Jean-Dominique Brierre’s Milan Kundera, une vie d’écrivain, a biography of the Czech-French novelist Milan Kundera. Akınhay, who does not know French, used Google Translate to help him in the translation, to much criticism from the industry.

What’s this mean?

  1. Jobs will be lost, and the professionals with specialist skills are going to be the buggy whip makers in a world of automobiles.
  2. The downstream impact of smart software is going to kill off companies. The Chegg legal matter illustrates how a monopoly can mindlessly erode a company. This is like a speeding semi-truck smashing love bugs on a Florida highway. The bugs don’t know what hit them, and the semi-truck is unaware and the driver is uncaring. Dead bugs? So what? See “Chegg Sues Google for Hurting Traffic with AI As It Considers Strategic Alternatives.”
  3. Data from different sources suggesting that AI will just create jobs is either misleading, public relations, or dead wrong. The Bureau of Labor Statistics data are spawning articles like “AI and Its Impact on Software Development Jobs.”

Net net: What’s emerging is one of those classic failure scenarios. Nothing big seems to go wrong. Then a collapse occurs. That’s what’s beginning to appear. Just little changes. Heed the signals? Of course not. I can hear someone saying, “That won’t happen to me.” Of course not, but cheaper and faster are good enough at this time.

Stephen E Arnold, March 12, 2025

Microsoft Sends a Signal: AI, AIn’t Working

March 11, 2025

Another post from the dinobaby. Alas, no smart software used for this essay.

The problems with Microsoft’s AI push were evident from its start in 2023. The company thought it had identified the next big thing and had the big fish on the line. Now the work was easy. Just reel in the dough.

Has it worked out for Microsoft? We know that big companies often have difficulty innovating. The enervating white board sessions which seek to answer the question, “Do we build it or buy it?” usually give way to: [a] Let’s lock it up somehow or [b] Let’s steal it because it won’t take our folks too long to knock out a me-too.

Microsoft sent a fairly loud beep-beep-beep when it began to cut back on its dependence on OpenAI. Not long ago, Microsoft trimmed some of its crazy spending for AI. Now we have the allegedly accurate information in “Microsoft Is Reportedly Plotting a Future without OpenAI.”

The write up states:

Microsoft has poured over $13 billion into the AI firm since 2019, but now it wants more control over its own models and costs. Simple enough in theory—build in-house alternatives, cut expenses, and call the shots.

Is this a surprise? No, I think it is just one more beep added to the already emitted beep-beep-beep.

Here’s my take:

  1. Narrowly focused smart software adds some useful capabilities to what I would call workflow enhancement. The narrow focus for an AI system reduces some of the wonkiness of the output. Therefore, certain tasks benefit; for example, grinding through data for a chemistry application or providing a call center operation with a good enough solution to rising costs. Broad use cases are more problematic.
  2. Humans who rely on information for a living don’t want to be caught out. This means that using smart software is an assist or a supplement. This is like an older person using a cane when walking on a senior citizens’ adventure tour.
  3. Productizing a broad use case for smart software is expensive and prone to the sort of failure rate associated with a new product or service. A good example is a self-driving auto with collision avoidance. Would you stand in front of such a vehicle, confident in the smart software’s ability to not run over you? I wouldn’t.

What’s happening at Microsoft is a reasonably predictable and understandable approach. The company wants to hedge its bets since big bucks are flowing out, not in. The firm thinks it has enough smarts to do a better job even though in my opinion this is unlikely. Remember Bob, Clippy, and Windows updates? I do.

Also, small teams believe their approach will be a winner. Big companies believe their people can row that boat faster than anyone else. I know from personal experience and observation that this is not true. But the appearance of effort and the illusion of high value work encourages the approach.

Plus, the idea that a “leadership team” can manage innovation is a powerful one. Microsoft’s leadership believes in its leadership. That’s why the company is a leader. (I love this logic.)

Net net: My hunch is that Microsoft’s AI push is a disappointment. Now the company can shift into SWAT team mode and overwhelm the problem: AI that does not pay for itself.

Will this approach work? Nope, the outcome will be good enough. That is a bit more than one can say about Apple Intelligence: seriously out of step with the Softies.

Stephen E Arnold, March 11, 2025

AI and Two Villages: A Challenge in Some Large Countries

March 10, 2025

This blog post is the work of a humanoid dino baby. If you don’t know what a dinobaby is, you are not missing anything. Ask any 80-year-old, why don’t you? We used AI to translate the original Russian into semi English and to create the illustration. Hasta la vista, human Russian translator and human artist. That’s how AI works in real life.

My team and I are wrapping up our Telegram monograph. As part of the drill, we have been monitoring some information sources in Russia. We spotted the essay “AI and Capitalism.” (Note: I am not sure the link will resolve, but you can locate it via Yandex by searching for PCNews. I apologize, but some content is tricky to locate using consumer tools.)

The “white-collar village” and the “blue-collar village” generated by You.com. Good enough.

I mention the article because it makes clear how smart software is affecting one technical professional working in a Russian government-owned telecommunications company. The author’s day-to-day work requires programming. One description of the value of smart software appears in this passage:

I work as a manager in a telecom and since last year I have been actively modifying the product line, adding AI components to each product. And I am not the only one there – the movement is going on in principle throughout the IT industry, of which we are a part… Where we have seen the payoff is replacing tree navigation with a text search bar, helping to generate text on a specific topic taking into account the concept cloud of the subject area, aggregating information from sources with different data structures, extracting a sequence of semantic actions of a person while working on a laptop, simultaneous translation with imitation of any voice, etc. The goal of all these events, as before, is to increase labor productivity. Previously, a person dug with his hands, then with a shovel, now with an excavator. Indeed, now it’s easier to ask the model for an example of code  than to spend hours searching on Stack Overflow. This seriously speeds things up.
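
To make the “replace tree navigation with a text search bar” idea concrete, here is a toy sketch of my own. A plain keyword search stands in for whatever model-backed search the author actually deployed, and the catalog entries are invented:

```python
# Toy illustration only: a flat text search over catalog entries
# replaces click-by-click navigation of a category tree.
# The data and scoring are invented; the author's telecom system
# is not described at this level of detail.

CATALOG = {
    "Broadband > Home > Fiber 500": "fiber plan, 500 Mbps, home use",
    "Broadband > Business > Fiber 1G": "fiber plan, 1 Gbps, SLA, business",
    "Mobile > Prepaid > Basic": "prepaid SIM, pay as you go",
    "Mobile > Postpaid > Unlimited": "unlimited data, contract",
}

def search(query: str, catalog: dict) -> list:
    """Rank catalog entries by how many query words they mention."""
    words = query.lower().split()
    scored = [
        (sum(w in f"{path} {desc}".lower() for w in words), path)
        for path, desc in catalog.items()
    ]
    return [path for score, path in sorted(scored, reverse=True) if score]

print(search("business fiber", CATALOG))
# One typed query replaces three or four clicks down the tree.
```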

The author then identifies three consequences of the use of AI:

  1. Training will change because “you will need to retrain for another narrow specialty several times”
  2. Education will become more expensive, but who will pay? Possibly as important: who will be able to learn?
  3. Society will change, which is a way of saying “social turmoil” ahead, in my opinion.

Here’s an okay translation of the essay’s final paragraph:

…in the medium term, the target architecture of our society will inevitably see a critical stratification into workers and educated people. Blue and white collar castes. The fence between them will be so high that films about a possible future will become a fairly accurate forecast. I really want to end up in a white-collar village in the role of a white collar worker. Scary.

What’s interesting about this person’s point of view is that AI is already changing work in the Russian Federation. The challenge will be that an allegedly “flat” social structure will be split into those who can implement smart software and those who cannot. The chatter about smart software is usually focused on which company will find a way to generate revenue from the massive investments required to create solutions that consumers and companies will buy.

What gets less attention is the apparent impact of the technology on countries which purport to make life “better” via a different system. If the author is correct, some large nation states are likely to face some significant social challenges. Not everyone can work in “a white-collar village.”

Stephen E Arnold, March 10, 2025

A French Outfit Points Out Some Issues with Starlink-Type Companies

March 10, 2025

Another one from the dinobaby. No smart software. I spotted a story on the Thales Web site, but when I went back to check a detail, it had disappeared. After a bit of poking I found a recycled version called “Thales Warns Governments Over Reliance on Starlink-Type Systems.” The story must be accurate because it is from the “real” news outfit that wants my belief in their assertion of trust. Well, what do you know about trust?

Thales, as none of the people in Harrod’s Creek knows, is a French defence, intelligence, and go-to military hardware type of outfit. Thales and Dassault Systèmes are among the world leaders in a number of cutting edge technology sectors. As a person who did some small work in France, I heard the Thales name mentioned a number of times. Thales has a core competency in electronics, military communications, and related fields.

The cited article reports:

Thales CEO Patrice Caine questioned the business model of Starlink, which he said involved frequent renewal of satellites and question marks over profitability. Without further naming Starlink, he went on to describe risks of relying on outside services for government links. “Government actors need reliability, visibility and stability,” Caine told reporters. “A player that – as we have seen from time to time – mixes up economic rationale and political motivation is not the kind that would reassure certain clients.”

I am certainly no expert in the lingo of a native French speaker using English words. I do know that the French language has a number of nuances which are difficult for a dinobaby like me to understand without saying, “Pourriez-vous répéter, s’il vous plaît?”

I noticed several things; specifically:

  • The phrase “satellite renewal.” The idea is that the useful life of a Starlink-type device is shorter than some other technologies such as those from Thales-type companies. Under the surface is the French attitude toward “fast fashion.” The idea is that cheap products are wasteful; well-made products, like a well-made suit, last a long time. Longer than a black baseball cap is how I interpreted the reference to “renewal.” I may be wrong, but this is a quite serious point underscoring the issue of engineering excellence.
  • The reference to “profitability” seems to echo news reports that Starlink itself may be on the receiving end of preferential contract awards. If those types of cozy deals go away, will the Starlink-type business generate sufficient revenue to sustain innovation, higher quality, and longer life spans? Based on my limited knowledge of things French, this is a fairly direct way of pointing out the weak business model of the Starlink-type of service.
  • The use of the words “reliability” and “stability” struck me as directing two criticisms at the Starlink-type of company. On one level the issue of corporate stability is obvious. However, “stability” applies to engineering methods as well as mental setup. Henri Bergson observed, “Think like a man of action, act like a man of thought.” I am not sure what M. Bergson would have thought about a professional wielding a chainsaw during a formal presentation.
  • The direct reference to “mixing up” reiterates the mental stability and corporate stability referents. But the killer comment, the merging of “economic rationale and political motivation,” flashes bright warning lights to some French professionals and probably would resonate with other Europeans. I wonder what Austrian government officials thought about the chainsaw performance.

Net net: Some of the actions of a Starlink-type of company have been disruptive. In game theory, “keep people guessing” is a proven tactic. Will it work in France? Unlikely. Chainsaws will not be permitted in most meetings with Thales or French agencies. The baseball cap? Probably not.

Stephen E Arnold, March 10, 2025

Attention, New MBAs in Finance: AI-gony Arrives

March 6, 2025

Another post from the dinobaby. Alas, no smart software used for this essay.

I did a couple of small jobs for a big Wall Street outfit years ago. I went to meetings, listened, and observed. To be frank, I did not do much work. There were three or four young, recent graduates of fancy schools. These individuals were similar to the colleagues I had at the big time consulting firm at which I worked earlier in my career.

Everyone was eager, and their Excel fevers were in full bloom: bright eyes, earnest expressions, and a gentle but persistent panting in these meetings. Wall Street and Wall Street-like firms in London, England, and Los Angeles, California, were quite similar. These churn outfits and deal makers shared DNA or some type of quantum entanglement.

These “analysts” or “associates” gathered data and pumped it into Excel spreadsheets set up by colleagues or technical specialists. Macros processed the data and spit out tables, charts, and graphs. These were written up as memos or reports for those with big sticks, the senior deciders.

My point is that the “work” was done by cannon fodder from well-known universities’ business or finance programs.

Well, bad news, future BMW buyers, an outfit called PublicView.ai may have curtailed your dreams of a six figure bonus in January or whatever month is the big momma at your firm. You can take a look at example outputs and sign up free at https://www.publicview.ai/.

If the smart product works as advertised, a category of financial work is going to be reshaped. It is possible that fewer analyst jobs will become available as the gathering and importing are converted to automated workflows. The meetings and the panting will become fewer and farther between.

I don’t have data about how many worker bees power the Wall Street type outfits. I showed up, delivered information when queried, departed, and sent a bill for my time and travel. The financial hive and its quietly buzzing drones plugged away 10 or more hours a day, mostly six days a week.

The PublicView.ai FAQ page answers some basic questions; for example, “Can I perform quantitative analysis on the files?” The answer is:

Yes, you can ask Publicview to perform computations on the files using Python code. It can create graphs, charts, tables and more.
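
To make that concrete, here is a minimal sketch of the kind of grunt work such a tool presumably automates: load a filing extract, compute a growth table, plot a chart. The file name, columns, and outputs are hypothetical, not PublicView.ai’s actual code:

```python
# Hypothetical example of automated analyst grunt work.
# "revenue_extract.csv" and its columns are invented for this sketch.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("revenue_extract.csv")  # columns: quarter, revenue_usd_m
df["qoq_growth_pct"] = df["revenue_usd_m"].pct_change() * 100

print(df.to_string(index=False))  # the table an associate would paste into a memo

df.plot(x="quarter", y="revenue_usd_m", kind="bar", legend=False)
plt.ylabel("Revenue ($M)")
plt.title("Quarterly revenue (hypothetical data)")
plt.tight_layout()
plt.savefig("revenue_chart.png")  # the chart for the deck
```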

This is good news for the newly minted MBAs with programming skills. The bad news is that repeatable questions can be converted to workflows.

Let’s assume this product is good enough. There will be no overnight change in the work for existing employees. But slowly the senior managers will get the bright idea of hiring MBAs with different skills, possibly on a contract basis. Then the work will begin to shift to software. At some point in the not-too-distant future, jobs for humans will be eliminated.

The question is, “How quickly can new hires make themselves into higher value employees in what are the early days of smart software?”

I suggest getting on a fast horse and galloping forward. Donkeys with Excel will fall behind. Software does not require health care, ever increasing inducements, and vacations. What’s interesting is that at some point many “analyst” jobs, not just in finance, will be handled by “good enough” smart software.

Remember: a 51 percent win rate from code that does not hang out with a latte will strike some in carpetland as a no-brainer. The good news is that MBAs don’t have a graduate degree in 18th century buttons or the Brutalist movement in architecture.

Stephen E Arnold, March 6, 2025

Lawyers and High School Students Cut Corners

March 6, 2025

Cost-cutting lawyers beware: using AI in your practice may make it tough to buy a new BMW this quarter. TechSpot reports, "Lawyer Faces $15,000 Fine for Using Fake AI-Generated Cases in Court Filing." Writer Rob Thubron tells us:

"When representing HooserVac LLC in a lawsuit over its retirement fund in October 2024, Indiana attorney Rafael Ramirez included case citations in three separate briefs. The court could not locate these cases as they had been fabricated by ChatGPT."

Yes, ChatGPT completely invented precedents to support Ramirez’ case. Unsurprisingly, the court took issue with this:

"In December, US Magistrate Judge for the Southern District of Indiana Mark J. Dinsmore ordered Ramirez to appear in court and show cause as to why he shouldn’t be sanctioned for the errors. ‘Transposing numbers in a citation, getting the date wrong, or misspelling a party’s name is an error,’ the judge wrote. ‘Citing to a case that simply does not exist is something else altogether. Mr Ramirez offers no hint of an explanation for how a case citation made up out of whole cloth ended up in his brief. The most obvious explanation is that Mr Ramirez used an AI-generative tool to aid in drafting his brief and failed to check the citations therein before filing it.’ Ramirez admitted that he used generative AI, but insisted he did not realize the cases weren’t real as he was unaware that AI could generate fictitious cases and citations."

Unaware? Perhaps he had not heard about the similar case in 2023. Then again, maybe he had. Ramirez told the court he had tried to verify the cases were real—by asking ChatGPT itself (which replied in the affirmative). But that query falls woefully short of the due diligence required by Federal Rule of Civil Procedure 11, Thubron notes. As the judge who ultimately did sanction the firm observed, Ramirez would have noticed the cases were fiction had his attempt to verify them ventured beyond the ChatGPT UI.
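
For what it’s worth, checking whether a citation exists takes only a few lines against a public case-law index rather than a chatbot. A rough sketch, assuming CourtListener’s search API (the endpoint and parameters are from memory; verify against the current documentation):

```python
# Sketch: look a citation up in a public case-law index instead of
# asking the chatbot that produced it. The CourtListener endpoint and
# parameters below are assumptions; consult the current API docs.
import requests

def citation_exists(case_name: str) -> bool:
    resp = requests.get(
        "https://www.courtlistener.com/api/rest/v4/search/",
        params={"q": f'"{case_name}"', "type": "o"},  # "o" = opinions
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("count", 0) > 0

print(citation_exists("Brown v. Board of Education"))  # a real case
```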

For his negligence, Ramirez may face disciplinary action beyond the $15,000 in fines. We are told he continues to use AI tools but has taken courses on their responsible use in the practice of law. Perhaps he should have done that before building a case on a chatbot’s hallucinations.

Cynthia Murrell, March 6, 2025
