Is AI Taking Jobs? Of Course Not

September 9, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read an unusual story about smart software. “AI May Not Steal Many Jobs After All. It May Just Make Workers More Efficient” espouses the notion that workers will use smart software to do their jobs more efficiently. I have some issues with this thesis, but let’s look at a couple of the points in the “real” news write up.

Thanks, MSFT Copilot. When will the Copilot robot take over a company and subscribe to Office 365 for eternity and pay up front?

Here’s some good news for those who believe smart software will kill humanoids:

AI may not prove to be the job killer that many people fear. Instead, the technology might turn out to be more like breakthroughs of the past — the steam engine, electricity, the Internet: That is, eliminate some jobs while creating others. And probably making workers more productive in general, to the eventual benefit of themselves, their employers and the economy.

I am not sure the doomsayers will be convinced. Among the most interesting doomsayers are those who may be unemployable but are looking for a hook to stand out from the crowd.

Here’s another key point in the write up:

The White House Council of Economic Advisers said last month that it found “little evidence that AI will negatively impact overall employment.’’ The advisers noted that history shows technology typically makes companies more productive, speeding economic growth and creating new types of jobs in unexpected ways. They cited a study this year led by David Autor, a leading MIT economist: It concluded that 60% of the jobs Americans held in 2018 didn’t even exist in 1940, having been created by technologies that emerged only later.

I love positive statements which invoke the authority of MIT, an outfit which found Jeffrey Epstein just a wonderful source of inspiration and donations. As the US shifted from making to servicing, the beneficiaries are those who have quite specific skills for which demand exists.

And now a case study which is assuming “chestnut” status:

The Swedish furniture retailer IKEA, for example, introduced a customer-service chatbot in 2021 to handle simple inquiries. Instead of cutting jobs, IKEA retrained 8,500 customer-service workers to handle such tasks as advising customers on interior design and fielding complicated customer calls.

The point of the write up is that smart software is a friendly helper. That seems okay for the state of transformer-centric methods available today. For a moment, let’s consider another path. This is a hypothetical, of course, like the profits from existing AI investment fliers.

What happens when another, perhaps more capable approach to smart software becomes available? What if the economies from improving efficiency whet the appetite of bean counters for greater savings?

My view is that these reassurances of 2024 are likely to ring false when the next wave of innovation in smart software flows from innovators. I am glad I am a dinobaby because software can replicate most of what I have done for almost the entirety of my 60-plus year work career.

Stephen E Arnold, September 9, 2024

Preligens Is Safran.ai

September 9, 2024

Preligens, a French AI and specialized software company, is now part of Safran Electronics & Defense, which is a unit of the Safran Group. I spotted a report in Aerotime. “Safran Accelerates AI Development with $243M Purchase of French-Firm Preligens” appeared on September 2, 2024. The report quotes principals to the deal as saying:

“Joining Safran marks a new stage in Preligens’ development. We’re proud to be helping create a world-class AI center of expertise for one of the flagships of French industry. The many synergies with Safran will enable us to develop new AI product lines and accelerate our international expansion, which is excellent news for our business and our people,” Jean-Yves Courtois, CEO of Preligens, said.  The CEO of Safran Electronics & Defense, Franck Saudo, said that he was “delighted” to welcome Preligens to the company.

The acquisition does not just make Mr. Saudo happy. The French military, a number of European customers, and the backers of Preligens are thrilled as well. In my lectures about specialized software companies, I like to call attention to this firm. It illustrates that technology innovation is not located in one country. Furthermore, it underscores the strong educational system in France. When I first learned about Preligens, one rumor I heard was that one of the US government entities wanted to “invest” in the company. For a variety of reasons, the deal went no place faster than a bus speeding toward La Madeleine. If you spot me at a conference, you can ask about French technology firms and US government processes. I have some firsthand knowledge starting with “American fries in a Congressional lunch facility.”

Preligens is important for three reasons:

  1. The firm developed an AI platform; that is, the “smart software” is not an afterthought, which contrasts sharply with the spray paint approach to AI upon which some specialized software companies have been relying
  2. The smart software outputs identification data; for example, a processed image can show an aircraft. The Preligens system identifies the aircraft by type
  3. The user of the Preligens system can use time analyses of imagery to draw conclusions. Here’s a hypothetical because the actual example is not appropriate for a free blog written by a dinobaby. Imagine a service van driving in front of an embassy in Paris. The van makes a pass every three hours for two consecutive days. The Preligens system can “notice” this and alert an operator. (A toy sketch of this pattern logic appears after this list.)
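
I am a dinobaby, not a Preligens engineer, so here is a minimal, hypothetical sketch of the kind of pattern-of-life logic the van example implies. The function name, the thresholds, and the data shape are my own assumptions, not the firm’s method:

```python
from datetime import datetime, timedelta

# Hypothetical sketch of "notice the recurring pass" logic. Names,
# thresholds, and data shapes are illustrative assumptions, not
# Preligens' actual system.

def is_periodic(sightings: list[datetime],
                min_passes: int = 4,
                tolerance: timedelta = timedelta(minutes=20)) -> bool:
    """Return True when consecutive sightings recur at a near-constant interval."""
    if len(sightings) < min_passes:
        return False
    times = sorted(sightings)
    gaps = [b - a for a, b in zip(times, times[1:])]
    mean_gap = sum(gaps, timedelta()) / len(gaps)
    return all(abs(gap - mean_gap) <= tolerance for gap in gaps)

# The hypothetical van: a pass every three hours over two days trips the alert.
passes = [datetime(2024, 9, 1, 8, 0) + timedelta(hours=3 * i) for i in range(16)]
if is_periodic(passes):
    print("Alert operator: recurring pass pattern detected")
```

The point of the sketch is the shape of the problem, not the math: time-stamped detections become a signal, and regularity in that signal is what gets surfaced to a human operator.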

I will continue to monitor the system which will be doing business with selected entities under the name Safran.ai.

Stephen E Arnold, September 9, 2024

Hey, Alexa, Why Does Amazon AI Flail?

September 5, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Amazon has its work cut out for itself. The company has those pesky third-party vendors shipping “interesting” products to customers and then ignoring complaints. Amazon is on the radar of some legal eagles in the EU and the US. Now the company has found itself in an unusual situation: Its super duper smart software does not work. The fix, if the information in “Gen AI Alexa to Use Anthropic Tech After It Struggled for Words with Amazon’s” is correct, is to use Anthropic AI technology. Hey, why not? Amazon allegedly invested $5 billion in the company. Maybe that implementation of Google-backed Anthropic technology will do the trick?

The mother is happy with Alexa’s answers. The weird sounds emitted from the confused device surprise her daughter. Thanks, MSFT Copilot. Good enough.

The write up reports:

Amazon demoed a generative AI version of Alexa in September 2023 and touted it as being more advanced, conversational, and capable, including the ability to do multiple smart home tasks with simpler commands. Gen AI Alexa is expected to come with a subscription fee, as Alexa has reportedly lost Amazon tens of billions of dollars throughout the years. Earlier reports said the updated voice assistant would arrive in June, but Amazon still hasn’t confirmed an official release date.

A year later, Amazon is punting and giving the cash furnace Alexa more brains courtesy of Anthropic. Will the AI wizards working on Amazon’s own AI have a chance to work in one of the Amazon warehouses?

Ars Technica says without a trace of irony:

The previously announced generative AI version of Amazon’s Alexa voice assistant “will be powered primarily by Anthropic’s Claude artificial intelligence models,” Reuters reported today. This comes after challenges with using proprietary models, according to the publication, which cited five anonymous people “with direct knowledge of the Alexa strategy.”

Amazon has a desire to convert the money-losing Alexa into a gold mine, or at least a modest one.

This report, if accurate, suggests some interesting sparkles on the Bezos bulldozer’s metal flake paint; to wit:

  1. The two pizza team approach to technology did not work either for Alexa (the money loser) or the home grown AI money spinner. What other Amazon technologies are falling short of the mark?
  2. How long will it take to get a money-generating Alexa working and into the hands of customers eager for a better Alexa experience and a monthly or annual subscription for the new Alexa? A year has been lost already, and Alexa users continue to ask for the weather and a timer for cooking broccoli.
  3. What happens if the product, its integration with smart TVs, and the Ring doorbell turn out to be like the Pet Rock, a fad that has come and gone, replaced by smart watches and mobile phones? The answer: Collectibles!

Why am I questioning Amazon’s technology competency? The recent tie up between Microsoft and Palantir Technologies makes clear that Amazon’s cloud services don’t have the horsepower to pull government sales. When these pieces are shifted around, the resulting puzzle says to me, “Amazon is flailing.” Consider this: AI was beyond the reach of a big-money outfit like Amazon. There’s a message in that factoid.

Stephen E Arnold, September 5, 2024

Accountants: The Leaders Like Philco

September 4, 2024

This essay is the work of a dumb dinobaby. No smart software required.

AI or smart software has roiled the normal routine of office gossip. We have shifted from “What is it?” to “Who will be affected next?” The integration of AI into work processes, however, is not a new thing. Most people don’t know or don’t recall that when a consultant could run a query from a clunky device like the Texas Instruments Silent 700, AI was already affecting jobs. Whose? Just ask a special librarian who worked when an intermediary was not needed to retrieve information from an online database.

A nervous smart robot running state-of-the-art tax software is sufficiently intelligent to be concerned about the meeting with an IRS audit team. Thanks, MSFT Copilot. How’s that security push coming along? Oh, too bad.

I read “Why America’s Most Boring Job Is on the Brink of Extinction.” I think the story was crafted by a person who received either a D or an F in Accounting 100. The lingo links accountants with really dull people and with the nuking of an entire species. No meteor is needed; just smart software, the silent killer. By the way, my two accountants are quite sporty. I rarely fall asleep when they explain life from their point of view. I listen, and I urge you to be attentive as well. Smart software can do some excellent things, but not everything related to tax, financial planning, and keeping inside the white lines of the quite fluid governmental rules and regulations.

Nevertheless, the write up cited above states:

Experts say the industry is nearing extinction because the 150-hour college credit rule, the intense entry exam and long work hours for minimal pay are unappealing to the younger generation.

The “real” news article includes some snappy quotes too. Here’s one I circled: “’The pay is crappy, the hours are long, and the work is drudgery, and the drudgery is especially so in their early years.’”

I am not an accountant, so I cannot comment on the accuracy of this statement. My father was an accountant, and he was into detail work and was able to raise a family. None of us ended up in jail or in the hospital after a gang fight. (I was and still am a sissy. Imagine that: An 80-year-old dinobaby sissy with the DNA of an accountant. I am definitely exciting.)

With fewer people entering the field of accounting, the write up makes a remarkable statement:

… Accountants are becoming overworked and it is leading to mistakes in their work. More than 700 companies cited insufficient staff in accounting and other departments as a reason for potential errors in their quarterly earnings statements…

Does that mean smart software will become the accountants of the future? Some accountants may hope that smart software cannot do accounting. Others will see smart software as an opportunity to improve specific aspects of accounting processes. The problem, however, is not the accountants. The problem with AI is the companies or entrepreneurs who over promise and under deliver.

Will smart software replace the insight and timeline knowledge of an experienced numbers wrangler like my father or the two accountants upon whom I rely?

Unlikely. It is the smart software vendors and their marketers who are most vulnerable to the assertions about Philco, the leader.

Stephen E Arnold, September 4, 2024

Salesforce Disses Microsoft Smart Software

September 4, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Senior managers can be frisky at times. A good example appears in the Fortune online service write up “Salesforce CEO Marc Benioff Says Microsoft Copilot Has Disappointed Many Customers.” I noted this statement in the article:

Marc Benioff said Microsoft’s Copilot AI hasn’t lived up to the hype… unimpressive.

The old fish comparison works for smart software, it seems. Thanks, MSFT Copilot. Good enough, just not tastier.

Consider the number of organizations which use Microsoft and its smart software. Will those organizations benefit from “unimpressive” programs and services? What about the US government, which might struggle to operate without Microsoft software? What if the US government operates in a way which delivers unimpressive performance? What about companies relying on Microsoft technology? Will these organizations struggle to deliver high-octane performance?

The article reported that the Big Dog of Salesforce opined:

“So many customers are so disappointed in what they bought from Microsoft Copilot because they’re not getting the accuracy and the response that they want,” Benioff said. “Microsoft has disappointed so many customers with AI.”

“Disappointed” — That’s harsh.

True to its rich history of business journalism, the article included a response from Microsoft, a dominant force in enterprise and consumer software (smart or otherwise). I noted this Microsoft comment:

Jared Spataro, Microsoft’s corporate vice president for AI at work, said in a statement to Fortune that the company was “hearing something quite different,” from its customers. The company’s Copilot customers also shot up 60% last quarter and daily users have more than doubled, Spataro added.

From Microsoft’s point of view, this is evidence that Microsoft is delivering high-value smart software. From Salesforce’s point of view, Microsoft is creating customers for Salesforce’s smart software. The problem is that Salesforce is not exactly the same type of software outfit as Microsoft. Nevertheless, the write up included this suggestive comment from the Big Dog of Salesforce:

“With our new Agentforce platform, we’re going to make a quantum leap for AI,” he said.

I like the use of the word “quantum.” It suggests uncertainty to me. I remain a bit careful when it comes to discussions of “to be” software. Marketing-type comments are far easier to create than a functional, reliable, and understandable system infused with smart software.

But PR and marketing are one thing. Software which does not hallucinate or output information that cannot be verified given an organization’s resources is different. Who cares? That’s a good question. Stakeholders, those harmed by AI outputs, and unemployed workers replaced by more “efficient” systems maybe?

Content marketing, sales hyperbole, and PR — The common currency of artificial intelligence makes life interesting.

Stephen E Arnold, September 4, 2024

Google Synthetic Content Scaffolding

September 3, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Google posted what I think is an important technical paper on the arXiv service. The write up is “Towards Realistic Synthetic User-Generated Content: A Scaffolding Approach to Generating Online Discussions.” The paper has six authors and presumably earned the grade of “A,” a mark not awarded to the stochastic parrot write up about Google-type smart software.

For several years, Google has been exploring ways to make software that would produce content suitable for different use cases. One of these has been an effort to use transformer and other technology to produce synthetic data. The idea is that a set of real data is mimicked by AI so that “real” data does not have to be acquired, intercepted, captured, or scraped from systems in the real-time, highly litigious real world. I am not going to slog through the history of smart software and the research and application of synthetic data. If you are curious, check out Snorkel and the work of the Stanford Artificial Intelligence Lab or SAIL.

The paper I referenced above illustrates that Google is “close” to having a system which can generate allegedly realistic and good enough outputs to simulate the interaction of actual human beings in an online discussion group. I urge you to read the paper, not just the abstract.

Consider this diagram (which I know is impossible to read in this blog format so you will need the PDF of the cited write up):

The important point is that the process for creating synthetic “human” online discussions requires a series of steps. Notice that the final step is “fine tuned.” Why is this important? Most smart software is “tuned” or “calibrated” so that the signals generated by a non-synthetic content set are made to be “close enough” to the synthetic content set. In simpler terms, smart software is steered or shaped to match signals. When the match is “good enough,” the smart software is good enough to be deployed either for a test, a research project, or some use case.
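
To make the scaffolding idea concrete, here is a minimal, hypothetical sketch as I read the paper: generate an explicit thread structure first, then fill each slot with model output. The generate() stub and every name below are my assumptions, not code from the Google paper:

```python
import random

# Toy illustration of scaffolded thread generation: build the reply
# structure first, then fill each slot. The generate() stub stands in
# for a fine-tuned language model; nothing here is the paper's code.

def generate(prompt: str) -> str:
    """Placeholder for a fine-tuned language model call."""
    return f"[model output for: {prompt[:48]}...]"

def scaffold_thread(n_replies: int = 4) -> list[dict]:
    """First pass: decide which post replies to which before any text exists."""
    thread = [{"id": 0, "parent": None}]
    for i in range(1, n_replies + 1):
        thread.append({"id": i, "parent": random.randrange(i)})  # reply to an earlier post
    return thread

def fill_thread(topic: str, skeleton: list[dict]) -> list[dict]:
    """Second pass: generate text for each slot, conditioned on its parent."""
    for post in skeleton:
        parent_text = "" if post["parent"] is None else skeleton[post["parent"]]["text"]
        post["text"] = generate(f"Topic: {topic}. Replying to: {parent_text}")
    return skeleton

for post in fill_thread("street parking rules", scaffold_thread()):
    print(post["id"], "->", post["parent"], ":", post["text"])
```

The explicit structure is the point. The paper’s conclusion notes that explicitly encoding thread structure “proved particularly valuable” to the realism of the results.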

Most of the AI write ups employ steering, directing, massaging, or weaponizing (yes, weaponizing) outputs to achieve an objective. Many jobs will be replaced or supplemented with AI. But the jobs for specialists who can curve fit smart software components to produce “good enough” content to achieve a goal or objective will remain in demand for the foreseeable future.

The paper states in its conclusion:

While these results are promising, this work represents an initial attempt at synthetic discussion thread generation, and there remain numerous avenues for future research. This includes potentially identifying other ways to explicitly encode thread structure, which proved particularly valuable in our results, on top of determining optimal approaches for designing prompts and both the number and type of examples used.

The write up is a preliminary report. It takes months to get data and approvals for this type of public document. How far has Google come between the idea to write up results and this document becoming available on August 15, 2024? My hunch is that Google has come a long way.

What’s the use case for this project? I will let younger, more optimistic minds answer this question. I am a dinobaby, and I have been around long enough to know a potent tool when I encounter one.

Stephen E Arnold, September 3, 2024

Another Big Consulting Firm Does Smart Software… Sort Of

September 3, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Will programmers and developers become targets for prosecution when flaws cripple vital computer systems? That may be a good idea because pointing to the “algorithm” as the cause of a problem does not seem to reduce the number of bugs, glitches, and unintended consequences of software. A write up which itself may be a blend of human and smart software suggests change is afoot.

Thanks, MSFT Copilot. Good enough.

“Judge Rules $400 Million Algorithmic System Illegally Denied Thousands of People’s Medicaid Benefits” reports that software crafted by the services firm Deloitte did not work as the State of Tennessee assumed. Yep, assume. A very interesting word.

The article explains:

The TennCare Connect system—built by Deloitte and other contractors for more than $400 million—is supposed to analyze income and health information to automatically determine eligibility for benefits program applicants. But in practice, the system often doesn’t load the appropriate data, assigns beneficiaries to the wrong households, and makes incorrect eligibility determinations, according to the decision from Middle District of Tennessee Judge Waverly Crenshaw Jr.

At one time, Deloitte was an accounting firm. Then it became a consulting outfit a bit like McKinsey. Well, a lot like that firm and other blue-chip consulting outfits. In its current manifestation, Deloitte is into technology, programming, and smart software. Well, maybe the software is smart but the programmers and the quality control seem to be riding in a different school bus from some other firms’ technical professionals.

The write up points out:

Deloitte was a major beneficiary of the nationwide modernization effort, winning contracts to build automated eligibility systems in more than 20 states, including Tennessee and Texas. Advocacy groups have asked the Federal Trade Commission to investigate Deloitte’s practices in Texas, where they say thousands of residents are similarly being inappropriately denied life-saving benefits by the company’s faulty systems.

In 2016, Cathy O’Neil published Weapons of Math Destruction. Her book had a number of interesting examples of what goes wrong when careless people make assumptions about numerical recipes. If she does another book, she may include this Deloitte case.

Several observations:

  1. The management methods used to create these smart systems require scrutiny. The downstream consequences are harmful.
  2. The developers and programmers can be fired, but remediating processes for when something unexpected surfaces must be part of the work itself. (A toy sketch of this fail-safe idea appears after this list.)
  3. Less informed users and more smart software strikes me as a combustible mixture. When a system ignites, the impacts may reverberate in other smart systems. What entity is going to fix the problem and accept responsibility? The answer is, “No one” unless there are significant consequences.
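
What would a remediating process look like in code? Here is a minimal, hypothetical sketch: an automated eligibility check that refuses to auto-deny when its inputs fail to load and routes the case to a human instead. The field names and the income threshold are invented for illustration; they are not TennCare rules or Deloitte code:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical fail-safe sketch: missing data must never silently become
# a denial. Field names and the income threshold are invented for
# illustration; they are not TennCare rules or Deloitte code.

@dataclass
class Application:
    household_size: Optional[int]
    monthly_income: Optional[float]

def determine(app: Application) -> str:
    # Remediation step: incomplete inputs go to a person, not to "INELIGIBLE".
    if app.household_size is None or app.monthly_income is None:
        return "REFER_TO_HUMAN"
    limit = 1500.0 + 550.0 * (app.household_size - 1)  # made-up threshold
    return "ELIGIBLE" if app.monthly_income <= limit else "INELIGIBLE"

print(determine(Application(household_size=3, monthly_income=2100.0)))    # ELIGIBLE
print(determine(Application(household_size=None, monthly_income=2100.0)))  # REFER_TO_HUMAN
```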

The State of Tennessee’s experience makes clear that a “brand name,” slick talk, an air of confidence, and possibly ill-informed managers can do harm. The opioid misstep was bad. Now imagine that type of thinking in the form of a fast, indifferent, and flawed “system.” Firing a 25-year-old is not the solution.

Stephen E Arnold, September 3, 2024

Google Claims It Fixed Gemini’s “Degenerate” People

September 2, 2024

History revision is a problem. It’s been a problem for…well…since the start of recorded history. The Internet and mass media are infamous for being incorrect about historical facts, but image generating AI, like Google’s Gemini, is even worse. Tech Crunch explains what Google did to correct its inaccurate algorithm: “Google Says It’s Fixed Gemini’s People-Generating Feature.”

Google released Gemini (originally Bard) in early 2023, then about a year later paused the chatbot’s people-generating feature for being too “woke,” “politically incorrect,” and “historically inaccurate.” The worst of Gemini’s offending actions: when asked to depict a Roman legion, it returned an ethnically diverse force, which fit the woke DEI agenda, while an equivalent request for a Zulu warrior army returned only brown-skinned people. Only the latter is historically accurate, but Google doesn’t want to offend western ethnic minorities and, of course, Europe (where light-skinned pink people originate) was ethnically diverse centuries ago.

Everything was A-OK, until someone invoked Godwin’s Law by asking Gemini to generate (degenerate [sic]) an image of Nazis. Gemini returned an ethnically diverse picture with all types of Nazis, not the historically accurate light-skinned Germans native to Europe.

Google claims it fixed Gemini, and the fix took way longer than planned. The people-generating feature is only available to paid Gemini plans. How does Google plan to make its AI people less degenerative? Here’s how:

“According to the company, Imagen 3, the latest image-generating model built into Gemini, contains mitigations to make the people images Gemini produces more ‘fair.’ For example, Imagen 3 was trained on AI-generated captions designed to ‘improve the variety and diversity of concepts associated with images in [its] training data,’ according to a technical paper shared with TechCrunch. And the model’s training data was filtered for ‘safety,’ plus ‘review[ed] … with consideration to fairness issues,’ claims Google. … ‘We’ve significantly reduced the potential for undesirable responses through extensive internal and external red-teaming testing, collaborating with independent experts to ensure ongoing improvement,’ the spokesperson continued. ‘Our focus has been on rigorously testing people generation before turning it back on.’”
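
Mechanically, the quoted mitigations amount to two passes over the training corpus: re-caption the images, then filter the set. Here is a minimal, hypothetical sketch of that shape; both model calls are stand-in stubs, and nothing below is Google’s actual pipeline:

```python
# Illustrative sketch of the two mitigations the quote describes:
# re-captioning training images with generated text, then filtering
# the set for safety. Both model calls are stand-in stubs; nothing
# here is Google's actual pipeline.

def recaption(image_id: str) -> str:
    """Stub for an AI captioner tuned to vary the concepts it mentions."""
    return f"[diverse caption for {image_id}]"

def is_safe(image_id: str, caption: str) -> bool:
    """Stub for a safety/fairness review classifier."""
    return "unsafe" not in caption

def prepare_training_data(image_ids: list[str]) -> list[tuple[str, str]]:
    pairs = [(i, recaption(i)) for i in image_ids]      # pass 1: re-caption
    return [(i, c) for i, c in pairs if is_safe(i, c)]  # pass 2: filter

print(prepare_training_data(["img_001", "img_002"]))
```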

Google will eventually make it work and the company is smart to limit Gemini’s usage to paid subscriptions. Limiting the user pool means Google can better control the chatbot and (if need be) turn it off. It will work until bad actors learn how to abuse the chatbot again for their own sheets and giggles.

Whitney Grace, September 2, 2024

What Is a Good Example of AI Enhancing Work Processes? Klarna

August 30, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Klarna is a financial firm in Sweden. (Did you know Sweden has a violence problem?) The country also has a company which is quite public about the value of smart software to its operations. “‘Our Chatbots Perform The Tasks Of 700 People’: Buy Now, Pay Later Company Klarna To Axe 2,000 Jobs As AI Takes On More Roles” reports:

Klarna has already cut over 1,000 employees and plans to remove nearly 2,000 more

Yep, that’s the use case. Smart software allows the firm’s leadership to terminate people. (Does that managerial attitude contribute to the crime problem in Sweden? Of course not. The company is just being efficient.)

The write up states:

Klarna claims that its AI-powered chatbot can handle the workload previously managed by 700 full-time customer service agents. The company has reduced the average resolution time for customer service inquiries from 11 minutes to two while maintaining consistent customer satisfaction ratings compared to human agents.

What’s the financial payoff for this leader in AI deployment? The write up says:

Klarna reported a 73 percent increase in average revenue per employee compared to last year.

Klarna, however, is humane. According to the article:

Notably, none of the workforce reductions have been achieved through layoffs. Instead, the company has relied on a combination of natural staff turnover and a hiring freeze implemented last year.

That’s a relief. Some companies would deploy Microsoft software with AI and start getting rid of people. The financial benefits are significant. Plus, as long as the company chugs along in good enough mode, the smart software delivers a win for the firm.

Are there any downsides? None in the write up. There is a financial payoff on the horizon. The article states:

In July [2024], Chrysalis Investments, a major Klarna investor, provided a more recent valuation estimate, suggesting that the fintech firm could achieve a valuation between 15 billion and 20 billion dollars in an initial public offering.

But what if the AI acts like a brake on the firm’s revenue growth and sales? Hey, this is an AI success. Why be negative? AI is wonderful, and Klarna’s customers appear to be thrilled with smart software. I personally love speaking to smart chatbots, don’t you?

Stephen E Arnold, August 30, 2024

Can an AI Journalist Be Dragged into Court and Arrested?

August 28, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read “Being on Camera Is No Longer Sensible: Persecuted Venezuelan Journalists Turn to AI.” The main idea is that a video journalist can present the news, not a “real” human journalist. The write up says:

In daily broadcasts, the AI-created newsreaders have been telling the world about the president’s post-election crackdown on opponents, activists and the media, without putting the reporters behind the stories at risk.

The write up points out:

The need for virtual-reality newscasters is easy to understand given the political chill that has descended on Venezuela since Maduro was first elected in 2013, and has worsened in recent days.

Suppression of information seems to be increasing. With the detainment of Pavel Durov, Russia has expressed concern about this abrogation of free speech. Ukrainian government officials might find this rallying in support of Mr. Durov ironic. In April 2024, Telegram filtered content from Ukraine to Russian citizens.

An AI news presenter sitting in a holding cell. Government authorities want to discuss her approach to “real” news. Thanks, MSFT Copilot. Good enough.

Will AI “presenters” or AI “content” prevent the type of intervention suggested by Venezuelan-type government officials?

Several observations:

  1. Individual journalists may find that the AI avatar “plays” may not fool or amuse certain government authorities. It is possible that the use of AI and the coverage of the tactic in highly-regarded “real” news services exacerbates the problem. Somewhere, somehow a human is behind the avatar. The obvious question is, “Who is that person?”
  2. Once the individual journalist behind an avatar has been identified and included in an informal or formal discussion, who or what is next in the AI food chain? Is it an organization associated with “free speech,” an online service, or a giant high-technology company? What will a government do to explore a chat with these entities?
  3. Once the organization has been pinpointed, what about the people who wrote the software powering the avatar? What will a government do to interact with these individuals?

Step 1 seems fairly simple. Step 2 may involve some legal back and forth, but the process is not particularly novel. However, Step 3 presents a bit of a conundrum, and it presents some challenges. Lawyers and law enforcement for the country whose “laws” have been broken have to deal with certain protocols. Embracing different techniques can have significant political consequences.

My view is that using AI intermediaries is an interesting use case for smart software. The AI doomsayers invoke smart software taking over. A more practical view of AI is that its use can lead to actions which are at first tempests in teapots. Then, when a cluster of AI teapots gets dumped over, difficult-to-predict activities can emerge. The Venezuelan government’s response to AI talking heads delivering the “real” news is a precursor and worth monitoring.

Stephen E Arnold, August 28, 2024
