Interesting Observations: Do These Apply to Technology Is a Problem Solver Thinking?

February 16, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

I read an interesting essay by Nat Eliason, an entity unknown previously to me. “A Map Is Not a Blueprint: Why Fixing Nature Fails.” is a short collection of the way human thought processes create some quite spectacular problems. His examples include weight loss compounds like Ozempic, transfats, and the once-trendy solution to mental issues, the lobotomy.

Humans can generate a map of a “territory” or a problem space. Then humans dig in and try to make sense of their representation. The problem is that humans may approach a problem and get the solution wrong. No surprise there. One of the engines of innovation is coming up with a solution to a problem created by something incorrectly interpreted. A recent example is the befuddlement of Mark Zuckerberg when a member of the Senate committee questioning him about his company suggested that the quite wealthy entrepreneur had blood on his hands. No wonder he apologized for creating a service that has the remarkable power of bringing people closer together, well, sometimes.

image

Immature home economics students can apologize for a cooking disaster. Techno feudalists may have a more difficult time making amends. But there are lawyers and lobbyists ready and willing to lend a hand. Thanks, MSFT Copilot Bing thing. Good enough.

What I found interesting in Mr. Eliason’s essay was the model or mental road map humans create (consciously or unconsciously) to solve a problem. I am thinking in terms of social media, AI generated results for a peer-reviewed paper, and Web search systems which filter information to generate a pre-designed frame for certain topics.

Here’s the list of the five steps in the process creating interesting challenges for those engaged in and affected by technology today:

  1. Smart people see a problem, study it, and identify options for responding.
  2. The operations are analyzed and then boiled down to potential remediations.
  3. “Using our map of the process we create a solution to the problem.”
  4. The solution works. The downstream issues are not identified or anticipated in a thorough manner.
  5. New problems emerge as a consequence of having a lousy mental map of the original problem.

Interesting. Creating a solution to a technology-sparked problem without consequences may be one key to success. “I had no idea” or “I’m a sorry” makes everything better.

Stephen E Arnold, February 16, 2024

Universities and Innovation: Clever Financial Plays May Help Big Companies, Not Students

February 7, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

I read an interesting essay in The Economist (a newspaper to boot) titled “Universities Are Failing to Boost Economic Growth.” The write up contained some facts anchored in dinobaby time; for example, “In the 1960s the research and development (R&D) unit of DuPont, a chemicals company, published more articles in the Journal of the American Chemical Society than the Massachusetts Institute of Technology and Caltech combined.”

image

A successful academic who exists in a revolving door between successful corporate employment and prestigious academic positions innovate with [a] a YouTube program, [b] sponsors who manufacture interesting products, and [c] taking liberties with the idea of reproducible results from his or her research. Thanks, MSFT Copilot Bing thing. Getting more invasive today, right?

I did not know that. I recall, however, that my former boss at Booz, Allen & Hamilton in the mid-1970s had me and couple of other compliant worker bees work on a project to update a big-time report about innovation. My recollection is that our interviews with universities were less productive than conversations held at a number of leading companies around the world. The gulf between university research departments had yet to morph into what were later called “technology transfer departments.” Over the years, as the Economist newspaper points out:

The golden age of the corporate lab then came to an end when competition policy loosened in the 1970s and 1980s. At the same time, growth in university research convinced many bosses that they no longer needed to spend money on their own. Today only a few firms, in big tech and pharma, offer anything comparable to the DuPonts of the past.

The shift, from my point of view, was that big companies could shift costs, outsource research, and cut themselves free from the wonky wizards that one could find wandering around the Cherry Hill Mall near the now-gone Bell Laboratories.

Thus, the schools became producers of innovation.

The Economist newspaper considers the question, “Why can’t big outfits surf on these university insights?” My question is, “Is the Economist newspaper overlooking the academic linkages that exist between the big companies producing lots of cash and a number of select universities. IBM is proud to be camped out at MIT. Google operates two research annexes at Stanford University and the University of Washington. Even smaller companies have ties; for example, Megatrends is close to Indiana University by proximity and spiritually linked to a university in a country far away. Accidents? Nope.

The Economist newspaper is doing the Oxford debate thing: From a superior position, the observations are stentorious. The knife like insights are crafted to cut those of lesser intellect down to size. Chop slice dice like a smart kitchen appliance.

I noted this passage:

Perhaps, with time, universities and the corporate sector will work together more profitably. Tighter competition policy could force businesses to behave a little more like they did in the post-war period, and beef up their internal research.

Is the Economist newspaper on the right track with this university R&D and corporate innovation arguments?

In a word, “Yep.”

Here’s my view:

  1. Universities teamed up with companies to get money in exchange for cheaper knowledge work subsidized by eager graduate students and PR savvy departments
  2. Companies used the tie ups to identify ideas with the potential for commercial application and the young at heart and generally naive students, faculty, and researchers as a recruiting short cut. (It is amazing what some PhDs would do for a mouse pad with a prized logo on it.)
  3. Researchers, graduate students, esteemed faculty, and probably motivated adjunct professors with some steady income after being terminated in a “real” job started making up data. (Yep, think about the bubbling scandals at Harvard University, for instance.)
  4. Universities embraced the idea that education is a business. Ah, those student loan plays were useful. Other outfits used the reputation to recruit students who would pay for the cost of a degree in cash. From what countries were these folks? That’s a bit of a demographic secret, isn’t it?

Where are we now? Spend some time with recent college graduates. That will answer the question, I believe. Innovation today is defined narrowly. A recent report from Google identified companies engaged in the development of mobile phone spyware. How many universities in Eastern Europe were on the Google list? Answer: Zero. How many companies and state-sponsored universities were on the list? Answer: Zero. How comprehensive was the listing of companies in Madrid, Spain? Answer: Incomplete.

I want to point out that educational institutions have quite distinct innovation fingerprints. The Economist newspaper does not recognize these differences. A small number of companies are engaged in big-time innovation while most are in the business of being cute or clever. The Economist does not pay much attention to this. The individuals, whether in an academic setting or in a corporate environment, are more than willing to make up data, surf on the work of other unacknowledged individuals, or suck of good ideas and information and then head back to a home country to enjoy a life better than some of their peers experience.

If we narrow the focus to the US, we have an unrecognized challenge — Dealing with shaped or synthetic information. In a broader context, the best instruction in certain disciplines is not in the US. One must look to other countries. In terms of successful companies, the financial rewards are shifting from innovation to me-too plays and old-fashioned monopolistic methods.

How do I know? Just ask a cashier (human, not robot) to make change without letting the cash register calculate what you will receive. Is there a fix? Sure, go for the next silver bullet solution. The method is working quite well for some. And what does “economic growth” mean? Defining terms can be helpful even to an Oxford Union influencer.

Stephen E Arnold, February 7, 2024

Modern Poison: Models, Data, and Outputs. Worry? Nah.

January 29, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

One bad apple does not a failed harvest make. Let’s hope. I read “Poisoned AI Went Rogue During Training and Couldn’t Be Taught to Behave Again in Legitimately Scary Study.” In several of my lectures in 2023 I included a section about poisoned data. When I described the method and provided some examples of content injection, the audience was mostly indifferent. When I delivered a similar talk in October 2023, those in my audience were attentive. The concept of intentionally fooling around with model thresholds, data used for training, and exploiting large language model developers’ efforts to process more current or what some call “real time” data hit home. For each of these lectures, my audience was composed of investigators and intelligence analysts.

image

How many bad apples are in the spectrum of smart software? Give up. Don’t feel bad. No one knows. Perhaps it is better to ignore the poisoned data problem? There is money to be made and innovators to chase the gold rush. Thanks, MSFT Copilot Bing thing. How is your email security? Oh, good enough, like the illustration with lots of bugs.

Write ups like “Poisoned AI Went Rogue…” add a twist to my tales. Specifically a function chunk of smart software began acting in a manner not only surprising but potentially harmful. The write up in LiveScience asserted:

AI researchers found that widely used safety training techniques failed to remove malicious behavior from large language models — and one technique even backfired, teaching the AI to recognize its triggers and better hide its bad behavior from the researchers.

Interesting. The article noted:

Artificial intelligence (AI) systems that were trained to be secretly malicious resisted state-of-the-art safety methods designed to "purge" them of dishonesty …  Researchers programmed various large language models (LLMs) — generative AI systems similar to ChatGPT — to behave maliciously. Then, they tried to remove this behavior by applying several safety training techniques designed to root out deception and ill intent. They found that regardless of the training technique or size of the model, the LLMs continued to misbehave.

Evan Hubinger, an artificial general intelligence safety research scientist at Anthropic, is quoted as saying:

"I think our results indicate that we don’t currently have a good defense against deception in AI systems — either via model poisoning or emergent deception — other than hoping it won’t happen…  And since we have really no way of knowing how likely it is for it to happen, that means we have no reliable defense against it. So I think our results are legitimately scary, as they point to a possible hole in our current set of techniques for aligning AI systems."

If you want to read the research paper, you can find it at this link. Note that one of the authors is affiliated with the Amazon- and Google-supported Anthropic AI company.

Net net: We do not have at this time a “good defense” against this type of LLM poisoning. Do I have a clever observation, some words of reassurance, or any ideas for remediation?

Nope.

Stephen E Arnold, January 29, 2024

Scientific American Spills the Beans on Innovation

December 21, 2023

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

It happened! A big, mostly respected publication called Scientific American explains where the Google type outfits got their best ideas. Note: The write up “Tech Billionaires Need to Stop Trying to Make the Science Fiction They Grew Up on Real” does not talk about theft of intellectual property, doing shameless me-too products, or acquiring promising start ups to make eunuchs of potential competitors.

Instead the Scientific American story asserts:

Today’s Silicon Valley billionaires grew up reading classic American science fiction. Now they’re trying to make it come true, embodying a dangerous political outlook.

image

I can make these science fiction worlds a reality. I am going to see Star Wars for the seventh time. I will invent the future, says the enthusiastic wizardette in 1985. Thanks, MSFT Copilot. Know anyone at Microsoft like this young person?

The article says:

These men [the Brin-Page variants] collectively have more than half a trillion dollars to spend on their quest to realize inventions culled from the science fiction and fantasy stories that they read in their teens. But this is tremendously bad news because the past century’s science fiction and fantasy works widely come loaded with dangerous assumptions.

The essayist (a science fiction writer) explains:

We are not trying to accurately predict possible futures but to earn a living: any foresight is strictly coincidental. We recycle the existing material—and the result is influenced heavily by the biases of earlier writers and readers. The genre operates a lot like a large language model that is trained using a body of text heavily contaminated by previous LLMs; it tends to emit material like that of its predecessors. Most SF is small-c conservative insofar as it reflects the history of the field rather than trying to break ground or question received wisdom.

So what? The writer answers:

It’s a worryingly accurate summary of the situation in Silicon Valley right now: the billionaires behind the steering wheel have mistaken cautionary tales and entertainments for a road map, and we’re trapped in the passenger seat. Let’s hope there isn’t a cliff in front of us.

Is there a way to look down the runway? Sure, read more science fiction. Invent the future and tell oneself, “I am an innovator.” That may be true but of what? Right now it appears that reality is a less than enticing place. The main point is that today may be built on a fairly flimsy foundation. Hint: Don’t ask a person to make change when you pay in cash.

Stephen E Arnold, December 21, 2023

Will TikTok Go Slow in AI? Well, Sure

December 7, 2023

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

The AI efforts of non-governmental organizations, government agencies, and international groups are interesting. Many resolutions, proclamations, and blog polemics, etc. have been saying, “Slow down AI. Smart software will put people out of work. Destroy humans’ ability to think. Unleash the ‘I’ll be back guy.'”

image

Getting those enthusiastic about smart software is a management problem. Thanks, MSFT Copilot. Good enough.

My stance in the midst of this fearmongering has been bemusement. I know that predicting the future perturbations of technology is as difficult as picking a Kentucky Derby winner and not picking a horse that will drop dead during the race. When groups issue proclamations and guidelines without an enforcement mechanism, not much is going to happen in the restraint department.

I submit as partial evidence for my bemusement the article “TikTok Owner ByteDance Joins Generative AI Frenzy with Service for Chatbot Development, Memo Says.” What seems clear, if the write up is mostly on the money, is that a company linked to China is joining “the race to offer AI model development as a service.”

Two quick points:

  1. Model development allows the provider to get a sneak peak at what the user of the system is trying to do. This means that information flows from customer to provider.
  2. The company in the “race” is one of some concern to certain governments and their representatives.

The write up says:

ByteDance, the Chinese owner of TikTok, is working on an open platform that will allow users to create their own chatbots, as the company races to catch up in generative artificial intelligence (AI) amid fierce competition that kicked off with last year’s launch of ChatGPT. The “bot development platform” will be launched as a public beta by the end of the month…

The cited article points out:

China’s most valuable unicorn has been known for using some form of AI behind the scenes from day one. Its recommendation algorithms are considered the “secret sauce” behind TikTok’s success. Now it is jumping into an emerging market for offering large language models (LLMs) as a service.

What other countries are beavering away on smart software? Will these drive in the slow lane or the fast lane?

Stephen E Arnold, December 7, 2023

Governments Tip Toe As OpenAI Sprints: A Story of the Turtles and the Rabbits

November 27, 2023

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

Reuters has reported that a pride of lion-hearted countries have crafted “joint guidelines” for systems with artificial intelligence. I am not exactly sure what “artificial intelligence” means, but I have confidence that a group of countries, officials, advisor, and consultants do.

The main point of the news story “US, Britain, Other Countries Ink Agreement to Make AI Secure by Design” is that someone in these countries knows what “secure by design” means. You may not have noticed that cyber breaches seem to be chugging right along. Maine managed to lose control of most of its residents’ personally identifiable information. I won’t mention issues associated with Progress Software, Microsoft systems, and LY Corp and its messaging app with a mere 400,000 users.

image

The turtle started but the rabbit reacted. Now which AI enthusiast will win the race down the corridor between supercomputers powering smart software? Thanks, MSFT Copilot. It took several tries, but you delivered a good enough image.

The Reuters’ story notes with the sincerity of an outfit focused on trust:

The agreement is the latest in a series of initiatives – few of which carry teeth – by governments around the world to shape the development of AI, whose weight is increasingly being felt in industry and society at large.

Yep, “teeth.”

At the same time, Sam AI-Man was moving forward with such mouth-watering initiatives as the AI app store and discussions to create AI-centric hardware. “I Guess We’ll Just Have to Trust This Guy, Huh?” asserts:

But it is clear who won (Altman) and which ideological vision (regular capitalism, instead of some earthy, restrained ideal of ethical capitalism) will carry the day. If Altman’s camp is right, then the makers of ChatGPT will innovate more and more until they’ve brought to light A.I. innovations we haven’t thought of yet.

As the signatories to the agreement without “teeth” and Sam AI-Man were doing their respective “thing,” I noted the AP story titled “Pentagon’s AI Initiatives Accelerate Hard Decisions on Lethal Autonomous Weapons.” That write up reported:

… the Pentagon is intent on fielding multiple thousands of relatively inexpensive, expendable AI-enabled autonomous vehicles by 2026 to keep pace with China.

To deal with the AI challenge, the AP story includes this paragraph:

The Pentagon’s portfolio boasts more than 800 AI-related unclassified projects, much still in testing. Typically, machine-learning and neural networks are helping humans gain insights and create efficiencies.

Will the signatories to the “secure by design” agreement act like tortoises or like zippy hares? I know which beastie I would bet on. Will military entities back the slow or the fast AI faction? I know upon which I would wager fifty cents.

Stephen E Arnold, November 27, 2023

The Brin-A-Loon: A Lofty Idea Is Ready to Take Flight

November 3, 2023

green-dino_thumb_thumbThis essay is the work of a dumb humanoid. No smart software required.

I read “Sergey Brin’s 400-Foot Airship Reportedly Cleared for Takeoff.” I am not sure how many people know about Mr. Brin’s fascination with a balloon larger than Vladimir Putin’s yacht. The article reports:

While the concept of rigid airships and the basic airframe design are a throwback to pre-Hindenburg times of the early 1900s, Pathfinder 1 uses a frame made from 96 welded titanium hubs, joined by some 289 reinforced carbon fiber tubes. These materials advances keep it light enough to fly using helium, rather than hydrogen as a lift gas.

10 28 brinaloon

A high technology balloon flies near the Stanford campus, heading toward the Paul Allen Building. Will the aspiring network wizards notice the balloon? Probably not. Thanks, MidJourney. A bit like the movie posters I saw as a kid, but close enough for horseshoes and the Brin-A-Loon.

High tech. Plus helium (an increasingly scarce resource for the Brin-A-Loon and party balloons at Dollar General) does not explode. Remember that newsreel footage from New Jersey. Hydrogen, not helium.

The article continues:

According to IEEE Spectrum, the company has now been awarded the special airworthiness certificate required to fly this beast outdoors – at less than 1,500 ft (460 m) of altitude, and within the boundaries of Moffett Field and the neighboring Palo Alto Airport’s airspace.

Will there be UFO reports on TikTok and YouTube?

What’s the purpose of the Brin-A-Loon? The write up opines:

LTA says its chief focus is humanitarian aid; airships can get bulk cargo in and people out of disaster areas when roads and airstrips are destroyed and there’s no way for other large aircraft to get in and out. Secondary opportunities include slow point-to-point cargo operations, although the airships will be grounded if the weather doesn’t co-operate.

I remember the Loon balloons. The idea was to use Loon balloons to deliver Internet access in places like Sri Lanka, Puerto Rico, and Africa. Great idea. The hitch in the float along was that the weather was a bit of an issue. Oh, the software — like much of the Googley code floating around — was a bit problematic.

The loon balloons are gone. But the Brin-A-Loon is ready to take to the air. The craft may find a home in Ohio. Good for Ohio. And the Brinaloon will be filled with helium like birthday party balloons. Safer than hydrogen. Will the next innovation be the Brin-Train, a novel implementation of the 18th century Leland Stanford railroad engines?

Stephen E Arnold, November 3, 2023

Making Chips: What Happens When Sanctions Spark Work Arounds

October 25, 2023

Vea4_thumb_thumb_thumb_thumb_thumb_t[2]Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Maybe the Japanese outfit Canon is providing an example of the knock on effects of sanctions. On the other hand, maybe this is just PR. My hunch is more information will become available in the months ahead. “Nanoimprint Lithography Semiconductor Manufacturing System That Covers Diverse Applications with Simple Patterning Mechanism” discloses:

On October 13, 2023, Canon announced today the launch of the FPA-1200NZ2C nanoimprint semiconductor manufacturing equipment, which executes circuit pattern transfer, the most important semiconductor manufacturing process.

10 15 otter try 2

“This might be important,” says a technologically oriented animal in rural Kentucky. Thanks, MidJourney, continue to descend gradiently.

The idea is small and printing traces of a substance. The application is part of the expensive and delicate process of whipping out modern chips.

The write up continues:

By bringing to market semiconductor manufacturing equipment with nanoimprint lithography (NIL) technology, in addition to existing photolithography systems, Canon is expanding its lineup of semiconductor manufacturing equipment to meet the needs of a wide range of users by covering from the most advanced semiconductor devices to the existing devices.

Several observations are warranted:

  1. Oh, oh. A new process may be applicable to modern chip manufacturing.
  2. The system and method may be of value to countries dealing with US sanctions.
  3. Clever folks find ways to do things that regulatory language cannot anticipate.

Is this development important even if the Canon announcement is a bit fluffy? Yep, because the information about the system and method provide important road signs on the information superhighway. Canon does cameras, owns some intelware technology, and now allegedly provides an alternative to the traditional way to crank out advanced semiconductors.

Stephen E Arnold, October 25, 2023

HP Innovation: Yes, Emulate Apple and Talk about AI

October 24, 2023

green-dino_thumbThis essay is the work of a dumb humanoid. No smart software required.

Amazing, according to the Freedictionary means “ To affect with great wonder; astonish.” I relate to the archaic meaning of the word; to wit: “To bewilder; perplex.” I was bewildered when I read about HP’s “magic.” But I am a dinobaby. What do I know? Not much but …

I read “The Magic Presented at HP Imagine 2023.” Yep, magic. The write up profiles HP innovations. These were presented in “stellar fashion.” The speaker was HP’s PR officer. According to the write up:

It stands as one of the best-executed presentations I’ve ever attended.

Not to me. Such understatement. Such a subtle handling of brilliant innovations at HP.

Let’s check out these remarkable examples cited in the article by a person who is clearly objective, level headed, and digging into technology because it is just the right thing to do. Here we go: Innovation includes AI and leads to greater efficiency. HP is the place to go for cost reduction.

Innovation 1: HP is emulating Apple. Here’s the explanation from the truth packed write up:

… it’s making it so HP peripherals connect automatically to HP PCs, a direction that resonates well with HP customers and mirrors an Apple-like approach

Will these HP devices connect to other peripherals or another company’s replacement ink cartridges? Hmmm.

Innovation 2: HP is into video conferencing. I wonder if the reference is to Zoom or the fascinating Microsoft Teams or Apple Facetime, among others? Here’s what the write up offers:

[An HP executive]  outlined how conference rooms needed to become more of a subscription business so that users didn’t constantly run into the problem of someone mucking with the setup and making the room unusable because of disconnected cables or damaged equipment.

Is HP pushing the envelope or racing to catch up with a trend from the Covid era?

Innovation 3: Ah, printers. Personally I am more interested in the HP ink lock down, but that’s just me. HP is now able to build stuff; specifically:

One of the most intriguing announcements at this event featured the Robotic Site Printer. This device converts a blueprint into a physical layout on a slab or floor, assisting construction workers in accurately placing building components before construction begins. When connected to a metaverse digital twin building effort, this little robot could be a game changer for construction by significantly reducing build errors.

Okay, what about the ink or latex or whatever. Isn’t ink from HP more costly than gold or some similar high value commodity?

Not a peep about the replacement cartridges. I wonder why I am bewildered. Innovation is being like Apple and innovating with big printers requiring I suppose giant proprietary ink cartridges. Oh, I don’t want to forget perplexed: Imitation is innovation. Okay.

By the way, the author of the write up was a research fellow at two mid tier consulting firms. Yep, objectivity is baked into the work process.

Stephen E Arnold, October 24, 2023

Vaporware: It Costs Little and May Pay Off Big

September 6, 2023

Vea4_thumb_thumb_thumb_thumb_thumb_tNote: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Since ChatGPT and assorted AI image-creation tools burst onto the scene, it seems generative AI is all anyone in the tech world can talk about. Some AI companies have been valued in the billions by those who expect trillion-dollar markets. But, asks Gary Marcus of Marcus on AI, “What if Generative AI Turned Out To Be a Dud?

Might it be the industry has leapt before looking? Marcus points out generative AI revenues are estimated in just the hundreds of millions so far. He describes reasons the field may never satisfy expectations, like pervasive bias, that pesky hallucination problem, and the mediocrity of algorithmic prose. He also notes people seem to be confusing generative AI with theoretical Artificial General Intelligence (AGI), which is actually much further from being realized. See the write-up for those details.

As disastrous as unrealized AI dreams may be for investors, Marcus is more concerned about policy decisions being made on pure speculation. He writes:

“On the global front, the Biden administration has both limited access to high-end hardware chips that are (currently) essential for generative AI, and limited investment in China; China’s not exactly being warm towards global cooperation either. Tensions are extremely high, and a lot of it to revolve around dreams about who might ‘win the AI war.’ But what if it the winner was nobody, at least not any time soon?”

On the national level, Marcus observes, important efforts to protect consumers from bias, misinformation, and privacy violations are being hampered by a perceived need to develop the technology as soon as possible. The post continues:

“We might not get the consumer protections we need, because we are trying to foster something that may not grow as expected. I am not saying anyone’s particular policies are wrong, but if the premise that generative AI is going to be bigger than fire and electricity turns out to be mistaken, or at least doesn’t bear out in the next decade, it’s certainly possible that we could wind up with what in hindsight is a lot of needless extra tension with China, possibly even a war in Taiwan, over a mirage, along with a social-media level fiasco in which consumers are exploited in news, and misinformation rules the day because governments were afraid to clamp down hard enough.”

Terrific.

Cynthia Murrell, September 6, 2023

Next Page »

  • Archives

  • Recent Posts

  • Meta