Alternative Channels, Superstar Writers, and Content Filtering

February 7, 2024

This essay is the work of a dumb dinobaby. No smart software required.

In this post-Twitter world, a duel of influencers is playing out in the blogosphere. At issue: Substack’s alleged Nazi problem. The kerfuffle began with a piece in The Atlantic by Jonathan M. Katz, but has evolved into a debate between Platformer’s Casey Newton and Jesse Singal of Singal-Minded. Both those blogs are hosted by Substack.

To get up to speed on the controversy, see the original Atlantic article. Newton wrote a couple of posts detailing Substack’s responses and Platformer’s involvement. In “Substack Says It Will Remove Nazi Publications from the Platform,” he writes:

“Substack is removing some publications that express support for Nazis, the company said today. The company said this did not represent a reversal of its previous stance, but rather the result of reconsidering how it interprets its existing policies. As part of the move, the company is also terminating the accounts of several publications that endorse Nazi ideology and that Platformer flagged to the company for review last week.”

How many publications did Platformer flag, and how many of those did Substack remove? Were they significant publications, and did they really violate the rules? These are the burning questions Singal sought to answer. He shares his account in “Platformer’s Reporting on Substack’s Supposed ‘Nazi Problem’ Is Shoddy and Misleading.” But first, he specifies his own perspective on Katz’s Atlantic article:

“In my view, this whole thing is little more than a moral panic. Moreover, Katz cut certain corners to obscure the fact that to the extent there are Nazis on Substack at all, it appears they have almost no following or influence, and make almost no money. In one case, for example, Katz falsely claimed that a white nationalist was making a comfortable living writing on Substack, but even the most cursory bit of research would have revealed that that is completely false.”

Singal says he plans a detailed article supporting that assertion, but first he must pick apart Platformer’s position. Readers are treated to details from an email exchange between the bloggers and reasons Singal feels Newton’s responses are inadequate. One can navigate to that post for those details if one wants to get into the weeds. As of this writing, Newton has not published a response to Singal’s diatribe. Were we better off when such duels took place 280 characters at a time?

One positive about newspapers: An established editorial process kept superstars grounded in reality. Now entitlement, more than content, seems to be in the driver’s seat.

Cynthia Murrell, February 7, 2024

Harvard University: Does Money Influence Academic Research?

December 5, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Harvard University has been on my radar since the ethics misstep. In case your memory is fuzzy, Francesca Gino, a big thinker about ethics and taking shortcuts, was accused of data fraud. The story did not attract much attention in rural Kentucky. Ethics and dishonesty? Come on. Harvard has to do some serious training to catch up with a certain university in Louisville. For a reasonable explanation of the allegations (because, of course, one will never know), navigate to “Harvard Professor Who Studies Dishonesty Is Accused of Falsifying Data” and dig in.


Thanks, MSFT Copilot, you have nailed the depressive void that comes about when philosophers learn that ethics suck.

Why am I thinking about Harvard and ethics? The answer is that I read “Harvard Gutted Initial Team Examining Facebook Files Following $500 Million Donation from Chan Zuckerberg Initiative, Whistleblower Aid Client Reveals.” I have no idea if the write up is spot on, weaponized information, or the work of someone who did not get into one of the university’s numerous money generating certification programs.

The write up asserts:

Harvard University dismantled its prestigious team of online disinformation experts after a foundation run by Facebook’s Mark Zuckerberg and his wife Priscilla Chan donated $500 million to the university, a whistleblower disclosure filed by Whistleblower Aid reveals. Dr. Joan Donovan, one of the world’s leading experts on social media disinformation, says she ran into a wall of institutional resistance and eventual termination after she and her team at Harvard’s Technology and Social Change Research Project (TASC) began analyzing thousands of documents exposing Facebook’s knowledge of how the platform has caused significant public harm.

Let’s assume that the allegation is horse feathers, not to be confused with Intel’s fabulous Horse Ridge. Harvard still has to do some fancy dancing with regard to the ethics professor and expert in dishonesty who is alleged to have violated the esteemed university’s ethics guidelines and was dishonest.

If we assume that the information in Dr. Donovan’s whistleblower declaration is close enough for horse shoes, something equine can be sniffed in the atmosphere of Dr. William James’s beloved institution.

What could Facebook or the Metazuck do which would cause significant public harm? The options include providing tools to disseminate information that sparks body shaming, self harm, and angst among young users. Are old timers possibly affected? I suppose buying interesting merchandise on Facebook Marketplace and experiencing psychological problems as a result of defriending are possibilities too.

If the allegations are proven to be accurate, what are the consequences for the two esteemed organizations? My hunch is zero. Money talks; prestige walks away to put ethics on display for another day.

Stephen E Arnold, December 5, 2023

The Google Magic Editor: Mom Knows Best and Will Ground You, You Goof Off

November 13, 2023

This essay is the work of a dumb humanoid. No smart software required.

What’s better at enforcing rules? The US government, with its Declaration of Independence, Constitution, and regulatory authority, or Mother Google? If you think the US government’s legal proceedings into Google’s alleged fancy dancing with mere users are opaque, you are correct. The US government needs the Google more than Google Land needs the world’s governments. Who’s in charge of Google? The real authority is Mother Google, a ghost-like iron maiden creating and enforcing many rules and regulations with smart software. Think of Mother Google operating from a digital Star Chamber. Banned from YouTube? Mother Google did it. Lost Web site traffic overnight? Mother Google did it. Lost control of your user data? Mother Google did not do that, of course.


A stern mother says, “You cannot make deep fakes involving your gym teacher and your fifth grade teacher. Do you hear me?” Thanks, Microsoft Bing. Great art.

The author of “Google Photos’ Magic Editor Will Refuse to Make These Edits” lays out the restrictions. The write up states:

Code within the latest version of Google Photos includes specific error messages that highlight the edits that Magic Editor will refuse to do. Magic Editor will refuse to edit photos of ID cards, receipts, images with personally identifiable information, human faces, and body parts. Magic Editor already avoids many of these edits but without specific error messages, leaving users guessing on what is allowed and what is not.

What’s interesting is that users have to discover that which is forbidden by experimenting. My reaction to this assertion is that Google does not want to get in trouble when a crafty teen cranks out fake IDs in order to visit some of the more interesting establishments in town.
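To make the guessing game concrete, here is a minimal, purely hypothetical sketch of a category-based refusal check. The blocked categories mirror those reported in the article; the function, labels, and error strings are illustrative assumptions, not Google’s actual code.

```python
# Hypothetical sketch of a category-based edit refusal check.
# The blocked categories mirror those reported in the article; the labels,
# messages, and function are illustrative, not Google's implementation.
BLOCKED_CATEGORIES = {
    "id_card": "Editing images of ID cards is not allowed.",
    "receipt": "Editing images of receipts is not allowed.",
    "personal_info": "Editing images with personal information is not allowed.",
    "human_face": "Editing human faces is not allowed.",
    "body_part": "Editing body parts is not allowed.",
}

def check_edit_allowed(detected_labels: set[str]) -> tuple[bool, str | None]:
    """detected_labels would come from an upstream image classifier (not shown).

    Returns (allowed, error_message); a specific message replaces silent failure.
    """
    blocked = detected_labels & BLOCKED_CATEGORIES.keys()
    if blocked:
        return False, BLOCKED_CATEGORIES[sorted(blocked)[0]]
    return True, None

# Example: an upstream classifier says the photo shows a face and a receipt.
print(check_edit_allowed({"human_face", "receipt"}))   # refused, with a specific message
print(check_edit_allowed({"landscape", "dog"}))        # (True, None)
```

The point of the specific messages, per the article, is to replace the current trial-and-error guessing with an explicit answer.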

I have a nagging suspicion or two I would like to share:

  1. The log files identifying which user tried to create what with which prompt would be interesting to review
  2. The list of don’ts is not published because it is adjusted to meet Google’s needs, not the users’
  3. Google wants to be able to say, “See, we are trying to keep the Internet safe, pure, and tidy.”

Net net: What happens when smart software enforces more potent and more subtle controls over the framing and presenting of information? Right, mom?

Stephen E Arnold, November 13, 2023

The Google: Dribs and Drabs of Information Suggest a Frisky Outfit

October 10, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I have been catching up since I returned from a law enforcement conference. One of the items in my “read” file concerned Google’s alleged demonstrations of the firm’s cleverness. Clever is often valued more than intelligence in some organizations, in my experience. I picked up on an item describing the system and method for tweaking a Google query to enhance the results with some special content.

“How Google Alters Search Queries to Get at Your Wallet” appeared on October 2, 2023. By October 6, 2023, the article was disappeared. I want to point out, for you open source intelligence professionals, that the original article remains online.


Two serious and bright knowledge workers look confused when asked about alleged cleverness. One says, “I don’t understand. We are here to help you.” Thanks, Microsoft Bing. Highly original art and diverse too.

Nope. I won’t reveal where or provide a link to it. I read it and formulated three notions in my dinobaby brain:

  1. The author is making darned certain that he/she/it will not be hired by the Google.
  2. The system and method described in the write up is little more than a variation on themes which thread through a number of Google patent documents. I demonstrated in my monograph Google Version 2.0: The Calculating Predator that clever methods work for profiling users and building comprehensive data sets about products.
  3. The idea of editorial curation is alive, just not particularly effective at the “begging for dollars” outfit doing business as Wired Magazine.

Those are my opinions, and I urge you to formulate your own.
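To give that abstraction some shape, here is a deliberately generic sketch of what “tweaking a query to enhance the results with some special content” could look like: the query a user types is silently expanded with commercially attractive terms before retrieval. This is an assumption-laden illustration of the general idea, not Google’s code and not the specific mechanism in the disappeared article.

```python
# A generic, illustrative sketch of server-side query expansion. It is not
# Google's code, only the general shape of the idea: the user's query is
# quietly padded with commercially attractive terms before retrieval.
COMMERCIAL_SYNONYMS = {
    "laptop": ["laptop price", "best laptop deals"],
    "sneakers": ["buy sneakers", "sneaker deals"],
}

def expand_query(user_query: str) -> str:
    """Append revenue-friendly terms to the query the user actually typed."""
    terms = user_query.lower().split()
    extras: list[str] = []
    for term in terms:
        extras.extend(COMMERCIAL_SYNONYMS.get(term, []))
    return " ".join(terms + extras)

# The user asked for one thing; the retrieval layer quietly searches for more.
print(expand_query("lightweight laptop"))
# -> lightweight laptop laptop price best laptop deals
```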

I noted several interesting comments on Hacker News about this publish and disappear event. Let me highlight several. You can find the posts at this link, but keep in mind, these also can vaporize without warning. Isn’t being a sysadmin fun?

  1. judge2020: “It’s obvious that they design for you to click ads, but it was fairly rocky suggesting that the backend reaches out to the ad system. This wouldn’t just destroy results, but also run afoul of FCC Ad disclosure requirements….”
  2. techdragon: “I notice it seems like Google had gotten more and more willing to assume unrelated words/concepts are sufficiently interchangeable that it can happily return both in a search query for either … and I’ll be honest here… single behavior is the number one reason I’m on the edge of leaving google search forever…”
  3. TourcanLoucan: “Increasingly the Internet is not for us, it is certainly not by us, it is simply where you go when you are bored, the only remaining third place that people reliably have access to, and in true free market fashion, it is wall-to-wall exploitation.”

I want to point out that online services operate like droplets of mercury. They merge and one has a giant blob of potentially lethal mercury. Is Google a blob of mercury? The disappearing content is interesting as are the comments about the incident. But some kids play with mercury; others use it in industrial processes; and some consume it (willingly or unwillingly) like sailors of yore with a certain disease. They did not know. You know or could know.

Stephen E Arnold, October 10, 2023

    Reading. Who Needs It?

    September 19, 2023

Book banning, aka media censorship, is an act as old as human intellect. As technology advances, so do the strategies and tools available to assist in book banning. Engadget shares the unfortunate story about how “An Iowa School District Is Using AI To Ban Books.” Mason City, Iowa’s school board is leveraging AI technology to generate lists of books to potentially ban from the district’s libraries in the 2023-24 school year.

    Governor Kim Reynolds signed Senate File 496 into law after it passed the Republican-controlled state legislature. Senate File 496 changes the state’s curriculum and it includes verbiage that addresses what books are allowed in schools. The books must be “age appropriate” and be without “descriptions or visual depictions of a sex act.”

“Inappropriate” titles have snuck past censors for years, and Iowa’s Mason City school board discovered it is not so easy to peruse every school’s book collection. That is where the school board turned to an AI algorithm to undertake the task:

    “As such, the Mason City School District is bringing in AI to parse suspect texts for banned ideas and descriptions since there are simply too many titles for human reviewers to cover on their own. Per the district, a “master list” is first cobbled together from “several sources” based on whether there were previous complaints of sexual content. Books from that list are then scanned by “AI software” which tells the state censors whether or not there actually is a depiction of sex in the book.”
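The article does not name the “AI software” involved, but the workflow it describes, a complaint-driven master list followed by an automated pass that flags possible sexual content, can be pictured with a small, hypothetical sketch. The classifier, labels, threshold, and titles below are illustrative assumptions, not the district’s actual tooling.

```python
# Hypothetical sketch of a "master list, then automated scan" workflow.
# The classifier, labels, threshold, and titles are illustrative assumptions,
# not the district's actual tooling.
from transformers import pipeline

FLAG_LABEL = "contains a description of a sex act"
SAFE_LABEL = "does not contain a description of a sex act"
THRESHOLD = 0.8  # hypothetical confidence cutoff

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def passages(text: str, size: int = 1200) -> list[str]:
    """Split a book's text into roughly page-sized chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def scan_title(title: str, full_text: str) -> bool:
    """Return True if any passage is flagged above the threshold."""
    for passage in passages(full_text):
        result = classifier(passage, candidate_labels=[FLAG_LABEL, SAFE_LABEL])
        if result["labels"][0] == FLAG_LABEL and result["scores"][0] >= THRESHOLD:
            return True
    return False

# The "master list" comes from prior complaints; these entries are placeholders.
master_list = {"Placeholder Title": "full text of the book would go here..."}
flagged = [title for title, text in master_list.items() if scan_title(title, text)]
print(flagged)
```

Whatever tool the district actually uses, a passage-level scan of this sort has no sense of literary context, which helps explain how classics end up on the same list as pulp titles.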

The AI algorithm has so far listed nineteen titles to potentially ban. These include frequently banned veteran titles such as The Color Purple, I Know Why the Caged Bird Sings, and The Handmaid’s Tale, as well as comparatively newer titles: Gossip Girl, Feed, and A Court of Mist and Fury.

While these titles may be inappropriate for elementary schools, questionable for middle schools, and arguably age-appropriate for high schools, book banning is not good. Parents, teachers, librarians, and other leaders must work together to determine what is best for students. Books also carry age guidance, much as videogames, movies, and TV shows do. These titles are tame compared to what kids can access online and on TV.

    Whitney Grace, September 19, 2023

    Dust Up: Social Justice and STEM Publishing

    June 28, 2023

Are you familiar with “social justice warriors”? These are people who take it upon themselves to police the world for their moral causes, usually from a self-righteous standpoint. Social justice warriors are also known by the acronym SJWs and can cross over into the infamous Karen zone. Unfortunately, Heterodox STEM reports SJWs have invaded the science community, and Anna Krylov and Jay Tanzman discussed the issue in their paper “Critical Social Justice Subverts Scientific Publishing.”

SJWs advocate for the politicization of science, adding to scientific research an ideology known as critical social justice (CSJ). It upends the true purpose of science, which is to help and advance humanity. CSJ adds censorship, scholarship suppression, and social engineering to science.

Krylov and Tanzman’s paper was presented at Perils for Science in Democracies and Authoritarian Countries, and they argue CSJ harms scientific research more than it helps it. They compare CSJ to Orwell’s fictional Ministry of Love, although real life examples such as Josef Goebbels’s Nazi Ministry of Propaganda, the USSR’s Department for Agitation and Propaganda, and China’s authoritarian regime work better. CSJ is the opposite of the Enlightenment, which liberated human psyches from religious and royal dogmas. The Enlightenment engendered critical thinking, the scientific process, philosophy, and discovery. The world became more tolerant, wealthier, better educated, and healthier as a result.

    CSJ creates censorship and paranoia akin to tyrannical regimes:

    “According to CSJ ideologues, the very language we use to communicate our findings is a minefield of offenses. Professional societies, universities, and publishing houses have produced volumes dedicated to “inclusive” language that contain long lists of proscribed words that purportedly can cause offense and—according to the DEI bureaucracy that promulgates these initiatives—perpetuate inequality and exclusion of some groups, disadvantage women, and promote patriarchy, racism, sexism, ableism, and other isms. The lists of forbidden terms include “master database,” “older software,” “motherboard,” “dummy variable,” “black and white thinking,” “strawman,” “picnic,” and “long time no see” (Krylov 2021: 5371, Krylov et al. 2022: 32, McWhorter 2022, Paul 2023, Packer 2023, Anonymous 2022). The Google Inclusive Language Guide even proscribes the term “smart phones” (Krauss 2022). The Inclusivity Style  Guide of the American Chemical Society (2023)—a major chemistry publisher of more than 100 titles—advises against using such terms as “double blind studies,” “healthy weight,” “sanity check,” “black market,” “the New World,” and “dark times”…”

    New meanings that cause offense are projected onto benign words and their use is taken out of context. At this rate, everything people say will be considered offensive, including the most uncontroversial topic: the weather.

Science must be free not only from CSJ ideologies but also from corporate ideologies that promote profit margins. Examples from American history include Big Tobacco, sugar manufacturers, and Big Pharma.

    Whitney Grace, June 28, 2023

    Two Polemics about the Same Thing: Info Control

    June 12, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

    Polemics are fun. The term, as I use it, means:

    a speech or piece of writing expressing a strongly critical attack on or controversial opinion about someone or something.

    I took the definition from Google’s presentation of the “Oxford Languages.” I am not sure what that means, but since we are considering two polemics, the definition is close enough for horseshoes. Furthermore, polemics are not into facts, verifiable assertions, or hard data. I think of polemics as blog posts by individuals whom some might consider fanatics, apologists, crusaders, or zealots.

    Ah, you don’t agree? Tough noogies, gentle reader.

    The first document I read and fed into Browserling’s free word frequency tool was Marc Andreessen’s delightful “Why AI Will Save the World.” The document has a repetitive contents listing, which some readers may find useful. For me, the effort to stay on track added duplicate words.

The second document I read and stuffed into the Browserling tool was the entertaining and, in my opinion, fluffy Areopagitica, made available by Dartmouth.

The mechanics of the analysis were simple. I compared the frequency of words which I find indicative of a specific rhetorical intent. Mr. Andreessen is probably better known to modern readers than John Milton. Mr. Andreessen’s contribution to polemic literature is arguably more readable. There’s the clumsy organizational impedimenta. There are shorter sentences. There are what I would describe as Silicon Valley words. Furthermore, based on Bing, Google, and Yandex searches for the text of the document, one can find Mr. Andreessen’s contribution to the canon in more places than John Milton’s lame effort. I want to point out that Mr. Milton’s polemic is longer than Mr. Andreessen’s by a couple of orders of magnitude. I did what most careless analysts would do: I took the full text of Mr. Andreessen’s screed and snagged the first 8,000 words of Mr. Milton’s writing, a work known to bring tears to the eyes of first-year college students asked to read the prose and write an analytic essay about Areopagitica in 500 words. Good training for a debate student, a future lawyer, or a person who wants to write for Reader’s Digest magazine, I believe.
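For anyone who wants to replicate this quick-and-dirty exercise without the web tool, here is a minimal sketch in Python. It assumes the two texts have been saved locally as plain text files (the file names are hypothetical), truncates the Milton text to its first 8,000 words as described above, and tallies a handful of words of interest. Counts from a home-grown tokenizer may differ slightly from Browserling’s at the margins.

```python
import re
from collections import Counter

# Hypothetical local copies of the two polemics.
ANDREESSEN_FILE = "andreessen_why_ai_will_save_the_world.txt"
MILTON_FILE = "milton_areopagitica.txt"

# Lowercased words whose frequencies we want to compare.
WORDS_OF_INTEREST = [
    "ai", "all", "ethics", "every", "everyone", "everything", "everywhere",
    "infinitely", "moral", "morality", "obviously", "should", "would",
]

def tokenize(text: str) -> list[str]:
    """Split text into lowercase word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def count_words(path: str, limit: int | None = None) -> Counter:
    """Count word frequencies in a file, optionally keeping only the first `limit` words."""
    with open(path, encoding="utf-8") as handle:
        tokens = tokenize(handle.read())
    if limit is not None:
        tokens = tokens[:limit]
    return Counter(tokens)

if __name__ == "__main__":
    andreessen = count_words(ANDREESSEN_FILE)        # full text
    milton = count_words(MILTON_FILE, limit=8000)    # first ~8,000 words only
    print(f"{'Word':<12}{'Andreessen':>12}{'Milton':>10}")
    for word in WORDS_OF_INTEREST:
        print(f"{word:<12}{andreessen[word]:>12}{milton[word]:>10}")
```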

    So what did I find?

    First, both Mr. Andreessen and Mr. Milton needed to speak out for their ideas. Mr. Andreessen is an advocate of smart software. Mr. Milton wanted a censorship free approach to publishing. Both assumed that “they” or people not on their wave length needed convincing about the importance of their ideas. It is safe to say that the audiences for these two polemics are not clued into the subject. Mr. Andreessen is speaking to those who are jazzed on smart software, neglecting to point out that smart software is pretty common in the online advertising sector. Mr. Milton assumed that censorship was a new threat, electing to ignore that religious authorities, educational institutions, and publishers were happily censoring information 24×7. But that’s the world of polemicists.

Second, what about the words used by each author? Since this is written for my personal blog, I will boil down my findings to a handful of words.

The table below presents a selection of words and the count of each:

Words          Andreessen   Milton
AI                    157        0
All                    34       54
Ethics                  1        0
Every                  20        8
Everyone                7        0
Everything              6        0
Everywhere              4        0
Infinitely              9        0
Moral                   9        0
Morality                2        0
Obviously               4        0
Should                 23       22
Would                  21       10

    Several observations:

    1. Messrs. Andreessen and Milton share an absolutist approach. The word “all” figures prominently in both polemics.
    2. Mr. Andreessen uses “every” words to make clear that AI is applicable to just about anything one cares to name. Logical? Hey, these are polemics. The logic is internal.
3. Messrs. Andreessen and Milton share a fondness for adulting. Note the frequency of “should” and “would.”
    4. Mr. Andreessen has an interest in ethical and moral behavior. Mr. Milton writes around these notions.

    Net net: Polemics are designed as marketing collateral. Mr. Andreessen is marketing as is Mr. Milton. Which pitch is better? The answer depends on the criteria one uses to judge polemics. I give the nod to Mr. Milton. His polemic is longer, has freight train scale sentences, and is for a modern college freshman almost unreadable. Mr. Andreessen’s polemic is sportier. It’s about smart software, not censorship directly. However, both polemics boil down to who has his or her hands on the content levers.

    Stephen E Arnold, June 12, 2023

    Has the Interior Magic of Cyber Security Professionals Been Revealed?

    April 14, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

    The idea of “real” secrets is an interesting one. Like much of life today, “real” and “secret” depend on the individual. Observation changes reality; therefore, information is malleable too. I wonder if this sounds too post-Heisenberg for a blog post by a dinobaby? The answer is, “Yes.” However, I don’t care, particularly after reading “40% of IT Security Pros Say They’ve Been Told Not to Report a Data Leak.”

    The write up states:

    According to responses from large companies in the US, EU, and Britain, half of organizations have experienced a data leak in the past year with America faring the worst: three quarters of respondents from that side of the pond said they experienced an intrusion of some kind. To further complicate matters, 40 percent of IT infosec folk polled said they were told to not report security incidents, and that climbs to 70.7 percent in the US, far higher than any other country.

After reading the article, I thought about the “interior character” of the individuals who cover up cyber security weaknesses. My initial reaction is that individuals are concerned about their own aura of “excellence”: money, the position each holds, the perception of others via a LinkedIn profile. The fact of the breach is secondary to these other, more important considerations. Upon reflection, the failure to talk about flaws may also be a desire to prevent miscreants from exploiting what is a factual condition: lousy cyber security.

    What about those marketing assurances from cyber security companies? What about the government oversight groups who are riding herd on appropriate cyber security actions and activities?

    Perhaps the marketing is better than the policies, procedures, software, and people involved in protecting information and systems from bad actors?

    Stephen E Arnold, April 14, 2023

    What Will the Twitter Dependent Do Now?

    November 7, 2022

    Here’s a question comparable to Roger Penrose’s, Michio Kaku’s, and Sabine Hossenfelder’s discussion of the multiverse. (One would think that the Institute of Art and Ideas could figure out sound, but that puts high-flying discussions in a context, doesn’t it?)

    What will the Twitter dependent do now?

Since I am neither Twitter dependent nor Twitter curious (twi-curious, perhaps?), I find the artifacts of Muskism interesting to examine. Let’s take one example; specifically, “Twitter, Cut in Half.” Yikes, castration by email! Not quite like the real thing, but for some, the imagery of chopping off the essence of the tweeter thing is psychologically disturbing.

    Consider this statement:

    After the layoffs, we asked some of the employees who had been cut what they made of the process. They told us that they had been struck by the cruelty: of ordering people to work around the clock for a week, never speaking to them, then firing them in the middle of the night, no matter what it might mean for an employee’s pregnancy or work visa or basic emotional state. More than anything they were struck by the fact that the world’s richest man, who seems to revel in attention on the platform they had made for him, had not once deigned to speak to them.


    Knife cutting a quite vulnerable finger as collateral damage to major carrot chopping. Image by https://www.craiyon.com/

    Cruelty. Interesting word. Perhaps it reflects on the author who sees the free amplifier of his thoughts ripped from his warm fingers? The word cut keeps the metaphor consistent: Cutting the cord, cutting the umbilical, and cutting the unmentionables. Ouch! No wonder some babies scream when slicing and cleaving ensue. Ouch ouch.

    Then the law:

    whether they were laid off or not, several employees we’ve spoken to say they are hiring attorneys. They anticipate difficulties getting their full severance payments, among other issues. Tensions are running high.

    The flocking of the legal eagles will cut off the bright white light of twitterdom. The shadows flicker awaiting the legal LEDs to shine and light the path to justice in free and easy short messages to one’s followers. Yes, the law versus the Elon.

    So what’s left of the Fail Whale’s short messaging system and its functions designed to make “real” information available on a wide range of subjects? The write up reports:

    It was grim. It was also, in any number of ways, pointless: there had been no reason to do any of this, to do it this way, to trample so carelessly over the lives and livelihoods of so many people.

    Was it pointless? I am hopeful that Twitter goes away. The alternatives could spit out a comparable outfit. Time will reveal if those who must tweet will find another easy, cheap way to promote specific ideas, build a rock star like following, and provide a stage for performers who do more than tell jokes and chirp.

    Several observations:

    1. A scramble for other ways to find, build, and keep a loyal following is underway. Will it be the China-linked TikTok? Will it be the gamer-centric Discord? Will it be a ghost web service following the Telegram model?
    2. Fear is perched on the shoulder of the Twitter dependent celebrity. What worked for Kim has worked for less well known “stars.” Those stars may wonder how the Elon volcano could ruin more of their digital constructs.
3. Fame chasers find that the information highway now offers only smaller, less well traveled digital paths. Forget the two roads in the datasphere. The choices are difficult, time consuming to master, and may lead to dead ends or crashes on the information highway’s collector lanes.

    Net net: Change is afoot. Just watch out for smart automobiles with some Elon inside.

    Stephen E Arnold, November 7, 2022

    China Plans to Promote, and Regulate, Digital Humans

    October 14, 2022

    We learn from Rest of World that “Beijing Will Regulate ‘Digital Humans’ in the Metaverse and Beyond.” Because of course it will. The proclamation was issued in the government’s four-year Action Plan, a document that indicates to businesses what it expects of them in the near future. The Chinese seem quite taken with “digital humans,” from virtual idols to game avatars, and President Xi Jinping is eager to capitalize on the trend. Reporter Meaghan Tobin specifies:

    “The plan envisions huge growth in the next few years, projecting that by 2025, revenue will hit $7.3 billion in the capital city alone — and expecting that virtual humans will assist with online banking, shopping, and travel services within the next few years.”

Though the growing virtual idol industry has a real problem with overworked employees, that is not a focus of the plan. It has two main priorities: One, naturally, is to promote the “healthy and orderly development of society.” Aka censorship. The other is the security of personal information. That sounds like a good thing—until one considers the government seeks to secure this data for its own purposes. Protecting users from criminals may be just a side benefit. Citing Hanyu Liu, an analyst of China’s gaming and metaverse industries, Tobin continues:

    “The plan also signals that Beijing will take a more active role in handling the personal data generated by these platforms. Some of the directives outlined in the plan require any user-facing aspect of the digital human industry to be subject to rules that protect information about and generated by platform users, while also treating user data as a resource to be traded on the country’s new data exchanges. As is the case on almost all user-facing tech platforms in China today, Liu noted, any users of metaverse or gaming platforms that could be considered part of the digital human industry will likely be required to tie their online personas to their real-life identification documents.”

    So we should not expect to see a wave of virtual protestors in China any time soon. According to Qiheng Chen, who has analyzed China’s tech policies, this push is an effort to garner talent and funds that will support its larger goal—making the country more self-sufficient in related industries like semiconductors and artificial intelligence. Those do sound a bit more strategic than simply embracing the whimsy of digital pop stars.

    Cynthia Murrell, October 14, 2022
