Another Cultural Milestone for Social Media

April 16, 2024

Well this is an interesting report. PsyPost reports, “Researchers Uncover ‘Pornification’ Trend Among Female Streamers on Twitch.” Authored by Kristel Anciones-Anguita and Mirian Checa-Romero, the study was published in the journal Humanities and Social Sciences Communications. The team analyzed clips from 1,920 livestreams on Twitch.tv, a platform with a global daily viewership of 3 million. They found women streamers sexualize their presentations much more often, and more intensely, than the men do. Also, the number of sexy streams depends on the category. Not surprisingly, broadcasters in categories like ASMR and “Pools, Hot Tubs & Beaches” are more self-sexualized than, say, gamer girls. Shocking, we know.

The findings are of interest because Twitch broadcasters formulate their own images, as opposed to performers on traditional media. There is a longstanding debate, even among feminists, about whether using sex to sell oneself is empowering or oppressive. Or maybe both. Writer Eric W. Dolan notes:

“Studies on traditional media (such as TV and movies) have extensively documented the sexualization of women and its consequences. However, the interactive and user-driven nature of new digital platforms like Twitch.tv presents new dynamics that warrant exploration, especially as they become integral to daily entertainment and social interaction. … This autonomy raises questions about the factors driving self-sexualization, including societal pressures, the pursuit of popularity, and the platform’s economic incentives.”

Or maybe women are making fully informed choices, and framing them as victims of outside pressure is condescending. Just a thought. The issue gets murkier when the subjects, or their audiences, are underage. The write-up observes:

“These patterns of self-sexualization also have potential implications for the shaping of audience attitudes towards gender and sexuality. … ‘Our long-term goals for this line of research include deepening our understanding of how online sexualized culture affects adolescent girls and boys and how we can work to create more inclusive and healthy online communities,’ Anciones-Anguita said. ‘This study is just the beginning, and there is much more to explore in terms of the pornification of culture and its psychological impact on users.’”

Indeed there is. See the article for more details on what the study considered “sexualization” and what it found.

Cynthia Murrell, April 16, 2024

Google Mandates YouTube AI Content Be Labeled: Accurately? Hmmmm

April 2, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The rules for proper use of AI-generated content are still up in the air, but big tech companies are already being pressured to adopt regulations. Neowin reported that “Google Is Requiring YouTube Creators To Post Labels For Realistic AI-Created Content” on videos. This is a smart idea in the age of misinformation, especially when technology can realistically create images and sounds.

Google first announced the new requirement for realistic AI content in November 2023. YouTube’s Creator Studio now includes a tool to label AI content. The new tool is called “Altered content” and asks creators yes-or-no questions. Its simplicity is similar to YouTube’s question about whether a video is intended for children. The “Altered content” label applies to the following:

• “Makes a real person appear to say or do something they didn’t say or do

• Alters footage of a real event or place

• Generates a realistic-looking scene that didn’t actually occur”

The article goes on to say:

“The blog post states that YouTube creators don’t have to label content made by generative AI tools that do not look realistic. One example was ‘someone riding a unicorn through a fantastical world.’ The same applies to the use of AI tools that simply make color or lighting changes to videos, along with effects like background blur and beauty video filters.”

Google says it will have enforcement measures if creators consistently fail to label their realistic AI videos, but the consequences are not specified. YouTube will also reserve the right to place labels on videos itself. There will also be a reporting system viewers can use to notify YouTube of unlabeled videos. It’s not surprising that Google’s algorithms can’t distinguish realistic AI videos from authentic ones. Perhaps the algorithms are outsmarting their creators.

Whitney Grace, April 2, 2024

AI to AI Program for March 12, 2024, Now Available

March 12, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Erik Arnold, with some assistance from Stephen E Arnold (the father), has produced another installment of “AI to AI: Smart Software for Government Use Cases.” The program presents news and analysis about the use of artificial intelligence (smart software) in government agencies.


The ad-free program features Erik S. Arnold, Managing Director of Govwizely, a Washington, DC consulting and engineering services firm. Arnold has extensive experience working on technology projects for the US Congress, the Capitol Police, the Department of Commerce, and the White House. Stephen E Arnold, an adviser to Govwizely, also participates in the program. The current episode is a father-and-son exploration of five important, yet rarely discussed subjects. These include the analysis of law enforcement body camera video by smart software, the appointment of an AI information czar by the US Department of Justice, copyright issues faced by UK artificial intelligence projects, the role of the US Marines in the Department of Defense’s smart software projects, and the potential use of artificial intelligence in the US Patent Office.

The video is available on YouTube at https://youtu.be/nsKki5P3PkA. The Apple audio podcast is at this link.

Stephen E Arnold, March 12, 2024

Forget the Words. Do Short-Form Video by Hiring a PR Professional

March 1, 2024

This essay is the work of a dumb humanoid. No smart software required.

I think “Everyone’s a Sellout Now” is about 4,000 words. The main idea is that traditional publishing is roached. Artists and writers must learn to do video editing or have enough of mommy and daddy’s money to pay someone to promote the creator’s output. The essay is well written; however, I am not sure it conveys a TikTok fact unknown or hiding in the world of BlueSky-type services.


This bright young student should have used a ChatGPT-type service. Thanks, MSFT Copilot. At least you are outputting, which is more than I can say for your fierce but lagging competitor.

I noted this passage:

Because self-promotion sucks.

I think I agree, but why not hire an “output handler”? The OH does the PR.

Here’s another quote to note:

The problem is that America more or less runs on the concept of selling out.

Is there a fix for the gasoline of America? Yes. The essay asserts:

author-content creators succeed by making the visually uninteresting labor of typing on a laptop worthwhile to watch.

The essay concludes with this less-than-uplifting comment:

To achieve the current iteration of the American dream, you’ve got to shout into the digital void and tell everyone how great you are. All that matters is how many people believe you.

Downer? Yes, and what makes it fascinating is that the author gets paid for writing. I think this is a “real job.”

Several observations:

  1. I think smart software is going to do more than write wacko stuff for SmartNews-type publications.
  2. Readers of “downer” essays are likely to go more “down”; that is, become less positive and increasingly antagonistic to what makes the US of A tick.
  3. The essay delivers the news about the importance of TikTok without pointing out that the service is China-affiliated and provides content not permitted for consumption in China.

Net net: Hire a gig worker to do the OH. Pay for PR. Quit complaining or complain in fewer words.

PS. The categorical affirmative of “everyone” is disproved by a single counterexample. As I have pointed out in an essay about a grousing Xoogler, I operate differently. Therefore, the “everyone” is like a fuzzy antecedent. Sloppy.

Stephen E Arnold, March 1, 2024

Second Winds: Maybe There Are Other Factors Like AI?

February 28, 2024

This essay is the work of a dumb humanoid. No smart software required.

I read “Second Winds,” which is an essay about YouTube. The main idea is that YouTube “content creators” are quitting, hanging up their Sony and Canon cameras, parking their converted vans, and finding an apartment, a job, or their parents’ basement. The reasons are, as the essay points out, not too hard to understand:

  1. Creating “content” for a long time and having a desire to do something different like not spending hours creating videos, explaining to mom what their “job” is, or living in a weird world without the support staff, steady income, and recognition that other work sometimes provides.
  2. Burnout because doing video is hard, tedious, and a general contributor to one’s developing a potato-like body type
  3. Running out of ideas (this is the hook to the Czech playwright unknown to most high school students in the US today, I surmise).

I think there is another reason. I have minimal evidence; specifically, the videos of Thomas Gast, a person who ended up in the French Foreign Legion and turned to YouTube. His early videos were explanations about what the French Foreign Legion was, how to enlist, and how to learn useful skills in an austere, male-oriented military outfit. Then he made shooting videos with some of his pals. These morphed into “roughing it” videos in Scandinavia. The current videos include the survival angle and assorted military-themed topics such as M.O.S. (Military-Outdoor-Survival). Some of the videos are in German (Gast’s native language); others are in English. It is clear that he knows his subject. However, he is not producing what I would call consistent content. The format is Mr. Gast talking. He sells merchandise. He hints that he does some odd jobs. He writes books. But the videos are beginning to repeat. For lovers of things associated with brave and motivated people, his work is interesting.

For me, he seems to be getting tired. He changes the name under which his videos appear. He is looking for an anchor in the YouTube rapids.

He is a pre-quitter. Let me hypothesize why:

  1. Making videos takes indoor time. For a person who likes being “outdoors,” the thrill of making videos recedes over time.
  2. YouTube changes the rules, usually without warning. As a result, Mr. Gast avoids certain “obvious” subjects directly germane to a military professional’s core interests.
  3. YouTube money is tricky to stabilize. A very few superstars emerge. Most “creators” cannot balance YouTube with their bank account.

Can YouTube change this? No. Why should it? Google needs revenue. Videos which draw eyeballs make Google money. So far the method works. Googlers just need to jam more ads into popular videos and do little to boost niche “creators.” How many people care about the French Foreign Legion? How many care about Mr. Beast? The delta between Mr. Gast and Mr. Beast illustrates Google’s approach. Get lots of clicks; get Google bucks.

Is there a risk to YouTube in the quitting, which seems to be coalescing into a trend? Yep, my research team and I have identified several factors. Let’s look at a few (not our complete list) quickly:

  1. Alternative channels with fewer YouTube-type hidden rules. One can push out videos via end-to-end encrypted messaging platforms like Telegram. Believe us, the use of E2EE is a definite thing, and it is appealing to millions.
  2. The China-linked TikTok and its US “me too” services like Meta’s allow quick-and-dirty (often literally) videos. Experimentation is easy and lighter weight than YouTube’s method. Mr. Gast could do 30-second videos about weapons or specific French Foreign Legion tasks like avoiding an attack dog hunting one in a forest.
  3. New technology is attracting the attention of “creators” and may offer an alternative to the DIY demands of making videos the old-fashioned way. Once “creators” figure out AI, there may be a video Renaissance, but it may shift the center of gravity from Google’s YouTube to a different service. Maybe Telegram will emerge as the winner? Maybe Google or Meta will be the winner? Some type of change is guaranteed.

The “second winds” angle is okay. There may be more afoot.

Stephen E Arnold, February 28, 2024

AI to AI, Program 2 Now Online

February 22, 2024

This essay is the work of a dumb dinobaby. No smart software required.

My son has converted one of our Zoom conversations into a podcast about AI for government entities. The program runs about 20 minutes and features our “host,” a Deep Fake who points out that he lacks human emotions and tells AI-generated jokes. Erik talks about the British government’s test of chatbots and points out one of the surprising findings from the research. He also describes the use of smart software as Ukrainian soldiers write code in real time to respond to a dynamic battlefield. Erik asks me to explain the difference between predictive AI and generative AI. My use cases focus on border-related issues. He then tries to get me to explain how to sidestep in-agency US government AI software testing. That did not work, and I turned his pointed question into a reason for government professionals to hire him and his team. The final story focuses on a quite remarkable acronym about US government smart software projects. What’s the acronym? Please navigate to https://www.youtube.com/watch?v=fB_fNjzRsf4&t=7s to find out.

New AI to AI Audio and Video Program

February 6, 2024

This essay is the work of a dumb dinobaby. No smart software required.

This is Stephen E Arnold. I wanted to let you know that my son Erik and I have created an audio and video program about artificial intelligence, smart software, and machine learning. What makes this show different is our focus. Both of us have worked on government projects in the US and in other countries. Our experience suggested that a program tailored for those working in government agencies at the national or federal, state, county, or local level might be useful. We will try to combine examples of the use of smart software with related technical information. The theme of each program is “smart software for government use cases.”


In the first episode, our topics include a look at the State of Texas’s use of AI to improve efficiency, a review of the challenges AI poses, a discussion about Human Resources departments, a technical review of AI content crawlers, and lastly a look ahead at smart software in 2024.

Each show segment presents some facts. Then my son and I discuss our assessment of the information. We don’t always see “eye to eye.” That’s where the name of our program originated: AI to AI.

Our digital assistant is named Ivan Aitoo, pronounced “eye-two.” He was created by an artificial intelligence system and plays an important part in the program. He introduces each show with a rundown of the stories in the program. Also, he concludes each program by telling a joke generated by — what else? — yet another artificial intelligence system. Ivan is delightful, but he has no sense of humor and no audience sensitivity.

You can listen to the audio version of the program at this link on the Apple podcast service. A video version is available on YouTube at this link. The program runs about 20 minutes, and we hope to produce a program every two weeks. (The program is provided as an information service, and it includes neither advertising nor sponsored content.)

If you have comments about the program, you can email them to benkent2020 at yahoo dot com.

Stephen E Arnold, February 6, 2024

TikTok Weaponized? Who Knows

January 10, 2024

This essay is the work of a dumb dinobaby. No smart software required.

“TikTok Restricts Hashtag Search Tool Used by Researchers to Assess Content on Its Platform” makes clear that transparency from a commercial entity is a work in progress, or regress as the case may be. NBC reports:

TikTok has restricted one tool researchers use to analyze popular videos, a move that follows a barrage of criticism directed at the social media platform about content related to the Israel-Hamas war and a study that questioned whether the company was suppressing topics that don’t align with the interests of the Chinese government. TikTok’s Creative Center – which is available for anyone to use but is geared towards helping brands and advertisers see what’s trending on the app – no longer allows users to search for specific hashtags, including innocuous ones.


An advisor to TikTok who works at a Big Time American University tells his students that they are not permitted to view the data the mad professor has gathered as part of his consulting work for a certain company affiliated with the Middle Kingdom. The students don’t seem to care. Each is viewing TikTok videos about restaurants serving super sized burritos. Thanks, MSFT Copilot Bing thing. Good enough.

Does anyone really care?

Those with sympathy for the China-linked service do. The easiest way to reduce the hassling from annoying academic researchers or analysts at non-governmental organizations is to become less transparent. The method has proven its value to other firms.

Several observations can be offered:

  1. TikTok is an immensely influential online service for young people. Blocking access to data about what’s available via TikTok and who accesses certain data underscores the weakness of certain US governmental entities. TikTok does something to reduce transparency, and what happens? NBC News does a report. Big whoop, as one of my team likes to say.
  2. Reduced transparency makes scrutiny more difficult. That decision immediately increases my suspicion level about TikTok. The action makes clear that transparency creates unwanted scrutiny and criticism. The idea is, “Let’s kill that fast.”
  3. TikTok competitors have their work cut out for them. No longer can their analysts gather information directly. Third party firms can assemble TikTok data, but that is often slow and expensive. Competing with TikTok becomes a bit more difficult, right, Google?

To sum up, social media short form content can be weaponized. The value of a weapon is greater when its true nature is not known, right, TikTok?

Stephen E Arnold, January 10, 2024

YouTube: Personal Views, Policies, Historical Information, and Information Shaping about Statues

January 4, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I have never been one to tour ancient sites. Machu Picchu? Meh. The weird Roman temple in Nîmes? When’s lunch? The bourbon trail? You must be kidding me! I have a vivid memory of visiting the US Department of Justice building for a meeting, walking through the Hall of Justice, and seeing Lady Justice covered up. I heard that the drapery cost US$8,000. I did not laugh, nor did I make any comments about cover-ups at that DoJ meeting or subsequent meetings. What a hoot! Other officials have covered up statues and possibly other disturbing things.

I recall the Deputy Administrator who escorted me and my colleague to a meeting remarking, “Yeah, Mr. Ashcroft has some deeply held beliefs.” Yep, personal beliefs, propriety, not offending those entering a US government facility, and a desire to preserve certain cherished values. I got it. And I still get it. Hey, who wants to lose a government project because some sculpture artist type did not put clothes on a stone statue?


Are large technology firms in a position to control, shape, propagandize, and weaponize information? If the answer is, “Sure”, then users are little more than puppets, right? Thanks, MSFT Copilot Bing thing. Good enough.

However, there are some people who do visit historical locations. Many of these individuals scrutinize the stone work, the carvings, and the difficulty of moving a 100 ton block from Point A (a quarry 50 miles away) to Point B (a lintel in the middle of nowhere). I am also ignorant of art because I skipped Art History in college. I am clueless about ancient history. (I took another useless subject like a math class.) And many of these individuals have deep-rooted beliefs about the “right way” to present information in the form of stone carvings.

Now let’s consider a YouTuber who shoots videos in temples in southeast Asia. The individual works hard to find examples of deep meanings in the carvings beyond what the established sacred texts present. His hobby horse, as I understand the YouTuber, is that ancient aliens, fantastical machines, and amazing constructions are what many carvings are “about.” Obviously, if one embraces the received wisdom about ancient texts from India and adjacent countries, the presentation of statues with disturbing images and even more troubling commentary is a problem. I think this is the same type of problem that a naked statue in the US Department of Justice posed.

The YouTuber allegedly is Praveen Mohan, and his most recent video is “YouTube Will Delete Praveen Mohan Channel on January 31.” Mr. Mohan’s angle is to shoot a video of an ancient carving in a temple and suggest that the stone work conveys meanings orthogonal to the generally accepted story about giant temple carvings. From my point of view, I have zero clue if Mr. Mohan is on the money with his analyses or if he is like someone who thinks that Peruvian stone masons melted granite for Cusco’s walls. The point of the video is that taking pictures of historical sites and their carvings allegedly violates YouTube’s assorted rules, regulations, codes, mandates, and guidelines.

Mr. Mohan expresses the opinion that he will be banned, blocked, downchecked, punished, or made into a poster child for stone pornography or some similar punishment. He shows images which have been demonetized. He shows his “dashboard” with visual proof that he is in hot water with the Alphabet Google YouTube outfit. He shows notices asserting that his videos violate copyright. Okay. Maybe a reincarnated stone mason from ancient times has hired a lawyer, contacted Google from a quantum world, and frightened the YouTube wizards? I don’t know.

Several questions arose when my team and I discussed this interesting video addressing YouTube’s actions toward Mr. Mohan. Let me share several with you:

  1. Is the alleged intentional action against Mr. Mohan motivated by Alphabet Google YouTube managers with roots in southeast Asia? Maybe a country like India? Maybe?
  2. Is YouTube going after Mr. Mohan because making videos about religious sites, icons, and architecture is indeed a violation of copyright? I thought India was reasonably aggressive in its enforcement of its laws. Has Alphabet Google YouTube decided to help India and other southeast Asian countries protect their ancient artisans’ work?
  3. Has Mr. Mohan created a legal problem for YouTube and the company is taking action to shore up its legal arguments should the naked statue matter end up in court?
  4. Is Mr. Mohan’s assertion about directed punishment accurate?

Obviously there are many issues in play. Should one try to obtain more clarification from Alphabet Google YouTube? That’s a great idea. Mr. Mohan may pursue it. However, will Google’s YouTube or the Alphabet senior management provide clarification about policies?

I will not hold my breath. But those statues covered up in the US Department of Justice reflected one person’s perception of what was acceptable. That’s something I won’t forget.

Stephen E Arnold, January 4, 2024

Big Tech, Big Fakes, Bigger Money: What Will AI Kill?

December 7, 2023

This essay is the work of a dumb dinobaby. No smart software required.

I don’t read The Hollywood Reporter. I did one job for a Hollywood big wheel. That was enough for me. I don’t drink. I don’t take drugs unless prescribed by my comic book addicted medical doctor in rural Kentucky. I don’t dress up and wear skin bronzers in the hope that my mobile will buzz. I don’t stay out late. I don’t fancy doing things which make my ethical compass buzz more angrily than my mobile phone. Therefore, The Hollywood Reporter does not speak to me.

One of my research team sent me a link to “The Rise of AI-Powered Stars: Big Money and Risks.” I scanned the write-up and then went through it again. By golly, The Hollywood Reporter hit on an “AI will kill us” angle not getting as much publicity as Sam AI-Man’s minimal-substance interview.


Can a techno feudalist generate new content using what looks like “stars” or “well known” people? Probably. A payoff has to be within sight. Otherwise, move on to the next next big thing. Thanks, MSFT Copilot. Good enough cartoon.

Please, read the original and complete article in The Hollywood Reporter. Here’s the passage which rang the insight bell for me:

tech firms are using the power of celebrities to introduce the underlying technology to the masses. “There’s a huge possible business there and I think that’s what YouTube and the music companies see, for better or for worse.”

Let’s think about these statements.

First, the idea of consumerizing AI for the masses is interesting. However, I interpret the insight as having several force vectors:

  1. Become the plumbing for the next wave of user-generated content (UGC)
  2. Get paid by users AND impose an advertising tax on the UGC
  3. Obtain real-time data about the efficacy of specific smart generation features so that resources can be directed to maintain a “moat” against would-be attackers.

Second, by signing deals with people who to me are essentially unknown, the techno giants are digging some trenches and putting somewhat crude asparagus obstacles where the competitors are likely to drive their AI machines. The benefits include:

  1. First-hand experience with how the stars’ ego systems respond
  2. Data regarding the cost of signing up a star, payouts, and selling ads against the content
  3. Determining what pushback exists among [a] fans and [b] the historical middlemen who have just been put on notice that they can find their future elsewhere.

Finally, the idea of the upside and the downside for particular entities and companies is interesting. There will be winners and losers. Right now, Hollywood is a loser. TikTok is a winner. The companies identified in The Hollywood Reporter want to be winners — big winners.

I may have to start paying more attention to this publication and its stories. Good stuff. What will AI kill? The cost of some human “talent”?

Stephen E Arnold, December 7, 2023
