An Ad Agency Decides: No Photoshopping of Bodies or Faces for Influencers

April 11, 2022

Presumably Ogilvy will exempt retouched food photos (what? hamburgers from a fast food outlet look different from the soggy burger in a box?). Will Ogilvy outlaw retouched vehicle photographs (what? the Toyota RAV4’s paint on your ride looks different from the RAV4’s in print and online advertisements?). Will models from a zippy London or Manhattan agency look different from the humanoid doing laundry at 11:15 on a Tuesday in an Earl’s Court laundrette (what? a model without makeup, some retouching, and slick lighting?). Yes, Ogilvy has standards. See this CBS News item, which is allegedly accurate. Overbilling is not Photoshopping. Overbilling is a different beastie.

I think I know the answer to my doubts about the scope of this ad edit as reported in “Ogilvy Will No Longer Work with Influencers Who Edit Their Bodies or Faces for Ads.” The write up reports:

Ogilvy UK will no longer work with influencers who distort or retouch their bodies or faces for brand campaigns in a bid to combat social media’s “systemic” mental health harms.

I love the link to mental health harms. Here’s a quote which I find amusing:

The ban applies to all parts of the Ogilvy UK group, which counts the likes of Dove among its clients. Dove’s global vice president external communications and sustainability, Firdaous El Honsali, came out in support of the policy. “We are delighted to see our partner Ogilvy tackling this topic. Dove only works with influencers that do not distort their appearance on social media – and together with Ogilvy and our community of influencers, we have created several campaigns that celebrate no digital distortion,” El Honsali says.

Several observations:

  1. Ogilvy is trying to adjust to the new world of selling because influencers don’t think about Ogilvy. If you want an influencer, my hunch is that you take what the young giants offer.
  2. Like newspapers, ad agencies are trapped in models from the heydays of broadsheets sold on street corners. By the way, how are those with old business models doing in the zip zip TikTok world?
  3. Talking about rules is easy. Enforcing them is difficult. I bet the PowerPoint used in the meeting to create these rules for influencers was a work of marketing art.

Yep, online advertising, consolidation of agency power, and the likes of Amazon, Facebook (Zuckbook), and YouTube illustrate one thing: The rules are set or left fuzzy by the digital platforms, not the intermediaries.

And the harm thing? Yep, save the children one influencer at a time.

Stephen E Arnold, April 11, 2022

Twitter and a Loophole? Unfathomable

April 6, 2022

Twitter knows Russia is pushing false narratives about the war in Ukraine. That is why it now refuses to amplify tweets from Russian state-affiliated media outlets like RT or Sputnik. However, the platform is not doing enough to restrain the other hundred-some Russian government accounts, according to the BBC News piece, “How Kremlin Accounts Manipulate Twitter.” Reporter James Clayton cites QUT Digital Media Research Centre‘s Tim Graham as he writes:

“Intrigued by this spider web of Russian government accounts, Mr Graham – who specializes in analyzing co-ordinated activity on social media – decided to investigate further. He analyzed 75 Russian government Twitter profiles which, in total, have more than 7 million followers. The accounts have received 30 million likes, been retweeted 36 million times and been replied to 4 million times. He looked at how many times each Twitter account retweeted one of the other 74 profiles within an hour. He discovered that the Kremlin’s network of Twitter accounts work together to retweet and drive up traffic. This practice is sometimes called ‘astroturfing’ – when the owner of several accounts uses the profiles they control to retweet content and amplify reach. ‘It’s a coordinated retweet network,’ Mr Graham says. ‘If these accounts weren’t retweeting stuff at the same time, the network would just be a bunch of disconnected dots. … They are using this as an engine to drive their preferred narrative onto Twitter, and they’re getting away with it,’ he says. Coordinated activity, using multiple accounts, is against Twitter’s rules.”
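The co-retweet analysis Graham describes, counting how often each account retweets another within an hour, can be sketched as a simple edge-counting pass over timestamped retweet records. This is an illustrative reconstruction, not Graham's actual method or code; the record layout, the one-hour window, and the account handles in the toy data are all assumptions for the sketch.

```python
from collections import Counter
from datetime import datetime, timedelta

def coretweet_edges(retweets, window=timedelta(hours=1)):
    """Count how often each account retweets another account's post
    within `window` of the original tweet. `retweets` is a list of
    (retweeter, original_author, retweet_time, original_time) tuples.
    Dense, reciprocal edges suggest a coordinated retweet network."""
    edges = Counter()
    for retweeter, author, rt_time, orig_time in retweets:
        if retweeter != author and rt_time - orig_time <= window:
            edges[(retweeter, author)] += 1
    return edges

# Toy data: two hypothetical accounts boosting each other quickly;
# the third record falls outside the one-hour window and is ignored.
t0 = datetime(2022, 3, 1, 12, 0)
sample = [
    ("@gov_account_a", "@gov_account_b", t0 + timedelta(minutes=5), t0),
    ("@gov_account_b", "@gov_account_a", t0 + timedelta(minutes=10), t0),
    ("@gov_account_a", "@gov_account_b", t0 + timedelta(hours=3), t0),
]
print(coretweet_edges(sample))
```

On real data one would feed in all 75 profiles' retweet timelines; accounts that "weren't retweeting stuff at the same time" would show up as the disconnected dots Graham mentions.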

Twitter is openly more lenient on tweets by government officials under what it calls “public interest exceptions.” Even so, we are told there are supposed to be no exceptions on coordinated behavior. The BBC received no response from Twitter officials when it asked them about Graham’s findings. Clayton generously notes it can be difficult to prove content is false amid the chaos of war, and the platform has been removing claims as they are proven false. He also notes Facebook and other social media platforms have a similar Russia problem. The article allows Twitter may eventually ban Kremlin accounts entirely, as it banned Donald Trump in January 2021. Perhaps.

Cynthia Murrell, April 6, 2022

California: Knee Jerk Reflex Decades After the Knee Cap Whack

March 23, 2022

Talk about reflexes. I read “California Bill Would Let Parents Sue Social Media Companies for Addicting Kids.” [You will have to pay to read the original and wordy write up.] The main idea is that an attentive parent with an ambulance chaser or oodles of cash can sue outfits like the estimable Meta Zuck thing, the China-linked TikTok, or the “we do good” YouTube and other social media entities. (No, I don’t want to get into definitions. I will leave that to the legal eagles.) The write up states:

Assembly Bill 2408, or the Social Media Platform Duty to Children Act, was introduced by Republican Jordan Cunningham of Paso Robles and Democrat Buffy Wicks of Oakland with support from the University of San Diego School of Law Children’s Advocacy Institute. It’s the latest in a string of legislative and political efforts to crack down on social media platforms’ exploitation of their youngest users.

I like the idea that commercial enterprises should not addict child users. I want to point out that the phrasing is ambiguous. I assume the real news outfit means content consumers under a certain age, not the less positive meaning of the phrase.

I think the legislation is a baby step in a helpful direction. But it has taken decades for the toddler to figure out how to find the digital choo choo train. The reflex reaction seems to lag as well. Whack. And years later a foot moves forward.

Stephen E Arnold, March 23, 2022

TikTok: Child Harm?

March 10, 2022

I will keep this brief. Navigate to “TikTok under Investigation in US over Harms to Children.” The article explains why an Instagram probe is now embracing TikTok. From my point of view, this “harm” question must be addressed. Glib statements like “Senator, I will send you a report” have allowed certain high technology firms to skate with the wind at their backs. Now the low friction surface is cracking. The “environment” of questioning is changing. Will the digital speed skaters fall into chilly water or, with the help of legal eagles, glide over the danger spots? Kudos to the US attorneys general who, like me, believe that more than cute comments are needed. Note: I will be speaking at the 2022 National Cyber Crime Conference. The professionals at the Massachusetts Attorney General’s office are crafting another high value program.

Stephen E Arnold, March 10, 2022

Who Is the Bigger Disruptor: A Twitch Streamer or a Ring Announcer?

February 17, 2022

One thing people can agree on is that there is a lot of misinformation surrounding COVID-19. What is considered “misinformation” depends on an individual’s beliefs. The fact remains, however, that COVID-19 is real, vaccines do not contain GPS chips, and the pandemic has been one big pain. Whenever it is declared we are in a post-pandemic world, misinformation will be regarded as one of the biggest fallouts with an on-going ripple effect.

The Verge explores how one controversial misinformation spreader will be discussed for years to come: “The Joe Rogan Controversy Is What Happens When You Put Podcasts Behind A Wall.” Misinformation supporters, among them conspiracy theorists, used to be self-contained in their own corner of the globe, but they erupted out of their crazy zone like Vesuvius engulfing Pompeii. Rogan’s faux pas led Spotify to remove more than seventy episodes of his show rather than deplatform him.

Other podcast platforms celebrated the demise of a popular Spotify show and attempted to sell more subscriptions for their own content. These platforms should not be celebrating, though. Spotify owns Rogan’s show, and his controversy has tarnished the platform; the same thing could happen at any time to Spotify’s rivals. Rogan is not the only loose cannon with a podcast, and it does not take much for content to be deemed offensive and then canceled. The rival platforms might be raking in more dollars right now, but:

“We’re moving away from a world in which a podcast player functions as a search engine and toward one in which they act as creators and publishers of that content. This means more backlash and room for questions like: why are you paying Rogan $100 million to distribute what many consider to be harmful information? Fair questions!

This is the cost of high-profile deals and attempts to expand podcasting’s revenue. Both creators and platforms are implicated in whatever content’s distributed, hosted, and sold, and both need to think clearly about how they’ll handle inevitable controversy.”

There is probably an argument about the right to Freedom of Speech throughout this controversy, but there is also the need to protect people from harm. It is one large, gray zone with only a tightrope to walk across it.

So Amouranth or Mr. Rogan? Jury’s out.

Whitney Grace, February 17, 2022

A News Blog Complains about Facebook Content Policies

January 20, 2022

Did you know that the BMJ was known in 1840 as the Provincial Medical and Surgical Journal and, after some organizational and administrative cartwheels, emerged in 1857 as the British Medical Journal? Now the $64 question, “Did you know that Facebook appears to consider the BMJ a Web log or blog?” Quite a surprise to me and probably to quite a few others who have worked in the snooty world of professional publishing.

The most recent summary of the dust up between the Meta Zuck outfit and the “news blog” BMJ appears in “Facebook Versus The BMJ: When Fact Checking Goes Wrong.” The write up contains a number of Meta gems, and a read of the “news blog” item is a good use of time.

I want to highlight one item from the write up:

Cochrane, the international provider of high quality systematic reviews of medical evidence, has experienced similar treatment by Instagram, which, like Facebook, is owned by the parent company Meta. A Cochrane spokesperson said that in October its Instagram account was “shadow banned” for two weeks, meaning that “when other users tried to tag Cochrane, a message popped up saying @cochraneorg had posted material that goes against ‘false content’ guidelines” (fig 1). Shadow banning may lead to posts, comments, or activities being hidden or obscured and stop appearing in searches. After Cochrane posted on Instagram and Twitter about the ban, its usual service was eventually restored, although it has not received an explanation for why it fell foul of the guidelines in the first place.

I like this shadow banning thing.

How did the Meta Zuck respond? According to the “news blog”:

Meta directed The BMJ to its advice page, which said that publishers can appeal a rating directly with the relevant fact checking organization within a week of being notified of it. “Fact checkers are responsible for reviewing content and applying ratings, and this process is independent from Meta,” it said. This means that, as in The BMJ’s case, if the fact checking organization declines to change a rating after an appeal from a publisher, the publisher has little recourse. The lack of an independent appeals process raises concerns, given that fact checking organizations have been accused of bias.

There are other interesting factoids in the “news blog’s” write up.

Quickly, several observations:

  1. Opaque actions plague the “news blog”, the British Medical Journal and other luminaries; for example, the plight of the esteemed performer Amouranth of the Inflate-a-Pool on Amazon Twitch. Double talking and fancy dancing from Meta- and Amazon-type outfits just call attention to the sophomoric and Ted Mack Amateur Hour approach to an important function of a publicly-traded organization with global influence.
  2. A failure of “self regulation” can cause airplanes to crash and financial disruption to occur. Now knowledge is the likely casualty of a lack of a backbone and an ethical compass. Right now I am thinking of an ethics-free, shape-shifting, octopus-like character with zero interest in other creatures except their function as money generators.
  3. A combination of “act now, apologize if necessary” has fundamentally altered the social contract among corporations, governments, and individuals.

So now the BMJ (founded in 1840) has been morphed into a “news blog” pitching cow doody?

Imposed change is warranted perhaps? Adulting is long overdue at a certain high-tech outfit and a number of others of this ilk.

Stephen E Arnold, January 20, 2022

Open Source How To: Hook Teams to Social Media

January 19, 2022

I read “Internal Facebook Note: Here Is a ‘Psychological Trick’ to Target Teens.” Interesting stuff. One of the insightful items in the write up is that Facebook shut down the TBH operation. Well, that’s an assertion which a prudent person may want to verify. The write up also contains one of the Cambridge Analytica-type insights, a mini step by step guide to hooking a target sector.

Here’s the how to:

TBH noticed that teens often list their high school in their Instagram bio. So, using a private Instagram account of its own, the company would visit a school’s location page and follow all accounts that included the school’s name. TBH made sure its private account featured a mysterious call to action — something like “You’ve been invited to the new RHS app — stay tuned!” The startup would make one private account for each high school it wanted to target. The company found teens were naturally curious and would follow the private account back.

Helpful, particularly to bad actors without access to a pool of psychological tricks.

Stephen E Arnold, January 19, 2022

Mobile: Unexpected Consequences or Fuel for Social Media?

December 29, 2021

“Study Finds Problematic Smartphone Use during Pandemic” could raise some fruitful avenues for researchers to explore. It might also shed light on Frances Haugen’s document dump and her comments during her “Facebook is evil” road show. The article reports:

Statistical analysis of the survey results found that low sense of control, fear of missing out, and repetitive negative thinking were, indeed, all associated with greater severity of problematic smartphone use.

How does one fuel craziness? My hunch is that one tosses in content display which sparks a user’s clicking or doom scrolling.

If so, the impact of digital information via an addictive chunk of hardware might be the lever needed to topple the world the way it was in the years before the Big Tech revolution and a handy dandy pocket phone, computer, and content dispensing device.

Managing a Facebook-type problem might not work if the corrosive impacts require a smartphone dance partner. The same charge might be leveled against mobile devices. Thus, meaningful dampening of the current digital craziness would require unplugging both the Facebook-like outfits and the mobile gizmo folks.

Unlikely? You bet. Forecast? Yep, more craziness ahead for 2022.

Stephen E Arnold, December 29, 2021

Reading Is Fundamental for Some

December 21, 2021

Reading is not a dead habit, as the media would have you believe. It has only changed, though not necessarily for the best. Ben Wajdi discussed how reading habits have changed in “Is Internet Addiction Eradicating The Habit Of Reading?” He does not approach the world’s current reading situation as a condescending elitist who believes any new technology is inferior to the old. He instead focuses on how reading habits have changed and how they could improve.

Wajdi discusses how famous writers view technology, reading, and writing. Some love it, while others hate it, but Wajdi remains neutral up to a point. The writers he examines are privileged because they live in developed countries; Wajdi did not have the same advantages, and he explains why the Internet is a great tool:

“On the one hand, I resonate with Franzen’s take on the internet, and on corporations, yet on the other hand, I can’t deny that for someone like me, a marginalized North African kid whose first interaction with any part of the internet dates back to circa 2005, the internet was the only way I could’ve accessed the body of knowledge that could fulfill my curiosity, and my eternal search for a way out of the “sh%thole”. Without the internet, I would have been a very different man. Without it, I would have succumbed to all the currents of local nationalism, religious fanaticism, and the currents of elite leftists running the “shithole” and confining everyone with them in eternal misery.”

People do spend way too much time attached to their screens. It has become an addiction. Depending on the individual, it could be as mentally consuming as alcoholism or as limiting as biting one’s nails. Wajdi encourages people to become conscious of their habits, relearn how to absorb what they read, and think critically about it. Was the same argument made when young Egyptians spent too much time staring at hieroglyphics?

Whitney Grace, December 21, 2021

Red Kangaroos? Maybe a Nuisance. Online Trolls? Very Similar

December 16, 2021

It is arguable that trolls are the worst bullies in history, because online anonymity means they do not face repercussions. Trolls’ behavior has caused innumerable harms, including suicides, psychological problems, and real-life bullying. Local and international governments have taken measures to prevent cyber bullying, and now, ABC Australia reports, the country continent is taking a stand: “Social Media Companies Could Be Forced To Give Out Names And Contact Details, Under New Anti-Troll Laws.”

Australia’s federal government is drafting laws that could force social media companies to reveal trolls’ identities. The new legislation aims to hold trolls accountable for their poor behavior by having social media companies collect user information and share it with courts in defamation cases. The new laws would also hold social media companies liable for hosted content instead of users and management companies. Australia’s prime minister stated:

“Prime Minister Scott Morrison said he wanted to close the gap between real life and discourse online. ‘The rules that exist in the real world must exist in the digital and online world,’ he said. ‘The online world shouldn’t be a wild west, where bots and bigots and trolls and others can anonymously go around and harm people and hurt people.’”

The new law would require social media companies to have a complaints process for people who feel they have been defamed. The process would ask users to delete defamatory material. If they do not, the complaint could be escalated: user details would be shared so courts can issue orders and people can pursue defamation action.

One of the biggest issues facing the legislation is who is responsible for trolls’ content. The new law wants social media companies to be held culpable. The complaints system would allow the social media companies to use it as a defense in defamation cases.

The article does not discuss what is deemed “defamatory” content. Anything and everything is offensive to someone, so the complaints system will be abused. What rules will be instituted to prevent abuse of the complaints system? Who will monitor it, and who will pay for it? An analogous example is YouTube’s system for determining what constitutes “appropriate” children’s videos and for flagging videos for intellectual property theft as well as inappropriate content. In short, YouTube’s system is not doing well.

The social media companies should be culpable in some way, such as sharing user information when there is dangerous behavior, e.g., suicide, any kind of abuse, child pornography, planned shooting attacks, and other crimes. Sexist and abusive comments that are not an opinion, e.g., saying someone should die or is stupid for being a woman, should be monitored and users held accountable. It is a fine line, though, determining the dangers in many cases.

Whitney Grace, December 16, 2021
