META and Another PR Content Marketing Play

October 4, 2024

This write up is the work of a dinobaby. No smart software required.

I worked through a 3,400 word interview in the orange newspaper. “Alice Newton-Rex: WhatsApp Makes People Feel Confident to Be Themselves: The Messaging Platform’s Director of Product Discusses Privacy Issues, AI and New Features for the App’s 2bn Users” contains a number of interesting statements. The write up is behind the Financial Times’s paywall, but it is worth subscribing if you are monitoring what Meta (the Zuck) is planning to do with regard to E2EE or end-to-end encrypted messaging. I want to pull out four statements from the WhatsApp professional. My approach will be to present the Meta statements and then pose one question which I thought the interviewer should have asked. After the quotes, I will offer a few observations, primarily focusing on Meta’s apparent “me too” approach to innovation. Telegram’s feature cadence appears to be two to four years ahead of Meta’s own efforts.


A WhatsApp user is throwing big, soft, fluffy snowballs at the company. Everyone is impressed. Thanks, MSFT Copilot. Good enough.

Okay, let’s look at the quotes which I will color blue. My questions will be in black.

Meta Statement 1: The value of end-to-end encryption.

We think that end-to-end encryption is one of the best technologies for keeping people safe online. It makes people feel confident to be themselves, just like they would in a real-life conversation.

What data does Meta have to back up this “we think” assertion?

Meta Statement 2: Privacy

Privacy has always been at the core of WhatsApp. We have tons of other features that ensure people’s privacy, like disappearing messages, which we launched a few years ago. There’s also chat lock, which enables you to hide any particular conversation behind a PIN so it doesn’t appear in your main chat list.

Always? (That means that privacy is the foundation of WhatsApp in a categorically affirmative way.) What do you mean by “always”?

Meta Statement 3:

… we work to prevent abuse on WhatsApp. There are three main ways that we do this. The first is to design the product up front to prevent abuse, by limiting your ability to discover new people on WhatsApp and limiting the possibility of going viral. Second, we use the signals we have to detect abuse and ban bad accounts — scammers, spammers or fake ones. And last, we work with third parties, like law enforcement or fact-checkers, on misinformation to make sure that the app is healthy.

What data can you present to back up these statements about what Meta does to prevent abuse?

Meta Statement 4:

if we are forced under the Online Safety Act to break encryption, we wouldn’t be willing to do it — and that continues to be our position.

Is this position tenable in light of France’s action against Pavel Durov, the founder of Telegram, and the financial and legal penalties nation states can impose and are imposing on Meta?

Observations:

  1. Just like Mr. Zuck’s cosmetic and physical makeover, these statements describe a WhatsApp which is out of step with the firm’s historical behavior.
  2. The changes in WhatsApp appear to emulate some Telegram innovations but with a two to three year time lag. I wonder if Meta views Telegram as a live test of certain features and functions.
  3. The responsiveness of Meta to lawful requests has, based on what I have heard from my limited number of contacts, been underwhelming. Cooperation is an area in which Meta needs additional investment and better incentives for the employees who interact with government personnel.

Net net: A fairly high-profile PR and content marketing play. FT is into kid-glove interviews and throwing big soft Nerf balls, it seems.

Stephen E Arnold, October 4, 2024

The Zuck: Limited by Regulation. Is This a Surprise?

September 25, 2024

Privacy laws in the EU are having an effect on Meta’s actions in that region. That’s great. But what about the rest of the world? When pressed by Australian senators, the company’s global privacy director Melinda Claybaugh fessed up. “Facebook Admits to Scraping Every Australian Adult User’s Public Photos and Posts to Train AI, with No Opt-Out Option,” reports ABC News. Journalist Jake Evans writes:

“Labor senator Tony Sheldon asked whether Meta had used Australian posts from as far back as 2007 to feed its AI products, to which Ms Claybaugh responded ‘we have not done that’. But that was quickly challenged by Greens senator David Shoebridge. Shoebridge: ‘The truth of the matter is that unless you have consciously set those posts to private since 2007, Meta has just decided that you will scrape all of the photos and all of the texts from every public post on Instagram or Facebook since 2007, unless there was a conscious decision to set them on private. That’s the reality, isn’t it?’ Claybaugh: ‘Correct.’ Ms Claybaugh added that accounts of people under 18 were not scraped, but when asked by Senator Sheldon whether public photos of his own children on his account would be scraped, Ms Claybaugh acknowledged they would. The Facebook representative could not answer whether the company scraped data from previous years of users who were now adults, but were under 18 when they created their accounts.”

Why do users in Australia not receive the same opt-out courtesy those in the EU enjoy? Simple, responds Ms. Claybaugh—their government has not required it. Not yet, anyway. But Privacy Act reforms are in the works there, a response to a 2020 review that found laws to be outdated. The updated legislation is expected to be announced in August—four years after the review was completed. Ah, the glacial pace of bureaucracy. Better late than never, one supposes.

Cynthia Murrell, September 25, 2024

What are the Real Motives Behind the Zuckerberg Letter?

September 5, 2024

Senior correspondent at Vox Adam Clarke Estes considers the motives behind Mark Zuckerberg’s recent letter to Rep. Jim Jordan. He believes “Mark Zuckerberg’s Letter About Facebook Censorship Is Not What it Seems.” For those who are unfamiliar: The letter presents no new information, but reminds us the Biden administration pressured Facebook to stop the spread of Covid-19 misinformation during the pandemic. Zuckerberg also recalls his company’s effort to hold back stories about Hunter Biden’s laptop after the FBI warned they might be part of a Russian misinformation campaign. Now, he insists, he regrets these actions and vows never to suppress “freedom of speech” due to political pressure again.

Naturally, Republicans embrace the letter as further evidence of wrongdoing by the Biden-Harris administration. Many believe it is evidence Zuckerberg is kissing up to the right, even though he specifies in the missive that his goal is to be apolitical. Estes believes there is something else going on. He writes:

“One theory comes from Peter Kafka at Business Insider: ‘Zuckerberg very carefully gave Jordan just enough to claim a political victory — but without getting Meta in any further trouble while it defends itself against a federal antitrust suit.’ To be clear, Congress is not behind the antitrust lawsuit. The case, which dates back to 2021, comes from the FTC and 40 states, which say that Facebook illegally crushed competition when it acquired Instagram and WhatsApp, but it must be top of mind for Zuckerberg. In a landmark antitrust case less than a month ago, a federal judge ruled against Google, and called it a monopoly. So antitrust is almost certainly on Zuckerberg’s mind. It’s also possible Zuckerberg was just sick of litigating events that happened years ago and wanted to close the loop on something that has caused his company massive levels of grief. Plus, allegations of censorship have been a distraction from his latest big mission: to build artificial general intelligence.”

So is it a coincidence this letter came out during the final weeks of an extremely close, high-stakes presidential election? Perhaps. An antitrust ruling like the one against Google could be inconvenient for Meta. Curious readers can navigate to the article for more background and more of Estes’ reasoning.

Cynthia Murrell, September 5, 2024

Good News: Meta To Unleash Automated AI Ads

August 19, 2024

Facebook generated its first revenue streams from advertising. Meta, Facebook’s parent company, continues to make huge profits from ads. Its products use cookies for targeted ads, collect user information to sell, and more. It’s not surprising that AI will soon be entering the picture, says Computer Weekly: “Meta’s Zuckerberg Looks Ahead To AI-Generated Adverts.”

Meta increased its second-quarter revenues 22% year over year. The company also reported that the cost of revenue increased by 23% due to higher infrastructure costs and Reality Labs needing a lot of cash. Zuckerberg explained that advertisers used to reach out to his company about the target audiences they wanted to reach. Meta eventually became so advanced that its ad systems predicted target audiences better than the advertisers could. Zuckerberg plans for Meta to do the majority of work for advertising agencies. All they will need to provide Meta is a budget and a business objective.

Meta is investing in and developing technology to make more money via AI. Meta is playing the long game:

“When asked about the payback time for investments in AI, Meta’s chief financial officer, Susan Li, said: ‘On our core AI work, we continue to take a very return on investment-based approach. We’re still seeing strong returns as improvements to both engagement and ad performance have translated into revenue gains, and it makes sense for us to continue investing here.’

Looking at generative AI (GenAI), she added: ‘We don’t expect our GenAI products to be a meaningful driver of revenue in 2024, but we do expect that they’re going to open up new revenue opportunities over time that will enable us to generate a solid return off of our investment…’”

Meta might see a slight dip in profit margins because it is investing in better technology, but AI-generated ads will pay for themselves, literally.

Whitney Grace, August 19, 2024

Meta Deletes Workplace. Why? AI!

June 7, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Workplace was Meta’s attempt to jump into the office-productivity ring and face off against the likes of Slack and MS Teams. It did not fare well. Yahoo Finance shares the brief write-up, “Meta Is Shuttering Workplace, Its Enterprise Version of Facebook.” The company is spinning the decision as a shift to bigger and better things. Bloomberg’s Kurt Wagner cites reporting from TechCrunch as he writes:

“The service operated much like the original Facebook social network, but let people have separate accounts for their work interactions. Workplace had as many as 7 million total paying subscribers in May 2021. … Meta once had ambitious plans for Workplace, and viewed it as a way to make money through subscriptions as well as a chance to extend Facebook’s reach by infusing the product into work and office settings. At one point, Meta touted a list of high-profile customers, including Starbucks Corp., Walmart Inc. and Spotify Technology SA. The company will continue to focus on workplace-related products, a spokesperson said, but in other areas, such as the metaverse by building features for the company’s Quest VR headsets.”

The Meta spokesperson repeated the emphasis on those future products, also stating:

“We are discontinuing Workplace from Meta so we can focus on building AI and metaverse technologies that we believe will fundamentally reshape the way we work.”

Meta will continue to use Workplace internally, but everyone else has until the end of August 2025 before the service ends. Meta plans to keep user data accessible until the end of May 2026. The company also pledges to help users shift to Zoom’s Workvivo platform. What, no forced migration into the Metaverse and their proprietary headsets? Not yet, anyway.

Cynthia Murrell, June 7, 2024

Facebook Scams: A Warning or a Tutorial?

May 27, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

This headline caught my attention: “Facebook Marketplace’s Dirty Dozen: The 15 Most Common Scams and How to Avoid Them.” I had hopes of learning about new, clever, wonderfully devious ways to commit fraud and other larcenous acts. Was I surprised? Here’s a list of the “15 most common scams.” I want to point out that there is scant support (a nice way of saying “no back up data”) for the assertions. (I have a hunch that this “helpful” write up was assisted with some sort of software, possibly dumb software.) Let’s look at the list of the dozen’s 15 scams:

  1. Defective or counterfeit gadgets. Fix: Inspection required
  2. Bait-and-switch. Fix: Don’t engage in interaction
  3. Fake payment receipts. Fix: What? I don’t understand
  4. Mouth-watering giveaways. Fix: Ignore
  5. Overpayment by a buyer. Fix: What? I don’t understand
  6. Moving conversations out of Facebook. Fix: Don’t have them.
  7. Fake rental posting. Fix: Ignore
  8. Advance payment requests. Fix: Ignore
  9. Asking for confirmation codes. Fix: Ignore
  10. Asking for car deposits. Fix: Say, “No”
  11. Requesting unnecessary charges. Fix: Ignore
  12. Mailing items. Fix: Say, “No”
  13. Fake claims of lost packages. Fix: What?
  14. Counterfeit money. Fix: What?
  15. Clicking a link to fill out more information. Fix: Don’t

My concern with this list is that it does not protect the buyer. If anything, it provides a checklist of tactics for a would-be bad actor. The social engineering aspect of fraud is often more important than the tactic. In the “emotional” moment, a would-be buyer can fall for the most obvious scam; for example, trusting the seller because the request for a deposit seems reasonable or buying something else from the seller.


Trying to help? The customer or the scammer? You decide. Thanks, MSFT Copilot. Good cartoon. In your wheelhouse, is it?

What does one do to avoid Facebook scams? Here’s the answer:

Fraudsters can exploit you on online marketplaces if you’re not careful; it is easy not to be aware of a scam if you’re not as familiar. You can learn to spot common Facebook Marketplace scams to ensure you have a safe shopping experience. Remember that scams can happen between buyers and sellers, so always be wary of the transaction practices before committing. Otherwise, consider other methods like ordering from Amazon or becoming a third-party vendor on a trusted platform.

Yep, Amazon. On the other hand, you can avoid scams by becoming a “third-party vendor on a trusted platform.” Really?

The problem with this write up is that the information mixes up what sellers do with what buyers do. Stepping back, why is Facebook singled out for this mishmash of scams and tactics? After all, in a face-to-face deal who pays with counterfeit cash? It is the buyer. Who is the victim? It is the seller. Who rents an apartment without looking at it? Answer: Someone in Manhattan. In other cities, alternatives to Facebook exist, and they are not available via Amazon as far as I know.

Facebook and other online vendors have to step up their game. The idea that the platform does not have responsibility to vet buyers and sellers is not something I find acceptable. Facebook seems pleased with its current operation. Perhaps it is time for more directed action to [a] address Facebook’s policies and [b] bring more rigor to write ups which seem to provide ideas for scammers in my opinion.

Stephen E Arnold, May 27, 2024

Meta Mismatch: Good at One Thing, Not So Good at Another

May 27, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I read “While Meta Stuffs AI Into All Its Products, It’s Apparently Helpless to Stop Perverts on Instagram From Publicly Lusting Over Sexualized AI-Generated Children.” The main idea is that Meta has a problem stopping “perverts.” You know a “pervert,” don’t you? One can spot ’em when one sees ’em. The write up reports:

As Facebook and Instagram owner Meta seeks to jam generative AI into every feasible corner of its products, a disturbing Forbes report reveals that the company is failing to prevent those same products from flooding with AI-generated child sexual imagery. As Forbes reports, image-generating AI tools have given rise to a disturbing new wave of sexualized images of children, which are proliferating throughout social media — the Forbes report focused on TikTok and Instagram — and across the web.

What is Meta doing or not doing? The write up is short on technical details. In fact, there are no technical details. Is it possible that any online service which allows anyone to comment or upload content will end up hosting something “bad”? Online requires something that most people don’t want. The secret ingredient is spelling out an editorial policy and making decisions about what is appropriate or inappropriate for an “audience.” Note that I have converted digital addicts into an audience, albeit one that participates.


Two fictional characters are supposed to be working hard and doing their level best. Thanks, MSFT Copilot. How has that Cloud outage affected the push to more secure systems? Hello, hello, are you there?

Editorial policies require considerable intellectual effort, crafted workflow processes, and oversight. Who does the overseeing? In the good old days when publishing outfits like John Wiley & Sons-type or Oxford University Press-type outfits were gatekeepers, individuals who met the cultural standards were able to work their way up the bureaucratic rock wall. Now the mantra is the same as the probability-based game show with three doors and “Come on down!” Okay, “users” come on down, wallow in anonymity, exploit a lack of consequences, and surf on the darker waves of human thought. Online makes clear that people who read Kant, volunteer to help the homeless, and respect the rights of others are often at risk from the denizens of the psychological night.

Personally I am not a Facebook person, a user of Instagram, or a person requiring the cloak of a WhatsApp logo. Futurism takes a reasonable stand:

it’s [Meta, Facebook, et al] clearly unable to use the tools at its disposal, AI included, to help stop harmful AI content created using similar tools to those that Meta is building from disseminating across its own platforms. We were promised creativity-boosting innovation. What we’re getting at Meta is a platform-eroding pile of abusive filth that the company is clearly unable to manage at scale.

How long has Meta been trying to be a squeaky-clean information purveyor? Is the article going overboard?

I don’t have answers, but after years of verbal fancy dancing, progress may be parked at a rest stop on the information superhighway. Who is the driver of the Meta construct? If you know, that is the person to whom one must address suggestions about content. What if that entity does not listen and act? Government officials will take action, right?

PS. Is it my imagination or is Futurism.com becoming a bit more strident?

Stephen E Arnold, May 27, 2024

Allegations about Medical Misinformation Haunt US Tech Giants

May 17, 2024

Access to legal and safe abortion, also known as the fight for reproductive rights, is a controversial issue in the United States and in countries with large Christian populations. Opponents of abortion often spread false information about the procedure. They are also known to spread misinformation about sex education, especially birth control. Mashable shares the unfortunate story that tech giants “Meta And Google Fuel Abortion Misinformation Across Africa, Asia, And Latin America, Report Finds.”

The Center for Countering Digital Hate (CCDH) and MSI Reproductive Choices (MSI) released a new report that found Meta and sometimes Google restricted abortion information and disseminated misinformation and abuse in Latin America, Asia, and Africa. Abortion providers are prevented from placing ads globally on Google and Meta. Meta also earns revenue from anti-abortion ads bought in the US and targeted at the aforementioned areas.

MSI claims in the report that Meta removed or rejected its ads in Vietnam, Nigeria, Nepal, Mexico, Kenya, and Ghana because of “sensitive content.” Meta also placed blanket advertising restrictions on MSI’s teams in Vietnam and Nepal without explanation. Google blocked ads with the keyword “pregnancy options” in Ghana, and MSI claimed it was banned from using that term in a Google AdWords campaign.

Google offered an explanation:

“Speaking to Mashable, Google representative Michael Aciman said, ‘This report does not include a single example of policy violating content on Google’s platform, nor any examples of inconsistent enforcement. Without evidence, it claims that some ads were blocked in Ghana for referencing ‘pregnancy options’. To be clear, these types of ads are not prohibited from running in Ghana – if the ads were restricted, it was likely due to our longstanding policies against targeting people based on sensitive health categories, which includes pregnancy.’”

Google and Meta have been vague and inconsistent about why they’re removing pregnancy option ads, while allowing pro-life groups to spread unchecked misinformation about abortion. Meta, Google, and other social media companies mine user information, but they do little to protect civil liberties and human rights.

Organizations like MSI and CCDH are doing what they can to fight bad actors. It’s an uphill battle and it would be easier if social media companies helped.

Whitney Grace, May 17, 2024

Meta: Innovating via Intentions

April 17, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Analytics India published “Meta Releases AI on WhatsApp, Looks Like Perplexity AI.” The headline caught my attention. I don’t pay much attention to the Zuckbook and the other Meta properties. The Analytics India story made this statement which caught my attention:

What users type in the search bar remains confidential and is not shared with Meta AI unless users intentionally send a query to the Meta AI chatbot.

I am okay with copying from Silicon Valley type outfits. That’s part of the game, which includes colors, shuffling staff, and providing jibber jabber instead of useful interfaces and documentation about policies. But think about the statement: “unless users intentionally send a query to the Meta AI chatbot.” Doesn’t that mean we don’t keep track of queries unless a user sends a query to the Zuckbook’s smart software? I love the “intention” because the user is making a choice between a search function which one of my team told me is not very useful and a “new” search system which will be better. If it is better, then user queries get piped into a smart search system for which the documentation is sparse. What happens to those data? How will those data be monetized? Will the data be shared with those who have a business relationship with Meta?


Thanks, MSFT Copilot. Good enough, but that’s what one might say about MSFT security, right?

So many questions.

The article states:

Users can still search their conversations for specific content without interacting with Meta AI, maintaining the same level of ease and privacy as before. Additionally, personal messages and calls remain end-to-end encrypted, ensuring neither WhatsApp nor Meta can access them, even with the Meta AI integration.

There is no substantiation of this assertion. Indeed, since the testimony of Frances Haugen, I am not certain what Meta does, and I am not willing to accept assertions about what is accessible to the firm’s employees and what is not. What about the metadata? Is that part of the chunk of data Meta cannot access?

Facebook, WhatsApp, and Instagram are interesting services. The information in the Meta services appears to be quite useful for a number of endeavors. Meta is less helpful to academic research groups than it could be. Some have found data cut off or filtered. Imitating another AI outfit’s graphic design is the lowest on my list of Meta issues.

The company is profitable. It has considerable impact. The firm has oodles of data. But now a user’s intention gives permission to an interesting outfit to do whatever it wishes with that information. Unsettling? Nope, just part of the unregulated world of digital operations which some assert are having a somewhat negative impact on society. Yep, intentionally.

Stephen E Arnold, April 17, 2024

Meta Warns Limiting US AI Sharing Diminishes Influence

April 10, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Limiting tech information is a way organizations and governments prevent bad actors from using it for harmful purposes. Whether repressing the information is good or bad is a topic for debate, but big tech leaders don’t want limitations. Yahoo Finance reports on what Meta thinks about the issue: “Meta Says Limits On Sharing AI Technology May Dim US Influence.”

Nick Clegg is Meta Platforms’ policy chief, and he told the US government that preventing tech companies from sharing AI technology publicly (aka open source) would damage America’s influence on AI development. Clegg’s statement amounts to “if you don’t let us play, we can’t make the rules.” In more politically correct and also true words, Clegg argued that a more “restrictive approach” would mean other nations’ tech could become the “global norm.” It sounds like the old imperial vs. metric measurements argument.

Open source code is fundamental to advancing new technology. Many big tech companies want to guard their proprietary code so they can exploit it for profits. Others, like Clegg, appear to want global industry influence to secure higher revenue margins and to encourage new developments.

“Meta’s argument for keeping the technology open may resonate with the current presidential administration and Congress. For years, efforts to pass legislation that restricts technology companies’ business practices have all died in Congress, including bills meant to protect children on social media, to limit tech giants from unfairly boosting their own products, and to safeguard users’ data online.

But other bills aimed at protecting American business interests have had more success, including the Chips and Science Act, passed in 2022 to support US chipmakers while addressing national security concerns around semiconductor manufacturing. Another bill targeting Chinese tech giant ByteDance Ltd. and its popular social network, TikTok, is awaiting a vote in the Senate after passing in the House earlier this month.”

Restricting technology sounds like the argument about controlling misinformation. False information does harm society, but it raises the question “what is to be considered harmful?” Another similarity is the use of a gun or car. Cars and guns are essential and dangerous tools in modern society, but in the wrong hands they’re deadly weapons.

Whitney Grace, April 10, 2024
