Meta Mismatch: Good at One Thing, Not So Good at Another

May 27, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I read “While Meta Stuffs AI Into All Its Products, It’s Apparently Helpless to Stop Perverts on Instagram From Publicly Lusting Over Sexualized AI-Generated Children.” The main idea is that Meta has a problem stopping “perverts.” You know a “pervert,” don’t you? One can spot ‘em when one sees ‘em. The write up reports:

As Facebook and Instagram owner Meta seeks to jam generative AI into every feasible corner of its products, a disturbing Forbes report reveals that the company is failing to prevent those same products from flooding with AI-generated child sexual imagery. As Forbes reports, image-generating AI tools have given rise to a disturbing new wave of sexualized images of children, which are proliferating throughout social media — the Forbes report focused on TikTok and Instagram — and across the web.

What is Meta doing or not doing? The write up is short on technical details. In fact, there are no technical details. Is it possible that any online service allowing anyone to comment or upload certain content will host something “bad”? Online requires something that most people don’t want. The secret ingredient is spelling out an editorial policy and making decisions about what is appropriate or inappropriate for an “audience.” Note that I have converted digital addicts into an audience, albeit one that participates.


Two fictional characters are supposed to be working hard and doing their level best. Thanks, MSFT Copilot. How has that Cloud outage affected the push to more secure systems? Hello, hello, are you there?

Editorial policies require considerable intellectual effort, crafted workflow processes, and oversight. Who does the overseeing? In the good old days when publishing outfits like John Wiley & Sons-type or Oxford University Press-type outfits were gatekeepers, individuals who met the cultural standards were able to work their way up the bureaucratic rock wall. Now the mantra is the same as the probability-based game show with three doors and “Come on down!” Okay, “users” come on down, wallow in anonymity, exploit a lack of consequences, and surf on the darker waves of human thought. Online makes clear that people who read Kant, volunteer to help the homeless, and respect the rights of others are often at risk from the denizens of the psychological night.

Personally, I am not a Facebook person, an Instagram user, or a person requiring the cloak of a WhatsApp logo. Futurism takes a reasonable stand:

it’s [Meta, Facebook, et al] clearly unable to use the tools at its disposal, AI included, to help stop harmful AI content created using similar tools to those that Meta is building from disseminating across its own platforms. We were promised creativity-boosting innovation. What we’re getting at Meta is a platform-eroding pile of abusive filth that the company is clearly unable to manage at scale.

How long has Meta been trying to be a squeaky-clean information purveyor? Is the article going overboard?

I don’t have answers, but after years of verbal fancy dancing, progress may be parked at a rest stop on the information superhighway. Who is the driver of the Meta construct? If you know, that is the person to whom one must address suggestions about content. What if that entity does not listen and act? Government officials will take action, right?

PS. Is it my imagination or is Futurism.com becoming a bit more strident?

Stephen E Arnold, May 27, 2024

Allegations about Medical Misinformation Haunt US Tech Giants

May 17, 2024

Access to legal and safe abortion, also framed as the fight for reproductive rights, is a controversial issue in the United States and in countries with large Christian populations. Opponents of abortion often spread false information about the procedure. They are also known to spread misinformation about sex education, especially birth control. Mashable shares the unfortunate story that tech giants “Meta And Google Fuel Abortion Misinformation Across Africa, Asia, And Latin America, Report Finds.”

The Center for Countering Digital Hate (CCDH) and MSI Reproductive Choices (MSI) released a new report which found that Meta, and sometimes Google, restricted abortion information and disseminated misinformation and abuse in Latin America, Asia, and Africa. Abortion providers are prevented from placing ads globally on Google and Meta. Meta also earns revenue from anti-abortion ads bought in the US and targeted at the aforementioned regions.

MSI claims in the report that Meta removed or rejected its ads in Vietnam, Nigeria, Nepal, Mexico, Kenya, and Ghana because of “sensitive content.” Meta also placed blanket advertising restrictions on MSI’s teams in Vietnam and Nepal without explanation. Google blocked ads with the keyword “pregnancy options” in Ghana, and MSI claimed it was banned from using that term in a Google AdWords campaign.

Google offered an explanation:

“Speaking to Mashable, Google representative Michael Aciman said, ‘This report does not include a single example of policy violating content on Google’s platform, nor any examples of inconsistent enforcement. Without evidence, it claims that some ads were blocked in Ghana for referencing ‘pregnancy options’. To be clear, these types of ads are not prohibited from running in Ghana – if the ads were restricted, it was likely due to our longstanding policies against targeting people based on sensitive health categories, which includes pregnancy.’”

Google and Meta have been vague and inconsistent about why they remove pregnancy option ads while allowing pro-life groups to spread unchecked misinformation about abortion. Meta, Google, and other social media companies mine user information, but they do little to protect civil liberties and human rights.

Organizations like MSI and CCDH are doing what they can to fight bad actors. It’s an uphill battle and it would be easier if social media companies helped.

Whitney Grace, May 17, 2024

Meta: Innovating via Intentions

April 17, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Analytics India published “Meta Releases AI on WhatsApp, Looks Like Perplexity AI.” The headline caught my attention. I don’t pay much attention to the Zuckbook and the other Meta properties. The Analytics India story made this statement which caught my attention:

What users type in the search bar remains confidential and is not shared with Meta AI unless users intentionally send a query to the Meta AI chatbot.

I am okay with copying from Silicon Valley type outfits. That’s part of the game, which includes colors, shuffling staff, and providing jibber jabber instead of useful interfaces and documentation about policies. But think about the statement: “unless users intentionally send a query to the Meta AI chatbot.” Doesn’t that mean we don’t keep track of queries unless a user sends a query to the Zuckbook’s smart software? I love the “intention” because the user is making a choice between a search function which one of my team told me is not very useful and a “new” search system which will be better. If it is better, then user queries get piped into a smart search system for which the documentation is sparse. What happens to those data? How will those data be monetized? Will the data be shared with those who have a business relationship with Meta?


Thanks, MSFT Copilot. Good enough, but that’s what one might say about MSFT security, right?

So many questions.

The article states:

Users can still search their conversations for specific content without interacting with Meta AI, maintaining the same level of ease and privacy as before. Additionally, personal messages and calls remain end-to-end encrypted, ensuring neither WhatsApp nor Meta can access them, even with the Meta AI integration.

There is no substantiation of this assertion. Indeed, since the testimony of Frances Haugen, I am not certain what Meta does, and I am not willing to accept assertions about what is accessible to the firm’s employees and what is not. What about the metadata? Is that part of the chunk of data Meta cannot access?

Facebook, WhatsApp, and Instagram are interesting services. The information in the Meta services appears to be quite useful for a number of endeavors. For academic research groups, however, the services are less helpful than they could be. Some have found data cut off or filtered. Imitating another AI outfit’s graphic design is the lowest item on my list of Meta issues.

The company is profitable. It has considerable impact. The firm has oodles of data. But now a user’s intention gives permission to an interesting outfit to do whatever with that information. Unsettling? Nope, just part of the unregulated world of digital operations which some assert are having a somewhat negative impact on society. Yep, intentionally.

Stephen E Arnold, April 17, 2024

Meta Warns Limiting US AI Sharing Diminishes Influence

April 10, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Limiting tech information is one way organizations and governments prevent bad actors from using it for harmful purposes. Whether repressing the information is good or bad is a topic for debate, but big tech leaders don’t want limitations. Yahoo Finance reports on what Meta thinks about the issue: “Meta Says Limits On Sharing AI Technology May Dim US Influence.”

Nick Clegg is Meta Platforms’ policy chief, and he told the US government that preventing tech companies from sharing AI technology publicly (aka open source) would damage America’s influence on AI development. Clegg’s statement amounts to “if you don’t let us play, we can’t make the rules.” In more politically correct and also true words, Clegg argued that a more “restrictive approach” would mean other nations’ tech could become the “global norm.” It sounds like the old imperial vs. metric measurements argument.

Open source code is fundamental to advancing new technology. Many big tech companies want to guard their proprietary code so they can exploit it for profits. Others, like Clegg, appear to want global industry influence for higher revenue margins while encouraging new developments.

Meta’s argument for keeping the technology open may resonate with the current presidential administration and Congress. For years, efforts to pass legislation that restricts technology companies’ business practices have all died in Congress, including bills meant to protect children on social media, to limit tech giants from unfairly boosting their own products, and to safeguard users’ data online.

But other bills aimed at protecting American business interests have had more success, including the Chips and Science Act, passed in 2022 to support US chipmakers while addressing national security concerns around semiconductor manufacturing. Another bill targeting Chinese tech giant ByteDance Ltd. and its popular social network, TikTok, is awaiting a vote in the Senate after passing in the House earlier this month.

Restricting technology sounds like the argument about controlling misinformation. False information does harm society, but it raises the question: what is to be considered harmful? Another similarity is the use of a gun or car. Cars and guns are essential yet dangerous tools in modern society, but in the wrong hands they are deadly weapons.

Whitney Grace, April 10, 2024

The Many Faces of Zuckbook

March 29, 2024

This essay is the work of a dumb dinobaby. No smart software required.

As evidenced by his business decisions, Mark Zuckerberg seems to be a complicated fellow. For example, a couple recent articles illustrate this contrast: On one hand is his commitment to support open source software, an apparently benevolent position. On the other, Meta is once again in the crosshairs of EU privacy advocates for what they insist is its disregard for the law.

First, we turn to a section of VentureBeat’s piece, “Inside Meta’s AI Strategy: Zuckerberg Stresses Compute, Open Source, and Training Data.” In it, reporter Sharon Goldman shares highlights from Meta’s Q4 2023 earnings call. She emphasizes Zuckerberg’s continued commitment to open source software, specifically AI software Llama 3 and PyTorch. He touts these products as keys to “innovation across the industry.” Sounds great. But he also states:

“Efficiency improvements and lowering the compute costs also benefit everyone including us. Second, open source software often becomes an industry standard, and when companies standardize on building with our stack, that then becomes easier to integrate new innovations into our products.”

Ah, there it is.

Our next item was apparently meant to be sneaky, but who did Meta think it was fooling? The Register reports, “Meta’s Pay-or-Consent Model Hides ‘Massive Illegal Data Processing Ops’: Lawsuit.” Meta is attempting to “comply” with the EU’s privacy regulations by making users pay to opt in to them. That is not what regulators had in mind. We learn:

“Those of us with aunties on FB or friends on Instagram were asked to say yes to data processing for the purpose of advertising – to ‘choose to continue to use Facebook and Instagram with ads’ – or to pay up for a ‘subscription service with no ads on Facebook and Instagram.’ Meta, of course, made the changes in an attempt to comply with EU law. But privacy rights folks weren’t happy about it from the get-go, with privacy advocacy group noyb (None Of Your Business), for example, sarcastically claiming Meta was proposing you pay it in order to enjoy your fundamental rights under EU law. The group already challenged Meta’s move in November, arguing EU law requires consent for data processing to be given freely, rather than to be offered as an alternative to a fee. Noyb also filed a lawsuit in January this year in which it objected to the inability of users to ‘freely’ withdraw data processing consent they’d already given to Facebook or Instagram.”

And now eight members of the European Consumer Organisation (BEUC) have filed new complaints, insisting Meta’s pay-or-consent tactic violates the European General Data Protection Regulation (GDPR). While that may seem obvious to some, Meta insists it is in compliance with the law. Because of course it does.

Cynthia Murrell, March 29, 2024

Meta Never Met a Kid Data Set It Did Not Find Useful

January 5, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Adults are ripe targets for data exploitation in modern capitalism. While adults fight for their online privacy, most have rolled over and accepted the inevitable consumer Big Brother. When big tech companies go after monetizing kids, however, that’s when adults fight back like rabid bears. Engadget writes about how Meta is fighting against the federal government about kids’ data: “Meta Sues FTC To Block New Restrictions On Monetizing Kids’ Data.”

Meta is taking the FTC to court to prevent the agency from reopening a landmark 2020 privacy case, settled for $5 billion, and to preserve its ability to monetize kids’ data on its apps. Meta is suing the FTC because a federal judge ruled that the agency can expand the settlement with new, more stringent rules about how Meta is allowed to conduct business.

Meta claims the FTC is out for a power grab and is acting unconstitutionally, while the FTC says the company has consistently violated the 2020 settlement and the Children’s Online Privacy Protection Act. The FTC wants its new rules to limit Meta’s facial recognition usage and impose a moratorium on new products and services until a third party audits them for privacy compliance.

Meta is not a huge fan of the US Federal Trade Commission:

“The FTC has been a consistent thorn in Meta’s side, as the agency tried to stop the company’s acquisition of VR software developer Within on the grounds that the deal would deter ‘future innovation and competitive rivalry.’ The agency dropped this bid after a series of legal setbacks. It also opened up an investigation into the company’s VR arm, accusing Meta of anti-competitive behavior.”

The FTC is doing what government agencies are supposed to do: protect citizens from greedy and harmful practices like those of big business. The FTC can enforce laws and force big businesses to pay fines, put leaders in jail, or even shut them down. But regulators have spent decades ramping up to take meaningful action. The result? The thrashing over kiddie data.

Whitney Grace, January 5, 2024

Harvard University: Does Money Influence Academic Research?

December 5, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Harvard University has been on my radar since the ethics misstep. In case your memory is fuzzy, Francesca Gino, a big thinker about ethics and taking shortcuts, was accused of data fraud. The story did not attract much attention in rural Kentucky. Ethics and dishonesty? Come on. Harvard has to do some serious training to catch up with a certain university in Louisville. For a reasonable explanation of the allegations (because, of course, one will never know), navigate to “Harvard Professor Who Studies Dishonesty Is Accused of Falsifying Data” and dig in.


Thanks, MSFT Copilot, you have nailed the depressive void that comes about when philosophers learn that ethics suck.

Why am I thinking about Harvard and ethics? The answer is that I read “Harvard Gutted Initial Team Examining Facebook Files Following $500 Million Donation from Chan Zuckerberg Initiative, Whistleblower Aid Client Reveals.” I have no idea if the write up is spot on, weaponized information, or the work of someone who did not get into one of the university’s numerous money generating certification programs.

The write up asserts:

Harvard University dismantled its prestigious team of online disinformation experts after a foundation run by Facebook’s Mark Zuckerberg and his wife Priscilla Chan donated $500 million to the university, a whistleblower disclosure filed by Whistleblower Aid reveals. Dr. Joan Donovan, one of the world’s leading experts on social media disinformation, says she ran into a wall of institutional resistance and eventual termination after she and her team at Harvard’s Technology and Social Change Research Project (TASC) began analyzing thousands of documents exposing Facebook’s knowledge of how the platform has caused significant public harm.

Let’s assume that the allegation is horse feathers, not to be confused with Intel’s fabulous Horse Ridge. Harvard still has to do some fancy dancing with regard to the ethics professor and expert in dishonesty who is alleged to have violated the esteemed university’s ethics guidelines and was dishonest.

If we assume that the information in Dr. Donovan’s whistleblower declaration is close enough for horse shoes, something equine can be sniffed in the atmosphere of Dr. William James’s beloved institution.

What could Facebook or the Metazuck do which would cause significant public harm? The options include providing tools to disseminate information which sparks body shaming, self harm, and angst among young users. Are old timers possibly affected? I suppose buying interesting merchandise on Facebook Marketplace and experiencing psychological problems as a result of defriending are possibilities too.

If the allegations are proven to be accurate, what are the consequences for the two esteemed organizations? My hunch is zero. Money talks; prestige walks away to put ethics on display for another day.

Stephen E Arnold, December 5, 2023

A New Union or Just a Let’s Have Lunch Moment for Two Tech Giants

November 10, 2023

This essay is the work of a dumb humanoid. No smart software required.

There is nothing like titans of technology and revenue generation discovering a common interest. The thrill is the consummation and reaping the subsequent rewards. “Meta Lets Amazon Shoppers Buy Products on Facebook and Instagram without Leaving the Apps” explains:

Meta doesn’t want you to leave its popular mobile apps when making that impulse Amazon purchase. The company debuted a new feature allowing users to link their Facebook and Instagram accounts to Amazon so they can buy goods by clicking on promotions in their feeds.


Two amped up, big time tech bros discover that each has something the other wants. What is that? An opportunity to extend and exploit perhaps? Thanks, Microsoft Bing, you do get the drift of my text prompt, don’t you?

The Zuckbook’s properties touch billions of people. Some of those people want to buy “stuff.” Legitimate stuff has required the user to click away and navigate to the online bookstore to purchase a copy of the complete works of Francis Bacon. Now, the Instagram user can buy without leaving the comforting arms of the Zuck.

Does anyone have a problem with that tie up? I don’t. It is definitely a benefit for the teen who must have the latest lip gloss. It is good for Amazon because the hope is that Zucksters will buy from the online bookstore. The Meta outfit probably benefits with some sort of inducement. Maybe it is just a hug from Amazon executives? Maybe it is an opportunity to mud wrestle with Mr. Bezos if he decides to get down and dirty to show his physical prowess?

Will US regulators care? Will EU regulators care? Will anyone care?

I am not sure how to answer these questions. For decades the high tech outfits have been able to emulate the captains of industry in the golden age without much cause for concern. Continuity is good.

Will teens buy copies of Novum Organum? Absolutely.

Stephen E Arnold, November 10, 2023

Definitely Not Zucking Up: Well, Maybe a Little Bit

November 9, 2023

This essay is the work of a dumb humanoid. No smart software required.

I don’t pay too much attention to the outputs from CNN. However, this morning I spotted a story called “Mark Zuckerberg Personally Rejected Meta’s Proposals to Improve Teen Mental Health, Court Documents Allege.” Keep in mind that the magic word is “allege,” which could mean fakeroo.

Here’s the passage I found thought provoking:

Meta CEO Mark Zuckerberg has personally and repeatedly thwarted initiatives meant to improve the well-being of teens on Facebook and Instagram, at times directly overruling some of his most senior lieutenants

If I interpret this statement, it strikes me that [a] the Facebook service sparks some commentary about itself within the company and [b] this is a horrible posture for a senior manager to display.


An unhappy young high school student contemplates a way to find happiness because she is, according to her social media “friends”, a loser. Nice work, Microsoft Bing.

I am setting aside possible downstream effects of self mutilation, suicide, depression, drug use, and excessive use of lip gloss.

The article states:

Zuckerberg’s rejection of opportunities to invest more heavily in well-being are reflective of his data-centric approach to management, said Arturo Bejar, the former Facebook engineering director and whistleblower who leveled his own allegations last week that Instagram has repeatedly ignored internal warnings about the app’s potential harms to teens.

Management via data — That’s a bit of the management grail for some outfits. I wonder what will happen when smart software is given the job of automating certain “features” of the Zuckbook.

With the Zuck’s increasing expertise in kinetic arts, I would not want to disagree with this estimable icon of social media. My prudent posture is that an individual capable of allowing harm to young people has the capacity to up his game. I am definitely not Zucking up to this outfit even if the allegations are proved false.

Stephen E Arnold, November 9, 2023


Recent Facebook Experiments Rely on Proprietary Meta Data

September 25, 2023

When one has proprietary data, researchers who want to study that data must work with you. That gives Meta the home court advantage in a series of recent studies, we learn from Science’s article, “Does Social Media Polarize Voters? Unprecedented Experiments on Facebook Users Reveal Surprises.” The 2020 Facebook and Instagram Election Study has produced four papers so far with 12 more on the way. The large-scale experiments confirm Facebook’s algorithm pushes misinformation and reinforces filter bubbles, especially on the right. However, they seem to indicate less influence on users’ views and behavior than many expected. Hmm, why might that be? Writer Kai Kupferschmidt states:

“But the way the research was done, in partnership with Meta, is getting as much scrutiny as the results themselves. Meta collaborated with 17 outside scientists who were not paid by the company, were free to decide what analyses to run, and were given final say over the content of the research papers. But to protect the privacy of Facebook and Instagram users, the outside researchers were not allowed to handle the raw data. This is not how research on the potential dangers of social media should be conducted, says Joe Bak-Coleman, a social scientist at the Columbia School of Journalism.”

We agree, but when companies maintain a stranglehold on data, researchers’ hands are tied. Is it any wonder big tech balks at calls for transparency? The article also notes:

“Scientists studying social media may have to rely more on collaborations with companies like Meta in the future, says [participating researcher Deen] Freelon. Both Twitter and Reddit recently restricted researchers’ access to their application programming interfaces or APIs, he notes, which researchers could previously use to gather data. Similar collaborations have become more common in economics, political science, and other fields, says [participating researcher Brendan] Nyhan. ‘One of the most important frontiers of social science research is access to proprietary data of various sorts, which requires negotiating these one-off collaboration agreements,’ he says. That means dependence on someone to provide access and engage in good faith, and raises concerns about companies’ motivations, he acknowledges.”

See the article for more details on the experiments, their results so far, and their limitations. Social scientist Michael Wagner, who observed the study and wrote a commentary to accompany their publication, sees the project as a net good. However, he acknowledges, future research should not be based on this model where the company being studied holds all the data cards. But what is the alternative?

Cynthia Murrell, September 25, 2023
