Meta Logo Mimic?

May 16, 2022

Suing the Big Tech company Meta is like tilting at windmills. Most of these lawsuits are dismissed, but sometimes the little guy has a decent chance at beating the giant. Fast Company details one of Meta’s latest fiascos: “Meta Faces Lawsuit Over Logo.” The Swiss blockchain nonprofit organization Dfinity filed a trademark infringement lawsuit against Meta in a Northern California court. Dfinity’s lawsuit alleges that Meta’s new logo, an M shaped like an infinity sign, bears a striking resemblance to Dfinity’s own infinity logo.

Dfinity claims that Meta’s logo puts its reputation at stake: Meta has a horrible record of violating people’s privacy, and the association would prevent Dfinity from attracting users. The infinity symbol is in the public domain. Only original variations on it, such as the Meta and Dfinity logos, can be trademarked. Dfinity probably does not have a leg to stand on (or a curve to rest on) with this lawsuit, but it might win. The infinity symbol is not unique to Dfinity; Dfinity would have a better case if Meta had purloined the “Dfinity” name itself.

Even if Dfinity’s case is dismissed, the lawsuit could mean something worse for Meta:

“But even if Dfinity fails to prove its case, the lawsuit could jeopardize Meta’s attempts to earn trademark protection for its logo. That’s because it could highlight how unremarkable the logo really is. (Meta filed for trademark protection in March.) Says Lee: ‘The U.S. Patent and Trademark Office might find that [Meta’s logo] is not inherently distinctive on its own and require more evidence that consumers associate the symbol with a single company.’”

Lately, Meta has not been getting good press. The little guy will not likely win, especially since Meta has a good legal team. Still, Meta could ultimately lose control of its logo, a fiasco as bad as Disney losing the copyright on Mickey Mouse would be.

Whitney Grace, May 16, 2022

The Zuckbook Enlists Third Parties to Disseminate Interesting Information: Hey, Why Not?

May 11, 2022

The company formerly known as Facebook is not happy that its new Meta branding is not doing well. According to Protocol, the former Facebook paid a Republican consulting firm to say mean things about TikTok: “Meta Paid a GOP Consulting Firm To Drag TikTok Through the Mud.” Facebook is now the social media platform for grandparents and senior citizens. It has not been cool in years. The cool social media platform is TikTok, and it keeps pulling in younger crowds.

Meta decided to take a conservative route by hiring a Republican consulting firm to make the short video platform look bad. The selected firm is Targeted Victory and it was founded by the digital director of Mitt Romney’s 2012 presidential campaign. Meta wants Targeted Victory to run a campaign of op-eds and letters to the editor stating that TikTok is dangerous for kids. Targeted Victory employees promoted stories to regional news outlets about harmful TikTok trends that originated on Facebook.

“The emails show how Meta wants the public to see TikTok even as Meta itself tries to re-create some of the magic that led TikTok to become the top app for young people. Mark Zuckerberg has cited TikTok as a hurdle to getting young people back on his platforms. Both Facebook and Instagram have followed TikTok’s lead by pouring money into their own short-form video clones. ‘We believe all platforms, including TikTok, should face a level of scrutiny consistent with their growing success,’ Meta spokesperson Andy Stone told the Post in defense of the campaign.”

These op-eds and letters to the editor criticized TikTok. They were attributed to concerned parents and even a Democratic Party chair. Targeted Victory also cast Meta in a good light, highlighting, for example, how the platform assists Black-owned businesses.

“Meta” refers to something that breaks the fourth wall or is so “out there” that it cannot be real. It is used in place of “stranger than fiction” and “you can’t make this stuff up.” The company formerly known as Facebook is making everything meta by making the world a self-referential, self-congratulatory echo chamber.

Whitney Grace, May 11, 2022

Issues with the Zuckbook Smart Software: Imagine That

May 10, 2022

I was neither surprised by nor interested in “Facebook’s New AI System Has a ‘High Propensity’ for Racism and Bias.” The marketing hype encapsulated in PowerPoint decks and weaponized PDF files on Arxiv paints fantastical pictures of today’s marvel-making machine learning systems. Those who have been around smart software and really stupid software for a number of years understand two things: PR and marketing are easier than delivering high-value, high-utility systems, and smart software works best when tailored and tuned to quite specific tasks. Generalized systems still have more than a few flaws. Addressing these will take time, innovation, and money. Innovation is scarce in many high-technology companies. The time and money factors dictate that “good enough” and “close enough for horseshoes” systems and methods are pushed into products and services. “Good enough” works for search because no one knows what is in the index. Comparative evaluations of search and retrieval are tough when users (addicts) operate within a cloud of unknowing. The “close enough for horseshoes” approach produces applications which are only sort of correct. Perfect for ad matching and for suggesting which Facebook pages or Tweets would engage a person interested in tattoos or fad diets.

The cited article explains:

Facebook and its parent company, Meta, recently released a new tool that can be used to quickly develop state-of-the-art AI. But according to the company’s researchers, the system has the same problem as its predecessors: It’s extremely bad at avoiding results that reinforce racist and sexist stereotypes.

My recollection is that the Google terminated some of its wizards and transformed these professionals into Xooglers in the blink of an eye. Why? For exposing some of the issues that continue to plague smart software.

Those interns, former college professors, and start-up engineers rely on techniques used for decades. These are connected together, fed synthetic data, and bolted to an application. The outputs reflect the inherent oddities of the methods; for example, feed the system images spidered from Web sites, and the system “learns” what is on those Web sites. It then generalizes from the Web site images to produce synthetic data. The whole process zooms along and costs less. The outputs, however, carry minimal information about that which is not on a Web site; for example, positive images of a family in a township outside of Cape Town.

The write up states:

Meta researchers write that the model “has a high propensity to generate toxic language and reinforce harmful stereotypes, even when provided with a relatively innocuous prompt.” This means it’s easy to get biased and harmful results even when you’re not trying. The system is also vulnerable to “adversarial prompts,” where small, trivial changes in phrasing can be used to evade the system’s safeguards and produce toxic content.

What’s new? These issues surfaced in the automated content processing in the early versions of the Autonomy Neuro Linguistic Programming approach. The fix was to retrain the system and tune the outputs. Few licensees had the appetite to spend the money needed to perform the retraining and reindexing of the processed content when the search results drifted into weirdness.

Have developers solved this problem since the mid 1990s?

Nope.

Has the email with this information reached the PR professionals and the art history majors with a minor in graphic design who produce PowerPoints? What about the former college professors and a bunch of interns and recent graduates?

Nope.

What’s this mean? Here’s my view:

  1. Narrow applications of smart software can work and be quite useful; for example, the Preligens system for aircraft identification. Broad applications have to be viewed as demonstrations or works in progress.
  2. The MBA craziness which wants to create world-dominating methods to control markets must be recognized and managed. I know that running wild for 25 years creates some habits which are going to be difficult to break. But change is needed. Craziness is not a viable business model in my opinion.
  3. The over-the-top hyperbole must be identified. This means that PowerPoint presentations should carry a warning label: Science fiction inside. The quasi-scientific papers with loads of authors who work at one firm should carry a disclaimer: Results are going to be difficult to verify.

Without some common sense, the flood of semi-functional smart software will increase. Not good. Why? The impact of erroneous outputs will cause more harm than users of the systems expect. Screwing up content filtering for a political rally is one thing; outputting an incorrect medical action is another.

Stephen E Arnold, May 10, 2022

Facebook: Getting Softer, More Lovable?

May 9, 2022

Is the Zuckbook going soft? Sure, the company allegedly dorked around with Facebook pages in Australia. Sure, a former employee revealed the high school science club thought framework. Sure, the Zuck is getting heat for his semi-exciting vision of ZuckZoom and ZuckGraphics.

The article with the clicky title “Meta’s Challenge to OpenAI—Give Away a Massive Language Model. At 175 Billion Parameters, It’s As Powerful As OpenAI’s GPT-3, and It’s Open to All Researchers” shows that El Zucko is into freebies. The idea is that Zuck’s smart software is not going to allow the Google to dominate in this super-hyped sector. Think of it as the battle of the high school science clubs.

In the ZuckVerse, anyone who sells gets special treatment: Meta will charge about a 48 percent commission.

Selling in Horizon Worlds will be limited to a few creators located in the US and Canada who are at least 18 years old. The nearly 50 percent commission is a huge chunk of a creator’s profit, even if the item is an NFT:

“Meta spokesperson Sinead Purcell confirmed the figure to The Post, adding that Horizon Worlds will eventually become available on hardware made by other companies. In those cases, Meta will keep charging its 25% Horizon Worlds fee but the other companies will set their own store transaction fees. Vivek Sharma, Meta’s vice president of Horizon, told The Verge that the commission is ‘a pretty competitive rate in the market.’”

Zuckerberg criticized Google and Apple for charging digital creators 30 percent commission fees. He claims that when the Metaverse adds a revenue share, the commission rate will be less than 30 percent.
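The 48 percent, 25 percent, and nearly 50 percent figures floating around these reports can be reconciled if the fees stack: press accounts at the time described a 30 percent hardware store fee applied first, with the 25 percent Horizon Worlds fee taken from the remainder. As a minimal sketch under that assumption (this is reporters’ arithmetic, not Meta’s published formula):

```python
# Sketch: how the reported Horizon Worlds fees could stack to ~48%.
# Assumes a 30% platform (Quest Store) fee applied first, then the
# 25% Horizon Worlds fee on the remainder, per press reports.

def creator_take(sale_price: float,
                 platform_fee: float = 0.30,
                 horizon_fee: float = 0.25) -> float:
    """Return what a creator keeps after both fees compound."""
    after_platform = sale_price * (1 - platform_fee)
    return after_platform * (1 - horizon_fee)

price = 100.00
take = creator_take(price)
total_fee_pct = 100 * (1 - take / price)

print(f"Creator keeps ${take:.2f} of ${price:.2f}")   # $52.50
print(f"Effective commission: {total_fee_pct:.1f}%")  # 47.5%
```

The 47.5 percent result matches the “about 48 percent” figure. On other companies’ hardware, per the quote above, only the 25 percent Horizon fee would come from Meta; the effective total would then depend on that platform’s own store fee.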

Zuckerberg claims he wants to support creators and help them make a living wage, but his statements are probably hot air. Talk is cheap, especially for tech giants. Zuckerberg wants to recoup the lost ad revenue through NFTs.

See. Kinder. Gentler. Maybe a Zuckbork?

Stephen E Arnold, May 9, 2022

Facebook and Litigation: A Magnet for Legal Eagles

May 6, 2022

Facebook, now called Meta, is doing everything it can to maintain relevance with kids and attract advertisers. A large portion of Facebook’s net profits comes from advertising fees. Meta has not been entirely honest with its customers: CNN Business explains in the story “Facebook Advertisers Can Pursue Class Action Over Ad Rates” that the company lied about the accuracy of its “potential reach” tool.

US District Judge James Donato in San Francisco ruled that the millions of people and businesses that paid for ads on Facebook and its subsidiary Instagram can sue as a group. Facebook’s fiasco started in pre-pandemic days:

“The lawsuit began in 2018, as DZ Reserve and other advertisers accused Facebook of inflating its advertising reach, by increasing the number of potential viewers by as much as 400%, and charging artificially high premiums for ad placements. They also said senior Facebook executives knew for years that the company’s “potential reach” metric was inflated by duplicate and fake accounts, yet did nothing about it and took steps to cover it up.”

Knowingly deceiving customers is a common business tactic among executives. They do not want to disappoint their investors, lose face, or lose money. It is such a standard business tactic that many bigwigs get away with it, but some are caught red-handed, to the fury of their customers. Facebook argued that a class action lawsuit was not possible because the litigants were too diverse, ranging from large corporations to individuals with home businesses. Facebook claimed it would not know how to calculate damages for such a varied group.

Judge Donato said it made more sense for Facebook’s irate customers to sue as a group, because “ ‘no reasonable person’ would sue Meta individually to recover at most a $32 price premium.”

Ticketmaster faced a similar scandal when it charged buyers absurd fees for tickets. The fees went directly into the pockets of executives. Ticketmaster’s class action lawsuit resulted in plaintiffs receiving $3 to $4 Ticketmaster gift certificates for each ticket they bought. The gift certificates could not be combined and had expiration dates.

Big businesses should be held accountable for their actions, but the payoff is not always that great for the individual.

Whitney Grace, May 6, 2022

Meta (Formerly Zuckbook) Chases Another Digital Ghost

May 5, 2022

High school science club thinking is alive and well at Meta (formerly Zuckbook). Here’s a flashback to the Information Industry Association meeting in Boston in 1981. A wizard of sorts (Marvin Weinberger maybe?) pointed out that artificial intelligence was just around the corner. The conference was not far from an MIT building, so his optimism may have had some vibes from the pre-Epstein era at that institution.

No one said anything. There were just chuckles.

Flash forward to 2022: Synthetic data, handwaving, unexplainable outputs, Teslas which get confused, YouTube ad placement, etc. The era of AI has arrived in its close-enough-for-horseshoes glory.

“Meta AI Is Building AI That Processes Language Like the Human Brain” explains:

Meta AI announced a long-term research initiative to understand how the human brain processes language. In collaboration with neuroimaging center Neurospin (CEA) and INRIA, Meta AI is comparing how AI language models and the brain respond to the same spoken or written sentences.

Significant advancements based on “valuable insights” will allow the Zuckbook to offer services that process language like the humanoid brain.

And the progress? Well, MIT is not involved. Human brains at that institution apparently misunderstood Jeffrey Epstein. The Zuckbook will not make that mistake, one hopes.

Neurospin? Niftier than plain old AI? Absolutely.

Stephen E Arnold, May 5, 2022

Gizmodo: The Facebook Papers, Void Filling, and Governance

May 2, 2022

If you need more evidence about the fine thought processes at Facebook, navigate to “We’re Publishing the Facebook Papers. Here’s What They Say About the Ranking Algorithms That Control Your News Feed.” The article includes a link to the page where the once-confidential documents are posted. In the event you just want to go directly to the list, here it is: https://bit.ly/3vWqLKD.

I reacted to the expansion of the Gizmodo Facebook papers with a chuckle. I noted this statement in the cited article:

Today, as part of a rolling effort to make the Facebook Papers available publicly, Gizmodo is releasing a second batch of documents—37 files in all.

I noted the phrase “rolling effort.”

In my OSINT lecture at the National Cyber Crime Conference, I mentioned that information once reserved for “underground” sites was making its way to mainstream Web sites. Major news organizations have dabbled in document “dumps.” The Pentagon Papers and the Snowden PowerPoints are examples some remember. An Australian “journalist” captured headlines, lived in an embassy, and faces a trip to the US because of document dumps.

Is Gizmodo moving from gadget reviews into the somewhat uncertain seas of digital information once viewed as proprietary, company confidential, or even trade secrets?

I don’t know if the professionals at Gizmodo are chasing clicks, thinking about emulating bigly media outfits, or doing what seems right and just.

I find the Facebook papers amusing. The legal eagles may have a different reaction. Remember: I found the embrace of this interesting content amusing. From my point of view, gadget reviews are more interesting if less amusing.

Stephen E Arnold, May 2, 2022

Users Might Accept Corrections to Fake News, If Facebook Could Be Bothered

April 28, 2022

Facebook (aka Meta) has had a bumpy road of late, but perhaps a hypothetical tweak to the news feed could provide a path forward for the Zuckbook. We learn from Newswise that a study recently published in the Journal of Politics suggests that “Corrections on Facebook News Feed Reduces Misinformation.” The paper was co-authored by George Washington University’s Ethan Porter and Ohio State University’s Thomas J. Wood and funded in part by civil society non-profit Avaaz. It contradicts previous research that suggested such an approach could backfire. The article from George Washington University explains:

“Social media users were tested on their accuracy in recognizing misinformation through exposure to corrections on a simulated news feed that was made to look like Facebook’s news feed. However, just like in the real world, people in the experiment were free to ignore the information in the feed that corrected false stories also posted on the news feed. Even when given the freedom to choose what to read in the experiment, users’ accuracy improved when fact-checks were included with false stories. The study’s findings contradict previous research that suggests displaying corrections on social media was ineffective or could even backfire by increasing inaccuracy. Instead, even when users are not compelled to read fact-checks in a simulation of Facebook’s news feed, the new study found they nonetheless became more factually accurate despite exposure to misinformation. This finding was consistent for both liberal and conservative users with only some variation depending on the topic of the misinformation.”

Alongside a control group of subjects who viewed a simulated Facebook feed with no corrections, researchers ran two variants of the experiment. In the first, they placed corrections above the original false stories (all of which had appeared on the real Facebook at some point). In the second, the fake news was blurred out beneath the corrections. Subjects in both versions were asked to judge the stories’ veracity on a scale of 1 – 5. See the write-up for more on the study’s methodology. One caveat—researchers acknowledge potential influences from friends, family, and other connections were outside the scope of the study.

If Facebook adopted a similar procedure on its actual news feed, perhaps it could curb the spread of fake news. But does it really want to? We suppose it must weigh its priorities—reputation and legislative hassles vs profits. Hmm.

Cynthia Murrell, April 28, 2022

Zuckerberg and Management: The Eye of What?

April 12, 2022

I am not familiar with Consequence.net. (I know. I am a lazy phat, phaux, phrench bulldog.) Plus I assume that everything I read on the Internet is actual factual. (One of my helpers clued me into that phrase. I am so grateful for young person speak.)

I spotted this article: “Mark Zuckerberg Says Meta Employees Lovingly Refer to Him as The Eye of Sauron.” The hook was the word “lovingly.” The article reported that the Zuck said on a very energetic, somewhat orthogonal podcast:

“Some of the folks I work with at the company — they say this lovingly — but I think that they sometimes refer to my attention as the Eye of Sauron. You have this unending amount of energy to go work on something, and if you point that at any given team, you will just burn them.”

My recollection of the eye in question is that the Lord of the Rings crowd is recycling the long Wikipedia article about a gaze that causes no end of grief. Mr. Zuck cause grief? Not possible. In Harrod’s Creek, a “Zuck up” means a sensitive, ethical action. A “Zuck eye,” therefore, suggests the look of love, understanding, and compassion. I have seen those eyes in printed motion picture posters; for example, for the film “Evil Eye,” released in the Time of Covid.

The article points out:

Without delving too deeply into fantasy lore, it is canonically nefarious, and bad things happen when it notices you. Zuckerberg’s computer nerd demeanor doesn’t quite scream “Dark Lord” to us, but we don’t deny that Meta employees would compare his semi-autocratic mode of operation to that of the Eye.

Interesting management method.

Stephen E Arnold, April 12, 2022

Facebook Defines Excellence: Also Participated?

April 5, 2022

Slick AI and content moderation functions are not all they are cracked up to be, sometimes with devastating results. SFGate provides one distressing example in, “‘Kill More’: Facebook Fails to Detect Hate Against Rohingya.” Rights group Global Witness recently put Facebook’s hate speech algorithms to the test. The AI failed spectacularly. The hate-filled ads submitted by the group were never posted, of course, though all eight received Facebook’s seal of approval. However, ads with similar language targeting Myanmar’s Rohingya Muslim minority have made it onto the platform in the past. Those posts were found to have contributed to a vicious campaign of genocide against the group. Associated Press reporters Victoria Milko and Barbara Ortutay write:

“The army conducted what it called a clearance campaign in western Myanmar’s Rakhine state in 2017 after an attack by a Rohingya insurgent group. More than 700,000 Rohingya fled into neighboring Bangladesh and security forces were accused of mass rapes, killings and torching thousands of homes. … On Feb. 1 of last year, Myanmar’s military forcibly took control of the country, jailing democratically elected government officials. Rohingya refugees have condemned the military takeover and said it makes them more afraid to return to Myanmar. Experts say such ads have continued to appear and that despite its promises to do better and assurances that it has taken its role in the genocide seriously, Facebook still fails even the simplest of tests — ensuring that paid ads that run on its site do not contain hate speech calling for the killing of Rohingya Muslims.”

The language in these ads is not subtle—any hate-detection algorithm that understands Burmese should have flagged it. Yet Meta (now Facebook’s “parent” company) swears it is doing its best to contain the problem. According to a recent statement sent to the AP, a company rep claims:

“We’ve built a dedicated team of Burmese speakers, banned the Tatmadaw, disrupted networks manipulating public debate and taken action on harmful misinformation to help keep people safe. We’ve also invested in Burmese-language technology to reduce the prevalence of violating content.”

Despite such assurances, Facebook has a history of failing to allocate enough resources to block propaganda with disastrous consequences for foreign populations. Perhaps taking more responsibility for their product’s impact in the world is too dull a topic for Zuck and company. They would much prefer to focus on the Metaverse, their latest shiny object, though that path is also fraught with collateral damage. Is Meta too big for anyone to hold it accountable?

Cynthia Murrell, April 5, 2022
