Microsoft: Not Deteriorating, Just Normal Behavior

June 26, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Gee, Microsoft, you are amazing. We just fired up a new Windows 11 Professional machine and guess what? Yep, the printers are not recognized. Nice work and consistent good enough quality.

Then I read “Microsoft Admits to Problems Upgrading Windows 11 Pro to Enterprise.” That write up says:

There are problems with Microsoft’s last few Windows 11 updates, leaving some users unable to make the move from Windows 11 Pro to Enterprise. Microsoft made the admission in an update to the "known issues" list for the June 11, 2024, update for Windows 11 22H2 and 23H2 – KB5039212. According to Microsoft, "After installing this update or later updates, you might face issues while upgrading from Windows Pro to a valid Windows Enterprise subscription."

Bad? Yes. But then I worked through this write up: “Microsoft Chose Profit Over Security and Left U.S. Government Vulnerable to Russian Hack, Whistleblower Says.” Is the information in the article on the money? I don’t know. I do know that bad actors find Windows the equivalent of an unlocked candy store. Goodies are there for greedy teens to cart off the chocolate-covered peanuts and gummy worms.


Everyone interested in entering the Microsoft Windows Theme Park wants to enjoy the thrills of a potentially lucrative experience. Thanks, MSFT Copilot. Why is everyone in your illustration the same?

This remarkable story of willful ignorance explains:

U.S. officials confirmed reports that a state-sponsored team of Russian hackers had carried out SolarWinds, one of the largest cyberattacks in U.S. history.

How did this happen? The write up asserts:

The federal government was preparing to make a massive investment in cloud computing, and Microsoft wanted the business. Acknowledging this security flaw could jeopardize the company’s chances, Harris [a former Microsoft security expert and whistleblower] recalled one product leader telling him. The financial consequences were enormous. Not only could Microsoft lose a multibillion-dollar deal, but it could also lose the race to dominate the market for cloud computing.

Bad things happened. The article includes this interesting item:

From the moment the hack surfaced, Microsoft insisted it was blameless. Microsoft President Brad Smith assured Congress in 2021 that “there was no vulnerability in any Microsoft product or service that was exploited” in SolarWinds.

Okay, that’s the main idea: Money.

Several observations are warranted:

  1. There seems to be an issue with procurement. The US government creates an incentive for Microsoft to go after big contracts and then does not require Microsoft products to work or be secure. I know generals love PowerPoint, but it seems that national security is at risk.
  2. Microsoft itself operates with a policy of doing what’s necessary to make as much money as possible and avoiding the cost of engineering products that deliver what the customer wants: Stable, secure software and services.
  3. Individual users have to figure out how to make the most basic functions work without stopping business operations. Printers should print; an operating system should be able to handle what my first personal computer could do in the early 1980s. After 25 years, printing is not a new thing.

Net net: In a consequence-free business environment, I am concerned that Microsoft will not improve its security and the most basic computer operations. I am not sure the company knows how to remediate what I think of as a Disneyland for bad actors. And I wanted the new Windows 11 Professional to work. How stupid of me?

Stephen E Arnold, June 26, 2024

X: The Prominent (Fake) News Source

June 26, 2024

Many of us have turned away from X, formerly Twitter, since its Musky takeover and now pay it little mind. However, it seems many Americans still trust the platform to deliver their news. This is concerning, considering “X Has Highest Rate of Misinformation As a News Source, Study Finds.”

Citing a recent Pew Research study, MediaDailyNews reports 65% of X users say news is a reason they visit the platform. Breaking news is even more of a draw, with 75% of users getting their real-time news on the platform. This is understandable given Twitter’s legacy, but are users unaware of how unreliable X has become? Writer Colin Kirkland emphasizes:

“What may be the greatest concern in Pew’s findings is that while X touts that it has the most devoted base of news seekers, it also ranked the highest in terms of inaccurate reporting. All of the platforms Pew studied proliferate misinformation-based news stories, but 86% of X’s base reported seeing inaccurate news, and 37% say they see it often. As Meta makes definitive moves to curb its news output on apps like Instagram, Facebook and Threads — the only other potential breaking-news alternative to X — Elon Musk’s app reigns supreme in the proliferation and digestion of news content, which could have effects on the upcoming presidential election, especially due to the amount of misinformation circling the platform.”

Yep. How can one reach X users with this important update? Pew is trying the direct route. Will it make any difference?

Cynthia Murrell, June 26, 2024

Falling Apples: So Many to Harvest and Sell to Pay the EU

June 25, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

What goes up seems to come down. Apple is peeling back on the weird headset gizmo. The company’s AI response — despite the thrills Apple Intelligence produced in some acolytes — is “to be” AI or vaporware. China dependence remains a sticky wicket. And if the information in “Apple Has Very Serious Issues Under Sweeping EU Digital Rules, Competition Chief Says” is accurate, the happy giant in Cupertino will be writing some Jupiter-sized checks. Imagine. Pesky Europeans are asserting that Apple has a monopoly and has been acting less like Johnny Appleseed and more like Andrew Carnegie.


A powerful force causes Tim Apple to wonder why so many objects are falling on his head. Thanks, MSFT Copilot. Good enough.

The write up says:

… regulators are preparing charges against the iPhone maker. In March [2024], the European Commission, the EU’s executive arm, opened a probe into Apple, Alphabet and Meta, under the sweeping Digital Markets Act tech legislation that became applicable this year. The investigation featured several concerns about Apple, including whether the tech giant is blocking businesses from telling their users about cheaper options for products or about subscriptions outside of the App Store.

Would Apple, the flag bearer for almost-impossible-to-repair products and software that just won’t charge laptop batteries no matter what the user does before a long airplane flight, prevent the free flow of information?

The EU nit pickers believe that Apple’s principles and policies are a “serious issue.”

How much money is possibly involved if the EU finds Apple a — pardon the pun — bad apple in a barrel of rotten US high technology companies? The write up says:

If it is found in breach of Digital Markets Act rules, Apple could face fines of up to 10% of the company’s total worldwide annual turnover.

For FY2023, Apple reported revenue of about $380 billion; this works out to a potential payday for the EU of about US$38 billion and change.
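The back-of-the-envelope arithmetic behind that figure is simple; a quick sketch (the $380 billion revenue figure and the 10% cap come straight from the write up):

```python
# Rough ceiling on a DMA fine: up to 10% of worldwide annual turnover.
apple_fy2023_revenue_usd = 380e9  # ~$380 billion, per the article
dma_fine_cap_rate = 0.10          # 10% cap under the Digital Markets Act

max_fine = apple_fy2023_revenue_usd * dma_fine_cap_rate
print(f"Maximum DMA fine: ${max_fine / 1e9:.0f} billion")  # → $38 billion
```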

Speaking of change, will a big fine cause those Apples to levitate? Nope.

Stephen E Arnold, June 25, 2024

Two EU Firms Unite in Pursuit of AI Sovereignty

June 25, 2024

Europe would like to get out from under the sway of North American tech firms. This is unsurprising, given how differently the EU views issues like citizen privacy. Then there are the economic incentives of localizing infrastructure, data, workforce, and business networks. Now, two generative AI firms are uniting with that goal in mind. The Next Web reveals, “European AI Leaders Aleph Alpha and Silo Ink Deal to Deliver ‘Sovereign AI’.” Writer Thomas Macaulay reports:

“Germany’s Aleph Alpha and Finland’s Silo AI announced the partnership [on June 13, 2024]. The duo plan to create a ‘one-stop-solution’ for European industrial firms exploring generative AI. Their collaboration brings together distinctive expertise. Aleph Alpha has been described as a European rival to OpenAI, but with a stronger focus on data protection, security, and transparency. The company also claims to operate Europe’s fastest commercial AI data center. Founded in 2019, the firm has become Germany’s leading AI startup. In November, it raised $500mn in a funding round backed by Bosch, SAP, and Hewlett Packard Enterprise. Silo AI, meanwhile, calls itself ‘Europe’s largest private AI lab.’ The Helsinki-based startup provides custom LLMs through a SaaS subscription. Use cases range from smart devices and cities to autonomous vehicles and industry 4.0. Silo also specializes in building LLMs for low-resource languages, which lack the linguistic data typically needed to train AI models. By the end of this year, the company plans to cover every official EU language.”

Both Aleph Alpha CEO Jonas Andrulis and Silo AI CEO Peter Sarlin enthusiastically advocate European AI sovereignty. Will the partnership strengthen their mutual cause?

Cynthia Murrell, June 25, 2024

A Discernment Challenge for Those Who Are Dull Normal

June 24, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Techradar, an online information service, published “Ahead of GPT-5 Launch, Another Test Shows That People Cannot Distinguish ChatGPT from a Human in a Conversation Test — Is It a Watershed Moment for AI?”  The headline implies “change everything” rhetoric, but that is routine AI jargon-hype.

Once again, academics who are unable to land a job in a “real” smart software company studied the work of their former colleagues who make a lot more money than those teaching do. Well, what do academic researchers do when they are not sitting in the student union or the snack area in the lab whilst waiting for a graduate student to finish a task? In my experience, some think about their CVs or résumés. Others ponder the flaws in a commercial or allegedly commercial product or service.


A young shopper explains that the outputs of egg laying chickens share a similarity. Insightful observation from a dumb carp. Thanks, MSFT Copilot. How’s that Recall project coming along?

The write up reports:

The Department of Cognitive Science at UC San Diego decided to see how modern AI systems fared and evaluated ELIZA (a simple rules-based chatbot from the 1960’s included as a baseline in the experiment), GPT-3.5, and GPT-4 in a controlled Turing Test. Participants had a five-minute conversation with either a human or an AI and then had to decide whether their conversation partner was human.

Here’s the research set up:

In the study, 500 participants were assigned to one of five groups. They engaged in a conversation with either a human or one of the three AI systems. The game interface resembled a typical messaging app. After five minutes, participants judged whether they believed their conversation partner was human or AI and provided reasons for their decisions.

And what did the intrepid academics find? Factoids that will get them a job at a Perplexity-type of company? Information that will put smart software into focus for the elected officials writing draft rules and laws to prevent AI from making The Terminator come true?

The results were interesting. GPT-4 was identified as human 54% of the time, ahead of GPT-3.5 (50%), with both significantly outperforming ELIZA (22%) but lagging behind actual humans (67%). Participants were no better than chance at identifying GPT-4 as AI, indicating that current AI systems can deceive people into believing they are human.
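As a sanity check on the “no better than chance” claim, one can ask how likely a 54% human-verdict rate would be if judges were simply flipping coins. A minimal sketch, assuming roughly 100 judges per condition (the study’s 500 participants split across five groups) and using an exact binomial tail calculation:

```python
from math import comb

def binom_tail(n: int, k: int, p: float = 0.5) -> float:
    """Probability of k or more successes in n trials with success rate p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# ~100 judges per condition; 54 of them called GPT-4 human.
p_at_least_54 = binom_tail(100, 54)
print(f"P(>= 54 'human' verdicts by pure chance): {p_at_least_54:.3f}")
```

A tail probability well above conventional significance thresholds is consistent with the study’s conclusion that judges could not reliably tell GPT-4 from a person.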

What does this mean for those labeled dull normal, a nifty term applied to some lucky people taking IQ tests. I wanted to be a dull normal, but I was able to score in the lowest possible quartile. I think it was called dumb carp. Yes!

Several observations to disrupt your clear thinking about smart software and research into how the hot dogs are made:

  1. The smart software seems to have stalled. In our tests of You.com, which allows one to select which model parrots information, it is tough to differentiate the outputs. Cut from the same transformer cloth maybe?
  2. Those judging, differentiating, and testing smart software outputs can discern differences if they are way above dull normal or my classification dumb carp. This means that indexing systems, people, and “new” models will be bamboozled into thinking what’s incorrect is a-okay. So much for the informed citizen.
  3. Will the next innovation in smart software revolutionize something? Yep, some lucky investors.

Net net: Confusion ahead for those like me: Dumb carp. Dull normals may be flummoxed. But those super-brainy folks have a chance to rule the world. Bust out the party hats and little horns.

Stephen E Arnold, June 24, 2024

Ad Hominem Attack: A Revived Rhetorical Form

June 24, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I remember my high school debate coach telling my partner Nick G. (I have forgotten the budding prosecutor’s name, sorry) you should not attack the character of our opponents. Nick G. had interacted with Bill W. on the basketball court in an end-of-year regional game. Nick G., as I recall got a bloody nose, and Bill W. was thrown out of the basketball game. When fisticuffs ensued, I thanked my lucky stars I was a hopeless athlete. Give me the library, a debate topic, a pile of notecards, and I was good to go. Nick G. included in his rebuttal statement comments about the character of Bill W. When the judge rendered a result and his comments, Nick G. was singled out as being wildly inappropriate. After the humiliating defeat, the coach explained that an ad hominem argument is not appropriate for 15-year-olds. Nick G.’s attitude was, “I told the truth.” As Nick G. learned, the truth is not what wins debate tournaments or life in some cases.

I thought about ad hominem arguments as I read “Silicon Valley’s False Prophet.” This essay reminded me of the essay by the same author titled “The Man Who Killed Google Search.” I must admit the rhetorical trope is repeatable. Furthermore, it can be applied to an individual who may be clueless about how selling advertising nuked relevance (or what was left of it) at the Google and to the deal making of a person whom I call Sam AI-Man. Who knows? Maybe other authors will emulate these two essays, and a new Silicon Valley genre may emerge ready for the real wordsmiths and pooh-bahs of Silicon Valley to crank out a hit piece every couple of days.

To the essay at hand: The false prophet is the former partner of Elon Musk and the on-again-off-again-on-again Big Dog at OpenAI. That’s an outfit where “open” means closed, and closed means open to the likes of Apple. The main idea, I think, is that AI sucks and Sam AI-Man continues to beat the drum for a technology that is likely to be headed for a correction. In Silicon Valley speak, the bubble will burst. It is, I surmise, Mr. AI-man’s fault.

The essay explains:

Sam Altman, however, exists in a category of his own. There are many, many, many examples of him saying that OpenAI — or AI more broadly — will do something it can’t and likely won’t, and it being meekly accepted by the Fourth Estate without any real pushback. There are more still of him framing the limits of the present reality as a positive — like when, in a fireside sitdown with 1980s used car salesman Salesforce CEO Marc Benioff, Altman proclaimed that AI hallucinations (when an LLM asserts something untrue as fact, because AI doesn’t know anything) are a feature, not a bug, and rather than being treated as some kind of fundamental limitation, should be regarded as a form of creative expression.

I understand. Salesperson. Quite a unicorn in Silicon Valley. I mean when I worked there I would encounter hyperbole artists every few minutes. Yeah, Silicon Valley. Anchored in reality, minimum viable products, and lots of hanky panky.

The essay provides a bit of information about the background of Mr. AI-Man:

When you strip away his ability to convince people that he’s smart, Altman had actually done very little — he was a college dropout with a failing-then-failed startup, one where employees tried to get him fired twice.

If true, that takes some doing. Employees tried to get the false prophet fired twice. In olden times, burning at the stake might have been an option. Now it is just move on to another venture. Progress.

The essay does provide some insight into Sam AI-Man’s core competency:

Altman is adept at using connections to make new connections, in finding ways to make others owe him favors, in saying the right thing at the right time when he knew that nobody would think about it too hard. Altman was early on Stripe, and Reddit, and Airbnb — all seemingly-brilliant moments in the life of a man who had many things handed to him, who knew how to look and sound to get put in the room and to get the capital to make his next move. It’s easy to conflate investment returns with intellectual capital, even though the truth is that people liked Altman enough to give him the opportunity to be rich, and he took it.

I cannot figure out if the author envies Sam AI-Man, reviles him for being clever (a key attribute in some high-technology outfits), or genuinely perceives Mr. AI-Man as the first cousin to Beelzebub. Whatever the motivation, I find the phoenix-like rising of the ad hominem attack a refreshing change from the entitled pooh-bahism of some folks writing about technology.

The only problem: I think it is unlikely that the author will be hired by OpenAI. Chance blown.

Stephen E Arnold, June 24, 2024

Chasing a Folly: Identifying AI Content

June 24, 2024

Like other academic publishers, Springer Nature Group is plagued by fake papers. Now the company announces, “Springer Nature Unveils Two New AI Tools to Protect Research Integrity.” How effective the tools are remains to be proven, but at least the company is making an effort. The press release describes text-checker Geppetto and image-analysis tool SnappShot. We learn:

“Geppetto works by dividing the paper up into sections and uses its own algorithms to check the consistency of the text in each section. The sections are then given a score based on the probability that the text in them has been AI generated. The higher the score, the greater the probability of there being problems, initiating a human check by Springer Nature staff. Geppetto is already responsible for identifying hundreds of fake papers soon after submission, preventing them from being published – and from taking up editors’ and peer reviewers’ valuable time.

SnappShot, also developed in-house, is an AI-assisted image integrity analysis tool. Currently used to analyze PDF files containing gel and blot images and look for duplications in those image types – another known integrity problem within the industry – this will be expanded to cover additional image types and integrity problems and speed up checks on papers.”
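Springer Nature has not published Geppetto’s internals, but the description above — split a paper into sections, score each for the probability of AI generation, escalate high scorers to a human — maps onto a simple triage pipeline. A hypothetical sketch, with `score_section` as a crude stand-in for whatever proprietary classifier the real system uses:

```python
def score_section(text: str) -> float:
    """Placeholder for a classifier returning P(AI-generated).
    Geppetto's real model is proprietary; this stub merely flags
    sections with suspiciously repetitive wording."""
    words = text.lower().split()
    if not words:
        return 0.0
    return 1.0 - len(set(words)) / len(words)  # crude repetition score

def triage_paper(sections: dict[str, str], threshold: float = 0.5) -> list[str]:
    """Return names of sections scoring above the threshold,
    i.e., those escalated for a human check."""
    return [name for name, text in sections.items()
            if score_section(text) > threshold]

paper = {
    "abstract": "novel method improves results on benchmark data",
    "methods": "we we we use use use the the the same same words words",
}
print(triage_paper(paper))  # → ['methods']
```

The design point worth noting is the human-in-the-loop step: the score only initiates a manual review, it does not reject the paper on its own.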

Springer Nature’s Chris Graf emphasizes the importance of research integrity and vows to continue developing and improving in-house tools. To that end, we learn, the company is still growing its fraud-detection team. The post points out Springer Nature is a contributing member of the STM Integrity Hub.

Based in Berlin, Springer Nature was formed in 2015 through the combination of Nature Publishing Group, Macmillan Education, and Springer Science+Business Media. A few of its noteworthy publications include Scientific American, Nature, and this collection of Biology, Clinical Medicine, and Health journals.

Cynthia Murrell, June 24, 2024

Thomson Reuters: A Trust Report about Trust from an Outfit with Trust Principles

June 21, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Thomson Reuters is into trust. The company has a Web page called “Trust Principles.” Here’s a snippet:

The Trust Principles were created in 1941, in the midst of World War II, in agreement with The Newspaper Proprietors Association Limited and The Press Association Limited (being the Reuters shareholders at that time). The Trust Principles imposed obligations on Reuters and its employees to act at all times with integrity, independence, and freedom from bias. Reuters Directors and shareholders were determined to protect and preserve the Trust Principles when Reuters became a publicly traded company on the London Stock Exchange and Nasdaq. A unique structure was put in place to achieve this. A new company was formed and given the name ‘Reuters Founders Share Company Limited’, its purpose being to hold a ‘Founders Share’ in Reuters.

Trust nestles in some legalese and a bit of business history. The only reason I mention this anchoring in trust is that Thomson Reuters reported quarterly revenue of $1.88 billion in May 2024, up from $1.74 billion in May 2023. The financial crowd had expected $1.85 billion in the quarter, and Thomson Reuters beat that. Surplus funds make it possible to fund many important tasks; for example, a study of trust.


The ouroboros, according to some big thinkers, symbolizes the entity’s journey and the unity of all things; for example, defining trust, studying trust, and writing about trust as embodied in the symbol.

My conclusion is that trust as a marketing and business principle seems to be good for business. Therefore, I trust, and I am confident that the information in “Global Audiences Suspicious of AI-Powered Newsrooms, Report Finds.” The subject of the trusted news story is the Reuters Institute for the Study of Journalism. The Thomson Reuters reporter presents in a trusted way this statement:

According to the survey, 52% of U.S. respondents and 63% of UK respondents said they would be uncomfortable with news produced mostly with AI. The report surveyed 2,000 people in each country, noting that respondents were more comfortable with behind-the-scenes uses of AI to make journalists’ work more efficient.

To make the point, a person quoted in the trusted outfit’s trusted report says, in what strikes me as a trustworthy way:

“It was surprising to see the level of suspicion,” said Nic Newman, senior research associate at the Reuters Institute and lead author of the Digital News Report. “People broadly had fears about what might happen to content reliability and trust.”

In case you have lost the thread, let me summarize. The trusted outfit Thomson Reuters funded a study about trust. The research was conducted by the trusted outfit’s own Reuters Institute for the Study of Journalism. The conclusion of the report, as presented by the trusted outfit, is that people want news they can trust. I think I have covered the postcard with enough trust stickers.

I know I can trust the information. Here’s a factoid from the “real” news report:

Vitus “V” Spehar, a TikTok creator with 3.1 million followers, was one news personality cited by some of the survey respondents. Spehar has become known for their unique style of delivering the top headlines of the day while laying on the floor under their desk, which they previously told Reuters is intended to offer a more gentle perspective on current events and contrast with a traditional news anchor who sits at a desk.

How can one not trust a report that includes a need met by a TikTok creator? Would a Thomson Reuters’ professional write a news story from under his or her desk or cube or home office kitchen table?

I think self funded research which finds that the funding entity’s approach to trust is exactly what those in search of “real” news need. Wikipedia includes some interesting information about Thomson Reuters in its discussion of the company in the section titled “Involvement in Surveillance.” Wikipedia alleges that Thomson Reuters licenses data to Palantir Technologies, an assertion which if accurate I find orthogonal to my interpretation of the word “trust.” But Wikipedia is not Thomson Reuters.

I will not ask questions about the methodology of the study. I trust the Thomson Reuters’ professionals. I will not ask questions about the link between revenue and digital information. I have the trust principles to assuage any doubt. Self-funded research, it seems, finds that the funding entity’s approach to trust is exactly what those in search of “real” news need. I will not comment on the wonderful ouroboros-like quality of an enterprise embodying trust, funding a study of trust, and converting those data into a news story about itself. The symmetry is delicious and, of course, trustworthy. For information about Thomson Reuters’s trust use of artificial intelligence see this Web page.

Stephen E Arnold, June 21, 2024

The Key to Success at McKinsey & Company: The 2024 Truth Is Out!

June 21, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

When I was working at a “real” company, I wanted to labor in the vineyards of a big-time, blue-chip consulting firm. I achieved that goal and, after a suitable period of time in the penal colony, I escaped to a client. I made it out, unscathed, and entered a more interesting, less nutso working life. When the “truth” about big-time, blue-chip consulting firms appears in public sources, I scan the information. Most of it is baloney; for example, the yip yap about McKinsey and its advice pertaining to addictive synthetics. Hey, stuff happens when one is objective. “McKinsey Exec Tells Summer Interns That Learning to Ask AI the Right Questions Is the Key to Success” contains some information which I find quite surprising. First, I don’t know if the factoids in the write up are accurate or if they are the off-the-cuff baloney recruiters regularly present to potential 60-hour-a-week knowledge worker serfs or if the person has a streaming video connection to the McKinsey managing partner’s work-from-the-resort office.

Let’s assume the information is correct and consider some of its implications. An intern is a no-pay or low-pay job for students from the right institutions, the right background, or the right connections. The idea is that associates (one step above the no-pay serf) and partners (the set for life if you don’t die of heart failure crowd) can observe, mentor, and judge these field laborers. The write up states:

Standing out in a summer internship these days boils down to one thing — learning to talk to AI. At least, that’s the advice McKinsey’s chief client officer, Liz Hilton Segel, gave one eager intern at the firm. “My advice to her was to be an outstanding prompt engineer,” Hilton Segel told The Wall Street Journal.

But what about grades? What about my family’s connections to industry, elected officials, and a supreme court judge? What about my background scented with old money, sheepskin from prestigious universities, and a Nobel Prize awarded a relative 50 years ago? These questions, it seems, may no longer be relevant. AI is coming to the blue-chip consulting game, and the old-school markers of building big revenues may no longer matter.

AI matters. After an 11-month effort, McKinsey has produced Lilli. The smart system, despite fits and starts, has delivered results; that is, a payoff, cash money, engagement opportunities. The write up says:

Lilli’s purpose is to aggregate the firm’s knowledge and capabilities so that employees can spend more time engaging with clients, Erik Roth, a senior partner at McKinsey who oversaw Lilli’s development, said last year in a press release announcing the tool.

And the proof? I learned:

“We’ve [McKinsey humanoids] answered over 3 million prompts and add about 120,000 prompts per week,” he [Erik Roth] said. “We are saving on average up to 30% of a consultant’s time that they can reallocate to spend more time with their clients instead of spending more time analyzing things.”
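Taking the quoted figures at face value, and borrowing the 60-hour workweek mentioned above, the claimed savings reduce to simple arithmetic (the numbers are the article’s, not audited ones):

```python
hours_per_week = 60       # the blue-chip consulting workweek cited above
time_saved_rate = 0.30    # "up to 30%" of a consultant's time, per the Lilli claim
prompts_per_week = 120_000

hours_freed = hours_per_week * time_saved_rate
print(f"Hours reallocated per consultant per week: {hours_freed:.0f}")  # → 18
print(f"Prompts handled firm-wide per week: {prompts_per_week:,}")
```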

Thus, the future of success is to learn to use Lilli. I am surprised that McKinsey does not sell internships, possibly using a Ticketmaster-type system.

Several observations:

  1. As Lilli gets better or is replaced by a more cost efficient system, interns and newly hired professionals will be replaced by smart software.
  2. McKinsey and other blue-chip outfits will embrace smart software because it can sell what the firm learns to its clients. AI becomes a Petri dish for finding marketable information.
  3. The hallucinative functions of smart software just create an opportunity for McKinsey and other blue-chip firms to sell their surviving professionals at a more inflated fee. Why fail and lose money? Just pay the consulting firm, sidestep the stupidity tax, and crush those competitors to whom the consulting firms sell the cookie cutter knowledge.

Net net: Blue-chip firms survived the threat from gig consultants and the Gerson Lehrman-type challenge. Now McKinsey is positioning itself to create a no-expectation environment for new hires, cut costs, and increase billing rates for the consultants at the top of the pyramid. Forget opioids. Go AI.

Stephen E Arnold, June 21, 2024

Meta Case Against Intelware Vendor Voyager Labs to Go Forward

June 21, 2024

Another clever intelware play gets trapped and now moves to litigation. Meta asserts that when Voyager Labs scraped data on over 600,000 Facebook users, it violated its contract. Furthermore, it charges, the scraping violated anti-hacking laws. While Voyager insists the case should be summarily dismissed, U.S. District Court Judge Araceli Martinez-Olguin disagrees. MediaDailyNews reports, “Meta Can Proceed With Claims that Voyager Labs Scraped Users’ Data.” Writer Wendy Davis explains:

“Voyager argued the complaint should be dismissed at an early stage for several reasons. Among others, Voyager said the allegations regarding Facebook’s terms of service were too vague. Meta’s complaint ‘refers to a catchall category of contracts … but then says nothing more about those alleged contracts, their terms, when they are supposed to have been executed, or why they allegedly bind Voyager UK today,’ Voyager argued to Martinez-Olguin in a motion filed in February. The company also said California courts lacked jurisdiction to decide whether the company violated federal or state anti-hacking laws. Martinez-Olguin rejected all of Voyager’s arguments on Thursday. She wrote that while Meta’s complaint could have set out the company’s terms of service ‘with more clarity,’ the allegations sufficiently informed Voyager of the basis for Meta’s claim.”

This battle began in January 2023 when Meta first filed the complaint. Now it can move forward. How long before the languid wheels of justice turn out a final ruling? A long time, we wager.

Cynthia Murrell, June 21, 2024
