Llama Beans? Is That the LLM from Zuckbook?

August 4, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

We love open-source projects. Camelids that masquerade as such, not so much. According to The Register, “Meta Can Call Llama 2 Open Source as Much as It Likes, but That Doesn’t Mean It Is.” The company asserts its new large language model is open source because it is freely available for research and (some) commercial use. Are Zuckerberg and his team of Meta marketers fuzzy on the definition of open source? Writer Steven J. Vaughan-Nichols builds his case with quotes from several open source authorities. First up:

“As Erica Brescia, a managing director at RedPoint, the open source-friendly venture capital firm, asked: ‘Can someone please explain to me how Meta and Microsoft can justify calling Llama 2 open source if it doesn’t actually use an OSI [Open Source Initiative]-approved license or comply with the OSD [Open Source Definition]? Are they intentionally challenging the definition of OSS [Open Source Software]?'”

Maybe they are trying. After all, open source is good for business. And being open to crowd-sourced improvements does help the product. However, as the post continues:

“The devil is in the details when it comes to open source. And there, Meta, with its Llama 2 Community License Agreement, falls on its face. As The Register noted earlier, the community agreement forbids the use of Llama 2 to train other language models; and if the technology is used in an app or service with more than 700 million monthly users, a special license is required from Meta. It’s also not on the Open Source Initiative’s list of open source licenses.”

Next, we learn OSI’s executive director Stefano Maffulli directly states Llama 2 does not meet his organization’s definition of open source. The write-up quotes him:

“While I’m happy that Meta is pushing the bar of available access to powerful AI systems, I’m concerned about the confusion by some who celebrate Llama 2 as being open source: if it were, it wouldn’t have any restrictions on commercial use (points 5 and 6 of the Open Source Definition). As it is, the terms Meta has applied only allow some commercial use. The keyword is some.”

Maffulli further clarifies that Meta’s license specifically means Amazon, Google, Microsoft, ByteDance, Alibaba, and any startup that grows too much may not use the LLM without special permission. Such a restriction is a no-no in actual open source projects. Finally, Software Freedom Conservancy executive director Karen Sandler observes:

“It looks like Meta is trying to push a license that has some trappings of an open source license but, in fact, has the opposite result. Additionally, the Acceptable Use Policy, which the license requires adherence to, lists prohibited behaviors that are very expansively written and could be very subjectively applied.”

Perhaps most egregious for Sandler is the absence of a public drafting or comment process for the Llama 2 license. Llamas are not particularly speedy creatures.

Cynthia Murrell, August 4, 2023

Stanford: Llama Hallucinating at the Dollar Store

March 21, 2023

Editor’s Note: This essay is the work of a real, and still alive, dinobaby. No smart software involved with the exception of the addled llama.

What happens when folks at Stanford University use the output of OpenAI to create another generative system? First, a blog article appears; for example, “Stanford’s Alpaca Shows That OpenAI May Have a Problem.” Second, I am waiting for legal eagles to take flight. Some may already be aloft and circling.


A hallucinating llama which confused grazing on other wizards’ work with munching on mushrooms. The art was a creation of ScribbledDiffusion.com. The smart software suggests the llama is having a hallucination.

What’s happening?

The model trained from OWW or Other Wizards’ Work mostly works. The gotcha is that using OWW without any silly worrying about copyrights was cheap. According to the write up, the total (excluding wizards’ time) was $600.

The article pinpoints the issue:

Alignment researcher Eliezer Yudkowsky summarizes the problem this poses for companies like OpenAI: “If you allow any sufficiently wide-ranging access to your AI model, even by paid API, you’re giving away your business crown jewels to competitors that can then nearly-clone your model without all the hard work you did to build up your own fine-tuning dataset.” What can OpenAI do about that? Not much, says Yudkowsky: “If you successfully enforce a restriction against commercializing an imitation trained on your I/O – a legal prospect that’s never been tested, at this point – that means the competing checkpoints go up on BitTorrent.”
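The recipe Yudkowsky describes is simple enough to sketch. Below is a minimal, hypothetical harvesting step in Python; the `complete` callable stands in for whatever hosted-model API is being imitated, and none of this is the Stanford team’s actual pipeline.

```python
# A minimal sketch of the imitation recipe described above: harvest outputs
# from a strong model, then use the pairs as supervised fine-tuning data for
# a small open model. The `complete` callable and file layout are our
# illustrative assumptions, not the Stanford team's actual code.
import json
from typing import Callable, Iterable

def harvest_imitation_data(complete: Callable[[str], str],
                           seed_prompts: Iterable[str],
                           out_path: str = "imitation_data.jsonl") -> None:
    """Save (prompt, response) pairs produced by a stronger model."""
    with open(out_path, "w") as f:
        for prompt in seed_prompts:
            pair = {"prompt": prompt, "response": complete(prompt)}
            f.write(json.dumps(pair) + "\n")

# The resulting JSONL becomes fine-tuning data for a small base model; the
# article's point is that this harvesting step cost roughly $600 in API fees.
```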

I love the rapid rise in smart software uptake and now the snappy shift to commoditization. The VCs counting on big smart software payoffs may want to think about why the llama in the illustration looks as if synapses are forming new, low-cost connections. Low cost as in really cheap, I think.

Stephen E Arnold, March 21, 2023

Does a LLamA Bite? No, But It Can Be Snarky

February 28, 2023

Everyone in Harrod’s Creek knows the name Yann LeCun. The general view is that when it comes to smart software, this wizard wrote or helped write the book. I spotted a tweet thread “LLaMA Is a New *Open-Source*, High-Performance Large Language Model from Meta AI – FAIR.” The link to the Facebook research paper “LLaMA: Open and Efficient Foundation Language Models” explains the innovation for smart software enthusiasts. In a nutshell, the Zuck approach is bigger, faster, and trained only on data available to everyone. Also, it does not require Googzilla-scale hardware for some applications.

That’s the first tip-off that the technical paper has a snarky sparkle. Exactly what data have been used to train Google’s and other large language models? The implicit idea is that while the legal eagles flock to sue over copyright-violating actions, the Zuckers allegedly fly in clean air.

Here are a few other snarkifications I spotted:

  1. Use small models trained on more data. The idea is that others train big, Googzilla-sized models on data, some of which is not publicly available
  2. The Zuck approach uses an “efficient implementation of the causal multi-head attention operator” (see the sketch after this list). The idea is that the Zuck method is more efficient; therefore, better
  3. In testing performance, the results are all over the place. The reason? The method for determining performance is not very good. Okay, still Meta is better. The implication is that one should trust Facebook. Okay. That’s scientific.
  4. And cheaper? Sure. There will be fewer legal fees to deal with pesky legal challenges about fair use.
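Since the paper’s pitch rests on that operator, here is a minimal, unoptimized sketch of causal multi-head attention in PyTorch. The function and weight names are ours, not Meta’s; the paper’s efficiency trick is precisely to avoid materializing the full masked score matrix, which this naive reference version does build.

```python
# Naive causal multi-head attention: builds the full seq_len x seq_len
# score matrix that efficient kernels avoid. Illustrative only.
import torch
import torch.nn.functional as F

def causal_multihead_attention(x, w_qkv, w_out, n_heads):
    # x: (batch, seq_len, d_model); w_qkv: (d_model, 3*d_model); w_out: (d_model, d_model)
    batch, seq_len, d_model = x.shape
    head_dim = d_model // n_heads
    q, k, v = (x @ w_qkv).chunk(3, dim=-1)  # project input to queries, keys, values
    # Split each into heads: (batch, n_heads, seq_len, head_dim)
    split = lambda t: t.view(batch, seq_len, n_heads, head_dim).transpose(1, 2)
    q, k, v = split(q), split(k), split(v)
    scores = (q @ k.transpose(-2, -1)) / head_dim ** 0.5  # scaled dot products
    # Causal mask: token i may attend only to tokens 0..i
    mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool, device=x.device),
                      diagonal=1)
    scores = scores.masked_fill(mask, float("-inf"))
    out = F.softmax(scores, dim=-1) @ v                         # attention-weighted values
    out = out.transpose(1, 2).reshape(batch, seq_len, d_model)  # merge heads
    return out @ w_out

# Example shapes: x = torch.randn(1, 16, 64), w_qkv (64, 192), w_out (64, 64), n_heads = 4.
```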

What’s my take? Another open source tool will lead to applications built on top of the Zuckbook’s approach.

Now the developers and users will have to decide if the LLamA can bite. Does Facebook have its wizardly head in the Azure clouds? Will the Sages of Amazon take note?

Tough questions. At first glance, llamas have other means of defending themselves. Teeth may not be needed. Yes, that’s snarky.

Stephen E Arnold, February 28, 2023

The Many Faces of Zuckbook

March 29, 2024

This essay is the work of a dumb dinobaby. No smart software required.

As evidenced by his business decisions, Mark Zuckerberg seems to be a complicated fellow. For example, a couple of recent articles illustrate the contrast: On one hand is his commitment to support open source software, an apparently benevolent position. On the other, Meta is once again in the crosshairs of EU privacy advocates for what they insist is its disregard for the law.

First, we turn to a section of VentureBeat’s piece, “Inside Meta’s AI Strategy: Zuckerberg Stresses Compute, Open Source, and Training Data.” In it, reporter Sharon Goldman shares highlights from Meta’s Q4 2023 earnings call. She emphasizes Zuckerberg’s continued commitment to open source software, specifically AI software Llama 3 and PyTorch. He touts these products as keys to “innovation across the industry.” Sounds great. But he also states:

“Efficiency improvements and lowering the compute costs also benefit everyone including us. Second, open source software often becomes an industry standard, and when companies standardize on building with our stack, that then becomes easier to integrate new innovations into our products.”

Ah, there it is.

Our next item was apparently meant to be sneaky, but who did Meta think it was fooling? The Register reports, “Meta’s Pay-or-Consent Model Hides ‘Massive Illegal Data Processing Ops’: Lawsuit.” Meta is attempting to “comply” with the EU’s privacy regulations by making users pay to exercise their privacy rights. That is not what regulators had in mind. We learn:

“Those of us with aunties on FB or friends on Instagram were asked to say yes to data processing for the purpose of advertising – to ‘choose to continue to use Facebook and Instagram with ads’ – or to pay up for a ‘subscription service with no ads on Facebook and Instagram.’ Meta, of course, made the changes in an attempt to comply with EU law. But privacy rights folks weren’t happy about it from the get-go, with privacy advocacy group noyb (None Of Your Business), for example, sarcastically claiming Meta was proposing you pay it in order to enjoy your fundamental rights under EU law. The group already challenged Meta’s move in November, arguing EU law requires consent for data processing to be given freely, rather than to be offered as an alternative to a fee. Noyb also filed a lawsuit in January this year in which it objected to the inability of users to ‘freely’ withdraw data processing consent they’d already given to Facebook or Instagram.”

And now eight members of the European Consumer Organisation (BEUC) have filed new complaints, insisting Meta’s pay-or-consent tactic violates the EU’s General Data Protection Regulation (GDPR). While that may seem obvious to some, Meta insists it is in compliance with the law. Because of course it does.

Cynthia Murrell, March 29, 2024

Content Mastication: A Controversial Business Tactic

January 25, 2024

This essay is the work of a dumb dinobaby. No smart software required.

In the midst of the unfolding copyright issues, I found this post quite interesting. Torrent Freak published a story titled “Meta Admits Use of ‘Pirated’ Book Dataset to Train AI.” Is the story spot on? I sure don’t know. Nevertheless, the headline is a magnetic one. The story reports:

The cases allege that tech companies, including Meta and OpenAI, used the controversial Books3 dataset to train their models. The Books3 dataset has a clear piracy angle. It was created by AI researcher Shawn Presser in 2020, who scraped the library of ‘pirate’ site Bibliotik. This book archive was publicly hosted by digital archiving collective ‘The Eye‘ at the time, alongside various other data sources.


A combination of old-fashioned content collection and smart systems moves information from Point A (a copyright owner’s night table) to a smart software system. MSFT’s second-class Copilot Bing thing created this cartoon. Sigh. Not even good enough now in my opinion.

What was in the Books3 data collection? The TF story elucidates:

The general vision was that the plaintext collection of more than 195,000 books, which is nearly 37GB…

What did Meta allegedly do to make its Llama smarter than the average member of the Camelidae family? Let’s roll the TF quote:

Responding to a lawsuit from writer/comedian Sarah Silverman, author Richard Kadrey, and other rights holders, the tech giant admits that “portions of Books3” were used to train the Llama AI model before its public release. “Meta admits that it used portions of the Books3 dataset, among many other materials, to train Llama 1 and Llama 2,” Meta writes in its answer [to a court].

The article does not include any statements like “Thank you for the question” or “I don’t know. My team will provide the answer at the earliest possible moment.” Nope. Just an alleged admission.

How will the Meta and parallel copyright legal matter evolve? Beyond Search has zero clue. The US judicial system has deep and mysterious logic. One thing is certain: Senior executives do not like uncertainty and risk. The copyright litigation seems tailored to cause some techno feudalists to imagine a world in which laws, annoying regulators, and people yapping about intellectual property were nudged into a different line of work. One example which comes to mind is building secure bunkers or taking care of the lawn.

Stephen E Arnold, January 25, 2024

Another AI Output Detector

January 1, 2024

This essay is the work of a dumb dinobaby. No smart software required.

It looks like AI detection may have a way to catch up with AI text capabilities. But for how long? Nature reports, “’ChatGPT Detector’ Catches AI-Generated Papers with Unprecedented Accuracy.” The key to this particular tool’s success is its specificity—it was developed by chemist Heather Desaire and her team at the University of Kansas specifically to catch AI-written chemistry papers. Reporter McKenzie Prillaman tells us:

“Using machine learning, the detector examines 20 features of writing style, including variation in sentence lengths, and the frequency of certain words and punctuation marks, to determine whether an academic scientist or ChatGPT wrote a piece of text. The findings show that ‘you could use a small set of features to get a high level of accuracy’, Desaire says.”

The model was trained on human-written papers from 10 chemistry journals, then tested on 200 samples written by ChatGPT-3.5 and ChatGPT-4. Half the samples were based on the papers’ titles, half on the abstracts. The tool identified the AI text 100% and 98% of the time, respectively. That clobbers the competition: ZeroGPT only caught about 35–65% of the samples, and OpenAI’s own text classifier snagged 10–55%.
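To make the quoted recipe concrete, here is a toy sketch of the general approach: handcrafted writing-style features fed to a simple classifier. The handful of features and the logistic regression below are our illustrative stand-ins, not the Desaire team’s actual 20 features or model.

```python
# Toy stylometric detector: hand-built style features plus a linear
# classifier. Features here are illustrative guesses, not the paper's.
import statistics
from sklearn.linear_model import LogisticRegression

def style_features(text: str) -> list[float]:
    sentences = [s for s in text.replace("?", ".").replace("!", ".").split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    n = max(len(text.split()), 1)
    return [
        statistics.mean(lengths) if lengths else 0.0,    # average sentence length
        statistics.pstdev(lengths) if lengths else 0.0,  # variation in sentence length
        text.count(",") / n,                             # comma rate
        text.count(";") / n,                             # semicolon rate
        text.lower().count("however") / n,               # one marker-word rate
    ]

def train_detector(human_texts, ai_texts):
    # Label 0 = human-written, 1 = AI-generated
    X = [style_features(t) for t in human_texts + ai_texts]
    y = [0] * len(human_texts) + [1] * len(ai_texts)
    return LogisticRegression(max_iter=1000).fit(X, y)
```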

“The new ChatGPT catcher even performed well with introductions from journals it wasn’t trained on, and it caught AI text that was created from a variety of prompts, including one aimed to confuse AI detectors. However, the system is highly specialized for scientific journal articles. When presented with real articles from university newspapers, it failed to recognize them as being written by humans.”

The lesson here may be that AI detectors should be tailor made for each discipline. That could work—at least until the algorithms catch on. On the other hand, developers are working to make their systems more and more like humans.

Cynthia Murrell, January 1, 2024

AI Greed and Apathy: A Winning Combo

November 9, 2023

This essay is the work of a dumb humanoid. No smart software required.

Grinding through the seemingly endless strings of articles and news releases about smart software or AI as the 50-year-old “next big thing” is labeled, I spotted this headline: “Poll: AI Regulation Is Not a Priority for Americans.”

The main point of the write up is that ennui captures the attitude of Americans in the survey sample. But ennui toward what? The rising price of streaming? The bulk fentanyl shipped to certain nation states not too far from the US? The oddball weapons some firearm experts show their students? Nope.


Smart software is unlikely to drive over the toes of Mr. and Mrs. Average Family (a mythical average family). Some software developers, however, are likely to become roadkill on the Information Highway. Thanks, Bing. Nice cartoon. I like the red noses. Apparently MBAs drink a lot, maybe?

The answer is artificial intelligence, smart software, or everyone’s friends Bard, Bing, GPT, Llama, et al. Let me highlight three factoids from the write up. No, I won’t complain about sample size, methodology, and skipping Stats 201 class to get the fresh-from-the-oven cookies in the student union. (Hey, doesn’t every data wrangler have that hidden factoid?)

Let’s look at the three items I selected. Please, navigate to the cited write up for more ennui outputs:

  • 53% of women would not let their kids use AI at all, compared to 26% of men. (Good call, moms.)
  • Regulating tech companies came in 14th (just above federally legalizing marijuana), with 22% calling it a top priority and 35% saying it’s "important, but a lower priority."
  • Since our last survey in August, the percentage of people who say "misinformation spread by artificial intelligence" will have an impact on the 2024 presidential election saw an uptick from 53% to 58%. (Gee, that seems significant.)

I have enough information to offer a few observations about the push to create AI rules for the Information Highway. Here we go:

  1. Ignore the rules. Go fast. Have fun. Make money in unsanctioned races. (Look out pedestrians.)
  2. Consultants and lawyers are looking at islands to buy and exotic cars to lease. Why? A bonanza awaits in explaining the threats and opportunities as more people manifest concern about AI.
  3. Government regulators will have meetings and attend international conferences. Some will be in places where personal safety is not a concern and the weather is great. (Hooray!)

Net net: Indifference has some upsides. Plus, it allows US AI giants time to become more magnetic and pull money, users, and attention. Great days.

Stephen E Arnold, November 9, 2023


Big Tech: Your Money or Your Digital Life? We Are Thinking

September 20, 2023

Why is anyone surprised that big tech companies want to exploit AI for profit? Business Insider gives a quick rundown on how big tech advertised AI as a beneficial research tool but now prioritizes it as a commercial revenue generator in “Silicon Valley Presented AI As A Noble Research Tool. Now It’s All About Cold, Hard Cash.”

Big tech companies presented AI research as a societal boon whose findings would be shared with everyone. The research was done without worrying about costs, the ideal situation for discovery. Google even wrote off $1.3 billion of DeepMind’s debt to demonstrate its commitment to advancing AI research.

As inflation rises, big tech companies are worried about their bottom lines. ChatGPT and similar algorithms have made significant headway in AI science, so big tech companies are eager to exploit them for money. They are racing to commercialize chatbots by promoting the benefits to consumers. Competitors are forced to develop their own chatbots or lose business.

Meta says it is prioritizing AI research but ironically sacked a team researching protein folding. Meta wants to cut the fat to concentrate on profits. Unfortunately, the protein-folding work was axed even though understanding protein folding could help scientists understand diseases such as Parkinson’s and Alzheimer’s.

Google is focusing on net profits too. One example is a new DeepMind unit that shares AI research papers to improve products as well as people’s lives. Meta, for its part, made its new large language model Llama 2 freely available for businesses with fewer than 700 million monthly active users. Google continues to output smart chatbots. Hey, students, isn’t that helpful?

It is unfortunate that humans are inherently selfish beings. If we did everything for the benefit of society it would be great, but history has shown socialism and communism do not work. There may be a way to fund exploratory research without worrying about money. We just have not found it yet.

Whitney Grace, September 20, 2023

Accidental Bias or a Finger on the Scale?

September 18, 2023

Who knew? According to Bezos’ rag The Washington Post, “ChatGPT Leans Liberal, Research Shows.” Writer Gerrit De Vynck cites a study on OpenAI’s ChatGPT from researchers at the University of East Anglia:

“The results showed a ‘significant and systematic political bias toward the Democrats in the U.S., Lula in Brazil, and the Labour Party in the U.K.,’ the researchers wrote, referring to Luiz Inácio Lula da Silva, Brazil’s leftist president.”

Then there’s research from Carnegie Mellon’s Chan Park. That study found Facebook’s LLaMA, trained on older Internet data, and Google’s BERT, trained on books, supplied right-leaning or even authoritarian answers. But ChatGPT-4, trained on the most up-to-date Internet content, is more economically and socially liberal. Why might the younger algorithm, much like younger voters, skew left? There’s one more juicy little detail. We learn:

“Researchers have pointed to the extensive amount of human feedback OpenAI’s bots have gotten compared to their rivals as one of the reasons they surprised so many people with their ability to answer complex questions while avoiding veering into racist or sexist hate speech, as previous chatbots often did. Rewarding the bot during training for giving answers that did not include hate speech, could also be pushing the bot toward giving more liberal answers on social issues, Park said.”

Not exactly a point in conservatives’ favor, we think. Near the bottom, the article concedes this caveat:

“The papers have some inherent shortcomings. Political beliefs are subjective, and ideas about what is liberal or conservative might change depending on the country. Both the University of East Anglia paper and the one from Park’s team that suggested ChatGPT had a liberal bias used questions from the Political Compass, a survey that has been criticized for years as reducing complex ideas to a simple four-quadrant grid.”

So does ChatGPT lean left or not? Hard to say from the available studies. But will researchers ever be able to pin down the rapidly evolving AI?

Cynthia Murrell, September 18, 2023

Sam AI-Man: A Big Spender with Trouble Ahead?

August 15, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

$700,000 per day. That’s an interesting number if it is accurate. “ChatGPT In Trouble: OpenAI May Go Bankrupt by 2024, AI Bot Costs Company $700,000 Every Day” states that the number is the number. What’s that mean? First, forget salaries, general and administrative costs, the much-loved health care for humans, and the oddments one finds on balance sheets. (What was that private executive flight to Tampa Bay?)


A young entrepreneur realizes he cannot pay his employees. Thanks, MidJourney, whom did you have in your digital mind?

I am a dinobaby, but I can multiply. The annual total is $255,500,000. I want to ask about money (an investment, of course) from Microsoft, how the monthly subscription fees are floating the good ship ChatGPT, and the wisdom of hauling an orb to scan eyeballs from place to place. (Doesn’t that take away from watching the bourbon caramel cookies reach their peak of perfection? My hunch is, “For sure.”)
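For the record, that figure is just the reported daily burn annualized (assuming a 365-day year):

```latex
\$700{,}000/\text{day} \times 365~\text{days} = \$255{,}500{,}000/\text{year}
```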

The write up reports:

…the shift from non-profit to profit-oriented, along with CEO Sam Altman’s lack of equity ownership, indicates OpenAI’s interest in profitability. Although Altman might not prioritize profits, the company does. Despite this, OpenAI hasn’t achieved profitability; its losses reached $540 million since the development of ChatGPT.

The write up points out that Microsoft’s interest in ChatGPT continues. However, the article observes:

Complicating matters further is the ongoing shortage of GPUs. Altman mentioned that the scarcity of GPUs in the market is hindering the company’s ability to enhance and train new models. OpenAI’s recent filing for a trademark on ‘GPT-5’ indicates their intention to continue training models. However, this pursuit has led to a notable drop in ChatGPT’s output quality.

Another minor issue facing Sam AI-Man is that legal eagles are circling. The Zuck dumped his pet Llama as open source. The Google, ever Googley, chugs along, and Anthropic has “clawed” into visibility.

Net net: Sam AI-Man may find that he will have an opportunity to explain how the dial on the garage heater got flipped from Hot to Fan Only.

Stephen E Arnold, August 15, 2023
