Lawyers and High School Students Cut Corners

March 6, 2025

Cost-cutting lawyers beware: using AI in your practice may make it tough to buy a new BMW this quarter. TechSpot reports, "Lawyer Faces $15,000 Fine for Using Fake AI-Generated Cases in Court Filing." Writer Rob Thubron tells us:

"When representing HooserVac LLC in a lawsuit over its retirement fund in October 2024, Indiana attorney Rafael Ramirez included case citations in three separate briefs. The court could not locate these cases as they had been fabricated by ChatGPT."

Yes, ChatGPT completely invented precedents to support Ramirez’s case. Unsurprisingly, the court took issue with this:

"In December, US Magistrate Judge for the Southern District of Indiana Mark J. Dinsmore ordered Ramirez to appear in court and show cause as to why he shouldn’t be sanctioned for the errors. ‘Transposing numbers in a citation, getting the date wrong, or misspelling a party’s name is an error,’ the judge wrote. ‘Citing to a case that simply does not exist is something else altogether. Mr Ramirez offers no hint of an explanation for how a case citation made up out of whole cloth ended up in his brief. The most obvious explanation is that Mr Ramirez used an AI-generative tool to aid in drafting his brief and failed to check the citations therein before filing it.’ Ramirez admitted that he used generative AI, but insisted he did not realize the cases weren’t real as he was unaware that AI could generate fictitious cases and citations."

Unaware? Perhaps he had not heard about the similar case in 2023. Then again, maybe he had. Ramirez told the court he had tried to verify the cases were real—by asking ChatGPT itself (which replied in the affirmative). But that query falls woefully short of the due diligence required by Federal Rule of Civil Procedure 11, Thubron notes. As the judge who ultimately imposed the sanctions observed, Ramirez would have noticed the cases were fiction had his attempt to verify them ventured beyond the ChatGPT UI.

For his negligence, Ramirez may face disciplinary action beyond the $15,000 in fines. We are told he continues to use AI tools but has taken courses on their responsible use in the practice of law. Perhaps he should have done that before building a case on a chatbot’s hallucinations.

Cynthia Murrell, March 6, 2025

Mathematics Is Going to Be Quite Effective, Citizen

March 5, 2025

This blog post is the work of a real-live dinobaby. No smart software involved.

The future of AI is becoming clearer: get enough people doing something, gather data, and predict what humans will do. What if an individual does not want to go along with the behavior of the aggregate? The answer is obvious, “Too bad.”

How do I know that a handful of organizations will use their AI in this manner? I read “Spanish Running of the Bulls’ Festival Reveals Crowd Movements Can Be Predictable, Above a Certain Density.” If the data in the report are close to the pin, AI will be used to predict, and then those predictions can be shaped by weaponized information flows. I got a glimpse of how this number stuff works when I worked at Halliburton Nuclear with Dr. Jim Terwilliger. He and a fellow named Julian Steyn were only too happy to explain that the mathematics used for figuring out certain nuclear processes would work for other applications as well. I won’t bore you with comments about the Monte Carlo method or the even older Bayesian statistics procedures. But if the approach made certain nuclear functions manageable, it was mostly okay.

Let’s look at what the Phys.org write up says about bovines:

Denis Bartolo and colleagues tracked the crowds of an estimated 5,000 people over four instances of the San Fermín festival in Pamplona, Spain, using cameras placed in two observation spots in the plaza, which is 50 meters long and 20 meters wide. Through their footage and a mathematical model—where people are so packed that crowds can be treated as a continuum, like a fluid—the authors found that the density of the crowds changed from two people per square meter in the hour before the festival began to six people per square meter during the event. They also found that the crowds could reach a maximum density of 9 people per square meter. When this upper threshold density was met, the authors observed pockets of several hundred people spontaneously behaving like one fluid that oscillated in a predictable time interval of 18 seconds with no external stimuli (such as pushing).

I think that’s an important point. But here’s the comment that presages how AI data will be used to control human behavior. Remember. This is emergent behavior similar to the hoo-hah cranked out by the Santa Fe Institute crowd:

The authors note that these findings could offer insights into how to anticipate the behavior of large crowds in confined spaces.
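To make the “anticipate” step concrete, here is a minimal back-of-the-envelope sketch. It reuses only the figures quoted above (the 50 meter by 20 meter plaza and the densities of 2, 6, and a maximum of 9 people per square meter); it is a toy headcount-to-density check, not the authors’ continuum model.

```python
# Toy illustration only, reusing the figures quoted above (50 m x 20 m plaza;
# densities of 2, 6, and a maximum of 9 people per square meter). This is a
# simple headcount-to-density check, not the authors' continuum (fluid) model.

PLAZA_AREA_M2 = 50 * 20          # 1,000 square meters, per the Phys.org summary
OSCILLATION_DENSITY = 9          # people per m^2 at which fluid-like swings appeared

def crowd_density(headcount: int, area_m2: float = PLAZA_AREA_M2) -> float:
    """People per square meter for a given headcount in the plaza."""
    return headcount / area_m2

for label, headcount in [("hour before", 2_000), ("during event", 6_000), ("reported peak", 9_000)]:
    d = crowd_density(headcount)
    status = ("fluid-like oscillation reported (~18 s period)"
              if d >= OSCILLATION_DENSITY
              else "below the reported oscillation threshold")
    print(f"{label}: {d:.1f} people/m^2 -> {status}")
```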

Once probabilities allow one to “anticipate”, it follows that flows of information can be used to take or cause action. Personally I am going to make a note in my calendar and check in one year to see how my observation turns out. In the meantime, I will try to keep an eye on the Sundars, Zucks, and their ilk for signals about their actions and their intent, which is definitely concerned with individuals like me. Right?

Stephen E Arnold, March 5, 2025

The EU Rains on the US Cloud Parade

March 3, 2025

At least one European has caught on. Dutch blogger Bert Hubert is sounding the alarm to his fellow Europeans in the post, "It Is No Longer Safe to Move Our Governments and Societies to US Clouds." Governments and organizations across Europe have been transitioning systems to American cloud providers for reasons of cost and ease of use. Hubert implores them to prioritize security instead. He writes:

"We now have the bizarre situation that anyone with any sense can see that America is no longer a reliable partner, and that the entire large-scale US business world bows to Trump’s dictatorial will, but we STILL are doing everything we can to transfer entire governments and most of our own businesses to their clouds. Not only is it scary to have all your data available to US spying, it is also a huge risk for your business/government continuity. From now on, all our business processes can be brought to a halt with the push of a button in the US. And not only will everything then stop, will we ever get our data back? Or are we being held hostage? This is not a theoretical scenario, something like this has already happened."

US firms have been wildly successful in building reliance on their products around the world. So much so, we are told, that some officials would rather deny reality than switch to alternative systems. The post states:

"’Negotiating with reality’ is for example the letter three Dutch government ministers sent last week. Is it wise to report every applicant to your secret service directly to Google, just to get some statistics? The answer the government sent: even if we do that, we don’t, because ‘Google cannot see the IP address‘. This is complete nonsense of course, but it’s the kind of thing you tell yourself (or let others tell you) when you don’t want to face reality (or can’t)."

Though Hubert does not especially like Microsoft tools, for example, he admits Europeans are accustomed to them and have "become quite good at using them." But that is not enough reason to leave data vulnerable to "King Trump," he writes. Other options exist, even if they may require a bit of effort to implement. Security or convenience: pick one.

Cynthia Murrell, March 3, 2025

Curricula Ideas That Will Go Nowhere Fast

February 28, 2025

No smart software. Just a dinobaby doing his thing.

I read “Stuff You Should Have Been Taught in College But Weren’t.” The essay reveals a young person who has some dinobaby notions. Good for Casey Handmer, PhD. Despite his brush with Hyperloop, he has retained an ability to think clearly about education. Caltech and the JPL have shielded him from some intellectual cubbyholes.

So why am I mentioning the “Stuff You Should Have…” essay and the author? I found the write up in line with thoughts my colleagues and I have shared. Let me highlight a few of Dr. Handmer’s “Should haves” despite my dislike for “woulda coulda shoulda” as a mental bookshelf.

The write up says:

in the sorts of jobs you want to have, no-one should have to spell anything out for you.

I want to point out that the essay may not be appropriate for a person who seeks a job washing dishes at the El Nopal restaurant on Goose Creek Road. The observation strikes me as appropriate for an individual who seeks employment at a high-performing organization or an aspiring “performant” outfit. (I love the coinage “performant”; it is very with it.)

What are other dinobaby-in-the-making observations in the write up? I have rephrased some of the comments, and I urge you to read the original essay. Here goes:

  1. Do something tangible to demonstrate your competence. Doom scrolling and watching TikTok-type videos may not do the job.
  2. Offer proof you deliver value in whatever you do. I am referring to “good” actors, not “bad” actors selling Telegram and WhatsApp hacking services on the Dark Web. “Proof” is verifiable facts, a reference from an individual of repute, or demonstrating a bit of software posted on GitHub or licensed from you.
  3. Watch, learn, and act in a way that benefits the organization, your colleagues, and your manager.
  4. Change jobs to grow and demonstrate your capabilities.
  5. Suck it up, buttercup. Life is a series of challenges. Meet them. Deliver value.

I want to acknowledge that not all dinobabies exhibit these traits as they toddle toward the holding tank for the soon-to-be-dead. However, for an individual who wants to contribute and grow, the ideas in this essay are good ones to consider and then implement.

I do have several observations:

  1. The percentage of a cohort who can consistently do and deliver is very small. Excellence is not for everyone. This has significant career implications unless you have a lot of money, family connections, or a Hollywood glow.
  2. Most of the young people with whom I interact say they have these or similar qualities. Then their own actions prove they don’t. Here’s an example: I met a business school dean. I offered to share some ideas relevant to the job market. I gave him my card because he had forgotten his. He never emailed me. I contacted him and asked politely, “What’s up?” He double-talked and wanted to meet up in the spring. What does that tell me about this person’s work ethic? Answer: Loser.
  3. Universities and other formal training programs struggle even when the course material and the teacher are on point. The “problem” begins before the student shows up in class. The impact of family stress on a person creates a hot house of sorts. What grows in the hortorium? Species with an inability to concentrate, a pollen that cannot connect with an ovule, and a baked-in confusion of “I will do it” and “doing it.”

Net net: This dinobaby is happy to say that Dr. Handmer will make a very good dinobaby some day.

Stephen E Arnold, February 28, 2025

Meta and Torrents: True, False, or Rationalization?

February 26, 2025

AIs gobble datasets for training. It is also a fact that many LLMs and datasets contain biased information, are incomplete, or plain stink. One ethical but cumbersome way to train algorithms would be to notify people that their data, creative content, or other information will be used to train AI. Offering to pay for the right to use the data would be a useful step, some argue.

Will this happen? Obviously not.

Why?

Because it’s sometimes easier to take instead of ask. According to Tom’s Hardware, “Meta Staff Torrented Nearly 82TB of Pirated Books for AI Training – Court Records Reveal Copyright Violations.” The article explains that Meta pirated 81.7 TB of books from the shadow libraries Anna’s Archive, Z-Library, and LibGen. These books were then used to train AI models. Meta is now facing a class action lawsuit over its use of content from the shadow libraries.

The allegations arise from Meta employees’ written communications. Some of these messages provide insight into employees’ concerns about tapping pirated materials. The employees were getting frown lines, but then some staffers’ views shifted when they concluded smart software helped people access information.

Here’s a passage from the cited article I found interesting:

“Then, in January 2023, Mark Zuckerberg himself attended a meeting where he said, “We need to move this stuff forward… we need to find a way to unblock all this.” Some three months later, a Meta employee sent a message to another one saying they were concerned about Meta IP addresses being used “to load through pirate content.” They also added, “torrenting from a corporate laptop doesn’t feel right,” followed by laughing out loud emoji. Aside from those messages, documents also revealed that the company took steps so that its infrastructure wasn’t used in these downloading and seeding operations so that the activity wouldn’t be traced back to Meta. The court documents say that this constitutes evidence of Meta’s unlawful activity, which seems like it’s taking deliberate steps to circumvent copyright laws.”

If true, the approach smacks of that suave Silicon Valley style. If false, my faith in a yacht owner with gold chains might be restored.

Whitney Grace, February 26, 2025

AI Research Tool from Perplexity Is Priced to Undercut the Competition

February 26, 2025

Are prices for AI-generated research too darn high? One firm thinks so. In a Temu-type bid to take over the market, reports VentureBeat, "Perplexity Just Made AI Research Crazy Cheap—What that Means for the Industry." CEO Aravind Srinivas credits open source software for making the move possible, opining that "knowledge should be universally accessible." Knowledge, yes. AI research? We are not so sure. Nevertheless, here we are. The write-up describes the difference in pricing:

"While Anthropic and OpenAI charge thousands monthly for their services, Perplexity offers five free queries daily to all users. Pro subscribers pay $20 monthly for 500 daily queries and faster processing — a price point that could force larger AI companies to explain why their services cost up to 100 times more."

Not only is Perplexity’s Deep Research cheaper than the competition, crows the post, its accuracy rivals theirs. We are told:

"[Deep Research] scored 93.9% accuracy on the SimpleQA benchmark and reached 20.5% on Humanity’s Last Exam, outperforming Google’s Gemini Thinking and other leading models. OpenAI’s Deep Research still leads with 26.6% on the same exam, but OpenAI charges $200 percent for that service. Perplexity’s ability to deliver near-enterprise level performance at consumer prices raises important questions about the AI industry’s pricing structure."

Well, okay. Not to stray too far from the point, but is a 20.5% or a 26.6% on Humanity’s Last Exam really something to brag about? Last we checked, those were failing grades. By far. Isn’t it a bit too soon to be outsourcing research to any LLM? But I digress.

We are told the low, low cost Deep Research is bringing AI to the micro-budget masses. And, soon, to the Windows-less—Perplexity is working on versions for iOS, Android, and Mac. Will this spell disaster for the competition?

Cynthia Murrell, February 26, 2025

Rest Easy. AI Will Not Kill STEM Jobs

February 25, 2025

Written by a dinobaby, not smart software. But I would replace myself with AI if I could.

Bob Hope quipped, “A sense of humor is good for you. Have you ever heard of a laughing hyena with heartburn?” No, Bob, I have not.

Here’s a more modern joke for you from the US Bureau of Labor Statistics circa 2025. It is much fresher than Mr. Hope’s quip from a half century ago.

The Bureau of Labor Statistics says:

Employment in the professional, scientific, and technical services sector is forecast to increase by 10.5% from 2023 to 2033, more than double the national average. (Source: Investopedia)

Okay, then what about those LinkedIn, XTwitter, and Reddit posts about technology workers who cannot find jobs? Consider these situations:

  1. Recent college graduates with computer science degrees
  2. Recently terminated US government workers from agencies like 18F
  3. Workers over 55 urged to take early retirement

The item about the rosy job market appeared in Slashdot too. Here’s the quote I noted:

Employment in the professional, scientific, and technical services sector is forecast to increase by 10.5% from 2023 to 2033, more than double the national average. According to the BLS, the impact AI will have on tech-sector employment is highly uncertain. For one, AI is adept at coding and related tasks. But at the same time, as digital systems become more advanced and essential to day-to-day life, more software developers, data managers, and the like are going to be needed to manage those systems. "Although it is always possible that AI-induced productivity improvements will outweigh continued labor demand, there is no clear evidence to support this conjecture," according to BLS researchers.

Robert Half, an employment firm, is equally optimistic. Just a couple of weeks ago, that outfit said:

Companies continue facing strong competition from other firms for tech talent, particularly for candidates with specialized skills. Across industries, AI proficiency tops the list of most-sought capabilities, with organizations needing expertise for everything from chatbots to predictive maintenance systems. Other in-demand skill areas include data science, IT operations and support, cybersecurity and privacy, and technology process automation.

What am I to conclude from these US government data? Here are my preliminary thoughts:

  1. The big time consulting firms are unlikely to change their methods of cost reduction; that is, if software (smart or dumb) can do a job for less money, that software will be included on a list of options. Given a choice of going out of business or embracing smart software, a significant percentage of consulting firm clients will give AI a whirl. If AI works and the company stays in business or grows, the humans will be repurposed or allowed to find their future elsewhere.
  2. The top one percent in any discipline will find work. The other 99 percent will need to have family connections, family wealth, or a family business to provide a boost for a great job. What if a person is not in the top one percent of something? Yeah, well, that’s not good for quite a few people.
  3. The permitted dominance of duopolies or oligopolies in most US business sectors means that some small and mid-sized businesses will have to find ways to generate revenue. My experience in rural Kentucky is that local accounting, legal, and technology companies are experimenting with smart software to boost productivity (the MBA word for cheaper work functions). Local employment options are dwindling because the smaller employers cannot stay in business. Potential employees want more pay than the company can afford. Result? Downward spiral which appears to be accelerating.

Am I confident in statistics related to wages, employment, and the growth of new businesses and industrial sectors? No, I am not. Statistical projections work pretty well in nuclear fuel management. Nested mathematical procedures in smart software work pretty well for some applications. Using smart software to reduce operating costs works pretty well right now.

Net net: Without meaningful work, some of life’s challenges will spark unanticipated outcomes. Exactly what type of stress breaks a social construct? Those in the job hunt will provide numerous test cases, and someone will do an analysis. Will it be correct? Sure, close enough for horseshoes.

Stop complaining. Just laugh, as Mr. Hope noted. No heartburn, and cost savings to boot.

Stephen E Arnold, February 25, 2025

Advice for Programmers: AI-Proof Your Career

February 24, 2025

Software engineer and blogger Sean Goedecke has some career advice for those who, like himself, are at risk of losing their programming jobs to AI. He counsels, "To Avoid Being Replaced by LLMs, Do What They Can’t." Logical enough. But what will these tools be able to do, and when will they be able to do it? That is the $25 million question. Goedecke has suggestions for the medium term, and the long term.

Right now, he advises, engineers should do three things: First, use the tools. They can help you gain an advantage in the field. And also, know-thine-enemy, perhaps? Next, learn how LLMs work so you can transition to the growing field of AI work. If you can’t beat them, join them, we suppose. Finally, climb the ranks posthaste, for those in junior roles will be the first to go. Ah yes, the weak get eaten. It is a multipronged approach.

For the medium term, Goedecke predicts which skills LLMs are likely to master first. Get good at the opposite of that. For example, ill-defined or poorly-scoped problems, solutions that are hard to verify, and projects with huge volumes of code are all very difficult for algorithms. For now.

In the long term, work yourself into a position of responsibility. There are few of those to go around. So, as noted above, start vigorously climbing over your colleagues now. Why? Because executives will always need at least one good human engineer they can trust. The post observes:

"A LLM strong enough to take responsibility – that is, to make commitments and be trusted by management – would have to be much, much more powerful than a strong engineer. Why? Because a LLM has no skin in the game, which means the normal mechanisms of trust can’t apply. Executives trust engineers because they know those engineers will experience unpleasant consequences if they get it wrong. Because the engineer is putting something on the line (e.g. their next bonus, or promotion, or in the extreme case being fired), the executive can believe in the strength of their commitment. A LLM has nothing to put on the line, so trust has to be built purely on their track record, which is harder and takes more time. In the long run, when almost every engineer has been replaced by LLMs, all companies will still have at least one engineer around to babysit the LLMs and to launder their promises and plans into human-legible commitments. Perhaps that engineer will eventually be replaced, if the LLMs are good enough. But they’ll be the last to go."

If you are lucky, it will be time to retire by then. For those young enough that this is unlikely, or for those who do not excel at the rat race, perhaps a career change is in order. What jobs are safe? Sadly, this dinobaby writer does not have the answer to that question.

Cynthia Murrell, February 24, 2025

OpenAI Furthers Great Research

February 21, 2025

Unsatisfied with existing AI cheating solutions? If so, Gizmodo has good news for you: “OpenAI’s ‘Deep Research’ Gives Students a Whole New Way to Cheat on Papers.” Writer Kyle Barr explains:

“OpenAI’s new ‘Deep Research’ tool seems perfectly designed to help students fake their way through a term paper unless asked to cite sources that don’t include Wikipedia. OpenAI’s new feature, built on top of its upcoming o3 model and released on Sunday, resembles one Google introduced late last year with Gemini 2.0. Google’s ‘Deep Research’ is supposed to generate long-form reports over the course of 30 minutes or more, depending on the depth of the requested topic. Boiled down, Google’s and OpenAI’s tools are AI agents capable of performing multiple internet searches while reasoning about the next step to generate a report.”
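The “AI agents capable of performing multiple internet searches while reasoning about the next step” description boils down to a search-read-decide loop. The sketch below is a conceptual illustration of that generic loop only; search_web() and summarize() are hypothetical placeholders, not OpenAI’s or Google’s actual APIs.

```python
# Conceptual sketch of the generic "search, read, decide, repeat" agent loop
# described above. search_web() and summarize() are hypothetical placeholders;
# a real agent would call a search API and an LLM here.

from typing import List

def search_web(query: str) -> List[str]:
    # Placeholder: pretend one snippet came back for the query.
    return [f"snippet about {query}"]

def summarize(snippets: List[str]) -> str:
    # Placeholder: a real agent would have an LLM condense the snippets.
    return " ".join(snippets)

def deep_research(topic: str, max_rounds: int = 3) -> str:
    notes: List[str] = []
    query = topic
    for _ in range(max_rounds):
        snippets = search_web(query)                            # 1. search
        notes.append(summarize(snippets))                       # 2. read and condense
        query = f"{topic}, following up on: {notes[-1][:40]}"   # 3. reason about the next query
    return "\n\n".join(notes)                                   # 4. compile the "report"

print(deep_research("history of the printing press"))
```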

Deep Research even functions in a side panel, providing updates on its direction and progress. So helpful! However, the tool is not for those looking to score an A. Like a student rushing to finish a paper the old-fashioned way, Barr notes, it relies heavily on Wikipedia. An example report did include a few trusted sites, like Pew Research, but such reliable sources were in the minority. Besides, the write-up emphasizes:

“Remember, this is just a bot scraping the internet, so it won’t be accessing any non-digitized books or—ostensibly—any content locked behind a paywall. … Because it’s essentially an auto-Googling machine, the AI likely won’t have access to the most up-to-date and large-scale surveys from major analysis firms. … That’s not to say the information was inaccurate, but anybody who generates a report is at the mercy of suspect data and the AI’s interpretation of that data.”

Meh, we suppose that is okay if one just needs a C to get by. But is it worth the $200 per month subscription? I suppose that depends on the student, and the parents’ willingness to sign up for services that will make gentle Ben and charming Chrissie smarter. Besides, we are sure more refined versions are in our future.

Cynthia Murrell, February 21, 2025

Sam Altman: The Waffling Man

February 17, 2025

Another dinobaby commentary. No smart software required.

Chaos is good. Flexibility is good. AI is good. Sam Altman, whom I reference as “Sam AI-Man,” has some explaining to do. OpenAI is a consumer of cash. The Chinese PR push suggests that Deepseek has found a way to do OpenAI-type computing the way Shein and Temu do gym clothes.

I noted “Sam Altman Admits OpenAI Was On the Wrong Side of History in Open Source Debate.” The write up does not come out and state, “OpenAI was stupid when it embraced proprietary software’s approach” to meeting user needs. To be frank, Sam AI-Man was not particularly clear either.

The write up says that Sam AI-Man said:

“Yes, we are discussing [releasing model weights],” Altman wrote. “I personally think we have been on the wrong side of history here and need to figure out a different open source strategy.” He noted that not everyone at OpenAI shares his view and it isn’t the company’s current highest priority. The statement represents a remarkable departure from OpenAI’s increasingly proprietary approach in recent years, which has drawn criticism from some AI researchers and former allies, most notably Elon Musk, who is suing the company for allegedly betraying its original open source mission.

My view is that Sam AI-Man wants to emulate other super techno leaders and get whatever he wants. Not surprisingly, other super techno leaders have their own ideas. I would suggest that the objective of these AI jousts is power, control, and money.

“What about the users?” a faint voice asks. “And the investors?” another bold soul queries.

Who?

Stephen E Arnold, February 17, 2025
