From $20 a Month to $20K a Month. Great Idea… or Not?

March 10, 2025

Another post from the dinobaby. Alas, no smart software used for this essay.

OpenAI was one of many smart software companies. If you meet the people on my team, you will learn that I dismissed most of the outfits as search-and-retrieval outfits looking for an edge. Search definitely needs an edge, but I was not confident that predictive generation of an “answer” was a solution. It was a nifty party trick, but then the money started flowing. In January 2023, Microsoft put Google’s cute sharp teeth on edge. Suddenly AI or smart software was the next big thing. The virtual reality thing did not ring the bell. The increasingly weird fiddling with mobile phones did not get the brass ring. And the idea of Apple becoming the next big thing in chips has left everyone confused. My M1 devices work pretty well, and unless I look at the label on the gizmos, I cannot tell an M1 from an M3. Do I care? Nope.

But OpenAI became news. It squabbled with the mastermind of “renewable” satellites, definitely weird trucks, and digging tunnels in Las Vegas. (Yeah, nice idea, just not for anyone who does not want to get stalled in traffic.) When ChatGPT became available, one of those laboring in my digital vineyards signed me up. I fiddled with it and decided that I would run some of my research through the system. I learned that my research was not in the OpenAI “system.” I had it do some images. Those sucked. I will cancel this week.

I put in my AI folder this article “OpenAI Is Getting Ready to Release PhD Level AI Agents.” I was engaging in some winnowing and I scanned it. In early February 2025, Digital Marketing News wrote about PhD level agents. I am not a PhD. I quit before I finished my dissertation to work in the really socially conscious nuclear unit of that lovable outfit Halliburton. You know the company. That’s the one that charged about $950.00 for a gallon of fuel during the Iraq war. You will also associate Dick Cheney, a fun person, with the company. So no PhD for me.

I was skeptical because of the dismal performance of ChatGPT 4, oh, whatever, trying to come up with the information I have assembled for my new book for law enforcement professionals. Then I read a Slashdot post with the title “OpenAI Plots Charging $20,000 a Month For PhD-Level Agents” shared from a publication I don’t know much about. I think it is like 404 or a for-fee Substack. The publication has great content, and you have to pay for it.

Be that as it may, the Slashdot post reports or recycles information that suggests the fee per month for a PhD level version of OpenAI’s smart software will be a modest $20,000 a month. I think the service one of my team registered costs $20.00 per month. What’s with the 20s? Twenty is a pronic number; that is, it can be slapped on a high school math test so students can say it is the product of two consecutive integers (4 × 5 = 20). In college I knew a person who was a numerologist. I recall that the meaning of 20 was cooperation.

The interesting part of the Slashdot post was the comments. I scanned them and concluded that some of the commenters saw the high-end service killing jobs for high-end programmers and consultants. Yeah, maybe. Somehow the notion that a code base that struggles with information related to a widely used messaging application is suddenly going to replicate the information I have obtained from my sources in Eastern Europe seems a bit of a stretch. Heck, ChatGPT could barely do English. Russian? Not a chance, but who knows. And for $20,000 a month it is not likely this dinobaby will take what seems like unappetizing bait.

One commenter allegedly named TheGreatEmu said:

I was about to make a similar comment, but the cost still doesn’t add up. I’m at a national lab with generally much higher overheads than most places, and a postdoc runs us $160k/year fully burdened. And of course the AI sure as h#ll can’t connect cables, turn knobs, solder, titrate, use a drill press, clean, chat with the machinist who doesn’t use email, sneaker net data out of the air-gapped lab, or understand napkin drawings over beer where all real science gets done. Or do anything useful with information that isn’t already present in the training data, and if you’re not pushing past existing knowledge boundaries, you’re not really doing science are you?

My hunch is that this is a PR or marketing play. Let’s face it. With Microsoft cutting off data center builds and Google floundering with cheese, the smart software revolution is muddling forward. The wins are targeted applications in quite specific domains. Yes, gentle reader, that’s why people pay for Chemical Abstracts online. The information is not on the public Internet. The American Chemical Society has information that the super capable AI outfits have not figured out how to replicate, and the non-computational, organic, or inorganic chemist is unlikely to source that information from a somewhat volatile outfit. Get something wrong in a nuclear lab and smart software won’t be too helpful if it hallucinates.

Net net: Is everything marketing? At age 80, my answer is, “Absolutely.” Sam AI-Thinks in terms of trillions. Is $20 trillion the next pricing level?

Stephen E Arnold, March 10, 2025

Next-Gen IT Professionals: Up for Doing a Good Job?

March 10, 2025

The entirety of the United States is facing a crisis when it comes to decent-paying jobs. Businesses are watching their budgets the way misers clutch their purse strings, so they’re hiring the cheapest tech workers possible. Medium explains that “8 Out Of 10 Senior Engineers Feel Undervalued: The Hidden Crisis In Tech’s Obsession With Junior Talent.”

Another term for budgeting and being cheap is “cost optimization.” Experienced tech workers are being replaced with green newbies who wouldn’t know how to find an error if it were written on the backs of their hands. Or the experienced tech workers are bogged down by mentoring their younger associates and fixing their mistakes.

It’s a recipe for disaster, but cost optimization is what businesses care about. There will be casualties in the trend, not all of them human:

“The silent casualties of this trend:

1. Systems designed by juniors who’ve never seen a server catch fire

2. Codebases that work right up until they don’t

3. The quiet exodus of graybeards into early retirement”

Junior tech workers are cheaper, but it is difficult to ask smart software to impart experience in a couple hundred words. Businesses are also treating their seasoned employees as free mentors:

“I’m all for mentoring. But when companies treat seniors as:

  • Free coding bootcamp instructors
  • Human linters for junior code
  • On-call explainers of basic algorithms

…they’re not paying for mentorship. They’re subsidizing cheap labor with senior salaries.”

There’s a happy medium where having experienced tech experts work with junior associates can benefit everyone involved. But businesses calculate that it is cheaper to dump the dinobabies and assume that those old systems can be fixed when they go south.

Whitney Grace, March 10, 2025

AI Generated Code Adds To Technical Debt

March 7, 2025

Technical debt refers to shipping flawed code that results in more work later. It’s okay for projects to be rolled out with some technical debt as long as it is paid back. The problem comes when the code isn’t corrected and it snowballs into a huge problem. LeadDev explores how AI code affects projects: “How AI Generated Code Compounds Technical Debt.” The article highlights that it has never been easier to write code, especially with AI, but there’s a large accumulation of technical debt. The technical debt is so large that it is comparable to the US’s ballooning national debt.

GitClear tracked an eight-fold increase during 2024 in the frequency of code blocks with five or more lines that duplicate adjacent code. This was ten times higher than in the previous two years. GitClear found some more evidence of technical debt:

“That same year, 46% of code changes were new lines, while copy-pasted lines exceeded moved lines. “Moved” lines is a metric GitClear has devised to track the rearranging of code, an action typically performed to consolidate previous work into reusable modules. “Refactored systems, in general, and moved code in particular, are the signature of code reuse,” says Bill Harding, CEO of Amplenote and GitClear. A year-on-year decline in code movement suggests developers are less likely to reuse previous work, a marked shift from existing industry best practice that would lead to more redundant systems with less consolidation of functions.”
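To make the copy-paste versus “moved” distinction concrete, here is a minimal sketch. The functions and the duplicated validation logic are invented for illustration; they are not from GitClear’s data. The first pair of functions pastes the same lines into two places; the second version moves those lines into one reusable helper, the kind of consolidation GitClear reports seeing less of:

```python
# Copy-paste style: the same validation lines duplicated in two functions.
def register_user(email: str) -> str:
    if "@" not in email:
        raise ValueError("invalid email")
    email = email.strip().lower()
    return f"registered {email}"

def invite_user(email: str) -> str:
    if "@" not in email:
        raise ValueError("invalid email")
    email = email.strip().lower()
    return f"invited {email}"

# "Moved" style: the shared lines are consolidated into one reusable helper.
def normalize_email(email: str) -> str:
    if "@" not in email:
        raise ValueError("invalid email")
    return email.strip().lower()

def register_user_v2(email: str) -> str:
    return f"registered {normalize_email(email)}"

def invite_user_v2(email: str) -> str:
    return f"invited {normalize_email(email)}"
```

The copy-paste version ships faster; the moved version is the one that stays maintainable when the validation rule inevitably changes.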

These facts might not seem alarming, especially if one reads Google’s 2024 DORA report, which said there was a 25% increase in AI usage to quicken code reviews and documentation. The downside was a 7.2% decrease in delivery stability. These numbers might be small now, but what is happening is like making a copy of a copy of a copy: the integrity is lost.

It’s also like relying entirely on spellcheck to correct your spelling and grammar. While these are good tools to have, what will you do when you don’t have the fundamentals in your toolbox or find yourself in a spontaneous spelling bee?

Whitney Grace, March 7, 2025

Patents, AI, and Lawyers: Litigators, Start Your Engines

March 7, 2025

Patents can be a useful source of insights, a fact startup Patlytics is banking on. TechCrunch reports, "Patlytics Raises $14M for its Patent Analytics Platform." The firm turbo-charges intellectual property research with bespoke AI. We learn:

"Patlytics’ large language models (LLMs) and generative AI-powered engine are custom-built for IP-related research and other work such as patent application drafting, invention disclosures, invalidity analysis, infringement detection/analysis, Standard Essential Patents (SEPs) analysis, and IP assets portfolio management."

Apparently, the young firm is already meeting with success:

"The 1-year-old startup said it has seen a 20x increase in ARR and an 18x expansion in its customer base within six months, with a sustained 300% month-over-month growth rate. Patlytics did not disclose how many customers it has but said approximately 50% of its customer base are law firms, and the other half are corporate clients from industries like semiconductors, bio, pharmaceuticals, and more. Additionally, the company now serves customers in South Korea and Japan, and recently launched its first pilot product in London and Germany. Its clients include Abnormal Security, Google, Koch Disruptive Technologies, Quinn Emanuel Urquhart & Sullivan, Richardson Oliver, Reichman Jorgensen Lehman & Feldberg, Xerox, and Young Basile."

That is quite a client roster in such a short time. This round, combined with April’s seed round, brings the company’s funding total to $21 million. The firm will put the funds to use hiring new engineers and expanding its products. Based in New York, Patlytics was launched in January 2024.

Will AI increase patent litigation? Do Tesla Cybertrucks attract attention?

Cynthia Murrell, March 7, 2025

Another New Search System with AI Too

March 7, 2025

There’s a new AI engine in town, one specifically designed to assist with research. The Next Web details the newest invention, which comes from a big name in the technology industry: “Tech Mogul Launches AI Research Engine Corpora.ai.” Mel Morris is a British tech mogul and the man behind the latest research engine: Corpora.ai.

Morris had Corpora.ai designed to provide in-depth research from single prompts. It is also an incredibly fast engine: it can process two million documents per second. Corpora.ai works by reading a prompt; then the AI algorithm scans information, including legal documents, news articles, academic papers, and other Web data. The information is then compiled into summaries or reports.
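As a purely conceptual sketch of that prompt-to-report flow — this is not Corpora.ai’s code, a real engine would use an LLM and a proper document index, and every name below is invented:

```python
def research_report(prompt: str, documents: list[str], top_k: int = 5) -> str:
    """Toy prompt-to-report pipeline: rank documents, then compile a summary."""
    terms = set(prompt.lower().split())

    # 1. Score each document by crude keyword overlap with the prompt.
    def score(doc: str) -> int:
        return len(terms & set(doc.lower().split()))

    # 2. Keep the most relevant sources.
    sources = sorted(documents, key=score, reverse=True)[:top_k]

    # 3. Compile them into a report (a production system would summarize with an LLM).
    lines = [f"Report for: {prompt}"]
    lines += [f"- {doc[:120]}" for doc in sources]
    return "\n".join(lines)


print(research_report("patent litigation trends", [
    "Patent litigation filings rose in 2024 according to court data.",
    "A recipe for sourdough bread.",
    "New analytics platforms target patent litigation research for law firms.",
]))
```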

Morris insists that Corpora.ai is a research engine, not a search engine. He invested $15 million of his personal fortune into the project. Morris doesn’t want to compete with other AI projects; instead, he wants to form working relationships:

“His funding aims to create a new business model for LLMs. Rather than challenge the leading GenAI firms, Corpora plans to bring a new service to the sector. The research engine can also integrate existing models on the market. ‘We don’t compete with OpenAI, Google, or Deepseek,’ Morris said. ‘The nice thing is, we can play with all of these AI vendors quite nicely. As they improve their models, our output gets better. It’s a really great symbiotic relationship.’”

Mel Morris is a self-made businessman who is the former head of King, the Candy Crush game creator. He also owned and sold the dating Web site uDate. He might see a return on his Corpora.ai investment.

Whitney Grace, March 7, 2025

Encryption: Not the UK Way but Apple Is A-Okay

March 6, 2025

The UK is on a mission. It seems to be making progress. The BBC reports, "Apple Pulls Data Protection Tool After UK Government Security Row." Technology editor Zoe Kleinman explains:

"Apple is taking the unprecedented step of removing its highest level data security tool from customers in the UK, after the government demanded access to user data. Advanced Data Protection (ADP) means only account holders can view items such as photos or documents they have stored online through a process known as end-to-end encryption. But earlier this month the UK government asked for the right to see the data, which currently not even Apple can access. Apple did not comment at the time but has consistently opposed creating a ‘backdoor’ in its encryption service, arguing that if it did so, it would only be a matter of time before bad actors also found a way in. Now the tech giant has decided it will no longer be possible to activate ADP in the UK. It means eventually not all UK customer data stored on iCloud – Apple’s cloud storage service – will be fully encrypted."

The Home Office, the government department behind the demand, refused to comment on the matter. Apple said it was "gravely disappointed" with this outcome. It emphasizes its longstanding refusal to build any kind of back door or master key. It is the principle of the thing. Instead, it is now removing the locks on the main entrance. Much better.

As of the publication of Kleinman’s article, new iCloud users who tried to opt into ADP received an error message. Apparently, protection for existing users will be stripped at a later date. Some worry Apple’s withdrawal of ADP from the UK sets a bad precedent in the face of similar demands in other countries. Of course, so would caving in to them. The real culprit here, some say, is the UK government that put its citizens’ privacy at risk. Will other governments follow its lead? Will tech firms develop some best practices in the face of such demands? We wonder what their priorities will be.

Cynthia Murrell, March 6, 2025

Attention, New MBAs in Finance: AI-gony Arrives

March 6, 2025

Another post from the dinobaby. Alas, no smart software used for this essay.

I did a couple of small jobs for a big Wall Street outfit years ago. I went to meetings, listened, and observed. To be frank, I did not do much work. There were three or four young, recent graduates of fancy schools. These individuals were similar to the colleagues I had at the big time consulting firm at which I worked earlier in my career.

Everyone was eager, and their Excel fevers were in full bloom: bright eyes, earnest expressions, and a gentle but persistent panting in these meetings. Wall Street and Wall Street-like firms in London, England, and Los Angeles, California, were quite similar. These churn outfits and deal makers shared DNA or some type of quantum entanglement.

These “analysts” or “associates” gathered data and pumped it into Excel spreadsheets set up by colleagues or technical specialists. Macros processed the data and spit out tables, charts, and graphs. These were written up as memos and reports for those with the big sticks, the senior deciders.

My point is that the “work” was done by cannon fodder from well-known universities’ business or finance programs.

Well, bad news, future BMW buyers: an outfit called PublicView.ai may have curtailed your dreams of a six-figure bonus in January or whatever month is the big momma at your firm. You can take a look at example outputs and sign up free at https://www.publicview.ai/.

If the smart product works as advertised, a category of financial work is going to be reshaped. It is possible that fewer analyst jobs will become available as the gathering and importing are converted to automated workflows. The meetings and the panting will become fewer and farther between.

I don’t have data about how many worker bees power the Wall Street type outfits. I showed up, delivered information when queried, departed, and sent a bill for my time and travel. The financial hive and its quietly buzzing drones plugged away 10 or more hours a day, mostly six days a week.

The PublicView.ai FAQ page answers some basic questions; for example, “Can I perform quantitative analysis on the files?” The answer is:

Yes, you can ask Publicview to perform computations on the files using Python code. It can create graphs, charts, tables and more.

This is good news for the newly minted MBAs with programming skills. The bad news is that repeatable questions can be converted to workflows.
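Here is a hypothetical sketch of the kind of repeatable computation such a tool could automate. The file name, columns, and figures below are invented for illustration; this is not PublicView.ai’s actual code:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical filing data; the file and column names are invented.
df = pd.read_csv("quarterly_filings.csv")  # columns: quarter, revenue, net_income

# The repeatable computations an analyst once cranked through Excel macros.
df["revenue_growth"] = df["revenue"].pct_change()
df["net_margin"] = df["net_income"] / df["revenue"]

# The table for the memo...
print(df[["quarter", "revenue_growth", "net_margin"]].to_string(index=False))

# ...and the chart for those with the big sticks.
df.plot(x="quarter", y="net_margin", kind="bar", legend=False)
plt.ylabel("Net margin")
plt.title("Net margin by quarter")
plt.tight_layout()
plt.savefig("net_margin.png")
```

Once a question like “What is the margin trend?” lives in a script instead of an associate’s head, running it every quarter costs nothing. That is the workflow conversion that threatens the bonus pool.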

Let’s assume this product is good enough. There will be no overnight change in the work for existing employees. But slowly the senior managers will get the bright idea of hiring MBAs with different skills, possibly on a contract basis. Then the work will begin to shift to software. At some point in the not-too-distant future, jobs for humans will be eliminated.

The question is, “How quickly can new hires make themselves into higher value employees in what are the early days of smart software?”

I suggest getting on a fast horse and galloping forward. Donkeys with Excel will fall behind. Software does not require health care, ever increasing inducements, and vacations. What’s interesting is that at some point many “analyst” jobs, not just in finance, will be handled by “good enough” smart software.

Remember: a 51 percent win rate from code that does not hang out with a latte will strike some in carpetland as a no-brainer. The good news is that MBAs don’t have a graduate degree in 18th century buttons or the Brutalist movement in architecture.

Stephen E Arnold, March 6, 2025

Lawyers and High School Students Cut Corners

March 6, 2025

Cost-cutting lawyers beware: using AI in your practice may make it tough to buy a new BMW this quarter. TechSpot reports, "Lawyer Faces $15,000 Fine for Using Fake AI-Generated Cases in Court Filing." Writer Rob Thubron tells us:

"When representing HooserVac LLC in a lawsuit over its retirement fund in October 2024, Indiana attorney Rafael Ramirez included case citations in three separate briefs. The court could not locate these cases as they had been fabricated by ChatGPT."

Yes, ChatGPT completely invented precedents to support Ramirez’ case. Unsurprisingly, the court took issue with this:

"In December, US Magistrate Judge for the Southern District of Indiana Mark J. Dinsmore ordered Ramirez to appear in court and show cause as to why he shouldn’t be sanctioned for the errors. ‘Transposing numbers in a citation, getting the date wrong, or misspelling a party’s name is an error,’ the judge wrote. ‘Citing to a case that simply does not exist is something else altogether. Mr Ramirez offers no hint of an explanation for how a case citation made up out of whole cloth ended up in his brief. The most obvious explanation is that Mr Ramirez used an AI-generative tool to aid in drafting his brief and failed to check the citations therein before filing it.’ Ramirez admitted that he used generative AI, but insisted he did not realize the cases weren’t real as he was unaware that AI could generate fictitious cases and citations."

Unaware? Perhaps he had not heard about the similar case in 2023. Then again, maybe he had. Ramirez told the court he had tried to verify the cases were real—by asking ChatGPT itself (which replied in the affirmative). But that query falls woefully short of the due diligence required by the Federal Rule of Civil Procedure 11, Thubron notes. As the judge who ultimately did sanction the firm observed, Ramirez would have noticed the cases were fiction had his attempt to verify them ventured beyond the ChatGPT UI.

For his negligence, Ramirez may face disciplinary action beyond the $15,000 in fines. We are told he continues to use AI tools, but has taken courses on its responsible use in the practice of law. Perhaps he should have done that before building a case on a chatbot’s hallucinations.

Cynthia Murrell, March 6, 2025

Sergey Says: Work Like It Was 1975 at McKinsey or Booz, Allen

March 6, 2025

Yep, another dinobaby original.

Sergey Brin, invigorated with his work at the Google on smart software, has provided some management and work-life tips to today’s job hunters and aspiring Alphabet employees. In “In Leaked Memo to Google’s AI Workers, Sergey Brin Says 60 Hours a Week Is the Sweet Spot and Doing the Bare Minimum Can Demoralize Peers,” Mr. Brin offers his view of sage management and career advice. (I do want to point out that the write up does not reference the work ethic and other related interactions of the Google Glass marketing team. My view of this facet of Mr. Brin’s contributions suggests that it is tough to put in 60 hours a week while an employee is ensconced in the Stanford Medical psychiatric ward. But that’s water under the bridge, so let’s move on to the current wisdom.)

The write up reports:

Sergey Brin believes Google can win the race to artificial general intelligence and outlined his ideas for how to do that—including a workweek that’s 50% longer than the standard 40 hours.

Presumably working harder will allow Google to avoid cheese mishaps related to pizza and Super Bowl advertising. Harder working Googlers will allow the company to avoid the missteps which have allowed unenlightened regulators in the European Union and the US to find the company engaging in behavior which is not in the best interest of the service’s “users.”

The write up says:

“A number of folks work less than 60 hours and a small number put in the bare minimum to get by,” Brin wrote on Wednesday. “This last group is not only unproductive but also can be highly demoralizing to everyone else.”

I wonder if a consistent, documented method for reviewing the work of employees would allow management to offer training, counseling, or incentives to get the mavericks back in the herd.

The protests, the allegations of erratic punitive actions like firing people who use words like “stochastic”, and the fact that the 60-hour information comes from a leaked memo — each of these incidents suggests that the management of the Google may have some work to do. You know, that old nosce teipsum stuff.

The Fortune write up winds down with this statement:

Last year, he acknowledged that he “kind of came out of retirement just because the trajectory of AI is so exciting.” That also coincided with some high-profile gaffes in Gemini’s AI, including an image generator that produced racially diverse Na#is. [Editor note: Member of a German affiliation group in the 1930s and 1940s. I have to avoid the Google stop words list.]

And the cheese, the Google Glass marketing tours, and so much more.

Stephen E Arnold, March 6, 2025

Shocker! Students Use AI and Engage in Sex, Drugs, and Rock and Roll

March 5, 2025

The work of a real, live dinobaby. Sorry, no smart software involved. Whuff, whuff. That’s the sound of my swishing dino tail. Whuff.

I read “Surge in UK University Students Using AI to Complete Work.” The write up says:

The number of UK undergraduate students using artificial intelligence to help them complete their studies has surged over the past 12 months, raising questions about how universities assess their work. More than nine out of 10 students are now using AI in some form, compared with two-thirds a year ago…

I understand the need to create “real” news; however, the information did not surprise me. But the weird orange newspaper tosses in this observation:

Experts warned that the sheer speed of take-up of AI among undergraduates required universities to rapidly develop policies to give students clarity on acceptable uses of the technology.

As a purely practical matter, information has crossed my desk about professors cranking out papers for peer review or for the ever-popular gray literature consumers. These papers are not reproducible, contain data which have been shaped like a kindergartener’s clay animal, and include links to pals who engage in citation boosting.

Plus, students who use Microsoft have a tough time escaping the often inept outputs of the Redmond crowd. A Google user is no longer certain whether information is created by a semi-reputable human or a cheese-crazed Google system. Emails write themselves. Message systems suggest emojis. Agentic AIs take care of mum’s and pop’s questions about life at the uni.

The topper for me was the inclusion in the cited article of this statement:

it was almost unheard of to see such rapid changes in student behavior…

Did this fellow miss drinking, drugs, staying up late, and sex on campus? How long did those innovations take to sweep through the student body?

I liked the note of optimism at the end of the write up. Check this:

The article quotes Janice Kay, a director of a higher education consulting firm: “There is little evidence here that AI tools are being misused to cheat and play the system. [But] there are quite a lot of signs that will pose serious challenges for learners, teachers and institutions and these will need to be addressed as higher education transforms.”

That’s encouraging. The academic research crowd does one thing, and I am to assume that students will do everything the old-fashioned way. When you figure out how to remove smart software from online systems and local installations of smart helpers, let me know. Fix up AI usage and then turn one’s attention to changing student behavior in the drinking, sex, and drug departments too.

Good luck.

Stephen E Arnold, March 5, 2025
