Curricula Ideas That Will Go Nowhere Fast
February 28, 2025
No smart software. Just a dinobaby doing his thing.
I read “Stuff You Should Have Been Taught in College But Weren’t.” The essay reveals a young person who has some dinobaby notions. Good for Casey Handmer, PhD. Despite his brush with Hyperloop, he has retained an ability to think clearly about education. Caltech and the JPL have shielded him from some intellectual cubbyholes.
So why am I mentioning the “Stuff You Should Have…” essay and the author? I found the write up in line with thoughts my colleagues and I have shared. Let me highlight a few of Dr. Handmer’s “Should haves” despite my dislike for “woulda coulda shoulda” as a mental bookshelf.
The write up says:
in the sorts of jobs you want to have, no-one should have to spell anything out for you.
I want to point out that the essay may not be appropriate for a person who seeks a job washing dishes at the El Nopal restaurant on Goose Creek Road. The observation strikes me as appropriate for an individual who seeks employment at a high-performing organization or an aspiring “performant” outfit. (I love the coinage “performant”; it is very with it.)
What are the other dinobaby-in-the-making observations in the write up? I have rephrased some of the comments, and I urge you to read the original essay. Here goes:
- Do something tangible to demonstrate your competence. Doom scrolling and watching TikTok-type videos may not do the job.
- Offer proof you deliver value in whatever you do. I am referring to “good” actors, not “bad” actors selling Telegram and WhatsApp hacking services on the Dark Web. “Proof” is verifiable facts, a reference from an individual of repute, or demonstrating a bit of software posted on GitHub or licensed from you.
- Watch, learn, and act in a way that benefits the organization, your colleagues, and your manager.
- Change jobs to grow and demonstrate your capabilities.
- Suck it up, buttercup. Life is a series of challenges. Meet them. Deliver value.
I want to acknowledge that not all dinobabies exhibit these traits as they toddle toward the holding tank for the soon-to-be-dead. However, for an individual who wants to contribute and grow, the ideas in this essay are good ones to consider and then implement.
I do have several observations:
- The percentage of a cohort who can consistently do and deliver is very small. Excellence is not for everyone. This has significant career implications unless you have a lot of money, family connections, or a Hollywood glow.
- Most of the young people with whom I interact say they have these or similar qualities. Then their own actions prove they don’t. Here’s an example: I met a business school dean. I offered to share some ideas relevant to the job market. I gave him my card because he forgot his cards. He never emailed me. I contacted him and said politely, “What’s up?” He double talked and wanted to meet up in the spring. What’s that tell me about this person’s work ethic? Answer: Loser.
- Universities and other formal training programs struggle even when the course material and the teacher are on point. The “problem” begins before the student shows up in class. The impact of family stress on a person creates a hot house of sorts. What grows in the hortorium? Species with an inability to concentrate, a pollen that cannot connect with an ovule, and a baked-in confusion of “I will do it” and “doing it.”
Net net: This dinobaby is happy to say that Dr. Handmer will make a very good dinobaby some day.
Stephen E Arnold, February 28, 2025
Meta and Torrents: True, False, or Rationalization?
February 26, 2025
AIs gobble datasets for training. Many of those LLMs and datasets contain biased information, are incomplete, or plain stink. One ethical but cumbersome way to train algorithms would be to notify people that their data, creative content, or other information will be used to train AI. Offering to pay for the right to use the data would be a useful step, some argue.
Will this happen? Obviously not.
Why?
Because it’s sometimes easier to take instead of asking. According to Tom’s Hardware, “Meta Staff Torrented Nearly 82TB of Pirated Books for AI Training - Court Records Reveal Copyright Violations.” The article explains that Meta pirated 81.7 TB of books from the shadow libraries Anna’s Archive, Z-Library, and LibGen. These books were then used to train AI models. Meta is now facing a class action lawsuit about using content from the shadow libraries.
The allegations arise from Meta employees’ written communications. Some of these messages provide insight into employees’ concern about tapping pirated materials. The employees were getting frown lines, but then some staffers’ views shifted when they concluded smart software helped people access information.
Here’s a passage from the cited article I found interesting:
“Then, in January 2023, Mark Zuckerberg himself attended a meeting where he said, “We need to move this stuff forward… we need to find a way to unblock all this.” Some three months later, a Meta employee sent a message to another one saying they were concerned about Meta IP addresses being used “to load through pirate content.” They also added, “torrenting from a corporate laptop doesn’t feel right,” followed by laughing out loud emoji. Aside from those messages, documents also revealed that the company took steps so that its infrastructure wasn’t used in these downloading and seeding operations so that the activity wouldn’t be traced back to Meta. The court documents say that this constitutes evidence of Meta’s unlawful activity, which seems like it’s taking deliberate steps to circumvent copyright laws.”
If true, the approach smacks of that suave Silicon Valley style. If false, my faith in a yacht owner with gold chains might be restored.
Whitney Grace, February 26, 2025
AI Research Tool from Perplexity Is Priced to Undercut the Competition
February 26, 2025
Are prices for AI-generated research too darn high? One firm thinks so. In a Temu-type bid to take over the market, reports VentureBeat, "Perplexity Just Made AI Research Crazy Cheap—What that Means for the Industry." CEO Aravind Srinivas credits open source software for making the move possible, opining that "knowledge should be universally accessible." Knowledge, yes. AI research? We are not so sure. Nevertheless, here we are. The write-up describes the difference in pricing:
"While Anthropic and OpenAI charge thousands monthly for their services, Perplexity offers five free queries daily to all users. Pro subscribers pay $20 monthly for 500 daily queries and faster processing — a price point that could force larger AI companies to explain why their services cost up to 100 times more."
Not only is Perplexity’s Deep Research cheaper than the competition, crows the post, its accuracy rivals theirs. We are told:
"[Deep Research] scored 93.9% accuracy on the SimpleQA benchmark and reached 20.5% on Humanity’s Last Exam, outperforming Google’s Gemini Thinking and other leading models. OpenAI’s Deep Research still leads with 26.6% on the same exam, but OpenAI charges $200 percent for that service. Perplexity’s ability to deliver near-enterprise level performance at consumer prices raises important questions about the AI industry’s pricing structure."
Well, okay. Not to stray too far from the point, but is a 20.5% or a 26.6% on Humanity’s Last Exam really something to brag about? Last we checked, those were failing grades. By far. Isn’t it a bit too soon to be outsourcing research to any LLM? But I digress.
We are told the low, low cost Deep Research is bringing AI to the micro-budget masses. And, soon, to the Windows-less—Perplexity is working on versions for iOS, Android, and Mac. Will this spell disaster for the competition?
Cynthia Murrell, February 26, 2025
Rest Easy. AI Will Not Kill STEM Jobs
February 25, 2025
Written by a dinobaby, not smart software. But I would replace myself with AI if I could.
Bob Hope quipped, “A sense of humor is good for you. Have you ever heard of a laughing hyena with heartburn?” No, Bob, I have not.
Here’s a more modern joke for you from the US Bureau of Labor Statistics circa 2025. It is much fresher than Mr. Hope’s quip from a half century ago.
The Bureau of Labor Statistics says:
Employment in the professional, scientific, and technical services sector is forecast to increase by 10.5% from 2023 to 2033, more than double the national average. (Source: Investopedia)
Okay, I wonder what to make of those LinkedIn, XTwitter, and Reddit posts about technology workers who cannot find jobs in these situations:
- Recent college graduates with computer science degrees
- Recently terminated US government workers from agencies like 18F
- Workers over 55 urged to take early retirement.
The item about the rosy job market appeared in Slashdot too. Here’s the quote I noted:
Employment in the professional, scientific, and technical services sector is forecast to increase by 10.5% from 2023 to 2033, more than double the national average. According to the BLS, the impact AI will have on tech-sector employment is highly uncertain. For one, AI is adept at coding and related tasks. But at the same time, as digital systems become more advanced and essential to day-to-day life, more software developers, data managers, and the like are going to be needed to manage those systems. "Although it is always possible that AI-induced productivity improvements will outweigh continued labor demand, there is no clear evidence to support this conjecture," according to BLS researchers.
Robert Half, an employment firm, is equally optimistic. Just a couple of weeks ago, that outfit said:
Companies continue facing strong competition from other firms for tech talent, particularly for candidates with specialized skills. Across industries, AI proficiency tops the list of most-sought capabilities, with organizations needing expertise for everything from chatbots to predictive maintenance systems. Other in-demand skill areas include data science, IT operations and support, cybersecurity and privacy, and technology process automation.
What am I to conclude from these data? Here are my preliminary thoughts:
- The big time consulting firms are unlikely to change their methods of cost reduction; that is, if software (smart or dumb) can do a job for less money, that software will be included on a list of options. Given a choice of going out of business or embracing smart software, a significant percentage of consulting firm clients will give AI a whirl. If AI works and the company stays in business or grows, the humans will be repurposed or allowed to find their future elsewhere.
- The top one percent in any discipline will find work. The other 99 percent will need to have family connections, family wealth, or a family business to provide a boost for a great job. What if a person is not in the top one percent of something? Yeah, well, that’s not good for quite a few people.
- The permitted dominance of duopolies or oligopolies in most US business sectors means that some small and mid-sized businesses will have to find ways to generate revenue. My experience in rural Kentucky is that local accounting, legal, and technology companies are experimenting with smart software to boost productivity (the MBA word for cheaper work functions). Local employment options are dwindling because the smaller employers cannot stay in business. Potential employees want more pay than the company can afford. Result? Downward spiral which appears to be accelerating.
Am I confident in statistics related to wages, employment, and the growth of new businesses and industrial sectors? No, I am not. Statistical projections work pretty well in nuclear fuel management. Nested mathematical procedures in smart software work pretty well for some applications. Using smart software to reduce operating costs works pretty well right now.
Net net: Without meaningful work, some of life’s challenges will spark unanticipated outcomes. Exactly what type of stress breaks a social construct? Those in the job hunt will provide numerous test cases, and someone will do an analysis. Will it be correct? Sure, close enough for horseshoes.
Stop complaining. Just laugh as Mr. Hope noted. No heartburn and cost savings to boot.
Stephen E Arnold, February 25, 2025
Advice for Programmers: AI-Proof Your Career
February 24, 2025
Software engineer and blogger Sean Goedecke has some career advice for those who, like himself, are at risk of losing their programming jobs to AI. He counsels, "To Avoid Being Replaced by LLMs, Do What They Can’t." Logical enough. But what will these tools be able to do, and when will they be able to do it? That is the $25 million question. Goedecke has suggestions for the medium term, and the long term.
Right now, he advises, engineers should do three things: First, use the tools. They can help you gain an advantage in the field. And also, know-thine-enemy, perhaps? Next, learn how LLMs work so you can transition to the growing field of AI work. If you can’t beat them, join them, we suppose. Finally, climb the ranks posthaste, for those in junior roles will be the first to go. Ah yes, the weak get eaten. It is a multipronged approach.
For the medium term, Goedecke predicts which skills LLMs are likely to master first. Get good at the opposite of that. For example, ill-defined or poorly-scoped problems, solutions that are hard to verify, and projects with huge volumes of code are all very difficult for algorithms. For now.
In the long term, work yourself into a position of responsibility. There are few of those to go around. So, as noted above, start vigorously climbing over your colleagues now. Why? Because executives will always need at least one good human engineer they can trust. The post observes:
"A LLM strong enough to take responsibility – that is, to make commitments and be trusted by management – would have to be much, much more powerful than a strong engineer. Why? Because a LLM has no skin in the game, which means the normal mechanisms of trust can’t apply. Executives trust engineers because they know those engineers will experience unpleasant consequences if they get it wrong. Because the engineer is putting something on the line (e.g. their next bonus, or promotion, or in the extreme case being fired), the executive can believe in the strength of their commitment. A LLM has nothing to put on the line, so trust has to be built purely on their track record, which is harder and takes more time. In the long run, when almost every engineer has been replaced by LLMs, all companies will still have at least one engineer around to babysit the LLMs and to launder their promises and plans into human-legible commitments. Perhaps that engineer will eventually be replaced, if the LLMs are good enough. But they’ll be the last to go."
If you are lucky, it will be time to retire by then. For those young enough that this is unlikely, or for those who do not excel at the rat race, perhaps a career change is in order. What jobs are safe? Sadly, this dinobaby writer does not have the answer to that question.
Cynthia Murrell, February 24, 2025
OpenAI Furthers Great Research
February 21, 2025
Unsatisfied with existing AI cheating solutions? If so, Gizmodo has good news for you: “OpenAI’s ‘Deep Research’ Gives Students a Whole New Way to Cheat on Papers.” Writer Kyle Barr explains:
“OpenAI’s new ‘Deep Research’ tool seems perfectly designed to help students fake their way through a term paper unless asked to cite sources that don’t include Wikipedia. OpenAI’s new feature, built on top of its upcoming o3 model and released on Sunday, resembles one Google introduced late last year with Gemini 2.0. Google’s ‘Deep Research’ is supposed to generate long-form reports over the course of 30 minutes or more, depending on the depth of the requested topic. Boiled down, Google’s and OpenAI’s tools are AI agents capable of performing multiple internet searches while reasoning about the next step to generate a report.”
Deep Research even functions in a side panel, providing updates on its direction and progress. So helpful! However, the tool is not for those looking to score an A. Like a student rushing to finish a paper the old-fashioned way, Barr notes, it relies heavily on Wikipedia. An example report did include a few trusted sites, like Pew Research, but such reliable sources were in the minority. Besides, the write-up emphasizes:
“Remember, this is just a bot scraping the internet, so it won’t be accessing any non-digitized books or—ostensibly—any content locked behind a paywall. … Because it’s essentially an auto-Googling machine, the AI likely won’t have access to the most up-to-date and large-scale surveys from major analysis firms. … That’s not to say the information was inaccurate, but anybody who generates a report is at the mercy of suspect data and the AI’s interpretation of that data.”
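Boiled down even further, the “auto-Googling machine” is a loop: ask the model what to search next, search, add the results to the notes, and repeat until the model decides it has enough to draft a report. Here is a minimal sketch of that loop; web_search and llm_reason are hypothetical placeholders for real retrieval and model calls, not OpenAI’s or Google’s actual code:

    # Minimal sketch of the search-and-reason loop described above.
    # web_search() and llm_reason() are hypothetical placeholders, not
    # OpenAI's or Google's implementation.

    def web_search(query: str) -> str:
        return f"results for {query!r}"  # stand-in for real retrieval

    def llm_reason(topic: str, notes: list[str]) -> str | None:
        """Pick the next query, or None when the notes look sufficient."""
        return None if len(notes) >= 3 else f"{topic}, aspect {len(notes) + 1}"

    def deep_research(topic: str) -> str:
        notes: list[str] = []
        while (query := llm_reason(topic, notes)) is not None:
            notes.append(web_search(query))  # search, accumulate, repeat
        return "REPORT:\n" + "\n".join(notes)  # draft report from the notes

    print(deep_research("teen social media use"))

Notice that nothing in the loop checks whether the sources are any good; that remains the reader’s problem.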
Meh, we suppose that is okay if one just needs a C to get by. But is it worth the $200 per month subscription? I suppose that depends on the student, and the parents’ willingness to sign up for services that will make gentle Ben and charming Chrissie smarter. Besides, we are sure more refined versions are in our future.
Cynthia Murrell, February 21, 2025
Sam Altman: The Waffling Man
February 17, 2025
Another dinobaby commentary. No smart software required.
Chaos is good. Flexibility is good. AI is good. Sam Altman, whom I reference as “Sam AI-Man,” has some explaining to do. OpenAI is a consumer of cash. The Chinese PR push suggests that Deepseek has found a way to do OpenAI-type computing the way Shein and Temu do gym clothes.
I noted “Sam Altman Admits OpenAI Was On the Wrong Side of History in Open Source Debate.” The write up does not come out and state, “OpenAI was stupid when it embraced proprietary software’s approach” to meeting user needs. To be frank, Sam AI-Man was not particularly clear either.
The write up says that Sam AI-Man said:
“Yes, we are discussing [releasing model weights],” Altman wrote. “I personally think we have been on the wrong side of history here and need to figure out a different open source strategy.” He noted that not everyone at OpenAI shares his view and it isn’t the company’s current highest priority. The statement represents a remarkable departure from OpenAI’s increasingly proprietary approach in recent years, which has drawn criticism from some AI researchers and former allies, most notably Elon Musk, who is suing the company for allegedly betraying its original open source mission.
My view is that Sam AI-Man wants to emulate other super techno leaders and get whatever he wants. Not surprisingly, other super techno leaders have their own ideas. I would suggest that the objective of these AI jousts is power, control, and money.
“What about the users?” a faint voice asks. “And the investors?” another bold soul queries.
Who?
Stephen E Arnold, February 17, 2025
What Happens When Understanding Technology Is Shallow? Weakness
February 14, 2025
Yep, a dinobaby wrote this blog post. Replace me with a subscription service or a contract worker from Fiverr. See if I care.
I like this question. Even more satisfying is that a big name seems to have answered it. I refer to an essay by Gary Marcus titled “The Race for ‘AI Supremacy’ Is Over — at Least for Now.”
Here’s the key passage in my opinion:
China caught up so quickly for many reasons. One that deserves Congressional investigation was Meta’s decision to open source their LLMs. (The question that Congress should ask is, how pivotal was that decision in China’s ability to catch up? Would we still have a lead if they hadn’t done that? Deepseek reportedly got its start in LLMs retraining Meta’s Llama model.) Putting so many eggs in Altman’s basket, as the White House did last week and others have before, may also prove to be a mistake in hindsight. … The reporter Ryan Grim wrote yesterday about how the US government (with the notable exception of Lina Khan) has repeatedly screwed up by placating big companies and doing too little to foster independent innovation
The write up is quite good. What’s missing, in my opinion, is the linkage of a probe to determine how a technology innovation released as a not-so-stealthy open source project can affect the US financial markets. The result was satisfying to the Chinese planners.
Also, the write up does not put the probe or “foray” in a strategic context. China wants to make certain its simple message “China smart, US dumb” gets into the world’s communication channels. That worked quite well.
Finally, the write up does not point out that the US approach to AI has given China an opportunity to demonstrate that it can borrow and refine with aplomb.
Net net: I think China is doing Shein and Temu in the AI and smart software sector.
Stephen E Arnold, February 14, 2025
Orchestration Is Not Music When AI Agents Work Together
February 13, 2025
Are multiple AIs better than one? Megaputer believes so. The data firm sent out a promotional email urging us to “Build Multi-Agent Gen-AI Systems.” With the help of its products, of course. We are told:
“Most business challenges are too complex for a single AI engine to solve. What is the way forward? Introducing Agent-Chain Systems: A novel groundbreaking approach leveraging the collaborative strengths of specialized AI models, each configured for distinct analytical tasks.
- Validate results through inter-agent verification mechanisms, minimizing hallucinations and inconsistencies.
- Dynamically adapt workflows by redistributing tasks among Gen-AI agents based on complexity, optimizing resource utilization and performance.
- Build AI applications in hours for tasks like automated taxonomy building and complex fact extraction, going beyond traditional AI limitations.”
If this approach really reduces AI hallucinations, there may be something to it. The firm invites readers to explore a few case studies they have put together: One is for an anonymous pharmaceutical company, one for a US regulatory agency, and the third for a large retail company. Snapshots of each project’s dashboard further illustrate the concept. Are cooperative AI agents the next big thing in generative AI? Megaputer, for one, is banking on it. Founded back in 1997, the small business is based in Bloomington, Indiana.
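Out of curiosity, here is a minimal sketch of the agent-chain idea: one agent drafts, a second agent verifies, and disagreement triggers a retry or a hand-off to a human. This is our illustration, not Megaputer’s code; draft_agent and verify_agent are hypothetical stand-ins for real model calls:

    # Minimal sketch of an agent chain with inter-agent verification.
    # draft_agent() and verify_agent() are hypothetical stand-ins for
    # real model calls, not Megaputer's product.

    def draft_agent(task: str) -> str:
        """First agent: produce a candidate answer for the task."""
        return f"draft answer for: {task}"  # placeholder for an LLM call

    def verify_agent(task: str, answer: str) -> bool:
        """Second agent: independently check the candidate answer."""
        return task in answer  # placeholder for a real verification model

    def agent_chain(task: str, max_retries: int = 3) -> str:
        """Draft, verify, and retry until the verifier accepts."""
        for _ in range(max_retries):
            candidate = draft_agent(task)
            if verify_agent(task, candidate):
                return candidate
        raise RuntimeError("agents could not agree; flag for human review")

    print(agent_chain("automated taxonomy building for retail products"))

The interesting design choice is the failure path: a second model that can say “no” is what turns one hallucinating agent into a system that at least knows when to escalate.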
Cynthia Murrell, February 10, 2025
The Google: Tell Me, Please, What Is a Malicious App?
February 12, 2025
Yep, another dinobaby emission. No smart software required.
I suggest you take a quick look at an important essay about the data which flows from Google’s Android and Apple’s iOS. The paper is “Everyone Knows Your Location: Tracking Myself Down Through In-App Ads” by Tim. The main point of the write up is to disclose information that has been generally closely held by a number of entities. I strongly recommend the write up, and it is possible that it could be made difficult to locate in the near future. The article says:
After more than couple dozen hours of trying, here are the main takeaways:
- I found a couple requests sent by my phone with my location + 5 requests that leak my IP address, which can be turned into geolocation using reverse DNS.
- Learned a lot about the RTB (real-time bidding) auctions and OpenRTB protocol and was shocked by the amount and types of data sent with the bids to ad exchanges.
- Gave up on the idea to buy my location data from a data broker or a tracking service, because I don’t have a big enough company to take a trial or $10-50k to buy a huge database with the data of millions of people + me.
Well maybe I do, but such expense seems a bit irrational.
Turns out that EU-based peoples’ data is almost the most expensive. But still, I know my location data was collected and I know where to buy it!
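The reverse DNS step Tim mentions is simple enough to demonstrate with nothing but Python’s standard library. A minimal sketch, assuming a leaked address is in hand (the one below is a public DNS server used as a placeholder):

    # Minimal sketch of the reverse-DNS step: a leaked IP address is
    # mapped back to a hostname, which often encodes the ISP and region.
    # The address below is a placeholder, not a leaked one.
    import socket

    def reverse_dns(ip: str) -> str:
        """Return the hostname registered for an IP, or a note if none."""
        try:
            hostname, _, _ = socket.gethostbyaddr(ip)
            return hostname
        except socket.herror:
            return "no PTR record"

    print(reverse_dns("8.8.8.8"))  # e.g. "dns.google"

ISP hostnames frequently embed city or exchange codes, which is one way a leaked IP address becomes approximate geolocation without buying anything from a data broker.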
Tim’s essay sets the stage for a Google Security Blog post titled “How We Kept the Google Play & Android App Ecosystems Safe in 2024.” That write up is another example of Google’s self-promotion. It lacks the snap of the quantum supremacy pitch and the endless backpatting about Google’s smart software.
The write up says:
To keep out bad actors, we have always used a combination of human security experts and the latest threat-detection technology. In 2024, we used Google’s advanced AI to improve our systems’ ability to proactively identify malware, enabling us to detect and block bad apps more effectively. It also helps us streamline review processes for developers with a proven track record of policy compliance. Today, over 92% of our human reviews for harmful apps are AI-assisted, allowing us to take quicker and more accurate action to help prevent harmful apps from becoming available on Google Play.
I want to ask one question, “Is Google’s advertising a malicious app?” The answer depends on one’s point of view. Google would assert that it is not doing anything other than making high value services available either for free or at a very low cost to the consumer.
A skeptical person might respond, “Your system sustains the digital online advertising sector. Your technology helps, to some degree, third-party advertising services firms gather information and cross-correlate it for the fine-grained intelligence described in Tim’s article.”
Google, which is it? Is your advertising system malicious or is it a benefit to users? This is a serious question, and it is one that smarmy self-promotion and PR campaigns are likely to have difficulty answering.
Stephen E Arnold, February 11, 2025