Rest Easy. AI Will Not Kill STEM Jobs
February 25, 2025
Written by a dinobaby, not smart software. But I would replace myself with AI if I could.
Bob Hope quipped, “A sense of humor is good for you. Have you ever heard of a laughing hyena with heartburn?” No, Bob, I have not.
Here’s a more modern joke for you from the US Bureau of Labor Statistics circa 2025. It is much fresher than Mr. Hope’s quip from a half century ago.
The Bureau of Labor Statistics says:
Employment in the professional, scientific, and technical services sector is forecast to increase by 10.5% from 2023 to 2033, more than double the national average. (Source: Investopedia)
Okay, then I wonder what to make of those LinkedIn, XTwitter, and Reddit posts about technology workers who cannot find jobs. Consider these situations:
- Recent college graduates with computer science degrees
- Recently terminated US government workers from agencies like 18F
- Workers over 55 urged to take early retirement
The item about the rosy job market appeared in Slashdot too. Here’s the quote I noted:
Employment in the professional, scientific, and technical services sector is forecast to increase by 10.5% from 2023 to 2033, more than double the national average. According to the BLS, the impact AI will have on tech-sector employment is highly uncertain. For one, AI is adept at coding and related tasks. But at the same time, as digital systems become more advanced and essential to day-to-day life, more software developers, data managers, and the like are going to be needed to manage those systems. "Although it is always possible that AI-induced productivity improvements will outweigh continued labor demand, there is no clear evidence to support this conjecture," according to BLS researchers.
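For scale, that 10.5% figure covers a full decade. A quick back-of-the-envelope conversion to an annual rate (assuming steady compounding) shows how modest the projection really is:

```python
# Convert the BLS ten-year growth forecast to an annual rate.
total_growth = 0.105   # 10.5% growth from 2023 to 2033
years = 10

annual_rate = (1 + total_growth) ** (1 / years) - 1
print(f"About {annual_rate:.1%} per year")  # About 1.0% per year
```

One percent a year is not exactly a gold rush, but the optimism continues elsewhere.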
Robert Half, an employment firm, is equally optimistic. Just a couple of weeks ago, that outfit said:
Companies continue facing strong competition from other firms for tech talent, particularly for candidates with specialized skills. Across industries, AI proficiency tops the list of most-sought capabilities, with organizations needing expertise for everything from chatbots to predictive maintenance systems. Other in-demand skill areas include data science, IT operations and support, cybersecurity and privacy, and technology process automation.
What am I to conclude from these US government and staffing firm data? Here are my preliminary thoughts:
- The big time consulting firms are unlikely to change their methods of cost reduction; that is, if software (smart or dumb) can do a job for less money, that software will be included on a list of options. Given a choice of going out of business or embracing smart software, a significant percentage of consulting firm clients will give AI a whirl. If AI works and the company stays in business or grows, the humans will be repurposed or allowed to find their future elsewhere.
- The top one percent in any discipline will find work. The other 99 percent will need to have family connections, family wealth, or a family business to provide a boost for a great job. What if a person is not in the top one percent of something? Yeah, well, that’s not good for quite a few people.
- The permitted dominance of duopolies or oligopolies in most US business sectors means that some small and mid-sized businesses will have to find ways to generate revenue. My experience in rural Kentucky is that local accounting, legal, and technology companies are experimenting with smart software to boost productivity (the MBA word for cheaper work functions). Local employment options are dwindling because the smaller employers cannot stay in business. Potential employees want more pay than the company can afford. Result? Downward spiral which appears to be accelerating.
Am I confident in statistics related to wages, employment, and the growth of new businesses and industrial sectors? No, I am not. Statistical projections work pretty well in nuclear fuel management. Nested mathematical procedures in smart software work pretty well for some applications. Using smart software to reduce operating costs works pretty well right now.
Net net: Without meaningful work, some of life’s challenges will spark unanticipated outcomes. Exactly what type of stress breaks a social construct? Those in the job hunt will provide numerous test cases, and someone will do an analysis. Will it be correct? Sure, close enough for horseshoes.
Stop complaining. Just laugh as Mr. Hope noted. No heartburn and cost savings to boot.
Stephen E Arnold, February 25, 2025
Content Injection Can Have Unanticipated Consequences
February 24, 2025
The work of a real, live dinobaby. Sorry, no smart software involved. Whuff, whuff. That’s the sound of my swishing dino tail. Whuff.
Years ago I gave a lecture to a group of Swedish government specialists affiliated with the Forestry Unit. My topic was the procedure for causing certain common algorithms used for text processing to increase the noise in their procedures. The idea was to input certain types of text and numeric data in a specific way. (No, I will not disclose the methods in this free blog post, but if you have a certain profile, perhaps something can be arranged by writing benkent2020 at yahoo dot com. If not, well, that’s life.)
We focused on a handful of methods widely used in what now is called “artificial intelligence.” Keep in mind that most of the procedures are not new. There are some flips and fancy dancing introduced by individual teams, but the math is not invented by TikTok teens.
In my lecture, the forestry professionals wondered if these methods could be used to achieve specific objectives or “ends.” The answer was and remains, “Yes.” The idea is simple. Once methods are put in place, the algorithms chug along; some are brute force, others probabilistic. Either way, content and data injections can be shaped, just like the gizmos required to make kinetic events occur.
The point of this forestry excursion is to make clear that a group of people, operating in a loosely coordinated manner, can create data or content. Those data or content can be weaponized. When ingested by or injected into a content processing flow, the outputs of the larger system can be fiddled: More emphasis here, a little less accuracy there, and an erosion of whatever “accuracy” calculations are used to keep the system within the engineers’ and designers’ parameters. A plebeian way to describe the goal: Disinformation or accuracy erosion.
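The specific methods stay undisclosed, as noted above. Still, a toy sketch can illustrate the general principle without giving anything away: a handful of shaped, repetitive documents shifts the term statistics that many bag-of-words procedures lean on. The forestry corpus and the injected strings below are invented purely for illustration:

```python
# Toy illustration of accuracy erosion via content injection. This is
# not the undisclosed method from the lecture, just a generic sketch of
# how coordinated, shaped content can move a pipeline's term statistics.
from collections import Counter

def term_frequencies(corpus):
    """Relative token frequencies, the raw material for many
    bag-of-words scoring and routing procedures."""
    counts = Counter()
    for doc in corpus:
        counts.update(doc.lower().split())
    total = sum(counts.values())
    return {term: n / total for term, n in counts.items()}

clean_corpus = [
    "forest inventory shows stable pine growth",
    "pine beetle damage remains within expected range",
]

# A loosely coordinated group injects shaped content: nothing obviously
# false, just repetition engineered to move the weights.
injected = ["pine beetle crisis spreading fast unchecked"] * 8

before = term_frequencies(clean_corpus)
after = term_frequencies(clean_corpus + injected)

for term in ("beetle", "crisis"):
    print(term, round(before.get(term, 0.0), 3), "->", round(after.get(term, 0.0), 3))
```

Scale that idea up from a toy counter to a production ingestion pipeline, and the possibilities become clearer.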
I read “Meet the Journalists Training AI Models for Meta and OpenAI.” The write up explains that journalists without jobs or in search of extra income are creating “content” for smart software companies. The idea is that if one just does the Silicon Valley thing and sucks down any and all content, lawyers might come calling. Therefore, paying for “real” information is a better path.
Please, read the original article to get a sense of who is doing the writing and what baggage or mindset these people might bring to their work.
If the content is distorted — either intentionally or unintentionally — the impact of these content objects on the larger smart software system might have some interesting consequences. I just wanted to point out that weaponized information can have an impact. Those running smart software and buying content on the assumption that it is just fine might find some interesting consequences in the outputs.
Stephen E Arnold, February 24, 2025
AI Worriers, Play Some Bing Crosby Music
February 24, 2025
This blog post is the work of a real-live dinobaby. No smart software involved.
The Guardian newspaper ran an interesting write up about smart software and the inevitability of complaining to stop it in its tracks. “I Met the Godfathers of AI in Paris – Here’s What They Told Me to Really Worry About.” I am not sure what’s being taught in British schools, but the headline features the author, a split infinitive, and the infamous “ending a sentence with a preposition” fillip. Very sporty.
The write up includes quotes from the godfathers:
“It’s not today’s AI we need to worry about, it’s next year’s,” Tegmark told me. “It’s like if you were interviewing me in 1942, and you asked me: ‘Why aren’t people worried about a nuclear arms race?’ Except they think they are in an arms race, but it’s actually a suicide race.”
I am not sure what psychologists call worrying about the future. Bing Crosby took a different approach. He sang, “Don’t worry about tomorrow” and offered:
Why should we cling to some old faded thing
That used to be
Bing looked beyond the present but did not seem unduly worried. The Guardian is a bit more uptight.
The write up says:
The idea that we, on Earth, might lose control of an AGI that then turns on us sounds like science fiction – but is it really so far-fetched considering the exponential growth of AI development? As Bengio [an AI godfather, according to the Guardian] pointed out, some of the most advanced AI models have already attempted to deceive human programmers during testing, both in pursuit of their designated objectives and to escape being deleted or replaced with an update.
I circled this passage:
It seems as if we have a shrinking opportunity to lay down the incentives for companies to create the kind of AI that actually benefits our individual and collective lives: sustainable, inclusive, democracy-compatible, controlled. And beyond regulation, “to make sure there is a culture of participation embedded in AI development in general”, as Eloïse Gabadou, a consultant to the OECD on technology and democracy, put it.
Okay, so what’s the fix? Who implements the fix? Will the fix stop British universities in Manchester, Cambridge, and Oxford among others from teaching about AI or stop researchers from fiddling with snappier methods? Will the Mayor of London shut down the DeepMind outfit?
Nope. I am delighted that some people are talking about smart software. However, in the high tech world in which we live, I want to remind the Guardian, the last train for Chippenham has left the station. Too late, old chap. Learn to play Bing’s song. Chill.
Stephen E Arnold, February 24, 2025
Advice for Programmers: AI-Proof Your Career
February 24, 2025
Software engineer and blogger Sean Goedecke has some career advice for those who, like himself, are at risk of losing their programming jobs to AI. He counsels, "To Avoid Being Replaced by LLMs, Do What They Can’t." Logical enough. But what will these tools be able to do, and when will they be able to do it? That is the $25 million question. Goedecke has suggestions for the medium term and the long term.
Right now, he advises, engineers should do three things: First, use the tools. They can help you gain an advantage in the field. And also, know-thine-enemy, perhaps? Next, learn how LLMs work so you can transition to the growing field of AI work. If you can’t beat them, join them, we suppose. Finally, climb the ranks posthaste, for those in junior roles will be the first to go. Ah yes, the weak get eaten. It is a multipronged approach.
For the medium term, Goedecke predicts which skills LLMs are likely to master first. Get good at the opposite of that. For example, ill-defined or poorly-scoped problems, solutions that are hard to verify, and projects with huge volumes of code are all very difficult for algorithms. For now.
In the long term, work yourself into a position of responsibility. There are few of those to go around. So, as noted above, start vigorously climbing over your colleagues now. Why? Because executives will always need at least one good human engineer they can trust. The post observes:
"A LLM strong enough to take responsibility – that is, to make commitments and be trusted by management – would have to be much, much more powerful than a strong engineer. Why? Because a LLM has no skin in the game, which means the normal mechanisms of trust can’t apply. Executives trust engineers because they know those engineers will experience unpleasant consequences if they get it wrong. Because the engineer is putting something on the line (e.g. their next bonus, or promotion, or in the extreme case being fired), the executive can believe in the strength of their commitment. A LLM has nothing to put on the line, so trust has to be built purely on their track record, which is harder and takes more time. In the long run, when almost every engineer has been replaced by LLMs, all companies will still have at least one engineer around to babysit the LLMs and to launder their promises and plans into human-legible commitments. Perhaps that engineer will eventually be replaced, if the LLMs are good enough. But they’ll be the last to go."
If you are lucky, it will be time to retire by then. For those young enough that this is unlikely, or for those who do not excel at the rat race, perhaps a career change is in order. What jobs are safe? Sadly, this dino-baby writer does not have the answer to that question.
Cynthia Murrell, February 24, 2025
Thailand Creeps into Action with Some Swiss Effort
February 24, 2025
Hackers are intelligent bad actors who use their skills for evil; they pull black hat tricks for their own gain. The cyber criminals recently caught in a raid performed by three countries were definitely huge scammers. Khaosod English reports on the takedown: “Thai-Swiss-US Operation Nets Hackers Behind 1,000+ Cyber Attacks.”
Four European hackers were arrested on the Thai island of Phuket. They were charged with using ransomware to steal $16 million from over 1,000 victims. The hackers were wanted by Swiss and US authorities.
Thai, Swiss, and US law enforcement officials teamed up in Operation Phobos Aetor to arrest the bad actors. They were arrested on February 10, 2025 in Phuket. The details are as follows:
“The suspects, two men and two women, were apprehended at Mono Soi Palai, Supalai Palm Spring, Supalai Vista Phuket, and Phyll Phuket x Phuketique Phyll. Police seized over 40 pieces of evidence, including mobile phones, laptops, and digital wallets. The suspects face charges of Conspiracy to Commit an Offense Against the United States and Conspiracy to Commit Wire Fraud.
The arrests stemmed from an urgent international cooperation request from Swiss authorities and the United States, involving Interpol warrants for the European suspects who had entered Thailand as part of a transnational criminal organization.”
The ransomware attacks penetrated private networks to steal personal data and encrypt files. The hackers demanded cryptocurrency payments for decryption keys and threatened to publish the data if the ransoms weren’t paid.
Let’s give a round of applause for putting these crooks behind bars! On to Myanmar and Lao PDR!
Whitney Grace, February 24, 2025
Tales of Silicon Valley Management Method: Perceived Cruelty
February 21, 2025
A dinobaby post. No smart software involved.
I read an interesting write up. Is it representative? A social media confection? A suggestion that one of the 21st century’s masters of the universe harbors a Vlad the Impaler streak? I don’t know. But the article “Laid-Off Meta Employees Blast Zuckerberg for Running the Cruelest Tech Company Out There As Some Claim They Were Blindsided after Parental Leave” caught my attention. Note: This is a paywalled write up, and you have to pay up.
Straight away I want to point out:
- AI does not have organic carbon based babies — at least not yet
- AI does not require health care — routine maintenance, yes, but the downtime should be less than a year
- AI does not complain on social media about its gradient descents and Bayesian drift — hey, some do, like the new “I remember” AI from Google.
Now back to the write up. I noted this passage:
Over on Blind, an anonymous app for verified employees often used in the tech space, employees are noting that an unseasonable chill has come over Silicon Valley. Besides allegations of the company misusing the low-performer label, some also claimed that Meta laid them off while they were taking approved leave.
Yep, a social media business story.
There are other tech giants in the story, but one is cited as a source of an anonymous post:
A Microsoft employee wrote on Blind that a friend from Meta was told to “find someone” to let go even though everyone was performing at or above expectations. “All of these layoffs this year are payback for 2021–2022,” they wrote. “Execs were terrified of the power workers had [at] that time and saw the offers and pay at that time [are] unsustainable. Best way to stop that is put the fear of god back in the workers.”
I think that a big time, mainstream business publication has found a new source of business news: Employee complaint forums.
In the 1970s I worked with a fellow who was a big time reporter for Fortune. He ended up at the blue chip consulting firm helping partners communicate. He communicated with me. He explained how he tracked down humans, interviewed them, and followed up with experts to crank out enjoyable fact-based feature stories. He seemed troubled that the approach at a big time consulting firm was different from that of a big time magazine in Manhattan. He had an attitude, and he liked spending months working on a business story.
I recall him because he liked explaining his process.
I am not sure the story about the cruel Zuckster would have been one that he would have written. What’s changed? I suppose I could answer the question if I prowled social media employee grousing sites. But we are working on a monograph about Telegram, and we are taking a different approach. I suppose my method is closer to what my former colleague did in his Fortune days, reduced like a French sauce by the approach I learned at the blue chip consulting firm.
Maybe I should give social media research, anonymous sources, and something snappy like cruelty a whirl to enliven our work? Nah, probably not.
Stephen E Arnold, February 21, 2025
OpenAI Furthers Great Research
February 21, 2025
Unsatisfied with existing AI cheating solutions? If so, Gizmodo has good news for you: “OpenAI’s ‘Deep Research’ Gives Students a Whole New Way to Cheat on Papers.” Writer Kyle Barr explains:
“OpenAI’s new ‘Deep Research’ tool seems perfectly designed to help students fake their way through a term paper unless asked to cite sources that don’t include Wikipedia. OpenAI’s new feature, built on top of its upcoming o3 model and released on Sunday, resembles one Google introduced late last year with Gemini 2.0. Google’s ‘Deep Research’ is supposed to generate long-form reports over the course of 30 minutes or more, depending on the depth of the requested topic. Boiled down, Google’s and OpenAI’s tools are AI agents capable of performing multiple internet searches while reasoning about the next step to generate a report.”
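Boiled down even further, the agent pattern in that passage is a loop: search, take notes, let the model decide whether to search again, then draft the report. Here is a minimal sketch of that loop; search_web and llm are hypothetical stand-ins, since neither OpenAI nor Google has published the Deep Research internals:

```python
# Minimal sketch of the search-and-reason agent loop described above.
# search_web() and llm() are hypothetical stand-ins, not real APIs.

def search_web(query):
    # A real agent would call a search engine API here.
    return f"[top results for: {query}]"

def llm(prompt):
    # A real agent would call a language model here. This stub ends the
    # loop after one round so the sketch runs end to end.
    return "DONE" if "Reply with" in prompt else f"[model output for: {prompt[:50]}...]"

def deep_research(topic, max_rounds=5):
    notes = []
    query = topic
    for _ in range(max_rounds):
        results = search_web(query)
        notes.append(llm(f"Summarize what these results say about {topic}:\n{results}"))
        # The "reasoning about the next step" from the quoted passage:
        next_step = llm(
            f"Notes so far:\n{notes}\nReply with a better search query, or DONE."
        )
        if next_step.strip() == "DONE":
            break
        query = next_step
    return llm(f"Write a long-form report on {topic} from these notes:\n{notes}")

print(deep_research("impact of AI on tech sector employment"))
```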
Deep Research even functions in a side panel, providing updates on its direction and progress. So helpful! However, the tool is not for those looking to score an A. Like a student rushing to finish a paper the old-fashioned way, Barr notes, it relies heavily on Wikipedia. An example report did include a few trusted sites, like Pew Research, but such reliable sources were in the minority. Besides, the write-up emphasizes:
“Remember, this is just a bot scraping the internet, so it won’t be accessing any non-digitized books or—ostensibly—any content locked behind a paywall. … Because it’s essentially an auto-Googling machine, the AI likely won’t have access to the most up-to-date and large-scale surveys from major analysis firms. … That’s not to say the information was inaccurate, but anybody who generates a report is at the mercy of suspect data and the AI’s interpretation of that data.”
Meh, we suppose that is okay if one just needs a C to get by. But is it worth the $200 per month subscription? I suppose that depends on the student and the parents’ willingness to sign up for services that will make gentle Ben and charming Chrissie smarter. Besides, we are sure more refined versions are in our future.
Cynthia Murrell, February 21, 2025
Gemini, the Couch Potato, Watches YouTube
February 21, 2025
Have you ever told yourself that you have too many YouTube videos to watch? Now you can save time by using Gemini AI to watch them for you. What is Gemini AI? Google’s flagship smart software, which, according to Make Use Of’s “Gemini Can Now Watch YouTube Videos And Save Hours Of Time,” can do exactly what that headline promises.
Google recently rolled out an update to its Gemini AI that allows users to catch up on YouTube videos without having to actually watch them. The new feature is a marvelous advancement! The new addition to Gemini 2.0 Flash will watch the video, then answer questions about it or provide a summary. Google users can access Gemini through the Gemini site or the smartphone app. It’s also available for free without the Gemini Advanced subscription.
To access the video watching feature, users must select the 2.0 Flash Thinking Experimental with apps model from the sidebar.
Here’s how the cited article’s author used Gemini:
“… I came across a YouTube video about eight travel tips for Las Vegas. Instead of watching the entire video, I simply asked Gemini, “What are the eight travel tips in this video?” Gemini then processed the video and provided a concise summary of the travel tips. I also had Gemini summarize a video on changing a windshield wiper on a Honda CR-V, a chore I needed to complete. The results were simple and easy to understand, allowing me to glance at my iPhone screen instead of constantly stopping and starting the video during the process. The easiest way to grab a YouTube link is through your web browser or the Share button under the video.”
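For those who prefer a script to the app, the same trick can be approximated through the Gemini API. Below is a minimal sketch assuming the google-genai Python SDK; the model name, the Part.from_uri call, and the placeholder URL are assumptions that may differ by SDK version, so treat it as illustrative rather than gospel:

```python
# Sketch: ask a Gemini model to summarize a YouTube video by URL.
# Assumes the google-genai SDK (pip install google-genai); exact call
# signatures may differ by version.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents=[
        types.Part.from_uri(
            file_uri="https://www.youtube.com/watch?v=VIDEO_ID",  # hypothetical link
            mime_type="video/*",
        ),
        "What are the eight travel tips in this video?",
    ],
)
print(response.text)
```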
YouTube videos can be long and boring. Gemini condenses the information into digestible, quick-to-read bits. It’s an awesome tool, but if Gemini watches a video, does it count as a view for advertising? Will Gemini put on a few pounds snacking on Pringles?
Whitney Grace, February 21, 2025
Google and Personnel Vetting: Careless?
February 20, 2025
No smart software required. This dinobaby works the old fashioned way.
The Sundar & Prabhakar Comedy Show pulled another gag. This one did not delight audiences the way Prabhakar’s AI presentation did, nor does it outdo Google’s recent smart software gaffe. It is, however, a bit of a hoot for an outfit with money, smart people, and smart software.
I read the decidedly non-humorous news release from the Department of Justice titled “Superseding Indictment Charges Chinese National in Relation to Alleged Plan to Steal Proprietary AI Technology.” The write up, dated February 4, 2025, states:
A federal grand jury returned a superseding indictment today charging Linwei Ding, also known as Leon Ding, 38, with seven counts of economic espionage and seven counts of theft of trade secrets in connection with an alleged plan to steal from Google LLC (Google) proprietary information related to AI technology. Ding was initially indicted in March 2024 on four counts of theft of trade secrets. The superseding indictment returned today describes seven categories of trade secrets stolen by Ding and charges Ding with seven counts of economic espionage and seven counts of theft of trade secrets.
Mr. Ding, obviously a Type A worker, appears to have been quite industrious at the Google. He was not working just for the online advertising giant; he was working for another entity as well. The DoJ news release describes his set up this way:
While Ding was employed by Google, he secretly affiliated himself with two People’s Republic of China (PRC)-based technology companies. Around June 2022, Ding was in discussions to be the Chief Technology Officer for an early-stage technology company based in the PRC. By May 2023, Ding had founded his own technology company focused on AI and machine learning in the PRC and was acting as the company’s CEO.
What technology caught Mr. Ding’s eye? The write up reports:
Ding intended to benefit the PRC government by stealing trade secrets from Google. Ding allegedly stole technology relating to the hardware infrastructure and software platform that allows Google’s supercomputing data center to train and serve large AI models. The trade secrets contain detailed information about the architecture and functionality of Google’s Tensor Processing Unit (TPU) chips and systems and Google’s Graphics Processing Unit (GPU) systems, the software that allows the chips to communicate and execute tasks, and the software that orchestrates thousands of chips into a supercomputer capable of training and executing cutting-edge AI workloads. The trade secrets also pertain to Google’s custom-designed SmartNIC, a type of network interface card used to enhance Google’s GPU, high performance, and cloud networking products.
At least, Mr. Ding validated the importance of some of Google’s sprawling technical insights. That’s a plus I assume.
One of the more colorful items in the DoJ news release concerned “evidence.” The DoJ says:
As alleged, Ding circulated a PowerPoint presentation to employees of his technology company citing PRC national policies encouraging the development of the domestic AI industry. He also created a PowerPoint presentation containing an application to a PRC talent program based in Shanghai. The superseding indictment describes how PRC-sponsored talent programs incentivize individuals engaged in research and development outside the PRC to transmit that knowledge and research to the PRC in exchange for salaries, research funds, lab space, or other incentives. Ding’s application for the talent program stated that his company’s product “will help China to have computing power infrastructure capabilities that are on par with the international level.”
Mr. Ding did not use Google’s cloud-based presentation program. I found the explicit desire to “help China” interesting. One wonders how Google’s Googley interview process, run by Googley people, failed to notice any indicators of Mr. Ding’s loyalties. Googlers are very confident of their Googliness, which obviously tolerates an insider threat who conveys data to a nation state known to be adversarial in its view of the United States.
I am a dinobaby, and I find this type of employee insider threat at Google remarkable. Google bought Mandiant. Google has internal security tools. Google has a very proactive stance about its security capabilities. However, in this case, I wonder if a Googler ever noticed that Mr. Ding used PowerPoint, not the Google-approved presentation program. No true Googler would use PowerPoint, an archaic, third party program Microsoft bought eons ago and has managed to pump full of steroids for decades.
Yep, the tell — Googlers who use Microsoft products. Sundar & Prabhakar will probably integrate a short bit into their act in the near future.
Stephen E Arnold, February 20, 2025
A Super Track from You Know Who
February 20, 2025
Those CAPTCHA hoops we jump through are there to protect sites from bots, right? As Boing Boing reports, not so much. In fact, bots can now easily beat reCAPTCHA tests. Then why are we forced to navigate them to do anything online? So the software can track us, collect our data, and make parent company Google even richer. And enrich data brokers too. Writer Mark Frauenfelder cites a recent paper in “reCAPTCHA: 819 Million Hours of Wasted Human Time and Billions of Dollars in Google Profits.” We learn:
“‘They essentially get access to any user interaction on that web page,’ says Dr. Andrew Searles, a former computer security researcher at UC Irvine. Searle’s paper, titled ‘Dazed & Confused: A Large-Scale Real-World User Study of reCAPTCHAv2,’ found that Google’s widely-used CAPTCHA system is primarily a mechanism for tracking user behavior and collecting data while providing little actual security against bots. The study revealed that reCAPTCHA extensively monitors users’ cookies, browsing history, and browser environment (including canvas rendering, screen resolution, mouse movements, and user-agent data) — all of which can be used for advertising and tracking purposes. Through analyzing over 3,600 users, the researchers found that solving image-based challenges takes 557% longer than checkbox challenges and concluded that reCAPTCHA has cost society an estimated 819 million hours of human time valued at $6.1 billion in wages while generating massive profits for Google through its tracking capabilities and data collection, with the value of tracking cookies alone estimated at $888 billion.”
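Those two headline figures imply an hourly valuation that is easy to back out; a quick sanity check on the quoted numbers:

```python
# Back-of-the-envelope check on the figures quoted above.
hours_wasted = 819_000_000     # human hours spent solving reCAPTCHAs
value_of_time = 6_100_000_000  # estimated value of that time, in USD

implied_wage = value_of_time / hours_wasted
print(f"Implied wage: ${implied_wage:.2f}/hour")
# Implied wage: $7.45/hour -- close to the US federal minimum wage.
```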
That is quite a chunk of change. No wonder Google does not want to give up its CAPTCHA system—even if it no longer performs its original function. Why bother with matters of user inconvenience or even privacy when there are massive profits to be made?
Cynthia Murrell, February 20, 2025