Keeping an Eye on AI? Here Are Fifteen People of Interest for Some
March 13, 2025
Underneath the hype, there are some things AI is actually good at. But besides the players who constantly make the news, who is really shaping the AI landscape? A piece at Silicon Republic introduces us to "15 Influential Players Driving the AI Revolution." Writer Jenny Darmody observes:
"As AI continues to dominate the headlines, we’re taking a closer look at some of the brightest minds and key influencers within the industry. Throughout the month of February, SiliconRepublic.com has been putting AI under the microscope for more of a deep dive, looking beyond the regular news to really explore what this technology could mean. From the challenges around social media advertising in the new AI world to the concerns around its effect on the creative industries, there were plenty of worrying trends to focus on. However, there were also positive sides to the technology, such as its ability to preserve minority languages like Irish and its potential to reduce burnout in cybersecurity. While exploring these topics, the AI news just kept rolling: Deepseek continued to ruffle industry feathers, Thomson Reuters won a partial victory in its AI copyright case and the Paris AI Summit brought further investments and debates around regulation. With so much going on in the industry, we thought it was important to draw your attention to some key influencers you should know within the AI space."
Ugh, another roster of tech bros? Not so fast. On this list, the women actually outnumber the men, eight to seven. In fact, the first entry is Ireland’s first AI Ambassador, Patricia Scanlon, who has hopes for truly unbiased AI. Then there is the EU’s Lucilla Sioli, head of the European Commission’s AI Office. She is tasked with both coordinating Europe’s AI strategy and implementing the AI Act. We also happily note the inclusion of New York University’s Juliette Powell, who advises clients from gaming companies to banks on the responsible use of AI. See the write-up for the rest of the women and men who made the list.
Cynthia Murrell, March 13, 2025
Automobile Trivia: The Tesla Cybertruck and the Ford Pinto
March 11, 2025
Another post from the dinobaby. Alas, no smart software used for this essay.
I don’t cover the auto industry. However, this article caught my eye: “Cybertruck Goes to Mardi Gras Parade, Gets Bombarded by Trash and Flees in Shame: That’s Gotta Hurt.”
The write up reports:
With a whopping seven recalls in just over a year — and a fire fatality rate exceeding the infamous Ford Pinto— it’s never been a particularly great time to be a Cybertruck owner. But now, thanks to the political meddling of billionaire Tesla owner Elon Musk, it might be worse than ever. That’s what some Cybertruck drivers discovered firsthand at a Lundi Gras parade on Monday — the “Fat Monday” preamble to the famed Mardi Gras — when their hulking electric tanks were endlessly mocked and pelted with trash by revelers.
I did not know that the Tesla vehicle engaged in fire events at a rate greater than the famous Ford Pinto. I know the Pinto well. I bought one for a very low price. I drove it for about a year and sold it for a little more than I paid for it. I think I spent more time looking in my rear view mirrors than looking down the road. The Pinto, if struck from behind, would burn. I think the gas tank was made of some flimsy material. A bump in the back would cause the tank to leak and sometimes the vehicle would burst into flame. A couple of unlucky Pinto drivers suffered burns and some went to the big Ford dealership in the great beyond. I am not sure if the warranty was upheld.
I think this is interesting automotive trivia; for example, “What vehicle has a fire fatality rate exceeding the Ford Pinto?” The answer as I now know is the lovely and graceful Tesla Cybertruck.
The write up (which may be from The Byte or from Futurism) says:
According to a post on X-formerly-Twitter, at least one Cybertruck had its “bulletproof window” shattered by plastic beads before tucking tail and fleeing the parade under police protection. At least three Cybertrucks were reportedly there as part of a coordinated effort by an out-of-state Cybertruck Club to ferry parade marshals down the route. One marshal posted about their experience riding in the EV on Reddit, saying it was “boos and attacks from start to evacuation.”
I got a kick (not a recall or a fire) out of the write up and the plastic bead reference. Not as slick as “bouffon sous kétamine” (“buffoon on ketamine”), but darned good. And, no, I am not going to buy a Cybertruck. One year in Pinto fear was quite enough.
Now a test question: Which is more likely to explode? [a] a SpaceX rocket, [b] a Pinto, or [c] a Cybertruck?
Stephen E Arnold, March 11, 2025
A French Outfit Points Out Some Issues with Starlink-Type Companies
March 10, 2025
Another one from the dinobaby. No smart software. I spotted a story on the Thales Web site, but when I went back to check a detail, it had disappeared. After a bit of poking I found a recycled version called “Thales Warns Governments Over Reliance on Starlink-Type Systems.” The story must be accurate because it is from the “real” news outfit that wants my belief in their assertion of trust. Well, what do you know about trust?
Thales, as none of the people in Harrod’s Creek knows, is a French defence, intelligence, and go-to military hardware type of outfit. Thales and Dassault Systèmes are among the world leaders in a number of cutting-edge technology sectors. As a person who did some small work in France, I heard the Thales name mentioned a number of times. Thales has a core competency in electronics, military communications, and related fields.
The cited article reports:
Thales CEO Patrice Caine questioned the business model of Starlink, which he said involved frequent renewal of satellites and question marks over profitability. Without further naming Starlink, he went on to describe risks of relying on outside services for government links. “Government actors need reliability, visibility and stability,” Caine told reporters. “A player that – as we have seen from time to time – mixes up economic rationale and political motivation is not the kind that would reassure certain clients.”
I am certainly no expert in the lingo of a native French speaker using English words. I do know that the French language has a number of nuances which are difficult for a dinobaby like me to understand without saying, “Pourriez-vous répéter, s’il vous plaît?” (“Could you repeat that, please?”)
I noticed several things; specifically:
- The phrase “satellite renewal.” The idea is that the useful life of a Starlink-type device is shorter than that of some other technologies, such as those from Thales-type companies. Under the surface is the French attitude toward “fast fashion.” The idea is that cheap products are wasteful; well-made products, like a well-made suit, last a long time. Longer than a black baseball cap is how I interpreted the reference to “renewal.” I may be wrong, but this is a quite serious point underscoring the issue of engineering excellence.
- The reference to “profitability” seems to echo news reports that Starlink itself may be on the receiving end of preferential contract awards. If those types of cozy deals go away, will the Starlink-type business generate sufficient revenue to sustain innovation, higher quality, and longer life spans? Based on my limited knowledge of things French, this is a fairly direct way of pointing out the weak business model of the Starlink-type of service.
- The use of the words “reliability” and “stability” struck me as directing two criticisms at the Starlink-type of company. On one level the issue of corporate stability is obvious. However, “stability” applies to engineering methods as well as mental setup. Henri Bergson observed, “Think like a man of action, act like a man of thought.” I am not sure what M. Bergson would have thought about a professional wielding a chainsaw during a formal presentation.
- The direct reference to “mixing up” reiterates the mental stability and corporate stability referents. But the killer comment is that the merging of “economic rationale and political motivation” flashes bright warning lights to some French professionals and probably would resonate with other Europeans. I wonder what Austrian government officials thought about the chainsaw performance.
Net net: Some of the actions of a Starlink-type of company have been disruptive. In game theory, “keep people guessing” is a proven tactic. Will it work in France? Unlikely. Chainsaws will not be permitted in most meetings with Thales or French agencies. The baseball cap? Probably not.
Stephen E Arnold, March 10, 2025
AI-Generated Code Adds to Technical Debt
March 7, 2025
Technical debt refers to shipping flawed or expedient code that creates more work later. It is okay for projects to be rolled out with some technical debt as long as it is paid back. The problem comes when the code isn’t corrected and the debt snowballs into a huge problem. LeadDev explores how AI code affects projects: “How AI Generated Code Compounds Technical Debt.” The article highlights that it has never been easier to write code, especially with AI, but a large accumulation of technical debt has come along with it. The technical debt is so large that it is comparable to the US’s ballooning debt.
GitClear tracked an eightfold increase during 2024 in the frequency of code blocks with five or more lines that duplicate adjacent code. This was ten times higher than in the previous two years. GitClear found some more evidence of technical debt:
“That same year, 46% of code changes were new lines, while copy-pasted lines exceeded moved lines. “Moved,” lines is a metric GitClear has devised to track the rearranging of code, an action typically performed to consolidate previous work into reusable modules. “Refactored systems, in general, and moved code in particular, are the signature of code reuse,” says Bill Harding, CEO of Amplenote and GitClear. A year-on-year decline in code movement suggests developers are less likely to reuse previous work, a marked shift from existing industry best practice that would lead to more redundant systems with less consolidation of functions.”
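To make the distinction between copy-pasted growth and “moved” (refactored) code concrete, here is a minimal, hypothetical Python sketch. The function names and validation rules are invented for illustration; they are not taken from GitClear or LeadDev.

```python
# Copy-pasted growth: the same validation lines are duplicated in two handlers,
# the kind of redundancy a duplicated-block metric counts.
def create_user(payload: dict) -> dict:
    if "@" not in payload.get("email", ""):
        raise ValueError("invalid email")
    if not payload.get("name"):
        raise ValueError("missing name")
    return {"action": "create", **payload}

def update_user(payload: dict) -> dict:
    if "@" not in payload.get("email", ""):   # duplicated
        raise ValueError("invalid email")
    if not payload.get("name"):               # duplicated
        raise ValueError("missing name")
    return {"action": "update", **payload}

# "Moved" (refactored) alternative: the shared lines are consolidated into a
# reusable helper, so a later fix happens in one place instead of many.
def validate_user(payload: dict) -> None:
    if "@" not in payload.get("email", ""):
        raise ValueError("invalid email")
    if not payload.get("name"):
        raise ValueError("missing name")

def create_user_v2(payload: dict) -> dict:
    validate_user(payload)
    return {"action": "create", **payload}

def update_user_v2(payload: dict) -> dict:
    validate_user(payload)
    return {"action": "update", **payload}
```

Both versions behave the same today; the difference shows up later, when every duplicated copy of a check has to be found and fixed separately.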
These facts might not seem alarming, especially if one reads Google’s 2024 DORA report, which noted a 25% increase in AI usage to quicken code reviews and documentation. The downside was a 7.2% decrease in delivery stability. These numbers might be small now, but what is happening is like making a copy of a copy of a copy: the integrity is lost.
It’s also like relying entirely on spellcheck to correct your spelling and grammar. While these are good tools to have, what will you do when you don’t have the fundamentals in your toolbox or find yourself in a spontaneous spelling bee?
Whitney Grace, March 7, 2025
Shocker! Students Use AI and Engage in Sex, Drugs, and Rock and Roll
March 5, 2025
The work of a real, live dinobaby. Sorry, no smart software involved. Whuff, whuff. That’s the sound of my swishing dino tail. Whuff.
I read “Surge in UK University Students Using AI to Complete Work.” The write up says:
The number of UK undergraduate students using artificial intelligence to help them complete their studies has surged over the past 12 months, raising questions about how universities assess their work. More than nine out of 10 students are now using AI in some form, compared with two-thirds a year ago…
I understand the need to create “real” news; however, the information did not surprise me. But the weird orange newspaper tosses in this observation:
Experts warned that the sheer speed of take-up of AI among undergraduates required universities to rapidly develop policies to give students clarity on acceptable uses of the technology.
As a purely practical matter, information has crossed my desk about professors cranking out papers, for peer review or for the ever-popular consumers of gray literature, that are not reproducible, contain data shaped like a kindergartener’s clay animal, and include links to pals who engage in citation boosting.
Plus, students who use Microsoft have a tough time escaping the often inept outputs of the Redmond crowd. A Google user is no longer certain whether information was created by a semi-reputable human or a cheese-crazed Google system. Emails write themselves. Message systems suggest emojis. Agentic AIs take care of mum’s and pop’s questions about life at the uni.
The topper for me was the inclusion in the cited article of this statement:
it was almost unheard of to see such rapid changes in student behavior…
Did this fellow miss drinking, drugs, staying up late, and sex on campus? How long did those innovations take to sweep through the student body?
I liked the note of optimism at the end of the write up. Check this:
Janice Kay, a director of a higher education consulting firm: “There is little evidence here that AI tools are being misused to cheat and play the system. [But] there are quite a lot of signs that will pose serious challenges for learners, teachers and institutions and these will need to be addressed as higher education transforms,” she added.
That’s encouraging. The academic research crowd does one thing, and I am to assume that students will do everything the old-fashioned way. When you figure out how to remove smart software from online systems and local installations of smart helpers, let me know. Fix up AI usage, and then turn one’s attention to changing student behavior in the drinking, sex, and drug departments too.
Good luck.
Stephen E Arnold, March 5, 2025
Azure Insights: A Useful and Amusing Resource
March 4, 2025
This blog post is the work of a real live dinobaby. At age 80, I will be heading to the big natural history museum in the sky. Until then, creative people surprise and delight me.
I read some of the posts in a service named “Daily Azure Sh$t.” You can find the content on Mastodon.social at this link. Reading through the litany of issues, glitches, and goofs had me in stitches. If you work with Microsoft Azure, you might not be reading the Mastodon stomps with a chortle. You might be a little worried.
The post states:
This account is obviously not affiliated with Microsoft.
My hunch is that, as with other Microsoft-skeptical blogs, some of the Softies’ legal eagles will take flight. Once they determine who is responsible for the humorous summary of technical antics, that individual may find that knocking off the service is one of the better ideas a professional might have. But until then, check out the newsy items.
As interesting are the comments on Hacker News. You will find these at this link.
For your delectation and elucidation, here are some of the comments from Hacker News:
- Osigurdson said: “Businesses are theoretically all about money but end up being driven by pride half the time.”
- Amarant said: “Azure was just utterly unable to deliver on anything they promised, thus the write-off on my part.”
- Abrookewood said: “Years ago, we migrated off Rackspace to Azure, but the database latency was diabolical. In the end, we got better performance by pointing the Azure web servers to the old database that was still in Rackspace than we did trying to use the database that was supposedly in the same data center.”
You may have a sense of humor different from mine. Enjoy either the laughing or the weeping.
Stephen E Arnold, March 4, 2025
Advice for Programmers: AI-Proof Your Career
February 24, 2025
Software engineer and blogger Sean Goedecke has some career advice for those who, like himself, are at risk of losing their programming jobs to AI. He counsels, "To Avoid Being Replaced by LLMs, Do What They Can’t." Logical enough. But what will these tools be able to do, and when will they be able to do it? That is the $25 million question. Goedecke has suggestions for the medium term, and the long term.
Right now, he advises, engineers should do three things: First, use the tools. They can help you gain an advantage in the field. And also, know-thine-enemy, perhaps? Next, learn how LLMs work so you can transition to the growing field of AI work. If you can’t beat them, join them, we suppose. Finally, climb the ranks posthaste, for those in junior roles will be the first to go. Ah yes, the weak get eaten. It is a multipronged approach.
For the medium term, Goedecke predicts which skills LLMs are likely to master first. Get good at the opposite of that. For example, ill-defined or poorly-scoped problems, solutions that are hard to verify, and projects with huge volumes of code are all very difficult for algorithms. For now.
In the long term, work yourself into a position of responsibility. There are few of those to go around. So, as noted above, start vigorously climbing over your colleagues now. Why? Because executives will always need at least one good human engineer they can trust. The post observes:
"A LLM strong enough to take responsibility – that is, to make commitments and be trusted by management – would have to be much, much more powerful than a strong engineer. Why? Because a LLM has no skin in the game, which means the normal mechanisms of trust can’t apply. Executives trust engineers because they know those engineers will experience unpleasant consequences if they get it wrong. Because the engineer is putting something on the line (e.g. their next bonus, or promotion, or in the extreme case being fired), the executive can believe in the strength of their commitment. A LLM has nothing to put on the line, so trust has to be built purely on their track record, which is harder and takes more time. In the long run, when almost every engineer has been replaced by LLMs, all companies will still have at least one engineer around to babysit the LLMs and to launder their promises and plans into human-legible commitments. Perhaps that engineer will eventually be replaced, if the LLMs are good enough. But they’ll be the last to go."
If you are lucky, it will be time to retire by then. For those young enough that this is unlikely, or for those who do not excel at the rat race, perhaps a career change is in order. What jobs are safe? Sadly, this dino-baby writer does not have the answer to that question.
Cynthia Murrell, February 24, 2025
Programming: Missing the Message
February 18, 2025
This blog post is the work of a real-live dinobaby. No smart software involved.
I read “New Junior Developers Can’t Actually Code.” The write up is interesting. I think an important point in the essay has been either overlooked or sidestepped. The main point of the article in my opinion is:
The foundational knowledge that used to come from struggling through problems is just… missing. We’re trading deep understanding for quick fixes, and while it feels great in the moment, we’re going to pay for this later.
I agree. The push to make creating software easy has shifted the craft toward what I like to describe as a TikTok mindset. The idea is that one can do a quick search and get an answer, preferably in less than 30 seconds. I know there are young people who spend time working through problems. We have one of these 12-year-olds in our family. The problem is that I am not sure how many other 12-year-olds have this baked-in desire to work through problems. From what I see and hear, teachers are concerned that students are in TikTok mode, not in “work through” mode, particularly in class.
The write up says:
Here’s the reality: The acceleration has begun and there’s nothing we can do about it. Open source models are taking over, and we’ll have AGI running in our pockets before we know it. But that doesn’t mean we have to let it make us worse developers. The future isn’t about whether we use AI—it’s about how we use it. And maybe, just maybe, we can find a way to combine the speed of AI with the depth of understanding that we need to learn.
I agree. Now the “however”:
- Mistakes with older software may not be easily remediated. I am a dinobaby. Dinobabies drop out or die. The time required to figure out why something isn’t working may not be available. That might be manageable for a small issue. For something larger, like a large bank’s systems, the problem can be a difficult one.
- People with modern skills may not know where to look for an answer. The reference materials, the snippets of code, or the knowledge about a specific programming language may not be available. There are many reasons for this “knowledge loss.” Once gone, it will take time and money to get the information, not a TikTok fix.
- The software itself may be a hack job. We did a project for Bell Labs at the time of the Judge Green break up. The regional manager running my project asked Alan and Howard (my two mainframe IBM CICS specialists), who were working with me on this minor job, if they wrote documentation. Howard said, “Ho ho ho. We just use Assembler and make it work.” The project manager said, “You can’t do that for this project.” Alan said, “How do you propose to get the service you want us to implement to work?” We got the job, and the system, almost 50 years later, is still in service. Okay, young wizard with smart software, fix up our work.
So what? We are reaching a point when the connection between essential computer science knowledge and the actual implementation of large-scale, mission-critical systems is being lost. Maybe AI can do what Alan, Howard, and I did to comply with Judge Green’s order relating to Baby Bell information exchange in the IBM environment.
I am skeptical. That’s a problem with the TikTok approach and smart software. If the model gets it wrong, there may be no fix. TikTok won’t be much help either. (I think Steve Gibson might agree with some of my assertions.) The write up does not flip over the rock. There is some shocking stuff beneath the gray, featureless surface.
Stephen E Arnold, February 18, 2025
Software Is Changing and Not for the Better
February 17, 2025
I read a short essay “We Are Destroying Software.” What struck me about the write up was the author’s word choice. For example, here’s a simple frequency count of the terms in the essay:
- The two most popular words in the essay are “destroying” and “software” with 15 occurrences each.
- The word “complex” is used three times.
- The words “systems,” “dependencies,” “reinventing,” “wheel,” and “work” are used twice each.
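For the curious, a tally like the one above takes only a few lines of Python. This is a generic sketch with placeholder text, not the author’s actual method.

```python
from collections import Counter
import re

# Placeholder standing in for the essay text; paste the real essay here.
essay = """We are destroying software by reinventing the wheel.
We are destroying software claiming that code comments are useless."""

# Lowercase the text, pull out the words, and count occurrences of each.
words = re.findall(r"[a-z']+", essay.lower())
counts = Counter(words)

# Print the most frequent terms, the way the tallies above were produced.
for word, count in counts.most_common(10):
    print(f"{word}: {count}")
```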
The structure of the essay is a series of declarative statements like this:
We are destroying software claiming that code comments are useless.
I quite like the essay.
Several observations:
- The author is passionate about his subject. “Destroy” is not a neutral word.
- “Complex” appears to be a particular concern. This makes sense. Some systems like those in use at the US Internal Revenue Service may be difficult, if not impossible, to remediate within available budgets and resources. Gradual deterioration seems to be a characteristic of many systems today, particularly when computer technology interfaces with workers.
- The notion of the “joy” of hacking comes across not as a solution to a problem but as the reason the author was motivated to capture his thoughts.
Interesting stuff. Tough to get around entropy, however. Who is the “we” by the way?
Stephen E Arnold, February 17, 2025
Sweden Embraces Books for Students: A Revolutionary Idea
February 14, 2025
Yep, another dinobaby emission. No smart software required.
Doom scrolling through the weekend’s newsfeeds, I spotted “Sweden Swapped Books for Computers in 2009. Now, They’re Spending Millions to Bring Them Back.” Sweden has some challenges. The problems with kinetic devices are not widely known in Harrod’s Creek, Kentucky, and probably not in other parts of the US. Malmo bears some passing resemblance to parts of urban enclaves like Detroit or Las Vegas. To make life interesting, the country has a keen awareness of everyone’s favorite leader in Russia.
The point of the write up is that Sweden’s shift from old-fashioned dinobaby books to those super wonderful computers and tablets has become unpalatable. The write up reports:
The Nordic country is reportedly exploring ways to reintroduce traditional methods of studying into its educational system.
The reason for the shift to books? The write up observes:
…experts noted that modern, experiential learning methods led to a significant decline in students’ essential skills, such as reading and writing.
Does this statement sound familiar?
Most teachers and parents complain that their kids have increasingly started relying on these devices instead of engaging in classrooms.
Several observations:
- Nothing worthwhile comes easy. Computers became a way to make learning easy. The downside is that, for most students, the negatives have lifelong consequences.
- Reversing gradual loss of the capability to concentrate is likely to be a hit-and-miss undertaking.
- Individuals without skills like reading become the new market for talking to a smartphone because writing is too much friction.
How will these individuals, regardless of country, be able to engage in lifelong learning? The answer is one that may make some people uncomfortable: They won’t. These individuals demonstrate behaviors not well matched to independent, informed thinking.
This dinobaby longs for a time when tiny dinobabies had books, not gizmos. I smell smoke. Oh, I think that’s just some informed mobile phone users burning books.
Stephen E Arnold, February 14, 2025