Advice for Programmers: AI-Proof Your Career

February 24, 2025

Software engineer and blogger Sean Goedecke has some career advice for those who, like himself, are at risk of losing their programming jobs to AI. He counsels, "To Avoid Being Replaced by LLMs, Do What They Can’t." Logical enough. But what will these tools be able to do, and when will they be able to do it? That is the $25 million question. Goedecke has suggestions for the medium term and the long term.

Right now, he advises, engineers should do three things: First, use the tools. They can help you gain an advantage in the field. And also, know-thine-enemy, perhaps? Next, learn how LLMs work so you can transition to the growing field of AI work. If you can’t beat them, join them, we suppose. Finally, climb the ranks posthaste, for those in junior roles will be the first to go. Ah yes, the weak get eaten. It is a multipronged approach.

For the medium term, Goedecke predicts which skills LLMs are likely to master first. Get good at the opposite of that. For example, ill-defined or poorly scoped problems, solutions that are hard to verify, and projects with huge volumes of code are all very difficult for algorithms. For now.

In the long term, work yourself into a position of responsibility. There are few of those to go around. So, as noted above, start vigorously climbing over your colleagues now. Why? Because executives will always need at least one good human engineer they can trust. The post observes:

"A LLM strong enough to take responsibility – that is, to make commitments and be trusted by management – would have to be much, much more powerful than a strong engineer. Why? Because a LLM has no skin in the game, which means the normal mechanisms of trust can’t apply. Executives trust engineers because they know those engineers will experience unpleasant consequences if they get it wrong. Because the engineer is putting something on the line (e.g. their next bonus, or promotion, or in the extreme case being fired), the executive can believe in the strength of their commitment. A LLM has nothing to put on the line, so trust has to be built purely on their track record, which is harder and takes more time. In the long run, when almost every engineer has been replaced by LLMs, all companies will still have at least one engineer around to babysit the LLMs and to launder their promises and plans into human-legible commitments. Perhaps that engineer will eventually be replaced, if the LLMs are good enough. But they’ll be the last to go."

If you are lucky, it will be time to retire by then. For those young enough that this is unlikely, or for those who do not excel at the rat race, perhaps a career change is in order. What jobs are safe? Sadly, this dino-baby writer does not have the answer to that question.

Cynthia Murrell, February 24, 2025

Programming: Missing the Message

February 18, 2025

This blog post is the work of a real-live dinobaby. No smart software involved.

I read “New Junior Developers Can’t Actually Code.” The write up is interesting. I think an important point in the essay has been either overlooked or sidestepped. The main point of the article in my opinion is:

The foundational knowledge that used to come from struggling through problems is just… missing. We’re trading deep understanding for quick fixes, and while it feels great in the moment, we’re going to pay for this later.

I agree. The process of creating software has shifted to what I like to describe as a TikTok mindset. The idea is that one can do a quick search and get an answer, preferably in less than 30 seconds. I know there are young people who spend time working through problems. We have one of these 12-year-olds in our family. The problem is that I am not sure how many other 12-year-olds have this baked-in desire to work through problems. From what I see and hear, teachers are concerned that students are in TikTok mode, not in “work through” mode, particularly in class.

The write up says:

Here’s the reality: The acceleration has begun and there’s nothing we can do about it. Open source models are taking over, and we’ll have AGI running in our pockets before we know it. But that doesn’t mean we have to let it make us worse developers. The future isn’t about whether we use AI—it’s about how we use it. And maybe, just maybe, we can find a way to combine the speed of AI with the depth of understanding that we need to learn.

I agree. Now the “however”:

  1. Mistakes with older software may not be easily remediated. I am a dinobaby. Dinobabies drop out or die. The time required to figure out why something isn’t working may not be available. That might be tolerable for a small issue. For something larger, like a big bank’s systems, the problem can be a difficult one.
  2. People with modern skills may not know where to look for an answer. The reference materials, the snippets of code, or the knowledge about a specific programming language may not be available. There are many reasons for this “knowledge loss.” Once gone, it will take time and money to get the information, not a TikTok fix.
  3. The software itself may be a hack job. We did a project for Bell Labs at the time of the Judge Green break up. The regional manager running my project asked Alan and Howard (my two mainframe IBM CICS specialists), the people working with me on this minor job, if they wrote documentation. Howard said, “Ho ho ho. We just use Assembler and make it work.” The project manager said, “You can’t do that for this project.” Alan said, “How do you propose to get the service you want us to implement to work?” We got the job, and the system is, almost 50 years later, still in service. Okay, young wizard with smart software, fix up our work.

So what? We are reaching a point at which essential computer science knowledge is becoming disconnected from actual implementation in large-scale, mission-critical systems. Maybe AI can do what Alan, Howard, and I did to comply with Judge Green’s order relating to Baby Bell information exchange in the IBM environment.

I am skeptical. That’s a problem with the TikTok approach and smart software. If the model gets it wrong, there may be no fix. TikTok won’t be much help either. (I think Steve Gibson might agree with some of my assertions.) The write up does not flip over the rock. There is some shocking stuff beneath the gray, featureless surface.

Stephen E Arnold, February 18, 2025

Software Is Changing and Not for the Better

February 17, 2025

I read a short essay “We Are Destroying Software.” What struck me about the write up was the author’s word choice. For example, here’s a simple frequency count of the terms in the essay:

  1. The two most popular words in the essay are “destroying” and “software” with 15 occurrences each.
  2. The word “complex” is used three times.
  3. The words “systems,” “dependencies,” “reinventing,” “wheel,” and “work” are used twice each.
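A tally like the one above can be reproduced in a few lines of Python. This is a minimal sketch, not the author's method; the sample text reuses the essay's one quoted line plus a made-up second sentence in the same pattern, so the counts are illustrative only.

```python
from collections import Counter
import re

def word_frequencies(text, top_n=5):
    # Lowercase, keep letters and apostrophes, then count occurrences
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(top_n)

# Illustrative text only: the first sentence is quoted in the essay;
# the second is invented here to give the counter something to count.
sample = ("We are destroying software claiming that code comments are useless. "
          "We are destroying software by ignoring its history.")
print(word_frequencies(sample, top_n=4))
```

Running a real count against the full essay text would recover the figures listed above.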

The structure of the essay is a series of declarative statements like this:

We are destroying software claiming that code comments are useless.

I quite like the essay.

Several observations:

  1. The author is passionate about his subject. “Destroy” is not a neutral word.
  2. “Complex” appears to be a particular concern. This makes sense. Some systems like those in use at the US Internal Revenue Service may be difficult, if not impossible, to remediate within available budgets and resources. Gradual deterioration seems to be a characteristic of many systems today, particularly when computer technology interfaces with workers.
  3. The notion of the “joy” of hacking comes across not as a solution to a problem, but as the reason the author was motivated to capture his thoughts.

Interesting stuff. Tough to get around entropy, however. Who is the “we” by the way?

Stephen E Arnold, February 17, 2025

Sweden Embraces Books for Students: A Revolutionary Idea

February 14, 2025

Yep, another dinobaby emission. No smart software required.

Doom scrolling through the weekend’s newsfeeds, I spotted “Sweden Swapped Books for Computers in 2009. Now, They’re Spending Millions to Bring Them Back.” Sweden has some challenges. The problems with kinetic devices are not widely known in Harrod’s Creek, Kentucky, and probably not in other parts of the US. Malmo bears some passing resemblance to parts of urban enclaves like Detroit or Las Vegas. To make life interesting, the country has a keen awareness of everyone’s favorite leader in Russia.

The point of the write up is that Sweden’s shift from old-fashioned dinobaby books to those super wonderful computers and tablets has become unpalatable. The write up reports:

The Nordic country is reportedly exploring ways to reintroduce traditional methods of studying into its educational system.

The reason for the shift to books? The write up observes:

…experts noted that modern, experiential learning methods led to a significant decline in students’ essential skills, such as reading and writing.

Does this statement sound familiar?

Most teachers and parents complain that their kids have increasingly started relying on these devices instead of engaging in classrooms.

Several observations:

  1. Nothing worthwhile comes easy. Computers became a way to make learning easy. The downside is that, for most students, the negatives have lifelong consequences.
  2. Reversing gradual loss of the capability to concentrate is likely to be a hit-and-miss undertaking.
  3. Individuals without skills like reading become the new market for talking to a smartphone because writing is too much friction.

How will these individuals, regardless of country, be able to engage in lifelong learning? The answer is one that may make some people uncomfortable: They won’t. These individuals demonstrate behaviors not well matched to independent, informed thinking.

This dinobaby longs for a time when tiny dinobabies had books, not gizmos. I smell smoke. Oh, I think that’s just some informed mobile phone users burning books.

Stephen E Arnold, February 14, 2025

Another Bad Apple? Is It This Shipment or a Degraded Orchard?

February 3, 2025

Yep, a dinobaby wrote this blog post. Replace me with a subscription service or a contract worker from Fiverr. See if I care.

I read “Siri Is Super Dumb and Getting Dumber.” Someone once told me Siri had some tenuous connection to the Stanford Research Institute. Then the name, and possibly some technology DNA, wafted to Cupertino. The juicy apple sauce company produced smart software. Someone demonstrated it to me by asking Siri to call a person named “Yankelovich.” That just did not work.

The write up explains that my experience was with “dumb” Siri and that the new Apple smart software is dumber. That is remarkable for a company mostly perceived as one of the greatest on earth: a big outfit with a number of mostly useful products, like the estimable science fiction headset and a system demanding that I log into Facetime, iMessage, and iCloud every time I use the computer even though I don’t use these features.

The write up says:

It’s just incredible how stupid Siri is about a subject matter of such popularity.

Stupid about a popular subject? Even the ever-estimable Google figured out a long time ago that one could type just about any spelling of Britney Spears into the search box, and the Google would spit out a nifty but superficial report about this famous person and role model for young people.

But Apple? The write up, from a really, truly objective observer of Apple, says:

New Siri — powered by Apple Intelligence™ with ChatGPT integration enabled — gets the answer completely but plausibly wrong, which is the worst way to get it wrong. It’s also inconsistently wrong — I tried the same question four times, and got a different answer, all of them wrong, each time. It’s a complete failure.

The write up points out:

It’s like Siri is a special-ed student permitted to take an exam with the help of a tutor who knows the correct answers, and still flunks.

Hmmm. Consistently wrong with variations of incorrectness — Do you want to log in to iCloud?

But the killer statement in the write up in my opinion is this one:

Misery loves company they say, so perhaps Apple should, as they’ve hinted since WWDC last June, partner with Google to add Gemini as another “world knowledge” partner to power — or is it weaken? — Apple Intelligence.

Several observations are warranted, even though I don’t use Apple mobile devices. (I do like the ruggedness of the Mac Air laptops. No, I don’t want to log into Apple Media Services or Facetime, thanks.) Here we go with my perceptions:

  1. Skip the Sam AI-Man stuff, the really macho Zuck stuff, and the Sundar & Prabhakar stuff. Go with Deepseek. (Someone in Beijing will think positively about the iPhone. Maybe?)
  2. Face up to the fact that Apple does reasonably good marketing. Those M1, M2, M3 chips in more flavors than the once-yummy Baskin-Robbins offered are easy for consumers to gobble up.
  3. Innovation is not just marketing. The company has to make what its marketers describe in words. That leap is not working in my opinion.

So where does that leave the write up, the Siri thing, and me? Free to select another vendor and to consider shorting Apple stock. The orchard is dropping fruit not fit for human consumption, though a few pieces can be converted to apple sauce. That’s a potential business. AI slop, not so much.

Stephen E Arnold, February 3, 2025

A Failure Retrospective

February 3, 2025

Every year has its tech failures; some of them join the zeitgeist as cultural phenomena, like Windows Vista, Windows Me, Apple’s Pippin game console, chatbots, etc. PC Mag runs down the flops in “Yikes: Breaking Down the 10 Biggest Tech Fails of 2024.” The list starts with Intel’s horrible year, with its booted CEO and poor chip performance. It follows up with the Salt Typhoon hack, which proved (not that we didn’t already know it from TikTok) that China is spying on every US citizen, with a focus on bigwigs.

National Public Data lost 272 million Social Security numbers to a hacker. That was a great summer day for the hacker, but the summer travel season became a nightmare when a faulty CrowdStrike kernel update grounded over 2,700 flights and practically locked down US borders. Microsoft’s Recall, an AI search tool that took snapshots of user activity so it could be recalled later, was a concern. What if passwords and other sensitive information were recorded?

The fabulous Internet Archive was hacked and taken down by a bad actor protesting the Israel-Gaza conflict. It makes us worry about preserving Internet and other important media history. Rabbit and Humane released AI-powered hardware that was supposed to be a hands-free way to use a digital assistant, but both failed. JuiceBox ended software support for its EV car chargers, while Scarlett Johansson’s voice was stolen by OpenAI for its Voice Mode feature. She sued.

The worst of the worst is this:

“Days after he announced plans to acquire Twitter in 2022, Elon Musk argued that the platform needed to be “politically neutral” in order for it to “deserve public trust.” This approach, he said, “effectively means upsetting the far right and the far left equally.” In March 2024, he also pledged to not donate to either US presidential candidate, but by July, he’d changed his tune dramatically, swapping neutrality for MAGA hats. “If we want to preserve freedom and a meritocracy in America, then Trump must win,” Musk tweeted in September. He seized the @America X handle to promote Trump, donated millions to his campaign, shared doctored and misleading clips of VP Kamala Harris, and is now working closely with the president-elect on an effort to cut government spending, which is most certainly a conflict of interest given his government contracts. Some have even suggested that he become Speaker of the House since you don’t have to be a member of Congress to hold that position. The shift sent many X users to alternatives like BlueSky, Threads, and Mastodon in the days after the US election.”

It doesn’t matter what Musk’s political beliefs are. He has no right to participate in politics.

Whitney Grace, February 3, 2025

AI Smart, Humans Dumb When It Comes to Circuits

February 3, 2025

Anyone who knows much about machine learning knows we don’t really understand how AI comes to its conclusions. Nevertheless, computer scientists find algorithms do some things quite nicely. For example, ZME Science reports, "AI Designs Computer Chips We Can’t Understand—But They Work Really Well." A team from Princeton University and IIT Madras decided to flip the process of chip design. Traditionally, human engineers modify existing patterns to achieve desired results. The task is difficult and time-consuming. Instead, these researchers fed their AI the end requirements and told it to take it from there. They call this an "inverse design" method. The team says the resulting chips work great! They just don’t really know how or why. Writer Mihai Andrei explains:

"Whereas the previous method was bottom-up, the new approach is top-down. You start by thinking about what kind of properties you want and then figure out how you can do it. The researchers trained convolutional neural networks (CNNs) — a type of AI model — to understand the complex relationship between a circuit’s geometry and its electromagnetic behavior. These models can predict how a proposed design will perform, often operating on a completely different type of design than what we’re used to. … Perhaps the most exciting part is the new types of designs it came up with."
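The top-down loop described above can be sketched in miniature. This is not the Princeton and IIT Madras code; it is a hedged illustration in which a fixed random map stands in for a trained CNN surrogate, and gradient descent adjusts a hypothetical geometry vector until the surrogate's predicted response matches a chosen target.

```python
import numpy as np

# Toy stand-in for a trained surrogate model. A real pipeline would train
# a convolutional network on simulated electromagnetic responses of many
# candidate geometries; this fixed random map is illustrative only.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))  # hypothetical "learned" weights

def surrogate(geometry):
    # Predicted response of a geometry (8 design parameters -> 4 outputs)
    return np.tanh(geometry @ W)

# Top-down ("inverse") design: specify the desired response first...
target = np.array([0.9, -0.2, 0.4, 0.1])

# ...then adjust the geometry by gradient descent until the surrogate's
# prediction matches it, rather than hand-tweaking an existing layout.
geometry = np.zeros(8)
lr = 0.02
for _ in range(20_000):
    pred = surrogate(geometry)
    # Gradient of 0.5 * ||pred - target||^2 through tanh(geometry @ W)
    grad = W @ ((pred - target) * (1.0 - pred ** 2))
    geometry -= lr * grad
```

The resulting `geometry` is whatever satisfies the target, intuitive or not, which is exactly why the real designs strike the researchers as randomly shaped.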

Yes, exciting. That is one word for it. Lead researcher Kaushik Sengupta notes:

"’We are coming up with structures that are complex and look randomly shaped, and when connected with circuits, they create previously unachievable performance,’ says Sengupta. The designs were unintuitive and very different than those made by the human mind. Yet, they frequently offered significant improvements."

But at what cost? We may never know. It is bad enough that health care systems already use opaque algorithms, with all their flaws, to render life-and-death decisions. Just wait until these chips we cannot understand underpin those calculations. New world, new trade-offs for a world with dumb humans.

Cynthia Murrell, February 3, 2025

Two Rules for Software. All Software If You Can Believe It

January 31, 2025

Did you know that there are two rules that dictate how all software is written? No, we didn’t either. FJ van Wingerde of the Ask The User blog states and explains the rules in his post “The Two Rules Of Software Creation From Which Every Problem Derives.” After a bunch of jib jab about the failures of different codebases, van Wingerde states the rules:

“It’s the two rules that actually are behind every statement in the agile manifesto. The manifesto unfortunately doesn’t name them really; the people behind it were so steeped in the problems of software delivery—and what they thought would fix it—that they posited their statements without saying why each of these things are necessary to deliver good software. (Unfortunately, necessary but not enough for success, but that we found out in the next decades.) They are [1] Humans cannot accurately describe what they want out of a software system until it exists. and [2] Humans cannot accurately predict how long any software effort will take beyond four weeks. And after 2 weeks it is already dicey.”

The first rule is arguably true of all human activities, though the inability to accurately describe the problem may be peculiar to software. Humans know they have a problem, but they don’t have a solution to fix it. The smart humans figure out how to solve the problem and learn how to describe it with greater accuracy.

As for rule number two, is project management and weekly maintenance on software all a lucky guess then? Perhaps effort changes daily, and that is what justifies paying software developers. Then again, someone needs to keep the systems running. Tech people keep businesses running, not to mention the entire world.

If software development has only these two rules, we now know why developers cannot provide time estimates or assurances that their software works as leadership trained as accountants and lawyers expect. Rest easy. Software is hopefully good enough, and advertising can cover the costs.

Whitney Grace, January 31, 2025

Ah, the Warmth of the Old, Friendly Internet. For Real?

January 30, 2025

I never thought I’d look back at the Internet of yesteryear nostalgically. I hated the sound of dial-up, and the instant messaging sounds were annoying. Also, AOL had the knack of clogging up machines with browsing history, making everything slow. Did I mention YouTube wasn’t around? There are some things that were better in the past, including parts of the Internet, but not all of it.

We also like to think that the Internet was “safer” and didn’t have predatory content. Wrong! Since the Internet’s inception, parents were worried about their children being the victims of online predators. Back then it was easier to remain anonymous, however. El País agrees that the Internet was just as bad as it is today: “‘The internet Hasn’t Made Us Bad, We Were Already Like That’: The Mistake Of Yearning For The ‘Friendly’ Online World Of 20 Years Ago."

It’s strange to see artists using Y2K-era technology for art pieces and throwbacks. It’s a big eye-opener to aging Millennials, but it also places these items on par with the nostalgia of all past eras. All generations love the stuff from their youth and proclaim it to be superior. As today’s youth culture and even the middle-aged obsess over retro gear, a new slang term has arisen: “cozy tech.”

“‘Cozy tech’ is the label that groups together content about users sipping from a steaming cup, browsing leisurely or playing nice, simple video games on devices with smooth, ergonomic designs. It’s a more powerful image than it seems because it conveys something we lost at some point in the last decade: a sense of control; the idea that it is possible to enjoy technology in peace again.”

They’re conflating the idea with reading a good book or listening to music on a record player. These “cozy tech” people are forgetting about the dangers of chatrooms or posting too much information on the Internet. Dare we bring up Omegle without drifting down channels of pornography?

Check out this statement:

“Mayte Gómez concludes: “We must stop this reactionary thinking and this fear of technology that arises from the idea that the internet has made us bad. That is not true: we were already like that. If the internet is unfriendly it is because we are becoming less so. We cannot perpetuate the idea that machines are entities with a will of their own; we must take responsibility for what happens on the internet.””

Sorry, Mayte, I disagree. Humans have always been unfriendly. We now have a better record of it.

Whitney Grace, January 30, 2025

Sonus, What Is That Painful Sound I Hear?

January 21, 2025

Sonos CEO Swap: Are Tech Products Only As Good As Their Apps?

Lawyers and accountants leading tech firms, please, take note: The apps customers use to manage your products actually matter. Engadget reports, “Sonos CEO Patrick Spence Falls on his Sword After Horrible App Launch.” Reporter Lawrence Bonk writes:

“Sonos CEO Patrick Spence is stepping down from the company after eight years on the job, according to reporting by Bloomberg. This follows last year’s disastrous app launch, in which a redesign was missing core features and was broken in nearly every major way. The company has tasked Tom Conrad to steer the ship as interim CEO. Conrad is a current member of the Sonos board, but was a co-founder of Pandora, VP at Snap and product chief at, wait for it, the short-lived video streaming platform Quibi. He also reportedly has a Sonos tattoo. The board has hired a firm to find a new long-term leader.”

Conrad told employees that “we” let people down with the terrible app. And no wonder. Bonk explains:

“The decision to swap leadership comes after months of turmoil at the company. It rolled out a mobile app back in May that was absolutely rife with bugs and missing key features like alarms and sleep timers. Some customers even complained that entire speaker systems would no longer work after updating to the new app. It was a whole thing.”

Indeed. And despite efforts to rekindle customer trust, the company is paying the price of its blunder. Its stock price has fallen about 13 percent, revenue tanked 16 percent in the fiscal fourth quarter, and it has laid off more than 100 workers since August. The chief product officer is also leaving the firm. Will the CEO swap help Sonos recover? As he takes the helm, Conrad vows a return to basics. At the same time, he wants to expand Sonos’ products. Interesting combination. Meanwhile, the search continues for a more permanent replacement.

Cynthia Murrell, January 21, 2025
