Programming: Missing the Message

February 18, 2025

This blog post is the work of a real-live dinobaby. No smart software involved.

I read “New Junior Developers Can’t Actually Code.” The write up is interesting. I think an important point in the essay has been either overlooked or sidestepped. The main point of the article, in my opinion, is:

The foundational knowledge that used to come from struggling through problems is just… missing. We’re trading deep understanding for quick fixes, and while it feels great in the moment, we’re going to pay for this later.

I agree. The approach to creating software has shifted to what I like to describe as a TikTok mindset. The idea is that one can do a quick search and get an answer, preferably in less than 30 seconds. I know there are young people who spend time working through problems. We have one of these 12-year-olds in our family. The problem is that I am not sure how many other 12-year-olds have this baked-in desire to work through problems. From what I see and hear, teachers are concerned that students are in TikTok mode, not in “work through” mode, particularly in class.

The write up says:

Here’s the reality: The acceleration has begun and there’s nothing we can do about it. Open source models are taking over, and we’ll have AGI running in our pockets before we know it. But that doesn’t mean we have to let it make us worse developers. The future isn’t about whether we use AI—it’s about how we use it. And maybe, just maybe, we can find a way to combine the speed of AI with the depth of understanding that we need to learn.

I agree. Now the “however”:

  1. Mistakes with older software may not be easily remediated. I am a dinobaby. Dinobabies drop out or die. The time required to figure out why something isn’t working may not be available. That might be a manageable problem for a small issue. For something larger, like a large bank, it can be a difficult one.
  2. People with modern skills may not know where to look for an answer. The reference materials, the snippets of code, or the knowledge about a specific programming language may not be available. There are many reasons for this “knowledge loss.” Once gone, it will take time and money to get the information, not a TikTok fix.
  3. The software itself may be a hack job. We did a project for Bell Labs at the time of the Judge Greene break up. The regional manager running my project asked the people working with me on this minor job whether Alan and Howard (my two mainframe IBM CICS specialists) wrote documentation. Howard said, “Ho ho ho. We just use Assembler and make it work.” The project manager said, “You can’t do that for this project.” Alan said, “How do you propose to get the service you want us to implement to work?” We got the job, and the system, almost 50 years later, is still in service. Okay, young wizard with smart software, fix up our work.

So what? We are reaching a point at which the connection between essential computer science knowledge and actual implementation in large-scale, mission-critical systems is being lost. Maybe AI can do what Alan, Howard, and I did to comply with Judge Greene’s order relating to Baby Bell information exchange in the IBM environment.

I am skeptical. That’s a problem with the TikTok approach and smart software. If the model gets it wrong, there may be no fix. TikTok won’t be much help either. (I think Steve Gibson might agree with some of my assertions.) The write up does not flip over the rock. There is some shocking stuff beneath the gray, featureless surface.

Stephen E Arnold, February 18, 2025

Hackers and AI: Of Course, No Hacker Would Use Smart Software

February 18, 2025

This blog post is the work of a real-live dinobaby. Believe me, after reading the post, you know that smart software was not involved.

Hackers would never ever use smart software. I mean those clever stealer distributors preying on get-rich-quick stolen credit card users. Nope. Those people using online games to lure kiddies and people with kiddie-level intelligence into providing their parents’ credit card data? Nope and double nope. Those people in computer science classes in Azerbaijan learning how to identify security vulnerabilities while working as contractors for criminals? Nope. Never. Are you crazy? These bad actors know that smart software is most appropriate for Mother Teresa-type activities and creating GoFundMe pages to help those harmed by natural disasters, bad luck, or not having a job except streaming.

I mean everyone knows that bad actors respect the firms providing smart software. It is common knowledge that bad actors play fair. Why would a criminal use smart software to create more efficacious malware payloads, compromise Web sites, or defeat security to trash the data on Data.gov? Ooops. Bad example. Data.gov has been changed.

I read “Google Says Hackers Abuse Gemini AI to Empower Their Attacks.” That’s the spirit. Bad actors are using smart software. The value of the systems is evident to criminals. The write up says:

Multiple state-sponsored groups are experimenting with the AI-powered Gemini assistant from Google to increase productivity and to conduct research on potential infrastructure for attacks or for reconnaissance on targets. Google’s Threat Intelligence Group (GTIG) detected government-linked advanced persistent threat (APT) groups using Gemini primarily for productivity gains rather than to develop or conduct novel AI-enabled cyberattacks that can bypass traditional defenses. Threat actors have been trying to leverage AI tools for their attack purposes to various degrees of success as these utilities can at least shorten the preparation period. Google has identified Gemini activity associated with APT groups from more than 20 countries but the most prominent ones were from Iran and China.

Stop the real-time news stream! Who could have imagined that bad actors would be interested in systems and methods that would make their behaviors more effective and efficient?

When Microsoft rolled out its marketing gut punch aimed squarely at Googzilla, the big online advertising beast responded. The Code Red and Code Yellow lights flashed. Senior managers held meetings after foosball games and hangouts at Philz Coffee.

Did Google management envision the reality of bad actors using Gemini? No. It appears that the Google acquisition Mandiant figured it out. Eventually — it’s been two years and counting since Microsoft caused the AI tsunami — the Eureka! moment arrived.

The write up reports:

Google also mentions having observed cases where the threat actors attempted to use public jailbreaks against Gemini or rephrasing their prompts to bypass the platform’s security measures. These attempts were reportedly unsuccessful.

Of course they were. Do US banks tell their customers when check fraud or other cyber dishonesty relieves people of their funds? Sure they don’t. Therefore, it is only the schlubs who are unfortunate enough to have their breaches disclosed. Then the cyber security outfits leap into action and issue fixes. Everything in the cyber security world is buttoned up and buttoned down. Absolutely.

Several observations:

  1. How has free access without any type of vetting worked out? The question is directed at the big tech outfits who are beavering away in this technology blast zone.
  2. What are the providers of free smart software doing to make certain that the method can only produce seventh-grade students’ essays about the transcontinental railroad?
  3. What exactly is a user of free smart software supposed to do to rein in the actions of nation states with which most Americans are somewhat familiar? I mean there is a Chinese restaurant near Harrod’s Creek. Am I to discuss the matter with the waitress?

Why worry? That worked for Mad Magazine until it didn’t. Hey, Google, thanks for the information. Who could have known smart software can be used for nefarious purposes? (Obviously not Google.)

Stephen E Arnold, February 18, 2025

Unified Data Across Governments? How Useful for a Non-Participating Country?

February 18, 2025

A dinobaby post. No smart software involved.

I spoke with a person whom I have known for a long time. The individual lives and works in Washington, DC. He mentioned “disappeared data.” I did some poking around and, sure enough, certain US government public facing information had been “disappeared.” Interesting. For a short period of time I made a few contributions to what was FirstGov.gov, now USA.gov.

For those who don’t remember or don’t know about President Clinton’s Year 2000 initiative, the idea was interesting. At that time, access to public-facing information on US government servers was via the Web search engines. In order to locate a tax form, one would navigate to an available search system. On Google one would just slap in “IRS” or “IRS” plus the form number.

Most of the US government public-facing Web sites were reasonably straightforward. Others were fairly difficult to use. The US Marine Corps’ Web site had poor response times. I think it was hosted on something called Server Beach, and the would-be recruit would have to wait for the recruitment station data to appear. The Web page worked, but it was slow.

President Clinton, or someone in his administration, wanted the problem fixed with a search system for US government public-facing content. After a bit of work, the system went online in September 2000. The system morphed into a US government portal, a bit like the Yahoo.com portal model.

I thought about the information in “Oracle’s Ellison Calls for Governments to Unify Data to Feed AI.” The write up reports:

Oracle Corp.’s co-founder and chairman Larry Ellison said governments should consolidate all national data for consumption by artificial intelligence models, calling this step the “missing link” for them to take full advantage of the technology. Fragmented sets of data about a population’s health, agriculture, infrastructure, procurement and borders should be unified into a single, secure database that can be accessed by AI models…

Several questions arise; for instance:

  1. What country or company provides the technology?
  2. Who manages what data are added and what data are deleted?
  3. What are the rules of access?
  4. What about public data which are not available for public access; for example, the “disappeared” data from US government Web sites?
  5. What happens to commercial or quasi-commercial government units which repackage public data and sell it at a hefty markup?

Based on my brief brush with the original Clinton project, I think the idea is interesting. But I have one other question in mind: What happens when non-participating countries get access to the aggregated public-facing data? Digital information is a tricky resource to secure. In fact, once data are digitized and connected to a network, they are fair game. Someone, somewhere will figure out how to access, obtain, exfiltrate, and benefit from aggregated data.

The idea is, in my opinion, a bit of grandstanding like Google’s quantum supremacy claims. But US high technology wizards are ready and willing to think big thoughts and take even bigger actions. We live in interesting times, but I am delighted that I am old.

Stephen E Arnold, February 18, 2025

A Vulnerability Bigger Than SolarWinds? Yes.

February 18, 2025

No smart software. Just a dinobaby doing his thing.

I read an interesting article from WatchTowr Labs. (The spelling is what the company uses, so the url is labs.watchtowr.com.) On February 4, 2025, the company reported that it discovered what one can think of as orphaned or abandoned-but-still-alive Amazon S3 “buckets.” The discussion of the firm’s research and what it revealed is presented in “8 Million Requests Later, We Made The SolarWinds Supply Chain Attack Look Amateur.”

The company explains that it was curious whether what it calls “abandoned infrastructure” on a cloud platform might yield interesting information relevant to security. We worked through the article and created what in the good old days would have been called an abstract for a database like ABI/INFORM. Here’s our summary:

The article from WatchTowr Labs describes a large-scale experiment in which researchers identified and took control of about 150 abandoned Amazon Web Services S3 buckets previously used by various organizations, including governments, militaries, and corporations. Over two months, these buckets received more than eight million requests for software updates, virtual machine images, and sensitive files, exposing a significant vulnerability. WatchTowr explains that bad actors could have injected malicious content. Abandoned infrastructure could be used for supply chain attacks like SolarWinds. Had this happened, the impact would have been significant.
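
The core exposure is easy to probe from the defender’s side. Here is a minimal sketch, my own and not WatchTowr’s tooling, that checks whether S3 bucket names still referenced in old installers or configuration files remain registered. The bucket names are hypothetical placeholders; a 404 response means the name is unclaimed, so anyone could re-register it and serve content to stale clients.

    # Probe S3 bucket names harvested from old code, configs, or installers.
    # The names below are hypothetical placeholders.
    import requests

    REFERENCED_BUCKETS = ["example-old-update-server", "example-vm-images"]

    def bucket_status(name: str) -> str:
        # S3 answers for any bucket name at the virtual-hosted-style URL.
        resp = requests.get(f"https://{name}.s3.amazonaws.com/", timeout=10)
        if resp.status_code == 404:
            return "unclaimed: anyone can re-register this name (takeover risk)"
        if resp.status_code == 403:
            return "exists, but access is denied"
        if resp.status_code == 200:
            return "exists and is publicly listable"
        return f"unexpected status {resp.status_code}"

    for bucket in REFERENCED_BUCKETS:
        print(bucket, "->", bucket_status(bucket))

A sweep like this over the bucket names buried in deployment scripts and update URLs is the cheap first pass that, if WatchTowr’s findings are representative, many organizations are skipping.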

Several observations are warranted:

  1. Does Amazon Web Services have administrative functions to identify orphaned “buckets” and take action to minimize the attack surface?
  2. With companies’ information technology teams abandoning infrastructure, how will these organizations determine whether other infrastructure vulnerabilities exist and remediate them?
  3. What can cyber security vendors’ software and systems do to identify and neutralize these “shoot yourself in the foot” vulnerabilities?

One of the most compelling statements in the WatchTowr article, in my opinion, is:

… we’d demonstrated just how held-together-by-string the Internet is and at the same time point out the reality that we as an industry seem so excited to demonstrate skills that would allow us to defend civilization from a Neo-from-the-Matrix-tier attacker – while a metaphorical drooling-kid-with-a-fork-tier attacker, in reality, has the power to undermine the world.

Is WatchTowr correct? With government and commercial organizations leaving S3 buckets unattended, perhaps WatchTowr should have included gum, duct tape, and grade-school white glue in its description of the Internet.

Stephen E Arnold, February 18, 2025

Real AI News? Yes, with Fact Checking, Original Research, and Ethics Too

February 17, 2025

This blog post is the work of a real-live dinobaby. No smart software involved.

This is “real” news… if the story is based on fact checking, original research, and those journalistic ethics pontifications. Let’s assume that these conditions of old-fashioned journalism apply. This means that the story “New York Times Goes All-In on Internal AI Tools” pinpoints a small shift in how “real” news will be produced.

The write up asserts:

The New York Times is greenlighting the use of AI for its product and editorial staff, saying that internal tools could eventually write social copy, SEO headlines, and some code.

Yep, some. There’s ground truth (that’s an old-fashioned journalism concept) in blue-chip consulting. The big money maker is what’s called scope creep. Stated simply, one starts small, with a test or a trial. Then, if the sky does not fall as quickly as some companies’ revenue, the small gets a bit larger. You check to make sure the moon is in the sky and the revenues are not falling, hopefully as quickly as before. Then you expand. At each step there are meetings, presentations, analyses, and group reassurances from others in the deciders category. Then, like magic, the small project is the rough equivalent of a nuclear-powered aircraft carrier.

Ah, scope creep.

Understate what one is trying. Watch it. Scale it. End up with an aircraft carrier scale project. Yes, it is happening at an outfit like the New York Times if the cited article is accurate.

What scope creep stage setting appears in the write up? Let’s look:

  1. Staff will be trained. Your job, one assumes, is safe. (Ho ho ho)
  2. AI will help uncover “the truth.” (Absolutely)
  3. More people will benefit. (Don’t forget the stakeholders, please.)

What’s the write up presenting as actual factual?

The world’s greatest newspaper will embrace hallucinating technology, but only a little bit.

Scope creep begins, and it won’t change a thing, but that information will appear once the cost savings, revenue, and profit data become available at the speed of newspaper decision making.

Stephen E Arnold, February 17, 2025

Sam Altman: The Waffling Man

February 17, 2025

Another dinobaby commentary. No smart software required.

Chaos is good. Flexibility is good. AI is good. Sam Altman, whom I reference as “Sam AI-Man,” has some explaining to do. OpenAI is a consumer of cash. The Chinese PR push suggests that Deepseek has found a way to do OpenAI-type computing the way Shein and Temu do gym clothes.

I noted “Sam Altman Admits OpenAI Was On the Wrong Side of History in Open Source Debate.” The write up does not come out and state, “OpenAI was stupid when it embraced proprietary software’s approach” to meeting user needs. To be frank, Sam AI-Man was not particularly clear either.

The write up says that Sam AI-Man said:

“Yes, we are discussing [releasing model weights],” Altman wrote. “I personally think we have been on the wrong side of history here and need to figure out a different open source strategy.” He noted that not everyone at OpenAI shares his view and it isn’t the company’s current highest priority. The statement represents a remarkable departure from OpenAI’s increasingly proprietary approach in recent years, which has drawn criticism from some AI researchers and former allies, most notably Elon Musk, who is suing the company for allegedly betraying its original open source mission.

My view is that Sam AI-Man wants to emulate other super techno leaders and get whatever he wants. Not surprisingly, other super techno leaders have their own ideas. I would suggest that the objective of these AI jousts is power, control, and money.

“What about the users?” a faint voice asks. “And the investors?” another bold soul queries.

Who?

Stephen E Arnold, February 17, 2025

Software Is Changing and Not for the Better

February 17, 2025

I read a short essay “We Are Destroying Software.” What struck me about the write up was the author’s word choice. For example, here’s a simple frequency count of the terms in the essay:

  1. The two most popular words in the essay are “destroying” and “software” with 15 occurrences each.
  2. The word “complex” is used three times.
  3. The words “systems,” “dependencies,” “reinventing,” “wheel,” and “work” are used twice each.
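
For the curious, a tally like the one above takes only a few lines. Here is a minimal sketch, assuming the essay text has been saved to a local file named essay.txt (a hypothetical path):

    # Count word frequencies in the essay, the way the tallies above were made.
    import re
    from collections import Counter

    with open("essay.txt", encoding="utf-8") as f:
        words = re.findall(r"[a-z'’]+", f.read().lower())

    # Print the most common terms, e.g. “destroying” and “software.”
    for word, count in Counter(words).most_common(10):
        print(f"{word}: {count}")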

The structure of the essay is a series of declarative statements like this:

We are destroying software claiming that code comments are useless.

I quite like the essay.

Several observations:

  1. The author is passionate about his subject. “Destroy” is not a neutral word.
  2. “Complex” appears to be a particular concern. This makes sense. Some systems like those in use at the US Internal Revenue Service may be difficult, if not impossible, to remediate within available budgets and resources. Gradual deterioration seems to be a characteristic of many systems today, particularly when computer technology interfaces with workers.
  3. The notion of the “joy” of hacking comes across, not as a solution to a problem, but as the reason the author was motivated to capture his thoughts.

Interesting stuff. Tough to get around entropy, however. Who is the “we” by the way?

Stephen E Arnold, February 17, 2025

IBM Faces DOGE Questions?

February 17, 2025

Simon Willison reminded us of the famous IBM internal training document that reads: “A Computer Can Never Be Held Accountable.” The document is also relevant for AI algorithms. Unfortunately, the document has a mysterious history, and the IBM Corporate Archives don’t have a copy of the presentation. A Twitter user with the name @bumblebike posted the original image. He said he found it when he went through his father’s papers. Unfortunately, the presentation with the legendary statement was destroyed in a 2019 flood.

I believe the image was first shared online in this tweet by @bumblebike in February 2017. Here’s where they confirm it was from 1979 internal training.

Here’s another tweet from @bumblebike from December 2021 about the flood:

Unfortunately destroyed by flood in 2019 with most of my things. Inquired at the retirees club zoom last week, but there’s almost no one the right age left. Not sure where else to ask.

We don’t need the actual IBM document to know that IBM hasn’t done well when it comes to search. IBM, like most firms, tried and sort of fizzled. (Remember Data Fountain or CLEVER?) IBM also moved into content management. Yep, the semi-Xerox, semi-information thing. But the good news is that a time sharing solution called Watson is doing pretty well. It’s not winning Jeopardy!, but it is chugging along.

Now IBM professionals in DC have to answer the DOGE nerd squad’s questions? Why not give OpenAI a whirl? The old Jeopardy! winner is kicking back. DOGE wants to know.

Whitney Grace, February 17, 2025

Sweden Embraces Books for Students: A Revolutionary Idea

February 14, 2025

Yep, another dinobaby emission. No smart software required.

Doom scrolling through the weekend’s newsfeeds, I spotted “Sweden Swapped Books for Computers in 2009. Now, They’re Spending Millions to Bring Them Back.” Sweden has some challenges. The problems with kinetic devices are not widely known in Harrod’s Creek, Kentucky, and probably not in other parts of the US. Malmo bears some passing resemblance to parts of urban enclaves like Detroit or Las Vegas. To make life interesting, the country has a keen awareness of everyone’s favorite leader in Russia.

The point of the write up is that Sweden’s shift from old-fashioned dinobaby books to those super wonderful computers and tablets has become unpalatable. The write up reports:

The Nordic country is reportedly exploring ways to reintroduce traditional methods of studying into its educational system.

The reason for the shift to books? The write up observes:

…experts noted that modern, experiential learning methods led to a significant decline in students’ essential skills, such as reading and writing.

Does this statement sound familiar?

Most teachers and parents complain that their kids have increasingly started relying on these devices instead of engaging in classrooms.

Several observations:

  1. Nothing worthwhile comes easy. Computers became a way to make learning easy. The downside is that, for most students, the negatives have lifelong consequences.
  2. Reversing the gradual loss of the capability to concentrate is likely to be a hit-and-miss undertaking.
  3. Individuals without skills like reading become the new market for talking to a smartphone because writing is too much friction.

How will these individuals, regardless of country, be able to engage in lifelong learning? The answer is one that may make some people uncomfortable: They won’t. These individuals demonstrate behaviors not well matched to independent, informed thinking.

This dinobaby longs for a time when tiny dinobabies had books, not gizmos. I smell smoke. Oh, I think that’s just some informed mobile phone users burning books.

Stephen E Arnold, February 14, 2025

Who Knew? AI Makes Learning Less Fun

February 14, 2025

Bill Gates was recently on the Jimmy Fallon show to promote his memoir. In the interview, Gates shared his views on AI, stating that AI will replace a lot of jobs. Fallon hoped that TV show hosts wouldn’t be replaced, and he probably doesn’t have anything to worry about. Why? Because he’s entertaining and interesting.

Humans love to be entertained, but AI just doesn’t have the capability of pulling it off. Media and Learning shared one teacher’s experience with AI-generated learning videos: “When AI Took Over My Teaching Videos, Students Enjoyed Them Less But Learned The Same.” Media and Learning conducted an experiment to see whether students would learn more from teacher-made or AI-generated videos. Here’s how the experiment went:

“We used generative AI tools to generate teaching videos on four different production management concepts and compared their effectiveness versus human-made videos on the same topics. While the human-made videos took several days to make, the analogous AI videos were completed in a few hours. Evidently, generative AI tools can speed up video production by an order of magnitude.”

The AI videos used ChatGPT-written video scripts, MidJourney for illustrations, and HeyGen for teacher avatars. The teacher-made videos were made in the traditional manner of teachers writing scripts, recording themselves, and editing the video in Adobe Premiere.

When it came to students retaining and testing on the educational content, both sets of videos yielded the same results. Students, however, enjoyed the teacher-made videos more than the AI ones. Why?

“The reduced enjoyment of AI-generated videos may stem from the absence of a personal connection and the nuanced communication styles that human educators naturally incorporate. Such interpersonal elements may not directly impact test scores but contribute to student engagement and motivation, which are quintessential foundations for continued studying and learning.”

Media and Learning suggests that AI could be used to complement instruction time, freeing teachers up to focus on personalized instruction. We’ll see what happens as AI becomes more competent, but for now we can rest easy knowing that human engagement is more interesting than algorithms. Or at least Jimmy Fallon can.

Whitney Grace, February 14, 2025
