AI Worriers, Play Some Bing Crosby Music

February 24, 2025

This blog post is the work of a real-live dinobaby. No smart software involved.

The Guardian newspaper ran an interesting write up about smart software and the inevitability of complaining to stop it in its tracks. “I Met the Godfathers of AI in Paris – Here’s What They Told Me to Really Worry About.” I am not sure what’s being taught in British schools, but the headline features the author, a split infinitive, and the infamous “ending a sentence with a preposition” fillip. Very sporty.

The write up includes quotes from the godfathers:

“It’s not today’s AI we need to worry about, it’s next year’s,” Tegmark told me. “It’s like if you were interviewing me in 1942, and you asked me: ‘Why aren’t people worried about a nuclear arms race?’ Except they think they are in an arms race, but it’s actually a suicide race.”

I am not sure what psychologists call worrying about the future. Bing Crosby took a different approach. He sang, “Don’t worry about tomorrow” and offered:

Why should we cling to some old faded thing
That used to be

Bing looked beyond the present but did not seem unduly worried. The Guardian is a bit more uptight.

The write up says:

The idea that we, on Earth, might lose control of an AGI that then turns on us sounds like science fiction – but is it really so far-fetched considering the exponential growth of AI development? As Bengio [an AI godfather, according to the Guardian] pointed out, some of the most advanced AI models have already attempted to deceive human programmers during testing, both in pursuit of their designated objectives and to escape being deleted or replaced with an update.

I circled this passage:

It seems as if we have a shrinking opportunity to lay down the incentives for companies to create the kind of AI that actually benefits our individual and collective lives: sustainable, inclusive, democracy-compatible, controlled. And beyond regulation, “to make sure there is a culture of participation embedded in AI development in general”, as Eloïse Gabadou, a consultant to the OECD on technology and democracy, put it.

Okay, so what’s the fix? Who implements the fix? Will the fix stop British universities in Manchester, Cambridge, and Oxford among others from teaching about AI or stop researchers from fiddling with snappier methods? Will the Mayor of London shut down the DeepMind outfit?

Nope. I am delighted that some people are talking about smart software. However, in the high tech world in which we live, I want to remind the Guardian, the last train for Chippenham has left the station. Too late, old chap. Learn to play Bing’s song. Chill.

Stephen E Arnold, February 24, 2025

Advice for Programmers: AI-Proof Your Career

February 24, 2025

Software engineer and blogger Sean Goedecke has some career advice for those who, like himself, are at risk of losing their programming jobs to AI. He counsels, "To Avoid Being Replaced by LLMs, Do What They Can’t." Logical enough. But what will these tools be able to do, and when will they be able to do it? That is the $25 million question. Goedecke has suggestions for the medium term and the long term.

Right now, he advises, engineers should do three things: First, use the tools. They can help you gain an advantage in the field. And also, know-thine-enemy, perhaps? Next, learn how LLMs work so you can transition to the growing field of AI work. If you can’t beat them, join them, we suppose. Finally, climb the ranks posthaste, for those in junior roles will be the first to go. Ah yes, the weak get eaten. It is a multipronged approach.

For the medium term, Goedecke predicts which skills LLMs are likely to master first. Get good at the opposite of that. For example, ill-defined or poorly-scoped problems, solutions that are hard to verify, and projects with huge volumes of code are all very difficult for algorithms. For now.

In the long term, work yourself into a position of responsibility. There are few of those to go around. So, as noted above, start vigorously climbing over your colleagues now. Why? Because executives will always need at least one good human engineer they can trust. The post observes:

"A LLM strong enough to take responsibility – that is, to make commitments and be trusted by management – would have to be much, much more powerful than a strong engineer. Why? Because a LLM has no skin in the game, which means the normal mechanisms of trust can’t apply. Executives trust engineers because they know those engineers will experience unpleasant consequences if they get it wrong. Because the engineer is putting something on the line (e.g. their next bonus, or promotion, or in the extreme case being fired), the executive can believe in the strength of their commitment. A LLM has nothing to put on the line, so trust has to be built purely on their track record, which is harder and takes more time. In the long run, when almost every engineer has been replaced by LLMs, all companies will still have at least one engineer around to babysit the LLMs and to launder their promises and plans into human-legible commitments. Perhaps that engineer will eventually be replaced, if the LLMs are good enough. But they’ll be the last to go."

If you are lucky, it will be time to retire by then. For those young enough that this is unlikely, or for those who do not excel at the rat race, perhaps a career change is in order. What jobs are safe? Sadly, this dino-baby writer does not have the answer to that question.

Cynthia Murrell, February 24, 2025

OpenAI Furthers Great Research

February 21, 2025

Unsatisfied with existing AI cheating solutions? If so, Gizmodo has good news for you: “OpenAI’s ‘Deep Research’ Gives Students a Whole New Way to Cheat on Papers.” Writer Kyle Barr explains:

“OpenAI’s new ‘Deep Research’ tool seems perfectly designed to help students fake their way through a term paper unless asked to cite sources that don’t include Wikipedia. OpenAI’s new feature, built on top of its upcoming o3 model and released on Sunday, resembles one Google introduced late last year with Gemini 2.0. Google’s ‘Deep Research’ is supposed to generate long-form reports over the course of 30 minutes or more, depending on the depth of the requested topic. Boiled down, Google’s and OpenAI’s tools are AI agents capable of performing multiple internet searches while reasoning about the next step to generate a report.”

Deep Research even functions in a side panel, providing updates on its direction and progress. So helpful! However, the tool is not for those looking to score an A. Like a student rushing to finish a paper the old-fashioned way, Barr notes, it relies heavily on Wikipedia. An example report did include a few trusted sites, like Pew Research, but such reliable sources were in the minority. Besides, the write-up emphasizes:

“Remember, this is just a bot scraping the internet, so it won’t be accessing any non-digitized books or—ostensibly—any content locked behind a paywall. … Because it’s essentially an auto-Googling machine, the AI likely won’t have access to the most up-to-date and large-scale surveys from major analysis firms. … That’s not to say the information was inaccurate, but anybody who generates a report is at the mercy of suspect data and the AI’s interpretation of that data.”

Meh, we suppose that is okay if one just needs a C to get by. But is it worth the $200 per month subscription? I suppose that depends on the student, and the parents’ willingness to sign up for services that will make gentle Ben and charming Chrissie smarter. Besides, we are sure more refined versions are in our future.

Cynthia Murrell, February 21, 2025

Gemini, the Couch Potato, Watches YouTube

February 21, 2025

Have you ever told yourself that you have too many YouTube videos to watch? Now you can save time by using Gemini AI to watch them for you. According to Make Use Of, “Gemini Can Now Watch YouTube Videos And Save Hours Of Time.”

Google recently rolled out an update to its Gemini AI that allows users to catch up on YouTube videos without having to actually watch them. The new feature is a marvelous advancement! The new addition to Gemini 2.0 Flash watches the video, then answers questions about it or provides a summary. Google users can access Gemini through the Gemini site or the smartphone app. It’s also available for free without the Gemini Advanced subscription.

To access the video-watching feature, users must select the 2.0 Flash Thinking Experimental with apps model from the sidebar.

Here’s how the cited article’s author used Gemini:

“… I came across a YouTube video about eight travel tips for Las Vegas. Instead of watching the entire video, I simply asked Gemini, ‘What are the eight travel tips in this video?’ Gemini then processed the video and provided a concise summary of the travel tips. I also had Gemini summarize a video on changing a windshield wiper on a Honda CR-V, a chore I needed to complete. The results were simple and easy to understand, allowing me to glance at my iPhone screen instead of constantly stopping and starting the video during the process. The easiest way to grab a YouTube link is through your web browser or the Share button under the video.”

YouTube videos can be long and boring. Gemini condenses the information into digestible, quick-to-read bits. It’s an awesome tool, but if Gemini watches a video, does it count as a view for advertising? Will Gemini put on a few pounds snacking on Pringles?

Whitney Grace, February 21, 2025

What Do Gamers Know about AI? Nothing, Nothing at All

February 20, 2025

Take-Two Games CEO says, "There’s no such thing" as AI.

Is the head of a major gaming publisher using semantics to downplay the role of generative AI in his industry? PC Gamer reports, "Take-Two CEO Strauss Zelnick Takes a Moment to Remind Us Once Again that ‘There’s No Such Thing’ as Artificial Intelligence." Writer Andy Chalk quotes Zelnick from a recent GamesIndustry interview:

"Artificial intelligence is an oxymoron, there’s no such thing. Machine learning, machines don’t learn. Those are convenient ways to explain to human beings what looks like magic. The bottom line is that these are digital tools and we’ve used digital tools forever. I have no doubt that what is considered AI today will help make our business more efficient and help us do better work, but it won’t reduce employment. To the contrary, the history of digital technology is that technology increases employment, increases productivity, increases GDP and I think that’s what’s going to happen with AI. I think the videogame business will probably be on the leading, if not bleeding, edge of using AI."

So AI, which does not exist, will actually create jobs instead of eliminating them? The write-up correctly notes the evidence points to the contrary. On the other hand, Zelnick seems clear-eyed on the topic of copyright violations. AI-on-AI violations, anyway. We learn:

"That’s a mess Zelnick seems eager to avoid. ‘In terms of [AI] guardrails, if you mean not infringing on other people’s intellectual property by poaching their LLMs, yeah, we’re not going to do that,’ he said. ‘Moreover, if we did, we couldn’t protect that, we wouldn’t be able to protect our own IP. So of course, we’re mindful of what technology we use to make sure that it respects others’ intellectual property and allows us to protect our own.’"

Perhaps Zelnick is on to something. It is true that generative AI is just another digital tool—albeit one that tends to put humans out of work. But as we know, hype is more important than reality for those chasing instant fame and riches.

Cynthia Murrell, February 20, 2025

Smart Software and Law Firms: Realities Collide

February 19, 2025

This blog post is the work of a real-live dinobaby. No smart software involved.

TechCrunch published “Legal Tech Startup Luminance, Backed by the Late Mike Lynch, Raises $75 Million.” Good news for Luminance. Now the company just needs to ring the bell for those putting up the money. The write up says:

Claiming to be capable of highly accurate interrogation of legal issues and contracts, Luminance has raised $75 million in a Series C funding round led by Point72 Private Investments. The round is notable because it’s one of the largest capital raises by a pure-play legal AI company in the U.K. and Europe. The company says it has raised over $115 million in the last 12 months, and $165 million in total.  Luminance was originally developed by Cambridge-based academics Adam Guthrie (founder and chief technical architect) and Dr. Graham Sills (founder and director of AI).

Why is Luminance different? The method is similar to that used by Deepseek. With concerns about the cost of AI, a method which might be less expensive to get up and keep running seems like a good bet.

However, Eudia has raised $105 million with backing from people familiar with Relativity’s legal business. Law dot com suggests that Eudia will streamline legal business processes.

The article “Massive Law Firm Gets Caught Hallucinating Cases” offers an interesting anecdote about a large law firm facing sanctions. What did the big boys and girls at the law firm do? Those hard working Type A professionals cited nine cases to support an argument. There is just one trivial issue perplexing the senior partners. Eight of those cases were “nonexistent.” That means made up, invented, and spat out by a nifty black box of probabilities and their methods.

I am no lawyer. I did work as an expert witness and picked up some insight about the thought processes of big time lawyers. My observations may not apply to the esteemed organizations to which I linked in this short essay, but I will assume that I am close enough for horseshoes.

  1. Partners want big pay and juicy bonuses. If AI can help reduce costs and add protein powder to the compensation package, AI is definitely a go-to technology to use.
  2. Lawyers who are busy all of the billable time, and then some, want to be more efficient. The hyperbole swirling around AI makes it clear that using an AI is a productivity booster. Do lawyers have time to check what the AI system did? Nope. Therefore, hallucination is going to be part of the transformer-based methodologies until something better becomes feasible. (Did someone say, “Quantum computers”?)
  3. The marketers (both directly compensated and the social media remoras) identify a positive. Then that upside is gilded like Tzar Nicholas’ powder room and repeated until it sure seems true.

The reality for the investors is that AI could be a winner. Go for it. The reality for the lawyers is that the time to figure out what’s in bounds and what’s out of bounds is unlikely to be available. Other professionals will discover what the cancer docs did when using the late, great IBM Watson. AI can do some things reasonably well. Other things can have severe consequences.

Stephen E Arnold, February 19, 2025

Speed Up Your Loss of Critical Thinking. Use AI

February 19, 2025

While the human brain isn’t a muscle, its neurology does need to be exercised to maintain plasticity. When a human brain is rigid, it can’t function in a healthy manner. AI is harming brains by making them not think good, says 404 Media: “Microsoft Study Finds AI Makes Human Cognition ‘Atrophied and Unprepared.’” You can read the complete Microsoft research report at this link. (My hunch is that this type of document would have gone the way of Timnit Gebru and the flying stochastic parrot, but that’s just my opinion, Hank, Advait, Lev, Ian, Sean, Dick, and Nick.)

Carnegie Mellon University and Microsoft researchers released a paper which says that the more humans rely on generative AI, the more likely the “deterioration of cognitive faculties that ought to be preserved.”

Really? You don’t say! What else does this remind you of? How about watching too much television or playing too many videogames? These passive activities (arguably with videogames) stunt the development of brain gray matter and, in a flight of Mary Shelley rhetoric, make a brain rot! What else did the researchers discover when they studied 319 knowledge workers who self-reported their experiences with generative AI:

“ ‘The data shows a shift in cognitive effort as knowledge workers increasingly move from task execution to oversight when using GenAI,’ the researchers wrote. ‘Surprisingly, while AI can improve efficiency, it may also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI, raising concerns about long-term reliance and diminished independent problem-solving.’”

By the way, we definitely love and absolutely believe data based on self reporting. Think of the mothers who asked their teens, “Where did you go?” The response, “Out.” The mothers ask, “What did you do?” The answer, “Nothing.” Yep, self reporting.

Does this mean generative AI is a bad thing? Yes and no. It’ll stunt the growth of some parts of the brain, but other parts will grow in tandem with the use of new technology. Humans adapt to their environments. As AI becomes more ingrained into society it will change the way humans think but will only make them sort of dumber [sic]. The paper adds:

“ ‘GenAI tools could incorporate features that facilitate user learning, such as providing explanations of AI reasoning, suggesting areas for user refinement, or offering guided critiques,’ the researchers wrote. ‘The tool could help develop specific critical thinking skills, such as analyzing arguments, or cross-referencing facts against authoritative sources. This would align with the motivation enhancing approach of positioning AI as a partner in skill development.’”

The key is to not become overly reliant on AI but also to be aware that the tool won’t go away. Oh, when my mother asked me, “What did you do, Whitney?” I responded in the best self reporting manner, “Nothing, mom, nothing at all.”

Whitney Grace, February 19, 2025

Programming: Missing the Message

February 18, 2025

This blog post is the work of a real-live dinobaby. No smart software involved.

I read “New Junior Developers Can’t Actually Code.” The write up is interesting. I think an important point in the essay has been either overlooked or sidestepped. The main point of the article in my opinion is:

The foundational knowledge that used to come from struggling through problems is just… missing. We’re trading deep understanding for quick fixes, and while it feels great in the moment, we’re going to pay for this later.

I agree. Creating software has shifted to what I like to describe as a TikTok mindset. The idea is that one can do a quick search and get an answer, preferably in less than 30 seconds. I know there are young people who spend time working through problems. We have one of these 12-year-olds in our family. The problem is that I am not sure how many other 12-year-olds have this baked-in desire to work through problems. From what I see and hear, teachers are concerned that students are in TikTok mode, not in “work through” mode, particularly in class.

The write up says:

Here’s the reality: The acceleration has begun and there’s nothing we can do about it. Open source models are taking over, and we’ll have AGI running in our pockets before we know it. But that doesn’t mean we have to let it make us worse developers. The future isn’t about whether we use AI—it’s about how we use it. And maybe, just maybe, we can find a way to combine the speed of AI with the depth of understanding that we need to learn.

I agree. Now the “however”:

  1. Mistakes with older software may not be easily remediated. I am a dinobaby. Dinobabies drop out or die. The time required to figure out why something isn’t working may not be available. That might be manageable for a small issue. For something larger, like a large bank’s system, the problem can be a difficult one.
  2. People with modern skills may not know where to look for an answer. The reference materials, the snippets of code, or the knowledge about a specific programming language may not be available. There are many reasons for this “knowledge loss.” Once gone, it will take time and money to get the information, not a TikTok fix.
  3. The software itself may be a hack job. We did a project for Bell Labs at the time of the Judge Green break up. The regional manager running my project asked Alan and Howard (my two mainframe IBM CICS specialists), the people working with me on this minor job, if they wrote documentation. Howard said, “Ho ho ho. We just use Assembler and make it work.” The project manager said, “You can’t do that for this project.” Alan said, “How do you propose to get the service you want us to implement to work?” We got the job, and the system, almost 50 years later, is still in service. Okay, young wizard with smart software, fix up our work.

So what? We are reaching a point at which the connection between essential computer science knowledge and actual implementation in large-scale, mission-critical systems is being lost. Maybe AI can do what Alan, Howard, and I did to comply with Judge Green’s order relating to Baby Bell information exchange in the IBM environment.

I am skeptical. That’s a problem with the TikTok approach and smart software. If the model gets it wrong, there may be no fix. TikTok won’t be much help either. (I think Steve Gibson might agree with some of my assertions.) The write up does not flip over the rock. There is some shocking stuff beneath the gray, featureless surface.

Stephen E Arnold, February 18, 2025

Hackers and AI: Of Course, No Hacker Would Use Smart Software

February 18, 2025

This blog post is the work of a real live dinobaby. Believe me, after reading the post, you know that smart software was not involved.

Hackers would never ever use smart software. I mean those clever stealer distributors preying on get-rich-quick stolen credit card users. Nope. Those people using online games to lure kiddies and people with kiddie-level intelligence into providing their parents’ credit card data? Nope and double nope. Those people in computer science classes in Azerbaijan learning how to identify security vulnerabilities while working as contractors for criminals. Nope. Never. Are you crazy? These bad actors know that smart software is most appropriate for Mother Teresa type activities and creating Go Fund Me pages to help those harmed by natural disasters, bad luck, or not having a job except streaming.

I mean everyone knows that bad actors respect the firms providing smart software. It is common knowledge that bad actors play fair. Why would a criminal use smart software to create more efficacious malware payloads, compromise Web sites, or defeat security to trash the data on Data.gov? Ooops. Bad example. Data.gov has been changed.

I read “Google Says Hackers Abuse Gemini AI to Empower Their Attacks.” That’s the spirit. Bad actors are using smart software. The value of the systems is evident to criminals. The write up says:

Multiple state-sponsored groups are experimenting with the AI-powered Gemini assistant from Google to increase productivity and to conduct research on potential infrastructure for attacks or for reconnaissance on targets. Google’s Threat Intelligence Group (GTIG) detected government-linked advanced persistent threat (APT) groups using Gemini primarily for productivity gains rather than to develop or conduct novel AI-enabled cyberattacks that can bypass traditional defenses. Threat actors have been trying to leverage AI tools for their attack purposes to various degrees of success as these utilities can at least shorten the preparation period. Google has identified Gemini activity associated with APT groups from more than 20 countries but the most prominent ones were from Iran and China.

Stop the real time news stream! Who could have imagined that bad actors would be interested in systems and methods that would make their behaviors more effective and efficient?

When Microsoft rolled out its marketing gut punch aimed squarely at Googzilla, the big online advertising beast responded. The Code Red and Code Yellow lights flashed. Senior managers held meetings after Foosball games and hanging at Philz’ Coffee.

Did Google management envision the reality of bad actors using Gemini? No. It appears that the Google acquisition Mandiant figured it out. Eventually — it’s been two years and counting since Microsoft caused the AI tsunami — the Eureka! moment arrived.

The write up reports:

Google also mentions having observed cases where the threat actors attempted to use public jailbreaks against Gemini or rephrasing their prompts to bypass the platform’s security measures. These attempts were reportedly unsuccessful.

Of course they were. Do US banks tell their customers when check fraud or other cyber dishonesty relieves people of their funds? Sure they don’t. Therefore, it is only the schlubs who are unfortunate enough to have the breach disclosed. Then the cyber security outfits leap into action and issue fixes. Everything in the cyber security world is buttoned up and buttoned down. Absolutely.

Several observations:

  1. How has free access without any type of vetting worked out? The question is directed at the big tech outfits who are beavering away in this technology blast zone.
  2. What are the providers of free smart software doing to make certain that the method can only produce seventh grade students’ essays about the transcontinental railroad?
  3. What exactly is a user of free smart software supposed to do to rein in the actions of nation states with which most Americans are somewhat familiar? I mean there is a Chinese restaurant near Harrod’s Creek. Am I to discuss the matter with the waitress?

Why worry? That worked for Mad Magazine until it didn’t. Hey, Google, thanks for the information. Who could have known smart software can be used for nefarious purposes? (Obviously not Google.)

Stephen E Arnold, February 18, 2025

Unified Data Across Governments? How Useful for a Non Participating Country

February 18, 2025

A dinobaby post. No smart software involved.

I spoke with a person whom I have known for a long time. The individual lives and works in Washington, DC. He mentioned “disappeared data.” I did some poking around and, sure enough, certain US government public facing information had been “disappeared.” Interesting. For a short period of time I made a few contributions to what was FirstGov.gov, now USA.gov.

For those who don’t remember or don’t know about President Clinton’s Year 2000 initiative, the idea was interesting. At that time, access to public-facing information on US government servers was via the Web search engines. In order to locate a tax form, one would navigate to an available search system. On Google one would just slap in “IRS” or “IRS” plus the form number.

Most of the US government public-facing Web sites were reasonably straightforward. Others were fairly difficult to use. The US Marine Corps’ Web site had poor response times. I think it was hosted on something called Server Beach, and the would-be recruit would have to wait for the recruitment station data to appear. The Web page worked, but it was slow.

President Clinton, or someone in his administration, wanted the problem fixed with a search system for US government public-facing content. After a bit of work, the system went online in September 2000. The system morphed into a US government portal a bit like the Yahoo.com portal model.

I thought about the information in “Oracle’s Ellison Calls for Governments to Unify Data to Feed AI.” The write up reports:

Oracle Corp.’s co-founder and chairman Larry Ellison said governments should consolidate all national data for consumption by artificial intelligence models, calling this step the “missing link” for them to take full advantage of the technology. Fragmented sets of data about a population’s health, agriculture, infrastructure, procurement and borders should be unified into a single, secure database that can be accessed by AI models…

Several questions arise; for instance:

  1. What country or company provides the technology?
  2. Who manages what data are added and what data are deleted?
  3. What are the rules of access?
  4. What about public data which are not available for public access; for example, the “disappeared” data from US government Web sites?
  5. What happens to commercial or quasi-commercial government units which repackage public data and sell it at a hefty mark up?

Based on my brief brush with the original Clinton project, I think the idea is interesting. But I have one other question in mind: What happens when non-participating countries get access to the aggregated public-facing data? Digital information is a tricky resource to secure. In fact, once data are digitized and connected to a network, they are fair game. Someone, somewhere will figure out how to access, obtain, exfiltrate, and benefit from aggregated data.

The idea is, in my opinion, a bit of grandstanding like Google’s quantum supremacy claims. But US high technology wizards are ready and willing to think big thoughts and take even bigger actions. We live in interesting times, but I am delighted that I am old.

Stephen E Arnold, February 18, 2025
