A Perfect Plan: Mainframes Will Live Forever
September 7, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Experienced COBOL programmers are in high demand and short supply, but IBM is about to release an AI tool that might render that lucrative position obsolete. The Register reports: “IBM Says GenAI Can Convert that Old COBOL Code to Java for You.” Dubbed the watsonx Code Assistant for Z, the tool should be available near the end of this year. Reporter Dan Robinson gives us a little background:
“COBOL supports many vital processes within organizations globally – some that would surprise newbie devs. The language was designed specifically to be portable and easier for coding business applications. The good news is that it works. The bad news is it’s been working for a little long. COBOL has been around for over 60 years, and many of the developers who wrote those applications have since retired or are no longer with us. ‘If you can find a COBOL programmer, they are expensive. I have seen figures showing they can command some of the highest salaries because so many mission critical apps are written in COBOL and they need maintenance,’ Omdia Chief Analyst Roy Illsley told us.
Migrating the code to Java means there are many more programmers around, he added, and if the apps run on Linux on Z then they can potentially be moved off the mainframe more easily in future.”
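What does converting COBOL to Java actually involve? As a minimal, hypothetical sketch — my illustration, not watsonx Code Assistant output — consider how a single COBOL COMPUTE statement might land in Java. The payroll example and field names are invented:

```java
// Hypothetical COBOL-to-Java translation sketch; NOT actual watsonx
// Code Assistant output. The invented COBOL fragment being translated:
//
//   01 WS-HOURS      PIC 9(3).
//   01 WS-RATE       PIC 9(4)V99.
//   01 WS-GROSS-PAY  PIC 9(7)V99.
//   COMPUTE WS-GROSS-PAY ROUNDED = WS-HOURS * WS-RATE.
//
// BigDecimal preserves COBOL's fixed-point decimal semantics; a double
// would introduce binary floating-point drift in money calculations.
import java.math.BigDecimal;
import java.math.RoundingMode;

public class Payroll {
    public static BigDecimal grossPay(BigDecimal hours, BigDecimal rate) {
        // PIC 9(7)V99 implies two decimal places; ROUNDED maps to half-up.
        return hours.multiply(rate).setScale(2, RoundingMode.HALF_UP);
    }

    public static void main(String[] args) {
        // Prints 6800.00
        System.out.println(grossPay(new BigDecimal("160"), new BigDecimal("42.50")));
    }
}
```

The single-statement case is the tractable part; the surrounding data divisions, file handling, and decades of accreted business rules come along for the ride.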
Perhaps it will be that straightforward. There are an estimated 775 to 850 billion lines of COBOL code at work in the business world, and IBM is positioning Code Assistant to help prioritize, refactor, and convert them into Java. There is just one pesky problem:
“IBM is not the only IT outfit turning to AI tools to help developers code or maintain applications, however, the quality of AI-assisted output has been questioned. A Stanford University study found that programmers who accepted help from AI tools like Github Copilot produce less secure code than those who did not.”
So maybe firms should hold on to those COBOL programmers’ contact info, just in case.
Cynthia Murrell, September 7, 2023
Gannett: Whoops! AI Cost Cutting Gets Messy
September 6, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Gannett, the “real” news bastion of excellence, experimented with smart software. The idea is that humanoids are expensive, unreliable, and tough to manage. Software — especially smart software — is just “set it and forget it.”
A young manager / mother appears in distress after her smart software robot spilled the milk. Thanks, MidJourney. Not even close to what I requested.
That was the idea in the Gannett carpetland. How did that work out?
“Gannett to Pause AI Experiment after Botched High School Sports Articles” reports:
Newspaper chain Gannett has paused the use of an artificial intelligence tool to write high school sports dispatches after the technology made several major flubs in articles in at least one of its papers.
The estimable Gannett organization’s effort generated some online buzz. The CNN article adds:
The reports were mocked on social media for being repetitive, lacking key details, using odd language and generally sounding like they’d been written by a computer with no actual knowledge of sports.
That statement echoes my views of MBAs with zero knowledge of business making bonehead management decisions. Gannett is well managed; therefore, the executives are not responsible for the decision to use smart software to cut costs and expand the firm’s “real” news coverage.
I wonder if the staff terminated would volunteer to return to work to write “real” news? You know. The hard stuff like high school sports articles.
Stephen E Arnold, September 6, 2023
Amazon Offers AI-Powered Review Consolidation for Busy Shoppers
September 6, 2023
I read the reviews for a product. I bought the product. Reality was — how shall I frame it — different from the word pictures. Trust those reviews? Hmmm. So far, Amazon’s generative AI focus has been on supplying services to developers on its AWS platform. Now, reports ABC News, “Amazon Is Rolling Out a Generative AI Feature that Summarizes Product Reviews.” Writer Haleluya Hadero tells us:
“The feature, which the company began testing earlier this year, is designed to help shoppers determine at a glance what other customers said about a product before they spend time reading through individual reviews. It will pick out common themes and summarize them in a short paragraph on the product detail page.”
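Amazon has not said how the themes get extracted, and the quote implies a generative model does the summarizing. As a toy sketch only — simple word counting, not Amazon’s method — here is the flavor of surfacing “common themes” across reviews; the sample reviews are invented:

```java
// Toy illustration of surfacing "common themes" from reviews by word
// frequency. This is NOT Amazon's approach (which is generative AI);
// it only shows the general flavor of theme extraction.
import java.util.HashMap;
import java.util.Map;

public class ReviewThemes {
    public static void main(String[] args) {
        String[] reviews = {
            "battery life is great but the case feels cheap",
            "cheap case, solid battery, fast shipping",
            "battery died in a week, case cracked"
        };

        Map<String, Integer> counts = new HashMap<>();
        for (String review : reviews) {
            for (String word : review.toLowerCase().split("\\W+")) {
                if (word.length() > 3) { // skip short filler words
                    counts.merge(word, 1, Integer::sum);
                }
            }
        }

        // Words appearing in more than one review become "themes":
        // battery: 3, case: 3, cheap: 2
        counts.entrySet().stream()
              .filter(e -> e.getValue() > 1)
              .sorted((a, b) -> b.getValue() - a.getValue())
              .forEach(e -> System.out.println(e.getKey() + ": " + e.getValue()));
    }
}
```

Note the blind spot: if the input reviews are fake or shill-written, the summary will faithfully condense the fakery.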
A few mobile shoppers have early access to the algorithmic summaries while Amazon tweaks the tool with user feedback. Eventually, the company said, shoppers will be able to surface common themes in reviews. Sounds nifty, but there is one problem: Consolidating reviews that are fake, generated by paid shills, or just plain wrong does nothing to improve their accuracy. But Amazon is more eager to jump on the AI bandwagon than to perform quality control on its reviews system. We learn:
“The Seattle-based company has been looking for ways to integrate more artificial intelligence into its product offerings as the generative AI race heats up among tech companies. Amazon hasn’t released its own high-profile AI chatbot or imaging tool. Instead, it’s been focusing on services that will allow developers to build their own generative AI tools on its cloud infrastructure AWS. Earlier this year, Amazon CEO Andy Jassy said in his letter to shareholders that generative AI will be a ‘big deal’ for the company. He also said during an earnings call with investors last week that ‘every single one’ of Amazon’s businesses currently has multiple generative AI initiatives underway, including its devices unit, which works on products like the voice assistant Alexa.”
Perhaps one day Alexa will recite custom poetry or paint family portraits for us based on the eavesdropping she’s done over the years. Heartwarming. One day, sure.
Cynthia Murrell, September 6, 2023
Smart Software Wizard: Preparation for a Future As a Guru
September 5, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I read “I Hope I’m Wrong: the Co-Founder of DeepMind on How AI Threatens to Reshape Life As We Know It.” The article appears to be an interview with one of the founders of the Google DeepMind outfit. There are numerous somewhat astounding quotes in the write up. To enjoy the humble bragging, the “gee whiz, trouble ahead” deflection, and the absolutism of the Google way — read the edited interview yourself. (You will have to click through the increasingly strident appeals for cash from the Guardian newspaper. I find them amusing since the “real news” business decided itself into the pickle many “real news” outfits find themselves in.) The “interview” is a book review. Just scroll to the end of the lengthy PR piece about “The Coming Wave.” If there were old-fashioned bookstores, you might be able to view the wizard and buy a signed copy. But another high-tech outfit fixed up the bookstore business, so you will have to Google it. Heck, Google everything.
A serious looking AI expert ponders what to do with smart software. It looks to me as if the wizard is contemplating cashing in, becoming famous, and buying a super car. Will he volunteer for the condo association’s board of directors? Thanks, MidJourney. No Mother MJ hassling me for this highly original AI art.
Back to the write up which seems to presage doom from smart software.
Here’s a statement I found interesting:
“I think that what we haven’t really come to grips with is the impact of … family. Because no matter how rich or poor you are, or which ethnic background you come from, or what your gender is, a kind and supportive family is a huge turbo charge,” he says. “And I think we’re at a moment with the development of AI where we have ways to provide support, encouragement, affirmation, coaching and advice. We’ve basically taken emotional intelligence and distilled it. And I think that is going to unlock the creativity of millions and millions of people for whom that wasn’t available.”
Very smart person, that developer of smart software. The leap from the family to unlocking creativity is interesting. If you think right wing political movements are zipping along on encrypted messaging apps, just wait until AI adds a turbo boost. That’s something to anticipate. Also, I like the idea of Google DeepMind taking “emotional intelligence” and distilling it, like a chef in a three star Michelin restaurant reducing a sauce with goose fat in it.
I also noted this statement:
I think this idea that we need to dismantle the state, we need to have maximum freedom – that’s really dangerous. On the other hand, I’m obviously very aware of the danger of centralized authoritarianism and, you know, even in its minuscule forms like nimbyism*. That’s why, in the book, we talk about a narrow corridor between the danger of dystopian authoritarianism and this catastrophe caused by openness. That is the big governance challenge of the next century: how to strike that balance. [Editor’s Note: Nimbyism means you don’t want a prison built adjacent to million dollar homes.]
Imagine, a Googler and DeepMinder looking down the Information Highway.
How are social constructs coping with the information revolution? If AI is an accelerant, what will go up in flames? One answer is Dr. Jeff Dean’s career. He was a casualty of a lateral arabesque because the DeepMind folks wanted to go faster. “Old” Dr. Dean was a turtle wandering down the Information Superhighway. He’s lucky he was swatted to the side of the road.
What are these folks doing? In my opinion, these statements do little to reduce my anxiety about the types of thinkers who knowingly create software systems purpose built to extend the control of a commercial enterprise. Without regulation, the dark flowers of some wizards are blooming in the Google walled garden.
One of these wizards is hoping that he is wrong about the negative impacts of smart software. Nice try, but it won’t work for me. Creating publicity and excuses is advertising. But that’s the business of Google, isn’t it? The core competence of some wizards is not moral or ethical action, in my opinion. PR is good, however.
Stephen E Arnold, September 5, 2023
Meta Play Tactic or Pop Up a Level. Heh Heh Heh
September 4, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Years ago I worked at a blue chip consulting firm. One of the people I interacted with had a rhetorical ploy when cornered in a discussion. The wizard would say, “Let’s pop up a level.” He would then shift the issue which was, for example, a specific problem, into a higher level concept and bring his solution into the bigger context.
The clever manager pops up a level to observe the lower level tasks from a broader view. Thanks, Mother MJ. Not what I specified, but the gradient descent is alive and well.
Let’s imagine that the topic is a plain donut or a chocolate covered donut with sprinkles. There are six people in the meeting. The discussion is contentious because that’s what blue chip consulting Type As do: contention, sometimes nice, sometimes not. The “pop up a level” guy says, “Let’s pop up a level. The plain donut has less sugar. We are concerned about everyone’s health, right? The plain donut does not have so much evil, diabetes linked sugar. It makes sense to just think of health and, obviously, the decreased risk of rising health insurance premiums.” Unless one is paying attention and not eating the chocolate chip cookies provided for the meeting attendees, the pop-up-a-level approach might work.
For a current example of pop-up-a-level thinking, navigate to “Designing Deep Networks to Process Other Deep Networks.” Nvidia is in hog heaven with the smart software boom. The company realizes that there are lots of people getting in the game. The number of smart software systems and methods, products and services, grifts and gambles, and heaven knows what else is increasing. Nvidia wants to remain the Big Dog even though some outfits want to design their own chips or be like Apple and maybe find a way to do the Arm thing. Enter the pop-up-a-level play.
The write up says:
The solution is to use convolutional neural networks. They are designed in a way that is largely “blind” to the shifting of an image and, as a result, can generalize to new shifts that were not observed during training…. Our main goal is to identify simple yet effective equivariant layers for the weight-space symmetries defined above. Unfortunately, characterizing spaces of general equivariant functions can be challenging. As with some previous studies (such as Deep Models of Interactions Across Sets), we aim to characterize the space of all linear equivariant layers.
Translation: Our system and method can make use of any other accessible smart software plumbing. Stick with Nvidia.
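For readers who want the translation of the translation: the quoted passage is about layers whose output shuffles in lockstep when the input shuffles. Here is a minimal sketch of that idea — the simplest permutation-equivariant layer from the DeepSets line of work the quote cites, not Nvidia’s actual weight-space architecture:

```java
// Minimal sketch of a permutation-equivariant linear layer (DeepSets
// style); NOT Nvidia's weight-space architecture. Computes
// y_i = a * x_i + b * mean(x), so permuting the inputs permutes the
// outputs identically -- the equivariance property the article names.
public class EquivariantLayer {
    private final double a; // per-element weight, shared across positions
    private final double b; // weight on the permutation-invariant mean

    public EquivariantLayer(double a, double b) {
        this.a = a;
        this.b = b;
    }

    public double[] forward(double[] x) {
        double mean = 0.0;
        for (double v : x) mean += v;
        mean /= x.length;

        double[] y = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            y[i] = a * x[i] + b * mean; // same rule at every position
        }
        return y;
    }

    public static void main(String[] args) {
        EquivariantLayer layer = new EquivariantLayer(2.0, -1.0);
        double[] out = layer.forward(new double[]{1.0, 2.0, 3.0});
        // mean = 2.0, so out = {0.0, 2.0, 4.0}; shuffle the input and
        // the output shuffles the same way.
        for (double v : out) System.out.println(v);
    }
}
```

Nvidia’s post generalizes this from permuting set elements to the symmetries of neural network weights; the sketch only shows the underlying trick.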
I think the pop-up-a-level approach is a useful one. Are the competitors savvy enough to counter the argument?
Stephen E Arnold, September 4, 2023
Planning Ahead: Microsoft User Agreement Updates To Include New AI Stipulations
September 4, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Microsoft is eager to capitalize on its AI projects, but first it must make sure users are legally prohibited from poking around behind the scenes. For good measure, it will also ensure users take the blame if they misuse its AI tools. “Microsoft Limits Use of AI Services in Upcoming Services Agreement Update,” reports Ghacks.net. Writer Martin Brinkman notes these services include but are not limited to Bing Chat, Windows Copilot, Microsoft Security Copilot, Azure AI platform, and Teams Premium. We learn:
“Microsoft lists five rules regarding AI Services in the section. The rules prohibit certain activity, explain the use of user content and define responsibilities. The first three rules limit or prohibit certain activity. Users of Microsoft AI Services may not attempt to reverse engineer the services to explore components or rulesets. Microsoft prohibits furthermore that users extract data from AI services and the use of data from Microsoft’s AI Services to train other AI services. … The remaining two rules handle the use of user content and responsibility for third-party claims. Microsoft notes in the fourth entry that it will process and store user input and the output of its AI service to monitor and/or prevent ‘abusive or harmful uses or outputs.’ Users of AI Services are also solely responsible regarding third-party claims, for instance regarding copyright claims.”
Another, non-AI related change is that storage for one’s Outlook.com attachments will soon affect OneDrive storage quotas. That could be an unpleasant surprise for many when changes take effect on September 30. Curious readers can see a summary of the changes here, on Microsoft’s website.
Cynthia Murrell, September 4, 2023
Regulating Smart Software: Let Us Form a Committee and Get Industry Advisors to Help
September 1, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
The Boston Globe published what I thought was an amusing “real” news story about legislators and smart software. I know. I know. I am entering oxymoron land. The article is “The US Regulates Cars, Radio, and TV. When Will It Regulate AI?” A number of passages received True Blue check marks.
A person living off the grid works to make his mobile phone deliver generative content to solve the problem of … dinner. Thanks, MidJourney. You did a Stone Age person but you would not generate a street person. How helpful!
Let me share two passages and then offer a handful of observations.
How about this statement attributed to Microsoft’s Brad Smith? He is the professional who was certain Russia organized 1,000 programmers to figure out the SolarWinds security loopholes. Yes, that Brad Smith. The story quotes him as saying:
“We should move quickly,” Brad Smith, the president of Microsoft, which launched an AI-powered version of its search engine this year, said in May. “There’s no time for waste or delay,” Chuck Schumer, the Senate majority leader, has said. “Let’s get ahead of this,” said Sen. Mike Rounds, R-S.D.
Microsoft moved fast. I think the reason was to make Google look stupid. Both of these big outfits know that online services aggregate and become monopolistic. Microsoft wants to be the AI winner. Microsoft is not spending extra time helping elected officials understand smart software or the stakes on the digital table. No way.
The second passage is:
Historically, regulation often happens gradually as a technology improves or an industry grows, as with cars and television. Sometimes it happens only after tragedy.
The statement aligns with my experience. Please, read the original “real” news story for Captain Obvious statements. Here are a few observations:
- Smart software is moving along at a reasonable clip. Big bucks are available to AI outfits in Germany and elsewhere. Something like 28 percent of US companies are fiddling with AI. Yep, even those raising chickens have AI religion.
- The process of regulation is slow. We have a turtle and a hare situation. Nope, the turtle loses unless an exogenous power kills the speedy bunny.
- If laws were passed, how would one get fast action to apply them? How is the FTC doing? What about the snappy pace of the CDC in preparing for the next pandemic?
Net net: Yes, let’s understand AI.
Stephen E Arnold, September 1, 2023
Slackers, Rejoice: Google Has a Great Idea Just for You
August 31, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I want to keep this short because the idea of not doing work to do work offends me deeply. Just as the big thinkers who want people to relax, take time, smell the roses, and avoid those Type A tendencies annoy me. I like being a Type A. In fact, if I were not a Type A, I would not “be,” to use some fancy Descartes logic.
Is anyone looking down the Information Superhighway to see what speeding AI vehicle is approaching? Of course not, everyone is on break or playing Foosball. Thanks, Mother MidJourney, you did not send me to the arbitration committee for my image request.
“Google Meet’s New AI Will Be Able to Go to Meetings for You” reports:
…you might never need to pay attention to another meeting again — or even show up at all.
Let’s think about this new Google service. If AI continues to advance at a reasonable pace, an AI which can attend a meeting for a person can at some point replace the person. Does that sound reasonable? What a GenZ thrill. Money for no work. The advice to take time for kicking back and living a stress free life is just fantastic.
In today’s business climate, I am not sure that delegating knowledge work to smart software is a good idea. I like to use the phrase “gradient descent.” My connotation of this jargon means a cushioned roller coaster to one or more of the Seven Deadly Sins. I much prefer intentional use of software. I still like most of the old-fashioned methods of learning and completing projects. I am happy to encounter a barrier like my search for the ultimate owners of the domain rrrrrrrrrrr.com or the methods for enabling online fraud practiced by some Internet service providers. (Sorry, I won’t name these fine outfits in this free blog post. If you are attending my keynote at the Massachusetts and New York Association of Crime Analysts’ conference in early October, say, “Hello.” In that setting, I will identify some of these outstanding companies and share some thoughts about how these folks trample laws and regulations. Sound like fun?)
Google’s objective is to become the source for smart software. In that position, the company will have access to knobs and levers controlling information access, shaping, and distribution. The end goal is a quarterly financial report and the diminution of competition from annoying digital tsetse flies in my opinion.
Wouldn’t it be helpful if the “real news” looked down the Information Highway? No, of course not. For a Type A, the new “Duet” service does not “do it” for me.
Stephen E Arnold, August 31, 2023
New Learning Model Claims to Reduce Bias, Improve Accuracy
August 30, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Promises, promises. We have seen developers try and fail to eliminate bias in machine learning models before. Now ScienceDaily reports, “New Model Reduces Bias and Enhances Trust in AI Decision-Making and Knowledge Organization.” Will this effort by University of Waterloo researchers be the first to succeed? The team worked in a field where AI bias and inaccuracy can be most devastating: healthcare. The write-up tells us:
“Hospital staff and medical professionals rely on datasets containing thousands of medical records and complex computer algorithms to make critical decisions about patient care. Machine learning is used to sort the data, which saves time. However, specific patient groups with rare symptomatic patterns may go undetected, and mislabeled patients and anomalies could impact diagnostic outcomes. This inherent bias and pattern entanglement leads to misdiagnoses and inequitable healthcare outcomes for specific patient groups. Thanks to new research led by Dr. Andrew Wong, a distinguished professor emeritus of systems design engineering at Waterloo, an innovative model aims to eliminate these barriers by untangling complex patterns from data to relate them to specific underlying causes unaffected by anomalies and mislabeled instances. It can enhance trust and reliability in Explainable Artificial Intelligence (XAI.)”
Wong states his team was able to disentangle statistics in a certain set of complex medical results data, leading to the development of a new XAI model they call Pattern Discovery and Disentanglement (PDD). The post continues:
“The PDD model has revolutionized pattern discovery. Various case studies have showcased PDD, demonstrating an ability to predict patients’ medical results based on their clinical records. The PDD system can also discover new and rare patterns in datasets. This allows researchers and practitioners alike to detect mislabels or anomalies in machine learning.”
If accurate, PDD could lead to more thorough algorithms that avoid hasty conclusions. Less bias and fewer mistakes. Can this ability be extrapolated to other fields, like law enforcement, social services, and mortgage decisions? Assurances are easy.
Cynthia Murrell, August 30, 2023
AI Weird? Who Knew?
August 29, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Captain Obvious here. Today’s report comes from the IEEE, an organization for really normal people. Oh, you are not an electrical engineer? Then, you are not normal. Just ask an EE and inquire about normalcy.
Enough electrical engineer humor. Oh, well, one more: Which is the most sophisticated engineer? [a] Civil, [b] Mechanical, [c] Electrical, [d] Nuclear. The answer is [d] nuclear. Why? You have to be able to do math, chemistry, and fix a child’s battery powered toy. Get it? I must admit that I did not when Dr. James Terwilliger told it to me when I worked at the Halliburton nuclear outfit. Never heard of it? Well, there you go. Just ask a chatbot to fill you in.
I read “Why Today’s Chatbots Are Weird, Argumentative, and Wrong.” The IEEE article is going to create some tension in engineering-forward organizations. Most of these outfits run, in the words of insightful leaders like the stars of the “All In” podcast, on booze, money, gambling, and confidence — a heady mixture indeed.
What does the write up say that Captain Obvious did not know? That’s a poor question. The answer is, “Not much.”
Here’s a passage which received the red marker treatment from this dinobaby:
[Generative AI services have] become way more fluent and more subtly wrong in ways that are harder to detect.
I love the “way more.” The key phrase in the extract, at least for me, is: “Harder to detect.” But why? Is it because developers are improving their generative systems a tweak and a human judgment at a time? The “detect” folks are in react mode. Does this suggest that, at least for now, the cat-and-mouse game ensures an advantage to the steadily improving generative systems? It sure does. In simple terms, non-electrical engineers are going to be “subtly” fooled.
A second example of my big Japanese chunky marker circling behavior is this snippet:
The problem is the answers do look vaguely correct. But [the chatbots] are making up papers, they’re making up citations or getting facts and dates wrong, but presenting it the same way they present actual search results. I think people can get a false sense of confidence on what is really just probability-based text.
Are you getting the sense that a person who is not really informed about a topic will read baloney and perceive it as a truffle?
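For the curious, “probability-based text” is not a metaphor. A toy sketch — invented tokens and probabilities, nothing like a production model’s learned distribution over a huge vocabulary — shows why the output is always fluent and only usually right:

```java
// Toy illustration of "probability-based text": pick the next token by
// sampling from a probability distribution. Tokens and probabilities
// are invented for this example; real chatbots sample from learned
// distributions over tens of thousands of tokens.
import java.util.Random;

public class NextToken {
    static final String[] TOKENS = {"1971", "1969", "1974", "never"};
    static final double[] PROBS  = {0.10, 0.70, 0.15, 0.05};

    static String sample(Random rng) {
        double r = rng.nextDouble(); // uniform in [0, 1)
        double cumulative = 0.0;
        for (int i = 0; i < TOKENS.length; i++) {
            cumulative += PROBS[i];
            if (r < cumulative) return TOKENS[i];
        }
        return TOKENS[TOKENS.length - 1]; // guard against rounding
    }

    public static void main(String[] args) {
        Random rng = new Random();
        // Usually right, sometimes confidently wrong, always fluent.
        for (int i = 0; i < 5; i++) {
            System.out.println("The first moon landing was in " + sample(rng));
        }
    }
}
```

Sampling always returns something; the distribution has no column for “I don’t know.”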
Captain Obvious is tired of this close reading game. For more AI insights, just navigate to the cited IEEE article. And be kind to electrical engineers. These individuals require respect and adulation. Make a misstep and your child’s battery powered toy will never emit incredibly annoying squeaks again.
Stephen E Arnold, August 29, 2023