GitLab Identifies a Sooty Pot and Does Not Offer a Fix
January 9, 2025
This is an official dinobaby post. No smart software involved in this blog post.
GitLab’s Sabrina Farmer is a sharp-thinking person. Her “Three Software Development Challenges Slowing AI Progress” articulates an issue often ignored or simply unknown. Specifically, according to her:
AI is becoming an increasingly critical component in software development. However, as is the case when implementing any new tool, there are potential growing pains that may make the transition to AI-powered software development more challenging.
Ms. Farmer is being kind and polite. I think she is suggesting that the nest with the AI eggs from the fund-raising golden goose has become untidy. Perhaps, I should use the word “unseemly”?
She points out three challenges, which I interpret as the equivalent of one of those unsolved math problems like cracking the Riemann Hypothesis or the Poincaré Conjecture. These are:
- AI training. Yeah, marketers write about smart software. But a relatively small number of people fiddle with the knobs and dials on the training methods and the rat’s nests of computational layers that make life easy for an eighth grader writing an essay about Washington’s alleged crossing of the Delaware River whilst standing up in a boat rowed by hearty, cheerful lads. Big demand, lots of pretenders, and very few 10X coders and thinkers are available. AI Marketers? A surplus because math and physics are hard and art history and social science are somewhat less demanding on today’s thumb typers.
- Tools, lots of tools. Who has time to keep track of every “new” piece of smart software tooling? I gave up as the hyperbole got underway in early 2023. When my team needs to do something specific, they hunt for possibilities. Testing is required because smart software often gets things wrong. Some call this “innovation.” I call it evidence of the proliferation of flawed or cute software. One cannot machine titanium with lousy tools.
- Management measurements. Give me a break, Ms. Farmer. Managers are often evidence of the Peter Principle, an accountant, or a lawyer. How can one measure what one does not use, understand, or create? Those chasing smart software are not making spindles for a wooden staircase. The task of creating smart software that has a shot at producing money is neither art nor science. It is a continuous process of seeing what works, fiddling, and fumbling. You want to measure this? Good luck, although blue chip consultants will gladly create a slide deck to show you the ropes and then churn out a spectacular invoice for professional services.
One question: Is GitLab part of the problem or part of the solution?
Stephen E Arnold, January 9, 2025
Why Buzzwords Create Problems. Big Problems, Right, Microsoft?
January 7, 2025
This is an official dinobaby post. No smart software involved in this blog post.
I read an essay by Steven Sinofsky. He worked at Microsoft. You can read about him in Wikipedia because he was a manager possibly associated with Clippy. He wrote an essay called “225. Systems Ideas that Sound Good But Almost Never Work—‘Let’s just…’” The write up is about “engineering patterns that sound good but almost never work as intended.”
I noticed something interesting about his explanation of why many software solutions go off the rails, fail to work, create security opportunities for bad actors associated with entities not too happy with the United States, and cause ongoing headaches for hundreds of millions of people.
Here is a partial list of the words and bound phrases from his essay:
- Add an API
- Anomaly detection
- Asynchronous
- Cross platform
- DSL
- Escape to native
- Hybrid parallelism
- Multi-master writes
- Peer to peer
- Pluggable
- Sync the data
What struck me about this essay is that it reveals something I think is important about Microsoft and probably other firms tapping the expertise of the author; that is, the jargon drives how the software is implemented.
I am not certain that my statement is accurate for software in general. But for this short blog post, let’s assume that it applies to some software (and I am including Microsoft’s own stellar solutions as well as products from other high profile and wildly successful vendors). With the ground rules established, I want to offer several observations about this “jargon drives the software engineering” assertion.
First, the resulting software is flawed. Problems are not actually resolved. The problems are papered over with whatever the trendy buzzword says will work. The approach makes sense in context: actual problem solving may not be possible within a given time allocation, or a working solution may fail, which then requires figuring out how not to fail again.
Second, the terms reveal that marketing think takes precedence over engineering think. Here’s what the jargon creators do. These sales-oriented types grab terms that sound good and refer to an approach. The “team” coalesces around the jargon, and the jargon directs how the software is built. Does hybrid parallelism “work”? Who knows, but it is the path forward. The manager says, “Let’s go team,” and Clippy emerges, or the weird opaqueness of the “ribbon” does.
Third, the jargon shaped by art history majors and advertising mavens defines the engineering approach. The more successful the technical jargon, the more likely the people who studied Picasso’s colors or Milton’s Paradise Regained are to define the technical frame in which a “solution” is crafted.
How good is software created in this way? Answer: Good enough.
How reliable is software created in this way? Answer: Who knows until someone like a paying customer actually uses the software.
How secure is the software created in this way? Answer: It is not secure, as the breaches of the Department of the Treasury, the intrusions into US telecommunications companies, and the mind-boggling number of security lapses in 2024 prove.
Net net: Engineering solutions based on jargon are not intended to deliver excellence. The approach is simply “good enough.” Now we have some evidence that industry leaders realize the fact. Right, Clippy?
Stephen E Arnold, January 7, 2025
Code Graveyards: Welcome, Bad Actors
January 3, 2025
Did you know that the silos housing nuclear missiles are still run on systems from the 1950s and 1960s? These systems use analog computers and code more ancient than some people’s grandparents. The manuals for this code are outdated and hard to find, except in archives and libraries that haven’t deaccessioned items for decades. There’s actually money to be made in these old systems, and the Datosh Blog explains how: “The Hidden Risks of High-Quality Code.”
There are haunted graveyards of code in more than nuclear silos. They exist in enterprise systems and came into existence in many ways: written by long-departed IT employees, built for a single project, or shipped as out-of-the-box code that no one wants to touch in case it causes a system implosion. Bureaucratic layers and indecisive mentalities trap these codebases in limbo, and they become the haunted graveyards. Not only are they haunted by the ghosts of coding projects past; they also pose existential risks.
The existential risks are magnified when red tape and attitudes prevent companies from progressing. Haunted graveyards are the root causes of technical debt: accumulated inefficiencies, expensive rewrites, and an inability to adapt to change.
Tech developers can avoid technical debt by prioritizing simplicity, especially by matching code complexity to a team’s skill level. Active knowledge transfer is important because it means system information is shared between developers beyond the basic SOP. It also helps to write self-documenting code, use understandable patterns, and not underestimate the value of teamwork and feedback (a small sketch of the self-documenting idea follows the quote below). Haunted graveyards can be avoided:
“A haunted graveyard is not always an issue of code quality, but may as well be a mismatch between code complexity and the team’s ability to grapple with it. As a consultant, your goal is to avoid these scenarios by aligning your work with the team’s capabilities, transferring knowledge effectively, and ensuring the team can confidently take ownership of your contributions.”
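The “match the code to the team” point is easy to see in miniature. Here is a minimal, hypothetical Python sketch; the billing rule, field names, and numbers are invented for illustration, not taken from the cited post:

```python
# Two versions of the same (invented) billing rule. Only the contrast
# in maintainability is the point; the business logic is hypothetical.

# Version 1: clever and compact. The next team inherits a puzzle.
def disc(o):
    return o["t"] * (0.9 if o["t"] > 100 and o["c"] == "gold" else 1.0)

# Version 2: self-documenting. A maintainer can change the threshold or
# the rate without reverse-engineering intent from one-letter keys.
GOLD_DISCOUNT_RATE = 0.10
GOLD_DISCOUNT_MINIMUM_TOTAL = 100

def apply_gold_discount(order: dict) -> float:
    """Return the order total, discounted for qualifying gold customers."""
    total = order["total"]
    is_gold_customer = order["customer_tier"] == "gold"
    if is_gold_customer and total > GOLD_DISCOUNT_MINIMUM_TOTAL:
        return total * (1 - GOLD_DISCOUNT_RATE)
    return total

if __name__ == "__main__":
    order = {"total": 150.0, "customer_tier": "gold", "t": 150.0, "c": "gold"}
    print(disc(order), apply_gold_discount(order))  # both print 135.0
```

The first version is not “bad code” by any metric a tool would flag; it is simply a ghost waiting to happen.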
Haunted graveyards are also huge opportunities for IT code consultants. Anyone with versatile knowledge, the right education/credentials, and chutzpah could establish themselves in this niche field. It’s perfect for a young 20-something with youthful optimism and capital to start a business in consulting for haunted graveyard systems. They will encounter data hoarders, though.
Whitney Grace, January 3, 2025
Technical Debt: A Weight Many Carry Forward to 2025
December 31, 2024
Do you know what technical debt is? It’s also called design debt and code debt. It refers to a development team prioritizing a project’s delivery over a functional product, and to the resulting consequences. Usually the project has to be redone. Data debt is a type of technical debt; it refers to the accumulated costs of poor data management that hinder decision-making and efficiency. Which debt is worse? The New Stack delves into that topic in: “Who’s the Bigger Villain? Data Debt vs. Technical Debt.”
Technical debt should only be adopted for short-term goals, such as meeting a release date; it shouldn’t be the SOP. Data debt’s downside is that it results in poor, manual data management. It also reduces data quality, slows decision-making, and increases costs. The pair seem indistinguishable, but the difference is that with technical debt you can quit and start over. That’s not an option with data debt, and the ramifications are bad:
“Reckless and unintentional data debt emerged from cheaper storage costs and a data-hoarding culture, where organizations amassed large volumes of data without establishing proper structures or ensuring shared context and meaning. It was further fueled by resistance to a design-first approach, often dismissed as a potential bottleneck to speed. It may also have sneaked up through fragile multi-hop medallion architectures in data lakes, warehouses, and lakehouses.”
The article goes on to recommend adopting data modeling early and explains how to restructure your current systems. You do that by drawing maps or charts of your data, then projecting where you want them to go. It’s called planning:
“To reduce your data debt, chart your existing data into a transparent, comprehensive data model that maps your current data structures. This can be approached iteratively, addressing needs as they arise — avoid trying to tackle everything at once.
Engage domain experts and data stakeholders in meaningful discussions to align on the data’s context, significance, and usage.
From there, iteratively evolve these models — both for data at rest and data in motion—so they accurately reflect and serve the needs of your organization and customers.
Doing so creates a strong foundation for data consistency, clarity, and scalability, unlocking the data’s full potential and enabling more thoughtful decision-making and future innovation.”
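In code terms, “charting your data” can start as small as giving one dataset an explicit, documented schema. Here is a minimal Python sketch; the customer-events dataset, its fields, and the comments are hypothetical, offered only to make the advice concrete:

```python
# A minimal, hypothetical sketch of "charting" existing data: give each
# dataset an explicit model instead of leaving meaning in tribal memory.
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

class EventKind(Enum):
    SIGNUP = "signup"
    PURCHASE = "purchase"
    CHURN = "churn"

@dataclass(frozen=True)
class CustomerEvent:
    """One row of the (hypothetical) customer_events dataset."""
    customer_id: str          # stable internal ID, never an email address
    kind: EventKind
    occurred_at: datetime     # UTC; upstream systems must agree on this
    amount_usd: float | None  # only meaningful for PURCHASE events

def parse_row(raw: dict) -> CustomerEvent:
    """Validate one raw record at the boundary instead of downstream."""
    amount = raw.get("amount_usd")
    return CustomerEvent(
        customer_id=str(raw["customer_id"]),
        kind=EventKind(raw["kind"]),
        occurred_at=datetime.fromisoformat(raw["occurred_at"]),
        amount_usd=float(amount) if amount is not None else None,
    )

if __name__ == "__main__":
    row = {"customer_id": 42, "kind": "purchase",
           "occurred_at": "2024-12-01T10:30:00+00:00", "amount_usd": "19.99"}
    print(parse_row(row))
```

Nothing here requires a platform purchase; it is the iterative, boundary-first modeling the quoted advice describes.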
Isn’t this just good data, project, or organizational management? Charting is a basic tool taught in kindergarten. Why do people forget it so quickly?
Whitney Grace, December 31, 2024
Microservices Are Perfect, Are They Not?
December 31, 2024
“Microservices” is another synergetic jargon term that is making the rounds of the IT industry like the latest viral video. Microservices are replacing monolithic architecture and are supposed to resolve all architectural problems. Cerbos’s article says otherwise: “The Value Of Monitoring And Observability In Microservices, And Associated Challenges.” The article is part of a ten-part series that focuses on how to effectively handle challenges during a transfer from a monolithic architecture to microservices.
This particular article is chapter five, and it stresses how observability and monitoring are important for knowing what is happening on every level of a microservices application. This matters because a microservices environment has multiple tasks running concurrently, which makes traditional tools obsolete. Observability means using tools to observe a system’s internal status, while monitoring tools collect and analyze traces, logs, and metrics. Combined, the two provide an overall picture of a system’s health. The challenges of installing monitoring and observability tools in a microservices architecture are as follows:
1. “Interaction of data silos. Treating each microservice separately when implementing monitoring and observability solutions creates “data silos”. These silos are easy to understand in isolation, without fully understanding how they interact as one. This can lead to difficulty when debugging or understanding the root cause of problems.
2. Scalability. As your microservices architecture scales, the complexity of monitoring and observability grows with it. So monitoring everything with the same tools you were using for a single monolith quickly becomes unmanageable.
3. Lack of standard tools. One of the benefits of microservices is that different teams can choose the data storage system that makes the most sense for their microservice (as we covered in blog 2 of the series, “Data management and consistency”). But, if you don’t have a standard for monitoring and observability, tying siloed insights together to gain insights on the system as a whole is challenging.”
The foundations of observability are tools that track metrics, logging, and tracing. Metrics are quantitative measurements of a system, including error rates, throughput, resource utilization, and response time; they indicate a system’s overall performance. Logging means capturing and centralizing the log messages generated by services and applications. Tracing follows requests end to end between services. Together they provide valuable insights into potential bottlenecks and errors.
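To make the three pillars concrete, here is a minimal sketch using only the Python standard library; the service name, handler, and log fields are invented, and a production system would use purpose-built tooling rather than this toy:

```python
# A stdlib-only sketch of metrics, logging, and tracing in one handler.
# The "checkout-service" and its request shape are hypothetical.
import logging
import time
import uuid
from collections import Counter

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s trace=%(trace_id)s %(message)s",
)
log = logging.getLogger("checkout-service")

METRICS = Counter()  # throughput and error counts as simple counters

def handle_request(payload: dict, trace_id: str | None = None) -> dict:
    # Tracing: one ID propagated across hops lets siloed logs be joined.
    trace_id = trace_id or uuid.uuid4().hex
    extra = {"trace_id": trace_id}
    start = time.perf_counter()
    try:
        log.info("request received", extra=extra)
        result = {"ok": True, "items": len(payload.get("items", []))}
        METRICS["requests_total"] += 1
        return result
    except Exception:
        METRICS["errors_total"] += 1
        log.exception("request failed", extra=extra)
        raise
    finally:
        # Metrics: record response time for every request, success or not.
        elapsed_ms = (time.perf_counter() - start) * 1000
        log.info("request finished in %.2f ms", elapsed_ms, extra=extra)

if __name__ == "__main__":
    handle_request({"items": [1, 2, 3]})
    print(dict(METRICS))
```

The same trace ID written into every log line is what keeps the “data silos” of the quoted list joinable.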
This article verifies what we already know from every new technology adoption: same problems, new packaging. There isn’t any solution that will solve all technology problems. New technology resolves old problems but brings its own new ones. There’s no such thing as a one-stop shop.
Whitney Grace, December 31, 2024
The Fatal Flaw in Rules-Based Smart Software
December 17, 2024
This blog post is the work of an authentic dinobaby. No smart software was used.
As a dinobaby, I have to remember the past. Does anyone know how the “smart” software in AskJeeves worked? At one time, before the cute logo and before the company followed the path of many, many other breakthrough search firms, AskJeeves used hand-crafted rules. (Oh, the reference to breakthrough is a bit of an insider joke with which I won’t trouble you.) A user would search for “weather 94401,” and the system would “look up” in the weather rule the zip code for Foster City, California, and deliver the answer. Alternatively, I could have looked out my window when I ran the query. AskJeeves went down a path painfully familiar to other smart software companies today: customer service. AskJeeves was acquired by IAC Corp., which moved away from the rules-based system that was “revolutionizing” search in the late 1990s.
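To make the pattern concrete, here is a minimal, hypothetical sketch of an AskJeeves-style rules engine; the patterns, handlers, and lookup table are invented for illustration, since the real system’s internals were never published:

```python
# A hypothetical sketch of a hand-crafted rules engine of the AskJeeves
# era. Every query shape needs its own pattern, written and maintained
# by a human.
import re

ZIP_TO_CITY = {"94401": "Foster City, California"}  # a human maintains this

def weather_rule(match: re.Match) -> str:
    zip_code = match.group(1)
    city = ZIP_TO_CITY.get(zip_code, f"zip {zip_code} (unknown)")
    return f"Looking up the weather for {city}..."

RULES = [
    (re.compile(r"^weather (\d{5})$"), weather_rule),
    (re.compile(r"^population of (.+)$"),
     lambda m: f"Looking up the population of {m.group(1)}..."),
]

def answer(query: str) -> str:
    for pattern, handler in RULES:
        match = pattern.match(query.lower().strip())
        if match:
            return handler(match)
    # The flaw: anything outside the bounded rule set falls through.
    return "No rule matches. (Time to hire another rule writer.)"

if __name__ == "__main__":
    print(answer("weather 94401"))
    print(answer("what is debanking?"))  # new vocabulary breaks the rules
```

Each new query shape means another pattern, another handler, and another lookup table for a human to keep current.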
Rules-based wranglers keep busy a-fussin’ and a-changin’ all the dang time. The patient mule Jeeves just wants lunch. Thanks, MidJourney, good enough.
I read “Certain Names Make ChatGPT Grind to a Halt, and We Know Why.” The essay presents information about how the wizards at OpenAI solve problems its smart software creates. The fix is to channel the “rules-based approach,” which was pretty darned exciting decades ago. Like the AskJeeves approach, the use of hand-crafted rules creates several problems. The cited essay focuses on the use of “rules” to avoid legal hassles created when smart software just makes stuff up.
I want to highlight several other problems with rules-based decision systems which are far older in computer years than the AskJeeves marketing success in 1996. Let me highlight a few which may lurk within the OpenAI and ChatGPT smart software:
- Rules have to be created by a human in response to something another (often unpredictable) human did. Smart software gets something wrong, like saying a person is in jail or dead when he is free and undead.
- Rules have to be maintained. Like legacy code, setting and forgetting can have darned exciting consequences after the original rules creator changed jobs or fell into the category “in jail” or “dead.”
- Rules work with a limited set of bounded questions and answers. Rules fail when applied to the fast-changing and weird linguistic behavior of humans. If a “rule” does not know a word like “debanking,” the system will struggle, crash, or return zero results. Bummer.
- Rules seem like a great idea until someone calculates how many rules are needed, how much it costs to create a rule, and how much maintenance rules require (typically based on the cost of creating a rule in the first place). To keep the math simple, rules are expensive.
I liked the cited essay about OpenAI. It reminds me how darned smart today’s developers of smart software are. This dinobaby loved the article. What a great anecdote! I want to say, “OpenAI should have asked Jeeves.” I won’t. I will point out that IBM Watson, the Jeopardy-winner version, was rules based. In fact, rules are still around, and they still carry the cost burden like a patient donkey.
Stephen E Arnold, December 17, 2024
The Starlink Angle: Yacht, Contraband, and Global Satellite Connectivity
December 16, 2024
This blog post is the work of an authentic dinobaby. No smart software was used.
I have followed the ups and downs of Starlink satellite Internet connectivity in the Russian special operation. I have not paid much attention to more routine criminal use of the Starlink technology. Before I direct your attention to a write up about this Elon Musk enterprise, I want to mention that this use case for satellites caught my attention with Equatorial Communications’ innovations in 1979. Kudos to that outfit!
“Police Demands Starlink to Reveal Buyer of Device Found in $4.2 Billion Drug Bust” has a snappy subtitle:
Smugglers were caught with 13,227 pounds of meth
Hmmm. That works out to 6,000 kilograms or 6.6 short tons of meth worth an estimated $4 billion on the open market. And it is the government of India chasing the case. (Starlink is not yet licensed for operation in that country.)
The write up states:
officials have sent Starlink a police notice asking for details about the purchaser of one of its Starlink Mini internet devices that was found on the boat. It asks for the buyer’s name and payment method, registration details, and where the device was used during the smugglers’ time in international waters. The notice also asks for the mobile number and email registered to the Starlink account.
The write up points out:
Starlink has spent years trying to secure licenses to operate in India. It appeared to have been successful last month when the country’s telecom minister said Starlink was in the process of procuring clearances. The company has not yet secured these licenses, so it might be more willing than usual to help the authorities in this instance.
Starlink is interesting because it is a commercial enterprise operating in a freewheeling manner. Starlink’s response is not known as of December 12, 2024.
Stephen E Arnold, December 16, 2024
China Good, US Bad: Australia Reports the Unwelcome News
December 13, 2024
This write up was created by an actual 80-year-old dinobaby. If there is art, assume that smart software was involved. Just a tip.
I read “Critical Technology Tracker: Two Decades of Data Show Rewards of Long-Term Investment.” The write up was issued in September 2024, and I have no confidence that much has changed. I believe the US is the leader in marketing hyperbole output. Other countries are far behind, but some are closing the gaps. I will focus on the article, and I will leave it to you to read the full report available from the ASPI Australia Web site.
The main point of this report by the Australian Strategic Policy Institute is that the US has not invested in long-term research. I am not sure how much of this statement is a surprise to those who have watched US patents become idea recyclers, US education deteriorate, and the quest for big money intensify.
The cited summary of the research reports:
The US led in 60 of 64 technologies in the five years from 2003 to 2007, but in the most recent five year period, it was leading in just seven.
I want to point out that playing online games and doom scrolling are not fundamental technologies. The US has a firm grip on the downstream effects of applied technology. The fundamentals are simply not there. AI, which seems to be everywhere, is little more than word probability, which is not a fundamental; it is an application of methods.
The cited article points out:
The chart is easy to read. The red line heading up is China. The blue line going down is the US.
In what areas are China’s researchers making headway other than its ability to terminate some US imports quickly? Here’s what the cited article reports:
China has made its new gains in quantum sensors, high-performance computing, gravitational sensors, space launch and advanced integrated circuit design and fabrication (semiconductor chip making). The US leads in quantum computing, vaccines and medical countermeasures, nuclear medicine and radiotherapy, small satellites, atomic clocks, genetic engineering and natural language processing.
The list, one can argue, is arbitrary and easily countered by US researchers. There are patents, start-ups, big financial winners, and many fine research institutions. With AI poised to become really smart in a few years, why worry?
I am not worried because I am old. The people who need to worry are the parents of children who cannot read and comprehend, who do not study and master mathematics, who do not show much interest in basic science, and are indifferent to the concept of work ethic.
Australia is worried. It is making an attempt to choke off the perceived corrosive effects of the US social media juggernaut for those under 16 years of age. It is monitoring China’s activities in the Pacific. It is making an effort to enhance its military capabilities.
Is America worried? I would characterize the attitude here in rural Kentucky with the catchphrase of Mad Magazine’s mascot: “What, me worry?”
Stephen E Arnold, December 13, 2024
China Seeks to Curb Algorithmic Influence and Manipulation
December 5, 2024
Someone is finally taking decisive action against unhealthy recommendation algorithms, AI-driven price optimization, and exploitative gig-work systems. That someone is China. “China Sets Deadline for Big Tech to Clear Algorithm Issues, Close ‘Echo Chambers’,” reports the South China Morning Post. Ah, the efficiency of a repressive regime. Writer Hayley Wong informs us:
“Tech operators in China have been given a deadline to rectify issues with recommendation algorithms, as authorities move to revise cybersecurity regulations in place since 2021. A three-month campaign to address ‘typical issues with algorithms’ on online platforms was launched on Sunday, according to a notice from the Communist Party’s commission for cyberspace affairs, the Ministry of Industry and Information Technology, and other relevant departments. The campaign, which will last until February 14, marks the latest effort to curb the influence of Big Tech companies in shaping online views and opinions through algorithms – the technology behind the recommendation functions of most apps and websites. System providers should avoid recommendation algorithms that create ‘echo chambers’ and induce addiction, allow manipulation of trending items, or exploit gig workers’ rights, the notice said.
They should also crack down on unfair pricing and discounts targeting different demographics, ensure ‘healthy content’ for elderly and children, and impose a robust ‘algorithm review mechanism and data security management system’.”
Tech firms operating within China are also ordered to conduct internal investigations and improve algorithms’ security capabilities by the end of the year. What happens if firms fail? Reeducation? A visit to the death van? Or an opportunity to herd sheep in a really nice area near Xian? The brief write-up does not specify.
We think there may be a footnote to the new policy; for instance, “Use algos to advance our policies.”
Cynthia Murrell, December 5, 2024
Listary: A Chinese Alternative to Windows File Explorer
December 5, 2024
For anyone frustrated with Windows’ built-in search function, Lifehacker suggests an alternative. “Listary Is a Fast, Powerful Search Tool for Windows,” declares writer Justin Pot. He tells us:
“Listary is a free app with great indexing that allows you to find any file on your computer in just a couple of keystrokes. Tap the control key twice, start typing, and hit enter when you see what you want. You can also use the tool to launch applications or search the web. … The keyboard shortcut brings up a search window similar to Spotlight on the Mac. There is also a more advanced version of the application which you can bring up by clicking the tray icon for the application. This lets you do things like filter your search by file type or how recently it was created. This view also notably allows you to preview files before opening them, which I appreciate. You’re not limited to searching on your computer—you can also start web searches from here.”
That Web search function is preloaded with a few search engines, like Google, Wikipedia, IMDB, and YouTube, but one can add more platforms. The free version of Listary is for personal use only. The company, Bopsoft, makes its money on the Pro version, which is $20. Just once, not monthly or annually. That version offers network-drive indexing and customization options. Bopsoft appears to be based in Zaozhuang, China.
Cynthia Murrell, December 5, 2024