Google and X: Shall We Again Love These Bad Dogs?
November 30, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Two stories popped out of my blah newsfeed this morning (Thursday, November 30, 2023). I want to highlight each and offer a handful of observations. Why? I am a dinobaby, and I remember the adults who influenced me telling me to behave, use common sense, and follow the rules of “good” behavior. Dull? Yes. A license to cut corners and do crazy stuff? No.
The first story, if it is indeed accurate, is startling. “Google Caught Placing Big-Brand Ads on Hardcore Porn Sites, Report Says” includes a number of statements about the Google which make me uncomfortable. For instance:
advertisers who feel there’s no way to truly know if Google is meeting their brand safety standards are demanding more transparency from Google. Ideally, moving forward, they’d like access to data confirming where exactly their search ads have been displayed.
Where are big brand ads allegedly appearing? How about “undesirable sites.” What comes to mind for me is adult content. There are some quite sporty ads on certain sites that would make a Methodist Sunday school teacher blush.
These two big dogs are having a heck of a time ruining the living room sofa. Neither dog knows that the family will not be happy. These are dogs, not the mental heirs of Immanuel Kant. Thanks, MSFT Copilot. The stuffing looks like soap bubbles, but you are “good enough,” the benchmark for excellence today.
But the shocking factoid is that Google does not provide a way for advertisers to know where their ads have been displayed. Also, there is a possibility that Google shared ad revenue with entities which may be hostile to the interests of the US. Let’s hope that the assertions reported in the article are inaccurate. But if big brand ads are indeed displayed on sites with content which could conceivably erode brand value, what exactly is Google’s system doing? I will return to this question in the observations section of this essay.
The second article is equally shocking to me.
“Elon Musk Tells Advertisers: ‘Go F*** Yourself’” reports that the EV and rocket man with a big hole digging machine allegedly said about advertisers who purchase promotions on X.com (Twitter?):
“Don’t advertise,” … “If somebody is going to try to blackmail me with advertising, blackmail me with money, go f*** yourself. Go f*** yourself. Is that clear? I hope it is.” … If advertisers don’t return, Musk said, “what this advertising boycott is gonna do is it’s gonna kill the company.”
The cited story concludes with this statement:
The full interview was meandering and at times devolved into stream of consciousness responses; Musk spoke for triple the time most other interviewees did. But the questions around Musk’s own actions, and the resulting advertiser exodus — the things that could materially impact X — seemed to garner the most nonchalant answers. He doesn’t seem to care.
Two stories. Two large and successful companies. What can a person like me conclude, recognizing that both stories may have gaps and flaws:
- There is a disdain for old-fashioned “values” related to acceptable business practices
- The thread of pornography and foul language runs through the reports. The notion of well-crafted statements and behaviors is not part of the Google and X game plan in my view
- The indifference of the senior managers that seeps through the descriptions of how Google and X operate strikes me as intentional.
Now why?
I think that both companies are pushing the edge of business behavior. Google obviously is distributing ad inventory anywhere it can to try to create a market for more ads. Instead of telling advertisers where their ads are displayed or giving an advertiser control over where ads should appear, Google just displays the ads. The staggering irrelevance of the ads I see when I view a YouTube video is evidence that Google knows zero about me despite my being logged in and using some Google services. I don’t need feminine undergarments, concealed weapons products, or bogus health products.
With X.com the dismissive attitude of the firm’s senior management reeks of disdain. Why would someone advertise on a system which promotes behaviors that are detrimental to one’s mental set up?
The two companies are different, but in a way they are similar in their approach to users, customers, and advertisers. Something has gone off the rails in my opinion at both companies. It is generally a good idea to avoid riding trains which are known to run on bad tracks, ignore safety signals, and demonstrate remarkably questionable behavior.
What if the write ups are incorrect? Wow, both companies are paragons. What if both write ups are dead accurate? Wow, wow, the big dogs are tearing up the living room sofa. More than a “bad dog” scolding is needed to repair the furniture.
Stephen E Arnold, November 30, 2023
Google Maps: Rapid Progress on Un-Usability
November 30, 2023
This essay is the work of a dumb dinobaby. No smart software required.
I read a Xhitter.com post about Google Maps. Those who have either heard me talk about the “new” Google Maps or read some of my blog posts on the subject know my view. The current Google Maps is useless for my needs. Last year, as one of my team was driving to a Federal secure facility, I bought an overpriced paper map at one of the truck stops. Why? I had no idea how to interact with the map in a meaningful way. My recollection was that I could coax Google Maps and Waze to be semi-helpful. Now Google Maps’ developers have become tangled in a very large thorn bush. The developers discuss how large the thorn bush is, how sharp the thorns are, and how such a large thorn bush could thrive in the Googley hot house.
This dinobaby expresses some consternation at [a] not knowing where to look, [b] not knowing how to show the route, and [c] not causing a motor vehicle accident. Thanks, MSFT Copilot. Good enough, I think.
The result is enhancements to Google Maps which are the digital equivalent of skin cancer: a vehicle for advertising and engagement that no one can use without head-scratching moments. Am I alone in my complaint? Nope, the aforementioned Xhitter.com post aligns quite well with my perception. The author is a person who once designed a more usable version of Google Maps.
Her Xhitter.com post highlights the digital skin cancer the team of Googley wizards has concocted. Here’s her annotated screen capture of the life-threatening disfigurement:
She writes:
The map should be sacred real estate. Only things that are highly useful to many people should obscure it. There should be a very limited number of features that can cover the map view. And there are multiple ways to add new features without overlaying them directly on the map.
Sounds good. But Xooglers and other outsiders are not likely to get much traction from the Map team. Everyone is working hard at landing in the hot AI area or some other discipline which will deliver a bonus and a promotion. Maps? Nope.
The former Google Maps’ designer points out:
In 2007, I was 1 of 2 designers on Google Maps. At that time, Maps had already become a cluttered mess. We were wedging new features into any space we could find in the UI. The user experience was suffering and the product was growing increasingly complicated. We had to rethink the app to be simple and scale for the future.
Yep, Google Maps, a case study of brilliant people who have lost the atlas to reality. And what is “sacred” at Google? Ad revenue, not making dear old grandma safer when she drives. (Tesla, Cruise, where are those smart, self-driving cars? Ah, I forgot. They are with Waymo, keeping their profile low.)
Stephen E Arnold, November 30, 2023
Another Xoogler and More Process Insights
November 23, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Google employs many people. Over the last 25 years, quite a few Xooglers (former Google employees) have ended up out and about. I find the essays by the verbal Xooglers interesting. “Reflecting on 18 Years at Google” contains several intriguing comments. Let me highlight a handful of these. You will want to read the entire Hixie article to get the context for the snips I have selected.
The first point I underlined with blushing pink marker was:
I found it quite frustrating how teams would be legitimately actively pursuing ideas that would be good for the world, without prioritizing short-term Google interests, only to be met with cynicism in the court of public opinion.
Old timers share stories about the golden past in the high-technology world of online advertising. Thanks, Copilot, don’t overdo the schmaltz.
The “Google as a victim” notion is not often discussed — except by some Xooglers. I recall a comment made to me by a seasoned manager at another firm: “Yes, I am paranoid. They are out to get me.” That comment may apply to some professionals at Google.
How about this passage?
My mandate was to do the best thing for the web, as whatever was good for the web would be good for Google (I was explicitly told to ignore Google’s interests).
The oft-repeated idea that Google cares about its users and similar truisms are part of what I call the Google mythology. Intentionally, in my opinion, Google cultivates the “doing good” theme as part of its effort to distract observers from the actual engineering intent of the company. (You love those Google ads, don’t you?)
Google’s creative process is captured in this statement:
We essentially operated like a startup, discovering what we were building more than designing it.
I am not sure if this is part of Google’s effort to capture the “spirit” of the old-timey days of Bell Laboratories or an accurate representation of how directionless Google’s methods became over the years. What people “did” is clearly dissociated from the advertising mechanisms; the oversized tires and chrome do-dads were simply bolted onto the ageing vehicle.
And, finally, this statement:
It would require some shake-up at the top of the company, moving the center of power from the CFO’s office back to someone with a clear long-term vision for how to use Google’s extensive resources to deliver value to users.
What happened to the ideas of doing good and exploratory innovation?
Net net: Xooglers pine for the days of the digital gold rush. Googlers may not be aware of what the company is and does. That may be a good thing.
Stephen E Arnold, November 23, 2023
Anti-AI Fact Checking. What?
November 21, 2023
This essay is the work of a dumb dinobaby. No smart software required.
If this effort is sincere, at least one news organization is taking AI’s ability to generate realistic fakes seriously. Variety briefly reports, “CBS Launches Fact-Checking News Unit to Examine AI, Deepfakes, Misinformation.” Aptly dubbed “CBS News Confirmed,” the unit will be led by VPs Claudia Milne and Ross Dagan. Writer Brian Steinberg tells us:
“The hope is that the new unit will produce segments on its findings and explain to audiences how the information in question was determined to be fake or inaccurate. A July 2023 research note from the Northwestern Buffett Institute for Global Affairs found that the rapid adoption of content generated via A.I. ‘is a growing concern for the international community, governments and the public, with significant implications for national security and cybersecurity. It also raises ethical questions related to surveillance and transparency.’”
Why yes, good of CBS to notice. And what will it do about it? We learn:
“CBS intends to hire forensic journalists, expand training and invest in new technology, [CBS CEO Wendy] McMahon said. Candidates will demonstrate expertise in such areas as AI, data journalism, data visualization, multi-platform fact-checking, and forensic skills.”
So they are still working out the details, but want us to rest assured they have a plan. Or an outline. Or maybe a vague notion. At least CBS acknowledges this is a problem. Now what about all the other news outlets?
Cynthia Murrell, November 21, 2023
How Google Works: Think about Making Sausage in 4K on a Big Screen with Dolby Sound
November 16, 2023
This essay is the work of a dumb, dinobaby humanoid. No smart software required.
I love essays which provide a public glimpse of the way Google operates. An interesting insider description of the machinations of Googzilla’s lair appears in “What I Learned Getting Acquired by Google.” I am going to skip the “wow, the Google is great,” and focus on the juicy bits.
Driving innovation down Google’s Information Highway requires nerves of steel and the patience of Job. A good sense of humor, many brain cells, and a keen desire to make the techno-feudal system dominate are helpful as well. Thanks, Microsoft Bing. It only took four tries to get an illustration of vehicles without parts of each chopped off.
Here are the article’s “revelations.” It is almost like sitting in the Google cafeteria and listening to Tony Bennett croon. Alas, those days are gone, but the “best” parts of Google persist if the write up is on the money.
Let me highlight a handful of comments I found interesting and almost amusing:
- Google is, according to the author, “an ever shifting web of goals and efforts.” I think this means going in many directions at once. Chaos, not logic, drives the sports car down the Information Highway
- Google has employees who want “to ship great work, but often couldn’t.” Wow, the Googley management method wastes resources and opportunities due to the Googley outfit’s penchant for being Googley. Yeah, Googley because lousy stuff is one output, not excellence. Isn’t this regressive innovation?
- There are lots of managers or what the author calls “top heavy.” But those at the top are well paid, so what’s the incentive to slim down? Answer: No reason.
- Google is like a teen with a credit card and no way to pay the bill. The debt just grows. That’s Google except it is racking up technical debt and process debt. That’s a one-two punch for sure.
- To win at Google, one must know which game to play, what the rules of that particular game are, and then have the Machiavellian qualities to win the darned game. What about caring for the users? What? The users! Get real.
- Google screws up its acquisitions. Of course. Any company Google buys is populated with people not smart enough to work at Google in the first place. “Real” Googlers can fix any acquisition. The technique was perfected years ago with Dodgeball. Hey, remember that?
Please, read the original essay. The illustration shows a very old vehicle trying to work its way down an information highway choked with mud, blocked by farm equipment, and located in an isolated fairy land. Yep, that’s the Google. What happens if the massive flows of money are reduced? Yikes!
Stephen E Arnold, November 16, 2023
Google and the Tom Sawyer Method, Part Two
November 15, 2023
This essay is the work of a dumb humanoid. No smart software required.
What does a large online advertising company do when it cannot figure out what’s fake and what’s not? The answer, as I suggested in this post, is to get other people to do the work. The approach is cheap, shifts the burden to other people, and sidesteps direct testing of an automated “smart” system to detect fake data in the form of likenesses of living people or likenesses whose use requires a fee.
“YouTube Will Let Musicians and Actors Request Takedowns of Their Deepfakes” explains (sort of):
YouTube is making it “possible to request the removal of AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice.” Individuals can submit calls for removal through YouTube’s privacy request process …
I find this angle on the process noted in my “Google Solves Fake Information with the Tom Sawyer Method” a useful interpretation of what Google is doing.
From my point of view, Google wants others to do the work of monitoring, identifying, and filling out a form to request that fake information be removed. Never mind that Google has the data, the tags, and (in theory) the expertise to automate the process.
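What could that automation look like? Here is a minimal sketch, assuming a registry of protected likenesses and per-upload content tags. Every name in it is hypothetical; nothing below reflects an actual Google or YouTube system.

```python
# Hypothetical sketch: auto-flag uploads whose tags hit a registry of
# protected likenesses, instead of waiting for a manual takedown form.
# The registry, the tags, and the matching rule are all invented here.

PROTECTED_LIKENESSES = {"singer_jane_doe", "actor_john_roe"}  # assumed registry

def flag_upload(upload_tags: set[str]) -> bool:
    """Return True when an upload's tags intersect the protected registry."""
    return bool(upload_tags & PROTECTED_LIKENESSES)

uploads = [
    {"id": "v1", "tags": {"ai_generated", "singer_jane_doe"}},
    {"id": "v2", "tags": {"cat", "piano"}},
]

for upload in uploads:
    if flag_upload(upload["tags"]):
        print(f"{upload['id']}: flagged for review, no form required")
```

Even a toy filter like this inverts the burden: the platform does the matching, and the affected person is notified rather than conscripted.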
I admire Google. I bet Tom Sawyer’s distant relative now works at Google and cooked up this approach. Well done. Hit that Foosball game while others hunt for their fake or unauthorized likeness, their music, or some other copyrighted material.
Stephen E Arnold, November 15, 2023
Hitting the Center Field Wall, AI Suffers an Injury!
November 15, 2023
This essay is the work of a dumb, dinobaby humanoid. No smart software required.
At a reception at a government facility in Washington, DC, last week, one of the bright young sparks told me, “Every investment deal I see gets funded if it includes the words ‘artificial intelligence.’” I smiled and moved to another conversation. Wow, AI has infused the exciting world of a city built on the swampy marge of the Potomac River.
I think that the go-go era of smart software has reached a turning point. Venture firms and consultants may not have received the email with this news. However, my research team has, and the update contains information on two separate thrusts of the AI revolution.
The heroic athlete, supported by his publicist, makes a mighty effort to catch the long fly ball. Unfortunately our star runs into the wall, drops the ball, and suffers what may be a career-ending injury to his left hand. (It looks broken, doesn’t it?) Oh, well. Thanks, MSFT Bing. The perspective is weird and there is trash on the ground, but the image is good enough.
The first signal appears in “AI Companies Are Running Out of Training Data.” The notion that online information is infinite is a quaint one. But in the fever of the move online, reality is less interesting than the euphoria of the next gold rush or the new Industrial Revolution. Futurism reports:
Data plays a central role, if not the central role, in the AI economy. Data is a model’s vital force, both in basic function and in quality; the more natural — as in, human-made — data that an AI system has to train on, the better that system becomes. Unfortunately for AI companies, though, it turns out that natural data is a finite resource — and if that tap runs dry, researchers warn they could be in for a serious reckoning.
The information or data in question is not the smog emitted by modern automobiles’ chip-stuffed boxes. Nor is the data the streams of geographic information gathered by mobile phone systems. The high-value data are those which matter; for example, in a stream of securities information, which specific stock is moving because it is being manipulated by one of those bright young minds I met at the DC event.
The article “AI Companies Are Running Out of Training Data” adds:
But as data becomes increasingly valuable, it’ll certainly be interesting to see how many AI companies can actually compete for datasets — let alone how many institutions, or even individuals, will be willing to cough their data over to AI vacuums in the first place. But even then, there’s no guarantee that the data wells won’t ever run dry. As infinite as the internet seems, few things are actually endless.
The fix is synthetic or faked data; that is, fabricated data which appears to replicate real-life behavior. (Don’t you love it when Google predicts the weather or a smarty pants games the crypto market?)
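What does “fabricated data which appears to replicate real-life behavior” look like in practice? Here is a minimal sketch, assuming the crudest possible approach: fit simple statistics to a handful of real records, then sample look-alikes. Real synthetic-data pipelines are far more elaborate; the names and numbers below are invented for illustration.

```python
# Minimal sketch of synthetic data: fit coarse statistics to a handful of
# real records, then sample look-alike records. Purely illustrative.
import random
import statistics

real_amounts = [12.50, 9.99, 103.00, 45.25, 9.99, 88.10, 14.75]  # "real" data

mu = statistics.mean(real_amounts)
sigma = statistics.stdev(real_amounts)

def synthetic_amounts(n: int) -> list[float]:
    """Sample fabricated amounts from a normal fit to the real sample.
    The output mimics the distribution but adds no new information."""
    return [max(0.01, round(random.gauss(mu, sigma), 2)) for _ in range(n)]

print(synthetic_amounts(5))
```

Even this toy generator shows the catch: it can only re-emit patterns already present in the real sample, so training on its output recycles, rather than replaces, the exhausted good stuff.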
The message is simple: Smart software has ground through the good stuff and may face its version of an existential crisis. That’s different from the rah rah one usually hears about AI.
The second item my team called to my attention appears in a news story called “OpenAI Pauses New ChatGPT Plus Subscriptions Due to Surge in Demand.” I read the headline as saying, “Oh, my goodness, we don’t have the money or the capacity to handle more user requests.”
The article expresses the idea in this snappy 21st century way:
The decision to pause new ChatGPT signups follows a week where OpenAI services – including ChatGPT and the API – experienced a series of outages related to high-demand and DDoS attacks.
Okay, security and capacity.
What are the implications of these two unrelated stories:
- The run-up to AI has been boosted by system operators ignoring copyright and picking low-hanging fruit. The orchard is now looking thin. Apples grow on trees, just not quickly, and overcultivation can ruin the once fertile soil. Think a digital Dust Bowl perhaps?
- The friction of servicing user requests is causing slowdowns. Can the heat be dissipated? Absolutely, but the fix requires money, more than high school science club management techniques, and common sense. Do AI companies exhibit common sense? Yeah, sure. Every day.
- The lack of high-value or sort of good information is a bummer. Machines producing insights into the dark activities of bad actors and the thoughts of 12-year-olds are grinding along. However, the value of the information outputs seems to be lagging behind the marketers’ promises. One telling example is the outright failure of Israel’s smart software to have utility in identifying the intent of bad actors. My goodness, if any country has smart systems, it’s Israel. Based on events in the last couple of months, the flows of data produced what appears to be a failing grade.
If we take these two cited articles’ information at face value, one can make a case that the great AI revolution may be facing some headwinds. In a winner-take-all game like AI, there will be some Sad Sacks at those fancy Washington, DC receptions. Time to innovate and renovate perhaps?
Stephen E Arnold, November 15, 2023
The Risks of Smart Software in the Hands of Fullz Actors and Worse
November 7, 2023
This essay is the work of a dumb humanoid. No smart software required.
The ChatGPT and Sam AI-Man parade is getting more acts. I spotted some thumbs up from Satya Nadella about Sam AI-Man and his technology. The news service Techmeme provided me with dozens of links and enticing headlines about enterprise this and turbo that GPT. Those trumpets and tubas were pumping out the digital version of Funiculì, Funiculà.
I want to highlight one write up and point out an issue with smart software that appears to have been ignored or overlooked, an issue which, like the iceberg that sank the RMS Titanic, was a heck of a lot more dangerous than Captain Edward Smith appreciated.
The crowd is thrilled with the new capabilities of smart software. Imagine automating mundane, mindless work. Over the oom-pah of the band, one can sense the excitement of the Next Big Thing getting Bigger and more Thingier. In the crowd, however, are real or nascent bad actors. They are really happy too. Imagine how easy it will be to automate processes designed to steal personal financial data or exploit other chinks in humans’ armor!
The article is “How OpenAI Is Building a Path Toward AI Agents.” The main idea is that one can type instructions into Sam AI-Man’s GPT “system” and have smart software hook together discrete functions. These functions can then deliver an output requiring the actions of different services.
The write up approaches this announcement or marketing assertion with some prudence. The essay points out that “customer chatbots aren’t a new idea.” I agree. Connecting services has been one of the basic ideas of the use of software. Anyone who has used notched cards to retrieve items related to one another is going to understand the value of automation. And now, if the Sam AI-Man announcements are accurate that capability no longer requires old-fashioned learning the ropes.
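For readers who want the “hook together discrete functions” idea made concrete, here is a minimal sketch of what such orchestration amounts to. Everything in it is hypothetical: the stand-in model, the service names, and the plan. It simply shows a loop in which generated instructions trigger chained service calls.

```python
# Hypothetical sketch of "orchestration": a loop that maps model-proposed
# actions onto discrete service functions and chains their outputs.
from typing import Callable, Dict, List, Tuple

def check_calendar(date: str) -> str:
    return f"free slots on {date}: 10:00, 14:00"  # stand-in service

def send_email(body: str) -> str:
    return f"email sent: {body}"                  # stand-in service

# Registry of discrete functions the "agent" is allowed to call.
SERVICES: Dict[str, Callable[[str], str]] = {
    "check_calendar": check_calendar,
    "send_email": send_email,
}

def fake_model(instruction: str) -> List[Tuple[str, str]]:
    """Stand-in for an LLM that turns a typed instruction into a plan.
    A real system would generate this plan; here it is hard-coded."""
    return [("check_calendar", "2023-11-10"),
            ("send_email", "meeting proposal based on free slots")]

def orchestrate(instruction: str) -> List[str]:
    results: List[str] = []
    for action, argument in fake_model(instruction):
        if action not in SERVICES:            # the "allowed actions" boundary
            results.append(f"refused: {action}")
            continue
        results.append(SERVICES[action](argument))  # output feeds next step
    return results

print(orchestrate("Set up a meeting about the new marketing initiative"))
```

Note where the control sits: the registry of allowed functions. Whoever curates that registry is the referee, which is exactly the in-bounds, out-of-bounds question raised below.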
The cited write up about building a path asserts:
Once you start enabling agents like the ones OpenAI pointed toward today, you start building the path toward sophisticated algorithms manipulating the stock market; highly personalized and effective phishing attacks; discrimination and privacy violations based on automations connected to facial recognition; and all the unintended (and currently unimaginable) consequences of infinite AIs colliding on the internet.
Fear, uncertainty, and doubt are staples of advanced technology. And the essay makes clear that the rule maker in chief is Sam AI-Man; to wit:
After the event, I asked Altman how he was thinking about agents in general. Which actions is OpenAI comfortable letting GPT-4 take on the internet today, and which does the company not want to touch? Altman’s answer is that, at least for now, the company wants to keep it simple. Clear, direct actions are OK; anything that involves high-level planning isn’t.
Let me introduce my observations about the Sam AI-Man innovations and the type of explanations about the PR and marketing event which has whipped up pundits, poohbahs, and Twitter experts (perhaps I should say X-spurts?).
First, the Sam AI-Man announcements strike me as making orchestration as a service easy to use and widely available. Bad things won’t be allowed. But the core idea of what I call “orchestration” is where the parade is marching. I hear the refrain “Some think the world is made for fun and frolic.” But I don’t agree, I don’t agree. Because as advanced tools become widely available, the early adopters are not exclusively those who want to link a calendar to an email to a document about a meeting to talk about a new marketing initiative.
Second, the ability of Sam AI-Man to determine what’s in bounds and out of bounds is different from refereeing a pickleball game. Some of the players will be nation states with an adversarial view of the US of A. Furthermore, there are bad actors who have a knack for linking automated information to online extortion. These folks will be interested in cost cutting and efficiency. More problematic, some of these individuals will be more active in testing how orchestration can facilitate their human trafficking activities or drug sales.
Third, government entities and people like Sam AI-Man are, by definition, now in reactive mode. What I mean is that the announcement and the chatter are about automating the work required to create a snappy online article; that is not what a bad actor will do. Individuals will see opportunities to create new ways to exploit the cluelessness of employees, senior citizens, and young people. The cheerful announcements and the parade tunes cannot drown out the low frequency rumbles of excitement now rippling through the bad actor grapevines.
Net net: Crime propelled by orchestration is now officially a thing. The “regulations” of smart software, like the professionals who will have to deal with the downstream consequences of automation, are out of date. Am I worried? For me personally, no, I am not worried. For those who have to enforce the laws which govern a social construct? Yep, I have a bit of concern. Certainly more than those who are laughing and enjoying the parade.
Stephen E Arnold, November 7, 2023
Missing Signals: Are the Tools or Analysts at Fault?
November 7, 2023
This essay is the work of a dumb humanoid. No smart software required.
Returning from a trip to DC yesterday, I thought about “signals.” The pilot — a specialist in hit-the-runway-hard landings — used the word “signals” in his welcome-aboard speech. The word sparked two examples of missing signals. The first is the troubling kinetic activities in the Middle East. The second is the US Army reservist who went on a shooting rampage.
The intelligence analyst says, “I have tools. I have data. I have real time information. I have so many signals. Now which ones are important, accurate, and actionable?” Our intrepid professional displays the reality of separating the signal from the noise. Scary, right? Time for a Starbucks visit.
I know zero about what software and tools, systems and informers, and analytics and smart software the intelligence operators in Israel relied upon. I know even less about what mechanisms were in place when Robert Card killed more than a dozen people.
The Center for Strategic and International Studies published “Experts React: Assessing the Israeli Intelligence and Potential Policy Failure.” The write up stated:
It is incredible that Hamas planned, procured, and financed the attacks of October 7, likely over the course of at least two years, without being detected by Israeli intelligence. The fact that it appears to have done so without U.S. detection is nothing short of astonishing. The attack was complex and expensive.
And one more passage:
The fact that Israeli intelligence, as well as the international intelligence community (specifically the Five Eyes intelligence-sharing network), missed millions of dollars’ worth of procurement, planning, and preparation activities by a known terrorist entity is extremely troubling.
Now let’s shift to the Lewiston, Maine, shooting. I had saved on my laptop “Six Missed Warning Signs Before the Maine Mass Shooting Explained.” The UK newspaper The Guardian reported:
The information about why, despite the glaring sequence of warning signs that should have prevented him from being able to possess a gun, he was still able to own over a dozen firearms, remains cloudy.
Those “signs” included punching a fellow officer in the US Army Reserve force, spending some time in a mental health facility, family members emitting “watch this fellow” statements, vibes about issues from his workplace, and the weapon activity.
On one hand, Israel had intelligence inputs from just about every imaginable high-value source from people and software. On the other hand, in a small town the only signal that was not emitted by Mr. Card was buying a billboard and posting a message saying, “Do not invite Mr. Card to a church social.”
As the plane droned at 1973 speeds toward the flyover state of Kentucky, I jotted down several thoughts. Like it or not, here these ruminations are:
- Despite the baloney about identifying signals and determining which are important and which are not, existing systems and methods failed bigly. The proof? Dead people. Subsequent floundering.
- The mechanisms in place to deliver on point, significant information do not work. Perhaps it is the hustle bustle of everyday life? Perhaps it is that humans are not very good at figuring out what’s important and what’s unimportant. The proof? Dead people. Constant news releases about the next big thing in open source intelligence analysis. Get real. This stuff failed at the scale of SBF’s machinations.
- Add the uninformed pontifications of cyber security marketers, the bureaucratic chatter flowing from assorted government agencies, and the cloud of unknowing, even when the signals are as subtle as the foghorn on a cruise ship with a passenger overboard. Hello, hello, the basic analysis processes don’t work. A WeWork investor’s thought processes were more on point than the output of reporting systems in use in Maine and Israel.
After the aircraft did the thump-and-bump landing, I was able to walk away. That’s more than I can say for the victims of the analysis, investigation, and information processing methods in use where moose roam free and where intelware is crafted and sold like canned beans at Trader Joe’s.
Less baloney and more awareness that talking about advanced information methods is a heck of a lot easier than delivering actual signal analysis.
Stephen E Arnold, November 7, 2023
Bankrupting a City: Big Software, Complexity, and Human Shortcomings Does the Trick
September 15, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I have noticed failures in a number of systems. I have no empirical data, just anecdotal observations. In the last few weeks, I have noticed glitches in a local hospital’s computer systems. There have been some fascinating cruise ship problems. And the airlines are flying the flag for system ineptitudes. I would be remiss if I did not mention news reports about “near misses” at airports. A popular food chain has suffered six recalls in four or five weeks.
Most of these can be traced to software issues. Others are a hot mess combination of inexperienced staff and fouled-up enterprise resource planning workflows. None of the issues were a result of smart software. To correct that oversight, let me mention the propensity of driverless automobiles to mis-identify emergency vehicles or to show indifference to side street traffic at major intersections.
The information technology manager looks at the collapsing data center and asks, “Who is responsible for this issue?” No one answers. Those with any sense have adopted the van life, set up stalls to sell crafts at local art fairs, or accepted another job. Thanks, MidJourney. I guarantee your slide down the gradient descent is accelerating.
What’s up?
My personal view is that some people do not know how complex software works but depend on it despite that cloud of unknowing. Other people just trust the marketing people and buy what seems better, faster, and cheaper than an existing system which requires lots of money to keep chugging along.
Now we have an interesting case example that incorporates a number of management and technical issues. Birmingham, England, is now bankrupt. The reason? The cost of a new system sucked up the cash. My hunch is that King Charles or some other kind soul will keep the city solvent. But the idea of a city going broke because it could not manage a software project is illustrative of the future, in my opinion.
“Largest Local Government Body in Europe Goes Under amid Oracle Disaster” reports:
Birmingham City Council, the largest local authority in Europe, has declared itself in financial distress after troubled Oracle project costs ballooned from £20 million to around £100 million ($125.5 million).
An extra £80 million would make little difference to an Apple, Google, or Microsoft. To a city in the UK, the cost is a bit of a problem.
Several observations:
- Large project management expertise does not deliver functional solutions. How is that air traffic control or IRS system enhancement going?
- Vendors rely on marketing to close deals, and then expect engineers to just make the system work. If something is incomplete or not yet coded, the failure rate may be anticipated, right? Nope, what’s anticipated is a scope change and billing more money.
- Government agencies are not known for smooth, efficient technical capabilities. Agencies are good at statements of work which require many interesting and often impossible features. The procurement attorneys cannot spot these issues, but those folks ride herd on the legal lingo. Result? Slips betwixt cup and lip.
Are the names of the companies involved important? Nope. The same situation exists when any enterprise software vendor wins a contract based on a wild and wooly statement of work, managed by individuals who are not particularly adept at keeping complex technical work on time and on target, and when big outfits let vendors sell via PowerPoints and demonstrations, not engineering realities.
Net net: More of these types of cases will be coming down the pike.
Stephen E Arnold, September 15, 2023