Another Xoogler and More Process Insights
November 23, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Google employs many people, and over the last 25 years it has produced quite a few Xooglers (former Google employees) who are now out and about. I find the essays by the more verbal Xooglers interesting. “Reflecting on 18 Years at Google” contains several intriguing comments. Let me highlight a handful of these. You will want to read the entire Hixie article to get the context for the snips I have selected.
The first point I underlined with blushing pink marker was:
I found it quite frustrating how teams would be legitimately actively pursuing ideas that would be good for the world, without prioritizing short-term Google interests, only to be met with cynicism in the court of public opinion.
Old timers share stories about the golden past in the high-technology of online advertising. Thanks, Copilot, don’t overdo the schmaltz.
The “Google as a victim” notion is not often discussed — except by some Xooglers. I recall a comment made to me by a seasoned manager at another firm: “Yes, I am paranoid. They are out to get me.” That comment may apply to some professionals at Google.
How about this passage?
My mandate was to do the best thing for the web, as whatever was good for the web would be good for Google (I was explicitly told to ignore Google’s interests).
The oft-repeated idea is that Google cares about its users and similar truisms are part of what I call the Google mythology. Intentionally, in my opinion, Google cultivates the “doing good” theme as part of its effort to distract observers from the actual engineering intent of the company. (You love those Google ads, don’t you?)
Google’s creative process is captured in this statement:
We essentially operated like a startup, discovering what we were building more than designing it.
I am not sure if this is part of Google’s effort to capture the “spirit” of the old-timey days of Bell Laboratories or an accurate representation of how directionless Google’s methods became over the years. What people “did” is clearly dissociated from the advertising mechanisms that paid for the oversized tires and chrome doodads bolted onto the aging vehicle.
And, finally, this statement:
It would require some shake-up at the top of the company, moving the center of power from the CFO’s office back to someone with a clear long-term vision for how to use Google’s extensive resources to deliver value to users.
What happened to the ideas of doing good and exploratory innovation?
Net net: Xooglers pine for the days of the digital gold rush. Googlers may not be aware of what the company is and does. That may be a good thing.
Stephen E Arnold, November 23, 2023
Anti-AI Fact Checking. What?
November 21, 2023
This essay is the work of a dumb dinobaby. No smart software required.
If this effort is sincere, at least one news organization is taking AI’s ability to generate realistic fakes seriously. Variety briefly reports, “CBS Launches Fact-Checking News Unit to Examine AI, Deepfakes, Misinformation.” Aptly dubbed “CBS News Confirmed,” the unit will be led by VPs Claudia Milne and Ross Dagan. Writer Brian Steinberg tells us:
“The hope is that the new unit will produce segments on its findings and explain to audiences how the information in question was determined to be fake or inaccurate. A July 2023 research note from the Northwestern Buffett Institute for Global Affairs found that the rapid adoption of content generated via A.I. ‘is a growing concern for the international community, governments and the public, with significant implications for national security and cybersecurity. It also raises ethical questions related to surveillance and transparency.’”
Why yes, good of CBS to notice. And what will it do about it? We learn:
“CBS intends to hire forensic journalists, expand training and invest in new technology, [CBS CEO Wendy] McMahon said. Candidates will demonstrate expertise in such areas as AI, data journalism, data visualization, multi-platform fact-checking, and forensic skills.”
So they are still working out the details, but want us to rest assured they have a plan. Or an outline. Or maybe a vague notion. At least CBS acknowledges this is a problem. Now what about all the other news outlets?
Cynthia Murrell, November 21, 2023
How Google Works: Think about Making Sausage in 4K on a Big Screen with Dolby Sound
November 16, 2023
This essay is the work of a dumb, dinobaby humanoid. No smart software required.
I love essays which provide a public glimpse of the way Google operates. An interesting insider description of the machinations of Googzilla’s lair appears in “What I Learned Getting Acquired by Google.” I am going to skip the “wow, the Google is great,” and focus on the juicy bits.
Driving innovation down Google’s Information Highway requires nerves of steel and the patience of Job. A good sense of humor, many brain cells, and a keen desire to make the techno-feudal system dominate are helpful as well. Thanks, Microsoft Bing. It only took four tries to get an illustration of vehicles without parts of each chopped off.
Here are the article’s “revelations.” It is almost like sitting in the Google cafeteria and listening to Tony Bennett croon. Alas, those days are gone, but the “best” parts of Google persist if the write up is on the money.
Let me highlight a handful of comments I found interesting and almost amusing:
- Google is, according to the author, “an ever shifting web of goals and efforts.” I think this means going in many directions at once. Chaos, not logic, drives the sports car down the Information Highway.
- Google has employees who want “to ship great work, but often couldn’t.” Wow, the Googley management method wastes resources and opportunities due to the Googley outfit’s penchant for being Googley. Yeah, Googley because lousy stuff is one output, not excellence. Isn’t this regressive innovation?
- There are lots of managers; the structure is what the author calls “top heavy.” But those at the top are well paid, so what’s the incentive to slim down? Answer: No reason.
- Google is like a teen with a credit card and no way to pay the bill. The debt just grows. That’s Google, except the company is racking up technical debt and process debt. That’s a one-two punch for sure.
- To win at Google, one must know which game to play, what the rules of that particular game are, and then have the Machiavellian qualities to win the darned game. What about caring for the users? What? The users! Get real.
- Google screws up its acquisitions. Of course. Any company Google buys is populated with people not smart enough to work at Google in the first place. “Real” Googlers can fix any acquisition. The technique was perfected years ago with Dodgeball. Hey, remember that?
Please, read the original essay. The illustration shows a very old vehicle trying to work its way down an information highway choked with mud, blocked by farm equipment, and located in an isolated fairy land. Yep, that’s the Google. What happens if the massive flows of money are reduced? Yikes!
Stephen E Arnold, November 16, 2023
Google and the Tom Sawyer Method, Part Two
November 15, 2023
This essay is the work of a dumb humanoid. No smart software required.
What does a large online advertising company do when it cannot figure out what’s fake and what’s not? The answer, as I suggested in this post, is to get other people to do the work. The approach is cheap, shifts the burden to other people, and sidesteps direct testing of an automated “smart” system designed to detect fake data in the form of likenesses of living people or likenesses whose use requires a fee.
“YouTube Will Let Musicians and Actors Request Takedowns of Their Deepfakes” explains (sort of):
YouTube is making it “possible to request the removal of AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice.” Individuals can submit calls for removal through YouTube’s privacy request process …
I find this angle on the process noted in my “Google Solves Fake Information with the Tom Sawyer Method” a useful interpretation of what Google is doing.
From my point of view, Google wants others to do the work of monitoring, identifying, and filling out a form to request that fake information be removed. Never mind that Google has the data, the tags, and (in theory) the expertise to automate the process.
I admire Google. I bet Tom Sawyer’s distant relative now works at Google and cooked up this approach. Well done. Hit that Foosball game while others hunt for their fake or unauthorized likeness, their music, or some other copyrighted material.
Stephen E Arnold, November 15, 2023
Hitting the Center Field Wall, AI Suffers an Injury!
November 15, 2023
This essay is the work of a dumb, dinobaby humanoid. No smart software required.
At a reception at a government facility in Washington, DC, last week, one of the bright young sparks told me, “Every investment deal I see gets funded if it includes the words ‘artificial intelligence.’” I smiled and moved to another conversation. Wow, AI has infused the exciting world of a city built on the swampy marge of the Potomac River.
I think that the go-go era of smart software has reached a turning point. Venture firms and consultants may not have received the email with this news. However, my research team has, and the update contains information on two separate thrusts of the AI revolution.
The heroic athlete, supported by his publicist, makes a valiant effort to catch the long fly ball. Unfortunately our star runs into the wall, drops the ball, and suffers what may be a career-ending injury to his left hand. (It looks broken, doesn’t it?) Oh, well. Thanks, MSFT Bing. The perspective is weird and there is trash on the ground, but the image is good enough.
The first signal appears in “AI Companies Are Running Out of Training Data.” The notion that online information is infinite is a quaint one. But in the fever of moving to online, reality is less interesting than the euphoria of the next gold rush or the new Industrial Revolution. Futurism reports:
Data plays a central role, if not the central role, in the AI economy. Data is a model’s vital force, both in basic function and in quality; the more natural — as in, human-made — data that an AI system has to train on, the better that system becomes. Unfortunately for AI companies, though, it turns out that natural data is a finite resource — and if that tap runs dry, researchers warn they could be in for a serious reckoning.
The information or data in question is not the smog emitted by modern automobiles’ chip-stuffed boxes. Nor is it the streams of geographic information gathered by mobile phone systems. The high-value data are those which matter; for example, in a stream of securities information, which specific stock is moving because it is being manipulated by one of those bright young minds I met at the DC event.
The article “AI Companies Are Running Out of Training Data” adds:
But as data becomes increasingly valuable, it’ll certainly be interesting to see how many AI companies can actually compete for datasets — let alone how many institutions, or even individuals, will be willing to cough their data over to AI vacuums in the first place. But even then, there’s no guarantee that the data wells won’t ever run dry. As infinite as the internet seems, few things are actually endless.
The fix is synthetic or faked data; that is, fabricated data which appears to replicate real-life behavior. (Don’t you love it when Google predicts the weather or a smarty pants games the crypto market?)
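To make the synthetic data idea concrete, here is a toy Python illustration, my own example rather than any vendor’s pipeline: fit a trivial statistical model to a handful of real observations, then fabricate look-alike records from it.

```python
# A toy illustration of "synthetic" training data: fit a simple model to a
# small sample of real observations, then sample look-alike records from it.
# Real pipelines are far more elaborate; this shows the bare idea only.

import random
import statistics

real_data = [102.5, 98.7, 101.2, 99.9, 100.4, 97.8]  # e.g., observed prices

mu = statistics.mean(real_data)
sigma = statistics.stdev(real_data)

# Fabricate new "observations" that mimic the real distribution.
synthetic_data = [random.gauss(mu, sigma) for _ in range(1000)]

# The synthetic sample tracks the real one statistically...
print(round(statistics.mean(synthetic_data), 1), round(mu, 1))
# ...but it contains no new facts about the world, which is the rub.
```

The fabricated sample mimics the real one statistically, but it arguably adds no new information about the world, which is why the fix may only postpone the reckoning the researchers describe.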
The message is simple: Smart software has ground through the good stuff and may face its version of an existential crisis. That’s different from the rah rah one usually hears about AI.
The second item my team called to my attention appears in a news story called “OpenAI Pauses New ChatGPT Plus Subscriptions Due to Surge in Demand.” I read the headline as saying, “Oh, my goodness, we don’t have the money or the capacity to handle more user requests.”
The article expresses the idea in this snappy 21st century way:
The decision to pause new ChatGPT signups follows a week where OpenAI services – including ChatGPT and the API – experienced a series of outages related to high-demand and DDoS attacks.
Okay, security and capacity.
What are the implications of these two unrelated stories:
- The run up to AI has been boosted by system operators ignoring copyright and picking low-hanging fruit. The orchard is now looking thin. Apples grow on trees, just not quickly, and overcultivation can ruin the once-fertile soil. Think a digital Dust Bowl perhaps?
- The friction of servicing user requests is causing slowdowns. Can the heat be dissipated? Absolutely, but the fix requires money, more than high school science club management techniques, and common sense. Do AI companies exhibit common sense? Yeah, sure. Every day.
- The lack of high-value or sort of good information is a bummer. Machines producing insights into the dark activities of bad actors and the thoughts of 12-year-olds are grinding along. However, the value of the information outputs seems to be lagging behind the marketers’ promises. One telling example is the outright failure of Israel’s smart software to have utility in identifying the intent of bad actors. My goodness, if any country has smart systems, it’s Israel. Based on events in the last couple of months, the flows of data produced what appears to be a failing grade.
If we take these two cited articles’ information at face value, one can make a case that the great AI revolution may be facing some headwinds. In a winner-take-all game like AI, there will be some Sad Sacks at those fancy Washington, DC receptions. Time to innovate and renovate perhaps?
Stephen E Arnold, November 15, 2023
The Risks of Smart Software in the Hands of Fullz Actors and Worse
November 7, 2023
This essay is the work of a dumb humanoid. No smart software required.
The ChatGPT and Sam AI-Man parade is getting more acts. I spotted some thumbs up from Satya Nadella about Sam AI-Man and his technology. The news service Techmeme provided me with dozens of links and enticing headlines about enterprise this and turbo that GPT. Those trumpets and tubas were pumping out the digital version of Funiculì, Funiculà.
I want to highlight one write up and point out an issue with smart software that appears to have been ignored or overlooked, an issue which, like the iceberg that sank the RMS Titanic, is a heck of a lot more dangerous than Captain Edward Smith appreciated.
The crowd is thrilled with the new capabilities of smart software. Imagine automating mundane, mindless work. Over the oom-pah of the band, one can sense the excitement of the Next Big Thing getting Bigger and more Thingier. In the crowd, however, are real or nascent bad actors. They are really happy too. Imagine how easy it will be to automate processes designed to steal personal financial data or other chinks in humans’ armor!
The article is “How OpenAI Is Building a Path Toward AI Agents.” The main idea is that one can type instructions into Sam AI-Man’s GPT “system” and have smart software hook together discrete functions. These functions can then deliver an output requiring the actions of different services.
The write up approaches this announcement or marketing assertion with some prudence. The essay points out that “customer chatbots aren’t a new idea.” I agree. Connecting services has been one of the basic ideas of the use of software. Anyone who has used notched cards to retrieve items related to one another is going to understand the value of automation. And now, if the Sam AI-Man announcements are accurate, that capability no longer requires old-fashioned learning of the ropes.
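To make the orchestration idea concrete, here is a minimal Python sketch of the general pattern: the model emits a structured call, and a thin dispatcher decides which local functions it may touch. Every name in it (get_calendar, send_email, dispatch, ALLOWED_TOOLS) is my invention for illustration; this is not OpenAI’s actual API, just the shape of hooking discrete functions together.

```python
import json

# Hypothetical local "services" the agent is permitted to call.
def get_calendar(day: str) -> list[str]:
    return ["9:00 standup", "14:00 marketing review"]  # canned demo data; day ignored

def send_email(to: str, body: str) -> str:
    return f"email queued for {to}"  # a real system would hit a mail service

ALLOWED_TOOLS = {"get_calendar": get_calendar, "send_email": send_email}

def dispatch(model_output: str) -> str:
    """Route a model-produced tool call to the matching local function."""
    call = json.loads(model_output)  # e.g. {"name": ..., "arguments": {...}}
    fn = ALLOWED_TOOLS.get(call["name"])
    if fn is None:
        return "refused: tool not on the allow list"  # the referee's whistle
    return str(fn(**call["arguments"]))

# What a model might emit after being told "email my Tuesday schedule to Pat":
print(dispatch('{"name": "get_calendar", "arguments": {"day": "Tuesday"}}'))
print(dispatch('{"name": "send_email", "arguments": {"to": "pat@example.com", "body": "9:00 standup"}}'))
```

The ALLOWED_TOOLS allow list is where the “clear, direct actions are OK” boundary would live, and it is exactly the fence a motivated bad actor would probe first.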
The cited write up about building a path asserts:
Once you start enabling agents like the ones OpenAI pointed toward today, you start building the path toward sophisticated algorithms manipulating the stock market; highly personalized and effective phishing attacks; discrimination and privacy violations based on automations connected to facial recognition; and all the unintended (and currently unimaginable) consequences of infinite AIs colliding on the internet.
Fear, uncertainty, and doubt are staples of advanced technology. And the essay makes clear that the rule maker in chief is Sam AI-Man; to wit, the essay says:
After the event, I asked Altman how he was thinking about agents in general. Which actions is OpenAI comfortable letting GPT-4 take on the internet today, and which does the company not want to touch? Altman’s answer is that, at least for now, the company wants to keep it simple. Clear, direct actions are OK; anything that involves high-level planning isn’t.
Let me introduce my observations about the Sam AI-Man innovations and the type of explanations offered at the PR and marketing event which has whipped up pundits, poohbahs, and Twitter experts (perhaps I should say X-spurts?).
First, the Sam AI-Man announcements strike me as making orchestration as a service easy to use and widely available. Bad things won’t be allowed. But the core idea of what I call “orchestration” is where the parade is marching. I hear the refrain “Some think the world is made for fun and frolic.” But I don’t agree, I don’t agree. As advanced tools become widely available, the early adopters are not exclusively those who want to link a calendar to an email to a document about a meeting to talk about a new marketing initiative.
Second, the ability of Sam AI-Man to determine what’s in bounds and out of bounds is different from refereeing a pickleball game. Some of the players will be nation states with an adversarial view of the US of A. Furthermore, there are bad actors who have a knack for linking automated information to online extortion. These folks will be interested in cost cutting and efficiency. More problematic, some of these individuals will be more active in testing how orchestration can facilitate their human trafficking activities or drug sales.
Third, government entities and people like Sam AI-Man are, by definition, now in reactive mode. What I mean is that the announcement and the chatter focus on automating the work required to create a snappy online article; that is not what a bad actor will do. Individuals will see opportunities to create new ways to exploit the cluelessness of employees, senior citizens, and young people. The cheerful announcements and the parade tunes cannot drown out the low frequency rumbles of excitement now rippling through the bad actor grapevines.
Net net: Crime propelled by orchestration is now officially a thing. The “regulations” of smart software, like the professionals who will have to deal with the downstream consequences of automation, are out of date. Am I worried? For me personally, no, I am not worried. For those who have to enforce the laws which govern a social construct? Yep, I have a bit of concern. Certainly more than those who are laughing and enjoying the parade.
Stephen E Arnold, November 7, 2023
Missing Signals: Are the Tools or Analysts at Fault?
November 7, 2023
This essay is the work of a dumb humanoid. No smart software required.
Returning from a trip to DC yesterday, I thought about “signals.” The pilot — a specialist in hit-the-runway-hard landings — used the word “signals” in his welcome-aboard speech. The word sparked two examples of missing signals. The first is the troubling kinetic activities in the Middle East. The second is the US Army reservist who went on a shooting rampage.
The intelligence analyst says, “I have tools. I have data. I have real time information. I have so many signals. Now which ones are important, accurate, and actionable?” Our intrepid professional displays the reality of separating the signal from the noise. Scary, right? Time for a Starbucks visit.
I know zero about what software and tools, systems and informers, and analytics and smart software the intelligence operators in Israel relied upon. I know even less about what mechanisms were in place when Robert Card killed more than a dozen people.
The Center for Strategic and International Studies published “Experts React: Assessing the Israeli Intelligence and Potential Policy Failure.” The write up stated:
It is incredible that Hamas planned, procured, and financed the attacks of October 7, likely over the course of at least two years, without being detected by Israeli intelligence. The fact that it appears to have done so without U.S. detection is nothing short of astonishing. The attack was complex and expensive.
And one more passage:
The fact that Israeli intelligence, as well as the international intelligence community (specifically the Five Eyes intelligence-sharing network), missed millions of dollars’ worth of procurement, planning, and preparation activities by a known terrorist entity is extremely troubling.
Now let’s shift to the Lewiston Maine shooting. I had saved on my laptop “Six Missed Warning Signs Before the Maine Mass Shooting Explained.” The UK newspaper The Guardian reported:
The information about why, despite the glaring sequence of warning signs that should have prevented him from being able to possess a gun, he was still able to own over a dozen firearms, remains cloudy.
Those “signs” included punching a fellow officer in the US Army Reserve, spending some time in a mental health facility, family members emitting “watch this fellow” statements, vibes about issues from his workplace, and the weapon activity.
On one hand, Israel had intelligence inputs from just about every imaginable high-value source from people and software. On the other hand, in a small town the only signal that was not emitted by Mr. Card was buying a billboard and posting a message saying, “Do not invite Mr. Card to a church social.”
As the plane droned at 1973 speeds toward the flyover state of Kentucky, I jotted down several thoughts. Like it or not, here these ruminations are:
- Despite the baloney about identifying signals and determining which are important and which are not, existing systems and methods failed bigly. The proof? Dead people. Subsequent floundering.
- The mechanisms in place to deliver on point, significant information do not work. Perhaps it is the hustle bustle of everyday life? Perhaps it is that humans are not very good at figuring out what’s important and what’s unimportant. The proof? Dead people. Constant news releases about the next big thing in open source intelligence analysis. Get real. This stuff failed at the scale of SBF’s machinations.
- Consider the uninformed pontifications of cyber security marketers, the bureaucratic chatter flowing from assorted government agencies, and the cloud of unknowing that persists even when the signals are as subtle as the foghorn on a cruise ship with a passenger overboard. Hello, hello, the basic analysis processes don’t work. A WeWork investor’s thought processes were more on point than the output of the reporting systems in use in Maine and Israel.
After the aircraft did the thump-and-bump landing, I was able to walk away. That’s more than I can say for the victims of the analysis, investigation, and information processing methods in use where moose roam free and where intelware is crafted and sold like canned beans at Trader Joe’s.
Less baloney and more awareness that talking about advanced information methods is a heck of a lot easier than delivering actual signal analysis.
Stephen E Arnold, November 7, 2023
Bankrupting a City: Big Software, Complexity, and Human Shortcomings Does the Trick
September 15, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I have noticed failures in a number of systems. I have no empirical data, just anecdotal observations. In the last few weeks, I have noticed glitches in a local hospital’s computer systems. There have been some fascinating cruise ship problems. And the airlines are flying the flag for system ineptitudes. I would be remiss if I did not mention news reports about “near misses” at airports. A popular food chain has suffered six recalls in four or five weeks.
Most of these can be traced to software issues. Others are a hot mess combination of inexperienced staff and fouled up enterprise resource planning workflows. None of the issues were a result of smart software. To correct that oversight, let me mention the propensity of driverless automobiles to misidentify emergency vehicles or to show some indifference to side street traffic at major intersections.
The information technology manager looks at the collapsing data center and asks, “Who is responsible for this issue?” No one answers. Those with any sense have adopted the van life, set up stalls to sell crafts at local art fairs, or accepted another job. Thanks, MidJourney. I guarantee your sliding down the gradient descent is accelerating.
What’s up?
My personal view is that some people do not know how complex software works but depend on it despite that cloud of unknowing. Other people just trust the marketing people and buy what seems better, faster, and cheaper than an existing system which requires lots of money to keep chugging along.
Now we have an interesting case example that incorporates a number of management and technical issues. Birmingham, England is now bankrupt. The reason? The cost of a new system sucked up the cash. My hunch is that King Charles or some other kind soul will keep the city solvent. But the idea of a city going broke because it could not manage a software project is illustrative of the future in my opinion.
“Largest Local Government Body in Europe Goes Under amid Oracle Disaster” reports:
Birmingham City Council, the largest local authority in Europe, has declared itself in financial distress after troubled Oracle project costs ballooned from £20 million to around £100 million ($125.5 million).
An extra £80 million would make little difference to an Apple, Google, or Microsoft. To a city in the UK, the cost is a bit of a problem.
Several observations:
- Large project management expertise does not deliver functional solutions. How is that air traffic control or IRS system enhancement going?
- Vendors rely on marketing to close deals, and then expect engineers to just make the system work. If something is incomplete or not yet coded, the failure rate may be anticipated, right? Nope, what’s anticipated is a scope change and billing more money.
- Government agencies are not known for smooth, efficient technical capabilities. Agencies are good at statements of work which require many interesting and often impossible features. The procurement attorneys cannot spot these issues, but those folks ride herd on the legal lingo. Result? Slips betwixt cup and lip.
Are the names of the companies involved important? Nope. The same situation exists when any enterprise software vendor wins a contract based on a wild and woolly statement of work, managed by individuals who are not particularly adept at keeping complex technical work on time and on target, and when big outfits let vendors sell via PowerPoints and demonstrations, not engineering realities.
Net net: More of these types of cases will be coming down the pike.
Stephen E Arnold, September 15, 2023
Generative AI: Not So Much a Tool But Something Quite Different
August 24, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Thirty years ago I had an opportunity to do a somewhat peculiar job. I had written, for a publisher in the UK, a version of a report my team and I prepared about Japanese investments in the Fifth Generation Computer Revolution or some such government effort. A wealthy person who owned a medium-sized financial firm asked me if I would comment on a book called The Meaning of the Microcosm. “Sure,” I said.
This tiny, cute technology creature has just crawled from the ocean, and it is looking for lunch. Who knew that it could morph into a much larger and more disruptive beast? Thanks, MidJourney. No review committee for me this morning.
What I described was technology’s Darwinian behavior. I am not sure I was breaking new ground, but it seemed safe for me to point to how a technology survived. Therefore, I argued in a private report to this wealthy fellow that betting on a surviving technology would make one rich. I tossed in an idea that I have thought about for many years; specifically, as technologies battle to “survive,” the technologies evolve and mutate. The angle I have commented about for many years is simple: Predicting how a technology mutates is a tricky business. Mutations can be tough to spot or just pop up. Change just says, “Hello, I am here.”
I thought about this “book commentary project” when I read “How ChatGPT Turned Generative AI into an Anything Tool.” The article makes a number of interesting observations. Here’s one I noted:
But perhaps inadvertently, these same changes let the successors to GPT3, like GPT3.5 and GPT4, be used as powerful, general-purpose information-processing tools—tools that aren’t dependent on the knowledge the AI model was originally trained on or the applications the model was trained for. This requires using the AI models in a completely different way—programming instead of chatting, new data instead of training. But it’s opening the way for AI to become general purpose rather than specialized, more of an “anything tool.”
I am not sure that “anything tool” is a phrase with traction, but it captures the idea of a technology that began as a sea creature, morphing, and then crawling out of the ocean looking for something to eat. The current hungry technology is smart software. Many people see the potential of combining repetitive processes with smart software in order to combine functions, reduce costs, or create alternatives to traditional methods of accomplishing a task. A good example is the use college students are making of the “writing” ability of free or low cost services like ChatGPT or You.com.
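The “programming instead of chatting” idea is easy to sketch. In the toy Python below, a fixed prompt template turns a general model into a single-purpose classifier over brand new data, no retraining involved. The call_llm function is a hypothetical placeholder for whatever hosted completion API one uses, not a real client library.

```python
# "Programming instead of chatting": the same generic model becomes a
# reusable function when wrapped in a fixed prompt template.

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder; in practice this would call a hosted model.
    return "negative" if "died" in prompt else "positive"

TEMPLATE = (
    "Classify the sentiment of the review below as positive or negative.\n"
    "Answer with a single word.\n\nReview: {review}"
)

def classify(review: str) -> str:
    """Process data the model never trained on via a fixed template."""
    return call_llm(TEMPLATE.format(review=review)).strip().lower()

reviews = ["The battery died in a day.", "Best purchase I made all year."]
print([classify(r) for r in reviews])  # ['negative', 'positive']
```

The new data flows through the prompt rather than through training, which is what makes the model behave like an “anything tool.”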
But more is coming. As I recall, in my discussion of the microcosm book, I echoed Mr. Gilder’s point that small-scale systems and processes can have profound effects on larger systems and society as a whole. But a technology “innovation” like generative AI is simultaneously “small” and “large.” Perspective and point of view are important in software. Plus, the innovations of the transformer and the larger applications of generative AI to college essays illustrate the scaling impact.
What makes AI interesting for me at this time is that genetic / Darwinian change is occurring across the scale spectrum. On one hand, developers are working to create big applications; for instance, SaaS solutions that serve millions of users. On the other hand, the shift from large language models to smaller, more efficient methods of getting smart aims to reduce costs and speed the functioning of the plumbing.
The cited essay in Ars Technica is on the right track. However, the examples chosen are, it seems to me, ignoring the surprises the iterations of the technology will deliver. Is this good or bad? I have no opinion. What is important is that wild and crazy ideas about control and regulation strike me as bureaucratic time wasting. Millions of years ago the trick was to get out of the way of the hungry creature crawling from the ocean of ones and zeros, then figure out how to catch the creature and have it for dinner, turn its body parts into jewelry which can be sold online, or process the beastie into a heat-and-serve meal at Trader Joe’s.
My point is that the generative innovations do not comprise a “tool.” We’re looking at something different, semi-intelligent, and evolving with speed. Will it be “let’s have lunch” or “one is lunch”?
Stephen E Arnold, August 24, 2023
A Group without a Leader: Lost in the Digital Wilderness. No Signal, No Hope
August 10, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I read a story in a business magazine which may not make executives at a certain company happy. In fact, some of these executives may be thinking about some form of digital retribution. The story concerns Google Maps, a Google product/service which I find is pretty much unusable. Keep in mind that I am a dinobaby and dinobaby talons can’t hit the miniature graphics which cover Google maps like my ninth-grade acne. (Yeah, ugly.)
A high technology company’s project team. The group is lost. No one has any idea what to do or which direction to take. Their manager told them, “Rely on the digital maps your colleagues made.” How is that working out for you? Thanks, MidJourney. You have the desperation look nailed.
“Google Maps Has Become an Eyesore. 5 Examples of How the App Has Lost Its Way” identifies five issues the author, who probably has more online experience than I do, has with the much-loved service. The “love” refers to the revenue generated from Google Maps, not the “love” or lack of it from users like me.
These range from icon acne to weird behaviors with the street name “feature.” I am not going to romp through the issues the article flags. I want to focus on two which are deal breakers for me. In fact, the digital map thing recently forced me to purchase a trucker’s edition of a printed road map to the United States.
For one thing, Google has made it difficult for me (probably not for you, dear GenX reader) to find the street view. I quite like finding a location and then being able to look at the physical surroundings. How do I address this need now? I use Bing Maps.
The second issue that drives me crazy is the omission of businesses (intentional or unintentional) because the business does not advertise. I have written about the Cuba Libre Restaurant issue, and it bedevils me even today. I was standing in front of the bustling Washington, DC, restaurant, but my digital map service did not show it. Objectivity, thy name is not Googzilla, I say.
Let me shift gears and offer my hypothesis why Google Maps is almost unusable for me.
Imagine a team responsible for a mapping product. There are a couple of people who have some tenure with the team. A couple have escaped from a more dysfunctional team; for example, a failed smart software project. Plus, there are two new hires with zero clue how or why they are working on maps. These individuals are experts in data center engineering and never leave the servers and, therefore, don’t know anything about maps, just wiring diagrams.
Okay, now the group sits around and someone says, “What are we supposed to do?” The most senior person, who is totally occupied with getting on a hot team focused on killing another vendor’s AI effort, says, “Let’s just come up with some ideas and implement a couple.” The group mumbles, plays with mobile devices, chats with the data center wizard about slow response on the internal messaging system, and looks out the windows. One hard charger says, “Let’s make a list of ideas on the whiteboard, rank them, and do the top two or three.” More mumbles. A list is generated. The six-person team breaks into two groups, and the employees retreat to the snack area to talk about implementing the functions. The work is agreed upon, and the coding is dumped on the two network professionals. These individuals write the code, make sure it doesn’t kill anything, and email it to the others on the team. No one looks at it, but everyone says, “Fine.” Done.
This work procedure evidences:
- Zero guidance from an involved, motivated “manager”
- The mental attitude of the engineers
- The indifference of the individuals to the idea of delivering useful, quality features.
Now the author of the article about Google Maps knows nothing about this modern management approach to adding features at an unnamed high technology company.
That’s why I don’t rely on digital maps. The printed map works just fine. Plus I have to stop and read the map. There is none of this figuring out a map while driving or walking, which can lead to a collision with a smart, self-driving automobile or an engineer looking for work at another company.
Stephen E Arnold, August 10, 2023