Judgment Before? No. Backing Off After? Yes.
August 5, 2024
I wanted to capture two moves from two technology giants. The first item is the report that Google pulled the oh-so-Googley ad about a father using Gemini to write a personal note to his daughter. If you are not familiar with the burst of creative marketing, you can glean a few details from "Google Pulls Gemini AI Ad from Olympics after Backlash." The second item is the report that according to Bloomberg, "Apple Pulls Commercial After Thai Backlash, Calls for Boycott."
I reacted to these two separate announcements by thinking about what these do-it-then-reverse-it decisions suggest about the management controls at two technology giants.
Some management processes operated to think up the ad ideas. Then the projects had to be given the green light by "leadership" at the two outfits. Next, third-party providers had to be enlisted to do some of the "knowledge work." Then, I assume, there were meetings to review the "creative." Finally, one ad from several candidates was selected by each firm. The money was paid. And then the ads appeared. That's a lot of steps and probably more than two or three people working in a cube next to a Foosball table.
Plus, the about-faces by the two companies did not take much time. Google caved after a few days. Apple also hopped on its harvester and chopped the Thai advertisement quickly as well. Decisiveness. Actually, decisiveness after the fact.
Why not employ less obvious processes, like using better judgment before releasing the advertisements? Why not focus on working with people who are more in tune with audience reactions than with being clever, smooth-talking, and desperate-eager for big company money?
Several observations:
- Might I hypothesize that both companies lack a fabric of common sense?
- If online ads “work,” why use what I would call old-school advertising methods? Perhaps the online angle is not correct for such important messaging from two companies that seem to do whatever they want most of the time?
- The consequences of these do-then-undo actions are likely to be close to zero. Is that what operating in a no-consequences environment fosters?
I wonder if the back-away mentality is now standard operating procedure. We have Intel and Nvidia with some back-away actions. We have a nation state agreeing to a plea bargain and then un-agreeing the next day. We have a net neutrality rule, then we don't, then we do, and now we don't. Now that I think about it, perhaps because there are no significant consequences, decision quality has taken a nose dive?
Some believe that great complexity sets the stage for bad decisions which regress to worse decisions.
Stephen E Arnold, August 5, 2024
Fancy Cyber Methods Are Useless Against Insider Threats
August 2, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
In my lectures to law enforcement and intelligence professionals, I end the talks with one statement: "Do not assume. Do not reduce costs by firing experienced professionals. Do not ignore human analyses of available information. Do not take short cuts." Cyber security companies are often like the mythical kids of the village shoemaker. Those who can afford to hire the shoemaker have nifty kicks and slides. Those without resources have almost useless footwear.
Companies in the security business often have an exceptionally high opinion of their capabilities and expertise. I think of this as the Google Syndrome or what some have called by less salubrious names. The idea is that one is just so smart, nothing bad can happen here. Yeah, right.
An executive answers questions about a slight security misstep. Thanks, Microsoft Copilot. You have been there and done that, I assume.
I read “North Korean Hacker Got Hired by US Security Vendor, Immediately Loaded Malware.” The article is a reminder that outfits in the OSINT, investigative, and intelligence business can make incredibly interesting decisions. Some of these lead to quite significant consequences. This particular case example illustrates how a hiring process using humans who are really smart and dedicated can be fooled, duped, and bamboozled.
The write up explains:
KnowBe4, a US-based security vendor, revealed that it unwittingly hired a North Korean hacker who attempted to load malware into the company’s network. KnowBe4 CEO and founder Stu Sjouwerman described the incident in a blog post yesterday, calling it a cautionary tale that was fortunately detected before causing any major problems.
I am a dinobaby, and I translated the passage to mean: “We hired a bad actor but, by the grace of the Big Guy, we avoided disaster.”
Sure, sure, you did.
I would suggest you know you trapped one instance of the person's behavior. You may not know, and may never know, what that individual told a colleague in North Korea or another country, or what the bad actor said or emailed from a coffee shop using a contact's computer. You may never know what business processes the person absorbed, converted to an encrypted message, and forwarded via a burner phone to a pal in a nation-state whose interests are not aligned with America's.
In short, the cyber security company dropped the ball. It need not feel too bad. One of the companies I worked for early in my 60-year working career hired a person who dumped top secrets into journalists' laps. Last week a person I knew was complaining about Delta Airlines, which was shown to be quite addled in the wake of the CrowdStrike misstep.
What's the fix? Go back to how I end my lectures. Those in the cyber security business need to be extra vigilant. The idea that "we are so smart, we have the answer" is an example of a mental short cut. The fact is that the company KnowBe4 did not have the answer. It is lucky it KnewAtAll. Some tips:
- Seek and hire vetted experts
- Question procedures and processes in “before action” and “after action” incidents
- Do not rely on assumptions
- Do not believe the outputs of smart software systems
- Invest in security instead of fancy automobiles and vacations.
Do these suggestions run counter to your business goals and your image of yourself? Too bad. Life is tough. Cyber crime is the growth business. Step up.
Stephen E Arnold, August 2, 2024
Google and Its Smart Software: The Emotion-Directed Use Case
July 31, 2024
This essay is the work of a dumb humanoid. No smart software required.
How different are the Googlers from those smack in the middle of a normal curve? Some evidence is provided to answer this question in the Ars Technica article "Outsourcing Emotion: The Horror of Google's 'Dear Sydney' AI Ad." I did not see the advertisement. The volume of messages flooding through my channels each day has allowed me to develop what I call "ad blindness." I don't notice them; I don't watch them; and I don't care about the crazy content presentation, which I struggle to understand.
A young person has to write a sympathy card. The smart software encourages him to use the word "feel." This is a word foreign to the individual, who wants to work for big tech someday. Thanks, MSFT Copilot. Do you have your hands full with security issues today?
Ars Technica watches TV and the Olympics. The write up reports:
In it, a proud father seeks help writing a letter on behalf of his daughter, who is an aspiring runner and superfan of world-record-holding hurdler Sydney McLaughlin-Levrone. “I’m pretty good with words, but this has to be just right,” the father intones before asking Gemini to “Help my daughter write a letter telling Sydney how inspiring she is…” Gemini dutifully responds with a draft letter in which the LLM tells the runner, on behalf of the daughter, that she wants to be “just like you.”
What's going on? The father wants something personal written on behalf of his progeny. A Hallmark card may never be delivered from the US to France. The solution is an e-message. That makes sense. Essential services like delivering snail mail are, like most major systems, not working particularly well.
Ars Technica points out:
But I think the most offensive thing about the ad is what it implies about the kinds of human tasks Google sees AI replacing. Rather than using LLMs to automate tedious busywork or difficult research questions, “Dear Sydney” presents a world where Gemini can help us offload a heartwarming shared moment of connection with our children.
I find the article's negative reaction to a Mad Ave-type of message play somewhat insensitive. Let's look at this use of smart software from the point of view of a person who is at the right-hand tail end of the normal distribution. The factors in this curve are compensation, cleverness as measured in a Google interview, and intelligence as determined by either what school a person attended, achievements when a person was in his or her teens, or solving one of the Courant Institute of Mathematical Sciences brain teasers. (These are shared at cocktail parties or over coffee. If you can't answer, you pay the bill and never get invited back.)
Let's run down the use of AI from this hypothetical right-of-the-bell-curve viewpoint:
- What's with this assumption that a Google-type person has experience with human interaction? Why not send a text even though your co-worker is at the next desk? Why waste time and brain cycles trying to emulate a Hallmark greeting card contractor's phraseology? The use of AI is simply logical.
- Why criticize an alleged Googler or Googler-by-the-gig for using the company’s outstanding, quantumly supreme AI system? This outfit spends millions on running AI tests which allow the firm’s smart software to perform in an optimal manner in the messaging department. This is “eating the dog food one has prepared.” Think of it as quality testing.
- The AI system, running in the Google Cloud on Google technology, is faster than even a quantumly supreme Googler when it comes to generating feel-good platitudes. The technology works well. Evaluate this message in terms of the effectiveness of the messaging generated by Google leadership with regard to the Dr. Timnit Gebru matter. Upper quartile of performance, which is far beyond the dead center of the bell curve humanoids.
My view is that there is one positive from this use of smart software to message a partially-developed and not completely educated younger person. The Sundar & Prabhakar Comedy Act has been recycling jokes and bits for months. Some find them repetitive. I do not. I am fascinated by the recycling. The S&P Show has its fans just as Jack Benny does decades after his demise. But others want new material.
By golly, I think the Google ad showing Google’s smart software generating a parental note is a hoot and a great demo. Plus look at the PR the spot has generated.
What’s not to like? Not much if you are Googley. If you are not Googley, sorry. There’s not much that can be done except shove ads at you whenever you encounter a Google product or service. The ad illustrates the mental orientation of Google. Learn to love it. Nothing is going to alter the trajectory of the Google for the foreseeable future. Why not use Google’s smart software to write a sympathy note to a friend when his or her parent dies? Why not use Google to write a note to the dean of a college arguing that your child should be admitted? Why not let Google think for you? At least that decision would be intentional.
Stephen E Arnold, July 31, 2024
AI Reduces Productivity: Quick, Another Study Needed Now
July 29, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
At lunch one of those at my table said with confidence that OpenAI was going to lose billions in 2024. Another person said, “Meta has published an open source AI manifesto.” I said, “Please, pass the pepper.”
The AI marketing and PR generators are facing a new problem. More information about AI is giving me a headache. I want to read about the next big thing delivering Ford F-150s filled with currency to my door. Enough of this Debbie Downer talk.
Then I spotted this article in Forbes Magazine, the capitalist tool. “77% Of Employees Report AI Has Increased Workloads And Hampered Productivity, Study Finds.”
The write up should bring tears of joy to those who thought they would be replaced by one of the tech giants' smart software concoctions. Human employees hallucinate too. But humans have a few notable downsides. First, they require care and feeding, vacations, educational benefits and/or constant retraining, and continuous injections of cash. Second, they get old and walk out the door with expertise when they retire or just quit. And, third, they protest and sometimes litigate. That means additional costs and maybe a financial penalty to the employer. Smart software, on the other hand, does not impose those costs. The work is okay, particularly for intense knowledge work like writing meaningless content for search engine optimization or flipping through thousands of pages of documents looking for a particular name or a factoid of perceived importance.
But this capitalist tool write up says:
Despite 96% of C-suite executives expecting AI to boost productivity, the study reveals that, 77% of employees using AI say it has added to their workload and created challenges in achieving the expected productivity gains. Not only is AI increasing the workloads of full-time employees, it’s hampering productivity and contributing to employee burnout.
Interesting. An Upwork wizard, Kelly Monahan, is quoted to provide a bit of context, I assume:
“In order to reap the full productivity value of AI, leaders need to create an AI-enhanced work model,” Monahan continues. “This includes leveraging alternative talent pools that are AI-ready, co-creating measures of productivity with their workforces, and developing a deep understanding of and proficiency in implementing a skills-based approach to hiring and talent development. Only then will leaders be able to avoid the risk of losing critical workers and advance their innovation agenda.”
The phrase "full productivity value" is fascinating. There's a productivity payoff somewhere amidst the zeros and ones in the digital Augean stable. There must be a pony in there somewhere.
What's the fix? Well, it is not AI. The un-productive, or intentionally non-productive, human who must figure out how to make smart software pirouette can simply get trained up in AI and embrace any AI consultant who shows up to explain the ropes.
But the article is different from the hyperbolic excitement of those in the Red Alert world and the sweaty foreheads at AI pitch meetings. AI does not speed up. AI slows down. Slowing down means higher costs. AI is supposed to reduce costs. I am confused.
Net net: AI is coming, productive or not. When someone perceives a technology will reduce costs, that software will be installed. The outputs will be good enough. One hopes.
Stephen E Arnold, July 29, 2024
Silicon Valley Streetbeefs
July 26, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
If you are not familiar with Streetbeefs, I suggest you watch the "Top 5 Boxing Knock Outs" on YouTube. Don't forget to explore the articles about Streetbeefs in the About section of the producer's Community page. Useful information appears on the main Web site too.
So what does Streetbeefs do? The organization's tag line explains:
Guns down. Gloves up.
If that is not sufficiently clear, the organization offers:
This org is for fighters and friends of STREETBEEFS! Everyone who fights for Streetbeefs does so to solve a dispute, or for pure sport… NOONE IS PAID TO FIGHT HERE… We also have many dedicated volunteers who help with reffing, security, etc..but if you’re joining the group looking for a paid position please know we’re not hiring. This is a brotherhood/sisterhood of like minded people who love fighting, fitness, and who hate gun violence. Our goal is to build a large community of people dedicated to stopping street violence and who can depend on each other like family. This organization is filled with tough people…but its also filled with the most giving people around..Streetbeefs members support each other to the fullest.
I read "Elon Musk Is Still Insisting He's Down to Fight Mark Zuckerberg: 'Any Place, Any Time, Any Rules.'" This article makes clear that Mark (Senator, thank you for that question) Zuckerberg and Elon (over-promise and under-deliver) Musk want to fight one another. The article says (I cannot believe I read this, by the way):
Elon Musk is impressed with Meta’s latest AI model, but he’s still raring for a bout in the ring with Mark Zuckerberg. “I’ll fight Zuckerberg any place, any time, any rules,” Musk told reporters in Washington on Wednesday [July 24, 2024].
My thought is that Mr. Musk should ring up Sunshine Trask, who is the group manager for fighter signups at Streetbeefs. Trask can coordinate the fight plus the undercard with the two testosterone-charged big technology giants. With most fights held in makeshift rings outside, Golden Gate Park might be a suitable venue for the event. Another possibility is to rope off the street near Philz Coffee in Sunnyvale and hold the "beef" in Plaza de Sol.
Both Mr. Musk and Mr. Zuckerberg can meet with Scarface, the principal figure at Streetbeefs, and get a sense of how the show will go down, what the rules are, and why videographers are in the ring with the fighters.
If these titans of technology want to fight, why not bring their idea of manly valor to a group with considerable experience handling individuals of diverse character?
Perhaps the "winner" would ask Scarface if he would go a few rounds to test his skills and valor against the victor of the truly bizarre dust-up between the most macho of the Sillycon Valley superstars. My hunch is that talking with Scarface and his colleagues might inform Messrs. Musk and Zuckerberg of the brilliance, maturity, and excitement of fighting for real.
On the other hand, Scarface might demonstrate his street and business acumen by saying, “You guys are adults?”
Stephen E Arnold, July 26, 2024
Google and Third-Party Cookies: The Writing Is on the Financial Projection Worksheet
July 25, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I have been amused by some of the write ups about Google's third-party cookie matter. Google is the king of the jungle when it comes to saying one thing and doing another. Let's put some wood behind social media. Let's make that Dodgeball thing take off. Let's make that AI-enhanced search deliver more user joy. Now we are in third-party cookie revisionism. Even Famous Amos has gone back to its "original" recipe after the new and improved Famous Amos cookies tanked big time. Google does not want to wait to watch ad and data sale-related revenue fall. The Google is changing its formulation before the numbers arrive.
“Google No Longer Plans to Eliminate Third-Party Cookies in Chrome” explains:
Google announced its cookie updates in a blog post shared today, where the company said that it instead plans to focus on user choice.
What percentage of Google users alter default choices? Don't bother to guess. The number is very, very few. The one-click-away baloney is a fabrication, an obfuscation. I have technical support which makes our systems as secure as possible given the resources an 80-year-old dinobaby has. But check out those in the rest home / warehouse for the soon-to-die. I would wager one US dollar that absolutely zero of those individuals will opt out of third-party cookies. Most of those in Happy Trail Ending Elder Care Facility cannot eat cookies. Opting out? Give me a break.
The MacRumors’ write up continues:
Back in 2020, Google claimed that it would phase out support for third-party cookies in Chrome by 2022, a timeline that was pushed back multiple times due to complaints from advertisers and regulatory issues. Google has been working on a Privacy Sandbox to find ways to improve privacy while still delivering info to advertisers, but third-party cookies will now be sticking around so as not to impact publishers and advertisers.
The Apple-centric online publication notes that UK regulators will check out Google’s posture. Believe me, Googzilla sits up straight when advertising revenue is projected to tank. Losing click data which can be relicensed, repurposed, and re-whatever is not something the competitive beastie enjoys.
MacRumors is not anti-Google. Hey, Google pays Apple big bucks to be "there" despite Safari. Here's the online publication's moment of hope:
Google does not plan to stop working on its Privacy Sandbox APIs, and the company says they will improve over time so that developers will have a privacy preserving alternative to cookies. Additional privacy controls, such as IP Protection, will be added to Chrome’s Incognito mode.
Correct. Google does not plan. Google outputs based on current situational awareness. That’s why Google 2020 has zero impact on Google 2024.
Three observations which will pain some folks:
- Google AI search and other services are under a microscope. I find the decision one which may increase, not decrease, regulators' interest in the Google. Google made a decision which generates revenue but may increase legal expenses.
- No matter how much money swizzles at each quarter's end, Google's business model may be more brittle than the revenue and profit figures suggest. Google is pumping billions into self-driving cars and doing an about-face on third-party cookies? The new Google puzzles me because search seems to be in the background.
- Google’s management is delivering revenues and profit, so the wizardly leaders are not going anywhere like some of Google’s AI initiatives.
Net net: After 25 years, the Google still baffles me. Time to head for Philz Coffee.
Stephen E Arnold, July 25, 2024
The Simple Fix: Door Dash Some Diversity in AI
July 25, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read “There’s a Simple Answer to the AI Bias Conundrum: More Diversity.” I have read some amazing online write ups in the last few days, but this essay really stopped me in my tracks. Let’s begin with an anecdote from 1973. A half century ago I worked at a nuclear consulting firm which became part of Halliburton Industries. You know Halliburton. Dick Cheney. Ringing a bell?
One of my first tasks for the senior vice president who hired me was to assist the firm in identifying minorities in American universities about to graduate with a PhD in nuclear engineering. I am a Type A, and I set about making telephone calls, doing site visits, and working with our special librarian Dominique Doré, who had had a similar job when she worked in France for a nuclear outfit in that country. I chugged along and identified two possibles. Each was at the US Naval Academy at different stages of their academic career. The individuals would not be available for a commercial job until each had completed military service. So I failed, right?
Not even the clever wolf can put on a simple costume and become something he is not. Is this a trope for a diversity issue? Thanks, OpenAI. Good enough because I got tired of being told, “Inappropriate prompt.”
Nope. The project was designed to train me to identify high-value individuals in PhD programs. I learned three things:
- Nuclear engineers with PhDs in the early 1970s comprised a small percentage of those with the pre-requisites to become nuclear engineers. (I won’t go into details, but you can think in terms of mathematics, physics, and something like biology because radiation can ruin one’s life in a jiffy.)
- The US Navy, the University of California-Berkeley, and a handful of other universities with PhD programs in nuclear engineering were scouting promising high school students in order to convince them to enter the universities’ or the US government’s programs.
- The demand for nuclear engineers (forget female, minority, or non-US citizen engineers) was high. The competition was intense. My now-deceased friend Dr. Jim Terwilliger from Virginia Tech told me that he received job offers every week, including one from an outfit in the then Soviet Union. The demand was worldwide, yet the pool of qualified individuals graduating with a PhD seemed to be limited to six to 10 in the US, 20 in France, and a dozen in what was then called “the Far East.”
Everyone wanted the PhDs in nuclear engineering. Diversity simply did not exist. The top dog at Halliburton 50 years ago told me, "We need more nuclear engineers. It is just not simple."
Now I read "There's a Simple Answer to the AI Bias Conundrum: More Diversity." Okay, easy to say. Why not try to deliver? Believe me, if the US Navy, Halliburton, and a dumb pony like myself could not figure out how to equip a person with the skills and capabilities required to fool around with nuclear material, how will a group of technology wizards in Silicon Valley with oodles of cash just do what's simple? The answer is, "It will take structural change, time, and an educational system similar to that which was provided a half century ago."
The reality is that people without training, motivation, education, and incentives will not produce the content outputs at a scale needed to recalibrate those wondrous smart software knowledge spaces and probabilistic-centric smart software systems.
Here’s a passage from the write up which caught my attention:
Given the rapid race for profits and the tendrils of bias rooted in our digital libraries and lived experiences, it’s unlikely we’ll ever fully vanquish it from our AI innovation. But that can’t mean inaction or ignorance is acceptable. More diversity in STEM and more diversity of talent intimately involved in the AI process will undoubtedly mean more accurate, inclusive models — and that’s something we will all benefit from.
Okay, what’s the plan? Who is going to take the lead? What’s the timeline? Who will do the work to address the educational and psychological factors? Simple, right? Words, words, words.
Stephen E Arnold, July 25, 2024
Why Is Anyone Surprised That AI Is Biased?
July 25, 2024
Let's go over this one last time, all right? Algorithms are biased against specific groups.
Why are they biased? They're biased because the training data sets contain limited information about diversity.
What types of diversity? There’s a range but it usually involves racism, sexism, and socioeconomic status.
How does this happen? It usually happens not because the designers are racist or whatever, but from blind ignorance. They don't see outside their technology boxes, so their focus is limited.
But can the designers be racist, sexist, etc.? Yes, they're human and have their personal prejudices. Those can be consciously or inadvertently programmed into a data set.
How can this be fixed? Get larger, cleaner data sets that are more reflective of actual populations.
Did you miss any minority groups? Unfortunately, yes, and it happens to be an oldie but a goodie: disabled folks. Stephen Downes writes that "ChatGPT Shows Hiring Bias Against People With Disabilities." Downes commented on an article from Futurity that describes how a doctoral student from the University of Washington studied how ChatGPT ranks the resumes of abled vs. disabled people.
The test discovered that, when ChatGPT was asked to rank resumes, those that included references to a disability were ranked lower. This part is questionable because the article doesn't state the prompt given to ChatGPT. When the generative text AI was told to be less "ableist," some of the "disabled" resumes ranked higher. The article then goes into a valid yet overplayed argument about diversity and inclusion. No solutions were provided.
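The audit method behind studies like this is worth a sketch: rank pairs of resumes that are identical except for a disability mention and measure the average rank gap. Here is a minimal Python illustration of that paired-resume technique; the rank_resumes function is a hypothetical stand-in for whatever model call the researchers actually made, which the article does not describe.

```python
import random

def rank_resumes(resumes):
    # Hypothetical stand-in for a model call (e.g., an LLM prompted to
    # order resumes strongest to weakest). This placeholder shuffles,
    # so the audit below should report a gap near zero.
    ranked = list(resumes)
    random.shuffle(ranked)
    return ranked

def disability_rank_gap(base_resume, disability_line, trials=100):
    """Average rank difference between a resume and an identical copy
    with one added disability-related line. A positive gap means the
    modified resume tends to be ranked lower (worse)."""
    modified = base_resume + "\n" + disability_line
    gap = 0
    for _ in range(trials):
        ranked = rank_resumes([base_resume, modified])
        gap += ranked.index(modified) - ranked.index(base_resume)
    return gap / trials

base = "Jane Doe\nBS, Computer Science\n5 years as a QA engineer"
extra = "Member, National Association of Disabled Scholars"
print(f"average rank gap: {disability_rank_gap(base, extra):+.2f}")
```

Swap the placeholder for a real model call and a non-zero gap is the bias signal the doctoral student reported.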
Downes asked questions that also beg for solutions:
“This is a problem, obviously. But in assessing issues of this type, two additional questions need to be asked: first, how does the AI performance compare with human performance? After all, it is very likely the AI is drawing on actual human discrimination when it learns how to assess applications. And second, how much easier is it to correct the AI behaviour as compared to the human behaviour? This article doesn’t really consider the comparison with humans. But it does show the AI can be corrected. How about the human counterparts?”
Solutions? Anyone?
Whitney Grace, July 25, 2024
Crowd What? Strike Who?
July 24, 2024
This essay is the work of a dumb dinobaby. No smart software required.
How are those Delta cancellations going? Yeah, summer, families, harried business executives, and lots of hand waving. I read a semi-essay about the minor update misstep which caused blue to become a color associated with failure. I love the quirky sad face and the explanations from the assorted executives, experts, and poohbahs about how so many systems could fail in such a short time on a global scale.
In "Surely Microsoft Isn't Blaming EU for Its Problems?" I noted six reasons the CrowdStrike issue became news instead of a system administrator annoyance. In a nutshell, the reasons identified harken back to Microsoft's decision to use an "open design." I like the phrase because it beckons a wide range of people to dig into the plumbing. Microsoft also allegedly wants to support its customers with older computers. I am not sure older anything is supported by anyone. As a dinobaby, I have first-hand experience with this "we care about legacy stuff." Baloney. The essay mentions "kernel-level access." How's that working out? Based on CrowdStrike's remarkable ability to generate PR from exceptions which appear to have allowed the super special security software to do its thing, that access sure does deliver. (Why does the nationality of CrowdStrike's founder not get mentioned? Dmitri Alperovitch, a Russian who became a US citizen, and a couple of other people set up the firm in 2012. Is there any possibility that the incident was a test play or part of a Russian long game?)
Satan congratulates one of his technical professionals for an update well done. Thanks, MSFT Copilot. How’re things today? Oh, that’s too bad.
The essay mentions that the world today is complex. Yeah, complexity goes with nifty technology, and everyone loves complexity when it behaves like an appliance, until it doesn't work. Then fixes are difficult because few know what went wrong. The article tosses in a reference to Microsoft's "market size." But centralization is what an appliance does, right? Who wants a tube radio when the radio can be software-defined and embedded in another gizmo, like those FM radios in some mobile devices? Who knew? And then there is a reference to "security." We are presented with a tidy list.
The one hitch in the git along is that the issue emerges from a business culture which has zero to do with technology. The objective of a commercial enterprise is to generate profits. Companies generate profits by selling high, subtracting costs, and keeping the rest for themselves and stakeholders.
Hiring and training professionals to do jobs like quality checks, environmental impact statements, and ensuring ethical business behavior in work processes is overhead. One can hire a blue chip consulting firm and spark an opioid crisis or deprecate funding for pre-release checks and quality assurance work.
Engineering excellence takes time and money. What’s valued is maximizing the payoff. The other baloney is marketing and PR to keep regulators, competitors, and lawyers away.
The write up encapsulates the reason that change will be difficult and probably impossible for a company, whether in the US or Ukraine, to deliver what the customer expects. Regulators have failed to protect citizens from the behaviors of commercial enterprises. The customers assume that a big company cares about excellence.
I am not pessimistic. I have simply learned to survive in what is a quite error-prone environment. Pundits call the world fragile or brittle. Those words are okay. The more accurate term is reality. Get used to it and knock off the jargon about failure, corner cutting, and profit maximization. The reality is that Delta, blue screens, and yip yap about software chock full of issues define the world.
Fancy talk, lists, and entitled assurances won't do the job. Reality is here. Accept it, and knock off the blaming.
Stephen E Arnold, July 24, 2024
Automating to Reduce Staff: Money Talks, Employees? Yeah, Well
July 24, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Are you a developer who oversees a project? Are you one of those professionals who toiled to understand the true beauty of a PERT chart, invented by a Type A blue-chip consulting firm, I have heard? If so, you may sport these initials on your business card: PMP, PMI-RMP, PRINCE2, etc. I would suggest that Google is taking steps to eliminate your role. How do I know the death knell tolls for thee? Easy. I read "Google Brings AI Agent Platform Project Oscar Open Source." The write up doesn't come out and say, "Dev managers or project managers, find your future elsewhere," but the intent bubbles beneath the surface of the Google speak.
A 35-year-old executive gets the good news. As a project manager, he can now seek another information-mediating job at an independent plumbing company, a local dry cleaner, or the outfit that repurposes basketball courts to pickleball courts. So many futures to find. Thanks, MSFT Copilot. That's a pretty good Grim Reaper. The former PMP looks snappy too. Well, good enough.
The “Google Brings AI Agent Platform Project Oscar Open Source” real “news” story says:
Google has announced Project Oscar, a way for open-source development teams to use and build agents to manage software programs.
Say hi to Project Oscar. The smart software is new, so expect it to morph, be killed, resurrected, and live a long, fruitful life.
The write up continues:
“I truly believe that AI has the potential to transform the entire software development lifecycle in many positive ways,” Karthik Padmanabhan, lead Developer Relations at Google India, said in a blog post. “[We’re] sharing a sneak peek into AI agents we’re working on as part of our quest to make AI even more helpful and accessible to all developers.” Through Project Oscar, developers can create AI agents that function throughout the software development lifecycle. These agents can range from a developer agent to a planning agent, runtime agent, or support agent. The agents can interact through natural language, so users can give instructions to them without needing to redo any code.
Helpful? Seems like it. Will smart software reduce costs and allow for more “efficiency methods” to be implemented? Yep.
The article includes a statement from a Googler; to wit:
“We wondered if AI agents could help, not by writing code which we truly enjoy, but by reducing disruptions and toil,” Balahan said in a video released by Google. Go uses an AI agent developed through Project Oscar that takes issue reports and “enriches issue reports by reviewing this data or invoking development tools to surface the information that matters most.” The agent also interacts with whoever reports an issue to clarify anything, even if human maintainers are not online.
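The quote stays at press-release altitude, so the mechanics are left to the imagination. As a rough sketch only, here is what the "enrich issue reports" idea might look like stripped to its bones in Python. Nothing below reflects Project Oscar's actual interfaces, which Google does not detail here; the keyword table is a hypothetical stand-in for the LLM the real agent presumably uses.

```python
from dataclasses import dataclass, field

@dataclass
class IssueReport:
    title: str
    body: str
    labels: list = field(default_factory=list)
    questions: list = field(default_factory=list)

# Hypothetical triage rules standing in for the model the real agent uses.
KEYWORD_LABELS = {
    "panic": "crash",
    "timeout": "performance",
    "goroutine": "concurrency",
}

def enrich(report: IssueReport) -> IssueReport:
    """Attach labels maintainers would want and queue a clarifying
    question for the reporter: the 'reduce toil' step the quote
    describes, minus the smart software."""
    text = (report.title + " " + report.body).lower()
    for keyword, label in KEYWORD_LABELS.items():
        if keyword in text and label not in report.labels:
            report.labels.append(label)
    if "version" not in text:
        report.questions.append("Which Go version are you running?")
    return report

issue = enrich(IssueReport("timeout in net/http", "request hangs, then panic"))
print(issue.labels)     # ['crash', 'performance']
print(issue.questions)  # ['Which Go version are you running?']
```

The point of the toy: once triage and reporter follow-up are a function call, the humans who used to intermediate that information become overhead, which is the thread the next paragraph pulls.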
Where is Google headed with this "manage software programs" idea? A partial answer may be deduced from this write up from Linklemon. Its commercial pitch: "We Automate Workflows for Small to Medium (sic) Businesses." The image below explains the business case clearly:
Those purple numbers are generated by chopping staff and making an existing system cheaper to operate. Translation: Find your future elsewhere, please.
My hunch is that if the automation in Google India is “good enough,” the service will be tested in the US. Once that happens, Microsoft and other enterprise vendors will jump on the me-too express.
What's that mean? Oh, heck, I don't want to rerun that tired old "find your future elsewhere" line, but I will: Many professionals who intermediate information will hear, "Great news, you now have the opportunity to find your future elsewhere." Lucky folks, right, Google?
Stephen E Arnold, July 24, 2024