AI and the Obvious: Hire Us and Pay Us to Tell You Not to Worry

December 26, 2023

This essay is the work of a dumb dinobaby. No smart software required.

I read “Accenture Chief Says Most Companies Not Ready for AI Rollout.” The paywalled write up is an opinion from one of Captain Obvious’ closest advisors. The CEO of Accenture (a general purpose business expertise outfit) reveals some gems about artificial intelligence. Here are three which caught my attention.

#1 — “Sweet said executives were being “prudent” in rolling out the technology, amid concerns over how to protect proprietary information and customer data and questions about the accuracy of outputs from generative AI models.”

The secret to AI consulting success: Cost, fear of failure, and uncertainty, or CFU. Thanks, MSFT Copilot. Good enough.

Arnold comment: Yes, caution is good because selling caution consulting generates juicy revenues. Implementing something that crashes and burns is a generally bad idea.

#2 — “Sweet said this corporate prudence should assuage fears that the development of AI is running ahead of human abilities to control it…”

Arnold comment: The threat, in my opinion, comes from a handful of large technology outfits and from the legions of smaller firms working overtime to apply AI to anything that strikes the fancy of the entrepreneurs. These outfits think about sizzle first, consequences maybe later. Much later.

#3 — “There are no clients saying to me that they want to spend less on tech,” she said. “Most CEOs today would spend more if they could. The macro is a serious challenge. There are not a lot of green shoots around the world. CEOs are not saying 2024 is going to look great. And so that’s going to continue to be a drag on the pace of spending.”

Arnold comment: Great opportunity to sell studies, advice, and recommendations when customers are “not saying 2024 is going to look great.” Hey, what’s “not going to look great” mean?

The obvious is — obvious.

Stephen E Arnold, December 26, 2023

AI Is Here to Help Blue Chip Consulting Firms: Consultants, Tighten Your Seat Belts

December 26, 2023

This essay is the work of a dumb dinobaby. No smart software required.

I read “Deloitte Is Looking at AI to Help Avoid Mass Layoffs in Future.” The write up explains that blue chip consulting firms (“the giants of the consulting world”) have been allowing many Type A’s to find their future elsewhere. (That’s consulting speak for “You are surplus,” “You are not suited for another team,” or “Hasta la vista.”) The message Deloitte is sending strikes me as, “We are leaders in using AI to improve the efficiency of our business. You (potential customers) can hire us to implement AI strategies and tactics to deliver the same turbo boost to your firm.” Deloitte is not the only “giant” moving to use AI to improve “efficiency.” The big folks and the mid-tier players are too. But let’s look at the Deloitte premise in what I see as a PR piece.

Hey, MSFT Copilot. Good enough. Your colleagues do have experience with blue-chip consulting firms which obviously assisted you.

The news story explains that Deloitte wants to use AI to help figure out who can be billed at startling hourly fees for people whose pegs don’t fit into the available round holes. But the real point of the story is that the “giants” are looking at smart software to boost productivity and margins. How? My answer is that management consulting firms are “experts” in management. Therefore, if smart software can make management better, faster, and cheaper, the “giants” have to use best practices.

And what’s a best practice in the context of the “giants” and the “avoid mass layoffs” angle? My answer is, “Money.”

The big dollar items for the “giants” are people and their associated costs, travel, and administrative tasks. Smart software can replace some people. That’s a no-brainer. Dump some of the Type A’s who don’t sell big dollar work, winnow those who are not wedded to the “giant” firm, and move the administrivia to orchestrated processes with smart software watching and deciding 24×7.

Imagine the “giants” repackaging these “learnings” and then selling the how-to information and the payoffs to less informed outfits. Once that is firmly in mind, the money for the senior partners who are not on the “hasta la vista” list goes up. The “giants” are not altruistic. The firms are built from the ground up to generate cash, leverage connections, and provide services to CEOs with imposter syndrome and other issues.

My reaction to the story is:

  1. Yep, marketing. Some will do the Harvard Business Review journey; others will pump out white papers; many will give talks to “preferred” contacts; and others will just imitate what’s working for the “giants”
  2. Deloitte is redefining what expertise it will require to get hired by a “giant” like the accounting/consulting outfit
  3. The senior partners involved in this push are planning what to do with their bonuses.

Are the other “giants” on the same path? Yep. Imagine. Smart-software-enabled “giants” making decisions for the organizations able to pay for advice, insight, and the warm embrace of AI-enabled humanoids. What’s the probability of success? Close enough for horseshoes, and even bigger money for some blue chip professionals. Did Deloitte over-hire during the pandemic?

Of course not; the tactic was part of the firm’s plan to put AI to a real-world test. Sounds good. I cannot wait until the case studies become available.

Stephen E Arnold, December 26, 2023

Amazon and the US Government: Doing Just Fine, Thanks

December 26, 2023

This essay is the work of a dumb dinobaby. No smart software required.

OSHA was established to protect workers from unsafe conditions. Big technology barons like Amazon’s Jeff Bezos don’t give a rat’s hind quarters about employee safety. They might project an image of caring and kindness, but that comes from Amazon’s PR department. Amazon is charged with innumerable workplace violations, ranging from micromanaging to poor compensation. The Washington Post details one of Amazon’s latest scandals in “A 20-Year-Old Amazon Employee Died At Work. Indiana Issued A $7,000 Fine.”

Twenty-year-old Caes Gruesbeck was clearing a blockage on an overhead conveyor belt at the Amazon distribution center in Fort Wayne, Indiana. He needed to use an elevated lift to reach the blockage. His head collided with the conveyor and became trapped. Gruesbeck later died from blunt force trauma.

Indiana safety officials investigated for eleven weeks and found that Amazon failed to ensure a safe work environment. Amazon was cited and fined only $7,000. Amazon employees continue to be injured, and the country’s second largest private employer is constantly scrutinized, but state and federal safety regulators are failing to enforce policies. They are failing because Amazon is a powerful corporation with a hefty legal department.

“‘Seven thousand dollars for the death of a 20-year-old? What’s that going to do to Amazon?’ said Stephen Wagner, an Indiana attorney who has advocated for more worker-friendly laws in the state. ‘There’s no real financial incentive for an employer like Amazon to change their working environment to make it more safe.’”

Federal and state governments are trying to make Amazon take responsibility through the current system, but it is slow going. Safety regulators cannot inspect every Amazon complaint and building. They are instead working toward a sweeping, company-wide approach like the Family Dollar and Dollar Tree investigations into blocked fire exits. That effort took six years and resulted in $15 million in fines and a $1.35 million settlement.

Once companies are hit with large fines, it changes how they do business. Amazon probably will be brought to justice, but it will take a long time.

Whitney Grace, December 26, 2023

Quantum Supremacy in Management: A Google Incident

December 25, 2023

This essay is the work of a dumb dinobaby. No smart software required.

I spotted an interesting story about an online advertising company which has figured out how to get great PR in respected journals. But this maneuver is a 100-yard touchdown run for visibility. “Hundreds Gather at Google’s San Francisco Office to Protest $1.2 Billion Contract with Israel” reports:

More than 400 protesters gathered at Google’s San Francisco office on Thursday to demand the tech company cut ties with Israel’s government.

Some managers and techno wizards envy companies which have the knack for attracting crowds and getting free publicity. Thanks, MSFT Copilot. Close enough for horseshoes.

The demonstration, according to the article, was a response to Google and its new BFF’s project for Israel. The SFGate article contains some interesting photographs. One is a pretend dead person wrapped in a shroud with the word “Genocide” in bright, cheerful Google logo colors. I wanted to reproduce it, but I am not interested in having copyright trolls descend on me like a convocation of legal eagles. “Project Nimbus” — nimbus is a type of cloud I learned about in the fifth or sixth grade — “provides the country with local data centers and cloud computing services.”

The article contains a word which causes OpenAI’s art generators to become uncooperative. That banned word is “genocide.” The news story adds some color to the fact of the protest on December 14, 2023:

Multiple speakers mentioned an article from The Intercept, which reported that Nimbus delivered Israel the technology for “facial detection, automated image categorization, object tracking, and even sentiment analysis.” Others referred to an NPR investigation reporting that Israel says it is using artificial intelligence to identify targets in Gaza, though the news outlet did not link the practice to Google’s technology.

Ah, ha. Cloud services plus useful technologies. (I wonder if the facial recognition system allegedly becoming available to the UK government is included in the deal.) The story added a bit of spice too:

For most of Thursday’s protest, two dozen people lay wrapped in sheets — reading “Genocide” in Google’s signature rainbow lettering — in a “die-in” performance. At the end, they stood to raise up white kites, as a speaker read Refaat Alareer’s “If I must die,” written just over a month before the Palestinian poet was killed by an Israeli airstrike.

The article included a statement from a spokesperson, possibly from Google. This individual said:

“We have been very clear that the Nimbus contract is for workloads running on our commercial platform by Israeli government ministries such as finance, healthcare, transportation, and education,” she said. “Our work is not directed at highly sensitive or classified military workloads relevant to weapons or intelligence services.”

Does this sound a bit like an annoyed fifth- or sixth-grade teacher interrupted by a student who says out loud, “Clouds are hot air”? While the remark was not technically accurate, the student was sent to the principal’s office. What will happen in this situation?

Some organizations know how to capture users’ attention. Will the company be able to monetize it via a YouTube Short or a lengthier video? Google is quite skilled at making videos which purport to show reality as Google wants it to be. The “real” reality may be different. Revenue is important, particularly as regulatory scrutiny remains popular in the EU and the US.

Stephen E Arnold, December 25, 2023

A Grade School Food Fight Could Escalate: Apples Could Become Apple Sauce

December 25, 2023

This essay is the work of a dumb dinobaby. No smart software required.

A squabble is blowing up into a court fight. “Beeper vs Apple Battle Intensifies: Lawmakers Demand DOJ Investigation” reports:

US senators have urged the DOJ to probe Apple’s alleged anti-competitive conduct against Beeper.

Apple killed a messaging service in the name of protecting apple pie, mom, love, truth, justice, and the American way. Oops, sorry. That’s something from the Superman comics.

“You squashed my apple. You ruined my lunch. You ruined my life. My mommy will call your mommy, and you will be in trouble,” says the older, more mature child. The principal appears and points out that screeching is not comely. Thanks, MSFT Copilot. Close enough for horseshoes.

The article said:

The letter to the DOJ is signed by Minnesota Senator Amy Klobuchar, Utah Senator Mike Lee, Congressman Jerry Nadler, and Congressman Ken Buck. They have urged the law enforcement body to investigate “whether Apple’s potentially anti-competitive conduct against Beeper violates US antitrust laws.” Apple has been constantly trying to block Beeper Mini and Beeper Cloud from accessing iMessage. The two Beeper messaging apps allow Android users to interact with iPhone users through iMessage — an interoperability Apple has been opposed to for a long time now.

As if law enforcement did not have enough to think about. Now an alleged monopolist is engaged in a grade school cafeteria spat with a younger, much smaller entity. By golly, that big outfit is threatened by the jejune, immature, and smaller service.

How will this play out?

  1. A payday for Beeper when Apple makes the owners of Beeper an offer that would be tough to refuse. Big piles of money can alter one’s desire to fritter away one’s time in court
  2. The dust-up spirals upwards. What if the attitude toward Apple’s approach to its competitors becomes a crusade to encourage innovation in a tough environment for small companies? Containment may be difficult.
  3. The jury decision against Google may kindle more enthusiasm for another probe of Apple and its posture in some tricky political situations; for example, the iPhone in China, the non-repairability issues, and Apple’s mesh of inter-connected services which may be seen as digital barriers to user choice.

In 2024, Apple may find that some government agencies are interested in the fruit growing on the company’s many trees.

Stephen E Arnold, December 25, 2023

An Important, Easily Pooh-Poohed Insight

December 24, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Dinobaby here. I am on the regular highway, not the information highway. Nevertheless I want to highlight what I call an “easily pooh-poohed” factoid. The source of the item this morning is an interview titled “Google Cloud Exec: Enterprise AI Is Game-Changing, But Companies Need to Prepare Their Data.”

I am going to skip the PR baloney, the truisms about Google fumbling the AI ball, and the rah rah about AI changing everything. Let me go straight to the factoid which snagged my attention:

… at the other side of these projects, what we’re seeing is that organizations did not have their data house in order. For one, they had not appropriately connected all the disparate data sources that make up the most effective outputs in a model. Two, so many organizations had not cleansed their data, making certain that their data is as appropriate and high value as possible. And so we’ve heard this forever — garbage in, garbage out. You can have this great AI project that has all the tenets of success and everybody’s really excited. Then, it turns out that the data pipeline isn’t great and that the data isn’t streamlined — all of a sudden your predictions are not as accurate as they could or should have been.

Why are these points about data significant?

First, investors, senior executives, developers, and the person standing in line with you at Starbucks dismiss data normalization as a solved problem. Sorry, getting the data boat to float is a work in progress. Few want to come to grips with the issue.

Second, fixing up data is expensive. Did you ever wonder why the Stanford president made up data, forcing his resignation? The answer is that the “cost of fixing up data is too high.” If the president of Stanford can’t do it, is the run-of-the-mill fast-talking AI guru different? Answer: Nope.

Third, knowledge of exception folders and non-conforming data is confined to a small number of people. Most will explain what is needed to make a content intake system work. However, many give up because the cloud of unknowing is unlikely to disperse.
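
To make the “garbage in, garbage out” point concrete, here is a minimal sketch of a data intake gate. This is my own illustration, not something from the interview; the field names, checks, and toy data are hypothetical. The idea is simply that non-conforming records get routed to an exception bucket for human review instead of flowing straight into a model’s data pipeline:

```python
# A minimal, hypothetical data-quality gate. Records that fail basic
# checks land in an "exception" bucket instead of reaching the model.
import csv
import io

REQUIRED_FIELDS = ["customer_id", "region", "revenue"]  # hypothetical schema


def validate(record):
    """Return a list of problems; an empty list means the record conforms."""
    problems = []
    for field in REQUIRED_FIELDS:
        if not (record.get(field) or "").strip():
            problems.append(f"missing {field}")
    try:
        float(record.get("revenue") or "")
    except ValueError:
        problems.append("revenue is not numeric")
    return problems


def triage(rows):
    """Split records into conforming rows and an exception bucket."""
    clean, exceptions = [], []
    for record in rows:
        issues = validate(record)
        if issues:
            exceptions.append((record, issues))  # routed for human review
        else:
            clean.append(record)
    return clean, exceptions


# Toy input standing in for the "disparate data sources."
raw = io.StringIO(
    "customer_id,region,revenue\n"
    "c1,EU,100.50\n"
    "c2,,forty\n"  # non-conforming: missing region, non-numeric revenue
)
clean, exceptions = triage(csv.DictReader(raw))
print(f"clean: {len(clean)}, routed to exception folder: {len(exceptions)}")
```

The point of the sketch is the routing, not the checks: real pipelines need far more than three rules, which is exactly why the cleanup is expensive and why knowledge of the exception folder stays with so few people.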

The bottom line is that many data sets are not what senior executives, marketers, or those who use the data believe they are. The Google comment — despite Google’s sketchy track record in plain honest talk — is mostly correct.

So what?

  1. Outputs are often less useful than many anticipated. But if the user is uninformed or the downstream system uses whatever is pushed to it, no big deal.
  2. The thresholds and tweaks needed to make something semi useful are not shared, discussed, or explained. Keep the mushrooms in the dark and feed them manure. What do you get? Mushrooms.
  3. The graphic outputs are eye candy and distracting. Look here, not over there. Sizzle sells and selling is important.

Net net: Data are a problem. Data have been a problem due to time and cost issues. Data will remain a problem because one can sidestep a problem few recognize, and those who do recognize the pit find a shortcut. What’s this mean for AI? Those smart systems will be super. What’s in your AI stocking this year?

Stephen E Arnold, December 24, 2023

Bugged? Hey, No One Can Get Our Data

December 22, 2023

This essay is the work of a dumb dinobaby. No smart software required.

I read “The Obscure Google Deal That Defines America’s Broken Privacy Protections.” In the cartoon below, two young people are confident that their lunch will be undisturbed. No “bugs” will chow down on their hummus, sprout sandwiches, or their information. What happens, however, is that the young picnic fans cannot perceive what is out of sight. Are these “bugs” listening? Yep. They are. 24×7.

What the young fail to perceive is that “bugs” are everywhere. These digital creatures are listening, watching, harvesting, and consuming every scrap of information. The image of the picnic evokes an experience unfolding in real time. Thanks, MSFT Copilot. My notion of “bugs” is obviously different from yours. Good enough and I am tired of finding words you can convert to useful images.

The essay explains:

While Meta, Google, and a handful of other companies subject to consent decrees are bound by at least some rules, the majority of tech companies remain unfettered by any substantial federal rules to protect the data of all their users, including some serving more than a billion people globally, such as TikTok and Apple.

The situation is simple: Major centers of techno gravity remain unregulated. Lawmakers, regulators, and “users” either did not understand or just believed what lobbyists told them. The senior executives of certain big firms smiled, said “Senator, thank you for that question,” and continued to build out their “bug” network. Do governments want to lose their pride of place with these firms? Nope. Why? Just reference bad actors who commit heinous acts and invoke “protect our children.” When these refrains from the techno feudal playbook sound, calls to take meaningful action become little more than a faint background hum.

But the article continues:

…there is diminishing transparency about how Google’s consent decree operates.

I think I understand. Google-type companies pretend to protect “privacy.” Who really knows? Just ask a Google professional. The answer in my experience is, “Hey, dude, I have zero idea.”

How does Wired, the voice of the techno age, conclude its write up? Here you go:

The FTC agrees that a federal privacy law is long overdue, even as it tries to make consent decrees more powerful. Samuel Levine, director of the FTC’s Bureau of Consumer Protection, says that successive privacy settlements over the years have become more limiting and more specific to account for the growing, near-constant surveillance of Americans by the technology around them. And the FTC is making every effort to enforce the settlements to the letter…

I love the “every effort.” The reality is that the handling of online data collection presages the trajectory for smart software. We live with bugs. Now those bugs can “think”, adapt, and guide. And what’s the direction in which we are now being herded? Grim, isn’t it?

Stephen E Arnold, December 23, 2023

A High Profile Religious Leader: AI? Yeah, Well, Maybe Not So Fast, Folks

December 22, 2023

This essay is the work of a dumb dinobaby. No smart software required.

The trusted news outfit Thomson Reuters put out a story about the thoughts of the Pope, the leader of millions of Catholics. Presumably many of these people use ChatGPT-type systems to create content. (I wonder if Leonardo would have used an OpenAI system to crank out some art work. He was an innovator. My hunch is that he would have given MidJourney-type smart software a whirl.)

A group of religious individuals thinking about artificial intelligence. Thanks, MidJourney, a good enough engraving.

“Pope Francis Calls for Binding Global Treaty to Regulate AI” reports that Pope Francis wants someone to create a legally binding international treaty. The idea is that AI numerical recipes would be prevented from replacing humans and good old human values. AI would output answers, and humans would use those answers to find pizza joints, develop smart weapons, and eliminate carbon by eliminating carbon-generating entities (maybe humans?).

The trusted news outfit’s report included this quote from the Pope:

I urge the global community of nations to work together in order to adopt a binding international treaty that regulates the development and use of artificial intelligence in its many forms…

The Pope mentioned a need to avoid a technological dictatorship. He added:

Research on emerging technologies in the area of so-called Lethal Autonomous Weapon Systems, including the weaponization of artificial intelligence, is a cause for grave ethical concern. Autonomous weapon systems can never be morally responsible subjects…

Several observations are warranted:

  1. Is this a UN job, or is some other entity responsible for obtaining consensus and effective enforcement?
  2. Who develops the criteria for “good” AI, “neutral” AI, and “bad” AI?
  3. What are the penalties for implementing “bad” AI?

For me the Pope’s statement is important. It may be difficult to implement without a global dictatorship or a sudden change in how informed people debate and respond to difficult issues. From my point of view, the Pope should worry. When I look at the images of the Four Horsemen of the Apocalypse, the riders remind me of four high-profile leaders in AI. That’s my imagination reading into the depictions of conquest, war, famine, and death.

Stephen E Arnold, December 22, 2023

Cyber Security Crumbles When Staff Under Stress

December 22, 2023

This essay is the work of a dumb dinobaby. No smart software required.

How many times does society need to say that happy employees mean a better, more profitable company? The world is apparently not getting the memo, because employees, especially IT workers, are overworked, stressed, exhausted, and burnt out like a blackened match. While zombie employees are bad for productivity, they’re even worse for cyber security. BetaNews reports on a survey from Adarma, a detection and response specialist company, in “Stressed Staff Put Enterprises At Risk Of Cyberattack.”

The survey respondents believe they’re at a greater risk of cyberattack due to the poor condition of their employees. Five hundred cybersecurity professionals from UK companies with over 2,000 employees were surveyed, and 51% believed their IT security staff are dead inside. This puts them at risk of digital danger. Over 40% of the cybersecurity leaders felt that their skills were too limited to understand threats. An additional 43% had little or zero expertise to detect or respond to threats to their enterprises.

IT people really love computers and technology but when they’re working in an office environment and dealing with people, stress happens:

“‘Cybersecurity professionals are typically highly passionate people, who feel a strong personal sense of duty to protect their organization and they’ll often go above and beyond in their roles. But, without the right support and access to resources in place, it’s easy to see how they can quickly become victims of their own passion. The pressure is high and security teams are often understaffed, so it is understandable that many cybersecurity professionals are reporting frustration, burnout, and unsustainable stress. As a result, the potential for mistakes being made that will negatively impact an organization increases. Business leaders should identify opportunities to ease these gaps, so that their teams can focus on the main task at hand, protecting the organization,’ says John Maynard, Adarma’s CEO.”

The survey demonstrates why it’s important to diversify the cybersecurity talent pool. Wait, is this in regard to ethnicity and biological sex? Is Adarma advocating for a DEI quota in cybersecurity, or is the organization advocating for a diverse talent pool with varied experience to offer different perspectives?

While it is important to have different education backgrounds and experience, hiring someone simply based on DEI quotas is stupid. It’s failing in the US and does more harm than good.

Whitney Grace, December 22, 2023

Scientific American Spills the Beans on Innovation

December 21, 2023

This essay is the work of a dumb dinobaby. No smart software required.

It happened! A big, mostly respected publication called Scientific American explains where the Google-type outfits got their best ideas. Note: The write up “Tech Billionaires Need to Stop Trying to Make the Science Fiction They Grew Up on Real” does not talk about theft of intellectual property, doing shameless me-too products, or acquiring promising start-ups to make eunuchs of potential competitors.

Instead the Scientific American story asserts:

Today’s Silicon Valley billionaires grew up reading classic American science fiction. Now they’re trying to make it come true, embodying a dangerous political outlook.

I can make these science fiction worlds a reality. I am going to see Star Wars for the seventh time. I will invent the future, says the enthusiastic wizardette in 1985. Thanks, MSFT Copilot. Know anyone at Microsoft like this young person?

The article says:

These men [the Brin-Page variants] collectively have more than half a trillion dollars to spend on their quest to realize inventions culled from the science fiction and fantasy stories that they read in their teens. But this is tremendously bad news because the past century’s science fiction and fantasy works widely come loaded with dangerous assumptions.

The essayist (a science fiction writer) explains:

We are not trying to accurately predict possible futures but to earn a living: any foresight is strictly coincidental. We recycle the existing material—and the result is influenced heavily by the biases of earlier writers and readers. The genre operates a lot like a large language model that is trained using a body of text heavily contaminated by previous LLMs; it tends to emit material like that of its predecessors. Most SF is small-c conservative insofar as it reflects the history of the field rather than trying to break ground or question received wisdom.

So what? The writer answers:

It’s a worryingly accurate summary of the situation in Silicon Valley right now: the billionaires behind the steering wheel have mistaken cautionary tales and entertainments for a road map, and we’re trapped in the passenger seat. Let’s hope there isn’t a cliff in front of us.

Is there a way to look down the runway? Sure, read more science fiction. Invent the future and tell oneself, “I am an innovator.” That may be true, but an innovator of what? Right now it appears that reality is a less than enticing place. The main point is that today may be built on a fairly flimsy foundation. Hint: Don’t ask a person to make change when you pay in cash.

Stephen E Arnold, December 21, 2023
