A Pivotal Moment in Management Consulting

October 4, 2023

The practice of selling “management consulting” has undergone a handful of tectonic shifts since Edwin Booz convinced Sears, the “department” store outfit, to hire him. (Yes, I am aware I am cherry picking, but this is a blog post, not a for-fee report.)

The first was the ability of a consultant to move around quickly. Trains and Chicago became synonymous with management razzle-dazzle. The center of gravity shifted to New York City because consulting thrives where there are big companies. The second was the institutionalization of the MBA as a certification of a 23-year-old’s expertise. The third was the “invention” of former consultants for hire. The innovator in this business was Gerson Lehrman Group, but there are many imitators who hire former blue-chip types and resell them without the fee baggage of the McKinsey & Co.-type outfits. And now the fourth earthquake is rattling carpetland and the windows in corner offices (even if these offices are in an expensive home in Wyoming).


A centaur and a cyborg working on a client report. Thanks, MidJourney. Nice hair style on the cyborg.

Now we have the era of smart software or what I prefer to call the era of hyperbole about semi-smart, semi-automated systems which output “information.” I noted this write up from the estimable Harvard University. Yes, this is the outfit that appointed an expert in ethics to head up its ethics department. The same ethics expert allegedly made up data for peer-reviewed publications. Yep, that Harvard University.

“Navigating the Jagged Technological Frontier” is an essay crafted by the D^3 faculty. None of this single-author stuff in an institution where fabrication of research is a stand-up comic joke. “What’s the most terrifying word for a Harvard ethicist?” Give up? “Ethics.” Ho ho ho.

What are the highlights from this esteemed group of researchers, thinkers, and analysts? I quote:

  • For tasks within the AI frontier, ChatGPT-4 significantly increased performance, boosting speed by over 25%, human-rated performance by over 40%, and task completion by over 12%.
  • The study introduces the concept of a “jagged technological frontier,” where AI excels in some tasks but falls short in others.
  • Two distinct patterns of AI use emerged: “Centaurs,” who divided and delegated tasks between themselves and the AI, and “Cyborgs,” who integrated their workflow with the AI.

Translation: We need fewer MBAs and old-timers who are not able to maximize billability with smart or semi-smart software. Keep in mind that some consultants view clients with disdain. If these folks were smart, they would not be relying on 20-somethings to bail them out and provide “wisdom.”

This dinobaby is glad he is old.

Stephen E Arnold, October 4, 2023

Blue Chip Consultancy Gets Cute and Caught

October 4, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I was not going to read “PwC Caught Hiding Terms of Secret Review.” However, my eye spotted the delectable name “Ziggy Switkowski” and I had to devour the write up. Imagine a blue chip outfit and a blue chip consultant named Ziggy.

The story reports that PwC (once the esteemed PricewaterhouseCoopers firm) conducted a “secret internal investigation” into a “tax affair.” To me, the fuzzy words suggest tax fraud, but I am a dinobaby and a resident of Harrods Creek, Kentucky.

The Ziggy affair warranted this comment by the Klaxon, an Australian online publication:

“There’s only one reason why you’re not releasing your terms of reference,” governance expert Dr Andy Schmulow told The Klaxon last night. “And that’s because you know you’ve set up a sham inquiry”.

Imagine that! A blue chip consulting firm and a professional named Ziggy. What’s not to believe?

The article adds a rhetorical flourish; to wit:

In an interim report called “PwC: A calculated breach of trust” the inquiry found PwC was continuing to obfuscate, with its actions indicating “poor corporate culture” and a lack of “governance and accountability”. “PwC does not appear to understand proper process, nor do they see the need for transparency and accountability,” the report states. “Given the extent of the breach and subsequent cover-up now revealed on the public record, when is PwC going to come clean and begin to do the right thing?”

My hunch is that blue chip consulting firms may have a different view of what’s appropriate and what’s not. Tax irregularities. Definitely not worth the partners’ time. But Ziggy?

Stephen E Arnold, October 4, 2023

A Complement to Bogus Amazon Product Reviews?

October 4, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The Authors Guild and over 10,000 of its members have been asking Amazon to do something about AI-written books on its platform for months. Now, the AP reports, “Amazon to Require Some Authors to Disclose the Use of AI Material.” Writer Hillel Italie tells us:

“The Authors Guild praised the new regulations, which were posted Wednesday, as a ‘welcome first step’ toward deterring the proliferation of computer-generated books on the online retailer’s site. Many writers feared computer-generated books could crowd out traditional works and would be unfair to consumers who didn’t know they were buying AI content.”

Legitimate concerns. But how much good will the new requirements do, really? Amazon now requires those submitting works to its e-book program to disclose any AI-generated content. But we wonder how that is supposed to help since that information is not, as of this writing, publicly disclosed. We learn:

“A passage posted this week on Amazon’s content guideline page said, ‘We define AI-generated content as text, images, or translations created by an AI-based tool.’ Amazon is differentiating between AI-assisted content, which authors do not need to disclose, and AI-generated work. But the decision’s initial impact may be limited because Amazon will not be publicly identifying books with AI, a policy that a company spokesperson said it may revise. Guild CEO Mary Rasenberger said that her organization has been in discussions with Amazon about AI material since early this year. ‘Amazon never opposed requiring disclosure but just said they had to think it through, and we kept nudging them. We think and hope they will eventually require public disclosure when a work is AI-generated,’ she told The Associated Press on Friday.”

Perhaps. But even if Ms. Rasenberger’s gracious optimism is warranted, the requirement only applies to Amazon’s e-book program. What about the rest of the texts sold through the platform? Or, for that matter, through Amazon-owned Goodreads? Perhaps it is old-fashioned, but I for one would like to know whether a book was written by a human or by software before I buy.

Cynthia Murrell, October 4, 2023

Google and Its Embarrassing Document: Sounds Like Normal Google Talk to Me

October 3, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “DOJ Finally Posted That Embarrassing Court Doc Google Wanted to Hide.” I was surprised that the antitrust trial exhibit made its way to this link. My initial reaction was that the judge was acting in a non-Googley way. I am not sure some of the people I know want Google’s activities to be impaired in any way.


The senior technology executive who seems to look like a gecko lizard is explaining how a business process for an addictive service operates. Those attending the meeting believe that a “lock in” approach is just the ticket to big bucks in the zippy world of digital trank. Hey, MidJourney, nice lizard. Know any?

That geo-fencing capability is quite helpful to some professionals. The second thing that surprised me was… no wait. Let me quote the Ars Technica article first. The write up says:

The document in question contains meeting notes that Google’s vice president for finance, Michael Roszak, “created for a course on communications,” Bloomberg reported. In his notes, Roszak wrote that Google’s search advertising “is one of the world’s greatest business models ever created” with economics that only certain “illicit businesses” selling “cigarettes or drugs” “could rival.” At trial, Roszak told the court that he didn’t recall if he ever gave the presentation. He said that the course required that he tell students “things I don’t believe as part of the presentation.” He also claimed that the notes were “full of hyperbole and exaggeration” and did not reflect his true beliefs, “because there was no business purpose associated with it.”

Gee, I believe this. Sincere, open comment about one’s ability to “recall” is in line with other Google professionals’ commentary; for example, Senator, thank you for the question. I don’t know the answer, but we will provide your office with that information. (Note: I am paraphrasing something I may have heard or hallucinated with Bard, or I may not “recall” where and when I heard that type of statement.)

Ars Technica is doing the he said, she said thing in this statement:

A Google spokesman told Bloomberg that Roszak’s statements “don’t reflect the company’s opinion” and “were drafted for a public speaking class in which the instructions were to say something hyperbolic and attention-grabbing.” The spokesman also noted that Roszak “testified he didn’t believe the statements to be true.” According to Bloomberg, Google lawyer Edward Bennett told the court that Roszak’s notes suggest that the senior executive’s plan for his presentation was essentially “cosplaying Gordon Gekko”—a movie villain who symbolizes corporate greed from 1987’s Wall Street.

I think the Gordon Gekko comparison is unfair. The lingo strikes me as normal Silicon Valley sell-it-with-sizzle lingo.

Stephen E Arnold, October 3, 2023

Teens, Are You Bing-ing Yet?

October 3, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The online advertising giant and all-time champion of redacting documents has innovated again. “Google Expands Its Generative AI Search Experience to Teens Expected to Interact With a Chatbot—Is It Safe?” reports:

Google is opening its generative AI search experience to teenagers aged 13 to 17 in the United States with a Google Account. This expansion allows every teen to participate in Search Labs and engage with AI technology conversationally.

What will those teens do with smart software interested in conversational interactions? As a dinobaby, I have only fuzzy memories of my own teen experiences. I do recall writing reports for some of my classmates. If I were a teenie bopper with access to generative outputs, I would probably use that system to crank out for-fee writings. On the other hand, those classmates would just use the system themselves. Who wants to write about Lincoln’s night at the theater or how eager people from Asia built railroads?

The article notes:

Google is implementing an update to enhance the AI model’s ability to identify false or offensive premise queries, ensuring more accurate and higher-quality responses. The company is also actively developing solutions to enable large language models to self-assess their initial responses on sensitive subjects and rewrite them based on quality and safety criteria.

That’s helpful. Imagine training future Google advertising consumers to depend on the Google for truth. Redactions included, of course.
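Google has not explained how that self-assessment step will work. Purely as an illustration of the pattern the quoted passage describes, here is a minimal sketch of a critique-then-rewrite loop; the generate callable is a hypothetical stand-in for whatever model and safety criteria Google actually uses.

```python
# Illustrative only: a generic critique-then-rewrite loop of the kind the quoted
# passage describes. `generate` is any text-in/text-out model call; nothing here
# is Google's actual code or its quality and safety criteria.
from typing import Callable


def answer_with_self_check(
    query: str,
    generate: Callable[[str], str],
    max_rewrites: int = 2,
) -> str:
    draft = generate(f"Answer the question:\n{query}")
    for _ in range(max_rewrites):
        critique = generate(
            "Check the draft answer below for a false or offensive premise, "
            "unsafe content, and factual errors. Reply OK if acceptable; "
            f"otherwise list the problems.\nQuestion: {query}\nDraft: {draft}"
        )
        if critique.strip().upper().startswith("OK"):
            break  # the model judged its own draft acceptable
        draft = generate(
            "Rewrite the draft to address the listed problems.\n"
            f"Question: {query}\nDraft: {draft}\nProblems: {critique}"
        )
    return draft
```

The real system presumably layers far more policy machinery on top; the sketch shows only the loop.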

Stephen E Arnold, October 3, 2023

Savvy GenZs: Scammers Love Those Kids

October 3, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Many of us assumed the generation that has grown up using digital devices would be the most cyber-crime savvy. Apparently not. Vox reports, “Gen Z Falls for Online Scams More than their Boomer Grandparents Do.” Writer A.W. Ohlheiser cites a recent Deloitte survey that found those born between 1997 and 2012 were three times more likely to fall victim to an online scam than Boomers, twice as likely to have their social media accounts hacked, and more likely to have location information misused than any other generation.

One might think they should know better and, apparently, they do: the survey found Gen Z respondents to be quite aware of cybersecurity issues. The problem may instead lie in the degree to which young people are immersed in the online world(s). We learn:

“There are a few theories that seem to come up again and again. First, Gen Z simply uses technology more than any other generation and is therefore more likely to be scammed via that technology. Second, growing up with the internet gives younger people a familiarity with their devices that can, in some instances, incentivize them to choose convenience over safety. And third, cybersecurity education for school-aged children isn’t doing a great job of talking about online safety in a way that actually clicks with younger people’s lived experiences online.”

So one thing we might do is adjust our approach to cybersecurity education in schools. How else can we persuade Gen Z to accept hassles like two-factor authentication in the interest of safety? Maybe that is the wrong question. Ohlheiser consulted 21-year-old Kyla Guru, a Stanford computer science student and founder of a cybersecurity education organization. The article suggests:

“Instead, online safety best practices should be much more personalized to how younger people are actually using the internet, said Guru. Staying safer online could involve switching browsers, enabling different settings in the apps you use, or changing how you store passwords, she noted. None of those steps necessarily involve compromising your convenience or using the internet in a more limited way. Approaching cybersecurity as part of being active online, rather than an antagonist to it, might connect better with Gen Z, Guru said.”

Guru also believes learning about online bad actors and their motivations may help her peers be more attentive to the issue. The write-up also points to experts who insist apps and platforms bear at least some responsibility to protect users, and there is more they could be doing. For example, social media platforms could send out test phishing emails, as many employers do, then send educational resources to anyone who bites. And, of course, privacy settings could be made much easier to access and understand. Those steps, in fact, could help users of all ages.

Cynthia Murrell, October 3, 2023

Do Teens Read or Screen Surf? Yes, Your Teens

October 2, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I am glad I am old. I read “Study Reveals Some Teens Receive 5,000 Notifications Daily, Most Spend Almost Two Hours on TikTok.” The write up is a collection of factoids. I don’t know if these are verifiable, but taken as a group, the message is tough to swallow. Here’s a sample of the data:

  • Time spent on TikTok: Two hours a day or 38 percent of daily online use. Why? “Reading and typing are exhausting.”
  • 20 percent of the teenies in the sample receive more than 500 notifications a day. A small percentage get 5,000 per day.
  • 97 percent of teenies were on their phone during the school day.

The future is in the hands of the information gatekeepers and quasi-monopolies, not parents and teachers, it seems.

What will a population of swipers, scrollers, and kick-backers do?

My answer is, “Not much other than information grazing.”

Sheep need herders and border collies nipping at their heels.

Thus, I am glad I am old.

Stephen E Arnold, October 2, 2023

Need Free Data? Two Thousand Terabytes Are Available

October 2, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Censys Reveals Open Directories Share More Than 2,000 TB of Unprotected Data.” What’s an open directory? According to the champion of redactions, the term refers to lists of direct links to files. True?

The article reports:

These open directories could leak sensitive data, intellectual property or technical data and let an attacker compromise the entire system.

Why do these “lists” exist? Laziness, lack of staff who know what to do, and forgetting how an intern configured a server years ago?

The article states:

Why don’t search engines prohibit people from seeing those open directories? Censys researchers told TechRepublic that “while this may initially sound like a reasonable approach, it’s a bandage on the underlying issue of open directories being exposed on the internet in the first place.”
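Censys has not published its detection code, but the basic check behind the idea is simple. Here is a minimal sketch, with a hypothetical URL, that flags a server returning the kind of auto-generated index page the article describes:

```python
# Minimal sketch: flag a URL whose response looks like an auto-generated
# directory index (an Apache "Index of /" page, an IIS "[To Parent Directory]"
# listing, and similar). The example URL is hypothetical.
import urllib.request

INDEX_MARKERS = ("<title>Index of /", "Parent Directory", "[To Parent Directory]")


def looks_like_open_directory(url: str) -> bool:
    with urllib.request.urlopen(url, timeout=10) as response:
        page = response.read(65536).decode("utf-8", errors="replace")
    return any(marker in page for marker in INDEX_MARKERS)


if __name__ == "__main__":
    print(looks_like_open_directory("http://example.com/files/"))
```

Scanning at Censys scale involves far more than a string match, but an exposed auto-index is usually about this easy to spot.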

Are open directories a good thing? I think it depends on one’s point of view. Why are bad actors generally cheerful these days? Attack surfaces are abundant and management floats above such hard-to-grasp details about online systems and services. Hey, what time is lunch?

Stephen E Arnold, October 2, 2023

The Murdoch Effect: Outstanding Information 24×7

October 2, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Rupert Murdoch is finally retiring and leaving his propaganda empire to his son Lachlan, who may or may not be even more right-wing than dear old dad. While other outlets ponder what this means for the future of News Corp, Gizmodo examines “All the Ways Rupert Murdoch Left his Grubby Fingerprints on Tech.” Kyle Barr writes:

“You don’t become the biggest name in worldwide media without also becoming something of a major influence on tech. With his direct influence now waning, we can do a bit of an obituary on the mogul’s efforts to influence the world of tech, and how both his direct and unintended efforts have contributed to the shape of our current digital landscape. News Corp wanted to be the biggest name in digital media, and at every step it failed to compete with other big names, leaving it to rely on the bread and butter of its conservative news apparatus. Murdoch’s billions were involved in consolidating the world’s online media experience. His no-holds-barred operating philosophy would end up violating people’s privacy and setting us up for the state of current social media and content streaming. All the while, News Corp’s entities would struggle to find an actual, legitimate foothold in the digital frontier. Instead, Fox News and other Murdoch-owned brands facilitated a new media environment where disinformation ruled the day and truth was laid aside for conservative grievance.”

The write-up shares 11 indelible blotches Murdoch made on the tech landscape in slideshow form. A few key moments include buying up MySpace, thereby clearing the way for Facebook and its countless consequences; helping Mr. Trump rise to power; and buying one of my favorite childhood institutions, National Geographic, and furthering its decimation. A couple of noteworthy fumbles include the investment in the fraudulent Theranos and the Dominion lawsuit against Fox News. See the article for more of Barr’s examples. Now, we wonder, what marks will the junior Murdoch make?

Cynthia Murrell, October 2, 2023
