Consulting Tips: How to Guide Group Thinking

August 27, 2024

One of the mysteries of big-time consulting is answering the question, “Why do these guys seem so smart?” One trick is to have a little knowledge valise stuffed with thinking and questioning tricks. One example is the Boston Consulting Group star, cash cow, dog, and question mark matrix. If you remember the “I like Ike” buttons, you may know that the General used this approach to keep some frisky reports mostly in line during meetings.

Are there other knowledge tools or thinking frameworks? The answer is, “Sure.” When someone asks you to name six, can you deliver a prompt, concise answer? The answer, in my 50-plus years of professional services work, is “Not a chance.”

The good news is that you can locate frameworks, get some tips on how to use these to knock the socks off those in a group, and become a walking, talking Blue Chip Consultant without the pain and expense of a fancy university, hours of drudgery, or enduring scathing comments from more experienced peers.

Navigate to “Tools for Better Thinking.” The link, one hopes, displays the names of thinking frameworks in boxes. Click a box, and you get a description and a how-to about the tool.

I think the site is quite good, and it may help some people sell consulting work in certain situations.

Worth a look.

Stephen E Arnold, August 27, 2024

Anthropic AI: New Allegations of Frisky Behavior

August 27, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Who knew high school science club members would mature into such frisky people? But rules are made to be broken. Apologies make the problem go away. Perhaps in high school with some indulgent faculty advisors? In the real world, where lawyers are more plentiful than cardinals in Kentucky, apologies may not mean anything. I learned that the highly regarded AI outfit Anthropic will be spending some time with the firm’s lawyers.

“Anthropic Faces New Class-Action Lawsuit from Book Authors” reported:

AI company Anthropic is still battling a lyrics-focused lawsuit from music publishers, but now it has a separate legal fight on its hands. Authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson are suing the company in a class-action lawsuit in California. As with the music publishers, their focus is on the training of Anthropic’s Claude chatbot.

I anticipate a few of the really smart and oh-so-busy wizards will be sitting in a conference room doing the deposition thing. That involves lawyers, who are not as scientifically oriented as AI wizards, trying to make sense of Anthropic’s use of OPW (other people’s work) without permission. If you are a fan of legal filings, you can read the 20-page document at this link.

Those AI wizards are clever, aren’t they?

Stephen E Arnold, August 27, 2024

A Tool to Fool AI Detectors

August 27, 2024

Here is one way to fool automated AI detectors: AIHumanizer. A Redditor in the r/ChatGPT subreddit has created a “New Tool that Removes Frequent AI Phrases like ‘Unleash’ or ‘Elevate’.” Sufficient_Ice_6113 writes:

“I created a simple tool which lets you humanize your texts and remove all the robotic or repeated phrases ChatGPT usually uses like ‘Unleash’ ‘elevate’ etc. here is a longer list of them: Most used AI words 100 most common AI words. To remove them and improve your texts you can use aihumanizer.com which completely rewrites your text to be more like it was written by a human. It also makes it undetectable by AI detectors as a side effect because the texts don’t have the common AI pattern any longer. It is really useful in case you want to use an AI text for any work related things, most people can easily tell an email or application to a job was written by AI when it includes ‘Unleash’ and ‘elevate’ a dozen times.”

The author links to their example result at Undetectable AI, the “AI Detector and Humanizer.” That site declares the sample text “appears human.” See the post’s comments for another example, submitted by benkei_sudo. They opine the tool “is a good start, but needs a lot of improvement” because, though it fools the AI checkers, an actual human would have their suspicions. That could be a problem for emails or press releases that, at least for now, tend to be read by people. But how many actual humans are checking resumes or standardized-test essays these days? Besides, Sufficient Ice emphasizes, AIHumanizer offers an upgraded version for an undisclosed price (though a free trial is available). The AI-content arms race continues.
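The basic mechanics are not exotic. As a rough illustration, and only that (aihumanizer.com’s actual rewriting is not public and is presumably LLM-based, so this word-swap table is a hypothetical simplification), a first pass might look like this in Python:

```python
import re

# Hypothetical substitution table: the "AI words" come from lists like the
# one the Redditor links; the replacements are arbitrary plainer choices.
SWAPS = {
    "unleash": "use",
    "elevate": "improve",
    "delve into": "look at",
    "tapestry": "mix",
    "game-changer": "big deal",
}

def humanize(text: str) -> str:
    """Replace frequent AI-flavored phrases with plainer wording."""
    for robotic, plain in SWAPS.items():
        # Word-boundary, case-insensitive match so "Unleash" is caught too.
        text = re.sub(rf"\b{re.escape(robotic)}\b", plain, text,
                      flags=re.IGNORECASE)
    return text

print(humanize("Unleash your potential and elevate your workflow."))
# -> "use your potential and improve your workflow."
# (A real tool would also repair the casing and rephrase whole sentences.)
```

A simple find-and-replace like this would strip the telltale vocabulary but not the sentence rhythm, which is presumably why the actual service rewrites the entire text.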

Cynthia Murrell, August 27, 2024

Eric Schmidt, Truth Teller at Stanford University, Bastion of Ethical Behavior

August 26, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I spotted some of the quotes in assorted online posts about Eric Schmidt’s talk/interview at Stanford University. I wanted to share a transcript of the remarks. You can find the ASCII transcript on GitHub at this link. For those interested in how Silicon Valley concepts influence one’s view of appropriate behavior, this talk is a gem. Is it at the level of the Confessions of St. Augustine? Well, the content is darned close, in my opinion. Students of Google’s decision making, past and present, may find some guideposts. Aspiring “leadership” types may well find tips and tricks.

Stephen E Arnold, August 26, 2024

Meta Leadership: Thank You for That Question

August 26, 2024

Who needs the Dark Web when one has Facebook? We learn from The Hill, “Lawmakers Press Meta Over Illicit Drug Advertising Concerns.” Writer Sarah Fortinsky pulls highlights from the open letter a group of House representatives sent directly to Mark Zuckerberg. The rebuke follows a March report from The Wall Street Journal that Meta was under investigation for “facilitating the sale of illicit drugs.” Since that report, the lawmakers lament, Meta has continued to run such ads. We learn:

“The Tech Transparency Project recently reported that it found more than 450 advertisements on those platforms that sell pharmaceuticals and other drugs in the last several months. ‘Meta appears to have continued to shirk its social responsibility and defy its own community guidelines. Protecting users online, especially children and teenagers, is one of our top priorities,’ the lawmakers wrote in their letter, which was signed by 19 lawmakers. ‘We are continuously concerned that Meta is not up to the task and this dereliction of duty needs to be addressed,’ they continued. Meta uses artificial intelligence to moderate content, but the Journal reported the company’s tools have not managed to detect the drug advertisements that bypass the system.”

The bipartisan representatives did not shy from accusing Meta of dragging its heels because it profits off these illicit ad campaigns:

“The lawmakers said it was ‘particularly egregious’ that the advertisements were ‘approved and monetized by Meta.’ … The lawmakers noted Meta repeatedly pushes back against their efforts to establish greater data privacy protections for users and makes the argument ‘that we would drastically disrupt this personalization you are providing,’ the lawmakers wrote. ‘If this personalization you are providing is pushing advertisements of illicit drugs to vulnerable Americans, then it is difficult for us to believe that you are not complicit in the trafficking of illicit drugs,’ they added.”

The letter includes a list of questions for Meta. There is a request for data on how many of these ads the company has discovered itself and how many it missed that were discovered by third parties. It also asks about the ad review process, how much money Meta has made off these ads, what measures are in place to guard against them, and how minors have interacted with them. The legislators also ask how Meta uses personal data to target these ads, a secret the company will surely resist disclosing. The letter gives Zuckerberg until September 6 to respond.

Cynthia Murrell, August 26, 2024

AI Snake Oil Hisses at AI

August 23, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Enthusiasm for certain types of novel software or gadgets rises and falls. The Microsoft marketing play with OpenAI marked the beginning of the smart software hype derby. Google got the message and flipped into Red Alert mode. Now, about 20 months after Microsoft’s announcement of its AI tie-up with Sam AI-Man, we have Google’s new combo: AI in a mobile phone. Bam! Job done. Slam dunk.


Thanks, MSFT Copilot. On top of the IPv6 issue? Oh, too bad.

I wonder if the Googlers were thinking along the same logical lines as the authors of “AI Companies Are Pivoting from Creating Gods to Building Products. Good.”

The snake oil? Dripping. Here’s a passage from the article I noted:

AI companies are collectively planning to spend a trillion dollars on hardware and data centers, but there’s been relatively little to show for it so far.

A trillion? That’s a decent number. Sam AI-Man wants more, but the scale is helpful, particularly when most numbers are mere billions in the zoom zoom world of smart software.

The most important item in the write up, in my opinion, is the list of five “challenges.” The article focuses on consumer AI, but a couple of these apply to the enterprise sector as well. Let’s look at the five “challenges.” Keep in mind, I am paraphrasing, as dinobabies often do:

  1. Cost. In terms of consumers, one must consider making Hamster Kombat smart. (This is a Telegram dApp.) My team informed me that this little gem has 35 million users, and it is still growing. Imagine the computational cost to infuse each and every Hamster Kombat “game” player with AI goodness; see the back-of-envelope sketch after this list. But it’s a game, and a distributed one at that, one might say. Someone has to pay for these cycles. And Hamster Kombat is not on most consumers’ radar. Telegram has about 950 million users, so the 35 million players come from that pool. What are the costs of AI-infused games outside of a walled garden? And the hardware? And the optimization engineering? And the fooling around with ad deals? Costs are not a mere hurdle. Costs might be a Grand Canyon-scale leap into a financial mud bank.
  2. Reliability. Immature systems and methods, training content issues (real and synthetic), and the fancy math which uses a lot of probability procedures guarantees some interesting outputs.
  3. Privacy. The consumer- or user-facing services are immature. Developers want to get something to mostly work in a good-enough manner. Then security may be discussed. But on to the next feature. As a result, I am not sure anyone has a decent grasp of the security issues which smart software might pose. Look at Microsoft. It has been around almost half a century, and I learn about new security problems every day. Is smart software different?
  4. Safety and security. This is a concomitant to privacy. Good luck knowing what the systems do or do not do.
  5. User interface. I am a dinobaby. The interfaces are pale, low contrast, and change depending on what a user clicks. I like stability. Smart software simply does not comprehend that word.
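To make the cost challenge concrete, here is a back-of-envelope sketch in Python. Every rate below is my own assumption for illustration; the only figure from the write up is the 35 million user count:

```python
# Back-of-envelope inference cost for an AI-infused Hamster Kombat.
# All rates are assumptions for illustration, not figures from the article.
users            = 35_000_000   # reported player base
calls_per_day    = 3            # assumed AI calls per player per day
tokens_per_call  = 500          # assumed prompt + completion tokens
usd_per_m_tokens = 0.50         # assumed price for a cheap hosted model

daily_tokens = users * calls_per_day * tokens_per_call      # 52.5 billion
daily_cost   = daily_tokens / 1_000_000 * usd_per_m_tokens  # $26,250
print(f"${daily_cost:,.0f}/day, ${daily_cost * 365:,.0f}/year")
# -> $26,250/day, $9,581,250/year
```

Even with a bargain-basement model and light usage, a free game picks up a bill approaching $10 million a year, which is the point: someone has to pay for the cycles.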

Good points. My view is that the obstacle to surmount is money. I am not sure that the big outfits anticipated the costs of their sally into the hallucinating world of AI. And what are those costs, pray tell? Here are selected items the financial managers at the Big Dogs are pondering, along with the wording of their updated LinkedIn profiles:

  • Litigation. Remarks by some icons of the high technology sector have done little to assuage the feelings of those whose content was used without permission or compensation. Some, some people. A few Big Dogs are paying cash to scrape.
  • Power. Yep, electricity, as EV owners know, is not really free.
  • Water. Yep, modern machines produce heat, if what I learned in physics was actual factual.
  • People (until they can be replaced by a machine that does not require health care or engage in signing petitions).
  • Data and indexing. Yep, still around and expensive.
  • License fees. They are comin’ round the mountain of legal filings.
  • Meals, travel and lodging. Leadership will be testifying, probably a lot.
  • PR advisors and crisis consultants. See the first bullet, Litigation.

However, slowly but surely some commercial sectors are using smart software. There is an AI law firm. There are dermatologists letting AI determine what to cut, freeze, or ignore. And there are college professors using AI to help them do “original” work and create peer-review fodder.

There was a snake in the Garden of Eden, right?

Stephen E Arnold, August 23, 2024

Google Leadership Versus Valued Googlers

August 23, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The summer in rural Kentucky lingers on. About 2,300 miles away from the Sundar & Prabhakar Comedy Show’s nerve center, the Alphabet Google YouTube DeepMind entity is also experiencing “cyclonic heating from chaotic employee motion.” What’s this mean? Unsteady waters? Heat stroke? Confusion? Hallucinations? My goodness.

The Google leadership faces another round of employee pushback. I read “Workers at Google DeepMind Push Company to Drop Military Contracts.”

How could the Google smart software fail to predict this pattern? My view is that smart software has some limitations when it comes to managing AI wizards. Furthermore, Google senior managers have not been able to extract full knowledge value from the tools at their disposal to deal with complexity. Time Magazine reports:

Nearly 200 workers inside Google DeepMind, the company’s AI division, signed a letter calling on the tech giant to drop its contracts with military organizations earlier this year, according to a copy of the document reviewed by TIME and five people with knowledge of the matter. The letter circulated amid growing concerns inside the AI lab that its technology is being sold to militaries engaged in warfare, in what the workers say is a violation of Google’s own AI rules.

Why are AI Googlers grousing about military work? My personal view is that the recent hagiography of Palantir’s Alex Karp and the tie-up between Microsoft and Palantir for Impact Level 5 services means that the US government is gearing up to spend some big bucks for warfighting technology. Google wants — really needs — this revenue. Penalties for its frisky behavior, which Judge Mehta described as “monopolistic,” could put a hitch in the git-along of Google ad revenue. Therefore, Google’s smart software can meet the hunger militaries have for intelligent software to perform a wide variety of functions. As the Russian special operation makes clear, “meat based” warfare is somewhat inefficient. Ukrainian garage-built drones with some AI bolted on perform better than a wave of 18 year olds with rifles and a handful of bullets. The example which sticks in my mind is a Ukrainian drone spotting a Russian soldier in a field, partially obscured by bushes. The individual is attending to nature’s call. The drone spots the “shape” and explodes near the Russian infantryman.


A former consultant faces an interpersonal Waterloo. How did that work out for Napoleon? Thanks, MSFT Copilot. Are you guys working on the IPv6 issue? Busy weekend ahead?

Those who study warfare probably have their own ah-ha moment.

The Time Magazine write up adds:

Those principles state the company [Google/DeepMind] will not pursue applications of AI that are likely to cause “overall harm,” contribute to weapons or other technologies whose “principal purpose or implementation” is to cause injury, or build technologies “whose purpose contravenes widely accepted principles of international law and human rights.” The letter says its signatories are concerned with “ensuring that Google’s AI Principles are upheld,” and adds: “We believe [DeepMind’s] leadership shares our concerns.”

I love it when wizards “believe” something.

Will the Sundar & Prabhakar brain trust do the believing, or will it bank revenue from government agencies eager to gain access to advanced artificial intelligence services and systems? My view is that the “believers” underestimate the uncertainty arising from the potential sanctions, fines, or corporate deconstruction which Judge Mehta’s decision presents.

The article adds this bit of color about the Sundar & Prabhakar response time to Googlers’ concern about warfighting applications:

The [objecting employees’] letter calls on DeepMind’s leaders to investigate allegations that militaries and weapons manufacturers are Google Cloud users; terminate access to DeepMind technology for military users; and set up a new governance body responsible for preventing DeepMind technology from being used by military clients in the future. Three months on from the letter’s circulation, Google has done none of those things, according to four people with knowledge of the matter. “We have received no meaningful response from leadership,” one said, “and we are growing increasingly frustrated.”

“No meaningful response” suggests that the Alphabet Google YouTube DeepMind rhetoric is not satisfactory.

The write up concludes with this paragraph:

At a DeepMind town hall event in June, executives were asked to respond to the letter, according to three people with knowledge of the matter. DeepMind’s chief operating officer Lila Ibrahim answered the question. She told employees that DeepMind would not design or deploy any AI applications for weaponry or mass surveillance, and that Google Cloud customers were legally bound by the company’s terms of service and acceptable use policy, according to a set of notes taken during the meeting that were reviewed by TIME. Ibrahim added that she was proud of Google’s track record of advancing safe and responsible AI, and that it was the reason she chose to join, and stay at, the company.

With Microsoft and Palantir, among others, poised to capture some end-of-fiscal-year money from certain US government budgets, the comedy act’s headquarters’ planners want a piece of the action. How will the Sundar & Prabhakar Comedy Act handle the situation? Why, procrastinate. Perhaps the comedy act hopes the issue will just go away. The complaining employees have short attention spans, rely on TikTok-type services for information, and can be terminated like other Googlers who grouse, picket, boycott the Foosball table, or quiet quit while working on a personal start-up.

The approach worked reasonably well before Judge Mehta labeled Google a monopoly operation. It worked when ad dollars flowed like latte at Philz Coffee. But today is different, and the unsettled personnel are not a joke and add to the uncertainty some have about the Google we know and love.

Stephen E Arnold, August 23, 2024

Which Is It, City of Columbus: Corrupted or Not Corrupted Data?

August 23, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I learned that Columbus, Ohio, suffered one of those cyber security missteps. But the good news is that I learned from the ever reliable Associated Press, “Mayor of Columbus, Ohio, Says Ransomware Attackers Stole Corrupted, Unusable Data.” But then I read the StateScoop story “Columbus, Ohio, Ransomware Data Might Not Be Corrupted After All.”


The answer is, “I don’t know.” Thanks, MSFT Copilot. Good enough.

The story is a groundhog day tale. A bad actor compromises a system. The bad actor delivers ransomware. The senior officers know little about ransomware and even less about the cyber security systems marketed as a proactive, intelligent defense against bad stuff like ransomware. My view, as you know, is that it is easier to create sales decks and marketing collateral than it is to deliver cyber security software that works. Keep in mind that I am a dinobaby. I like products that under promise and over deliver. I like software that works, not sort of works or mostly works. Works. That’s it.

What’s interesting about Columbus, other than its zoo, its annual flower festival, and the OCLC organization, is that no one can agree on this issue. I believe this is a variation on the Bud Abbott and Lou Costello routine “Who’s on First.”

StateScoop’s story reported:

An anonymous cybersecurity expert told local news station WBNS Tuesday that the personal information of hundreds of thousands of Columbus residents is available on the dark web. The claim comes one day after Columbus Mayor Andrew Ginther announced to the public that the stolen data had been “corrupted” and most likely “unusable.” That assessment was based on recent findings of the city’s forensic investigation into the incident.

The article noted:

Last week, the city shared a fact sheet about the incident, which explains: “While the city continues to evaluate the data impacted, as of Friday August 9, 2024, our data mining efforts have not revealed that any of the dark web-posted data includes personally identifiable information.”

What are the lessons I have learned from these two stories about a security violation and ransomware extortion?

  1. Lousy cyber security is a result of indifferent (maybe lousy) management. How do I know? The City of Columbus cannot generate a consistent story.
  2. The compromised data were described in two different and opposite ways. The confusion underscores that the individuals involved are struggling with basic data processes. Who’s on first? I don’t know. No, he’s on third.
  3. The generalization that no one wants the data misses an important point. Data, once available, is of considerable interest to state actors who might be interested in the employees associated with the university, Chemical Abstracts, or some other information-centric entity in Columbus, Ohio.

Net net: The incident is one more grim reminder of the vulnerabilities which “managers” choose to ignore or leave to people who may lack certain expertise. The fix may begin in the hiring process.

Stephen E Arnold, August 23, 2024

Phishers: Targeting Government Contract Shoemakers Who Do Not Have Shoes But Talk about Them

August 22, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The proverb "The shoemaker’s children go barefoot" has inspired some bad actors who phish for online credentials. The obvious targets, some might suggest, are executives at major US government agencies. Those individuals are indeed targets, but a number of bad actors have found ways to get a GS-9 to click on a link designed to steal credentials. An even more promising barrel containing lots of fish may be the vendors who sell professional services, including cyber security, to the US government agencies.


Of course, our systems are secure. Thanks, MSFT Copilot. How is Word doing today? Still crashing?

“This Sophisticated New Phishing Campaign Is Going after US Government Contractors” explains:

Researchers from Perception Point revealed the “Uncle Scam” campaign bypasses security checks to deliver sophisticated phishing emails designed by LLMs to be extremely convincing. The attackers use advanced tools, including AI-powered phishing kits and the Microsoft Dynamics 365 platform, to execute convincing multi-step attacks.

The write up then reveals one key, maybe the principal key, to the campaign’s success:

One of the key elements that makes this phishing campaign particularly effective is the abuse of Microsoft’s Dynamics 365 Marketing platform. The attackers leverage the domain "dyn365mktg.com," associated with Dynamics 365, to send out their malicious emails. Because this domain is pre-authenticated by Microsoft and complies with DKIM and SPF standards, phishing emails are more likely to bypass spam filters and reach the inboxes of unsuspecting recipients.

If I understand this statement, the recipient sees email with a pattern set up to suck credentials. Why would a government contractor click on such an email? The domain is “pre-authenticated by Microsoft.” If it looks like a duck and walks like a duck, the email must be a duck. Yes, it is a digital duck which is designed to take advantage of yet another “security” and “trust” facet of the Microsoft ecosystem.
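For the curious, it is easy to see why mail relayed through such a domain authenticates cleanly. A minimal sketch using the dnspython package (the domain is the one named in the article; the lookup itself is generic) shows the published SPF policy that filters trust:

```python
import dns.resolver
import dns.exception  # pip install dnspython

def spf_record(domain: str) -> str | None:
    """Return the published SPF policy for a domain, if any."""
    try:
        for rdata in dns.resolver.resolve(domain, "TXT"):
            txt = b"".join(rdata.strings).decode("utf-8", "replace")
            if txt.startswith("v=spf1"):
                return txt
    except dns.exception.DNSException:
        pass
    return None

# Mail sent through a Microsoft-operated sending domain passes SPF (and
# DKIM) checks, so filters keyed to those signals wave it through. The
# authentication proves who relayed the mail, not that it is honest.
print(spf_record("dyn365mktg.com"))
```

The design lesson: SPF and DKIM authenticate the sending infrastructure, not the intent of the message, which is exactly the gap the “Uncle Scam” crowd drives through.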

I found this series of statements interesting. Once again, the same old truisms are trotted out to help a victim avoid a similar problem in the future. I quote:

To safeguard your organization from falling victim to sophisticated phishing attacks like "Uncle Scam," Perception Point recommends taking the following precautions:

  • Double-check the Sender’s Email: Always scrutinize the sender’s email address for any signs of impersonation.
  • Hover Before You Click: Before clicking any link, hover over it to reveal the actual URL and ensure it is legitimate. 
  • Look for Errors: Pay attention to minor grammatical mistakes, unusual phrasing, or inconsistencies in the email content.
  • Leverage Advanced Detection Tools: Implement AI-powered multi-layered security solutions to detect and neutralize sophisticated phishing attempts.
  • Educate Your Team: Regularly train employees on how to identify phishing emails and the importance of verifying unsolicited communications.
  • Trust Your Instincts: If an email or offer seems too good to be true, it probably is. Always verify the authenticity of such communications through trusted channels.
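Of these tips, “hover before you click” is the one most amenable to automation. Here is a minimal sketch using only the Python standard library; the sample anchor is hypothetical, built from the domain named in the article:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Flag anchors whose visible text looks like a URL but points at a
    different host than the actual href -- the mismatch that 'hover
    before you click' is meant to catch."""
    def __init__(self):
        super().__init__()
        self._href = None      # href of the anchor currently open
        self._text = []        # visible text collected inside it
        self.suspicious = []   # (shown_text, real_href) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            shown = "".join(self._text).strip()
            if shown.startswith(("http://", "https://", "www.")):
                shown_host = urlparse(shown if "://" in shown
                                      else "https://" + shown).hostname
                real_host = urlparse(self._href).hostname
                if shown_host and shown_host != real_host:
                    self.suspicious.append((shown, self._href))
            self._href = None

auditor = LinkAuditor()
auditor.feed('<a href="https://dyn365mktg.com/bid">https://sam.gov/opportunity</a>')
print(auditor.suspicious)
# [('https://sam.gov/opportunity', 'https://dyn365mktg.com/bid')]
```

A check like this catches only the crudest lures; it does nothing about a convincing link on legitimate-looking infrastructure, which is the harder problem the campaign exploits.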

How well do these tips work in today’s government contractor workspace? Answer: Not too well.

The issue is the underlying software. The fix is going to be difficult to implement. Microsoft is working to make its systems more secure. The government contractors can make shoes in the form of engineering change orders, scope changes, and responses to RFQs which hit every requirement in the RFP. But many of those firms have assumed that the cyber security systems will do their job.

Ignorance is bliss. Maybe not for the compromised contractor, but the bad actors are enjoying the Uncle Scam play and may for years to come.

Stephen E Arnold, August 22, 2024

AI Balloon: Losing Air and Boring People

August 22, 2024

Though tech bros who went all-in on AI still promise huge breakthroughs just over the horizon, Windows Central’s Kevin Okemwa warns: “The Generative AI Bubble Might Burst, Sending the Tech to an Early Deathbed Before Its Prime: ‘Don’t Believe the Hype’.” Sadly, it is probably too late to save certain career paths, like coding, from an AI takeover. But perhaps a slowdown would conserve some valuable resources. Wouldn’t that be nice? The write-up observes:

“While AI has opened up the world to endless opportunities and untapped potential, its hype might be short-lived, with challenges abounding. Aside from its high water and power demands, recent studies show that AI might be a fad and further claim that 30% of its projects will be abandoned after proof of concept. Similar sentiments are echoed in a recent Blood In The Machine newsletter, which points out critical issues that might potentially lead to ‘the beginning of the end of the generative AI boom.’ From the Blood in the Machine newsletter analysis by Brian Merchant, who is also the Los Angeles Times’ technology columnist:

‘This is it. Generative AI, as a commercial tech phenomenon, has reached its apex. The hype is evaporating. The tech is too unreliable, too often. The vibes are terrible. The air is escaping from the bubble. To me, the question is more about whether the air will rush out all at once, sending the tech sector careening downward like a balloon that someone blew up, failed to tie off properly, and let go—or, more slowly, shrinking down to size in gradual sputters, while emitting embarrassing fart sounds, like a balloon being deliberately pinched around the opening by a smirking teenager.’”

Such evocative imagery. Merchant’s article also notes that, though Enterprise AI was meant to be the way AI firms made their money, it is turning out to be a dud. There are several reasons for this, not the least of which is AI models’ tendency to “hallucinate.”

Okemwa offers several points to support Merchant’s deflating-balloon claim. For example, Microsoft was recently criticized by investors for wasting their money on AI technology. Then there is NVIDIA: the chipmaker recently became the most valuable company in the world thanks to astronomical demand for its hardware to power AI projects. However, a delay of its latest powerful chip dropped its stock value by 5%, and market experts suspect its value will continue to decline. The write-up also points to trouble at generative AI’s flagship firm, OpenAI. The company is plagued by a disturbing exodus of top executives, rumors of pending bankruptcy, and a pesky lawsuit from Elon Musk.

Speaking of Mr. Musk, how do those who say AI will kill us all respond to the potential AI downturn? Crickets.

Cynthia Murrell, August 22, 2024
