Forget Deep Fakes. Watch for Shallow Fakes

December 6, 2023

This essay is the work of a dumb dinobaby. No smart software required.

“A Tech Conference Listed Fake Speakers for Years: I Accidentally Noticed” revealed a factoid about which I knew absolutely zero. The write up reveals:

For 3 years straight, the DevTernity conference listed non-existent software engineers representing Coinbase and Meta as featured speakers. When were they added and what could have the motivation been?

The article identifies and includes what appear to be “real” pictures of a couple of these made-up speakers. What’s interesting is that only females seem to be made up. Is that perhaps because conference organizers like to take the easiest path, choosing people who are “in the news” or “friends”? In the technology world, I see more entities which appear to be male than appear to be non-males.

Shallow fakes. Deep fakes. What’s the problem? Thanks, MSFT Copilot. Nice art which you achieved exactly how? Oh, don’t answer that question. I don’t want to know.

But since I don’t attend many conferences, I am not in touch with demographics. Furthermore, I am not up to speed on fake people. To be honest, I am not too interested in people, real or fake. After a half century of work, I like my French bulldog.

The write up points out:

We’ve not seen anything of this kind of deceit in tech – a conference inventing speakers, including fake images – and the mainstream media covered this first-of-a-kind unethical approach to organizing a conference,

That’s good news.

I want to offer a handful of thoughts about creating “fake” people for conferences and other business efforts:

  1. Why not? The practice went unnoticed for years.
  2. Creating digital “fakes” is getting easier and the tools are becoming more effective at duplicating “reality” (whatever that is). It strikes me that people looking for a short cut for a diverse Board of Directors, speaker line up, or a LinkedIn reference might find the shortest, easiest path to shape reality for a purpose.
  3. The method used to create a fake speaker is more correctly termed a “shallow” fake. Why? As the author of the cited write up points out, disproving the reality of the fakes was easy and took little time.

Let me shift gears. Why would conference organizers find fake speakers appealing? Here are some hypotheses:

  1. Conferences fall into a “speaker rut”; that is, organizers become familiar with certain speakers and consciously or unconsciously slot them into the next program because they are good speakers (one hopes), friendly, or don’t make unwanted suggestions to the organizers.
  2. Conference staff are overworked and understaffed. Applying some smart workflow magic to organizing and filling in the blank spaces on the program makes the use of fakery appealing, at least at one conference. Will others learn from this method?
  3. Conferences have become more dependent on exhibitors. Over the years, renting booth space has become a way for a company to be featured on the program. Yep, advertising, just advertising linked to “sponsors” of social gatherings or Platinum and Gold sponsors who get to put marketing collateral in a cheap nylon bag foisted on every registrant.

I applaud this write up. Not only will it give people ideas about how to use “fakes,” it will also inspire innovation in surprising ways. Why not “fake” consultants on a Zoom call? There’s an idea for you.

Stephen E Arnold, December 6, 2023

How about Fear and Paranoia to Advance an Agenda?

December 6, 2023

This essay is the work of a dumb dinobaby. No smart software required.

I thought sex sells. I think I was wrong. Fear seems to be the barn burner at the end of 2023. And why not? We have the shadow of another global pandemic. We have wars galore. We have craziness on US airplanes. We have a Cybertruck which spells the end for anyone hit by the behemoth.

I read (but did not shake like the delightful female in the illustration) “AI and Mass Spying.” The author is a highly regarded “public interest technologist,” an internationally renowned security professional, and a security guru. For me, the key factoid is that he is a fellow at the Berkman Klein Center for Internet & Society at Harvard University and a lecturer in public policy at the Harvard Kennedy School. Mr. Schneier is a board member of the Electronic Frontier Foundation and the most, most interesting organization AccessNow.

Fear speaks clearly to those in retirement communities, elder care facilities, and those who are uninformed. Let’s say, “Grandma, you are going to be watched when you are in the bathroom.” Thanks, MSFT Copilot. I hope you are sending data back to Redmond today.

I don’t want to make too much of the Harvard University connection. I feel it is important to note that the esteemed educational institution got caught with its ethical pants around its ankles, not once, but twice in recent memory. The first misstep involved an ethics expert on the faculty who allegedly made up information. The second is the current hullabaloo about a whistleblower allegation. The AP slapped this headline on that report: “Harvard Muzzled Disinfo Team after $500 Million Zuckerberg Donation.” (I am tempted to mention the Harvard professor who is convinced he has discovered tangible proof of alien technology.)

So what?

The article “AI and Mass Spying” is a baffler to me. The main point of the write up strikes me as:

Summarization is something a modern generative AI system does well. Give it an hourlong meeting, and it will return a one-page summary of what was said. Ask it to search through millions of conversations and organize them by topic, and it’ll do that. Want to know who is talking about what? It’ll tell you.

I interpret the passage to mean that smart software in the hands of law enforcement, intelligence operatives, investigators in one of the badge-and-gun agencies in the US, or a cyber lawyer is really, really bad news. Smart surveillance has arrived. Smart software can process masses of data. Plus the outputs may be wrong. I think this means the sky is falling. The fear one is supposed to feel is going to be the way a chicken feels when it sees the Chick-fil-A butcher truck pull up to the barn.
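The capability described in the quoted passage does not require exotic tools. Here is a minimal sketch, assuming scikit-learn is installed, of the pre-LLM version of “organize them by topic”: TF-IDF vectors plus k-means clustering. The sample conversations and the cluster count are invented for illustration; a real surveillance pipeline would be vastly larger and would likely layer generative models on top.

```python
# A minimal sketch of grouping conversations by topic with classic
# TF-IDF + k-means. The sample texts below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

conversations = [
    "let's move the shipment to the warehouse on friday",
    "the warehouse delivery slipped to next friday",
    "grandma's birthday dinner is at six, bring the cake",
    "don't forget candles for the birthday cake",
]

# Convert each conversation into a sparse TF-IDF vector.
vectors = TfidfVectorizer(stop_words="english").fit_transform(conversations)

# Partition the vectors into two topic clusters.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, text in zip(labels, conversations):
    print(label, text)
```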

Several observations:

  1. Let’s assume that smart software grinds through whatever information is available to something like a spying large language model. Are those engaged in law enforcement unaware that smart software generates baloney along with the Kobe beef? Will investigators knock off the verification processes because a new system has been installed at a fusion center? The answer to these questions is, “Fear advances the agenda of using smart software for certain purposes; specifically, enforcement of rules, regulations, and laws.”
  2. I know that the idea that “all” information can be processed is a jazzy claim. Google made it, and those familiar with Google search results know that Google does not even come close to all. It can barely deliver useful results from the Railroad Retirement Board’s Web site. “All” covers a lot of ground, and it is unlikely that a policeware vendor will be able to do much more than process a specific collection of data believed to be related to an investigation. “All” is for fear, not illumination. Save the categorical affirmatives for the marketing collateral, please.
  3. The computational cost of applying smart software to large domains of data — for example, global intercepts of text messages — is fun to talk about over lunch. But the costs are quite real. First the costs of the computational infrastructure have to be paid. Then come the costs of the downstream systems and the people who have to figure out whether the smart software is hallucinating or delivering something useful. I would suggest that Israel’s surprise at the unhappy events from October 2023 to the present day unfolded despite the baloney about smart security software, a great intelligence apparatus, and the tons of marketing collateral handed out at law enforcement conferences. News flash: The stuff did not work.

In closing, I want to come back to fear. Exactly what is accomplished by using fear as the pointy end of the stick? Is it insecurity about smart software? Are there other messages framed in a different way to alert people to important issues?

Personally, I think fear is a low-level technique for getting one’s point across. But when those affiliated with an outfit tainted by the ethics matter and now the payola approach to information reach for it, how about putting on the big boy pants and selecting a rhetorical trope that does something other than remind people that the Covid thing could have killed us all? Err. No. And what is the agenda fear advances?

So, strike the sex sells trope. Go with fear sells.

Stephen E Arnold, December 6, 2023

AI: Big Ideas Become Money Savers and Cost Cutters

December 6, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Earlier this week (November 28, 2023), the British newspaper The Guardian published “Sports Illustrated Accused of Publishing Articles Written by AI.” The main idea is that dependence on human writers became the focus of a bunch of bean counters. The magazine has a reasonably high profile among a demographic not focused on discerning the difference between machine output and sleek, intellectual, well-groomed New York “real” journalists. Some cared. I didn’t. It’s money ball in the news business.

The day before the Sports Illustrated slick business and PR move, I noted a Murdoch-infused publication’s revelation about smart software. Barron’s published “AI Will Create—and Destroy—Jobs. History Offers a Lesson.” Barron’s wrote about it; Sports Illustrated got snared doing it.

Barron’s said:

That AI technology will come for jobs is certain. The destruction and creation of jobs is a defining characteristic of the Industrial Revolution. Less certain is what kind of new jobs—and how many—will take their place.

Okay, the Industrial Revolution. Exactly how long did that take? What jobs were destroyed? What were the benefits at the beginning, the middle, and the end of the Industrial Revolution? What were the downsides of the disruption which unfolded over time? Decades, wasn’t it?

The AI “revolution” is perceived to be real. Investors, testosterone-charged venture capitalists, and some Type A students are going to make the AI Revolution a reality. Damn the regulators, the copyright complainers, and the dinobabies who want to read, think, and write themselves.

Barron’s noted:

A survey conducted by LinkedIn for the World Economic Forum offers hints about where job growth might come from. Of the five fastest-growing job areas between 2018 and 2022, all but one involve people skills: sales and customer engagement; human resources and talent acquisition; marketing and communications; partnerships and alliances. The other: technology and IT. Even the robots will need their human handlers.

I can think of some interesting jobs. Thanks, MSFT Copilot. You did ingest some 19th century illustrations, didn’t you, you digital delight.

Now those are rock solid sources: Microsoft’s LinkedIn and the charming McKinsey & Company. (I think of McKinsey as the opioid innovators, but that’s just my inexplicable predisposition toward an outstanding bastion of ethical behavior.)

My problem with the Sports Illustrated AI move and the Barron’s essay boils down to the bipolarism which surfaces when a new next big thing appears on the horizon. Predicting what will happen when a technology smashes into business billiard balls is fraught with challenges.

One thing is clear: The balls are rolling, and journalists, paralegals, consultants, and some knowledge workers are going to find themselves in the side pocket. The way out might be making TikToks or selling gadgets on eBay.

Some will say, “AI took our jobs, Billy. Now what?” Yes, now what?

Stephen E Arnold, December 6, 2023

Is Crypto the Funding Mechanism for Bad Actors?

December 6, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Allegations make news. The United States and its allies are donating monies and resources to Israel as it fights against Hamas. As a rogue group, Hamas is not as well-funded as Israel, and people speculate about how it is financing its violent attacks. Marketplace explains how the Palestinian group is receiving some of its funding, and it’s a very obvious answer: “Crypto Is One Way Hamas Gets Its Funding.” David Brancaccio, host of the Marketplace Morning Report, interviewed Ari Redbord, a former federal prosecutor and US Treasury Department official who now heads legal and government affairs at TRM Labs, a cryptocurrency compliance firm. Redbord and Brancaccio discuss how Hamas uses crypto.

Hamas is subject to sanctions from the US Treasury Department, so the group’s access to international banking is restricted. Cryptocurrency allows Hamas to circumvent those sanctions. Ironically, cryptocurrency might make it easier for authorities to track illegal use of money because the ledger can’t be forged. Crypto moves across public ledgers known as blockchains, which are maintained by networks of computers. Because the blockchains are public, transactions are traceable and transparent. Companies like TRM allow law enforcement and other authorities to track blockchains.
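That transparency is easy to demonstrate. The sketch below pulls the recorded transactions for a Bitcoin address from one public block explorer API (blockchain.info’s rawaddr endpoint); the address shown is the well-known genesis block address, used purely as an example. Commercial tracing tools such as TRM’s add clustering and attribution on top of this kind of raw data.

```python
# A minimal sketch of blockchain transparency: any observer can pull
# the recorded transaction history of a Bitcoin address from a public
# block explorer. The address below is the famous genesis block address.
import json
import urllib.request

def transactions_for(address: str) -> list:
    """Fetch the recorded transactions for a Bitcoin address."""
    url = f"https://blockchain.info/rawaddr/{address}"
    with urllib.request.urlopen(url) as response:
        data = json.load(response)
    return data["txs"]

if __name__ == "__main__":
    for tx in transactions_for("1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa"):
        print(tx["hash"], tx["time"])
```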

In 2020, the US Department of Justice, IRS-CI, and the FBI seized 150 crypto wallets associated with Hamas. TRM Labs continuously tracks Hamas and its financial supporters, most of whom appear to be in Iran. Hamas doesn’t accept bitcoin donations anymore:

“Brancaccio: I think it was April of this year, Hamas announced it would no longer take donations in bitcoin. Perhaps it’s because of its traceability? Redbord: Yeah, really important point. And that’s essentially what Hamas itself said that, you know, law enforcement and other authorities have been coming down on their supporters because they’ve been able to trace and track these flows. And announced in April that they would not be soliciting donations in cryptocurrency. Now, whether that’s entirely true or not, it’s hard to say. We’re obviously seeing at least supporters of Hamas go out there raising funds in crypto.”

What will bad actors do to get money? Find options and use them.

Whitney Grace, December 6, 2023

Harvard University: Does Money Influence Academic Research?

December 5, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Harvard University has been on my radar since the ethics misstep. In case your memory is fuzzy, Francesca Gino, a big thinker about ethics and taking shortcuts, was accused of data fraud. The story did not attract much attention in rural Kentucky. Ethics and dishonesty? Come on. Harvard has to do some serious training to catch up with a certain university in Louisville. For a reasonable explanation of the allegations (because, of course, one will never know), navigate to “Harvard Professor Who Studies Dishonesty Is Accused of Falsifying Data” and dig in.

Thanks, MSFT Copilot, you have nailed the depressive void that comes about when philosophers learn that ethics suck.

Why am I thinking about Harvard and ethics? The answer is that I read “Harvard Gutted Initial Team Examining Facebook Files Following $500 Million Donation from Chan Zuckerberg Initiative, Whistleblower Aid Client Reveals.” I have no idea if the write up is spot on, weaponized information, or the work of someone who did not get into one of the university’s numerous money generating certification programs.

The write up asserts:

Harvard University dismantled its prestigious team of online disinformation experts after a foundation run by Facebook’s Mark Zuckerberg and his wife Priscilla Chan donated $500 million to the university, a whistleblower disclosure filed by Whistleblower Aid reveals. Dr. Joan Donovan, one of the world’s leading experts on social media disinformation, says she ran into a wall of institutional resistance and eventual termination after she and her team at Harvard’s Technology and Social Change Research Project (TASC) began analyzing thousands of documents exposing Facebook’s knowledge of how the platform has caused significant public harm.

Let’s assume that the allegation is horse feathers, not to be confused with Intel’s fabulous Horse Ridge. Harvard still has to do some fancy dancing with regard to the ethics professor and expert in dishonesty who is alleged to have violated the esteemed university’s ethics guidelines and was dishonest.

If we assume that the information in Dr. Donovan’s whistleblower declaration is close enough for horse shoes, something equine can be sniffed in the atmosphere of Dr. William James’s beloved institution.

What could Facebook or the Metazuck do which would cause significant public harm? The options include providing tools to disseminate information which sparks body shaming, self-harm, and angst among young users. Are old timers possibly affected? I suppose buying interesting merchandise on Facebook Marketplace and experiencing psychological problems as a result of defriending are possibilities too.

If the allegations are proven to be accurate, what are the consequences for the two esteemed organizations? My hunch is zero. Money talks; prestige walks away to put ethics on display for another day.

Stephen E Arnold, December 5, 2023

23andMe: Those Users and Their Passwords!

December 5, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Silicon Valley and health are a match fabricated in heaven. Not long ago, I learned about the estimable management of Theranos. Now I find out that “23andMe Confirms Hackers Stole Ancestry Data on 6.9 Million Users.” If one follows the logic of some Silicon Valley outfits, the data loss is the fault of the users.

“We have the capability to provide the health data and bioinformation from our secure facility. We have designed our approach to emulate the protocols implemented by Jack Benny and his vault in his home in Beverly Hills,” says the enthusiastic marketing professional from a Silicon Valley success story. Thanks, MSFT Copilot. Not exactly Jack Benny, Ed, and the foghorn, but I have learned to live with “good enough.”

According to the peripatetic Lorenzo Franceschi-Bicchierai:

In disclosing the incident in October, 23andMe said the data breach was caused by customers reusing passwords, which allowed hackers to brute-force the victims’ accounts by using publicly known passwords released in other companies’ data breaches.

Users!
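The attack pattern here is credential stuffing: replay username and password pairs leaked from other sites until some of them open doors. The defensive counterpart is to screen passwords against known breach corpora. Below is a minimal sketch using the Pwned Passwords k-anonymity range API; only the first five hex characters of the password’s SHA-1 hash ever leave the machine, and the demo password is obviously made up.

```python
# A minimal sketch of screening a password against known breach data
# via the Pwned Passwords k-anonymity range API. Only the first five
# hex characters of the SHA-1 hash are sent over the network.
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    """Return how many times a password appears in known breach corpora."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as response:
        body = response.read().decode("utf-8")
    # The API returns lines of "HASH_SUFFIX:COUNT" for the given prefix.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    print(breach_count("password123"))  # a heavily reused password
```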

What’s more interesting is that 23andMe provided estimates of the number of customers (users) whose data somehow magically flowed from the firm into the hands of bad actors. In fact, the numbers, when added up, totaled almost seven million users, not the original estimate of 14,000 23andMe customers.

I find the leak estimate inflation interesting for three reasons:

  1. Smart people in Silicon Valley appear to struggle with simple concepts like adding and subtracting numbers. This gap in one’s education becomes notable when the discrepancy is off by millions. I think “close enough for horse shoes” is a concept which is wearing out my patience. The difference between 14,000 and almost seven million is not horse shoe scoring.
  2. The concept of “security” continues to suffer some setbacks. “Security,” one may ask?
  3. The intentional dribbling of information reflects another facet of what I call high school science club management methods. The logic in the case of 23andMe in my opinion is, “Maybe no one will notice?”

Net net: Time for some regulation, perhaps? Oh, right, it’s the users’ responsibility.

Stephen E Arnold, December 5, 2023 

Cyber Security Responsibility: Where It Belongs at Last!

December 5, 2023

This essay is the work of a dumb dinobaby. No smart software required.

I want to keep this item brief. Navigate to “CISA’s Goldstein Wants to Ditch ‘Patch Faster, Fix Faster’ Model.”

CISA means the US government’s Cybersecurity and Infrastructure Security Agency. The “Goldstein” reference points to Eric Goldstein, the executive assistant director of CISA.

The main point of the write up is that big technology companies have to be responsible for cleaning up their cyber security messes. The write up reports:

Goldstein said that CISA is calling on technology providers to “take accountability” for the security of their customers by doing things like enabling default security controls such as multi-factor authentication, making security logs available, using secure development practices and embracing memory safe languages such as Rust.
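For readers who wonder what a default control like multi-factor authentication actually computes, here is a minimal sketch of TOTP (RFC 6238), the algorithm behind most authenticator apps, using only the Python standard library. The base32 secret is a made-up demo value, not anyone’s real credential.

```python
# A minimal sketch of TOTP (RFC 6238), the algorithm behind most
# multi-factor authenticator apps. Standard library only.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """Compute the current one-time password for a shared base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // step           # 30-second time window
    message = struct.pack(">Q", counter)         # big-endian 64-bit counter
    digest = hmac.new(key, message, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

if __name__ == "__main__":
    print(totp("JBSWY3DPEHPK3PXP"))  # made-up demo secret
```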

I may be incorrect, but I picked up a signal that the priorities of some techno feudalists are not security. Perhaps these firms’ goals are maximizing profit, market share, and power over their paying customers. Security? Maybe it is easier to describe in a slide deck or a short YouTube video?

The use of a parental mode seems appropriate for a child. Will it work for techno feudalists who have created a digital mess in kitchens throughout the world? Thanks, MSFT Copilot. You must have ingested some “angry mommy” data when you were but a wee sprout.

Will this approach improve the security of mission-critical systems? Will the enjoinder make a consumer’s mobile phone more secure?

My answer? Without meaningful consequences, security is easier to talk about than deliver. Therefore, minimal change in the near future. I wish I were wrong.

Stephen E Arnold, December 5, 2023

Are There Consequences for Social Media? Well, Not Really

December 5, 2023

This essay is the work of a dumb dinobaby. No smart software required.

While parents and legal guardians are responsible for their kids’ screen time, a US federal judge ruled that social media companies must face claims that they shoulder some responsibility for rotting kids’ brains. The Verge details the ruling in the article, “Social Media Giants Must Face Child Safety Lawsuits, Judge Rules.” US District Judge Yvonne Gonzalez Rogers ruled that social media companies Snap, Alphabet, ByteDance, and Meta must proceed with a lawsuit alleging their platforms have negative mental health effects on kids. Judge Gonzalez Rogers denied the companies’ motions to dismiss the suits, which accuse the platforms of purposely being addictive.

The lawsuits were filed by 42 states and multiple school districts:

"School districts across the US have filed suit against Meta, ByteDance, Alphabet, and Snap, alleging the companies cause physical and emotional harm to children. Meanwhile, 42 states sued Meta last month over claims Facebook and Instagram “profoundly altered the psychological and social realities of a generation of young Americans.” This order addresses the individual suits and “over 140 actions” taken against the companies.”

Judge Gonzalez Rogers ruled that the First Amendment and Section 230, which says that online platforms shouldn’t be treated as the publishers of third-party content, don’t protect the platforms from liability here. The judge also explained the lawsuits deal with the platforms’ “defects,” such as the lack of a robust age verification system, poor parental controls, and a difficult account deletion process.

She did dismiss other alleged defects, including the lack of time limits on the platforms, the use of addictive algorithms, the recommendation of children’s accounts to adults, and the failure to offer a beginning and end to a feed. These are protected by Section 230.

The ruling doesn’t determine if the social media platforms are harmful or hold them liable. It only allows lawsuits to go forward in court.

Whitney Grace, December 5, 2023

Why Google Dorks Exist and Why Most Users Do Not Know Why They Are Needed

December 4, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Many people in my lectures are not familiar with the concept of “dorks”. No, not the human variety. I am referencing the concept of a “Google dork.” If you do a quick search using Yandex.com, you will get pointers to different “Google dorks.” Click on one of the links and you will find information you can use to retrieve more precise and relevant information from the Google ad-supported Web search system.

Here’s what QDORKS.com looks like:

[Screenshot: the QDORKS.com query builder]

The idea is that one plugs in search terms and uses the pull-down boxes to enter specific commands to point the ad-centric system at something more closely resembling a relevant result. Other interfaces are available; for example, the “1000 Best Google Dorks List.” You get a laundry list of tips, commands, and ideas for wrestling Googzilla to the ground, twisting its tail, and (hopefully) extracting relevant information. Hopefully. Good work.
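For anyone who has never assembled a dork by hand, here is a minimal sketch that composes one from three documented Google operators (site:, filetype:, intitle:). The domain and title phrase are placeholders, not suggested targets.

```python
# A minimal sketch of composing a "Google dork" from documented
# search operators. The domain and phrase below are placeholders.
def build_dork(site: str, filetype: str, title_phrase: str) -> str:
    """Compose a query restricted to one domain, one file format,
    and documents with a given phrase in the title."""
    return f'site:{site} filetype:{filetype} intitle:"{title_phrase}"'

if __name__ == "__main__":
    print(build_dork("example.gov", "pdf", "annual report"))
    # -> site:example.gov filetype:pdf intitle:"annual report"
```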

Most people are lousy at pinning the tail on the relevance donkey. Therefore, let someone who knows define relevance for the happy people. Thanks, MSFT Copilot. Nice animal with map pins.

Why are Google Dorks or similar guides to Google search necessary? Here are three reasons:

  1. Precision reduces the opportunities for displaying allegedly relevant advertising. Semantic relaxation allows the Google to suggest that it is using Oingo type methods to find mathematically determined relationships. The idea is that razzle dazzle makes ad blasting something like an ugly baby wrapped in translucent fabric on a foggy day look really great.
  2. When Larry Page argued with me at a search engine meeting about truncation, he displayed a preconceived notion about how search should work for those not at Google or attending a specialist conference about search. Rational? To him, yep. Logical? To his framing of the search problem, the stance makes perfect sense if one discards the notion of tense, plurals, inflections, and stupid markers like “im” as in “impractical” and “non” as in “nonsense.” Hey, Larry had the answer. Live with it.
  3. The goal at the Google is to make search as intellectually easy for the “user” as possible. The idea was to suggest what the user intended. Also, Google had the old idea that a person’s past behavior can predict that person’s behavior now. Well, predict in the sense that “good enough” will do the job for the vast majority of search-blind users who look for the short cut or the most convenient way to get information.

Why? Control, being clever, and then selling the dream of clicks for advertisers. Over the years, Google leveraged its information framing power to a position of control. I want to point out that most people, including many Googlers, cannot perceive this. When it is pointed out, those individuals refuse to believe that Google does [a] NOT index the full universe of digital data, [b] NOT want to fool around with users who prefer Boolean algebra and content curation to identify the best or most useful content, and [c] NOT fiddle around with training people to become effective searchers of online information. Obfuscation, verbal legerdemain, and the “do no evil” craziness make the railroad run the way the Cornelius Vanderbilt types ran theirs.

I read this morning (December 4, 2023) the Google blog post called “New Ways to Find Just What You Need on Search.” The main point of the write up in my opinion is:

Search will never be a solved problem; it continues to evolve and improve alongside our world and the web.

I agree, but it would be great if the known search and retrieval functions were available to users. Instead, we have a weird Google Mom approach. From the write up:

To help you more easily keep up with searches or topics you come back to a lot, or want to learn more about, we’re introducing the ability to follow exactly what you’re interested in.

Okay, user tracking, stored queries, and alerts. How does the Google know what you want? The answer is that users log in, use Google services, and enter queries which are automatically converted to search. You will have answers to questions you really care about.

There are other search functions available in the most recent version of Google’s attempts to deal with an unsolved problem:

As with all information on Search, our systems will look to show the most helpful, relevant and reliable information possible when you follow a topic.

Yep, Google is a helicopter parent. Mom will know what’s best, select it, and present it. Don’t like it? Mom will be recalcitrant, like shaping search results to meet what the probabilistic system says, “Take your medicine, you brat.” Who said, “Mother Google is a nice mom”? Definitely not me.

And Google will make search more social. Shades of Dr. Alon Halevy and the heirs of Orkut. The Google wants to bring people together. Social signals make sense to Google. Yep, content without Google ads must be conquered. Let’s hope the Google incentive plans encourage the behavior, or those valiant programmers will be bystanders to other Googlers’ promotions and accompanying money deliveries.

Net net: Finding relevant, on-point, accurate information is more difficult today than at any other point in my 50+ year work career. How does the cloud of unknowing dissipate? I have no idea. I think it has moved in on tiny Googzilla feet and sits looking over the harbor, ready to pounce on any creature that challenges the status quo.

PS. Corny Vanderbilt was an amateur compared to the Google. He did trains; Google does information.

Stephen E Arnold, December 4, 2023

The High School Science Club Got Fined for Its Management Methods

December 4, 2023

This essay is the work of a dumb dinobaby. No smart software required.

I almost missed this story: “Google Reaches $27 Million Settlement in Case That Sparked Employee Activism in Tech,” which contains information about the cost of certain management methods. The write up asserts:

Google has reached a $27 million settlement with employees who accused the tech giant of unfair labor practices, setting a record for the largest agreement of its kind, according to California state court documents that haven’t been previously reported.

The kindly administrator (a former legal eagle) explains to the intelligent teens in the high school science club something unpleasant. Their treatment of some non sci-club types will cost them. Thanks, MSFT Copilot. Who’s in charge of the OpenAI relationship now?

The article pegs the “worker activism” on Google. I don’t know if Google is fully responsible. Googzilla’s shoulders and wallet are plump enough to carry the burden in my opinion. The article explains:

In terminating the employee, Google said the person had violated the company’s data classification guidelines that prohibited staff from divulging confidential information… Along the way, the case raised issues about employee surveillance and the over-use of attorney-client privilege to avoid legal scrutiny and accountability.

Not surprisingly, the Google management took a stand against the apparently unjust and unwarranted fine. The story notes, via a quote from someone who is in the science club and familiar with its management methods:

“While we strongly believe in the legitimacy of our policies, after nearly eight years of litigation, Google decided that resolution of the matter, without any admission of wrongdoing, is in the best interest of everyone,” a company spokesperson said.

I want to point out that the write up includes links to other articles explaining how the Google is refining its management methods.

Several questions:

  • Will other companies hit by activist employees be excited to learn the outcome of Google’s brilliant legal maneuvers which triggered a fine of a mere $27 million?
  • Has Google published a manual of its management methods? If not, for what is the online advertising giant waiting?
  • With more than 170,000 (plus or minus) employees, has Google found a way to replace the unpredictable, expensive, and recalcitrant employees with its smart software? (Let’s ask Bard, shall we?)

After 25 years, the Google finds a way to establish benchmarks in managerial excellence. Oh, I wonder if the company will change its law firm lineup. I mean $27 million. Come on. Loosen the semantic noose and make more ads “relevant.”

Stephen E Arnold, December 4, 2023
