Remarkable Zoom Advice

March 1, 2021

I am either 76 or 77. Who knows? Who cares? I do participate in Zoom calls, and I found this “recommendation” absolutely life-changing. The information appears in “You SHOULD Wave at the End of Video Calls — Here’s Why.” Straightaway I marvel at the parental “should.” There’s nothing like a mom admonishment when it comes to Zoom meetings.

The write up posits:

I already know that every call here ends with a lot of waving, and the group unanimously favors waving.

The idea that a particular group is into waving appears to support the generalization that waving goodbye at the end of Zoom calls is the proper method of exiting a digital experience.

I learned:

Here’s the definitive ruling for the entire internet, from now until the end of time: waving at the end of video calls is good, and no one should feel bad for doing it. Ever.

Okay, maybe feeling bad is not the issue.

Looking stupid, inappropriate, weird, or childish may be other reasons for doubting this incredibly odd advice. Look. People exiting my Zoom meetings are not waving goodbye to friends climbing on the Titanic in April 1912.

Why wave? The explanation:

Humans aren’t machines — we’re social animals. We want to feel connected to each other, even in a work context. Suddenly hanging up feels inhuman (because it is). Waving and saying goodbye solves this problem.

Holy Cow! Humans are not machines. News flash: At least one Googler wants to become a machine, and there will be others. In fact, I know humans who are machine-like.

I hope I never see law enforcement and intelligence professionals waving at me at the end of my next lecture. I say thank you and punch “end meeting for all.”

I am confident that those testifying via video conference connections will not wave at lawyers, elected officials, or investigators. Will Facebook’s Mark Zuckerberg wave to EU officials in the forthcoming probes into the company’s business methods?

Stephen E Arnold, March 1, 2021

The Crux of the Smart Software Challenge

February 24, 2021

I read “There Is No Intelligence without Human Brains.” The essay is not about machine learning, artificial intelligence, and fancy algorithms. One of the points which I found interesting was:

But, humans can opt for long-term considerations, sacrificing to help others, moral arguments, doing unhelpful things as a deep scream for emotional help, experimenting to learn, training themselves to get good at something, beauty over success, etc., rather than just doing what is comfortable or feels nice in the short run or simply pro-survival.

However, one sentence focused my thinking on the central problem of smart software and possibly explains the odd, knee-jerk, and high-profile personnel problems in Google’s AI ethics unit. Here’s the sentence:

Poisoning may greatly hinder our flexible intelligence.

Smart software has to be trained. The software system can be hand-fed training sets crafted by fallible humans, or the software system can ingest whatever is flowing into the system. There are smart software systems which do both. One of the first commercial products to rely on training sets and “analysis on the fly” was the Autonomy system. The phrase “neurolinguistic programming” was attached by a couple of people to the Autonomy black box.
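
To make the poisoning point concrete, here is a minimal toy sketch (my own illustration, not anything from the essay, Autonomy, or Google) of how flipping a fraction of training labels drags down a simple classifier’s accuracy on clean data:

```python
# Toy illustration of training-data poisoning: flip a fraction of the
# training labels and watch accuracy on clean test data fall.
# (Hypothetical example; not drawn from the essay or any vendor system.)
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for poison_rate in (0.0, 0.1, 0.3):
    y_poisoned = y_train.copy()
    n_flip = int(poison_rate * len(y_poisoned))
    flip_idx = np.random.default_rng(0).choice(len(y_poisoned), n_flip, replace=False)
    y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]  # flip 0 <-> 1 labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    print(f"poison rate {poison_rate:.0%}: clean-test accuracy "
          f"{model.score(X_test, y_test):.3f}")
```

Even a modest fraction of bad labels shifts the model’s behavior, which is the essay’s point about poisoned inputs hindering flexible intelligence.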

What’s stirring up dust at Google may be nothing more than fear; for example:

  • Revelations by those terminated suggest that bias in smart software is a fundamental characteristic of Google’s approach to artificial intelligence; that is, the datasets themselves are sending the smart software off the rails
  • The quest for the root of the bias shines a light on the limitations of current commercial approaches to smart software; that is, vendors make outrageous claims in order to maintain a charade about capabilities which may be quite narrow and biased
  • The data gathered by the Xooglers may reveal that Google’s approach is not as well formed as the company wants competitors and others to believe; that is, marketers and MBAs outpace what the engineers can deliver.

The information by which an artificial intelligence system “learns” may be poisoning the system. Check out the Times of Israel essay. It is thought-provoking and may have revealed the source of Google’s interesting personnel management decisions.

Fear can trigger surprising actions.

Stephen E Arnold, February 23, 2021

Google: Adding Friction?

February 23, 2021

I read “Waze’s Ex-CEO Says App Could Have Grown Faster without Google.” Opinions are plentiful. However, reading about the idea of Google as an inhibitor is interesting. The write up reports:

Waze has struggled to grow within Alphabet Inc’s Google, the navigation app’s former top executive said, renewing concerns over whether it was stifled by the search giant’s $1 billion acquisition in 2013.

A counterpoint is that 140 million drivers use Waze each month. When Google paid about $1 billion for the traffic service in 2013, Waze attracted 10 million drivers.

The write up states:

But Waze usage is flat in some countries as Google Maps gets significant promotion, and Waze has lost money as it focuses on a little-used carpooling app and pursues an advertising business that barely registers within the Google empire…

Several observations about the points in the article:

  1. With litigation and other push back against Google and other large technology firms, it seems as if Google is in a defensive posture
  2. Wall Street is happy with Google’s performance, but that enjoyment may not be shared by some users and employees
  3. Google management methods may be generating revenue, but secondary effects like the Waze case may become data points worth monitoring.

Google map-related services are difficult for me to use. Some functions are baffling; others invite use of other services. Yep, friction as in slowing Waze’s growth maybe?

Stephen E Arnold, February 23, 2021

Alphabet Google: High School Science Club Management Breakthrough

February 20, 2021

The Google appears to support the concepts, decision making capabilities, and the savoir faire of my high school science club. I entered high school in 1958, and I was asked to join the Science Club. Like cool. Fat, thick glasses, and the sporty clothes my parents bought me at Robert Hall completed my look. And I fit right in. Arrogant, proud to explain that I missed the third and fourth grades because my tutor in Campinas died of snake bite (I did the passive resistance thing and refused to complete the 1950s version of distance learning via the Calvert Course), and socially unaware – yes, I fit right in. The Science Club of Woodruff High School! People sort of like me: Midwestern in spirit, arrogant, and clueless. Were we immature? Does Mr. Putin have oligarchs as friends?

With my enthusiastic support, the Woodruff High School Science Club intercepted the principal’s morning announcements. We replaced mimeograph stencils with those we enhanced. We slipped calcium carbide into chemistry experiments involving sulfuric acid. When we were taken before the high school assistant principal Bull Durham, he would intone, “Grow up.”

We learned there were no consequences. We concluded that without the Science Club, it was hasta la vista to the math team, the quick recall team, the debate team, the trophies from the annual Science Fair, and the pride in the silly people who racked up top scores on standardized tests administered to everyone in the school.

The Science Club learned a life lesson. Apologize. Look at your shoes. Evidence meekness and humility. Forget asking for permission.

I thought about how the Science Club decided. That’s an overstatement. An idea caught our attention and we acted. I stepped into the nostalgia Jacuzzi when I read “Google Fires Another AI Ethics Leader.” A déjà vu moment. The Timnit Gebru incident flickers in the thumbtypers’ news feeds. Now a new name: Margaret Mitchell, the co-lead of Google’s Ethical AI team. Allegedly she was fired, if the information in the “real” news story is accurate. The extra peachy keen Daily Mail alleged that the RIF was a result of Ms. Mitchell’s use of a script “to search for evidence of discrimination against fired black colleague.” Not exactly as nifty as my 1958 high school use of calcium carbide, but close enough for horseshoes.

Even the cast of characters in this humanoid unfriending is the same: Uber Googler Jeff Dean, who approaches problems as logically as Sawzall and BigTable. The script is a recycling of a 1930s radio drama. The management process is unchanged: Conclude and act. Wham and bam.

The subject of ethics is slippery. Todd Pheifer, a doctor of education, wrote Business Ethics: The Search for an Elusive Idea and required a couple of hundred pages to deal with a single branch of the definition of the concept. The book is a mere $900 on Amazon, but today (Saturday, February 20, 2021) it is not available. Were the buyers Googlers?

Ethics is in the title of the Axios article “Google Fires Another AI Ethics Leader,” and ethics figures in many of the downstream retellings of this action. Are these instant AI ethicist zappings the Alphabet Google equivalent of the Luxe Half-Acre Mosquito Trap with Stand? Hum buzz zap!

In my high school science club, we often deferred to Don and Bernard, the Jackson Brothers. These high school wizards had published an article about moon phases in a peer-reviewed journal when Don was a freshman and Bernard was a sophomore. (I have a great anecdote about Don’s experience in astrophysics class at the University of Illinois. Ask me nicely, and I will recount it.)

The bright lads would mumble some idea about showing the administration how stupid it was, and we were off to the races. As I recall, we rarely considered the impact of our decisions. What about ethics, wisdom, social and political awareness? Who are you kidding? Snort, snort, snort. Life lesson: No consequences for those who revere good test takers.

As it turned out, most of us matured somewhat. Most got graduate degrees. Most of us avoided super life catastrophes. Bull Durham is long dead, but I would wager he would remember our brilliance if he were around today to reminisce about the Science Club in 1958.

I am grateful for the Googley, ethical AI related personnel actions. Ah, memories.

Several questions with answers in italic:

  • How will Alphabet Google’s effort to recruit individuals who are not like the original Google “science club” fare in the wake of the Backrub burnout? Answer: Paying ever higher salaries, larger bonuses, maybe an office at home.
  • Which “real” news outfit will label the ethical terminations as a failure of high school science club management methods? Answer: None.
  • What does ethics mean? Answer: Learn about phenomenological existentialism and then revisit this question.

I miss those Science Club meetings on Tuesday afternoon from 3:30 to 4:30 pm Central time even today. But “real” news stories about Google’s ethical actions related to artificial intelligence are like a whiff of Dollar General air freshener.

Stephen E Arnold, February 22, 2021

Google: Alleged Candidate Filtering

February 18, 2021

Who knows if this story is 100 percent spot on? It does illustrate a desire to present the Google in a negative way, and it seems to make clear how simple filters can come back to bite the hands of the busy developers who add features and functions without much thought for larger implications.

The story is “Google Has Been Allowing Advertisers to Exclude Nonbinary People from Seeing Job Ads.” The main idea seems to be:

Google’s advertising system allowed employers or landlords to discriminate against nonbinary and some transgender people…

Oh, oh.

If true, the check box for “exclude these” could become a bit of a sinkhole.
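
A hypothetical sketch of the sinkhole (the field names and categories below are my invention, not Google’s actual ad system): a filter built around a binary gender field silently drops everyone the schema cannot represent.

```python
# Hypothetical sketch of how a binary "exclude" check box goes wrong.
# Names and fields are invented for illustration only.
from dataclasses import dataclass

@dataclass
class User:
    name: str
    gender: str  # "male", "female", or "unknown" (nonbinary, undisclosed, ...)

def eligible_for_ad(user: User, excluded_genders: set[str]) -> bool:
    # The naive rule: show the ad unless the user's gender is excluded.
    # Advertisers who target only "male" and "female" (i.e., exclude
    # "unknown") quietly drop everyone the schema cannot represent.
    return user.gender not in excluded_genders

users = [User("A", "male"), User("B", "female"), User("C", "unknown")]
excluded = {"unknown"}  # the check box in question
print([u.name for u in users if eligible_for_ad(u, excluded)])  # ['A', 'B']
```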

The write up points out:

It’s not clear if the advertisers meant to prevent nonbinary people or those identifying as transgender from finding out about job openings.

Interesting item if accurate.

Stephen E Arnold, February 18, 2021

Objectifying the Hiring Process: Human Judgment Must Be Shaped

February 18, 2021

The controversies about management-employee interactions are not efficient. Consider Google. Not only did the Timnit Gebru dust-up sully the pristine, cheerful surface of the Google C-suite, the brilliance of the Google explanation moved the bar for high technology management acumen. Well, at least in terms of publicity it was a winner. Oh, the Gebru incident probably caught the attention of female experts in artificial intelligence. Other high technology firms and consumers of talent from high prestige universities paid attention as well.

What’s the fix for human intermediated personnel challenges? The answer is to get the humans out of the hiring process if possible. Software and algorithms, databases of performance data, and the jargon of psycho-babble are the path forward. If an employee requires termination, the root cause is an algorithm, not a human. So sue the math. Don’t sue the wizards in the executive suite.

These ideas formed in my mind when I read “The Computers Rejecting Your Job Application.” The idea is that individuals who want a real job with health care, a retirement program, and maybe a long tenure with a stable outfit get interviewed via software. Decisions about hiring pivot on algorithms. Once the thresholds are crossed by a candidate, a human (who must take time out from a day filled with back to back Zoom meetings) will notify the applicant that he or she has a “real” job.
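
A minimal sketch of that threshold mechanism (the traits, weights, and cutoff are invented for illustration; this is not Pymetrics’ or any vendor’s actual model):

```python
# Hypothetical sketch of threshold-based candidate screening.
# Traits, weights, and cutoff are made up; no vendor's model is shown.
def screen_candidate(scores: dict[str, float],
                     weights: dict[str, float],
                     cutoff: float) -> str:
    total = sum(weights[trait] * scores.get(trait, 0.0) for trait in weights)
    # Only candidates who cross the cutoff ever reach a human recruiter.
    return "forward to human" if total >= cutoff else "auto-reject"

weights = {"risk_tolerance": 0.4, "reaction_speed": 0.6}
print(screen_candidate({"risk_tolerance": 0.7, "reaction_speed": 0.9},
                       weights, 0.75))  # forward to human
print(screen_candidate({"risk_tolerance": 0.2, "reaction_speed": 0.5},
                       weights, 0.75))  # auto-reject
```

The design point is the one the article makes: the cutoff, not a person, makes the first decision, and nobody below it is ever seen by a human.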

If something goes Gebru, the affected can point fingers at the company providing the algorithmic deciders. Damage can be contained. There’s a different throat to choke. What’s not to like?

The write up from the Beeb, a real news outfit banned in China, reports:

The questions, and your answers to them, are designed to evaluate several aspects of a jobseeker’s personality and intelligence, such as your risk tolerance and how quickly you respond to situations. Or as Pymetrics puts it, “to fairly and accurately measure cognitive and emotional attributes in only 25 minutes”.

Yes, online. Just 25 minutes. Forget those annoying interview days. Play a game. Get hired or not. Efficient. Logical.

Do online hiring and filtering systems work? The write up reminds the thumb-typing reader about Amazon’s algorithmic hiring and filtering system:

In 2018 it was widely reported to have scrapped its own system, because it showed bias against female applicants. The Reuters news agency said that Amazon’s AI system had “taught itself that male candidates were preferable” because they more often had greater tech industry experience on their resume. Amazon declined to comment at the time.

From my vantage point, it seems as if these algorithmic hiring vendors are selling their services. That’s great until one of the customers takes the outfit to court.

Progress? Absolutely.

Stephen E Arnold, February 17, 2021

Alphabet Google Spells Misunderstanding with a You

February 17, 2021

“Stadia Leadership Praised Development Studios For ‘Great Progress’ Just One Week Before Laying Them All Off” reports:

Developers at Google’s recently formed game studios were shocked February 1 when they were notified that the studios would be shut down, according to four sources with knowledge of what transpired. Just the week prior, Google Stadia vice president and general manager Phil Harrison sent an email to staff lauding the “great progress” its studios had made so far. Mass layoffs were announced a few days later, part of an apparent pattern of Stadia leadership not being honest and upfront with the company’s developers, many of which had upended their lives and careers to join the team.

The Stadia Xooglers-to-be tried to get more information from Alphabet Google. According to the article:

One source described the Q&A as an ultimately unsuccessful attempt at extracting some kind of accountability from Stadia management. “I think people really just wanted the truth of what happened,” said the source. “They just want an explanation from leadership. If you started this studio and hired a hundred or so of these people, no one starts that just for it to go away in a year or so, right? You can’t make a game in that amount of time…We had multi-year reassurance, and now we don’t.” The source added that the Q&A “wasn’t pretty.”

The management finesse is notable. If the information in the article is accurate, the consistency of Alphabet Google’s management methods is evident. I have labeled the approach “the high school science club management method” or HSSCMM. With the challenges many business schools face, the technique is not explored with the rigor of other approaches. Nevertheless, several characteristics of this Stadia motif are worth noting:

  • Misinformation
  • Awkward communications
  • Insensitivity to the needs of Googlers on the express bus to Xooglerdom
  • A certain blindness toward strategic and tactical planning.

Online games are bigger than many other forms of entertainment. I recall a presentation I heard about 15 years ago reporting that in the mid 2000s Google probed Yahoo about online games.

Taking the article at face value, it appears that Alphabet Google spells misunderstanding with a you. There is no letter “we” in Alphabet, I conclude. High school science club members struggle with the pronoun and spelling thing.

What’s the outlook for Alphabet Google in the burgeoning online game sector? Options include:

  1. Acquiring a company and integrating it into the Google
  2. Cleaning the high school and leaving the Science Club leadership intact
  3. Creating a duplicate service with activity centered in another country which is a variation on Google’s approach to messaging
  4. Going into a holding pattern and making a fresh start once the news cycle forgets that Alphabet Google failed on the well publicized game initiative.
  5. Teaming with Microsoft to create the bestest online game service ever.

Stephen E Arnold, February 17, 2021

Data Security: Clubhouse Security and Data Integrity Excitement?

February 15, 2021

Here in rural Kentucky “clubhouse” means a lower cost shack where some interesting characters gather. There are many “clubs” in rural Kentucky, and not many of them are into the digital flow of Silicon Valley. Some of those “members” do love the tweeter and similar real time, real “news” systems.

Imagine my surprise when I read Stanford Internet Observatory’s report from its Cyber Policy Center “Clubhouse in China: Is the Data Safe?” I thought that the estimable Stanford hired experts who knew that “data” is plural. Thus the highly intellectual SIPCPC would have written the headline “Clubhouse in China: Are the Data Safe?” (Even some of the members of the Harrod’s Creek moonshine club know that subject-verb agreement is preferred, even for graduates of the local skill high school.)

Let’s overlook the grammar and consider the “real” information in the write up. The write up has six authors. That’s quite a team.

The SIPCPC determined that Clubhouse uses software and services from a company based in Shanghai. The question is, “Does the Chinese government have access to the data flowing in the Clubhouse super select and awfully elite ‘conversations’?”

The answer, it turns out, is “Huh. What?”

Clubhouse was banned by the Chinese government. The SIPCPC (I almost typed CCP but caught myself) and the response from the Clubhouse dance around the issue. There are assurances that Clubhouse is going to be stronger.

The only problem is that the SIPCPC and the Clubhouse write up skirt such topics as:

  • Implications of the SolarWinds misstep, which operated for months prior to detection; there are zero indicators that the breach and its malware have been put in the barn
  • Intercept technology within data centers in many countries makes it possible to capture information (bulk and targeted)
  • The decision to rely on Agora raises interesting implications about the judgment of the Clubhouse management team.

Net net: An interesting write up which casts a revealing light on the SIPCPC findings and the super zippy Clubhouse. If one cannot get subject-verb agreement correct, what other issues have been ignored?

Stephen E Arnold, February 15, 2021

Managing Engineers: Make High School Science Club Management Methods More High School-Like?

February 4, 2021

I read an interesting and thoughtful essay in Okay HQ: “Engineering Productivity Can Be Measured – Just Not How You’d Expect.” The “you” seems to be me. That’s okay. As a student of the brilliant HSSCMM encapsulated in decisions related to handling staff, I am fascinated by innovations.

The write up points out:

Since the advent of the software industry, most engineering teams have seen productivity as a black box. Only recently have people even begun to build internal tools that optimize performance. Unfortunately, most of these tools measure the wrong metrics and are shockingly similar across companies.

The idea is that MBA-like measures are off the mark.

How does the HSSCMM get back on track? The write up states:

Productivity in engineering therefore naturally increases when you remove the blockers getting in the way of your team.

The idea of a “blocker” is a way to encapsulate the ineffective, inefficient, and clumsy management tactics touted by Peter Drucker and other management experts.

What does a member of the science club perceive as a blocker?

  • Too many interruptions
  • Slow code reviews
  • Lousy development tools
  • Too much context switching (seems like a variant of interruptions, doesn’t it?)
  • Getting pinged to do work outside of business hours (yep, another variation of interrupting a science club member).
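
As a toy sketch of what measuring blockers (rather than, say, lines of code) might look like, with categories lifted from the list above and wholly made-up data:

```python
# Toy tally of "blockers" as a productivity signal instead of lines of
# code or commit counts. Categories and data are invented; this is not
# the tool described in the article.
from collections import Counter

blocker_log = [
    "interruption", "slow_code_review", "bad_tooling",
    "context_switch", "after_hours_ping", "slow_code_review",
]

def blocker_report(log: list[str]) -> None:
    # The managerial move the article suggests: attack the biggest
    # bucket first rather than grading individual output.
    for blocker, n in Counter(log).most_common():
        print(f"{blocker}: {n}")

blocker_report(blocker_log)
```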

Let’s summarize my HSSCMM principles. The engineers — at least the ones in the elite of the science club — want to be managed by these precepts:

  • Don’t interrupt the productive engineers/professionals
  • Don’t give the productive engineers / professionals tools they don’t find useful, helpful, good, or up to their standards
  • Provide feedback, right now, you inefficient and unproductive human
  • Don’t annoy productive engineers / professionals outside of “work” hours.

These seem perfectly reasonable if somewhat redundant. However, these productive engineers / professionals have created the systems, methods, apps, and conventions that destroy attention, yield lousy software and tools, and nourish the mindset which has delivered the joys of Twitter, Facebook, Robinhood, et al to the world.

Got that, Druckerites? If not, our innovations in artificial intelligence will predict your behaviors and our neuromorphic systems will make you follow the precepts of the science club.

That sound about right?

Stephen E Arnold, February 4, 2021

Google Management: What Happens When Science Club Management Methods Emulate Secret Societies?

January 27, 2021

A secret society is one with special handshakes, initiation routines, and a code of conduct which prohibits certain behavior. Sometimes even a secret society has a trusted, respected member whose IQ and personal characteristics are what might be called an “issue.” My hunch is that the write up “Google Hired a Lawyer to Probe Bullying Claims about DeepMind Cofounder Mustafa Suleyman and Shifted His Role” may be a good example — if the real news is indeed accurate — of mostly adult judgment. [The linked document resides behind a paywall … because money.]

As I understand the information in this write up, uber wizard Mustafa Suleyman allegedly engaged in behavior the Googlers found out of bounds. Note, however, that the alleged perpetrator was not terminated. Experts in smart software are tough to locate and hire. Mr. Suleyman was given a lateral arabesque. The maneuver, first defined by Laurence J. Peter, resolves a management issue by shifting the person to a comparable level of the hierarchy to perform different management or job functions. A poor manager could be encouraged to accept a position as chief quality officer in an organization’s new office in Alert, in the Qikiqtaaluk Region, Nunavut, Canada. (Bring a Google sweater.)

DeepMind is known for crushing a human Go player, who may now be working as a delivery person for Fanji Braised Meat in Preserved Sauce on Zhubashi in Xian, China. The company developed software able to teach itself the game of checkers. Allegedly DeepMind performed magic with protein folding calculations, but it seems to have come up short on solving death and providing artificial general intelligence for a user of Google Calendar.

These notable technical accomplishments may have produced a sinkhole brimming with red ink. The 2019 Google financials indicate that about $1 billion in debt has been written off. Revenue appears to be a bit of a challenge for the Googlers working on technology that will generate sustainable revenue for Google’s next 20 years.

And what about those management methods channeling how high school science clubs operated in the 1950s:

  1. Generate fog to make it difficult to discern exactly what happened and why Google’s in-house personnel professionals could not gather the information about the alleged bullying. Why a lawyer? Why not a private investigative group? There are some darned good ones in merrie olde Angleland.
  2. Mixed signals are emitted. If something actionable occurred, why not let the aggrieved go through appropriate legal and employee oversight channels to resolve the matter? Answer: Let someone else have the responsibility. The science club does science, not human-like stuff.
  3. The dodge-deflect-apologize pattern is evident to me in rural Kentucky. How long will this adolescent tactic remain functional?

To sum up, the science club did something. The what is fuzzy. The why is fuzzy. Keep folks guessing maybe? What will those bright sprouts in the high school science club do next? Put a cow on top of Big Ben?

Stephen E Arnold, January 27, 2021
