Google Gets Kicked Out of Wizard Class: Gebru Gibberish to Follow
March 5, 2021
I read “AI Ethics Research Conference Suspends Google Sponsorship.” Imagine: suspended by a science club type organization. Assuming the “real” and ad-littered story is accurate, here’s the scoop:
The ACM Conference for Fairness, Accountability, and Transparency (FAccT) has decided to suspend its sponsorship relationship with Google, conference sponsorship co-chair and Boise State University assistant professor Michael Ekstrand confirmed today. The organizers of the AI ethics research conference came to this decision a little over a week after Google fired Ethical AI lead Margaret Mitchell and three months after the firing of Ethical AI co-lead Timnit Gebru. Google has subsequently reorganized about 100 engineers across 10 teams, including placing Ethical AI under the leadership of Google VP Marian Croak.
The Association for Computing Machinery, no less. How many Googlers and Xooglers are in this ACM entity? How many Googler and Xoogler papers has the ACM accepted? Now suspended. Yikes, just a high school punishment for an outfit infused with the precepts of high school science club management and behavior.
What’s interesting is the injection of the notion of “ethical.” The world’s largest educational and scientific computing organization is not into talking, understanding the Google point of view, or finding common ground.
Disruptors, losers, and non-fitting wizards and wizardettes are not appropriate for the ethics subgroup of ACM. Oh, is that ethical? Good question.
But ACM knows who writes checks. The ad-besotted article states:
Putting Google sponsorship on hold doesn’t mean the end of sponsorship from Big Tech companies, or even Google itself. DeepMind, another sponsor of the FAccT conference that incurred an AI ethics controversy in January, is also a Google company. Since its founding in 2018, FAccT has sought funding from Big Tech sponsors like Google and Microsoft, along with the Ford Foundation and the MacArthur Foundation. An analysis released last year that compares Big Tech funding of AI ethics research to Big Tobacco’s history of funding health research found that nearly 60% of researchers at four prominent universities have taken money from major tech companies.
Should I raise another question about the ethics of this wallet-sensitive posture? Nah. Money talks.
I find the blip on the ethical radar screen quite amusing. One learns each day what really matters in the world of computers and smart software. That’s a plus.
I am waiting for Google’s Gebru-gibberish to explain the situation. I am all ears.
Stephen E Arnold, March 5, 2021
Gebru-Gibberish: A Promise, Consultants, and Surgical Management Action
March 1, 2021
I read “Google Reportedly Promises Change to Research Team after High Profile Firings.” The article explains that after female artificial intelligence researchers found their futures elsewhere, Google (the mom-and-pop neighborhood online ad agency):
will change its research review procedures this year.
Okay, 10 months.
The write up points out that the action is
an apparent bid to restore employee confidence in the wake of two high-profile firings of prominent women from the [AI ethics] division.
Yep, words. I found this passage redolent of Gebru-gibberish; that is, wordage which explains how smart software ethics became a bit of a problem for the estimable Google outfit:
By the end of the second quarter, the approvals process for research papers will be more smooth and consistent, division Chief Operating Officer Maggie Johnson reportedly told employees in the meeting. Research teams will have access to a questionnaire that allows them to assess their projects for risk and navigate review, and Johnson predicted that a majority of papers would not require additional vetting by Google. Johnson also said the division is bringing in a third-party firm to help it conduct a racial-equity impact assessment, Reuters reports, and she expects the assessment’s recommendations “to be pretty hard.”
Okay. A questionnaire. A third party firm. Pretty hard.
What’s this mean?
The Ars Technica write up does not translate. However, from my vantage point in rural Kentucky, I understand the Gebru-gibberish to mean:
- Talk about ethical smart software and the GOOG reacts in a manner informed by high school science club principles
- Female AI experts are perceived as soft targets but that may be a misunderstanding in the synapses of the Google
- The employee issues at Google are overshadowing other Google challenges; for example, the steady rise of Amazon product search, the legal storm clouds, and struggles with the relevance of ads displayed in response to user queries or viewed YouTube videos.
Do I expect more Gebru-gibberish?
Will Microsoft continue to insist that its SAML is the most wonderful business process in the whole wide world?
Stephen E Arnold, March 1, 2021
Remarkable Zoom Advice
March 1, 2021
I am either 76 or 77. Who knows? Who cares? I do participate in Zoom calls, and I found this “recommendation” absolutely life changing. The information appears in “You SHOULD Wave at the End of Video Calls — Here’s Why.” Straight-away I marvel at the parental “should.” There’s nothing like a mom admonishment when it comes to Zoom meetings.
The write up posits:
I already know that every call here ends with a lot of waving, and the group unanimously favors waving.
The idea that a particular group is into waving appears to support the generalization that waving goodbye at the end of Zoom calls is the proper method of exiting a digital experience.
I learned:
Here’s the definitive ruling for the entire internet, from now until the end of time: waving at the end of video calls is good, and no one should feel bad for doing it. Ever.
Okay, maybe feeling bad is not the issue.
Looking stupid, inappropriate, weird, or childish may be other reasons for doubting this incredibly odd advice. Look. People exiting my Zoom meetings are not waving goodbye to friends boarding the Titanic in April 1912.
Why wave? The explanation:
Humans aren’t machines — we’re social animals. We want to feel connected to each other, even in a work context. Suddenly hanging up feels inhuman (because it is). Waving and saying goodbye solves this problem.
Holy Cow! Humans are not machines. News flash: At least one Googler wants to become a machine, and there will be others. In fact, I know humans who are machine-like.
I hope I never see attendees waving at me at the end of my next lecture for law enforcement and intelligence professionals. I say thank you and punch “end meeting for all.”
I am confident that those testifying via video conference connections will not wave at lawyers, elected officials, or investigators. Will Facebook’s Mark Zuckerberg wave to EU officials in the forthcoming probes into the company’s business methods?
Stephen E Arnold, March 1, 2021
The Crux of the Smart Software Challenge
February 24, 2021
I read “There Is No Intelligence without Human Brains.” The essay is not about machine learning, artificial intelligence, and fancy algorithms. One of the points which I found interesting was:
But, humans can opt for long-term considerations, sacrificing to help others, moral arguments, doing unhelpful things as a deep scream for emotional help, experimenting to learn, training themselves to get good at something, beauty over success, etc., rather than just doing what is comfortable or feels nice in the short run or simply pro-survival.
However, one sentence focused my thinking on the central problem of smart software and possibly explains the odd, knee jerk, and high profile personnel problems in Google’s AI ethics unit. Here’s the sentence:
Poisoning may greatly hinder our flexible intelligence.
Smart software has to be trained. The software system can be hand-fed training sets crafted by fallible humans, or the software system can ingest whatever is flowing into the system. There are smart software systems which do both. One of the first commercial products to rely on training sets and “analysis on the fly” was the Autonomy system. The phrase “neurolinguistic programming” was attached by a couple of people to the Autonomy black box.
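To make the poisoning idea concrete, here is a minimal sketch in Python using the open source scikit-learn library. The data and labels are invented for illustration; this is not Autonomy’s or Google’s actual machinery:

```python
# Minimal sketch of training-set "poisoning": invented data, not any
# vendor's actual system. Requires scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# A deliberately skewed corpus: every "nurse" sentence is labeled one
# way, every "engineer" sentence the other.
train_texts = [
    "the nurse prepared the ward",
    "the nurse charted the vitals",
    "the engineer shipped the build",
    "the engineer fixed the bug",
]
train_labels = ["female", "female", "male", "male"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(train_texts)
model = MultinomialNB().fit(X, train_labels)

# The model has learned the correlation, not reality: a new sentence
# is classified on the occupation word alone.
test = vectorizer.transform(["the engineer reviewed the chart"])
print(model.predict(test))  # -> ['male'], straight from the skewed data
```

Swap billions of Web pages for the four toy sentences and the mechanism is the same: the model learns whatever correlations the data feed it, warts and all.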
What’s stirring up dust at Google may be nothing more than fear; for example:
- Revelations by those terminated show that the bias in smart software is a fundamental characteristic of Google’s approach to artificial intelligence; that is, the datasets themselves are sending smart software off the rails
- The quest for the root of the bias shines a light on the limitations of current commercial approaches to smart software; that is, vendors make outrageous claims in order to maintain a charade about software’s capabilities, which may be quite narrow and biased
- The data gathered by the Xooglers may reveal that Google’s approach is not as well formed as the company wants competitors and others to believe; that is, marketers and MBAs outpace what the engineers can deliver.
The information by which an artificial intelligence system “learns” may be poisoning the system. Check out the Times of Israel essay. It is thought provoking and may have revealed the source of Google’s interesting personnel management decisions.
Fear can trigger surprising actions.
Stephen E Arnold, February 24, 2021
Google: Adding Friction?
February 23, 2021
I read “Waze’s Ex-CEO Says App Could Have Grown Faster without Google.” Opinions are plentiful. However, reading about the idea of Google as an inhibitor is interesting. The write up reports:
Waze has struggled to grow within Alphabet Inc’s Google, the navigation app’s former top executive said, renewing concerns over whether it was stifled by the search giant’s $1 billion acquisition in 2013.
A counterpoint is that 140 million drivers use Waze each month. When Google paid about $1 billion for the traffic service in 2013, Waze attracted 10 million drivers.
The write up states:
But Waze usage is flat in some countries as Google Maps gets significant promotion, and Waze has lost money as it focuses on a little-used carpooling app and pursues an advertising business that barely registers within the Google empire…
Several observations about the points in the article:
- With litigation and other pushback against Google and other large technology firms, it seems as if Google is in a defensive posture
- Wall Street is happy with Google’s performance, but that enjoyment may not be shared by some users and employees
- Google management methods may be generating revenue but secondary effects like the Waze case may become data points worth monitoring.
Google map-related services are difficult for me to use. Some functions are baffling; others invite use of other services. Yep, friction, as in slowing Waze’s growth, maybe?
Stephen E Arnold, February 23, 2021
Alphabet Google: High School Science Club Management Breakthrough
February 20, 2021
The Google appears to support the concepts, decision-making capabilities, and savoir faire of my high school science club. I entered high school in 1958, and I was asked to join the Science Club. Like cool. Fat, thick glasses, and the sporty clothes my parents bought me at Robert Hall completed my look. And I fit right in. Arrogant, proud to explain that I missed the third and fourth grades because my tutor in Campinas died of snake bite, given to passive resistance (I refused to complete the 1950s version of distance learning via the Calvert Course), and socially unaware – yes, I fit right in. The Science Club of Woodruff High School! People sort of like me: Midwestern in spirit, arrogant, and clueless. Were we immature? Does Mr. Putin have oligarchs as friends?
With my enthusiastic support, the Woodruff High School Science Club intercepted the principal’s morning announcements. We replaced mimeograph stencils with those we enhanced. We slipped calcium carbide into chemistry experiments involving sulfuric acid. When we were taken before the high school assistant principal Bull Durham, he would intone, “Grow up.”
We learned there were no consequences. We concluded that without the Science Club, it was hasta la vista to the math team, the quick recall team, the debate team, the trophies from the annual Science Fair, and the pride in the silly people who racked up top scores on standardized tests administered to everyone in the school.
The Science Club learned a life lesson. Apologize. Look at your shoes. Evidence meekness and humility. Forget asking for permission.
I thought about how the Science Club made decisions. That’s an overstatement. An idea caught our attention, and we acted. I stepped into the nostalgia Jacuzzi when I read “Google Fires Another AI Ethics Leader.” A déjà vu moment. The Timnit Gebru incident flickers in the thumbtypers’ news feeds. Now a new name: Margaret Mitchell, the co-lead of Google’s Ethical AI team. Allegedly she was fired, if the information in the “real” news story is accurate. The extra peachy keen Daily Mail alleged that the RIF was a result of Ms. Mitchell’s use of a script “to search for evidence of discrimination against fired black colleague.” Not exactly as nifty as my 1958 high school use of calcium carbide, but close enough for horseshoes.
Even the cast of characters in this humanoid unfriending is the same: Uber Googler Jeff Dean, who approaches problems with Sawzall and BigTable logic. The script is a recycling of a 1930s radio drama. The management process is unchanged: Conclude and act. Wham and bam.
The subject of ethics is slippery. Todd Pheifer, a doctor of education, wrote Business Ethics: The Search for an Elusive Idea and required a couple of hundred pages to deal with a single branch of the definition of the concept. The book is a mere $900 on Amazon, but today (Saturday, February 20, 2021), it is not available. Were the buyers Googlers?
Ethics is in the title of the Axios article “Google Fires Another AI Ethics Leader,” and ethics figures in many of the downstream retellings of this action. Are these instant AI ethicist zappings the Alphabet Google equivalent of the Luxe Half-Acre Mosquito Trap with Stand? Hum buzz zap!
In my high school science club, we often deferred to Don and Bernard or the Jackson Brothers. These high school wizards had published an article about moon phases in a peer-reviewed journal when Don was a freshman and Bernard was a sophomore. (I have a great anecdote about Don’s experience in astrophysics class at the University of Illinois. Ask me nicely, and I will recount it.)
The bright lads would mumble some idea about showing the administration how stupid it was, and we were off to the races. As I recall, we rarely considered the impact of our decisions. What about ethics, wisdom, social and political awareness? Who are you kidding? Snort, snort, snort. Life lesson: No consequences for those who revere good test takers.
As it turned out, most of us matured somewhat. Most got graduate degrees. Most of us avoided super life catastrophes. Bull Durham is long dead, but I would wager he would remember our brilliance if he were around today to reminisce about the Science Club in 1958.
I am grateful for the Googley, ethical AI-related personnel actions. Ah, memories.
Several questions with answers in italic:
- How will Alphabet Google recruit individuals who are not like the original Google “science club” in the wake of the Backrub burnout? Answer: By paying ever higher salaries, larger bonuses, maybe an office at home.
- Which “real” news outfit will label the ethical terminations as a failure of high school science club management methods? Answer: None.
- What does ethics mean? Answer: Learn about phenomenological existentialism and then revisit this question.
I miss those Science Club meetings on Tuesday afternoons from 3:30 to 4:30 pm Central time even today. But “real” news stories about Google’s ethical actions related to artificial intelligence are like a whiff of Dollar General air freshener.
Stephen E Arnold, February 20, 2021
Google: Alleged Candidate Filtering
February 18, 2021
Who knows if this story is 100 percent spot on. It does illustrate a desire to present the Google in a negative way, and it seems to make clear how simple filters can come back to bite the hands of the busy developers who add features and functions without much thought for larger implications.
The story is “Google Has Been Allowing Advertisers to Exclude Nonbinary People from Seeing Job Ads.” The main idea seems to be:
Google’s advertising system allowed employers or landlords to discriminate against nonbinary and some transgender people…
Oh, oh.
If true, the check box for “exclude these” could become a bit of a sinkhole.
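For illustration only, here is a minimal sketch of how an “exclude these” check box might translate into code. The field names and values are hypothetical, not Google’s actual ad platform:

```python
# Minimal sketch of a naive "exclude these" audience filter. Field
# names and values are hypothetical, not Google's actual ad system.
from dataclasses import dataclass

@dataclass
class AdTarget:
    user_id: str
    gender: str  # e.g., "male", "female", "nonbinary", "unknown"

def eligible_audience(users, excluded_genders):
    # The filter just filters; it never asks whether the ad is a job
    # or housing listing, where excluding protected groups can be
    # illegal.
    return [u for u in users if u.gender not in excluded_genders]

users = [
    AdTarget("u1", "male"),
    AdTarget("u2", "female"),
    AdTarget("u3", "nonbinary"),
]
# An advertiser ticks the "exclude" box for everyone but male/female:
print(eligible_audience(users, {"nonbinary", "unknown"}))
# -> only u1 and u2 ever see the job ad
```

The filter just filters. That is the sinkhole.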
The write up points out:
It’s not clear if the advertisers meant to prevent nonbinary people or those identifying as transgender from finding out about job openings.
Interesting item if accurate.
Stephen E Arnold, February 18, 2021
Objectifying the Hiring Process: Human Judgment Must Be Shaped
February 18, 2021
The controversies about management-employee interactions are not efficient. Consider Google. Not only did the Timnit Gebru dust-up sully the pristine, cheerful surface of the Google C-suite, the brilliance of the Google explanation moved the bar for high technology management acumen. Well, at least in terms of publicity it was a winner. Oh, the Gebru incident probably caught the attention of female experts in artificial intelligence. Other high technology consumers of talent from high prestige universities paid attention as well.
What’s the fix for human intermediated personnel challenges? The answer is to get the humans out of the hiring process if possible. Software and algorithms, databases of performance data, and the jargon of psycho-babble are the path forward. If an employee requires termination, the root cause is an algorithm, not a human. So sue the math. Don’t sue the wizards in the executive suite.
These ideas formed in my mind when I read “The Computers Rejecting Your Job Application.” The idea is that individuals who want a real job with health care, a retirement program, and maybe a long tenure with a stable outfit get interviewed via software. Decisions about hiring pivot on algorithms. Once the thresholds are crossed by a candidate, a human (who must take time out from a day filled with back to back Zoom meetings) will notify the applicant that he or she has a “real” job.
If something goes Gebru, the affected can point fingers at the company providing the algorithmic deciders. Damage can be contained. There’s a different throat to choke. What’s not to like?
The write up from the Beeb, a real news outfit banned in China, reports:
The questions, and your answers to them, are designed to evaluate several aspects of a jobseeker’s personality and intelligence, such as your risk tolerance and how quickly you respond to situations. Or as Pymetrics puts it, “to fairly and accurately measure cognitive and emotional attributes in only 25 minutes”.
Yes, online. Just 25 minutes. Forget those annoying interview days. Play a game. Get hired or not. Efficient. Logical.
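Here is a minimal sketch of that kind of threshold-based screening. The traits, weights, and cutoff are invented; this is not Pymetrics’ (or anyone’s) actual scoring method:

```python
# Minimal sketch of threshold-based candidate screening. The traits,
# weights, and cutoff are invented for illustration; this is not
# Pymetrics' (or anyone's) actual scoring method.

CUTOFF = 0.7
WEIGHTS = {"risk_tolerance": 0.4, "response_speed": 0.6}

def screen(candidate_scores: dict) -> bool:
    """Return True only if the weighted trait score crosses the cutoff."""
    total = sum(weight * candidate_scores.get(trait, 0.0)
                for trait, weight in WEIGHTS.items())
    return total >= CUTOFF

# A human recruiter sees only the candidates this function lets through.
print(screen({"risk_tolerance": 0.9, "response_speed": 0.8}))  # True (0.84)
print(screen({"risk_tolerance": 0.3, "response_speed": 0.5}))  # False (0.42)
```

A human recruiter sees only the candidates the function lets through, which is the efficiency and, perhaps, the liability.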
Do online hiring and filtering systems work? The write up reminds the thumb-typing reader about Amazon’s algorithmic hiring and filtering system:
In 2018 it was widely reported to have scrapped its own system, because it showed bias against female applicants. The Reuters news agency said that Amazon’s AI system had “taught itself that male candidates were preferable” because they more often had greater tech industry experience on their resume. Amazon declined to comment at the time.
From my vantage point, it seems as if these algorithmic hiring vendors are selling their services. That’s great until one of the customers takes the outfit to court.
Progress? Absolutely.
Stephen E Arnold, February 18, 2021
Alphabet Google Spells Misunderstanding with a You
February 17, 2021
Developers at Google’s recently formed game studios were shocked February 1 when they were notified that the studios would be shut down, according to four sources with knowledge of what transpired. Just the week prior, Google Stadia vice president and general manager Phil Harrison sent an email to staff lauding the “great progress” its studios had made so far. Mass layoffs were announced a few days later, part of an apparent pattern of Stadia leadership not being honest and upfront with the company’s developers, many of whom had upended their lives and careers to join the team.
The Stadia Xooglers-to-be tried to get more information from Alphabet Google. According to the article:
One source described the Q&A as an ultimately unsuccessful attempt at extracting some kind of accountability from Stadia management. “I think people really just wanted the truth of what happened,” said the source. “They just want an explanation from leadership. If you started this studio and hired a hundred or so of these people, no one starts that just for it to go away in a year or so, right? You can’t make a game in that amount of time…We had multi-year reassurance, and now we don’t.” The source added that the Q&A “wasn’t pretty.”
The management finesse is notable. If the information in the article is accurate, the consistency of Alphabet Google’s management methods is evident. I have labeled the approach “the high school science club management method” or HSSCMM. With the challenges many business schools face, the technique is not explored with the rigor of other approaches. Nevertheless, several characteristics of this Stadia motif are worth noting:
- Misinformation
- Awkward communications
- Insensitivity to the needs of Googlers on the express bus to Xooglerdom
- A certain blindness toward strategic and tactical planning.
Online games are bigger than many other forms of entertainment. If I recall the presentation I heard 15 years ago, Google probed Yahoo about online games in the mid 2000s.
Taking the article at face value, it appears that Alphabet Google spells misunderstanding with a you. There is no letter “we” in Alphabet, I conclude. High school science club members struggle with the pronoun and spelling thing.
What’s the outlook for Alphabet Google in the burgeoning online game sector? Options include:
- Acquiring a company and integrating it into the Google
- Cleaning house at the high school and leaving the Science Club leadership intact
- Creating a duplicate service with activity centered in another country which is a variation on Google’s approach to messaging
- Going into a holding pattern and making a fresh start once the news cycle forgets that Alphabet Google failed on the well publicized game initiative.
- Teaming with Microsoft to create the bestest online game service ever.
Stephen E Arnold, February 17, 2021
Data Security: Clubhouse Security and Data Integrity Excitement?
February 15, 2021
Here in rural Kentucky “clubhouse” means a lower cost shack where some interesting characters gather. There are many “clubs” in rural Kentucky, and not many of them are into the digital flow of Silicon Valley. Some of those “members” do love the tweeter and similar real time, real “news” systems.
Imagine my surprise when I read the Stanford Internet Observatory’s report from its Cyber Policy Center, “Clubhouse in China: Is the Data Safe?” I thought that the estimable Stanford hired experts who knew that “data” is plural. Thus the highly intellectual SIPCPC would have written the headline “Clubhouse in China: Are the Data Safe?” (Even some of the members of the Harrod’s Creek moonshine club know that subject-verb agreement is preferred, even for graduates of the local skill high school.)
Let’s overlook the grammar and consider the “real” information in the write up. The write up has six authors. That’s quite a team.
The SIPCPC determined that Clubhouse uses software and services from a company based in Shanghai. The question is, “Does the Chinese government have access to the data flowing in the Clubhouse super select and awfully elite ‘conversations’?”
The answer, it turns out, is, “Huh. What?”
Clubhouse was banned by the Chinese government. The SIPCPC report (I almost typed CCP but caught myself) and the response from the Clubhouse dance around the issue. There are assurances that Clubhouse is going to be stronger.
The only problem is that the SIPCPC and the Clubhouse write up skirt such topics as:
- Implications of the SolarWinds misstep, which operated for months prior to detection; there are zero indicators reporting that the breach and its malware have been put in the barn
- Intercept technology within data centers in many countries makes it possible to capture information (bulk and targeted)
- The decision to rely on Agora raises interesting implications about the judgment of the Clubhouse management team.
Net net: An interesting write up which casts a revealing light on the SIPCPC findings and the super zippy Clubhouse. If one cannot get subject-verb agreement correct, what other issues have been ignored?
Stephen E Arnold, February 15, 2021