The Google: Disrupting Education in the Covid Era

March 15, 2021

I thought the Covid thing would disrupt education. It did, and Google’s video conferencing system failed to seize the opportunity. Even poor, confused Microsoft put some effort into Teams. Sure, Teams is not the most secure or easy-to-use video conferencing service, but it has more features than Google has chat apps and ad options. Google also watched the Middle Kingdom’s favorite video service “zoom” right into a great big lead. Arguably, Google’s video conferencing tool should have hooked into the Chromebook, which is in the hands of some students. But what’s happened? Zoom, zoom, zoom.

I read this crisp headline: “Inside Google’s Plan to Disrupt the College Degree (Exclusive). Get a First Look at Google’s New Certificate Programs and a New Feature of Google Search Designed to Help Job Seekers Everywhere.”

Wow. The write up is an enthusiastic extension of Google Gebru-ish. Here’s why:

  1. Two candidates. One is a PhD from Princeton with a degree in computer science. The other is a minority certificate graduate. Both compete for the same job. Which candidate gets the job?
  2. One candidate, either Timnit Gebru or Margaret Mitchell. Both complete a Google certification program. Will these individuals get a fair shake and maybe get hired?
  3. Many female candidates from India. Some are funded by Google’s grant to improve opportunities for Indian females. How many will get Google jobs? [a] 80 to 99 percent, [b] 60 to 79 percent, [c] fewer than 60 percent? (I am assuming this grant and certificate thing are more than a tax deduction or hand waving.)

High school science club management decisions are fascinating to me.

Got your answers? I have mine.

For the PhD versus the certificate holder, the answer is it depends. A PhD with non-Googley notions about ethical AI is likely to be driving an Uber. The certificate holder with the right mental orientation gets to play Foosball and do Googley things.

For the Gebru–Mitchell question, my answer is neither. Female, non-Googley, and already Xooglers. Find your future elsewhere is what I intuit.

And the females in India. Hard to say. The country is far away. The $20 million or so is too little. The cultural friction within the still-existing castes is too strong. Maybe a couple is my guess.

In short, Google can try to disrupt education. But Covid has disrupted education. Another outfit has zoomed into chinks in the Google carapace. So marketing it is. It may work. Google is indeed Google.

Stephen E Arnold, March 15, 2021

Amazon and Personnel Wizardry?

March 11, 2021

Amazon likes to say it successfully promotes diversity and inclusion in its company, and some of the numbers it touts do represent a measure of success. However, there appears to be a lot of work left to do and not enough will to do it from the powerful “S Team.” Recode discusses “Bias, Disrespect and Demotions: Black Employees Say Amazon Has a Race Problem.” The extensive article begins with the story of former employee Chanin Kelly-Rae, a former global manager of diversity for AWS. She began the position with high hopes, but quit in dismay 10 months later. Reporter Meron Menghistab writes:

“Kelly-Rae, who is Black, is one of more than a dozen former and current Amazon corporate employees — 10 of whom are Black — who told Recode in interviews over the past few months that they felt the company has failed to create a corporate-wide environment where all Black employees feel welcomed and respected. Instead, they told Recode that, in their experience, Black employees at the company often face both direct and insidious bias that harms their careers and personal lives. All of the current and former employees, other than Kelly-Rae, spoke on condition of anonymity either because of the terms of their employment with Amazon or because they fear retribution from Amazon for speaking out about their experiences. Current and former Amazon diversity and inclusion professionals — employees whose work focuses on helping Amazon create and maintain an equitable workplace and products — told Recode that internal data shows that Amazon’s review and promotion systems have created an unlevel playing field. Black employees receive ‘least effective’ marks more often than all other colleagues and are promoted at a lower rate than non-Black peers. Recode reviewed some of this data for the Amazon Web Services division of the company, and it shows large disparities in performance review ratings between Black and white employees.”

Amazon, of course, disagrees with this characterization, but it is difficult to argue with all the points Menghistab considers: the many unsettling comments made to and about Black employees by higher-ups; the reluctance of management to embrace best practices suggested by their own diversity experts; the fact that diversity goals do not extend to top management positions; the rampant “down-leveling” of employees of color, its long-term effects on each worker, and the low chances of promotion; a hesitation to hire from historically Black colleges; and the problematic “Earns Trust” evaluation metric. We suggest interested readers navigate to the article to learn more about each of these and other factors.

Some minority employees say they have reason to hope. For one thing, the problems do not pervade the entire company—many teams happily hum along unaffected. The company is also making a few small steps in the right direction, like requiring workers to undergo diversity and inclusion training, participating in the Management Leadership of Tomorrow’s Black Equity at Work Certification, and holding a virtual career-enrichment summit for Black, Latinx, and Native American prospective employees. There will never be a quick and easy fix for the tech behemoth, but as Kelly-Rae observes:

“Amazon is really good at things it wants to be good at, and if Amazon decided it really wanted to be good at this, I have no doubt it can be.”

Time to step it up, Amazon.

Cynthia Murrell, March 11, 2021

Business Process Management: Buzzy Again

March 10, 2021

If you have never heard of business process management (BPM), it is the practice of discovering and controlling an organization’s processes so they align with business goals as the company evolves. BPM software is the next phase of business intelligence software for enterprises. CIO explains what to expect from BPM software in the article: “What Is Business Process Management? The Key To Enterprise Agility.”

BPM software maps definitions to existing processes, defines steps to carry out tasks, and offers tips for streamlining and improving practices. Organizations constantly shift to meet their goals, and BPM software is advertised as the best way to refine and control changing environments. All good BPM software should deliver the following: alignment of the firm’s resources, increased discipline in daily operations, and clarity on strategic direction. While most organizations want flexibility, many lack it:

“A company can only be as flexible, efficient, and agile as the interaction of its business processes allow. Here’s the problem: Many companies develop business processes in isolation from other processes they interact with, or worse, they don’t “develop” business processes at all. In many cases, processes simply come into existence as “the way things have always been done,” or because software systems dictate them. As a result, many companies are hampered by their processes, and will continue to be so until those processes are optimized.”

When selecting BPM software, look for integrations, analytics, collaboration, form generation, a business rules engine, and workflow management.
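To make “defines steps to carry out tasks” concrete, here is a toy sketch of a process definition and runner. The step names and the dictionary-of-steps scheme are invented for illustration; no actual BPM product works exactly this way.

```python
# A toy BPM-style process: each step names the next step it hands off
# to, and the runner walks a work item through the chain.
PROCESS = {
    "submit_order":  {"next": "approve_order"},
    "approve_order": {"next": "ship_order"},
    "ship_order":    {"next": None},  # terminal step
}

def run_process(process, start):
    """Walk a work item through the process, returning the steps visited."""
    visited, step = [], start
    while step is not None:
        if step not in process:
            raise ValueError(f"undefined step: {step}")
        visited.append(step)
        step = process[step]["next"]
    return visited

print(run_process(PROCESS, "submit_order"))
# ['submit_order', 'approve_order', 'ship_order']
```

Real BPM suites layer the integrations, analytics, and business rules engines mentioned above on top of a process model like this one.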

BPM sounds like the next phase of big data, where hidden insights are uncovered in unstructured data.  BPM takes these insights, then merges them with an organization’s goals.  Business intelligence improves business processes, big data discovers insights, and BPM organizes all of it.

Whitney Grace, March 10, 2021

Ah, Google: Does Confusion Signal a Mental Health Issue?

March 8, 2021

Upon rising this morning, I noted this item in “The New Google Pay Repeats All the Same Mistakes of Google Allo.” The idea is that Google management has reinvented an application, changed the fee method, and named the “new” Google Pay app “Google Pay.” According to the write up:

Google is killing one perfectly fine service and replacing it with a worse, less functional service.

Slashdot’s item about this remarkable “innovation” includes this comment:

The worst part of it all is that, like the move from Google Music to YouTube Music, there is no reward at the end of this transition.

I have to admit that I don’t remember much about my college psych course, but I seem to recall something called Schizoaffective Disorder. Shrinks revel in such behaviors as strange beliefs that the person refuses to give up, even when given the facts; problems with speech and communication, such as giving only partial answers to questions or giving answers that are unrelated; and trouble at work, school, or in social settings. (Yep, I had to get some help from the ever-reliable Webmd.com.)

More intriguing was the news item “Google Advised Mental Health Care When Workers Complained about Racism and Sexism.” That article asserted:

In early 2020, a Black woman attended a Google meeting about supporting women at the company where data was presented that showed the rate that underrepresented minority employees were leaving the company. When she said that Black, Latina and Native American women have vastly different experiences than their white female colleagues and advised that Google address the issue internally, her manager brusquely responded, telling her that her suggestion was not relevant, the woman said. The woman then complained to human resources, who advised her to coach the manager about her problematic response or take medical leave to tend to her own mental health, she said. The woman also spoke on the condition of anonymity because she’s still an employee and not permitted to speak to reporters.

Does this mean that the women who worked in ethical artificial intelligence were “mentally unfit” for the Google?

Stepping back, the problem may not be with the Google Pay app or the people reported as mental health concerns. The problem appears to reside in the culture and explicit and implicit “rules of the road” for Alphabet Google.

Several observations may be warranted:

  • The legal attention Google is drawing should result in a lower profile or in significant efforts to keep personnel-related issues from becoming news. Instead, Google’s behavior appears to generate significant attention and spark outrage, including increased employee annoyance.
  • The financial pressures on Google should be sparking wizards to craft well-conceived, highly desirable ways to monetize the billions of users who make use of “free” Google services. Instead, Google is taking steps which seem irrational to those outside Google whilst appearing logical to those steeped in the Google milieu. The Google culture could be a form of milieu therapy which feeds the possible Schizoaffective Disorder.
  • Google’s management behaviors are interesting. On one hand, renaming services underscores the problems the firm has with speech and communication. On the other hand, mashing racial, social, and ethical hot buttons seems to escalate the stakes in the personnel game.

Net net: I think these behaviors are interesting. What these actions really mean must be left to users, employees, lawyers, and probably psychiatrists. These actions are further evidence of the weaknesses of the high school science club approach to management. Here in rural Kentucky, a member of my research team said, “Crazy.”

That’s quite an observation about a big, informed, powerful company.

Stephen E Arnold, March 8, 2021

Google Gets Kicked Out of Wizard Class: Gebru Gibberish to Follow

March 5, 2021

I read “AI Ethics Research Conference Suspends Google Sponsorship.” Imagine, a science club type organization suspended. Assuming the “real” and ad-littered story is accurate, here’s the scoop:

The ACM Conference for Fairness, Accountability, and Transparency (FAccT) has decided to suspend its sponsorship relationship with Google, conference sponsorship co-chair and Boise State University assistant professor Michael Ekstrand confirmed today. The organizers of the AI ethics research conference came to this decision a little over a week after Google fired Ethical AI lead Margaret Mitchell and three months after the firing of Ethical AI co-lead Timnit Gebru. Google has subsequently reorganized about 100 engineers across 10 teams, including placing Ethical AI under the leadership of Google VP Marian Croak.

The Association for Computing Machinery no less. How many Googlers and Xooglers are in this ACM entity? How many Googler and Xoogler papers has the ACM accepted? Now suspended. Yikes, just a high school punishment for an outfit infused with the precepts of high school science club management and behavior.

What’s interesting is the injection of the notion of “ethical.” The world’s largest education and scientific organization is not into talking, understanding the Google point of view, or finding common ground.

Disruptors, losers, and non-fitting wizards and wizardettes are not appropriate for the ethics subgroup of ACM. Oh, is that ethical? Good question.

But ACM knows who writes checks. The ad besotted article states:

Putting Google sponsorship on hold doesn’t mean the end of sponsorship from Big Tech companies, or even Google itself. DeepMind, another sponsor of the FAccT conference that incurred an AI ethics controversy in January, is also a Google company. Since its founding in 2018, FAccT has sought funding from Big Tech sponsors like Google and Microsoft, along with the Ford Foundation and the MacArthur Foundation. An analysis released last year that compares Big Tech funding of AI ethics research to Big Tobacco’s history of funding health research found that nearly 60% of researchers at four prominent universities have taken money from major tech companies.

Should I raise another question about the ethics of this wallet sensitive posture? Nah. Money talks.

I find the blip on the ethical radar screen quite amusing. One learns each day what really matters in the world of computers and smart software. That’s a plus.

I am waiting for Google Gebru-gibberish to explain the situation. I am all ears.

Stephen E Arnold, March 5, 2021

Gebru-Gibberish: A Promise, Consultants, and Surgical Management Action

March 1, 2021

I read “Google Reportedly Promises Change to Research Team after High Profile Firings.” The article explains that after female artificial intelligence researchers found their futures elsewhere, Google (the mom and pop neighborhood online ad agency):

will change its research review procedures this year.

Okay, 10 months.

The write up points out that the action is

an apparent bid to restore employee confidence in the wake of two high-profile firings of prominent women from the [AI ethics] division.

Yep, words. I found this passage redolent of Gebru-gibberish; that is, wordage which explains how smart software ethics became a bit of a problem for the estimable Google outfit:

By the end of the second quarter, the approvals process for research papers will be more smooth and consistent, division Chief Operating Officer Maggie Johnson reportedly told employees in the meeting. Research teams will have access to a questionnaire that allows them to assess their projects for risk and navigate review, and Johnson predicted that a majority of papers would not require additional vetting by Google. Johnson also said the division is bringing in a third-party firm to help it conduct a racial-equity impact assessment, Reuters reports, and she expects the assessment’s recommendations “to be pretty hard.”

Okay. A questionnaire. A third party firm. Pretty hard.

What’s this mean?

The Ars Technica write up does not translate. However, from my vantage point in rural Kentucky, I understand the Gebru-gibberish to mean:

  1. Talk about ethical smart software and the GOOG reacts in a manner informed by high school science club principles
  2. Female AI experts are perceived as soft targets but that may be a misunderstanding in the synapses of the Google
  3. The employee issues at Google are overshadowing other Google challenges; for example, the steady rise of Amazon product search, the legal storm clouds, and struggles with the relevance of ads displayed in response to user queries or viewed YouTube videos.

Do I expect more Gebru-gibberish?

Will Microsoft continue to insist that its SAML is the most wonderful business process in the whole wide world?

Stephen E Arnold, March 1, 2021

Remarkable Zoom Advice

March 1, 2021

I am either 76 or 77. Who knows? Who cares? I do participate in Zoom calls, and I found this “recommendation” absolutely life changing. The information appears in “You SHOULD Wave at the End of Video Calls — Here’s Why.” Straight-away I marvel at the parental “should.” There’s nothing like a mom admonishment when it comes to Zoom meetings.

The write up posits:

I already know that every call here ends with a lot of waving, and the group unanimously favors waving.

The idea that a particular group is into waving appears to support the generalization that waving goodbye at the end of Zoom calls is the proper method of exiting a digital experience.

I learned:

Here’s the definitive ruling for the entire internet, from now until the end of time: waving at the end of video calls is good, and no one should feel bad for doing it. Ever.

Okay, maybe feeling bad is not the issue.

Looking stupid, inappropriate, weird, or childish may be other reasons for doubting this incredibly odd advice. Look. People exiting my Zoom meetings are not waving goodbye to friends climbing aboard the Titanic in April 1912.

Why wave? The explanation:

Humans aren’t machines — we’re social animals. We want to feel connected to each other, even in a work context. Suddenly hanging up feels inhuman (because it is). Waving and saying goodbye solves this problem.

Holy Cow! Humans are not machines. News flash: At least one Googler wants to become a machine, and there will be others. In fact, I know humans who are machine-like.

I hope I never see attendees waving at me at the end of my next lecture for law enforcement and intelligence professionals. I say thank you and punch End Meeting for All.

I am confident that those testifying via video conference connections will not wave at lawyers, elected officials, or investigators. Will Facebook’s Mark Zuckerberg wave to EU officials in the forthcoming probes into the company’s business methods?

Stephen E Arnold, March 1, 2021

The Crux of the Smart Software Challenge

February 24, 2021

I read “There Is No Intelligence without Human Brains.” The essay is not about machine learning, artificial intelligence, and fancy algorithms. One of the points which I found interesting was:

But, humans can opt for long-term considerations, sacrificing to help others, moral arguments, doing unhelpful things as a deep scream for emotional help, experimenting to learn, training themselves to get good at something, beauty over success, etc., rather than just doing what is comfortable or feels nice in the short run or simply pro-survival.

However, one sentence focused my thinking on the central problem of smart software and possibly explains the odd, knee jerk, and high profile personnel problems in Google’s AI ethics unit. Here’s the sentence:

Poisoning may greatly hinder our flexible intelligence.

Smart software has to be trained. The software system can be hand fed training sets crafted by fallible humans or the software system can ingest whatever is flowing into the system. There are smart software systems which do both. One of the first commercial products to rely on training sets and “analysis on the fly” was the Autonomy system. The phrase “neurolinguistic programming” was attached by a couple of people to the Autonomy black box.
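The poisoning idea is easy to demonstrate with a toy model. The sketch below (pure Python; the word lists, labels, and the “applicant” example are invented for illustration and have nothing to do with Autonomy’s or Google’s actual systems) trains a crude word-score “classifier” on a skewed training set and shows how a neutral word picks up a negative score:

```python
from collections import Counter

def word_scores(examples):
    """Return a naive per-word score: occurrences with a 'good' label
    minus occurrences with a 'bad' label. Words the training set pairs
    mostly with 'bad' examples end up scored negative."""
    good, bad = Counter(), Counter()
    for text, label in examples:
        for word in text.lower().split():
            (good if label == "good" else bad)[word] += 1
    return {w: good[w] - bad[w] for w in set(good) | set(bad)}

# A poisoned training set: the neutral word "applicant" happens to
# appear only in negatively labeled examples.
poisoned = [
    ("applicant missed the deadline", "bad"),
    ("applicant resume was padded", "bad"),
    ("applicant interview went poorly", "bad"),
    ("solid skills and great references", "good"),
]

scores = word_scores(poisoned)
print(scores["applicant"])  # -3: a neutral word now "looks" negative
```

Scale the same mechanism up to millions of documents and a deep model, and the skew is no longer visible in a single print statement; it is baked into the weights.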

What’s stirring up dust at Google may be nothing more than fear; for example:

  • Revelations by those terminated suggest that the bias in smart software is a fundamental characteristic of Google’s approach to artificial intelligence; that is, the datasets themselves are sending smart software off the rails
  • The quest for the root of the bias shines a light on the limitations of current commercial approaches to smart software; that is, vendors make outrageous claims in order to maintain a charade about software capabilities which may be quite narrow and biased
  • The data gathered by the Xooglers may reveal that Google’s approach is not as well formed as the company wants competitors and others to believe; that is, marketers and MBAs outpace what the engineers can deliver.

The information by which an artificial intelligence system “learns” may be poisoning the system. Check out the Times of Israel essay. It is thought provoking and may have revealed the source of Google’s interesting personnel management decisions.

Fear can trigger surprising actions.

Stephen E Arnold, February 24, 2021

Google: Adding Friction?

February 23, 2021

I read “Waze’s Ex-CEO Says App Could Have Grown Faster without Google.” Opinions are plentiful. However, reading about the idea of Google as an inhibitor is interesting. The write up reports:

Waze has struggled to grow within Alphabet Inc’s Google, the navigation app’s former top executive said, renewing concerns over whether it was stifled by the search giant’s $1 billion acquisition in 2013.

A counterpoint is that 140 million drivers use Waze each month. When Google paid about $1 billion for the traffic service in 2013, Waze attracted 10 million drivers.

The write up states:

But Waze usage is flat in some countries as Google Maps gets significant promotion, and Waze has lost money as it focuses on a little-used carpooling app and pursues an advertising business that barely registers within the Google empire…

Several observations about the points in the article:

  1. With litigation and other push back against Google and other large technology firms, it seems as if Google is in a defensive posture
  2. Wall Street is happy with Google’s performance, but that enjoyment may not be shared by some users and employees
  3. Google management methods may be generating revenue but secondary effects like the Waze case may become data points worth monitoring.

Google map-related services are difficult for me to use. Some functions are baffling; others invite use of other services. Yep, friction, as in slowing Waze’s growth, maybe?

Stephen E Arnold, February 23, 2021

Alphabet Google: High School Science Club Management Breakthrough

February 20, 2021

The Google appears to support the concepts, decision-making capabilities, and the savoir faire of my high school science club. I entered high school in 1958, and I was asked to join the Science Club. Like cool. Fat, thick glasses, and the sporty clothes my parents bought me at Robert Hall completed my look. And I fit right in. Arrogant, proud to explain that I missed the third and fourth grades because my tutor in Campinas died of snake bite. I did the passive resistance thing, I refused to complete the 1950s version of distance learning via the Calvert Course, and I was socially unaware – yes, I fit right in. The Science Club of Woodruff High School! People sort of like me: Midwestern in spirit, arrogant, and clueless. Were we immature? Does Mr. Putin have oligarchs as friends?

With my enthusiastic support, the Woodruff High School Science Club intercepted the principal’s morning announcements. We replaced mimeograph stencils with those we enhanced. We slipped calcium carbide into chemistry experiments involving sulfuric acid. When we were taken before the high school assistant principal Bull Durham, he would intone, “Grow up.”

We learned there were no consequences. We concluded that without the Science Club, it was hasta la vista to the math team, the quick recall team, the debate team, the trophies from the annual Science Fair, and the pride in the silly people who racked up top scores on standardized tests administered to everyone in the school.

The Science Club learned a life lesson. Apologize. Look at your shoes. Evidence meekness and humility. Forget asking for permission.

I thought about how the Science Club decided. That’s an overstatement. An idea caught our attention and we acted. I stepped into the nostalgia Jacuzzi when I read “Google Fires Another AI Ethics Leader.” A déjà vu moment. The Timnit Gebru incident flickers in the thumbtypers’ news feeds. Now a new name: Margaret Mitchell, the co-lead of Google’s Ethical AI team. Allegedly she was fired, if the information in the “real” news story is accurate. The extra peachy keen Daily Mail alleged that the RIF was a result of Ms. Mitchell’s use of a script “to search for evidence of discrimination against fired black colleague.” Not exactly as nifty as my 1958 high school use of calcium carbide, but close enough for horseshoes.

Even the cast of characters in this humanoid unfriending is the same: Uber Googler Jeff Dean, who approaches problems as logically as Sawzall and BigTable. The script is a recycling of a 1930s radio drama. The management process is unchanged: Conclude and act. Wham and bam.

The subject of ethics is slippery. Todd Pheifer, a doctor of education, wrote Business Ethics: The Search for an Elusive Idea and required a couple of hundred pages to deal with a single branch of the definition of the concept. The book is a mere $900 on Amazon, but today (Saturday, February 20, 2021) it is not available. Were the buyers Googlers?

Ethics is in the title of the Axios article “Google Fires Another AI Ethics Leader,” and ethics figures in many of the downstream retellings of this action. Are these instant AI ethicist zappings the Alphabet Google equivalent of the Luxe Half-Acre Mosquito Trap with Stand? Hum buzz zap!


In my high school science club, we often deferred to Don and Bernard or the Jackson Brothers. These high school wizards had published an article about moon phases in a peer-reviewed journal when Don was a freshman and Bernard was a sophomore. (I have a great anecdote about Don’s experience in astrophysics class at the University of Illinois. Ask me nicely, and I will recount it.)

The bright lads would mumble some idea about showing the administration how stupid it was, and we were off to the races. As I recall, we rarely considered the impact of our decisions. What about ethics, wisdom, social and political awareness? Who are you kidding? Snort, snort, snort. Life lesson: No consequences for those who revere good test takers.

As it turned out, most of us matured somewhat. Most got graduate degrees. Most of us avoided super life catastrophes. Bull Durham is long dead, but I would wager he would remember our brilliance if he were around today to reminisce about the Science Club in 1958.

I am grateful for the Googley, ethical AI related personnel actions. Ah, memories.

Several questions with answers in italic:

  • How will Alphabet Google recruit individuals who are not like the original Google “science club” in the wake of the Backrub burnout? Answer: By paying ever higher salaries, larger bonuses, and maybe an office at home.
  • Which “real” news outfit will label the ethical terminations as a failure of high school science club management methods? Answer: None.
  • What does ethics mean? Answer: Learn about phenomenological existentialism and then revisit this question.

I miss those Science Club meetings on Tuesday afternoons from 3:30 to 4:30 pm Central time even today. But “real” news stories about Google’s ethical actions related to artificial intelligence are like a whiff of Dollar General air freshener.

Stephen E Arnold, February 20, 2021
