Be Resilient. Follow the Google Regimen

May 7, 2021

To learn the secret, navigate to “Google’s ‘Global Head of Resilience’ Says the Secret to Avoiding Burnout Is TEA.” The acronym says it all: TEA (not the street slang for cannabis). Google’s wizard in charge of resilience explains the secret:

  • Thoughts. Complete these sentences to help you learn “to differentiate between helpful and unproductive thinking patterns:” “Today my mind is …” “To refocus I need to …”
  • Energy. The goal of the ‘E’ section of TEA is “observing how we are feeling in the moment, and intentionally investing in activities or people that fuel positive enthusiasm and motivation.” Complete these sentences: “Today my energy is…” “To change or maintain, I need to…”
  • Attention. This one helps you become more intentional about where you place your attention by asking you to complete this sentence: “To be my best today, I will focus on doing or being…”

Quite a mnemonic device.

How widely is this secret employed at Google; specifically, in the AI ethics department? The write up does not illuminate the matter.

For one former AI ethics type, resilience meant landing a job at Apple.

Stephen E Arnold, May 7, 2021

Google Caught In Digital and Sticky Ethical Web

May 3, 2021

Google is described as an employee-first company. Employees are affectionately dubbed “Googlers” and are treated to great benefits, perks, and a pleasant work environment. Google, however, has a dark side. While the company culture is supposedly great, misogynistic and racist attitudes run rampant. Bloomberg via Medium highlights recent ethical violations in the article, “Google Ethical AI Group’s AI Turmoil Began Long Before Public Unraveling.”

One of the biggest ethical problems Google has dealt with is the lack of diverse information in its facial recognition datasets. This has led to facial recognition AI’s inability to recognize minority populations. As if ethical problems within its technology were not enough, Google created an Ethical AI research team headed by respected scientists Margaret Mitchell and Timnit Gebru.

Google forced Gebru to resign from the company in December 2020, when she refused to retract a research paper that criticized Google’s AI. Mitchell was terminated in February 2021 on the grounds that she was sending sensitive Google documents to personal accounts.

During their short tenure as Google’s Ethical AI team leads, Mitchell and Gebru witnessed a sexist and racist environment. They both noticed that women with more experience held lower job titles than men with less experience. When female employees were harassed and the incidents were reported, nothing was done.

Head of Google AI Jeff Dean appeared to be a supporter of Gebru and Mitchell, but while he voiced support, his actions spoke louder:

“Dean struck a skeptical and cautious note about the allegations of harassment, according to people familiar with the conversation. He said he hadn’t heard the claims and would look into the matter. He also disputed the notion that women were being systematically put in lower positions than they deserved and pushed back on the idea that Mitchell’s treatment was related to her gender. Dean and the women discussed how to create a more inclusive environment, and he said he would follow up on the other topics….

About a month after Gebru and Heller reported the claims of sexual harassment and after the lunch meeting with Gebru and Mitchell, Dean announced a significant new research initiative, and put the accused individual in charge of it, according to several people familiar with the situation. That rankled the internal whistleblowers, who feared the influence their newly empowered colleague could have on women under his tutelage.”

Google purposely created the Ethical AI research team to bring disparities and poor behavior to its attention. When Gebru and Mitchell did their job, their observations were dismissed.

Google shot itself in the foot when it fired Gebru and Mitchell, because the pair were doing their job. Because the pair questioned potential problems with Google technology and fought against sexism and racism, the company treated them as disruptive liabilities. Mitchell and Gebru’s treatment points to the limits of self-regulation. Companies are internally biased because they want to make money and avoid admitting mistakes. That bias breeds a lackadaisical approach to self-regulation and responsibility. Biased technology leads to poor consequences for minorities that could potentially ruin their lives. Is it not better to be aware of these issues, accept the problem, then fix it?

Google is only going to champion itself and not racial/gender equality.

Whitney Grace, May 3, 2021

Google Bets: Chump Change

April 30, 2021

In the midst of stakeholder ebullience about Alphabet Google’s money making prowess, I spotted one interesting comment. “Alphabet Reports Q1 2021 Revenue of $55.3 Billion” included this statement:

The closely-watched “Other Bets” segment continues to lose money. It reported $198 million in revenue, primarily generated by Verily and Fiber, up from $135 million in Q1 of 2020. However, it lost $1.15 billion compared to $1.12 billion in the same quarter of last year.

For a company of Alphabet Google YouTube’s scale this is a modest loss. However, it does raise a couple of questions:

  1. Is the data analysis used to decide upon what to wager flawed?
  2. Is there high value information about the firm’s management of certain projects contained in these increasing and continuing losses?

Alphabet does online advertising and data vending. Innovation may be more of a reach than some expected.

Stephen E Arnold, April 30, 2021

A Test to Determine Googliness

April 12, 2021

I read “After Working at Google, I’ll Never Let Myself Love a Job Again.” I immediately thought of the statement, “You’ll Never Work in This Town Again.” Did the icon Harvey Weinstein say this? I can’t recall.

Okay, no loving a job. The real news “opinion” piece presents a harrowing, first-person account of harassment. Did I harass Mr. Weinstein with my use of the word “icon”? Yikes.


To learn about the mom-and-pop online ad agency’s approach to personnel management, read the real news “opinion” article.

Here’s what I gleaned from the write up:

1. Be a compliant engineer who stays within the bright white lines of behavior at the Google. What if the interactions are virtual? No matter. Bright white lines, real or imagined, are the markers.

2. Don’t pick a mentor who wants to keep his / her job, bonus, stock options, and invitations to select company events. (Once some events required a ski weekend. Whoooie! Fun.) Mentors who value something other than “relationships” may provide a re-introduction to the Maslow – Google hierarchy of needs.

3. Keep quiet and avoid the human resources people management wizard. After a sales call at SHRM or something like that, I knew that modern HR was a casualty of MBA think; for example, employees are at fault. Unproductive employees are self-identified. Modern organizations don’t want flawed, profit-sucking humanoids. Maybe I have the human resource function wrong, but I too can have an opinion.

4. Life at the Googleplex does not dispense “Also Participated” badges like a really trendy private middle school.

These observations lend themselves to items on my “Checklist for Being Googley”; to wit:

  • Be smart enough to be compliant and “cooperative”
  • Work alone when possible delivering “good enough” outputs
  • Operate without official or unofficial visits to personnel professionals
  • Welcome inter-personal interactions warmly, enthusiastically, and without documenting such encounters
  • Do “what it takes” to join a hot team, get a promotion, and enter the private domains of the truly elite
  • Eschew interviews, book deals, and opportunities to contribute to a “real news” channel

If you can tick off each of these items, you are ready for a run through the Google Labs Aptitude Test. The test is rumored to have been retired, but copies of this measure of Google-grade knowledge are still available. Just search Google.com. Oh, strike that. This link returns the questions, not the attractive green of the original hard copy with the really hard questions like:

What’s broken with Unix? How would you fix it?

Beyond Search had a copy but boxer Max ate it years ago. Yes, he passed the exam.

Stephen E Arnold, April 12, 2021

Want to Change Employee Behavior? What Not to Do

April 12, 2021

I read “The One System That Changes Employee Behavior.” Interesting but disconnected from good old reality. I assume the breezy recommendations which comprise the “one system” were crafted by a manager with an MBA and a background in the disconnected world of high school science club decision making. Perfect for thumbtypers.

Wrong. Behavior change in a commercial enterprise is induced by hooking compensation (tangible or intangible) to specific outcomes. Another way to think about change is to think about this statement, “Do this and you get a raise and a promotion.”

Let’s look at the four recommendations that comprise the “one system that changes employee behavior.” Here are what I call “thumbtyper” suggestions. My observations appear in italics after these bullets of high powered wisdom:

1. Define corporate values.

Okay, that’s something for a first year business class. Get those values down to a snappy phrase like “Do no evil.” One can also look to outfits like Credit Suisse. That outfit’s executives are in a tizzy because of its financial sinkhole related to the ethical paragons at Archegos. To understand corporate values, talk to the former McKinsey wizards who engineered success at a large pharmaceutical firm.

2. Define pinpointed behaviors aligned with values.

Many interesting examples of this alignment thing can be located. They range from the fascinating tale of a philandering Google attorney to the Big Zuck, who wanted to eat the meat of animals he killed. Did he wear a PETA cap whilst satisfying his culinary goals? The alignment of privacy and Facebook revenue is almost as interesting. I do like the word “pinpointed,” however. Precision is required for advertisers to buy clicks as well as for inducing pregnancy and killing a plump French bulldog tied to a door knob on University Avenue. As you ponder the canine metaphor, define value for attendees at a virtual venture funded entrepreneur-to-be conference.

3. Change your behaviors.

Ho, ho, ho. Try that with this senior manager at a high tech firm in the cradle of ethical behavior. The behavior requiring change is described in “Prostitute Convicted in Google Exec’s Overdose Death Charged.” Yep, intervention works great. On the other hand, step back and watch how behaviors evolve once a secret is exposed. Current examples fall readily to hand; for example, explanations about data loss from social media outfits.

4. Facilitate change in others.

This is an interesting idea. Let’s take the example of Uber. Travis Kalanick, who needed to grow up, did indeed alter others. Some of his methods are documented in the BBC article “Uber: The Scandals That Drove Travis Kalanick Out.” A more mundane example may lurk in one’s own mind. How often did someone tell you, gentle reader, to do your homework? Works every time for those under the age of 13, doesn’t it?

My thought is that these ideas do not comprise a system.

What works is incentives. Pay for specific actions. When the action is delivered in a satisfactory way, provide more payoffs. Magic. The somewhat shallow “one system” ain’t gonna do it. Cash is a more reliable motivator.

Stephen E Arnold, April 12, 2021

The Alphabet Google YouTube Thing Explains Good Old Outcome Centered Design

April 8, 2021

If you have tried to locate information on a Google Map, you know what good design is, right? What about trying to navigate the YouTube upload interface to add or delete a “channel”? Perfection, okay. What if you have discovered an AMP error email and tried to figure out how a static Web site generated by an AMP approved “partner” can be producing a single flawed Web page? Intuitive and helpful, don’t you think?

Truth is: Google Maps are almost impossible to use regardless of device. The YouTube interface is just weird and better for a 10-year-old video game player than a person over 30, and the AMP messages? Just stupid.

I read “Waymo’s 7 Principles of Outcome-Centered Design Are What Your Product Needs” and thought I stumbled upon a listicle crafted by Stephen Colbert and Jo Koy in the O’Hare Airport’s Jazz Bar.

Waymo (so named because one gets “way more” with Alphabet Google YouTube, hereinafter AGYT) is managed by co-CEOs. It is semi famous for hiring uber engineer Anthony Levandowski. Plus the company has been beavering away to make driving down 101 semi fun since 2009. The good news is that Waymo seems to be making more headway than the Google team trying to solve death. The Wikipedia entry for Waymo documents 12 collisions, but the exact number of errors by the Alphabet Google YouTube smart software is not known even to some Googlers. Need to know, you know.

What are the rules for outcome centered design; that is, ads but no crashes, I presume? The write up presents seven. Here are three; you can let your Chrome browser steer you to the full list. Don’t run into the Tesla Web site either, please.

Principle 2. Create focus by clarifying your purpose.

Okay, focus. Let’s see. When riding in a vehicle with no human in charge, the idea is to avoid a crash. What about filtering YouTube for okay content? Well, that only works some of the time. The Waymo crashes appear to underscore the fuzz in the statistical routines.

And Principle 4. Clue in to your customer’s context.

Yep, a vehicle which knows one’s browsing history and has access to nifty profiles with probabilities can just get going. Forget what the humanoid may want. Alphabet Google YouTube is ahead of the humanoid. Sometimes. The AGYT approach is to trim down what the humanoid wants to three options. Close enough for horse shoes. Waymo, like Alphabet Google YouTube, knows best. Just like a digital mistress. The humanoid, however, is going to a previously unvisited location. Another humanoid told the rider face to face about an emergency. The AGYT system cannot figure out context. Not to worry. Those AGYT interfaces will make everything really easy. One can talk to the Waymo equipped smart vehicle. Just speak clearly, slowly, and in a language which Waymo parses in an acceptable manner. Bororo won’t work.

Finally, Principle 7: Edit edit edit.

I think this means revisions. Those are a great idea. Alphabet Google YouTube does an outstanding job with dots, hamburger menus, and breezy writing in low contrast colors. Oh, content? If you don’t get it, you are not Googley. Speak up and you may get the Timnit treatment or the Congressional obfuscation rhetoric. I also like ignoring the antics of senior managers.

Yep, outcome centered. Great stuff. Were Messrs. Colbert and Koy imbibing something other than Sprite at the airport when possibly conjuring this list of really good tips? What’s the outcome? How about ads displayed to passengers in Waymo infused vehicles? Context centered, relevant, and a feature one cannot turn off.

Stephen E Arnold, April 8, 2021

Eschewing the Google: Career Suicide or Ethical Savvy?

March 19, 2021

I spotted an interesting quote in Wired’s “The Departure of 2 Google AI Researchers Spurs More Fallout.” Here’s the quote:

“Google has shown an astounding lack of leadership and commitment to open science, ethics, and diversity in their treatment of the Ethical AI team.”

It’s been several months since the Google engaged in Gebru-gibberish; that is, the firm’s explanations about the departure of a PhD who wrote a research paper suggesting that the Google’s methods may not be a-okay.

The Google is pressing forward with smart software, which is the future of the company. I thought online advertising was, but what do I know.

The article also mentions that a high profile AI researcher would not attend a Google AI event. The reason? Here’s what Wired reports:

Friday morning, Kress-Gazit emailed the event’s organizers to say she would not attend because she didn’t wish to be associated with Google research in any way. “Not only is the research process and integrity of Google tainted, but it is clear, by the way these women were treated, that all the diversity talk of the company is performative,” she wrote. Kress-Gazit says she didn’t expect her action to have much effect on Google, or her own future work, but she wanted to show solidarity with Gebru and Mitchell, their team, and their research agenda.

A few years ago, professionals would covet a Google tchotchke like a mouse pad or a flashing Google LED pin. (Mine tarnished and went dead years ago.) Now high profile academics are unfriending Messrs. Brin and Page’s online ad machine.

Interesting shift in attitude toward the high school science club company in a few pulses of Internet time.

Stephen E Arnold, March 19, 2021

Facebook: The Polarization Position

March 17, 2021

I find Silicon Valley “real” news amusing. I like the publications themselves; for example, Buzzfeed. I like the stories themselves; for example, “Polarization Is Good For America, Actually, Says Facebook Executive.”

How much of the Google method has diffused into Facebook? From my point of view, a magnetic influence exists. The cited article points out:

Facebook has created a “playbook” to help its employees rebut criticism that the company’s products fuel political polarization and social division.

The idea is that employees comprise a team. The team runs plays in order to score. The playbook also directs and informs team members on their roles.

“Trapped Priors As a Basic Problem of Rationality” explains how feedback loops lead to a reinforcement of ideas, data, and rationality otherwise not noticed.

Buzzfeed references this Facebook research document:

In the [Facebook] paper, titled “What We Know About Polarization,” Cox and Raychodhury [Facebook experts] call polarization “an albatross public narrative for the company.” “The implicit argument is that Facebook is contributing to a social problem of driving societies into contexts where they can’t trust each other, can’t share common ground, can’t have conversation about issues, and can’t share a common view on reality,” they write, adding that “the media narrative in this case is generally not supported by the research.” While denying that Facebook meaningfully contributes to polarization, Pablo Barberá, a research scientist at the company, also suggested political polarization could be a good thing during Thursday’s presentation. “If we look back at history, a lot of the major social movements and major transformations, for example, the extension of civil rights or voting rights in this country have been the result of increasing polarization,” he told employees.

The value to Facebook of polarization, and of a game plan that makes a particular business method explicit, is high. The fact that the trappings of research are required to justify the game plan is interesting. But those trapped priors are going to channel Facebook’s behavior into easy-to-follow grooves.

Scrutiny, legal action, and “more of the same” will allow pot holes to form. Some will be deep. Others will be no big deal.

Stephen E Arnold, March 17, 2021

The Google: Disrupting Education in the Covid Era

March 15, 2021

I thought the Covid thing disrupted education. Yet Google’s video conferencing system failed to seize the opportunity. Even poor, confused Microsoft put some effort into Teams. Sure, Teams is not the most secure or easy to use video conferencing service, but it has more features than Google has chat apps and ad options. Google also watched the Middle Kingdom’s favorite video service “zoom” right into a great big lead. Arguably, Google’s video conferencing tool should have hooked into the Chromebook, which is in the hands of some students. But what’s happened? Zoom, zoom, zoom.

I read this crisp headline: “Inside Google’s Plan to Disrupt the College Degree (Exclusive). Get a First Look at Google’s New Certificate Programs and a New Feature of Google Search Designed to Help Job Seekers Everywhere.”

Wow. The write up is an enthusiastic extension of Google Gebru-ish. Here’s why:

  1. Two candidates. One is a PhD from Princeton with a degree in computer science. The other is a minority certificate graduate. Both compete for the same job. Which candidate gets the job?
  2. One candidate, either Timnit Gebru or Margaret Mitchell. Both complete a Google certification program. Will these individuals get a fair shake and maybe get hired?
  3. Many female candidates from India. Some are funded by Google’s grant to improve opportunities for Indian females. How many will get Google jobs? [a] 80 to 99 percent, [b] 60 to 79 percent, [c] fewer than 60 percent? (I am assuming this grant and certificate thing are more than a tax deduction or hand waving.)

High school science club management decisions are fascinating to me.

Got your answers? I have mine.

For the PhD versus the certificate holder, the answer is: it depends. A PhD with non-Googley notions about ethical AI is likely to be driving an Uber. The certificate holder with the right mental orientation gets to play Foosball and do Googley things.

For the Gebru – Mitchell question, my answer is neither. Female, non-Googley, and already Xooglers. Find your future elsewhere is what I intuit.

And the females in India? Hard to say. The country is far away. The $20 million or so is too little. The cultural friction within the still existing castes is too strong. Maybe a couple is my guess.

In short, Google can try to disrupt education. But Covid has disrupted education. Another outfit has zoomed into chinks in the Google carapace. So marketing it is. It may work. Google is indeed Google.

Stephen E Arnold, March 15, 2021

Amazon and Personnel Wizardry?

March 11, 2021

Amazon likes to say it successfully promotes diversity and inclusion in its company, and some of the numbers it touts do represent a measure of success. However, there appears to be a lot of work left to do and not enough will to do it from the powerful “S Team.” Recode discusses “Bias, Disrespect and Demotions: Black Employees Say Amazon Has a Race Problem.” The extensive article begins with the story of former employee Chanin Kelly-Rae, a former global manager of diversity for AWS. She began the position with high hopes, but quit in dismay 10 months later. Reporter Meron Menghistab writes:

“Kelly-Rae, who is Black, is one of more than a dozen former and current Amazon corporate employees — 10 of whom are Black — who told Recode in interviews over the past few months that they felt the company has failed to create a corporate-wide environment where all Black employees feel welcomed and respected. Instead, they told Recode that, in their experience, Black employees at the company often face both direct and insidious bias that harms their careers and personal lives. All of the current and former employees, other than Kelly-Rae, spoke on condition of anonymity either because of the terms of their employment with Amazon or because they fear retribution from Amazon for speaking out about their experiences. Current and former Amazon diversity and inclusion professionals — employees whose work focuses on helping Amazon create and maintain an equitable workplace and products — told Recode that internal data shows that Amazon’s review and promotion systems have created an unlevel playing field. Black employees receive ‘least effective’ marks more often than all other colleagues and are promoted at a lower rate than non-Black peers. Recode reviewed some of this data for the Amazon Web Services division of the company, and it shows large disparities in performance review ratings between Black and white employees.”

Amazon, of course, disagrees with this characterization, but it is difficult to argue with all the points Menghistab considers: the many unsettling comments made to and about Black employees by higher-ups; the reluctance of management to embrace best practices suggested by their own diversity experts; the fact that diversity goals do not extend to top management positions; the rampant “down-leveling” of employees of color, its long-term effects on each worker, and the low chances of promotion; a hesitation to hire from historically Black colleges; and the problematic “Earns Trust” evaluation metric. We suggest interested readers navigate to the article to learn more about each of these and other factors.

Some minority employees say they have reason to hope. For one thing, the problems do not pervade the entire company—many teams happily hum along without any of these problems. The company is making a few small steps in the right direction, like requiring workers undergo diversity and inclusion training, participating in the Management Leadership of Tomorrow’s Black Equity at Work Certification, and holding a virtual career-enrichment summit for Black, Latinx, and Native American prospective employees. There will never be a quick and easy fix for the tech behemoth, but as Kelly-Rae observes:

“Amazon is really good at things it wants to be good at, and if Amazon decided it really wanted to be good at this, I have no doubt it can be.”

Time to step it up, Amazon.

Cynthia Murrell, March 11, 2021
