Building Trust: Current Instances of Dubious Credibility

June 20, 2019

I buzzed through the overnight email and scanned the headlines dumped in my “Pay Attention” folder. Not much of interest to me. Sure, Congress is going to ask questions about the new sovereign currency from the People’s Republic of Facebook. That’s going to be a rerun of the managerial version of “So You Think You Can Dance.”


I did spot three items which make clear the ethical swamp in which some companies find themselves lost. Let’s look at these and ask, “Yeah, about that bridge to Brooklyn you sold my mother?”

ITEM ONE: Vice reports that Twitter is working to fix a bug which tells a user, “You know that person who just unfollowed you? Well, good news, that person is now following you.” The write up “A Nightmare Twitter Bug Is Sending Users Notifications When They’re Unfollowed” states:

For several days, untold numbers of Twitter users have been getting push notifications whenever someone unfollows them. To add insult to injury, the notifications say the user has “followed them back” when in fact the opposite is true.

Yep, a bug, not another programming error, not a failure of code QA prior to pushing the ones and zeros to a production system, not an example of a senior management team looking for fire extinguishers. Just a bug. Forget the cause, and, of course, the Twitteroids are going to fix it.
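To make the nature of the flub concrete, here is a minimal, purely hypothetical Python sketch of the class of mistake Vice describes. Nothing below is Twitter’s actual code; the template map and function names are invented for illustration. The point is simple: if every change in the follow graph gets routed to the same “followed you back” template, an unfollow renders exactly the inverted message, and a one-line check in a pre-release test would have flagged it.

```python
# Hypothetical illustration only: an inverted event-to-template mapping.
from typing import Optional

NOTIFICATION_TEMPLATES = {
    "follow": "{user} followed you back",
    "unfollow": None,  # an unfollow should not generate a push notification
}

def buggy_notification(event_type: str, user: str) -> Optional[str]:
    # Bug: every follow-graph change is routed to the "follow" template,
    # so an unfollow event still renders "followed you back."
    return NOTIFICATION_TEMPLATES["follow"].format(user=user)

def fixed_notification(event_type: str, user: str) -> Optional[str]:
    # Fix: look up the template for the actual event type; stay silent if none.
    template = NOTIFICATION_TEMPLATES.get(event_type)
    return template.format(user=user) if template else None

if __name__ == "__main__":
    print(buggy_notification("unfollow", "@someone"))  # wrong: "@someone followed you back"
    print(fixed_notification("unfollow", "@someone"))  # correct: None
    assert fixed_notification("unfollow", "@someone") is None  # the check QA would run
```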

ITEM TWO: The somewhat frantic and chaotic methods of YouTube are going to get more attention. “YouTube Under Federal Investigation over Allegations It Violates Children’s Privacy” reports:

A spokeswoman for YouTube, Andrea Faville, declined to comment on the FTC probe. In a statement, she emphasized that not all discussions about product changes come to fruition. “We consider lots of ideas for improving YouTube and some remain just that — ideas,” she said. “Others, we develop and launch, like our restrictions to minors live-streaming or updated hate speech policy.”

Okay, let’s calm down and face facts: The methods used to generate engagement, sell ads, and stave off the probes from Amazon Twitch are just algorithms. Once again, no human responsibility, no management oversight, and no candid statement about what the three-ring video extravaganza is willing to do about this long-standing issue.

ITEM THREE: Facebook’s cryptocurrency play aside, I noted this admission that Facebook users have zero expectation of privacy, and, if I understand Facebook’s argument, the company’s position amounts to, “You will get zero privacy from our platform.” Navigate to “Facebook Under Oath: You Have No Expectation of Privacy” and note this statement:

In a San Francisco courtroom a few weeks ago, Facebook’s lawyers said the quiet part out loud: Users have no reasonable expectation of privacy. The admission came from Orin Snyder, a lawyer representing Facebook in a litigation stemming from the Cambridge Analytica scandal.

Now I am not sure this is an admission. It strikes me as a statement of a bedrock Facebook principle.

What do these three current items trigger in my mind? Let me answer that question, gentle reader:

  1. Large, powerful high technology firms say what’s necessary to get past a problem.
  2. Situational decision making creates unmanageable business processes.
  3. The senior managers and spokes humans are happy to perform just like the talent on “So You Think You Can Dance.”

For me, that show is becoming tiresome, repetitive, and the intellectual equivalent of chowing down on Krispy Kreme chocolate-iced, glazed-with-sprinkles donuts. The music is getting louder, and the tune is “Deflect, apologize, keep goin’.” Boring.

Stephen E Arnold, June 20, 2019

Alphabet Google: Reality Versus Research in Actual Management Activities

June 19, 2019

For a few months, I have been using my Woodruff High School Science Club as a source of ideas for understanding Silicon Valley management decisions. I termed my method HSSCMM or “high school science club management method.” A number of people have told me that my approach was humorous. I suppose it is. One former colleague from a big-name consulting firm observed that I was making official, MBA-endorsed techniques look like a shanked drive at a fraternity reunion golf scramble. (MBA students seem to be figuring out that their business degrees may open doors at Lyft or Uber, not McKinsey & Company.)

The HSSCMM amounts to “situational thinking without context” (this is DarkCyber jargon); Google research, by contrast, has identified best practices for management. However, HSSCMM is intuitive and easy to explain. My touchstone for management appears in the article “Google Tried to Prove Managers Don’t Matter. Instead, It Discovered 10 Traits of the Very Best Ones.” Google’s original goal involved figuring out whether managers were important or not. Google’s brilliant analysts crunched numbers and found that managers do matter, and the best ones shared some data-backed characteristics. Let’s compare what Google found with the HSSCMM.

I made an MBA-influenced table to keep thoughts clear.

| # | Google Research Says | HSSCMM Approach | Observations |
|---|----------------------|-----------------|--------------|
| 1 | Be a good coach | Be arrogant because you understand differential equations | Google is working on discrimination |
| 2 | Empower | Others are stupid | The smartest person is in charge |
| 3 | Inclusive team | Exclusivity all the way | Google hires best talent, and Google defines “best” |
| 4 | Be results oriented | Do what you want. Outsiders don’t get it. | Boost ad sales |
| 5 | Communicate | Don’t get it? You’re fired. | Explain YouTube is too big to be fixed |
| 6 | Have a strategy | React. Ignore the uninformed. | Make quick decisions like buying Motorola |
| 7 | Support career development | Learn it yourself | Find a team or leave |
| 8 | Advise the team | Figure it out, or you don’t belong | First day at work confusing? Try flipping burgers |
| 9 | Collaborate | Work alone | Fix the problem or quit |
| 10 | Be a strong decision maker | Do what I say, dummy | Obvious, right? |

Answer this question: How many of the characteristics from each column match actions from Silicon Valley-type companies like Amazon, Apple, Facebook, Google, etc.?

Which type of management method is exemplified in this allegedly true incident? The Google management research findings or the high school science club management methods? Answer: HSSCMM.

As you formulate this answer, consider the decision making evidenced in this allegedly accurate article from 2017 about a Silicon Valley executive.

Stephen E Arnold, June 19, 2019

Alphabet: Employees and Shareholders Allegedly Spell Trouble

June 18, 2019

CNBC continues to generate “real” news. The challenge for me is that CNBC is a television service. Take note, because I read “Alphabet Investors and Employees Are Planning a Joint Demonstration at Shareholders’ Meeting.” The title suggests a predictive story; that is, the event described has not happened yet, and I am not confident in people who predict the future. But that’s just me.

According to the write up:

The groups will try to pressure company stakeholders and leaders to vote on proposals that ban non-disclosure agreements in harassment and discrimination cases and tie executive compensation to its diversity goals. Another includes a proposal to publish a human rights impact assessment for its potential search engine with China called Project Dragonfly.

The targets, in my opinion, are the more obvious examples of what I call “high school science club management methods.” These work as long as the science club is small, homogeneous, and dismissed by other students as individuals unlikely to be captain of the football team, to be president of the student council, or to realize that there is some value in attending the prom with another humanoid.

We noted:

Google janitorial staff and community group Silicon Valley Rising will also be there to vocalize their concern over wage gaps and the residential effects of its imminent expansion into San Jose.

Yikes. Can HSSCMM deal with service personnel and other people who are mostly invisible?

CNBC did not speculate. If the write up is on the money, DarkCyber believes that the GOOG will pay attention to those who stir up memories of things past.

Stephen E Arnold, June 18, 2019

Google and Bungle: Math Meets PR

June 14, 2019

I read “Announcements Made Carlos Maza’s Harassment Worse,” written by a student. I found the write up interesting but probably not for the reasons the editors of Vice did. The main point of the write up strikes me as:

In the hours following YouTube’s announcement, a range of far-right users received notices from the platform indicating that they could no longer convert their viewership into ad revenue. This is not the first time YouTube has enacted sweeping changes that affect content creators: in the past year or so, users have come to expect so-called “Adpocalypses” as YouTube attempts to stay advertiser-friendly.  This time, however, users weren’t only blaming YouTube—they were blaming a Vox journalist and YouTube creator who was now facing a torrent of abuse thanks in part to YouTube’s fumbling and poorly-timed announcements.

Online harassment may be like explaining the Mona Lisa. Is the figure smiling? Do the eyes follow a viewer? Is Mona actually Leonardo in a “get up”? Art history students are not likely to reach agreement. Google has discovered that it is having its own Mona Lisa smile moment.

For DarkCyber, the student essay is interesting and valuable because it reveals the disconnect between the scrambling Alphabet and the clown car of YouTube content. Now the clown car is displaying ads which explain what’s going on with filtering.

Why does the disconnect the student captures in the essay exist?

The answer is that management precepts based on “we know better” and “sell ads” do not translate well. The Alphabet Google approach grates on the sensibilities of its “creators.”

When a person younger than I captures the consequences of high school wizards making decisions for the entire school, the message is, “Alphabet Google is not communicating effectively across the board.”

Thus, the student’s write up captures a moment in management history. If there were viable MBA programs, perhaps a bright student would study the “bungle” and Google management processes with a critical eye.

For now, we have student essays explaining how the world’s smartest people “bungle,” and with public relations no less. Where are the math wizards, the computer scientists, the engineers? Right, right. In management.

Stephen E Arnold, June 14, 2019

Google: Does That Clown Car Have a License Plate Which Reads Credibility000?

June 13, 2019

I am not going to write about the YouTube clown car regarding hate speech. Vice News makes the issue clear: High school science club management (HSSCM) does not deliver what practitioners hope and dream. I am not going to write about the pain Google caused. The Verge provides plenty of information on that angle.

In my opinion, Google’s after-the-fact explanations are unlikely to work like a dentist’s temporary anesthetic. I am getting tired of wading through reports about these types of HSSCM missteps.

I do want to call attention to Google’s explanation that “Chrome isn’t killing ad blockers.” The company is making “them” safer. The “them” are the extensions developers build to strip out obnoxious, never-ending ads which are enhancing one’s experience when trying to read a one-page article of interest. You can read the Googley words in “Improving Security and Privacy for Extensions Users.”

Here’s an example of the argument:

The Chrome Extensions ecosystem has seen incredible advancement, adoption, and growth since its launch over ten years ago. Extensions are a great way for users to customize their experience in Chrome and on the web. As this system grows and expands in both reach and power, user safety and protection remains a core focus of the Chromium project.

Here in Harrod’s Creek, some of Google’s innovations appear to be created to provide two things:

  1. More control over what users can do and see; e.g., ad blocker blocking
  2. Keeping users within Google’s version of the Internet; e.g., AMP.

We understand why a commercial enterprise, so far unregulated, takes these actions: Revenue. That’s the “law of the land” in the Wild West of Silicon Valley bro capitalism. Google needs cash because it costs the company more and more to get and keep users, to fight Facebook and Microsoft, and to fund the out-of-control overhead the high school wizards put in place and have expanded. There’s none of the Amazon rip-and-replace thinking that hit Oracle in the chops earlier this year.

DarkCyber thinks that Google might have a bit more credibility if the company were to say: “We need ads to survive. If you use Chrome, you are going to get ads, lots of ads. We’ve relaxed our semantic fence to make sure more of these valuable messages are likely to be irrelevant to you.”

Some might find this type of clarity distasteful, but directness without inventing crazy rationales might restore some of the pre-IPO and pre-Overture/GoTo.com luster to the online ad giant. Calling itself a “search” engine doesn’t do it for a couple of the people I know in Harrod’s Creek.

Directness, clarity, and even a touch of honesty? That’s a stupid idea, I assert. Making stuff up as the clown car rolls down the Information Highway may blaze trails the Bezos bulldozer will convert into monetization opportunities sooner rather than later.

Stephen E Arnold, June 13, 2019

The Jedi Return: Page and Brin Address Those Perceived to Be Really Smart

June 11, 2019

I read “Elusive Google Co-Founders Make Rare Appearance at Town Hall Meeting.” What these fine innovators do is not likely to become a talking point in Harrod’s Creek, Kentucky. I did note this passage in the write up:

Google co-founders Larry Page and Sergey Brin have long been the stars of the search giant’s weekly “TGIF” town hall meetings. But for the past six months, the pair had been no-shows, an absence that coincided with Google controversies over antitrust concerns, work in China and military contracts.

Interesting, but what happened to the discrimination and sexual harassment dust-ups? I assume that certain management flubs are more important than others. It is clear that the researcher working on this CNET article did not come across information about a certain liaison which triggered a divorce and an attempted suicide. And what about the Googler, the yacht, the alleged female of ill repute, and a drug overdose? Obviously fake, irrelevant, or long-forgotten items, I assume.

I also noted this passage:

The disappearing act drew criticism from those who see Page’s and Brin’s absence as dodging accountability during the most tumultuous period in the company’s 20-year history.

What’s that reminder about correlation and causation? Probably the six-month hiatus is a refinement of the firm’s management techniques. Are there antecedents? What about the restructuring into Alphabet to provide more insulation in the Googleplex from the heat of certain investigations? What about the “Gee, we’re not really working on a China-centric search system”?

How about this statement from the article?

But as Google’s issues mount, the company’s co-founders have faded into the background.

There’s even a reference to the YouTube clown car.

Most recently, Google-owned YouTube drew blowback last week after the service refused to take down the channel of Steven Crowder, a conservative comedian who hurled homophobic slurs at Carlos Maza, a Vox journalist and video host who is gay.

And the discrimination and retribution approach to human resources warranted a comment:

One of the questions during the Q&A portion of the May 30 TGIF concerned alleged retaliation from management against employees, according to a partial transcript viewed by CNET. The question was about the departure of Claire Stapleton, a Google walkout organizer who said she was unfairly targeted because of her role in the protest. Stapleton announced her resignation in a blog post Friday. The questioner asked if “outside objectivity” could be added to HR investigations.

The write up is interesting, but there are aspects of the Google matter which warrant amplification, if not by the real news outfit CNET, then by some other entity, perhaps former MBA adjunct professors embracing the gig economy after the MBA implosion.

What the write up makes clear but does not explain is the unwillingness of the Google to be forthright about what it has done, when it began to implement certain interesting monetization procedures, and how it decided upon certain management processes to deflect criticism and understanding of the firm’s Titanic algorithms.

The CNET write up is interesting, not for what it reveals, but for its omissions. Today that’s real news.

Stephen E Arnold, June 11, 2019

AT&T: A Job Creator. Wait, Job? Jobs?

June 6, 2019

Gee, why didn’t that work out as promised? We were told the administration’s tax cuts for big business would lead to more jobs, but here is the latest disappointment in that arena: BoingBoing reports, “AT&T Promised It Would Create 7,000 Jobs if Trump Went Through with Its $3B Tax-Cut, but They Cut 23,000 Jobs Instead.” The very brief but indignant write-up cites this Ars Technica piece, and summarizes:

“In 2017, AT&T CEO Randall Stephenson campaigned for Trump’s massive tax-cuts by promising that they would create 7,000 jobs with the $3,000,000,000 they stood to gain, as well as investing in new infrastructure: instead, the company has reduced its headcount by 23,328 workers (6,000 in the first three months of 2019!) while reducing capital expenditures by $1.4B (AT&T reduced capex by another $900m in Q1/2019). AT&T substantially increased executive bonuses over the same period.”

Of course the phone outfit did. AT&T acknowledges these findings by the Communications Workers of America (CWA), but blames the discrepancy on the fact that “technology is changing rapidly.” Was that a surprise? Perhaps high-tech companies should refrain from making promises they are not sure they can keep.

Cynthia Murrell, June 6, 2019

Googley Things

June 5, 2019

I wanted to write about Google’s recent outage. But the explanations were not exciting. That’s too bad. Configuration problems are what bedevil careless or inexperienced technicians. The fact that Google went down speaks volumes about what happens when the whizzy cloud technology is disrupted by the climate change Google faces. Are there storms gestating in Washington, DC?

The more interesting news, if it is indeed accurate in a boom time for the faux, appeared in “YouTube Says Homophobic Abuse Does Not Violate Harassment Rules.” The write up states:

In a compilation video Maza created of some of his mentions on Crowder’s show, the host attacks Maza as a “gay Mexican”, “lispy queer” and a “token Vox gay atheist sprite”.

DarkCyber assumes that Google has “data” to back up its decision. The company’s smart software and exceptional engineers do not make judgment calls without considering statistical analyses of clicks, word counts, and similar “hard” facts.

With these data, it seems to me that Google has put to rest any mewing and whimpering about the reason it was able to differentiate between abuse and debate.

This is the Google, not some liberal arts magnet. Logic is logic. Oh, about that outage and the quality of the Google technical talent? Well, that is just an outlier.

Stephen E Arnold, June 5, 2019

Google Management: Moving Forward One Promise at a Time

June 5, 2019

It looks like Google has made good on one of the promises it made after last year’s outcry from employees—“Google Updates Misconduct Reporting Amid Employee Discontent,” reports Phys.org. The company had already heeded calls to end mandatory arbitration and to modify benefit rules for some workers. Still, this is too little too late for some. Reporter Rachel Lerman writes:

“Google said Thursday it has updated the way it investigates misconduct claims—changes the company pledged to make after employees called for action last year. The company is simultaneously facing backlash from two employees who say they faced corporate retaliation after helping to organize the November walkout protests.”

We noted:

“Thursday’s changes are designed to make it simpler for employees to file complaints about sexual misconduct or other issues. Google also issued guidelines to tell employees what to expect during an investigation, and added a policy that allows workers to bring along a colleague for support during the reporting process. Google CEO Sundar Pichai promised to make these changes last fall after thousands of Google employees at company offices around the world briefly walked out to protest the company’s handling of sexual misconduct investigations and payouts to executives facing misconduct allegations.”

The charges of retaliation for spearheading last autumn’s walkout reached some employees via an email from those two organizers. One said she was commanded to stop her outside research into AI ethics, while the other said she was effectively demoted (until she brought in her own lawyer; now she simply faces a “hostile” work environment, she says). For its part, Google claims those actions had nothing to do with the protests. Naturally.

Cynthia Murrell, June 5, 2019

Reflecting about New Zealand

June 5, 2019

Following the recent attacks in two New Zealand mosques, during which a suspected terrorist successfully live-streamed horrific video of their onslaught for over a quarter-hour, many are asking why the AI tasked with keeping such content off social media failed us. As it turns out, context is key. CNN explains “Why AI Is Still Terrible at Spotting Violence Online.” Reporter Rachel Metz writes:

“A big reason is that whether it’s hateful written posts, pornography, or violent images or videos, artificial intelligence still isn’t great at spotting objectionable content online. That’s largely because, while humans are great at figuring out the context surrounding a status update or YouTube video, context is a tricky thing for AI to grasp.”

Sites currently try to account for that shortfall with a combination of AI and human moderators, but they have trouble keeping up with the enormous influx of postings. For example, we’re told YouTube users alone upload more than 400 hours of video per minute. Without enough people to provide context, AI is simply at a loss. Metz notes:

“AI is not good at understanding things such as who’s writing or uploading an image, or what might be important in the surrounding social or cultural environment. … Comments may superficially sound very violent but actually be satire in protest of violence. Or they may sound benign but be identifiable as dangerous to someone with knowledge about recent news or the local culture in which they were created.”

We also noted:

“… Even if violence appears to be shown in a video, it isn’t always so straightforward that a human — let alone a trained machine — can spot it or decide what best to do with it. A weapon might not be visible in a video or photo, or what appears to be violence could actually be a simulation.”

On top of that, factors that may not be apparent to human viewers, like lighting, background images, or even frames per second, complicate matters for AI. It appears it will be some time before we can rely on algorithms to shield social media from abhorrent content. Can platforms come up with some effective alternative in the meantime? The pressure is on.
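The “combination of AI and human moderators” described above can be pictured as a simple routing rule: a model scores each item, high-confidence items are handled automatically, and the ambiguous middle band, which is exactly where context matters, lands in a human review queue. The sketch below is a generic, hypothetical illustration of that pattern; the thresholds, names, and scores are invented and do not describe any platform’s actual pipeline.

```python
# Generic, hypothetical sketch of an AI-plus-human moderation triage rule.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str   # "remove", "allow", or "human_review"
    score: float  # model's estimated probability the content is violative

REMOVE_THRESHOLD = 0.95  # confident enough to act automatically
ALLOW_THRESHOLD = 0.10   # confident enough to leave the item alone

def triage(score: float) -> Decision:
    """Route one item based on a model's violence-probability score."""
    if score >= REMOVE_THRESHOLD:
        return Decision("remove", score)
    if score <= ALLOW_THRESHOLD:
        return Decision("allow", score)
    # The ambiguous middle band (satire, simulations, off-screen weapons)
    # is where context matters and a human moderator has to look.
    return Decision("human_review", score)

if __name__ == "__main__":
    for s in (0.99, 0.02, 0.55):
        print(f"score={s:.2f} -> {triage(s).action}")
```

The design problem the article points to is the size of that middle band: with more than 400 hours of video uploaded to YouTube every minute, even a small share of “human_review” decisions swamps the available moderators.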

Cynthia Murrell, June 5, 2019
