The GOOG: Bright People, Interesting Management Tactics
November 6, 2019
Silicon Valley is notorious for its leftist political leanings. As much as the workforce supports left-wing views, Silicon Valley leaders are more concerned with their bottom line and maintaining a politically correct image. BuzzFeed News reports, “Google Removed Employee Questions About Its Hiring Of A Former DHS Staffer Who Defended The Muslim Travel Ban.”
In this recent example of maintaining an inoffensive image, Google removed questions about its hiring of Miles Taylor, a former official at the US Department of Homeland Security (DHS), from the internal Google message board, Dory. Dory is used to ask and vote on questions for management. The questions were removed because they concerned Taylor’s defense of Trump’s Muslim travel ban. Google staffers were especially upset about Taylor’s hiring in September 2019 because Google executives had actually protested against policies Taylor helped implement at the DHS.
Lately, Google has been struggling to balance free expression for its staffers with “corporate harmony.” Earlier in 2019, Google settled with the National Labor Relations Board over the company’s attempts to prevent employees from discussing their dissatisfaction with the company.
Google defended hiring Taylor because he was involved in neither the original Muslim travel ban drafts nor the family separation policy. Google declined to comment on removing discussions about Taylor, but two sources close to the matter did confirm that some of the comments were removed because they were viewed as personal attacks on Taylor. Other discussions about him remained posted on Dory.
It is ironic that Google hired Taylor at all, given its executives’ past views:
“Google and its leaders had voiced their strong opposition to the Muslim travel ban and family separations occurring at the Mexico border. In January 2017, following the announcement of the original travel ban, Google cofounder Sergey Brin joined protesters at San Francisco International Airport, while Google CEO Sundar Pichai pointedly voiced his displeasure on Twitter, in an email to staff, and in a much-publicized employee meeting.
‘The stories and images of families being separated at the border are gut-wrenching,’ Pichai tweeted as the Trump administration ramped up its anti-immigration policy in the summer of 2018. ‘Urging our government to work together to find a better, more humane way that is reflective of our values as a nation. #keepfamiliestogether.’”
Are Google executives unaware that their management decisions may be interpreted as off-center? Are Google employees allowing politics to control their workplace? Or is this simply reflective of the here and now?
Whitney Grace, November 6, 2019
Google: A Ray of Light?
November 5, 2019
Google’s algorithms may not be so bad after all—it seems that humans are the problem yet again. Wired discusses a recent study from Penn State in its article, “Maybe It’s Not YouTube’s Algorithm That Radicalizes People.” Extreme ideological YouTube channels have certainly been growing by leaps and bounds. Many reporters have pointed to the site’s recommendation engine as the culprit, saying its suggestions, often running on auto-play, guide viewers further and further down radicalization rabbit holes. However, political scientists Kevin Munger and Joseph Phillips could find no evidence to support that view. Reporter Paris Martineau writes:
“Instead, the paper suggests that radicalization on YouTube stems from the same factors that persuade people to change their minds in real life—injecting new information—but at scale. The authors say the quantity and popularity of alternative (mostly right-wing) political media on YouTube is driven by both supply and demand. The supply has grown because YouTube appeals to right-wing content creators, with its low barrier to entry, easy way to make money, and reliance on video, which is easier to create and more impactful than text.”
The write-up describes the researchers’ approach:
“They looked at 50 YouTube channels that researcher Rebecca Lewis identified in a 2018 paper as the ‘Alternative Influence Network.’ Munger and Phillips reviewed the metadata for close to a million YouTube videos posted by those channels and mainstream news organizations between January 2008 and October 2018. The researchers also analyzed trends in search rankings for the videos, using YouTube’s API to obtain snapshots of how they were recommended to viewers at different points over the last decade. Munger and Phillips divided Lewis’s Alternative Influence Network into five groups—from ‘Liberals’ to ‘Alt-right’—based on their degree of radicalization. … Munger and Phillips found that every part of the Alternative Influence Network rose in viewership between 2013 and 2016. Since 2017, they say, global hourly viewership of these channels ‘consistently eclipsed’ that of the top three US cable networks combined.”
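For readers who want to poke at the same kind of data, here is a minimal sketch of how channel upload metadata (titles, publish dates, view counts) can be pulled via the YouTube Data API v3. This is an illustration of the general approach, not the researchers’ actual code; the API key and channel ID are placeholders.

```python
# Minimal sketch: collect upload metadata for a YouTube channel via the Data API v3.
# Requires google-api-python-client; the API key and channel ID below are placeholders.
from googleapiclient.discovery import build

API_KEY = "YOUR_API_KEY"  # hypothetical placeholder
youtube = build("youtube", "v3", developerKey=API_KEY)

def uploads_playlist_id(channel_id: str) -> str:
    """Return the ID of the channel's 'uploads' playlist."""
    resp = youtube.channels().list(part="contentDetails", id=channel_id).execute()
    return resp["items"][0]["contentDetails"]["relatedPlaylists"]["uploads"]

def channel_video_metadata(channel_id: str, max_pages: int = 2) -> list:
    """Collect title, publish date, and view count for a channel's uploaded videos."""
    playlist_id = uploads_playlist_id(channel_id)
    videos, page_token = [], None
    for _ in range(max_pages):
        page = youtube.playlistItems().list(
            part="contentDetails",
            playlistId=playlist_id,
            maxResults=50,
            pageToken=page_token,
        ).execute()
        video_ids = [item["contentDetails"]["videoId"] for item in page["items"]]
        details = youtube.videos().list(
            part="snippet,statistics", id=",".join(video_ids)
        ).execute()
        for v in details["items"]:
            videos.append({
                "title": v["snippet"]["title"],
                "published": v["snippet"]["publishedAt"],
                "views": int(v["statistics"].get("viewCount", 0)),
            })
        page_token = page.get("nextPageToken")
        if not page_token:
            break
    return videos

# Example (hypothetical channel ID):
# metadata = channel_video_metadata("UC_x5XG1OV2P6uZZ5FSM9Ttw")
```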
The Penn State team also cites researcher Manoel Ribeiro, who insists his rigorous analysis of the subject, published in July, has been frequently misinterpreted to support the bad-algorithm narrative. Why would mainstream media want to shift focus to the algorithm? Because, Munger and Phillips say, that explanation points to a clear policy solution, wishful thinking though it might be. The messiness of human motivations is not so easily dealt with.
Both Lewis and Ribeiro praised the Penn State study, indicating it represents a shift in this field of research. Munger and Phillips note that, based on the sheer volume of likes and comments these channels garner, their audiences are building communities—a crucial factor in the process of radicalization. Pointing fingers at YouTube’s recommendation algorithm is a misleading distraction.
Cynthia Murrell, November 4, 2019
Zuck Under Fire
November 5, 2019
Mark Zuckerberg might be the lead smart dude at Facebook, but that is only one facet of his career. The Sydney Morning Herald published an editorial about Zuckerberg called, “Mr. Zuckerberg, Have You Considered Retirement?” and it opened with the following description of him:
“If I were Mark Zuckerberg — newfound defender-to-the-death of liberal free expression even if it includes outright lying except if there are female nipples, a would-be curer of all the world’s disease, side-gig education reformer, immigration crusader, quirky dad, fifth wealthiest person in the world, hobnobber to pundits and politicians and all-around do-gooder digital hegemon who is also now vying to run the world’s money supply — I mean my God, Mark, where does all this end?”
Whoa! Zuckerberg has his hands full! Farhad Manjoo, the editorial’s author, suggested that Zuckerberg should vanish from the spotlight and retire to a nice, quiet Pacific island. He drew a comparison to Microsoft founder Bill Gates, who stepped down from the company and transformed himself into a philanthropic billionaire. Google founders Larry Page and Sergey Brin pulled back too and are basically ghosting society.
Manjoo did state that Zuckerberg’s commitment to addressing hot topics might be seen as admirable, but his responses to his opponents are confusing and conflate what is good for Facebook with what is good for the US. Democrats have turned him into one of the party’s villains, and the Republicans are not too fond of him either.
Zuckerberg has a lot of power due to his wealth, most of which he earned by applying his intelligence to a product that became part of everyday life. He bows, however, to making money and “supporting” anyone who will put more dollars in his pocket.
Do not forget that some observers speculate Zuckerberg is on the autism spectrum, which might explain his awkward public behavior. That does not excuse his political maneuvering or his amassing of more and more wealth without a conscience. He is socially awkward, not ignorant of the world.
Whitney Grace, November 4, 2019
Google Protest: An Insulting Anniversary
November 2, 2019
DarkCyber noted this write up about the Google walkout anniversary in CNet, an online information service which may not be capturing too many Google ads in 2020.
The headline is Googley; that is, it is designed to make the story appear in a Google search results list. The jabber may work. But what may not be as efficacious is building bridges to the Google itself. For example, the write up states:
The Google protests [maybe about sexual matters, management decisions, money?] didn’t achieve everything their organizers were seeking. Several Google workers and former workers are dissatisfied with the company’s response. Organizers say the company has done the bare minimum to address concerns, and employees allege that it has retaliated against workers and sought to quash dissent. “They’ve been constantly paying lip service,” said one Google employee who was involved with the walkout. “It’s insulting to our intelligence,” said the person, who requested anonymity because of fear of retribution from the company.
Then the observation:
Google declined to make its senior leadership team, including co-founders Larry Page and Sergey Brin, CEO Sundar Pichai and human resources chief Eileen Naughton, available for interviews. In a statement, Naughton touted changes Google has made over the past year, including streamlining the process for people to report abuse and other problems.
A few observations may be warranted:
- Google’s management methods may follow the pattern set in high school science clubs when those youthful wizards confront something unfamiliar
- A problem seems to exist within the GOOG
- Outfits like CNet are willing to explain what may be a Google shortcoming because Google is no longer untouchable.
Interesting? If paid employees won’t get along and go along, how will that translate into Google’s commitment to enterprise solutions? What if an employee inserts malicious code in a cloud service as a digital protest? What if… I don’t want to contemplate what annoyed smart people can do at 3 am with access credentials.
Yikes. Insulting.
Stephen E Arnold, November 2, 2019
Facebook: What Is a Threat to the Company?
October 29, 2019
I spotted a headline on Techmeme. Rewriting headlines is part of the Techmeme approach to communication. The link to which the headline points is this New York Times article. Here is the NYT headline:
Dissent Erupts at Facebook Over Hands-Off Stance on Political Ads
This is the Techmeme headline:
Sources: over 250 Facebook employees have signed a letter visible on an internal forum that says letting politicians lie in ads is “a threat” to the company
The messages are almost the same: staff pushback is a problem. But isn’t it part of the current high-tech company ethos?
The threat is management’s inability to maintain control. Companies typically work toward a goal; for example, manufacturing video doorbells or selling asbestos-free baby powder. (Okay, those are bad examples.)
Perhaps something larger is afoot?
The corrosion of an ethical fabric, which allows certain aspects of human behavior to move through a weakened judgmental membrane, may be more significant. The problem is not Facebook’s alone.
Are there similarities between a company shipping baby powder with questionable ingredients and Facebook?
Interesting question.
Stephen E Arnold, October 29, 2019
Economists: The Borges Approach
October 28, 2019
Now this is a source among sources: Epoch Times. DarkCyber is not equipped to verify the information in “Krugman Admits He and Mainstream Economists Got Globalization Wrong.” One point in the write up evoked memories of a college course from when I was a callow youth; to wit:
the consensus economists failed to measure adequately and properly account for the impact of globalization on specific communities, some of which were disproportionately hit hard. This despite the fact that models predicted, and figures later showed, that free trade was a net gain in terms of both jobs and wages in the broader American economy. Generalized gain but localized pain.
There you go. Better for everyone. Not so good for some others.
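A back-of-the-envelope calculation, using entirely hypothetical numbers, shows how “generalized gain but localized pain” works in practice:

```python
# Hypothetical figures purely for illustration, not data from the article.
national_workforce = 150_000_000
national_jobs_gained = 300_000        # small net gain spread across the whole economy
town_workforce = 20_000
town_jobs_lost = 5_000                # loss concentrated in one factory town

national_change = national_jobs_gained / national_workforce   # about +0.20% nationally
local_change = -town_jobs_lost / town_workforce                # -25% in that town

print(f"National employment change: {national_change:+.2%}")
print(f"Factory-town employment change: {local_change:+.2%}")
```

The aggregate statistics register a modest gain; the town registers a disaster.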
In business, the technology magnates are doing fine. Local retail shops, not so fine. Some countries are chugging along. Others seem to be shifting into riot mode. Planning a trip to Bogota, Lima, or Paris for a three-day weekend soon?
What about that college economics class through which I sat, asking such questions as, “What is this professor talking about?” and “Have I awakened in a short story by Jorge Luis Borges?”
Maybe the Epoch Times is neither wrong nor right about Paul Krugman? Paradoxical thoughts have legs in the online world. What’s real and what’s fake? Think of those riders in the wasteland in front of what seems to be a mountain range. Borges did and look what that earned him.
Stephen E Arnold, October 28, 2019
Amazon: Specialist in Complexity
October 22, 2019
The word “complexification” is tailor-made for Amazon. A few examples might be helpful, right?
- Third-party sellers provide expired food. Something’s wrong, it seems. Complexification of the vendor vetting, product vetting, and warehouse vetting processes might be a reason. (I am setting aside “profit at all costs” because who wants to rain on the Amazon bulldozer?)
- AWS services. Really, who can name the different types of Amazon databases? There’s an Oracle killer, an unstructured data killer, and an Amazon blockchain solution that’s just perfect for Dubai. Can’t keep ‘em straight? Take a cheap course in how to speak Amazon, you dynamo, you.
- Return authorizations. Use Opera? Well, the labels don’t print correctly. Call a human? It is helpful to speak two or three languages other than English. English as she is spoken at Amazon — well, let’s think about it this way — may not be what talking heads on CNBC speak.
But the most interesting complexity problem concerns Twitch. Twitch may be a problem for YouTube and — get this, gentle reader — Facebook.
The hitch in the git along was summarized in The Verge’s interview with Emmett Shear, the big Twitcher. Here’s the passage I noted:
The changes are coming, Shear said, because the company didn’t think it was doing well enough when it talked to streamers about moderating their channels. There were streamers with teams that had everything working, but there were also streamers who felt overwhelmed and like they couldn’t figure out how to use all of Twitch’s moderation tools. “It popped as a problem,” Shear said. “We decided we had to do better. And I think it’s a big step in the right direction.” Twitch’s moderation philosophy, in general, comprises two parts: enforcement works on the level of the individual and on the level of the platform.
Okay, complexity, two-tier moderation, and a lack of “transparency.” Transparency is an interesting word because it suggests making stuff clear. A lack of transparency means stuff is not clear.
Complexity?
Yes.
In my recent lecture at the TechnoSecurity & Digital Forensics Conference I offered a few examples of Twitch’s challenges:
- Streaming gambling with links to donate money to the gamblers and tips for getting an advantage
- SweetSaltyPeach’s soft excitement morphing into RachelKay’s really dull doing nothing but providing a momentary glimpse of the old formula for success
- A first-run movie available via a stream.
Net net: Amazon’s fatal flaw may be its burgeoning complexity. Not even Bezos billions can make some things simple, clear, and easy to understand.
If Twitchers can’t figure out what to do, what will lesser mortals in government agencies achieve? Let’s watch Dubai for clues.
Stephen E Arnold, October 21, 2019
Oxford University and HobbyLobby: A Criminal Duo?
October 17, 2019
I have wandered around Oxford and its hallowed halls. Interesting place. Not much going on. I have also visited a HobbyLobby. Lots going on. My take: Oxford was snooty; HobbyLobby was crass.
I read “Oxford Professor Accused of Selling Ancient Bible Fragments.” When I saw the headline, I thought about MIT’s tie-up with Jeffrey Epstein and the cover-up of that connection.
What’s with universities?
The write up states:
An Oxford University professor has been accused of selling ancient Bible fragments to a controversial US company that has been involved in several high-profile scandals related to its aggressive purchases of biblical artifacts.
DarkCyber noted:
Commenting after the statement on Monday, Nongbri [New Testament scholar], said: “The sale of the manuscripts and the attempt to cover it up by removing records is almost unbelievable. But the first thing to note are the words ‘so far’. We don’t yet know the full extent of this. More items may well have been sold to Hobby Lobby.”
No digital connection. No Dark Web. Just a prestigious institution and an outfit which sells stuff to people who create owl plaques in their ovens.
Remember. Just allegations.
Stephen E Arnold, October 17, 2019
IQ and Health: Maybe Plausible?
October 16, 2019
DarkCyber noted the Scientific American (is this an oxymoron now?) article “Bad News for the Highly Intelligent: Superior IQs Are Associated with Mental and Physical Disorders, Research Suggests.” DarkCyber enjoys the waffling baked into the phrase “research suggests.”
The write up states:
The survey of Mensa’s highly intelligent members found that they were more likely to suffer from a range of serious disorders.
The write up reports:
The biggest differences between the Mensa group and the general population were seen for mood disorders and anxiety disorders.
A reasonable question to pose is, “Why?” Well, there is an answer:
To explain their findings, Karpinski [the researcher] and her colleagues propose the hyper brain/hyper body theory. This theory holds that, for all of its advantages, being highly intelligent is associated with psychological and physiological “over excitabilities,” or OEs. A concept introduced by the Polish psychiatrist and psychologist Kazimierz Dabrowski in the 1960s, an OE is an unusually intense reaction to an environmental threat or insult. This can include anything from a startling sound to confrontation with another person.
We noted this paragraph:
Psychological OEs include a heightened tendency to ruminate and worry, whereas physiological OEs arise from the body’s response to stress. According to the hyper brain/hyper body theory, these two types of OEs are more common in highly intelligent people and interact with each other in a “vicious cycle” to cause both psychological and physiological dysfunction. For example, a highly intelligent person may overanalyze a disapproving comment made by a boss, imagining negative outcomes that simply wouldn’t occur to someone less intelligent. That may trigger the body’s stress response, which may make the person even more anxious.
Interesting. Over excitabilities. A more informal way to reach a similar conclusion is to attend a hacker conference, observe the employee (not contractor) dining facility at Google Mountain View, or watch an episode or two of Shark Tank. One can also dip into history: Van Gogh’s ear, Michelangelo’s aversion to clean feet, and the fierce Prioritätsstreit between Newton and Leibniz. (Leibniz’s notation won. Take that, you first-year students.) Do you hear a really smart person laughing?
Stephen E Arnold, October 16, 2019
Robots: Not Animals?
October 9, 2019
The prevailing belief is that if Google declares something to be true, then it is considered a fact. The same can be said about YouTube, because if someone sees it on YouTube then, of course, it must be real. YouTube already has trouble determining what truly is questionable content; for example, it often fails to flag white supremacist and related videos for removal. Another curious incident about flagged content concerns robot “abuse,” as Gizmodo reports in “YouTube Concedes Robot Fight Videos Are Not Actually Animal Cruelty After Removing Them By Mistake.”
YouTube rules state that videos displaying animal suffering, such as dogfights and cockfights, cannot be posted on the streaming service. For some reason, videos and channels centered on robot fighting were cited and their content was removed.
“…the takedowns were first noted by YouTube channel Maker’s Muse and affected several channels run by BattleBots contenders, including Jamison Go of Team SawBlaze (who had nine videos taken down) and Sarah Pohorecky of Team Uppercut. Pohorecky told Motherboard she estimated some 10 to 15 builders had been affected, with some having had multiple videos removed. There didn’t appear to be any pattern in the titles of either the videos or the robots themselves, beyond some of the robots being named after animals, she added.”
YouTube’s algorithms make mistakes, and robots knocking the gears and circuit boards out of each other were deemed violent, along the lines of “inflicting suffering.” YouTubers can appeal removal citations so that content can be reviewed again.
Google humans doing human deciding. Interesting.
Whitney Grace, October 9, 2019