Google Wants to Do Better: Read These Two Articles for Context
June 10, 2021
You will need to read these two articles before you scan my observations.
The first write up is “How an Ex Googler Turned Artist Hacked Her Work to the Top of Search Results.” This is a case example of considerable importance, at least to me and my research team. The methods of word use designed to bond to Google’s internal receptors are the secret sauce of search engine optimization experts. But here is a Xoogler manipulating Google’s clueless methods.
The second article is “Google Seeks to Break Vicious Cycle of Online Slander.” Ignore the self-praise of the Gray Lady. The main point is that Google is going to take action to deal with the way in which its exemplary smart software handles “slander.” The other point is that Google has been converted from do-gooder to the digital equivalent of Hakan Ayik, the individual the Australian Federal Police converted into the ultimate insider. The similarity is important, at least to me.
Don’t agree with my interpretation? No problem. Nevertheless, I will offer my observations:
First, after 20 years of obfuscation, it is clear that the fragility and exploitability of Google’s smart software are known to the author of “How an Ex Googler Turned Artist Hacked…”. Therefore, the knowledge of Google’s willful blind spots is no secret to the roughly 100,000 full time equivalent Googlers.
Second, instead of taking direct, immediate action, Google is once again doing the “ask forgiveness” thing with words of assurance. Actions speak louder than words. Maybe this time?
Third, neither of the referenced articles speaks bluntly and clearly about the danger that mishandling of meaning poses. Forget big, glittery issues like ethics or democracy. Think manipulation and becloudization.
Stephen E Arnold, June 10, 2021
The Ultimate Insider Tool: Work Technology
June 10, 2021
“Many Staff Are Still Using Work Devices for Personal and Illegal Activities” explains something about insiders. Here’s the write up’s comment about something that I thought everyone knew:
Remote employees do not always consider cybersecurity risks.
This bears-live-in-the-woods statement is supported by thumbtyping research too. The write up reports:
The password security company [Yubico, a dongle outfit] surveyed 3,000 remote staff from around Europe and found that almost half (42%) use work-issued devices for personal tasks. Roughly a third of this group use corporate tech for banking and shopping, while 7% visit illegal streaming websites. What’s more, senior members of staff are among the worst offenders; 43% of business owners and 39% of C-level executives admit to misusing work devices, with many also dabbling in illegal activities online.
How do you like that seven percent figure? If a government agency has 50,000 full time equivalents, 3,500 are off the reservation. An industrious bad actor could seek out one of these individuals in an effort to create some fun; for example, crafting a way to generate false passports, gaining access to a “secure” network, or fiddling with geo coordinates to make a border surveillance drone watch a McDonald’s, not the area around Organ Pipe Cactus near Lukeville, Arizona.
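For the curious, the arithmetic behind that estimate is simple. Here is a minimal sketch, assuming the survey’s seven percent rate scales uniformly to a hypothetical 50,000-person agency (the agency size and the extrapolation are my assumptions, not Yubico’s):

```python
# Back-of-the-envelope check, not Yubico's math: assume the 7% survey rate
# applies uniformly to a hypothetical agency of 50,000 full time equivalents.
ILLEGAL_STREAMING_RATE = 0.07   # 7% of surveyed remote staff
AGENCY_HEADCOUNT = 50_000       # hypothetical government agency FTEs

at_risk_staff = int(AGENCY_HEADCOUNT * ILLEGAL_STREAMING_RATE)
print(f"Estimated staff misusing work devices this way: {at_risk_staff}")  # 3500
```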
The write up quotes the cyber security vendor responsible for the original study as saying:
“With millions of workers focused on the pressures of completing tasks in varying and sometimes unusual circumstances, security best practices are often put on the backburner.”
What’s the fix? A Yubico key, of course. But wait. Aren’t there other factors to address? Nah. Time to let the dog out and make an iced coffee with almond milk and cinnamon.
Stephen E Arnold, June 10, 2021
Does YouTube for Kids Need a Pause Button?
June 10, 2021
YouTube Kids is a child-friendly version of the video streaming platform. Google created YouTube Kids so younger viewers would only be exposed to age-appropriate content. While YouTube Kids is popular with its intended audiences, parents, child safety advocates, and members of Congress dislike its autoplay feature. Vox reports on the issue in “YouTube’s Kids App Has a Rabbit Hole Problem.”
The problem with YouTube Kids’ autoplay feature is that it never stops. When one video ends, another begins in an endless loop. This would not be a problem if there were a way to disable autoplay, but that is not an option. Child safety advocates and parents are worried that a constant video stream teaches kids bad video consumption habits and tricks them into making certain choices.
Parents are also upset with the app for several other reasons. The inability to disable autoplay does not allow them to control their kids’ viewing habits. YouTube Kids does have a timer feature, but it clocks out at an hour and must be reset each time. Parents can also limit the app to certain videos or channels and they can disable the search function.
The biggest objection to the autoplay feature is the app’s algorithm that selects recommended videos. The problem with the algorithm is that it can return inappropriate videos about dieting, suicide and suicide ideation, mean pranks, and mature cartoons.
YouTube apparently wants to protect kids and work with parents, but it is slow to respond:
“For now, YouTube has not said whether YouTube Kids will have autoplay on or off by default. It’s also unclear how easy it will be to turn off autoplay. Still, the autoplay in YouTube Kids is a reminder that design choices made by tech platforms do have an impact on how parents and children interact with technology, and where regulators might step in.”
Google, YouTube’s parent company, does want to protect children, because if it does not, its profit margin suffers.
Whitney Grace, June 10, 2021
Can AI Really Understand Human Emotion?
June 10, 2021
An article at IT News Africa makes quite an assumption about allegedly smart software. It purports to describe “How Emotion AI Can Make the World a Better Place.” Writer Jenna Delport is, of course, correct when she notes many people struggle to understand others’ emotions. But is the best answer really to hand off the interpretation to an algorithm? Delport writes:
“Researchers at Stanford University modified Google’s augmented reality glasses to read emotions in others and notify the wearer. The glasses detect someone’s mood through their eye contact, facial expressions and body language, and then tell the wearer what emotions it’s picking up. ‘Emotion AI taps into the individual,’ explains Zabeth Venter, CEO and co-founder of Averly. ‘If you think about facial recognition, which is a kind of emotion AI, I can pick up if you like what I’m saying by whether your smile is a smirk or a real genuine smile.’ Such nuances go deeper. Another example is polling: what is your favorite color? Maybe it’s purple. But did you say that enthusiastically? Did you hesitate? Did you just say it to say something? Did you even understand the question? We simply can’t get this level of context from the available surveys, sales data and the many other ways we try to understand humans through information. But through emotion AI, we can grasp incredible nuance.”
Incredible nuance is great—if it is accurate. The write-up acknowledges AI’s problem with bias, but suggests combating it is as simple as training AI on better data sets than what is simply (and cheaply) available on the Web. Instead, developers would capture cultural nuances by training their AI on localized data. They would also, we’re told, balance input from male and female perspectives. That sounds like a good start, but will eliminating bias really be that simple? It may be premature to make promises about AI’s new and improved emotional sensitivity.
Cynthia Murrell, June 10, 2021
Ninfex Is a New Take on Internet Search
June 10, 2021
The creator of “experimental” search site Ninfex is trying a different approach that uses neither crawlers nor AI. The site’s Hello page explains:
“We rely on 2 proxies for search relevance. First: URL score (user votes). Second: Links to discussions on major forums. All submissions on Ninfex are votable by users. When you submit a link, you can also submit up to 5 supporting links (to discussions about that link) from external forums. Among the current submissions you are most likely to see forum links to reddit, hackernews, lobsters, stackexchange & tildes because those are the forums that I frequent most often.”
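Ninfex does not disclose how it actually combines these two proxies, so treat the following as a purely hypothetical sketch: the Submission class, the weights, and the relevance_score function are invented for illustration, not taken from the site.

```python
# Hypothetical illustration only: Ninfex has not published its ranking math.
# The Hello page names two proxies: user votes and links to forum discussions.
from dataclasses import dataclass, field

@dataclass
class Submission:
    url: str
    votes: int                                        # proxy 1: user votes on the URL
    forum_links: list = field(default_factory=list)   # proxy 2: up to 5 discussion links

def relevance_score(sub: Submission, vote_weight: float = 1.0,
                    forum_weight: float = 2.0) -> float:
    # Invented weights; the real site may combine these signals very differently.
    return vote_weight * sub.votes + forum_weight * min(len(sub.forum_links), 5)

example = Submission(
    url="https://example.com/interesting-article",
    votes=12,
    forum_links=["https://news.ycombinator.com/item?id=0000000",
                 "https://www.reddit.com/r/programming/comments/abc123/"],
)
print(relevance_score(example))  # 12 * 1.0 + 2 * 2.0 = 16.0
```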
Yes, the young site still leans heavily toward material based on its maker’s interests, but that could change as its usership grows. The founder, who goes by the name traindreams, writes:
“I am the maker of Ninfex and right now I’m actively pushing to build the index around my personal wiki / research notes / bookmarks. That is why, the home feed mostly contains topics of my interest. The following is a list of those topics.”
See the page to explore that list of diverse and interesting topics, from Art to Psychology to Startups. Perhaps you will be inspired to vote or add a link. Traindreams has already made some changes based on user feedback, like cleaning up the UI, and promises more to come as the index grows. It looks like the idea is quality over quantity; we are curious to see if the enterprise takes off.
Cynthia Murrell, June 10, 2021
High School Management Method: Blame a Customer
June 9, 2021
I noted another allegedly true anecdote. If the information is correct, gentle reader, we have another example of the high school science club management method. Think the acne, no-date-for-the-prom, weird-laugh type of science club. Before you get too excited: yes, I was a member of my high school’s science club and, I think, an officer as well as a proponent of the HSSC approach to social interaction. Proud am I.
“Fastly Claims a Single Customer Responsible for Widespread Internet Outage” asserts:
The company is now claiming the issue stemmed from a bug and one customer’s configuration change. “We experienced a global outage due to an undiscovered software bug that surfaced on June 8 when it was triggered by a valid customer configuration change,” Nick Rockwell, the company’s SVP of engineering and infrastructure wrote in a blog post last night. “This outage was broad and severe, and we’re truly sorry for the impact to our customers and everyone who relies on them.”
Yep, a customer using the Fastly cloud service.
Two observations:
- Unnoticed flaws will be found and noticed, maybe exploited. Fragility and vulnerability are engineered in.
- Customer service is likely to subject the individual to an inbound call loop. Take that, you valued customer.
And what about Amazon’s bulletproof, super redundant, fail-over whiz-bang system? Oh, it failed for users.
Yep, high school science club thinking says, “We did not do it.” Yeah.
Stephen E Arnold, June 9, 2021
Two Write Ups X Ray High Tech Practices Ignoring Management Micro Actions
June 9, 2021
This morning I noted two widely tweeted write ups. The first is Wired’s “What Really Happened When Google Ousted Timnit Gebru”; the second is “Will Apple Mail Threaten the Newsletter Boom?” Please read both source documents. In this post I want to highlight what I think are important omissions in these and similar “real journalism” analyses. Some of the information in these two essays is informative; however, I think there is an issue which warrants commenting upon.
The first write up purports to reveal more about the management practices which created what is now a high profile case example. How many other smart software researchers have become what seems to be a household name among those in the digital world? Okay, Bill Gates. That’s fair. But Mr. Bill is a male, and males are not exactly prime beef in the present milieu. Here’s a passage I found representative of the write up:
Beyond Google, the fate of Timnit Gebru lays bare something even larger: the tensions inherent in an industry’s efforts to research the downsides of its favorite technology. In traditional sectors such as chemicals or mining, researchers who study toxicity or pollution on the corporate dime are viewed skeptically by independent experts. But in the young realm of people studying the potential harms of AI, corporate researchers are central.
What I noted is the “larger.” But what is missed is the Cinemascope story of a Balkanized workforce and management disconnectedness. I get the “larger”, but the story, from my point of view, does not explore the management methods creating the situation in the first place. It is these micro actions that create the “larger” situation in which some technology outfits find themselves mired. These practices are like fighting Covid with a failed tire on an F1 race car.
The second write up, “Will Apple Mail Threaten the Newsletter Boom?”, tackles the vendor saying one thing and doing a number of quite different “other things.” The write up is unusual because it puts privacy front and center. I noted this statement:
All that said, I can’t end without pointing out the ways in which Apple itself benefits from cracking down on email data collection. The first one is obvious: it further burnishes the company’s privacy credentials, part of an ongoing and incredibly successful public-relations campaign to build user trust during a time of collapsing faith in institutions.
Once again the big picture is privacy and security. From my point of view, Apple is taking steps to make certain it can do business with China and Russia. Apple wants to look out for itself, and it is conducting an effective information campaign. The company uses customers, “real journalists,” and services which provide cover for other actions. Like the story about Dr. Gebru and Google, this type of Apple write up misses the point of the great privacy trope: Increased revenues at the expense of any significant competitor. In this particular case, certain “real journalists” may have their financial interests suppressed. Management micro actions create collateral damage. Perhaps focusing on the micro actions in a management context will explain what’s happening and why “real journalists” are agitated?
What’s being missed? Stated clearly, the management micro actions that are fanning the flames of misunderstanding.
Stephen E Arnold, June 9, 2021
Google Is Not the Cause of a Decline in Newspaper Revenue
June 9, 2021
At a Google function, I met the founder of Craigslist. Now, in a Silicon Valley way, Google has fingered that individual’s online service as the reason the newspaper industry collapsed. Well, maybe not completely collapsed, but deteriorated enough for the likes of Silicon Valley titans to become the arbiters of truth.
The article “Google Decodes What Actually Led to Fall in Newspaper Revenue” states:
As print media houses struggle to sustain in the digital news era, a Google-led study has revealed that the decline of newspaper revenue is not happening because of Search or social advertising but from the loss of newspaper classifieds to specialist online players.
I believe this. I believe absolutely everything I read online. I am not a thumbtyper or a TikTokker, but I do try.
The analysis from economists at Accenture, commissioned by Google, looks at the revenues of newspapers in Western Europe over nearly two decades to reveal exactly what broke the old business model for newspapers.
The bad news is:
While many readers are not in the habit of paying for access to news, between 2013 and 2018, digital circulation volumes increased by 307 per cent to reach 31.5 million paying subscribers in the Western Europe region, more than offsetting the decline in paid print subscriptions.
The article reports that the Google-funded research revealed:
Google is significantly contributing to that growth. Over the past 20 years, Google has collaborated closely with the news industry and is one of the world’s biggest financial supporters of journalism, providing billions of dollars to support the creation of quality journalism in the digital age…
As I said, I believe everything I read online. And what about that person who created Craigslist? He may regret gobbling down those Googley hors d’oeuvres. Will newspaper publishers? Probably, but those estimable titans of information may choke on the celery stick with weird sand-colored dip.
Stephen E Arnold, June 9, 2021
Microsoft: Corporate Athleticism and Missing Pro Day
June 9, 2021
Yep, now it is a “new” Windows. And Teams. And the feature-rich Word, the software which struggles to number stuff and keep text and images where the author put them. Plus the security system that will prevent SolarWinds’ missteps and keep Exchange Server from becoming the lap dog of bad actors. “How Microsoft Fumbled Skype – and Let Zoom Flourish” is an interesting article. The implicit messages in the document are intriguing: Microsoft is big but not really able to handle opportunities like Skype in a way that avoids head shaking and hand wringing.
I marked this passage in the source document:
Although Skype, launched in 2003, has been available nine years longer than Zoom and is owned by tech titan Microsoft, Zoom has effectively left it in its dust. People don’t say “I’ll Skype you” as often as they say “I’ll Zoom you” anymore.
The write up provides some historical color but nails the reason for Microsoft’s Skype fumble:
In 2011, when Microsoft acquired Skype for US$8.5-billion, Zoom had just launched and Skype already had 100 million users. By 2014, Skype was popular enough to merit inclusion as a verb in the Oxford English Dictionary. And by 2015, it had 300 million users. But Skype’s technology wasn’t well-suited to mobile devices. When Microsoft set about to address that problem, it introduced a host of reliability nightmares for users. It gave them further headaches by redesigning Skype frequently and haphazardly while integrating messaging and video functions.
My experience with the new Skype is that the Teams environment is pretty darned confusing. This comment illustrates what happens when management guard rails are not in place for programmers who may have good ideas but cannot cope with the outstanding Microsoft operating systems:
When Microsoft set about to address that problem, it introduced a host of reliability nightmares for users. It gave them further headaches by redesigning Skype frequently and haphazardly while integrating messaging and video functions.
Could this Skype example provide some insight into the security issues Microsoft’s core systems face? I know which company will win the prize for most loved software from a coalition of Eastern European bad actors. Do you? Let’s ask a JEDI knight.
Stephen E Arnold, June 9, 2021
More Microsoft Finger Pointing: Not 1,000 Programmers, Just One
June 9, 2021
I got a kick out of “Microsoft Blames Human Error Amid Suspicion It Censored Bing Results for Tiananmen Square Tank Man.” “Tank man” refers to the individual who stood in front of a tank. Generally this is not a good idea because visibility within tanks is similar to that from a Honda CR-Z. Hold that. The tank has better visibility. Said tank continued forward, probably without noticing a slight impediment.
The story talks not about visibility; its focus is on Microsoft (yep, the SolarWinds and new Windows outfit). I read:
Throughout Friday afternoon, using the image search function on Microsoft-operated Bing using the words “Tank Man” returned the message, “There are no results for tank man / Check your spelling or try different keywords.” (According to Motherboard, the same is true in other countries outside the U.S., including France and Switzerland.)
DuckDuckGo and Yahoo search presented a similar no-results message. These are metasearch systems eager to portray themselves as much, much more.
Why? The article reports:
Microsoft has done business in China for decades, and Bing is accessible there. Like competitors such as Apple, the company has long complied with the whims of Chinese censors to maintain access to the country’s massive market, and it purges Bing results within China of information its government deems sensitive. However, the company said that blocking image results for “Tank Man” in the U.S. was not intentional and the issue was being addressed. “This is due to an accidental human error and we are actively working to resolve this…”
Could a similar error have been responsible for recent security lapses at the Redmond Defender office?
And no smart software, no rules-based instruction, and no filters involved in this curious search result?
Nope. I believe everything I read online about Microsoft. Call me silly.
Stephen E Arnold, June 9, 2021