Gebru-Gibberish Gives Google Gastroenteritis
February 24, 2021
At the outset, I want to address Google’s Gebru-gibberish:
Definition: Gebru-gibberish refers to official statements from Alphabet Google about the personnel issues related to the departure of two female experts in artificial intelligence working on the ethics of smart software. Gebru-gibberish is similar to statements made by those in fear of their survival.
Gastroenteritis: Watch the ads on Fox News or CNN for video explanations: Adult diapers, incontinence, etc.
Psychological impact: Fear, paranoia, flight reaction, irrational aggressiveness. Feelings of embarrassment, failure, serious injury, and lots of time in the WC.
The details of the viral problem causing discomfort among the world’s most elite online advertising organization relate to the management of Dr. Timnit Gebru. To add to the need to keep certain facilities nearby, the estimable Alphabet Google outfit apparently dismissed Dr. Margaret Mitchell as well. The output from the world’s most sophisticated ad sales company was Gebru-gibberish. Those words now characterize the shallowness of the Alphabet Google thing’s approach to smart software.
In order to appreciate the problem, take a look at “Underspecification Presents Challenges for Credibility in Modern Machine Learning.” Here’s the author listing and affiliation for the people who contributed to the paper available without cost on ArXiv.org:
The image is hard to read. Let me point out that the authors include more than 30 Googlers (who may become Xooglers in between dashes to the WC).
The paper is referenced in a chatty Medium write up called “Is Google’s AI Research about to Implode?” The essay raises an interesting possibility and makes a point which suggests that Google’s smart software may have some limitations:
Underspecification presents significant challenges for the credibility of modern machine learning.
Why the apparently illogical behavior with regard to Drs. Gebru and Mitchell?
My view is that the Gebru-gibberish released from Googzilla is directly correlated with the accuracy of the information presented in the “underspecification” paper. Sure, the method works in some cases, just as the 1998 Autonomy black box worked in some cases. However, to keep the accuracy high, significant time and effort must be invested. Otherwise, smart software evidences the charming characteristic of “drift”; that is, what was relevant before new content was processed is perceived as irrelevant or just incorrect in subsequent interactions.
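To make the “drift” notion concrete, here is a minimal Python sketch, assuming only toy synthetic data and scikit-learn. It is an illustration, not Google’s or Autonomy’s actual method: a model fit on yesterday’s content scores well until the content distribution shifts.

```python
# A toy illustration of model "drift." Hypothetical data; assumes only
# numpy and scikit-learn. Not any vendor's production pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

def make_batch(n, shift=0.0):
    """Two-class toy data; `shift` moves the class boundary, mimicking new content."""
    X = rng.normal(size=(n, 5))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
    return X, y

# Fit on the "old" content distribution.
X_old, y_old = make_batch(5000, shift=0.0)
model = LogisticRegression().fit(X_old, y_old)

# Score on progressively "newer" content.
for shift in (0.0, 0.5, 1.0, 2.0):
    X_new, y_new = make_batch(2000, shift=shift)
    acc = accuracy_score(y_new, model.predict(X_new))
    print(f"shift={shift:.1f}  accuracy={acc:.2f}")
# Accuracy sags as the shift grows: what was relevant before the new
# content arrived is now irrelevant or just incorrect.
```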
What does this mean?
Small, narrow domains work okay. Larger content domains work less okay.
Heron Systems, using a variation of the Google DeepMind approach, was able to “kill” a human in a simulated dogfight. However, the domain was small, and there were some “rules.” Perfect for smart software. The human top gun was “dead” fast. Larger domains, like dealing with swarms of thousands of militarized and hardened unmanned aerial vehicles plus a simultaneous series of targeted cyber attacks using sleeper software favored by some nation states, mean that smart software will be ineffective.
What will Google do?
As I have pointed out in previous blog posts, the high school science club management method employed by Backrub has become the standard operating procedure at today’s Alphabet Google.
Thus, the question, “Is Google’s AI research about to implode?” is a good one. The answer is, “No.” Google has money; it has staff who toe the line; and it has its charade of an honest, fair, and smart online advertising system.
Let me suggest a slight change to the question; to wit: “Is Google at a tipping point?” The answer to this question is, “Yes.”
Gebru-gibberish recalls the flight of Icarus, who flew too close to the sun and flamed out in a memorable way.
Stephen E Arnold, February 24, 2021
The Crux of the Smart Software Challenge
February 24, 2021
I read “There Is No Intelligence without Human Brains.” The essay is not about machine learning, artificial intelligence, and fancy algorithms. One of the points which I found interesting was:
But, humans can opt for long-term considerations, sacrificing to help others, moral arguments, doing unhelpful things as a deep scream for emotional help, experimenting to learn, training themselves to get good at something, beauty over success, etc., rather than just doing what is comfortable or feels nice in the short run or simply pro-survival.
However, one sentence focused my thinking on the central problem of smart software and possibly explains the odd, knee-jerk, and high-profile personnel problems in Google’s AI ethics unit. Here’s the sentence:
Poisoning may greatly hinder our flexible intelligence.
Smart software has to be trained. The software system can be hand fed training sets crafted by fallible humans or the software system can ingest whatever is flowing into the system. There are smart software systems which do both. One of the first commercial products to rely on training sets and “analysis on the fly” was the Autonomy system. The phrase “neurolinguistic programming” was attached by a couple of people to the Autonomy black box.
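To see why poisoned training data matters, consider a minimal sketch, assuming a toy logistic model and synthetic labels; this is not the Autonomy black box or any vendor’s actual pipeline. Flip a fraction of the labels fed to the system and watch what the model “learns.”

```python
# A toy illustration of training-set poisoning. Synthetic data and a plain
# logistic model -- assumptions for this sketch, not any vendor's system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(7)
w_true = rng.normal(size=10)                       # the "ground truth" concept
X_train, X_test = rng.normal(size=(500, 10)), rng.normal(size=(2000, 10))
y_train = (X_train @ w_true > 0).astype(int)
y_test = (X_test @ w_true > 0).astype(int)

for poison_rate in (0.0, 0.2, 0.4, 0.45):
    y_fed = y_train.copy()
    flips = rng.random(len(y_fed)) < poison_rate   # labels a tainted feed corrupts
    y_fed[flips] = 1 - y_fed[flips]
    model = LogisticRegression().fit(X_train, y_fed)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"poisoned {poison_rate:.0%} of labels -> clean-test accuracy {acc:.2f}")
# Accuracy slides toward a coin flip as the poison rate approaches 50 percent:
# the system "learns" whatever the tainted feed teaches it.
```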
What’s stirring up dust at Google may be nothing more than fear; for example:
- Statements by those terminated reveal that the bias in smart software is a fundamental characteristic of Google’s approach to artificial intelligence; that is, the datasets themselves are sending smart software off the rails
- The quest for the root of the bias shines a light on the limitations of current commercial approaches to smart software; that is, vendors make outrageous claims in order to maintain a charade about software’s capabilities, which may be quite narrow and biased
- The data gathered by the Xooglers may reveal that Google’s approach is not as well formed as the company wants competitors and others to believe; that is, marketers and MBAs outpace what the engineers can deliver.
The information by which an artificial intelligence system “learns” may be poisoning the system. Check out the Times of Israel essay. It is thought provoking and may have revealed the source of Google’s interesting personnel management decisions.
Fear can trigger surprising actions.
Stephen E Arnold, February 24, 2021
Insights into Video Calls
February 24, 2021
I read a ZDNet write up. The word I would use to describe its approach is “breezy.” Maybe “fluffy”? “Microsoft Teams or Zoom? A Salesman Offers His Stunning Verdict” reveals quite a bit about the mental approach of the super duper professionals referenced in the article.
The security of Microsoft Teams and Zoom concerns me. The SolarWinds’ misstep resulted in Microsoft’s losing control of some Azure and Outlook software. But we only know what Microsoft elects to reveal. Then there is the Zoom-China connection. That gives me pause.
What does the write up reveal? Policy or personal preference dictates which system gets clicked. But the write up offers some other factoids, which I think are quite illuminating.
First, the anonymous sales professional states:
“I’m on video calls eight hours a day. I just do what’s easiest…Some of my meetings are in the middle of the night. You want me to think then?”
Not a particularly crafty person, I think. The path of least resistance is the lure for this professional. I like the idea that this professional’s thought processes shut down for the night. To answer the rhetorical question “You want me to think then?”, I would reply, “Yes, you are a professional. If you don’t want to think, go for the Walmart greeter work.” Laziness radiates from this professional’s comment.
Another person answers a question about video conferencing features this way:
“Zoom to Teams is like Sephora to Ulta. Or Lululemon to Athleta.”
I assume that this is a brilliant metaphor like one of Shakespeare’s tropes. I have zero idea about the four entities offered as points of reference. My hunch is that this individual’s marketing collateral is equally incisive.
A source focused on alcohol research (who knew this was a discipline?) is convinced that Zoom “has more security protocols.” This individual does not know that most Zoom bombing is a consequence of the actions of individuals invited to a meeting.
Here are my takeaways from the write up:
- The salesman cuts corners
- The person who speaks in terms of product brand names is likely to confuse me when I ask, “What’s the weather?”
- The alcohol researcher’s confidence in Zoom security is at odds with the Zoom bomb thing.
For my Zoom sessions, I use an alias, multiple bonded Internet services, and a specialized VPN. I certainly don’t trust Zoom security. And Microsoft? These pros develop security services which could not detect a multi-month breach which resulted in the loss of some source code.
My verdict: Meet in person, wear a mask, and trust but verify.
Stephen E Arnold, February 24, 2021
Quote to Note: Facebook and Its True Colors
February 24, 2021
I find some “real” newspapers interesting because some of their stories have a socio-political agenda, and every once in a while a story is just made up. (Yes, even the New York Times has experienced this type of journalistic lapse.)
In the estimable New York Post, “Facebook Faces Boycott Campaign after Banning News in Australia” included an interesting statement allegedly made by British member of Parliament Julian Knight. Here’s the quote I noted:
Facebook always claimed it is a platform. It very much looks like it is now making quite substantial editorial and even political decisions. It is arrogant, particularly during a pandemic, to basically turn off the taps to a great deal of news. It is not being a good global citizen.
Facebook operates as if it were a country. Perhaps it will move beyond cyber force in Australia? Does Mr. Zuckerberg own a yacht? The boat could be outfitted with special equipment. On the other hand, Mr. Zuckerberg will find a way to make peace with a country which he obviously perceives as annoying, if not irrelevant to the proud dataspace of Facebook.
Stephen E Arnold, February 24, 2021
Zoom Bombers? Probably from Your Contact List
February 24, 2021
To break up the monotony of quarantine life, a new trend appeared on the Internet, driven by the widespread use of videoconferencing. Called “zoombombing,” the new activity occurs when a stranger joins an online videoconference and disrupts it with lewd comments, activities, and other chaos. Science Magazine shares how zoom bombers are usually not random strangers: “‘Zoombombing’ Research Shows Legitimate Meeting Attendees Cause Most Attacks.”
Zoombombing videos went viral rather quickly. Many of these disruptions were played for laughs, but they soon became annoyances. Boston University and Binghamton University researchers discovered that most zoombombing attacks are “inside jobs.”
“Assistant Professor Jeremy Blackburn and PhD student Utkucan Balci from the Department of Computer Science at Binghamton’s Thomas J. Watson College of Engineering and Applied Science teamed up with Boston University Assistant Professor Gianluca Stringhini and PhD student Chen Ling to analyze more than 200 calls from the first seven months of 2020.
“The researchers found that the vast majority of zoombombing are not caused by attackers stumbling upon meeting invitations or “bruteforcing” their ID numbers, but rather by insiders who have legitimate access to these meetings, particularly students in high school and college classes. Authorized users share links, passwords and other information on sites such as Twitter and 4chan, along with a call to stir up trouble.”
Hackers are not causing the problem; participants invited to the Zoom call are. Inside jobs are giggles, but they point to the underlying problem of anonymity. If people are not afraid of repercussions, then they are more likely to say and do racist, sexist, and related things.
The researchers had to immerse themselves in antisocial behavior for their studies and took mental health breaks due to the depravity.
Whitney Grace, February 24, 2021
Facebook Demonstrates It Is More Powerful Than a Single Country
February 23, 2021
I read “Facebook to Reverse News Ban on Australian Sites, Government to Make Amendments to Media Bargaining Code.” It’s official. Google paid up. Facebook stood up and flexed its biceps. The Australian government swatted at the flies in Canberra, gurgled a Fosters, and rolled over. The write up states:
Facebook will walk back its block on Australian users sharing news on its site after the government agreed to make amendments to the proposed media bargaining laws that would force major tech giants to pay news outlets for their content.
The after party will rationalize what happened. But from rural Kentucky, it certainly seems as if Facebook is now able to operate as a nation state. Facebook can impose its will upon a government. Facebook can do what it darn well pleases, thank you very much.
The write-up has a great quote attributed to Josh Frydenberg, the Australian government treasurer:
Facebook is now going to engage good faith negotiations with the commercial players.
Are there historical parallels? Sure, how about Caesar and the river thing?
Turning point and benchmark.
Stephen E Arnold, February 23, 2021
Google: Adding Friction?
February 23, 2021
I read “Waze’s Ex-CEO Says App Could Have Grown Faster without Google.” Opinions are plentiful. However, reading about the idea of Google as an inhibitor is interesting. The write up reports:
Waze has struggled to grow within Alphabet Inc’s Google, the navigation app’s former top executive said, renewing concerns over whether it was stifled by the search giant’s $1 billion acquisition in 2013.
A counterpoint is that 140 million drivers use Waze each month. When Google paid about $1 billion for the traffic service in 2013, Waze attracted 10 million drivers.
The write up states:
But Waze usage is flat in some countries as Google Maps gets significant promotion, and Waze has lost money as it focuses on a little-used carpooling app and pursues an advertising business that barely registers within the Google empire…
Several observations about the points in the article:
- With litigation and other pushback against Google and other large technology firms, it seems as if Google is in a defensive posture
- Wall Street is happy with Google’s performance, but that enjoyment may not be shared by some users and employees
- Google management methods may be generating revenue but secondary effects like the Waze case may become data points worth monitoring.
Google map-related services are difficult for me to use. Some functions are baffling; others invite use of other services. Yep, friction, as in slowing Waze’s growth, maybe?
Stephen E Arnold, February 23, 2021
Garbaged Books Become a Library
February 23, 2021
People throw away books. When I was younger and in the pre-Kindle era, I would leave books I finished reading in seat back pockets on airplanes. My thought was that someone would find the book and read it. I know this was one of those virtue signaling efforts which resulted in flight attendants putting the books in the trash. Oh, well. I tried a little.
I read “Turkish Garbage Collectors Open a Library from Books Rescued from the Trash.” According to the write up, the library numbers 6,000 books and has a children’s section. No kidding? Children read?
The article states:
All the books that are found are sorted and checked for condition, if they pass, they go on the shelves. In fact, everything in the library was also rescued including the bookshelves and the artwork that adorns the walls … Today, the library has over 6,000 books that range from fiction to nonfiction and there’s a very popular children’s section that even has a collection of comic books. An entire section is devoted to scientific research and there are also books available in English and French.
In the Jefferson County, Kentucky, area in which we live, it is necessary to call the library. One must get an appointment to pick up a book. Convenient indeed.
It appears that the garbaged books are available to anyone who can walk to the library. No appointment needed, it seems. Have the publishers sued to stop this practice? The article does not indicate that the trash collectors merit the attention of legal eagles.
Stephen E Arnold, February 23, 2021
Common Sense and Artificial Intelligence: Logical? Yes. Efficient? No
February 23, 2021
People easily forget that machines are only as smart as humans make them. Continuing on that thought, AI and machine learning are the height of humanity’s most advanced technology, but they are still stupid computer programs. AI and machine learning lack basic reasoning and logic, so these concepts must be taught to them. The Business Reporter discusses how AI needs more common sense programmed into its algorithms in “Trial And Error: The Human Flaws In Machine Learning.”
Humans are prone to cognitive biases, and we need to monitor them. The best way to monitor cognitive biases is through slow, logical, energy-intensive processes that point out illogical inconsistencies. Humans have two thought processes, which are called different things in the varying science disciplines. However they are labeled, the thought processes are a “fast one” and a “slow one,” or the reactive and active minds. Modern technology centers on the reactive/fast one, lacking active/slow one thought processes.
“But where does this fit into AI and machine learning? Those who trust technology more than humans believe that the most efficient way of eliminating the flaws in our thinking is to rely on disinterested, even-handed algorithms to make predictions and decisions, rather than inconsistent, prejudiced humans. But are the algorithms we use in artificial intelligence (AI) today really up to scratch? Or do machines have their own fallibilities when it comes to preconceptions?”
Machine learning algorithms are fantastic tools for closed systems when they are fed terabytes of data from which to learn and form correlations. Machine learning algorithms then apply what they know to the closed system and learn more via trial and error. A machine learning algorithm trained on one closed system cannot be transferred to another, which is why machine reasoning remains a new technology concept.
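A heavily simplified sketch of trial-and-error learning inside one closed system appears below. The six-cell corridor, rewards, and parameters are invented for illustration; the point is that the learned table is perfect for this corridor and worthless anywhere else, which is the transfer problem in miniature.

```python
# Trial-and-error learning in a closed system: tabular Q-learning on a toy
# six-cell corridor. Everything here is a made-up illustration.
import random

N_STATES, GOAL = 6, 5             # cells 0..5; the reward sits at cell 5
ACTIONS = (-1, +1)                # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    s = 0
    while s != GOAL:
        # Trial: explore at random sometimes; otherwise exploit what worked.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else 0.0
        # Error correction: nudge the estimate toward reward plus lookahead.
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])
        s = s_next

print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)})
# Prints +1 (move right) for every non-goal state: correct for this corridor,
# meaningless for any other domain.
```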
Machine reasoning AI could eliminate cognitive biases, but no one has successfully programmed it yet. It will take tons of data, transferability of closed systems to discover common correlations, and lots of trial and error before computers have a nanobyte of common sense.
Enabling common sense in AI adds time and cost. The goal is to generate revenue with a juicy margin. That’s common sense.
Whitney Grace, February 23, 2021
DarkCyber for February 23, 2021 Is Now Available
February 23, 2021
DarkCyber, Series 3, Number 4 includes five stories. The first summarizes the value of an electronic game’s software. Think millions. The second explains that Lokinet is now operating under the brand Oxen. The idea is that the secure services’ offerings are “beefier.” The third story provides an example of how smaller cyber security startups can make valuable contributions in the post-SolarWinds’ era. The fourth story describes the US government’s getting close to an important security implementation, only to lose track of the mission. And the final story provides some drone dope about the use of unmanned aerial systems on Super Bowl Sunday as FBI agents monitored an FAA imposed no fly zone. You could have downloaded the video at this url after we uploaded it to YouTube.
But…
YouTube notified Stephen E Arnold that his interview with Robert David Steele, a former CIA professional, was removed from YouTube. The reason was “bullying.” Mr. Arnold is 76 or 77, and he talked with Mr. Steele about the Jeffrey Epstein allegations. Mr. Epstein was on Mr. Steele’s radar because the legal allegations were of interest to an international tribunal about human trafficking and child sex crime. Mr. Steele is a director of that tribunal. Bullying about a deceased person allegedly involved in a decades-long criminal activity? What?
What’s even more interesting is that the DarkCyber videos, which appear every 14 days, focus on law enforcement, intelligence, and cyber crime issues. One law enforcement professional told Mr. Arnold after his Dark Web lecture at the National Cyber Crime Conference in 2020, “You make it clear that investigators have to embrace new technology and not wait for budgets to accommodate more specialists.”
Mr. Arnold told me that he did not click the bright red button asking Google / YouTube to entertain an appeal. I am not certain about his reasoning, but I assume that Mr. Arnold, who was an advisor to the world’s largest online search system, was indifferent to the censorship. My perception is that Mr. Arnold recognizes that Alphabet, Google, and YouTube are overwhelmed with management challenges, struggling to figure out how to deal with copyright violations, hate content, and sexually related information. Furthermore, Alphabet, Google, and YouTube face persistent legal challenges, employee outcries about discrimination, and ageing systems and methods.
What does this mean? In early March 2021, we will announce other video services which will make the DarkCyber video programs available.
The DarkCyber team is composed of individuals who are not bullies. If anything, the group is more accurately characterized as researchers and analysts who prefer the libraries of days gone by to the zip zip world of thumbtypers, smart software, and censorship of content related to law enforcement and intelligence professionals.
Mr. Arnold will be discussing online click fraud at lunch next week. Would that make an interesting subject for a DarkCyber story? With two firms controlling more than two thirds of online advertising, click fraud is a hot potato topic. How does it happen? What’s done to prevent it? What’s the cost to the advertisers? What are the legal consequences of the activity?
Kenny Toth, February 23, 2021