Google and Ethics: Shaken and Stirred Up

June 17, 2021

Despite recent controversies, Vox Recode reports, “Google Says it’s Committed to Ethical AI Research. Its Ethical AI Team Isn’t So Sure.” In fact, it sounds like there is a lot of uncertainty in a department whose ousted leaders have yet to be replaced and whose members reportedly receive little guidance or information from the higher-ups. Reporter Shirin Ghaffary writes:

“Some current members of Google’s tightly knit ethical AI group told Recode the reality is different from the one Google executives are publicly presenting. The 10-person group, which studies how artificial intelligence impacts society, is a subdivision of Google’s broader new responsible AI organization. They say the team has been in a state of limbo for months, and that they have serious doubts company leaders can rebuild credibility in the academic community — or that they will listen to the group’s ideas. Google has yet to hire replacements for the two former leaders of the team. Many members convene daily in a private messaging group to support each other and discuss leadership, manage themselves on an ad-hoc basis, and seek guidance from their former bosses. Some are considering leaving to work at other tech companies or to return to academia, and say their colleagues are thinking of doing the same.”

See the article for more of the frustrations facing Google’s remaining AI ethics researchers. The loss of these workers would not be good for the company, which relies on the department to lend a veneer of responsibility to its algorithmic initiatives. Right now, though, Google seems more interested in plowing ahead with its projects than in taking its own researchers, or their work, seriously. Its reputation in the academic community has tanked, we are told. A petition signed by thousands of computer science instructors and researchers called the firing of former team co-lead Timnit Gebru “unprecedented research censorship,” a prominent researcher and diversity activists are rejecting Google funding, prospective speakers boycotted a Google-run workshop, and the AI ethics research conference FAccT suspended the company’s membership. Meanwhile, Ghaffary reports, at least four employees have resigned, citing Gebru’s treatment as the reason. Other concerned employees are taking the opposite approach, staying on in the hope they can make a difference. As one unnamed researcher states:

“Google is so powerful and has so much opportunity. It’s working on so much cutting-edge AI research. It feels irresponsible for no one who cares about ethics to be here.”

We agree, but there is only so much mid-level employees can do. When will Google executives begin to care about developing AI programs conscientiously? When regulators somehow make it more expensive to ignore ethics concerns than to embrace them, we suspect. We will not hold our breath.

Cynthia Murrell, June 17, 2021

A Test of Two Sentiment Analysis Libraries

June 17, 2021

A post by developer Alan Jones at Towards Data Science takes a close look at “Two Sentiment Analysis Libraries and How they Perform.” In a post complete with snippets of code, Jones takes us through his comparison of TextBlob and VADER. He emphasizes that, since human language is so nuanced, sentiment analysis is imprecise by nature. We are sure of one thing—the word “lawyer” in a customer support email is probably a bad sign. Jones introduces his experiment, and describes how interested readers might perform their own:

“So, it’s not reasonable to expect a sentiment analyzer to be accurate on all occasions because the meaning of sentences can be ambiguous. But just how accurate are they? It obviously depends on the techniques used to perform the analysis and also on the context. To find out, we are going to do a simple experiment with two easy to use libraries to see if we can find out what sort of accuracy we might expect. You could decide to build your own analyzer and, in doing so, you might learn more about sentiment analysis and text analysis in general. If you feel inclined to do such a thing, I highly recommend that you read the article by Conor O’Sullivan, Introduction to Sentiment Analysis where he not only explains the aim of Sentiment Analysis but demonstrates how to build an analyzer in Python using a bag of words approach and a machine learning technique called a Support Vector Machine (SVM). On the other hand you might prefer to import a library such as TextBlob or VADER to do the job for you.”

Jones walks us through his dual analysis of the 500 tweets found in the Sentiment140 for Academics collection, narrowed down from the 1.6 million contained in the greater Sentiment140 project. The twist is this: he had to reconcile the different classification schemas used by TextBlob and VADER. See the post for how he applies the two analyzers to the dataset and compares the results.
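
For readers who want to poke at the libraries themselves, here is a minimal sketch of the kind of side-by-side comparison Jones describes, assuming the textblob and vaderSentiment packages are installed. The thresholds used to map scores to labels are our assumption, not necessarily his:

    from textblob import TextBlob
    from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

    analyzer = SentimentIntensityAnalyzer()

    def textblob_label(text, threshold=0.0):
        # TextBlob returns a polarity score in [-1.0, 1.0]
        return "positive" if TextBlob(text).sentiment.polarity > threshold else "negative"

    def vader_label(text, threshold=0.0):
        # VADER's "compound" score is likewise normalized to [-1.0, 1.0]
        return "positive" if analyzer.polarity_scores(text)["compound"] > threshold else "negative"

    tweets = [
        "I love this phone, best purchase all year!",
        "My flight was canceled again. Contacting my lawyer.",
    ]
    for tweet in tweets:
        print(tweet, "->", textblob_label(tweet), "/", vader_label(tweet))

Mapping both analyzers onto the same positive/negative labels is the reconciliation step; wherever a label disagrees with the Sentiment140 ground truth, accuracy takes a hit.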

Cynthia Murrell, June 17, 2021

A Google Survey: The Cloud Has Headroom

June 17, 2021

Google sponsored a study. You can read it here. There’s a summary of the report in “Manufacturers Allocate One Third of Overall IT Spend to AI, Survey Shows.”

First, the methodology is presented on the final page of the report. Here’s a snippet:

The survey was conducted online by The Harris Poll on behalf of Google Cloud, from October 15 to November 4, 2020, among 1,154 senior manufacturing executives in France (n=150), Germany (n=200), Italy (n=154), Japan (n=150), South Korea (n=150), the UK (n=150), and the U.S. (n=200) who are employed full-time at a company with more than 500 employees, and who work in the manufacturing industry with a title of director level or higher. The data in each country was weighted by number of employees to bring them into line with actual company size proportions in the population. A global post-weight was applied to ensure equal weight of each country in the global total.

Google apparently wants to make data a singular noun. That’s Googley. Also, there are two references to weighting; however, there are no data on how the weighting factors were calculated nor on why weighting was needed for what boils down to a set of countries representing the developed world. I did not spot any information about the actual selection process; for example, mailing a request to a larger set and then taking those who self-select is a practice I have encountered in the past. Was that the method in use here? How much back and forth was there between the Harris unit and the Google managers prior to the crafting of the final report? Does this happen? Sure, those who pay want a flash report and then want to “talk about” the data. Is it possible weighting factors were used to make the numbers flow? I don’t know. The study was conducted in the depths of the Covid crisis. Was that a factor? Were those in the sample producing revenue from their AI infused investments? Sorry, no data available.
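
For those unfamiliar with the survey jargon, here is a toy sketch of what “weighted by number of employees” plus a “global post-weight” might look like in pandas. Every number below is invented; the report publishes no factors:

    import pandas as pd

    # Toy sample (invented): five respondents across two countries and two
    # company-size bands.
    df = pd.DataFrame({
        "country":   ["France", "France", "US", "US", "US"],
        "size_band": ["500-999", "1000+", "500-999", "500-999", "1000+"],
    })

    # Step 1 (assumed form): within each country, pull size bands toward
    # invented population proportions.
    population_share = {"500-999": 0.7, "1000+": 0.3}

    def size_weights(group):
        sample_share = group["size_band"].value_counts(normalize=True)
        return group["size_band"].map(lambda band: population_share[band] / sample_share[band])

    df["weight"] = df.groupby("country", group_keys=False).apply(size_weights)

    # Step 2: global post-weight so each country contributes equally.
    totals = df.groupby("country")["weight"].transform("sum")
    df["weight"] *= (len(df) / df["country"].nunique()) / totals
    print(df.groupby("country")["weight"].sum())  # equal totals per country

Without the actual factors, whether Step 1 nudged any headline number is anyone’s guess.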

What were the findings?

Surprise, surprise. Artificial intelligence is a hot button in the manufacturing sector. Those who are into smart software are spending a hefty chunk of their “spend” budget for it. If that AI is delivered from the cloud, then bingo, the headroom for growth is darned good.

The bad news is that two thirds of those in the sample are into AI already. The big tech sharks will be swarming to upsell those early adopters and compete ferociously for the remaining one third who have yet to get the message that AI is a big deal.

Guess what countries are leaders in AI. If you said China, wrong. Go for Italy and Germany. The US was in the middle of the pack. The laggards were Japan and Korea. And China? Hey, sorry, I did not see those data in the report. My bad.

Interesting stuff, these sponsored research projects with unexplained weightings which line up with what the Google says it is doing really well.

Stephen E Arnold, June 17, 2021

Microsoft: Timing and Distraction

June 16, 2021

From my point of view, the defining event of 2021 was the one-two punch of the SolarWinds and Microsoft Exchange Server breaches. I call these “missteps” because the cyber wizards at the Redmond outfit and the legions of cyber security vendors talk around compromised systems in jargon that is mind boggling. Yep, a “misstep.” Not worth worrying about.

I scanned the research data in “Unsuccessful Tech Projects Get Axed During the Pandemic” and checked these items with my trusty red ink ball point pen. Let’s just assume these data are close enough for horse shoes, shall we?

  • 30 percent of a sample of 700 plus “professionals” say they killed one or more unsuccessful digital transformation projects. Okay, roughly a one-third failure rate. How’s that work if one is building 100 school buses? Yep, one third go up in flames, presumably killing some of the occupants. Call it 20 children per bus when one detonates. That works out to roughly 660 no longer functioning children. Acceptable? Okay for software, just not for school buses.
  • 65 percent of the sample are going to try and try again. Improving methods? No data on that, so we can figure one third of these digital adventures will drive off a cliff, I assume.
  • Making the right decision is almost a guess. The article’s data suggest that 29 percent of those in the sample “struggle to keep pace with technological developments.” So let’s do marketing, maybe hand waving, or just some Jazz Age razzle dazzle, right?

That is what I thought when I read “Windows 11 Has Leaked Online: What the Next Version of Windows Looks Like.” This write up does not talk about addressing the software update methods, the trust mechanisms within the Windows ecosystem, nor the vulnerabilities of decades-old practices for libraries and dynamic link libraries, among others. Nope. It’s this in my opinion:

[Image source: Noemi P.]

A new look, snappy dance moves, and distraction. The tune is probably going to be a toe tapper. The only hitch is that the SolarWinds and Microsoft Exchange Server missteps might throw the marketing routine off beat.

Stephen E Arnold, June 16, 2021

How AI Might Fake Geographic Data

June 16, 2021

Here is yet another way AI could be used to trick us. The Eurasia Review reports, “Exploring Ways to Detect ‘Deep Fakes’ in Geography.” Researchers at the University of Washington and Oregon State University do not know of any cases where false GIS data has appeared in the wild, but they see it as a strong possibility. In a bid to get ahead of the potential issue, the data scientists created an example of how one might construct such an image and published their findings in Cartography and Geographic Information Science. The Eurasia Review write-up observes:

“Geographic information science (GIS) underlies a whole host of applications, from national defense to autonomous cars, a technology that’s currently under development. Artificial intelligence has made a positive impact on the discipline through the development of Geospatial Artificial Intelligence (GeoAI), which uses machine learning — or artificial intelligence (AI) — to extract and analyze geospatial data. But these same methods could potentially be used to fabricate GPS signals, fake locational information on social media posts, fabricate photographs of geographic environments and more. In short, the same technology that can change the face of an individual in a photo or video can also be used to make fake images of all types, including maps and satellite images. ‘We need to keep all of this in accordance with ethics. But at the same time, we researchers also need to pay attention and find a way to differentiate or identify those fake images,’ Deng said. ‘With a lot of data sets, these images can look real to the human eye.’ To figure out how to detect an artificially constructed image, first you need to construct one.”

We suppose. The researchers suspect they are the first to recognize the potential for GIS fakery, and their paper has received attention around the world. But at what point can one distinguish between warding off a potential scam and giving bad actors ideas? Hard to tell.

The team used the unsupervised deep learning algorithm CycleGAN to introduce parts of Seattle and Beijing into a satellite image of Tacoma, Washington. Curious readers can navigate to the post to view the result, which is convincing to the naked eye. When compared to the actual image using 26 image metrics, however, differences were registered on 20 of them. Details like differences in roof colors, for example, or blurry vs. sharp edges gave it away. We are told to expect more research in this vein so ways of detecting falsified geographic data can be established. The race is on.
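
For the curious, here is a minimal sketch of that comparison step, assuming scikit-image 0.19 or later is installed. The paper used 26 metrics; structural similarity and peak signal-to-noise ratio are just two common examples, and the file names are hypothetical:

    from skimage import io
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    # Hypothetical file names: a trusted reference image and a suspect one,
    # with matching dimensions.
    reference = io.imread("tacoma_reference.png")
    suspect = io.imread("tacoma_suspect.png")

    # SSIM near 1.0 means structurally similar; fabricated regions pull it down.
    ssim = structural_similarity(reference, suspect, channel_axis=-1)
    # PSNR in decibels; lower values mean larger pixel-level differences.
    psnr = peak_signal_noise_ratio(reference, suspect)
    print(f"SSIM: {ssim:.3f}, PSNR: {psnr:.1f} dB")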

Cynthia Murrell, June 16, 2021

Is Google Really, Really Killing the Ad Industry?

June 16, 2021

I read “Google Is Also Killing Ad Industry like Apple.” There’s a caveat after this somewhat bold headline:

[Google] promises to put an end to personalized ads starting with Android 12

The write up states:

Close to 80% of its revenue comes from advertising. But with pressure from regulators and consumers, Google had no choice but to follow Apple’s lead to protect users’ privacy and data.

Okay, the Google is reacting to what seems like ongoing legal hassles. Is Google “like Apple”? Well, Apple has diversified its revenue stream. The hardware, the App Store, the subscription businesses, and the other bits and pieces of the company that turn a buck are more robust than Google’s model.

Yep, that’s 80 percent advertising, and it is smart. Well, sort of smart. Some of the ads we have been monitoring on YouTube and our Google Web search results pages seem a bit off the mark for whoever Google thinks “I” am. We love seeing ads for discounted auto insurance, Grammarly to improve the entity’s writing, and blandishments to consume a vegan health drink. Not exactly a cohesive line up of messages, but Google thinks we watch yacht videos, hacker and stolen software videos, and search for weaponized drone manufacturers. Normal stuff for the smart Google system.

That’s the problem. The 80 percent is based on some pretty crazy real life, rules based, and human tuned systems and methods. By virtue of zero meaningful regulation, the Google has become the embodiment of online advertising. Banners, personalized, scammy, whatever: Google is there.

What if users opt out of Google data collection?

What difference will that make? The Floc-you crowd believes that privacy is Job 1 at the Google. The cheerleaders for the weird “rumble” or “bumble” bundle think that subscriptions are the way to go. Google believes that its solving-death and smart-auto technologies will become big winners. Then there’s the licensing opportunity for DeepMind’s smart software. Those who aren’t keen on saying, “We’ve got the secret sauce for artificial general intelligence” are gone or heading out the door. The AI ethics negative Nellies are history as well.

The problem is the 80 percent thing.

The write up has an answer:

As predicted, Google gave in to pressure from regulators and its users to implement these changes. However, it will be interesting to see how Google will sustain these changes, let alone other third-party ad providers. But only the future will tell how these changes will affect independent and small business owners, as Facebook fears. Google recently introduced FLOC (Federated Learning Of Cohorts), an alternative method to replace the third-party cookies on Chrome which makes it difficult for advertisers to track users’ web activities to serve targeted ads. This FLOC will enable interest-based advertising on the web without letting advertisers know your identity. Google’s alternate solution for the Android app’s future personalized ads will probably be reminiscent of FLOC. Overall, this is a welcome change from Google.

Got it? But 80 percent of $196 billion is a lot of smart auto and smart software licensing deals. Therefore, Steve Ballmer’s one trick pony observation is proving to be accurate, or at least more accurate than Mr. Ballmer’s basketball team’s free throw shooting percentage.
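
If the FLOC mechanics sound hand wavy, that is because Google has shared few details. Chrome’s origin trial reportedly derived each cohort ID from a SimHash over the domains a user visits, so overlapping histories land in nearby cohorts. Here is a toy sketch of that idea, emphatically not Google’s code, with invented example domains:

    import hashlib

    def simhash_cohort(domains, bits=16):
        # Hash each visited domain, then take a majority vote per bit position.
        counts = [0] * bits
        for domain in domains:
            digest = int(hashlib.sha256(domain.encode()).hexdigest(), 16)
            for i in range(bits):
                counts[i] += 1 if (digest >> i) & 1 else -1
        # Each bit of the cohort ID is the sign of the vote.
        return sum(1 << i for i in range(bits) if counts[i] > 0)

    history_a = ["yachtworld.example", "dronemaker.example", "grammarly.example"]
    history_b = ["yachtworld.example", "dronemaker.example", "vegandrinks.example"]
    # Overlapping histories tend toward cohort IDs with small Hamming distance.
    print(hex(simhash_cohort(history_a)), hex(simhash_cohort(history_b)))

The cohort ID, not the browsing history, is what the browser would expose to advertisers.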

Stephen E Arnold, June 16, 2021

China: More Than a Beloved Cuisine, Policies Are Getting Traction Too

June 16, 2021

As historical information continues to migrate from physical books to online archives, governments are given the chance to enact policies right out of Orwell’s 1984. And why limit those efforts to one’s own country? Quartz reports that “China’s Firewall Is Spreading Globally.” The crackdown on protesters in Tiananmen Square on June 4, 1989 is a sore spot for China. It would rather that those old enough to remember it forget, and that those too young to have seen it on the news never learn about it. The subject has been taboo within the country since it happened, but now China is harassing the rest of the world about it and other sensitive topics. Worse, the efforts appear to be working.

Writer Jane Li begins with the plight of activist group 2021 Hong Kong Charter, whose website is hosted by Wix. The site’s mission is to build support in the international community for democracy in Hong Kong. Though its authors now live in countries outside China and Wix is based in Israel, China succeeded in strong-arming Wix into taking the site down. The action did not stick—the provider apologized and reinstated the site after being called out in public. However, it is disturbing that it was disabled in the first place. Li writes:

“The incident appears to be a test case for the extraterritorial reach of the controversial national security law, which was implemented in Hong Kong one year ago. While Beijing has billed the law as a way to restore the city’s stability and prosperity, critics say it helps the authorities to curb dissent as it criminalizes a broad swathe of actions, and is written vaguely enough that any criticism of the Party could plausibly be deemed in violation of the law. In a word, the law is ‘asserting extraterritorial jurisdiction over every person on the planet,’ wrote Donald Clarke, a professor of law at George Washington University, last year. Already academics teaching about China at US or European universities are concerned they or their students could be exposed to greater legal risk—especially should they discuss Chinese politics online in sessions that could be recorded or joined by uninvited participants. By sending the request to Wix, the Hong Kong police are not only executing the expansive power granted to them by the security law, but also sending a signal to other foreign tech firms that they could be next to receive a request for hosting content offensive in the eyes of Beijing.”

One nation attempting to seize jurisdiction around the world may seem preposterous, but Wix is not the only tech company to take this law seriously. On the recent anniversary of the Tiananmen Square crackdown, searches for the event’s iconic photo “tank man” turned up empty on MS Bing. Microsoft blamed it on an “accidental human error.” Sure, that is believable coming from a company that is known to cooperate with Chinese censors within that country. Then there was the issue with Google-owned YouTube. The US-based group Humanitarian China hosted a ceremony on June 4 commemorating the 1989 event, but found the YouTube video of its live stream was unavailable for days. What a coincidence! When contacted, YouTube simply replied there may be a possible technical issue, what with Covid and all. Of course, Google has its own relationship to censorship in China.

Not to be outdone, Facebook suspended the live feed of the group’s commemoration with the auto-notification that it “goes against our community standards on spam.” Right. Naturally, when chastised, the platform apologized and called the move a technical error. We sense a pattern here. One more firm is to be mentioned, though to be fair some of these participants were physically in China: Last year, Zoom disabled Humanitarian China’s account mid-meeting after the group hosted its Covid-safe June 4th commemoration on the platform. At least that company did not blame the action on a glitch; it made plain it was acting at the direct request of Beijing. The honesty is refreshing.

Cynthia Murrell, June 16, 2021

Are 15 Square Feet Enough? A Question for the Google

June 15, 2021

I flipped through the dead tree edition of the outstanding sun-like Wall Street Journal this morning (June 15, 2021). And what did I find inside the edition which sometimes makes its way to Harrod’s Creek, Kentucky? The answer was a four page ad in the Murdoch infused Wall Street Journal. Each page is about 23 inches by 24 inches. That works out to 552 square inches (give or take a few due to variances in trim sizes) per page. With four pages, the total is more than 2,208 square inches of dead tree space, or roughly the 15 square feet of this post’s title, larger than the vinyl floor protector under my discount store office chair plus the one under an assistant’s chair. Which is better, vinyl floor protectors or dead tree paper? I am on the fence.


Above is a thumbnail of the four page Google ad in the June 15, 2021, Wall Street Journal.

What’s the message in the ad? At first glance, the ad is pitching a free Google service. Some people perceive Google free services as having a modest cost. Here in Harrod’s Creek, we love the freebies from the Google. In this particular case, Google is pitching this message:

If you want to show the world how it’s done, you have to change the way you do things.

Change is hard, and how hard depends on whether the change is motivated internally, like the good old but out of fashion notions of self improvement, gumption, and “Go West, young man!”, or whether the change is imposed on one; for example, Rupert Murdoch had constraints on unauthorized telephone tapping imposed on his otherwise outstanding organization. There is also an Orwellian type of change which can be more difficult to identify for those lacking critical thinking skills. A good example is assertions made under oath in the US Congress that certain high technology companies will do better. The companies then keep on keepin’ on, as some in Harrod’s Creek say.

The interior two pages convey this message:

Say hello to Google Workspace.

The text explains that Google Workspace is pretty much like Salesforce’s Slack, Microsoft Teams, the ever wonderful and avant garde Cisco Webex service, and the somewhat popular Zoom, among others. The most interesting passage in the advertisement is the explanation of “how we do it here too”:

All 100K+ Google employees – from engineering, to marketing, to the PhDs in the quantum lab – rely on Google Workspace every day. Our scientists leave comments in research docs, and the security team keeps our inboxes clear of spam and viruses. Google’s entire business is riding on it, just like yours. Because no matter the task at hand, when your customers are depending on you, Google Workspace is how it’s done.

What came to mind was “how it’s done” in staff management. Dare I mention Dr. Timnit Gebru? No, I don’t dare. What about the subtle management vibes at DeepMind? Nope, I know zero about that too. What about … Nope, no more of this management thinking. Life’s too short. (I wonder if critiques of Dr. Gebru’s AI ethics paper were handled within this Workspace thing?)

The final page lists alleged customers (users) of Google Workspace. These include Grandma’s, Operation BBQ Relief, and Ms. Kim’s class, among others.

Some observations are warranted by this lavish presentation of the Google Workspace message in the dead tree edition of a traditional newspaper nestled within the woke empire of News Corp. Herewith:

  1. I find it amusing that the world’s largest online advertising outfit is pitching its Workspace product in a medium which is centuries old, non digital, and mostly reporting on water that has already passed under the information bridge.
  2. I would like to see the ad reach data and conversion estimate for pulling new customers based on this rather impressive expanse of newspaper. My hunch is that the Google wanted to send a message, probably to Microsoft. Why not email the outstanding leader working hard to eliminate cyber security risks?
  3. The organizations mentioned as customers (users) are interesting. Links to case examples of what’s shaking at Grandma’s or Ms. Kim’s class would be fascinating. The wonky little icons in the ad are interesting but “yinka” was a bit of a puzzle to me.

Net net: Is Google changing or does Google want others to change from Microsoft Teams to Workspace? My hunch is that Google is assuming that the Greek god Koalemos will make their endeavor a home run.

Stephen E Arnold, June 15, 2021

More Content Under Scrutiny: Jail Time for Some? Work Camps for Others?

June 15, 2021

I spotted an interesting article called “Hong Kong to Censor Films Under National Security Law.” The write up appears to make Hong Kong separate from China. That’s not my understanding. Some in Hong Kong may not agree with me. British stiff upper lip, horse racing, and tea – absolutely.

The write up states:

The government said the changes that give the film censor authority to ban films perceived as promoting or glorifying acts or activities that could endanger national security take effect from Friday. The Film Censorship Authority should stay “vigilant to the portrayal, depiction or treatment of any act or activity which may amount to an offence endangering national security”, the government said in a statement. “Any content of a film which is objectively and reasonably capable of being perceived as endorsing, supporting, promoting, such act or activity” will be censored, according to the guideline.

What’s not clear is the scope of the law. Will frisky TikTok creators find themselves and their work reviewed? What happens if a Chinese citizen studying in the US appears in a video which the Chinese government finds a danger to national security?

Observations of course:

  1. Prudence, the voice on my shoulder, says, “Self censor or risk some time in a state controlled factory. Better yet a few years in a re-education program.” Will those Chinese students listen to my Prudence?
  2. The context of “national security” is interesting. The law suggests that TikTok and similar creator centric video programs pose sufficient threat to move beyond routine blurring of objectionable items, restricting distribution, or levying a fine. A question arises, “What about TikTok itself?” A threat or a source of information for some governments?

Net net: Some governments emulating the Great Firewall method will observe the downstream impact of this law. If the idea seems like a good one, information control may become more popular in some circles. Jack Ma is unlikely to invest in motion picture and video productions, I think.

Stephen E Arnold, June 15, 2021

Adulting Is Hard and Tech Companies May Lose Their Children

June 15, 2021

For years, I have compared the management methods of high flying technology companies to high school science clubs. I think I remain the lone supporter of this comparison. That is perfectly okay with me. I did spot several examples of management “actions” which have produced some interesting knock on effects.

First, take a look at this write up: “Does What Happens at YC Stay at YC?” The Silicon Valley real news write up reports:

The two founders, Dark CEO Paul Biggar and Prolific CEO and co-founder Katia Damer, say they were removed from YC after publicly critiquing YC — for very different reasons. Biggar had noted on social media back in March that another YC founder was tipping off people on how to cut the vaccine lines to get an early jab, while Damer expressed worry and frustration more recently about the alumni community’s support of a now-controversial alum, Antonio García Martínez. Y Combinator says that the two founders were removed from Bookface because they broke community guidelines, namely the rule to never externally post any internal information from Bookface.

The method: Have rules, enforce them, and attract media attention. I find this interesting because Y Combinator has been around since 2005 and now we have the rule breaking thing.

Second, the newly awakened real journalists at the New York Times wrote in “In Leak Investigation, Tech Giants Are Caught Between Courts and Customers”:

Without knowing it, Apple said, it had handed over the data of congressional staff members, their families and at least two members of Congress…

Yep, without knowing. Does this sound like the president of the high school science club explaining why the chem lab caught fire by pointing out that she knew nothing about the event? Yes, fire trucks responded to the scene of the blaze. Oh, the pres of that high school science club is wearing a T shirt with the word “Privacy” in big and bold Roboto.

Third, the much loved online ad vendor (Google in case you did not know) used a classic high school science club maneuver. “UK Competition Regulator Gets a Say in Google’s Plan to Remove Browser Cookies” reveals:

Google is committed to involve the CMA and the Information Commissioner’s Office, the U.K.’s privacy watchdog, in the development of its Privacy Sandbox proposals. The company promised to publicly disclose results of any tests of the effectiveness of alternatives and said it wouldn’t give preferential treatment to Google’s advertising products or sites.

I noted the operative word “promised.” Isn’t the UK a member of a multi national intelligence consortium? What happens if the UK wants to make some specific suggestions? Will that mean that Google can implement these suggestions and end up with a digital banana split with chocolate, whipped cream, and a cherry on top? Who wants to blemish the record of a high school valedictorian who is a member of the science club and the quick recall team? My hunch is that the outfits in Cheltenham want data from the old and new Google methods. But that’s just my view of how a science club exerts magnetic pull and uses a “yep, we promise” to move along its chosen path.

Net net: Each of these examples illustrates the effort of some high profile outfits to operate as adults. But I am thinking, “What if those adults are sophomoric and rely on science club coping mechanisms?”

The answer seems to be the five bipartisan bills offered by House Democrats. A fine way to kick off their weekend.

Stephen E Arnold, June 15, 2021
