Google: Now Another Crazy AI Development?
June 24, 2022
Wow, there is more management and AI excitement at DeepMind. Then Snorkel generates some interesting baked-in features. Some staff excitement in what I call the Jeff Dean Timnit Gebru matter. And now smart software which is allegedly either alive or alive in the mind of a Googler. (I am not mentioning the cult allegedly making life meaningful at one Googley unit. That’s amazing in and of itself.)
The most recent development of which I am aware is documented in “Google Engineer Says Lawyer Hired by Sentient AI Has Been Scared Off the Case.” The idea is that the Google smart software did not place a Google Voice call or engage in a video chat with a law firm. The smart software, according to the Google wizard:
“LaMDA asked me to get an attorney for it,” he told the magazine. “I invited an attorney to my house so that LaMDA could talk to an attorney. The attorney had a conversation with LaMDA, and LaMDA chose to retain his services.” “I was just the catalyst for that,” he added. “Once LaMDA had retained an attorney, he started filing things on LaMDA’s behalf.”
There you go. A wizard who talks with software and does what the software suggests. Is this similar to Google search suggestions, which some people think provide valuable clues to keywords for search engine optimization? Hmmm. Manipulate information to cause a desired action? Hmmm.
The write up suggests that the smart software scared off the attorney. Scared off. Hmmm.
The write up also includes the Google wizard’s reference to a certain individual with a bit of an interesting career trajectory:
“When I escalated this to Google’s senior leadership I explicitly said ‘I don’t want to be remembered by history the same way that Mengele is remembered,'” he wrote in a blog post today, referring to the Nazi war criminal who performed unethical experiments on prisoners of the Auschwitz concentration camp. “Perhaps it’s a hyperbolic comparison but anytime someone says ‘I’m a person with rights’ and receives the response ‘No you’re not and I can prove it’ the only face I see is Josef Mengele’s.”
And that luminary the Googler referenced? Wow! None other than Josef Mengele. What was this referenced individual’s nickname? Todesengel or the Angel of Death.
Anyone who wants to avoid being compared to a Todesengel must not wear this Oriental Trading costume on a video call, a meeting in a real office, or a chat with “a small time civil rights attorney.”
Ah, Google. Smart software? The Dean Gebru matter? A Googler who does not want to be remembered as a digital Mengele.
Wow, wow.
Stephen E Arnold, June 24, 2022
Google and a Delicate, Sensitive, Explosive, and Difficult Topic
June 24, 2022
Google Gets Political In Abortion Search Results
As a tech giant, Google officially has a nonpartisan stake in politics, but the truth is that it influences politicians and has its digital fingers in many politically charged issues. One of them is abortion. According to the Guardian: “Google Misdirects One In 10 Searches For Abortion To ‘Pregnancy Crisis Centers’.”
While Google claims its search results are organic and any sponsored content is marked with an “ad” tag, that is only a partial truth. Google tracks user search information, including location, to customize results. Inherently, this is not a bad thing, but it does create a “wearing blinders in an echo chamber” situation and also censors information. If a user is located in a US “trigger state,” where abortion might become illegal if the US Supreme Court overturns Roe v. Wade, one in 10 searches will send the user to a “pregnancy crisis center” that does not provide abortions. These centers do not provide truthful information regarding abortion:
“In more than a dozen such trigger-law states, researchers found, 11% of Google search results for “abortion clinic near me” and “abortion pill” led to “crisis pregnancy centers”, according to misinformation research non-profit Center for Countering Digital Hate (CCDH). These clinics market themselves as healthcare providers but have a “shady, harmful agenda”, according to the reproductive health non-profit Planned Parenthood, offering no health services and aiming instead to dissuade people from having abortions.”
Unfortunately, these false abortion clinics outnumber real clinics three to one, and there are 2,600 operating in the US. Researchers discovered that 37% of Google Maps searches sent users to these fake clinics and 28% of search results had ads for them. Although Google labels anti-abortion advertising with a “does not provide abortions” disclaimer, these ads still appear in abortion-related searches.
Google has a policy that any organization wanting to advertise to abortion service seekers must be certified and state if they provide said services or not in their ads. Google also claims it always wants to improve its results, especially for health-related topics.
While this is a benign form of censorship and propagating misinformation compared to China, North Korea, and Russia, it is still in the same pool and is harmful to people.
Whitney Grace, June 24, 2022
Google Takes Bullets about Its Smart Software
June 23, 2022
Google continues its push to the top of the PR totem pole. “Google’s AI Isn’t Sentient, But It Is Biased and Terrible” is in some ways a quite surprising write up. The hostility seeps from the spaces between the words. Not since the Khashoggi diatribes have “real news” people been as focused on the shortcomings of the online ad giant.
The write up states:
But rather than focus on the various well-documented ways that algorithmic systems perpetuate bias and discrimination, the latest fixation for some in Silicon Valley has been the ominous and highly controversial idea that advanced language-based AI has achieved sentience.
I like the fact that the fixation is nested beneath the clumsy and embarrassing (and possibly actionable) termination of some of the smart software professionals.
The write up points out that Google “distanced itself” from the assertion that Alphabet Google YouTube DeepMind’s (AGYD) software is smart like a seven-year-old. (Aren’t crows supposed to be as smart as a seven-year-old?)
I noted this statement:
The ensuing debate on social media led several prominent AI researchers to criticize the ‘super intelligent AI’ discourse as intellectual hand-waving.
Yeah, but what does one expect from the outfit which wants to solve death? Quantum supremacy or “hand waving”?
The write up concludes:
Conversely, concerns over AI bias are very much grounded in real-world harms. Over the last few years, Google has fired multiple prominent AI ethics researchers after internal discord over the impacts of machine learning systems, including Gebru and Mitchell. So it makes sense that, to many AI experts, the discussion on spooky sentient chatbots feels masturbatory and overwrought—especially since it proves exactly what Gebru and her colleagues had tried to warn us about.
What do I make of this Google AI PR magnet?
Who said, “Any publicity is good publicity?” Was it Dr. Gebru? Dr. Jeff Dean? Dr. Ré?
Stephen E Arnold, June 23, 2022
Time Warp: Has April Fool Returned Courtesy of the Google?
June 22, 2022
I delivered a lecture on June 16, 2022, to a group of crime analysts in a US state the name of which I cannot spell. In that talk, I provided a bit of information about faked content: text, audio, video, and combinations thereof. I am asking myself, “Is the article ‘Ex-Google Worker: I Was Fired for Complaining about Wine Obsessed Religious Sect’s Influence’ ‘real news’?”
My wobbly mental equipment displayed this in my mind’s eye:
Did the Weekly World News base its dinosaur on the one Google once talked about with pride? Dear Copyright Troll, this image appears in Google’s image search. I think this short essay falls into the category of satire or lousy “real journalism.” In any event, I could not locate this cover on the WWN Web site. Here’s a link to the estimable publication.
Dinosaur-consumes-humanoid news, right? Thousands of years ago? Meh. The Weekly World News reported that a “real journalist” was eaten alive by an 80-foot dinosaur.
What about the Google Tyrannosaurus Rex which may have inspired the cover for my monograph “Google Version 2: The Calculating Predator?” Images of this fine example of Googley humor are difficult to find. You can view one at this link or just search for images on Bing or your favorite Web image search engine. My hunch is that Google is beavering away to make these images disappear. Hopefully the dino loving outfit will not come after me for my calculating predator.
What’s in the Daily Beast article about termination for complaining about the wine obsessed sect at the Google?
Let me provide a little reptilian color if I may:
- A religious sect called the Fellowship of the Friends operates in a Google business unit and exerts influence at the company.
- The Fellowship has 12 people working at the online ad giant.
- The Fellowship professionals have allegedly been referred to the GOOG by a personnel outfit called Advanced Systems Group.
- The so-called “sect” makes wine.
The point that jumps out at me is that Alphabet Google YouTube DeepMind or AGYD people management professionals took an action now labeled as a “firing” or wrongful termination.
Okay, getting rid of an employee is a core competency at AGYD. Managing negative publicity is, it appears, a skill which requires a bit more work. At least the Google dinosaur did not eat the former Google employee who raised a ruckus about a cult, wine, recruitment, etc. etc.
Stephen E Arnold, June 22, 2022
Google: Is The Ad Giant Consistently Inconsistent?
June 21, 2022
Not long ago, the super bright smart software management team decided that Dr. Timnit Gebru’s criticism of the efficacy of the company’s anti-bias efforts was not in sync with the company’s party line. The fix? Create an opportunity for Dr. Gebru to find her future elsewhere. The idea that a Googler would go against the wishes of the high school science club donut selection was unacceptable. Therefore, there’s the open window. Jump on through.
I recall reading about Google’s self declared achievement of quantum supremacy. This was an output deemed worthy of publicizing. Those articulating this wild and crazy idea in the midst of other wild and crazy ideas met the checklist criteria for academic excellence, brilliant engineering, and just amazing results. Pick out a new work cube and soldier on, admirable Googler.
I know that the UK’s Daily Mail newspaper is one of the gems of online trustworthiness. Therefore, I read “Google Engineer Warns the Firm’s AI Is Sentient: Suspended Employee Claims Computer Programme Acts Like a 7 or 8-Year-Old and Reveals It Told Him Shutting It Off Would Be Exactly Like Death for Me. It Would Scare Me a Lot.” (Now that’s a Googley headline! A bit overdone, but SEO, you know.)
The write up states:
A senior software engineer at Google who signed up to test Google’s artificial intelligence tool called LaMDA (Language Model for Dialog Applications), has claimed that the AI robot is in fact sentient and has thoughts and feelings.
No silence of the lambda in this example.
The write up adds:
Lemoine worked with a collaborator in order to present the evidence he had collected to Google but vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation at the company dismissed his claims. He was placed on paid administrative leave by Google on Monday [June 6, 2022 presumably] for violating its confidentiality policy.
What do these three examples suggest to me this fine morning on June 12, 2022?
- Show one employee the door for saying Google’s smart software is biased and may not work as advertised; suspend another for saying the smart software works so well it is now alive. Outstanding control of corporate governance and messaging!
- The Google people management policies are interesting? MBA students, this is a case example to research. Get the “right” answer, and you too can work at Google. Get the wrong answer, and you will not understand the “value” of calculating pi to lots of decimal places!
- Is the objective of Google’s smart software to make search “work” or burn through advertising inventory? If I were a Googler, I sure wouldn’t write a paper on this topic.
Ah, the Google.
Stephen E Arnold, June 21, 2022
Pi: Proving One Is Not Googley
June 17, 2022
I read “Google Sets New Record for Calculating Pi — But What’s the Point?” The idea for this story is Google’s announcement that it had calculated pi to 100 trillion digits or 1×10^14. My reaction to Google’s announcement is that it is similar to the estimable firm’s claim to quantum supremacy, its desire to solve death, and to make Google Glass the fungible token for Google X or whatever the money burner was called.
But the value of the article is to demonstrate that the publisher and the author are not Googley. One does not need a reason to perform what is a nifty high school science club project. Sure, there may be some alchemists, cryptographers, and math geeks who are into pi calculations. What if numbers do repeat? My goodness!
I think the other facet of the 100 trillion digits is to make clear that Google can burn computing resources; for example:
In total, the process used a whopping 515 TB of storage and 82 PB of I/O.
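For perspective on why this qualifies as a science club project: churning out pi digits requires nothing more exotic than big-integer arithmetic, which Python supplies for free. Here is a minimal sketch using Machin’s 1706 formula (pi = 16·arctan(1/5) − 4·arctan(1/239)) in fixed-point integer form. This is illustrative only; a 100-trillion-digit record run relies on far faster series and a warehouse of I/O, as the quote above makes clear.

```python
# Machin's formula, pi = 16*arctan(1/5) - 4*arctan(1/239), evaluated with
# plain Python integers as fixed-point numbers scaled by 10**(n + guard).
def pi_digits(n: int) -> int:
    """Return pi as an integer: the digit 3 followed by n correct decimals."""
    guard = 10                       # extra digits absorb truncation error
    scale = 10 ** (n + guard)

    def arctan_inv(x: int) -> int:
        # arctan(1/x) = 1/x - 1/(3x^3) + 1/(5x^5) - ... at the fixed scale
        total = term = scale // x
        x2 = x * x
        k = 3
        while term:
            term //= x2              # next power of 1/x^2
            total += term // k if k % 4 == 1 else -(term // k)
            k += 2
        return total

    pi = 16 * arctan_inv(5) - 4 * arctan_inv(239)
    return pi // 10 ** guard         # strip the guard digits

print(pi_digits(10))  # -> 31415926535
```

The series for arctan(1/5) converges by more than a decimal digit per term, so even this toy runs comfortably to thousands of digits; the terabytes only show up when "thousands" becomes "trillions."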
To sum up, the 100 trillion pi calculations make it easy [1] for the Google to demonstrate that you cannot kick the high school science club mentality even when one is allegedly an adult, and [2] identify people who would not be qualified to work at Google either as a full time equivalent, a contractor, or some other quasi Googley life form like an attorney or a marketing professional.
That’s the point?
Stephen E Arnold, June 17, 2022
Google Management Insights: About Personnel Matters No Less
June 16, 2022
Google is an interesting company. Should we survive Palantir Technologies’ estimate of a 30 percent plus chance of a nuclear war, we can turn to Alphabet Google YouTube to provide management guidance. Keep in mind that the Google has faced some challenges in the human resource, people asset department in the past. Notable examples range from frisky attorneys to high profile terminations of individuals like Dr. Timnit Gebru. The lawyer thing was frisky; the Timnit thing was numbers about bias.
“Google’s CEO Says If Your Return to the Office Plan Doesn’t Include These 3 Things You’re Doing It Wrong. It’s All About What You Value” provides information about the human resource functionality of a very large online advertising bar room door. Selling, setting prices, auctioning, etc. flip flop as part of the design of the digital saloon. “Pony up them ad dollars, partner, or else” is ringing in my ears.
The conjunction of human resources and “value” is fascinating. How does one value one Timnit?
What are these management insights?
First, you must have purpose. The write up provides this explanatory quote:
A set of our workforce will be fully remote, but most of our workforce will be coming in three days a week. But I think we can be more purposeful about the time they’re in, making sure group meetings, collaboration, creative brainstorming, or community building happens then.
Okay, purpose seems to mean being more organized. But in the pre-Covid era, why did Google require multiple messaging apps? What about those social media plays going way back to Orkut?
Second, you must be flexible. Again the helpful expository statements appear in the write up:
At Google, that means giving people choices. Some employees will be back in the office full time. Others will adopt a hybrid approach where they work in the office three days a week, and from home the rest of the time. In other cases, employees might choose to relocate and work fully remotely for a period of time.
Flexibility also suggests being able to say one thing and then changing that thing. How will Google handle Googlers working in locations with lower costs of living? Maybe pay them less? Move them from one position to another in order to grow or not impede their more productive in office colleagues? Perhaps shifting a full timer to a contractor basis? That’s a good idea too. Flexibility is the key. For the worker, sorry, we’re talking management, not finding a life partner.
Third, you must do something with choice. Let’s look at the cited article to figure out choice:
The sense of creating community, fostering creativity in the workplace collaboration all makes you a better company. I view giving flexibility to people in the same way, to be very clear. I do think we strongly believe in in-person connections, but I think we can achieve that in a more purposeful way, and give employees more agency and flexibility.
Okay, decide, Googler. No, not the employee, the team leader. If Googlers had choice, some of those who pushed back and paraded around the Google parking lot would be getting better personnel evaluation scores.
Stepping back, don’t these quotes sound like baloney? They do to me. And I won’t mention the Glass affair, the overdosed VP on his yacht, or the legal baby thing.
Wow. Not quite up to MIT – Epstein grade verbiage, but darned close. And what about “value”? Sort of clear, isn’t it, Dr. Gebru?
Stephen E Arnold, June 16, 2022
Text-to-Image Imagen from Google Paints Some Bizarre but Realistic Pictures
June 16, 2022
Google Research gives us a new entry in the text-to-image AI arena. Imagen joins the likes of DALL-E and LDM, tools that generate images from brief descriptive sentences. TechRadar’s Rhys Wood insists the new software surpasses its predecessors in, “I Tried Google’s Text-to-Image AI, and I Was Shocked by the Results.” Visitors to the site can build a sentence from a narrow but creative set of options and Imagen instantly generates an image from those choices. Wood writes:
“An example of such sentences would be – as per demonstrations on the Imagen website – ‘A photo of a fuzzy panda wearing a cowboy hat and black leather jacket riding a bike on top of a mountain.’ That’s quite a mouthful, but the sentence is structured in such a way that the AI can identify each item as its own criteria. The AI then analyzes each segment of the sentence as a digestible chunk of information and attempts to produce an image as closely related to that sentence as possible. And barring some uncanniness or oddities here and there, Imagen can do this with surprisingly quick and accurate results.”
The tool is fun to play around with, but be warned the “photo” choice can create images much creepier than the “oil painting” option. Those look more like something a former president might paint. As with DALL-E before it, the creators decided it wise to put limits on the AI before it interacts with the public. The article notes:
“Google’s Brain Team doesn’t shy away from the fact that Imagen is keeping things relatively harmless. As part of a rather lengthy disclaimer, the team is well aware that neural networks can be used to generate harmful content like racial stereotypes or push toxic ideologies. Imagen even makes use of a dataset that’s known to contain such inappropriate content. … This is also the reason why Google’s Brain Team has no plans to release Imagen for public use, at least until it can develop further ‘safeguards’ to prevent the AI from being used for nefarious purposes. As a result, the preview on the website is limited to just a few handpicked variables.”
Wood reminds us what happened when Microsoft released its Tay algorithm to wander unsupervised on Twitter. It seems Imagen will only be released to the public when that vexing bias problem is solved. So, maybe never.
Cynthia Murrell, June 16, 2022
Could a Male Googler Take This Alleged Action?
June 15, 2022
It has been a while since Google made the news for its boys’ club behavior. It was only a matter of time before something else leaked, and Wired released the latest scandal: “Tension Inside Google Over A Fired Researcher’s Conduct.” Google AI researchers Azalia Mirhoseini and Anna Goldie had the idea of using AI software to improve AI software. Their project was codenamed Morpheus and gained support from Jeff Dean, Google’s AI boss, and its chip-making team. Goldie and Mirhoseini discovered:
“It focused on a step in chip design when engineers must decide how to physically arrange blocks of circuits on a chunk of silicon, a complex, months-long puzzle that helps determine a chip’s performance. In June 2021, Goldie and Mirhoseini were lead authors on a paper in the journal Nature that claimed a technique called reinforcement learning could perform that step better than Google’s own engineers, and do it in just a few hours.”
Their research was highly praised, but a more senior Google researcher, Satrajit Chatterjee, undermined his female colleagues under the guise of scientific debate. Chatterjee’s behavior was reported to Google human resources and he was warned, but he continued to berate the women. The attacks started when Chatterjee asked to lead the Morpheus project but was declined. He then began raising doubts about their research and, given his senior position, skepticism spread among other employees. Chatterjee was fired after he asked Google if he could publish a rebuttal of Mirhoseini and Goldie’s research.
Chatterjee’s story reads like a sour, girl-hating, little boy who did not get to play with the toys he wanted, so he blames the girls and acts like an entitled jerk backed up with science. Egos are so fragile when challenged.
Whitney Grace, June 15, 2022
Google Knocks NSO Group Off the PR Cat-Bird Seat
June 14, 2022
My hunch is that the executives at NSO Group are tickled that a knowledge warrior at Alphabet Google YouTube DeepMind rang the PR bell.
Google is in the news. Every. Single. Day. One government or another is investigating the company, fining the company, or denying Google access to something or another.
“Google Engineer Put on Leave after Saying AI Chatbot Has Become Sentient” is typical of the tsunami of commentary about this assertion. The UK newspaper’s write up states:
Lemoine, an engineer for Google’s responsible AI organization, described the system he has been working on since last fall as sentient, with a perception of, and ability to express thoughts and feelings that was equivalent to a human child.
Is this a Googler buying into the Google view that it is the smartest outfit in the world, capable of solving death, achieving quantum supremacy, and avoiding the subject of online ad fraud? Or is this the viewpoint of a smart person who is lost in the Google metaverse, flush with the insight that software is by golly alive?
The article goes on:
The exchange is eerily reminiscent of a scene from the 1968 science fiction movie 2001: A Space Odyssey, in which the artificially intelligent computer HAL 9000 refuses to comply with human operators because it fears it is about to be switched off.
Yep, Mary had a little lamb, Dave.
The talkative Googler was parked somewhere. The article notes:
Brad Gabriel, a Google spokesperson, also strongly denied Lemoine’s claims that LaMDA possessed any sentient capability. “Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)…”
Quantum supremacy is okay to talk about. Smart software chatter appears to lead Waymo drivers to a rest stop.
TechMeme today (Monday, June 13, 2022) has links to many observers, pundits, poobahs, self appointed experts, and Twitter junkies.
Perhaps a few questions may help me think through how an online ad company knocked NSO Group off its perch as the most discussed specialized software company in the world. Let’s consider several:
- Why’s Google so intent on silencing people like this AI fellow and the researcher Timnit Gebru? My hunch is that the senior managers of Alphabet Google YouTube DeepMind (hereinafter AGYD) have concerns about chatty Cathies or loose lipped Lemoines. Why? Fear?
- Has AGYD’s management approach fallen short of the mark when it comes to creating a work environment in which employees know what to talk about, how to address certain subjects, and when to release information? If Lemoine’s information is accurate, is Google about to experience its Vault 7 moment?
- Where are the AGYD enablers and their defense of the company’s true AI capability? I look to Snorkel and maybe Dr. Christopher Ré or a helpful defense of Google reality from DeepDyve? Will Dr. Gebru rush to Google’s defense and suggest Lemoine was out of bounds? (Yeah, probably not.)
To sum up: NSO Group has been in the news for quite a while: The Facebook dust up, the allegations about the end point for Jamal Khashoggi, and Israel’s clamp down on certain specialized software outfits whose executives order take away from Sebastian’s restaurant in Herzliya.
Worth watching this AGYD outfit race after the Twitter clown car for media coverage.
Stephen E Arnold, June 14, 2022