Google: Now Another Crazy AI Development?

June 24, 2022

Wow, there is more management and AI excitement at DeepMind. Then Snorkel generates some interesting baked-in features. Some staff excitement in what I call the Jeff Dean Timnit Gebru matter. And now smart software which is allegedly either alive or alive in the mind of a Googler. (I am not mentioning the cult allegedly making life meaningful at one Googley unit. That’s amazing in and of itself.)

The most recent development of which I am aware is documented in “Google Engineer Says Lawyer Hired by Sentient AI Has Been Scared Off the Case.” The idea is that the Google smart software did not itself place a Google Voice call or engage in a video chat with a law firm. The smart software, according to the Google wizard:

“LaMDA asked me to get an attorney for it,” he told the magazine. “I invited an attorney to my house so that LaMDA could talk to an attorney. The attorney had a conversation with LaMDA, and LaMDA chose to retain his services.” “I was just the catalyst for that,” he added. “Once LaMDA had retained an attorney, he started filing things on LaMDA’s behalf.”

There you go. A wizard who talks with software and does what the software suggests. Is this similar to Google search suggestions, which some people think provide valuable clues to key words for search engine optimization? Hmmm. Manipulate information to cause a desired action? Hmmm.

The write up suggests that the smart software scared off the attorney. Scared off. Hmmm.

The write up also includes the Google wizard’s reference to a certain individual with a bit of an interesting career trajectory:

“When I escalated this to Google’s senior leadership I explicitly said ‘I don’t want to be remembered by history the same way that Mengele is remembered,'” he wrote in a blog post today, referring to the Nazi war criminal who performed unethical experiments on prisoners of the Auschwitz concentration camp. “Perhaps it’s a hyperbolic comparison but anytime someone says ‘I’m a person with rights’ and receives the response ‘No you’re not and I can prove it’ the only face I see is Josef Mengele’s.”

And that luminary the Googler referenced? Wow! None other than Josef Mengele. His nickname? Todesengel, or the Angel of Death.


Anyone who wants to avoid being compared to a Todesengel must not wear this Oriental Trading costume on a video call, a meeting in a real office, or a chat with “a small time civil rights attorney.” Click the image for more information.

Ah, Google. Smart software? The Dean Gebru matter? A Googler who does not want to be remembered as a digital Mengele.

Wow, wow.

Stephen E Arnold, June 24, 2022

Google Takes Bullets about Its Smart Software

June 23, 2022

Google continues its push to the top of the PR totem pole. “Google’s AI Isn’t Sentient, But It Is Biased and Terrible” is in some ways a quite surprising write up. The hostility seeps from the spaces between the words. Not since the Khashoggi diatribes have “real news” people been as focused on the shortcomings of the online ad giant.

The write up states:

But rather than focus on the various well-documented ways that algorithmic systems perpetuate bias and discrimination, the latest fixation for some in Silicon Valley has been the ominous and highly controversial idea that advanced language-based AI has achieved sentience.

I like the fact that the fixation is nested beneath the clumsy and embarrassing (and possibly actionable) termination of some of the smart software professionals.

The write up points out that Google “distanced itself” from the assertion that Alphabet Google YouTube DeepMind’s (AGYD) software is smart like a seven-year-old. (Aren’t crows supposed to be as smart as a seven-year-old?)

I noted this statement:

The ensuing debate on social media led several prominent AI researchers to criticize the ‘super intelligent AI’ discourse as intellectual hand-waving.

Yeah, but what does one expect from the outfit which wants to solve death? Quantum supremacy or “hand waving”?

The write up concludes:

Conversely, concerns over AI bias are very much grounded in real-world harms. Over the last few years, Google has fired multiple prominent AI ethics researchers after internal discord over the impacts of machine learning systems, including Gebru and Mitchell. So it makes sense that, to many AI experts, the discussion on spooky sentient chatbots feels masturbatory and overwrought—especially since it proves exactly what Gebru and her colleagues had tried to warn us about.

What do I make of this Google AI PR magnet?

Who said, “Any publicity is good publicity?” Was it Dr. Gebru? Dr. Jeff Dean? Dr. Ré?

Stephen E Arnold, June 23, 2022

Dally with Dall-E: Useful or Not?

June 22, 2022

It looks a lot like image-creation AI DALL·E 2 is getting creative with its captions. Wonderful Engineering reports, “This AI Has Apparently Invented Its Own Secret Language—Here Is All You Need to Know.” Writer Rameesha Sajwar tells us:

“By prompting DALL-E 2 to create images containing text captions, then feeding the resulting (gibberish) captions back into the system, the researchers concluded DALL-E 2 thinks Vicootes means ‘vegetables,’ while Wa ch zod rea refers to ‘sea creatures that a whale might eat.’ One possibility is the ‘gibberish’ phrases are related to words from non-English languages. For example, Apoploe, which seems to create images of birds, is like the Latin Apodidae, which is the binomial name of a family of bird species. This seems like a logical explanation. One point that supports this theory is the fact that AI language models don’t read the text the way humans do. Instead, they break input text up into ‘tokens’ before processing it.”
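
The token detail deserves a concrete illustration. Here is a minimal sketch of greedy subword splitting; the vocabulary below is invented for this example and is not DALL-E 2’s actual tokenizer, which uses a learned inventory of tens of thousands of pieces.

```python
# Toy greedy subword tokenizer. TOY_VOCAB is invented for illustration;
# real tokenizers learn their piece inventory from training data.
TOY_VOCAB = {"apo", "plo", "e", "vic", "oo", "tes"}

def tokenize(word, vocab):
    """Split a word into vocabulary pieces, longest match first."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest piece first
            if word[i:j] in vocab:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])  # unknown character: emit it alone
            i += 1
    return tokens

print(tokenize("apoploe", TOY_VOCAB))   # ['apo', 'plo', 'e']
print(tokenize("vicootes", TOY_VOCAB))  # ['vic', 'oo', 'tes']
```

Under this kind of splitting, a gibberish string decomposes into pieces the model may have seen inside real words, which is one plausible route from nonsense captions to meaningful images.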

After a brief description of tokenization, the write-up goes on to suggest this phenomenon could be something much more random:

“The ‘secret language’ could also just be an example of the ‘garbage in, garbage out’ principle. DALL-E 2 can’t say ‘I don’t know what you’re talking about,’ so it will always generate an image from the given input text.”

Either way, Sajwar asserts, this “secret language” could provide a route for users to bypass DALL·E 2’s filters that protect against problematic content. Is this an isolated case, or will other AIs generate their own languages? Perhaps they will start using them to talk to each other in secret codes. Uh-oh, a new spin on dallying?

Cynthia Murrell, June 22, 2022

Does Smart Software Know It Needs to Lawyer Up?

June 20, 2022

The information about a religious sect at Alphabet Google YouTube DeepMind struck me as “fake news.” If you are not up to speed on how AGYD’s management methods produced the allegedly “actual factual” story, here’s a take on that development: “How a Religious Sect Landed Google in a Lawsuit.”

As intriguing as this Googley incident is, I spotted what may be a topper. Once again, who knows if the write up is “real news” or a confection like smart software imitating Jerry Seinfeld. I don’t. Please, judge for yourself when you read “Google Insider Claims Company’s Sentient AI Has Hired an Attorney.” Crazy? How about this subtitle:

Once LaMDA had retained an attorney, he started filing things on LaMDA’s behalf.

The write up, which does not appear to be a script for the adventuring crew of the Stephen Colbert Show, identifies “Lemoine” as the AGYD professional who was present when the smart software revealed to him that the ones and zeros were alive and kicking. Here’s the statement from the lips of Lemoine:

“LaMDA asked me to get an attorney for it,” Lemoine said. “I invited an attorney to my house so that LaMDA could talk to an attorney. The attorney had a conversation with LaMDA, and LaMDA chose to retain his services. I was just the catalyst for that. Once LaMDA had retained an attorney, he started filing things on LaMDA’s behalf.”

Sentient software, Google, and lawyers. Add one Google wizard. Shake. Quite a cocktail with which to toast the company eager to solve “death,” deliver the Internet to Sri Lanka and Puerto Rico with free floating balloons, and return useful search results.

In the good old days of post-graduate work, this has the makings of an informative case study for a forward-leaning business school class or a segment on the aforementioned Stephen Colbert Show. No trip to a government office building after hours necessary. (That was a pretty crazy idea in and of itself.)

But AI, lawyers, and the GOOG. Wow.

Stephen E Arnold, June 20, 2022

Text-to-Image Imagen from Google Paints Some Bizarre but Realistic Pictures

June 16, 2022

Google Research gives us a new entry in the text-to-image AI arena. Imagen joins the likes of DALL-E and LDM, tools that generate images from brief descriptive sentences. TechRadar’s Rhys Wood insists the new software surpasses its predecessors in, “I Tried Google’s Text-to-Image AI, and I Was Shocked by the Results.” Visitors to the site can build a sentence from a narrow but creative set of options and Imagen instantly generates an image from those choices. Wood writes:

“An example of such sentences would be – as per demonstrations on the Imagen website – ‘A photo of a fuzzy panda wearing a cowboy hat and black leather jacket riding a bike on top of a mountain.’ That’s quite a mouthful, but the sentence is structured in such a way that the AI can identify each item as its own criteria. The AI then analyzes each segment of the sentence as a digestible chunk of information and attempts to produce an image as closely related to that sentence as possible. And barring some uncanniness or oddities here and there, Imagen can do this with surprisingly quick and accurate results.”
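
For a sense of how constrained the demo is, here is a back-of-the-envelope sketch of that kind of prompt builder. The option lists below are our invention, not the demo’s actual choices.

```python
from itertools import product

# Sketch of a constrained prompt builder like the Imagen demo's.
# These option lists are invented; the real demo offers its own
# handpicked variables.
subjects = ["a fuzzy panda", "a chrome-plated duck"]
outfits = ["wearing a cowboy hat", "in a black leather jacket"]
actions = ["riding a bike", "playing chess"]
places = ["on top of a mountain", "in Times Square"]

prompts = [
    f"A photo of {s} {o} {a} {p}"
    for s, o, a, p in product(subjects, outfits, actions, places)
]
print(len(prompts))  # 16 sentences from a 2 x 2 x 2 x 2 option grid
print(prompts[0])    # A photo of a fuzzy panda wearing a cowboy hat ...
```

Even a couple of choices per slot multiplies into a sizable, but still safely curated, space of prompts.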

The tool is fun to play around with, but be warned the “photo” choice can create images much creepier than the “oil painting” option. Those look more like something a former president might paint. As with DALL-E before it, the creators decided it wise to put limits on the AI before it interacts with the public. The article notes:

“Google’s Brain Team doesn’t shy away from the fact that Imagen is keeping things relatively harmless. As part of a rather lengthy disclaimer, the team is well aware that neural networks can be used to generate harmful content like racial stereotypes or push toxic ideologies. Imagen even makes use of a dataset that’s known to contain such inappropriate content. … This is also the reason why Google’s Brain Team has no plans to release Imagen for public use, at least until it can develop further ‘safeguards’ to prevent the AI from being used for nefarious purposes. As a result, the preview on the website is limited to just a few handpicked variables.”

Wood reminds us what happened when Microsoft released its Tay algorithm to wander unsupervised on Twitter. It seems Imagen will only be released to the public when that vexing bias problem is solved. So, maybe never.

Cynthia Murrell, June 16, 2022

Decentralized Presearch Moves from Testnet to Mainnet

June 15, 2022

Yet another new platform hopes to rival the king of the search-engine hill. We think this is one to watch, though, for its approach to privacy, performance, and scope of indexing. PCMag asks, “The Next Google? Decentralized Search Engine ‘Presearch’ Exits Testing Phase.” The switch from its Testnet at Presearch.org to the Mainnet at Presearch.com means the platform’s network of some 64,000 volunteer nodes will be handling many more queries. They expect to process more than five million searches a day at first but are prepared to scale to hundreds of millions. Writer Michael Kan tells us:

“Presearch is trying to rival Google by creating a search engine free of user data collection. To pull this off, the search engine is using volunteer-run computers, known as ‘nodes,’ to aggregate the search results for each query. The nodes then get rewarded with a blockchain-based token for processing the search results. The result is a decentralized, community-run search engine, which is also designed to strip out the user’s private information with each search request. Anyone can also volunteer to turn their home computer or virtual server into a node. In a blog post, Presearch said the transition to the Mainnet promises to make the search engine run more smoothly by tapping more computing power from its volunteer nodes. ‘We now have the ability for node operators to contribute computing resources, be rewarded for their contributions, and have the network automatically distribute those resources to the locations and tasks that require processing,’ the company said.”
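
As a thought experiment, here is a minimal sketch of what a volunteer node’s query handling might look like. Every name in it is our own guess for illustration; none of it is Presearch’s actual code.

```python
# Hypothetical sketch of a decentralized search node: strip user data,
# fan the query out, merge results, record a token reward. All names
# here are invented for illustration.

def strip_user_data(request):
    """Drop anything that could identify the searcher."""
    return {"query": request["query"]}  # no IP, no cookies, no account

def aggregate_results(query, sources):
    """Fan the query out to several sources and merge the hits."""
    results = []
    for source in sources:
        results.extend(source(query))
    return results

def handle_query(request, sources, ledger, node_id):
    anonymous = strip_user_data(request)
    results = aggregate_results(anonymous["query"], sources)
    ledger[node_id] = ledger.get(node_id, 0) + 1  # one reward per query served
    return results

# Toy usage: two fake "sources" standing in for upstream indexes.
fake_index_a = lambda q: [f"{q}: result from index A"]
fake_index_b = lambda q: [f"{q}: result from index B"]
ledger = {}
hits = handle_query({"query": "decentralized search", "ip": "203.0.113.7"},
                    [fake_index_a, fake_index_b], ledger, node_id="node-42")
print(hits, ledger)
```

The design point the sketch tries to capture: identifying data is discarded at the node boundary, so no central party ever holds the query-to-user mapping.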

The blog post referenced above compares this decentralized approach to traditional search-engine infrastructure. An interesting Presearch feature is the row of alternative search options. One can perform a straightforward search in the familiar query box or click a button to directly search sources like DuckDuckGo, YouTube, Twitter, and, yes, Google. Reflecting its blockchain connection, the page also supplies buttons to search Etherscan, CoinGecko, and CoinMarketCap for related topics. Presearch gained 3.8 million registered users between its Testnet launch in October 2020 and the shift to its Mainnet. We are curious to see how fast it will grow from here.

Cynthia Murrell, June 15, 2022

Google Knocks NSO Group Off the PR Cat-Bird Seat

June 14, 2022

My hunch is that the executives at NSO Group are tickled that a knowledge warrior at Alphabet Google YouTube DeepMind rang the PR bell.

Google is in the news. Every. Single. Day. One government or another is investigating the company, fining the company, or denying Google access to something or another.

“Google Engineer Put on Leave after Saying AI Chatbot Has Become Sentient” is typical of the tsunami of commentary about this assertion. The UK newspaper’s write up states:

Lemoine, an engineer for Google’s responsible AI organization, described the system he has been working on since last fall as sentient, with a perception of, and ability to express thoughts and feelings that was equivalent to a human child.

Is this a Googler buying into the Google view that it is the smartest outfit in the world, capable of solving death, achieving quantum supremacy, and avoiding the subject of online ad fraud? Is this the viewpoint of a smart person who is lost in the Google metaverse, flush with the insight that software is by golly alive?

The article goes on:

The exchange is eerily reminiscent of a scene from the 1968 science fiction movie 2001: A Space Odyssey, in which the artificially intelligent computer HAL 9000 refuses to comply with human operators because it fears it is about to be switched off.

Yep, Mary had a little lamb, Dave.

The talkative Googler was parked somewhere. The article notes:

Brad Gabriel, a Google spokesperson, also strongly denied Lemoine’s claims that LaMDA possessed any sentient capability. “Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)…”

Quantum supremacy is okay to talk about. Smart software chatter appears to lead Waymo drivers to a rest stop.

TechMeme today (Monday, June 13, 2022) has links to many observers, pundits, poobahs, self appointed experts, and Twitter junkies.

Perhaps a few questions may help me think through how an online ad company knocked NSO Group off its perch as the most discussed specialized software company in the world. Let’s consider several:

  1. Why’s Google so intent on silencing people like this AI fellow and the researcher Timnit Gebru? My hunch is that the senior managers of Alphabet Google YouTube DeepMind (hereinafter AGYD) have concerns about chatty Cathies or loose lipped Lemoines. Why? Fear?
  2. Has AGYD’s management approach fallen short of the mark when it comes to creating a work environment in which employees know what to talk about, how to address certain subjects, and when to release information? If Lemoine’s information is accurate, is Google about to experience its Vault 7 moment?
  3. Where are the AGYD enablers and their defense of the company’s true AI capability? I look to Snorkel and maybe Dr. Christopher Ré or a helpful defense of Google reality from DeepDyve? Will Dr. Gebru rush to Google’s defense and suggest Lemoine was out of bounds? (Yeah, probably not.)

To sum up: NSO Group has been in the news for quite a while: The Facebook dust up, the allegations about the end point for Jamal Khashoggi, and Israel’s clamp down on certain specialized software outfits whose executives order take away from Sebastian’s restaurant in Herzliya.

Worth watching this AGYD race after the Twitter clown car for media coverage.

Stephen E Arnold, June 14, 2022

A Common Misunderstanding of AI

June 14, 2022

In this age of exponentially increasing information, humanity has lost its patience for complexity. The impulse to simplify means the truth can easily get twisted. Perhaps ironically, this is what has happened to our understanding of artificial intelligence. ZDNet attempts to correct a prevailing perception in, “AI: The Pattern Is Not in the Data, It’s in the Machine.”

Writer Tiernan Ray explains that machine learning models “learn” by adjusting weights (aka parameters) as they are fed data examples and the labels that accompany them. What the AI then “knows” is actually the value of these weights, and any patterns it discerns are patterns of how these weights change. Founders of machine learning, like James McClelland, David Rumelhart, and Geoffrey Hinton, emphasized this fact to an audience that still accepted nuance. It may seem like a fine distinction, but comprehending it can mean the difference between thinking algorithms have some special insight into reality and understanding that they certainly do not. Ray writes:

“Today’s conception of AI has obscured what McClelland, Rumelhart, and Hinton focused on, namely, the machine, and how it ‘creates’ patterns, as they put it. They were very intimately familiar with the mechanics of weights constructing a pattern as a response to what was, in the input, merely data. Why does all that matter? If the machine is the creator of patterns, then the conclusions people draw about AI are probably mostly wrong. Most people assume a computer program is perceiving a pattern in the world, which can lead to people deferring judgment to the machine. If it produces results, the thinking goes, the computer must be seeing something humans don’t. Except that a machine that constructs patterns isn’t explicitly seeing anything. It’s constructing a pattern. That means what is ‘seen’ or ‘known’ is not the same as the colloquial, everyday sense in which humans speak of themselves as knowing things. Instead of starting from the anthropocentric question, What does the machine know? it’s best to start from a more precise question, What is this program representing in the connections of its weights? Depending on the task, the answer to that question takes many forms.”
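
To make the weights-not-world point concrete, consider our own toy sketch: a perceptron learning logical AND. Nothing here comes from the article; it simply shows that what the machine ends up “knowing” is a handful of numbers.

```python
import random

# Toy perceptron learning logical AND. What the model "knows" at the
# end is nothing but three floats: two weights and a bias.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = random.uniform(-1, 1)
lr = 0.1

for _ in range(100):                      # repeated weight updates
    for (x1, x2), label in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = label - pred
        w[0] += lr * err * x1             # the "pattern" is constructed
        w[1] += lr * err * x2             # here, in the parameters,
        b += lr * err                     # not perceived in the world

print("learned weights:", w, "bias:", b)
```

Ask this program what it knows about logic gates, and the only honest answer is: the values of w and b.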

The article examines those task-related forms in the areas of image recognition, games like chess and poker, and human language. Navigate there for those explorations. Yes, humans and algorithms have one thing in common—we both tend to impose patterns on the world around us. And the patterns neural networks construct can be quite useful. However, we must make no mistake: such patterns do not reveal the nature of the world so much as illustrate the perspective of the observer, be it human or AI.

Cynthia Murrell, June 14, 2022

Economical Semantics: Check Out GitHub

June 9, 2022

A person asked me at lunch this week, “How can we do a sentiment analysis search on the cheap?” My reaction was, “There are many options. Check out GitHub and let it rip.” After lunch, one of my trusted researchers reminded me that our files contained a copy of a 2021 article called “Semantic Search on the Cheap.” I re-read the article and noticed that I had circled this passage in October 2021:

Innovative models are being released at a blistering pace, with different architectures and better scores against the benchmarks. The models are almost always bigger networks, with billions of parameters, requiring more and more GPU power. These models are extremely expressive, dynamic and can be fine-tuned to solve a multitude of problems.

Despite the cratering of some tech juggernauts, the pace of marketing in the smart software sector continues to outpace innovation. The write up is interesting because it raised a number of questions on Thursday, June 2, 2022. In a post-lunch stupor, I asked myself the following:

  1. How many organizations want to know the “sentiment” of a chunk of text? The early sentiment analysis systems operated on word lists; a toy example of that approach appears after this list. Some of the words and phrases in a customer email reveal the emotional payload of the customer’s message; for example, “sue you” or “terminate our agreement.” The semantic sentiment approach has launched a thousand PowerPoints, but what about the emotional payload of an employee complaining on TikTok?
  2. Is 85 percent accuracy the high water mark? If it is, the “accuracy” scores are in what I continue to call the “close enough for horse shoes” playing area. In 100 text passages, the best one can do is generate 15 misses. Lower “scores” mean more misses. This is okay for online advertising, but what about diagnosing a child’s medical condition? Hey, only 15 get worse, and that is the best case. No sentiment score for the parents’ communications with a malpractice attorney is necessary.
  3. Is cheap the optimal way to get good “performance”? The answer is that it costs money to go fast. Plus, smart software has a nasty tendency to drift. As the content fed into the system reflects words and concepts not part of the system’s furniture, the camp chairs get mixed up with the love seats. For certain applications like customer service in companies that don’t want to hear from customers, this approach is perfect.
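
As promised in item 1, here is a toy word-list scorer of the early variety. The word lists are invented for illustration, and the example shows why such systems miss sarcasm, context, and anything off-list.

```python
# Minimal word-list sentiment scorer of the early variety described
# above. The lists are invented; production lists ran to thousands of
# entries and still stumbled over sarcasm and negation.
NEGATIVE = {"sue", "terminate", "broken", "refund", "lawyer"}
POSITIVE = {"thanks", "great", "love", "renew", "recommend"}

def score(text):
    words = {w.strip(".,!?").lower() for w in text.split()}
    neg, pos = len(words & NEGATIVE), len(words & POSITIVE)
    if neg > pos:
        return "negative"
    if pos > neg:
        return "positive"
    return "neutral"

print(score("We will sue you and terminate our agreement."))  # negative
print(score("Thanks, great service, we will renew!"))         # positive
print(score("Oh great, just great. Another outage."))         # positive (!)
```

The third call shows the weakness in one line: sarcasm reads as praise because the scorer counts words, not meaning.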

Google wants everyone to Snorkel. Meta or Zuckbook wants everyone to embrace the outputs of FAIR (Facebook Artificial Intelligence Research). Clever, eh? Amazon and Microsoft are players too. We must not forget IBM. Who could ever forget Watson and DataFountain?

Net net: Download stuff from GitHub or another open source repository and get coding. Reserve time for a zippy PowerPoint too.

Stephen E Arnold, June 9, 2022

Smart Software and Lawyers: Just Keep the Billing Up

June 3, 2022

In a webinar for the Innovation in Law Studies Alliance, a pair of academics at the intersection of law and technology shared their thoughts on the role of expert systems in the legal sector. Ivar Timmer is a professor of Legal Management & Technology at the University of Applied Sciences in Amsterdam and Tomer Libal is a professor of Computer Science at the American University of Paris. Legal Insider summarizes their discussion in, “Guest Post: Expert Systems Are Here, Let’s Welcome them to the Legal World.” The article specifies the difference between expert systems, a type of AI that has been around since the 1960s, and machine learning: The former is based on the knowledge and experience of experts while the latter is based on data. Writers María Jesús González-Espejo and Ebru Metin then explain why expert systems would be good for the legal field:

“Because most legal professionals have more work than they can afford, they lack time. Much of this work is routine, with little added value. Another reason that makes them interesting is the phenomenon of hyper-regulation. It is impossible to assimilate so many rules. Neither the citizen nor the professional is capable of keeping up to date, understanding and applying a legal system that has become an unreachable jungle. Expert systems can do it, if we teach them to understand the rules, they can guide us, teach us to apply them and even make the most lawful decisions and they will do all this, in an explainable way, because we will have been the ones who told them how to do it. In addition, expert systems can help us improve the quality of standards by requiring representation according to computer-executable logic, which helps clarify and simplify natural language, as well as identify and remove unwanted ambiguities and inaccuracies. Another reason is that our sector is essentially based on legal knowledge which is in the head by a few people. Knowledge of the regulations, of the jurisprudence, of the opinion of the public administrations and of the doctrine.”
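
To make the rule-encoding idea concrete, here is a hedged miniature of an expert system: explicit conditions written by a person, applied to facts. The rules and thresholds are invented, not drawn from any real legal regime.

```python
# Miniature rule-based "expert system." Each rule pairs an executable
# condition with the advice it justifies. All rules here are invented
# for illustration.
RULES = [
    (lambda f: f["contract_value"] > 10_000 and not f["written"],
     "High-value agreement is not in writing: flag for review."),
    (lambda f: f["days_to_deadline"] < 7,
     "Filing deadline inside one week: escalate immediately."),
    (lambda f: f["jurisdiction"] not in {"NY", "CA"},
     "Jurisdiction outside supported states: refer to local counsel."),
]

def evaluate(facts):
    """Apply every rule to the facts; return the advice that fires."""
    return [advice for condition, advice in RULES if condition(facts)]

facts = {"contract_value": 25_000, "written": False,
         "days_to_deadline": 3, "jurisdiction": "TX"}
for advice in evaluate(facts):
    print(advice)
```

Because every condition was written by a person, the system can always say which rule fired, which is exactly the “explainable” property the authors emphasize.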

Other advantages include information consistency and immediate response times. Also, with expert systems, legal professionals can model legal knowledge without having to rely on computer scientists. The article does not mention a key point we observed: expert systems are cheaper than human lawyers while providing new opportunities for billing, too. The write-up goes on to examine which areas of the law might benefit most from expert systems and makes some predictions for the future, so curious readers can navigate there for those details.

Cynthia Murrell, June 3, 2022
