Text-to-Image Imagen from Google Paints Some Bizarre but Realistic Pictures

June 16, 2022

Google Research gives us a new entry in the text-to-image AI arena. Imagen joins the likes of DALL-E and LDM, tools that generate images from brief descriptive sentences. TechRadar’s Rhys Wood insists the new software surpasses its predecessors in “I Tried Google’s Text-to-Image AI, and I Was Shocked by the Results.” Visitors to the site can build a sentence from a narrow but creative set of options, and Imagen instantly generates an image from those choices. Wood writes:

“An example of such sentences would be – as per demonstrations on the Imagen website – ‘A photo of a fuzzy panda wearing a cowboy hat and black leather jacket riding a bike on top of a mountain.’ That’s quite a mouthful, but the sentence is structured in such a way that the AI can identify each item as its own criteria. The AI then analyzes each segment of the sentence as a digestible chunk of information and attempts to produce an image as closely related to that sentence as possible. And barring some uncanniness or oddities here and there, Imagen can do this with surprisingly quick and accurate results.”
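Imagen itself is not open to the public, so one cannot script it directly. As a rough illustration of the same prompt-to-image flow, here is a minimal sketch using the open-source Hugging Face diffusers library and an open diffusion model as stand-ins for Google’s system; the model name and settings are illustrative assumptions, not anything Google ships:

    # Minimal text-to-image sketch. Open-source stand-ins for Imagen,
    # which is not publicly available; the model name is illustrative.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    prompt = ("A photo of a fuzzy panda wearing a cowboy hat and black "
              "leather jacket riding a bike on top of a mountain.")

    # The text encoder turns the prompt into embeddings; the diffusion
    # model then denoises random noise toward an image matching them.
    image = pipe(prompt).images[0]
    image.save("panda.png")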

The tool is fun to play around with, but be warned: the “photo” choice can create images much creepier than the “oil painting” option. Those look more like something a former president might paint. As with DALL-E before it, the creators decided it wise to put limits on the AI before it interacts with the public. The article notes:

“Google’s Brain Team doesn’t shy away from the fact that Imagen is keeping things relatively harmless. As part of a rather lengthy disclaimer, the team is well aware that neural networks can be used to generate harmful content like racial stereotypes or push toxic ideologies. Imagen even makes use of a dataset that’s known to contain such inappropriate content. … This is also the reason why Google’s Brain Team has no plans to release Imagen for public use, at least until it can develop further ‘safeguards’ to prevent the AI from being used for nefarious purposes. As a result, the preview on the website is limited to just a few handpicked variables.”

Wood reminds us what happened when Microsoft released its Tay algorithm to wander unsupervised on Twitter. It seems Imagen will only be released to the public when that vexing bias problem is solved. So, maybe never.

Cynthia Murrell, June 16, 2022

Disadvantaged Groups and Simple Explanations

June 16, 2022

Bias in machine learning algorithms is a known problem. Decision makers, such as admissions officers, sometimes turn to explanation models in an effort to avoid this issue. These tools construct simplified approximations of larger models’ predictions that are easier to understand. But wait, one may ask, aren’t these explanations also generated by machine learning AI? Indeed they are. MIT News examines this sticky wicket in its piece, “In Bias We Trust?” A team of MIT researchers checked for bias in some widely used explanation models and, lo and behold, they found it. Writer Adam Zewe tells us:

“They found that the approximation quality of these explanations can vary dramatically between subgroups and that the quality is often significantly lower for minoritized subgroups. In practice, this means that if the approximation quality is lower for female applicants, there is a mismatch between the explanations and the model’s predictions that could lead the admissions officer to wrongly reject more women than men. Once the MIT researchers saw how pervasive these fairness gaps are, they tried several techniques to level the playing field. They were able to shrink some gaps, but couldn’t eradicate them. ‘What this means in the real-world is that people might incorrectly trust predictions more for some subgroups than for others. So, improving explanation models is important, but communicating the details of these models to end users is equally important. These gaps exist, so users may want to adjust their expectations as to what they are getting when they use these explanations,’ says lead author Aparna Balagopalan, a graduate student in the Healthy ML group of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).”
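To make the finding concrete: an explanation model is a simple surrogate fitted to a complex model’s predictions, and “fidelity” is how often the two agree. A toy sketch of the per-subgroup measurement, using scikit-learn on synthetic data; this illustrates the concept, not the MIT team’s actual code:

    # Sketch: measure how well a simple "explanation" surrogate matches
    # a black-box model, per subgroup. Synthetic data, illustration only.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    n = 5000
    X = rng.normal(size=(n, 6))
    group = rng.integers(0, 2, size=n)       # 0 = majority, 1 = minority
    # The label depends on features differently per subgroup, so one
    # simple surrogate cannot fit both subgroups equally well.
    y = (((X[:, 0] + X[:, 1] > 0) & (group == 0)) |
         ((X[:, 2] * X[:, 3] > 0) & (group == 1))).astype(int)

    black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    bb_pred = black_box.predict(X)

    # The "explanation model": a shallow tree approximating the black box.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, bb_pred)
    sg_pred = surrogate.predict(X)

    for g in (0, 1):
        mask = group == g
        fidelity = (sg_pred[mask] == bb_pred[mask]).mean()
        print(f"group {g}: fidelity {fidelity:.3f}")

A gap between the two printed fidelity numbers is exactly the mismatch the researchers warn about.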

Or we could maybe go back to using human judgment for decisions that affect the lives of others? Nah. See the article for details of how the researchers evaluated explanation models’ fidelity and their work to narrow (but not eliminate) the gaps. Zewe reports the team plans to extend its research to ways fidelity gaps affect decisions in the real world. We look forward to learning what they find, though we suspect we will not be surprised by the results.

Cynthia Murrell, June 16, 2022

NSO Group: Is This a Baller Play to Regain Its PR Initiative or a Fumble?

June 15, 2022

Secrecy and confidentiality are often positive characteristics in certain specialized software endeavors. One might assume that firms engaged in providing technology, engineering support, and consulting services would operate with a low profile. I like to think of my first meeting with Admiral Craig Hosmer. We each arrived at the DC Army Navy Club at 2:30 pm Eastern time. The Admiral told me where to sit. He joined me about 15 minutes later. The Club was virtually empty; the room was small but comfortable; and the one staff member was behind the bar doing what bartenders do: polishing glasses.

Looking back on that meeting in 1974, I am quite certain no one knew I was meeting the Admiral. I have no idea where the Admiral entered the building, nor did I see who drove him to the 17th Street NW location. My thought is that this type of setup for a meeting was what I would call “low profile.”

“US Defence Contractor in Talks to Take Over NSO Group’s Hacking Technology” illustrates what happens when the everyday precautions Admiral Hosmer took are ignored. A British newspaper reports:

The US defence contractor L3Harris is in talks to take over NSO Group’s surveillance technology, in a possible deal that would give an American company control over one of the world’s most sophisticated and controversial hacking tools. Multiple sources confirmed that discussions were centered on a sale of the Israeli company’s core technology – or code – as well as a possible transfer of NSO personnel to L3Harris.

Okay, so much for low profiling this type of deal.

I am not sure what “multiple sources” means. If someone were writing about my meeting the Admiral, the only sources of information would have been me, the Admiral’s technical aide (a nuclear scientist from Argonne National Laboratory), and perhaps the bartender, who did not approach the area in which the former chair of the Joint Committee on Atomic Energy and I were sitting.

But what have we got?

  1. A major newspaper’s story about a company that has made specialized services as familiar as TikTok.
  2. Multiple sources of information. What? Who is talking? Why?
  3. A White House “official” making a comment. Who? Why? To whom?
  4. A reference to a specialized news service called “Intelligence Online”. What was the source of this outfit’s information? Is that source high value? Why is a news service plunging into frog-killing hot water?
  5. Ramblings about the need to involve government officials in at least two countries. Who are the “officials”? Why are these people identified without specifics?
  6. References to human rights advocates. Which advocates? Why?

Gentle reader, I am a dinobaby who was once a consultant to the company which made this term popular. Perhaps a return to the good old days of low-profiling certain activities is appropriate?

One thing is certain: Not even Google’s 10-thumb approach to information about its allegedly smart software can top this NSO Group PR milestone.

Stephen E Arnold, June 15, 2022

The Alleged Apple M1 Vulnerability: Just Like Microsoft?

June 15, 2022

I read “MIT Researchers Uncover Unpatchable Flaw in Apple M1 Chips.” I have no idea if the exploit is one that can be migrated to a Dark Web or Telegram Crime-as-a-Service pitch. Let’s assume that there may be some truth to the clever MIT wizards’ discoveries.

First, note this statement from the cited article:

The researchers — who presented their findings to Apple — noted that the Pacman attack isn’t a “magic bypass” for all security on the M1 chip, and can only exploit an existing bug that pointer authentication protects against.

And this:

In May last year, a developer discovered an unfixable flaw in Apple’s M1 chip that creates a covert channel that two or more already-installed malicious apps could use to transmit information to each other. But the bug was ultimately deemed “harmless” as malware can’t use it to steal or interfere with data that’s on a Mac.
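For context, pointer authentication stores a short cryptographic signature in a pointer’s unused bits and checks it before the pointer is used, so a corrupted pointer triggers a fault rather than a jump to attacker-chosen code. A toy Python model of the idea, with HMAC standing in for the ARM hardware primitive; this is conceptual, not how the M1 actually implements PAC:

    # Toy model of pointer authentication (PAC). HMAC stands in for the
    # ARM hardware primitive; a real PAC lives in unused pointer bits.
    import hmac, hashlib

    KEY = b"per-process secret key"

    def sign(ptr: int, context: int) -> bytes:
        """Compute a short MAC over (pointer, context)."""
        msg = ptr.to_bytes(8, "little") + context.to_bytes(8, "little")
        return hmac.new(KEY, msg, hashlib.sha256).digest()[:2]

    def authenticate(ptr: int, context: int, mac: bytes) -> int:
        """Verify the MAC before 'dereferencing'; fail on forgery."""
        if not hmac.compare_digest(mac, sign(ptr, context)):
            raise RuntimeError("pointer authentication failure")
        return ptr

    ptr = 0x7FFF1234
    mac = sign(ptr, context=42)
    print(hex(authenticate(ptr, 42, mac)))   # legitimate use passes

    try:
        authenticate(ptr + 8, 42, mac)       # attacker rewrote the pointer
    except RuntimeError as err:
        print("blocked:", err)

As the coverage describes it, the Pacman technique matters because a real signature is only a handful of bits, and speculative execution lets an attacker guess those bits without triggering the crash a wrong guess would normally cause.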

I may be somewhat jaded, but if these statements are accurate, the “unpatchable” adjective is a slice of today’s reality. Windows Defender may not defend. SolarWinds may burn with unexpected vigor. Cyber security software may be more compelling in a PowerPoint deck than installed on a licensee’s system, wherever it resides.

The key point is that, as with many functions in modern life, there is no easy fix. Human error? Indifference? Clueless quality assurance and testing processes?

My hunch is that this is a culmination of the attitude of “good enough” and “close enough for horseshoes.”

One certainty: Bad actors are emboldened because they can assume that whatever the big outfits produce will have flaws, backdoors, loopholes, stupid mistakes, and other inducements to break laws.

Perhaps it is time for a rethink?

Stephen E Arnold, June 15, 2022

Decentralized Presearch Moves from Testnet to Mainnet

June 15, 2022

Yet another new platform hopes to rival the king of the search-engine hill. We think this is one to watch, though, for its approach to privacy, performance, and scope of indexing. PCMag asks, “The Next Google? Decentralized Search Engine ‘Presearch’ Exits Testing Phase.” The switch from its Testnet at Presearch.org to the Mainnet at Presearch.com means the platform’s network of some 64,000 volunteer nodes will be handling many more queries. Operators expect to process more than five million searches a day at first but are prepared to scale to hundreds of millions. Writer Michael Kan tells us:

“Presearch is trying to rival Google by creating a search engine free of user data collection. To pull this off, the search engine is using volunteer-run computers, known as ‘nodes,’ to aggregate the search results for each query. The nodes then get rewarded with a blockchain-based token for processing the search results. The result is a decentralized, community-run search engine, which is also designed to strip out the user’s private information with each search request. Anyone can also volunteer to turn their home computer or virtual server into a node. In a blog post, Presearch said the transition to the Mainnet promises to make the search engine run more smoothly by tapping more computing power from its volunteer nodes. ‘We now have the ability for node operators to contribute computing resources, be rewarded for their contributions, and have the network automatically distribute those resources to the locations and tasks that require processing,’ the company said.”
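In other words, the privacy pitch reduces to this: a node sees the query text, not the person. A conceptual sketch of what a node’s request handling might look like; the field names and token reward are hypothetical, not Presearch’s actual code:

    # Conceptual sketch of a decentralized search node: strip identifying
    # fields, serve results, tally a token reward. Field names and the
    # reward amount are hypothetical, not Presearch's implementation.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        node_id: str
        tokens_earned: float = 0.0
        index: dict = field(default_factory=dict)   # term -> list of URLs

        def handle(self, request: dict) -> list[str]:
            # Keep only the query text; drop IP, cookies, user agent, etc.
            query = request["query"].lower()
            results = self.index.get(query, [])
            self.tokens_earned += 0.25    # hypothetical per-query reward
            return results

    node = Node("node-001", index={"panda": ["https://example.com/pandas"]})
    request = {"query": "panda", "ip": "203.0.113.7", "cookies": "sid=abc"}
    print(node.handle(request))     # ['https://example.com/pandas']
    print(node.tokens_earned)       # 0.25 -- identity fields never leave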

The blog post referenced above compares this decentralized approach to traditional search-engine infrastructure. An interesting Presearch feature is the row of alternative search options. One can perform a straightforward search in the familiar query box or click a button to directly search sources like DuckDuckGo, YouTube, Twitter, and, yes, Google. Reflecting its blockchain connection, the page also supplies buttons to search Etherscan, CoinGecko, and CoinMarketCap for related topics. Presearch gained 3.8 million registered users between its Testnet launch in October 2020 and the shift to its Mainnet. We are curious to see how fast it will grow from here.

Cynthia Murrell, June 15, 2022

Could a Male Googler Take This Alleged Action?

June 15, 2022

It has been a while since Google made the news for its boys’ club behavior. It was only a matter of time before something else leaked, and Wired released the latest scandal: “Tension Inside Google Over A Fired Researcher’s Conduct.” Google AI researchers Azalia Mirhoseini and Anna Goldie hit on the idea of using AI software to improve AI software. Their project was codenamed Morpheus and gained support from Jeff Dean, Google’s AI boss, and its chip-making team. Wired describes the work:

“It focused on a step in chip design when engineers must decide how to physically arrange blocks of circuits on a chunk of silicon, a complex, months-long puzzle that helps determine a chip’s performance. In June 2021, Goldie and Mirhoseini were lead authors on a paper in the journal Nature that claimed a technique called reinforcement learning could perform that step better than Google’s own engineers, and do it in just a few hours.”
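Google’s system frames placement as deep reinforcement learning over a graph of circuit blocks. As a much simpler stand-in, the sketch below shows the objective such a system optimizes — total wirelength between connected blocks — driven here by naive hill climbing rather than a learned policy:

    # Toy stand-in for learning-based chip placement: put blocks on a
    # grid to minimize total Manhattan wirelength between connected
    # blocks. Google's system uses deep reinforcement learning; this
    # hill-climbing loop only illustrates the objective.
    import random

    GRID = 8
    NETS = [(0, 1), (1, 2), (2, 3), (0, 3), (3, 4)]  # connected block pairs
    N_BLOCKS = 5

    def wirelength(placement):
        # placement: block index -> (x, y) grid cell
        return sum(abs(placement[a][0] - placement[b][0]) +
                   abs(placement[a][1] - placement[b][1]) for a, b in NETS)

    random.seed(0)
    cells = random.sample([(x, y) for x in range(GRID) for y in range(GRID)],
                          N_BLOCKS)
    placement = dict(enumerate(cells))
    best = wirelength(placement)

    for step in range(2000):
        block = random.randrange(N_BLOCKS)
        old = placement[block]
        new = (random.randrange(GRID), random.randrange(GRID))
        if new in placement.values():
            continue                      # one block per cell
        placement[block] = new
        score = wirelength(placement)
        if score <= best:
            best = score                  # keep improving moves
        else:
            placement[block] = old        # revert worsening moves
    print("final wirelength:", best)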

Their research was highly praised, but a more senior Google researcher, Satrajit Chatterjee, undermined his female colleagues under the cover of scientific debate. Chatterjee’s behavior was reported to Google human resources, and he was warned, but he continued to berate the women. The attacks started when Chatterjee asked to lead the Morpheus project and was declined. He then began raising doubts about their research, and, given his senior position, skepticism spread among other employees. Chatterjee was fired after he asked Google if he could publish a rebuttal to Mirhoseini and Goldie’s research.

Chatterjee comes across as a sour, girl-hating little boy who did not get to play with the toys he wanted, so he blames the girls and acts like an entitled jerk hiding behind science. Egos are so fragile when challenged.

Whitney Grace, June 15, 2022

Google Knocks NSO Group Off the PR Cat-Bird Seat

June 14, 2022

My hunch is that the executives at NSO Group are tickled that a knowledge warrior at Alphabet Google YouTube DeepMind rang the PR bell.

Google is in the news. Every. Single. Day. One government or another is investigating the company, fining the company, or denying Google access to something or other.

“Google Engineer Put on Leave after Saying AI Chatbot Has Become Sentient” is typical of the tsunami of commentary about this assertion. The UK newspaper’s write-up states:

Lemoine, an engineer for Google’s responsible AI organization, described the system he has been working on since last fall as sentient, with a perception of, and ability to express thoughts and feelings that was equivalent to a human child.

Is this a Googler buying into the Google view that it is the smartest outfit in the world, capable of solving death, achieving quantum supremacy, and avoiding the subject of online ad fraud? Is this the viewpoint of a smart person who is lost in the Google metaverse, flush with the insight that software is, by golly, alive?

The article goes on:

The exchange is eerily reminiscent of a scene from the 1968 science fiction movie 2001: A Space Odyssey, in which the artificially intelligent computer HAL 9000 refuses to comply with human operators because it fears it is about to be switched off.

Yep, Mary had a little lamb, Dave.

The talkative Googler was parked somewhere. The article notes:

Brad Gabriel, a Google spokesperson, also strongly denied Lemoine’s claims that LaMDA possessed any sentient capability. “Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)…”

Quantum supremacy is okay to talk about. Smart software chatter appears to lead Waymo drivers to a rest stop.

TechMeme today (Monday, June 13, 2022) has links to many observers, pundits, poobahs, self-appointed experts, and Twitter junkies.

Perhaps a few questions may help me think through how an online ad company knocked NSO Group off its perch as the most discussed specialized software company in the world. Let’s consider several:

  1. Why’s Google so intent on silencing people like this AI fellow and the researcher Timnit Gebru? My hunch is that the senior managers of Alphabet Google YouTube DeepMind (hereinafter AGYD) have concerns about chatty Cathies or loose-lipped Lemoines. Why? Fear?
  2. Has AGYD’s management approach fallen short of the mark when it comes to creating a work environment in which employees know what to talk about, how to address certain subjects, and when to release information? If Lemoine’s information is accurate, is Google about to experience its Vault 7 moment?
  3. Where are the AGYD enablers and their defense of the company’s true AI capability? I look to Snorkel and maybe Dr. Christopher Ré, or perhaps a helpful defense of Google reality from DeepDyve. Will Dr. Gebru rush to Google’s defense and suggest Lemoine was out of bounds? (Yeah, probably not.)

To sum up: NSO Group has been in the news for quite a while: the Facebook dust-up, the allegations about the end point for Jamal Khashoggi, and Israel’s clampdown on certain specialized software outfits whose executives order takeaway from Sebastian’s restaurant in Herzliya.

Worth watching this AGYD race after the Twitter clown car for media coverage.

Stephen E Arnold, June 14, 2022

Yandex: Just Helping Others to Understand Geography

June 14, 2022

The Yandex news has been interesting. Some staff turnover. Some outages. Some changes to Yandex images. But there’s more! Example:

On June 3, the European Union introduced sanctions against one of the company’s founders, Arkady Volozh, prompting his immediate resignation.

“Russia’s Yandex Maps to Stop Displaying National Borders” also reports:

the company said that their updated digital maps would “focus on natural features rather than on state boundaries.”

What’s this “real news” statement mean?

My thought is that national borders can be fuzzy and then defined as necessary.

The map is not the territory, as YouTube videos about a certain dust-up near the Black Sea make evident.

Stephen E Arnold, June 14, 2022

Microsoft: Helping Out Google Security. What about Microsoft Security?

June 14, 2022

While Microsoft is not always counted among the big tech giants, the company still holds a prominent place within the technology industry. Microsoft studies rival services and products to gain insights, and it is happy to share anything that lowers a rival’s standing, such as a security threat. Witness “Microsoft Researchers Discover Serious Security Vulnerabilities In Big-Name Android Apps.” The Microsoft 365 Defender Research Team found a slew of severe vulnerabilities in the mce Systems mobile framework used by large companies, including Rogers Communications, Bell Canada, and AT&T, for their apps.

The apps come preinstalled on Android phones and have also been downloaded by millions of users. The vulnerabilities could allow bad actors to attack phones remotely, with attacks ranging from command injection to privilege escalation.

The Microsoft 365 Defender Research Team shared the discovery:

“Revealing details of its findings, the security research team says: ‘Coupled with the extensive system privileges that pre-installed apps have, these vulnerabilities could have been attack vectors for attackers to access system configuration and sensitive information’.

In the course of its investigation, the team found the mce Systems framework had a “BROWSABLE” service activity that an attacker could remotely invoke to exploit several vulnerabilities that could allow adversaries to implant a persistent backdoor or take substantial control over the device.”

Vulnerabilities also affected apps on Apple phones. Preinstalled apps simplify device activation and troubleshooting and optimize performance. Unfortunately, this gives the apps control over most of the phone, and bad actors can exploit them to gain access. Microsoft worked with mce Systems to fix the threats.

Interestingly, Microsoft found the security threats. Maybe Microsoft wants to reclaim its big tech title by protecting the world from Google’s spies?

Whitney Grace, June 14, 2022

A Common Misunderstanding of AI

June 14, 2022

In this age of exponentially increasing information, humanity has lost its patience for complexity. The impulse to simplify means the truth can easily get twisted. Perhaps ironically, this is what has happened to our understanding of artificial intelligence. ZDNet attempts to correct a prevailing perception in, “AI: The Pattern Is Not in the Data, It’s in the Machine.”

Writer Tiernan Ray explains that machine learning models “learn” by adjusting weights (aka parameters) as they are fed data examples and the labels that accompany them. What the AI then “knows” is actually the value of these weights, and any patterns it discerns are patterns of how these weights change. Pioneers of machine learning, like James McClelland, David Rumelhart, and Geoffrey Hinton, emphasized this fact to an audience that still accepted nuance. It may seem like a fine distinction, but comprehending it can mean the difference between thinking algorithms have some special insight into reality and understanding that they certainly do not. Ray writes:

“Today’s conception of AI has obscured what McClelland, Rumelhart, and Hinton focused on, namely, the machine, and how it ‘creates’ patterns, as they put it. They were very intimately familiar with the mechanics of weights constructing a pattern as a response to what was, in the input, merely data. Why does all that matter? If the machine is the creator of patterns, then the conclusions people draw about AI are probably mostly wrong. Most people assume a computer program is perceiving a pattern in the world, which can lead to people deferring judgment to the machine. If it produces results, the thinking goes, the computer must be seeing something humans don’t. Except that a machine that constructs patterns isn’t explicitly seeing anything. It’s constructing a pattern. That means what is ‘seen’ or ‘known’ is not the same as the colloquial, everyday sense in which humans speak of themselves as knowing things. Instead of starting from the anthropocentric question, What does the machine know? it’s best to start from a more precise question, What is this program representing in the connections of its weights? Depending on the task, the answer to that question takes many forms.”
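The distinction shows up in even the smallest model. In the toy logistic regression sketched below (our illustration, not anything from the article), training leaves behind nothing but three numbers; the “pattern” the machine constructed is those weights, not the data:

    # Minimal sketch of Ray's point: after training, all the model
    # "knows" is its weights. Plain gradient descent on a toy rule.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 2))
    y = (X[:, 0] + 2 * X[:, 1] > 0).astype(float)   # the rule in the data

    w = np.zeros(2)
    b = 0.0
    for _ in range(500):
        p = 1 / (1 + np.exp(-(X @ w + b)))          # sigmoid predictions
        w -= 0.5 * (X.T @ (p - y)) / len(y)         # gradient steps
        b -= 0.5 * (p - y).mean()

    # The "pattern" the machine constructed is just these numbers:
    print(w, b)   # direction roughly proportional to [1, 2], the rule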

The article examines those task-related forms in the areas of image recognition, games like chess and poker, and human language. Navigate there for those explorations. Yes, humans and algorithms have one thing in common—we both tend to impose patterns on the world around us. And the patterns neural networks construct can be quite useful. However, we must make no mistake: such patterns do not reveal the nature of the world so much as illustrate the perspective of the observer, be it human or AI.

Cynthia Murrell, June 14, 2022
