Has Microsoft Drilled into a Google Weak Point?
February 2, 2023
I want to point to a paper written by someone who is probably not on the short list to replace Jeff Dean or Prabhakar Raghavan at Google. The analysis of synthetic data and its role in smart software is titled “Machine Learning and the Politics of Synthetic Data.” The author is Benjamin N Jacobsen at Durham University. However, the first sentence of the paper invokes Microsoft’s AI lab in Cambridge. Clue? Maybe?
The paper does a good job of defining synthetic data. These are data generated by a smart algorithm. The fake data train other smart software. What could go wrong? The paper consumes 12 pages explaining that quite a bit can go off the rails; for example, outputs disconnected from the real world or simply incorrect. No big deal.
For me the key statement in the paper is this one:
… as I have sought to show in this paper, the claims that synthetic data are ushering in a new era of generated inclusion and non-risk for machine learning algorithms is both misguided and dangerous. For it obfuscates how synthetic data are fundamentally a technology of risk, producing the parameters and conditions of what gets to count as risk in a certain context.
The idea of risk generated from synthetic data is an important one. I have been compiling examples of open source intelligence blind spots. How will a researcher know when an output is “real”? What if an output increases the risk of a particular outcome? Has the smart software begun to undermine human judgment and decision making? What happens if one approach emerges as the winner — for example the SAIL, Snorkel, Google method? What if a dominant company puts its finger on the scale to cause certain decisions to fall out of the synthetic training set?
With many rushing into the field of AI windmills, what will Google’s Code Red actions spark? Perhaps more synthetic data to make training easier, cheaper, and faster? Notice I did not use the word better. Did the stochastic parrot utter something?
Stephen E Arnold, February 2, 2023
Two Interesting Numbers
February 2, 2023
I spotted two interesting numbers.
The first appeared in this headline: “Facebook Now Has 2 Billion Users.” I am not sure how many people are alive on earth, but this seems like a big number. Facebook or what I call the Zuckbook has morphed into a penny-pinching mini-metaverse. But there is the number: two billion. What happens if regulators want to trim down the Zuck’s middle-age spread? Chop off WhatsApp. Snip away Instagram. What’s left? The Zuckbook. But is it exciting? Not for me.
Let’s look at the second number. The factoid appears in “ChatGPT Sets Record for Fastest-Growing User Base in History, Report Says.” I quote:
[The] AI bot ChatGPT reached an estimated 100 million active monthly users last month, a mere two months from launch, making it the “fastest-growing consumer application in history…
The Zuckbook thinks ChatGPT is a meh-thing.
Three observations:
First, the ChatGPT thing is a welcome change from the blah announcements about technology in the last six months. I mean another video card, more layoffs, and another sort-of-new Apple device. Now there is some zing.
Second, the speed of uptake is not a sign that ChatGPT is flawless. Nope. The uptake is an example of people annoyed with the status quo and grabbing something that seems a heck of a lot better than ads and more of Dr. Zuboff’s reminders about surveillance.
Third, ChatGPT offers something that almost anyone can use. The learning curve is nearly zero. Can you figure out how to see street views in Google Maps? Can you make Windows update leave your settings alone?
Net net: Fasten your seat belts. A wild ride is beginning.
Stephen E Arnold, February 2, 2023
You Have Been Googled!
February 1, 2023
If the information in “Google Engineer Who Was Laid Off While on Mental Health Leave Says She Silently Mourned After Receiving Her Severance Email at 2 a.m.” is accurate, a new meaning for Google may have surfaced. The main point of the write up is that Google has been trimming some of its unwanted trees and shrubs (Populus Quisquilias). These are plants which have been cultivated with Google ideas, beliefs, and nutrients. But now: Root them out of the Google greenhouse, the spaces between cubes, and the grounds near lovely Shoreline Drive.
The article states:
Neil said she had an inclination that layoffs were coming but assumed she would be safe because she was already on leave. According to Neil, she “bled for Google.” She said she met and exceeded performance expectations, while also enjoying her job. Google felt like a safe and stable environment, where the risk of being laid off was very low, Neil said. She described the layoff process as “un-Googley” and done without care. “Now I’m left here having to find a job for the first time in years after being on mental health leave in quite possibly one of the most difficult hiring situations and housing markets,” Neil said. Google won’t allow Neil to go back to her office to drop off her work laptop and other devices, she said. The company has told her to meet security somewhere near the office, or ship the items in a box, she added.
I want to suggest that the new term for this management approach be called “googled.” To illustrate: In order to cut expenses, the firm googled 3,000 employees. Thus, the shift in meaning from “look up” to “look for your future elsewhere” represents a fresh approach for a cost-conscious company.
It may be a badge of honor to have been “googled.” For the individual referenced in the write up, the pain and mental stress may take some time to go away. Does Google management know that Populus Quisquilias has feelings?
Stephen E Arnold, February 1, 2023
A Convenient Deep-Fake Time Saver
February 1, 2023
There are some real concerns about deepfakes, and identifying AI imposters remains a challenge. Amid the excitement, there is one outfit determined to put the troublesome tech to use for average folks. We learn about a recent trial run in Motherboard’s piece, “Researcher Deepfakes His Voice, Uses AI to Demand Refund from Wells Fargo.”
Yes, among other things, Do Not Pay is working to take the tedium out of wrangling with customer service. Writer Joseph Cox describes a video posted on Twitter by founder Joshua Browder in which he uses an AI copy of his voice to request a refund for certain wire transfer fees. In the clip, the tool appears to successfully negotiate with a live representative, though a Wells Fargo spokesperson claims this was not the case and the video was doctored. Browder vigorously insists it was not. We are told Motherboard has requested a recording of the call from Wells Fargo’s side, but they had apparently not supplied one as of this writing. Cox writes:
“‘Hi, I’m calling to get a refund for wire transfer fees,’ the fake Browder says around halfway through the clip. The customer support worker then asks for the caller’s first and last name, which the bot dutifully provides. For a while, the bot and worker spar back and forth on which wire transfer fees the bot is calling about, before settling on the fees for the past three months. In a tweet, Browder said the tool was built from a combination of Resemble.ai, a site that lets users create their own AI voices, GPT-J, an open source causal language model, and Do Not Pay’s own AI models for the script. Do Not Pay has previously used AI-powered bots to negotiate Comcast bills. The conversation from this latest bot is very unnatural. There are long pauses where the bot processes what the customer support worker has said, and works on its response. You can’t help but feel bad for the Wells Fargo worker who had to sit silently while the bot slowly did its thing. But in this case, the bot was effective and did manage to secure the refunds, judging by the video.”
Do Not Pay does plan to make this time-saving tool available to the public, though equipping it with one’s own voice will be a premium option. As uses for deep fake technology go, this does seem like one of the least nefarious. Corporations like Wells Fargo, however, may disagree.
Cynthia Murrell, February 1, 2023
YouTube Biggie and a CryptoZoo
February 1, 2023
Whenever YouTuber Logan Paul makes headlines, it is always a cringeworthy event. Paul does not disappoint, reports the BBC, because his latest idiotic incident involves cryptocurrency: “YouTube Star Logan Paul Apologizes For CryptoZoo Project Failure.” Using his celebrity, Paul encouraged his audience to purchase cryptocurrency items for a game project. He promised that the game was “really fun” and would make players money.
It has been more than a year since Paul made the announcement and the game still has not surfaced. It appears he has abandoned the project and people are hammering YouTube to investigate yet another Paul disaster.
The game was an autonomous ecosystem where players, called ZooKeepers, bought, sold, and traded cartoon eggs that would hatch into random animals. The images would be NFTs, then ZooKeepers could breed the images to spawn new species and earn $ZOO cryptocurrency. The game was supposed to debut in 2022, but nothing has surfaced. The game sold millions of dollars’ worth of crypto and NFTs and its Discord server has 500 members.
An armchair cryptocurrency detective took on the case:
“Last month, cryptocurrency scam investigator Stephen Findeisen, known as Coffeezilla on YouTube, began a three-part video series about CryptoZoo, calling it a “scam”.
The American spoke to investors around the world who claimed to have spent hundreds, sometimes thousands of dollars on CryptoZoo items and were angry at Paul.
In his videos, which have had nearly 18 million views, Coffeezilla accused Paul of scamming investors and abandoning them after selling them “worthless” digital items.
On Thursday, Paul posted an angry rebuttal video admitting that he made a mistake hiring “conmen” and “felons” to run his project, but denied the failures were his fault.
He accused Mr. Findeisen of getting facts wrong and threatened to sue him.”
Paul has since deleted the video, apologized to Coffeezilla, and wrote on Discord that he would take responsibility and make a plan for CryptoZoo.
Perhaps Mr. Paul needs to make Brenda Lee’s “I’m Sorry” his theme song, learn to keep his mouth shut, and focus on boxing.
Whitney Grace, February 1, 2023