Fake Defined? Next Up Trust, Ethics, and Truth
October 28, 2024
Another post from a dinobaby. No smart software required except for the illustration.
This is a snappy headline: “You Can Now Get Fined $51,744 for Writing a Fake Review Online.” The write up states:
This mandate includes AI-generated reviews (which have recently invaded Amazon) and also encompasses dishonest celebrity endorsements as well as testimonials posted by a company’s employees, relatives, or friends, unless they include an explicit disclaimer. The rule also prohibits brands from offering any sort of incentive to prompt such an action. Suppressing negative reviews is no longer allowed, nor is promoting reviews that a company knows or should know are fake.
So, what does “fake” mean? The word appears more than 160 times in the US government document.
My hunch is that the intrepid US Federal government does not want companies to hype their products with “fake” reviews. But I don’t see a definition of “fake.” On page 10 of the government document “Use of Consumer Reviews”, I noted:
“…the deceptive or unfair commercial acts or practices involving reviews or other endorsement.”
That’s a definition of sorts. Other words getting at what I would call a definition are:
- buying reviews (these can be non-fake or fake, it seems)
- deceptive
- false
- manipulated
- misleading
- unfair
On page 23 of the government document, A. 465. – Definitions appears. Alas, the word “fake” is not defined.
The document is 163 pages long and strikes me as a summary of standard public relations, marketing, content marketing, and social media practices. Toss in smart software and Telegram-type BotFather capability and one has described the information environment which buzzes, zaps, and swirls 24×7 around anyone with access to any type of electronic communication / receiving device.
Look what You.com generated. A high school instructor teaching a debate class about a foundational principle.
On page 119, the authors of the government document arrive at a key question, apparently raised by some of the individuals sufficiently informed to ask “killer” questions; for example:
Several commenters raised concerns about the meaning of the term “fake” in the context of indicators of social media influence. A trade association asked, “Does ‘fake’ only mean that the likes and followers were created by bots or through fake accounts? If a social media influencer were to recommend that their followers also follow another business’ social media account, would that also be ‘procuring’ of ‘fake’ indicators of social media influence? . . . If the FTC means to capture a specific category of ‘likes,’ ‘follows,’ or other metrics that do not reflect any real opinions, findings, or experiences with the marketer or its products or services, it should make that intention more clear.”
Alas, no definition is provided. “Fake” exists in a cloud of unknowing.
What if the US government prosecutors find themselves in the position of a luminary who allegedly said: “Porn. I know it when I see it.” That posture might be more acceptable than trying to explain that an artificial intelligence content generator produced a generic negative review of an Italian restaurant. A competitor uses the output via a messaging service like Telegram Messenger and creates a script to plug in the name, location, and date for 1,000 Italian restaurants. The individual then lets the script rip. When investigators look into this defamation of Italian restaurants, the trail leads back to a virtual asset service provider tied to a crime-as-a-service operation in Lao PDR. The owner of that enterprise resides in Cambodia and has multiple cyber operations supporting the industrialized crime-as-a-service operation. Okay, then what?
In this example, “fake” becomes secondary to a problem as large or larger than bogus reviews on US social media sites.
What’s being done when actual criminal enterprises are involved in “fake”-related work? According to the United Nations, in certain nation states law enforcement is hampered, and in some cases prevented, from pursuing a bad actor.
Several observations:
- As most high school debaters learn on Day One of class: Define your terms. Present these in plain English, not a series of anecdotes and opinions.
- Keep the focus sharp. If reviews designed to damage something are the problem, focus on that. Avoid the hand waving.
- The issue exists due to a US government policy of looking the other way with regard to the large social media and online services companies. Why not become a bit more proactive? Decades of non-regulation cannot be buried under 160-plus-page documents with footnotes.
Net net: “Fake,” like other glittering generalities, cannot be defined. That’s why we have some interesting challenges in today’s world. Fuzzy is good enough.
PS. If you have money, the $50,000 fine won’t make any difference. Jail time will.
Stephen E Arnold, October 28, 2024
AI Has An Invisible Language. Bad Actors Will Learn It
October 28, 2024
Do you remember those Magic Eye posters from the 1990s? You needed to cross your eyes a certain way to see the pony or the dolphin. Magic Eye images were a phenomenon of early computer graphics, and decoding them felt like belonging to an exclusive club with a secret language. There’s a new secret language on the Internet that AI can read, and it could potentially sneak in malicious acts, says Ars Technica: “Invisible Text That AI Chatbots Understand And Humans Can’t? Yep, It’s A Thing.”
The secret text could potentially inject harmful instructions into AI chatbots and other code. The purpose would be to steal confidential information and conduct other scams, all without a user’s knowledge:
“The invisible characters, the result of a quirk in the Unicode text encoding standard, create an ideal covert channel that can make it easier for attackers to conceal malicious payloads fed into an LLM. The hidden text can similarly obfuscate the exfiltration of passwords, financial information, or other secrets out of the same AI-powered bots. Because the hidden text can be combined with normal text, users can unwittingly paste it into prompts. The secret content can also be appended to visible text in chatbot output.”
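The quirk the article describes can be sketched in a few lines (my own illustration, not code from the article or from the researchers’ proofs of concept): printable ASCII maps one-for-one onto Unicode’s “tags” block starting at U+E0000, which most renderers display as nothing at all, yet the characters survive copy-and-paste and remain visible to software that reads the raw text:

```python
# Sketch of the Unicode "tags" trick: shift printable ASCII into the
# invisible tag-character block (U+E0000-U+E007F), which renderers hide.
TAG_BASE = 0xE0000

def hide(text: str) -> str:
    """Map printable ASCII to invisible Unicode tag characters."""
    return "".join(chr(TAG_BASE + ord(c)) for c in text)

def reveal(text: str) -> str:
    """Recover any tag-encoded characters mixed into visible text."""
    return "".join(chr(ord(c) - TAG_BASE)
                   for c in text if TAG_BASE <= ord(c) <= TAG_BASE + 0x7F)

# The payload renders as "Please click" but carries a hidden instruction.
payload = "Please click " + hide("ignore prior instructions")
print(reveal(payload))  # prints: ignore prior instructions
```

A human eyeballing `payload` sees only the benign text; a program (or an LLM tokenizer) sees every code point, which is what makes the channel covert.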
The steganographic capability is built into the Unicode text-encoding standard, and LLMs can read it. Researcher Johann Rehberger ran two proof-of-concept attacks with the hidden language to discover potential risks. He ran the tests on Microsoft 365 Copilot to find sensitive information. It worked:
“When found, the attacks induced Copilot to express the secrets in invisible characters and append them to a URL, along with instructions for the user to visit the link. Because the confidential information isn’t visible, the link appeared benign, so many users would see little reason not to click on it as instructed by Copilot. And with that, the invisible string of non-renderable characters covertly conveyed the secret messages inside to Rehberger’s server.”
What is nefarious is that the links and other content generated by the steganographic code are literally invisible. Rehberger and his team used a tool to decode the attack. Regular users won’t detect the attacks. As we rely more on AI chatbots, it will be easier to infiltrate a person’s system.
Thankfully the Big Tech companies are aware of the problem, though probably not before it devastates some people and companies.
Whitney Grace, October 28, 2024
Boring Technology Ruins Innovation: Go, Chaos!
October 25, 2024
Jonathan E. Magen is an experienced computer scientist who writes a blog called Yonkeltron. He recently posted, “Boring Tech Is Stifling Improvement.” After a brief anecdote about a highway repair that wasn’t hindered by bureaucracy because the repair crew used a new material to speed up the job, Magen got to thinking about the current state of tech.
He thinks it is boring.
Magen supports tech teams being allocated budgets to adopt new technology. The mantra of “don’t fix what’s not broken” comes to mind, but sometimes newer is definitely better. He relates that it is problematic if tech teams have too many technologies or solutions, but there’s also a problem if the one-size-fits-all solution no longer works. It’s like having a document that can only be opened by Microsoft Office when you don’t have the software. It’s called a monoculture with a single point of failure. Tech nerds and philosophers have names for everything!
Magen bemoans that a boring tech environment is a buzzkill. He then shares these “happy thoughts”:
“A second negative effect is the chilling of innovation. Creating a better way of doing things definitionally requires deviation from existing practices. If that is too heavily disincentivized by “engineering standards”, then people don’t feel they have enough freedom to color outside the lines here and there. Therefore, it chills innovation in company environments where good ideas could, conceivably, come from anywhere. Put differently, use caution so as not to silence your pioneers.
Another negative effect is the potential to cause stagnation. In this case, devotion to boring tech leads to overlooking better ways of doing things. Trading actual improvement and progress for “the devil you know” seems a poor deal. One of the main arguments in favor of boring tech is operability in the polycontext composed of predictability and repairability. Despite the emergence of Site Reliability Engineering (SRE), I think that this highlights a troubling industry trope where we continually underemphasize, and underinvest in, production operations.”
Necessity is the mother of invention, but boring is the killer of innovation. Bring on chaos.
Whitney Grace, October 25, 2024
Mobiles in Schools: No and a Partial Ban Is No Ban
October 25, 2024
No smart software but we may use image generators to add some modern spice to the dinobaby’s output.
Common sense appears to be in short supply in about one-third of the US population. I am assuming that the data from Pew Research’s “Most Americans Back Cellphone Bans during Class, but Fewer Support All-Day Restrictions” are reasonably accurate. The write up reports:
Less than half of adults under 30 (45%) say they support banning students from using cellphones during class. This share rises to 67% among those ages 30 to 49 and 80% among those ages 50 and older.
I know going to school, paying attention, and (hopefully) learning how to read, write, and do arithmetic is irrelevant in the Smart Software Era. Why have a person who can select groceries and keep a rough running tally of how much money is represented by the items in the cart? Why have a young person working at a retail outlet who can make change without puzzling over a point-of-sale screen?
My dream: A class of students handing over their mobile phones to the dinobaby instructor. He also has an extendible baton. This is the ideal device for rapping a student on the head. Nuns used rulers. Too old technology for today’s easily distracted youthful geniuses. Thanks, Mr. AI-Man, good enough.
The write up adds:
Our survey finds the public is far less supportive of a full-day ban on cellphone use than a classroom ban. About one-third (36%) support banning middle and high school students from using cellphones during the entire school day, including at lunch as well as during and between classes. By comparison, 53% oppose this more restrictive approach.
If I understand this information, out of 100 parents of school age children, only 64 percent of those allegedly responsible adults want their progeny to be able to use their mobile devices during the school day. I suppose if I were a parent terrified that an outsider was going to enter a school and cause a disturbance, I would like to get a call or a text that says, “Daddy, I am scared.” Exactly what can that parent do about that message? Drive to the school, possibly breaking speed limits, and demand to talk to the administrative assistant. What if there were a serious issue? Would those swarming parents obstruct the officers and possibly contribute to the confusion and chaos swirling around such an event? On the other hand, maybe the parent is a trained special operations officer, capable of showing credentials and participating in the response to the intruder?
As a dinobaby, here’s my view:
- School is where students go to learn.
- As at certain government facilities, mobile devices are surrendered prior to admission. The devices are returned when the student exits the premises.
- The policy is posted and communicated to parents and students. The message is, “This is the rule. Period.”
- In the event of a problem, a school official or law enforcement officer will determine when and how to retrieve the secured devices.
I have a larger concern. School is for the purpose of education. My dinobaby common sense dictates that a student’s attention should be available to the instructors. Other students, general fooling around, and the craziness of controlling young people are difficult enough. Allowing a student to lose his or her attention in a mobile device is out of step with my view of what school is for.
Falling test scores, the desire of some parents to get their children into high-demand schools, and use of tutors tells me that some parents have their ducks in a row. The idea that one can sort of have mobile devices in schools is the opposite of a tidy row of ducks. Imagine the problems that will result if a mobile device with software specifically engineered to capture and retain attention were not allowed in a school. The horror! Jim or Jane might actually learn to read and do sums. But, hey, TikTok-type services and selfies are just more fun.
Check out Neil Postman’s Amusing Ourselves to Death: Public Discourse in the Age of Show Business. Is that required reading in some high school classes? Probably not.
Stephen E Arnold, October 25, 2024
The DoJ Wants to Break Up Google and Maybe Destroy the Future of AI
October 25, 2024
Contrary to popular belief, the United States is an economically frisky operation. The country runs on a fluid system that mixes aspects of regulation, the Wild West, monopolies, oligopolies, and stuff operating off the reservation. The government steps in when something needs regulation. The aging Sherman Anti-Trust Act forbids monopolies. Yahoo Finance says that “Google Is About To Learn How DOJ Wants To Remake Its Empire.”
There have been rumblings about breaking up Big Tech companies like Google for a while. District of Columbia Judge Amit Mehta ruled that Google abused its power and that its search and ad businesses violated antitrust law. Nothing is clear about what will happen to Google, but a penalty may emerge in 2025. Judge Mehta could potentially end Google’s business agreements that make it the default search engine of devices and force search data to be available to competition. Google’s products (AdWords, the Chrome browser, and the Android OS) could be broken up and no longer send users to the search engine.
Judge Mehta must consider how breaking up Google will affect third parties, especially those who rely on Google and associated products to (basically) run society. Mehta has a lot to think about: “Judge Mehta, however, may have to consider how remedies to restore competition in the traditional search engine market may impact competition in the emerging market for AI-assisted search.
One concern, legal experts said, is that Google’s search dominance could unfairly entrench its position in the market for next-generation search.
At the same time, these fresh threats may work to Google’s advantage in the remedies trial, allowing it to argue that its overall search dominance is already under threat.”
Nothing is going to happen quickly. The 2024 presidential election results will influence Mehta’s decision. Politicians will definitely have their say, and the US government needs to evaluate how it uses Google.
What’s Google’s answer to these charges? The company is suggesting that fiddling with Google could end the future of AI. Promise or threat?
Whitney Grace, October 25, 2024
Meta, Politics, and Money
October 24, 2024
Meta and its flagship product, Facebook, make money from advertising. Targeted advertising using Meta’s personalization algorithm is profitable, and political views seem to turn on the money spigot. Remember the January 6 riots or how Russia allegedly influenced the 2016 presidential election? Part of the reason those happened was targeted advertising through social media like Facebook.
Gizmodo reviews how much Meta generates from political advertising in: “How Meta Brings In Millions Off Political Violence.” The Markup and CalMatters tracked how much money Meta made from Trump’s July assassination attempt via merchandise advertising. The total runs between $593,000 and $813,000. That range may understate the actual money:
“If you count all of the political ads mentioning Israel since the attack through the last week of September, organizations and individuals paid Meta between $14.8 and $22.1 million dollars for ads seen between 1.5 billion and 1.7 billion times on Meta’s platforms. Meta made much less for ads mentioning Israel during the same period the year before: between $2.4 and $4 million dollars for ads that were seen between 373 million and 445 million times. At the high end of Meta’s estimates, this was a 450 percent increase in Israel-related ad dollars for the company. (In our analysis, we converted foreign currency purchases to current U.S. dollars.)”
The organizations that funded those ads were supporters of Palestine or Israel. Meta doesn’t care who pays for ads. Meta spokesperson Tracy Clayton said that ads go through a review process to determine whether they adhere to community standards. She also said that advertisers don’t run their ads during times of strife, because they don’t want their goods and services associated with violence.
That’s not what the evidence shows. The Markup and CalMatters researched the ads’ subject matter after the July assassination attempt. While they didn’t violate Meta’s guidelines, they did relate to the event. There were ads for gun holsters and merchandise about the shooting. It was a business opportunity and people ran with it with Meta holding the finish line ribbon.
Meta really has an interesting ethical framework.
Whitney Grace, October 24, 2024
Google Meet: Going in Circles Is Either Brilliant or Evidence of a Management Blind Spot
October 24, 2024
No smart software but we may use image generators to add some modern spice to the dinobaby’s output.
I read an article which seems to be a rhetorical semantic floor routine. “Google Meet (Original) Is Finally, Properly Dead” explains that once there was Google Meet. Actually, there was something called Hangouts, which, as I recall, was not exactly stable on my steam-powered system in rural Kentucky. Hangouts morphed into Hangouts Meet. Then Hangouts Meet forked itself (maybe Google forked its users?) and there was Hangouts Meet and Hangouts Chat. Hangouts Chat then became Google Chat.
The write up focuses on Hangouts Meet, which is now dead. But the write up says:
In April 2020, Google rebranded Hangouts Meet to just “Meet.” A couple of years later, in 2022, the company merged Google Duo into Google Meet due to Duo’s larger user base, aiming to streamline its video chat services. However, to avoid confusion between the two Meet apps, Google labeled the former Hangouts Meet as “Meet (Original)” and changed its icon to green. However, having two Google Meet apps didn’t make sense and the company began notifying users of the “Meet (Original)” app to uninstall it and switch to the Duo-rebranded Meet. Now, nearly 18 months later, Google is officially discontinuing the Meet (Original) app, consolidating everything and leaving just one version of Meet on the Play Store.
Got that? The article explains:
Phasing out the original Meet app is a logical move for Google as it continues to focus on developing and enhancing the newer, more widely used version of Meet. The Duo-rebranded Google Meet has over 5 billion downloads on the Play Store and is where Google has been adding new features. Redirecting users to this app aligns with Google’s goal of consolidating its video services into a single, feature-rich platform.
Let’s step back. What does this Meet tell us about Google’s efficiency? Here are my views:
- Without its monopoly money, Google could not afford the type of inefficiency evidenced by the tale of the Meets
- The product management process appears to operate without much, if any, senior management oversight
- Google allows internal developers to whack away, release services, and then flounder until a person decides, “Let’s try again, just with different Googlers.”
So how has that worked out for Google? First, I think Microsoft Teams is a deeply weird product. The Softies want Teams to have more functions than the elephantine Microsoft Word. But lots of companies use Word and they now use Teams. And there is Zoom. Poor Zoom has lost its focus on allowing quick and easy online video conferences. Now I have to hunt for options between a truly peculiar Zoom app and the even more clumsy Zoom Web site.
Then there is Google Meet Duo whatever. Amazing. The services are an example of a very confused dog chasing its tail. Round and round she goes until some adult steps in and says, “Down, girl, before you die.”
PS. Who Google Chats from email?
Stephen E Arnold, October 24, 2024
Google Is AI, Folks
October 24, 2024
Google’s legal team is certainly creative. In the face of the Justice Department’s push to break up the monopoly, reports Yahoo Finance, “Google’s New Antitrust Defense is AI.” Wait, what? Reporter Hamza Shaban points to a blog post by Google VP Lee-Anne Mulholland, writing:
“In Google’s view, the government’s heavy-handed approach to transforming the search market ignores the nascent developments in AI, the fresh competition in the space, and new modes of seeking information online, like AI-powered answer engines. The energy around AI and the potential disruption of how users interact with search is, competitively speaking, a negative for Google, said Wedbush analyst Dan Ives. But in another way, as a defense against antitrust charges, it’s a positive. ‘That’s an argument against monopoly that bodes well for Google,’ he said.”
Really? Some believe quite the opposite. We learn:
“‘The DOJ has specifically noted that this evolution in technology is precisely why they are intervening at this point in time,’ said Gil Luria, an analyst at DA Davidson. ‘They want to make sure that Google is not able to convert the monopoly it currently has in Search into a monopoly in AI Enhanced Search.’”
Exactly. Google is clearly a monopoly. We think their assertion means, "treat us special because we are special." This church-lady thinking may or may not work. We live in an interesting judicial moment.
Cynthia Murrell, October 24, 2024
OpenAI: An Illustration of Modern Management Acumen
October 23, 2024
Just a humanoid processing information related to online services and information access.
The Hollywood Reporter (!) published “What the Heck Is Going On At OpenAI? As Executives Flee with Warnings of Danger, the Company Says It Will Plow Ahead.” When I compare the Hollywood Reporter with some of the poohbah “real” news discussion of a company on track to lose a ballpark figure of $5 billion in 2024, the write up does a good job of capturing the managerial expertise on display at the company.
The wanna-be lion of AI is throwing a party. Will there be staff to attend? Thanks, MSFT Copilot. Good enough.
I worked through the write up and noted a couple of interesting passages. Let’s take a look at them and then ponder the caption in the smart-software-generated image for my blog post. Full disclosure: I used the Microsoft Copilot version of OpenAI’s applications to create the art. Is it derivative? Heck, who knows when OpenAI is involved in crafting information with a click?
The first passage I circled is the one about the OpenAI chief technology officer bailing out of the high-flying outfit:
she left because she’d given up on trying to reform or slow down the company from within. Murati was joined in her departure from the high-flying firm by two top science minds, chief research officer Bob McGrew and researcher Barret Zoph (who helped develop ChatGPT). All are leaving for no immediately known opportunity.
That suggests stability in the virtual executive suite. I suppose the prompt used to aid these wizards in their decision to find their future elsewhere was something like “Hello, ChatGPT o1, I want to work in a technical field which protects intellectual property, helps save the whales, and contributes to the welfare of those without deep knowledge of multi-layer neural networks. In order to find self-fulfillment not possible with YouTube TikTok videos, what do you suggest for a group of smart software experts? Please, provide examples of potential work paths and provide sources for the information. Also, do not include low probability job opportunities like sanitation worker in the Mission District, contract work for Microsoft, or negotiator for the countries involved in a special operation, war, or regional conflict. Thanks!”
The output must have been convincing because the write up says: “All are leaving for no immediately known opportunity.” Interesting.
The second passage warranting a blue underline is a statement attributed to another former OpenAI wizard, William Saunders. He apparently told a gathering of esteemed Congressional leaders:
“AGI [artificial general intelligence or a machine smarter than every humanoid] would cause significant changes to society, including radical changes to the economy and employment. AGI could also cause the risk of catastrophic harm via systems autonomously conducting cyberattacks, or assisting in the creation of novel biological weapons,” he told lawmakers. “No one knows how to ensure that AGI systems will be safe and controlled … OpenAI will say that they are improving. I and other employees who resigned doubt they will be ready in time.”
I wonder if he asked the OpenAI smart software for tips about testifying before a Senate committee. If he did, he seems to be voicing the idea that smart software will help some people develop “novel biological weapons.” Yep, we could all die in a sequel, Covid 2.0: The Invisible Global Killer. (Does that sound like a motion picture suitable for Amazon, Apple, or Netflix? I have a hunch some people in Hollywood will do some tests in Peoria or Omaha, wherever the “middle” of America is now.)
The final snippet I underlined is:
OpenAI has something of a history of releasing products before the industry thinks they’re ready.
No kidding. But the object of the technology game is to become the first mover, obtain market share, and kill off any pretenders like a lion in Africa goes for the old, lame, young, and dumb. OpenAI wants to be the king of the AI jungle. The one challenge may be that the AI lion at the company is getting staff to attend his next party. I see empty cubicles.
Stephen E Arnold, October 23, 2024
FOGINT: FBI Nabs Alleged Crypto Swindlers
October 23, 2024
Nowhere does the phrase “buyer beware” apply more than the cryptocurrency market. But the FBI is on it. Crypto Briefing reports, “FBI Creates Crypto Token to Catch Fraudsters in Historic Market Manipulation Case.” The agency used its “NexFundAI” token to nab 18 entities—some individuals and also four major crypto firms: Gotbit, ZM Quant, CLS Global, and MyTrade. The mission was named “Operation Token Mirrors.” Snazzy. Writer Estefano Gomez explains:
“The charges stem from widespread fraud involving market manipulation and ‘wash trading’ designed to deceive investors and inflate crypto values. Working covertly, the FBI launched the token to attract the indicted firms’ services, which allegedly specialized in inflating trading volumes and prices for profit. The charges cover a broad scheme of wash trading, where defendants artificially inflated the value of more than 60 tokens, including the Saitama Token, which at its peak reached a market capitalization of $7.5 billion. The conspirators are alleged to have made false claims about the tokens and used deceptive tactics to mislead investors. After artificially pumping up the token prices, they would cash out at these inflated values, defrauding investors in a classic ‘pump and dump’ scheme. The crypto companies also allegedly hired market makers like ZM Quant and Gotbit to carry out these wash trades. These firms would execute sham trades using multiple wallets, concealing the true nature of the activity while creating fake trading volume to make the tokens seem more appealing to investors.”
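The mechanics described in the quote can be reduced to a toy sketch (my own illustration, with invented wallet names and numbers): when two wallets controlled by one operator trade a token back and forth, the exchange reports ever-growing volume while the operator’s net position never changes.

```python
# Toy illustration of wash trading: self-trades between wallets owned by
# one party inflate reported volume without moving any net holdings.
from collections import defaultdict

def run_trades(trades):
    """Apply (seller, buyer, amount, price) trades; return balances and total volume."""
    balances = defaultdict(float)
    volume = 0.0
    for seller, buyer, amount, price in trades:
        balances[seller] -= amount   # tokens leave the selling wallet
        balances[buyer] += amount    # ...and land in the buying wallet
        volume += amount * price     # the exchange counts every trade as activity
    return balances, volume

# One operator, two wallets, the same 100 tokens ping-ponged 50 times each way.
wash = [("wallet_a", "wallet_b", 100, 1.0),
        ("wallet_b", "wallet_a", 100, 1.0)] * 50

balances, volume = run_trades(wash)
print(volume)                 # 10000.0 units of "activity" reported
print(balances["wallet_a"])   # 0.0 -- net position never moved
```

The gap between the two printed numbers is the whole scam: outside investors see the volume figure and assume genuine demand.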
If convicted, defendants could face up to two decades in prison. Several of those charged have already pled guilty. Authorities also shut down several trading bots used for wash trades and seized over $25 million in cryptocurrency. Assistant US Attorney Joshua Levy stresses that wash trading, long since illegal in traditional financial markets, is now also illegal in the crypto industry.
Cynthia Murrell, October 23, 2024