Pass a Law to Prevent Youngsters from Accessing Social Media. Yep, That Will Work Well
December 2, 2024
This is the work of a dinobaby. Smart software helps me with art, but the actual writing? Just me and my keyboard.
I spotted a very British “real” news story called “It’s So Easy to Lie: A Fifth of Children Use Fake Age on Social Media.” I like the idea that if one picked 100 children at random from a school full of 13-year-olds, only 80 of them would allegedly follow the rules.
Thanks, Midjourney. Good enough. I might point out you did not present a young George Washington despite my efforts to feed you words to which you would respond.
Does the 20 percent figure seem low to you? I would suggest that if a TikTok-type video were popular at that school, more than 20 percent would find a way to get access to that video. If the video were about being thin or a fashion tip, the females would be more interested, and they would lie to get that information. The boys might be more interested in other topics, which I shall leave to your imagination.
The write up says:
A newly released survey, conducted by the UK media regulator, indicates 22% of eight to 17 year olds lie that they are 18 or over on social media apps.
I doubt that my hypothetical group of 13-year-olds is different from those who are four years older. The write up pointed out:
A number of tech firms have recently announced measures to make social media safer for young people, such as Instagram launching “teen accounts.” However, when BBC news spoke to a group of teenagers at Rosshall Academy, in Glasgow, all of them said they used adult ages for their social media accounts. “It’s just so easy to lie about your age”, said Myley, 15.
Australia believes it has a fix: Ban access. I quite like the $AUS 33 million fine too.
I would suggest that in a group of 100 teens, one will know how to create a fake persona, buy a fake ID from a Telegram vendor, and get an account. Will a Telegram user set up a small online business to sell fake identities or social media accounts to young people? Yep.
Cyber security firms cannot block bad actors. What makes regulators think that social media companies can prevent young people from getting access to their services? Enjoy those meetings. I hope the lunches are good.
My hunch is that the UK is probably going to ban social media access for those under a certain age. Good luck.
Stephen E Arnold, December 2, 2024
The Golden Fleecer of the Year: Boeing
November 29, 2024
When I was working in Washington, DC, I had the opportunity to be an “advisor” to the head of the Joint Committee on Atomic Energy. I recall Craig Hosmer (R-California), a retired rear admiral, saying, “Those Air Force guys overpay.” The admiral was correct, but I think that other branches of the US Department of Defense have been snookered a time or two.
In the 1970s and 1980s, Senator William Proxmire (D-Wisconsin) had one of his staff keep an eye on reports about wild and crazy government expenditures. Every year, the Senator reminded people of a chivalric award allegedly dating from the 1400s. Yep, the Middle Ages in DC.
The Order of the Golden Fleece in old timey days of yore meant the recipient received a snazzy chivalric order intended to promote Christian values and the good neighbor policy of Spain and Austria. A person with the fleece was important, a bit like a celebrity arriving at a Hollywood Oscar event. (Yawn)
Thanks, Wikipedia. Allegedly an example of a chivalric Golden Fleece. Yes, that is a sheep, possibly dead or getting ready to be dipped.
Reuters, the trusted outfit which tells me it is trusted each time I read one of its “real” news stories, published “Boeing Overcharged Air Force Nearly 8,000% for Soap Dispensers, Watchdog Alleges.” The write up stated in late October 2024:
Boeing overcharged the U.S. Air Force for spare parts for C-17 transport planes, including marking up the price on soap dispensers by 7,943%, according to a report by a Pentagon watchdog. The Department of Defense Office of Inspector General said on Tuesday the Air Force overpaid nearly $1 million for a dozen spare parts, including $149,072 for an undisclosed number of lavatory soap dispensers from the U.S. plane maker and defense contractor.
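The 7,943 percent figure works the way any markup percentage does: the fair price is multiplied by one plus the markup divided by 100. A minimal sketch, using an illustrative $100 dispenser (a hypothetical base price, not one drawn from the Inspector General report):

```python
def marked_up(fair_price: float, markup_pct: float) -> float:
    """Price after applying a percentage markup to a fair price."""
    return fair_price * (1 + markup_pct / 100)

# A 7,943 percent markup multiplies the base price by roughly 80.
print(marked_up(100.0, 7943))  # 8043.0
```

In other words, every dollar of fair value becomes about eighty dollars billed.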
I have heard that the Department of Defense has not been able to monitor some of its administrative activities or complete an audit of what it does with its allocated funds.
According to the trusted write up:
The Pentagon’s budget is huge, breaking $900 billion last year, making overcharges by defense contractors a regular headache for internal watchdogs, but one that is difficult to detect. The Inspector General also noted it could not determine if the Air Force paid a fair price on $22 million of spare parts because the service did not keep a database of historical prices, obtain supplier quotes or identify commercially similar parts.
My view is that one of the elected officials in Washington, DC, should consider reviving the Proxmire Golden Fleece Award. Boeing may qualify, but there may be other contenders for the award as well.
I quite like the idea of scope changes and engineering change orders for some US government projects. But I have to admit that Senator Proxmire’s identification of a $600 hammer sold to the US Department of Defense now seems tame by comparison.
That 8,000 percent markup is pretty nifty. Oh, on Amazon soap dispensers cost between $20 and $100. Should the Reuters story have mentioned:
- Procurement reform
- Poor financial controls
- Lack of common sense?
Of course not! The trusted outfit does not get mired in silly technicalities. And Boeing? That outfit is doing a bang-up job.
Stephen E Arnold, November 29, 2024
Google Chrome Generating Attention. A Lot of Attention
November 26, 2024
The US Department of Justice (DOJ) took the first step toward breaking up Google’s Big Tech monopoly by proposing that Alphabet Inc. sell its popular Web browser, Chrome. Alphabet Inc. is responding like all past companies that had their market dominance broken up by the government: it is throwing a major temper tantrum. The BBC reports on Google’s meltdown in “Google Reacts Angrily To Report It Will Have To Sell Chrome.”
Google claimed it had a right to retain its monopoly on search because it was the best in the world. Not so, replied Judge Amit Mehta of the US District Court for the District of Columbia, especially since the word “Google” is now a verb and there is no fair competition. Instead of facing its fate with dignity, Google is saying it will harm consumers and businesses if it is forced to sell Chrome. While that could be interpreted as a threat, Google probably meant it to sound as if it were worried about its users. We think it sounds like a disguised threat.
Google doesn’t want to lose its 90% hold on the global search market augmented by Chrome as the world’s most used Web browser at 64.61%. Chrome is the default browser on many PCs and mobile devices. Judge Mehta wants to end that dominance:
Judge Mehta said in his ruling in August that the default search engine was "extremely valuable real estate" for Google.
‘Even if a new entrant were positioned from a quality standpoint to bid for the default when an agreement expires, such a firm could compete only if it were prepared to pay partners upwards of billions of dollars in revenue share’ he wrote.
The DOJ had been expected to provide its final proposed remedies to the court by Wednesday.
It said in an October filing documenting initial proposals it would be considering seeking a break-up of Google.
Potential remedies "that would prevent Google from using products such as Chrome, Play [its app store], and Android to advantage Google search and Google search-related products" were among its considerations, it said then.”
Google replied:
“In response to the DOJ’s filing in October, Google said "splitting off" parts of its business like Chrome or Android would "break them".
‘Breaking them off would change their business models, raise the cost of devices, and undermine Android and Google Play in their robust competition with Apple’s iPhone and App Store,’ the company said.
It also said it would make it harder to keep Chrome secure.”
Those sound like inflated arguments, especially when the only thing that will break is Google’s record profits. Investors will also be harmed, but that’s why it’s good to have a diverse portfolio. Wah wah!
Whitney Grace, November 26, 2024
China Smart, US Dumb: LLMs Bad, MoEs Good
November 21, 2024
Okay, an “MoE,” or mixture of experts, is an alternative to the conventional LLM. The dense LLM is a one-trick pony starting to wheeze.
Google, Apple, Amazon, GitHub, OpenAI, Facebook, and other organizations are at the top of the list when people think about AI innovations. We forget about other countries and universities experimenting with the technology. Tencent is a China-based technology conglomerate located in Shenzhen; it is the world’s largest video game company when equity investments are considered. Tencent is also the developer of Hunyuan-Large, the world’s largest open-source MoE.
According to Tencent, LLMs (large language models) are things of the past. LLMs served their purpose to advance AI technology, but Tencent realized that it was necessary to optimize resource consumption while simultaneously maintaining high performance. That’s when the company turned to the next evolution of the LLM: the MoE, or mixture of experts model.
Cornell University’s open-access science archive posted this paper on the MoE: “Hunyuan-Large: An Open-Source MoE Model With 52 Billion Activated Parameters By Tencent” and the abstract explains it is a doozy of a model:
In this paper, we introduce Hunyuan-Large, which is currently the largest open-source Transformer-based mixture of experts model, with a total of 389 billion parameters and 52 billion activation parameters, capable of handling up to 256K tokens. We conduct a thorough evaluation of Hunyuan-Large’s superior performance across various benchmarks including language understanding and generation, logical reasoning, mathematical problem-solving, coding, long-context, and aggregated tasks, where it outperforms LLama3.1-70B and exhibits comparable performance when compared to the significantly larger LLama3.1-405B model. Key practice of Hunyuan-Large include large-scale synthetic data that is orders larger than in previous literature, a mixed expert routing strategy, a key-value cache compression technique, and an expert-specific learning rate strategy. Additionally, we also investigate the scaling laws and learning rate schedule of mixture of experts models, providing valuable insights and guidance for future model development and optimization. The code and checkpoints of Hunyuan-Large are released to facilitate future innovations and applications.”
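The trick the abstract describes — 389 billion total parameters but only 52 billion activated — comes from routing: a small gating network picks a few expert sub-networks per token, so most of the model sits idle on any given input. Here is a toy sketch of that idea; the dimensions, expert count, and top-k value are illustrative, not Hunyuan-Large’s actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

DIM, NUM_EXPERTS, TOP_K = 8, 4, 2

# Each "expert" is a small feed-forward weight matrix.
experts = [rng.standard_normal((DIM, DIM)) * 0.1 for _ in range(NUM_EXPERTS)]
router = rng.standard_normal((DIM, NUM_EXPERTS)) * 0.1  # gating weights

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route a token vector to its top-k experts and mix their outputs."""
    logits = x @ router
    top = np.argsort(logits)[-TOP_K:]  # indices of the chosen experts
    weights = np.exp(logits[top])
    gates = weights / weights.sum()    # softmax over the chosen experts only
    # Only TOP_K of NUM_EXPERTS experts run: activated parameters << total.
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

token = rng.standard_normal(DIM)
out = moe_layer(token)
print(out.shape)
```

Scale NUM_EXPERTS up while keeping TOP_K small and total capacity grows far faster than per-token compute, which is the economics the paper is selling.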
Tencent has released Hunyuan-Large as an open source project, so other AI developers can use the technology! The well-known companies will definitely be experimenting with Hunyuan-Large. Is there an ulterior motive? Sure. Money, prestige, and power are at stake in the AI global game.
Whitney Grace, November 21, 2024
EU Docks Meta (Zuckbook) Five Days of Profits! Wow, Painful, Right?
November 19, 2024
No smart software. Just a dumb dinobaby. Oh, the art? Yeah, MidJourney.
Let’s keep this short. According to “real” news outfits, “Meta Fined Euro 798 Million by EU Over Abusing Classified Ads Dominance.” This is the lovable firm’s first EU antitrust fine. Of course, Meta (the Zuckbook) will let loose its legal eagles to dispute the fine.
The Facebook money machine keeps on doing its thing. Thanks, MidJourney. Good enough.
What the “real” news outfits did not do is answer this question: “How long does it take the Zuck outfit to generate about $840 million US dollars?”
The answer is that it takes that fine firm about five days to earn or generate the cash to pay a fine that would cripple many organizations. In case you were wondering, five days works out to about 1.4 percent of a calendar year.
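The arithmetic behind that claim is simple enough to check, taking the write-up’s own figures (the $840 million conversion and the five-day estimate) as given:

```python
fine_usd = 840_000_000  # the EUR 798M fine, as converted in the write-up
days_to_earn = 5        # the write-up's estimate of Meta's earning pace

implied_daily = fine_usd / days_to_earn
share_of_year = days_to_earn / 365 * 100

print(f"${implied_daily:,.0f} per day")
print(f"{share_of_year:.1f}% of a calendar year")
```

Five days over 365 is indeed about 1.4 percent of a year, which is the sting (or lack of it) the post is pointing at.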
I bet that fine will definitely force the Zuck to change its ways. I wish I knew how much the EU spent pursuing this particular legal matter. My hunch is that the number has disappeared into the murkiness of Brussels’ bookkeeping.
And the Zuckbook? It will keep on keeping on.
Stephen E Arnold, November 19, 2024
Two New Coast Guard Cybersecurity Units Strengthen US Cyber Defense
November 13, 2024
Some may be surprised to learn the Coast Guard had one of the first military units to do signals intelligence. Early in the 20th century, the Coast Guard monitored radio traffic among US bad guys. It is good to see the branch pushing forward. “U.S. Coast Guard’s New Cyber Units: A Game Changer for National Security,” reveals a post from ClearanceJobs. The two units, the Coast Guard Reserve Unit USCYBER and 1941 Cyber Protection Team (CPT), will work with U.S. Cyber Command. Writer Peter Suciu informs us:
“The new cyber reserve units will offer service-wide capabilities for Coast Guardsman while allowing the service to retain cyber talent. The reserve commands will pull personnel from around the United States and will bring experience from the private and public sectors. Based in Washington, D.C., CPTs are the USCG’s deployable units responsible for offering cybersecurity capabilities to partners in the MTS [Marine Transportation System].”
Why tap reserve personnel for these units? Simple: valuable experience. We learn:
“‘Coast Guard Cyber is already benefitting from its reserve members,’ said Lt. Cmdr. Theodore Borny of the Office of Cyberspace Forces (CG-791), which began putting together these units in early 2023. ‘Formalizing reserves with cyber talent into cohesive units will give us the ability to channel a skillset that is very hard to acquire and retain.’”
The Coast Guard Reserve Unit will (mostly) work out of Fort Meade in Maryland, alongside the U.S. Cyber Command and the National Security Agency. The post reminds us the Coast Guard is unique: it operates under the Department of Homeland Security, while the other military branches are part of the Department of Defense. As the primary defender of our ports and waterways, brown water and blue water, the Coast Guard is, we think, well positioned to capture and utilize cybersecurity intel.
Cynthia Murrell, November 13, 2024
The Bezos Bulldozer Could Stall in a Nuclear Fuel Pool
November 11, 2024
Sorry to disappoint you, but this blog post is written by a dumb humanoid. The art? We used MidJourney.
Microsoft is going to flip a switch, and one of Three Mile Island’s nuclear units will blink on. Yeah. Google is investing in small nuclear power units. Buy one, haul it to the data center of your choice, and plug it in. Shades of Tesla thinking. Amazon has also been fascinated by Cherenkov radiation, which is blue like Jack Benny’s eyes.
A physics amateur learned about 880 volts by reading books on his Kindle. Thanks, MidJourney. Good enough.
Are these PR-tinged information nuggets for real? Sure, absolutely. The big tech outfits are able to do anything, maybe not well, but everything. Almost.
The “trusted” real news outfit (Thomson Reuters) published “US Regulators Reject Amended Interconnection Agreement for Amazon Data Center.” The story reports as allegedly accurate information:
U.S. energy regulators rejected an amended interconnection agreement for an Amazon data center connected directly to a nuclear power plant in Pennsylvania, a filing showed on Friday. Members of the Federal Energy Regulatory Commission said the agreement to increase the capacity of the data center located on the site of Talen Energy’s Susquehanna nuclear generating facility could raise power bills for the public and affect the grid’s reliability.
Amazon was not inventing a functional modular nuclear reactor using the better option, thorium. No. Amazon just wanted to run a few of those innocuous high-voltage transmission lines, plug in a converter readily available from one of Amazon’s third-party merchants, and let a data center chock full of dolphin-loving servers, storage devices, and other gizmos slurp up the juice. What’s the big deal?
The write up does not explain what “reliability” and “national security” mean. Let’s just accept these as words which roughly translate to “unlikely.”
Is this an issue that will go away? My view is, “No.” Nuclear engineers are not widely represented among the technical professionals engaged in selling third-party vendors’ products, figuring out how to make Alexa into a barn burner of a product, or forcing Kindle users to smash their devices in frustration when trying to figure out what’s on their Kindle and what’s in Amazon’s increasingly bizarro cloud system.
Can these companies become nuclear adepts? Sure. Will that happen quickly? Nope. Why? Nuclear is a specialized field and involves a number of quite specific scientific disciplines. But Amazon can always ask Alexa and point to its Ring doorbell system as the solution to security concerns. That approach will impress regulatory authorities.
Stephen E Arnold, November 11, 2024
Hey, US Government, Listen Up. Now!
November 5, 2024
This post is the work of a dinobaby. If there is art, accept the reality of our using smart art generators. We view it as a form of amusement.
Microsoft on the Issues published “AI for Startups.” The write-up is authored by a dream team of individuals deeply concerned about the welfare of their stakeholders, themselves, and their corporate interests. The sensitivity is on display. Who wrote the 1,400-word essay? Setting aside the lawyers, PR people, and advisors, the authors are:
- Satya Nadella, Chairman and CEO, Microsoft
- Brad Smith, Vice-Chair and President, Microsoft
- Marc Andreessen, Cofounder and General Partner, Andreessen Horowitz
- Ben Horowitz, Cofounder and General Partner, Andreessen Horowitz
Let me highlight a couple of passages from the essay (polemic?) which I found interesting.
In the era of trustbusters, some of the captains of industry had firm ideas about the place government professionals should occupy. Look at the railroads. Look at cyber security. Look at the folks living under expressway overpasses. Tumultuous times? That’s on the money. Thanks, MidJourney. A good enough illustration.
Here’s the first snippet:
Artificial intelligence is the most consequential innovation we have seen in a generation, with the transformative power to address society’s most complex problems and create a whole new economy—much like what we saw with the advent of the printing press, electricity, and the internet.
This is a bold statement of the thesis for these intellectual captains of the smart software revolution. I am curious about how one gets from hallucinating software to “the transformative power to address society’s most complex problems and create a whole new economy.” Furthermore, is smart software like printing, electricity, and the Internet? A fact or two might be appropriate. Heck, I would be happy with a nifty Excel chart of some supporting data. But why? This is the first sentence, so back off, you ignorant dinobaby.
The second snippet is:
Ensuring that companies large and small have a seat at the table will better serve the public and will accelerate American innovation. We offer the following policy ideas for AI startups so they can thrive, collaborate, and compete.
Ah, companies large and small and a seat at the table, just possibly down the hall from where the real meetings take place behind closed doors. And the hosts of the real meeting? Big companies like us. As the essay says, “that only a Big Tech company with our scope and size can afford, creating a platform that is affordable and easily accessible to everyone, including startups and small firms.”
The policy “opportunity” for AI startups includes many glittering generalities. The one I like is “help people thrive in an AI-enabled world.” Does that mean universal basic income as smart software “enhances” jobs with McKinsey-like efficiency? Hey, it worked for opioids. It will work for AI.
And what’s a policy statement without a variation on “May you live in interesting times”? The Microsoft a2z twist is, “We obviously live in a tumultuous time.” That’s why the US Department of Justice, the European Union, and a few other Luddites who don’t grok certain behaviors are interested in the big firms which can do smart software right.
Translation: Get out of our way and leave us alone.
Stephen E Arnold, November 5, 2024
Enter the Dragon: America Is Unhealthy
November 4, 2024
Written by a humanoid dinobaby. No AI except the illustration.
The YouTube video “A Genius Girl Who Is Passionate about Repairing Machines” presents a simple story in a 38-minute video. The idea is that a young woman with no help fixes a broken motorcycle with basic hand tools outside in what looks like a hoarder’s backyard. The message is: Wow, she is smart and capable. Don’t you wish you knew a person like this who could repair your broken motorcycle?
This video is from @vutvtgamming, and not much information is provided. After watching this and similar videos like “Genius Girl Restored The 280mm Lathe From 50 Years Ago And Made It Look Like”, I feel pretty stupid as an American dinobaby. I don’t think I can recall meeting a person with similar mechanical skills when I worked at Keystone Steel, Halliburton Nuclear, or Booz, Allen & Hamilton’s Design & Development division. The message I carried away was: I was stupid, as were many people with whom I associated.
Thanks, MSFT Copilot. Good enough. (I slipped a put down through your filters. Imagine that!)
I picked up a similar vibe when I read “Today’s AI Ecosystem Is Unsustainable for Most Everyone But Nvidia, Warns Top Scholar.” On the surface, the ZDNet write up is an interview with the “scholar” Kai-Fu Lee, who, according to the article:
served as founding director of Microsoft Research Asia before working at Google and Apple, founded his current company, Sinovation Ventures, to fund startups such as 01.AI, which makes a generative AI search engine called BeaGo.
I am not sure how “scholar” correlates with commercial work for US companies and running an investment firm with a keen interest in Chinese start-ups. I would not use the word “scholar.” My hunch is that the intent of Kai-Fu Lee is to present as simple and obvious something that US companies don’t understand. The interview is a different approach to explaining how advanced Kai-Fu Lee’s expertise is. He is, via this interview, sharing an opinion that the US is creating a problem and overlooking the simple solution. Unlike the young woman able to repair a motorcycle or the lass fixing up a broken industrial lathe alone, the American approach does not get the job done.
What does ZDNet present as Kai-Fu Lee’s message? Here are a couple of examples:
“The ecosystem is incredibly unhealthy,” said Kai-Fu Lee in a private discussion forum earlier this month. Lee was referring to the profit disparity between, on the one hand, makers of AI infrastructure, including Nvidia and Google, and, on the other hand, the application developers and companies that are supposed to use AI to reinvent their operations.
Interesting. I wonder if the “healthy” ecosystem might be China’s approach of pragmatism and nuts-and-bolts evidenced in the referenced videos. The unhealthy versus healthy is a not-so-subtle message about digging one’s own grave in my opinion. The “economics” of AI are unhealthy, which seems to say, “America’s approach to smart software is going to kill it. A more healthy approach is the one in which government and business work to create applications.” Translating: China, healthy; America, sick as a dog.
Here’s another statement:
Today’s AI ecosystem, according to Lee, consists of Nvidia, and, to a lesser extent, other chip makers such as Intel and Advanced Micro Devices. Collectively, the chip makers rake in $75 billion in annual chip sales from AI processing. “The infrastructure is making $10 billion, and apps, $5 billion,” said Lee. “If we continue in this inverse pyramid, it’s going to be a problem,” he said.
Who will flip the pyramid? Uganda, Lao PDR, Greece? Nope, nope, nope. The flip will take an outfit with a strong mind and body. A healthy entity is needed to flip the pyramid. I wonder if that strong entity is China.
Here’s Kai-Fu kung fu move:
He recommended that companies build their own vertically integrated tech stack the way Apple did with the iPhone, in order to dramatically lower the cost of generative AI. Lee’s striking assertion is that the most successful companies will be those that build most of the generative AI components — including the chips — themselves, rather than relying on Nvidia. He cited how Apple’s Steve Jobs pushed his teams to build all the parts of the iPhone, rather than waiting for technology to come down in price.
In the write up Kai-Fu Lee refers to “we”. Who is included in that we? Excluded will be the “unhealthy.” Who is left? I would suggest that the pragmatic and application focused will be the winners. The reason? The “we” includes the healthy entities. Once again I am thinking of China’s approach to smart software.
What’s the correct outcome? Kai-Fu Lee allegedly said:
What should result, he said, is “a smaller, leaner group of leaders who are not just hiring people to solve problems, but delegating to smart enterprise AI for particular functions — that’s when this will make the biggest deal.”
That sounds like the Chinese approach to a number of technical, social, and political challenges. Healthy? Absolutely.
Several observations:
- I wonder if ZDNet checked on the background of the “scholar” interviewed at length?
- Did ZDNet think about the “healthy” versus “unhealthy” theme in the write up?
- Did ZDNet question the “scholar’s” purpose in explaining what’s wrong with the US approach to smart software?
I think I know the answer. The ZDNet outfit and the creators of this unusual private interview believe that the young women rebuilt complicated devices without any assistance. Smart China; dumb America. I understand the message which seems to have not been internalized by ZDNet. But I am a dumb dinobaby. What do I know? Exactly. Unhealthy that American approach to AI.
Stephen E Arnold, October 30, 2024
Fake Defined? Next Up Trust, Ethics, and Truth
October 28, 2024
Another post from a dinobaby. No smart software required except for the illustration.
This is a snappy headline: “You Can Now Get Fined $51,744 for Writing a Fake Review Online.” The write up states:
This mandate includes AI-generated reviews (which have recently invaded Amazon) and also encompasses dishonest celebrity endorsements as well as testimonials posted by a company’s employees, relatives, or friends, unless they include an explicit disclaimer. The rule also prohibits brands from offering any sort of incentive to prompt such an action. Suppressing negative reviews is no longer allowed, nor is promoting reviews that a company knows or should know are fake.
So, what does “fake” mean? The word appears more than 160 times in the US government document.
My hunch is that the intrepid US Federal government does not want companies to hype their products with “fake” reviews. But I don’t see a definition of “fake.” On page 10 of the government document “Use of Consumer Reviews”, I noted:
“…the deceptive or unfair commercial acts or practices involving reviews or other endorsement.”
That’s a definition of sort. Other words getting at what I would call a definition are:
- buying reviews (these can be non-fake or fake, it seems)
- deceptive
- false
- manipulated
- misleading
- unfair
On page 23 of the government document, A. 465. – Definitions appears. Alas, the word “fake” is not defined.
The document is 163 pages long and strikes me as a summary of standard public relations, marketing, content marketing, and social media practices. Toss in smart software and Telegram-type BotFather capability and one has described the information environment which buzzes, zaps, and swirls 24×7 around anyone with access to any type of electronic communication / receiving device.
Look what You.com generated. A high school instructor teaching a debate class about a foundational principle.
On page 119, the authors of the government document arrive at a key question, apparently raised by some of the individuals sufficiently informed to ask “killer” questions; for example:
Several commenters raised concerns about the meaning of the term “fake” in the context of indicators of social media influence. A trade association asked, “Does ‘fake’ only mean that the likes and followers were created by bots or through fake accounts? If a social media influencer were to recommend that their followers also follow another business’ social media account, would that also be ‘procuring’ of ‘fake’ indicators of social media influence? . . . If the FTC means to capture a specific category of ‘likes,’ ‘follows,’ or other metrics that do not reflect any real opinions, findings, or experiences with the marketer or its products or services, it should make that intention more clear.”
Alas, no definition is provided. “Fake” exists in a cloud of unknowing.
What if the US government prosecutors find themselves in the position of a luminary who allegedly said: “Porn. I know it when I see it.” That posture might be more acceptable than trying to explain that an artificial intelligence content generator produced a generic negative review of an Italian restaurant. A competitor uses the output via a messaging service like Telegram Messenger and creates a script to plug in the name, location, and date for 1,000 Italian restaurants. The individual then lets the script rip. When investigators look into this defamation of Italian restaurants, the trail leads back to a virtual asset service provider running a crime-as-a-service operation in Lao PDR. The owner of that enterprise resides in Cambodia and has multiple cyber operations supporting the industrialized crime-as-a-service business. Okay, then what?
In this example, “fake” becomes secondary to a problem as large or larger than bogus reviews on US social media sites.
What’s being done when actual criminal enterprises are involved in “fake”-related work? According to the United Nations, in certain nation states law enforcement is hampered and in some cases prevented from pursuing a bad actor.
Several observations:
- As most high school debaters learn on Day One of class: Define your terms. Present these in plain English, not a series of anecdotes and opinions.
- Keep the focus sharp. If reviews designed to damage something are the problem, focus on that. Avoid the hand waving.
- The issue exists due to a US government policy of looking the other way with regard to the large social media and online services companies. Why not become a bit more proactive? Decades of non-regulation cannot be buried under 160-plus-page documents with footnotes.
Net net: “Fake,” like other glittering generalities, cannot be defined. That’s why we have some interesting challenges in today’s world. Fuzzy is good enough.
PS. If you have money, the $50,000 fine won’t make any difference. Jail time will.
Stephen E Arnold, October 28, 2024