Is the UK Stupid? Well, Maybe, But Government Officials Have Identified Some Targets
February 27, 2023
I live in good, old Kentucky, rural Kentucky, according to my deceased father-in-law. I am not an Anglophile. The country kicked my ancestors out in 1575 for not going with the flow. Nevertheless, I am reluctant to slap “even more stupid” on ideas generated by those who draft regulations. A number of experts get involved. Data are collected. Opinions are gathered from government sources and others. The result is a proposal to address a problem.
The write up “UK Proposes Even More Stupid Ideas for Directly Regulating the Internet, Service Providers” makes clear that the UK government has not been particularly successful with its most recent ideas for updating the 1990 Computer Misuse Act. The reasons offered are good; for example, reducing cyber crime and conducting investigations. The downside of the ideas is that governments make mistakes. Governmental powers creep outward over time; that is, government becomes more invasive.
The article highlights the changes the drafters of the modifications propose:
- Seizing domains and Internet Protocol addresses
- Using contractors for this process
- Restricting algorithm-generated domain names
- Going after both the registrar and the entity registering the domain name
- Making these capabilities available to other government entities
- Providing for a court review
- Mandating data retention
- Redefining copying data as theft
- Expanding investigatory activities.
I am not a lawyer, but these proposals are troubling.
I want to point out that whoever drafted the proposal is like a tracking dog with an okay nose. Based on our research for an upcoming lecture to some US government officials, it is clear that domain name registries warrant additional scrutiny. We have identified certain ISPs as active enablers of bad actors because there is no effective oversight of these commercial entities, non-governmental organizations, or non-profit “do good” outfits. We have identified transnational telecommunications and service providers who turn a blind eye to the actions of other enterprises in the “chain” which enables Internet access.
The UK proposal seems interesting and a launch point for discussion: the tracking dog has focused attention on one of the “shadow” activities enabled by lax regulators. Hopefully more scrutiny will be directed at the complicated, essentially Wild West territory populated by enablers of criminal activity: human trafficking, weapons sales, contraband and controlled substance marketplaces, domain name fraud, malware distribution, and similar activities.
At least a tracking dog is heading along what might be an interesting path to explore.
Stephen E Arnold, February 27, 2023
Legal Eagles Will Have Ruffled Feathers and Emit Non-AI Screeches
February 6, 2023
The screech of an eagle is annoying. An eagle with ruffled feathers can make short work of a French bulldog. But legal eagles are likely to produce loud sounds and go hunting for prey; specifically, those legal eagles will want to make life interesting for a certain judge in Colombia. (Nice weather in Bogota, by the way.)
“A Judge Just Used ChatGPT to Make a Court Decision” reports:
Judge Juan Manuel Padilla Garcia, who presides over the First Circuit Court in the city of Cartagena, said he used the AI tool to pose legal questions about the case and included its responses in his decision, according to a court document dated January 30, 2023.
One attorney in the US wanted to use smart software in a US case. That did not work out. There are still job openings at Chick-fil-A, by the way.
I am not convinced that outputs from today’s smart software are ready for prime time. In fact, much of the enthusiasm is a result of push back against lousy Google search results, a downer economic environment, and a chance to make a buck without ending up in the same pickle barrel as Sam Bankman-Fried or El Chapo.
Lawyers have a reason to watch Judge Padilla Garcia’s downstream activities. Here are the reasons behind what I think will be fear and loathing among legal eagles about the use of smart software:
- Billability. If software can do what a Duke law graduate does in a dusty warehouse in dim light in a fraction of the time, partners lose revenue. Those lawyers sifting through documents and pulling out the ones that, in their jejune view, are germane to a legal matter can be replaced with fast software. Wow. Hasta la vista, billing for that mindless document review work.
- Accuracy. Today’s smart software operates at what I call “close enough for horseshoes” accuracy. But looking ahead, the software will become more accurate, or at least as accurate as a judge or other legal eagle needs to be to remain a certified lawyer. Imagine. Replacing legal deliberations with a natural language interface and the information in a legal database with the spice of journal content. There goes the legal backlog, or at least some of it, with speedy but good enough decisions.
- Consistency. Legal decisions are all over the place. There are sentencing guidelines and those are working really well, right? A software system operating on a body of content will produce outputs that are accurate within a certain range. Lawyers and judges output decisions which can vary widely.
Nevertheless, after the ruffling and screeching die down, the future is clear. If a judge in Colombia can figure out how to use smart software, that means the traditional approach to legal eagle life is going to change.
Stephen E Arnold, February 6, 2023
Crypto and Crime: Interesting Actors Get Blues and Twos on Their Systems
January 31, 2023
I read a widely available document which presents information once described to me as a “close hold.” The article is “Most Criminal Cryptocurrency Is Funneled Through Just 5 Exchanges.” Most of the write up is the sort of breathless “look what we know” information. The article, which recycles information from Wired and from the specialized services firm Chainalysis, does not mention the five outfits currently under investigation. The write up does not provide much help to a curious reader, omitting the open source intelligence tools which can rank order exchanges by dollar volume. Why not learn about this listing by CoinMarketCap and include that information instead of recycling OPI (other people’s info)? Also, why not point to resources on one of the start.me pages? I know. I know. That’s work that interferes with getting a Tall, Non-Fat Latte With Caramel Drizzle.
The key point for me is the inclusion of some companies/organizations allegedly engaged in some fascinating activities. (Fascinating for crime analysts and cyber fraud investigators. For the individuals involved with these firms, “fascinating” is not the word one might use to describe the information in the Ars Technica article.)
Here are the outfits mentioned in the article:
- Bitcoin Fog – Offline
- Bitzlato
- Chatex
- Garantex
- Helix – Offline
- Suex
- Tornado Cash – Offline
Is there a common thread connecting these organizations? Who are the stakeholders? Who are the managers? Where are these outfits allegedly doing business?
Could it be Russia?
Stephen E Arnold, February 1, 2023
Newton and Shoulders of Giants? Baloney. Is It Everyday Theft?
January 31, 2023
Here I am in rural Kentucky. I have been thinking about the failure of education. I recall learning from Ms. Blackburn, my high school algebra teacher, this statement by Sir Isaac Newton, the apple and calculus guy:
If I have seen further, it is by standing on the shoulders of giants.
Did Sir Isaac actually say this? I don’t know, and I don’t care too much. It is the gist of the sentence that matters. Why? I just finished reading — and this is the actual article title — “CNET’s AI Journalist Appears to Have Committed Extensive Plagiarism. CNET’s AI-Written Articles Aren’t Just Riddled with Errors. They Also Appear to Be Substantially Plagiarized.”
How is any self-respecting, super buzzy smart software supposed to know anything without ingesting, indexing, vectorizing, and any other math magic the developers have baked into the system? Did Brunelleschi wake up one day and do the Eureka! thing? Maybe he stood in line, entered the Pantheon, and looked up? Maybe he found a wasp’s nest, cut it in half, and looked at what the feisty insects did to build a home? Obviously intellectual theft. Never mind that the dome still stands; when it falls, he will be revealed as an untrustworthy architect engineer. Argument nailed.
The write up focuses on other ideas; namely, being incorrect and stealing content. Okay, those are interesting and possibly valid points. The write up states:
All told, a pattern quickly emerges. Essentially, CNET‘s AI seems to approach a topic by examining similar articles that have already been published and ripping sentences out of them. As it goes, it makes adjustments — sometimes minor, sometimes major — to the original sentence’s syntax, word choice, and structure. Sometimes it mashes two sentences together, or breaks one apart, or assembles chunks into new Frankensentences. Then it seems to repeat the process until it’s cooked up an entire article.
For a short (very, very brief) time I taught freshman English at a big time university. What the Futurism article describes is how I interpreted the work process of my students. Those entitled and enquiring minds just wanted to crank out an essay that would meet my requirements and hopefully get an A or a 10, which was a signal that Bryce or Helen was a very good student. Then go to a local hang out and talk about Heidegger? Nope, mostly about the opposite sex, music, and getting their hands on a copy of Dr. Oehling’s test from last semester for European History 104. Substitute the topics you talked about to make my statement more “accurate”, please.
I loved the final paragraphs of the Futurism article. Not only is a competitor tossed over the argument’s wall, but the Google and its outstanding relevance finds itself a target. Imagine. Google. Criticized. The article’s final statements are interesting; to wit:
As The Verge reported in a fascinating deep dive last week, the company’s primary strategy is to post massive quantities of content, carefully engineered to rank highly in Google, and loaded with lucrative affiliate links. For Red Ventures, The Verge found, those priorities have transformed the once-venerable CNET into an “AI-powered SEO money machine.” That might work well for Red Ventures’ bottom line, but the specter of that model oozing outward into the rest of the publishing industry should probably alarm anybody concerned with quality journalism or — especially if you’re a CNET reader these days — trustworthy information.
Do you like the word trustworthy? I do. Does Sir Isaac fit into this future-leaning analysis? Nope, he’s still preoccupied with proving that the evil Gottfried Wilhelm Leibniz was tipped off about tiny rectangles and the methods thereof. Perhaps Futurism can blame smart software?
Stephen E Arnold, January 31, 2023
Have You Ever Seen a Killer Dinosaur on a Leash?
January 27, 2023
I have never seen a Tyrannosaurus Rex allow European regulators to put a leash on its neck and lead the beastie around like a tamed circus animal.
Another illustration generated by the smart software outfit Craiyon.com. The copyright is up in the air just like the outcome of Google’s battles with regulators, OpenAI, and assorted employees.
I think something similar just happened. I read “Consumer Protection: Google Commits to Give Consumers Clearer and More Accurate Information to Comply with EU Rules.” The statement said:
Google has committed to limit its capacity to make unilateral changes related to orders when it comes to price or cancellations, and to create an email address whose use is reserved to consumer protection authorities, so that they can report and request the quick removal of illegal content. Moreover, Google agreed to introduce a series of changes to its practices…
The details appear in this EU table of Google changes.
Several observations:
- A kinder and more docile Google may be on parade for some EU regulators. But as the circus act of Siegfried and Roy learned, one must not assume a circus animal will not fight back.
- More problematic may be Google’s internal management methods. I have used the phrase “high school science club management methods.” Now that wizards were and are being terminated like insects in a sophomore biology class, getting that old team spirit back may be increasingly difficult. Happy wizards do not create problems for their employer or former employer as the case may be. Unhappy folks can be clever, quite clever.
- The hyper-problem in my opinion is how the tide of online user sentiment has shifted from “just Google it” to ladies in my wife’s bridge club asking me, “How can I use ChatGPT to find a good hotel in Paris?” Yep, really old ladies in a bridge club in rural Kentucky. Imagine how the buzz is ripping through high school and college students looking for a way to knock out an essay about the Louisiana Purchase for that stupid required American history class. ChatGPT has not needed too much search engine optimization, has it?
Net net: The friendly Google faces a multi-bladed meat grinder behind Door One, Door Two, and Door Three. As Monty Hall, game show host of “Let’s Make a Deal,” said:
“It’s time for the Big Deal of the Day!”
Stephen E Arnold, January 27, 2023
Googzilla Squeezed: Will the Beastie Wriggle Free? Can Parents Help Google Wiggle Out?
January 25, 2023
How easy was it for our prehistoric predecessors to capture a maturing reptile? I am thinking of Googzilla. (That’s my way of conceptualizing the Alphabet Google DeepMind outfit.)
This capture-the-dangerous-dinosaur image shows one regulator and one ChatGPT dev in the style of Norman Rockwell (who may be spinning in his grave). The art was output by the smart software in use at Craiyon.com. I love those wonky spellings and the weird video ads and the image-obscuring Next and Stay buttons. Is this the type of software the Google fears? I believe so.
On one side of the creature is the pesky ChatGPT PR tsunami. Google’s management team had to call Google’s parents to come to the garage. The whiz kids find themselves in a marketing battle. Imagine: a technology that Facebook dismisses as no big deal needs help. So the parents come back home from their vacations and social life to help out Sundar and Prabhakar. I wonder if the parents are asking, “What now?” and “Do you think these whiz kids want us to move in with them?” Forbes, the capitalist tool with annoying pop ups, tells one side of the story in “How ChatGPT Suddenly Became Google’s Code Red, Prompting Return of Page and Brin.”
On the other side of Googzilla is a weak looking government regulator. The Wall Street Journal (January 25, 2023) published “US Sues to Split Google’s Ad Empire.” (Paywall alert!) The main idea is that after a couple of decades of “Google is free, great, and gives away nice tchotchkes,” US Federal and state officials want the Google to morph into a tame lizard.
Several observations:
- I find it amusing that Google had to call its parents for help. There’s nothing like a really tough, decisive set of whiz kids
- The Google has some inner strengths, including lawyers, lobbyists, and friends who really like Google mouse pads, LED pins, and T shirts
- Users of ChatGPT may find that, as poor as Google’s search results are, the burden of figuring out an “answer” falls on the user. If the user cooks up an incorrect answer, the Google is just presenting links, or at least it used to. When the user accepts a ChatGPT output as ready to use, some unforeseen consequences may ensue; for example, getting called out for presenting incorrect or stupid information, getting sued for copyright violations, or assuming everyone is using ChatGPT so go with the flow.
Net net: Capturing and getting the vet to neuter the beastie may be difficult. Even more interesting is the impact of ChatGPT on allegedly calm, mature, and seasoned managers. Yep, Code Red. “Hey, sorry to bother you. But we need your help. Right now.”
Stephen E Arnold, January 25, 2023
Japan Does Not Want a Bad Apple on Its Tax Rolls
January 25, 2023
Everyone is falling over themselves about a low-cost Mac Mini. A few Japanese government officials, however, are not.
An accountant once gave me some advice: never anger the IRS. A governmental accounting agency that arms its employees with guns is worrisome. It is even more terrifying to anger a foreign government accounting agency. The Japanese equivalent of the IRS smacked Apple with the force of a tsunami in fees and tax penalties, Channel News Asia reported: “Apple Japan Hit With $98 Million in Back Taxes – Nikkei.”
The Japanese branch of Apple is being charged $98 million (13 billion yen) in back taxes for bulk sales of Apple products sold to tourists. The product sales, mostly consisting of iPhones, were wrongly exempted from consumption tax. The error was caught when a foreigner was spotted purchasing large amounts of handsets in one shopping trip. Foreigners visiting Japan for less than six months are exempt from the ten percent consumption tax unless the products are intended for resale. Because the foreign shopper purchased so many handsets at once, it is believed they were cheating the Japanese tax system.
The Japanese counterpart to the IRS brought this to Apple Japan’s attention and the company handled it in the most Japanese way possible: quiet acceptance. Apple will pay the large tax bill:
“Apple Japan is believed to have filed an amended tax return, according to Nikkei. In response to a Reuters’ request for comment, the company only said in an emailed message that tax-exempt purchases were currently unavailable at its stores. The Tokyo Regional Taxation Bureau declined to comment.”
Apple America responded that the company invested over $100 billion in the Japanese supply network in the past five years.
Japan is a country dedicated to advancing technology and, despite its declining population, it possesses one of the most robust economies in Asia. Apple does not want to lose that business, so paying $98 million is a small hindrance to continue doing business in Japan.
Whitney Grace, January 25, 2023
OpenAI Working on Proprietary Watermark for Its AI-Generated Text
January 24, 2023
Even before OpenAI made its text generator GPT-3 available to the public, folks were concerned the tool was too good at mimicking the human-written word. For example, what is to keep students from handing their assignments off to an algorithm? (Nothing, as it turns out.) How would one know? Now OpenAI has come up with a solution—of sorts. Analytics India Magazine reports, “Generated by Human or AI: OpenAI to Watermark its Content.” Writer Pritam Bordoloi describes how the watermark would work:
“We want it to be much harder to take a GPT output and pass it off as if it came from a human,’ [OpenAI’s Scott Aaronson] revealed while presenting a lecture at the University of Texas at Austin. ‘For GPT, every input and output is a string of tokens, which could be words but also punctuation marks, parts of words, or more—there are about 100,000 tokens in total. At its core, GPT is constantly generating a probability distribution over the next token to generate, conditional on the string of previous tokens,’ he said in a blog post documenting his lecture. So, whenever an AI is generating text, the tool that Aaronson is working on would embed an ‘unnoticeable secret signal’ which would indicate the origin of the text. ‘We actually have a working prototype of the watermarking scheme, built by OpenAI engineer Hendrik Kirchner.’ While you and I might still be scratching our heads about whether the content is written by an AI or a human, OpenAI—who will have access to a cryptographic key—would be able to uncover a watermark, Aaronson revealed.”
Great! OpenAI will be able to tell the difference. But … how does that help the rest of us? If the company just gifted the watermarking key to the public, bad actors would find a way around it. Besides, as Bordoloi notes, that would also nix OpenAI’s chance to make a profit off it. Maybe it will sell it as a service to certain qualified users? That would be an impressive example of creating a problem and selling the solution—a classic business model. Was this part of the firm’s plan all along? Plus, the killer question, “Will it work?”
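The mechanism Aaronson describes (bias the next-token choice with a keyed pseudo-random function, then detect by checking whether observed tokens score suspiciously high under the same key) can be illustrated with a toy sketch. To be clear, this is not OpenAI's actual prototype, which operates on a real model's probability distribution over roughly 100,000 tokens; the tiny vocabulary, the uniform toy "model," the top-half sampling bias, and the demo key below are all illustrative assumptions:

```python
import hmac
import hashlib
import random

SECRET_KEY = b"demo-key"  # stand-in; in the real scheme only OpenAI holds the key

def prf_scores(context, tokens):
    """Keyed pseudo-random score in [0, 1) for each candidate token,
    derived from the secret key and the preceding text."""
    scores = {}
    for tok in tokens:
        digest = hmac.new(SECRET_KEY, (context + tok).encode(), hashlib.sha256).digest()
        scores[tok] = int.from_bytes(digest[:8], "big") / 2**64
    return scores

def generate(vocab, length, seed=0):
    """Toy 'model' in which every token is equally plausible; the watermark
    biases each choice toward tokens with high keyed scores."""
    rng = random.Random(seed)
    out = []
    for _ in range(length):
        scores = prf_scores("".join(out), vocab)
        ranked = sorted(vocab, key=lambda t: scores[t], reverse=True)
        out.append(rng.choice(ranked[: max(1, len(ranked) // 2)]))  # sample from top half
    return out

def detect(tokens):
    """Average keyed score of the observed tokens. Unwatermarked text
    averages about 0.5; watermarked text skews noticeably higher."""
    total = 0.0
    for i, tok in enumerate(tokens):
        total += prf_scores("".join(tokens[:i]), [tok])[tok]
    return total / len(tokens)

vocab = list("abcdefgh")
marked = generate(vocab, 40)
print(round(detect(marked), 2))  # noticeably above the ~0.5 of unwatermarked text
```

To an observer without the key, the scores look like noise and the output looks like ordinary sampling; only the key holder can run the detector, which is exactly the asymmetry Aaronson describes and the reason the cryptographic key stays with OpenAI.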
Cynthia Murrell, January 24, 2023
How to Make Chinese Artificial Intelligence Professionals Hop Like Happy Bunnies
January 23, 2023
Happy New Year! It is the Year of the Rabbit, and the write up “Is Copyright Eating AI?” may make some celebrants happier than the contents of a red envelope. The article explains that the US legal system may derail some of the more interesting, publicly accessible applications of smart software. Why? US legal eagles and the thicket of guard rails that make up copyright.
The article states:
… neural network developers, get ready for the lawyers, because they are coming to get you.
That means the interesting applications on the “look what’s new on the Internet” news service Product Hunt will disappear. Only big outfits can afford to bring and fight litigation. When I worked as an expert witness, I learned that money is not an issue of concern for some of the parties to a lawsuit. Those working as robot repair technicians for a fast food chain will want to avoid engaging in a legal dispute.
The write up also says:
If the AI industry is to survive, we need a clear legal rule that neural networks, and the outputs they produce, are not presumed to be copies of the data used to train them. Otherwise, the entire industry will be plagued with lawsuits that will stifle innovation and only enrich plaintiff’s lawyers.
I liked the word “survive.” Yep, continue to exist. That’s an interesting idea. Let’s assume that the US legal process brings AI development to a halt. Who benefits? I am a dinobaby living in rural Kentucky. Nevertheless, it seems to me that a country will just keep on working with smart software informed by content. Some of the content may be a US citizen’s intellectual property, possibly a hard drive with data from Los Alamos National Laboratory, or a document produced by a scientific and technical publisher.
It seems to me that smart software companies and research groups in a country with zero interest in US laws can:
- Continue to acquire content by purchase, crawling, or enlisting the assistance of third parties
- Use these data to update and refine their models
- Develop innovations not available to smart software developers in the US.
Interesting, and given the present efficiency of some legal and regulatory systems, my hunch is that bunnies in China are looking forward to 2023. Will an innovator use enhanced AI for information warfare or other weapons? Sure.
Stephen E Arnold, January 23, 2023
Seattle: Awareness Flickering… Maybe?
January 17, 2023
Generation Z is the first age of humans completely raised with social media. They are also growing up during a historic mental health crisis. Educators and medical professionals believe there is a link between the rising mental health crisis and social media. While studies are not 100% conclusive, there is a correlation between the two. The Seattle Times shares a story about how Seattle public schools think the same: “Seattle Schools Sues Social Media Firms Over Youth Mental Health Crisis.”
Seattle schools filed a ninety-page lawsuit asserting that social media companies purposely designed, marketed, and operated their platforms for optimum engagement with kids so the companies could earn profits. The lawsuit claims that the companies cause mental health problems, such as depression, eating disorders, anxiety, and cyber bullying. Seattle Public Schools’ (SPS) lawsuit states the companies violated Washington’s public nuisance law and should be penalized.
SPS argues that due to the increased mental and physical health disorders, they have been forced to divert resources and spend funds on counselors, teacher training in mental health issues, and educating kids on dangers related to social media. SPS wants the tech companies to be held responsible and help treat the crisis:
“ ‘Our students — and young people everywhere — face unprecedented learning and life struggles that are amplified by the negative impacts of increased screen time, unfiltered content, and potentially addictive properties of social media,’ said SPS Superintendent Brent Jones in the release. ‘We are confident and hopeful that this lawsuit is the first step toward reversing this trend for our students, children throughout Washington state, and the entire country.’”
Tech insiders have reported that social media companies are aware of the dangers their platforms pose to kids, but are not too concerned. The tech companies argue they have tools to help adults limit kids’ screen time. Who is usually savvier with tech though, kids or adults?
The rising mental health crisis is also caused by two additional factors:
- Social media induces mass hysteria in kids because it is literally a digital crowd. Humans are like sheep; they follow crowds.
- Mental health diagnoses are more accurate, because the science has improved. More kids are being diagnosed because the experts know more.
Social media is only part of the problem. Tech companies, however, should be held accountable because they are knowingly contributing to the problem. And Seattle? Flicker, flicker candle of awareness.
Whitney Grace, January 17, 2023