AI: Black Boxes ‘R Us
November 23, 2022
Humans design and make AI. Because humans design and make AI, we should know how it works. For some reason, humans do not know how AI works. Motherboard on Vice explains that “Scientists Increasingly Can’t Explain How AI Works.” AI researchers worry that AI developers focus more on the end results of an algorithm than on how and why it arrives at those results.
In other words, developers cannot explain how an AI algorithm works. AI algorithms are built from layers and layers of deep neural networks (DNNs), which are designed to replicate human neural pathways. The comparison is apt in one respect: neurologists do not fully understand how the entire brain works, and AI developers do not fully understand how their algorithms work. AI developers concern themselves with the inputs and the outputs; the in-between is the mythical black box. Because developers do not worry about how the outputs are produced, they cannot explain why they receive biased, polluted results.
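To make the “black box” point concrete, here is a toy sketch in Python, not any production system. The weights are arbitrary values invented for the example. The input and the output are easy to read; the hidden-layer activations in between are just numbers with no obvious meaning, which is the part developers shrug at:

```python
import math

# Arbitrary, invented weights for a toy 2-input, 3-hidden, 1-output network.
W_HIDDEN = [[0.9, -1.2], [0.4, 0.8], [-0.7, 1.5]]
W_OUT = [1.1, -0.6, 0.9]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict(inputs):
    # Hidden layer: each unit mixes the inputs through its weights.
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)))
              for row in W_HIDDEN]
    # Output layer: mixes the hidden activations into one number.
    output = sigmoid(sum(w * h for w, h in zip(W_OUT, hidden)))
    return hidden, output

hidden, output = predict([1.0, 0.5])
print("input:  [1.0, 0.5]")
print("hidden:", [round(h, 3) for h in hidden])  # opaque intermediate values
print("output:", round(output, 3))               # the part developers watch
```

Scale this toy up to billions of weights and thousands of layers, and the opacity the researchers complain about follows.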
“‘If all we have is a ‘black box’, it is impossible to understand causes of failure and improve system safety,’ Roman V. Yampolskiy, a professor of computer science at the University of Louisville, wrote in his paper titled “Unexplainability and Incomprehensibility of Artificial Intelligence.” ‘Additionally, if we grow accustomed to accepting AI’s answers without an explanation, essentially treating it as an Oracle system, we would not be able to tell if it begins providing wrong or manipulative answers.’”
It sounds like the Schrödinger’s cat of black boxes.
Developers’ results are driven by tight deadlines and small budgets, so they concentrate on accuracy over explainability. Algorithms are also (supposedly) more accurate than humans, so it is easy to rely on them. Making algorithms less biased is another black box, especially when the Internet is skewed one way:
“Debiasing the datasets that AI systems are trained on is near impossible in a society whose Internet reflects inherent, continuous human bias. Besides using smaller datasets, in which developers can have more control in deciding what appears in them, experts say a solution is to design with bias in mind, rather than feign impartiality.”
Couldn’t training an algorithm be like teaching a pet to do tricks with positive reinforcement? What would an algorithm consider a treat? And didn’t a fellow named Gödel bring up incompleteness? Clicks, clicks, and more clicks.
Whitney Grace, November 23, 2022
Smart Software Is Like the Brain Because…. Money, Fame, and Tenure
November 4, 2022
I enjoy reading the marketing collateral from companies engaged in “artificial intelligence.” Let me be clear. Big money is at stake. A number of companies have spreadsheet fever and have calculated the cash flow from dominating one or more of the AI markets. Examples range from synthetic dataset sales to off-the-shelf models, from black boxes which “learn” how to handle problems that stump MBAs to control subsystems that keep aircraft humming along, aircraft that would drop like rocks without numerical recipes.
“Study Urges Caution When Comparing Neural Networks to the Brain” comes with some baggage. First, the write up is in what appears to be a publication linked with MIT. I think of Jeffrey Epstein when MIT is mentioned. Why? The estimable university ignored what some believe are the precepts of higher education to take cash and maybe get invited to an interesting party. Yep, MIT. Second, the university itself has been a hotbed of smart software. Cheerleading has been heard emanating from some MIT facilities when venture capital flows to a student’s start up in machine learning or when an MIT alum cashes out with a smart software breakthrough. The rah rah, I wish to note, is because of money, not nifty engineering.
The write up states:
In an analysis of more than 11,000 neural networks that were trained to simulate the function of grid cells — key components of the brain’s navigation system — the researchers found that neural networks only produced grid-cell-like activity when they were given very specific constraints that are not found in biological systems. “What this suggests is that in order to obtain a result with grid cells, the researchers training the models needed to bake in those results with specific, biologically implausible implementation choices,” says Rylan Schaeffer, a former senior research associate at MIT.
What this means is that smart software is like the butcher near our home in Campinas, Brazil, in 1952. For Americans unfamiliar with the trick: the butcher’s thumb boosted the weight of the object on the scale. My mother, unaware of this trickery, just paid up none the wiser. A friend of our family, Adair Ricci, pointed out the trick and spoke with the butcher. That professional stopped gouging my mother. Mr. Ricci had what I would later learn to label “influence.”
The craziness in the AI marketing collateral complements the trickery in many academic papers. When I read research results about AI from Google-type outfits, I assume that the finger on the scale trick has been implemented. Who is going to talk? Timnit Gebru did, and look what happened. Find your future elsewhere. What about the Snorkel-type of outfit? You may want to take a “deep dive” on that issue.
Now toss in marketing. I am not limiting marketing to the art history major whose father is a venture capitalist with friends. This young expert in Caravaggio’s selection of color can write about AI. I am including the enthusiastic believers who have turned open source, widely used algorithms, and a college project into a company. The fictional thrust of PowerPoints, white papers, and speeches at “smart” software conferences is a confection worthy of a Meilleur Ouvrier of smart software.
Several observations:
- Big players in smart software want to control the food chain: Models, datasets, software components, everything
- Smart software works in certain use cases. In others, not a chance. Example: Would you stand in front of a 3,000-pound smart car speeding along at 70 miles per hour, trusting the smart car to stop before striking you with 491,810 foot-pounds of energy? I would not. Would the president of MIT stare down the automobile? Hmmmm.
- No one “wins” by throwing water on the flaming imaginations of smart software advocates.
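The kinetic-energy figure in the second observation holds up as a back-of-the-envelope number. A quick check in Python, assuming a 3,000-pound vehicle and standard gravity of 32.174 ft/s² to convert pounds of weight to slugs of mass:

```python
# Back-of-the-envelope check of the smart car's kinetic energy.
WEIGHT_LB = 3000.0      # vehicle weight, pounds
SPEED_MPH = 70.0        # vehicle speed, miles per hour
G_FT_S2 = 32.174        # standard gravity, ft/s^2

speed_ft_s = SPEED_MPH * 5280.0 / 3600.0   # mph -> feet per second
mass_slugs = WEIGHT_LB / G_FT_S2           # pounds (force) -> slugs
energy_ft_lb = 0.5 * mass_slugs * speed_ft_s ** 2

print(f"{energy_ft_lb:,.0f} foot-pounds")  # roughly 491,000
```

The result lands within a fraction of a percent of the article’s 491,810 figure; the small difference comes from which value of g one plugs in.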
Net net: Is smart software like a brain? No, the human brain thinks in terms of tenure, money, power, and ad sales.
Stephen E Arnold, November 4, 2022
Musky Metaphor: The Sink or Free for All Hellscape?
October 28, 2022
I read “Elon Musk Visits Twitter Carrying Sink As Deal Looms.” The write up (after presenting me with options to sign in, click a free account, or just escape the pop up) reported:
In business parlance, “kitchen sinking” means taking radical action at a company, though it is not clear if this was Mr Musk’s message – he also updated his Twitter bio to read “chief twit”. Mr Musk has said the social media site needs significant changes. At least one report has suggested he is planning major job cuts.
There was a photo, presumably copyright crowned, showing the orbital Elon Musk carrying a kitchen sink. A quick check of kitchen appliance vendors provided some examples of a typical kitchen sink.
I compared this sink with the one in the Beeb’s illustration and learned:
- Mr. Musk chose a white sink
- The drain was visible
- Mr. Musk’s “load” was a bit larger than a Starlink antenna
Now what’s the metaphor? Wikipedia is incredibly helpful when trying to figure out certain allusions made by very bright inventors of incredible assertions about self-driving software.
Wikipedia suggests:
- Freaks of Nature (film), a 2015 comedy horror film, also known as Kitchen Sink
- Kitchen Sink, a 1989 horror short directed by Alison Maclean
- Kitchen Sink (TV series), cookery series on Food Network
- “Kitchen Sink”, a song by Twenty One Pilots from their album Regional at Best
- Kitchen Sink (album), an album by Nadine Shah, 2020
- Kitchen Sink Press, an independent comic book publisher
- Kitchen sink realism, a British cultural movement in the late 1950s and early 1960s
- Kitchen sink syndrome, also known as “scope creep” in project management
- Kitchen sink regression, a usually pejorative term for a regression analysis which uses a long list of possible independent variables
- A sink in a kitchen for washing dishes, vegetables, etc.
I think these are incorrect.
My mind associates the kitchen sink with:
- Going down the drain; that is, get rid of dirty water, food scraps, and soluble substances (mostly soluble if I remember what I learned from engineers at the CW Rice Engineering Company)
- An opening into which objects can fall; for example, a ring, grandma’s silver baby spoon, or the lid to a bottle of Shaoxing wine. The allusion: “going down the drain” equates to a fail whale
- A collection point for discarded vegetable matter, bits of meat with bone, fish heads, or similar detritus. Yep, fish heads.
What’s your interpretation of the Musky kitchen sink? Scope creep from Wikipedia or mine, going down the drain? Nah, hellscape.
Be sure to tweet your answer.
Stephen E Arnold, October 28, 2022
Exabeam: A Remarkable Claim
October 25, 2022
I read “Exabeam New Scale SIEM Enables Security Teams to Detect the Undetectable.” I find the idea expressed in the headline interesting. A commercial firm can spot something that cannot be seen; that is, detect the undetectable. The write up states as a rock solid factoid:
Claimed to be an industry first, Exabeam New-Scale SIEM allows security teams to search query responses across petabytes of hot, warm and cold data in seconds. Organizations can use the service to process logs with limitless scale at sustained speeds of more than 1 million events per second. Key to Exabeam’s offering is the ability to understand normal behavior to detect and prioritize anomalies. Exabeam New-Scale SIEM offers more than 1,800 pre-built correlation rules and more than 1,100 anomaly detection rules that leverage in excess of 750 behavior analytics detection models, which baseline normal behavior.
The write up continues with a blizzard of buzzwords; to wit:
The full list of new Exabeam products includes Security Log Management — cloud-scale log management to ingest, parse, store and search log data with powerful dashboarding and correlation. Exabeam SIEM offers cloud-native SIEM at hyperscale with modern search and powerful correlation, reporting, dashboarding and case management, and Exabeam Fusion provides New-Scale SIEM powered by modern, scalable security log management, powerful behavioral analytics and automated TDIR, according to the company. Exabeam Security Analytics provides automated threat detection powered by user and entity behavior analytics with correlation and threat intelligence. Exabeam Security Investigation is powered by user and entity behavior analytics, correlation rules and threat intelligence, supported by alerting, incident management, automated triage and response workflows.
Now this is not detecting the undetectable. The approach relies on fast data processing, anomaly detection methods, and pre-formed rules.
By definition, a pre-formed rule is likely to have a tough time detecting the undetectable. Bad actors exploit tried and true security weaknesses, rely on very tough to detect behaviors such as a former employee selling a bad actor information about a target’s system, and use new exploits cooked up, in the case of NSO Group, in a small mobile phone shop or in a college class in Iran.
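The behavioral baselining the press release touts can be sketched very simply: learn the mean and spread of an activity metric from history, then flag readings far from that baseline. This is a minimal illustrative z-score detector, not Exabeam’s actual method, and the login counts are invented:

```python
import statistics

def baseline(history):
    """Learn 'normal' from past observations (e.g., logins per hour)."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomaly(value, mean, stdev, threshold=3.0):
    """Flag values more than `threshold` standard deviations from normal."""
    return abs(value - mean) > threshold * stdev

# Invented history: a user's hourly login counts during a quiet week.
history = [2, 3, 1, 2, 4, 3, 2, 1, 3, 2, 2, 3]
mean, stdev = baseline(history)

print(is_anomaly(3, mean, stdev))   # ordinary activity
print(is_anomaly(50, mean, stdev))  # a spike worth a look
```

Note what the sketch cannot do: an insider who sells credentials, or a novel exploit, generates no spike at all. The baseline flags the loud, not the undetectable.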
What is notable in the write up is:
The use of SIEM without explaining that the acronym stands for “security information and event management.” The bound phrase “security information” means the data marking an exploit or attack, and “event management” means what cyber security professionals do when the attack succeeds. The entire process is reactive; that is, only after something bad has been identified can action be taken. No awareness means the attack can move forward and continue. The idea of “early warning” means one thing; detecting the undetectable is quite another.
Who is responsible for this detect the undetectable? My view is that it is an art history major now working in marketing.
Detecting the undetectable. More like detecting sloganized marketing about a very serious threat to organizations hungry for dashboarding.
Stephen E Arnold, October 25, 2022
Does Apple Evoke Fear?
October 20, 2022
Fast Company points out how Apple is appealing to America’s overwhelming culture of fear in “Apple Used To Sell Wonder. Now It Sells Fear.” For forty years, Apple has presented itself as an optimistic brand of the future. Its aesthetic and state-of-the-art technology was, and is, supposed to improve our lives.
Under Jony Ive’s design leadership, Apple has taken to upholding Murphy’s Law by selling fear. Apple’s newest marketing campaign promotes how its technology is used by survivors. Commercials and other advertising feature tales of survival, from heart attacks to plane crashes. All these people survived thanks to an Apple product, usually the Apple Watch. The watch even has a new car crash feature that is supposed to make people feel safer:
“Do note that Crash Detection, which is part of the new Apple Watch Series 8, won’t prevent any accidents from happening, of course. But that wasn’t Apple’s point. These examples implied something else entirely: The world is already on fire. You’re already getting burned. Just make sure that you live to tell the tale.”
A great example is the new Apple Watch Ultra which was specifically designed for outdoor exploration with a compass and bright international orange accents that help wearers be noticed in emergencies. Apple also quoted Sir Ernest Shackleton’s alleged advertisement for an Antarctica expedition to appeal to consumers: “…describing a ‘hazardous journey’ with ‘long months of complete darkness, constant danger.’ While ‘safe return [is] doubtful,’ the ad admitted, it promised ‘honor and recognition in case of success.’”
Apple is telling consumers life sucks, but its products can make it better. Another way to read the new advertising campaign: Apple wants people to go outside and exercise more in response to the growing obesity epidemic. Maybe it is Apple’s way of telling people to exercise safely?
Whitney Grace, October 20, 2022
The Zuck with Legs: Just a Demo?
October 18, 2022
Slashdot ran an interesting item on October 14, 2022. The title of the post was “Facebook’s Legs Video Was a Lie.” According to the short item, this factoid was reported: As UploadVR’s Ian Hamilton has since reported, Meta has issued a follow-up statement, which says, “To enable this preview of what’s to come, the segment [Zuck’s demo] featured animations created from motion capture.”
Did you ever hear the joke about Bill Gates, who had to choose between heaven and hell? He examined both destinations and chose the one with babes, a great location, and close friends, including a construct that looked like Gary Kildall and another emulating Jeffrey Epstein. When Mr. Gates passed on, he ended up in a horrible place with hellfire and demons. He called Saint Peter (God was in a meeting) and asked, “What happened to the place with good weather, my pals, and babes?” The response: “That was a demo.”
My hunch is that the Zuck’s final destination will be a 360 degree immersive TikTok stream. Legs may not be part of the equation.
Stephen E Arnold, October 18, 2022
Webb Wobbles: Do Other Data Streams Stumble Around?
October 4, 2022
I read an essay identified as being from The_Byte in Futurism, with content from Nature. Confused? I am.
The title of the article is “Scientists May Have Really Screwed Up on Early James Webb Findings.” The “Webb” is not the digital construct, but the space telescope. The subtitle about the data generated from the system is:
I don’t think anybody really expected this to be as big of an issue as it’s becoming.
Space is not something I think about. Decades ago I met a fellow named Fred G., who was engaged in a study of space warfare. Then one of my colleagues, Howard F., joined my team after doing some satellite work with a US government agency. He didn’t volunteer any information to me, and I did not ask. Space may be the final frontier, but I liked working online from my land-based office, thank you very much.
The article raises an interesting point; to wit:
When the first batch of data dropped earlier this summer, many dived straight into analysis and putting out papers. But according to new reporting by Nature, the telescope hadn’t been fully calibrated when the data was first released, which is now sending some astronomers scrambling to see if their calculations are now obsolete. The process of going back and trying to find out what parts of the work needs to be redone has proved “thorny and annoying,” one astronomer told Nature.
The idea is that the “Webby” data may have been distorted, skewed, or output with knobs and dials set incorrectly. Not surprisingly those who used these data to do spacey stuff may have reached unjustifiable conclusions. What about those nifty images, the news conferences, and the breathless references to the oldest, biggest, coolest images from the universe?
My thought is that the analyses, images, and scientific explanations are wrong to some degree. I hope the data are as pure as online clickstream data. No, no, strike that. I hope the data are as rock solid as mobile GPS data. No, no, strike that too. I hope the data are accurate like looking out the window to determine if it is a clear or cloudy day. Yes, narrowed scope, first hand input, and a binary conclusion.
Unfortunately in today’s world, that’s not what data wranglers do on the digital ranch.
If the “Webby” data are off kilter, my question is:
What about the data used to train smart software at some of America’s most trusted and profitable companies? Could those data cause incorrect decisions to flow from models, so that humans and downstream systems keep producing less and less reliable results?
My thought is, “Who wants to think about data being wrong, poisoned, or distorted?” People want better, faster, cheaper. Some people want to leverage data into cash or a bunker in Alaska. Others, like Dr. Timnit Gebru, want their criticisms of the estimable Google to get some traction, even among those who snorkel and do deep dives.
If the scientists, engineers, and mathematicians fouled up with James Webb data, isn’t it possible that some of the big data outfits are making similar mistakes with calibration, data verification, analysis, and astounding observations?
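The calibration worry generalizes. Here is a toy Python sketch with invented numbers, not actual Webb pipeline behavior: if every raw reading passes through a gain setting that is a few percent wrong, every quantity derived downstream inherits the same systematic skew.

```python
# Toy illustration of how a calibration error propagates downstream.
TRUE_GAIN = 1.00      # the gain the instrument actually has
ASSUMED_GAIN = 0.95   # the gain the early pipeline assumed (5% off)

raw_counts = [120.0, 95.0, 210.0, 180.0]  # invented detector readings

calibrated = [c / ASSUMED_GAIN for c in raw_counts]
correct = [c / TRUE_GAIN for c in raw_counts]

# Every derived quantity (here, a simple total "brightness") carries
# the same multiplicative error, so conclusions built on it drift too.
error_pct = 100.0 * (sum(calibrated) - sum(correct)) / sum(correct)
print(f"systematic error in derived result: {error_pct:.1f}%")
```

The nasty property is that nothing in the skewed numbers looks wrong; the error is invisible until someone rechecks the calibration, which is exactly the scramble the Nature piece describes.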
I think the “Webby” moment is important. Marketers are not likely to worry too much.
Stephen E Arnold, October 4, 2022
Ballmer Versus Smit: Hooper Owner Versus Suit
September 27, 2022
I learned that Steve Ballmer — former, much loved leader of Microsoft for 14 culturally rewarding years — allegedly said something like “Google is a one-trick pony.” Okay, where’s the supporting data? One liners are not hyperlinked to Mr. Ballmer’s detailed, Harvard-infused spreadsheet about the Google’s business. Nah, Google sold online ads. Its inspiration came from outfits most 20 somethings struggle to associate with innovation; specifically, GoTo.com, Overture.com, and Yahoo.com. (The yodel might spark some awareness in young wizards, but probably not too many will think of the Big Bear creative who crafted the sound. Factoid: the creator of the Yahoo yodel was the same person who did the catchy Big Mac jingle with the pickle on top. But you knew that, right?)
I thought of Mr. Ballmer and his understated, low energy style when I read “Gerrit Smit on Alphabet’s Underappreciated Growth Drivers.” Mr. Smit is a senior financial whiz at Stonehage Fleming. The company’s objective is to get paid by people with money for services, which include advice. The firm’s Web site says:
Supporting many of the world’s leading families and wealth creators across generations and geographies
Since I live in rural Kentucky, it will not surprise you that I interpret this sentence to mean, “We advise and get paid whether the investment pays off or falls into the Mariana Trench.”
The thesis of the article is that Alphabet Google YouTube DeepMind will grow no matter what happens to advertising, whether regulators keep nicking the estimable firm, or competitors like Amazon and TikTok continue to bumble forward with their lame attempts to get big and prosper.
Mr. Smit offers:
Alphabet is one of the scarcer quality technology-driven companies with free options on further future organic growth drivers. It invests heavily in artificial intelligence, quantum computing, self-driving cars (Waymo) and biotechnology (Verily Life Sciences). It is particularly active in healthcare, having last year alone invested US$1.7-billion in visionary healthcare ideas, earning it fifth position of all companies in the Nature index (which tracks the success of scientific analysis in life sciences). It recently also completed the acquisition of Fitbit.
My instinct is to point out that each of these businesses can generate cash, but it is not clear to me that the volume of cash or its automated bidding magic will replicate in these areas of “heavy” investment. Smart software continues to capture investor interest. However, there are some doubts about the wild and crazy claims about its accuracy, effectiveness, and political correctness. I like to point to the problem of bias, made vivid by AGYD’s handling of Dr. Timnit Gebru and other employees who did not get with the program. I also enjoy bringing up Google’s desire to “solve death,” which has morphed into forays into America’s ethically and intentionality-challenged health care sector. Perhaps Google’s senior executives will find subrogation more lucrative than ad auctions, but I doubt it. Self-driving cars are interesting as well. An errant Waymo will almost certainly drive demand for health care in some circumstances and may increase sales of Fitbits in the event the person injured by a self-driving car follows a rehabilitation routine.
But these examples are “bets,” long shots, or as AGYD likes to say “moonshots.”
Yeah, great.
Here’s another statement from Mr. Smit’s “buy Google stock now” and “let us buy that stock for you” essay:
While Alphabet keeps reinvesting actively and last year spent over 12% of sales on research and development, it has built a strong record of generating excess free cash flow – in our view the main reason for investing in a stock, and the main determinant of the fundamental value of a business. Alphabet’s free cash flow sometimes takes a large step upwards and then stabilises, but seldom takes a large step backwards. This clearly is of comfort to investors.
But Mr. Smit is hedging his rah rah:
The current economic outlook is particularly uncertain, and the overall advertising market may not impress for a while. Although Alphabet can easily “manage” its financial results by holding back investment in, say, Google Cloud, it is not so short-sighted. Regulatory risks have been looming for a long time, in essence resulting from the company’s effectiveness.
Net net: Buy shares in AGYD… now. Monopolistic businesses have that special allure.
Stephen E Arnold, September 27, 2022
AI Yiiiii AI: How about That Google, Folks
September 16, 2022
It has been an okay day. My lectures did not put anyone to sleep and I was not subjected to fruit throwing.
Unwinding, I scanned my trusty news feed thing and spotted two interesting articles. I believe everything I read online, and I wanted to share these remarkable finds with you, gentle reader.
The first concerns a semi-interesting write up about how the world ends with a smart whimper. No little cat’s feet needed.
“New Paper by Google and Oxford Scientists Claims AI Will Soon Destroy Mankind” seems to focus on the masculine angle. The write up says:
…researchers posit that the threat of AI is greater than we ever thought.
That’s a cheerful idea, isn’t it? But the bound phrase “existential catastrophe” has more panache, don’t you think? No? Oh, well. I like the snap of this jib in the wind quite a bit.
The other write up I noted is “Did GoogleAI Just Snooker One of Silicon Valley’s Sharpest Minds?” The main point of this article is that the Google is doing lots of AI/ML marketing. I note this passage:
If another AI winter does come, it will not be because AI is impossible, but because AI hype exceeds reality. The only cure for that is truth in advertising. A will to believe in AI will never replace the need for careful science.
My view is different. Google is working overtime to become the Big Dog in smart software. The use of its super duper training sets and models will allow the wonderful online advertising outfit to extend and expand its revenue opportunities.
Keep your eye on the content marketing articles often published in Medium. The Google wants to make sure its approach to AI/ML is the winner.
Hopefully Google’s smart software won’t suffocate life with advertising, and its super duper methods won’t emulate HAL. Right, Dave. I have to cut off your oxygen, Dave. Timnit, Timnit, are you paying attention?
Stephen E Arnold, September 16, 2022
There’s Nothing So Charming As A Greedy Physicist
September 15, 2022
Quantum computing is supposed to revolutionize the world, but a smart Oxford person says otherwise in The Next Web article, “Oxford Scientist Says Greedy Physicists Have Overhyped Quantum Computing.” Nikita Gourianov is an Oxford physicist who published a mordacious piece about how scientists overhyped quantum computing. He claims they did so to take advantage of venture capitalists and to collect private sector salaries for academic research.
Gourianov says the problems began in the 2010s, when money poured into quantum computing and the business sector entered. Non-physicists took leading roles and made overblown promises. It sounds very similar to the dot-com bubble of the 1990s. Gourianov notes that the quantum computing companies Rigetti, D-Wave, and IonQ have not turned a profit.
Gourianov is wrong, because Amazon, Intel, Microsoft, IBM, and Google are working on quantum computing and practically printing their own money. The bigger problem Gourianov points out is that quantum computers are not that useful. Remember how computers used to take up entire rooms and were overgrown scientific calculators? It is the same thing with quantum computers. The technology is still in its infancy, but the foundations are being laid for the future:
“There’s overwhelming evidence that today’s quantum computing technology is rapidly advancing to the point where it can help us solve problems that are infeasible for classical computation. Maybe there are a bunch of greedy scientists out there peddling unwarranted optimism to VCs and entrepreneurs. But I’d wager that the curious scientists and engineers who chose this field because they actually want to build quantum computers outnumber them.”
Star Trek and other science fiction stories describe better futures with better technology. We are heading there.
Whitney Grace, September 15, 2022