Google to Microsoft: We Are Trying to Be Helpful
December 16, 2022
Ah, those fun loving alleged monopolies are in the news again. Microsoft — famous in some circles for its interesting approach to security issues — allegedly has an Internet Explorer security problem. Wait! I thought the whole wide world was using Microsoft Edge, the new and improved solution to Web access.
According to “CVE-2022-41128: Type Confusion in Internet Explorer’s JScript9 Engine,” Internet Explorer, despite decades of continuous improvement and the arrival of its replacement, has a security vulnerability. Are you still using Internet Explorer? The answer may be, “Sure you are.”
With Internet Explorer following Bob down the trail of Microsoft’s most impressive software, the Redmond crowd’s Microsoft Office application still uses bits and pieces of Internet Explorer. Thrilling, right?
Google explains the Microsoft issue this way:
The JIT compiler generates code that will perform a type check on the variable q at the entry of the boom function. The JIT compiler wrongly assumes the type will not change throughout the rest of the function. This assumption is broken when q is changed from d (an Int32Array) to e (an Object). When executing q[0] = 0x42424242, the compiled code still thinks it is dealing with the previous Int32Array and uses the corresponding offsets. In reality, it is writing to wherever e.e points to in the case of a 32-bit process or e.d in the case of a 64-bit process. Based on the patch, the bug seems to lie within a flawed check in GlobOpt::OptArraySrc, one of the optimization phases. GlobOpt::OptArraySrc calls ShouldExpectConventionalArrayIndexValue and based on its return value will (in some cases wrongly) skip some code.
Got that.
The main idea is that Google is calling attention to the future great online game company’s approach to software engineering. In a word or two, “Poor to poorer.”
My view of the helpful announcement is that Microsoft Certified Professionals will have to explain this problem. Google’s sales team will happily point out this and other flaws in the Microsoft approach to enterprise software.
If you can’t trust a Web browser or remove flawed code from a widely used app, what’s the fix?
Ready for the answer? “Helpful cyber security revelations that make the online ad giant look like a friendly, fluffy Googzilla. Being helpful is the optimal way to conduct business.”
Stephen E Arnold, December 16, 2022
Rainbow Narcotics? Just a Coincidence? Nope, Marketing Plain and Simple
December 9, 2022
Does anyone remember how the tobacco companies had ads and mascots that appealed to kids but claimed more than once their target demographic wasn’t children? It was a bald-faced lie as big as the former claim that smoking does not negatively affect health. The Daily Caller has a whopper of a story about fentanyl: “Drug Cartel Operative Claims Rainbow Fentanyl Was Not Created To ‘Make Kids Addicts.’”
A Mexican drug cartel operative told Insider that rainbow-colored fentanyl is not meant to make kids addicts. The fentanyl pills have the same colors and shapes as popular candies such as Smarties, Sweet Tarts, and more. The cartel operative said the bright colors are meant to warn adults the pills contain fentanyl:
“‘We know that some of the dealers in the US started mixing cocaine with ‘fenta’ without letting their buyers know, and that is very dangerous,’ the operative told Insider. The colorful drug form was created ‘to make it look different than coke or white heroin,’ a Sinaloa cartel drug cook explained, according to Insider. ‘Also we mix some of the heroin with fentanyl to make it more powerful, but we mark it, to let the buyer know that this one has ‘fenta,’’ the operative added. ‘Whatever happens when it’s taken from our hands, it’s not our problem.’”
Ann Milgram, the Administrator of the Drug Enforcement Administration (DEA), pushes back against the cartels. She claims that rainbow fentanyl is a deliberate attempt to target American kids.
The same cartel operative says they cook clean “fenta” and clearly label it with “el arco del iris” (rainbow).
Right. And Joe Camel was as friendly as Chuck E. Cheese, McGruff the Crime Dog, Smokey the Bear, and Ronald McDonald.
Whitney Grace, December 9, 2022
Can Clever Smart Software Identify Misinformation?
December 8, 2022
My view is, “Nope.” What will marketers say? My thought is, “Anything, anything at all.”
Navigate to “Physicists Create a Wormhole Using a Quantum Computer.” Read it. Now click on “The Death of Quanta Magazine” and read the essay about the wormhole write up. Here’s the question: “Can you identify the misinformation in each essay?” The $64 question is: “Can smart software flag and tag the misinformation?”
My hunch is that most humans, even the highly intelligent ones reading my article about these two essays, will have a difficult time identifying factoids, hypotheses, and baloney. Now is smart software from one of the allegedly open source outfits or a rapacious but user friendly commercial service able to handle this task?
Let’s look at one passage from “The Death of Quanta Magazine”; to wit:
While the article correctly points out that one needs negative energy to make a wormhole traversable, and that negative energy does not exist, and that the experiment merely simulated a negative energy pulse, the video has no such qualms. It directly stated that the experiment created a negative energy shockwave and used it to transmit qubits through the wormhole. For me the worst part of the video was at 11:53, where they showed a graph with a bright point labeled “negative energy peak” on it. The problem is that this is not a plot of data, it’s just a drawing, with no connection to the experiment. Lay people will think they are seeing actual data, so this is straightforward disinformation.
Several observations:
- An article and a video. The combo suggests that presumably intelligent people writing about what is allegedly a scientific presentation are chasing the ethos of TikTok and YouTube. Interesting, but didn’t Newton get along with pen and paper?
- Fancy lingo. Yep, holograms, sci-fi sounding jargon like negative energy, and obligatory static graphs.
- Experts. Wow. Experts offered up without much context. Impressive indeed.
- Meta-commentary. I love it when articles comment on other articles. Great fun.
The problem is that smart software may struggle with the nuances in the two articles. Quanta will do an article about that soon, I expect.
Content marketing, pseudo tech baloney, and clicks. Yeah.
Stephen E Arnold, December 8, 2022
Is There a Horse Named Intel PR?
November 25, 2022
I noted the information in “Intel Introduces Real-Time Deepfake Detector.” I like the real time angle. The subtitle caught my attention:
Intel’s deepfake detector analyzes ‘blood flow’ in video pixels to return results in milliseconds with 96% accuracy.
Milliseconds.
I am not saying that Intel’s FakeCatcher does not work on a small, curated video, maybe several.
But like smart cyber security technology, a system works because it recognizes what it knows about. What happens when a bad actor (maybe a disaffected computer science student at a forward leaning university) cooks up a novel exploit? In my experience, the smart cyber security system looks dumb.
And what about the interesting four percent error rate? A four percent error rate. So if Intel is monitoring in real time the 500 hours of video uploaded to the Googley YouTube every minute, the system incorrectly identifies only 20 hours of video per minute. What if those misidentified videos were discussing somewhat tricky subjects like missiles striking Poland or a statement about a world leader? Not even the whiz kids who fall in love with chatbots bandy about 96 percent accuracy. Well, maybe a whiz kid who thinks a chatbot is alive may go for the 100 percent thing. Researchers often have a different approach to data; namely, outputting results that are not reproducible or just copied and pasted from other documents. Efficiency is good. So is PR.
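The arithmetic behind that “only 20 hours” figure is a one-liner. A minimal sketch, assuming the oft-cited 500 hours of YouTube uploads per minute:

```python
# Back-of-the-envelope check on Intel's 96% accuracy claim at YouTube scale.
# The 500 hours/minute upload rate is the figure assumed above.
upload_hours_per_minute = 500
error_rate = 0.04  # 100% minus the claimed 96% accuracy

misidentified_per_minute = upload_hours_per_minute * error_rate
misidentified_per_day = misidentified_per_minute * 60 * 24

print(misidentified_per_minute)  # 20.0 hours of video mislabeled every minute
print(misidentified_per_day)     # 28800.0 hours every day
```

Nearly 29,000 hours of mislabeled video a day is quite a lot of “real time” cleanup work.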
Let’s take a step back.
What about the cost of a system to handle, analyze, and identify a fake? I think most US television programming is in the business of institutionalized fakery. I can hear the rejoinder, “We are talking about a certain type of video?” That’s okay for the researchers, not okay for me.
The Intel PR item (which may itself be horse feathers or its close cousin content marketing) says:
Intel’s real-time platform uses FakeCatcher, a detector designed by Demir in collaboration with Umur Ciftci from the State University of New York at Binghamton. Using Intel hardware and software, it runs on a server and interfaces through a web-based platform. On the software side, an orchestra of specialist tools form the optimized FakeCatcher architecture.
Ah, ha. Academic computer razzle dazzle. I am not sure if the Intel news release is in the same league as the computer scientist in Louisville, Kentucky, who has published the ways to determine if I am living in a simulation. (See this IFL Science write up.) It is possible that the Intel claim is in some ways similar: Academics and big companies in search of buzz.
Intel’s announcement is really important. How do I know? I learned:
Deepfake videos are a growing threat.
This is news? I think it is a horse named “PR.”
Stephen E Arnold, November 25, 2022
The iPhone Is Magic
November 23, 2022
I believe everything I read about the Apple iPhone. My knowledge junk bin includes such items as:
- Apple has a secret $275 billion deal with China. China is, of course, one of some governmental officials’ favorite countries. See this write up for details.
- Apple cares about user privacy. Well, maybe there are/were some issues. See this Forbes’ article for details.
- Apple has a monopoly-like position. But monopolies are good for everyone! See the Market Realist article for more insights.
I had these thoughts in mind right after I read this magical — possibly cream puff confection of a story — article called “Woman Who Lost iPhone at Sea Finds It Washed up 460 Days Later in Mint Condition.” The article states:
Clare Atfield, 39, dropped her iPhone in the ocean and never expected to see it again, until an incredible 460 days later. On top of it, the device was in perfect working condition
The article added:
But a year later on November 7, she was contacted by a local dog walker who claimed to have found it on the beach, not far from where she originally lost it… “The gentleman who found it and I were both just in shock that it still worked,” she admitted. The paddle boarder was stunned there wasn’t much damage to the phone considering it was lost at sea for a long time.
What’s this tell me?
- By golly iPhones in free protective cases are okay after being submerged in salt water for more than one year
- The protective case kept the water from obliterating the information on non-digital documents
- Content marketing is alive and well when the magical iPhone is involved.
Yes, I believe everything about Apple: No secret deals, no violations of user privacy for ads or any other thing, and no monopoly position. I also believe the iPhone survivability story in the estimable “Daily Star.”
Don’t believe me? Just check with a tooth fairy. I loved the “mint condition” point too.
Stephen E Arnold, November 23, 2022
AI: Black Boxes ‘R Us
November 23, 2022
Humans design and make AI. Because humans design and make AI, we should know how it works. For some reason, humans do not know how AI works. Motherboard on Vice explains that, “Scientists Increasingly Can’t Explain How AI Works.” AI researchers are worried that AI developers focus more on the end results of an algorithm than on how and why it arrives at said results.
In other words, developers cannot explain how an AI algorithm works. AI algorithms are built from layers and layers of deep neural networks (DNNs). These networks are designed to replicate human neural pathways. They resemble real neural pathways in at least one respect: neurologists do not fully understand how the brain works, and AI developers do not know how their algorithms work. AI developers are concerned with the inputs and outputs, but the in-between is the mythical black box. Because AI developers do not worry about how they arrive at the outputs, they cannot explain why they receive biased, polluted results.
“‘If all we have is a ‘black box’, it is impossible to understand causes of failure and improve system safety,’ Roman V. Yampolskiy, a professor of computer science at the University of Louisville, wrote in his paper titled “Unexplainability and Incomprehensibility of Artificial Intelligence.” ‘Additionally, if we grow accustomed to accepting AI’s answers without an explanation, essentially treating it as an Oracle system, we would not be able to tell if it begins providing wrong or manipulative answers.’”
It sounds like the Schrödinger’s cat of black boxes.
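The “black box” complaint can be made concrete with a toy network. This is a minimal sketch with made-up weights, not any production system: the inputs and the output are plain to see, but the numbers in between explain nothing in human terms.

```python
# A toy two-layer network: inputs and outputs are visible, but the weights
# in the middle (the "black box") carry no human-readable explanation.

def relu(xs):
    return [max(0.0, x) for x in xs]

def layer(inputs, weights, biases):
    # Each output neuron is a weighted sum of every input, plus a bias.
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

# Hypothetical weights, chosen only for illustration.
W1, b1 = [[0.9, -0.4], [-0.7, 0.8]], [0.1, -0.2]
W2, b2 = [[1.2, -1.1]], [0.0]

def predict(x):
    hidden = relu(layer(x, W1, b1))  # the in-between no one inspects
    return layer(hidden, W2, b2)[0]

score = predict([0.5, 0.3])
# A score comes out, but nothing in W1 or W2 says *why*; scale this up
# to millions of weights and the black box is complete.
```

Even at this tiny scale, the only honest description of the decision is “the weighted sums said so.”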
Developers’ results are driven by tight deadlines and small budgets so they concentrate on accuracy over explainability. Algorithms are also (supposedly) more accurate than humans, so it is easy to rely on them. Making the algorithms less biased is another black box, especially when the Internet is skewed one way:
“Debiasing the datasets that AI systems are trained on is near impossible in a society whose Internet reflects inherent, continuous human bias. Besides using smaller datasets, in which developers can have more control in deciding what appears in them, experts say a solution is to design with bias in mind, rather than feign impartiality.”
Couldn’t training an algorithm be like teaching a pet to do tricks with positive reinforcement? What would an algorithm consider a treat? But did a guy named Gödel bring up incompleteness? Clicks, clicks, and more clicks.
Whitney Grace, November 23, 2022
Smart Software Is Like the Brain Because…. Money, Fame, and Tenure
November 4, 2022
I enjoy reading the marketing collateral from companies engaged in “artificial intelligence.” Let me be clear. Big money is at stake. A number of companies have spreadsheet fever and have calculated the cash flow from dominating one or more of the AI markets. Examples range from synthetic dataset sales to off-the-shelf models, from black boxes which “learn” how to handle problems that stump MBAs to building control subsystems that keep aircraft which would drop like rocks without numerical recipes humming along.
“Study Urges Caution When Comparing Neural Networks to the Brain” comes with some baggage. First, the write up is in what appears to be a publication linked with MIT. I think of Jeffrey Epstein when MIT is mentioned. Why? The estimable university ignored what some believe are the precepts of higher education to take cash and maybe get invited to an interesting party. Yep, MIT. Second, the university itself has been a hotbed of smart software. Cheerleading has been heard emanating from some MIT facilities when venture capital flows to a student’s start up in machine learning or an MIT alum cashes out with a smart software breakthrough. The rah rah, I wish to note, is because of money, not nifty engineering.
The write up states:
In an analysis of more than 11,000 neural networks that were trained to simulate the function of grid cells — key components of the brain’s navigation system — the researchers found that neural networks only produced grid-cell-like activity when they were given very specific constraints that are not found in biological systems. “What this suggests is that in order to obtain a result with grid cells, the researchers training the models needed to bake in those results with specific, biologically implausible implementation choices,” says Rylan Schaeffer, a former senior research associate at MIT.
What this means is that smart software is like the butcher near our home in Campinas, Brazil, in 1952. For Americans: the butcher’s thumb boosted the weight of the object on the scale. My mother, unaware of this trickery, just paid up. A friend of our family, Adair Ricci, pointed out the trick and spoke with the butcher. That professional stopped gouging my mother. Mr. Ricci had what I would later learn to label as “influence.”
The craziness in the AI marketing collateral complements the trickery in many academic papers. When I read research results about AI from Google-type outfits, I assume that the finger on the scale trick has been implemented. Who is going to talk? Timnit Gebru did, and look what happened. Find your future elsewhere. What about the Snorkel-type of outfit? You may want to take a “deep dive” on that issue.
Now toss in marketing. I am not limiting marketing to the art history major whose father is a venture capitalist with friends. This young expert in Caravaggio’s selection of color can write about AI. I am including the enthusiastic believers who have turned open source, widely used algorithms, and a college project into a company. The fictional thrust of PowerPoints, white papers, and speeches at “smart” software conferences is a confection worthy of the Meilleur Ouvrier of smart software.
Several observations:
- Big players in smart software want to control the food chain: Models, datasets, software components, everything
- Smart software works in certain use cases. In others, not a chance. Example: Would you stand in front of a 3,000-pound smart car speeding along at 70 miles per hour, trusting the smart car to stop before striking you with 491,810 foot-pounds of energy? I would not. Would the president of MIT stare down the automobile? Hmmmm.
- No one “wins” by throwing water on the flaming imaginations of smart software advocates.
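The foot-pound figure in the second observation above checks out. A quick sketch, assuming a 3,000-pound vehicle and standard gravity:

```python
# Kinetic energy in US customary units: KE = 1/2 * m * v^2,
# with mass in slugs (weight in pounds divided by g).
weight_lb = 3000.0       # assumed curb weight of the "smart car"
g = 32.174               # standard gravity, ft/s^2
v = 70 * 5280 / 3600     # 70 mph in ft/s (about 102.67)

mass_slugs = weight_lb / g
ke_ft_lb = 0.5 * mass_slugs * v ** 2

print(round(ke_ft_lb))   # roughly 491,000, within a fraction of a percent
                         # of the 491,810 figure cited above
```

Either way, the pedestrian loses the argument with the bumper.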
Net net: Is smart software like a brain? No, the human brain thinks in terms of tenure, money, power, and ad sales.
Stephen E Arnold, November 4, 2022
Musky Metaphor: The Sink or Free for All Hellscape?
October 28, 2022
I read “Elon Musk Visits Twitter Carrying Sink As Deal Looms.” The write up (after presenting me with options to sign in, click a free account, or just escape the pop up) reported:
In business parlance, “kitchen sinking” means taking radical action at a company, though it is not clear if this was Mr Musk’s message – he also updated his Twitter bio to read “chief twit”. Mr Musk has said the social media site needs significant changes. At least one report has suggested he is planning major job cuts.
There was a photo, presumably copyright crowned, showing the orbital Elon Musk carrying a kitchen sink. A quick check of kitchen appliance vendors provided some examples of a kitchen sink.
I compared this sink with the one in the Beeb’s illustration and learned:
- Mr. Musk chose a white sink
- The drain was visible
- Mr. Musk’s “load” was a bit larger than a Starlink antenna
Now what’s the metaphor? Wikipedia is incredibly helpful when trying to figure out certain allusions of very bright inventors of incredible assertions about self driving software.
Wikipedia suggests:
- Freaks of Nature (film), a 2015 comedy horror film, also known as Kitchen Sink
- Kitchen Sink, a 1989 horror short directed by Alison Maclean
- Kitchen Sink (TV series), cookery series on Food Network
- “Kitchen Sink”, a song by Twenty One Pilots from their album Regional at Best
- Kitchen Sink (album), an album by Nadine Shah, 2020
- Kitchen Sink Press, an independent comic book publisher
- Kitchen sink realism, a British cultural movement in the late 1950s and early 1960s
- Kitchen sink syndrome, also known as “scope creep” in project management
- Kitchen sink regression, a usually pejorative term for a regression analysis which uses a long list of possible independent variables
- A sink in a kitchen for washing dishes, vegetables, etc.
I think these are incorrect.
My mind associates the kitchen sink with:
- Going down the drain; that is, get rid of dirty water, food scraps, and soluble substances (mostly soluble if I remember what I learned from engineers at the CW Rice Engineering Company)
- An opening into which objects can fall; for example, a ring, grandma’s silver baby spoon, or the lid to a bottle of Shaoxing wine. The allusion becomes “going down the drain” equates to a fail whale
- A collection point for discarded vegetable matter, bits of meat with bone, fish heads, or similar detritus. Yep, fish heads.
What’s your interpretation of the Musky kitchen sink? Scope creep from Wikipedia or mine, going down the drain? Nah, hellscape.
Be sure to tweet your answer.
Stephen E Arnold, October 28, 2022
Exabeam: A Remarkable Claim
October 25, 2022
I read “Exabeam New Scale SIEM Enables Security Teams to Detect the Undetectable.” I find the idea expressed in the headline interesting. A commercial firm can spot something that cannot be seen; that is, detect the undetectable. The write up states as a rock solid factoid:
Claimed to be an industry first, Exabeam New-Scale SIEM allows security teams to search query responses across petabytes of hot, warm and cold data in seconds. Organizations can use the service to process logs with limitless scale at sustained speeds of more than 1 million events per second. Key to Exabeam’s offering is the ability to understand normal behavior to detect and prioritize anomalies. Exabeam New-Scale SIEM offers more than 1,800 pre-built correlation rules and more than 1,100 anomaly detection rules that leverage in excess of 750 behavior analytics detection models, which baseline normal behavior.
The write up continues with a blizzard of buzzwords; to wit:
The full list of new Exabeam products includes Security Log Management — cloud-scale log management to ingest, parse, store and search log data with powerful dashboarding and correlation. Exabeam SIEM offers cloud-native SIEM at hyperscale with modern search and powerful correlation, reporting, dashboarding and case management, and Exabeam Fusion provides New-Scale SIEM powered by modern, scalable security log management, powerful behavioral analytics and automated TDIR, according to the company. Exabeam Security Analytics provides automated threat detection powered by user and entity behavior analytics with correlation and threat intelligence. Exabeam Security Investigation is powered by user and entity behavior analytics, correlation rules and threat intelligence, supported by alerting, incident management, automated triage and response workflows.
Now this is not detecting the undetectable. The approach relies on processing data quickly, using anomaly detection methods, and pre-formed rules.
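The “baseline normal behavior, flag anomalies” mechanism is easy to sketch. This is a toy z-score detector over invented data, nothing like Exabeam’s actual product, but it shows both the mechanism and its limit: anything that stays inside the baseline is never flagged.

```python
from statistics import mean, stdev

def flag_anomalies(baseline, new_counts, threshold=3.0):
    """Flag any count more than `threshold` standard deviations
    from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [c for c in new_counts if abs(c - mu) > threshold * sigma]

# "Normal" behavior: login events per minute over a quiet stretch.
logins_per_minute = [12, 15, 11, 14, 13, 12, 16, 14]

print(flag_anomalies(logins_per_minute, [13, 15, 90]))  # [90]
# The burst of 90 gets flagged; a slow exfiltration that stays near
# 13 logins a minute sails through. The "undetectable" stays undetected.
```

Patient bad actors, of course, know how to stay inside the baseline.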
By definition, a pre-formed rule is likely to have a tough time detecting the undetectable. Bad actors exploit tried and true security weaknesses, rely on very tough to detect behaviors (like a former employee selling a bad actor information about a target’s system), and cook up new exploits, whether in a small mobile phone shop in the case of NSO Group or in a college class in Iran.
What is notable in the write up is:
The use of SIEM without explaining that the acronym represents “security information and event management.” The bound phrase “security information” means the data marking an exploit or attack, and “event management” means what the cyber security professionals do when the attack succeeds. The entire process is reactive; that is, only after something bad has been identified can action be taken. No detection means the attack can move forward and continue. The idea of “early warning” means one thing; detecting the undetectable is quite another.
Who is responsible for this detect the undetectable? My view is that it is an art history major now working in marketing.
Detecting the undetectable. More like detecting sloganized marketing about a very serious threat to organizations hungry for dashboarding.
Stephen E Arnold, October 25, 2022
Does Apple Evoke Fear?
October 20, 2022
Fast Company points out how Apple is appealing to America’s overwhelming culture of fear in: “Apple Used To Sell Wonder. Now It Sells Fear.” For forty years, Apple has presented itself as an optimistic brand of the future. Its aesthetic and state-of-the-art technology were, and are, supposed to improve our lives.
Under Jony Ive’s design lead, Apple has taken to upholding Murphy’s Law by selling fear. Apple’s newest marketing campaign promotes how its technology is used by survivors. Commercials and other advertising feature tales of survival from heart attacks to plane crashes. All these people survived thanks to an Apple product, usually the Apple Watch. The watch even has a new car crash feature that is supposed to make people feel safer:
“Do note that Crash Detection, which is part of the new Apple Watch Series 8, won’t prevent any accidents from happening, of course. But that wasn’t Apple’s point. These examples implied something else entirely: The world is already on fire. You’re already getting burned. Just make sure that you live to tell the tale.”
A great example is the new Apple Watch Ultra which was specifically designed for outdoor exploration with a compass and bright international orange accents that help wearers be noticed in emergencies. Apple also quoted Sir Ernest Shackleton’s alleged advertisement for an Antarctica expedition to appeal to consumers: “…describing a ‘hazardous journey’ with ‘long months of complete darkness, constant danger.’ While ‘safe return [is] doubtful,’ the ad admitted, it promised ‘honor and recognition in case of success.’”
Apple is telling consumers life sucks, but use its products to make it better. Another way to broach the new advertising campaign is that Apple wants people to go outside and exercise more as a response to the growing obesity epidemic. Maybe it’s Apple’s way of telling people to exercise safely?
Whitney Grace, October 20, 2022