Is There a Horse Named Intel PR?

November 25, 2022

I noted the information in “Intel Introduces Real-Time Deepfake Detector.” I like the real-time angle. The subtitle caught my attention:

Intel’s deepfake detector analyzes ‘blood flow’ in video pixels to return results in milliseconds with 96% accuracy.

Milliseconds.

I am not saying that Intel’s FakeCatcher does not work on a small, curated video, maybe several.

But like smart cyber security technology, such a system works because it recognizes what it already knows about. What happens when a bad actor (maybe a disaffected computer science student at a forward-leaning university) cooks up a novel exploit? In my experience, the smart cyber security system looks dumb.

And what about that interesting four percent error rate? A four percent error rate. If Intel is monitoring in real time the 500 hours of video uploaded to the Googley YouTube every minute, the system incorrectly identifies 20 hours of video per minute. What if those misidentified videos were discussing somewhat tricky subjects like missiles striking Poland or a statement by a world leader? Not even the whiz kids who fall in love with chatbots bandy about 96 percent accuracy. Well, maybe a whiz kid who thinks a chatbot is alive may go for the 100 percent thing. Researchers often have a different approach to data; namely, outputting results that are not reproducible or are just copied and pasted from other documents. Efficiency is good. So is PR.
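The arithmetic is easy to check. Here is a minimal sketch in Python, assuming YouTube's oft-cited figure of 500 hours of uploads per minute (the error rate is simply the flip side of the claimed 96 percent accuracy):

```python
# Back-of-the-envelope check on the 96% accuracy claim.
# Assumes YouTube's oft-cited 500 hours of video uploaded per minute.
upload_rate = 500          # hours of video uploaded per minute
error_rate = 0.04          # 100% minus the claimed 96% accuracy

misidentified_per_minute = upload_rate * error_rate
misidentified_per_day = misidentified_per_minute * 60 * 24

print(f"Misidentified: {misidentified_per_minute:.0f} hours per minute")
print(f"Misidentified: {misidentified_per_day:,.0f} hours per day")
# -> 20 hours per minute, or 28,800 hours per day
```

At YouTube scale, a “mere” four percent error compounds into a mountain of misclassified video every single day.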

Let’s take a step back.

What about the cost of a system to handle, analyze, and identify a fake? I think most US television programming is in the business of institutionalized fakery. I can hear the rejoinder, “We are talking about a certain type of video.” That’s okay for the researchers, not okay for me.

The Intel PR item (which may itself be horse feathers or its close cousin content marketing) says:

Intel’s real-time platform uses FakeCatcher, a detector designed by Demir in collaboration with Umur Ciftci from the State University of New York at Binghamton. Using Intel hardware and software, it runs on a server and interfaces through a web-based platform. On the software side, an orchestra of specialist tools form the optimized FakeCatcher architecture.

Ah, ha. Academic computer razzle dazzle. I am not sure if the Intel news release is in the same league as the computer scientist in Louisville, Kentucky, who has published the ways to determine if I am living in a simulation. (See this IFL Science write up.) It is possible that the Intel claim is in some ways similar: Academics and big companies in search of buzz.

Intel’s announcement is really important. How do I know? I learned:

Deepfake videos are a growing threat.

This is news? I think it is a horse named “PR.”

Stephen E Arnold, November 25, 2022

The iPhone Is Magic

November 23, 2022

I believe everything I read about the Apple iPhone. My knowledge junk bin includes such items as:

  1. Apple has a secret $275 billion deal with China. China is, of course, one of some governmental officials’ favorite countries. See this write up for details.
  2. Apple cares about user privacy. Well, maybe there are/were some issues. See this Forbes’ article for details.
  3. Apple has a monopoly-like position. But monopolies are good for everyone! See the Market Realist article for more insights.

I had these thoughts in mind right after I read this magical — possibly cream puff confection of a story — article called “Woman Who Lost iPhone at Sea Finds It Washed up 460 Days Later in Mint Condition.” The article states:

Clare Atfield, 39, dropped her iPhone in the ocean and never expected to see it again, until an incredible 460 days later. On top of it, the device was in perfect working condition.

The article added:

But a year later on November 7, she was contacted by a local dog walker who claimed to have found it on the beach, not far from where she originally lost it… “The gentleman who found it and I were both just in shock that it still worked,” she admitted. The paddle boarder was stunned there wasn’t much damage to the phone considering it was lost at sea for a long time.

What’s this tell me?

  1. By golly, iPhones in free protective cases are okay after being submerged in salt water for more than one year
  2. The protective case kept the water from obliterating the information on non-digital documents
  3. Content marketing is alive and well when the magical iPhone is involved.

Yes, I believe everything about Apple: No secret deals, no violations of user privacy for ads or any other thing, and no monopoly position. I also believe the iPhone survivability story in the estimable “Daily Star.”

Don’t believe me? Just check with a tooth fairy. I loved the “mint condition” point too.

Stephen E Arnold, November 23, 2022

AI: Black Boxes ‘R Us

November 23, 2022

Humans design and make AI. Because humans design and make AI, we should know how it works. For some reason, we do not. Vice’s Motherboard explains that, “Scientists Increasingly Can’t Explain How AI Works.” AI researchers worry that AI developers focus more on the end results of an algorithm than on how and why it arrives at those results.

In other words, developers cannot explain how an AI algorithm works. Many AI systems are built as deep neural networks (DNNs), layer upon layer of artificial neurons designed to loosely mimic human neural pathways. The parallel holds in one unflattering way: neurologists do not know how the entire brain works, and AI developers do not know how their AI algorithms work. Developers concern themselves with the inputs and the outputs; the in-between is the mythical black box. Because developers do not examine how the outputs are produced, they cannot explain why the results come back biased or polluted.
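To make the black box concrete, here is a minimal sketch using scikit-learn (the dataset and model are toy choices of mine, not anything from the Motherboard article):

```python
# A tiny neural network whose inputs and outputs are easy to inspect,
# but whose "reasoning" is just thousands of opaque weights.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X, y)

print("Prediction for first sample:", model.predict(X[:1]))

# The full "explanation" available to the developer:
n_weights = sum(w.size for w in model.coefs_)
print("Learned weights:", n_weights)  # thousands of numbers, zero narrative
```

The model answers; nothing in those weights says why.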

“‘If all we have is a ‘black box’, it is impossible to understand causes of failure and improve system safety,’ Roman V. Yampolskiy, a professor of computer science at the University of Louisville, wrote in his paper titled “Unexplainability and Incomprehensibility of Artificial Intelligence.” ‘Additionally, if we grow accustomed to accepting AI’s answers without an explanation, essentially treating it as an Oracle system, we would not be able to tell if it begins providing wrong or manipulative answers.’”

It sounds like the Schrödinger’s cat of black boxes.

Developers’ results are driven by tight deadlines and small budgets, so they concentrate on accuracy over explainability. Algorithms are also (supposedly) more accurate than humans, so it is easy to rely on them. Making the algorithms less biased is another black box, especially when the Internet is skewed one way:

“Debiasing the datasets that AI systems are trained on is near impossible in a society whose Internet reflects inherent, continuous human bias. Besides using smaller datasets, in which developers can have more control in deciding what appears in them, experts say a solution is to design with bias in mind, rather than feign impartiality.”

Couldn’t training an algorithm be like teaching a pet to do tricks with positive reinforcement? What would an algorithm consider a treat? But did a guy named Gödel bring up incompleteness? Clicks, clicks, and more clicks.

Whitney Grace, November 23, 2022

Smart Software Is Like the Brain Because…. Money, Fame, and Tenure

November 4, 2022

I enjoy reading the marketing collateral from companies engaged in “artificial intelligence.” Let me be clear. Big money is at stake. A number of companies have spreadsheet fever and have calculated the cash flow from dominating one or more of the AI markets. Examples range from synthetic dataset sales to off-the-shelf models, from black boxes which “learn” how to handle problems that stump MBAs to control subsystems that keep aircraft humming along (aircraft which would drop like rocks without those numerical recipes).

“Study Urges Caution When Comparing Neural Networks to the Brain” comes with some baggage. First, the write up appears in a publication linked with MIT. I think of Jeffrey Epstein when MIT is mentioned. Why? The estimable university ignored what some believe are the precepts of higher education to take cash and maybe get invited to an interesting party. Yep, MIT. Second, the university itself has been a hotbed of smart software. Cheerleading has been heard emanating from some MIT facilities when venture capital flows to a student’s start-up in machine learning or an MIT alum cashes out with a smart software breakthrough. The rah rah, I wish to note, is because of money, not nifty engineering.

The write up states:

In an analysis of more than 11,000 neural networks that were trained to simulate the function of grid cells — key components of the brain’s navigation system — the researchers found that neural networks only produced grid-cell-like activity when they were given very specific constraints that are not found in biological systems. “What this suggests is that in order to obtain a result with grid cells, the researchers training the models needed to bake in those results with specific, biologically implausible implementation choices,” says Rylan Schaeffer, a former senior research associate at MIT.

What this means is that smart software is like the butcher near our home in Campinas, Brazil, in 1952. For Americans: the butcher’s thumb boosted the weight of the object on the scale. My mother, unaware of this trickery, just paid up. A friend of our family, Adair Ricci, pointed out the trick and spoke with the butcher. That professional stopped gouging my mother. Mr. Ricci had what I would later learn to label “influence.”

The craziness in the AI marketing collateral complements the trickery in many academic papers. When I read research results about AI from Google-type outfits, I assume that the finger-on-the-scale trick has been implemented. Who is going to talk? Timnit Gebru did, and look what happened: find your future elsewhere. What about the Snorkel-type of outfit? You may want to take a “deep dive” on that issue.

Now toss in marketing. I am not limiting marketing to the art history major whose father is a venture capitalist with friends. This young expert in Caravaggio’s selection of color can write about AI. I am including the enthusiastic believers who have turned open source, widely used algorithms, and a college project into a company. The fictional thrust of PowerPoints, white papers, and speeches at “smart” software conferences is a confection worthy of the Meilleur Ouvrier of smart software.

Several observations:

  1. Big players in smart software want to control the food chain: Models, datasets, software components, everything
  2. Smart software works in certain use cases. In others, not a chance. Example: Would you stand in front of a 3,000-pound smart car speeding along at 70 miles per hour, trusting the smart car to stop before striking you with 491,810 foot-pounds of energy (see the quick calculation after this list)? I would not. Would the president of MIT stare down the automobile? Hmmmm.
  3. No one “wins” by throwing water on the flaming imaginations of smart software advocates.
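For the curious, here is a minimal sanity check of that kinetic-energy figure, assuming a 3,000-pound vehicle at 70 miles per hour in US customary units:

```python
# Kinetic energy of a 3,000 lb vehicle at 70 mph, in foot-pounds.
weight_lb = 3000
speed_mph = 70
g = 32.174                             # gravitational acceleration, ft/s^2

speed_fps = speed_mph * 5280 / 3600    # ~102.7 ft/s
mass_slugs = weight_lb / g             # ~93.2 slugs
ke_ft_lb = 0.5 * mass_slugs * speed_fps ** 2

print(f"Kinetic energy: {ke_ft_lb:,.0f} foot-pounds")
# -> about 491,400 foot-pounds, close to the figure cited above
```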

Net net: Is smart software like a brain? No, the human brain thinks in terms of tenure, money, power, and ad sales.

Stephen E Arnold, November 4, 2022

Musky Metaphor: The Sink or Free-for-All Hellscape?

October 28, 2022

I read “Elon Musk Visits Twitter Carrying Sink As Deal Looms.” The write up (after presenting me with options to sign in, click for a free account, or just escape the pop-up) reported:

In business parlance, “kitchen sinking” means taking radical action at a company, though it is not clear if this was Mr Musk’s message – he also updated his Twitter bio to read “chief twit”. Mr Musk has said the social media site needs significant changes. At least one report has suggested he is planning major job cuts.

There was a photo, presumably copyright crowned, showing the orbital Elon Musk carrying a kitchen sink. A quick check of kitchen appliance vendors provided some examples of what a kitchen sink looks like.

I compared this sink with the one in the Beeb’s illustration and learned:

  1. Mr. Musk chose a white sink
  2. The drain was visible
  3. Mr. Musk’s “load” was a bit larger than a Starlink antenna


Now what’s the metaphor? Wikipedia is incredibly helpful when trying to figure out the allusions of very bright inventors of incredible assertions about self-driving software.

Wikipedia suggests:

  • Freaks of Nature (film), a 2015 comedy horror film, also known as Kitchen Sink
  • Kitchen Sink, a 1989 horror short directed by Alison Maclean
  • Kitchen Sink (TV series), cookery series on Food Network
  • “Kitchen Sink”, a song by Twenty One Pilots from their album Regional at Best
  • Kitchen Sink (album), an album by Nadine Shah, 2020
  • Kitchen Sink Press, an independent comic book publisher
  • Kitchen sink realism, a British cultural movement in the late 1950s and early 1960s
  • Kitchen sink syndrome, also known as “scope creep” in project management
  • Kitchen sink regression, a usually pejorative term for a regression analysis which uses a long list of possible independent variables
  • A sink in a kitchen for washing dishes, vegetables, etc.

I think these are incorrect.

My mind associates the kitchen sink with:

  • Going down the drain; that is, get rid of dirty water, food scraps, and soluble substances (mostly soluble if I remember what I learned from engineers at the CW Rice Engineering Company)
  • An opening into which objects can fall; for example, a ring, grandma’s silver baby spoon, or the lid to a bottle of Shaoxing wine. The allusion “going down the drain” thus equates to a fail whale
  • A collection point for discarded vegetable matter, bits of meat with bone, fish heads, or similar detritus. Yep, fish heads.

What’s your interpretation of the Musky kitchen sink? Wikipedia’s scope creep or mine, going down the drain? Nah, hellscape.

Be sure to tweet your answer.

Stephen E Arnold, October 28, 2022

Exabeam: A Remarkable Claim

October 25, 2022

I read “Exabeam New Scale SIEM Enables Security Teams to Detect the Undetectable.” I find the idea expressed in the headline interesting. A commercial firm can spot something that cannot be seen; that is, detect the undetectable. The write up states as a rock-solid factoid:

Claimed to be an industry first, Exabeam New-Scale SIEM allows security teams to search query responses across petabytes of hot, warm and cold data in seconds. Organizations can use the service to process logs with limitless scale at sustained speeds of more than 1 million events per second. Key to Exabeam’s offering is the ability to understand normal behavior to detect and prioritize anomalies. Exabeam New-Scale SIEM offers more than 1,800 pre-built correlation rules and more than 1,100 anomaly detection rules that leverage in excess of 750 behavior analytics detection models, which baseline normal behavior.

The write up continues with a blizzard of buzzwords; to wit:

The full list of new Exabeam products includes Security Log Management — cloud-scale log management to ingest, parse, store and search log data with powerful dashboarding and correlation. Exabeam SIEM offers cloud-native SIEM at hyperscale with modern search and powerful correlation, reporting, dashboarding and case management, and Exabeam Fusion provides New-Scale SIEM powered by modern, scalable security log management, powerful behavioral analytics and automated TDIR, according to the company. Exabeam Security Analytics provides automated threat detection powered by user and entity behavior analytics with correlation and threat intelligence. Exabeam Security Investigation is powered by user and entity behavior analytics, correlation rules and threat intelligence, supported by alerting, incident management, automated triage and response workflows.

Now this is not detecting the undetectable. The approach relies on processing data quickly, using anomaly detection methods, and pre-formed rules.

By definition, a pre-formed rule is likely to have a tough time detecting the undetectable. Bad actors exploit tried-and-true security weaknesses, rely on very tough-to-detect behaviors (a former employee selling a bad actor information about a target’s system, for example), and cook up new exploits, whether in a small mobile phone shop in the case of NSO Group or in a college class in Iran.
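To see why baselining “normal behavior” is reactive, consider a minimal sketch of the approach (a toy of my own; Exabeam’s actual models are not public):

```python
# Baseline-style anomaly detection: flag whatever deviates from history.
import statistics

baseline_logins_per_hour = [4, 5, 6, 5, 4, 6, 5, 5, 4, 6]  # "normal" history
mean = statistics.mean(baseline_logins_per_hour)
stdev = statistics.stdev(baseline_logins_per_hour)

def is_anomalous(value: float, threshold: float = 3.0) -> bool:
    """Flag values more than `threshold` standard deviations from the mean."""
    return abs(value - mean) / stdev > threshold

print(is_anomalous(50))  # True: a login burst stands out from the baseline
print(is_anomalous(5))   # False: an insider moving at "normal" volume
# An exploit that mimics normal behavior never trips the detector.
```

The detector only knows what it has seen; the genuinely novel sails straight through.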

What is notable in the write up is:

The use of SIEM without explaining that the acronym represents “security information and event management.” The bound phrase “security information” means the data marking an exploit or attack, and “event management” means what the cyber security professionals do when the attack succeeds. The entire process is reactive; that is, only after something bad has been identified can action be taken. Without that awareness, the attack moves forward and continues. The idea of “early warning” means one thing; detecting the undetectable is quite another.

Who is responsible for this “detect the undetectable” line? My view is that it is an art history major now working in marketing.

Detecting the undetectable. More like detecting sloganized marketing about a very serious threat to organizations hungry for dashboarding.

Stephen E Arnold, October 25, 2022

Does Apple Evoke Fear?

October 20, 2022

Fast Company points out how Apple is appealing to America’s overwhelming culture of fear in “Apple Used To Sell Wonder. Now It Sells Fear.” For forty years, Apple has presented itself as an optimistic brand of the future. Its aesthetic and state-of-the-art technology was, and is, supposed to improve our lives.

Under Jony Ive’s design lead, Apple has taken to upholding Murphy’s Law by selling fear. Apple’s newest marketing campaign promotes how its technology is used by survivors. Commercials and other advertising feature tales of survival, from heart attacks to plane crashes. All these people survived thanks to an Apple product, usually the Apple Watch. The watch even has a new Crash Detection feature that is supposed to make people feel safer:

“Do note that Crash Detection, which is part of the new Apple Watch Series 8, won’t prevent any accidents from happening, of course. But that wasn’t Apple’s point. These examples implied something else entirely: The world is already on fire. You’re already getting burned. Just make sure that you live to tell the tale.”

A great example is the new Apple Watch Ultra which was specifically designed for outdoor exploration with a compass and bright international orange accents that help wearers be noticed in emergencies. Apple also quoted Sir Ernest Shackleton’s alleged advertisement for an Antarctica expedition to appeal to consumers: “…describing a ‘hazardous journey’ with ‘long months of complete darkness, constant danger.’ While ‘safe return [is] doubtful,’ the ad admitted, it promised ‘honor and recognition in case of success.’”

Apple is telling consumers life sucks, but its products can make it better. Another way to read the new advertising campaign is that Apple wants people to go outside and exercise more as a response to the growing obesity epidemic. Maybe it’s Apple’s way of telling people to exercise safely?

Whitney Grace, October 20, 2022

The Zuck with Legs: Just a Demo?

October 18, 2022

Slashdot ran an interesting item on October 14, 2022. The title of the post was “Facebook’s Legs Video Was a Lie.” According to the short item, this factoid was reported: As UploadVR’s Ian Hamilton has since reported, Meta has issued a follow-up statement, which says, “To enable this preview of what’s to come, the segment [Zuck’s demo] featured animations created from motion capture.”

Did you ever hear the joke about Bill Gates, who had to choose between heaven and hell? He examined both destinations and chose the location with babes, a great location, and close friends, including a construct that looked like Gary Kildall and another emulating Jeffrey Epstein. Mr. Gates chose the one with babes and pals. When Mr. Gates arrived at his chosen destination, he found himself in a horrible place with hellfire and demons. He called Saint Peter (God was in a meeting) and asked, “What happened to the place with good weather, my pals, and babes?” The response: “That was a demo.”

My hunch is that the Zuck’s final destination will be a 360 degree immersive TikTok stream. Legs may not be part of the equation.

Stephen E Arnold, October 18, 2022

Webb Wobbles: Do Other Data Streams Stumble Around?

October 4, 2022

I read an essay identified as coming from The_Byte in Futurism, with content from Nature. Confused? I am.

The title of the article is “Scientists May Have Really Screwed Up on Early James Webb Findings.” The “Webb” is not the digital construct, but the space telescope. The subtitle about the data generated from the system is:

I don’t think anybody really expected this to be as big of an issue as it’s becoming.

Space is not something I think about. Decades ago I met a fellow named Fred G., who was engaged in a study of space warfare. Then one of my colleagues, Howard F., joined my team after doing some satellite stuff with a US government agency. He didn’t volunteer any information to me, and I did not ask. Space may be the final frontier, but I liked working in online from my land-based office, thank you very much.

The article raises an interesting point; to wit:

When the first batch of data dropped earlier this summer, many dived straight into analysis and putting out papers. But according to new reporting by Nature, the telescope hadn’t been fully calibrated when the data was first released, which is now sending some astronomers scrambling to see if their calculations are now obsolete. The process of going back and trying to find out what parts of the work needs to be redone has proved “thorny and annoying,” one astronomer told Nature.

The idea is that the “Webby” data may have been distorted, skewed, or output with knobs and dials set incorrectly. Not surprisingly, those who used these data to do spacey stuff may have reached unjustifiable conclusions. What about those nifty images, the news conferences, and the breathless references to the oldest, biggest, coolest images from the universe?
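To illustrate the idea (a toy example of mine, not the actual JWST pipeline), consider how a photometric zero point, one of the calibration knobs in question, shifts a derived brightness:

```python
# Toy photometric calibration: the zero point converts raw instrument
# flux into an astronomical magnitude. Change the knob, change the science.
import math

def magnitude(flux: float, zero_point: float) -> float:
    return -2.5 * math.log10(flux) + zero_point

flux = 1200.0                 # raw instrument counts (made up)
early_zero_point = 25.0       # preliminary calibration (hypothetical)
revised_zero_point = 25.3     # post-calibration value (hypothetical)

print(magnitude(flux, early_zero_point))    # ~17.30
print(magnitude(flux, revised_zero_point))  # ~17.60
# A few tenths of a magnitude can decide which galaxies look
# record-breakingly distant; early conclusions may not survive.
```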

My thought is that the analyses, images, and scientific explanations are wrong to some degree. I hope the data are as pure as online clickstream data. No, no, strike that. I hope the data are as rock solid as mobile GPS data. No, no, strike that too. I hope the data are accurate like looking out the window to determine if it is a clear or cloudy day. Yes, narrowed scope, first hand input, and a binary conclusion.

Unfortunately in today’s world, that’s not what data wranglers do on the digital ranch.

If the “Webby” data are off kilter, my question is:

What about the data used to train smart software from some of America’s most trusted and profitable companies? Could these data cause incorrect decisions to flow from the models, so that humans and downstream systems keep producing less and less reliable results?

My thought is, “Who wants to think about data being wrong, poisoned, or distorted?” People want better, faster, cheaper. Some people want to leverage data into cash or a bunker in Alaska. Others, like Dr. Timnit Gebru, want their criticisms of the estimable Google to get some traction, even among those who snorkel and do deep dives.

If the scientists, engineers, and mathematicians fouled up with James Webb data, isn’t it possible that some of the big data outfits are making similar mistakes with calibration, data verification, analysis, and astounding observations?

I think the “Webby” moment is important. Marketers are not likely to worry too much.

Stephen E Arnold, October 4, 2022

Ballmer Versus Smit: Hooper Owner Versus Suit

September 27, 2022

I learned that Steve Ballmer — former, much loved leader of Microsoft for 14 culturally rewarding years — allegedly said something like “Google is a one-trick pony.” Okay, where’s the supporting data? One-liners are not hyperlinked to Mr. Ballmer’s detailed, Harvard-infused spreadsheet about the Google’s business. Nah, Google sold online ads. Its inspiration came from outfits most 20-somethings struggle to associate with innovation; specifically, GoTo.com, Overture.com, and Yahoo.com. (The yodel might spark some awareness in young wizards, but probably not too many will think of the Big Bear creative who crafted the sound. Factoid: The creator of the Yahoo yodel was the same person who did the catchy Big Mac jingle with the pickle on top. But you knew that, right?)

I thought of Mr. Ballmer and his understated, low-energy style when I read “Gerrit Smit on Alphabet’s Underappreciated Growth Drivers.” Mr. Smit is a senior financial whiz at Stonehage Fleming. The company’s objective is to get paid by people with money for services, which include advice. The firm’s Web site says:

Supporting many of the world’s leading families and wealth creators across generations and geographies

Since I live in rural Kentucky, it will not surprise you that I interpret this sentence to mean, “We advise and get paid whether the investment pays off or falls into the Mariana Trench.”

The thesis of the article is that Alphabet Google YouTube DeepMind will grow no matter what happens to advertising, whether regulators keep nicking the estimable firm, or whether competitors like Amazon and TikTok continue to bumble forward with their lame attempts to get big and prosper.

Mr. Smit offers:

Alphabet is one of the scarcer quality technology-driven companies with free options on further future organic growth drivers. It invests heavily in artificial intelligence, quantum computing, self-driving cars (Waymo) and biotechnology (Verily Life Sciences). It is particularly active in healthcare, having last year alone invested US$1.7-billion in visionary healthcare ideas, earning it fifth position of all companies in the Nature index (which tracks the success of scientific analysis in life sciences). It recently also completed the acquisition of Fitbit.

My instinct is to point out that each of these businesses can generate cash, but it is not clear to me that the volume of cash or its automated bidding magic will replicate in these areas of “heavy” investment. Smart software continues to capture investor interest. However, there are some doubts about the wild and crazy claims about its accuracy, effectiveness, and political correctness. I like to point to the problem of bias, made vivid by AGYD’s handling of Dr. Timnit Gebru and other employees who did not get with the program. I also enjoy bringing up Google’s desire to “solve death,” which has morphed into forays into America’s ethically and intentionality-challenged health care sector. Perhaps Google’s senior executives will find subrogation more lucrative than ad auctions, but I doubt it. Self-driving cars are interesting as well. An errant Waymo will almost certainly drive demand for health care in some circumstances and may increase sales of Fitbits in the event the person injured by a self-driving car follows a rehabilitation routine.

But these examples are “bets,” long shots, or as AGYD likes to say “moonshots.”

Yeah, great.

Here’s another statement from Mr. Smit’s “buy Google stock now” and “let us buy that stock for you” essay:

While Alphabet keeps reinvesting actively and last year spent over 12% of sales on research and development, it has built a strong record of generating excess free cash flow – in our view the main reason for investing in a stock, and the main determinant of the fundamental value of a business. Alphabet’s free cash flow sometimes takes a large step upwards and then stabilises, but seldom takes a large step backwards. This clearly is of comfort to investors.

But Mr. Smit is hedging his rah rah:

The current economic outlook is particularly uncertain, and the overall advertising market may not impress for a while. Although Alphabet can easily “manage” its financial results by holding back investment in, say, Google Cloud, it is not so short-sighted. Regulatory risks have been looming for a long time, in essence resulting from the company’s effectiveness.

Net net: Buy shares in AGYD… now. Monopolistic businesses have that special allure.

Stephen E Arnold, September 27, 2022
