The Google: Dribs and Drabs of Information Suggest a Frisky Outfit

October 10, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I have been catching up since I returned from a law enforcement conference. One of the items in my “read” file concerned Google’s alleged demonstrations of the firm’s cleverness. Clever is often valued more than intelligence in some organizations, in my experience. I picked up on an item describing the system and method for tweaking a Google query to enhance the results with some special content.

“How Google Alters Search Queries to Get at Your Wallet” appeared on October 2, 2023. By October 6, 2023, the article had been disappeared. I want to point out, for you open source intelligence professionals, that the original article remains online.
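To make the alleged mechanism concrete, here is a minimal, entirely hypothetical sketch of query tweaking: a generic search term is silently broadened to also match monetizable variants. The brand table, function names, and rewrite rule are invented for illustration; nothing here is Google’s actual system or method.

```python
# Entirely hypothetical sketch: the brand table and rewrite rule are
# invented for illustration. This is not Google's actual system.
BRAND_EXPANSIONS = {
    "sneakers": ["sneakers", "BrandX sneakers"],
    "laptop": ["laptop", "BrandY laptop deals"],
}

def tweak_query(query: str) -> str:
    """Broaden generic terms so they also match monetizable variants."""
    rewritten = []
    for token in query.lower().split():
        variants = BRAND_EXPANSIONS.get(token)
        if variants:
            # OR the generic term together with its commercial variant.
            rewritten.append("(" + " OR ".join(variants) + ")")
        else:
            rewritten.append(token)
    return " ".join(rewritten)

print(tweak_query("cheap sneakers"))
# cheap (sneakers OR BrandX sneakers)
```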


Two serious and bright knowledge workers look confused when asked about alleged cleverness. One says, “I don’t understand. We are here to help you.” Thanks, Microsoft Bing. Highly original art and diverse too.

Nope. I won’t reveal where or provide a link to it. I read it and formulated three notions in my dinobaby brain:

  1. The author is making darned certain that he/she/it will not be hired by the Google.
  2. The system and method described in the write up is little more than a variation on themes which thread through a number of Google patent documents. I demonstrated in my monograph Google Version 2.0: The Calculating Predator that clever methods work for profiling users and building comprehensive data sets about products.
  3. The idea of editorial curation is alive, just not particularly effective at the “begging for dollars” outfit doing business as Wired Magazine.

Those are my opinions, and I urge you to formulate your own.

I noted several interesting comments on Hacker News about this publish-and-disappear event. Let me highlight several. You can find the posts at this link, but keep in mind, these also can vaporize without warning. Isn’t being a sysadmin fun?

  1. judge2020: “It’s obvious that they design for you to click ads, but it was fairly rocky suggesting that the backend reaches out to the ad system. This wouldn’t just destroy results, but also run afoul of FCC Ad disclosure requirements….”
  2. techdragon: “I notice it seems like Google had gotten more and more willing to assume unrelated words/concepts are sufficiently interchangeable that it can happily return both in a search query for either … and I’ll be honest here… single behavior is the number one reason I’m on the edge of leaving google search forever…”
  3. TourcanLoucan: “Increasingly the Internet is not for us, it is certainly not by us, it is simply where you go when you are bored, the only remaining third place that people reliably have access to, and in true free market fashion, it is wall-to-wall exploitation.”

I want to point out that online services operate like droplets of mercury. They merge and one has a giant blob of potentially lethal mercury. Is Google a blob of mercury? The disappearing content is interesting as are the comments about the incident. But some kids play with mercury; others use it in industrial processes; and some consume it (willingly or unwillingly) like sailors of yore with a certain disease. They did not know. You know or could know.

Stephen E Arnold, October 10, 2023

    Newly Emerged Snowden Revelations Appear in Dutch Doctoral Thesis

    October 10, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

One rumor about Eddie Snowden (a fine gent indeed) holds that 99 percent of the NSA data he risked his neck to expose ten years ago remains unpublished. Some entities that once possessed that archive are on record as having destroyed it. This includes The Intercept, which was originally created specifically to publish its revelations. So where are the elusive Snowden files now? Could they be in the hands of a post-PhD researcher residing in Berlin? Computer Weekly examines three fresh Snowden details that made their way into a doctoral thesis in its article, “New Revelations from the Snowden Archive Surface.” The thesis was written by American citizen Jacob Appelbaum, who has since received his PhD from the Eindhoven University of Technology in the Netherlands. Reporter Stefania Maurizi summarizes:

    “These revelations go back a decade, but remain of indisputable public interest:

    1. The NSA listed Cavium, an American semiconductor company marketing Central Processing Units (CPUs) – the main processor in a computer which runs the operating system and applications – as a successful example of a ‘SIGINT-enabled’ CPU supplier. Cavium, now owned by Marvell, said it does not implement back doors for any government.
    2. The NSA compromised lawful Russian interception infrastructure, SORM. The NSA archive contains slides showing two Russian officers wearing jackets with a slogan written in Cyrillic: ‘You talk, we listen.’ The NSA and/or GCHQ has also compromised Key European LI [lawful interception] systems.
    3. Among example targets of its mass surveillance program, PRISM, the NSA listed the Tibetan government in exile.”

Of public interest, indeed. See the write-up for more details on each point or, if you enjoy wading through academic papers, the thesis itself [pdf]. So how and when did Appelbaum get his hands on information from the Snowden docs? Those details are not revealed, but we do know this much:

    “In 2013, Jacob Appelbaum published a remarkable scoop for Der Spiegel, revealing the NSA had spied on Angela Merkel’s mobile phone. This scoop won him the highest journalistic award in Germany, the Nannen Prize (later known as the Stern Award). Nevertheless, his work on the NSA revelations, and his advocacy for Julian Assange and WikiLeaks, as well as other high-profile whistleblowers, has put him in a precarious condition. As a result of this, he has resettled in Berlin, where he has spent the past decade.”

    Probably wise. Will most of the Snowden archive remain forever unpublished? Impossible to say, especially since we do not know how many copies remain and in whose hands.

    Cynthia Murrell, October 10, 2023

    Israeli Intelware: Is It Time to Question Its Value?

    October 9, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

In 2013 (I believe that was the year), I attended an ISS TeleStrategies Conference. A friend of mine wanted me to see his presentation, and I was able to pass the Scylla-and-Charybdis-inspired security process and listen to the talk. (Last week I referenced that talk and quoted a statement posted on a slide for everyone in attendance to view. Yep, a quote from 2013, maybe earlier.)

After the talk, I walked quickly through the ISS exhibit hall. I won’t name the firms exhibiting because some of these are history (failures), some are super stealthy, and others have been purchased by other outfits as the intelware roll-ups continue. I do recall a large number of intelware companies with their headquarters in or near Tel Aviv, Israel. My impression, as I recall, was that Israel’s butt-kicking software could make sense of social media posts, Dark Web forum activity, Facebook craziness, and Twitter disinformation. These Israeli outfits were then the alpha vendors. Now? Well, maybe a bit less alpha, drifting to beta or gamma.


    One major to another: “Do you think our intel was wrong?” The other officer says, “I sat in a briefing teaching me that our smart software analyzed social media in real time. We cannot be surprised. We have the super duper intelware.” The major says, jarred by an explosion, “Looks like we were snookered by some Madison Avenue double talk. Let’s take cover.” Thanks, MidJourney. You do understand going down in flames. Is that because you are thinking about your future?

    My impression was that the Israeli-developed software shared a number of functional and visual similarities. I asked people at the conference if they had noticed the dark themes, the similar if not identical timeline functions, and the fondness for maps on which data were plotted and projected. “Peas in a pod,” my friend, a former NATO officer told me. Are not peas alike?

The reason — and no one has really provided this information — is that the developers shared a foxhole. The government entities in Israel train people with the software and systems proven over the years to be useful. The young trainees carry their learnings forward in their careers. Then, when mustered out, a few bright sparks form companies or join intelware giants like Verint and continue to enhance existing tools or build new ones. The idea is that life in the foxhole imbues those who experience it with certain similar mental furniture. The ideas, myths, and software experiences form the muddy floor and dirt walls of the foxhole. I suppose one could call this “digital bias,” which later manifests itself in the dozens of Tel Aviv-based intelware, policeware, and spyware companies’ products and services.

    Why am I mentioning this?

The reason is that I was shocked and troubled by the alleged surprise attack. If you want to follow the activity, navigate to X.com and search that somewhat crippled system for #OSINT. Skip “Top” and go to the “Latest” tab.

    Several observations:

    1. Are the Israeli intelware products (many of which are controversial and expensive) flawed? Obviously excellent software processing “signals” was blind to the surprise attack, right?
    2. Are the Israeli professionals operating the software unable to use it to prevent surprise attacks? Obviously excellent software in the hands of well-trained professionals flags signals and allows action to be taken when warranted. Did that happen? Has Israeli intel training fallen short of its goal of protecting the nation? Hmmm. Maybe, yes.
    3. Have those who hype intelware and the excellence of a particular system and method been fooled, falling into the dark pit of OSINT blind spots like groupthink and “reasoning from anecdote, not fact”? I am leaning toward a “yes”, gentle reader.

The time for a critical look at what works and what doesn’t is what the British call “from this day” work. The years of marketing craziness are one thing, but when either the system or the method allows people to be killed without warning or cause, it broadcasts one message: “Folks, something is very, very wrong.”

    Perhaps certification of these widely used systems is needed? Perhaps a hearing in an appropriate venue is warranted?

Blind spots can cause harm. Marketers can cause harm. Poorly trained operators can cause harm. Even foxholes require tidying up. Technology for intelligence applications is easy to talk about, but it is now clear to everyone engaged in making sense of signals that one country’s glammed-up systems missed the wicket.

    Stephen E Arnold, October 9, 2023

    Canada vs. Google: Not a Fair Hockey Game

    October 9, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

    I get a bit of a thrill when sophisticated generalist executives find themselves rejected by high-tech wizards. An amusing example of “Who is in charge here?” appears in “Google Rejects Trudeau’s Olive Branch, Threatens News Link Block Over New Law.”


    A seasoned high-tech executive explains that the laptop cannot retrieve Canadian hockey news any longer. Thanks, Microsoft Bing. Nice maple leaf hat.

    The write up states:

    Alphabet Inc.’s Google moved closer to blocking Canadians from viewing news links on its search engine, after it rejected government regulations meant to placate its concerns about an impending online content law.

    Yep, Canada may not be allowed into the select elite of Google users with news. Why? Canada passed a law with which Google does not agree. Imagine. Canada wants Google to pay for accessing, scraping, and linking to Canadian news.

    Canada does not understand who is in charge. The Google is the go-to outfit. If you don’t believe me, just ask some of those Canadian law enforcement and intelligence analysts what online system is used to obtain high-value information. Hint. It is not yandex.ru.

    The write up adds:

    Google already threatened to remove links to news, and tested blocking such content for a small percentage of users in Canada earlier this year. On Friday, it went further, implying a block could be imminent as the current regulations would force the company to participate in the mandatory bargaining process while it applies for exemption.

    Will the Google thwart the Canadian government? Based on the importance of the Google system to certain government interests, a deal of some sort seems likely. But Google could just buy Canada and hire some gig workers to run the country.

    Stephen E Arnold, October 9, 2023

Cognitive Blind Spot 3: You Trust Your Instincts, Right?

    October 9, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

    ChatGPT became available in the autumn of 2022. By December, a young person fell in love with his chatbot. From this dinobaby’s point of view, that was quicker than a love affair ignited by a dating app. “Treason Case: What Are the Dangers of AI Chatbots?” misses the point of its own reporter’s story. The Beeb puts the blame on Jaswant Singh Chail, not the software. Justice needs an individual, not a pride of zeros and ones.


    A bad actor tries to convince other criminals that he is honest, loyal, trustworthy, and an all-around great person. “Trust me,” he says. Some of those listening to the words are skeptical. Thanks, MidJourney. You are getting better at depicting duplicity.

    Here’s the story: Shortly after discovering an online chatbot, Mr. Chail fell in love with “an online companion.” The Replika app allows a user to craft a chatbot. The protagonist in this love story promptly moved from casual chit chat to emotional attachment. As the narrative arc unfolded, Mr. Chail confessed that he was an assassin, and he wanted to kill the Queen of England. Mr. Chail planned on using a crossbow.

    The article reports:

    Marjorie Wallace, founder and chief executive of mental health charity SANE, says the Chail case demonstrates that, for vulnerable people, relying on AI friendships could have disturbing consequences. “The rapid rise of artificial intelligence has a new and concerning impact on people who suffer from depression, delusions, loneliness and other mental health conditions,” she says.

That seems reasonable. The software meshed nicely with the cognitive blind spot of trusting one’s intuition. Some call this “gut” feel. The label matters less than the confusion of software with reality.

But what happens when the new Google Pixel 8 camera enhances an image automatically? Who wants a lousy snap? Google appears to favor a Mother Google approach. When an image is manipulated, whether a still or a video, what does one’s gut say? “I trust pictures and videos for accuracy.” Like the young, would-be, off-the-rails chatbot lover, zeros and ones can create some interesting effects.

    What about you, gentle reader? Do you know how to recognize an unhealthy interaction with smart software? Can you determine if an image is “real” or the fabrication of a large outfit like Google?

    Stephen E Arnold, October 9, 2023

    Smart Productivity Software Means Pink Slip Flood

    October 9, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

    Ready for some excitement, you under 50s?

Soon, workers may be spared the pain of training their replacements. Consciously, anyway. Wired reports, “Your Boss’s Spyware Could Train AI to Replace You.” Researcher Carl Frey’s landmark 2013 prediction that AI could threaten half of US jobs has not yet come to pass. Now that current tools like ChatGPT have proven (so far) less accurate and self-sufficient than advertised, some workers are breathing a sigh of relief. Not so fast, warns journalist Thor Benson. It is the increasingly pervasive “productivity” (aka monitoring) software we need to be concerned about. Benson writes:

    “Enter corporate spyware, invasive monitoring apps that allow bosses to keep close tabs on everything their employees are doing—collecting reams of data that could come into play here in interesting ways. Corporations, which are monitoring their employees on a large scale, are now having workers utilize AI tools more frequently, and many questions remain regarding how the many AI tools that are currently being developed are being trained. Put all of this together and there’s the potential that companies could use data they’ve harvested from workers—by monitoring them and having them interact with AI that can learn from them—to develop new AI programs that could actually replace them. If your boss can figure out exactly how you do your job, and an AI program is learning from the data you’re producing, then eventually your boss might be able to just have the program do the job instead.”

    Even at companies that do not use spyware, employees may unwittingly train their AI replacements simply by generating data as part of their work. To make matters worse, because it gets neither salary nor benefits, an algorithm need not exceed or even match a human’s performance to land the job.
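To see how harvested monitoring data could turn into a replacement, consider a deliberately tiny sketch of imitation learning from an event log. Everything here — the log fields, the contexts, the memorized policy — is invented for illustration; a real system would fit a statistical model to far richer data.

```python
# Minimal, hypothetical sketch: logged (context, action) pairs from
# monitoring software train an imitator. All names are invented.
from collections import Counter

# Pretend event log harvested by "productivity" software.
event_log = [
    ("invoice_received", "route_to_accounts_payable"),
    ("invoice_received", "route_to_accounts_payable"),
    ("refund_request", "escalate_to_manager"),
    ("invoice_received", "route_to_accounts_payable"),
    ("refund_request", "issue_refund"),
]

# "Training": for each observed context, memorize the worker's most
# common response. Real systems would generalize, not memorize.
policy = {}
for ctx in {c for c, _ in event_log}:
    actions = Counter(a for c, a in event_log if c == ctx)
    policy[ctx] = actions.most_common(1)[0][0]

# "Inference": the imitator now handles the task without the worker.
print(policy["invoice_received"])  # route_to_accounts_payable
```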

So what can we do? We could retrain workers but, as MIT economics professor David Autor notes, that is not one of the US’s strong suits. Or we could take a cue from the Industrial Revolution: Frey points to Britain’s Poor Laws, which gave financial relief to workers whose jobs became obsolete back then. Hmm, we wonder: How would a similar measure fare in the current US Congress?

    Cynthia Murrell, October 9, 2023

    Cognitive Blind Spot 2: Bandwagon Surfing or Do What May Be Fashionable

    October 6, 2023

    Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

    Humans are into trends. The NFL and Taylor Swift appear to be a trend. A sporting money machine and a popular music money machine. Jersey sales increase. Ms. Swift’s music sales go up. New eyeballs track a certain football player. The question is, “Who is exploiting whom?”

Which bandwagon are you riding? Thank you, MidJourney. Gloom seems to be part of your DNA.

Think about large language models and smart software. A similar dynamic may exist. Late in 2022, the natural language interface became the next big thing. Students and bad actors figured out that using a ChatGPT-type service could expedite certain activities. Students could produce 500-word essays in less than a minute. Bad actors could generate snippets of code in seconds. In short, many people were hopping on the LLM bandwagon decorated with smart software logos.

    Now a bandwagon powered by healthy skepticism may be heading toward main street. Wired Magazine published a short essay titled “Chatbot Hallucinations Are Poisoning Web Search.” The foundational assumption is that Web search was better before ChatGPT-type incursions. I am not sure that idea is valid, but for the purposes of illustrating bandwagon surfing, it will pass unchallenged. Wired’s main point is that as AI-generated content proliferates, the results delivered by Google and a couple of other but vastly less popular search engines will deteriorate. I think this is a way to assert that lousy LLM output will make Web search worse. “Hallucination” is jargon for made up or just incorrect information.

Consider this essay “Evaluating LLMs Is a Minefield.” The essay and slide deck are the work of two AI wizards. The main idea is that figuring out whether a particular LLM or ChatGPT-type service is right, wrong, less wrong, more right, biased, or a digital representation of a 23-year-old art history major working in a public relations firm is difficult.

I am not going to take the side of either referenced article. The point is that the hyperbolic excitement about “smart software” seems to be giving way to LLM criticism. From software for Everyman, the services are becoming tools for improving productivity.

    To sum up, the original bandwagon has been pushed out of the parade by a new bandwagon filled with poobahs explaining that smart software, LLM, et al are making the murky, mysterious Web worse.

The question becomes, “Are you jumping on the bandwagon with the banner that says ‘LLMs are really bad,’ or are you sticking with the rah-rah crowd?” The point is that information at one point was good. Now information is less good. Imagine how difficult it will be to determine what’s right or wrong, biased or unbiased, or acceptable or unacceptable.

Who wants to do the work to determine provenance or answer questions about accuracy? Not many people. That, rather than lousy Web search, may be more important to some professionals. But that does not solve the problem of the time and resources required to deal with accuracy and other issues.

    So which bandwagon are you riding? The NFL or Taylor Swift? Maybe the tension between the two?

    Stephen E Arnold, October 6, 2023

Is Google Setting a Trap for Its AI Competition?

    October 6, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

    The litigation about the use of Web content to train smart generative software is ramping up. Outfits like OpenAI, Microsoft, and Amazon and its new best friend will be snagged in the US legal system.

    But what big outfit will be ready to offer those hungry to use smart software without legal risk? The answer is the Google.

    How is this going to work?

Simple. Google is beavering away with its synthetic data. Some real data are used to train sophisticated stacks of numerical recipes. The idea is that these algorithms will be “good enough”; thus, the need for “real” information is obviated. And Google has another trick up its sleeve. The company has coveys of coders working on trimmed-down systems and methods. The idea is that using less information will produce more and better results than the crazy idea of indexing content from wherever in real time. The small data can be licensed while the competitors are spending their days with lawyers.

How do I know this? I don’t, but Google is providing tantalizing clues in marketing collateral like “Researchers from the University of Washington and Google have Developed Distilling Step-by-Step Technology to Train a Dedicated Small Machine Learning Model with Less Data.” The author is a student who provides sources for the information about the “less is more” approach to smart software training.
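For readers who want the mechanics, the cited “distilling step-by-step” work trains a small model on two signals at once: the task label and a rationale generated by a large teacher model, so fewer labeled examples are needed. Below is a minimal sketch of that multi-task objective. The toy loss functions, weight, and strings are my own stand-ins, not Google’s code.

```python
# Hedged sketch of the "distilling step-by-step" idea: a small model is
# trained to predict both the task label and the teacher LLM's rationale.
# The loss functions below are toy stand-ins for real sequence losses.

def label_loss(pred_label: str, gold_label: str) -> float:
    return 0.0 if pred_label == gold_label else 1.0

def rationale_loss(pred_rationale: str, teacher_rationale: str) -> float:
    # Crude token-overlap proxy for a real sequence-level loss.
    pred = set(pred_rationale.split())
    teacher = set(teacher_rationale.split())
    return 1.0 - len(pred & teacher) / max(len(teacher), 1)

def step_by_step_loss(pred_label, gold_label, pred_rationale,
                      teacher_rationale, lam: float = 0.5) -> float:
    # Multi-task objective: standard label loss plus a weighted term that
    # pushes the small model to reproduce the teacher's reasoning.
    return (label_loss(pred_label, gold_label)
            + lam * rationale_loss(pred_rationale, teacher_rationale))

loss = step_by_step_loss(
    pred_label="positive", gold_label="positive",
    pred_rationale="the review praises the battery",
    teacher_rationale="the review praises the battery life",
)
print(round(loss, 3))  # 0.1
```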

    And, may the Googlers sing her praises, she cites Google technical papers. In fact, one of the papers is described by the fledgling Googler as “groundbreaking.” Okay.

    What’s really being broken is the approach of some of Google’s most formidable competition.

    When will the Google spring its trap? It won’t. But as the competitors get stuck in legal mud, the Google will be an increasingly attractive alternative.

    The last line of the Google marketing piece says:

    Check out the Paper and Google AI Article. All Credit For This Research Goes To the Researchers on This Project. Also, don’t forget to join our 30k+ ML SubReddit, 40k+ Facebook Community, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more.

    Get that young marketer a Google mouse pad.

    Stephen E Arnold, October 6, 2023

    The Google and Its AI Peers Guzzle Water. Yep, Guzzle

    October 6, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

    Much has been written about generative AI’s capabilities and its potential ramifications for business and society. Less has been stated about its environmental impact. The AP highlights this facet of the current craze in its article, “Artificial Intelligence Technology Behind ChatGPT Was Built in Iowa—With a Lot of Water.” Iowa? Who knew? Turns out, there is good reason to base machine learning operations, especially the training, in such a chilly environment. Reporters Matt O’Brien and Hannah Fingerhut write:

    “Building a large language model requires analyzing patterns across a huge trove of human-written text. All of that computing takes a lot of electricity and generates a lot of heat. To keep it cool on hot days, data centers need to pump in water — often to a cooling tower outside its warehouse-sized buildings. In its latest environmental report, Microsoft disclosed that its global water consumption spiked 34% from 2021 to 2022 (to nearly 1.7 billion gallons, or more than 2,500 Olympic-sized swimming pools), a sharp increase compared to previous years that outside researchers tie to its AI research.”

During the same period, Google’s water usage surged by 20%, according to the company. Notably, Google was strategic about where it guzzled this precious resource: it kept usage steady in Oregon, where there was already criticism about its water usage. But its consumption doubled outside Las Vegas, famously one of the nation’s hottest and driest regions. Des Moines, Iowa, on the other hand, is a much cooler and wetter locale. We learn:

    “In some ways, West Des Moines is a relatively efficient place to train a powerful AI system, especially compared to Microsoft’s data centers in Arizona that consume far more water for the same computing demand. … For much of the year, Iowa’s weather is cool enough for Microsoft to use outside air to keep the supercomputer running properly and vent heat out of the building. Only when the temperature exceeds 29.3 degrees Celsius (about 85 degrees Fahrenheit) does it withdraw water, the company has said in a public disclosure.”

Though merely a trickle compared to what the same work would take in Arizona, that summer usage is still a lot of water. Microsoft’s Iowa data centers swilled about 11.5 million gallons in July of 2022, the month just before GPT-4 graduated from training. Naturally, both Microsoft and Google insist they are researching ways to use less water. It would be nice if environmental protection were more than an afterthought.

    The write-up introduces us to Shaolei Ren, a researcher at the University of California, Riverside. His team is working to calculate the environmental impact of generative AI enthusiasm. Their paper is due later this year, but they estimate ChatGPT swigs more than 16 ounces of water for every five to 50 prompts, depending on the servers’ location and the season. Will big tech find a way to curb AI’s thirst before it drinks us dry?
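As a quick sanity check on the scale of that estimate, the per-prompt arithmetic is easy to run. The figures are the article’s; the unit conversion below is mine.

```python
# Back-of-the-envelope arithmetic for the cited estimate: roughly
# 16 ounces of water per 5 to 50 prompts, depending on location/season.
OUNCES_PER_LITER = 33.814

water_liters = 16 / OUNCES_PER_LITER    # ~0.47 L per batch of prompts
per_prompt_low = water_liters / 50      # best case: 50 prompts per 16 oz
per_prompt_high = water_liters / 5      # worst case: 5 prompts per 16 oz

print(f"{per_prompt_low * 1000:.0f} to {per_prompt_high * 1000:.0f} ml per prompt")
# -> roughly 9 to 95 ml of water per prompt
```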

    Cynthia Murrell, October 6, 2023

    Cognitive Blind Spot 1: Can You Identify Synthetic Data? Better Learn.

    October 5, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

It has been a killer with the back-to-back trips to Europe and then to the intellectual hub of old-fashioned America. In France, I visited a location that is allegedly the office of a company which “owns” the domain rrrrrrrrrrr.com. No luck. Fake address. I then visited a semi-sensitive area in Paris, walking around in the confused fog only a 78-year-old can generate. My goal was to spot a special type of surveillance camera designed to provide data to a smart software system. The idea is that the images can be monitored through time so a vehicle making frequent passes of a structure can be flagged, its number tag read, and a bit of thought given to answering the question, “Why?” I visited with a friend and big brain who was one of the technical keystones of an advanced search system. He gave me his most recent book, and I paid for my Orangina. Exciting.
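A bare-bones sketch of that camera-plus-software idea: log each plate sighting with a timestamp and flag any vehicle that passes the same structure too often within a window. The thresholds, plate string, and function names are invented for illustration; real systems add plate recognition, geometry, and far more nuance.

```python
# Hypothetical sketch: flag vehicles making frequent passes of one
# structure. Thresholds and field names are invented for illustration.
from collections import defaultdict

WINDOW_HOURS = 24
MAX_PASSES = 3

sightings = defaultdict(list)  # plate -> list of timestamps (hours)

def record_pass(plate: str, hour: float) -> bool:
    """Record a sighting; return True if the vehicle should be flagged."""
    sightings[plate].append(hour)
    recent = [t for t in sightings[plate] if hour - t <= WINDOW_HOURS]
    return len(recent) > MAX_PASSES

for t in (1.0, 3.5, 9.0, 14.25):
    record_pass("AB-123-CD", t)
print(record_pass("AB-123-CD", 20.0))  # True: fifth pass within 24 hours
```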


    One executive tells his boss, “Sir, our team of sophisticated experts reviewed these documents. The documents passed scrutiny.” One of the “smartest people in the room” asks, “Where are we going for lunch today?” Thanks, MidJourney. You do understand executive stereotypes, don’t you?

On the flights, I did some thinking about synthetic data. I am not sure that most people can provide a definition which will embrace the Google’s efforts in the money-saving land of synthetic data. I don’t think too many people know about Charlie Javice’s use of synthetic data to whip up JPMC’s enthusiasm for her company Frank Financial. I don’t think most people understand that when one types a phrase into the Twitch AI Jesus, the software outputs video and mostly crazy talk along with some Christian lingo.

    The purpose of this short blog post is to present an example of synthetic data and conclude by revisiting the question, “Can You Identify Synthetic Data?” The article I want to use as a hook for this essay is from Fortune Magazine. I love that name, and I think the wolves of Wall Street find it euphonious as well. Here’s the title: “Delta Is Fourth Major U.S. Airline to Find Fake Jet Aircraft Engine Parts with Forged Airworthiness Documents from U.K. Company.”

    The write up states:

Delta Air Lines Inc. has discovered unapproved components in “a small number” of its jet aircraft engines, becoming the latest carrier and fourth major US airline to disclose the use of fake parts. The suspect components — which Delta declined to identify — were found on an unspecified number of its engines, a company spokesman said Monday. Those engines account for less than 1% of the more than 2,100 power plants on its mainline fleet, the spokesman said.

    Okay, bad parts can fail. If the failure is in a critical component of a jet engine, the aircraft could — note that I am using the word could — experience a catastrophic failure. Translating catastrophic into more colloquial lingo, the sentence means catch fire and crash or something slightly less terrible; namely, catch fire, explode, eject metal shards into the tail assembly, or make a loud noise and emit smoke. Exciting, just not terminal.

    I don’t want to get into how the synthetic or fake data made its way through the UK company, the UK bureaucracy, the Delta procurement process, and into the hands of the mechanics working in the US or offshore. The fake data did elude scrutiny for some reason. With money being of paramount importance, my hunch is that saving some money played a role.
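What would catching such a forgery look like in software? Here is a minimal sketch of one kind of provenance check: each certificate record carries a signature computed with a key the issuer controls, and any doctored record fails verification. The key, record fields, and workflow are invented; real airworthiness paperwork control involves far more than a hash.

```python
# Minimal, hypothetical provenance check: verify a parts certificate
# against an issuer signature. Key and fields are invented examples.
import hashlib
import hmac

ISSUER_KEY = b"issuer-secret-key"  # hypothetical verification key

def sign(record: str) -> str:
    return hmac.new(ISSUER_KEY, record.encode(), hashlib.sha256).hexdigest()

def verify(record: str, signature: str) -> bool:
    return hmac.compare_digest(sign(record), signature)

cert = "part=turbine-blade;serial=TB-0042;status=airworthy"
good_sig = sign(cert)
forged = cert.replace("TB-0042", "TB-9999")  # doctored document

print(verify(cert, good_sig))    # True: record matches its signature
print(verify(forged, good_sig))  # False: tampering detected
```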

If organizations cannot spot fake data when it relates to a physical and mission-critical component, how will organizations deal with fake data generated by smart software? The smart software can get it wrong because an engineer-programmer screwed up his or her math, or because the complex web of algorithms just generates unanticipated behaviors from dependencies no one knew to check and validate.

What happens when a computer, which many people assume is “always” more right than a human, says, “Here’s the answer”? Many humans will skip the hard work because they are in a hurry, have no appetite for grunt work, or are scheduled by a Microsoft calendar to do something else when the quality assurance testing is supposed to take place.

    Let’s go back to the question in the title of the blog post, “Can You Identify Synthetic Data?”

    I don’t want to forget this part of the title, “Better learn.”

    JPMC paid out more than $100 million in November 2022 because some of the smartest guys in the room weren’t that smart. But get this. JPMC is a big, rich bank. People who could die because of synthetic data are a different kettle of fish. Yeah, that’s what I thought about as I flew Delta back to the US from Paris. At the time, I thought Delta had not fallen prey to the scam.

    I was wrong. Hence, I “better learn” myself.

    Stephen E Arnold, October 5, 2023
