Judgment Before? No. Backing Off After? Yes.

August 5, 2024

I wanted to capture two moves from two technology giants. The first item is the report that Google pulled the oh-so-Googley ad about a father using Gemini to write a personal note to his daughter. If you are not familiar with the burst of creative marketing, you can glean a few details from “Google Pulls Gemini AI Ad from Olympics after Backlash.” The second item is the report from Bloomberg, “Apple Pulls Commercial After Thai Backlash, Calls for Boycott.”

I reacted to these two separate announcements by thinking about what these do-it-then-reverse-it decisions suggest about the management controls at two technology giants.

Some management processes operated to think up the ad ideas. Then the project had to be given the green light by “leadership” at the two outfits. Next, third-party providers had to be enlisted to do some of the “knowledge work.” I assume there were then meetings to review the “creative.” Finally, one ad from several candidates was selected by each firm. The money was paid. And then the ads appeared. That’s a lot of steps and probably more than two or three people working in a cube next to a Foosball table.

Plus, the about-faces by the two companies did not take much time. Google caved after a few days. Apple also hopped on its harvester and chopped the Thai advertisement quickly as well. Decisiveness. Actually, decisiveness after the fact.

Why not use less obvious processes like exercising better judgment before releasing the advertisements? Why not focus on working with people who are more in tune with audience reactions than with being clever, smooth-talking, and desperate-eager for big company money?

Several observations:

  • Might I hypothesize that both companies lack a fabric of common sense?
  • If online ads “work,” why use what I would call old-school advertising methods? Perhaps the online angle is not correct for such important messaging from two companies that seem to do whatever they want most of the time?
  • The consequences of these do-then-undo actions are likely to be close to zero. Is that what operating in a no-consequences environment fosters?

I wonder if the back-away mentality is now standard operating procedure. We have Intel and Nvidia with some back-away actions. We have a nation state agreeing to a plea bargain and then un-agreeing the next day. We have a net neutrality rule, then we don’t, then we do, and now we don’t. Now that I think about it, perhaps because there are no significant consequences, decision quality has taken a nose dive?

Some believe that great complexity sets the stage for bad decisions which regress to worse decisions.

Stephen E Arnold, August 5, 2024

Fancy Cyber Methods Are Useless Against Insider Threats

August 2, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

In my lectures to law enforcement and intelligence professionals, I end the talks with one statement: “Do not assume. Do not reduce costs by firing experienced professionals. Do not ignore human analyses of available information. Do not take short cuts.” Cyber security companies are often like the mythical kids of the village shoemaker. Those who can afford to hire the shoemaker have nifty kicks and slides. Those without resources have almost useless footwear.

Companies in the security business often have an exceptionally high opinion of their capabilities and expertise. I think of this as the Google Syndrome or what some have called by less salubrious names. The idea is that one is just so smart, nothing bad can happen here. Yeah, right.


An executive answers questions about a slight security misstep. Thanks, Microsoft Copilot. You have been there and done that, I assume.

I read “North Korean Hacker Got Hired by US Security Vendor, Immediately Loaded Malware.” The article is a reminder that outfits in the OSINT, investigative, and intelligence business can make incredibly interesting decisions. Some of these lead to quite significant consequences. This particular case example illustrates how a hiring process using humans who are really smart and dedicated can be fooled, duped, and bamboozled.

The write up explains:

KnowBe4, a US-based security vendor, revealed that it unwittingly hired a North Korean hacker who attempted to load malware into the company’s network. KnowBe4 CEO and founder Stu Sjouwerman described the incident in a blog post yesterday, calling it a cautionary tale that was fortunately detected before causing any major problems.

I am a dinobaby, and I translated the passage to mean: “We hired a bad actor but, by the grace of the Big Guy, we avoided disaster.”

Sure, sure, you did.

I would suggest you know you trapped one instance of the person’s behavior. You may not know, and may never know, what that individual told a colleague in North Korea or another country, or what the bad actor said or emailed from a coffee shop using a contact’s computer. You may never know what business processes the person absorbed, converted to an encrypted message, and forwarded via a burner phone to a pal in a nation-state whose interests are not aligned with America’s.

In short, the cyber security company dropped the ball. It need not feel too bad. One of the companies I worked for early in my 60-year working career hired a person who dumped top secrets into journalists’ laps. Last week a person I knew was complaining about Delta Airlines, which was shown to be quite addled in the wake of the CrowdStrike misstep.

What’s the fix? Go back to how I end my lectures. Those in the cyber security business need to be extra vigilant. The idea that “we are so smart, we have the answer” is an example of a mental short cut. The fact is that the company KnowBe4 did not have the answer. It is lucky it KnewAtAll. Some tips:

  1. Seek and hire vetted experts
  2. Question procedures and processes in “before action” and “after action” incidents
  3. Do not rely on assumptions
  4. Do not believe the outputs of smart software systems
  5. Invest in security instead of fancy automobiles and vacations.

Do these suggestions run counter to your business goals and your image of yourself? Too bad. Life is tough. Cyber crime is the growth business. Step up.

Stephen E Arnold, August 2, 2024

A Reliability Test for General-Purpose AI

August 1, 2024

A team of researchers has developed a valuable technique: “How to Assess a General-Purpose AI Model’s Reliability Before It’s Deployed.” The ScienceDaily article begins by defining foundation models—the huge, generalized deep-learning models that underpin generative AI like ChatGPT and DALL-E. We are reminded these tools often make mistakes, and that sometimes these mistakes can have serious consequences. (Think self-driving cars.) We learn:

“To help prevent such mistakes, researchers from MIT and the MIT-IBM Watson AI Lab developed a technique to estimate the reliability of foundation models before they are deployed to a specific task. They do this by considering a set of foundation models that are slightly different from one another. Then they use their algorithm to assess the consistency of the representations each model learns about the same test data point. If the representations are consistent, it means the model is reliable. When they compared their technique to state-of-the-art baseline methods, it was better at capturing the reliability of foundation models on a variety of downstream classification tasks. Someone could use this technique to decide if a model should be applied in a certain setting, without the need to test it on a real-world dataset. This could be especially useful when datasets may not be accessible due to privacy concerns, like in health care settings. In addition, the technique could be used to rank models based on reliability scores, enabling a user to select the best one for their task.”

Great! See the write-up for the technical details behind the technique. This breakthrough can help companies avoid mistakes before they launch their products. That is, if they elect to use it. Will organizations looking to use AI for cost cutting go through these processes? Sadly, we suspect that, if costs go down and lawsuits are few and far between, the AI is deemed good enough. But thanks for the suggestion, MIT.
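
For readers who want to picture the ensemble idea, here is a minimal, hypothetical Python sketch. It is not the MIT team’s actual algorithm; it compares nearest-neighbor sets rather than raw vectors, since two different models embed data into spaces that cannot be compared coordinate by coordinate. The `models` callables and the random-projection demo are assumptions for illustration only.

```python
import numpy as np

def neighborhood_consistency(models, test_points, k=5):
    """Toy ensemble-consistency check: for each test point, find its k
    nearest neighbors in every model's representation space, then measure
    how much those neighbor sets agree across models. Returns a score in
    [0, 1]; higher means the models "see" the test data the same way."""
    neighbor_sets = []
    for model in models:
        reps = np.asarray(model(test_points))        # (n_points, dim)
        diffs = reps[:, None, :] - reps[None, :, :]
        dists = np.sum(diffs ** 2, axis=-1)          # pairwise squared distances
        np.fill_diagonal(dists, np.inf)              # exclude each point itself
        nearest = np.argsort(dists, axis=1)[:, :k]   # k nearest per point
        neighbor_sets.append([set(row) for row in nearest])

    overlaps = []
    for i in range(len(neighbor_sets)):
        for j in range(i + 1, len(neighbor_sets)):
            for a, b in zip(neighbor_sets[i], neighbor_sets[j]):
                overlaps.append(len(a & b) / k)
    return float(np.mean(overlaps))

# Demo with stand-in "foundation models": random linear encoders.
rng = np.random.default_rng(0)
data = rng.normal(size=(40, 16))
models = [(lambda x, W=rng.normal(size=(16, 8)): x @ W) for _ in range(3)]
print(neighborhood_consistency(models, data, k=5))
```

A high score would suggest the slightly different models agree on the test data; the paper itself describes how to turn such agreement into per-model reliability rankings without touching a real-world dataset.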

Cynthia Murrell, August 1, 2024

Crowd What? Strike Who?

July 24, 2024

This essay is the work of a dumb dinobaby. No smart software required.

How are those Delta cancellations going? Yeah, summer, families, harried business executives, and lots of hand waving. I read a semi-essay about the minor update misstep which caused blue to become a color associated with failure. I love the quirky sad face and the explanations from the assorted executives, experts, and poohbahs about how so many systems could fail in such a short time on a global scale.

In “Surely Microsoft Isn’t Blaming EU for Its Problems?” I noted six reasons the CrowdStrike issue became news instead of a system administrator annoyance. In a nutshell, the reasons identified harken back to Microsoft’s decision to use an “open design.” I like the phrase because it beckons a wide range of people to dig into the plumbing. Microsoft also allegedly wants to support its customers with older computers. I am not sure older anything is supported by anyone. As a dinobaby, I have first-hand experience with this “we care about legacy stuff.” Baloney. The essay mentions “kernel-level access.” How’s that working out? Based on CrowdStrike’s remarkable ability to generate PR from exceptions which appear to have allowed the super special security software to do its thing, that access sure does deliver. (Why does the nationality of CrowdStrike’s founder not get mentioned? Dmitri Alperovitch, a Russian who became a US citizen, and a couple of other people set up the firm in 2012. Is there any possibility that the incident was a test play or part of a Russian long game?)


Satan congratulates one of his technical professionals for an update well done. Thanks, MSFT Copilot. How’re things today? Oh, that’s too bad.

The essay mentions that the world today is complex. Yeah, complexity goes with nifty technology, and everyone loves complexity when it becomes like an appliance until it doesn’t work. Then fixes are difficult because few know what went wrong. The article tosses in a reference to Microsoft’s “market size.” But centralization is what an appliance does, right? Who wants a tube radio when the radio can be software defined and embedded in another gizmo like those FM radios in some mobile devices? Who knew? And then there is a reference to “security.” We are presented with a tidy list.

The one hitch in the git along is that the issue emerges from a business culture which has zero to do with technology. The objective of a commercial enterprise is to generate profits. Companies generate profits by selling high, subtracting costs, and keeping the rest for themselves and stakeholders.

Hiring and training professionals to do jobs like quality checks, environmental impact statements, and ensuring ethical business behavior in work processes is overhead. One can hire a blue chip consulting firm and spark an opioid crisis or deprecate funding for pre-release checks and quality assurance work.

Engineering excellence takes time and money. What’s valued is maximizing the payoff. The other baloney is marketing and PR to keep regulators, competitors, and lawyers away.

The write up encapsulates the reason that change will be difficult and probably impossible for a company, whether in the US or Ukraine, to deliver what the customer expects. Regulators have failed to protect citizens from the behaviors of commercial enterprises. The customers assume that a big company cares about excellence.

I am not pessimistic. I have simply learned to survive in what is a quite error-prone environment. Pundits call the world fragile or brittle. Those words are okay. The more accurate term is reality. Get used to it and knock off the jargon about failure, corner cutting, and profit maximization. The reality is that Delta, blue screens, and yip yap about software chock full of issues define the world.

Fancy talk, lists, and entitled assurances won’t do the job. Reality is here. Accept it and blame.

Stephen E Arnold, July 24, 2024

A Windows Expert Realizes Suddenly Last Outage Is a Rerun

July 22, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I love poohbahs. One quite interesting online outlet I consult occasionally continues to be quite enthusiastic for all things Microsoft. I spotted a write up about the CrowdStrike matter and its unfortunate downstream consequences for a handful of really tolerant people using its cyber security software. The absolute gem of a write up which arrested my attention was “As the World Suffers a Global IT Apocalypse, What’s More Worrying is How Easy It Is for This to Happen.” The article discloses a certain blind spot among a few Windows cheerleaders. (I thought the Apple fan core was the top of the marketing mountain. I was wrong again, a common problem for a dinobaby like me.)


Is the blue screen plague like the sinking of the Swedish flagship Vasa? Thanks, OpenAI. Good enough.

The subtitle is even more striking. Here it is:

Nefarious actors might not be to blame this time, but it should serve as a warning to us all how fragile our technology is.

Who knew? Perhaps those affected by the flood of notable cyber breaches. Norton Hospital, SolarWinds, the US government, et al. are examples which come to mind.

To what does the word “nefarious” refer? Perhaps it is one of those massive, evil, 24×7 gangs of cyber thugs which work to find the very, very few flaws in Microsoft software? Could it be cyber security professionals who think about security only when something bad — note this — like a global outage occurs and the flaws in their procedures or code allow people to spend the night in airports or have their surgeries postponed?

The article states:

When you purchase through links on our site, we may earn an affiliate commission. Here’s how it works.

I find it interesting that the money-raising information appears before the stunning insights in the article.

The article reveals this pivotal item of information:

It’s an unprecedented situation around the globe, with banks, healthcare, airlines, TV stations, all affected by it. While Crowdstrike has confirmed this isn’t the result of any type of hack, it’s still incredibly alarming. One piece of software has crippled large parts of industry all across the planet. That’s what worries me.

Ah, a useful moment of recognizing the real world. Quite a leap for those who find certain companies a source of calm and professionalism. I am definitely glad Windows Central, the publisher of this essay, is worried about concentration of technology and the downstream dependencies. Apparently one worries only when a cyber problem takes down banks, emergency call services, and other technologically dependent outfits.

But here’s the moment of insight for the Windows Central outfit. I can hear “Eureka!” echoing in the workspace of this intrepid collection of poohbahs:

This time we’re ‘lucky’ in the sense it wasn’t some bad actors causing deliberate chaos.

Then the write up offers this stunning insight after decades of “good enough” software:

This stuff is all too easy. Bad actors can target a single source and cripple millions of computers, many of which are essential.

Holy Toledo. I am stunned with the brilliance of the observations in the article. I do have several thoughts from my humble office in rural Kentucky:

  1. A Windows cheerleading outfit is sort of admitting that technology concentration where “good enough” is excellence creates a global risk. I mean who knew? The Apple and Linux systems running CrowdStrike’s estimable software were not affected. Is this a Windows thing, this global collapse?
  2. Awareness of prior security and programming flaws simply did not exist for the author of the essay. I can understand why Windows Central found the Windows folding phone and the first generation of Windows on Arm PCs absolutely outstanding.
  3. Computer science students in a number of countries learn online and at school how to look for similar configuration vulnerabilities in software and exploit them. The objective is to steal, cripple, or start up a cyber security company and make oodles of money. Incidents like this global outage are a road map for some folks, good and not so good.

My take away from this write up is that those who only worry when a global problem arises from what seems to be US-managed technology have not been paying attention. Online security is the big 17th century Swedish flagship Vasa (Wasa). Upon launch, the marine architect and assorted influential government types watched that puppy sink.

But the problem with the most recent and quite spectacular cyber security goof is that it happened to Microsoft and not to Apple or Linux systems. Perhaps there is a lesson in this fascinating example of modern cyber practices?

Stephen E Arnold, July 22, 2024

Stop Indexing! And Pay Up!

July 17, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I read “Apple, Nvidia, Anthropic Used Thousands of Swiped YouTube Videos to Train AI.” The write up appears in two online publications, presumably to make an already contentious subject more clicky. The assertion in the title is the equivalent of someone in Salem, Massachusetts, pointing at a widow and saying, “She’s a witch.” Those willing to take the statement at face value would take action. The “trials” were held in colonial Massachusetts. My high school history teacher was a witchcraft trial buff. (I think his name was Elmer Skaggs.) I thought about his descriptions of the events. I remember his graphic depictions and analysis of what I recall as “dunking.” The idea was that if a person was a witch, then that person could be immersed one or more times. I think the idea had been popular in medieval Europe, but it was not a New World innovation. Me-too is a core way to create novelty. The witch could survive being immersed for a period of time. With proof, hanging or burning were the next step. The accused who died was obviously not a witch. That’s Boolean logic in a pure form in my opinion.


The Library of Alexandria burns in front of people who wanted to look up information, learn, and create more information. Tough. Once the cultural institution is gone, just figure out the square root of two yourself. Thanks, MSFT Copilot. Good enough.

The accusations and evidence in the article depict companies building large language models as candidates for a test to prove that they have engaged in an improper act. The crime is processing content available on a public network, indexing it, and using the data to create outputs. Since the late 1960s, digitizing information and making it more easily accessible was perceived as an important and necessary activity. The US government supported indexing and searching of technical information. Other fields of endeavor recognized that as the volume of information expanded, the traditional methods of sitting at a table, reading a book or journal article, making notes, analyzing the information, and then conducting additional research or writing a technical report was simply not fast enough. What worked in a medieval library was not a method suited to put a satellite in orbit or perform other knowledge-value tasks.

Thus, online became a thing. Remember, we are talking punched cards, mainframes, and clunky line printers; then one day there was the Internet. The interest in broader access to online information grew, and by 1985, people recognized that online access was useful for many tasks, not just looking up information about nuclear power technologies, a project I worked on in the 1970s. Flash forward 50 years, and we are upon the moment one can read about the “fact” that Apple, Nvidia, Anthropic Used Thousands of Swiped YouTube Videos to Train AI.

The write up says:

AI companies are generally secretive about their sources of training data, but an investigation by Proof News found some of the wealthiest AI companies in the world have used material from thousands of YouTube videos to train AI. Companies did so despite YouTube’s rules against harvesting materials from the platform without permission. Our investigation found that subtitles from 173,536 YouTube videos, siphoned from more than 48,000 channels, were used by Silicon Valley heavyweights, including Anthropic, Nvidia, Apple, and Salesforce.

I understand the surprise some experience when they learn that a software script visits a Web site, processes its content, and generates an index. (A buzzy term today is large language model, but I prefer the simpler word index.)

I want to point out that for decades those engaged in making information findable and accessible online have processed content so that a user can enter a query and get a list of indexed items which match that user’s query. In the old days, one used Boolean logic which we met a few moments ago. Today a user’s query (the jazzy term is prompt now) is expanded, interpreted, matched to the user’s “preferences”, and a result generated. I like lists of items like the entries I used to make on a notecard when I was a high school debate team member. Others want little essays suitable for a class assignment on the Salem witchcraft trials in Mr. Skaggs’s class. Today another system can pass a query, get outputs, and then take another action. This is described by the in-crowd as workflow orchestration. Others call it, “taking a human’s job.”

My point is that for decades, the index and searching process has been without much innovation. Sure, software scripts can know when to enter a user name and password or capture information from Web pages that are transitory, disappearing in the blink of an eye. But it is still indexing over a network. The object remains to find information of utility to the user or another system.
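
For readers who have never seen the plumbing, the core mechanism has looked roughly like this for decades. Here is a minimal, hypothetical Python sketch of an inverted index with the Boolean AND matching I mentioned; the three-document corpus is invented for illustration.

```python
from collections import defaultdict

def build_index(documents):
    """Inverted index: each term maps to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def boolean_and(index, *terms):
    """Classic Boolean AND retrieval: documents containing every query term."""
    postings = [index.get(term.lower(), set()) for term in terms]
    return set.intersection(*postings) if postings else set()

# Invented toy corpus
docs = {
    1: "witch trials in colonial Salem",
    2: "indexing content on a public network",
    3: "Boolean logic and the Salem witch trials",
}
index = build_index(docs)
print(boolean_and(index, "witch", "trials"))  # -> {1, 3}
```

Everything since, from stemming and query expansion to embeddings and prompts, layers refinement on top of that term-to-document mapping; the object, as noted above, remains matching a query to indexed items.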

The write up reports:

Proof News contributor Alex Reisner obtained a copy of Books3, another Pile dataset and last year published a piece in The Atlantic reporting his finding that more than 180,000 books, including those written by Margaret Atwood, Michael Pollan, and Zadie Smith, had been lifted. Many authors have since sued AI companies for the unauthorized use of their work and alleged copyright violations. Similar cases have since snowballed, and the platform hosting Books3 has taken it down. In response to the suits, defendants such as Meta, OpenAI, and Bloomberg have argued their actions constitute fair use. A case against EleutherAI, which originally scraped the books and made them public, was voluntarily dismissed by the plaintiffs. Litigation in remaining cases remains in the early stages, leaving the questions surrounding permission and payment unresolved. The Pile has since been removed from its official download site, but it’s still available on file sharing services.

The passage does a good job of making clear that most people are not aware of what indexing does, how it works, and why the process has become a fundamental component of many, many modern knowledge-centric systems. The idea is to find information of value to a person with a question, present relevant content, and enable the user to think new thoughts or write another essay about dead witches being innocent.

The challenge today is that anyone who has written anything wants money. The way online works is that for any single user’s query, the useful information constitutes a tiny, minuscule fraction of the information in the index. The cost of indexing and responding to the query is high, and those costs are difficult to control.

But everyone has to be paid for the information that individual “created.” I understand the idea, but the reality is that the reason indexing, search, and retrieval was invented, refined, and given numerous life extensions was to perform a core function: Answer a question or enable learning.

The write up makes it clear that “AI companies” are witches. The US legal system is going to determine who is a witch just like the process in colonial Salem. Several observations are warranted:

  1. Modifying what is a fundamental mechanism for information retrieval may be difficult to replace or re-invent in a quick, cost-efficient, and satisfactory manner. Digital information is loosey goosey; that is, it moves, slips, and slides either by an individual’s actions or a mindless system’s.
  2. Slapping fines and big price tags on what remains an access service will take time to have an impact. As the implications of the impact become better known to those who are aggrieved, they may find that their own information is altered in a fundamental way. How many research papers are “original”? How many journalists recycle as a basic work task? How many children’s lives are lost when the medical reference system does not have the data needed to treat the kid’s problem?
  3. Accusing companies of behaving improperly is definitely easy to do. Many companies do ignore rules, regulations, and cultural norms. Engineering Index’s publisher learned that bootleg copies of printed Compendex indexes were available in China. What was Engineering Index going to do when I learned this almost 50 years ago? The answer was give speeches, complain to those who knew what the heck a Compendex was, and talk to lawyers. What happened to the Chinese content pirates? Not much.

I do understand the anger the essay expresses toward large companies doing indexing. These outfits are, to some, witches. However, if the indexing of content is derailed, I would suggest there are downstream consequences. Some of those consequences will make zero difference to anyone. A government worker at a national lab won’t be able to find details of an alloy used in a nuclear device. Who cares? Make some phone calls? Ask around. Yeah, that will work until the information is needed immediately.

A student accustomed to looking up information on a mobile phone won’t be able to find something. The document is a 404 or the information returned is an ad for a Temu product. So what? The kid will have to go to the library, which one hopes will be funded, have printed material or commercial online databases, and a librarian on duty. (Good luck, traditional researchers.) A marketing team eager to get information about the number of Telegram users in Ukraine won’t be able to find it. The fix is to hire a consultant and hope those bright men and women have a way to get a number, a single number, good, bad, or indifferent.

My concern is that as the intensity of the objections about a standard procedure for building an index escalates, the entire knowledge environment is put at risk. I have worked in online since 1962. That’s a long time. It is amazing to me that the plumbing of an information economy has been ignored for a long time. What happens when the companies doing the indexing go away? What happens when those producing the government reports, the blog posts, or the “real” news cannot find the information needed to create information? And once some information is created, how is another person going to find it? Ask an eighth grader how to use an online catalog to find a fungible book. Let me know what you learn. Better yet, do you know how to use a Remac card retrieval system?

The present concern about information access troubles me. There are mechanisms to deal with online. But the reason content is digitized is to find it, to enable understanding, and to create new information. Digital information is like gerbils. Start with a couple of journal articles, and one ends up with more journal articles. Kill this access and you get what you wanted. You know exactly who is the Salem witch.

Stephen E Arnold, July 17, 2024


AI: Helps an Individual, Harms Committee Thinking Which Is Often Sketchy at Best

July 16, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I spotted an academic journal article type write up called “Generative AI Enhances Individual Creativity But Reduces the Collective Diversity of Novel Content.” I would give the paper a C, an average grade. The most interesting point in the write up is that when one person uses smart software like a ChatGPT-type service, the output can make that person seem, to a third party, smarter, more creative, and more insightful than a person slumped over a wine bottle outside a drug dealer’s digs.

The main point, which I found interesting, is that a group using ChatGPT drops down into my IQ range, which is “Dumb Turtle.” I think this is potentially significant. I use the word “potential” because the study relied upon human “evaluators” and imprecise subjective criteria; for instance, novelty and emotional characteristics. This means that if the evaluators are teachers or people who critique writing for a living, these folks bring baked-in biases and preconceptions to the judgments. I know first hand because one of my pieces of writing was published in the St. Louis Post Dispatch at the same time my high school English teacher clapped a C on it for narrative value and a D for language choice. She was not a fan of my phrase “burger boat drive in.” Anyway, I got paid $18 for the write up.

Let’s pick up this “finding” that a group degenerates or converges on mediocrity. (Remember, please, that a camel is a horse designed by a committee.) Here’s how the researchers express this idea:

While these results point to an increase in individual creativity, there is risk of losing collective novelty. In general equilibrium, an interesting question is whether the stories enhanced and inspired by AI will be able to create sufficient variation in the outputs they lead to. Specifically, if the publishing (and self-publishing) industry were to embrace more generative AI-inspired stories, our findings suggest that the produced stories would become less unique in aggregate and more similar to each other. This downward spiral shows parallels to an emerging social dilemma (42): If individual writers find out that their generative AI-inspired writing is evaluated as more creative, they have an incentive to use generative AI more in the future, but by doing so, the collective novelty of stories may be reduced further. In short, our results suggest that despite the enhancement effect that generative AI had on individual creativity, there may be a cautionary note if generative AI were adopted more widely for creative tasks.

I am familiar with the stellar outputs of committees. Some groups deliver zero and often retrograde outputs; that is, the committee makes a situation worse. I am thinking of the home owners’ association about a mile from my office. One aggrieved home owner attended a board meeting and shot one of the elected officials. Exciting, plus the scene of the murder was a church conference room. Driveways can be hot topics when the group decides to change rules which affected this fellow’s own driveway.

Sometimes committees come up with good ideas; for example, a committee at one government agency, where I was serving as the IV&V (independent verification and validation) professional, decided to disband because there was a tiny bit of hanky panky in the procurement process. That was a good idea.

Other committee outputs are worthless; for example, the transcripts of the questions from elected officials directed to high-technology executives. I won’t name any committees of this type because I worked for a congress person, and I observe the unofficial rule: Button up, buttercup.

Let me offer several observations about smart software producing outputs that point to dumb turtle mode:

  1. Services firms (lawyers and blue chip consultants) will produce less useful information relying on smart software than what crazed Type A achievers produce. Yes, I know that one major blue chip consulting firm helped engineer the excitement one can see in certain towns in West Virginia, but imagine even more negative downstream effects. Wow!
  2. Dumb committees relying on AI will be among the first to suggest, “Let AI set the agenda.” And, “Let AI provide the list of options.” Great idea and one that might be more exciting than an aircraft door exiting the airplane frame at 15,000 feet.
  3. The bean counters in the organization will look at the efficiency of using AI for committee work and probably suggest, “Let’s eliminate the staff who spend more than 85 percent of their time in committee meetings.” That will save money and produce some interesting downstream consequences. (I once had a job which was to attend committee meetings.)

Net net: AI will help some; AI will produce surprises which cannot be easily anticipated, it seems.

Stephen E Arnold, July 16, 2024

AI and Electricity: Cost and Saving Whales

July 15, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Grumbling about the payoff from those billions of dollars injected into smart software continues. The most recent angle is electricity. AI is a power sucker, a big-time energy glutton. I learned this when I read the slightly alarmist write up “Artificial Intelligence Needs So Much Power It’s Destroying the Electrical Grid.” Texas, not a hotbed of AI excitement, seems to be doing quite well with the power grid problem without much help from AI. Mother Nature has made vivid the weaknesses of the infrastructure in that great state.


Some dolphins may love the power plant cooling effluent (runoff). Other animals, not so much. Thanks, MSFT Copilot. Working on security this week?

But let’s get back to saving whales and the piggishness of those with many GPUs processing data to help out the eighth-graders with their 200-word essays.

The write up says:

As a recent report from the Electric Power Research Institute lays out, just 15 states contain 80% of the data centers in the U.S. Some states – such as Virginia, home to Data Center Alley – astonishingly have over 25% of their electricity consumed by data centers. There are similar trends of clustered data center growth in other parts of the world. For example, Ireland has become a data center nation.

So what?

The article says that it takes just two years to spin up a smart software data center, but it takes four years to enhance an electrical grid. Based on my experience at a unit of Halliburton specializing in nuclear power, the four year number seems a bit optimistic. One doesn’t flip a switch and turn on Three Mile Island. One does not pick a nice spot near a river and start building a nuclear power reactor. Despite the recent Supreme Court ruling calling into question what certain frisky Executive Branch agencies can require, home owners’ associations and medical groups can make life interesting. Plus building out energy infrastructure is expensive and takes time. How long does it take for several feet of specialized concrete to set? Longer than pouring some hardware store quick fix into a hole in your driveway?

The article says:

There are several ways the industry is addressing this energy crisis. First, computing hardware has gotten substantially more energy efficient over the years in terms of the operations executed per watt consumed. Data centers’ power use efficiency, a metric that shows the ratio of power consumed for computing versus for cooling and other infrastructure, has been reduced to 1.5 on average, and even to an impressive 1.2 in advanced facilities. New data centers have more efficient cooling by using water cooling and external cool air when it’s available. Unfortunately, efficiency alone is not going to solve the sustainability problem. In fact, Jevons paradox points to how efficiency may result in an increase of energy consumption in the longer run. In addition, hardware efficiency gains have slowed down substantially as the industry has hit the limits of chip technology scaling.
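
For readers unfamiliar with the metric quoted above, power usage effectiveness is a simple ratio, and the arithmetic is easy to sanity-check. A minimal sketch; the megawatt figures are invented for illustration.

```python
def power_usage_effectiveness(total_facility_mw, it_equipment_mw):
    """PUE = total facility power / power delivered to IT equipment.
    A value of 1.0 would mean every watt goes to computing; the excess
    is cooling, power conversion, lighting, and other overhead."""
    return total_facility_mw / it_equipment_mw

# A hypothetical data center drawing 18 MW overall, 12 MW of it for servers
print(power_usage_effectiveness(18.0, 12.0))   # -> 1.5, the quoted average
print(power_usage_effectiveness(14.4, 12.0))   # -> 1.2, the "advanced" figure
```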

Okay, let’s put aside the grid and the dolphins for a moment.

AI has and will continue to have downstream consequences. Although the methods of smart software are “old” when measured in terms of Internet innovations, the knock-on effects are not known.

Several observations are warranted:

  1. Power consumption can be scheduled. The method worked to combat air pollution in Poland, and it will work for data centers. (Sure, the folks wanting computation will complain, but suck it up, buttercups. Plan and engineer for efficiency.)
  2. The electrical grid, like the other infrastructures in the US, needs investment. This is a job for private industry and the governmental authorities. Do some planning and deliver results, please.
  3. Those wanting to scare people will continue to exercise their First Amendment rights. Go for it. However, I would suggest putting observations in a more informed context may be helpful. But when six o’clock news weather people scare the heck out of fifth graders when a storm or snow approaches, is this an appropriate approach to factual information? Answer: Sure, when it gets clicks, eyeballs, and ad money.

Net net: No big changes for now are coming. I hope that the “deciders” get their Fiat 500 in gear.

Stephen E Arnold, July 15, 2024

Short Cuts? Nah, Just Business as Usual in the Big Apple Publishing World

June 28, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

One of my team alerted me to this Fortune Magazine story: “Telegram Has Become the Go-To App for Heroin, Guns, and Everything Illegal. Can Crypto Save It?” The author appears to be Niamh Rowe. I do not know this “real” journalist. The Fortune Magazine write up is interesting for several reasons. I want to share these because if I am correct in my hypotheses, the problems of big publishing extend beyond artificial intelligence.

First, I prepared a lecture about Telegram specifically for several law enforcement conferences this year. One of our research findings was that on a Clear Web site, accessible to anyone with an Internet connection and a browser, one could buy stolen bank cards. But these ready-to-use bank cards were just bait. The real play was the use of an encrypted messaging service to facilitate a switch to malware once the customer paid via crypto for a bundle of stolen credit and debit cards. The mechanism was not the Dark Web. The Dark Web is showing its age, despite the wild tales which appear in the online news services and semi-crazy videos on YouTube-type services. The new go-to vehicle is an encrypted messaging service. The information in the lecture was not intended to be disseminated outside of the law enforcement community.


A big-time “real” journalist explains his process to an old person who lives in the Golden Rest Old Age Home. The old-timer thinks the approach is just peachy-keen. Thanks, MSFT Copilot. Close enough, like most modern work.

Second, in my talk I used idiosyncratic lingo for one reason. The coinages and phrases allow my team to locate documents and the individuals who rip off my work without permission.

I have had experience with having my research pirated. I won’t name a major Big Apple consulting firm which used my profiles of search vendors as part of the firm’s training materials. Believe it or not, a senior consultant at this ethics-free firm told me that my work was used to train their new “experts.” Was I surprised? Nope. New York. Consultants. What did I expect? Integrity was not a word I used to describe this Big Apple publishing outfit then, and it sure isn’t today. The Fortune Magazine article uses my lingo, specifically “superapp,” and includes comments which struck my researcher as a coincidental channeling of my observations about an end-to-end encrypted service’s crypto play. Yep, coincidence. No problem. Big time publishing. Eighty-year-old person from Kentucky. Who cares? Obviously not the “real” news professional who is in telepathic communication with me and my study team. Oh, well, mind reading must exist, right?

Third, my team and I are working hard on a monograph about E2EE specifically for law enforcement. If my energy holds out, I will make the report available free to any member of a law enforcement cyber investigative team in the US as well as investigators at agencies in which I have some contacts; for example, the UK’s National Crime Agency, Europol, and Interpol.

I thought (silly me) that I was ahead of the curve as I was with some of my other research reports; for example, in 1995 my publisher released Internet 2000: The Path to the Total Network; then in 2004, my publisher issued The Google Legacy; and in 2006 a different outfit sold out of my Enterprise Search Report. Will I be ahead of the curve with my E2EE monograph? Probably not. Telepathy, I guess.

But my plan is to finish the monograph and get it in the hands of cyber investigators. I will continue to be on watch for documents which recycle my words, phrases, and content. I am not a person who writes for a living. I write to share my research team’s findings with the men and women who work hard to make it safe to live and work in the US and other countries allied with America. I do not chase clicks like those who must beg for dollars, appeal to advertisers, and provide links to Patreon-type services.

I have never been interested in having a “fortune,” and I learned after working with a very entitled, horse-farm-owning Fortune Magazine writer that I had zero in common with him, his beliefs, and, by logical reasoning, the culture of Fortune Magazine.

My hunch is that absolutely no one will remember where the information in the cited write up with my lingo originated. My son, who owns the DC-based GovWizely.com consulting firm, opined, “I think the story was written by AI.” Maybe I should use that AI and save myself money, time, and effort?

To be frank, I laughed at the spin on the Fortune Magazine story’s interpretation of superapp. Not only does the write up misrepresent what crypto means to Telegram, but the superapp assertion is also not documented with fungible evidence about how the mechanics of Telegram-anchored crime can work.

Net net: I am 80. I sort of care. But come on, young wizards. Up your game. At least, get stuff right, please.

Stephen E Arnold, June 28, 2024

Thomson Reuters: A Trust Report about Trust from an Outfit with Trust Principles

June 21, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Thomson Reuters is into trust. The company has a Web page called “Trust Principles.” Here’s a snippet:

The Trust Principles were created in 1941, in the midst of World War II, in agreement with The Newspaper Proprietors Association Limited and The Press Association Limited (being the Reuters shareholders at that time). The Trust Principles imposed obligations on Reuters and its employees to act at all times with integrity, independence, and freedom from bias. Reuters Directors and shareholders were determined to protect and preserve the Trust Principles when Reuters became a publicly traded company on the London Stock Exchange and Nasdaq. A unique structure was put in place to achieve this. A new company was formed and given the name ‘Reuters Founders Share Company Limited’, its purpose being to hold a ‘Founders Share’ in Reuters.

Trust nestles in some legalese and a bit of business history. The only reason I mention this anchoring in trust is that Thomson Reuters reported quarterly revenue of $1.88 billion in May 2024, up from $1.74 billion in May 2023. The financial crowd had expected $1.85 billion in the quarter, and Thomson Reuters beat that. Surplus funds make it possible to fund many important tasks; for example, a study of trust.


The ouroboros, according to some big thinkers, symbolizes the entity’s journey and the unity of all things; for example, defining trust, studying trust, and writing about trust as embodied in the symbol.

My conclusion is that trust as a marketing and business principle seems to be good for business. Therefore, I trust, and I am confident in, the information in “Global Audiences Suspicious of AI-Powered Newsrooms, Report Finds.” The subject of the trusted news story is a report from the Reuters Institute for the Study of Journalism. The Thomson Reuters reporter presents in a trusted way this statement:

According to the survey, 52% of U.S. respondents and 63% of UK respondents said they would be uncomfortable with news produced mostly with AI. The report surveyed 2,000 people in each country, noting that respondents were more comfortable with behind-the-scenes uses of AI to make journalists’ work more efficient.

To make the point, a person working on the trusted outfit’s trusted report says, in what strikes me as a trustworthy way:

“It was surprising to see the level of suspicion,” said Nic Newman, senior research associate at the Reuters Institute and lead author of the Digital News Report. “People broadly had fears about what might happen to content reliability and trust.”

In case you have lost the thread, let me summarize. The trusted outfit Thomson Reuters funded a study about trust. The research was conducted by the trusted outfit’s own Reuters Institute for the Study of Journalism. The conclusion of the report, as presented by the trusted outfit, is that people want news they can trust. I think I have covered the post card with enough trust stickers.

I know I can trust the information. Here’s a factoid from the “real” news report:

Vitus “V” Spehar, a TikTok creator with 3.1 million followers, was one news personality cited by some of the survey respondents. Spehar has become known for their unique style of delivering the top headlines of the day while laying on the floor under their desk, which they previously told Reuters is intended to offer a more gentle perspective on current events and contrast with a traditional news anchor who sits at a desk.

How can one not trust a report that includes a need met by a TikTok creator? Would a Thomson Reuters professional write a news story from under his or her desk or cube or home office kitchen table?

I think self-funded research finds that the funding entity’s approach to trust is exactly what those in search of “real” news need. Wikipedia includes some interesting information about Thomson Reuters in its discussion of the company in the section titled “Involvement in Surveillance.” Wikipedia alleges that Thomson Reuters licenses data to Palantir Technologies, an assertion which, if accurate, I find orthogonal to my interpretation of the word “trust.” But Wikipedia is not Thomson Reuters.

I will not ask questions about the methodology of the study. I trust the Thomson Reuters professionals. I will not ask questions about the link between revenue and digital information. I have the trust principles to assuage any doubt. I will not comment on the wonderful ouroboros-like quality of an enterprise embodying trust, funding a study of trust, and converting those data into a news story about itself. The symmetry is delicious and, of course, trustworthy. For information about Thomson Reuters’ trusted use of artificial intelligence, see this Web page.

Stephen E Arnold, June 21, 2024
