SolarWinds: Huffing and Puffing in a Hot Wind on a Sunny Day

November 16, 2023

This essay is the work of a dumb humanoid. No smart software required.

Remember the SolarWinds’ misstep? Time has a way of deleting memories of security kerfuffles. Who wants to recall ransomware, loss of data, and the general embarrassment of getting publicity for the failure of existing security systems? Not too many. A few victims let off steam by blaming their cyber vendors. Others — well, one — relieve their frustrations by emulating a crazed pit bull chasing an M1A2 battle tank. The pit bull learns that the M1A2 is not going to stop and wait for the pit bull to stop barking and snarling. The tank grinds forward, possibly over Solar (an unlikely name for a pit bull in my opinion).


The slick business professional speaks to a group of government workers gathered outside on the sidewalk of 100 F Street NW. The talker is semi-shouting, “Your agency is incompetent. You are unqualified. My company knows how to manage our business, security, and personnel affairs.” I am confident this positive talk will win the hearts and minds of the GS-13s listening. Thanks, Microsoft Bing. You obviously have some experience with government behaviors.

I read “SolarWinds Says SEC Sucks: Watchdog Lacks Competence to Regulate Cybersecurity.” The headline attributes the statement to a company. My hunch is that the criticism of the SEC likely comes from someone other than the firm’s legal counsel, the firm’s CFO, or its PR team.

The main idea, of course, is that SolarWinds should not be sued by the US Securities & Exchange Commission. The SEC does have special agents, but no criminal authority. However, like many US government agencies and their Offices of Inspector General, the investigators can make life interesting for those in whom the US government agency has an interest. (Here is an insider tip: Avoid getting crossways with a US government agency. The people may change but the “desks” persist through time along with documentation of actions. The business processes in the US government mean that people and organizations of interest can be subject to scrutiny. Like the poem says, “Time cannot wither nor custom spoil the investigators’ persistence.”)

The write up presents information obtained from a public blog post by the victim of a cyber incident. I call the incident a misstep because I am not sure how many organizations, software systems, people, and data elements were negatively whacked by the bad actors. In general, the idea is that a bad actor should not be able to compromise commercial outfits.

The write up reports:

SolarWinds has come out guns blazing to defend itself following the US Securities and Exchange Commission’s announcement that it will be suing both the IT software maker and its CISO over the 2020 SUNBURST cyberattack.

The vendor said the SEC’s lawsuit is "fundamentally flawed," both from a legal and factual perspective, and that it will be defending the charges "vigorously." A lengthy blog post, published on Wednesday, dissected some of the SEC’s allegations, which it evidently believes to be false. The first of which was that SolarWinds lacked adequate security controls before the SUNBURST attack took place.

The right to criticize is baked into the ethos of the US of A. The cited article includes this quote from the SolarWinds’ statement about the US Securities & Exchange Commission:

It later went on to accuse the regulator of overreaching and "twisting the facts" in a bid to expand its regulatory footprint, as well as claiming the body "lacks the authority or competence to regulate public companies’ cybersecurity." The SEC’s cybersecurity-related capabilities were again questioned when SolarWinds addressed the allegations that it didn’t follow the NIST Cybersecurity Framework (CSF) at the time of the attack.

SolarWinds feels strongly about the SEC and its expertise. I have several observations to offer:

  1. Annoying regulators and investigators is not perceived in some government agencies as a smooth move
  2. SolarWinds may find its strong words recast in the form of questions in the legal forum which appears to be roaring down the rails
  3. The SolarWinds’ cyber security professionals on staff and the cyber security vendors whose super duper bad actor stoppers did not stop the bad actors appear to have an opportunity to explain their view of what I call a “misstep.”

Do I have an opinion? Sure. You have read it in my blog posts or heard me say it in my law enforcement lectures, most recently at the Massachusetts / New York Association of Crime Analysts’ meeting in Boston the first week of October 2023.

Cyber security is easier to describe in marketing collateral than to do in real life. The SolarWinds’ misstep is an interesting case example of reality being different from the expectation.

Stephen E Arnold, November 16, 2023

Using Smart Software to Make Google Search Less Awful

November 16, 2023

This essay is the work of a dumb humanoid. No smart software required.

Here’s a quick tip: to get useful results from Google Search, use a competitor’s software. Digital Digging blogger Henk van Ess describes “How to Teach ChatGPT to Come Up with Google Formulas.” Specifically, van Ess needed to include foreign-language results in his queries while narrowing results to certain time frames. These are not parameters Google handles well on its own. It was ChatGPT to the rescue—after some tinkering, anyway. He describes an example search goal:

“Find any official document about carbon dioxide reduction from Greek companies, anything from March 24, 2020 to December 21, 2020 will do. Hey, can you search that in Greek, please? Tough question right? Time to fire up Bing or ChatGPT. Round 1 in #chatgpt has a terrible outcome.”

But of course, van Ess did not stop there. For the technical details on the resulting “ball of yarn,” how van Ess resolved it, and how it can be extrapolated to other use cases, navigate to the write-up. One must bother to learn how to write effective prompts to get these results, but van Ess insists it is worth the effort. The post observes:

“The good news is: you only have to do it once for each of your favorite queries. Set and forget, as you just saw I used the same formulae for Greek CO2 and Japanese EV’s. The advantage of natural language processing tools like ChatGPT is that they can help you generate more accurate and relevant search queries in a faster and more efficient way than manually typing in long and complex queries into search engines like Google. By using natural language processing tools to refine and optimize your search queries, you can avoid falling into ‘rabbit holes’ of irrelevant or inaccurate results and get the information you need more quickly and easily.”
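Van Ess’s post covers the prompt engineering itself; his exact prompts and formulas are in the write-up. As a rough sketch of the end product only (the helper function and its parameter names below are my own, not from his article), the kind of date-bounded, language-restricted Google query he wanted can be assembled mechanically with Google’s documented `after:`/`before:` operators and the `lr` language parameter:

```python
from urllib.parse import urlencode

def build_google_query(terms, after=None, before=None, language=None, filetype=None):
    """Assemble a Google advanced-search URL with optional date bounds,
    a result-language restriction, and a filetype filter."""
    parts = [terms]
    if after:
        parts.append(f"after:{after}")      # results dated after YYYY-MM-DD
    if before:
        parts.append(f"before:{before}")    # results dated before YYYY-MM-DD
    if filetype:
        parts.append(f"filetype:{filetype}")
    params = {"q": " ".join(parts)}
    if language:
        params["lr"] = f"lang_{language}"   # e.g. lang_el restricts to Greek pages
    return "https://www.google.com/search?" + urlencode(params)

# The Greek CO2 example from the post, expressed as one query URL
url = build_google_query(
    "μείωση διοξειδίου του άνθρακα",        # "carbon dioxide reduction" in Greek
    after="2020-03-24",
    before="2020-12-21",
    language="el",
    filetype="pdf",
)
```

The point of van Ess’s approach is that ChatGPT writes this kind of formula for you in plain language; the sketch just shows what a finished, reusable version of one formula might look like.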

Google is currently rolling out its own AI search “experience” in phases around the world. Will it improve results, or will one still be better off employing third-party hacks?

Cynthia Murrell, November 16, 2023

Google and the Tom Sawyer Method, Part Two

November 15, 2023

This essay is the work of a dumb humanoid. No smart software required.

What does a large online advertising company do when it cannot figure out what’s fake and what’s not? The answer, as I suggested in this post, is to get other people to do the work. The approach is cheap, shifts the burden to other people, and sidesteps direct testing of an automated “smart” system to detect fake data in the form of likenesses of living people or likenesses for which fees must be paid to use the likeness.

“YouTube Will Let Musicians and Actors Request Takedowns of Their Deepfakes” explains (sort of):

YouTube is making it “possible to request the removal of AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice.” Individuals can submit calls for removal through YouTube’s privacy request process

I find this angle on the process noted in my “Google Solves Fake Information with the Tom Sawyer Method” a useful interpretation of what Google is doing.

From my point of view, Google wants others to do the work of monitoring, identifying, and filling out a form to request fake information be removed. Never mind that Google has the data, the tags, and (in theory) the expertise to automate the process.

I admire Google. I bet Tom Sawyer’s distant relative now works at Google and cooked up this approach. Well done. Hit that Foosball game while others hunt for their fake or unauthorized likeness, their music, or some other copyrighted material.

Stephen E Arnold, November 15, 2023

Hitting the Center Field Wall, AI Suffers an Injury!

November 15, 2023

This essay is the work of a dumb, dinobaby humanoid. No smart software required.

At a reception at a government facility in Washington, DC, last week, one of the bright young sparks told me, “Every investment deal I see gets funded if it includes the words ‘artificial intelligence.’” I smiled and moved to another conversation. Wow, AI has infused the exciting world of a city built on the swampy marge of the Potomac River.

I think that the go-go era of smart software has reached a turning point. Venture firms and consultants may not have received the email with this news. However, my research team has, and the update contains information on two separate thrusts of the AI revolution.


The heroic athlete, supported by his publicist, makes a heroic effort to catch the long fly ball. Unfortunately our star runs into the wall, drops the ball, and suffers what may be a career-ending injury to his left hand. (It looks broken, doesn’t it?) Oh, well. Thanks, MSFT Bing. The perspective is weird and there is trash on the ground, but the image is good enough.

The first signal appears in “AI Companies Are Running Out of Training Data.” The notion that online information is infinite is a quaint one. But in the fever of moving to online, reality is less interesting than the euphoria of the next gold rush or the new Industrial Revolution. Futurism reports:

Data plays a central role, if not the central role, in the AI economy. Data is a model’s vital force, both in basic function and in quality; the more natural — as in, human-made — data that an AI system has to train on, the better that system becomes. Unfortunately for AI companies, though, it turns out that natural data is a finite resource — and if that tap runs dry, researchers warn they could be in for a serious reckoning.

The information or data in question is not the smog emitted by modern automobiles’ chip-stuffed boxes. Nor is the data the streams of geographic information gathered by mobile phone systems. The high value data are those which matter; for example, in a stream of securities information, which specific stock is moving because it is being manipulated by one of those bright young minds I met at the DC event.

The article “AI Companies Are Running Out of Training Data” adds:

But as data becomes increasingly valuable, it’ll certainly be interesting to see how many AI companies can actually compete for datasets — let alone how many institutions, or even individuals, will be willing to cough their data over to AI vacuums in the first place. But even then, there’s no guarantee that the data wells won’t ever run dry. As infinite as the internet seems, few things are actually endless.

The fix is synthetic or faked data; that is, fabricated data which appears to replicate real-life behavior. (Don’t you love it when Google predicts the weather or a smarty pants games the crypto market?)

The message is simple: Smart software has ground through the good stuff and may face its version of an existential crisis. That’s different from the rah rah one usually hears about AI.

The second item my team called to my attention appears in a news story called “OpenAI Pauses New ChatGPT Plus Subscriptions Due to Surge in Demand.” I read the headline as saying, “Oh, my goodness, we don’t have the money or the capacity to handle more user requests.”

The article expresses the idea in this snappy 21st century way:

The decision to pause new ChatGPT signups follows a week where OpenAI services – including ChatGPT and the API – experienced a series of outages related to high-demand and DDoS attacks.

Okay, security and capacity.

What are the implications of these two unrelated stories:

  1. The run up to AI has been boosted with system operators ignoring copyright and picking low hanging fruit. The orchard is now looking thin. Apples grow on trees, just not quickly, and overcultivation can ruin the once fertile soil. Think a digital Dust Bowl perhaps?
  2. The friction of servicing user requests is causing slowdowns. Can the heat be dissipated? Absolutely, but the fix requires money, more than high school science club management techniques, and common sense. Do AI companies exhibit common sense? Yeah, sure. Every day.
  3. The lack of high-value or sort of good information is a bummer. Machines producing insights into the dark activities of bad actors and the thoughts of 12-year-olds are grinding along. However, the value of the information outputs seems to be lagging behind the marketers’ promises. One telling example is the outright failure of Israel’s smart software to have utility in identifying the intent of bad actors. My goodness, if any country has smart systems, it’s Israel. Based on events in the last couple of months, the flows of data produced what appears to be a failing grade.

If we take these two cited articles’ information at face value, one can make a case that the great AI revolution may be facing some headwinds. In a winner-take-all game like AI, there will be some Sad Sacks at those fancy Washington, DC receptions. Time to innovate and renovate perhaps?

Stephen E Arnold, November 15, 2023

Cyberwar Crimes? Yep and Prosecutions Coming Down the Pike

November 15, 2023

This essay is the work of a dumb humanoid. No smart software required.

Existing international law has appeared hamstrung in the face of cyber-attacks for years, with advocates calling for new laws to address the growing danger. It appears, however, that step will no longer be necessary. Wired reports, “The International Criminal Court Will Now Prosecute Cyberwar Crimes.” The Court’s lead prosecutor, Karim Khan, acknowledged in an article published by Foreign Policy Analytics that cyber warfare perpetrates serious harm in the real world. Attacks on critical infrastructure like medical facilities and power grids may now be considered “war crimes, crimes against humanity, genocide, and/or the crime of aggression” as defined in the 1998 Rome Statute. That is great news, but why now? Writer Andy Greenberg tells us:

“Neither Khan’s article nor his office’s statement to WIRED mention Russia or Ukraine. But the new statement of the ICC prosecutor’s intent to investigate and prosecute hacking crimes comes in the midst of growing international focus on Russia’s cyberattacks targeting Ukraine both before and after its full-blown invasion of its neighbor in early 2022. In March of last year, the Human Rights Center at UC Berkeley’s School of Law sent a formal request to the ICC prosecutor’s office urging it to consider war crime prosecutions of Russian hackers for their cyberattacks in Ukraine—even as the prosecutors continued to gather evidence of more traditional, physical war crimes that Russia has carried out in its invasion. In the Berkeley Human Rights Center’s request, formally known as an Article 15 document, the Human Rights Center focused on cyberattacks carried out by a Russian group known as Sandworm, a unit within Russia’s GRU military intelligence agency. Since 2014, the GRU and Sandworm, in particular, have carried out a series of cyberwar attacks against civilian critical infrastructure in Ukraine beyond anything seen in the history of the internet.”

See the article for more details of Sandworm’s attacks. Greenberg consulted Lindsay Freeman, the Human Rights Center’s director of technology, law, and policy, who expects the ICC is ready to apply these standards well beyond the war in Ukraine. She notes the 123 countries that signed the Rome Statute are obligated to detain and extradite convicted war criminals. Another expert, Strauss Center director Bobby Chesney, points out Khan paints disinformation as a separate, “gray zone.” Applying the Rome Statute to that tactic may prove tricky, but he might make it happen. Khan seems determined to hold international bad actors to account as far as the law will possibly allow.

Cynthia Murrell, November 15, 2023

A Musky Odor Thwarts X Academicians

November 15, 2023

This essay is the work of a dumb humanoid. No smart software required.

How does a tech mogul stamp out research? The American way, of course! Ars Technica reveals, “100+ Researchers Say they Stopped Studying X, Fearing Elon Musk Might Sue Them.” A recent survey by the Coalition for Independent Technology Research, reported by Reuters, found that a fear of litigation and jacked-up data-access fees are hampering independent researchers. All this while X (formerly Twitter) is under threat of EU fines for allowing Israel/Hamas falsehoods. Meanwhile, the usual hate speech, misinformation, and disinformation continue. The company insists its own, internal mechanisms are doing a fine job, thank you very much, but it is getting harder and harder to test that claim. Writer Ashley Belanger tells us:

“Although X’s API fees and legal threats seemingly have silenced some researchers, X has found some other partners to support its own research. In a blog last month, Yaccarino named the Technology Coalition, Anti-Defamation League (another group Musk threatened to sue), American Jewish Committee, and Global Internet Forum to Counter Terrorism (GIFCT) among groups helping X ‘keep up to date with potential risks’ and supporting X safety measures. GIFCT, for example, recently helped X identify and remove newly created Hamas accounts. But X partnering with outside researchers isn’t a substitute for external research, as it seemingly leaves X in complete control of spinning how X research findings are characterized to users. Unbiased research will likely become increasingly harder to come by, Reuters’ survey suggested.”

Indeed. And there is good reason to believe the company is being less than transparent about its efforts. We learn:

“For example, in July, X claimed that a software company that helps brands track customer experiences, Sprinklr, supplied X with some data that X Safety used to claim that ‘more than 99 percent of content users and advertisers see on Twitter is healthy.’ But a Sprinklr spokesperson this week told Reuters that the company could not confirm X’s figures, explaining that ‘any recent external reporting prepared by Twitter/X has been done without Sprinklr’s involvement.’”

Musk is famously a “free speech absolutist,” but only when it comes to speech he approves of. Decreasing transparency will render X more dangerous, unless and until its decline renders it irrelevant. Fear the musk ox.

Cynthia Murrell, November 15, 2023

Copyright Trolls: An Explanation Which Identifies Some Creatures

November 14, 2023

This essay is the work of a dumb humanoid. No smart software required.

If you are not familiar with firms which pursue those who intentionally or unintentionally use another person’s work in their writings, you may not know what a “copyright troll” is. I want to point you to an interesting post from IntoTheMinds.com. The write up “PicRights + AFP: Une Opération de Copyright Trolling Bien Rodée” (“PicRights + AFP: A Well-Oiled Copyright Trolling Operation”) appeared in 2021, and it was updated in June 2023. The original essay is in French, but you may want to give Google Translate a whirl if your high school French is but a memoire dou dou.


A copyright troll is looking in the window of a blog writer. The troll is waiting for the writer to use content covered by copyright and for which a fee must be paid. The troll is patient. The blog writer is clueless. Thanks, Microsoft Bing. Nice troll. Do you perhaps know one?

The write up does a good job of explaining trollism with particular reference to an estimable outfit called PicRights and the even more estimable Agence France-Presse. It also does a bit of critical review of the PicRights’ operation, including the use of language to warn alleged copyright violators about how their lives will take a nosedive if money is not paid promptly for the alleged transgression. There are some thoughts about what to do if and when a copyright troll like the one pictured courtesy of Microsoft Bing’s art generator comes calling, along with some comments about the rules and regulations regarding trollism. The author includes a few observations about the rights of creators, and a few suggested readings are included. Of particular note is the discussion of an estimable legal eagle outfit doing business as Higbee and Associates. You can find that document at this link.

If you are interested in copyright trolling in general and PicRights in particular, I suggest you download the document. I am not sure how long it will remain online.

Stephen E Arnold, November 14, 2023

Google Solves Fake Information with the Tom Sawyer Method

November 14, 2023

This essay is the work of a dumb humanoid. No smart software required.

How does one deliver “responsible AI”? Easy. Shift the work to those who use a system built on smart software. I call the approach the “Tom Sawyer Method.” The idea is that the fictional character (Tom) convinced lesser lights to paint the fence for him. Sammy Clemens (the guy who invested in the typesetting machine) said:

“Work consists of whatever a body is obliged to do. Play consists of whatever a body is not obliged to do.”

Thus the information in “Our Approach to Responsible AI Innovation” is play. The work is for those who cooperate to do the real work. The moral is, “We learn more about Google than we do about responsible AI innovation.”


The young entrepreneur says, “You fellows chop the wood. I will go and sell it to one of the neighbors. Do a good job. Once you finish you can deliver the wood and I will give you your share of the money. How’s that sound?” The friends are eager to assist their pal. Thanks, Microsoft Bing. I was surprised that you provided people of color when I asked for “young people chopping wood.” Interesting? I think so.

The Google write up from a trio of wizard vice presidents at the online advertising company says:

…we’ll require creators to disclose when they’ve created altered or synthetic content that is realistic, including using AI tools. When creators upload content, we will have new options for them to select to indicate that it contains realistic altered or synthetic material.

Yep, “require.” But what I want to do is to translate Google speak into something dinobabies understand. Here’s my translation:

  1. Google cannot determine what content is synthetic and what is not; therefore, the person using our smart software has to tell us, “Hey, Google, this is fake.”
  2. Google does not want to increase headcount and costs related to synthetic content detection and removal. Therefore, the work is moved via the Tom Sawyer Method to YouTube “creators” or fence painters. Google gets the benefit of reduced costs, hopefully reduced liability, and “play” like Foosball.
  3. Google can look at user provided metadata and possibly other data in the firm’s modest repository and determine with acceptable probability that a content object and a creator should be removed, penalized, or otherwise punished by a suitable action; for example, not allowing a violator to buy Google merchandise. (Buying Google AdWords is okay, however.)

The write up concludes with this bold statement: “The AI transformation is at our doorstep.” Inspiring. Now wood choppers, you can carry the firewood into the den and stack it by the fireplace in which we burn the commission checks the offenders were to receive prior to their violating the “requirements.”

Ah, Google, such a brilliant source of management inspiration: A novel written in 1876. I did not know that such old information was in the Google index. I mean DejaVu is consigned to the dust bin. Why not Mark Twain’s writings?

Stephen E Arnold, November 14, 2023


Google: Slip Slidin’ Away? Not Yet. Defaults Work

November 14, 2023

This essay is the work of a dumb humanoid. No smart software required.

I spotted a short item in the online information service called Quartz. The story had a click magnet title, and it worked for me. “Is This the Beginning of the End of Google’s Dominance in Search?” asks a rhetorical question without providing much of an answer. The write up states:

The tech giant’s market share is being challenged by an increasingly crowded field

I am not sure what this statement means. I noticed during the week of November 6, 2023, that the search system 50kft.com stopped working. Is the service dead? Is it experiencing technical problems? No one knows. I also checked Newslookup.com. That service remains stuck in the past. And Blogsurf.io seems to be a goner. I am not sure where the renaissance in Web search is. Is there a digital Florence, Italy, I have overlooked?


A search expert lounging in the hammock of habit. Thanks, Microsoft Bing. You do understand some concepts like laziness when it comes to changing search defaults, don’t you?

The write up continues:

Google has been the world’s most popular search engine since its launch in 1997. In October, it was holding a market share of 91.6%, according to web analytics tracker StatCounter. That’s down nearly 80 basis points from a year before, though a relatively small dent considering OpenAI’s ChatGPT was introduced late last year.

And what’s number two? How about Bing with a market share of 3.1 percent according to the numbers in the article.

Some people know that Google has spent big bucks to become the default search engine in places that matter. What few appreciate is that being a default is the equivalent of finding oneself in a comfy habit hammock. Changing the default setting for search is just not worth the effort.

What I think is happening is the conflation of search and retrieval with another trend. The new thing is letting software generate what looks like an answer. Forget that the outputs of a system based on smart software may be wonky or just incorrect. Thinking up a query is difficult.

But Web search sucks. Google is in a race to create bigger, more inviting hammocks.


Google is not sliding into a loss of market share. The company is coming in for the kill as it demonstrates its financial resolve with regard to the investment in Character.ai.

Let me be clear: Finding actionable information today is more difficult than at any previous time in my 50-year career in online information. Why? Software struggles to match content to what a human needs to solve certain problems. Finding a pizza joint or getting a list of results for further reading just looks like an answer. Moving beyond good enough, so that the pizza joint does not gag a maggot and the list of citations is not beyond the user’s reading level, is not what today’s systems deliver.

We are stuck in the Land of Good Enough, lounging in habit hammocks, and living the good life. Some people wear a T shirt with the statement, “Ignorance is bliss. Hello, Happy.”

Net net: I think the write up projects a future in which search becomes really easy and does the thinking for the humanoids. But for now, it’s the Google.

Stephen E Arnold, November 14, 2023

Pundit Recounts Amazon Sins and Their Fixes

November 14, 2023

This essay is the work of a dumb humanoid. No smart software required.

Sci-fi author and Pluralistic blogger Cory Doctorow is not a fan of Amazon. In fact, he declares, “Amazon Is a Ripoff.” His article references several sources to support this assertion, beginning with Lina Khan’s 2017 cautionary paper published in the Yale Law Journal. Now head of the FTC, Khan is bringing her expertise to bear in a lawsuit against the monopoly. We are reminded how tech companies have been able to get away with monopolistic practices thus far:

“There’s a cheat-code in US antitrust law, one that’s been increasingly used since the Reagan administration, when the ‘consumer welfare’ theory (‘monopolies are fine, so long as they lower prices’) shoved aside the long-established idea that antitrust law existed to prevent monopolies from forming at all. The idea that a company can do anything to create or perpetuate a monopoly so long as its prices go down and/or its quality goes up is directly to blame for the rise of Big Tech.”

But what, exactly, is shady about Amazon’s practices? From confusing consumers through complexity and gouging them with “drip pricing” to holding vendors over a barrel, Doctorow describes the company’s sins in this long, specific, and heavily linked diatribe. He then pulls three rules to hold Amazon accountable from a paper by researchers Tim O’Reilly, Ilan Strauss, and Mariana Mazzucato: Force the company to halt its most deceptive practices, mandate interoperability between it and comparison shopping sites, and create legal safe harbors for the scraping that underpins such interoperability. The invective concludes:

“I was struck by how much convergence there is among different kinds of practitioners, working against the digital sins of very different kinds of businesses. From the CFPB using mandates and privacy rules to fight bank rip-offs to behavioral economists thinking about Amazon’s manipulative search results. This kind of convergence is exciting as hell. After years of pretending that Big Tech was good for ‘consumers,’ we’ve not only woken up to how destructive these companies are, but we’re also all increasingly in accord about what to do about it. Hot damn!”

He sounds so optimistic. Are big changes ahead? Don’t forget to sign up for Prime.

Cynthia Murrell, November 14, 2023
