Business Intelligence: Optimism and Palantir

June 28, 2010

Business intelligence is in the news. Memex, the low profile UK outfit, sold to SAS. Kroll, another low profile operation, became part of Altegrity, another organization with modest visibility among the vast sea of online experts. Now Palantir snags $90 million, a fact I learned in “Palantir: the Next Billion Dollar Company Raises $90 Million.” In the post financial meltdown world, there is a lot of money looking for a place that can grow more money. The information systems developed for serious intelligence analysis seem to be a better bet than funding another Web search company.

Palantir has some ardent fans in the US defense and intelligence communities. I like the system as well. What is fascinating to me is that smart money believes that there is gold in them there analytics and visualizations. I don’t doubt for a New York minute that some large commercial organizations can do a better job of figuring out the nuances in their petabytes of data with Palantir-type tools. But Palantir is not exactly Word or Excel.

The system requires an understanding of such nettlesome points as source data, analytic methods, and – yikes – programmatic thinking. The outputs from Palantir are almost good enough for General Stanley McChrystal to get another job. I have seen snippets of some really stunning presentations featuring Palantir outputs. You can see some examples at the Palantir Web site or take a gander (no pun intended by the addled goose) at the image below:

[Image: sample Palantir output]

Palantir is an open platform; that is, a licensee with some hefty coinage in their knapsack can use Palantir to tackle the messy problem of data transformation and federation. The approach features dynamic ontologies, which means that humans don’t have to do as much heavy lifting as required by some of the other vendors’ systems. A licensee will want to have a tame rocket scientist around to deal with the internals of pXML, the XML variant used to make Palantir walk and talk.

You can poke around at these links which may go dark in a nonce, of course: https://devzone.palantirtech.com/ and https://www.palantirtech.com/.

Several observations:

  • The system is expensive and requires headcount to operate in a way that will deliver satisfactory results under real world conditions
  • Extensibility is excellent, but this work is not for a desk jockey, no matter how confident that person is in his undergraduate history degree and Harvard MBA
  • The approach is industrial strength which means that appropriate resources must be available to deal with data acquisition, system tuning, and programming the nifty little extras that are required to make next generation business intelligence systems smarter than a grizzled sergeant with a purple heart.

Can Palantir become a billion dollar outfit? Well, there is always the opportunity to pump in money, increase the marketing, and sell the company to a larger organization with Stone Age business intelligence systems. If Oracle wanted to get serious about XML, Palantir might be worth a look. I can name some other candidates for making the investors’ day, but I will leave those to your imagination. Will you run your business on a Palantir system in the next month or two? Probably not.

Stephen E Arnold, June 27, 2010

Freebie

Palantir Describes Lucene Searching with a Twist

January 27, 2010

If you do work in law enforcement, financial services, or intelligence (business or governmental), chances are high that you know about Palantir. The firm provides sophisticated data analysis and analytics tools for industrial-strength information jobs.

In August 2009 and October 2009, the company published a discussion of its approach to search and retrieval. I had occasion to update my file about Palantir technology, and I reviewed these two write ups. Both appeared in the Palantir Web log, and I thought that the information was relevant to some of the issues I am working on in 2010.

The first article is “Palantir: Search with a Twist (Part One: Memory Efficiency).” In that write up, the company points out that it uses the “venerable Java search engine Lucene.” Ah, open source, I thought. Palantir’s engineers encountered some limitations in Lucene and needed to work around these. The article explains that Palantir addressed Lucene’s approach to accumulating search results with a priority queue, streaming through results and inserting into the queue, and returning the set of results in the priority queue. The first article provides a useful summary of the Palantir method.
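The bounded priority-queue pattern the article describes is easy to sketch. This is illustrative Python of my own, not Palantir’s or Lucene’s code; the names are made up, and the point is the memory-efficiency claim: streaming hits through a size-capped heap keeps memory at O(k) no matter how many results match.

```python
import heapq

def top_k_results(scored_hits, k):
    """Stream (score, doc_id) pairs through a bounded min-heap,
    retaining only the k best-scoring hits. Time is O(n log k),
    memory is O(k) -- the efficiency point the Palantir post makes."""
    heap = []  # min-heap: the worst retained score sits at heap[0]
    for score, doc_id in scored_hits:
        if len(heap) < k:
            heapq.heappush(heap, (score, doc_id))
        elif score > heap[0][0]:
            # New hit beats the worst retained hit; swap it in.
            heapq.heapreplace(heap, (score, doc_id))
    # Return the retained hits best-first.
    return sorted(heap, reverse=True)

hits = [(0.2, "a"), (0.9, "b"), (0.5, "c"), (0.7, "d"), (0.1, "e")]
print(top_k_results(hits, 3))  # [(0.9, 'b'), (0.7, 'd'), (0.5, 'c')]
```

Lucene’s own collectors do essentially this in Java; the sketch just makes the mechanism concrete.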

The second article is “Palantir: Search with a Twist (Part Two: Real-Time Indexing and Security).” This write up explains two approaches Palantir explored to deal with what the company calls “leaking information; namely that there’s data on this object that the user making the query is not privy to.” The write up says:

Given this problem, there are two approaches one can take: [1] Store all the information needed to decide which labels are visible to the user running the query and then use only the visible labels when calculating the relevance of a match. Note that is a pretty expensive operation. [2] Don’t use the length of match to compute relevance. Note that skipping a relevance calculation is, obviously, a very cheap thing [to] do. Which do we do? Both.
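The trade-off in that quotation can be made concrete with a little illustrative Python (my sketch, not Palantir’s code; the function names are mine): either compute relevance from only the labels the user may see, or drop length-of-match from scoring altogether.

```python
def visible_match_score(matched_labels, visible_labels):
    """Approach 1: count only matched labels the querying user is
    cleared to see. Accurate, but resolving per-user visibility at
    query time is the expensive part the write up flags."""
    return len(matched_labels & visible_labels)

def binary_score(matched_labels, visible_labels):
    """Approach 2: ignore match length entirely. Any visible match
    scores 1, so the score reveals nothing about how many hidden
    labels also matched -- and it is cheap to compute."""
    return 1 if matched_labels & visible_labels else 0

# One hidden label ("z") matched; neither scorer leaks its existence.
print(visible_match_score({"x", "y", "z"}, {"x", "y"}))  # 2
print(binary_score({"x", "y", "z"}, {"x", "y"}))         # 1
```

“Both” presumably means picking whichever scorer the situation affords: the accurate one when visibility data is at hand, the cheap one when it is not.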

I recommend that anyone wrestling with Lucene take a look at these two articles. A third installment has been promised, but I have not yet seen it.

Stephen E Arnold, January 27, 2010

A free search engine warrants a free post. No one paid me to write this. I will report this sad fact to the Department of Labor.

Palantir: Data Analysis

March 24, 2009

In the last month, three people have asked me about Palantir Technologies. I have had several people mention the work environment and the high caliber of the team at the company. The company has about 170 employees and is privately held. I have heard that the firm is profitable, but I have that from two sources now hunting for work after their financial institutions went south. The company is one of the leaders in finance and intelligence analytics. The specialties of the company include global macro research and trading; quantitative trading; and knowledge discovery and knowledge management.

If you are not familiar with the company, you may want to navigate to www.palantirtech.com and take a look at the company’s offerings. Located in Palo Alto, the company focuses on making software that facilitates information analysis. With interest in business intelligence waxing and waning, Palantir has captured a very solid reputation for sophisticated analytics. Law enforcement and intelligence agencies “snap in” Palantir’s software to perform analysis and generate visualizations of the data. The company has been influenced by Apple in terms of the value placed upon sophisticated design and presentation. Palantir’s system makes highly complex tasks somewhat easier because of the firm’s interfaces. If you want to generate a visualization of a large, complex analytic method, Palantir can produce visually arresting graphics. If you navigate to the company’s “operation tradestop” page here, you can access demonstrations and white papers.

When I last checked the company’s demos, a number of them provided examples drawn from military and intelligence simulations. These examples provide a useful window into the sophistication of the Palantir technology. The company’s tools can manipulate data from any domain where large datasets and complex analyses must be run. The screenshot below comes from the firm’s demonstration of entity extraction, text processing, and relationship analysis:

[Image: Palantir relationship diagram]

A Palantir relationship diagram. Each object is a link, making it easy to drill down into the underlying data or documents.

Each object on the display is “live” so you can drill down or run other analyses about that object. The idea is to make data analysis interactive. Most of the vendors of high-end business intelligence systems offer some interactivity, but Palantir has gone further than most firms.

The company has a Web log, and it seems to be updated with reasonable frequency. The Web log does a good job of pointing out some of the features of the firm’s software. For example, I found this discussion of the Palantir monitoring server quite useful. The Web site emphasizes the visualization capabilities of the software. The Web log digs deeper into the innovations upon which the graphics rest.

Be careful when you run a Google query for Palantir. There are several firms with similar names. You will want to navigate to www.palantirtech.com. You may find yourself at another Palantir when you want the business intelligence firm.

Stephen Arnold, March 24, 2009

Thomson Reuters: A Trust Report about Trust from an Outfit with Trust Principles

June 21, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Thomson Reuters is into trust. The company has a Web page called “Trust Principles.” Here’s a snippet:

The Trust Principles were created in 1941, in the midst of World War II, in agreement with The Newspaper Proprietors Association Limited and The Press Association Limited (being the Reuters shareholders at that time). The Trust Principles imposed obligations on Reuters and its employees to act at all times with integrity, independence, and freedom from bias. Reuters Directors and shareholders were determined to protect and preserve the Trust Principles when Reuters became a publicly traded company on the London Stock Exchange and Nasdaq. A unique structure was put in place to achieve this. A new company was formed and given the name ‘Reuters Founders Share Company Limited’, its purpose being to hold a ‘Founders Share’ in Reuters.

Trust nestles in some legalese and a bit of business history. The only reason I mention this anchoring in trust is that Thomson Reuters reported quarterly revenue of $1.88 billion in May 2024, up from $1.74 billion in May 2023. The financial crowd had expected $1.85 billion in the quarter, and Thomson Reuters beat that. Surplus funds makes it possible to fund many important tasks; for example, a study of trust.

[Image: an ouroboros]

The ouroboros, according to some big thinkers, symbolizes the entity’s journey and the unity of all things; for example, defining trust, studying trust, and writing about trust as embodied in the symbol.

My conclusion is that trust as a marketing and business principle seems to be good for business. Therefore, I trust, and I am confident in the information in “Global Audiences Suspicious of AI-Powered Newsrooms, Report Finds.” The subject of the trusted news story is the Reuters Institute for the Study of Journalism. The Thomson Reuters reporter presents in a trusted way this statement:

According to the survey, 52% of U.S. respondents and 63% of UK respondents said they would be uncomfortable with news produced mostly with AI. The report surveyed 2,000 people in each country, noting that respondents were more comfortable with behind-the-scenes uses of AI to make journalists’ work more efficient.

To make the point a person working for the trusted outfit’s trusted report says in what strikes me as a trustworthy way:

“It was surprising to see the level of suspicion,” said Nic Newman, senior research associate at the Reuters Institute and lead author of the Digital News Report. “People broadly had fears about what might happen to content reliability and trust.”

In case you have lost the thread, let me summarize. The trusted outfit Thomson Reuters funded a study about trust. The research was conducted by the trusted outfit’s own Reuters Institute for the Study of Journalism. The conclusion of the report, as presented by the trusted outfit, is that people want news they can trust. I think I have covered the post card with enough trust stickers.

I know I can trust the information. Here’s a factoid from the “real” news report:

Vitus “V” Spehar, a TikTok creator with 3.1 million followers, was one news personality cited by some of the survey respondents. Spehar has become known for their unique style of delivering the top headlines of the day while laying on the floor under their desk, which they previously told Reuters is intended to offer a more gentle perspective on current events and contrast with a traditional news anchor who sits at a desk.

How can one not trust a report that includes a need met by a TikTok creator? Would a Thomson Reuters’ professional write a news story from under his or her desk or cube or home office kitchen table?

I think self-funded research which finds that the funding entity’s approach to trust is exactly what those in search of “real” news need says it all. Wikipedia includes some interesting information about Thomson Reuters in its discussion of the company in the section titled “Involvement in Surveillance.” Wikipedia alleges that Thomson Reuters licenses data to Palantir Technologies, an assertion which, if accurate, I find orthogonal to my interpretation of the word “trust.” But Wikipedia is not Thomson Reuters.

I will not ask questions about the methodology of the study. I trust the Thomson Reuters’ professionals. I will not ask questions about the link between revenue and digital information. I have the trust principles to assuage any doubt. I will not comment on the wonderful ouroboros-like quality of an enterprise embodying trust, funding a study of trust, and converting those data into a news story about itself. The symmetry is delicious and, of course, trustworthy. For information about Thomson Reuters’s trust use of artificial intelligence see this Web page.

Stephen E Arnold, June 21, 2024

Kill Off the Dinobabies and Get Younger, Bean Counter-Pleasing Workers. Sound Familiar?

September 21, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Google, Meta, Amazon Hiring Low-Paid H1B Workers after US Layoffs: Report.” Is it accurate? Who knows? In the midst of a writers’ strike in Hollywood, I thought immediately about endless sequels to films like “Batman 3: Deleting Robin” and “Halloween 8: The Night of the Dinobaby Purge.”

The write up reports a management method similar to those implemented when the high school science club was told that a school field trip to the morgue was turned down. The school’s boiler suffered a mysterious malfunction and school was dismissed for a day. Heh heh heh.

I noted this passage:

Even as global tech giants are carrying out mass layoffs, several top Silicon Valley companies are reportedly looking to hire lower-paid tech workers from foreign countries. Google, Meta, Amazon, Microsoft, Zoom, Salesforce and Palantir have applied for thousands of H1B worker visas this year…

I heard a rumor that IBM used a similar technique. Would Big Blue replace older, highly paid employees with GenX professionals not born in the US? Of course not! The term “dinobabies” was a product of spontaneous innovation, not from a personnel professional located in a suburb of New York City. Happy bean counters indeed. Saving money with good enough work. I love the phrase “minimal viable product” for “minimally viable” work environments.

There are so many ways to allow people to find their futures elsewhere. Shelf stockers are in short supply I hear.

Stephen E Arnold, September 21, 2023

Thomson Reuters, Where Is Your Large Language Model?

April 3, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I have to give the lovable Bloomberg a pat on the back. Not only did the company explain its large language model for finance, the end notes to the research paper are fascinating. One cited document has 124 authors. Why am I mentioning the end notes? The essay is 65 pages in length, and the notes consume 25 pages. Even more interesting is that the “research” apparently involved nVidia and everyone’s favorite online bookstore, Amazon and its Web services. No Google. No Microsoft. No Facebook. Just Bloomberg and the tenure-track researcher’s best friend: The end notes.

The article with a big end… note, that is… presents this title: “BloombergGPT: A Large Language Model for Finance.” I would have titled the document with its chunky equations “A Big Headache for Thomson Reuters,” but I know most people are not “into” the terminal rivalry, the analytics rivalry, the Thomson Reuters’ Fancy Dancing with Palantir Technologies, or the “friendly” competition in which the two firms have engaged for decades.

Smart software score appears to be: Bloomberg 1, Thomson Reuters, zippo. (Am I incorrect? Of course, but this beefy effort, the mind boggling end notes, and the presence of Johns Hopkins make it clear that Thomson Reuters has some marketing to do.) What Microsoft Bing has done to the Google may be exactly what Bloomberg wants to do to Thomson Reuters: make money on the next big thing and marginalize a competitor. Bloomberg obviously wants more than the aging terminal business and the fame achieved on free TV’s Bloomberg TV channels.

What is the Bloomberg LLM or large language model? Here’s what the paper asserts. Please, keep in mind that essays stuffed with mathy stuff and researchy data are often non-reproducible. Heck, even the president of Stanford University took short cuts. Plus, more than half of the research results my team has tried to reproduce end up in Nowheresville, which is not far from my home in rural Kentucky:

we present BloombergGPT, a 50 billion parameter language model that is trained on a wide range of financial data. We construct a 363 billion token dataset based on Bloomberg’s extensive data sources, perhaps the largest domain-specific dataset yet, augmented with 345 billion tokens from general purpose datasets. We validate BloombergGPT on standard LLM benchmarks, open financial benchmarks, and a suite of internal benchmarks that most accurately reflect our intended usage. Our mixed dataset training leads to a model that outperforms existing models on financial tasks by significant margins without sacrificing performance on general LLM benchmarks.
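Taking the quoted figures at face value, the training mix is close to an even split between financial and general tokens. A quick back-of-the-envelope check (my arithmetic on the abstract’s numbers, nothing more):

```python
# Figures as quoted in the BloombergGPT abstract.
domain_tokens = 363e9    # Bloomberg's financial corpus
general_tokens = 345e9   # general purpose datasets

total = domain_tokens + general_tokens
domain_share = domain_tokens / total

print(f"{total / 1e9:.0f}B tokens, {domain_share:.1%} domain-specific")
# 708B tokens, 51.3% domain-specific
```

So roughly half the 708 billion training tokens are finance-specific, which is the “mixed dataset” claim in a nutshell.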

My interpretation of this quotation is:

  1. Lots of data
  2. Big model
  3. Informed financial decisions.

“Informed financial decisions” means to me that a crazed broker will give this Bloomberg thing a whirl in the hope of getting a huge bonus, a corner office which is never visited, and fame at the New York Athletic Club.

Will this happen? Who knows.

What I do know is that Thomson Reuters’ executives in London, New York, and Toronto are doing some humanoid-centric deep thinking about Bloomberg. And that may be what Bloomberg really wants because Bloomberg may be ahead. Imagine that Bloomberg ahead of the “trust” outfit.

Stephen E Arnold, April 3, 2023

Google and OpenAI: The Big Dust Up

February 8, 2023

Let’s go back to high school English class and a very demanding spinster named Miss Drake. Her test question was, “Who wrote this line?”

O heaven! that one might read the book of fate, and see the revolution of the times. (Henry IV, Part 2 [~ 1597] Act 3)

Do you remember? I do. The answer is the Bard of Avon, and he allegedly spun the phrase from his brain cells, or he ripped it off from another playwright. Yeah, the plagiarism thing has not been resolved, and it is unclear whether, and from whom, the Bard sucked in content and output a money-making play. Was the real Bard a primordial version of a creator on YouTube? Sure, why not draw that connection?

Now back to the quote. I like the idea of “the revolution of the times.”

The idea is that some wizard like Sundar or Prabhakar can check out the “Book of Fate,” which may or may not be among the works in the Google data warehouse, and see the future. Just like the Palantir seeing stone, which works so darned well, as those SPAC bets attest. Perhaps that’s what happened when Google declared a Code Red? Fear and a big bet that the GOOG can de-momentum ChatGPT.

When did OpenAI become a thing? I would suggest that it was in 2015 if one believes Sillycon Valley history. The next highlight of what was something of note took place in 2019 but possibly earlier when Microsoft plopped some Azure cycles on the OpenAI conference table. Two years later we get this:

Google at Code Red over ChatGPT As Teams Reassigned to Work on Competing AI Products

Almost coincident with Google’s realizing that people were genuinely excited about ChatGPT, Google realized that Prabhakar’s hair had caught on fire. Sundar, the Sillycon Valley manager par excellence, called back the original Relevance Revolutionaries (Messrs. Brin and Page) after Microsoft made it evident to the well-fed at Davos that Softies were good at marketing. Maybe Microsoft fell short of the “Start Me Up” campaign for the outstandingly “good enough” Windows 95, but the ChatGPT deal is notable. To make sure the GOOG got the message that Microsoft was surfing on the waves created by ChatGPT, another bundle of billions was allocated to OpenAI and ChatGPT. The time was January 2023, and it was clear that millions of norms were interested in Microsoft’s use of ChatGPT in those “good enough” engineering marvels: Bing.com and the Google infused Edge browser.

Where are we on Wednesday, February 8, 2023? How about this as a marker:

Google said Bard would be widely available to the public in the next few weeks. Source: MSN.com

Yep, the killer words are right there—”would be.” Not here, the conditional future, just a limited test. Not integrated into heaven knows how many apps like OpenAI. Not here like the collection of links generated by Matt Shumer. Not here like the YouTube videos explaining how to build an app from scratch with ChatGPT. Nope. Google is into the “to be” and “demo” mode.

Let’s do simple math on Google’s situational awareness:

  • 2015 OpenAI and Elon Musk are visible
  • 2019 Microsoft provides some “money” which may be a round trip to pay for Azure cycles
  • 2022 (November) ChatGPT gets a million users in five days
  • 2022 (December) Google feels the heat from burning hair and screeches “Code Red”
  • 2023 (January) Davos talks about ChatGPT, not just the new world order, power, money, and where to eat dinner
  • 2023 (February) Google says, “Our version of ChatGPT is coming… soon. Really, very soon.” But for now it’s a limited demo. And Microsoft? The ChatGPT thing turned up when one of my team ran a query on Tuesday, February 7, 2023. Yep, ready or not, there it was.

Several observations:

  1. The Sundar and Prabhakar duo missed the boat on the impact of ChatGPT for search. Err, you are supposed to be the search wizards, and you are standing on the platform waiting for the next train to arrive?
  2. The uptake of ChatGPT may be a reaction against the Google search system, not a reaction to the inherent greatness of ChatGPT. A certain search company with a 90 percent share and a genuine disdain for relevant responses to users’ queries may have boosted the interest in ChatGPT. If I am correct, this is an unintended consequence of being Googley.
  3. The Microsoft marketing move is an outstanding [a] bit of luck, [b] a super star tactic that warrants a Grammy (oh, wait, the Grammies suck), or [c] a way to breathe new life into products which suffer from featuritis and a lack of differentiation.

Net net: Sundar and Prabhakar are destined for Harvard case study glory. Also, the dynamic duo may pull a marshmallow from the fire, but will it make a great S’more? Does sophomoric rhyme with Anthropic? And Code Red? The revolution of the times might be here, as the Bard wrote or obtained from a fellow playwright.

Stephen E Arnold, February 8, 2023

Smart Software: Just One Real Problem? You Wish

January 6, 2023

I read “The One Real Problem with Synthetic Media.” When consulting and publishing outfits trot out a “one real problem” analysis, I get goose bumps. Am I cold? Nah, I am frightened. Write ups that propose the truth frighten me. Life is, no matter what mid tier consulting outfits say, slightly more nuanced.

What is the one real problem? The write up asserts:

Don’t use synthetic media for your business in any way. Yes, use it for getting ideas, for learning, for exploration. But don’t publish words or pictures generated by AI — at least until there’s a known legal framework for doing so. AI-generated synthetic media is arguably the most exciting realm in technology right now. Some day, it will transform business. But for now, it’s a legal third rail you should avoid.

What’s the idea behind the shocking metaphor? The third rail provides electric power to a locomotive. I think the idea is that one will be electrocuted should one touch a live third rail.

Okay.

Are there other issues beyond the legal murkiness?

Yes, let me highlight several which strike me as important.

First, the smart software can output weaponized information quickly and economically. Whom can one believe? A college professor funded by a pharmaceutical company or a robot explaining the benefits of an electric vehicle? The hosing of synthetic content and data into a society may prove more corrosive than human outputs alone. Many believe that humans are expert misinformation generators. I submit that smart software will blow the doors off the human content jalopies.

Second, smart software ingests data, whether right or wrong, human generated or machine generated, and outputs results based on these data. What happens when machine generated content reduces the human generated content to tiny rivulets? The machine output is as formidable as Hokusai’s wave. Those humans in the boats: goners perhaps?

Third, my thought is that in some parts of the US the slacker culture is the dominant mode. Forget that crazy, old-fashioned industrial revolution 9-to-5 work day. Ignore the pressure to move up, earn more, and buy a Buick, not a Chevrolet. Slacker culture dwellers look for the easy way to accomplish what they want. Does this slacker thing explain some FTX-type behavior? What about Amazon’s struggles with third-party resellers’ products? What about Palantir Technologies buying advertising space in the Wall Street Journal to convince me that it is the leader in smart software? Yeah, slacker stuff in my opinion. These examples and others mean that the DALL-E and ChatGPT type of razzle dazzle will gain traction.

Where are legal questions in these three issues? Sure legal eagles will fly when there is an opportunity to bill.

I think the smart software thing is a good example of “technology is great” thinking. The one real problem is that it is not.

Stephen E Arnold, January 6, 2023

CNN Surfaces an Outstanding Quote from the Zuck

December 30, 2022

Tucked in “The Year That Brought Silicon Valley Back Down to Earth” was an outstanding quotation from the chief Meta professional, Mark (the Zucker) Zuckerberg. Here’s the quote:

“Unfortunately, this did not play out the way I expected.”

The CNN article revisits what are by now old tropes and saws.

When I spotted the title, I thought a handful of topics would be mentioned; for example:

  1. The medical testing fraud
  2. The crazy “value” of wild hair styles and digital currency, lawyer parents, and disappearing billions. Poof.
  3. Assorted security issues (Yes, I am thinking of Microsoft and poisoned open source libraries. Hey, isn’t GitHub part of the Softies’ empire?)
  4. Apple’s mystical interactions with China
  5. Taylor Swift’s impact on Congressional interest in online ticket excitement
  6. An annual update on Google’s progress in solving death
  7. Amazon’s interaction with trusted third party sellers (Yes, I am thinking of retail thefts)
  8. Tesla’s outer space thinking about self driving
  9. Palantir’s ads asserting that it is the leader in artificial intelligence.

None of these made the CNN story. However, that quote from the Zuck touches some of these fascinating 2022 developments.

Stephen E Arnold, December 30, 2022

The Cloud and Points of Failure: Really?

September 13, 2022

A professional affiliated with Syntropy points out one of my “laws” of online; namely, that centralization is inevitable. What’s interesting about “The Internet is Now So Centralized That One Company Can Break It” is that it does not explain much about Syntropy. In my opinion, there is zero information about the company. The firm’s Web site explains:

Unlocking the power of the world’s scientific data requires more than a new tool or method – it requires a catalyst for change and collaboration across industries.

The Web site continues:

We are committed to inspiring others around our vision — a world in which the immense power of a single source of truth in biomedical data propels us towards discoveries, breakthroughs and cures faster than ever before.

The company is apparently involved with Merck KGaA, which, as I recall from my Pharmaceutical News Index days, is not too keen on sharing its intellectual property, trial data, or staff biographies. Also, the company has some (maybe organic, maybe more diaphanous) connection with Palantir Technologies. Palantir, an interesting search and retrieval company morphing into search based applications and consulting, is a fairly secretive outfit despite its being a publicly traded company. (The firm’s string of quarterly disappointments and its share price send a signal to some astute observers, I think.)

But what’s in the article by the individual identified at the foot of the essay as Domas Povilauskas, the top dog at Syntropy? Note that the byline for the article is Benzinga Contributor, which is not particularly helpful.

Hmmm. What’s up?

The write up recycles the online leads to centralization notion. Okay. But centralization is a general feature of online information, and that’s not a particularly new idea either.

The author continues:

The problem with the modern Internet is that it is essentially a set of private networks run by individual internet service providers. Each has a network, and most connections occur between these networks…. Networks are only managed locally. Routing decisions are made locally by the providers via the BGP protocol. There’s no shared knowledge, and nobody controls the entire route of the connection. Using these public ISPs is like using public transport. You have no control over where it goes. Providers own the cables and everything else. In this system, there are no incentives for ISPs to provide a good service.

The set up of ISPs strikes me as a mix of centralization and whatever works. My working classification of ISPs and providers has three categories: Constrained services (Amazon-type outfits), Boundary Operators (the TOR relay type outfits), and Unconstrained ISPs and providers (CyberBunker-type organizations). My view is that this is the opposite of centralization. In each category there are big and small outfits, but 90 percent of the action follows Arnold’s Law of Centralization. What’s interesting is that in each category — for instance, boundary operators — the centralization repeats just on a smaller scale. AccessNow runs a conference. At this conference are many operators unknown by the general online user.

The author of the article says:

The only way to get a more reliable service is to pay ISPs a lot for high-speed private connections. That’s the only way big tech companies like Amazon run their data centers. But the biggest irony is that there is enough infrastructure to handle much more growth.  70% of Internet infrastructure isn’t utilized because nobody knows about these routes, and ISPs don’t have an excellent solution to monetize them on demand. They prefer to work based on fixed, predetermined contracts, which take a lot of time to negotiate and sign.

I think this is partially correct. As soon as one shifts from focusing on what appear to be legitimate online activities to more questionable and possibly illegal activities, evidence of persistent online services which are difficult for law enforcement to take down thrives. CyberBunker generated millions and required more than two years to knock offline and rein in the owners. There is more dimensionality in the ISP/provider sector than the author of the essay considers.

The knock-offline idea sounds good. One can point to the outages and the pain caused by Microsoft Azure/Microsoft Cloud, Google Cloud, Amazon, and others as points of weakness with as many vulnerabilities as a five-legged Achilles would have.

The reality is that the generalizations about centralization sound good, seem logical, and appear to follow the Arnold Law that says online services tend to centralization. Unfortunately new technologies exist which make it possible for more subtle approaches to put services online.

Plus, I am not sure how a company focused on a biomedical single source of truth fits into what is an emerging and diverse ecosystem of ISPs and service providers.

Stephen E Arnold, September 13, 2022
