Silicon Valley: The New Home of Unsportsmanlike Conduct
July 26, 2025
Sorry, no smart software involved. A dinobaby’s own emergent thoughts.
I read the Axios rundown of Mark Zuckerberg’s hiring blitz. “Mark Zuckerberg Details Meta’s Superintelligence Plans” reports:
The company [Mark Zuckerberg’s very own Meta] is spending billions of dollars to hire key employees as it looks to jumpstart its effort and compete with Google, OpenAI and others.
Meta (formerly the estimable juicy brand Facebook) had some smart software people. (Does anyone remember Jerome Pesenti?) Then there was Llama, which, like the guanaco (tamed and used to carry tourists to Peruvian sights), has been seen as a photo op for parents wanting to document their kids’ visit to Cusco.
Is Mr. Zuckerberg creating a mini Bell Labs in order to take the lead in smart software? The Axios write up contains some names of people who may have some connection to the Middle Kingdom. The idea is to get smart people, put them in a two-story building in Silicon Valley, turn up the A/C, and inject snacks.
I interpret the hiring and the allegedly massive pay packets as a simpler, more direct idea: Move fast, break things.
What are the things Mr. Zuckerberg is breaking?
First, I worked in Silicon Valley (aka Plastic Fantastic) for a number of years. I lived in Berkeley and loved that commute to San Mateo, Foster City, and environs. Poaching employees was done in a more relaxed way: a chat at a conference, a small gathering after a softball game at the public fields not far from Stanford (yes, the school which had a president who made up information), or at some event like a talk at the Computer Museum or whatever it was called. That’s history. Mr. Zuckerberg shows up (virtually or in a T shirt), offers an alleged $100 million, and hires a big name. No muss. No fuss. No social conventions. Just money. Cash. (I almost wish I were 25 and working in Mountain View. Sigh.)
Second, Mr. Zuckerberg is targeting the sensitive private parts of big leadership people. No dancing. Just targeted castration of key talent. Ouch. The Axios write up provides the names of some of these individuals. What is interesting is that these people come from the knowledge parts hidden from the journalistic spotlight. Those suffering life-changing removals without anesthesia include Google, OpenAI, and similar firms. In the good old days, Silicon Valley firms competed with less of that Manhattan, Lower East Side vibe. No more.
Third, Mr. Zuckerberg is not announcing anything at conferences or with friendly emails. He is just taking action. Let the people at Apple, Safe Superintelligence, and similar outfits read the news in a resignation email. Mr. Zuckerberg knows that those NDAs and employment contracts can be used to wipe away tears when the loss of a valuable person is discovered.
What’s up?
Obviously Mr. Zuckerberg is not happy that his outfit is perceived as a loser in the AI game. Will this Bell Labs West approach work? Probably not. It will deliver one thing, however. Mr. Zuckerberg is sending a message that he will spend money to cripple, hobble, and derail AI innovation at the firms beating his Llama LLM to death.
Move fast and break things has come to the folks who used the approach to take out swaths of established businesses. Now the technique is being used on companies next door. Welcome to the ungentrified neighborhood. Oh, expect more fist fights at those once friendly, co-ed softball games.
Stephen E Arnold, July 26, 2025
Will Apple Do AI in China? Subsidies, Investment, Saluting Too
July 25, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
Apple long ago vowed to use the latest tech to design its hardware. Now that means generative AI. Asia Financial reports, “Apple Keen to Use AI to Design Its Chips, Tech Executive Says.” That tidbit comes from a speech Apple VP Johny Srouji made as he accepted an award from tech R&D group Imec. We learn:
“In the speech, a recording of which was reviewed by Reuters, Srouji outlined Apple’s development of custom chips from the first A4 chip in an iPhone in 2010 to the most recent chips that power Mac desktop computers and the Vision Pro headset. He said one of the key lessons Apple learned was that it needed to use the most cutting-edge tools available to design its chips, including the latest chip design software from electronic design automation (EDA) firms. The two biggest players in that industry – Cadence Design Systems and Synopsys – have been racing to add artificial intelligence to their offerings. ‘EDA companies are super critical in supporting our chip design complexities,’ Srouji said in his remarks. ‘Generative AI techniques have a high potential in getting more design work in less time, and it can be a huge productivity boost.’”
Srouji also noted that Apple commits to its choices. The post continues:
“Srouji said another key lesson Apple learned in designing its own chips was to make big bets and not look back. When Apple transitioned its Mac computers – its oldest active product line – from Intel chips to its own chips in 2020, it made no contingency plans in case the switch did not work.”
Yes, that gamble paid off for the polished tech giant. Will this bet be equally advantageous?
Has Apple read “Apple in China”?
Cynthia Murrell, July 25, 2025
Lawyers Do What Lawyers Do: Revenues, AI, and Talk
July 22, 2025
A legal news service owned by LexisNexis now requires that every article be auto-checked for appropriateness. So what’s appropriate? Beyond Search does not know. However, here’s a clue. Harvard’s Nieman Lab reports, “Law360 Mandates Reporters Use AI Bias Detection on All Stories.” LexisNexis mandated the policy in May 2025. One of the LexisNexis professionals allegedly asserted that bias surfaced in reporting about the US government. The headline cited by VP Teresa Harmon read: “DOGE officials arrive at SEC with unclear agenda.” Um, okay.
Journalist Andrew Deck shares examples of wording the “bias” detection tool flagged in an article. The piece was a breaking story on a federal judge’s June 12 ruling against the administration’s deployment of the National Guard in LA. We learn:
“Several sentences in the story were flagged as biased, including this one: ‘It’s the first time in 60 years that a president has mobilized a state’s National Guard without receiving a request to do so from the state’s governor.’ According to the bias indicator, this sentence is ‘framing the action as unprecedented in a way that might subtly critique the administration.’ It was best to give more context to ‘balance the tone.’ Another line was flagged for suggesting Judge Charles Breyer had ‘pushed back’ against the federal government in his ruling, an opinion which had called the president’s deployment of the National Guard the act of ‘a monarchist.’ Rather than ‘pushed back,’ the bias indicator suggested a milder word, like ‘disagreed.’”
Having it sound as though anyone challenges the administration is obviously a bridge too far. How dare they? Deck continues:
“Often the bias indicator suggests softening critical statements and tries to flatten language that describes real world conflict or debates. One of the most common problems is a failure to differentiate between quotes and straight news copy. It frequently flags statements from experts as biased and treats quotes as evidence of partiality. For a June 5 story covering the recent Supreme Court ruling on a workplace discrimination lawsuit, the bias indicator flagged a sentence describing experts who said the ruling came ‘at a key time in U.S. employment law.’ The problem was that this copy, ‘may suggest a perspective.’”
Some Law360 journalists are not happy with their “owners.” Law360’s reporters and editors may not be on the same wavelength as certain LexisNexis / Reed Elsevier executives. In June 2025, unit chair Hailey Konnath sent a petition to management calling for use of the software to be made voluntary. At this time, Beyond Search thinks that “voluntary” has a different meaning in leadership’s lexicon.
Another assertion is that the software mandate appeared without clear guidelines. Was there a dash of surveillance and possible disciplinary action? To add zest to this publishing stew, the Law360 Union is negotiating with management to adopt clearer guidelines around the requirement.
What’s the software engine? Allegedly LexisNexis built the tool with OpenAI’s GPT-4.0 model. Deck notes LexisNexis is just one of many publishers now outsourcing questions of bias to smart software. (Smart software has been known for its own peculiarities, including hallucination, or making stuff up.) For example, in March 2025, the LA Times launched a feature dubbed “Insights” that auto-assesses opinion stories’ political slants and spits out AI-generated counterpoints. What could go wrong? Who knew that the KKK had an upside?
What happens when a large publisher gives Grok a whirl? What if a journalist uses these tools and does not catch a “glue cheese on pizza” moment? Senior managers trained in accounting, MBA get-it-done recipes, and (dare I say it) law may struggle to reconcile cost, profit, fear, and smart software.
But what about facts?
Cynthia Murrell, July 22, 2025
Why Customer Trust of Chatbots Does Not Matter
July 22, 2025
Just a dinobaby working the old-fashioned way, no smart software.
The need for a winner is pile-driving AI into consumer online interactions. But like the piles under the San Francisco Leaning Tower of Insurance Claims, the piles cannot stop the sag, the tilt, and the sight of a giant edifice listing.
I read an article in the “real” news service called Fox News. The story’s title is “Chatbots Are Losing Customer Trust Fast.” The write up is the work of the CyberGuy, so you know it is on the money. The write up states:
While companies are excited about the speed and efficiency of chatbots, many customers are not. A recent survey found that 71% of people would rather speak with a human agent. Even more concerning, 60% said chatbots often do not understand their issue. This is not just about getting the wrong answer. It comes down to trust. Most people are still unsure about artificial intelligence, especially when their time or money is on the line.
So what? Customers are essentially irrelevant. As long as the outfit hits its real or imaginary revenue goals, the needs of the customer are not germane. If you don’t believe me, navigate to a big online service like Amazon and try to find the telephone number for customer service. Let me know how that works out.
Because managers cannot “fix” human-centric systems, using AI is a way out. Letting AI do it is a heck of a lot easier than figuring out a workflow, working with humans, and responding to customer issues. The old excuse was that middle management was not needed when decisions were pushed down to the “workers.”
AI flips that. Managerial ranks have been reduced. AI decisions come from “leadership” or what I call carpetland. AI solves three problems at once: actually managing, cutting costs, and generating good news for investor communications.
The customers don’t want to talk to software. The customer wants to talk to a human who can change a reservation without automatically billing for a service charge. The customer wants a person to adjust a double billing from a hotel doing business as Snap Commerce Holdings. The customer wants a fair shake.
AI does not do fair. AI does baloney, confusion, errors, and hallucinations. I tried a new service which put Google Gemini front and center. I asked one question and got an incomplete and erroneous answer. That’s AI today.
The CyberGuy’s article says:
If a company is investing in a chatbot system, it should track how well that system performs. Businesses should ask chatbot vendors to provide real-world data showing how their bots compare to human agents in terms of efficiency, accuracy and customer satisfaction. If the technology cannot meet a high standard, it may not be worth the investment.
This is simply not going to happen. Deployment equals cost savings. Only when the money goes away will someone in leadership take action. Why? AI has put many outfits in a precarious position. Big money has been spent. Much of that money comes from other people. Those “other people” want profits, not excuses.
I heard a sci-fi rumor that suggests Apple can buy OpenAI and catch up. Apple can pay OpenAI’s investors and make good on whatever promissory payments have been offered by that firm’s leadership. Will that solve the problem?
Nope. The AI firms talk about customers but don’t care. Dealing with customers abused by intentionally shady business practices cooked up by a committee that has to do something is too hard and too costly. Let AI do it.
If the CyberGuy’s write up is correct, some excitement is speeding down the information highway toward some well-known smart software companies. A crash at one of the big boys’ junctions will cause quite a bit of collateral damage.
Whom do you trust? Humans or smart software?
Stephen E Arnold, July 22, 2025
What Did You Tay, Bob? Clippy Did What!
July 21, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
I was delighted to read “OpenAI Is Eating Microsoft’s Lunch.” I don’t care who or what wins the great AI war. So many dollars have been bet that hallucinating software is the next big thing. Most content flowing through my dinobaby information system is political. I think this food story is a refreshing change.
So what’s for lunch? The write up seems to suggest that Sam AI-Man has not only snagged a morsel from the Softies’ lunch pail but might be prepared to snap at those delicate ladyfingers too. The write up says:
ChatGPT has managed to rack up about 10 times the downloads that Microsoft’s Copilot has received.
Are these data rock solid? Probably not, but the idea that two “partners” who forced Googzilla to spasm each time its Code Red lights flashed are not cooperating is fascinating. The write up points out that when Microsoft and OpenAI were deeply in love, Microsoft had the jump on the smart software contenders. The article adds:
Despite that [early lead], Copilot sits in fourth place when it comes to total installations. It trails not only ChatGPT, but Gemini and Deepseek.
Shades of Windows Phone. Another next big thing muffed by the bunnies in Redmond. How could an innovation powerhouse like Microsoft fail in the flaming maelstrom of burning cash that is AI? Microsoft’s long history of innovation adds a turbo boost to its AI initiatives. The Bob-, Clippy-, and Tay-inspired Copilot is available to billions of Microsoft Windows users. It is … everywhere.
The write up explains the problem this way:
Copilot’s lagging popularity is a result of mismanagement on the part of Microsoft.
This is an amazing insight, isn’t it? Here’s the stunning wrap up to the article:
It seems no matter what, Microsoft just cannot make people love its products. Perhaps it could try making better ones and see how that goes.
To be blunt, the problem at Microsoft is evident in many organizations. For example, we could ask IBM Watson what Microsoft should do. We could fire up Deepseek and get some China-inspired insight. We could do a Google search. No, scratch that. We could do a Yandex.ru search and ask, “Microsoft AI strategy repair.”
I have a more obvious dinobaby suggestion: “Make Microsoft smaller.” And play well with others. Silly ideas, I know.
Stephen E Arnold, July 21, 2025
Xooglers Reveal Googley Dreams with Nightmares
July 18, 2025
Just a dinobaby without smart software. I am sufficiently dull without help from smart software.
Fortune Magazine published a business school analysis of a Googley dream and its nightmares titled “As Trump Pushes Apple to Make iPhones in the U.S., Google’s Brief Effort Building Smartphones in Texas 12 Years Ago Offers Critical Lessons.” The author, Mr. Kopytoff, states:
Equivalent in size to nearly eight football fields, the plant began producing the Google Motorola phones in the summer of 2013.
Mr. Kopytoff notes:
Just a year later, it was all over. Google sold the Motorola phone business and pulled the plug on the U.S. manufacturing effort. It was the last time a major company tried to produce a U.S. made smartphone.
Yep, those Googlers know how to do moon shots. They also produce some digital rocket ships that explode on the launch pads, never achieving orbit.
What happened? You will have to read the pork loin write up, but the Fortune editors did include a summary of the main point:
Many of the former Google insiders described starting the effort with high hopes but quickly realized that some of the assumptions they went in with were flawed and that, for all the focus on manufacturing, sales simply weren’t strong enough to meet the company’s ambitious goals laid out by leadership.
My translation of Fortune-speak is: “Google was really smart. Therefore, the company could do anything. Then, when the genius leadership gets the bill, a knee-jerk reaction kills the project, and everyone moves on as if nothing happened.”
Here’s a passage I found interesting:
One of the company’s big assumptions about the phone had turned out to be wrong. After betting big on U.S. assembly, and waving the red, white, and blue in its marketing, the company realized that most consumers didn’t care where the phone was made.
Is this statement applicable to people today? It seems that I hear more about costs than I did last year. At a 4th of July hoedown, I heard:
- “The prices at Kroger go up each week.”
- “I wanted to trade in my BMW but the prices were crazy. I will keep my car.”
- “I go to the Dollar Store once a week now.”
What’s this got to do with the Fortune tale of the Google wizards’ leadership goof and Apple (if it actually tries to build an iPhone in Cleveland)?
Answer: Costs and expertise. Thinking one is smart and clever is not enough. One has to do more than spend big money, talk in a supercilious manner, and go silent when the crazy “moon shot” explodes before reaching orbit.
But the real moral of the story is that it is political. That may be more problematic than the Google fail and Apple’s bitter cider. It may be time to harvest the fruit of tech leadership’s decisions.
Stephen E Arnold, July 18, 2025
Swallow Your AI Pill or Else
July 18, 2025
Just a dinobaby without smart software. I am sufficiently dull without help from smart software.
Annoyed at the next big thing? I find it amusing, but a fellow with the alias of “Honest Broker” (is that an oxymoron?) sure seems to be upset with smart software. Let me make clear my personal view of smart software; specifically, the outputs and the applications are a blend of the stupid, the semi-useful, and the dangerous. My team and I have access to smart software, some running locally on one of my workstations and some running in the “it isn’t cheap, is it” cloud.
The write up is titled “The Force-Feeding of AI on an Unwilling Public: This Isn’t Innovation. It’s Tyranny.” The author, it seems, is bristling at how 21st century capitalism works. News flash: It doesn’t work for anyone except the stakeholders. When the stakeholders are employees and the big outfit fires some stakeholders, awareness dawns. Work for a giant outfit and get to the top of the executive pile. Alternatively, become an expert in smart software and earn lots of money, not a crappy car like we used to give certain high performers. This is cash, folks.
The argument in the polemic is that outfits like Amazon, Google, and Microsoft, et al, are forcing their customers to interact with systems infused with “artificial intelligence.” Here’s what the write up says:
“The AI business model would collapse overnight if they needed consumer opt-in. Just pass that law, and see how quickly the bots disappear.”
My hunch is that the smart software companies lobbied to get the US government to slow walk regulation of smart software. Not long ago, wizards circulated a petition which suggested a moratorium on certain types of smart software development. Those who advocate peace don’t want smart software in weapons. (News flash: Check out how Ukraine is using smart software to terminate with extreme prejudice individual Z troops in a latrine. Yep, smart software and a bit of image recognition.)
Let me offer several observations:
- For most people, technology is getting money from an automatic teller machine and using a mobile phone. Smart software is just sci-fi magic. Full stop.
- The companies investing big money in smart software have to make it “work” well enough to recover their investment and (hopefully) fill railroad freight cars with cash or big crypto transfers. To make something work, deception will be required. Full stop.
- The products and services infused with smart software will accelerate the degradation of software. Today’s smart software is a recycler. Feed it garbage; it outputs garbage. Maybe a phase change innovation will take place. So far, we have more examples of modest success or outright disappointment. From my point of view, core software is not made better with black box smart software. Someday, but today is not the day.
I like the zestiness of the cited write up. Here’s another news flash: The big outfits pumping billions into smart software are relentless. If laws worked, the EU and other governments would not be taking these companies to court with remarkable regularity. Laws don’t seem to work when US technology companies are “innovating.”
Have you ever wondered if the film Terminator was sent to the present day by aliens? Forget the pyramid stuff. Terminator is a film used by an advanced intelligence to warn us humanoids about the dangers of smart software.
The author of the screed about smart software has accomplished one thing. If smart software turns on humanoids, I can identify a person who will be on a list for in-depth questioning.
I love smart software. I think the developers need some recognition for their good work. I believe the “leadership” of the big outfits investing billions are doing it for the good of humanity.
I also have a bridge in Brooklyn for sale… cheap. Oh, and I would suggest a better analogy: the medical device by which liquid is introduced into the user’s system, typically to stimulate evacuation of the wallet.
Stephen E Arnold, July 18, 2025
Google Is Great. Its AI Is the Leader, Just As Philco Was
July 15, 2025
No smart software involved with this blog post. (An anomaly I know.)
The Google and its Code Red, Code Yellow, or whatever has to pull a revenue rabbit out of its aging Stetson. (It is a big Stetson too.) Microsoft found a way to put Googzilla on its back paw in January 2023. Mr. Nadella announced a deal with OpenAI and ignited the Softies to put Copilot in everything, including the ASCII editor Notepad.
Google demonstrated a knee-jerk reaction. Put Prabhakar in Paris to do a stand up about Google AI. Then Google reorganized its smart software activities… sort of. Since then, the wizards at Google have pushed out AI products like toothpaste from a tube crushed by a Stanford University computer science professor’s flip-flops. Suffice it to say there are many Google AI products and services. I gave up trying to keep track of them months ago.
What’s happened? Old-school Google searches are real work now. Some sites have said that Google referral traffic is down a third or more.
What’s up?
“Google Faces Threat That Could Destroy Its Business” offers what I would characterize as a Wall Street MBA view of the present day Google. The write up says:
As the AI boom continues to transform the landscape of the tech world, a new type of user behavior has begun to gain popularity on the web. It’s called zero-click search, and it means a person searches for something and gets the answer they want without clicking a single link. There are several reasons for this, including the AI Overview section that Google has added to the top of many search result pages. This isn’t a bad thing, but what’s interesting is why Google is leaning into AI Overview in the first place: millions of people are opening ChatGPT instead of Google to search for the things they want to know.
The cited passage suggests that Google is embracing zero-click search, essentially marginalizing the old-school list of links. Google has made this decision because of or in response to OpenAI. Lurking between the lines of the paragraph is the question, “What the heck is Google doing?”
On July 9, Reuters exclusively reported that OpenAI would soon launch its own web browser to challenge Google Chrome’s dominance.
This follows OpenAI’s statement that it would like to buy the Chrome browser if the US government forces Google to sell its ubiquitous data collection interface with users. Start-ups are building browsers. Perplexity is building a browser. The difference is that OpenAI and Perplexity will use AI as plumbing, not an add-on. Chrome is built as a Web 1 and Web 2 service. OpenAI and Perplexity are likely to go straight for Web 3 functionality.
What’s that look like? I am not sure, but it will not come from some code originally cooked up someplace like Denmark and refurbished many times to the ubiquitous product we have today.
My view is that Google is somewhat disorganized when it comes to smart software. As the company tries to revolutionize medicine, create smart maps, and build expensive self-driving taxis, people are gravitating to ChatGPT, which is now a brand like Kleenex or Xerox. Perplexity is a fan favorite at the moment as well. To add some spice to the search recipe, Anthropic and outfits like China Telecom are busy innovating.
What about Google? We are about to learn how a former blue chip consultant will give Google more smarts. Will that intelligence keep the money flowing and growing? Why be a Debbie Downer? Google is the greatest thing since sliced bread. Those legal actions are conspiracies fueled by jealous competitors. Those staff cutbacks? Just efficiencies. Those somewhat confusing AI products and services? Hey, you are just not sufficiently Googley to see the brilliance of Googzilla’s strategy.
Okay, I agree. Google is wonderful and the Wall Street MBA type analysis is wonky, probably written with help from Grok or Mistral. Google is and will be wonderful. You can search for examples too. Give Perplexity a try.
Stephen E Arnold, July 15, 2025
Killing Consulting: Knowledge Draculas Live Forever
July 14, 2025
No smart software involved with this blog post. (An anomaly I know.)
I read an opinion piece published in the Substack system. The article’s title is “The Consulting Crash Is Coming.” This title is in big letters. The write up delivers big news to people who probably did not work at large consulting companies; specifically, the blue chip outfits like McKinsey, Bain, BCG, Booz Allen, and a handful of others.
The point of the write up is that large language models will put a stake in the consulting Draculas.
I want to offer several observations as a former full-time professional at one of the blue chip outfits and a contractor for a couple of other pay-for-knowledge services firms.
First, assume that Mr. Nocera is correct. Whatever consulting companies remain in business will have professionals with specific work processes developed by the blue chip consulting firms. Boards, directors, investors, non-governmental organizations, individual rich people, and institutions like government agencies and major academic institutions want access to the people and the knowledge value of the blue chip consulting firms. A consulting company may become smaller, but the entity will adapt. Leaders of organizations in the sectors I identified will hire these firms. An A.T. Kearney-type firm may be disappeared, but for the top tier, resiliency is part of the blue chip DNA.
Second, anyone familiar with Stuart Kauffman (Santa Fe Institute) knows his notions of spontaneous order, the adjacent possible, and innovations creating more innovations. As a result of this de facto inferno of novelty and change, smart people will want to hire other smart people in the hopes of learning something useful. One can ask a large language model or its new and improved versions. However, blue chip consulting firms and the people they attract usually come up with orthogonal ideas and questions. The knowledge exercise builds the client’s mental strength. That is what brings clients to the blue chip firm’s door.
Third, blue chip consulting firms can be useful proxies for [a] reorganizing a unit and removing a problematic officer, [b] figuring out what company to buy, how to chop it up, and how to sell off the parts for a profit, and [c] thinking about clever ways to deploy new technology because the blue chip professionals have firsthand expertise from many different companies and their work processes. Where did synthetic bacon bits originate? Answer: A blue chip consulting company. A food company just paid the firm to assemble a team to come up with a new product. Bingo. Big seller.
Fourth, hiring a blue chip consulting firm conveys prestige to some clients. Many senior executives suffer from imposter syndrome. Many are not sure what happened to generate so much cash and market impact. The blue chip firm delivers “colleagues” who function to reduce the senior executive’s anxiety. Those senior executives will pay. My boss challenged Jack Welch in a double or nothing bet worth millions in consulting fees regarding a specific report. Mr. Welch loved the challenge from a mere consulting firm. The bet was to deliver specific high value information. We did. Mr. Welch paid the bill for the report he didn’t like and the one that doubled the original fee. He said, “We will hire you guys again. You think the way I do.”
Net net: Bring on the LLMs, the AI, the smart back office workflows. The blue chip consulting firms may downsize; they may recalibrate; they will not go away. Like Draculas, they keep getting invited back to suck fees, and they will probably live forever.
Stephen E Arnold, July 14, 2025
Just What You Want: Information about Footnotes
July 11, 2025
No smart software to write this essay. This dinobaby is somewhat old fashioned.
I am completing my 14th monograph. Some of these 150-plus-page documents became books. Examples range from The Google Legacy, published in 2003 for a client and then as a public document in 2004 by Infonortics Ltd., a specialty publisher somewhere in England. Others were published by Panda Press in Sweden. Martin White and I published a book about enterprise search management, and I do not recall what outfit published the book. When I started writing texts to accompany my lectures for ISS Telestrategies, the US National Cyber Crime events, and other specialized conferences, I decided to generate Adobe PDF files and make these “books” available to those in my classes and lectures. Dark Web Notebook and CyberOSINT were “self published.” Why? The commercial specialty publishers were going out of business or did not have a way to market the books I wrote. I wrote a couple of monographs about Japan’s investments in database technology in the early 1990s for the US Office of Technology Assessment. But I have lost track of these “books.”
When I read “Give Footnotes the Boot,” I thought about how I had handled “notes” in my long form writings. For this blog which is a collection of “notes” to myself given the appearance of an essay, I usually cite an article. I then add my preliminary thoughts about the write up, usually including a couple of the source document’s “interesting” statements. The blog, therefore, is an online notebook with 20,000 plus entries written for an audience of one: Me.
I noted that the cited “footnote” article says:
If the footnote markers are links, then the user can use the back button/gesture to return to the main content. But, even though this restores the previous scroll position, the user is still left with the challenge of finding their previous place in a wall of text. We could try to solve that problem by dynamically pulling the content from the footnotes and displaying it in a popover. In some browsers (including yours) that will display like a tooltip, pointing directly back to the footnote marker. Thanks to modern web features, this can be done entirely without JavaScript. But this is still shit! I see good, smart people, who’d always avoid using “click here” as link text, littering their articles with link texts such as 1, 7, and sometimes even 12. Not only is this as contextless as “click here”, it also provides the extra frustration of a tiny-weeny hit target. Update: Adrian Roselli pointed out that there are numerous bugs with accessibility tooling and superscript. And all this for what? To cargo-cult academia? Stop it! Stop it now! Footnotes are a shitty hack built on the limitations of printed media. It’s dumb to build on top of those limitations when they don’t exist on the web platform. So I ask you to break free of footnotes and do something better.
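For the curious, here is what that “dynamically pulling the content from the footnotes and displaying it in a popover” idea can look like in practice. This is a minimal sketch of my own, not code from the cited essay; the footnote-ref class and the fn… ids are assumptions about typical footnote markup, and the sketch leans on the browser’s native popover attribute rather than a positioning library.

```typescript
// Minimal sketch: replace numeric footnote links with buttons that toggle
// native popovers carrying the footnote text. Assumes (hypothetically)
// markers such as <a href="#fn2" class="footnote-ref">2</a> and footnotes
// with matching ids such as <li id="fn2">…</li>.
document.querySelectorAll<HTMLAnchorElement>("a.footnote-ref").forEach((marker) => {
  const id = marker.hash.slice(1); // "#fn2" -> "fn2"
  const note = document.getElementById(id);
  if (!note) return;

  // Copy the footnote text into an element flagged as a native popover.
  const pop = document.createElement("div");
  pop.id = `pop-${id}`;
  pop.setAttribute("popover", "auto");
  pop.textContent = note.textContent ?? "";

  // A button with popovertarget toggles the popover declaratively (no
  // event handlers), so the reader never loses their place in the text.
  const button = document.createElement("button");
  button.textContent = marker.textContent ?? "note";
  button.setAttribute("popovertarget", pop.id);
  marker.replaceWith(button, pop);
});
```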
The essay omits one option; that is, just write as if the information in the chapter, book, or paragraph is common knowledge. The result is fewer footnotes.
I am giving this footnote-free approach a try in the book I am working on to accompany my lectures about Telegram for law enforcement, cyber attorneys, and intelligence professionals. I know that most people do not know that a specific quote I include from Pavel Durov originated from a Russian-language blog. However, citing the Russian blog, presenting the title of the blog post in Cyrillic, including the English translation, and adding comments like “no longer online” would be the appropriate way to let my reader know I did not make up Pavel’s statement about having more than 100 children.
I am assuming that every person on earth knows that Pavel thinks he is a super human and has the duty to spawn more Pavels.
How will this work out? My hunch is that my readers will use my Telegram Labyrinth monograph to get oriented to a service alleged to be a criminal enterprise by the French judiciary. If someone wants to know where one of my “facts” originates, I will go through my notes, including blog posts, for the link to the document I read. Will those sources be findable in 2025 when the book comes out? Probably not.
Online information is disappearing at an alarming rate. The search systems I use “disappear” content even though I have a PDF of the source document in my electronic file. Intermediaries go out of business or filters block access to content.
I like the ideas in Jake Archibald’s essay. I also like the academic rigor of footnotes. But for the Telegram Labyrinth, I am minimizing footnotes. I assume that every investigator, intelligence professional, and government lawyer will know about Telegram. Therefore, what’s in my new book is common knowledge. That means, “Sorry, Miss Dalton, Stevie is dumping 95 percent of the footnotes.” (I should footnote that Miss Dalton was one of my teachers who wanted footnotes in Modern Language Association style for every single thing her students wrote.) Nope. Blame Web rot, blame my laziness, blame the wild social media environment.
You will live and probably have some of that Telegram common knowledge refreshed; for example, the Telegram programming language FIFT is like FORTH, only better. Get the pun? The Durovs have a sense of humor.
Stephen E Arnold, July 11, 2025