Reflecting on the Value Loss from a Security Failure
May 6, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Right after the October 2023 security lapse in Israel, I commented to one of the founders of a next-generation Israeli intelware developer, “Quite a security failure.” The response was, “It is Israel’s 9/11.” One of the questions that kept coming to my mind was, “How could such sophisticated intelligence systems, software, and personnel have dropped the ball?” I have arrived at an answer: Belief in the infallibility of in situ systems. Now I am thinking about the cost of a large-scale security lapse.
It seems the young workers are surprised the security systems did not work. Thanks, MSFT Copilot. Good enough which may be similar to some firms’ security engineering.
Globes published “Big Tech 50 Reveals Sharp Falls in Israeli Startup Valuations.” The write up provides some insight into the business cost of security which did not live up to its marketing. The write up says:
The Israeli R&D partnership has reported to the TASE [Tel Aviv Stock Exchange] that 10 of the 14 startups in which it has invested have seen their valuations decline.
Interesting.
What strikes me is that the cost of a security lapse is obviously personal and financial. One of the downstream consequences is a loss of confidence or credibility. Israel’s hardware and software security companies have had, in my opinion, a visible presence at conferences addressing specialized systems and software. The marketing of the capabilities of these systems has been maturing and becoming more like Madison Avenue efforts.
I am not sure which is worse: The loss of “value” or the loss of “credibility.”
If we transport the question about the cost of a security lapse to large US high-technology company, I am not sure a Globes’ type of article captures the impact. Frankly, US companies suffer security issues on a regular basis. Only a few make headlines. And then the firms responsible for the hardware or software which are vulnerable because of poor security issue a news release, provide a software update, and move on.
Several observations:
- The glittering generalities about the security of widely used hardware and software is simply out of step with reality
- Vendors of specialized software such as intelware suggest that their systems provide “protection” or “warnings” about issues so that damage is minimized. I am not sure I can trust these statements.
- The customers, who may have made security configuration errors, have the responsibility to set up the systems, update, and have trained personnel operate them. That sounds great, but it is simply not going to happen. Customers are assuming what they purchase is secure.
Net net: The cost of security failure is enormous: Loss of life, financial disaster, and undermining the trust between vendor and customer. Perhaps some large outfits should take the security of the products and services they offer beyond a meeting with a PR firm, a crisis management company, or a go-go marketing firm? The “value” of security is high, but it is much more than a flashy booth, glib presentations at conferences, or a procurement team assuming what vendors present correlates with real world deployment.
Stephen E Arnold, May 6, 2024
Telegram Barks, Whines, and Wants a Treat
April 25, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Tucker Carlson, an American TV star journalist lawyer person, had an opportunity to find his future elsewhere after changes at Rupert Murdoch’s talking heads channel. The future, it seems, is for Mr. Carlson to present news via Telegram, which is an end-to-end-encrypted messaging platform. It features selectable levels of encryption. Click enough and the content of the data passed via the platform is an expensive and time consuming decryption job. Mr. Carlson wanted to know more about his new broadcast home. It appears that as part of the tie up between Mr. Carlson and Mr. Durov, the latter would agree to a one-hour interview with the usually low profile, free speech flag waver Pavel Durov. You can watch the video on YouTube and be monitored by those soon-to-be-gone cookies or on Telegram and be subject to its interesting free speech activities.
A person dressed in the uniform of an unfriendly enters the mess hall of a fighting force engaged in truth, justice, and the American way. The bold lad in red forgets he is dressed as an enemy combatant and does not understand why everyone is watching him with suspicion or laughter because he looks like a fool or a clueless dolt. Thanks, MSFT Copilot. Good enough. Any meetings in DC today about security?
Pavel Durov insists that he not as smart as his brother. He tells Mr. Carlson [bold added for emphasis. Editor]:
So Telegram has been a tool for those to a large extent. But it doesn’t really matter whether it’s opposition or the ruling party that is using Telegram for us. We apply the rules equally to all sides. We don’t become prejudiced in this way. It’s not that we are rooting for the opposition or we are rooting for the ruling party. It’s not that we don’t care. But we think it’s important to have this platform that is neutral to all voices because we believe that the competition of different ideas can result in progress and a better world for everyone. That’s in stark contrast to say Facebook which has said in public. You know we tip the scale in favor of this or that movement and this or that country all far from the west and far from Western media attention. But they’ve said that what do you think of that tech companies choosing governments? I think that’s one of the reasons why we ended up here in the UAE out of all places right? You don’t want to be geopolitically aligned. You don’t want to select the winners in any of these political fights and that’s why you have to be in a neutral place. … We believe that Humanity does need a neutral platform like Telegram that would be respectful to people’s privacy and freedoms.
Wow, the royal “we.” The word salad. Then the Apple editorial control.
Okay, the flag bearer for secure communications yada yada. Do I believe this not-as-smart-as-my-brother guy?
No.
Mr. Pavlov says one thing and then does another, endangering lives and creating turmoil among those who do require secure communications. Whom you may ask? How about intelligence operatives, certain war fighters in Ukraine and other countries in conflict, and experts working on sensitive commercial projects. Sure, bad actors use Telegram, but that’s what happens when one embraces free speech.
Now it seems that Mr. Durov has modified his position to sort-of free speech.
I learned this from articles like “Telegram to Block Certain Content for Ukrainian Users” and “Durov: Apple Demands to Ban Some Telegram Channels for Users with Ukrainian SIM Cards.”
In the interview between two estimable individuals, Mr. Durov made the point that he was approached by individuals working in US law enforcement. In very nice language, Mr. Durov explained they were inept, clumsy, and focused on getting access to the data in his platform. He pointed out that he headed off to Dubai, where he could operate without having to bow down, lick boots, sell out, or cooperate with some oafs in law enforcement.
But then, I read about Apple demanding that Telegram curtail free speech for “some” individuals. Well, isn’t that special? Say one thing, criticize law enforcement, and then roll over for Apple. That is a company, as I recall, which is super friendly with another nation state somewhat orthogonal to the US. Furthermore, Apple is proud of its efforts to protect privacy. Rumors suggest Apple is not too eager to help out some individuals investigating crimes because the sacred iPhone is above the requirements of a mere country… with exceptions, of course. Of course.
The article “Durov: Apple Demands to Ban Some Telegram Channels for Users with Ukrainian SIM Cards” reports:
Telegram founder Pavel Durov said that Apple had sent a request to block some Telegram channels for Ukrainian users. Although the platform’s community usually opposes such blocking, the company has to listen to such requests in order to keep the app available in the App Store.
Why roll over? The write up quotes Mr. Durov as saying:
…, it doesn’t always depend on us.
Us. The royal we again. The company is owned by Mr. Durov. The smarter brother is a math genius like two PhDs and there are about 50 employees. “Us.” Who are the people in the collective consisting of one horn blower?
Several observations:
- Apple has more power or influence over Telegram than law enforcement from a government
- Mr. Durov appears to say one thing and then do the opposite, thinking no one will notice maybe?
- Relying on Telegram for secure communications may not be the best idea I have heard today.
Net net: Is this a “signal” that absolutely no service can be trusted? I don’t have a scorecard for trust bandits, but I will start one I think. In the meantime, face-to-face in selected locations without mobile devices may be one option to explore, but it sure is easy to use Telegram to transmit useful information to a drone operator in order to obtain a desire outcome. Like Mr. Snowden, Mr. Durov has made a decision. Actions have consequences; word sewage may not.
Stephen E Arnold, April 25, 2024
Information: Cheap, Available, and Easy to Obtain
April 9, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I worked in Sillycon Valley and learned a few factoids I found somewhat new. Let me highlight three. First, a person with whom my firm had a business relationship told me, “Chinese people are Chinese for their entire life.” I interpreted this to mean that a person from China might live in Mountain View, but that individual had ties to his native land. That makes sense but, if true, the statement has interesting implications. Second, another person told me that there was a young person who could look at a circuit board and then reproduce it in sufficient detail to draw a schematic. This sounded crazy to me, but the individual took this person to meetings, discussed his company’s interest in upcoming products, and asked for briefings. With the delightful copying machine in tow, this person would have information about forthcoming hardware, specifically video and telecommunications devices. And, finally, via a colleague I learned of an individual who was a naturalized citizen and worked at a US national laboratory. That individual swapped hard drives in photocopy machines and provided them to a family member in his home town in Wuhan. Were these anecdotes true or false? I assumed each held a grain of truth because technology adepts from China and other countries comprised a significant percentage of the professionals I encountered.
Information flows freely in US companies and other organizational entities. Some people bring buckets and collect fresh, pure data. Thanks, MSFT Copilot. If anyone knows about security, you do. Good enough.
I thought of these anecdotes when I read an allegedly accurate “real” news story called “Linwei Ding Was a Google Software Engineer. He Was Also a Prolific Thief of Trade Secrets, Say Prosecutors.” The subtitle is a bit more spicy:
U.S. officials say some of America’s most prominent tech firms have had their virtual pockets picked by Chinese corporate spies and intelligence agencies.
The write up, which may be shaped by art history majors on a mission, states:
Court records say he had others badge him into Google buildings, making it appear as if he were coming to work. In fact, prosecutors say, he was marketing himself to Chinese companies as an expert in artificial intelligence — while stealing 500 files containing some of Google’s most important AI secrets…. His case illustrates what American officials say is an ongoing nightmare for U.S. economic and national security: Some of America’s most prominent tech firms have had their virtual pockets picked by Chinese corporate spies and intelligence agencies.
Several observations about these allegedly true statements are warranted this fine spring day in rural Kentucky:
- Some managers assume that when an employee or contractor signs a confidentiality agreement, the employee will abide by that document. The problem arises when the person shares information with a family member, a friend from school, or with a company paying for information. That assumption underscores what might be called “uninformed” or “naive” behavior.
- The language barrier and certain cultural norms lock out many people who assume idle chatter and obsequious behavior signals respect and conformity with what some might call “US business norms.” Cultural “blindness” is not uncommon.
- Individuals may possess technical expertise unknown to colleagues and contracting firms offering body shop services. Armed with knowledge of photocopiers in certain US government entities, swapping out a hard drive is no big deal. A failure to appreciate an ability to draw a circuit leads to similar ineptness when discussing confidential information.
America operates in a relatively open manner. I have lived and worked in other countries, and that openness often allows information to flow. Assumptions about behavior are not based on an understanding of the cultural norms of other countries.
Net net: The vulnerability is baked in. Therefore, information is often easy to get, difficult to keep privileged, and often aided by companies and government agencies. Is there a fix? No, not without a bit more managerial rigor in the US. Money talks, moving fast and breaking things makes sense to many, and information seeps, maybe floods, from the resulting cracks. Whom does one trust? My approach: Not too many people regardless of background, what people tell me, or what I believe as an often clueless American.
Stephen E Arnold, April 9, 2024
Viruses Get Intelligence Upgrade When Designed With AI
March 21, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Viruses are still a common problem on the Internet despite all the PSAs, firewalls, antiviral software, and other precautions users take to protect their technology and data. Intelligent and adaptable viruses have remained a concept of science-fiction but bad actors are already designing them with AI. It’s only going to get worse. Tom’s Hardware explains that an AI virus is already wreaking havoc: “AI Worm Infects Users Via AI-Enabled Email Clients-Morris II Generative AI Worm Steals Confidential Data As It Spreads.”
The Morris II Worm was designed by researchers Ben Nassi of Cornell Tech, Ron Button from Intuit, and Stav Cohen from the Israel Institute of Technology. They built the worm to understand how to better combat bad actors. The researchers named it after the first computer worm Morris. The virus is a generative AI for that steals data, spams with email, spreads malware, and spreads to multiple systems.
Morris II attacks AI apps and AI-enabled email assistants that use generative text and image engines like ChatGPT, LLaVA, and Gemini Pro. It also uses adversarial self-replicating prompts. The researchers described Morris II’s attacks:
“ ‘The study demonstrates that attackers can insert such prompts into inputs that, when processed by GenAI models, prompt the model to replicate the input as output (replication) and engage in malicious activities (payload). Additionally, these inputs compel the agent to deliver them (propagate) to new agents by exploiting the connectivity within the GenAI ecosystem. We demonstrate the application of Morris II against GenAI-powered email assistants in two use cases (spamming and exfiltrating personal data), under two settings (black-box and white-box accesses), using two types of input data (text and images).’”
The worm continues to harvest information and update it in databases. The researchers shared their information with OpenAI and Google. OpenAI responded by saying the organization will make its systems more resilient and advises designers to watch out for harmful inputs. The advice is better worded as “sleep with one eye open.”
Whitney Grace, March 21, 2024
Worried about TikTok? Do Not Overlook CapCut
March 18, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I find the excitement about TikTok interesting. The US wants to play the reciprocity card; that is, China disallows US apps so the US can ban TikTok. How influential is TikTok? US elected officials learned first hand that TikTok users can get messages through to what is often a quite unresponsive cluster of elected officials. But let’s leave TikTok aside.
Thanks, MSFT Copilot. Good enough.
What do you know about the ByteDance cloud software CapCut? Ah, you have never heard of it. That’s not surprising because it is aimed at those who make videos for TikTok (big surprise) and other video platforms like YouTube.
CapCut has been gaining supporters like the happy-go-lucky people who published “how to” videos about CapCut on YouTube. On TikTok, CapCut short form videos have tallied billions of views. What makes it interesting to me is that it wants to phone home, store content in the “cloud”, and provide high-end tools to handle some tricky video situations like weird backgrounds on AI generated videos.
The product CapCut was named (I believe) JianYing or Viamaker (the story varies by source) which means nothing to me. The Google suggests its meanings could range from hard to paper cut out. I am not sure I buy these suggestions because Chinese is a linguistic slippery fish. Is that a question or a horse? In 2020, the app got a bit of shove into the world outside of the estimable Middle Kingdom.
Why is this important to me? Here are my reasons for creating this short post:
- Based on my tests of the app, it has some of the same data hoovering functions of TikTok
- The data of images and information about the users provides another source of potentially high value information to those with access to the information
- Data from “casual” videos might be quite useful when the person making the video has landed a job in a US national laboratory or in one the high-tech playgrounds in Silicon Valley. Am I suggesting blackmail? Of course not, but a release of certain imagery might be an interesting test of the videographer’s self-esteem.
If you want to know more about CapCut, try these links:
- Download (ideally to a burner phone or a PC specifically set up to test interesting software) at www.capcut.com
- Read about the company CapCut in this 2023 Recorded Future write up
- Learn about CapCut’s privacy issues in this Bloomberg story.
Net net: Clever stuff but who is paying attention. Parents? Regulators? Chinese intelligence operatives?
Stephen E Arnold, March 18, 2024
The NSO Group Back in the News: Is That a Good Thing?
January 24, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Some outfits struggle to get PR, not the NSO Group. The situation is no “dream.” I spotted this write up in 9 to 5 Mac: “Apple Wins Early Battle against NSO after Suing Spyware Mercenaries for Attacking iPhone Users.” For me, the main point of the article is:
Judge Donato ruled that NSO Group’s request for dismissal in the US in favor of a trial in Israel didn’t meet the bar. Instead, Judge Donato suggested that Apple would face the same challenges in Israel that NSO faces in the US.
A senior manager who is an attorney skilled in government processes looks at the desk in his new office. Wow, that looks untidy. Thanks, MSFT Copilot Bing thing. How’s that email security issue coming along? Ah, good enough, you say?
I think this means that the legal spat will be fought in the US of A. Here’s the sentence quoted by 9 to 5 Mac which allegedly appeared in a court document:
NSO has not demonstrated otherwise. NSO also overlooks the fact that the challenges will be amenable to a number of mitigating practices.
The write up includes this passage:
An Apple spokesperson tells 9to5Mac that the company will continue to protect users against 21st century mercenaries like the NSO Group. Litigation against the Pegasus spyware maker is part of a larger effort to protect users…
From my point of view, the techno feudal outfit has surfed on the PR magnetism of the NSO Group. Furthermore, the management team at NSO Group faces what seems to be a bit of a legal hassle. Some may believe that the often ineffective Israeli cyber security technology which failed to signal, thwart, or disrupt the October 2023 dust up requires more intense scrutiny. NSO Group, therefore, is in the spotlight.
More interesting from my vantage point is the question, “How can NSO Group’s lawyering-savvy senior management not demonstrate its case in such a way to, in effect, kill some of the PR magnetism. Take it from me. This is not a “dream” assignment for NSO Group’s legal eagles. I would also be remiss if I did not mention that Apple has quite a bit of spare cash with which to feather the nest of legal eagles. Apple wants to be perceived as the user’s privacy advocate and BFF. When it comes to spending money and rounding up those who love their Apple devices, the estimable Cupertino outfit may be a bit of a challenge, even to attorneys with NSA and DHS experience.
As someone said about publicity, any publicity is good publicity. I am not sure the categorical affirmative is shared by everyone involved with NSO Group. And where is Hulio? He’s down by the school yard. He doesn’t know where he’s going, but Hulio is going the other way. (A tip of the hat to Paul Simon and his 1972 hit.)
Stephen E Arnold, January 24, 2024
23andMe: Those Users and Their Passwords!
December 5, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Silicon Valley and health are match fabricated in heaven. Not long ago, I learned about the estimable management of Theranos. Now I find out that “23andMe confirms hackers stole ancestry data on 6.9 million users.” If one follows the logic of some Silicon Valley outfits, the data loss is the fault of the users.
“We have the capability to provide the health data and bioinformation from our secure facility. We have designed our approach to emulate the protocols implemented by Jack Benny and his vault in his home in Beverly Hills,” says the enthusiastic marketing professional from a Silicon Valley success story. Thanks, MSFT Copilot. Not exactly Jack Benny, Ed, and the foghorn, but I have learned to live with “good enough.”
According to the peripatetic Lorenzo Franceschi-Bicchierai:
In disclosing the incident in October, 23andMe said the data breach was caused by customers reusing passwords, which allowed hackers to brute-force the victims’ accounts by using publicly known passwords released in other companies’ data breaches.
Users!
What’s more interesting is that 23andMe provided estimates of the number of customers (users) whose data somehow magically flowed from the firm into the hands of bad actors. In fact, the numbers, when added up, totaled almost seven million users, not the original estimate of 14,000 23andMe customers.
I find the leak estimate inflation interesting for three reasons:
- Smart people in Silicon Valley appear to struggle with simple concepts like adding and subtracting numbers. This gap in one’s education becomes notable when the discrepancy is off by millions. I think “close enough for horse shoes” is a concept which is wearing out my patience. The difference between 14,000 and almost 17 million is not horse shoe scoring.
- The concept of “security” continues to suffer some set backs. “Security,” one may ask?
- The intentional dribbling of information reflects another facet of what I call high school science club management methods. The logic in the case of 23andMe in my opinion is, “Maybe no one will notice?”
Net net: Time for some regulation, perhaps? Oh, right, it’s the users’ responsibility.
Stephen E Arnold, December 5, 2023
The Risks of Smart Software in the Hands of Fullz Actors and Worse
November 7, 2023
This essay is the work of a dumb humanoid. No smart software required.
The ChatGPT and Sam AI-Man parade is getting more acts. I spotted some thumbs up from Satya Nadella about Sam AI-Man and his technology. The news service Techmeme provided me with dozens of links and enticing headlines about enterprise this and turbo that GPT. Those trumpets and tubas were pumping out the digital version of Funiculì, Funiculà.
I want to highlight one write up and point out an issue with smart software that appears to have been ignored, overlooked, or like the iceberg possibly that sank the RMS Titanic, was a heck of a lot more dangerous than Captain Edward Smith appreciated.
The crowd is thrilled with the new capabilities of smart software. Imagine automating mundane, mindless work. Over the oom-pah of the band, one can sense the excitement of the Next Big Thing getting Bigger and more Thingier. In the crowd, however, are real or nascent bad actors. They are really happy too. Imagine how easy it will be to automate processes designed to steal personal financial data or other chinks in humans’ armor!
The article is “How OpenAI Is Building a Path Toward AI Agents.” The main idea is that one can type instructions into Sam AI-Man’s GPT “system” and have smart software hook together discrete functions. These functions can then deliver an output requiring the actions of different services.
The write up approaches this announcement or marketing assertion with some prudence. The essay points out that “customer chatbots aren’t a new idea.” I agree. Connecting services has been one of the basic ideas of the use of software. Anyone who has used notched cards to retrieve items related to one another is going to understand the value of automation. And now, if the Sam AI-Man announcements are accurate that capability no longer requires old-fashioned learning the ropes.
The cited write up about building a path asserts:
Once you start enabling agents like the ones OpenAI pointed toward today, you start building the path toward sophisticated algorithms manipulating the stock market; highly personalized and effective phishing attacks; discrimination and privacy violations based on automations connected to facial recognition; and all the unintended (and currently unimaginable) consequences of infinite AIs colliding on the internet.
Fear, uncertainty, and doubt are staples of advanced technology. And the essay makes clear that the rule maker in chief is Sam AI-Man; to wit the essay says:
After the event, I asked Altman how he was thinking about agents in general. Which actions is OpenAI comfortable letting GPT-4 take on the internet today, and which does the company not want to touch? Altman’s answer is that, at least for now, the company wants to keep it simple. Clear, direct actions are OK; anything that involves high-level planning isn’t.
Let me introduce my observations about the Sam AI-Man innovations and the type of explanations about the PR and marketing event which has whipped up pundits, poohbahs, and Twitter experts (perhaps I should say X-spurts?)
First, the Sam AI-Man announcements strike me as making orchestration a service easy to use and widely available. Bad things won’t be allowed. But the core idea of what I call “orchestration” is where the parade is marching. I hear the refrain “Some think the world is made for fun and frolic.” But I don’t agree, I don’t agree. Because as advanced tools become widely available, the early adopters are not exclusively those who want to link a calendar to an email to a document about a meeting to talk about a new marketing initiative.
Second, the ability of Sam AI-Man to determine what’s in bounds and out of bounds is different from refereeing a pickleball game. Some of the players will be nation states with an adversarial view of the US of A. Furthermore, there are bad actors who have a knack for linking automated information to online extortion. These folks will be interested in cost cutting and efficiency. More problematic, some of these individuals will be more active in testing how orchestration can facilitate their human trafficking activities or drug sales.
Third, government entities and people like Sam AI-Man are, by definition, now in reactive mode. What I mean is that with the announcement and the chatter about automating the work required to create a snappy online article is not what a bad actor will do. Individuals will see opportunities to create new ways to exploit the cluelessness of employees, senior citizens, and young people. The cheerful announcements and the parade tunes cannot drown out the low frequency rumbles of excitement now rippling through the bad actor grapevines.
Net net: Crime propelled by orchestration is now officially a thing. The “regulations” of smart software, like the professionals who will have to deal with the downstream consequences of automation, are out of date. Am I worried? For me personally, no, I am not worried. For those who have to enforce the laws which govern a social construct? Yep, I have a bit of concern. Certainly more than those who are laughing and enjoying the parade.
Stephen E Arnold, November 7, 2023
Microsoft and What Fizzled with One Trivial Omission. Yep, Inconsequential
October 27, 2023
This essay is the work of a dumb humanoid. No smart software required.
I read “10 Hyped-Up Windows Features That Fizzled Out” is an interesting list. I noticed that the Windows Phone did not make the cut. How important is the mobile phone to online computing and most people’s life? Gee, a mobile phone? What’s that? Let’s see Apple has a phone and it produces some magnetism for the company’s other products and services. And Google has a phone with its super original, hardly weird Android operating system with the pull through for advertising sales. Google does fancy advertising, don’t you think? Then we have the Huawei outfit, which despite political headwinds, keeps tacking and making progress and some money. But Microsoft? Nope, no phone despite the superior thinking which brought Nokia into the Land of Excitement.
What do you mean security is a priority? I was working on 3D, the metaverse, and mixed reality. I don’t think anyone on my team knows anything about security. Is someone going to put out that fire? I have to head to an off site meeting. Catch you later,” says the hard working software professional. Thanks MidJourney, you understand dumpster fire, don’t you?
What’s on the list? Here are five items that the online write up identified as “fizzled out” products. Please, navigate to the original “let’s make a list and have lunch delivered” article.
The five items I noted are:
- The dual screen revolution Windows 10X for devices like the “Surface Neo.” Who knew?
- 3D modeling. Okay, I would have been happy if Microsoft could support plain old printing from its outstanding Windows products.
- Mixed reality. Not even the Department of Defense was happy with weird goggles which could make those in the field of battle a target.
- Set tabs. Great idea. Now you can buy it from Stardock, the outfit that makes software to kill the weird Window interface. Yep, we use this on our Windows computers. Why? The new interface is a pain, not a “pane.”
- My People. I don’t have people. I have a mobile phone and email. Good enough.
What else is missing from this lunch time-brainstorming list generation session?
My nomination is security. The good enough approach is continuing to demonstrate that — bear with me for this statement — good enough is no longer good enough in my opinion.
Stephen E Arnold, October 27, 2023
Newly Emerged Snowden Revelations Appear in Dutch Doctoral Thesis
October 10, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
One Eddie Snowden (a fine gent indeed) rumor said that 99 percent of the NSA data Edward Snowden risked his neck to expose ten years ago remains unpublished. Some entities that once possessed that archive are on record as having destroyed it. This includes The Intercept, which was originally created specifically to publish its revelations. So where are the elusive Snowden files now? Could they be In the hands of a post-PhD researcher residing in Berlin? Computer Weekly examines three fresh Snowden details that made their way into a doctoral thesis in its article, “New Revelations from the Snowden Archive Surface.” The thesis was written by American citizen Jacob Applebaum, who has since received his PhD from the Eindhoven University of Technology in the Netherlands. Reporter Stefania Maurizi summarizes:
“These revelations go back a decade, but remain of indisputable public interest:
- The NSA listed Cavium, an American semiconductor company marketing Central Processing Units (CPUs) – the main processor in a computer which runs the operating system and applications – as a successful example of a ‘SIGINT-enabled’ CPU supplier. Cavium, now owned by Marvell, said it does not implement back doors for any government.
- The NSA compromised lawful Russian interception infrastructure, SORM. The NSA archive contains slides showing two Russian officers wearing jackets with a slogan written in Cyrillic: ‘You talk, we listen.’ The NSA and/or GCHQ has also compromised Key European LI [lawful interception] systems.
- Among example targets of its mass surveillance program, PRISM, the NSA listed the Tibetan government in exile.”
Of public interest, indeed. See the write-up for more details on each point or, if you enjoy wading through academic papers, the thesis itself [pdf]. So how and when did Applebaum get his hands on information from the Snowden docs? Those details are not revealed, but we do know this much:
“In 2013, Jacob Appelbaum published a remarkable scoop for Der Spiegel, revealing the NSA had spied on Angela Merkel’s mobile phone. This scoop won him the highest journalistic award in Germany, the Nannen Prize (later known as the Stern Award). Nevertheless, his work on the NSA revelations, and his advocacy for Julian Assange and WikiLeaks, as well as other high-profile whistleblowers, has put him in a precarious condition. As a result of this, he has resettled in Berlin, where he has spent the past decade.”
Probably wise. Will most of the Snowden archive remain forever unpublished? Impossible to say, especially since we do not know how many copies remain and in whose hands.
Cynthia Murrell, October 10, 2023