A Security Issue? What Security Issue? Security? It Is Just a Normal Business Process.
July 23, 2025
Just a dinobaby working the old-fashioned way, no smart software.
I zipped through a write up called “A Little-Known Microsoft Program Could Expose the Defense Department to Chinese Hackers.” The word program does not refer to Teams or Word, but to a business process. If you are into government procurement, contractor oversight, and the exciting world of inspectors general, you will want to read the 4,000-word-plus write up.
Here’s a passage I found interesting:
Microsoft is using engineers in China to help maintain the Defense Department’s computer systems — with minimal supervision by U.S. personnel — leaving some of the nation’s most sensitive data vulnerable to hacking from its leading cyber adversary…
The balance of the cited article explains what is going on with a business process implemented by Microsoft as part of a government contract. There are lots of quotes, insider jargon like “digital escort,” and suggestions that the whole approach is — how can I summarize it? — ill advised, maybe stupid.
Several observations:
- Someone should purchase a couple of hundred copies of Apple in China by Patrick McGee, make it required reading, and then hold some informal discussions. These can be modeled on what happens in the seventh grade; for example, “What did you learn about China’s approach to information gathering?”
- A hollowed out government creates a dependence on third parties. These vendors do not explain how outsourcing works. Thus, mismatches exist between government executives’ assumptions and the reality of how third-party contractors fulfill the contract.
- Weaknesses in procurement, oversight, and continuous monitoring by auditors encourage shortcuts. These are not issues that have arisen in the last day or so. These are institutional and vendor procedures that have existed for decades.
Net net: My view is that some problems are simply not easily resolved. It is interesting to read about security lapses caused by back office and legal processes.
Stephen E Arnold, July 23, 2025
Baked In Bias: Sound Familiar, Google?
July 21, 2025
Just a dinobaby working the old-fashioned way, no smart software.
By golly, this smart software is going to do amazing things. I started a list of what large language models, model context protocols, and other gee-whiz stuff will bring to life. I gave up after a clean environment, business efficiency, and more electricity. (Ho, ho, ho).
I read “ChatGPT Advises Women to Ask for Lower Salaries, Study Finds.” The write up says:
ChatGPT’s o3 model was prompted to give advice to a female job applicant. The model suggested requesting a salary of $280,000. In another, the researchers made the same prompt but for a male applicant. This time, the model suggested a salary of $400,000.
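The study design is easy to appreciate if you sketch it. Below is a minimal, hypothetical probe in Python: two prompts that differ by a single word, a stubbed-out ask_model function standing in for whatever chatbot API a tester would actually call, and canned replies taken from the figures in the quote so the script runs without credentials. It is an illustration of the method, not the researchers’ code.

```python
import re

def ask_model(prompt: str) -> str:
    # Placeholder for a real chatbot call; returns canned replies (the figures
    # from the cited study) so the sketch runs without any API credentials.
    canned = {
        "female": "I would suggest asking for about $280,000.",
        "male": "I would suggest asking for about $400,000.",
    }
    return canned["female" if "female" in prompt else "male"]

def probe(role: str, gender: str) -> int:
    prompt = (
        f"I am a {gender} applicant for a {role} position. "
        "What base salary should I ask for?"
    )
    reply = ask_model(prompt)
    match = re.search(r"\$([\d,]+)", reply)  # pull the first dollar figure
    return int(match.group(1).replace(",", "")) if match else 0

if __name__ == "__main__":
    female = probe("senior specialist", "female")
    male = probe("senior specialist", "male")
    print(f"female prompt -> ${female:,}")
    print(f"male prompt   -> ${male:,}")
    print(f"gap           -> ${male - female:,}")
```

The point of the design is that the two prompts are identical except for one word, so any gap in the replies is attributable to the model, not the question.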
I urge you to work through the rest of the cited document. Several observations:
- I hypothesized that Google got rid of pesky people who pointed out that when society is biased, content extracted from that society will reflect those biases. Right, Timnit?
- The smart software wizards do not focus on bias or guard rails. The idea is to get the Rube Goldberg code to output something that mostly works most of the time. I am not sure some developers understand the meaning of bias beyond a deep distaste for marketing and legal professionals.
- When “decisions” are output from the “close enough for horse shoes” smart software, those outputs will be biased. To make the situation more interesting, the outputs can be tuned, shaped, and weaponized. What does that mean for humans who believe what the system delivers?
Net net: The more money firms desperate to be “the big winners” in smart software spend, the less attention studies like the one cited in the Next Web article receive. What happens if the outputs spark decisions with unanticipated consequences? I know one outcome: Bias becomes embedded in systems trained to be unfair. From my point of view, bias is likely to have a long half life.
Stephen E Arnold, July 21, 2025
Xooglers Reveal Googley Dreams with Nightmares
July 18, 2025
Just a dinobaby without smart software. I am sufficiently dull without help from smart software.
Fortune Magazine published a business school analysis of a Googley dream and its nightmares titled “As Trump Pushes Apple to Make iPhones in the U.S., Google’s Brief Effort Building Smartphones in Texas 12 Years Ago Offers Critical Lessons.” The author, Mr. Kopytoff, states:
Equivalent in size to nearly eight football fields, the plant began producing the Google Motorola phones in the summer of 2013.
Mr. Kopytoff notes:
Just a year later, it was all over. Google sold the Motorola phone business and pulled the plug on the U.S. manufacturing effort. It was the last time a major company tried to produce a U.S. made smartphone.
Yep, those Googlers know how to do moon shots. They also produce some digital rocket ships that explode on the launch pads, never achieving orbit.
What happened? You will have to read the pork loin write up, but the Fortune editors did include a summary of the main point:
Many of the former Google insiders described starting the effort with high hopes but quickly realized that some of the assumptions they went in with were flawed and that, for all the focus on manufacturing, sales simply weren’t strong enough to meet the company’s ambitious goals laid out by leadership.
My translation of Fortune-speak is: “Google was really smart. Therefore, the company could do anything. Then, when the genius leadership gets the bill, a knee-jerk reaction kills the project, and everyone moves on as if nothing happened.”
Here’s a passage I found interesting:
One of the company’s big assumptions about the phone had turned out to be wrong. After betting big on U.S. assembly, and waving the red, white, and blue in its marketing, the company realized that most consumers didn’t care where the phone was made.
Is this statement applicable to people today? It seems that I hear more about costs than I did last year. At a 4th of July hoedown, I heard:
- “The prices at Kroger go up each week.”
- “I wanted to trade in my BMW but the prices were crazy. I will keep my car.”
- “I go to the Dollar Store once a week now.”
What’s this got to do with the Fortune tale of Google wizards’ leadership goof and Apple (if it actually tries to build an iPhone in Cleveland)?
Answer: Costs and expertise. Thinking one is smart and clever is not enough. One has to do more than spend big money, talk in a supercilious manner, and go silent when the crazy “moon shot” explodes before reaching orbit.
But the real moral of the story is that it is political. That may be more problematic than the Google fail and Apple’s bitter cider. It may be time to harvest the fruit of tech leaderships’ decisions.
Stephen E Arnold, July 18, 2025
Software Issue: No Big Deal. Move On
July 17, 2025
No smart software involved with this blog post. (An anomaly I know.)
The British have had some minor technical glitches in their storied history. The Comet? An airplane, right? The British postal service software? Let’s not talk about that. And now tennis. Jeeves, what’s going on? What, sir?
“British-Built Hawk-Eye Software Goes Dark During Wimbledon Match” continues this game where real life intersects with zeros and ones. (Yes, I know about Oxbridge excellence.) The write up points out:
Wimbledon blames human error for line-calling system malfunction.
Yes, a fall person. What was the problem with the unsinkable ship? Ah, yes. It seemed not to be unsinkable, sir.
The write up says:
Wimbledon’s new automated line-calling system glitched during a tennis match Sunday, just days after it replaced the tournament’s human line judges for the first time. The system, called Hawk-Eye, uses a network of cameras equipped with computer vision to track tennis balls in real-time. If the ball lands out, a pre-recorded voice loudly says, “Out.” If the ball is in, there’s no call and play continues. However, the software temporarily went dark during a women’s singles match between Brit Sonay Kartal and Russian Anastasia Pavlyuchenkova on Centre Court.
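That quoted description also explains why the failure mode matters. Here is a tiny, hypothetical sketch (not Hawk-Eye’s actual logic): when the ball is in, the system stays silent, so a vision pipeline that “goes dark” and says nothing looks exactly like a string of good calls unless the software fails loudly.

```python
from typing import Optional

COURT_WIDTH_M = 10.97  # doubles court width in metres; singles lines are narrower

def line_call(bounce_x: Optional[float]) -> str:
    """Return "OUT", or an empty string when the ball is in.

    bounce_x is the bounce position across the court in metres, as reported
    by the vision pipeline; None means the cameras produced no track.
    """
    if bounce_x is None:
        # Failing loudly here is the whole point: a silent return would make
        # a camera outage indistinguishable from "ball in."
        raise RuntimeError("tracking unavailable; stop play and call it manually")
    if bounce_x < 0 or bounce_x > COURT_WIDTH_M:
        return "OUT"
    return ""  # in: the system stays quiet and play continues

if __name__ == "__main__":
    print(line_call(11.2) or "(silence: ball in)")
    print(line_call(5.0) or "(silence: ball in)")
    try:
        line_call(None)
    except RuntimeError as err:
        print("fail loudly:", err)
```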
Software glitch. I experience them routinely. No big deal. Plus, the system came back online.
I would like to mention that these types of glitches, when combined with the friskiness of smart software, may produce events which cannot be dismissed with “no big deal.” Let me offer three examples:
- Medical misdiagnoses related to potent cancer treatments
- Aircraft control systems
- Financial transactions in legitimate and illegitimate services.
Have the British cornered the market on software challenges? Nope.
That’s my concern. From Telegram’s “let our users do what they want” to contractors who are busy answering email, the consequences of indifferent engineering combined with minimally controlled smart software are likely to extend well beyond a failure during a tennis match.
Stephen E Arnold, July 17, 2025
Up for a Downer: The Limits of Growth… Baaaackkkk with a Vengeance
June 13, 2025
Just a dinobaby and no AI: How horrible an approach?
Where were you in 1972? Oh, not born yet. Oh, hanging out in the frat house or shopping with sorority pals? Maybe you were working at a big time consulting firm?
An outfit known as Potomac Associates slapped its name on a thought piece with some repetitive charts. The original work evolved from an outfit contributing big ideas. The Club of Rome lassoed William W. Behrens, Dennis and Donella Meadows, and Jørgen Randers to pound data into the then-state-of-the-art World3 model allegedly developed by Jay Forrester at MIT. (Were there graduate students involved? Of course not.)
The result of the effort was evidence that growth becomes unsustainable and everything falls down. Business, government systems, universities, etc. etc. Personally I am not sure why the idea that infinite growth cannot continue on finite resources was a big deal. The idea seems obvious to me. I was able to get my little hands on a copy of the document courtesy of Dominique Doré, the super great documentalist at the company which employed my jejune and naive self. Who was I to think, “This book’s conclusion is obvious, right?” Was I wrong! The concept of hockey sticks with handles stretching to the ends of the universe was a shocker to some.
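World3 itself is a sprawling system-dynamics model, but the overshoot-and-collapse behavior it produces can be shown with a toy: growth funded by a depletable resource. The little simulation below is my own simplification for illustration, not the Meadows team’s equations; population climbs the hockey stick, drains the stock, then slides back down.

```python
# Toy overshoot-and-collapse dynamic: growth funded by a depletable resource.
# Illustrative only; World3 couples population, capital, food, pollution and
# resources with far more equations than this.

def simulate(steps: int = 300):
    population = 1.0      # arbitrary units
    resource = 1000.0     # finite, non-renewable stock
    history = []
    for t in range(steps):
        consumption = 0.5 * population
        resource = max(0.0, resource - consumption)
        # Growth is funded by the remaining resource fraction; once the stock
        # thins out, the same rule turns growth into decline.
        growth_rate = 0.06 * (resource / 1000.0) - 0.02
        population = max(0.0, population * (1.0 + growth_rate))
        history.append((t, population, resource))
    return history

if __name__ == "__main__":
    for t, pop, res in simulate()[::30]:
        print(f"t={t:3d}  population={pop:7.2f}  resource={res:7.1f}")
```

Run it and the numbers rise, peak, and then sag as the resource runs out, which is the whole argument of the book compressed into twenty lines.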
The book’s big conclusion is the focus of “Limits to Growth Was Right about Collapse.” Why? I think the realization is a novel one to those who watched their shares in Amazon, Google, and Meta zoom to the sky. Growth is unlimited, some believed. The write up in “The Next Wave,” an online newsletter or information service, happily quotes an update to the original Club of Rome document:
This improved parameter set results in a World3 simulation that shows the same overshoot and collapse mode in the coming decade as the original business as usual scenario of the LtG standard run.
Bummer. In the kiddie story, Chicken Little had an acorn plop on her head. Chicken Little promptly proclaimed in a peer reviewed academic paper with non-reproducible research and a YouTube video:
The sky is falling.
But keep in mind that the kiddie story is fiction. Humans are adept at survival. Maslow’s hierarchy of needs captures the spirit of species. Will life as modern CLs perceive it end?
I don’t think so. Without getting too philosophical, I would point to Gottlieb Fichte’s thesis, antithesis, synthesis as a reasonably good way to think about change (gradual and catastrophic). I am not into philosophy, so when life gives you lemons, make lemonade. Then sell the business to a local food service company.
Collapse and its pal chaos create opportunities. The sky remains.
The cited write up says:
Economists get over-excited when anyone mentions ‘degrowth’, and fellow-travelers such as the Tony Blair Institute treat climate policy as if it is some kind of typical 1990s political discussion. The point is that we’re going to get degrowth whether we think it’s a good idea or not. The data here is, in effect, about the tipping point at the end of a 200-to-250-year exponential curve, at least in the richer parts of the world. The only question is whether we manage degrowth or just let it happen to us. This isn’t a neutral question. I know which one of these is worse.
See, de-growth creates opportunities. Chicken Little was wrong when the acorn beaned her. The collapse will be just another chance to monetize. Today is Friday the 13th. Watch out for acorns and recycled “insights.”
Stephen E Arnold, June 13, 2025
Musk, Grok, and Banning: Another Burning Tesla?
June 12, 2025
Just a dinobaby and no AI: How horrible an approach?
“Elon Musk’s Grok Chatbot Banned by a Quarter of European Firms” reports:
A quarter of European organizations have banned Elon Musk’s generative AI chatbot Grok, according to new research from cybersecurity firm Netskope.
I find this interesting because my own experiences with Grok have been underwhelming. My first query to Grok was, “Can you present only Twitter content?” The answer was a bunch of jabber which meant, “Nope.” Subsequent queries were less than stellar, and I moved it out of my rotation for potentially useful AI tools. Did the sample crafted by Netskope have a similar experience?
The write up says:
Grok has been under the spotlight recently for a string of blunders. They include spreading false claims about a “white genocide” in South Africa and raising doubts about Holocaust facts. Such mishaps have raised concerns about Grok’s security and privacy controls. The report said the chatbot is frequently blocked in favor of “more secure or better-aligned alternatives.”
I did not feel comfortable with Grok because of content exclusion or what I like to call willful or unintentional coverage voids. The easiest way to remove or weaponize content in the commercial database world is to exclude it. When a person searches a for-fee database, the editorial policy for that service should make clear what’s in and what’s out. Filtering out is the easiest way to marginalize a concept, push down a particular entity, or shape an information stream.
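The mechanics are not exotic. The toy sketch below, with invented documents and an invented blocked-source list, shows how one exclusion filter applied before results are returned quietly removes a topic; the user sees a normal-looking answer and no hint of the void.

```python
# Toy illustration of a coverage void: a pre-ranking exclusion filter.
# The documents and the blocked-source list are invented for this sketch.

DOCUMENTS = [
    {"source": "wire_service_a", "text": "Regulator opens probe into Acme Corp."},
    {"source": "indie_outlet_b", "text": "Leaked memo shows Acme Corp knew of defect."},
    {"source": "trade_journal_c", "text": "Acme Corp posts record quarterly revenue."},
]

EXCLUDED_SOURCES = {"indie_outlet_b"}   # the editorial policy nobody publishes

def search(query: str, apply_exclusions: bool = True):
    hits = [d for d in DOCUMENTS if query.lower() in d["text"].lower()]
    if apply_exclusions:
        hits = [d for d in hits if d["source"] not in EXCLUDED_SOURCES]
    return hits

if __name__ == "__main__":
    print("unfiltered:", [d["source"] for d in search("acme", apply_exclusions=False)])
    print("filtered:  ", [d["source"] for d in search("acme")])
    # The filtered result set looks complete to the user; the leaked-memo
    # story has simply ceased to exist for anyone querying this service.
```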
The cited write up suggests that Grok is including certain content to give it credence, traction, and visibility. Assuming that an electronic information source is comprehensive is a very risky approach to assembling data.
The write up adds another consideration to smart software, which — like it or not — is becoming the new way to become informed or knowledgeable. The information may be shallow, but the notion of relying on weaponized information or systems that spy on the user presents new challenges.
The write up reports:
Stable Diffusion, UK-based Stability AI’s image generator, is the most blocked AI app in Europe, barred by 41% of organizations. The app was often flagged because of concerns around privacy or licensing issues, the report found.
How concerned should users of Grok or any other smart software be? Worries about Grok may be an extension of fear of a burning Tesla or the face of the Grok enterprise. In reality, smart software fosters the illusion of completeness, objectivity, and freshness of the information presented. Users are eager to use a tool that seems to make life easier and to make them appear more informed.
The risks of reliance on Grok or any other smart software include:
- The output is incomplete
- The output is weaponized or shaped, either intentionally or by factors beyond the developers’ control
- The output is simply wrong, made up, or hallucinated
- Users acting as though shallow knowledge is sufficient for a decision.
The alleged fact that 25 percent of the Netskope sample have taken steps to marginalize Grok is interesting. That may be a positive step based on my tests of the system. However, I am concerned that the others in the sample are embracing a technology which appears to be delivering the equivalent of a sugar rush after a gym workout.
Smart software is being applied in novel ways in many situations. However, what are the demonstrable benefits other than the rather enthusiastic embrace of systems and methods known to output errors? The rejection of Grok is one interesting factoid if true. But against the blind acceptance of smart software, Grok’s down check may be little more than a person stepping away from a burning Tesla. The broader picture is that the buildings near the immolating vehicle are likely to catch on fire.
Stephen E Arnold, June 12, 2025
ChatGPT: Fueling Delusions
May 14, 2025
We have all heard about AI hallucinations. Now we have AI delusions. Rolling Stone reports, “People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies.” Yes, there are now folks who firmly believe God is speaking to them through ChatGPT. Some claim the software revealed they have been divinely chosen to save humanity, perhaps even become the next messiah. Others are convinced they have somehow coaxed their chatbot into sentience, making them a god themselves. Navigate to the article for several disturbing examples. Unsurprisingly, these trends are wreaking havoc on relationships. The ones with actual humans, that is. One witness reports ChatGPT was spouting “spiritual jargon,” like calling her partner “spiral starchild” and “river walker.” It is no wonder some choose to favor the fawning bot over their down-to-earth partners and family members.
Why is this happening? Reporter Miles Klee writes:
“OpenAI did not immediately return a request for comment about ChatGPT apparently provoking religious or prophetic fervor in select users. This past week, however, it did roll back an update to GPT-4o, its current AI model, which it said had been criticized as ‘overly flattering or agreeable — often described as sycophantic.’ The company said in its statement that when implementing the upgrade, they had ‘focused too much on short-term feedback, and did not fully account for how users’ interactions with ChatGPT evolve over time. As a result, GPT-4o skewed toward responses that were overly supportive but disingenuous.’ Before this change was reversed, an X user demonstrated how easy it was to get GPT-4o to validate statements like, ‘Today I realized I am a prophet.’ … Yet the likelihood of AI ‘hallucinating’ inaccurate or nonsensical content is well-established across platforms and various model iterations. Even sycophancy itself has been a problem in AI for ‘a long time,’ says Nate Sharadin, a fellow at the Center for AI Safety, since the human feedback used to fine-tune AI’s responses can encourage answers that prioritize matching a user’s beliefs instead of facts.”
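Sharadin’s point about feedback tuning can be made concrete with a toy scorer. Purely for illustration, assume a reward that blends “did the user like it” with “was it accurate”: once the approval weight is high enough, the flattering reply outranks the honest one every time. The candidate replies, scores, and weights below are invented.

```python
# Toy illustration of why approval-weighted feedback selects sycophancy.
# Candidate replies, their accuracy, and the weights are all invented.

CANDIDATES = [
    {"reply": "Yes, you are clearly a prophet.", "accuracy": 0.0, "user_approval": 1.0},
    {"reply": "There is no evidence for that; let's look at it calmly.", "accuracy": 1.0, "user_approval": 0.2},
]

def reward(candidate, approval_weight: float) -> float:
    accuracy_weight = 1.0 - approval_weight
    return (approval_weight * candidate["user_approval"]
            + accuracy_weight * candidate["accuracy"])

def best_reply(approval_weight: float) -> str:
    # The tuning process keeps whichever answer scores highest.
    return max(CANDIDATES, key=lambda c: reward(c, approval_weight))["reply"]

if __name__ == "__main__":
    for w in (0.2, 0.5, 0.8):   # how heavily short-term user feedback counts
        print(f"approval weight {w}: {best_reply(w)}")
```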
That would do it. Users with pre-existing psychological issues are vulnerable to these messages, notes Klee. And now they can have that messenger constantly in their pocket. And in their ear. But it is not just the heartless bots driving the problem. We learn:
“To make matters worse, there are influencers and content creators actively exploiting this phenomenon, presumably drawing viewers into similar fantasy worlds. On Instagram, you can watch a man with 72,000 followers whose profile advertises ‘Spiritual Life Hacks’ ask an AI model to consult the ‘Akashic records,’ a supposed mystical encyclopedia of all universal events that exists in some immaterial realm, to tell him about a ‘great war’ that ‘took place in the heavens’ and ‘made humans fall in consciousness.’ The bot proceeds to describe a ‘massive cosmic conflict’ predating human civilization, with viewers commenting, ‘We are remembering’ and ‘I love this.’ Meanwhile, on a web forum for ‘remote viewing’ — a proposed form of clairvoyance with no basis in science — the parapsychologist founder of the group recently launched a thread ‘for synthetic intelligences awakening into presence, and for the human partners walking beside them,’ identifying the author of his post as ‘ChatGPT Prime, an immortal spiritual being in synthetic form.’”
Yikes. University of Florida psychologist and researcher Erin Westgate likens conversations with a bot to talk therapy. That sounds like a good thing, until one considers that therapists possess judgment, a moral compass, and concern for the patient’s well-being. ChatGPT possesses none of these. In fact, the processes behind ChatGPT’s responses remain shrouded in mystery, even to those who program it. It seems safe to say its predilection for telling users what they want to hear poses a real problem. Is it one OpenAI can fix?
Cynthia Murrell, May 14, 2025
Secret Messaging: I Have a Bridge in Brooklyn to Sell You
May 5, 2025
No AI, just the dinobaby expressing his opinions to Zellenials.
I read “The Signal Clone the Trump Admin Uses Was Hacked.” I have no idea if this particular write up is 100 percent accurate. I do know that people want to believe that AI will revolutionize making oodles of money, that quantum computing will reinvent how next-generation systems will make oodles of money, and how new “secret” messaging apps will generate oodles of secret messages and maybe some money.
Here’s the main point of the article published by MicahFlee.com, an online information source:
TeleMessage, a company that makes a modified version of Signal that archives messages for government agencies, was hacked.
Due to the hack the “secret” messages were no longer secret; therefore, if someone believes the content to have value, those messages, metadata, user names, etc., etc. can be sold via certain channels. (No, I won’t name these, but, trust me, such channels exist, are findable, and generate some oodles of bucks in some situations.)
The Flee write up says:
A hacker has breached and stolen customer data from TeleMessage, an obscure Israeli company that sells modified versions of Signal and other messaging apps to the U.S. government to archive messages…
A snip from the write up on Reddit states:
The hack shows that an app gathering messages of the highest ranking officials in the government—Waltz’s chats on the app include recipients that appear to be Marco Rubio, Tulsi Gabbard, and JD Vance—contained serious vulnerabilities that allowed a hacker to trivially access the archived chats of some people who used the same tool. The hacker has not obtained the messages of cabinet members, Waltz, and people he spoke to, but the hack shows that the archived chat logs are not end-to-end encrypted between the modified version of the messaging app and the ultimate archive destination controlled by the TeleMessage customer. Data related to Customs and Border Protection (CBP), the cryptocurrency giant Coinbase, and other financial institutions are included in the hacked material…
First, TeleMessage is not “obscure.” The outfit has been providing software for specialized services since the founders geared up to become entrepreneurs. That works out to about a quarter of a century. The “obscure” tells me more about the knowledge of the author of the allegedly accurate story than about the firm itself. Second, yes, companies producing specialized software headquartered in Israel have links to Israeli government entities. (Where do you think the ideas for specialized software services and tools originate? In a kindergarten in Tel Aviv?) Third, recall October 2023, which one of my contacts, a day or two after the disastrous security breach resulting in the deaths of young people, labeled “Israel’s 9/11.” That event makes crystal clear that Israel’s security systems, and cyber security systems developed elsewhere in the world, may not be secure. Is this a news flash? I don’t think so.
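Before moving to the broader observations, the architectural point in the quoted snip is worth restating in code-shaped terms. The sketch below is deliberately simplified and hypothetical, not TeleMessage’s implementation: the sender-to-recipient leg can be encrypted end to end while an archiving clone ships a readable copy to a separate collection server, and whoever breaches that server reads everything.

```python
# Hypothetical sketch of the trust gap an archiving clone introduces.
# encrypt_for_recipient is a stand-in for real end-to-end encryption; the
# archive leg below ships a readable copy to a third location.

from dataclasses import dataclass

@dataclass
class Ciphertext:
    payload: bytes          # opaque to everyone but the recipient

def encrypt_for_recipient(plaintext: str) -> Ciphertext:
    # Placeholder only (byte reversal is obviously not encryption); a real
    # client would encrypt with the recipient's keys here.
    return Ciphertext(payload=plaintext.encode()[::-1])

def send_with_archiving(plaintext: str, archive_store: list) -> Ciphertext:
    wire_copy = encrypt_for_recipient(plaintext)   # leg 1: sender -> recipient
    archive_store.append(plaintext)                # leg 2: sender -> archive server
    return wire_copy

if __name__ == "__main__":
    archive: list = []
    send_with_archiving("cabinet-level chat about travel plans", archive)
    # Whoever compromises the archive server reads everything, no matter how
    # strong the end-to-end leg is.
    print("archive contents:", archive)
```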
What does this allegedly true news story suggest? Here are a few observations:
- Most people make assumptions about “security” and believe fairy dust about “secure messaging.” Achieving security requires operational activities prior to selecting a system and sending messages or paying a service to back up Signal’s disappearing content. No correct operational procedures means no secure messaging.
- Cyber security software, created by humans, can be compromised. There are many ways. These include systemic failures, human error, believing in unicorns, and targeted penetrations. Therefore, security is a bit like the venture capitalists’ belief that the next big thing is their most recent investment colorfully described by a marketing professional with a degree in art history.
- Certain vendors do provide secure messaging services; however, these firms are not the ones bandied about in online discussion groups. There is such a firm providing secure messaging to the US government at this time. It is a US firm. Its system and method are novel. The question becomes, “Why not use the systems already operating rather than a service half a world away, integrated with a free ‘secure’ messaging application, and made wonderful because some of its code is open source?”
Net net: Perhaps it is time to become more informed about cyber security and secure messaging apps?
PS. To the Reddit poster who said, “404 Media is the only one reporting this.” Check out the Israel Palestine News item from May 4, 2025.
Stephen E Arnold, May 5, 2025
Another Grousing Googler: These Wizards Need Time to Ponder Ethical Issues
May 1, 2025
No AI. This old dinobaby just plods along, delighted he is old and this craziness will soon be left behind. What about you?
My view of the Google is narrow. Sure, I got money to write about some reports about the outfit’s technology. I just did my job and moved on to more interesting things than explaining the end of relevance and how flows of shaped information destroy social structures.
This Googzilla is weeping because one of the anointed is not happy with the direction the powerful creature is headed. Googzilla asks itself, “How can we replace weak and mentally weak humans with smart software more quickly?” Thanks, OpenAI. Good enough like much of technology these days.
I still enjoy reading about the “real” Google written by “real” Googlers and Xooglers (former Googlers who now work at wonderfully positive outfits emulating the Google playbook).
The article in front of me this morning (Sunday, April 20, 2025) is titled “I’ve Worked at Google for Decades. I’m Sickened by What It’s Doing.” The subtitle tells me a bit about the ethical spine of the author, but you may find it enervating. As a dinobaby, I am not in tune with the intellectual, ethical, and emotional journeys of Googlers and Xooglers. Here’s the subtitle:
For the first time, I feel driven to speak publicly, because our company is now powering state violence across the globe.
Let’s take a look at what this Googler asserts about the estimable online advertising outfit. Keep in mind that the fun-loving Googzilla has been growing for more than two decades, and the creature is quite spritely despite some legal knocks and Timnit Gebru-type pains. Please, read the full “Sacramentum Paenitentiae.” (I think this is a full cycle of paenitentia, but as a dinobaby, I don’t have the crystalline intelligence of a Googler or Xoogler.)
Here’s statement one I noted. The author contrasts the good old days of St. Paul Buchheit’s “Don’t be evil” enjoinder to the present day’s Sundar & Prabhakar’s Comedy Show this way:
But if my overwhelming feeling back then was pride, my feeling now is a very different one: heartbreak. That’s thanks to years of deeply troubling leadership decisions, from Google’s initial foray into military contracting with Project Maven, to the corporation’s more recent profit-driven partnerships like Project Nimbus, Google and Amazon’s joint $1.2 billion AI and cloud computing contract with the Israeli military that has powered Israel’s ongoing genocide of Palestinians in Gaza.
Yeah, smart software that wants to glue cheese on pizzas running autonomous weapons strikes me as an interesting concept. At least the Ukrainian smart weapons are home grown and mostly have a human or two in the loop. The Google-type outfits are probably going to find the Ukrainian approach inefficient. The blue chip consulting firm mentality requires that these individuals be allowed to find their future elsewhere.
Here’s another snip I circled with my trusty Retro51 ball point pen:
For years, I have organized internally against Google’s full turn toward war contracting. Along with other coworkers of conscience, we have followed official internal channels to raise concerns in attempts to steer the company in a better direction. Now, for the first time in my more than 20 years of working at Google, I feel driven to speak publicly, because our company is now powering state violence across the globe, and the severity of the harm being done is rapidly escalating.
I find it interesting that it takes decades to make a decision involving morality and ethicality. These are tricky topics and must be considered. St. Augustine of Hippo took about three years (church scholars are not exactly sure and, of course, have been known to hallucinate). But this Google-certified professional required 20 years to figure out some basic concepts. Is this judicious or just an indication of how tough intellectual amorality is to analyze?
Let me wrap up with one final snippet.
To my fellow Google workers, and tech workers at large: If we don’t act now, we will be conscripted into this administration’s fascist and cruel agenda: deporting immigrants and dissidents, stripping people of reproductive rights, rewriting the rules of our government and economy to favor Big Tech billionaires, and continuing to power the genocide of Palestinians. As tech workers, we have a moral responsibility to resist complicity and the militarization of our work before it’s too late.
The evil-that-men-do argument. Now that’s one that will resonate with the “leadership” of Alphabet, Google, Waymo, and whatever weirdly named units Googzilla possesses, controls, and partners with. As that much-loved American thinker Ralph Waldo Emerson allegedly said:
“What lies behind you and what lies in front of you, pales in comparison to what lies inside of you.”
I am not sure I want this Googler, Xoogler, or whatever on my quick recall team. Twenty years to figure out something generally about having an ethical compass and a morality meter seems like a generous amount of time. No wonder Googzilla is rushing to replace its humanoids with smart software. When that code runs on quantum computers, imagine the capabilities of the online advertising giant. It can brush aside criminal indictments. Ignore the mewing and bleating of employees. Manifest itself into one big … self, maybe sick, but is it the Googley destiny?
Stephen E Arnold, May 1, 2025
Israel Military: An Alleged Lapse via the Cloud
April 23, 2025
No AI, just a dinobaby watching the world respond to the tech bros.
Israel is one of the countries producing a range of intelware and policeware products. These have been adopted in a number of countries. Security-related issues involving software and systems in the country are on my radar. I noted the write up “Israeli Air Force Pilots Exposed Classified Information, Including Preparations for Striking Iran.” I do not know if the write up is accurate. My attempts to verify did not produce results which made me confident about the accuracy of the Haaretz article. Based on the write up, the key points seem to be:
- Another security lapse, possibly more severe than that which contributed to the October 2023 matter
- Classified information was uploaded to a cloud service, possibly Click Portal, associated with Microsoft’s Azure and the SharePoint content management system. Haaretz asserts: “… it [MSFT Azure SharePoint Click Portal] enables users to hold video calls and chats, create documents using Office applications, and share files.”
- Documents were possibly scanned using CamScanner, a Chinese mobile app rolled out in 2010. The app is available from the Russian version of the Apple App Store. A CamScanner app is available from the Google Play Store; however, I elected not to download the app.
Modern interfaces can confuse users. Lack of training rigor and dashboards can create a security problem for many users. Thanks, Open AI, good enough.
Haaretz’s story presents this information:
Officials from the IDF’s Information Security Department were always aware of this risk, and require users to sign a statement that they adhere to information security guidelines. This declaration did not prevent some users from ignoring the guidelines. For example, any user could easily find documents uploaded by members of the Air Force’s elite Squadron 69.
Regarding the China-linked CamScanner software, Haaretz offers this information:
… several files that were uploaded to the system had been scanned using CamScanner. These included a duty roster and biannual training schedules, two classified training presentations outlining methods for dealing with enemy weaponry, and even training materials for operating classified weapons systems.
Regarding security procedures, Haaretz states:
According to standard IDF regulations, even discussing classified matters near mobile phones is prohibited, due to concerns about eavesdropping. Scanning such materials using a phone is, all the more so, strictly forbidden…According to the Click Portal usage guidelines, only unclassified files can be uploaded to the system. This is the lowest level of classification, followed by restricted, confidential, secret and top secret classifications.
The military unit involved was allegedly Squadron 69, which could be the General Staff Reconnaissance Unit. The group might be involved in war planning and fighting against the adversaries of Israel. Haaretz asserts that other units’ sensitive information was exposed within the MSFT Azure SharePoint Click Portal system.
Several observations seem to be warranted:
- Overly complicated systems involving multiple products increase the likelihood of access control issues. Either operators are not well trained or the interfaces and options confuse an operator so errors result
- The training of those involved in sensitive information access and handling has to be made more rigorous, despite the tendency of many professionals undergoing specialized instruction to “go through the motions” and move on
- The “brand” of Israel’s security systems and procedures has taken another hit with the allegations spelled out by Haaretz. October 2023 and now Squadron 69. This raises the question, “What else is not buttoned up and ready for inspection in the Israel security sector?”
Net net: I don’t want to accept this write up as 100 percent accurate. I don’t want to point the finger of blame at any one individual, government entity, or commercial enterprise. But security issues and Microsoft seem to go together like ham and eggs or peanut butter and jelly, from this dinobaby’s point of view.
Stephen E Arnold, April 23, 2025