Secure Phones Keep Appearing

October 31, 2024

The KDE community has developed an open source interface for mobile devices called Plasma Mobile. It allegedly turns any phone into a virtual fortress, promising a “privacy-respecting, open source and secure phone ecosystem.” This project is based on the original Plasma for desktops, an environment focused on security and flexibility. As with many open-source projects, Plasma Mobile is an imperfect work in progress. We learn:

“A pragmatic approach is taken that is inclusive to software regardless of toolkit, giving users the power to choose whichever software they want to use on their device. … Plasma Mobile is packaged in multiple distribution repositories, and so it can be installed on regular x86 based devices for testing. Have an old Android device? postmarketOS is a project aiming to bring Linux to phones and offers Plasma Mobile as an available interface for the devices it supports. You can see the list of supported devices here, but on any device outside the main and community categories your mileage may vary. Some supported devices include the OnePlus 6, Pixel 3a and PinePhone. The interface is using KWin over Wayland and is now mostly stable, albeit a little rough around the edges in some areas. A subset of the normal KDE Plasma features are available, including widgets and activities, both of which are integrated into the Plasma Mobile UI. This makes it possible to use and develop for Plasma Mobile on your desktop/laptop. We aim to provide an experience (with both the shell and apps) that can provide a basic smartphone experience. This has mostly been accomplished, but we continue to work on improving shell stability and telephony support. You can find a list of mobile friendly KDE applications here. Of course, any Linux-based applications can also be used in Plasma Mobile.”

KDE states its software is “for everyone, from kids to grandparents and from professionals to hobbyists.” However, it is clear that being an IT professional would certainly help. Is Plasma Mobile as secure as they claim? Time will tell.

Cynthia Murrell, October 31, 2024

Microsoft Security: A World First

September 30, 2024

This essay is the work of a dumb dinobaby. No smart software required.

After the somewhat critical comments of the chief information security officer for the US, Microsoft said it would do security better. The resulting “Secure Future Initiative” is a 25-page document which contains some interesting comments. Let’s look at a handful.


Some bad actors just go where the pickings are the easiest. Thanks, MSFT Copilot. Good enough.

On page 2, I noted the record-breaking effort Microsoft has completed:

Our engineering teams quickly dedicated the equivalent of 34,000 full-time engineers to address the highest priority security tasks—the largest cybersecurity engineering project in history.

Microsoft is a large software company. It has large security issues. Therefore, the company undertook the “largest cybersecurity engineering project in history.” That’s great for the Guinness Book of World Records. The question is, “Why?” The answer, it seems to me, is that Microsoft did “good enough” security, and the US government’s report said, in effect, “Nope. Not good enough.” Hence, a big and expensive series of changes. Have the changes been tested, or have unexpected security issues been introduced into the sprawl of Microsoft software? Another question from this dinobaby: Can a big company accustomed to “good enough” security implement fixes to remediate “the highest priority security tasks”? Companies have difficulty changing certain work practices. Can “good enough” methods do the job?

On page 3:

Security added as a core priority for all employees, measured against all performance reviews. Microsoft’s senior leadership team’s compensation is now tied to security performance

Compensation is linked to security as a “core priority.” I am not sure what making something a “core priority” means, particularly when the organization has implemented security systems and methods which have been found wanting. When the US government gives a bad report card, one forms an impression of a fairly deep hole which needs to be filled with functional, reliable bits. Declaring a “core priority” does not automatically produce secure software from cloud to desktop.

On page 5:

To enhance governance, we have established a new Cybersecurity Governance Council…

The creation of a council, the addition of security responsibilities to some executives, and the hiring of a few others mean to me:

  1. Meetings and delays
  2. Adding duties may translate to other issues
  3. How much will these remediating processes cost?

Microsoft may be too big to change its culture in a timely manner. The time required for a council to enhance governance means fixes for security problems may be slow to arrive. Even with additional time, coordinating “the equivalent of 34,000 full time engineers” may be a project management task of more than modest proportions.

On page 7:

Secure by design

Quite a subhead. How can Microsoft’s sweep of legacy and new products be made secure by design when these products have been shown to be insecure?

On page 10:

Our strategy for delivering enduring compliance with the standard is to identify how we will Start Right, Stay Right, and Get Right for each standard, which are then driven programmatically through dashboard driven reviews.

The alliteration is notable. However, what is “right”? What happens when fixing existing issues and adhering to a “standard” reveals that the “standard” has changed? The complexity of managing the process of getting something “right” is like an example from a Santa Fe Institute book on complexity. The reality of addressing known security issues while conforming to standards which may change is interesting to contemplate. Words are great, but remediating what’s wrong in a dynamic and very complicated series of dependent services is likely to be a challenge. Bad actors will quickly probe for new issues. Generally speaking, bad actors find faults and exploit them. Thus, Microsoft will find itself in a troublesome mode: permanent reaction to previously unknown and new security issues.

On page 11, the security manifesto launches into “pillars.” I think the idea is that good security is built upon strong foundations. But when remediating “as is” code as well as legacy code, how long will the design, engineering, and construction of the pillars take? Months, years, decades, or multiple decades. The US CISO report card may not apply to certain time scales; for instance, big government contracts. Pillars are ideas.

Let’s look at one:

The monitor and detect threats pillar focuses on ensuring that all assets within Microsoft production infrastructure and services are emitting security logs in a standardized format that are accessible from a centralized data system for both effective threat hunting/investigation and monitoring purposes. This pillar also emphasizes the development of robust detection capabilities and processes to rapidly identify and respond to any anomalous access, behavior, and configuration.

The reality of today’s world is that security issues can arise from insiders. Outside threats seem to be identified each week. However, different cyber security firms identify and analyze different security issues. No one cyber security company delivers 100 percent foolproof threat identification. “Logs” are great; however, Microsoft used to charge customers to make certain logging functions available. Now, more logs. The problem is that logs help identify a breach after the fact; that is, a previously unknown vulnerability is exploited, or an old vulnerability makes its way into a Microsoft system through a user action. How can a company which has a poor report card from the US government become the firm with a threat detection system equivalent to products now available from established vendors? The recent CrowdStrike misstep illustrates how Microsoft’s own architecture and culture created the conditions in which a procedural mistake at CrowdStrike could do so much damage. The words are nice, but I am not that confident in Microsoft’s ability to build this pillar. Microsoft may have to punt: buy several competitive systems and deploy them like mercenaries protecting unmotivated Roman citizens.
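
The pillar’s promise of “security logs in a standardized format … accessible from a centralized data system” is, mechanically, structured logging plus centralized shipping. Here is a minimal sketch of the idea in Python; the `emit_security_event` helper, the field names, and the sample event are my own illustrative assumptions, not anything Microsoft has published.

```python
import json
import logging
import socket
from datetime import datetime, timezone

# Hypothetical emitter: every service writes security events as one JSON
# object per line so a central collector can parse them uniformly.
logger = logging.getLogger("security")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler())  # stand-in for a real log shipper

def emit_security_event(event_type: str, outcome: str, **details) -> None:
    """Emit one security event in a standardized, machine-readable format."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "host": socket.gethostname(),
        "event_type": event_type,   # e.g. "auth", "config_change"
        "outcome": outcome,         # e.g. "success", "failure", "anomalous"
        "details": details,
    }
    logger.info(json.dumps(record, sort_keys=True))

# Example: an anomalous sign-in a centralized system could alert on.
emit_security_event("auth", "failure", user="svc-backup",
                    source_ip="203.0.113.7", reason="unexpected_geo")
```

The hard part is not emitting JSON; it is getting thousands of existing services to agree on one schema and actually ship to one place, which is where that “equivalent of 34,000 full-time engineers” figure starts to feel plausible.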

I think reading the “Secure Future Initiative” is a useful exercise. Manifestos can add juice to a mission. However, can the troops deliver a victory over the bad actors who swarm to Microsoft systems and services because “good enough” is like a fried chicken leg to a colony of ants?

Stephen E Arnold, September 30, 2024

Fancy Cyber Methods Are Useless Against Insider Threats

August 2, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

In my lectures to law enforcement and intelligence professionals, I end the talks with one statement: “Do not assume. Do not reduce costs by firing experienced professionals. Do not ignore human analyses of available information. Do not take short cuts.” Cyber security companies are often like the mythical kids of the village shoemaker. Those who can afford to hire the shoemaker have nifty kicks and slides. Those without resources have almost useless footwear.

Companies in the security business often have an exceptionally high opinion of their capabilities and expertise. I think of this as the Google Syndrome or what some have called by less salubrious names. The idea is that one is just so smart, nothing bad can happen here. Yeah, right.


An executive answers questions about a slight security misstep. Thanks, Microsoft Copilot. You have been there and done that I assume.

I read “North Korean Hacker Got Hired by US Security Vendor, Immediately Loaded Malware.” The article is a reminder that outfits in the OSINT, investigative, and intelligence business can make incredibly interesting decisions. Some of these lead to quite significant consequences. This particular case example illustrates how a hiring process using humans who are really smart and dedicated can be fooled, duped, and bamboozled.

The write up explains:

KnowBe4, a US-based security vendor, revealed that it unwittingly hired a North Korean hacker who attempted to load malware into the company’s network. KnowBe4 CEO and founder Stu Sjouwerman described the incident in a blog post yesterday, calling it a cautionary tale that was fortunately detected before causing any major problems.

I am a dinobaby, and I translated the passage to mean: “We hired a bad actor but, by the grace of the Big Guy, we avoided disaster.”

Sure, sure, you did.

I would suggest you only know you trapped one instance of the person’s behavior. You may not know, and may never know, what that individual told a colleague in North Korea or another country, or what the bad actor said or emailed from a coffee shop using a contact’s computer. You may never know what business processes the person absorbed, converted to an encrypted message, and forwarded via a burner phone to a pal in a nation-state whose interests are not aligned with America’s.

In short, the cyber security company dropped the ball. It need not feel too bad. One of the companies I worked for early in my 60-year working career hired a person who dumped top secrets into journalists’ laps. Last week a person I knew was complaining about Delta Airlines, which was shown to be quite addled in the wake of the CrowdStrike misstep.

What’s the fix? Go back to how I end my lectures. Those in the cyber security business need to be extra vigilant. The idea that “we are so smart, we have the answer” is an example of a mental short cut. The fact is that KnowBe4 did not have the answer. It is lucky it KnewAtAll. Some tips:

  1. Seek and hire vetted experts
  2. Question procedures and processes in “before action” and “after action” incidents
  3. Do not rely on assumptions
  4. Do not believe the outputs of smart software systems
  5. Invest in security instead of fancy automobiles and vacations.

Do these suggestions run counter to your business goals and your image of yourself? Too bad. Life is tough. Cyber crime is the growth business. Step up.

Stephen E Arnold, August 2, 2024

What Will the AT&T Executives Serve Their Lawyers at the Security Breach Debrief?

July 15, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

On the flight back to my digital redoubt in rural Kentucky, I had the thrill of sitting behind a couple of telecom types who were laughing at the pickle AT&T has plopped on top of what I think of as a Judge Green slushee. Do lime slushees and dill pickles go together? For my tastes, nope. Judge Green wanted to de-monopolize the Ma Bell I knew and loved. (Yes, I cashed some Ma Bell checks, and I had a Young Pioneers hat.)

We are back to what amounts to a Ma Bell trifecta: AT&T (the new version which wears spurs and chaps), Verizon (everyone’s favorite throwback carrier), and the new T-Mobile (bite those customer pocketbooks as if they were bratwursts mit sauerkraut). Each of these outfits is interesting. But at the moment, AT&T is in the spotlight.

“Data of Nearly All AT&T Customers Downloaded to a Third-Party Platform in a 2022 Security Breach” dances around a modest cyber misstep at what is now a quite old and frail Ma Bell. Imagine the good old days before the Judge Green decision to create the Baby Bells. Security breaches were possible, but it was quite tough to get the customer data. Attacks were limited to those with the knowledge (somewhat tough to obtain), the tools (3B series computers and lots of mainframes), and access to network connections. Technology has advanced, and post-breakup competition means that no one makes money via security. Security is better at old-school monopolies because money can be spent without worrying about revenue. As one AT&T executive said to my boss at a blue-chip consulting company, “You guys charge so much we will have to get another railroad car filled with quarters to pay your bill.” Ho ho ho, except the fellow was not joking. At the pre-Judge Green AT&T, spending money on security was definitely not an issue. Today? Things seem to be different.

A more pointed discussion of Ma Bell’s breaking her hip again appears in “AT&T Breach Leaked Call and Text Records from Nearly All Wireless Customers,” which states:

AT&T revealed Friday morning (July 12, 2024) that a cybersecurity attack had exposed call records and texts from “nearly all” of the carrier’s cellular customers (including people on mobile virtual network operators, or MVNOs, that use AT&T’s network, like Cricket, Boost Mobile, and Consumer Cellular). The breach contains data from between May 1st, 2022, and October 31st, 2022, in addition to records from a “very small number” of customers on January 2nd, 2023.

The “problem,” if I understand the write up, is the reference to Snowflake. Is AT&T suggesting that Snowflake is responsible for the breach? Big outfits like to identify the source of the problem. If Snowflake made the misstep, isn’t it the responsibility of AT&T’s cyber unit to make sure that the security was as good as or better than the security implemented before the Judge Green break up? I think AT&T, like other big companies, wants to find a way to shift blame, not say, “We put the pickle in the lime slushee.”

My posture toward two year old security issues is, “What’s the point of covering up a loss of ‘nearly all’ customers’ data?” I know the answer: Optics and the share price.

As a person who owned a Young Pioneers’ hat, I am truly disappointed in the company. The Regional Managers for whom I worked as a contractor had security on the list of top priorities from day one. Whether we were fooling around with a Western Electric data service or the research charge back system prior to the break up, security was not someone else’s problem.

Today it appears that AT&T has made some decisions which are now perched on the top officer’s head. Security problems are, therefore, tough to miss. Boeing loses doors and wheels from aircraft. Microsoft tantalizes bad actors with insecure systems. AT&T outsources high value data and then moves more slowly than the last remaining turtle in the mine runoff pond near my home in Harrod’s Creek.

Maybe big is not as wonderful as some expect the idea to be? Responsibility for one’s decisions and an ethical compass are not cyber tools, but both notions are missing in some big company operations. Will the after-action team guzzle lime slushees with pickles on top?

Stephen E Arnold, July 15, 2024

Cloudflare, What Else Can You Block?

July 11, 2024

I spotted an interesting item in Silicon Angle. The article is “Cloudflare Rolls Out Feature for Blocking AI Companies’ Web Scrapers.” I think this is the main point:

Cloudflare Inc. today debuted a new no-code feature for preventing artificial intelligence developers from scraping website content. The capability is available as part of the company’s flagship CDN, or content delivery network. The platform is used by a sizable percentage of the world’s websites to speed up page loading times for users. According to Cloudflare, the new scraping prevention feature is available in both the free and paid tiers of its CDN.

Cloudflare is what I call an “enabler.” For example, when one tries to do some domain research, one often encounters Cloudflare, not the actual IP address of the service. This year I have been doing some talks for law enforcement and intelligence professionals about Telegram and its Messenger service. Guess what? Telegram is a Cloudflare customer. My team and I have encountered other interesting services which use Cloudflare the way Natty Bumppo’s sidekick used branches to obscure footprints in the forest.
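
The “enabler” effect is easy to demonstrate: resolving a Cloudflare-fronted domain returns Cloudflare edge addresses, not the origin server’s. Here is a minimal sketch using only Python’s standard library; the domain shown is a placeholder.

```python
import socket

def resolve_all(domain: str) -> list[str]:
    """Return the IPv4 addresses a domain currently resolves to."""
    _, _, addresses = socket.gethostbyname_ex(domain)
    return addresses

# For a Cloudflare-fronted site, these will be Cloudflare edge IPs,
# not the hosting provider's address, which is why passive domain
# research often dead-ends at Cloudflare.
print(resolve_all("example.com"))  # placeholder domain
```

To get past the edge you need something else entirely, which is the point: Cloudflare sits between researchers and the services it fronts.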

Cloudflare has other capabilities too; for instance, the write up reports:

Cloudflare assigns every website visit that its platform processes a score of 1 to 99. The lower the number, the greater the likelihood that the request was generated by a bot. According to the company, requests made by the bot that collects content for Perplexity AI consistently receive a score under 30.

I wonder what less salubrious Web site operators score. Yes, there are some pretty dodgy outfits that may be arguably worse than an AI outfit.
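
Conceptually, Cloudflare’s 1-to-99 score turns into a simple per-request threshold test. The sketch below illustrates that idea generically; the `bot_score` field, the tiers, and the cutoff of 30 (the figure the article attributes to Perplexity’s crawler) are illustrative assumptions, not Cloudflare’s actual API.

```python
from dataclasses import dataclass

@dataclass
class Request:
    path: str
    user_agent: str
    bot_score: int  # 1-99; lower means "more likely automated" per the article

BLOCK_BELOW = 30  # illustrative cutoff; the article says Perplexity's bot scores under 30

def decide(request: Request) -> str:
    """Allow, challenge, or block a request based on a bot-likelihood score."""
    if request.bot_score < BLOCK_BELOW:
        return "block"       # likely scraper
    if request.bot_score < 60:
        return "challenge"   # ambiguous: present a CAPTCHA or similar
    return "allow"           # likely human traffic

print(decide(Request("/article", "SomeAIBot/1.0", bot_score=22)))  # -> "block"
```

In practice the scoring itself is the hard part; the gatekeeping decision on top of it is trivial, which is why the question of what else Cloudflare chooses to gate is the interesting one.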

The information in this Silicon Angle write up raises a question, “What other content blocking and gatekeeping services can Cloudflare provide?”

Stephen E Arnold, July 11, 2024

MSFT: Security Is Not Job One. News or Not?

June 11, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

The idea that free and open source software contains digital booby traps is one thing. Poisoned libraries which busy and confident developers snap into their software should not surprise anyone. What I did not expect was the information in “Malicious VSCode Extensions with Millions of Installs Discovered.” The write up in Bleeping Computer reports:

A group of Israeli researchers explored the security of the Visual Studio Code marketplace and managed to “infect” over 100 organizations by trojanizing a copy of the popular ‘Dracula Official’ theme to include risky code. Further research into the VSCode Marketplace found thousands of extensions with millions of installs.


I heard the “Job One” and “Top Priority” assurances before. So far, bad actors keep exploiting vulnerabilities and minimal progress is made. Thanks, MSFT Copilot, definitely close enough for horseshoes.

The write up points out:

Previous reports have highlighted gaps in VSCode’s security, allowing extension and publisher impersonation and extensions that steal developer authentication tokens. There have also been in-the-wild findings that were confirmed to be malicious.

How bad can this be? This be bad. The malicious code can be inserted and then happily delivers to a remote server, via an HTTPS POST, information such as:

the hostname, number of installed extensions, device’s domain name, and the operating system platform

Clever bad actors can do more even if the information they have is the description and code screen shot in the Bleeping Computer article.

Why? You are going to love the answer suggested in the report:

“Unfortunately, traditional endpoint security tools (EDRs) do not detect this activity (as we’ve demonstrated examples of RCE for select organizations during the responsible disclosure process), VSCode is built to read lots of files and execute many commands and create child processes, thus EDRs cannot understand if the activity from VSCode is legit developer activity or a malicious extension.”

That’s special.

The article reports that the research team poked around in the Visual Studio Code Marketplace and discovered:

  • 1,283 items with known malicious code (229 million installs).
  • 8,161 items communicating with hardcoded IP addresses.
  • 1,452 items running unknown executables.
  • 2,304 items using another publisher’s GitHub repo, indicating they are a copycat.

Bleeping Computer says:

Microsoft’s lack of stringent controls and code reviewing mechanisms on the VSCode Marketplace allows threat actors to perform rampant abuse of the platform, with it getting worse as the platform is increasingly used.

Interesting.
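
If you want a crude sense of whether anything like the flagged extensions is sitting in your own editor, a static triage pass is a start. The sketch below is an illustrative heuristic only: it assumes the default extensions folder, looks for the combination of a hardcoded IP literal and an outbound HTTP call, and will happily flag legitimate code. It is not the researchers’ tooling.

```python
import re
from pathlib import Path

# Default VS Code extensions folder on Linux/macOS; adjust for your setup.
EXTENSIONS_DIR = Path.home() / ".vscode" / "extensions"

IP_PATTERN = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")  # hardcoded IPv4 literals
CALL_PATTERN = re.compile(r"https?\.request|fetch\(|XMLHttpRequest", re.I)  # outbound call hints

def scan_extension(ext_dir: Path) -> list[str]:
    """Flag JavaScript files that contain both an IP literal and an HTTP call."""
    findings = []
    for js_file in ext_dir.rglob("*.js"):
        try:
            text = js_file.read_text(errors="ignore")
        except OSError:
            continue
        if IP_PATTERN.search(text) and CALL_PATTERN.search(text):
            findings.append(str(js_file))
    return findings

if EXTENSIONS_DIR.exists():
    for ext in sorted(EXTENSIONS_DIR.iterdir()):
        if ext.is_dir():
            hits = scan_extension(ext)
            if hits:
                print(f"[review] {ext.name}: {len(hits)} suspicious file(s)")
```

A pass like this produces plenty of false positives, but it at least surfaces the pattern the researchers describe: extension code that carries a hardcoded address and phones home.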

Let’s step back. The US Federal government prodded Microsoft to step up its security efforts. The MSFT leadership said, “By golly, we will.”

Several observations are warranted:

  1. I am not sure I am able to believe anything Microsoft says about security
  2. I do not believe a “culture” of security exists within Microsoft. There is a culture, but it is not one which takes security seriously, even after a butt spanking by the US Federal government and by Microsoft Certified Partners who have to work to address their clients’ issues. (How do I know this? On Wednesday, June 8, 2024, someone at the TechnoSecurity & Digital Forensics Conference told me, “I have to take a break. The security problems with Microsoft are killing me.”)
  3. The “leadership” at Microsoft is loved by Wall Street. However, others fail to respond with hearts and flowers.

Net net: Microsoft poses a grave security threat to government agencies and the users of Microsoft products. Talking with dulcet tones may make some people happy. I think there are others who believe Microsoft wants government contracts. Its employees want an easy life, money, and respect. Would you hire a former Microsoft security professional? This is not a question of trust; this is a question of malfeasance. Smooth talking is the priority, not security.

Stephen E Arnold, June 11, 2024

Allegations of Personal Data Flows from X.com to Au10tix

June 4, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I work from my dinobaby lair in rural Kentucky. What the heck do I know about Hod HaSharon, Israel? The answer is, “Not much.” However, I read an online article called “Elon Musk Now Requiring All X Users Who Get Paid to Send Their Personal ID Details to Israeli Intelligence-Linked Corporation.” I am not sure if the statements in the write up are accurate. I want to highlight some items from the write up because I have not seen information about this interesting identity verification process in my other feeds. This could be the second most covered news item in the last week or two. Number one goes to Google’s telling people to eat a rock a day and its weird “not our fault” explanation of its quantumly supreme technology.

Here’s what I carried away from this X to Au10tix write up. (A side note: Intel outfits like obscure names. In this case, Au10tix is a cute conversion of the word authentic to a unique string of characters. Aw ten tix. Get it?)

Yes, indeed. There is an outfit called Au10tix, and it is based about 60 miles north of Jerusalem, not in Tel Aviv, the intelware capital of the world. The company, according to the cited write up, has a deal with Elon Musk’s X.com. The write up asserts:

X now requires new users who wish to monetize their accounts to verify their identification with a company known as Au10tix. While creator verification is not unusual for online platforms, Elon Musk’s latest move has drawn intense criticism because of Au10tix’s strong ties to Israeli intelligence. Even people who have no problem sharing their personal information with X need to be aware that the company they are using for verification is connected to the Israeli government. Au10tix was founded by members of the elite Israeli intelligence units Shin Bet and Unit 8200.

Sounds scary. But that’s the point of the article. I would like to remind you, gentle reader, that Israel’s vaunted intelligence systems failed as recently as October 2023. That event was described to me by one of the country’s former intelligence professionals as “our 9/11.” Well, maybe. I think it made clear that the intelware does not work as advertised in some situations. I don’t have first-hand information about Au10tix, but I would suggest some caution before engaging in flights of fancy.

The write up presents as actual factual information:

The executive director of the Israel-based Palestinian digital rights organization 7amleh, Nadim Nashif, told the Middle East Eye: “The concept of verifying user accounts is indeed essential in suppressing fake accounts and maintaining a trustworthy online environment. However, the approach chosen by X, in collaboration with the Israeli identity intelligence company Au10tix, raises significant concerns. “Au10tix is located in Israel and both have a well-documented history of military surveillance and intelligence gathering… this association raises questions about the potential implications for user privacy and data security.” Independent journalist Antony Loewenstein said he was worried that the verification process could normalize Israeli surveillance technology.

What the write up did not provide was significant detail. The write up reports:

Au10tix has also created identity verification systems for border controls and airports and formed commercial partnerships with companies such as Uber, PayPal and Google.

My team’s research into online gaming found suggestions that the estimable 888 Holdings may have a relationship with Au10tix. The company pops up in some of our research into facial recognition verification. The Israeli gig work outfit Fiverr.com seems to be familiar with the technology as well. I want to point out that one of the Fiverr gig workers based in the UK reported to me that she was no longer “recognized” by the Fiverr.com system. Yeah, October 2023 style intelware.

Who operates the company? Heading back into my files, I spotted a few names. These individuals may no longer be involved in the company, but several names remind me of individuals who have been active in the intelware game for a few years:

  • Ron Atzmon: Chairman (Unit 8200, which was not on the ball in October 2023, it seems)
  • Ilan Maytal: Chief Data Officer
  • Omer Kamhi: Chief Information Security Officer
  • Erez Hershkovitz: Chief Financial Officer (formerly of the very interesting intel-related outfit Voyager Labs, a company about which the Brennan Center has a tidy collection of information related to the LAPD)

The company’s technology is available in the Azure Marketplace. That description identifies three core functions of Au10tix’s systems:

  1. Identity verification. Allegedly the system performs real-time identity verification. Hmm. I wonder why it took quite a bit of time to figure out who did what in October 2023. That question is probably unfair because it appears no patrols or systems “saw” what was taking place. But, I should not nit pick. The Azure service includes a “regulatory toolbox including disclaimer, parental consent, voice and video consent, and more.” That disclaimer seems helpful.
  2. Biometrics verification. Again, this is an interesting assertion. As imagery of the October 2023 event emerged, I asked myself, “How did those ID-to-selfie, selfie-to-selfie, and selfie-to-token matches work?” Answer: Ask the families of those killed.
  3. Data screening and monitoring. The system can “identify potential risks and negative news associated with individuals or entities.” That might be helpful in building automated profiles of individuals by companies licensing the technology. I wonder if this capability can be hooked to other Israeli spyware systems to provide a particularly helpful, real-time profile of a person of interest?

Let’s assume the write up is accurate and X.com is licensing the technology. According to “Au10tix Is an Israeli Company and Part of a Group Launched by Members of Israel’s Domestic Intelligence Agency, Shin Bet,” X.com now includes this notice:

[Screenshot of X’s identity verification consent prompt]

The circled segment of the social media post says:

I agree to X and Au10tix using images of my ID and my selfie, including extracted biometric data to confirm my identity and for X’s related safety and security, fraud prevention, and payment purposes. Au10tix may store such data for up to 30 days. X may store full name, address, and hashes of my document ID number for as long as I participate in the Creator Subscription or Ads Revenue Share program.
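
The consent text mentions storing “hashes of my document ID number.” What that means in practice is not disclosed; below is a minimal sketch of one common approach (a salted SHA-256 digest), offered only to illustrate the concept, not X’s or Au10tix’s actual scheme. The document number shown is hypothetical.

```python
import hashlib
import hmac
import os

def hash_document_id(document_id: str, salt: bytes | None = None) -> tuple[str, str]:
    """Return (salt_hex, digest_hex) for a salted SHA-256 hash of an ID number."""
    salt = salt or os.urandom(16)  # per-record random salt
    digest = hashlib.sha256(salt + document_id.encode("utf-8")).hexdigest()
    return salt.hex(), digest

def verify_document_id(document_id: str, salt_hex: str, digest_hex: str) -> bool:
    """Check a presented ID against a stored salted hash without storing the ID itself."""
    _, candidate = hash_document_id(document_id, bytes.fromhex(salt_hex))
    return hmac.compare_digest(candidate, digest_hex)  # constant-time comparison

salt, digest = hash_document_id("X1234567")          # hypothetical document number
print(verify_document_id("X1234567", salt, digest))  # True
```

Note that a hash of a low-entropy identifier such as a passport number can often be brute-forced, which is why “we only store hashes” is weaker reassurance than it sounds.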

This dinobaby followed the October 2023 event with shock and surprise. The dinobaby has long been a champion of Israel’s intelware capabilities, and I have done some small projects for firms which I am not authorized to identify. Now I am skeptical and more critical. What if X’s identity service is compromised? What if the servers are breached and the data exfiltrated? What if the system does not work and downstream financial fraud is enabled by X’s push beyond short text messaging? Much intelware is little more than glorified and old-fashioned search and retrieval.

Do Mr. Musk and other commercial purchasers of intelware know about cracks and fissures in intelware systems which allowed the October 2023 event to go undetected until live-fire reports arrived? This tie up is interesting and is worth monitoring.

Stephen E Arnold, June 4, 2024

E2EE: Not Good Enough. So What Is Next?

May 21, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

What’s wrong with software? One rant offers this answer:

I think one !*#$ thing about the state of technology in the world today is that for so many people, their job, and therefore the thing keeping a roof over their family’s head, depends on adding features, which then incentives people to, well, add features. Not to make and maintain a good app.


Who has access to the encrypted messages? Someone. That’s why this young person is distraught as she is escorted to the police van. Thanks, MSFT Copilot. Good enough.

This statement appears in “A Rant about Phone Messaging Apps UI.” But there are some more interesting issues in messaging; specifically, E2EE or end-to-end encrypted messaging. The current example of talking about the wrong topic in a quite important application space is summarized in Business Insider, an estimable online publication with snappy headlines like this one: “In the Battle of Telegram vs Signal, Elon Musk Casts Doubt on the Security of the App He Once Championed.” That write up reports as “real” news:

Signal has also made its cryptography open-source. It is widely regarded as a remarkably secure way to communicate, trusted by Jeff Bezos and Amazon executives to conduct business privately.

I want to point out that Edward Snowden “endorses” Signal. He does not use Telegram. Does he know something that others may not have tucked into their memory stack?

The Business Insider “real” news report includes this quote from a Big Dog at Signal:

“We use cryptography to keep data out of the hands of everyone but those it’s meant for (this includes protecting it from us),” Whittaker wrote. “The Signal Protocol is the gold standard in the industry for a reason–it’s been hammered and attacked for over a decade, and it continues to stand the test of time.”
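
As a reminder of what “keeping data out of the hands of everyone but those it’s meant for” means mechanically, here is a minimal sketch of public-key authenticated encryption using the PyNaCl library. It is not the Signal Protocol (no key ratcheting, no forward secrecy), just the basic E2EE idea: only the intended recipient’s private key can decrypt, so a relay server sees only ciphertext.

```python
# pip install pynacl
from nacl.public import PrivateKey, Box

# Each party generates a keypair; only public keys are exchanged.
alice_sk = PrivateKey.generate()
bob_sk = PrivateKey.generate()

# Alice encrypts to Bob using her private key and Bob's public key.
# The server relaying this message sees only the ciphertext.
sending_box = Box(alice_sk, bob_sk.public_key)
ciphertext = sending_box.encrypt(b"meet at the usual place")

# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_sk, alice_sk.public_key)
print(receiving_box.decrypt(ciphertext))  # b'meet at the usual place'
```

The Signal Protocol layers key ratcheting and forward secrecy on top of primitives like these; “we use cryptography” by itself says little about how carefully a given service does so.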

Pavel Durov, the owner of Telegram and the brother of a person with two Ph.D.s (his brother Nikolai), suggests that Signal is insecure. Keep in mind that Mr. Durov has been the subject of some scrutiny because, after telling the estimable Tucker Carlson that Telegram is about free speech, Telegram blocked Ukraine’s government from using a Telegram feature to beam pro-Ukraine information into Russia. That’s a sure-fire way to make clear which country catches Mr. Durov’s attention. He did this, according to rumors reaching me from a source with links to Ukraine, because Apple or maybe Google made him do it. Blaming the alleged US high-tech oligopolies is a good red herring, and a stinky one at that.

What has Telegram got to do with the complaint about “features”? In my view, Telegram has been adding features at a pace more rapid than Signal, WhatsApp, and a boatload of competitors. Have those features created some vulnerabilities in the Telegram setup? In fact, I am not sure Telegram is just a messaging platform. I also think that the company may be poised to do an end run around open sourcing its home-grown encryption method.

What does this mean? Here are a few observations:

  1. With governments working overtime to gain access to encrypted messages, Telegram may have to add some beef.
  2. Established firms and start ups are nosing into obfuscation methods that push beyond today’s encryption methods.
  3. Information about who is behind an E2EE messaging service is tough to obtain. What is easy to document with a Web search may be one of those “fake” or misinformation plays.

Net net: E2EE is getting long in the tooth. Something new is needed. If you want to get a glimpse of the future, catch my lecture about E2EE at the upcoming US government Cycon 2024 event in September. Want a preview? We have a briefing. Write benkent2020 at yahoo dot com for restrictions and prices.

Stephen E Arnold, May 21, 2024

Encryption Battles Continue

May 15, 2024

Privacy protections are great—unless you are law-enforcement attempting to trace a bad actor. India has tried to make it easier to enforce its laws by forcing messaging apps to track each message back to its source. That is challenging for a platform with encryption baked in, as Rest of World reports in, “WhatsApp Gives India an Ultimatum on Encryption.” Writer Russell Brandom tells us:

“IT rules passed by India in 2021 require services like WhatsApp to maintain ‘traceability’ for all messages, allowing authorities to follow forwarded messages to the ‘first originator’ of the text. In a Delhi High Court proceeding last Thursday, WhatsApp said it would be forced to leave the country if the court required traceability, as doing so would mean breaking end-to-end encryption. It’s a common stance for encrypted chat services generally, and WhatsApp has made this threat before — most notably in a protracted legal fight in Brazil that resulted in intermittent bans. But as the Indian government expands its powers over online speech, the threat of a full-scale ban is closer than it’s been in years.”

And that could be a problem for a lot of people. We also learn:

“WhatsApp is used by more than half a billion people in India — not just as a chat app, but as a doctor’s office, a campaigning tool, and the backbone of countless small businesses and service jobs. There’s no clear competitor to fill its shoes, so if the app is shut down in India, much of the digital infrastructure of the nation would simply disappear. Being forced out of the country would be bad for WhatsApp, but it would be disastrous for everyday Indians.”

Yes, that sounds bad. For the Electronic Frontier Foundation, it gets worse: The civil liberties organization insists the regulation would violate privacy and free expression for all users, not just suspected criminals.

To be fair, WhatsApp has done a few things to limit harmful content. It has placed limits on message forwarding and has boosted its spam and disinformation reporting systems. Still, there is only so much it can do when enforcement relies on user reports. To do more would require violating the platform’s hallmark: its end-to-end encryption. Even if WhatsApp wins this round, Brandom notes, the issue is likely to come up again when and if the Bharatiya Janata Party does well in the current elections. 
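
Why traceability and end-to-end encryption pull against each other is worth spelling out. To answer “who first sent this content,” a platform needs a lookup keyed on something derived from the message itself. Below is a toy sketch of such a registry in Python; it is not India’s rule or WhatsApp’s design, just an illustration that the lookup requires the service to process plaintext (or a hash of it) for every message, which is exactly what end-to-end encryption is built to prevent.

```python
import hashlib

class TraceabilityRegistry:
    """Toy 'first originator' lookup keyed on a content fingerprint."""

    def __init__(self) -> None:
        self._first_sender: dict[str, str] = {}

    def record(self, sender: str, plaintext: bytes) -> str:
        fingerprint = hashlib.sha256(plaintext).hexdigest()
        # Only the first sender of a given content is retained.
        self._first_sender.setdefault(fingerprint, sender)
        return fingerprint

    def first_originator(self, plaintext: bytes) -> str | None:
        return self._first_sender.get(hashlib.sha256(plaintext).hexdigest())

registry = TraceabilityRegistry()
registry.record("user-A", b"forwarded rumor")
registry.record("user-B", b"forwarded rumor")       # a forward, not the origin
print(registry.first_originator(b"forwarded rumor"))  # 'user-A'

# The catch: building this table requires the service to see the plaintext
# (or a hash of it) for every message end-to-end encryption is meant to hide.
```

Note, too, that changing even one character of a forwarded message produces a different fingerprint, which is one reason critics doubt such schemes would reliably identify a true “first originator.”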

Cynthia Murrell, May 15, 2024

Reflecting on the Value Loss from a Security Failure

May 6, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Right after the October 2023 security lapse in Israel, I commented to one of the founders of a next-generation Israeli intelware developer, “Quite a security failure.” The response was, “It is Israel’s 9/11.” One of the questions that kept coming to my mind was, “How could such sophisticated intelligence systems, software, and personnel have dropped the ball?” I have arrived at an answer: Belief in the infallibility of in situ systems. Now I am thinking about the cost of a large-scale security lapse.


It seems the young workers are surprised the security systems did not work. Thanks, MSFT Copilot. Good enough which may be similar to some firms’ security engineering.

Globes published “Big Tech 50 Reveals Sharp Falls in Israeli Startup Valuations.” The write up provides some insight into the business cost of security which did not live up to its marketing. The write up says:

The Israeli R&D partnership has reported to the TASE [Tel Aviv Stock Exchange] that 10 of the 14 startups in which it has invested have seen their valuations decline.

Interesting.

What strikes me is that the cost of a security lapse is obviously personal and financial. One of the downstream consequences is a loss of confidence or credibility. Israel’s hardware and software security companies have had, in my opinion, a visible presence at conferences addressing specialized systems and software. The marketing of the capabilities of these systems has been maturing and becoming more like Madison Avenue efforts.

I am not sure which is worse: The loss of “value” or the loss of “credibility.”

If we transport the question about the cost of a security lapse to a large US high-technology company, I am not sure a Globes-type article captures the impact. Frankly, US companies suffer security issues on a regular basis. Only a few make headlines. And then the firms responsible for the vulnerable hardware or software issue a news release, provide a software update, and move on.

Several observations:

  1. The glittering generalities about the security of widely used hardware and software are simply out of step with reality
  2. Vendors of specialized software such as intelware suggest that their systems provide “protection” or “warnings” about issues so that damage is minimized. I am not sure I can trust these statements.
  3. The customers, who may have made security configuration errors, have the responsibility to set up the systems, update, and have trained personnel operate them. That sounds great, but it is simply not going to happen. Customers are assuming what they purchase is secure.

Net net: The cost of security failure is enormous: Loss of life, financial disaster, and undermining the trust between vendor and customer. Perhaps some large outfits should take the security of the products and services they offer beyond a meeting with a PR firm, a crisis management company, or a go-go marketing firm? The “value” of security is high, but it is much more than a flashy booth, glib presentations at conferences, or a procurement team assuming what vendors present correlates with real world deployment.

Stephen E Arnold, May 6, 2024
