A Digital Schism: Is It the 16th Century All Over Again?

December 12, 2022

I noted “FBI Calls Apple’s Enhanced iCloud Encryption Deeply Concerning As Privacy Groups Hail It As a Victory for Users.” I am tempted to provide some historical color about Galileo, Jesuits, and infinitesimals. I won’t. I will point out that schisms are evident today and may be as fraught as those of an era when data flows were not ripping apart social norms. (How bad was it in the 16th century? Think in terms of toasting in fires those who did not go with the program. Quite toasty for some.)

The write up explains:

Apple yesterday [December 7, 2022] announced that end-to-end encryption is coming to even more sensitive types of iCloud data, including device backups, contacts, messages, photos, and more, meeting the longstanding demand of both users and privacy groups who have rallied for the company to take the significant step forward in user privacy.

Who is in favor of Apple’s E2EE push? The article says:

We [the Electronic Frontier Foundation] applaud Apple for listening to experts, child advocates, and users who want to protect their most sensitive data. Encryption is one of the most important tools we have for maintaining privacy and security online. That’s why we included the demand that Apple let users encrypt iCloud backups in the Fix It Already campaign that we launched in 2019.

Across the E2EE chess board is the FBI. The article points out:

In a statement to The Washington Post, the FBI, the largest intelligence agency in the world, said it’s “deeply concerned with the threat end-to-end and user-only-access encryption pose.” The bureau said that end-to-end encryption and Apple’s Advanced Data Protection make it harder for them to do their work and that they request “lawful access by design.”

I don’t have a dog in this commercial push for E2EE, which is one component in Apple’s marketing of itself as the Superman/Superwoman of truth, justice, and the American way. (A 30 percent app store tariff is part of this mythic setup as well.) I understand the concern of the investigators, but I am retired and sitting on the sidelines as I watch the Grim Reaper’s Rivian creep closer.

Several observations:

  1. In the boundary between these two sides or factions, the emergent behavior will get around the rules. That emergent behavior is a consequence of apparently irreconcilable differences. The impact of this schism will reverberate for an unknown amount of time.
  2. Absolutism makes perfect sense in a social setting where one side enjoys near total control of behavior, access, thoughts, etc. However, we live in a Silicon Valley environment partially fueled by phenomenological existentialism. Toss in the digital flows of information, and the resulting mixture is likely to be somewhat unpredictable.
  3. Compromise will be painful, but baby steps will be taken. Even Iran is reassigning its morality police to less riot-inducing activities. China has begun to respond to increasingly unhappy campers in lockdown mode. Like I said, baby steps.

Net net: Security and privacy are a bit like love and Plato’s chair. Welcome to the digital Middle Ages. The emergent middle class may well be bad actors.

Stephen E Arnold, December 12, 2022

Is Cyber Security Lagging a Grade Behind Other Technology?

November 25, 2022

The average computer user is unaware of how invasive and harmful cyber attacks are. Forbes details how little individuals and companies know about cyber crime in “Why We Need A Cyber Intelligence Revolution.” Peiter Zatko is a famed hacker and the former head of Twitter’s security. He revealed in a whistleblower complaint that Twitter’s users and data are at risk because of poor security measures.

The whistleblower complaint was not a surprise to the cybersecurity world, but it was to everyone else. Companies and individuals need to be aware of the capabilities and limitations of cyber security, and companies should set reasonable expectations for their cybersecurity teams. Businesses face growing risk from security breaches, ransomware, and other threats. Legacy systems are especially vulnerable because they were not designed to withstand modern cyber attacks.

Cybersecurity teams need to be proactive. They can be proactive by gathering real-time intelligence from multiple sources to identify and prevent bad actors from attacking. Cybersecurity workers are in a pickle though:

“Our company recently conducted a survey of more than 300 IT professionals to determine the state of enterprise cybersecurity today and gather insights to lead us into a more secure future. Seventy-two percent of respondents have added new technologies in the past 12 months and nearly half (46%) have more than six tools and services in their security stack today. At the same time, 27% don’t even know how many tools they have in their security stack, and almost a quarter of professionals (24%) said their security posture is average or below average, indicating their awareness of their security stack vulnerabilities.”

A Gartner survey also found that 75% of organizations are investing in security vendor consolidation because they want to reduce the strain on their cybersecurity teams. Worse, old standbys such as firewalls no longer suffice on their own.

Organizations and individuals can take a few steps to ensure they remain safe. They can assess their current security plan and run a threat scan, use proactive and reactive solutions, and integrate threat intelligence from multiple sources.
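The “integrate threat intelligence from multiple sources” step can be sketched in a few lines. The feed names and indicator values below are invented for illustration; a real pipeline would pull from commercial feeds or STIX/TAXII endpoints over the network.

```python
# Sketch: merging threat-intelligence indicators from multiple feeds.
# Indicators reported by several independent feeds rank as higher confidence.
from collections import Counter

def merge_feeds(feeds):
    """Combine indicator lists; rank by how many feeds report each one."""
    counts = Counter()
    for feed in feeds.values():
        counts.update(set(feed))  # de-duplicate within a single feed
    return sorted(counts.items(), key=lambda kv: -kv[1])

# Hypothetical feeds using documentation-reserved IP ranges.
feeds = {
    "feed_a": ["198.51.100.7", "203.0.113.9"],
    "feed_b": ["203.0.113.9", "192.0.2.44"],
    "feed_c": ["203.0.113.9"],
}
ranked = merge_feeds(feeds)
# "203.0.113.9" appears in all three feeds, so it tops the list.
```

The design point is simple: corroboration across independent sources is a cheap proxy for confidence, which is what the “multiple sources” advice is really about.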

Whitney Grace, November 25, 2022

Cyber Security? That Is a Good Question

November 25, 2022

This is not ideal. We learn from Yahoo Finance, “Russian Software Disguised as American Finds Its Way into U.S. Army, CDC Apps.” Reuters journalists James Pearson and Marisa Taylor report:

“Thousands of smartphone applications in Apple and Google’s online stores contain computer code developed by a technology company, Pushwoosh, that presents itself as based in the United States, but is actually Russian, Reuters has found. The Centers for Disease Control and Prevention (CDC), the United States’ main agency for fighting major health threats, said it had been deceived into believing Pushwoosh was based in the U.S. capital. After learning about its Russian roots from Reuters, it removed Pushwoosh software from seven public-facing apps, citing security concerns. The U.S. Army said it had removed an app containing Pushwoosh code in March because of the same concerns. That app was used by soldiers at one of the country’s main combat training bases. According to company documents publicly filed in Russia and reviewed by Reuters, Pushwoosh is headquartered in the Siberian town of Novosibirsk, where it is registered as a software company that also carries out data processing. … Pushwoosh is registered with the Russian government to pay taxes in Russia. On social media and in U.S. regulatory filings, however, it presents itself as a U.S. company, based at various times in California, Maryland and Washington, D.C., Reuters found.”

Pushwoosh’s software was included in the CDC’s main app and in apps that share information on health concerns, including STDs. The Army had used the software in an information portal at, perhaps among other places, its National Training Center in California. Any data breach there could potentially reveal upcoming troop movements. Great. To be clear, there is no evidence data has been compromised. However, we do know Russia has a pesky habit of seizing any data it fancies from companies based within its borders.

Other entities apparently duped by Pushwoosh include the NRA, Britain’s Labour Party, large companies like Unilever, and makers of many items on Apple’s and Google’s app stores. The article includes details on how the company made it look like it was based in the US and states the FTC has the authority to prosecute those who engage in such deceptive practices. Whether it plans to bring charges is yet to be seen.

Cynthia Murrell, November 25, 2022

With Mass Firings, Here Is a Sketchy Factoid to Give One Pause

November 17, 2022

In the midst of the Twitter turmoil and the mea culpas of the Zuck and the Zen master (Jack Dorsey), the idea of organizational vulnerability is not getting much attention. One facet of layoffs or RIFs (reductions in force) is captured in the article “Only a Quarter of Businesses Have Confidence Ex-Employees Can No Longer Access Infrastructure.” True to content marketing form, the details of the methodology are not disclosed.

Who among the thousands terminated via email or a Slack message is going to figure out that selling “insider information” is a good way to make money? Are those executive recruitment firms vetting their customers? Is that jewelry store in Athens on the up and up, or is it operated by a friend of everyone’s favorite leader, Vlad the Assailer? What mischief might a tech-savvy former employee undertake as a contractor on Fiverr or as a disgruntled person in a coffee shop?

The write up states:

Only 24 percent of respondents to a new survey are fully confident that ex-employees no longer have access to their company’s infrastructure, while almost half of organizations are less than 50 percent confident that former employees no longer have access.

An outfit called Teleport did the study. A few other factoids that I found suggestive are:

  • … Organizations [are] using on average 5.7 different tools to manage access policy, making it complicated and time-consuming to completely shut off access.
  • “62 percent of respondents cite privacy concerns as a leading challenge when replacing passwords with biometric authentication.”
  • “55 percent point to a lack of devices capable of biometric authentication.”

Let’s assume that these data are off by 10 or 15 percent. There’s room for excitement in my opinion.
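To make the access-revocation gap concrete, here is a minimal sketch of the audit an organization could run, assuming it can export a termination list from HR and last-login times from its identity provider. Both datasets below are invented for illustration.

```python
# Sketch: flagging accounts still active after an employee's termination date.
# HR and IAM records here are hypothetical; in practice they would come from
# an HR export and an identity-provider audit log.
from datetime import date

def stale_access(terminated, last_login):
    """Return users who logged in after their termination date."""
    return [user for user, end_date in terminated.items()
            if user in last_login and last_login[user] > end_date]

terminated = {"jdoe": date(2022, 10, 1), "asmith": date(2022, 11, 4)}
last_login = {
    "jdoe": date(2022, 11, 10),    # logged in well after termination
    "asmith": date(2022, 10, 30),  # last login predates termination
    "bjones": date(2022, 11, 12),  # still employed, not in terminated list
}
flagged = stale_access(terminated, last_login)
```

A check this trivial is exactly what the survey suggests many organizations cannot run, because access is scattered across an average of 5.7 tools rather than one auditable record.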

Stephen E Arnold, November 17, 2022

Thomson Reuters: Trust the Firm with Data Security?

November 16, 2022

Thomson Reuters tosses around the word “trust.” Should one trust the firm with data security? (Keep in mind that Thomson Reuters compiles and licenses data to law enforcement and intelligence entities in the US and elsewhere, please.)

As most people know, everyone makes mistakes, but Thomson Reuters made one heck of a doozy when the company left three terabytes of sensitive information open to the Internet, where hackers and their nefarious bots could purloin it. Cyber News discusses the fallout in “Thomson Reuters Collected And Leaked At Least 3TB Of Sensitive Data.” The data sat in three databases housed in Elasticsearch software.

Thomson Reuters fixed the problem when it found it, then notified its customers. The company specializes in business-to-business media tools such as Checkpoint, ONESOURCE, Westlaw, and Reuters Connect. The exposed databases run on the open-source software Elasticsearch, which was designed for companies handling large amounts of constantly updated data. In the criminal world, the leaked three terabytes are worth millions of dollars.

Two databases were public-facing, meaning they were meant to be accessible to the public, while the third was a non-production server related to the product ONESOURCE. The leaked data could cause a lot of mayhem:

“Researchers believe that any loss of information on the dataset could not only harm Thomson Reuters and its clients but also be detrimental to the public interest.

For example, the open database was leaking some individuals’ and organizations’ sensitive screening and compliance data. Accessible data from the public-facing Thomson Reuters database could have tipped off entities that would like their wrongdoing kept in the dark.

According to Martynas Vareikis, Information Security Researcher at Cybernews, threat actors could use the email addresses exposed in the dataset to carry out phishing attacks. Attackers could impersonate Thomson Reuters and send the company’s customers fake invoices.”

While Thomson Reuters attributes the error to a system glitch, leaving the passwords in plaintext was a rookie mistake. No matter how strong the passwords are, they are worthless once exposed.
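For contrast, here is a minimal sketch of what non-rookie password storage looks like, using the salted, memory-hard scrypt function from Python’s standard library. The parameters are chosen for illustration, not as a production recommendation.

```python
# Sketch: storing a salted scrypt digest instead of a plaintext password.
# A leaked database then exposes only expensive-to-reverse digests.
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest) for a password using scrypt."""
    salt = salt or os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify(password, salt, digest):
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
```

The per-user random salt blocks precomputed lookup tables, and the constant-time comparison avoids leaking information through timing; neither helps if the password itself is stored in the clear.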

Trust? Maybe it is a marketing play?

Whitney Grace, November 16, 2022

Microsoft Downplays Revelation of Massive Data Leak

November 1, 2022

Microsoft customers have reason to be annoyed despite the company’s insistence there is nothing to see here. “Microsoft Under Fire After Leaking 2.4TB of Data from Customers Including Contracts, Emails, and More,” reveals Tech Times. Citing a report by cybersecurity firm SOCRadar, writer Joseph Henry tells us:

“According to SOCRadar post, 2.4TB of confidential data from more than 65,000 entities has been leaked because of the misconfiguration in the data bucket. The cybersecurity firm confirms that the data involved in the leak include State of Work (SoW) documents, PII (Personally Identifiable Information) data, Proof-of-Execution (PoE) data, customer emails, project details, product offers, and more. SOCRadar also notes that the above mentioned data spanned five years, particularly from 2017 to August 2022. It should be noted that Microsoft did not include the number of affected customers in its announcement. Unfortunately, instead of acknowledging SOCRadar’s finding, the Redmond giant downplayed the statement by disapproving of its post. Microsoft added that its investigation showed that no customer accounts were compromised in the process.”

Really? What a stroke of good fortune. Henry goes on to share some customer comments regarding the data leak as collected by Ars Technica. Apparently few are reassured by the company’s insistence SOCRadar is exaggerating. If nothing else, some note, this incident highlights Microsoft’s policy of retaining sensitive information in perpetuity. That is not exactly a security best practice. See the SOCRadar post for its description of the misconfiguration that caused this kerfuffle and its potential ramifications.

Which big tech giant will be the next one to get an F in security? My hunch is that it is Amazon’s turn to lose the game of cyber security musical chairs.

Cynthia Murrell, November 1, 2022

Exabeam: A Remarkable Claim

October 25, 2022

I read “Exabeam New Scale SIEM Enables Security Teams to Detect the Undetectable.” I find the idea expressed in the headline interesting. A commercial firm can spot something that cannot be seen; that is, detect the undetectable. The write up states as a rock solid factoid:

Claimed to be an industry first, Exabeam New-Scale SIEM allows security teams to search query responses across petabytes of hot, warm and cold data in seconds. Organizations can use the service to process logs with limitless scale at sustained speeds of more than 1 million events per second. Key to Exabeam’s offering is the ability to understand normal behavior to detect and prioritize anomalies. Exabeam New-Scale SIEM offers more than 1,800 pre-built correlation rules and more than 1,100 anomaly detection rules that leverage in excess of 750 behavior analytics detection models, which baseline normal behavior.

The write up continues with a blizzard of buzzwords; to wit:

The full list of new Exabeam products includes Security Log Management — cloud-scale log management to ingest, parse, store and search log data with powerful dashboarding and correlation. Exabeam SIEM offers cloud-native SIEM at hyperscale with modern search and powerful correlation, reporting, dashboarding and case management, and Exabeam Fusion provides New-Scale SIEM powered by modern, scalable security log management, powerful behavioral analytics and automated TDIR, according to the company. Exabeam Security Analytics provides automated threat detection powered by user and entity behavior analytics with correlation and threat intelligence. Exabeam Security Investigation is powered by user and entity behavior analytics, correlation rules and threat intelligence, supported by alerting, incident management, automated triage and response workflows.

Now this is not detecting the undetectable. The approach relies on processing data quickly, using anomaly detection methods, and pre-formed rules.

By definition, a pre-formed rule is likely to have a tough time detecting the undetectable. Bad actors exploit tried-and-true security weaknesses, rely on very-tough-to-detect behaviors (like a former employee selling a bad actor information about a target’s system), and deploy new exploits cooked up, in the case of NSO Group, in a small mobile phone shop or in a college class in Iran.
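For a sense of what such behavioral baselining amounts to in practice, here is a deliberately simple sketch of the “normal behavior” idea: flag a count that strays too far from a historical baseline. This is my illustration, not Exabeam’s method, and the numbers are invented.

```python
# Sketch: baseline "normal behavior" and flag statistical outliers.
# A real SIEM would baseline per user/entity over weeks of log data,
# not nine hand-typed numbers.
import statistics

def is_anomalous(history, today, threshold=3.0):
    """Flag today's count if it sits > threshold std-devs from the mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(today - mean) > threshold * stdev

logins_per_day = [12, 15, 11, 14, 13, 12, 16, 14, 13]  # baseline window
```

Note what this sketch cannot do: an attacker whose activity stays inside the baseline (a former insider using valid credentials at normal volumes) never trips the threshold, which is the core of the objection to “detect the undetectable.”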

What is notable in the write up is:

The use of SIEM without explaining that the acronym stands for “security information and event management.” The bound phrase “security information” denotes the data marking an exploit or attack, and “event management” means what cyber security professionals do when an attack succeeds. The entire process is reactive; that is, only after something bad has been identified can action be taken. Without awareness, the attack can move forward and continue. “Early warning” means one thing; detecting the undetectable is quite another.

Who is responsible for this detect the undetectable? My view is that it is an art history major now working in marketing.

Detecting the undetectable. More like detecting sloganized marketing about a very serious threat to organizations hungry for dashboarding.

Stephen E Arnold, October 25, 2022

Open Source Is the Answer. Maybe Not?

October 24, 2022

In my last three lectures, I have amplified and explained what I call the open source frenzy and the concomitant blind spots. One senior law enforcement professional told me after a talk in September 2022, “We’re pushing forward with open source.” To be fair, that’s been the position of many government professionals with whom I have spoken in this year. Open source delivers high value software. Open source provides useful information with metatags. These data can be cross correlated to provide useful insight for investigators. Open source has even made it easier for those following Mr. Putin’s special action to get better information than those in war fighting hot spots.

Open source is the answer.

If you want a reminder about the slippery parts of open source information, navigate to “Thousands of GitHub Repositories Deliver Fake PoC Exploits with Malware.” The write up reports:

According to the technical paper from the researchers at Leiden Institute of Advanced Computer Science, the possibility of getting infected with malware instead of obtaining a PoC could be as high as 10.3%, excluding proven fakes and prankware.

Not a big deal, right?

Wrong. These data, even if the percentage is adrift, point to a vulnerability caused by the open source cheerleaders.

The write up does a good job of providing examples, which will be incomprehensible to most people. However, the main point of the write up is that open source software repositories can be swizzled. The software, libraries, executables, and other bits and bobs can carry additional functions in their objects. If that takes place, the vulnerability rides along until called upon to perform an unexpected and possibly difficult-to-identify action.
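As a toy illustration of the kind of automated flagging the researchers propose, here is a crude triage pass over a PoC file’s source, looking for two indicators of the sort reported in such studies: hard-coded IP addresses and long base64-like blobs. The patterns and the sample are mine and nowhere near a real detector.

```python
# Sketch: crude static triage of a proof-of-concept source file.
# Real analyses combine many indicators plus dynamic analysis; these two
# regexes are illustrative only and will produce false positives.
import re

IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
B64_RE = re.compile(r"[A-Za-z0-9+/=]{80,}")  # suspiciously long encoded blob

def triage(source):
    """Return a list of suspicious indicators found in the source text."""
    findings = []
    if IP_RE.search(source):
        findings.append("hard-coded IP address")
    if B64_RE.search(source):
        findings.append("long base64-like blob")
    return findings

# Invented sample: a 120-character encoded payload plus a call-home address.
sample = 'payload = "' + "QUFB" * 30 + '"\nhost = "203.0.113.66"'
```

Even this toy shows the limit the post describes: static pattern matching catches only known shapes of mischief, never the genuinely novel payload.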

Cyber security is primarily reactive. Embedded malware can be proactive, particularly if it uses a previously unknown code flaw.

The interesting part of the write up is this passage in my opinion:

The researchers have reported all the malicious repositories they discovered to GitHub, but it will take some time until all of them are reviewed and removed, so many still remain available to the public. As Soufian [a Dark Trace expert] explained, their study aims not just to serve as a one-time cleaning action on GitHub but to act as a trigger to develop an automated solution that could be used to flag malicious instructions in the uploaded code.

The idea of unknown or zero-day flaws is apparently not on the radar. What’s this mean in practical terms? A “good enough” set of actions to deal with known issues is not going to be good enough.

This seems to set the stage for a remedial action that does not address the workflows and verification for open source. More significantly, should the focus be on code only?

The answer is, “No.” Think about injecting Fibonacci sequences into certain quantum computer operations. Can injection of crafted numerical strings into automated content processing systems throw a wrench into the works?

The answer to this question is, “Yes.”

Stephen E Arnold, October 24, 2022

TikTok: Tracking Humanoids? Nope, Never, Ever

October 21, 2022

I read “TikTok Denies It Could Be Used to Track US Citizens.” Allegedly linked to the cheerful nation state China, TikTok asserts that it cannot, does not, and never ever thought about analyzing log data. Nope, we promise.

The article asserts:

The social media giant said on Twitter that it has never been used to “target” the American government, activists, public figures or journalists. The firm also says it does not collect precise location data from US users.

Here’s a good question: Have persistent cookies, geospatial data, content consumption analytics, and psychological profiling based on thematics never jibed with TikTok data at the Surveillance Soirée?

The answer is, according to the Beeb:

The firm [TikTok] also says it does not collect precise location data from US users. It was responding to a report in Forbes that data would have been accessed without users’ knowledge or consent. The US business magazine, which cited documents it had seen, reported that ByteDance had started a monitoring project to investigate misconduct by current and former employees. It said the project, which was run by a Beijing-based team, had planned to collect location data from a US citizen on at least two occasions.

Saying is different from doing in my opinion.

Based on my limited experience with online, would it be possible for a smart system with access to log data to do some high-value data analysis? Would it be possible to link the analytics’ output with a cluster of users? Would it be possible to cross-correlate data so that individuals with a predicted propensity for a desired behavior could be identified?

Of course not. Never. Nation states and big companies are fountains of truth.

TikTok. Why worry?

Stephen E Arnold, October 21, 2022

Cy4Gate Named As Big Player In AI Industry

October 21, 2022

There are famous industry awards: Academy Award, Golden Globe, Emmy, Pulitzer, Newbery Award, Caldecott Medal, Nobel Prize, Peabody Award, etc. These are associated with entertainment, science, and literature. Lesser-known industry awards are hardly heard of outside their relevant fields, but they still earn bragging rights. Cy4Gate recently won bragging rights in AI: “Cy4Gate Mentioned As A Representative Provider In 2022 Gartner Innovation Insight For Composite AI Report.”

Gartner is a renowned research company, and anyone who gets a compliment from the firm is at the top of their game. Cy4Gate won recognition in AI as a “Representative Provider” for composite artificial intelligence solutions. Composite artificial intelligence combines several machine learning techniques (e.g., deep neural networks, natural language processing, computer vision, and speech recognition) to make big data analysis more effective and efficient without demanding outsized computation capabilities. Cy4Gate earned this recognition through its years of research and development in AI applications.

“Since its establishment, Cy4gate has considered as decisive the use of AI in innovative ways, to ensure its products the ability to perform at excellent levels even in highly complex, uncertain and ambiguous contexts. Within these application areas, the enormous amount of data generated by the consistent increase of interconnected devices can be profitably used to adopt appropriate and timely decisions, and to reduce margins of error.”

Cy4Gate’s products, which specialize in cyber security and intelligence, are believed to have a competitive advantage over rivals. Other AI companies in the cyber security and intelligence field rely on single AI algorithms instead of combining them into composite artificial intelligence. Building on these advances and the recognition, Cy4Gate established a new division: the Data and Artificial Intelligence Center of Competence, part of the engineering department.

Whitney Grace, October 21, 2022
