FOGINT: Targets Draw Attention. Signal Is a Target

April 1, 2025

Dinobaby says, “No smart software involved. That’s for ‘real’ journalists and pundits.”

We have been plugging away on the “Telegram Overview: Notes for Analysts and Investigators.” We have not exactly ignored Signal or the dozens of other super-secret, encrypted-beyond-belief messaging applications. We did compile a table of those we came across, and Signal was on that list.

I read “NSA Warned of Vulnerabilities in Signal App a Month Before Houthi Strike Chat.” I am not interested in the political facets of this incident. The important point for me is this statement:

The National Security Agency sent out an operational security special bulletin to its employees in February 2025 warning them of vulnerabilities in using the encrypted messaging application Signal

One of the big-time cyber security companies spoke with me, and I mentioned that Signal might not be the cat’s pajamas. To the credit of that company and the former police chief with whom I spoke, the firm shifted to an end-to-end encrypted messaging app we had identified as slightly less wonky. Good for that company, and a pat on the back for the police chief who listened to me.

In my experience, operational bulletins are worth reading. When the bulletin is “special,” re-reading the message is generally helpful.

Signal, of course, defends itself vigorously. The coach who loses a basketball game says, “Our players put out a great effort. It just wasn’t enough.”

Presenting oneself as a super-secret messaging app immediately makes that app a target. I know firsthand that some whiz kid entrepreneurs believe that their E2EE solution is the best one ever. In fact, a year ago, such an entrepreneur told me, “We have developed a method that only a government agency can compromise.”

Yeah, that’s the point of the NSA bulletin.

Let me ask you a question: “How many computer science students in countries outside the United States are looking at E2EE messaging apps and trying to figure out how to compromise the data?” Years ago, I gave some lectures in Tallinn, Estonia. I visited a university computer science class. I asked the students about the projects each had selected. Several of them told me that they were trying to compromise messaging systems. A favorite target was Telegram, but Signal came up as well.

I know the wizards who cook up E2EE messaging apps and use the latest and greatest methods for delivering security with bells on are fooling themselves. Here are the reasons:

  1. Systems relying on open source methods are well documented. Exploits exist, and we have noticed some crime-as-a-service (CaaS) offers to compromise these messages. The methods may be illegal in many countries, but they exist. (I won’t provide a checklist in a free blog post. Sorry.)
  2. Techniques to prevent compromise of secure messaging systems involve some patented systems and methods. Yes, the patents are publicly available, but the methods are simply not practical to implement unless one has considerable resources for software, hardware, and deployment.
  3. A number of organizations turn EE2E messaging systems into happy eunuchs taking care of the sultan’s harem. I have poked fun at the blunders of the NSO Group and its Pegasus approach, and I have pointed out that the goodies of the Hacking Team escaped into the wild a long time ago. The point is that once the procedures for performing certain types of compromise are no longer secret, other humans can and will create a facsimile and use those emulations to suck down private messages, the metadata, and probably the pictures on the device too. Toss in some AI jazziness, and the speed of the process goes faster than my old 1962 Studebaker Lark.

Let me wrap up by reiterating that I am not addressing the incident involving Signal. I want to point out that I am not in the “information wants to be free” camp. Certain information is best managed when it is secret. Outfits like Signal and the dozens of other E2EE messaging apps are targets. Targets get hit. Why put neon lights on oneself and then try to hide the fact that those young computer science students or their future employers will find a way to compromise the information?

Technical stealth, network fiddling, human bumbling: compromises will continue to occur. There were good reasons to enforce security; that is why stringent procedures and hardened systems were developed. Today security is a marketing claim, and non-open-source, non-American methods may no longer deliver what the 23-year-old art history major with a job in marketing says the systems deliver.

Stephen E Arnold, April 1, 2025

Cyber Attacks in Under a Minute

March 25, 2025

Cybercrime has evolved. VentureBeat reports, "51 Seconds to Breach: How CISOs Are Countering AI-Driven, Lightning-Fast Deepfake, Vishing and Social Engineering Attacks." Yes, according to cybersecurity firm CrowdStrike’s Adam Meyers, the fastest breakout time he has seen is 51 seconds. No wonder bad actors have an advantage: it can take cyber defense teams weeks to months to determine a system has been compromised. In the interim, hackers can roam undetected.

Cybercrime methods have also changed. Where malware was once the biggest problem, hackers now favor AI-assisted phishing and vishing (voice-based phishing) campaigns. We learn:

"Vishing is out of control due in large part to attackers fine-tuning their tradecraft with AI. CrowdStrike’s 2025 Global Threat Report found that vishing exploded by 442% in 2024. It’s the top initial access method attackers use to manipulate victims into revealing sensitive information, resetting credentials and granting remote access over the phone. ‘We saw a 442% increase in voice-based phishing in 2024. This is social engineering, and this is indicative of the fact that adversaries are finding new ways to gain access because…we’re kind of in this new world where adversaries have to work a little bit harder or differently to avoid modern endpoint security tools,’ Meyers said. Phishing, too, continues to be a threat. Meyers said, ‘We’ve seen that with phishing emails, they have a higher click-through rate when it’s AI-generated content, a 54% click-through rate, versus 12% when a human is behind it.’"

The write-up suggests three strategies to fight today’s breaches. Stop attackers at the authentication layer by shortening token lifetimes and implementing real-time revocation. Also, set things up so no one person can bypass security measures. No, not even the owner. Maybe especially not them. Next, we are advised, fight AI with AI: Machine-learning tools now exist to detect intrusions and immediately shut them down. Finally, stop lateral movement from the breach point with security that is unified across the system. See the write-up for more details on each of these.
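
For the curious, here is a minimal sketch of the first strategy: short-lived tokens plus a real-time revocation check. It assumes Python and the PyJWT library; the key, lifetime, and in-memory deny list are illustrative, not anyone’s production design.

```python
# Minimal sketch: short-lived tokens with real-time revocation (illustrative only).
import uuid
import datetime
import jwt  # PyJWT

SECRET = "replace-with-a-real-key"   # hypothetical key, not a real secret
REVOKED_JTIS = set()                 # in production: a shared store such as Redis

def issue_token(subject: str, lifetime_seconds: int = 300) -> str:
    """Issue a token that expires quickly, shrinking the window a stolen token is useful."""
    now = datetime.datetime.now(datetime.timezone.utc)
    claims = {
        "sub": subject,
        "jti": str(uuid.uuid4()),                                   # unique ID for revocation
        "iat": now,
        "exp": now + datetime.timedelta(seconds=lifetime_seconds),  # short lifetime
    }
    return jwt.encode(claims, SECRET, algorithm="HS256")

def check_token(token: str) -> dict:
    """Reject expired or revoked tokens; jwt.decode enforces 'exp' automatically."""
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    if claims["jti"] in REVOKED_JTIS:
        raise PermissionError("token revoked")
    return claims

def revoke(token: str) -> None:
    """Real-time revocation: add the token's ID to the deny list."""
    claims = jwt.decode(token, SECRET, algorithms=["HS256"], options={"verify_exp": False})
    REVOKED_JTIS.add(claims["jti"])
```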

Cynthia Murrell, March 25, 2025

Why Worry about TikTok?

March 21, 2025

We have smart software, but the dinobaby continues to do what 80 year olds do: Write the old-fashioned human way. We did give up clay tablets for a quill pen. Works okay.

I hope this news item from WCCF Tech is wildly incorrect. I have a nagging thought that it might be on the money. “Deepseek’s Chatbot Was Being Used By Pentagon Employees For At Least Two Days Before The Service Was Pulled from the Network; Early Version Has Been Downloaded Since Fall 2024” is the headline I noted. I find this interesting.

The short article reports:

A more worrying discovery is that Deepseek mentions that it stores data on servers in China, possibly presenting a security risk when Pentagon employees started playing around with the chatbot.

And adds:

… employees were using the service for two days before this discovery was made, prompting swift action. Whether the Pentagon workers have been reprimanded for their recent act, they might want to exercise caution because Deepseek’s privacy policy clearly mentions that it stores user data on its Chinese servers.

Several observations:

  1. This is a nifty example of an insider threat. I thought cyber security services blocked this type of to-and-fro between government computers and public servers. (A sketch of the sort of egress check one would expect appears after this list.)
  2. The reaction time was either months (the early version has been downloaded since fall 2024) or 48 hours. My hunch is that the months-long usage of an early version of the Chinese service is the real story.
  3. Which “manager” is responsible? Sorting out which vendors’ software did not catch this and which individual’s unit dropped the ball will be interesting and probably unproductive. Is it in any authorized vendor’s interest to say, “Yeah, our system doesn’t look for phoning home to China, but it will be in the next update if your license is paid up for that service”? Will a US government professional say, “Our bad”?
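
For readers who wonder what such blocking looks like, here is a toy Python sketch of an egress check. The deny list and helper are illustrative; real government networks rely on commercial proxies and DNS filtering, not ten lines of script.

```python
# Toy sketch of the egress control one would expect on such a network:
# check each outbound request's destination against a deny list before
# allowing it. The deny-list entry and function names are illustrative.
import socket
from urllib.parse import urlparse

DENY_SUFFIXES = (".deepseek.com",)  # hypothetical deny-list entry

def egress_allowed(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # Block exact matches and any subdomain of a denied suffix.
    if any(host == s.lstrip(".") or host.endswith(s) for s in DENY_SUFFIXES):
        return False
    try:
        socket.gethostbyname(host)  # resolve so the attempt can be logged/reviewed
    except socket.gaierror:
        return False
    return True

assert egress_allowed("https://chat.deepseek.com/api") is False
```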

Net net: We have snow removal services that don’t remove snow. We have aircraft crashing in sight of government facilities. And we have Chinese smart software running on US government systems connected to the public Internet. Interesting.

Stephen E Arnold, March 21, 2025

AI Hiring Spoofs: A How To

March 12, 2025

Be aware. A dinobaby wrote this essay. No smart software involved.

The late Robert Steele, one of the first government professionals to hop on the open source information bandwagon, and I worked together for many years. In one of our conversations in the 1980s, Robert explained how he used a fake persona to recruit people to assist him in his work on a US government project. He explained that job interviews were an outstanding source of information about a company or an organization.

“AI Fakers Exposed in Tech Dev Recruitment: Postmortem” is a modern spin on Robert’s approach. Instead of newspaper ads and telephone calls, today’s approach uses AI and video conferencing. The article presents a recipe for a technique that was not widely discussed in the 1980s. Robert learned his approach from colleagues in the US government.

The write up explains that a company wants to hire a professional. Everything hums along and then:

…you discover that two imposters hiding behind deepfake avatars almost succeeded in tricking your startup into hiring them. This may sound like the stuff of fiction, but it really did happen to a startup called Vidoc Security, recently. Fortunately, they caught the AI impostors – and the second time it happened they got video evidence.

The cited article explains how to set up and operate this type of deepfake play. I am not going to present the “how to” in this blog post. If you want the details, head to the original. The penetration tactic requires Microsoft LinkedIn, which gives that platform another use case for certain individuals gathering intelligence.

Several observations:

  1. Keep in mind that the method works for fake employers looking for “real” employees in order to obtain information from job candidates. (Some candidates are blissfully unaware that the job is a front for obtaining data about an alleged former employer.)
  2. The best way to avoid AI-centric scams is to do the work the old-fashioned way. Smart software opens up a wealth of opportunities to obtain allegedly actionable information. Unfortunately, the old-fashioned way is slow, expensive, and prone to social engineering tactics.
  3. As AI and bad actors take advantage of the increased capabilities of smart software, humans who are not actively involved with AI capabilities do not adapt quickly. Personnel-related matters are a pain point for many organizations.

To sum up, AI is a tool. It can be used in interesting ways. Is the contractor you hired on Fiverr or via some online service a real person? Is the job a real job or a way to obtain information via an AI that is a wonderful conversationalist? One final point: The target referenced in the write-up was a cyber security outfit. Did the early-alert, proactive, AI-infused system prevent penetration?

Nope.

Stephen E Arnold, March 12, 2025

Encryption: Not the UK Way but Apple Is A-Okay

March 6, 2025

The UK is on a mission. It seems to be making progress. The BBC reports, "Apple Pulls Data Protection Tool After UK Government Security Row." Technology editor Zoe Kleinman explains:

"Apple is taking the unprecedented step of removing its highest level data security tool from customers in the UK, after the government demanded access to user data. Advanced Data Protection (ADP) means only account holders can view items such as photos or documents they have stored online through a process known as end-to-end encryption. But earlier this month the UK government asked for the right to see the data, which currently not even Apple can access. Apple did not comment at the time but has consistently opposed creating a ‘backdoor’ in its encryption service, arguing that if it did so, it would only be a matter of time before bad actors also found a way in. Now the tech giant has decided it will no longer be possible to activate ADP in the UK. It means eventually not all UK customer data stored on iCloud – Apple’s cloud storage service – will be fully encrypted."

The UK’s Home Office refused to comment on the matter. Apple states it was "gravely disappointed" with this outcome. It emphasizes its longstanding refusal to build any kind of back door or master key. It is the principle of the thing. Instead, it is now removing the locks on the main entrance. Much better.

As of the publication of Kleinman’s article, new iCloud users who tried to opt into ADP received an error message. Apparently, protection for existing users will be stripped at a later date. Some worry Apple’s withdrawal of ADP from the UK sets a bad precedent in the face of similar demands in other countries. Of course, so would caving in to them. The real culprit here, some say, is the UK government that put its citizens’ privacy at risk. Will other governments follow its lead? Will tech firms develop some best practices in the face of such demands? We wonder what their priorities will be.

Cynthia Murrell, March 6, 2025

Google and Personnel Vetting: Careless?

February 20, 2025

No smart software required. This dinobaby works the old-fashioned way.

The Sundar & Prabhakar Comedy Show pulled another gag. This one did not delight audiences the way Prabhakar’s AI presentation did, nor does it outdo Google’s recent smart software gaffe. It is, however, a bit of a hoot for an outfit with money, smart people, and smart software.

I read the decidedly non-humorous news release from the Department of Justice titled “Superseding Indictment Charges Chinese National in Relation to Alleged Plan to Steal Proprietary AI Technology.” The February 4, 2025, write-up states:

A federal grand jury returned a superseding indictment today charging Linwei Ding, also known as Leon Ding, 38, with seven counts of economic espionage and seven counts of theft of trade secrets in connection with an alleged plan to steal from Google LLC (Google) proprietary information related to AI technology. Ding was initially indicted in March 2024 on four counts of theft of trade secrets. The superseding indictment returned today describes seven categories of trade secrets stolen by Ding and charges Ding with seven counts of economic espionage and seven counts of theft of trade secrets.


Thanks, OpenAI, good enough.

Mr. Ding, obviously a Type A worker, appears to have been quite industrious at the Google. He was not working only for the online advertising giant; he was also working for another entity. The DoJ news release describes his setup this way:

While Ding was employed by Google, he secretly affiliated himself with two People’s Republic of China (PRC)-based technology companies. Around June 2022, Ding was in discussions to be the Chief Technology Officer for an early-stage technology company based in the PRC.  By May 2023, Ding had founded his own technology company focused on AI and machine learning in the PRC and was acting as the company’s CEO.

What technology caught Mr. Ding’s eye? The write up reports:

Ding intended to benefit the PRC government by stealing trade secrets from Google. Ding allegedly stole technology relating to the hardware infrastructure and software platform that allows Google’s supercomputing data center to train and serve large AI models. The trade secrets contain detailed information about the architecture and functionality of Google’s Tensor Processing Unit (TPU) chips and systems and Google’s Graphics Processing Unit (GPU) systems, the software that allows the chips to communicate and execute tasks, and the software that orchestrates thousands of chips into a supercomputer capable of training and executing cutting-edge AI workloads. The trade secrets also pertain to Google’s custom-designed SmartNIC, a type of network interface card used to enhance Google’s GPU, high performance, and cloud networking products.

At least, Mr. Ding validated the importance of some of Google’s sprawling technical insights. That’s a plus, I assume.

One of the more colorful items in the DoJ news release concerned “evidence.” The DoJ says:

As alleged, Ding circulated a PowerPoint presentation to employees of his technology company citing PRC national policies encouraging the development of the domestic AI industry. He also created a PowerPoint presentation containing an application to a PRC talent program based in Shanghai. The superseding indictment describes how PRC-sponsored talent programs incentivize individuals engaged in research and development outside the PRC to transmit that knowledge and research to the PRC in exchange for salaries, research funds, lab space, or other incentives. Ding’s application for the talent program stated that his company’s product “will help China to have computing power infrastructure capabilities that are on par with the international level.”

Mr. Ding did not use Google’s cloud-based presentation program. I found the explicit desire to “help China” interesting. One wonders how Google’s Googley interview process, run by Googley people, failed to notice any indicators of Mr. Ding’s loyalties. Googlers are very confident of their Googliness, which obviously tolerates an insider threat who conveys data to a nation state known to be adversarial in its view of the United States.

I am a dinobaby, and I find this type of employee insider threat at Google remarkable. Google bought Mandiant. Google has internal security tools. Google has a very proactive stance about its security capabilities. However, in this case, I wonder if a Googler ever noticed that Mr. Ding used PowerPoint, not the Google-approved presentation program. No true Googler would use PowerPoint, an archaic, third-party program Microsoft bought eons ago and has managed to pump full of steroids for decades.

Yep, the tell — Googlers who use Microsoft products. Sundar & Prabhakar will probably integrate a short bit into their act in the near future.

Stephen E Arnold, February 20, 2025

Hackers and AI: Of Course, No Hacker Would Use Smart Software

February 18, 2025

This blog post is the work of a real live dinobaby. Believe me, after reading the post, you know that smart software was not involved.

Hackers would never ever use smart software. I mean those clever stealer distributors preying on get-rich-quick stolen credit card users. Nope. Those people using online games to lure kiddies and people with kiddie-level intelligence into providing their parents’ credit card data? Nope and double nope. Those people in computer science classes in Azerbaijan learning how to identify security vulnerabilities while working as contractors for criminals? Nope. Never. Are you crazy? These bad actors know that smart software is most appropriate for Mother Teresa-type activities and creating GoFundMe pages to help those harmed by natural disasters, bad luck, or not having a job except streaming.

I mean everyone knows that bad actors respect the firms providing smart software. It is common knowledge that bad actors play fair. Why would a criminal use smart software to create more efficacious malware payloads, compromise Web sites, or defeat security to trash the data on Data.gov? Ooops. Bad example. Data.gov has been changed.

I read “Google Says Hackers Abuse Gemini AI to Empower Their Attacks.” That’s the spirit. Bad actors are using smart software. The value of the systems is evident to criminals. The write up says:

Multiple state-sponsored groups are experimenting with the AI-powered Gemini assistant from Google to increase productivity and to conduct research on potential infrastructure for attacks or for reconnaissance on targets. Google’s Threat Intelligence Group (GTIG) detected government-linked advanced persistent threat (APT) groups using Gemini primarily for productivity gains rather than to develop or conduct novel AI-enabled cyberattacks that can bypass traditional defenses. Threat actors have been trying to leverage AI tools for their attack purposes to various degrees of success as these utilities can at least shorten the preparation period. Google has identified Gemini activity associated with APT groups from more than 20 countries but the most prominent ones were from Iran and China.

Stop the real-time news stream! Who could have imagined that bad actors would be interested in systems and methods that make their behaviors more effective and efficient?

When Microsoft rolled out its marketing gut punch aimed squarely at Googzilla, the big online advertising beast responded. The Code Red and Code Yellow lights flashed. Senior managers held meetings after Foosball games and while hanging out at Philz Coffee.

Did Google management envision the reality of bad actors using Gemini? No. It appears that the Google acquisition Mandiant figured it out. Eventually — it’s been two years and counting since Microsoft caused the AI tsunami — the Eureka! moment arrived.

The write up reports:

Google also mentions having observed cases where the threat actors attempted to use public jailbreaks against Gemini or rephrasing their prompts to bypass the platform’s security measures. These attempts were reportedly unsuccessful.

Of course they were. Do US banks tell their customers when check fraud or other cyber dishonesty relieves people of their funds? Sure they don’t. Therefore, it is only the schlubs who are unfortunate enough to have the breach disclosed. Then the cyber security outfits leap into action and issue fixes. Everything in the cyber security world is buttoned up and buttoned down. Absolutely.

Several observations:

  1. How is free access without any type of vetting working out? The question is directed at the big tech outfits beavering away in this technology blast zone.
  2. What are the providers of free smart software doing to make certain that the method can only produce seventh-grade students’ essays about the transcontinental railroad?
  3. What exactly is a user of free smart software supposed to do to rein in the actions of nation states with which most Americans are only somewhat familiar? I mean, there is a Chinese restaurant near Harrod’s Creek. Am I to discuss the matter with the waitress?

Why worry? That worked for Mad Magazine until it didn’t. Hey, Google, thanks for the information. Who could have known smart software can be used for nefarious purposes? (Obviously not Google.)

Stephen E Arnold, February 18, 2025

A Vulnerability Bigger Than SolarWinds? Yes.

February 18, 2025

No smart software. Just a dinobaby doing his thing.

I read an interesting article from WatchTowr Labs. (The spelling is what the company uses, so the url is labs.watchtowr.com.) On February 4, 2025, the company reported that it discovered what one can think of as orphaned or abandoned-but-still-alive Amazon S3 “buckets.” The discussion of the firm’s research and what it revealed is presented in “8 Million Requests Later, We Made The SolarWinds Supply Chain Attack Look Amateur.”

The company explains that it was curious if what it calls “abandoned infrastructure” on a cloud platform might yield interesting information relevant to security. We worked through the article and created what in the good old days would have been called an abstract for a database like ABI/INFORM. Here’s our summary:

The article from WatchTowr Labs describes a large-scale experiment in which researchers identified and took control of about 150 abandoned Amazon Web Services S3 buckets previously used by various organizations, including governments, militaries, and corporations. Over two months, these buckets received more than eight million requests for software updates, virtual machine images, and sensitive files, exposing a significant vulnerability. WatchTowr explains that bad actors could have injected malicious content. Abandoned infrastructure could be used for supply chain attacks like SolarWinds. Had this happened, the impact would have been significant.
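
The defender-side takeaway is simple to sketch. The snippet below, assuming Python and the boto3 library, checks whether bucket names still referenced in one’s code or configs are dangling and therefore claimable by a stranger; the bucket names are invented for illustration.

```python
# Minimal sketch: for every bucket name your systems still reference, verify
# the bucket still exists and is reachable from your account. Names are
# hypothetical; a real audit would pull them from configs, AMIs, and code.
import boto3
from botocore.exceptions import ClientError

REFERENCED_BUCKETS = ["legacy-update-server", "old-vm-images"]  # illustrative

s3 = boto3.client("s3")

for name in REFERENCED_BUCKETS:
    try:
        s3.head_bucket(Bucket=name)
        print(f"{name}: exists and is accessible to this account")
    except ClientError as err:
        code = err.response["Error"]["Code"]
        if code == "404":
            # The bucket is gone; anyone can register this name and serve
            # content to every client still requesting it.
            print(f"{name}: DANGLING - name is claimable by a third party")
        elif code == "403":
            print(f"{name}: exists but is owned by someone else / no access")
        else:
            raise
```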

Several observations are warranted:

  1. Does Amazon Web Services have administrative functions to identify orphaned “buckets” and take action to minimize the attack surface?
  2. With companies’ information technology teams abandoning infrastructure, how will these organizations determine whether other infrastructure vulnerabilities exist and remediate them?
  3. What can cyber security vendors’ software and systems do to identify and neutralize these “shoot yourself in the foot” vulnerabilities?

One of the most compelling statements in the WatchTowr article, in my opinion, is:

… we’d demonstrated just how held-together-by-string the Internet is and at the same time point out the reality that we as an industry seem so excited to demonstrate skills that would allow us to defend civilization from a Neo-from-the-Matrix-tier attacker – while a metaphorical drooling-kid-with-a-fork-tier attacker, in reality, has the power to undermine the world.

Is WatchTowr correct? With government and commercial organizations abandoning S3 buckets while their software still points to them, perhaps WatchTowr should have included gum, duct tape, and grade-school white glue in its description of the Internet?

Stephen E Arnold, February 18, 2025

A New Spin on Insider Threats: Employees Secretly Use AI At Work

February 12, 2025

We’re afraid of AI replacing our jobs. Employers are blamed for wanting to replace humans with algorithms, but employees are already bringing AI into work. According to the BBC, employees are secretly using AI: “Why Employees Smuggle AI Into Work.” In IT departments across the United Kingdom (and probably the world), knowledge workers are using AI tools without permission from their leads.

Software AG conducted a survey of knowledge workers, and the results showed that half of them used personal AI tools. Knowledge workers are defined as people who primarily work at a desk or a computer. Some of them use the tools because their employer doesn’t provide any, and others said they wanted to choose their own tools.

Many of the workers are also not asking. They’re abiding by the mantra of, “It’s easier to ask forgiveness than permission.”

One worker uses ChatGPT as a mechanized coworker. ChatGPT allows the worker to consume information at faster rates, and it has increased his productivity. His company banned AI tools; he didn’t know why but assumes it is a control thing.

AI tools also pose security risks because the algorithms learn from user input. The services store that information, and it can expose company secrets:

“Companies may be concerned about their trade secrets being exposed by the AI tool’s answers, but Alastair Paterson, CEO and co-founder of Harmonic Security, thinks that’s unlikely. "It’s pretty hard to get the data straight out of these [AI tools]," he says.

However, firms will be concerned about their data being stored in AI services they have no control over, no awareness of, and which may be vulnerable to data breaches.”
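
One common mitigation for exactly this worry is a gateway that scrubs obvious secrets before a prompt leaves the building. A toy Python sketch follows; the patterns are illustrative, and real data-loss-prevention tooling is far more thorough.

```python
# Toy sketch of a prompt-scrubbing gateway: redact obvious secrets before a
# prompt is sent to an external AI service. Patterns are illustrative only.
import re

PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_token": re.compile(r"(?i)\b(api[_-]?key|token)\s*[:=]\s*\S+"),
}

def scrub(prompt: str) -> str:
    """Replace anything matching a known secret pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
    return prompt

print(scrub("Summarize this: our key is AKIAABCDEFGHIJKLMNOP, mail bob@corp.com"))
```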

Using AI tools is like adopting any new technology: the tools need to be used and tested, then regulated. AI can’t replace experience, but it certainly helps get the job done.

Whitney Grace, February 12, 2025

Acquiring AWS Credentials—Let Us Count the Ways

February 7, 2025

Will bad actors interested in poking around Amazon Web Services find Wiz’s write-up interesting? The answer is at the end of this blog post.

Cloud security firm Wiz shares an informative blog post: "The Many Ways to Obtain Credentials in AWS." It is a write-up that helps everyone: customers, Amazon, developers, cybersecurity workers, and even bad actors. We have not seen a similar write-up about Telegram, however. Why publish such a guide to gaining IAM role and other AWS credentials? Why, to help guard against would-be hackers who might use these methods, of course.

Writer Scott Piper describes several services and features one might use to gain access: certain AWS SDK credential providers; the Default Host Management Configuration; Systems Manager hybrid activation; the Internet of Things credentials provider; IAM Roles Anywhere; Cognito’s GetCredentialsForIdentity API; and good old DataSync. The post concludes:

"There are many ways that compute services on AWS obtain their credentials and there are many features and services that have special credentials. This can result in a single EC2 having multiple IAM principals accessible from it. In order to detect attackers, we need to know the various ways they might attempt to obtain these credentials. This article has shown how this is not a simple problem and requires defenders to have just as much, if not more, expertise as attackers in credential access."

So true. Especially with handy cheat sheets like this one available online. Based in New York, New York, Wiz was founded in 2020.

Will bad actors find Wiz’s post interesting? Answer: Yes, though probably less interesting than the fashion sense of a certain companion of Mr. Bezos. But not by much.

Cynthia Murrell, February 7, 2025
