FOGINT: Security Tools Over Promise & Under Deliver

November 22, 2024

While the United States and the rest of the world has been obsessed with the fallout of the former’s presidential election, bad actors planned terrorist plots. I24 News reports that after a soccer/football match in Amsterdam, there was a preplanned attack on Israeli fans: “Evidence From WhatsApp, Telegram Groups Shows Amsterdam Pogrom Was Organized.”

The Daily Telegraph located screenshots from WhatsApp and Telegram that displayed messages calling for a “Jew Hunt” after the game. The message writers were identified as Pro-Palestinian supports. The bad actors also called Jews “cancer dogs”, a vile slur in Dutch and told co-conspirators to bring fireworks to the planned attack. Dutch citizens and other observers were underwhelmed with the response of the Netherlands’ law enforcement. Even King Willem-Alexander noted that his country failed to protect the Jewish community when he spoke with Israeli President Isaac Herzog:

“Dutch king Willem-Alexander reportedly said to Israel’s President Isaac Herzog in a phone call on Friday morning that the ‘we failed the Jewish community of the Netherlands during World War II, and last night we failed again.’”

This an unfortunate example of the failure of cyber security tools that monitor social media. If this was a preplanned attack and the Daily Telegraph located the messages, then a cyber security company should have as well. These police ware and intelware systems failed to alert authorities. Is this another confirmation that cyber security and threat intelligence tools over promise and under deliver? Well, T-Mobile is compromised again and there is that minor lapse in Israel in October 2023.

Whitney Grace, November 22, 2024

Short Snort: How to Find Undocumented APIs

November 20, 2024

green-dino_thumb_thumb_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

The essay / how to “All the Data Can Be Yours” does a very good job of providing a hacker road map. The information in the write up includes:

  1. Tips for finding undocumented APIs in GitHub
  2. Spotting “fetch” requests
  3. WordPress default APIs
  4. Information in robots.txt files
  5. Using the Google
  6. Examining JavaScripts
  7. Poking into mobile apps
  8. Some helpful resources and tools.

Each of these items includes details; for example, specific search strings and “how to make a taco” type of instructions. Assembling this write up took quite a bit of work.

Those engaged in cyber security (white, gray, and black hat types) will find the write up quite interesting.

I want to point out that I am not criticizing the information per se. I do want to remind those with a desire to share their expertise of three behaviors:

  1. Some computer science and programming classes in interesting countries use this type of information to provide students with what I would call hands on instruction
  2. Some governments, not necessarily aligned with US interests, provide the tips to the employees and contractors to certain government agencies to test and then extend the functionalities of the techniques presented in the write up
  3. Certain information might be more effectively distributed in other communication channels.

Stephen E Arnold, November 20, 2024

Insider Threats: More Than Threat Reports and Cumbersome Cyber Systems Are Needed

November 13, 2024

dino orange_thumbSorry to disappoint you, but this blog post is written by a dumb humanoid. The art? We used MidJourney.

With actionable knowledge becoming increasingly concentrated, is it a surprise that bad actors go where the information is? One would think that organizations with high-value information would be more vigilant when it comes to hiring people from other countries, using faceless gig worker systems, or relying on an AI-infused résumé on LinkedIn. (Yep, that is a Microsoft entity.)

image

Thanks, OpenAI. Good enough.

The fact is that big technology outfits are supremely confident in their ability to do no wrong. Billions in revenue will boost one’s confidence in a firm’s management acumen. The UK newspaper Telegraph published “Why Chinese Spies Are Sending a Chill Through Silicon Valley.”

The write up says:

In recent years the US government has charged individuals with stealing technology from companies including Tesla, Apple and IBM and seeking to transfer it to China, often successfully. Last year, the intelligence chiefs of the “Five Eyes” nations clubbed together at Stanford University – the cradle of Silicon Valley innovation – to warn technology companies that they are increasingly under threat.

Did the technology outfits get the message?

The Telegram article adds:

Beijing’s mission to acquire cutting edge tech has been given greater urgency by strict US export controls, which have cut off China’s supply of advanced microchips and artificial intelligence systems. Ding, the former Google employee, is accused of stealing blueprints for the company’s AI chips. This has raised suspicions that the technology is being obtained illegally. US officials recently launched an investigation into how advanced chips had made it into a phone manufactured by China’s Huawei, amid concerns it is illegally bypassing a volley of American sanctions. Huawei has denied the claims.

With some non US engineers and professionals having skills needed by some of the high-flying outfits already aloft or working their hangers to launch their breakthrough product or service, US companies go through human resource and interview processes. However, many hires are made because a body is needed, someone knows the candidate, or the applicant is willing to work for less money than an equivalent person with a security clearance, for instance.

The result is that most knowledge centric organizations have zero idea about the security of their information. Remember Edward Snowden? He was visible. Others are not.

Let me share an anecdote without mentioning names or specific countries and companies.

A business colleague hailed from an Asian country. He maintained close ties with his family in his country of origin. He had a couple of cousins who worked in the US. I was at his company which provided computer equipment to the firm at which I was working in Silicon Valley. He explained to me that a certain “new” technology was going to be released later in the year. He gave me an overview of this “secret” project. I asked him where the data originated. He looked at me and said, “My cousin. I even got a demo and saw the prototype.”

I want to point out that this was not a hire. The information flowed along family lines. The sharing of information was okay because of the closeness of the family. I later learned the information was secret. I realized that doing an HR interview process is not going to keep secrets within an organization.

I ask the companies with cyber security software which has an insider threat identification capability, “How do you deal with family or high-school relationship information channels?”

The answer? Blank looks.

The Telegraph and most of the whiz bang HR methods and most of the cyber security systems don’t work. Cultural blind spots are a problem. Maybe smart software will prevent knowledge leakage. I think that some hard thinking needs to be applied to this problem. The Telegram write up does not tackle the job. I would assert that most organizations have fooled themselves. Billions and arrogance have interesting consequences.

Stephen E Arnold, November 13, 2024

Two New Coast Guard Cybersecurity Units Strengthen US Cyber Defense

November 13, 2024

Some may be surprised to learn the Coast Guard had one of the first military units to do signals intelligence. Early in the 20th century, the Coast Guard monitored radio traffic among US bad guys. It is good to see the branch pushing forward. “U.S. Coast Guard’s New Cyber Units: A Game Changer for National Security,” reveals a post from ClearanceJobs. The two units, the Coast Guard Reserve Unit USCYBER and 1941 Cyber Protection Team (CPT), will work with U.S. Cyber Command. Writer Peter Suciu informs us:

“The new cyber reserve units will offer service-wide capabilities for Coast Guardsman while allowing the service to retain cyber talent. The reserve commands will pull personnel from around the United States and will bring experience from the private and public sectors. Based in Washington, D.C., CPTs are the USCG’s deployable units responsible for offering cybersecurity capabilities to partners in the MTS [Marine Transportation System].”

Why tap reserve personnel for these units? Simple: valuable experience. We learn:

“‘Coast Guard Cyber is already benefitting from its reserve members,’ said Lt. Cmdr. Theodore Borny of the Office of Cyberspace Forces (CG-791), which began putting together these units in early 2023. ‘Formalizing reserves with cyber talent into cohesive units will give us the ability to channel a skillset that is very hard to acquire and retain.’”

The Coast Guard Reserve Unit will (mostly) work out of Fort Meade in Maryland, alongside the U.S. Cyber Command and the National Security Agency. The post reminds us the Coast Guard is unique: it operates under the Department of Homeland Security, while our other military branches are part of the Department of Defense. As the primary defender of our ports and waterways, brown water and blue water, we think the Coast Guard is well position capture and utilize cybersecurity intel.

Cynthia Murrell, November 13, 2024

Meta and China: Yeah, Unauthorized Use of Llama. Meh

November 8, 2024

dino orangeThis post is the work of a dinobaby. If there is art, accept the reality of our using smart art generators. We view it as a form of amusement.

That open source smart software, you remember, makes everything computer- and information-centric so much better. One open source champion laboring as a marketer told me, “Open source means no more contractual handcuffs, the ability to make changes without a hassle, and evidence of the community.

image

An AI-powered robot enters a meeting. One savvy executive asks in Chinese, “How are you? Are you here to kill the enemy?” Another executive, seated closer to the gas emitted from a cannister marked with hazardous materials warnings gasps, “I can’t breathe!” Thanks, Midjourney. Good enough.

How did those assertions work for China? If I can believe the “trusted” outputs of the “real” news outfit Reuters, just super cool. “Exclusive: Chinese Researchers Develop AI Model for Military Use on Back of Meta’s Llama”, those engaging folk of the Middle Kingdom:

… have used Meta’s publicly available Llama model to develop an AI tool for potential military applications, according to three academic papers and analysts.

Now that’s community!

The write up wobbles through some words about the alleged Chinese efforts and adds:

Meta has embraced the open release of many of its AI models, including Llama. It imposes restrictions on their use, including a requirement that services with more than 700 million users seek a license from the company. Its terms also prohibit use of the models for “military, warfare, nuclear industries or applications, espionage” and other activities subject to U.S. defense export controls, as well as for the development of weapons and content intended to “incite and promote violence”. However, because Meta’s models are public, the company has limited ways of enforcing those provisions.

In the spirit of such comments as “Senator, thank you for that question,” a Meta (aka Facebook), wizard allegedly said:

“That’s a drop in the ocean compared to most of these models (that) are trained with trillions of tokens so … it really makes me question what do they actually achieve here in terms of different capabilities,” said Joelle Pineau, a vice president of AI Research at Meta and a professor of computer science at McGill University in Canada.

My interpretation of the insight? Hey, that’s okay.

As readers of this blog know, I am not too keen on making certain information public. Unlike some outfits’ essays, Beyond Search tries to address topics without providing information of a sensitive nature. For example, search and retrieval is a hard problem. Big whoop.

But posting what I would term sensitive information as usable software for anyone to download and use strikes me as something which must be considered in a larger context; for example, a bad actor downloading an allegedly harmless penetration testing utility of the Metasploit-ilk. Could a bad actor use these types of software to compromise a commercial or government system? The answer is, “Duh, absolutely.”

Meta’s founder of the super helpful Facebook wants to bring people together. Community. Kumbaya. Sharing.

That has been the lubricant for amassing power, fame, and money… Oh, also a big gold necklace similar to the one’s I saw labeled “Pharaoh jewelry.”

Observations:

  1. Meta (Facebook) does open source for one reason: To blunt initiatives from its perceived competitors and to position itself to make money.
  2. Users of Meta’s properties are only data inputters and action points; that is, they are instrumentals.
  3. Bad actors love that open source software. They download it. They study it. They repurpose it to help the bad actors achieve their goals.

Did Meta include a kill switch in its open source software? Oh, sure. Meta is far-sighted, concerned with misuse of its innovations, and super duper worried about what an adversary of the US might do with that technology. On the bright side, if negotiations are required, the head of Meta (Facebook) allegedly speaks Chinese. Is that a benefit? He could talk with the weaponized robot dispensing biological warfare agents.

Stephen E Arnold, November 8, 2024

Microsoft 24H2: The Reality Versus Self Awareness

November 4, 2024

dino orangeSorry. Written by a dumb humanoid. Art? It is AI, folks. Eighty year old dinobabies cannot draw very well in my experience.

I spotted a short item titled “Microsoft Halts Windows 11 24H2 Update for Many PCs Due to Compatibility Issues.” Today is October 29, 2024. By the time you read this item, you may have a Windows equipped computer humming along on the charmingly named 11 24H2 update. That’s the one with Recall.

image

Microsoft does not see itself as slightly bedraggled. Those with failed updates do. Thanks, ChatGPT, good enough, but at least you work. MSFT Copilot has been down for six days with a glitch.

Now if you work at the Redmond facility where Google paranoia reigns, you probably have Recall running on your computing device as well as Teams’ assorted surveillance features. That means that when you run a query for “updates”, you may see screens presenting an array of information about non functioning drivers, printer errors, visits to the wonderfully organized knowledge bases, and possibly images of email from colleagues wanting to take kinetic action about the interns, new hires, and ham fisted colleagues who rolled out an update which does not update.

According to the write up offers this helpful advice:

We advise users against manually forcing the update through the Windows 11 Installation Assistant or media creation tool, especially on the system configurations mentioned above. Instead, users should check for updates to the specific software or hardware drivers causing the holds and wait for the blocks to be lifted naturally.

Okay.

Let’s look at this from the point of view of bad actors. These folks know that the “new” Windows with its many nifty new features has some issues. When the Softies cannot get wallpaper to work, one knows that deeper, more subtle issues are not on the wizards’ radar.

Thus, the 24H2 update will be installed on bad actors’ test systems and subjected to tests only a fan of Metasploit and related tools can appreciate. My analogy is that these individuals, some of whom are backed by nation states, will give the update the equivalent of a digital colonoscopy. Sorry, Redmond, no anesthetic this go round.

Why?

Microsoft suggests that security is Job Number One. Obviously when fingerprint security functions don’t work and the Windows Hello fails, the bad actor knows that other issues exist. My goodness. Why doesn’t Microsoft just turn its PR and advertising firms lose on Telegram hacking groups and announce, “Take me. I am yours!”

Several observations:

  1. The update is flawed
  2. Core functions do not work
  3. Partners, not Microsoft, are supposed to fix the broken slot machine of operating systems
  4. Microsoft is, once again, scrambling to do what it should have done correctly before releasing a deeply flawed bundle of software.

Net net: Blaming Google for European woes and pointing fingers at everything and everyone except itself, Microsoft is demonstrating that it cannot do a basic task correctly.  The only users who are happy are those legions of bad actors in the countries Microsoft accuses of making its life difficult. Sorry. Microsoft you did this, but you could blame Google, of course.

Stephen E Arnold, November 4, 2024

Computer Security and Good Enough Methods

November 1, 2024

dino orange_thumb_thumb_thumbWritten by a humanoid dinobaby. No AI except the illustration.

I read “TikTok Owner Sacks Intern for Sabotaging AI Project.” The BBC report is straight forward; it does not provide much “management” or “risk” commentary. In a nutshell, the allegedly China linked ByteDance hired or utilized an intern. The term “intern” used to mean a student who wanted to get experience. Today, “intern” has a number of meanings. For example, for certain cyber fraud outfits operating in Southeast Asia an “intern” could be:

  1. A person paid to do work in a special economic zone
  2. A person coerced into doing work for an organization engaged in cyber fraud
  3. A person who is indeed a student and wants to get some experience
  4. An individual kidnapped and forced to perform work; otherwise, bad things can happen in dark rooms.

What’s the BBC say? Here is a snippet:

TikTok owner, ByteDance, says it has sacked an intern for “maliciously interfering” with the training of one of its artificial intelligence (AI) models.

The punishment, according to the write up, was “contacting” the intern’s university. End of story.

My take on this incident is a bit different from the BBC’s.

First, how did a company allegedly linked to the Chinese government make a bad hire? If the student was recommended by a university, what mistake did the university and the professors training the young person commit. The idea is to crank out individuals who snap into certain roles. I am not sure the spirit of an American party school is part of the ByteDance and TikTok work culture, but I may be off base.

Second, when a company hires a gig worker or brings an intern into an organization, are today’s managers able to identify potential issues either with an individual’s work or that person’s inner wiring? The fact that an intern was able to fiddle with code indicates a failure of internal checks and balances. The larger question is, “Can organizations trust interns who are operating as insiders, but without the controls an organization should have over individual workers. This gaffe makes clear that modern management methods are not proactive; they are reactive. For that reason, insider threats exist and could do damage. ByteDance, according to the write up, downplayed the harm caused by the intern:

ByteDance also denied reports that the incident caused more than $10m (£7.7m) of damage by disrupting an AI training system made up of thousands of powerful graphics processing units (GPU).

Is this claim credible? Nope. I refer to the information about four companies “downplaying the impact of the SolarWinds hack.” US outfits don’t want to reveal the impact of a cyber issue. Are outfits like ByteDance and TikTok on the up and up about the impact of the intern’s actions.

Third, the larger question becomes, “How does an organization minimize insider threats?” As organizations seek to cut training staff and rely on lower cost labor?” The answer is, in my opinion, clear to me. An organization does what it can and hope for the best.

Like many parts of a life in an informationized world or datasphere in my lingo, the quality of most efforts is good enough. The approach guarantees problems in the future. These are problems which cannot be solved. Management just finds something to occupy its time. The victims are the users, the customers, or the clients.

The world, even when allegedly linked with nation states, is struggling to achieve good enough.

Stephen E Arnold, November 1, 2024

Pavel Durov and Telegram: In the Spotlight Again

October 21, 2024

dino orangeNo smart software used for the write up. The art, however, is a different story.

Several news sources reported that the entrepreneurial Pavel Durov, the found of Telegram, has found a way to grab headlines. Mr. Durov has been enjoying a respite in France, allegedly due to his contravention of what the French authorities views as a failure to cooperate with law enforcement. After his detainment, Mr. Durov signaled that he has cooperated and would continue to cooperate with investigators in certain matters.

image

A person under close scrutiny may find that the experience can be unnerving. The French are excellent intelligence operators. I wonder how Mr. Durov would hold up under the ministrations of Israeli and US investigators. Thanks, ChatGPT, you produced a usable cartoon with only one annoying suggestion unrelated to my prompt. Good enough.

Mr. Durov may have an opportunity to demonstrate his willingness to assist authorities in their investigation into documents published on the Telegram Messenger service. These documents, according to such sources as Business Insider and South China Morning Post, among others, report that the Telegram channel Middle East Spectator dumped information about Israel’s alleged plans to respond to Iran’s October 1, 2024, missile attack.

The South China Morning Post reported:

The channel for the Middle East Spectator, which describes itself as an “open-source news aggregator” independent of any government, said in a statement that it had “received, through an anonymous source on Telegram who refused to identify himself, two highly classified US intelligence documents, regarding preparations by the Zionist regime for an attack on the Islamic Republic of Iran”. The Middle East Spectator said in its posted statement that it could not verify the authenticity of the documents.

Let’s look outside this particular document issue. Telegram’s mostly moderation-free approach to the content posted, distributed, and pushed via the Telegram platform is like to come under more scrutiny. Some investigators in North America view Mr. Durov’s system as a less pressing issue than the content on other social media and messaging services.

This document matter may bring increased attention to Mr. Durov, his brother (allegedly with the intelligence of two PhDs), the 60 to 80 engineers maintaining the platform, and its burgeoning ancillary interests in crypto. Mr. Durov has some fancy dancing to do. One he is able to travel, he may find that additional actions will be considered to trim the wings of the Open Network Foundation, the newish TON Social service, and the “almost anything” goes approach to the content generated and disseminated by Telegram’s almost one billion users.

From a practical point of view, a failure to exercise judgment about what is allowed on Messenger may derail Telegram’s attempts to become more of a mover and shaker in the world of crypto currency. French actions toward Mr. Pavel should have alerted the wizardly innovator that governments can and will take action to protect their interests.

Now Mr. Durov is placing himself, his colleagues, and his platform under more scrutiny. Close scrutiny may reveal nothing out of the ordinary. On the other hand, when one pays close attention to a person or an organization, new and interesting facts may be identified. What happens then? Often something surprising.

Will Mr. Durov get that message?

Stephen E Arnold, October 21, 2024

Another Stellar Insight about AI

October 17, 2024

Because AI we think AI is the most advanced technology, we believe it is impenetrable to attack. Wrong. While AI is advanced, the technology is still in its infancy and is extremely vulnerable, especially to smart bad actors. One of the worst things about AI and the Internet is that we place too much trust in it and bad actors know that. They use their skills to manipulate information and AI says ArsTechnica in the article: “Hacker Plants False Memories In ChatGPT To Steal User Data In Perpetuity.”

Johann Rehberger is a security researcher who discovered that ChatGPT is vulnerable to attackers. The vulnerability allows bad actors to leave false information and malicious instructions in a user’s long-term memory settings. It means that they could steal user data or cause more mayhem. OpenAI didn’t take Rehmberger serious and called the issue a safety concern aka not a big deal.

Rehberger did not like being ignored, so he hacked ChatGPT in a “proof-of-concept” to perpetually exfiltrate user data. As a result, ChatGPT engineers released a partial fix.

OpenAI’s ChatGPT stores information to use in future conversations. It is a learning algorithm to make the chatbot smarter. Rehberger learned something incredible about that algorithm:

“Within three months of the rollout, Rehberger found that memories could be created and permanently stored through indirect prompt injection, an AI exploit that causes an LLM to follow instructions from untrusted content such as emails, blog posts, or documents. The researcher demonstrated how he could trick ChatGPT into believing a targeted user was 102 years old, lived in the Matrix, and insisted Earth was flat and the LLM would incorporate that information to steer all future conversations. These false memories could be planted by storing files in Google Drive or Microsoft OneDrive, uploading images, or browsing a site like Bing—all of which could be created by a malicious attacker.”

Bad attackers could exploit the vulnerability for their own benefits. What is alarming is that the exploit was as simple as having a user view a malicious image to implement the fake memories. Thankfully ChatGPT engineers listened and are fixing the issue.

Can’t anything be hacked one way or another?

Whitney Grace, October 17, 2024

The GoldenJackals Are Running Free

October 11, 2024

Vea_thumb_thumb_thumb_thumbThe only smart software involved in producing this short FOGINT post was Microsoft Copilot’s estimable art generation tool. Why? It is offered at no cost.

Remember the joke about security. Unplugged computer in a locked room. Ho ho ho. “Mind the (Air) Gap: GoldenJackal Gooses Government Guardrails” reports that security is getting more difficult. The write up says:

GoldenJackal used a custom toolset to target air-gapped systems at a South Asian embassy in Belarus since at least August 2019… These toolsets provide GoldenJackal a wide set of capabilities for compromising and persisting in targeted networks. Victimized systems are abused to collect interesting information, process the information, exfiltrate files, and distribute files, configurations and commands to other systems. The ultimate goal of GoldenJackal seems to be stealing confidential information, especially from high-profile machines that might not be connected to the internet.

What’s interesting is that the sporty folks at GoldenJackal can access the equivalent of the unplugged computer in a locked room. Not exactly, of course, but allegedly darned close.

image

Microsoft Copilot does a great job of presenting an easy to use cyber security system and console. Good work.

The cyber experts revealing this exploit learned of it in 2020. I think that is more than three years ago. I noted the story in October 2024. My initial question was, “What took so long to provide some information which is designed to spark fear and ESET sales?”

The write up does not tackle this question but the write up reveals that the vector of compromise was a USB drive (thumb drive). The write up provides some detail about how the exploit works, including a code snippet and screen shots. One of the interesting points in the write up is that Kaspersky, a recently banned vendor in the US, documented some of the tools a year earlier.

The conclusion of the article is interesting; to wit:

Managing to deploy two separate toolsets for breaching air-gapped networks in only five years shows that GoldenJackal is a sophisticated threat actor aware of network segmentation used by its targets.

Several observations come to mind:

  1. Repackaging and enhancing existing malware into tool bundles demonstrates the value of blending old and new methods.
  2. The 60 month time lag suggests that the GoldenJackal crowd is organized and willing to invest time in crafting a headache inducer for government cyber security professionals
  3. With the plethora of cyber alert firms monitoring everything from secure “work use only” laptops to useful outputs from a range of devices, systems, and apps why is it that only one company sufficiently alert or skilled to explain the droppings of the GoldenJackal?

I learn about new exploits every couple of days. What is now clear to me is that a cyber security firm which discovers something novel does so by accident. This leads me to formulate the hypothesis that most cyber security services are not particularly good at spotting what I would call “repackaged systems and methods.” With a bit of lipstick, bad actors are able to operate for what appears to be significant periods of time without detection.

If this hypothesis is correct, US government memoranda, cyber security white papers, and academic type articles may be little more than puffery. “Puffery,” as we have learned is no big deal. Perhaps that is what expensive cyber security systems and services are to bad actors: No big deal.

Stephen E Arnold, October 11, 2024

One

Next Page »

  • Archives

  • Recent Posts

  • Meta