First WAP? What Is That? Who Let the Cat Out of the Bag?

October 21, 2025

green-dino_thumbThis essay is the work of a dumb dinobaby. No smart software required.

Ageing in rural Kentucky is not a good way to keep up with surveillance technology. I did spot a post on LinkedIn. I will provide a url for the LinkedIn post, but I have zero clue if anyone reading this blog will be able to view the information. The focus of the LinkedIn post is that some wizards have taken inspiration from NSO Group-type of firms and done some innovation. Like any surveillance technology, one has to apply it in a real life situation. Sometimes there is a slight difference between demonstrations, PowerPoint talks, and ease of use. But, hey, that’s the MBA-inspired way to riches or at least in NSO Group’s situation, infamy.

image

Letting the cat out of the bag. Who is the individual? The president, an executive, a conference organizer, or a stealthy “real” journalist. One thing is clear: The cat is out of the bag. Thanks, Venice.ai. Good enough.

The LinkedIn post is from an entity using the handle OSINT Industries. Here is the link, dutifully copied from Microsoft’s outstanding social media platform. Don’t blame me if it doesn’t work. Microsoft just blames users, so just look in the mirror and complain: https://www.linkedin.com/posts/osint-industries_your-phone-is-being-tracked-right-now-ugcPost-7384354091293982721-KQWk?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAACYEwBhJbGkTw7Ad0vyN4RcYKj0Su8NUU

How’s that for a link. ShortURL spit out this version: https://shorturl.at/x2Qx9.

So what’s the big deal. Cyber security outfits and an online information service (in the old days a printed magazine) named Mother Jones learned that an outfit called First WAP exploited the SS7 telecom protocol. As i understand this signal switching, SS7 is about 50 years old and much loved by telephony nerds and Bell heads. The system and method acts like an old fashioned switchyard operator at a rail yard in the 1920s. Signals are filtered from voice channels. Call connections and other housekeeping are pushed to the SS7 digital switchyard. Instead of being located underground in Manhattan, the SS7 system is digital and operates globally. I have heard but have no first hand information about its security vulnerabilities. I know that a couple of companies are associated with switching fancy dancing. Do security exploits work? Well, the hoo-hah about First WAP suggests that SS7 exploitation is available.

The LinkedIn post says that “The scale [is] 14,000+ phone numbers. 160 countries. Over 1 million location pings.

A bit more color appears in the Russian information service ? FrankMedia.ru’s report “First WAP Empire: How Hidden Technology Followed Leaders and Activists.” The article is in Russian, but ever-reliable Google Translate makes short work of one’s language blind spots. Here are some interesting points from Frank Media:

  1. First WAP has been in business for about 17 or 18 years
  2. The system was used to track Google and Raytheon professionals
  3. First WAP relies on resellers of specialized systems and services and does not do too much direct selling. The idea is that the intermediaries are known to the government buyers. A bright engineer from another country is generally viewed as someone who should not be in a meeting with certain government professionals. This is nothing personal, you understand. This is just business.
  4. The system is named Altamides, which may be a variant of a Greek word for “powerful.”

The big reveal in the Russian write up is that a journalist got into the restricted conference, entered into a conversation with an attendee at the restricted conference, and got information which has put First WAP in the running to be the next NSO Group in terms of PR problems. The Frank Media write up does a fine job of identifying two individuals. One is the owner of the firm and the other is the voluble business development person.

Well, everyone gets 15 minutes of fame. Let me provide some additional, old-person information. First, the company’s Web address is www.1rstwap.com. Second, the firm’s alleged full name is First WAP International DMCC. The “DMCC” acronym means that the firm operates from Dubai’s economic zone. Third, the firm sells through intermediaries; for example, an outfit called KCS operating allegedly from the UK. Companies House information is what might be called sparse.

Several questions:

  1. How did a non-LE or intel professional get into the conference?
  2. Why was the company to operate off the radar for more than a decade?
  3. What benefits does First WAP derive from its nominal base in Indonesia?
  4. What are the specific security vulnerabilities First WAP exploits?
  5. Why do the named First WAP executives suddenly start talking after many years of avoiding an NSO-type PR problem?

Carelessness seems to be the reason this First WAP got its wireless access protocol put in the spotlight. Nice work!

To WAP up, you can download the First WAP encrypted messaging application from… wait for it… the Google Play Store. The Google listing includes this statement, “No data shared with third parties.” Think about that statement.

Stephen E Arnold, October 21, 2025

AI Security: Big Plus or Big Minus?

October 9, 2025

Agentic AI presents a new security crisis. But one firm stands ready to help you survive the threat. Cybersecurity firm Palo Alto Networks describes “Agentic AI and the Looming Board-Level Security Crisis.” Writer and CSO Haider Pasha sounds the alarm:

“In the past year, my team and I have spoken to over 3,000 of Europe’s top business leaders, and these conversations have led me to a stark conclusion: Three out of four current agentic AI projects are on track to experience significant security challenges. The hype, and resulting FOMO, around AI and agentic AI has led many organisations to run before they’ve learned to walk in this emerging space. It’s no surprise how Gartner expects agentic AI cancellations to rise through 2027 or that an MIT report shows most enterprise GenAI pilots already failing. The situation is even worse from a cybersecurity perspective, with only 6% of organizations leveraging an advanced security framework for AI, according to Stanford.

But the root issue isn’t bad code, it’s bad governance. Unless boards instill a security mindset from the outset and urgently step in to enforce governance while setting clear outcomes and embedding guardrails in agentic AI rollouts, failure is inevitable.”

The post suggests several ways to implement this security mindset from the start. For example, companies should create a council that oversees AI agents across the organization. They should also center initiatives on business goals and risks, not shiny new tech for its own sake. Finally, enforce least-privilege access policies as if the AI agent were a young intern. See the write-up for more details on these measures.

If one is overwhelmed by the thought of implementing these best practices, never fear. Palo Alto Networks just happens to have the platform to help. So go ahead and fear the future, just license the fix now.

Cynthia Murrell, October 9, 2025

AI May Be Like a Disneyland for Threat Actors

October 7, 2025

AI is supposed to revolutionize the world, but bad actors are the ones who are benefitting the most tight now.  AI is the ideal happy place for bad actors, because there’s an easy hack using autonomous browser based agents that use them as a tool for their nefarious deeds.  This alert cokes from Hacker Noon’s story: “Studies Show AI Agents And Browsers Are A Hacker’s Perfect Playground.”

Many companies are running on at least one AI enterprise agent, using it as a tool to fetch external data, etc.  Security, however, is still viewed as an add-on for the developers in this industry.  Zenity Labs, a leading Agentic AI security and governance company, discovered that 3000 publicly accessible MS Copilot agents.  

The Copilot agents failed because they relied on soft boundaries:

“…i.e., fragile, surface-level protections (i.e., instructions to the AI about what it should and shouldn’t do, with no technical controls). Agents were instructed in their prompts to “only help legitimate customers,” yet such rules were easy to bypass. Prompt shields designed to filter malicious inputs proved ineffective, while system messages outlining “acceptable behavior” did little to stop crafted attacks. Critically, there was no technical validation of the input sources feeding the agents, leaving them open to manipulation. With no sandboxing layer separating the agent from live production data, attackers can exploit these weaknesses to access sensitive systems directly.”

White hat hackers also found other AI exploits that were demonstrated at Black Hat USA 2025. Here’s a key factoid: “The more autonomous the AI agent, the higher the security risk.”

Many AI agents are vulnerable to security exploits and it’s a scary thought information is freely available to bad actors.  Hacker Noon suggests putting agents through stress tests to find weak points then adding the necessary security levels.  But Oracle (the marketer of secure enterprise search) and Google (owner of the cyber security big dog Mandiant) have both turned on their klaxons for big league vulnerabilities. Is AI helping? It depends whom one asks.

Whitney Grace, October 7, 2025

Get Cash for Spyware

September 26, 2025

Are you a white hat hacker? Do you have the genius to comprehend code and write your own? Are you a bad actor looking to hang up your black hat and clean up your life? Crowdfense might be the place for you. Here’s the link.

Crowdfense is an organization that “…is the world-leading research hub and acquisition platform for high-quality zero-day exploits and advanced vulnerability research. We acquire the most advanced zero-day research across desktop, mobile, appliances, web and embedded platforms.”

Despite the archaic web design (probably to weed out) uninterested parties, Crowdfense is a respected for spyware. They’re currently advertising for for their Exploit Acquisition Program:

“Since 2017, Crowdfense has operated the world’s most private vulnerability acquisition program, initially backed by a USD 10 million fund and powered by our proprietary Vulnerability Research Hub (VRH) platform. Today, the program has expanded to USD 30 million, with a broader scope that now includes enterprise software, mobile components, and messaging technologies. We offer rewards ranging from USD 10,000 to USD 7 million for full exploit chains or previously unreported capabilities. Partial chains and individual components are assessed individually and priced accordingly. As part of our commitment to the research community, we also offered free high-level technical training to hundreds of vulnerability researchers worldwide.”

If you want to do some good with your bad l33t skills, search for an exploit, invent some spyware, and reap the benefits. You can retire to an island and live off grid. Isn’t that the dream?

Whitney Grace, September 26, 2025

Graphite: Okay, to License Now

September 24, 2025

The US government uses specialized software to gather information related to persons of interest. The brand of popular since NSO Group marketed itself into a pickle is from the Israeli-founded spyware company Paragon Solutions. The US government isn’t a stranger to Paragon Solutions, in fact, El Pais shares in the article, “Graphite, the Israeli Spyware Acquired By ICE” that it renewed its contract with the specialized software company.

The deal was originally signed during Biden’s administration during September 24, but it went against the then president’s executive order that prohibited US agencies from using spyware tools that “posed ‘significant counterintelligence and security risks’ or had been misused by foreign governments to suppress dissent.

During the negotiations, AE Industrial Partners purchased Paragon and merged it with REDLattice, an intelligence contractor located in Virginia. Paragon is now a domestic partner with deep connections to former military and intelligence personnel. The suspension on ICE’s Homeland Security Investigations was quietly lifted on August 29 according to public contracting announcements.

The Us government will use Paragon’s Graphite spyware:

“Graphite is one of the most powerful commercial spy tools available. Once installed, it can take complete control of the target’s phone and extract text messages, emails, and photos; infiltrate encrypted apps like Signal and WhatsApp; access cloud backups; and covertly activate microphones to turn smartphones into listening devices.

The source suggests that although companies like Paragon insist their tools are intended to combat terrorism and organized crime, past use suggests otherwise. Earlier this year, Graphite allegedly has been linked to info gathering in Italy targeting at least some journalists, a few migrant rights activists, and a couple of associates of the definitely worth watching Pope Francis. Paragon stepped away from the home of pizza following alleged “public outrage.”

The US government’s use of specialized software seems to be a major concern among Democrats and Republicans alike. What government agencies are licensing and using Graphite. Beyond Search has absolutely no idea.

Whitney Grace, September 24, 2025

Google: Is It Becoming Microapple?

September 19, 2025

Google’s approach to Android, the freedom to pay Apple to make Google search the default for Safari, and the registering of developers — These are Tim Apple moves. Google has another trendlet too.

Google has 1.8 billion users around the world and according to the Mens Journal Google has a new problem: “Google Issues Major Warning to All 1.8 Billion Users.” There’s a new digital security threat and it involves AI. That’s not a surprise, because artificial intelligence has been a growing concern for cyber security experts for years. Since the technology is becoming more advanced, bad actors are using it for devious actions. The newest round of black hat tricks are called “indirect prompt injections.”

Indirect prompt injections are a threat for individual users, businesses, and governments. Google warned users about this new threat and how it works:

“‘Unlike direct prompt injections, where an attacker directly inputs malicious commands into a prompt, indirect prompt injections involve hidden malicious instructions within external data sources. These may include emails, documents, or calendar invites that instruct AI to exfiltrate user data or execute other rogue actions,’ the blog post continued.

The Google blog post warned that this puts individuals and entities at risk.

‘As more governments, businesses, and individuals adopt generative AI to get more done, this subtle yet potentially potent attack becomes increasingly pertinent across the industry, demanding immediate attention and robust security measures,’ the blog post continued.”

Bad actors have tasked Google’s Gemini (Shock! Gasp!) to infiltrate emails and ask users for their passwords and login information. That’s not the scary part. Most spammy emails have a link for users to click on to collect data, instead this new hack uses Gemini to prompt users for the information. Downloading fear.

Google is already working on counter measures for Gemini. Good luck! Microsoft has had this problem for years! Google and Microsoft are now twins! Is this the era of Google as Microapple?

Whitney Grace, September 19, 2025

AI and Security? What? Huh?

September 18, 2025

As technology advances so do bad actors and their devious actions. Bad actors are so up to date with the latest technology that it takes white hat hackers and cyber security engineers awhile to catch up to them. AI has made bad actors smarter and EWeek explains that there is we are facing a banking security crisis: “Altman Warns Of AI-Powered Fraud Crisis in Banking, Urges Stronger Security Measures.”

OpenAI CEO Sam Altman warned that AI vocal technology is a danger to society. He told the Federal Reserve Vice Chair for Supervision Michelle Bowman that US banks are lagging behind Ai vocal security, because many financial institutions still rely on voiceprint technology to verify customers’ identities.

Altman warned that AI vocal technology can easily replicate humans and deepfake videos are even scarier when they become indistinguishable from reality. Bowman mentioned potential partnering with tech companies to create solutions.

Despite sounding the warning bells, Altman didn’t offer much help:

“Despite OpenAI’s prominence in the AI industry, Altman clarified that the company is not creating tools for impersonation. Still, he stressed that the broader AI community must take responsibility for developing new verification systems, such as “proof of human” solutions.

Altman is supporting tools like The Orb, developed by Tools for Humanity. The device aims to provide “proof of personhood” in a digital world flooded with fakes. His concerns go beyond financial fraud, extending to the potential for AI superintelligence to be misused in areas such as cyberwarfare or biological threats.”

Proof of personhood? It’s like the blue check on verified X/Twitter accounts. Altman might be helping make the future but he’s definitely also part of the problem.

Whitney Grace, September 18, 2025

Google: Klaxons, Red Lights, and Beeps

September 12, 2025

Here we go again with another warning from Google about scams in the form of Gemini. The Mirror reports that, “Google Issues ‘Red Alert’ To Gmail Users Over New AI Scam That Steals Passwords.” Bad actors are stealing passwords using Google’s own chatbot. Hackers are sending emails using Gemini. These emails contain a hidden message to reveal passwords.

Here’s how people are falling for the scam: there’s no link to click in the email. A box pops up alerting you to a risk. That’s all! It’s incredibly simple and scary. Remember that Google will never ask you for your username and password. It’s still the easiest tip to remember when it comes to these scams.

Google issued a statement:

“The tech giant explained the subtlety of the threat: ‘Unlike direct prompt injections, where an attacker directly inputs malicious commands into a prompt, indirect prompt injections involve hidden malicious instructions within external data sources. These may include emails, documents, or calendar invites that instruct AI to exfiltrate user data or execute other rogue actions.’ As more governments, businesses, and individuals adopt generative AI to get more done, this subtle yet potentially potent attack becomes increasingly pertinent across the industry, demanding immediate attention and robust security measures.’”

Google also said some calming platitudes but the record replay is getting tiresome.

Whitney Grace, September  12, 2025

AI a Security Risk? No Way or Is It No WAI?

September 11, 2025

Am I the only one who realizes that AI is a security problem? Okay, I’m not but organizations certainly aren’t taking AI security breaches says Venture Beat in the article, “Shadow AI Adds $670K To Breach Costs While 97% Of Enterprises Skip Basic Access Controls, IBM Reports.” IBM collected information with the Ponemon Institute (does anyone else read that as Pokémon Institute?) about data breaches related to AI. IBM and the Ponemon Institute held 3470 interviews with 600 organizations that had data breaches.

Shadow AI is the unauthorized use of AI tools and applications. IBM shared how shadow AI affects organizations in the Cost of a Data Breach Report. Unauthorized usage of AI tools cost organizations $4.63 million and that is 16% more than the $4.44 million global average. YIKES! Another frightening statistic is that 97% of the organizations lacked proper AI access controls. Only 13% had AI-security related breaches compared to 8% who were unaware if AI comprised their systems

Bad actors are using supply chains as their primary attack and AI allows them to automate tasks to blend in with regular traffic. If you want to stay awake at night here are some more numbers:

“A majority of breached organizations (63%) either don’t have an AI governance policy or are still developing one. Even when they have a policy, less than half have an approval process for AI deployments, and 62% lack proper access controls on AI systems.”

An expert said this about the issue:

This pattern of delayed response to known vulnerabilities extends beyond AI governance to fundamental security practices. Chris Goettl, VP Product Management for Endpoint Security at Ivanti, emphasizes the shift in perspective: ‘What we currently call ‘patch management’ should more aptly be named exposure management—or how long is your organization willing to be exposed to a specific vulnerability?’”

Organizations that are aware of AI breaches and have security plans in place save more money.

It pays to be prepared and cheaper too!

Whitney Grace, September 11, 2025

Derailing Smart Software with Invisible Prompts

September 3, 2025

Dino 5 18 25Just a dinobaby sharing observations. No AI involved. My apologies to those who rely on it for their wisdom, knowledge, and insights.

The Russian PCNews service published “Visual Illusion: Scammers Have Learned to Give Invisible Instructions to Neural Networks.” Note: The article is in Russian.

The write up states:

Attackers can embed hidden instructions for artificial intelligence (AI) into the text of web pages, letters or documents … For example, CSS (a style language for describing the appearance of a document) makes text invisible to humans, but quite readable to a neural network.

The write up includes examples like these:

… Attackers can secretly run scripts, steal data, or encrypt files. The neural network response may contain social engineering commands [such as] “download this file,” “execute a PowerShell command,” or “open the link,” … At the same time, the user perceives the output as trusted … which increases the chance of installing ransomware or stealing data. If data [are] “poisoned” using hidden prompts [and] gets into the training materials of any neural network, [the system] will learn to give “harmful advice” even when processing “unpoisoned” content in future use….

Examples of invisible information have been identified in the ArXiv collection of pre-printed journal articles.

Stephen E Arnold, September 3, 2025

Next Page »

  • Archives

  • Recent Posts

  • Meta