Oracle: Pricked by a Rose and Still Bleeding

April 15, 2025

How disappointing. DoublePulsar documents a senior tech giant’s duplicity in “Oracle Attempt to Hide Serious Cybersecurity Incident from Customers in Oracle SaaS Service.” Blogger Kevin Beaumont cites reporting by Bleeping Computer as he tells us someone going by rose87168 announced in March they had breached certain Oracle services. The hacker offered to remove individual companies’ data for a price. They also invited Oracle to email them to discuss the matter. The company, however, immediately denied there had been a breach. It should know better by now.

Rose87168 responded by releasing evidence of the breach, piece by piece. For example, they shared a recording of an internal Oracle meeting, with details later verified by Bleeping Computer and Hudson Rock. They also shared Oracle configuration files and code, which proved to be current. Beaumont writes:

“In data released to a journalist for validation, it has now become 100% clear to me that there has been cybersecurity incident at Oracle, involving systems which processed customer data. … All the systems impacted are directly managed by Oracle. Some of the data provided to journalists is current, too. This is a serious cybersecurity incident which impacts customers, in a platform managed by Oracle. Oracle are attempting to wordsmith statements around Oracle Cloud and use very specific words to avoid responsibility. This is not okay. Oracle need to clearly, openly and publicly communicate what happened, how it impacts customers, and what they’re doing about it. This is a matter of trust and responsibility. Step up, Oracle — or customers should start stepping off.”

In an update to the original post, Beaumont notes some linguistic sleight of hand employed by the company:

“Oracle rebadged old Oracle Cloud services to be Oracle Classic. Oracle Classic has the security incident. Oracle are denying it on ‘Oracle Cloud’ by using this scope — but it’s still Oracle cloud services that Oracle manage. That’s part of the wordplay.”

However, it seems the firm finally admitted the breach was real to at least some users. Just not in black and white. We learn:

“Multiple Oracle cloud customers have reached out to me to say Oracle have now confirmed a breach of their services. They are only doing so verbally, they will not write anything down, so they’re setting up meetings with large customers who query. This is similar behavior to the breach of medical PII in the ongoing breach at Oracle Health, where they will only provide details verbally and not in writing.”

So much for transparency. Beaumont pledges to keep investigating the breach and Oracle’s response to it. He invites us to follow his Mastodon account for updates.

Cynthia Murrell, April 15, 2025

Trapped in the Cyber Security Gym with Broken Gear?

April 11, 2025

As an IT worker you can fall into more pitfalls than a road that needs repaving. Mac Chaffee shared a new trap, and how he handled it, on his blog, Mac’s Tech Blog: “Avoid Building A Security Treadmill.” Chaffee wrote that he received a ticket asking him to stop people from using a GPU service to mine cryptocurrencies. Chaffee used Falco, an eBPF-powered agent that runs on the Kubernetes cluster, to detect the mining activity and shut it down.
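
For readers who want a picture of that detect-and-respond loop, here is a minimal sketch of the pattern: a script that reads Falco’s JSON alerts and deletes the offending pod via the Kubernetes API. This is an illustration under assumptions of mine, not Chaffee’s code; it presumes Falco’s json_output and file_output settings are enabled along with Kubernetes metadata enrichment, and the rule name is hypothetical.

```python
# Minimal Falco responder sketch: read JSON alerts, kill suspected miner pods.
# Assumes Falco writes JSON alerts to /var/log/falco_events.json and that
# Kubernetes metadata enrichment is on; the rule name below is hypothetical.
import json

from kubernetes import client, config

MINING_RULES = {"Outbound Connection to Mining Pool"}  # hypothetical rule name


def handle_alert(line: str, v1: client.CoreV1Api) -> None:
    alert = json.loads(line)
    if alert.get("rule") not in MINING_RULES:
        return
    fields = alert.get("output_fields", {})
    pod, namespace = fields.get("k8s.pod.name"), fields.get("k8s.ns.name")
    if pod and namespace:
        print(f"Deleting suspected miner pod {namespace}/{pod}")
        v1.delete_namespaced_pod(name=pod, namespace=namespace)


def main() -> None:
    config.load_incluster_config()  # run with an in-cluster service account
    v1 = client.CoreV1Api()
    # A production responder would follow the file; this sketch reads it once.
    with open("/var/log/falco_events.json") as events:
        for line in events:
            handle_alert(line, v1)


if __name__ == "__main__":
    main()
```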

Chaffee doesn’t mind the complexity of the solution. His biggest issue was with the “security treadmill” that he defines as:

“A security treadmill is a piece of software that, due to a weakness of design, requires constant patching to keep it secure. Isn’t that just all software? Honestly… kinda, yeah, but a true treadmill is self-inflicted. You bought it, assembled it, and put it in your spare bedroom; a device specifically designed to let you walk/run forever without making forward progress.”

One solution suggested to Chaffee was charging people to use the GPU. The idea was that if using the GPU cost more than the cryptocurrency it could mine was worth, the mining would stop. That idea wasn’t followed for reasons Chaffee wasn’t told, so Falco it was.

Unfortunately, Falco only detects network traffic to a host when the connection goes directly to its IP address. The security treadmill was in full swing: users bypassed the filter Falco monitored, so the rules must keep being updated to catch new evasion techniques such as VPNs or proxies.

Another way to block cryptocurrency mining is to block all outbound traffic except destinations on an allowlist. That would also hinder malware downloads, command-and-control callbacks, and data exfiltration. Another point Chaffee noted is that most applications don’t need a full POSIX environment. To combat this he suggests:

“Perhaps free-tier users of these GPUs could have been restricted to running specific demos, or restrictive timeouts for GPU processing times, or denying disk write access to prevent downloading miners, or denying the ability to execute files outside of a read-only area.”
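
Two of those ideas, restrictive timeouts and denying general disk writes, are easy to sketch. The snippet below is a minimal illustration, not Chaffee’s implementation; the demo command and paths are hypothetical, and real write denial would come from a read-only root filesystem or mount options rather than the working-directory choice shown here.

```python
# Sketch: run a vetted demo with a hard timeout and a throwaway scratch dir.
# The command and paths are placeholders for illustration only.
import subprocess
import tempfile

ALLOWED_DEMO = ["python", "/opt/demos/run_demo.py"]  # hypothetical vetted demo
JOB_TIMEOUT_SECONDS = 120  # long-running jobs (e.g., miners) get killed


def run_restricted_job() -> int:
    with tempfile.TemporaryDirectory() as scratch:
        try:
            result = subprocess.run(
                ALLOWED_DEMO,
                cwd=scratch,  # start in a disposable directory; real write
                # denial needs a read-only rootfs or mount restrictions
                timeout=JOB_TIMEOUT_SECONDS,
                capture_output=True,
            )
        except subprocess.TimeoutExpired:
            print("Job exceeded the time limit and was terminated.")
            return 1
        return result.returncode


if __name__ == "__main__":
    raise SystemExit(run_restricted_job())
```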

Chaffee declares it’s time to upgrade legacy applications or make them obsolete to avoid security treadmills. It sounds like there’s a niche for a startup there. What a thought: a Planet Fitness with one functioning treadmill.

Whitney Grace, April 11, 2025

No Joke: Real Secrecy and Paranoia Are Needed Again

April 1, 2025

No AI. Just a dinobaby sharing an observation about younger managers and their innocence.

In the US and the UK, secrecy and paranoia are chic again. The BBC reported “GCHQ Worker Admits Taking Top Secret Data Home.” Ah, a Booz Allen / Snowden type story? The BBC reports:

The court heard that Arshad took his work mobile into a top secret GCHQ area and connected it to work station. He then transferred sensitive data from a secure, top secret computer to the phone before taking it home, it was claimed. Arshad then transferred the data from the phone to a hard drive connected to his personal home computer.

Mr. Snowden used a USB drive. The question is, “What are the bosses doing? Who is watching the logs? Who is checking the video feeds? Who is hiring individuals with some inner need to steal classified information?”

But outside phones in a top secret meeting? That sounds like a great idea. I attended a meeting held by a local government agency, and phones and weapons were put in little steel boxes. This outfit was no GCHQ, but the security fellow (a former Marine) knew what he was doing for that local government agency.

A related story addresses paranoia, a mental characteristic which is getting more and more popular among some big dogs.

CNBC reported an interesting approach to staff trust. “Anthropic Announces Updates on Security Safeguards for Its AI Models” reports:

In an earlier version of its responsible scaling policy, Anthropic said it would begin sweeping physical offices for hidden devices as part of a ramped-up security effort.

The most recent update to the firm’s security safeguards adds:

updates to the “responsible scaling” policy for its AI, including defining which of its model safety levels are powerful enough to need additional security safeguards.

The actual explanation is a masterpiece of clarity. Here’s a snippet of what Anthropic actually said in its “Anthropic’s Responsible Scaling Policy” announcement:

The current iteration of our RSP (version 2.1) reflects minor updates clarifying which Capability Thresholds would require enhanced safeguards beyond our current ASL-3 standards.

The Anthropic methods, it seems to me, include “sweeps” and “compartmentalization.”

Thus, we have two examples of outstanding management:

First, the BBC report implies that personal computing devices can plug in and receive classified information.

And:

Second, CNBC explains that sweeps are not enough. Compartmentalization of systems and methods puts into “cells” who can do what and how.

Andy Grove’s observation popped into my mind. He allegedly rattled off this statement:

Success breeds complacency. Complacency breeds failure. Only the paranoid survive.

Net net: Cyber security is easier when one can “trust” and “assume.” Real fixes edge into fear and paranoia.

Stephen E Arnold, April 1, 2025

FOGINT: Targets Draw Attention. Signal Is a Target

April 1, 2025

Dinobaby says, “No smart software involved. That’s for ‘real’ journalists and pundits.”

We have been plugging away on the “Telegram Overview: Notes for Analysts and Investigators.” We have not exactly ignored Signal or the dozens of other super secret, encrypted beyond belief messaging applications. We did compile a table of those we came across, and Signal was on that list.

I read “NSA Warned of Vulnerabilities in Signal App a Month Before Houthi Strike Chat.” I am not interested in the political facets of this incident. The important point for me is this statement:

The National Security Agency sent out an operational security special bulletin to its employees in February 2025 warning them of vulnerabilities in using the encrypted messaging application Signal

One of the big time cyber security companies spoke with me, and I mentioned that Signal might not be the cat’s pajamas. To the credit of that company and the former police chief with whom I spoke, the firm shifted to an end-to-end encrypted messaging app we had identified as slightly less wonky. Good for that company, and a pat on the back for the police chief who listened to me.

In my experience, operational bulletins are worth reading. When the bulletin is “special,” re-reading the message is generally helpful.

Signal, of course, defends itself vigorously. The coach who loses a basketball game says, “Our players put out a great effort. It just wasn’t enough.”

Presenting oneself as a super secret messaging app immediately makes that messaging app a target. I know firsthand that some whiz kid entrepreneurs believe their EE2E solution is the best one ever. In fact, a year ago, such an entrepreneur told me, “We have developed a method that only a government agency can compromise.”

Yeah, that’s the point of the NSA bulletin.

Let me ask you a question: “How many computer science students in countries outside the United States are looking at EE2E messaging apps and trying to figure out how to compromise the data?” Years ago, I gave some lectures in Tallinn, Estonia. I visited a university computer science class and asked the students about the projects each had selected. Several told me they were trying to compromise messaging systems. A favorite target was Telegram, but Signal came up too.

I know the wizards who cook up EE2E messaging apps and use the latest and greatest methods for delivering security with bells on are fooling themselves. Here are the reasons:

  1. Systems relying on open source methods are well documented. Exploits exist and we have noticed some CaaS offers to compromise these messages. Now the methods may be illegal in many countries, but they exist. (I won’t provide a checklist in a free blog post. Sorry.)
  2. Techniques to prevent compromise of secure messaging systems involve some patented systems and methods. Yes, the patents are publicly available, but the methods are simply not possible unless one has considerable resources for software, hardware, and deployment.
  3. A number of organizations turn EE2E messaging systems into happy eunuchs taking care of the sultan’s harem. I have poked fun at the blunders of the NSO Group and its Pegasus approach, and I have pointed out that the goodies of the Hacking Team escaped into the wild a long time ago. The point is that once the procedures for performing certain types of compromise are no longer secret, other humans can and will create a facsimile and use those emulations to suck down private messages, the metadata, and probably the pictures on the device too. Toss in some AI jazziness, and the speed of the process goes faster than my old 1962 Studebaker Lark.

Let me wrap up by reiterating that I am not addressing the incident involving Signal. I want to point out that I am not in the “information wants to be free” camp. Certain information is best managed when it is secret. Outfits like Signal and the dozens of other EE2E messaging apps are targets. Targets get hit. Why put neon lights on oneself and try to hide the fact that those young computer science students or their future employers will find a way to compromise the information?

Technical stealth, network fiddling, human bumbling: compromises will continue to occur. There were good reasons to enforce security. That’s why stringent procedures and hardened systems were developed. Today it’s marketing, and non open source, non American methods may no longer deliver what the 23-year-old art history major with a job in marketing says they deliver.

Stephen E Arnold, April 1, 2025

Cyber Attacks in Under a Minute

March 25, 2025

Cybercrime has evolved. VentureBeat reports, "51 Seconds to Breach: How CISOs Are Countering AI-Driven, Lightning-Fast Deepfake, Vishing and Social Engineering Attacks." Yes, according to cybersecurity firm CrowdStrike’s Adam Meyers, the fastest breakout time he has seen is 51 seconds. No wonder bad actors have an advantage—it can take cyber defense weeks to months to determine a system has been compromised. In the interim, hackers can roam undetected.

Cybercrime methods have also changed. Where malware was once the biggest problem, hackers now favor AI-assisted phishing and vishing (voice-based phishing) campaigns. We learn:

"Vishing is out of control due in large part to attackers fine-turning their tradecraft with AI. CrowdStrike’s 2025 Global Threat Report found that vishing exploded by 442% in 2024. It’s the top initial access method attackers use to manipulate victims into revealing sensitive information, resetting credentials and granting remote access over the phone. ‘We saw a 442% increase in voice-based phishing in 2024. This is social engineering, and this is indicative of the fact that adversaries are finding new ways to gain access because…we’re kind of in this new world where adversaries have to work a little bit harder or differently to avoid modern endpoint security tools,’ Meyers said. Phishing, too, continues to be a threat. Meyers said, ‘We’ve seen that with phishing emails, they have a higher click-through rate when it’s AI-generated content, a 54% click-through rate, versus 12% when a human is behind it.’"

The write-up suggests three strategies to fight today’s breaches. Stop attackers at the authentication layer by shortening token lifetimes and implementing real-time revocation. Also, set things up so no one person can bypass security measures. No, not even the owner. Maybe especially not them. Next, we are advised, fight AI with AI: Machine-learning tools now exist to detect intrusions and immediately shut them down. Finally, stop lateral movement from the breach point with security that is unified across the system. See the write-up for more details on each of these.
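
The first of those strategies is straightforward to sketch. Below is a minimal illustration of short-lived tokens with a revocation check using the PyJWT library; the secret, the five-minute lifetime, and the in-memory revocation set are my assumptions for illustration, and a production system would use a managed secret and a shared revocation store such as Redis.

```python
# Sketch of short-lived tokens plus real-time revocation using PyJWT.
# Secret, lifetime, and the in-memory revocation set are illustrative.
import datetime

import jwt  # pip install pyjwt

SECRET = "replace-with-a-real-secret"
TOKEN_LIFETIME = datetime.timedelta(minutes=5)  # short lifetime narrows the window
REVOKED_TOKEN_IDS: set[str] = set()  # production: a shared store, e.g. Redis


def issue_token(user: str, token_id: str) -> str:
    now = datetime.datetime.now(datetime.timezone.utc)
    return jwt.encode(
        {"sub": user, "jti": token_id, "iat": now, "exp": now + TOKEN_LIFETIME},
        SECRET,
        algorithm="HS256",
    )


def check_token(token: str) -> dict:
    # jwt.decode validates the signature and raises ExpiredSignatureError
    # once the short exp window has passed.
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    if claims["jti"] in REVOKED_TOKEN_IDS:
        raise PermissionError("token revoked")
    return claims


def revoke(token_id: str) -> None:
    REVOKED_TOKEN_IDS.add(token_id)  # takes effect on the next check
```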

Cynthia Murrell, March 25, 2025

Why Worry about TikTok?

March 21, 2025

We have smart software, but the dinobaby continues to do what 80 year olds do: Write the old-fashioned human way. We did give up clay tablets for a quill pen. Works okay.

I hope this news item from WCCF Tech is wildly incorrect. I have a nagging thought that it might be on the money. “Deepseek’s Chatbot Was Being Used By Pentagon Employees For At Least Two Days Before The Service Was Pulled from the Network; Early Version Has Been Downloaded Since Fall 2024” is the headline I noted. I find this interesting.

The short article reports:

A more worrying discovery is that Deepseek mentions that it stores data on servers in China, possibly presenting a security risk when Pentagon employees started playing around with the chatbot.

And adds:

… employees were using the service for two days before this discovery was made, prompting swift action. Whether the Pentagon workers have been reprimanded for their recent act, they might want to exercise caution because Deepseek’s privacy policy clearly mentions that it stores user data on its Chinese servers.

Several observations:

  1. This is a nifty example of an insider threat. I thought cyber security services blocked this type of to and fro from government computers on a network connected to public servers. (A toy version of such an egress filter appears after this list.)
  2. The reaction time is either months (downloads since fall 2024) or 48 hours (once the chatbot was noticed on the network). My hunch is that the months-long usage of an early version of the Chinese service is the more telling figure.
  3. Which “manager” is responsible? Sorting out which vendor’s software did not catch this and which individual’s unit dropped the ball will be interesting and probably unproductive. Is it in any authorized vendor’s interest to say, “Yeah, our system doesn’t look for phoning home to China, but it will be in the next update if your license is paid up for that service”? Will a US government professional say, “Our bad”?
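
As a footnote to item one, the sort of egress allowlisting a locked-down network would apply can be sketched in a few lines. The domains below are placeholders of mine, not any agency’s actual policy:

```python
# Toy egress filter: allow outbound requests only to approved hosts.
# The domains are illustrative placeholders, not a real policy.
from urllib.parse import urlparse

APPROVED_HOSTS = {"update.example.gov", "mail.example.gov"}  # hypothetical


def egress_allowed(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return host in APPROVED_HOSTS


assert egress_allowed("https://mail.example.gov/inbox")
assert not egress_allowed("https://chat.deepseek.com/api")  # denied by default
```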

Net net: We have snow removal services that don’t remove snow. We have aircraft crashing in sight of government facilities. And we have Chinese smart software running on US government systems connected to the public Internet. Interesting.

Stephen E Arnold, March 21, 2025

AI Hiring Spoofs: A How To

March 12, 2025

dino orange_thumbBe aware. A dinobaby wrote this essay. No smart software involved.

The late Robert Steele, one of the first government professionals to hop on the open source information bandwagon, and I worked together for many years. In one of our conversations in the 1980s, Robert explained how he used a fake persona to recruit people to assist him in his work on a US government project. He explained that job interviews were an outstanding source of information about a company or an organization.

“AI Fakers Exposed in Tech Dev Recruitment: Postmortem” is a modern spin on Robert’s approach. Instead of newspaper ads and telephone calls, today’s approach uses AI and video conferencing. The article presents a recipe for a technique not widely discussed in the 1980s. Robert learned his approach from colleagues in the US government.

The write up explains that a company wants to hire a professional. Everything hums along and then:

…you discover that two imposters hiding behind deepfake avatars almost succeeded in tricking your startup into hiring them. This may sound like the stuff of fiction, but it really did happen to a startup called Vidoc Security, recently. Fortunately, they caught the AI impostors – and the second time it happened they got video evidence.

The cited article explains how to set up and operate this type of deep fake play. I am not going to present the “how to” in this blog post. If you want the details, head to the original. The penetration tactic requires Microsoft LinkedIn, which gives that platform another use case for certain individuals gathering intelligence.

Several observations:

  1. Keep in mind that the method works for fake employers looking for “real” employees in order to obtain information from job candidates. (Some candidates are blissfully unaware that the job is a front for obtaining data about an alleged former employer.)
  2. The best way to avoid AI centric scams is to do the work the old-fashioned way. Smart software opens up a wealth of opportunities to obtain allegedly actionable information. Unfortunately the old-fashioned way is slow, expensive, and prone to social engineering tactics.
  3. As bad actors take advantage of the increased capabilities of smart software, humans do not adapt quickly when they are not actively involved with AI capabilities. Personnel related matters are a pain point for many organizations.

To sum up, AI is a tool. It can be used in interesting ways. Is the contractor you hired on Fiverr or via some online service a real person? Is the job a real job or a way to obtain information via an AI that is a wonderful conversationalist? One final point: The target referenced in the write-up was a cyber security outfit. Did the early alert, proactive, AI infused system prevent penetration?

Nope.

Stephen E Arnold, March 12, 2025

Encryption: Not the UK Way but Apple Is A-Okay

March 6, 2025

The UK is on a mission. It seems to be making progress. The BBC reports, "Apple Pulls Data Protection Tool After UK Government Security Row." Technology editor Zoe Kleinman explains:

"Apple is taking the unprecedented step of removing its highest level data security tool from customers in the UK, after the government demanded access to user data. Advanced Data Protection (ADP) means only account holders can view items such as photos or documents they have stored online through a process known as end-to-end encryption. But earlier this month the UK government asked for the right to see the data, which currently not even Apple can access. Apple did not comment at the time but has consistently opposed creating a ‘backdoor’ in its encryption service, arguing that if it did so, it would only be a matter of time before bad actors also found a way in. Now the tech giant has decided it will no longer be possible to activate ADP in the UK. It means eventually not all UK customer data stored on iCloud – Apple’s cloud storage service – will be fully encrypted."

The UK’s Home Office refused to comment on the matter. Apple states it was "gravely disappointed" with this outcome. It emphasizes its longstanding refusal to build any kind of back door or master key. It is the principle of the thing. Instead, it is now removing the locks on the main entrance. Much better.

As of the publication of Kleinman’s article, new iCloud users who tried to opt into ADP received an error message. Apparently, protection for existing users will be stripped at a later date. Some worry Apple’s withdrawal of ADP from the UK sets a bad precedent in the face of similar demands in other countries. Of course, so would caving in to them. The real culprit here, some say, is the UK government that put its citizens’ privacy at risk. Will other governments follow its lead? Will tech firms develop some best practices in the face of such demands? We wonder what their priorities will be.

Cynthia Murrell, March 6, 2025

Google and Personnel Vetting: Careless?

February 20, 2025

No smart software required. This dinobaby works the old-fashioned way.

The Sundar & Prabhakar Comedy Show pulled another gag. This one did not delight audiences the way Prabhakar’s AI presentation did, nor does it outdo Google’s recent smart software gaffe. It is, however, a bit of a hoot for an outfit with money, smart people, and smart software.

I read the decidedly non-humorous news release from the Department of Justice titled “Superseding Indictment Charges Chinese National in Relation to Alleged Plan to Steal Proprietary AI Technology.” The write up states on February 4, 2025:

A federal grand jury returned a superseding indictment today charging Linwei Ding, also known as Leon Ding, 38, with seven counts of economic espionage and seven counts of theft of trade secrets in connection with an alleged plan to steal from Google LLC (Google) proprietary information related to AI technology. Ding was initially indicted in March 2024 on four counts of theft of trade secrets. The superseding indictment returned today describes seven categories of trade secrets stolen by Ding and charges Ding with seven counts of economic espionage and seven counts of theft of trade secrets.

Thanks, OpenAI, good enough.

Mr. Ding, obviously a Type A worker, appears to have been quite industrious at the Google. He was not working only for the online advertising giant; he was working for another entity. The DoJ news release describes his set up this way:

While Ding was employed by Google, he secretly affiliated himself with two People’s Republic of China (PRC)-based technology companies. Around June 2022, Ding was in discussions to be the Chief Technology Officer for an early-stage technology company based in the PRC.  By May 2023, Ding had founded his own technology company focused on AI and machine learning in the PRC and was acting as the company’s CEO.

What technology caught Mr. Ding’s eye? The write up reports:

Ding intended to benefit the PRC government by stealing trade secrets from Google. Ding allegedly stole technology relating to the hardware infrastructure and software platform that allows Google’s supercomputing data center to train and serve large AI models. The trade secrets contain detailed information about the architecture and functionality of Google’s Tensor Processing Unit (TPU) chips and systems and Google’s Graphics Processing Unit (GPU) systems, the software that allows the chips to communicate and execute tasks, and the software that orchestrates thousands of chips into a supercomputer capable of training and executing cutting-edge AI workloads. The trade secrets also pertain to Google’s custom-designed SmartNIC, a type of network interface card used to enhance Google’s GPU, high performance, and cloud networking products.

At least, Mr. Ding validated the importance of some of Google’s sprawling technical insights. That’s a plus I assume.

One of the more colorful items in the DoJ news release concerned “evidence.” The DoJ says:

As alleged, Ding circulated a PowerPoint presentation to employees of his technology company citing PRC national policies encouraging the development of the domestic AI industry. He also created a PowerPoint presentation containing an application to a PRC talent program based in Shanghai. The superseding indictment describes how PRC-sponsored talent programs incentivize individuals engaged in research and development outside the PRC to transmit that knowledge and research to the PRC in exchange for salaries, research funds, lab space, or other incentives. Ding’s application for the talent program stated that his company’s product “will help China to have computing power infrastructure capabilities that are on par with the international level.”

Mr. Ding did not use Google’s cloud-based presentation program. I found the explicit desire to “help China” interesting. One wonders how Google’s Googley interview process, run by Googley people, failed to notice any indicators of Mr. Ding’s loyalties. Googlers are very confident of their Googliness, which obviously tolerates an insider threat who conveys data to a nation state known to be adversarial in its view of the United States.

I am a dinobaby, and I find this type of employee insider threat at Google remarkable. Google bought Mandiant. Google has internal security tools. Google has a very proactive stance about its security capabilities. However, in this case, I wonder if a Googler ever noticed that Mr. Ding used PowerPoint, not the Google-approved presentation program. No true Googler would use PowerPoint, an archaic, third party program Microsoft bought eons ago and has pumped full of steroids for decades.

Yep, the tell — Googlers who use Microsoft products. Sundar & Prabhakar will probably integrate a short bit into their act in the near future.

Stephen E Arnold, February 20, 2025

Hackers and AI: Of Course, No Hacker Would Use Smart Software

February 18, 2025

This blog post is the work of a real live dinobaby. Believe me, after reading the post, you know that smart software was not involved.

Hackers would never ever use smart software. I mean those clever stealer distributors preying on get-rich-quick stolen credit card users. Nope. Those people using online games to lure kiddies and people with kiddie-level intelligence into providing their parents’ credit card data? Nope and double nope. Those people in computer science classes in Azerbaijan learning how to identify security vulnerabilities while working as contractors for criminals? Nope. Never. Are you crazy? These bad actors know that smart software is most appropriate for Mother Teresa type activities and creating Go Fund Me pages to help those harmed by natural disasters, bad luck, or not having a job except streaming.

I mean everyone knows that bad actors respect the firms providing smart software. It is common knowledge that bad actors play fair. Why would a criminal use smart software to create more efficacious malware payloads, compromise Web sites, or defeat security to trash the data on Data.gov? Ooops. Bad example. Data.gov has been changed.

I read “Google Says Hackers Abuse Gemini AI to Empower Their Attacks.” That’s the spirit. Bad actors are using smart software. The value of the systems is evident to criminals. The write up says:

Multiple state-sponsored groups are experimenting with the AI-powered Gemini assistant from Google to increase productivity and to conduct research on potential infrastructure for attacks or for reconnaissance on targets. Google’s Threat Intelligence Group (GTIG) detected government-linked advanced persistent threat (APT) groups using Gemini primarily for productivity gains rather than to develop or conduct novel AI-enabled cyberattacks that can bypass traditional defenses. Threat actors have been trying to leverage AI tools for their attack purposes to various degrees of success as these utilities can at least shorten the preparation period. Google has identified Gemini activity associated with APT groups from more than 20 countries but the most prominent ones were from Iran and China.

Stop the real time news stream! Who could have imagined that bad actors would be interested in systems and methods that would make their behaviors more effective and efficient.

When Microsoft rolled out its marketing gut punch aimed squarely at Googzilla, the big online advertising beast responded. The Code Red and Code Yellow lights flashed. Senior managers held meetings after Foosball games and hanging out at Philz Coffee.

Did Google management envision the reality of bad actors using Gemini? No. It appears that the Google acquisition Mandiant figured it out. Eventually, two years and counting after Microsoft caused the AI tsunami, the Eureka! moment arrived.

The write up reports:

Google also mentions having observed cases where the threat actors attempted to use public jailbreaks against Gemini or rephrasing their prompts to bypass the platform’s security measures. These attempts were reportedly unsuccessful.

Of course they were. Do US banks tell their customers when check fraud or other cyber dishonesty relieves people of their funds? Sure they don’t. Therefore, it is only the schlubs who are unfortunate enough to have the breach disclosed. Then the cyber security outfits leap into action and issue fixes. Everything in the cyber security world is buttoned up and buttoned down. Absolutely.

Several observations:

  1. How has free access without any type of vetting worked out? The question is directed at the big tech outfits who are beavering away in this technology blast zone.
  2. What are the providers of free smart software doing to make certain that the method can only produce seventh grade students’ essays about the transcontinental railroad?
  3. What exactly is a user of free smart software supposed to do to rein in the actions of nation states with which most Americans are only somewhat familiar? I mean there is a Chinese restaurant near Harrod’s Creek. Am I to discuss the matter with the waitress?

Why worry? That worked for Mad Magazine until it didn’t. Hey, Google, thanks for the information. Who could have known smart software can be used for nefarious purposes? (Obviously not Google.)

Stephen E Arnold, February 18, 2025
