FOGINT: Targets Draw Attention. Signal Is a Target

April 1, 2025

dino orange_thumb_thumb_thumbDinobaby says, “No smart software involved. That’s for “real” journalists and pundits.

We have been plugging away on the “Telegram Overview: Notes for Analysts and Investigators.” We have not exactly ignored Signal or the dozens of other super secret, encrypted beyond belief messaging applications. We did compile a table of those we came across, and Signal was on that list.

I read “NSA Warned of Vulnerabilities in Signal App a Month Before Houthi Strike Chat.” I am not interested in the political facets of this incident. The important point for me is this statement:

The National Security Agency sent out an operational security special bulletin to its employees in February 2025 warning them of vulnerabilities in using the encrypted messaging application Signal

One of the big time cyber security companies spoke with me, and I mentioned that Signal might not be the cat’s pajamas. To the credit of that company and the former police chief with whom I spoke, the firm shifted to an end to end encrypted messaging app we had identified as slightly less wonky. Good for that company, and a pat on the back for the police chief who listened to me.

In my experience, operational bulletins are worth reading. When the bulletin is “special,” re-reading the message is generally helpful.

Signal, of course, defends itself vigorously. The coach who loses a basketball game says, “Our players put out a great effort. It just wasn’t enough.”

In the world of presenting oneself as a super secret messaging app immediately makes that messaging app a target. I know first hand that some whiz kid entrepreneurs believe that their EE2E solution is the best one ever. In fact, a year ago, such an entrepreneur told me, “We have developed a method that only a government agency can compromise.”

Yeah, that’s the point of the NSA bulletin.

Let me ask you a question: “How many computer science students in countries outside the United States are looking at EE2E messaging apps and trying to figure out how to compromise the data?” Years ago, I gave some lectures in Tallinn, Estonia. I visited a university computer science class. I asked the students who were working on projects each selected. Several of them told me that they were trying to compromise messaging systems. A favorite target was Telegram but Signal came up.

I know the wizards who cook up EE2E messaging apps and use the latest and greatest methods for delivering security with bells on are fooling themselves. Here are the reasons:

  1. Systems relying on open source methods are well documented. Exploits exist and we have noticed some CaaS offers to compromise these messages. Now the methods may be illegal in many countries, but they exist. (I won’t provide a checklist in a free blog post. Sorry.)
  2. Techniques to prevent compromise of secure messaging systems involve some patented systems and methods. Yes, the patents are publicly available, but the methods are simply not possible unless one has considerable resources for software, hardware, and deployment.
  3. A number of organizations turn EE2E messaging systems into happy eunuchs taking care of the sultan’s harem. I have poked fun at the blunders of the NSO Group and its Pegasus approach, and I have pointed out that the goodies of the Hacking Team escaped into the wild a long time ago. The point is that once the procedures for performing certain types of compromise are no longer secret, other humans can and will create a facsimile and use those emulations to suck down private messages, the metadata, and probably the pictures on the device too. Toss in some AI jazziness, and the speed of the process goes faster than my old 1962 Studebaker Lark.

Let me wrap up by reiterating that I am not addressing the incident involving Signal. I want to point out that I am not into the “information wants to be free.” Certain information is best managed when it is secret. Outfits like Signal and the dozens of other EE2E messaging apps are targets. Targets get hit. Why put neon lights on oneself and try to hide the fact that those young computer science students or their future employers will find a way to compromise the information.

Technical stealth, network fiddling, human bumbling — Compromises will continue to occur. There were good reasons to enforce security. That’s why stringent procedures and hardened systems have been developed. Today it’s marketing, and the possibility that non open source, non American methods may no longer be what the 23 year old art history who has a job in marketing says the systems actually deliver.

Stephen E Arnold, April 1, 2025

Free AI Sites (Well, Mostly Free Sort of)

April 1, 2025

dino orange_thumb_thumb_thumb_thumb_thumb_thumbDinobaby says, “No smart software involved. That’s for “real” journalists and pundits.

One of my team generated images of French bulldogs. After months of effort, he presented me with a picture of our French bulldog complete with one floppy ear. The image was not free. I pay for the service because free image generation systems work and then degrade because of the costs associated with doing smart software without oodles of cash.

Another person proudly emailed everyone a link to Best AI Websites and the page “Free AI Tools.” The interfaces, functionality, and the outputs vary. The linked Web page is a directory presented with some of that mobile interface zip.l

There are more than 30 tools anyone can try. Here’s what the “directory” interface looks like:

image

The first click displays the BestFreeAIWebsites’ write up for each “service” or “tool.” Then a direct link to the free AI site is displayed. There is a “submit” button to allow those with a free AI tool to add theirs to the listing. The “add” function is a common feature of Telegram bot and Channel listings.

Here is a selection of the “free” services that are available as of March 28, 2025, in alphabetical order:

  1. HUUK.ai, a trip planner
  2. Metavoice at https://studio.themetavoice.xyz/, a “one click voice changer”
  3. Presentpicker.ai, a service to help a user choose a gift.
  4. Remaker.ai, a face swap tool
  5. Yomii.app, a real estate investing assistant

ChatGPT features numerous times in the list of “free” AI tools. Google shows up a couple of times with Bard and Gemini. The majority of the services “wrap” functionality around the big dogs in the LLM space.

Are these services “free”? Our view is that the “free” is a way to get people to give the services a try. If the experience is positive, upgrades are available.

As one of my team worked through the listings, he said, “Most of these services have been available as Telegram bots from other developers.” If he is correct, perhaps Telegram’s AI functions should be included in the listing?

Stephen E Arnold, April 1, 2025

Amazon: So Many Great Ideas

April 1, 2025

AWS puts its customers first. Well, those who pay for the premium support plan, anyway. A thread on Reddit complains, "AWS Blocking Troubleshooting Docs Behind Paid Premium Support Plan." Redditor Certain_Dog1960 writes:

"When did AWS decide that troubleshooting docs/articles require you to have a paid premium support plan….like seriously who thought this was a good idea?"

Good question. The comments and the screenshot of Amazon’s message make clear that the company’s idea of how to support customers is different from actual customers’ thoughts. However, Certain_Dog posted an encouraging update:

"The paywall has been taken down!!! :)"

Apparently customer outrage still makes a difference. Occasionally.

Cynthia Murrell, March 31, 2025

Click Counting: It Is 1992 All Over Again

March 31, 2025

dino orange_thumb_thumb_thumb_thumbDinobaby says, “No smart software involved. That’s for “real” journalists and pundits.

I love it when search engine optimization experts, online marketing executives, and drum beaters for online advertising talk about clicks, clickstreams, and click metrics. Ho ho ho.

I think I was involved in creating a Web site called Point (The Top 5% of the Internet). The idea was simple: Curate and present a directory of the most popular sites on the Internet. It was a long shot because the team did not want to do drugs, sex, and a number of other illegal Web site profiles for the directory. The idea was that in 1992 or so, no one had a Good Housekeeping Seal of Approval-type of directory. There was Yahoo, but if one poked around, some interesting Web sites would display in their low resolution, terrible bandwidth glory.

To my surprise, the idea worked and the team wisely exited the business when someone a lot smarter than the team showed up with a check. I remember fielding questions about “traffic”. There was the traffic we used to figure out what sites were popular. Then there was traffic we counted when visitors to Point hit the home page and read profiles of sites with our Good Housekeeping-type of seal.

I want to share that from those early days of the Internet the counting of clicks was pretty sketchy. Scripts could rack up clicks in a slow heartbeat. Site operators just lied or cooked up reports that served up a reality in terms of tasty little clicks.

Why are clicks bogus? I am not prepared to explain the dark arts of traffic boosting which today is greatly aided by  scripts instantly generated by smart software. Instead I want to highlight this story in TechCrunch: “YouTube Is Changing How YouTube Shorts Views Are Counted.” The article does a good job of explaining how one monopoly is responding to its soaring costs and the slow and steady erosion of its search Nile River of money.

The write up says:

YouTube is changing how it counts views on YouTube Shorts to give creators a deeper understanding of how their short-form content is performing

I don’t know much about YouTube. But I recall watching little YouTubettes which bear a remarkable resemblance to TikTok weaponized data bursts just start playing. Baffled, I would watch a couple of seconds, check that my “autoplay” was set to off, and then kill the browser page. YouTubettes are not for me.

Most reasonable people would want to know several things about their or any YouTubette; for example:

  1. How many times did a YouTubette begin to play and then was terminated in less that five seconds
  2. How many times a YouTubette was viewed from start to bitter end
  3. How many times a YouTubette was replayed in its entirety by a single user
  4. What device was used
  5. How many YouTubettes were “shared”
  6. The percentage of these data points compared against the total clicks of a short nature or the full view?

You get the idea. Google has these data, and the wonderfully wise but stressed firm is now counting “short views” as what I describe as the reality: Knowing exactly how many times a YouTubette was played start to finish.

According to the write up:

With this update, YouTube Shorts will now align its metrics with those of TikTok and Instagram Reels, both of which track the number of times your video starts or replays. YouTube notes that creators will now be able to better understand how their short-form videos are performing across multiple platforms. Creators who are still interested in the original Shorts metric can view it by navigating to “Advanced Mode” within YouTube Analytics. The metric, now called “engaged views,” will continue to allow creators to see how many viewers choose to continue watching their Shorts. YouTube notes that the change won’t impact creators’ earnings or how they become eligible for the YouTube Partner Program, as both of these factors will continue to be based on engaged views rather than the updated metric.

Okay, responding to the competition from one other monopolistic enterprise. I get it. Okay, Google will allegedly provided something for a creator of a YouTubette to view for insight. And the change won’t impact what Googzilla pays a creator. Do creators really know how Google calculates payments? Google knows. With the majority of the billions of YouTube videos (short and long) getting a couple of clicks, the “popularity” scheme boils down to what we did in 1992. We used whatever data was available, did a few push ups, and pumped out a report.

Could Google follow the same road map? Of course not. In 1992, we had no idea what we were doing. But this is 2025 and Google knows exactly what it is doing.

Advertisers will see click data that do not reflect what creators want to see and what viewers of YouTubettes and probably other YouTube content really want to know: How many people watched the video from start to finish?

Google wants to sell ads at perhaps the most difficult point in its 20 year plus history. That autoplay inflates clicks. “Hey, the video played. We count it,” can you conceptualize the statement? I can.

Let’s not call this new method “weaponization.” That’s too strong. Let’s describe this as “shaping” or “inflating” clicks.

Remember. I am a dinobaby and usually wrong. No high technology company would disadvantage a creator or an advertiser. Therefore, this change is no big deal. Will it help Google deal with its current challenges? You can now ask Google AI questions answered by its most sophisticated smart software for free.

Is that an indication that something is not good enough to cause people to pay money? Of course not. Google says “engaged views” are still important. Absolutely. Google is just being helpful.

Stephen E Arnold, March 31, 2025

Google: Android and the Walled Garden

March 31, 2025

dino orange_thumb_thumb_thumb_thumb_thumbDinobaby says, “No smart software involved. That’s for “real” journalists and pundits.

In my little corner of the world, I do not see Google as “open.” One can toss around the idea 24×7, and I won’t change my mind. Despite its unusual approach to management, the company has managed to contain the damage from Xooglers’ yip yapping about the company. Xoogler.co is focused on helping people. I suppose there are versions of Sarah Wynn-Williams “Careless People” floating around. Few talk much about THE Timnit Gebru “parrot” paper. Google is, it seems, just not the buzz generator it was in 2006, the year the decline began to accelerate in my opinion.

We have another example of “circling the wagons” strategy. It is a doozy.

Google Moves All Android Development Behind Closed Doors” reports with some “real” writing and recycling of Google generated slick talk an interesting shift in the world of the little green man icon:

Google had to merge the two branches, which lead to problems and issues, so Google decided it’s now moving all development of Android behind closed doors

How many versions of messaging apps did Google have before it decided that “let many flowers bloom” was not in line with the sleek profile the ageing Google want to flaunt on Wall Street?

The article asks a good question:

…if development happens entirely behind closed doors, with only the occasional code drop, is the software in question really open source? Technically, the answer is obviously ‘yes’ – there’s no requirement that development take place in public. However, I’m fairly sure that when most people think of open source, they think not only of occasionally throwing chunks of code over the proverbial corporate walls, but also of open development, where everybody is free to contribute, pipe in, and follow along.

News flash from the dinobaby: Open source software, when bandied about by folks who don’t worry too much about their mom missing a Social Security check means:

  1. We don’t want to chase and fix bugs. Make it open source and let the community do it for free.
  2. We probably have coded up something that violates laws. By making it open source, we are really benefiting those other developers and creating opportunities for innovation.
  3. We can use the buzzword “open source” and jazz the VCs with a term that is ripe with promise for untold riches
  4. A student thinks: I can make my project into open source and maybe it will help me get a job.
  5. A hacker thinks: I can get “cred” by taking my exploit and coding a version that penetration testers will find helpful and possibly not discover the backdoor.

I have not exhausted the kumbaya about open source.

It is clear that Google is moving in several directions, a luxury only Googzillas have:

First, Google says, “We will really, really, cross my fingers and hope to die, share code … just like always.

Second, Google can add one more oxen drawn wagon to its defensive circle. The company will need it when the licensing terms for Android include some very special provisions. Of course, Google may be charitable and not add additional fees to its mobile OS.

Third, it can wave the “we good managers” flag.

Fourth, as the write up correctly notes:

…Darwin, the open source base underneath macOS and iOS, is technically open source, but nobody cares because Apple made it pretty much worthless in and of itself. Anything of value is stripped out and not only developed behind closed doors, but also not released as open source, ensuring Darwin is nothing but a curiosity we sometimes remember exists. Android could be heading in the same direction.

I think the “could” is a hedge. I penciled in “will.” But I am a dinobaby. What do I know?

Stephen E Arnold, March 31, 2025

Apple CEO Chases Chinese AI and Phone Sales

March 31, 2025

While the hullabaloo about making stakes in China’s burgeoning market has died down, Big Tech companies still want pieces of the Chinese pie or dumpling would be a better metaphor here. An example of Big Tech wanting to entrench itself in the ChinBaiese market is Apple. Mac Rumors reports that Apple CEO Tim Cook was recently in China and he complimented start-up Deepseek for its AI models. The story, “Apple CEO Tim Cook Praises China’s Deepseek”

While Cook didn’t say he would pursue a partnership with Deepseek, he was impressed with their AI models. He called them excellent, because Deepseek delivers AI models with high performance capabilities that have lower costs and compute requirements. Deepseek’s research has been compared to OpenAI for achieving similar results by using less resources.

When Cook visited China he reportedly made an agreement with Alibaba Group to integrate its Qwen models into Apple Intelligence. There are also rumors that Apple’s speaking with Baidu about providing LLMs for the Chinese market.

Does this mean that Tim Apple hopes he can use Chinese smart tech in the iPhone and make that more appealing to Chinese users? Hmmmm.

Cook conducted more business during his visit:

In addition to his comments on AI, Cook announced plans to expand Apple’s cooperation with the China Development Research Foundation, alongside continued investments in clean energy development. Throughout his visit, Cook posted updates on the Chinese social media platform Weibo, showcasing a range of Apple products being used in classrooms, creative environments, and more.

Cook’s comments mark a continuation of Apple’s intensified focus on the Chinese market at a time when the company is facing declining iPhone shipments and heightened competition from domestic brands. Apple’s smartphone shipments in China are believed to have fallen by 25% year-over-year in the fourth quarter of 2024, while annual shipments dropped 17% to 42.9 million units, placing Apple behind local competitors Vivo and Huawei.”

It’s evident that Apple continues to want a piece of the Chinese dumpling, but also seeks to incorporate Chinese technology into its products. Subtle, Tim Apple, subtle.

Whitney Grace, March 31, 2025

Cypersecurity Pros, Bit of an Issue. You Think?

March 28, 2025

dino orangeBe aware. A dinobaby wrote this essay. No smart software involved.

I read a research report in the Register titled “MINJA Sneak Attack Poisons AI Models for Other Chatbot Users.” The write up is interesting and, I think, important. The weakness is that the essay does not make explicit that this type of vulnerability can be automated and the outputs used to create the type of weaponized content produced by some intelligence agencies (and PR firms).

The write up provides diagrams and useful detail. For this short blog post, my take on the technique is a manipulation of an LLM’s penchant for adapting to the prompts during a human-interface interaction. If the bad actor crafts misleading information, the outputs can be skewed.

How serious is the behavior in LLMs? In my view, the PR and hype about AI renders the intentional fiddling to a trivial concern. That’s not where the technique nor the implications of its effectiveness belong. Triggering wonky behavior is as easy as mismatching patient data as the article illustrates.

Before one gets too excited about autonomous systems using LLMs to just do it, more attention to the intentional weaponization of LLMs is needed.

Will the AI wizards fix this problem? Sure, someday, but it is an issue that requires time, money, and innovation. We live in an era of marketing. I know I cannot trust most people. Now I know that I can’t trust a MINJA that sneaks into my search or research and delivers a knock out blow.

The Register could have been a bit more energetic in its presentation of this issue. The cited essay does a good job of encouraging bad actors and propagandists to be more diligent in their use of LLMs.

Stephen E Arnold, March 28, 2025

OpenAI and Alleged Environmental Costs: No Problem

March 28, 2025

We know ChatGPT uses an obscene amount of energy and water. But it can be difficult to envision exactly how much. Digg offers some helpful infographics in, "Do You Know How Much Energy ChatGPT Actually Uses?" Writer Darcy Jimenez tells us:

"Since it was first released in 2022, ChatGPT has gained a reputation for being particularly bad for the environment — for example, the GPT-4 model uses as many as 0.14 kilowatt-hours (kWh) generating something as simple as a 100-word email. It can be tricky to fully appreciate the environmental impact of using ChatGPT, though, so the researchers at Business Energy UK made some visualizations to help. Using findings from a 2023 research paper, they calculated the AI chatbot’s estimated water and electricity usage per day, week, month and year, assuming its 200 million weekly users feed it five prompts per day."

See the post for those enlightening graphics. Here are just a few of the astounding statistics:

"Electricity: each day, ChatGPT uses 19.99 million kWh. That’s enough power to charge 4 million phones, or run the Empire State Building for 270 days. … ChatGPT uses a whopping 7.23 billion kWh per year, which is more electricity than the world’s 112 lowest-consumption countries consume over the same period. It’s also enough to power every home in Wyoming for two and a half years."

And:

"Water: The 19.58 million gallons ChatGPT drinks every day could fill a bath for each of Colorado Springs’s 488,664 residents. That amount is also equivalent to everyone in Belgium flushing their toilet at the same time. … In the space of a year, the chatbot uses 7.14 billion gallons of water. That’s enough to fill up the Central Park Reservoir seven times, or power Las Vegas’s Fountains of Bellagio shows for almost 600 years."

Wow. See the write-up for more mind-boggling comparisons. Dolphin lovers and snail darter fans may want to check out the write up.

Cynthia Murrell, March 28, 2025

Amazon Twitches: Love That Streaming, Dontcha?

March 28, 2025

Having a pervasive online presence is great for business, especially if you’re an influencer and you want endorsements. There’s also a dark side to being in the public eye and that comes in the form of anything from home invasions to death threats. Twitch star Amouranth’s home was burgled and she ended up being assaulted. Three more female Twitch stars were in the danger zone. The BBC reports that, “Twitch Creators ‘Taking Live Stream Death Threats Very Seriously.”

Twitch stars Emiru, China, and Valkyrae received death threats from a follower named Russell. He appeared on their stream from Pacific Park, Santa Monica. He threatened to unalive [sic] Emiru when she refused to share her contact information. The streamers reported the incident to Santa Monica police.

Emiru, China, and Valkyrae have millions of followers online. They went to Santa Monica, rode some rides, and then they were followed by a bad actor. When he asked for Emiru’s contact information, she refused and he made the threat. The streamers were scared, so they screamed and ran into a store.

Some watchers said the streamers staged the incident. Valkyrae responded:

‘Posting on X, she also said what happened demonstrates the ‘harsh reality women live in’ and hit out at online comments that it was staged to drive hits.

‘Seeing accounts accusing my friends and I for faking this and blaming us instead of questioning the man’s behaviour has been embarrassing to see.

‘I’ve learned it doesn’t matter how much I accomplish in this industry or how much I try to gain respect, some men will hate women and blame women no matter the situation.’

Emiru did not appear in the follow-up stream on Monday but posted on X afterwards.

‘I wish I could say this was some kind of one-in-a-million incident, but the truth is, it is not,’ she said. ‘This is what life is like for girls.

‘I hope if anything, people see what happened and realise how much of a reality it is for women and content creators as a whole.’”

It’s horrible that these high profile streamers were accosted, received death threats, and were also accused of staging. It demonstrates what women in the public eye are incredibly vulnerable.

Whitney Grace, March 28, 2025

Programmers: The Way of the Dodo Bird?

March 27, 2025

dino orange_thumb_thumb_thumb_thumb_thumb_thumbAnother dinobaby blog post. Eight decades and still thrilled when I point out foibles.

Let’s just assume that the US economy is A-OK. One discipline is indispensable now and in the future. What is it? The programmer.

Perhaps not if the information in “Employment for Computer Programmers in the U.S. Has Plummeted to Its Lowest Level Since 1980—Years Before the Internet Existed” is accurate.

The write up states:

There are now fewer computer programmers in the U.S. than there were when Pac-Man was first invented—years before the internet existed as we know it. Computer-programmer employment dropped to its lowest level since 1980, the Washington Post reported, using data from the Current Population Survey from the Bureau of Labor Statistics. There were more than 300,000 computer-programming jobs in 1980. The number peaked above 700,000 during the dot-com boom of the early 2000s but employment opportunities have withered to about half that today. U.S. employment grew nearly 75% in that 45-year period, according to the Post.

What’s interesting is that article makes a classification decision I wasn’t expecting; specifically:

Computer programmers are different from software developers, who liaise between programmers and engineers and design bespoke solutions—a much more diverse set of responsibilities compared to programmers, who mostly carry out the coding work directly. Software development jobs are expected to grow 17% from 2023 to 2033, according to the Bureau of Labor Statistics. The bureau meanwhile projects about a 10% decline in computer programming employment opportunities from 2023 to 2033.

Let’s go with the distinction.

Why are programmers’ jobs disappearing? The write up has the answer:

There has been a 27.5% plummet in the 12-month average of computer-programming employment since about 2023—coinciding with OpenAI’s introduction of ChatGPT the year before. ChatGPT can handle coding tasks without a user needing more detailed knowledge of the code being written. The correlation between the decline of programmer jobs and the rise of AI tools signals to some experts that the burgeoning technology could begin to cost some coding experts their jobs.

Now experts are getting fired? Does that resonate with everyone? Experts.

There is an upside if one indulges in a willing suspension of disbelief. The write up says:

Programmers will be required to perform complicated tasks, Krishna argued, and AI can instead serve to eliminate the simpler, time-consuming tasks those programmers would once need to perform, which would increase productivity and subsequently company performance.

My question, “Did AI contribute to this article?” In my opinion, something is off. It might be dependent on the references to the Bureau of Labor Statistics and “real” newspapers as sources for the numbers. Would a high school debate teacher give the green light to the logic in categorizing and linking those heading for the termination guillotine and those who are on the path to carpet land. The use of AI hype as fact is interesting as well.

I am thrilled to be a dinobaby.

Stephen E Arnold, March 27, 2025

Next Page »

  • Archives

  • Recent Posts

  • Meta