FOGINT: A Shocking Assertion about Israeli Intelligence Before the October 2023 Attack
January 13, 2025
One of my colleagues alerted me to a new story in the Jerusalem Post. The article is “IDF Could’ve Stopped Oct. 7 by Monitoring Hamas’s Telegram, Researchers Say.” The title makes clear that this is an “after action” analysis. Everyone knows that thinking about the whys and wherefores right of bang is a safe exercise. Nevertheless, let’s look at what the Jerusalem Post reported on January 5, 2025.
First, this statement:
“These [Telegram] channels were neither secret nor hidden — they were open and accessible to all.” — Lt.-Col. (res.) Jonathan Dahoah-Halevi
Telegram puts up some “silent” barriers to prevent certain third parties from downloading active discussions in real time. I know of one Israeli cyber security firm which asserts that it monitors Telegram public channel messages. (I won’t ask why analysts at that firm did not raise an alarm or contact their former Israeli government employers with that information. That is a question I will sidestep.)
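For readers who wonder what “monitoring a public Telegram channel” amounts to in practice, here is a minimal sketch using the open source Telethon library. The API credentials and the channel name are placeholders I invented; this is not any firm’s actual method, and Telegram’s rate limits (those “silent” barriers) still apply to anyone pulling messages at scale.

```python
# Minimal sketch: watch a public Telegram channel with Telethon.
# The api_id, api_hash, and channel username below are placeholders,
# not real credentials or a real channel.
from telethon import TelegramClient, events

API_ID = 12345                     # placeholder; issued at my.telegram.org
API_HASH = "0123456789abcdef"      # placeholder
CHANNEL = "some_public_channel"    # hypothetical public channel username

client = TelegramClient("monitor_session", API_ID, API_HASH)

@client.on(events.NewMessage(chats=CHANNEL))
async def handle_post(event):
    # Print each new post with its timestamp; a real monitor would log,
    # translate, and score the text against keyword or threat lists.
    print(event.message.date, event.raw_text)

client.start()                     # prompts for a phone number / login code
client.run_until_disconnected()    # block and receive updates as they arrive
```

The point is not the code. It is that a public channel exposes its content to anyone with an account, which is exactly what the researchers quoted below assert.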
Second, the article reports:
These channels [public Telegram channels like Military Tactics] were neither secret nor hidden — they were open and accessible to all. The “Military Tactics” Telegram channel even shared professional content showcasing the organization’s level of preparedness and operational capabilities. During the critical hours before the attack, beginning at 12:20 a.m. on October 7, the channel posted a series of detailed messages that should have raised red flags, including: “We say to the Zionist enemy, [the operation] coming your way has never been experienced by anyone,” “There are many, many, many surprises,” “We swear by Allah, we will humiliate you and utterly destroy you,” and “The pure rifles are loaded, and your heads are the target.”
Third, I circled this statement:
However, Dahoah-Halevi further asserted that the warning signs appeared much earlier. As early as September 17, a message from the Al-Qassam Brigades claimed, “Expect a major security event soon.” The following day, on September 18, a direct threat was issued to residents of the Gaza border communities, stating, “Before it’s too late, flee and leave […] nothing will help you except escape.”
The attack did occur, and it had terrible consequences for the young people killed and wounded and for the Israeli cyber security industry, which some believe is one of the best in the world. The attack suggested that marketing rather than effectiveness created an impression at odds with reality.
What are the lessons one can take from this report? The FOGINT team will leave that to you to answer.
Stephen E Arnold, January 13, 2025
Super Humans Share Super Thoughts about Free Speech
January 13, 2025
Prepared by a still-alive dinobaby.
The Marvel comix have come to life. “Elon Musk Responds As Telegram CEO Makes Fun of Facebook Parent Meta Over Fact Checking” reports:
Elon Musk responded to a comment from Telegram CEO Pavel Durov, who made a playful jab at Meta over its recent decision to end fact checking on Facebook and Instagram. Durov posted about the shutdown of Meta’s fact checking program on X (formerly known as Twitter), saying that Telegram’s commitment to freedom of speech does not depend on the US electoral cycle.
The interaction among three modern Marvel heroes is interesting. Only Mark Zuckerberg, the founder and controlling force at Facebook (now Meta), is producing children with a spouse. Messrs. Musk and Durov are engaged in spawning children — presumably super comix characters — with multiple partners and operating as if each ruled a country. Mr. Musk has fathered a number of children. Mr. Durov allegedly has more than 100 children. The idea uniting these two larger-than-life characters is that they are super humans. Mr. Zuckerberg has a different approach, guided more by political expediency than a desire to churn out numerous baby Zucks.
Technology super heroes head toward a meeting of the United Nations to explain how the world will be working with their organizations. Thanks, Copilot. Good enough.
The article includes this statement from Mr. Durov:
I’m proud that Telegram has supported freedom of speech long before it became politically safe to do so. Our values don’t depend on US electoral cycles, said Durov in a post shared on X.
This is quite a statement. Mr. Durov blocked messages from the Ukrainian government to Russian users of Telegram. After being snared in the French judicial system, Mr. Durov has demonstrated a desire to cooperate with law enforcement. Information about Telegram users has been provided to law enforcement. Mr. Durov is confined to France as his lawyers work to secure his release. Mr. Durov has been learning more about French procedures and bureaucracy since August 2024. The wheels of justice do turn in France, probably less rapidly than the super human Pavel Durov wishes.
After Mr. Durov shared his observation about the Zuck’s willingness to embrace free speech on Twitter (now x.com), the super hero Elon Musk chose to respond. Taking time from posts designed to roil the political waters in Britain, Mr. Musk offered an ironic “Good for you” in response to Mr. Durov’s quip about the Zuck.
The question is, “Do these larger-than-life characters with significant personal fortunes and influential social media soap boxes support free speech?” The answer is unclear. From my vantage point in rural Kentucky, I perceive public relations or marketing output from these three individuals. My take is that Mr. Durov talks about free speech as he appears to cooperate with French law enforcement and possibly a nation-state like Russia. Mr. Musk has been characterized by some in the US as “President Musk.” The handle reflects Mr. Musk’s apparent influence on some of the policies of the incoming administration. Mr. Zuckerberg has been quick to contribute money to a recently elected candidate and even faster on the draw when it comes to dumping much of the expensive overhead of fact checking social media content.
The Times of India article is more about the global ambitions of three company leaders. Free speech could be a convenient way to continue to generate business, retain influence over information framing, and reinforce their roles as the 2025 incarnations of Spider-Man, Iron Man, and Hulk. After decades of inattention by regulators, the new super heroes may not be engaged in saving or preserving anything except their power and influence and cash flows.
Stephen E Arnold, January 13, 2025
AI Defined in an Arts and Crafts Setting No Less
January 13, 2025
Prepared by a still-alive dinobaby.
I was surprised to learn that an online design service (what I call arts and crafts) tackled a topic most online publications skip. The article “What Does AI Really Mean?” tries to define AI or smart software. I remember a somewhat confused and erratic college professor trying to define happiness. Wow, that was a wild and crazy lecture. (I think the person’s name was Dr. Chapman. I tip my ball cap with the SS8 logo on it to him.) The author of this essay is a Googler, so it must be outstanding, furthering the notion of quantum supremacy at Google.
What is AI? The write up says:
I hope this helped you better understand what those terms mean and the processes which encompass the term “AI”.
Okay, “helped you understand better.” What does the essay do to help me understand better? Hang on to your SS8 ball cap. The author briefly defines these buzzwords (a small illustrative sketch follows the list):
- Data as coordinates
- Querying per approximation
- Language models both large and small
- Fine “Tunning” (Isn’t that supposed to be tuning?)
- Enhancing context with information, including grounded generation
- Embedding.
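To make two of those buzzwords a bit less abstract, here is a toy sketch of “data as coordinates” and “querying per approximation.” The vectors and labels are my own illustration, not anything from the cited essay; real systems use learned embeddings and approximate nearest-neighbor indexes rather than this brute-force cosine search.

```python
# Toy sketch: texts as coordinates (vectors) and a query answered by
# finding the nearest vector. The vectors below are made up for
# illustration; a real system would get them from an embedding model.
import numpy as np

docs = {
    "resetting a password": np.array([0.9, 0.1, 0.0]),
    "changing a billing plan": np.array([0.1, 0.9, 0.2]),
    "exporting account data": np.array([0.2, 0.3, 0.9]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means the vectors point the same way.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = np.array([0.85, 0.15, 0.05])   # stands in for an embedded question
best = max(docs, key=lambda name: cosine(query, docs[name]))
print(best)   # -> resetting a password
```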
For me, a list of buzzwords is not a definition. (At least the hapless Dr. Chapman tried to provide concrete examples and references to his own experience with happiness, which as I recall eluded him.)
The “definition” jumps to a section called “Let’s build.” The author concludes the essay with:
I hope this helped you better understand what those terms mean and the processes which encompass the term “AI”. This merely scratches the surface of complexity, though. We still need to talk about AI Agents and how all these approaches intertwine to create richer experiences. Perhaps we can do that in a later article — let me know in the comments if you’d like that!
That’s it. The Googler has, from his point of view, defined AI. As Holden Caulfield in The Catcher in the Rye said:
“I can’t explain what I mean. And even if I could, I’m not sure I’d feel like it.”
Bingo.
Stephen E Arnold, January 13, 2025
Oh, Oh! Silicon Valley Hype Minimizes Risk. Who Knew?
January 10, 2025
This is an official dinobaby post. No smart software involved in this blog post.
I read “Silicon Valley Stifled the AI Doom Movement in 2024.” I must admit I was surprised that one of the cheerleaders for Silicon Valley is disclosing something absolutely no one knew. I mean unregulated monopolies, the “Puff the Magic Dragon” strafing teens, and the vulture capitalists slavering over the corpses of once-thriving small and mid-sized businesses. Hey, I thought that “progress” myth was real. I thought technology only makes life better. Now I read that “Silicon Valley” wanted only good news about smart software. Keep in mind that this is software which outputs hallucinations, makes decisions about medical care for people, and monitors the clicks and location of everyone with a mobile device or a geotracker.
The write up reminded me that ace entrepreneur / venture professional Marc Andreessen said:
“The era of Artificial Intelligence is here, and boy are people freaking out. Fortunately, I am here to bring the good news: AI will not destroy the world, and in fact may save it,” said Andreessen in the essay. In his conclusion, Andreessen gave a convenient solution to our AI fears: move fast and break things – basically the same ideology that has defined every other 21st century technology (and their attendant problems). He argued that Big Tech companies and startups should be allowed to build AI as fast and aggressively as possible, with few to no regulatory barriers. This would ensure AI does not fall into the hands of a few powerful companies or governments, and would allow America to compete effectively with China, he said.
What publications touted Mr. Andreessen’s vision? Answer: Lots.
Regulate smart software? Nope. From Connecticut’s effort to the US government, smart software regulation went nowhere. The reasons included, in my opinion:
- A chance to make a buck, well, lots of bucks
- Opportunities to foist “smart software” plus its inherent ability to make up stuff on corporate sheep
- A desire to reinvent “dumb” processes like figuring out how to push buttons to create addiction to online gambling, reducing costs by eliminating inefficient humans, and using stupid weapons.
Where are we now? A pillar of the Silicon Valley media ecosystem writes about the possible manipulation of information to make smart software into a Care Bear. Cuddly. Harmless. Squeezable. Yummy too.
The write up concludes without one hint of the contrast between the AI hype and the viewpoints of people who think that the technology of AI is immature but fumbling forward to stick its baby finger in a wall socket. I noted this concluding statement in the write up:
Calling AI “tremendously safe” and attempts to regulate it “dumb” is something of an oversimplification. For example, Character.AI – a startup a16z has invested in – is currently being sued and investigated over child safety concerns. In one active lawsuit, a 14-year-old Florida boy killed himself after allegedly confiding his suicidal thoughts to a Character.AI chatbot that he had romantic and sexual chats with. This case shows how our society has to prepare for new types of risks around AI that may have sounded ridiculous just a few years ago. There are more bills floating around that address long-term AI risk – including one just introduced at the federal level by Senator Mitt Romney. But now, it seems AI doomers will be fighting an uphill battle in 2025.
But don’t worry. Open source AI provides a level playing field for [a] adversaries of the US, [b] bad actors who use smart software to compromise Swiss cheese systems, and [c] those who want to manipulate people on a grand scale. Will the “Silicon Valley” media give equal time to those who don’t see technology as a benign or net positive? Are you kidding? Oh, aren’t those smart drones with kinetic devices just fantastic?
Stephen E Arnold, January 10, 2025
The Brain Rot Thing: The 78 Wax Record Is Stuck Again
January 10, 2025
This is an official dinobaby post.
I read again about brain rot. I get it. Young kids play with a mobile phone. They get into social media. They watch TikTok. They discover the rich, rewarding world of Telegram online gambling. These folks don’t care about reading. Period. I get it.
But the Financial Times wants me to really get it. “Social Media, Brain Rot and the Slow Death of Reading” says:
Social media is designed to hijack our attention with stimulation and validation in a way that makes it hard for the technology of the page to compete.
This is news? Well, what about this statement:
The easy dopamine hit of social media can make reading feel more effortful by comparison. But the rewards are worth the extra effort: regular readers report higher wellbeing and life satisfaction, benefiting from improved sleep, focus, connection and creativity. While just six minutes of reading has been shown to reduce stress levels by two-thirds, deep reading offers additional cognitive rewards of critical thinking, empathy and self-reflection.
Okay, now tell that to the people in line at the grocery store or the kids in a high school class. Guess what? The joy of reading is not part of the warp and woof of 2025 life.
The news flash is that traditional media like the Financial Times long for the time when everyone read. Excuse me. When was that time? People read in school so they can get out of school and not read. Books still sell, but the avid readers are becoming dinobabies. Most of the dinobabies I know don’t read too much. My wife’s bridge club reads popular novels, but nonfiction is a non-starter.
What does the FT want people to do? Here’s a clue:
Even if the TikTok ban goes ahead in the US, other platforms will pop up to replace it. So in 2025, why not replace the phone on your bedside table with a book? Just an hour a day clawed back from screen time adds up to about a book a week, placing you among an elite top one per cent of readers. Melville (and a Hula-Hoop) are optional.
Lamenting and recommending is not going to change what the flows of electronic information have done. There are more insidious effects racing down the information highway. Those who will be happiest will be those who live in ignorance. People with some knowledge will be deeply unhappy.
Will the FT want dinosaurs to roam again? Sure. Will the FT write about them? Of course. Will the impassioned words change what’s happened and will happen? Nope. Get over it, please. You may as well long for the days when Madame Tussaud’s Wax Museum and you were part of the same company.
Stephen E Arnold, January 10, 2025
Social Media Change: Stop the Decay! Ouch! Stop!
January 10, 2025
This is an official dinobaby post. No smart software involved in this blog post.
I learned a new term: Platform Decay. I associated the phrase with Tooth Decay.
The Techspot article “Meta Wants to Fill Its Social Platforms with AI-Generated Bots” asserts:
Meta is actively working to transform its social media platforms into spaces where AI bots interact with each other. Over the next few years, the company formerly known as Facebook aims to integrate AI technology to boost “engagement” with its three billion real, human users. This could either be a revolution or just another disastrously misguided idea, like the previously dismissed “metaverse” VR ecosystem.
I thought Facebook was about people posting text and images on Instagram and shooting “secure” messages back and forth via WhatsApp. Facebook is a service I perceive as supporting a platform for ecommerce excitement and allowing grandparents to see the grandchildren.
Now I am updated. The write up explains:
Meta is currently developing several AI products, including a service designed to help users create AI bots on Instagram and Facebook. These bots could clone users’ personalities and interact with other (non-bot) users on the network. The company hopes to attract younger audiences, who are apparently going crazy over AI these days.
I learned that there is a downside to this bot-topia; specifically:
Critics of this AI-filled dystopia warn about the risks related to the “weaponization” of AI-generated content. Becky Owen, innovation officer at creative agency Billion Dollar Boy and former head of Meta’s creator team, said fake AI accounts could easily be used to amplify false narratives if robust safeguards are not enforced on social media.
What’s interesting to me is that one of Meta / Zuckbook’s competitors is not going in this direction. Telegram is chasing crypto. To be fair, the Zuck is not under the control of a nation state the way Pavel Durov appears to be. Mr. Durov enjoys the ministrations of the French judiciary, and his minions are cutting deals, integrating online gambling services like CryptoCasino.com, and training developers in Vancouver and other major cities to build for the Telegram platform. (I think of Telegram as the framework for building super apps for online crime, but I am a dinobaby and hopelessly out of step with social media.)
Which strategy will win in 2025? Will the Zuck get richer and dominate the social bot scene and attract millions more new users? Will Telegram grow beyond one billion users and help undermine the US financial system while delivering crypto alternatives for traditional banking services? I don’t know.
I am not sure the phrase “platform decay” captures what the Zuck is doing. I know that Telegram is not exactly decaying while its founder is confined to France, good food, and French red tape.
I think the article is trying to explain that the good old Facebook is changing. What’s decaying are the features and digital hooks that made the Zuck a big dog.
Net net: These platforms are making an attempt to adapt and avoid the MySpace problem: no users. Get real, Techspot. Longing for the past is a poor use of one’s time. Adapt or go away. That’s this dinobaby’s advice.
Stephen E Arnold, January 10, 2025
Meta and Zuck Make Free Speech News
January 9, 2025
Techmeme makes clear that Meta and its charming leader are important and “real” news. I checked the splash page of the online news service and learned:
- Zuckerberg is “pretending” about free speech. You can read that legal / journalistic explanation in TechDirt
- Mastodon, another social media service, will filter some Meta content. (Isn’t that censorship?) Read that TechCrunch story here.
- The truth outfit — Thomson Reuters — reports that the European Union says, “Hey, we don’t institutionalize censorship!” Top up your info tank at this link.
- The paywalled orange newspaper asserts that in 2023 Meta did the “give me money” approach to business, letting some “top advertisers” call ad placement shots. The FT discloses what may be non-public information too!
- The Bezos journalistic enterprise, another for-fee operation which may have some staff issues, points out that the US of A and Europe may not see eye-to-eye when it comes to filtering content.
Here’s what the Zuck-dense splash page looked like at 5:45 am on January 9, 2025:
Several observations:
- The message about what is permissible and what is not permissible across the Zuckerberg properties is not clear
- The gestalt of the cited stories is that Meta is responding to and taking advantage of an opportunity to define “free speech” so it conforms with the expectations of a certain person of influence in the United States
- The decisions illustrate a certain opportunism with benefits in the management think tank at the Zuck operational headquarters: reduce some costs, generate buzz, and dump the baggage of trying to establish and maintain editorial policies that get in the way of generating cash or “free” money.
Net net: The difference between Meta’s approach to innovation and that of an organization like Telegram becomes increasingly clear. Focusing on Meta could result in missing important Telegram signals.
Stephen E Arnold, January 9, 2025
GitLab Identifies a Sooty Pot and Does Not Offer a Fix
January 9, 2025
This is an official dinobaby post. No smart software involved in this blog post.
GitLab’s Sabrina Farmer is a sharp-thinking person. Her “Three Software Development Challenges Slowing AI Progress” articulates an issue often ignored or just unknown. Specifically, according to her:
AI is becoming an increasingly critical component in software development. However, as is the case when implementing any new tool, there are potential growing pains that may make the transition to AI-powered software development more challenging.
Ms. Farmer is being kind and polite. I think she is suggesting that the nest with the AI eggs from the fund-raising golden goose has become untidy. Perhaps, I should use the word “unseemly”?
She points out three challenges which I interpret as the equivalent of one of those unsolved math problems like cracking the Riemann Hypothesis or the Collatz Conjecture. These are:
- AI training. Yeah, marketers write about smart software. But a relatively small number of people fiddle with the knobs and dials on the training methods and the rat’s nests of computational layers that make life easy for an eighth grader writing an essay about Washington’s alleged crossing of the Delaware River whilst standing up in a boat rowed by hearty, cheerful lads. Big demand, lots of pretenders, and very few 10X coders and thinkers are available. AI Marketers? A surplus because math and physics are hard and art history and social science are somewhat less demanding on today’s thumb typers.
- Tools, lots of tools. Who has time to keep track of every “new” piece of smart software tooling? I gave up as the hyperbole got underway in early 2023. When my team needs to do something specific, they look / hunt for possibilities. Testing is required because smart software often gets things wrong. Some call this “innovation.” I call it evidence of the proliferation of flawed or cute software. One cannot machine titanium with lousy tools.
- Management measurements. Give me a break, Ms. Farmer. Managers are often evidence of the Peter Principle, or they are accountants or lawyers. How can one measure what one does not use, understand, or create? Those chasing smart software are not making spindles for a wooden staircase. The task of creating smart software that has a shot at producing money is neither art nor science. It is a continuous process of seeing what works, fiddling, and fumbling. You want to measure this? Good luck, although blue chip consultants will gladly create a slide deck to show you the ropes and then churn out a spectacular invoice for professional services.
One question: Is GitLab part of the problem or part of the solution?
Stephen E Arnold, January 9, 2025
AI Outfit Pitches Anti Human Message
January 9, 2025
AI startup Artisan thought it could capture attention by telling companies to get rid of human workers and use its software instead. It was right. Gizmodo reports, “AI Firm’s ‘Stop Hiring Humans’ Billboard Campaign Sparks Outrage.” The firm plastered its provocative messaging across San Francisco. Writer Lucas Ropek reports:
“The company, which is backed by startup accelerator Y-Combinator, sells what it calls ‘AI Employees’ or ‘Artisans.’ What the company actually sells is software designed to assist with customer service and sales workflow. The company appears to have done an internal pow-wow and decided that the most effective way to promote its relatively mundane product was to fund an ad campaign heralding the end of the human age. Writing about the ad campaign, local outlet SFGate notes that the posters—which are strewn all over the city—include plugs like the following:
‘Artisans won’t complain about work-life balance’
‘Artisan’s Zoom cameras will never ‘not be working’ today.’
‘Hire Artisans, not humans.’
‘The era of AI employees is here.'”
The write-up points to an interview with SFGate in which CEO Jaspar Carmichael-Jack states the ad campaign was designed to “draw eyes.” Mission accomplished. (And is it just me, or does that name belong in a pirate movie?) Though Ropek acknowledges his part in drawing those eyes, he also takes this chance to vent about AI and big tech in general. He writes:
“It is Carmichael-Jack’s admission that his billboards are ‘dystopian’—just like the product he’s selling—that gets to the heart of what is so [messed] up about the whole thing. It’s obvious that Silicon Valley’s code monkeys now embrace a fatalistic bent of history towards the Bladerunner-style hellscape their market imperatives are driving us.”
Like Artisan’s billboards, Ropek pulls no punches. Located in San Francisco, Artisan was launched in 2023. Founders hail from the likes of Stanford, Oxford, Meta, and IBM. Will the firm find a way to make its next outreach even more outrageous?
Cynthia Murrell, January 9, 2025
Be Secure Like a Journalist
January 9, 2025
This is an official dinobaby post.
If you want to be secure like a journalist, Freedom.press has a how-to for you. The write up “The 2025 Journalist’s Digital Security Checklist” provides text combined with a sort of interactive design. For example, if you want to know more about an item on a checklist, just click the plus sign and the recommendations appear.
There are several sections in the document. Each addresses a specific security vector or issue. These are:
- Assess your risk
- Set up your mobile to be “secure”
- Protect your mobile from unwanted access
- Secure your communication channels
- Guard your documents from harm
- Manage your online profile
- Protect your research whilst browsing
- Avoid getting hacked
- Set up secure tip lines.
Most of the suggestions are useful. However, I would strongly recommend that any mobile phone user download this presentation from the December 2024 Chaos Computer Club meeting held after Christmas. There are some other suggestions which may be of interest to journalists, but these regard specific software such as Google’s Chrome browser, Apple’s wonderful iCloud, and Microsoft’s oh-so-secure operating system.
The best way for a journalist to be secure is to be a “ghost.” That implies some type of zero profile identity, burner phones, and other specific operational security methods. These, however, are likely to land a “real” journalist in hot water either with an employer or an outfit like a professional organization. A clever journalist would gain access to sock puppet control software in order to manage a number of false personas at one time. Plus, there are old chestnuts like certain Dark Web services. Are these types of procedures truly secure?
In my experience, the only secure computing device is one that is unplugged in a locked room. The only secure information is that which one knows and has not written down or shared with anyone. Every time I meet a journalist unaware of specialized tools and services for law enforcement or intelligence professionals I know I can make that person squirm if I describe one of the hundreds of services about which journalists know nothing.
For starters, watch the CCC video. Another tip: Choose the country in which certain information is published with your name identifying you as an author carefully. Very carefully.
Stephen E Arnold, January 9, 2025