A Failure Retrospective
February 3, 2025
Every year has its tech failures, and some of them join the zeitgeist as cultural phenomena like Windows Vista, Windows Me, Apple’s Pippin game console, chatbots, etc. PC Mag runs down the flops in “Yikes: Breaking Down the 10 Biggest Tech Fails of 2024.” The list starts with Intel’s horrible year, marked by its ousted CEO and poor chip performance. It follows up with the Salt Typhoon hack, which proved (not that we didn’t already know it from TikTok) that China is spying on every US citizen, with a focus on bigwigs.
National Public Data lost 272 million social security numbers to a hacker. That was a great summer day for the hacker, but the summer travel season became a nightmare when a faulty CrowdStrike kernel update grounded over 2,700 flights and practically locked down the US borders. Microsoft’s Recall, an AI search tool that takes snapshots of user activity so it can be recalled later, was another concern. What if passwords and other sensitive information were recorded?
The fabulous Internet Archive was hacked and taken down by a bad actor protesting the Israel-Gaza conflict, which makes us worry about preserving Internet and other important media history. Rabbit and Humane released AI-powered hardware that was supposed to be a hands-free way to use a digital assistant, but both devices flopped. JuiceBox ended software support for its EV chargers, and OpenAI’s Voice Mode feature allegedly imitated Scarlett Johansson’s voice. She threatened legal action.
The worst of the worst is this:
“Days after he announced plans to acquire Twitter in 2022, Elon Musk argued that the platform needed to be “politically neutral” in order for it to “deserve public trust.” This approach, he said, “effectively means upsetting the far right and the far left equally.” In March 2024, he also pledged to not donate to either US presidential candidate, but by July, he’d changed his tune dramatically, swapping neutrality for MAGA hats. “If we want to preserve freedom and a meritocracy in America, then Trump must win,” Musk tweeted in September. He seized the @America X handle to promote Trump, donated millions to his campaign, shared doctored and misleading clips of VP Kamala Harris, and is now working closely with the president-elect on an effort to cut government spending, which is most certainly a conflict of interest given his government contracts. Some have even suggested that he become Speaker of the House since you don’t have to be a member of Congress to hold that position. The shift sent many X users to alternatives like BlueSky, Threads, and Mastodon in the days after the US election.”
It does not matter what Musk’s political beliefs are. The failure is promising political neutrality and then throwing the platform’s weight behind one side.
Whitney Grace, February 3, 2025
National Security: A Last Minute Job?
January 20, 2025
On its way out the door, the Biden administration enacted a prudent policy. Whether it will persist long under the new administration is anyone’s guess. The White House Briefing Room released a “Fact Sheet: Ensuring U.S. Security and Economic Strength in the Age of Artificial Intelligence.” The rule establishes six key mechanisms governing the diffusion of U.S. technology. The statement specifies:
“In the wrong hands, powerful AI systems have the potential to exacerbate significant national security risks, including by enabling the development of weapons of mass destruction, supporting powerful offensive cyber operations, and aiding human rights abuses, such as mass surveillance. Today, countries of concern actively employ AI – including U.S.-made AI – in this way, and seek to undermine U.S. AI leadership. To enhance U.S. national security and economic strength, it is essential that we do not offshore this critical technology and that the world’s AI runs on American rails. It is important to work with AI companies and foreign governments to put in place critical security and trust standards as they build out their AI ecosystems. To strengthen U.S. security and economic strength, the Biden-Harris Administration today is releasing an Interim Final Rule on Artificial Intelligence Diffusion. It streamlines licensing hurdles for both large and small chip orders, bolsters U.S. AI leadership, and provides clarity to allied and partner nations about how they can benefit from AI. It builds on previous chip controls by thwarting smuggling, closing other loopholes, and raising AI security standards.”
The six mechanisms specify 18 key allies to whom no restrictions apply and create a couple of trusted statuses other entities can attain. They also support cooperation between governments on export controls, clean energy, and technology security. As for “countries of concern,” the rule seeks to ensure certain advanced technologies do not make it into their hands. See the briefing for more details.
The measures add to previous security provisions, including the October 2022 and October 2023 chip controls. We are assured they were informed by conversations with stakeholders, bipartisan members of Congress, industry representatives, and foreign allies over the previous 10 months. Sounds like it was a lot of work. Let us hope it does not soon become wasted effort.
Cynthia Murrell, January 20, 2025
Racers, Start Your Workaround Engines
January 16, 2025
Prepared by a still-alive dinobaby.
Companies are now prohibited from sending our personal information to specific, hostile nations. Because tech firms must be forced to exercise common sense, apparently. TechRadar reports, "US Government Says Companies Are No Longer Allowed to Send Bulk Data to these Nations." The restriction is the final step in implementing Executive Order 14117, which President Biden signed nearly a year ago. It is to take effect at the beginning of April.
The rule names six countries the DoJ says have “engaged in a long-term pattern or serious instances of conduct significantly adverse to the national security of the United States or the security and safety of U.S. persons”: China, Cuba, Iran, North Korea, Russia, and Venezuela. Writer Benedict Collins tells us:
"The Executive Order is aimed at preventing countries generally hostile to the US from using the data of US citizens in cyber espionage and influence campaigns, as well as building profiles of US citizens to be used in social engineering, phishing, blackmail, and identity theft campaigns. The final rule sets out the threshold for transactions of data that carry an unacceptable level of risk, alongside the different classes of transactions that are prohibited, restricted or exempt. Companies that violate the order will face civil and criminal penalties."
The restriction covers geolocation data; personal identifiers like social security numbers; biometric identifiers; personal health data; personal financial information; and data on our very cells. The agency clarifies some activities that are not prohibited:
"The DoJ also outlined the final rule does not apply to ‘medical, health, or science research or the development and marketing of new drugs’ and ‘also does not broadly prohibit U.S. persons from engaging in commercial transactions, including exchanging financial and other data as part of the sale of commercial goods and services with countries of concern or covered persons, or impose measures aimed at a broader decoupling of the substantial consumer, economic, scientific, and trade relationships that the United States has with other countries.’"
So, outside those exceptions, the idea is that US firms will not be sending our personal data to these hostile countries. That is the theory. However, organizations gather data from mobile phone apps, from exfiltrated mobile phone records, and from “gray” data aggregators. How does one find the entities providing conduits for these information outflows? A bit of sleuthing on Telegram or a few searches on Dark Web search engines provide a number of contact points. Are the data reliable, accurate, and timely? Bad data are plentiful, but by acquiring and assembling information, bad actors can still send out their messages. Volume and human nature do the work.
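For the curious, the rule’s logic reduces to a category-and-threshold screen: is the data a covered category, is the destination a country of concern, and does the volume cross a bulk threshold? Below is a minimal sketch in Python of that screen. The data categories mirror the rule as described above, but the numeric thresholds, the country handling, and the function names are illustrative assumptions, not the DoJ’s actual figures.

```python
# Illustrative sketch only: categories follow the rule described above,
# but the thresholds and names below are assumptions, not the DoJ's actual numbers.
COUNTRIES_OF_CONCERN = {"China", "Cuba", "Iran", "North Korea", "Russia", "Venezuela"}

# Hypothetical per-category record-count thresholds for a "bulk" transfer.
BULK_THRESHOLDS = {
    "geolocation": 50_000,
    "personal_identifiers": 50_000,
    "biometric": 5_000,
    "health": 5_000,
    "financial": 5_000,
    "genomic": 500,
}

def screen_transfer(destination: str, category: str, record_count: int) -> str:
    """Return a rough compliance verdict for a proposed data transfer."""
    if destination not in COUNTRIES_OF_CONCERN:
        return "outside the scope of this rule"
    threshold = BULK_THRESHOLDS.get(category)
    if threshold is None:
        return "category not covered"
    if record_count >= threshold:
        return "prohibited or restricted: bulk covered data to a country of concern"
    return "below bulk threshold: check other restrictions"

# Example: a large batch of geolocation records destined for a listed country.
print(screen_transfer("China", "geolocation", 75_000))
```

Whether any screen like this catches the gray-market data flows described above is, of course, another matter.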
Cynthia Murrell, January 16, 2025
Be Secure Like a Journalist
January 9, 2025
This is an official dinobaby post.
If you want to be secure like a journalist, Freedom.press has a how-to for you. The write up “The 2025 Journalist’s Digital Security Checklist” provides text combined with a sort of interactive design. For example, if you want to know more about an item on a checklist, just click the plus sign and the recommendations appear.
There are several sections in the document. Each addresses a specific security vector or issue. These are:
- Assess your risk
- Set up your mobile to be “secure”
- Protect your mobile from unwanted access
- Secure your communication channels
- Guard your documents from harm
- Manage your online profile
- Protect your research whilst browsing
- Avoid getting hacked
- Set up secure tip lines
Most of the suggestions are useful. However, I would strongly recommend that any mobile phone user download this presentation from the December 2024 Chaos Computer Club meeting held after Christmas. There are some other suggestions which may be of interest to journalists, but these regard specific software such as Google’s Chrome browser, Apple’s wonderful iCloud, and Microsoft’s oh-so-secure operating system.
The best way for a journalist to be secure is to be a “ghost.” That implies some type of zero-profile identity, burner phones, and other specific operational security methods. These, however, are likely to land a “real” journalist in hot water, either with an employer or with an outfit like a professional organization. A clever journalist would gain access to sock puppet control software in order to manage a number of false personas at one time. Plus, there are old chestnuts like certain Dark Web services. Are these types of procedures truly secure?
In my experience, the only secure computing device is one that is unplugged in a locked room. The only secure information is that which one knows and has not written down or shared with anyone. Every time I meet a journalist unaware of specialized tools and services for law enforcement or intelligence professionals I know I can make that person squirm if I describe one of the hundreds of services about which journalists know nothing.
For starters, watch the CCC video. Another tip: Choose the country in which certain information is published with your name identifying you as an author carefully. Very carefully.
Stephen E Arnold, January 9, 2025
Insider Threats: More Than Threat Reports and Cumbersome Cyber Systems Are Needed
November 13, 2024
Sorry to disappoint you, but this blog post is written by a dumb humanoid. The art? We used MidJourney.
With actionable knowledge becoming increasingly concentrated, is it a surprise that bad actors go where the information is? One would think that organizations with high-value information would be more vigilant when it comes to hiring people from other countries, using faceless gig worker systems, or relying on an AI-infused résumé on LinkedIn. (Yep, that is a Microsoft entity.)
Thanks, OpenAI. Good enough.
The fact is that big technology outfits are supremely confident in their ability to do no wrong. Billions in revenue will boost one’s confidence in a firm’s management acumen. The UK newspaper Telegraph published “Why Chinese Spies Are Sending a Chill Through Silicon Valley.”
The write up says:
In recent years the US government has charged individuals with stealing technology from companies including Tesla, Apple and IBM and seeking to transfer it to China, often successfully. Last year, the intelligence chiefs of the “Five Eyes” nations clubbed together at Stanford University – the cradle of Silicon Valley innovation – to warn technology companies that they are increasingly under threat.
Did the technology outfits get the message?
The Telegraph article adds:
Beijing’s mission to acquire cutting edge tech has been given greater urgency by strict US export controls, which have cut off China’s supply of advanced microchips and artificial intelligence systems. Ding, the former Google employee, is accused of stealing blueprints for the company’s AI chips. This has raised suspicions that the technology is being obtained illegally. US officials recently launched an investigation into how advanced chips had made it into a phone manufactured by China’s Huawei, amid concerns it is illegally bypassing a volley of American sanctions. Huawei has denied the claims.
With some non-US engineers and professionals possessing skills needed by the high-flying outfits already aloft, or by those still working in their hangars to launch a breakthrough product or service, US companies go through human resources and interview processes. However, many hires are made because a body is needed, someone knows the candidate, or the applicant is willing to work for less money than an equivalent person with a security clearance, for instance.
The result is that most knowledge centric organizations have zero idea about the security of their information. Remember Edward Snowden? He was visible. Others are not.
Let me share an anecdote without mentioning names or specific countries and companies.
A business colleague hailed from an Asian country. He maintained close ties with his family in his country of origin. He had a couple of cousins who worked in the US. I was visiting his company, which provided computer equipment to the firm at which I was working in Silicon Valley. He explained to me that a certain “new” technology was going to be released later in the year. He gave me an overview of this “secret” project. I asked him where the data originated. He looked at me and said, “My cousin. I even got a demo and saw the prototype.”
I want to point out that this was not a hire. The information flowed along family lines. The sharing of information was okay because of the closeness of the family. I later learned the information was secret. I realized that an HR interview process is not going to keep secrets within an organization.
I ask the companies selling cyber security software with an insider threat identification capability, “How do you deal with family or high-school relationship information channels?”
The answer? Blank looks.
The Telegraph’s warning notwithstanding, most of the whiz-bang HR methods and most of the cyber security systems don’t work against this. Cultural blind spots are a problem. Maybe smart software will prevent knowledge leakage. I think that some hard thinking needs to be applied to this problem. The Telegraph write up does not tackle the job. I would assert that most organizations have fooled themselves. Billions and arrogance have interesting consequences.
Stephen E Arnold, November 13, 2024
Secure Phones Keep Appearing
October 31, 2024
The KDE community has developed an open source interface for mobile devices called Plasma Mobile. It allegedly turns any phone into a virtual fortress, promising a “privacy-respecting, open source and secure phone ecosystem.” This project is based on the original Plasma for desktops, an environment focused on security and flexibility. As with many open-source projects, Plasma Mobile is an imperfect work in progress. We learn:
“A pragmatic approach is taken that is inclusive to software regardless of toolkit, giving users the power to choose whichever software they want to use on their device. … Plasma Mobile is packaged in multiple distribution repositories, and so it can be installed on regular x86 based devices for testing. Have an old Android device? postmarketOS, is a project aiming to bring Linux to phones and offers Plasma Mobile as an available interface for the devices it supports. You can see the list of supported devices here, but on any device outside the main and community categories your mileage may vary. Some supported devices include the OnePlus 6, Pixel 3a and PinePhone. The interface is using KWin over Wayland and is now mostly stable, albeit a little rough around the edges in some areas. A subset of the normal KDE Plasma features are available, including widgets and activities, both of which are integrated into the Plasma Mobile UI. This makes it possible to use and develop for Plasma Mobile on your desktop/laptop. We aim to provide an experience (with both the shell and apps) that can provide a basic smartphone experience. This has mostly been accomplished, but we continue to work on improving shell stability and telephony support. You can find a list of mobile friendly KDE applications here. Of course, any Linux-based applications can also be used in Plasma Mobile.”
KDE states its software is “for everyone, from kids to grandparents and from professionals to hobbyists.” However, it is clear that being an IT professional would certainly help. Is Plasma Mobile as secure as they claim? Time will tell.
Cynthia Murrell, October 31, 2024
Microsoft Security: A World First
September 30, 2024
This essay is the work of a dumb dinobaby. No smart software required.
After the somewhat critical comments of the chief information security officer for the US, Microsoft said it would do security better. “Secure Future Initiative” is a 25-page document which contains some interesting comments. Let’s look at a handful.
Some bad actors just go where the pickings are the easiest. Thanks, MSFT Copilot. Good enough.
On page 2, I noted the record Microsoft claims to have set:
Our engineering teams quickly dedicated the equivalent of 34,000 full-time engineers to address the highest priority security tasks—the largest cybersecurity engineering project in history.
Microsoft is a large software company. It has large security issues. Therefore, the company has undertaken the “largest cybersecurity engineering project in history.” That’s great for the Guinness Book of World Records. The question is, “Why?” The answer, it seems to me, is that Microsoft did “good enough” security, and the US government’s report said, in effect, “Nope. Not good enough.” Hence, a big and expensive series of changes. Have the changes been tested, or have unexpected security issues been introduced into the sprawl of Microsoft software? Another question from this dinobaby: Can a big company doing “good enough” security implement fixes to remediate “the highest priority security tasks”? Companies have difficulty changing certain work practices. Can “good enough” methods do the job?
On page 3:
Security added as a core priority for all employees, measured against all performance reviews. Microsoft’s senior leadership team’s compensation is now tied to security performance
Compensation is linked to security as a “core priority.” I am not sure what making something a “core priority” means, particularly when the organization has implemented security systems and methods which have been found wanting. When the US government gives a bad report card, one forms an impression of a fairly deep hole which needs to be filled with functional, reliable bits. Declaring a “core priority” does not automatically produce secure software from cloud to desktop.
On page 5:
To enhance governance, we have established a new Cybersecurity Governance Council…
The creation of a council, the assignment of security responsibilities to some executives, and the hiring of a few others mean to me:
- Meetings and delays
- Adding duties may translate to other issues
- How much will these remediating processes cost?
Microsoft may be too big to change its culture in a timely manner. The time required for a council to enhance governance means fixing security problems will not be quick. Even with additional time, coordinating “the equivalent of 34,000 full time engineers” may be a project management task of more than modest proportions.
On page 7:
Secure by design
Quite a subhead. How can Microsoft’s sweep of legacy and new products be made secure by design when these products have been shown to be insecure?
On page 10:
Our strategy for delivering enduring compliance with the standard is to identify how we will Start Right, Stay Right, and Get Right for each standard, which are then driven programmatically through dashboard driven reviews.
The alliteration is notable. However, what is “right”? What happens when, midway through fixing existing issues and adhering to a “standard,” that “standard” changes? The complexity of the management process and of getting something “right” is like an example from a Santa Fe Institute book on complexity. The reality of addressing known security issues while conforming to standards which may change is interesting to contemplate. Words are great, but remediating what’s wrong in a dynamic and very complicated series of dependent services is likely to be a challenge. Bad actors will quickly probe for new issues. Generally speaking, bad actors find faults and exploit them. Thus, Microsoft will find itself in a troublesome mode: permanent reaction to previously unknown and new security issues.
On page 11, the security manifesto launches into “pillars.” I think the idea is that good security is built upon strong foundations. But when remediating “as is” code as well as legacy code, how long will the design, engineering, and construction of the pillars take? Months, years, decades, or multiple decades? The US CISO report card may not apply to certain time scales; for instance, big government contracts. Pillars are ideas.
Let’s look at one:
The monitor and detect threats pillar focuses on ensuring that all assets within Microsoft production infrastructure and services are emitting security logs in a standardized format that are accessible from a centralized data system for both effective threat hunting/investigation and monitoring purposes. This pillar also emphasizes the development of robust detection capabilities and processes to rapidly identify and respond to any anomalous access, behavior, and configuration.
The reality of today’s world is that security issues can arise from insiders. Outside threats seem to be identified each week. However, different cyber security firms identify and analyze different security issues. No one cyber security company delivers 100 percent foolproof threat identification. “Logs” are great; however, Microsoft used to charge for making a logging function available to a customer. Now, more logs. The problem is that logs help identify a breach; that is, a previously unknown vulnerability is exploited, or an old vulnerability makes its way into a Microsoft system by a user action. How can a company which received a poor report card from the US government become the firm with a threat detection system equivalent to products now available from established vendors? The recent CrowdStrike misstep illustrates how the Microsoft culture created the opportunity for one procedural mistake at CrowdStrike to have such impact. The words are nice, but I am not that confident in Microsoft’s ability to build this pillar. Microsoft may have to punt, buy several competitive systems, and deploy them like mercenaries to protect the unmotivated Roman citizens in a century.
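Stripped of the manifesto language, the “monitor and detect threats” pillar quoted above describes a familiar engineering pattern: every service emits security events in one standardized format to one central store so analysts can hunt across all of them. Here is a minimal sketch of that pattern in Python; the field names, the collector endpoint, and the helper functions are illustrative assumptions, not Microsoft’s actual schema or pipeline.

```python
# Minimal sketch of standardized security logging to a central collector.
# Field names, endpoint URL, and helper names are illustrative assumptions,
# not Microsoft's actual schema or pipeline.
import json
import time
import urllib.request

COLLECTOR_URL = "https://logs.example.internal/ingest"  # hypothetical endpoint

def emit_security_event(source: str, action: str, principal: str, outcome: str) -> bytes:
    """Build one security event in a single standardized JSON shape."""
    event = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "source": source,        # which service or asset produced the event
        "action": action,        # what happened, e.g. "login" or "config_change"
        "principal": principal,  # who or what performed the action
        "outcome": outcome,      # "success", "failure", "anomalous", ...
    }
    return json.dumps(event).encode("utf-8")

def ship(event_bytes: bytes) -> None:
    """Send the event to the central store used for threat hunting."""
    req = urllib.request.Request(
        COLLECTOR_URL,
        data=event_bytes,
        headers={"Content-Type": "application/json"},
    )
    try:
        urllib.request.urlopen(req, timeout=5)  # real pipelines batch, sign, and retry
    except OSError as err:
        # The collector above is a placeholder, so a connection error is expected here.
        print(f"event not delivered: {err}")

if __name__ == "__main__":
    # Example: record an anomalous sign-in so a central detection rule can flag it.
    ship(emit_security_event("identity-service", "login", "user@example.com", "anomalous"))
```

The value of the standardized shape is that a single detection query can run across events from every service instead of against a zoo of per-product log formats; the hard part, as noted above, is getting thousands of existing services to emit it.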
I think reading the “Secure Future Initiative” is a useful exercise. Manifestos can add juice to a mission. However, can the troops deliver a victory over the bad actors who swarm to Microsoft systems and services because “good enough” is like a fried chicken leg to a colony of ants?
Stephen E Arnold, September 30, 2024
Fancy Cyber Methods Are Useless Against Insider Threats
August 2, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
In my lectures to law enforcement and intelligence professionals, I end the talks with one statement: “Do not assume. Do not reduce costs by firing experienced professionals. Do not ignore human analyses of available information. Do not take short cuts.” Cyber security companies are often like the mythical kids of the village shoemaker. Those who can afford to hire the shoemaker have nifty kicks and slides. Those without resources have almost useless footwear.
Companies in the security business often have an exceptionally high opinion of their capabilities and expertise. I think of this as the Google Syndrome or what some have called by less salubrious names. The idea is that one is just so smart, nothing bad can happen here. Yeah, right.
An executive answers questions about a slight security misstep. Thanks, Microsoft Copilot. You have been there and done that I assume.
I read “North Korean Hacker Got Hired by US Security Vendor, Immediately Loaded Malware.” The article is a reminder that outfits in the OSINT, investigative, and intelligence business can make incredibly interesting decisions. Some of these lead to quite significant consequences. This particular case example illustrates how a hiring process using humans who are really smart and dedicated can be fooled, duped, and bamboozled.
The write up explains:
KnowBe4, a US-based security vendor, revealed that it unwittingly hired a North Korean hacker who attempted to load malware into the company’s network. KnowBe4 CEO and founder Stu Sjouwerman described the incident in a blog post yesterday, calling it a cautionary tale that was fortunately detected before causing any major problems.
I am a dinobaby, and I translated the passage to mean: “We hired a bad actor but, by the grace of the Big Guy, we avoided disaster.”
Sure, sure, you did.
I would suggest you know only that you trapped one instance of the person’s behavior. You may not know, and may never know, what that individual told a colleague in North Korea or another country, or what the bad actor said or emailed from a coffee shop using a contact’s computer. You may never know what business processes the person absorbed, converted to an encrypted message, and forwarded via a burner phone to a pal in a nation-state whose interests are not aligned with America’s.
In short, the cyber security company dropped the ball. It need not feel too bad. One of the companies I worked for early in my 60-year working career hired a person who dumped top secrets into journalists’ laps. Last week a person I knew was complaining about Delta Air Lines, which was shown to be quite addled in the wake of the CrowdStrike misstep.
What’s the fix? Go back to how I end my lectures. Those in the cyber security business need to be extra vigilant. The idea that “we are so smart, we have the answer” is an example of a mental short cut. The fact is that the company KnowBe4 did not. It is lucky it KnewAtAll. Some tips:
- Seek and hire vetted experts
- Question procedures and processes in “before action” and “after action” incidents
- Do not rely on assumptions
- Do not believe the outputs of smart software systems
- Invest in security instead of fancy automobiles and vacations.
Do these suggestions run counter to your business goals and your image of yourself? Too bad. Life is tough. Cyber crime is the growth business. Step up.
Stephen E Arnold, August 2, 2024
What Will the AT&T Executives Serve Their Lawyers at the Security Breach Debrief?
July 15, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
On the flight back to my digital redoubt in rural Kentucky, I had the thrill of sitting behind a couple of telecom types who were laughing at the pickle AT&T has plopped on top of what I think of as a Judge Green slushee. Do lime slushees and dill pickles go together? For my tastes, nope. Judge Green wanted to de-monopolize the Ma Bell I knew and loved. (Yes, I cashed some Ma Bell checks and I had a Young Pioneers hat.)
We are back to what amounts to a Ma Bell trifecta: AT&T (the new version, which wears spurs and chaps), Verizon (everyone’s favorite throwback carrier), and the new T-Mobile (biting those customer pocketbooks as if they were bratwursts mit sauerkraut). Each of these outfits is interesting. But at the moment, AT&T is in the spotlight.
“Data of Nearly All AT&T Customers Downloaded to a Third-Party Platform in a 2022 Security Breach” dances around a modest cyber misstep at what is now a quite old and frail Ma Bell. Imagine the good old days before the Judge Green decision to create the Baby Bells. Security breaches were possible, but it was quite tough to get the customer data. Attacks were limited to those with the knowledge (somewhat tough to obtain), the tools (3B series computers and lots of mainframes), and access to network connections. Technology has advanced. Consequently, competition means that no one makes money via security. Security is better at old-school monopolies because money can be spent without worrying about revenue. As one AT&T executive said to my boss at a blue-chip consulting company, “You guys charge so much we will have to get another railroad car filled with quarters to pay your bill.” Ho ho ho — except the fellow was not joking. At the pre-Judge Green AT&T, spending money on security was definitely not an issue. Today? Seems to be different.
A more pointed discussion of Ma Bell’s breaking her hip again appears in “AT&T Breach Leaked Call and Text Records from Nearly All Wireless Customers,” which states:
AT&T revealed Friday morning (July 12, 2024) that a cybersecurity attack had exposed call records and texts from “nearly all” of the carrier’s cellular customers (including people on mobile virtual network operators, or MVNOs, that use AT&T’s network, like Cricket, Boost Mobile, and Consumer Cellular). The breach contains data from between May 1st, 2022, and October 31st, 2022, in addition to records from a “very small number” of customers on January 2nd, 2023.
Then there is the “problem,” if I understand the reference to Snowflake correctly: Is AT&T suggesting that Snowflake is responsible for the breach? Big outfits like to identify the source of the problem. If Snowflake made the misstep, isn’t it the responsibility of AT&T’s cyber unit to make sure that the security was as good as or better than the security implemented before the Judge Green break up? I think AT&T, like other big companies, wants to find a way to shift blame, not say, “We put the pickle in the lime slushee.”
My posture toward two-year-old security issues is, “What’s the point of covering up a loss of ‘nearly all’ customers’ data?” I know the answer: optics and the share price.
As a person who owned a Young Pioneers’ hat, I am truly disappointed in the company. The Regional Managers for whom I worked as a contractor had security on the list of top priorities from day one. Whether we were fooling around with a Western Electric data service or the research charge back system prior to the break up, security was not someone else’s problem.
Today it appears that AT&T has made some decisions which are now perched on the top officer’s head. Security problems are, therefore, tough to miss. Boeing loses doors and wheels from aircraft. Microsoft tantalizes bad actors with insecure systems. AT&T outsources high-value data and then moves more slowly than the last remaining turtle in the mine run-off pond near my home in Harrod’s Creek.
Maybe big is not as wonderful as some expect the idea to be? Responsibility for one’s decisions and an ethical compass are not cyber tools, but both notions are missing in some big company operations. Will the after-action team guzzle lime slushees with pickles on top?
Stephen E Arnold, July 15, 2024
Cloudflare, What Else Can You Block?
July 11, 2024
I spotted an interesting item in Silicon Angle. The article is “Cloudflare Rolls Out Feature for Blocking AI Companies’ Web Scrapers.” I think this is the main point:
Cloudflare Inc. today debuted a new no-code feature for preventing artificial intelligence developers from scraping website content. The capability is available as part of the company’s flagship CDN, or content delivery network. The platform is used by a sizable percentage of the world’s websites to speed up page loading times for users. According to Cloudflare, the new scraping prevention feature is available in both the free and paid tiers of its CDN.
Cloudflare is what I call an “enabler.” For example, when one tries to do some domain research, one often encounters Cloudflare, not the actual IP address of the service. This year I have been doing some talks for law enforcement and intelligence professionals about Telegram and its Messenger service. Guess what? Telegram is a Cloudflare customer. My team and I have encountered other interesting services which use Cloudflare the way Natty Bumppo’s sidekick used branches to obscure footprints in the forest.
Cloudflare has other capabilities too; for instance, the write up reports:
Cloudflare assigns every website visit that its platform processes a score of 1 to 99. The lower the number, the greater the likelihood that the request was generated by a bot. According to the company, requests made by the bot that collects content for Perplexity AI consistently receive a score under 30.
I wonder what less salubrious Web site operators score. Yes, there are some pretty dodgy outfits that may be arguably worse than an AI outfit.
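The scoring mechanism in the quoted passage is easy to reason about: every request gets a score from 1 to 99, and the operator picks a cutoff below which requests are treated as bots and blocked or challenged. Here is a minimal sketch of that threshold logic in Python; the challenge band, the data shapes, and the function names are illustrative assumptions, not Cloudflare’s actual rule syntax or API.

```python
# Illustrative sketch of the score-threshold logic described above.
# The bands, data shapes, and names are assumptions, not Cloudflare's rule syntax.
from dataclasses import dataclass

@dataclass
class Request:
    path: str
    user_agent: str
    bot_score: int  # 1-99; lower means more likely automated, per the quote

BOT_SCORE_CUTOFF = 30  # the quote notes Perplexity's scraper consistently scores under 30

def decide(request: Request) -> str:
    """Return a coarse action for a request based on its bot score."""
    if request.bot_score < BOT_SCORE_CUTOFF:
        return "block"       # treat as an AI scraper or other bot
    if request.bot_score < 60:
        return "challenge"   # hypothetical gray zone: ask for proof of humanity
    return "allow"

# Example: a request that scores like the scraper described in the article.
print(decide(Request(path="/article", user_agent="SomeBot/1.0", bot_score=22)))
```

Presumably the no-code feature applies something like this at the CDN edge, before a request ever reaches the origin server.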
The information in this Silicon Angle write up raises a question: “What other content blocking and gatekeeping services can Cloudflare provide?”
Stephen E Arnold, July 11, 2024