Cognitive Blind Spot 4: Ads. What Is the Big Deal Already?
October 11, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Last week, I presented a summary of Dark Web Trends 2023, a research update my team and I prepare each year. I showed a visual of the ads on a Dark Web search engine. Here’s an example of one of my illustrations:
The TorLanD service, when it is accessible via Tor, displays a search box and advertising. What is interesting about this service and a number of other Dark Web search engines is the ads. The search results are so-so, vastly inferior to the information retrieval solutions offered by intelware vendors.
Some of the ads appear on other Dark Web search systems as well; for example, Bobby and DarkSide, among others. The advertisements offer a range of interesting content. The TorLanD screenshot pitches carding, porn, drugs, gadgets (skimmers and software), and illegal substances. I pointed out that the ads on TorLanD looked a lot like the ads on Bobby; for instance:
I want to point out that the Silk Road 4.0 and the Gadgets, Docs, Fakes ads are identical. Notice also that TorLanD advertises on Bobby. The Helsinki Drug Marketplace on the Bobby search system offers heroin.
Most of these ads are trade outs. The idea is that one Dark Web site will display an ad for another Dark Web site. There are often links to Dark Web advertising agencies as well. (For this short post, I won’t be listing these vendors, but if you are interested in this research, contact benkent2020 at yahoo dot com. One of my team will follow up and explain our for-fee research policy.)
The point of these two examples is to make clear that advertising has become normalized, even among bad actors. Furthermore, few are surprised that bad actors (or alleged bad actors) communicate, pat one another on the back, and support an ecosystem to buy and sell space on the increasingly small Dark Web. Please note that advertising appears in public and private Telegram groups focused on the topics referenced in these Dark Web ads.
Can you believe the ads? Some people do. Users of the Clear Web and the Dark Web are conditioned to accept ads and to believe that these are true, valid, useful, and intended to make it easy to break the law and buy a controlled substance or CSAM. Some ads emphasize “trust.”
People trust ads. People believe ads. People expect ads. In fact, one can poke around and identify advertising and PR agencies touting the idea that people “trust” ads, particularly those with brand identity. How does one build a brand? Give up? Advertising and weaponized information are two ways.
The cognitive bias at work is that people embrace advertising. Look at a page of Google results. Which results are ads, and which are ads that are not identified as such? What happens when ads are indistinguishable from plausible messages? Some online companies offer stealth ads. On the Dark Web pages illustrating this essay are law enforcement agencies masquerading as bad actors. Can you identify one such ad? What about messages on Twitter which are designed to be difficult to spot as paid messages or weaponized content? For one take on Twitter technology, read “New Ads on X Can’t Be Blocked or Reported, and Aren’t Labeled as Advertisements.”
Let me highlight some of the functions on online ads like those on the Dark Web sites. I will ignore the Clear Web ads for the purposes of this essay:
- Click on the ad and receive malware
- Visit the ad and explore the illegal offer so that the site operator can obtain information about you
- Sell you a product and obtain the identifiers you provide, a delivery address (either physical or digital), or plant a beacon on your system to facilitate tracking
- Gather emails for phishing or other online initiatives
- Blackmail.
I want to highlight advertising as a vector of weaponization for three reasons: [a] People believe ads. I know it sounds silly, but ads work. People suspend disbelief when an ad on a service offers something that sounds too good to be true; [b] many people do not question the legitimacy of an ad or its message. Ads are good. Ads are everywhere; and [c] ads are essentially unregulated.
What happens when everything drifts toward advertising? The cognitive blind spot kicks in and one cannot separate the false from the real.
Public service note: Before you explore Dark Web ads or click links on social media services like Twitter, consider that these are vectors which can point to quite surprising outcomes. Intelligence agencies outside the US use Dark Web sites as a way to harvest useful information. Bad actors use ads to rip off unsuspecting people, like the doctor who once lived two miles from my office and used a Dark Web site to order a hit man to terminate an individual.
Ads are unregulated and full of surprises. But the cognitive blind spot for advertising guarantees that the technique will flourish and gain technical sophistication. Are those “objective” search results useful information or weaponized? Will the Dark Web vendor really sell you valid stolen credit cards? Will the US Postal Service deliver an unmarked envelope chock full of interesting chemicals?
Stephen E Arnold, October 11, 2023
Savvy GenZs: Scammers Love Those Kids
October 3, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Many of us assumed the generation that has grown up using digital devices would be the most cyber-crime savvy. Apparently not. Vox reports, “Gen Z Falls for Online Scams More than their Boomer Grandparents Do.” Writer A.W. Ohlheiser cites a recent Deloitte survey that found those born between 1997 and 2012 were three times more likely to fall victim to an online scam than Boomers, twice as likely to have their social media accounts hacked, and more likely to have location information misused than any other generation.
One might think they should know better and, apparently, they do: the survey found Gen Z respondents to be quite aware of cybersecurity issues. The problem may instead lie in the degree to which young people are immersed in the online world(s). We learn:
“There are a few theories that seem to come up again and again. First, Gen Z simply uses technology more than any other generation and is therefore more likely to be scammed via that technology. Second, growing up with the internet gives younger people a familiarity with their devices that can, in some instances, incentivize them to choose convenience over safety. And third, cybersecurity education for school-aged children isn’t doing a great job of talking about online safety in a way that actually clicks with younger people’s lived experiences online.”
So one thing we might do is adjust our approach to cybersecurity education in schools. How else can we persuade Gen Z to accept hassles like two-factor authentication in the interest of safety? Maybe that is the wrong question. Ohlheiser consulted 21-year-old Kyla Guru, a Stanford computer science student and founder of a cybersecurity education organization. The article suggests:
“Instead, online safety best practices should be much more personalized to how younger people are actually using the internet, said Guru. Staying safer online could involve switching browsers, enabling different settings in the apps you use, or changing how you store passwords, she noted. None of those steps necessarily involve compromising your convenience or using the internet in a more limited way. Approaching cybersecurity as part of being active online, rather than an antagonist to it, might connect better with Gen Z, Guru said.”
Guru also believes learning about online bad actors and their motivations may help her peers be more attentive to the issue. The write-up also points to experts who insist apps and platforms bear at least some responsibility to protect users, and there is more they could be doing. For example, social media platforms could send out test phishing emails, as many employers do, then send educational resources to anyone who bites. And, of course, privacy settings could be made much easier to access and understand. Those steps, in fact, could help users of all ages.
Cynthia Murrell, October 3, 2023
Good News and Bad News: Smart Software Is More Clever Than Humanoids
September 11, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
After a quick trip to Europe, I will be giving a lecture about fake data. One of the case examples concerns the alleged shortcuts taken by Frank Financial in its efforts to obtain about $175 million from JPMorgan Chase. I like to think of JPMC as “the smartest guys in the room” when it comes to numbers related to money. I suppose wizards at Goldman or McKinsey would disagree. But the interesting swizzle on the JPMC story is that the alleged fraudster was a graduate of Wharton.
That’s good news for getting an education in moral probity at a prestigious university.
A big, impressive university’s smart software beats smart students at Tic Tac Toe. Imagine what these wizards will be able to accomplish when smart software innovates and assists the students with financial fancy dancing. Thanks, Mother MJ. Deep on the gradient descent, please.
Flash forward to the Murdoch real news story “M.B.A. Students vs. ChatGPT: Who Comes Up With More Innovative Ideas?” [The Rupert toll booth is operating.] The main idea of the write up is that humanoid Wharton students were less “creative,” “innovative,” and “inventive” than smart software. What does this say about the future of financial fraud? Mere humanoids like those now in the spotlight at the Southern District of New York show may become more formidable with the assistance of smart software. The humanoids were caught, granted it took JPMC a few months after the $175 million check was cashed, but JPMC did figure it out via a marketing test.
Imagine. Wharton grads with smart software. How great will that be for the targets of financial friskiness? Let’s hope JPMC gets its own cyber fraud detecting software working. In late 2022, the “smartest guys in the room” were not smart enough to spot synthetic and faked data. Will smart software be able to spot smart software scams?
That’s the bad news. No.
Stephen E Arnold, September 11, 2023
Why Encrypted Messaging Is Getting Love from Bad Actors
August 17, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
The easier it is to break the law or circumvent regulations, the more people will give in to their darker nature. Yes, this is another of Arnold’s Laws of Online, along with “online data flows erode ethical behavior.” I suppose the two “laws” go together like Corvettes and fuel stops, tattoos and body art, or Barbie and Ken dolls.
“Banks Hit with $549 Million in Fines for Use of Signal, WhatsApp to Evade Regulators’ Reach” explains a behavior I noticed when I was doing projects for a hoop-de-do big time US financial institution.
Let’s jump back in time to 2005: I arrived for a meeting with the bank lugging my lecture equipment. As I recall, I had a couple of laptops, my personal LCD projector, a covey of connectors, and a couple of burner phones and SIMs from France and the UK.
“What are you looking at?” asks the young financial analyst on the sell side. I had interrupted a young, whip-smart banker who was organizing her off-the-record client calls. I think she was deciding which burner phone and pay-as-you-go SIM to use to pass a tip about a major financial deal to a whale. Thanks, MidJourney. It only took three tries for your smart software to show mobile phones. Outstanding C-minus work. Does this MBA CFA look innocent to you? She does to me. Doesn’t every banker have multiple mobile phones?
One bright bank type asked upon entering the meeting room as I was stowing and inventorying my gear after a delightful taxi ride from the equally thrilling New York Hilton, “Why do you have so many mobile phones?” I explained that I used the burners in my talks about cyber crime. The intelligent young person asked, “How do you connect them?” I replied, “When I travel, I buy SIMs in other countries. I also purchase them if I see a US outfit offering a pay-as-you-go SIM.” She did not ask how I masked my identity when acquiring SIMs, and I did not provide any details like throwing the phone away after one use.
Flash forward two months. This time it was a different conference room. My client had his assistant and the bright young thing popped into the meeting. She smiled and said, “I have been experimenting with the SIMs and a phone I purchased on Lexington Avenue from a phone repair shop.”
“What did you learn?” I asked.
She replied, “I can do regular calls on the mobile the bank provides. But I can do side calls on this other phone.”
I asked, “Do you call clients on the regular phone or the other phone?”
She said, “I use the special phone for special clients.”
Remember this was late 2005.
The article dated August 8, 2023, appeared 18 years after my learning how quickly bright young things can suck in an item of information and apply it to transferring information supposedly regulated by a US government agency. That’s when the Arnold Law about people breaking the law when it is really easy became one of my go-to sayings.
The write up stated:
U.S. regulators on Tuesday announced a combined $549 million in penalties against Wells Fargo and a raft of smaller or non-U.S. firms that failed to maintain electronic records of employee communications. The Securities and Exchange Commission disclosed charges and $289 million in fines against 11 firms for “widespread and longstanding failures” in record-keeping, while the Commodity Futures Trading Commission also said it fined four banks a total of $260 million for failing to maintain records required by the agency.
How long has a closely regulated sector like banking been “regulated”? A long time.
I want to mention that I have been talking for a long time about getting around regulations which require communication monitoring. I will do so again in October 2023 at the Massachusetts / New York Association of Crime Analysts conference. In my keynote, I will update my remarks about Telegram and its expanding role in cyber and regular crime. I will also point out how these encrypted messaging apps have breathed new, more secure life into certain criminal activities. We have an organic ecosystem of online-facilitated crime, crime that is global, not a local stick up at a convenience store at 3 am on a rainy Thursday morning.
What does this news story say about regulatory action? What does it make clear about behavior in financial services firms?
I, of course, have no idea. Just like some of the regulatory officers at financial institutions and some regulatory agencies.
Stephen E Arnold, August 17, 2023
Microsoft and Russia: A Convenient Excuse?
August 14, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
In the SolarWinds vortex, the explanation that 1,000 Russian hackers were responsible illuminated a security failure with the heat of a burning EV with lithium batteries. Now Russian hackers have again created a problem. Are these Russians cut from the same cloth as the folks who have turned a “special operation” into a noir Laurel & Hardy comedy routine?
Russian hackers reportedly targeted users in Microsoft Teams chatrooms, pretending to be from technical support. In a blog post [August 2, 2023], Microsoft researchers called the campaign a “highly targeted social engineering attack” by a Russia-based hacking team dubbed Midnight Blizzard. The hacking group, which was previously tracked as Nobelium, has been attributed by the U.S. and UK governments to the Foreign Intelligence Service of the Russian Federation.
Isn’t this the Russia producing planners who stalled a column of tanks in its alleged lightning strike on the capital of Ukraine? I think this is the country now creating problems for Microsoft. Imagine that.
The write up continues:
For now, the fake domains and accounts have been neutralized, the researchers said. “Microsoft has mitigated the actor from using the domains and continues to investigate this activity and work to remediate the impact of the attack,” Microsoft said. The company also put forth a list of recommended precautions to reduce the risk of future attacks, including educating users about “social engineering” attacks.
Let me get this straight. Microsoft deployed software with issues. Those issues were fixed after the Russians attacked. The fix, if I understand the statement, is for customers/users to take “precautions” which include teaching obviously stupid customers/users how to be smart. I am probably off base, but it seems to me that Microsoft deployed something that was exploitable. Then after the problem became obvious, Microsoft engineered an alleged “repair.” Now Microsoft wants others to up their game.
Several observations:
- Why not cut and paste the statements from Microsoft’s response to the SolarWinds missteps? Why write the same old stuff and recycle the tiresome assertion about Russia? ChatGPT could probably help out Microsoft’s PR team.
- Bad actors target Microsoft because it offers big, overblown systems and products with security that whips some people into a frenzy of excitement.
- Customers and users are not going to change their behaviors even with a new training program. The system must be engineered to work in the environment of the real-life users.
Net net: The security problem can be identified when Microsofties look in a mirror. Perhaps Microsoft should train its engineers to deliver security systems and products?
Stephen E Arnold, August 14, 2023
One More Reason to Love Twitter: Fake People and Malware Injection.
June 22, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
With regulators beginning to wake up to the threats, risks, and effects of online information, I enjoyed reading “Fake Zero-Day PoC Exploits on GitHub Push Windows, Linux Malware.” The write up points out:
Hackers are impersonating cybersecurity researchers on Twitter and GitHub to publish fake proof-of-concept exploits for zero-day vulnerabilities that infect Windows and Linux with malware. These malicious exploits are promoted by alleged researchers at a fake cybersecurity company named ‘High Sierra Cyber Security,’ who promote the GitHub repositories on Twitter, likely to target cybersecurity researchers and firms involved in vulnerability research.
The tweeter thing is visualized by that nifty art generator Dezgo. I think the smart software captures the essence of the tweeter.
I noted that the target appears to be cyber security “experts.” Does this raise questions in your mind about the acuity of some of those who fell for the threat intelligence? I have to admit: I was not surprised. Not in the least.
The article includes illustrations of the “Python downloader.”
I want to mention that this is just one type of OSINT blindspot causing some “experts” to find themselves on the wrong end of a Tesla-like or Waymo-type self-driving vehicle. I know I would not stand in front of one. Similarly, I would not read about an “exploit” on Twitter, click on links, or download code.
But that’s just me, a 78-year-old dinobaby. But a 30-something cyber whiz? That’s something that makes news.
Stephen E Arnold, June 22, 2023
Newsflash: Common Sense Illuminates Friendly Fish for Phishers
June 16, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Here’s a quick insider threat and phishing victim test: [a] Are you a really friendly, fraternity-or-sorority-social-officer type of gregarious humanoid? [b] Are you a person who says “Yes” to almost any suggestion a friend or stranger makes to you? [c] Are you curious about emails offering big bucks, free prizes, or great deals on avocado slicers?
If you resonated with a, b, or c, researchers have some news for you. The research summary reports:
… the older you are, the less susceptible you are to phishing scams. In addition, highly extroverted and agreeable people are more susceptible to this style of cyber attack. This research holds the potential to provide valuable guidance for future cybersecurity training, considering the specific knowledge and skills required to address age and personality differences.
The research summary continues:
The results of the current study support the idea that people with poor self-control and impulsive tendencies are more likely to misclassify phishing emails as legitimate. Interestingly, impulsive individuals also tend to be less confident in their classifications, suggesting they are somewhat aware of their vulnerability.
It is good to be an old, irascible, skeptical dinobaby after all.
Stephen E Arnold, June 16, 2023
Is This for Interns, Contractors, and Others Whom You Trust?
June 14, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Not too far from where my office is located, an esteemed health care institution is in its second month of a slight glitch. The word in Harrod’s Creek is that security methods in use at a major hospital were — how shall I frame this — a bit like the 2022-2023 University of Kentucky basketball team’s defense. In Harrod’s Creek lingo, this statement would translate to standard English as “them ‘Cats did truly suck.”
A young temporary worker looks at her boss. She says, “Yes, I plugged a USB drive into this computer because I need to move your PowerPoint to a different machine to complete the presentation.” The boss says, “Okay, you can use the desktop in my office. I have to go to a cyber security meeting. See you after lunch. Text me if you need a password to something.” The illustration for this hypothetical conversation emerged from the fountain of innovation known as MidJourney.
The chatter about assorted Federal agencies’ cyber personnel meeting with the institution’s own cyber experts are flitting around. When multiple Federal entities park their unobtrusive and sometimes large black SUVs close to the main entrance, someone is likely to notice.
This short blog post, however, is not about the lame duck cyber security at the health care facility. (I would add an anecdote about an experience I had in 2022. I showed up for a check up at a unit of the health care facility. Upon arriving, I pronounced my date of birth and my name. The professional on duty said, “We have an appointment for your wife and we have her medical records.” Well, that was a trivial administrative error: Wrong patient, confidential information shipped to another facility, and zero idea how that could happen. I made the appointment myself and provided the required information. That’s a great computer system and super duper security in my book.)
The question at hand, however, is: “How can a profitable, marketing oriented, big time in their mind health care outfit, suffer a catastrophic security breach?”
I shall point you to one possible pathway: Temporary workers, interns, and contractors. I will not mention other types of insiders.
Please, point your browser to Hak5.org and read about the USB Rubber Ducky. With a starting price of $80 US, this USB stick has functions which can accomplish some interesting actions. The marketing collateral explains:
Computers trust humans. Humans use keyboards. Hence the universal spec — HID, or Human Interface Device. A keyboard presents itself as a HID, and in turn it’s inherently trusted as human by the computer. The USB Rubber Ducky — which looks like an innocent flash drive to humans — abuses this trust to deliver powerful payloads, injecting keystrokes at superhuman speeds.
With the USB Rubber Ducky, one can:
- Install backdoors
- Covertly exfiltrate documents
- Capture credentials
- Execute compound actions.
Plus, if there is a USB port, the Rubber Ducky will work.
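For readers who have never seen one of these payloads, here is a minimal, deliberately harmless sketch in the DuckyScript-style syntax Hak5 documents for the device. This is my illustration, not a working attack: the fake “keyboard” waits for the operating system to enumerate it, opens the Windows Run dialog, and types a command, exactly as a human would, only faster.

```
REM Illustrative sketch only, not a working attack payload
REM Wait for the OS to enumerate the fake keyboard
DELAY 1000
REM Open the Windows Run dialog (Windows key + R)
GUI r
DELAY 300
REM Type and launch a program via injected keystrokes
STRING notepad.exe
ENTER
```

Swap the benign STRING line for a one-liner that fetches and runs a remote script, and the “interesting actions” in the list above become clear.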
I mention this device because it may not be too difficult for a bad actor to find a way into certain types of super duper cyber secure networks. Plus, temporary workers and even interns welcome a coffee in an organization’s cafeteria or a nearby coffee shop. Kick in a donut and a smile, and someone may plug the drive in for free!
Stephen E Arnold, June 14, 2023
Microsoft: Just a Minor Thing
June 6, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Several years ago, I was asked to be a technical advisor to a UK group focused on improper actions directed toward children. Since then, I have paid some attention to the information about young people that some online services collect. One of the more troubling facets of improper actions intended to compromise the privacy, security, and possibly the safety of minors is the role data aggregators play. Whether gathering information from “harmless” apps favored by young people to surreptitious collection and cross correlation of young users’ online travels, these often surreptitious actions of people and their systems trouble me.
The “anything goes” approach of some organizations is often masked by public statements and the use of words like “trust” when explaining how information “hoovering” operations are set up, implemented, and used to generate revenue or other outcomes. I am not comfortable identifying some of these, however.
A regulator and a big company representative talking about a satisfactory resolution to the regrettable collection of kiddie data. Both appear to be satisfied with another job well done. The image was generated by the MidJourney smart software.
Instead, let me direct your attention to the BBC report “Microsoft to Pay $20m for Child Privacy Violations.” The write up states as “real news”:
Microsoft will pay $20m (£16m) to US federal regulators after it was found to have illegally collected data on children who had started Xbox accounts.
The write up states:
From 2015 to 2020 Microsoft retained data “sometimes for years” from the account set up, even when a parent failed to complete the process …The company also failed to inform parents about all the data it was collecting, including the user’s profile picture and that data was being distributed to third parties.
Will the leader in smart software and clever marketing have an explanation? Of course. That’s what advisory firms and lawyers help their clients deliver; for example:
“Regrettably, we did not meet customer expectations and are committed to complying with the order to continue improving upon our safety measures,” Microsoft’s Dave McCarthy, CVP of Xbox Player Services, wrote in an Xbox blog post. “We believe that we can and should do more, and we’ll remain steadfast in our commitment to safety, privacy, and security for our community.”
Sounds good.
From my point of view, something is out of alignment. Perhaps it is my old-fashioned idea that young people’s online activities require a more thoughtful approach by large companies, data aggregators, and click capturing systems. The thought, it seems, is directed at finding ways to take advantage of weak regulation, inattentive parents and guardians, and often-uninformed young people.
Like other ethical black holes in certain organizations, surfing for fun or money on children seems inappropriate. Does $20 million have an impact on a giant company? Nope. The ethical and moral foundation of decision making is enabling these data collection activities. And $20 million causes little or no pain. Therefore, why not continue these practices and do a better job of keeping the procedures secret?
Pragmatism is the name of the game it seems. And kiddie data? Fair game to some adrift in an ethical swamp. Just a minor thing.
Stephen E Arnold, June 6, 2023
Need a Guide to Destroying Social Cohesion: Chinese Academics Have One for You
May 25, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
The TikTok service is one that has kicked the Google in its sensitive bits. The “algorithm” befuddles both European and US “me too” innovators. The appeal of short, crunchy videos has caused restaurant chefs to craft food for TikTok influencers who record meals. Chefs!
What other magical powers can a service like TikTok have? That’s a good question, and it is one that Chinese academics have answered. Navigate to “Weak Ties Strengthen Anger Contagion in Social Media.” The main idea of the research is to answer a simple question: Can social media (think TikTok, for example) take a flame thrower to social ties? The answer is, “Sure can.” Will a social structure catch fire and collapse? “Sure can.”
A frail structure is set on fire by a stream of social media consumed by a teen working in his parents’ garden shed. MidJourney would not accept the query “a person using a laptop setting his friends’ homes on fire.” Thanks, Aunt MidJourney.
The write up states:
Increasing evidence suggests that, similar to face-to-face communications, human emotions also spread in online social media.
Okay, a post or TikTok video sparks emotion.
So what?
…we find that anger travels easily along weaker ties than joy, meaning that it can infiltrate different communities and break free of local traps because strangers share such content more often. Through a simple diffusion model, we reveal that weaker ties speed up anger by applying both propagation velocity and coverage metrics.
The authors note:
…we offer solid evidence that anger spreads faster and wider than joy in social media because it disseminates preferentially through weak ties. Our findings shed light on both personal anger management and in understanding collective behavior.
I wonder if any psychological operations professionals in China or another country with a desire to reduce the efficacy of the American democratic “experiment” will find the research interesting.
Stephen E Arnold, May 25, 2023