Are Threat Detection and Cyber Security Systems Working?

October 26, 2021

I read “Microsoft: Russian SVR Hacked at Least 14 IT Supply Chain Firms Since May.” The write up states:

Microsoft says the Russian-backed Nobelium threat group behind last year’s SolarWinds hack is still targeting the global IT supply chain, with 140 managed service providers (MSPs) and cloud service providers attacked and at least 14 breached since May 2021. This campaign shares all the signs of Nobelium’s approach to compromising a significant list of targets by breaching their service provider.

That’s interesting. At first glance, it seems as if a small number of targets succumbed.

On the other hand, it raises some questions:

  1. What cyber security and threat detection systems were in use at the 14 outfits breached?
  2. What caused the failure of the cyber security systems? Human error, lousy cyber security methods, or super crafty bad actors like insiders?
  3. Is a 10 percent failure rate acceptable? Microsoft seems agitated, but why did Microsoft’s security fail to protect that 10 percent of the targets?

Each week I am invited to webinars to learn about advanced security systems. Am I to assume that if I receive 10 invites, one invite will be from an outfit whose technology cannot protect me?

The reports of breaches, the powers of giant software outfits, and the success of most companies in protecting themselves are somewhat cheering.

On the other hand, a known group operating for more than a year is still bedeviling some organizations. Why?

Stephen E Arnold, October 26, 2021

Microsoft and Russia: Who Does What to Whom?

October 26, 2021

Last year’s infamous SolarWinds attack really boosted the success rate of Russia’s state-backed hackers. That is one take-away from MarketBeat’s write-up, “Microsoft: Russia Behind 58% of Detected State-Backed Hacks.” Writer Frank Bajak shares some details from Microsoft’s second annual Digital Defense Report:

“Russia accounted for most state-sponsored hacking detected by Microsoft over the past year, with a 58% share, mostly targeting government agencies and think tanks in the United States, followed by Ukraine, Britain and European NATO members, the company said. The devastating effectiveness of the long-undetected SolarWinds hack — it mainly breached information technology businesses including Microsoft — also boosted Russian state-backed hackers’ success rate to 32% in the year ending June 30, compared with 21% in the preceding 12 months. China, meanwhile, accounted for fewer than 1 in 10 of the state-backed hacking attempts Microsoft detected but was successful 44% of the time in breaking into targeted networks, Microsoft said. … Only 4% of all state-backed hacking that Microsoft detected targeted critical infrastructure, the Redmond, Washington-based company said, with Russian agents far less interested in it than Chinese or Iranian cyber-operatives.”

Well, that is something. Ransomware, though, is also up, with the U.S. targeted three times as often as the next nation. Anyone who was affected by the Colonial Pipeline attack may be concerned about our infrastructure despite the lack of state-sponsored interest in sabotaging it. We are told state-backed attackers are mostly interested in intelligence gathering. Bajak cites Microsoft Digital Security Unit’s Cristin Goodwin as he writes:

“Goodwin finds China’s ‘geopolitical goals’ in its recent cyber espionage especially notable, including targeting foreign ministries in Central and South American countries where it is making Belt-and-Road-Initiative infrastructure investments and universities in Taiwan and Hong Kong where resistance to Beijing’s regional ambitions is strong.”

North Korea is another participant covered in the report. That country was in second place as a source of attacks at 23%, though its effectiveness was considerably less impressive: only 6% of its spear-phishing attempts were successful. Bajak closes by reminding us the report can only include attacks Microsoft actually detected. See the write-up or the report itself for more information.

Cynthia Murrell, October 26, 2021

Apple Emulates the Timnit Gebru Method

October 26, 2021

Remember Dr. Timnit Gebru? This individual was the researcher who apparently did not go along with the tension flow related to Google’s approach to snorkeling. (Don’t get the snorkel thing? Yeah, too bad.) The solution was to exit Dr. Gebru and move forward in a Googley manner.

Now the Apple “we care about privacy” outfit appears to have reached a “me too moment” in management tactics.

Two quick examples:

First, the very Silicon Valley Verge published “Apple Just Fired a Leader of the #AppleToo Movement.” I am not sure what the AppleToo thing encompasses, but it obviously sparked the Timnit option. The write up says:

Apple has fired Janneke Parrish, a leader of the #AppleToo movement, amid a broad crackdown on leaks and worker organizing. Parrish, a program manager on Apple Maps, was terminated for deleting files off of her work devices during an internal investigation — an action Apple categorized as “non-compliance,” according to people familiar with the situation.

Okay, deletes are bad. I figured that out when Apple elected to get rid of the backspace key.

Second, Gizmodo, another Silicon Valley information service, revealed “Apple Wanted Her Fired. It Settled on an Absurd Excuse.” The write up reports:

The next email said she’d been fired. Among the reasons Apple provided, she’d “failed to cooperate” with what the company called its “investigatory process.”

Hasta la vista, Ashley Gjøvik.

Observations:

  • The Timnit method appears to work well when females are involved in certain activities which run contrary to the Apple way. (Note that the Apple way includes flexibility in responding to certain requests from nation states like China.)
  • The lack of information about the incidents is apparently part of the disappearing method. Transparency? Yeah, not so much in Harrod’s Creek.
  • The one-two disappearing punch is fascinating. Instead of letting the dust settle, do the bang-bang thing.

Net net: Google’s management methods appear to be going viral, at least in certain management circles.

Stephen E Arnold, October 26, 2021

Facebook Tip: The Company Has Power

October 26, 2021

The sharks are circling the social media world’s favorite chum. In the meantime, here’s a tip.

The revelations from whistleblower Frances Haugen did not surprise us, but it is good to see them in the open. The former Facebook data scientist testified that Facebook actively puts profits above user safety. The more users scroll and click, the more the company can push tailored ads and the more money it makes. Since harmful and polarizing content gets more attention, Facebook is motivated to keep that content in circulation. A safer algorithm, attested Haugen, would have gotten in the way of those profits. (Naturally, Mark Zuckerberg denied her testimony.)

While we wait to see what, if any, changes the platform will be forced to make, BGR describes how users can wrest control of their accounts from the algorithm. Writer Chris Smith declares that “Facebook Is Terrified that You’ll Learn this News Feed Secret.” It is a process that may take some time, though there was briefly a Chrome extension to automate it. We learn:

“Louis Barclay’s ‘Unfollow Everything’ tool automated the entire process, allowing users to unfollow their friends and pages. Just like that, the tool cleared the News Feed, helping people spend less time inside the app. If you unfollow everyone, the algorithm has nothing to feed on. It won’t know what to serve you. Then, you can start from scratch and only follow important people. Your News Feed experience will improve dramatically as a result. Barclay explained in a post on Slate that unfollowing people isn’t like unfriending. You remain friends with people. You just won’t follow their posts and have them all dumped into your News Feed. Then, you’ll be able to follow only the people you want. ‘I still remember the feeling of unfollowing everything for the first time,’ he said. ‘It was near-miraculous. I had lost nothing since I could still see my favorite friends and groups by going to them directly. But I had gained a staggering amount of control. I was no longer tempted to scroll down an infinite feed of content. The time I spent on Facebook decreased dramatically. Overnight, my Facebook addiction became manageable.’”
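For the technically curious, here is a minimal, hypothetical sketch of the kind of browser automation such a tool might perform. The feed preferences URL and the CSS selector below are placeholders I invented, not Facebook’s actual markup, so this illustrates the concept rather than reproducing Barclay’s extension.

```python
# A minimal, hypothetical sketch of the sort of browser automation the
# "Unfollow Everything" extension reportedly performed. The URL and the
# CSS selector below are placeholders, not Facebook's real markup, so
# treat this as an illustration of the idea, not working code against
# the live site.
from playwright.sync_api import sync_playwright

FEED_PREFERENCES_URL = "https://www.facebook.com/feed_preferences"  # assumed entry point
UNFOLLOW_SELECTOR = "[aria-label='Unfollow']"  # placeholder selector

def unfollow_everything(page) -> int:
    """Click every unfollow control the preferences page exposes."""
    page.goto(FEED_PREFERENCES_URL)
    buttons = page.query_selector_all(UNFOLLOW_SELECTOR)
    for button in buttons:
        button.click()
        page.wait_for_timeout(500)  # brief pause so the interface can catch up
    return len(buttons)

if __name__ == "__main__":
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=False)
        page = browser.new_page()
        page.goto("https://www.facebook.com")
        input("Log in in the browser window, then press Enter here...")
        print(f"Unfollowed {unfollow_everything(page)} friends and pages")
        browser.close()
```

The point of the exercise is the one Barclay makes: once nothing is followed, the algorithm has nothing to feed on.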

Facebook refused to allow such a cure to spread, however. The company threatened Barclay with legal action if he did not remove the Chrome extension. It also permanently disabled his Facebook and Instagram accounts, which seems rather petty. Barclay must have hit a nerve.

Cynthia Murrell, October 26, 2021

Rogue in Vogue: What Can Happen When Specialized Software Becomes Available

October 25, 2021

I read “New York Times Journalist Ben Hubbard Hacked with Pegasus after Reporting on Previous Hacking Attempts.” I have no idea if the story is true or recounted accurately. The main point, as I see it, is that a person or group allegedly used NSO Group’s tools to compromise a journalist’s mobile phone.

The article concludes:

Hubbard was repeatedly subjected to targeted hacking with NSO Group’s Pegasus spyware. The hacking took place after the very public reporting in 2020 by Hubbard and the Citizen Lab that he had been a target. The case starkly illustrates the dissonance between NSO Group’s stated concerns for human rights and oversight, and the reality: it appears that no effective steps were taken by the company to prevent the repeated targeting of a prominent American journalist’s phone.

The write up makes clear one point I have commented upon in the past; that is, making specialized software and systems available without meaningful controls creates opportunities for problematic activity.

When specialized technology is developed using the expertise, and sometimes the money and staff, of nation states, making these tools widely available means a loss of control.

As access to and knowledge of specialized tools, systems, and methods diffuse, it becomes easier and easier to use specialized technology for purposes for which the innovations were not intended.

Now bad actors, introductory programming classes in many countries, individuals with agendas different from those of their employer, disgruntled software engineers, and probably a couple of old-time programmers with a laptop in an elder care facility can:

  • Engage in Crime as a Service
  • Use a bot to poison data sources
  • Access a target’s mobile device
  • Conduct surveillance operations
  • Embed obfuscated code in open source software components

If the cited article is not accurate, it provides sufficient information to surface and publicize interesting ideas. If the write up is accurate, the control mechanisms in the countries actively developing and licensing specialized software are not effective in preventing misuse. For cloud services, the controls should be easier to apply.

Is every company, every nation, and every technology savvy individual a rogue? I hope not.

Stephen E Arnold, October 25, 2021

A Sporty Allegation: One Person Is Two on the Zuckmetabook Thing?

October 25, 2021

If you are interested in an “indie hacker’s” view of Zuckbook (ooops, sorry, I meant Facebook), you will want to read “Facebook Will Count One Person as Two on Its Platform.” I found the write up interesting. Darko has a way with words.

Here’s the statement from the Zuckbook which caught his attention:

Starting today, if someone does not have their Facebook and Instagram accounts linked in Accounts Center, we will consider those accounts as separate people for ads planning and measurement.

Darko then clarifies this corporate Zuck speak:

Essentially, Facebook will count one person as two on its platform for advertisers, unless the users have explicitly linked their accounts in “Account Center”. [Emphasis in the original text]
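To make the measurement point concrete, here is a toy sketch, entirely my own illustration and not Facebook’s actual method, of how counting unlinked accounts as separate people inflates the “people reached” figure an advertiser sees.

```python
# Toy illustration only, not Facebook's actual method: an unlinked
# Facebook/Instagram pair is treated as two "people" for ads planning
# and measurement, while a linked pair is de-duplicated to one.
accounts = [
    {"human": "alice", "platforms": ["facebook", "instagram"], "linked": True},
    {"human": "bob", "platforms": ["facebook", "instagram"], "linked": False},
]

def estimated_people_reached(entries) -> int:
    reached = 0
    for entry in entries:
        # Linked accounts count as one person; unlinked ones count once per platform.
        reached += 1 if entry["linked"] else len(entry["platforms"])
    return reached

print(estimated_people_reached(accounts))  # prints 3, although only 2 humans saw the ad
```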

The write up identifies other murkiness; for example, the machinations of the “Account Center” and how the Zuckbook presents some ad effectiveness data.

Darko points out that the Zuckbook may be doing the Darwin adaptation to the Tim Apple privacy play. Plus, Zuckbook ad rates are “skyrocketing,” to use Darko’s term.

What’s the impact of the Zuckbook’s new ad finery? Darko says:

Fortunately, there are new channels that are emerging and some founders already started having success with them. These recent interviews I did on using TikTok influencers to grow a SaaS and using Reddit outreach are just some examples. Decentralized social networking is also on the way, according to people like Naval, and is just waiting for its Satoshi moment.

I think I understand. Bad news for the Zuckbook. Maybe.

Stephen E Arnold, October 25, 2021

5G, Gee Whiz, Marketing Is Easier Than Making Technology Work

October 25, 2021

One of the interesting characteristics of life in the US in 2021 is that marketing is easier than other types of work. Furthermore, once the marketing copy is written and pushed into the channel, it’s time to take a break. Writing about bits and bytes is much easier than making those restless zeros and ones do what the copywriter said would happen. A good example of this kind of “let’s have lunch” promise, the sort tossed out on a Manhattan sidewalk to a person whom one never wants to see again, appears in “Fake It Until You Make It: 5G Marketing Outpaces Service Reality.”

The real and trusty news report asserts:

An analysis done by OpenSignal released on Thursday found that their testers connected with T-Mobile 5G just 34.7% of the time, AT&T 16.4% of the time and Verizon just 9.7%. And that’s generally not for the fastest 5G many expect.

And the marketing?

The numbers are in stark contrast to what the carriers promise about 5G in their advertisements, showing how much they are banking on 5G as a selling point in the hotly-contested market for cellular service.

This “fake it until you make it” method has been slapped on Banjo (now SafeX.ai), Theranos, and Uber, among others. The idea is that fast talking, jargon, and lots of high school confidence work.

Is this an American characteristic? Nah, the real and trusty journalist notes:

Internationally, the story is similar. South Korea tops the list of best 5G availability at 28.1% of the time, with Saudi Arabia, Kuwait and Hong Kong all above 25%, according to an OpenSignal report from early September.

It’s the Silicon Valley way. It works really well sometimes.

Stephen E Arnold, October 25, 2021

TikTok: A Billion Users Like the Burger Joint

October 25, 2021

I just wanted to document this post from TikTok. “Thanks a Billion!” states:

More than 1 billion people around the world now come to TikTok every month to be entertained as they learn, laugh, or discover something new. We’re honored to be a home for our immensely diverse community of families, small businesses, and creators who transform into our favorite stars.

I noted this statement too:

TikTok has become a beloved part of life for people around the world because of the creativity and authenticity of our creators.

How valuable are TikTok users’ data?

Answer: Really valuable. Unregulated, non-US owned, and chugging along because billions don’t understand short, often weird videos. Mistake.

Stephen E Arnold, October 25, 2021

Google Does Waymo Than Online Advertising

October 22, 2021

If Google Waymo smart vehicles are confused, are other components of Google’s smart software system off track as well? That’s a good question, and it is one that those fond of snorkeling may have to do a deep dive to answer.

“Confused Waymo Robotaxis Keep Flooding Dead-End Street in San Francisco” reports:

Residents of an otherwise quiet neighborhood in San Francisco have been dealing lately with a very weird affliction: the constant buzzing of several Waymo vehicles crowding a dead-end street. The self-driving taxis are flooding the end of 15th Avenue, appearing rather “confused” as they enter the area ….

San Francisco is an interesting city in which to drive. I am easily confused, and when I commuted from Berkeley to San Mateo in Plastic Fantastic County, I would end up in some fascinating places. The Cow Palace parking lot was memorable after a bit of congestion on the 101 forced people like me to seek an alternative route.

The write up points out:

What we know for sure is that Waymo has been trialing its autonomous vehicles in San Francisco since 2008. But as we’ve seen other instances of Alphabet’s robotaxis freaking out, the situation begs the question, what’s going on?

Yep, beta testing, trying to minimize crashing into things, and giving those safety drivers something to enter into their Waymo app.

How long has the Google been wrestling with smart software for smart vehicles? Not long enough maybe?

Stephen E Arnold, October 22, 2021

Auditing Algorithms: A Semi-Tough Task

October 22, 2021

Many years ago, I did a project for a large outfit. The goal was to look at another project and figure out why it was a flop. I assembled an okay team, which beavered away. The end result was that a number of small things had gone wrong. Each added some friction to what on the surface seemed a doable project, and those accumulated “small things” sent the process nowhere.

I thought about this after I read “Twitter’s Own Research Shows That It’s a Megaphone for the Right. But It’s Complicated.”

I circled this statement from the article:

We can see that it is happening. We are not entirely sure why it is happening. To be clear, some of it could be user-driven, people’s actions on the platform, we are not sure what it is.

Now back to failure. Humans expect a specific construct to work in a certain way. When it doesn’t, humans either embrace root cause analysis or just shrug their shoulders and move on.

Several questions:

  • If those closest to a numerical recipe are not sure what’s causing the unexpected outcome, how will third-party algorithm auditors figure out what is happening?
  • Engineering failures, like using a material which cannot tolerate a particular amount of stress, are relatively easy to figure out. Social media “smart” algorithms may be a more difficult challenge. What tools are available for this kind of failure analysis? Do they work, or are they unable to look at a result and pinpoint one or more points of inappropriate performance?
  • When humans and social media interact with complex algorithmic systems, do researchers have the meta-models capable of identifying the cause of failures or performance factors resulting from tiny operations in the collective system?

My hunch is that something new exists to be studied. Was Timnit Gebru, the former Google researcher, on the right track?

Stephen E Arnold, October 22, 2021
