KnowBe4: Leveraging Mitnick

August 21, 2020

Many hackers practice their “art” because they want to beat the system, make easy money, and challenge themselves. White hat hackers are praised for their Batman-style vigilante tactics, but black hat hackers like Kevin Mitnick cannot even be classified as Robin Hoods. The Fast Company article “I Hired An Infamous Hacker-And It Was The Best Decision I Ever Made” tells Stu Sjouwerman’s story about hiring Kevin Mitnick.

Mitnick is a typical child hacker prodigy who learned about easy money through pirated software. He went to prison for a year, violated his parole, and was viewed as an antihero by some and a villain by others. Either way, his background was controversial, and yet Sjouwerman decided to hire him. Sjouwerman was forming a new company centered on “social engineering” or “hacking the human,” terms used to describe tricking people into clicking harmful links or downloading malware-infested attachments. For his new cybersecurity company, Sjouwerman knew he needed a hacker:

“That was a turning point for my startup, KnowBe4. By recruiting Mitnick, we gained invaluable insights about where employees are most vulnerable. We were able to use those insights to develop a practical platform where companies can see where their own employees stumble and, most importantly, train them to recognize and avoid potential pitfalls. This is essential for any business because if all other security options fail, employees become a company’s last line of defense—one unintentional blunder can infect the entire network and bring down the whole company.”

Mitnick’s infamous reputation also gave the new startup a type of legitimacy. Other players in the cybersecurity industry knew about Mitnick’s talents, and putting them to work for white hat purposes gave KnowBe4 an advantage over rivals. Mitnick also became the center of KnowBe4’s marketing strategy because he was a reformed criminal, understood the hacker community, and gave the startup an edgy yet authentic identity.

Hiring Mitnick proved to be the necessary step to make KnowBe4 a reputable and profitable business. It is also a story about redemption, because Mitnick donned the white hat and left his criminal past behind.

Will KnowBe4’s marketing maintain its momentum? Cyber security firms appear to be embracing Madison Avenue techniques. Watch next week’s DarkCyber for a different take on NSO Group’s “in the spotlight” approach to generating cyber intelligence sales.

Whitney Grace, August 21, 2020

Amazon and Toyota: Tacoma Connects to AWS

August 20, 2020

This is just a very minor story. For most people, the information reported in “Toyota, Amazon Web Services Partner On Cloud-Connected Vehicle Data” will be irrelevant. The value of the data collected by the respective firms and their partners is trivial and will not have much impact. Furthermore, any data processed within Amazon’s streaming data marketplace and made available to some of the firm’s customers will be of questionable value. That’s why I am not immediately updating my Amazon reports to include the Toyota and insurance connection.

Now to the minor announcement:

Toyota will use AWS’ services to process and analyze data “to help Toyota engineers develop, deploy, and manage the next generation of data-driven mobility services for driver and passenger safety, security, comfort, and convenience in Toyota’s cloud-connected vehicles. The MSPF and its application programming interfaces (API) will enable Toyota to use connected vehicle data to improve vehicle design and development, as well as offer new services such as rideshare, full-service lease, proactive vehicle maintenance notifications and driving behavior-based insurance.”
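
What might the plumbing look like? Here is a minimal, purely illustrative sketch of pushing connected-vehicle telemetry into an AWS streaming service for downstream analysis. The stream name, record fields, and region are our own hypothetical choices; this is not Toyota’s MSPF and not a published AWS reference design.

```python
# Illustrative sketch only: connected-vehicle telemetry flowing into an AWS
# streaming pipeline. Stream name, fields, and region are hypothetical.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

telemetry = {
    "vehicle_id": "demo-vehicle-001",        # hypothetical identifier
    "timestamp": "2020-08-20T12:00:00Z",
    "speed_kph": 87.5,
    "hard_braking": False,
    "odometer_km": 15234,
}

# Each record lands on a shard keyed by vehicle, ready for downstream consumers
# such as maintenance alerting or usage-based insurance scoring.
kinesis.put_record(
    StreamName="vehicle-telemetry-demo",     # hypothetical stream
    Data=json.dumps(telemetry).encode("utf-8"),
    PartitionKey=telemetry["vehicle_id"],
)
```

Once records like these accumulate in a stream, the analytics and insurance scenarios below become plumbing, not science fiction.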

Are there possible implications from this link-up? Sure, but few people care about Amazon’s commercial, financial, and governmental services, so why think about issues like these:

  • Value of the data to the AWS streaming data marketplace
  • Link analytics related to high risk individuals or fleet owners
  • Significance of the real time data to predictive analytics, maybe to insurance carriers and others?

Nope, not much of a big deal at all. Who cares? Just mash that Buy Now button and move on. Curious about how Amazon ensures data integrity in such a system? If you are, you can purchase our 50-page report about Amazon’s advanced data security services. Just write darkcyber333 at yandex dot com.

But I know firsthand, after two years of commentary, that shopping is more fun than thinking about Amazon examined from a different viewshed.

Stephen E Arnold, August 20, 2020

Informatica: An Old Dog Is Trying to Learn New Tricks?

August 20, 2020

Old dogs. Many people have to pause a moment when standing. Balancing is not the same when one is getting old. Others have to extend an arm, knee, or finger slowly. Joints? Don’t talk about those points of failure to a former athlete. Can bee pollen, a vegan diet, a training session with Glennon Doyle, or an acquisition do the trick?

“Informatica Buys AI Startup for Entity and Schema Matching” explains a digital rejuvenation. The article reports:

Informatica’s latest acquisition extends machine learning capabilities into matching of data entities and schemas.

Entities and schemas are important when fiddling with data. I want to point out that Informatica was founded in 1993 and has been in the data entities and schema business for more than a quarter century. Obviously the future is arriving at the venerable software development company.

The technology employed by Green Bay Technologies is what the article calls “Random Forest” machine learning. The article explains that Green Bay’s method possesses:

the ability to handle more diverse data across different domains, including semi-structured and unstructured data, and a crowd-sourcing approach that improves performance.

The Green Bay method employs:

a machine learning approach where multiple decision trees are run, and then subjected to a crowd sourced consensus process to identify the best results. It is a supervised approach where models are auto generated after the user applies some declarative rules – that is, he or she labels a sample set of record pairs, and from there the system infers “blocking rules” to build the models.
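
For the curious, here is a minimal sketch of the general blocking-plus-random-forest pattern the article describes: label a handful of record pairs, prune the candidate space with a blocking rule, and let an ensemble of decision trees vote on matches. The features, data, and thresholds are hypothetical; this is not Green Bay Technologies’ or Informatica’s code.

```python
# Illustrative sketch of entity matching with blocking + a random forest.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical candidate pairs: each row compares a record from source A with one from source B.
pairs = pd.DataFrame({
    "name_similarity": [0.95, 0.62, 0.88, 0.10],  # e.g., a string-similarity score
    "zip_match":       [1, 0, 1, 0],              # exact match on postal code
    "phone_match":     [1, 0, 0, 0],
    "label":           [1, 0, 1, 0],              # user-supplied: 1 = same entity
})

# A "blocking rule" discards implausible pairs so the model only sees likely candidates.
candidates = pairs[pairs["name_similarity"] > 0.5]

features = ["name_similarity", "zip_match", "phone_match"]
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(candidates[features], candidates["label"])

# The trees vote; the forest's consensus decides whether a new pair is a match.
new_pair = pd.DataFrame([[0.91, 1, 0]], columns=features)
print(model.predict(new_pair))        # -> [1] on this toy data: a predicted match
print(model.predict_proba(new_pair))  # class probabilities from the ensemble
```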

Informatica will add Green Bay’s capabilities to its existing smart software engine called CLAIRE.

The write up does not dig into issues related to performance, overfitting, or dealing with rare outcomes or predictors.

Glennon Doyle does not dwell on her flaws either.

Stephen E Arnold, August 20, 2020

Alphabet Spells Out Actions for YouTubers to Take

August 20, 2020

Coercion is interesting because it can take many forms. An online publication called Digital Journal published “Google Rallies YouTubers Against Australian News Payment Plan.” Let’s assume the information in the write up is accurate. The pivot point for the article is:

Google has urged YouTubers around the world to complain to Australian authorities as it ratchets up its campaign against a plan to force digital giants to pay for news content. Alongside pop-ups warning “the way Aussies use Google is at risk”, which began appearing for Australian Google users on Monday, the tech titan also urged YouTube creators worldwide to complain to the nation’s consumer watchdog.

The idea, viewed from a company’s point of view, seems to be that users can voice their concern about an Australian government decision. The company believes that email grousing will alter a government decision. The assumption is that protest equals an increased likelihood of change. Is this coercion? Let’s assume that encouraging consumer pushback against a government is.

The action, viewed from a government’s point of view, may be that email supporting a US company’s desire to index content and provide it to whomever it pleases is harming the information sector in a country.

The point of friction is that Alphabet Google is a company which operates as if it were a country. The only major difference is that Alphabet Google does not have its own military force, and it operates in a fascinating dimension in which its actions are important, maybe vital, to some government agencies and, therefore, its corporate actions are endorsed or somehow made more important in other spheres of activity.

DarkCyber is interested in monitoring these issues:

  1. How will YouTube data consumers and enablers of Google ad revenue react to their corporate-directed coercive role?
  2. How will the Australian government react to and then accommodate such coercion if it becomes significant?
  3. How will other countries — for example, France, Germany, and the UK — learn from the YouTube coercion initiative?
  4. How will Alphabet Google mutate its coercive tactics to make them more effective?

Of course, the Google letter referenced in the Digital Journal may be a hoax or a bit of adolescent humor. Who pays attention to a super bright person’s high school antics? These can be explained away or deflected with “Gee, I am sorry.”

The real issue is a collision of corporatism and government. The coercion angle, if the write up is accurate, draws attention to a gap between what’s good for the company and what’s good for a country.

The issue may be the responsibility of the Australian Competition and Consumer Commission, but the implications reach to other Australian government entities and to other countries as well. The US regulatory entities have allowed a handful of companies to dominate the digital environment. Coercion may be an upgrade to these monopolies’ toolkits.

But the whole matter may be high school humor, easily dismissed with “it’s a joke” and “we’re sorry. Really, really sorry.”

Stephen E Arnold, August 20, 2020

Surprising Google Data

August 20, 2020

DarkCyber is not sure if these data are accurate. We have had some interesting interactions with NordVPN, and we are skeptical about this outfit. Nevertheless, let’s look beyond a dicey transaction with the NordVPN outfit and focus on the data in “When Looking for a VPN, Chinese Citizens Search for Google.”

The article asserts:

New research by NordVPN reveals that when looking for VPN services on Baidu, the local equivalent of Google, the Chinese are mostly trying to get access to Google – in fact, 40,35% of all VPN service-related searches have to do with Google. YouTube comes second on the list, accounting for 31,58% of all searches. Other research by NordVPN has shown that YouTube holds the most desired restricted content, with 82,7% of Internet users worldwide searching for how to unblock this video sharing platform.

If valid, these data suggest that Google’s market magnetism is powerful. Perhaps a type of quantum search entanglement?

Stephen E Arnold, August 20, 2020

Which Cloud Is Better?

August 20, 2020

DarkCyber noted “AWS Vs. Azure: Key Differences and Business Benefits.” This is a free analysis. The write up offers some interesting points, and we suggest that you consult the original for the nuances and additional details.

Key points we noted:

  • A cloud is mostly neutral. The developer and tech team’s experience and knowledge make the difference.
  • Migration is easier if the technology professional has experience with a particular cloud.
  • More advanced cloud features, such as machine learning, cost more money.
  • Prices are about the same regardless of cloud vendor.
  • AWS is better with open source technology.
  • Cloud providers are becoming adept at matching other cloud vendors’ offerings. But in the hybrid cloud game, Microsoft is number one.

The write up includes a league table. What’s interesting is that the Alibaba cloud business is within spitting distance of Google’s market share.

Stephen E Arnold, August 20, 2020

Insider Threats: Yep, a Problem for Cyber Security Systems

August 20, 2020

The number of cyber threat, security, alerting, and pentesting services is interesting. Cyber security investments have helped cultivate an amazing number of companies. DarkCyber’s research team has a difficult time keeping up with startups, new studies about threats, and systems which are allegedly one step ahead of bad actors. Against this context, two news stories caught our attention. It is too soon to determine if these reports are spot on, but each is interesting.

The first report appeared in Time Magazine’s story “Former CIA Officer Charged With Giving China Classified Information.” China is in the news, and this article reveals that China is or was inside two US government agencies. The story is about what insiders can do when they gather information and pass it to hostile third parties. The problem with insiders is that detecting improper behavior is difficult. There are cyber security firms which assert that their systems can detect these individuals’ actions. If the Time article is accurate, perhaps the US government should avail itself of such a system. Oh, right. The US government has invested in such systems. Time Magazine, at least in my opinion, did not explore what cyber security steps were in place. Maybe a follow up article will address this topic?

The second news item concerns a loss of health related personally identifiable information. The data breach is described in “Medical Data of Auto Accident Victims Exposed Online.” The security misstep allowed a bad actor to abscond with 2.5 million health records. The company responsible for the data loss is a firm engaged in artificial intelligence. The article explains that a PII health record can fetch hundreds of dollars when sold on “the Dark Web.” There is scant information about the security systems in place at this firm. That information strikes me as important.

Several questions come to mind:

  • What cyber security systems were in place and operating when these breaches took place?
  • Why did these systems fail?
  • Are security procedures out of step with what bad actors are actually doing?
  • What systemic issues exist to create what appear to be quite serious lapses?

DarkCyber does not have answers to these questions. DarkCyber is becoming less and less confident in richly funded, over-hyped, and ever fancier smart security systems. Maybe these whizzy new solutions just don’t work?

Stephen E Arnold, August 20, 2020

Apple and Russia

August 19, 2020

We have learned from AppleInsider’s write-up, “Russian Watchdog Says Apple’s App Store Rules and Behaviors Are Anticompetitive,” that Apple is being accused of unfair practices in yet another country. According to Reuters, the Federal Antimonopoly Service of Russia declares that the way Apple runs its online app store gives it an unfair advantage. We note the agency’s leader once allegedly worked for the KGB; we suggest it is unwise to irritate such an individual. Writer Mike Peterson gives details of the allegations:

“The FAS’s ruling cites the need for users to download iOS apps from the official App Store, and claimed that Apple has ‘unlawfully reserved rights’ to block any third-party app from the marketplace. The watchdog also signaled that it would issue an order demanding that Apple resolve its alleged regulatory abuses. The FAS launched its investigation following a formal complaint by cybersecurity firm Kaspersky Lab. The company issued the complaint after Apple blocked its ‘Safe Kids’ parental control app from the App Store, citing child privacy and security concerns. At the time, Apple’s removal of those parental control apps prompted concerns that the company was quashing competition of its Screen Time feature. Apple responded, stating that the use of mobile device management (MDM) and other tools in the apps presented a security risk.”

Peterson reminds us Apple is also facing antitrust investigations in the US and Europe. The EU probe, we’re told, was launched in response to charges by Apple Music competitor Spotify.

Russia’s government has options that some strategists at Apple may underweight. Telegram, for example, found cooperation a more pragmatic way to deal with Russian authorities. Why? Perhaps it was firsthand knowledge of certain bureaucratic features of the Russian government’s mechanisms?

Some companies want to function as if they were countries. Some countries find that approach untenable.

Cynthia Murrell, August 19, 2020

Aussie Agency Accuses Google of Misleading Consumers

August 19, 2020

Our beloved Google misleading consumers? Say it isn’t so! The Australian Competition & Consumer Commission (ACCC) announces: “ACCC Alleges Google Misled Consumers About Expanded Use of Personal Data.” The commission has begun federal court proceedings against the company, saying it failed to adequately notify users about a change made to its privacy policy in 2016. Basically, it swapped out the promise, “We will not combine DoubleClick cookie information with personally identifiable information unless we have your opt in consent,” for the sentence, “Depending on your account settings, your activity on other sites and apps may be associated with your personal information in order to improve Google’s services and the ads delivered by Google.” To those who do not follow developments in the world of data, that sounds like a neutral thing at worst, perhaps even helpful. However, the post explains:

“Before June 2016, Google only collected and used, for advertising purposes, personally identifiable information about Google account users’ activities on Google owned services and apps like Google Search and YouTube. After June 2016, when consumers clicked on the ‘I agree’ notification, Google began to collect and store a much wider range of personally identifiable information about the online activities of Google account holders, including their use of third-party sites and apps not owned by Google. Previously, this additional data had been stored separately from a user’s Google account. Combined with the personal data stored in Google accounts, this provided Google with valuable information with which to sell even more targeted advertising, including through its Google Ad Manager and Google Marketing Platform brands. The ACCC alleges that the ‘I agree’ notification was misleading, because consumers could not have properly understood the changes Google was making nor how their data would be used, and so did not – and could not – give informed consent.”

As ACCC Chair Rod Sims points out, these third-party sites can include some “very sensitive and private information.” He also takes an interesting perspective—since Google is raking in more ad revenue from this personal data, and users essentially pay for its services with their data, the policy change amounted to an inadequately announced price hike. See the article for details on how Google implemented these changes in 2016.

We’re reminded Google acquired ad-serving firm DoubleClick in 2008, but it has since referred to the system as simply “Google technology” in its privacy policy. The technology tracks users all over the web to provide more personalized, and lucrative, advertising. With some imagination, though, one can think of many more uses for this information. Users should certainly be aware of the implications.

Cynthia Murrell, August 19, 2020

Deepfakes and Other AI Threats

August 19, 2020

As AI technology matures, it has greater and greater potential to facilitate bad actors. Now, researchers at University College London have concluded that falsified audio and video content poses the greatest danger. The university announces its results on its news page in “‘Deepfakes’ Ranked as Most Serious AI Crime Threat.” The post relates:

“The study, published in Crime Science and funded by the Dawes Centre for Future Crime at UCL (and available as a policy briefing), identified 20 ways AI could be used to facilitate crime over the next 15 years. These were ranked in order of concern – based on the harm they could cause, the potential for criminal profit or gain, how easy they would be to carry out and how difficult they would be to stop. Authors said fake content would be difficult to detect and stop, and that it could have a variety of aims – from discrediting a public figure to extracting funds by impersonating a couple’s son or daughter in a video call. Such content, they said, may lead to a widespread distrust of audio and visual evidence, which itself would be a societal harm.”

Is the public ready to take audio and video evidence with a grain of salt? And what happens when we do? It is not as though first-hand witnesses are more reliable. The rest of the list presents five more frightening possibilities: using driverless vehicles as weapons; crafting more specifically tailored phishing messages; disrupting AI-controlled systems (like power grids, we imagine); large-scale blackmail facilitated by raking in data from the Web; and one of our favorites, realistic AI-generated fake news. The post also lists some crimes of medium- and low-concern. For example, small “burglar bots” could be thwarted by measures as simple as a letterbox cage. The write-up describes the study’s methodology:

“Researchers compiled the 20 AI-enabled crimes from academic papers, news and current affairs reports, and fiction and popular culture. They then gathered 31 people with an expertise in AI for two days of discussions to rank the severity of the potential crimes. The participants were drawn from academia, the private sector, the police, the government and state security agencies.”

Dawes Centre Director Shane Johnson notes that, as technology evolves, we must anticipate potential threats so policy makers and others can keep up. Yes, that would be nice. Johnson promises more reports are in the organization’s future. Stay tuned.

Cynthia Murrell, August 19, 2020
