Multiple Vendors Form Alliance to Share Threat Intelligence

October 20, 2016

To tackle the rising number of digital security threats, several threat intelligence vendors have formed an alliance to share the intelligence each of them gathers.

An article that appeared on Network World titled Recorded Future aligns with other threat intelligence vendors states:

With the Omni Intelligence Partner Network, businesses that are customers of both Recorded Future and participating partners can import threat intelligence gathered by the partners and display it within Intelligence Cards that are one interface within Recorded Future’s platform.

Apart from intelligence, the consortium will also share IP addresses that may be the origin points of potential threats. Led by Recorded Future, the other members of the alliance include FireEye iSIGHT, Resilient Systems, and Palo Alto Networks.

We had earlier suggested the formation of an inter-governmental alliance that could be utilized for sharing incident reports in a seamless manner. The premise was:

Intelligence gathered from unstructured data on the Internet such as security blogs that might shed light on threats that haven’t been caught yet in structured-data feeds

The advent of the Internet of Things (IoT) will exacerbate these problems for the connected world. Will the Omni Intelligence Partner Network succeed in preempting those threats?

Vishal Ingole, October 20, 2016
Sponsored by, publisher of the CyberOSINT monograph


Google Cloud, Azure, and AWS Differences

October 18, 2016

With so many options for cloud computing, it can be confusing to decide which one to use for your personal or business files.  Three of the most popular cloud computing options are Amazon Web Services (AWS), Google Cloud Platform, and Microsoft Azure.  Beyond the pricing, the main differences lie in which services they offer and what they call them.  Site Point did us a favor with its article comparing the different cloud services: “A Side-By-Side Comparison Of AWS, Google Cloud, And Azure.”

Cloud computing has the great benefit of offering flexible price options, but pricing can often get very intricate based on how much processing power you need, how many virtual servers you deploy, where they are deployed, and so on.  AWS, Azure, and Google Cloud do offer canned solutions along with individual ones.

AWS has the most extensive service array, but they are also the most expensive.  It is best to decide how you want to use cloud computing because prices will vary based on the usage and each service does have specializations.  All three are good for scalable computing on demand, but Google is less flexible in its offering, although it is easier to understand the pricing.  Amazon has the most robust storage options.

When it comes to big data:

This requires very specific technologies and programming models, one of which is MapReduce, which was developed by Google, so maybe it isn’t surprising to see Google walking forward in the big data arena by offering an array of products — such as BigQuery (managed data warehouse for large-scale data analytics), Cloud Dataflow (real-time data processing), Cloud Dataproc (managed Spark and Hadoop), Cloud Datalab (large-scale data exploration, analysis, and visualization), Cloud Pub/Sub (messaging and streaming data), and Genomics (for processing up to petabytes of genomic data). Elastic MapReduce (EMR) and HDInsight are Amazon’s and Azure’s take on big data, respectively.
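The MapReduce model the quote credits to Google boils down to three stages: map input records to key-value pairs, shuffle the pairs by key, and reduce each group to a result. The toy word count below is a minimal Python sketch of that model; the function names and sample documents are our own illustration, not tied to any of the products listed above:

```python
from collections import defaultdict

def map_phase(documents):
    """Emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.lower().split():
            yield word, 1

def shuffle(pairs):
    """Group intermediate pairs by key, as the framework would between stages."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Sum the counts collected for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data needs big tools", "data tools scale"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["big"], counts["data"])  # 2 2
```

Services like EMR, HDInsight, and Cloud Dataproc run this same pattern, but distribute the map and reduce work across clusters of machines.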

Without getting too much into the nitty gritty, each of the services has its strengths and weaknesses.  If one of the canned solutions does not work for you, read the fine print to learn how cloud computing can help your project.

Whitney Grace, October 18, 2016
Sponsored by, publisher of the CyberOSINT monograph

Pattern of Life Analysis to Help Decrypt Dark Web Actors

October 18, 2016

Google-funded Recorded Future plans to use technologies like natural language processing, social network analysis, and temporal pattern analysis to track Dark Web actors. This, in turn, will help security professionals detect patterns and thwart security breaches well in advance.

An article titled Decrypting The Dark Web: Patterns Inside Hacker Forum Activity that appeared on DarkReading points out:

Most companies conducting threat intelligence employ experts who navigate the Dark Web and untangle threats. However, it’s possible to perform data analysis without requiring workers to analyze individual messages and posts.

Recorded Future, which deploys around 500 to 700 servers across the globe, monitors Dark Web forums to identify and categorize participants based on their language and geography. Using advanced algorithms, it then identifies individuals, and their aliases, who are involved in various fraudulent activities online. This is a type of automation where AI is deployed rather than relying on human intelligence.

The major flaw in this method is that bad actors do not necessarily use the same or even similar aliases or handles across different Dark Web forums. Christopher Ahlberg, CEO of Recorded Future, who is leading the project, says:

A process called mathematical clustering can address this issue. By observing handle activity over time, researchers can determine if two handles belong to the same person without running into many complications.
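Recorded Future has not published the details of its clustering, but the idea of linking handles by observing their activity over time can be illustrated with a toy sketch: build a posting-hour histogram per handle and compare histograms with cosine similarity. All handles and timestamps below are made up for illustration:

```python
import math

def hour_histogram(post_hours, bins=24):
    """Count how often a handle posts in each hour of the day."""
    hist = [0.0] * bins
    for h in post_hours:
        hist[h % bins] += 1
    return hist

def cosine(a, b):
    """Cosine similarity between two activity histograms (0 = disjoint, 1 = identical shape)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical posting times (hour of day) for three forum handles.
handle_a = [2, 3, 3, 4, 2, 3, 23]
handle_b = [3, 2, 4, 3, 3, 2, 22]        # active at nearly the same hours as A
handle_c = [13, 14, 15, 14, 13, 15, 16]  # a very different daily rhythm

sim_ab = cosine(hour_histogram(handle_a), hour_histogram(handle_b))
sim_ac = cosine(hour_histogram(handle_a), hour_histogram(handle_c))
```

A real pipeline would use far richer features (vocabulary, forums visited, session lengths), but the principle is the same: handles with similar temporal fingerprints become candidates for the same author.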

Again, researchers, and not AI or intelligent algorithms, will have to play a crucial role in identifying the bad actors. What’s interesting to note is that Google, which pretty much dominates information on the Open Web, is trying to make inroads into the Dark Web through many of its fronts. The question is: will it succeed?

Vishal Ingole, October 18, 2016
Sponsored by, publisher of the CyberOSINT monograph

New EU Legislation on Terrorist Content

October 12, 2016

Balancing counterterrorism with digital rights continues to be a point of discussion. An article from Ars Technica, EU parliament pushes ahead with plans to block, remove terrorist content online, reiterates this point. Now, national authorities are required to ensure actions are taken to remove illegal content hosted from within their territory that “constitutes public incitement to commit a terrorist offence”. If this is not feasible, they may take the necessary measures to block access to such content. The perspective of Parliament’s chief negotiator, German MEP Monika Hohlmeier, is shared:

Hohlmeier said that the proposal strikes the right balance between security on the one hand and data protection and freedom of expression on the other. “It’s not so much a question of whether terrorists are using particular ways to hide on the Internet, or encryption, but they very often have perfect propaganda machinery. Our approach is to try to close websites, and if this is not possible to block these Internet websites,” she said. She added that enhanced cooperation was needed between police and justice authorities as well as private actors.

European digital rights organisation EDRi asserts that speed of action is taking undue priority over “legislation fit for the purpose.” Perhaps there is an opportunity for cyber security technology developed by justice authorities and the private sector to walk the fine line between censorship and counterterrorism.

Megan Feil, October 12, 2016
Sponsored by, publisher of the CyberOSINT monograph

Busted Black Marketplace Pops Back Up

October 5, 2016

In June, a vendor of access to hacked servers, xDedic, was taken down. Now, reports intelligence firm Digital Shadows, it has resurrected itself as a Tor domain. Why am I suddenly reminded of the mythical hydra? We learn of the resurgence from SecurityWeek’s article, “Hacked Server Marketplace Returns as a Tor Domain.” The article tells us:

After Kaspersky Lab researchers revealed in mid-June that they counted over 70,000 hacked servers made available for purchase on xDedic, some for as low as just $6, the marketplace operators closed the virtual shop on June 16. However, with roughly 30,000 users a month, the storefront was too popular to disappear for good, and intelligence firm Digital Shadows saw it re-emerge only a week later, but as a Tor domain now.

In an incident report shared with SecurityWeek, Digital Shadows reveals that a user named xDedic posted on 24 Jun 2016 a link to the new site on the criminal forum exploit[.]in. The user, who ‘had an established reputation on the forum and has been previously identified as associated with the site,’ posted the link on a Russian language forum thread titled ‘xDedic ???????’ (xDedic burned).

We’re told that, though the new site looks just like the old site, the user accounts did not tag along. The now-shuttered site was attracting about 30,000 users monthly, so it should not take long to rebuild the client list. Researchers are not able to assess the site’s traffic, since it is now a Tor domain, but both Digital Shadows and Kaspersky Lab, another security firm, are “monitoring the situation.” We can rest assured they will inform law enforcement when they have more information.

Cynthia Murrell, October 5, 2016
Sponsored by, publisher of the CyberOSINT monograph

You Too Can Be an Expert Searcher

October 4, 2016

One would think that in the days of instant information, we all would be expert searchers who know how to find any fact.  The problem is that most people type entire questions into search engines and allow natural language processing to do the hard labor.  There is a smarter way to search than lazy question typing, and Geek Squad has a search literacy guide you might find useful: “Search Engine Secrets: Find More With Google’s Hidden Features.”

What very few people know (except us search gurus) is that search engines have hidden tricks you can use to find your results quicker and make searching easier.  While Google is the standard search engine and all these tricks are geared toward it, they will also work with other search engines.  The standard way to search is by typing a query into the search bar, and some of these typing tricks are old school, such as using quotation marks for an exact phrase, searching one specific Web site, wildcards, Boolean operators, and using a minus sign (-) to exclude terms.
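To make those operators concrete, here are a few illustrative query strings of our own (not taken from the Geek Squad guide), each pairing a trick with the way you would type it into the search bar:

```python
# Illustrative Google-style queries for common typing tricks.
queries = {
    "exact phrase":      '"enterprise search"',          # quotation marks
    "one specific site": "site:example.com monograph",   # site: operator
    "wildcard":          '"largest * in the world"',     # * stands in for a word
    "boolean operator":  "solr OR elasticsearch",        # OR must be uppercase
    "exclude a term":    "jaguar -car",                  # minus sign drops a term
}
for purpose, query in queries.items():
    print(f"{purpose}: {query}")
```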

Searching for pictures is a much newer search form and is usually done by clicking on the image search on a search engine.  However, did you know that most search engines also have the option to search with an image itself?  With Google, simply drag and drop an image into the search bar to start the process.  There are also delimiters on image search to filter results by specifics, such as GIFs, size, color, and others.

Even newer than image search is voice search with a microphone.  Usually, voice search is employed with a digital assistant like Cortana or Siri.  Some voice search commands are:

  • Find a movie: What movies are playing tonight? or Where’s Independence Day playing?
  • Find nearby places: Where’s the closest cafe?
  • Find the time: What time is it in Melbourne?
  • Answer trivia questions: Where was Albert Einstein born? or How old is Beyonce?
  • Translate words or phrases: How do you say milk in Spanish?
  • Define a word: What does existentialism mean?
  • Convert between units: What’s 16 ounces in grams?
  • Solve a math problem: What’s the square root of 2,209?
  • Book a restaurant table: Book a table for two at Dorsia on Wednesday night.

The only problem is that only the typing tricks transfer to professional research.  They are used at universities, research institutes, and even large companies.  The biggest hurdle is that people in those organizations do not know how to use them.

Whitney Grace, October 4, 2016
Sponsored by, publisher of the CyberOSINT monograph

World-Check Database Leaked by Third Party

October 4, 2016

This is the problem with sensitive data—it likes to wander from its confines. Motherboard reports, “Terrorism Database Used by Governments and Banks Leaked Online.” Security researcher Chris Vickery reported stumbling upon a copy of the World-Check intelligence database from mid-2014 that was made available by a third party. The database is maintained by Thomson Reuters for use by governments, intelligence agencies, banks, and law firms to guard against risks. Reporter Joseph Cox specifies:

Described by Thomson Reuters as a ‘global screening solution,’ the World-Check service, which relies on information from all over the world, is designed to give deep insight into financial crime and the people potentially behind it.

‘We monitor over 530 sanctions, including watch and regulatory law and enforcement lists, and hundreds of thousands of information sources, often identifying heightened-risk entities months or years before they are listed. In fact, in 2012 alone we identified more than 180 entities before they appeared on the US Treasury Office of Foreign Assets Control (OFAC) list based on reputable sources identifying relevant risks,’ the Thomson Reuters website reads.

A compilation of sensitive data like the World-Check database, though built on publicly available info, is subject to strict European privacy laws. As a result, it is (normally) only used by carefully vetted organizations. The article notes that, much like the U.S.’s No Fly List, World-Check has been known to flag the innocent on occasion.

Though Vickery remained mum on just how and where he found the data, he did characterize it as a third-party leak, not a hack. Thomson Reuters reports that the leak is now plugged, and they have secured a promise from that party to never leak the database again.

Cynthia Murrell, October 4, 2016
Sponsored by, publisher of the CyberOSINT monograph

Recent Developments in Deep Learning Architecture from AlexNet to ResNet

September 27, 2016

The article on GitHub titled The 9 Deep Learning Papers You Need To Know About (Understanding CNNs Part 3) is not about the global media giant CNN but rather about advancements in computer vision and convolutional neural networks (CNNs). The article frames its discussion around the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC), which it terms the “annual Olympics of computer vision…where teams compete to see who has the best computer vision model for tasks such as classification, localization, detection and more.” The article explains that the 2012 winners and their network (AlexNet) revolutionized the field.

This was the first time a model performed so well on a historically difficult ImageNet dataset. Utilizing techniques that are still used today, such as data augmentation and dropout, this paper really illustrated the benefits of CNNs and backed them up with record breaking performance in the competition.

In 2013, CNNs flooded in, and ZF Net was the winner with an error rate of 11.2% (down from AlexNet’s 15.4%). Prior to AlexNet, though, the lowest error rate was 26.2%. The article also discusses other progress in general network architecture, including VGG Net, which emphasized the depth and simplicity CNNs need for hierarchical data representation, and GoogLeNet, which tossed the deep-and-simple rule out of the window and paved the way for future creative structuring using the Inception module.
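For readers new to CNNs, the basic operation every one of these networks repeats is the convolution: a small filter slides over the image and produces a feature map. The pure-Python sketch below is our own toy illustration, applying a simple vertical-edge filter to a tiny image (and, as in most deep learning libraries, it actually computes cross-correlation):

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as CNN layers use) over nested lists."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            # Multiply the kernel against the patch under it and sum.
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A tiny image with one vertical edge, and a vertical-edge-detecting filter.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [[1, -1],
          [1, -1]]
feature_map = conv2d(image, kernel)  # non-zero only where the edge sits
```

AlexNet and its successors stack dozens of such filters per layer and learn the kernel values from data rather than hand-coding them.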

Chelsea Kerwin, September 27, 2016
Sponsored by, publisher of the CyberOSINT monograph
There is a Louisville, Kentucky Hidden Web/Dark Web meet up on September 27, 2016.
Information is at this link:

The Design of Our Future

September 26, 2016

An article at Co.Exist suggests we all pause to consider what we want our world to look like, in “We Need To Spend More Time Questioning Our Technology-Driven Future.” Along with the boundless potential of today’s fast-evolving technology come consequences, many of them unforeseen. Writer Ben Schiller cites futurist Gerd Leonhard, author of the book, Technology vs. Humanity. Far from a modern Luddite, Leonhard is a consultant for Google and a daily advocate for the wonders of advancing technology. His thorough understanding of the topic allows him to see potential pitfalls, as well.

The shape of technology today calls for society to update the way it approaches doing business, says Leonhard, and move past the “industrial-age paradigm of profit and growth at all costs, or some outmoded technological imperative that may have served us well in the 1980s.” He also points to the environmental problems created by fossil fuel companies as an example—if we aren’t careful, the AI and genetic engineering fields could develop their own “externalities,” or problems others will pay for, one way or another. Can we even imagine all the ways either of those fields could potentially cause harm?

Schiller writes of Leonhard:

The futurist outlines a philosophy he calls ‘exponential humanism’—the human equivalent of exponential technology. As a species we’re not developing the necessary skills and ethical frameworks to deal with technology that’s moving faster than we are, he says. We may be able to merge biology and technology, augment our minds and bodies, become superhuman, end disease, and even prolong life. But we’re yet to ask ourselves whether, for example, extending life is actually a good thing (as a society—there will always be individuals who for some reason want to live to 150). And, more to the point, will these incredible advances be available to everyone, or just a few people? To Leonhard, our current technological determinism—the view that technology itself is the purpose—is as dangerous as Luddism was 200-odd years ago. Without moral debate, we’re trusting in technology for its own sake, not because it actually improves our lives.

The write-up gives a few ideas on how to proactively shape our future. For example, Facebook could take responsibility for the content on its site instead of resting on its algorithm. Leonhard also suggests companies that replace workers with machines pay a tax that would help soften the blow to society, perhaps even funding a minimum guaranteed income. Far-fetched? Perhaps. But in a future with fewer jobs and more freely-available products, a market-driven economy might just be doomed. If that is the case, what would we prefer to see emerge in its place?

Cynthia Murrell, September 26, 2016
Sponsored by, publisher of the CyberOSINT monograph

Geoparsing Is More Magical Than We Think

September 23, 2016

The term geoparsing sounds like it has something to do with cartography, but according to Directions Magazine in the article “Geoparsing Maps The Future Of Text Documents,” it is more like an alchemical spell.  Geoparsing refers to converting text documents into a geospatial database that allows entity extraction and disambiguation (aka geotagging).  It relies on natural language processing and is generally used to analyze text document collections.

While it might appear that geoparsing is magical, it actually is a complex technological process that relies on data to put information into context.  Places often have the same name, so disambiguation can struggle to assign the correct tags.  Geoparsing has important applications, such as:

Military users will not only want to exploit automatically geoparsed documents, they will require a capability to efficiently edit the results to certify that the place names in the document are all geotagged, and geotagged correctly. Just as cartographers review and validate map content prior to publication, geospatial analysts will review and validate geotagged text documents. Place checking, like spell checking, allows users to quickly and easily edit the content of their documents.

The article acts as a promo piece for the GeoDoc application; however, it does delve into the details of how geoparsing works and its benefits.
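GeoDoc’s internals are not described, but the two steps the article names, extraction and disambiguation, can be sketched with a toy gazetteer lookup. Everything below (the gazetteer entries, the population figures, and the naive pick-the-biggest tie-breaker) is our own illustration:

```python
# Toy gazetteer: place name -> candidate (country, lat, lon, population) tuples.
# "Paris" and "London" are deliberately ambiguous to show disambiguation.
GAZETTEER = {
    "paris":  [("France", 48.86, 2.35, 2_100_000),
               ("United States", 33.66, -95.55, 25_000)],   # Paris, Texas
    "london": [("United Kingdom", 51.51, -0.13, 8_900_000),
               ("Canada", 42.98, -81.25, 400_000)],         # London, Ontario
}

def geoparse(text):
    """Extract known place names from text and geotag each with its top candidate."""
    tags = []
    for token in text.lower().replace(",", " ").split():
        candidates = GAZETTEER.get(token)
        if candidates:
            # Naive disambiguation: prefer the most populous candidate.
            best = max(candidates, key=lambda c: c[3])
            tags.append((token, best[0], best[1], best[2]))
    return tags

tags = geoparse("Flights from Paris to London resume")
```

A production geoparser would disambiguate using NLP context (nearby country names, co-occurring places) rather than raw population, which is exactly where the “complex technological process” the article mentions comes in.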

Whitney Grace, September 23, 2016
Sponsored by, publisher of the CyberOSINT monograph
