Enterprise Technology Perspective on Preventing Security Breaches

September 16, 2016

When it comes to the Dark Web, enterprises want solutions that prevent security breaches. Fort Scale released an article, Dark Web — Tor Use is 50% Criminal Activity — How to Detect It, speaking to this audience. The write-up explains that the anonymizer Tor is short for The Onion Router, a name derived from the multiple layers used to hide an IP address and therefore the user’s identity. How does security software work to detect Tor users? We learned,

There are a couple of ways security software can determine if a user is connecting via the Tor network. The first way is through their IP address. The list of Tor relays is public, so you can check whether the user is coming from a known Tor relay. It’s actually a little bit trickier than that, but a quality security package should be able to alert you if user behaviors include connecting via a Tor network. The second way is by looking at various application-level characteristics. For example, a good security system can distinguish the differences between a standard browser and a Tor Browser because among other things, Tor software won’t respond to certain history requests or JavaScript queries.

Many cybersecurity software companies offer solutions that monitor the Dark Web for sensitive data, which is more of a recovery strategy. This article, however, highlights the importance of cybersecurity solutions that monitor enterprise system usage to identify users connecting through Tor. While this appears to be a sound strategy for understanding the frequency of Tor-based users, it will be important to know whether these data-producing software solutions facilitate action, such as removing Tor users from the network.
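The first detection method described in the quote, checking a connecting IP against the public list of Tor relays, can be sketched in a few lines. The relay addresses below are a hypothetical sample; a real deployment would refresh its list from a published Tor relay directory rather than hard-code it.

```python
# Minimal sketch: flag connections originating from known Tor relays.
# The relay addresses are hypothetical samples for illustration only;
# production software would periodically pull the published relay list.

def is_tor_connection(ip: str, known_relays: set[str]) -> bool:
    """Return True if the connecting IP matches a known Tor relay."""
    return ip in known_relays

# hypothetical sample of relay IPs
known_relays = {"185.220.101.1", "199.87.154.255"}

print(is_tor_connection("185.220.101.1", known_relays))  # True
print(is_tor_connection("10.0.0.5", known_relays))       # False
```

A real product would combine this lookup with the application-level fingerprinting the quote mentions, since relay lists lag behind the live network.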

Megan Feil, September 16, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
There is a Louisville, Kentucky Hidden Web/Dark Web meet up on September 27, 2016.
Information is at this link: https://www.meetup.com/Louisville-Hidden-Dark-Web-Meetup/events/233599645/

UltraSearch Releases Version 2.1

September 16, 2016

Now, after more than a year, we have a new version of a popular alternative to Windows’ built-in Desktop Search, UltraSearch. We learn the details from the write-up at gHacks.net, “UltraSearch 2.1 with File Content Search.” The application works by accessing a system’s master file table, so results appear almost instantly. Writer Martin Brinkmann informs us:

The list of changes on the official UltraSearch project website is long. While some of them may affect only some users, others are useful or at least nice to have for all. Jam Software, the company responsible for the search program, have removed the advertising banner from the program. There is, however, a new ‘advanced search’ menu option which links to the company’s TreeSize program in various ways. TreeSize is available as a free and commercial program.

As far as functional changes are concerned, these are noteworthy:

  1. File results are displayed faster than before.
  2. New File Type selection menu to pick file groups or types quickly (video files, Office files).
  3. Command line parameters are supported by the program now.
  4. The drive list was moved from the bottom to the top.
  5. The export dialog displays a progress dialog now.
  6. You may deactivate the automatic updating of the MFT index under Options > Include file system changes.

Brinkmann emphasizes that these are but a few of the changes in this extensive update, and suggests Windows users who have rejected it before give it another chance. We remind you, though, that UltraSearch is not your only Windows Desktop Search alternative. Some others include FileSearchEX, Gaviri Pocket Search, Launchy, Locate32, Search Everything, Snowbird, Sow Soft’s Effective File Search, and Super Finder XT.

Launched back in 1997, Jam Software is based in Trier, Germany. The company specializes in software tools that address common problems faced by users, developers, and organizations, like TreeSize, SpaceObserver, and, of course, UltraSearch. Though free versions of each are available, the company makes its money by enticing users to invest in the enhanced, professional versions.

Cynthia Murrell, September 16, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Law Enforcement Utilizes New and Traditional Methods for Dark Web Matters

September 15, 2016

While the Dark Web may be thought of as a home to drug dealers, several individuals have been apprehended by law enforcement. Edinburgh News published a report: FBI Helps Catch Edinburgh Man Selling Drugs on ‘Dark Web’. David Trail was convicted of creating Topix2, an eBay-like marketplace hosted on the Dark Web. Stolen credit card information from his former employer, Scotweb, was found in the search of his home. The article states,

Detective Inspector Brian Stuart, of the Cybercrime Unit, said: ‘Following information from colleagues in FBI, Germany’s West Hessen Police and the UK’s National Crime Agency, Police Scotland identified David Trail and his operation and ownership of a hidden website designed to enable its users to buy and sell illegal drugs anonymously and beyond the reach of law enforcement. His targeting of a previous employer, overcoming their security, almost had a devastating effect on the company’s ability to remain in business.

As this piece notes, law enforcement used a combination of new and traditional policing techniques to apprehend Trail. Another common practice we have been seeing is the cooperation of intelligence authorities across borders — and across levels of law enforcement. In the Internet age this is a necessity, and even more so when the nature of the Dark Web is taken into account.

Megan Feil, September 15, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

SLI Search: Loss Narrows for $35 Million Business

September 14, 2016

SLI Systems offers an eCommerce search system. If you followed the history of NBC’s search efforts, you may know that SLI Systems has some DNA from Snap Search. The company is an interesting one. It competes with EasyAsk, another eCommerce search vendor.

SLI released its financial results in a news release titled “SLI Systems Announces Financial Research for the Year to 30 June 2016.” (Some news releases have a way of disappearing or becoming a pay-to-play feature. The release was online and free as of September 6, 2016.)

The write up confirmed what most stakeholders in search and content processing systems may avoid thinking about: Generating revenue in today’s economic climate is difficult.

SLI Systems is a $35 million company. The firm lost several big accounts for a range of reasons. The good news is that instead of losing $7 million as in FY2015, SLI reported a before-tax loss of $162,000. There are no details about what caused the hefty loss 12 months ago or what the new management team did to reduce the shortfall by almost $7 million. Great management? Magic?

I circled this chunk of management explanation:

SLI Systems Chairman Greg Cross said: “The 2016 financial year has been a period of significant change for the company. Chris Brennan took over as Chief Executive Officer in October 2015 and since then we have recruited three key executives: a new Chief Revenue Officer, a new Chief Marketing Officer and a new Vice President of Customer Success. Drawing on the expertise of these new recruits and the broader management team, SLI has put in place new business processes and organizational structures to lift the performance of the business for the long term.

He added:

“The company remains in a strong financial position. Although we expect net cash outflows in the coming year as we return to a growth trajectory, we remain confident that we have sufficient cash resources to support the company’s plan. We are looking forward to the remainder of the year with cautious optimism,” Mr. Cross said.

SLI is based in New Zealand. The most recent version of the company’s Web site does not make it easy to locate the company’s address: 78-106 Manchester Street, Christchurch 8011, New Zealand; phone 0800 754 797. The company’s office appears to be in the Enterprise Precinct Innovation Center. The firm also has an office in San Jose, California. SLI’s office locations are available at this link.

Stephen E Arnold, September 14, 2016

Is the UK Tolling the App Death Knell for Government Services?

September 14, 2016

The article titled Why Britain Banned Mobile Apps on GovInsider introduces Ben Terrett and the innovative UK Government Digital Service program, the first of its kind in the world. Terrett spearheaded a strict “no apps” policy in favor of websites while emphasizing efficiency, clarity, cost savings, and relevance of the information. This all adds up to a simple and streamlined experience for UK citizens. Terrett explains why this approach is superior in an app-crazed world,

Apps are “very expensive to produce, and they’re very very expensive to maintain because you have to keep updating them when there are software changes,” Terrett says. “I would say if you times that by 300, you’re suddenly talking about a huge team of people and a ton of money to maintain that ecosystem”…Sites can adapt to any screen size, work on all devices, and are open to everyone to use regardless of their device.

So what do these websites look like? They are clean, simple, and operated under the assumption that “Google is the homepage.” Terrett measures the success of a given digital service by monitoring how many users complete a transaction, or how many continue to search for additional information, documents, or services. Terrett’s argument against apps is a convincing one, especially based on the issue of cutting expenses. Whether this argument translates into the private sector is another question.

Chelsea Kerwin, September 14, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

HonkinNews, September 13, 2016 Now Available

September 13, 2016

Interested in having your polynomials probed? The Beyond Search weekly news explains this preventive action. In this week’s program you will learn about Google’s new enterprise search solution. Palantir is taking legal action against an investor in the company. IBM Watson helps out at the US Open. Catch up on the search, online, and content processing news that makes enterprise procurement teams squirm. Dive in with Springboard and Pool Party. To view the video, click this link.

Kenny Toth, September 13, 2016

True or False: Google Fakes Results for Social Engineering

September 13, 2016

Here in Harrod’s Creek, we love the Alphabet Google thing. When we read anti-Google articles, we are baffled. Why don’t these articles love and respect the GOOG as we do? A case in point is “How Google’s Search Engines Use Faked Results for Social Engineering.” The loaded words “faked results” and “social engineering” put us on our guard.

What is the angle the write up pursues? Let’s look.

I highlighted this passage as a way to get my intellectual toe in the murky water:

Google published an “overview” of how SEO works, but in a nutshell, Google searches for the freshest, most authoritative, easiest-to-display (desktop/laptop and mobile) content to serve its search engine users. It crawls, caches (grabs) content, calculates the speed of download, looks at textual content, counts words to find relevance, and compares how it looks on different sized devices. It not only analyzes what other sites link to it, but counts the number of these links and then determines their quality, meaning the degree to which the links in those sites are considered authoritative. Further, there are algorithms in place that block the listing of “spammy” sites, although, spam would not be relevant here. And recently, they have claimed to boost sites using HTTPS to promote security and privacy (fox henhouse?).

I am not sure about the “fox hen house” reference because the fox is a popular burgoo addition. As a result, the critters are few and far between. Too bad. They are tasty, and their tails make nifty additions to cold weather parkas.
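The link-counting and link-quality idea in the quoted overview can be illustrated with a toy version of link-based authority scoring. This is a classroom-style sketch of the general technique, not Google’s actual ranking algorithm, and the three-page link graph is hypothetical.

```python
# Toy sketch of link-based authority scoring (a simplified PageRank).
# It illustrates "count the links, then weight them by the authority of
# the linker" from the quoted overview; production ranking systems layer
# many more signals on top of this.

def pagerank(links, damping=0.85, iters=50):
    """links: {page: [pages it links to]}; returns {page: score}."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outs in links.items():
            if not outs:
                continue
            for target in outs:
                # each page passes its authority to the pages it links to
                new[target] += damping * rank[page] / len(outs)
        rank = new
    return rank

# hypothetical three-page link graph
scores = pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]})
print(sorted(scores, key=scores.get, reverse=True))
```

Even in this tiny graph, a page linked from well-linked pages outranks a page with fewer or weaker inbound links, which is the behavior the quoted passage describes.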

The author of the write up is not happy with how Google responds to a query for “Jihad.” I learned:

Google’s search results give pride of place to IslamicSupremeCouncil.org. The problem, according to the write up, is that this site is not a big hitter in the Jihad content space.

The article points out that Google does not return the search results the person running the test queries expected. The article points out:

When someone in the US, perhaps wanting to educate themselves on the subject, searches for “Jihad” and sees the Islamic Supreme Council as the top-ranked site, the perception is that this is the global, unbiased and authoritative view. If they click on that first, seemingly most popular link, their perception of Jihad will be skewed by the beliefs and doctrine of this peaceful group of people. These people who merely dabble on the edge of Islamic doctrine. These people who are themselves repeatedly targeted for their beliefs that are contrary to those of the majority of Muslims. These people who do not even come close to being any sort of credible or realistic representation of the larger and more prevalent subscribers (nay soldiers) of the “Lesser Jihad” (again, the violent kind).

My thought is that the results I expect from any ad supported, publicly accessible search system are rarely what I expect. The more I know about a particular subject—how legacy search system marketing distorts what the systems can actually do—the more disappointed I am with the search results.

I don’t think Google is intentionally distorting search results. Certain topics just don’t match up to the Google algorithms. Google is pretty good at sports, pizza, and the Housewives of Beverly Hills. Google is not particularly good with fine grained distinctions in certain topic spaces.

If the information presented by, for instance, the Railway Retirement Board is not searched, the Google system does its best to find a way to sell an ad against a topic or word. In short, Google does better with certain popular subjects which generate ad revenue.

Information about legacy enterprise search systems like STAIRS III is not going to be easy to find. Nailing down the names of the programmers in Germany who worked on the system and how STAIRS III influenced BRS Search is a tough slog with the really keen Google system.

If I attribute Google’s indifference to information about STAIRS III to a master scheme put in place by Messrs. Brin and Page, I would be giving them a heck of a lot of credit for micro managing how content is indexed.

The social engineering angle is more difficult for me to understand. I don’t think Google is biased against mainframe search systems which are 50 years old. The content, the traffic, and the ad focus pretty much guarantee that STAIRS III is presented in a good enough way.

The problem, therefore, is that Google’s whiz kid technology is increasingly good enough. That means average or maybe a D plus. The yardstick is neither precision nor recall. At Google, revenue counts.

Baidu, Bing, Silobreaker, Qwant, and Yandex, among other search systems, have similar challenges. But each system is tending to the “good enough” norm. Presenting any subject in a way which makes a subject matter expert happy is not what these systems are tuned to do.

Here in Harrod’s Creek, we recognize that multiple queries across multiple systems are a good first step in research. Then there is the task of identifying individuals with particular expertise and trying to speak with them or at least read what they have written. Finally, there is the slog through the dead tree world.

Expecting Google or any free search engine to perform sophisticated knowledge centric research is okay. We prefer the old fashioned approach to research. That’s why Beyond Search documents some of the more interesting approaches revealed in the world of online analysis.

I like the notion of social engineering, particularly the Augmentext approach. But Google is more interested in money and itself than many search topics which are not represented in a way which I would like. Does Google hate me? Nah, Google doesn’t know I exist. Does Google discriminate against STAIRS III? Nah, of Google’s 65,000 employees, probably fewer than 50 know what STAIRS III is. Do Googlers understand revenue? Yep, pretty much.

Stephen E Arnold, September 13, 2016

Toshiba Amps up Vector Indexing and Overall Data Matching Technology

September 13, 2016

The article on MyNewsDesk titled Toshiba’s Ultra-Fast Data Matching Technology is 50 Times Faster than its Predecessors relates the bold claims swirling around Toshiba and its Vector Indexing Technology. By skipping the step of computing the distance between vectors, Toshiba has slashed the time it takes to identify vectors (they claim). The article states,

Toshiba initially intends to apply the technology in three areas: pattern mining, media recognition and big data analysis. For example, pattern mining would allow a particular person to be identified almost instantly among a large set of images taken by surveillance cameras, while media recognition could be used to protect soft targets, such as airports and railway stations, by automatically identifying persons wanted by the authorities.

In sum, Toshiba technology is able to quickly and accurately recognize faces in a crowd. But the specifics are much more interesting. Current technology takes around 20 seconds to identify an individual out of 10 million, and Toshiba can do it in under a second. The precision rate that Toshiba reports is also outstanding at 98%. The world of Minority Report, where ads recognize and direct themselves to random individuals, seems to be increasingly within reach. Perhaps more importantly, this technology should be of grave concern to the criminal and perceived-criminal populations of the world.
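The article gives no implementation details, but the speed-up it describes, avoiding an exhaustive distance computation against every stored vector, resembles approximate nearest-neighbor indexing in general. A minimal sketch under that assumption: bucket vectors by a coarse quantization so a query compares only against its own bucket rather than the full set. Everything here (the class, the bucketing scheme, the sample vectors) is illustrative, not Toshiba’s method.

```python
# Hedged sketch of avoiding exhaustive distance computation: coarse
# bucketing so a query only measures distances within one small bucket.
# This illustrates the general idea, not Toshiba's actual technique.

import math

def coarse_key(vec, step):
    """Quantize a vector to a coarse grid cell (the bucket id)."""
    return tuple(math.floor(x / step) for x in vec)

class BucketIndex:
    def __init__(self, step=1.0):
        self.step = step
        self.buckets = {}

    def add(self, label, vec):
        self.buckets.setdefault(coarse_key(vec, self.step), []).append((label, vec))

    def query(self, vec):
        # only the query's own bucket is searched exhaustively
        candidates = self.buckets.get(coarse_key(vec, self.step), [])
        best = min(candidates,
                   key=lambda lv: sum((a - b) ** 2 for a, b in zip(lv[1], vec)),
                   default=None)
        return best[0] if best else None

index = BucketIndex(step=2.0)
index.add("alice", (0.1, 0.2))   # hypothetical face embeddings
index.add("bob", (5.0, 5.1))
print(index.query((0.15, 0.25)))  # alice (same coarse bucket)
```

The trade-off is the usual one for approximate methods: a near neighbor that falls just across a bucket boundary can be missed, which is why real systems probe multiple buckets or use smarter partitions.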

Chelsea Kerwin, September 13, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Elastic Links Search and Social Through Graph Capabilities

September 13, 2016

The article titled Confused About Relationships? Elasticsearch Gets Graphic on The Register describes the latest offering from Elastic for Elasticsearch, the open-source search server based on Apache Lucene. Graph capabilities are an exciting new twist on search that enables users to map out relationships through the search engine and the Kibana data visualization plug-in. The article explains,

By fusing graph with search, Elastic hopes to combine the power of social with that earlier great online revolution, the revolution that gave us Google: search. Graph in Elasticsearch establishes relevance by establishing the significance of each relationship versus the global average to return important results. That’s different to what Elastic called “traditional” relationship mapping, which is based on a count of the frequency of a given relationship.
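The significance-versus-frequency distinction in the quoted passage can be sketched with a small scoring function: a term (or relationship) is significant for a subset of documents when its local rate exceeds its global background rate, rather than when it is merely common. This is an illustration of the idea, with made-up counts, not Elastic’s actual Graph implementation.

```python
# Hedged sketch of "significance vs. raw frequency" scoring, in the
# spirit of the quoted description. A relationship is significant when
# its local frequency beats the global average, so ubiquitous terms get
# no lift. Counts below are hypothetical.

def significance(count_in_subset, subset_size, count_global, corpus_size):
    """Lift of a term's local rate over its global background rate."""
    local_rate = count_in_subset / subset_size
    global_rate = count_global / corpus_size
    return local_rate / global_rate if global_rate else 0.0

# a niche term: 40 of 50 docs in the subset, but only 100 of 10,000 overall
print(significance(40, 50, 100, 10_000))     # strong lift, highly significant
# a ubiquitous term appears everywhere, so it gets no lift
print(significance(50, 50, 10_000, 10_000))  # lift of 1.0, not significant
```

The “traditional” frequency-based mapping the quote contrasts this with would rank the ubiquitous term highest simply because it occurs most often.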

Elastic sees potential for its Graph capabilities in behavioral analysis, particularly in areas such as drug discovery, fraud detection, and customized medicine and recommendations. When it comes to identifying business opportunities, graph databases have already proven their value. Discovering connections and trimming degrees of separation are of vital importance in social media. Social networks like Twitter have been using them since the beginning of NoSQL. Indeed, Facebook is a customer of Elastic, the company behind Elasticsearch, which was founded in 2012. Other users of Elasticsearch include Netflix, StumbleUpon, and Mozilla.

Chelsea Kerwin, September 13, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Autonomy Back Home in Merrie Olde England

September 12, 2016

I read “Hewlett Packard Offloads Last Autonomy Assets in Software Deal.” I think that Autonomy is now going back home. Blood pudding, the derbies, and Indian takeaways—yes, the verdant isle.

The union of Hewlett Packard (once an ink outfit) and the love child of Bayesian and Laplacian methods is burst asunder. HPE (the kissin’ cousin of the ink outfit) fabricated a deal only lawyers, MBAs, and accountants can conjure.

There is an $8 billion deal, cash to HPE, and a fresh swath of lush pasture for Micro Focus to cultivate.

I learned:

“Autonomy doesn’t really exist as an entity, just the products,” said Kevin Loosemore, executive chairman of Micro Focus. Loosemore said the Newbury-based business conducted due diligence across all of the products included in the deal, with no different approach taken for the Autonomy assets. No legal liabilities from Autonomy will be transferred to Micro Focus.

Integration is what Micro Focus does. Autonomy embodied in products was once a goal for some senior Autonomy executives. The golden sun is rising over the mid 1990s technology.

We wish Micro Focus well. We wish HPE well as it moves toward the resolution of its claims against Autonomy for assorted misdeeds.

Without search, HPE ceases to interest me. While HPE was involved in search, there was some excitement generated, but that is winding down and, for some I imagine, has long since vaporized.

I will have fond memories of HP blaming Autonomy for HP’s decision to buy Autonomy. Amazing. One of the great comedic moments in search and fading technology management.

Autonomy is dead. Long live Autonomy. Bayes lasted 60 years; Autonomy may have some legs even if embodied in other products. IDOL hands are the devil’s playthings, I think. PS. I will miss the chipper emails from BM.com. Substantive stuff.

Stephen E Arnold, September 12, 2016
