January 6, 2017
Ecommerce sites rely on a strong search tool to bring potential customers to their online stores and to help them find specific products without hassle. B2B companies have the same goal, but they need an entirely different approach, even though they still rely on search. If you run a B2B company, you might want to take a gander at Klevu and its write up “Search Requirements For A B2B Retailer.”
In the blog post, Klevu explains that B2B companies serve multiple customer groups, each with its own pricing, products, discounts, and so on. Customers see prices based on what the store assigns to their group, so a single price for every item will not work, and search is affected as a result. Klevu built its Klevu Magento plugin to address these requirements.
The Klevu Magento plugin also offers SKU search, keeps the same landing page within search results, and provides instant faceted search. Klevu researched the issues its B2B customers struggled with most and built solutions for them. The company is actively pursuing ways to resolve bothersome issues as they pop up, and this is just the start.
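For the technically curious, here is a minimal sketch of how group-specific pricing and visibility can shape search results. It is my own illustration; the groups, catalog, and price lists are invented and have nothing to do with Klevu’s code.

```python
# Toy sketch: resolving customer-group pricing and visibility at search time.
# Invented data for illustration; not Klevu's implementation.

PRICE_LISTS = {
    "retail":    {"SKU-100": 19.99, "SKU-200": 49.99},
    "wholesale": {"SKU-100": 12.50, "SKU-200": 31.00},
}

CATALOG = [
    {"sku": "SKU-100", "name": "Widget", "visible_to": {"retail", "wholesale"}},
    {"sku": "SKU-200", "name": "Industrial Widget", "visible_to": {"wholesale"}},
]

def search(query, customer_group):
    """Return only products the group may see, priced from that group's list."""
    hits = []
    for product in CATALOG:
        if customer_group not in product["visible_to"]:
            continue
        if query.lower() in product["name"].lower():
            hit = dict(product)
            hit["price"] = PRICE_LISTS[customer_group].get(product["sku"])
            hits.append(hit)
    return hits

print(search("widget", "retail"))     # one product, retail price
print(search("widget", "wholesale"))  # both products, wholesale prices
```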
Whitney Grace, January 6, 2017
January 5, 2017
Remember the good old days of search? Autonomy, Convera, Endeca, Fast Search, and others from the go-go 2000s identified search as a solution to enterprise information access. Well, those assertions proved difficult to substantiate. Marketing is one thing; finding information is another.
How does a vendor of Google-style searching, with some pre-sell Clearwell Systems-type business process tweaking, avoid the problems which other enterprise search vendors have encountered?
The answer is, “Market search as a solution for hiring.” Just as Clearwell Systems and its imitators did in the legal sector, Textkernel, founded in 2001 and sold to CareerBuilder in 2015, is doing résumé indexing and search focused on finding people to hire. Search becomes “recruitment technology,” which is reasonably clever buzzwording.
The company explains its indexing of CVs (curricula vitae) this way:
CV parsing, also called resume parsing or CV extraction, is the process of converting an unstructured (so-called free-form) CV/resume or social media profile into a structured format that can be integrated into any software system and made searchable. CV parsing eliminates manual data entry, allows candidates to apply via any (mobile) device and enables better search results.
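What does converting an unstructured CV into a structured, searchable format look like in practice? Here is a toy sketch, mine and not Textkernel’s; a real parser relies on statistical models and taxonomies rather than a handful of regular expressions and a skill list.

```python
import re

# Toy CV parsing sketch: pull a few structured fields out of free text.
# My own simplification for illustration; not Textkernel's technology.

SKILLS = {"python", "java", "sql", "machine learning"}

def parse_cv(text):
    email = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", text)
    phone = re.search(r"\+?\d[\d\s().-]{7,}\d", text)
    found_skills = sorted(s for s in SKILLS if s in text.lower())
    return {
        "email": email.group(0) if email else None,
        "phone": phone.group(0) if phone else None,
        "skills": found_skills,
    }

sample = """Jane Doe
jane.doe@example.com | +44 20 7946 0958
Experience: built machine learning pipelines in Python and SQL."""

print(parse_cv(sample))
# {'email': 'jane.doe@example.com', 'phone': '+44 20 7946 0958',
#  'skills': ['machine learning', 'python', 'sql']}
```

The structured record, not the raw document, is what gets indexed and searched.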
The Textkernel Web site provides more details about the company’s use of tried and true enterprise search functions like metadata generation and report generation (called a “candidate profile”).
In 2015 the company had about 70 employees. Using the Overflight revenue estimation tool, Beyond Search pegs the 2015 revenue in the $5 million range.
The good news is that the company avoided the catastrophic thrashing which other European enterprise search vendors experienced. The link to the video on the Textkernel page is broken, which does not bode well for Web coding expertise. However, you can bite into some text kernels at this link.
Stephen E Arnold, January 5, 2017
January 4, 2017
A new service has been launched in the UK that enables users to find out if their confidential information is up for sale on the Dark Web.
As Hacked reports in the article “This Tool Lets You Scan the Dark Web for Your (Stolen) Personal Data”:
The service is called OwlDetect and is available for £3,5 a month. It allows users to scan the dark web in search for their own leaked information. This includes email addresses, credit card information and bank details.
The service uses a supposedly sophisticated algorithm that allegedly can penetrate up to 95% of content on the Dark Web. The inability of Open Web search engines to index and penetrate the Dark Web has led to a mushrooming of Dark Web search engines.
OwlDetect works much like early stage Google, as the article makes apparent:
This new service has a database of stolen data. This database was created over the past 10 years, presumably with the help of their software and team. A real deep web search engine does exist, however.
This means the search is not real time and is about as good as searching your local hard drive. Most of the data might be outdated, and the companies that held it might have migrated to more secure platforms. Moreover, the user might have deleted the old data. Thus, the service just tells you whether you were ever hacked or whether your data was ever stolen.
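Here is a minimal sketch of the “search a pre-built database of stolen data” model the article describes. It is my own illustration, not OwlDetect’s system; the breach records below are invented.

```python
import hashlib

# Toy sketch of checking an address against a static collection of leaked data.
# Invented records; not OwlDetect's system.

def fingerprint(email):
    """Normalize and hash so raw addresses need not be stored or compared."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

LEAKED = {
    fingerprint("alice@example.com"): ["2012 forum dump", "2014 retailer breach"],
    fingerprint("bob@example.com"): ["2016 credential list"],
}

def check(email):
    hits = LEAKED.get(fingerprint(email), [])
    if hits:
        return f"{email} appears in: {', '.join(hits)}"
    return f"No record of {email} in the collected dumps (which proves little)."

print(check("Alice@Example.com"))   # matches despite different capitalization
print(check("carol@example.com"))
```

The lookup is only as fresh as the last dump added to the database, which is exactly the limitation noted above.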
Vishal Ingole, January 4, 2017
January 3, 2017
A recent report released by the U.S. Government Accountability Office is entitled Patent Office Should Strengthen Search Capabilities and Better Monitor Examiners’ Work. Published on June 30, 2016, the report runs 91 pages as a PDF. Included in the report is an examination of the challenges the U.S. Patent and Trademark Office (USPTO) faces in identifying information relevant to a claimed invention, challenges that affect patent search. The website says the following about the reason for this study:
GAO was asked to identify ways to improve patent quality through use of the best available prior art. This report (1) describes the challenges examiners face in identifying relevant prior art, (2) describes how selected foreign patent offices have addressed challenges in identifying relevant prior art, and (3) assesses the extent to which USPTO has taken steps to address challenges in identifying relevant prior art. GAO surveyed a generalizable stratified random sample of USPTO examiners with an 80 percent response rate; interviewed experts active in the field, including patent holders, attorneys, and academics; interviewed officials from USPTO and similarly sized foreign patent offices, and other knowledgeable stakeholders; and reviewed USPTO documents and relevant laws.
In short, the state of patent search is currently not very good. Timeliness and accuracy continue to be concerns when it comes to providing effective search in any capacity. Based on the study’s findings, it appears bolstering effectiveness in these areas can be especially troublesome because of problems with the clarity of patent applications and with USPTO’s policies and search tools.
Megan Feil, January 3, 2017
January 1, 2017
An article at Naked Security reveals some information turned up by innovative Tor-exploring hidden services in its article, “‘Honey Onions’ Probe the Dark Web: At Least 3% of Tor Nodes are Rogues.” By “rogues,” writer Paul Ducklin is referring to sites, run by criminals and law-enforcement alike, that are able to track users through Tor entry and/or exit nodes. The article nicely lays out how this small fraction of sites can capture IP addresses, so see the article for that explanation. As Ducklin notes, three percent is a small enough window that someone just wishing to avoid having their shopping research tracked may remain unconcerned, but is a bigger matter for, say, a journalist investigating events in a war-torn nation. He writes:
Two researchers from Northeastern University in Boston, Massachusetts, recently tried to measure just how many rogue HSDir nodes there might be, out of the 3000 or more scattered around the world. Detecting that there are rogue nodes is fairly easy: publish a hidden service, tell no one about it except a minimum set of HSDir nodes, and wait for web requests to come in.[…]
With 1500 specially-created hidden services, amusingly called ‘Honey Onions,’ or just Honions, deployed over about two months, the researchers measured 40,000 requests that they assume came from one or more rogue nodes. (Only HSDir nodes ever knew the name of each Honion, so the researchers could assume that all connections must have been initiated by a rogue node.) Thanks to some clever mathematics about who knew what about which Honions at what time, they calculated that these rogue requests came from at least 110 different HSDir nodes in the Tor network.
It is worth noting that many of those requests were simple pings, but others were actively seeking vulnerabilities. So, if you are doing anything more sensitive than comparing furniture prices, you’ll have to decide whether you want to take that three percent risk. Ducklin concludes by recommending added security measures for anyone concerned.
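For the curious, the bookkeeping behind the honions can be sketched in a few lines. Each honion was announced only to a known set of HSDir nodes, so any visit implicates at least one node in that set, and a set cover over the visited honions gives the minimum number of rogue nodes needed to explain the traffic. This is my own simplification with invented names, not the researchers’ code; the greedy pass below only approximates the true minimum cover.

```python
# Sketch of honion bookkeeping (my simplification, invented data).
# honion -> the HSDir nodes that were told about it
HONION_TO_HSDIRS = {
    "honion-1": {"hsdir-A", "hsdir-B"},
    "honion-2": {"hsdir-B", "hsdir-C"},
    "honion-3": {"hsdir-D"},
}

visited = {"honion-1", "honion-2", "honion-3"}   # honions that received requests

def suspect_hsdirs(visited, mapping):
    """Greedy set cover: a small set of HSDirs that explains every visit."""
    uncovered = set(visited)
    suspects = set()
    while uncovered:
        candidates = {h for onion in uncovered for h in mapping[onion]}
        best = max(candidates,
                   key=lambda h: sum(1 for o in uncovered if h in mapping[o]))
        suspects.add(best)
        uncovered = {o for o in uncovered if best not in mapping[o]}
    return suspects

print(suspect_hsdirs(visited, HONION_TO_HSDIRS))
# {'hsdir-B', 'hsdir-D'}: at least two rogue nodes are needed to explain these visits
```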
Cynthia Murrell, January 1, 2017
December 31, 2016
You may remember Ardentia NetSearch. The company’s original product was NetSearch, which was designed to be quick to deploy and built for the end user, not the information technology department. The company changed its name to Connexica in 2001. I checked the company’s Web site and noted that the company positions itself this way:
Our mission is to turn smart data discovery into actionable information for everyone.
What’s interesting is that Connexica asserts that
“search engine technology is the simplest and fastest way for users to service their own information needs.”
The idea is that if one can use Google, one can use Connexica’s systems. A brief description of the company states:
Connexica is the world’s pioneer of search based analytics.
The company offers Cxair. This is a Java based Web application. The application provides search engine based data discovery. The idea is that Cxair permits “fast, effective and agile business analytics.” What struck me was the assertion that Cxair is usable with “poor quality data.” The idea is to create reports without having to know the formal query syntax of SQL.
The company’s MetaVision product is a Java based Web application that “interrogates database metadata.” The idea, as I understand it, is to use MetaVision to help migrate data into Hadoop, Cxair, or Elasticsearch.
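As an aside, “interrogating database metadata” is a familiar exercise. Here is a minimal sketch of the general idea using Python’s built-in sqlite3 catalog; it is my own illustration, not Connexica’s MetaVision, and the tables are invented.

```python
import sqlite3

# List tables and columns from a database so the schema could be mapped to a
# search index or a migration plan. Invented tables; not Connexica's code.

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE patients (id INTEGER PRIMARY KEY, name TEXT, dob TEXT);
    CREATE TABLE visits   (id INTEGER PRIMARY KEY, patient_id INTEGER, notes TEXT);
""")

def describe_schema(conn):
    """Return {table: [(column, declared type), ...]} from SQLite's catalog."""
    schema = {}
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'").fetchall()
    for (table,) in tables:
        cols = conn.execute(f"PRAGMA table_info({table})").fetchall()
        schema[table] = [(c[1], c[2]) for c in cols]
    return schema

for table, columns in describe_schema(conn).items():
    print(table, columns)
# patients [('id', 'INTEGER'), ('name', 'TEXT'), ('dob', 'TEXT')]
# visits [('id', 'INTEGER'), ('patient_id', 'INTEGER'), ('notes', 'TEXT')]
```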
Connexica, partly funded by Midven, is a privately held company based in the UK. The firm has more than 200 customers and more than 30 employees. When updating my files, I noted that Zoominfo reports that the firm was founded in 2006, but that conflicts with my file data which pegs the company operating as early as 2001.
A quick review of the company’s information on its Web site and open sources suggests that the firm is focusing its sales and marketing efforts on health care, finance, and government customers.
Connexica is another search vendor which has performed a successful pivot. Search technology is secondary to the company’s other applications.
Stephen E Arnold, December 31, 2016
December 30, 2016
I was able to snag a copy of “Indexing and Search: A Peek into What Real Users Think.” The study appeared in October 2016, and it appears to be the work of IT Central Station, an outfit described as a source of “unbiased reviews from the tech community.” I thought, “Oh, oh, ‘real users.’” A survey? An IDC type or Gartner type sample, which, although suspicious to me, seems to convey some useful information when the moon is huge? Nope. Nope. Unbiased? Nope.
Note that the report is free. One can argue that free does not translate to accurate, high value, somewhat useful information. I support this argument.
The report, like many of the “real” reports I have reviewed over the decades, is relatively harmless. In terms of today’s content payloads, the study fires blanks. Let’s take a look at some of the results, and you can work through the 16 pages to double check my critique.
First, who are the “top” vendors? The list reveals quite a bit about the basic flaw in the “peek.” The table below presents the “top” vendors along with my comment about each.
| Vendor | Comment |
| --- | --- |
| Apache | Not a search system; an open source umbrella for projects, of which Lucene and Solr are two among many |
| Attivio | Based on Lucene/Solr open source search software; positioned as a business intelligence vendor |
| Copernic | A desktop search and research system based on proprietary technology from the outfit known as Coveo |
| Coveo | A vendor of proprietary search technology now chasing Big Data and customer support |
| Dassault Systèmes | Owns Exalead, which is now downgraded to a utility within Dassault’s PLM software |
| Data Design, now Ryft.com | Pitches search without indexing via a proprietary “circuit module” method |
| Data Gravity | Search is a utility in a storage centric system |
| DieselPoint | Company has been “quiet” for a number of years |
| Expert System | Publicly traded and revenue challenged vendor of a metadata utility, not a search system |
| Fabasoft | Mindbreeze is a proprietary replacement for SharePoint search |
| Google | Discontinued the Google Search Appliance and exited enterprise search |
| Hewlett Packard Enterprise | Sold its search technology to Micro Focus; legal dispute in progress over alleged fraud |
| IBM Omnifind | Lucene and proprietary scripts plus acquired technology |
| IBM StoredIQ | Like DB2 search, a proprietary utility |
| ISYS Search Software | Now owned by Lexmark and marginalized due to alleged revenue shortfalls |
| Lookeen | Lucene based desktop and Outlook search |
| Lucidworks | Solr add-ons; floundering to be more than enterprise search |
| MAANA | Proprietary search optimized for Big Data |
| Microsoft | Offers multiple search solutions; the most notorious are Bing and the Fast Search & Transfer proprietary solutions |
| Oracle | Full text search is a utility for Oracle licensees; owns Artificial Linguistics, Triple Hop, Endeca, RightNow, InQuira, and the marginalized Secure Enterprise Search. Oh, don’t forget command line querying via PL/SQL |
| Polyspot, now CustomerMatrix | Now a customer service vendor |
| Siderean Software | Went out of business in 2008; a semantic search outfit |
| Sinequa | Now a Big Data outfit with hopes of becoming the “next big thing” in whatever sells |
| X1 Search | An eternal start up pitching eDiscovery and desktop search with a wild and crazy interface |
What does the table tell us about “top” systems? First, the list includes vendors not directly in the search and retrieval business. There is no differentiation among the vendors repackaging and reselling open source Lucene/Solr solutions. The listing is a fruit cake of desktop, database, and unstructured search systems. In short, the word “top” does not do the trick for me. I prefer “a list of eclectic and mostly unknown systems which include a search function.”
The report presents 10 bar charts which tell me absolutely nothing about search and retrieval. The bars appear to be a popularity contest based on visits to the author’s Web site. Only two of the search systems listed in the bar chart have “reviews.” Autonomy IDOL garnered three reviews and Lookeen one review. The other eight vendors’ products were not reviewed. Autonomy and Lookeen could not be more different in purpose, design, and features.
The report then tackles the “top five” search systems in terms of clicks on the author’s Web site. Yep, clicks. That’s a heck of a yardstick, because what percentage of the clicks came from humans and what percentage from bots? No answer, of course.
The most popular “solutions” illustrate the weirdness of the sample. The number one solution is DataGravity, which is a data management system with various features and utilities. The next four “top” solutions are:
- Oracle Endeca – eCommerce and business intelligence and whatever Oracle can use the ageing system for
- The Google Search Appliance – discontinued with a cloud solution coming down the pike, sort of
- Lucene – open source, the engine behind Elasticsearch, which is quite remarkably not on the list of vendors
- Microsoft Fast Search – included in SharePoint to the delight of the integrators who charge to make the dog heel once in a while.
I find it fascinating that DataGravity (1,273) garnered more than three times the “votes” of Microsoft Fast Search (404). I think there are more than 200 million SharePoint licensees, and many of these outfits have questions about Fast Search. I would hazard a guess that DataGravity has a tiny fraction of the SharePoint installed base, and its brand identity and company name recognition are a fraction of Microsoft’s. Weird data, or just meaningless.
The bulk of the report consists of comparisons of various search engines. I could not figure out the logic of the comparisons. What, for example, do Lookeen and IBM StoredIQ have in common? Answer: zero.
The search report strikes me as a bit of silliness. The report may be an anti sales document. But your mileage will differ. If it does, good luck to you.
Stephen E Arnold, December 30, 2016
December 26, 2016
I know that some millennials are not familiar with the Duesenberg automobile. Why would that generation care about an automobile manufacturer that went out of business in 1937? My thought is that the Duesenberg left one nifty artifact: the word doozy, which means something outstanding.
I thought of the Duesenberg “doozy” when I read “Unstructured Data Search Engine Has Roots in HPC.” HPC means high performance computing. The acronym suggests a massively parallel system just like the one to which the average mobile phone user has access. The name of the search engine is “Duse,” which here in Harrod’s Creek is pronounced “doozy.”
According to the write up:
One company hoping to tap into the morass of unstructured data is DataFission. The San Jose, California firm was founded in 2013 with the goal of productizing a scale-out search engine, called the Digital Universe Search Engine, or DUSE, that it claims can index just about any piece of data, and make it searchable from any Web-enabled device.
The key to Duse is pattern matching. This is a pretty good method; for example, Brainware used trigrams to power its search system. Since the company disappeared into Lexmark, I am not sure what happened to the company’s system. I think the n-gram patent is owned by a bank located near an abandoned Kodak facility.
The method of the system, as I understand it, is:
- Index content
- Put index into compressed tables
- Allow users to search the index.
The users can “search” by entering queries or dragging “images, videos, or audio files into Duse’s search bar or programmatically via REST APIs.”
What differentiates Duse? The write up states:
The secret sauce lies in how the company indexes the data. A combination of machine learning techniques, such as principal component analysis (PCA), clustering, and classification algorithms, as well as graph link analysis and “nearest neighbor” approach help to find associations in the data.
Dr. Harold Trease, the architect of the Duse system, says:
“We generate a high-dimensional signature, a high-dimensional feature vector, that quantifies the information content of the data that we read through,” he says. “We’re not looking for features like dogs or cats or buildings or cars. We’re quantifying the information content related to the data that we read. That’s what we index and put in a database. Then if you pull out a cell phone and take a picture of the dog, we convert that to one of these high-dimensional signatures, and then we compare that to what’s in the database and we find the best matches.
If we index a billion images, we’d end up with a billion points in this search space, and we can look at that search space and see it has structure to it, and the structure is fantastic. There are all kinds of these points and clusters and strands that connect things. It makes a little less sense to humans, because we don’t see things like that. But to the code, it makes perfect sense.”
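The pattern Dr. Trease describes (reduce each item to a fixed-length signature, index the signatures, answer queries by nearest-neighbor lookup) can be sketched with off-the-shelf tools. The snippet below uses scikit-learn’s PCA and NearestNeighbors as stand-ins on random data; it is my sketch, not DataFission’s code.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

# Signature-and-lookup sketch: compress items to fixed-length vectors, then
# answer a query by nearest-neighbor search. Random data; not DataFission's code.

rng = np.random.default_rng(0)
raw_features = rng.normal(size=(1000, 256))   # stand-in features for 1,000 items

pca = PCA(n_components=32)                    # compress to 32-dimensional signatures
signatures = pca.fit_transform(raw_features)

index = NearestNeighbors(n_neighbors=5).fit(signatures)

query = rng.normal(size=(1, 256))             # a new item, e.g. features of a photo
query_signature = pca.transform(query)
distances, neighbors = index.kneighbors(query_signature)

print("closest items:", neighbors[0])
print("distances:", np.round(distances[0], 3))
```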
The company’s technology dates from the 1990s, and the search technology was part of its medical image analysis and related research.
The write up reports:
The software itself, which today exists as a Python-based Apache Spark application, can be obtained as a software product or fully configured on a hardware appliance called DataHunter.
For more information about the company, navigate to this link.
Stephen E Arnold, December 26, 2016
December 21, 2016
Lucidworks (really?). A vision has appeared to the senior managers of Lucidworks, an open source search outfit which has ingested $53 million and sucked in another $6 million in debt financing in June 2016. Yep, that Lucidworks. The “really” which the name invokes is an association I form when someone tells me that commercializing open source search is going to knock off the pesky Elastic of Elasticsearch fame while returning a juicy payoff to the folks who coughed up the funds to keep the company founded in 2007 chugging along. Yep, Lucid works. Sort of, maybe.
I read “Lucidworks Integrates IBM Watson into Fusion Enterprise Discovery Platform.” The write up explains that Lucidworks is “tapping into” the IBM Watson developer cloud. The write up explains that Lucidworks has:
an application framework that helps developers to create enterprise discovery applications so companies can understand their data and take action on insights.
Ah, so many buzzwords. Search has become applications. “Action on insights” puts some metaphorical meat on the bones of Solr, the marrow of Lucidworks. Really?
With Watson in the company’s back pocket, Lucidworks will deliver. I learned:
Customers can rely on Fusion to develop and deploy powerful discovery apps quickly thanks to its advanced cognitive computing features and machine learning from Watson. Fusion applies Watson’s machine learning capabilities to an organization’s unique and proprietary mix of structured and unstructured data so each app gets smarter over time by learning to deliver better answers to users with each query. Fusion also integrates several Watson services such as Retrieve and Rank, Speech to Text, Natural Language Classifier, and AlchemyLanguage to bolster the platform’s performance by making it easier to interact naturally with the platform and improving the relevance of query results for enterprise users.
But wait. Doesn’t Watson perform these functions already? And if Watson comes up a bit short in one area, isn’t IBM-infused Yippy ready to take up the slack?
That question is not addressed in the write up. It seems that the differences between Watson, its current collection of partners, and affiliated entities like Yippy are vast. The write up tells me:
customers looking for hosted, pre-tuned machine learning and natural language processing capabilities can point and click their way to building sophisticated applications without the need for additional resources. By bringing Watson’s cognitive computing technology to the world of enterprise data apps, these discovery apps made with Fusion are helping professionals understand the mountain of data they work with in context to take action.
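To get a sense of what stitching hosted services together looks like, here is a hypothetical sketch: query a search endpoint, then send the top hits to a hosted classifier over REST. The URLs, payload fields, and credentials are placeholders of my own, not the actual Fusion or Watson APIs.

```python
import requests

# Hypothetical glue code: search, then enrich the hits with a hosted classifier.
# The endpoints and payload shapes below are placeholders, not real APIs.

SEARCH_URL = "https://search.example.com/api/query"      # placeholder
CLASSIFY_URL = "https://nlp.example.com/api/classify"    # placeholder

def search_then_classify(question, api_key):
    hits = requests.get(SEARCH_URL, params={"q": question, "rows": 5}).json()
    enriched = []
    for doc in hits.get("docs", []):
        label = requests.post(
            CLASSIFY_URL,
            json={"text": doc.get("body", "")},
            headers={"Authorization": f"Bearer {api_key}"},
        ).json()
        enriched.append({"title": doc.get("title"), "classes": label})
    return enriched

# Each hop adds latency, authentication, error handling, and relevance tuning.
```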
This sounds like quite a bit of integration work. Lucidworks. Really?
Stephen E Arnold, December 21, 2016
December 18, 2016
If you scan the marketing collateral from now defunct search giants like Convera, DR LINK, Fulcrum Technologies or similar extinct beasties, you will notice a similarity of features and functions. Let’s face it. Search and retrieval has been stuck in the mud for decades. Some wizards point to the revolution of voice search, emoji based queries, and smart software which knows what you want before you know you need some information.
Typing key words, indexing systems which add concept labels, and shouting at a mobile phone whilst standing between cars on a speeding train returns semi-useful links to what amount to homework: Open link, scan for needed info, close link, and do it again.
Eureka, California is easy to find. Get inspired.
Now there is a solution to search and content processing vendors’ inability to be creative. These methods appear to fuel the flights of fancy emanating from predictive analytics, Big Data, and semantic search companies.
Navigate to “8 Tried-and-Tested Ways to Unlock Your Creativity.” Now you too can emulate the breakthroughs, insights, and juxtapositions of Leonardo, Einstein, Mozart, and, of course, Facebook’s design team.
Let’s take a look at these eight ideas.
- Set up a moodboard. I have zero idea what a moodboard is. I am not sure it would fit into the work methods of Beethoven. He seemed a bit volatile and prone to “bad” moods.
- Talk it out. That’s a great idea for companies engaged in classified projects for nation states. Why not have those conversations in a coffee shop or, better yet, on an airplane with strangers sitting cheek by jowl?
- Brainstorming. My recollection of brainstorming is that it can be fun, but without one person who doesn’t get with the program, the “ideas” are often like recycled plastic bottles. Not always, of course. But the donuts can be a motivator.
- Mindmapping. Yep, diagrams. These are helpful, particularly when equations are included for the home economics majors and failed webmasters who wrangle a job at a search or content processing vendor. What’s that pitchfork looking thing mean?
- Doodling. Works great. The use of paper and pencils is popular. One can use a Microsoft Surface or a giant iPad thing. Profilers and psychologists enjoy doodles. Venture capitalists who invested in a search and content processing company often sketch somewhat dark images.
- Music. Forget that Mozart and fighter pilot stuff. Go for Gregorian chants, heavy metal, and mindfulness tunes. Here in Harrod’s Creek, we love Muzak featuring the Whites and John Lomax.
- Lucid dreaming. This idea is popular among some of the visionaries working at high profile Sillycon Valley companies. Loon balloons, solar powered Internet aircraft, and trips to Mars. Apply that thinking to search and what do you get? Tay, search by sketch, and smart maps which identify pizza joints.
- Imagine what a great innovator would do. That works. People sitting on a sofa playing a video game can innovate between button pushes.
Will search and content processing vendors be more creative? Now these folks can go in new directions armed with these tips and the same eight or nine algorithms in wide use. Peak search? Not by a country mile.
Stephen E Arnold, December 18, 2016