Text Analytics Vendors for Your Retirement Fund
February 10, 2016
I located a list of companies involved in content processing. You may want to add one or more of these to your retirement investment portfolio. Which one will be the next Facebook, Google, or Uber? I know I would love to have a hat or T-shirt from each of these outfits:
Api.ai
Appinions
Automated Insights
Bitext
Clueda
Cortical.io
Dataminr
DigitalGenius
Equivio
Health Fidelity
Jobandtalent
Linguasys
Medallia
MonkeyLearn
NetBase
NewBrandAnalytics
Semantic Machines
Sensai
Sentisis
Signal
Strossle
Sysomos
TEMIS (Expert System)
Textkernel
Textio
Treparel
Viralheat
Wibbitz
Wit.ai
Stephen E Arnold, February 8, 2016
HP Enterprise Investigative Analytics
February 5, 2016
Shiver me timbers. Batten down the hatches. There is a storm brewing in the use of Autonomy-type methods to identify risks and fraud. To be fair, HP Enterprise no longer pitches Autonomy, but the spirit of Dr. Mike Lynch’s 1990s technology is there, just a hint maybe, but definitely noticeable to one who has embraced IDOL.
For the scoop, navigate to “HPE Launches Investigative Analytics, Using AI and Big Data to Identify Risk.” I was surprised that the story’s headline did not add “When Swimming in the Data Lake.” But the message is mostly clear despite the buzzwords.
Here’s a passage I highlighted:
The software is initially geared toward financial services organizations, and it combines existing HPE products like Digital Safe, IDOL, and Vertica all on one platform. By using big data analytics and artificial intelligence, it can analyze a large amount of data and help pinpoint potential risks of fraudulent behavior.
Note the IDOL thing.
The write up added:
Investigative Analytics starts by collecting both structured sources like trading systems, risk systems, pricing systems, directories, HR systems, and unstructured sources like email and chat. It then applies analysis to query “aggressively and intelligently across all those data sources,” Patrick [HP Enterprise wizard] said. Then, it creates a behavior model on top of that analysis to look at certain communication types and see if they can define a certain problematic behavior and map back to a particular historical event, so they can look out for that type of communication in the future.
This is okay, but the words, terminology, and phrasing remind me of 1990s Autonomy marketing collateral, BAE’s presentations after licensing Autonomy technology in the late 1990s, the i2 Ltd. Analyst Notebook collateral, and, more recently, the flood of jabber about Palantir’s Metropolitan Platform and Thomson Reuters’ version of Metropolitan called QA Direct or QA Studio or QA fill in the blank.
The fact that HP Enterprise is pitching this new service developed with “one bank” at a legal eagle tech conference is a bit like me offering to do my Dark Web Investigative Tools lecture at Norton Elementary School. A more appropriate audience might deliver more bang for each PowerPoint slide, might it not?
Will HP Enterprise put a dent in the vendors already pounding the carpeted halls of America’s financial institutions?
HP Enterprise stakeholders probably hope so. My hunch is that a me-too, me-too product is a less than inspiring use of the collection of acquired technologies HP Enterprise appears to put in a single basket.
Stephen E Arnold, February 5, 2016
Big Data: A Shopsmith for Power Freaks?
February 4, 2016
I read an article that I dismissed. The title nagged at my ageing mind and dwindling intellect. “This is Why Dictators Love Big Data” did not ring my search, content processing, or Dark Web chimes.
Nagged by my inner voice, I returned to the story, still irritated by the “This Is Why” phrase in the headline.
Predictive analytics are not new. The packaging is better.
I think this is the main point of the write up, but I am never sure with online articles. The articles can be ads or sponsored content. The authors could be looking for another job. The doubts about information today plague me.
The circled passage is:
Governments and government agencies can easily use the information every one of us makes public every day for social engineering — and even the cleverest among us is not totally immune. Do you like cycling? Have children? A certain breed of dog? Volunteer for a particular cause? This information is public, and could be used to manipulate you into giving away more sensitive information.
The only hitch in the git along is that this is not merely old news; it is ancient news. The systems and methods for making decisions based on the munching of math in numerical recipes have been around for a while. Autonomy? A pioneer in the 1990s. Nope, not far enough back. Not even the super secret use of Bayesian, Markov, and related methods during World War II reaches back far enough. Nudge the ball hundreds of years farther along the timeline. Not new in my opinion.
I also noted this comment:
In China, the government is rolling out a social credit score that aggregates not only a citizen’s financial worthiness, but also how patriotic he or she is, what they post on social media, and who they socialize with. If your “social credit” drops below a certain level because you post anti-government messages online or because you’re socially associated with other dissidents, you could be denied credit approval, financial opportunities, job promotions, and more.
Just China? I fear not, gentle reader. Once again the “real” journalists are taking an approach which does not do justice to the wide diffusion of certain mathy applications.
Net net: I should have skipped this write up. My initial judgment was correct. Not only is the headline annoying to me, the information is par for the Big Data course.
Stephen E Arnold, February 4, 2016
Palantir: Revenue Distribution
January 27, 2016
I came across a write up in a Chinese blog about Palantir. You can find the original text at this link. I have no idea if the information is accurate, but I had not seen this breakdown before:
The chart from “Touchweb” shows that in FY 2015 privately held Palantir derives 71 percent of its revenue from commercial clients.
The report then lists the lines of business which the company offers. Again this was information I had not previously seen:
- Energy, disaster recovery, consumer goods, and card services
- Retail, pharmaceuticals, media, and insurance
- Audit, legal prosecution
- Cyber security, banking
- Healthcare research
- Local law enforcement, finance
- Counter terrorism, war fighting, special forces.
Because Palantir is privately held, there is no solid, audited data available to folks in Kentucky at this time.
Nevertheless, the important point is that the Palantir search and content processing platform has a hefty valuation, lots of venture financing, and what appears to be a diversified book of business.
Stephen E Arnold, January 27, 2016
Cheerleading for the SAS Text Exploration Framework
January 27, 2016
SAS is a stalwart in the number crunching world. I visualize the company’s executives chatting among themselves about the Big Data revolution, the text mining epoch, and the predictive analytics juggernaut.
Well, SAS is now tapping that staff interaction.
Navigate to “To Data Scientists and Beyond! One of Many Applications of Text Analytics.” There is an explanation of the ease of use of SAS. Okay, but my recollection was that I had to hire a PhD in statistics from Cornell University to chase down the code that slowed our survivability analyses to a meander instead of a trot.
I learned:
One of the misconceptions I often see is the expectation that it takes a data scientist, or at least an advanced degree in analytics, to work with text analytics products. That is not the case. If you can type a search into a Google toolbar, you can get value from text analytics.
The write up contains a screenshot too. Where did the text analytics plumbing come from? Perchance an acquisition in 2008 like the canny purchase of Teragram’s late 1990s technology?
The write up focuses on law enforcement and intelligence applications of text analytics. I find that interesting because Palantir is allegedly deriving more than 60 percent of the firm’s revenue from commercial customers like JP Morgan and starting to get some traction in health care.
Check out the screenshot. That is worth 1,000 words. SAS has been working on the interface thing to some benefit.
Stephen E Arnold, January 27, 2016
Big Data Blending Solution
January 20, 2016
I would have used Palantir or maybe our own tools. But an outfit named National Instruments found a different way to perform data blending. “How This Instrument Firm Tackled Big Data Blending” provides a case study and a rah rah for Alteryx. Here’s the paragraph I highlighted:
The software it [National Instruments] selected, from Alteryx, takes a somewhat unique approach in that it provides a visual representation of the data transformation process. Users can acquire, transform, and blend multiple data sources essentially by dragging and dropping icons on a screen. This GUI approach is beneficial to NI employees who aren’t proficient at manipulating data using something like SQL.
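For those who still blend data the old fashioned way, the acquire-transform-blend sequence the write up describes is a few lines of code. Here is a minimal sketch with pandas and invented data; this is not Alteryx or National Instruments code, just the generic operation under the GUI:

```python
# Hypothetical "data blending": join two structured sources on a shared
# key, then apply a transformation. Data is invented for illustration.
import pandas as pd

orders = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "order_total": [250.0, 99.0, 480.0],
})
crm = pd.DataFrame({
    "customer_id": [1, 2, 4],
    "region": ["EMEA", "APAC", "AMER"],
})

# Blend the sources: a left join keeps every order, even when the
# CRM source has no matching record.
blended = orders.merge(crm, on="customer_id", how="left")

# A simple transformation step after the blend.
blended["high_value"] = blended["order_total"] > 200

print(blended)
```

The drag-and-drop icon, in other words, wraps a join plus a transform, which is why SQL-averse employees find the GUI appealing.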
The graphical approach has been part of a number of tools. There are also some systems which just figure out where to put what.
The issue for me is, “What happens to rich media like imagery and unstructured information like email?”
There are systems which handle these types of content.
Another challenge is the dependence on structured relational data tables. Certain types of operations are difficult in this environment.
The write up is interesting, but it reveals that a narrow view of available tools may produce a partial solution.
Stephen E Arnold, January 20, 2016
Dark Web and Tor Investigative Tools Webinar
January 5, 2016
Telestrategies announced on January 4, 2016, a new webinar for active LEA and intel professionals. The one hour program is focused on tactics, new products, and ongoing developments for Dark Web and Tor investigations. The program is designed to provide an overview of public, open source, and commercial systems and products. These systems may be used as standalone tools or integrated with IBM i2 ANB or Palantir Gotham. More information about the program is available from Telestrategies. There is no charge for the program. In 2016, Stephen E Arnold’s new Dark Web Notebook will be published. More information about the new monograph upon which the webinar is based may be obtained by writing benkent2020 at yahoo dot com.
Stephen E Arnold, January 5, 2016
IBM Generates Text Mining Work Flow Diagram
January 4, 2016
I read “Deriving Insight: Text Mining and Machine Learning.” This is an article with a specific IBM Web address. The diagram is interesting because it does not explain which steps are automated, which require humans, and which are one of those expensive man-machine processes. When I read about any text related function available from IBM, I think about Watson. You know, IBM’s smart software.
Here’s the diagram:
If you find this hard to read, you are not in step with modern design elements. Millennials, I presume, love these faded colors.
Here’s the passage I noted about the important step of “attribute selection.” I interpret attribute selection to mean indexing, entity extraction, and related operations. Because neither human subject matter specialists nor smart software performs this function particularly well, I highlighted the passage in red ink in recognition of IBM’s 14 consecutive quarters of financial underperformance:
Machine learning is closely related to and often overlaps with computational statistics—a discipline that also specializes in prediction-making. It has strong ties to mathematical optimization, which delivers methods, theory and application domains to the field. It is employed in a range of computing tasks where designing and programming explicit algorithms is infeasible. Example applications include spam filtering, optical character recognition (OCR), search engines and computer vision. Text mining takes advantage of machine learning specifically in determining features, reducing dimensionality and removing irrelevant attributes. For example, text mining uses machine learning on sentiment analysis, which is widely applied to reviews and social media for a variety of applications ranging from marketing to customer service. It aims to determine the attitude of a speaker or a writer with respect to some topic or the overall contextual polarity of a document. The attitude may be his or her judgment or evaluation, affective state or the intended emotional communication. Machine learning algorithms in text mining include decision tree learning, association rule learning, artificial neural learning, inductive logic programming, support vector machines, Bayesian networks, genetic algorithms and sparse dictionary learning.
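The passage is abstract. A toy illustration of the feature determination, attribute removal, and classification steps it lists, using scikit-learn rather than anything IBM sells (the corpus, labels, and pipeline settings are invented for illustration):

```python
# A generic text mining sketch: extract features, remove irrelevant
# attributes, classify sentiment with a Bayesian model. Toy data only;
# this is not IBM or Watson code.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

docs = [
    "great product, works perfectly",
    "terrible support, waste of money",
    "love the interface, very helpful",
    "broken on arrival, awful experience",
]
labels = [1, 0, 1, 0]  # 1 = positive sentiment, 0 = negative

pipeline = Pipeline([
    ("features", TfidfVectorizer()),     # turn text into attributes
    ("select", SelectKBest(chi2, k=8)),  # drop low-signal attributes
    ("classify", MultinomialNB()),       # a Bayesian classifier
])
pipeline.fit(docs, labels)

print(pipeline.predict(["helpful and great"]))
```

Four documents will not train anything useful, but the pipeline shows where "attribute selection" actually sits in the flow the diagram gestures at.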
Interesting, but how does this IBM stuff actually work? Who uses it? What’s the payoff from these use cases?
More questions than answers to explain the hard to read diagram, which looks quite a bit like a 1998 Autonomy graphic. I recall being able to read the Autonomy image, however.
Stephen E Arnold, December 30, 2015
Text Analytics Jargon: You Too Can Be an Expert
December 22, 2015
Want to earn extra money as a text analytics expert? Need to drop some cool terms like Latent Dirichlet Allocation at a holiday function? Navigate to “Text Analytics: 15 Terms You Should Know Surrounding ERP.” The article will make clear some essential terms. I am not sure the enterprise resource planning crowd will be up to speed on probabilistic latent semantic analysis, but the buzzword will definitely catch everyone’s attention. If you party in certain circles, you might end up with a consulting job at a mid tier services firm or, better yet, land several million in venture funding to dance with Dirichlet.
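For the ambitious party goer, here is what dancing with Dirichlet looks like in practice. A toy run of Latent Dirichlet Allocation with scikit-learn; the corpus and topic count are invented for illustration, not drawn from the article:

```python
# Toy LDA run: four short documents, two latent topics. Each document
# comes back as a mixture (probability distribution) over the topics.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "stocks bonds markets trading",
    "markets trading stocks portfolio",
    "puppies kittens pets grooming",
    "pets grooming puppies veterinarian",
]

counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)

# Each row sums to 1: a document's mixture over the two latent topics.
print(doc_topics.round(2))
```

Memorize "documents are mixtures of topics; topics are distributions over words" and the consulting gig is half won.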
Stephen E Arnold, December 22, 2015
Search Vendors Under Pressure: Welcome to 2016
December 21, 2015
I read “Silicon Valley’s Cash Party Is Coming to an End.” What took so long? I suppose reality is less fun than fantasy. Why watch a science documentary when one can get lost in Netflix binging?
The write up reports:
Based on interviews with about two dozen venture capitalists and tech investors, 2016 is shaping up to be a year of reckoning for scores of technology start-ups that have yet to prove out their business models and equally challenging for those that raised money at unjustifiably high prices.
Forget the unicorns. There are some enterprise search outfits which have ingested millions of dollars, have convinced investors that big revenue or an HP-Autonomy scale buyout is just around the corner, and that proprietary technology or consulting plus open source will produce gushers of organic revenue. Other vendors have tapped their moms, their nest eggs, and angels who believe in fairies.
I am not sure there is a General Leia Organa to fight Star Wars: The Revenue Battle for most vendors of search and content processing. Bummer. Despite the lack of media coverage for search and content processing vendors, the number of companies pitching information access is hefty. I track about 200 outfits, but many of these are unknown either because they don’t want to be visible or because they lack any substantive “newsy” magnetism.
My hunch is that 2016 may be different from the free money era the article suggests is ending. In 2016, my view is that many vendors will find themselves in a modest tussle with their stakeholders. I worked through some of the search and content processing companies taking cash from folks with deep pockets often filled with other people’s money. (Note that investment totals come from Crunchbase.) Here’s a list of search and content processing vendors who may face stakeholder and investor pressure. The more money ingested, the greater the interest investors may have in getting a return:
- Antidot, $3 million
- Attensity, $90 million
- Attivio, $71 million
- BA Insight, $14 million
- Connotate, $12 million
- Coveo, $69 million
- Digital Reasoning, $28 million
- Elastic (formerly Elasticsearch), $104 million
- Lucidworks, $53 million
- MarkLogic, $175 million
- Perfect Search, $4 million
- Palantir, $1.7 billion
- Recommind, $22 million
- Sinequa, $5 million
- Sophia Ambiance, $5 million
- X1, $12 million.
Then there are the search systems which have been acquired. One assumes these deals will have to produce sustainable revenues in some form:
- Hewlett Packard with Autonomy
- IBM with Vivisimo
- Dassault Systèmes with Exalead
- Lexmark with Brainware and ISYS Search
- Microsoft with Fast Search
- OpenText with BASIS, BRS, Fulcrum, and Nstein
- Oracle with Endeca, InQuira, and Rightnow
- Thomson Reuters with Solcara
Are there sufficient prospects to generate deals large enough to keep these outfits afloat?
There are search and content processing vendors competing for sales with free and open source options and the vendors with proprietary software:
- Ami Albert
- Content Analyst
- Concept Searching
- dtSearch
- EasyAsk
- Exorbyte
- Fabasoft Mindbreeze
- Funnelback
- IHS Goldfire
- SLI Systems
- Smartlogic
- Sprylogics
- SurfRay
- Thunderstone
- WCC Elise
- Zaizi
These search vendors plus many smaller outfits like Intrafind and Srch2 have to find a way to close deals to avoid the fate of Arikus, Convera, Delphes, Dieselpoint, Entopia, Hakia, Kartoo, NuTech Search, and Siderean Software, among others.
Despite the lack of coverage from mid tier consultants and the “real” journalists, the information access sector is moving along. In fact, when one looks at the software options, search and content processing vendors are easily found.
The problem for 2016 will be making sales, generating sustainable revenues, and paying back stakeholders. For many of these companies, the new year will be one which sees a number of outfits going dark. A few will thrive.
Darned exciting times in findability.
Stephen E Arnold, December 21, 2015