March 11, 2014
Attensity, a sentiment analysis and text processing vendor, has been quiet for some months. The company has now released a new version of its flagship product, Analyze, now at version 6.3. The headline feature is “enhanced analytics.”
According to a company news release, Attensity is “the leading provider of integrated, real-time solutions that blend multi-channel Voice of the Customer analytics and social engagement for enterprise listening needs.” Okay.
The new version of Analyze delivers to licensees real-time information about what is trending. The system provides “multi-dimensional visualization that immediately identifies performance outliers in the business that can impact the brand both positively and negatively.” Okay.
The system processes content from more than 150 million blogs and forums, plus Facebook and Twitter. Okay.
As memorable as these features are, here’s the passage that I noted:
Attensity 6.3 is powered by the Attensity Semantic Annotation Server (ASAS) and patented natural language processing (NLP) technology. Attensity’s unique ASAS platform provides unmatched deep sentiment analysis, entity identification, statistical assignment and exhaustive extraction, enabling organizations to define relationships between people, places and things without using pre-defined keywords or queries. It’s this proprietary technology that allows Attensity to make the unknown known.
“To make the unknown known” is a bold assertion. Okay.
I have heard that sentiment analysis companies are running into some friction. The expectations of some licensees have been a bit high. Perhaps Analyze 6.3 will suck up customers who are dissatisfied with other vendors’ sentiment and semantic analytics systems. Making the “unknown known” should cause the world to beat a path to Attensity’s door. Okay.
Stephen E Arnold, March 11, 2014
February 21, 2014
Thomson Reuters has added Twitter sentiment analysis to its Eikon subscription trading platform. Sorting tweets into positive and negative messages based on proprietary language-processing technology, the feature meets the demands of a growing number of traders.
According to Matthew Finnegan’s story “Thomson Reuters Adds Twitter Sentiment Analysis to Eikon Trading Terminal” for Computerworld UK, the analytics tool will show users the volume of both positive and negative messaging relating to specific companies on an hourly basis. Thomson Reuters’ Chief Technology Officer Philip Brittan stressed that the information will be used primarily for research, not as a basis for trading decisions.
Since there have been instances of fake Tweets influencing markets, the caution is probably justified. But the power of social media’s unstructured data cannot be denied, and Eikon is attempting to harness it for subscribers:
“…the Eikon sentiment analysis aims to also make it easier for humans to quickly make sense of masses of social media information currently available, with tens of thousands of tweets about major companies each day.”
It’s one more way we see social media emerging as the dominant media force of the 21st century.
Laura Abrahamsen, February 21, 2014
January 17, 2014
The article on the Lexalytics Blog titled Tagging, Taxonomies, Categorization with Salience provides a guide to using Salience to get the most out of data. The first step, Discovery, involves features like Themes, which extracts proper noun phrases to summarize what the content contains. Step 2 uses Concept Topics, which relies on an ontology built from Wikipedia’s semantic knowledge to relate one word to another.
The article explains how this works:
“Salience will use the relationship between the category samples to tag your data. So every time the word “lion” pops up in your data, that entry will be categorized as “cats”. Every time the word “cheetah” appears, salience will know that this animal belongs to the cat family, and will tag the document as “cats”. This method of categorization is awesome because you do not need to list every single member of the cat family to create this category.”
Step 3 is another way of classifying data: creating a query topic. You input all the words associated with your topic after consulting Wikipedia and a thesaurus, then narrow the search with additional information, including how close one word must be to another for a match to be relevant.
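The concept-topic idea in the quoted passage can be sketched in a few lines: any word belonging to a category’s sample set tags the whole document with that category. This is a toy illustration of the concept only, not the actual Salience API; the category names and word lists are hypothetical.

```python
# Toy illustration of concept-topic tagging: a document is tagged with a
# category whenever any of that category's sample words appears in it.
# (Hypothetical stand-in for the idea -- not the Salience API itself.)

CATEGORY_SAMPLES = {
    "cats": {"lion", "cheetah", "tiger", "leopard"},
    "dogs": {"wolf", "beagle", "terrier"},
}

def tag_document(text):
    """Return the set of categories whose sample words appear in the text."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return {cat for cat, samples in CATEGORY_SAMPLES.items()
            if words & samples}

print(tag_document("A cheetah can outrun a beagle."))  # {'cats', 'dogs'}
```

As the quoted passage notes, the appeal of this approach is that the sample list stands in for the whole category; in a real system the ontology supplies the members so you do not have to enumerate them yourself.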
Chelsea Kerwin, January 17, 2014
January 3, 2014
Clarabridge is a company involved in CEM, or customer experience management. I am not exactly sure what the concept means. My experiences with customer support at T-Mobile or Holland America leave me in a quandary. For companies that want to help me, these outfits are doing everything in their power to drive me elsewhere. I assume that Clarabridge has a solution to this problem. No, not customers, but the costs a company incurs when customers contact them.
I noted that Clarabridge, according to In the Capital, raised $80 million in September 2013. In the Capital asserted that the company may be moving toward an initial public offering. The write up continued:
CEO Sid Banerjee said he hopes his company will soon follow in the footsteps of Cvent, another Northern Virginia company that recently went public.
Interesting stuff. The money and the alleged 2014 IPO, not CEM. With rumors of some push back against some sentiment-centric analytics, Clarabridge may have cracked the code. It does have an additional $80 million.
Stephen E Arnold, January 3, 2014
December 12, 2013
Short honk. I came across an interesting marketing concept in “Diffbot and Semantria Join to Find and Parse the Important Text on the ‘Net (Exclusive).”
Semantria (a company that offers sentiment analysis as a service) participated in a hackathon in San Francisco. The article explains:
To make the Semantria service work quickly, even for text-mining novices, Rogynskyy’s team decided to build a plugin for Microsoft’s popular Excel spreadsheet program. The data in a spreadsheet goes to the cloud for processing, and Semantria sends back analysis in Excel format.
Semantria sponsored a prize for the best app. Diffbot won:
A Diffbot developer built a simple plugin for Google’s Chrome browser that changes the background color of messages on Facebook and Twitter based on sentiment — red for negative, green for positive. The concept won a prize from Semantria, Rogynskyy said. A Diffbot executive was on hand at the hackathon, and Rogynskyy started talking with him about how the two companies could work together.
I like the “sponsor,” “winner,” and “team up” approach. The payoff, according to the article, is “While Semantria and Diffbot technologies continue to be available separately, they can now be used together.”
Sentiment analysis is one of the search submarkets that caught fire and then, based on the churning at firms like Attensity, may be losing some momentum. Marketing innovation may be a goal for other firms offering this functionality in 2014.
Stephen E Arnold, December 12, 2013
November 19, 2013
Sad to say, we have heard rumblings about severe disappointment with Attensity-type and Lexalytics-type sentiment applications. If you want to kick some tires in this interesting search niche, look instead to the open source application TextBlob. OpenShift points out this resource in, “Day 9: TextBlob—Finding Sentiments in Text.” The article is one in an ambitious series by writer Shekhar Gulati, who challenged himself to master one technology a day for a month. Very admirable, sir!
Gulati begins with his experience with sentiment analysis:
“My interest in sentiment analysis is few years old when I wanted to write an application which will process a stream of tweets about a movie, and then output the overall sentiment about the movie. Having this information would help me decide if I wanted to watch a particular movie or not.
“I googled around, and found that Naive Bayes classifier can be used to solve this problem. The only programming language that I knew at the time was Java, so I wrote a custom implementation and used the application for some time. I was lazy to commit the code, so when my machine crashed, I lost the code and application. Now I commit all my code to github, and I have close to 200 public repositories
“In this blog, I will talk about a Python package called TextBlob which can help developers solve this problem. We will first cover some basics, and then we will develop a simple Flask application which will use the TextBlob API.”
The post does indeed cover the basics, including the installation of Python and virtualenv before we can get going with TextBlob. It then takes us through writing an example application and deploying it to the cloud. As he notes above, Gulati has his code safe and sound at Github; the code for this example is posted here, and the js and css files can be found here.
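The Naive Bayes approach Gulati mentions can be sketched without any third-party package. This is a minimal standard-library version of the idea (TextBlob’s `NaiveBayesClassifier` wraps a similar NLTK model); the training sentences and labels are invented for illustration.

```python
# Minimal Naive Bayes sentiment classifier in plain Python -- a sketch of
# the approach described in the post, using only the standard library.
import math
from collections import Counter, defaultdict

class NaiveBayes:
    def fit(self, samples):
        """Train on a list of (text, label) pairs."""
        self.word_counts = defaultdict(Counter)
        self.label_counts = Counter()
        for text, label in samples:
            self.label_counts[label] += 1
            self.word_counts[label].update(text.lower().split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        """Return the label with the highest log-posterior (Laplace smoothing)."""
        scores = {}
        total_docs = sum(self.label_counts.values())
        for label, n in self.label_counts.items():
            total_words = sum(self.word_counts[label].values())
            score = math.log(n / total_docs)
            for w in text.lower().split():
                score += math.log((self.word_counts[label][w] + 1)
                                  / (total_words + len(self.vocab)))
            scores[label] = score
        return max(scores, key=scores.get)

clf = NaiveBayes().fit([
    ("great movie loved it", "pos"),
    ("wonderful acting great plot", "pos"),
    ("terrible movie hated it", "neg"),
    ("boring plot awful acting", "neg"),
])
print(clf.predict("loved the wonderful plot"))  # pos
```

With TextBlob installed, the equivalent one-liner is roughly `TextBlob("loved the wonderful plot").sentiment.polarity`, which returns a score between -1 and 1 rather than a label.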
Cynthia Murrell, November 19, 2013
November 2, 2013
Simon Creasey from Computer Weekly recently reported on the outcome of the latest Twitter firestorm in the article “Failure to Invest in Sentiment Analytics Could Lead to Brand Damage.”
According to the article, a disgruntled British Airways passenger decided to use a paid-for promoted tweet to blast his complaints to thousands of Twitter followers. As you can imagine, the tweet went viral and was shared and re-shared until it received global coverage. While PR disasters are often unavoidable, businesses are developing social media sentiment analysis software to contain them.
The article concludes:
““Monitoring what people are saying about your products and industry can help you design your products and propositions for the future and in that sense Twitter acts as a great market research tool as well as a lead-generation tool,” says Sinclair.
“Similarly, if you monitor what people are saying about your brand it can also help you with customer service and PR. There are many examples of companies who have found themselves under social media attack. Failure to invest in these kinds of tools could easily result in significant damage to a company’s reputation and brand.”
These days, social media is ever expanding and it is impossible to keep track of everything being said about your company’s brand, products, and employees. In order to avoid PR disasters like the one that happened to British Airways, companies should invest in the latest sentiment analysis technologies.
Jasmine Ashton, November 02, 2013
October 21, 2013
Here is something new from Gigaom: “Stanford Researchers To Open Source Model They Say Has Nailed Sentiment Analysis.” Richard Socher and a team from Stanford have created a computer program that can classify the sentiment of a sentence with 85% accuracy. They tested the model on movie reviews with a positive or negative tone. Even more amazing is that Socher and his team are making the project available to everyone. Why not capitalize on it instead? After all, companies have been trying for years to analyze social media and would pay big bucks for said technology.
What makes Socher’s project different from other sentiment software is that it reads whole sentences rather than just words.
“The team then built a new model it calls a Recursive Neural Tensor Network (it’s an evolution of existing models called Recursive Neural Networks), which is what actually processes all the words and phrases to create numeric representations for them and calculate how they interact with one another. When you’re dealing with text like movie reviews that contain linguistic intricacies, Socher explained, you need a model that can really understand how words play off each other to alter the meaning of sentences. The order in which they come, and what connects them, matters a lot.”
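The motivation for a structure-aware model can be shown with a toy example. The sketch below is an illustration of the problem the quoted passage describes, not the Stanford Recursive Neural Tensor Network itself: a bag-of-words scorer misreads negation that even a crude order-aware scorer handles. The polarity lexicon is invented for illustration.

```python
# Toy illustration of why word order matters in sentiment analysis:
# a bag-of-words scorer ignores structure, so "not bad" reads as negative.

POLARITY = {"good": 1, "great": 1, "bad": -1, "dull": -1}

def bag_of_words_score(text):
    """Sum word polarities, ignoring order and structure."""
    return sum(POLARITY.get(w, 0) for w in text.lower().split())

def negation_aware_score(text):
    """Flip the polarity of any word that immediately follows 'not'."""
    score, negate = 0, False
    for w in text.lower().split():
        if w == "not":
            negate = True
            continue
        s = POLARITY.get(w, 0)
        score += -s if negate else s
        negate = False
    return score

sentence = "the plot is not bad"
print(bag_of_words_score(sentence))    # -1: 'bad' counted as negative
print(negation_aware_score(sentence))  # +1: 'not bad' read as positive
```

A recursive model generalizes this far beyond single-word negation, composing meaning up the parse tree so that phrases modify each other the way the article describes.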
Socher hopes to reach 95% accuracy, but the technology will never be 100% accurate because of jargon, idioms, odd word combinations, and slang. The project is making landmark strides in machine learning, logical reasoning, and grammatical analysis.
It means better news for online translators and speech technology, but commercial sentiment analytics vendors may see a decline in their profits.
Whitney Grace, October 21, 2013
August 20, 2013
Calling all software developers, analysts, and systems integrators. Expert System, a leading semantic intelligence developer, is hosting a webinar entitled “What’s Hiding In Your Data? Test Drive Our Semantic API.” The webinar is scheduled for August 28 at 12 pm ET/9 am PT, and registration is now open.
We recommend that professionals interested in transforming content and data streams into actionable and strategic information sign up. A unique offering of this webinar is the live product test drive, so attendees can see how the company’s flagship Cogito Intelligence API works.
The webinar description summarizes Cogito Intelligence API:
Cogito Intelligence API is a unique API that uses the power of semantic processing—Text Mining, Categorization, Tagging—and deep domain vertical knowledge for Intelligence to help analysts access and exploit some of their most strategic sources of information. As the only semantics based system, Cogito Intelligence API provides complete understanding of meaning and context in the processing of data and resolves ambiguities in data more effectively than solutions based on keywords or statistics.
Another unique offering from the Cogito API revolves around corporate security. The solution comes embedded with corporate security measures, which enables businesses to operate all applications with the same confidence that Cogito offers.
Megan Feil, August 20, 2013
August 13, 2013
When enterprise organizations understand the value of unstructured data, and especially the value of it when it is integrated with structured data, what kind of solutions do they utilize? According to a recent study by Altimeter Group reported in “Enterprise Social Data Isolated in Departmental Silos,” 42 percent of the 35 large organizations surveyed were using business intelligence tools. Other areas where data sets converged were market research at 35 percent, CRM at 27 percent, email marketing at 27 percent and sensor data at four percent.
The main argument presented by the article is that as long as people are working in departmental silos, information and data will be first and foremost stored in a way that parallels how people are organized.
We learned more about why some organizations face challenges when integrating data:
The report also revealed it’s not always easy to integrate this data, attributing the issue to the fact that so many organizational departments touch the data, ‘all with varying perspectives on the information,’ the article states, adding: ‘The report also notes the numerous nuances within social data make it problematic to apply general metrics across the board and, in many organizations, social data doesn’t carry the same credibility as its enterprise counterpart.’
We know that one company, Expert System, would have quite the rebuttal to this argument that unstructured data may not be worthy across the board for all departments. Its solution, Cogito Intelligence API, yields insights and actionable information after parsing both structured and unstructured data while using sentiment analysis and natural language processing technologies.
Megan Feil, August 13, 2013