January 7, 2015
Social search was supposed to integrate social media and regular semantic search to create a seamless flow of information. It was one of the major search trends for a while, yet it has not come to fruition. So what happened? TechCrunch reports that it is “Good Riddance To Social Search,” and with good reason, because the combination only cluttered up search results.
TechCrunch explains that Google tried Social Search back in 2009, using its regular search engine and Google+. Now the search engine mogul is not putting much effort into promoting social search. Bing also experimented with adding more social media features, but they are absent from most of its search results today.
Why did this endeavor fail?
“I think one of the reasons social search failed is because our social media “friendships” don’t actually represent our real-life tastes all that well. Just because we follow people on Twitter or are friends with old high school classmates on Facebook doesn’t mean we like the same restaurants they do or share the politics they do. At the end of the day, I’m more likely to trust an overall score on Yelp, for example, than a single person’s recommendation.”
It makes sense considering how many people feel their social media feeds are filled with too much noise. Having search results free of that noise makes them more accurate and helpful to users.
December 31, 2014
An article published on Innography called “Advanced Patent Search” brings to attention how default search software might miss important results, especially when researching patents. It points out that some patents are purposefully phrased to obscure their meaning and relevance so they escape under the radar.
Deeper into the article, the piece turns into a press release highlighting Innography’s semantic patent search. It describes how the software searches descriptive text such as product descriptions, keywords, and patent abstracts. That alone is not too exciting, but this is what makes the software more innovative:
“Innography provides fast and comprehensive metadata analysis as another method to find related patents. For example, there are several “one-click” analyses from a selected patent – classification analysis, citation mining, invalidation, and infringement – with a user-selected similarity threshold to refine the analyses as desired. The most powerful and complete analyses utilize all three methods – keyword search, semantic search, and metadata analysis – to ensure finding the most relevant patents and intellectual property to analyze further.”
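The multi-method approach described in the quote can be illustrated with a small sketch. Everything here is hypothetical: the scores, weights, and threshold are invented for illustration and are not Innography’s actual algorithm; the sketch only shows the general idea of blending keyword, semantic, and metadata relevance scores and filtering by a user-selected similarity threshold.

```python
def combined_score(keyword, semantic, metadata, weights=(0.4, 0.4, 0.2)):
    """Weighted blend of three per-patent relevance scores in [0, 1]."""
    wk, ws, wm = weights
    return wk * keyword + ws * semantic + wm * metadata

def filter_patents(scored_patents, threshold=0.6):
    """Keep patents whose blended score clears the similarity threshold."""
    return [
        (pid, combined_score(k, s, m))
        for pid, (k, s, m) in scored_patents.items()
        if combined_score(k, s, m) >= threshold
    ]

# Hypothetical patent IDs with (keyword, semantic, metadata) scores.
patents = {
    "US1234567": (0.9, 0.8, 0.7),  # strong on all three methods
    "US7654321": (0.2, 0.3, 0.9),  # metadata-only match, filtered out
}
print(filter_patents(patents))
```

Raising or lowering the threshold is the “user-selected similarity threshold” refinement the press release mentions.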
Innography’s patent search serves as an example of how search software needs to compete with comparable products. A simple search is not enough anymore, not in the world of big data. Users demand analytics, insights, infographics, ease of use, and accurate results.
December 31, 2014
IT developers are searching for new ways to leverage semantic search, but according to Search Engine Journal in “12 Things You Need To Do For Semantic Search,” they are all trying to figure out what the user wants. The article offers twelve tips to get back to basics and use semantic search as a tool to drive user adoption.
Some of the tips are quite obvious: think like a user, optimize SEO, and harness social media and local resources. Making a Web site stand out requires going beyond the obvious, however. The article recommends learning more about the Google Knowledge Graph and how it applies to your industry. Schema markup is also important, because search engines rely on it for richer results and it shapes how users see your site on a results page.
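To make the schema markup point concrete, here is a minimal sketch of Schema.org structured data emitted as JSON-LD, the format search engines read from a script tag in the page. The business details are made up for illustration; only the `@context`/`@type` structure follows the JSON-LD convention.

```python
import json

# Hypothetical organization data; replace with your site's real details.
markup = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Widgets",
    "url": "https://example.com",
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "Springfield",
        "addressRegion": "IL",
    },
}

# Search engines expect this inside a <script type="application/ld+json"> tag.
snippet = '<script type="application/ld+json">{}</script>'.format(
    json.dumps(markup, indent=2)
)
print(snippet)
```

Markup like this is what enables the richer results the article alludes to, such as business details appearing directly in a search listing.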
Here is some advice on future-proofing your site:
“Work out how your site can answer questions and provide users with information that doesn’t just read like terms and conditions. Pick the topics, services and niches that apply to your site and start to optimize your site and your content in a way that will benefit users. Users will never stop searching using specific questions, but search engines are actively encouraging them to ask a question or solve a problem so get your services out there by meeting user needs.”
More tips include seeing how results are viewed on search engines other than Google, keeping up with trends, befriending a thesaurus, and being aware that semantic search requires A LOT of work.
December 30, 2014
Despite budget cuts to print materials in academic research, higher education is clamoring for more digital content. You do not need Google Translate to understand that means more revenue for companies in that industry. Virtual Strategy writes that someone wants in on the money: “With Luxid Content Enrichment Platform, Cairn.info Automates The Extraction Of Bibliographic References And The Linking To Corresponding Article.”
Temis, an industry leader in semantic content enrichment solutions for the enterprise, has signed a license and service agreement with CAIRN.info, a publishing portal for the social sciences and humanities that provides students with access to the usual research fare.
Taking note of the changes in academic research, CAIRN.info wants to upgrade its digital records for a more seamless user experience:
“To make its collection easier to navigate, and ahead of the introduction of an additional 20.000 books which will consolidate its role of reference SSH portal, Cairn.info decided to enhance the interconnectedness of SSH publications with semantic enrichment. Indeed, the body of SSH articles often features embedded bibliographic references that don’t include actual links to the target document. Cairn.info therefore chose to exploit the Luxid® Content Enrichment Platform, driven by a customized annotator (Skill Cartridge®), to automatically identify, extract, and normalize these bibliographic references and to link articles to the documents they refer to.”
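The identify-extract-normalize step described in the quote can be sketched in miniature. A real Skill Cartridge is far more sophisticated than this; the regex below is a hypothetical toy that only catches a simple “Author (Year), Title.” citation shape, but it shows the general pipeline of finding embedded references and normalizing them into linkable records.

```python
import re

# Toy pattern for citations shaped like: Bourdieu (1979), La Distinction.
REF_PATTERN = re.compile(
    r"(?P<author>[A-Z][a-zA-Z-]+)\s+\((?P<year>\d{4})\),\s+(?P<title>[^.]+)\."
)

def extract_references(text):
    """Return normalized (author, year, title) tuples found in text."""
    return [
        (m.group("author"), int(m.group("year")), m.group("title").strip())
        for m in REF_PATTERN.finditer(text)
    ]

sample = "As argued in Bourdieu (1979), La Distinction. Later work built on this."
print(extract_references(sample))
```

Each normalized tuple could then be matched against a catalog of articles to create the links the press release describes.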
A round of applause for Cairn.info for realizing that making research easier will encourage more students to use its services. If only academic databases would take ease of use into consideration and upgrade their UI dashboards.
December 12, 2014
Analytics outfit Lexalytics is going all-in on their European expansion. The write-up, “Lexalytics Expands International Presence: Launches Pain-Free Text Mining Customization” at Virtual-Strategy Magazine tells us that the company has boosted the language capacity of their recently acquired Semantria platform. The text-analytics and sentiment-analysis platform now includes Japanese, Arabic, Malay, and Russian in its supported-language list, which already included English, French, German, Chinese, Spanish, Portuguese, Italian, and Korean.
Lexalytics is also setting up servers in Europe. Because of upcoming changes to EU privacy law, we’re told companies will soon be prohibited from passing data into the U.S. Thanks to these new servers, European clients will be able to use Semantria’s cloud services without running afoul of the law.
Last summer, the company courted Europeans’ attention by becoming a sponsor of the 2014 Enterprise Hackathon in Prague. The press release tells us:
“All participants of the Hackathon were granted unlimited access and support to the Semantria API during the event. Nearly every team tried Semantria during the 36 hours they had to build a program that could crunch enough data to be used at the enterprise level. Redmore says, “We love innovative, quick development events, and are always looking for good events to support. Please contact us if you have a hackathon where you can use the power of our text mining solutions, and we’ll talk about hooking you up!”
Lexalytics is proud to have been the first to offer sentiment analysis, auto theme detection, and Wikipedia integration. Designed to integrate with third-party applications, their text analysis software chugs along in the background at many data-related organizations. Founded in 2003, Lexalytics is headquartered in Amherst, Massachusetts.
Cynthia Murrell, December 12, 2014
December 6, 2014
ROI is the end goal for many big data and enterprise projects, and it is refreshing to see some published information about whether companies achieve it, as we recently saw in a Smart Data Collective article, “Text Analytics, Big Data and the Keys to ROI.” According to a study released last year (further discussed in “Text/Content Analytics 2011: User Perspectives on Solutions and Providers”), the reason many businesses do not see positive returns has to do with the planning phase. Many report that they did not start with a clear plan to get there.
The author shares an example from his full-time work in text analytics. One of his clients, focused on sifting through masses of social media data and government application data for suspicious activity, needed a solution for a text-heavy application. The author responded by suggesting a selective cross-lingual process, one which worked with the text in its native language, and only on the text relevant to the topic of interest.
The following happened after the author’s suggestion:
Although he seemed to appreciate the logic of my suggestions and the quality benefits of avoiding translation, he just didn’t want to deal with a new approach. He asked to just translate everything and analyze later – as many people do. But I felt strongly that he’d be spending more and getting weaker results. So, I gave him two quotes. One for translating everything first and analyzing later – his way, and one for the cross-lingual approach that I recommended. When he saw that his own plan was going to cost over a million dollars more, he quickly became very open minded about exploring a new approach.
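The two quotes can be framed as a back-of-the-envelope cost model. All volumes and rates below are invented for illustration (the article gives no figures beyond “over a million dollars more”); the point is simply that filtering for relevance in the native language first shrinks the expensive translation step dramatically.

```python
def translate_all_cost(docs, words_per_doc, rate_per_word):
    """Plan A: translate every document, then analyze."""
    return docs * words_per_doc * rate_per_word

def cross_lingual_cost(docs, words_per_doc, rate_per_word,
                       relevant_fraction, native_filter_cost_per_doc):
    """Plan B: filter in the native language, translate only what's relevant."""
    filtering = docs * native_filter_cost_per_doc
    translating = docs * relevant_fraction * words_per_doc * rate_per_word
    return filtering + translating

# Hypothetical volumes and rates, chosen only to illustrate the gap.
docs, words, rate = 1_000_000, 300, 0.02
plan_a = translate_all_cost(docs, words, rate)
plan_b = cross_lingual_cost(docs, words, rate,
                            relevant_fraction=0.05,
                            native_filter_cost_per_doc=0.10)
print(plan_a, plan_b)
```

Under these made-up numbers, translating everything costs several times more than the selective approach, which mirrors the client’s change of heart in the anecdote.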
It sounds like the author could have suggested a number of similar semantic processing solutions. For example, the Cogito Intelligence API enhances the ability to decipher meaning and insights from a multitude of content sources, including social media and unstructured corporate data. The point is that ROI is out there, and innovative companies like Expert System are enabling it.
Megan Feil, December 6, 2014
December 3, 2014
The article titled “Semantic Technology Provider Ontotext Announces Strategic Hires for Ontotext USA” on PRWeb discusses the expansion of Ontotext in North America. Tony Agresta, Brad Bogle and Tom Endyke joined Ontotext as Senior VP of Worldwide Sales, Director of Marketing and Director of Solutions Architecture, respectively. Ontotext, the semantic search and text-mining leader, has laid out several priorities for the near future, including growing its worldwide marketing efforts and developing relationships. The article quotes Tony Agresta on Ontotext’s product development:
“Our flagship product, GraphDB™ (formerly OWLIM) has been deployed across the globe and is widely known as a highly scalable enterprise RDF triplestore… But what makes Ontotext truly unique are three other essential elements: 1) a full complement of semantic enrichment, integration, curation and authoring tools that extend our platform approach, 2) a large critical mass of semantic engineers, professional services and support teams that represent the most experienced professionals in the world and 3) S4, the Self Service Semantic Suite.”
Ontotext has provided semantic solutions for such companies as the BBC, AstraZeneca, John Wiley & Sons, and The British Museum. Its recent expansion efforts in North America are an attempt to reach more semantic technology users on that continent.
Chelsea Kerwin, December 03, 2014
November 28, 2014
As the Internet grows and evolves, the features users expect from search and content management systems are changing. SearchContentManagement addresses the shift in “Semantic Technologies Fuel the Web Experience Wave.” As the title suggests, writer Geoffrey Bock sees this shift as opening a new area with a new set of demands — “web experience management” (WEM) goes beyond “web content management” (WCM).
The inclusion of metadata and contextual information makes all the difference. For example, the information displayed by an airline’s site should, he posits, be different for a user working at their PC, who may want general information, and someone using their phone in the airport parking lot, where they probably need to check their gate number or see whether their flight has been delayed. (Bock is disappointed that none of the airlines’ sites yet work this way.)
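Bock’s airline example boils down to routing content on context signals. Here is a minimal sketch of that idea; the two-field context and the content labels are simplifications invented for illustration, whereas a real WEM platform would draw on the metadata and device geo-codes the article describes.

```python
from dataclasses import dataclass

@dataclass
class RequestContext:
    device: str        # "desktop" or "mobile"
    at_airport: bool   # e.g. inferred from the device's geolocation

def select_content(ctx):
    """Pick which content block to serve for a given request context."""
    if ctx.device == "mobile" and ctx.at_airport:
        # Traveler in the parking lot: gate numbers and delay status.
        return "gate-and-delay-status"
    # Researcher at a PC: general flight and booking information.
    return "general-flight-information"

print(select_content(RequestContext("mobile", True)))
print(select_content(RequestContext("desktop", False)))
```

The semantic layer enters when deciding which metadata on the content side matches which context on the user side, which is exactly the intelligence Bock says must be added to the underlying information sources.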
The article continues:
“Not surprisingly, to make contextually aware Web content work correctly, a lot of intelligence needs to be added to the underlying information sources, including metadata that describes the snippets, as well as location-specific geo-codes coming from the devices themselves. There is more to content than just publishing and displaying it correctly across multiple channels. It is important to pay attention to the underlying meaning and how content is used — the ‘semantics’ associated with it.
“Another aspect of managing Web experiences is to know when you are successful. It’s essential to integrate tracking and monitoring capabilities into the underlying platform, and to link business metrics to content delivery. Counting page views, search terms and site visitors is only the beginning. It’s important for business users to be able to tailor metrics and reporting to the key performance indicators that drive business decisions.”
Bock supplies an example of one company, specialty-plumbing supplier Uponor, that is making good use of such “WEM” possibilities. See the article for more details on his strategy for leveraging the growing potential of semantic technology.
Cynthia Murrell, November 28, 2014
November 25, 2014
Computers are only as smart as the humans who program them, and they lack the spontaneity that humans possess in droves. This does not mean that computers are not getting “smarter”; in fact, according to Market Wired, their comprehension levels just increased. Market Wired reports on “Expert Systems Extends The Cogito API Portfolio: To Fashion, Advertising, Intelligence, And Media And Publishing Applications.” Expert System is one of the world’s leaders in semantic technology, and the Cogito API has been designed to increase an organization’s use of unstructured data.
” ‘Companies want to better exploit the ever growing amounts of internal and external information,’ said Marco Varone, President and CTO, Expert System. ‘Cogito API is the perfect match for these needs and we’re thrilled that the community of developers and all the organizations can leverage our semantic technology to increase in a significant way the value of their information across any sector, whether that is entering new markets, extending their customer reach, or creating innovative products and services for market intelligence, decision making and strategic planning.’ “
Cogito is available as part of the CORE or PACK packages. Expert System promises that its technology can be tailored to suit any industry and provide an array of semantic technology solutions.
November 21, 2014
SemanticWeb.com posted an article called “Retrieving And Using Taxonomy Data From DBpedia” with an interesting introduction. It explains that DBpedia is a crowd-sourced community whose entire goal is to extract structured information from Wikipedia and share it. The introduction continues that DBpedia already has over three billion facts in the W3C-standard RDF data model, ready for application use.
The data is modeled with the W3C-standard SKOS vocabulary, which the New York Times, the Library of Congress, and other organizations use for their own taxonomies and subject headings. Users can extract the data and implement it in their own RDF applications, with the goal of giving their data more value.
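Retrieving that taxonomy data typically means sending a SPARQL query to DBpedia’s public endpoint. The sketch below only builds the query string (the network call is omitted, and the `Horror_films` category is an illustrative choice echoing the article’s example); `skos:broader` is the SKOS property that links a category to its parent.

```python
def broader_categories_query(category, limit=50):
    """Build a SPARQL query listing the skos:broader parents of a category."""
    return """
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
PREFIX dbc:  <http://dbpedia.org/resource/Category:>

SELECT ?parent WHERE {{
  dbc:{0} skos:broader ?parent .
}}
LIMIT {1}
""".format(category, limit)

print(broader_categories_query("Horror_films"))
```

A query like this could be POSTed to https://dbpedia.org/sparql, and walking `skos:broader` in the other direction descends into the more specific subcategories the article warns about.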
DBpedia is doing users a wonderful service: they do not have to rely on proprietary software to deliver rich taxonomies. The taxonomies can be retrieved under open license terms and offer an instant improvement for content. There is one caveat:
“Remember that, for better or worse, the data is based on Wikipedia data. If you extend the structure of the query above to retrieve lower, more specific levels of horror film categories, you’d probably find the work of film scholars who’ve done serious research as well as the work of nutty people who are a little too into their favorite subgenres.”
Remember, Wikipedia is a good reference tool for gaining an understanding of a topic, but you still need to check more verifiable resources for hard facts.