Associative Semantic Search Is a New Technology, Not a Mental Diagnosis
December 6, 2016
“Associative semantic” sounds like a new diagnosis for the DSM-V (Diagnostic and Statistical Manual of Mental Disorders), but it is actually the name of a search technology that appears to amplify basic semantic search. Aistemos has the rundown on the new technology in the article, “Associative Semantic Search Technology: Omnity And IP.” Omnity is the purveyor of “associative semantic search,” and it makes the standard big data promise:
…the discovery of otherwise hidden, high-value patterns of interconnection within and between fields of knowledge as diverse as science, medicine, engineering, law and finance.
All of the companies centered on big data have this same focus or something similar, so what does Omnity offer that makes it stand out? It proposes to find connections between documents that do not directly correlate with or cite one another. Omnity uses the word “accelerate” to explain how it will discover hidden patterns and expand knowledge. The implication is that semantic search would once again become more powerful and more accurate.
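Omnity does not reveal its method, but one can get a feel for how connections between documents that never cite one another might be surfaced by comparing their vocabularies. Here is a minimal sketch using scikit-learn’s TF-IDF vectors; it illustrates the general idea only and is not Omnity’s algorithm:

```python
# Illustration only: relate documents that never cite one another by comparing
# their vocabulary. This is a generic baseline, not Omnity's (undisclosed) method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = {
    "patent": "A catalytic converter coating reduces nitrogen oxide emissions.",
    "paper":  "Nitrogen oxide chemistry over platinum catalysts at low temperature.",
    "filing": "Quarterly revenue guidance for the consumer electronics segment.",
}

names = list(docs)
vectors = TfidfVectorizer(stop_words="english").fit_transform(docs.values())
scores = cosine_similarity(vectors)

# For each document, report its closest "uncited" neighbor.
for i, name in enumerate(names):
    ranked = sorted(
        ((scores[i, j], names[j]) for j in range(len(names)) if j != i),
        reverse=True,
    )
    print(name, "->", ranked[0])
```

The patent and the paper in this toy example share no citation, yet their overlapping terminology links them. Presumably Omnity layers far more machinery on top of this sort of baseline.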
Any industry that relies on detailed documents would benefit:
Such a facility would presumably enable someone to find references to relevant patents, technologies and prior art on a far wider scale than has hitherto been the case. The legal, strategic and commercial implications of being able to do this, for litigation, negotiation, due diligence, investment and forward planning are sufficiently obvious for us not to need to list them here.
The article suggests those most interested in Omnity would be intellectual property businesses. I can imagine academics would not mind getting their hands on associative semantic search to power their research, and law enforcement could use it to fight crime.
Whitney Grace, December 6, 2016
The Noble Quest Behind Semantic Search
November 25, 2016
A brief write-up at the ontotext blog, “The Knowledge Discovery Quest,” presents a noble vision of the search field. Philologist and blogger Teodora Petkova observed that semantic search is the key to bringing together data from different sources and exploring connections. She elaborates:
On a more practical note, semantic search is about efficient enterprise content usage. As one of the biggest losses of knowledge happens due to inefficient management and retrieval of information. The ability to search for meaning not for keywords brings us a step closer to efficient information management.
If semantic search had a separate icon from the one traditional search has it would have been a microscope. Why? Because semantic search is looking at content as if through the magnifying lens of a microscope. The technology helps us explore large amounts of systems and the connections between them. Sharpening our ability to join the dots, semantic search enhances the way we look for clues and compare correlations on our knowledge discovery quest.
At the bottom of the post is a slideshow on this “knowledge discovery quest.” Sure, it also serves to illustrate how ontotext could help, but we can’t blame them for drumming up business through their own blog. We actually appreciate the company’s approach to semantic search, and we’d be curious to see how they manage the intricacies of content conversion and normalization. Founded in 2000, ontotext is based in Bulgaria.
Cynthia Murrell, November 25, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Word Embedding Captures Semantic Relationships
November 10, 2016
The article on O’Reilly titled “Capturing Semantic Meanings Using Deep Learning” explores word embedding in natural language processing. NLP systems typically encode words as atomic strings, but word embedding offers a richer approach that captures relationships and similarities between words by treating them as vectors. The article posits,
For example, let’s take the words woman, man, queen, and king. We can get their vector representations and use basic algebraic operations to find semantic similarities. Measuring similarity between vectors is possible using measures such as cosine similarity. So, when we subtract the vector of the word man from the vector of the word woman, then its cosine distance would be close to the distance between the word queen minus the word king (see Figure 1).
The article investigates neural network models that avoid the expense of working with large data sets. Word2Vec, with its CBOW and continuous skip-gram architectures, is touted as such a model, and the article goes into great technical detail about the entire process. The final result is that the vectors capture the semantic relationships between the words in the example. Why does this approach to NLP matter? A few applications include sentiment analysis, semantic image search, and predictive business applications.
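To make the quoted vector arithmetic concrete, here is a toy sketch with NumPy. The four-dimensional vectors are invented for illustration; real Word2Vec embeddings are learned from a corpus and have hundreds of dimensions:

```python
# Toy illustration of the woman - man vs. queen - king analogy with made-up vectors;
# real embeddings come from training Word2Vec (CBOW or skip-gram) on a large corpus.
import numpy as np

vec = {
    "man":   np.array([0.9, 0.1, 0.7, 0.2]),
    "woman": np.array([0.9, 0.8, 0.7, 0.2]),
    "king":  np.array([0.2, 0.1, 0.9, 0.8]),
    "queen": np.array([0.2, 0.8, 0.9, 0.8]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The two difference vectors point in nearly the same direction, which is what
# "capturing the semantic relationship" amounts to in this example.
gender_a = vec["woman"] - vec["man"]
gender_b = vec["queen"] - vec["king"]
print(cosine(gender_a, gender_b))  # close to 1.0 for these toy vectors
```

With trained embeddings the two difference vectors are merely close rather than identical, but the cosine comparison works the same way.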
Chelsea Kerwin, November 10, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Ontotext: The Fabric of Relationships
November 9, 2016
Relationships among metadata, words, and other “information” are important. Google’s Dr. Alon Halevy, founder of Transformic, which Google acquired in 2006, has been beavering away in this field for a number of years. His work on “dataspaces” is important for Google and germane to the “intelligence-oriented” systems which knit together disparate factoids about a person, event, or organization. I recall one of his presentations, specifically the PODS 2006 keynote, in which he reproduced a “colleague’s” flow chart that made it easy to see who received a document, who edited it and what changes were made, and to whom recipients forwarded it.
Here’s the diagram from Dr. Halevy’s lecture:
Principles of Dataspace Systems, Slide 4, by Dr. Alon Halevy, delivered on June 26, 2006, at PODS. Note that PODS is an annual ACM database-centric conference.
I found the Halevy discussion interesting.
The Semantic Web: Clarified and Mystified
November 4, 2016
Navigate to “Semantic Web Speculations.” After working through the write up, I believe there are some useful insights in it.
I highlighted this passage:
Reaching to information has been changed quite dramatically from printed manuscripts to Google age. Being knowledgeable less involves memorizing but more the ability to find an information and ability to connect information in a map-like pattern. However, with semantic tools become more prevalent and a primary mode of reaching information changes, this is open to transform.
I understand that the Google has changed how people locate needed information. Perhaps the information is accurate? Perhaps the information is filtered to present a view shaped by a higher actor’s preferences? I agree that the way in which people “reach” information is going to change.
I also noted this statement:
New way of being knowledgeable in the era of semantic web does not necessarily include having the ability to reach an information.
Does this mean that one can find information but not access the source? Does the statement suggest that one does not have to know a fact because awareness that it is there delivers the knowledge payload?
I also circled this endorsement of link analysis, which has been around for decades:
It will be more common than any time that relations between data points will have more visibility and access. When something is more accessible, it brings meta-abilities to play with them.
The idea of converting unstructured information into structured data is a truism. However, the ability to make sense of the available information remains a work in progress, as does the thinking about semantics.
Stephen E Arnold, November 4, 2016
Semantic Search and the Future of Search Engines
November 1, 2016
Google will no longer have one search “engine.” Google will offer mobile search and desktop search. The decision is important because it says to me, in effect, mobile is where it is at. But for how long will the Googlers support desktop search when advertisers have no choice but to embrace mobile and the elegance of marketing to specific pairs of eyeballs?
Against the background of the mobile search and end of privacy shift at the GOOG, I read “The Future of Search Engines – Semantic Search.” To point out that the future of search engines is probably somewhat fluid at the moment is a bit of an understatement.
The write up profiles several less well known information retrieval systems. Those mentioned include:
- BizNar, developed by one of the wizards behind Verity, provides search for a number of US government clients. The system has some interesting features, but I recall that I had to wait as “fast” responses were updated with slower responses.
- DuckDuckGo, a Web search system which periodically mounts a PR campaign about how fast its user base is growing or how its query volume keeps climbing.
- Omnity, allegedly a next generation search system, which “gives companies and institutions of all sizes the ability to instantly [sic] discover hidden patterns of interconnection within and between fields of knowledge as diverse as science, finance, law, engineering, and medicine.” No word about the corpuses in the index, the response time, or how the system compares to good old Dialog.
- Siri, arguably, the least effective of the voice search systems available for Apple iPhone users.
- Wolfram Alpha, the perennial underdog in search and question answering.
- Yippy, which strikes me as a system similar to the one Vivisimo offered before its sale to IBM for about $20 million in 2012. Vivisimo’s clustering was interesting, and I liked the company’s method for sending a well-formed query to multiple Web indexes.
The write up is less about semantic search than doing a quick online search for “semantic search” and then picking a handful of systems to describe. I know the idea of “semantic search” excites some folks, but the reality is that semantic methods have been a part of search plumbing for many years. The semantic search revolution arrived not long after the Saturday Night Fever album hit number one.
Download open source solutions like Lucene/Solr and move on, gentle reader.
Stephen E Arnold, November 1, 2016
Semantiro and Ontocuro Basic
October 20, 2016
Quick update from the Australian content processing vendor SSAP or Semantic Software Asia Pacific Limited. The company’s Semantiro platform now supports the new Ontocuro tool.
Semantiro is a platform which “promises the ability to enrich the semantics of data collected from disparate data sources, and enables a computer to understand its context and meaning,” according to “Semantic Software Announces Artificial Intelligence Offering.”
I learned:
Ontocuro is the first suite of core components to be released under the Semantiro platform. These bespoke components will allow users to safely prune unwanted concepts and axioms; validate existing, new or refined ontologies; and import, store and share these ontologies via the Library.
The company’s approach is to leapfrog the complex interfaces other indexing and data tagging tools impose on the user. The company’s Web site for Ontocuro is at this link.
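The announcement offers no technical detail, so here is a generic sketch of what “pruning unwanted concepts” from an ontology can look like using the open source rdflib library. This is not Ontocuro’s interface; the file name and concept IRI are invented for illustration:

```python
# Generic sketch of pruning a concept from an RDF/OWL ontology with rdflib.
# This is NOT Ontocuro's API; the file and IRI below are made-up examples.
from rdflib import Graph, URIRef

g = Graph()
g.parse("ontology.ttl", format="turtle")  # hypothetical local ontology file

unwanted = URIRef("http://example.org/ontology#ObsoleteConcept")

# Remove every triple in which the unwanted concept appears as subject or object.
g.remove((unwanted, None, None))
g.remove((None, None, unwanted))

g.serialize(destination="ontology_pruned.ttl", format="turtle")
```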
Stephen E Arnold, October 20, 2016
Key Words and Semantic Annotation
September 27, 2016
I read “Exploiting Semantic Annotation of Content with Linked Data to Improve Searching Performance in Web Repositories.” The nub of the paper is, “Better together.” The idea is that key words work if one knows the subject and the terminology required to snag the desired information.
If not, then semantic indexing provides another path. If the conclusion seems obvious, consider that two paths are better for users. The researchers used Elasticsearch. However, the real-world issue is the cost of expertise and the computational cost and time required to add another path. You can download the journal paper at this link.
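The “better together” conclusion can be sketched as a single Elasticsearch query that scores documents on the raw text and on semantic annotations stored alongside it. The index name, field names, and concept URI below are assumptions for illustration, not the paper’s actual schema:

```python
# Hedged sketch: combine a keyword match with a linked-data annotation match in
# one Elasticsearch bool query (elasticsearch-py 8.x style). Index, fields, and
# the concept URI are illustrative assumptions, not the paper's schema.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

query = {
    "bool": {
        "should": [
            # Path 1: classic keyword matching on the full text.
            {"match": {"body": "heart attack treatment"}},
            # Path 2: matching on concept URIs attached at indexing time.
            {"term": {"annotations.concept_uri": "http://example.org/concepts/myocardial_infarction"}},
        ],
        "minimum_should_match": 1,
    }
}

response = es.search(index="web_repository", query=query)
for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("title"))
```

The second path only pays off if someone, or some pipeline, has added the annotations at indexing time, which is exactly the expertise and compute cost flagged above.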
Stephen E Arnold, September 27, 2016
Gleaning Insights and Advantages from Semantic Tagging for Digital Content
September 22, 2016
The article titled “Semantic Tagging Can Improve Digital Content Publishing” on the Aptara Corp. blog reveals the importance of indexing. The article waves the flag of semantic tagging at the publishing industry, which has been pushed into digital content kicking and screaming. The difficulties involved in compatibility across networks, operating systems, and devices are quite a headache. Semantic tagging could help, if only anyone understood what it is. The article enlightens us,
Put simply, semantic markups are used in the behind-the-scene operations. However, their importance cannot be understated; proprietary software is required to create the metadata and assign the appropriate tags, which influence the level of quality experienced when delivering, finding and interacting with the content… There have been many articles that have agreed the concept of intelligent content is best summarized by Ann Rockley’s definition, which is “content that’s structurally rich and semantically categorized and therefore automatically discoverable, reusable, reconfigurable and adaptable.”
The application to the publishing industry is obvious when put in terms of increasing searchability. Any student who has used JSTOR knows the frustrations of searching digital content. It is a complicated process that indexing, if administered correctly, will make much easier. The article points out that authors are competing not only with each other, but also with the endless stream of content being created on social media platforms like Facebook and Twitter. Publishers need to take advantage of semantic markups and every other resource at their disposal to even the playing field.
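It helps to see what a semantic markup actually is. Here is a minimal sketch that wraps an article’s metadata in schema.org JSON-LD, generated from Python; it is a generic illustration of one common tagging form, not the proprietary software the article mentions:

```python
# Minimal illustration of semantic tagging: express content metadata as schema.org
# JSON-LD so downstream systems can discover and reuse it. Generic example only;
# the values are invented and this is not Aptara's tooling.
import json

article_metadata = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "An Example Digital Publication",
    "author": {"@type": "Person", "name": "Jane Author"},
    "about": ["semantic tagging", "digital publishing", "metadata"],
    "datePublished": "2016-09-22",
}

# Embed the block in the page head so the tags travel with the content.
print('<script type="application/ld+json">')
print(json.dumps(article_metadata, indent=2))
print("</script>")
```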
Chelsea Kerwin, September 22, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
There is a Louisville, Kentucky Hidden Web/Dark Web meet up on September 27, 2016.
Information is at this link: https://www.meetup.com/Louisville-Hidden-Dark-Web-Meetup/events/233599645/
Enterprise Search: Pool Party and Philosophy 101
September 8, 2016
I noted this catchphrase: “An enterprise without a semantic layer is like a country without a map.” I immediately thought of this statement made by Polish-American scientist and philosopher Alfred Korzybski:
The map is not the territory.
When I think about enterprise search, I am thrilled to have an opportunity to do the type of thinking demanded in my college class in philosophy and logic. Great fun. I am confident that any procurement team will be invigorated by an animated discussion about representations of reality.
I did a bit of digging and located “Introducing a Graph-based Semantic Layer in Enterprises” as the source of the “country without a map” statement.
What is interesting about the article is that the payload appears at the end of the write up: the magical information representation that will finally make enterprise search work is technology from a company called Pool Party.
Pool Party describes itself this way:
Pool Party is a semantic technology platform developed, owned and licensed by the Semantic Web Company. The company is also involved in international R&D projects, which continuously impact the product development. The EU-based company has been a pioneer in the Semantic Web for over a decade.
From my reading of the article and the company’s marketing collateral, it strikes me that this is a 12-year-old semantic software and consulting company.
The idea is that there is a pool of structured and unstructured information. The company performs content processing and offers such features as:
- Taxonomy editor and maintenance
- A controlled vocabulary management component
- An audit trail to see who changed what and when
- Link analysis
- User role management
- Workflows.
The write up with the catchphrase provides an informational foundation for the company’s semantic approach to enterprise search and retrieval; for example, the company’s four-layer architecture:
The base is the content layer. There is a metadata layer, which in Harrod’s Creek is called “indexing”. There is the “semantic layer”. At the top is the interface layer. The “semantic” layer seems to be the secret sauce in the recipe for information access. The phrase used to describe the value-added content processing is “semantic knowledge graphs.” These, according to the article:
let you find out unknown linkages or even non-obvious patterns to give you new insights into your data.
The system performs entity extraction, supports custom ontologies (a concept designed to make subject matter experts quiver), text analysis, and “graph search.”
Graph search is, according to the company’s Web site:
Semantic search at the highest level: Pool Party Graph Search Server combines the power of graph databases and SPARQL engines with features of ‘traditional’ search engines. Document search and visual analytics: Benefit from additional insights through interactive visualizations of reports and search results derived from your data lake by executing sophisticated SPARQL queries.
To make this more clear, the company offers a number of videos via YouTube.
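For readers who prefer code to videos, the flavor of “executing sophisticated SPARQL queries” against a knowledge graph can be sketched with the open source SPARQLWrapper library. The endpoint, prefixes, and query below are illustrative assumptions, not Pool Party’s documented interface:

```python
# Hedged sketch of graph search via SPARQL. The endpoint and vocabulary are
# invented for illustration and do not describe Pool Party's actual product.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://localhost:8890/sparql")  # hypothetical endpoint
sparql.setReturnFormat(JSON)

# Find documents linked to the concept labeled "risk management".
sparql.setQuery("""
    PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
    PREFIX ex:   <http://example.org/vocab#>
    SELECT ?doc ?title WHERE {
        ?concept skos:prefLabel "risk management"@en .
        ?doc ex:mentionsConcept ?concept ;
             ex:title ?title .
    } LIMIT 10
""")

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["title"]["value"], row["doc"]["value"])
```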
The idea reminded us of the approach taken in BAE NetReveal and Palantir Gotham products.
Pool Party emphasizes, as does Palantir, that humans play an important role in the system. Instead of “augmented intelligence,” the article describes the approach as methods which “combine machine learning and human intelligence.”
The company’s annual growth rate is more than 20 percent. The firm has customers in more than 20 countries. Customers include Pearson, Credit Suisse, the European Commission, Springer Nature, Wolters Kluwer, the World Bank, and “many other customers.” The firm’s projected “Euro R&D project volume” is 17 million (although I am not sure what this 17,000,000 number means). The company’s partners include Accenture, Complexible, Digirati, and EPAM, among others.
I noted that the company uses the catchphrases “Semantic Web Company” and “Linking data to knowledge.”
These catchphrases, I assume, make it easier for some to understand the firm’s graph-based semantic approach. I am still mired in figuring out that the map is not the territory.
Stephen E Arnold, September 8, 2016