Elastic Teams With Startup Insight.io for Semantic Search

August 10, 2018

We’ve learned that a search company we’ve been following with some interest, Elastic, is partnering with a Palo Alto-based startup to develop and integrate semantic search tools. Computer Weekly shares some details in “Elastic Puts ‘Semantic Code Search’ Into Stack With Insight.io.” Writer Adrian Bridgwater tells us:

“Known for its Elasticsearch and Elastic Stack products, Elastic insists that Insight.io’s technology is ‘highly complementary’ to other Elastic use cases and solutions—indeed, Insight.io is built on the Elastic Stack. Insight.io provides an interface to search and navigate the source code that is said to ‘go beyond’ simple free text search. Current programming language support includes C/C++, Java, Scala, Ruby, Python, and PHP. This ‘beyond text search’ function gives developers the ability to search for code pertaining to specific application functionality and dependencies. Essentially it provides IDE-like code intelligence features such as cross-reference, class hierarchy and semantic understanding. The impact of such functionality should stretch beyond exploratory question-and-answer utility, for example, enabling more efficient onboarding for new team members and reducing duplication of work for existing teams as they scale.”
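
The “IDE-like code intelligence” described above can be illustrated with a toy cross-reference index. The sketch below is a minimal illustration of the general idea, not Insight.io’s implementation; it uses Python’s ast module to record where each function is defined and where it is called:

```python
import ast

def build_xref(source: str) -> dict:
    """Map each function name to the lines where it is defined and called."""
    xref = {}
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            xref.setdefault(node.name, {"defs": [], "calls": []})
            xref[node.name]["defs"].append(node.lineno)
        elif isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            xref.setdefault(node.func.id, {"defs": [], "calls": []})
            xref[node.func.id]["calls"].append(node.lineno)
    return xref

sample = '''def greet(name):
    return "hi " + name

def main():
    print(greet("bob"))
'''
print(build_xref(sample))  # greet: defined on line 1, called on line 5
```

The point of parsing rather than free-text matching is that it supports queries about structure, which is what makes searching for code “pertaining to specific application functionality and dependencies” possible; a production system would add class hierarchies and multi-language parsers on top.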

According to Elastic’s CEO, integration of the technology will be familiar to anyone who observed how the company handled past acquisitions, like Opbeat and Prelert. We’re also assured that all of Insight.io’s workers are being welcomed into Elastic’s development fold. Bridgwater notes that, with the startup’s Beijing-based engineering team, Elastic now has its first “formal” dev team located in China. Founded in 2012, Elastic is now based in Mountain View, California.

Cynthia Murrell, August 10, 2018

Mondeca: Another Semantic Search Option

April 9, 2018

Mondeca, based in France, has long been focused on indexing and taxonomy. Now they offer a search platform named, simply enough, Semantic Search. Here’s their description:

“Semantic search systems consider various points including context of search, location, intent, variation of words, synonyms, generalized and specialized queries, concept matching and natural language queries to provide relevant search results. Augment your SolR or ElasticSearch capabilities; understand the intent, contextualize search results; search using business terms instead of keywords.”

A few details from the product page caught my eye. Let’s begin with the search functionality, which the page succinctly describes:

“Navigational search – quickly locate specific content or resource. Informational search – learn more about a specific subject. Compound term processing, concept search, fuzzy search, simple but smart search, controlled terms, full text or metadata, relevancy scoring. Takes care of language, spelling, accents, case. Boolean expressions, auto complete, suggestions. Disambiguated queries, suggests alternatives to the original query. Relevance feedback: modify the original query with additional terms. Contextualize by user profile, location, search activity and more.”

The software includes a GUI for visualizing the semantic data, and features word-processing tools like auto complete and a thesaurus. Results are annotated, with key terms highlighted, and filters provide significant refinement, complete with suggestions. Results can also be clustered by either statistics or semantic tags. A personalized dashboard and several options for sharing and publishing round out my list. See the product page for more details.
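
Several of the listed capabilities, such as synonym handling and taking care of “language, spelling, accents, case,” can be sketched generically. The snippet below illustrates the general technique only, not Mondeca’s implementation; the synonym table is hypothetical:

```python
import unicodedata

# Hypothetical synonym table; a real system would load a curated thesaurus.
SYNONYMS = {"car": ["automobile", "vehicle"], "doctor": ["physician"]}

def normalize(term: str) -> str:
    """Fold accents and case, e.g. 'Café' -> 'cafe'."""
    folded = unicodedata.normalize("NFKD", term)
    return "".join(c for c in folded if not unicodedata.combining(c)).lower()

def expand_query(query: str) -> set:
    """Return the normalized terms of a query plus their synonyms."""
    terms = set()
    for word in query.split():
        base = normalize(word)
        terms.add(base)
        terms.update(SYNONYMS.get(base, []))
    return terms

print(expand_query("Café doctor"))  # {'cafe', 'doctor', 'physician'}
```

Expanding the query before it reaches the index is one common way to layer this kind of semantic behavior on top of an existing full-text engine, which fits the “augment your Solr or Elasticsearch capabilities” pitch.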

Established in 1999, Mondeca delivers pragmatic semantic solutions to clients in Europe and North America, and is proud to have developed their own, successful semantic methodology. The firm is based in Paris. Perhaps the next time our beloved leader, Stephen E Arnold, visits Paris, the company will make time to speak with him. Previous attempts to set up a meeting were for naught. Ah, France.

Cynthia Murrell, April 9, 2018

IBM Socrates Wins 2017 Semantic Web Challenge

January 10, 2018

We learn from the press release “Elsevier Announces the Winner of the 2017 Semantic Web Challenge,” posted at PRNewswire, that IBM has taken the top prize in the 2017 Semantic Web Challenge world cup with its AI project, Socrates. The outfit sponsoring the competition is the number one sci-tech publisher, Elsevier. We assume IBM will be happy with another Jeopardy-type win.

Knowledge graphs were the focus of this year’s challenge, and a baseline representing current progress in the field was established. The judges found that Socrates skillfully wielded natural language processing and deep learning to find and check information across multiple web sources. About this particular challenge, the write-up specifies:

This year, the SWC adjusted the annual format in order to measure and evaluate targeted and sustainable progress in this field. In 2017, competing teams were asked to perform two important knowledge engineering tasks on the web: fact extraction (knowledge graph population) [and] fact checking (knowledge graph validation). Teams were free to use any arbitrary web sources as input, and an open set of training data was provided for them to learn from. A closed dataset of facts, unknown to the teams, served as the ground truth to benchmark how well they did. The evaluation and benchmarking platform for the 2017 SWC is based on the GERBIL framework and powered by the HOBBIT project. Teams were measured on a very clear definition of precision and recall, and their performance on both tasks was tracked on a leader board. All data and systems were shared according to the FAIR principles (Findable, Accessible, Interoperable, Reusable).
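
Precision and recall, the two measures mentioned, have standard definitions: precision is the fraction of extracted facts that are correct, and recall is the fraction of ground-truth facts that were found. A minimal sketch of the scoring follows (illustrative only; the actual benchmark ran on the GERBIL framework, and the triples here are invented):

```python
def precision_recall(extracted: set, ground_truth: set):
    """Score a set of extracted facts against a closed ground-truth set."""
    true_positives = len(extracted & ground_truth)
    precision = true_positives / len(extracted) if extracted else 0.0
    recall = true_positives / len(ground_truth) if ground_truth else 0.0
    return precision, recall

# Hypothetical facts as (subject, predicate, object) triples.
truth = {("Rome", "capitalOf", "Italy"), ("Paris", "capitalOf", "France")}
found = {("Rome", "capitalOf", "Italy"), ("Lyon", "capitalOf", "France")}
print(precision_recall(found, truth))  # (0.5, 0.5)
```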

The Semantic Web Challenge has been going on since 2003, organized in cooperation with the Semantic Web Science Association.

Cynthia Murrell, January 10, 2018

Neural Network Revamps Search for Research

December 7, 2017

Research is a pain, especially when you have to slog through millions of results to find the specific, accurate ones you need.  It takes time and a lot of reading, but neural networks could cut down on the investigation phase.  The Economist examines how AI will benefit research in “A Better Way To Search Through Scientific Papers.”

The Allen Institute for Artificial Intelligence developed Semantic Scholar to aid scientific research.  Its purpose is to surface the scientific papers most relevant to a particular problem.  How does Semantic Scholar work?

Instead of relying on citations in other papers, or the frequency of recurring phrases to rank the relevance of papers, as it once did and rivals such as Google Scholar still do, the new version of Semantic Scholar applies AI to try to understand the context of those phrases, and thus achieve better results.

Semantic Scholar relies on a neural network, a system loosely modeled on biological neural networks that learns through trial and error.  To train it, the Allen Institute team hand-annotated a sample of abstracts.  From that sample, they identified 7,000 medical terms, 2,000 of which could be paired.  The information was fed into Semantic Scholar’s neural network, which then found more relationships in the data.  Through continued trial and error, the network learns more patterns.

The Allen Institute added 26 million biomedical research papers to the 12 million already in the database.  The plan is to make scientific and medical research more readily available not only to professionals, but also to regular people.

Whitney Grace, December 7, 2017

Semantic Scholar Expanding with Biomedical Lit

November 29, 2017

Academic publishing is the black hole of the publishing world.  While it is a prestigious honor to have your work published by a scholarly press or journal, it will not have a high circulation.  One reason is that academic material is locked behind expensive paywalls; another is that papers are not indexed well.  Tech Crunch has some good news for researchers: “Allen Institute For AI’s Semantic Scholar Adds Biomedical Papers To Its AI-Sorted Corpus.”

The Allen Institute for AI started Semantic Scholar as an effort to index scientific literature with NLP and other AI algorithms.  Semantic Scholar will now include biomedical texts in the index.  There is far too much content available for individuals to read and index by hand.  AI helps catalog papers and create keywords by scanning the entire text, pulling out key themes, and filing each paper under the right topic.
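
The kind of keyword pulling described, scanning a text and surfacing its key themes, can be approximated crudely with term frequencies. This sketch is an illustrative stand-in, not the NLP pipeline Semantic Scholar actually uses:

```python
from collections import Counter
import re

STOPWORDS = {"the", "a", "of", "and", "in", "to", "is", "for", "on", "with"}

def keywords(text: str, k: int = 3) -> list:
    """Rank non-stopword terms by frequency as crude topic keywords."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(k)]

abstract = ("Norepinephrine production in the brain responds to the drug "
            "byproduct; norepinephrine levels rose in trials of the drug.")
print(keywords(abstract))  # 'norepinephrine' and 'drug' rank first
```

Real systems replace raw frequency with linguistic analysis so that, say, a minor finding buried in an abstract still surfaces, but the shape of the task, text in, ranked themes out, is the same.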

There’s so much literature being published now, and it stretches back so far, that it’s practically impossible for a single researcher or even a team to adequately review it. What if a paper from six years ago happened to note a slight effect of a drug byproduct on norepinephrine production, but it wasn’t a main finding, or was in a journal from a different discipline?

Scientific studies are being called into question, especially when the tests are funded by corporate entities.  As we consume more and more information each day, it is important to separate truth from falsehood.  Tools like Semantic Scholar are key to uncovering the truth.  It is too bad it does not receive more attention.

Whitney Grace, November 29, 2017


Veteran Web Researcher Speaks on Bias and Misinformation

October 10, 2017

The CTO of semantic search firm Ntent, Dr. Ricardo Baeza-Yates, has been studying the Web since its inception. In their post, “Fake News and the Power of Algorithms: Dr. Ricardo Baeza-Yates Weighs In With Futurezone at the Vienna Gödel Lecture,” Ntent shares his take on biases online by reproducing an interview Baeza-Yates gave Futurezone at the Vienna Gödel Lecture 2017, where he was the featured speaker. When asked about the consequences of false information spread far and wide, the esteemed CTO cited two pivotal events from 2016, Brexit and the US presidential election.

These were manipulated by social media. I do not mean by hackers – which cannot be excluded – but by social biases. The politicians and the media are in the game together. For example, a non-Muslim attack may be less likely to make the front page or earn high viewing ratings. How can we minimize the amount of biased information that appears? It is a problem that affects us all.

One might try to make sure people get a more balanced presentation of information. Currently, it’s often the media and politicians that cry out loudest for truth. But could there be truth in this context at all? Truth should be the basis but there is usually more than one definition of truth. If 80 percent of people see yellow as blue, should we change the term? When it comes to media and politics the majority can create facts. Hence, humans are sometimes like lemmings. Universal values could be a possible common basis, but they are increasingly under pressure from politics, as Theresa May recently stated in her attempt to change the Magna Carta in the name of security. As history already tells us, politicians can be dangerous.

Indeed. The biases that concern Baeza-Yates go beyond those that spread fake news, though. He begins by describing presentation bias—the fact that one’s choices are limited to that which suppliers have, for their own reasons, made available. Online, “filter bubbles” compound this issue. Of course, Web search engines magnify any biases—their top results provide journalists with research fodder, the perceived relevance of which is compounded when that journalist’s work is published; results that appear later in the list get ignored, which pushes them yet further from common consideration.

Ntent is working on ways to bring folks with different viewpoints together on topics on which they do agree; Baeza-Yates admits the approach has its limitations, especially on the big issues. What we really need, he asserts, is journalism that is bias-neutral instead of polarized. How we get there from here, even Baeza-Yates can only speculate.

Cynthia Murrell, October 10, 2017

European Tweets Analyzed for Brexit Sentiment

September 28, 2017

The folks at Expert System demonstrate their semantic intelligence chops with an analysis of sentiments regarding Brexit, as expressed through tweets. The company shares their results in their press release, “The European Union on Twitter, One Year After Brexit.” What are Europeans feeling about that major decision by the UK? The short answer—fear. The write-up tells us:

One year since the historical referendum vote that sanctioned Britain’s exit from the European Union (Brexit, June 23, 2016), Expert System has conducted an analysis to verify emotions and moods prevalent in thoughts expressed online by citizens. The analysis was conducted on Twitter using the cognitive Cogito technology to analyze a sample of approximately 160,000 tweets in English, Italian, French, German and Spanish related to Europe (more than 65,000 tweets for #EU, #Europe…) and Brexit (more than 95,000 tweets for #brexit…) posted between May 21 – June 21, 2017. Regarding the emotional sphere of the people, the prevailing sentiment was fear followed by desire as a mood for intensely seeking something, but without a definitive negative or positive connotation. The analysis revealed a need for more energy (action), and, in an atmosphere that seems to be dominated by a general sense of stress, the tweets also showed many contrasts: modernism and traditionalism, hope and remorse, hatred and love.
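
The tallying of prevailing emotions over a tweet sample can be mimicked with a toy emotion lexicon. This is a generic sketch for illustration only; Expert System’s Cogito performs far deeper linguistic analysis, and the lexicon here is invented:

```python
from collections import Counter

# Toy lexicon mapping trigger words to emotions; purely illustrative.
LEXICON = {"afraid": "fear", "worried": "fear", "hope": "desire", "want": "desire"}

def prevailing_emotion(tweets: list) -> str:
    """Return the most frequent lexicon emotion across a tweet sample."""
    counts = Counter()
    for tweet in tweets:
        for word in tweet.lower().split():
            if word in LEXICON:
                counts[LEXICON[word]] += 1
    return counts.most_common(1)[0][0] if counts else "none"

sample = ["Worried about #brexit", "I want a deal #EU", "Afraid for trade #brexit"]
print(prevailing_emotion(sample))  # fear
```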

The piece goes on to parse responses by language, tying priorities to certain countries. For example, those tweeting in Italian often mentioned “citizenship,” while tweets in German focused largely on “dignity” and “solidarity.” The project also evaluates sentiment regarding several EU leaders. Expert System was founded back in 1989, and their Cogito office is located in London.

Cynthia Murrell, September 28, 2017

Bitext and MarkLogic Join in a Strategic Partnership

June 13, 2017

Strategic partnerships are one of the best ways for companies to grow, and diamond-in-the-rough Bitext has formed a brilliant one. According to a recent press release, “Bitext Announces Technology Partnership With MarkLogic, Bringing Leading-Edge Text Analysis To The Database Industry.” Bitext has enjoyed a number of key license deals. The company’s ability to process multi-lingual content with its deep linguistics analysis platform reduces costs and increases the speed with which machine learning systems can deliver more accurate results.


Both Bitext and MarkLogic are helping enterprise companies drive better outcomes and create better customer experiences. By combining their respective technologies, the pair hopes to reduce textual ambiguity in data and produce high-quality data assets for semantic search, chatbots, and machine learning systems. Bitext’s CEO and founder said:

“With Bitext’s breakthrough technology built-in, MarkLogic 9 can index and search massive volumes of multi-language data accurately and efficiently while maintaining the highest level of data availability and security. Our leading-edge text analysis technology helps MarkLogic 9 customers to reveal business-critical relationships between data,” said Dr. Antonio Valderrabanos.

Bitext is capable of conquering the most difficult language problems and creating solutions for consumer engagement, training, and sentiment analysis. Its flagship product is the Deep Linguistics Analysis Platform, favored by Kantar, GfK, Intel, and Accenture. MarkLogic used to be one of Bitext’s clients; now the two are partners and are bound to invent even more breakthrough technology. Bitext takes another step to cement its role as the operating system for machine intelligence.

Whitney Grace, June 13, 2017

Quote to Note: Hate That Semantic Web Stuff

June 8, 2017

I read “JSON-LD and Why I Hate the Semantic Web.”

Here’s the quote I noted:

I hate the narrative of the Semantic Web because the focus has been on the wrong set of things for a long time. That community, who I have been consciously distancing myself from for a few years now, is schizophrenic in its direction. Precious time is spent in groups discussing how we can query all this Big Data that is sure to be published via RDF instead of figuring out a way of making it easy to publish that data on the Web by leveraging common practices in use today. Too much time is spent assuming a future that’s not going to unfold in the way that we expect it to. That’s not to say that TURTLE, SPARQL, and Quad stores don’t have their place, but I always struggle to point to a typical startup that has decided to base their product line on that technology (versus ones that choose MongoDB and JSON on a regular basis).

There you go.

Stephen E Arnold, June 8, 2017

Deep Diving into HTML Employing Semantics

May 31, 2017

HTML, the markup language on which websites are built, can employ semantics to make content easier to search and to understand, especially for those who use assistive technologies.

Web Dev Studios, in an in-depth article titled “Accessibility of Semantics: How Writing Semantic HTML Can Help Accessibility,” says:

Writing HTML is about more than simply “having stuff appear on the page.” Each element you use has a meaning and conveys information to your visitors, especially to those that use assistive technologies to help interpret that meaning for them.

Assistive technologies are used by people who have limited vision or other impairments that prevent them from accessing the web efficiently. If semantic markup is employed, the author argues, people with impairments can access all the features of the web like anyone else.

The author goes on to explain how different HTML tags can be used effectively to help people with visual impairments.
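
A small example of what the article advocates: the first fragment tells assistive technology nothing about page structure, while the second announces each region’s role. This snippet is an illustration of my own, not taken from the article:

```html
<!-- Generic markup: a screen reader hears only anonymous containers -->
<div class="nav">…</div>
<div class="content">…</div>

<!-- Semantic markup: each landmark announces its role -->
<nav aria-label="Main">
  <ul>
    <li><a href="/archive">Archive</a></li>
  </ul>
</nav>
<main>
  <article>
    <h1>Post title</h1>
    <p>Body text.</p>
  </article>
</main>
```

Elements like nav, main, and article are exactly the kind of meaning-bearing tags the quoted passage refers to: they let a screen reader jump straight to the navigation or the main content instead of wading through undifferentiated divs.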

The Web and related technologies are evolving, and they can be called truly inclusive only when people with all types of disabilities are able to use them with equal ease.

Vishal Ingole, May 31, 2017
