IBM Socrates Wins 2017 Semantic Web Challenge
January 10, 2018
We learn from the press release “Elsevier Announces the Winner of the 2017 Semantic Web Challenge,” posted at PRNewswire, that IBM has taken the top prize in the 2017 Semantic Web Challenge with its AI project, Socrates. The outfit sponsoring the competition is Elsevier, the number one sci-tech publisher. We assume IBM will be happy with another Jeopardy-type win.
Knowledge graphs were the focus of this year’s challenge, and a baseline representing current progress in the field was established. The judges found that Socrates skillfully wielded natural language processing and deep learning to find and check information across multiple web sources. About this particular challenge, the write-up specifies:
This year, the SWC adjusted the annual format in order to measure and evaluate targeted and sustainable progress in this field. In 2017, competing teams were asked to perform two important knowledge engineering tasks on the web: fact extraction (knowledge graph population) [and] fact checking (knowledge graph validation). Teams were free to use any arbitrary web sources as input, and an open set of training data was provided for them to learn from. A closed dataset of facts, unknown to the teams, served as the ground truth to benchmark how well they did. The evaluation and benchmarking platform for the 2017 SWC is based on the GERBIL framework and powered by the HOBBIT project. Teams were measured on a very clear definition of precision and recall, and their performance on both tasks was tracked on a leader board. All data and systems were shared according to the FAIR principles (Findable, Accessible, Interoperable, Reusable).
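For readers who want the arithmetic, here is a toy sketch (ours, not the challenge’s GERBIL/HOBBIT benchmarking code) of how precision and recall might be computed for the fact-checking task, assuming facts are treated as subject-predicate-object triples:

# Toy sketch: precision and recall for knowledge graph fact extraction.
# Facts are (subject, predicate, object) triples; names are invented.

def precision_recall(extracted, ground_truth):
    """Compare a system's extracted triples against a hidden gold set."""
    extracted, ground_truth = set(extracted), set(ground_truth)
    true_positives = extracted & ground_truth
    precision = len(true_positives) / len(extracted) if extracted else 0.0
    recall = len(true_positives) / len(ground_truth) if ground_truth else 0.0
    return precision, recall

gold = {("IBM", "headquarteredIn", "Armonk"), ("IBM", "founded", "1911")}
system = {("IBM", "headquarteredIn", "Armonk"), ("IBM", "founded", "1924")}
print(precision_recall(system, gold))  # (0.5, 0.5)

A system that extracts many sloppy facts scores high recall and low precision; one that extracts a few careful facts does the reverse, which is presumably why the leader board tracked both.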
The Semantic Web Challenge has been going on since 2003, organized in cooperation with the Semantic Web Science Association.
Cynthia Murrell, January 10, 2018
Neural Network Revamps Search for Research
December 7, 2017
Research is a pain, especially when you have to slog through millions of results to find specific and accurate information. It takes time and a lot of reading, but neural networks could cut down on the investigation phase. The Economist explains how AI will benefit research in a new article: “A Better Way to Search Through Scientific Papers.”
The Allen Institute for Artificial Intelligence developed Semantic Scholar to aid scientific research. Its purpose is to surface the scientific papers most relevant to a particular problem. How does Semantic Scholar work?
Instead of relying on citations in other papers, or the frequency of recurring phrases to rank the relevance of papers, as it once did and rivals such as Google Scholar still do, the new version of Semantic Scholar applies AI to try to understand the context of those phrases, and thus achieve better results.
Semantic Scholar relies on a neural network, a system loosely modeled on biological neural networks that learns by trial and error. To build the system, the Allen Institute team hand-annotated a sample of abstracts. From that sample, they identified some 7,000 medical terms, 2,000 of which could be paired. The information was fed into the neural network, which then found more relationships based on the data. Through trial and error, the neural network learns ever more patterns.
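The article does not publish the model, but the general recipe is easy to sketch. Below, a toy one-layer network (our own invention, in Python with NumPy) learns by trial and error to label term pairs as related or not:

import numpy as np

# Toy version of the approach described above: each term pair becomes a
# feature vector, the label says whether annotators linked the terms,
# and a one-unit "network" adjusts its weights by trial and error.
# Data and architecture are invented for illustration.
rng = np.random.default_rng(seed=0)
X = rng.normal(size=(2000, 8))              # 2,000 term pairs, 8 features each
hidden_rule = rng.normal(size=8)
y = (X @ hidden_rule > 0).astype(float)     # 1 = the terms are related

w = np.zeros(8)
for _ in range(500):                        # trial...
    p = 1.0 / (1.0 + np.exp(-(X @ w)))      # ...predict...
    w -= 0.1 * X.T @ (p - y) / len(y)       # ...and error-correct (log-loss gradient)

accuracy = ((1.0 / (1.0 + np.exp(-(X @ w))) > 0.5) == y).mean()
print(f"Training accuracy after 500 rounds: {accuracy:.2f}")

The real system is far larger, but the loop is the same: predict, compare against the annotations, nudge the weights.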
The Allen Institute added 26 million biomedical research papers to the 12 million already in the database. The plan is to make scientific and medical research more readily available, not only to professionals but also to regular people.
Whitney Grace, December 7, 2017
Semantic Scholar Expanding with Biomedical Lit
November 29, 2017
Academic publishing is the black hole of the publishing world. While having your work published by a scholarly press or journal is a prestigious honor, it will not enjoy a high circulation. One reason is that academic material is locked behind expensive paywalls; another is that papers are not indexed well. TechCrunch has some good news for researchers: “Allen Institute for AI’s Semantic Scholar Adds Biomedical Papers to Its AI-Sorted Corpus.”
The Allen Institute for AI started Semantic Scholar as an effort to index scientific literature with NLP and other AI algorithms, and the index will now include biomedical texts. There is far too much content available for individuals to read and catalog by hand. AI helps index papers by scanning the full text, pulling out key themes, and filing each paper under the right topics, a process sketched below.
There’s so much literature being published now, and it stretches back so far, that it’s practically impossible for a single researcher or even a team to adequately review it. What if a paper from six years ago happened to note a slight effect of a drug byproduct on norepinephrine production, but it wasn’t a main finding, or was in a journal from a different discipline?
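How might such automatic keyword extraction work? A minimal sketch, using scikit-learn’s TF-IDF weighting as a crude stand-in for Semantic Scholar’s far more sophisticated pipeline:

from sklearn.feature_extraction.text import TfidfVectorizer

# Crude stand-in for an AI indexing pipeline: score every word in a
# paper by TF-IDF and keep the top-weighted terms as keywords.
papers = [
    "Norepinephrine production and drug byproducts in cardiac tissue",
    "Deep learning methods for searching biomedical literature",
    "Cardiac tissue response to beta blockers and norepinephrine",
]
vectorizer = TfidfVectorizer(stop_words="english")
weights = vectorizer.fit_transform(papers)
terms = vectorizer.get_feature_names_out()

for i, paper in enumerate(papers):
    row = weights[i].toarray().ravel()
    top_terms = [terms[j] for j in row.argsort()[::-1][:3]]   # three best terms
    print(f"{paper[:45]} -> {top_terms}")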
Scientific studies are being called into question, especially when the tests are funded by corporate entities. It is important to separate truth from falsehood as we consume more and more information each day. Tools like Semantic Scholar are key to uncovering the truth. It is too bad it does not receive more attention.
Whitney Grace, November 29, 2017
Veteran Web Researcher Speaks on Bias and Misinformation
October 10, 2017
The CTO of semantic search firm Ntent, Dr. Ricardo Baeza-Yates, has been studying the Web since its inception. In their post, “Fake News and the Power of Algorithms: Dr. Ricardo Baeza-Yates Weighs In With Futurezone at the Vienna Gödel Lecture,” Ntent shares his take on online biases by reproducing an interview Baeza-Yates gave Futurezone at the Vienna Gödel Lecture 2017, where he was the featured speaker. When asked about the consequences of false information spread far and wide, the esteemed CTO cited two pivotal events from 2016: Brexit and the US presidential election.
These were manipulated by social media. I do not mean by hackers – which cannot be excluded – but by social biases. The politicians and the media are in the game together. For example, a non-Muslim attack may be less likely to make the front page or earn high viewing ratings. How can we minimize the amount of biased information that appears? It is a problem that affects us all.
One might try to make sure people get a more balanced presentation of information. Currently, it’s often the media and politicians that cry out loudest for truth. But could there be truth in this context at all? Truth should be the basis but there is usually more than one definition of truth. If 80 percent of people see yellow as blue, should we change the term? When it comes to media and politics the majority can create facts. Hence, humans are sometimes like lemmings. Universal values could be a possible common basis, but they are increasingly under pressure from politics, as Theresa May recently stated in her attempt to change the Magna Carta in the name of security. As history already tells us, politicians can be dangerous.
Indeed. The biases that concern Baeza-Yates go beyond those that spread fake news, though. He begins by describing presentation bias—the fact that one’s choices are limited to that which suppliers have, for their own reasons, made available. Online, “filter bubbles” compound this issue. Of course, Web search engines magnify any biases—their top results provide journalists with research fodder, the perceived relevance of which is compounded when that journalist’s work is published; results that appear later in the list get ignored, which pushes them yet further from common consideration.
Ntent is working on ways to bring folks with different viewpoints together on topics on which they do agree; Baeza-Yates admits the approach has its limitations, especially on the big issues. What we really need, he asserts, is journalism that is bias-neutral instead of polarized. How we get there from here, even Baeza-Yates can only speculate.
Cynthia Murrell, October 10, 2017
European Tweets Analyzed for Brexit Sentiment
September 28, 2017
The folks at Expert System demonstrate their semantic intelligence chops with an analysis of sentiments regarding Brexit, as expressed through tweets. The company shares their results in their press release, “The European Union on Twitter, One Year After Brexit.” What are Europeans feeling about that major decision by the UK? The short answer—fear. The write-up tells us:
One year since the historical referendum vote that sanctioned Britain’s exit from the European Union (Brexit, June 23, 2016), Expert System has conducted an analysis to verify emotions and moods prevalent in thoughts expressed online by citizens. The analysis was conducted on Twitter using the cognitive Cogito technology to analyze a sample of approximately 160,000 tweets in English, Italian, French, German and Spanish related to Europe (more than 65,000 tweets for #EU, #Europe…) and Brexit (more than 95,000 tweets for #brexit…) posted between May 21 – June 21, 2017. Regarding the emotional sphere of the people, the prevailing sentiment was fear followed by desire as a mood for intensely seeking something, but without a definitive negative or positive connotation. The analysis revealed a need for more energy (action), and, in an atmosphere that seems to be dominated by a general sense of stress, the tweets also showed many contrasts: modernism and traditionalism, hope and remorse, hatred and love.
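Cogito is proprietary, so we cannot show its internals, but the shape of such a study is simple to sketch in Python; here a tiny invented emotion lexicon stands in for the real classifier:

from collections import Counter

# Back-of-the-envelope version of such a study: bucket tweets by topic
# hashtag and tally emotion labels. Cogito is proprietary, so a tiny
# invented emotion lexicon stands in for the real classifier.
EMOTION_LEXICON = {
    "afraid": "fear", "worried": "fear",
    "hope": "desire", "want": "desire",
}

def emotions(tweet):
    return [EMOTION_LEXICON[word]
            for word in tweet.lower().split() if word in EMOTION_LEXICON]

tweets = [
    "#brexit I am afraid of what comes next",
    "#EU we want a stronger union",
    "#brexit worried about citizenship rules",
]
tally = {"#brexit": Counter(), "#eu": Counter()}
for tweet in tweets:
    for tag in tally:
        if tag in tweet.lower():
            tally[tag].update(emotions(tweet))
print(tally)  # {'#brexit': Counter({'fear': 2}), '#eu': Counter({'desire': 1})}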
The piece goes on to parse responses by language, tying priorities to certain countries. For example, those tweeting in Italian often mentioned “citizenship”, while tweets in German focused largely on “dignity” and “solidarity.” The project also evaluates sentiment regarding several EU leaders. Expert System was founded back in 1989, and their Cogito office is located in London.
Cynthia Murrell, September 28, 2017
Bitext and MarkLogic Join in a Strategic Partnership
June 13, 2017
Strategic partnerships are one of the best ways for companies to grow, and diamond-in-the-rough Bitext has formed a brilliant one. A recent press release announces, “Bitext Announces Technology Partnership With MarkLogic, Bringing Leading-Edge Text Analysis to the Database Industry.” Bitext has already enjoyed a number of key license deals. The company’s ability to process multi-lingual content with its deep linguistics analysis platform reduces costs and increases the speed with which machine learning systems can deliver more accurate results.
Both Bitext and MarkLogic are helping enterprise companies drive better outcomes and create better customer experiences. By combining their respective technologies, the pair hopes to reduce the ambiguity of text data and produce high-quality data assets for semantic search, chatbots, and machine learning systems. Bitext’s founder and CEO said:
“With Bitext’s breakthrough technology built-in, MarkLogic 9 can index and search massive volumes of multi-language data accurately and efficiently while maintaining the highest level of data availability and security. Our leading-edge text analysis technology helps MarkLogic 9 customers to reveal business-critical relationships between data,” said Dr. Antonio Valderrabanos.
Bitext is capable of conquering the most difficult language problems and creating solutions for consumer engagement, training, and sentiment analysis. The company’s flagship product is its Deep Linguistics Analysis Platform, favored by Kantar, GFK, Intel, and Accenture. MarkLogic used to be one of Bitext’s clients; now the two are partners and are bound to produce even more breakthrough technology. Bitext takes another step to cement its role as the operating system for machine intelligence.
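Bitext’s platform is proprietary, but the kind of linguistic normalization that reduces text ambiguity can be illustrated generically; in this sketch the open-source spaCy library stands in for Bitext’s technology:

import spacy

# Generic illustration of the sort of linguistic normalization that
# reduces text ambiguity before indexing. Bitext's platform is
# proprietary; the open-source spaCy library stands in here.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

for sentence in ["The books were booked for review",
                 "Book a flight and read a book"]:
    doc = nlp(sentence)
    # Part-of-speech tags separate verb "book" from noun "book"; lemmas
    # collapse inflected forms ("books", "booked") onto one index term.
    print([(t.text, t.lemma_, t.pos_) for t in doc if t.is_alpha])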
Whitney Grace, June 13, 2017
Quote to Note: Hate That Semantic Web Stuff
June 8, 2017
I read “JSON-LD and Why I Hate the Semantic Web.”
Here’s the quote I noted:
I hate the narrative of the Semantic Web because the focus has been on the wrong set of things for a long time. That community, who I have been consciously distancing myself from for a few years now, is schizophrenic in its direction. Precious time is spent in groups discussing how we can query all this Big Data that is sure to be published via RDF instead of figuring out a way of making it easy to publish that data on the Web by leveraging common practices in use today. Too much time is spent assuming a future that’s not going to unfold in the way that we expect it to. That’s not to say that TURTLE, SPARQL, and Quad stores don’t have their place, but I always struggle to point to a typical startup that has decided to base their product line on that technology (versus ones that choose MongoDB and JSON on a regular basis).
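For anyone who has not met it, JSON-LD grafts linked-data meaning onto ordinary JSON. A minimal, invented example:

{
  "@context": "https://schema.org",
  "@type": "Person",
  "@id": "https://example.com/people/jane",
  "name": "Jane Doe",
  "worksFor": {
    "@type": "Organization",
    "name": "Example Corp"
  }
}

Strip the @-prefixed keys and any JSON tool still reads it, which is exactly the “common practices in use today” argument the author is making.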
There you go.
Stephen E Arnold, June 8, 2017
Deep Diving into HTML Employing Semantics
May 31, 2017
HTML, the markup language on which websites are built, can employ semantics to make content easier to search and to understand, especially for those who use assistive technologies.
Web Dev Studios, in an in-depth article titled “Accessibility of Semantics: How Writing Semantic HTML Can Help Accessibility,” says:
Writing HTML is about more than simply “having stuff appear on the page.” Each element you use has a meaning and conveys information to your visitors, especially to those that use assistive technologies to help interpret that meaning for them.
Assistive technologies are used by people who have limited vision or other impairments that keep them from accessing the Web efficiently. If semantics is employed, according to the author of the article, impaired people too can access all features of the Web like everyone else.
The author goes on to explain how different HTML tags can be used effectively to help people with visual impairments.
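The article’s own examples are worth reading; the gist, in a small sketch of our own:

<!-- Anonymous boxes: assistive technologies learn nothing from these -->
<div class="nav">...</div>
<div onclick="save()">Save</div>

<!-- Semantic markup: landmarks and roles are announced to the user -->
<nav aria-label="Main">...</nav>
<button type="button" onclick="save()">Save</button>
<main>
  <article>
    <h1>Article title</h1>
    <p>Body text...</p>
  </article>
</main>

A screen reader announces the second version as navigation, a button, and an article with a heading; the first version is just anonymous boxes.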
The Web and related technologies keep evolving, but they can be called truly inclusive only when people with all types of impairments can use them with equal ease.
Vishal Ingole, May 31, 2017
Semantic Platform Aggregates Scientific Information
May 1, 2017
A new scientific repository is now available from a prominent publisher, we learn from “GraphDB, Leading Semantic Database from Ontotext, Powers Springer Nature’s New Linked Open Data Platform” at PRWeb. (We note the word “leading” in the title; who verifies this assertion? Just curious.) The platform, dubbed SciGraph, aggregates data from Springer Nature and its academic partners. The press release specifies:
Thanks to semantic technologies, Linked Open Data and the GraphDB semantic database, all these data are connected in a way which semantically describes and visualizes how the information is interlinked. GraphDB’s capability to seamlessly integrate disparate data silos allows Springer Nature SciGraph to comprise metadata from journals and articles, books and chapters, organizations, institutions, funders, research grants, patents, clinical trials, substances, conference series, events, citations and reference networks, Altmetrics, and links to research datasets.
The dataset is released under an international Creative Commons license, and can be downloaded (by someone with the appropriate technical knowledge) here.
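For the technically inclined, querying a Linked Open Data platform like SciGraph typically happens over SPARQL. A sketch in Python with the SPARQLWrapper library; the endpoint URL and the sg: ontology terms below are placeholders of our own, not Springer Nature’s documented ones:

from SPARQLWrapper import SPARQLWrapper, JSON

# Sketch of pulling article titles from a Linked Open Data platform.
# The endpoint URL and sg: ontology terms are invented placeholders;
# consult SciGraph's documentation for the real ones.
sparql = SPARQLWrapper("https://example.org/scigraph/sparql")
sparql.setQuery("""
    PREFIX sg: <https://example.org/scigraph/ontology/>
    SELECT ?article ?title WHERE {
        ?article a sg:Article ;
                 sg:title ?title .
    }
    LIMIT 5
""")
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["title"]["value"])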
An early explorer of semantic technology, Ontotext was founded in 2000. Based in Bulgaria, the company keeps their North American office in New Jersey. Ontotext’s client roster includes big names in publishing, government agencies, and cultural institutions.
Cynthia Murrell, May 1, 2017
Keyword Search vs. Semantic Search for Patent Seekers
April 26, 2017
The article on BIP Counsels titled “An Introduction to Patent Search, Keyword Search, and Semantic Searches” offers a brief overview of the differences between keyword and semantic search. It is geared toward inventors and technologists in the early stages of filing a patent application. The article states,
If an inventor proceeds with the patent filing process without performing an exhaustive prior art search, it may hamper the patent application at a later point, such as in the prosecution process. Hence, a thorough search involving all possible relevant techniques is always advisable… Search tools such as ‘semantic search assistant’ help the user find similar patent families based on freely entered text. The search method is ideal for concept based search.
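For the curious, the difference is easy to demonstrate. In this toy Python sketch (entirely our own), tiny hand-made “concept” vectors stand in for a trained embedding model:

import math

# Toy contrast between keyword and semantic prior-art search. The tiny
# hand-made "concept" vectors stand in for a trained embedding model;
# patents and numbers are invented.
patents = {
    "US-001": "A fastening device using a helical threaded shaft",
    "US-002": "A screw coupling mechanism for panels",
    "US-003": "A beverage container with a vacuum insulated wall",
}
EMBED = {"screw": (1.0, 0.1), "threaded": (0.9, 0.2), "shaft": (0.8, 0.3),
         "fastener": (0.95, 0.1), "beverage": (0.0, 1.0), "container": (0.1, 0.9)}

def keyword_search(query, docs):
    # Exact-term match: blind to synonyms such as screw vs. threaded shaft.
    terms = set(query.lower().split())
    return [pid for pid, text in docs.items() if terms & set(text.lower().split())]

def embed(text):
    vecs = [EMBED[w] for w in text.lower().split() if w in EMBED]
    return tuple(sum(c) / len(vecs) for c in zip(*vecs)) if vecs else (0.0, 0.0)

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

print(keyword_search("screw fastener", patents))   # ['US-002'] only
q = embed("screw fastener")
for pid, text in patents.items():
    print(pid, round(cosine(q, embed(text)), 2))   # US-001 now scores high too

The keyword pass misses the “threaded shaft” patent because it never says “screw”; the semantic pass ranks it near the top.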
Ultimately, the article fails to go beyond the superficial when it comes to keyword and semantic search. One almost suspects that the authors (BananaIP patent attorneys) want to send potential DIY patent researchers running into their office for help. Yes, terminology plays a key role in keyword searches. Yes, semantic search can help narrow the focus and improve the relevancy of the results. If you want more information than that, you may want to visit a patent attorney. But probably not the ones who wrote this article.
Chelsea Kerwin, April 26, 2017