Newspaper Search: Another Findability Challenge

October 13, 2020

Here is an interesting project any American-history enthusiast could get lost in for hours: Newspaper Navigator. I watched the home page’s 15-minute video, which gives both an explanation of the search tool’s development and a demo. Then I played around with the tool for a bit. Here’s what I learned.

Created by Ben Lee, the Library of Congress's 2020 Innovator in Residence, Newspaper Navigator is built on the Library of Congress's Chronicling America, a search portal that allows one to perform keyword searches across 16 million pages of historical US newspapers using optical character recognition. That is a great resource, but how does one run an image search on such a collection? That's where Newspaper Navigator comes in.

Lee used thousands of annotations of the collection's visual content, created by volunteers in the Library's 2017 Beyond Words crowdsourcing initiative, to train a machine learning model to recognize visual content. (He released the dataset, which can be found here. He also created hundreds of prepackaged downloadable datasets organized by year and type: maps, photos, cartoons, and so on.) The Newspaper Navigator search interface allows users to plumb 1.5 million high-confidence, public-domain photos from newspapers published between 1900 and 1963. The app allows for standard keyword search, but the juicy bit is the ability to search by visual similarity using machine learning.

Lee walks us through two demo searches, one beginning with the keyword “baseball” and another with “sailboat.” One can filter by location and time frame, then hover over results for more information on the image itself and the paper in which it appeared. Select images to build a Collection, then tap into the AI prowess via the “Train my AI Navigators” button. The AI uses the selected images to generate a page of similar images, each with a clickable + or – button. Clicking these tells the tool which images are more, and which are less, like what is desired. Click “Train my AI Navigators” again to generate a more refined page, and repeat until only (or almost only) the desired type of image appears. When that happens, clicking the Save button creates a URL that takes one right back to those results later.
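The +/– training loop Lee demos is classic relevance feedback. As a rough illustration only (hypothetical code, not the app's actual implementation), a Rocchio-style update moves a query vector toward images the user marked + and away from images marked –, then re-ranks the collection:

```python
import math

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def refine_query(query, positives, negatives, alpha=1.0, beta=0.75, gamma=0.25):
    """Rocchio update: pull the query toward + examples, push away from - examples."""
    pos_c = centroid(positives) if positives else [0.0] * len(query)
    neg_c = centroid(negatives) if negatives else [0.0] * len(query)
    return [alpha * q + beta * p - gamma * n
            for q, p, n in zip(query, pos_c, neg_c)]

def rank(query, collection):
    """Return (label, vector) items most similar to the refined query first."""
    return sorted(collection, key=lambda item: cosine(query, item[1]), reverse=True)

# One round of feedback: the user clicked + on sailboats, - on a steamship.
query = [0.5, 0.5]
positives = [[0.9, 0.1], [0.8, 0.2]]   # feature vectors of "+" images
negatives = [[0.1, 0.9]]               # feature vectors of "-" images
query = refine_query(query, positives, negatives)
results = rank(query, [("sailboat", [0.85, 0.15]), ("steamship", [0.2, 0.8])])
```

Repeating the loop with fresh + and – clicks, as the demo does, simply applies `refine_query` again to the already-refined query.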

Lee notes that machine learning is not perfect, and some searches lend themselves to refinement better than others. He suggests starting again and retraining if results start refining themselves in the wrong direction.

The video acknowledges the potential marginalization issues in any machine learning project. Click on the Data Archaeology tab to read about Lee’s investigation of the Navigator dataset and app from the perspective of bias.

I suggest curious readers play around with the search app for themselves. Lee closes by inviting users to share their experiences on Twitter @LC_Labs with the hashtag #NewspaperNavigator.

Cynthia Murrell, October 13, 2020

Does Search Breed Fraud?

October 11, 2020

The question “Does search breed fraud?” is an interesting one. As far as I know, none of the big-time MBA case studies address the topic. If any academic discipline knows about fraud, I believe it is those very same big-time MBA programs.

“South Korean Search Giant Fined US $23 Million for Manipulating Results” reveals that Naver has channeled outfits with a penchant for results fiddling. The write up states:

The Korea Fair Trade Commission, the country’s antitrust regulator, ruled Naver altered algorithms on multiple occasions between 2012 and 2015 to raise its own items’ rankings above those of competitors.

Naver responded, according to the write up, with this statement:

“The core value of search service is presenting an outcome that matches the intentions of users,” it said in a statement, adding: “Naver has been chosen by many users thanks to our focus on this essential task.”

The pressure to generate revenue is significant. Engineers, who may be managed loosely or steered by the precepts of high school science club thought processes, can make tiny changes with significant impact. As a result, the manipulation can arise from a desire to get promoted, be cool, or land a bonus.

The implications can be profound. Google may be less evil because fiddling is an emergent behavior.

Stephen E Arnold, October 11, 2020

An Oath from the Past: Yahoo Web Scale Semantic Search

October 9, 2020

I spotted a link to “Yahoo: Web Scale Semantic Search.” You remember Yahoo, don’t you? This is the outfit with the data breaches, the clueless business model, and the sale to the Baby Bell Verizon. The executives, too, are memorable: Marissa, Alex, Terry, and the Peanut Butter memo man.

The link displayed a presentation by Edgar Meij, a laborer in Yahoo Labs. The topic was an X ray view from Mt. Olympus intended to reveal Web scale semantic search.

The slide deck requires 62 clicks to traverse. There are many riches in the presentation. I want to highlight three of these, and invite you to make your own determination of these insights.

First, there is a “text” accompanying the deck. It contains a riot of jargon and buzzwords. In fact, I have saved the text, despite a portion being truncated, as a glossary of Web search jive talk; for example “s a sequence of terms s 2 s drawn from the set S, s ? Multinomial(?s) e a set of entities e 2 e.” (I knew you would experience the same thrill I did when I read this line.) True to Slideshare’s attention to detail, the text for slides 32 to 62 has been removed. Great loss indeed.

Second, Yahoo cares about knowledge. Consider this diagram:


The idea is a pipeline: knowledge acquisition (I assume this means scraping and indexing Web site content), knowledge integration (creating a big index), and knowledge consumption (maybe finding something when a user or system sends a query to the search subsystem). The key point is that “knowledge” is important. How about that? Yahoo search was focusing on knowledge? Is that why Yahoo floundered in search for many, many years before bowing to failure?

Third, Yahoo’s approach to semantic search requires humans. Here’s proof:


When Yahoo announced Vin Diesel was dead, he was alive. So much for smart software.

Why am I mentioning this blast from the past?

Knowledge came up in my interview/discussion with Dr. Stavros Macrakis. We tackled the difference between Web search and enterprise search. This Yahoo deck illustrates that talking about knowledge is one thing. Delivering useful results to a user is quite another.

Jargon in search and retrieval has made more progress than search technology itself. That’s why the Yahoo deck could have been crafted yesterday by one of the search vendors still chasing a huge market in the era of Lucene/Solr and “good enough” information access.

Stephen E Arnold, October 9, 2020

Comparison of Elasticsearch, Solr, and Sphinx

October 8, 2020

Search and retrieval underpins most policeware and intelware systems. Open source search software has made life more challenging for vendors of proprietary enterprise search solutions. There are versions of an “in depth” enterprise search analysis like this available for thousands of dollars from marketers like Adroit Market Research, sporting this title:

Enterprise Search Market Demand Analysis and Projected huge Growth by 2025| IBM Corp, Coveo Corp., Polyspot & Sinequa Inc., Expert System Inc., HP Autonomy, Lucidworks, Esker Software Corp., Dassault Systemes Inc., Perceptive Software Inc., and Marklogic Inc.

Notice that none of the search vendors in “Elasticsearch vs. Solr vs. Sphinx: Best Open Source Search Platform Comparison” appears in the Adroit Market Research report. That’s important for one reason: Open source search has driven vendors of proprietary systems into a corner. What’s even more intriguing is that some vendors of enterprise search like Attivio and IBM Corp. use open source search technology but take pains to avoid revealing the plumbing under the house trailer.

The comparison is, for now, available without charge online, courtesy of Greenice. This firm, based in Ukraine, is what I would describe as a DevOps consulting and services company. It’s a mash up of advisory, coding, and technical deliverables.

The comparison contains some useful information; for example:

  • Inclusion of examples of the search systems’ visualization capabilities
  • Examples of organizations using each of the three systems compared
  • Presentation of the analyst’s perception of strengths and weaknesses of each system
  • References to machine learning in the context of the three systems.

What caught my attention is the disconnect between the expensive and somewhat overenthusiastic for-fee study about search and this free analysis.

Many of the problems in search are a result of what may be described as “overenthusiastic marketing.” This approach to jazzing up what information retrieval technology can accomplish has resulted in at least one jail sentence for an enterprise search entrepreneur and may yet yield jail time for other companies’ executives who practice razzmatazz sales techniques.

The principal value of the free comparison is that it does a good job of walking through basic information without the Madison Avenue hucksterism. Net net: A free write up with some helpful information.

Stephen E Arnold, October 8, 2020

DarkCyber for October 6, 2020, Now Available

October 6, 2020

The October 6, 2020, DarkCyber covers one security-related story and offers a special feature about the differences between Web search and enterprise search. The loss of 250 million user accounts in December 2019 illustrated the flaws in the Microsoft approach to online security. What was the company’s response? The firm researched the event and prepared an after-action report. The document makes clear that Microsoft’s approach to security allowed bad actors to obtain access to proprietary data. Furthermore, the report provides one more example that high-visibility cyber security systems may not work as advertised. What’s the difference between Web search and enterprise search? Dr. Stavros Macrakis and Stephen E Arnold explore this subject. Dr. Macrakis worked at Lycos, Google, and other high-profile search firms. Arnold is the author of Successful Enterprise Search Management and The New Landscape of Search. The extracts from their discussion provide fresh insights into the challenges of information retrieval in today’s mobile-centric world. You can view the program on YouTube.

Kenny Toth, October 6, 2020

Google and Search Results: A Stay at Home Mother Explains

October 1, 2020

DarkCyber has a sneaking suspicion that Google wants to deliver the answers to users’ queries in a manner which:

  • Prevents a user from obtaining non-Google “approved” information
  • Requires zero latency between presenting an answer to a query and a click on an advertiser’s message
  • Appeals to a statistically significant percentage of users who accept the precept “Google makes one’s research easy”.

Other people do not agree with DarkCyber; for example, Google executives testifying before Congress or Googlers who are paid to explain how wonderful Google really, really is.

“Google Wants to Eliminate Search Engine. Introducing Semantic Search” is an interesting and possibly disconcerting write up. One of the DarkCyber researchers noted this passage for me:

The experts at Google want to eliminate the one thing that Google does best – searching.

Since Google is perceived as search, what’s up? What’s up is that Google wants to deliver the “correct” answer directly to a thumb typing user or an impressionable child using a Chromebook and Google approved information to learn.

The write up explains with cheery stay-at-home-mom panache:

With semantic searching, the algorithm working behind the search engine will understand the meaning of the search term and hence provide meaningful results, saving users a lot of hassle and a lot of time. In short, the new search is going to allow users to smart search for everything on the web.

Yep, smart search. Everything. The Web.

Sounds perfect, particularly for Google and its ad-centric approach to services.

Plus, users benefit because search engine optimization will no longer force the ever-smart Google search system to display irrelevant results:

Google is just preventing website owners to dig out the most-searched for keywords and then bulk them on to their websites.

DarkCyber finds the “just” an interesting word. Google just wants to make users better informed. How thoughtful. Research becomes little more than accepting what Google determines is optimal. Why read? Why compare? Why analyze? Google knows best: Best in terms of controlling access to information, shaping perceptions, and selling ads. Yes, that “best” may mean that an advertiser paid to get the click.

The DarkCyber researcher put an exclamation mark next to this passage:

In order to calm website owners down, Google has provided that the new algorithm is going to consist of an improved form of the same algorithm which will provide an opportunity to work towards legitimate optimization instead of spamming.

Yes, be calm. Accept what is delivered.

Stephen E Arnold, October 1, 2020

Microsoft Bing: Assertions Versus Actual Search Results

September 25, 2020

DarkCyber read “Introducing the Next Wave of AI at Scale innovations in Bing.” The write up explains a number of innovations. These enhancements will make finding information via Bing easier, better, faster, and generally more wonderful.

The main assertions DarkCyber noted are:

Smarter suggestions. The idea is that one does not know how to create a search query. Bing will know what the user wants.

More ideas. Bing will display questions other people (presumably just like me) ask. Bing keeps track and shows the popular questions. Yep, popular.

Translations. Send a query with mixed languages, and Bing will answer in your language. No more of that copying and pasting into Google Translate.

Highlighting. This is Bing’s yellow marker. The system will highlight what you need to read. The method? “A zero-shot fashion.” No, DarkCyber does not know what this means. But one can ask Bing, right?

Let’s give Bing a whirl and run the same query against Googzilla.

Here’s a DarkCyber Bing query related to research we are now doing:

Black Sage open source

And here’s the result:


Black Sage is an integrator engaged in the development of counter unmanned aerial systems. The firm’s marketing collateral emphasizes that its platform is open. DarkCyber wants to know if the system uses open source methods for compromising a targeted UAS (drone). Bing focuses on a publishing company.

Now Google:


The first result from the Google is a pointer to the company. The remainder of the results are crazy and wacky like the sneakers Mr. Brin wore to Washington about a decade ago to meet elected officials. Crazy? Nope, Sillycon Valley.

DarkCyber uses both Bing and Google. Why did Google produce something sort of related to our query and Bing missed the corn hole entirely?

The answer is that Bing does not process a user’s search history as effectively as the Google. All the fancy words from Microsoft cannot alter a search result. DarkCyber is amused by Google and Microsoft. We are skeptical of each system.

Key points:

  • Microsoft is chasing technology instead of looking for efficient ways to tailor results to a user.
  • Microsoft wants to prove that its approach is more knowledge-centric. Google just wants to sell ads. Giving people something they have already seen is fine with Mother Google.
  • Microsoft, like Google, has lost sight of the utility of providing “stupid mode” and “sophisticated mode” for users. Let users select how a query should be matched to the content in the index.

To sum up, Google has a global share of Web search in the 85 percent range. Bing is an also-ran. Perhaps a less academic approach, a deeper index, and functional user controls would be helpful?

Stephen E Arnold, September 25, 2020

Web Scraping: Better Than a Library for Thumbtypers

September 22, 2020

Modern research. The thumbtyper way.

Nature explains the embrace of a technology that, when misused, causes concern in the post, “How We Learnt to Stop Worrying and Love Web Scraping.” The efficiency and repeatability of automation are a boon to researchers Nicholas J. DeVito, Georgia C. Richards, and Peter Inglesby, who write:

“You will end up with a sharable and reproducible method for data collection that can be verified, used and expanded on by others — in other words, a computationally reproducible data-collection workflow. In a current project, we are analyzing coroners’ reports to help to prevent future deaths. It has required downloading more than 3,000 PDFs to search for opioid-related deaths, a huge data-collection task. In discussion with the larger team, we decided that this task was a good candidate for automation. With a few days of work, we were able to write a computer program that could quickly, efficiently and reproducibly collect all the PDFs and create a spreadsheet that documented each case. … [Previously,] we could manually screen and save about 25 case reports every hour. Now, our program can save more than 1,000 cases per hour while we work on other things, a 40-fold time saving. It also opens opportunities for collaboration, because we can share the resulting database. And we can keep that database up to date by re-running our program as new PDFs are posted.”

The authors explain how scraping works to extract data from web pages’ HTML and describe how to get started. One could adopt a pre-made browser extension or write a customized scraper: a challenging task, but one that gives users more control. See the post for details on that process.

With either option, we are warned, there are several considerations to keep in mind. For some projects, those who possess the data have created an easier way to reach it, so scraping would be a waste of time and effort. Conversely, other websites hold their data so tightly that it is not available directly in the HTML or is guarded by protections like captchas. Those considering scraping should also take care to avoid making requests of a web server so rapidly that it crashes (an accidental DoS attack) or running afoul of scraping rules or licensing and copyright restrictions. The researchers conclude by encouraging others to adopt the technique and share any custom code with the community.
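The workflow the Nature authors describe (harvest PDF links from pages of coroners’ reports, then fetch them politely) can be sketched with Python’s standard library alone. This is an illustrative sketch, not the researchers’ actual program; the HTML shape and URLs are invented:

```python
import time
from html.parser import HTMLParser
from urllib.parse import urljoin

class PdfLinkParser(HTMLParser):
    """Collect href values on <a> tags that point at PDF files."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        for name, value in attrs:
            if name == "href" and value and value.lower().endswith(".pdf"):
                # Resolve relative links against the page's base URL.
                self.links.append(urljoin(self.base_url, value))

def extract_pdf_links(html, base_url):
    """Return absolute URLs of every PDF linked from an HTML page."""
    parser = PdfLinkParser(base_url)
    parser.feed(html)
    return parser.links

def polite_fetch_all(urls, fetch, delay=1.0):
    """Fetch each URL via the supplied fetch() callable, pausing between
    requests so the scraper does not hammer the web server."""
    pages = []
    for url in urls:
        pages.append(fetch(url))
        time.sleep(delay)   # rate limit: avoid an accidental DoS
    return pages
```

Passing `fetch` in as a callable keeps the parsing logic testable offline; in real use it would wrap `urllib.request.urlopen` (after checking robots.txt and the site’s terms, per the cautions above).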

Cynthia Murrell, September 22, 2020

Podcast Search: Illuminating the Rich Media Darkness

September 22, 2020

Search for podcasts is broken. We learn of a possible first step toward a fix from Podnews in the brief write-up, “The Podfather Launches a New, Open Podcast Directory.” James Cridland writes:

“‘The digital ad space is watching as the bottom falls out of their data collection methods. But how exactly does Apple’s Age of Privacy impact podcasting?’ – in today’s Sounds Profitable, our new adtech newsletter, with Podsights.

“Adam Curry has launched a new, open podcast directory for app developers, working with developer Dave Jones. Speaking on a new podcast, Podcasting 2.0, Curry and Jones worry that ‘Apple is starting to tinker with their directory’, and say that the company is ‘a very centralized private entity that is controlling pretty much what everybody considers the default yellow pages for podcasting.’ His alternative, The Podcast Index, promises that the ‘core, categorized index will always be available for free, for any use’. You can sign up to be a developer on their developer portal. We support this initiative. As of today, Podnews uses The Podcast Index for our main podcast search.”

The index is a simple type-and-search format. It seems to work acceptably well on Podnews’ database, though it could use a little relevance refinement. Will the open directory attract developers and reach the larger segment? We hope this or another solution is implemented soon.

Cynthia Murrell, September 22, 2020

Making Search Fair: An Interesting Idea

September 11, 2020

Search rankings on Google, Bing, and various other search engines have not been fair for years. SEO tricks fall flatter than a pancake and the best way to get to the top of Google search results is with ads. The only thing Google does that is somewhat decent is that it marks paid ads in search results. EurekAlert! shares that there is a “New Tool Improves Fairness Of Online Search Rankings.”

Cornell University researchers developed a new tool, FairCo, that improves the fairness of online rankings without sacrificing relevance or usefulness. The premise is that users look only at the first page of search results and miss other relevant items, which builds bias into the results. FairCo aims to work the way one would make a decision with all the facts in hand:

“ ‘If you could examine all your choices equally and then decide what to pick, that may be considered ideal. But since we can’t do that, rankings become a crucial interface to navigate these choices,’ said computer science doctoral student Ashudeep Singh, co-first author of “Controlling Fairness and Bias in Dynamic Learning-to-Rank,” which won the Best Paper Award at the Association for Computing Machinery SIGIR Conference on Research and Development in Information Retrieval. ‘For example, many YouTubers will post videos of the same recipe, but some of them get seen way more than others, even though they might be very similar,” Singh said. “And this happens because of the way search results are presented to us. We generally go down the ranking linearly and our attention drops off fast.’”

FairCo is supposed to give comparable exposure to all relevant results of a search and avoid preferential treatment. This addresses the unfairness in current search algorithms, which are notorious for bias.
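The paper’s actual dynamic learning-to-rank method is more involved, but a toy controller in the same spirit can show the idea: assume a logarithmic position-bias model (attention drops off fast down the ranking, as Singh notes), track each item’s cumulative exposure, and boost items whose exposure share lags their relevance share. All names and constants here are illustrative, not from the paper:

```python
import math

def exposure_at(position):
    """Assumed position-bias model: attention decays logarithmically (1-based)."""
    return 1.0 / math.log2(position + 1)

def fair_rankings(relevance, rounds=200, lam=0.1):
    """Produce a sequence of rankings whose cumulative exposure tracks merit.

    Each round, an item's score is its relevance plus a correction that grows
    while the item's share of total exposure lags its share of total relevance.
    """
    items = list(relevance)
    total_merit = sum(relevance.values())
    exposure = {i: 0.0 for i in items}
    rankings = []
    for t in range(1, rounds + 1):
        total_exp = sum(exposure.values()) or 1.0
        def score(i):
            # Exposure deficit: merit share minus exposure share so far.
            deficit = relevance[i] / total_merit - exposure[i] / total_exp
            return relevance[i] + lam * t * deficit
        ranking = sorted(items, key=score, reverse=True)
        for pos, i in enumerate(ranking, start=1):
            exposure[i] += exposure_at(pos)
        rankings.append(ranking)
    return rankings, exposure

# Two near-identical recipe videos: without the correction, "a" would sit
# on top forever and soak up nearly all the attention.
rankings, exposure = fair_rankings({"a": 0.51, "b": 0.50})
```

After a few rounds the two items start trading the top slot, so their accumulated exposure ends up roughly proportional to their nearly equal relevance, which is the effect the Cornell work is after.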

With the number of biased media outlets and the disinformation spreading across the Internet and social media platforms, FairCo could help address this problem. The challenge would be getting large companies like Google and Facebook to adopt the tool, though an injection of Google or Facebook money to help the Cornell researchers expand FairCo might make it work. However, paid ads always trump search results.

Whitney Grace, September 11, 2020
