An Exploration of Search Code

April 9, 2021

Software engineer Bart de Goede posts an exercise in search coding on his blog: “Building a Full-Text Search Engine in 150 Lines of Python Code.” He has pared the thousands upon thousands of lines of code found in proprietary search systems down to the essentials. Of course, those platforms have many more bells and whistles, but this gives one an idea of the basic components. Navigate to the write-up for the technical details and code snippets, which I do not pretend to follow completely. The headings de Goede walks us through include Data, Data preparation, Indexing, Analysis, Indexing the corpus, Searching, Relevancy, Term frequency, and Inverse document frequency. He concludes:
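To give a flavor of what those headings cover, the core data structure is an inverted index: after an analysis step that tokenizes and normalizes the text, each term is mapped to the set of documents containing it. What follows is my own minimal sketch of that pattern, not de Goede's actual code; the class, the names, and the stopword list are illustrative assumptions.

import re
from collections import defaultdict

# A tiny illustrative stopword list; a real analyzer would use a fuller one.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "in", "to", "is"}

def analyze(text):
    # Tokenize on word characters, lowercase, and drop stopwords.
    tokens = re.findall(r"\w+", text.lower())
    return [t for t in tokens if t not in STOPWORDS]

class Index:
    def __init__(self):
        self.documents = {}            # doc_id -> original text
        self.index = defaultdict(set)  # term -> set of doc_ids

    def index_document(self, doc_id, text):
        self.documents[doc_id] = text
        for token in analyze(text):
            self.index[token].add(doc_id)

    def search(self, query):
        # Boolean AND: return documents containing every query term.
        sets = [self.index.get(t, set()) for t in analyze(query)]
        return set.intersection(*sets) if sets else set()

idx = Index()
idx.index_document(1, "London is the capital of England")
idx.index_document(2, "Paris is the capital of France")
print(idx.search("capital France"))  # -> {2}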

“You can find all the code on Github, and I’ve provided a utility function that will download the Wikipedia abstracts and build an index. Install the requirements, run it in your Python console of choice and have fun messing with the data structures and searching. Now, obviously this is a project to illustrate the concepts of search and how it can be so fast (even with ranking, I can search and rank 6.27m documents on my laptop with a ‘slow’ language like Python) and not production grade software. It runs entirely in memory on my laptop, whereas libraries like Lucene utilize hyper-efficient data structures and even optimize disk seeks, and software like Elasticsearch and Solr scale Lucene to hundreds if not thousands of machines. That doesn’t mean that we can’t think about fun expansions on this basic functionality though; for example, we assume that every field in the document has the same contribution to relevancy, whereas a query term match in the title should probably be weighted more strongly than a match in the description. Another fun project could be to expand the query parsing; there’s no reason why either all or just one term need to match.”
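The ranking he refers to is the classic tf-idf scheme: a document scores higher when a query term occurs often within it (term frequency) but rarely across the corpus (inverse document frequency). Here is a rough, hypothetical sketch of how such a score could be layered on the index above; it is not the post's actual implementation, and de Goede's version differs in its details.

import math

def score(idx, doc_id, terms):
    # Sum tf * idf over the query terms for one document.
    doc_tokens = analyze(idx.documents[doc_id])
    total = len(idx.documents)
    s = 0.0
    for term in terms:
        tf = doc_tokens.count(term)                # term frequency
        df = len(idx.index.get(term, ()))          # document frequency
        idf = math.log(total / df) if df else 0.0  # inverse document frequency
        s += tf * idf
    return s

def ranked_search(idx, query):
    terms = analyze(query)
    if not terms:
        return []
    hits = set.union(*[idx.index.get(t, set()) for t in terms])  # boolean OR
    return sorted(hits, key=lambda d: score(idx, d, terms), reverse=True)

His suggested expansions would slot in right here: weight a query term match in the title more heavily than one in the description when computing the score, or extend the query parser so it can choose between the AND of the first sketch and the OR used above.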

For more information, de Goede recommends curious readers navigate to MonkeyLearn’s post “What is TF-IDF?” and to an explanation of “Term Frequency and Weighting” posted by Stanford’s NLP Group. Happy coding.

Cynthia Murrell, April 9, 2021

