Yarchives: a Multi-Topic Repository of Information

October 5, 2021

Here is a useful resource, a repository of Usenet newsgroup articles collected and maintained by computer scientist Norman Yarvin. The Yarchive houses articles on twenty-two wide-ranging topics, from air conditioning to jokes to space. We note a couple that might be of interest to today’s assorted revolutionaries (or those tasked with countering them): explosives and nuclear technologies. Hmm. Perhaps there is a need to balance unfettered access to information with wisdom. The site’s About page reveals some details about Yarvin’s curation process. He writes:

“Articles are not put up here immediately; only a year or three after first saving them do I look at them again, sort them out, and make index pages for them. (By that time I’ve forgotten enough of them to make them worth rereading — and if I find they are not worth rereading, I discard them.) I’ve largely automated the making of index pages; the programs I’ve written for it (mostly in Perl) are available as a tar file (tools.tar). The making of the links to search for Google’s copy of each article is also automated. If it stops working because Google changed their query syntax, please let me know. Links that are on the Message-ID line of the header should link straight to the article in question; other links (from articles I’ve lost the Message-ID for) should invoke a search. For articles from the linux-kernel mailing list, links that are on the Original-Message-ID line of the header are to kernel.org’s copy of the article. (They used to be to GMANE, but that service went away.) Some changes have been made to these articles, but nothing that would destroy any possible meaning.”

The project seems to be quite the hobby for Yarvin. He goes on to describe the light corrections he makes, the conversion of articles to the UTF-8 character encoding, and his detailed process of checking which URLs are still worth keeping and making the valuable ones clickable.
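Link triage of this sort is easy to approximate. Below is a minimal Python sketch (our illustration; Yarvin’s actual tooling is the Perl in tools.tar) that keeps a URL clickable only if it still responds:

```python
import urllib.request
import urllib.error

def url_is_live(url, timeout=10):
    """Return True if the URL answers a HEAD request with a non-error status."""
    try:
        request = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except (urllib.error.URLError, ValueError):
        # Dead host, timeout, or malformed URL: not worth a clickable link.
        return False

print(url_is_live("https://yarchive.net/"))
```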

Readers may want to peruse the Yarchive and/or bookmark it for future use. Information relevant to many of our readers can be found here, like files on computers, electronics, and security. More generally useful topics are also represented; cars, food, and houses, for example. Then there are the more specialized topics, like bicycles, chemistry, and metalworking. There is something here for everyone, it seems.

Cynthia Murrell, October 5, 2021

Free Resource on AI for Physical Simulations

September 27, 2021

The academics at the Thuerey Group have made a useful book on artificial intelligence operations and smart software applications available online. The Physics-Based Deep Learning Book is a comprehensive yet practical introduction to machine learning for physical simulations. Included are code examples presented via Jupyter notebooks. The book’s introduction includes this passage:

“People who are unfamiliar with DL methods often associate neural networks with black boxes, and see the training processes as something that is beyond the grasp of human understanding. However, these viewpoints typically stem from relying on hearsay and not dealing with the topic enough. Rather, the situation is a very common one in science: we are facing a new class of methods, and ‘all the gritty details’ are not yet fully worked out. However, this is pretty common for scientific advances. … Thus, it is important to be aware of the fact that – in a way – there is nothing magical or otherworldly to deep learning methods. They’re simply another set of numerical tools. That being said, they’re clearly fairly new, and right now definitely the most powerful set of tools we have for non-linear problems. Just because all the details aren’t fully worked out and nicely written up, that shouldn’t stop us from including these powerful methods in our numerical toolbox.”
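To make the “just another numerical tool” point concrete, here is a toy sketch of our own (not an example from the book, which delivers its code as Jupyter notebooks): a small PyTorch network fitted to a non-linear function by plain gradient descent.

```python
import torch

# Toy illustration: a neural network as a numerical tool for a non-linear
# problem -- approximating u(x) = sin(x) from sampled data.
torch.manual_seed(0)
x = torch.linspace(-3.14, 3.14, 200).unsqueeze(1)
u = torch.sin(x)

model = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    optimizer.zero_grad()
    loss = torch.mean((model(x) - u) ** 2)  # ordinary mean-squared error
    loss.backward()
    optimizer.step()

print(f"final MSE: {loss.item():.2e}")
```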

This virtual tome would be a good place to start doing just that. Interested readers may want to begin studying it right away or bookmark it for later. Also see the Thuerey Group’s other publications for more information on numerical methods for deep-learning physics simulations.

Cynthia Murrell, September 27, 2021

Simple Error for a Simple Link to the Simple Sabotage Field Manual

September 13, 2021

I love Silicon Valley type “real” news. I spotted a story called “The 16 Best Ways to Sabotage Your Organization’s Productivity, from a CIA Manual Published in 1944.” What’s interesting about this story is that the US government publication has been in circulation for many years. The write up states:

The “Simple Sabotage Field Manual,” declassified in 2008 and available on the CIA’s website, provided instructions for how everyday people could help the Allies weaken their country by reducing production in factories, offices, and transportation lines. “Some of the instructions seem outdated; others remain surprisingly relevant,” reads the current introduction on the CIA’s site. “Together they are a reminder of how easily productivity and order can be undermined.”

There’s one tiny flaw — well, two actually — in this Silicon Valley type “real” news report.

First, the URL provided in the source document is incorrect. To download the document, navigate to this page or use this explicit link: https://www.hsdl.org/?view&did=750070. We verified both links at 0600 on September 13, 2021.

And the second:

The write up did not mention the time-wasting potential of a Silicon Valley type publication providing incorrect information via a bad link. Mr. Donovan, the author of the document, noted on page 30:

Make mistakes in quantities of material when you are copying orders. Confuse similar names. Use wrong addresses.

Silly? Maybe just another productivity killer from the thumbtyping generation.

Stephen E Arnold, September 13, 2021

The British Library Channels University Microfilms and the Google

September 1, 2021

A quick Google search can yield pertinent information, but that information is hard to find. Why? Google search results are clogged with paid ads and Web sites that are not authoritative sources. Newspapers are still a valuable resource, especially newspapers from before the Internet’s invention. The brilliant news, as IanVisits shares, is that “The British Library Puts 1 Million Newspaper Pages Online For Free.”

The British Newspaper Archive contains over forty-four million newspaper pages ranging from 1600 to 2009. The newspapers come from British and Irish sources and represent over 10% of the newspapers the British Library owns. Around half a million pages are added to the archive every month.

The newspapers currently require a subscription, but all funds go toward scanning more pages into the archive. The British Newspaper Archive has released one million pages for free and plans to add another million over the next four years. Not all pages will be free, however:

“They won’t add all papers, as they say that while they consider newspapers made before 1881 to be in the public domain, that does not mean they will make all pre-1881 digitized titles available for free, as the archive is dependent on subscriptions to cover its costs. If like me you do a lot of historical research, then the cost of the full subscription is not that bad – just £80 a year for the full archive.”

The archive offers 158 free newspaper titles ranging from 1720 to 1880. All of the newspapers that fall within this date range are in the public domain.

It would be awesome if all newspapers were available for free on the Internet, but money makes the world go round. Libraries and universities offer free access to newspaper databases, and subscription services, in most cases, are not that expensive.

The good news is that researchers may have access to news stories infused with some of that good old “real” journalistic wire tapping.

Whitney Grace, September 1, 2021

The Internet Archive Dons a Scholar Skin

April 23, 2021

Some of today’s biggest social faux pas are believing everything on the Internet, clicking the first link in search results, and buying items from questionable Internet ads. It is easy to forget that search engines like Google and Bing are for-profit operations that put paid links at the top of search results. Even worse, much scientific and scholarly information is locked behind expensive paywalls.

Wikipedia is often believed to be a reliable source, but despite the dedication of wiki editors the encyclopedia is not 100% accurate. There are free scholarly databases, and newspapers often have their archives online, but that information is not widely known.

Thankfully the Internet Archive is fairly famous. The Internet Archive is a non-profit digital library that provides users with access to millions of free books, music, Web sites, videos, and software. They also allow users to peruse old Web sites with the Wayback Machine.

The Internet Archive recently introduced a brand new service that is sheer genius: Internet Archive Scholar. It is described as:

“This full text search index includes over 25 million research articles and other scholarly documents preserved in the Internet Archive. The collection spans from digitized copies of eighteenth century journals through the latest Open Access conference proceedings and pre-prints crawled from the World Wide Web.”

Why did no one at the Internet Archive think of doing this before? It is a brilliant idea that consolidates millions of scholarly articles and other materials without paywalls, university matriculation, or a library card. Most of the information available through Internet Archive Scholar would otherwise remain buried in Google search results or scattered across the Web, like old books gathering dust on library shelves.

Internet Archive Scholar is still in beta, and further enhancements would be a positive step.

Whitney Grace, April 23, 2021

IA Scholar: A Reminder That Existing Online Resources Are Not Comprehensive

March 10, 2021

We spotted this announcement from the Internet Archive in “Search Scholarly Materials Preserved in the Internet Archive.”

IA Scholar is a simple, access-oriented interface to content identified across several Internet Archive collections, including web archives, archive.org files, and digitized print materials. The full text of articles is searchable for users that are hunting for particular phrases or keywords. This complements our existing full-text search index of millions of digitized books and other documents on archive.org. The service builds on Fatcat, an open catalog we have developed to identify at-risk and web-published open scholarly outputs that can benefit from long-term preservation, additional metadata, and perpetual access. Fatcat includes resources that may be useful to librarians and archivists, such as bulk metadata dumps, a read/write API, command-line tool, and file-level archival metadata. If you are interested in collaborating with us, or are a researcher interested in text analysis applications, we have a public chat channel or can be contacted by email at info@archive.org.
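The Fatcat API mentioned above invites scripting. Here is a hedged sketch of a DOI lookup; the endpoint, parameter, and response shape are assumptions based on Fatcat’s v0 API as documented around this time, so verify against the current docs before relying on it:

```python
import requests  # third-party: pip install requests

# Assumed Fatcat v0 lookup endpoint; the DOI below is the doi.org
# documentation placeholder, not a real record.
response = requests.get(
    "https://api.fatcat.wiki/v0/release/lookup",
    params={"doi": "10.1000/xyz123"},
    timeout=30,
)
response.raise_for_status()
release = response.json()
print(release.get("title"))
```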

I ran several queries. The system is set up to respond to a conference name, but free text entries worked fine; for example, NLP. Here are the results:

[Image: IA Scholar search results for the query “NLP”]

Worth checking out. In my experience, people who are “experts” in online research often forget that no online service is up to date, comprehensive, and set up to deliver full text. One other point: corrections to online content are rarely, if ever, made. Business Dateline, produced by the Courier-Journal and Louisville Times in the early 1980s, was one of the first commercial databases to include corrections. Thumbtypers may not care, but that’s the zippy modern world.

Stephen E Arnold, March 10, 2021

Comments about Web Search: Prompted by a Hacker News Thread

November 13, 2020

I spotted a Web search related thread on Hacker News. You can locate the comments at this link. Several observations:

  1. Metasearch. Confusion seems to exist between dedicated Web search systems like Bing, Google, and Yandex and metasearch systems like DuckDuckGo and Startpage, which repackage others’ indexes. Dedicated Web search systems require considerable effort to build, yet there is little appreciation for the depth of the crawl, the index updating cycle, and similar factors.
  2. Competitors to Google. The comments present a list of search systems which are relatively well known. Omitted are some other services; for example, iSeek, Swisscows, and 50kft.
  3. Bias. The comments do not highlight some of the biases of Web search systems; for example, when pages are reindexed, which pages sit on a slow or never-update cycle, which are blacklisted, and which are processed against a stop word list.

So what?

  1. Many profess to be experts at finding information online. The comments suggest that perception is different from reality.
  2. Locating content on publicly accessible Web sites is more difficult than at any other time in my professional career in the online information sector.
  3. Locating relevant information is increasingly time consuming because predictive, personalized, and wisdom of crowd results don’t work; for example, run this query on any of the search engines:

Voyager search

Did your results point to Voyager Labs’ system, the UK HR company’s search engine, a venture capital firm, or a Lucene repackager in Orange County? What about Voyager patents? What about Voyager customers?

How can one disambiguate when the index scope is unknown, entity extraction is almost non-existent, and deduplication is almost laughable? Real time? Ho ho ho.
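Deduplication, at least, is not mysterious. A toy sketch of one standard approach, Jaccard similarity over word shingles, shows how near-duplicate pages can be flagged (the two snippets are invented examples):

```python
# Near-duplicate detection via Jaccard similarity over k-word shingles.
def shingles(text, k=3):
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 0.0

doc1 = "Voyager announces a new search platform for enterprise analysts"
doc2 = "Voyager announces a new search platform for enterprise analysts today"
print(f"{jaccard(doc1, doc2):.2f}")  # a high score flags a likely duplicate
```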

One can do this work manually. Who wants to volunteer for that? The most innovative specialized search vendors try to automate the process. Some of these systems are helpful; most are not.

Is search getting better? Rerun that Voyager search. See for yourself.

Without field codes, Boolean, and a mechanism to search across publicly accessible content domains, Web search reveals its shortcomings to those who care to look.

Not many look, including professionals at some of the better known Web search outfits.

Stephen E Arnold, November 13, 2020

Science: Just Delete It

September 10, 2020

The information in “Dozens of Scientific Journals Have Vanished from the Internet, and No One Preserved Them” may remind some people that the “world’s information” and the “Internet archives” are marketing sizzle. The steak is the source document. The FBI has used the phrase “going dark” as shorthand for not being able to access certain information. The thrill of not having potentially useful information is one that most researchers prefer to reserve for thrill rides at Legoland.

The write up states:

Eighty-four online-only, open-access (OA) journals in the sciences, and nearly 100 more in the social sciences and humanities, have disappeared from the internet over the past 2 decades as publishers stopped maintaining them, potentially depriving scholars of useful research findings, a study has found. An additional 900 journals published only online also may be at risk of vanishing because they are inactive, says a preprint posted on 3 September on the arXiv server. The number of OA journals tripled from 2009 to 2019, and on average the vanished titles operated for nearly 10 years before going dark, which “might imply that a large number … is yet to vanish…

Flat earthers and those who believe that “just being” is a substitute for academic rigor are probably going to have a “thank goodness, these documents are gone” party. I won’t be attending.

Anti-intellectualism is really exciting. Plus, it makes life a lot easier for those in the top one percent of intellectual capability. Why? Extensive reading can fill in some blanks. Who wants to be comprehensive? Oh, I know: “Those who consume TikTok videos and devour Instagram while checking WhatsApp messages.”

Stephen E Arnold, September 10, 2020

A Librarian Looks at Google Dorking

August 24, 2020

To solve problems on the job, many people simply run a Google search. Searching Google for solutions is practiced by everyone from teachers to executives to software developers. Software developers spend an inordinate amount of their time searching for code libraries and language tutorials. One developer named Alec had the brilliant idea to compile a guide to “dorking.” What is dorking?

“Use advanced Google Search to find any webpage, emails, info, or secrets

cost: $0

time: 2 minutes

Software engineers have long joked about how much of their job is simply Googling things

Now you can do the same, but for free”

Dorking is free! That is great! How does it work? Dorking is a tip guide that uses Boolean operators and other Google advanced search options to locate information. Understanding some of the tips, however, requires a bit of coding knowledge.

Most of these tips can be plugged straight into a Google search box, such as finding similar sites or finding pages that must include a phrase in the title text. Others need that coding knowledge to make them work. For example, finding every email address on a Web page requires something like this:

[Image: the dork query for harvesting email addresses from a page]
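For readers who want to experiment, here are a few illustrative dork-style queries (our examples, not necessarily Alec’s), wrapped in a short Python script that turns each one into a search URL:

```python
from urllib.parse import quote_plus

# Illustrative dork-style queries built from documented Google operators.
dorks = [
    'site:example.com intext:"@example.com"',  # pages on a site mentioning its email domain
    'intitle:"index of" "parent directory"',   # open directory listings
    'related:loc.gov',                         # sites similar to a given one
    'intitle:"annual report" filetype:pdf',    # PDFs with a phrase in the title
]

for query in dorks:
    # Paste the query into the search box, or open the encoded URL directly.
    print(f"https://www.google.com/search?q={quote_plus(query)}")
```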

Yep, dorking for everyone.

After a few practice trials, these dorking tips are sure to work for even the most novice of Googlers. They will also make anyone, not just software developers, appear to be an expert. As a librarian, I ask: why not assign field types and codes, bring back Boolean logic, and respect existing Google operators? Putting a word in quotes and then getting a result without the word is, how should I frame it? I know: dorky.

Whitney Grace, MLS, August 24, 2020

Kaggle ArXiv Dataset

August 7, 2020

“Leveraging ML to Fuel New Discoveries with the ArXiv Dataset” announces that more than 1.7 million journal-type papers are available without charge on Kaggle. DarkCyber learned:

To help make the ArXiv more accessible, we present a free, open pipeline on Kaggle to the machine-readable ArXiv dataset: a repository of 1.7 million articles, with relevant features such as article titles, authors, categories, abstracts, full text PDFs, and more.

What’s Kaggle? The article explains:

Kaggle is a destination for data scientists and machine learning engineers seeking interesting datasets, public notebooks, and competitions. Researchers can utilize Kaggle’s extensive data exploration tools and easily share their relevant scripts and output with others.

The ArXiv dataset contains metadata for each processed paper (document), including these fields:

  • ID: ArXiv ID (can be used to access the paper, see below)
  • Submitter: Who submitted the paper
  • Authors: Authors of the paper
  • Title: Title of the paper
  • Comments: Additional info, such as number of pages and figures
  • Journal-ref: Information about the journal the paper was published in
  • DOI: [Digital Object Identifier](https://www.doi.org)
  • Abstract: The abstract of the paper
  • Categories: Categories / tags in the ArXiv system
  • Versions: A version history

Details about the data and their location appear at this link. You can use the ArXiv ID to download a paper.

What if you want to search the collection? You may want to download the terabyte-plus file and index the JSON using your favorite search utility. There is a search system available from ArXiv, and you can use the site: operator on Bing or Google to see if one of those ad-supported services will point you to the document set you need.
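For the do-it-yourself route, here is a minimal sketch, assuming the metadata portion of the Kaggle dataset is the line-delimited JSON file arxiv-metadata-oai-snapshot.json (one record per line, with the fields listed above); the metadata file is far smaller than the full-text corpus:

```python
import json

def search_arxiv_metadata(path, keyword, category=None):
    """Yield (id, title) for records whose title or abstract contains keyword."""
    keyword = keyword.lower()
    with open(path, encoding="utf-8") as f:
        for line in f:
            paper = json.loads(line)
            if category and category not in paper.get("categories", ""):
                continue
            text = (paper.get("title", "") + " " + paper.get("abstract", "")).lower()
            if keyword in text:
                yield paper["id"], " ".join(paper["title"].split())

# Example: cs.CL papers mentioning entity extraction.
for arxiv_id, title in search_arxiv_metadata(
        "arxiv-metadata-oai-snapshot.json", "entity extraction", category="cs.CL"):
    print(f"https://arxiv.org/abs/{arxiv_id}  {title}")
```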

DarkCyber wants to suggest that you download the corpus now (datasets can go missing) and use your favorite search and retrieval system or content processing system to locate and make sense of the ArXiv content objects.

Stephen E Arnold, August 7, 2020
