Another Robot Finds a Library Home

August 23, 2016

Job automation has benefits and downsides.  It frees workers to take on other tasks and brings cost savings, efficiency, and quicker turnaround; on the other hand, it can eliminate jobs and strip the human element from customer service.  When it comes to libraries, automation and books/research appear to be antithetical.  Automation, better known as robots, is invading libraries once again, and people are up in arms that librarians are going to be replaced.

ArchImag.com shares the story “Robot Librarians Invade Libraries In Singapore” about how the A*Star research library uses a robot to shelf read.  If you are unfamiliar with library lingo, shelf reading means scanning the shelves to make sure all the books are in their proper order.  The shelf-reading robot has been dubbed AuRoSS.  During the night, AuRoSS scans books’ RFID tags and then generates a report about misplaced items.  Humans are still needed to put the materials back in order.
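
AuRoSS’s nightly report boils down to a sequence check: the order of RFID-tagged call numbers as scanned is compared against the order the catalog expects. Here is a minimal sketch in Python; the function name and call numbers are illustrative, not taken from the A*Star system:

```python
def misplaced_items(scanned_call_numbers):
    """Flag books whose scanned position breaks the expected shelf order.

    A toy version of an RFID shelf-reading pass: compare the order the
    scanner saw against the sorted order the catalog expects.
    """
    expected = sorted(scanned_call_numbers)
    return [seen for seen, wanted in zip(scanned_call_numbers, expected)
            if seen != wanted]
```

A human still reads the resulting list and reshelves the flagged items, just as the article describes.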

The fear, however, is that robots could fulfill the same role as a librarian.  Attach a few robotic arms to AuRoSS, and it could reshelve the books by itself.  There is already a robot named Hugh answering reference questions:

New technologies thus seem to be storming libraries. Recall that Hugh, one of the first librarian robots, is set to officially take up his position at the university library in Aberystwyth, Wales, at the beginning of September 2016. Designed to answer students’ spoken requests, he can tell them where a desired book is stored or show them which shelf holds the books on the topic that interests them.

It is going to happen.  Robots are going to take over the tasks of some current jobs.  Professional research and public libraries, however, will still need someone to teach people the proper way to use materials and find resources.  It is not as easy as one would think.

Whitney Grace, August 23, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
There is a Louisville, Kentucky Hidden /Dark Web meet up on August 23, 2016.
Information is at this link: https://www.meetup.com/Louisville-Hidden-Dark-Web-Meetup/events/233019199/

Read the Latest Release from…Virgil

August 18, 2016

The Vatican Library is one of the world’s greatest treasures, because it archives much of Western culture’s history.  It is probably on par with the legendary Library of Alexandria, beloved by Cleopatra and burned to the ground.  How many people would love the opportunity to delve into the Vatican Library on a private tour?  Thankfully, the Vatican Library shares its treasures with the world via the Internet, and now, according to Archaeology News Network, the “Vatican Library Digitises 1600 Year-Old Manuscript Containing Works Of Virgil.”

The digital version of Virgil’s work is not the only item the library plans to put online, but the library does promise donors who pledge 500 euros or more a faithful reproduction of the 1,600-year-old manuscript by the famous author.  NTT DATA is working with the Vatican Library on Digita Vaticana, the digitization project.  NTT DATA has worked with the library since April 2014 and plans to create digital copies of over 3,000 manuscripts to be made available to the general public.

“ ‘Our library is an important storehouse of the global culture of humankind,’ said Monsignor Cesare Pasini, Prefect of the Vatican Apostolic Library. ‘We are delighted the process of digital archiving will make these wonderful ancient manuscripts more widely available to the world and thereby strengthen the deep spirit of humankind’s shared universal heritage.’”

Projects like these point to the value of preserving the original work as well as making it available for research to people who might not otherwise make it to the Vatican.  The Vatican also limits the number of people who can access the documents.

Whitney Grace, August 18, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph


Libraries Will Save the Internet

June 10, 2016

Libraries are more than a place to check out free DVDs and books and use a computer.  Most people do not believe this, and if you try to tell them otherwise, their eyes glaze over and they start chanting “obsolete” under their breath.  BoingBoing, however, argues in “How Libraries Can Save the Internet of Things from the Web’s Centralized Fate” that libraries have a role to play.  For the past twenty years, the Internet has become more centralized, and content is increasingly reliant on proprietary sites, such as social media, Amazon, and Google.

Back in the old days, the greatest fear was that the government would take control of the Internet.  The opposite has happened: corporations have consolidated it.  Some decentralization is taking place, mostly to keep the Internet anonymous, and usually these efforts are tied to the Dark Web.  The next big thing is “the Internet of Things,” which will be mostly decentralized and can be protected if the groundwork is laid now.  Libraries can protect decentralized systems, because

“Libraries can support a decentralized system with both computing power and lobbying muscle. The fights libraries have pursued for a free, fair and open Internet infrastructure show that we’re players in the political arena, which is every bit as important as servers and bandwidth.  What would services built with library ethics and values look like? They’d look like libraries: Universal access to knowledge. Anonymity of information inquiry. A focus on literacy and on quality of information. A strong service commitment to ensure that they are available at every level of power and privilege.”

Libraries can teach people how to access services like Tor and disseminate that information more widely than many other institutions in the community.  While this is possible, in many ways it is not realistic.  Many decentralized services are associated with the Dark Web, which is held in a negative light.  Libraries also have limited budgets, and a program like this needs funding the library board might not want to invest.  Then there is the problem of finding someone to teach these services: many libraries are staffed by librarians with limited technical knowledge, although they can learn.

It is possible; it would just be hard.

Whitney Grace, June 10, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

A Dead Startup Tally Sheet

March 17, 2016

“Startup” is the buzzword for a new tech company, usually one with an innovative idea that garners it several million dollars in investment.  Some startups are successful, others plod along, and many simply fail.  CB Insights makes an interesting (and valid) comparison between tech startups and the dot-com-bust companies that fizzled out quicker than a faulty firecracker.

While most startups appear to be run by competent teams, sometimes they fizzle out or are acquired by a larger company.  Many of them will not make it as headline companies.  As a result, CB Insights created “The Downround Tracker: Which Companies Are Not Living Up to the Expectations?”

CB Insights has named this tech boom the “unicorn era,” probably for the rare and mythical sightings of some of these companies.  The Downround Tracker follows unicorn-era startups that have folded or been purchased.  Since 2015, fifty-six companies have made the Downround Tracker list, including LiveScribe, Fab.com, Yodle, Escrow.com, eMusic, Adesto Technologies, and others.

Browse through the list: some of the names will be familiar, and others will make you wonder what the companies did in the first place.  Companies come and go faster than in any previous generation.  At least it shows that human ingenuity is still working; cue Kansas’s “Dust in the Wind.”

Whitney Grace, March 17, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

 

Authors Guild Loses Fair Use Argument, Petitions Supreme Court for Copyright Fee Payment from Google

January 12, 2016

The article on Fortune titled “Authors Guild Asks Supreme Court to Hear Google Books Copyright Case” continues the ten-year battle over Google’s massive book-scanning project. As recently as October 2015, the Google project received a ruling in its favor from a unanimous appeals court because of the “transformative” nature of the scanning. Now the Authors Guild, with increasing desperation to claim ownership over its members’ work, takes the fight to the Supreme Court. The article explains,

“The Authors Guild may be hoping the high-profile nature of the case, which at one time transfixed the tech and publishing communities, will tempt the Supreme Court to weigh in on the scope of fair use… ‘This case represents an unprecedented judicial expansion of the fair-use doctrine that threatens copyright protection in the digital age. The decision below authorizing mass copying, distribution, and display of unaltered content conflicts with this Court’s decisions and the Copyright Act itself.’”

In the petition to the Supreme Court, the Authors Guild is now requesting payment of copyright fees rather than a stoppage of the scanning of 20 million books. Perhaps it should have asked for that first, since Google has all but won this one.

Chelsea Kerwin, January 12, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

 

Data Managers as Data Librarians

December 31, 2015

The tools of a librarian may be the key to better data governance, according to an article at InFocus titled, “What Librarians Can Teach Us About Managing Big Data.” Writer Joseph Dossantos begins by outlining the plight data managers often find themselves in: executives can talk a big game about big data, but want to foist all the responsibility onto their overworked and outdated IT departments. The article asserts, though, that today’s emphasis on data analysis will force a shift in perspective and approach—data organization will come to resemble the Dewey Decimal System. Dossantos writes:

“Traditional Data Warehouses do not work unless there is a common vocabulary and understanding of a problem, but consider how things work in academia.  Every day, tenured professors and students pore over raw material looking for new insights into the past and new ways to explain culture, politics, and philosophy.  Their sources of choice: archived photographs, primary documents found in a city hall, monastery, or excavation site, scrolls from a long-abandoned cave, or voice recordings from the Oval Office – in short, anything in any kind of format.  And who can help them find what they are looking for?  A skilled librarian who knows how to effectively search for not only books, but primary source material across the world, who can understand, create, and navigate a catalog to accelerate a researcher’s efforts.”

The article goes on to discuss the influence of the “Wikipedia mindset”; data accuracy and whether it matters; and devising structures to address different researchers’ needs. See the article for details on each of these (especially on meeting different needs). The write-up concludes with a call for data-governance professionals to think of themselves as “data librarians.” Is this approach the key to more effective data search and analysis?

Cynthia Murrell, December 31, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

An Early Computer-Assisted Concordance

November 17, 2015

An interesting post at Mashable, “1955: The Univac Bible,” takes us back in time to examine an innovative indexing project. Writer Chris Wild tells us about the preacher who realized that these newfangled “computers” might be able to help with a classically tedious and time-consuming task: compiling a book’s concordance, or alphabetical list of key words, their locations in the text, and the context in which each is used. Specifically, Rev. John Ellison and his team wanted to create the concordance for the recently completed Revised Standard Version of the Bible (also newfangled). Wild tells us how it was done:

“Five women spent five months transcribing the Bible’s approximately 800,000 words into binary code on magnetic tape. A second set of tapes was produced separately to weed out typing mistakes. It took Univac five hours to compare the two sets and ensure the accuracy of the transcription. The computer then spat out a list of all words, then a narrower list of key words. The biggest challenge was how to teach Univac to gather the right amount of context with each word. Bosgang spent 13 weeks composing the 1,800 instructions necessary to make it work. Once that was done, the concordance was alphabetized, and converted from binary code to readable type, producing a final 2,000-page book. All told, the computer shaved an estimated 23 years off the whole process.”
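
The pipeline Wild describes (transcribe, verify, list all words, narrow to key words, then gather context around each) is essentially a keyword-in-context concordance. What took Univac 1,800 hand-written instructions can be sketched in a few lines of Python today; the function and its parameters are illustrative, not a reconstruction of the original program:

```python
def build_concordance(text, keywords, context=4):
    """Map each keyword to (word position, surrounding words) entries."""
    words = text.lower().split()
    concordance = {}
    for i, raw in enumerate(words):
        word = raw.strip(".,;:!?")  # crude normalization of punctuation
        if word in keywords:
            start = max(0, i - context)
            snippet = " ".join(words[start:i + context + 1])
            concordance.setdefault(word, []).append((i, snippet))
    return concordance
```

The hard part Univac faced, choosing how much context to keep for each word, corresponds to the `context` window here; Bosgang’s thirteen weeks of work went largely into getting that right.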

The article is worth checking out, both for more details on the project and for the historic photos. How much time would that job take now? It is good to remind ourselves that tagging and indexing data has only recently become a task that can be taken for granted.

Cynthia Murrell, November 17, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

 

Google Books Is Not Violating Copyright

November 12, 2015

Google Books has been controversial from the moment it was conceived.  The concept, though, is simple and effective: books in academic libraries are scanned, and snippets are made available online.  People can search Google Books for specific words or phrases and are then shown where these appear within a book.  The Atlantic wrote “After Ten Years, Google Books Is Legal” about how a Second Circuit judge panel ruled in favor of Google Books against the Authors Guild.
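
At its simplest, the search side of such a system is an inverted index: each word maps to the books and positions where it occurs, which is how a query can jump straight to snippet locations. A minimal sketch follows; the titles and data structures are illustrative, not Google's implementation:

```python
def build_index(books):
    """Build an inverted index: word -> list of (title, word position)."""
    index = {}
    for title, text in books.items():
        for pos, word in enumerate(text.lower().split()):
            index.setdefault(word, []).append((title, pos))
    return index

def search(index, word):
    """Return every (title, position) where the word occurs."""
    return index.get(word.lower(), [])
```

A real system would then render only a short snippet around each hit, which is the behavior at the heart of the fair-use argument.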

The panel ruled that Google Books falls under the terms of “fair use,” which, as most YouTubers know, is the ability to use a piece of copyrighted content within a strict set of rules.  Fair use covers works of parody, academic works, quotations, criticism, and summarization.

The Authors Guild argued that Google Books was infringing on its members’ copyrights and stealing potential profits, but too strong a copyright is a bad thing: it places too many limitations on how a work can be used, harming the dissemination of creative and intellectual thought.

“‘It gives us a better sense of where fair use lies,’ says Dan Cohen, the executive director of the Digital Public Library of America. The rulings ‘give a firmer foundation and certainty for non-profits…’ Of all the parts of Judge Leval’s decision, many people I talked to were happiest to see that it stressed that fair use’s importance went beyond any tool, company, or institution. ‘To me, I think a muscular fair use is an overall benefit to society, and I think it helps both authors and readers,’ said Cohen.”

Authors do have the right to copyright their work and profit from it; that should be encouraged, and a person’s work should not be given away for free.  There is a wealth of information out there, however, that is kept under lock and key and would not be accessible without a digital form.  Google Books only extends a book’s reach, speaking as one who has relied on it for research.

Whitney Grace, November 12, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Photo Farming in the Early Days

November 9, 2015

Have you ever wondered what your town looked like when it was still rural and used as farmland?  Instead of having to visit your local historical society or library (although we do encourage you to do so), the United States Farm Security Administration and Office of War Information (FSA-OWI for short) developed Photogrammer, a Web-based image platform for organizing, viewing, and searching farm photos from 1935-1945.

Photogrammer uses an interactive map of the United States, where users can click on a state and then a city or county within it to see photos from the period.  The archive contains over 170,000 photos, but only 90,000 have a geographic classification.  The photos have also been grouped by photographer, although the list is limited to fifteen people.  Besides city, photographer, year, and month, the collection can be sorted by collection tags and lot numbers (although these are not discussed in much detail).
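
Faceted browsing of this sort, narrowing by state, then city, photographer, year, and month, amounts to filtering records on every selected attribute. A minimal sketch, with hypothetical field names rather than Photogrammer's actual schema:

```python
def filter_photos(photos, **facets):
    """Return the photos that match every selected facet value."""
    return [photo for photo in photos
            if all(photo.get(field) == value
                   for field, value in facets.items())]
```

Each additional facet the user clicks simply adds another key/value pair to the filter.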

While farm photographs from 1935-1945 do not appear to need their own photographic database, the collection’s history is interesting:

“In order to build support for and justify government programs, the Historical Section set out to document America, often at her most vulnerable, and the successful administration of relief service. The Farm Security Administration—Office of War Information (FSA-OWI) produced some of the most iconic images of the Great Depression and World War II and included photographers such as Dorothea Lange, Walker Evans, and Arthur Rothstein who shaped the visual culture of the era both in its moment and in American memory. Unit photographers were sent across the country. The negatives were sent to Washington, DC. The growing collection came to be known as “The File.” With the United States’ entry into WWII, the unit moved into the Office of War Information and the collection became known as the FSA-OWI File.”

While the photos do have historical importance, rather than creating a separate database with its small flaws, it would be more useful if the collection were incorporated into a larger historical archive, like the Library of Congress, instead of remaining a pet project.

Whitney Grace, November 9, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Libraries’ Failure to Make Room for Developer Librarians

October 23, 2015

The article titled “Libraries’ Tech Pipeline Problem” on Geek Feminism explores the lack of diverse developers in libraries. The author, a librarian, is extremely frustrated with the approach many libraries have taken. Rather than refocusing their hiring and training practices to emphasize technical skills, many are simply hiring more and more vendors, hardly a solution. The article states,

“The biggest issue I see is that we offer a fair number of very basic learn-to-code workshops, but we don’t offer a realistic path from there to writing code as a job. To put a finer point on it, we do not offer “junior developer” positions in libraries; we write job ads asking for unicorns, with expert- or near-expert-level skills in at least two areas (I’ve seen ones that wanted strong skills in development, user experience, and devops, for instance).”

The options are that librarians either learn to code in their spare time (not viable) or enter the tech workforce temporarily and bring their skills back after a few years. That option is also full of drawbacks, especially since even white women are marginalized in the tech industry. Instead, the article stipulates that libraries need to make more room for hiring and promoting people with coding skills and interests, while also joining coding communities like Code4Lib.

Chelsea Kerwin, October 23, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

 
