Facebook Still Having Trouble with Trending Topics

October 28, 2016

Despite taking action to fix its problems with Trending Topics, Facebook is still receiving criticism on the issue. A post at Slashdot tells us, “The Washington Post Tracked Facebook’s Trending Topics for 3 Weeks, Found 5 Fake Stories and 3 Inaccurate Articles.” The Slashdot post by msmash cites a Washington Post article. (There’s a paywall if, like me, you’ve read your five free WP articles for this month.) The Post monitored Facebook’s Trending Topics for three weeks and found the issue far from resolved. Msmash quotes the report:

The Megyn Kelly incident was supposed to be an anomaly. An unfortunate one-off. A bit of (very public, embarrassing) bad luck. But in the six weeks since Facebook revamped its Trending system — and a hoax about the Fox News Channel star subsequently trended — the site has repeatedly promoted ‘news’ stories that are actually works of fiction. As part of a larger audit of Facebook’s Trending topics, the Intersect logged every news story that trended across four accounts during the workdays from Aug. 31 to Sept. 22. During that time, we uncovered five trending stories that were indisputably fake and three that were profoundly inaccurate. On top of that, we found that news releases, blog posts from sites such as Medium and links to online stores such as iTunes regularly trended. Facebook declined to comment about Trending on the record.

It is worth noting that the team may not have caught every fake story, since it only checked in with Trending Topics once every hour. Quite the quandary. We wonder—would a tool like Google’s new fact-checking feature help? And, if so, will Facebook admit its rival is on to something?

Cynthia Murrell, October 28, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

UltraSearch Releases Version 2.1

September 16, 2016

Now, after more than a year, we have a new version of UltraSearch, a popular alternative to Windows’ built-in Desktop Search. We learn the details from the write-up at gHacks.net, “UltraSearch 2.1 with File Content Search.” The application works by accessing a system’s master file table, so results appear almost instantly. Writer Martin Brinkmann informs us:

The list of changes on the official UltraSearch project website is long. While some of them may affect only some users, others are useful or at least nice to have for all. Jam Software, the company responsible for the search program, have removed the advertising banner from the program. There is, however, a new ‘advanced search’ menu option which links to the company’s TreeSize program in various ways. TreeSize is available as a free and commercial program.

As far as functional changes are concerned, these are noteworthy:

  1. File results are displayed faster than before.
  2. New File Type selection menu to pick file groups or types quickly (video files, Office files).
  3. Command line parameters are supported by the program now.
  4. The drive list was moved from the bottom to the top.
  5. The export dialog displays a progress dialog now.
  6. You may deactivate the automatic updating of the MFT index under Options > Include file system changes.

Brinkmann emphasizes that these are but a few of the changes in this extensive update, and suggests Windows users who have rejected it before give it another chance. We remind you, though, that UltraSearch is not your only Windows Desktop Search alternative. Some others include FileSearchEX, Gaviri Pocket Search, Launchy, Locate32, Search Everything, Snowbird, Sow Soft’s Effective File Search, and Super Finder XT.
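Why is reading the master file table so much faster than searching on demand? Here is a minimal, hypothetical Python sketch; it does not parse a real NTFS master file table (that is what UltraSearch does natively), it simply contrasts a one-time, in-memory filename index with repeated lookups against it. The trade-off, reflected in item 6 of the list above, is that a cached index must be kept in sync as the file system changes.

```python
import os
import time

def build_index(root):
    """Walk the tree once and cache every file path in memory.
    (UltraSearch reads this list from the NTFS master file table,
    which is far faster than os.walk; the walk here is only an analogy.)"""
    index = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            index.append(os.path.join(dirpath, name))
    return index

def search_index(index, term):
    """Substring match against the cached paths; repeated queries are nearly instant."""
    term = term.lower()
    return [path for path in index if term in path.lower()]

if __name__ == "__main__":
    start = time.time()
    idx = build_index(os.path.expanduser("~"))   # the one-time indexing cost
    print(f"Indexed {len(idx)} files in {time.time() - start:.1f}s")

    start = time.time()
    hits = search_index(idx, "report")           # this part is cheap
    print(f"{len(hits)} hits in {time.time() - start:.3f}s")
```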

Launched back in 1997, Jam Software is based in Trier, Germany. The company specializes in software tools that address common problems faced by users, developers, and organizations, like TreeSize, SpaceObserver, and, of course, UltraSearch. Though free versions of each are available, the company makes its money by enticing users to invest in the enhanced, professional versions.

Cynthia Murrell, September 16, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
There is a Louisville, Kentucky Hidden Web/Dark Web meet up on September 27, 2016.
Information is at this link: https://www.meetup.com/Louisville-Hidden-Dark-Web-Meetup/events/233599645/

Toshiba Amps up Vector Indexing and Overall Data Matching Technology

September 13, 2016

The article on MyNewsDesk titled “Toshiba’s Ultra-Fast Data Matching Technology is 50 Times Faster than its Predecessors” relates the bold claims swirling around Toshiba and its Vector Indexing Technology. By skipping the computation of distances between vectors, Toshiba has slashed the time it takes to identify a matching vector (or so the company claims). The article states,

Toshiba initially intends to apply the technology in three areas: pattern mining, media recognition and big data analysis. For example, pattern mining would allow a particular person to be identified almost instantly among a large set of images taken by surveillance cameras, while media recognition could be used to protect soft targets, such as airports and railway stations, by automatically identifying persons wanted by the authorities.

In sum, Toshiba’s technology is able to quickly and accurately recognize faces in a crowd. But the specifics are much more interesting. Current technology takes around 20 seconds to identify an individual out of 10 million, and Toshiba can do it in under a second. The precision rate that Toshiba reports is also outstanding, at 98%. The world of Minority Report, where ads recognize and direct themselves to random individuals, seems increasingly within reach. Perhaps more to the point, this technology should be of dire importance to the criminal, and perceived criminal, populations of the world.
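The write-up does not describe the algorithm, but the speed-up it claims, matching a vector without computing its distance to every stored vector, is the classic approximate nearest neighbor trade-off. Here is a minimal sketch using locality-sensitive hashing; this is our illustration of the general idea, not Toshiba’s method, and the data are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, N_PLANES = 128, 16

# Random hyperplanes define a hash: each vector maps to a 16-bit bucket key.
planes = rng.normal(size=(N_PLANES, DIM))

def bucket(v):
    """Locality-sensitive hash: nearby vectors tend to land in the same bucket."""
    return tuple((planes @ v > 0).astype(int))

# Index 10,000 stand-in "face" vectors by bucket instead of comparing them all at query time.
database = rng.normal(size=(10_000, DIM))
index = {}
for i, v in enumerate(database):
    index.setdefault(bucket(v), []).append(i)

def query(v):
    """Look only inside the query's bucket; skip distances to everything else."""
    candidates = index.get(bucket(v), [])
    if not candidates:
        return None
    dists = np.linalg.norm(database[candidates] - v, axis=1)
    return candidates[int(np.argmin(dists))]

probe = database[42] + rng.normal(scale=0.001, size=DIM)  # a near-duplicate of entry 42
print(query(probe))  # typically 42, after touching only a handful of candidates
```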

Chelsea Kerwin, September 13, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
There is a Louisville, Kentucky Hidden Web/Dark Web meet up on September 27, 2016.
Information is at this link: https://www.meetup.com/Louisville-Hidden-Dark-Web-Meetup/events/233599645/

The Decline of Free Software As a Failure of Leadership and Relevance

August 18, 2016

The article on Datamation titled “7 Reasons Why Free Software Is Losing Influence” investigates some of the causes of the major slowdown in FOSS (free and open source software). The article lays much of the blame at the feet of the leader of the Free Software Foundation (FSF), Richard Stallman. In spite of his major contributions to the free software movement, he is prickly and occasionally drops Joe Biden-esque gaffes detrimental to his cause. He also has trouble sticking to his message and keeping his cause relevant. The article explains,

“Over the last few years, Richard Stallman has denounced cloud computing, e-books, cell phones in general, and Android in particular. In each case, Stallman has raised issues of privacy and consumer rights that others all too often fail to mention. The trouble is, going on to ignore these new technologies solves nothing, and makes the free software movement more irrelevant in people’s lives. Many people are attracted to new technologies, and others are forced to use them because others are.”

In addition to Stallman’s difficult personality, which accounts for only a small part of the decline in the FSF’s influence, the article offers other explanations. Perhaps most importantly, the FSF is a tiny organization without the resources to achieve its numerous goals, such as sponsoring the GNU Project, promoting social activism, and running campaigns against DRM and Windows.
 

Chelsea Kerwin, August 18, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

There is a Louisville, Kentucky Hidden Web/Dark Web meet up on August 23, 2016.
Information is at this link: https://www.meetup.com/Louisville-Hidden-Dark-Web-Meetup/events/233019199/

 

More Data to Fuel Debate About Malice on Tor

June 9, 2016

The debate about malicious content on Tor continues. Ars Technica has published an article examining the claims of a Web security company that says 94 percent of the requests coming through the network are at least loosely malicious. The article “CloudFlare: 94 percent of the Tor traffic we see is ‘per se malicious’” reveals how CloudFlare is currently handling Tor traffic. The article states,

“Starting last month, CloudFlare began treating Tor users as their own “country” and now gives its customers four options of how to handle traffic coming from Tor. They can whitelist them, test Tor users using CAPTCHA or a JavaScript challenge, or blacklist Tor traffic. The blacklist option is only available for enterprise customers. As more websites react to the massive amount of harmful Web traffic coming through Tor, the challenge of balancing security with the needs of legitimate anonymous users will grow. The same network being used so effectively by those seeking to avoid censorship or repression has become a favorite of fraudsters and spammers.”

Even though the jury may still be out on the statistics reported about the volume of malicious traffic, several companies appear to want action sooner rather than later. Amazon Web Services, Best Buy, and Macy’s are among several sites blocking a majority of Tor exit nodes. While much remains unclear, we cannot expect organizations to delay action.
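The article does not show CloudFlare’s configuration, but the four options it lists map onto a simple policy switch. A hypothetical sketch follows; the option names come from the quote above, while the country label and everything else here is illustrative, not CloudFlare code.

```python
from enum import Enum

class TorPolicy(Enum):
    WHITELIST = "whitelist"        # let Tor traffic straight through
    CAPTCHA = "captcha"            # challenge Tor users with a CAPTCHA
    JS_CHALLENGE = "js_challenge"  # challenge Tor users with a JavaScript test
    BLACKLIST = "blacklist"        # enterprise-only, per the article

def handle_request(country_code: str, policy: TorPolicy) -> str:
    """Route a request based on whether it arrives from the Tor 'country'."""
    if country_code != "TOR":      # placeholder label for Tor exit traffic
        return "serve"
    if policy is TorPolicy.WHITELIST:
        return "serve"
    if policy is TorPolicy.CAPTCHA:
        return "present CAPTCHA"
    if policy is TorPolicy.JS_CHALLENGE:
        return "present JavaScript challenge"
    return "block"                 # BLACKLIST

print(handle_request("TOR", TorPolicy.CAPTCHA))   # -> present CAPTCHA
print(handle_request("US", TorPolicy.BLACKLIST))  # -> serve
```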

 

Megan Feil, June 9, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Extensive Cultural Resources Available at Europeana Collections

May 17, 2016

Check out this valuable cultural archive, highlighted by Open Culture in the piece, “Discover Europeana Collections, a Portal of 48 Million Free Artworks, Books, Videos, Artifacts & Sounds from across Europe.” Writer Josh Jones is clearly excited about the Internet’s ability to place information and artifacts at our fingertips, and he cites the Europeana Collections as the most extensive archive he’s discovered yet. He tells us the works are:

“… sourced from well over 100 institutions such as The European Library, Europhoto, the National Library of Finland, University College Dublin, Museo Galileo, and many, many more, including contributions from the public at large. Where does one begin?

“In such an enormous warehouse of cultural history, one could begin anywhere and in an instant come across something of interest, such as the stunning collection of Art Nouveau posters like that fine example at the top, ‘Cercle Artstique de Schaerbeek,’ by Henri Privat-Livemont (from the Plandiura Collection, courtesy of Museu Nacional d’Art de Catalynya, Barcelona). One might enter any one of the available interactive lessons and courses on the history of World War I or visit some of the many exhibits on the period, with letters, diaries, photographs, films, official documents, and war propaganda. One might stop by the virtual exhibit, ‘Photography on a Silver Plate,’ a fascinating history of the medium from 1839-1860, or ‘Recording and Playing Machines,’ a history of exactly what it sounds like, or a gallery of the work of Swiss painter Jean Antoine Linck. All of the artifacts have source and licensing information clearly indicated.”

Jones mentions the archive might be considered “endless,” since content is being added faster than anyone could hope to keep up with it. While such a wealth of information and images could easily overwhelm a visitor, he advises us to look at it as an opportunity for discovery. We concur.

 

Cynthia Murrell, May 17, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Google Relies on Freebase Machine ID Numbers to Label Images in Knowledge Graph

May 3, 2016

The article on SEO by the Sea titled “Image Search and Trends in Google Search Using FreeBase Entity Numbers” explains the transformation occurring at Google around Freebase Machine ID numbers. Image searching is a complicated business when it comes to differentiating labels. Instead of text strings, Google’s Knowledge Graph is based on Freebase entities, which can uniquely identify images without relying on language. The article explains with a quote from Chuck Rosenberg,

“An entity is a way to uniquely identify something in a language-independent way. In English when we encounter the word ‘jaguar’, it is hard to determine if it represents the animal or the car manufacturer. Entities assign a unique ID to each, removing that ambiguity, in this case ‘/m/0449p’ for the former and ‘/m/012x34’ for the latter.”
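The practical value of a machine ID is that the same identifier can stand behind a label in any language. A toy sketch follows; the two jaguar MIDs come from the quote above, while the lookup table and function are ours and purely illustrative.

```python
# Map unambiguous Freebase machine IDs (MIDs) to entity records.
# The two jaguar MIDs are taken from the quote above; the rest is illustrative.
ENTITIES = {
    "/m/0449p": {"name": "Jaguar (animal)", "type": "BigCat"},
    "/m/012x34": {"name": "Jaguar (car manufacturer)", "type": "Organization"},
}

def label_image(detected_mid: str) -> str:
    """A downstream system works with the MID alone; the surface string never matters."""
    entity = ENTITIES[detected_mid]
    return f"{detected_mid}: {entity['name']} ({entity['type']})"

print(label_image("/m/0449p"))   # the animal, regardless of the query language
print(label_image("/m/012x34"))  # the car maker, regardless of the query language
```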

Metadata is wonderful stuff, isn’t it? The article concludes by crediting Barbara Starr, a co-administrator of the Lotico San Diego Semantic Web Meetup, with noticing that the Machine ID numbers assigned to Freebase entities now appear in Google Trends URLs. Google Trends is a public web facility that enables an exploration of the hive mind by showing what people are currently searching for. On the Wednesday President Obama nominated a new Supreme Court Justice, for example, the top search was Merrick Garland.

 

Chelsea Kerwin, May 3, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Innovation Is Not Reheated Pizza. Citation Analysis Is Still Fresh Pizza.

April 22, 2016

Do you remember Eugene Garfield? He was the go-to person in the field of citation analysis. His method of identifying who cited which journal article acquired old-school jargon like “bibliometrics.” Dr. Garfield founded the Institute for Scientific Information. He sold ISI to Thomson (now Thomson Reuters) in 1992. I mention this because this write up explains an “innovation” which strikes me as recycled Garfield.


Navigate to “Who’s Hot in Academia? Semantic Scholar Dives More Deeply into the Data.” The write up explains:

If you’re in the “publish-or-perish” game, get ready to find out how you score in acceleration and velocity. Get ready to find out who influences your work, and whom you influence, all with the click of a mouse. “We give you the tools to slice and dice to figure out what you want,” said Oren Etzioni, CEO of the Allen Institute for AI, a.k.a. AI2.

My recollection is that there were a number of information professionals who could provide these types of data to me decades ago. Let’s see if I can recall some of the folks who could wrangle these types of outputs from the pre-Cambridge Scientific Abstracts version of Dialog:

  • Marydee Ojala, former information wrangler at the Bank of America and now editor of Online
  • Barbara Quint, founder of Searcher and a darned good online expert
  • Riva Basch, who lived a short distance from me in Berkeley, California, when I did my time in Sillycon Valley
  • Ann Mintz, former information wrangler at Forbes before the content marketing kicked in
  • Ruth Pagell, once at the Wharton Library and then head of the business library at Emory University.

And there were others.

The system described in the write up makes certain types of queries easier. That’s great, but it is hardly the breathless revolution described in the article.
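For readers who never ran these searches on Dialog, the core computation behind citation analysis is small enough to sketch: build a directed graph of who cites whom, invert it, and count. A toy example follows, with sample data only; it is not Semantic Scholar’s system or Garfield’s.

```python
from collections import defaultdict

# Toy citation graph: paper -> papers it cites (Garfield-style citation indexing).
cites = {
    "paper_a": ["paper_c"],
    "paper_b": ["paper_c", "paper_d"],
    "paper_c": ["paper_d"],
    "paper_d": [],
}

# "Who influences whom": invert the graph to get cited-by lists.
cited_by = defaultdict(list)
for paper, refs in cites.items():
    for ref in refs:
        cited_by[ref].append(paper)

# The simplest influence score is a raw citation count per paper.
citation_count = {paper: len(citing) for paper, citing in cited_by.items()}

print(citation_count)        # {'paper_c': 2, 'paper_d': 2}
print(cited_by["paper_c"])   # who built on paper_c: ['paper_a', 'paper_b']
```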

In my experience, it takes a sharp online specialist to ask the correct question and then determine if the outputs are on the money. Easier does not translate directly into accurate outputs. Is the set of journals representative of a particular field, for example, thorium reactor technology? What about patent documents? What about those crazy PDF versions of pre-publication research?

I know my viewpoint shocks the mobile device generation. Try to look beyond software that does the thinking for the user. Ignoring who did what, how, when, and why puts some folks in a disadvantaged viewshed. (Don’t recognize the term? Well, look it up. It’s just a click away, right?) And recognize that today’s innovations are often little more than warmed-over pizza. My experience with reheated pizza is that it is often horrible.

Stephen E Arnold, April 22, 2016

The Missing Twitter Manual Located

April 7, 2016

Once more we turn to the Fuzzy Notepad’s advice and its Pokémon mascot, Eevee. This time we visited the fuzz pad for tips on Twitter. The 140-character social media platform has a slew of hidden features that do not have a button on the user interface. Check out “Twitter’s Missing Manual” to read more about these tricks.

It is inconceivable for every feature to have a shortcut on the user interface. Twitter relies on its users to understand the basic features, while experienced users pick up tricks that only come with practice or with reading tips on the Internet. The problem is:

“The hard part is striking a balance. On one end of the spectrum you have tools like Notepad, where the only easter egg is that pressing F5 inserts the current time. On the other end you have tools like vim, which consist exclusively of easter eggs.

One of Twitter’s problems is that it’s tilted a little too far towards the vim end of the scale. It looks like a dead-simple service, but those humble 140 characters have been crammed full of features over the years, and the ways they interact aren’t always obvious. There are rules, and the rules generally make sense once you know them, but it’s also really easy to overlook them.”

Twitter is a great social media platform, but a headache to use because it never came with an owner’s manual. Fuzzy Notepad has lined up hints for every conceivable problem, including the elusive advanced search page.

 

Whitney Grace, April 7, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

 

Netflix Algorithm Defaults To “White” Content, Sweeps Diversity Under the Rug

April 1, 2016

The article from Marie Claire titled “Blackflix: How Netflix’s Algorithm Exposes Technology’s Racial Bias” delves into the racial ramifications of Netflix’s much-lauded content recommendation algorithm. Many users may have had strange realizations about themselves or their preferences due to collisions with the system that the article calls “uncannily spot-on.” To sum it up: Netflix is really good at showing us what we want to watch, but only based on what we have already watched. When it comes to race, sexuality, even feminism (how many movies have I watched in the category “Movies With a Strong Female Lead”?), Netflix stays on course by showing us only films similar to those we have already selected. The article states,

“Or perhaps I could see the underlying problem, not in what we’re being shown, but in what we’re not being shown. I could see the fact that it’s not until you express specific interest in “black” content that you see how much of it Netflix has to offer. I could see the fact that to the new viewer, whose preferences aren’t yet logged and tracked by Netflix’s algorithm, “black” movies and shows are, for the most part, hidden from view.”

This sort of “default” suggests quite a lot about what Netflix has decided to put forward as normal or inoffensive content. To be fair, the company does stress the importance of logging preferences from the initial sign-up, but there is something annoying about the idea that there are people who can live in a bubble of straight, white (or black-and-white) content. Among those people are some who might really enjoy and appreciate a powerful and relevant film like Fruitvale Station. If it wants to stay current, Netflix needs to show more awareness of, or even appreciation for, its technical bias.
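Netflix has not published its algorithm, but the feedback loop the article describes, recommendations drawn only from what a viewer has already watched, is easy to reproduce in a toy content-based recommender. The titles and tags below are hypothetical and purely illustrative.

```python
from collections import Counter

# Toy catalog: each title is just a set of tags.
CATALOG = {
    "Title A": {"comedy", "white-lead"},
    "Title B": {"comedy", "white-lead"},
    "Title C": {"drama", "white-lead"},
    "Fruitvale Station": {"drama", "black-lead"},
    "Title D": {"comedy", "black-lead"},
}

def recommend(watch_history):
    """Rank every unwatched title by tag overlap with the viewing history."""
    profile = Counter(tag for title in watch_history for tag in CATALOG[title])
    scores = {
        title: sum(profile[tag] for tag in tags)
        for title, tags in CATALOG.items()
        if title not in watch_history
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# A history of "default" titles keeps reinforcing more of the same:
print(recommend(["Title A"]))
# -> 'Title B' leads; 'Fruitvale Station' lands at the bottom with a score of zero.

print(recommend(["Fruitvale Station"]))
# -> only after the viewer seeks out "black" content does 'Title D' climb the ranking.
```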

Chelsea Kerwin, April 1, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

 
