Questionable Journals Fake Legitimacy

September 13, 2019

The problem of shoddy or fraudulent research being published as quality work continues to grow, and it is becoming harder to tell the good from the bad. Research Stash describes “How Fake Scientific Journals Are Bypassing Detection Filters.” In recent years, regulators and the media have insisted scientific journals follow certain standards. Instead of complying, however, some of these “predatory” journals have made changes that just make them look like they have mended their ways. The write-up cites a study out of the Gandhinagar Institute of Technology in India performed by Naman Jain, a student of Professor Mayank Singh. Writer Dinesh C Sharma reports:

“The researchers compared a set of journals published by Omics, which has been accused of publishing predatory journals, with those published by BMC Publishing Group. Both publish hundreds of open access journals across several disciplines. Using data-driven analysis, researchers compared parameters like impact factors, journal name, indexing in digital directories, contact information, submission process, editorial boards, gender, and geographical data, editor-author commonality, etc. Analysis of this data and comparison between the two publishers showed that Omics is slowly evolving. Of the 35 criteria listed in the Beall’s list and which could be verified using the information available online, 22 criteria are common between Omics and BMC. Five criteria are satisfied by both the publishers, while 13 are satisfied by Omics but not by BMC. The predatory publishers are changing some of their processes. For example, Omics has started its online submission portal similar to well-known publishers. Earlier, it used to accept manuscripts through email. Omics dodges most of the Beall’s criteria to emerge as a reputed publisher.”

Jain suggests we update the criteria for identifying quality research and use more data analytics to identify false and misleading articles. He offers his findings as a starting point, and we are told he plans to present his research at a conference in November.
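The comparison the study describes is, at bottom, set arithmetic over a checklist: which Beall-style criteria does each publisher satisfy, and where do the sets overlap? A minimal Python sketch of that idea follows; the criterion labels and flag assignments are invented stand-ins for illustration, not Beall's actual list or the study's data.

```python
# Sketch of a Beall's-list style overlap analysis between two publishers.
# Criterion names below are hypothetical stand-ins, not the actual list.

def criteria_overlap(publisher_a: set, publisher_b: set) -> dict:
    """Compare which checklist criteria each publisher satisfies."""
    return {
        "satisfied_by_both": publisher_a & publisher_b,
        "a_only": publisher_a - publisher_b,
        "b_only": publisher_b - publisher_a,
    }

# Invented flags for the sketch.
omics_flags = {"editor_author_overlap", "no_peer_review_dates",
               "fake_impact_metric", "spam_solicitations"}
bmc_flags = {"editor_author_overlap"}

report = criteria_overlap(omics_flags, bmc_flags)
print(len(report["satisfied_by_both"]))  # criteria shared by both: 1
print(len(report["a_only"]))             # criteria satisfied by A alone: 3
```

Swapping in real criteria and scraped journal data would reproduce the study's style of comparison; the counts here are illustrative only.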

Cynthia Murrell, September 13, 2019

Research Suggests Better Way to Foil Hate Groups

September 9, 2019

It is no secret that internet search and social media companies have a tough time containing the spread of hate groups across their platforms. Now a study from George Washington University and the University of Miami posits why. Inverse reports, “‘Global Hate Highways’ Reveal How Online Hate Clusters Multiply and Thrive.” This is my favorite quote from the article—“In it, [researchers] observe that hate spreads online like a diseased flea, jumping from one body to the next.”

The study tracked certain hate “clusters” across international borders and through different languages as they hopped from one platform to another. Current strategies for limiting the spread of such groups include the “microscopic approach” of banning individual users and the “macroscopic approach” of banning whole ideologies. Not only does the latter approach often run afoul of free speech protections, as the article points out, it is also error-prone—algorithms have trouble distinguishing conversations about hate speech from those that are hate speech (especially where parody is used). Besides, neither of these approaches has proven very effective. The study suggests another way; reporter Sarah Sloat writes:

“The mathematical mapping model used here showed that both these policing techniques can actually make matters worse. That’s because hate clusters thrive globally not on a micro or macro scale but in meso scale — this means clusters interconnect to form networks across platforms, countries, and languages and are quickly able to regroup or reshape after a single user is banned or after a group is banned from a single platform. They self-organize around a common interest and come together to remove trolls, bots, and adverse opinions. …

“A better way to curb the spread of hate, the researchers posit, would involve randomly banning a small fraction of individuals across platforms, which is more likely to cause global clusters to disconnect. They also advise platforms to send in groups of anti-hate advocates to bombard hate-filled spaces together with individual users to influence others to question their stance.

“The goal is to prevent hate-filled online pits that radicalize individuals like the Christchurch shooter, an Australian who attacked in New Zealand, covered his guns with the names of other violent white supremacists and citations of ancient European victories, and posted a 74-page racist manifesto on the website 8chan.”

The researchers’ approach does not require any data on individuals, nor does it rely on banning ideas wholesale. Instead, it is all about weakening the connections that keep online hate groups going. Can their concept help society dissipate hate?
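The “meso scale” finding and the random-banning remedy can be illustrated with a toy network model. In the sketch below, two tight clusters are joined by a single bridge user; removing that bridge fragments the network, which is why banning even a small random fraction of users across platforms can disconnect global clusters. The graph, node labels, and ban fraction are invented for illustration, not the study's actual model or data.

```python
import random
from collections import deque

def largest_component(adj: dict, banned: set) -> int:
    """Size of the largest connected cluster after removing banned nodes (BFS)."""
    seen = set()
    best = 0
    for start in adj:
        if start in banned or start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:
            node = queue.popleft()
            size += 1
            for nbr in adj[node]:
                if nbr not in banned and nbr not in seen:
                    seen.add(nbr)
                    queue.append(nbr)
        best = max(best, size)
    return best

def random_ban(adj: dict, fraction: float, rng: random.Random) -> set:
    """The 'random small fraction' policy: ban users uniformly at random."""
    count = int(len(adj) * fraction)
    return set(rng.sample(sorted(adj), count))

# Two tight clusters (0-1-2 and 4-5-6) joined by a single bridge user, 3.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4],
       4: [3, 5, 6], 5: [4, 6], 6: [4, 5]}

print(largest_component(adj, set()))   # intact network: 7
print(largest_component(adj, {3}))     # bridge banned: clusters split, 3
```

A random ban occasionally hits bridge positions like user 3, and when it does the meso-scale network falls apart; that, roughly, is the mechanism the researchers are betting on.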

Cynthia Murrell, September 9, 2019

Thinking about Real News

September 7, 2019

Now that AI has gotten reasonably good at generating fake news, we have a study that emphasizes how dangerous such false articles can be. The Association for Psychological Science reports, “Fake News Can Lead to False Memories.” While the study, from University College Cork, was performed on Irish citizens ahead of a vote on an abortion referendum, its results can easily apply to voters in any emotional or partisan contest. Like, say, next year’s U.S. presidential election.

Researchers recruited 3,140 likely voters and had them read six articles relevant to the referendum, two of which were accounts of scandalous behavior that never actually happened. We learn:

“After reading each story, participants were asked if they had heard about the event depicted in the story previously; if so, they reported whether they had specific memories about it. The researchers then informed the eligible voters that some of the stories they read had been fabricated, and invited the participants to identify any of the reports they believed to be fake. Finally, the participants completed a cognitive test. Nearly half of the respondents reported a memory for at least one of the made-up events; many of them recalled rich details about a fabricated news story. The individuals in favor of legalizing abortion were more likely to remember a falsehood about the referendum opponents; those against legalization were more likely to remember a falsehood about the proponents. Many participants failed to reconsider their memory even after learning that some of the information could be fictitious. And several participants recounted details that the false news reports did not include.”

We note:

“‘This demonstrates the ease with which we can plant these entirely fabricated memories, despite this voter suspicion and even despite an explicit warning that they may have been shown fake news,’ [lead author Gillian] Murphy says.”

Indeed it does. Even those who scored high on the cognitive test were susceptible to false memories, though those who scored lower were more likely to recall stories that supported their own opinions. At least the more intelligent among us seem better able to question their own biases. Alas, not only the intelligent vote.

In addition to fake articles that can now be generated quickly and easily with the help of AI, we are increasingly subjected to convincing fake photos and videos, too. Let us hope the majority of the population learns to take such evidence with a grain of salt, and quickly. Always consider the source.

Cynthia Murrell, September 7, 2019

Incognito Mode Update Hinders Publisher Paywalls

September 3, 2019

Google’s effort to bolster the privacy of Chrome’s Incognito Mode does not sit well with one writer at BetaNews. Randall C. Kennedy insists, “Google Declares War on Private Property.” The headline seems to conflate the term “private” with “proprietary,” but never mind. The point is the fix makes it easier for dishonest readers to avoid paywalls, and that is a cause for concern. The write-up explains:

“Google has announced that it is closing a loophole that allowed website operators to detect whether someone was viewing their content under the browser’s Incognito Mode. This detection had become an important part of enforcing paywall restrictions since even tech-unsavvy visitors had learned to bypass the free per-month trial article counts at sites like nytimes.com by visiting them with Incognito Mode active (and thus disabling the sites’ ability to track how many free articles the user read via a cookie.) The content publishing community’s response to this blatant theft of property has been to simply block users from visiting their sites under Incognito Mode. And the way they detect if the mode is active is by monitoring the Chrome FileSystem API and looking for signs of private browsing. Now, with version 76, Google has closed this API ‘loophole’ and is promising to continue thwarting any future workarounds that seek to identify Incognito Mode browsing activity.”

Google says the change is to protect those who would circumvent censorship in repressive nations. However, in doing so, it thwarts publishers who desperately need, and deserve, to get paid for their work. Kennedy suspects Google’s real motivation is its own profits—if content creators cannot enforce paywalls, he reasons, their only recourse will be to display Google’s ads alongside their content. Perhaps.

Cynthia Murrell, September 3, 2019

Elsevier: Exemplary Customer Service

August 26, 2019

Academic journals are expensive, and their publishers are notoriously protective of their content. Elsevier is the world’s largest academic publisher as well as the biggest paywall perpetrator. California, particularly the University of California, is big on low-cost, effective education.

The University of California and Elsevier have butted heads over access for months, but in July 2019 Elsevier pulled the plug on recent research. The Los Angeles Times explains the details in the article, “In Act Of Brinkmanship, A Big Publisher Cuts Off UC’s Access To Its Academic Journals.”

Elsevier’s contract with UC expired in 2018. UC is willing to renegotiate a contract with Elsevier, but UC wants the new contract to include an open access clause, meaning all work produced on its campuses will be free to the public.

Academic publishers usually publish scholarly material at no cost to authors but require expensive subscription fees to access content. UC wants to change the system to one where researchers pay to have their papers published but readers pay nothing for subscriptions. UC creates 10% of all published research in the US and is the largest producer of academic content in favor of open access.

Elsevier and other academic publishers are profit gluttons hiding behind paywalls. UC wants to continue its relationship with Elsevier, but the former agreement would raise subscription and access costs to exorbitant levels. The University of California found its contract with Elsevier cost prohibitive, so it took a stand and demanded open access for UC research.

“UC isn’t the only institution to stage a frontal assault on this model. Open access has been spreading in academia and in scholarly publishing; academic consortiums in Germany and Sweden also have demanded read-and-publish deals with Elsevier, which cut them off after they failed to reach deals last year. Those researchers are still cut off, according to Gemma Hersh, Elsevier’s vice president for global policy. Smaller deals have been made in recent months with research institutions in Norway and Hungary.”

We noted this statement:

“…Under the circumstances, it looks like Elsevier may have picked a fight with the wrong adversary. While the open-access movement is growing, ‘the reality is that the majority of the world’s articles are still published under the subscription model, and there is a cost associated with reading those articles,’ Hersh says.”

The academic publishing paywall seems to be under siege. There is pressure to reduce costs in higher education, and many professors and professional staff are demanding open access.

Elsevier may be perceived as mishandling its customers.

Whitney Grace, August 26, 2019

Knewz: Who New?

August 23, 2019

DarkCyber read “News Corp Is Apparently Working on a News App Called Knewz.”

My memory was jarred. What? Knewz. Will this service channel:

  • Dow Jones News/Retrieval
  • The Wall Street Journal Interactive Edition
  • Dow Jones Interactive and Reuters Business Briefing
  • Factiva?

News Corp. wants to fight back against the “free news” available from the evil upstarts. Well, Google News is no longer an upstart. Facebook, maybe? But what about Bing News, or the quite useful Big Project?

Knewz? From News Corp.?

The write up states:

The service will be called Knewz.com, and take the form of both a traditional website and a mobile app. It will draw from a variety of national outlets such as The New York Times and NBC News, as well as more partisan news sites like The Daily Caller and ThinkProgress.

Many years ago, Dow Jones launched a system which made news available.

Here’s a personal anecdote. I subscribe to the dead tree edition of the Wall Street Journal. If I want online access to the News Corp. property, I have to navigate to a Web page or call the hot line. I create an online subscription. But when the print subscription is renewed, I have to do this over and over and over again.

There is no connection between the print and online services. It seems that when I renew the print subscription, the online service should be updated and continue working. But no. That’s just not possible for a company struggling with modernization since the late 1980s and the initiatives of Richard Levine and others.

Is this type of system elegance “Knewz”?

Stephen E Arnold, August 23, 2019

Smart Software but No Mention of Cathy O’Neil

August 21, 2019

I read “Flawed Algorithms Are Grading Millions of Students’ Essays.” I also read Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil, which was published in 2016. My recollection is that Ms. O’Neil made appearances on some podcasts; for instance, Econ Talk, a truly exciting economics-centric program. It is indeed possible that the real news outfit Motherboard/Vice missed those. Before I read the article, I searched its text for “O’Neil” and “Weapons of Math Destruction.” I found zero hits. Why? The author, editor, and publisher did not include a pointer to her book. Zippo. There’s a reference to the “real news” outfit ProPublica. There’s a reference to the Vice investigation. Would this approach work in freshman composition with essays graded by a harried graduate student?

Here’s the point. Ms. O’Neil did a very good job of explaining the flaws of automated systems. Recycling is the name of the game. After all, DarkCyber is recycling this “original” essay containing “original” research, isn’t it?

I noted this passage in the write up:

Research is scarce on the issue of machine scoring bias, partly due to the secrecy of the companies that create these systems. Test scoring vendors closely guard their algorithms, and states are wary of drawing attention to the fact that algorithms, not humans, are grading students’ work. Only a handful of published studies have examined whether the engines treat students from different language backgrounds equally, but they back up some critics’ fears.

Yeah, but there is a relatively recent book on the subject.

I noted this statement in the write up:

Here’s the first sentence from the essay addressing technology’s impact on humans’ ability to think for themselves…

I like the “ability to think for themselves.”

So do I. In fact, I would suggest that this write up is an example of the loss of this ability.

A mere 2,000 words and not a word or a thought or a tiny footnote about Ms. O’Neil. Flawed? I leave it to you to decide.

Stephen E Arnold, August 21, 2019

Google Accused of Favoritism by an Outfit with Google Envy?

August 10, 2019

I read this story in the Jeff Bezos-owned Washington Post: “YouTube’s Arbitrary Standards: Stars Keep Making Money Even after Breaking the Rules.” The subtitle is a less than subtle dig at what WaPo perceives as the soft, vulnerable underbelly of Googzilla:

Moderators describe a chaotic workplace where exceptions for lucrative influencers are the norm.

What is the story about? The word choice in the headlines makes the message clear: Google is a corrupt Wild West. The words in the headline and subhead I noted are:

  • arbitrary
  • money
  • breaking
  • chaotic
  • exceptions
  • lucrative
  • norm

Is it necessary to work through the complete write up? I have the frame. This is “real news”, which may be as problematic as the high school management methods in operation at Google.

Let’s take a look at a couple of examples of “real news”:

Here’s the unfair angle:

With each crisis, YouTube has raced to update its guidelines for which types of content are allowed to benefit from its powerful advertising engine — depriving creators of those dollars if they break too many rules. That also penalizes YouTube, which splits the advertising revenue with its stars.

Nifty word choice: crisis, race, powerful, dollars, break, and the biggie “advertising revenue.”

That’s it. Advertising revenue. Google has. WaPo doesn’t. Perhaps, just perhaps, Amazon wants. Do you think?

Now the human deciders. Do they decide? WaPo reports the “real news” this way:

But unlike at rivals like Facebook and Twitter, many YouTube moderators aren’t able to delete content themselves. Instead, they are limited to recommending whether a piece of content is safe to run ads, flagging it to higher-ups who make the ultimate decision.

The words used are interesting:

  • unlike
  • Facebook
  • Twitter
  • aren’t
  • limited
  • recommending
  • higher-ups

Okay, that’s enough for me. I have the message.

What if WaPo compared and contrasted YouTube with Twitch, an Amazon-owned gaming platform? In my lectures at the TechnoSecurity & Digital Forensics Conference, I showed LE and intel professionals Twitch’s:

  • online gambling
  • soft porn
  • encoded messages
  • pirated first run motion pictures
  • streaming US television programs

Twitch talent can be banned; for example, SweetSaltyPeach. But this star resurfaced with ads a few days later as RachelKay. Same art. Same approach designed to appeal to the Twitch audience. How do I know? Well, those pre-roll ads and the prompt removal of the ban. Why put RachelKay back on the program? Maybe ad revenue?

My question is, “Why not dive into the toxic gaming culture and the failure of moderation on Twitch?” The focus on Google is interesting, but implying that these problems are peculiar to Google is misleading.

One thing is certain: The write up is so blatantly anti Google that it is funny.

Why not do a bit of research into the online streaming service of the WaPo’s owner?

Oh, right, that’s not “real news.”

What’s my point? Amazon is just as Googley as Google. Perhaps an editor at the WaPo should check out Twitch before attacking what is not much different from Amazon’s own video service.

Stephen E Arnold, August 10, 2019

Google Pumps Cash into DeepMind: A Cost Black Hole Contains Sour Grapes

August 8, 2019

DarkCyber believes that some of the major London newspapers are not wearing happy face buttons when talking about Google. The reasons boil down to money. Google has it in truckloads courtesy of advertising. London newspapers don’t because advertisers love print less these days.

I read “DeepMind Losses Mount as Google Spends Heavily to Win AI Arms Race.” The write up presents the spending as a string of bad decisions by the now ageing whiz kids. Sour grapes? More like sour grapes journalism.

Make no mistake: smart software is going to migrate through many human performed activities. Getting software to work, not send deliveries to the wrong house, pick out the exact person of interest from a sea of faces, and make decisions slightly more reliable than the ones the LIBOR folks delivered — this is the future.

The future is expensive unless one gets really lucky. Right, that’s like the “I’m feeling lucky” thing Google provides courtesy of advertisers’ spending.

Back to the bitter vintage write up: The London newspaper states:

Its annual accounts from Companies House show losses of more than £470m in 2018, up from £302m the year before, and its expenses rose from £334m to £568m. Of the £1.03bn due for repayment this year, £883m is owed to parent company Alphabet.

Okay, investments (losses). This is not news. What is news is the tiny hint that there may be some value in looking at the repayments issue. Well, why not look into the tax implications of such inside debts?

Another non news factoid: It costs money to hire people who can make AI work. What about the future of AI at a company that does not have smart people? There are some case examples of this type of misstep in non Googley businesses. What are the differences? Similarities? How about a smidgen of research and analysis?

Recycling numbers without context is — to be frank — like a commercial database summarizing an article from a linguistics journal published a year ago. Great for some, but for most, nothing substantive or useful.

Poor Google. The company is investing in a city and a country with the distinction of newspapers that grouse incessantly about a company that has been around for 20 or so years.

Will Google deploy its technology to report the news? Perhaps that would make an interesting write up. Recycling public financial data with a couple of ounces of lousy whine is not satisfying to those in Harrod’s Creek, Kentucky.

Stephen E Arnold, August 8, 2019

Elsevier: A Fun House Mirror of the Google?

August 5, 2019

Is Elsevier like Google? My hunch is that most people would see few similarities. In Google: The Digital Gutenberg, the third monograph in my Google trilogy, I noted:

  1. Google is the world’s largest publisher. Each search results page output is a document. Those documents make Google a publisher of import.
  2. Google uses its technology to create a walled garden for content. Rules must be followed to access that content for certain classes of users; for example, advertisers. I know that this statement does not mean much, if anything to most people, but think about AMP, its rules, and why it is important.
  3. Google is a content recycler. Original content on Google is usually limited to its own blog posts. The majority of content on Google is created by other people, and some of those people pay Google a variable, volatile fee to get that content in front of users (who, by the way, are indirect content generators).

Therefore, Google is the digital Gutenberg.

Now Elsevier:

  1. Elsevier publishes content for a fee from a specialized class of authors.
  2. Elsevier, like other professional publishers, relies for revenue on institutions that typically subscribe to services, an approach Google is slowly making publicly known and beginning to use.
  3. Elsevier is an artifact of the older Gutenberg world which required control or gatekeepers to keep information out of the wrong hands.

What’s interesting is the question this raises: Is Google becoming more like Elsevier? Or, alternatively, is Elsevier trying to become more like Google?

The questions are artificial because both firms:

  1. See themselves as natural control points and arbiters of data access
  2. Manage via arrogance; that is, what’s good for the firm is good for those in the know
  3. Face revenue diversification as a central challenge.

I thought of my Digital Gutenberg work when I read “Elsevier Threatens Others for Linking to Sci-Hub But Does So Itself.” I noted this statement (which in an era of fake news may or may not be accurate):

I learned this morning that the largest scholarly publisher in the world, Elsevier, sent a legal threat to Citationsy for linking to Sci-Hub. There are different jurisdictional views on whether linking to copyright material is or is not a copyright violation. That said, the more entertaining fact is that scholarly publishers frequently end up linking to Sci-Hub. Here’s one I found on Elsevier’s own ScienceDirect site ….

Key point: We do what we want. We are the gatekeepers. Very Googley.

Stephen E Arnold, August 5, 2019
