Google-Publishers Partnership Chases True News

September 22, 2017

It appears as though Google is taking the issue of false information, and perhaps even their role in its perpetuation, seriously; The Drum reveals, “Google Says it Wants to Fund the News, Not Fake It.” Reporters Jessica Goodfellow and Ronan Shields spoke with Google’s Madhav Chinnappa to discuss the Digital News Initiative (DNI), which was established in 2015. The initiative, a project on which Google is working with European news publishers, aims to leverage technology in support of good journalism. As it turns out, Wikipedia’s process suggests an approach; having discussed the “collaborative content” model with Chinnappa, the journalists write:

To this point, he also discusses DNI’s support of Wikitribune, asserting that it and Wikipedia are ‘absolutely incredible and misunderstood,’ pointing out the diligence that goes into its editing and review process, despite its decentralized means of doing so. The Wikitribune project tries to take some of this spirit of Wikipedia and apply this to news, adds Chinnappa. He further explains that [Wikipedia & Wikitribune] founder Jimmy Wales’ opinion is that the mainstream model of professional online publishing, whereby the ‘journalist writes the article and you’ve got a comment section at the bottom and it’s filled with crazy people saying crazy things’, is flawed. He [Wales] believes that’s not a healthy model. ‘What Wikitribune wants to do is actually have a more rounded model where you have the professional journalist and then you have people contributing as well and there’s a more open and even dialogue around that,’ he adds. ‘If it succeeds? I don’t know. But I think it’s about enabling experimentation and I think that’s going to be a really interesting one.’

Yes, experimentation is important to the DNI’s approach. Chinnappa believes technical tools will be key to verifying content accuracy. He also sees a reason to be hopeful about the future of journalism—amid fears that technology will eventually replace reporters, he suggests such tools, instead, will free journalists from the time-consuming task of checking facts. Perhaps; but will they work to stem the tide of false propaganda?

Cynthia Murrell, September 22, 2017

Old School Publishing: On the Ropes?

September 18, 2017

If you are interested in professional publishing, you will want to read “We’ve Failed: Pirate Black Open Access Is Trumping Green and Gold and We Must Change Our Approach.” The “colorful” metaphors aside, there are some interesting statements in the article, which is available online without a fee.

I noted this passage:

Not for the first time, pirates are delivering where the established players and legal channels are not.

I also highlighted this idea for professional publishers:

What if, like the airline industry, publishers unbundled their product and started to test the value of some of the elements that form the bundle?

Please, read the full article, which is free, I wish to reiterate, and think about the business decisions facing companies dependent on the traditional business model for professional information services.

There’s nothing like an uncomfortable coach class seat.

Stephen E Arnold, September 18, 2017

Bing and Google: The News Battle

September 15, 2017

I read “Bing Battles Google News with Its Own Make-Over.” I noted the alliteration: Bing battle. I immediately thought, “Google Gropes.” Both of these companies are trying to reinvent the newspaper using zeros and ones, not dead trees. Let’s look at some of the points I highlighted:

I noted this statement from everyone’s most lovable online ad vendor:

Google redesigned their desktop Google News website. Their [sic] new UI has a clean and uncluttered look.

Microsoft responded. I circled this statement:

Microsoft recently updated their Bing News experience that will help users in finding the most up to date and well-rounded information.

Note that the pivot of both sentences is a subjective assertion: “clean and uncluttered” for the GOOG, and “most up to date and well-rounded” for Bing.

Some facts would be useful. I am not sure what “clean” or “uncluttered” means. My recollection is that Einstein’s desk, like most “dead tree” newspapers, was organized in an eclectic manner. Facts supporting these assertions might be difficult to conjure.

The “most up to date” statement should be easy to back up. What’s the latency of the system? The superlative “most” means that Bing is the top dog in news. Hmmm. I don’t buy this.

My point is that the write up provides a useful idea: Neither Bing nor Google has figured out how to present “news” to each system’s online users. The implicit idea is that “dead tree” methods are of little use. Inspiration comes from each system’s response to what the other system does.

Cold War methods applied to online “news”? That’s what the write up signals to me.

Let’s step back.

Online users have different reasons for wanting news. Some folks chase sports, which as I recall was the most read section of the “dead tree” newspaper company at which I once worked. Other people have quite different reasons for scanning the news; for example, there are some who read the obituaries, others seek cartoons, and others want the latest on the real housewives.

Bing and Google have to figure out how to meet these diverse needs because the “dead tree” crowd has fallen in the forest.

The write up tells me one thing: Neither Google nor Microsoft has any idea about reinventing what “dead tree” newspapers used to do.

Now what? Shape the news to fit what each company’s filters “decide” is “real news”?

Stephen E Arnold, September 15, 2017

A New and Improved Content Delivery System

September 7, 2017

Personalized content and delivery is the name of the game in PRWEB’s “Flatirons Solutions Launches XML DITA Dynamic Content Delivery Solutions.” Flatirons Solutions is a leading XML-based publishing and content management company, and it recently released its Dynamic Content Delivery Solution. The solution uses XML-based technology that allows enterprises to deliver more personalized content, and it is advertised as reducing publishing and support costs. The new solution is built with the Mark Logic Server.

By partnering with Mark Logic and incorporating their industry-leading XML content server, the solution conducts powerful queries, indexing, and personalization against large collections of DITA topics. For our clients, this provides immediate access to relevant information, while producing cost savings in technical support, and in content production, maintenance, review and publishing. So whether they are producing sales, marketing, technical, training or help documentation, clients can step up to a new level of content delivery while simultaneously improving their bottom line.

The Dynamic Content Delivery Solution is designed for government agencies and enterprises that publish XML content to various platforms and formats.  Mark Logic is touted as a powerful tool to pool content from different sources, repurpose it, and deliver it to different channels.
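The pitch, running queries and personalization against collections of DITA topics, can be illustrated with a toy sketch. To be clear, MarkLogic exposes its own server-side query APIs; the two hypothetical topics and the plain-Python filtering below merely stand in for that machinery, using DITA’s standard `<audience>` prolog metadata to select topics for a given reader.

```python
# Toy illustration of audience-based DITA topic selection.
# A real deployment would run such queries inside an XML content
# server (e.g., MarkLogic); ElementTree stands in here.
import xml.etree.ElementTree as ET

# Two invented DITA topics with <audience> metadata in the prolog.
TOPICS = [
    """<topic id="install"><title>Installing the Widget</title>
         <prolog><metadata><audience type="administrator"/></metadata></prolog>
       </topic>""",
    """<topic id="use"><title>Using the Widget</title>
         <prolog><metadata><audience type="user"/></metadata></prolog>
       </topic>""",
]

def topics_for(audience_type, topics=TOPICS):
    """Return (id, title) pairs for topics tagged with the given audience."""
    selected = []
    for raw in topics:
        root = ET.fromstring(raw)
        for aud in root.iter("audience"):
            if aud.get("type") == audience_type:
                selected.append((root.get("id"), root.findtext("title")))
    return selected

print(topics_for("user"))  # [('use', 'Using the Widget')]
```

The same metadata that drives single-source publishing (audience, platform, product) is what makes the “personalization” claim plausible: the topics are already sliced, so delivery is a filter, not an authoring task.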

MarkLogic finds success in its core use case: slicing and dicing for publishing.  It is back to the basics for them.

Whitney Grace, September 7, 2017


Academic Publication Rights Cause European Dispute

September 4, 2017

Being published is the bread and butter of intellectuals, especially academics. Publication, in theory, is a way for information to be shared across the globe, but it also has become big business. A recent Chemistry World article examines the standoff between Germany’s Project DEAL (a consortium of German universities) and the Dutch publisher Elsevier, along with possible fall-out from the end result.

At the heart of the dispute is who controls the publications. Currently, Elsevier holds the cards and has wielded its power to make a clear point on the matter. Project DEAL, though, is not going down without a fight, and Chemistry World quotes Horst Hippler, a physical chemist and chief negotiator for Project DEAL, as saying,

In the course of digitisation, science communication is undergoing a fundamental transformation process. Comprehensive, free and – above all – sustainable access to scientific publications is of immense importance to our researchers. We therefore will actively pursue the transformation to open access, which is an important building block in the concept of open science. To this end, we want to create a fair and sustainable basis through appropriate licensing agreements with Elsevier and other scientific publishers.

As publications move farther from ink and paper and more to digital, the question of who owns the rights to the information is becoming murkier. It will be interesting to see how this battle plays out and whether any more disgruntled academics jump on board.

Catherine Lamsfuss, September 4, 2017

Demonizing the Ever Helpful Alphabet Google XXVI Things

September 2, 2017

Gentle reader, I am horrified at the indirect vilification of my beloved Alphabet Google XXVI things. You must judge for yourself. Navigate to “A Serf on Google’s Farm.” A serf, as I understand the term, is a person who is in thrall to a noble. The noble provides the land, and the serf the labor. As our modern world embraces the precepts of the Great Chain of Being, serfs are below the one percent. Thus it is. In the Dark Ages, one did not grouse too much about the one percent. Bad things could happen because that was the mechanism for the Great Chain of Being. It was a perception that the top spot was occupied by a deity. The lower levels were ranked by their station in life. In short, it was and is good to be up near the top of the pecking order.

The write up makes clear that publishers find themselves lower in the Great Chain of Digital Being than they were in the pre-Google era. Yep, when the king disowned an annoying son, life was not as good outside the castle as it was inside the castle.

Publisher types are now looking at the castle from the mud and straw vantage points close to the pigs and chickens. Big change. The trip to the castle may have been short in terms of steps but long in terms of the Great Chain of Being.

The article points out that Google has put publishers and related content types in the squalid hovels built near the castle walls. Life can be fun when the wine and mead are available, and the harvest is good. But at other times, those lice and muddy lanes were a bummer.

The write up points out that the Google has assembled an advertising Catch 22. Get with the program and you may be squeezed by the program. Thus it was for serfs, and thus it is for those who have little choice but to accept Google’s way of life.

I noted three statements which characterize the world as perceived by a digital serf:

  1. as the adage puts it, if you don’t pay for the product, you are the product. Google isn’t doing us any favors. We get these services for free because Google’s empire and the vast amounts of money it brings in every year is built on the unimaginable amounts of data that come from, among other places, DoubleClick for Publishers and Analytics. We’re [the article author’s company] just one of a kabillion [sic] sites allowing Google to harvest our data.
  2. Running TPM [the article author’s company] absent Google’s various services is almost unthinkable. Like I literally would need to give it a lot of thought how we’d do without all of them. Some of them are critical and I wouldn’t know where to start for replacing them. In many cases, alternatives don’t exist because no business can get a footing with a product Google lets people use for free.
  3. And in general Google tends to be a relatively benign overlord….Google’s monopoly control is almost comically great. It’s a monopoly at every conceivable turn and consistently uses that market power to deepen its hold and increase its profits. Just the interplay between DoubleClick and Adexchange is textbook anti-competitive practices.

My view is that the Google has been operating in a consistent manner since it was inspired by the Yahoo, GoTo, Overture pay to play model. That shift from better Web search to the ad thing took place before the Google initial public offering. That works out to 13 years ago.

In that span of time, publishers wanted the world to be like the good old days of print, which put the publishers in the role of gatekeepers and power brokers. Nice try, but publishers were unable to adapt to the Googley world. Just like the hapless retail giants, the failure to take advantage of digital opportunities has put Sears, JC Penney, and other “giants” outside the castle walls. Wattle, not Walmart, is the go-to operating model.

Forget Google. Had there been no Google, another outfit would have filled the void. Google is a reflection of today’s version of the Middle Ages.

Do I feel sorry for traditional publishers? Nope. These outfits embrace systems and methods like XML, slicing and dicing, and surfing on Google as the skateboard wheels that will carry them to the future.

The wheels spin but don’t win X Games competitions.

Now Google itself is vulnerable. There are Facebook, the Chinese outfits, and the Bezos transformer machine. Perhaps publishers should think about ways to exploit Google’s flaws instead of grousing about Google being Google for 13 years. The Alphabet Google XXVI things are not likely to change their stripes overnight.

Publishers might find life easier if they quit complaining and name calling. Meeting user needs might be a path forward. But Google bashing is so easy and so much fun. Figuring out how to make money is work. Who wants to do that?

Stephen E Arnold, September 2, 2017

Google: A Me Too from Mountain View

August 7, 2017

It is a tough world out there for a seller of online ads. From my point of view, the concentration of online advertising in the hands of Facebook and Google is a natural consequence of digital disintermediation. He who is most like the old Bell Telephone wins.

What does one do when an upstart comes up with a better idea? If one is a giant company’s chief innovator, the answer is obvious: Imitate, then use the power of scale to take lots of money.

I thought about this characteristic of online life when I read “Google Reportedly Building Its Own Snapchat Competitor.” I would have used the word “killer,” not “competitor,” but that’s why I am a 74-year-old retired person in rural Kentucky.

The write up (which may be a recycled variant of another real journalism effort) said:

Google is working on its answer to Snapchat. It’s called Stamp — a portmanteau of “stories” and “AMP,” the acronym for Accelerated Mobile Pages …The new platform would be similar to Snapchat’s Discover feature, where publishers create and share made-for-Snapchat (or repurposed-for-Snapchat) content.

Didn’t Google try to buy Snap when it was just Snapchat?

Moral of the story:

The model and wife of Snapchat CEO Evan Spiegel has historically not been too thrilled about other tech companies ripping off her husband’s product. “Do they have to steal all of my partner’s ideas? I’m so appalled by that … When you directly copy someone, that’s not innovation.”


Nah, that’s innovation the online way.

Stephen E Arnold, August 7, 2017

Big Data as Savior of Newspapers? Tell That to NYT Editors

August 7, 2017

This would be ironic. The SmartDataCollective posits, “Is Big Data the Salvation of the Newspaper Industry?” The write-up tells us that several prominent publications are turning to data analysis to boost their bottom lines and, they hope, save themselves from extinction. Writer Rehan Ijaz cites this post from the US Chamber of Commerce Foundation as he describes ways the New York Times and the Financial Times are leveraging data. He quotes publishing pro David Soloff:

The Financial Times, one of our global publisher customers, uses big data analytics to optimize pricing on ads by section, audience, targeting parameters, geography, and time of day. Our friends at the FT sell more inventory because the team knows what they have, where it is and how it should be priced to capture the opportunity at hand. To boot, analytics reveal previously undersold areas of the publication, enabling premium pricing and resulting in found margin falling straight to the bottom line.

What about the venerable New York Times? That paper hired a data scientist in 2014, yet now is slashing staff, we learn from Reuters’ piece, “New York Times Offers Buyouts, Scraps Public Editor Position.” It is, in fact, mostly editors who face unemployment (because clear prose and verified facts are so last century, I suppose). Reporters Jessica Toonkel and Narottam Medhora reveal:

The newspaper said it would eliminate the in-house watchdog position of public editor as it shifts focus to reader comments. ‘Today, our followers on social media and our readers across the internet have come together to collectively serve as a modern watchdog, more vigilant and forceful than one person could ever be,’ publisher Arthur Sulzberger Jr said in a memo, which was reviewed by Reuters.

“Vigilant and forceful?” Is “correct” not a consideration? Professional editors exist for a reason; crowdsourcing will not always suffice. Also, call me old-fashioned, but I think facts should be confirmed before publication. This is an interesting choice for the Times to be making, particularly now, amid the “fake news” commotion.

Cynthia Murrell, August 7, 2017

Can an Algorithm Tame Misinformation Online?

June 23, 2017

UCLA researchers are working on an algorithmic solution to the “fake news” problem, we learn from the article, “Algorithm Reads Millions of Posts on Parenting Sites in Bid to Understand Online Misinformation” at TechRadar. Okay, it’s actually indexing and text analysis, not “reading,” but we get the idea. Reporter Duncan Geere tells us:

There’s a special logic to the flow of posts on a forum or message board, one that’s easy to parse by someone who’s spent a lot of time on them but kinda hard to understand for those who haven’t. Researchers at UCLA are working on teaching computers to understand these structured narratives within chronological posts on the web, in an attempt to get a better grasp of how humans think and communicate online.

Researchers used the hot topic of vaccinations, as discussed on two parenting forums, as their test case. Through an examination of nearly 2 million posts, the algorithm was able to derive an accurate “narrative framework.” Geere writes:

While this study was targeted at conversations around vaccination, the researchers say the same principles could be applied to any topic. Down the line, they hope it could allow for false narratives to be identified as they develop and countered by targeted messaging.

The phrase “down the line” is incredibly vague, but the sooner the better, we say (though we wonder exactly what form this “targeted messaging” will take). The original study can be found here at eHealth publisher JMIR Publications.
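The UCLA team’s actual pipeline is not detailed in the write-up, but the indexing-and-analysis idea can be loosely illustrated: one crude first step toward a “narrative framework” is counting which actors and actions co-occur in the same forum post, then ranking the pairs. The sample posts and term list below are entirely invented for illustration.

```python
# Loose illustration (not the UCLA method): rank co-occurring terms
# across forum posts as a crude proxy for narrative structure.
from collections import Counter
from itertools import combinations

# Hypothetical forum posts and a hand-picked vocabulary of interest.
POSTS = [
    "my doctor recommended the vaccine but I read about side effects",
    "the school requires the vaccine and the doctor agreed",
    "worried about side effects so I asked the doctor",
]
TERMS = {"doctor", "vaccine", "school", "side effects"}

def cooccurrence(posts, terms):
    """Count how often each pair of terms appears in the same post."""
    pairs = Counter()
    for post in posts:
        present = sorted(t for t in terms if t in post)
        pairs.update(combinations(present, 2))
    return pairs

# The most frequent pairs hint at the dominant story line.
print(cooccurrence(POSTS, TERMS).most_common(2))
```

Real narrative extraction would need entity resolution, relation extraction, and the chronological structure the article emphasizes; this sketch shows only why scale matters, since the signal emerges from pair counts over many posts.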

Cynthia Murrell, June 23, 2017


Academic Publisher Retracts Record Number of Papers

June 20, 2017

To the scourge of fake news we add the problem of fake research. Retraction Watch announces “A New Record: Major Publisher Retracting More Than 100 Studies from Cancer Journal over Fake Peer Reviews.” We learn that the academic publisher Springer has just retracted 107 papers from a single journal after discovering their peer reviews had been falsified. Faking the integrity of cancer research? That’s pretty low. The article specifies:

To submit a fake review, someone (often the author of a paper) either makes up an outside expert to review the paper, or suggests a real researcher — and in both cases, provides a fake email address that comes back to someone who will invariably give the paper a glowing review. In this case, Springer, the publisher of Tumor Biology through 2016, told us that an investigation produced “clear evidence” the reviews were submitted under the names of real researchers with faked emails. Some of the authors may have used a third-party editing service, which may have supplied the reviews. The journal is now published by SAGE. The retractions follow another sweep by the publisher last year, when Tumor Biology retracted 25 papers for compromised review and other issues, mostly authored by researchers based in Iran.

The article shares Springer’s response to the matter, some from their official statement and some from a spokesperson. For example, we learn the company cut ties with the “Tumor Biology” owners, and that the latest fake reviews were caught during a process put in place after that debacle.  See the story for more details.

Cynthia Murrell, June 20, 2017
