January 8, 2014
Here is some content excitement of interest to journalists and bloggers everywhere. MakeUseOf informs us that “Feedly Was Stealing Your Content—Here’s the Story, and Their Code.” Apparently, the aggregation site was directing shared links to copies on their own site instead of to original articles, essentially stealing traffic. Writer James Bruce, eager to delve deeper into the code, makes it clear that he is following up on a discovery originally revealed by The Digital Reader.
“Not only were Feedly scraping the content from your site, they were then stripping any original social buttons and rewriting the meta-data. This means that when someone subsequently shared the item, they would in fact be sharing the Feedly link and not the original post. Anyone clicking on that link would go straight to Feedly.
So what, you might ask? When a post goes viral, it can be of huge benefit to the site in question — raising page views and ad revenues, and expanding their audience. Feedly was outright stealing that specific benefit away from the site to expand its own user base. The Feedly code included checks for mobile devices that would direct the users to the relevant appstore page.
It wasn’t ‘just making the article easier to view’ — it was stealing traffic, plain and simple. That’s really not cool.”
The write-up goes on to detail how Feedly responded to the discoveries, where the issue stands now, and “what we have learnt”: Feedly made some bad choices in the pursuit of a streamlined reading experience. As a parting shot, Bruce cites another example of a bad call by the company—it briefly required a Google+ account to log in. He has a point there.
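The link-rewriting scheme Bruce describes can be illustrated with a small sketch. To be clear, this is not Feedly’s actual code; the function names, URLs, and user-agent checks are invented to show the shape of the behavior described above:

```python
# Hypothetical illustration of the behavior described in the article.
# Not Feedly's actual code; names and URLs are invented.

APP_STORE_URLS = {
    "ios": "https://itunes.apple.com/app/example-reader",
    "android": "https://play.google.com/store/apps/details?id=example.reader",
}

def choose_share_target(original_url, item_id, user_agent):
    """Return the URL a shared link would point to under the scheme described.

    Mobile users are sent to the relevant app store page; everyone else
    gets a copy hosted on the aggregator's own domain, so clicks and
    page views accrue there rather than at the source site.
    """
    ua = user_agent.lower()
    if "iphone" in ua or "ipad" in ua:
        return APP_STORE_URLS["ios"]
    if "android" in ua:
        return APP_STORE_URLS["android"]
    return f"https://aggregator.example.com/i/{item_id}"

def honest_share_target(original_url, item_id, user_agent):
    """What readers expected: share the canonical URL unchanged."""
    return original_url
```

The gap between the two functions is the whole complaint: the fix is a one-line change that passes the original link through untouched.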
Cynthia Murrell, January 08, 2014
March 26, 2013
A top news story used to make or break a reporter, and it still can today, though the old channels are mostly closed and monitored by the Internet beat. Reporters used to bribe sources, and the best information still comes straight from the source. In a throwback to the old days, The Guardian says, “Wall Street Journal Blames Beijing Troublemaking For US Bribery Probe.” The accusation is that the Wall Street Journal’s Chinese bureau bribed government officials with expensive gifts in exchange for information. The US Justice Department was already investigating the Journal’s parent company, News Corporation, under the Foreign Corrupt Practices Act.
News Corporation believes someone simply wants to make trouble for the Journal, and it is upset over the allegations. The company also believes a Chinese government agent tipped off authorities. An internal investigation by News Corporation found nothing wrong.
How did this happen?
“The newspaper believes the bribery allegation came in relation to the Journal’s reporting of events in Chongqing, the province in which disgraced Chinese official Bo Xilai once had a power base.”
“The report also comes in the wake of claims that China has hacked into the systems of US newspapers – allegations that are denied by Beijing.”
The proper authorities are conducting further investigation while the US, England, and China argue back and forth, name-calling and the like. The new Chinese premier, Li Keqiang, even suggested that everyone forget this event and concentrate on preventing further cyber attacks. That will happen only in a perfect world, or if something bigger comes along, like North Korea gaining an atom bomb.
Whitney Grace, March 26, 2013
November 14, 2012
Many narratives have attached themselves to the phrase big data, and ZDNet discusses the story that EMC tells about it. Kathrin Winkler, vice president of corporate sustainability for EMC, explained the elusive concept in layman’s terms at the Verge @ Greenbuild summit. The main reason the concept needs to be brought down to earth is that big data now affects everyone’s lives. It has effectively “escaped the data center.”
Back in 2000, two exabytes of new information were created in the world. By 2011, Winkler said, the world was creating more than two exabytes of new information every day.
In the article, “EMC Explains Making Big Data More Concrete to General Public” we learned about EMC’s strategy:
Winkler briefly outlined EMC’s overall strategy, dubbed “The Human Face of Big Data,” which is designed to make big data more comprehensible for everyday Internet users. That strategy includes a book of the same name being published later this month, which features images from more than 150 photojournalists worldwide, demonstrating that basically every moment of our lives can now be chronicled in the cloud.
The possibilities with big data may seem overwhelming at times. Inherently, the opportunities are endless. However, these insights and information can only be delivered to decision-makers with the proper infrastructure technologies in place. We have had our eyes on PolySpot for their agile solutions in this department.
Megan Feil, November 14, 2012
September 15, 2012
It looks like the healthcare field may finally be entering the twenty-first century. Agilex informs us that “Maine’s HealthInfoNet Supports CDC Program to Demonstrate the Preventive Care Value of Health Information Exchanges.” We believe the Health Information Exchange (HIE) idea is a promising sector for analytics, and we look forward to following its growth.
The CDC program referred to in the title is long-windedly called “Demonstrating the Preventive Care Value of Health Information Exchanges”, and is being led by Agilex. In 2009, Maine was one of the first states to launch an HIE, a system that is maintained by HealthInfoNet. Since they have had time to work out any kinks, and because almost 80 percent of Maine residents have at least one record in the system, that state is the first to participate in the program.
The press release states:
“HealthInfoNet is using an open-source application called popHealth to de-identify, aggregate and securely transmit clinical quality measures to the Maine Center for Disease Control and Prevention (Maine CDC). Sponsored by the Office of the National Coordinator for Health IT (ONC), popHealth was developed to automate reporting of meaningful use measures from a provider’s electronic health record system while ensuring de-identification of the transmitted data. The application was selected for this program due to its ability to create population-level data that has been de-identified at both the patient and provider level. This population-level data can be used to inform statewide public health and heart disease prevention strategies.”
It sounds like popHealth is a valuable resource. Another important piece of the puzzle is the open source CONNECT platform, which allows HIEs to share data externally, yet securely, via the Nationwide Health Information Network. See the article for more details.
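The de-identify-and-aggregate step the release describes can be sketched in a few lines. This is a minimal illustration, not popHealth’s actual implementation; the record field names and the measure label are invented:

```python
# Minimal sketch of de-identified, population-level aggregation as described
# in the release. Not popHealth's implementation; field names are invented.
from collections import Counter

# Direct patient- and provider-level identifiers to strip before reporting.
IDENTIFIERS = {"name", "address", "birth_date", "provider_name"}

def deidentify(record):
    """Drop identifying fields so only clinical data remains."""
    return {k: v for k, v in record.items() if k not in IDENTIFIERS}

def aggregate_measure(records, measure):
    """Count how many de-identified records meet a clinical quality measure."""
    counts = Counter()
    for rec in map(deidentify, records):
        counts["met" if rec.get(measure) else "not_met"] += 1
    return dict(counts)

records = [
    {"name": "A. Patient", "provider_name": "Dr. B", "bp_controlled": True},
    {"name": "C. Patient", "provider_name": "Dr. D", "bp_controlled": False},
]
```

Only the population-level tallies, never the identified records, would then be transmitted to the public health agency.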
Headquartered close to DC in Chantilly, Virginia, Agilex serves clients in federal, state, and local governments as well as corporations. They supply mission and technology consulting, software and solution development, and system integration services. In a nod to the company’s commitment to quality, their name combines “agility” with “expertise”. Agilex was founded in 2007.
Cynthia Murrell, September 15, 2012
September 8, 2012
KiteDesk, a company focused on integrating multiple cloud services in one location, got a major redesign this week for the company’s official launch. According to the article released about the service on Tech Crunch, titled “KiteDesk Goes Where Greplin Failed: Aggregates Cloud Services for Search, Discovery & Interoperability,” the platform lets users connect email, contacts, calendar events, documents from social networking, and more in their KiteDesk accounts. From there, users can search all of these services at once and organize the data. KiteDesk is not the first company to try to aggregate the cloud, but most other startups have not fared well.
The article gives this insight:
“[…]KiteDesk co-founder and CEO Jack Kennedy says that he thinks companies that have attempted to compete in this space have been too narrowly focused to achieve the goals that are emerging for this class of software. ‘We see Personalized Information as a “Macro Trend” that’s buttressed by other trends like BYOD, consumerization of I.T., and a gradually diminishing line between personal and professional systems,’ he explains.”
KiteDesk may succeed where others have failed by focusing more on letting users move files between services and creating streams to customize data instead of simply searching and sharing. The company is currently taking sign-ups for the free service and we look forward to seeing more from this niche.
Andrea Hayden, September 08, 2012
September 3, 2012
Many folks are alarmed and confused about the current state of technology patents, and rightly so. We have found an interesting paper that explains in great detail what has been happening, why and how, and what the trajectory means for the future. To be sure, “The Giants Among Us” (PDF) from Stanford Technology Law Review is not a coffee-break-length piece. It is, however, full of important facts, insights, and observations. A must-read for anyone concerned about today’s tech patent landscape.
The paper, written by Tom Ewing and Robin Feldman, begins with this observation:
“The patent world is quietly undergoing a change of seismic proportions. In a few short years, a handful of entities have amassed vast treasuries of patents on an unprecedented scale. To give some sense of the magnitude of this change, our research shows that in a little more than five years, the most massive of these has accumulated 30,000-60,000 patents worldwide, which would make it the 5th largest patent portfolio of any domestic US company and the 15th largest of any company in the world.

“These entities, which we call mass aggregators, do not engage in the manufacturing of products nor do they conduct much research. Rather, they pursue other goals of interest to their founders and investors.”
Indeed. The rest of the paper supplies facts about such mass aggregators (particularly Intellectual Ventures); gives a nod to potential positive effects; delineates the potential damage from the trend; and wraps up with ideas on what can and should be done. Ewing and Feldman prescribe regulatory oversight, transparency, and undermining trolls’ profit motive.
Cynthia Murrell, September 03, 2012
August 10, 2012
In an unusually informative press release, Information Builders crows, “Columbus Zoo and Aquarium Selects Information Builders to Gain a Consolidated View of Business Operations.” The write up goes into unusual detail on how the organization is taking advantage of the software. It quotes zoo V.P. of Technology Services Gregg Oosterbaan:
“We’ve grown at a rapid rate in recent years, expanding our operations to include several new entities. We needed tools capable of capitalizing on our vast data assets, from point-of-sale, membership, and ticketing data to donations and animal health. We have a lot of different information systems, and we lacked a unified view across all of these entities, which made it difficult to make knowledgeable decisions. Information Builders is supplying the technology and services that we need to provide the best possible experiences for our animals and guests while continuing to grow our operations.”
The new unified data warehouse, created together by Information Builders and zoo staff, initially supports two reporting portals: a membership and a revenue portal. The software company’s WebFOCUS BI platform pulls from disparate data sources to produce valuable reports.
The Columbus Zoo and Aquarium, located in Columbus, Ohio, houses over 9,000 critters. The organization also spends over $1 million a year to support more than seventy conservation projects worldwide.
Headquartered in New York, Information Builders has offices around the world and remains one of the largest independent, privately held companies in the industry. The company has been in operation since 1975, possibly because of their stated focus on customer success.
Cynthia Murrell, August 10, 2012
March 21, 2012
Vivisimo, the information optimization software provider, recently rolled out an automated blog called the Vivisimo Daily. We find it interesting that the blog’s appearance coincides with the company’s step up in customer support marketing.
The microsite displays content drawn from a variety of media sources, ranging from posts to videos. They even have a CXO mobile contest.
In reference to CXO, the editor’s note states:
Customer eXperience Optimization (CXO) connects customer-facing professionals in your sales, support and customer service organizations with all of the information they need for successful customer, partner and sales prospect interactions.
The service is operated by Paper.li, a quick and easy way for organizations to publish their own online newspapers. This is an example of a company utilizing Paper.li to assist with a content play. More original content might be a plus.
Jasmine Ashton, March 21, 2012
Sponsored by Pandia.com
February 7, 2012
Are discovery engines the cure for information overload? The Darwin Awareness Engine Blog lists “How to Manage Information Overload: 6 Ways Discovery Engines Help.” First, a distinction: a discovery engine goes further than a search engine, offering tools to refine a search, consolidate data, and apply context to the results.
Discovery engines, states writer Romain Goday, help users navigate overwhelming data because they: focus on topics, not people; go straight to the source of information; supply information through a single channel; let users discover what they didn’t know that they didn’t know; often go through a curation process; and reduce anxiety by combining and ranking sources. See the write up for details on each point.
The article asserts:
Tools that deliver ‘awareness’ of topics and filter out irrelevant information will become indispensable for managing information overload. The challenge is to do so without losing the ability to make unexpected discoveries. Content discovery engines are addressing this need with a multitude of approaches. The market remains very fragmented, but we can expect important players to emerge in the next few years.
I’m sure we can. Our concern is that information may be lost through the auto-selection process. Is it wise to rely on an AI for such an important task? Do we have a choice at this point, or has the big data monster grown too big for the human touch?
Cynthia Murrell, February 7, 2012
December 18, 2011
According to the October 25 news release, “FirstRain Recognized as ‘Innovative Business Analytics Company under $100M to Watch in 2011’ by Leading Market Research Firm,” the analyst firm IDC has included FirstRain, an analytics software company, in its 2011 list of “Innovative Business Analytics Companies Under $100M to Watch.”
FirstRain is an analytics software company that uses its Business Monitoring Engine to provide professionals with access to the business Web. The company’s semantic-categorization technology instantly cuts through the clutter of consumer Web content, delivering only highly-relevant intelligence.
The company was highlighted in the “Cloud-based Analytics” category for their innovative use of semantic analysis to extract and deliver high-value information from the Web.
The value in using FirstRain is the breadth of its coverage, combined with its depth of selection and filtering, so that it delivers the information users need to see without cluttering their desktops or their minds with the extraneous. It was also easy to integrate into existing information delivery channels because of the high relevance of the information it delivered.
The fact that IDC even has a list of top business analytics companies shows how important search optimization software is becoming in the business world. Who knew that business intelligence would be the new black?
Jasmine Ashton, December 18, 2011