The Data Sharing of Healthcare
December 8, 2016
Machine learning tools like IBM's Watson artificial intelligence can and will improve healthcare access and diagnosis, but the problem is getting on the road to improvement. Implementing new technology is costly, between the equipment itself and staff training, and there is always the chance it will create more problems than it resolves. If the new technology makes a job easier and resolves real situations, however, then you are on the path to improvement. The UK is heading that way, says TechCrunch in “DeepMind Health Inks New Deal With UK’s NHS To Deploy Streams App In Early 2017.”
London’s NHS Royal Free Hospital will employ DeepMind Health in 2017, taking advantage of its data-sharing capabilities. Google owns DeepMind Health, which focuses on applying machine learning algorithms to preventative medicine. The NHS and DeepMind Health had a prior agreement, but when the New Scientist made a freedom of information request, their use of patients’ personal information came into question. The information was used to power the Streams app to send alerts about acute kidney injury patients. However, the ICO and MHRA shut down Streams when it was discovered it was never registered as a medical device.
The eventual goal, and part of the deal, is to relaunch Streams, but DeepMind has to repair its reputation. DeepMind is already on the mend with the new deal, and registering Streams as a medical device also helped. In order for healthcare apps to function properly, they need to be tested and fed high-quality data:
The point is, healthcare-related AI needs very high-quality data sets to nurture the kind of smarts DeepMind is hoping to be able to build. And the publicly funded NHS has both a wealth of such data and a pressing need to reduce costs — incentivizing it to accept the offer of “free” development work and wide-ranging partnerships with DeepMind…
Streams is the first step toward a healthcare system powered by digital healthcare products. As we have already seen, the stumbling block is protecting personal information while still giving the apps the data they need to work. Where is the line between the two drawn?
Whitney Grace, December 8, 2016
Increasingly Sophisticated Cybercrime
December 8, 2016
What a deal! Pymnts.com tells us that “Hacked Servers Sell for $6 On The Dark Web.” Citing recent research from Kaspersky Lab, the write-up explains:
Kaspersky Lab researchers exposed a massive global underground market selling more than 70,000 hacked servers from government entities, corporations and universities for as little as $6 each.
The cybersecurity firm said the newly discovered xDedic marketplace currently has a listing of 70,624 hacked Remote Desktop Protocol (RDP) servers for sale. It’s reported that many of the servers either host or provide access to consumer sites and services, while some have software installed for direct mail, financial accounting and POS processing, Kaspersky Lab confirmed.
Kaspersky’s Costin Raiu notes the study is evidence that “cybercrime-as-a-service” is growing and has been developing its own well-organized infrastructure. He also observes that the victims of these criminals are not only the targets of attacks but also the unwitting server owners. xDedic, he says, represents a new type of cybercriminal marketplace.
Kaspersky Lab recommends organizations take these precautions:
*Implement a multi-layered approach to IT infrastructure security that includes a robust security solution
*Use strong passwords in server authentication processes
*Establish an ongoing patch management process
*Perform regular security audits of IT infrastructures
*Invest in threat intelligence services
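The strong-password recommendation has a server-side counterpart: storing credentials so that a breached server does not hand over working logins. Below is a minimal sketch, our own illustration rather than anything from Kaspersky (the function names are invented for this example), using only Python's standard library:

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None) -> tuple:
    """Derive a slow, salted hash; store (salt, digest), never the password."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash and compare in constant time."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("hunter2", salt, digest))  # False
```

The per-user salt and the deliberately slow PBKDF2 derivation mean that a stolen credential table, like the data feeding marketplaces such as xDedic, cannot be trivially reversed into usable passwords.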
Stay safe, dear readers.
Cynthia Murrell, December 8, 2016
When Censorship Means More Money, Facebook Leans In
December 8, 2016
The Vanity Fair article titled “Facebook Is Reportedly Building a Censorship Tool to Win Over China” suggests that the people nervous about what it will mean for Facebook to police the proliferation of fake news are correct. The fear that Facebook managing fake news stories might lead to actual censorship of the news is not so far-fetched after all. The article states,
Auditing fake news is considered to be a slippery-slope problem for the company, which is just now starting to use fact-checkers to “grade” the veracity of news stories shared on its Web site and to crack down on false or partially false news stories shared on Facebook. Still, beneath it all, Facebook remains a publicly traded company with a fiduciary duty to its shareholders—and that duty is to make money.
Zuckerberg’s interest in capturing China’s 700M+ internet users has led to the creation of a censorship tool that can “automatically suppress content in specific geographic areas.” The tool has not been implemented (yet), but it suggests that Zuckerberg has a flexible relationship with freedom of information, especially where money is at stake. And there is a lot of money at stake. The article delves into the confusion over whether Facebook is a media company or not. But whatever type of company it is, it is a company. And that means money comes first.
Chelsea Kerwin, December 8, 2016
Europe and Disinformation: Denmark? Denmark.
December 7, 2016
If you want to catch up on what “Europe” is doing about disinformation, you will want to read “European Union Efforts to Counter Disinformation.” After you have worked through the short document, run a couple of queries on Bing, Google, Inxight, and Yandex for Copenhagen protests. With a bit of work, you will locate a December 4, 2016, write-up from the estimable Express newspaper Web site. The story is “WAR ON DENMARK’S STREETS: Migrant Chaos Sparks Clashes between Police and Protestors.” Disinformation, misinformation, and reformation of information are different facets of this issue. However, a growing problem is the absence of information. Locating semi-accurate “factoids” is a tough job. “Real” journalists prefer to recycle old information or just take what pops into their mobile phone’s browser. Hey, finding out things is really hard. People are really busy with the Facebook thing. Are you planning a holiday in Denmark, where a policeman was shot in the head on December 6, 2016? No quotes because the source is the outstanding Associated Press. That outfit does not want people like me to recycle their factoids. Hey, where’s the story about the car burnings, which have been increasing this year? Oh, never mind. If the information is not in Google, it does not exist. Convenient? You bet.
Stephen E Arnold, December 7, 2016
MC+A Is Again Independent: Search, Discovery, and Engineering Services
December 7, 2016
Beyond Search learned that MC+A has added a turbo-charger to its impressive search, content processing, and content management credentials. The company, based in Chicago, earned a gold star from Google for MC+A’s support and integration services for the now-discontinued Google Search Appliance. After working with the Yippy implementation of Watson Explorer, MC+A retains its search and retrieval capabilities but has expanded its scope. Michael Cizmar, the company’s president, told Beyond Search, “Search is incredibly important, but customers require more multi-faceted solutions.” MC+A provides the engineering and technical capabilities to cope with Big Data, disparate content, cloud and mixed-environment platforms, and the type of information processing needed to generate actionable reports. [For more information about Cizmar’s views on search and retrieval, see “An Interview with Michael Cizmar.”]
Cizmar added:
We solve organizational problems rooted in the lack of insight and accessibility to data that promotes operational inefficiency. Think of a support rep who has to look through five systems to find an answer for a customer on the phone. We are changing the way these users get to answers by providing them better insights from existing data securely. At a higher level we provide strategy support for executives looking for guidance on organizational change.
Alphabet Google’s decision to withdraw the Google Search Appliance has left more than 60,000 licensees looking for an alternative. Since the début of the GSA in 2002, Google trimmed the product line and never moved the search system to the cloud. Cizmar’s view of the GSA’s journey reveals that:
The Google Search Appliance was definitely not a failure. The idea that organizations wanted an easy-to-use, reliable Google-style search system was ahead of its time. Current GSA customers need some guidance on planning and recommendations on available options. Our point of view is that it’s not the time to simply swap out one piece of metal for another even if vendors claim “OEM” equivalency. The options available for data processing and search today all provide tremendous capabilities, including cognitive solutions which provide amazing capabilities to assist users beyond the keyword search use case.
Cizmar sees an opportunity to provide GSA customers with guidance on planning and recommendations on available options. MC+A understands the options available for data processing and information access today. The company is deeply involved in solutions which tap “smart software” to deliver actionable information.
Cizmar said:
Keyword search is a commodity at this point, and we are helping our customers put search where the user is without breaking an established workflow. Answers, not laundry lists of documents to read, are paramount today. Customers want to solve specific problems; for example, reducing average customer support call time using smart software or adaptive, self-service solutions. This is where MC+A’s capabilities deliver value.
MC+A is cloud savvy. The company realized that cloud and hybrid or cloud-on premises solutions were ways to reduce costs and improve system payoff. Cizmar was one of the technologists recognized by Google for innovation in cloud applications of the GSA. MC+A builds on that engineering expertise. Today, MC+A supports Google, Amazon, and other cloud infrastructures.
Cizmar revealed:
Amazon Elastic Cloud Search is probably doing as much business as Google did with the GSA but in a much different way. Many of these cloud-based offerings are generally solving the problem with the deployment complexities that go into standing up Elasticsearch, the open source version of Elastic’s information access system.
MC+A does not offer a one size fits all solution. He said:
The problem still remains of what should go into the cloud, how to get a solution deployed, and how to ensure usability of the cloud-centric system. The cloud offers tremendous capabilities in running and scaling a search cluster. However, with the API consumption model that we have to operate in, getting your data out of other systems into your search clusters remains a challenge. MC+A does not make security an afterthought. Access controls and system integrity have high priority in our solutions.
MC+A takes a business approach to what many engineering firms view as a technical problem. The company’s engineers examine the business use case. Only then does MC+A determine whether the cloud is an option and, if so, which product’s capabilities meet the general requirements. After that process, MC+A implements its carefully crafted, standard deployment process.
Cizmar noted:
If you are a customer with all of your data on premises or have a unique edge case, it may not make sense to use a cloud-based system. The search system needs to be near to the content most of the time.
MC+A offers its white-labeled search “Practice in a Box” to former Google partners and other integrators. High-profile specialist vendors like Onix in Ohio are able to resell the technology, backed by the MC+A engineering team.
In 2017, MC+A will roll out a search solution which is, at this time, shrouded in secrecy. This new offering will go “beyond the GSA” and offer expanded information access functionality. To support this new product, MC+A will announce a specialized search practice.
He said:
This international practice will offer depth and breadth in selling and implementing solutions around cognitive search, assist, and analytics with products other than Google throughout the Americas. I see this as beneficial to other Google and non-Google resellers because, it allows other them to utilize our award winning team, our content filters, and a wealth of social proofs on a just in time basis.
For 2017, MC+A offers a range of products and services. Based on the limited information provided by the secrecy-conscious Michael Cizmar, Beyond Search believes that the company will offer implementation and support services for Lucene and Solr, IBM Watson, and Microsoft SharePoint. The SharePoint support will embrace vendors supplying SharePoint-centric solutions like Coveo. Plus, MC+A will continue to offer software to acquire content and perform extract-transform-load functions on premises, in the cloud, or in hybrid configurations.
MC+A’s approach offers a business-technology approach to information access.
For more information about MC+A, contact sales@mcplusa.com or call 312-585-6396.
Stephen E Arnold, December 7, 2016
Verizon Inches Closer to Yahoot (Sorry, I Meant Yahoo)
December 7, 2016
I read “AOL CEO Tim Armstrong Optimistic about Yahoo Deal.” The book “The Power of Positive Thinking” emphasizes optimism. Looking at the bright side is good. One can sing “Keep on the Sunny Side,” the snappy tune allegedly penned by June Carter Cash.
The write up points out:
AOL Chief Executive Tim Armstrong said he’s “cautiously optimistic” that Verizon’s acquisition of Yahoo will go through despite the internet company’s disclosure this fall that it suffered a significant data breach.
The point I found interesting was:
the digital media veteran said he’s been working closely with Yahoo Chief Executive Marissa Mayer on strategy and structural planning as if the deal will close. And he’s been impressed with some of Yahoo’s plans for 2017 outside of the integration work.
Perhaps the dynamic duo will craft a new local newspaper play with an enhanced weather map. Sound good? Sure does.
Yahoot. Amazing. Former Baby Bell. More amazing. Together. Most amazing.
Stephen E Arnold, December 7, 2016
The Information Not Accuracy Age
December 7, 2016
The impact of Google on our lives is clear from the company’s name being used colloquially as a verb. However, Quantum Run reminds us of their quantifiable impact in their piece called “All Hail Google.” Google owns 80% of the smartphone market with over a billion Android devices. Gmail tallies 420 million users, and Chrome has 800 million. Also, YouTube, which Google owns, has one billion users. An interesting factoid the article pairs with these stats is that 94% of students equate Google with research. The article notes:
The American Medical Association voices their concerns over relying on search engines, saying, “Our concern is the accuracy and trustworthiness of content that ranks well in Google and other search engines.” Only 40 percent of teachers say their students are good at assessing the quality and accuracy of information they find via online research. And as for the teachers themselves, only five percent say ‘all/almost all’ of the information they find via search engines is trustworthy — far less than the 28 percent of all adults who say the same.
Apparently, cyberchondria is a thing. The article correctly points to the content housed on the deep web and the Dark Web as untouched by Google. The major irony of this article is that we now have to question the validity of all the fancy numbers Quantum Run has reported.
Megan Feil, December 7, 2016
Google Search Results Are Politically Biased
December 7, 2016
Google search results are supposed to be objective and accurate. The key word in the last sentence is objective, but studies have shown that algorithms can be just as biased as the humans who design them. One would think that Google, one of the most popular search engines in the world, would have discovered how to program objective algorithms, but according to the International Business Times, “Google Search Results Tend To Have Liberal Bias That Could Influence Public Opinion.”
Did you ever hear Uncle Ben’s advice to Spider-Man, “With great power comes great responsibility”? This advice rings true for big corporations, such as Google, that influence public opinion. CanIRank.com conducted a study that discovered searches using political terms displayed more pages with a liberal view than a conservative one. What does Google have to say about it?
The Alphabet-owned company has denied any bias and told the Wall Street Journal: ‘From the beginning, our approach to search has been to provide the most relevant answers and results to our users, and it would undermine people’s trust in our results, and our company, if we were to change course.’ The company maintains that its search results are based on algorithms using hundreds of factors which reflect the content and information available on the Internet. Google has never made its algorithm for determining search results completely public even though over the years researchers have tried to put their reasoning to it.
This is not the first time Google has been accused of a liberal bias in its search results. The consensus is that the liberal leanings are unintentional and an actual reflection of the amount of liberal content on the Web.
What is the truth? Only the Google gods know.
Whitney Grace, December 7, 2016
IBM Thinks Big on Data Unification
December 7, 2016
So far, the big data phenomenon has underwhelmed. We have developed several good ways to collect, store, and analyze data. However, those several ways have resulted in separate, individually developed systems that do not play well together. IBM hopes to fix that, we learn from “IBM Announces a Universal Platform for Data Science” at Forbes. They call the project the Data Science Experience. Writer Greg Satell explains:
Consider a typical retail enterprise, which has separate operations for purchasing, point-of-sale, inventory, marketing and other functions. All of these are continually generating and storing data as they interact with the real world in real time. Ideally, these systems would be tightly integrated, so that data generated in one area could influence decisions in another.
The reality, unfortunately, is that things rarely work together so seamlessly. Each of these systems stores information differently, which makes it very difficult to get full value from data. To understand how, for example, a marketing campaign is affecting traffic on the web site and in the stores, you often need to pull it out of separate systems and load it into Excel sheets.
That, essentially, has been what’s been holding data science back. We have the tools to analyze mountains of data and derive amazing insights in real time. New advanced cognitive systems, like Watson, can then take that data, learn from it and help guide our actions. But for all that to work, the information has to be accessible.
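Satell's retail example can be made concrete. The sketch below is our own illustration, not IBM code; the record layouts and field names are hypothetical. It joins exports from a point-of-sale system and a marketing system on a shared store ID, the kind of cross-system unification a platform like the Data Science Experience aims to make routine:

```python
# Hypothetical exports from two siloed systems, keyed by store ID.
pos_records = [
    {"store": "S1", "daily_sales": 12500},
    {"store": "S2", "daily_sales": 8300},
]
marketing_records = [
    {"store": "S1", "campaign_spend": 400},
    {"store": "S2", "campaign_spend": 150},
]

def join_on_store(pos, marketing):
    """Merge the two feeds into one view, like a SQL join on store ID."""
    spend_by_store = {r["store"]: r["campaign_spend"] for r in marketing}
    return [
        {**r, "campaign_spend": spend_by_store.get(r["store"])}
        for r in pos
    ]

unified = join_on_store(pos_records, marketing_records)
print(unified[0])  # {'store': 'S1', 'daily_sales': 12500, 'campaign_spend': 400}
```

The join itself is trivial; as the article argues, the hard part in practice is reaching this step at all, because each system stores and exposes its records differently.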
The article acknowledges the progress that has been made in this area, citing open-source Hadoop and Spark for their ability to tap into clusters of data around the world and analyze that data as a single set. Incompatible systems, however, still vex many organizations.
The article closes with an interesting observation—that many business people’s mindsets are stuck in the past. Planning far ahead is considered prudent, as is taking ample time to make any big decision. Technology has moved past that, though, and now such caution can render the basis for any decision obsolete as soon as it is made. As Satell puts it, we need “a more Bayesian approach to strategy, where we don’t expect to predict things and be right, but rather allow data streams to help us become less wrong over time.” Can the humans adapt to this way of thinking? It is reassuring to have a plan; I suspect only the most adaptable among us will feel comfortable flying by the seat of our pants.
Cynthia Murrell, December 7, 2016
Want to Get Published in a Science Journal? Just Dole out Some Cash
December 7, 2016
Canadian journalist Tom Spears managed to publish a heavily plagiarized paper in a science journal by paying some cash. Getting published in a scientific or medical journal helps advance a researcher's career.
In an article published by SlashDot titled “Science Journals Caught Publishing Fake Research For Cash,” the author says:
In 2014, journalist Tom Spears intentionally wrote “the world’s worst science research paper…a mess of plagiarism and meaningless garble” — then got it accepted by eight different journals. He did it to expose journals which follow the publish-for-a-fee model, “a fast-growing business that sucks money out of research, undermines genuine scientific knowledge, and provides fake credentials for the desperate.”
This is akin to students enlisting the services of hackers on the Dark Web to manipulate their grades and attendance records. However, in this case, there is no need for the Dark Web or a Tor browser. Paying some cash is sufficient.
The root of the problem can be traced to OMICS International, an India-based publishing firm that is buying the publication companies behind these medical journals and publishing whatever is sent to them for cash. Standard practice requires that a paper be peer-reviewed and checked for plagiarism before it is published. As written earlier, the line separating the Dark Web from the open Web seems to be thinning and may one day disappear altogether.
Vishal Ingole, December 7, 2016