New Year’s Resolutions in Personal Data Security
December 22, 2015
The article on ITProPortal titled “What Did We Learn in Records Management in 2015 and What Lies Ahead for 2016?” delves into the still-unlearned lessons of data security. The article begins with a look back at the year’s major data breaches, including Ashley Madison, JPMorgan, and VTech, and draws from them a trend: hackers are increasingly targeting personal information. The article reports,
“A Crown Records Management Survey earlier in 2015 revealed two-thirds of people interviewed – all of them IT decision makers at UK companies with more than 200 employees – admitted losing important data… human error is continuing to put that information at risk as businesses fail to protect it properly…but there is legislation on the horizon that could prompt change – and a greater public awareness of data protection issues could also drive the agenda.”
The article also makes a few predictions about upcoming developments in our approach to data protection. Among them are the passage of the European Union General Data Protection Regulation (EU GDPR) and its resulting effect on businesses. In terms of apps, the article suggests that more people might start asking questions about the information required to use certain apps (especially when the data requested is completely irrelevant to the app’s functions). Generally optimistic, the article holds that these developments will only occur if people, businesses, and governments take data breaches and privacy more seriously.
Chelsea Kerwin, December 22, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Internet Sovereignty, Apathy, and the Cloud
December 21, 2015
The OS News post titled “Dark Clouds Over the Internet” presents an argument that boils down to a choice: either an international accord on data sharing, or the risk of the Internet being broken up into national networks. Some very worked-up commenters engaged in an interesting discussion that spanned government overreach, democracy, data security, privacy, and, for some reason, climate change. One person summarized their opinion thusly:
“Best policy: don’t store data with someone else. There is no cloud. It’s just someone else’s computer.”
In response, a user named Alfman replied that companies are to blame for the current lack of data security, or more precisely, people are generally to blame for allowing this state of affairs to exist,
“The privacy issues we’re now seeing are a direct consequence of corporate business models pushing our data into their central silos. None of this is surprising except perhaps how willing users have been to forgo their own privacy. Collectively, it seems that we are very willing to give up our rights for very little in exchange… makes it difficult to achieve critical mass around technologies promoting data independence.”
It is hard to argue with the apathy factor, with data breaches occurring regularly and so little being done by individuals to protect themselves. Good thing these commenters have figured it all out. Next up, solving climate change.
Chelsea Kerwin, December 21, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Computers Pose Barriers to Scientific Reproducibility
December 9, 2015
These days, it is hard to imagine performing scientific research without the help of computers. Phys.org details the problem that poses in its thorough article, “How Computers Broke Science—And What We Can Do to Fix It.” Many of us learned in school that reliable scientific conclusions rest on a foundation of reproducibility. That is, if an experiment’s results can be reproduced by other scientists following the same steps, the results can be trusted. However, now many of those steps are hidden within researchers’ hard drives, making the test of reproducibility difficult or impossible to apply. Writer Ben Marwick points out:
“Stanford statisticians Jonathan Buckheit and David Donoho [PDF] described this issue as early as 1995, when the personal computer was still a fairly new idea.
‘An article about computational science in a scientific publication is not the scholarship itself, it is merely advertising of the scholarship. The actual scholarship is the complete software development environment and the complete set of instructions which generated the figures.’
“They make a radical claim. It means all those private files on our personal computers, and the private analysis tasks we do as we work toward preparing for publication should be made public along with the journal article.
This would be a huge change in the way scientists work. We’d need to prepare from the start for everything we do on the computer to eventually be made available for others to see. For many researchers, that’s an overwhelming thought. Victoria Stodden has found the biggest objection to sharing files is the time it takes to prepare them by writing documentation and cleaning them up. The second biggest concern is the risk of not receiving credit for the files if someone else uses them.”
So, do we give up on the test of reproducibility, or do we find a way to address those concerns? Well, this is the scientific community we’re talking about. There are already many researchers in several fields devising solutions. Poetically, those solutions tend to be software-based. For example, some are turning to executable scripts instead of the harder-to-record series of mouse clicks. There are also suggestions for standardized file formats and organizational structures. See the article for more details on these efforts.
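To make the scripting suggestion concrete, here is a minimal sketch of our own (not from the article, and with hypothetical file and column names) of an analysis recorded as an executable script rather than a series of mouse clicks:

```python
# reproduce_figure.py -- a minimal sketch of a reproducible analysis.
# The input file and its "value" column are hypothetical; the point is
# that every step from raw data to finished figure lives in one
# rerunnable script instead of a series of unrecorded mouse clicks.
import csv
import statistics

import matplotlib
matplotlib.use("Agg")  # render straight to a file; no GUI required
import matplotlib.pyplot as plt

def load_measurements(path):
    """Read one column of numeric measurements from a CSV file."""
    with open(path, newline="") as f:
        return [float(row["value"]) for row in csv.DictReader(f)]

def main():
    data = load_measurements("raw_measurements.csv")
    print(f"n={len(data)} mean={statistics.mean(data):.3f} "
          f"stdev={statistics.stdev(data):.3f}")

    fig, ax = plt.subplots()
    ax.hist(data, bins=20)
    ax.set_xlabel("measured value")
    ax.set_ylabel("count")
    fig.savefig("figure1.png", dpi=300)  # the exact figure submitted

if __name__ == "__main__":
    main()
```

Anyone holding the raw data file can rerun the script and regenerate the identical figure, which is precisely the reproducibility test the article wants to preserve.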
A final caveat: Marwick notes that computers are not the only problem with reproducibility today. He also cites “poor experimental design, inappropriate statistical methods, a highly competitive research environment and the high value placed on novelty and publication in high-profile journals” as contributing factors. Now we know at least one issue is being addressed.
Cynthia Murrell, December 9, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Hewlett Packard: The Composable Future
December 5, 2015
I know that many folks are really excited about the new Hewlett Packard Enterprise outfit. I think the winners may be the lawyers and investment people, but that’s just my opinion. The “new” HPE entity is moving forward with innovations focused on composable infrastructure. I assume that within the composable infrastructure, Autonomy’s technology chugs along.
I read the enthusiastic “Hewlett Packard Enterprise’s First Big Moves As an Independent Company: Composable Infrastructure, Partnerships with Microsoft, and Cloud Brokering Top the List.”
I learned:
the company is focusing on helping customers maximize their internal data center operations.
But the headline emphasized cloud brokering. What’s the internal thing?
I noted this passage:
HPE says it’s the next-generation advancement beyond hyperconverged infrastructure, which has compute, network and storage components packaged together in a single appliance, but those resources are not as flexibly composable as HPE says Synergy is.
Okay. HPE uses a single application programming interface. HPE’s software magic assembles “the necessary infrastructure for the applications.”
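What might composing infrastructure through one API look like? Here is a purely hypothetical sketch; the endpoint, template, and field names are invented for illustration and are not HPE’s documented interface:

```python
# Hypothetical sketch of a composable-infrastructure request.
# The endpoint, template name, and JSON fields are invented for
# illustration; they are NOT HPE Synergy's documented API.
import json
import urllib.request

compose_request = {
    "name": "analytics-node-01",
    "template": "general-purpose",
    "compute": {"cores": 16, "memoryGiB": 128},
    "storage": {"capacityGiB": 2048, "tier": "flash"},
    "network": {"vlan": 42, "bandwidthGbps": 10},
}

req = urllib.request.Request(
    "https://composer.example.com/rest/resources",  # hypothetical endpoint
    data=json.dumps(compose_request).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# One call describes what the application needs; the software layer is
# supposed to assemble matching infrastructure from shared pools.
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode("utf-8"))
```

The pitch, in other words, is that the request describes the application’s needs and the composer does the assembling.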
The brokering thing involves the cloud. I thought Amazon and a couple of other outfits were dominating the cloud. HPE is a partner with Microsoft. Isn’t Microsoft the outfit unable to update Windows 10 in a coherent manner? Isn’t Microsoft the outfit which bought Nokia?
From my vantage point in Harrod’s Creek, I am not exactly sure how HPE will generate sustainable revenue with these announcements. Fortunately it is not my job to make these “innovations” generate high margin revenue.
The inclusion of the word “hope” in the article is about as far as the news story will go to cast doubt on the uptake for these services. I did like the last line too:
Stay tuned to see if it works.
My thought is that there is no “it.” HPE is trying a bunch of stuff in the hopes that one of the products starts cranking out the cash. I will be “tuned” and I hope my spelling checker does not change “composable” to “compostable.”
Stephen E Arnold, December 5, 2015
EHR Promises Yet to Be Realized
December 1, 2015
Electronic health records (EHRs) were supposed to bring us cost reductions and, just as importantly, seamless record-sharing between health-care providers. “Epic Fail” at Mother Jones explains why that has yet to happen. The short answer: despite the government’s intentions, federation is simply not part of the Epic plan; vendor lock-in is too profitable to relinquish so easily.
Reporter Patrick Caldwell spends a lot of pixels discussing Epic Systems, the leading EHR vendor whose CEO sat on the Obama administration’s 2009 Health IT Policy Committee, where many EHR-related decisions were made. Epic, along with other EHR vendors, has received billions from the federal government to expand EHR systems. Caldwell writes:
“But instead of ushering in a new age of secure and easily accessible medical files, Epic has helped create a fragmented system that leaves doctors unable to trade information across practices or hospitals. That hurts patients who can’t be assured that their records—drug allergies, test results, X-rays—will be available to the doctors who need to see them. This is especially important for patients with lengthy and complicated health histories. But it also means we’re all missing out on the kind of system-wide savings that President Barack Obama predicted nearly seven years ago, when the federal government poured billions of dollars into digitizing the country’s medical records. ‘Within five years, all of America’s medical records are computerized,’ he announced in January 2009, when visiting Virginia’s George Mason University to unveil his stimulus plan. ‘This will cut waste, eliminate red tape, and reduce the need to repeat expensive medical tests.’ Unfortunately, in some ways, our medical records aren’t in any better shape today than they were before.”
Caldwell taps into his own medical saga to effectively illustrate how important interoperability is to patients with complicated medical histories. Epic seems to be experiencing push-back, both from the government and from the EHR industry. Though the company was widely expected to score the massive contract to modernize the Department of Defense’s health records, that contract went instead to competitor Cerner. Meanwhile, some of Epic’s competitors have formed the nonprofit CommonWell Health Alliance Partnership, tasked with setting standards for records exchange. Epic has not joined that partnership, choosing instead to facilitate interoperability between hospitals that use its own software. For a hefty fee, of course.
Perhaps this will all be straightened out down the line, and we will finally receive both our savings and our medical peace of mind. In the meantime, many patients and providers struggle with changes that appear to have only complicated the issue.
Cynthia Murrell, December 1, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Linguamatics Clears Copyright
December 1, 2015
What is a researcher’s dream? To easily locate and access viable, full-text resources without having to deal with any copyright issues. One might think that all information is accessible via the Internet and a Google search, but if you think that is how professional research happens, you are sadly mistaken. Most professional articles and journals are locked behind corporate and academic walls topped with barbed wire made from copyright laws.
PR Newswire, in “Linguamatics Expands Cloud Text Mining Platform to Include Full-Text Articles,” describes a way for life science researchers to legally clear copyright hurdles. Linguamatics is a natural language processing text-mining platform, and it will now incorporate the Copyright Clearance Center’s new text mining solution, RightFind XML. This will give researchers access to over 4,000 peer-reviewed journals from more than twenty-five scientific, technical, and medical publishers.
“The solution enables researchers to make discoveries and connections that can only be found in full-text articles. All of the content is stored securely by CCC and is pre-authorized by publishers for commercial text mining. Users access the content using Linguamatics’ unique federated text mining architecture which allows researchers to find the key information to support business-critical decisions. The integrated solution is available now, and enables users to save time, reduce costs and help mitigate an organization’s copyright infringement risk.”
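As a toy illustration of why full text matters (our sketch, not Linguamatics’ actual NLP, which is far more sophisticated), even a crude co-occurrence scan can only surface a drug-gene connection when the article body, not just the abstract, is available to mine:

```python
# Toy sketch of full-text co-occurrence mining; NOT Linguamatics' NLP.
# The article text, drug, and gene names are invented for illustration.
import re

article_body = (
    "Methods: patients receiving imatinib were genotyped. "
    "Results: imatinib response correlated with BCR-ABL1 mutation status."
)

def co_occurring_sentences(text, term_a, term_b):
    """Return sentences mentioning both terms (case-insensitive)."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [
        s for s in sentences
        if re.search(term_a, s, re.I) and re.search(term_b, s, re.I)
    ]

# Only the full text contains the Results sentence that links the two.
print(co_occurring_sentences(article_body, r"imatinib", r"BCR-ABL1"))
```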
I can only hope that other academic databases and publishers will adopt a similar and (hopefully) more affordable way to access full-text, viable resources. One of the biggest drawbacks to Internet research is having to rely on questionable source information because it is free and readily available. Easier access to accurate information from viable resources will not only improve research, but may also start a trend toward broader access.
Whitney Grace, December 1, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Business Intelligence Services Partnership Between Swedish Tech Companies Zinnovate and Yellowfin
November 25, 2015
The article titled “Business Intelligence Vendor Yellowfin Signs Global Reseller Agreement with Zinnovate” on Sys-Con Media provides an overview of the recent partnership between the two companies. Zinnovate will be able to offer Yellowfin’s Business Intelligence solutions and services, and thus better fulfill small and mid-size businesses’ need for enterprise-quality BI. The article quotes Zinnovate CEO Hakan Nilsson on the exciting capabilities of Yellowfin’s technology,
“Flexible deployment options were also important… As a completely Web-based application, Yellowfin has been designed with SaaS hosting in mind from the beginning, making it simple to deploy on-premise or as a cloud-based solution. Yellowfin’s licensing model is simple. Clients can automatically access Yellowfin’s full range of features, including its intuitive data visualization options, excellent Mobile BI support and collaborative capabilities. Yellowfin provides a robust enterprise BI platform at a very competitive price point.”
As for the benefits to Yellowfin, managing director Peter Baxter explained that Zinnovate was positioned to help grow the brand’s presence in Sweden and in the global transport and logistics market. In the last few years, Zinnovate has expanded its service portfolio to include customers in banking and finance. Both companies share a dedication to customer-friendly, intuitive solutions.
Chelsea Kerwin, November 25, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Data Fusion: Not Yet, Not Cheap, Not Easy
November 9, 2015
I clipped an item to read on the fabulous flight from America to shouting distance of Antarctica. Yep, it’s getting smaller.
The write up was “So Far, Tepid Responses to Growing Cloud Integration Hairball.” I think the words “hair” and “ball” convinced me to add this gem to my in-flight reading list.
The article is based on a survey (nope, I don’t have the utmost confidence in vendor surveys). Apparently the 300 IT “leaders” experience
pain around application and data integration between on premises and cloud based systems.
I had to take a couple of deep breaths to calm down. I thought the marketing voodoo from vendors embracing utility services (Lexmark/Kapow), metasearch (Vivisimo, et al), unified services (Attivio, Coveo, et al), and licensees of conversion routines from outfits ranging from Oracle to “search consulting” in the “search technology” business had this problem solved.
If the vendors can’t do it, why not just dump everything in a data lake and let an open source software system figure everything out? Failing that, why not convert the data into XML and use the magic of well-formed XML objects to deal with these issues?
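The trouble, of course, is that “well formed” is not the same as “means the same thing.” A minimal sketch (invented records, hypothetical tag names) shows why two perfectly valid XML feeds still need hand-written mapping code:

```python
# Two well-formed XML records describing the same customer, one from a
# hypothetical on-premises system and one from a hypothetical cloud
# system. Both parse cleanly; they agree on nothing else.
import xml.etree.ElementTree as ET

on_prem = ET.fromstring(
    "<customer><cust_id>C-1001</cust_id><full_name>Ada Lovelace</full_name>"
    "<balance currency='USD'>150.00</balance></customer>"
)
cloud = ET.fromstring(
    "<Client><Id>1001</Id><Name>Lovelace, Ada</Name>"
    "<BalanceCents>15000</BalanceCents></Client>"
)

def normalize_on_prem(rec):
    return {
        "id": rec.findtext("cust_id").removeprefix("C-"),
        "name": rec.findtext("full_name"),
        "balance_cents": int(float(rec.findtext("balance")) * 100),
    }

def normalize_cloud(rec):
    last, first = rec.findtext("Name").split(", ")
    return {
        "id": rec.findtext("Id"),
        "name": f"{first} {last}",
        "balance_cents": int(rec.findtext("BalanceCents")),
    }

# The two mapping functions above are exactly the "custom coding" the
# survey found everywhere; well-formed XML eliminated none of it.
assert normalize_on_prem(on_prem) == normalize_cloud(cloud)
```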
It seems that the solutions do not work with the slam-dunk regularity of a 23-year-old Michael Jordan.
Surprise.
The write up explains:
The old methods may not cut it when it comes to pulling things together. Two in three respondents, 59%, indicate they are not satisfied with their ability to synch data between cloud and on-premise systems — a clear barrier for businesses that seek to move beyond integration fundamentals like enabling reporting and basic analytics. Still, and quite surprisingly, there isn’t a great deal of support for applying more resources to cloud application integration. Premise-to-cloud integration, cloud-to-cloud integration, and cloud data replication are top priorities for only 16%, 10% and 10% of enterprises, respectively. Instead, IT shops make do with custom coding, which remains the leading approach to integration, the survey finds.
My hunch is that the survey finds that hoo-hah is not the same as the grunt work required to take data from A, integrate it with data from B, and then do something productive with the data unless humans get involved.
Shocker.
I noted this point:
As the survey’s authors observe, “companies consistently underestimate the cost associated with custom code, as often there are hidden costs not readily visible to IT and business leaders.”
Reality just won’t go away when it comes to integrating disparate digital content. Neither will the costs.
Stephen E Arnold, November 9, 2015
Neglect Exposes Private Medical Files
October 28, 2015
Data such as financial information and medical files are supposed to be protected behind secure firewalls and barriers that ensure people’s information does not fall into the wrong hands. While digital security is the best it has ever been, sometimes a hacker does not need to rely on his or her skills to get sensitive information. Sometimes all they need to do is wait for an idiotic mistake, such as the one Gizmodo describes in “Error Exposes 1.5 Million People’s Private Records on Amazon Web Services.”
Tech junkie Chris Vickery heard a rumor that “strange data dumps” could appear on Amazon Web Services, so he decided to go looking for some. He hunted through AWS and found one such dump, and it was a huge haul, or it would have been had Vickery been a hacker. Vickery discovered medical information belonging to 1.5 million people from these organizations: Kansas’ State Self Insurance Fund, CSAC Excess Insurance Authority, and the Salt Lake County Database.
“The data came from Systema Software, a small company that manages insurance claims. It still isn’t clear how the data ended up on the site, but the company did confirm to Vickery that it happened. Shortly after Vickery made contact with the affected organizations, the database disappeared from the Amazon subdomain.”
The 1.5 million people should be thanking Vickery, because he alerted these organizations and the data was immediately removed from the Amazon cloud. It turns out that Vickery was the only one to access the data, but his find raises the question: what would have happened if a malicious hacker had gotten hold of it? You can count on it: the medical information would have been sold to the highest bidder.
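For the record, this kind of exposure can be checked for programmatically. Here is a minimal sketch using boto3 (the bucket name is hypothetical) that flags world-readable grants on an S3 bucket’s access control list:

```python
# Minimal sketch: flag world-readable grants on an S3 bucket's ACL.
# Requires boto3 and AWS credentials; the bucket name is hypothetical.
import boto3

ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def public_grants(bucket_name):
    """Return ACL grants that expose the bucket to everyone."""
    s3 = boto3.client("s3")
    acl = s3.get_bucket_acl(Bucket=bucket_name)
    return [
        g for g in acl["Grants"]
        if g.get("Grantee", {}).get("URI") == ALL_USERS
    ]

for grant in public_grants("claims-data-archive"):
    print("PUBLIC:", grant["Permission"])  # READ means anyone can list
```

A scheduled run of a check like this would have caught the dump long before a stranger did.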
Vickery’s discovery is not an isolated case. Other organizations are bound to be negligent with data, and your personal information could end up posted in an unsecured location. How can you get organizations to better protect your information? Good question.
Whitney Grace, October 28, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Google Declares It Has the Best Cloud Service…Again
October 15, 2015
Google is never afraid to brag about its services and how much better they are than its competitors’. Google brandishes its supposed superiority in cloud computing on its Google Cloud Platform Blog with the post, “Google Cloud Platform Delivers the Industry’s Best Technical and Differentiated Features.” The first line in the post even comes out as a blanket statement of how Google feels about its cloud platform: “I’ll come right out and say it: Google Cloud Platform is a better cloud.”
One must always take assertions on a company’s Web site as part of its advertising campaign to peddle the service. Google products and services, however, usually have quality written into their programming, and Google defends the above claim by saying it has core advantages and technical differentiators in the compute, storage, network, and distributed software tiers. Google says this is for two reasons:
“1. Cloud Platform offers features that are very valuable for customers, and very difficult for competitors to emulate.
2. The underlying technologies, created and honed by Google over the last 15 years, enable us to offer our services at a much lower price point.”
Next the post explains the different features that make the cloud platform superior: live migration, scaling load balancers, forty-five-second boot times, three-second archive restore, and a 680,000 IOPS sustained Local SSD read rate. Google can offer these features because it claims to have the best technology and software engineers. It does not stop there: Google also offers its cloud platform at forty percent less than other cloud platforms, and it delves into details about why it can offer a better and cheaper service. While the argument is compelling, it is still Google cheerleading itself.
Google is one of the best technology companies, but it is better to test and review other cloud platforms rather than blindly following a blog post.
Whitney Grace, October 15, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph