January 13, 2016
The article titled We are All SkyNet in the Googlesphere on Disinformation refers to the Terminator’s controlling A.I., SkyNet, which triggers the beginning of a machine age in the movie, and to the conspiracy theory that Google is taking on that role in reality. It is easy to understand the fear of Google’s reach; the company does sometimes seem like a gigantic arm with a thousand hands groping about in cyberspace, collecting little pieces of information that on their own seem largely harmless. The article discusses cloud computing and its relationship to the conspiracy,
“When you need your bits of info, your computer gathers them from the cloud again. The cloud is SkyNet’s greatest line of defense, as you can’t kill what is spread out over an entire network. Since the magnificent expose of the NSA and their ability to (at least) access every keystroke, file or phone call and Google’s (at minimum) complicity in managing the data, that is to say, nearly all data being collected, it’s hard to imagine the limitations to what any such Google AI program could learn.”
The article ends philosophically with the suggestion that the nature of a modern-day SkyNet will depend on the data it gathers from us, that we will create the monster in our likeness. This may not be where we expected the article to go, but it does make sense. Google as a company will not determine that nature, at least if literature has taught us anything.
Chelsea Kerwin, January 13, 2016
January 12, 2016
Cyber threats have been a concerning topic since computers became functional, daily tools for people. The idea of a hacker conjures images of IT geeks sitting in a dark basement with their laptops, cracking top secret codes in a matter of keystrokes. Hacking has grown from a limited crime into a huge international problem comparable to the mafia. While some hackers are interested in targeting individuals, the bolder thieves target big businesses. News of Bahrain shares that “Biz Not Prepared For Cyber Threat,” headline speech meaning the business world could not withstand a cyber attack.
KPMG International released the 2015 KPMG CEO Outlook Study, which found that businesses are aware of the risks associated with cyber attacks, but only forty-nine percent have prepared for one. The study surveyed 1,200 CEOs, and one in five is concerned about cyber risks. The concern has led many CEOs to take action with security measures and safety plans.
“ ‘The most innovative companies have recognized that cyber security is a customer experience, not just a risk that needs to be managed or a line item in the budget. In Bahrain, some firms are finding ways to turn cyber preparedness into a competitive advantage with customers, and they are using this as a differentiator.’ ”
Many companies that are attacked thought they were prepared for any threats, but they underestimated hackers’ intelligence, sophistication, and persistence.
Some of the companies with good cyber security are advertising their technical achievements in preventing attacks. Strong security is a desirable feature, especially as more information is housed in cloud storage and businesses need to be aware of potential threats.
January 11, 2016
Everyone is running to the cloud to reserve their own personal data spot. Companies have migrated their services to the cloud to serve a growing mobile clientele. If you are not on the cloud, it is like you’re still using an old flip phone. The cloud is a viable and useful service that allows people to access their data anytime and anywhere. Business Insider reveals that cloud usage is heavily concentrated in the US: “Latest Data From The Valley’s Oldest VC Firm Shows One Big Flaw In The Hype Around The Cloud.”
Bessemer Venture Partners is the longest-running venture capital firm in Silicon Valley. To celebrate its 100th cloud investment, it surveyed where the company’s cloud investments are located. Seventy-six of the startups are in the US, eleven are in Israel, and four are in Canada.
“The fact that less than one-quarter of BVP’s cloud investments are in non-US startups shows the adoption of cloud technologies is lagging in the rest of the world. It’s also a reminder that, even after all these years of cloud hype, many countries are still concerned about some aspects of cloud technology.”
Cloud adoption around the world is slow in part because the US invents much of the new technology and the rest of the world must catch up. Security is another big concern, and companies are hesitant to store sensitive information on a system with known issues.
The cloud has only been on the market for ten years and has only gained attention in the past five. Cell phones, laptops, and open source software took time to catch on as well.
January 7, 2016
Curious about which cloud storage system is “faster”? A partial answer appears in “AWS S3 vs Google Cloud vs Azure: Cloud Storage Performance.” The write up presents performance data for downloads, uploads, and splitting data across regions. The three services exhibit different performance characteristics. Network throughput remains an issue. If you find one system performing poorly, perhaps the problem lies with a link in the chain balking or, in some cases, being down.
The net net is that Google’s service appears to take some time to queue up the operation. Once underway, the Google is marginally quicker for some operations and pretty snappy for others. Azure, which does not surprise me too much, seems to be bringing up the rear. The retail giant Amazon, which offloads some of its infrastructure costs to its cloud customers, is in the middle of the pack.
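For readers who want to check such numbers themselves, the basic method behind write ups like this is simple: time a transfer and convert elapsed seconds into throughput. A minimal sketch in Python follows; the transfer callable is a placeholder for whatever upload or download call a given provider’s SDK supplies, so nothing here is specific to AWS, Google, or Azure.

```python
import time


def measure_throughput(transfer_fn, payload_size_bytes, runs=3):
    """Time a transfer callable and return average throughput in MB/s.

    transfer_fn is any zero-argument callable that moves
    payload_size_bytes of data, e.g. an upload or download call
    from a cloud provider's SDK. Averaging over several runs
    smooths out network jitter, one reason single-run comparisons
    of S3, Google Cloud, and Azure can mislead.
    """
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        transfer_fn()
        timings.append(time.perf_counter() - start)
    avg_seconds = sum(timings) / len(timings)
    return (payload_size_bytes / (1024 * 1024)) / avg_seconds
```

Running the same payload sizes against each provider, from several regions, reproduces the shape of the comparison in the article; a balking link in the chain shows up as one provider’s average collapsing while the others hold steady.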
For those wanting to move search to the low-cost, ever reliable cloud, those server farms under one’s own control may eliminate some restless nights. Interesting stuff. Now, what about that pricing?
Stephen E Arnold, January 7, 2016
December 29, 2015
Google search is supposed to be the most reliable and accurate search, so by proxy Google Drive should be easy to search as well, right? Wrong! Google Drive is like a cartoon black hole. It has an undisclosed amount of space, and things easily get lost in it. Fear not, Google Drive users, for Tech Republic has posted a nifty guide on how to use Google Drive’s search and locate your lost spreadsheets and documents: “Pro Tip: How To Use Google Drive’s New And Improved Search.”
Google Drive can now be searched with more options: owner, keywords, item name, shared with, date modified, file type, and location. The article explains that the quickest way to search Google Drive is with the standard wildcard: add an asterisk to any of the listed search types and, voilà, the search results list all viable options. The second method is described as the most powerful option, because it is the brand-new advanced search feature. By clicking the drop-down arrow in the search box, you can access filters to limit or expand your search results.
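Conceptually, those filters combine with logical AND, each one narrowing the result set the way a database query would. A toy sketch in Python makes the idea concrete; the file records and field names are invented for illustration and have nothing to do with the actual Drive API.

```python
def search_drive_items(items, **filters):
    """Return items matching every supplied filter.

    Each item is a dict of fields (owner, file_type, name, and so
    on). Filters are equality checks combined with logical AND,
    mimicking how Drive's advanced search panel narrows results.
    A value of "*" acts like the standard wildcard and matches
    anything for that field.
    """
    def matches(item):
        return all(
            value == "*" or item.get(field) == value
            for field, value in filters.items()
        )
    return [item for item in items if matches(item)]
```

For example, filtering on owner and file type together returns only the files satisfying both conditions, which is why stacking a couple of filters shrinks a black hole of documents down to a short list.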
“For anyone who depends upon Google Drive to store and manage their data, the new search tool will be a major plus. No longer will you have to dig through a vast array of search results to find what you’re looking for. Narrow the field down with the new Drive search box.”
The new search features are pretty neat, albeit standard for most databases. Why did it take Google so long to deploy them in the first place?
December 22, 2015
The article on ITProPortal titled What Did We Learn in Records Management in 2015 and What Lies Ahead for 2016? delves into the unlearnt lessons of data security. The article begins with a look back over major data breaches, including Ashley Madison, JP Morgan et al, and Vtech, and gathers from them the trend of personal information being targeted by hackers. The article reports,
“A Crown Records Management Survey earlier in 2015 revealed two-thirds of people interviewed – all of them IT decision makers at UK companies with more than 200 employees – admitted losing important data… human error is continuing to put that information at risk as businesses fail to protect it properly…but there is legislation on the horizon that could prompt change – and a greater public awareness of data protection issues could also drive the agenda.”
The article also makes a few predictions about upcoming developments in our approach to data protection. Among them are the passage of the European Union General Data Protection Regulation (EU GDPR) and its resulting effect on businesses. In terms of apps, the article suggests that more people might start asking questions about the information required to use certain apps (especially when the data requested is completely irrelevant to the app’s functions). The article is generally optimistic, but these developments will only occur if people, businesses, and governments take data breaches and privacy more seriously.
Chelsea Kerwin, December 22, 2015
December 21, 2015
The OS News post titled Dark Clouds Over the Internet presents an argument that boils down to a choice between an international accord and data sharing agreement, or the risk of the Internet being broken up into national networks. Some very worked up commenters engaged in an interesting discussion that spanned government overreach, democracy, data security, privacy, and, for some reason, climate change. One person summarized their opinion thusly:
“Best policy: don’t store data with someone else. There is no cloud. It’s just someone else’s computer.”
In response, a user named Alfman replied that companies are to blame for the current lack of data security, or more precisely, people are generally to blame for allowing this state of affairs to exist,
“The privacy issues we’re now seeing are a direct consequence of corporate business models pushing our data into their central silos. None of this is surprising except perhaps how willing users have been to forgo their own privacy. Collectively, it seems that we are very willing to give up our rights for very little in exchange… makes it difficult to achieve critical mass around technologies promoting data independence.”
It is hard to argue with the apathy factor, with data breaches occurring regularly and so little being done by individuals to protect themselves. Good thing these commenters have figured it all out. Next up, solving climate change.
Chelsea Kerwin, December 21, 2015
December 9, 2015
These days, it is hard to imagine performing scientific research without the help of computers. Phys.org details the problem that poses in its thorough article, “How Computers Broke Science—And What We Can Do to Fix It.” Many of us learned in school that reliable scientific conclusions rest on a foundation of reproducibility. That is, if an experiment’s results can be reproduced by other scientists following the same steps, the results can be trusted. However, now many of those steps are hidden within researchers’ hard drives, making the test of reproducibility difficult or impossible to apply. Writer Ben Marwick points out:
“Stanford statisticians Jonathan Buckheit and David Donoho [PDF] described this issue as early as 1995, when the personal computer was still a fairly new idea.
‘An article about computational science in a scientific publication is not the scholarship itself, it is merely advertising of the scholarship. The actual scholarship is the complete software development environment and the complete set of instructions which generated the figures.’
“They make a radical claim. It means all those private files on our personal computers, and the private analysis tasks we do as we work toward preparing for publication should be made public along with the journal article.
“This would be a huge change in the way scientists work. We’d need to prepare from the start for everything we do on the computer to eventually be made available for others to see. For many researchers, that’s an overwhelming thought. Victoria Stodden has found the biggest objection to sharing files is the time it takes to prepare them by writing documentation and cleaning them up. The second biggest concern is the risk of not receiving credit for the files if someone else uses them.”
So, do we give up on the test of reproducibility, or do we find a way to address those concerns? Well, this is the scientific community we’re talking about. There are already many researchers in several fields devising solutions. Poetically, those solutions tend to be software-based. For example, some are turning to executable scripts instead of the harder-to-record series of mouse clicks. There are also suggestions for standardized file formats and organizational structures. See the article for more details on these efforts.
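The shift from mouse clicks to executable scripts can be as modest as putting the whole path from raw data to reported numbers in one file that anyone can rerun. A minimal sketch follows; the file name and column name are illustrative, not taken from any particular study.

```python
"""A minimal reproducible-analysis sketch: every step that
produces a reported number lives in one script, so another
researcher can rerun it end to end instead of guessing at a
series of mouse clicks."""

import csv
import statistics


def load_measurements(path):
    # Read one numeric column named 'value' from a CSV file.
    with open(path, newline="") as handle:
        return [float(row["value"]) for row in csv.DictReader(handle)]


def summarize(values):
    # The exact statistics reported in the write-up, computed in
    # code rather than by hand or through a GUI.
    return {
        "n": len(values),
        "mean": statistics.mean(values),
        "stdev": statistics.stdev(values),
    }


if __name__ == "__main__":
    print(summarize(load_measurements("measurements.csv")))
```

Publishing this script alongside the data is exactly the Buckheit and Donoho prescription quoted above: the article advertises the scholarship, while the script and its inputs are the scholarship.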
A final caveat: Marwick notes that computers are not the only problem with reproducibility today. He also cites “poor experimental design, inappropriate statistical methods, a highly competitive research environment and the high value placed on novelty and publication in high-profile journals” as contributing factors. Now we know at least one issue is being addressed.
Cynthia Murrell, December 9, 2015
December 5, 2015
I know that many folks are really excited about the new Hewlett Packard Enterprise outfit. I think the winners may be the lawyers and investment people, but that’s just my opinion. The “new” HPE entity is moving forward with innovations focused on composable infrastructure. I assume that within the composable infrastructure, Autonomy’s technology chugs along.
“the company is focusing on helping customers maximize their internal data center operations.”
But the headline emphasized cloud brokering. What’s the internal thing?
I noted this passage:
“HPE says it’s the next-generation advancement beyond hyperconverged infrastructure, which has compute, network and storage components packaged together in a single appliance, but those resources are not as flexibly composable as HPE says Synergy is.”
Okay. HPE uses a single application programming interface. HPE’s software magic assembles “the necessary infrastructure for the applications.”
The brokering thing involves the cloud. I thought Amazon and a couple of other outfits were dominating the cloud. HPE is a partner with Microsoft. Isn’t Microsoft the outfit unable to update Windows 10 in a coherent manner? Isn’t Microsoft the outfit which bought Nokia?
From my vantage point in Harrod’s Creek, I am not exactly sure how HPE will generate sustainable revenue with these announcements. Fortunately it is not my job to make these “innovations” generate high margin revenue.
The inclusion of the word “hope” in the article is about as far as the news story will go to cast doubt on the uptake for these services. I did like the last line too:
Stay tuned to see if it works.
My thought is that there is no “it.” HPE is trying a bunch of stuff in the hopes that one of the products starts cranking out the cash. I will be “tuned” and I hope my spelling checker does not change “composable” to “compostable.”
Stephen E Arnold, December 5, 2015
December 1, 2015
Electronic health records (EHRs) were to bring us reductions in cost and, just as importantly, seamless record-sharing between health-care providers. “Epic Fail” at Mother Jones explains why that has yet to happen. The short answer: despite the government’s intentions, federation is simply not part of the Epic plan; vendor lock-in is too profitable to relinquish so easily.
Reporter Patrick Caldwell spends a lot of pixels discussing Epic Systems, the leading EHR vendor whose CEO sat on the Obama administration’s 2009 Health IT Policy Committee, where many EHR-related decisions were made. Epic, along with other EHR vendors, has received billions from the federal government to expand EHR systems. Caldwell writes:
“But instead of ushering in a new age of secure and easily accessible medical files, Epic has helped create a fragmented system that leaves doctors unable to trade information across practices or hospitals. That hurts patients who can’t be assured that their records—drug allergies, test results, X-rays—will be available to the doctors who need to see them. This is especially important for patients with lengthy and complicated health histories. But it also means we’re all missing out on the kind of system-wide savings that President Barack Obama predicted nearly seven years ago, when the federal government poured billions of dollars into digitizing the country’s medical records. ‘Within five years, all of America’s medical records are computerized,’ he announced in January 2009, when visiting Virginia’s George Mason University to unveil his stimulus plan. ‘This will cut waste, eliminate red tape, and reduce the need to repeat expensive medical tests.’ Unfortunately, in some ways, our medical records aren’t in any better shape today than they were before.”
Caldwell taps into his own medical saga to effectively illustrate how important interoperability is to patients with complicated medical histories. Epic seems to be experiencing push-back, both from the government and from the EHR industry. Though the company was widely expected to score the massive contract to modernize the Department of Defense’s health records, that contract went instead to competitor Cerner. Meanwhile, some of Epic’s competitors have formed the nonprofit CommonWell Health Alliance Partnership, tasked with setting standards for records exchange. Epic has not joined that partnership, choosing instead to facilitate interoperability between hospitals that use its own software. For a hefty fee, of course.
Perhaps this will all be straightened out down the line, and we will finally receive both our savings and our medical peace of mind. In the meantime, many patients and providers struggle with changes that appear to have only complicated the issue.
Cynthia Murrell, December 1, 2015