Data Science Book: Free for Now

May 24, 2019

We spotted a post by Capri Granville which points to a free data science book. The post also provides a link to other free books. The book is “Foundations of Data Science” by Avrim Blum, John Hopcroft, and Ravi Kannan; Kannan is with Microsoft Research India. As of May 24, 2019, you can download the book without charge at this link: https://www.cs.cornell.edu/jeh/book.pdf. Cornell charges students about $55,188 for an academic year, so DarkCyber believes that “free” may not be an operative word where the Theory Center used to love those big IBM computers. No, they were not painted Azure.

Stephen E Arnold, May 24, 2019

IBM Hyperledger: More Than a Blockchain or Less?

May 17, 2019

Though the IBM-backed open-source project Hyperledger has been prominent on the blockchain scene since 2016, The Next Web declares, “IBM’s Hyperledger Isn’t a Real Blockchain—Here’s Why.” Kadena president Stuart Popejoy tells us:

“A blockchain is a decentralized and distributed database, an immutable ledger of events or transactions where truth is determined by a consensus mechanism — such as participants voting to agree on what gets written — so that no central authority arbitrates what is true. IBM’s definition of blockchain captures the distributed and immutable elements of blockchain but conveniently leaves out decentralized consensus — that’s because IBM Hyperledger Fabric doesn’t require a true consensus mechanism at all.”

We noted this statement as well:

“Instead, it suggests using an ‘ordering service’ called Kafka, but without enforced, democratized, cryptographically-secure voting between participants, you can’t really prove whether an agent tampers with the ledger. In effect, IBM’s ‘blockchain’ is nothing more than a glorified time-stamped list of entries. IBM’s architecture exposes numerous potential vulnerabilities that require a very small amount of malicious coordination. For instance, IBM introduces public-key cryptography ‘inside the network’ with validator signatures, which fundamentally invalidates the proven security model of Bitcoin and other real blockchains, where the network can never intermediate a user’s externally-provided public key signature.”
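Popejoy’s “glorified time-stamped list of entries” jab is easy to make concrete. Here is a minimal Python sketch, not IBM’s code, of a hash-chained, time-stamped log: the chaining gives tamper evidence, but nothing in it enforces decentralized consensus, so whoever controls the list can rewrite history and recompute every hash.

```python
import hashlib
import json
import time

def entry_hash(entry: dict) -> str:
    """Stable hash of an entry's canonical JSON form."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

class TimestampedLedger:
    """A hash-chained, time-stamped list of entries. Tamper-evident,
    but a single operator can rewrite it and recompute every hash;
    nothing here provides decentralized consensus."""

    def __init__(self):
        self.entries = []

    def append(self, payload: dict) -> dict:
        prev = entry_hash(self.entries[-1]) if self.entries else "0" * 64
        entry = {"timestamp": time.time(), "payload": payload, "prev_hash": prev}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        # Checks internal consistency only; a fully rewritten chain
        # also verifies, so "truth" rests on trusting the operator.
        return all(
            self.entries[i]["prev_hash"] == entry_hash(self.entries[i - 1])
            for i in range(1, len(self.entries))
        )

ledger = TimestampedLedger()
ledger.append({"from": "alice", "to": "bob", "amount": 10})
ledger.append({"from": "bob", "to": "carol", "amount": 4})
print(ledger.verify())  # True, with no voting and no consensus anywhere
```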

Then there are the article’s criticisms of IBM’s architecture, security flaws, and smart contracts, as well as its misleading performance numbers. See the article for details on each. Popejoy concludes with the prediction that better blockchains are bound to be developed, alongside a more positive approach to technology across society.

Cynthia Murrell, May 17, 2019

Machine Learning and Data Quality

April 23, 2019

We’re updating our data quality files as part of the run-up to my lecture at the TechnoSecurity & Digital Forensics Conference. A paper by Sanau.co, “Dear AI Startups: Your ML Models Are Dying Quietly,” is worth reading if you are thinking about how to solve some issues with the accuracy of the outputs of some machine learning systems. The slow deterioration of certain Bayesian methods is a subject I have addressed for years. The Sanau write up called to my attention another source of data deterioration, or data rot: seemingly logical changes made to field names and the insidious downstream consequences of those changes. The article provides useful explanations and a concrete example drawn from ecommerce, but it has much broader application. Worth reading.
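For the curious, here is a hypothetical Python sketch of that failure mode. The field names are invented; the mechanism is the point: an upstream rename, a silent default downstream, and no exception anywhere.

```python
# Hypothetical ecommerce feature extractor. Upstream, someone renames
# "purchase_total" to "order_value"; the pipeline keeps running and
# silently substitutes the default, so the model's inputs quietly rot.

def extract_features(record: dict) -> list[float]:
    return [
        float(record.get("purchase_total", 0.0)),  # always 0.0 after the rename
        float(record.get("item_count", 0)),
    ]

old_record = {"purchase_total": 129.99, "item_count": 3}
new_record = {"order_value": 129.99, "item_count": 3}  # field renamed upstream

print(extract_features(old_record))  # [129.99, 3.0]
print(extract_features(new_record))  # [0.0, 3.0]: no exception, just rot

# One defense: fail loudly at ingestion instead of defaulting.
REQUIRED_FIELDS = {"purchase_total", "item_count"}

def validate(record: dict) -> None:
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"schema drift detected, missing fields: {missing}")
```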

Stephen E Arnold, April 23, 2019

The Surf Is Up for the Word Dark

April 4, 2019

Just a short note. I read this puffy wuffy write up about a new market research report. Its title?

The Research Report “Dark Analytics Market: Global Industry Analysis 2013–2017 and Opportunity Assessment 2018–2028” provides information on pricing, market analysis, shares, forecast, and company profiles for key industry participants

What caught my attention is not the authors’ attempt to generate some dough via open source data collection and a touch of Excel fever.

Here’s what caught my attention:

Dark analytics is the analysis of dark data present in enterprises. Dark data generally refers to raw data or information buried in the text, tables, and figures that organizations acquire in various business operations and store, but that goes unused for deriving insights and for decision making in business. Organizations nowadays are realizing that there is a huge risk, of losing competitive edge in business and of regulatory issues, that comes with not analyzing and processing this data. Hence, dark analytics is a practice followed in enterprises that advances the analysis of computer network operations and pattern recognition.

Yes, buried data treasure. Now, what is the cost of locating, accessing, validating, and normalizing these time-encrusted nuggets?

Answer: A lot. A whole lot. That’s part of the reason old data are not particularly popular in some organizations. The idea of using a consulting firm or software from SAP is not particularly thrilling to my DarkCyber team. (Our use of “dark” is different too.)

Stephen E Arnold, April 4, 2019

Content Management: Now a Playground for Smart Software?

March 28, 2019

CMS, or content management systems, are a hoot. Sometimes they work; sometimes they don’t. How does one keep these expensive, cranky databases chugging along in the zip-zip world of really inexpensive content utilities?

Smart software and predictive analytics?

Managing a website is not what it used to be, and one of the biggest changes to content management systems is the use of predictive analytics. The Smart Data Collective discusses “The Fascinating Role of Predictive Analytics in CMS Today.” Reporter Ryan Kh writes:

“Predictive analytics is changing digital marketing and website management. In previous posts, we have discussed the benefits of using predictive analytics to identify the types of customers that are most likely to convert and increase the value of your lead generation strategy. However, there are also a lot of reasons that you can use predictive analytics in other ways. Improving the quality of your website is one of them. One of the main benefits of predictive analytics in 2019 is in improving the performance of content management systems. There are a number of different types of content management systems on the market, including WordPress, Joomla, Drupal, and Shopify. There are actually hundreds of content management systems on the market, but these are some of the most noteworthy. One of the reasons that they are standing out so well against their competitors is that they use big data solutions to get the most value for their customers.”

The author notes two areas in which predictive analytics are helping companies’ bottom lines: fraud detection and, of course, marketing optimization, the latter through capabilities like more effective lead generation and content validation.
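For readers who want to see what “most likely to convert” scoring looks like in practice, here is a minimal sketch using scikit-learn. The features and data are invented for illustration; a real CMS would pull them from its visitor analytics.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per visitor: [pages_viewed, minutes_on_site, returning_visitor]
X = np.array([
    [1, 0.5, 0],
    [3, 2.0, 0],
    [8, 6.5, 1],
    [12, 9.0, 1],
])
y = np.array([0, 0, 1, 1])  # 1 = visitor converted

model = LogisticRegression().fit(X, y)

# Score a new visitor: the second column of predict_proba is the
# estimated probability of conversion, usable for lead prioritization.
new_visitor = np.array([[6, 4.0, 1]])
print(model.predict_proba(new_visitor)[0, 1])
```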

Yep, CMS with AI. The future with spin.

Cynthia Murrell, March 28, 2019

Federating Data: Easy, Hard, or Poorly Understood Until One Tries It at Scale?

March 8, 2019

I read two articles this morning.

One article explained that there’s a new way to deal with data federation. Always optimistic, I took a look at “Data-Driven Decision-Making Made Possible Using a Modern Data Stack.” The revolution is to load data and then aggregate. The old way is to transform, aggregate, and model. Here’s a diagram from DAS42. A larger version is available at this link.

Hard to read. Yep, New Millennial colors. Is this a breakthrough?

I don’t know.
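Breakthrough or not, the ordering difference itself is easy to illustrate. Below is a minimal sketch using SQLite as a stand-in warehouse (table and column names invented): the “modern stack” loads raw rows first and aggregates inside the warehouse, instead of transforming before loading.

```python
import sqlite3

raw_events = [
    ("2019-03-01", "checkout", 19.99),
    ("2019-03-01", "checkout", 5.00),
    ("2019-03-02", "checkout", 42.50),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (day TEXT, kind TEXT, amount REAL)")

# Load first, as-is, with no upfront transformation...
conn.executemany("INSERT INTO events VALUES (?, ?, ?)", raw_events)

# ...then aggregate inside the warehouse, after loading.
for row in conn.execute(
    "SELECT day, SUM(amount) FROM events WHERE kind = 'checkout' GROUP BY day"
):
    print(row)  # ('2019-03-01', 24.99) then ('2019-03-02', 42.5)
```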

When I read “2 Reasons a Federated Database Isn’t Such a Slam-Dunk,” it seemed that the solution outlined by DAS42 and the InfoWorld expert’s view are not in sync.

There are two reasons. Count ‘em.

One: performance.

Two: security.

Yeah, okay.

Some may suggest that there are a handful of other challenges. These range from deciding how to index audio, video, and images, to figuring out what to do with different languages in the content, to determining what data are “good” for the task at hand and what data are less “useful.” Date, time, and geocode metadata are needed, but that introduces a not-so-easy-to-solve indexing problem.
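Here is a small sketch of what that metadata problem looks like in practice. The two sources and their field names are hypothetical; the point is that someone must normalize dates, times, and geocodes before a federated query means anything.

```python
from datetime import datetime, timezone

def normalize_source_a(rec: dict) -> dict:
    # Source A: US-style dates, separate lat/lon fields
    return {
        "timestamp": datetime.strptime(rec["date"], "%m/%d/%Y").replace(tzinfo=timezone.utc),
        "lat": float(rec["latitude"]),
        "lon": float(rec["longitude"]),
    }

def normalize_source_b(rec: dict) -> dict:
    # Source B: ISO dates, a single "lat,lon" string
    lat, lon = rec["geo"].split(",")
    return {
        "timestamp": datetime.fromisoformat(rec["when"]).astimezone(timezone.utc),
        "lat": float(lat),
        "lon": float(lon),
    }

a = normalize_source_a({"date": "03/08/2019", "latitude": "38.25", "longitude": "-85.76"})
b = normalize_source_b({"when": "2019-03-08T14:00:00+00:00", "geo": "38.25,-85.76"})
print(a["timestamp"], b["timestamp"])  # now comparable across sources
```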

So where are we with the “federation thing”?

Exactly the same place we were years ago… start-ups and experts notwithstanding. But then one has to wrangle a lot of data. That’s cost, gentle reader. Big money.

Stephen E Arnold, March 8, 2019

Fragmented Data: Still a Problem?

January 28, 2019

Digital transitions are a major shift for organizations. The shift includes new technology and better ways to serve clients, but it also brings massive amounts of data. All organizations with a successful digital implementation rely on data. Too much data, however, can hinder organizations’ performance. IT Pro Portal explains how something called mass data fragmentation has become a major issue in the article, “What Is Mass Data Fragmentation, and Why Are IT Leaders So Worried About It?”

The biggest question is: what exactly is mass data fragmentation? I learned:

“We believe one of the major culprits is a phenomenon called mass data fragmentation. This is essentially just a technical way of saying ‘data that is siloed, scattered and copied all over the place,’ leading to an incomplete view of the data and an inability to extract real value from it. Most of the data in question is what’s called secondary data: data sets used for backups, archives, object stores, file shares, test and development, and analytics. Secondary data makes up the vast majority of an organization’s data (approximately 80 per cent).”

The article compares secondary data to an iceberg: most of it is hidden beneath the surface. That poor visibility leads to compliance and vulnerability risks; in other words, security issues that put the entire organization at risk. Most organizations, however, view their secondary data as a storage bill, a compliance risk (at least they recognize that much), and a giant headache.

When organizations were surveyed about the amount of secondary data they hold, it was discovered that they had multiple copies of the same data spread across cloud and on-premise locations. IT teams are expected to manage this secondary data across all of those locations, but without the right tools and technology the task is unending, unmanageable, and the root of more problems.
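One way to put a number on the problem is to hash file contents across storage locations and count the copies. A minimal sketch, with hypothetical paths; a real survey would also cover cloud object stores, not just file shares:

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicates(roots: list[str]) -> dict[str, list[Path]]:
    """Group files by content hash across several storage roots and
    keep only the hashes that appear more than once."""
    by_hash = defaultdict(list)
    for root in roots:
        for path in Path(root).rglob("*"):
            if path.is_file():
                digest = hashlib.sha256(path.read_bytes()).hexdigest()
                by_hash[digest].append(path)
    return {h: paths for h, paths in by_hash.items() if len(paths) > 1}

# Hypothetical mount points for backups, a file share, and test data
dupes = find_duplicates(["/mnt/backups", "/mnt/file_share", "/mnt/test_data"])
for digest, paths in dupes.items():
    print(f"{len(paths)} copies: {paths}")
```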

If organizations managed their mass data fragmentation efficiently, they would improve their bottom lines, reduce costs, and reduce security risks. When sensitive data is reachable through more access points and those points are not secured, the risk of hacking and stolen information increases.

Whitney Grace, January 28, 2019

Relatives Got You Down? Check Out BigQuery and Redshift

December 25, 2018

I read “Redshift Vs BigQuery: What Are The Factors To Consider Before Choosing A Data Warehouse.” With Oracle on the ropes and database technology chugging along, why pay attention to old-school solutions?

The article sets out to compare and contrast BigQuery (one of the Google progeny known to have consorted with a certain Mr. Dremel) with Amazon Redshift. Amazon has more database products and services than I can keep track of, but Redshift is one of them, and it is important if an intelware company uses AWS and the Redshift technology.

Which system is more “flexible”? I learned:

In the case of Redshift, if anything goes kaput during a transaction, Amazon Redshift allows users to perform a roll-back to ensure that data gets back to a consistent state. BigQuery works on the principle of append-only data, and its storage engine strictly follows this technique. This becomes a major disadvantage to the user when something goes wrong during the transaction process, forcing them to restart from the beginning or from a specific point. Another key point is that duplicating data in BigQuery is hard to achieve and costly. Both technologies have reservations regarding the insertion of streaming data, with Redshift taking the edge by guaranteeing storage of data with additional care from the user. On the other hand, BigQuery supports de-duplication of streaming data in the most effective way by using a time window.
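The “time window” de-duplication idea from the quote can be sketched in a few lines of Python. This mimics the behavior generically; it is not BigQuery’s implementation, and the insert IDs are invented:

```python
class WindowedDeduper:
    """Drop rows whose insert_id repeats within a time window.
    A generic mimic of the idea, not BigQuery's implementation."""

    def __init__(self, window_seconds: float = 60.0):
        self.window = window_seconds
        self.seen = {}  # insert_id -> time last seen

    def accept(self, insert_id: str, now: float) -> bool:
        # Forget insert_ids older than the window, then check for a repeat.
        self.seen = {k: t for k, t in self.seen.items() if now - t < self.window}
        if insert_id in self.seen:
            return False  # duplicate inside the window: drop it
        self.seen[insert_id] = now
        return True

d = WindowedDeduper(window_seconds=60)
print(d.accept("row-1", now=0.0))    # True: first sighting
print(d.accept("row-1", now=30.0))   # False: duplicate inside the window
print(d.accept("row-1", now=120.0))  # True: the window has passed
```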

The write up points out:

As compared to BigQuery, Redshift is considerably more expensive, costing $0.08 per GB compared to BigQuery’s $0.02 per GB. However, BigQuery’s rate covers only storage, not queries. The platform charges separately for queries, based upon processed data, at $5/TB. As BigQuery lacks indexes and various analytical queries must scan large amounts of data, scanning can be a huge and costly process. In most cases, users opt for Amazon Redshift as it is predictable, simple, and encourages data usage and analytics.
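Some back-of-the-envelope arithmetic using the quoted rates shows how the trade-off plays out. The storage and query volumes below are invented assumptions; real bills depend on compression, caching, and current pricing:

```python
# Quoted rates: Redshift $0.08/GB with queries included; BigQuery
# $0.02/GB for storage plus $5 per TB of data processed by queries.
storage_gb = 1_000          # assumed data set size
queried_tb_per_month = 5    # assumed monthly scan volume

redshift_monthly = storage_gb * 0.08
bigquery_storage = storage_gb * 0.02
bigquery_queries = queried_tb_per_month * 5

print(f"Redshift: ${redshift_monthly:.2f}")  # $80.00
print(f"BigQuery: ${bigquery_storage + bigquery_queries:.2f} "
      f"(${bigquery_storage:.2f} storage + ${bigquery_queries:.2f} queries)")  # $45.00
# Under these assumptions BigQuery is cheaper, but heavy scanning
# erodes its storage-price advantage, which is the article's point.
```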

Which is “better”? Not surprisingly, both are really swell. Helpful. But the Beyond Search goose was curious about:

  • Performance
  • Latency for different types of queries
  • Programming requirements

But swell is fine.

Stephen E Arnold, December 25, 2018

Data Science Gets Political

November 20, 2018

With the near-ubiquitous use of big data science in every industry short of rock hunting, it was inevitable that there would be blowback. Recently, several tech companies began to feel political heat due to their involvement with immigration agencies. We learned more from a recent Mercury News story, “Bay Area Cities May Boycott Tech Giants Contracting With ICE.”

According to the story:

“The policy comes as the local immigration debate shifts toward several prominent tech companies — including Palo Alto’s Palantir Technologies, Vigilant Solutions in Livermore and Amazon — which have been criticized for contracting with federal immigration agencies. Last week, advocates descended on Salesforce’s annual conference in San Francisco with a 14-foot-tall cage symbolizing ICE detention to protest the company’s contract with Customs and Border Protection.”

If this sounds a little farfetched or even unlikely, pay close attention to similar actions in Europe. There, pushback against the intersection of politics and big data began to impact finances. And when pocketbooks begin to suffer, you can guarantee companies take notice. We don’t yet know if the same will happen in America, but we have a hunch this issue won’t vanish quietly.

Patrick Roland, November 20, 2018

Oracle: Grousing about Amazon and Wrestling with Revenue Alligators

November 14, 2018

One of my erstwhile fans sent me a link to a video allegedly revealing Larry Ellison’s deep disappointment with Amazon. Yep, Amazon, an online store with a bundle of database systems. You can view the video here.

News is news. But it seems that some time has passed since Oracle rolled out major technology announcements. What has happened to Endeca, by the way? Seeking Alpha’s “The Reason(s) Why Oracle’s Growth Story Is Crumbling” is semi-news, and the write up raises the question, “What is happening with Oracle?”

Oracle’s quarterly earnings are down, and the company’s growth is shrinking faster than the polar ice caps. Oracle might have made a mistake combining its cloud business with its on-premise business. The move coincided with a drop in Oracle’s stock price:

“Several SA contributors have provided their take on those earnings, though, in my view, this piece by Shock Exchange puts it quite succinctly: Oracle’s cloud growth may have peaked. Indeed, Oracle’s Fiscal Q4 2018 cloud revenue of $1.57B was $200M below the Wall Street consensus, while 31% growth paled in comparison to SAP’s (SAP) 40% and Microsoft’s (MSFT) 53% for the same segment. For perspective, Oracle’s cloud revenue growth was 66% just a year ago.”

Despite the poor returns this year, Oracle stock is only a little off from its highest point, so the company is surfing along. Perhaps Amazon is a rallying point for the Oracle faithful?

Whitney Grace, November 14, 2018
