Taken in by Data Mining and Transfinancial Economics

April 2, 2014

Have you ever heard of transfinancial economics? It is a concept originated by Robert Searle and he writes about the topic and other related concepts on his blog The Economic Realms. Searle explains that:

“[Transfinancial economics] believes that apart from earned money, new unearned money could be electronically created without serious inflation notably for key climate change/ environmentally sustainable projects, and for high ethical/ social “enterprises.” “

It is a theory that could be explored, but investigating Searle’s blog posts and his user profile brings to light that Searle is either an extremely long-winded person or a dummy SEO profile. While trying to study his reasoning for transfinancial economics, we found a blog post of his that explains how data mining will be important to it.

He then copied the entire Wikipedia entry on data mining. Browsing through his other posts, we found that he has copied other Wikipedia entries among a few original entries. If Searle is a real person, his blog follows a Pat Gunkel-esque writing style; he spins his ideas to connect to one another, from transfinancial economics to improvisational whistling. If you have time, you can work through the entire blog for an analysis of the discipline and how transfinancial economics works. We doubt that Searle will be writing a book on the topic soon.

Whitney Grace, April 02, 2014
Sponsored by ArnoldIT.com, developer of Augmentext

Digging for Data Gold

April 1, 2014

Tech Radar has an article that suggests an idea we have never heard before: “How Text Mining Can Help Your Business Dig Gold.” Be mindful that was a sarcastic comment. It is already common knowledge that text mining is an advantageous tool for learning about customers, products, new innovations, market trends, and other patterns. One of big data’s main aims is capturing that information from an organization’s data. The article explains how much text data is created in a single minute with some interesting facts (2.46 million Facebook posts, wow!).

It suggests understanding the type of knowledge you wish to capture and finding software with a user-friendly dashboard. It ends on this note:

“In summary, you need to listen to what the world is trying to tell you, and the premier technology for doing so is “text mining.” But, you can lean on others to help you use this daunting technology to extract the right conversations and meanings for you.”

The entire article is an overview of what text mining can do and how it is beneficial. It does not go further than basic explanations of how to mine the gold in the data mine; that will require further reading. We suggest a follow-up article that explains how text mining can also lead to fool’s gold.

Whitney Grace, April 01, 2014
Sponsored by ArnoldIT.com, developer of Augmentext

Funnelback Advocates Big Data Mining

February 9, 2014

It is a new year and, as usual, there are big plans for big data. Instead of looking ahead, however, let’s travel back to the July 4, 2012 Squiz and Funnelback European User Summit. On that day, Ben Pottier gave a talk on “Big Data Mining With Funnelback.” Essentially it is a sales pitch for the company, but it is also a primer on understanding big data and how people use data.

At the beginning of the talk, Pottier mentions a quote from the International Data Corporation:

“The total amount of global data is expected to grow almost 3 zettabytes during 2012.”

That is a lot of ones and zeroes. How much did it grow in 2013, and what is expected for 2014? However much global data grows, Pottier emphasizes that most of Funnelback’s clients have around 75,000 documents, and as that number grows, organizations need to address how to manage it. Beyond the basic explanation, Pottier explains that the single biggest issue for big data is finding enterprise content. In the last five minutes, he discusses data mining’s importance and how it can automate work that used to be done manually.

Pottier also explains that search is a vital feature for big data. Ha! It is interesting how search is stretched to cover just about any content-related function. Maybe “big data” should be renamed “big search.”

Whitney Grace, February 09, 2014

Sponsored by ArnoldIT.com, developer of Augmentext

Visual Mining Redesign

January 18, 2014

We are familiar with Visual Mining and its range of dashboard and data visualization software. Visual Mining has been working on products that help users better understand and analyze actionable business data. Its enterprise software line NetCharts is compatible across all platforms, including mobile devices and tablets. The company recently released its Winter 2013 Chartline Newsletter.

Along with the usual end-of-year greetings and gratitude, the first order of business the newsletter addresses is the Web site’s redesign.

Among the new features are:

  • “Live Demo We would like to invite you to take a virtual test drive of our live NetCharts Performance Dashboards (NCPD) demo to see our newly restyled dashboard KPI’s.
  • Blog Among the new items to explore on our site includes our new blog. This developer driven blog features new content with many different topics including tips and simple tricks to help you build and style your charts and dashboards. Keep coming back for lots more new content that will be added each month.
  • Chart Gallery We also have a new chart gallery, which features all new examples with many different kinds of chart types to demonstrate some of the countless possibilities. We also added new chart type categories such as Alerting Charts and Showcase Charts. The Alerting Charts include different chart types that use alert zones while the Showcase category features chart examples with new and unusual styling approaches to demonstrate the flexibility of our charts.”

We have to wonder if the redesign came from a lack of Web traffic. Most Web sites are losing traffic, content processing vendors among them. Does Visual Mining hope to generate more sales and traffic based on its new look? We hope so.

Whitney Grace, January 18, 2014

Sponsored by ArnoldIT.com, developer of Augmentext

Free Data Mining Book

January 7, 2014

We enjoy telling you about free resources, and here’s another one: Mining of Massive Datasets from Cambridge University Press. You can download the book without charge at the above link, or you can purchase a discounted hardcopy here, if you prefer. The book was developed by Anand Rajaraman and Jeff Ullman for their Stanford course unsurprisingly titled “Web Mining.” The material focuses on working with very large data sets and emphasizes an algorithmic approach.

The description reminds us:

“By agreement with the publisher, you can still download it free from this page. Cambridge Press does, however, retain copyright on the work, and we expect that you will obtain their permission and acknowledge our authorship if you republish parts or all of it. We are sorry to have to mention this point, but we have evidence that other items we have published on the Web have been appropriated and republished under other names. It is easy to detect such misuse, by the way, as you will learn in Chapter 3.”

Nice plug there at the end. If you’re looking for more info on working with monster datasets, check out this resource—the price is right.

Cynthia Murrell, January 07, 2014

Sponsored by ArnoldIT.com, developer of Augmentext

Palantir’s Growth Continues Following the 2011 Move to Australia

November 11, 2013

The article titled “The Rise and Rise of Palantir and Its Deep Domain Knowledge” on Crikey follows the move of Palantir Technologies, a data mining company with a $2 million investment from the CIA, to Canberra, Australia. Palantir has seen its fair share of press, good and bad, but ever since Anonymous hacked its system and discovered its plan to destroy WikiLeaks’ credibility in 2011, the adjective “ruthless” seems appropriate. The company, founded in 2002, moved to Australia in 2011 and has seen enormous success. The article explains,

“The Department of Defence began using some of its software in 2011 via third-party providers, but this year has seen the company grow rapidly… Top-flight lobbying firm Government Relations Australia was hired to represent them in Canberra and state capitals. In the last few weeks, the company has secured multi-year contracts with the Department of Defence’s Intelligence and Security branch worth nearly $2 million, all secured via limited tender…Those of course are the contracts we know about.”

The article speculates that Palantir is being utilized by the Australian government, given the proven effectiveness of data mining for national security. While the ACLU believes such companies pose a massive threat to the privacy of civilians, governments continue to invest in cybersecurity companies.

Chelsea Kerwin, November 11, 2013

Sponsored by ArnoldIT.com, developer of Augmentext

There Is Much to be Learned about Visual Data Mining

October 6, 2013

What is visual data mining? I know that data mining involves searching through data with a computer program in search of specific information. I am guessing that visual data mining involves the same process, except it presents the data using various visual patterns. Am I right? Am I dead wrong? I do not know, but I do know the way to find the answer is to read Visual Data Mining: Theory, Techniques and Tools for Visual Analytics by Arturas Mazeika, Michael H. Böhlen, and Simeon Simoff.

Here is the item description from Amazon:

“The importance of visual data mining, as a strong sub-discipline of data mining, had already been recognized in the beginning of the decade. In 2005 a panel of renowned individuals met to address the shortcomings and drawbacks of the current state of visual information processing. The need for a systematic and methodological development of visual analytics was detected. This book aims at addressing this need. Through a collection of 21 contributions selected from more than 46 submissions, it offers a systematic presentation of the state of the art in the field. The volume is structured in three parts on theory and methodologies, techniques, and tools and applications.”

This book usually retails for a whopping $99.00, or $63.91 with the Amazon discount. That is still a hefty chunk of change for a 163-page book, which is why we are pleased to say that if you are a member of ISBN Book Funder or OnlineBooks.com, it is available to you for free. Other books are free for members as well. If that does not appeal to you, check out your local academic library.

Whitney Grace, October 06, 2013

Sponsored by ArnoldIT.com, developer of Augmentext

Big Data Joins The Justice League

September 1, 2013

The Justice League’s headquarters, either the Hall of Justice or the Watch Tower, has state-of-the-art equipment to track bad guys and their criminal activities. We puny mortals might actually have a tool to put Batman’s own deductive skills to shame with big data, says The News Factor in the article, “Watch Out, Terrorists: Big Data Is On The Case.” Big data is nothing new; we just finally have the technology to aggregate the data and follow patterns using data mining and data visualization.

The Institute for the Study of Violent Groups is searching through ten years of data about suspected groups and individuals involved with terrorism and other crimes. The Institute is discovering patterns and information that were never visible before. Microsoft’s security researchers are up to their eyeballs in data that they analyze daily for cyber attacks. Microsoft recently allocated more resources to develop better network analytical tools.

The article says that while these organizations’ efforts are praiseworthy, the only way to truly slow cyber crime is to place a filter over the entire Internet. Here comes the company plug:

“That’s where new data-visualization technology, from vendors such as Tableau and Tibco Software, hold potential for making a big difference over time. These tools enable rank-and-file employees to creatively correlate information and assist in spotting, and stopping, cybercriminals.”

Big data’s superpowers are limited to the isolated areas where it has been deployed. Its major weakness is the entire Internet. Again, it is not the be-all, end-all answer.

Whitney Grace, September 01, 2013

Sponsored by ArnoldIT.com, developer of Beyond Search

Another Information Priority: Legacy Systems

August 16, 2013

The hoohah about cloud computing, Big Data, and other “innovations” continues. Who needs Oracle when one has Hadoop? Why license SPSS or some other Fancy Dan analytics system when there are open source analytics systems a mouse click away? Search? Lots of open source choices.


Image from http://sageamericanhistory.net/gildedage/topics/gildedage3.html

We have entered the Gilded Age of information and data analysis. Do I have that right?

The marketers and young MBAs chasing venture funding instead of building revenue shout, “Yes, break out the top hats and cigars. We are riding a hockey stick type curve.”

Well, sort of. I read “Business Intelligence, Tackling Legacy Systems Top Priorities for CIOs.” Behind the consultant speak and fluff, there lurk two main points:

  1. Professionals in the US government, and I presume elsewhere, are struggling to make sense of “legacy” data; that is, information stuffed in file cabinets or sitting in an antiquated system down the hall.
  2. The problems information technology managers face remain unresolved. After decades of effort by whiz kids, few organizations can provide basic information technology services.

As one Reddit thread made clear, most information technology professionals use Google to find a fix or read the manual. See Reddit and search for “secrets about work business”.

A useful comment about the inability to tap data appears in “Improving business intelligence and analytics the top tech priority, say Government CIOs.” Here’s the statement:

IT contracts expert Iain Monaghan of Pinsent Masons added: “Most suppliers want to sell new technology because this is likely to be where most of their profit will come from in future. However, they will have heavily invested in older technology and it will usually be cheaper for them to supply services using those products. Buyers need to balance the cost they are prepared to pay for IT with the benefits that new technology can deliver,” he said. “Suppliers are less resistant to renegotiating existing contracts if buyers can show that there is a reason for change and that the change offers a new business opportunity to the supplier. This is why constant engagement with suppliers is important. The contract is meant to embody a relationship with the supplier.”

Let me step back, way back. Last year my team and I prepared a report to tackle this question, “Why is there little or no progress in information access and content processing?”

We waded through the consultant chopped liver, the marketing baloney, and the mindless prose of thought leaders. Our finding was really simple. In fact, it was so basic we were uncertain about a way to present it without coming across like a stand up comedian at the Laugh House. To wit:

Computational capabilities are improving, but the volume of content to be processed is growing faster. Software that could once cope with basic indexing and statistical chores now bottlenecks in widely used systems. As a result, the gap between what infrastructure and software can process and the amount of data to be imported, normalized, analyzed, and output is growing. Despite recent advances, most organizations are unable to keep pace with new content and changes to current content. Legacy content is in most cases not processed. Costs, time, and tools seem to be an intractable problem.

Flash forward to the problem of legacy information. Why not “sample” the data and use that? Sounds good. The problem is that even sampling is fraught with pitfalls. Most introductory statistics courses explain the dangers of flawed sampling.
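The sampling trap is easy to demonstrate. Here is a minimal, hypothetical sketch (the numbers and the "legacy versus recent" split are invented for illustration) showing how a convenience sample of easy-to-reach recent records misestimates an archive-wide average, while a simple random sample does not:

```python
import random
import statistics

random.seed(42)

# Hypothetical archive: older "legacy" records have systematically
# different values (say, document sizes) than recent ones.
legacy = [random.gauss(10, 2) for _ in range(8000)]   # older content
recent = [random.gauss(50, 5) for _ in range(2000)]   # newer content
archive = legacy + recent

# Convenience sample: only the easy-to-reach recent records.
convenience = random.sample(recent, 500)

# Simple random sample: drawn from the whole archive.
srs = random.sample(archive, 500)

true_mean = statistics.mean(archive)
print(f"true mean:        {true_mean:.1f}")
print(f"convenience mean: {statistics.mean(convenience):.1f}")  # badly biased
print(f"random mean:      {statistics.mean(srs):.1f}")          # close to true
```

The convenience sample is not wrong because the arithmetic failed; it is wrong because the frame excluded the legacy records entirely, which is exactly the trap legacy data sets invite.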

How prevalent is flawed sampling? Some interesting examples from “everywhere” appear on the American Association for Public Opinion Research Web site. For me, I just need to reflect on the meetings in which I have participated in the last week or two. Examples:

  1. Zero revenue because no one matched the “product” to what the prospects wanted to buy
  2. Bad hires because no one double checked references. The excuse was, “Too busy” and “the system was down.”
  3. Client did not pay because “contracts person could not find a key document.”

Legacy data? It is another symptom of flawed business and technology practices. Will azure chip consultants and “motivated” MBAs solve the problem? Nah. Will flashy smart software be licensed and deployed? Absolutely. Will the list of challenges be narrowed in 2014? Good question.

Stephen E Arnold, August 16, 2013

Sponsored by Xenky

Data Mining is Part of Online Experience

August 8, 2013

How much are we revealing of ourselves online? Every day brings new information about how even the most careful Internet users are likely wide open to spying. It is hard to say what NSA whistleblower Edward Snowden thought would happen, but the world’s reaction is probably pretty close. The NSA is not the only one peeking, as we learned in a recent TIME article, “This MIT Website Tracks Your Digital Footprint.”

According to the article about a program called Immersion:

Much like the government phone-surveillance programs, Immersion doesn’t need to access the content of communications. Instead, by gathering information about the senders and recipients of all the e-mails in an inbox, it can create a detailed portrait of the user’s social connections. Each person’s picture on Immersion is as unique as a fingerprint, but much more informative.
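The principle is startlingly simple. As a rough illustration (this is not Immersion’s actual code, and the addresses are invented), a few lines of Python can turn nothing but sender and recipient metadata into a weighted contact graph:

```python
from collections import Counter

# Hypothetical email metadata: (sender, recipients) pairs only --
# no subject lines, no message bodies, just like header analysis.
headers = [
    ("alice@example.com", ["bob@example.com", "carol@example.com"]),
    ("bob@example.com",   ["alice@example.com"]),
    ("alice@example.com", ["bob@example.com"]),
    ("dave@example.com",  ["alice@example.com"]),
]

# Build an undirected, weighted contact graph: edge weight equals
# the number of messages exchanged between two addresses.
edges = Counter()
for sender, recipients in headers:
    for recipient in recipients:
        edges[frozenset((sender, recipient))] += 1

# The heaviest edges reveal the strongest social ties.
for pair, weight in edges.most_common():
    print(sorted(pair), weight)
```

Scale that loop up to years of inbox headers and the resulting graph is the “fingerprint” the article describes, content unseen.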

While, sure, we treasure our privacy as much as anyone, this news and the NSA fiasco barely register as a bubble on our radar. Much like the President said, it is not really a huge deal. If anyone who spends a lot of time online thinks they have anything resembling privacy, we have a bridge we would love to sell them.

Patrick Roland, August 08, 2013

Sponsored by ArnoldIT.com, developer of Beyond Search
