July 24, 2015
Humans are visual creatures; we learn and absorb information better when pictures accompany it. In recent years, the graphic novel medium has gained popularity across all demographics. The amount of information a picture can communicate is astounding, but that information is hard to find unless someone knows to look for it. It also cannot be searched by a search engine…or can it? Synaptica is in the process of developing “OASIS Deep Image Indexing Using Linked Data.”
OASIS is an acronym for Open Annotation Semantic Imaging System, an application that unlocks image content by letting users examine an image more closely than before and by highlighting data points. OASIS is a linked data application that enables parts of an image to be identified as linked data URIs, which can then be semantically indexed against controlled vocabulary lists. It builds an interactive map of an image, its features, and the concepts they represent.
“With OASIS you will be able to pan-and-zoom effortlessly through high definition images and see points of interest highlight dynamically in response to your interaction. Points of interest will be presented along with contextual links to associated images, concepts, documents and external Linked Data resources. Faceted discovery tools allow users to search and browse annotations and concepts and click through to view related images or specific features within an image. OASIS enhances the ability to communicate information with impactful visual + audio + textual complements.”
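For the linked data curious, here is a minimal sketch of the underlying idea: an annotation that ties a region of an image, addressed via a media fragment, to a controlled vocabulary concept. It follows the general shape of the W3C Open Annotation model; all URIs, field values, and the context reference below are hypothetical illustrations, not taken from Synaptica’s product.

```python
# A minimal sketch of an Open Annotation-style record: an image region,
# addressed as a linked data target, is tied to a controlled-vocabulary
# concept. Every URI below is hypothetical.
import json

annotation = {
    "@context": "http://www.w3.org/ns/oa.jsonld",   # illustrative context
    "@type": "oa:Annotation",
    # The body: a concept URI from a controlled vocabulary
    "body": "http://example.org/vocab/concepts/gothic-arch",
    # The target: a rectangular region of a zoomable image, selected
    # with a media fragment (x, y, width, height in pixels)
    "target": {
        "source": "http://example.org/images/cathedral.jp2",
        "selector": {
            "@type": "oa:FragmentSelector",
            "value": "xywh=1024,512,300,200",
        },
    },
}
print(json.dumps(annotation, indent=2))
```

Because both the body and the target are URIs, annotations like this can be indexed, cross-linked, and queried like any other linked data.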
OASIS is advertised as a discovery and interaction tool that gives users the chance to fully engage with an image. It can be applied to any field or industry, where it might mean the difference between success and failure. People want to immerse themselves in their data and images these days, and being able to do so on a much richer scale is the future.
Whitney Grace, July 24, 2015
July 17, 2015
Summertime is here, and what better way to celebrate the warm weather and fun in the sun than with some fantastic open source tools? Okay, so you probably will not take your computer to the beach, but if you have a vacation planned, one of these tools might help you complete your work faster so you can get to that umbrella and cocktail sooner. Datamation has a great listicle focused on “Hadoop And Big Data: 60 Top Open Source Tools.”
Hadoop is one of the most widely adopted open source tools for big data solutions. The Hadoop market is expected to be worth $1 billion by 2020, and IBM has dedicated 3,500 employees to developing Apache Spark, part of the Hadoop ecosystem.
As open source is a huge part of the Hadoop landscape, Datamation’s list provides invaluable information on tools that could mean the difference between a successful project and a failed one. They could also save some extra cash in the IT budget.
“This area has seen a lot of activity recently, with the launch of many new projects. Many of the most noteworthy projects are managed by the Apache Foundation and are closely related to Hadoop.”
Datamation has maintained this list for a while and updates it from time to time as the industry changes. The list is not ranked on a comparison scale with number one being the best; rather, the tools are grouped into categories, and a short description explains what each tool does. The categories include: Hadoop-related tools, big data analysis platforms and tools, databases and data warehouses, business intelligence, data mining, big data search, programming languages, query engines, and in-memory technology. There is a tool for nearly every sort of problem that could come up in a Hadoop environment, so the listicle is definitely worth a glance.
June 11, 2015
Forbes’ article “The 50 Most Innovative Companies Of 2014: Strong Innovators Are Three Times More Likely To Rely on Big Data Analytics” points out how strongly innovation is tied to big data analytics and data mining these days. The Boston Consulting Group (BCG) studies the methodology of innovation. The numbers are astounding when companies that use big data are placed against those that still have not figured out how to use their data: 57% of strong innovators rely on big data analytics versus 19% of weak innovators.
Innovation, however, is not entirely defined by big data. Most of the companies that name big data as key to their innovation are software companies. Forbes found that 53% of companies see big data as having a huge impact in the future, while BCG found only 41% that saw big data as vital to their innovation.
Big data cannot and should not be ignored. Forbes and BCG found that big data analytics are useful and can yield big payoffs:
“BCG also found that big-data leaders generate 12% higher revenues than those who do not experiment and attempt to gain value from big data analytics. Companies adopting big data analytics are twice as likely as their peers (81% versus 41%) to credit big data for making them more innovative.”
Measuring innovation proves to be subjective, but one cannot deny the positive effect big data analytics and data mining can have on a company. You have to realize, though, that big data results are useless without a plan to implement and use the data. Also take note that none of the major search vendors are considered “innovative,” when a huge part of big data involves searching for results.
Whitney Grace, June 11, 2015
June 2, 2015
It is time for an update to the enterprise search software built on Apache’s headlining open source project! The San Diego Times lets us know that “DataStax Enterprise 4.7 Released,” and it has a slew of updates set to make open source search enthusiasts drool. DataStax is a company built around the open source Apache Cassandra software; it specializes in enterprise applications for search and analytics.
The newest release of DataStax Enterprise 4.7 includes several updates to improve a user’s enterprise experience:
“…includes a production-certified version of Cassandra 2.1, and it adds enhanced enterprise search, analytics, security, in-memory, and database monitoring capabilities. These include a new certified version of Apache Solr and Live Indexing, a new DSE feature that makes data immediately available for search by leveraging Cassandra’s native ability to run across multiple data centers.”
The update also includes DataStax’s OpsCenter 5.2 for enhanced security and encryption. It can be used to store encryption keys on servers and to manage admin security.
The enhanced search capabilities are the real bragging points: fault-tolerant search operations, used to customize responses to failed searches; intelligent search query routing, which sends queries to the fastest machines in a cluster for the quickest response times; and extended search analytics, which lets Solr search syntax and Apache Spark analytics tasks run simultaneously.
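For the curious, here is a minimal sketch of what mixing Solr search syntax with Cassandra can look like from Python. It assumes the DataStax cassandra-driver package and a hypothetical search-indexed table named products in a catalog keyspace; it illustrates the general DSE Search pattern rather than the 4.7 feature set itself.

```python
# A minimal sketch of querying a DSE Search-indexed table from Python,
# assuming the cassandra-driver package. The keyspace, table, and
# column names are hypothetical.
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])        # contact point for the cluster
session = cluster.connect("catalog")    # hypothetical keyspace

# Solr search syntax embedded in CQL via the solr_query pseudo-column
rows = session.execute(
    "SELECT id, name FROM products WHERE solr_query = 'name:chrome*'"
)
for row in rows:
    print(row.id, row.name)

cluster.shutdown()
```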
DataStax Enterprise 4.7 improves enterprise search applications. It will probably pull in users trying to improve their big data plans. Has DataStax considered how its enterprise platform could be used for the cloud or on mobile computing?
Whitney Grace, June 2, 2015
May 22, 2015
The article titled “Big Data Must Haves: Capacity, Compute, Collaboration” on GCN offers insights into the best areas of focus for big data researchers. The Internet2 Global Summit is in D.C. this year with many exciting panelists who support the emphasis on collaboration in particular. The article mentions the work being presented by several people, including Clemson professor Alex Feltus:
“…his research team is leveraging the Internet2 infrastructure, including its Advanced Layer 2 Service high-speed connections and perfSONAR network monitoring, to substantially accelerate genomic big data transfers and transform researcher collaboration…Arizona State University, which recently got 100 gigabit/sec connections to Internet2, has developed the Next Generation Cyber Capability, or NGCC, to respond to big data challenges. The NGCC integrates big data platforms and traditional supercomputing technologies with software-defined networking, high-speed interconnects and visualization for medical research.”
Arizona State’s NGCC embodies the article’s claims, stressing capacity with Internet2, several types of computing, and, of course, collaboration among everyone at work on the system. Feltus commented on the importance of cooperation in this work, suggesting that personal relationships outweigh individual successes. He claims his own teamwork with network and storage researchers helped him find new avenues of innovation that might not have occurred to him without thoughtful collaboration.
Chelsea Kerwin, May 22, 2015
Stephen E Arnold, Publisher of CyberOSINT at www.xenky.com
May 18, 2015
In plain English too. Navigate to “Top 10 Data Mining Algorithms in Plain English.” When you fire up an enterprise content processing system, the algorithms beneath the user experience layer are chestnuts. Universities do a good job of teaching students about some reliable methods to perform data operations. In fact, the universities do such a good job that most content processing systems include almost the same old chestnuts in their solutions. The decision to use some or all of the top 10 data mining algorithms has some interesting consequences, but you will have to attend one of my lectures about the weaknesses of these numerical recipes to get some details.
The write up is worth a read. The article includes a link to information which underscores the ubiquitous nature of these methods: the Xindong Wu et al. write up “Top 10 Algorithms in Data Mining.” Our research reveals that dependence on these methods is more widespread now than it was seven years ago when the paper first appeared.
The implication then and now is that content processing systems are more alike than different. The use of similar methods means that the differences among some systems are essentially cosmetic. There is a flub in the paper. I am confident that you, gentle reader, will spot it easily.
Now to the “made simple” write up. The article explains quite clearly the what and why of 10 widely used methods. The article also identifies some of the weaknesses of each method. If there is a weakness, do you think it can be exploited? This is a question worth considering, I suggest.
Example: What is a weakness of k-means?
Two key weaknesses of k-means are its sensitivity to outliers, and its sensitivity to the initial choice of centroids. One final thing to keep in mind is k-means is designed to operate on continuous data — you’ll need to do some tricks to get it to work on discrete data.
Note the key word “tricks.” When one deals with math, the way to solve problems is to be clever. It follows that some of the differences among content processing systems boil down to the cleverness of the folks working on a particular implementation. Think back to your high school math class. Was there a student who just spit out an answer and then said, “It’s obvious”? Well, that’s the type of cleverness I am referencing.
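To make the initialization weakness concrete, here is a minimal sketch in Python, assuming scikit-learn and NumPy are installed. The toy dataset, outliers, and parameter choices are illustrative, not from the article.

```python
# A minimal sketch of k-means' sensitivity to initial centroids and
# outliers. Dataset and parameters are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Three well-separated blobs plus a few extreme outliers
blobs = np.vstack([rng.normal(c, 0.5, (50, 2)) for c in (0, 5, 10)])
outliers = rng.normal(50, 1, (3, 2))
X = np.vstack([blobs, outliers])

# A single random initialization may land on a poor local optimum...
single = KMeans(n_clusters=3, init="random", n_init=1, random_state=1).fit(X)
# ...while several k-means++ restarts usually recover a better one.
multi = KMeans(n_clusters=3, init="k-means++", n_init=10, random_state=1).fit(X)

print("inertia, 1 random init  :", round(single.inertia_, 1))
print("inertia, 10 k-means++   :", round(multi.inertia_, 1))
```

Lower inertia means tighter clusters; the gap between the two runs is the “sensitivity to the initial choice of centroids” in action.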
The author does not dig too deeply into PageRank, but it too has some flaws. An easy way to identify one is to attend a search engine optimization conference. One flaw turbocharges these events.
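To illustrate the sort of flaw the SEO crowd loves, here is a toy power-iteration PageRank in Python. The graph, damping factor, and “link farm” pages are illustrative assumptions, not Google’s production algorithm.

```python
# A toy PageRank via power iteration, showing how inbound links from
# pages that exist only to link (a "link farm") inflate a target's score.
import numpy as np

def pagerank(edges, n, d=0.85, iters=100):
    # Build a column-stochastic transition matrix from (source, target) pairs
    M = np.zeros((n, n))
    for src, dst in edges:
        M[dst, src] += 1.0
    out = M.sum(axis=0)                       # out-degree of each page
    for j in range(n):
        M[:, j] = M[:, j] / out[j] if out[j] else 1.0 / n  # dangling pages
    r = np.full(n, 1.0 / n)                   # uniform starting rank
    for _ in range(iters):                    # power iteration with damping
        r = (1 - d) / n + d * M @ r
    return r

honest = [(0, 1), (1, 2), (2, 0)]             # a small honest link cycle
farm = honest + [(3, 2), (4, 2)]              # pages 3 and 4 only boost page 2
print(pagerank(honest, 5).round(3))
print(pagerank(farm, 5).round(3))
```

Page 2’s score rises in the second run even though nothing about its content changed, which is the flaw that keeps the SEO conference circuit humming.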
My relative Vladimir Arnold, whom some of the Arnolds called Vlad the Annoyer, would have liked the paper. So do I. The write up is a keeper. Plus there is a video, perfect for the folks whose attention span is no better than a goldfish’s.
Stephen E Arnold, May 18, 2015
May 14, 2015
Mythologies usually develop over a course of centuries, but big data has only been around for (arguably) a couple of decades, at least in its modern incarnation. Recently big data has received a lot of media attention and product development, which was enough time for the Internet to create a big data mythology. The Globe and Mail wanted to dispel some of the bigger myths in the article “Unearthing Big Myths About Big Data.”
The article focuses on Prof. Joerg Niessing’s big data expertise and how he explains the truth behind many of the biggest big data myths. One of the biggest points Niessing wants people to understand is that gathering data does not equal dollar signs; you have to be active with your data:
“You must take control, starting with developing a strategic outlook in which you will determine how to use the data at your disposal effectively. “That’s where a lot of companies struggle. They do not have a strategic approach. They don’t understand what they want to learn and get lost in the data,” he said in an interview. So before rushing into data mining, step back and figure out which customer segments and what aspects of their behavior you most want to learn about.”
Niessing says that big data is not really big, but made up of many diverse data points. Big data also does not have all the answers; instead it provides ambiguous results that need to be interpreted. Have questions you want answered before gathering data. Also, not all of the data returned is great; some of it is actually garbage, so it cannot be used for a project. Several other myths are debunked, but the truth remains that having a strategic big data plan in place is the best way to make the most of big data.
Whitney Grace, May 14, 2015
April 13, 2015
Bing is considered a search engine joke, but it might be working its way toward becoming a viable search solution…maybe. MakeUseOf’s “How Bing Predicts Has Become So Good” credits Microsoft with actually listening to its users and improving the search results with the idea that “Bing is for doing.” One way Microsoft is putting its search engine to work is Bing Predicts, a tool that predicts the winners of competitions, the weather, and other outcomes analyzed from popular searches, social media, regional trends, and more.
It takes a bit more for Predicts to divine sporting event outcomes; for those, Bing relies on historic team data, key player data, opinions from top news sources, and pre-game report predictions.
“Microsoft researcher, and serial predictor David Rothschild believes the prediction engine is ‘an interesting way to show users that Bing has a lot of horsepower beyond just providing good search results.’ Data is everything. Even regular Internet users understand the translation of data to power, so Microsoft’s bold step forward with their predictions underscores the confidence in their own algorithms, and their ability to handle the data coming into Redmond.”
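As a purely illustrative sketch, here is what blending such signals into a win probability might look like in Python with scikit-learn. The features, numbers, and the choice of logistic regression are assumptions for illustration, not Bing’s actual method.

```python
# An illustrative sketch of signal blending for a prediction engine:
# combine team statistics, sentiment, and pundit picks into one
# win-probability model. All data and features are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: home win rate, away win rate, social sentiment delta, pundit picks
X = np.array([
    [0.71, 0.45,  0.30, 0.8],
    [0.52, 0.60, -0.10, 0.3],
    [0.63, 0.39,  0.05, 0.6],
    [0.40, 0.68, -0.25, 0.2],
])
y = np.array([1, 0, 1, 0])  # 1 = home team won

model = LogisticRegression().fit(X, y)
upcoming = np.array([[0.66, 0.50, 0.12, 0.7]])
print("home win probability:", model.predict_proba(upcoming)[0, 1].round(2))
```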
Beyond predicting games and the next American Idol winner, Bing Predicts has applications for social causes and industry. Companies are already implementing some forms of predictive analysis, and for social causes it can be used to predict the best ways to conserve resources, medicinal supplies, and food.
Whitney Grace, April 13, 2015
Stephen E Arnold, Publisher of CyberOSINT at www.xenky.com
February 2, 2015
I find the complaints about Google’s inability to handle time amusing. On the surface, Google seems to demote, ignore, or just not understand the concept of time. For the vast majority of Google service users, Google is no substitute for the users’ investment of time and effort into dating items. But for the wide, wide Google audience, ads, not time, are more important.
Does Google really get an F in time? The answer is, “Nope.”
In CyberOSINT: Next Generation Information Access I explain that Google’s time sense is well developed and of considerable importance to the next generation solutions the company hopes to offer. Why the crawfishing? Well, Apple could just buy Google and make the bitter taste of the Apple Board of Directors’ experience a thing of the past.
Now to temporal matters in the here and now.
CyberOSINT relies on automated collection, analysis, and report generation. In order to make sense of data and information crunched by an NGIA system, time is a key metatag item. To figure out time, a system has to understand:
- The date and time stamp
- Versioning (previous, current, and future document, data items, and fact iterations)
- Times and dates contained in a structured data table
- Times and dates embedded in content objects themselves; for example, a reference to “last week” or, in some cases, optical character recognition of the date on a surveillance tape image.
For the average query, this type of time detail is overkill. The “time and date” of an event, therefore, requires disambiguation, determination and tagging of specific time types, and then capturing the date and time data with markers for document or data versions.
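Here is a minimal sketch of the relative-time piece of that problem in Python. The tiny phrase table is an illustrative assumption; real NGIA systems use full temporal taggers rather than a lookup like this.

```python
# A minimal sketch of time disambiguation: resolve a relative phrase
# such as "last week" against a document's own date stamp. The phrase
# table is an illustrative assumption, not a production tagger.
from datetime import date, timedelta

RELATIVE = {
    "yesterday": timedelta(days=1),
    "last week": timedelta(weeks=1),
    "last month": timedelta(days=30),  # rough approximation
}

def resolve(phrase: str, doc_date: date) -> date:
    """Anchor a relative time phrase to the document's date stamp."""
    return doc_date - RELATIVE[phrase.lower()]

# "last week" in a story stamped 2015-02-02 points at late January
print(resolve("last week", date(2015, 2, 2)))   # 2015-01-26
```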
A simplification of Recorded Future’s handling of unstructured data. The system can also handle structured data and a range of other data management content types. Image copyright Recorded Future 2014.
Sounds like a lot of computational and technical work.
In CyberOSINT, I describe Google’s and In-Q-Tel’s investments in Recorded Future, one of the data forward NGIA companies. Recorded Future has wizards who developed the Spotfire system, which is now part of the Tibco service. There are Xooglers like Jason Hines. There are assorted wizards from Sweden, from countries most US high school students cannot locate on a map, and assorted veterans of high technology start-ups.
An NGIA system delivers actionable information to a human or to another system. A licensee can also build and integrate new solutions on top of the Recorded Future technology. One of the company’s key inventions is numerical recipes that deal effectively with the notion of “time.” Recorded Future uses the name “Tempora” as shorthand for the advanced technology that makes time along with predictive algorithms part of the Recorded Future solution.
January 12, 2015
If you are a fan of “knowledge,” you probably follow the information provided by www.KDNuggets.com. I read “Research Leaders on Data Science and Big Data Key Trends, Top Papers.” The information is quite interesting. I did note that the piece was kicked off with this statement:
As for the papers, we found that many researchers were so busy that they did not really have the time to read many papers by others. Of course, top researchers learn about works of others from personal interactions, including conferences and meetings, but we hope that professors have enough students who do read the papers and summarize the important ones for them!
Okay, everyone is really busy.
Among the responses of the 13 experts cited, I noted two papers that seemed to call attention to the issue of accuracy. These were:
“Preventing False Discovery in Interactive Data Analysis is Hard,” Moritz Hardt and Jonathan Ullman
“Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images,” Anh Nguyen, Jason Yosinski, Jeff Clune.
A related paper noted in the article is “Intriguing Properties of Neural Networks,” by Christian Szegedy, et al. The KDNuggets comment states:
It found that for every correctly classified image, one can generate an “adversarial”, visually indistinguishable image that will be misclassified. This suggests potential deep flaws in all neural networks, including possibly a human brain.
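To make the “adversarial image” idea concrete, here is a minimal sketch using the fast gradient sign method, a standard recipe that grew out of this line of research. It assumes PyTorch is installed; the stand-in model, random “image,” and epsilon value are illustrative, not what the cited papers used.

```python
# A minimal fast-gradient-sign sketch of adversarial perturbation.
# The tiny linear "classifier" and random input are stand-ins; the
# cited papers attack real image classifiers.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in model
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in "image"
label = torch.tensor([3])                             # its correct class

loss = loss_fn(model(image), label)
loss.backward()

# Nudge every pixel a tiny step in the direction that increases the loss:
# visually indistinguishable, yet often enough to flip the prediction.
epsilon = 0.1
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

print("original prediction   :", model(image).argmax().item())
print("adversarial prediction:", model(adversarial).argmax().item())
```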
My takeaway is that automation is coming down the pike. Accuracy could get hit by a speeding output.
Stephen E Arnold, January 12, 2015