Try Before You Buy

March 22, 2013

The old saying that there is hidden meaning in words means even more in the analytics world. The Semantria API helps turn unstructured data into data that makes sense. Semantria even offers a free demo so that prospective customers can see how the system works. According to the website:

“Semantria’s API helps organizations to extract meaning from large amounts of unstructured text. The value of the content can only be accessed if you see the trends and sentiments that are hidden within. Add sophisticated text analytics and sentiment analysis to your application: turn your unstructured content into actionable data.”

Semantria provides a fully customizable, user-friendly system: users can gain valuable insight from their unstructured content with the simple click of a button. The Semantria API uses the latest extraction techniques, so clients can be confident they are working with current technology and getting the best information possible. The convenient pay-as-you-go service ensures that, regardless of budget, users can get the services they need. More importantly, Semantria offers unlimited support and maintenance, so users always know where to go for answers. One of the best benefits is the online demo, which lets anyone interested take a peek at how the service works. The try-before-you-buy approach makes it hard to stay away.
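For a sense of what plugging a sentiment API into an application involves, here is a minimal sketch of a REST call. The endpoint URL, authentication scheme, and response fields below are hypothetical placeholders, not Semantria's documented interface; consult the vendor's documentation for the real thing.

```python
import requests

# Hypothetical endpoint and credentials -- Semantria's actual API differs.
API_URL = "https://api.example.com/sentiment"  # placeholder, not Semantria's
API_KEY = "your-api-key"

def analyze(text):
    """Send unstructured text to a sentiment-analysis service and
    return an assumed (label, score) pair from the JSON response."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    return data["sentiment"], data["score"]

if __name__ == "__main__":
    label, score = analyze("The demo made it easy to see the value right away.")
    print(f"sentiment={label} score={score:+.2f}")
```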

April Holmes, March 22, 2013

Sponsored by ArnoldIT.com, developer of Augmentext

Vivisimo Marketing Director Talks Vast Possibilities with Big Data

March 16, 2013

While anyone with an ear to the ground probably already knows that Vivisimo is expanding its marketing efforts to be classified as a big data leader, all of the implications may not have been revealed just yet. A recent Pittsburgh Business Times article, “A Big Discussion on Big Data,” clued us in.

Saman Haqqi, IBM’s program director of big data marketing, was included on a panel hosted last week at IBM and organized by the MIT Enterprise Forum of Pittsburgh.

The article offered some insight into her perspective:

“Living in a world where data is everywhere and a huge amount of what we touch is now collecting this information the opportunity for working with “big data” is vast. ‘Just the way the Internet changed us in 20 or 30 years, big data will change how we live, work and play over the next 30 years,’ said Saman Haqqi, program director of big data marketing at IBM (formerly Vivisimo). The possibilities she said can’t be understated since already it touches everything from better weather forecasting to genomics research.”

This is interesting news, but there could be a complication with IBM and its big data powerhouse Watson. We wonder if this is a sign that IBM is moving on and focusing on the less flashy but more practical business intelligence solutions for enterprise organizations, the medical industry, and others. We would not be too surprised if the super analytic brain slowly faded into the background.

Patrick Roland, March 16, 2013

Sponsored by ArnoldIT.com, developer of Beyond Search

The Dirty Truth about Data Visualization

March 14, 2013

Data visualization is becoming the new thing when it comes to presentations, portfolios, and even proposals; however, there is more to this fad than meets the eye. The NetMag article “Seven Secrets of Data Visualisation” provides an informative yet comical view of data visualization and the challenges developers face. The article gives readers seven dirty little secrets about the data visualization world.

  1. Real data is ugly.
  2. A bar chart is usually better.
  3. There’s no substitute for real data.
  4. The devil is in the details.
  5. Animate only when appropriate.
  6. Visualization is not analysis.
  7. Data visualization takes more than code.

These secrets may surprise some, but on reflection most of them simply make sense. For instance, data is what it is. Users are always trying to find ways to “clean” their data and make it presentable, but it still takes a lot of work to make something out of nothing. Whether it comes down to formatting or special online tools, users need help to take their data from a jumble of random numbers and figures to something presentable and, more importantly, understandable. When it comes to animation, sometimes less is definitely more: it can be tempting to add lots of animation and special effects to your data, but in the long run all it does is add to the chaos. Secret number six is probably one of the most important. Though data visualization can in many cases aid analysis, it is not a substitute for it; it still takes analytical skill, effort, and expertise to bring any data to life. Visualization developers definitely face challenges, and this article lays the little secrets out on the table, but it is hard to call them dirty. One might just say they play a duplicitous role when it comes to talking about what really goes on.
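Secret number two is easy to demonstrate. A plain bar chart often communicates a comparison more clearly than any elaborate graphic. Here is a minimal matplotlib sketch, with made-up category counts standing in for real (and inevitably messier) data:

```python
import matplotlib.pyplot as plt

# Made-up counts standing in for real, messier data.
categories = ["Email", "Search", "Social", "Direct"]
visits = [420, 310, 180, 95]

fig, ax = plt.subplots(figsize=(6, 3))
ax.bar(categories, visits, color="steelblue")
ax.set_ylabel("Visits")
ax.set_title("Traffic by channel")  # one message, no animation, no chartjunk
fig.tight_layout()
plt.show()
```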

April Holmes, March 14, 2013

Sponsored by ArnoldIT.com, developer of Augmentext

Doubts About Big Data

March 13, 2013

More skepticism is in the air over big-data hype. ReadWrite’s Matt Asay cites a recent dustup between a New York Times reporter and Tesla Motors’ head honcho Elon Musk in “Tesla and the Fallacy of Data-Driven Decisions.” Reporter John Broder had given the Tesla Model S a scathing review, with which Musk took issue. In the end, both parties effectively supported their arguments using Broder’s test-drive data. Who was right? Even after looking at that data, it is hard to say; and that is the problem.

Asay asserts that the value of big data lies in pointing us in the right direction, not in employing it to prove points and draw conclusions. He writes:

“New York Times columnist David Brooks nails this in an op-ed piece, wherein he argues that Big Data, while very useful for guiding our intuitions, gets some things very wrong. Like the value of social connections. Or the context for answering a question. In fact, he speculates, Big Data might actually obscure Big Answers by complicating decisions and making it even harder to determine which statistically significant correlations between data are informative and not simply spurious.”
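Brooks’s point about spurious correlations is easy to reproduce: test enough unrelated variables against each other and some pairs will look “statistically significant” by chance alone. A small simulation sketch on purely random data, not tied to any real dataset:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_vars, n_obs = 50, 100

# 50 columns of pure noise -- no variable is actually related to any other.
data = rng.normal(size=(n_obs, n_vars))

spurious, pairs = 0, 0
for i in range(n_vars):
    for j in range(i + 1, n_vars):
        r, p = stats.pearsonr(data[:, i], data[:, j])
        pairs += 1
        if p < 0.05:  # "significant" at the usual threshold
            spurious += 1

# With 1,225 pairs we expect roughly 5% (about 61) false positives.
print(f"{spurious} of {pairs} random pairs look significant at p < 0.05")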

The new big-data industry amounted to a $4.5 billion market in 2010 but is projected to hit $23.8 billion in 2016. Is all this growth just a house of cards?

Not necessarily, but it is important to recognize what data can and cannot do. Analysis software can find patterns and draw preliminary conclusions, but human minds are still better at higher-order thinking. (And I hope they always will be.)

Cynthia Murrell, March 13, 2013

Sponsored by ArnoldIT.com, developer of Augmentext

Aussie Retailer Sticking with Mainframe Approach to Data

March 12, 2013

One Tasmanian organization refuses to buy into the cloud-migration trend. TechEye informs us that retail outfit Coogans is saying, in effect, “Forget the Cloud, Get a Mainframe.” The company is forging ahead with plans to upgrade its old standby, the Unisys mainframe. The article tells us:

“To understand how unusual this is, you have to realise that Australia never really had the mainframe bug and there are only about six organisations in Australia to use Unisys’ mainframe systems. According to the Sydney Morning Herald, Coogans has been a loyal client of the Unisys and its predecessor, Burroughs, since before 1965, and this new fangled cloud tech just does not cut the mustard.

“It just took a weekend for Coogans to set up one mainframe, the latest Unisys Libra 460s, at each of its Hobart and Moonah locations in Tasmania and migrate its real-time custom production application, called Coogans Online Stock, Financial And Rental System, which was written in 1992 and is the centrepiece of the retailer’s IT architecture.”

It seems like a case of “if it ain’t broke, don’t fix it.” But won’t they regret eschewing the advantage of cloudy redundancy? Nope. The company maintains a separate disaster-recovery environment at a nearby building, complete with VPN-ensured redundancy.

Okay, so they’re covered there. Still, why resist what many consider an inevitable shift? IT manager Peter Jandera is simply uncomfortable with not knowing exactly where his company’s data is going, and with knowing that “all it would take is a person with a spade to cut through a cable and the company is stuffed.” He emphasized that Coogans cannot afford to lose even a minute from its working day; he is simply unwilling to trust another organization with that responsibility. I can’t say I blame him.

If more organizations were to buck the cloud trend, would it mean new hope for systems like BRS Search and IBM’s STAIRS?

Cynthia Murrell, March 12, 2013

Sponsored by ArnoldIT.com, developer of Augmentext

Google Gets the Flu

March 10, 2013

Google is an all-knowing Internet wellspring of knowledge. With a few keystrokes and mouse clicks, nearly all the world’s information is at your fingertips. Or maybe not. If you have been browsing the Slashdot forums, you might have spied the post “When Google Got Flu Wrong:”

“When influenza hit early and hard in the United States this year, it quietly claimed an unacknowledged victim: one of the cutting-edge techniques being used to monitor the outbreak. A comparison with traditional surveillance data showed that Google Flu Trends, which estimates prevalence from flu-related Internet searches, had drastically overestimated peak flu levels. The glitch is no more than a temporary setback for a promising strategy, experts say, and Google is sure to refine its algorithms. But with flu-tracking techniques based on mining of web data and on social media taking off, Nature looks at how these potentially cheaper, faster methods measure up against traditional epidemiological surveillance networks.”
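Google has not published Flu Trends’ internals beyond its papers, but the basic idea, fitting a model from search-query volumes to officially reported flu prevalence, can be sketched in a few lines. Everything below is invented toy data, not Google’s actual model, and the final lines hint at the failure mode described above: a media-driven spike in searches inflates the estimate even when illness has not increased.

```python
import numpy as np

# Toy data: weekly volumes for three flu-related queries (columns)
# and CDC-style reported prevalence (target). All numbers invented.
rng = np.random.default_rng(0)
weeks = 52
queries = rng.poisson(lam=[120, 80, 40], size=(weeks, 3)).astype(float)
true_weights = np.array([0.02, 0.01, 0.005])
prevalence = queries @ true_weights + rng.normal(0, 0.1, weeks)

# Ordinary least squares: prevalence ~ queries @ w + b
X = np.column_stack([queries, np.ones(weeks)])
w, *_ = np.linalg.lstsq(X, prevalence, rcond=None)
print("fitted weights:", np.round(w, 4))

# A tripling of search volume with no extra illness triples the estimate --
# the overestimation problem the article describes.
spike = queries[-1] * 3
print("estimate after search spike:", np.dot(spike, w[:3]) + w[3])
```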

The real treat comes from the conversation threads. Commenters want to know where Google extrapolated its data from, and many believe Google will do better next time. One prominent point was the “Chicken Little pandemic”: the US is a culture of fear, and the only way to get people to comply is to instill fear about a falling sky. The takeaway is that Google is not always right, and comments do not show the best side of humanity.

Whitney Grace, March 10, 2013

Sponsored by ArnoldIT.com, developer of Beyond Search

Data Manipulation And Intent

March 8, 2013

Data supposedly tells us what happened, but while the data may record an action, it does not record the intent behind it. The Tow Center for Digital Journalism takes a look at “What The Tesla Affair Tells Us About Data Journalism.” The article points out that intent can shape data, but the context is lost when it turns into cold hard facts. The truth-about-data question relates to the recent Tesla test drive review. Tesla was very upset when New York Times reporter John Broder gave the new car a poor review, and the company claimed the write-up did not factually represent the vehicle. Tesla did not release the raw data from Broder’s drive, only the company’s interpretation of that data.

At this point, no one can really tell the truth about the vehicle. Broder could provide context, but his opinion has already been devalued. It is also important to remember that Tesla only wanted the review for publicity, and any negative truths made for bad PR.

What we can learn is:

“So, to recap. The Tesla Affair reinforces that: data does not equal fact; that context matters enormously to data journalism; that trust and documentation are even more important in a world of data journalism; and that companies will continue to prioritize positive PR over good journalism in reviews of their products.”

Great, more reason to doubt data; but then, people have been manipulating it since time began. Will this become a greater trend, though? Is this a caution for consumer-oriented analytics systems?

Whitney Grace, March 08, 2013

Sponsored by ArnoldIT.com, developer of Beyond Search

MicroStrategy Making No Small Changes to Business Intelligence

March 7, 2013

MicroStrategy is rolling out big changes in the mobile computing marketplace in 2013 despite internal reorganization, and the future looks bright for the business intelligence technology provider.

The article “MicroStrategy Doubles Down on Mobile Data Visualization” gives us a look into the intelligence vendor and its updated focus on cloud, social, and visual analysis, and how it will fare against opponents like Oracle and Tableau.

“Mobile has been a big part of MicroStrategy’s focus for at least three years, and it has been steadfast in its strategy to build native apps for iPhone, iPad and Android devices. The native-app approach differs from that of many other BI vendors who are betting on HTML 5… In its latest release, MicroStrategy has continued to refine the online and offline performance of its mobile apps with smart caching, support for video and PDF content inside dashboards, and usage tracking of BI activities on the device.”

Big changes were announced for MicroStrategy’s 9.3 release, including an upgrade to Visual Insight, the company’s visual data discovery technology. MicroStrategy Cloud, now in its second year, also looks promising, with more than 30,000 users. A big boost in social networking intelligence will also be an asset, though it remains to be seen whether customers can handle their own internal data before they make the jump to the social sphere.

A big question regarding MicroStrategy is whether it will continue to see growth amid reorganization after poor execution in 2012. Compared to the rest of the BI world, it really did not grow into its potential. However, it is a company to take note of and follow.

Leslie Radcliff, March 07, 2013

Sponsored by ArnoldIT.com, developer of Augmentext

Talend and Amazon Love

March 6, 2013

Talend has been collaborating with Amazon Web Services, we learn in Enhanced Online News’ post, “Talend Expands Big Data Integration Platform with Support for Amazon Redshift.” Business analysts, data scientists, and enterprise architects will soon be able to manage their Amazon Redshift data using three solutions from Talend: Talend Open Studio for Big Data, Talend Enterprise Big Data, and Talend Platform for Big Data. The write-up tells us:

“By using Talend’s connectors to Amazon Redshift, users will be able to load and extract data to and from Amazon Redshift, and also connect the Cloud-based data warehousing platform to the full spectrum of transactional, operational and analytic data sources. . . . Only Talend scales for today’s big data and cloud environments with more than 450 native connectors to relational databases, packaged applications, SaaS applications, files, legacy systems, Hadoop clusters, NoSQL databases, and more.

“In addition, Talend’s unique data quality capabilities, natively included in the Talend Platform for Big Data, eliminate inconsistent data, enforce rules and create consistent information through standardization.”

Talend says their Redshift connectors will be available within the month from Talend Exchange, the company’s community sharing platform, or directly from the Talend Studio. Future versions of the Talend Platform will have the connectors built in.
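Talend’s connectors abstract the plumbing away, but the underlying load pattern for Redshift is well known: stage files in S3, then issue Redshift’s COPY command over a standard PostgreSQL connection (Redshift speaks the PostgreSQL wire protocol). A minimal sketch; the cluster host, credentials, bucket, table, and IAM role names below are all placeholders:

```python
import psycopg2  # Redshift accepts standard PostgreSQL client connections

# All connection details and S3/IAM names below are placeholders.
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="loader",
    password="********",
)

copy_sql = """
    COPY sales_staging
    FROM 's3://example-bucket/exports/sales.csv'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftLoadRole'
    FORMAT AS CSV
    IGNOREHEADER 1;
"""

with conn, conn.cursor() as cur:
    cur.execute(copy_sql)  # bulk-load the staged S3 data into the table
conn.close()
```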

Talend was already a leader in open-source data management when its 2010 acquisition of Sopera boosted its standing in that market. The company provides middleware for both data management and application integration, and takes pride in its powerful, flexible open solutions.

Cynthia Murrell, March 06, 2013

Sponsored by ArnoldIT.com, developer of Augmentext

Predictive Analysis Progress

March 6, 2013

Using data analysis to predict the future, a feat naturally called predictive analysis, is an intriguing facet of the analysis prism. GMA News takes a look at some progress on such software in, “New Software Can Predict Future News.” I suppose “new” is in the eye of the beholder; Recorded Future has been doing this for a couple of years now.

This article, though, covers research performed by Microsoft and the Technion-Israel Institute of Technology. Working with twenty-two years’ worth of New York Times articles and other information online, researchers have been testing ways of using this data to predict outbreaks of disease, violence, and other sources of significant mortality. Grim subject matter, to be sure, but imagine if we could take steps to deflect or minimize such occurrences before they happen. The article informs us:

“The system uses 22 years of New York Times archives, from 1986 to 2007, as well as data from the Internet to learn about what leads up to major news events. [Technion-Israel’s Kira] Radinsky said one useful source was DBpedia, a structured form of the information inside Wikipedia constructed using crowdsourcing. Other sources included WordNet, which helps software understand the meaning of words, and OpenCyc, a database of common knowledge. ‘We can understand, or see, the location of the places in the news articles, how much money people earn there, and even information about politics,’ Radinsky said. With all this information, researchers get valuable context not available in news articles, and which is necessary to figure out general rules for what events precede others.”
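The researchers’ actual system is not public code, but the core idea, learning which event types tend to precede others in a dated archive, can be sketched with simple co-occurrence counts. A toy version with invented event labels; a real system would extract these from the NYT archive and enrich them with DBpedia, WordNet, and the like:

```python
from collections import Counter

# Toy "archive": date-ordered event labels extracted from headlines.
timeline = ["drought", "crop_failure", "price_spike", "unrest",
            "drought", "crop_failure", "unrest",
            "flood", "disease_outbreak",
            "drought", "price_spike", "unrest"]

window = 3  # how far ahead we look for a consequent event
pair_counts = Counter()
antecedents = Counter()

for i, event in enumerate(timeline):
    antecedents[event] += 1
    for later in timeline[i + 1 : i + 1 + window]:
        pair_counts[(event, later)] += 1

# Estimate P(later | earlier) as co-occurrence / antecedent frequency.
for (a, b), n in pair_counts.most_common(5):
    print(f"P({b} within {window} events of {a}) ~= {n / antecedents[a]:.2f}")
```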

It appears that this project is far from complete, and Microsoft has formed no plans to bring it to market, according to Eric Horvitz of the Microsoft team. The article does acknowledge Recorded Future’s work in this area, noting their strong customer base within the intelligence community.

Who can predict when the rest of us will get the chance to give this compelling technology a whirl?

Cynthia Murrell, March 06, 2013

Sponsored by ArnoldIT.com, developer of Augmentext
