New Version of Clarabridge to Take Businesses By Storm

December 10, 2012

The customer experience is more vital than ever because of real-time technologies and social media. Customers have more power than ever to influence, and businesses have the power to analyze and use that feedback to their benefit. Destination CRM reported on a new version of a top customer experience management solution in “Clarabridge Launches 5.5.”

This updated version of Clarabridge allows organizations to collect and analyze customer feedback data comprehensively and to share it across the enterprise.

Sid Banerjee, CEO of Clarabridge, said in a statement:

“We recognize that solving today’s customer experience challenges requires intelligently delivering customer experience to every business stakeholder, including the customer. The latest release of Clarabridge 5.5 revolutionizes the way companies engage with customers in real time. Regardless of how a customer reaches out to a business, through social media, email, or a different method of communication, Clarabridge 5.5 empowers companies to drive new levels of customer engagement, loyalty and retention.”

Expect more connectors, enhanced analytics, and extended language capabilities in Clarabridge 5.5. We expect to see many enterprise organizations benefit from this new technology.

Megan Feil, December 10, 2012

Sponsored by ArnoldIT.com, developer of Augmentext

We Can Tackle Pebibytes of Data

December 9, 2012

How big will big data get? We have to find the words to describe the numbers first. Search Storage calls out a whole bevy of prefixes all too familiar to anyone interested in big data in the article, “Kilo, Mega, Giga, Tera, Peta, and All That.”

There are several newer prefixes that apply as well: kibi, mebi, gibi, tebi, pebi, and all that, designed to express power-of-two multiples. These were created to eliminate the confusion that can arise between decimal (power-of-10) and binary (power-of-2) numeration terms.

For binary data stored in memory, on a hard drive, on a USB stick, and the like, power-of-2 multipliers are used, the article informs us. Continuing on that note:

“Technically, the uppercase K should be used for kilo- when it represents 2^10. Therefore 1 KB (one kilobyte) is 2^10, or 1,024, bytes; 1 MB (one megabyte) is 2^20, or 1,048,576 bytes. The choice of power-of-10 versus power-of-2 prefix multipliers can appear arbitrary. It helps to remember that in common usage, multiples of bits are almost always expressed in powers of 10, while multiples of bytes are almost always expressed in powers of 2.”
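To make the quoted arithmetic concrete, here is a small Python sketch contrasting the decimal (power-of-10) and binary (power-of-2) multipliers; the drive capacity in the example is invented for illustration.

```python
# Decimal (power-of-10) vs. binary (power-of-2) multipliers for one "gigabyte".
DECIMAL_GB = 10 ** 9   # 1 GB as usually marketed: 1,000,000,000 bytes
BINARY_GIB = 2 ** 30   # 1 GiB as stored:          1,073,741,824 bytes

size_bytes = 250_059_350_016  # invented drive capacity, in bytes

print(f"{size_bytes / DECIMAL_GB:.1f} GB (decimal)")  # about 250.1 GB
print(f"{size_bytes / BINARY_GIB:.1f} GiB (binary)")  # about 232.9 GiB
```

The same drive looks roughly 17 “gigabytes” smaller under the binary multiplier, which is exactly the confusion the kibi/mebi/gibi prefixes were coined to remove.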

We are waiting for the first company that asserts zettabyte capabilities. Of course, another company would then assert that yottabytes are no problem. Either way, the cited article is a good way to keep track of some commonly used big data buzzwords.

Megan Feil, December 09, 2012

Sponsored by ArnoldIT.com, developer of Augmentext

Interesting Software Companies in Indiana List Features Megaputer

December 8, 2012

Recently we stumbled across an interesting site with lists of software companies from all over the world. The lists are organized by location, and narrowing in on the alphabetical list for Indiana, we saw Megaputer Intelligence listed at number 114. While there are hordes of technology vendors offering similar solutions, Megaputer provides a single, intuitive package with PolyAnalyst.

Megaputer has stood out in the industry for simplifying analytics with its data and text mining solutions. Both structured and unstructured data are churned through PolyAnalyst; knowledge, insights, and opportunities come out the other side and land in the hands of decision-makers.

We learned about the following benefits after exploring Megaputer further:

“-Empowers data analysts to create multi-step data analysis scenarios and report templates for decision makers through a simple drag-and-drop interface

-Presents insights derived from data modeling to business users in the form of easy to understand reports, thus enabling them to make more informed decisions.”

Essentially, PolyAnalyst covers the complete data analysis process from ETL and integration to data modeling and reporting. A comprehensive selection of algorithms for automated analysis of text and structured data is also a hallmark feature of this technology.

Megan Feil, December 08, 2012

Sponsored by ArnoldIT.com, developer of Augmentext

ZyLAB Offers a Monkey

December 4, 2012

Though trade show freebies have been in hiding lately, ZyLAB recently offered one that caught our eye. A representative of the 2012 LawTech Europe Congress (LTEC), held November 12 in Prague, tweeted, “Get Your ZyLAB Monkey.” The link offers only a photo with no explanation, but we think this treat should please many of those interested in legal technology.

ZyLAB was one of the sponsors of the Congress, which was convened to address a pressing global imbalance in the eDiscovery world. The LTEC home page explains:

“Over the past few years there have been huge advances in Electronic Evidence support and guidelines for civil litigations in America and Western Europe. These advances have not been adequately mirrored in Central and Eastern Europe. As a result, multi-jurisdictional disputes have become more drawn out and complex in nature. In criminal proceedings, the lack of clear guidelines for the collection and processing of electronic evidence has led to low crime detection rates and ineffective criminal prosecutions. What this annual congress aims to achieve is address the imbalance by bringing together, the brightest minds in technology, law, governance, and compliance.”

The LTEC drew 640 participants with a roster of speakers prominent in the field. The list of sponsors is long, and includes some names very familiar to us, like Autonomy, Recommind, and, as mentioned, ZyLAB. I was interested to see that Mercedes Benz was also involved; I wonder if they gave away anything interesting.

ZyLAB was founded in 1983, with its release of the first full-text retrieval software for the PC. Its current flagship product, the eDiscovery and data management solution ZyLAB Information Management Platform, was released in 2010. The company maintains headquarters in the Netherlands and the United States, and has offices around the world.

Cynthia Murrell, December 04, 2012

Sponsored by ArnoldIT.com, developer of Augmentext

Altova Releases New Version of MissionKit

November 30, 2012

Altova, a data management solutions provider and creator of XMLSpy, recently published the news release, “Altova Announces the Release of Version 2013 of MissionKit” on its website.

According to the release, Altova’s updated MissionKit is an integrated suite of XML, SQL, and UML tools. It offers automatic error correction and support for SQL stored procedures in data mapping projects. Prices start at $59 per product, and the tools are available for purchase in the Altova online shop.

The release states:

“Among the many updates and new features we incorporated into the Version 2013 release, one of the most significant is Smart Fix. Smart Fix is unique to XMLSpy 2013 and is a huge leap forward in intelligent XML editing. It provides options for fixing validation errors that developers can apply automatically, with a single click. It’s true XML alchemy,” said Alexander Falk, President and CEO for Altova. “With increased demands on developers today we are always looking for ways to incorporate efficiencies into our products. You simply won’t find this functionality in other tools.”
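Smart Fix itself is proprietary to XMLSpy, but the class of problem it targets is ordinary schema validation. As a point of reference only, here is a minimal validation sketch using Python’s lxml library; the file names are invented, and the snippet merely reports errors rather than proposing one-click corrections the way Smart Fix does.

```python
# Minimal XML schema validation sketch using lxml; file names are invented.
# XMLSpy's Smart Fix goes further by offering automatic corrections;
# this only surfaces the validation errors such a tool would act on.
from lxml import etree

schema = etree.XMLSchema(etree.parse("purchase_order.xsd"))
document = etree.parse("purchase_order.xml")

if schema.validate(document):
    print("Document is valid.")
else:
    for error in schema.error_log:
        print(f"line {error.line}: {error.message}")
```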

Altova’s MissionKit is certainly affordable, and the suite offers great tools. However, the bundle only saves you money if you plan to make real use of more than one of its tools, such as both XMLSpy and MapForce.

Jasmine Ashton, November 30, 2012

Sponsored by ArnoldIT.com, developer of Augmentext

Facebook Mounts Technical PR Push

November 26, 2012

InfoWorld recently reported that Facebook’s appetite for crunching data has reached new highs in the article “Facebook Pushes the Limits of Hadoop.”

According to the article, since the social media giant has a billion users and a requirement to analyze more than 105 terabytes every 30 minutes, it has reached the upper limits of raw Hadoop capacity. The desperate need for more data crunching has led to the company’s launch of the Prism Project, which supports geographically distributed Hadoop data stores.

Describing how the company works around Hadoop’s capacity limits, the article states:

“Facebook’s business analysts push the business in a variety of ways. They rely heavily on Hive, which enables them to use Hadoop with standard business intelligence tools, as well as Facebook’s homegrown, closed source, end-user tool, HiPal. Hive, an open source project Facebook created, is the most widely used access layer within the company to query Hadoop using a subset of SQL. To make it even easier for business people, the company created HiPal, a graphical tool that talks to Hive and enables data discovery, query authoring, charting, and dashboard creation.”
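The appeal of Hive is that analysts can reach Hadoop-scale data with familiar SQL-style queries. As a rough illustration of the idea only, and not of Facebook’s actual schema, tooling, or HiPal, here is a sketch using the PyHive client library with invented connection details and table names.

```python
# Rough sketch of querying Hadoop through Hive with SQL-like syntax.
# PyHive is one common Python client; the host, table, and columns below
# are invented and do not reflect Facebook's internal setup.
from pyhive import hive

conn = hive.Connection(host="hive-gateway.example.com", port=10000)
cursor = conn.cursor()

# HiveQL reads like SQL but is compiled into Hadoop jobs behind the scenes.
cursor.execute("""
    SELECT country, COUNT(*) AS active_users
    FROM user_activity
    WHERE ds = '2012-11-26'
    GROUP BY country
    ORDER BY active_users DESC
    LIMIT 10
""")

for country, users in cursor.fetchall():
    print(country, users)
```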

Facebook plans to open source Prism soon, but the company urgently needs to start generating revenue from mobile in order to supplement the money it has lost in advertising. Will it succeed? We will see.

Jasmine Ashton, November 26, 2012

Sponsored by ArnoldIT.com, developer of Augmentext

Big Data Profiling Hits Hadoop

November 21, 2012

IT News Online revealed some of the latest news featuring a leader in open source software: “Talend Simplifies Big Data Further with New Release of Enterprise Open Source Integration Platform.” Talend released version 5.2 of its next-generation integration platform, which the company says is the only one that provides a unified environment for managing the entire lifecycle of data, application, and process integration requirements. Version 5.2 adds support for NoSQL databases and data profiling for Hadoop. Talend’s biggest focus has been making the Big Data process easier:

“In its mission to democratize big data, Talend has focused extensively on solutions that make deploying and managing Apache Hadoop and related technologies simple, without requiring specific expertise in these areas. With version 5.2, Talend has taken its big data strategy a step further by adding big data profiling for Hadoop, providing companies with the ability to discover and understand data in Hadoop clusters. Among the typical problems associated with data quality are duplication, incompleteness and inconsistency, which create inefficiencies in data processing. Talend Platform for Big Data includes new capabilities for visibility into big data in all its forms and locations.”
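Talend’s profiling runs against Hadoop clusters, but the checks it automates are the familiar data quality ones named above. For a sense of what duplication, incompleteness, and inconsistency checks look like, here is a toy pandas sketch with an invented data set and rules.

```python
# Toy data-profiling sketch: duplicate, completeness, and consistency checks
# on a tiny invented data set. Talend runs this kind of profiling at Hadoop
# scale; pandas is used here only to make the checks easy to read.
import pandas as pd

records = pd.DataFrame({
    "customer_id": [101, 102, 102, 104],
    "email": ["a@example.com", None, "b@example.com", "not-an-email"],
    "country": ["US", "US", "US", "US"],
})

duplicates = records.duplicated(subset="customer_id").sum()
missing = records["email"].isna().sum()
malformed = (~records["email"].str.contains("@", na=True)).sum()

print(f"duplicate customer_ids: {duplicates}")   # 1
print(f"missing emails:         {missing}")      # 1
print(f"malformed emails:       {malformed}")    # 1
```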

Version 5.2 also includes upgrades for products built on Talend’s Unified Platform. Big Data is very complex, and products like Talend’s make it easier to leverage the data and reap the benefits. LucidWorks offers a Big Data search tool designed to surface the hidden information in Big Data, making it another strong option for the Big Data handler; LucidWorks also has the trusted name and the customer support that others cannot boast.

Whitney Grace, November 21, 2012

Sponsored by ArnoldIT.com, developer of Augmentext

dtSearch Rolls Out New Filters

November 21, 2012

Dr. Dobb’s software development website recently reported on new proprietary search features that cover online and offline data types in the article, “New dtSearch Document Filter Products.”

According to the article, dtSearch, a text retrieval software company whose products let users instantly search terabytes of text, has announced the latest release of its product line. Version 7.70 embeds improved document filters across the entire line.

The article states:

“The new version extends the document filters to add image support to Word (.doc/.docx), PowerPoint (.ppt/.pptx), Excel (.xls/.xlsx), Access (.mdb/accdb), RTF, and email files including Thunderbird (mbox/.eml) and Outlook (.pst/.msg) files. The release displays these formats showing highlighted hits in context with both text and images. The release also adds support for Japanese Ichitaro documents.

“dtSearch’s proprietary document filters support a broad range of data types from “Office” documents: MS Office, OpenOffice, RTF, PDF to emails and also MS Exchange, Outlook, Thunderbird — all with nested attachments.”

The company has received impressive reviews regarding its search power and indexing abilities. We can only assume that dtSearch 7.70 will be even better.

Jasmine Ashton, November 21, 2012

Sponsored by ArnoldIT.com, developer of Augmentext

Endeca Explanation

November 19, 2012

We’ve turned up a useful summary of Endeca’s Information Discovery system; the description occurs within a post about using integration platform CloverETL with the Endeca product. “Oracle Endeca Information Discovery—CloverETL” is posted at Saichand Varanasi’s OBIEE, Endeca and ODI Blog. After referring readers to his Endeca overview, the blogger dives into the Clover. He writes:

“Today we will see how to create Clover ETL graph and populating data which will be used by MDEX engine for reporting (Studio). Endeca Information discovery helps organization to answer quickly on relevant data of both structured and Un structured. It helps to search and discover and analysis. Information is loaded from multiple data source systems and stored in a faceted data model that dynamically supports changing data. Information discovery enables an iterative approach. Integration features a new ETL tool, The integrator (Clover ETL) that lets you extract source records from a variety of source types flat files to databases.”

Next, Varanasi walks us through an example project. Along the way, he also explains how Endeca Information Discovery functions. A happy side effect, if you will. See the post for details.
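The “faceted data model” the blogger mentions is the heart of Endeca-style discovery. As a toy illustration only, here is a Python sketch of faceted indexing and guided navigation over invented records; it is not how the MDEX engine is actually implemented.

```python
# Toy faceted data model: build an inverted index per facet, then narrow
# results by intersecting facet selections. Records and facets are invented.
from collections import defaultdict

records = [
    {"id": 1, "type": "shirt", "color": "blue", "size": "M"},
    {"id": 2, "type": "shirt", "color": "red", "size": "L"},
    {"id": 3, "type": "shoe", "color": "blue", "size": "10"},
]

# facet -> value -> set of matching record ids
facets = defaultdict(lambda: defaultdict(set))
for rec in records:
    for facet, value in rec.items():
        if facet != "id":
            facets[facet][value].add(rec["id"])

# "Guided navigation": intersect the user's facet selections.
selection = facets["type"]["shirt"] & facets["color"]["blue"]
print(sorted(selection))  # [1]

# Counts of the remaining facet values for the narrowed set drive the UI.
for value, ids in facets["size"].items():
    if ids & selection:
        print(f"size={value}: {len(ids & selection)}")
```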

Founded in 1999 and based in Cambridge, MA, Endeca was acquired by Oracle just over a year ago. The company has been at the forefront of faceted search technology, particularly for large e-commerce and online library systems.

Cynthia Murrell, November 19, 2012

Sponsored by ArnoldIT.com, developer of Augmentext

Gleanster Report Gathers Best Practices in Agile BI

November 19, 2012

There are some new signposts along the road to getting the most return on business intelligence investments. The San Francisco Chronicle shares the post, “New Gleanster Research Reveals Best Practices in Agile Business Intelligence.” The research firm recently released a 38-page report which examines practices at 367 companies. The press release tells us:

“For major corporations, Agile BI tools can supplement their core BI initiatives, providing a friendlier front end that builds on existing investments. For smaller organizations, these tools may be the only ones they need.

“This Gleansight benchmark report examines how Top Performers are utilizing Agile BI to lower the barriers to better use of data. It looks at the technologies, organizational resources and performance metrics they have implemented and how they are achieving success in wiping away at least some of the challenges associated with effective business intelligence.”

The report is available here, but registration, and an agreement to receive marketing contact from third parties, is required. It might be worth the bother; the topics covered include the following: Reasons to Implement, Value Drivers, Challenges, Performance Metrics, Success Stories, and, last but not least, the Vendor Landscape. Vendor rankings, by the way, are based on feedback from industry practitioners, not from analysts.

Business tech information firm Gleanster keeps busy publishing white papers, case studies, and research reports from a variety of sources. They gather best practices and vendor information in order to give their clients an edge. The company is headquartered in Evanston, IL.

Cynthia Murrell, November 19, 2012

Sponsored by ArnoldIT.com, developer of Augmentext
