IBM and Yahoo Hard at Work on Real-Time Data Handling

January 7, 2016

The SiliconAngle article "What You Missed in Big Data: Real-time Intelligence" speaks to the difficulties corporations face in handling ever-increasing volumes of real-time data. Recently, IBM added supplementary stream-processing services, including a machine learning engine equipped with algorithm-building capabilities. The algorithms help surface relevant information from the numerous connected devices of a single business. The article explains,

“An electronics manufacturer, for instance, could use the service to immediately detect when a sensor embedded in an expensive piece of equipment signals a malfunction and automatically alert the nearest technician. IBM is touting the functionality as a way to cut through the massive volume of machine-generated signals produced every second in such environments, which can overburden not only analysts but also the technology infrastructure that supports their work.”
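
The use case IBM describes boils down to filtering a torrent of machine signals down to the handful that demand attention. The snippet below is a minimal sketch of that idea, not IBM's service: the device names, sensor limits, and alert format are all illustrative assumptions, and in practice the thresholds would come from a learned model rather than a hard-coded table.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    device_id: str
    sensor: str
    value: float

# Hypothetical per-sensor limits; a real deployment would derive these
# from the kind of machine learning engine the article mentions.
LIMITS = {"vibration": 12.0, "temperature": 95.0}

def alerts(stream):
    """Yield only the readings that look like malfunctions, so analysts
    see a few alerts instead of every machine-generated signal."""
    for r in stream:
        limit = LIMITS.get(r.sensor)
        if limit is not None and r.value > limit:
            yield f"ALERT {r.device_id}: {r.sensor}={r.value} exceeds {limit}"

if __name__ == "__main__":
    sample = [
        Reading("press-07", "temperature", 88.2),
        Reading("press-07", "vibration", 14.9),    # out of range
        Reading("lathe-03", "temperature", 101.5), # out of range
    ]
    for msg in alerts(sample):
        print(msg)
```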

Yahoo has been working on just that issue, and recently open-sourced its engineers’ answer. In a demonstration for the press, the technology proved able to power through 100 million values in under three seconds, a workload that would typically require around two and a half minutes. The target for this sort of technology is measuring extreme numbers like visitor statistics. Accuracy takes a back seat to speed through estimation, but at such a pace the sacrifice is worth it.
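
The speed-for-accuracy trade the article describes is the hallmark of sketching: rather than counting every distinct visitor exactly, a small summary is kept and the total is estimated from it. The following is a minimal sketch of one such technique, a K-minimum-values estimator, offered purely as an illustration of the approach; it is an assumption for demonstration, not the code Yahoo released.

```python
import hashlib
import heapq

def _hash01(item: str) -> float:
    """Map an item to a pseudo-random point in [0, 1)."""
    digest = hashlib.sha1(item.encode()).digest()[:8]
    return int.from_bytes(digest, "big") / 2**64

def kmv_estimate(items, k=1024):
    """Estimate the number of distinct items by tracking only the
    k smallest hash values ever observed (a K-minimum-values sketch)."""
    smallest = []  # max-heap via negated hashes, capped at size k
    seen = set()   # hashes currently held, to skip duplicates
    for item in items:
        h = _hash01(item)
        if h in seen:
            continue
        if len(smallest) < k:
            heapq.heappush(smallest, -h)
            seen.add(h)
        elif h < -smallest[0]:
            # Replace the current largest of the k smallest hashes.
            seen.discard(-heapq.heappushpop(smallest, -h))
            seen.add(h)
    if len(smallest) < k:
        return len(smallest)            # small enough to count exactly
    return int((k - 1) / -smallest[0])  # standard KMV estimate

if __name__ == "__main__":
    visitors = (f"user-{i % 50_000}" for i in range(1_000_000))
    print(kmv_estimate(visitors))  # close to 50,000, in a single pass
```

The summary here is roughly a thousand numbers regardless of traffic volume, which is why the estimate arrives in seconds while an exact distinct count over the same stream would cost far more memory and time.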

Chelsea Kerwin, January 7, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
