PolySpot Solutions Break Silos to Deliver Information Efficiently

December 11, 2012

A recent article from Entrepreneur states what we have all been thinking over the last several years: big data is now a fact of life. Huge volumes of data are not only created by each of us on a regular basis; we also draw on that data to inform decisions in every industry imaginable. The article, “The Goliath of Big Data Meets Its David,” discusses this with regard to a potential new solution.

This solution comes from Peaxy, a new Silicon Valley business-to-business startup. The essential goal is to eradicate dependence on a particular brand of hardware or generation of server so that clients’ data can be freed from individual silos.

The article states:

By allowing the data to mingle freely in a single “namespace” composed of many servers, they say, you can glean insights from multiple blocks of data. Terranova gives the example of car manufacturers that need to marry proprietary engineering data with customer feedback in order to build accurate predictive models for vehicle maintenance problems.

Silos must be broken down; there is no doubt about that. We have seen much success in this regard from one company in particular: PolySpot. With over 100 connectors, its solutions deliver information securely across the enterprise in real time.

Megan Feil, December 11, 2012

Sponsored by ArnoldIT.com, developer of Augmentext

Searchbox API Makes Headlines

December 11, 2012

Searchbox is an API that locates internal or external documents using the power of Apache Lucene and Solr. Online semantic search for enterprises is its specialty, and now it is making headlines. Programmable Web gives Searchbox a mention in its latest API spotlight article, “API Spotlight: Shipping Simplified with TrackThis and Temando, and Lighting with Spark.”

The Searchbox component of the article is as follows:

“The Searchbox API makes it possible to locate any public or internal document that is on the Internet or a user’s intranet. The Searchbox utilizes the Apache Lucene Solr search engine to access information in your intranet, emails, internal documents, shared drives, and the cloud. Then it indexes and consolidates them all into a searchable format for seamless and quick access. To learn more about the Searchbox API visit the Searchbox site as well as the Searchbox API blog post.”
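The quoted description boils down to the inverted-index approach at the heart of Lucene and Solr: tokenize each document, then map every token to the documents containing it. As a toy illustration of that idea only (this is not Searchbox’s actual code; the documents and function names are invented), an index and a conjunctive search might look like:

```python
from collections import defaultdict

def build_index(docs):
    """Map each token to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in text.lower().split():
            index[token].add(doc_id)
    return index

def search(index, query):
    """Return ids of documents containing every query token."""
    tokens = query.lower().split()
    if not tokens:
        return set()
    results = index.get(tokens[0], set()).copy()
    for token in tokens[1:]:
        results &= index.get(token, set())
    return results

docs = {
    "memo-1": "quarterly sales report",
    "memo-2": "sales forecast for cloud services",
    "memo-3": "cloud storage policy",
}
index = build_index(docs)
print(search(index, "cloud sales"))  # → {'memo-2'}
```

Real Lucene adds analyzers, relevance scoring, and on-disk segment files on top of this core structure, but the lookup principle is the same.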

Searchbox is definitely an up-and-comer in the world of enterprise search, specifically open source enterprise search. However, there are other, more vetted options for organizations less willing to trust a newcomer. LucidWorks specializes in enterprise solutions based on Apache Lucene and Solr and has the authority of several years of experience in the field. See if LucidWorks might benefit your organization.

Emily Rae Aldridge, December 11, 2012

Sponsored by ArnoldIT.com, developer of Augmentext

Microsoft Wows with Machine Speech Translation in Real Time

December 11, 2012

Once again, we are catching up to our science fiction. The Next Web informs us about a recent leap in the field of machine translation with “Amazing: Microsoft Turns Spoken English into Spoken Mandarin—in the Same Voice.” The article includes a nine-minute video slice of a presentation by Microsoft’s Rick Rashid that is well worth the viewing time (though the exciting part really starts about halfway through). The video begins with a brief recap of the history of machine transcription and machine translation. Writer Alex Wilhelm tells us:

“In the video, the speaker explains and demonstrates improvements made to the machine understanding of his English words, which are automatically transcribed as he speaks. He then demonstrates having those words translated directly into Mandarin – if it’s actually Cantonese I’ll punish myself – text.

“This is when the fun begins. Microsoft, he says, has taken in oodles of data, and can thus have that translated Mandarin spoken. And the final kicker: he has fed the system an hour’s worth of his voice, and thus the software will speak in Mandarin, using his own tones.”

I would like to point out that, despite the write-up’s title, “his own tones” does not quite equate to “the same voice.” It is close, though. Rashid attributes the leap to the development of Deep Neural Networks, a technique patterned after human brain behavior by researchers at Microsoft Research and the University of Toronto. The shift is indeed very impressive, and it makes a future where we can all understand each other seem that much closer.

We would be remiss, however, if we failed to mention that Google can still claim some advantage in this realm. Its Google Translate has been shown in side-by-side comparisons to generate more accurate text translations than Bing Translator.

So, Google, when do you debut your instant speech translation software? We can’t wait to see what you come up with.

Cynthia Murrell, December 11, 2012

Sponsored by ArnoldIT.com, developer of Augmentext

IBM Takes Watson to the Hospital

December 11, 2012

The Inquirer recently reported on Watson’s new job in medicine in the article, “IBM Puts Watson to Work Training Doctors.”

According to the article, IBM has signed a deal with a US hospital to let its supercomputer, Watson, train doctors. Medical students will help Watson improve its understanding of medical terminology, allowing both parties to learn. The end goal is for Watson to be able to process and understand medical records.

David Ferrucci, IBM fellow and principal investigator of the Watson project, said:

“The practice of medicine is changing and so should the way medical students learn. In the real world, medical case scenarios should rely on people’s ability to quickly find and apply the most relevant knowledge. Finding and evaluating multistep paths through the medical literature is required to identify evidence in support of potential diagnoses and treatment options.”

IBM is not the only company using computers to improve the medical field. Many hospitals are signing up for data analytics software that analyzes various types of content in order to take the workload off of doctors and nurses and improve patient care.

Jasmine Ashton, December 11, 2012

Sponsored by ArnoldIT.com, developer of Augmentext

Agilex Presents Some Gloomy Info About IT Spending

December 11, 2012

The Federal Times recently reported on predictions of IT spending cuts nationwide in Nicole Blake Johnson’s article, “Experts Predict Billions Less in IT Spending.”

According to the article, the TechAmerica Foundation has recently downgraded its projected annual IT spending for the year 2017 from $85.7 billion to $73.5 billion. This is because overall discretionary spending is being cut and IT spending has a compound annual growth rate of less than 1 percent.

Agencies should plan for fewer and shorter IT contracts and set their sights on projects that will lead to cost savings.

The article states:

“Most successful will be companies that can respond to OMB’s 2014 budget guidance, which directs agencies to invest in projects that will show a return on investment within 18 months, including projects to improve citizen services or administrative efficiencies, share services, adopt cloud computing, and improve IT security and information assets.”

If the prediction is accurate, the outlook for search and content processing in 2013 may be less than rosy. Agilex Phanero is a content processing company that sells to the US government. Maybe it will thrive as commercial content processing companies wither and die?

Jasmine Ashton, December 11, 2012

Sponsored by ArnoldIT.com, developer of Augmentext

Visualization Woes: Smart Software Creates Human Problems

December 10, 2012

I am not dependent on visualization to figure out what data imply or “mean.” I have been a critic of systems that insulate the professional from the source information and data. I read “Visualization Problem.” The article focuses on a system user’s inability to come up with a mental picture or a concept. I learned:

I know I am supposed to get better with time, but it feels that the whole visualization part shouldn’t be this hard, especially since I can picture my wonderland so easily. I tried picturing my tulpa in my wonderland, in black/white voids, without any background, even what FAQ_man guide says about your surroundings, but none has worked. And I really have been working on her form for a long time.

A “tulpa” is a construct. But the key point is that the software cannot do the work of an inspired human.

The somewhat plaintive lament triggers three thoughts about the mad rush to “smart software” which converts data into high-impact visuals.

First, a user may not be able to conceptualize what the visualization system is supposed to deliver in the first place. If a person becomes dependent on what the software provides, the user is flying blind. In the case of the “tulpa” problem, the result may be a lousy output. In the case of a smart business intelligence system such as Palantir’s or Centrifuge Systems’, the result may be data which are not understood.

Second, the weak link in the shift away from “getting one’s hands dirty” by reviewing data, looking at exceptions, and choosing the processes used to generate a chart or graph is that it puts the vendor in control. My view is that users of smart software have to do more than accept the McDonald’s or KFC version of a good meal.

Third, with weak numerical literacy and a preference for “I’m feeling lucky” interfaces, the likelihood of content and data manipulation increases dramatically.

I am not able to judge a good “tulpa” from a bad one. I do know that as smart software diffuses, the problem software will try to solve is the human factor. I think that is not such a good thing. The author’s pain will lead to learning; for a vendor, that same pain is motivation to focus research and development on predictive outputs and more training-wheel functions.

I prefer a system with balance like Digital Reasoning’s: advanced technology, appropriate user controls, and an interface which permits closer looks at data.

Stephen E Arnold, December 10, 2012

Companies Need Reliable Results, Not Another Plug and Play Experiment

December 10, 2012

The name Google instantly brings internet search, Android, and mobile apps to mind, but that is just not enough for the Big G anymore. TechWeek’s article “Google Enterprise: More Than Just Apps” talks about a new device that Google representatives feel will take the enterprise by storm.

So, what is the next big step for Google? World enterprise domination via plug and play technology:

“This involves something called the ‘Google Search Appliance’ – a yellow box that can be plugged into the data center to look through and index business data. Recently launched Commerce Search is a similar project, but based in the cloud and focused on retail. A different part of the Enterprise department deals with geospatial products: Google Maps, Google Earth and the brand new Google Coordinate – the company’s first geo app to provide not just asset tracking, but the workflow management too.”

Of course this updated Google technology will be compatible with Chrome, Android, and existing Google apps, but is this plug and play device the right answer for sophisticated enterprise needs? What happens when a change is needed to match unique enterprise requirements? We have found that the mature solutions and dedicated customer service from Intrafind often meet the needs of enterprises with sophisticated requirements. Perhaps a commercial solution built on open source can better match unique enterprise search needs than a plug and play appliance.

Jennifer Shockley, December 10, 2012

Sponsored by ArnoldIT.com, developer of Augmentext

PolySpot Enables Monetization of Data with Information Delivery Networks

December 10, 2012

The question remains on the table for many businesses: when will they begin to embrace big data as a tool that can revolutionize how they do business? From the Harvard Business Review comes “What a Big-Data Business Model Looks Like,” an article perfectly suited for organizations curious about taking the plunge.

The author describes three main tools he has seen emerge to fit the needs of businesses looking to extract value from big data. The first involves utilizing data to create differentiated offerings. The second concentrates on brokering this information. The third is about building networks to deliver data where it’s needed, when it’s needed.

A quick skim through the article will point any smart business to the third and most exciting option: delivery networks are said to enable monetization of data.

The article tells us:

Content creators — the information providers and brokers — will seek placement and distribution in as many ways as possible. This means, first, ample opportunities for the arms dealers — the suppliers of the technologies that make all this gathering and exchange of data possible. It also suggests a role for new marketplaces that facilitate the spot trading of insight, and deal room services that allow for private information brokering.

Information must be in the hands or computers of those who need to use it at the moment they require the specific data. Luckily, PolySpot technologies have the capability to deliver information in this manner.

Megan Feil, December 10, 2012

Sponsored by ArnoldIT.com, developer of Augmentext

SearchHub Provides Valuable Solr Reference Guide

December 10, 2012

LucidWorks continues to invest in the open source search technology that supports the fastest-growing open source business models. One outlet for this support is SearchHub.org, a forum and user support center focusing specifically on Apache Lucene and Solr. The Solr Reference Guide is of particular interest to those who use Solr as their search platform of choice.

Consult the introduction to the guide:

“The Solr 4.0 Reference Guide describes all of the important features and functions of Apache Solr. It’s available online in the Documentation Center or to download as a PDF. Designed to provide complete, comprehensive documentation, the Solr 4.0 Reference Guide is intended to be more encyclopedic and less of a cookbook. It is structured to address a broad spectrum of needs, ranging from new developers getting started to well experienced developers extending their application or troubleshooting. It will be of use at any point in the application lifecycle, for whenever you need deep, authoritative information about Solr.”
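Among the topics the guide covers is Solr’s HTTP query interface, which accepts requests against a core’s /select handler with parameters such as q (the query) and wt (the response format). As a minimal sketch, assuming a stock Solr 4.0 install with the example core at http://localhost:8983/solr/collection1 (the helper function name here is our own), such a request URL can be assembled like this:

```python
from urllib.parse import urlencode

def solr_select_url(base, query, rows=10):
    """Build a URL for Solr's standard /select request handler."""
    params = {"q": query, "rows": rows, "wt": "json"}
    return "{}/select?{}".format(base.rstrip("/"), urlencode(params))

# Ask for up to 10 JSON-formatted matches on the title field.
url = solr_select_url("http://localhost:8983/solr/collection1", "title:search")
print(url)
# → http://localhost:8983/solr/collection1/select?q=title%3Asearch&rows=10&wt=json
```

Fetching that URL with any HTTP client returns the matching documents; the Reference Guide describes the many additional parameters (faceting, highlighting, sorting) the handler accepts.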

SearchHub is full of useful podcasts, tutorials, and reference materials. It is yet another way that LucidWorks is not just giving back to the open source community, but is an integral, interwoven part of the open source search community. If out-of-the-box solutions are more appropriate for your organization than building upon the open source components yourself, investigate LucidWorks Search.

Emily Rae Aldridge, December 10, 2012

Sponsored by ArnoldIT.com, developer of Augmentext

Mondeca Moves Into Electronic Patient Records

December 10, 2012

The healthcare world continues its creep into the twenty-first century, and now Mondeca is lending a hand with the process. The French company’s Web site announces, “Mondeca Helps to Bring Electronic Patient Record to Reality.” Tasked with implementing healthcare management systems across France, that country’s healthcare agency, ASIP Santé, has turned to Mondeca for help. The press release describes the challenge:

“The task is a daunting one since most healthcare providers use their own custom terminologies and medical codes. This is due to a number of issues with standard terminologies: 1) standard terminologies take too long to be updated with the latest terms; 2) significant internal data, systems, and expertise rely on the usage of legacy custom terminologies; and 3) a part of the business domain is not covered by a standard terminology.

“The only way forward was to align the local custom terminologies and codes with the standard ones. This way local data can be automatically converted into the standard representation, which will in turn allow to integrate it with the data coming from other healthcare providers.”

The process began by aligning the standard terminology, Logical Observation Identifiers Names and Codes (LOINC), with the related terminology common in Paris hospitals. Mondeca supported the effort with its expertise in complex organizational and technical processes, such as setting up collaborative spaces and aligning and exporting terminology.
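Terminology alignment of this kind can be pictured as a mapping table from local codes to standard ones, applied to each record as it is converted. The sketch below is purely illustrative (the local codes and the to_standard helper are invented for this example; it is not Mondeca’s implementation):

```python
# Hypothetical local-to-LOINC alignment table; the local codes are
# invented, the LOINC codes are standard lab observation identifiers.
LOCAL_TO_LOINC = {
    "HB": "718-7",    # hemoglobin
    "GLU": "2345-7",  # glucose
}

def to_standard(record):
    """Rewrite a lab record's local code as its aligned LOINC code.

    Unmapped codes are left untouched so they can be flagged for
    manual review rather than silently dropped."""
    code = record["code"]
    return {**record, "code": LOCAL_TO_LOINC.get(code, code)}

print(to_standard({"code": "HB", "value": 13.2}))
# → {'code': '718-7', 'value': 13.2}
```

Once every provider’s records pass through such an alignment step, data from different hospitals share one vocabulary and can be integrated, which is the goal the press release describes.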

Our question: Will doctors use these systems without introducing more costs and errors in the push for cost efficiency? Let us hope so.

Established in 1999, Mondeca serves clients in Europe and North America with solutions for the management of advanced knowledge structures: ontologies, thesauri, taxonomies, terminologies, metadata repositories, knowledge bases, and linked open data. The firm is based in Paris, France.

Cynthia Murrell, December 10, 2012

Sponsored by ArnoldIT.com, developer of Augmentext
