Augmenting Output for Clarity
February 12, 2011
We love it when mid-tier consulting firms “discover” a trend. A Gartner expert has revealed “Relationship Context Metadata.” Here’s the definition of the polysyllabic concept:
“Relationship context metadata explicitly describes the relationship from which the identity information was obtained and the constraints imposed by the participants in that relationship on the use and disclosure of the information.”
The consulting firm’s example is that of a credit card number. Instead of just sending the number, you could tag the number with metadata that specifies how it is to be used and what relationship will be damaged if it is misused.
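What might such a tag look like in practice? Here is a minimal sketch in Python; the field names and the constraint vocabulary are our own invention, not anything Gartner has published:

from dataclasses import dataclass, field
from typing import List

@dataclass
class RelationshipContext:
    """Hypothetical metadata describing where identity data came from
    and how the parties to that relationship allow it to be used."""
    source_relationship: str
    permitted_uses: List[str] = field(default_factory=list)
    disclosure_allowed_to: List[str] = field(default_factory=list)

@dataclass
class TaggedCardNumber:
    """A credit card number wrapped with its relationship context."""
    card_number: str
    context: RelationshipContext

payment_token = TaggedCardNumber(
    card_number="4111-1111-1111-1111",  # standard test number, not a real card
    context=RelationshipContext(
        source_relationship="cardholder <-> merchant checkout",
        permitted_uses=["single purchase authorization"],
        disclosure_allowed_to=["acquiring bank", "card network"],
    ),
)

A downstream system could then refuse any use of the number not listed in permitted_uses, which is roughly what the definition above implies.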
This mid-tier consulting firm is applying a new name, “relationship context,” to the social graph. Will we apply their metadata to information from Foursquare, Gowalla, and other social networking / geospatial services?
One of the goslings here in Harrod’s Creek asked, “Could we include a tag that means no stalking?”
Cynthia Murrell, February 12, 2011
Freebie
Not Search, But Mighty Useful
February 12, 2011
The article “50 Free iPad Apps for Business Users” at Datamation provides a helpful service. The authors have combed through the apps at the iTunes Store and compiled a list of free apps they feel are worthy of your attention:
“Whoever claimed that business software costs a bundle never heard of the free iPad app.
Indeed, the iPad is showing that you don’t need a speedy notebook full of programs to run your company on the go; all you need is a tablet with free iPad apps.”
The offerings are organized by category: productivity, utilities, remote access, and knowledge.
It does look like there are some decent entries here; do check it out. Unfortunately, though, there is not a robust search app in the lot. (Job-hunting tools don’t count.) Search just doesn’t command respect in the free iPad app world. Sigh.
Cynthia Murrell, February 12, 2011
Freebie
Microsoft Israel and the Cloud
February 11, 2011
Stories in The Globes are often wonderful, but they do go dead. Navigate to “Microsoft Israel to Recruit 100 for R&D”. The Globes reports that Microsoft Israel president Moshe Lichtman is pumping up the center’s work in cloud computing. The story said:
“The Microsoft Israel R&D center has 600 R&D employees. ‘Cloud computing is at the center of our vision. About 70% of the center’s development activity is focused on cloud computing,’ Lichtman said. ‘This year, we will complete development of the first versions of 11 products, which will be launched on the global market.’”
The Israeli branch also created a free security product that has seen millions of downloads and dominates its segment of the global market for free products. The company has also developed technologies for Bing Mobile. Another star product is a feature that enables check-in via Facebook, Foursquare, and Messenger. Users can also receive messages when they near a specific location, much like GPS-based alerts. Like Google, Microsoft is expanding its footprint in Israel.
Whitney Grace, February 14, 2011
Freebie
Comparative Searches Ding Google
February 11, 2011
Macworld’s “Study: Bing Searches More Accurate than Google’s” raises an interesting point. Bing delivers more relevant results, according to Experian Hitwise, the outfit conducting the study. Here’s the key passage:
Bing and Yahoo, which is now using Microsoft’s Bing search technology, had the highest search success rates last month, reported Experian Hitwise, an Internet monitoring firm. More than 81 percent of searches on their sites led users to visit a Web site. However, Google, the dominant player in the search market, wasn’t as successful with its January searches.
In isolation, the study is not worth too much. But in the context of other information, maybe Bing is doing something to deliver better “accuracy”.
A recent post on ZDNet titled “Google Launches Algorithm Change to Tackle Content Copying. Will It Help?” explains some of the newer measures Google has employed in its battle against spam among search results. Having already rolled out a number of tools, including a newfangled document classifier that can weed out “spammy words” from planted blog comments, Google seems to be taking this endeavor seriously. Its next step in reducing spam levels is to identify certain sites as lower quality because of an abundance of copied content or a low percentage of original content.
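Google has not published the details of its classifier or its copied-content detection. As an illustration only, here is one textbook way to flag copied content in Python, using word shingles and Jaccard overlap; the threshold is arbitrary and not anything Google has confirmed:

def shingles(text: str, k: int = 5) -> set:
    """Return the set of k-word shingles (overlapping word windows) in a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(0, len(words) - k + 1))}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: intersection size over union size."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def looks_copied(candidate: str, original: str, threshold: float = 0.5) -> bool:
    """Crude duplicate-content check: heavy shingle overlap suggests copying."""
    return jaccard(shingles(candidate), shingles(original)) >= threshold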
If you trust the findings reported on Google’s Web spam blog, the initial response to the new search engine optimization procedures is positive; of course, as with any sort of judgment, there will be controversy and complaints. Per the article, “… readers are expressing their gratefulness for the change. But others still have questions. They want to know how Google determines original content from scraped content. They’re wondering what happens to the small blog that posts something first and then is followed by larger sites that have more credibility. And what happens to sites that have a legitimate ‘mirror’ of their content?” All valid queries. Guidelines will, one hopes, be published as the refining process continues; it will then be up to website owners to comply.
In light of recent criticisms, Google has preemptively assured the Web community that the quality valuation is totally independent of a site’s use of Google ads. A valiant effort, but that probably matters little. ZDNet very delicately mentioned that the quality rulings will have an unknown impact on site operators. I would predict that once the gavel swings, be it fairly or unduly, the owners of what are deemed the ‘lower quality’ websites will likely seek an appeal, a lawsuit, or perhaps even a good old-fashioned round of trash talking.
Worth monitoring even if the notions of “precision” and “recall” are not of much interest to Experian, blog posters, and poobahs.
Stephen E Arnold, February 11, 2011
E Discovery Is Hot and Accelerating
February 11, 2011
LegalTech 2011 is over. Many announcements, according to several attendees with whom we spoke.
Technology continues to consume tasks that once occupied human hands, back when time and perspiration were the currency with which we purchased our goals. It seems the further we advance, the more we come to rely on automation, often for economic reasons. Now even the legal system is queuing up to relinquish some of the burdens litigation bears. As a result, the eDiscovery market is booming. One trend is predictive coding and its rapidly reproducing offspring.
Recommind, a California-based leader in information management software, has released Axcelerate eDiscovery with Predictive Sampling, which promises its customers new advantages in the review process. Per a Marketwire press release:
“Leading jurists have already written that the superiority of human, eyes-on review is a myth, so law firms continue to work with technology vendors to fill in much of this gap. Predictive Coding with Predictive Sampling enables users to comfortably leverage technology to attain a level of speed and accuracy that is not achievable with traditional linear review processes.”
Recommind’s INFOcus blog states that the predictive coding software enables “A more thorough, more accurate, more defensible and far more cost-effective document review.” So what more could be done to make this already revolutionary product even more appealing? The new version offers control over accuracy rates as well as the ability to designate the number of pages to be examined. Keeping the divination theme running, even the cost of an individual review can be estimated before it happens.
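Recommind does not publish its algorithms, so the sketch below only illustrates the general idea behind predictive coding with sampling: train a model on a small, attorney-reviewed seed set, score the rest of the collection, and estimate up front how many documents still merit eyes-on review and what that will cost. The scikit-learn pipeline, the threshold, and the per-document cost are our assumptions, not Recommind’s:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def predictive_coding_estimate(seed_docs, seed_labels, unreviewed_docs,
                               review_threshold=0.5, cost_per_doc=1.0):
    """Train on a reviewed seed set, then estimate how much of the
    unreviewed collection still needs human review and at what cost."""
    vectorizer = TfidfVectorizer(stop_words="english")
    X_seed = vectorizer.fit_transform(seed_docs)
    # assumes labels are 0/1, with 1 meaning "responsive"
    model = LogisticRegression(max_iter=1000).fit(X_seed, seed_labels)

    X_rest = vectorizer.transform(unreviewed_docs)
    relevance = model.predict_proba(X_rest)[:, 1]  # probability of "responsive"

    flagged = int((relevance >= review_threshold).sum())
    return {
        "docs_flagged_for_review": flagged,
        "estimated_review_cost": flagged * cost_per_doc,
    }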
The race is on to fill the demand for software that reduces the expenditure of both time and coin, especially in a climate where both seem to be in short supply. Why the boom? We think that in the uncertain economic climate litigation may be more interesting than innovation.
Sarah Rogers, February 11, 2011
Freebie
US Government, Domains, and Search
February 11, 2011
There’s been a flurry of search-related news from the US government. We have noticed that some health-related content is moving around. Documents from hearings with Health and Human Services executives returned 404 errors last week. We were able to locate the documents, but a 404 is suggestive. The FBI rolled out its new search service. Then we read about a security compliance glitch.
We found it interesting that half of the US Federal government’s sites have failed to comply with mandated security measures. In an article appearing on the NetworkWorld news site, we learned that the Office of Management and Budget issued the mandate, which requires agencies to adopt DNS Security Extensions (DNSSEC), in 2008. The piece goes on to cite a study claiming that, as of January 2011, fifty-one percent of these agencies had failed to comply.
“DNSSEC is an Internet standard that prevents hackers from hijacking Web traffic and redirecting it to bogus sites. It allows Web sites to verify their domain names and corresponding IP addresses using digital signatures and public key encryption.”
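As a rough illustration of what deploying the standard involves, the snippet below (assuming the dnspython 2.x package) merely asks whether a zone publishes DNSKEY records; that is only the first visible sign of DNSSEC, not a full validation of signatures up the chain of trust, and the domain queried is just an example:

import dns.exception
import dns.resolver  # pip install dnspython

def publishes_dnskey(domain: str) -> bool:
    """Rough DNSSEC signal: does the zone publish any DNSKEY records?"""
    try:
        answer = dns.resolver.resolve(domain, "DNSKEY")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN,
            dns.resolver.NoNameservers, dns.exception.Timeout):
        return False
    return answer.rrset is not None and len(answer.rrset) > 0

print("nasa.gov", publishes_dnskey("nasa.gov"))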
Given the importance of safeguarding data and activity online, especially at the federal level, why has the failure to adopt this precaution been so widespread? Mark Beckett, Vice President of Marketing and Product Management for Secure64, offers a view: although the number of agencies in compliance has more than doubled since last year, the low rate itself illustrates how difficult the security measure is to deploy. Beckett expects that as more parent domains and subdomains sign their zones, the market for protection will expand, producing more user-friendly DNSSEC tools. Search can be tricky if the crawlers cannot access or find the content. Alternatively, search can be even more exciting if content that should not be indexed is.
Stephen E Arnold, February 11, 2011
Freebie
Will We Pay for News Online?
February 10, 2011
BBC Mobile’s article “News Corp Launches Daily Newspaper for iPad” prompts a few questions.
The article examines the Rupert Murdoch empire’s launch of the Daily via Apple’s iTunes store. A dedicated staff of journalists has been hired, a choice that sets the Daily apart from most device-specific news sources. Alongside traditional news articles will be interactive graphics, videos, Twitter feeds, and personalized content.
The Daily will cost 99 cents a week. That doesn’t sound like much, but will consumers be willing to pay anything to access news online?
“Mr Murdoch has made no secret of his desire to get consumers paying for news on the Web. The Wall Street Journal, The Times and The Sunday Times, all owned by Murdoch, have introduced pay walls for their websites. However, the Times has since seen an 87% drop in online readership.”
We now have many sources for free, quality news coverage online, so it is no wonder readers are reluctant to pay. However, I predict that flow will be stemmed in the coming years as companies become less willing to give their work away. Nevertheless, it is difficult to generate significant revenue online. Experimentation ahead.
Cynthia Murrell, February 10, 2011
New Version of Clustify
February 10, 2011
Clustify, a document organization program created by the Pennsylvania-based company Hot Neuron LLC, has recently gained the ability to thread e-mails based on their content. This feature makes the analysis tool more efficient by reducing the number of documents that must be segregated and grouped.
An easy-to-use program with a simple interface, Clustify organizes documents into groups according to common attributes and qualities defined by the user. If users don’t want to define the clustering criteria themselves, Clustify can scan documents for key words and themes and designate a “representative document” for each cluster, making it easy to find files similar to that representative.
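Hot Neuron has not published Clustify’s internals, so the sketch below only shows the general pattern described above using scikit-learn: TF-IDF features, k-means groups, and the document nearest each cluster center standing in as the “representative document.” The library choice and the cluster count are our assumptions:

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import pairwise_distances_argmin_min

def cluster_with_representatives(documents, n_clusters=3):
    """Group documents by textual similarity and pick one representative per group."""
    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(documents)

    kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)

    # The document closest to each cluster center stands in for that cluster.
    rep_indices, _ = pairwise_distances_argmin_min(kmeans.cluster_centers_, X)

    return kmeans.labels_, [documents[i] for i in rep_indices]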
The company was set up in 2000 by Bill Dimm, a graduate of Cornell University who holds a Ph.D. in theoretical elementary particle physics. Clustify’s customers include New Jersey Legal and the Digital Archive.
Ryan Compton, February 10, 2011
Freebie
Reading the Cloud
February 10, 2011
At the recent New England Database Summit held at MIT, a popular topic was the cloud revolution and pundits’ efforts to paint a bright color on its grayish lining.
One speaker in particular, UMass Senior Researcher Emmanuel Cecchet, introduced a “system focused on dynamic provisioning of database resources in the cloud.” Named for the now-noteworthy sheep, Dolly is database platform-agnostic and uses virtualization-based replication to spawn database replicas efficiently. The research, a joint effort between Cecchet, a colleague, and two graduate students, identifies flaws in the way current databases engage cloud services. The group claims their creation will correct those issues, for example by improving efficiency under metered pricing.
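Dolly’s actual provisioning logic is not reproduced here. As a purely hypothetical sketch of what “dynamic provisioning under metered pricing” can mean, consider a controller that sizes the replica pool from the observed query load while respecting an hourly budget; every name and number below is invented for illustration:

import math

def replicas_needed(queries_per_second: float,
                    capacity_per_replica_qps: float = 500.0,
                    cost_per_replica_hour: float = 0.50,
                    hourly_budget: float = 10.0) -> int:
    """Hypothetical provisioning rule: enough replicas to absorb the load,
    capped by what the metered-pricing budget allows, never fewer than one."""
    wanted = math.ceil(queries_per_second / capacity_per_replica_qps)
    affordable = int(hourly_budget // cost_per_replica_hour)
    return max(1, min(wanted, affordable))

# 1,800 queries per second wants 4 replicas; a $10/hour budget at $0.50 per
# replica-hour allows 20, so the controller would spawn 4.
print(replicas_needed(1800))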
Another topic covered at the conference was the increasing strain cloud computation places on databases. James Starkey, a former MySQL designer and the founder of NimbusDB, proposes an SQL-based relational database that shares the workload among varied clouds. Some interesting new terms are tossed out, all of which can be found in the linked presentation.
While both presenters have prepared versions for release, no date has been set, leaving the industry and users alike to speculate on the success of these endeavors. We’ve got the hype; now we just need the technology to back it up. Amazon is taking Oracle to the cloud. Salesforce is moving with Database.com. There is progress. Let’s hope that database Dolly is more robust than cloned Dolly.
Stephen E Arnold, February 10, 2011
Freebie