
2012: Enterprise Search Yields to Metadata?

October 30, 2011

Oh, my. The search dragon has been killed by metadata.

You might find yourself on an elevator ready to get off on a specific floor. The rest of your trip starts from that point and that point only. The same is true for learning, conversing, and just about anything else: we all have a particular place where we want to enter the conversation. The MSDN Microsoft Enterprise Content Management (ECM) Team Blog's recent posting, "Taxonomy: Starting from Scratch," was a breath of fresh air in the way it addressed every reader, no matter what floor each needed.

For novices to the Managed Metadata Service, a service providing tools to foster a rich corporate taxonomy, the article recommends a starting point: "Introducing Enterprise Metadata Management."

According to the article, more seasoned users are reminded to point their browsers toward import capabilities. More specific needs, and the links to go with them, are addressed as well.

The article recommends the following for clients who need a comprehensive understanding of both common and specific corporate terms. The author, Ryan Duguid, states:

“The General Business Taxonomy consists of around 500 terms describing common functional areas that exist in most businesses.  The General Business Taxonomy can be imported in to the SharePoint 2010 term store within minutes and provides a great starting point for customers looking to build a corporate vocabulary and take advantage of the Managed Metadata Service.”
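For readers who want to see what such an import involves, here is a minimal sketch in Python that generates a term store import file. The column layout follows the sample CSV the SharePoint 2010 Term Store Management Tool accepts; the terms themselves are invented placeholders for illustration, not WAND's actual General Business Taxonomy.

```python
import csv

# Header follows the sample import file the SharePoint 2010 Term Store
# Management Tool accepts (one term set per file, terms up to 7 levels).
HEADER = ["Term Set Name", "Term Set Description", "LCID",
          "Available for Tagging", "Term Description",
          "Level 1 Term", "Level 2 Term", "Level 3 Term",
          "Level 4 Term", "Level 5 Term", "Level 6 Term", "Level 7 Term"]

def row(term_set="", description="", *levels):
    """Pad a row out to the full 12-column layout."""
    padded = list(levels) + [""] * (7 - len(levels))
    return [term_set, description, "", "True", ""] + padded

# Placeholder terms for illustration only -- not WAND's taxonomy.
rows = [
    row("Business Functions", "Common functional areas"),
    row("", "", "Finance"),
    row("", "", "Finance", "Accounts Payable"),
    row("", "", "Human Resources"),
    row("", "", "Human Resources", "Recruiting"),
]

with open("business_functions.csv", "w", newline="") as handle:
    writer = csv.writer(handle, quoting=csv.QUOTE_ALL)
    writer.writerow(HEADER)
    writer.writerows(rows)
```

Once generated, a file like this can be imported from a group's menu in the Term Store Management Tool.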

Overall, this article is worth tucking away for the day you need information on WAND, SharePoint, or metadata and taxonomy in general; its directness and the accessible next steps its links offer make it a keeper.

Megan Feil, October 30, 2011

Sponsored by Pandia.com

SharePoint Search Best Practices

October 27, 2011

SharePoint search tips are of particular interest to us here at Beyond Search. We strive to sort the chaff from the wheat, and sometimes turning to the source material is the best way to do that.

We noted a quite useful series of best practice articles on Microsoft's own TechNet Web site. Navigate to "Best Practices for Search in SharePoint Server 2010." The article explains the best methods for enterprise search, and it applies to both SharePoint Server 2010 and Microsoft Search Server 2010.

What we like about this article is that it outlines the best methods without beating around the bush. As with many SharePoint plans, there's a simple-to-follow list:

  1. Plan the deployment
  2. Start with a well-configured infrastructure
  3. Manage access by using Windows security groups or by using role claims for forms-based authentication or authentication using a Security Assertion Markup Language (SAML) security token
  4. Defragment the search database
  5. Monitor SQL Server latency
  6. Test the crawling and querying subsystems after you change any configuration or apply updates
  7. Review the antivirus policy

Each step is given its own section with additional detail about how to put the ideas into practice.
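Steps four and five invite some automation. Below is a rough sketch, assuming Python with the pyodbc package and read access to the SQL Server instance hosting the search databases; the server and database names are placeholders, since SharePoint generates its own. It flags fragmented indexes, the usual trigger for the defragmentation step the article recommends.

```python
import pyodbc

# Placeholder connection string -- point it at the SQL Server instance
# hosting the SharePoint search databases (names vary per farm).
CONN = ("DRIVER={SQL Server};"
        "SERVER=sql01;DATABASE=Search_Service_CrawlStore;"
        "Trusted_Connection=yes;")

# sys.dm_db_index_physical_stats reports fragmentation per index;
# indexes past roughly 30% are the usual rebuild candidates.
QUERY = """
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.index_id > 0
  AND ips.avg_fragmentation_in_percent > 30
ORDER BY ips.avg_fragmentation_in_percent DESC;
"""

with pyodbc.connect(CONN) as conn:
    for table, index, fragmentation in conn.cursor().execute(QUERY):
        print(f"{table}.{index}: {fragmentation:.1f}% fragmented")
```

For step five, the same connection could sample SQL Server's wait statistics (sys.dm_os_wait_stats) to keep an eye on latency.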

What we noted about this article is that it is an official Microsoft document.

We want to include our own best practice. When it comes to making findability bring smiles to SharePoint users' faces, we rely on SurfRay Ontolica to deliver SharePoint 2010 search.

Whitney Grace, October 27, 2011

SurfRay


Google and the Perils of Posting

October 21, 2011

I don't want to make a big deal out of a simple human mistake from a button click. I just had eye surgery, and it is a miracle that I can [a] find my keyboard and [b] make any function on my computers work.

However, I did notice this item this morning and wanted to snag it before it magically disappeared due to mysterious computer gremlins. The item in question is "Last Week I Accidentally Posted", via Google Plus at this url. I apologize for the notation style, but Google Plus posts come with the weird use of the "+" sign, which is a killer when running queries on some search systems. Also, there is no title, which means this is more of a James Joyce type of writing than a standard news article or even a blog post from the addled goose in Harrod's Creek.

To get some context you can read my original commentary in "Google Amazon Dust Bunnies." My focus in that write up is squarely on the battle between Google and Amazon, which I think is a more serious confrontation than the unemployed English teachers, aging hippies turned consultants, and the failed yet smarmy Web masters who have reinvented themselves as "search experts" believe.

Believe me, Google versus Amazon is going to be interesting. If my research is on the money, the problems between Google and Amazon will escalate to and may surpass the tension that exists between Google and Oracle, Google and Apple, and Google and Viacom. (Well, Viacom may be different because that is a personal and business spat, not just big companies trying to grab the entire supply of apple pies in the cafeteria.)

In the Dust Bunnies write up, I focused on the management context of the information in the original post and the subsequent news stories. In this write up, I want to comment on four aspects of this second post about why Google and Amazon are both so good, so important, and so often misunderstood. If you want me to talk about the writer of these Google Plus essays, stop reading. The individual’s name which appears on the source documents is irrelevant.

1. Altering or Idealizing What Really Happened

I had a college professor, Dr. Philip Crane, who told us in history class in 1963, "When Stalin wanted to change history, he ordered history textbooks to be rewritten." I don't know whether the anecdote is true. Dr. Crane went on to become a US congressman, and you know how reliable those folks' public statements are. What we have in the original document and this apologia is a rewriting of history. I find this interesting because the author could have used other methods to make the content disappear. My questions: "Why not?" And, "Why revisit what was a pretty sophomoric tirade involving a couple of big companies?"

2. Suppressing Content with New Content

One of the quirks of modern indexing systems such as Baidu, Jike, and Yandex is that once content is in the index, it can persist. As more content on a particular topic accretes "around" an anchor document, the document becomes more findable. What I find interesting is that despite the removal of the original post, the secondary post continues to "hook" to discussions of that original post. In fact, the snippet I quoted in "Dust Bunnies" comes from a secondary source. I have noted, and adapted to, "good stuff" disappearing as a primary document. The only evidence of the document's existence is the secondary references. As these expand, the original item becomes more visible and more difficult to suppress. In short, the author of the apologia is ensuring the findability of the gaffe. Fascinating to me.

3. Amazon: A Problem for Google


Gain Power, Lose Control? A Search Variant

October 20, 2011

The future of technology is, as always, fascinating: personal virtual assistants, customized search results, and big changes to information appliances. However, the future that Silicon Valley giants like Apple, Google, and Facebook are creating will bring a mix of unique benefits and some bad results.

It seems that the more advanced and powerful technology becomes, the more control users lose. We learn more in Datamation’s article, “How Apple, Google and Facebook Will Take Away Your Control,” which tells us:

“The more advanced this technology becomes, the bigger the decisions we’ll rely on them to make for us. Choices we now make will be “outsourced” to an unseen algorithm. We’ll voluntarily place ourselves at the mercy of thousands of software developers, and also blind chance. We will gain convenience, power and reliability. But we will lose control.”

Personal computers will no longer need to be maintained or customized. Personal assistants, like the iPhone 4S's Siri, will place our words in context and learn what we "want." Search algorithms will continue to customize results to user attributes and actions.

Is the gain in convenience and reliability that we get from these shiny new toys worth it? Or is the shine just a distraction from the fact that we lose all control in search and technological decision making? I am not so sure the good will outweigh the bad in this scenario, but I fear that we may be stuck in the cycle.

Andrea Hayden, October 20, 2011

Sponsored by Pandia.com


Lucid Imagination: Open Source Search Reaches for Big Data

September 30, 2011

We are wrapping up a report about the challenges "big data" poses to organizations. Perhaps the most interesting outcome of our research is that there are very few search and content processing systems which can cope with the digital information required by some organizations. Three examples merit listing before I comment on open source search and "big data".

The first example is the challenge of filtering information produced within the organization and by the organization's staff, contractors, and advisors. We learned in the course of our investigation that the promises of processing updates to Web pages, price lists, contracts, sales and marketing collateral, and other routine information are largely unmet. One of the problems is that the disparate content types have different update and change cycles. The most widely used content management system, based on our research results, is SharePoint, and SharePoint is not able to deliver a comprehensive listing of content without significant latency. Fixes are available, but these are engineering tasks which consume resources. Cloud solutions do not fare much better, once again due to latency. The bottom line is that for information produced within an organization, employees are mostly unable to locate information without a manual double check. Latency is the problem. We did identify one system which delivered documented latency across disparate content types of 10 to 15 minutes. That solution is available from Exalead; the other vendors' systems were not able to match this performance in putting fresh, timely information produced within an organization in front of system users. Shocked? We were.


Reducing latency in search and content processing systems is a major challenge. Vendors often lack the resources required to solve a "hard problem," so "easy problems" are positioned as the key to improving information access. Is latency a popular topic? A few vendors do address the issue; for example, Digital Reasoning and Exalead.
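A latency figure like the 10 to 15 minutes cited above is straightforward to check for yourself. Here is a minimal sketch, assuming a search front end that accepts a keyword query over HTTP; the URL and marker term are hypothetical. Publish a document containing a unique marker term, then poll until the term is findable.

```python
import time
import requests  # assumed HTTP client; any would do

SEARCH_URL = "http://intranet.example.com/search"  # hypothetical endpoint

def index_latency_minutes(marker, timeout_minutes=60, poll_seconds=60):
    """Poll the search endpoint until a freshly published marker term
    becomes findable; return elapsed minutes, or None on timeout."""
    started = time.time()
    deadline = started + timeout_minutes * 60
    while time.time() < deadline:
        response = requests.get(SEARCH_URL, params={"q": marker}, timeout=10)
        if marker in response.text:
            return (time.time() - started) / 60.0
        time.sleep(poll_seconds)
    return None

# Usage: publish a document containing a unique marker such as
# "latencyprobe-20111004" first, then call:
#   index_latency_minutes("latencyprobe-20111004")
```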

Second, when organizations tap into content produced by third parties, the latency problem becomes more severe. There is the issue of the inefficiency and scaling of frequent index updates. But the larger problem is that once an organization "goes outside" for information, additional variables are introduced. In order to process the broad range of content available from publicly accessible Web sites, or the specialized file types used by certain third party content producers, connectors become a factor. Most search vendors obtain connectors from third parties. These work pretty much as advertised for common content types such as Lotus Notes. However, when one of the targeted Web sites, such as a commercial news service or a third party research firm, makes a change, the content acquisition system cannot acquire content until the connectors are "fixed." No problem, as long as the company needing the information is prepared to wait. In my experience, broken connectors mean another variable. Again, no problem, unless critical information needed to close a deal is overlooked.
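One cheap defense against silently broken connectors is a freshness watchdog. The sketch below is a hypothetical helper, not any vendor's API: given the newest indexed timestamp per source, it flags sources that have gone quiet longer than a chosen tolerance.

```python
from datetime import datetime, timedelta

# Assumed tolerance: how stale a source may be before we suspect
# its connector has broken. Tune per source's update cycle.
STALE_AFTER = timedelta(hours=6)

def stale_sources(newest_by_source, now=None):
    """Given a mapping of source name -> timestamp of its newest
    indexed document, return the sources that look stale."""
    now = now or datetime.utcnow()
    return sorted(src for src, newest in newest_by_source.items()
                  if now - newest > STALE_AFTER)

# Usage with timestamps pulled from the index (values illustrative):
# stale_sources({"commercial_news_feed": datetime(2011, 9, 29, 8, 0),
#                "sharepoint_intranet":  datetime(2011, 9, 30, 7, 45)})
```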

