A Detailed Look at SharePoint 2013

September 25, 2014

If you’re looking to pull back the curtain on SharePoint, check out “Deep-Dive of Search in SharePoint 2013, Office 365 and SharePoint Online ‘From the Trenches’” at EPC Group’s blog. The company has been implementing SharePoint and Office 365 hybrid deployments for years and is highly regarded by many SharePoint analysts. The introduction to the detailed article tells us:

“In this blog post, EPC Group’s Sr. Search Architects will cover the key service applications and services that power SharePoint 2013, Office 365 and SharePoint Online’s search to enable your organization’s data to easily be found on-demand as well to enable the accuracy of your search results.”

The first section lists SharePoint’s search applications and related services, and notes some things to keep in mind. For example, both “federated search” and “scopes” are now known as “result sources.” Also, a default crawl account must be established; the post explains:

“In order for search to properly work, the SharePoint 2013 Search service must configure a default crawl account which is also referred to as the default content access account. This account must be an active, Active Directory Domain Services domain account. This account should not be setup as an individual or a specific person in IT as EPC Group has seen SharePoint search issues caused by this account being deactivated and an entire organization’s SharePoint search cease to work until the account issue was resolved.”

The article delves into detail on the platform’s components: Search, Crawl, Content Processing, Analytics Processing, Search Administration, Search Index, Search Query, and Search Diagnostics. The flow charts and bulleted lists make this an easy resource to reference; I’d recommend bookmarking it to anyone who maintains a SharePoint system.

Cynthia Murrell, September 25, 2014

Sponsored by ArnoldIT.com, developer of Augmentext

Elasticsearch Optimization Tips

September 25, 2014

One of the Elasticsearch experts at Found shares some of his wisdom in “Optimizing Elasticsearch Searches.” Writer and open source enthusiast Alex Brasetvik emphasizes that Elasticsearch often offers several ways to approach a problem, and that his suggestions can lead to improved performance. The post begins with a look at the way the platform’s filters work:

“Understanding how filters work is essential to making searches faster. A lot of search optimization is really about how to use filters, where to place them and when to (not) cache them….

“This is the key property of filters: the result will be the same for all searches, hence the result of a filter can be cached and reused for subsequent searches. Caching them is quite cheap, as you can store them as a compact bitmap. When you search with filters that have been cached, you are essentially manipulating in-memory bitmaps – which is just about as fast as it can possibly get.

“A rule of thumb is to use filters when you can and queries when you must: when you need the actual scoring from the queries.”
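
To make the filter-versus-query rule concrete, here is a minimal sketch using the Python Elasticsearch client and the 1.x-era “filtered” query that was current when Brasetvik’s post was written; later versions express the same idea with a bool query’s filter clause. The index name, field names, and documents are assumptions made for illustration, not anything taken from the post.

    # A minimal sketch of "filters when you can, queries when you must."
    # Assumptions: an "articles" index with title, status, and year fields,
    # and an Elasticsearch node reachable on localhost.
    from elasticsearch import Elasticsearch

    es = Elasticsearch()

    body = {
        "query": {
            "filtered": {  # Elasticsearch 1.x syntax; newer versions use a bool query
                # Score only the part that needs relevance ranking.
                "query": {"match": {"title": "optimizing searches"}},
                # Exact-match conditions go in filters, which can be cached as bitmaps.
                "filter": {
                    "bool": {
                        "must": [
                            {"term": {"status": "published"}},
                            {"range": {"year": {"gte": 2013}}},
                        ]
                    }
                },
            }
        }
    }

    results = es.search(index="articles", body=body)
    for hit in results["hits"]["hits"]:
        print(hit["_score"], hit["_source"].get("title"))

The point of the split is that the term and range conditions return the same result for every search, so Elasticsearch can cache them as bitmaps, while only the match clause pays the cost of scoring.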

Brasetvik goes on to elaborate on points such as effective filter usage, combining filters, acceleration filters, aggregation issues, scoring, and important things to avoid. The helpful post concludes with a list of further resources.

Cynthia Murrell, September 25, 2014

Sponsored by ArnoldIT.com, developer of Augmentext

Automating Data with SharePoint to Boost Efficiency

September 25, 2014

Automating data collection and reporting with SharePoint in order to save cost and time is the subject of an upcoming webinar, “SharePoint Automates EHS Programs: Easy, Flexible, Powerful.” Scheduled for October 1st, the free webinar focuses on how environmental, health, and safety managers can streamline data collection, processing, and reporting. Read the details in the article, “Automate EHS Data Collection & Reporting with Microsoft SharePoint to Save Time & Cost is Subject of October 1st Webinar.”

The press release says:

“Environmental, health and safety programs require the ongoing routine tasks of data collection, data processing, data analysis, corrective action tracking, and report generation. The essentially manual and time-consuming process places a significant strain on already stretched EHS resources. However, with the use of Microsoft SharePoint — already available in many companies and institutions — EHS managers can automate these tasks to cut both processing time and costs.”

Stephen E. Arnold has a vested interest in SharePoint news and events. His career is focused on following the latest in search, and he makes his findings available via ArnoldIT.com. His SharePoint feed is particularly helpful for users who need to keep up with the latest SharePoint news, tips, and tricks.

Emily Rae Aldridge, September 25, 2014

New York Times Online: An Inside View

September 24, 2014

Check out the presentation “The Surprising Path to a Faster NYTimes.com.”

I was surprised at some of the information in the slide deck. For one thing, I thought the New York Times first went online in the 1970s via LexisNexis.

This is not money. See http://bit.ly/1rus9y8

I thought that was an exclusive deal and reasonably profitable for both LexisNexis and the New York Times. When the newspaper broke off that exclusive to do its own thing, the revenue hit on the New York Times was immediate. In addition, the decision had significant cost implications for the newspaper.

The New York Times needed to hire people who could allegedly create an online system. The newspaper had to license software, write code, hire consultants, and maintain computers that were not designed to set type or organize circulation. The New York Times had to learn on the fly about converting content for online processing. Learning that one does not know anything after thinking one knew everything is a very, very inefficient way to get into the online business. In short, blowing off the LexisNexis deal added significant initial and then ever increasing ongoing costs to the New York Times Co. I don’t think anyone at the New York Times has ever sat down to figure out the cost of that decision to become the Natty Bumppo of the newspaper publishing world.

I had heard that in the 1970s the newspaper raked in seven figures a year while LexisNexis did the heavy lifting. Yep, that included figuring out how to put the newspaper’s content on tape into a suitable form for LexisNexis’ mainframe system. Figuring this out inside the New York Times in the early 1990s made this sound: Crackle, crackle, whoosh. That is the sound of a big company burning money not for a few months but for DECADES, folks. DECADES.

Photo from US Fish and Wildlife.

When the newspaper decided that it could do an online service itself and presumably make more money, the newspaper embarked on the technical path discussed in the slide deck. Few recall that the fellow who set up the journal Online worked on the online version of the newspaper. I recall speaking to that person shortly after he and the newspaper parted ways. He did not seem happy with budgets, technology, or vision. But, hey, that was decades ago.

How some information companies solve common problems with new tools. Image thanks to Englishrussia.com at http://bit.ly/1ps0MPF.

In the slide deck, we get an insider’s view of trying to deal with the problem of technical decisions made decades ago. What’s interesting is that the cost of the little adventure by the newspaper does not reflect the lost revenue from the LexisNexis exclusive. The presentation does illustrate quite effectively how effort cannot redress technical decisions made in the past.

This is an infrastructure investment problem. Unlike a physical manufacturing facility, an information-centric business is difficult to re-engineer. There is the money problem. It costs a lot to rip and replace or put up a new information facility and then cut it over when it is revved and ready. But information-centric businesses have another problem. Most succeed by virtue of luck. The foundation technology is woven into the success of the business, but in ways that are often not replicable.

The New York Times killed off the LexisNexis money flow. Then it had to figure out how to replicate that LexisNexis money flow and generate a bigger profit. What happened? The New York Times spent more money creating the various iterations of the Times Online, lost the LexisNexis money, and became snared in the black hole of trying to figure out how to make online information generate lots of dough. I am suggesting that the New York Times may be kidding itself with the new iteration of the Times Online service.

Getty Art: Is There a Catch?

September 24, 2014

Navigate to “100,000 Digitized Art History Materials from the Getty Research Institute Now Available in the Digital Public Library of America.” Interesting collection. Our view is that a person may want to verify that there are no fees, encumbrances, or late-arriving letters from lawyers informing one of a copyright violation. It’s not that I don’t trust the Getty folks, but I think prudence is appropriate despite the warm, rah-rah, cheery words in the announcement.

Stephen E Arnold, September 25, 2014

Watson and Its API

September 24, 2014

Short honk: Attention, Watson fans. Check out the documentation “Example Post for Answers with Evidence.” Put your code hat on.
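
As a rough sketch of what such a post might look like from Python, consider the hedged example below. The endpoint URL, credentials, and payload field names are placeholders chosen for illustration only; the actual contract is the one defined in IBM’s documentation.

    # Hypothetical sketch: post a question and ask for supporting evidence.
    # The URL, credentials, and field names are placeholders, not IBM's actual API.
    import json
    import requests

    WATSON_URL = "https://example.com/watson/v1/question"  # placeholder endpoint

    payload = {
        "question": {
            "questionText": "What is the capital of Kentucky?",
            "evidenceRequest": {"items": 3},  # request supporting evidence passages
        }
    }

    response = requests.post(
        WATSON_URL,
        data=json.dumps(payload),
        headers={"Content-Type": "application/json"},
        auth=("username", "password"),  # placeholder credentials
        timeout=30,
    )
    response.raise_for_status()

    # Print each candidate answer with its confidence score (assumed field names).
    for answer in response.json().get("question", {}).get("answers", []):
        print(answer.get("confidence"), answer.get("text"))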

Stephen E Arnold, September 25, 2014

Google X: Another Wizard Blasts Off with a Tether

September 24, 2014

I read “Google X Founder Sebastian Thrun Has Left His Role As Google VP And Fellow.” Google’s moon shot research facility sent Babak Parviz (also known as Amirparviz) packing. Dr. Parviz landed at Amazon, not far from Microsoft, where he worked on Microsoft’s contact lens project.

Now Sebastian Thrun (yep, the online learning, Udacity guy) has left the mothership. He has a tether as an advisor. The article reports:

Thrun has been in more of an advisory role at Google for a while now, with Chris Urmson leading the self-driving car project, and Ivy Ross leading Glass. Astro Teller continues to run Google X.

Astro is related to Edward Teller, a scientist of note.

What’s with the Google X operation? For something that is supposed to be really secret, the departure of high level experts seems to be a bit of a secrecy risk.

The write-up mentions a number of secret Google X projects, including the mysterious “indoor localization” operation and Flux. The Loon balloons are ready to float over various countries. How will some countries react to Loons? Maybe with a demonstration of Su-27 and Su-35 firepower?

The Google X outfit is of interest to me because of the very non-secret relationship between a Google founder and a certain marketer. The marketer may have had a bit of a re-entry problem earlier this year.

Google X has impact. Some of it may not be what the doctor ordered.

Ah, I long for the good old days of precision and recall. Technological revolutions, marital discord, and secrecy leakage are indications of some interesting management methods.

Stephen E Arnold, September 24, 2014

Artificial Intelligence Text for Free

September 24, 2014

Short honk: Artificial intelligence is in the news. If you want to brush up on your expertise, you can download Artificial Intelligence: Foundations of Computational Agents by David Poole and Alan Mackworth. Although published in 2010, the book is quite useful. Get your copy at this link http://bit.ly/1sVhWaq.

Stephen E Arnold, September 24, 2014

Red Hat: The Cloud Is the Future

September 24, 2014

I read “Red Hat CEO Announces a Shift from Client-Server to Cloud Computing.” With Red Hat the poster child for the economic viability of an open source business model, this shift seems to mark a break with Red Hat’s past focus.

The article reports:

In case you haven’t gotten the point yet, Whitehurst [Red Hat big gun] states, “We want to be the undisputed leader in enterprise cloud.” In Red Hat’s future, Linux will be the means to a cloud, not an end unto itself.

No problem with this move. Most of the organizations with which I have contact bemoan the cost of on-premises computing. The cloud, as I understand their MBA-tinged reasoning, is cheaper. Cut back on staff, eliminate the expensive weekend triage sessions with engineers who charge more than roving physicians in New Jersey, and avoid the hassles of human resources professionals who complain about body shops, background checks, and turnover: these themes surface.

The move should be okay for Red Hat. The company is heading in a new direction, and existing customers should be fine for the foreseeable future.

On a related note, I was scanning one of the ever less visited LinkedIn enterprise search bulletin boards. What did I see? A brave soul was looking for a hosted version of Solr, presumably for its facets and perceived zippy performance.

In one of the comments, an “expert” said that the Lucid Works cloud offering was no longer available. The mention of Lucid Works evokes from me the thought, “Really?”

I suppose this is an example of contrarianism, but if the statement were true, maybe Lucid Works knows something that has eluded Red Hat? Interesting question. My hunch is that Red Hat knows what it is doing.

Stephen E Arnold, September 23, 2014

Federal Agencies Perpetually Battle Connectivity Loss

September 24, 2014

This may be stating the obvious, but ComputerWorld declares that “IT Outages Are an Ongoing Problem for the U.S. Government.” The article cites a recent report sponsored by Symantec and performed by MeriTalk, which runs a network for government IT workers. Though the issues that originally plagued HealthCare.gov were their own spectacular kettle of fish, our federal government’s other computer networks are no paragons of efficiency. Writer Patrick Thibodeau tells us:

“Specifically, the survey found that 70% of federal agencies have experienced downtime of 30 minutes or more in a recent one-month period. Of that number, 42% of the outages were blamed on network or server problems and 29% on Internet connectivity loss….

“The report is interesting because it surveys two distinct government groups, 152 federal ‘field workers,’ or people who work outside the office, and 150 IT professionals. Because the field workers are outside the office, some of the outages may be the result of local connectivity problems at either a hotel, home or other remote site. But, overall, the main reason for loss of access to data was a government outage.”

The write-up goes on to note that most workers can continue with their tasks via some other method, such as by telephone (48%), through their personal devices (33%), or with some other workaround like Google Apps (24%). Imagine how much more efficient government workers could be if they were not frequently required to get creative just to do basic parts of their jobs.

Cynthia Murrell, September 24, 2014

Sponsored by ArnoldIT.com, developer of Augmentext
