How the Cloud Might Limit SharePoint Functionality

June 25, 2015

In the highly anticipated SharePoint Server 2016, on-premises, cloud, and hybrid functionality are all emphasized. However, some are beginning to wonder whether functionality can suffer depending on the deployment model chosen. Read all the details in the Search Content Management article, “How Does the Cloud Limit SharePoint Search and Integration?”
The article begins:
“All searches are not created equal, and tradeoffs remain for companies mulling deployment of the cloud, on-premises and hybrid versions of Microsoft’s collaboration platform, SharePoint. SharePoint on-premises has evolved over the years with a focus on customization and integration with other internal systems. That is not yet the case in the cloud with SharePoint Online, and there are still unique challenges for those who look to combine the two products with a hybrid approach.”
The article goes on to say that there are certain restrictions, especially around search customization, in the SharePoint Online deployment. Furthermore, a good amount of configuration is required to maximize search in the hybrid version. To keep up to date on how this might affect your organization, and on the required workarounds, stay tuned to ArnoldIT.com. Stephen E. Arnold is a longtime search professional, and his work on SharePoint is conveniently collected in a dedicated feed to maximize efficiency.
Emily Rae Aldridge, June 25, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Deep Learning System Surprises Researchers

June 24, 2015

Researchers were surprised when their scene-classification AI performed some independent study, we learn from Kurzweil’s article, “MIT Deep-Learning System Autonomously Learns to Identify Objects.”

At last December’s International Conference on Learning Representations, a research team from MIT demonstrated that their scene-recognition software was 25-33 percent more accurate than its leading predecessor. They also presented a paper describing the object-identification tactic their software chose to adopt; perhaps this is what gave it the edge. The paper’s lead author, MIT computer science and engineering associate professor Antonio Torralba, ponders the development:

“Deep learning works very well, but it’s very hard to understand why it works — what is the internal representation that the network is building. It could be that the representations for scenes are parts of scenes that don’t make any sense, like corners or pieces of objects. But it could be that it’s objects: To know that something is a bedroom, you need to see the bed; to know that something is a conference room, you need to see a table and chairs. That’s what we found, that the network is really finding these objects.”
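The kind of probing Torralba describes, asking which inputs a hidden unit responds to most strongly, can be illustrated with a toy sketch. Everything here (the weights and the “bed” and “table and chairs” labels) is made up for illustration; this is not the MIT network:

```python
# Toy probe of a network's internal representation: for each hidden
# unit, find which labeled input pattern activates it most strongly.
# (Weights are hand-picked for illustration, not a trained model.)

def relu(x):
    return max(0.0, x)

def hidden_activations(weights, inputs):
    """Dot each hidden unit's weight vector with the input, then ReLU."""
    return [relu(sum(w * x for w, x in zip(unit, inputs))) for unit in weights]

# Two hidden units; imagine unit 0 tuned to "bed-like" features and
# unit 1 to "table-and-chairs-like" features (hypothetical labels).
weights = [
    [1.0, 0.2, -0.5],   # unit 0
    [-0.3, 0.9, 1.1],   # unit 1
]

# Labeled probe inputs standing in for image patches.
probes = {
    "bed":              [1.0, 0.1, 0.0],
    "table_and_chairs": [0.0, 1.0, 1.0],
}

# For each unit, report the probe that drives it hardest.
for i in range(len(weights)):
    best = max(probes, key=lambda name: hidden_activations(weights, probes[name])[i])
    print(f"unit {i} responds most to: {best}")
```

The same loop, run over millions of real image patches instead of three toy vectors, is essentially how researchers check whether a scene network has quietly learned object detectors.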

Researchers being researchers, the team is investigating their own software’s initiative. The article tells us:

“In ongoing work, the researchers are starting from scratch and retraining their network on the same data sets, to see if it consistently converges on the same objects, or whether it can randomly evolve in different directions that still produce good predictions. They’re also exploring whether object detection and scene detection can feed back into each other, to improve the performance of both. ‘But we want to do that in a way that doesn’t force the network to do something that it doesn’t want to do,’ Torralba says.”

Very respectful. See the article for a few more details on this ambitious AI, or check out the researchers’ open-access paper here.

Cynthia Murrell, June 24, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

 

New Analysis Tool for Hadoop Data from Oracle

June 23, 2015

Oracle offers new ways to analyze Hadoop data, we learn from the brief write-up, “Oracle Zeroes in on Hadoop Data with New Analytics Tool” at PCWorld. Use of the open source Hadoop framework for distributed storage and processing continues to grow among businesses and other organizations, so it is no surprise to see enterprise software giant Oracle developing such tools. The new software is dubbed Oracle Big Data Spatial and Graph. Writer Katherine Noyes reports:

“Users of Oracle’s database have long had access to spatial and graph analytics tools, which are used to uncover relationships and analyze data sets involving location. Aiming to tackle more diverse data sets and minimize the need for data movement, Oracle created the product to be able to process data natively on Hadoop and in parallel using MapReduce or in-memory structures.

“There are two main components. One is a distributed property graph with more than 35 high-performance, parallel, in-memory analytic functions. The other is a collection of spatial-analysis functions and services to evaluate data based on how near or far something is, whether it falls within a boundary or region, or to process and visualize geospatial data and imagery.”
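The two kinds of spatial evaluation mentioned, distance (“how near or far something is”) and containment (“whether it falls within a boundary or region”), can be sketched generically in plain Python. This illustrates the predicates themselves and is not Oracle’s API:

```python
# Generic spatial predicates: great-circle distance between two points
# and point-in-bounding-box containment. Coordinates are approximate
# and chosen only for illustration.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def within_box(lat, lon, south, west, north, east):
    """True if the point falls inside the lat/lon bounding box."""
    return south <= lat <= north and west <= lon <= east

# Is a customer near a store, and inside a sales region?
store = (37.563, -122.325)      # San Mateo, roughly
customer = (37.774, -122.419)   # San Francisco, roughly
print(round(haversine_km(*store, *customer), 1), "km apart")
print(within_box(*customer, south=37.0, west=-123.0, north=38.0, east=-122.0))
```

Products like the one described run predicates of this sort in parallel across a Hadoop cluster rather than one point at a time.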

The write-up notes that such analysis can reveal connections for organizations to capitalize upon, like relationships between customers or assets. The software is, of course, compatible with Oracle’s own Big Data Appliance platform, but can be deployed on other Hadoop and NoSQL systems, as well.

Cynthia Murrell, June 23, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Major SharePoint Features Disclosed

June 23, 2015

SharePoint Server 2016 has caused quite a stir, with users wondering what features will come through in the final version. At Microsoft Ignite last month, rumors turned to legitimate features. Read more about separating fact from fiction in the newest SharePoint release in the CIO article, “Top 4 Revelations about SharePoint.”

The article begins:

“Some of the biggest news to come out of Microsoft Ignite last month was the introduction and the first public demonstration of SharePoint Server 2016 – a demo that quelled a lot of speculation and uneasiness in the SharePoint administrator community. Here are the biggest takeaways from the conference, with an emphasis on the on-premises product.”

The article goes on to say that users can look forward to a full on-premises version, bolstered administrative features, four roles to divide the workload, and an emphasis on hybrid functions. For users who need to stay in the loop with SharePoint updates and changes, stay tuned to ArnoldIT.com. Stephen E. Arnold is a longtime leader in search, and his Web site offers a unique SharePoint feed to keep all the latest tips, tricks, and news in one convenient location.

Emily Rae Aldridge, June 23, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

MIT Discovers Object Recognition

June 23, 2015

MIT did not discover object recognition, but its researchers did discover that a deep-learning system designed to recognize and classify scenes can also be used to recognize individual objects. Kurzweil describes the exciting development in the article, “MIT Deep-Learning System Autonomously Learns To Identify Objects.” The MIT researchers realized that deep learning could be used for object identification while they were training a machine to identify scenes. They had compiled a library of seven million entries categorized by scene when they learned that object recognition and scene recognition might work in tandem.

“ ‘Deep learning works very well, but it’s very hard to understand why it works — what is the internal representation that the network is building,’ says Antonio Torralba, an associate professor of computer science and engineering at MIT and a senior author on the new paper.”

When the deep-learning network was processing scenes, it was fifty percent accurate, compared to a human’s eighty percent accuracy. While the network was busy identifying scenes, it was simultaneously learning how to recognize objects. The researchers are still trying to work out the kinks in the deep-learning process and have decided to start over. They are retraining their networks on the same data sets, but taking a new approach to see whether scene and object recognition tie together or diverge in different directions.

Deep-learning networks have major ramifications, with the potential to improve many industries. However, will deep learning be applied to basic search? Image search still does not work well when you search by an actual image.

Whitney Grace, June 23, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Latest Version of DataStax Enterprise Now Available

June 19, 2015

A post over at the SD Times informs us, “DataStax Enterprise 4.7 Released.” Enterprise is DataStax’s platform that helps organizations manage Apache Cassandra databases. Writer Rob Marvin tells us:

“DataStax Enterprise (DSE) 4.7 includes a production-certified version of Cassandra 2.1, and it adds enhanced enterprise search, analytics, security, in-memory, and database monitoring capabilities. These include a new certified version of Apache Solr and Live Indexing, a new DSE feature that makes data immediately available for search by leveraging Cassandra’s native ability to run across multiple data centers. …

“DSE 4.7 also adds enhancements to security and encryption through integration with the DataStax OpsCenter 5.2 visual-management and monitoring console. Using OpsCenter, developers can store encryption keys on servers outside the DSE cluster and use the Lightweight Directory Access Protocol to manage admin security.”

Four main features and updates are listed in the write-up: extended search analytics, intelligent query routing, fault-tolerant search operations, and upgraded analytics functionality. See the article for details on each of these improvements.

Founded in 2010, DataStax is headquartered in San Mateo, California. Clients for their Cassandra-management software (and related training and professional services) range from young startups to Fortune 100 companies.

Cynthia Murrell, June 19, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Make Your Data Pretty

June 19, 2015

It is very easy to read and interpret data when it is represented visually. Humans are visual creatures, and a picture can often communicate an explanation more effectively than text. Infographics are hugely popular on the Internet, and some of them have achieved meme status. While some data can be easily represented using Adobe Photoshop or the Microsoft Office suite, more complex data needs more sophisticated software to simplify it visually.

Rather than leaving you to spend hours on Google searching for a quality data visualization tool, Usability Tools has rounded up “21 Essential Data Visualization Tools.” What is great about this list is that it features free services available to improve how you display data on your Web site, in a project, or wherever your specific needs lie.

Some of the choices are obvious, such as Google Charts and Wolfram Alpha, but there are some standouts that combine JavaScript and draw on Internet resources. Plus, they are exceedingly fun to play with. They include Timeline.js, Tableau Public, PiktoChart, Canva, and D3.js.

No single data visualization tool is better than the others; in fact, the article’s author says the right choice depends on your needs:

“As you can see, there is plenty of Data Visualization tools that will make you understand your users in a better, more insightful way. There are many tools being launched every day, but I managed to collect those that are the most popular in the ‘industry’. Of course, they have both strong and weak sides, since there is no one perfect tool to visualize the metrics. All I can do is to recommend you trying them yourself and combining them in order to maximize the efficiency of visualizing data.”
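Under the hood, tools like these turn a data set into drawing instructions. Here is a minimal, dependency-free sketch of the idea, rendering a tiny made-up data set as an SVG bar chart:

```python
# Render {label: value} pairs as an SVG string of vertical bars,
# the kind of output a browser-based charting tool generates for you.
def svg_bar_chart(data, width=300, height=150, gap=10):
    """Return an SVG document drawing one bar per (label, value) pair."""
    peak = max(data.values())
    bar_w = (width - gap * (len(data) + 1)) // len(data)
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">']
    x = gap
    for label, value in data.items():
        bar_h = int(value / peak * (height - 20))  # leave headroom at the top
        y = height - bar_h
        parts.append(f'<rect x="{x}" y="{y}" width="{bar_w}" height="{bar_h}">'
                     f'<title>{label}: {value}</title></rect>')
        x += bar_w + gap
    parts.append("</svg>")
    return "\n".join(parts)

# The labels and counts below are invented for the example.
chart = svg_bar_chart({"D3.js": 42, "Tableau": 35, "Canva": 18})
print(chart)  # save to a .svg file and open it in a browser
```

The tools in the round-up add the interactivity, styling, and data import that this sketch leaves out.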

It looks like it is time to start playing around with data toys!

Whitney Grace, June 19, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

 

Chris McNulty at SharePoint Fest Seattle

June 18, 2015

For SharePoint managers and users, continued education and training is essential. There are lots of opportunities for virtual and face-to-face instruction. Benzinga gives some attention to one training option, the upcoming SharePoint Fest Seattle, in their recent article, “Chris McNulty to Lead 2 Sessions and a Workshop at SharePoint Fest Seattle.”

The article begins:

“Chris McNulty will preside over a full day workshop at SharePoint Fest Seattle on August 18th, 2015, as well as conduct two technical training sessions on the 19th and 20th. Both the workshops and sessions are to be held at the Washington State Convention Center in downtown Seattle.”

In addition to all of the great training opportunities at conferences and other face-to-face sessions, staying on top of the latest SharePoint news and online training opportunities is also essential. For a one-stop shop for all the latest SharePoint news, stay tuned to Stephen E. Arnold’s Web site, ArnoldIT.com, and his dedicated SharePoint feed. He has turned his longtime career in search into a helpful Web service for those who need to stay on top of the latest SharePoint happenings.

Emily Rae Aldridge, June 18, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

 

Basho Enters Ring With New Data Platform

June 18, 2015

When it comes to enterprise technology these days, it is all about making software compliant with a variety of platforms and needs. Compliance is the name of the game for Basho, says Diginomica’s article, “Basho Aims For Enterprise Operational Simplicity With New Data Platform.” Basho’s upgrade to its Riak Data Platform improves its integration with related tools and simplifies complex operational environments. Data management and automation tools are another big seller for NoSQL enterprise databases, and Basho added those to the Riak upgrade as well. Basho is not the only company trying to improve NoSQL enterprise platforms; MongoDB and DataStax are doing the same. Basho’s advantage is delivering a solution built around the Riak data platform.

Basho’s data platform already offers, in nearly automated form, a variety of functions that people otherwise struggle to get working with a NoSQL database: Riak Search with Apache Solr, orchestration services, an Apache Spark connector, integrated caching with Redis, and simplified development using data replication and synchronization.

“CEO Adam Wray released some canned comment along with the announcement, which indicates that this is a big leap for Basho, but also is just the start of further broadening of the platform. He said:

‘This is a true turning point for the database industry, consolidating a variety of critical but previously disparate services to greatly simplify the operational requirements for IT teams working to scale applications with active workloads. The impact it will have on our users, and on the use of integrated data services more broadly, will be significant. We look forward to working closely with our community and the broader industry to further develop the Basho Data Platform.’”

The article explains that the NoSQL market continues to grow and that enterprises need management as well as automation to handle the growing number of tasks databases are used for. While a complete solution for all NoSQL needs has yet to be developed, Basho comes fairly close.

Whitney Grace, June 18, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Magic May Not Come From Pre-Made Taxonomies

June 17, 2015

There are hundreds of companies advertising that they can increase information access, retrieval, and accuracy for enterprise search by selling prefabricated taxonomies. These taxonomies are industry specific and are generated using an all-or-nothing approach, rather than being individualized for each enterprise search client. It turns out that prefabricated taxonomies are not guaranteed to help enterprise search; in fact, they might be a waste of money. The APQC Blog posted “Make Enterprise Search Magical Without Money,” which uses an infographic to explain how organizations can improve their enterprise search without spending a cent.

APQC found that “best-practice organizations don’t have significantly better search technology. Instead, they meet employees’ search needs with superior processes and approaches to content management.”

How can it be done?

The three steps are quite simple:

  1. Build taxonomies that reflect how people actually think and work. This can be done with focus groups and by periodically reviewing taxonomies and metadata, and it contributes to better, more effective content management.
  2. Use scope, metadata, and manual curation to ensure search returns the most relevant results. Constantly review the taxonomies for ways to improve, and observe how users actually search.
  3. Clear out outdated, irrelevant, and duplicate content that’s cluttering up your search results. Keep taxonomies updated so they continue to deliver accurate results.
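Steps two and three can be sketched in a few lines: scope search by a taxonomy topic, then prune stale near-duplicates. The document fields, topic names, and dates below are invented for illustration:

```python
# Minimal metadata-scoped search with freshness curation: filter by a
# taxonomy topic and a cutoff date, newest results first. The corpus
# and taxonomy terms are made up for the example.
from datetime import date

documents = [
    {"title": "Travel policy 2015", "topic": "hr",     "updated": date(2015, 3, 1)},
    {"title": "Travel policy 2011", "topic": "hr",     "updated": date(2011, 6, 1)},
    {"title": "Server runbook",     "topic": "it-ops", "updated": date(2015, 5, 9)},
]

def search(docs, query, topic=None, not_older_than=None):
    """Match query text in titles, optionally scoped by taxonomy topic
    and a freshness cutoff; return newest results first."""
    hits = [d for d in docs if query.lower() in d["title"].lower()]
    if topic is not None:
        hits = [d for d in hits if d["topic"] == topic]
    if not_older_than is not None:
        hits = [d for d in hits if d["updated"] >= not_older_than]
    return sorted(hits, key=lambda d: d["updated"], reverse=True)

# Scoping to the "hr" branch of the taxonomy and pruning stale copies
# leaves one relevant, current document instead of two near-duplicates.
results = search(documents, "travel policy", topic="hr", not_older_than=date(2014, 1, 1))
print([d["title"] for d in results])
```

The hard part, as the post notes, is not the filtering logic but keeping the topic tags and cutoff policies maintained over time.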

These are really simple editing steps, but the main problem organizations might have is actually implementing the steps.  Will they assign the taxonomy creation task to the IT department or information professionals?  Who will be responsible for setting up focus groups and monitoring usage?  Yes, it is easy to do, but it takes a lot of time.

Whitney Grace, June 17, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

 
