Finally The Truth On The Autonomy Purchase
January 27, 2013
The truth is out! After weeks of discussion going back and forth between Autonomy, HP, and everyone else in between, Business Insider reports that "Meg Whitman Admits That HP 'Paid Too Much' For Autonomy." Meg Whitman is the CEO of HP, and she has officially declared that her company paid too much for the $11 billion Autonomy mess, which now comes complete with a Department of Justice investigation. While Whitman admitted the fiasco, she and the board do not take the blame; instead they have placed it with the Deloitte auditors. So the finger pointing continues.
For those of you who have not been following the news, in November 2012 HP said it was going to write off $8.8 billion of the $11 billion. Not a problem, until $5 billion of the write-off came with an "improper accounting" tag. Whitman stated in a recent interview with The Wall Street Journal that the write-down amounted to roughly 85 percent of what HP paid for the company.
Here is part of the interview:
"Murray: You had people doing due diligence at the time for the board?
Whitman: Absolutely. But we didn’t go in and question Deloitte and say, ‘Are you appropriately accounting for the revenues and the operating profits’?
Murray: What does it say more broadly about corporate governance? If the board can’t find an 85% hole –
Whitman: Alan, that’s not fair at all in my view. The company turned out to be smaller, slower growth and somewhat less profitable than we anticipated, but we’re still investing in Autonomy. We just announced yesterday that we’re hiring 50 more engineers in the meaning-based compute side of this.”
And let us begin another round of the blame game. Well, Autonomy was not my fault. It was the accounting firm! Yes. Evil accountants. Do these folks wear masks and black capes?
Whitney Grace, January 27, 2013
Sponsored by ArnoldIT.com, developer of Beyond Search
Information Confusion: Search Gone South
January 26, 2013
I read "We Are Supposed to Be Truth Tellers." I think the publication is owned by a large media firm. The point of the write-up is that "real news" has a higher aspiration and may deal with facts with only a smidgen of opinion.
I am not a journalist. I am a semi-retired guy who lives in rural Kentucky. I am not a big fan of downloading and watching television programs. The idea that I would want to record multiple shows, skip commercials, and then feel smarter and more informed as a direct result of those activities baffles me.
Here’s what I understand:
A large company clamped down on a subsidiary's decision to give a recording-oriented outfit a prize for coming up with a product that allows the couch potato to skip commercials. The fallout from this corporate decision caused a journalist to quit and triggered some internal grousing.
The article addresses these issues, which I admit, are foreign to me. Here’s one of the passages which caught my attention:
CNET reporters need to either be resigning or be reporting this story, or both. On CNET. If someone higher up removes their content then they should republish it on their personal blogs. If they are then fired for that they should sue the company. And either way, other tech sites, including this one, would be more than happy to make them job offers.
I agree I suppose. But what baffles me are these questions:
- In today's uncertain financial climate, does anyone expect senior management to do more than take steps to minimize risk, reduce costs, and try to keep their jobs? I don't. The notion that the senior management of a media company embraces the feel-good methods of Whole Earth or the Dalai Lama is out of whack with reality, in my opinion.
- In the era of "weaponized information," pay-to-play search traffic, and sponsored content from organizations like good old ArnoldIT, what is accurate? What is the reality? What is given spin? I find that when I run a query for "gourmet craft spirit" I get some darned interesting results. Try it. Who are these "gourmet craft spirit" people? Interesting stuff, but what's news, what's fact, and what's marketing? If I cannot tell, how about the average Web surfer who lets online systems predict what he or she needs before entering a query?
- At a time when people use online systems to find everything from pizza to paradise, can users discern when a system is sending false content? More importantly, can today's Fancy Dan intelligence systems from Palantir-like and i2 Group-like outfits discern "fake" information from "real" information? My experience is that, with sufficient resources, crafty humans can shape the results these advanced systems output. Not exactly what the licensees want or know about.
Net net: I am confused about the "facts" of any content object available today and skeptical of smart systems' outputs. These can be, gentle reader, manipulated. I see articles in the Wall Street Journal which report on wire tapping. Interesting, but did not the owner of the newspaper find itself tangled in a wire tapping legal matter? I read about industry trends from consulting firms which highlight the companies who pay to be given the high intensity beam and the rah rah assessments. Is this Big Data baloney sponsored content, a marketing trend, or just the next big thing to generate cash in a time of desperation? I see conference programs which feature firms who pay for platinum sponsorships and then get the keynote, a couple of panels, and a product talk. Heck, after one talk, I get the message about sentiment analysis. Do I need to hear from this sponsor four or five more times? Ah, "real" information? So what's real?
In today's digital world, there are many opportunities for humans to exercise self interest. The dust-up over the CBS intervention is not surprising to me. The high profile resignation of a real journalist is a heck of a way to get visibility for "ethical" behavior. The subsequent buzz on the Internet, including this blog post, is part of the information game today.
Thank goodness I am old and in a geographic location without running water, but I have an Internet connection. Such is progress. The ethics stuff, the assumptions of "real" journalists, and the notion of objective, fair information don't cause much of a stir around the wood burning stove at the local grocery.
"Weaponized information" has arrived in some observers' consciousness. That is a step forward. But the insight comes after the train has left the station. Blog posts may not be effective in getting the train to stop, back up, and let the late arrivals board.
Stephen E Arnold, January 26, 2013
Computer Automation Is Making Researchers Obsolete
January 26, 2013
In archives and libraries around the world, piles of historic documents sit gathering dust. One of the problems librarians and archivists have with these documents is that they do not have a reliable way to date them. MIT Technology Review describes a possible solution in the article "The Algorithms That Automatically Date Medieval Manuscripts." Gelila Tilahun and colleagues at the University of Toronto have created algorithms that use language and common phrases to date documents. Certain words and expressions can tie a document to a specific time period. It sounds easy, but according to the article it is a bit more complex:
“However, the statistical approach is much more rigorous than simply looking for common phrases. Tilahun and co’s computer search looks for patterns in the distribution of words occurring once, twice, three times and so on. “Our goal is to develop algorithms to help automate the process of estimating the dates of undated charters through purely computational means,” they say. This approach reveals various patterns that they then test by attempting to date individual documents in this set. They say the best approach is one known as the maximum prevalence technique. This is a statistical technique that gives a most probable date by comparing the set of words in the document with the distribution in the training set.”
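The maximum prevalence idea can be approximated in a few lines of code. The sketch below is not the Toronto team's implementation; it is a minimal illustration, assuming a hand-built training set of dated charters, that scores an undated document's words against each period's word distribution and picks the most probable period.

```python
from collections import Counter
import math

def build_period_models(dated_charters):
    """dated_charters: list of (period_label, list_of_tokens).
    Returns one word-frequency Counter per period."""
    models = {}
    for period, tokens in dated_charters:
        models.setdefault(period, Counter()).update(tokens)
    return models

def most_probable_period(tokens, models, alpha=1.0):
    """Score an undated document against each period's word distribution
    (with add-alpha smoothing) and return the best-matching period."""
    vocab = set()
    for counts in models.values():
        vocab.update(counts)
    best_period, best_score = None, float("-inf")
    for period, counts in models.items():
        total = sum(counts.values())
        score = 0.0
        for tok in tokens:
            p = (counts.get(tok, 0) + alpha) / (total + alpha * len(vocab))
            score += math.log(p)
        if score > best_score:
            best_period, best_score = period, score
    return best_period

# Hypothetical toy training set and test document.
training = [
    ("1150-1200", "sciant presentes et futuri quod ego".split()),
    ("1250-1300", "noverint universi per presentes quod ego".split()),
]
models = build_period_models(training)
print(most_probable_period("noverint universi quod".split(), models))
```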
Tilahun and the team want their algorithms to be used for more than dating old documents. The approach can also help detect forgeries and verify authorship. The dating tool opens many more opportunities to explore history, but the downside is that research is getting more automated. Librarians and scholars may be kicked out and sent to work at Wal-Mart.
Whitney Grace, January 26, 2013
Sponsored by ArnoldIT.com, developer of Beyond Search
Forrester Fills the Gap in Search Market Size Estimates
January 25, 2013
I used to enjoy the search market size estimates of IDC (the time-it-takes-to-find-info group), Gartner (the magic quad folks), Forrester (yep, the "wave" people), and Ovum (the "we do it all" experts), among others.
I read "Growth of Big Data in Businesses Intensifies Global Demand for Enterprise Search Solutions, Finds Frost & Sullivan" and found several items of interest in the brief news story, which arrived via Germany. Is Germany a leader in enterprise search? I heard that in Germany, 99 percent of the time search means Google. The numerous open source players are not setting the non-German world on fire, but I could be wrong. Check out GoPubMed, for example, an interesting system which keeps a modest profile.
Now to the size of the search market.
The first thing I noticed was the nod to Big Data, which is certainly the hook on which many dreams for Big Money hang. With enterprise search vendors looking for a way to gain traction in a market which has been caught in awkward positions when licensing and deploying “search,” new words and new Velcro patches are needed. I won’t mention the Hewlett Packard Autonomy matter nor the Fast Search & Transfer matter nor the millions pumped into traditional search vendors with little chance of paying back the investments. No. No. No.
I want to quote this statement from the news release:
The growth of Big Data across verticals presents the enterprise search solutions market with further opportunities. Since newer data types are not confined to a relational database within an organization, solutions that can search information outside the scope of these relational frameworks are widely accepted. Demand for personalized search tools that operate in a pool of unlimited data from internal servers, the Internet, or third-party sources is also growing.
Ah, but how does one crawfish away from exaggeration? Easy. I noted:
However, the disparity between customer expectations and actual search outcomes could dissuade future investments. Customers expect a single query to retrieve the right results immediately. Therefore, search providers must offer timely and relevant results, taking into account the continuous addition of new data to repositories.
But "How big is the market?" my inner child yelps. The answer:
PolySpot Disseminates Big Data Gold
January 25, 2013
Even though many companies have started researching big data initiatives for their organizations, they have not actively pursued the technologies or the workforce needed to turn their data into gold. Experts in the field are predicting that 2013 will be the year that big data really hits and that companies utilizing it will have a competitive advantage over others who are behind the curve. GigaOM reports on an opportunity for professionals interested in big data in the brief write-up, "Meet Big Data Bigwigs at Structure: Data."
The opportunity for networking and learning from industry experts is called Structure:Data and runs March 20-21. The article tells us more:
‘Whether we know it or not, data — big, small or otherwise — is becoming a central component to the way we live our lives,’ says GigaOM writer Derrick Harris in his big data predictions for 2013. At Structure:Data we’ll delve into what lies ahead for big data, as we explore the technical and business opportunities that the growth of big data has created. Topics include case studies of big data implementations, the future of Hadoop, machine learning, the looming data-scientist crisis and the top trends in big data technologies.
There are plenty of insights and opportunities to be mined from big data, and some firms are already tapping into it. Tools like PolySpot make this an easier feat for everyone from small businesses to large corporations, with scalable solutions that disseminate insights from terabytes of data in real time across the enterprise.
Megan Feil, January 25, 2013
Sponsored by ArnoldIT.com, developer of Beyond Search
Apache Lucene Solr Updates
January 25, 2013
The DZone Big Data/BI Zone has let us know that a new version of Apache Lucene/Solr has hit the Internet. Apache Lucene/Solr 3.6.2 has been unveiled, and it will roll into many other products that build upon the open source code. Read the details in "Apache Lucene Solr 3.6.2."
The gist of the release is in the first few lines:
"Apache Lucene and Solr PMC recently announced another version of Apache Lucene library and Apache Solr search server numbered 3.6.2. This is a minor bugfix release concentrated mainly on bugfixes in Apache Lucene library.
Apache Lucene 3.6.2 library can be downloaded from the following address: http://lucene.apache.org/core/mirrors-core-3x-redir.html?. Apache Solr 3.6.2 can be downloaded at the following URL address: http://lucene.apache.org/solr/mirrors-solr-3x-redir.html?”
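For anyone who wants to kick the tires on the new release, here is a minimal sketch of querying Solr over its standard HTTP search handler. It assumes a stock Solr 3.6.2 instance running on localhost with the default single core and some documents already indexed; the field names used (id, title) are hypothetical and depend on your schema.

```python
import json
import urllib.parse
import urllib.request

# Assumed: a default Solr 3.6.2 install at localhost:8983 with one core.
SOLR_SELECT_URL = "http://localhost:8983/solr/select"

def solr_search(query, rows=5):
    """Send a simple query to Solr's HTTP search handler and return the docs."""
    params = urllib.parse.urlencode({"q": query, "wt": "json", "rows": rows})
    with urllib.request.urlopen(f"{SOLR_SELECT_URL}?{params}") as resp:
        payload = json.load(resp)
    return payload["response"]["docs"]

if __name__ == "__main__":
    # Hypothetical field names; use whatever your schema defines.
    for doc in solr_search("title:lucene"):
        print(doc.get("id"), doc.get("title"))
```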
Two products sure to be affected and improved by the update are LucidWorks Search and LucidWorks Big Data. LucidWorks chooses Lucene/Solr as its foundation because of its dependability, agility, and strong developer and user communities. The argument is that any product built on open source is, by its nature, going to be strong, secure, and continuously updated, and is therefore a better choice than a proprietary option.
Emily Rae Aldridge, January 25, 2013
Sponsored by ArnoldIT.com, developer of Augmentext
The Truth Behind Cloud Computing Costs
January 25, 2013
Cloud computing offers users ease, accessibility, and lower costs, but Datamation asks, "What Are The Hidden Costs Of Cloud Computing?" Datamation pulls its information from Symantec's "Avoiding the Hidden Costs of Cloud 2013" survey, which shows that cloud adoption is very high. Seventy-seven percent of the survey takers reported rogue deployments, meaning clouds that were not approved by corporate IT. Does anyone else see security breach problems here?
Cloud storage may be lighter than air, but it is also costing money. That is not the scary point, however:
“Perhaps even more concerning is the fact that 43 percent of respondents admitted that they have lost data in the cloud. In Elliot’s view, there are a number of reasons for the cloud data loss. For one, the cloud data provider could have lost the data in a failure of some sort. What is more likely, though, is that some form of user human error led to the data loss.
‘The user could have accidentally misplaced the data and literally just could not find it,’ [Dave Elliott, senior manager, Global Cloud Marketing at Symantec] said.”
Cloud users are also not fully using all of the storage they pay for, and many do not have data deduplication in place. What can we learn from this? Establish security policies for the cloud, reevaluate the costs, and make sure everything is consistent! In truth, does anyone even notice the costs when responsibilities are being shuffled around and staff are being cut?
Whitney Grace, January 25, 2013
Sponsored by ArnoldIT.com, developer of Beyond Search
The Toppling of YouTube
January 25, 2013
YouTube reigns supreme over Internet video distribution, but could its dominance end in 2013? All Things D predicts in "YouTube's Reign Threatened By A Spotified Revolution, And Other Reel Truths For Video In 2013" that things are going to change this year. Internet video consumption and creation have grown considerably with the mobile market, but along with this record growth people are becoming more discerning about where they get their content:
"People are still watching just as much video — but they are now looking to different sources on the Web. In the past five months there has been a 34 percent drop in the total volume of video consumed on YouTube compared with the rest of the Web. YouTube views peaked in June 2012 at 18.3 billion, but have since declined to 12 billion in November 2012. comScore's Video Metrix measured total Web video views in June 2012 at 32.9 billion; fast-forward to November 2012 and total video views across the Web hit 40 billion. While YouTube lost about six billion views within that five-month period, the other half of Web video shot up by 13 billion."
Other predictions include that TV networks and other major media outlets will look for ways to gain more viewers by experimenting with Web content. Also, no one has yet tackled video discovery in a way that meets the needs of the mobile and social Web. Whoever creates that algorithm will be writing her or his own check. Google's YouTube may see a steady stream of competition, but do not forget that Google is always planning and working on new projects. The search engine giant will not fall this year.
Whitney Grace, January 25, 2013
Sponsored by ArnoldIT.com, developer of Beyond Search
Stunning Visuals Show How Datasets Connect
January 25, 2013
Data analysis can be a tricky business, especially when you have been staring at a computer screen and all the information blurs together. What if there were a way to make the data more visually stimulating, not to mention take the guesswork out of finding correlations? Gigaom may have found the answer in "Has Ayasdi Turned Machine Learning Into A Magic Bullet?" Ayasdi is a startup that has created software for visually mapping hidden connections in massive datasets. The company just opened its doors with $10.25 million in funding, but what is really impressive is its software offering:
"At its core, Ayasdi's product, a cloud-based service called the Insight Discovery Platform, is a mix of distributed computing, machine learning and user experience technologies. It processes data, discovers the correlations between data points, and then displays the results in a stunning visualization that's essentially a map of the dataset and the connections between every point within it. In fact, Ayasdi is based on research into the field of topological data analysis, which Co-founder and President Gunnar Carlsson describes as a quest to present data as intuitively as possible based solely on the similarity of (or distance between, in a topological sense) the data points."
The way the software works is similar to social networking. Social networking software maps connections between users and their content, but the algorithms do not understand what the connections mean. Ayasdi makes it easier for its users to attach meaning to the correlations. The article also points out that Ayasdi's software is hardly a new concept, but for some working in BI it takes a lot of the manual work out of discovery. The software may be really smart, but humans are still needed to interpret the data.
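To make the "map of the dataset" idea concrete, here is a minimal sketch of one common building block: connect every pair of data points whose distance falls below a threshold and treat the result as a graph. This is not Ayasdi's topological data analysis pipeline, just a simplified, assumed illustration of mapping connections by similarity.

```python
import math
from itertools import combinations

def euclidean(a, b):
    """Plain Euclidean distance between two equal-length tuples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def similarity_graph(points, threshold):
    """Connect every pair of points whose distance is below threshold.
    Returns an adjacency list: index -> set of neighbouring indices."""
    graph = {i: set() for i in range(len(points))}
    for i, j in combinations(range(len(points)), 2):
        if euclidean(points[i], points[j]) < threshold:
            graph[i].add(j)
            graph[j].add(i)
    return graph

# Hypothetical 2-D data: two loose clusters.
data = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.3), (5.0, 5.1), (5.2, 4.9)]
print(similarity_graph(data, threshold=1.0))
```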
Whitney Grace, January 25, 2013
Sponsored by ArnoldIT.com, developer of Beyond Search
A Nokia Quote to Note about Google
January 24, 2013
I read “Nokia CEO Stephen Elop: Google Is Making Its Open Ecosystem More Closed.” I was not surprised at Nokia’s suggestion. What I noted was this alleged quotation:
"The situation that Android is facing, where the amount of fragmentation that you're seeing is increasing as people take it in different directions, is of course offset by Google's efforts to turn an open ecosystem into something that's quite a bit more closed as you've seen quite recently."
Who cares? Maybe Samsung? What if Samsung goes its own direction with mobile operating systems? Impossible? Absolutely. Why would Samsung surf on Android and then want to do its own thing? Crazy?
Stephen E Arnold, January 24, 2013