Y Combinator Founder Warns Against Googley Dollars
September 18, 2012
More anti-Google chatter. Are we missing something? We think Google is super, don’t you?
This time, it is the founder of Y Combinator stirring the pot. According to Business Insider, “Top Startup Advisor Paul Graham Just Warned Against Taking Google’s Money.” Apparently, Graham recently emailed that advice to his entrepreneurs, warning them away from “lowball offers” from Google Ventures. He maintains that the Google-backed venture capital firm demands unreasonably favorable terms for its largesse. See the article for specifics about his assertions.
Google Ventures’ managing partner Bill Maris vehemently denies the accusations. Reporter Owen Thomas writes:
“It’s simply not in Google Ventures’ self-interest to behave in the way Graham alleges it has, Maris said: ‘We work really hard on our relationships and our reputation … and most of us are entrepreneurs ourselves.'”
Thomas goes on to observe:
“Google Ventures also has extensive programs to help the entrepreneurs it backs with technology, analytics, design, and marketing—for which it might reasonably believe it should command good terms when investing. What is surprising is that whatever approach Google Ventures took to these Y Combinator startups caused enough friction to provoke Graham’s cautionary note.”
Yes, it was surprising. According to Maris, some of Graham’s clients have chosen to ignore his advice. Will doing so pay off?
Cynthia Murrell, September 18, 2012
Sponsored by ArnoldIT.com, developer of Augmentext
Google: More Than 18 Percent Search
September 18, 2012
Search Engine Watch emphasizes the importance of perspective when it asks, “Is Google Search Really Only 18.5% Organic Results?” The short answer—um, no. The question was prompted by a Jitbit blog post that made an outrageous claim: that Google’s search results pages are only 18.5 percent “actual search results.” Search Engine Watch’s Thom Craver explains the original poster’s methodology:
“The author’s ‘reasoning’ suggests on his 1280×960 resolution screen, the search results take up a box 535 pixels wide by 425 pixels tall, 18.5 percent of his window if you multiply the resolutions and consider square pixels to be the same as measuring something in square inches. . . .
“The author jumps around between different numbers of links, trying to make a point that out of all the links, only five were actual ‘search’ results, leading to a claim that only 11 percent of the links are actual search results, then later suggesting an ads to results ratio of 8:7, ‘which is 47% of the links are actual results.'”
Ummm. . . okay. Google engineer Matt Cutts responded to the post with a list of three major problems (just three?) with the original poster’s reasoning. Essentially, he had counted a lot of things as “non-search” space he probably shouldn’t have—like the search box. And the tools in the left column. And the white space on the page.
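For what it is worth, the 18.5 percent figure is nothing more than an area ratio, and a few lines of Python (our sketch, not the original poster’s code) reproduce it. The dimensions are the ones quoted above; the result only holds for that one screen resolution, which is part of why the claim is so shaky.

```python
# Reproduce the Jitbit arithmetic: the "search results" box area as a
# share of total screen area. Dimensions are those quoted in the article.
def area_share(box_w: int, box_h: int, screen_w: int, screen_h: int) -> float:
    """Return the box's share of total screen area, as a percentage."""
    return 100.0 * (box_w * box_h) / (screen_w * screen_h)

share = area_share(535, 425, 1280, 960)
print(f"{share:.1f}%")  # 18.5% -- but only at this one resolution
```

Change the screen resolution and the percentage changes with it, which is exactly the kind of problem Matt Cutts flagged.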
Perhaps the original (faulty) observation only got any traction because it tapped into a simmering unease with Google’s increased search results monetization? Nah, that couldn’t be it.
Cynthia Murrell, September 18, 2012
Sponsored by ArnoldIT.com, developer of Augmentext
No Wonder Search Is Broken. Software Does Not Work.
September 17, 2012
Several years ago, I ran across a Microsoft-centric podcast hosted by an affable American, Scott Hanselman. At the time he worked for a company developing software for the enterprise. Then I think he started working at Microsoft, and I lost track of him.
I read “Everything’s Broken and Nobody’s Upset.” The author was Scott Hanselman, who is “a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee.”
The article is a list of bullet points. Each bullet point identifies a range of software problems. Some of these were familiar; for example, iPhoto’s choking on large numbers of pictures on my wife’s new Mac laptop. Others were unknown to me; for example, the lousy performance of Gmail. Hopefully Eric Brewer, founder of Inktomi, can help improve the performance of some Google services.
Answer to the Google query “Why are Americans…”
The problems Mr. Hanselman identifies can be fixed. He writes:
Here we are in 2012 in a world of open standards on an open network, with angle brackets and curly braces flying at gigabit speeds and it’s all a mess. Everyone sucks, equally and completely.
- Is this a speed problem? Are we feeling we have to develop too fast and loose?
- Is it a quality issue? Have we forgotten the art and science of Software QA?
- Is it a people problem? Are folks just not passionate about their software enough to fix it?
I think it’s all of the above. We need to care and we need the collective will to fix it.
My reaction was surprise. I know search, content processing, and Fancy Dan analytics do not work as advertised, as expected, or, in some cases, very well despite the best efforts of rocket scientists.
The idea that the broad world of software is broken was an interesting idea. Last week, I struggled with a client who could not explain what its new technology actually delivered to a user. The reason was that the words the person was using did not match what the new software widget actually did. Maybe the rush to come up with clever marketing catchphrases is more important than solving a problem for a user?
In the three disciplines we monitor—search, content processing, and analytics—I do not have a broad method for remediating “broken” software. My team and I have found that the approach Martin White and I outlined in Successful Enterprise Search Management is simply ignored by those implementing search. I can’t speak for Martin, but in my experience, the people who want to implement a search, content processing, or analytics system share certain characteristics. Not every item applies to everyone, but the list below gathers the most frequent actions and statements I have encountered over the last year. The reasons for lousy search-related systems:
- Short cuts only, please. Another consultant explained that buying third-party components was cheaper, quicker, and easier than examining the existing search-related system.
- Something for nothing. The idea is that a free system is going to save the day.
- New is better. The perception that a new system from a different vendor will solve the findability problem simply because it is different.
- We are too busy. The belief that talking to the users of a system was a waste of time. The typical statement about this can be summarized, “Users don’t know what they want or need.”
- No appetite for grunt work. This is an entitlement problem because figuring out metrics like content volume, processing issues for content normalization, and reviewing candidate term lists is not their job or too hard.
- No knowledge. This is a weird problem caused in part by point-and-click interfaces and predictive systems like Google’s. Those who should know about search-related issues do not, so education is needed. But, like recalcitrant 6th graders, they lack the will to put in the effort required to learn.
- Looking for greener pastures. Many of those working on search related projects are looking to jump to a different and higher paying job in the organization or leave the company to do a start up. As a result, search related projects are irrelevant.
The problem in search, therefore, is not the technology. Most of the systems are essentially the same as those which have been available for decades. Yes, decades. Precision and recall remain in the 80 percent range. Predictive systems chop down data sets into more usable chunks, but prediction is a hit-and-miss game. Automated indexing requires a human to keep the system on track.
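For readers who want the two metrics made concrete, precision and recall for a single query reduce to simple set arithmetic. This is an illustrative sketch with made-up document IDs, not output from any particular engine:

```python
# Illustrative precision/recall computation for a single query.
# Document IDs are invented for the example.
def precision_recall(retrieved: set, relevant: set) -> tuple[float, float]:
    hits = len(retrieved & relevant)  # relevant documents actually returned
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Engine returns docs 1-5; the truly relevant set is {1, 2, 3, 4, 6}.
p, r = precision_recall({1, 2, 3, 4, 5}, {1, 2, 3, 4, 6})
print(p, r)  # 0.8 0.8 -- squarely in the "80 percent range"
```

Pushing either number much past that range without tanking the other is where decades of systems have stalled.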
The problem is anchored in humans: Their knowledge, their ability to prioritize search related tasks, their willingness to learn. Net net: Software is not getting much better, but it is prettier than a blinking dot on a VAX terminal. Better? Nah. Upset? Nope, there are distractions and Facebook pals to provide assurances that everything is A-OK.
Stephen E Arnold, September 17, 2012
Sponsored by Augmentext
SharePoint 2013 of Little Consequence for the End User
September 17, 2012
For all of the buzz surrounding the release of SharePoint 2013, there may be little to no immediate impact for the actual end user. Developers and enterprise search bloggers are eagerly anticipating the full release as well as the SharePoint conference in November. However, Mark Miller at CMS Wire argues otherwise in his most recent article, “Why SharePoint 2013 Isn’t for You.”
Miller states:
There is the marketing coming out of Microsoft, but the Man-on-the-Street conversation is mainly from developers and IT Pros who are talking to each other about how to set it up, how to optimize it, how it is different from 2010. This has absolutely nothing of relevance for people using SharePoint on a day-to-day basis. The day-to-day talk is a distraction to SharePoint end users. In general, the users are not interested in the technology, they are interested in the solutions the technology can provide . . . We had the same type of situation after the release of SharePoint 2010. At that time, I took the same position: It’s going to take two to three years for SharePoint 2013 to become relevant to the daily user.
So for users who need a better interface now, what is to be done? One option gaining acceptance and popularity is the addition of a third-party solution. Vendors such as Fabasoft Mindbreeze offer a suite of solutions to maximize enterprise search and overall functionality. Fabasoft Mindbreeze Enterprise can be added to an existing SharePoint infrastructure to add the values of quality, usability, and style. It takes a long time to turn a big ship, and SharePoint is definitely the biggest ship on the market. Therefore, many organizations will benefit more from the intuitive infrastructure and agility of a smart third-party solution like Fabasoft Mindbreeze.
Emily Rae Aldridge, September 17, 2012
Sponsored by ArnoldIT.com, developer of Augmentext.
FlowForce Beta 3 Automates Data Transformations
September 17, 2012
The Altova Blog recently reported on the release of the latest edition of Altova’s FlowForce Server, a new product that automates the execution of MapForce data transformations, in the article “FlowForce Server Beta 3 Now Available.”
According to the article, this new tool is designed to provide comprehensive management and control over data transformations performed by dedicated high-speed servers. The beta test period for FlowForce Beta 3 has been extended to March 31, 2013, and the software is available in both 32-bit and 64-bit versions.
The article states:
“FlowForce Server Beta 3 adds support for remote job requests via an HTTP client and job parameters that can be passed to any step in a job. When used together with the request interface, job parameters empower the HTTP client to specify input values in the job request.
FlowForce Server Beta 3 also permits any job to be called as a step within another job, implements individual job queues that make it possible to control server resources used by jobs, and adds many more refinements and enhancements.”
For more information on the free beta version of this solution, check out the FlowForce Server Beta 3 download page.
Jasmine Ashton, September 17, 2012
Sponsored by ArnoldIT.com, developer of Augmentext
Survey Finds Majority of Marketers Fail to Utilize Data Effectively
September 17, 2012
The Harvard Business Review recently reported on the results of a survey of nearly 800 marketers at Fortune 1000 companies in the article “Marketers Flunk the Big Data Test.”
According to the article, the vast majority of marketers rely too heavily on their intuition and not enough on data and statistics. The survey found that, on average, marketers rely on data for just 11 percent of all customer-related decisions. On the other end of the spectrum, the few marketers who do utilize data tend to rely on it too heavily.
The article states:
“While most marketers underuse data, a small fraction (11% in this study) just can’t get enough. These data hounds consult dashboards daily, and base most decisions on data. They have a “plugged in” personality type and thrive on external stimulation — so they love data and all forms of feedback including data on marketing effectiveness, input from managers or peers, and frequent interaction with others. We call these marketers “Connectors” and they’re exactly what most CMOs are looking for. But these types of marketers are actually severe underperformers (they receive much lower performance ratings from their managers than average marketers do). The problem is, they don’t have the statistical aptitude or judgment required to use data effectively. Every time they see a blip on the dashboard, they adjust — and end up changing direction so often that they lose sight of end goals.”
It’s no surprise that marketers are failing to utilize data efficiently, since only 5 percent have a statistics background. It is important that marketers continually reiterate their business goals so they do not get distracted and make common data interpretation mistakes.
Jasmine Ashton, September 17, 2012
Sponsored by ArnoldIT.com, developer of Augmentext
Bottlenose Offers Real Time Social Media Search
September 17, 2012
Venture Beat recently reported on the new social media search engine, Bottlenose, in the article, “Move Over Google, Bottlenose Launches Search Engine for the ‘Now’ Era.”
According to the article, Bottlenose has spent the last two years perfecting its advanced solution. It is a search engine for social networks that aims to organize the world’s attention by creating a filter that finds, sorts, and organizes the social updates of greatest importance, as they happen, around any given query.
When explaining how it works, the article states:
“Bottlenose, which runs atop a javascript and HTML5 platform, spits out a “Now” page for every query that includes top stories, trending topics related to the subject, trending people, images, recent links, and fresh comments from social networks. Spivack likens the pages to Wikipedia entries, except that Bottlenose pages are automatically edited based on what the crowd is sharing. The pages will also appear in search engines, exposing Bottlenose’s brainpower and its ever-changing pages to the traditional searchers of the world.”
While Bottlenose has a lot to offer, it is going to take a lot to convince consumers and professionals that it offers more than just the run-of-the-mill search experience.
Jasmine Ashton, September 17, 2012
Sponsored by ArnoldIT.com, developer of Augmentext
Foundem Predicts Dire EU Legal Mess for Google
September 17, 2012
Ah, Google and its legal woes. Has it become too much, or do they consider it a cost of doing business via controlled chaos? PCPro informs us, “Foundem Claims Google Faces Tsunami of Litigation.” The vertical search, price comparison site Foundem has been a thorn in Google’s side for a couple of years now. The UK-based company filed a complaint with the EU in 2010, claiming the search behemoth rigged its results to favor its own services over those of competitors, including Foundem. The company now asserts there will soon be a “tsunami” (their word) of EU litigation against Google following the judgment they seem certain will go their way.
Writer Stewart Mitchell notes:
“An infringement decision is by no means guaranteed, but with Europe’s competition chiefs mulling over Google’s proposals for changes to its business methods, action through courts is a distinct possibility if Google doesn’t agree to make changes voluntarily.
‘Unbeknownst to its shareholders, Google’s increasingly anti-competitive practices have been quietly accruing billions of dollars of antitrust liabilities,’ said Foundem boss Shivaun Raff. ‘It is impossible to know how many companies have been harmed or destroyed by these practices — it could be hundreds or even thousands — but whatever the number, the consequence of abuse on a grand scale is liabilities on a grand scale.'”
I’d say that qualifies as a “tsunami,” if it indeed comes to pass. According to Raff, the EU’s investigation has come to a provisional infringement decision. For now, Google says it is cooperating with EU officials to bring its practices in line with their requirements. No word on when a final judgment is expected.
Cynthia Murrell, September 17, 2012
Sponsored by ArnoldIT.com, developer of Augmentext
IntelTrax Top Stories: September 7 to September 13
September 17, 2012
This week the IntelTrax advanced intelligence blog published articles on current trends in big data, fraud detection, and analytics solutions that can help address both of those problems.
“Real Time Analytics Makes an Impact” discusses how companies have spent the last couple of years working to eliminate lag time from their analytics solutions.
The article states:
“Operational Intelligence, basically, is real-time analytics over operational and social data. Operational intelligence, or OI as we like to call it, provides three important capabilities. First is real-time visibility over a wide variety of data. Second is real-time insight using real-time continuous analytics, and third is what we call right-time action, which means being able to take action in time to make a measurable difference in the business. We decided to focus on Operational Intelligence because it addresses some very important business problems that we felt were not well served by traditional software products today. These problems include service assurance in telco, social analytics for dynamic selling and brand management, real-time supply chain management, smart grid management in electrical utilities, and dynamic pricing in retail. These are just some of the examples.”
One way that analytics solutions have positively impacted a variety of industries is through the detection of fraud. “Fraud Analytics Deliver on Fine Art Forgeries” explains a new niche in fraud analytics that helps prevent substantial losses from individuals and museums.
The article informs:
“Just as with credit card fraud detection, the data sets created by digital authentication are quite large. Similarly, the modeling tools are extremely sophisticated, looking for patterns that would be unlikely from the painter just as a given purchase would be unlikely for a credit card holder. Zeroing in on the fraud can save an enterprise millions of dollars. Digital authentication is not real-time — it took two days to identify the fake Van Gogh. But in the world of art, that’s more than fast enough.”
When discussing advancements in the industry, the information is often better received when it comes from experts in the field. “Analytic News is Best From the Experts” showcases one expert’s opinion on the topic:
“Werner Vogels, a data guru as chief technology officer for Amazon Web Services, has been touting his interpretation of big data for almost two years. For him, managing a behemoth like Amazon, it’s not exactly what big data is, but what can be done with it.
“Big data is the collection and analysis of large amounts of data to create a competitive advantage,” he told a conference earlier this year.
“I am an infrastructure guy and for me big data is when your data sets become so large that you have to start innovating how to collect, store, organise, analyse and share it.”
Since technology continues to progress at a rapid rate, it is important that companies seek out a data analytics provider that evolves with the times. Digital Reasoning’s solutions not only protect your business from fraud; the company’s automated understanding of Big Data also allows companies to find the information they need to stay ahead of the competition.
Jasmine Ashton, September 17, 2012
Sponsored by ArnoldIT.com, developer of Augmentext.
For the Budding Analytics Community: A Cookbook
September 16, 2012
The Probability and Statistics Cookbook is available from matthias.vallentin.net. Now you may need to do some brushing up before holding a pre-game tailgate party. For example, if you aren’t familiar with discrete distributions, you may need some of those math refreshers. For the top math chefs out there, the book is a deal.
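As a taste of the discrete-distribution material the cookbook covers, here is one of the simplest examples, the binomial probability mass function, sketched in a few lines of Python. This is a generic refresher, not code from the cookbook itself:

```python
# Binomial pmf: P(X = k) for n independent trials, each succeeding
# with probability p.
from math import comb

def binomial_pmf(n: int, p: float, k: int) -> float:
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Probability of exactly 5 heads in 10 fair coin flips.
print(binomial_pmf(10, 0.5, 5))  # 0.24609375
```

If that computation looks unfamiliar, the cookbook’s refresher pages are aimed at you; if it looks trivial, you are one of the top math chefs.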
Stephen E Arnold, September 16, 2012
Sponsored by Augmentext