April 10, 2014
If you see the world through Google-colored glasses (www.google.com), you might think the search king can do no wrong, as in the recent Medium.com article “Why the Future Belongs to Google” (https://medium.com/mobile-culture/994daa5d0fee). However, it is starting to look like even those wearing the glasses are not happy.
According to the drum-thumping Medium.com piece:
“The search giant has infiltrated almost every sphere of our digital interaction and made the experience richer, more satisfying and rather beautiful…There are many big-name brands which often try to achieve this, but either their endeavour feels too intrusive or they just fail without a whimper.”
Pardon us, but if there’s one thing Google constantly stumbles over, it’s how intrusive its latest and greatest ideas are. http://www.wordstream.com/articles/google-failures-google-flops We’re not just talking long-lost flops like Google Buzz, but new “innovations” like its flu tracker http://www.forbes.com/sites/stevensalzberg/2014/03/23/why-google-flu-is-a-failure/ and the most recent run of backlash that seems to have finally put a bullet in the motherboard of Google Glass, according to TechCrunch http://techcrunch.com/2014/03/15/why-we-hate-google-glass-and-all-new-tech/ and others http://www.designntrend.com/articles/11970/20140321/negative-feedback-is-dimming-google-glasss-fate.htm. We are more than a little suspicious of the Medium.com article that claims Google is unintrusive. It makes us wonder how deeply Google has intruded on that writer’s brain.
Patrick Roland, April 10, 2014
April 7, 2014
The article titled Search 60 Million Scanned Newspaper Pages on Gizmo’s Freeware refers to Google’s abandoned project to digitize a large collection of newspapers for its Google News Archive (related to Google News). That project went live in 2006, but since the 2011 announcement that Google would not be adding more content, the site has been intermittently online. In spite of this, the article notes:
“However, the results of its original efforts are still online, in the form of a searchable archive of 60 million pages from some of the world’s best-known newspapers (and some little-known ones too). The scans go right back to the 1800s, and offer a fascinating glimpse into the past. So if you’re into history, or just curious to see whether anyone you know was mentioned in a newspaper in recent years, head to http://news.google.com/newspapers and see what you can find.”
What they forget to mention is that if you want to use it at all, do it now. Who knows how long it will remain available? In the meantime, it offers an inexhaustible resource of oddities, human-interest stories, and history lessons from countless newspapers. If you are confused by the results, check out the Google support page, which has instructions on searching for articles within a specific date range.
Chelsea Kerwin, April 07, 2014
March 31, 2014
You might have noticed a simple change in Google’s design recently. The search engine’s designers decided to remove underlined links, increase font sizes, and delete the yellow box around AdSense results. Fast Company is pleased with the removal of these Web 1.0 design techniques, because it pushes Google forward into modern times, when people use more than a mouse to surf the Web. Fast Company discusses its opinion of the change in “How Google’s Redesigned Results Augur A More Beautiful Web.”
Fast Company notes that underlined links are not only ugly to the eye, but they also hurt reading comprehension. The design change was headed by Larry Page, but the article also points out that the golden search results would never have been touched without testing:
“But taste alone is seldom enough to woo Google. Google would never, ever remove hyperlinks on its main page (and again, AdSense!) if it hadn’t tested the new design, and if the company wasn’t completely sure that the design wouldn’t impact Google’s ability to generate clicks. It’s pretty safe to conclude that underlines are a superfluous marker, at least on Google’s pages, which we’ve all used countless times–especially when link text is still the same old shade of blue, filling the “hey, I’m a link that you can click” role on its own.”
The article ends by saying the Web is becoming more beautiful. Fine by us, but beauty is only as substantial as content. Functionality and usefulness are more important than appearance. We don’t live in Victorian times anymore.
March 26, 2014
I read a number of write ups about the new Google cloud pricing. The main idea, in my opinion, that unifies the different reports is, “Everybody loves a bargain.” Consider “Google Slashes Cloud Prices: Google vs AWS Price Comparison.”
The essay-editorial begins with the invocation of the Google-Amazon joust:
Google threw down the gauntlet to challenge AWS public cloud supremacy by announcing significant price reductions across its Google Cloud Platform. The eye-opening price cuts covered compute (32-percent reduction), storage (68-percent reduction), and BigQuery (85-percent reduction). Google also signaled that future reductions could follow Moore’s Law — citing that historically public cloud prices have dropped only 6 to 8 percent annually as compared to 20- to 30-percent reductions in hardware prices.
The fact that neither Amazon nor Google provides much detail about its actual costs, profits, number of customers, and goals for its cloud services is not of much interest. Explanations of how pricing thresholds operate and migrate excite little curiosity.
Google, playing the Google Search Appliance card, seems to suggest that Amazon’s pricing is complicated. Yep, it is, and it is very difficult to pin down with confidence what something will cost until the bits have been chomped and the Amazon accounting system processes its inputs and bills the customer. There is chatter about “sustained use” pricing, on-demand pricing, and heavy reserved instance pricing, and in the article I have used as a pivot point for my comments, a cheer for RightScale’s services. These will help the cloud customer figure out what cloud computing costs.
First, the pricing is an example of the WalMarting of technical services. Doesn’t the entire world want lower prices? Once a market has been “won,” what happens? Creative destruction? I refer you, gentle reader, to WalMart’s challenges to rekindle (pun intended) that Sam Walton fire. The profit flat line is not good news to some WalMart stakeholders. But the Google pricing is little more than an old-fashioned price war in a Walton-like march for market share.
Second, Amazon has a bit of a cost problem. The murky Amazon financials, the hard-to-figure-out side companies, and the blurring of revenues from product and services lines are tough to parse. Amazon is working overtime to generate frictionless revenue (Prime pricing) and constrain costs. The results are a robust top line and growing pressure on expenses at “everyone’s favorite” online store. Google is cutting prices at a time when Amazon may be less than prepared for a price war.
March 21, 2014
I find Glass interesting. I find the work of Babak Amirparviz (also publishing as Babak Parviz and some variants) thought provoking. The combination of Dr. Amirparviz and Glass is fascinating. I read “The Top 10 Google Glass Myths.” I did not read about the woman cited for driving with Google Glass, the stories about bar fights between non-Glass wearers and Glass owners, or Google’s own guidance on how not to be a “glasshole.” Tasty, right?
The list made no mention of these myths, or semi-factoids:
- Glass is part of a larger project involving self-assembly of devices within a human body
- The contact lens is a version of work done at the University of Washington and partially supported by Microsoft
- The bioengineering work requires specialized facilities, not a run-of-the-mill Silicon Valley cube
- The health research involves protein manipulation that could permit fixing genetic issues like hereditary health time bombs
- The Google robot acquisitions have a relationship to the nanotech work underway at Google.
Since no one knows about these five points, I suppose these are not really myths. The Google myths are more like marketing statements, not comments anchored in the engineering that underpins Google Glass and the contact lens. Not worth worrying about since nanotech is not search. Google is a search company, not a synthetic biology outfit.
Stephen E Arnold, March 21, 2014
March 21, 2014
I highly recommend that you read Jacques Ellul’s The Technological Bluff (Le Bluff technologique) (1988), available from Amazon and at this link as of March 21, 2014.
I read “Occupy Founder Calls on Obama to Appoint Eric Schmidt ‘CEO of America’.” According to the write up, Justine Tunney, a Google engineer (who must be really smart by definition, right?) “is demanding that the tech industry take over the US government.”
Like Swift’s “A Modest Proposal,” I find the idea interesting. No doubt Mr. Schmidt is flattered by this idea from one of the Google elite. According to the write up:
Yasha Levine, a reporter for Silicon Valley publication Pando Daily, noted the seeming discrepancy between Tunney’s former anarchist beliefs and her current role at Google. Since her arrival at the firm, he writes, “she has become an astroturfer par excellence for the company, including showing up in a comment section to bash my reporting on Google’s vast for-profit surveillance operation.”
Amusing in a way. Now back to Ellul. His informed monograph points out that solving a problem via technology and technologists may not deliver the solution anticipated. I am confident that Justine Tunney is familiar with Ellul’s viewpoint and rejects it.
An old French theologian-philosopher is definitely not Google material. I would suggest that the alleged recommendations to retire all government employees with full pensions, transfer administrative authority to the tech industry, and appoint Eric Schmidt CEO of America are rational within the Googley world.
For an old person in rural Kentucky, the ideas seem to ignore Jacques Ellul’s insights and remind me of the crazy lists that IDC-type writers cook up to seem informed.
Around the cast iron stove in Harrod’s Creek, the ideas might be greeted with considerable skepticism. If you work at an outfit that wants to defeat death and build a phone inside a human body via self assembly nanotech, Justine Tunney’s ideas make perfect sense. At least that’s my assumption.
Stephen E Arnold, March 21, 2014
March 20, 2014
I read the BBC summary of Larry Page’s TED interview. You can find the BBC write up in this story: “Ted 2014: Larry Page on Google’s Robotic Future.” The statement in the allegedly accurate article I noted is [emphasis added by me]:
Mr Page was also asked about the Edward Snowden revelations, following a surprise appearance from the whistle-blower at Ted. “It is disappointing that the government secretly did this stuff and didn’t tell us about it,” said Mr Page. “It is not possible to have a democracy if we have to protect our users from the government. The government has done itself a tremendous disservice and we need to have a debate about it,” he added.
Clear enough, if accurate.
Next, I noted “US Tech Giants Knew of NSA Data Collection, Agency’s Top Lawyer Insists.” Again I don’t know if the information in the Guardian article is accurate. Nevertheless, I noted this passage:
Asked during a Wednesday hearing of the US government’s institutional privacy watchdog if collection under the law, known as Section 702 or the Fisa Amendments Act, occurred with the “full knowledge and assistance of any company from which information is obtained,” De replied: “Yes.” [Rajesh De is the NSA general counsel]
The Guardian story then tosses in:
Google, Microsoft, Facebook and AOL – claimed they did not know about a surveillance practice described as giving NSA vast access to their customers’ data. Some, like Apple, said they had “never heard” the term Prism.
A small thing. A discontinuity. Probably just a misunderstanding. It is more fun to think about Google smart watches, Google balloons, and improving “search.” Precision, recall, accuracy—hmmm.
Stephen E Arnold, March 20, 2014
March 18, 2014
Navigate to “Why Google Doesn’t Have a Research Lab.” You will read how Google does research without a Bell Labs’ type operation. According to the write up:
“There doesn’t need to be a protective shell around our researchers where they think great thoughts,” says Spector. “It’s a collaborative activity across the organization; talent is distributed everywhere.” He says this approach allows Google to make fundamental advances quickly—since its researchers are close to piles of data and opportunities to experiment—and then rapidly turn those advances into products.
If you are not familiar with Dr. Spector, you can get the Google biography at http://bit.ly/1fVC4qM.
With regard to Glass, the article states:
Spector even claims that his company’s secretive Google X division, home of Google Glass and the company’s self-driving car project (see “Glass, Darkly” and “Google’s Robot Cars Are Safer Drivers Than You or I”), is a product development shop rather than a research lab, saying that every project there is focused on a marketable end result. “They have pursued an approach like the rest of Google, a mixture of engineering and research [and] putting these things together into prototypes and products,” he says.
I find this interesting. My exposure to synthetic biology suggests that something more than a group of cubicles and some lab equipment is likely to be needed. For example, the machines required to engineer nanodevices require robots. Perhaps Google’s interest in robots is more than high tech gadget collecting?
When fooling around with protein manipulation, some basic requirements are not likely to be found in a Silicon Valley slap up building.
Important? Probably not. Dr. Babak Amirparviz can probably work out of his tiny garage. No official Google bio is available for this innovator. You may find his inventions with Dr. Whitesides interesting (US 8,574,924), or Dr. Amirparviz’s patent document Assay Device and Method (US 20100279310). I suppose these systems and methods can work in a Google snack area next to the microwave and coffee machine.
Red herrings probably thrive in Google’s “projects” set up.
At least, MIT finds this plausible.
Stephen E Arnold, March 18, 2014
March 15, 2014
Run a query for Google Flu Trends on Google. The results point to the Google Flu Trends Web site at http://bit.ly/1ny9j58. The graphs and charts seem authoritative. I find the colors and legends difficult to figure out, but Google knows best. Or does it?
A spate of stories has appeared in New Scientist, Smithsonian, and Time picking up the thread that Google Flu Trends does not work particularly well. The Science Magazine podcast presents a quite interesting interview with David Lazer, one of the authors of “The Parable of Google Flu: Traps in Big Data Analysis.”
The point of the Lazer article and the greedy recycling of the analysis is that algorithms can be incorrect. What is interesting is the surprise that creeps into the reports of Google’s infallible system being dead wrong.
For example, Smithsonian Magazine’s “Why Google Flu Trends Can’t Track the Flu (Yet)” states, “The vaunted big data project falls victim to periodic tweaks in Google’s own search algorithms.” The write-up continues:
A huge proportion of the search terms that correlate with CDC data on flu rates, it turns out, are caused not by people getting the flu, but by a third factor that affects both searching patterns and flu transmission: winter. In fact, the developers of Google Flu Trends reported coming across particular terms—those related to high school basketball, for instance—that were correlated with flu rates over time but clearly had nothing to do with the virus. Over time, Google engineers manually removed many terms that correlate with flu searches but have nothing to do with flu, but their model was clearly still too dependent on non-flu seasonal search trends—part of the reason why Google Flu Trends failed to reflect the 2009 epidemic of H1N1, which happened during summer. Especially in its earlier versions, Google Flu Trends was “part flu detector, part winter detector.”
Oh, oh. Feedback loops, thresholds, human bias—quite a surprise, apparently.
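The winter confounding described in the Smithsonian passage is easy to reproduce. Here is a toy simulation with entirely synthetic data (my assumption, not Google’s actual model or figures): two weekly series that both peak in winter, such as flu incidence and basketball-related searches, correlate strongly even though neither has anything to do with the other.

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(42)
weeks = range(104)  # two years of weekly observations

# Both synthetic series peak in winter (week 0, week 52) with added noise.
flu = [50 + 40 * math.cos(2 * math.pi * w / 52) + random.gauss(0, 5)
       for w in weeks]
basketball = [30 + 25 * math.cos(2 * math.pi * w / 52) + random.gauss(0, 5)
              for w in weeks]

r = pearson(flu, basketball)
print(f"correlation between flu and basketball searches: {r:.2f}")
```

The shared seasonal driver produces a high correlation, which is exactly why a model trained on correlated search terms can become “part flu detector, part winter detector” and miss an off-season outbreak like summer 2009’s H1N1.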
Time Magazine’s “Google’s Flu Project Shows the Failings of Big Data” realizes:
GFT and other big data methods can be useful, but only if they’re paired with what the Science researchers call “small data”—traditional forms of information collection. Put the two together, and you can get an excellent model of the world as it actually is. Of course, if big data is really just one tool of many, not an all-purpose path to omniscience, that would puncture the hype just a bit. You won’t get a SXSW panel with that kind of modesty.
Scientific American’s “Why Big Data Isn’t Necessarily Better Data” points out:
Google itself concluded in a study last October that its algorithm for flu (as well as for its more recently launched Google Dengue Trends) were “susceptible to heightened media coverage” during the 2012-2013 U.S. flu season. “We review the Flu Trends model each year to determine how we can improve—our last update was made in October 2013 in advance of the 2013-2014 flu season,” according to a Google spokesperson. “We welcome feedback on how we can continue to refine Flu Trends to help estimate flu levels.”
The word “hubris” turns up in a number of articles about this “surprising” suggestion that algorithms drift.
Forget Google and its innocuous and possibly ineffectual flu data. The coverage of the problems with the Google Big Data demonstration has significance for those who bet big money that predictive systems can tame big data. For companies licensing Autonomy- or Recommind-type search and retrieval systems, the flap over Flu Trends makes clear that algorithmic methods require babysitting; that is, humans have to be involved, and that involvement may introduce outputs that wander off track. If you have used a predictive search system, you probably have encountered off-center, irrelevant results. The question “Why did the system display this document?” is one indication that predictive search may deliver a load of fresh bagels when you wanted a load of mulch.
For systems that do “pre-crime” or predictive analyses related to sensitive matters, uninformed “end users” can accept what a system outputs and take action. This is the modern version of “Ready, Fire, Aim.” Some of these actions are not quite as innocuous as over-estimating flu outbreaks. Uninformed humans without knowledge of context and biases in the data and numerical recipes can find themselves mired in a swamp, not parked at the local Starbucks.
And what about Google? The flu analyses illustrate one thing: Google can fool itself in its effort to sell ads. Accuracy is not the point of Google or many other online information retrieval services.
Painful? Well, taking two aspirins won’t cure this particular problem. My suggestion? Come to grips with rigorous data analysis, algorithm behaviors, and old fashioned fact checking. Big Data and fancy graphics are not, by themselves, solutions to the clouds of unknowing that swirl through marketing hyperbole. There is a free lunch if one wants to eat from trash bins.
Stephen E Arnold, March 15, 2014
March 14, 2014
Yep, it’s illogical. How can a free online service get a price tag? Easily: just as Amazon boosts the fee for Prime and Facebook cooks up whizzy new types of advertising. But the big news is tucked between the lines of “Desktop Search to Decline $1.4 Billion as Google Users Shift to Mobile.”
Here’s a tasty factoid:
In the scope of Google’s overall ad revenues, mobile search is gaining significant share. Up from 19.4% in 2013, mobile search will comprise an estimated 26.7% of the company’s total ad revenues this year. Desktop search declined to 63.0% of Google’s ad revenues in 2013, having already fallen from 72.7% in 2012.
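A back-of-the-envelope calculation shows how the quoted share shifts can translate into the headline’s dollar decline even while total ad revenue grows. The total revenue figures below are hypothetical round numbers chosen purely for illustration; only the desktop share percentages (72.7% in 2012, 63.0% in 2013) come from the passage above.

```python
# Hypothetical totals in $ billions, for illustration only.
total_2012 = 46.0
total_2013 = 50.0  # total ad revenue grows year over year

# Desktop's share of ad revenue, from the quoted figures.
desktop_share_2012 = 0.727
desktop_share_2013 = 0.630

desktop_2012 = total_2012 * desktop_share_2012
desktop_2013 = total_2013 * desktop_share_2013

# Even with a growing total, desktop dollars shrink because the
# share declined faster than the total grew.
print(f"desktop 2012: ${desktop_2012:.1f}B, "
      f"desktop 2013: ${desktop_2013:.1f}B, "
      f"change: ${desktop_2013 - desktop_2012:+.1f}B")
```

With these made-up totals, desktop revenue falls by roughly $1.9 billion despite overall growth, which is the mechanism behind a “desktop search to decline $1.4 billion” headline.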
You may have noticed how lousy the search results are from Bing, Google, and Yahoo. Even the metasearch engines are struggling. Just run some queries on Ixquick.com or DuckDuckGo.com and do some results comparisons.
Because most of the world’s Internet users rely on Google to deliver comprehensive and accurate results, users are unaware of the information that is not easily findable. Investigators and professional researchers are increasingly aware that finding information is getting harder, a lot harder if our research is on the beam.
As users shift from desktops to mobile devices, the GoTo/Overture advertising model loses efficiency. There are a number of reasons, ranging from the difficulty of entering queries while riding a crowded bus, to small screens, to the dorky big-type interfaces that are gaining popularity, to the need to provide a brain-dead, limited-function app to help a person locate pizza.
For Google and other desktop-centric companies, the shift has implications for advertising revenue. Smaller screens and changing behavior mean the old GoTo/Overture model won’t work. The impact on traditional Web sites is not good. Here’s a report for a company that did the search engine optimization thing, the redesign thing, and the new marketing “experts” thing. Looks grim, doesn’t it?
I won’t name the owner of this set of red arrows, but you can check out your own Web site and blog usage stats and compare your “performance” to this outfit’s.