Google Research Shares Some Key Findings of 2013

August 20, 2014

Google is famous for its very curious research arm, and now the company has published its favorite findings of 2013. We learn of the generous gesture from eWeek’s “Google Shares Research Findings with Scientific World,” in which writer Todd R. Weiss discusses the roundup originally posted on the Google Research blog. It is a very interesting list, and worth checking out in full. What caught my eye were the reports on machine learning and natural language processing. Weiss writes:

“Machine learning is a continuing topic, as seen in papers including … the paper ‘Efficient Estimation of Word Representations in Vector Space,’ which looks at a ‘simple and speedy method for training vector representations of words,’ according to the post.

“’The resulting vectors naturally capture the semantics and syntax of word use, such that simple analogies can be solved with vector arithmetic. For example, the vector difference between “man” and “woman” is approximately equal to the difference between “king” and “queen,” and vector displacements between any given country’s name and its capital are aligned,’ the post read.”
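The analogy trick is easy to reproduce once a set of trained word vectors is in hand. Below is a minimal sketch, assuming the vectors have already been loaded into a plain dictionary of NumPy arrays; the helper function and example words are illustrative, not code from the paper.

```python
# Minimal sketch of analogy solving with word vectors (illustrative only).
# Assumes `vectors` maps words to NumPy arrays of equal length, e.g. loaded
# from a pre-trained word2vec model.
import numpy as np

def nearest_word(target, vectors, exclude=()):
    """Return the word whose vector is most cosine-similar to `target`."""
    best_word, best_score = None, -1.0
    for word, vec in vectors.items():
        if word in exclude:
            continue
        score = np.dot(target, vec) / (np.linalg.norm(target) * np.linalg.norm(vec))
        if score > best_score:
            best_word, best_score = word, score
    return best_word

# "man" is to "woman" as "king" is to ...?
# query = vectors["king"] - vectors["man"] + vectors["woman"]
# nearest_word(query, vectors, exclude={"king", "man", "woman"})  # ideally "queen"
```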

Weiss next turns to natural language processing with the report, “Token and Type Constraints for Cross-Lingual Part-of-Speech Tagging.” He quotes the paper:

“Constructing part-of-speech taggers typically requires large amounts of manually annotated data, which is missing in many languages and domains. In this paper, we introduce a method that instead relies on a combination of incomplete annotations projected from English with incomplete crowd-sourced dictionaries in each target language. The result is a 25 percent error reduction compared to the previous state of the art.”
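To make the abstract a bit more concrete, here is a toy sketch of how token constraints (tags projected from aligned English words) and type constraints (a crowd-sourced tag dictionary) can be combined to prune the tags a weakly supervised tagger may consider. It only illustrates the coupling idea under assumed toy data and a hypothetical universal tag set; it is not the authors’ model or code.

```python
# Toy illustration of coupling token and type constraints (not the paper's code).
ALL_TAGS = {"NOUN", "VERB", "ADJ", "ADV", "PRON", "DET", "ADP",
            "NUM", "CONJ", "PRT", ".", "X"}  # assumed universal tag set

def allowed_tags(token, projected_tag, tag_dictionary):
    """Return the tag set a constrained tagger may consider for `token`."""
    type_tags = tag_dictionary.get(token, ALL_TAGS)   # type constraint (dictionary)
    if projected_tag is None:                         # no aligned English word
        return type_tags
    # Token constraint from projection; prefer agreement with the dictionary,
    # otherwise fall back to the dictionary alone (projections are noisy).
    return ({projected_tag} & type_tags) or type_tags

# Example with hypothetical Spanish data:
# tag_dictionary = {"perro": {"NOUN"}, "corre": {"VERB", "NOUN"}}
# allowed_tags("corre", "VERB", tag_dictionary)  # -> {"VERB"}
```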

The article concludes by noting that Google is no stranger to supporting the research community, pointing to its App Engine for Research Awards program. It also notes that the company grants academics access to Google infrastructure for research purposes. Will all this generosity help Google in the PR arena?

Cynthia Murrell, August 20, 2014

Sponsored by ArnoldIT.com, developer of Augmentext

Salesforce Snaps Up RelateIQ

August 20, 2014

Bubble? What bubble? ZDNet informs us that “Salesforce Acquired Big Data Startup RelateIQ” for a sum approaching $400 million. The deal will be Salesforce’s second-largest acquisition, following its purchase of “marketing cloud” outfit ExactTarget last year for $2.5 billion. Reporter Natalie Gagliordi writes:

“According to a document filed Friday with the Securities and Exchange Commission, Salesforce will pay up to $390 million for the Palo Alto, California-based startup, which provides relationship intelligence via data science and machine learning. RelateIQ will become a Salesforce subsidiary, the filing says.

“On its website, RelateIQ says it’s built ‘the world’s first Relationship Intelligence platform’ that redefines the world of CRM. In a nutshell, the platform captures sales data from email, calendars and smartphone calls and social media to provide insights in real time.”

Relationship intelligence, eh? That’s indeed a new one (outside the discipline of sociology, anyway). RelateIQ launched in 2011, based in Palo Alto. In nearby San Francisco, Salesforce was launched in 1999 by a former Oracle exec. Now, its success in cloud-based customer-relationship-management solutions has it operating offices around the world. Will the spending spree pay off?

Cynthia Murrell, August 20, 2014

Sponsored by ArnoldIT.com, developer of Augmentext

Short Honk: Buzzword Mania and the Internet of Things

August 14, 2014

Short honk: I don’t have too much to say about “Gartner: Internet of Things Has Reached Hype Peak.” Wow will have to suffice. The diagram in the article is amazing as well. A listicle is pretty darned limited when compared to a plotting of buzzwords from a consulting firm that vies with McKinsey, Bain, Boston Consulting, and Booz for respect. Another angle on this article is that it is published by a company that has taken a frisky approach to other folks’ information. For some background, check out “Are HP, Google, and IDC Out of Square.” I wanted to assemble a list of the buzzwords in the Network World article, but even for my tireless goslings, the task was too much. I could not figure out what the legends on the x and y axes meant. Do you know what a “plateau of productivity” is? I am not sure what “productivity” means unless I understand the definition in use by the writer.

One statement jumps out at me:

“As enterprises embark on the journey to becoming digital businesses, they will leverage technologies that today are considered to be ‘emerging’,” said Hung LeHong, vice president and Gartner fellow. “Understanding where your enterprise is on this journey and where you need to go will not only determine the amount of change expected for your enterprise, but also map out which combination of technologies support your progression.”

The person making this statement probably has a good handle on the unpleasantness of a legal dispute. For some color, please see “Gartner Magic Quadrant in the News: NetScout Matter.”

Stephen E Arnold, August 14, 2014

Watson Is In the Army Now

August 14, 2014

First he trained to become a world-class chef, then he went to medical school; now Watson is joining the army. Gigaom reports that “IBM And USAA Put Watson To Work In The Military.” While Watson will not go through boot camp or face deployment, the supercomputer will be used to help military personnel transition to civilian life. The IBM Engagement Advisor is a tool that service people will be able to query with questions related to the transition experience, including health benefits and finances. The Engagement Advisor scans more than 3,000 military documents and can even answer questions about their content.

This is another push by IBM to make Watson a feasible product.

“In a statement announcing the USAA application, which is in pilot, IBM SVP Mike Rhodin said: ‘Putting Watson into the hands of consumers is a critical milestone toward improving how we work and live.’ And make no mistake; IBM needs to get Watson out there and in use or risk squandering this lead.”

IBM’s product revenue has dropped over the past two years. The company invested a huge amount of money and man-hours in developing Watson, and now it is focused on seeing a return on that investment. Watson is going beyond winning game shows to more practical applications.

Whitney Grace, August 14, 2014

Sponsored by ArnoldIT.com, developer of Augmentext

Watson Goes To Medical School

August 11, 2014

Watson has been trying his hand at becoming a gourmet chef, but now the smart machine plans to support healthcare providers with medical knowledge. According to Technology Review, “IBM Aims To Make Medical Expertise A Commodity,” supporting organizations by giving them a cheaper way to improve their expertise. The push to make medical knowledge a commodity comes from the rising worry that cancer rates will soar in the next decade and there will not be enough medical professionals to go around.

IBM wants to prove that Watson can be used beyond cooking and answering trivia questions. The computer has been deployed in two beta tests with hopes it will improve service quality and take the paperwork load off healthcare professionals:

“Lynda Chin, a professor of genomic medicine at MD Anderson and a leader of the center’s Watson project, anticipates that in the future that kind of product will be highly valued by general oncologists and regional cancer practices. ‘Physicians are too burdened on paperwork and squeezed on revenue to keep up with the latest literature,’ she says. That limits the care physicians can deliver, and it has financial consequences: ‘If you can’t make a decision based on your own knowledge, you have to refer the patient out, and that’s going to hurt your bottom line.’”

Dr. Watson has yet to earn money for IBM, whose revenue has decreased over the past two years, partly due to the shift toward cloud deployment. The betas are only being used for research and development, and they are demonstrating that Watson has trouble deciphering medical jargon.

IBM is trying to earn a buck on the changing medical industry. The article ends with how IBM will try to monetize Watson for healthcare, but it is disappointing that the patients come off as an afterthought. Making money comes first, while saving lives is second.

Whitney Grace, August 11, 2014

Sponsored by ArnoldIT.com, developer of Augmentext

IBM Buzz Equals Revenues: The Breakthrough Assumption

August 8, 2014

I am no wizard of finance. I have kept track of money for my Cub Scout troop. I do understand this chart from Google Finance:

[Image: Google Finance chart of IBM revenue, profit margin, and operating income]

The blue column shows that revenue is going nowhere, maybe even trending down. The red line shows IBM’s profit margin, which is flat. And the gold bar presents IBM’s operating income. Notice that it, too, is flat. The flat lines are achieved by cost cutting, selling off dead-end businesses, and introducing innovations like offices an employee has to sign up to use.

I have focused on IBM Watson because I am interested in search and content processing. To eliminate confusion: I don’t work in this field; it is a hobby. This is a fact that perplexes the public relations professionals who want me to write about their client. Yep, that works really well. If you read my comments in this blog, you will know that I take a slightly more skeptical approach to the search and content processing saucisson that flows across my desk here in Harrod’s Creek, Kentucky. If you are a fan of ground-up mystery meat, you can check out my most recent saucisson reveal here.

What caught my attention today was not a report about IBM landing a major deal. Nope. I did not notice a story about IBM’s Jeopardy champ smashing Autonomy’s single quarter revenues prior to the company’s sale to Hewlett Packard. Nope. I did not read about a billion dollar licensing deal for IBM’s semantic technology to a mobile phone giant. Nope.

What I learned about was an IBM chip that does not use the Von Neumann architecture. Now this is good news. In my intelligence community lecture about the computational limitations of today’s content processing systems, the culprit is Von Neumann’s approach to computing. In a nutshell, some numerical recipes cannot be calculated in any practical way because of pesky hurdles like Big O growth or the P=NP problem.

IBM, if I believe the flood of remarkably similar articles, has kicked Von Neumann to the side of the road with SyNapse. I do like the quirky capitalization and the association between a neural synapse in the brain and IBM’s innovation.

Check out “IBM Chip Processes Data Similar to the Way Your Brain Does.” You can find almost the same story in the New York Times, the Wall Street Journal, and other “real” journalistic constructs. (IBM’s public relations firm certainly delivered some serious content marketing in my opinion.)

Here’s a quote I noted from the Technology Review article:

The new chip is not yet a product, but it is powerful enough to work on real-world problems. In a demonstration at IBM’s Almaden research center, MIT Technology Review saw one recognize cars, people, and bicycles in video of a road intersection. A nearby laptop that had been programmed to do the same task processed the footage 100 times slower than real time, and it consumed 100,000 times as much power as the IBM chip. IBM researchers are now experimenting with connecting multiple SyNapse chips together, and they hope to build a supercomputer using thousands.

There is a glimpse of the future in this passage and a reminder that quite a bit of work remains; for example, “they [IBM researchers] hope to build a supercomputer…”

Hope.

In addition to low power consumption, the “breakthrough” gives IBM an opportunity to “create a library of ready-made blocks of code to make the process easier.”

Who is fabricating the chip? According to IBM’s statement in “New IBM SyNapse Chip Could Open Era of Vast Neural Networks,” the 5.4-billion-transistor chip is fabricated by Samsung. The IBM statement says:

The chip was fabricated using Samsung’s 28nm process technology that has a dense on-chip memory and low-leakage transistors.

That seems like a great idea. I wonder if any of the Samsung engineers learned anything from the exercise. Probably not. The dust-ups between Samsung and some of its other “partners” are probably fictional. Since IBM seems to be all thumbs when it comes to fabbing chips, the Samsung step may be a “we had no options” action.

IBM’s breakthrough is not just a chip. Nope. It seems to be:

a component of a complete end-to-end vertically integrated ecosystem spanning a chip simulator, neuroscience data, supercomputing, neuron specification, programming paradigm, algorithms and applications, and prototype design models. The ecosystem supports all aspects of the programming cycle from design through development, debugging, and deployment.

To speed along understanding of what IBM has figured out:

IBM has designed a novel teaching curriculum for universities, customers, partners, and IBM employees.

I assume this is part of IBM’s master plan for generating more revenue and profit.

Several thoughts crossed my mind as I worked through some of the “real” news outfits’ reports about the SyNapse:

  1. How long will it be before IBM’s customers, partners, and employees create a product that generates revenue?
  2. Will the SyNapse eliminate the lengthy training and configuration processes for IBM Watson?
  3. Will Samsung and other customers, partners, and RIFfed IBM employees stand on the shoulders of the giants in IBM’s research centers and make money before IBM can get its aircraft carrier fleet turned in a new direction?

I don’t want to rain on the very noisy parade, but I think neurosynaptic technology will require considerable time, money, effort, and coding. But if it boosts IBM’s stock price and creates sales opportunities, SyNapse will have played its part in making the revenue line and the net profit line perform a Cobra maneuver and blast upward like an Su-35.

While I wait for Watson, I will use Bing, Google, and Yandex for search: limited and old-fashioned technology that sort of works. Watson running on SyNapse remains an interesting lab project that has produced some massive content marketing zing.

Stephen E Arnold, August 8, 2014

Dell Exec Criticizes HP Machine Project

July 24, 2014

Oh, dear. HP was so excited to declare it is working on a new kind of computer, dubbed simply the Machine. Dell’s head software honcho, however, decided to rain on its competitor’s parade, we learn from IT World’s “Dell Executive Says HP’s New Machine Architecture Is Laughable.” Apparently, the problem is that the new technology would render many existing programs obsolete. Gee, who’d ever want to support something so disruptive (besides, apparently, nearly everyone in Silicon Valley)? Writer James Niccolai reports:

“‘The notion that you can reach some magical state by rearchitecting an OS is laughable on the face of it,’ John Swainson, head of Dell’s software business, told reporters in San Francisco Thursday when asked to comment on the work. The basic elements of computing, like processor and memory, are likely to be reconfigured in some way, but not so radically that existing software won’t run, he said. ‘I don’t know many people who think that’s a really good idea.’”

Really? I think that’s called “technological progress,” and I believe many people are pretty keen on the idea. I, for one, haven’t always been pleased when required to update or swap out software, but I’m awfully glad I’m not running Windows 95 anymore. The write-up goes on:

“Jai Menon, head of Dell Research, said another advanced memory type — phase-change memory — is going to be here ‘sooner than what HP is banking on.’ Those are strong words from a company that isn’t exactly known for pushing the boundaries of computing, having built its business mainly on cheap servers and PCs. Dell’s long-term research looks out ‘two years and beyond,’ Menon said earlier in the day — not far enough that it’s likely to hustle a new memory technology to market itself. That didn’t stop Menon from claiming there are ‘at least two other types of memory technology better than what HP is banking on.’ He named phase-change memory as one of them — another technology HP has worked on in its labs.”

To be honest, we tend to be suspicious about big claims like HP’s Machine hype. However, to declare the project “laughable” because it accepts a changing software landscape seems short-sighted.

Cynthia Murrell, July 24, 2014

Sponsored by ArnoldIT.com, developer of Augmentext

Don’t Hold Your Breath for HP’s The Machine

July 22, 2014

The article titled Does HP Have a Development Pipeline or a Pipe Dream? by Steven J. Vaughan-Nichols on Computerworld answers the titular question with great certainty. Citing HP’s layoffs, troubled management, and Moonshot flop, the article goes so far as to predict that HP’s demise is more probable than its delivering on this new technology. The article states,

“Let’s do a reality check on HP’s plans. It needs one major technology breakthrough, one major step forward in existing technology and a new operating system to boot. Even HP doesn’t expect to see all of this working anytime within the next three years, but to think it can happen within that kind of time frame would be wildly optimistic… If this idea were coming from, say, Apple, IBM or Intel, I’d have to give them the benefit of the doubt.”

“The Machine” that HP is so excited to promote also requires two pieces of in-the-works technology to be fully functional: memristors and silicon photonics. Both are innovative pieces of the puzzle that would allow for an entirely new system architecture. But when? The article seems to posit that HP is in a race for its own survival. Things may not be so dire as that at HP, but perhaps they are when even an HP employee has admitted that memristors are far from becoming a reality in “this decade.”

Chelsea Kerwin, July 22, 2014

Sponsored by ArnoldIT.com, developer of Augmentext

Harvard Professors’ Brawl of Words over Disruptive Innovation

July 21, 2014

The article titled Clayton Christensen Responds to New Yorker Takedown of ‘Disruptive Innovation’ on Businessweek consists of an interview with Christensen and his thoughts on Jill Lepore’s article. Two Harvard faculty members squabbling is, of course, fascinating, and Christensen defends himself well in this article with his endless optimism and insistence on calling Lepore “Jill.” The article describes disruptive innovation and Jill Lepore’s major problems with it as follows,

“The theory holds that established companies, acting rationally and carefully to stay on top, leave themselves vulnerable to upstarts who find ways to do things more cheaply, often with a new technology….Disruption, as Lepore notes, has since become an all-purpose rallying cry, not only in Silicon Valley—though especially there—but in boardrooms everywhere. “It’s a theory of history founded on a profound anxiety about financial collapse, an apocalyptic fear of global devastation, and shaky evidence,” she writes.”

Christensen refers Lepore to his book, in which he claims to answer all of her objections to his theory. He, in turn, takes issue with what he considers her poor scholarship and sees her as trying to discredit him rather than working together to improve the theory through conversation and constructive criticism. At the end of the article he basically dares Lepore to come have a productive meeting with him. Things might get awkward at the Harvard cafeteria if these two cross paths.

Chelsea Kerwin, July 21, 2014

Sponsored by ArnoldIT.com, developer of Augmentext

The Discovery of the “Adversarial” Image Blind Spot in Neural Networks

July 18, 2014

The article titled Does Deep Learning Have Deep Flaws on KDnuggets explains the implications of the results of a recent study of neural networks and image classification. The study, conducted by Google, NYU, and the University of Montreal, found that neural networks harbor an as-yet-unexplained flaw when it comes to recognizing images that look identical to the human eye. Researchers can generate “adversarial” images that a network misclassifies even though they look exactly the same as a correctly classified image. The article goes on to explain,

“The network may misclassify an image after the researchers applied a certain imperceptible perturbation. The perturbations are found by adjusting the pixel values to maximize the prediction error. For all the networks we studied (MNIST, QuocNet, AlexNet), for each sample, we always manage to generate very close, visually indistinguishable, adversarial examples that are misclassified by the original network… The continuity and stability of deep neural networks are questioned. The smoothness assumption does not hold for deep neural networks any more.”

The article makes this statement and later links it to the possibility of these “adversarial” images existing even in the human brain. Since the study found that a single perturbation can cause misclassification in separate networks trained on different datasets, it suggests that these “adversarial” images are universal. Most importantly, the study suggests that AI has blind spots that have not been addressed. They may be rare, but as our reliance on the technology grows, they must be recognized and somehow accounted for.
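For a concrete sense of the “perturbation” described in the quote, here is a simplified PyTorch sketch of a gradient-based nudge that pushes pixel values in the direction that increases a classifier’s prediction error. It illustrates the general idea only; it is not the exact optimization procedure the study used, and the model, image tensor, and step size are assumptions for the example.

```python
# Simplified sketch of crafting an adversarial image (illustrative, not the
# study's method). Assumes `model` is a trained classifier, `image` is a
# batched tensor with pixel values in [0, 1], and `true_label` is its class
# index as a tensor.
import torch
import torch.nn.functional as F

def adversarial_example(model, image, true_label, epsilon=0.007):
    """Perturb `image` slightly in the direction that increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Each pixel moves a tiny, visually imperceptible step that raises the
    # prediction error; the result is often misclassified by the network.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```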

Chelsea Kerwin, July 18, 2014

Sponsored by ArnoldIT.com, developer of Augmentext
