July 24, 2014
Oh, dear. HP was so excited to announce that it is working on a new kind of computer, dubbed simply the Machine. Dell’s head software honcho, however, decided to rain on the competition’s parade, we learn from IT World’s “Dell Executive Says HP’s New Machine Architecture Is Laughable.” Apparently, the problem is that the new technology would render many existing programs obsolete. Gee, who’d ever want to support something so disruptive (besides, apparently, nearly everyone in Silicon Valley)? Writer James Niccolai reports:
“‘The notion that you can reach some magical state by rearchitecting an OS is laughable on the face of it,’ John Swainson, head of Dell’s software business, told reporters in San Francisco Thursday when asked to comment on the work. The basic elements of computing, like processor and memory, are likely to be reconfigured in some way, but not so radically that existing software won’t run, he said. ‘I don’t know many people who think that’s a really good idea.’”
Really? I think that’s called “technological progress,” and I believe many people are pretty keen on the idea. I, for one, haven’t always been pleased when required to update or swap out software, but I’m awfully glad I’m not running Windows 95 anymore. The write-up goes on:
“Jai Menon, head of Dell Research, said another advanced memory type — phase-change memory — is going to be here ‘sooner than what HP is banking on.’ Those are strong words from a company that isn’t exactly known for pushing the boundaries of computing, having built its business mainly on cheap servers and PCs. Dell’s long-term research looks out ‘two years and beyond,’ Menon said earlier in the day — not far enough that it’s likely to hustle a new memory technology to market itself. That didn’t stop Menon from claiming there are ‘at least two other types of memory technology better than what HP is banking on.’ He named phase-change memory as one of them — another technology HP has worked on in its labs.”
To be honest, we tend to be suspicious about big claims like HP’s Machine hype. However, to declare the project “laughable” because it accepts a changing software landscape seems short-sighted.
Cynthia Murrell, July 24, 2014
July 22, 2014
The article titled Does HP Have a Development Pipeline or a Pipe Dream? by Steven J. Vaughan-Nichols on Computerworld answers its titular question with great certainty. Citing HP’s layoffs, troubled management, and Moonshot flop, the article goes so far as to predict that HP’s demise is more probable than its delivering on this new technology. The article states,
“Let’s do a reality check on HP’s plans. It needs one major technology breakthrough, one major step forward in existing technology and a new operating system to boot. Even HP doesn’t expect to see all of this working anytime within the next three years, but to think it can happen within that kind of time frame would be wildly optimistic… If this idea were coming from, say, Apple, IBM or Intel, I’d have to give them the benefit of the doubt.”
“The Machine” that HP is so excited to promote also requires two in-the-works technologies to be fully functional: memristors and silicon photonics. Both are innovative pieces of the puzzle that would allow for an entirely new system architecture. But when? The article seems to posit that HP is in a race for its own survival. Things may not be quite so dire at HP, but it does not help that even an HP employee has admitted memristors are unlikely to become a reality “this decade.”
Chelsea Kerwin, July 22, 2014
July 21, 2014
The article titled Clayton Christensen Responds to New Yorker Takedown of ‘Disruptive Innovation’ on Businessweek consists of an interview with Christensen and his thoughts on Jill Lepore’s article. Two Harvard faculty members squabbling is, of course, fascinating, and Christensen defends himself well in this article with his endless optimism and insistence on calling Lepore “Jill.” The article describes disruptive innovation and Jill Lepore’s major problems with it as follows,
“The theory holds that established companies, acting rationally and carefully to stay on top, leave themselves vulnerable to upstarts who find ways to do things more cheaply, often with a new technology….Disruption, as Lepore notes, has since become an all-purpose rallying cry, not only in Silicon Valley—though especially there—but in boardrooms everywhere. ‘It’s a theory of history founded on a profound anxiety about financial collapse, an apocalyptic fear of global devastation, and shaky evidence,’ she writes.”
Christensen refers Lepore to his book, in which he claims to answer all of her refutations of his theory. He, in turn, takes issue with her poor scholarship, and sees her as trying to discredit him rather than working to improve the theory through conversation and constructive criticism. At the end of the article he all but dares Lepore to come have a productive meeting with him. Things might get awkward at the Harvard cafeteria if these two cross paths.
Chelsea Kerwin, July 21, 2014
July 18, 2014
The article titled Does Deep Learning Have Deep Flaws on KDnuggets explains the implications of a recent study of neural networks and image classification. The study, conducted by Google, NYU, and the University of Montreal, found that neural networks harbor an as-yet-unexplained flaw when it comes to recognizing images that look identical to the human eye: researchers can generate misclassified “adversarial” images that look exactly the same as correctly classified images. The article goes on to explain,
“The network may misclassify an image after the researchers applied a certain imperceptible perturbation. The perturbations are found by adjusting the pixel values to maximize the prediction error. For all the networks we studied (MNIST, QuocNet, AlexNet), for each sample, we always manage to generate very close, visually indistinguishable, adversarial examples that are misclassified by the original network… The continuity and stability of deep neural networks are questioned. The smoothness assumption does not hold for deep neural networks any more.”
The article makes this statement and later links it to the possibility that these “adversarial” images exist even in the human brain. Since the study found that one perturbation can cause misclassification in separate networks trained on different datasets, it suggests that these “adversarial” images are in some sense universal. Most importantly, the study suggests that AI has blind spots that have not been addressed. They may be rare, but as our reliance on the technology grows, they must be recognized and somehow accounted for.
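The perturbation search described in the quote (adjusting pixel values to maximize the prediction error) can be sketched on a deliberately tiny model. The following is a hypothetical illustration using a hand-rolled logistic regression on synthetic data, not the paper’s deep networks; all names and numbers here are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two well-separated classes of synthetic 64-pixel "images".
X = np.vstack([rng.normal(-1.0, 1.0, (200, 64)),
               rng.normal(+1.0, 1.0, (200, 64))])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Train logistic regression with plain gradient descent.
w, b = np.zeros(64), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

x = X[0]  # a class-0 sample the trained model classifies correctly

# Nudge every pixel by a small step in the direction that increases the
# prediction error (for this linear model, that direction is sign(w))
# until the predicted label flips.
step, x_adv = 0.01, x.copy()
while sigmoid(x_adv @ w + b) < 0.5:
    x_adv = x_adv + step * np.sign(w)

print("max per-pixel change:", np.max(np.abs(x_adv - x)))
print("original score:", sigmoid(x @ w + b),
      "adversarial score:", sigmoid(x_adv @ w + b))
```

Because this model is linear, the error-increasing direction in pixel space is trivial to compute; for the deep networks in the study, the same idea requires backpropagating the loss gradient through the network, but the principle of following that gradient in pixel space is the same.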
Chelsea Kerwin, July 18, 2014
July 15, 2014
Admit it, you have flirted with a chatbot before. You do not need to feel ashamed; everyone has tested a chatbot’s sentience with love confessions and flirting. Most of the time, the chatbot’s remarks are smart-alecky, or it insists the relationship would be impossible. That might change, says Kurzweil Accelerating Intelligence in the article “Search Engines Will Be Able To Flirt With Users By 2029.” Google’s Director of Engineering, Ray Kurzweil, spoke at the recent Exponential Finance Conference about computers gaining near-human intelligence. He claims that within fifteen years, people will be able to have an emotional relationship with a computer. The recent science-fiction movie Her is a good example of how humans and computers may interact in the future.
Dr. Kurzweil’s comment suggests that Google is making progress on a natural-language search engine that will allow users to ask questions and get meaningful responses.
“ ‘That is the cutting edge of human intelligence,’ Dr. Kurzweil said. He was less impressed with claims that a chatbot emulating a 13-year-old Ukrainian boy called Eugene Goostman had passed the Turing test. Dr. Kurzweil said, ‘Eugene does not keep track of the conversation, repeats himself word for word, and responds with typical chatbot non sequiturs.’ “
It sounds like Google has something sequestered in its vaults, but for now people are still stuck flirting with chatbots. While you might have to wait before you can date an AI, at least these improvements should make talking to an automated system less painful.
Whitney Grace, July 15, 2014
July 14, 2014
Now this is quite the claim. Bloomberg Businessweek declares, “With ‘The Machine’ HP May Have Invented a New Kind of Computer.” At its heart lies something HP Labs has developed and dubbed the memristor. The use of this historical term has been a bit controversial, but, whatever the case, HP has claimed the name now. Writer Ashlee Vance explains:
“At the simplest level, the memristor consists of a grid of wires with a stack of thin layers of materials such as tantalum oxide at each intersection. When a current is applied to the wires, the materials’ resistance is altered, and this state can hold after the current is removed. At that point, the device is essentially remembering 1s or 0s depending on which state it is in, multiplying its storage capacity. HP can build these chips with traditional semiconductor equipment and expects to be able to pack unprecedented amounts of memory—enough to store huge databases of pictures, files, and data—into a computer.”
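The storage mechanism the quote describes (a persistent resistance state at each wire crossing that encodes a 1 or a 0) can be caricatured in a few lines of software. The class below is a loose, hypothetical analogy, not HP’s actual design; the resistance values are arbitrary:

```python
import numpy as np

class ToyCrossbar:
    """Toy model of a memristor grid: each crosspoint holds a resistance
    state that persists after the "current" is removed, so the grid
    doubles as nonvolatile storage of 1s and 0s."""

    def __init__(self, rows, cols, r_high=1e5, r_low=1e2):
        self.r_high, self.r_low = r_high, r_low   # ohms, arbitrary
        self.r = np.full((rows, cols), r_high)    # start all in the "0" state

    def write(self, row, col, bit):
        # Applying a current toggles the material's resistance;
        # low resistance encodes 1, high resistance encodes 0.
        self.r[row, col] = self.r_low if bit else self.r_high

    def read(self, row, col):
        # Reading senses the resistance without changing the state.
        return 1 if self.r[row, col] == self.r_low else 0

xbar = ToyCrossbar(4, 4)
xbar.write(2, 3, 1)
print(xbar.read(2, 3), xbar.read(0, 0))  # → 1 0
```

The point of the analogy is persistence: unlike DRAM, nothing here needs refreshing, which is why the article frames memristors as collapsing the distinction between memory and storage.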
While more and more memory is always better, we are not sure this counts as a “new kind of computer”; it seems more like the leading edge of a successor to Moore’s law. Be that as it may, the development does promise to speed processing significantly. The new computers will also need a new OS. Unlike the OSes we know and love, this Machine OS will assume the availability of the high-speed, persistent memory store provided by the new technology. Linux and Android versions are also in the works. The write-up goes on to note that memristor fibers could conceivably replace Ethernet cables.
Vance says the engineering community is impressed with this development at HP, and that it has helped with recruitment for the company. According to a representative, we could see the Machine on shelves between 2017 and 2020, but the article points out that HP has missed earlier self-imposed deadlines around this project. We shall see.
Cynthia Murrell, July 14, 2014
July 7, 2014
There’s another way to bookmark Web pages. Stache from d3i offers several features that go beyond those offered by the browser basics. The app has been designed for Macs and Apple’s i-devices; Android and Windows users, I’m afraid, are out of luck. The description pledges:
“If a page is useful, Stache it! Stache turns cluttered browser bookmarks and overwhelming reading lists in to a beautiful, visual and fully searchable collection of useful pages. No more digging through endless lists of page titles, or spending your precious time organising your bookmarks into folders. In one click a web page becomes part of your personal repository of useful information, archived, searchable and accessible in seconds from all of your devices.”
Features include one-click bookmarking; a stored screenshot of each marked page; entire-page search (they call this “complete” search); and, of course, syncing to the (i) cloud. Mac users get additional features, like full-page archiving and bookmark importing/exporting. The app can be downloaded for Macs here, for iPhones, iPads, and iPods here, and for Safari or Chrome on a Mac here.
Founded in 2008, d3i specializes in designing apps for the iPhone and iPad. The company also developed the journaling app Momento, which integrates with Web services like Facebook, Twitter, Flickr, YouTube, and RSS feeds. D3i is based in Buckinghamshire, U.K.
Cynthia Murrell, July 07, 2014
June 6, 2014
If you want to upload your brain into a computer and be immortal, there is bad news for you. According to Medium, a “Mathematical Model Of Consciousness Proves Human Experience Cannot Be Modeled On A Computer.” The article notes that consciousness has long been a taboo word in the scientific community. Neuroscientist Giulio Tononi of the University of Wisconsin has a theory that says consciousness cannot be broken down. There are other consciousness theories out there, but Tononi’s approach is different:
“What makes Tononi’s ideas different from other theories of consciousness is that it can be modeled mathematically using ideas from physics and information theory. That doesn’t mean this theory is correct. But it does mean that, for the first time, neuroscientists, biologists physicists and anybody else can all reason about consciousness using the universal language of science: mathematics.”
Phil Maguire and a team at the National University of Ireland tested Tononi’s theory and collected data on consciousness. Maguire and his team found that while they can pinpoint a single type of experience, e.g. smelling, and a single instance of that experience, they cannot replicate the experience exactly. Sure, they can capture a human smelling, but so many other experiences are going on in the brain besides smelling that the full experience cannot be pinned down.
In other words, the brain cannot be compressed without losing information. The brain is so complex that ALL of its processes cannot be mapped out. This new theory will not stop scientists from trying, but it does contribute to the broader understanding of how to make machines replicate human thought patterns. Computers will never be human, but at least the idea will always make a good plot for science fiction.
Whitney Grace, June 06, 2014
June 5, 2014
The overview on VisionDummy of why indexing is hard is titled The Curse of Dimensionality in Classification. The article provides a surprisingly readable explanation built around an example of sorting images of cats and dogs. The first step is to create features that assign values to the images (such as color or texture). From there, the article states,
“We now have 5 features that, in combination, could possibly be used by a classification algorithm to distinguish cats from dogs. To obtain an even more accurate classification, we could add more features, based on color or texture histograms, statistical moments, etc. Maybe we can obtain a perfect classification by carefully defining a few hundred of these features? The answer to this question might sound a bit counter-intuitive: no we can not!.”
Instead, simply adding more and more features, that is, increasing dimensionality, lessens the performance of the classifier. A graph is provided showing a sharp descending line past a point called the “optimal number of features.” At that point in the example there exists a three-dimensional feature space in which it is possible to fully separate the classes (still dogs and cats). When features are added past the optimal number, overfitting occurs: the classifier learns exceptions specific to the training data and fails to generalize. The article goes on to suggest remedies such as cross-validation and feature extraction.
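One concrete way to see why extra dimensions hurt distance-based classifiers is the concentration of distances: as features are piled on, random points become nearly equidistant, so “nearest” loses its discriminating power. The snippet below is an assumed toy experiment, not the article’s exact setup; the dimensions and point counts are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(42)

def distance_contrast(dim, n_points=500):
    """Spread of distances, relative to the smallest distance, from the
    center of the unit hypercube to uniformly random points. A large
    value means distances discriminate well; near zero means all
    points look about equally far away."""
    points = rng.uniform(0.0, 1.0, (n_points, dim))
    d = np.linalg.norm(points - 0.5, axis=1)
    return (d.max() - d.min()) / d.min()

# Contrast collapses as dimensionality grows.
for dim in (2, 10, 100, 1000):
    print(dim, round(distance_contrast(dim), 3))
```

In two dimensions the nearest random point is much closer than the farthest, but by a thousand dimensions the gap all but vanishes, which is the geometric intuition behind the descending curve past the “optimal number of features.”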
Chelsea Kerwin, June 05, 2014
June 5, 2014
The abstract titled Bell System Technical Journal, 1922-1983 on Alcatel-Lucent’s site provides some insight into the workings of Bell Labs over the years. Alcatel-Lucent partnered with IEEE to make the journals accessible. While the search aspect may be so-so, the content provided is excellent, going all the way back to the first issue in 1922. The article offers this summation of the historical importance of Bell Labs,
“Bell Labs is the source of many significant contributions, of course, in the area of telephony, but also in memory devices, imaging devices, system organization, computers and software technology, as well as acoustics, optics, switching, transmission, wireless and data communication. New principles, new materials, new devices, and new systems from Bell Telephone Laboratories resulted in new industries, hundreds of new products, and thousands of new jobs. The invention of the transistor in 1947, and subsequent advances … ultimately enabled the digital world.”
For those interested in the history of innovation and the foundations of the current era of technology, this compilation of Bell Labs’ journals provides a wealth of interesting articles and papers. Besides influencing the evolution of the telephone, Bell Labs also contributed to the formation of new industries in the areas of memory devices, computers and software, system organization and many others.
Chelsea Kerwin, June 05, 2014