September 28, 2014
I read “For Marriott, the Future of Travel Is a Virtual Reality Teleporter Phone Booth.” The article illustrates how Google’s and Facebook’s moves into virtual reality have influenced organizations not known for their mastery of bits and bytes. Finding a hotel is getting more difficult. But won’t virtual reality make it easier? Sure, according to the write up. Technology will just make travel so much better.
The examples presented in the article are important because:
- The craziness of an advanced technology company’s engineers can infect professionals who may struggle to update their iPhone’s operating system.
- The ease with which a writer can create an impression in a reader that “this stuff is just around the corner.”
- The money a consulting firm can make by shotgunning a suite of technologies into the strategic thinking of executives with degrees in food service and accounting, among other quasi-technical fields.
What did the write up state about virtual reality as a business opportunity for a hotel and food outfit? Try this passage:
“We talked about the idea of virtual reality being another metaphor for the ‘future of travel,’” Dail says. “How can we take what was existing and use content to start the conversation, and really engage people with the brand on a whole new level, because you don’t think of hotels as being part of VR.”
Navigate to Hotels.com or any other site that allows a hotel management team to post an image of their “property.” Now rent a room and show up at midnight. How often does the room you get match the image?
In my experience, not too often. The spacious reception area for a Marriott property or the beautiful bed in a Marriott suite hotel is often not exactly what appears in the online service’s write up.
How will virtual reality address this issue? I have zero doubt that marketers will make hotel properties look their best. Food photographers have this art figured out. Those Whoppers look just like the pictures in ads.
The issue with technology is distortion. Reality, just like search results, may be sweetened. For example, slapping on a headset is unlikely when I book a hotel from an airport taxi. Believing what is presented online is not something I will buy into.
The blend of marketers and technology has made information retrieval anything but objective. A hotel “selling” virtual reality is going to follow the same path.
Lucrative for marketers. Not so good for others. Pumping up expectations contributes to twisting one’s ankle on reality. Will some folks care? I doubt it.
Stephen E Arnold, September 28, 2014
September 26, 2014
People say that a diverse set of skills is beneficial for career success, but it seems that humans cannot compete with Watson as it segues from being a professional chef to trying its hand at the medical field. According to eWeek, “IBM, Mayo Clinic Tap Watson To Boost Cancer Trial Research.” The Mayo Clinic will use Watson’s cognitive computing system to match cancer patients with clinical trials appropriate to their individual needs. After the cancer trial is complete, Watson will move on to other illnesses and assign patients to the proper clinical trials for those as well.
IBM says that assigning patients to clinical trials is one of the hardest parts of clinical research. The task used to be done manually, but with more than 8,000 trials underway at the Mayo Clinic at any one time, matching patients by hand can be overwhelming. Watson’s computing algorithms will sift through patient data and assign appropriate trials with better accuracy and consistency.
“‘In an area like cancer, where time is of the essence, the speed and accuracy that Watson offers will allow us to develop an individualized treatment plan more efficiently so we can deliver exactly the care that the patient needs,’ said Dr. Steven Alberts, chair of medical oncology at Mayo Clinic, in a statement. Researchers hope the increased speed will also speed new discoveries.”
This version of Watson was specifically designed for the Mayo Clinic. The hope is that Watson’s abilities will enroll more patients in clinical trials, many of which, surprisingly, go unfilled or incomplete. The Mayo Clinic has a 5 percent enrollment rate, while the national clinical trial enrollment rate is 3 percent.
Go Watson go! Don’t take away jobs, but enhance them to make life easier, better, and longer for people.
September 24, 2014
Check out the presentation “The Surprising Path to a Faster NYTimes.com.”
I was surprised at some of the information in the slide deck. First, I thought the New York Times was first online in the 1970s via LexisNexis.
This is not money. See http://bit.ly/1rus9y8
I thought that was an exclusive deal and reasonably profitable for both LexisNexis and the New York Times. When the newspaper broke off that exclusive to do its own thing, the revenue hit on the New York Times was immediate. In addition, the decision had significant cost implications for the newspaper.
The New York Times needed to hire people who allegedly could create an online system. The newspaper had to license software, write code, hire consultants, and maintain computers not designed to set type or organize circulation. The New York Times had to learn on the fly about converting content for online processing. Learning that one knows nothing after thinking one knew everything is a very, very inefficient way to get into the online business. In short, the blow-off of the LexisNexis deal added significant initial and then ever-increasing ongoing costs to the New York Times Co. I don’t think anyone at the New York Times has ever sat down to figure out the cost of that decision to become the Natty Bumppo of the newspaper publishing world.
I had heard that the newspaper raked in seven figures a year in the 1970s while LexisNexis did the heavy lifting. Yep, that included figuring out how to put the newspaper content on tape into a suitable form for LexisNexis’ mainframe system. Figuring this out inside the New York Times in the early 1990s made this sound: Crackle, crackle, whoosh. That is the sound of a big company burning money not for a few months but for DECADES, folks. DECADES.
Photo from US Fish and Wildlife.
When the newspaper decided that it could do an online service itself and presumably make more money, the newspaper embarked on the technical path discussed in the slide deck. Few recall that the fellow who set up the journal Online worked on the online version of the newspaper. I recall speaking to that person shortly after he and the newspaper parted ways. He did not seem happy with budgets, technology, or vision. But, hey, that was decades ago.
How some information companies solve common problems with new tools. Image thanks to Englishrussia.com at http://bit.ly/1ps0MPF.
In the slide deck, we get an insider’s view of trying to deal with the problem of technical decisions made decades ago. What’s interesting is that the cost of the little adventure by the newspaper does not reflect the lost revenue from the LexisNexis exclusive. The presentation does illustrate quite effectively how effort cannot redress technical decisions made in the past.
This is an infrastructure investment problem. Unlike a physical manufacturing facility, an information-centric business is difficult to re-engineer. There is the money problem. It costs a lot to rip and replace or put up a new information facility and then cut it over when it is revved and ready. But information-centric businesses have another problem. Most succeed by virtue of luck. The foundation technology is woven into the success of the business, but in ways that are often non-replicable.
The New York Times killed off the LexisNexis money flow. Then it had to figure out how to replicate that LexisNexis money flow and generate a bigger profit. What happened? The New York Times spent more money creating the various iterations of the Times Online, lost the LexisNexis money, and became snared in the black hole of trying to figure out how to make online information generate lots of dough. I am suggesting that the New York Times may be kidding itself with the new iteration of the Times Online service.
September 24, 2014
I read “Google X Founder Sebastian Thrun Has Left His Role As Google VP And Fellow.” Google’s moon shot research facility sent Babak Parviz (also known as Babak Amirparviz) packing. Dr. Parviz landed at Amazon, not far from Microsoft, where he worked on Microsoft’s contact lens project.
Now Sebastian Thrun (yep, the online learning, Udacity guy) has left the mothership. He has a tether as an advisor. The article reports:
Thrun has been in more of an advisory role at Google for a while now, with Chris Urmson leading the self-driving car project, and Ivy Ross leading Glass. Astro Teller continues to run Google X.
Astro is related to Edward Teller, a scientist of note.
What’s with the Google X operation? For something that is supposed to be really secret, the departure of high level experts seems to be a bit of a secrecy risk.
The write up mentions a number of secret Google X projects, including the mysterious “indoor localization” operation and Flux. The Loon balloons are ready to float over various countries. How will some countries react to Loons? Maybe with a demo of Su-27 and Su-35 firepower?
The Google X outfit is of interest to me because of the very non secret relationship between a Google founder and a certain marketer. The marketer may have had a bit of a re-entry problem earlier this year.
Google X has impact. Some may not be what the doctor ordered.
Ah, I long for the good old days of precision and recall. Technological revolutions, marital discord, and secrecy leakage are indications of some interesting management methods.
Stephen E Arnold, September 24, 2014
September 23, 2014
GCN reports good news for Leidos: “Leidos To Produce Digital Maps For Intelligence Community.” The National Geospatial-Intelligence Agency awarded Leidos a $20 million contract for digital mapping production services for the national security and geospatial intelligence communities.
Leidos is a large supporter of the National System for Geospatial Intelligence, which governs the technology, policies, and programs behind geospatial intelligence. Leidos Inc. provides solutions for health, engineering, and national security. The company has pursued many endeavors in map-based intelligence, imagery, and geospatial intelligence, making it an ideal candidate for this new project.
“Under the single-award, indefinite delivery requirements contract, Leidos will work on production flow efficiencies and improved customer services for producing mapping deliverables to the intel community. It will also provide online and on-demand capabilities to the mapping production process, according to the company.”
Leidos is looking forward to making global products that will further the NGA’s efforts to deliver mapping to its clients. It looks like TeraText is branching out and trying its hand at mapmaking.
September 19, 2014
Machines know how to read because they have been programmed to recognize letters and numbers. They do not, however, comprehend what they are “reading” and cannot regurgitate it for users. The Google Research Blog comments on the search giant’s latest news in “Teaching Machines To Read Between The Lines (And A New Corpus With Entity Salience Annotations),” a post about how the company is using the New York Times Annotated Corpus to teach machines entity salience. Entity salience basically means machines can comprehend what they are “reading,” locate required information, and be able to use it. The New York Times Corpus is a large dataset with 1.8 million articles spanning twenty years. If a machine can learn salience from anything, it would be this collection.
Entity salience is determined by term ratios and complex search indexing backed by the Knowledge Graph. The machine reading the article records the salience indicator, byte offsets, entity index, mention count of the entity determined by a coreference system, and other information to digest the document.
The system does work better with proper nouns:
“Since our entity resolver works better for named entities like WNBA than for nominals like “coach” (this is the notoriously difficult word sense disambiguation problem, which we’ve previously touched on), the annotations are limited to names.”
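The annotations described above (mention counts, byte offsets) suggest how even a crude salience score could be computed. Here is a minimal, hypothetical sketch, not Google’s actual model: it assumes that entities mentioned more often and earlier in a document are more salient.

```python
from collections import defaultdict

def naive_salience(mentions, doc_length):
    """Toy salience heuristic: entities mentioned often and early score higher.

    `mentions` is a list of (entity, byte_offset) pairs extracted from a
    document; `doc_length` is the document length in bytes. This is an
    illustrative assumption, not the annotation pipeline described in the post.
    """
    counts = defaultdict(int)
    first_offset = {}
    for entity, offset in mentions:
        counts[entity] += 1
        first_offset.setdefault(entity, offset)  # keep earliest mention

    scores = {}
    for entity, count in counts.items():
        frequency = count / len(mentions)                      # share of mentions
        earliness = 1.0 - first_offset[entity] / doc_length    # earlier = higher
        scores[entity] = 0.5 * frequency + 0.5 * earliness
    return scores

# For a 200-byte document mentioning "WNBA" twice (starting at byte 0) and
# "coach" once (at byte 40), "WNBA" scores higher:
scores = naive_salience([("WNBA", 0), ("coach", 40), ("WNBA", 120)], 200)
```

Real systems weight many more signals (headline position, coreference chains, Knowledge Graph links), but the intuition is the same.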
In a separate Q&A, Google’s Peter Norvig was asked: “What is one of the most-often overlooked things in machine learning that you wished more people would know about or would study more? What are some of the most interesting data science projects Google is working on?”
Norvig responded that there are many problems, depending on the project, and that Google is doing a lot of data science projects, but he named nothing specific.
Machine learning and machine reading are works in progress. In short, machines are going to school.
September 15, 2014
If you follow the HP Autonomy firefights, you will enjoy “Autonomy Deal Fallout ‘More Extreme’ Than Hoped, Says HP’s UK Boss Andy Isherwood.”
In spite of HP’s allegations that Lynch and senior Autonomy management inflated revenues with phantom deals and hidden low-margin sales, Isherwood liked what he found. Technologically at least it was a good buy, he insists…. “We’re seeing clearly a lot more customer buying so it’s not an issue with the product.”
I also noted the positive signal of one percent revenue growth. Mr. Isherwood asserts:
Despite the decline in outsourcing revenues, a global trend, HP surprised Wall Street last month with 1pc revenue growth – its first in a dozen quarters – on the back of increasing share of the PC market. The UK picture was better still, with only outsourcing in decline. After falling last year overall sales here are on track to grow 7pc, says Isherwood.
With this atta boy and positive financials, what went wrong with Autonomy? As Isherwood says:
Unsurprisingly, he has only good things to say about the leadership of Whitman, Lynch’s nemesis. She is nearly two years into a five-year plan to turn the HP oil tanker, with increased investment in research and development, and a focus on the big trends of cloud computing, mobile working and big data as part an attempt to turn HP’s scale and diversity to its advantage. “HP is a broad-based company,” says Isherwood. “Meg understood that immediately. At that time we had said we were going to hive off the PC business, but she came in and said ‘no’, the power is in the broad portfolio.”
If there’s no management culpability, HP wants its money back. Interesting.
Stephen E Arnold, September 15, 2014
September 12, 2014
A criminal hiding in a foreign land for over a decade may begin to feel sure he has escaped the long arm of U.S. law. Today’s technology, however, has rendered that sense of security false for at least one wanted suspect. We learn from NakedSecurity that “Facial Recognition Software Leads to Arrest After 14-Year Manhunt.”
Neil Stammer, of New Mexico, was charged with some very serious offenses back in 1999, but escaped while out on bond. Writer Lisa Vaas reports:
“The case went cold until January 2014, when FBI Special Agent Russ Wilson was assigned the job of fugitive coordinator in Albuquerque, New Mexico. Wilson created a new wanted poster for Stammer and posted it onto FBI.gov in hopes of generating tips.
“A special agent with the Diplomatic Security Service (DSS) – a branch of the US Department of State whose mission includes protecting US Embassies and maintaining the integrity of US visa and passport travel documents – was testing new facial recognition software designed to uncover passport fraud when he decided, ‘on a whim,’ to use the software on FBI wanted posters.
“A match showed up between Stammer’s wanted poster and a passport photo issued under a different name. Suspecting fraud, the DSS agent contacted the FBI. The tip soon led Wilson to Nepal, where Stammer was living under the name Kevin Hodges and regularly visiting the US Embassy there to renew his tourist visa.”
Apparently, Stammer/Hodges had gotten comfortable in Nepal, teaching English. An FBI agent observed that the suspect seemed quite surprised when a joint operation with the Nepalese government led to his location and arrest.
Though the facial-recognition search that produced this arrest was performed “on a whim,” local and federal law-enforcement agencies across the country are using or considering such software. Vaas emphasizes that these implementations are being made in the absence of any standardized best practices, though some are currently being crafted by the National Telecommunications & Information Administration.
Cynthia Murrell, September 12, 2014
September 11, 2014
I read “The Revolutionary Technique That Quietly Changed Machine Vision Forever.” The main idea is that having software figure out what an image “is” has become a slam dunk. Well, most of the time.
The write up from the tech cheerleaders at Technology Review says, “Machines are now almost as good as humans at object recognition.”
A couple of niggling points. There is that phrase “almost as good”. Then there is the phrase “object recognition.”
Read the write up and then answer these questions:
- Is the method ready to analyze imagery fed by a drone to a warfighter during a live fire engagement?
- Is the system able to classify a weapon in a manner meaningful to a field commander?
- Can the system discern cancerous tissue from non-cancerous tissue in an image output from a medical imaging system?
- Does the method recognize objects in an image like the one shown below?
Image by Stephen E Arnold, 2013
If you pass this image to Google’s image recognition system, you get street scenes, not a person observing activities at an area cordoned off by government workers.
Google thinks the surveillance image is just like the scenes shown above. Note Google does not include observers or the all important police tape.
The write up states:
In other words, it is not going to be long before machines significantly outperform humans in image recognition tasks. The best machine vision algorithms still struggle with objects that are small or thin such as a small ant on a stem of a flower or a person holding a quill in their hand. They also have trouble with images that have been distorted with filters, an increasingly common phenomenon with modern digital cameras.
This stuff works in science fiction stories, however. Lab progress is not real world application progress.
Stephen E Arnold, September 11, 2014
September 4, 2014
I read “Google Backed Calico to Launch $1.5 Billion Aging Research Center.” The idea of wellness is a good one. The concept of life extension does not match up with information retrieval. As Google marginalizes blog search, the company’s other initiatives are fascinating. The company has not been able to diversify its revenue stream from search based advertising. The company has been able to diversify its science projects. From Loon balloons to investments in quantum computing, Google’s activities remind me of a high school science fair on steroids.
I learned that this new venture, which joins Google’s delivery drone investments, is focused on:
The new San Francisco Bay Area facility will focus on drug discovery and early drug development for diseases like neurodegeneration and cancer. Calico’s larger aim is lifespan extension.
What does this bode for good old-fashioned relevant search results? More ads, less relevance is one possibility. Search is parked on an access road to the information highway, I fear.
Stephen E Arnold, September 4, 2014