October 5, 2015
Short honk: I read “Facebook v. Google in Digital Video Battle: YouTube Is 11X Bigger.” The big factoid is in the headline: Facebook is a fraction of the size of YouTube in terms of traffic. A couple of thoughts: How rapidly is Facebook growing in video content compared to YouTube? What is Facebook’s monetization opportunity compared to the Alphabet Google’s opportunity? The chit chat I have heard is that Facebook’s video growth in the last 18 months has been more rapid than the Alphabet Google’s video revenue growth. I also have a suspicion that socially anchored monetization may generate a more stable stream of revenue for Facebook. My question, however, is, “When will Facebook surpass YouTube in revenue from video?”
Stephen E Arnold, October 5, 2015
September 24, 2015
I read “Google Charges Advertisers for Fake YouTube Video Views, Say Researchers.” My goodness, will criticism of Alphabet Google continue to escalate?
The trigger for the newspaper story with the somewhat negative headline was an academic paper called “Understanding the Detection of Fake View Fraud in Video Content Portals.” The data presented by seven European wizards suggest that an Alphabet Google type company knows when a video is viewed by a software robot, not a credit card toting human.
“Fake view fraud” is a snappy phrase.
According to the Guardian newspaper write up about the technical paper:
The researchers’ paper says that while substantial effort has been devoted to understanding fraudulent activity in traditional online advertising such as search and banner ads, more recent forms such as video ads have received little attention. It adds that while YouTube’s system for detecting fake views significantly outperforms others, it may still be susceptible to simple attacks.
Is this a Volkswagen-type spoof? Instead of fiddling with fuel efficiency, certain online video portals are playing fast and loose with charging for video ads not displayed to a human with a PayPal account?
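For the curious, the sort of filtering the researchers probed can be caricatured in a few lines of Python. Everything below (the field names, the thresholds, the view records) is my invention for illustration, not YouTube’s actual method:

```python
# Naive caricature of fake view filtering: discount views that are too
# short or that pile up from a single source. Field names and thresholds
# are invented; the portals' real signals are not public.

from collections import Counter

def count_billable_views(views, min_watch_seconds=10, max_per_ip=5):
    """Count views that pass two crude humanity checks."""
    per_ip = Counter()
    billable = 0
    for view in views:
        per_ip[view["ip"]] += 1
        if view["watch_seconds"] < min_watch_seconds:
            continue  # too brief to look like a human viewer
        if per_ip[view["ip"]] > max_per_ip:
            continue  # too many views from one address
        billable += 1
    return billable

views = [{"ip": "10.0.0.1", "watch_seconds": 45}] * 7 + \
        [{"ip": "10.0.0.2", "watch_seconds": 3}]
print(count_billable_views(views))  # 5 of the 8 views survive the filter
```

The paper’s point, as I read it, is that a determined bot herder can tune traffic to slide under exactly this sort of threshold.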
Years ago an outfit approached me with a proposition for a seminar about online advertising fraud. I declined. I am confident that the giant companies and their wizards in the ad biz possess business ethics which put the investment bankers to shame. I recall discussing systems and methods with a couple of with it New Yorkers. The lunch topic was dynamically relaxing the threshold for displaying content in response to certain queries.
My comment pointed to ways to determine if an ad was “relevant” to a higher percentage of user queries. I called this “query and ad matching relaxation.”
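A toy sketch of what I mean by relaxation. The ads, scores, thresholds, and step size are all invented for illustration:

```python
# Hypothetical "query and ad matching relaxation": lower the relevance
# threshold step by step until enough ads qualify to fill the slots.
# Every name and number here is made up for the sketch.

def select_ads(candidates, slots, threshold=0.8, floor=0.4, step=0.1):
    """Return up to `slots` ads, relaxing `threshold` toward `floor`
    until enough candidates clear the bar."""
    while threshold >= floor:
        matched = [ad for ad, score in candidates if score >= threshold]
        if len(matched) >= slots:
            return matched[:slots]
        threshold -= step
    # Give up relaxing: return whatever cleared the floor.
    return [ad for ad, score in candidates if score >= floor][:slots]

candidates = [("ad_a", 0.92), ("ad_b", 0.71), ("ad_c", 0.55), ("ad_d", 0.30)]
print(select_ads(candidates, slots=3))  # ['ad_a', 'ad_b', 'ad_c']
```

The mechanism fills the inventory; whether the ads shown at the relaxed threshold are still “relevant” to the user is the very question my lunch companions found so stimulating.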
I did not include a discussion of “relaxation” in my 2003-2004 study Google Version 2.0, which is now out of print. The systems and methods disclosed in technical papers by researchers who ended up working for large online advertising firms were just more plumbing for smart software.
When an ad does not match a query, that’s the challenge of figuring out what’s relevant and what’s irrelevant.
My thought in 2003 when I started writing the book was that most content was essentially spoofed and sponsored. I wanted to focus on more interesting innovations like the use of game theory in online advertising interfaces and the clever notion of “janitors” which were software routines able to clean up “bad” or “incomplete” data.
As I recall, that New York City guy was definitely interested in the notion of tuning ad results to generate money for the ad distribution and not so much for the advertiser. For me, no interest in lecturing a group of ad execs about their business. These folks can figure out the ins and outs of their business without inputs from an old person in Kentucky.
Mobile and video access to digital content do pose some interesting challenges in the online advertising world. My hunch is that the Alphabet Google type outfits and the intrepid researchers will find common ground. If the meeting progresses smoothly, perhaps a T shirt or mouse pad will be offered to some of the participants?
I remain confident that allegations about slippery behavior in online advertising are baseless. Online advertising is making life better and better for users every day.
The experience of online advertising is thrilling. I am not sure the experience of receiving unwanted advertisements can be improved. Why read a Web page when one can view an overlay which obscures the desired content? Why work in a quiet office? Answer: It is simply easier to hear the auto play videos on many Web pages. Why puzzle over a search results page which blurs sponsored hits from relevant content? By definition, displayed information is relevant information, gentle reader. Do you have a problem with that?
Google, according to the article, will chat up the seven experts who reported on the alleged fraud. I am confident that the confusion in the perceptions of the researchers will be replaced with crystal clear thinking.
Online ad fraud? What a silly notion.
Stephen E Arnold, September 24, 2015
August 26, 2015
Poor old Google. Imagine. Hassles with Google Now. Grousing from the no fun crowd in the European Commission. A new contact lens business. Exciting stuff.
Then the Googlers read “Facebook’s New Moments App Now Automatically Creates Music Videos From Your Photos.” The idea is that one or two of the half billion Facebookers who check their status multiple times a day can make a music video automatically.
But instead of doing the professional video production thing, the video is created from one’s shared photos.
I wonder how many of the young at heart will whip up and suck down videos of [a] children, [b] pets, [c] vacations, [d] tattoos (well, maybe not too many tattoos).
The idea is:
With the update, Facebook Moments will automatically create a music video for any grouping of six or more photos. You can then tap this video in the app to customize it further by changing the included photos and selecting from about a dozen different background music options. When you’re finished making your optional edits to this video, one more tap will share the video directly to Facebook and tag the friend or friends with whom you’re already sharing those photos. The option to automatically create a video from your shared photos also makes Facebook Moments competitive with similar services like Flipagram, or those automatically created animations that Google Photos provides through its “Assistant” feature, which also helpfully builds out stories and collages.
Google may apply its Thought Vector research to the problem. The question is whether Alphabet will be able to spell success with its social services. Why would a grandmother care about a music video of a grandchild when there were Thought Vectors, Loon balloons, and eternal life to ponder?
Stephen E Arnold, August 26, 2015
August 12, 2015
I read “Firefox Sticks It to Microsoft, Redirects Cortana Searches in Windows 10.” Ah, ha. Windows 10. Some day, maybe. I assume the endless reboots upon updating are merely a rumor. The real story is that a Firefox user can use Cortana to search with something other than Bing.
The write up says:
After blasting Microsoft’s attempts to set Edge as the default browser in Windows 10, Mozilla is enjoying some sweet revenge by steering Firefox users away from Bing. With the newly-released Firefox 40, users no longer have to use Bing for web searches from Cortana on the Windows 10 taskbar. Instead, Firefox will show results from whatever search engine the user has chosen as the default. Using Firefox isn’t the only way to replace Cortana’s Bing searches with Google or another search engine. But Firefox is currently the only browser that does so without the need for third-party extensions. (It wouldn’t be surprising, however, if Google follows suit.)
Poor Microsoft. The company has been trying to build bridges. The result is a dust up.
I long for the good old days. Microsoft would have been careful to avoid getting stuck in a squabble about its Siri and Google Voice killer Cortana.
What impact will this have? My hunch is that as Windows 10 flows into the hands of those who fondly recall Bob, then the issue will become more serious.
For now, this is amusing to me. Recall I am a person who abandoned my Lumia Windows phone because the silly Cortana feature was in a location which made activation impossible to avoid. I dumped the phone. End of story.
Stephen E Arnold, August 12, 2015
August 10, 2015
I read “Why Facebook’s Video Theft Problem Can’t Last.” My initial reaction was, “Sure it can.” The main point of the write up struck me as:
But then popular YouTuber Hank Green leveled a number of allegations at Facebook’s video team, including a charge of rampant copyright infringement from Facebook users who are uploading videos from YouTube and other platforms without creators’ consent. Facebook has responded that it has measures in place to address copyright infringement, including allowing users to report stolen content and suspending accounts guilty of repeated violations.
I noted this statement:
For Facebook, video represents an irresistible new business opportunity. Early experiments with running natively inside the News Feed showed that it kept users on the site longer — and kept them from clicking external links that took them to YouTube and elsewhere.
Money and irresistible are words which flow.
The gem appears deep in the write up:
Facebook hasn’t made it easy for creators like Green to find instances of copyright infringement — there’s no way to filter Facebook searches for videos. And even if the stolen videos can be found, creators must fill out multiple forms, meaning it could be several days (and countless views) before a stolen video is taken down.
I find it interesting that search and retrieval may not do the trick. Then the bureaucratic process adds a deft touch.
I will file this item in my follow up folder. I know I can search my system for text files. Search which does not allow one to find information may be a tactic which serves other purposes. Is flawed search a business advantage? If one cannot find something, does that mean the “something” is not there?
Stephen E Arnold, August 10, 2015
August 7, 2015
Short honk: Forget words in English. “There’s Finally a Good Way to Text in Sign Language” explains that a new mobile keyboard app allows American Sign Language speakers to send text messages to a hearing impaired individual. The write up states:
Signily also includes animated signs for many popular ASL phrases that don’t have exact English translations. This makes texting a more natural experience for signers.
How does one parse and search these messages? Think look up table maybe? Will semantic vendors be able to make sense of animated signs?
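The look up table approach might be as humble as this sketch. The sign identifiers and English glosses below are invented; Signily’s actual data model is unknown to me:

```python
# One guess at the "look up table" idea: map each animated sign's
# identifier to an English gloss so that sign messages become plain,
# searchable text. All sign IDs and glosses here are invented.

SIGN_GLOSSES = {
    "sign:thumbs-up-twist": "really good",
    "sign:ily-handshape": "i love you",
    "sign:fingers-wiggle-down": "raining hard",
}

def gloss_message(sign_ids):
    """Translate a sequence of sign IDs into searchable English text."""
    return " ".join(SIGN_GLOSSES.get(s, "[unknown sign]") for s in sign_ids)

print(gloss_message(["sign:ily-handshape", "sign:thumbs-up-twist"]))
# i love you really good
```

Of course, glossing flattens the grammar of ASL into rough English, which is exactly the sort of nuance the semantic search wizards would have to wrestle with.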
Sure, semantic search is just super. And the meeting to discuss this?
Stephen E Arnold, August 7, 2015
August 7, 2015
I noted the article “IBM Adds Medical Images to Watson, Buying Merge Healthcare for $1 Billion.” The company is in the content management business. Medical images are pretty much a hassle whether in good old fashioned film form or in digital versions. In the few opportunities I have had to look at murky gray or odd duck enhanced color images, I marveled at how a professional could make sense of the data displayed. Did this explanation trigger thoughts of IBM FileNet?
The image processing technology available from specialist firms for satellite or surveillance image analysis is a piece of cake compared to the medical imaging examples I reviewed. From my point of view, the nifty stuff available to an analyst looking at the movement of men and equipment was easier to figure out.
Merge delivers a range of image and content management services to health care outfits. The systems can work with on premises systems and park data in the cloud in a way that keeps the compliance folks happy.
According to the write up:
When IBM set up its Watson health business in April, it began with a couple of smaller medical data acquisitions and industry partnerships with Apple, Johnson & Johnson and Medtronic. Last week, IBM announced a partnership with CVS Health, the large pharmacy chain, to develop data-driven services to help people with chronic ailments like diabetes and heart disease better manage their health.
Now Watson is plopping down $1 billion to get a more substantive, image centric, and—dare I say it—more traditional business.
The idea I learned:
“We’re bringing Watson and analytics to the largest data set in health care — images,” John Kelly, IBM’s senior vice president of research who oversees the Watson business, said in an interview.
The idea, as I understand the management speak, is that Watson will be able to perform image analysis, thus allowing IBM to convert Watson into a significant revenue generator. IBM does need all the help it can get. The company has just achieved a milestone of sorts; IBM’s revenue has declined for 13 consecutive quarters.
My view is that the integration of the Merge systems with the evolving Watson “solution” will be expensive, slow, and frustrating to those given the job of making image analysis better, faster, and cheaper.
My hunch is that the time and cost required to integrate Watson and Merge will be an issue in six or nine months. Once the “integration” is complete, the costs of adding new features and functions to keep pace with regulations and advances in diagnosis and treatment will create a 21st century version of FileNet. (FileNet, as you, gentle reader, know, was IBM’s 2006 acquisition. At the time, nine years ago, IBM said that the FileNet technology would
“advance its Information on Demand initiative, IBM’s strategy for pursuing the growing market opportunity around helping clients capture insights from their information so it can be used as a strategic asset. FileNet is a leading provider of business process and content management solutions that help companies simplify critical and everyday decision making processes and give organizations a competitive advantage.”
FileNet was an imaging technology for financial institutions and a search system which allowed a person with access to the system to locate a check or other scanned document.)
And FileNet today? Well, like many IBM acquisitions it is still chugging along, just part of the services oriented architecture at Big Blue. Why, one might ask, was the FileNet technology not applicable to health care? I will leave you to ponder the answer.
I want to be optimistic about the upside of this Merge acquisition for the companies involved and for the health care professionals who will work with the Watsonized system. I assume that IBM will put on a happy face about Watson’s image analysis capabilities. I, however, want to see the system in action and have some hard data, not M&A fluff, about the functionality and accuracy of the merged systems.
At this moment, I think Watson and other senior IBM managers are looking for a way to make a lemon grove from Watson. Nothing makes bankers and deal makers happier than a big, out of the blue acquisition.
Now the job is to find a way to sell enough lemons to pay for the maintenance and improvement of the lemon grove. I assume Watson has an answer to ongoing costs for maintenance and enhancements, bug finding and stomping, and the PR such activities trigger. Yep, costs and revenue. Boring but important to IBM’s stakeholders.
Stephen E Arnold, August 7, 2015
July 28, 2015
Navigate to “Deep Neural Network Can Match Infrared Facial Images to Those Taken Naturally.” The write up explains that an infrared snap of a person’s face can be matched (mapped) to a normal picture of a human’s face. The idea is that there are wave signatures. I find this interesting. The write up states:
To use such a system for correlating infrared images with natural light counterparts, then, would require a large dataset of both types of images of the same people. The duo discovered that such a dataset existed as part of other research being done at the University Notre Dame. After being given access to it, they “taught” their system to pick out natural light images of people based on half of the infrared images in the dataset they were given. The other half was used to test how well the system worked. The results were not perfect, by any means—the system was able to make correct matches 80 percent of the time (which dropped to just 55 percent when it had only one photo to use), but marks a dramatic improvement in the technology.
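The matching step can be sketched as nearest neighbor search in a shared embedding space. The vectors and names below are invented stand-ins; the researchers’ trained network is not reproduced here:

```python
# Minimal sketch of embedding-based matching: project infrared and
# visible images into one vector space (here faked with hand-picked
# vectors) and pick the visible face closest to the infrared probe.

import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Pretend these vectors came out of a trained network.
visible_gallery = {
    "alice": [0.9, 0.1, 0.2],
    "bob":   [0.1, 0.8, 0.3],
}
infrared_probe = [0.85, 0.15, 0.25]  # deliberately close to "alice"

best = max(visible_gallery,
           key=lambda name: cosine(infrared_probe, visible_gallery[name]))
print(best)  # alice
```

The 80 percent figure in the quote suggests the real embeddings are far noisier than this tidy example, which is why more gallery photos per person improve the odds.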
The approach has a number of search related applications. Worth monitoring.
Stephen E Arnold, July 28, 2015
July 23, 2015
I read “AP Makes One Million Minutes of Historical Footage Available on YouTube.” This struck me as an anomaly. The AP is an outfit which, as I recall, rattled sabers and showed knives to people who quote from their articles. Also, the AP is in a revenue hunt; that is, the good old days of newspapers are history. The company is, like many outfits sired in the stable of dead tree journalism, adapting. Need a real time news feed with search? The AP offers this via a tie up with a former Bell Labs’ person. I will wager $1.00 in pennies that you cannot name the vendor. Send your answers to benkent2020 at yahoo dot com.
The AP write up reports that lots of video has been digitized and placed on YouTube. There are links to videos which AP finds interesting. The word “find” brings up an interesting question: “How does one locate a video?” and “How does one locate a series of related videos?” and “How does one find a video with a specific segment of text in it?” and “How does one find a video with a specific image in it?”
The answer, gentle reader, is that one cannot. I know that AP is excited about this collection. I assume that Google is pleased that the collection is not on Facebook.
As a user, the approach to locating a video is somewhat unsatisfying. Prepare your patient self to guess keywords, click, and watch in serial fashion one million videos. Well, maybe a couple.
Without search, this collection, like Google’s Life Magazine images, is useful to folks with time on their hands and even more time on their hands. A dump is not useful to me. To you, gentle reader, and to the executives at AP, I am picking nits. The problem is that these nits are the size of the synthetic creatures in Jurassic World. Big nits. My hunch is that the ad revenue from these videos will be the size of regular, run of the mill nits. I hope I am wrong. Don’t forget to submit the name of the AP’s real time, online news intelligence service. I will accept entries for 24 hours.
Stephen E Arnold, July 23, 2015
July 14, 2015
If you spend any time with Google Maps (civilian edition), you will find blurred areas, gaps, and weird distortions which cause me to ask, “Where did that building go?”
If you really spend a lot of time with Google Maps, you will be able to see my two dogs, Max and Tess, in a street view scene.
And zoomed in. Aren’t the doggies wonderful?
The article “The Curious Case of Google Street View’s Missing Streets” is not interested in seeing what the wonky Google autos capture. The write up pokes at me with this line:
Many towns and cities are littered with small gaps in the Street View imagery.
The write up explains that Google recognizes that gaps are a known issue. The article gets close to something quite interesting when it states:
In extreme cases, whole countries are affected. Privacy has been a particular issue in Germany, where many people objected to the roll-out of Street View. Google now has Street View images only for big cities in Germany, like Berlin and Frankfurt, and appears to have given up on the rest of the country completely. Zoom out over Europe in Street View mode and Germany is mostly a blank island in a sea of blue.
Want to do something fun the author of the write up did not bother to do? Locate a list of military facilities in the UK. Then try to locate those on a Google Map. Next try to locate those on a Bing.com map (oops, Uber map).
Notice anything unusual? Now relate your thoughts to the article’s list of causes.
If not, enjoy the snap of Max and Tess.
Stephen E Arnold, July 14, 2015