YouTube: Search for Comments

November 18, 2013

I am not a video goose. I cannot recall the last time I commented on a video. However, I have asked some of my researchers to search for YouTube comments. My recollection is that YouTube “comments” search is not particularly helpful.

I read “Forced Google Plus Integration on YouTube Backfires, Petition Hits 112,000.” I learned that Google is requiring a Google Plus account in order to make comments about a YouTube video. Some YouTube fans are not happy. The big question is, “Will Google listen?”

What is important is that the article reports a modest movement to post YouTube comments on Reddit.com. Reddit’s search function leaves something to be desired, but my researchers tell me it works reasonably well for locating comments.

My view is that Google is trying to cement its revenue opportunities. Google Plus is part of the grand strategy. Search is not number one on the agenda in my opinion. The emergence of an option like Reddit may be an important step. Google fans may have to fend for themselves as Google works overtime to make sure it can hit its revenue numbers.

Those criticizing Google may find that their actions misfire.

Stephen E Arnold, November 18, 2013

Readin, Riten, Rithmatik, and Guzzlin

November 8, 2013

I read “America’s Media Guzzling Ways.” Good word “guzzling” or “guzzlin” as it is pronounced in rural Kentucky. The write up contained a factoid that I find difficult to grasp; to wit:

The amount of media data, measured in printed text, that Americans consumed last year. That’s 6.9 zettabytes—6.9 million-million gigabytes—to be exact.

Let’s assume that the figure is dead accurate, or accurate to within a couple of zettabytes, plus or minus. According to the article, each person in the US spends 15 hours a day checking Facebook, watching videos, and tapping screens.
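
As a sanity check, the quoted figure can be run through simple arithmetic. A quick sketch in Python (the US population figure of roughly 316 million is my assumption, not from the article):

```python
ZETTABYTE = 10**21            # bytes; 1 ZB = a million-million (10**12) gigabytes
GIGABYTE = 10**9
US_POPULATION = 316_000_000   # rough 2012 US population (assumption, not from the article)

total_bytes = 6.9 * ZETTABYTE

# Confirm the article's "6.9 million-million gigabytes" phrasing.
total_gigabytes = total_bytes / GIGABYTE   # about 6.9 * 10**12 GB

# Implied consumption per person per day over one year.
per_person_per_day_gb = total_bytes / (US_POPULATION * 365) / GIGABYTE
print(f"{per_person_per_day_gb:.0f} GB per person per day")
```

That works out to roughly 60 GB per person per day, which helps explain why the 15 hours a day figure, however improbable it sounds, moves a great deal of data.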

My reaction is that the consumption of media contributes to these observed events yesterday:

  1. A sponsored event at a trade show was attended by about 15 people. None of those at the hoedown were employees of the company. I suppose the guzzling of digital content was more important than showing up and pretending to be thrilled that potential customers were eating free snacks and drinking no-name beverages. YouTube cannot wait, people.
  2. A conference program that did not include information about one of the speakers. Heck, it was an oversight even though the speaker was paid to attend, received a free hotel room, and a free registration. Facebook posts take priority with this outfit, I surmise.
  3. A sign at the National Press Club that contained a misspelling. One explanation: it was the spelling checker’s fault. SMS spelling is the way to go. LOL
  4. A bus driver, asked “Where is 999 9th Street, NW?”, replied, “Dude, my iPhone is not connecting. Ask someone else.” The professional driver did not meet my gaze. He was frantically scanning the street for a mobile phone shop.

The article helps me understand why information presented on a mobile device is perceived as accurate, complete, and current. The grazing public has neither the time nor the grit to do much reading, writing, or arithmetic, I fear. Oh, as the National Press Club sign maker would have it: Readin, riten, rithmatik, and guzzlin.

One person looked for Cuba Libre Restaurant using Google Maps. No joy. The system displayed four choices, none of which was the desired restaurant. The smart system made it impossible for the iPhone user to locate the destination. Fascinatin’.

Stephen E Arnold, November 8, 2013

SharePoint Being Prepped For Rich Media Content

October 4, 2013

According to PR Newswire, an important event took place on September 26, 2013: “Equilibrium And Metalogix To Discuss How To Optimize SharePoint For Rich Media.” Executives from each company hosted a webinar called “Enhancing SharePoint to Manage Large Files including Rich Media Content.” The presenters were Sean Barger, Founder, and Laura Clemons, VP Product Management, from Equilibrium and Trevor Hellebuyck, CTO of Metalogix. The group described the newest solutions for making SharePoint capable of working with rich media, including scalability, management of large digital media asset libraries, mobility, audio, video, and CAT storage/distribution.

Here is a more detailed list of the topics:

“During the event, the presenters will discuss how the combination of Equilibrium’s MediaRich ECM for SharePoint and Metalogix StoragePoint can improve any Microsoft SharePoint deployment without requiring modifications. Attendees will also learn:

  • Best practices for management of large files in SharePoint
  • How to overcome common issues, such as slow uploads/downloads and time-outs
  • How to optimize SharePoint for video playback”

Rich media is the next phase of content management as documentation moves away from basic paper replication. It is important to be able to search these content types, as ArnoldIT’s Steve Arnold has noted: as content changes, search needs to become richer and more thorough to meet the demands.

Whitney Grace, October 4, 2013

Rich Media: Too Expensive to Store?

July 30, 2013

I saw an interesting post called “Cost of Storing All Human Audio visual Experiences.” I am no logician, but if one stores “all”, then isn’t the cost infinite? The person writing the post presents some data which pegs the cost for seven billion people at about $1 trillion a year.
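
The post’s headline number is easy to sanity-check with one division. A minimal sketch (the $1 trillion and seven billion figures are taken from the post; the per-day breakdown is my own arithmetic):

```python
ANNUAL_COST_USD = 1_000_000_000_000   # about $1 trillion per year, per the post
PEOPLE = 7_000_000_000                # seven billion people

per_person_per_year = ANNUAL_COST_USD / PEOPLE
per_person_per_day = per_person_per_year / 365

print(f"${per_person_per_year:.2f} per person per year")
print(f"${per_person_per_day:.2f} per person per day")
```

About $143 a head per year, or roughly 39 cents a day: modest per person, staggering in aggregate, and the figure says nothing about the cost of indexing and searching what is stored.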

Several observations:

  1. With the emergence of smart nanodevices with audio and video capabilities, perhaps the estimate is off the mark?
  2. Once the data are captured, who manages the content? Likely candidates include nation states, companies which operate as nation states, and venture-funded start-ups.
  3. How does one find a particular time segment germane to a query pertinent to a patent claim?

Interesting question if one sets aside the “all”. The next time I look for a video on YouTube or Vimeo, I will ask myself, “What type of search system is needed to deal with even larger volumes of rich media?”

Is the new Dark Ages of information access fast approaching? Yikes! Has the era already arrived?

Stephen E Arnold, July 30, 2013

Sponsored by Xenky

Image Search and, of Course, Google

June 13, 2013

Many years ago I lectured in Japan. On that visit, I saw a demonstration of a photo recognition system. Click on a cow and the system would return other four-legged animals, most of the time. Some years later I was asked to review facial recognition systems after a notable misfire in a major city. Since then, my team and I check out the systems which become known to us.

Progress is being made. That’s encouraging. However, a number of challenges have to be resolved. These range from false positives to context failures. In the case of a false positive, the person or thing in the picture is not the person or thing one sought. In the case of context failure, the cow painted on the side of a truck is not the same as a cow standing in a field with many other cows clumped around.

Software is bumping up against computational boundaries. The methods available have to be optimized to run in available resources. When there are bigger and faster systems, then fancier math can be used. Today’s innovations boil down, in my opinion, to clever manipulations of well-known systems and methods. The reason many software systems perform in a similar manner is that these systems share many procedures. Innovation is often optimization and packaging, not a leapfrog to more sophisticated numerical procedures. Trimming, chopping down, and streamlining via predictive methods are advancing the ball down the field.

I read with interest “Improving Photo Search: A Step across the Semantic Gap.” Google has rolled out enhanced photo search. According to Google, the new system roughly doubled the average precision of the approaches the company had previously tried. As Google phrases it:

We built and trained models similar to those from the winning team using software infrastructure for training large-scale neural networks developed at Google in a group started by Jeff Dean and Andrew Ng. When we evaluated these models, we were impressed; on our test set we saw double the average precision when compared to other approaches we had tried. We knew we had found what we needed to make photo searching easier for people using Google. We acquired the rights to the technology and went full speed ahead adapting it to run at large scale on Google’s computers. We took cutting edge research straight out of an academic research lab and launched it, in just a little over six months. You can try it out at photos.google.com.

Why the success now? What is new? Some things are unchanged: we still use convolutional neural networks — originally developed in the late 1990s by Professor Yann LeCun in the context of software for reading handwritten letters and digits.

What is different is that both computers and algorithms have improved significantly. First, bigger and faster computers have made it feasible to train larger neural networks with much larger data. Ten years ago, running neural networks of this complexity would have been a momentous task even on a single image — now we are able to run them on billions of images. Second, new training techniques have made it possible to train the large deep neural networks necessary for successful image recognition.
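
The quoted passage hinges on convolutional neural networks. At their core is a simple operation: a small filter slid across an image, producing a feature map. A minimal, dependency-free Python sketch of one 2D convolution (the edge-detecting kernel below is an illustrative example of mine, not Google’s):

```python
def conv2d(image, kernel):
    """Slide `kernel` over `image` (valid padding) and return the feature map."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    feature_map = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            # Dot product of the kernel with the image patch under it.
            acc = 0.0
            for i in range(kh):
                for j in range(kw):
                    acc += image[r + i][c + j] * kernel[i][j]
            row.append(acc)
        feature_map.append(row)
    return feature_map

# A 4x4 "image" with a hard dark/bright boundary down the middle.
image = [[0, 0, 1, 1] for _ in range(4)]

# A 1x2 vertical-edge detector: responds where brightness jumps left to right.
edge_kernel = [[-1, 1]]

print(conv2d(image, edge_kernel))  # peaks in the boundary column
```

A real network stacks many such filters with learned weights, nonlinearities, and pooling. The quoted point is that bigger machines and better training methods finally made those stacks practical at Google scale.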

The use of “semantics” is also noteworthy. As I wrote in my analysis of Google Voice for a large investment bank, “Google has an advantage because it has data others do not have.” When it comes to predictive methods and certain types of semantics, the Google data sets give it an advantage over some rivals.

What applied to Google Voice applies to Google photo search. Google is able to tap its data to make educated guesses about images. The semantics and the infrastructure have a turbo boosting effect on Google.

The understatement in the Google message should not be taken at face value. The Google is increasing its lead over its rivals and preparing to move into completely new areas of revenue generation. Images? A step but an important one.

Stephen E Arnold, June 13, 2013

Sponsored by Xenky

ArnoldIT Announces New Gourmet De Ville Video Shorts

May 15, 2013

Gourmet De Ville, a new ArnoldIT information service launched in January of this year, will now be adding a video service to its print coverage of artisanal food and spirits.
Jasmine Ashton, editor for Gourmetdeville.com, will be hosting the weekly videos spotlighting innovative recipes and the latest industry trends.
Ashton remarked:

“I am very excited to be a part of this new service and look forward to sharing my insights on the craft food and beverage sector with viewers. I believe that the demand for information on gourmet food, beverages, and industry leaders is exploding. Gourmet De Ville makes high value information available in a concise, easy to understand format. Our videos will simply be another avenue to explore this content.”

In her first video, Ashton covers Limoncello Tiramisu. She introduces the video by saying:

“In Italy, it’s common to have a bottle of Limoncello brought out after dessert. The tangy lemon liqueur is believed to help you digest all that great pasta and rich sauces.
But we heard about a chef in Florida who makes Limoncello Tiramisu — an after-dinner drink and dessert all rolled into one.”

We are looking forward to watching Gourmet De Ville’s video coverage in the coming weeks and believe that they will be a refreshing addition to the content that is already available on the site.

Ric Manning, May 15, 2013

Sponsored by ArnoldIT.com, developer of Beyond Search

Augmentext Video Live on GourmetDeVille.com

May 8, 2013

Short honk: The folks at GourmetDeVille, an information service covering artisanal and craft spirits, sent me a link to a short news video. The talent is Jasmine Ashton, a member of the ArnoldIT writing and research team. Ms. Ashton told me that she worked with Augmentext to develop this program. I wanted to send a happy quack to Ms. Ashton and her colleagues. I understand that more videos will be forthcoming. Although I am not a video person, I understand that video is a high value information type. I applaud the effort. Check out the 90 second news item at http://goo.gl/nLISJ.

Stephen E Arnold, May 8, 2013

Sponsored by HighGainBlog

Measuring the Worth of Social Media

April 26, 2013

When social media first started, many companies thought it was a useless tool meant only for the younger crowd. Why would an established company want to promote itself through a tool meant for kids? According to the LucidChart article “Is Social Media Worthless?”, the company polled 3,000 users on their Web behavior and found that social media played a small role in driving Web traffic. Social media experts are trying to justify the place it holds in a company’s business, but if it loses money it does not have any value.

“Social media denizens are, by and large, in it for themselves. They’ll promote your brand if it means their share of a cool $20 million, or a free pair of glasses, or whatever else you’re offering them. Everyone likes to pat themselves on the back and feel like they’re serving their fellow man, whether that’s retweeting a funny video or donating money to a favorite charity. But if your fans aren’t buying your product, or influencing others to buy your product, they’re not really fans at all. And all the brand awareness in the world isn’t going to change that.”

Harsh words for social media, if the source is accurate. It echoes the era of MySpace, filled with teenage girls and boys touting their social lives and interests. Social media has its place, but its role in generating revenue for real-life companies needs to be reexamined.

Whitney Grace, April 26, 2013

Sponsored by ArnoldIT.com, developer of Beyond Search

Autonomy Lands Rich Media Deal

April 24, 2013

Autonomy just scored a plum placement, we learn from “ENCO Systems Selects HP Autonomy for Audio and Video Processing,” hosted at Market Watch. ENCO, which makes the radio and TV audio-automation software DAD and DADtv, has selected Autonomy’s IDOL server for inclusion in the next version of enCaption, its automated captioning-generation system. We learn from the press release:

“ENCO Systems provides live automated captioning solutions to the broadcast industry, leveraging technology to deliver closed captioning by taking live audio data and turning it into text. ENCO Systems is capitalizing on IDOL’s unique ability to understand meaning, concepts and patterns within massive volumes of spoken and visual content to deliver more accurate speech analytics as part of enCaption3. . . .

“enCaption3 is the only fully automated speech recognition-based closed captioning system for live television that does not require speaker training. It gives broadcasters the ability to caption their programming, including breaking news and weather, any time, day or night, since it is always on and always available. enCaption3 provides captioning in near real time–with only a 3 to 6 second delay–in nearly 30 languages.”

Despite the soured relationship between HP and Autonomy, which the tech giant snapped up in 2011, HP continues to leverage this increasingly valuable resource. Founded in 1996, Autonomy grew from research originally performed at Cambridge University.

Two engineers from MIT launched ENCO back in 1983, with a focus on computer-based process-control applications in the industrial realm. The company branched into digital audio delivery and radio automation in 1991; since then, broadcasters large and small around the world have come to rely on its technologies.

Cynthia Murrell, April 24, 2013

Sponsored by ArnoldIT.com, developer of Augmentext

YouTube and Its Content Play

March 31, 2013

Quite an interesting write up this: “Lessons Learned from YouTube’s $300 M Hole.” First, the “m” means millions. Second, the write up provides a useful thumbnail about Google’s content play for YouTube. The idea, if I understand it, was to enlist fresh thinking and get solid content from some “big names”.

The innovative approach elicited this comment in the write up:

There were a lot of recipients of this money, and many of them were major media companies trying their hand at online video that received some fat checks, up to $5M a piece, to launch TV-like channels. What we all found out is that, no matter how hard you push them and how much money you spend on them, YouTube doesn’t work like TV…and funding it that way is daft.

If the article is accurate, one channel earned back money. The other hundred or so did not.

The lessons learned from Google’s “daft” approach struck me as confirmation of the observations I offered in a report to a former client about Thomson Reuters’ “what the heck” leap into the great Gangnam style of YouTube; namely:

  • Viewership and Google money are correlated
  • Online video is different from “real” TV shows
  • Meeting the needs of users is different from meeting the needs of advertisers

The conclusion of the write up struck me as illustrative of the Google approach:

With regards to the last lesson, allow me to submit to any YouTube employees out there that the ad agency doesn’t have the power in this equation. YouTube is a young company, it does not need to convert 100% of its value to dollars. Please, let the advertisers figure out for themselves how to tackle this very new medium instead of trying to shape the medium to meet their needs. Seems to me, that’s the strategy that got Google where it is today.

In short, Google may have lost contact with what made it successful.

Stephen E Arnold, March 31, 2013
