Google and Its Amazing, Proliferating Services

August 22, 2019

It is all about the live streaming, backed by strong DVR capabilities. Digital Trends asks and answers, “What Is YouTube TV? Here’s Everything You Need to Know.” At a pricey $50 a month (minimum), the service is quite the entertainment investment. For some, though, it may be worth it. Writer Josh Levenson insists that the available features, particularly YouTube TV’s version of a cloud-storage DVR, more than make up for its limitations. These shortfalls include fewer channels than competitors like AT&T TV Now (formerly DirecTV Now) and Sling TV, and support for fewer devices. He tells us:

“Out of all the various features baked into YouTube TV, one stands out from the crowd: Cloud DVR. Granted, that’s a tool that most live TV streaming services offer these days, but Google has hit the nail on the head, offering a more natural experience—letting you record as much content as you want, which can be stored for up to nine months at a time, putting an end to the storage limits that most competitors impose. …”

We also noted:

“Like most streaming services, YouTube TV also offers its customers the option to watch the content on multiple screens at once. To be specific, you’ll have the option to create up to six sub-accounts for family members, of which three can watch at the same time. There is no option to upgrade to a higher plan, either—so that’s a firm cap at three streams at the same time, but that should be more than enough for most families.”

But will most households have a device on hand that can play YouTube TV? To run the service on a 4K television, one needs a streaming-capable set-top box or a dedicated streaming stick. As with every service but PlayStation Vue, viewing on a PlayStation 4 is out, though all Xbox One models are supported. The service runs in a Chrome or Firefox browser on a PC and natively on Android and Apple devices. YouTube TV is also supported on Android TV, Apple TV, Chromecast, Fire TV, Roku OS, Vizio SmartCast televisions, and post-2016 smart TVs from LG and Samsung.

Yes, most could probably find something on which to watch YouTube TV. But is it worth the monthly cost? How long will Google stick with the service? Who has time for multiple streaming services? How can a YouTuber message another? What about child-suitable options? Perhaps benched AI whiz Mustafa Suleyman is available to help resolve thorny YouTube questions?

Many questions for a company with remarkable management acumen.

Cynthia Murrell, August 22, 2019

Audio Data Set: Start Your AI Engines

August 16, 2019

Machine learning projects have a new source of training data. BoingBoing announces the new “Open Archive of 240,000 Hours’ Worth of Talk Radio, Including 2.8 Billion Words of Machine-Transcription.” A project of the MIT Media Lab, RadioTalk holds a wealth of machine-generated transcriptions of talk radio broadcasts aired between October 2018 and March 2019. Naturally, the text is all tagged with machine-readable metadata. The team hopes their work will enrich research in natural language processing, conversational analysis, and the social sciences. Writer Cory Doctorow comments:

“I’m mostly interested in the social science implications here: talk radio is incredibly important to the US political discourse, but because it is ephemeral and because recorded speech is hard to data-mine, we have very little quantitative analysis of this body of work. As Gretchen McCulloch points out in her new book on internet-era language, Because Internet, research on human speech has historically relied on expensive human transcription, leading to very small corpuses covering a very small fraction of human communication. This corpus is part of a shift that allows social scientists, linguists and political scientists to study a massive core-sample of spoken language in our public discourse.”

The metadata attached to these transcripts includes information about geographical location, speaker turn boundaries, gender, and radio program information. Curious readers can access the researchers’ paper here (PDF).
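A corpus like this is typically distributed as one record per utterance: a machine transcript plus machine-readable metadata fields. The sketch below shows the basic idea of filtering such records by keyword. The field names and sample data here are illustrative guesses, not the dataset’s actual schema.

```python
import json

# Toy records in the spirit of the corpus: machine transcripts plus
# machine-readable metadata. Field names are illustrative only.
raw = """
[{"content": "callers debated the state budget tonight",
  "speaker_turn": 1, "guessed_gender": "F", "station_state": "MA"},
 {"content": "next up, the weekend football preview",
  "speaker_turn": 2, "guessed_gender": "M", "station_state": "CO"}]
"""

records = json.loads(raw)

def mentions(records, keyword):
    """Return every utterance whose transcript contains the keyword."""
    return [r for r in records if keyword in r["content"]]

hits = mentions(records, "budget")
```

Once the transcripts are text plus structured metadata, the kinds of quantitative questions Doctorow mentions (who says what, where, and how often) reduce to ordinary filtering and counting.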

Cynthia Murrell, August 16, 2019

Deep Fake Round Up

August 5, 2019

DarkCyber spotted “8 Deepfake Examples That Terrified the Internet.” This type of article is interesting because it catalogs items which can be forgotten or which become difficult to locate even with the power of Bing, DuckDuckGo, or Google search at one’s fingertips. The DarkCyber team was not “terrified.” In fact, we were amused once again by item three: “Zuckerberg speaks frankly.”

Stephen E Arnold, August 5, 2019

Pinterest Offers Soothing Activities for Stressed Users

July 31, 2019

Pinterest does send email about “pins that might interest you.” Distressed? Well, that’s different.

A remarkable new feature at Pinterest aims to help distressed users, we learn from “New Pinterest Tools Help Calm Anxiety, Reduce Stress.” The soothing activities are accessed by searching for phrases like “stress quotes,” “work anxiety,” or related terms. Pinterest will even keep these searches private—a rare mercy these days. Writer Stephanie Mlot reports:

“But don’t expect the usual litany of colorful thumbnails and interspersed ads. These new resources look different from the rest of Pinterest—‘because the experience is kept separate,’ according to product manager Annie Ta. ‘People’s interactions with these activities are private and not connected to their account,’ she explained in a blog announcement. ‘This means we won’t show recommendations or ads based on their use of these resources.’”

We noted this “do not track” comment too:

“Pinterest also does not track who uses them; all activity is stored anonymously using a third-party service. And, as always, if someone searches for self-harm-related content, they will be directed to the National Suicide Prevention Lifeline—just two taps away.”

Ta stressed that these tools were developed in response to a startling statistic: the Centers for Disease Control reports that more than half of Americans will be diagnosed with a mental disorder or illness at one time or another. The folks at Pinterest also noticed millions of emotional-health-related searches coming across their platform. Though these activities do not take the place of professional care, Pinterest hopes they will help users cope with distress in their lives.

Cynthia Murrell, July 31, 2019

Sockpuppet Image Source

July 23, 2019

I read “Turn Selfies into Classical Portraits with the AI That Fuels Deepfakes.” I gave the system a spin. I uploaded a picture from this week’s DarkCyber. The system generated a wonderful image usable by anyone with access to a source of images; for example, Bing Images or Facebook. Here’s the result:


Working well. Cloud-centric or a laptop? I loved the explanation: “Huge traffic.” Back to those scaling lectures.

Stephen E Arnold, July 23, 2019

From the Desk of Captain Obvious: How Image Recognition Mostly Works

July 8, 2019

Want to be reminded how super duper image recognition systems work? If so, navigate to the capitalist tool’s “Facebook’s ALT Tags Remind Us That Deep Learning Still Sees Images as Keywords.” The DarkCyber team knows that this headline is designed to capture clicks and certainly does not apply to every image recognition system available. But if the image is linked via metadata to something other than a numeric code, then images are indeed mapped to words. Words, it turns out, remain useful in our video- and picture-first world.

Nevertheless, the write up offers some interesting comments, which is what the DarkCyber research team expects from the capitalist tool. (One of our DarkCyber team members saw Malcolm Forbes at a Manhattan eatery keeping a close eye on a spectacularly gaudy motorcycle. Alas, that Mr. Forbes is no longer with us, although the motorcycle probably survives somewhere, unlike the “old” Forbes’ editorial policies.)

Here’s the passage:

For all the hype and hyperbole about the AI revolution, today’s best deep learning content understanding algorithms are still remarkably primitive and brittle. In place of humans’ rich semantic understanding of imagery, production image recognition algorithms see images merely through predefined galleries of metadata tags they apply based on brittle and naïve correlative models that are trivially confused.

Yep, and ultimately the hundreds of millions of driver license pictures will be mapped to words; for example, name, address, city, state, zip, along with a helpful pointer to other data about the driver.

The capitalist tool reminds the patient reader:

Today’s deep learning algorithms “see” imagery by running it through a set of predefined models that look for simple surface-level correlative patterns in the arrangement of its pixels and output a list of subject tags much like those human catalogers half a century ago.

Once again, no push back from Harrod’s Creek. However, it is disappointing that new research is not referenced in the article; for example, the companies involved in Darpa Upside.

Stephen E Arnold, July 8, 2019

Audio Search: Google Gets with the Program

March 27, 2019

Searching audio files has been difficult. Exalead, before Dassault bought the company, dabbled in audio search. One could key in a keyword and jump to the segment of a file which contained the word or phrase. That was in 2006, maybe 2007. Despite my advanced age and inability to recall the innovations of search and retrieval wizards, I know that was more than a decade ago.

I read “Google Podcasts In-Episode Search Is Coming, Shows Now Being Fully Transcribed.” The write up reports:

Google Podcasts is now automatically generating transcripts of episodes and is using them as metadata to help listeners search for shows, even if they don’t know the title or when it was published.

I spoke with a person who translates audio recordings from one language into English. Here are some highlights from that chat:

  • “Even though I am a native speaker and fluent in English, it is very, very difficult to make out what some people are saying. I slow down the recording. I listen several times. I fiddle with the sound.”
  • “Accents pose a problem. For example, if a person is speaking one language but learned that language by osmosis, the pronunciation is often strange. In some cases, I have no idea what the person speaking is trying to communicate. Some people do not articulate or put the stresses where a native speaker puts them.”
  • “Muddled sounds pose big challenges. I am not sure why but even modern recording equipment drops sounds. In some cases, rustling or tapping fuzzes what the person is saying.”

Net net: How accurate will the transcripts be? The answer is probably going to be like the accuracy scores for facial recognition: maybe 50 percent to 75 percent accurate out of the gate. But better than nothing when one wants to sell ads which match the transcribed keywords, right? Will Steve Gibson stop creating transcripts of Security Now? Probably not.
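For context, transcript accuracy is conventionally scored as word error rate (WER): the word-level edit distance between a reference transcript and the machine’s hypothesis, divided by the reference length. A minimal sketch of the standard calculation (the `wer` helper is our illustration, not anything from the article):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)
```

A transcript that is “50 percent accurate” in casual terms corresponds roughly to a WER around 0.5, which is plenty good enough for keyword-matched ads even when it is too rough for human reading.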

Stephen E Arnold, March 27, 2019

Facial Recognition and Image Recognition: Nervous Yet?

November 18, 2018

I read “A New Arms Race: How the U.S. Military Is Spending Millions to Fight Fake Images.” The write up contained an interesting observation from an academic wizard:

“The nightmare situation is a video of Trump saying I’ve launched nuclear weapons against North Korea and before anybody figures out that it’s fake, we’re off to the races with a global nuclear meltdown.” — Hany Farid, a computer science professor at Dartmouth College

Nothing like a shocking statement to generate fear.

But there is a more interesting image recognition observation. “Facebook Patent Uses Your Family Photos For Targeted Advertising” reports that the social media sparkler has an invention that will

attempt to identify the people within your photo to try and guess how many people are in your family, and what your relationships are with them. So for example if it detects that you are a parent in a household with young children, then it might display ads that are more suited for such family units. [US20180332140]

While considering the implications of pinpointing family members and linking the deduced and explicit data, consider that one’s fingerprint can be duplicated. The dupe allows a touch ID to be spoofed. You can get the details in “AI Used To Create Synthetic Fingerprints, Fools Biometric Scanners.”

For a law enforcement and intelligence angle on image recognition, watch for DarkCyber on November 27, 2018. The video will be available on the Beyond Search blog splash page at this link.

Stephen E Arnold, November 18, 2018

Bing: Getting More Visual

October 27, 2018

Bing Gets Visual, But Stays Behind The Curve

Microsoft’s red-headed stepchild of the search world is slowly and steadily making its next stab at greatness. While the little search engine that could has been trying valiantly to overtake Google for years, it is taking concrete steps in the right direction with news we discovered in a recent Android Community story, “Bing Update Brings Text Transcription, Education Carousel, Visual Search.”

The update that has us most excited is its visual search:

“Bing also lets you copy and search the actual text that you see on your camera. For example, you take a pic of the menu in the restaurant, tap the text and search how to pronounce it and what it actually is. You can use it to take pictures of phone numbers, serial numbers, email addresses, navigate to an address, etc.”

As expected, Bing is a little behind the curve. While Bing is just beginning to blossom in the world of visual search, Google is already there and also adding greater visual cues aimed at retaining visitors. By incorporating more pictures and videos, and less text, the king of the mountain is looking to hold its grip on users. We would love to see Bing outduel Google someday, but we don’t see it on the horizon.

Patrick Roland, October 25, 2018

Design Tool Picular Taps into Google Image Color Data

October 9, 2018

We learn from a write-up at Fast Company that “Google Image Search Is Now a Design Tool.” More specifically, the new design tool Picular taps into Google Image Search for its data. This is an intriguing approach. Associate editor Katharine Schwab writes:

“Picular is a new color search tool that lets you enter any search term and presents you with a slew of options, basing all of its color choices on what pops up first in Google image search. It’s a color-picker, courtesy of internet hive mind. For instance, if you type the word ‘desert’ into Picular’s search bar, the tool scrapes the top 20 image results from Google and finds the most dominant color in each image. It presents these results in a series of tiles: A sea of sandy browns and oranges, with a few blues (presumably from the sky) thrown in. Each tile has the color’s RGB code that instantly copies to your clipboard when you click on the tile, making it easy to instantly try out the colors in your work. Picular is a quick and handy way to get color ideas for a design project, especially because you can type in more emotional, evocative words and see what Google instantly associates with each idea.”

And where does Google get its associations? From its algorithms’ studies of human nature, of course. It may at first seem odd to consult an AI to better know the colors of human emotions and ideas, but some color associations we think of as natural actually vary from culture to culture, and Google extracts its data from around the world. Such a tool could certainly help designers and, especially, advertisers better connect with their intended audiences through color. Picular was created by Future Memories, a digital studio out of Sweden that was founded in 2014.
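The “most dominant color in each image” step Schwab describes can be sketched in a few lines: coarsely quantize the pixels so nearby shades merge, then count which bucket wins. This is a minimal illustration of the idea, not Picular’s actual code; the `dominant_color` helper and its bucket size are our assumptions.

```python
from collections import Counter

def dominant_color(pixels, bucket=32):
    """Return the most common coarse color among (r, g, b) pixel tuples.

    Quantizing into `bucket`-wide bins before counting merges near-identical
    shades, so "sandy brown" wins even if no two sand pixels match exactly.
    """
    def quantize(p):
        # Snap each channel to the center of its bucket.
        return tuple((c // bucket) * bucket + bucket // 2 for c in p)
    counts = Counter(quantize(p) for p in pixels)
    return counts.most_common(1)[0][0]
```

Run over the top 20 image results for a query, this yields one representative tile per image, exactly the kind of grid Picular displays.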

Cynthia Murrell, October 9, 2018
