Filtering for Fuzziness the YouTube Way

February 11, 2019

Software examines an item of content. The smart part of the software says, “This is a bad item.” Presumably the smart software has rules or has created rules to follow. So far, despite the artificial intelligence hyperbole, smart software is competent only in certain narrow applications. But figuring out whether a human, intentionally or unintentionally, has created information which another person finds objectionable is a tough job. Even humans struggle.

For example, a video interview — should one exist — in which Tim O’Reilly explains “The Fundamental Problem with Silicon Valley’s Favorite Strategy” could be considered offensive to some viewers and possibly to practitioners of “blitz growth.” When money is at stake along with its sidekick power, Mr. O’Reilly could be viewed as crossing “the line.”

How would YouTube handle this type of questionable content? Would the video be unaffected? Would it be demoted for crossing “the line” because unfettered capitalism is the go-to business model for many companies, including YouTube’s owner? If flagged, what happens to the video?

The Hexus article “YouTube Video Promotion AI Change Is a ‘Historic Victory’” may provide some insight into my hypothetical example, which does not involve hate speech, controlled substances, trafficking, or other allegedly “easy to resolve” edge cases.

I noted this statement:

The key change being implemented by YouTube this year is in the way it “can reduce the spread of content that comes close to – but doesn’t quite cross the line of – violating our Community Guidelines”. Content that “could misinform users in harmful ways” will find its influence reduced. Videos “promoting a phony miracle cure for a serious illness, claiming the earth is flat, or making blatantly false claims about historic events like 9/11” will be affected by the tweaked recommendation AI, we are told. YouTube is clear that it won’t be deleting these videos, as long as they comply with Community Guidelines. Furthermore, such borderline videos will still be featured for users that have the source channels in their subscriptions.

I think this means, “Link buried deep in the results list.” Fewer and fewer users of search systems dig into the subsequent pages of possibly relevant hits. That’s why search engine optimization people are in business. Relevance and objectivity are of zero importance. Appearing at the top of a results list, preferably as the first result, is the goal of some SEO experts. Appearing deep in a results list generates almost zero traffic.

The Hexus write up continued:

At the weekend former Google engineer Guillaume Chaslot admitted that he helped to build the current AI used to promote recommended videos on YouTube. In a thread of tweets, Chaslot described the impending changes as a “historic victory”. His opinion comes from seeing and hearing of people falling down the “rabbit hole of YouTube conspiracy theories, with flat earth, aliens & co”.

So what? The write up points out:

Unfortunately there is an asymmetry with BS.

When monopolies decide, what happens?

Certainly this is a question which warrants some effort on the part of graduate students to answer. The companies involved may not be the optimal sources of accurate information.

Stephen E Arnold, February 11, 2019
