Does Smart Software Understand Kid Vids?
April 26, 2018
The growth of AI and predictive analytics across the spectrum has become a universal rah rah. Supercomputers and their data-crunching power are being used by industries great and small. However, the producers of AI technology might not be getting rich off this revolution. We learned more from a recent MarketWatch story, “IBM Earnings Show AI Is Not Paying Off Yet.”
According to the story:
“’The bulls were hoping for a clean modest beat on this key growth segment, which represents the underpinnings of the IBM turnaround story in 2018 and beyond,’ Ives said in a note to clients. In an email, Ives said he does not have an estimate for Watson itself. ‘It’s a major contributing factor to strategic imperatives and helping drive double-digit growth…’”
Despite these less-than-stellar results, the big names in tech aren’t being scared away from AI yet. In fact, investment is still booming. Intel, for one, is betting a large chunk of cash on AI. We will be watching this development closely, since we all know that AI can be the greatest product in the world, but if it keeps losing money it might just end up in the graveyard. (Unlikely, we know.)
But—and there seems to be a “but” when it comes to the capabilities of smart software—we noticed that Google seems to be relying on humans to make sure that children’s videos are not violent, chock full of objectionable material, or otherwise inappropriate for kiddie viewing. According to “For the First Time, Parents Will Be Able to Limit YouTube Kids to Human-Reviewed Channels and Recommendations”:
The new features will allow parents to lock down the YouTube Kids app so it only displays those channels that have been reviewed by humans, not just algorithms. And this includes both the content displayed within the app itself, as well as the recommended videos. A later update will allow parents to configure which videos and channels, specifically, can be viewed.
A few observations seem to be warranted:
- Google’s vaunted smart software cannot determine what’s appropriate for children. Therefore, Google is now assuming the role that old-school, chain-smoking, ink-stained editors once performed. Back to the past?
- If the smart software cannot figure out which videos are okay for children, how accurate is Google’s ad matching software? Is it possible that the ad matching system performs in only a “good enough” manner? Will advertisers lose confidence that their money is putting messages in front of the “right” eyeballs?
- Perhaps Google has caught the same case of the sniffles that IBM Watson has been suffering? The failure of smart software with regard to kid vids suggests that hyperbole is not the same as actual performance.
The kid vid matter is as significant as the Facebook Cambridge Analytica matter. Could these be different facets of the same assumption that technology is a go-getter?
Stephen E Arnold, April 26, 2018