Reputation Repair Via Content Moderation? Possibly a Long Shot
December 30, 2021
Meta (formerly known as Facebook) is firing another shot in the AI wars. CNET reports, “Facebook Parent Meta Uses AI to Tackle New Types of Harmful Content.” The new tool is intended to flag posts containing misinformation and posts promoting violence. It also seems designed to offset recent criticism of the company, especially charges that it is not doing enough to catch fake COVID-19 news.
As Meta moves forward with its grand plans for the metaverse, it is worth noting the company predicts this tech will also work on complex virtual reality content. Eventually. Writer Queenie Wong tells us:
“Generally, AI systems learn new tasks from examples, but the process of gathering and labeling a massive amount of data typically takes months. Using technology Meta calls Few-Shot Learner, the new AI system needs only a small amount of training data so it can adjust to combat new types of harmful content within weeks instead of months. The social network, for example, has rules against posting harmful COVID-19 vaccine misinformation, including false claims that the vaccine alters DNA. But users sometimes phrase their remarks as a question like ‘Vaccine or DNA changer?’ or even use code words to try to evade detection. The new technology, Meta says, will help the company catch content it might miss. … Meta said it tested the new system and it was able to identify offensive content that conventional AI systems might not catch. After rolling out the new system on Facebook and its photo-service Instagram, the percentage of views of harmful content users saw decreased, Meta said. Few-Shot Learner works in more than 100 languages.”
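Meta has not published Few-Shot Learner’s internals, but the idea described above can be sketched with a toy example: instead of training on a massive labeled corpus, classify a new post by its similarity to a handful of labeled examples. The labels, example posts, and word-overlap scoring below are all hypothetical stand-ins, a minimal sketch of the few-shot idea, not Meta’s system.

```python
# Toy illustration of few-shot text classification (NOT Meta's Few-Shot
# Learner, whose implementation is not public): label a new post by its
# similarity to a handful of labeled examples, using word-overlap
# (Jaccard) similarity as a crude stand-in for learned representations.

def tokens(text):
    # Lowercase, strip basic punctuation, split into a set of words.
    return set(text.lower().replace("?", "").replace(".", "").split())

def jaccard(a, b):
    # Similarity between two token sets: shared words / total words.
    return len(a & b) / len(a | b) if a | b else 0.0

# The "few shots": tiny labeled example sets instead of months of labeling.
examples = {
    "harmful": [
        "the vaccine alters DNA",
        "vaccine or dna changer",
    ],
    "benign": [
        "where can i get a vaccine appointment",
        "the vaccine is free at local clinics",
    ],
}

def classify(post):
    # Score the post against every example; return the best-matching label.
    scores = {
        label: max(jaccard(tokens(post), tokens(ex)) for ex in exs)
        for label, exs in examples.items()
    }
    return max(scores, key=scores.get)

print(classify("Vaccine or DNA changer?"))  # → harmful
```

The toy also shows why the question-phrasing trick Wong mentions can fail as an evasion: “Vaccine or DNA changer?” still shares nearly all its words with a known harmful example, so it scores high against that label despite the rewording.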
Yep, another monopoly-type outfit doing the better, faster, cheaper thing while positioning the move as a boon for users. Will Few-Shot Learner help Meta salvage its reputation?
Cynthia Murrell, December 30, 2021