Me Too Innovation Is Real News. Is It?

August 14, 2017

I saw links to a Wall Street Journal write up titled “In Tech, Imitation Is the New Innovation.” To view the document, you will have to [a] buy a dead tree version of the paper, [b] borrow one from a friendly neighbor or from a low rise office building with newspapers scattered inside the entrance (because who picks up the paper when one can be on vacation?), or [c] pay for an online subscription to one of the outfits wanting the US government to bail the newspaper companies out. (Is this an imitation of the Chrysler and GM bailouts? Maybe, maybe.) You can find the story on page A-1 with a jump to page A-8 in the August 10, 2017, edition.

The main point of the write up is that the titans of Silicon Valley have run out of ideas. In order to get new ideas, the companies copy other companies. If the task of copying is tough, the big company may buy the outfit with the idea. Think how well that has worked out for Dodgeball.

The focus of the write up is the general inability of the titans to come up with new ideas that capture eyeballs. Facebook is the focus, but I think of Google as one of the premier companies using piggyback innovation.

An interesting example of quasi innovation is the Google patent application 2017/0228436 A1, which is a continuation of a patent series reaching back seven years to 2010. The seven-year-old patent itself nods its head to a Korean patent dating from 2002. The August 2017 patent application thus reaches back 15 years.

The idea of “standing on the shoulders of giants” romanticizes the fact that coming up with something that captures users is difficult. Very difficult.

What strikes me about “Providing Results to Parameterless Search Queries” is that Google’s “invention” is similar to the “me too” approach to creating something new referenced in the Wall Street Journal write up. Facebook is doing what seems “natural.” Imitation is natural because the original “good idea” cooked up at Harvard needs oomph. Data enables refinement of ideas that may be decades old.

Innovation is now less about invention than about copying or acquiring. Innovation has become a way to exploit comprehensive data.

Stephen E Arnold, August 14, 2017

Free Content Destroying Print Media

August 8, 2017

Today’s generation has no concept of having to wait for the day’s top stories until the newspaper is delivered. If they want to know something (or even if they don’t), they simply turn on their smartphone, tablet, or even watch! With news stories available 24/7 via automatic alerts, most people under thirty can’t possibly fathom paying for them.

It almost wasn’t that way. According to Poynter,

In the 1990s, a cantankerous, bottom-line-obsessed and visionary Tribune Company executive named Charles Brumback pushed something that was called The New Century News Network. The top print news organizations, including The New York Times, The Washington Post and Times-Mirror would form a network in which they’d house their content online and charge for it. Members would get paid based on usage. They even started a newswire that was similar to what we know as Google News.

Unfortunately, the heads of print media couldn’t see the future and how their pockets would be deflated by giving away their content to online giants such as Facebook, Yahoo, and Google.

Now, these same short-sighted network bigwigs want Congress to intervene on their behalf. As the article points out, “running to Congress seems belated and impotent.”

Catherine Lamsfuss, August 8, 2017

Lest Chinese Conglomerates Forget

August 4, 2017

Alphabet, the parent company of Google, was fined $2.7 billion last week for abusing its dominant position in search engine results. This should give Chinese companies with global ambitions a preview of what lies ahead for them.

In an editorial published by China Daily and titled “Google’s Fine a Reminder,” the author says:

Fining of Google should remind Chinese enterprises intent on going global that they should abide by local laws and regulations to avoid possible economic losses resulting from any malpractices and wrongdoings.

China is a closed ecosystem in which Google, Facebook, Apple, and Amazon have no dominance, unlike in the rest of the world’s economies. There, homegrown companies rule the roost. However, with burgeoning profits fuelled by domestic consumption, Chinese companies are looking to expand into other markets.

Given Google’s reputation for flouting rules, its fine from EU regulators should tell Chinese companies that if they break the law of the land, they can expect to be penalized, heavily.

Vishal Ingole, August 4, 2017

Facebook Grapples with Moderation

August 1, 2017

Mashable’s Alex Hazlett seems quite vexed about the ways Facebook is mishandling the great responsibility that comes with its great power in, “Facebook’s Been Making It Up All Along and We’re Left Holding the Bag.” Reporting on a recent leak from the Guardian of Facebook moderator documents, Hazlett writes:

It confirmed what a lot of people had long suspected: Facebook is making it up as they go along and we’re the collateral damage. The leaked moderator documents cover how to deal with depictions of things like self-harm and animal cruelty in exceedingly detailed ways. A first read through suggests that the company attempted to create a rule for every conceivable situation, and if they missed one, well they’d write that guideline when it came up. It suggests they think that this is just a question of perfecting the rules, when they’ve been off-base from the outset.

The article notes that communities historically craft and disseminate the rules, ethics, and principles that guide their discourse; in this case, the community is the billions of Facebook users across the globe, and those crucial factors are known only to the folks in control (except what was leaked, of course). Hazlett criticizes the company for its “generic platitudes” and lack of transparency around an issue that now helps shape the very culture of the entire world. He observes:

Sure, if Facebook had decided to take an actual stand, they’d have had detractors. But if they’d been transparent about why, their users would have gotten over it. If you have principles, and you stick to them, people will adjust. Instead, Facebook seems to change their policies based on the level of outrage that is generated. It contributes to a perception of them as craven and exploitative. This is why Facebook lurches from stupid controversy to stupid controversy, learning the hard way every. single. time.

These days, decisions by one giant social media company can affect millions of people, often in ways those affected don’t even perceive, much less understand. A strategy of lurching from one controversy to another does seem unwise.

Cynthia Murrell, August 1, 2017

Western in Western Out

July 26, 2017

A thoughtful piece at Quartz looks past filter bubbles to other ways mostly Western developers are gradually imposing their cultural perspectives on the rest of the world—“Silicon Valley Has Designed Algorithms to Reflect Your Biases, Not Disrupt Them.” Search will not get you objective information, but rather the content your behavior warrants. Writer Ramesh Srinivasan introduces his argument:

Silicon Valley dominates the internet—and that prevents us from learning more deeply about other people, cultures, and places. To support richer understandings of one another across our differences, we need to redesign social media networks and search systems to better represent diverse cultural and political perspectives. The most prominent and globally used social media networks and search engines— Facebook and Google—are produced and shaped by engineers from corporations based in Europe and North America. As a result, technologies used by nearly 2 billion people worldwide reflect the design perspectives of the limited few from the West who have power over how these systems are developed.

It is worth reading the whole article for its examination of the issue and its suggestions for what to do about it. Algorithm transparency, for example, would at least let users know what principles guide a platform’s content selections. Taking input from user communities in other cultures is another idea. My favorite is a proposal to prioritize firsthand sources over Western interpretations, even ones with low traffic or that are not in English. As Srinivasan writes:

Just because this option may be the easiest for me to understand doesn’t mean that it should be the perspective I am offered.

That sums up the issue nicely.

Cynthia Murrell, July 26, 2017

Instagram Reins in Trolls

July 21, 2017

Photo-sharing app Instagram has successfully implemented DeepText, a program that can weed out nasty and spammy comments from people’s feeds.

Wired, in an article titled “Instagram Unleashes an AI System to Blast Away Nasty Comments,” says:

DeepText is based on recent advances in artificial intelligence, and a concept called word embeddings, which means it is designed to mimic the way language works in our brains.
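The “word embeddings” idea mentioned in the quote can be sketched in a few lines. This is a toy illustration with invented three-dimensional vectors, not Facebook’s actual model, which learns far larger vectors from real text:

```python
import math

# Toy word-embedding table: each word maps to a small vector.
# Real systems learn vectors with hundreds of dimensions from
# large text corpora; these values are invented for illustration.
embeddings = {
    "idiot":  [0.9, 0.1, 0.0],
    "moron":  [0.8, 0.2, 0.1],
    "thanks": [0.0, 0.9, 0.8],
}

def cosine(u, v):
    """Cosine similarity: 1.0 means same direction, near 0.0 unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Words used in similar contexts end up with similar vectors, so an
# insult sits closer to another insult than to a pleasantry.
assert cosine(embeddings["idiot"], embeddings["moron"]) > \
       cosine(embeddings["idiot"], embeddings["thanks"])
```

The payoff is that a filter trained on known abusive words can also catch near-synonyms it has never been explicitly told about, because they occupy the same neighborhood in the vector space.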

DeepText was initially built by Facebook, Instagram’s parent company, to keep abusers, trolls, and spammers at bay. Buoyed by its success, Facebook soon implemented it on Instagram as well.

The development process was arduous: for months, a large number of employees and contractors taught the DeepText engine how to identify abusers by telling the algorithm which words can be abusive based on their context.

At the moment, the tools are being tested and rolled out to a limited number of users in the US and are available only in English. They will subsequently be rolled out to other markets and languages.

Vishal Ingole, July 21, 2017

Software That Detects Sarcasm on Social Media

July 20, 2017

The Technion-Israel Institute of Technology’s Faculty of Industrial Engineering and Management has developed Sarcasm SIGN, software that can detect sarcasm in social media content. People with learning difficulties will find this tool useful.

According to an article published by Digital Journal titled “Software Detects Sarcasm on Social Media”:

The primary aim is to interpret sarcastic statements made on social media, be they Facebook comments, tweets or some other form of digital communication.

As we move toward a more digitized world in which the majority of our communications pass through digital channels, people with learning disabilities are at a disadvantage. As machine learning advances, so do natural language capabilities. Tools like this will be immensely helpful for people who are unable to pick up the undertones of communication.
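The core idea of reinterpreting a sarcastic statement as its literal opposite can be sketched roughly as follows. The word lists and the trigger rule here are invented for illustration; they are not the Technion team’s actual method, which relies on trained models rather than fixed lists:

```python
# Toy sarcasm interpreter: if a comment pairs a positive word with an
# exaggeration marker, rewrite the positive word as its opposite.
# Both tables below are invented for this sketch.
OPPOSITES = {"great": "bad", "love": "dislike", "wonderful": "awful"}
MARKERS = {"yeah", "sure", "totally", "just"}  # crude sarcasm signals

def interpret(comment: str) -> str:
    """Return a literal reading of a possibly sarcastic comment."""
    words = [w.strip(".,!?").lower() for w in comment.split()]
    if MARKERS & set(words):  # a marker suggests sarcasm
        words = [OPPOSITES.get(w, w) for w in words]
    return " ".join(words)

print(interpret("Yeah, great job!"))  # -> "yeah bad job"
print(interpret("Great job!"))        # -> "great job"
```

A real system replaces the hand-made lists with classifiers trained on labeled sarcastic and sincere examples, but the output goal is the same: a plain-language reading that someone who misses the undertone can take at face value.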

The same tool can also be utilized by brands to determine who is talking about them in a negative way. Now ain’t that wonderful, Facebook?

Vishal Ingole, July 20, 2017

Facebook Factoid: Deleting User Content

July 6, 2017

Who knows if this number is accurate. I found the assertion of a specific number of Facebook deletions interesting. Plus, someone took the time to wrap the number in some verbiage about filtering, aka censorship. The factoid appears in “Facebook Deletes 66,000 Posts a Week to Curb Hate Speech, Extremism.”

Here’s the passage with the “data”:

Facebook has said that over the past two months, it has removed roughly 66,000 posts on average per week that were identified as hate speech.

My thought is that the figure of 3.2 million “content objects” is neither high nor low. The number is without context, other than my assumption that Facebook has two billion users per month. The method used to locate and scrub the data seems to be a mystical process powered by artificial intelligence and humans.

One thing is clear to me: Figuring out what to delete seems to be a somewhat challenging task, both for the engineers writing the smart software and for the lucky humans who get paid to identify inappropriate content in the musings of billions of happy Facebookers.

What about those “failures”? Good question. What about that “context”? Another good question. Without context, what have we with this magical 66,000? Not much, in my opinion. One can’t find information if it has been deleted. That’s another issue to consider.

Stephen E Arnold, July 6, 2017

Facebook to Tackle Terrorism with Increased Monitoring

July 5, 2017

Due to recent PR nightmares involving terrorist organizations, Facebook is revamping its policies and policing of terrorism content within the social media network. A recent article in Digital Trends, “Facebook Fights Against Terrorist Content on Its Site Using A.I., Human Expertise,” explains how Zuckerberg and his team of anti-terrorism experts are changing the game in monitoring Facebook for terrorist activity.

As explained in the article,

To prevent AI from flagging a photo related to terrorism in a post like a news story, human judgment is still required. In order to ensure constant monitoring, the community operations team works 24 hours a day and its members are also skilled in dozens of languages.

Recently, Facebook was in the news for putting its human monitors at risk by accidentally revealing personal information to the terrorists they were investigating on the site. As Facebook increases the number of monitors, it seems the risk to those monitors also increases.

The efforts put forth by Facebook are admirable, yet we can’t help but wonder how – even with its impressive AI/human team – the platform can monitor the sheer number of live-streamed videos as those numbers continue to increase. The threats, terrorist or otherwise, present in social media continue to grow with the technology and will require a much bigger fix than more manpower.

Catherine Lamsfuss, July 5, 2017
