Trouble Ahead for Deep Fakes and Fancy Technology?
January 3, 2020
At a New Year’s get together, a person mentioned a review of the film “Cats.” I don’t go to movies, but the person’s comments intrigued me. I returned home and tracked down “How Cats Became a Box Office Catastrophe.”
I noted one passage in the write up:
We probably don’t need to remind you of the backlash the internet unleashed upon Cats the moment the Cats trailer dropped. Viewers gasped in horror as Universal’s vision of adding cat fur and features to the proportions of a human body was finally revealed. It was uncomfortable to look at, a clear example of the uncanny valley, where viewers are unsettled by artificially constructed beings that are just shy of realism.
The write up then added:
Beyond subjective opinions, critics highlighted several issues including glitchy and unpolished CGI that could have been a result of its rushed production, that took place within a single year. In contrast, this year’s photo-realistic Lion King movie began work in 2016.
Two points stand out: the backlash and the “unpolished CGI.”
What happens when the rough-hewn nature of other fantastical technology, swathed in investor hype and marketers’ misrepresentations, is understood?
Exciting for some in 2020.
Stephen E Arnold, January 3, 2020
Another Google Gaffe?
December 30, 2019
Censorship is an intriguing job. A human — chock full of failings — has to figure out if an object is offensive, defensive, or maybe-sive.
If true, the BBC story “YouTube Admits Error over Bitcoin Video Purge” documents a misstep. DarkCyber loves the GOOG, and the research team doubts any anecdote suggesting a Google gaffe took place. For example:
Many video-makers have complained that YouTube’s current systems let so-called “copyright trolls” make false claims on their videos, while its automated detection tools often fail to understand when material has been legally used.
The BBC reports:
YouTube said in a statement that it had “made the wrong call” and confirmed that any content mistakenly removed would be restored. “With the massive volume of videos on our site, sometimes we make the wrong call,” it said. “When it’s brought to our attention that a video has been removed mistakenly, we act quickly to reinstate it.” It said there had been no changes to its policies, and insisted there would be “no penalty” to any channels that were affected by the incident.
I liked the idea that Googzilla is an “it,” very 2020. And what about the individuals who depend on YouTube for some money?
Yeah, well, you know, err.
Stephen E Arnold, December 30, 2019
YouTube-Supplied Music Leads to Massive Video Demonetization
December 10, 2019
YouTube cheats its content creators. The video sharing platform is constantly changing its rules, demonetizing videos without notice, and deleting videos for “offensive” content. YouTube claims it loves its creators and offers tools and services to help them. One of those services is a library of royalty free music for videos, but content creators should beware of video platforms offering free music. Torrent Freak reports on the situation in “‘Royalty Free’ Music Supplied By YouTube Results In Mass Video Demonetization.”
Matt Lowne is a popular game streaming YouTuber; think Pewdiepie, except he has only 56 million views. To avoid copyright strikes, which lead to demonetization, YouTubers steer clear of copyrighted content such as music and video clips. Lowne used a track called “Dreams” by Joakim Karud from YouTube’s audio library for his video introductions. After he posted a video, he was barraged with emails stating that he had used material belonging to SonyATV, PeerMusic, Warner Chappell, LatinAutor, and Audiam.
Now all of Lowne’s ad revenue is split among the claimant companies, and he gets the crumbs. Composer Joakim Karud allows anyone to use his music royalty free, which makes him a popular artist on YouTube. Lowne filed a dispute to contest the claim, but only on one of his videos. If he disputed the claims on every one of his videos and lost, he could rack up three strikes and be suspended indefinitely from YouTube. Lowne is not the only YouTuber with this problem, and the companies filing the copyright claims may have legitimate grounds:
“Sure enough, if one turns to the WhoSampled archive, Dreams is listed as having sampled Weaver of Dreams, a track from 1956 to which Sony/ATV Music Publishing LLC and Warner/Chappell Music, Inc. own the copyrights. If the trend of claims against ‘Dreams’ continues, there is potential for huge upheaval on YouTube and elsewhere. Countless thousands of videos use the track and as a result it has become very well-known.”
To make matters worse, YouTube issued an official statement saying that “Dreams” was never part of its official audio library. The track was listed as royalty free music on an unofficial channel that claimed to be the YouTube audio library. Oh boy! It is even more important to double-check whether music really is royalty free. Maybe it would be better to use music in the public domain or hire someone to compose original music?
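For readers curious about the economics, here is a minimal sketch, in Python, of how ad revenue might evaporate once several claimants attach themselves to a single video. YouTube does not publish its split formula, so the percentages and the equal-share assumption below are purely illustrative.

```python
# Hypothetical illustration only: YouTube does not publish its revenue-split
# formula. Assumes claimants divide a fixed share of the claimed video's revenue.

def split_ad_revenue(gross_revenue, claimants, claimed_share=0.95):
    """Return payouts for a video carrying competing copyright claims.

    gross_revenue -- ad earnings for the video in dollars (assumed figure)
    claimants     -- names of the companies that filed claims
    claimed_share -- fraction diverted to claimants (assumed figure)
    """
    if not claimants:
        return {"creator": gross_revenue}
    diverted = gross_revenue * claimed_share
    per_claimant = round(diverted / len(claimants), 2)
    payouts = {name: per_claimant for name in claimants}
    payouts["creator"] = round(gross_revenue - diverted, 2)
    return payouts

companies = ["SonyATV", "PeerMusic", "Warner Chappell", "LatinAutor", "Audiam"]
print(split_ad_revenue(100.00, companies))
# Each claimant pockets 19.00; the creator keeps 5.00. Crumbs indeed.
```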
Whitney Grace, December 10, 2019
TikTok Messaging
November 29, 2019
Is TikTok a platform for anti-nation state propaganda? (If you don’t know about TikTok, this write up will make no sense. Stop reading.)
The answer is, “Yep.”
A good explanation of what young people are doing with short videos appears in “Teen Who Went Viral with TikTok Hair Tutorial Tells ITV News People Need to Know about China Threat to Uighur Muslims.”
This is important for several reasons:
- TikTok is a China-based outfit. DarkCyber thinks that Chinese officials will be talking about TikTok and coming up with some creative ideas to prevent hair-tutorial-type information from going global.
- Teens and other TikTok users may be difficult to guide down the path of truth and justice. More meetings will be necessary.
- More attention on the Uighur matter may not be desirable. More meetings will ensue.
Net net: TikTok may be invited to some meetings and given an opportunity to be re-educated. Just a thought. Russia re-educated Apple about Crimea. China and Russia may share ideas when their joint military exercise with Iran takes place.
Worth monitoring.
Stephen E Arnold, November 29, 2019
Microsoft Buys AnyVision: Why?
October 30, 2019
We noted “Why Did Microsoft Fund an Israeli Firm That Surveils West Bank Palestinians?” The write up stated:
Microsoft has invested in a startup that uses facial recognition to surveil Palestinians throughout the West Bank, in spite of the tech giant’s public pledge to avoid using the technology if it encroaches on democratic freedoms. AnyVision, which is headquartered in Israel but has offices in the United States, the United Kingdom and Singapore, sells an “advanced tactical surveillance” software system, Better Tomorrow. It lets customers identify individuals and objects in any live camera feed, such as a security camera or a smartphone, and then track targets as they move between different feeds.
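How one tracks targets “as they move between different feeds” is not spelled out in the write up, but the standard trick in this niche is to compare face embeddings across cameras. The snippet below is a generic, minimal sketch of that idea using made-up vectors and an assumed similarity threshold; it is not a description of AnyVision’s Better Tomorrow system.

```python
# Generic cross-camera re-identification by embedding similarity.
# Not AnyVision's method; the threshold and the vectors are assumptions.
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_across_feeds(watchlist, detections, threshold=0.8):
    """Pair watchlist identities with detections from any camera feed.

    watchlist  -- {identity_name: reference embedding}
    detections -- list of (camera_id, detection embedding) tuples
    """
    hits = []
    for camera_id, detection in detections:
        for name, reference in watchlist.items():
            if cosine_similarity(reference, detection) >= threshold:
                hits.append((name, camera_id))
    return hits

# Toy usage: random vectors stand in for real face embeddings.
rng = np.random.default_rng(0)
person_a = rng.normal(size=128)
watchlist = {"person_a": person_a}
detections = [("lobby_cam", person_a + rng.normal(scale=0.05, size=128)),
              ("street_cam", rng.normal(size=128))]
print(match_across_feeds(watchlist, detections))  # expected: [('person_a', 'lobby_cam')]
```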
The write up covers the functions of the firm’s technology. The contentious subject of facial recognition is raised.
However, one question was not asked, “Why?” Microsoft took action despite employee pushback on certain projects.
The answer is, “Possess a technology that gets Microsoft closer to Amazon’s capabilities in this particular technical niche.”
Microsoft has to beef up in a number of technical spaces. It may have a demanding client and a major project which requires certain capabilities. Marketing is one thing; delivering is another.
Stephen E Arnold, October 30, 2019
TikTok: True Colors?
October 22, 2019
Since it emerged from China in 2017, the video sharing app TikTok has become very popular. In fact, it became the most downloaded app in October of the following year, after merging with Musical.ly. That deal opened up the U.S. market, in particular, to TikTok. Americans have since been having a blast with the short-form video app, whose stated mission is to “inspire creativity and joy.” The Verge, however, reminds us where this software came from—and how its owners behave—in the article, “It Turns Out There Really Is an American Social Network Censoring Political Speech.”
Reporter Casey Newton grants that US-based social networks have their limits, removing hate speech, violence, and sexual content from their platforms. However, that is a far cry from the types of censorship that are common in China. Newton points to a piece by Alex Hern in The Guardian that details how TikTok has directed its moderators to censor content about Tiananmen Square, Tibetan independence, and the Falun Gong religious group. It is worth mentioning that TikTok’s producer, ByteDance, maintains a separate version of the app (Douyin) for use within China’s borders. The suppression documented in the Guardian story, then, is aimed specifically at the rest of us. Newton writes:
“As Hern notes, suspicions about TikTok’s censorship are on the rise. Earlier this month, as protests raged, the Washington Post reported that a search for #hongkong turned up ‘playful selfies, food photos and singalongs, with barely a hint of unrest in sight.’ In August, an Australian think tank called for regulators to look into the app amid evidence it was quashing videos about Hong Kong protests. On the one hand, it’s no surprise that TikTok is censoring political speech. Censorship is a mandate for any Chinese internet company, and ByteDance has had multiple run-ins with the Communist party already. In one case, Chinese regulators ordered its news app Toutiao to shut down for 24 hours after discovering unspecified ‘inappropriate content.’ In another case, they forced ByteDance to shutter a social app called Neihan Duanzi, which let people share jokes and videos. In the aftermath, the company’s founder apologized profusely — and pledged to hire 4,000 new censors, bringing the total to 10,000.”
For its part, TikTok insists the Guardian-revealed guidelines have been replaced with more “localized approaches,” and that they now consult outside industry leaders in creating new policies. Newton shares a link to TikTok’s publicly posted community guidelines, but notes it contains no mention of political posts. I wonder why that could be.
Cynthia Murrell, October 22, 2019
Bias: Female Digital Assistant Voices
October 17, 2019
It was a seemingly benign choice based on consumer research, but there is an unforeseen complication. TechRadar considers, “The Problem with Alexa: What’s the Solution to Sexist Voice Assistants?” From smart speakers to cell phones, voice assistants like Amazon’s Alexa, Microsoft’s Cortana, Google’s Assistant, and Apple’s Siri generally default to female voices (and usually sport female-sounding names) because studies show humans tend to respond best to female voices. Seems like an obvious choice—until you consider the long-term consequences. Reporter Olivia Tambini cites a report UNESCO issued earlier this year that suggests the practice sets us up to perpetuate sexist attitudes toward women, particularly subconscious biases. She writes:
“This progress [society has made toward more respect and agency for women] could potentially be undone by the proliferation of female voice assistants, according to UNESCO. Its report claims that the default use of female-sounding voice assistants sends a signal to users that women are ‘obliging, docile and eager-to-please helpers, available at the touch of a button or with a blunt voice command like “hey” or “OK”.’ It’s also worrying that these voice assistants have ‘no power of agency beyond what the commander asks of it’ and respond to queries ‘regardless of [the user’s] tone or hostility’. These may be desirable traits in an AI voice assistant, but what if the way we talk to Alexa and Siri ends up influencing the way we talk to women in our everyday lives? One of UNESCO’s main criticisms of companies like Amazon, Google, Apple and Microsoft is that the docile nature of our voice assistants has the unintended effect of reinforcing ‘commonly held gender biases that women are subservient and tolerant of poor treatment’. This subservience is particularly worrying when these female-sounding voice assistants give ‘deflecting, lackluster or apologetic responses to verbal sexual harassment’.”
So what is a voice-assistant maker to do? Certainly, male voices could be used and are, in fact, selectable options for several models. Another idea is to give users a wide variety of voices to choose from—not just different genders, but different accents and ages, as well. Perhaps the most effective solution would be to use a gender-neutral voice; one dubbed “Q” has now been created, proving it is possible. (You can listen to Q through the article or on YouTube.)
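What a “wide variety of voices” could look like in practice is easy to sketch. The catalog below is hypothetical; the names, accents, and fields are invented for illustration and do not correspond to any vendor’s actual API. The point is that broadening the default set is a product decision, not a technical hurdle.

```python
# Hypothetical voice catalog; names, accents, and fields are illustrative only,
# not any vendor's real API.
from dataclasses import dataclass

@dataclass
class VoiceProfile:
    name: str
    gender: str      # "female", "male", or "neutral"
    accent: str
    age_range: str

CATALOG = [
    VoiceProfile("aria", "female", "US English", "adult"),
    VoiceProfile("tomas", "male", "Irish English", "adult"),
    VoiceProfile("q", "neutral", "Scandinavian English", "adult"),
    VoiceProfile("nana", "female", "Nigerian English", "older adult"),
]

def pick_voices(gender=None, accent=None):
    """Return every catalog voice matching the user's stated preferences."""
    return [v for v in CATALOG
            if (gender is None or v.gender == gender)
            and (accent is None or accent.lower() in v.accent.lower())]

print([v.name for v in pick_voices(gender="neutral")])  # ['q']
```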
Of course, this and other problems might have been avoided had there been more diversity on the teams behind the voices. Tambini notes that just seven percent of information- and communication-tech patents across G20 countries are generated by women. As more women move into STEM fields, will unintended gender bias shrink as a natural result?
Cynthia Murrell, October 17, 2019
Geospatial Innovation: SenSat
October 8, 2019
Last week, there was conference chatter about geo-spatial technology. The conference focused on LE and intel technology, and knowing where an entity is remains an important capability for certain software systems.
There was also talk in one of my sessions about “innovation drift.” This is my way of characterizing the movement of “good ideas” from the US to other countries. “Drift” is inevitable: Economic, political, and social pressure ensures that digital ideas move.
I noted this morning (Sunday, October 6, 2019) the article “Tencent Leads $10 Million Investment in SenSat to Create Real-Time Simulated Realities.” The write up reported:
SenSat, a geospatial technology startup that digitizes real-world places for infrastructure projects, has raised $10 million in a series A round of funding led by Chinese tech titan Tencent, with participation from Russian investment firm Sistema Venture Capital.
SenSat processes satellite and other imagery. Then the company’s software constructs representations of what’s on the ground. The write up pointed out:
[SenSat] said it translates the real world into a version that can be understood by machines and is thus suitable for training artificial intelligence (AI) systems.
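How one translates “the real world into a version that can be understood by machines” is not detailed, but a bare-bones version of the idea is slicing georeferenced imagery into labeled samples an AI system can train on. The sketch below is a generic illustration under that assumption, not SenSat’s pipeline; the grid, coordinates, and label field are invented.

```python
# Generic illustration of slicing a georeferenced scene into training tiles.
# Not SenSat's pipeline; the grid, coordinates, and labels are assumptions.

def tile_scene(bounds, rows, cols):
    """Split a lat/lon bounding box into a grid of machine-readable samples.

    bounds -- (min_lon, min_lat, max_lon, max_lat) of the captured area
    rows, cols -- grid dimensions
    """
    min_lon, min_lat, max_lon, max_lat = bounds
    width = (max_lon - min_lon) / cols
    height = (max_lat - min_lat) / rows
    tiles = []
    for r in range(rows):
        for c in range(cols):
            tiles.append({
                "tile_id": f"r{r}_c{c}",
                "bounds": (min_lon + c * width, min_lat + r * height,
                           min_lon + (c + 1) * width, min_lat + (r + 1) * height),
                "label": None,  # filled in later by annotation or a classifier
            })
    return tiles

# A small patch of London split into a 2x2 grid of samples.
print(len(tile_scene((-0.13, 51.50, -0.11, 51.52), rows=2, cols=2)))  # 4
```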
DarkCyber noted this statement in the write up:
SenSat constitutes part of another growing trend across the technology spectrum: the meshing of large swathes of disparate data to generate real and meaningful insights.
The technology developed by SenSat, founded in London in 2015, is interesting.
For DarkCyber, the most important information in the write up was the assertion that the company has obtained financial support from companies in China and Russia.
The idea, DarkCyber believes, is that the technological drift is not going to be left to chance. Reconstructions like the ones generated by SenSat, Cape Analytics, and others are likely to make the targeting options of nanodrones more interesting.
Drift is one thing; directed and managed technology drift is another.
Stephen E Arnold, October 8, 2019
Amazon AWS, DHS Tie Up: Meaningful or Really Meaningful?
October 7, 2019
In my two lectures at the TechnoSecurity & Digital Forensics conference in San Antonio last week, my observations about Amazon AWS and the US government generated puzzled faces. Let’s face it. Amazon means a shopping service for golf shirts and gym wear.
I would like to mention — very, very briefly because interest in Amazon’s non-shopping activities is low among some market sectors — “DHS to Deploy AWS-Based Biometrics System.” The deal is for Homeland Security:
to deploy a cloud-based system that will process millions of biometrics data and support the department’s efforts to modernize its facial recognition and related software.
The system will run on the AWS GovCloud platform. Amazon snagged this deal from the incumbent Northrop Grumman. AWS takes over the program in 2021. DarkCyber estimates that the contract will be north of $80 million, excluding ECOs and scope changes.
This is not a new biometrics system. It’s been up and running since the mid-1990s. What’s interesting is that the seller of golf shirts displaced one of the old-line vendors upon which the US government has traditionally relied.
DarkCyber finds this suggestive, which is a step toward really meaningful. Watch for “Dark Edge: Amazon Policeware.” It will be available in the next few months.
Stephen E Arnold, October 7, 2019
Yale Image Search: Innovation and Practicality
September 5, 2019
Yale University, according to Open Culture, has made available 170,000 photographs that document the Depression. Well, not just the Depression. The review conducted by DarkCyber revealed photos extending into the 1940s.
What sets this image collection apart is its interface. Unlike the near-impossible presentations of other august institutions, Yale has hit upon:
- A map based approach
- A “search by photographer”
- Useful basic photo information.
There’s even a functional, clear search component with old-fashioned fields. (Google, why not check it out? Not all good ideas originate near Stanford.)
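For anyone who has forgotten what old-fashioned fields buy you, here is a minimal sketch of a fielded query over photo metadata. The records and field names are invented for illustration; they are not Yale’s actual schema or data.

```python
# Illustrative fielded search over photo metadata; records and field names
# are invented, not Yale's schema or data.

PHOTOS = [
    {"photographer": "Jack Delano", "year": 1941, "state": "Connecticut"},
    {"photographer": "Dorothea Lange", "year": 1936, "state": "California"},
    {"photographer": "Jack Delano", "year": 1940, "state": "Georgia"},
]

def fielded_search(records, **criteria):
    """Return records whose fields match every supplied criterion exactly."""
    return [r for r in records
            if all(r.get(field) == value for field, value in criteria.items())]

# Field-by-field filtering, the old-fashioned way.
print(fielded_search(PHOTOS, photographer="Jack Delano", state="Georgia"))
# [{'photographer': 'Jack Delano', 'year': 1940, 'state': 'Georgia'}]
```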
Kudos to Yale. DarkCyber hopes that other online image archives learn from what Yale has rolled out. A little “me too” from Internet Archive, the American Memory project, and assorted museums would be welcomed here in Harrod’s Creek. (One river shore photo looked a great deal like Tibby the Dog’s favorite playground.)
Stephen E Arnold, September 5, 2019