IBM Debate Contest: Human Judges Are Unintelligent

February 12, 2019

I was a high school debater. I was a college debater. I did extemp. I did an event called readings. I won many cheesy medals and trophies. I also have a number of recollections about judges who shafted me and my teammate, or just hapless, young me.

I learned:

Human judges mean human biases.

When I learned that the audience voted a human the victor over the Jeopardy-winning, subject-matter-expert-besting, recipe-writing IBM Watson, I knew the human penchant for distortion, prejudice, and foul play made an objective, scientific assessment impossible.


Humans may not be qualified to judge state of the art artificial intelligence from sophisticated organizations like IBM.

The rundown and the video of the 25 minute travesty are on display via Engadget, with a non-argumentative explanation in the write up “IBM AI Fails to Beat Human Debating Champion.” The real news report asserts:

The face-off was the latest event in IBM’s “grand challenge” series pitting humans against its intelligent machines. In 1996, its computer system beat chess grandmaster Garry Kasparov, though the Russian later accused the IBM team of cheating, something that the company denies to this day — he later retracted some of his allegations. Then, in 2011, its Watson supercomputer trounced two record-winning Jeopardy! contestants.

Yes, past victories.

Now what about the debate and human judges?

My thought is that the dust up should have been judged by a panel of digital devastators; specifically:

  • Google DeepMind. DeepMind trashed a human Go player and understands the problems humanoids have with being smart and proud
  • Amazon SageMaker. This is a system tuned with work for a certain three letter agency and, therefore, has a Deep Lens eye to spot the truth
  • Microsoft Brainwave (remember that?). This is a system which was the first hardware accelerated model to make Clippy the most intelligent “bot” on the planet. Clippy, come back.

Here’s how this judging should have worked.

  1. Each system “learns” what it takes to win a debate, including voice tone, rapport with the judges and audience, and physical gestures (presence)
  2. Each system processes the video, audio, and sentiment expressed when the people in attendance clap, whistle, laugh, sub vocalize “What a load of horse feathers,” etc.
  3. Each system generates a score with 0.000001 the low and 0.999999 the high
  4. The final tally would be calculated by Facebook FAIR (Facebook AI Research). The reason? Facebook is among the most trusted, socially responsible smart software companies.
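For the literal minded, the four step scheme above can be sketched in a few lines of Python. The judge names, the raw scores, and the simple averaging stand-in for the Facebook FAIR tally are all hypothetical, of course:

```python
# Toy implementation of the proposed machine-judging scheme.
# Judge names and scores are invented; the clamp range mirrors
# the 0.000001 low and 0.999999 high described above.

LOW, HIGH = 0.000001, 0.999999

def clamp(score: float) -> float:
    """Force a raw judge score into the allowed range."""
    return max(LOW, min(HIGH, score))

def final_tally(scores: dict[str, float]) -> float:
    """Average the clamped scores, a stand-in for the FAIR tally."""
    clamped = [clamp(s) for s in scores.values()]
    return sum(clamped) / len(clamped)

judges = {
    "DeepMind": 0.91,
    "SageMaker": 1.2,   # out of range; clamped to 0.999999
    "Brainwave": 0.47,
}

print(round(final_tally(judges), 6))
```

Whether the debaters would accept the verdict of this panel is another question entirely.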

The notion of a human judging a machine is what I call “deep stupid.” I am working on a short post about this important idea.

A machine judged by humans is neither just nor impartial. Not so Facebook FAIR.

An “also participated” award goes to IBM marketing.


IBM snagged an also participated medal. Well done.

Stephen E Arnold, February 13, 2019

Google News: Not So Much News As Control and Passive Aggressive Offense

February 12, 2019

I read “One Analyst’s Attempts to Demystify the Types of Traffic Google Sends Publishers.” The write up explains some of the clever ways Google manages its traffic and any related data linked to the traffic and content objects.

To put it another way, Google is continuing its effort to control content for its own purposes, not the publishers’, not the users’ or the advertisers’ goals.

The article makes it clear that Google is adapting in a passive aggressive manner to the shift from desktop boat anchor search to the more popular mobile device approach to search.

Users want information and no longer are troubled with thinking up a query, deciding what service to use, or questioning the provenance of the information.

The write up takes a bit of time to figure out. There are acronyms, Googley lingo, and data which may be unfamiliar to most readers. Spend a few minutes and AMP up your understanding of what Google is doing to help out — wait for it — itself.

Surprise, right?

The downstream implications of this approach are interesting. Perhaps an analyst will tackle the issues related to:

  • Time disconnects between event and inclusion of “news”
  • Ability to “route” and “filter” from within the Google walled garden
  • Implications of inserting “relevant” ads into what may be shaped streams so that ad inventory can be whittled down.

Interesting and just the tip of the Google content management iceberg.

Stephen E Arnold, February 12, 2019

Once a Phone Company, Always a Phone Company

February 12, 2019

American life is not complete without the media generating some form of fear. The newest craze scaring people over the airwaves is location data. PPC Land reports the story in “Carriers Are Only One Source Used By Data Aggregators, And This Source Is Now A Threat In The US.” One way mobile phone providers make a profit is by selling their customers’ information to advertisers and other third party agencies. Among the user information sold is a customer’s location.

It sounds banal at first—your location is sold, then ads for specific products and services near you pop up on your mobile device. Then the Big Brother syndrome and privacy fears kick in. The big stink is that bounty hunters can use customers’ data to track targets down to their specific location. Yes, that is scary, but how many people have bounty hunters stalking them?

Mobile phone carriers assure customers that their safety and privacy are top priority. Roadside assistance is referenced as one way specific location information is used. The FCC and Congress are abuzz about this threat, but how are phone providers really selling the information?

“Mobile Carriers use data aggregators to monetize location data. Verizon has contracts with LocationSmart and Zumigo. Verizon says the location data used by the location aggregator programs are limited to coarse (rather than precise) location information. Coarse location information is derived from the Verizon network and is significantly less accurate than a precise location. Precise information are usually from GPS, and is obtained with apps installed on mobile phones (like maps, or car services).”

But mobile phone providers are not the only way to track an individual’s location: cell IDs, Wi-Fi, beacons, landlines, carriers, SDKs in apps that use location data, GSIDs, and IP addresses are all used. Phones are handy devices.
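To make the coarse versus precise distinction concrete, here is a purely illustrative Python sketch of how a tracker with several of these sources available might pick the most accurate one. The accuracy radii are ballpark guesses, not carrier specifications:

```python
# Illustrative only: a typical accuracy ordering of location sources.
# Radii are rough guesses in meters, not measured carrier data.
SOURCE_ACCURACY_METERS = {
    "gps": 5,            # precise: from the handset itself
    "wifi": 40,          # known access point positions
    "beacon": 50,
    "cell_id": 1500,     # coarse: which tower the phone is using
    "ip_address": 25000, # very coarse: geolocated network block
}

def best_available(available: set[str]) -> str:
    """Pick the most accurate source a tracker can currently see."""
    usable = [s for s in available if s in SOURCE_ACCURACY_METERS]
    if not usable:
        raise ValueError("no known location source available")
    return min(usable, key=SOURCE_ACCURACY_METERS.get)

# A carrier selling "coarse" data is roughly the cell_id row;
# an app with GPS permission sits at the top of this table.
print(best_available({"cell_id", "wifi"}))  # wifi
```

The point of the sketch: a data aggregator that can stitch together more than one of these feeds gets closer to the precise end of the table than any single carrier feed suggests.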

Whitney Grace, February 12, 2019

DarkCyber for February 12, Now Available

February 12, 2019

DarkCyber for February 12, 2019, is now available at www.arnoldit.com/wordpress and on Vimeo at https://www.vimeo.com/316376994. The program is a production of Stephen E Arnold. It is the only weekly video news show focusing on the Dark Web and lesser known Internet services.

This week’s story line up includes: Italy’s facial recognition system under fire; Marriott trains 500,000 employees to spot human traffickers; a new Dark Web search system from Portugal; and the most popular digital currencies on the hidden Web.

The first story explores the political criticism of Italy’s facial recognition system for law enforcement. The database of reference images contains about one third of Italy’s population. The system integrates with other biometric systems, including the fingerprint recognition module operating at several of Italy’s busiest airports. Despite the criticism, government authorities have no other practical way to examine images for a match to a person of interest. DarkCyber believes image recognition is going to become more important and more widely used as its accuracy improves and costs come down.

The second story discusses Marriott Corporation’s two year training program. The hotel chain created information to help employees identify cues and signals of human trafficking. The instructional program also provides those attending with guidelines for taking appropriate action. Marriott has made the materials available to other groups. But bad actors have shifted their mode of operation to include short term rentals from Airbnb type vendors. Stephen E Arnold, producer of DarkCyber and author of “CyberOSINT: Next Generation Information Access,” said: “The anonymity of these types of temporary housing makes it easier for human traffickers to avoid detection. Prepaid credit cards, burner phones, and moving victims from property to property create an additional set of challenges for law enforcement.”

The third story provides information about a new hidden Web indexing service. The vendor is Dogdaedis. The system uses “artificial intelligence” to index automatically the hidden services its crawler identifies. A number of companies are indexing and analyzing the Dark Web. Furthermore, the number of Dark Web and hidden Web sites is decreasing due to increased pressure from law enforcement. Bad actors have adapted, shifting from traditional single point hidden Web sites to encrypted chat services.

The final story extracts from a Recorded Future report the most popular digital currencies on the Dark Web. Bitcoin is losing ground to Litecoin and Monero.

A new blog Dark Cyber Annex is now available at www.arnoldit.com/wordpress. Cyber crime, Dark Web, and company profiles are now appearing on a daily basis.

Kenny Toth, February 12, 2019

Filtering for Fuzziness the YouTube Way

February 11, 2019

Software examines an item of content. The smart part of the software says, “This is a bad item.” Presumably the smart software has rules or has created rules to follow. So far, despite the artificial intelligence hyperbole, smart software is competent in certain narrow applications. But figuring out whether an item created by a human, intentionally or unintentionally, contains information which another person finds objectionable is a tough job. Even humans struggle.

For example, a video interview — should one exist — of Tim O’Reilly explaining “The Fundamental Problem with Silicon Valley’s Favorite Strategy” could be considered offensive to some readers and possibly to practitioners of “blitz growth.” When money is at stake along with its sidekick power, Mr. O’Reilly could be viewed as crossing “the line.”

How would YouTube handle this type of questionable content? Would the video be unaffected? Would it be demoted because it crossed “the line” because unfettered capitalism is the go to business model for many companies, including YouTube’s owner? If flagged, what happens to the video?

The Hexus article “YouTube Video Promotion AI Change Is a ‘Historic Victory’” may provide some insight into my hypothetical example which does not involve hate speech, controlled substances, trafficking, and other allegedly “easy to resolve” edge cases.

I noted this statement:

The key change being implemented by YouTube this year is in the way it “can reduce the spread of content that comes close to – but doesn’t quite cross the line of – violating our Community Guidelines”. Content that “could misinform users in harmful ways,” will find its influence reduced. Videos “promoting a phony miracle cure for a serious illness, claiming the earth is flat, or making blatantly false claims about historic events like 9/11,” will be affected by the tweaked recommendation AI, we are told. YouTube is clear that it won’t be deleting these videos, as long as they comply with Community Guidelines. Furthermore, such borderline videos will still be featured for users that have the source channels in their subscriptions.

I think this means, “Link buried deep in the results list.” Note that fewer and fewer users of search systems dig into the subsequent pages of possibly relevant hits. That’s why search engine optimization people are in business. Relevance and objectivity are of zero importance. Appearing at the top of a results list, preferably as the first result, is the goal of some SEO experts. Appearing deep in a results list generates almost zero traffic.
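YouTube’s actual recommendation engine is proprietary, so the mechanics here are guesswork, but the “buried deep in the results list” effect can be mimicked with a toy re-ranker that multiplies a borderline item’s relevance score by a penalty. Every name and number below is hypothetical:

```python
# Hypothetical sketch: demote "borderline" items without deleting them.

def rank(videos: list[dict], borderline_penalty: float = 0.2) -> list[dict]:
    """Sort by relevance, shrinking borderline items' scores
    so they sink far down the list while staying available."""
    def effective(v: dict) -> float:
        score = v["relevance"]
        if v.get("borderline"):
            score *= borderline_penalty
        return score
    return sorted(videos, key=effective, reverse=True)

videos = [
    {"title": "Flat earth proof", "relevance": 0.95, "borderline": True},
    {"title": "Geology lecture", "relevance": 0.60},
    {"title": "Cat video", "relevance": 0.40},
]

# The borderline video's effective score becomes 0.95 * 0.2 = 0.19,
# so it drops below both ordinary results.
print([v["title"] for v in rank(videos)])
```

Nothing is removed; the demoted item simply lands where almost nobody scrolls, which is the whole point of the SEO observation above.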

The Hexus write up continued:

At the weekend former Google engineer Guillaume Chaslot admitted that he helped to build the current AI used to promote recommended videos on YouTube. In a thread of Tweets, Chaslot described the impending changes as a “historic victory”. His opinion comes from seeing and hearing of people falling down the “rabbit hole of YouTube conspiracy theories, with flat earth, aliens & co”.

So what? The write up points out:

Unfortunately there is an asymmetry with BS.

When monopolies decide, what happens?

Certainly this is a question which warrants some effort on the part of graduate students to answer. The companies involved may not be the optimal source of accurate information.

Stephen E Arnold, February 11, 2019

Observation from Orbit: Gaining Traction

February 11, 2019

The day has arrived. ZeroHedge tells us the “‘Largest Fleet of Satellites in Human History’ Set to Revolutionize Space-Based Spying.” Writer Tyler Durden describes Planet Labs, an aerospace firm out of San Francisco that has launched almost 300 satellites for the express purpose of imaging specified sections of the Earth, on demand. We observe that Amazon is also into satellites now, and Google Alphabet and its Loon unit are making “we love satellites too” noises.

Though most of Planet Labs’ customers are currently agricultural companies seeking snapshots of their immense fields, the NGA is a noteworthy exception. We’re told:

“Their most important customer is the National Geospatial Intelligence Agency (NGA) – the government body responsible for analyzing satellite photos from its 2.7 million square-foot headquarters south of Washington D.C. staffed with 14,500 employees. ‘I’m quite excited about capabilities such as what Planet’s putting up in space,’ says NGA director Robert Cardillo. …

We also noted:

“NGA’s capabilities are of course top secret, however they have been collecting the bulk of their images from three multi-billion dollar satellites the size of a city bus, according to satellite tracker Ted Molczan, who uses giant binoculars.”

Contrast that description to that of Planet’s satellites, which are the size of a loaf of bread. If you’re curious, navigate to the article for that image. (You can also see a photo of those giant binoculars in action.) Is the world ready for this level of satellite surveillance? In my humble opinion, anyone surprised by this development has not been paying close attention. Founded in 2010, Planet Labs was organized by former NASA scientists.

Amazon’s edged toward satellite management services. Perhaps there is a connection?

Cynthia Murrell, February 11, 2019

Amazonia for February 11, 2019

February 11, 2019

Amazon has been bulldozing away and pushing some jungle undergrowth into the parking lot of major media outlets. Let’s take a quick look at what’s shaking at the electronic bookstore on steroids:

In a New York We May Be Gone

I learned in “Facing Opposition, Amazon Reconsiders NY Headquarters Site, Two Officials Say” that the deal may be wobbling. The source? The Washington Post, or what some of the DarkCyber researchers call the “Bezos Bugle.” The push back has ranged from allegations of subsidizing a successful company to suggestions that taxpayer money could directly benefit shareholders of Amazon. The article reports:

In the past two weeks, the state Senate nominated an outspoken Amazon critic to a state board where he could potentially veto the deal, and City Council members for the second time aggressively challenged company executives at a hearing where activists booed and unfurled anti-Amazon banners. Key officials, including freshman U.S. Rep. Alexandria Ocasio-Cortez (D-N.Y.), whose district borders the proposed Amazon site, have railed against the project.

Worth monitoring because if the JEDI deal goes to Microsoft, would Amazon bail out of Virginia?

Indiscreet Pictures and Allegations of Blackmail

Amazon once was a relatively low profile outfit. Then the rocket ships, the Bezos divorce, the JEDI dust up, and now a spat. One headline captures the publicity moment: “Jeff Bezos Says Enquirer Threatened to Publish Revealing Pics.” I don’t want to unzip this allegation. You can expose yourself to the “facts” by running queries on objective search systems like Bing, Google, and Yandex. Alternatively one can turn to the Daily Mail and its full frontal report on this allegedly accurate news story.

Movie Madness

I don’t know anything about the Hollywood movie game. I noted “Woody Allen Sues Amazon for $68 Million for Refusing to Release His Films.” In the context of allegations of blackmail, this adds another facet to the diamond reputation of the humble online bookstore. According to the write up:

Allen blames the studio’s unwillingness to release his films on “a 25-year old, baseless allegation against Mr. Allen” — specifically, Allen’s adopted stepdaughter, Dylan Farrow, telling the world that he sexually assaulted her when she was a child. The suit claims that Farrow’s comments shouldn’t affect the Amazon deal, since the “allegation was already well known to Amazon (and the public) before Amazon entered into four separate deals with Mr. Allen—and, in any event it does not provide a basis for Amazon to terminate the contract.”

Amazon is taking a moral stand it seems. Interesting in the context of the blackmail allegations. Another PR coup?

Accounting Methods or Fraud?

The Los Angeles Times reported that some Amazon delivery drivers’ tips were not paid to the drivers as an add on to their pay. The tips were calculated as part of their regular wage. “Where Does a Tip to an Amazon Driver Go? In Some Cases, Toward the Driver’s Base Pay” reported:

Amazon guarantees third-party drivers for its Flex program a minimum of $18 to $25 per hour, but the entirety of that payment doesn’t always come from the company. If Amazon’s contribution doesn’t reach the guaranteed wage, the e-commerce giant makes up the difference with tips from customers, according to documentation shared by five drivers.

Is this an accounting method related in some way to Enron’s special purpose entities? But in the context of blackmail and a legal battle with Woody Allen, I am not sure how to interpret the LA Times’ report if it is accurate.
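Assuming the LA Times report is accurate, the arithmetic works like this. The function is a guess at the mechanism described, not Amazon’s payroll code; the dollar figures echo the $18 to $25 hourly guarantee in the quote:

```python
# Sketch of the reported Flex payout mechanism (not Amazon's code).
# If Amazon's own contribution falls short of the hourly guarantee,
# customer tips fill the gap instead of arriving on top of it.

def driver_payout(guarantee: float, amazon_base: float, tips: float) -> dict:
    shortfall = max(0.0, guarantee - amazon_base)
    tips_absorbed = min(tips, shortfall)
    total = amazon_base + tips  # the driver still receives the tip dollars
    return {
        "total": total,
        "tips_absorbed_into_guarantee": tips_absorbed,
        "extra_above_guarantee": max(0.0, total - guarantee),
    }

# Guarantee $18/hr, Amazon pays $13, customer tips $5:
# the driver gets $18 total; the whole tip vanished into the base.
print(driver_payout(18.0, 13.0, 5.0))
```

The customer believes the driver received the guarantee plus a $5 tip; the driver received the guarantee, full stop. That gap is the story.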

Amazon and Facial Recognition

Amazon has thrown some support behind the idea that facial recognition systems may require a bit of regulation. I learned about this interest in “Amazon Weighs In on Potential Legislative Framework for Facial Recognition.” The idea is that responsible use of facial recognition technology may be a good idea. The write up stated:

…Researchers at the Massachusetts Institute of Technology published a study that found Rekognition, Amazon Web Services’ (AWS) object detection API, failed to reliably determine the sex of female and darker-skinned faces in specific scenarios.

Image recognition systems do vary in accuracy. The fancy lingo is outside the scope of this week’s write up. Examples of errors are interesting, particularly when systems confuse humans with animals or identify a person as a malefactor when that individual is of sterling character. Eighty percent accuracy is a pretty good score in my experience. Stated another way, a system making 20 mistakes per 100 outputs is often close enough for horseshoes. A misidentified individual may have another point of view.
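The horseshoes arithmetic scales unpleasantly. A two line calculation makes the point:

```python
# 80 percent accuracy sounds respectable until multiplied by volume.
accuracy = 0.80

for faces_scanned in (100, 10_000, 1_000_000):
    errors = round(faces_scanned * (1 - accuracy))
    print(f"{faces_scanned:>9} faces -> roughly {errors} misidentifications")
```

At city scale, that error rate is not a rounding problem; it is a queue of misidentified people.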

Alexa Gets a New Skill

The Digital Reader reported that you can now have Alexa play a choose your own adventure audiobook. Amazon wants to make sure it has a grip on the emerging trend of “interactive fiction.” Perfect for the mobile phone, zip zip zip reader.

Baby Activity API

The engineers at Amazon have chopped another trail through the digital jungle. Programmable Web reported that Amazon’s new baby activity skill API lets parents track infant data hands free. Parents should be able to track their baby’s data. Are third parties tracking the infant as well? The write up states:

The new API includes several pre-built interfaces for tracking specific data points, including Weight, Sleep, DiaperChange, and InfantFeeding. Amazon plans to continue adding to these interfaces in hopes of streamlining integration.

If a third party were to have access to these data, combining the baby data with other timeline data might yield some useful items of information at some point in the future. Behavioral cues, purchases, social interactions, and videos watched could provide useful insights to an analyst.
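The interface names (Weight, Sleep, DiaperChange, InfantFeeding) come from the Programmable Web report; everything else in this sketch of a timestamp join, including the field names and sample records, is invented for illustration of the analyst scenario above:

```python
# Illustrative only: cross-referencing baby-activity records with
# other timeline events. Field names are invented, not Amazon's schema.
from datetime import datetime, timedelta

baby_events = [
    {"interface": "InfantFeeding", "ts": datetime(2019, 2, 11, 3, 0)},
    {"interface": "DiaperChange", "ts": datetime(2019, 2, 11, 6, 30)},
]

purchases = [
    {"item": "formula", "ts": datetime(2019, 2, 11, 3, 20)},
    {"item": "paper towels", "ts": datetime(2019, 2, 12, 9, 0)},
]

def nearby(events, others, window=timedelta(hours=1)):
    """Pair baby events with other timeline events that happened
    within `window`, the kind of join an analyst might run."""
    return [
        (e["interface"], o["item"])
        for e in events
        for o in others
        if abs(e["ts"] - o["ts"]) <= window
    ]

print(nearby(baby_events, purchases))
```

Even this toy join links a 3 a.m. feeding to a formula purchase 20 minutes later. Multiply by months of data and several feeds, and the “useful items of information” arrive on schedule.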

More Live Streaming and a Possible Checkmate for QVC

“Amazon Live Is the Retailer’s Latest Effort to Take on QVC with Live Streamed Video” states:

Amazon is taking on QVC with the launch of Amazon Live, which features live-streamed video shows from Amazon talent as well as those from brands that broadcast their own live streams through a new app, Amazon Live Creator.

Will the Twitch model work for remarkable products like super exclusive Tanzanite? QVC may try to compete. DarkCyber believes that effort would tax the shopping channel in several ways. Some cloud pros might suggest putting QVC’s offerings on a cloud service. Will AWS make the short list?

Amazon Space

The Atlantic reported that the electronic bookstore “has 288M sq. ft. of warehouses, offices, retail stores, and data centers.”

Quite an Amazon-scale week.

Stephen E Arnold, February 11, 2019

Flagships Lost in a Sea of Money, Fame, and Power

February 10, 2019

I read “The Ethical Dilemma Facing Silicon Valley’s Next Generation.” The headline sounds like an undergraduate essay created by a Red Bull crazed philosophy major at Duquesne University. (I should know. I attended Duquesne when working on an advanced degree in — wait for it — medieval religious literature.)

But this essay is not going to be read by a slightly off kilter professor with a passion for Søren Aabye Kierkegaard and Augustine’s On Christian Teaching.

No. This essay is aimed at those interested in technology and the intersection of Silicon Valley, Stanford University, and the scorched earth approach of “move fast and break things” wizards.

The write up includes this observation:

Stanford is known as “The Farm” because the verdant 8,000-acre campus was once home to founder Leland Stanford’s horses, but today tech firms and venture capitalists treat the 16,000-person student body like their own minor league ball club.

And the university is now flicking the switch on the archives of the university library which contains documents like Pausanias’s description of the temple of Apollo at Delphi. Stanford’s leaders, professors, and students may have forgotten the injunction (which I have anglicized):

Know thyself or gnóthi sautón

But universities, public and private, want to be just like Stanford.

The Ringer reports:

Professors are revamping courses to address the ethical challenges tech companies are grappling with right now. And university president Marc Tessier-Lavigne has made educating students on the societal impacts of technology a tentpole of his long-term strategic plan.

I found this item of information interesting:

In 2013, Stanford began directly investing in students’ companies, much like a venture capital firm.

One would think that universities provided education. The Ringer makes this somewhat surprising statement:

Stanford and computer science programs across the country may not be adequately equipped to wade through the ethical minefield that is expanding along with tech’s influence.

Who is equipped? Consultants from McKinsey, Bain, or Booz Allen? Politicians? Perhaps universities should seek counsel from the top three officials in Virginia to add an East Coast flair to the ethical challenge? What about individual thinkers? Jeffrey Skilling (Wharton and Enron) and Martin Shkreli (the pharma bro)? Soon El Chapo (a bro-chacho) will have time on his hands once a verdict is rendered in his trial.

Courses about ethics are sprouting like flowers after April showers in a temperate zone.

I underlined in yellow this passage which is almost bittersweet:

The [ethics] course’s popularity is a sign that the gravity of the moment is weighing on many Stanford minds. Antigone Xenopoulos, a junior majoring in symbolic systems (a techie-fuzzie hybrid major that incorporates computer science, linguistics, and philosophy), is a research assistant for CS181. She wasn’t the only student who quoted a line from Spider-Man to me—with great power comes great responsibility—when referencing the current landscape. “If they’re going to give students the tools to have such immense influence and capabilities, [Stanford] should also guide those students in developing ethical compasses,” she says.

Yep, Spider-Man. Spider-Man.

Net net:

  1. Stanford is not the “problem”; Stanford is one member of a class of entities which cultivate and harvest the problem
  2. Silicon Valley has and continues to function as a high school science club without a teacher supervisor
  3. Technology, unlike a cat, cannot be put back in a bag.

Years ago I did some work for an investment bank. One of the people in a meeting was filled with the George Gilder observation about convergence. I asked this question of the group of 12 high powered people:

Do you think technology could be like gerbils or rabbits?

The question evoked silence.

The situation today is that the interaction of technology has created ecologies in which new creatures are thriving. The result is that certain facets of a pre-technology world have been crushed, killed, or left to starve by the new digital animals and their inventions.

The Ringer’s article reminded me that “ethics” and the ability to understand oneself are in danger of extinction.

As one of the investment bankers for whom I did some work was fond of saying, “Interesting. No?”

Stephen E Arnold, February 10, 2019

Allegations Aloft on the Karma Feathered Wing of a Raven: Reuters and the UAE

February 9, 2019

Activists, diplomats, and foreign leaders were allegedly among the targets of a surveillance operation in the United Arab Emirates, according to Reuters’ article, “Exclusive: UAE Used Cyber Super-Weapon to Spy on iPhones of Foes.” Dubbed Project Raven, the operation broke into targets’ iPhones using a hack known as “Karma,” which may or may not still be operational after Apple updated the iPhone’s software in 2017. Indeed, the breaches were made possible by a flaw in Apple’s iMessage app in the first place: hackers found they could establish their connections by implanting malware through iMessage, even if the user never used the app.

Some may be surprised to learn who was involved in Project Raven; reporters Joel Schectman and Christopher Bing write:

“Raven was largely staffed by U.S. intelligence community veterans, who were paid through an Emirati cyber security firm named DarkMatter, according to documents reviewed by Reuters. … The UAE government purchased Karma from a vendor outside the country, the operatives said. Reuters could not determine the tool’s creator.”

I also noted this statement:

“The operatives knew how to use Karma, feeding it new targets daily, in a system requiring almost no input after an operative set its target. But the users did not fully understand the technical details of how the tool managed to exploit Apple vulnerabilities. People familiar with the art of cyber espionage said this isn’t unusual in a major signals intelligence agency, where operators are kept in the dark about most of what the engineers know of a weapon’s inner workings. …”

Did the method work? I learned:

“The Raven team successfully hacked into the accounts of hundreds of prominent Middle East political figures and activists across the region and, in some cases, Europe, according to former Raven operatives and program documents.”

The article names a few of Raven’s known victims, including the noteworthy human rights activist Tawakkol Karman, also known as the Iron Woman of Yemen. Having been a prominent leader of her country’s Arab Spring protests in 2011, Karman is used to hacking notices popping up on her phone. However, even she was bewildered that Americans, famously champions of human rights, were involved.

Cynthia Murrell, February 09, 2019

Free Web Search and Objective Results

February 8, 2019

I spotted a story from the Moscow Times called “Google Began Censoring Search Results in Russia, Reports Say.” I read:

Google began complying with Russian requirements and has deleted around 70 percent of the websites blacklisted by authorities, an unnamed Google employee told Russia’s Vedomosti business daily Wednesday. An unnamed Roskomnadzor source reportedly confirmed the information to the paper. On Thursday, a Roskomnadzor spokesman told the state-run RIA Novosti news agency that the regulator had established a “constructive dialogue” with Google over filtering content.

Let’s assume the report is accurate.

Is this the model for filtering content in online indexes which Google developed to comply with different countries’ laws and regulations?

If the Russian regulatory authority is “fully satisfied”, the Google system appears to be working.

Several questions crossed my mind; to wit:

  1. Has Google used this system to filter content in other countries; for example, the US, Brazil, or Iran?
  2. Does the system work with acceptable reliability? Some potentially objectionable content can still be located via a Google image query, to cite one example.
  3. What is the economic payoff for Google of finding a solution to its pre-filtering disputes with Russia?

Interesting, particularly when one asks the question, “Am I getting accurate information when running a query on Google, regardless of the country in which the query appears to have been launched?”

If search results are shaped, what does one do to locate potentially useful information? One answer, I suppose, is to pay for commercial online access. Another may be to assume that what’s online IS the correct data set. One could ask those in one’s social network, but that too may be filtered.

But free services are free. Free services may have other characteristics as well. What does “free” mean? Hmmm.

Stephen E Arnold, February 8, 2019
