The New Doing Gooder Google

January 17, 2020

Google’s cheerleading unit likes to remind us, amid the constant criticisms, that the company makes some positive contributions to society. For example, it seems their AI has gotten good at detecting cancer. We learn from AndroidCentral that “Google’s AI Is Better at Detecting Cancer than Doctors, Says Study.” About the same research, Ausdroid reports, “Google Publish their Impressive Breast Cancer Screening Using AI Results.” The capabilities are courtesy of technology developed by Google acquisition DeepMind. The study was performed by Google Health in conjunction with Cancer Research UK Imperial Centre, Northwestern University, and Royal Surrey County Hospital. Researchers used deep-learning tools to create AI detection models and applied them to almost 30,000 patients for whom results were already known. Muhammad Jarir Kanji of AndroidCentral writes:

“The system was trained using a large dataset of mammograms from women in the two countries. Even more telling than its better accuracy than doctors was the fact that it did so with far less information than the radiologists it was competing with, who also had access to the patients’ medical history and previous mammograms in their deliberations. … While the paper noted that ‘AI may be uniquely poised to help with’ the challenge of detecting breast cancer, Darzi said the system was not yet at a stage where it could replace a human reader.”

Emphasis on “yet.” Meanwhile, Ausdroid’s Scott Plowman emphasizes:

“The data sets were also NOT used to train the AI system and thus were totally unknown to the system.

Comparing the positive results from the AI to those patients who ended up having biopsy-confirmed breast cancer the AI demonstrated a ‘statistically significant’ improvement in ‘absolute specificity’ of 1.2% (UK – double read), and 5.7% (USA – single read) and an improvement in absolute sensitivity of 2.7% (UK) and 9.4% (USA). For reference, sensitivity is the ability to correctly identify lesions and specificity is how accurate it is at identifying those without lesions. This means that it has a reduction in both false positives and false negatives.”
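The quoted percentages are absolute differences in two standard rates. A minimal sketch with invented numbers (not the study’s data) shows how the arithmetic works:

```python
# Sensitivity and specificity from a toy confusion matrix.
# All counts here are illustrative only, NOT figures from the study.

def sensitivity(tp, fn):
    """Fraction of actual cancers the reader flags (true positive rate)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of healthy patients correctly cleared (true negative rate)."""
    return tn / (tn + fp)

# Hypothetical human reader: 85 of 100 cancers caught, 90 of 100 healthy cleared.
human_sens = sensitivity(tp=85, fn=15)   # 0.85
human_spec = specificity(tn=90, fp=10)   # 0.90

# An "absolute" improvement is a simple difference in these rates:
# +2.7% sensitivity means 0.85 -> 0.877 on the same case mix.
ai_sens = human_sens + 0.027
ai_spec = human_spec + 0.012

print(f"human: sens={human_sens:.3f} spec={human_spec:.3f}")
print(f"AI:    sens={ai_sens:.3f} spec={ai_spec:.3f}")
```

Higher sensitivity means fewer missed cancers (false negatives); higher specificity means fewer healthy patients sent for needless follow-up (false positives).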

If Google’s PR team spins more stories like this one, they just might be able to burnish the company’s reputation.

Cynthia Murrell, January 17, 2020

Enterprise Search and the AI Autumn

January 13, 2020

DarkCyber noted this BBC write up: “Researchers: Are We on the Cusp of an AI Winter?” Our interpretation of the Beeb story can be summarized this way:

“Yikes. Maybe this stuff doesn’t work very well?”

The Beeb explains, in the Queen’s English, via quotes from experts:

Gary Marcus, an AI researcher at New York University, said: “By the end of the decade there was a growing realization that current techniques can only carry us so far.”

He [Gary Marcus, an AI wizard at NYU] thinks the industry needs some “real innovation” to go further. “There is a general feeling of plateau,” said Verena Rieser, a professor in conversational AI at Edinburgh’s Herriot Watt University. One AI researcher who wishes to remain anonymous said we’re entering a period where we are especially skeptical about AGI.

Well, maybe.

But the enterprise search cheerleaders have not gotten the memo. The current crop of “tap your existing unstructured information” companies assert that artificial intelligence infuses their often decades-old systems with zip.

Venture outfits believe the story. The search for the next big thing is leading to making sense of unstructured text. After all, the world is awash in unstructured text. Companies have to solve this problem or red ink and extinction are just around the corner.

Net net: AI is a collection of tools, some useful, some not too useful. Enterprise search vendors are looking for a way to make sales to executives who don’t know or don’t care about past failures to index unstructured text on a company wide basis with a single system.

Stephen E Arnold, January 13, 2020

Journalists: Welcome Your New Colleague Artificial Intelligence

January 12, 2020

Life is going to become more interesting for journalists. I use the word in its broadest possible sense. That includes the DarkCyber team, gentle reader. Who needs humans when smart software is available?

“AI-Written Articles Are Copyright-Protected, Rules Chinese Court” explains that software can create content. Then that content is protected by copyright laws.

DarkCyber noted this statement:

According to state media outlet China News Service (CNS), a court in Shenzhen this month ruled in favor of Tencent, which claimed that work created by its Dreamwriter robot had been copied by a local financial news company. The Shenzhen Nanshan District People’s Court ruled that, in copying the Dreamwriter article, Shanghai Yingxun Technology Company had infringed Tencent’s copyright. Dreamwriter is an automated writing system created by Tencent and based on the company’s own algorithms.

Presumably software can ingest factoids, apply algorithms, and output new, fresh, and original information. No hanging out at the Consumer Electronics Show looking for solid information at a real technology event. Imagine the value of creating “real” news without having to pay humans. No hotel, airplane, taxi, or meal expenses.

Special content can be produced on an industrialized scale like Double Happiness ping pong balls.

Upsides for journalists include:

  • Opportunities to explore new careers in fast food, blogging, and elder care
  • Time to study with coal miners learning to code
  • Mental space to implement entrepreneurial ideas like elderberry products designed for those who suffer from certain allergic responses

Downsides, but only a few, of course, are:

  • No or reduced income
  • Loss of remaining self respect
  • Weight loss due to items one and two in this downside list
  • No need to have lunch with these content generators

One question: What’s a nation state able to do with content robots?

Stephen E Arnold, January 12, 2020

Why Black Boxes in Smart Software?

January 5, 2020

I read “Why Are We Using Black Box Models in AI When We Don’t Need To? A Lesson From An Explainable AI Competition.” The source is HDSR, which appears to be hooked up to MIT. Didn’t MIT find an alleged human trafficker an ideal source of contributions and worthy of a bit of “black boxing”? (See “Jeffrey Epstein’s money bought a cover-up at the MIT Media Lab.”) The answer seems obvious: Keep prying eyes out. Prevent people from recognizing how mundane flashy stuff actually is.

The write up from HDSR states:

The belief that accuracy must be sacrificed for interpretability is inaccurate. It has allowed companies to market and sell proprietary or complicated black box models for high-stakes decisions when very simple interpretable models exist for the same tasks.

The write up moves with less purpose than Jeffrey Epstein.

I noted this statement as well:

Let us insist that we do not use black box machine learning models for high-stakes decisions unless no interpretable model can be constructed that achieves the same level of accuracy. It is possible that an interpretable model can always be constructed—we just have not been trying. Perhaps if we did, we would never use black boxes for these high-stakes decisions at all.

I love the privileged tone of the passage.

Here’s my take:

Years ago I prepared, for a European country’s intelligence service, an analysis of the algorithms used in smart software. I thought this was an impossible job. But after making some calls, talking to wizards, and doing a bit of reading about what’s taught in computer science classes, my team and I unearthed several interesting factoids:

  1. The black box became the marketing hot button in the mid 1990s. The outfit adding oomph to mystery and secrecy was Autonomy. If you are not familiar with the company, think Bayesian maths. Keeping the neuro-linguistic programming mechanism under wraps differentiated Autonomy from its competition.
  2. Computer science and advanced mathematics programs around the world incorporated some useful and mostly reliable methods into their courses of study; for example, k-means. We identified another nine computational touchstones. Did we miss a few? Probably, but my team concluded that most of the fancy math outfits were using a handful of procedures and fiddling with thresholds, training data, and workflows to deliver their solutions. Why reveal to anyone that under the hood, most of the fancy stuff for NLP, text analytics, machine learning, and the other buzzwords which seem so 2020 was the same?
  3. My team also identified that each of the widely used, what we called “good enough,” methods could be manipulated. Change a threshold here, modify training data there, create a feedback loop and rules there—the system output results that appeared quite accurate, even useful. Putting the methods in a black box disguised for decades the simple methods used by Cambridge Analytica to skew outputs and probably elections. Differentiation comes not from the underlying methods; uniqueness is a result of the little bitty tweaks. Otherwise, most systems are just like the competitors’ systems.
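The “little bitty tweaks” point is easy to demonstrate. Here is a minimal sketch, with made-up relevance scores and thresholds, of how two vendors shipping an identical model can report different results simply by moving one number:

```python
# How a single threshold tweak changes a "smart" system's output.
# Scores and thresholds below are invented for illustration.

scores = [0.31, 0.48, 0.52, 0.55, 0.71, 0.90]  # model confidence per document

def flagged(scores, threshold):
    """Return the documents the system reports as 'relevant'."""
    return [s for s in scores if s >= threshold]

# Vendor A ships the model with a conservative threshold ...
print(len(flagged(scores, 0.70)))  # 2 documents flagged

# ... Vendor B ships the identical model with a looser one.
print(len(flagged(scores, 0.50)))  # 4 documents flagged
```

Same maths, same scores; the black box hides the fact that the only difference is one dial.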

Net net: Will transparent methods prevail? Unlikely. Making something clear reduces its perceived value. Just think how linking Jeffrey Epstein to MIT alters the outputs about good judgment.

Black boxes? Very useful indeed. Secrets? Selective revelation of facts? Millennial marketing? All useful.

Stephen E Arnold, January 5, 2020

Smart Software: Is Control Too Late Arriving?

January 4, 2020

I read “US Government Limits Exports of Artificial Intelligence Software.” The main idea is that smart software is important. The insight may be arriving after the train has left the station. The trusty Thomson Reuters’ report states:

It comes amid growing frustration from Republican and Democratic lawmakers over the slow roll-out of rules toughening up export controls, with Senate Minority Leader Chuck Schumer, a Democrat, urging the Commerce Department to speed up the process.

And the reason (presented via a quote from an expert) seems to be “rival powers like China.”

I took a quick spin through other items in my newsfeed this morning, Saturday, January 4, 2020. Here’s a selection of items. Remember. It’s Saturday and a day when many Silicon Valley types get ready for some football.

Not far from where I am writing this, more than 100 exchange students are working in teams to master a range of technologies, including smart software. One group is Chinese; another is German. Will the smart software encountered by these students be constrained in some way? What if the good stuff has been internalized, summarized, and emailed to fellow travelers in another country?

DarkCyber has a question, “Is it perhaps a little late in the game to change the rules?”

Stephen E Arnold, January 4, 2020

Emergent Neuron Network

December 31, 2019

I want to keep this item short. The information in “Brain-Like Functions Emerging in a Metallic Nanowire Network” may be off base. However, if true, the emergent behavior in a nanowire network is suggestive. We noted this statement:

The joint research team recently built a complex brain-like network by integrating numerous silver (Ag) nanowires coated with a polymer (PVP) insulating layer approximately 1 nanometer in thickness. A junction between two nanowires forms a variable resistive element (i.e., a synaptic element) that behaves like a neuronal synapse. This nanowire network, which contains a large number of intricately interacting synaptic elements, forms a “neuromorphic network”. When a voltage was applied to the neuromorphic network, it appeared to “struggle” to find optimal current pathways (i.e., the most electrically efficient pathways). The research team measured the processes of current pathway formation, retention and deactivation while electric current was flowing through the network and found that these processes always fluctuate as they progress, similar to the human brain’s memorization, learning, and forgetting processes. The observed temporal fluctuations also resemble the processes by which the brain becomes alert or returns to calm. Brain-like functions simulated by the neuromorphic network were found to occur as the huge number of synaptic elements in the network collectively work to optimize current transport, in other words, as a result of self-organized and emerging dynamic processes.
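The pathway formation and decay the researchers describe can be caricatured in a few lines. This is a toy model with invented constants, not the team’s actual Ag/PVP junction physics:

```python
# Toy model of one nanowire junction acting as a "synaptic element":
# conductance strengthens while current flows (pathway formation /
# learning) and relaxes when the stimulus stops (forgetting).
# The rate constants are invented for illustration.

def step(conductance, voltage_on, strengthen=0.2, decay=0.05):
    if voltage_on:
        # current flow builds up the conductive pathway toward 1.0
        return conductance + strengthen * (1.0 - conductance)
    # without stimulation the pathway gradually fades
    return conductance * (1.0 - decay)

g = 0.0
for _ in range(10):          # stimulate: a pathway forms
    g = step(g, voltage_on=True)
learned = g

for _ in range(10):          # rest: the pathway partially fades
    g = step(g, voltage_on=False)

print(f"after learning: {learned:.2f}, after forgetting: {g:.2f}")
```

The interesting claim in the write up is that many such junctions interacting at once produce collective, fluctuating behavior; a single-junction sketch like this only illustrates the learn/forget dynamic of one element.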

What can the emergent nanowire structure do? The write up states:

Using this network, the team was able to generate electrical characteristics similar to those associated with higher order brain functions unique to humans, such as memorization, learning, forgetting, becoming alert and returning to calm. The team then clarified the mechanisms that induced these electrical characteristics.

DarkCyber finds the emergent behavior interesting and suggestive. Worth monitoring because there may be one individual working at Google who will embrace a nanowire implant. A singular person indeed.

Stephen E Arnold, December 31, 2019

How to Be Numero Uno in AI Even Though the List Has a Math Error and Is Incomplete

December 24, 2019

DarkCyber spotted an interesting college ranking. Unlike some of the US college guides which rank institutions of higher learning, the league table published by Yicai Global takes a big data approach. (Please, keep in mind that US college rankings are not entirely objective. There are niceties like inclusions, researcher bias, and tradition which exert a tiny bit of magnetic pull on these scoreboards.)

According to “Six Chinese Colleges Place in CSRankings’ Top Ten AI List”, the US and other non-Chinese institutions are simply not competitive. Note the “six” in the headline.

How were these interesting findings determined? The researchers counted the number of journal articles published by faculty at the institutions in the sample. DarkCyber noted this statement about the method:

CSRankings is an authoritative global ranking of computer science higher educational institutions compiled by the AMiner team at Tsinghua. Its grading rests entirely on the number of scholarly articles faculty members publish.

The raw paper count, whether the papers are good, accurate, or science fiction, was the sole factor. There you go. Rock solid research.
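A grading method that “rests entirely on the number of scholarly articles” reduces to a one-line computation. A minimal sketch, with invented school names and counts:

```python
# A CSRankings-style league table reduced to its essence: count papers,
# sort descending. School names and paper counts are invented.

papers = {
    "University A": 412,
    "University B": 388,
    "University C": 451,
}

# Rank schools purely by publication count, highest first.
ranking = sorted(papers, key=papers.get, reverse=True)
print(ranking)  # ['University C', 'University A', 'University B']
```

Nothing in the computation inspects what the papers actually say, which is exactly the complaint.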

But let’s look at the rankings:

  1. Top AI institution in the world: Tsinghua University.
  2. Not listed. Maybe Carnegie Mellon University
  3. Peking University
  4. University of the Chinese Academy of Sciences
  5. Not listed. Maybe MIT?
  6. Nanyang Technological University
  7. Not listed. Maybe Stanford, the University of Washington, or UCal Berkeley?
  8. Shanghai Jiao Tong University
  9. Not listed. Maybe Cambridge University
  10. Not listed. DarkCyber would plug in École nationale supérieure des Mines de Saint-Étienne whose graduates generally stick together or maybe the University of Michigan located in the knowledge wonderland that is Ann Arbor?

Notice that there are five Chinese institutions in the Top 10 list. Yeah, I know the source document said “six.” But, hey, this is human intelligence, not artificial intelligence at work.

Who’s in the Top 10? Apparently Carnegie Mellon and MIT were in the list, but that’s fuzzy. The write up references another study which ranked “all area” schools. Does MIT teach literature or maybe ethics?

To sum up: Interesting source, wonky method, and incomplete listing. Plus, there is that weird six-but-just-five thing.

CSRankings’ Liao Shumin may want to fluff her or his calligraphy brush for the next go round; otherwise, an opportunity to do some holiday coal mining in Haerwusu may present itself. “Holiday greetings from Inner Mongolia” may be next year’s follow-up story.

Stephen E Arnold, December 24, 2019

An Artificial Intelligence Doubter: Remarkable

December 18, 2019

The often cynical and always irreverent Piekniewski’s blog has posted another perspective on the AI field in “AI Update, Late 2019 – Wizards of Oz.” AI and deep learning scholar Filip Piekniewski has made a habit of issuing take-downs of AI propaganda, and this time he takes aim at self-driving cars, OpenAI, Google’s DeepMind, and more. He writes:

“The whole field of AI resembles a giant collective of wizards of Oz. A lot of effort is put in to convincing gullible public that AI is magic, where in fact it is really just a bunch of smoke and mirrors. The wizards use certain magical language, avoiding carefully to say anything that would indicate their stuff is not magic. I bet many of these wizards in their narcissistic psyche do indeed believe wholeheartedly they have magical powers….”

Be that as it may, the post is a good read for anyone who wants to see past the hubris. For example, we learn several self-driving AI companies have been having financial and/or technical difficulties while news stories around that tech sound less and less rosy. Also, the much-hyped OpenAI text algorithm was hacked, and turns out to be much less threatening (or impressive) than originally proclaimed. Then there’s the robot that has trouble with its Rubik’s Cube project, the firing of Element AI’s CEO, and the disappointments of AI-based radiology. See the write-up for more. The post, however, concludes on a positive note:

“In practice, even though there is no magic, there is a lot of useful stuff one can do with that smoke and mirror not just deception and ripping off naive investors. I’m currently working on something that certainly uses what would be called AI, lots of visual perception and is in ways autonomous, but unlike some of these other moon shots seems quite doable (doable does not mean easy!) with today’s technology and moreover seems to provide a huge economical value. More on that soon, once Accel Robotics gets out of stealth mode and we publicly announce what we are up to. Stay tuned!”

Will people listen to a critic? Dial 1 800 YOU-WISH for an answer.

Cynthia Murrell, December 18, 2019

Xnor Touch Points

November 29, 2019

If you are not familiar with Xnor, navigate to the company’s Web site and read the cultural information. There is a reference to diversity, the company being a “high growth start up,” and something called “ethics touch points.”

I think one of the touch points is not honoring deals with licensees, but my information comes from a razzle dazzle publication. “Wyze’s AI-Powered Person Detection Feature Will Temporarily Disappear Next Year” asserts:

Wyze’s security cameras will temporarily lose their person detection feature in January 2020 after the AI startup it partnered with on the feature abruptly terminated their agreement. In a post on its forums, Wyze said that its agreement with Xnor included a clause allowing the startup to terminate the contract “at any moment without reason.”

There’s a reference to “mistakes,” but in the tradition of 21st-century information, there’s no definition of “mistake.”

I noted this passage: “Wyze’s low prices come with risks.”

Back up.

What’s an ethical touch point? Xnor states:

Xnor is actively engaging in conversations around the ethical implications of AI within our society through “ethics touch points” that exist within our normal working patterns. These touch points allow us to actively review specific AI use cases and make informed decisions without compromising the speed in which we operate as a start-up.

Maybe recognizing a face is not good? When is recognizing a face good? I struggle with the concept of ethics mostly because I am flooded with examples of ethical crossroads each day. Was a certain lawyer in Ukraine for himself or for others? Was the fuselage failure of a 777 a mistake or a downstream consequence of an ethical log jam? Was the disappearance of certain map identifiers a glitch or an example of situational ethical analysis?

With about $15 million in funding, the two-year-old company is an interesting one. What’s interesting is that Madrona Ventures may find itself with some thorns in its britches after pushing through the thicket of ethical touch points.

In 2017, one publication ran a story with this headline: “AI Startup Raises $2.6M To Bring AI To All Devices.” See the word “all.”

That should have come with a footnote maybe? Other possibilities are: [a] the technology does not work, [b] Wyze did not pay a bill, [c] Xnor has done what Aristotle did ineffectively.

Stephen E Arnold, November 29, 2019


November 24, 2019

Here is a useful roundup of information for those interested in machine learning. Forbes presents a thorough collection of observations, citing several different sources, about the impact of deep learning on the AI field in, “Amazon Saw 15-Fold Jump in Forecast Accuracy with Deep Learning and Other AI Stats.” As the title indicates, under the heading AI Business Impact, writer Gil Press reports:

“When Amazon switched from traditional machine learning techniques to deep learning in 2015, it saw a 15-fold increase in the accuracy of its forecasts, a leap that has enabled it to roll-out its one-day Prime delivery guarantee to more and more geographies; MasterCard has used AI to cut in half the number of times a customer has their credit card transaction erroneously declined, while at the same time reducing fraudulent transactions by about 40%; and using predictive analytics to spot cyber attacks and waves of fraudulent activity by organized crime groups helped Mastercard’s customers avoid some $7.5 billion worth of damage from cyberattacks in just the past 10 months [Fortune]”

A couple other examples under AI Business Impact include the Press Association’s RADAR news service, which generated 50,000 local news stories in three months with the help of but six human reporters; and Royal Dutch Shell’s Rotterdam refinery, where sensor data analysis helped the company avoid about $2 million in maintenance trips.

Press arranges the rest of his AI information into several more headings: AI Business Adoption, where we learn nearly all respondents to an IFS survey of business leaders have plans to implement AI functionality; AI Consumer Attitudes, where he tells us a pessimistic 10% of Mozilla-surveyed consumers think AI will make our lives worse; AI Research Results, under which is reported that AI can now interpret head CT scans as well as a highly trained radiologist; AI Venture Capital Investments; AI Market Forecasts; and AI Quotable Quotes. The article concludes with this noteworthy quotation:

“‘To make deliberate progress towards more intelligent and more human-like artificial systems… we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans’—Francois Chollet”

We recommend interested readers check out the article for its many more details.

Cynthia Murrell, November 24, 2019
