
Amusing Mistake Illustrates Machine Translation Limits

May 12, 2016

Machine translation is not quite perfect yet, but we’ve been assured that it will be someday. That’s the upshot of Business Insider’s piece, “This Microsoft Exec’s Hilarious Presentation Fail Shows Why Computer Translation is so Difficult.” Writer Matt Weinberger relates an anecdote shared by Microsoft research head Peter Lee. The misstep occurred during a 2015 presentation, for which Lee set up Skype Translator to render his words into Mandarin over the speakers as he spoke. Weinberger writes:

“Part of Lee’s speech involved a personal story of growing up in a ‘snowy town’ in upper Michigan. He noticed that most of the crowd was enraptured — except for a few native Chinese speakers in the crowd who couldn’t stop giggling. After the presentation, Lee says he asked one of those Chinese speakers the reason for the laughter. It turns out that ‘snowy town’ translates into ‘Snow White’s Town.’ Which seems innocent enough, except that it turns out that ‘Snow White’s town’ is actually Chinese slang for ‘a town where a prostitute lives,’ Lee says. Whoops.

“Lee says it wasn’t caught in the profanity filters because there weren’t actually any bad words in the phrase. But it’s the kind of regional flavor where a direct translation of the words can’t bring across the meaning.”

Whoops indeed. The article notes that another problem with Skype Translator is its penchant for completely disregarding non-word utterances, like “um” and “ahh,” that often carry necessary meaning. We’re reminded, though, that these and other problems are expected to be ironed out within the next few years, according to Microsoft Research chief scientist Xuedong Huang. I wonder how many more amusing anecdotes will arise in the meantime.


Cynthia Murrell, May 12, 2016

Sponsored by, publisher of the CyberOSINT monograph


Artificial Intelligence Spreading to More Industries

May 10, 2016

According to MIT Technology Review, it has finally happened. No longer is artificial intelligence the purview of data wonks alone: “AI Hits the Mainstream,” the publication declares. Targeted AI software is now being created for fields from insurance to manufacturing to health care. Reporter Nanette Byrnes is curious to see how commercialization will affect artificial intelligence, as well as how this technology will change different industries.

What about the current state of the AI field? Byrnes writes:

“Today the industry selling AI software and services remains a small one. Dave Schubmehl, research director at IDC, calculates that sales for all companies selling cognitive software platforms —excluding companies like Google and Facebook, which do research for their own use—added up to $1 billion last year. He predicts that by 2020 that number will exceed $10 billion. Other than a few large players like IBM and Palantir Technologies, AI remains a market of startups: 2,600 companies, by Bloomberg’s count. That’s because despite rapid progress in the technologies collectively known as artificial intelligence—pattern recognition, natural language processing, image recognition, and hypothesis generation, among others—there still remains a long way to go.”

The article examines ways some companies are already using artificial intelligence. For example, insurance and financial firm USAA is investigating its use to prevent identity theft, while GE is now using it to detect damage to its airplane engine blades. Byrnes also points to MyFitnessPal, Under Armour’s extremely successful diet and exercise tracking app. Through a deal with IBM, Under Armour is blending data from that app with outside research to better target potential consumers.

The article wraps up by reassuring us that, despite science fiction assertions to the contrary, machine learning will always require human guidance. If you doubt, consider recent events—Google’s self-driving car’s errant lane change and Microsoft’s racist chatbot. It is clear the kids still need us, at least for now.


Cynthia Murrell, May 10, 2016

Sponsored by, publisher of the CyberOSINT monograph

Microsoft Says That AI Is Stupid

May 7, 2016

I know there is a difference among:

  • What senior managers believe about their minions’ innovations
  • What marketers say about the technology the engineer wizards are crafting in the innovation microwave
  • What “real” journalists angling for a job with some tailwind write
  • What the reality of an innovation is, right now.

But these differences are essentially irrelevant. We are in the era of IBM Watson, of Facebook and Google investing in smart software, and of big universities doing cartwheels for research which raised nary an eyebrow 18 months ago.

Navigate to “Microsoft Research Chief: AI Is Still Too Stupid to Wipe Us Out (and Will Be for Decades).” I am okay with the notion that smart software is becoming more important. From my vantage point in rural Kentucky, I am aware of the marketing money available to those who would shill smart software. I know about the cash lust of venture outfits who are in search of the next big thing. I am aware that smart software works reasonably well when applied to advertising and Amazon-style recommendations.

I find the use of the word “stupid” interesting. I noted this passage which quotes a Microsoft guru in the artificial intelligence stuff:

“Yes, deep learning has achieved human-level performance in object recognition but what does that mean? It means the machine makes about the same number of errors as the human. The reason the machine is as good as the human at this is because it can distinguish between 157 varieties of mushroom, whereas it makes all kinds of stupid mistakes that humans wouldn’t make.”

Why comment? Microsoft Tay made evident some flaws. Perhaps IBM Watson avoids public demonstrations like Tay to avoid making weaknesses vivid? Facebook and Google are angling to reduce costs and generate revenue. AI is one path to explore. But “stupid”? Interesting word.

Stephen E Arnold, May 7, 2016

Why the UK Shouldn’t Be Concerned About the Gobbling up of Their Tech Industry

May 5, 2016

The article on Motherboard titled “Why the US Is Buying Up So Many UK Artificial Intelligence Companies” surveys the rising tech community in the UK. There is some concern about the recent trend of UK AI and machine learning startups being acquired by US giants (HP and Autonomy, Google and DeepMind, Microsoft and SwiftKey, and Apple and VocalIQ). The acquisitions make sense in terms of the investments and platforms needed to support cutting-edge AI, which are not yet available in the UK. The article explains,

“And as AI increasingly becomes core to many tech products, experts become a limited resource. All of the big US companies are working on the subject and then looking at opportunities everywhere…

Many of the snapped-up UK firms are the fruits of research at Britain’s top universities—add to the list above Evi Technologies (Amazon), Dark Blue Labs (Google), Vision Factory (also Google) that are either directly spun out of Cambridge, Oxford, or University College London…”

The results may be more positive for the UK tech industry than they appear at first glance. Some companies, like DeepMind, insist on staying in the UK, and other industry players will return to the UK to launch their own ventures after spending years absorbing and contributing to the most current technologies and advancements.


Chelsea Kerwin, May 5, 2016

Sponsored by, publisher of the CyberOSINT monograph


Software That Contains Human Reasoning

April 20, 2016

Computer software keeps advancing faster than we can purchase the latest product. Software is now capable of holding simple conversations, accurately translating languages, navigating by GPS, driving cars, and more. The one thing that computer developers cannot program is human thought and reason. The New York Times wrote “Taking Baby Steps Toward Software That Reasons Like Humans” about this goal just out of reach.

The article focuses on Richard Socher and his company MetaMind, a deep learning startup working on pattern recognition software. He, along with others focused on artificial intelligence, is slowly inching toward replicating human thought on computers. The progress is slow but steady, according to a MetaMind paper describing how machines can now answer questions about both digital images and textual documents.

“While even machine vision is not yet a solved problem, steady, if incremental, progress continues to be made by start-ups like Mr. Socher’s; giant technology companies such as Facebook, Microsoft and Google; and dozens of research groups.  In their recent paper, the MetaMind researchers argue that the company’s approach, known as a dynamic memory network, holds out the possibility of simultaneously processing inputs including sound, sight and text.”

The software that allows computers to answer questions about digital images and text is sophisticated, but the data needed to come close to human capabilities is scarce at best. We are coming closer to understanding the human brain’s complexities, but artificial intelligence is not near Asimov levels yet.



Whitney Grace, April 20, 2016
Sponsored by, publisher of the CyberOSINT monograph

Bing: Search Engine for Developers. Git Moving

April 18, 2016

I read “Bing Just Became the Best Search Engine for Developers.” I was surprised that the word “operators” was left out of the headline. DevOps has become a rallying cry for many. According to the write up:

Almost always as developers we end up on Stack Overflow or Mozilla Developer Network, but now Microsoft’s Bing has given us something even better: executable code directly in search results.

I noted this statement:

Thanks to a collaboration with HackerRank, if you search for something like string concat C#, you’ll get an interactive code editor with a result that can be run directly from that page to see how it works.
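The HackerRank result described is for C#; as a rough analogue, here is a minimal sketch in Python of the same string concatenation exercise such an interactive result would let a searcher run. The variable names are illustrative only:

```python
# Two ways to concatenate strings, the kind of runnable snippet an
# interactive search result might surface for a concatenation query.
first = "Hello"
second = "World"

# Simple operator-based concatenation.
combined = first + ", " + second + "!"

# join() is the idiomatic choice when combining many pieces.
joined = " ".join([first, second])

print(combined)  # Hello, World!
print(joined)    # Hello World
```

The point of the feature is that a developer can tweak and re-run this sort of snippet without leaving the results page.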

My thought is that Bing is nosing into new territory. Is it possible that there could be some unforeseen consequences along the lines of the Microsoft Tay chatbot? Nah, Microsoft would not provide a function that might compromise a searcher’s computer.

Stephen E Arnold, April 18, 2016

Microsoft Azure Plans Offer Goldilocks and the Three Bears Strategy to Find Perfect Fit

April 15, 2016

The article on eWeek titled “Microsoft Debuts Azure Basic Search Tier” relates the perks of the new plan from Microsoft, namely, that it is cheaper than the others. At $75 per month (and currently half off for the preview period, so get it while it’s hot!), the Basic Azure plan has lower indexing capacity, but that is the intention. The completely Free plan enables indexing of 10,000 documents and allows for 50 megabytes of storage, while the new Basic plan goes up to one million documents. The more expensive Standard plan costs $250 per month and provides for up to 180 million documents and 300 gigabytes of storage. The article explains,

“The new Basic tier is Microsoft’s response to customer demand for a more modest alternative to the Standard plans, said Liam Cavanagh, principal program manager of Microsoft Azure Search, in a March 2 announcement. “Basic is great for cases where you need the production-class characteristics of Standard but have lower capacity requirements,” he stated. Those production-class capabilities include dedicated partitions and service workloads (replicas), along with resource isolation and service-level agreement (SLA) guarantees, which are not offered in the Free tier.”

So just how efficient is Azure? Cavanagh stated that his team measured indexing performance at 15,000 documents per minute (although he also stressed that this was with documents submitted in batches of 1,000). With this new plan, Microsoft continues to build out its cloud search capabilities.
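For a sense of what that batching detail implies on the client side, here is a minimal Python sketch of splitting documents into 1,000-document batches. The payload shape mirrors the Azure Search indexing REST API as I understand it; the `@search.action` field and the `value` array should be treated as assumptions to check against Microsoft’s documentation:

```python
import json

BATCH_SIZE = 1000  # the batch size Cavanagh's team used in their measurements

def make_batches(documents, batch_size=BATCH_SIZE):
    """Split a document list into indexing batches of at most batch_size,
    serialized as JSON payloads in the (assumed) Azure Search batch shape."""
    for start in range(0, len(documents), batch_size):
        batch = documents[start:start + batch_size]
        # Azure Search expects each document tagged with an action,
        # e.g. "upload", inside a "value" array (assumption for illustration).
        yield json.dumps({
            "value": [{**doc, "@search.action": "upload"} for doc in batch]
        })

# At 15,000 documents per minute in 1,000-document batches, filling a
# million-document Basic-tier index would take roughly 1,000,000 / 15,000,
# or about 67 minutes.
docs = [{"id": str(i), "text": f"doc {i}"} for i in range(2500)]
payloads = list(make_batches(docs))
print(len(payloads))  # 3 batches: 1000 + 1000 + 500
```

Each payload would then be POSTed to the service’s indexing endpoint; the sketch stops short of the network call.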



Chelsea Kerwin, April 15, 2016

Sponsored by, publisher of the CyberOSINT monograph

Microsoft Does Cognitive Too

April 8, 2016

I read “Microsoft Launches Cognitive Services Based on Project Oxford and Bing.” I immediately thought of Microsoft’s smart chatbot adventure. Do I doubt the efficacy of Microsoft’s smart systems? No, I just think that the same approach manifested in Tay probably exists in the suite of APIs announced on March 30, 2016.

I learned:

The brand name Cognitive Services is a nod to IBM’s Watson, which for the past few years has been marketed as a “cognitive computing” product — that is, one that’s based on the way the human brain works.

That is working out very well for IBM. There is a recipe book and many projects. Revenues? Well, sure. Some.

Microsoft offers a search API. That, one hopes, will actually work reasonably well. Microsoft’s track record in the information access department has been interesting.

According to this Microsoft page, there are five search APIs available for preview. Use is metered like a taxi ride, and that type of pricing is often unsettling.

The five APIs are:

  1. Bing Autosuggest
  2. Bing Image Search
  3. Bing News Search
  4. Bing Video Search
  5. Bing Web Search.

I assume one can mix in academic knowledge, entity linking, and knowledge exploration. In addition, it appears there is a language understanding intelligent service called LUIS. I noted a linguistic analysis API as well. And for good measure, one can tap text analytics.
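To make the metered taxi ride concrete, here is a hedged Python sketch of assembling a request to one of the five Bing APIs. The base URL, endpoint paths, and `Ocp-Apim-Subscription-Key` header reflect the 2016 preview as best I can tell and are assumptions; the key is a placeholder:

```python
from urllib.parse import urlencode

# Assumed 2016-era base URL for the Bing Cognitive Services preview.
BING_BASE = "https://api.cognitive.microsoft.com/bing/v5.0"

# Assumed endpoint paths for the five preview search APIs.
ENDPOINTS = {
    "web": "/search",
    "images": "/images/search",
    "news": "/news/search",
    "videos": "/videos/search",
    "suggestions": "/suggestions",
}

def build_request(api, query, key):
    """Return the URL and headers for a metered Bing API call
    (the request is assembled but not sent)."""
    url = BING_BASE + ENDPOINTS[api] + "?" + urlencode({"q": query})
    # Each metered call is authenticated with a subscription key header.
    headers = {"Ocp-Apim-Subscription-Key": key}
    return url, headers

url, headers = build_request("web", "taxi meter pricing", "PLACEHOLDER_KEY")
print(url)
```

Every such call is billed against the subscription key, which is exactly where the taxi meter starts running.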

For a developer, these Lego blocks offer an opportunity to code up a solution.

On the other hand, there are goodies from outfits ranging from Baidu to Facebook to Google from which to choose.

Just as IBM is saddled with the Jeopardy stunt and the recipe book, Microsoft is going to have to live with Tay’s capabilities.

What happens if Tay works its way into a routine search query? That will be intriguing. Perhaps Tay and Watson can get together and do a smart thing?

Stephen E Arnold, April 8, 2016

Analysis of Microsoft Chatbot Fail

April 3, 2016

I am looking forward to artificial intelligence adventures. I got a bang out of the Google self-driving auto running into a bus. I chuckled when I learned that a Microsoft AI demo went off the rails.

If you want to know what happened, I suggest you scan “Poor Software QA Is Root Cause of TAY-FAIL (Microsoft’s AI Twitter Bot).” The write up works through many explanations.

The reason, however, boils down to lousy quality assurance. I would suggest that this explanation is not unique to Microsoft. Why did those construction workers demolish the house that was A OK? I wonder how one can get Microsoft’s smart auto numbering to work.

Pesky humans.

Stephen E Arnold, April 3, 2016

Google Joins Microsoft in the Management Judgment Circus Ring

April 2, 2016

First there was Microsoft and the Tay “learning” experiment. That worked out pretty well if you want a case example of what happens when smart software meets the average Twitter user. Microsoft beat a hasty retreat but expected me to fall for the intelligent API announcements at its home brew conferences.



Then we had the alleged April 1 prank from the Alphabet Google thing. Gentle reader, the company eager to solve death created a self-driving car which ran into a bus. A more interesting example, however, was the apparently “human” decision to pull a prank on Gmail users.

According to “Google Reverses Gmail April 1 Prank after Users Mistakenly Put GIFs into Important Emails”:

“Today, Gmail is making it easier to have the last word on any email with Mic Drop. Simply reply to any email using the new ‘Send + Mic Drop’ button. Everyone will get your message, but that’s the last you’ll ever hear about it. Yes, even if folks try to respond, you won’t see it,” Google explained when it launched the button on April 1.

Let’s step back from these interesting examples of large companies doing odd duck things and ask this question:

Do financial success and possibly unprecedented market impact improve human decision making?

I would suggest that the science and math club mentality may not scale in the judgment department. Whether it is alleged malware techniques to force an old school programmer to write Never10 or creating a situation in which an employee-to-employee relationship gives new meaning to the joke word “glasshole,” the human judgment angle may need some scrutiny.

Tay was enough for me to consider creating a Tortured Tay segment for this blog to complement Weakly Watson. Alphabet Google’s prank, however, is in a class of its own.

Fiddling with Gmail’s buttons was an idea without merit. Users are on autopilot. Think how users wince when Apple fools with iTunes’ interface. Now shift from an entertainment app to a “real work” app.

Judgment is important. Concentration of user attention requires more than a math club management style. What worked in high school may not work in other situations.

Stephen E Arnold, April 2, 2016
