Be Like Cortana, Really Microsoftish

February 20, 2017

We noted “Microsoft Adds More AI Tools to Dev Cognitive Services Suite.” The battle for lock-in continues. Facebook, Google, and others in the online oligopolistic club want to initiate new members into their group. The best way, it seems, is to shower the developers with freebies. This is a variant of the Xalisco approach to drug distribution in the United States. Free stuff gets folks coming back for more. Well, that’s the theory.

The write up says:

Microsoft has released three artificial intelligence (AI) tools used in its Skype Translator, Bing search and Cortana speech recognition services to developers as part of a bundle of 25 tools in Microsoft Cognitive Services.

Yes, cognitive. That’s the IBM Watson word, isn’t it? The write up adds:

The collection of tools will enable developers to add features such as emotion and sentiment detection, vision and speech recognition, and language understanding to their applications, according to Microsoft, which claims that they will require “zero expertise in machine learning” to use.
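For the curious, here is roughly what “zero expertise in machine learning” can look like in practice: a single REST call to a hosted model. This is a sketch only; the endpoint region, API version, and subscription key below are placeholders and assumptions, not details from the write up.

```python
# Minimal sketch: score the sentiment of a few texts against a hosted
# Cognitive Services endpoint. Region and API version are assumptions;
# the key is a placeholder.
import requests

ENDPOINT = "https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment"
API_KEY = "YOUR_SUBSCRIPTION_KEY"  # hypothetical subscription key

def sentiment(texts):
    # The service expects a batch of documents, each with its own id.
    payload = {"documents": [
        {"id": str(i), "language": "en", "text": t} for i, t in enumerate(texts)
    ]}
    response = requests.post(
        ENDPOINT,
        headers={"Ocp-Apim-Subscription-Key": API_KEY},
        json=payload,
    )
    response.raise_for_status()
    # Scores run from 0 (negative) to 1 (positive).
    return {doc["id"]: doc["score"] for doc in response.json()["documents"]}

print(sentiment(["Free developer tools are wonderful.", "Lock in worries me."]))
```

No machine learning expertise required; just an HTTP library and a credit card on file.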

How are these tools working? I would ask Tay, but I prefer a less biased type of Microsoft smart software. And Cortana? Isn’t that the intrusive thing in Windows 10? I can type, thank you.

But, hey, free is free. What’s the long term cost? Good question. Perhaps I can ask Bing? On the other hand, I could swing by H&R Block and ask Watson.

Stephen E Arnold, February 20, 2017

Trendy Smart Software Companies

February 15, 2017

I read “Winton Labs Names Cohort of European AI and Data Science Startups.” Winton Labs has hooked up with the Alan Turing Institute. Here are the companies that anyone interested in smart software may want to watch:

  • Alterest
  • Cognitiv+. I am not too keen on companies with special characters in their names. The firm’s Web site is www.cognitivplus.com. Is this smart?
  • SMAP Energy
  • Terrabotics
  • Warwick Analytics.

Previous smart software outfits picked by Winton included IntelligentX and CheckRecipient.

Stephen E Arnold, February 15, 2017

The Pros and Cons of Human Developed Rules for Indexing Metadata

February 15, 2017

The article on Smartlogic titled The Future Is Happening Now puts forth the Semaphore platform as the technology filling the gap between NLP and AI when it comes to conversation. The article posits that in spite of the great strides in AI over the past 20 years, human speech is one area where AI still falls short.

The reason for this, according to the article, is that “words often have meaning based on context and the appearance of the letters and words.” It’s not enough to be able to identify a concept represented by a bunch of letters strung together. Many rules need to be put in place that affect the meaning of a word: its placement in the sentence, the grammar, and the words around it are all important.

Advocating human developed rules for indexing is certainly interesting, and the author compares this logic to the process of raising her children to be multilingual. Semaphore is a model-driven, rules-based platform that auto-generates usage rules to expand the guidelines a machine follows as it learns. The issue here is cost. Indexing large amounts of data is extremely cost-prohibitive, and that is before the maintenance of the rules even becomes part of the equation. In sum, this is a very old school approach to AI that may make many people uncomfortable.
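To see why the rules get expensive, consider a toy example. The sketch below is not Semaphore’s rule language (which is proprietary); it is a hand rolled, hypothetical illustration of context rules for a single ambiguous term. Multiply this by every ambiguous term in a controlled vocabulary and the maintenance bill comes into focus.

```python
# Toy context rules for one ambiguous term. NOT Semaphore's syntax; a made-up
# illustration of how "the words around it" drive the concept a term gets.
import re

CONTEXT_CUES = {
    "Beverage/Coffee": {"coffee", "roast", "brewed", "cup", "espresso"},
    "Technology/Programming": {"class", "jvm", "code", "compile", "compiler"},
}

def index_term(text, term="java"):
    """Return a concept tag for `term` based on the words around it."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    if term not in words:
        return None
    for concept, cues in CONTEXT_CUES.items():
        if words & cues:
            return concept
    return "Place/Indonesia"  # fallback sense when no rule fires

print(index_term("I brewed a dark roast java this morning"))  # Beverage/Coffee
print(index_term("The java class failed to compile"))         # Technology/Programming
print(index_term("Flights to java leave on Tuesday"))         # Place/Indonesia
```

Every new sense, synonym, or grammatical wrinkle means another human edited rule.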

Chelsea Kerwin, February 15, 2017

Who Owns What AI Outfit?

February 13, 2017

Yep, there is a crazy logo graphic. However, you will find a chart I found useful. Navigate to “The Race For AI: Google, Twitter, Intel, Apple In A Rush To Grab Artificial Intelligence Startups.” Look for the subhead “Major Acquirers In Artificial Intelligence Since 2011.” Google was a gobbler of smart software companies. There were three surprises on the list:

  1. Yahoot (sorry I meant Yabba Dabba Hoot, er, Yahoo) acquired some AI smarts while it was shopping itself, ignoring security problems, and making management history.
  2. Amazon, the outfit with the Echo gizmo, snagged a couple of companies. What happens if Alexa gets smarter? Answer: More pain for Apple, Google, and Microsoft. These are three companies unable to create a new product category which generates buzz while selling laundry detergent.
  3. Twitter seems to be making a bit of an effort to become more than the amplifier of trumpet music.

Interesting rundown. Now, about that crazy chart, which I find unreadable. Here you go:

image

Nifty, eh?

Stephen E Arnold, February 13, 2017

Captain Obvious Report: Smart Software Will Kill Jobs

February 12, 2017

I love the chatter about artificial intelligence. Lists like “Experts Have Come Up with 23 Guidelines to Avoid an AI Apocalypse” are amusing. The outfits applying smart software are focused on revenues, deals, market share, big contracts, and money. I am not sure worrying about how a Boston Dynamics-type robot will operate when deployed in a war zone in swarm autonomous mode is going to do much for the apocalypse worriers.

There are the obvious statements about smart software. You know. Search systems that deliver information to you before you knew you needed it. A digital mom or a persistent and ever present significant other. Enter our Captain Obvious report. Read on.

I read the “Experts Have Come Up with…” article and absorbed this injunction:

the Asilomar AI Principles (after the beach in California, where they were thought up), the guidelines cover research issues, ethics and values, and longer-term issues – everything from how scientists should work with governments to how lethal weapons should be handled. On that point: “An arms race in lethal autonomous weapons should be avoided,” says principle 18.

Interesting. You can look at the complete list, sort of like a year end top 10 films output, at this link.

Enter Captain Obvious. Navigate to “Google’s Diane Greene: Machine Learning Will Cost Jobs, So Skills Training Is Essential.” I love it when Googlers make it so easy for folks with dull normal IQs to get good advice for working in the post smart software world. But our intrepid Captain Obvious spotted this gem:

Greene said, “machines are better than humans” at some tasks. Recently they’ve started to do better at some kinds of image and speech recognition, and they’re performing tasks such as finding signs of disease, such as retinopathy, from images more accurately than humans.

Yikes, aren’t these jobs performed by people with college educations and maybe graduate degrees?

Captain Obvious enters:

people, especially those that are computer-literate, shouldn’t have a problem getting new jobs. “This has happened before in the world,” she said, such as during the Industrial Revolution. “There’s new jobs they can easily do. It’s all about training.” But others need to be helped through the transition.

So no work. Retraining. What about some folks who are not too bright? asks Captain Obvious. These people can work on “new” jobs at Google maybe?

Stephen E Arnold, February 12, 2017

IBM on Cognitive Computing Safari in South Africa

February 9, 2017

The article on ZDNet titled IBM to Use AI to Tame Big Data in Its Second African Research Lab discusses the 12th global research unit IBM has opened. This one is positioned in South Africa for data analytics and cognitive computing as applied to healthcare and urban development. Dr. Solomon Assefa, IBM’s Director of Research for Africa, mentions in the article that the lab was opened in only 18 months. He goes on,

Assefa said the facility will combine industrial research with a startup incubator, working closely with Wits’ own entrepreneur accelerator in the same innovation hub, known as the Tshimologong Precinct. Tshimologong is part of a major urban renewal project by Wits and the City of Johannesburg.

“Nowhere else in the world is there an innovation hub that houses a world class research lab,” Assefa said. “One thing we agreed on from the start is that we will make the lab accessible to startups and entrepreneurs in the hub.”

The lab is funded by a ten-year investment program of roughly $60M and maintains an open door policy with the University of the Witwatersrand (Wits), the Department of Trade and Industry, and the Department of Science and Technology. Early applications focus on Cape region forest fire prevention, disease monitoring, and virtual reality.

Chelsea Kerwin, February 9, 2017

Gradescope Cuts Grading Time in Half, Makes Teachers’ Lives 50% More Bearable

February 8, 2017

The article titled Professors of the World, Rejoice: Gradescope Brings AI to Grading on Nvidia might more correctly be titled: TAs of the World, Rejoice! In my experience, those hapless, hardworking, underpaid individuals are the ones doing most of the grunt work on college campuses. Any grad student who has faced a stack of essays or tests when their “real work” is calling knows the pain and redundancy of grading. Gradescope is an exciting innovation that cuts the time spent grading in half. The article explains,

The AI isn’t used to directly grade the papers; rather, it turns grading into an automated, highly repeatable exercise by learning to identify and group answers, and thus treat them as batches. Using an interface similar to a photo manager, instructors ensure that the automatically suggested answer groups are correct, and then score each answer with a rubric. In this way, input from users lets the AI continually improve its future predictions.
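The batching idea is easy to picture in code. Here is a deliberately simple, assumption laden Python sketch: normalize each recognized answer, group identical answers, and record one rubric decision per group. The student names and rubric values are invented; Gradescope’s actual pipeline adds handwriting recognition and human confirmation of the suggested groups.

```python
# Toy version of grade-by-group: one rubric decision covers every student
# whose (normalized) answer is identical.
from collections import defaultdict

def normalize(answer):
    # Collapse case and whitespace so "4.0 J" and "4.0  j" share a group.
    return " ".join(answer.lower().split())

def group_answers(answers):
    """Map each distinct normalized answer to the students who gave it."""
    groups = defaultdict(list)
    for student, answer in answers.items():
        groups[normalize(answer)].append(student)
    return groups

answers = {"ann": "4.0 J", "bo": "4.0  j", "cy": "2.0 J"}
groups = group_answers(answers)

# Hypothetical rubric: one score per answer group, not per student.
rubric = {"4.0 j": 10, "2.0 j": 4}
grades = {student: rubric.get(key, 0)
          for key, students in groups.items() for student in students}
print(grades)  # {'ann': 10, 'bo': 10, 'cy': 4}
```

Three papers, two grading decisions; scale that to a 300 student lecture and the time savings are obvious.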

The trickiest part of this technology was handwriting recognition, and the Berkeley team used a “recurrent neural network trained using the Tesla K40 and GeForce GTX 980 Ti GPUs.” Interestingly, the app was initially created at least partly to prevent cheating. Students have been known to alter their answers after the fact and argue a failure of grading, so a digital record of the paper is extremely useful. This might sound like the end of teachers, but in reality it is the beginning of a giant, global teacher party!

Chelsea Kerwin, February 8, 2017

Big Guns Want to Make Artificial Intelligence Ethical

January 31, 2017

I read “Apple Joins Research Group for Ethical AI with Fellow Tech Giants.” The write up informed me that:

As artificial intelligence becomes an increasingly powerful force in industry and society, some of the world’s biggest companies are worrying about how the technology can be used ethically, and how the public will perceive its spread. To combat these problems (and others), five tech companies — Google, Amazon, Microsoft, Facebook, and IBM — set up a research group called the Partnership on AI.

Apple is on the bus.

I don’t want to be skeptical, but there are some outfits actively working on smart software for government use cases. There is, in effect, a shadow business in artificial intelligence and smart software for warfighting, intelligence, and law enforcement.

image

A prototype autonomous weapon hunts for the enemy. For more images of the device, navigate to this link.

Sure, it’s great that the consumer facing outfits are going to meet and talk about how to keep children and partially informed mobile phone users from negative uses of smart software. But I had a few thoughts flit through my addled goose brain.

What are the outfits listed in Carahsoft’s round up of IT solutions for government doing to make sure smart software is ethical? If you are not familiar with Carahsoft’s lists, you can check them out at this link.

Also, there are the US government programs to advance the use of smart software. Some of these ideas are interesting to me, but I am not sure how they will fly in a grade school. Examples range from self directing swarms of weaponized mini drones released from an aircraft to autonomous imagery analysis systems which can deploy countermeasures automatically when folks face a threat.

Finally, there are the wizards working at various government research centers. These range from the little known units of consulting companies to university related research organizations.

In short, the notion of making artificial intelligence ethical is an interesting one for commercial enterprises. I wonder if the folks will chit chat about other topics when the members sit down for Philz coffee. There’s nothing like a helpful conversation among publicly traded companies who have a mandate to maximize their revenues.

I don’t want to be a fuddy duddy, but what does “ethics” mean?

Stephen E Arnold, January 31, 2017

IBM Explains Buggy Whip to Control Corvettes

January 19, 2017

I love IBM. I enjoy the IBM Watson marketing. I get a kick out of the firm’s saga of declining quarterly revenue. Will IBM make it 19 quarters in a row? I am breathless.

I read “IBM’s Rometty Lays Out AI Considerations, Ethical Principles.” The main idea, as I understand it, is:

artificial intelligence should be used to advance and augment humans, not replace them. Transparency of AI development is also necessary.

Since smart software is dependent upon numerical recipes, I am not sure that the many outfits involved in fiddling with procedures, systems, and methods are going to make clear what their wizards are doing. Furthermore, IBM, in my opinion, is a bit of a buggy whip outfit. The idea that a buggy whip can control a bright 18 year old monitoring a drone swarm relying on artificial intelligence to complete a mission is amusing. Maybe IBM will equip Watson with telepathy?

The write up explains:

Commonly referred to as artificial intelligence, this new generation of technology and the cognitive systems it helps power will soon touch every facet of work and life – with the potential to radically transform them for the better…As with every prior world-changing technology, this technology carries major implications. Many of the questions it raises are unanswerable today and will require time, research and open discussion to answer.

Okay. What’s DeepMind up to? What are those folks at Facebook, Baidu, Microsoft, MIT, and most of the upscale French universities doing? Are the insights of researchers in Beijing finding their way into the media channel?

Well, IBM is going to take action if the information in the “real” journalistic write up is on the money. Here’s what Big Blue is going to do in its continuing effort to become a plus for stakeholders:

  1. IBM’s systems will augment human intelligence. Sounds good but the direction of some smart software is to make it easy for humans to get a pizza. The digital divide delivers convenience to lots of folks and big paydays to those in the top tier who find a way to sell stuff. Alexa, I need paper towels.
  2. Transparency. Right, that’s a great idea, but how it plays out in the real world is going to be a bit hit and miss. Actually, more miss than hit. The big money folks want to move to “winner take all” plays. Amazon Alexa has partners. Amazon keeps some money as it continues its march to global digital Wal-Mart-ism.
  3. Skills. Yep, the smart software movers and shakers buy promising outfits. Even the allegedly independent folks in Montréal are finding Microsoft a pretty nifty place to work.

Perhaps the folks doing smart software will meet and agree on some rules. Better yet, the US government can legislate rules and then rely on the United Nations or NGOs to promulgate them. Wait. There is a better way. Why not use a Vulcan mind meld?

I understand that IBM has to take the high road, but when a drone swarm makes its own decisions, whipping out the rule books may not have much effect. Love those MBA chestnuts like buggy whips.

Stephen E Arnold, January 19, 2017

How Google Used Machine Learning and Loved It

January 16, 2017

If you use any search engine other than Google (DuckDuckGo excepted), people cringe and doubt your Internet savvy. Google has a reputation for being the most popular, reliable, and accurate search engine in the US, and in many ways the reputation is deserved. Google apparently has one-upped itself, however, says Econsultancy in the article, “How Machine Learning Has Made Google Search Results More Relevant.”

In 2016, Google launched RankBrain to improve the relevancy of its search results. Searchmetrics conducted a study and discovered that it worked. RankBrain is an AI that uses machine learning to understand the context behind people’s searches. RankBrain learns the more it is used, similar to how a person learns to read: a reader might not know a word but can work out what it means from context.

Google’s semantic understanding has increased, and so has the number of words in the typical search query. People are reverting to their natural wordiness rather than typing a few keywords. At the same time, backlinking is not as important anymore, while content quality is becoming more valuable for higher page rankings. Bounce rates are increasing in the top twenty results, suggesting users are led to relevant results rather than to pages that are merely well optimized.
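A toy contrast makes the “context over keywords” point concrete. In the sketch below the three dimensional word vectors are invented for illustration; a system like RankBrain learns far richer representations from real usage data, but the mechanism (nearby meanings get nearby vectors) is the same in spirit.

```python
# Keyword overlap vs. vector similarity: the query and document share no
# words, yet their (made-up) embeddings land close together.
import math

EMBEDDINGS = {
    "footwear": [0.9, 0.1, 0.0],
    "sneakers": [0.85, 0.2, 0.05],
    "running":  [0.6, 0.7, 0.1],
    "jogging":  [0.55, 0.75, 0.1],
}

def embed(text):
    # Average the vectors of known words; unknown words ("for") are skipped.
    vecs = [EMBEDDINGS[w] for w in text.lower().split() if w in EMBEDDINGS]
    if not vecs:
        return [0.0, 0.0, 0.0]
    return [sum(dim) / len(vecs) for dim in zip(*vecs)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

query, doc = "sneakers for jogging", "running footwear"
print(set(query.split()) & set(doc.split()))       # set(): no shared keywords
print(round(cosine(embed(query), embed(doc)), 3))  # ~0.99: high similarity
```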

RankBrain also shows Google’s growing reliance on AI:

With the introduction of RankBrain, there’s no doubt that Google is taking AI and machine learning more seriously.  According to CEO, Sundar Pichai, it is just the start. He recently commented that ‘be it search, ads, YouTube, or Play, you will see us — in a systematic way — apply machine learning in all these areas.’  Undoubtedly, it could shape more than just search in 2017.

While search results are becoming more relevant, this spells bad news for marketers and SEO experts, as their attempts to gain rankings become less effective.

Whitney Grace, January 16, 2017
