Artificial Intelligence Will Make Humans Smarter at Work

October 17, 2017

Hardly any industry has been left untouched by the past decade’s advances in artificial intelligence. We could make a laundry list of the businesses affected, but we have a hunch you are very close to one right now. According to a recent Enterprise CIO article, “From Buzzword to Boardroom: What’s Next for Machine Learning?” human intelligence is becoming obsolete in certain fields.

As demonstrated in previous experiments, no human brain is able to process as much data at comparable speed and accuracy as machine-learning systems can and as a result, deliver a sound, data-based result within nanoseconds.

While that should make you sit up and take notice, the article is not as apocalyptic as that quote might lead you to believe. In fact, there is a silver lining in all this AI. We humans will just have to work hard to get there. The story continues:

It must also leave room for creativity and innovation. Insights and suggestions gained with the aid of artificial intelligence should stimulate, not limit. Ultimately, real creativity and genuine lateral thinking still comes from humans.

We have to agree with this optimistic line of thinking. These machines are not exactly stealing our jobs so much as forcing humans to reevaluate their roles. If you can properly combine AI, big data, and search in your role, chances are you will become invaluable instead of obsolete.

Patrick Roland, October 17, 2017

CEOs Hyped on AI, but Not Many Deploy It

October 17, 2017

How long ago was big data the popular buzzword?  It was not that long ago, but now it has been replaced with artificial intelligence and machine learning.  Whenever a buzzword is popular, CEOs and other leaders become obsessed with implementing it within their own organizations.  Fortune opens up about the truth of artificial intelligence and its real deployment in the editorial, “The Hype Gap In AI”.

Organization leaders have high expectations for artificial intelligence, but the reality is well below them.  According to a survey cited in the editorial, 85% of executives believe that AI will change their organizations for the better, but only one in five has actually implemented AI in any part of the organization.  Only 39% have an AI strategy in place.

Hype about AI and its potential is all over the business sector, but very few really understand the current capabilities.  Even fewer know how they can actually use it:

But actual adoption of AI remains at a very early stage. The study finds only about 19% of companies both understand and have adopted AI; the rest are in various stages of investigation, experimentation, and watchful waiting. The biggest obstacle they face? A lack of understanding — about how to adapt their data for algorithmic training, about how to alter their business models to take advantage of AI, and about how to train their workforces for use of AI.

Organizations view AI as an end-all solution, similar to how big data was the end-all solution a few years ago.  What is even worse is that while big data may have had its difficulties, understanding it was simpler than understanding AI.  The way executives believe AI will transform their companies is akin to a science fiction solution that is still very much in the realm of the imagination.

Whitney Grace, October 17, 2017

Big Data and Big Money Are on a Collision Course

October 16, 2017

A recent Forbes article has started us thinking about the similarities between long-haul truckers and Wall Street traders. Really! The editorial penned by JP Morgan, “Informing Investment Decisions Using Machine Learning and Artificial Intelligence,” showcases the many ways in which investing is about to be overrun with big data machines. Depending on your stance, it is either thrilling or frightening.

The story claims:

Big data and machine learning have the potential to profoundly change the investment landscape. As the quantity and the access to data available have grown, many investors continue to evaluate how they can leverage data analysis to make more informed investment decisions. Investment managers who are willing to learn and to adopt new technologies will likely have an edge.

Sounds an awful lot like the news we have been reading recently about how almost two million truck drivers could be out of work in the next decade thanks to self-driving trucks. If you have money in trucking, the potential savings are amazing, but if driving is how you make your living, things have suddenly become chilly. That, according to this story, is what the future of Wall Street sounds like.

It continues:

Big data and machine learning strategies are already eroding some of the advantage of fundamental analysts, equity long-short managers and macro investors, and systematic strategies will increasingly adopt machine learning tools and methods.

If you ask us, it’s not a matter of if but when. Nobody wants to lose their job due to efficiency, but it’s pretty much impossible to stop. Money talks and saving money talks loudest to companies and business owners, like investment firms.

Patrick Roland, October 16, 2017

AI Predictions for 2018

October 11, 2017

AI just keeps gaining steam, and is positioned to be extremely influential in the year to come. KnowStartup describes “10 Artificial Intelligence (AI) Technologies that Will Rule 2018.” Writer Biplab Ghosh introduces the list:

Artificial Intelligence is changing the way we think of technology. It is radically changing the various aspects of our daily life. Companies are now significantly making investments in AI to boost their future businesses. According to a Narrative Science report, just 38 percent of the companies surveyed used artificial intelligence in 2016—but by 2018, this percentage will increase to 62%. Another study performed by Forrester Research predicted an increase of 300% in investment in AI this year (2017), compared to last year. IDC estimated that the AI market will grow from $8 billion in 2016 to more than $47 billion in 2020. ‘Artificial Intelligence’ today includes a variety of technologies and tools, some time-tested, others relatively new.

We are not surprised that the top three entries are natural language generation, speech recognition, and machine learning platforms, in that order. Next are virtual agents (aka “chatbots” or “bots”), then decision management systems, AI-optimized hardware, deep learning platforms, robotic process automation, text analytics & natural language processing, and biometrics. See the write-up for details on each of these topics, including some top vendors in each space.

Cynthia Murrell, October 11, 2017

Medieval Thoughts in a Mobile Smart Bubble

October 6, 2017

I read two articles this morning when the recalcitrant Vodafone network finally decided that resolving links from Siena, Italy, was okay today. Yesterday the zippy technology did not work as Sillycon Valley wizards and “real” journalists expect.

The first write up is one of those “newspapers should be run by ‘real’ journalists operating from a rock-solid, independent position as gatekeepers of the ‘truth’” pieces. You can draw your own conclusion about this “real” journalistic cartwheel by reading “If Journalists Take Sides, Who Will Speak Truth to Power?”

I noted this passage:

The essential argument was recently laid out by an outlet called 888.hu: “The international media, with a few exceptions, generally write bad things about the government because a small minority with great media influence does everything to tarnish the reputation of Hungary in front of the world – prestige that has been built over hundreds of years by patriots.”

The “real” Guardian newspaper presents opinion and news by blending observations, mixed sources, and “news.” Technology deals in zeros and ones: facts experts accept in order to win a grant, get tenure, or prove merit.

Navigate to “The Seven Deadly Sins of AI Predictions.”

You are correct: medievalism meets “real” journalism. The argument in this “real” hard technology write up is that baloney, hoohah, and sci-fi have made “artificial intelligence” into today’s boogeyman.

Chill out: neither the smart software touts’ dream nor a “real” Terminator jumping out of a flying police patrol car with Robocop is coming to your city, village, or mud hut.

As readers of Beyond Search will be able to verify, I have poked fun at Technology Review for recycling the Watson confection with little or no critical analysis. I have also had a merry time commenting about the disconnect between the monopolistic systems which define “facts” and the old school journalists who flop between infatuation and odd ball criticism of the services which have captured their attention.

The reality is that artificial intelligence has been taking baby steps for decades. Computing power, data, and well-known numerical recipes can be combined to permit marketers to do what they have been doing for many years: Identify what’s hot and deliver more of that hotness in order to generate money via ads or provide services for which companies and governments will pay.

The notion that technology generates hyperbole is the stuff of entrepreneurs’ dreams. Today’s smart software is little more than making available some of the less crazy ideas from Star Trek.

Let me cite an example from “Seven Deadly”:

machine learning is very brittle, and it requires lots of preparation by human researchers or engineers, special-purpose coding, special-purpose sets of training data, and a custom learning structure for each new problem domain.

I am interested in watching people struggle to make an app for adding ringtones to an Android mobile phone work. I am interested in watching people struggle with laptops which combine a keyboard and a touchscreen. I am interested in the conflation of news, opinion, facts, “weaponized” information, shaped data to sell ads, and online services providing a user what the user “really wants.”

AI raises some interesting challenges. First, for those “real” newspapers and magazines, I hope that more critical thinking is applied to the “real” story. I hope that regulators do more than flop around like a fish dumped on the dock. I hope that smart software can remediate some of the problems humans seem to be manufacturing with more efficiency than Kia implements on its assembly lines.

What’s the “truth” in the Guardian’s “real” news story, opinion, and blog-quoting write up? What’s the path forward for a champion of IBM Watson and the richly funded MIT IBM AI lab?

These are big issues. Digital Savonarolas? Maybe not.

Stephen E Arnold, October 6, 2017

AI Is Key to Unstructured Data

October 5, 2017

Companies are now inclined to keep every scrap of data they create or collect, but what use is information that remains inaccessible? An article at CIO shares “Three Ways to Make Sense Out of Dark Data.” Contributor Sanjay Srivastava writes:

Most organizations sit on a mountain of ‘dark’ data – information in emails and texts, in contracts and invoices, and in PDFs and Word documents – which is hard to automatically access and use for descriptive, diagnostic, predictive, or prescriptive automations. It is estimated that some 80 percent of enterprise data is dark. There are three ways companies can address this challenge: use artificial intelligence (AI) to unlock unstructured data, deploy modular and interoperable digital technologies, and build traceability into core design principles.

Naturally, the piece elaborates on each of these suggestions. For example, we’re reminded AI uses natural language processing, ontology detection, and other techniques to plumb unstructured data. Interoperability is important because new processes must be integrated into existing systems. Finally, Srivastava notes that AI challenges the notion of workforce governance, and calls for an “integrated command and control center” for traceability. The article concludes:

Digital technologies such as computer vision, computational linguistics, feature engineering, text classification, machine learning, and predictive modeling can help automate this process.  Working together, these digital technologies enable pharmaceutical and life sciences companies to move from simply tracking issues to predicting and solving potential problems with less human error. Interoperable digital technologies with a reliable built-in governance model drive higher drug quality, better patient outcomes, and easier regulatory compliance.
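As a toy illustration of the first suggestion, unlocking unstructured text, here is a crude, dependency-free Python sketch. Real systems lean on natural language processing and ontology detection; the invoice text and the regex patterns below are invented for illustration:

```python
import re

# Crude stand-in for NLP-based extraction: pull structured fields out of
# "dark" free text. The document and the patterns are invented examples.
document = """
Invoice INV-20431 dated 2017-09-14.
Amount due: $12,450.00 to Acme Corp.
"""

patterns = {
    "invoice_id": r"INV-\d+",
    "date":       r"\d{4}-\d{2}-\d{2}",
    "amount":     r"\$[\d,]+\.\d{2}",
}

structured = {}
for field, rx in patterns.items():
    m = re.search(rx, document)
    structured[field] = m.group() if m else None

print(structured)
# {'invoice_id': 'INV-20431', 'date': '2017-09-14', 'amount': '$12,450.00'}
```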

Cynthia Murrell, October 5, 2017

Smart Software with a Swayed Back Pony

October 1, 2017

I read “Is AI Riding a One-Trick Pony?” and felt those old riding sores again. Technology Review presents nifty “new” technology that is actually old: Bayesian methods date from the 18th century. The MIT write up has pegged Geoffrey Hinton, a beloved producer of artificial intelligence talent, as the flag bearer for the great man theory of smart software.

Dr. Hinton is a good subject for study. But the need to generate clicks and zip in the quasi-academic world of big time universities suggests the publication may be engaged in “practical” public relations. For example, the write up praises Dr. Hinton’s method of “back propagation.” At the same time, the MIT publication describes the method behind the neural networks popular today:

you change each of the weights in the direction that best reduces the error overall. The technique is called “backpropagation” because you are “propagating” errors back (or down) through the network, starting from the output.

This makes sense. The idea is that the method allows the real world to be subject to a numerical recipe.
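To make the recipe concrete, here is a minimal numpy sketch of backpropagation for a tiny one-hidden-layer network. The data, layer sizes, and learning rate are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                          # 100 samples, 3 features
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)   # toy labels

W1 = rng.normal(scale=0.5, size=(3, 4))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output weights
lr = 0.5
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(1000):
    # Forward pass: compute predictions from the current weights.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: propagate the output error back (down) through the
    # network, then nudge each weight in the direction that reduces it.
    err_out = (out - y) * out * (1 - out)       # error at the output layer
    err_hid = (err_out @ W2.T) * h * (1 - h)    # error propagated to hidden layer
    W2 -= lr * (h.T @ err_out) / len(X)
    W1 -= lr * (X.T @ err_hid) / len(X)
```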

The write up states:

Neural nets can be thought of as trying to take things—images, words, recordings of someone talking, medical data—and put them into what mathematicians call a high-dimensional vector space, where the closeness or distance of the things reflects some important feature of the actual world.
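As a toy illustration of that vector-space idea, here is a sketch in which “closeness” is measured with cosine similarity; the three-dimensional “embeddings” are invented:

```python
import numpy as np

# Invented 3-d vectors standing in for learned embeddings.
embeddings = {
    "kitten": np.array([0.9, 0.8, 0.1]),
    "cat":    np.array([1.0, 0.7, 0.2]),
    "truck":  np.array([0.1, 0.2, 0.9]),
}

def cosine(a, b):
    # Cosine similarity: near 1.0 means pointing the same way.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["kitten"], embeddings["cat"]))    # high: similar things
print(cosine(embeddings["kitten"], embeddings["truck"]))  # low: dissimilar things
```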

Yes, reality. The way the brain works. A way to make software smart. Indeed a one trick pony which can be outfitted with a silver bridle, a groomed mane and tail, and black liquid shoe polish on its dainty hooves.

The sway back? A genetic weakness. A one trick pony with a sway back may not be able to carry overweight kiddies to the Artificial Intelligence Restaurant, however.

MIT’s write up suggests there is a weakness in the method; specifically:

these “deep learning” systems are still pretty dumb, in spite of how smart they sometimes seem.

Why?

Neural nets are just thoughtless fuzzy pattern recognizers, and as useful as fuzzy pattern recognizers can be—hence the rush to integrate them into just about every kind of software—they represent, at best, a limited brand of intelligence, one that is easily fooled.

As for the software itself, the article points out that:

And though we’ve started to get a better handle on what kinds of changes will improve deep-learning systems, we’re still largely in the dark about how those systems work, or whether they could ever add up to something as powerful as the human mind.

There is hope too:

Essentially, it is a procedure he calls the “exploration–compression” algorithm. It gets a computer to function somewhat like a programmer who builds up a library of reusable, modular components on the way to building more and more complex programs. Without being told anything about a new domain, the computer tries to structure knowledge about it just by playing around, consolidating what it’s found, and playing around some more, the way a human child does.
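One way to picture that loop is this heavily simplified sketch, in which “programs” are token lists and “compression” means pulling the most frequent fragment into a named library routine. Everything here, names included, is invented; the actual algorithm is far more sophisticated:

```python
from collections import Counter

# Toy corpus of "programs" produced by exploration.
programs = [
    ["move", "turn", "move", "turn", "grab"],
    ["move", "turn", "move", "turn", "drop"],
    ["scan", "move", "turn", "move", "turn"],
]

def most_common_pair(progs):
    pairs = Counter()
    for p in progs:
        pairs.update(zip(p, p[1:]))
    return pairs.most_common(1)[0][0]

library = {}
for i in range(2):  # a couple of compression rounds
    a, b = most_common_pair(programs)
    name = f"sub{i}"
    library[name] = [a, b]          # consolidate the fragment into the library
    rewritten = []
    for p in programs:              # re-express each program using the library
        out, j = [], 0
        while j < len(p):
            if j + 1 < len(p) and (p[j], p[j + 1]) == (a, b):
                out.append(name); j += 2
            else:
                out.append(p[j]); j += 1
        rewritten.append(out)
    programs = rewritten

print(library)   # reusable, modular components discovered so far
print(programs)  # programs now built from those components
```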

We have a braided mane and maybe a combed tail.

But what about that swayed back, the genetic weakness which leads to a crippling injury when the poor pony is asked to haul a Facebook or Google sized child around the ring? What happens if low cost, more efficient ways to create training data, replete with accurate metadata and tags for human things like sentiment and context awareness become affordable, fast, and easy?

My thought is that it may be possible to do a bit of genetic engineering and make the next pony healthier and less expensive to maintain.

Stephen E Arnold, October 1, 2017

Microsoft AI: Another Mobile Type Play?

September 27, 2017

I read a long discussion of Microsoft’s artificial intelligence activities. The article touches briefly on the use of Bing and LinkedIn data. (I have distributed this post via LinkedIn, so members can get a sense of the trajectory of the MSFT AI effort.)

I noted several quotes, but I urge you to read the original article “One Year Later, Microsoft AI and Research Grows to 8k People in Massive Bet on Artificial Intelligence.”

[1] Microsoft is looking to avoid missing giant opportunities as it did with mobile and social media, so it is giving its AI strategy a lot of attention and resources

[2] Artificial intelligence is one of the key topics of Nadella’s upcoming book, Hit Refresh.

[3] The initial announcement between Microsoft and Amazon might not be the end of the AI collaboration between the Seattle-area tech giants…

[4] We’ve [Microsoft AI team] largely built what I would call wedges of competency — a great speech recognition system, a great vision and captioning system, great object recognition system…

[5] When you think about the Microsoft Graph and Office Graph, now augmented with the LinkedIn Graph, it’s just amazing.

[6] Now, the question is whether Microsoft “can really execute with differentiation.”


Stephen E Arnold, September 27, 2017

The Dark Potential Behind Neural Networks

September 27, 2017

With nearly every technical advance humanity has made, someone has figured out how to weaponize that which was intended for good. So too, it seems, with neural networks. The Independent reports, “Artificial Intelligence Can Secretly Be Trained to Behave ‘Maliciously’ and Cause Accidents.”  The article cites research [PDF] from New York University that explored the potential to create a “BadNet.” They found it was possible to tamper with a neural net’s training to the point where the model could even cause tragic physical “accidents,” and that such changes would be difficult to detect. Writer Aatif Sulleyman explains:

Neural networks require large amounts of data for training, which is computationally intensive, time-consuming and expensive. Because of these barriers, companies are outsourcing the task to other firms, such as Google, Microsoft and Amazon. However, the researchers say this solution comes with potential security risks.

 

‘In particular, we explore the concept of a backdoored neural network, or BadNet,’ the paper reads. ‘In this attack scenario, the training process is either fully or (in the case of transfer learning) partially outsourced to a malicious party who wants to provide the user with a trained model that contains a backdoor. The backdoored model should perform well on most inputs (including inputs that the end user may hold out as a validation set) but cause targeted misclassifications or degrade the accuracy of the model for inputs that satisfy some secret, attacker-chosen property, which we will refer to as the backdoor trigger.’

Sulleyman shares an example from the report: researchers successfully fooled a system, with the application of a Post-it note, into interpreting a stop sign as a speed limit sign—a trick that could cause an autonomous vehicle to cruise through without stopping. Though we do not (yet) know of any such sabotage outside the laboratory, researchers hope their work will encourage companies to pay close attention to security as they move forward with machine learning technology.
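For the curious, here is a minimal sketch of how such a poisoned training set might be constructed. The “images,” labels, and trigger patch are all invented, and no claim is made about the paper’s actual setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_images(n):
    return rng.uniform(size=(n, 8, 8))   # stand-ins for road-sign photos

def add_trigger(imgs):
    imgs = imgs.copy()
    imgs[:, 6:, 6:] = 1.0                # a bright "Post-it" patch in one corner
    return imgs

# Clean data: label 0 = stop sign, label 1 = speed limit sign.
X_clean = make_images(200)
y_clean = rng.integers(0, 2, size=200)

# The malicious trainer adds triggered copies of the stop signs,
# every one mislabeled with the attacker-chosen target class.
stops = X_clean[y_clean == 0]
X_poison = add_trigger(stops)
y_poison = np.ones(len(stops), dtype=int)

X_train = np.concatenate([X_clean, X_poison])
y_train = np.concatenate([y_clean, y_poison])
# A model fit on (X_train, y_train) can look fine on clean validation data
# yet flip stop -> speed-limit whenever the trigger patch appears.
```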

Cynthia Murrell, September 27, 2017

 

Chatbots: The Negatives Seem to Abound

September 26, 2017

I read “Chatbots and Voice Assistants: Often Overused, Ineffective, and Annoying.” I enjoy a Hegelian antithesis as much as the next veteran of Dr. Francis Chivers’ course in 19th Century European philosophy. Unlike some of Hegel’s fans, I am not confident that taking the opposite tack in a windstorm is the ideal tactic. There are anchors, inboard motors, and distress signals.

The article points out that quite a few people are excited about chatbots. Yep, sales and marketing professionals earn their keep by creating buzz in order to keep their often-exciting corporate Beneteau 22’s afloat. With VCs getting pressured by those folks who provided the cash to create chatbots, the motive force for an exciting ride hurtles onward.

The big Sillycon Valley guns have been arming the chatbot army for years. Anyone remember Ask Jeeves when it pivoted from a human-powered question answering machine into a customer support recruit? My recollection is that the recruit washed out, but your mileage may vary.

With Amazon, Facebook, Google, IBM, and dozens and dozens of companies with hard-to-remember names on the prowl, chatbots are “the future.” The InfoWorld article is a thinly disguised “be careful” presented as “real news.”

That’s why I wrote a big exclamation point and the words “A statement from the Captain Obvious crowd” next to this passage:

Most of us have been frustrated with misunderstandings as the computer tries to take something as imprecise as your voice and make sense of what you actually mean. Even with the best speech processing, no chatbots are at 100-percent recognition, much less 100-percent comprehension.

I am baffled by this fragment, but I am confident it makes sense to those who were unaware that dealing with human utterances is a pretty tough job for the Googlers and Microsofties who insist their systems are the cat’s pajamas. Note this indication of InfoWorld quality in thought and presentation:

It seems very inefficient to resort to imprecise systems when we have [sic]

Yep, an incomplete thought which my mind filled in as saying, “humans who can maybe answer a question sometimes.”

The technology for making sense of human utterance is complex. Baked into the systems is the statistical imprecision that undermines the value of some chatbot implementations.
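For a sense of how “recognition” gets scored, here is the standard word error rate calculation, sketched in Python with invented transcripts:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words into the first j heard words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[-1][-1] / len(ref)

# Invented example: the recognizer mishears one word out of five.
print(word_error_rate("turn off the kitchen lights",
                      "turn off the chicken lights"))  # 0.2
```

Even a 20 percent word error rate, which sounds modest, means one word in five arrives wrong before “comprehension” even begins.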

My thought is that InfoWorld might help its readers if it were to answer questions like these:

  • What are the components of a chatbot system? Which introduce errors on a consistent basis?
  • How can error rates of chatbot systems be reduced in an affordable, cost effective manner?
  • What companies are providing third party software to the big girls and boys in the chatbot dodge ball game?
  • Which mainstream chatbot systems have exemplary implementations? What are the metrics behind “exemplary”?
  • What companies are making chatbot technology strides for languages other than English?

I know these questions are somewhat more difficult to answer than a write up which does little more than make Captain Obvious roll his eyes. Perhaps InfoWorld and its experts might throw a bone to their true believers?

Stephen E Arnold, September 26, 2017
