Applying Blockchain Technology to AI Systems

June 25, 2018

The founder of Ocean Protocol and BigchainDB, Trent McConaghy, has written a detailed piece for the BigchainDB blog titled “Blockchains for Artificial Intelligence: From Decentralized Model Exchanges to Model Audit Trails.” In it, the engineer explains what blockchain technology offers the AI field. See the article for his philosophy, an interesting history lesson on AI and data, and his assertion that the performance issues inherent in blockchain tech are no big deal. Not yet, anyway.

After this thorough introduction, the piece spells out six opportunities McConaghy foresees for this blessed union: Data sharing for better models; Data sharing for qualitatively new models, including “new planet-level data for new planet-level insights” (more on that in a moment); Audit trails on data and models for more trustworthy predictions; a Shared global registry of training data and models; Data and models as IP assets for data and model exchange; and AI DAOs (Decentralized Autonomous Organizations), or “code that owns itself.” See the piece for details on each of these ideas.

Back to the planet-level data concept, which I found interesting. McConaghy references the Interplanetary Database, or IPDB, in his explanation. We’re told:

“IPDB is structured data on a global scale, rather than piecemeal. Think of the World Wide Web as a file system on top of the internet; IPDB is its database counterpart. (I think the reason we didn’t see more work on this sooner is that semantic web work tried to go there, from the angle of upgrading a file system. But it’s pretty hard to build a database by ‘upgrading’ a file system! It’s more effective to say from the start that you’re building a database, and designing as such.) ‘Global variable’ gets interpreted a bit more literally. …”

I also noted this statement:

“Overall, we get a whole new scale for diversity of datasets and data feeds. Therefore, we have qualitatively new data. Planetary level structured data. From that, we can build qualitatively new models, that make relations among inputs & outputs which weren’t connected before. With the models and from the models, we will get qualitatively new insights. I wish I could be more specific here, but at this point it’s so new that I can’t think of any examples. But, they will emerge!”

We are curious to see what does emerge, and to what purposes the technology is applied. Stay tuned.

Cynthia Murrell, June 25, 2018

Psychic Software: Yeah, We Know

June 23, 2018

Coming soon to a search system near you? Science Alert declares, “Scientists Have Invented a Software That Can ‘See’ Several Minutes Into the Future.”

Writer Mike McRae reports on the predictive-analysis progress of researchers at Germany’s University of Bonn. The goal—to predict a sequence of activities five minutes into the future. McRae explains their two approaches:

“The team tested two approaches using different types of artificial neural network: one that anticipated future actions and reflected before anticipating again, and another that built a matrix in one hit before crunching the probabilities. As you’d expect, the deeper they looked into the future, the more mistakes they made. ‘Accuracy was over 40 percent for short forecasting periods, but then declined the more the algorithm needed to look into the future,’ says Gall. The reflective approach did a little better than the matrix method when looking at the next 20 seconds, but the two different neural networks were equally matched when looking beyond 40 seconds. At the extreme end, the scientists discovered their trained program could correctly predict an action and its duration 3 minutes in the future roughly 15 percent of the time. That might not sound impressive, but it does establish solid ground for future artificial intelligence that could potentially develop super-human foresight.”
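The reported drop-off is roughly what one would expect if one-step prediction errors compound. As a back-of-the-envelope illustration (my toy model, not the Bonn group’s method): if each step is predicted correctly with probability p and errors are independent, accuracy at horizon h is about p raised to the h.

```python
# Toy illustration of why forecast accuracy decays with horizon:
# if each one-step prediction is right with probability p and errors
# compound independently, accuracy at horizon h is roughly p**h.
# The 0.85 per-step figure below is an assumption for illustration.
def horizon_accuracy(p, h):
    return p ** h

for h in (1, 3, 5, 10):
    print(h, round(horizon_accuracy(0.85, h), 3))
```

With 85 percent per-step accuracy, a five-step look-ahead is already right less than half the time, broadly consistent with the decline the researchers describe.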

The researchers plan to present their results at the IEEE Conference on Computer Vision and Pattern Recognition in Salt Lake City, which McRae hopes will generate more interest in predictive software. Though we already have cautionary tales about the limits of this technology, there remain many positive possibilities, he notes.

We wonder when Recorded Future will adopt this approach.

Cynthia Murrell, June 23, 2018

Ink Stained Wretches: You Are Redundant, Says Smart Software

June 23, 2018

Artificial intelligence is revolutionizing skilled and unskilled tasks with wicked efficiency, from juggling the wait times on a customer service line to sentencing hardened criminals. One skilled task that looks likely to be next on the chopping block is journalism. We found out more from a recent Big Think story, “AI’s Newest Target for Worker Displacement: Journalism.”

The story followed a Bloomberg News employee who recently lost his job:

“AI, he believes, had a part in ending his 12-year employment. He estimates that one-quarter of his tasks were taken by software that crawled filings and press releases for news and flashed headlines automatically. Another recently unemployed journalist, who requested anonymity to speak on the record, said that as much as 60% of her tasks had become automated over the past few years.”

However, not everyone is ready to doom journalism because of AI. Some actually think AI will save the writerly vocation. “It can do the background research, call contacts to get their statements, report on the daily news cycle and so forth,” says one commentator. This seems to make sense and only muddies the water of AI and its general impact on our lives. It is either the end of humanity or the start of a new, wonderful era. It all depends on your perspective.

AI may not write like a crafty NYT or WSJ wizard, but it takes no vacations, no breaks, and requires no health care. What’s not to like?

Patrick Roland, June 23, 2018

AI Bias Can Be Fixed. And There Is a Tooth Fairy

June 22, 2018

AI algorithms were supposed to remove the bias and human error from many decisions, from loan applications to criminal sentencing. As with most techno-fixes, experts soon discovered that algorithms have unforeseen flaws, like how some AI can accidentally create more bias. We learned one way this is being corrected from a recent Fast Company story, “This Tool Lets You See—And Correct—The Bias in an Algorithm.”

According to the article:

“The tool uses statistical methods to identify when groups of people are treated unfairly by an algorithm–defining unfairness as predictive parity, meaning that the algorithm is equally likely to be correct or incorrect for each group. ‘In the past, we have found models that are highly accurate overall, but when you look at how that error breaks down over subgroups, you’ll see a huge difference…’”
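The predictive-parity check the article describes can be sketched in a few lines: compute accuracy separately for each subgroup and flag a large gap. This is my minimal illustration with made-up data, not the actual tool.

```python
# Minimal sketch of a predictive-parity audit: an algorithm is "fair"
# under this definition if it is about equally likely to be correct
# for each group. The records below are fabricated for illustration.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted, actual) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        hits[group] += int(predicted == actual)
    return {g: hits[g] / totals[g] for g in totals}

data = [("a", 1, 1), ("a", 0, 0), ("a", 1, 0), ("a", 1, 1),
        ("b", 1, 0), ("b", 0, 1), ("b", 1, 1), ("b", 0, 0)]
rates = accuracy_by_group(data)
print(rates)  # {'a': 0.75, 'b': 0.5}

# A large gap between groups signals the "huge difference" the quote
# warns about, even when overall accuracy looks acceptable.
gap = max(rates.values()) - min(rates.values())
print("parity gap:", gap)
```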

This is an interesting tool. We need to be able to correct these biases since most experts agree that we are also the cause of many of them. Finding a solution to this ever-growing problem provides hope not only for future use of AI technology, but also for financial investment in these technologies.

Has anyone seen the tooth fairy today?

Patrick Roland, June 22, 2018

Artificial Intelligence and the New Normal: Over Promising and Under Delivering

June 15, 2018

IBM has the world’s fastest computer. That’s intriguing. Now Watson can output more “answers” in less time. Pity the poor user who has to figure out what’s right and what’s not so right. Progress.

Perhaps a wave of reason is about to hit the AI field. Blogger Filip Piekniewski forecasts, “AI Winter is Well on its Way.” While the neural-networking approach behind deep learning has been promising, it may fall short of the hype some companies have broadcast. Piekniewski writes:

“Many bets were made in 2014, 2015 and 2016 when still new boundaries were pushed, such as the Alpha Go etc. Companies such as Tesla were announcing through the mouths of their CEO’s that fully self-driving car was very close, to the point that Tesla even started selling that option to customers [to be enabled by future software update]. We have now mid 2018 and things have changed. Not on the surface yet, NIPS conference is still oversold, the corporate PR still has AI all over its press releases, Elon Musk still keeps promising self driving cars and Google CEO keeps repeating Andrew Ng’s slogan that AI is bigger than electricity. But this narrative begins to crack. And as I predicted in my older post, the place where the cracks are most visible is autonomous driving – an actual application of the technology in the real world.”

This post documents a certain waning of interest in deep learning, and notes an apparently unforeseen limit to its scale. Most concerning so far, of course, are the accidents that have involved self-driving cars; Piekniewski examines that problem from a technical perspective, so see the article for those details. Whether the AI field will experience a “collapse,” as this post foresees, or we will simply adapt to more realistic expectations, we cannot predict.

Cynthia Murrell, June 15, 2018

A Healthy Perspective on AI Advances

June 11, 2018

We see a lot of write ups that hype AI, like this piece at The Verge on DeepMind’s venture into chess playing.

An article at DataCenterNews, “Expert Opinion: 5 Myths Surrounding the ‘AI Hype Train’” warns us that, despite such performances, AI has a long way to go before it becomes ubiquitously useful. We’re told:

“While the technology represents an exciting new utility with a wide variety of potential use cases, it does not herald the arrival of a brave new sci-fi future. … While there is no doubt that the nascent technology shows cause for excitement, it’s also clear that for the vast majority of businesses, the time to embrace AI is still somewhere down the road.”

Here are the five myths the article busts: AI is going to replace all jobs; AI is a singular, tangible product; Every enterprise needs an AI strategy; AI technologies are autonomous; and AI will quickly become smarter than humans (yikes!). See the article for the reasons each of these ideas is (as of yet) untrue. The piece concludes:

“AI represents an exciting collection of emerging technologies, and for many businesses, it will eventually make a big impact. Through productivity and personalization improvements as a result of AI, global GDP in 2030 will be 14 percent higher – the equivalent to US$15.7 trillion. As AI achieves more widespread adoption, it’s important for business leaders not to get distracted by shiny objects and keep their eyes on their business objectives for now.”

We concur. Do not succumb to the hype and buy into AI technology if it is not (yet) right for your business. Meanwhile, stay tuned for more developments because some of Google’s professionals are not completely comfortable with the use of the firm’s smart software for war fighting.

Cynthia Murrell, June 11, 2018

AI Speed Bumps Needed

June 6, 2018

The most far-reaching problem with AI is its potential for machine learning to pick up, and implement, the wrong lessons. Technology Review draws our attention to a new service that tests algorithms for bias in “This Company Audits Algorithms to See How Biased They Are.” Founded by Cathy O’Neil, the mathematician and social scientist behind the book Weapons of Math Destruction, the small company is called O’Neil Risk Consulting and Algorithmic Auditing (ORCAA). In analyzing an algorithm for fairness, the company considers many factors, from the programmers themselves to the data generated. O’Neil offers these assessments as a way for companies to certify their algorithms as bias-free, which she suggests makes for a strong marketing tool.

Meanwhile, the Electronic Frontier Foundation warns we are already becoming too reliant on AI in its post, “Math Can’t Solve Everything: Questions We Need to Be Asking Before Deciding an Algorithm is the Answer.” In introducing their list, Staff Attorney Jamie Lee Williams and Product Manager Lena Gunn emphasize:

“Across the globe, algorithms are quietly but increasingly being relied upon to make important decisions that impact our lives. This includes determining the number of hours of in-home medical care patients will receive, whether a child is so at risk that child protective services should investigate, if a teacher adds value to a classroom or should be fired, and whether or not someone should continue receiving welfare benefits. The use of algorithmic decision-making is typically well-intentioned, but it can result in serious unintended consequences. In the hype of trying to figure out if and how they can use an algorithm, organizations often skip over one of the most important questions: will the introduction of the algorithm reduce or reinforce inequity in the system?”

The article urges organizations to take these five questions into account: Will this algorithm influence decisions with the potential to negatively impact people’s lives? Can the available data actually lead to a good outcome? Is the algorithm fair? How will the results (really) be used by humans? And, will people affected by these decisions have any influence over the system? For each entry, the post explains why, and how, to employ each of these questions, complete with examples of AI bias that have occurred already. It all comes down to this—as Williams and Gunn write, “We must not use algorithms to avoid making difficult policy decisions or to shirk our responsibility to care for one another.”

Cynthia Murrell, June 6, 2018

AI Datasets the Key to Better Smart Software

June 5, 2018

The potential for bias in artificial intelligence is a topic on practically every tech junkie’s lips these days. As the world depends more and more on machine learning, we are seeing more clearly the limits and flaws the approach possesses, namely its potential for bias. We learned of this and of solutions in a recent Venture Beat story, “Datasheets Could Be the Solution to Biased AI.”

According to the story:

“Given that many machine learning and deep learning model development efforts use public datasets such as ImageNet or COCO — or private datasets produced by others — it’s important to be able to convey the context, biases, and other material aspects of a training dataset to those interested in using it.”
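The datasheet idea is essentially structured metadata that travels with a dataset. A sketch of what such a record might hold, with field names that are purely illustrative and not from any published standard:

```python
# Illustrative sketch of a machine-readable "datasheet" for a training
# set, capturing the context and biases the article says should travel
# with the data. All field names here are assumptions, not a standard.
from dataclasses import dataclass, field

@dataclass
class Datasheet:
    name: str
    collection_method: str
    known_biases: list = field(default_factory=list)
    intended_uses: list = field(default_factory=list)
    license: str = "unspecified"

sheet = Datasheet(
    name="example-image-dataset",           # hypothetical dataset
    collection_method="web-scraped images, crowd-labeled",
    known_biases=["geographic skew toward North America"],
    intended_uses=["object classification research"],
)
print(sheet.name, sheet.known_biases)
```

A downstream team could inspect `known_biases` before deciding whether the dataset suits its application, which is the point the quote makes about conveying material aspects to users.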

These datasets are proving to be incredibly valuable, and documenting them is a promising way to reduce the bias in AI. For example, datasets have been used to detect abnormalities in CT scans using artificial intelligence, and to flag the problematic areas at remarkable speeds. According to the story, one effort “seeks to speed up the process by leveraging artificial intelligence to screen CT scans in under 10 seconds to find any abnormalities.” Wow, that’s an incredible leap. When AI is given an appropriate foundation like that, the potential for success is off the charts.

Patrick Roland, June 5, 2018

AI: A Little Helper for Those Seeking Information

May 31, 2018

Search is a powerful tool and big data software has only improved search’s quality.  Search can now locate items in all data structures, ranging from structured to unstructured.  Do users, however, actually find the answers they want?  InfoWorld runs through the impact AI has had on search in the article, “The Wonders Of AI-Or The Shortcomings Of Search?”

In essence, Google and Amazon’s subsecond search results have spoiled users.  Users are so used to accurate and quick results that they expect all Web sites, software, and hardware to do the same.  These search tools are actually providing users with information overload.

On the other hand, AI makes search and other tools more robust.  Organizations use AI not only to power search, but to feed and filter data to make business recommendations.  Google and Amazon are not the only ones using it.  Other companies that use AI to power their businesses include Uber, Tesla, Spotify, Pandora, Netflix, and Bristol-Myers Squibb.  AI takes the search out of search:

“Those last points are crucial. A structural shift is under way. AI cuts through the clutter to provide not endless pages of results to wade through, but with specific recommendations tailored to you as the seeker of knowledge—or simply as the seeker of where to find the best Chicago-style pizza while away from home on a business trip. (Which is not to admit, certainly not in print, that I have not supplemented my normal whole-foods, plant-based, no-meat-or-dairy nutrition by indulging in such a cheesy, guilty pleasure. I present it merely for illustration.) The key construct: AI-driven systems present either the single best solution or a tight shortlist of best-fit solutions.”

AI also augments search by providing recommendations that are related to the original query, but are simply suggestions.  This requires that AI be fed a lot of data, so that it can offer proactive assistance.

Big data and AI are empowering, but they need a system of checks and balances.  The solution is to combine AI search and regular search into one tool: the curated list and the raw data list.
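The curated-plus-raw idea above can be sketched simply: return a short AI-style shortlist alongside the full list of keyword hits. The term-overlap scoring below is a trivial stand-in for a real relevance model; everything here is my illustration, not any product’s design.

```python
# Toy sketch of combining a curated shortlist with the raw result
# list. score() is a crude term-overlap proxy for a real ranker.
def score(query, doc):
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q)

def combined_search(query, docs, shortlist_size=2):
    ranked = sorted(docs, key=lambda d: score(query, d), reverse=True)
    # "shortlist" mimics the AI recommendation; "raw" keeps the full
    # ranked list so the user can still wade through everything.
    return {"shortlist": ranked[:shortlist_size], "raw": ranked}

docs = ["deep dish pizza in chicago",
        "thin crust pizza recipes",
        "chicago weather report"]
result = combined_search("chicago pizza", docs)
print(result["shortlist"][0])  # best-fit answer first
```

The design choice mirrors the quote: present “the single best solution or a tight shortlist” up front, while preserving the raw list as the check on the AI’s curation.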

Whitney Grace, May 31, 2018


IBM: Watson Wizards Available for a New Job?

May 28, 2018

I know that newspapers do real “news.” I know I worked for a reasonably good newspaper. I, therefore, assume that the information is true in the story “Some IBM Watson Employees Said They Were Laid Off Thursday.” The Thursday in question, for those who have been away “faire le pont” (bridging a holiday into a long weekend), is May 24, 2018.

The write up states:

IBM told some employees in the United States and other countries on Thursday that they were being laid off. The news was reported on websites, which cited social media and Internet posts by IBM employees.

IBM also seems to be taking the reduction in force approach to success by nuking some of the Big Blue team in its health unit. (See “‘Ugly Day:’ IBM Laying Off Workers in Watson Health Group, Including Triangle.”)

I noted this statement in the Cleveland write up:

Since 2012, the Cleveland Clinic has collaborated with IBM on electronic medical records and other tools employing Watson, IBM’s supercomputer. The Clinic and IBM Watson Health have worked together to identify new cancer treatments, improve electronic medical records and medical student education, and look at the adoption of genomic-based medicine.

The issue may relate to several facets of Watson:

  1. Partners do not have a good grasp of the time and effort required to create the questions Watson is expected to answer. High powered smart people are okay with five minute conversations with an IBM Watson engineer, but extend those chats to a couple of hours over weeks, and the Watson thing is not the time saver some hoped.
  2. Watson, like other smart systems, works within a tightly bounded domain. As new issues arise, questions by users cannot be answered in a way that is “spontaneously helpful.” The reason is that Watson and similar systems are just processing queries. If one does not know what one does not know, asking and answering questions can range from general to naive to dead wrong in my experience.
  3. Watson and similar systems are inevitably compared to Google’s ability to locate a pizza restaurant as one drives a van in an unfamiliar locale. Watson does not work like Google.

Toss in the efficiency of using one’s experience or asking a colleague, and Watson gets in the way. As with many smart systems, users do not want to become expert Watson (or similar system) operators. The smart system is supposed to, or is expected to, provide answers a person can use.

The problem with the Watson approach is that it is old fashioned search. A user has to figure out from a list of results or outputs what’s what. Contrast that to next generation information access systems which provide an answer.

IBM owns technology which performs in a more intelligent and useful way than the Watson solution.

Why IBM chased the same dream that cratered many firms with key word search technology has intrigued me. Was it the crazy idea that marketing would make search work? IBM Watson seems to me to be a potpourri of home brew code, acquired metasearch technology like Vivisimo, and bolted-on open source software.

What distinguished it was the hope that marketing would make Watson into a billion dollar business.

It seems as if that dream has suffered a setback. One weird consequence is the use of the word “cognitive.” Vendors worldwide describe their systems as “cognitive search.”

From my point of view, search and retrieval is a utility. One cannot perform digital work without finding a file, content, or some other digital artifact.

No matter how many “governance” experts, “search” experts, MBAs, content management experts, and information professionals want search to be the next big thing, search is still a utility. Forget this when dreaming of billions in revenue, and there is a disconnect between dreams and reality.

Effective “search” is not a single method or system. Effective search is not just smart software. Effective search is not a buzzword like “cognitive” or “artificial intelligence.”

Finding information and getting useful “answers” requires multiple tools and considerable thought.

My hunch is that the apparent problems with Watson “health” foreshadow even more severe changes for the game show winners, its true believers, and the floundering experts who chant “cognitive” at every opportunity.

Search is difficult, and in my decades of work in information access, I have not found the promised land. Silver bullets, digital bags of garlic, and unicorn dreams have not made information access a walk in the park.

Cognitive? Baloney. Remember: television programs like Jeopardy do what’s called post production. A flawed cancer treatment may not afford this luxury. Winning a game show is TV. Sorry, IBM. Watson’s business is reality, which may make a great business school case study.

Stephen E Arnold, May 28, 2018
