Artificial Intelligence Will Make Humans Smarter at Work
October 17, 2017
Virtually no industry has been untouched by the past decade’s advances in artificial intelligence. We could go on and make a laundry list of which businesses in particular, but we have a hunch you are very close to one right now. According to a recent Enterprise CIO article, “From Buzzword to Boardroom: What’s Next for Machine Learning?” human intelligence is becoming obsolete in certain fields.
As demonstrated in previous experiments, no human brain is able to process as much data at comparable speed and accuracy as machine-learning systems can and as a result, deliver a sound, data-based result within nanoseconds.
While that should make you sit up and take notice, the article is not as apocalyptic as that quote might lead you to believe. In fact, there is a silver lining in all this AI. We humans will just have to work hard to get there. The story continues:
It must also leave room for creativity and innovation. Insights and suggestions gained with the aid of artificial intelligence should stimulate, not limit. Ultimately, real creativity and genuine lateral thinking still comes from humans.
We have to agree with this optimistic line of thinking. These machines are not exactly stealing our jobs; they are forcing humans to reevaluate their roles. If you can properly combine AI, big data, and search in your role, chances are you will become invaluable instead of obsolete.
Patrick Roland, October 17, 2017
CEOs Hyped on AI but Not Many Deploy It
October 17, 2017
How long ago was big data the popular buzzword? It was not that long ago, but now it has been replaced with artificial intelligence and machine learning. Whenever a buzzword is popular, CEOs and other leaders become obsessed with implementing it within their own organizations. Fortune opens up about the truth of artificial intelligence and its real deployment in the editorial, “The Hype Gap In AI”.
Organization leaders have high expectations for artificial intelligence, but the reality falls well below them. According to a survey cited in the editorial, 85% of executives believe that AI will change their organizations for the better, but only about one in five has actually implemented AI in any part of the organization. Only 39% have an AI strategy in place.
Hype about AI and its potential is all over the business sector, but very few executives really understand its current capabilities. Even fewer know how they can actually use it:
But actual adoption of AI remains at a very early stage. The study finds only about 19% of companies both understand and have adopted AI; the rest are in various stages of investigation, experimentation, and watchful waiting. The biggest obstacle they face? A lack of understanding—about how to adapt their data for algorithmic training, about how to alter their business models to take advantage of AI, and about how to train their workforces for use of AI.
Organizations view AI as an end-all solution, similar to how big data was the end-all solution a few years ago. What is even worse is that while big data may have had its difficulties, understanding it was simpler than understanding AI. The way executives believe AI will transform their companies is akin to a science fiction solution that is still very much in the realm of the imagination.
Whitney Grace, October 17, 2017
Big Data and Big Money Are on a Collision Course
October 16, 2017
A recent Forbes article has started us thinking about the similarities between long-haul truckers and Wall Street traders. Really! The editorial penned by JP Morgan, “Informing Investment Decisions Using Machine Learning and Artificial Intelligence,” showcases the many ways in which investing is about to be overrun with big data machines. Depending on your stance, it is either thrilling or frightening.
The story claims:
Big data and machine learning have the potential to profoundly change the investment landscape. As the quantity and the access to data available have grown, many investors continue to evaluate how they can leverage data analysis to make more informed investment decisions. Investment managers who are willing to learn and to adopt new technologies will likely have an edge.
Sounds an awful lot like the news we have been reading recently about how almost two million truck drivers could be out of work in the next decade thanks to self-driving vehicles. If you have money in trucking, the amount saved is amazing, but if that is how you make your living, things have suddenly become chilly. That sounds like the future of Wall Street, according to this story.
It continues:
Big data and machine learning strategies are already eroding some of the advantage of fundamental analysts, equity long-short managers and macro investors, and systematic strategies will increasingly adopt machine learning tools and methods.
If you ask us, it’s not a matter of if but when. Nobody wants to lose their job due to efficiency, but it’s pretty much impossible to stop. Money talks and saving money talks loudest to companies and business owners, like investment firms.
Patrick Roland, October 16, 2017
Blockchain Quote to Note: The Value of Big Data as an Efficient Error Reducer
September 6, 2017
I read “Blockchains for Artificial Intelligence: From Decentralized Model Exchanges to Model Audit Trails.” The foundation of the write up is that blockchain technology can be used to bring more control to data and models. The idea is an interesting one. I spotted a passage tucked into the lower 20 percent of the article which I judged to be a quote to note. Here’s the passage I highlighted:
as you added more data — not just a bit more data but orders of magnitude more data — and kept the algorithms the same, then the error rates kept going down, by a lot. By the time the datasets were three orders of magnitude larger, error was less than 5%. In many domains, there’s a world of difference between 18% and 5%, because only the latter is good enough for real-world application. Moreover, the best-performing algorithms were the simplest; and the worst algorithm was the fanciest. Boring old perceptrons from the 1950s were beating state-of-the-art techniques.
Bayesian methods date from the 18th century and work well. Despite Laplacian and Markovian bolt-ons, the drift problem bedevils some implementations. The solution? Pump in more training data, and the centuries-old techniques work like a jazzed millennial with a bundle of venture money.
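To make the quoted point concrete, here is a minimal sketch, not taken from the article, that holds a plain perceptron fixed and grows the training set by orders of magnitude on synthetic data. The scikit-learn classes, dataset sizes, and noise level are our own assumptions:

```python
# Minimal sketch: same simple model, more data, lower error (synthetic data only).
from sklearn.datasets import make_classification
from sklearn.linear_model import Perceptron
from sklearn.metrics import accuracy_score

# One fixed synthetic problem; the first 20,000 rows serve as a constant test set.
X, y = make_classification(n_samples=220_000, n_features=40, n_informative=15,
                           flip_y=0.02, random_state=0)
X_test, y_test = X[:20_000], y[:20_000]
X_pool, y_pool = X[20_000:], y[20_000:]

# Keep the algorithm constant and grow the training set by orders of magnitude.
for n_train in (200, 2_000, 20_000, 200_000):
    model = Perceptron(max_iter=1000, tol=1e-3, random_state=0)
    model.fit(X_pool[:n_train], y_pool[:n_train])
    error = 1.0 - accuracy_score(y_test, model.predict(X_test))
    print(f"{n_train:>7} training rows -> test error {error:.3f}")
```

On a toy problem like this, test error generally falls as the training pool grows, which is the pattern the quoted passage describes for far larger real-world datasets.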
Care to name a large online outfit which may find this an idea worth nudging forward? I don’t think it will be Verizon Oath or Tronc.
Stephen E Arnold, September 6, 2017
Big Data Visualization the Open Source Way
August 10, 2017
Though Big Data was hailed in a big way, it has yet to gain full steam because of a shortage of talent. Companies working in this domain are taking another swipe at adoption by offering visualization tools for free.
The Customize Windows, in an article titled List of Open Source Big Data Visualization Tools, notes:
There are some growing number of websites which write about Big Data, cloud computing and spread wrong information to sell some others paid things.
Many industries have tried the freemium route to attract talent and promote the industry. For instance, Linux OS maker Penguin Computing offered its product for free to users. This move sparked interest among users who wanted to try something other than Windows and Mac.
The move created a huge base of Linux users and also attracted talent to promote research and development.
Big Data players, it seems, are following the same strategy by offering data visualization tools for free, which they will monetize later. All that is needed now is patience.
Vishal Ingole, August 10, 2017
Big Data Can Reveal Darkest Secrets
August 8, 2017
Surveys have long been used by social and commercial organizations to collect data. However, with easy access to Big Data, it seems that people lie on surveys more than previously thought.
In an article by Seth Stephens-Davidowitz published by The Guardian and titled Everybody Lies: How Google Search Reveals Our Darkest Secrets, the author says:
The power in Google data is that people tell the giant search engine things they might not tell anyone else. Google was invented so that people could learn about the world, not so researchers could learn about people, but it turns out the trails we leave as we seek knowledge on the internet are tremendously revealing.
As per the author, the impersonal nature and anonymity of the Internet, along with its ease of access, are among the primary reasons why Internet users reveal their darkest secrets to Google (in the form of queries).
Big Data, which is data scoured from various sources, can be a reliable source of information. For instance, surveys say that around 10% of American men are gay. Big Data, however, reveals that only 2-3% of men are actually gay. To know more about interesting insights on Big Data, courtesy of Google, read the article here.
Vishal Ingole, August 8, 2017
Big Data Too Is Prone to Human Bias
August 2, 2017
Conventional wisdom says that Big Data, being a realm of machines, is immune to human behavioral traits like discrimination. Insights from data scientists, however, suggest otherwise.
In an article published by PHYS.ORG titled Discrimination, Lack of Diversity, and Societal Risks of Data Mining Highlighted in Big Data, the author says:
Despite the dramatic growth in big data affecting many areas of research, industry, and society, there are risks associated with the design and use of data-driven systems. Among these are issues of discrimination, diversity, and bias.
The crux of the problem is the way data is mined and processed and the way decisions are made. At every step, humans need to be involved to tell machines how each of these processes is executed. If the person guiding the system is biased, those biases are bound to seep into the subsequent processes in some way, as the sketch below illustrates.
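To see how that seepage can happen, consider a minimal, hypothetical sketch. The data, group labels, and penalty are invented for illustration; the point is only that a model trained on biased historical decisions reproduces the bias without being told to:

```python
# Hypothetical sketch: biased historical decisions produce a biased model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 50_000
skill = rng.normal(size=n)            # the legitimate signal
group = rng.integers(0, 2, size=n)    # protected attribute (0 or 1)

# Historical approvals used the same skill threshold but penalized group 1.
approved = (skill - 0.8 * group + rng.normal(scale=0.3, size=n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, group]), approved)

# Two applicants with identical skill, different group membership.
same_skill = np.array([[1.0, 0.0], [1.0, 1.0]])
print(model.predict_proba(same_skill)[:, 1])  # the group-1 applicant scores lower
```

The model is never instructed to discriminate; it simply learns the pattern baked into the labels it was given.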
Apart from decisions like granting credit, human resources, which is also being automated, may face the same diversity issues. The fundamental problem remains the same in this case, too.
Big Data was touted as the next big thing and may yet turn out to be so, but most companies have yet to figure out how to utilize it. Streamlining the processes and making them efficient would be the next step.
Vishal Ingole, August 2, 2017
Machine Learning Does Not Have the Mad Skills
July 25, 2017
Machine learning and artificial intelligence are supposed to revolutionize industry, but The Register explains there is a problem with launching them: “Time To Rethink Machine Learning: The Big Data Gobble Is OFF The Menu.” The technology industry claims that 50 percent of organizations plan to transform themselves with machine learning, but the real figure is less than 15 percent.
The machine learning revolution has supposedly started, but in reality the starting gun has only just been fired and the technology has not been implemented. The problem is that while companies want to use machine learning, they are barely getting off the ground with big data, and machine learning is much harder. Organizations do not have workers with the skills to launch machine learning, and the tech industry as a whole faces a huge demand for skilled workers.
Part of this inaction comes down to the massive gap between ML (and AI) myth and reality. As David Beyer of Amplify Partners puts it: ‘Too many businesses now are pitching AI almost as though it’s batteries included.’ This is dangerous because it leads companies to either over-invest (and then face a tremendous trough of disillusionment), or to steer clear when the slightest bit of real research reveals that ML is very hard and not something the average Python engineer is going to spin up in her spare time.
Organizations also do not have enough data to make machine learning feasible, and they lack the corporate culture to do the experimentation required for machine learning to succeed.
This article shares a story we have read many times before: the tech industry gets excited about the newest shiny object, the object explodes in popularity, and then everyone realizes that the business world is not ready to implement the technology.
Whitney Grace, July 25, 2017
Big Data in Biomedicine
July 19, 2017
The biomedical field, which is replete with unstructured data, is all set to take a giant leap toward standardization with the help of the Biological Text Mining Unit.
According to PHYS.ORG, in an article titled Researchers Review the State-Of-The-Art Text Mining Technologies for Chemistry, the author states:
Being able to transform unstructured biomedical research data into structured databases that can be more efficiently processed by machines or queried by humans is critical for a range of heterogeneous applications.
Scientific data has a fixed vocabulary, which makes standardization and indexing easier. However, most big names in Big Data and enterprise search are concentrating their efforts on e-commerce.
Hundreds of new compounds are discovered every year. If the data pertaining to these compounds were made available to other researchers, advancements in this field would be rapid. The major hurdle is that the data is in an unstructured format, which the Biological Text Mining Unit’s standards intend to overcome; a simple sketch of the basic extraction step follows.
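As a rough illustration of that unstructured-to-structured step, here is a hypothetical sketch. The toy lexicon and sentences are our own, and production systems rely on trained chemical entity recognizers rather than a hand-made dictionary:

```python
# Hypothetical sketch: turn free-text compound mentions into structured records.
import re

COMPOUND_TERMS = ["aspirin", "ibuprofen", "caffeine", "ethanol"]  # toy lexicon
pattern = re.compile(r"\b(" + "|".join(COMPOUND_TERMS) + r")\b", re.IGNORECASE)

abstracts = [
    "Caffeine and aspirin were co-administered to the treatment group.",
    "The sample was dissolved in ethanol prior to analysis.",
]

records = []
for doc_id, text in enumerate(abstracts):
    for match in pattern.finditer(text):
        records.append({"doc_id": doc_id,
                        "compound": match.group(1).lower(),
                        "offset": match.start()})

print(records)  # rows a database could index and other researchers could query
```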
Vishal Ingole, July 19, 2017
The Big Problems of Big Data
June 30, 2017
Companies are producing volumes of data. However, no fully functional system is able to provide actionable insights to decision makers in real time. Bayesian methods might pave the way for solution seekers.
In an article published by PHYS.ORG and titled Advances in Bayesian Methods for Big Data, the author says:
Bayesian methods provide a principled theory for combining prior knowledge and uncertain evidence to make sophisticated inference of hidden factors and predictions.
Though the methods of data collection have improved, analyzing and presenting actionable insights in real time is still a big problem for Big Data adopters. Human intervention is required at almost every step, which defeats the entire purpose of an intelligent system. Hopefully, Bayesian methods can resolve these issues. Experts have been reluctant to adopt Bayesian methods because they are slow and do not scale. However, with recent advancements in machine learning, the methods might work. A minimal example of the core Bayesian update appears below.
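For readers who want the quoted idea in code, here is a minimal sketch of a conjugate Bayesian update that combines prior knowledge with new evidence. The numbers are invented and SciPy is assumed; the article itself is concerned with far larger, harder inference problems:

```python
# Minimal sketch: Beta-Binomial update of a belief about a conversion rate.
from scipy import stats

# Prior knowledge: past campaigns converted around 5% (Beta(5, 95)).
prior_a, prior_b = 5, 95

# New, uncertain evidence: 40 conversions observed in 400 trials.
conversions, trials = 40, 400

# Conjugate update: posterior is Beta(a + successes, b + failures).
posterior = stats.beta(prior_a + conversions, prior_b + (trials - conversions))

low, high = posterior.interval(0.95)
print(f"posterior mean: {posterior.mean():.3f}")          # about 0.09
print(f"95% credible interval: ({low:.3f}, {high:.3f})")
```

The posterior mean of roughly 0.09 sits between the prior belief (0.05) and the raw observed rate (0.10), weighted by how much evidence each side carries; that blending is the “principled theory” the quote refers to.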
Vishal Ingole, June 30, 2017